- Feature Selection: Since the dataset has 16 features and you are using only two of them, the class structure may not be fully captured by those two features, which could explain the low accuracy. Consider adding more features, or trying different combinations of features (manual feature selection), to see which ones are most informative for separating the classes (see the first sketch after this list).
- Feature Scaling: Check whether the features you are using for clustering are on a similar scale. If they differ in magnitude, scale them so that each contributes equally to the clustering process; min-max scaling or standardization (z-scores) both work (also shown in the first sketch after this list).
- Evaluation Metrics: Consider using other evaluation metrics such as precision, recall, or F1 score to assess the performance of your clustering. These metrics can reveal whether the selected features are biasing the decision boundary toward one class, which overall accuracy alone can hide (see the second sketch below).
- Alternative Clustering Algorithms: You can try other clustering algorithms, such as k-means or Gaussian Mixture Models, to see how their results compare with those of the Gaussian distribution method (see the third sketch below).
- https://in.mathworks.com/help/stats/gmdistribution.cluster.html
- https://in.mathworks.com/help/stats/clustering-using-gaussian-mixture-models.html
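As a rough illustration of the first two points, here is a minimal MATLAB sketch. It assumes your observations are in a numeric matrix `data` and that you want `k` clusters; the column indices are placeholders for whichever feature subset you want to try:

```matlab
% Minimal sketch: pick a candidate feature subset, standardize it, and
% cluster with a Gaussian mixture model. The names data and k, and the
% column indices, are placeholders for your own dataset.
k    = 2;                       % assumed number of clusters/classes
cols = [1 3 7 12];              % hypothetical feature subset to try
X    = data(:, cols);

% Standardize so every feature contributes on the same scale
Xs = zscore(X);                 % alternatively: rescale(X) for min-max scaling

% Fit a GMM and assign each observation to a component
gm  = fitgmdist(Xs, k, 'RegularizationValue', 1e-5);
idx = cluster(gm, Xs);
```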
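For the evaluation point, one way to get per-class precision, recall, and F1 is from a confusion matrix. This sketch assumes you have ground-truth labels `yTrue` and the predicted labels `idx` from above; it is an outline, not a drop-in solution:

```matlab
% Per-class precision, recall, and F1 from a confusion matrix. Cluster
% numbers are arbitrary, so you may first need to relabel idx so that
% cluster j corresponds to class j before computing these metrics.
C = confusionmat(yTrue, idx);           % rows = true class, columns = predicted
precision = diag(C) ./ sum(C, 1)';      % per-class precision
recall    = diag(C) ./ sum(C, 2);       % per-class recall
f1        = 2 * precision .* recall ./ (precision + recall);
disp(table(precision, recall, f1))
```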
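And for comparing algorithms, a sketch along these lines (again reusing `Xs`, `k`, and `yTrue` from the sketches above):

```matlab
% Try alternative algorithms on the same standardized features. Cluster
% numbers returned by kmeans/fitgmdist are arbitrary, so relabel them
% against yTrue before computing accuracy-style metrics.
idxKmeans = kmeans(Xs, k, 'Replicates', 5);       % k-means with several restarts
gm        = fitgmdist(Xs, k, 'Replicates', 5);    % GMM, also with restarts
idxGmm    = cluster(gm, Xs);

% Reuse the confusion-matrix evaluation from the previous sketch on
% idxKmeans and idxGmm to see which method separates the classes better.
```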