A short description of each question is included below.
**Homework 1**

- Q1: Bayes error and the Cauchy distribution
- Q2: Bayesian minimum risk classifier
- Q3: Bayes classifier decision boundary
- Q4: Bayes classifier and normal distribution
- Q5: Parameter estimation using maximum likelihood estimation (MLE) and applying a Bayes classifier
- Q6: Parameter estimation using MLE and maximum a posteriori (MAP)
- Q7: Implementing a naive Bayes classifier from scratch (a minimal sketch follows this list)
- Q8: Implementing a simple pixel classifier
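For Q7, a minimal sketch of what a from-scratch naive Bayes classifier can look like, assuming continuous features with per-class Gaussian likelihoods; the class name and variance-smoothing constant are illustrative, not taken from the homework solution:

```python
import numpy as np

class GaussianNaiveBayes:
    """Naive Bayes with per-class Gaussian feature likelihoods."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.priors_ = np.array([np.mean(y == c) for c in self.classes_])
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        # Small constant avoids division by zero for constant features.
        self.vars_ = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes_])
        return self

    def predict(self, X):
        # Per class: sum over features of log N(x_d; mu_d, var_d), plus log prior.
        diff = X[None, :, :] - self.means_[:, None, :]            # (C, N, D)
        log_lik = -0.5 * (np.log(2 * np.pi * self.vars_[:, None, :])
                          + diff ** 2 / self.vars_[:, None, :]).sum(axis=2)
        return self.classes_[np.argmax(log_lik + np.log(self.priors_)[:, None], axis=0)]
```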
**Homework 2**

- Q1: Parzen window variance
- Q2: Parzen window mean
- Q3: Linear regression with L1/L2 regularization
- Q4: Decision boundary using nearest-neighbor rule
- Q5: Nearest-neighbor classifier error
- Q6: Implementing Parzen window density estimation from scratch (see the sketch after this list)
- Q7: Classifying using the Parzen Window
- Q8: Implementing Logistic Regression and K-Nearest Neighbors (KNN) classifiers from scratch, and using them to classify the `seeds.csv` dataset
- Q9: Implementing Linear Regression from scratch, and using it to classify the `marketing_campaign.csv` dataset
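For Q6, a minimal sketch of a Parzen-window density estimate with a standard Gaussian kernel; the function name and bandwidth value below are illustrative assumptions:

```python
import numpy as np

def parzen_density(x, samples, h):
    """Parzen-window estimate p(x) = (1 / (n * h^d)) * sum_i K((x - x_i) / h),
    with a standard Gaussian kernel K and bandwidth h."""
    samples = np.atleast_2d(samples)
    n, d = samples.shape
    u = (np.asarray(x) - samples) / h                              # (n, d)
    k = np.exp(-0.5 * (u ** 2).sum(axis=1)) / (2 * np.pi) ** (d / 2)
    return k.sum() / (n * h ** d)

# Example: the estimate at 0 should approach the N(0, 1) density (~0.399).
rng = np.random.default_rng(0)
print(parzen_density(0.0, rng.standard_normal((1000, 1)), h=0.3))
```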
**Homework 3**

- Q1: AdaBoost concepts
- Q2: AdaBoost classifier error
- Q3: Classifying the `credit_scoring_sample.csv` dataset using Random Forest and Bagging classifiers, and using bootstrap sampling to estimate the mean of the customers' ages
- Q4: Implementing an AdaBoost classifier from scratch, and using it to classify the Iris dataset (a minimal sketch of the boosting loop follows this list)
- Q5: Decision trees and information gain
- Q6: Implementing a Decision Tree from scratch using the ID3 algorithm, and using it to classify the `prison_dataset.csv` dataset
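For Q4, a minimal sketch of the discrete AdaBoost loop; for brevity it borrows scikit-learn's depth-1 trees as the weak learner and assumes binary labels in {-1, +1} (the homework's from-scratch version would also implement the stump itself):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, n_rounds=50):
    """Discrete AdaBoost: reweight samples so each stump focuses on past mistakes."""
    w = np.full(len(y), 1 / len(y))
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = w[pred != y].sum()
        alpha = 0.5 * np.log((1 - err) / (err + 1e-12))    # stump vote weight
        w *= np.exp(-alpha * y * pred)                     # up-weight misclassified points
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, np.array(alphas)

def adaboost_predict(X, stumps, alphas):
    return np.sign(sum(a * s.predict(X) for a, s in zip(alphas, stumps)))
```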
**Homework 4**

- Q1: Multi-Layer Perceptron (MLP) and activation functions
- Q2: Forward and backward propagation in neural networks
- Q3: Comparing MLP and CNN with respect to translational invariance
- Q4:
  - Classifying a 4-sample dataset in a 2D space with a hard-margin SVM
  - Finding a mapping that transfers a dataset to a new space where it is linearly separable
- Q5: Kernel methods and the meaning of data in the transferred space
- Q6: Classifying the MNIST dataset with kernel SVMs; linear, RBF, and polynomial kernels are tested (see the sketch after this list)
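For Q6, a minimal sketch of how the three kernels might be compared with scikit-learn; the subsample size and the division by 255 are arbitrary assumptions to keep a full kernel SVM fit tractable:

```python
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# A small MNIST subsample keeps the kernel SVMs fast enough for a quick comparison.
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X_train, X_test, y_train, y_test = train_test_split(
    X[:10000] / 255.0, y[:10000], test_size=0.2, random_state=0)

for kernel in ("linear", "rbf", "poly"):
    acc = SVC(kernel=kernel).fit(X_train, y_train).score(X_test, y_test)
    print(f"{kernel}: {acc:.3f}")
```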
**Homework 5**

- Q1: Calculating within-class and between-class scatter matrices
- Q2: Model selection concepts
- Q3: Expectation-Maximization (EM) method for exponential mixture model
- Q4: Estimating a Gaussian Mixture Model (GMM) using a neural network
- Q5: Implementing Principal Component Analysis (PCA) from scratch, and using it to reduce the dimensionality of the fashion-MNIST dataset
- Q6: GMM density estimation for the MNIST dataset
- Q7: Clustering the
customers_dataset.csvdataset using the k-means algorithm. Also find the optimal value of k using different methods and score functions, such as:- K-means Distortion and Elbow Method
- Silhouette Score
- Davies-Bouldin Index
- Calinski-Harabasz Index
- Dunn Index
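For Q7, a minimal sketch of scoring a range of k values with scikit-learn; the function name is illustrative, and the Dunn index is omitted because scikit-learn has no built-in implementation of it:

```python
from sklearn.cluster import KMeans
from sklearn.metrics import (calinski_harabasz_score, davies_bouldin_score,
                             silhouette_score)

def score_k_range(X, k_values):
    """Fit k-means for each k and collect the score functions listed above."""
    results = []
    for k in k_values:
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
        results.append({
            "k": k,
            "distortion": km.inertia_,            # input to the elbow method
            "silhouette": silhouette_score(X, km.labels_),
            "davies_bouldin": davies_bouldin_score(X, km.labels_),
            "calinski_harabasz": calinski_harabasz_score(X, km.labels_),
        })
    return results
```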
Make a folder named `assets` in the root of each homework, download the necessary datasets (they are git-ignored and not included in the repository), and run the code.
For more information, please refer to the report of the desired homework.