In this folder, we have several helper functions for building a k-nearest neighbor classifier:

(1) conf_matrix.m: Builds a confusion matrix and also outputs the error of the classifier.
(2) fitknn.m: Takes in X, the targets, and the new data, along with the number of neighbors, and outputs the predicted classes.
(3) irisdata.mat: Sample classification data. See below for more details on what the data represents.
(4) KfoldCV.m: Splits a given data set into k folds, ready for cross validation.
(5) StandardScaler.m: Scales the data by subtracting the mean and dividing by the standard deviation.
(6) TrainTestSplit.m: Randomly splits a given data set into a training set and a testing set (give it the fraction to hold out for testing, like 0.3).

Our goal is to build two drivers:

knnapp1.m: Split the data (70-30), then use the "fixed" training set and loop through the number of nearest neighbors, getting the error for each. Plot the error to see what the best number of neighbors is. (A sketch follows below.)

knnapp2.m: Same idea as before, but take the training set and further divide it into 5 folds, and find the best k by cross validation. (A second sketch follows below.)

For this homework, we'll use the iris data, which really doesn't seem to need scaling.
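
Here is a minimal sketch of knnapp1.m. The helper signatures are guesses from the descriptions above: it assumes TrainTestSplit(X, targets, 0.3) returns the four split arrays, fitknn(Xtrain, ytrain, Xnew, k) returns predicted classes, conf_matrix(preds, ytrue) returns the confusion matrix and then the error, and irisdata.mat provides variables X and targets. Adjust these to match the actual helpers.

    % knnapp1.m - sketch: fixed 70-30 split, loop over k, plot the test error.
    % Helper signatures below are assumed from the README; adjust to the real ones.
    load irisdata.mat                    % assumed to provide X (features) and targets (labels)

    [Xtrain, ytrain, Xtest, ytest] = TrainTestSplit(X, targets, 0.3);  % hold out 30% for testing

    kmax = 20;                           % largest neighborhood size to try (arbitrary choice)
    err  = zeros(kmax, 1);
    for k = 1:kmax
        preds = fitknn(Xtrain, ytrain, Xtest, k);   % predict the test classes with k neighbors
        [~, err(k)] = conf_matrix(preds, ytest);    % second output assumed to be the error
    end

    % Plot the error curve and mark the best k.
    plot(1:kmax, err, 'o-');
    xlabel('Number of neighbors k'); ylabel('Test error');
    [~, bestk] = min(err);
    title(sprintf('Best k = %d', bestk));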
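
And a sketch of knnapp2.m along the same lines. The exact return format of KfoldCV is not specified above, so this version assumes KfoldCV(n, 5) returns a fold label (1 through 5) for each of the n training rows, which is one common convention; if the real helper returns the split data directly, rework the inner loop accordingly.

    % knnapp2.m - sketch: 70-30 split, then 5-fold CV on the training set to pick k.
    % Assumes KfoldCV(n, nfolds) returns a fold label (1..nfolds) per training row.
    load irisdata.mat                    % assumed to provide X and targets

    [Xtrain, ytrain, Xtest, ytest] = TrainTestSplit(X, targets, 0.3);

    nfolds = 5;
    folds  = KfoldCV(size(Xtrain, 1), nfolds);   % assumed fold-label convention

    kmax  = 20;
    cverr = zeros(kmax, 1);
    for k = 1:kmax
        foldErr = zeros(nfolds, 1);
        for f = 1:nfolds
            val = (folds == f);                  % hold out fold f for validation
            preds = fitknn(Xtrain(~val, :), ytrain(~val), Xtrain(val, :), k);
            [~, foldErr(f)] = conf_matrix(preds, ytrain(val));
        end
        cverr(k) = mean(foldErr);                % average validation error over the folds
    end

    [~, bestk] = min(cverr);
    plot(1:kmax, cverr, 'o-');
    xlabel('Number of neighbors k'); ylabel('Mean CV error');
    title(sprintf('Best k by 5-fold CV = %d', bestk));

    % Final check: refit with the chosen k on the full training set, report test error.
    preds = fitknn(Xtrain, ytrain, Xtest, bestk);
    [~, testErr] = conf_matrix(preds, ytest);
    fprintf('Test error with k = %d: %.3f\n', bestk, testErr);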