A) Both A and B B) Classification C) None of these D) Prediction
A) Medium dimensional data B) High dimensional data C) Low dimensional data D) None of these
A) None of these B) stem C) leaf D) root
A) Information Gain B) Entropy C) None of these D) Gini Index
A) None of these B) What are the advantages of the decision tree? C) Both D) Non-linear patterns in the data can be captured easily
A) Random forests are easy to interpret but often very accurate B) Random forests are difficult to interpret but often very accurate C) None of these D) Random forests are difficult to interpret and much less accurate
A) Text Mining B) Warehousing C) Data Selection D) Data Mining
A) Knowledge Discovery Data B) Knowledge Data Definition C) Knowledge Discovery in Databases D) Knowledge Data House
A) To obtain query responses B) For authentication C) To maintain consistency D) For data access
A) Association and correlation analysis, classification B) Cluster analysis and evolution analysis C) Prediction and characterization D) All of the above
A) The nearest neighbor method is the same as k-means B) The goal of k-means clustering is to partition n observations into k clusters C) K-means clustering can be defined as a method of vector quantization D) All of the above
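Option B above states the textbook goal of k-means: partitioning n observations into k clusters. A minimal pure-Python sketch of Lloyd's algorithm illustrates that goal (the toy data and function names are illustrative, not from the quiz source):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Lloyd's algorithm: partition n observations into k clusters."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # pick k distinct initial centers
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # Update step: each center moves to the mean of its cluster.
        centers = [tuple(sum(xs) / len(cl) for xs in zip(*cl)) if cl
                   else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers, clusters

# Two well-separated groups of two points each.
points = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
centers, clusters = kmeans(points, k=2)
```

On this toy data the algorithm converges to the two obvious groups within a few iterations, which is exactly the "n observations into k clusters" partition option B describes.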
A) 3 B) 2 C) 5 D) 4
A) Find the directions of the data that maximize the variance of the features B) Find the explained variance C) Avoid bad features D) Find good features to improve your clustering score
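Option A above describes what PCA does: it finds the direction in the data along which variance is maximized. A brute-force stdlib sketch of that idea (the toy data is illustrative; real code would use an eigendecomposition, e.g. `sklearn.decomposition.PCA`):

```python
import math

# Toy 2-D data whose variance lies mostly along the x-axis.
data = [(2.0, 0.1), (-2.0, -0.1), (1.0, 0.0), (-1.0, 0.0)]

def variance_along(points, angle):
    """Variance of the points projected onto the unit vector at `angle`."""
    ux, uy = math.cos(angle), math.sin(angle)
    proj = [x * ux + y * uy for x, y in points]
    mean = sum(proj) / len(proj)
    return sum((p - mean) ** 2 for p in proj) / len(proj)

# Brute-force scan over directions; the maximizer approximates
# the first principal component.
angles = [math.radians(d) for d in range(180)]
best = max(angles, key=lambda a: variance_along(data, a))
```

For this data the winning direction is nearly the x-axis, since that is where almost all the spread lives; the ratio of the maximal variance to the total is the "explained variance" mentioned in option B.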
A) Standardized data allows other people to better understand your work B) Standardizing follows the best practices of data wrangling C) Make training faster D) Find the features that best predict Y
A) MCRS B) MARS C) MCV D) All of the mentioned
A) featurePlot B) None of the mentioned C) plotsample D) levelplot
A) process B) preProcess C) postProcess D) All of the above
A) True B) False
A) PCA B) ICA C) None of the mentioned D) SCA
A) True B) False