A) None of these B) Both A and B C) Classification D) Prediction
A) Low-dimensional data B) Medium-dimensional data C) High-dimensional data D) None of these
A) root B) stem C) None of these D) leaf
A) Information Gain B) None of these C) Entropy D) Gini Index
What are the advantages of the decision tree? A) Non-linear patterns in the data can be captured easily B) None of these C) Both
A) None of these B) Random forests are easy to interpret but often very accurate C) Random forests are difficult to interpret but often very accurate D) Random forests are difficult to interpret and much less accurate
A) Warehousing B) Text Mining C) Data Mining D) Data Selection
A) Knowledge data house B) Knowledge Data definition C) Knowledge Discovery Data D) Knowledge Discovery in Databases
A) To obtain the query response B) In order to maintain consistency C) For data access D) For authentication
A) Cluster analysis and evolution analysis B) All of the above C) Prediction and characterization D) Association and correlation analysis, classification
A) The nearest neighbor is the same as K-means B) K-means clustering can be defined as a method of vector quantization C) All of the above D) The goal of k-means clustering is to partition n observations into k clusters
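The k-means options above reference partitioning n observations into k clusters. A minimal pure-Python sketch of Lloyd's algorithm illustrates that goal; the function name and toy data are illustrative assumptions, not part of the quiz:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Partition 2-D points into k clusters (Lloyd's algorithm sketch)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # pick k distinct points as initial centers
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # assignment step: each point joins its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # update step: move each center to the mean of its cluster
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centers, clusters

# two well-separated groups of three points each
data = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
centers, clusters = kmeans(data, k=2)
```

On this toy data the algorithm recovers the two groups, with one center near each group's mean.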
A) 3 B) 2 C) 4 D) 5
A) Find good features to improve your clustering score B) Find which dimension of the data maximizes the feature variance C) Find the explained variance D) Avoid bad features
A) Standardized data allows other people to better understand your work B) Find the features which best predict Y C) Make training faster D) Use standardized best practices of data wrangling
A) MARS B) All of the mentioned C) MCV D) MCRS
A) levelplot B) featurePlot C) plotsample D) None of the mentioned
A) preProcess B) process C) All of the above D) postProcess
A) False B) True
A) SCA B) ICA C) None of the mentioned D) PCA
A) False B) True