A) Does not require a loss function. B) Is guaranteed to converge faster than any other method. C) Is only applicable to linear models. D) Offers a balance between the efficiency of batch GD and the robustness of SGD.
A) Forgetting everything a model has learned. B) Taking a model pre-trained on a large dataset (e.g., ImageNet) and fine-tuning it for a new, specific task with a smaller dataset. C) Training a model from scratch on every new problem. D) Using only unsupervised learning techniques.
A) Unsupervised learning tasks like dimensionality reduction and data denoising. B) Predicting continuous values in a regression task. C) Reinforcement learning. D) Supervised classification of images.
A) A convolutional layer followed by an RNN layer. B) A single output neuron with a linear activation. C) Only a single layer of perceptrons. D) An encoder that compresses the input and a decoder that reconstructs the input from the compression.
A) The proportion of positive identifications that were actually correct. B) The harmonic mean of precision and recall. C) The proportion of actual positives that were identified correctly. D) The proportion of total predictions that were correct.
A) The cost of false negatives is high (e.g., in disease screening, where you don't want to miss a sick patient). B) You are evaluating a regression model. C) You need a single metric that combines precision and recall. D) The cost of false positives is high (e.g., in spam detection, where you don't want to flag legitimate emails as spam).
A) The cost of false positives is high (e.g., in spam detection). B) You need a single metric that combines precision and recall. C) You are evaluating a clustering model. D) The cost of false negatives is high (e.g., in disease screening, where you don't want to miss a sick patient).
A) The difference between precision and recall. B) A metric used exclusively for regression. C) The harmonic mean of precision and recall, providing a single score that balances both concerns. D) The arithmetic mean of precision and recall.
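For reference, the definitions above (precision, recall, and F1 as their harmonic mean) can be sketched in a few lines of Python; the counts are made-up illustrative values, not from any real model:

```python
# Minimal sketch: precision, recall, and F1 from raw confusion counts.
def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp)   # correct positive calls / all positive calls
    recall = tp / (tp + fn)      # correct positive calls / all actual positives
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

p, r, f1 = precision_recall_f1(tp=8, fp=2, fn=4)
print(p, r, f1)
```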
A) The variance of the input features. B) The average of the squares of the errors between predicted and actual values. C) The total number of misclassified instances. D) The accuracy of a classification model.
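The mean squared error described above is simple to compute directly; a minimal sketch with toy values:

```python
# Minimal sketch: mean squared error between predicted and actual values.
def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

print(mse([3.0, 5.0], [2.0, 7.0]))  # errors 1 and 2 -> (1 + 4) / 2 = 2.5
```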
A) The clustering quality of a K-means algorithm. B) The performance of a binary classification model at various classification thresholds. C) The loss of a regression model over time. D) The architecture of a neural network.
A) 0.5. B) 0.0. C) -1.0. D) 1.0.
A) Visualize high-dimensional data. B) Replace the need for a separate test set. C) Obtain a more robust estimate of model performance by training and evaluating the model K times on different splits of the data. D) Increase the size of the training dataset.
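The K-fold procedure described above can be sketched as index generation (stdlib only; the function name `kfold_indices` is made up for this illustration). Each of the K iterations holds out one fold for evaluation and trains on the rest:

```python
# Minimal K-fold cross-validation index sketch.
def kfold_indices(n, k):
    # distribute n samples across k folds as evenly as possible
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test_idx = list(range(start, start + size))
        train_idx = [i for i in range(n) if i < start or i >= start + size]
        yield train_idx, test_idx
        start += size

for train_idx, test_idx in kfold_indices(n=6, k=3):
    print(train_idx, test_idx)
```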
A) The majority vote among its K closest neighbors in the feature space. B) A random selection from the training set. C) A single, pre-defined rule. D) The output of a linear function.
A) Is the learning rate for the algorithm. B) Is always set to 1 for the best performance. C) Is the number of features in the dataset. D) Controls the model's flexibility. A small K can lead to overfitting, while a large K can lead to underfitting.
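The K-NN prediction rule from the two items above fits in a few lines: find the K nearest training points and take a majority vote. The toy data is made up for illustration:

```python
from collections import Counter
import math

# Minimal K-NN classifier sketch: majority vote among the K nearest
# training points under Euclidean distance.
def knn_predict(train, query, k):
    # train: list of (feature_vector, label) pairs
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [((0, 0), "a"), ((0, 1), "a"), ((5, 5), "b"), ((5, 6), "b")]
print(knn_predict(train, (1, 1), k=3))  # -> "a"
```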
A) Classifying data using a decision boundary. B) Predicting a target variable using linear combinations of features. C) Finding new, uncorrelated dimensions (principal components) that capture the maximum variance in the data. D) Clustering data into K groups.
A) Captures the least possible variance in the data. B) Is randomly oriented. C) Captures the greatest possible variance in the data. D) Is perpendicular to all other components.
A) The data is projected onto a single dimension. B) The within-cluster variance is minimized. C) The data is perfectly classified into known labels. D) The between-cluster variance is minimized.
A) Initialize the cluster centroids. B) Help choose the optimal number of clusters K by looking for a "bend" in the plot of within-cluster variance. C) Determine the learning rate for gradient descent. D) Evaluate the accuracy of a classification model.
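The elbow method above can be illustrated with a tiny 1-D K-means (Lloyd's algorithm, deterministic initialization chosen for this sketch): run it for several K and watch the within-cluster sum of squares (WCSS). On this toy data the drop from K=1 to K=2 is much larger than from K=2 to K=3, which is the "bend" the method looks for:

```python
# Illustrative elbow-method sketch with a minimal 1-D K-means.
def kmeans_1d(data, k, iters=20):
    data = sorted(data)
    # deterministic init: spread centroids across the sorted data
    centroids = [data[i * (len(data) - 1) // max(k - 1, 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:
            clusters[min(range(k), key=lambda i: (x - centroids[i]) ** 2)].append(x)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    # within-cluster sum of squares
    return sum(min((x - c) ** 2 for c in centroids) for x in data)

data = [1, 2, 3, 10, 11, 12]
for k in (1, 2, 3):
    print(k, kmeans_1d(data, k))
```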
A) Make a strong (naive) assumption that all features are conditionally independent given the class label. B) Always have the lowest possible accuracy. C) Are very simple and cannot handle complex data. D) Do not use probability in their predictions.
A) Regression algorithm for predicting continuous values. B) Clustering algorithm for grouping unlabeled data. C) Dimensionality reduction technique. D) Classification algorithm that models the probability of a binary outcome using a logistic function.
A) Number of features in the input. B) Distance to the decision boundary. C) Exact value of the target variable. D) Probability that the input belongs to a particular class.
A) K-NN models. B) Linear Regression models. C) Decision Trees to reduce overfitting and improve generalization. D) Support Vector Machines.
A) Increase the speed of a single decision tree. B) Reduce variance by training individual trees on random subsets of the data and averaging their results. C) Reduce bias by making trees more complex. D) Perform feature extraction like PCA.
A) Do not require any parameter tuning. B) Build models sequentially, where each new model corrects the errors of the previous ones. C) Are exclusively used for unsupervised learning. D) Build all models independently and average them.
A) The automatic learning of features by a deep neural network. B) The evaluation of a model's final performance. C) The process of deleting all features from a dataset. D) The process of using domain knowledge to create new input features that make machine learning algorithms work better.
A) Reduce the dimensionality of image data. B) Normalize continuous numerical features. C) Cluster similar data points together. D) Convert categorical variables into a binary (0/1) format that can be provided to ML algorithms.
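One-hot encoding, as described above, maps each category to a binary vector with a single 1; a minimal stdlib sketch (category order here is simply sorted, an arbitrary choice for the example):

```python
# Minimal one-hot encoding sketch for a list of categorical values.
def one_hot(values):
    categories = sorted(set(values))
    index = {c: i for i, c in enumerate(categories)}
    return [[1 if index[v] == i else 0 for i in range(len(categories))]
            for v in values]

print(one_hot(["red", "green", "red", "blue"]))
```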
A) Are used for clustering only. B) Are based on tree-based models like Decision Trees and Random Forests. C) Are used for association rule learning. D) Are based on distance calculations or gradient descent, such as SVM and Neural Networks.
A) All datasets should have as many features as possible. B) As the number of features grows, the data becomes increasingly sparse, making it harder to find meaningful patterns. C) Dimensionality reduction always improves model performance. D) There are never enough features to train a good model.
A) Increase the variance of a model. B) Prevent overfitting by adding a penalty term to the loss function that discourages complex models. C) Speed up the training time of a model. D) Make models more complex to fit the training data better.
A) Increased model complexity. B) Sparse models where the weights of less important features are driven to zero, effectively performing feature selection. C) All features having non-zero weights. D) A decrease in model interpretability.
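The sparsity effect of L1 regularization described above can be illustrated with soft-thresholding, the proximal update for an L1 penalty: weights whose magnitude falls below the penalty strength are set exactly to zero. The weight values are made up for the example:

```python
# Soft-thresholding: the per-weight update induced by an L1 penalty.
def soft_threshold(w, lam):
    if w > lam:
        return w - lam
    if w < -lam:
        return w + lam
    return 0.0  # small weights are zeroed out -> sparsity

weights = [0.9, -0.05, 0.02, -1.3]
sparse = [soft_threshold(w, lam=0.1) for w in weights]
print(sparse)
```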
A) The output predictions of the model. B) The parameters that the model learns during training (e.g., weights in a neural network). C) Configuration settings for the learning algorithm that are not learned from the data and must be set prior to training (e.g., learning rate, K in K-NN). D) The input features of the model.
A) Searching for the best combination of hyperparameters that results in the best model performance. B) Cleaning the raw data. C) Training the model's internal weights. D) Deploying the final model.
A) Exhaustively searching over a specified set of hyperparameter values. B) Ignoring hyperparameters altogether. C) Using a separate neural network to predict the best hyperparameters. D) Randomly sampling hyperparameter combinations from a distribution.
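The exhaustive grid search mentioned above is a nested loop over every hyperparameter combination; a minimal sketch where the scoring function is a toy stand-in, not a real model:

```python
import itertools

# Minimal grid-search sketch: try every combination, keep the best score.
def grid_search(grid, score_fn):
    best_params, best_score = None, float("-inf")
    keys = list(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        s = score_fn(params)
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score

grid = {"learning_rate": [0.01, 0.1], "k": [3, 5, 7]}
# toy score that peaks at learning_rate=0.1, k=5 (illustrative only)
toy_score = lambda p: -abs(p["learning_rate"] - 0.1) - abs(p["k"] - 5)
print(grid_search(grid, toy_score))
```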
A) Stopping the training after a fixed, very short number of epochs. B) Starting the training process later than scheduled. C) Halting the training process when performance on a validation set starts to degrade, indicating the onset of overfitting. D) Using a very small learning rate.
A) Fully connected layers, where each neuron in one layer is connected to every neuron in the next layer. B) Recurrent layers for processing sequences. C) Convolutional layers for processing images. D) A single layer of neurons.
A) Unsupervised learning problems. B) Regression problems. C) Binary classification problems. D) Multi-class classification problems, as it outputs a probability distribution over the possible classes.
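The softmax output described above turns raw class scores (logits) into a probability distribution; a minimal sketch (subtracting the max is a standard numerical-stability trick):

```python
import math

# Minimal softmax sketch: logits -> probability distribution over classes.
def softmax(logits):
    m = max(logits)  # for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(probs, sum(probs))  # probabilities sum to 1
```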
A) Combines the advantages of two other extensions of stochastic gradient descent, AdaGrad and RMSProp. B) Does not require any hyperparameters. C) Is only used for unsupervised learning. D) Is guaranteed to find the global minimum for any function.
A) Normalize the entire dataset before feeding it into the network. B) Increase the batch size during training. C) Improve the stability and speed of neural network training by normalizing the inputs to each layer. D) Replace the need for an activation function.
A) Dimensionality reduction technique's effectiveness. B) Clustering algorithm's group assignments. C) Regression model's accuracy. D) Classification model on a set of test data for which the true values are known.
A) The model incorrectly predicted the negative class. B) The model correctly predicted the negative class. C) The model incorrectly predicted the positive class. D) The model correctly predicted the positive class.
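The four confusion-matrix cells from the items above (true/false positives and negatives) can be counted directly from binary labels; the label lists are toy values:

```python
# Minimal sketch: count TP, TN, FP, FN from true and predicted binary labels.
def confusion_counts(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

print(confusion_counts([1, 0, 1, 1, 0], [1, 0, 0, 1, 1]))
```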
A) The features are not scaled properly. B) The model is too complex for the data. C) One class in the training data has significantly more examples than another, which can bias the model. D) The learning rate is set too high.
A) Deletes examples from the majority class at random. B) Generates synthetic examples for the minority class to balance the dataset. C) Combines all classes into one. D) Ignores the minority class completely.
A) It is a simpler and less powerful approach. B) It learns by interacting with an environment and receiving rewards or penalties for actions, without a labeled dataset. C) It is only used for clustering unlabeled data. D) It requires a fully labeled dataset for training.
A) The principal components of a state space. B) A clustering of possible actions. C) A policy that tells an agent what action to take under what circumstances by learning a value function. D) A decision tree for classification.
A) Grouping similar news articles without labels. B) Generating new, original text without any input. C) Reducing the dimensionality of word vectors. D) Sentiment analysis, where text is classified as positive, negative, or neutral.
A) Represent words as simple one-hot encoded vectors. B) Represent words as dense vectors in a continuous space, capturing semantic meaning. C) Are a type of clustering algorithm. D) Are used only for image classification.
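The contrast above can be shown with cosine similarity: dense vectors let related words score high, whereas distinct one-hot vectors are always orthogonal (similarity 0). The 3-d vectors below are made up for the example, not real embeddings:

```python
import math

# Cosine similarity between two vectors of equal length.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

king, queen, banana = [0.9, 0.8, 0.1], [0.85, 0.82, 0.15], [0.1, 0.0, 0.9]
print(cosine(king, queen), cosine(king, banana))  # related pair scores higher
```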
A) Two identical Convolutional Neural Networks. B) A single, large Regression network. C) A Generator and a Discriminator, which are trained in opposition to each other. D) An Encoder and a Decoder for compression.
A) Creating new, synthetic data that is indistinguishable from real data. B) Classifying input images into categories. C) Reducing the dimensionality of the input. D) Discriminating between real and fake data.
A) Clustering algorithm grouping similar images. B) Binary classifier that tries to correctly label data as real (from the dataset) or fake (from the generator). C) Regression model predicting a continuous value. D) Dimensionality reduction technique.