A) Is guaranteed to converge faster than any other method. B) Does not require a loss function. C) Is only applicable to linear models. D) Offers a balance between the efficiency of batch GD and the robustness of SGD.
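The tradeoff in option D is the hallmark of mini-batch gradient descent: one parameter update per small batch, rather than per full pass or per single sample. A minimal sketch on synthetic linear-regression data; the names X, y, lr, and batch_size are illustrative:

```python
# Mini-batch gradient descent for linear regression (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=1000)

w = np.zeros(3)
lr, batch_size = 0.05, 32
for epoch in range(20):
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        grad = 2 * X[batch].T @ (X[batch] @ w - y[batch]) / len(batch)
        w -= lr * grad  # one update per mini-batch, not per epoch or per sample

print(w)  # should approach [2.0, -1.0, 0.5]
```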
A) Taking a model pre-trained on a large dataset (e.g., ImageNet) and fine-tuning it for a new, specific task with a smaller dataset. B) Training a model from scratch on every new problem. C) Forgetting everything a model has learned. D) Using only unsupervised learning techniques.
A) Predicting continuous values in a regression task. B) Supervised classification of images. C) Unsupervised learning tasks like dimensionality reduction and data denoising. D) Reinforcement learning.
A) Only a single layer of perceptrons. B) A convolutional layer followed by an RNN layer. C) An encoder that compresses the input and a decoder that reconstructs the input from the compression. D) A single output neuron with a linear activation.
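Option C is the standard autoencoder layout. A minimal sketch in Keras, assuming TensorFlow is installed; the layer sizes are illustrative:

```python
# Encoder/decoder autoencoder: compress to 32 dims, reconstruct 784 dims.
import tensorflow as tf

inputs = tf.keras.Input(shape=(784,))
encoded = tf.keras.layers.Dense(32, activation="relu")(inputs)        # compression
decoded = tf.keras.layers.Dense(784, activation="sigmoid")(encoded)   # reconstruction

autoencoder = tf.keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")
# Trained to reproduce its own input: autoencoder.fit(X, X, epochs=10)
```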
A) The proportion of total predictions that were correct. B) The proportion of positive identifications that were actually correct. C) The proportion of actual positives that were identified correctly. D) The harmonic mean of precision and recall.
A) The cost of false positives is high (e.g., in spam detection, where you don't want to flag legitimate emails as spam). B) The cost of false negatives is high (e.g., in disease screening, where you don't want to miss a sick patient). C) You need a single metric that combines precision and recall. D) You are evaluating a regression model.
A) The cost of false positives is high (e.g., in spam detection). B) You need a single metric that combines precision and recall. C) You are evaluating a clustering model. D) The cost of false negatives is high (e.g., in disease screening, where you don't want to miss a sick patient).
A) The arithmetic mean of precision and recall. B) A metric used exclusively for regression. C) The difference between precision and recall. D) The harmonic mean of precision and recall, providing a single score that balances both concerns.
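Option D defines the F1 score. A short sketch computing precision, recall, and F1 with scikit-learn; the label arrays are illustrative:

```python
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]

p = precision_score(y_true, y_pred)   # correct positives / predicted positives
r = recall_score(y_true, y_pred)      # correct positives / actual positives
f1 = f1_score(y_true, y_pred)         # harmonic mean: 2*p*r / (p + r)
print(p, r, f1)
```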
A) The accuracy of a classification model. B) The total number of misclassified instances. C) The variance of the input features. D) The average of the squares of the errors between predicted and actual values.
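Option D is the mean squared error (MSE). A tiny worked sketch; the arrays are illustrative:

```python
import numpy as np

y_true = np.array([3.0, 5.0, 2.5])
y_pred = np.array([2.5, 5.0, 4.0])
mse = np.mean((y_true - y_pred) ** 2)  # average of the squared errors
print(mse)  # 0.8333...
```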
A) The performance of a binary classification model at various classification thresholds. B) The loss of a regression model over time. C) The architecture of a neural network. D) The clustering quality of a K-means algorithm.
A) 1.0. B) 0.5. C) 0.0. D) -1.0.
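An AUC of 1.0 (option A) means the model ranks every positive above every negative; 0.5 is no better than chance. A short sketch with scikit-learn, using illustrative scores:

```python
from sklearn.metrics import roc_auc_score

y_true  = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]  # predicted probabilities for the positive class
print(roc_auc_score(y_true, y_score))  # 0.75 here; 1.0 for a perfect classifier
```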
A) Replace the need for a separate test set. B) Increase the size of the training dataset. C) Visualize high-dimensional data. D) Obtain a more robust estimate of model performance by training and evaluating the model K times on different splits of the data.
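Option D describes K-fold cross-validation. A short sketch using scikit-learn's cross_val_score on a built-in dataset:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores.mean(), scores.std())  # K scores -> a more robust performance estimate
```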
A) A random selection from the training set. B) A single, pre-defined rule. C) The majority vote among its K closest neighbors in the feature space. D) The output of a linear function.
A) Is always set to 1 for the best performance. B) Is the learning rate for the algorithm. C) Is the number of features in the dataset. D) Controls the model's flexibility. A small K can lead to overfitting, while a large K can lead to underfitting.
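Option D captures how K governs the bias-variance tradeoff in K-NN. A short sketch varying K, with an illustrative dataset and train/test split:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Small K -> flexible boundary (overfit risk); large K -> smooth boundary (underfit risk).
for k in (1, 5, 50):
    model = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
    print(k, model.score(X_tr, y_tr), model.score(X_te, y_te))
```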
A) Clustering data into K groups. B) Predicting a target variable using linear combinations of features. C) Finding new, uncorrelated dimensions (principal components) that capture the maximum variance in the data. D) Classifying data using a decision boundary.
A) Captures the least possible variance in the data. B) Captures the greatest possible variance in the data. C) Is perpendicular to all other components. D) Is randomly oriented.
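Option B describes the first principal component. A short PCA sketch with scikit-learn; the explained-variance ratios come out in decreasing order:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)
pca = PCA(n_components=2).fit(X)
print(pca.explained_variance_ratio_)  # decreasing: PC1 >= PC2
X_reduced = pca.transform(X)          # data projected onto the new axes
```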
A) The data is perfectly classified into known labels. B) The between-cluster variance is minimized. C) The data is projected onto a single dimension. D) The within-cluster variance is minimized.
A) Determine the learning rate for gradient descent. B) Evaluate the accuracy of a classification model. C) Initialize the cluster centroids. D) Help choose the optimal number of clusters K by looking for a "bend" in the plot of within-cluster variance.
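Option D is the elbow method. A short sketch printing inertia (within-cluster variance) across candidate values of K; the blob data is illustrative:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)
for k in range(1, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(k, km.inertia_)  # inertia drops sharply until K=4, then flattens (the "bend")
```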
A) Make a strong (naive) assumption that all features are conditionally independent given the class label. B) Are very simple and cannot handle complex data. C) Always have the lowest possible accuracy. D) Do not use probability in their predictions.
A) Regression algorithm for predicting continuous values. B) Dimensionality reduction technique. C) Classification algorithm that models the probability of a binary outcome using a logistic function. D) Clustering algorithm for grouping unlabeled data.
A) Distance to the decision boundary. B) Probability that the input belongs to a particular class. C) Number of features in the input. D) Exact value of the target variable.
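Per option B, the sigmoid output of logistic regression is read as a class probability. A short sketch using scikit-learn's predict_proba on a built-in dataset:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=5000).fit(X, y)
print(model.predict_proba(X[:3]))  # each row: P(class 0), P(class 1), summing to 1
```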
A) Decision Trees to reduce overfitting and improve generalization. B) Support Vector Machines. C) K-NN models. D) Linear Regression models.
A) Reduce bias by making trees more complex. B) Increase the speed of a single decision tree. C) Reduce variance by training individual trees on random subsets of the data and averaging their results. D) Perform feature extraction like PCA.
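Option C is the bagging idea behind random forests. A short sketch comparing a single tree with an averaged ensemble; the dataset and split are illustrative:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
# The averaged ensemble typically generalizes better than the single tree.
print(tree.score(X_te, y_te), forest.score(X_te, y_te))
```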
A) Are exclusively used for unsupervised learning. B) Build all models independently and average them. C) Build models sequentially, where each new model corrects the errors of the previous ones. D) Do not require any parameter tuning.
A) The automatic learning of features by a deep neural network. B) The process of deleting all features from a dataset. C) The process of using domain knowledge to create new input features that make machine learning algorithms work better. D) The evaluation of a model's final performance.
A) Cluster similar data points together. B) Convert categorical variables into a binary (0/1) format that can be provided to ML algorithms. C) Normalize continuous numerical features. D) Reduce the dimensionality of image data.
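Option B is one-hot encoding. A short sketch with scikit-learn's OneHotEncoder; the categories are illustrative:

```python
from sklearn.preprocessing import OneHotEncoder

colors = [["red"], ["green"], ["blue"], ["green"]]
enc = OneHotEncoder(sparse_output=False)  # older sklearn versions use sparse=False
print(enc.fit_transform(colors))
# Each category becomes its own 0/1 column: blue, green, red.
```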
A) Are used for association rule learning. B) Are based on distance calculations or gradient descent, such as SVM and Neural Networks. C) Are based on tree-based models like Decision Trees and Random Forests. D) Are used for clustering only.
A) All datasets should have as many features as possible. B) There are never enough features to train a good model. C) Dimensionality reduction always improves model performance. D) As the number of features grows, the data becomes increasingly sparse, making it harder to find meaningful patterns.
A) Prevent overfitting by adding a penalty term to the loss function that discourages complex models. B) Make models more complex to fit the training data better. C) Speed up the training time of a model. D) Increase the variance of a model.
A) Increased model complexity. B) All features having non-zero weights. C) A decrease in model interpretability. D) Sparse models where the weights of less important features are driven to zero, effectively performing feature selection.
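Option D is the hallmark of L1 (Lasso) regularization. A short sketch, assuming synthetic data in which only two of ten features carry signal:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=200)  # only 2 features matter

model = Lasso(alpha=0.1).fit(X, y)
print(model.coef_)  # most coefficients are exactly 0 -> implicit feature selection
```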
A) Configuration settings for the learning algorithm that are not learned from the data and must be set prior to training (e.g., learning rate, K in K-NN). B) The input features of the model. C) The parameters that the model learns during training (e.g., weights in a neural network). D) The output predictions of the model.
A) Searching for the best combination of hyperparameters that results in the best model performance. B) Cleaning the raw data. C) Training the model's internal weights. D) Deploying the final model.
A) Exhaustively searching over a specified set of hyperparameter values. B) Randomly sampling hyperparameter combinations from a distribution. C) Ignoring hyperparameters altogether. D) Using a separate neural network to predict the best hyperparameters.
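Option A is grid search, option B random search. A short sketch with scikit-learn's GridSearchCV; the parameter grid is illustrative:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}  # 3 x 3 = 9 combinations, each cross-validated
search = GridSearchCV(SVC(), grid, cv=5).fit(X, y)
print(search.best_params_, search.best_score_)
# RandomizedSearchCV instead samples combinations from distributions.
```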
A) Starting the training process later than scheduled. B) Stopping the training after a fixed, very short number of epochs. C) Halting the training process when performance on a validation set starts to degrade, indicating the onset of overfitting. D) Using a very small learning rate.
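Option C is early stopping. A minimal sketch of the Keras callback, assuming TensorFlow; the fit call is left commented because the model and data names are placeholders:

```python
import tensorflow as tf

stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",         # watch the validation set, not the training set
    patience=5,                 # tolerate 5 stagnant epochs before halting
    restore_best_weights=True,  # roll back to the best epoch seen
)
# model.fit(X_train, y_train, validation_split=0.2, epochs=100, callbacks=[stop])
```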
A) A single layer of neurons. B) Convolutional layers for processing images. C) Recurrent layers for processing sequences. D) Fully connected layers, where each neuron in one layer is connected to every neuron in the next layer.
A) Binary classification problems. B) Regression problems. C) Multi-class classification problems, as it outputs a probability distribution over the possible classes. D) Unsupervised learning problems.
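Per option C, softmax turns raw logits into a probability distribution over classes. A tiny NumPy sketch:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))  # subtract the max for numerical stability
    return e / e.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))
print(probs, probs.sum())  # non-negative entries, summing to 1.0
```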
A) Is guaranteed to find the global minimum for any function. B) Combines the advantages of two other extensions of stochastic gradient descent, AdaGrad and RMSProp. C) Does not require any hyperparameters. D) Is only used for unsupervised learning.
A) Increase the batch size during training. B) Replace the need for an activation function. C) Normalize the entire dataset before feeding it into the network. D) Improve the stability and speed of neural network training by normalizing the inputs to each layer.
A) Regression model's accuracy. B) Clustering algorithm's group assignments. C) Dimensionality reduction technique's effectiveness. D) Classification model on a set of test data for which the true values are known.
A) The model incorrectly predicted the negative class. B) The model incorrectly predicted the positive class. C) The model correctly predicted the negative class. D) The model correctly predicted the positive class.
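These four outcomes are the cells of a confusion matrix. A short sketch with scikit-learn, whose convention puts true classes on rows and predictions on columns:

```python
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 0, 1, 1, 1]
y_pred = [0, 1, 0, 1, 1, 0]
print(confusion_matrix(y_true, y_pred))
# [[TN FP]
#  [FN TP]]
```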
A) The model is too complex for the data. B) The learning rate is set too high. C) One class in the training data has significantly more examples than another, which can bias the model. D) The features are not scaled properly.
A) Ignores the minority class completely. B) Deletes examples from the majority class at random. C) Generates synthetic examples for the minority class to balance the dataset. D) Combines all classes into one.
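Option C describes SMOTE. A short sketch using the imbalanced-learn package (a separate install: pip install imbalanced-learn); the generated dataset is illustrative:

```python
from collections import Counter

from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, weights=[0.95, 0.05], random_state=0)
print(Counter(y))                                 # heavily imbalanced classes
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print(Counter(y_res))                             # synthetic minority samples added to balance
```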
A) It learns by interacting with an environment and receiving rewards or penalties for actions, without a labeled dataset. B) It is only used for clustering unlabeled data. C) It requires a fully labeled dataset for training. D) It is a simpler and less powerful approach.
A) A clustering of possible actions. B) A policy that tells an agent what action to take under what circumstances by learning a value function. C) The principal components of a state space. D) A decision tree for classification.
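Option B describes the Q-function learned by Q-learning. A minimal tabular sketch; the states, actions, and reward here are placeholders rather than a real environment:

```python
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))   # the learned value function
alpha, gamma = 0.1, 0.9               # learning rate, discount factor

def update(state, action, reward, next_state):
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (target - Q[state, action])

update(state=0, action=1, reward=1.0, next_state=2)
# The greedy policy reads off argmax_a Q(s, a) in each state.
```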
A) Sentiment analysis, where text is classified as positive, negative, or neutral. B) Reducing the dimensionality of word vectors. C) Grouping similar news articles without labels. D) Generating new, original text without any input.
A) Are a type of clustering algorithm. B) Represent words as dense vectors in a continuous space, capturing semantic meaning. C) Are used only for image classification. D) Represent words as simple one-hot encoded vectors.
A) Two identical Convolutional Neural Networks. B) An Encoder and a Decoder for compression. C) A single, large Regression network. D) A Generator and a Discriminator, which are trained in opposition to each other.
A) Discriminating between real and fake data. B) Reducing the dimensionality of the input. C) Classifying input images into categories. D) Creating new, synthetic data that is indistinguishable from real data.
A) Binary classifier that tries to correctly label data as real (from the dataset) or fake (from the generator). B) Dimensionality reduction technique. C) Regression model predicting a continuous value. D) Clustering algorithm grouping similar images.