COELE1-2
  • 1. "Mini-batch Gradient Descent" is often preferred because it:
A) Is only applicable to linear models.
B) Offers a balance between the efficiency of batch GD and the robustness of SGD.
C) Does not require a loss function.
D) Is guaranteed to converge faster than any other method.
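A minimal NumPy sketch of the trade-off in answer B, using mini-batch updates on a toy linear-regression loss; the data, learning rate, and batch size are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                       # toy features
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=1000)

w = np.zeros(3)
lr, batch_size = 0.1, 32                             # illustrative hyperparameters
for epoch in range(20):
    order = rng.permutation(len(X))                  # reshuffle every epoch
    for start in range(0, len(X), batch_size):
        b = order[start:start + batch_size]          # one mini-batch
        grad = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)  # MSE gradient on the batch only
        w -= lr * grad
print(w)  # approaches [2, -1, 0.5]
```

Each update uses a small batch rather than the full dataset (batch GD) or a single example (SGD), which is exactly the balance the question describes.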
  • 2. "Transfer Learning" in deep learning involves:
A) Taking a model pre-trained on a large dataset (e.g., ImageNet) and fine-tuning it for a new, specific task with a smaller dataset.
B) Using only unsupervised learning techniques.
C) Forgetting everything a model has learned.
D) Training a model from scratch on every new problem.
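A minimal Keras sketch of answer A, assuming TensorFlow is installed; the backbone, input size, and five-class head are illustrative choices:

```python
import tensorflow as tf

# Backbone pre-trained on ImageNet, loaded without its original classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet")
base.trainable = False                   # freeze the pre-trained weights

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),  # new head for a hypothetical 5-class task
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(new_images, new_labels, epochs=5)  # fine-tune on the smaller dataset
```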
  • 3. An "Autoencoder" is a type of neural network primarily used for:
A) Predicting continuous values in a regression task.
B) Supervised classification of images.
C) Unsupervised learning tasks like dimensionality reduction and data denoising.

D) Reinforcement learning.
  • 4. The architecture of a typical autoencoder consists of:
A) An encoder that compresses the input and a decoder that reconstructs the input from the compression.
B) Only a single layer of perceptrons.
C) A convolutional layer followed by an RNN layer.
D) A single output neuron with a linear activation.
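A minimal Keras sketch of the encoder/decoder structure from questions 3 and 4, assuming TensorFlow; the 784-dimensional input and layer sizes are illustrative:

```python
import tensorflow as tf

# Encoder compresses 784-dim inputs to a 32-dim code; decoder reconstructs them.
encoder = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),      # bottleneck code
])
decoder = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(784, activation="sigmoid"),
])
autoencoder = tf.keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")      # reconstruction error, no labels
# autoencoder.fit(X, X, epochs=10)  # the target is the input itself: unsupervised
```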
  • 5. In the context of model evaluation for classification, "Accuracy" is defined as:
A) The proportion of total predictions that were correct.
B) The harmonic mean of precision and recall.
C) The proportion of actual positives that were identified correctly.
D) The proportion of positive identifications that were actually correct.
  • 6. "Precision" is an important metric when:
A) You are evaluating a regression model.
B) You need a single metric that combines precision and recall.
C) The cost of false positives is high (e.g., in spam detection, where you don't want to flag legitimate emails as spam).
D) The cost of false negatives is high (e.g., in disease screening, where you don't want to miss a sick patient).
  • 7. "Recall" is an important metric when:
A) The cost of false positives is high (e.g., in spam detection).
B) You are evaluating a clustering model.
C) The cost of false negatives is high (e.g., in disease screening, where you don't want to miss a sick patient).
D) You need a single metric that combines precision and recall.
  • 8. The "F1 Score" is:
A) The arithmetic mean of precision and recall.
B) A metric used exclusively for regression.
C) The harmonic mean of precision and recall, providing a single score that balances both concerns.
D) The difference between precision and recall.
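Questions 5 through 8 all derive from the four confusion-matrix counts; a small self-contained sketch with made-up counts:

```python
def classification_metrics(tp, fp, tn, fn):
    """Compute the metrics from questions 5-8 out of confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)          # correct / all predictions
    precision = tp / (tp + fp)                          # flagged positives that were real
    recall = tp / (tp + fn)                             # real positives that were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
    return accuracy, precision, recall, f1

print(classification_metrics(tp=40, fp=10, tn=45, fn=5))
# (0.85, 0.8, 0.888..., 0.842...)
```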
  • 9. For a regression model, the "Mean Squared Error" (MSE) measures:
A) The total number of misclassified instances.
B) The accuracy of a classification model.
C) The average of the squares of the errors between predicted and actual values.
D) The variance of the input features.
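A one-function NumPy sketch of the definition in answer C, with made-up numbers:

```python
import numpy as np

def mse(y_true, y_pred):
    # Average of the squared differences between actual and predicted values.
    return np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)

print(mse([3.0, 5.0, 2.5], [2.5, 5.0, 4.0]))  # (0.25 + 0.0 + 2.25) / 3 = 0.8333...
```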
  • 10. The "ROC Curve" is a tool used to evaluate:
A) The clustering quality of a K-means algorithm.
B) The performance of a binary classification model at various classification thresholds.
C) The loss of a regression model over time.
D) The architecture of a neural network.
  • 11. "Area Under the ROC Curve" (AUC) provides an aggregate measure of performance across all possible classification thresholds. A perfect model has an AUC of:
A) -1.0.
B) 0.0.
C) 1.0.
D) 0.5.
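A short sketch of questions 10 and 11, assuming scikit-learn is available; the synthetic dataset is illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scores = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

fpr, tpr, thresholds = roc_curve(y_te, scores)  # one (FPR, TPR) point per threshold
print(roc_auc_score(y_te, scores))              # 1.0 = perfect, 0.5 = random guessing
```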
  • 12. "K-fold Cross-Validation" is a technique used to:
A) Obtain a more robust estimate of model performance by training and evaluating the model K times on different splits of the data.
B) Increase the size of the training dataset.
C) Replace the need for a separate test set.
D) Visualize high-dimensional data.
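A minimal scikit-learn sketch of answer A; the model and dataset are illustrative:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
# The model is trained and scored 5 times; each fold is the held-out split exactly once.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores.mean(), scores.std())  # a more robust estimate than one single split
```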
  • 13. In the K-Nearest Neighbors (K-NN) algorithm for classification, the class of a new data point is determined by:
A) The majority vote among its K closest neighbors in the feature space.
B) A single, pre-defined rule.
C) A random selection from the training set.
D) The output of a linear function.
  • 14. The parameter 'K' in the K-NN algorithm:
A) Controls the model's flexibility. A small K can lead to overfitting, while a large K can lead to underfitting.
B) Is always set to 1 for the best performance.
C) Is the learning rate for the algorithm.
D) Is the number of features in the dataset.
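A scikit-learn sketch covering both questions 13 and 14: predictions are a majority vote of the K nearest neighbors, and varying K changes the model's flexibility; the dataset and K values are illustrative:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for k in (1, 5, 25):  # small K: flexible, overfit risk; large K: smooth, underfit risk
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
    print(k, knn.score(X_te, y_te))  # each prediction is a majority vote of the K neighbors
```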
  • 15. "Principal Component Analysis" (PCA) works by:
A) Predicting a target variable using linear combinations of features.
B) Classifying data using a decision boundary.
C) Finding new, uncorrelated dimensions (principal components) that capture the maximum variance in the data.
D) Clustering data into K groups.
  • 16. The first principal component in PCA is the direction in the feature space that:
A) Is randomly oriented.
B) Is perpendicular to all other components.
C) Captures the greatest possible variance in the data.
D) Captures the least possible variance in the data.
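A scikit-learn sketch of questions 15 and 16; the dataset is illustrative:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)
pca = PCA(n_components=2).fit(X)
X_2d = pca.transform(X)               # 4 correlated features -> 2 uncorrelated components
print(pca.explained_variance_ratio_)  # components ordered by variance; the first captures the most
```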
  • 17. "K-Means Clustering" aims to partition data into K clusters such that:
A) The data is projected onto a single dimension.
B) The data is perfectly classified into known labels.
C) The within-cluster variance is minimized.
D) The between-cluster variance is minimized.
  • 18. The "Elbow Method" is a heuristic used in K-Means to:
A) Determine the learning rate for gradient descent.
B) Evaluate the accuracy of a classification model.
C) Initialize the cluster centroids.
D) Help choose the optimal number of clusters K by looking for a "bend" in the plot of within-cluster variance.
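A scikit-learn sketch of questions 17 and 18: K-Means minimizes within-cluster variance (its `inertia_`), and printing it for several K values exposes the elbow; the blob data is illustrative:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)
for k in range(1, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(k, round(km.inertia_, 1))  # within-cluster variance; the "bend" appears near k=4
```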
  • 19. "Naive Bayes" classifiers are called "naive" because they:
A) Make a strong (naive) assumption that all features are conditionally independent given the class label.
B) Do not use probability in their predictions.
C) Are very simple and cannot handle complex data.
D) Always have the lowest possible accuracy.
  • 20. "Logistic Regression" is fundamentally a:
A) Classification algorithm that models the probability of a binary outcome using a logistic function.
B) Regression algorithm for predicting continuous values.
C) Clustering algorithm for grouping unlabeled data.
D) Dimensionality reduction technique.
  • 21. The output of a logistic regression model is a value between 0 and 1, which represents the:
A) Distance to the decision boundary.
B) Number of features in the input.
C) Exact value of the target variable.
D) Probability that the input belongs to a particular class.
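A tiny NumPy sketch of the mechanism behind questions 20 and 21:

```python
import numpy as np

def sigmoid(z):
    # The logistic function squashes any real score into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

# z = w.x + b is the model's linear score; sigmoid(z) is read as P(class = 1 | x).
print(sigmoid(np.array([-4.0, 0.0, 4.0])))  # [0.018, 0.5, 0.982]
```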
  • 22. A "Random Forest" is an ensemble method that combines multiple:
A) Decision Trees to reduce overfitting and improve generalization.
B) Support Vector Machines.
C) K-NN models.
D) Linear Regression models.
  • 23. The "bagging" technique in a Random Forest helps to:
A) Reduce bias by making trees more complex.
B) Reduce variance by training individual trees on random subsets of the data and averaging their results.
C) Increase the speed of a single decision tree.
D) Perform feature extraction like PCA.
  • 24. "Gradient Boosting" machines (e.g., XGBoost) are ensemble methods that:
A) Build all models independently and average them.
B) Are exclusively used for unsupervised learning.
C) Do not require any parameter tuning.
D) Build models sequentially, where each new model corrects the errors of the previous ones.
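A scikit-learn sketch contrasting the two ensemble styles from questions 22 through 24 (bagging in parallel vs. boosting in sequence); the dataset and estimator counts are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, random_state=0)

# Bagging: trees trained independently on bootstrap samples, predictions averaged.
rf = RandomForestClassifier(n_estimators=200, random_state=0)
# Boosting: trees built sequentially, each fitting the errors of the ensemble so far.
gb = GradientBoostingClassifier(n_estimators=200, random_state=0)

for name, model in [("random forest", rf), ("gradient boosting", gb)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```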
  • 25. The term "feature engineering" refers to:
A) The automatic learning of features by a deep neural network.
B) The process of deleting all features from a dataset.
C) The process of using domain knowledge to create new input features that make machine learning algorithms work better.
D) The evaluation of a model's final performance.
  • 26. "One-hot encoding" is a preprocessing technique used to:
A) Convert categorical variables into a binary (0/1) format that can be provided to ML algorithms.
B) Reduce the dimensionality of image data.
C) Cluster similar data points together.
D) Normalize continuous numerical features.
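A one-line pandas sketch of answer A; the toy column is illustrative:

```python
import pandas as pd

df = pd.DataFrame({"color": ["red", "green", "blue", "green"]})
print(pd.get_dummies(df, columns=["color"]))
# Each category becomes its own 0/1 column: color_blue, color_green, color_red.
```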
  • 27. "Feature scaling" (e.g., normalization or standardization) is often crucial for algorithms that:
A) Are used for association rule learning.
B) Are based on tree-based models like Decision Trees and Random Forests.
C) Are based on distance calculations or gradient descent, such as SVM and Neural Networks.
D) Are used for clustering only.
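A scikit-learn sketch of why answer C matters; the two-column toy matrix is illustrative:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 2000.0], [2.0, 3000.0], [3.0, 4000.0]])  # columns on very different scales
print(StandardScaler().fit_transform(X))  # each column rescaled to mean 0, std 1
# Without scaling, the second column would dominate distances and gradient steps.
```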
  • 28. The "curse of dimensionality" refers to the problem that:
A) As the number of features grows, the data becomes increasingly sparse, making it harder to find meaningful patterns.
B) Dimensionality reduction always improves model performance.
C) There are never enough features to train a good model.
D) All datasets should have as many features as possible.
  • 29. "Regularization" is a technique used to:
A) Make models more complex to fit the training data better.
B) Increase the variance of a model.
C) Speed up the training time of a model.
D) Prevent overfitting by adding a penalty term to the loss function that discourages complex models.
  • 30. L1 Regularization (Lasso) can often lead to:
A) A decrease in model interpretability.
B) Sparse models where the weights of less important features are driven to zero, effectively performing feature selection.
C) All features having non-zero weights.
D) Increased model complexity.
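A scikit-learn sketch of the sparsity effect in answer B of question 30; the synthetic data and alpha are illustrative:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=200)  # only 2 of 10 features matter

print(Lasso(alpha=0.1).fit(X, y).coef_)
# Weights for the 8 irrelevant features are driven to (or very near) zero.
```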
  • 31. "Hyperparameters" are:
A) The output predictions of the model.
B) The input features of the model.
C) Configuration settings for the learning algorithm that are not learned from the data and must be set prior to training (e.g., learning rate, K in K-NN).
D) The parameters that the model learns during training (e.g., weights in a neural network).
  • 32. The process of "Hyperparameter Tuning" involves:
A) Training the model's internal weights.
B) Searching for the best combination of hyperparameters that results in the best model performance.
C) Cleaning the raw data.
D) Deploying the final model.
  • 33. "Grid Search" is a common method for hyperparameter tuning that involves:
A) Exhaustively searching over a specified set of hyperparameter values.
B) Ignoring hyperparameters altogether.
C) Randomly sampling hyperparameter combinations from a distribution.
D) Using a separate neural network to predict the best hyperparameters.
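A scikit-learn sketch tying together questions 31 through 33: the grid below holds hyperparameters (set before training), and the exhaustive search picks the best combination; the grid values are illustrative:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
grid = {"n_neighbors": [1, 3, 5, 7, 9], "weights": ["uniform", "distance"]}
search = GridSearchCV(KNeighborsClassifier(), grid, cv=5)  # tries all 10 combinations
search.fit(X, y)
print(search.best_params_, search.best_score_)
```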
  • 34. "Early Stopping" is a form of regularization that works by:
A) Using a very small learning rate.
B) Stopping the training after a fixed, very short number of epochs.
C) Starting the training process later than scheduled.
D) Halting the training process when performance on a validation set starts to degrade, indicating the onset of overfitting.
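A minimal Keras sketch of answer D, assuming a compiled Keras model named `model`; `X_train` and `y_train` are placeholders:

```python
import tensorflow as tf

# Stop when validation loss has not improved for 3 consecutive epochs,
# and roll back to the best weights observed.
stopper = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=3, restore_best_weights=True)
# model.fit(X_train, y_train, validation_split=0.2, epochs=100, callbacks=[stopper])
```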
  • 35. A "Vanilla" neural network, also known as a Multilayer Perceptron (MLP), is typically composed of:
A) A single layer of neurons.
B) Convolutional layers for processing images.
C) Fully connected layers, where each neuron in one layer is connected to every neuron in the next layer.
D) Recurrent layers for processing sequences.
  • 36. The "softmax" activation function is commonly used in the output layer of a neural network for:
A) Regression problems.
B) Binary classification problems.
C) Multi-class classification problems, as it outputs a probability distribution over the possible classes.
D) Unsupervised learning problems.
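A NumPy sketch of answer C, showing that softmax turns raw scores into a probability distribution:

```python
import numpy as np

def softmax(logits):
    z = logits - np.max(logits)   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()            # entries are non-negative and sum to 1

print(softmax(np.array([2.0, 1.0, 0.1])))  # [0.659, 0.242, 0.099]: a distribution over 3 classes
```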
  • 37. The "Adam" optimizer is an adaptive learning rate algorithm that is often preferred because it:
A) Does not require any hyperparameters.
B) Combines the advantages of two other extensions of stochastic gradient descent, AdaGrad and RMSProp.
C) Is guaranteed to find the global minimum for any function.
D) Is only used for unsupervised learning.
  • 38. "Batch Normalization" is a technique used to:
A) Increase the batch size during training.
B) Replace the need for an activation function.
C) Improve the stability and speed of neural network training by normalizing the inputs to each layer.
D) Normalize the entire dataset before feeding it into the network.
  • 39. The "confusion matrix" is a table that is used to describe the performance of a:
A) Clustering algorithm's group assignments.
B) Regression model's accuracy.
C) Dimensionality reduction technique's effectiveness.
D) Classification model on a set of test data for which the true values are known.
  • 40. In a confusion matrix, the "true positives" are the cases where:
A) The model incorrectly predicted the positive class.
B) The model correctly predicted the positive class.
C) The model correctly predicted the negative class.
D) The model incorrectly predicted the negative class.
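A scikit-learn sketch of questions 39 and 40, with made-up labels:

```python
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TP={tp} FP={fp} TN={tn} FN={fn}")  # TP=3 FP=1 TN=3 FN=1
# True positives: the model said "positive" and the true label was positive.
```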
  • 41. The problem of "imbalanced classes" occurs when:
A) The learning rate is set too high.
B) One class in the training data has significantly more examples than another, which can bias the model.
C) The features are not scaled properly.
D) The model is too complex for the data.
  • 42. A technique to address imbalanced classes is "SMOTE," which:
A) Ignores the minority class completely.
B) Deletes examples from the majority class at random.
C) Combines all classes into one.
D) Generates synthetic examples for the minority class to balance the dataset.
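A short sketch of answer D, assuming the third-party imbalanced-learn package (`imblearn`); the 95/5 synthetic split is illustrative:

```python
from collections import Counter

from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

# A 95%/5% class split mimics a real imbalance problem.
X, y = make_classification(n_samples=1000, weights=[0.95], random_state=0)
print("before:", Counter(y))

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print("after: ", Counter(y_res))  # minority class topped up with synthetic examples
```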
  • 43. "Reinforcement Learning" differs from supervised and unsupervised learning in that:
A) It is only used for clustering unlabeled data.
B) It is a simpler and less powerful approach.
C) It learns by interacting with an environment and receiving rewards or penalties for actions, without a labeled dataset.
D) It requires a fully labeled dataset for training.
  • 44. "Q-Learning" is a popular algorithm in reinforcement learning that learns:
A) A clustering of possible actions.
B) The principal components of a state space.
C) A policy that tells an agent what action to take under what circumstances by learning a value function.
D) A decision tree for classification.
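A pure-NumPy sketch of tabular Q-learning on a toy environment invented for illustration: a one-dimensional corridor where the agent earns reward only at the right end. The core line is the value-function update from answer C of question 44:

```python
import numpy as np

# Toy corridor: states 0..4, actions 0 = left, 1 = right; reward 1 for reaching state 4.
n_states, n_actions = 5, 2
alpha, gamma = 0.5, 0.9                  # learning rate and discount factor
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

for episode in range(200):
    s = 0
    while s != 4:
        a = int(rng.integers(n_actions))  # explore with random actions (off-policy)
        s_next = max(0, s - 1) if a == 0 else min(4, s + 1)
        r = 1.0 if s_next == 4 else 0.0
        # Core update: nudge Q(s, a) toward reward + discounted best future value.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1)[:4])  # learned greedy policy for states 0-3: [1 1 1 1] = "go right"
```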
  • 45. "Natural Language Processing" (NLP) often uses supervised learning for tasks like:
A) Reducing the dimensionality of word vectors.
B) Generating new, original text without any input.
C) Grouping similar news articles without labels.
D) Sentiment analysis, where text is classified as positive, negative, or neutral.
  • 46. "Word Embeddings" (like Word2Vec) are techniques that:
A) Are used only for image classification.
B) Are a type of clustering algorithm.
C) Represent words as dense vectors in a continuous space, capturing semantic meaning.
D) Represent words as simple one-hot encoded vectors.
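A tiny sketch of answer C, assuming the third-party gensim library; the three-sentence corpus is far too small for meaningful semantics and is purely illustrative:

```python
from gensim.models import Word2Vec

sentences = [["the", "cat", "sat"], ["the", "dog", "sat"], ["cats", "and", "dogs", "play"]]
model = Word2Vec(sentences, vector_size=50, min_count=1)

print(model.wv["cat"].shape)         # (50,): a dense vector, not a sparse one-hot
print(model.wv.most_similar("cat"))  # nearby vectors ~ related words (given enough data)
```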
  • 47. A "Generative Adversarial Network" (GAN) consists of two networks:
A) A Generator and a Discriminator, which are trained in opposition to each other.
B) A single, large Regression network.
C) An Encoder and a Decoder for compression.
D) Two identical Convolutional Neural Networks.
  • 48. The "Generator" in a GAN is responsible for:
A) Discriminating between real and fake data.
B) Reducing the dimensionality of the input.
C) Creating new, synthetic data that is indistinguishable from real data.
D) Classifying input images into categories.
  • 49. The "Discriminator" in a GAN is essentially a:
A) Regression model predicting a continuous value.
B) Clustering algorithm grouping similar images.
C) Dimensionality reduction technique.
D) Binary classifier that tries to correctly label data as real (from the dataset) or fake (from the generator).
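A skeletal Keras sketch of the two networks from questions 47 through 49, assuming TensorFlow; the layer sizes and 784-dimensional data are illustrative, and the training loop is only described in comments:

```python
import tensorflow as tf

# Generator: turns random noise into fake 784-dim samples.
generator = tf.keras.Sequential([
    tf.keras.Input(shape=(64,)),                       # noise vector
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(784, activation="sigmoid"),  # synthetic sample
])
# Discriminator: a binary classifier labelling samples real (1) or fake (0).
discriminator = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
# Training alternates between the two: the discriminator learns to separate real
# data from generator output, while the generator learns to fool it.
```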