COELE1-2
  • 1. "Mini-batch Gradient Descent" is often preferred because it:
A) Offers a balance between the efficiency of batch GD and the robustness of SGD.
B) Does not require a loss function.
C) Is guaranteed to converge faster than any other method.
D) Is only applicable to linear models.
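For question 1, a minimal NumPy sketch of the mini-batch update loop; the data, learning rate, and batch size are made-up illustration values, not a prescribed recipe:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 3))                    # made-up feature matrix
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=1000)

    w = np.zeros(3)                                   # model weights
    lr, batch_size = 0.1, 32                          # hypothetical hyperparameters

    for epoch in range(20):
        idx = rng.permutation(len(X))                 # shuffle each epoch
        for start in range(0, len(X), batch_size):
            b = idx[start:start + batch_size]
            # MSE gradient computed on the mini-batch only
            grad = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)
            w -= lr * grad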
  • 2. "Transfer Learning" in deep learning involves:
A) Training a model from scratch on every new problem.
B) Forgetting everything a model has learned.
C) Taking a model pre-trained on a large dataset (e.g., ImageNet) and fine-tuning it for a new, specific task with a smaller dataset.
D) Using only unsupervised learning techniques.
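For question 2, a hedged sketch of fine-tuning, assuming PyTorch/torchvision and their pre-trained ResNet-18 weights; the 5-class output size is hypothetical:

    import torch.nn as nn
    import torchvision

    # Load a ResNet-18 pre-trained on ImageNet.
    model = torchvision.models.resnet18(weights="IMAGENET1K_V1")

    # Freeze the pre-trained feature extractor.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the final layer for a hypothetical 5-class task; only it is trained.
    model.fc = nn.Linear(model.fc.in_features, 5)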
  • 3. An "Autoencoder" is a type of neural network primarily used for:
A) Supervised classification of images.
B) Unsupervised learning tasks like dimensionality reduction and data denoising.
C) Predicting continuous values in a regression task.
D) Reinforcement learning.
  • 4. The architecture of a typical autoencoder consists of:
A) A single output neuron with a linear activation.
B) Only a single layer of perceptrons.
C) A convolutional layer followed by an RNN layer.
D) An encoder that compresses the input and a decoder that reconstructs the input from the compression.
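For questions 3 and 4, a minimal PyTorch sketch of the encoder/decoder structure, assuming 784-dimensional inputs (e.g., flattened 28x28 images) and a hypothetical 8-dimensional code:

    import torch.nn as nn

    # Encoder compresses the 784-dim input to an 8-dim code; decoder reconstructs it.
    encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 8))
    decoder = nn.Sequential(nn.Linear(8, 128), nn.ReLU(), nn.Linear(128, 784), nn.Sigmoid())
    autoencoder = nn.Sequential(encoder, decoder)
    # Trained by minimizing reconstruction loss, e.g. nn.MSELoss()(autoencoder(x), x).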
  • 5. In the context of model evaluation for classification, "Accuracy" is defined as:
A) The harmonic mean of precision and recall.
B) The proportion of positive identifications that were actually correct.
C) The proportion of actual positives that were identified correctly.
D) The proportion of total predictions that were correct.
  • 6. "Precision" is an important metric when:
A) You need a single metric that combines precision and recall.
B) The cost of false positives is high (e.g., in spam detection, where you don't want to flag legitimate emails as spam).
C) The cost of false negatives is high (e.g., in disease screening, where you don't want to miss a sick patient).
D) You are evaluating a regression model.
  • 7. "Recall" is an important metric when:
A) You need a single metric that combines precision and recall.
B) The cost of false positives is high (e.g., in spam detection).
C) The cost of false negatives is high (e.g., in disease screening, where you don't want to miss a sick patient).
D) You are evaluating a clustering model.
  • 8. The "F1 Score" is:
A) The difference between precision and recall.
B) A metric used exclusively for regression.
C) The arithmetic mean of precision and recall.
D) The harmonic mean of precision and recall, providing a single score that balances both concerns.
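Questions 5 through 8 all reduce to arithmetic on confusion-matrix counts; a short Python illustration with made-up counts:

    # Made-up confusion-matrix counts for illustration.
    tp, fp, fn, tn = 40, 10, 5, 45

    accuracy  = (tp + tn) / (tp + fp + fn + tn)       # 0.85: share of all predictions that are correct
    precision = tp / (tp + fp)                        # 0.80: share of predicted positives that are correct
    recall    = tp / (tp + fn)                        # ~0.889: share of actual positives that are found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean, ~0.842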
  • 9. For a regression model, the "Mean Squared Error" (MSE) measures:
A) The average of the squares of the errors between predicted and actual values.
B) The accuracy of a classification model.
C) The variance of the input features.
D) The total number of misclassified instances.
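For question 9, MSE in one NumPy line; the arrays are made-up illustration values:

    import numpy as np

    y_true = np.array([3.0, 5.0, 2.5])     # made-up actual values
    y_pred = np.array([2.5, 5.0, 3.0])     # made-up predictions
    mse = np.mean((y_true - y_pred) ** 2)  # (0.25 + 0 + 0.25) / 3 ~= 0.167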
  • 10. The "ROC Curve" is a tool used to evaluate:
A) The clustering quality of a K-means algorithm.
B) The architecture of a neural network.
C) The performance of a binary classification model at various classification thresholds.
D) The loss of a regression model over time.
  • 11. "Area Under the ROC Curve" (AUC) provides an aggregate measure of performance across all possible classification thresholds. A perfect model has an AUC of:
A) -1.0.
B) 0.0.
C) 1.0.
D) 0.5.
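For questions 10 and 11, a small sketch using scikit-learn's ROC utilities; labels and scores are made-up:

    from sklearn.metrics import roc_curve, roc_auc_score

    y_true  = [0, 0, 1, 1]                  # made-up true labels
    y_score = [0.1, 0.4, 0.35, 0.8]         # made-up predicted probabilities

    fpr, tpr, thresholds = roc_curve(y_true, y_score)  # one (FPR, TPR) point per threshold
    auc = roc_auc_score(y_true, y_score)               # 0.75 here; 1.0 for a perfect model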
  • 12. "K-fold Cross-Validation" is a technique used to:
A) Replace the need for a separate test set.
B) Increase the size of the training dataset.
C) Visualize high-dimensional data.
D) Obtain a more robust estimate of model performance by training and evaluating the model K times on different splits of the data.
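For question 12, a sketch of 5-fold cross-validation with scikit-learn; the iris dataset and logistic-regression model are arbitrary illustration choices:

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = load_iris(return_X_y=True)
    # Train and evaluate 5 times, each fold serving once as the held-out set.
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
    print(scores.mean(), scores.std())  # a more robust estimate than one split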
  • 13. In the K-Nearest Neighbors (K-NN) algorithm for classification, the class of a new data point is determined by:
A) The majority vote among its K closest neighbors in the feature space.
B) The output of a linear function.
C) A random selection from the training set.
D) A single, pre-defined rule.
  • 14. The parameter 'K' in the K-NN algorithm:
A) Is the number of features in the dataset.
B) Controls the model's flexibility. A small K can lead to overfitting, while a large K can lead to underfitting.
C) Is always set to 1 for the best performance.
D) Is the learning rate for the algorithm.
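For questions 13 and 14, a scikit-learn K-NN sketch contrasting a small and a larger K; the dataset and split are arbitrary illustration choices:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # K controls flexibility: a small K risks overfitting, a large K underfitting.
    for k in (1, 15):
        knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
        print(k, knn.score(X_test, y_test))  # class = majority vote of the k nearest neighbors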
  • 15. "Principal Component Analysis" (PCA) works by:
A) Finding new, uncorrelated dimensions (principal components) that capture the maximum variance in the data.
B) Clustering data into K groups.
C) Predicting a target variable using linear combinations of features.
D) Classifying data using a decision boundary.
  • 16. The first principal component in PCA is the direction in the feature space that:
A) Is randomly oriented.
B) Is perpendicular to all other components.
C) Captures the least possible variance in the data.
D) Captures the greatest possible variance in the data.
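For questions 15 and 16, a scikit-learn PCA sketch; explained_variance_ratio_ shows the first component capturing the most variance (the iris data is an arbitrary choice):

    from sklearn.datasets import load_iris
    from sklearn.decomposition import PCA

    X, _ = load_iris(return_X_y=True)
    pca = PCA(n_components=2).fit(X)
    X_2d = pca.transform(X)                # projection onto uncorrelated components
    print(pca.explained_variance_ratio_)   # first entry is the largest share of variance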
  • 17. "K-Means Clustering" aims to partition data into K clusters such that:
A) The within-cluster variance is minimized.
B) The data is projected onto a single dimension.
C) The between-cluster variance is minimized.
D) The data is perfectly classified into known labels.
  • 18. The "Elbow Method" is a heuristic used in K-Means to:
A) Evaluate the accuracy of a classification model.
B) Help choose the optimal number of clusters K by looking for a "bend" in the plot of within-cluster variance.
C) Determine the learning rate for gradient descent.
D) Initialize the cluster centroids.
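For questions 17 and 18, a sketch of the elbow method using scikit-learn's KMeans on synthetic blobs; the range of K values is arbitrary:

    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs

    X, _ = make_blobs(n_samples=300, centers=4, random_state=0)  # synthetic data

    # inertia_ is the within-cluster sum of squares; look for the "bend" as K grows.
    for k in range(1, 8):
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
        print(k, round(km.inertia_, 1))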
  • 19. "Naive Bayes" classifiers are called "naive" because they:
A) Do not use probability in their predictions.
B) Always have the lowest possible accuracy.
C) Are very simple and cannot handle complex data.
D) Make a strong (naive) assumption that all features are conditionally independent given the class label.
  • 20. "Logistic Regression" is fundamentally a:
A) Clustering algorithm for grouping unlabeled data.
B) Regression algorithm for predicting continuous values.
C) Dimensionality reduction technique.
D) Classification algorithm that models the probability of a binary outcome using a logistic function.
  • 21. The output of a logistic regression model is a value between 0 and 1, which represents the:
A) Distance to the decision boundary.
B) Number of features in the input.
C) Exact value of the target variable.
D) Probability that the input belongs to a particular class.
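For questions 20 and 21, the logistic (sigmoid) function in NumPy, evaluated on a few made-up scores:

    import numpy as np

    def sigmoid(z):
        """Logistic function: squashes any real number into (0, 1)."""
        return 1.0 / (1.0 + np.exp(-z))

    # A linear score of 0 sits on the decision boundary: probability 0.5.
    print(sigmoid(np.array([-4.0, 0.0, 4.0])))  # ~[0.018, 0.5, 0.982]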
  • 22. A "Random Forest" is an ensemble method that combines multiple:
A) Decision Trees to reduce overfitting and improve generalization.
B) Linear Regression models.
C) K-NN models.
D) Support Vector Machines.
  • 23. The "bagging" technique in a Random Forest helps to:
A) Reduce bias by making trees more complex.
B) Reduce variance by training individual trees on random subsets of the data and averaging their results.
C) Increase the speed of a single decision tree.
D) Perform feature extraction like PCA.
  • 24. "Gradient Boosting" machines (e.g., XGBoost) are ensemble methods that:
A) Build all models independently and average them.
B) Build models sequentially, where each new model corrects the errors of the previous ones.
C) Are exclusively used for unsupervised learning.
D) Do not require any parameter tuning.
  • 25. The term "feature engineering" refers to:
A) The evaluation of a model's final performance.
B) The process of using domain knowledge to create new input features that make machine learning algorithms work better.
C) The process of deleting all features from a dataset.
D) The automatic learning of features by a deep neural network.
  • 26. "One-hot encoding" is a preprocessing technique used to:
A) Cluster similar data points together.
B) Convert categorical variables into a binary (0/1) format that can be provided to ML algorithms.
C) Reduce the dimensionality of image data.
D) Normalize continuous numerical features.
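For question 26, a one-hot encoding sketch assuming pandas; the "color" column is made-up:

    import pandas as pd

    df = pd.DataFrame({"color": ["red", "green", "blue", "green"]})  # made-up column
    print(pd.get_dummies(df, columns=["color"]))
    # -> color_blue, color_green, color_red columns of 0/1 indicators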
  • 27. "Feature scaling" (e.g., normalization or standardization) is often crucial for algorithms that:
A) Are used for association rule learning.
B) Are based on distance calculations or gradient descent, such as SVM and Neural Networks.
C) Are used for clustering only.
D) Are based on tree-based models like Decision Trees and Random Forests.
  • 28. The "curse of dimensionality" refers to the problem that:
A) There are never enough features to train a good model.
B) Dimensionality reduction always improves model performance.
C) All datasets should have as many features as possible.
D) As the number of features grows, the data becomes increasingly sparse, making it harder to find meaningful patterns.
  • 29. "Regularization" is a technique used to:
A) Prevent overfitting by adding a penalty term to the loss function that discourages complex models.
B) Make models more complex to fit the training data better.
C) Speed up the training time of a model.
D) Increase the variance of a model.
  • 30. L1 Regularization (Lasso) can often lead to:
A) A decrease in model interpretability.
B) Sparse models where the weights of less important features are driven to zero, effectively performing feature selection.
C) Increased model complexity.
D) All features having non-zero weights.
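For question 30, a scikit-learn Lasso sketch on synthetic data where only two of ten features matter; alpha=0.1 is an arbitrary penalty strength:

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    y = 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=200)  # only 2 features matter

    lasso = Lasso(alpha=0.1).fit(X, y)
    print(lasso.coef_)  # most weights driven exactly to zero (implicit feature selection)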
  • 31. "Hyperparameters" are:
A) Configuration settings for the learning algorithm that are not learned from the data and must be set prior to training (e.g., learning rate, K in K-NN).
B) The parameters that the model learns during training (e.g., weights in a neural network).
C) The output predictions of the model.
D) The input features of the model.
  • 32. The process of "Hyperparameter Tuning" involves:
A) Training the model's internal weights.
B) Cleaning the raw data.
C) Searching for the best combination of hyperparameters that results in the best model performance.
D) Deploying the final model.
  • 33. "Grid Search" is a common method for hyperparameter tuning that involves:
A) Randomly sampling hyperparameter combinations from a distribution.
B) Using a separate neural network to predict the best hyperparameters.
C) Exhaustively searching over a specified set of hyperparameter values.
D) Ignoring hyperparameters altogether.
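For question 33, a scikit-learn GridSearchCV sketch over a hypothetical K-NN grid:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)
    grid = {"n_neighbors": [1, 3, 5, 7], "weights": ["uniform", "distance"]}

    # Exhaustively tries all 8 combinations, scoring each with cross-validation.
    search = GridSearchCV(KNeighborsClassifier(), grid, cv=5).fit(X, y)
    print(search.best_params_, search.best_score_)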
  • 34. "Early Stopping" is a form of regularization that works by:
A) Stopping the training after a fixed, very short number of epochs.
B) Halting the training process when performance on a validation set starts to degrade, indicating the onset of overfitting.
C) Using a very small learning rate.
D) Starting the training process later than scheduled.
  • 35. A "Vanilla" neural network, also known as a Multilayer Perceptron (MLP), is typically composed of:
A) Convolutional layers for processing images.
B) A single layer of neurons.
C) Recurrent layers for processing sequences.
D) Fully connected layers, where each neuron in one layer is connected to every neuron in the next layer.
  • 36. The "softmax" activation function is commonly used in the output layer of a neural network for:
A) Unsupervised learning problems.
B) Regression problems.
C) Multi-class classification problems, as it outputs a probability distribution over the possible classes.
D) Binary classification problems.
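For question 36, a NumPy softmax sketch; the input logits are made-up:

    import numpy as np

    def softmax(z):
        """Exponentiate and normalize so the outputs sum to 1."""
        e = np.exp(z - np.max(z))  # subtract the max for numerical stability
        return e / e.sum()

    # ~[0.659, 0.242, 0.099]: a probability distribution over three classes.
    print(softmax(np.array([2.0, 1.0, 0.1])))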
  • 37. The "Adam" optimizer is an adaptive learning rate algorithm that is often preferred because it:
A) Combines the advantages of two other extensions of stochastic gradient descent, AdaGrad and RMSProp.
B) Is guaranteed to find the global minimum for any function.
C) Does not require any hyperparameters.
D) Is only used for unsupervised learning.
  • 38. "Batch Normalization" is a technique used to:
A) Improve the stability and speed of neural network training by normalizing the inputs to each layer.
B) Replace the need for an activation function.
C) Normalize the entire dataset before feeding it into the network.
D) Increase the batch size during training.
  • 39. The "confusion matrix" is a table that is used to describe the performance of a:
A) Clustering algorithm's group assignments.
B) Regression model's accuracy.
C) Classification model on a set of test data for which the true values are known.
D) Dimensionality reduction technique's effectiveness.
  • 40. In a confusion matrix, the "true positives" are the cases where:
A) The model correctly predicted the negative class.
B) The model incorrectly predicted the negative class.
C) The model correctly predicted the positive class.
D) The model incorrectly predicted the positive class.
  • 41. The problem of "imbalanced classes" occurs when:
A) The model is too complex for the data.
B) The learning rate is set too high.
C) The features are not scaled properly.
D) One class in the training data has significantly more examples than another, which can bias the model.
  • 42. A technique to address imbalanced classes is "SMOTE," which:
A) Generates synthetic examples for the minority class to balance the dataset.
B) Ignores the minority class completely.
C) Combines all classes into one.
D) Deletes examples from the majority class at random.
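For question 42, a hedged sketch assuming the third-party imbalanced-learn package; the class weights and sample counts are made-up:

    from collections import Counter
    from imblearn.over_sampling import SMOTE
    from sklearn.datasets import make_classification

    # Synthetic roughly 9:1 imbalanced dataset.
    X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
    print(Counter(y))                      # e.g. {0: ~900, 1: ~100}

    X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
    print(Counter(y_res))                  # balanced via synthetic minority examples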
  • 43. "Reinforcement Learning" differs from supervised and unsupervised learning in that:
A) It requires a fully labeled dataset for training.
B) It learns by interacting with an environment and receiving rewards or penalties for actions, without a labeled dataset.
C) It is a simpler and less powerful approach.
D) It is only used for clustering unlabeled data.
  • 44. "Q-Learning" is a popular algorithm in reinforcement learning that learns:
A) A clustering of possible actions.
B) A decision tree for classification.
C) The principal components of a state space.
D) A policy that tells an agent what action to take under what circumstances by learning a value function.
  • 45. "Natural Language Processing" (NLP) often uses supervised learning for tasks like:
A) Reducing the dimensionality of word vectors.
B) Generating new, original text without any input.
C) Sentiment analysis, where text is classified as positive, negative, or neutral.
D) Grouping similar news articles without labels.
  • 46. "Word Embeddings" (like Word2Vec) are techniques that:
A) Are used only for image classification.
B) Are a type of clustering algorithm.
C) Represent words as simple one-hot encoded vectors.
D) Represent words as dense vectors in a continuous space, capturing semantic meaning.
  • 47. A "Generative Adversarial Network" (GAN) consists of two networks:
A) Two identical Convolutional Neural Networks.
B) An Encoder and a Decoder for compression.
C) A single, large Regression network.
D) A Generator and a Discriminator, which are trained in opposition to each other.
  • 48. The "Generator" in a GAN is responsible for:
A) Creating new, synthetic data that is indistinguishable from real data.
B) Classifying input images into categories.
C) Reducing the dimensionality of the input.
D) Discriminating between real and fake data.
  • 49. The "Discriminator" in a GAN is essentially a:
A) Clustering algorithm grouping similar images.
B) Dimensionality reduction technique.
C) Binary classifier that tries to correctly label data as real (from the dataset) or fake (from the generator).
D) Regression model predicting a continuous value.