COELE1-2
  • 1. "Mini-batch Gradient Descent" is often preferred because it:
A) Is guaranteed to converge faster than any other method.
B) Is only applicable to linear models.
C) Does not require a loss function.
D) Offers a balance between the efficiency of batch GD and the robustness of SGD.
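Illustration (question 1): a minimal NumPy sketch of mini-batch gradient descent on synthetic linear-regression data. The learning rate, batch size, and data are all invented for the example, not prescriptions.

    import numpy as np

    # Synthetic regression data (illustrative only).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 3))
    y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=1000)

    w = np.zeros(3)                  # model weights
    lr, batch_size = 0.1, 32         # illustrative hyperparameters

    for epoch in range(20):
        perm = rng.permutation(len(X))            # shuffle once per epoch
        for start in range(0, len(X), batch_size):
            idx = perm[start:start + batch_size]
            Xb, yb = X[idx], y[idx]
            # MSE gradient computed on the mini-batch only.
            grad = 2 * Xb.T @ (Xb @ w - yb) / len(idx)
            w -= lr * grad

    print(w)   # should approach [2.0, -1.0, 0.5]

Each update uses a small batch, so it is cheaper than full-batch GD per step but less noisy than single-example SGD.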
  • 2. "Transfer Learning" in deep learning involves:
A) Training a model from scratch on every new problem.
B) Using only unsupervised learning techniques.
C) Taking a model pre-trained on a large dataset (e.g., ImageNet) and fine-tuning it for a new, specific task with a smaller dataset.
D) Forgetting everything a model has learned.
  • 3. An "Autoencoder" is a type of neural network primarily used for:
A) Supervised classification of images.
B) Predicting continuous values in a regression task.
C) Reinforcement learning.
D) Unsupervised learning tasks like dimensionality reduction and data denoising.
  • 4. The architecture of a typical autoencoder consists of:
A) An encoder that compresses the input and a decoder that reconstructs the input from the compression.
B) Only a single layer of perceptrons.
C) A single output neuron with a linear activation.
D) A convolutional layer followed by an RNN layer.
  • 5. In the context of model evaluation for classification, "Accuracy" is defined as:
A) The proportion of total predictions that were correct.
B) The proportion of positive identifications that were actually correct.
C) The proportion of actual positives that were identified correctly.
D) The harmonic mean of precision and recall.
  • 6. "Precision" is an important metric when:
A) The cost of false negatives is high (e.g., in disease screening, where you don't want to miss a sick patient).
B) You are evaluating a regression model.
C) The cost of false positives is high (e.g., in spam detection, where you don't want to flag legitimate emails as spam).
D) You need a single metric that combines precision and recall.
  • 7. "Recall" is an important metric when:
A) You are evaluating a clustering model.
B) The cost of false positives is high (e.g., in spam detection).
C) You need a single metric that combines precision and recall.
D) The cost of false negatives is high (e.g., in disease screening, where you don't want to miss a sick patient).
  • 8. The "F1 Score" is:
A) The arithmetic mean of precision and recall.
B) A metric used exclusively for regression.
C) The harmonic mean of precision and recall, providing a single score that balances both concerns.
D) The difference between precision and recall.
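Illustration (questions 5 through 8): all four metrics computed from raw confusion-matrix counts; the counts below are made up for the example.

    # Hypothetical confusion-matrix counts.
    tp, fp, fn, tn = 80, 10, 20, 90

    accuracy  = (tp + tn) / (tp + fp + fn + tn)   # proportion of correct predictions
    precision = tp / (tp + fp)                    # of predicted positives, how many were right
    recall    = tp / (tp + fn)                    # of actual positives, how many were found
    f1        = 2 * precision * recall / (precision + recall)   # harmonic mean

    print(accuracy, precision, recall, f1)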
  • 9. For a regression model, the "Mean Squared Error" (MSE) measures:
A) The average of the squares of the errors between predicted and actual values.
B) The variance of the input features.
C) The accuracy of a classification model.
D) The total number of misclassified instances.
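Illustration (question 9): MSE in one line of NumPy, with invented predictions.

    import numpy as np

    y_true = np.array([3.0, -0.5, 2.0, 7.0])   # hypothetical actual values
    y_pred = np.array([2.5,  0.0, 2.0, 8.0])   # hypothetical predictions

    mse = np.mean((y_pred - y_true) ** 2)      # average of the squared errors
    print(mse)   # 0.375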
  • 10. The "ROC Curve" is a tool used to evaluate:
A) The architecture of a neural network.
B) The clustering quality of a K-means algorithm.
C) The loss of a regression model over time.
D) The performance of a binary classification model at various classification thresholds.
  • 11. "Area Under the ROC Curve" (AUC) provides an aggregate measure of performance across all possible classification thresholds. A perfect model has an AUC of:
A) 1.0.
B) -1.0.
C) 0.0.
D) 0.5.
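Illustration (questions 10 and 11): a sketch of the ROC curve and AUC using scikit-learn; the labels and scores are invented.

    from sklearn.metrics import roc_auc_score, roc_curve

    y_true  = [0, 0, 1, 1]             # hypothetical binary labels
    y_score = [0.1, 0.4, 0.35, 0.8]    # hypothetical predicted probabilities

    fpr, tpr, thresholds = roc_curve(y_true, y_score)  # points along the curve
    print(roc_auc_score(y_true, y_score))              # 0.75 here; 1.0 is perfect, 0.5 is chance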
  • 12. "K-fold Cross-Validation" is a technique used to:
A) Replace the need for a separate test set.
B) Obtain a more robust estimate of model performance by training and evaluating the model K times on different splits of the data.
C) Visualize high-dimensional data.
D) Increase the size of the training dataset.
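Illustration (question 12): 5-fold cross-validation with scikit-learn on a built-in dataset; each fold serves once as the held-out set.

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=1000)

    scores = cross_val_score(model, X, y, cv=5)   # trains and evaluates 5 times
    print(scores.mean(), scores.std())            # a more robust performance estimate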
  • 13. In the K-Nearest Neighbors (K-NN) algorithm for classification, the class of a new data point is determined by:
A) A random selection from the training set.
B) A single, pre-defined rule.
C) The majority vote among its K closest neighbors in the feature space.
D) The output of a linear function.
  • 14. The parameter 'K' in the K-NN algorithm:
A) Is always set to 1 for the best performance.
B) Controls the model's flexibility. A small K can lead to overfitting, while a large K can lead to underfitting.
C) Is the number of features in the dataset.
D) Is the learning rate for the algorithm.
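Illustration (questions 13 and 14): K-NN with different values of K; small K tends to fit noise (overfitting), large K over-smooths (underfitting). The dataset and values of K are illustrative.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    for k in (1, 5, 50):
        knn = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
        print(k, knn.score(X_te, y_te))   # each point is classified by majority vote of its K neighbors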
  • 15. "Principal Component Analysis" (PCA) works by:
A) Classifying data using a decision boundary.
B) Clustering data into K groups.
C) Predicting a target variable using linear combinations of features.
D) Finding new, uncorrelated dimensions (principal components) that capture the maximum variance in the data.
  • 16. The first principal component in PCA is the direction in the feature space that:
A) Is perpendicular to all other components.
B) Is randomly oriented.
C) Captures the greatest possible variance in the data.
D) Captures the least possible variance in the data.
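Illustration (questions 15 and 16): PCA with scikit-learn, projecting down to the two highest-variance directions.

    from sklearn.datasets import load_iris
    from sklearn.decomposition import PCA

    X, _ = load_iris(return_X_y=True)

    pca = PCA(n_components=2)       # keep the two highest-variance components
    X_2d = pca.fit_transform(X)

    # The first component captures the most variance, the second the next most,
    # and the components are mutually uncorrelated.
    print(pca.explained_variance_ratio_)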
  • 17. "K-Means Clustering" aims to partition data into K clusters such that:
A) The within-cluster variance is minimized.
B) The data is perfectly classified into known labels.
C) The data is projected onto a single dimension.
D) The between-cluster variance is minimized.
  • 18. The "Elbow Method" is a heuristic used in K-Means to:
A) Initialize the cluster centroids.
B) Determine the learning rate for gradient descent.
C) Evaluate the accuracy of a classification model.
D) Help choose the optimal number of clusters K by looking for a "bend" in the plot of within-cluster variance.
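Illustration (questions 17 and 18): the elbow method with K-Means; printing inertia (within-cluster variance) per K stands in for plotting it. The blob data has 4 true centers, so the "bend" appears around K=4.

    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs

    X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

    for k in range(1, 9):
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
        print(k, km.inertia_)   # drops sharply until K=4, then flattens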
  • 19. "Naive Bayes" classifiers are called "naive" because they:
A) Make a strong (naive) assumption that all features are conditionally independent given the class label.
B) Do not use probability in their predictions.
C) Always have the lowest possible accuracy.
D) Are very simple and cannot handle complex data.
  • 20. "Logistic Regression" is fundamentally a:
A) Classification algorithm that models the probability of a binary outcome using a logistic function.
B) Regression algorithm for predicting continuous values.
C) Clustering algorithm for grouping unlabeled data.
D) Dimensionality reduction technique.
  • 21. The output of a logistic regression model is a value between 0 and 1, which represents the:
A) Probability that the input belongs to a particular class.
B) Number of features in the input.
C) Exact value of the target variable.
D) Distance to the decision boundary.
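Illustration (questions 20 and 21): the logistic function squashes a linear score into (0, 1), read as a class probability. The weights and input here are hypothetical.

    import numpy as np

    def sigmoid(z):
        """Logistic function: maps any real number into (0, 1)."""
        return 1.0 / (1.0 + np.exp(-z))

    w, b = np.array([1.5, -2.0]), 0.3   # hypothetical learned weights and bias
    x = np.array([0.8, 0.2])            # one input point

    p = sigmoid(w @ x + b)   # probability the input belongs to the positive class
    print(p)                 # classify as positive if p >= 0.5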
  • 22. A "Random Forest" is an ensemble method that combines multiple:
A) Support Vector Machines.
B) Linear Regression models.
C) K-NN models.
D) Decision Trees to reduce overfitting and improve generalization.
  • 23. The "bagging" technique in a Random Forest helps to:
A) Perform feature extraction like PCA.
B) Increase the speed of a single decision tree.
C) Reduce variance by training individual trees on random subsets of the data and averaging their results.
D) Reduce bias by making trees more complex.
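Illustration (questions 22 and 23): a Random Forest in scikit-learn; each tree is trained on a bootstrap sample of the rows (bagging) with a random feature subset per split, and predictions are combined by voting.

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = load_iris(return_X_y=True)

    forest = RandomForestClassifier(n_estimators=100, random_state=0)
    print(cross_val_score(forest, X, y, cv=5).mean())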
  • 24. "Gradient Boosting" machines (e.g., XGBoost) are ensemble methods that:
A) Do not require any parameter tuning.
B) Build models sequentially, where each new model corrects the errors of the previous ones.
C) Build all models independently and average them.
D) Are exclusively used for unsupervised learning.
  • 25. The term "feature engineering" refers to:
A) The process of deleting all features from a dataset.
B) The process of using domain knowledge to create new input features that make machine learning algorithms work better.
C) The automatic learning of features by a deep neural network.
D) The evaluation of a model's final performance.
  • 26. "One-hot encoding" is a preprocessing technique used to:
A) Normalize continuous numerical features.
B) Reduce the dimensionality of image data.
C) Cluster similar data points together.
D) Convert categorical variables into a binary (0/1) format that can be provided to ML algorithms.
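Illustration (question 26): one-hot encoding with scikit-learn. The sparse_output argument assumes a recent scikit-learn (1.2 or later); older versions used sparse instead.

    from sklearn.preprocessing import OneHotEncoder

    colors = [["red"], ["green"], ["blue"], ["green"]]   # a categorical feature

    enc = OneHotEncoder(sparse_output=False)   # sparse_output needs sklearn >= 1.2
    print(enc.fit_transform(colors))
    # Each category becomes its own 0/1 column (sorted alphabetically):
    # blue -> [1, 0, 0], green -> [0, 1, 0], red -> [0, 0, 1]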
  • 27. "Feature scaling" (e.g., normalization or standardization) is often crucial for algorithms that:
A) Are used for clustering only.
B) Are based on distance calculations or gradient descent, such as SVM and Neural Networks.
C) Are used for association rule learning.
D) Are based on tree-based models like Decision Trees and Random Forests.
  • 28. The "curse of dimensionality" refers to the problem that:
A) There are never enough features to train a good model.
B) Dimensionality reduction always improves model performance.
C) All datasets should have as many features as possible.
D) As the number of features grows, the data becomes increasingly sparse, making it harder to find meaningful patterns.
  • 29. "Regularization" is a technique used to:
A) Speed up the training time of a model.
B) Prevent overfitting by adding a penalty term to the loss function that discourages complex models.
C) Make models more complex to fit the training data better.
D) Increase the variance of a model.
  • 30. L1 Regularization (Lasso) can often lead to:
A) All features having non-zero weights.
B) Sparse models where the weights of less important features are driven to zero, effectively performing feature selection.
C) A decrease in model interpretability.
D) Increased model complexity.
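Illustration (questions 29 and 30): L1 regularization on synthetic data where only 2 of 10 features matter; the alpha value is an illustrative choice.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=200)

    lasso = Lasso(alpha=0.1).fit(X, y)   # L1 penalty added to the loss
    print(lasso.coef_)   # weights of the 8 irrelevant features are driven to (near) zero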
  • 31. "Hyperparameters" are:
A) Configuration settings for the learning algorithm that are not learned from the data and must be set prior to training (e.g., learning rate, K in K-NN).
B) The input features of the model.
C) The parameters that the model learns during training (e.g., weights in a neural network).
D) The output predictions of the model.
  • 32. The process of "Hyperparameter Tuning" involves:
A) Searching for the best combination of hyperparameters that results in the best model performance.
B) Cleaning the raw data.
C) Deploying the final model.
D) Training the model's internal weights.
  • 33. "Grid Search" is a common method for hyperparameter tuning that involves:
A) Randomly sampling hyperparameter combinations from a distribution.
B) Exhaustively searching over a specified set of hyperparameter values.
C) Ignoring hyperparameters altogether.
D) Using a separate neural network to predict the best hyperparameters.
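Illustration (questions 31 through 33): grid search over K-NN hyperparameters with scikit-learn; every combination in the grid is tried and scored by cross-validation. The grid values are illustrative.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)

    param_grid = {"n_neighbors": [1, 3, 5, 7, 9], "weights": ["uniform", "distance"]}
    search = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5)
    search.fit(X, y)
    print(search.best_params_, search.best_score_)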
  • 34. "Early Stopping" is a form of regularization that works by:
A) Halting the training process when performance on a validation set starts to degrade, indicating the onset of overfitting.
B) Stopping the training after a fixed, very short number of epochs.
C) Starting the training process later than scheduled.
D) Using a very small learning rate.
  • 35. A "Vanilla" neural network, also known as a Multilayer Perceptron (MLP), is typically composed of:
A) Convolutional layers for processing images.
B) Recurrent layers for processing sequences.
C) A single layer of neurons.
D) Fully connected layers, where each neuron in one layer is connected to every neuron in the next layer.
  • 36. The "softmax" activation function is commonly used in the output layer of a neural network for:
A) Unsupervised learning problems.
B) Regression problems.
C) Binary classification problems.
D) Multi-class classification problems, as it outputs a probability distribution over the possible classes.
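Illustration (question 36): softmax turns raw class scores into a probability distribution.

    import numpy as np

    def softmax(logits):
        """Turn raw scores into a probability distribution over classes."""
        z = logits - np.max(logits)   # subtract the max for numerical stability
        exp = np.exp(z)
        return exp / exp.sum()

    print(softmax(np.array([2.0, 1.0, 0.1])))   # sums to 1; largest logit gets the highest probability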
  • 37. The "Adam" optimizer is an adaptive learning rate algorithm that is often preferred because it:
A) Does not require any hyperparameters.
B) Is guaranteed to find the global minimum for any function.
C) Combines the advantages of two other extensions of stochastic gradient descent, AdaGrad and RMSProp.
D) Is only used for unsupervised learning.
  • 38. "Batch Normalization" is a technique used to:
A) Improve the stability and speed of neural network training by normalizing the inputs to each layer.
B) Increase the batch size during training.
C) Replace the need for an activation function.
D) Normalize the entire dataset before feeding it into the network.
  • 39. The "confusion matrix" is a table that is used to describe the performance of a:
A) Clustering algorithm's group assignments.
B) Dimensionality reduction technique's effectiveness.
C) Regression model's accuracy.
D) Classification model on a set of test data for which the true values are known.
  • 40. In a confusion matrix, the "true positives" are the cases where:
A) The model incorrectly predicted the negative class.
B) The model correctly predicted the negative class.
C) The model incorrectly predicted the positive class.
D) The model correctly predicted the positive class.
  • 41. The problem of "imbalanced classes" occurs when:
A) The features are not scaled properly.
B) The model is too complex for the data.
C) One class in the training data has significantly more examples than another, which can bias the model.
D) The learning rate is set too high.
  • 42. A technique to address imbalanced classes is "SMOTE," which:
A) Ignores the minority class completely.
B) Combines all classes into one.
C) Generates synthetic examples for the minority class to balance the dataset.
D) Deletes examples from the majority class at random.
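Illustration (questions 41 and 42): SMOTE via the imbalanced-learn package (a separate install from scikit-learn); the 90/10 class split is invented for the example.

    from collections import Counter
    from imblearn.over_sampling import SMOTE
    from sklearn.datasets import make_classification

    X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
    print("before:", Counter(y))     # roughly 900 vs 100

    # SMOTE interpolates between minority-class neighbors to create synthetic examples.
    X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
    print("after: ", Counter(y_res)) # classes balanced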
  • 43. "Reinforcement Learning" differs from supervised and unsupervised learning in that:
A) It requires a fully labeled dataset for training.
B) It is only used for clustering unlabeled data.
C) It is a simpler and less powerful approach.
D) It learns by interacting with an environment and receiving rewards or penalties for actions, without a labeled dataset.
  • 44. "Q-Learning" is a popular algorithm in reinforcement learning that learns:
A) A decision tree for classification.
B) A clustering of possible actions.
C) A policy that tells an agent what action to take under what circumstances by learning a value function.
D) The principal components of a state space.
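Illustration (questions 43 and 44): the tabular Q-learning update rule, with the environment abstracted away; the state/action sizes, learning rate, and discount factor are illustrative.

    import numpy as np

    n_states, n_actions = 5, 2
    Q = np.zeros((n_states, n_actions))
    alpha, gamma = 0.1, 0.99    # learning rate and discount factor

    def q_update(s, a, reward, s_next):
        """One Q-learning step: nudge Q(s, a) toward reward + gamma * max_a' Q(s_next, a')."""
        target = reward + gamma * Q[s_next].max()
        Q[s, a] += alpha * (target - Q[s, a])

    q_update(s=0, a=1, reward=1.0, s_next=2)
    print(Q[0, 1])   # 0.1: the value estimate moved toward the observed return

The greedy policy "take the action with the highest Q-value in the current state" is what the learned value function yields.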
  • 45. "Natural Language Processing" (NLP) often uses supervised learning for tasks like:
A) Reducing the dimensionality of word vectors.
B) Grouping similar news articles without labels.
C) Sentiment analysis, where text is classified as positive, negative, or neutral.
D) Generating new, original text without any input.
  • 46. "Word Embeddings" (like Word2Vec) are techniques that:
A) Represent words as simple one-hot encoded vectors.
B) Are a type of clustering algorithm.
C) Are used only for image classification.
D) Represent words as dense vectors in a continuous space, capturing semantic meaning.
  • 47. A "Generative Adversarial Network" (GAN) consists of two networks:
A) An Encoder and a Decoder for compression.
B) A Generator and a Discriminator, which are trained in opposition to each other.
C) Two identical Convolutional Neural Networks.
D) A single, large Regression network.
  • 48. The "Generator" in a GAN is responsible for:
A) Reducing the dimensionality of the input.
B) Classifying input images into categories.
C) Creating new, synthetic data that is indistinguishable from real data.
D) Discriminating between real and fake data.
  • 49. The "Discriminator" in a GAN is essentially a:
A) Dimensionality reduction technique.
B) Clustering algorithm grouping similar images.
C) Regression model predicting a continuous value.
D) Binary classifier that tries to correctly label data as real (from the dataset) or fake (from the generator).