ThatQuiz Test Library
COELE1-2
Contributed by: Billo
  • 1. "Mini-batch Gradient Descent" is often preferred because it:
A) Offers a balance between the efficiency of batch GD and the robustness of SGD.
B) Is guaranteed to converge faster than any other method.
C) Is only applicable to linear models.
D) Does not require a loss function.
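To make question 1 concrete, here is a minimal Python sketch of mini-batch gradient descent on a toy one-variable linear regression; the data, batch size, and learning rate are arbitrary illustration choices, not part of the quiz.

```python
# Mini-batch gradient descent: cheaper per step than full-batch GD,
# less noisy than single-sample SGD.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=200)

w, lr, batch_size = 0.0, 0.1, 32
for epoch in range(20):
    idx = rng.permutation(len(X))              # reshuffle each epoch
    for start in range(0, len(X), batch_size):
        b = idx[start:start + batch_size]      # one mini-batch of indices
        pred = w * X[b, 0]
        grad = 2 * np.mean((pred - y[b]) * X[b, 0])  # dMSE/dw on the batch only
        w -= lr * grad
print(round(w, 2))  # converges toward the true slope, ~3.0
```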
  • 2. "Transfer Learning" in deep learning involves:
A) Using only unsupervised learning techniques.
B) Training a model from scratch on every new problem.
C) Taking a model pre-trained on a large dataset (e.g., ImageNet) and fine-tuning it for a new, specific task with a smaller dataset.
D) Forgetting everything a model has learned.
  • 3. An "Autoencoder" is a type of neural network primarily used for:
A) Supervised classification of images.
B) Reinforcement learning.
C) Predicting continuous values in a regression task.
D) Unsupervised learning tasks like dimensionality reduction and data denoising.
  • 4. The architecture of a typical autoencoder consists of:
A) An encoder that compresses the input and a decoder that reconstructs the input from the compression.
B) Only a single layer of perceptrons.
C) A single output neuron with a linear activation.
D) A convolutional layer followed by an RNN layer.
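Questions 3 and 4 describe the encoder/decoder shape of an autoencoder. The sketch below only illustrates that architecture: the weights are random and untrained, and the layer sizes are made up, so it shows the compression-then-reconstruction data flow rather than a working model.

```python
# Autoencoder skeleton: encoder compresses 64-d input to an 8-d code,
# decoder reconstructs a 64-d output from that code.
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(64, 8)) * 0.1   # encoder weights (untrained)
W_dec = rng.normal(size=(8, 64)) * 0.1   # decoder weights (untrained)

def encode(x):
    return np.tanh(x @ W_enc)            # compressed latent representation

def decode(z):
    return z @ W_dec                     # reconstruction of the input

x = rng.normal(size=(1, 64))
x_hat = decode(encode(x))
print(x.shape, encode(x).shape, x_hat.shape)  # (1, 64) (1, 8) (1, 64)
```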
  • 5. In the context of model evaluation for classification, "Accuracy" is defined as:
A) The proportion of positive identifications that were actually correct.
B) The harmonic mean of precision and recall.
C) The proportion of total predictions that were correct.
D) The proportion of actual positives that were identified correctly.
  • 6. "Precision" is an important metric when:
A) You need a single metric that combines precision and recall.
B) The cost of false negatives is high (e.g., in disease screening, where you don't want to miss a sick patient).
C) The cost of false positives is high (e.g., in spam detection, where you don't want to flag legitimate emails as spam).
D) You are evaluating a regression model.
  • 7. "Recall" is an important metric when:
A) The cost of false negatives is high (e.g., in disease screening, where you don't want to miss a sick patient).
B) You need a single metric that combines precision and recall.
C) The cost of false positives is high (e.g., in spam detection).
D) You are evaluating a clustering model.
  • 8. The "F1 Score" is:
A) The harmonic mean of precision and recall, providing a single score that balances both concerns.
B) The arithmetic mean of precision and recall.
C) The difference between precision and recall.
D) A metric used exclusively for regression.
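Questions 5 through 8 all reduce to arithmetic on confusion-matrix counts. A minimal sketch, with made-up counts for illustration:

```python
# Accuracy, precision, recall, and F1 from raw confusion-matrix counts.
tp, fp, fn, tn = 40, 10, 5, 45   # hypothetical counts

accuracy  = (tp + tn) / (tp + fp + fn + tn)   # correct / total predictions
precision = tp / (tp + fp)                    # correct positives / predicted positives
recall    = tp / (tp + fn)                    # correct positives / actual positives
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(accuracy, precision, round(recall, 3), round(f1, 3))
# 0.85 0.8 0.889 0.842
```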
  • 9. For a regression model, the "Mean Squared Error" (MSE) measures:
A) The average of the squares of the errors between predicted and actual values.
B) The accuracy of a classification model.
C) The total number of misclassified instances.
D) The variance of the input features.
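For question 9, MSE is just the average of squared errors; a minimal sketch with made-up values:

```python
# Mean Squared Error between predictions and targets.
import numpy as np

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5,  0.0, 2.0, 8.0])
mse = np.mean((y_true - y_pred) ** 2)   # average of the squared errors
print(mse)  # 0.375
```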
  • 10. The "ROC Curve" is a tool used to evaluate:
A) The performance of a binary classification model at various classification thresholds.
B) The clustering quality of a K-means algorithm.
C) The loss of a regression model over time.
D) The architecture of a neural network.
  • 11. "Area Under the ROC Curve" (AUC) provides an aggregate measure of performance across all possible classification thresholds. A perfect model has an AUC of:
A) 1.0.
B) 0.5.
C) -1.0.
D) 0.0.
  • 12. "K-fold Cross-Validation" is a technique used to:
A) Replace the need for a separate test set.
B) Increase the size of the training dataset.
C) Visualize high-dimensional data.
D) Obtain a more robust estimate of model performance by training and evaluating the model K times on different splits of the data.
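A minimal sketch of question 12's K-fold procedure, where the "model" is just the training-fold mean as a stand-in for a real estimator:

```python
# K-fold cross-validation: K train/evaluate rounds, each holding out one fold.
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=5.0, size=50)
k = 5
folds = np.array_split(rng.permutation(len(y)), k)

scores = []
for i in range(k):
    test_idx = folds[i]
    train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
    prediction = y[train_idx].mean()            # "train" on the other K-1 folds
    scores.append(np.mean((y[test_idx] - prediction) ** 2))  # score held-out fold
print(np.mean(scores))  # performance estimate averaged over the K splits
```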
  • 13. In the K-Nearest Neighbors (K-NN) algorithm for classification, the class of a new data point is determined by:
A) The majority vote among its K closest neighbors in the feature space.
B) A single, pre-defined rule.
C) The output of a linear function.
D) A random selection from the training set.
  • 14. The parameter 'K' in the K-NN algorithm:
A) Is the learning rate for the algorithm.
B) Controls the model's flexibility. A small K can lead to overfitting, while a large K can lead to underfitting.
C) Is the number of features in the dataset.
D) Is always set to 1 for the best performance.
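Questions 13 and 14 are captured by a from-scratch K-NN classifier; the toy data and choice of K below are arbitrary:

```python
# K-NN classification: majority vote among the K nearest training points.
import numpy as np
from collections import Counter

X_train = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
y_train = np.array([0, 0, 0, 1, 1, 1])

def knn_predict(x, k=3):
    dists = np.linalg.norm(X_train - x, axis=1)   # distance to every training point
    nearest = y_train[np.argsort(dists)[:k]]      # labels of the K closest
    return Counter(nearest).most_common(1)[0][0]  # majority vote

print(knn_predict(np.array([0.5, 0.5])))  # 0
print(knn_predict(np.array([5.5, 5.5])))  # 1
```

Making K smaller lets the vote follow individual (possibly noisy) neighbors, which is why small K tends toward overfitting and large K toward underfitting.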
  • 15. "Principal Component Analysis" (PCA) works by:
A) Finding new, uncorrelated dimensions (principal components) that capture the maximum variance in the data.
B) Classifying data using a decision boundary.
C) Predicting a target variable using linear combinations of features.
D) Clustering data into K groups.
  • 16. The first principal component in PCA is the direction in the feature space that:
A) Captures the greatest possible variance in the data.
B) Is randomly oriented.
C) Captures the least possible variance in the data.
D) Always points along one of the original feature axes.
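Questions 15 and 16 can be demonstrated with a minimal PCA sketch via the covariance matrix's eigendecomposition, sorted so the first component carries the greatest variance; the correlated toy data is for illustration only:

```python
# PCA from scratch: directions of maximal variance in centered data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])  # correlated features
Xc = X - X.mean(axis=0)                      # center the data first

cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]            # largest variance first
components = eigvecs[:, order]

Z = Xc @ components[:, :1]                   # project onto the first component
print(eigvals[order])                        # variance captured per component
```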
  • 17. "K-Means Clustering" aims to partition data into K clusters such that:
A) The data is perfectly classified into known labels.
B) The between-cluster variance is minimized.
C) The within-cluster variance is minimized.
D) The data is projected onto a single dimension.
  • 18. The "Elbow Method" is a heuristic used in K-Means to:
A) Help choose the optimal number of clusters K by looking for a "bend" in the plot of within-cluster variance.
B) Initialize the cluster centroids.
C) Determine the learning rate for gradient descent.
D) Evaluate the accuracy of a classification model.
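Questions 17 and 18 together suggest a small experiment: run K-means for several values of K and watch the within-cluster variance (inertia) fall, looking for the "bend". A minimal sketch on made-up blobs:

```python
# K-means with the elbow heuristic: inertia drops sharply up to the true K.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(30, 2)) for c in (0.0, 3.0, 6.0)])

def kmeans_inertia(X, k, iters=20):
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - centers, axis=2), axis=1)
        new_centers = []
        for j in range(k):
            pts = X[labels == j]
            new_centers.append(pts.mean(axis=0) if len(pts) else centers[j])
        centers = np.array(new_centers)      # keep empty clusters in place
    labels = np.argmin(np.linalg.norm(X[:, None] - centers, axis=2), axis=1)
    return float(((X - centers[labels]) ** 2).sum())  # within-cluster variance

for k in range(1, 6):
    print(k, round(kmeans_inertia(X, k), 1))  # look for the "bend" near k=3
```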
  • 19. "Naive Bayes" classifiers are called "naive" because they:
A) Make a strong (naive) assumption that all features are conditionally independent given the class label.
B) Are very simple and cannot handle complex data.
C) Do not use probability in their predictions.
D) Always have the lowest possible accuracy.
  • 20. "Logistic Regression" is fundamentally a:
A) Clustering algorithm for grouping unlabeled data.
B) Dimensionality reduction technique.
C) Classification algorithm that models the probability of a binary outcome using a logistic function.
D) Regression algorithm for predicting continuous values.
  • 21. The output of a logistic regression model is a value between 0 and 1, which represents the:
A) Exact value of the target variable.
B) Probability that the input belongs to a particular class.
C) Distance to the decision boundary.
D) Number of features in the input.
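Questions 20 and 21 come down to the logistic function turning a linear score into a probability. In the sketch below, the weights and bias are hypothetical rather than learned:

```python
# Logistic regression output: a linear score squashed into (0, 1).
import numpy as np

w = np.array([1.5, -2.0])   # hypothetical learned weights
b = 0.3                     # hypothetical learned bias

def predict_proba(x):
    z = x @ w + b                      # linear score
    return 1.0 / (1.0 + np.exp(-z))   # logistic (sigmoid) function

p = predict_proba(np.array([2.0, 0.5]))
print(round(p, 3), "class", int(p >= 0.5))  # probability of the positive class
```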
  • 22. A "Random Forest" is an ensemble method that combines multiple:
A) Decision Trees to reduce overfitting and improve generalization.
B) K-NN models.
C) Linear Regression models.
D) Support Vector Machines.
  • 23. The "bagging" technique in a Random Forest helps to:
A) Reduce variance by training individual trees on random subsets of the data and averaging their results.
B) Perform feature extraction like PCA.
C) Increase the speed of a single decision tree.
D) Reduce bias by making trees more complex.
  • 24. "Gradient Boosting" machines (e.g., XGBoost) are ensemble methods that:
A) Build models sequentially, where each new model corrects the errors of the previous ones.
B) Are exclusively used for unsupervised learning.
C) Do not require any parameter tuning.
D) Build all models independently and average them.
  • 25. The term "feature engineering" refers to:
A) The process of deleting all features from a dataset.
B) The evaluation of a model's final performance.
C) The automatic learning of features by a deep neural network.
D) The process of using domain knowledge to create new input features that make machine learning algorithms work better.
  • 26. "One-hot encoding" is a preprocessing technique used to:
A) Normalize continuous numerical features.
B) Convert categorical variables into a binary (0/1) format that can be provided to ML algorithms.
C) Cluster similar data points together.
D) Reduce the dimensionality of image data.
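A minimal one-hot encoding sketch for question 26, using plain Python and numpy; each category becomes its own binary indicator column:

```python
# One-hot encoding: categorical values -> 0/1 indicator columns.
import numpy as np

colors = ["red", "green", "blue", "green"]
categories = sorted(set(colors))          # ['blue', 'green', 'red']
one_hot = np.array([[int(c == cat) for cat in categories] for c in colors])
print(categories)
print(one_hot)
# [[0 0 1]
#  [0 1 0]
#  [1 0 0]
#  [0 1 0]]
```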
  • 27. "Feature scaling" (e.g., normalization or standardization) is often crucial for algorithms that:
A) Are used for clustering only.
B) Are used for association rule learning.
C) Are based on tree-based models like Decision Trees and Random Forests.
D) Are based on distance calculations or gradient descent, such as SVM and Neural Networks.
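For question 27, standardization is one common form of feature scaling: rescale each feature to zero mean and unit variance so that distance- and gradient-based methods treat features comparably. A minimal sketch on made-up numbers (in practice the mean and standard deviation should be computed from training data only):

```python
# Standardization: per-feature z-scores.
import numpy as np

X_train = np.array([[1.0, 100.0], [2.0, 300.0], [3.0, 500.0]])
mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)

X_scaled = (X_train - mu) / sigma
print(X_scaled.mean(axis=0), X_scaled.std(axis=0))  # ~[0, 0] and [1, 1]
```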
  • 28. The "curse of dimensionality" refers to the problem that:
A) Dimensionality reduction always improves model performance.
B) All datasets should have as many features as possible.
C) As the number of features grows, the data becomes increasingly sparse, making it harder to find meaningful patterns.
D) There are never enough features to train a good model.
  • 29. "Regularization" is a technique used to:
A) Prevent overfitting by adding a penalty term to the loss function that discourages complex models.
B) Increase the variance of a model.
C) Make models more complex to fit the training data better.
D) Speed up the training time of a model.
  • 30. L1 Regularization (Lasso) can often lead to:
A) Sparse models where the weights of less important features are driven to zero, effectively performing feature selection.
B) All features having non-zero weights.
C) Increased model complexity.
D) A decrease in model interpretability.
  • 31. "Hyperparameters" are:
A) Configuration settings for the learning algorithm that are not learned from the data and must be set prior to training (e.g., learning rate, K in K-NN).
B) The input features of the model.
C) The parameters that the model learns during training (e.g., weights in a neural network).
D) The output predictions of the model.
  • 32. The process of "Hyperparameter Tuning" involves:
A) Training the model's internal weights.
B) Cleaning the raw data.
C) Searching for the best combination of hyperparameters that results in the best model performance.
D) Deploying the final model.
  • 33. "Grid Search" is a common method for hyperparameter tuning that involves:
A) Exhaustively searching over a specified set of hyperparameter values.
B) Using a separate neural network to predict the best hyperparameters.
C) Randomly sampling hyperparameter combinations from a distribution.
D) Ignoring hyperparameters altogether.
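Question 33's exhaustive search is easy to sketch with itertools; the grid values and the scoring function below are made-up stand-ins for "train and validate a model":

```python
# Grid search: score every combination in a small hyperparameter grid.
import itertools

grid = {"learning_rate": [0.01, 0.1], "k": [3, 5, 7]}

def validation_score(learning_rate, k):   # hypothetical evaluation routine
    return -abs(learning_rate - 0.1) - abs(k - 5)

best = max(
    (dict(zip(grid, combo)) for combo in itertools.product(*grid.values())),
    key=lambda params: validation_score(**params),
)
print(best)  # {'learning_rate': 0.1, 'k': 5}
```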
  • 34. "Early Stopping" is a form of regularization that works by:
A) Halting the training process when performance on a validation set starts to degrade, indicating the onset of overfitting.
B) Starting the training process later than scheduled.
C) Using a very small learning rate.
D) Stopping the training after a fixed, very short number of epochs.
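A minimal sketch of question 34's early stopping: halt once the validation loss has failed to improve for `patience` consecutive epochs. The loss sequence here is invented to show the mechanism.

```python
# Early stopping with a patience counter on validation loss.
val_losses = [0.90, 0.70, 0.55, 0.50, 0.52, 0.53, 0.56]  # degrades after epoch 4

patience, best, bad_epochs = 2, float("inf"), 0
for epoch, loss in enumerate(val_losses, start=1):
    if loss < best:
        best, bad_epochs = loss, 0      # new best: reset the counter
    else:
        bad_epochs += 1                 # validation got worse
        if bad_epochs >= patience:
            print(f"early stop at epoch {epoch}, best val loss {best}")
            break
```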
  • 35. A "Vanilla" neural network, also known as a Multilayer Perceptron (MLP), is typically composed of:
A) A single layer of neurons.
B) Convolutional layers for processing images.
C) Recurrent layers for processing sequences.
D) Fully connected layers, where each neuron in one layer is connected to every neuron in the next layer.
  • 36. The "softmax" activation function is commonly used in the output layer of a neural network for:
A) Unsupervised learning problems.
B) Binary classification problems.
C) Multi-class classification problems, as it outputs a probability distribution over the possible classes.
D) Regression problems.
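For question 36, softmax turns a vector of raw class scores (logits) into a probability distribution that sums to 1; a minimal sketch:

```python
# Softmax: logits -> probability distribution over classes.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())     # subtract the max for numerical stability
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])   # one raw score per class
probs = softmax(logits)
print(probs.round(3), probs.sum())   # [0.659 0.242 0.099] 1.0
```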
  • 37. The "Adam" optimizer is an adaptive learning rate algorithm that is often preferred because it:
A) Combines the advantages of two other extensions of stochastic gradient descent, AdaGrad and RMSProp.
B) Is only used for unsupervised learning.
C) Does not require any hyperparameters.
D) Is guaranteed to find the global minimum for any function.
  • 38. "Batch Normalization" is a technique used to:
A) Normalize the entire dataset before feeding it into the network.
B) Improve the stability and speed of neural network training by normalizing the inputs to each layer.
C) Increase the batch size during training.
D) Replace the need for an activation function.
  • 39. The "confusion matrix" is a table that is used to describe the performance of a:
A) Regression model's accuracy.
B) Classification model on a set of test data for which the true values are known.
C) Clustering algorithm's group assignments.
D) Dimensionality reduction technique's effectiveness.
  • 40. In a confusion matrix, the "true positives" are the cases where:
A) The model correctly predicted the negative class.
B) The model incorrectly predicted the positive class.
C) The model incorrectly predicted the negative class.
D) The model correctly predicted the positive class.
  • 41. The problem of "imbalanced classes" occurs when:
A) The features are not scaled properly.
B) The learning rate is set too high.
C) The model is too complex for the data.
D) One class in the training data has significantly more examples than another, which can bias the model.
  • 42. A technique to address imbalanced classes is "SMOTE," which:
A) Combines all classes into one.
B) Generates synthetic examples for the minority class to balance the dataset.
C) Deletes examples from the majority class at random.
D) Ignores the minority class completely.
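The core of question 42's SMOTE idea is interpolation between minority-class neighbors. The sketch below shows only that core step; real SMOTE implementations (e.g. imbalanced-learn's SMOTE) add neighbor selection and bookkeeping this omits.

```python
# SMOTE-style synthetic point: interpolate between a minority example
# and one of its nearest minority-class neighbors.
import numpy as np

rng = np.random.default_rng(0)
minority = np.array([[1.0, 1.0], [1.2, 0.9], [0.8, 1.1]])

x = minority[0]
dists = np.linalg.norm(minority - x, axis=1)
neighbor = minority[np.argsort(dists)[1]]   # nearest other minority point

lam = rng.uniform()                         # random point on the segment
synthetic = x + lam * (neighbor - x)
print(synthetic)
```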
  • 43. "Reinforcement Learning" differs from supervised and unsupervised learning in that:
A) It requires a fully labeled dataset for training.
B) It is a simpler and less powerful approach.
C) It is only used for clustering unlabeled data.
D) It learns by interacting with an environment and receiving rewards or penalties for actions, without a labeled dataset.
  • 44. "Q-Learning" is a popular algorithm in reinforcement learning that learns:
A) A policy that tells an agent what action to take under what circumstances by learning a value function.
B) The principal components of a state space.
C) A decision tree for classification.
D) A clustering of possible actions.
  • 45. "Natural Language Processing" (NLP) often uses supervised learning for tasks like:
A) Grouping similar news articles without labels.
B) Sentiment analysis, where text is classified as positive, negative, or neutral.
C) Generating new, original text without any input.
D) Reducing the dimensionality of word vectors.
  • 46. "Word Embeddings" (like Word2Vec) are techniques that:
A) Are used only for image classification.
B) Are a type of clustering algorithm.
C) Represent words as simple one-hot encoded vectors.
D) Represent words as dense vectors in a continuous space, capturing semantic meaning.
  • 47. A "Generative Adversarial Network" (GAN) consists of two networks:
A) A single, large Regression network.
B) A Generator and a Discriminator, which are trained in opposition to each other.
C) Two identical Convolutional Neural Networks.
D) An Encoder and a Decoder for compression.
  • 48. The "Generator" in a GAN is responsible for:
A) Classifying input images into categories.
B) Reducing the dimensionality of the input.
C) Discriminating between real and fake data.
D) Creating new, synthetic data that is indistinguishable from real data.
  • 49. The "Discriminator" in a GAN is essentially a:
A) Clustering algorithm grouping similar images.
B) Dimensionality reduction technique.
C) Regression model predicting a continuous value.
D) Binary classifier that tries to correctly label data as real (from the dataset) or fake (from the generator).