ThatQuiz Test Library
COELE1-2
Contributed by: Billo
  • 1. "Mini-batch Gradient Descent" is often preferred because it:
A) Offers a balance between the efficiency of batch GD and the robustness of SGD.
B) Does not require a loss function.
C) Is guaranteed to converge faster than any other method.
D) Is only applicable to linear models.
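A minimal sketch of the mini-batch update for linear regression with an MSE loss (NumPy only; the learning rate, batch size, and epoch count are illustrative):

```python
import numpy as np

def minibatch_gd(X, y, lr=0.01, batch_size=32, epochs=100):
    """Mini-batch gradient descent for linear regression with an MSE loss."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        idx = np.random.permutation(n)            # reshuffle every epoch
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            error = X[batch] @ w - y[batch]       # predictions minus targets
            grad = 2 * X[batch].T @ error / len(batch)
            w -= lr * grad                        # update on one mini-batch
    return w
```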
  • 2. "Transfer Learning" in deep learning involves:
A) Training a model from scratch on every new problem.
B) Taking a model pre-trained on a large dataset (e.g., ImageNet) and fine-tuning it for a new, specific task with a smaller dataset.
C) Using only unsupervised learning techniques.
D) Forgetting everything a model has learned.
  • 3. An "Autoencoder" is a type of neural network primarily used for:
A) Supervised classification of images.
B) Unsupervised learning tasks like dimensionality reduction and data denoising.
C) Predicting continuous values in a regression task.
D) Reinforcement learning.
  • 4. The architecture of a typical autoencoder consists of:
A) A convolutional layer followed by an RNN layer.
B) An encoder that compresses the input and a decoder that reconstructs the input from the compression.
C) A single output neuron with a linear activation.
D) Only a single layer of perceptrons.
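A minimal encoder/decoder sketch with tf.keras, assuming 784-dimensional inputs such as flattened MNIST images; the 32-unit bottleneck is illustrative:

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(784,))
# Encoder: compress the input down to a 32-dimensional code
code = tf.keras.layers.Dense(32, activation="relu")(inputs)
# Decoder: reconstruct the 784-dimensional input from the code
recon = tf.keras.layers.Dense(784, activation="sigmoid")(code)

autoencoder = tf.keras.Model(inputs, recon)
# Unsupervised: the target is the input itself (reconstruction loss)
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(X_train, X_train, epochs=10, batch_size=256)
```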
  • 5. In the context of model evaluation for classification, "Accuracy" is defined as:
A) The proportion of actual positives that were identified correctly.
B) The proportion of total predictions that were correct.
C) The proportion of positive identifications that were actually correct.
D) The harmonic mean of precision and recall.
  • 6. "Precision" is an important metric when:
A) You are evaluating a regression model.
B) The cost of false positives is high (e.g., in spam detection, where you don't want to flag legitimate emails as spam).
C) The cost of false negatives is high (e.g., in disease screening, where you don't want to miss a sick patient).
D) You need a single metric that combines precision and recall.
  • 7. "Recall" is an important metric when:
A) You need a single metric that combines precision and recall.
B) The cost of false negatives is high (e.g., in disease screening, where you don't want to miss a sick patient).
C) The cost of false positives is high (e.g., in spam detection).
D) You are evaluating a clustering model.
  • 8. The "F1 Score" is:
A) A metric used exclusively for regression.
B) The difference between precision and recall.
C) The arithmetic mean of precision and recall.
D) The harmonic mean of precision and recall, providing a single score that balances both concerns.
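All four metrics from questions 5-8 fall straight out of the confusion-matrix counts; a sketch with hypothetical counts:

```python
# Hypothetical confusion-matrix counts for a binary classifier
tp, fp, fn, tn = 80, 10, 20, 90

accuracy  = (tp + tn) / (tp + tn + fp + fn)                # correct / total
precision = tp / (tp + fp)                                 # flagged positives that were real
recall    = tp / (tp + fn)                                 # real positives that were found
f1        = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(accuracy, precision, recall, f1)  # 0.85  0.889  0.8  0.842
```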
  • 9. For a regression model, the "Mean Squared Error" (MSE) measures:
A) The average of the squares of the errors between predicted and actual values.
B) The accuracy of a classification model.
C) The total number of misclassified instances.
D) The variance of the input features.
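A one-line MSE computation with NumPy, using made-up values:

```python
import numpy as np

y_true = np.array([3.0, 5.0, 2.0])   # actual values (made up)
y_pred = np.array([2.5, 5.5, 2.0])   # model predictions (made up)

mse = np.mean((y_true - y_pred) ** 2)  # average of the squared errors
print(mse)  # (0.25 + 0.25 + 0.0) / 3 ≈ 0.167
```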
  • 10. The "ROC Curve" is a tool used to evaluate:
A) The architecture of a neural network.
B) The loss of a regression model over time.
C) The clustering quality of a K-means algorithm.
D) The performance of a binary classification model at various classification thresholds.
  • 11. "Area Under the ROC Curve" (AUC) provides an aggregate measure of performance across all possible classification thresholds. A perfect model has an AUC of:
A) 0.0.
B) 0.5.
C) 1.0.
D) -1.0.
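A quick AUC check with scikit-learn's roc_auc_score, using toy labels and scores:

```python
from sklearn.metrics import roc_auc_score

y_true  = [0, 0, 1, 1]           # ground-truth binary labels
y_score = [0.1, 0.4, 0.35, 0.8]  # predicted probabilities for the positive class

print(roc_auc_score(y_true, y_score))  # 0.75; a perfect ranking would give 1.0
```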
  • 12. "K-fold Cross-Validation" is a technique used to:
A) Replace the need for a separate test set.
B) Visualize high-dimensional data.
C) Increase the size of the training dataset.
D) Obtain a more robust estimate of model performance by training and evaluating the model K times on different splits of the data.
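A 5-fold cross-validation sketch with scikit-learn; the dataset and classifier are placeholders:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, random_state=0)

# Train and evaluate 5 times; each fold serves once as the held-out set
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores.mean(), scores.std())  # a more robust performance estimate
```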
  • 13. In the K-Nearest Neighbors (K-NN) algorithm for classification, the class of a new data point is determined by:
A) The output of a linear function.
B) The majority vote among its K closest neighbors in the feature space.
C) A random selection from the training set.
D) A single, pre-defined rule.
  • 14. The parameter 'K' in the K-NN algorithm:
A) Is the number of features in the dataset.
B) Is always set to 1 for the best performance.
C) Controls the model's flexibility. A small K can lead to overfitting, while a large K can lead to underfitting.
D) Is the learning rate for the algorithm.
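A bare-bones K-NN classifier in NumPy showing the majority vote; Euclidean distance is assumed:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=3):
    """Classify x_new by majority vote among its k nearest neighbours."""
    dists = np.linalg.norm(X_train - x_new, axis=1)  # Euclidean distances
    nearest = np.argsort(dists)[:k]                  # indices of the k closest points
    return Counter(y_train[nearest]).most_common(1)[0][0]

X_train = np.array([[0, 0], [0, 1], [5, 5], [6, 5]])
y_train = np.array(["a", "a", "b", "b"])
print(knn_predict(X_train, y_train, np.array([5, 4])))  # "b"
```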
  • 15. "Principal Component Analysis" (PCA) works by:
A) Predicting a target variable using linear combinations of features.
B) Classifying data using a decision boundary.
C) Clustering data into K groups.
D) Finding new, uncorrelated dimensions (principal components) that capture the maximum variance in the data.
  • 16. The first principal component in PCA is the direction in the feature space that:
A) Captures the least possible variance in the data.
B) Is randomly oriented.
C) Captures the greatest possible variance in the data.
D) Is perpendicular to all other components.
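A PCA sketch with scikit-learn on synthetic data whose features have very different variances, so the first component dominates:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Three features with very different variances, so one direction dominates
X = rng.normal(size=(200, 3)) * np.array([10.0, 2.0, 0.5])

pca = PCA(n_components=2).fit(X)
X_2d = pca.transform(X)                # projection onto the top-2 components
print(pca.explained_variance_ratio_)   # first component captures most variance
```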
  • 17. "K-Means Clustering" aims to partition data into K clusters such that:
A) The data is perfectly classified into known labels.
B) The data is projected onto a single dimension.
C) The within-cluster variance is minimized.
D) The between-cluster variance is minimized.
  • 18. The "Elbow Method" is a heuristic used in K-Means to:
A) Evaluate the accuracy of a classification model.
B) Determine the learning rate for gradient descent.
C) Initialize the cluster centroids.
D) Help choose the optimal number of clusters K by looking for a "bend" in the plot of within-cluster variance.
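An elbow-method sketch with scikit-learn's KMeans, printing the within-cluster variance (inertia_) for each K; in practice you would plot these values and look for the bend:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Three well-separated blobs
X = np.vstack([rng.normal(loc=c, size=(100, 2)) for c in (0.0, 5.0, 10.0)])

for k in range(1, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(k, round(km.inertia_, 1))  # within-cluster sum of squares; bend near K=3
```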
  • 19. "Naive Bayes" classifiers are called "naive" because they:
A) Always have the lowest possible accuracy.
B) Are very simple and cannot handle complex data.
C) Make a strong (naive) assumption that all features are conditionally independent given the class label.
D) Do not use probability in their predictions.
  • 20. "Logistic Regression" is fundamentally a:
A) Classification algorithm that models the probability of a binary outcome using a logistic function.
B) Regression algorithm for predicting continuous values.
C) Dimensionality reduction technique.
D) Clustering algorithm for grouping unlabeled data.
  • 21. The output of a logistic regression model is a value between 0 and 1, which represents the:
A) Probability that the input belongs to a particular class.
B) Exact value of the target variable.
C) Number of features in the input.
D) Distance to the decision boundary.
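The logistic (sigmoid) function behind questions 20-21, in NumPy:

```python
import numpy as np

def sigmoid(z):
    """Logistic function: squashes any real score into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# z = w·x + b is the linear score; sigmoid(z) is P(class = 1 | x)
print(sigmoid(0.0), sigmoid(3.0), sigmoid(-3.0))  # 0.5, ~0.953, ~0.047
```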
  • 22. A "Random Forest" is an ensemble method that combines multiple:
A) Support Vector Machines.
B) Decision Trees to reduce overfitting and improve generalization.
C) Linear Regression models.
D) K-NN models.
  • 23. The "bagging" technique in a Random Forest helps to:
A) Reduce variance by training individual trees on random subsets of the data and averaging their results.
B) Reduce bias by making trees more complex.
C) Increase the speed of a single decision tree.
D) Perform feature extraction like PCA.
  • 24. "Gradient Boosting" machines (e.g., XGBoost) are ensemble methods that:
A) Build models sequentially, where each new model corrects the errors of the previous ones.
B) Build all models independently and average them.
C) Are exclusively used for unsupervised learning.
D) Do not require any parameter tuning.
  • 25. The term "feature engineering" refers to:
A) The process of deleting all features from a dataset.
B) The automatic learning of features by a deep neural network.
C) The evaluation of a model's final performance.
D) The process of using domain knowledge to create new input features that make machine learning algorithms work better.
  • 26. "One-hot encoding" is a preprocessing technique used to:
A) Reduce the dimensionality of image data.
B) Normalize continuous numerical features.
C) Cluster similar data points together.
D) Convert categorical variables into a binary (0/1) format that can be provided to ML algorithms.
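A one-hot encoding sketch with pandas; the column name and categories are made up:

```python
import pandas as pd

df = pd.DataFrame({"color": ["red", "green", "blue", "green"]})
# Each category becomes its own 0/1 column
print(pd.get_dummies(df, columns=["color"]))
```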
  • 27. "Feature scaling" (e.g., normalization or standardization) is often crucial for algorithms that:
A) Are based on tree-based models like Decision Trees and Random Forests.
B) Are based on distance calculations or gradient descent, such as SVM and Neural Networks.
C) Are used for clustering only.
D) Are used for association rule learning.
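A standardization sketch in NumPy (zero mean, unit variance per feature):

```python
import numpy as np

X = np.array([[1.0, 200.0],
              [2.0, 300.0],
              [3.0, 400.0]])

# Standardization: zero mean, unit variance per feature, so the
# large-scale second feature no longer dominates distances or gradients
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
print(X_std)
```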
  • 28. The "curse of dimensionality" refers to the problem that:
A) Dimensionality reduction always improves model performance.
B) All datasets should have as many features as possible.
C) There are never enough features to train a good model.
D) As the number of features grows, the data becomes increasingly sparse, making it harder to find meaningful patterns.
  • 29. "Regularization" is a technique used to:
A) Increase the variance of a model.
B) Speed up the training time of a model.
C) Make models more complex to fit the training data better.
D) Prevent overfitting by adding a penalty term to the loss function that discourages complex models.
  • 30. L1 Regularization (Lasso) can often lead to:
A) All features having non-zero weights.
B) Sparse models where the weights of less important features are driven to zero, effectively performing feature selection.
C) A decrease in model interpretability.
D) Increased model complexity.
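A sketch of Lasso-induced sparsity with scikit-learn, on synthetic data where only two of ten features matter; the alpha value is illustrative:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
# Only the first two features actually influence the target
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=100)

lasso = Lasso(alpha=0.1).fit(X, y)
print(lasso.coef_)  # weights of the eight irrelevant features are driven to ~0
```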
  • 31. "Hyperparameters" are:
A) Configuration settings for the learning algorithm that are not learned from the data and must be set prior to training (e.g., learning rate, K in K-NN).
B) The parameters that the model learns during training (e.g., weights in a neural network).
C) The input features of the model.
D) The output predictions of the model.
  • 32. The process of "Hyperparameter Tuning" involves:
A) Cleaning the raw data.
B) Deploying the final model.
C) Searching for the best combination of hyperparameters that results in the best model performance.
D) Training the model's internal weights.
  • 33. "Grid Search" is a common method for hyperparameter tuning that involves:
A) Exhaustively searching over a specified set of hyperparameter values.
B) Ignoring hyperparameters altogether.
C) Randomly sampling hyperparameter combinations from a distribution.
D) Using a separate neural network to predict the best hyperparameters.
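A grid search sketch with scikit-learn's GridSearchCV over K-NN hyperparameters; the grid values are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=200, random_state=0)

# Exhaustively score every combination in the grid via cross-validation
grid = GridSearchCV(
    KNeighborsClassifier(),
    param_grid={"n_neighbors": [1, 3, 5, 9], "weights": ["uniform", "distance"]},
    cv=5,
).fit(X, y)
print(grid.best_params_, grid.best_score_)
```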
  • 34. "Early Stopping" is a form of regularization that works by:
A) Halting the training process when performance on a validation set starts to degrade, indicating the onset of overfitting.
B) Stopping the training after a fixed, very short number of epochs.
C) Using a very small learning rate.
D) Starting the training process later than scheduled.
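A generic early-stopping loop in Python; the model's train_one_epoch(), val_loss(), get_weights(), and set_weights() hooks are hypothetical:

```python
def train_with_early_stopping(model, patience=5, max_epochs=200):
    """Halt when validation loss stops improving for `patience` epochs."""
    best_loss, best_weights, stale = float("inf"), None, 0
    for epoch in range(max_epochs):
        model.train_one_epoch()        # hypothetical hook: one pass over training data
        loss = model.val_loss()        # hypothetical hook: loss on the validation set
        if loss < best_loss:
            best_loss, best_weights, stale = loss, model.get_weights(), 0
        else:
            stale += 1
            if stale >= patience:
                break                  # validation loss degrading: likely overfitting
    model.set_weights(best_weights)    # roll back to the best epoch seen
```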
  • 35. A "Vanilla" neural network, also known as a Multilayer Perceptron (MLP), is typically composed of:
A) A single layer of neurons.
B) Recurrent layers for processing sequences.
C) Convolutional layers for processing images.
D) Fully connected layers, where each neuron in one layer is connected to every neuron in the next layer.
  • 36. The "softmax" activation function is commonly used in the output layer of a neural network for:
A) Binary classification problems.
B) Multi-class classification problems, as it outputs a probability distribution over the possible classes.
C) Unsupervised learning problems.
D) Regression problems.
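A numerically stable softmax in NumPy:

```python
import numpy as np

def softmax(z):
    """Turn a vector of raw scores (logits) into a probability distribution."""
    e = np.exp(z - z.max())  # subtract the max for numerical stability
    return e / e.sum()

print(softmax(np.array([2.0, 1.0, 0.1])))  # ~[0.66, 0.24, 0.10]; sums to 1
```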
  • 37. The "Adam" optimizer is an adaptive learning rate algorithm that is often preferred because it:
A) Is guaranteed to find the global minimum for any function.
B) Is only used for unsupervised learning.
C) Combines the advantages of two other extensions of stochastic gradient descent, AdaGrad and RMSProp.
D) Does not require any hyperparameters.
  • 38. "Batch Normalization" is a technique used to:
A) Improve the stability and speed of neural network training by normalizing the inputs to each layer.
B) Replace the need for an activation function.
C) Normalize the entire dataset before feeding it into the network.
D) Increase the batch size during training.
  • 39. The "confusion matrix" is a table that is used to describe the performance of a:
A) Regression model's accuracy.
B) Classification model on a set of test data for which the true values are known.
C) Clustering algorithm's group assignments.
D) Dimensionality reduction technique's effectiveness.
  • 40. In a confusion matrix, the "true positives" are the cases where:
A) The model incorrectly predicted the negative class.
B) The model incorrectly predicted the positive class.
C) The model correctly predicted the negative class.
D) The model correctly predicted the positive class.
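A confusion-matrix sketch with scikit-learn on toy labels; note scikit-learn's row/column convention:

```python
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 1, 0, 0]  # known labels (toy data)
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]  # model predictions

# scikit-learn's convention: rows = actual, columns = predicted:
# [[TN, FP],
#  [FN, TP]]
print(confusion_matrix(y_true, y_pred))  # [[3 1], [1 3]]
```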
  • 41. The problem of "imbalanced classes" occurs when:
A) The learning rate is set too high.
B) The features are not scaled properly.
C) One class in the training data has significantly more examples than another, which can bias the model.
D) The model is too complex for the data.
  • 42. A technique to address imbalanced classes is "SMOTE," which:
A) Generates synthetic examples for the minority class to balance the dataset.
B) Combines all classes into one.
C) Ignores the minority class completely.
D) Deletes examples from the majority class at random.
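A SMOTE sketch, assuming the third-party imbalanced-learn package is installed:

```python
from collections import Counter
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

# A 9:1 imbalanced binary problem
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
print(Counter(y))  # roughly {0: 900, 1: 100}

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print(Counter(y_res))  # synthetic minority examples added until the classes balance
```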
  • 43. "Reinforcement Learning" differs from supervised and unsupervised learning in that:
A) It is only used for clustering unlabeled data.
B) It requires a fully labeled dataset for training.
C) It learns by interacting with an environment and receiving rewards or penalties for actions, without a labeled dataset.
D) It is a simpler and less powerful approach.
  • 44. "Q-Learning" is a popular algorithm in reinforcement learning that learns:
A) A policy that tells an agent what action to take under what circumstances by learning a value function.
B) A clustering of possible actions.
C) A decision tree for classification.
D) The principal components of a state space.
  • 45. "Natural Language Processing" (NLP) often uses supervised learning for tasks like:
A) Generating new, original text without any input.
B) Grouping similar news articles without labels.
C) Sentiment analysis, where text is classified as positive, negative, or neutral.
D) Reducing the dimensionality of word vectors.
  • 46. "Word Embeddings" (like Word2Vec) are techniques that:
A) Are a type of clustering algorithm.
B) Are used only for image classification.
C) Represent words as dense vectors in a continuous space, capturing semantic meaning.
D) Represent words as simple one-hot encoded vectors.
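A Word2Vec sketch with gensim on a toy corpus (far too small for meaningful embeddings, but it shows the API):

```python
from gensim.models import Word2Vec

sentences = [["the", "cat", "sat"], ["the", "dog", "sat"], ["cats", "chase", "dogs"]]
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, seed=0)

print(model.wv["cat"].shape)         # a dense 50-dimensional vector, not one-hot
print(model.wv.most_similar("cat"))  # nearby words in the embedding space
```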
  • 47. A "Generative Adversarial Network" (GAN) consists of two networks:
A) A Generator and a Discriminator, which are trained in opposition to each other.
B) Two identical Convolutional Neural Networks.
C) A single, large Regression network.
D) An Encoder and a Decoder for compression.
  • 48. The "Generator" in a GAN is responsible for:
A) Discriminating between real and fake data.
B) Creating new, synthetic data that is indistinguishable from real data.
C) Reducing the dimensionality of the input.
D) Classifying input images into categories.
  • 49. The "Discriminator" in a GAN is essentially a:
A) Binary classifier that tries to correctly label data as real (from the dataset) or fake (from the generator).
B) Clustering algorithm grouping similar images.
C) Dimensionality reduction technique.
D) Regression model predicting a continuous value.
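A structural sketch of the two GAN networks in tf.keras; the layer sizes and 784-dimensional data shape are illustrative, and the adversarial training loop is omitted:

```python
import tensorflow as tf

latent_dim = 64  # size of the random noise vector fed to the generator

# Generator: maps noise to synthetic data (here, 784-dim vectors)
generator = tf.keras.Sequential([
    tf.keras.Input(shape=(latent_dim,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(784, activation="sigmoid"),
])

# Discriminator: a binary classifier, real (1) vs. generated (0)
discriminator = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
```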