ThatQuiz Test Library
LEE-FELIR (F)
Contributed by: Sonio
  • 1. What is the defining characteristic of the training data used in supervised learning?
A) The data is generated randomly by the algorithm.
B) The data is unlabeled, and the model must find patterns on its own.
C) The data consists only of target outputs, with no input features.
D) The data is labeled, meaning each example is paired with a target output.
  • 2. The primary goal of a supervised learning model is to:
A) Reduce the dimensionality of the input data for visualization.
B) Memorize the entire training dataset perfectly.
C) Discover hidden patterns without any guidance.
D) Generalize from the training data to make accurate predictions on new, unseen data.
  • 3. In the analogy of a child learning from flashcards, the animal's name on the card represents what component of supervised learning?
A) The label or target output.
B) The input features.
C) The loss function.
D) The model's parameters.
  • 4. Which of the following tasks is a classic example of a classification problem?
A) Diagnosing a tumor as malignant or benign based on medical images.
B) Estimating the annual revenue of a company.
C) Forecasting the temperature for tomorrow.
D) Predicting the selling price of a house based on its features.
  • 5. A model that predicts the continuous value of a stock price for the next day is solving a:
A) Dimensionality reduction problem.
B) Clustering problem.
C) Classification problem.
D) Regression problem.
  • 6. What is the core objective of unsupervised learning?
A) To achieve perfect accuracy on a held-out test set.
B) To discover the inherent structure, patterns, or relationships within unlabeled data.
C) To predict a target variable based on labeled examples.
D) To classify emails into spam and non-spam folders.
  • 7. In the analogy of a child grouping toys without instructions, the act of putting all the cars together is most similar to which unsupervised learning technique?
A) Reinforcement Learning.
B) Regression.
C) Clustering.
D) Classification.
  • 8. Grouping customers based solely on their purchasing behavior, without pre-defined categories, is an application of:
A) Linear Regression, a type of supervised learning.
B) Clustering, a type of unsupervised learning.
C) Logistic Regression, a type of supervised learning.
D) A support vector machine for classification.
  • 9. The main goal of dimensionality reduction techniques like PCA is to:
A) Reduce the number of features while preserving the most important information in the data.
B) Predict a continuous output variable.
C) Assign categorical labels to each data point.
D) Increase the number of features to improve model accuracy.
  • 10. Market basket analysis, which finds rules like "if chips then soda," is a classic example of:
A) Deep learning with neural networks.
B) Classification in supervised learning.
C) Association rule learning in unsupervised learning.
D) Regression in supervised learning.
  • 11. Semi-supervised learning is particularly useful in real-world scenarios because:
A) It is simpler to implement than unsupervised learning.
B) It requires no labeled data at all.
C) It is always more accurate than fully supervised learning.
D) Labeling data is often expensive and time-consuming, so it leverages a small labeled set with a large unlabeled set.
  • 12. The fundamental question that a regression model aims to answer is:
A) "What is the underlying group?"
B) "Which category?"
C) "How much?" or "How many?"
D) "Is this pattern anomalous?"
  • 13. The fundamental question that a classification model aims to answer is:
A) "How much?" or "How many?"
B) "What is the correlation between these variables?"
C) "How can I reduce the number of features?"
D) "Which category?" or "What class?"
  • 14. Which algorithm is most directly designed for predicting a continuous target variable?
A) Logistic Regression
B) k-Nearest Neighbors for classification
C) Linear Regression
D) Decision Tree for classification
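For readers reviewing this item, a minimal sketch of linear regression on a single feature, using the closed-form least-squares slope and intercept (the function name `fit_linear` and the data are ours, purely for illustration):

```python
# Simple linear regression y = w*x + b via the closed-form least-squares
# solution: slope = covariance(x, y) / variance(x), intercept from the means.
def fit_linear(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - w * mean_x
    return w, b

# A continuous target (e.g., price vs. size): regression, not classification.
w, b = fit_linear([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
print(w, b)  # w = 2.0, b = 0.0 (the data lies exactly on y = 2x)
```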
  • 15. A model that uses patient data to assign a label of "High," "Medium," or "Low" risk for a disease is performing:
A) Clustering
B) Dimensionality reduction
C) Multi-class classification
D) Regression
  • 16. In a Decision Tree used for classification, what do the leaf nodes represent?
A) The probability of moving to the next node
B) The final class labels or decisions
C) The input features for a new data point
D) The average value of a continuous target
  • 17. In a Regression Tree, what is typically represented at the leaf nodes?
A) A continuous value, often the mean of the target values of the training instances that reach the leaf
B) The name of the feature used for splitting
C) A random number
D) A categorical class label
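A quick sketch of the regression-tree leaf idea from question 17: a depth-1 "stump" whose two leaves store the mean target of the training points that reach them. The threshold here is fixed by hand, not learned, and `fit_stump` is our illustrative name:

```python
# Depth-1 regression stump: split on a threshold, predict the mean target
# of the training instances that fall in each leaf.
def fit_stump(xs, ys, threshold):
    left = [y for x, y in zip(xs, ys) if x <= threshold]
    right = [y for x, y in zip(xs, ys) if x > threshold]
    left_mean = sum(left) / len(left)
    right_mean = sum(right) / len(right)
    def predict(x):
        return left_mean if x <= threshold else right_mean
    return predict

predict = fit_stump([1, 2, 8, 9], [10.0, 12.0, 30.0, 34.0], threshold=5)
print(predict(1.5))  # 11.0, the mean of 10.0 and 12.0
print(predict(8.5))  # 32.0, the mean of 30.0 and 34.0
```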
  • 18. A key strength of Decision Trees is their:
A) Interpretability; the model's decision-making process is easy to understand and visualize
B) Superior performance on all types of data compared to other algorithms
C) Immunity to overfitting on noisy datasets
D) Guarantee to find the global optimum for any dataset
  • 19. The "kernel trick" used in Support Vector Machines (SVMs) allows them to:
A) Initialize the weights of a neural network
B) Grow a tree structure by making sequential decisions
C) Find a linear separating hyperplane in a high-dimensional feature space, even when the data is not linearly separable in the original space
D) Perform linear regression more efficiently
  • 20. The "support vectors" in an SVM are:
A) The axes of the original feature space
B) All data points in the training set
C) The weights of a neural network layer
D) Data points that are closest to the decision boundary and most critical for defining the optimal hyperplane
  • 21. When comparing Decision Trees and SVMs, a primary advantage of SVMs is:
A) Their inherent resistance to any form of overfitting
B) Their superior interpretability and simplicity
C) Their effectiveness in high-dimensional spaces and their ability to model complex, non-linear decision boundaries
D) Their lower computational cost for very large datasets
  • 22. The process in supervised learning where a model's parameters are adjusted to minimize the difference between its predictions and the true labels is called:
A) Training or model fitting
B) Dimensionality reduction
C) Clustering
D) Data preprocessing
  • 23. A key challenge in unsupervised learning is evaluating model performance because:
A) There are no ground truth labels to compare the results against
B) The algorithms are not well-defined
C) The models are always less accurate than supervised models
D) The data is always too small
  • 24. The task of reducing a 50-dimensional dataset to a 2-dimensional plot for visualization is best accomplished by:
A) A Classification algorithm like Logistic Regression
B) A Regression algorithm like Linear Regression
C) Dimensionality Reduction techniques like Principal Component Analysis (PCA)
D) An Association rule learning algorithm
  • 25. If an e-commerce company wants to automatically group its products into categories without any pre-existing labels, it should use:
A) Clustering, an unsupervised learning method
B) Classification, a supervised learning method
C) A neural network for image recognition
D) Regression, a supervised learning method
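To make the clustering idea from questions 7, 8, and 25 concrete, here is a toy one-dimensional k-means with k = 2 (our own sketch with a deliberately simple min/max initialization; no labels are supplied, and groups emerge from the data alone):

```python
# One-dimensional k-means with two centroids: assign each point to its
# nearest centroid, then move each centroid to the mean of its group.
def kmeans_1d(points, iters=10):
    c1, c2 = min(points), max(points)  # simple initialization
    for _ in range(iters):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1 = sum(g1) / len(g1)
        c2 = sum(g2) / len(g2)
    return sorted([c1, c2])

print(kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.5, 8.5]))  # [1.0, 9.0]
```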
  • 26. The core building block of a neural network is a(n):
A) Artificial neuron or perceptron, which receives inputs, applies a transformation, and produces an output
B) Principal component
C) Support vector
D) Decision node in a tree
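The single artificial neuron of question 26 can be sketched in a few lines: a weighted sum of inputs plus a bias, passed through a step activation. The weights below are hand-picked to implement logical AND, purely as an illustration:

```python
# One artificial neuron: weighted sum + bias, then a step activation.
def neuron(inputs, weights, bias):
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if z > 0 else 0

# Weights chosen by hand so the neuron computes logical AND.
def and_gate(a, b):
    return neuron([a, b], weights=[1.0, 1.0], bias=-1.5)

print([and_gate(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 0, 0, 1]
```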
  • 27. In a neural network, the function inside a neuron that determines its output based on the weighted sum of its inputs is called the:
A) Optimization algorithm
B) Activation function
C) Kernel function
D) Loss function
  • 28. Which of the following is a non-linear activation function crucial for allowing neural networks to learn complex patterns?
A) A constant function
B) Rectified Linear Unit (ReLU)
C) The identity function (f(x) = x)
D) The mean squared error function
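ReLU, the answer to question 28, is simply max(0, x); its kink at zero is what makes it non-linear, unlike the identity function in option C:

```python
# Rectified Linear Unit: passes positive inputs through, zeroes the rest.
def relu(x):
    return x if x > 0 else 0.0

print([relu(v) for v in [-2.0, -0.5, 0.0, 1.5]])  # [0.0, 0.0, 0.0, 1.5]
```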
  • 29. The process of "training" a neural network involves:
A) Randomly assigning weights and never changing them
B) Clustering the input data
C) Manually setting the weights based on expert knowledge
D) Iteratively adjusting the weights and biases to minimize a loss function
  • 30. Backpropagation is the algorithm used in neural networks to:
A) Initialize the weights before training
B) Efficiently calculate the gradient of the loss function with respect to all the weights in the network, enabling the use of gradient descent
C) Perform clustering on the output layer
D) Visualize the network's architecture
  • 31. Deep Learning is a subfield of machine learning that primarily uses:
A) K-means clustering exclusively
B) Decision trees with a single split
C) Simple linear regression models
D) Neural networks with many layers (hence "deep")
  • 32. A key advantage of deep neural networks over shallower models is their ability to:
A) Be perfectly interpretable, like a decision tree
B) Automatically learn hierarchical feature representations from data
C) Always train faster and with less data
D) Operate without any need for data preprocessing
  • 33. Convolutional Neural Networks (CNNs) are particularly well-suited for tasks involving:
A) Text data and natural language processing
B) Tabular data with many categorical features
C) Image data, due to their architecture which exploits spatial locality
D) Unsupervised clustering of audio signals
  • 34. The "convolution" operation in a CNN is designed to:
A) Perform the final classification
B) Initialize the weights of the network
C) Flatten the input into a single vector
D) Detect local features (like edges or textures) in the input by applying a set of learnable filters
  • 35. Recurrent Neural Networks (RNNs) are designed to handle:
A) Static, non-temporal data
B) Sequential data, like time series or text, due to their internal "memory" of previous inputs
C) Independent and identically distributed (IID) data points
D) Only image data
  • 36. The "vanishing gradient" problem in deep networks refers to:
A) The loss function reaching a perfect value of zero
B) The gradients becoming exceedingly small as they are backpropagated through many layers, which can halt learning in early layers
C) The gradients becoming too large and causing numerical instability
D) The model overfitting to the training data
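A toy illustration of the vanishing-gradient effect from question 36 (our own sketch, not a real backpropagation pass): when each layer contributes a local derivative below 1, the backpropagated gradient is a product of many such factors and shrinks toward zero:

```python
# Model the backpropagated gradient as a product of identical per-layer
# local derivatives; with a factor below 1, the product vanishes with depth.
def backprop_gradient(local_derivative, n_layers):
    g = 1.0
    for _ in range(n_layers):
        g *= local_derivative
    return g

print(backprop_gradient(0.25, 10))  # ~9.5e-07: early layers barely learn
```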
  • 37. The "training set" is used to:
A) Tune the model's hyperparameters
B) Provide an unbiased evaluation of a final model's performance
C) Deploy the model in a production environment
D) Fit the model's parameters (e.g., the weights in a neural network)
  • 38. The "validation set" is primarily used for:
A) The final, unbiased assessment of the model's generalization error
B) The initial training of the model's weights
C) Data preprocessing and cleaning
D) Tuning hyperparameters and making decisions about the model architecture during development
  • 39. The "test set" should be:
A) Used only once, for a final evaluation of the model's performance on unseen data after model development is complete
B) Used repeatedly to tune the model's hyperparameters
C) Ignored in the machine learning pipeline
D) Merged into the training data to increase the number of examples
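Questions 37 through 39 describe the standard three-way split. A minimal sketch, assuming 60/20/20 proportions and a shuffle for illustration (`three_way_split` and the proportions are our choices, not prescribed by the quiz):

```python
import random

# Shuffle, then carve the data into train (fit parameters), validation
# (tune hyperparameters), and test (final, one-time evaluation).
def three_way_split(data, seed=0):
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train, n_val = int(0.6 * n), int(0.2 * n)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

train, val, test = three_way_split(list(range(10)))
print(len(train), len(val), len(test))  # 6 2 2
```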
  • 40. Overfitting occurs when a model:
A) Learns the training data too well, including its noise and outliers, and performs poorly on new, unseen data
B) Fails to learn the underlying pattern in the training data
C) Is too simple to capture the trends in the data
D) Is evaluated using the training set instead of a test set
  • 41. A common technique to reduce overfitting in neural networks is:
A) Increasing the model's capacity by adding more layers
B) Using a smaller training dataset
C) Dropout, which randomly ignores a subset of neurons during training
D) Training for more epochs without any checks
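A hedged sketch of the dropout technique named in question 41, in its common "inverted" form: each activation is zeroed with probability p at training time, and survivors are scaled by 1 / (1 - p) so expected activations match inference time. The function and values below are ours, for illustration only:

```python
import random

# Inverted dropout: zero each activation with probability p; scale the
# survivors by 1 / (1 - p) to keep the expected activation unchanged.
def dropout(activations, p, rng):
    return [0.0 if rng.random() < p else a / (1 - p) for a in activations]

out = dropout([1.0, 2.0, 3.0, 4.0], p=0.5, rng=random.Random(0))
print(out)  # some entries zeroed, the surviving ones doubled
```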
  • 42. The "bias" of a model refers to:
A) The weights connecting the input layer to the hidden layer
B) The error from erroneous assumptions in the learning algorithm, leading to underfitting
C) The activation function used in the output layer
D) The error from sensitivity to small fluctuations in the training set, leading to overfitting
  • 43. The "variance" of a model refers to:
A) The intercept term in a linear regression model
B) The error from sensitivity to small fluctuations in the training set, leading to overfitting
C) The error from erroneous assumptions in the learning algorithm, leading to underfitting
D) The speed at which the model trains
  • 44. The "bias-variance tradeoff" implies that:
A) Only variance is important for model performance
B) Decreasing bias will typically increase variance, and vice versa. The goal is to find a balance
C) Bias and variance can be minimized to zero simultaneously
D) Only bias is important for model performance
  • 45. A learning curve that shows high training accuracy but low validation accuracy is a classic sign of:
A) A well-generalized model
B) Overfitting
C) Underfitting
D) Perfect model performance
  • 46. In a neural network, the "loss function" (or cost function) measures:
A) The number of layers in the network
B) The accuracy on the test set
C) The speed of the backpropagation algorithm
D) How well the model is performing on the training data; it's the quantity we want to minimize during training
  • 47. Gradient Descent is an optimization algorithm that:
A) Iteratively adjusts parameters in the direction that reduces the loss function
B) Guarantees finding the global minimum for any loss function
C) Randomly searches the parameter space for a good solution
D) Is only used for unsupervised learning
  • 48. The "learning rate" in gradient descent controls:
A) The activation function for the output layer
B) The number of layers in a neural network
C) The amount of training data used in each epoch
D) The size of the step taken during each parameter update. A rate that is too high can cause divergence, while one that is too low can make training slow
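Questions 47 and 48 together describe the gradient descent update. A minimal sketch on the toy loss f(w) = (w - 3)^2, chosen by us for illustration: each step moves w against the gradient, scaled by the learning rate, and a rate that is too large would overshoot and diverge:

```python
# Gradient descent on f(w) = (w - 3)^2; the derivative is 2 * (w - 3).
def gradient_descent(lr, steps, w=0.0):
    for _ in range(steps):
        grad = 2 * (w - 3)  # gradient of the toy loss at w
        w -= lr * grad      # step size controlled by the learning rate
    return w

print(gradient_descent(lr=0.1, steps=50))  # approaches the minimum at w = 3
```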
  • 49. "Epoch" in neural network training refers to:
A) A type of regularization technique
B) One complete pass of the entire training dataset through the learning algorithm
C) The final evaluation on the test set
D) The processing of a single training example
  • 50. "Batch Size" in neural network training refers to:
A) The number of layers in the network
B) The total number of examples in the training set
C) The number of validation examples
D) The number of training examples used in one forward/backward pass before the model's parameters are updated
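The epoch/batch bookkeeping from questions 49 and 50 can be sketched directly (our helper, with `math.ceil` handling a final partial batch): one epoch is one full pass over the training set, and the number of parameter updates per epoch equals the number of batches:

```python
import math

# Updates happen once per batch; epochs multiply the batch count.
def count_updates(n_examples, batch_size, n_epochs):
    batches_per_epoch = math.ceil(n_examples / batch_size)
    return batches_per_epoch * n_epochs

print(count_updates(n_examples=1000, batch_size=100, n_epochs=5))  # 50
```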