Explanation: AI refers to machines designed to mimic human intelligence.
Explanation: Linear regression is a supervised learning algorithm because it learns from labeled data.
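For example, a minimal scikit-learn sketch (the toy numbers are invented for illustration) shows the labeled (X, y) pairs the model fits:

```python
# Minimal sketch: fitting a linear regression on a tiny labeled dataset.
from sklearn.linear_model import LinearRegression

X = [[1], [2], [3], [4]]      # features (inputs)
y = [2.1, 3.9, 6.2, 8.1]      # labels (targets) -- the "supervision"
model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)   # learned slope and intercept
print(model.predict([[5]]))            # prediction for an unseen input
```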
Explanation: Overfitting occurs when the model learns noise from the training data instead of general patterns.
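A quick sketch of the effect, using invented noisy data: a degree-9 polynomial can memorize all ten training points, noise included, while a degree-1 fit captures only the trend:

```python
# Sketch: a model complex enough to memorize noise (toy data, illustrative only).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = x + rng.normal(0, 0.1, size=10)    # linear signal plus noise

underfit = np.polyfit(x, y, 1)          # degree-1: captures the general trend
overfit = np.polyfit(x, y, 9)           # degree-9: passes through every noisy point

# The degree-9 fit has near-zero training error but oscillates between points,
# so it generalizes poorly to new x values.
```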
Explanation: ReLU (Rectified Linear Unit) is widely used in neural networks as an activation function.
Reference: Intro to AI & ML
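As a sketch, ReLU can be written in one line with NumPy: it passes positive inputs through unchanged and clamps negatives to zero.

```python
# ReLU: output the input if positive, else zero.
import numpy as np

def relu(x):
    return np.maximum(0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5])))  # -> [0.  0.  0.  1.5]
```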
Explanation: Deep Learning is a specialized subset of machine learning that uses neural networks with multiple (deep) layers to progressively extract higher-level features from raw input. It’s particularly powerful for complex tasks like image and speech recognition.
Reference: Deep Learning - MIT Press
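A minimal sketch of such a stacked architecture, assuming PyTorch; the layer sizes are arbitrary and chosen only for illustration:

```python
# Sketch of a "deep" network: several stacked layers, each building on the last.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),   # low-level features
    nn.Linear(256, 64),  nn.ReLU(),   # mid-level features
    nn.Linear(64, 10),                # class scores
)
```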
Explanation: “Perpetual Learning” is not a standard classification. The three main categories of machine learning are Supervised, Unsupervised, and Reinforcement Learning.
Reference: Types of Machine Learning - Stanford University CS229
Explanation: Reinforcement Learning (RL) agents learn to take actions that maximize cumulative reward through feedback loops.
Reference: Reinforcement Learning - Sutton & Barto
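A sketch of the tabular Q-learning update, one common way this feedback loop is implemented; the function and variable names here are illustrative, not taken from the book:

```python
# Sketch of the tabular Q-learning update rule; names and numbers are illustrative.
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    best_next = max(Q[next_state].values())     # best estimated future value
    target = reward + gamma * best_next         # reward plus discounted future reward
    Q[state][action] += alpha * (target - Q[state][action])

Q = {"s0": {"left": 0.0, "right": 0.0}, "s1": {"left": 0.0, "right": 0.0}}
q_update(Q, "s0", "right", reward=1.0, next_state="s1")
print(Q["s0"]["right"])   # nudged toward the reward signal -> 0.1
```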
Explanation: Decision Trees are popular for classification because they split data based on feature values to predict labels.
Reference: Decision Tree Classifier - Scikit-learn
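A minimal scikit-learn sketch on the built-in iris dataset:

```python
# A tree that splits on feature values to predict class labels.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(clf.predict(X[:2]))   # predicted class labels for the first two samples
```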
Explanation: Gradient Descent minimizes the loss by updating model parameters opposite to the gradient direction.
Reference: Gradient Descent Explained - Coursera
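A bare-bones sketch on the toy loss f(w) = w², whose gradient is 2w; each step moves opposite the gradient, scaled by a learning rate:

```python
# Vanilla gradient descent on f(w) = w^2.
w, lr = 5.0, 0.1
for step in range(50):
    grad = 2 * w          # derivative of the loss at the current point
    w -= lr * grad        # step against the gradient direction
print(w)                  # close to the minimum at w = 0
```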
Explanation: Clustering is an unsupervised learning method used to group unlabeled data by similarity.
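For example, a k-means sketch in scikit-learn (toy points invented here); note that no labels are passed to fit:

```python
# k-means groups unlabeled points by similarity.
from sklearn.cluster import KMeans

X = [[1, 1], [1.2, 0.9], [8, 8], [8.1, 7.9]]   # toy unlabeled data
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)   # e.g. [0 0 1 1] -- two discovered groups
```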
Explanation: A confusion matrix summarizes prediction outcomes for classification models.
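A minimal sketch with invented binary predictions:

```python
# Tallying prediction outcomes for a binary classifier.
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1]
print(confusion_matrix(y_true, y_pred))
# rows = actual class, columns = predicted class:
# [[2 1]    2 true negatives, 1 false positive
#  [1 2]]   1 false negative, 2 true positives
```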
Explanation: Feature scaling standardizes numerical ranges, helping algorithms like gradient descent converge efficiently.
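A sketch with scikit-learn's StandardScaler on invented data:

```python
# Standardizing features to zero mean and unit variance.
from sklearn.preprocessing import StandardScaler

X = [[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]]   # columns on very different scales
X_scaled = StandardScaler().fit_transform(X)
print(X_scaled)   # each column now has mean 0 and std 1
```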
Explanation: Models with very low bias tend to have high variance and overfit, while high-bias models underfit; good models balance the two.
Explanation: F1-Score balances precision and recall, making it ideal for imbalanced datasets.
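Concretely, F1 = 2PR / (P + R), the harmonic mean of precision (P) and recall (R). A sketch with invented labels:

```python
# F1 as the harmonic mean of precision and recall.
from sklearn.metrics import f1_score, precision_score, recall_score

y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 0, 0, 1]
p, r = precision_score(y_true, y_pred), recall_score(y_true, y_pred)
print(2 * p * r / (p + r))          # harmonic mean computed by hand
print(f1_score(y_true, y_pred))     # same value from scikit-learn
```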
Explanation: Validation sets help in model selection and hyperparameter tuning before final testing.
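One common way to carve out the three splits, sketched with scikit-learn (the 60/20/20 ratio is just an example):

```python
# Carving out a validation set for tuning, keeping test data untouched.
from sklearn.model_selection import train_test_split

X, y = list(range(100)), [i % 2 for i in range(100)]
X_train, X_temp, y_train, y_temp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_temp, y_temp, test_size=0.5, random_state=0)
# 60% train (fit models), 20% validation (pick hyperparameters), 20% test (final check)
```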
Explanation: PCA transforms high-dimensional data into fewer dimensions while retaining variance.
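A sketch reducing the 4-dimensional iris features to 2:

```python
# Projecting 4-D data down to 2 principal components.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)
print(X_2d.shape)                        # (150, 2)
print(pca.explained_variance_ratio_)     # variance retained by each component
```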
Explanation: RNNs (Recurrent Neural Networks) are designed for sequential inputs like text or time series.
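A minimal recurrent-layer sketch, assuming PyTorch; the sizes are arbitrary:

```python
# A recurrent layer consumes a sequence step by step, carrying a hidden state.
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)
x = torch.randn(4, 10, 8)          # batch of 4 sequences, 10 time steps, 8 features
output, h_n = rnn(x)               # hidden state is carried across time steps
print(output.shape)                # torch.Size([4, 10, 16])
```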
Explanation: Regularization methods like L1/L2 reduce overfitting by constraining model weights.
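A sketch contrasting the two penalties in scikit-learn (toy data and alpha values invented here):

```python
# L2 (Ridge) and L1 (Lasso) both penalize large weights; alpha sets the strength.
from sklearn.linear_model import Ridge, Lasso

X = [[1, 2], [2, 4.1], [3, 5.9], [4, 8.2]]
y = [3, 6, 9, 12]
ridge = Ridge(alpha=1.0).fit(X, y)   # shrinks weights smoothly toward zero
lasso = Lasso(alpha=0.1).fit(X, y)   # can drive some weights exactly to zero
print(ridge.coef_, lasso.coef_)
```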
Explanation: Dropout is a regularization technique in neural networks, not an ensemble method.
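A sketch of the training-versus-inference behavior, assuming PyTorch:

```python
# Dropout randomly zeroes activations during training only.
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(8)
drop.train()
print(drop(x))    # roughly half the entries zeroed, survivors scaled by 1/(1-p)
drop.eval()
print(drop(x))    # at inference, dropout is a no-op
```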
Explanation: Cross-validation uses multiple folds to reduce variance in performance estimates.
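A 5-fold sketch with scikit-learn:

```python
# 5-fold cross-validation averages scores over multiple train/test splits.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5)
print(scores.mean(), scores.std())   # mean accuracy and its spread across folds
```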
Explanation: Unsupervised learning discovers hidden structures in unlabeled data.