- multiclass classification diagram
- linear regression and scatter plots
- pivot table
- K-means cluster diagram

- The product would look for new patterns in spam messages.
- The product could go through the keyword list much more quickly.
- The product could have a much longer keyword list.
- The product could find spam messages using far fewer keywords.

- data cluster
- Supervised set
- big data
- test data

- patterns
- programs
- rules
- data

- It was consistently wrong.
- It was inconsistently wrong.
- It was consistently right.
- It was equally right and wrong.

- Find labeled data of sunny days so that the machine will learn to identify bad weather.
- Use unsupervised learning to have the machine look for anomalies in a massive weather database.
- Create a training set of unusual patterns and ask the machine learning algorithms to classify them.
- Create a training set of normal weather and have the machine look for similar patterns.

- regression
- boosting
- bagging
- stacking

`\_\_\_\_` looks at the relationship between predictors and your outcome.

- Regression analysis
- K-means clustering
- Big data
- Unsupervised learning

- a data entry system
- a data warehouse system
- a massive data repository
- a product recommendation system

- a decision tree
- reinforcement learning
- K-nearest neighbor
- a clear trendline

- The algorithms would help the meters access the internet.
- The algorithms would improve the wireless connectivity.
- The algorithms would help your organization see patterns of the data.
- By using machine learning algorithms, you are creating an IoT device.

`\_\_\_\_`

- regression
- clustering
- classification
- dimensionality reduction

- It naively assumes that you will have no data.
- It does not even try to create accurate predictions.
- It naively assumes that the predictors are independent from one another.
- It naively assumes that all the predictors depend on one another.
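The options above concern the Naive Bayes classifier, whose "naive" assumption is that the predictors are independent of one another given the class. A minimal, self-contained sketch of that assumption (the toy spam data and feature names are invented for illustration):

```python
# Sketch of Naive Bayes: the class-conditional probability of a feature
# vector is taken to be the PRODUCT of per-feature probabilities -- the
# "naive" independence assumption.
from collections import defaultdict

def train_naive_bayes(samples):
    """samples: list of (feature_tuple, label) pairs."""
    priors = defaultdict(int)   # label -> count
    cond = defaultdict(int)     # (label, position, value) -> count
    for features, label in samples:
        priors[label] += 1
        for i, v in enumerate(features):
            cond[(label, i, v)] += 1
    return priors, cond, len(samples)

def predict(features, priors, cond, total):
    best, best_p = None, -1.0
    for label, count in priors.items():
        p = count / total  # class prior
        for i, v in enumerate(features):
            # the naive step: multiply per-feature probabilities as if
            # the predictors were independent (with add-one smoothing)
            p *= (cond[(label, i, v)] + 1) / (count + 2)
        if p > best_p:
            best, best_p = label, p
    return best

# toy "spam keyword" data: features are (has_offer_word, has_link)
data = [((1, 1), "spam"), ((1, 0), "spam"), ((0, 1), "ham"), ((0, 0), "ham")]
priors, cond, total = train_naive_bayes(data)
print(predict((1, 1), priors, cond, total))  # "spam"
```

The independence assumption rarely holds exactly, but it keeps the model cheap to train and surprisingly effective in practice.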

- It is a linear regression chart.
- It is a supervised trendline chart.
- It is a decision tree.
- It is a clustering trend chart.

- Artificial intelligence focuses on classification, while machine learning is about clustering data.
- Machine learning is a type of artificial intelligence that relies on learning through data.
- Artificial intelligence is a form of unsupervised machine learning.
- Machine learning and artificial intelligence are the same thing.

- The algorithms are typically run on more powerful servers.
- The algorithms are better at seeing patterns in the data.
- Machine learning servers can host larger databases.
- The algorithms can run on unstructured data.

- Create an artificial neural network that would host the company directory.
- Use machine learning to better predict risk.
- Create an algorithm that consolidates all of your Excel spreadsheets into one data lake.
- Use machine learning and big data to research salary requirements.

- Training Set
- Unsupervised Data
- Supervised Learning
- Binary Classification

- You will almost certainly underfit the model.
- You will pick the wrong algorithm.
- You might not have enough data for both.
- You will almost certainly overfit the model.

- Machine learning algorithms are based on math and statistics, and so by definition will be unbiased.
- There is no way to identify bias in the data.
- Machine learning algorithms are powerful enough to eliminate bias from the data.
- All human-created data is biased, and data scientists need to account for that.

**Explanation**: While machine learning algorithms themselves don’t have bias, the data they learn from can.

- The predictions of one model become the inputs of another.
- You use different versions of machine learning algorithms.
- You use several machine learning algorithms to boost your results.
- You stack your training set and testing set together.
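The correct idea here, stacking, can be sketched in a few lines: the predictions of base models become the input features of a meta-model. The rule-based "models" below are hypothetical stand-ins for fitted estimators:

```python
# Toy sketch of ensemble stacking: base models predict, and a meta-model
# consumes those predictions (not the raw input) to make the final call.

def base_model_a(x):
    # hypothetical base model: a simple threshold rule
    return 1 if x > 5 else 0

def base_model_b(x):
    # hypothetical base model: a parity rule
    return x % 2

def meta_model(pred_a, pred_b):
    # the meta-model's features ARE the base models' outputs
    return 1 if (pred_a + pred_b) >= 1 else 0

def stacked_predict(x):
    return meta_model(base_model_a(x), base_model_b(x))

print(stacked_predict(7))  # both base models fire -> 1
print(stacked_predict(2))  # neither fires -> 0
```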

- training data
- linear regression
- big data
- test data

- centroid reinforcement
- K-nearest neighbor
- binary classification
- K-means clustering

**Explanation**: The problem explicitly states “clustering”.
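Since several questions here turn on K-means clustering, a minimal 1-D sketch may help: centroids start from initial guesses (often randomly selected points) and are repeatedly moved to the mean of their assigned cluster. The data values and starting centroids below are invented for illustration:

```python
# Minimal 1-D K-means sketch: assign each point to its nearest centroid,
# then move each centroid to the mean of its assigned points, and repeat.

def kmeans_1d(points, centroids, iters=10):
    for _ in range(iters):
        clusters = {c: [] for c in range(len(centroids))}
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda c: abs(p - centroids[c]))
            clusters[nearest].append(p)
        # empty clusters keep their old centroid
        centroids = [sum(m) / len(m) if m else centroids[c]
                     for c, m in clusters.items()]
    return sorted(centroids)

points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]
print(kmeans_1d(points, centroids=[0.0, 5.0]))  # [1.0, 9.0]
```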

- Include training email data from all employees.
- Include training email data from new employees.
- Include training email data from seasoned employees.
- Include training email data from employees who write the majority of internal emails.

- unsupervised machine learning
- binary classification
- supervised machine learning
- reinforcement learning

- K-nearest neighbor
- a decision tree
- a linear regression
- a K-means cluster

Note: there are cluster centres (C0, C1, C2).

- aggregated trees
- boosted trees
- bagged trees
- stacked trees

- semi-supervised learning
- supervised learning
- reinforcement learning
- unsupervised learning

- In K-means clustering, the initial centroids are sometimes randomly selected.
- K-means clustering is often used in supervised machine learning.
- The number of clusters is always randomly selected.
- To be accurate, you want your centroids outside of the cluster.

- supervised learning
- semi-supervised learning
- reinforcement learning
- unsupervised learning

- random forest
- logistic regression
- KNN
- deep neural network

- Higher K values will produce noisy data.
- Higher K values lower the bias but increase the variance.
- Higher K values need a larger training set.
- Higher K values lower the variance but increase the bias.
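The K-versus-bias/variance trade-off above can be seen in a toy K-nearest-neighbors sketch: a larger K averages over more neighbors, which smooths predictions (lower variance) at the cost of sensitivity to local structure (higher bias). The data values are invented for illustration:

```python
# 1-D K-nearest-neighbors sketch: predict the majority label among the
# K training points closest to the query.
from collections import Counter

def knn_predict(train, x, k):
    """train: list of (value, label) pairs."""
    nearest = sorted(train, key=lambda t: abs(t[0] - x))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [(1, "a"), (2, "a"), (3, "b"), (10, "b"), (11, "b")]
print(knn_predict(train, 3.1, k=1))  # single nearest point (3) -> "b"
print(knn_predict(train, 3.1, k=3))  # neighbors {3, 2, 1} -> majority "a"
```

With K=1 the prediction follows the single nearest point (high variance); with K=3 it is smoothed by the neighborhood (higher bias).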

- supervised learning
- unsupervised learning
- reinforcement learning
- semi-unsupervised learning

- It uses unsupervised learning to cluster together transactions and unsupervised learning to classify the customers.
- It uses only unsupervised machine learning.
- It uses supervised learning to create clusters and unsupervised learning for classification.
- It uses reinforcement learning to classify the customers.

- high variance and low bias
- low bias and low variance
- low variance and high bias
- high bias and high variance

- No, data model bias and variance are only a challenge with reinforcement learning.
- Yes, data model bias is a challenge when the machine creates clusters.
- Yes, data model variance trains the unsupervised machine learning algorithm.
- No, data model bias and variance involve supervised learning.

- K-means
- Logistic regression
- Linear regression
- Principal Component Analysis (PCA)

**Explanation:** Logistic regression is far better than linear regression at binary classification since it biases the result toward one extreme or the other. K-means clustering can be used for classification but is not as accurate in most scenarios.
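The explanation above can be illustrated with the logistic (sigmoid) function, which squashes any linear score into (0, 1) and pushes confident scores toward the extremes. The weights below are hypothetical, chosen only to show the behavior:

```python
# Sketch of why logistic regression suits binary classification: the
# sigmoid maps an unbounded linear score to a probability in (0, 1).
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict_proba(x, w, b):
    # w and b are hypothetical, untrained parameters for illustration
    return sigmoid(w * x + b)

print(predict_proba(5.0, w=2.0, b=-4.0))   # pushed close to 1
print(predict_proba(-5.0, w=2.0, b=-4.0))  # pushed close to 0
```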

- supervised learning
- data
- unsupervised learning
- algorithms

**Explanation**: This one is pretty straightforward and a fundamental concept.

- It will take too long for programmers to scrub poor data.
- If the data is high quality, the algorithms will be easier to develop.
- Low-quality data requires much more processing power than high-quality data.
- If the data is low quality, you will get inaccurate results.

- share common characteristics
- be part of the root node
- have a Euclidean connection
- be part of the same cluster

- reinforcement machine learning
- unsupervised machine learning
- supervised machine learning
- semi-supervised machine learning

- You will be able to prioritize different classes of drugs, such as antibiotics.
- You can create a training set of drugs you would like to discover.
- The algorithms will cluster together drugs that have similar traits.
- Human experts can create classes of drugs to help guide discovery.

**Explanation**: This one is similar to an example talked about in the Stanford Machine Learning course.

- The system went from supervised learning to reinforcement learning.
- The system evolved from supervised learning to unsupervised learning.
- The system evolved from unsupervised learning to supervised learning.
- The system evolved from reinforcement learning to unsupervised learning.

- It could better protect against undiscovered threats.
- It would very likely lower the hardware requirements.
- It would substantially shorten your development time.
- It would increase the speed of the appliance.

- Use reinforcement learning to reward the system when a new person participates
- Unsupervised machine learning to cluster together people based on patterns the machine discovers
- Supervised machine learning to sort people by demographic data
- Supervised machine learning to classify people by body temperature

- statistics
- structured data
- availability
- algorithms

- unsupervised learning
- complex cluster
- multiclass classification
- K-nearest neighbor

- a deep learning artificial neural network that relies on petabytes of data
- an unsupervised machine learning system that clusters together the best candidates
- You would not recommend machine learning for this project
- a supervised machine learning system that classifies applicants into existing groups // we do not need to find the best candidates; we just need to classify job applicants into existing categories

- regression analysis
- unsupervised learning
- high-variance modeling
- ensemble modeling

- machine learning algorithm
- training set
- big data test set
- data cluster

- You are overfitting the model to the data
- You need a smaller training set
- You are underfitting the model to the data
- You need a larger training set

- an unsupervised machine learning system that clusters together the best candidates.
- you would not recommend a machine learning system for this type of project.
- a deep learning artificial neural network that relies on petabytes of employment data.
- a supervised machine learning system that classifies applicants into existing groups.

- you use it as your training set.
- You label it big data.
- You split it into a training set and test set.
- You use it as your test set.
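The train/test split named above can be sketched in a few lines; the 25% test ratio and the seed are arbitrary choices for illustration:

```python
# Sketch of a train/test split: hold out a fraction of labeled data so
# the model can be evaluated on examples it never trained on.
import random

def train_test_split(data, test_ratio=0.25, seed=42):
    shuffled = data[:]
    random.Random(seed).shuffle(shuffled)  # shuffle a copy, not the original
    n_test = int(len(shuffled) * test_ratio)
    return shuffled[n_test:], shuffled[:n_test]

data = list(range(100))
train, test = train_test_split(data)
print(len(train), len(test))  # 75 25
```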

- semi-supervised machine learning
- supervised machine learning
- unsupervised machine learning
- reinforcement learning

- Batch learning
- Offline learning
- Both A and B
- None of the above

- Decision Tree
- Linear Regression
- PCA
- Naive Bayesian

- Decision Trees
- K-means clustering
- Density-based clustering
- Model-based clustering

- The entropy function.
- The squared error.
- The cross-entropy function.
- The number of mistakes.

- Higher
- Same
- Lower
- It could be any of the above

- good fitting
- overfitting
- underfitting
- all of the above

- This is a multiclass classification challenge.
- This is a multi-binary classification challenge.
- This is a binary classification challenge.
- This is a reinforcement classification challenge.

**Explanation**: The data is being classified into more than two categories or classes, so this is a multiclass classification challenge.

- There is too little data in your training set.
- There is too much data in your training set.
- There is not a lot of variance but there is a high bias. // Underfitted data models usually have high bias and low variance. Overfitted data models have low bias and high variance.
- Your model has low bias but high variance.

- Include Asian faces in your test data and retrain your model.
- Retrain your model with updated hyperparameter values.
- Retrain your model with smaller batch sizes.
- Include Asian faces in your training data and retrain your model. // The answer is self-explanatory: if Asian users are the only group of people making the complaint, then the training data should have more Asian faces.

- Your training set is too large.
- You are underfitting the model to the data.
- You are overfitting the model to the data.
- Your machine is creating inaccurate clusters.

**Explanation**: This question is very similar to Q49 but involves a polar opposite scenario.

// I find that answer somewhat vague and unsettled. A small number of matches does not necessarily imply that the model overfits, especially given 500 (!) independent variables. To me, it sounds more reasonable that the threshold (matching) criterion might be too tight, thus allowing only a small number of matches to occur. So a solution could be either softening the threshold criterion or increasing the number of candidates.

- What kernels extract
- Feature Maps
- How kernels look

- 76%
- 88%
- 12%
- 0.0008%

- Wise fill-in of controlled random values
- Replace missing values with averaging across all samples
- Remove defective samples
- Imputation
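Mean imputation, one of the options above, can be sketched in a few lines (missing values are modeled here as `None`; the numbers are invented for illustration):

```python
# Sketch of mean imputation: replace each missing value in a column with
# the average of the observed values.

def impute_mean(column):
    observed = [v for v in column if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in column]

print(impute_mean([1.0, None, 3.0, None]))  # [1.0, 2.0, 3.0, 2.0]
```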

- SVM
- PCA
- LDA
- TSNE

- Capturing complex non-linear patterns
- Transforming continuous values into “ON” (1) or “OFF” (0) values
- Help avoiding the vanishing/exploding gradient problem
- Their ability to activate each neuron individually.

- kullback-leibler (KL) loss
- Binary Crossentropy
- Mean Squared Error (MSE)
- Any L2 loss
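Mean Squared Error, listed above, is simply the average of the squared differences between predictions and targets; a minimal sketch with invented numbers:

```python
# Sketch of Mean Squared Error (MSE), the usual regression loss.

def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

print(mse([1.0, 2.0, 3.0], [1.0, 2.5, 2.5]))  # (0 + 0.25 + 0.25) / 3
```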

| no. | Red | Blue | Green |
| --- | --- | --- | --- |
| **1.** | Validation error | Training error | Test error |
| **2.** | Training error | Test error | Validation error |
| **3.** | Optimal error | Validation error | Test error |
| **4.** | Validation error | Training error | Optimal error |

- 1
- 2
- 3
- 4

- tree nodes
- predictors // these nodes decide whether someone goes to the beach or not; for example, if it’s rainy, people will mostly refrain from going to the beach
- root nodes
- deciders
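The "predictors" answer (see the inline comment above) can be illustrated with a toy hand-written decision rule; the thresholds below are hypothetical, not a trained tree:

```python
# Sketch of decision-tree predictor nodes: each internal node tests one
# predictor (rain, then temperature) to route the input toward a leaf.

def go_to_beach(rainy, temperature_c):
    if rainy:                 # predictor node 1: is it raining?
        return False
    if temperature_c < 20:    # predictor node 2: is it warm enough?
        return False
    return True               # leaf: go to the beach

print(go_to_beach(rainy=False, temperature_c=28))  # True
print(go_to_beach(rainy=True, temperature_c=28))   # False
```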

- Set up a cluster of machines to label the images
- Create a subset of the images and label them yourself
- Use naive Bayes to automatically generate labels.
- Hire people to manually label the images

- low bias, high variance
- high bias, low variance
- high bias, high variance
- low bias, low variance // since the data is accurately classified and is neither overfitting nor underfitting the dataset

- structured data
- algorithms
- time
- computer scientists

- Scikit-learn
- PyTorch
- TensorFlow Lite
- TensorFlow

- a spreadsheet
- 20,000 recorded voicemail messages
- 100,000 images of automobiles
- hundreds of gigabytes of audio files

- confidence
- alpha
- power
- significance

- naive Bayes classifier
- K-nearest neighbor
- multiclass classification
- decision tree

- when the machine learning algorithms do most of the programming
- when you don’t do any data scrubbing
- when the learning happens continuously
- when you run your computation in one big instance at the beginning

- supervised machine learning with rewards
- a type of unsupervised learning that relies heavily on a well-established model
- a type of reinforcement learning where accuracy degrades over time
- a type of reinforcement learning that focuses on rewards