McqMate

Q. Which of the following is a reasonable way to select the number of principal components k?

A. Choose k to be the smallest value so that at least 99% of the variance is retained.

B. Choose k to be 99% of m (k = 0.99*m, rounded to the nearest integer).

C. Choose k to be the largest value so that 99% of the variance is retained.

D. Use the elbow method.

Answer» A. Choose k to be the smallest value so that at least 99% of the variance is retained.
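The criterion in option A can be sketched in code. The following is a minimal NumPy illustration (the function name `choose_k` and the toy data are ours, not from the question): it computes the per-component variance via SVD of the centered data and returns the smallest k whose cumulative explained-variance ratio reaches the retention threshold.

```python
import numpy as np

def choose_k(X, retain=0.99):
    """Smallest k such that the first k principal components
    retain at least `retain` of the total variance."""
    Xc = X - X.mean(axis=0)                # center the data
    s = np.linalg.svd(Xc, compute_uv=False)  # singular values
    var = s ** 2                           # variance along each component
    ratio = np.cumsum(var) / var.sum()     # cumulative explained-variance ratio
    # index of the first entry >= retain, converted to a count
    return int(np.searchsorted(ratio, retain) + 1)

# Toy example: 3-D data that is essentially 1-dimensional plus tiny noise,
# so a single component should already retain >= 99% of the variance.
rng = np.random.default_rng(0)
t = rng.normal(size=(200, 1))
X = np.hstack([t, 2 * t, -t]) + 1e-6 * rng.normal(size=(200, 3))
print(choose_k(X))
```

Note how this makes the contrast with option C concrete: many values of k retain 99% of the variance (any k at or above the threshold), so the useful choice is the smallest such k, which gives the greatest dimensionality reduction.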

Related MCQs in Machine Learning (ML):

- The most popularly used dimensionality reduction algorithm is Principal Component Analysis (PCA). Which of the following is/are true about PCA? 1. PCA is an unsupervised method. 2. It searches for the directions in which the data have the largest variance. 3. The maximum number of principal components <= the number of features. 4. All principal components are orthogonal to each other.
- In PCA, the number of input dimensions is equal to the number of principal components.
- ______allows exploiting the natural sparsity of data while extracting principal components.
- The selling price of a house depends on many factors. For example, it depends on the number of bedrooms, the number of kitchens, the number of bathrooms, the year the house was built, and the square footage of the lot. Given these factors, predicting the selling price of the house is an example of a ____________ task.
- The number of iterations in apriori ___________ Select one: a. b. c. d.
- Which of the following are components of generalization Error?
- Which of the following statements is/are true about the k-NN algorithm? 1) k-NN performs much better if all of the data have the same scale. 2) k-NN works well with a small number of input variables (p), but struggles when the number of inputs is very large. 3) k-NN makes no assumptions about the functional form of the problem being solved.
- Having built a decision tree, we are using reduced error pruning to reduce the size of the tree. We select a node to collapse. For this particular node, on the left branch, there are 3 training data points with the following outputs: 5, 7, 9.6 and for the right branch, there are four training data points with the following outputs: 8.7, 9.8, 10.5, 11. What were the original responses for data points along the two branches (left & right respectively) and what is the new response after collapsing the node?
- Suppose, you have 2000 different models with their predictions and want to ensemble predictions of best x models. Now, which of the following can be a possible method to select the best x models for an ensemble?