730+ Machine Learning (ML) Solved MCQs

Machine learning is a subset of artificial intelligence that uses algorithms and statistical models to enable a system to improve its performance on a specific task over time. In other words, machine learning algorithms are designed to allow a computer to learn from data without being explicitly programmed.
101.

MLE estimates are often undesirable because

A. they are biased
B. they have high variance
C. they are not consistent estimators
D. none of the above
Answer» B. they have high variance
102.

The difference between the actual Y value and the predicted Y value found using a regression equation is called the

A. slope
B. residual
C. outlier
D. scatter plot
Answer» B. residual
103.

Neural networks

A. optimize a convex cost function
B. always output values between 0 and 1
C. can be used for regression as well as classification
D. all of the above
Answer» C. can be used for regression as well as classification
104.

Linear Regression is a _______ machine learning algorithm.

A. supervised
B. unsupervised
C. semi-supervised
D. can't say
Answer» A. supervised
105.

Which of the following methods do we use to find the best fit line for data in Linear Regression?

A. least square error
B. maximum likelihood
C. logarithmic loss
D. both a and b
Answer» A. least square error
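As a quick illustration of the least-squares idea (not part of the original bank), here is a minimal NumPy sketch that recovers a slope and intercept by minimizing the squared error; the data and the generating line y = 2x + 1 are made up for the demonstration.

```python
import numpy as np

# Illustrative noisy data drawn around the (assumed) line y = 2x + 1
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2 * x + 1 + rng.normal(scale=0.5, size=x.size)

# Design matrix with a column of ones for the intercept term
X = np.column_stack([np.ones_like(x), x])

# Least-squares solution: minimizes the sum of squared errors ||Xw - y||^2
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"intercept ≈ {w[0]:.2f}, slope ≈ {w[1]:.2f}")  # ≈ 1 and ≈ 2
```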
106.

Which of the following methods do we use to best fit the data in Logistic Regression?

A. least square error
B. maximum likelihood
C. jaccard distance
D. both a and b
Answer» B. maximum likelihood
107.

Lasso can be interpreted as least-squares linear regression where

A. weights are regularized with the l1 norm
B. the weights have a gaussian prior
C. weights are regularized with the l2 norm
D. the solution algorithm is simpler
Answer» A. weights are regularized with the l1 norm
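To see the L1 effect concretely, here is a hedged scikit-learn sketch (assuming scikit-learn is installed; the data and alpha value are arbitrary): Lasso drives the weights of irrelevant features towards exactly zero, which ordinary least squares does not.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
# Only the first two features actually influence y
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=100)

ols = LinearRegression().fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)   # alpha scales the l1 penalty

print("OLS weights:  ", ols.coef_.round(3))
print("Lasso weights:", lasso.coef_.round(3))  # irrelevant weights driven to ~0
```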
108.

Which of the following evaluation metrics can be used to evaluate a model while modeling a continuous output variable?

A. auc-roc
B. accuracy
C. logloss
D. mean-squared-error
Answer» D. mean-squared-error
109.

Simple regression assumes a __________ relationship between the input attribute and output attribute.

A. quadratic
B. inverse
C. linear
D. reciprocal
Answer» C. linear
110.

In the regression equation Y = 75.65 + 0.50X, the intercept is

A. 0.5
B. 75.65
C. 1
D. indeterminable
Answer» B. 75.65
111.

The selling price of a house depends on many factors. For example, it depends on the number of bedrooms, the number of kitchens, the number of bathrooms, the year the house was built, and the square footage of the lot. Given these factors, predicting the selling price of the house is an example of ____________ task.

A. binary classification
B. multilabel classification
C. simple linear regression
D. multiple linear regression
Answer» D. multiple linear regression
112.

Suppose you find that your linear regression model is underfitting the data. In such a situation, which of the following options would you consider?

A. you will add more features
B. you will remove some features
C. all of the above
D. none of the above
Answer» A. you will add more features
113.

We have been given a dataset with n records, with input attribute x and output attribute y. Suppose we use a linear regression method to model this data. To test our linear regressor, we split the data into training and test sets randomly. Now we increase the training set size gradually. As the training set size increases, what do you expect will happen to the mean training error?

A. increase
B. decrease
C. remain constant
D. can’t say
Answer» A. increase
114.

We have been given a dataset with n records, with input attribute x and output attribute y. Suppose we use a linear regression method to model this data. To test our linear regressor, we split the data into training and test sets randomly. What do you expect will happen to bias and variance as you increase the size of the training data?

A. bias increases and variance increases
B. bias decreases and variance increases
C. bias decreases and variance decreases
D. bias increases and variance decreases
Answer» D. bias increases and variance decreases
115.

Regarding bias and variance, which of the following statements are true? (Here ‘high’ and ‘low’ are relative to the ideal model.)
(i) Models which overfit are more likely to have high bias

(ii) Models which overfit are more likely to have low bias

(iii) Models which overfit are more likely to have high variance

(iv) Models which overfit are more likely to have low variance

A. (i) and (ii)
B. (ii) and (iii)
C. (iii) and (iv)
D. none of these
Answer» B. (ii) and (iii)
116.

Which of the following indicates the fundamental principle of least squares?

A. arithmetic mean should be maximized
B. arithmetic mean should be zero
C. arithmetic mean should be neutralized
D. arithmetic mean should be minimized
Answer» D. arithmetic mean should be minimized
117.

Suppose that we have N independent variables (X1, X2, … Xn) and the dependent variable is Y. Now imagine that you are applying linear regression by fitting the best-fit line using least square error on this data. You found that the correlation coefficient for one of its variables (say X1) with Y is 0.95.

A. relation between the x1 and y is weak
B. relation between the x1 and y is strong
C. relation between the x1 and y is neutral
D. correlation can’t judge the relationship
Answer» B. relation between the x1 and y is strong
118.

In terms of bias and variance, which of the following is true when you fit a degree-2 polynomial?

A. bias will be high, variance will be high
B. bias will be low, variance will be high
C. bias will be high, variance will be low
D. bias will be low, variance will be low
Answer» C. bias will be high, variance will be low
119.

Which of the following statements are true for a design matrix X ∈ Rn×d with d > n? (The rows are n sample points and the columns represent d features.)

A. least-squares linear regression computes the weights w = (XᵀX)⁻¹Xᵀy
B. the sample points are linearly separable
C. x has exactly d − n eigenvectors with eigenvalue zero
D. at least one principal component direction is orthogonal to a hyperplane that contains all the sample points
Answer» D. at least one principal component direction is orthogonal to a hyperplane that contains all the sample points
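The failure of option A when d > n can be checked numerically; this small sketch (with illustrative sizes) shows that XᵀX is rank-deficient, so the normal-equation inverse does not exist.

```python
import numpy as np

n, d = 5, 8                            # more features than sample points
X = np.random.default_rng(0).normal(size=(n, d))

XtX = X.T @ X                          # d x d, but rank at most n
print(np.linalg.matrix_rank(XtX))      # -> 5, so the 8x8 matrix is singular
# Hence w = (XᵀX)⁻¹Xᵀy is not computable when d > n; a pseudo-inverse
# or regularization would be needed instead.
```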
120.

Point out the wrong statement.

A. regression through the origin yields an equivalent slope if you center the data first
B. normalizing variables results in the slope being the correlation
C. least squares is not an estimation tool
D. none of the mentioned
Answer» C. least squares is not an estimation tool
121.

Suppose you find that your linear regression model is underfitting the data. In such a situation, which of the following options would you consider?

A. you will add more features
B. you will remove some features
C. all of the above
D. none of the above
Answer» A. you will add more features
122.

If X and Y in a regression model are totally unrelated,

A. the correlation coefficient would be -1
B. the coefficient of determination would be 0
C. the coefficient of determination would be 1
D. the sse would be 0
Answer» B. the coefficient of determination would be 0
123.

Regarding bias and variance, which of the following statements are true? (Here ‘high’ and ‘low’ are relative to the ideal model.)
(i) Models which overfit are more likely to have high bias
(ii) Models which overfit are more likely to have low bias
(iii) Models which overfit are more likely to have high variance
(iv) Models which overfit are more likely to have low variance

A. (i) and (ii)
B. (ii) and (iii)
C. (iii) and (iv)
D. none of these
Answer» B. (ii) and (iii)
124.

Which of the following statements are true for a design matrix X ∈ Rn×d with d > n? (The rows are n sample points and the columns represent d features.)

A. least-squares linear regression computes the weights w = (XᵀX)⁻¹Xᵀy
B. the sample points are linearly separable
C. x has exactly d − n eigenvectors with eigenvalue zero
D. at least one principal component direction is orthogonal to a hyperplane that contains all the sample points
Answer» D. at least one principal component direction is orthogonal to a hyperplane that contains all the sample points
125.

Which of the following is a problem in multiple regression?

A. multicollinearity
B. overfitting
C. both multicollinearity & overfitting
D. underfitting
Answer» C. both multicollinearity & overfitting
126.

How can we best represent ‘support’ for the following association rule: “If X and Y, then Z”.

A. {x,y}/(total number of transactions)
B. {z}/(total number of transactions)
C. {z}/{x,y}
D. {x,y,z}/(total number of transactions)
Answer» D. {x,y,z}/(total number of transactions)
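A small sketch of the support computation (the five transactions below are hypothetical): the support of "if X and Y, then Z" counts transactions containing all of {X, Y, Z}.

```python
# Hypothetical transaction database, purely for illustration
transactions = [
    {"X", "Y", "Z"},
    {"X", "Y"},
    {"X", "Z"},
    {"Y", "Z"},
    {"X", "Y", "Z"},
]

def support(itemset, db):
    """Fraction of transactions that contain every item in `itemset`."""
    return sum(1 for t in db if itemset <= t) / len(db)

# The rule "if X and Y, then Z" is supported by transactions with all three items
print(support({"X", "Y", "Z"}, transactions))  # -> 0.4 (2 of 5 transactions)
```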
127.

Choose the correct statement with respect to ‘confidence’ metric in association rules

A. it is the conditional probability that a randomly selected transaction will include all the items in the consequent given that the transaction includes all the items in the antecedent.
B. a high value of confidence suggests a weak association rule
C. it is the probability that a randomly selected transaction will include all the items in the consequent as well as all the items in the antecedent.
D. confidence is not measured in terms of (estimated) conditional probability.
Answer» A. it is the conditional probability that a randomly selected transaction will include all the items in the consequent given that the transaction includes all the items in the antecedent.
128.

What are tree based classifiers?

A. classifiers which form a tree with each attribute at one level
B. classifiers which perform series of condition checking with one attribute at a time
C. both A and B
D. none of the options
Answer» C. both A and B
129.

What is gini index?

A. it is a type of index structure
B. it is a measure of purity
C. both A and B
D. none of the options
Answer» B. it is a measure of purity
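As a purity measure, the Gini index is 1 − Σ pᵢ²; a minimal sketch (the labels are illustrative):

```python
from collections import Counter

def gini(labels):
    """Gini impurity 1 - sum(p_i^2); 0 means a perfectly pure node."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

print(gini(["a", "a", "a", "a"]))  # 0.0  -> pure node
print(gini(["a", "a", "b", "b"]))  # 0.5  -> maximally impure for two classes
```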
130.

Which of the following sentences are correct in reference to information gain?
a. It is biased towards single-valued attributes
b. It is biased towards multi-valued attributes
c. ID3 makes use of information gain
d. The approach used by ID3 is greedy

A. a and b
B. a and d
C. b, c and d
D. all of the above
Answer» C. b, c and d
131.

Multivariate split is where the partitioning of tuples is based on a combination of attributes rather than on a single attribute.

A. true
B. false
Answer» A. true
132.

Gain ratio tends to prefer unbalanced splits in which one partition is much smaller than the other

A. true
B. false
Answer» A. true
133.

The Gini index is not biased towards multivalued attributes.

A. true
B. false
Answer» B. false
134.

Gini index does not favour equal sized partitions.

A. true
B. false
Answer» B. false
135.

When the number of classes is large Gini index is not a good choice.

A. true
B. false
Answer» A. true
136.

Attribute selection measures are also known as splitting rules.

A. true
B. false
Answer» A. true
137.

This clustering approach initially assumes that each data instance represents a single cluster.

A. expectation maximization
B. k-means clustering
C. agglomerative clustering
D. conceptual clustering
Answer» C. agglomerative clustering
138.

Which statement is true about the K-Means algorithm?

A. the output attribute must be categorical
B. all attribute values must be categorical
C. all attributes must be numeric
D. attribute values may be either categorical or numeric
Answer» C. all attributes must be numeric
139.

KDD represents extraction of

A. data
B. knowledge
C. rules
D. model
Answer» B. knowledge
140.

The most general form of distance is

A. manhattan
B. euclidean
C. mean
D. minkowski
Answer» D. minkowski
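Minkowski is the general form because Manhattan and Euclidean fall out of it as the special cases p = 1 and p = 2; a short sketch with made-up points:

```python
def minkowski(p, x, y):
    """Minkowski distance of order p between two equal-length vectors."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1 / p)

x, y = (0, 0), (3, 4)
print(minkowski(1, x, y))  # 7.0 -> Manhattan distance (p = 1)
print(minkowski(2, x, y))  # 5.0 -> Euclidean distance (p = 2)
```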
141.

Which of the following algorithms comes under classification?

A. apriori
B. brute force
C. dbscan
D. k-nearest neighbor
Answer» D. k-nearest neighbor
142.

Hierarchical agglomerative clustering is typically visualized as?

A. dendrogram
B. binary trees
C. block diagram
D. graph
Answer» A. dendrogram
143.

The _______ step eliminates the extensions of (k-1)-itemsets which are not found to be frequent, from being considered for counting support

A. partitioning
B. candidate generation
C. itemset eliminations
D. pruning
Answer» D. pruning
144.

The distance between two points calculated using Pythagoras theorem is

A. supremum distance
B. euclidean distance
C. linear distance
D. manhattan distance
Answer» B. euclidean distance
145.

Which one of these is not a tree based learner?

A. cart
B. id3
C. bayesian classifier
D. random forest
Answer» C. bayesian classifier
146.

Which one of these is a tree based learner?

A. rule based
B. bayesian belief network
C. bayesian classifier
D. random forest
Answer» D. random forest
147.

What is the approach of the basic algorithm for decision tree induction?

A. greedy
B. top down
C. procedural
D. step by step
Answer» A. greedy
148.

Which of the following classifications would best suit the student performance classification systems?

A. if...then... analysis
B. market-basket analysis
C. regression analysis
D. cluster analysis
Answer» A. if...then... analysis
149.

Given that we can select the same feature multiple times during the recursive partitioning of the input space, is it always possible to achieve 100% accuracy on the training data (given that we allow trees to grow to their maximum size) when building decision trees?

A. yes
B. no
Answer» B. no
150.

This clustering algorithm terminates when mean values computed for the current iteration of the algorithm are identical to the computed mean values for the previous iteration

A. k-means clustering
B. conceptual clustering
C. expectation maximization
D. agglomerative clustering
Answer» A. k-means clustering
151.

The number of iterations in Apriori ___________

A. increases with the size of the data
B. decreases with the increase in size of the data
C. increases with the size of the maximum frequent set
D. decreases with increase in size of the maximum frequent set
Answer» C. increases with the size of the maximum frequent set
152.

The set of frequent item sets is

A. superset of only closed frequent item sets
B. superset of only maximal frequent item sets
C. subset of maximal frequent item sets
D. superset of both closed frequent item sets and maximal frequent item sets
Answer» D. superset of both closed frequent item sets and maximal frequent item sets
153.

A good clustering method will produce high quality clusters with

A. high inter class similarity
B. low intra class similarity
C. high intra class similarity
D. no inter class similarity
Answer» C. high intra class similarity
154.

Which statement is true about neural network and linear regression models?

A. both techniques build models whose output is determined by a linear sum of weighted input attribute values
B. the output of both models is a categorical attribute value
C. both models require numeric attributes to range between 0 and 1
D. both models require input attributes to be numeric
Answer» D. both models require input attributes to be numeric
155.

Which Association Rule would you prefer?

A. high support and medium confidence
B. high support and low confidence
C. low support and high confidence
D. low support and low confidence
Answer» C. low support and high confidence
156.

In a Rule based classifier, if there is a rule for each combination of attribute values, what do you call that rule set R?

A. exhaustive
B. inclusive
C. comprehensive
D. mutually exclusive
Answer» A. exhaustive
157.

The apriori property means

A. if a set cannot pass a test, its supersets will also fail the same test
B. to decrease the efficiency, do level-wise generation of frequent item sets
C. to improve the efficiency, do level-wise generation of frequent item sets
D. if a set can pass a test, its supersets will fail the same test
Answer» A. if a set cannot pass a test, its supersets will also fail the same test
158.

If an item set ‘XYZ’ is a frequent item set, then all subsets of that frequent item set are

A. undefined
B. not frequent
C. frequent
D. can not say
Answer» C. frequent
159.

Clustering is ___________ and is an example of ____________ learning

A. predictive and supervised
B. predictive and unsupervised
C. descriptive and supervised
D. descriptive and unsupervised
Answer» D. descriptive and unsupervised
160.

To determine association rules from frequent item sets

A. only minimum confidence needed
B. neither support nor confidence needed
C. both minimum support and confidence are needed
D. minimum support is needed
Answer» C. both minimum support and confidence are needed
161.

If {A,B,C,D} is a frequent itemset, candidate rules which is not possible is

A. c → a
B. d → abcd
C. a → bc
D. b → adc
Answer» B. d → abcd
162.

Which Association Rule would you prefer?

A. high support and low confidence
B. low support and high confidence
C. low support and low confidence
D. high support and medium confidence
Answer» B. low support and high confidence
163.

This clustering algorithm terminates when mean values computed for the current iteration of the algorithm are identical to the computed mean values for the previous iteration

A. conceptual clustering
B. k-means clustering
C. expectation maximization
D. agglomerative clustering
Answer» B. k-means clustering
164.

Classification rules are extracted from _____________

A. decision tree
B. root node
C. branches
D. siblings
Answer» A. decision tree
165.

What does K refer to in the K-Means algorithm, which is a non-hierarchical clustering approach?

A. complexity
B. fixed value
C. no of iterations
D. number of clusters
Answer» D. number of clusters
166.

How will you counter over-fitting in decision tree?

A. by pruning the longer rules
B. by creating new rules
C. both ‘by pruning the longer rules’ and ‘by creating new rules’
D. none of the options
Answer» A. by pruning the longer rules
167.

What are the two steps of tree pruning?

A. pessimistic pruning and optimistic pruning
B. postpruning and prepruning
C. cost complexity pruning and time complexity pruning
D. none of the options
Answer» B. postpruning and prepruning
168.

Which of the following sentences are true?

A. in pre-pruning a tree is “pruned” by halting its construction early
B. a pruning set of class labelled tuples is used to estimate cost complexity
C. the best pruned tree is the one that minimizes the number of encoding bits
D. all of the above
Answer» D. all of the above
169.

Assume that you are given a data set and a neural network model trained on the data set. You are asked to build a decision tree model with the sole purpose of understanding/interpreting the built neural network model. In such a scenario, which among the following measures would you concentrate most on optimising?

A. accuracy of the decision tree model on the given data set
B. f1 measure of the decision tree model on the given data set
C. fidelity of the decision tree model, which is the fraction of instances on which the neural network and the decision tree give the same output
D. comprehensibility of the decision tree model, measured in terms of the size of the corresponding rule set
Answer» C. fidelity of the decision tree model, which is the fraction of instances on which the neural network and the decision tree give the same output
170.

Which of the following properties are characteristic of decision trees?
(a) High bias
(b) High variance
(c) Lack of smoothness of prediction surfaces
(d) Unbounded parameter set

A. a and b
B. a and d
C. b, c and d
D. all of the above
Answer» C. b, c and d
171.

To control the size of the tree, we need to control the number of regions. One approach to do this would be to split tree nodes only if the resultant decrease in the sum of squares error exceeds some threshold. For the described method, which among the following are true?
(a) It would, in general, help restrict the size of the trees
(b) It has the potential to affect the performance of the resultant regression/classification model
(c) It is computationally infeasible

A. a and b
B. a and d
C. b, c and d
D. all of the above
Answer» A. a and b
172.

Which among the following statements best describes our approach to learning decision trees?

A. identify the best partition of the input space and response per partition to minimise sum of squares error
B. identify the best approximation of the above by the greedy approach (to identifying the partitions)
C. identify the model which gives the best performance using the greedy approximation (option (b)) with the smallest partition scheme
D. identify the model which gives performance close to the best greedy approximation performance (option (b)) with the smallest partition scheme
Answer» D. identify the model which gives performance close to the best greedy approximation performance (option (b)) with the smallest partition scheme
173.

Having built a decision tree, we are using reduced error pruning to reduce the size of the tree. We select a node to collapse. For this particular node, on the left branch, there are 3 training data points with the following outputs: 5, 7, 9.6 and for the right branch, there are four training data points with the following outputs: 8.7, 9.8, 10.5, 11. What were the original responses for data points along the two branches (left & right respectively) and what is the new response after collapsing the node?

A. 10.8, 13.33, 14.48
B. 10.8, 13.33, 12.06
C. 7.2, 10, 8.8
D. 7.2, 10, 8.6
Answer» C. 7.2, 10, 8.8
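The arithmetic behind answer C, spelled out (values taken from the question itself):

```python
left  = [5, 7, 9.6]             # training outputs on the left branch
right = [8.7, 9.8, 10.5, 11]    # training outputs on the right branch

mean = lambda xs: sum(xs) / len(xs)
print(mean(left))               # 7.2   -> original left-branch response
print(mean(right))              # 10.0  -> original right-branch response
print(mean(left + right))       # ≈ 8.8 -> new response after collapsing
```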
174.

Suppose on performing reduced error pruning, we collapsed a node and observed an improvement in the prediction accuracy on the validation set.
Which among the following statements are possible in light of the performance improvement observed?
(a) The collapsed node helped overcome the effect of one or more noise affected data points in the training set
(b) The validation set had one or more noise affected data points in the region corresponding to the collapsed node
(c) The validation set did not have any data points along at least one of the collapsed branches
(d) The validation set did have data points adversely affected by the collapsed node

A. a and b
B. a and d
C. b, c and d
D. all of the above
Answer» D. all of the above
175.

Time Complexity of k-means is given by

A. O(mn)
B. O(tkn)
C. O(kn)
D. O(t2kn)
Answer» B. O(tkn)
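The O(tkn) cost is visible in a naive implementation: t iterations, each assigning n points to the nearest of k centroids. A minimal 1-D sketch with made-up data:

```python
import random

def k_means(points, k, iterations):
    """Naive 1-D k-means: each iteration does ~k*n distance computations,
    so the total work is O(tkn) for t iterations."""
    centroids = random.sample(points, k)
    for _ in range(iterations):                       # t iterations
        clusters = [[] for _ in range(k)]
        for p in points:                              # n points, k distances each
            nearest = min(range(k), key=lambda j: abs(p - centroids[j]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

print(sorted(k_means([1.0, 1.2, 0.8, 5.0, 5.2, 4.8], k=2, iterations=10)))
```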
176.

In the Apriori algorithm, if there are 100 frequent 1-itemsets, then the number of candidate 2-itemsets is

A. 100
B. 200
C. 4950
D. 5000
Answer» C. 4950
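The count 4950 is simply the number of unordered pairs that can be formed from the 100 frequent 1-itemsets:

```python
from math import comb

# Candidate 2-itemsets = all unordered pairs of the 100 frequent 1-itemsets
print(comb(100, 2))  # -> 4950, i.e. 100 * 99 / 2
```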
177.

Machine learning techniques differ from statistical techniques in that machine learning methods

A. are better able to deal with missing and noisy data
B. typically assume an underlying distribution for the data
C. have trouble with large-sized datasets
D. are not able to explain their behavior
Answer» A. are better able to deal with missing and noisy data
178.

The probability that a person owns a sports car given that they subscribe to an automotive magazine is 40%. We also know that 3% of the adult population subscribes to an automotive magazine. The probability of a person owning a sports car given that they don’t subscribe to an automotive magazine is 30%. Use this information to compute the probability that a person subscribes to an automotive magazine given that they own a sports car.

A. 0.0368
B. 0.0396
C. 0.0389
D. 0.0398
Answer» B. 0.0396
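The 0.0396 follows directly from Bayes’ theorem; a worked computation using the numbers in the question:

```python
p_sub = 0.03             # P(subscribes to the magazine)
p_car_given_sub = 0.40   # P(owns sports car | subscribes)
p_car_given_not = 0.30   # P(owns sports car | does not subscribe)

# Law of total probability: P(owns sports car)
p_car = p_car_given_sub * p_sub + p_car_given_not * (1 - p_sub)

# Bayes' theorem: P(subscribes | owns sports car)
print(round(p_car_given_sub * p_sub / p_car, 4))  # -> 0.0396
```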
179.

What is the final resultant cluster size in the Divisive algorithm, which is one of the hierarchical clustering approaches?

A. zero
B. three
C. singleton
D. two
Answer» C. singleton
180.

Given a frequent itemset L, If |L| = k, then there are

A. 2^k – 1 candidate association rules
B. 2^k candidate association rules
C. 2^k – 2 candidate association rules
D. 2k – 2 candidate association rules
Answer» C. 2^k – 2 candidate association rules
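The 2^k − 2 count excludes the empty antecedent and the empty consequent; it can be verified by enumeration (the itemset is chosen to match the earlier {A,B,C,D} question):

```python
from itertools import combinations

L = {"A", "B", "C", "D"}                # frequent itemset, k = 4
rules = []
for r in range(1, len(L)):              # antecedent: non-empty proper subset
    for antecedent in combinations(sorted(L), r):
        rules.append((set(antecedent), L - set(antecedent)))

print(len(rules))                       # -> 14 = 2**4 - 2
```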
181.

Which statement is not a true statement?

A. k-means clustering is a linear clustering algorithm
B. k-means clustering aims to partition n observations into k clusters
C. k-nearest neighbor is the same as k-means
D. k-means is sensitive to outliers
Answer» C. k-nearest neighbor is the same as k-means
182.

In which of the following cases will K-Means clustering give poor results?
1. Data points with outliers
2. Data points with different densities
3. Data points with round shapes
4. Data points with non-convex shapes

A. 1 and 2
B. 2 and 3
C. 2 and 4
D. 1, 2 and 4
Answer» D. 1, 2 and 4
183.

What is Decision Tree?

A. flow-chart
B. structure in which internal node represents test on an attribute, each branch represents outcome of test and each leaf node represents class label
C. flow-chart like structure in which internal node represents test on an attribute, each branch represents outcome of test and each leaf node represents class label
D. none of the above
Answer» C. flow-chart like structure in which internal node represents test on an attribute, each branch represents outcome of test and each leaf node represents class label
184.

What are the two steps of tree pruning?

A. pessimistic pruning and optimistic pruning
B. postpruning and prepruning
C. cost complexity pruning and time complexity pruning
D. none of the options
Answer» B. postpruning and prepruning
185.

A database has 5 transactions. Of these, 4 transactions include milk and bread. Further, of the given 4 transactions, 2 transactions include cheese. Find the support percentage for the following association rule “if milk and bread are purchased, then cheese is also purchased”.

A. 0.4
B. 0.6
C. 0.8
D. 0.42
Answer» A. 0.4
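The support arithmetic, using the counts from the question:

```python
total_transactions = 5
milk_bread_cheese = 2    # transactions containing milk, bread and cheese

print(milk_bread_cheese / total_transactions)  # support -> 0.4
# (The rule's confidence would be 2/4 = 0.5, a different quantity.)
```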
186.

Which of the following option is true about k-NN algorithm?

A. it can be used for classification
B. it can be used for regression
C. it can be used in both classification and regression
D. not useful in ml algorithm
Answer» C. it can be used in both classification and regression
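A hedged scikit-learn sketch (assuming scikit-learn is available; the tiny dataset is invented) showing k-NN doing both tasks:

```python
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor

X = [[0], [1], [2], [3]]

# Classification: predict a discrete class label from the nearest neighbours
clf = KNeighborsClassifier(n_neighbors=3).fit(X, [0, 0, 1, 1])
print(clf.predict([[1.5]]))

# Regression: predict a continuous value by averaging the neighbours' targets
reg = KNeighborsRegressor(n_neighbors=3).fit(X, [0.0, 0.5, 1.0, 1.5])
print(reg.predict([[1.5]]))
```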
187.

How do we select the best hyperparameters in tree-based models?

A. measure performance over training data
B. measure performance over validation data
C. both of these
D. random selection of hyper parameters
Answer» B. measure performance over validation data
188.

What is true about K-Means clustering?
1. K-means is extremely sensitive to cluster center initializations
2. Bad initialization can lead to Poor convergence speed
3. Bad initialization can lead to bad overall clustering

A. 1 and 3
B. 1 and 2
C. 2 and 3
D. 1, 2 and 3
Answer» D. 1, 2 and 3
189.

What are tree based classifiers?

A. classifiers which form a tree with each attribute at one level
B. classifiers which perform series of condition checking with one attribute at a time
C. both A and B
D. not possible
Answer» C. both A and B
190.

What is gini index?

A. gini index operates on the categorical target variables
B. it is a measure of purity
C. gini index performs only binary split
D. all of the above (A, B and C)
Answer» D. all of the above (A, B and C)
191.

Tree/Rule based classification algorithms generate ______ rules to perform the classification.

A. if-then.
B. while.
C. do while
D. switch.
Answer» A. if-then.
192.

Decision Tree is

A. flow-chart
B. structure in which internal node represents test on an attribute, each branch represents outcome of test and each leaf node represents class label
C. both a & b
D. class of instance
Answer» C. both a & b
193.

Which of the following is true about Manhattan distance?

A. it can be used for continuous variables
B. it can be used for categorical variables
C. it can be used for categorical as well as continuous
D. it can be used for constants
Answer» A. it can be used for continuous variables
194.

A company has built a kNN classifier that gets 100% accuracy on training data. When they deployed this model on the client side, it was found that the model is not at all accurate. Which of the following might have gone wrong? Note: the model was successfully deployed and no technical issues were found at the client side except for the model performance.

A. it is probably an overfitted model
B. it is probably an underfitted model
C. can’t say
D. wrong client data
Answer» A. it is probably an overfitted model
195.

Which of the following classifications would best suit the student performance classification systems?

A. if...then... analysis
B. market-basket analysis
C. regression analysis
D. cluster analysis
Answer» A. if...then... analysis
196.

Which statement is true about the K-Means algorithm?

A. the output attribute must be categorical.
B. all attribute values must be categorical.
C. all attributes must be numeric
D. attribute values may be either categorical or numeric
Answer» C. all attributes must be numeric
197.

Which of the following can act as possible termination conditions in K-Means?
1. For a fixed number of iterations.
2. Assignment of observations to clusters does not change between iterations, except for cases with a bad local minimum.
3. Centroids do not change between successive iterations.
4. Terminate when RSS falls below a threshold.

A. 1, 3 and 4
B. 1, 2 and 3
C. 1, 2 and 4
D. 1,2,3,4
Answer» D. 1,2,3,4
198.

Which of the following statement is true about k-NN algorithm?
1) k-NN performs much better if all of the data have the same scale
2) k-NN works well with a small number of input variables (p), but struggles when the number of inputs is very large
3) k-NN makes no assumptions about the functional form of the problem being solved

A. 1 and 2
B. 1 and 3
C. only 1
D. 1,2 and 3
Answer» D. 1,2 and 3
199.

In which of the following cases will K-means clustering fail to give good results? 1) Data points with outliers 2) Data points with different densities 3) Data points with nonconvex shapes

A. 1 and 2
B. 2 and 3
C. 1, 2 and 3
D. 1 and 3
Answer» C. 1, 2 and 3
200.

How will you counter over-fitting in decision tree?

A. by pruning the longer rules
B. by creating new rules
C. both ‘by pruning the longer rules’ and ‘by creating new rules’
D. over-fitting is not possible
Answer» A. by pruning the longer rules