

McqMate
These multiple-choice questions (MCQs) are designed to enhance your knowledge and understanding in the following area: Computer Science Engineering (CSE).
101. MLE estimates are often undesirable because
A. they are biased
B. they have high variance
C. they are not consistent estimators
D. none of the above
Answer» B. they have high variance

102. The difference between the actual Y value and the predicted Y value found using a regression equation is called the
A. slope
B. residual
C. outlier
D. scatter plot
Answer» B. residual
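To make the definition in Q102 concrete, here is a minimal Python sketch with made-up numbers; the residual for each point is the actual Y minus the Y predicted by the regression equation.

import numpy as np

# Hypothetical actual values and predictions from a fitted regression line.
y_actual = np.array([3.1, 4.9, 7.2, 8.8])
y_predicted = np.array([3.0, 5.0, 7.0, 9.0])

# Residual = actual Y - predicted Y.
residuals = y_actual - y_predicted
print(residuals)  # [ 0.1 -0.1  0.2 -0.2]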
103. Neural networks
A. optimize a convex cost function
B. always output values between 0 and 1
C. can be used for regression as well as classification
D. all of the above
Answer» C. can be used for regression as well as classification

104. Linear Regression is a _______ machine learning algorithm.
A. supervised
B. unsupervised
C. semi-supervised
D. can't say
Answer» A. supervised

105. Which of the following methods do we use to find the best fit line for data in Linear Regression?
A. least square error
B. maximum likelihood
C. logarithmic loss
D. both a and b
Answer» A. least square error
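As an illustration of Q105, the sketch below fits a least-squares line to hypothetical 1-D data; np.polyfit with degree 1 finds the slope and intercept that minimize the sum of squared errors.

import numpy as np

# Hypothetical data roughly following y = 2x.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])

# Degree-1 polyfit returns the least-squares slope and intercept.
slope, intercept = np.polyfit(x, y, deg=1)
print(slope, intercept)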
106. Which of the following methods do we use to best fit the data in Logistic Regression?
A. least square error
B. maximum likelihood
C. jaccard distance
D. both a and b
Answer» B. maximum likelihood
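To illustrate Q106's answer, a minimal sketch of the quantity that maximum likelihood estimation works with in logistic regression; the labels and probabilities here are made up.

import numpy as np

# Hypothetical binary labels and model probabilities.
y = np.array([1, 0, 1, 1])
p = np.array([0.9, 0.2, 0.7, 0.6])

# Fitting by maximum likelihood means minimizing this negative
# log-likelihood (log loss), not a least-squares error.
nll = -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
print(nll)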
107. Lasso can be interpreted as least-squares linear regression where
A. weights are regularized with the l1 norm
B. the weights have a gaussian prior
C. weights are regularized with the l2 norm
D. the solution algorithm is simpler
Answer» A. weights are regularized with the l1 norm
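A sketch of the lasso interpretation in Q107, assuming a design matrix X, targets y, and a regularization strength lam (all names hypothetical); setting lam = 0 recovers ordinary least squares.

import numpy as np

def lasso_objective(w, X, y, lam):
    # Least-squares loss plus an L1 penalty on the weights.
    return np.sum((X @ w - y) ** 2) + lam * np.sum(np.abs(w))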
108. Which of the following evaluation metrics can be used to evaluate a model while modeling a continuous output variable?
A. auc-roc
B. accuracy
C. logloss
D. mean-squared-error
Answer» D. mean-squared-error
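A minimal mean-squared-error computation for Q108, with made-up continuous targets:

import numpy as np

y_true = np.array([2.0, 4.0, 6.0])
y_pred = np.array([2.5, 3.5, 6.5])

# MSE is the average squared residual, so it suits continuous outputs.
mse = np.mean((y_true - y_pred) ** 2)
print(mse)  # 0.25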
109. Simple regression assumes a __________ relationship between the input attribute and output attribute.
A. quadratic
B. inverse
C. linear
D. reciprocal
Answer» C. linear

110. In the regression equation Y = 75.65 + 0.50X, the intercept is
A. 0.5
B. 75.65
C. 1
D. indeterminable
Answer» B. 75.65
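A quick check of Q110: evaluating the given equation at X = 0 isolates the intercept.

# Y = 75.65 + 0.50X from the question.
def predict(x):
    return 75.65 + 0.50 * x

print(predict(0))   # 75.65 -> the intercept
print(predict(10))  # 80.65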
111. The selling price of a house depends on many factors. For example, it depends on the number of bedrooms, number of kitchens, number of bathrooms, the year the house was built, and the square footage of the lot. Given these factors, predicting the selling price of the house is an example of a ____________ task.
A. binary classification
B. multilabel classification
C. simple linear regression
D. multiple linear regression
Answer» D. multiple linear regression

112. Suppose you find that your linear regression model is underfitting the data. In such a situation, which of the following options would you consider?
A. you will add more features
B. you will remove some features
C. all of the above
D. none of the above
Answer» A. you will add more features

113. We have been given a dataset with n records in which we have an input attribute x and an output attribute y. Suppose we use a linear regression method to model this data. To test our linear regressor, we split the data into a training set and a test set randomly. Now we increase the training set size gradually. As the training set size increases, what do you expect will happen to the mean training error?
A. increase
B. decrease
C. remain constant
D. can't say
Answer» D. can't say

114. We have been given a dataset with n records in which we have an input attribute x and an output attribute y. Suppose we use a linear regression method to model this data. To test our linear regressor, we split the data into a training set and a test set randomly. What do you expect will happen to bias and variance as you increase the size of the training data?
A. bias increases and variance increases
B. bias decreases and variance increases
C. bias decreases and variance decreases
D. bias increases and variance decreases
Answer» D. bias increases and variance decreases

115. Regarding bias and variance, which of the following statements are true? (Here ‘high’ and ‘low’ are relative to the ideal model.)
A. (i) and (ii)
B. (ii) and (iii)
C. (iii) and (iv)
D. none of these
Answer» B. (ii) and (iii)

116. Which of the following indicates the fundamental principle of least squares?
A. arithmetic mean should be maximized
B. arithmetic mean should be zero
C. arithmetic mean should be neutralized
D. arithmetic mean should be minimized
Answer» D. arithmetic mean should be minimized

117. Suppose that we have N independent variables (X1, X2, … Xn) and the dependent variable is Y. Now imagine that you are applying linear regression by fitting the best fit line using least square error on this data. You found that the correlation coefficient for one of its variables (say X1) with Y is 0.95. Which of the following is true for X1?
A. relation between the x1 and y is weak
B. relation between the x1 and y is strong
C. relation between the x1 and y is neutral
D. correlation can't judge the relationship
Answer» B. relation between the x1 and y is strong
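A small sketch for Q117 with made-up numbers: a correlation coefficient near 1 (such as 0.95) signals a strong linear relationship.

import numpy as np

# Hypothetical variable x1 and target y that move together.
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.2, 1.9, 3.1, 4.0, 5.1])

r = np.corrcoef(x1, y)[0, 1]
print(round(r, 3))  # close to 1 -> strong relationship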
118. In terms of bias and variance, which of the following is true when you fit a degree-2 polynomial?
A. bias will be high, variance will be high
B. bias will be low, variance will be high
C. bias will be high, variance will be low
D. bias will be low, variance will be low
Answer» C. bias will be high, variance will be low

119. Which of the following statements are true for a design matrix X ∈ R^(n×d) with d > n? (The rows are n sample points and the columns represent d features.)
A. least-squares linear regression computes the weights w = (XᵀX)⁻¹Xᵀy
B. the sample points are linearly separable
C. X has exactly d − n eigenvectors with eigenvalue zero
D. at least one principal component direction is orthogonal to a hyperplane that contains all the sample points
Answer» D. at least one principal component direction is orthogonal to a hyperplane that contains all the sample points

120. Point out the wrong statement.
A. regression through the origin yields an equivalent slope if you center the data first
B. normalizing variables results in the slope being the correlation
C. least squares is not an estimation tool
D. none of the mentioned
Answer» C. least squares is not an estimation tool

121. Suppose you find that your linear regression model is underfitting the data. In such a situation, which of the following options would you consider?
A. you will add more features
B. you will remove some features
C. all of the above
D. none of the above
Answer» A. you will add more features

122. If X and Y in a regression model are totally unrelated,
A. the correlation coefficient would be -1
B. the coefficient of determination would be 0
C. the coefficient of determination would be 1
D. the sse would be 0
Answer» B. the coefficient of determination would be 0

123. Regarding bias and variance, which of the following statements are true? (Here ‘high’ and ‘low’ are relative to the ideal model.)
A. (i) and (ii)
B. (ii) and (iii)
C. (iii) and (iv)
D. none of these
Answer» B. (ii) and (iii)

124. Which of the following statements are true for a design matrix X ∈ R^(n×d) with d > n? (The rows are n sample points and the columns represent d features.)
A. least-squares linear regression computes the weights w = (XᵀX)⁻¹Xᵀy
B. the sample points are linearly separable
C. X has exactly d − n eigenvectors with eigenvalue zero
D. at least one principal component direction is orthogonal to a hyperplane that contains all the sample points
Answer» D. at least one principal component direction is orthogonal to a hyperplane that contains all the sample points

125. Which of the following is a problem in multiple regression?
A. multicollinearity
B. overfitting
C. both multicollinearity & overfitting
D. underfitting
Answer» C. both multicollinearity & overfitting

126. How can we best represent ‘support’ for the following association rule: “If X and Y, then Z”?
A. {x,y}/(total number of transactions)
B. {z}/(total number of transactions)
C. {z}/{x,y}
D. {x,y,z}/(total number of transactions)
Answer» D. {x,y,z}/(total number of transactions)
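A minimal sketch of the support computation in Q126 on hypothetical transactions: count the transactions containing all of x, y and z, then divide by the total.

# Hypothetical transaction database.
transactions = [{"x", "y", "z"}, {"x", "y"}, {"x", "z"}, {"x", "y", "z"}]

# Support of "if X and Y, then Z" = fraction containing x, y and z together.
support = sum({"x", "y", "z"} <= t for t in transactions) / len(transactions)
print(support)  # 2/4 = 0.5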
127. Choose the correct statement with respect to the ‘confidence’ metric in association rules.
A. it is the conditional probability that a randomly selected transaction will include all the items in the consequent given that the transaction includes all the items in the antecedent.
B. a high value of confidence suggests a weak association rule
C. it is the probability that a randomly selected transaction will include all the items in the consequent as well as all the items in the antecedent.
D. confidence is not measured in terms of (estimated) conditional probability.
Answer» A. it is the conditional probability that a randomly selected transaction will include all the items in the consequent given that the transaction includes all the items in the antecedent.
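Continuing the same hypothetical transactions, a sketch of confidence as the conditional probability described in option A:

transactions = [{"x", "y", "z"}, {"x", "y"}, {"x", "z"}, {"x", "y", "z"}]

# Confidence of "if X and Y, then Z" = P(z | x and y).
n_antecedent = sum({"x", "y"} <= t for t in transactions)
n_both = sum({"x", "y", "z"} <= t for t in transactions)
print(n_both / n_antecedent)  # 2/3 ≈ 0.667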
128. What are tree based classifiers?
A. classifiers which form a tree with each attribute at one level
B. classifiers which perform a series of condition checks with one attribute at a time
C. both a and b
D. none of the options
Answer» C. both a and b

129. What is the gini index?
A. it is a type of index structure
B. it is a measure of purity
C. both a and b
D. none of the options
Answer» B. it is a measure of purity
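A minimal gini-index sketch for Q129 (here written as an impurity score: 0 for a pure node, larger for mixed classes):

import numpy as np

def gini(labels):
    # 1 minus the sum of squared class proportions.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

print(gini(["a", "a", "a", "a"]))  # 0.0 -> pure node
print(gini(["a", "a", "b", "b"]))  # 0.5 -> maximally mixed for two classes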
130. Which of the following sentences are correct in reference to
A. a and b
B. a and d
C. b, c and d
D. all of the above
Answer» C. b, c and d

131. Multivariate split is where the partitioning of tuples is based on a combination of attributes rather than on a single attribute.
A. true
B. false
Answer» A. true

132. Gain ratio tends to prefer unbalanced splits in which one partition is much smaller than the other.
A. true
B. false
Answer» A. true

133. The gini index is not biased towards multivalued attributes.
A. true
B. false
Answer» B. false

134. Gini index does not favour equal sized partitions.
A. true
B. false
Answer» B. false

135. When the number of classes is large, the gini index is not a good choice.
A. true
B. false
Answer» A. true

136. Attribute selection measures are also known as splitting rules.
A. true
B. false
Answer» A. true

137. This clustering approach initially assumes that each data instance represents a single cluster.
A. expectation maximization
B. k-means clustering
C. agglomerative clustering
D. conceptual clustering
Answer» C. agglomerative clustering

138. Which statement is true about the K-Means algorithm?
A. the output attribute must be categorical
B. all attribute values must be categorical
C. all attributes must be numeric
D. attribute values may be either categorical or numeric
Answer» C. all attributes must be numeric

139. KDD represents extraction of
A. data
B. knowledge
C. rules
D. model
Answer» B. knowledge

140. The most general form of distance is
A. manhattan
B. euclidean
C. mean
D. minkowski
Answer» D. minkowski
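A sketch supporting Q140: the Minkowski distance with a hypothetical order parameter p reduces to Manhattan at p = 1 and euclidean at p = 2, which is why it is the most general of the listed distances.

import numpy as np

def minkowski(a, b, p):
    # (sum of |a_i - b_i|^p) ^ (1/p)
    return np.sum(np.abs(a - b) ** p) ** (1.0 / p)

a, b = np.array([0.0, 0.0]), np.array([3.0, 4.0])
print(minkowski(a, b, 1))  # 7.0 -> manhattan
print(minkowski(a, b, 2))  # 5.0 -> euclidean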
141. Which of the following algorithms comes under classification?
A. apriori
B. brute force
C. dbscan
D. k-nearest neighbor
Answer» D. k-nearest neighbor

142. Hierarchical agglomerative clustering is typically visualized as
A. dendrogram
B. binary trees
C. block diagram
D. graph
Answer» A. dendrogram
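A minimal dendrogram sketch for Q142 using SciPy on four made-up points; each merge performed by agglomerative clustering becomes one junction in the tree diagram.

import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

points = np.array([[0, 0], [0, 1], [5, 5], [5, 6]])
Z = linkage(points, method="single")  # agglomerative merge history
dendrogram(Z)
plt.show()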
143. The _______ step eliminates the extensions of (k-1)-itemsets which are not found to be frequent from being considered for counting support.
A. partitioning
B. candidate generation
C. itemset eliminations
D. pruning
Answer» D. pruning

144. The distance between two points calculated using the Pythagorean theorem is the
A. supremum distance
B. euclidean distance
C. linear distance
D. manhattan distance
Answer» B. euclidean distance
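A direct rendering of Q144: applying the Pythagorean theorem to the coordinate differences of two made-up points gives the euclidean distance.

import math

x1, y1 = 1.0, 2.0
x2, y2 = 4.0, 6.0

# Hypotenuse of the right triangle with legs (x2 - x1) and (y2 - y1).
distance = math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)
print(distance)  # 5.0, since the legs are 3 and 4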
145. Which one of these is not a tree based learner?
A. cart
B. id3
C. bayesian classifier
D. random forest
Answer» C. bayesian classifier

146. Which one of these is a tree based learner?
A. rule based
B. bayesian belief network
C. bayesian classifier
D. random forest
Answer» D. random forest

147. What is the approach of the basic algorithm for decision tree induction?
A. greedy
B. top down
C. procedural
D. step by step
Answer» A. greedy

148. Which of the following classifications would best suit the student performance classification system?
A. if...then... analysis
B. market-basket analysis
C. regression analysis
D. cluster analysis
Answer» A. if...then... analysis

149. Given that we can select the same feature multiple times during the recursive partitioning of
A. yes
B. no
Answer» B. no

150. This clustering algorithm terminates when the mean values computed for the current iteration of the algorithm are identical to the computed mean values for the previous iteration.
A. k-means clustering
B. conceptual clustering
C. expectation maximization
D. agglomerative clustering
Answer» A. k-means clustering
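A minimal k-means sketch matching the termination rule in Q150 (it ignores edge cases such as empty clusters; the data and seed are made up):

import numpy as np

def kmeans(points, k, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    while True:
        # Assign each point to its nearest center (all attributes numeric).
        labels = np.argmin(
            np.linalg.norm(points[:, None] - centers[None, :], axis=2), axis=1
        )
        new_centers = np.array([points[labels == i].mean(axis=0) for i in range(k)])
        # Terminate when the means are identical to the previous iteration's.
        if np.allclose(new_centers, centers):
            return labels, centers
        centers = new_centers

pts = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
print(kmeans(pts, 2)[0])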