

151. |
The number of iterations in apriori ___________ |
A. | increases with the size of the data |
B. | decreases with the increase in size of the data |
C. | increases with the size of the maximum frequent set |
D. | decreases with increase in size of the maximum frequent set |
Answer» C. increases with the size of the maximum frequent set |
152. |
Frequent item sets is |
A. | superset of only closed frequent item sets |
B. | superset of only maximal frequent item sets |
C. | subset of maximal frequent item sets |
D. | superset of both closed frequent item sets and maximal frequent item sets |
Answer» D. superset of both closed frequent item sets and maximal frequent item sets |
153. |
A good clustering method will produce high quality clusters with |
A. | high inter class similarity |
B. | low intra class similarity |
C. | high intra class similarity |
D. | no inter class similarity |
Answer» C. high intra class similarity |
154. |
Which statement is true about neural network and linear regression models? |
A. | both techniques build models whose output is determined by a linear sum of weighted input attribute values |
B. | the output of both models is a categorical attribute value |
C. | both models require numeric attributes to range between 0 and 1 |
D. | both models require input attributes to be numeric |
Answer» D. both models require input attributes to be numeric |
155. |
Which Association Rule would you prefer |
A. | high support and medium confidence |
B. | high support and low confidence |
C. | low support and high confidence |
D. | low support and low confidence |
Answer» C. low support and high confidence |
156. |
In a rule-based classifier, if there is a rule for each combination of attribute values, what is that rule set R called? |
A. | exhaustive |
B. | inclusive |
C. | comprehensive |
D. | mutually exclusive |
Answer» A. exhaustive |
157. |
The apriori property means |
A. | if a set cannot pass a test, its supersets will also fail the same test |
B. | to decrease the efficiency, do level-wise generation of frequent item sets |
C. | to improve the efficiency, do level-wise generation of frequent item sets |
D. | if a set can pass a test, its supersets will fail the same test |
Answer» A. if a set cannot pass a test, its supersets will also fail the same test |
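This "downward closure" property is what lets Apriori prune candidates level by level. A minimal Python sketch of the pruning step, assuming itemsets are stored as frozensets (the function name is illustrative, not from any particular library):

    from itertools import combinations

    def prune_candidates(candidates, frequent_prev):
        # Keep a k-itemset only if every (k-1)-subset is frequent:
        # by the apriori property, if any subset failed the support
        # test, the superset cannot pass it either.
        return [c for c in candidates
                if all(frozenset(s) in frequent_prev
                       for s in combinations(c, len(c) - 1))]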
158. |
If an item set ‘XYZ’ is a frequent item set, then all subsets of that frequent item set are |
A. | undefined |
B. | not frequent |
C. | frequent |
D. | can not say |
Answer» C. frequent |
159. |
Clustering is ___________ and is an example of ____________ learning |
A. | predictive and supervised |
B. | predictive and unsupervised |
C. | descriptive and supervised |
D. | descriptive and unsupervised |
Answer» D. descriptive and unsupervised |
160. |
To determine association rules from frequent item sets |
A. | only minimum confidence needed |
B. | neither support nor confidence needed |
C. | both minimum support and confidence are needed |
D. | minimum support is needed |
Answer» C. both minimum support and confidence are needed |
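For concreteness, a minimal sketch of how minimum support and minimum confidence are checked against transactions (illustrative Python, not tied to any library):

    def support(itemset, transactions):
        # Fraction of transactions that contain every item in the itemset.
        return sum(itemset <= t for t in transactions) / len(transactions)

    def confidence(antecedent, consequent, transactions):
        # support(A ∪ C) / support(A)
        return (support(antecedent | consequent, transactions)
                / support(antecedent, transactions))

A rule A -> C is reported only if support(A ∪ C) meets the minimum support and confidence(A -> C) meets the minimum confidence.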
161. |
If {A,B,C,D} is a frequent itemset, the candidate rule which is not possible is |
A. | c –> a |
B. | d –> abcd |
C. | a –> bc |
D. | b –> adc |
Answer» B. d –> abcd |
162. |
Which Association Rule would you prefer |
A. | high support and low confidence |
B. | low support and high confidence |
C. | low support and low confidence |
D. | high support and medium confidence |
Answer» B. low support and high confidence |
163. |
This clustering algorithm terminates when mean values computed for the current iteration of the algorithm are identical to the computed mean values for the previous iteration |
A. | conceptual clustering |
B. | k-means clustering |
C. | expectation maximization |
D. | agglomerative clustering |
Answer» B. k-means clustering |
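A minimal one-dimensional k-means sketch showing exactly this termination test (illustrative Python only; real implementations typically also cap the number of iterations):

    import random

    def kmeans_1d(points, k, seed=0):
        random.seed(seed)
        means = random.sample(points, k)          # initial centroids
        while True:
            clusters = [[] for _ in range(k)]
            for p in points:                      # assignment step
                i = min(range(k), key=lambda j: abs(p - means[j]))
                clusters[i].append(p)
            new_means = [sum(c) / len(c) if c else means[i]
                         for i, c in enumerate(clusters)]
            if new_means == means:                # means unchanged: stop
                return means, clusters
            means = new_means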
164. |
Classification rules are extracted from _____________ |
A. | decision tree |
B. | root node |
C. | branches |
D. | siblings |
Answer» A. decision tree |
165. |
What does K refer to in the K-Means algorithm, which is a non-hierarchical clustering approach? |
A. | complexity |
B. | fixed value |
C. | no of iterations |
D. | number of clusters |
Answer» D. number of clusters |
166. |
How will you counter over-fitting in a decision tree? |
A. | by pruning the longer rules |
B. | by creating new rules |
C. | both 'by pruning the longer rules' and 'by creating new rules' |
D. | none of the options |
Answer» A. by pruning the longer rules |
167. |
What are the two steps of tree pruning? |
A. | pessimistic pruning and optimistic pruning |
B. | postpruning and prepruning |
C. | cost complexity pruning and time complexity pruning |
D. | none of the options |
Answer» B. postpruning and prepruning |
168. |
Which of the following sentences are true? |
A. | in pre-pruning, a tree is "pruned" by halting its construction early |
B. | a pruning set of class labelled tuples is used to estimate cost complexity |
C. | the best pruned tree is the one that minimizes the number of encoding bits |
D. | all of the above |
Answer» D. all of the above |
169. |
Assume that you are given a data set and a neural network model trained on the data set. You are asked to build a decision tree model with the sole purpose of understanding/interpreting the built neural network model. In such a scenario, which among the following measures would you concentrate most on? |
A. | accuracy of the decision tree model on the given data set |
B. | f1 measure of the decision tree model on the given data set |
C. | fidelity of the decision tree model, which is the fraction of instances on which the neural network and the decision tree give the same output |
D. | comprehensibility of the decision tree model, measured in terms of the size of the corresponding rule set |
Answer» C. fidelity of the decision tree model, which is the fraction of instances on which the neural network and the decision tree give the same output |
170. |
Which of the following properties are characteristic of decision trees? |
A. | a and b |
B. | a and d |
C. | b, c and d |
D. | all of the above |
Answer» C. b, c and d |
171. |
To control the size of the tree, we need to control the number of regions. One approach to |
A. | a and b |
B. | a and d |
C. | b, c and d |
D. | all of the above |
Answer» A. a and b |
172. |
Which among the following statements best describes our approach to learning decision trees? |
A. | identify the best partition of the input space and response per partition to minimise sum of squares error |
B. | identify the best approximation of the above by the greedy approach (to identifying the partitions) |
C. | identify the model which gives the best performance using the greedy approximation (option (b)) with the smallest partition scheme |
D. | identify the model which gives performance close to the best greedy approximation performance (option (b)) with the smallest partition scheme |
Answer» D. identify the model which gives performance close to the best greedy approximation performance (option (b)) with the smallest partition scheme |
173. |
Having built a decision tree, we are using reduced error pruning to reduce the size of the tree. |
A. | 10.8, 13.33, 14.48 |
B. | 10.8, 13.33, 12.06 |
C. | 7.2, 10, 8.8 |
D. | 7.2, 10, 8.6 |
Answer» C. 7.2, 10, 8.8 |
174. |
Suppose on performing reduced error pruning, we collapsed a node and observed an improvement in the prediction accuracy on the validation set. |
A. | a and b |
B. | a and d |
C. | b, c and d |
D. | all of the above |
Answer» D. all of the above |
175. |
Time Complexity of k-means is given by |
A. | O(mn) |
B. | O(tkn) |
C. | O(kn) |
D. | O(t²kn) |
Answer» B. O(tkn) |
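Here t is the number of iterations, k the number of clusters, and n the number of points: each iteration computes the distance from every point to every centroid, i.e. k × n distance computations, repeated t times.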
176. |
In the Apriori algorithm, if there are 100 frequent 1-itemsets, then the number of candidate 2-itemsets is |
A. | 100 |
B. | 200 |
C. | 4950 |
D. | 5000 |
Answer» C. 4950 |
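Worked out: Apriori forms candidate 2-itemsets by pairing the frequent 1-itemsets, so the count is C(100, 2) = (100 × 99) / 2 = 4950.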
177. |
Machine learning techniques differ from statistical techniques in that machine learning methods |
A. | are better able to deal with missing and noisy data |
B. | typically assume an underlying distribution for the data |
C. | have trouble with large-sized datasets |
D. | are not able to explain their behavior |
Answer» A. are better able to deal with missing and noisy data |
178. |
The probability that a person owns a sports car given that they subscribe to an automotive magazine is 40%. We also know that 3% of the adult population subscribes to the magazine. The probability of a person owning a sports car given that they don't subscribe to the magazine is 30%. Use this information to compute the probability that a person subscribes to the magazine given that they own a sports car. |
A. | 0.0368 |
B. | 0.0396 |
C. | 0.0389 |
D. | 0.0398 |
Answer» B. 0.0396 |
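Worked out with Bayes' theorem, writing S for "subscribes" and C for "owns a sports car": P(C) = P(C|S)P(S) + P(C|¬S)P(¬S) = 0.4 × 0.03 + 0.3 × 0.97 = 0.303, so P(S|C) = P(C|S)P(S) / P(C) = 0.012 / 0.303 ≈ 0.0396.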
179. |
What is the final resultant cluster size in Divisive algorithm, which is one of the hierarchical clustering approaches? |
A. | zero |
B. | three |
C. | singleton |
D. | two |
Answer» C. singleton |
180. |
Given a frequent itemset L, if |L| = k, then there are |
A. | 2^k – 1 candidate association rules |
B. | 2^k candidate association rules |
C. | 2^k – 2 candidate association rules |
D. | 2k – 2 candidate association rules |
Answer» C. 2^k – 2 candidate association rules |
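Reasoning: every non-empty proper subset of L can serve as a rule antecedent, with the remaining items as the consequent. L has 2^k subsets; excluding the empty set and L itself leaves 2^k – 2 candidate rules.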
181. |
Which statement is not a true statement? |
A. | k-means clustering is a linear clustering algorithm. |
B. | k-means clustering aims to partition n observations into k clusters |
C. | k-nearest neighbor is same as k-means |
D. | k-means is sensitive to outlier |
Answer» C. k-nearest neighbor is same as k-means |
182. |
In which of the following cases will K-Means clustering give poor results? |
A. | 1 and 2 |
B. | 2 and 3 |
C. | 2 and 4 |
D. | 1, 2 and 4 |
Answer» C. 2 and 4 |
183. |
What is a Decision Tree? |
A. | flow-chart |
B. | structure in which internal node represents test on an attribute, each branch represents outcome of test and each leaf node represents class label |
C. | flow-chart like structure in which internal node represents test on an attribute, each branch represents outcome of test and each leaf node represents class label |
D. | none of the above |
Answer» C. flow-chart like structure in which internal node represents test on an attribute, each branch represents outcome of test and each leaf node represents class label |
184. |
What are the two steps of tree pruning? |
A. | pessimistic pruning and optimistic pruning |
B. | postpruning and prepruning |
C. | cost complexity pruning and time complexity pruning |
D. | none of the options |
Answer» B. postpruning and prepruning |
185. |
A database has 5 transactions. Of these, 4 transactions include milk and bread. Further, of the given 4 transactions, 2 transactions include cheese. Find the support percentage for the following association rule “if milk and bread are purchased, then cheese is also purchased”. |
A. | 0.4 |
B. | 0.6 |
C. | 0.8 |
D. | 0.42 |
Answer» A. 0.4 |
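Worked out: the support of a rule is the fraction of all transactions containing every item in the rule, here {milk, bread, cheese}, so support = 2/5 = 0.4.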
186. |
Which of the following options is true about the k-NN algorithm? |
A. | it can be used for classification |
B. | it can be used for regression |
C. | it can be used in both classification and regression |
D. | not useful in ml algorithm |
Answer» C. it can be used in both classification and regression |
187. |
How to select best hyperparameters in tree based models? |
A. | measure performance over training data |
B. | measure performance over validation data |
C. | both of these |
D. | random selection of hyper parameters |
Answer» B. measure performance over validation data |
188. |
What is true about K-Means clustering? |
A. | 1 and 3 |
B. | 1 and 2 |
C. | 2 and 3 |
D. | 1, 2 and 3 |
Answer» D. 1, 2 and 3 |
189. |
What are tree based classifiers? |
A. | classifiers which form a tree with each attribute at one level |
B. | classifiers which perform series of condition checking with one attribute at a time |
C. | both options except none |
D. | not possible |
Answer» C. both options except none |
190. |
What is gini index? |
A. | gini index operates on the categorical target variables |
B. | it is a measure of purity |
C. | gini index performs only binary split |
D. | all (1,2 and 3) |
Answer» D. all (1,2 and 3) |
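For reference, the Gini index of a node with class proportions p1, ..., pc is 1 – Σ pi²; a pure node scores 0. A minimal Python sketch (illustrative only):

    def gini(labels):
        # Gini impurity: 1 minus the sum of squared class proportions.
        n = len(labels)
        return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

    print(gini(["yes", "yes", "no", "no"]))  # 0.5, a maximally impure binary node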
191. |
Tree/Rule based classification algorithms generate ___________ rules to perform the classification. |
A. | if-then |
B. | while |
C. | do-while |
D. | switch |
Answer» A. if-then |
192. |
Decision Tree is |
A. | flow-chart |
B. | structure in which internal node represents test on an attribute, each branch represents outcome of test and each leaf node represents class label |
C. | both a & b |
D. | class of instance |
Answer» C. both a & b |
193. |
Which of the following is true about Manhattan distance? |
A. | it can be used for continuous variables |
B. | it can be used for categorical variables |
C. | it can be used for categorical as well as continuous |
D. | it can be used for constants |
Answer» A. it can be used for continuous variables |
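For reference, the Manhattan (L1) distance between numeric vectors x and y is Σ |xi – yi|, which is why it applies to continuous variables. A short Python sketch (illustrative only):

    def manhattan(x, y):
        # L1 distance: sum of absolute coordinate differences.
        return sum(abs(a - b) for a, b in zip(x, y))

    print(manhattan([1.0, 2.0], [4.0, 0.0]))  # |1-4| + |2-0| = 5.0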
194. |
A company has built a kNN classifier that gets 100% accuracy on training data. When they deployed this model on the client side, it was found that the model is not accurate at all. Which of the following might have gone wrong? Note: the model was deployed successfully, and no technical issues were found on the client side except the model performance |
A. | it is probably an overfitted model |
B. | it is probably an underfitted model |
C. | can't say |
D. | wrong client data |
Answer» A. it is probably an overfitted model |
195. |
Which of the following classifications would best suit the student performance classification systems? |
A. | if...then... analysis |
B. | market-basket analysis |
C. | regression analysis |
D. | cluster analysis |
Answer» A. if...then... analysis |
196. |
Which statement is true about the K-Means algorithm? |
A. | the output attribute must be categorical |
B. | all attribute values must be categorical |
C. | all attributes must be numeric |
D. | attribute values may be either categorical or numeric |
Answer» C. all attributes must be numeric |
197. |
Which of the following can act as possible termination conditions in K-Means? |
A. | 1, 3 and 4 |
B. | 1, 2 and 3 |
C. | 1, 2 and 4 |
D. | 1,2,3,4 |
Answer» D. 1,2,3,4 |
198. |
Which of the following statements are true about the k-NN algorithm? |
A. | 1 and 2 |
B. | 1 and 3 |
C. | only 1 |
D. | 1,2 and 3 |
Answer» D. 1,2 and 3 |
199. |
In which of the following cases will K-means clustering fail to give good results? 1) Data points with outliers 2) Data points with different densities 3) Data points with nonconvex shapes |
A. | 1 and 2 |
B. | 2 and 3 |
C. | 1, 2, and 3 |
D. | 1 and 3 |
Answer» C. 1, 2, and 3 |
200. |
How will you counter over-fitting in a decision tree? |
A. | by pruning the longer rules |
B. | by creating new rules |
C. | both 'by pruning the longer rules' and 'by creating new rules' |
D. | over-fitting is not possible |
Answer» A. by pruning the longer rules |