
400+ Data Mining and Data Warehouse Solved MCQs

These multiple-choice questions (MCQs) are designed to enhance your knowledge and understanding in the following areas: Computer Science Engineering (CSE), and common topics in competitive and entrance exams.

151.

If a set is a frequent set and no superset of this set is a frequent set, then it is called ________.

A. maximal frequent set.
B. border set.
C. lattice.
D. infrequent sets.
Answer» A. maximal frequent set.
152.

Any subset of a frequent set is a frequent set. This is ___________.

A. upward closure property.
B. downward closure property.
C. maximal frequent set.
D. border set.
Answer» B. downward closure property.
153.

Any superset of an infrequent set is an infrequent set. This is _______.

A. maximal frequent set.
B. border set.
C. upward closure property.
D. downward closure property.
Answer» C. upward closure property.
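
The two closure properties above are easy to verify directly. Below is a minimal Python sketch, using a hypothetical toy transaction database and an assumed absolute support threshold, that checks the downward closure property for one frequent itemset:

```python
# A minimal sketch (hypothetical toy data) of the downward closure
# property: every subset of a frequent itemset is itself frequent.
from itertools import combinations

transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
min_support = 2  # absolute support threshold (assumed for this example)

def support(itemset):
    """Count the transactions that contain every item in `itemset`."""
    return sum(1 for t in transactions if itemset <= t)

frequent = {"a", "b"}              # support({a, b}) == 2, so it is frequent
assert support(frequent) >= min_support

# Downward closure: each proper subset must be at least as frequent.
for k in range(1, len(frequent)):
    for subset in combinations(frequent, k):
        assert support(set(subset)) >= min_support
```

The upward closure property is the mirror image: once `support(itemset)` falls below `min_support`, no superset of `itemset` needs to be counted.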
154.

If an itemset is not a frequent set and no superset of this is a frequent set, then it is _______.

A. maximal frequent set
B. border set.
C. upward closure property.
D. downward closure property.
Answer» B. border set.
155.

The A Priori algorithm is otherwise called the __________.

A. width-wise algorithm.
B. level-wise algorithm.
C. pincer-search algorithm.
D. fp growth algorithm.
Answer» B. level-wise algorithm.
156.

The A Priori algorithm is a ___________.

A. top-down search.
B. breadth first search.
C. depth first search.
D. bottom-up search.
Answer» D. bottom-up search.
157.

The first phase of the A Priori algorithm is _______.

A. candidate generation.
B. itemset generation.
C. pruning.
D. partitioning.
Answer» A. candidate generation.
158.

The second phase of the A Priori algorithm is ____________.

A. candidate generation.
B. itemset generation.
C. pruning.
D. partitioning.
Answer» C. pruning.
159.

The _______ step eliminates extensions of (k-1)-itemsets that are not found to be frequent from being considered for support counting.

A. candidate generation.
B. pruning.
C. partitioning.
D. itemset eliminations.
Answer» B. pruning.
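
Questions 157-159 describe the per-level loop of A Priori: join frequent (k-1)-itemsets into candidate k-itemsets, then prune candidates that have an infrequent (k-1)-subset. Here is a short Python sketch of that loop; the itemsets are hypothetical and this is a simplified illustration, not a full implementation:

```python
# A minimal sketch of A Priori's candidate generation and pruning steps.
from itertools import combinations

def apriori_gen(frequent_prev, k):
    """Generate candidate k-itemsets from frequent (k-1)-itemsets."""
    candidates = set()
    for a in frequent_prev:
        for b in frequent_prev:
            union = a | b
            if len(union) == k:          # join step
                candidates.add(union)
    # Prune step: drop candidates with any infrequent (k-1)-subset,
    # so they are never considered for support counting.
    return {
        c for c in candidates
        if all(frozenset(s) in frequent_prev for s in combinations(c, k - 1))
    }

frequent_2 = {frozenset({"a", "b"}), frozenset({"a", "c"}), frozenset({"b", "c"})}
print(apriori_gen(frequent_2, 3))  # {frozenset({'a', 'b', 'c'})}
```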
160.

The a priori frequent itemset discovery algorithm moves _______ in the lattice.

A. upward.
B. downward.
C. breadthwise.
D. both upward and downward.
Answer» A. upward.
161.

After the pruning step of the A Priori algorithm, _______ will remain.

A. only candidate set.
B. no candidate set.
C. only border set.
D. no border set.
Answer» B. no candidate set.
162.

The number of iterations in a priori ___________.

A. increases with the size of the maximum frequent set.
B. decreases with increase in size of the maximum frequent set.
C. increases with the size of the data.
D. decreases with the increase in size of the data.
Answer» A. increases with the size of the maximum frequent set.
163.

MFCS is the acronym of _____.

A. maximum frequency control set.
B. minimal frequency control set.
C. maximal frequent candidate set.
D. minimal frequent candidate set.
Answer» C. maximal frequent candidate set.
164.

The Dynamic Itemset Counting algorithm was proposed by ____.

A. brin et al.
B. agrawal et al.
C. toda et al.
D. simon et al.
Answer» A. brin et al.
165.

Itemsets in the ______ category of structures have a counter and a stop number associated with them.

A. dashed.
B. circle.
C. box.
D. solid.
Answer» A. dashed.
166.

The itemsets in the _______ category structures are not subjected to any counting.

A. dashed.
B. box.
C. solid.
D. circle.
Answer» C. solid.
167.

Certain itemsets in the dashed circle whose support count reaches the support value during an iteration move into the ______.

A. dashed box.
B. solid circle.
C. solid box.
D. none of the above.
Answer» A. dashed box.
168.

Certain itemsets enter afresh into the system and get into the _______, which are essentially the supersets of the itemsets that move from the dashed circle to the dashed box.

A. dashed box.
B. solid circle.
C. solid box.
D. dashed circle.
Answer» D. dashed circle.
169.

The itemsets that have completed one full pass move from the dashed circle to ________.

A. dashed box.
B. solid circle.
C. solid box.
D. none of the above.
Answer» B. solid circle.
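
Questions 165-169 together describe a small state machine over the four DIC categories. The sketch below (names and simplified transition logic are assumptions for illustration) summarizes those transitions in Python:

```python
# A rough sketch of the four DIC itemset categories and the transitions
# described above. "Dashed" itemsets carry a counter and a stop number
# and are still being counted; "solid" itemsets are no longer counted.
DASHED_CIRCLE = "dashed circle"  # suspected infrequent, still counting
DASHED_BOX    = "dashed box"     # suspected frequent, still counting
SOLID_CIRCLE  = "solid circle"   # confirmed infrequent, counting done
SOLID_BOX     = "solid box"      # confirmed frequent, counting done

def transition(category, count, min_support, completed_full_pass):
    """Move an itemset between categories after an interval of counting."""
    if category == DASHED_CIRCLE and count >= min_support:
        return DASHED_BOX        # support threshold reached mid-pass
    if completed_full_pass and category == DASHED_CIRCLE:
        return SOLID_CIRCLE      # one full pass done, still infrequent
    if completed_full_pass and category == DASHED_BOX:
        return SOLID_BOX         # one full pass done, confirmed frequent
    return category
```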
170.

The FP-growth algorithm has ________ phases.

A. one.
B. two.
C. three.
D. four.
Answer» B. two.
171.

A frequent pattern tree is a tree structure consisting of ________.

A. an item-prefix-tree.
B. a frequent-item-header table.
C. a frequent-item-node.
D. both a & b.
Answer» D. both a & b.
172.

A non-root node of the item-prefix tree consists of ________ fields.

A. two.
B. three.
C. four.
D. five.
Answer» B. three.
173.

The frequent-item-header-table consists of __________ fields.

A. only one.
B. two.
C. three.
D. four.
Answer» B. two.
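
Questions 171-173 name the concrete fields of the FP-tree structures. A minimal Python sketch of both structures follows; the class and helper names are hypothetical, but the fields match the ones the questions refer to:

```python
# Each non-root node of the item-prefix tree carries three fields
# (item-name, count, node-link); each frequent-item-header-table entry
# carries two (item-name, head of node-link).
class FPNode:
    def __init__(self, item, count=1):
        self.item = item        # item-name
        self.count = count      # count: transactions sharing this prefix
        self.node_link = None   # node-link: next node with the same item
        self.children = {}      # child pointers (tree bookkeeping)

header_table = {}  # item-name -> head of that item's node-link chain

def link_node(node):
    """Thread a new node onto the front of its item's node-link chain."""
    node.node_link = header_table.get(node.item)
    header_table[node.item] = node
```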
174.

The paths from the root node to the nodes labelled 'a' are called __________.

A. transformed prefix path.
B. suffix subpath.
C. transformed suffix path.
D. prefix subpath.
Answer» D. prefix subpath.
175.

The transformed prefix paths of a node 'a' form a truncated database of patterns which co-occur with 'a', called the _______.

A. suffix path.
B. fp-tree.
C. conditional pattern base.
D. prefix path.
Answer» C. conditional pattern base.
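
To make the last two questions concrete, here is a toy sketch (the paths and counts are hypothetical) of a conditional pattern base for item 'a': each entry is a prefix subpath leading to an 'a' node, weighted by that node's count:

```python
# Hypothetical prefix subpaths ending at nodes labelled 'a'.
prefix_paths = [
    (["f", "c"], 3),   # root -> f -> c -> a occurs with count 3
    (["c", "b"], 1),   # root -> c -> b -> a occurs with count 1
]

# The conditional pattern base of 'a': patterns co-occurring with 'a'.
conditional_pattern_base = {tuple(path): count for path, count in prefix_paths}
print(conditional_pattern_base)  # {('f', 'c'): 3, ('c', 'b'): 1}
```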
176.

The goal of _____ is to discover both the dense and sparse regions of a data set.

A. association rule.
B. classification.
C. clustering.
D. genetic algorithm.
Answer» C. clustering.
177.

Which of the following is a clustering algorithm?

A. a priori.
B. clara.
C. pincer-search.
D. fp-growth.
Answer» B. clara.
178.

_______ clustering techniques start with as many clusters as there are records, each cluster containing only one record.

A. agglomerative.
B. divisive.
C. partition.
D. numeric.
Answer» A. agglomerative.
179.

__________ clustering techniques start with all records in one cluster and then try to split that cluster into smaller pieces.

A. agglomerative.
B. divisive.
C. partition.
D. numeric.
Answer» B. divisive.
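
The agglomerative (bottom-up) direction from question 178 can be demonstrated in a few lines. This sketch assumes SciPy is available and uses hypothetical data points; `linkage` starts from one cluster per record and repeatedly merges the closest pair:

```python
# A short sketch of agglomerative clustering with SciPy.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

points = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [5.1, 5.0]])
merges = linkage(points, method="single")             # bottom-up merge history
labels = fcluster(merges, t=2, criterion="maxclust")  # cut into 2 clusters
print(labels)  # e.g. [1 1 2 2]
```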
180.

Which of the following is a data set in the popular UCI machine-learning repository?

A. clara.
B. cactus.
C. stirr.
D. mushroom.
Answer» D. mushroom.
181.

In the ________ algorithm, each cluster is represented by the center of gravity of the cluster.

A. k-medoid.
B. k-means.
C. stirr.
D. rock.
Answer» B. k-means.
182.

In ___________, each cluster is represented by one of the objects of the cluster located near the center.

A. k-medoid.
B. k-means.
C. stirr.
D. rock.
Answer» A. k-medoid.
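
The difference between the two representatives in questions 181-182 is easy to show numerically. A minimal sketch with a hypothetical cluster: k-means uses the center of gravity (mean), while k-medoid uses the member object nearest the center:

```python
import numpy as np

cluster = np.array([[1.0, 1.0], [2.0, 1.0], [9.0, 1.0]])

centroid = cluster.mean(axis=0)                       # k-means representative
print(centroid)                                       # [4. 1.]

distances = np.linalg.norm(cluster - centroid, axis=1)
medoid = cluster[distances.argmin()]                  # k-medoid representative
print(medoid)                                         # [2. 1.]
```

Note that the centroid [4, 1] is not itself a data object, while the medoid [2, 1] always is; this is why k-medoid methods such as PAM are more robust to outliers like [9, 1].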
183.

Pick out a k-medoid algorithm.

A. dbscan.
B. birch.
C. pam.
D. cure.
Answer» C. pam.
184.

Pick out a hierarchical clustering algorithm.

A. dbscan
B. birch.
C. pam.
D. cure.
Answer» B. birch.
185.

CLARANS stands for _______.

A. clara net server.
B. clustering large application range network search.
C. clustering large applications based on randomized search.
D. clustering application randomized search.
Answer» C. clustering large applications based on randomized search.
186.

BIRCH is a ________.

A. agglomerative clustering algorithm.
B. hierarchical algorithm.
C. hierarchical-agglomerative algorithm.
D. divisive.
Answer» C. hierarchical-agglomerative algorithm.
187.

The cluster features of different subclusters are maintained in a tree called ___________.

A. cf tree.
B. fp tree.
C. fp growth tree.
D. b tree.
Answer» A. cf tree.
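
Each CF-tree node in BIRCH stores a cluster feature triple: N (number of points), LS (linear sum) and SS (square sum). A minimal sketch with hypothetical points shows why the tree is cheap to maintain: two cluster features merge by simple addition:

```python
import numpy as np

def cf(points):
    """Cluster feature (N, LS, SS) of a set of points."""
    pts = np.asarray(points, dtype=float)
    return len(pts), pts.sum(axis=0), (pts ** 2).sum()

def merge(cf1, cf2):
    """Merging two subclusters just adds their cluster features."""
    n1, ls1, ss1 = cf1
    n2, ls2, ss2 = cf2
    return n1 + n2, ls1 + ls2, ss1 + ss2

print(merge(cf([[1, 2], [3, 4]]), cf([[5, 6]])))
# (3, array([ 9., 12.]), 91.0)
```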
188.

The ________ algorithm is based on the observation that the frequent sets are normally very few in number compared to the set of all itemsets.

A. a priori.
B. clustering.
C. association rule.
D. partition.
Answer» D. partition.
189.

The partition algorithm uses _______ scans of the database to discover all frequent sets.

A. two.
B. four.
C. six.
D. eight.
Answer» A. two.
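
A rough sketch of those two scans follows. The helpers `local_frequent_sets` and `support_count` are hypothetical placeholders passed in as parameters; the point is the structure, not the counting details:

```python
# Sketch of the partition algorithm's two database scans.
def partition_algorithm(partitions, min_support,
                        local_frequent_sets, support_count):
    # Scan 1: find locally frequent itemsets, one partition in memory
    # at a time; any globally frequent set is frequent in some partition.
    candidates = set()
    for part in partitions:
        candidates |= local_frequent_sets(part, min_support)

    # Scan 2: count global support for every local candidate.
    counts = support_count(candidates, partitions)
    return {c for c in candidates if counts[c] >= min_support}
```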
190.

The basic idea of the apriori algorithm is to generate ________ itemsets of a particular size and then scan the database.

A. candidate.
B. primary.
C. secondary.
D. superkey.
Answer» A. candidate.
191.

An algorithm called ________ is used to generate the candidate item sets for each pass after the first.

A. apriori.
B. apriori-gen.
C. sampling.
D. partition.
Answer» B. apriori-gen.
192.

The basic partition algorithm reduces the number of database scans to ________ by dividing the database into partitions.

A. one.
B. two.
C. three.
D. four.
Answer» B. two.
193.

___________ and prediction may be viewed as types of classification.

A. decision.
B. verification.
C. estimation.
D. illustration.
Answer» C. estimation.
194.

___________ can be thought of as classifying an attribute value into one of a set of possible classes.

A. estimation.
B. prediction.
C. identification.
D. clarification.
Answer» B. prediction.
195.

Prediction can be viewed as forecasting a _________ value.

A. non-continuous.
B. constant.
C. continuous.
D. variable.
Answer» C. continuous.
196.

_________ data consists of sample input data as well as the classification assignment for the data.

A. missing.
B. measuring.
C. non-training.
D. training.
Answer» D. training.
197.

Rule-based classification algorithms generate ______ rules to perform the classification.

A. if-then.
B. while.
C. do while.
D. switch.
Answer» A. if-then.
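
A minimal sketch of such a classifier follows; the rules, attribute names, and record are hypothetical. Each rule is an if-then pair of a condition and a class label, and the first matching rule fires:

```python
# Hypothetical if-then rules: (condition, class label).
rules = [
    (lambda r: r["age"] < 30 and r["student"], "buys_computer=yes"),
    (lambda r: r["age"] >= 30 and r["income"] == "low", "buys_computer=no"),
]

def classify(record, default="unknown"):
    """Return the class of the first rule whose condition matches."""
    for condition, label in rules:
        if condition(record):
            return label
    return default

print(classify({"age": 25, "student": True, "income": "high"}))
# buys_computer=yes
```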
198.

____________ are a different paradigm for computing which draws its inspiration from neuroscience.

A. computer networks.
B. neural networks.
C. mobile networks.
D. artificial networks.
Answer» B. neural networks.
199.

The human brain consists of a network of ___________.

A. neurons.
B. cells.
C. tissue.
D. muscles.
Answer» A. neurons.
200.

Each neuron is made up of a number of nerve fibres called _____________.

A. electrons.
B. molecules.
C. atoms.
D. dendrites.
Answer» D. dendrites.
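
The artificial analogue of questions 198-200 is the single neuron: weighted inputs (playing the role of dendrites) are summed and passed through an activation function. A tiny sketch with hypothetical weights and inputs:

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs followed by a sigmoid activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

print(neuron([0.5, 0.2], [0.8, -0.4], bias=0.1))  # ~0.603
```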
