

McqMate
These multiple-choice questions (MCQs) are designed to enhance your knowledge and understanding in the following area: Computer Science Engineering (CSE).
551. |
In reinforcement learning, this feedback is usually called . |
A. | overfitting |
B. | overlearning |
C. | reward |
D. | none of above |
Answer» C. reward |
552. |
In the last decade, many researchers started training bigger and bigger models, built with several different layers, which is why this approach is called . |
A. | deep learning |
B. | machine learning |
C. | reinforcement learning |
D. | unsupervised learning |
Answer» A. deep learning |
553. |
It is necessary to allow the model to develop a generalization ability and avoid a common problem called . |
A. | overfitting |
B. | overlearning |
C. | classification |
D. | regression |
Answer» A. overfitting |
554. |
Techniques that involve the usage of both labeled and unlabeled data are called . |
A. | supervised |
B. | semi-supervised |
C. | unsupervised |
D. | none of the above |
Answer» B. semi-supervised |
555. |
Reinforcement learning is particularly efficient when . |
A. | the environment is not completely deterministic |
B. | it's often very dynamic |
C. | it's impossible to have a precise error measure |
D. | all above |
Answer» D. all above |
556. |
During the last few years, many _ algorithms have been applied to deep neural networks to learn the best policy for playing Atari video games and to teach an agent how to associate the right action with an input representing the state. |
A. | logical |
B. | classical |
C. | classification |
D. | none of above |
Answer» D. none of above |
557. |
If there is only a discrete number of possible outcomes (called categories), the process becomes a . |
A. | regression |
B. | classification. |
C. | modelfree |
D. | categories |
Answer» B. classification. |
558. |
Let's say you are working with categorical feature(s) and you have not looked at the distribution of the categorical variable in the test data. You want to apply one hot encoding (OHE) on the categorical feature(s). What challenges may you face if you have applied OHE on a categorical variable of the train dataset? |
A. | all categories of categorical variable are not present in the test dataset. |
B. | frequency distribution of categories is different in train as compared to the test dataset. |
C. | train and test always have same distribution. |
D. | both a and b |
Answer» D. both a and b |
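Both challenges above can be handled in practice. A minimal sketch with scikit-learn's `OneHotEncoder` (the toy color data is a hypothetical example) shows how a category absent from the training set is handled at test time:

```python
from sklearn.preprocessing import OneHotEncoder

# Hypothetical toy data: the test set contains a category ("green")
# that never appears in the training data.
train = [["red"], ["blue"], ["red"]]
test = [["blue"], ["green"]]

# handle_unknown="ignore" encodes an unseen category as an all-zero
# row instead of raising an error at transform time.
enc = OneHotEncoder(handle_unknown="ignore")
enc.fit(train)
encoded = enc.transform(test).toarray()
```

With the encoder fitted only on `train`, the unseen "green" row comes out as all zeros, which is exactly the train/test mismatch the question warns about.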
559. |
scikit-learn also provides functions for creating dummy datasets from scratch: |
A. | make_classification() |
B. | make_regression() |
C. | make_blobs() |
D. | all above |
Answer» D. all above |
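All three scikit-learn helpers generate synthetic datasets from scratch; a minimal sketch (the sample sizes are arbitrary):

```python
from sklearn.datasets import make_blobs, make_classification, make_regression

# Each helper builds a synthetic dataset from scratch; random_state
# makes the generated data reproducible.
X_c, y_c = make_classification(n_samples=100, n_features=4, random_state=0)
X_r, y_r = make_regression(n_samples=100, n_features=3, random_state=0)
X_b, y_b = make_blobs(n_samples=100, centers=3, random_state=0)
```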
560. |
Which of the following can accept a NumPy RandomState generator or an integer seed? |
A. | make_blobs |
B. | random_state |
C. | test_size |
D. | training_size |
Answer» B. random_state |
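A short sketch of `random_state` in `train_test_split` (the toy arrays are hypothetical): passing the same integer seed makes the split reproducible across runs.

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)
y = np.arange(10)

# random_state accepts an integer seed (or a NumPy RandomState), so
# the same split is produced on every run.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
```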
561. |
In many classification problems, the target dataset is made up of categorical labels which cannot immediately be processed by any algorithm. An encoding is needed, and scikit-learn offers at least _ valid options. |
A. | 1 |
B. | 2 |
C. | 3 |
D. | 4 |
Answer» B. 2 |
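The two options referred to are, presumably, `LabelEncoder` and `LabelBinarizer`; a minimal sketch with hypothetical labels:

```python
from sklearn.preprocessing import LabelBinarizer, LabelEncoder

labels = ["cat", "dog", "cat", "bird"]

# Option 1: LabelEncoder maps each class to an integer code
# (classes are sorted, so bird=0, cat=1, dog=2).
ints = LabelEncoder().fit_transform(labels)

# Option 2: LabelBinarizer maps each class to a one-hot row.
onehot = LabelBinarizer().fit_transform(labels)
```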
562. |
It's possible to specify if the scaling process must include both mean and standard deviation using the parameters . |
A. | with_mean=true/false |
B. | with_std=true/false |
C. | both a & b |
D. | none of the mentioned |
Answer» C. both a & b |
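A minimal sketch of `StandardScaler` with both parameters (the toy column is hypothetical); either the centring or the scaling step can be switched off independently:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0], [2.0], [3.0], [4.0]])

# with_mean=True centres the data; with_std=True scales it to unit
# variance. Either step can be disabled independently.
scaled = StandardScaler(with_mean=True, with_std=True).fit_transform(X)
centred_only = StandardScaler(with_mean=True, with_std=False).fit_transform(X)
```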
563. |
Which of the following selects the best K high-score features? |
A. | selectpercentile |
B. | featurehasher |
C. | selectkbest |
D. | all above |
Answer» C. selectkbest |
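A minimal `SelectKBest` sketch on a synthetic dataset (parameter values are arbitrary): it keeps the `k` features with the highest scores under the chosen scoring function.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

X, y = make_classification(n_samples=200, n_features=10,
                           n_informative=3, random_state=0)

# Keep the 3 features with the highest ANOVA F-scores.
X_new = SelectKBest(score_func=f_classif, k=3).fit_transform(X, y)
```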
564. |
We have been given a dataset with n records in which we have input attribute as x and output attribute as y. Suppose we use a linear regression method to model this data. To test our linear regressor, we split the data in training set and test set randomly. Now we increase the training set size gradually. As the training set size increases, what do you expect will happen with the mean training error? |
A. | increase |
B. | decrease |
C. | remain constant |
D. | can’t say |
Answer» D. can’t say |
565. |
We have been given a dataset with n records in which we have input attribute as x and output attribute as y. Suppose we use a linear regression method to model this data. To test our linear regressor, we split the data in training set and test set randomly. What do you expect will happen with bias and variance as you increase the size of training data? |
A. | bias increases and variance increases |
B. | bias decreases and variance increases |
C. | bias decreases and variance decreases |
D. | bias increases and variance decreases |
Answer» D. bias increases and variance decreases |
566. |
Problem: Players will play if the weather is sunny. Is this statement correct? |
A. | true |
B. | false |
Answer» A. true |
567. |
Multinomial Naïve Bayes Classifier uses a _ distribution |
A. | continuous |
B. | discrete |
C. | binary |
Answer» B. discrete |
568. |
The minimum time complexity for training an SVM is O(n²). According to this fact, what sizes of datasets are not best suited for SVMs? |
A. | large datasets |
B. | small datasets |
C. | medium sized datasets |
D. | size does not matter |
Answer» A. large datasets |
569. |
We usually use feature normalization before using the Gaussian kernel in SVM. What is true about feature normalization? 1. We do feature normalization so that new features will dominate other features 2. Sometimes, feature normalization is not feasible in case of categorical variables 3. Feature normalization always helps when we use Gaussian kernel in SVM |
A. | 1 |
B. | 1 and 2 |
C. | 1 and 3 |
D. | 2 and 3 |
Answer» B. 1 and 2 |
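A minimal sketch of the normalize-then-RBF-kernel pattern, using a scikit-learn pipeline on a synthetic dataset (the dataset parameters are arbitrary):

```python
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Normalize features before the RBF (Gaussian) kernel, since the
# kernel's distance computation is sensitive to feature scale.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X, y)
acc = model.score(X, y)
```

The pipeline ensures the scaler is fitted on training data only when used inside cross-validation, which is the usual reason for bundling the two steps.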
570. |
Which of the following is not supervised learning? |
A. | pca |
B. | decision tree |
C. | naive bayesian |
D. | linear regression |
Answer» A. pca |
571. |
Gaussian Naïve Bayes Classifier uses a _ distribution |
A. | continuous |
B. | discrete |
C. | binary |
Answer» A. continuous |
572. |
If I am using all features of my dataset and I achieve 100% accuracy on my training set, but ~70% on validation set, what should I look out for? |
A. | underfitting |
B. | nothing, the model is perfect |
C. | overfitting |
Answer» C. overfitting |
573. |
The cost parameter in the SVM means: |
A. | the number of cross- validations to be made |
B. | the kernel to be used |
C. | the tradeoff between misclassification and simplicity of the model |
D. | none of the above |
Answer» C. the tradeoff between misclassification and simplicity of the model |
574. |
We usually use feature normalization before using the Gaussian kernel in SVM. What is true about feature normalization? 1. We do feature normalization so that new features will dominate other features 2. Sometimes, feature normalization is not feasible in case of categorical variables 3. Feature normalization always helps when we use Gaussian kernel in SVM |
A. | 1 |
B. | 1 and 2 |
C. | 1 and 3 |
D. | 2 and 3 |
Answer» B. 1 and 2 |
575. |
The effectiveness of an SVM depends upon: |
A. | selection of kernel |
B. | kernel parameters |
C. | soft margin parameter c |
D. | all of the above |
Answer» D. all of the above |
576. |
The process of forming general concept definitions from examples of concepts to be learned. |
A. | deduction |
B. | abduction |
C. | induction |
D. | conjunction |
Answer» C. induction |
577. |
Computers are best at learning |
A. | facts. |
B. | concepts. |
C. | procedures. |
D. | principles. |
Answer» A. facts. |
578. |
Data used to build a data mining model. |
A. | validation data |
B. | training data |
C. | test data |
D. | hidden data |
Answer» B. training data |
579. |
Supervised learning and unsupervised clustering both require at least one |
A. | hidden attribute. |
B. | output attribute. |
C. | input attribute. |
D. | categorical attribute. |
Answer» C. input attribute. |
580. |
Supervised learning differs from unsupervised clustering in that supervised learning requires |
A. | at least one input attribute. |
B. | input attributes to be categorical. |
C. | at least one output attribute. |
D. | output attributes to be categorical. |
Answer» C. at least one output attribute. |
581. |
A regression model in which more than one independent variable is used to predict the dependent variable is called |
A. | a simple linear regression model |
B. | a multiple regression model |
C. | an independent model |
D. | none of the above |
Answer» B. a multiple regression model |
582. |
A term used to describe the case when the independent variables in a multiple regression model are correlated is |
A. | regression |
B. | correlation |
C. | multicollinearity |
D. | none of the above |
Answer» C. multicollinearity |
583. |
A multiple regression model has the form: y = 2 + 3x1 + 4x2. As x1 increases by 1 unit (holding x2 constant), y will |
A. | increase by 3 units |
B. | decrease by 3 units |
C. | increase by 4 units |
D. | decrease by 4 units |
Answer» A. increase by 3 units |
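The effect of a one-unit change in x1 can be checked directly from the model in the question (the probe values 4 and 5 are arbitrary):

```python
def predict(x1, x2):
    # The model from the question: y = 2 + 3*x1 + 4*x2
    return 2 + 3 * x1 + 4 * x2

# Holding x2 constant, a one-unit increase in x1 changes y by the
# coefficient of x1, i.e. +3.
delta = predict(x1=5, x2=7) - predict(x1=4, x2=7)
```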
584. |
A multiple regression model has |
A. | only one independent variable |
B. | more than one dependent variable |
C. | more than one independent variable |
D. | none of the above |
Answer» C. more than one independent variable |
585. |
A measure of goodness of fit for the estimated regression equation is the |
A. | multiple coefficient of determination |
B. | mean square due to error |
C. | mean square due to regression |
D. | none of the above |
Answer» A. multiple coefficient of determination |
586. |
The adjusted multiple coefficient of determination accounts for |
A. | the number of dependent variables in the model |
B. | the number of independent variables in the model |
C. | unusually large predictors |
D. | none of the above |
Answer» B. the number of independent variables in the model |
587. |
The multiple coefficient of determination is computed by |
A. | dividing ssr by sst |
B. | dividing sst by ssr |
C. | dividing sst by sse |
D. | none of the above |
Answer» A. dividing ssr by sst |
588. |
For a multiple regression model, SST = 200 and SSE = 50. The multiple coefficient of determination is |
A. | 0.25 |
B. | 4.00 |
C. | 0.75 |
D. | none of the above |
Answer» C. 0.75 |
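The multiple coefficient of determination can be computed directly from the sums of squares given in the question:

```python
sst = 200.0        # total sum of squares
sse = 50.0         # sum of squared errors (residuals)
ssr = sst - sse    # sum of squares due to regression

# R^2 can be written either as SSR/SST or as 1 - SSE/SST.
r2 = ssr / sst
```

Here R² = 150 / 200 = 0.75, i.e. 75% of the variation in y is explained by the regression.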
589. |
A nearest neighbor approach is best used |
A. | with large-sized datasets. |
B. | when irrelevant attributes have been removed from the data. |
C. | when a generalized model of the data is desirable. |
D. | when an explanation of what has been found is of primary importance. |
Answer» B. when irrelevant attributes have been removed from the data. |
590. |
Another name for an output attribute. |
A. | predictive variable |
B. | independent variable |
C. | estimated variable |
D. | dependent variable |
Answer» D. dependent variable |
591. |
Classification problems are distinguished from estimation problems in that |
A. | classification problems require the output attribute to be numeric. |
B. | classification problems require the output attribute to be categorical. |
C. | classification problems do not allow an output attribute. |
D. | classification problems are designed to predict future outcome. |
Answer» B. classification problems require the output attribute to be categorical. |
592. |
Which statement is true about prediction problems? |
A. | the output attribute must be categorical. |
B. | the output attribute must be numeric. |
C. | the resultant model is designed to determine future outcomes. |
D. | the resultant model is designed to classify current behavior. |
Answer» C. the resultant model is designed to determine future outcomes. |
593. |
Which of the following is a common use of unsupervised clustering? |
A. | detect outliers |
B. | determine a best set of input attributes for supervised learning |
C. | evaluate the likely performance of a supervised learner model |
D. | determine if meaningful relationships can be found in a dataset |
Answer» A. detect outliers |
594. |
The average positive difference between computed and desired outcome values. |
A. | root mean squared error |
B. | mean squared error |
C. | mean absolute error |
D. | mean positive error |
Answer» C. mean absolute error |
595. |
Selecting data so as to assure that each class is properly represented in both the training and test set. |
A. | cross validation |
B. | stratification |
C. | verification |
D. | bootstrapping |
Answer» B. stratification |
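In scikit-learn, stratification is done by passing the label array to the `stratify` parameter of `train_test_split` (the toy arrays below are hypothetical):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)
y = np.array([0] * 5 + [1] * 5)

# stratify=y keeps the class proportions the same in the training
# and test splits.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.4, stratify=y, random_state=0)
```

With 5 instances per class and a 40% test split, each class contributes exactly two instances to the test set.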
596. |
The standard error is defined as the square root of this computation. |
A. | the sample variance divided by the total number of sample instances. |
B. | the population variance divided by the total number of sample instances. |
C. | the sample variance divided by the sample mean. |
D. | the population variance divided by the sample mean. |
Answer» A. the sample variance divided by the total number of sample instances. |
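A minimal worked sketch of that definition (the sample values are hypothetical):

```python
import math

sample = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(sample)
mean = sum(sample) / n

# Sample variance divided by the number of instances, then the
# square root, per the definition above.
variance = sum((x - mean) ** 2 for x in sample) / n
std_error = math.sqrt(variance / n)
```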
597. |
Data used to optimize the parameter settings of a supervised learner model. |
A. | training |
B. | test |
C. | verification |
D. | validation |
Answer» D. validation |
598. |
Bootstrapping allows us to |
A. | choose the same training instance several times. |
B. | choose the same test set instance several times. |
C. | build models with alternative subsets of the training data several times. |
D. | test a model with alternative subsets of the test data several times. |
Answer» A. choose the same training instance several times. |
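Bootstrapping is sampling with replacement, which is why the same training instance can appear several times in one sample; a minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.arange(10)

# Sampling with replacement: one bootstrap sample is the same size
# as the data, and the same instance may be drawn several times.
boot = rng.choice(data, size=len(data), replace=True)
```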
599. |
The correlation coefficient for two real-valued attributes is –0.85. What does this value tell you? |
A. | the attributes are not linearly related. |
B. | as the value of one attribute increases the value of the second attribute also increases. |
C. | as the value of one attribute decreases the value of the second attribute increases. |
D. | the attributes show a curvilinear relationship. |
Answer» C. as the value of one attribute decreases the value of the second attribute increases. |
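A strongly negative coefficient means the attributes move in opposite directions; a minimal sketch with an exactly inverse pair of hypothetical attributes (here the coefficient comes out as −1):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([8.0, 6.0, 4.0, 2.0])   # decreases as `a` increases

# Pearson correlation coefficient of the two attributes.
r = np.corrcoef(a, b)[0, 1]
```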
600. |
The average squared difference between classifier predicted output and actual output. |
A. | mean squared error |
B. | root mean squared error |
C. | mean absolute error |
D. | mean relative error |
Answer» A. mean squared error |
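The three error measures that recur in this section can be computed side by side (the predicted/actual values are hypothetical):

```python
import numpy as np

actual = np.array([3.0, -0.5, 2.0, 7.0])
predicted = np.array([2.5, 0.0, 2.0, 8.0])

mse = np.mean((actual - predicted) ** 2)     # mean squared error
rmse = np.sqrt(mse)                          # root mean squared error
mae = np.mean(np.abs(actual - predicted))    # mean absolute error
```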