McqMate
501. 
Let’s say you are working with categorical feature(s) and you have not looked at the distribution of the categorical variable in the test data. You want to apply one-hot encoding (OHE) on the categorical feature(s). What challenges may you face if you have applied OHE on a categorical variable of the train dataset? 
A.  all categories of categorical variable are not present in the test dataset. 
B.  frequency distribution of categories is different in train as compared to the test dataset. 
C.  train and test always have same distribution. 
D.  both a and b 
Answer» D. both a and b 
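The pitfall in option A can be made concrete. A minimal sketch with scikit-learn's OneHotEncoder on an invented toy column, where `handle_unknown="ignore"` is the usual way to survive categories that appear only in the test set:

```python
from sklearn.preprocessing import OneHotEncoder

# Toy data: the train split never saw the category "blue".
train = [["red"], ["green"]]
test = [["red"], ["blue"]]

# handle_unknown="ignore" encodes an unseen category as an all-zero row
# instead of raising an error at transform time.
enc = OneHotEncoder(handle_unknown="ignore")
enc.fit(train)
print(enc.transform(test).toarray())   # the "blue" row is all zeros
```

Without `handle_unknown="ignore"`, transforming a category absent from the training data raises an error, which is exactly the challenge the question describes.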
502. 
Which of the following sentences is FALSE regarding regression? 
A.  it relates inputs to outputs. 
B.  it is used for prediction. 
C.  it may be used for interpretation. 
D.  it discovers causal relationships. 
Answer» D. it discovers causal relationships. 
503. 
scikit-learn also provides functions for creating dummy datasets from scratch: 
A.  make_classification() 
B.  make_regression() 
C.  make_blobs() 
D.  all above 
Answer» D. all above 
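All three factory functions named in the options live in `sklearn.datasets`; a quick sketch (sample counts and feature counts chosen arbitrarily for illustration):

```python
from sklearn.datasets import make_blobs, make_classification, make_regression

# Synthetic classification, regression, and clustering datasets from scratch.
X_c, y_c = make_classification(n_samples=100, n_features=4, random_state=0)
X_r, y_r = make_regression(n_samples=100, n_features=3, random_state=0)
X_b, y_b = make_blobs(n_samples=100, centers=3, random_state=0)

print(X_c.shape, X_r.shape, X_b.shape)
```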
504. 
______ can accept a NumPy RandomState generator or an integer seed. 
A.  make_blobs 
B.  random_state 
C.  test_size 
D.  training_size 
Answer» B. random_state 
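A minimal sketch of the `random_state` parameter accepting either form, using `train_test_split` on invented data (the particular split sizes are arbitrary):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)
y = np.arange(10)

# random_state accepts an integer seed...
_, X_test_a, _, _ = train_test_split(X, y, test_size=0.3, random_state=42)
# ...or a NumPy RandomState generator; both make the split reproducible.
_, X_test_b, _, _ = train_test_split(X, y, test_size=0.3,
                                     random_state=np.random.RandomState(42))
print((X_test_a == X_test_b).all())
```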
505. 
In many classification problems, the target dataset is made up of categorical labels which cannot immediately be processed by any algorithm. An encoding is needed, and scikit-learn offers at least ______ valid options. 
A.  1 
B.  2 
C.  3 
D.  4 
Answer» B. 2 
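The two options alluded to are commonly taken to be `LabelEncoder` and `LabelBinarizer` from scikit-learn's preprocessing module; a sketch with an invented label list:

```python
from sklearn.preprocessing import LabelBinarizer, LabelEncoder

labels = ["cat", "dog", "cat", "bird"]

# Option 1: LabelEncoder maps each category to an integer code.
le_codes = LabelEncoder().fit_transform(labels)
print(le_codes)

# Option 2: LabelBinarizer expands each label into a one-hot binary row.
onehot = LabelBinarizer().fit_transform(labels)
print(onehot)
```

Classes are assigned codes in sorted order, so with three distinct labels the binarized output has three columns.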
506. 
______ is the most drastic one and should be considered only when the dataset is quite large, the number of missing features is high, and any prediction could be risky. 
A.  removing the whole line 
B.  creating sub model to predict those features 
C.  using an automatic strategy to input them according to the other known values 
D.  all above 
Answer» A. removing the whole line 
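The drastic option versus a gentler automatic strategy can be sketched with pandas on an invented frame with missing values:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, np.nan, 3.0], "b": [4.0, 5.0, np.nan]})

# Drastic strategy: remove every line containing a missing value.
print(df.dropna())            # only the one complete row survives

# Gentler automatic strategy: impute with the column mean instead.
print(df.fillna(df.mean()))
```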
507. 
It's possible to specify if the scaling process must include both mean and standard deviation using the parameters ______. 
A.  with_mean=True/False 
B.  with_std=True/False 
C.  both a & b 
D.  none of the mentioned 
Answer» C. both a & b 
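Both flags belong to scikit-learn's `StandardScaler`; a minimal sketch on a tiny invented column:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0], [2.0], [3.0], [4.0]])

# Both flags on: subtract the mean and divide by the standard deviation.
full = StandardScaler(with_mean=True, with_std=True).fit_transform(X)
# with_mean=False: only the scaling step is applied, no centering.
scaled_only = StandardScaler(with_mean=False, with_std=True).fit_transform(X)

print(round(abs(full.mean()), 6), round(full.std(), 6))   # ~0.0 and 1.0
print(scaled_only.min() > 0)                              # still uncentered
```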
508. 
Which of the following selects the K highest-scoring features? 
A.  SelectPercentile 
B.  FeatureHasher 
C.  SelectKBest 
D.  all above 
Answer» C. SelectKBest 
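A minimal `SelectKBest` sketch on a synthetic dataset (the scoring function and k value here are illustrative choices):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

X, y = make_classification(n_samples=200, n_features=10,
                           n_informative=3, random_state=0)

# SelectKBest keeps the K features that score highest under the chosen
# scoring function (ANOVA F-value here); the rest are dropped.
X_new = SelectKBest(score_func=f_classif, k=3).fit_transform(X, y)
print(X_new.shape)   # (200, 3)
```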
509. 
Suppose you have fitted a complex regression model on a dataset. Now you are using Ridge regression with tuning parameter lambda to reduce its complexity. Choose the option(s) below which describe the relationship of bias and variance with lambda. 
A.  in case of very large lambda; bias is low, variance is low 
B.  in case of very large lambda; bias is low, variance is high 
C.  in case of very large lambda; bias is high, variance is low 
D.  in case of very large lambda; bias is high, variance is high 
Answer» C. in case of very large lambda; bias is high, variance is low 
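The shrinkage behind the answer can be observed directly. A sketch with scikit-learn's `Ridge` on invented data (the true coefficients and alpha grid are arbitrary):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.RandomState(0)
X = rng.randn(60, 5)
y = X @ np.array([3.0, -2.0, 1.5, 0.5, -1.0]) + 0.1 * rng.randn(60)

# As the tuning parameter lambda (alpha here) grows, coefficients shrink
# toward zero: a simpler model, hence higher bias and lower variance.
norms = {}
for alpha in (0.01, 10.0, 1000.0):
    coef = Ridge(alpha=alpha).fit(X, y).coef_
    norms[alpha] = np.abs(coef).sum()
    print(alpha, round(norms[alpha], 3))
```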
510. 
What is/are true about ridge regression?

A.  1 and 3 
B.  1 and 4 
C.  2 and 3 
D.  2 and 4 
Answer» A. 1 and 3 
511. 
Which of the following method(s) does not have a closed-form solution for its coefficients? 
A.  ridge regression 
B.  lasso 
C.  both ridge and lasso 
D.  none of both 
Answer» B. lasso 
512. 
Function used for linear regression in R is 
A.  lm(formula, data) 
B.  lr(formula, data) 
C.  lrm(formula, data) 
D.  regression.linear (formula, data) 
Answer» A. lm(formula, data) 
513. 
In the mathematical equation of linear regression Y = β1 + β2X + ϵ, (β1, β2) refers to 
A.  (x-intercept, slope) 
B.  (slope, x-intercept) 
C.  (y-intercept, slope) 
D.  (slope, y-intercept) 
Answer» C. (y-intercept, slope) 
514. 
Suppose that we have N independent variables (X1, X2, … Xn) and the dependent variable is Y. Now imagine that you are applying linear regression by fitting the best-fit line using least square error on this data. You found that the correlation coefficient for one of its variables (say X1) with Y is 0.95. Which of the following is true for X1? 
A.  relation between the x1 and y is weak 
B.  relation between the x1 and y is strong 
C.  relation between the x1 and y is neutral 
D.  correlation can’t judge the relationship 
Answer» B. relation between the x1 and y is strong 
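The link between a correlation coefficient near 1 and a strong linear relation can be sketched with NumPy on invented data:

```python
import numpy as np

rng = np.random.RandomState(0)
x1 = rng.randn(200)
y = 2.0 * x1 + 0.3 * rng.randn(200)   # y is mostly determined by x1

# A correlation coefficient near +1 or -1 indicates a strong
# linear relationship between the two variables.
r = np.corrcoef(x1, y)[0, 1]
print(r > 0.9)
```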
515. 
We have been given a dataset with n records in which we have input attribute as x and output attribute as y. Suppose we use a linear regression method to model this data. To test our linear regressor, we split the data into a training set and a test set randomly. Now we increase the training set size gradually. As the training set size increases, what do you expect will happen to the mean training error? 
A.  increase 
B.  decrease 
C.  remain constant 
D.  can’t say 
Answer» D. can’t say 
516. 
We have been given a dataset with n records in which we have input attribute as x and output attribute as y. Suppose we use a linear regression method to model this data. To test our linear regressor, we split the data into a training set and a test set randomly. What do you expect will happen to bias and variance as you increase the size of the training data? 
A.  bias increases and variance increases 
B.  bias decreases and variance increases 
C.  bias decreases and variance decreases 
D.  bias increases and variance decreases 
Answer» D. bias increases and variance decreases 
517. 
Suppose you got a situation where you find that your linear regression model is under-fitting the data. In such a situation, which of the following options would you consider?
1. I will add more variables
2. I will start introducing polynomial degree variables
3. I will remove some variables 
A.  1 and 2 
B.  2 and 3 
C.  1 and 3 
D.  1, 2 and 3 
Answer» A. 1 and 2 
518. 
Problem: Players will play if the weather is sunny. Is this statement correct? 
A.  true 
B.  false 
Answer» A. true 
519. 
Multinomial Naïve Bayes Classifier uses a ______ distribution. 
A.  continuous 
B.  discrete 
C.  binary 
Answer» B. discrete 
520. 
For the given weather data, calculate the probability of not playing. 
A.  0.4 
B.  0.64 
C.  0.36 
D.  0.5 
Answer» C. 0.36 
521. 
The minimum time complexity for training an SVM is O(n²). According to this fact, what sizes of datasets are not best suited for SVMs? 
A.  large datasets 
B.  small datasets 
C.  medium sized datasets 
D.  size does not matter 
Answer» A. large datasets 
522. 
The effectiveness of an SVM depends upon: 
A.  selection of kernel 
B.  kernel parameters 
C.  soft margin parameter c 
D.  all of the above 
Answer» D. all of the above 
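The three knobs listed in the options map directly onto scikit-learn's `SVC` constructor; a minimal sketch on synthetic data (parameter values are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# The three knobs from the options: the kernel itself, its parameters
# (gamma for RBF), and the soft-margin parameter C.
clf = SVC(kernel="rbf", gamma=0.1, C=1.0).fit(X, y)
acc = clf.score(X, y)
print(0.0 < acc <= 1.0)
```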
523. 
What do you mean by generalization error in terms of the SVM? 
A.  how far the hyperplane is from the support vectors 
B.  how accurately the svm can predict outcomes for unseen data 
C.  the threshold amount of error in an svm 
Answer» B. how accurately the svm can predict outcomes for unseen data 
524. 
We usually use feature normalization before using the Gaussian kernel in SVM. What is true about feature normalization?
1. We do feature normalization so that the new feature will dominate others
2. Sometimes, feature normalization is not feasible in case of categorical variables
3. Feature normalization always helps when we use the Gaussian kernel in SVM 
A.  1 
B.  1 and 2 
C.  1 and 3 
D.  2 and 3 
Answer» B. 1 and 2 
525. 
Support vectors are the data points that lie closest to the decision surface. 
A.  true 
B.  false 
Answer» A. true 
526. 
Which of the following is not supervised learning? 
A.  pca 
B.  decision tree 
C.  naive bayesian 
D.  linear regression 
Answer» A. pca 
527. 
Gaussian Naïve Bayes Classifier uses a ______ distribution. 
A.  continuous 
B.  discrete 
C.  binary 
Answer» A. continuous 
528. 
If I am using all features of my dataset and I achieve 100% accuracy on my training set, but ~70% on validation set, what should I look out for? 
A.  underfitting 
B.  nothing, the model is perfect 
C.  overfitting 
Answer» C. overfitting 
529. 
What is the purpose of performing cross validation? 
A.  to assess the predictive performance of the models 
B.  to judge how the trained model performs outside the sample ontest data 
C.  both a and b 
Answer» C. both a and b 
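Both purposes in the answer can be sketched with scikit-learn's `cross_val_score` on synthetic data (the estimator and fold count are illustrative choices):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Each of the 5 folds is held out in turn, so every score measures
# performance on data the model never saw during fitting.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(len(scores), round(scores.mean(), 3))
```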
530. 
Suppose you are using a Linear SVM classifier on a 2-class classification problem. You have been given the following data, in which some points circled in red represent the support vectors. If you remove any one of the red points from the data, will the decision boundary change? 
A.  yes 
B.  no 
Answer» A. yes 
531. 
Linear SVMs have no hyperparameters that need to be set by cross-validation. 
A.  true 
B.  false 
Answer» B. false 
532. 
For the given weather data, what is the probability that players will play if the weather is sunny? 
A.  0.5 
B.  0.26 
C.  0.73 
D.  0.6 
Answer» D. 0.6 
533. 
100 people are at a party. The given data gives information about how many wear pink or not, and whether each guest is a man or not. Imagine a pink-wearing guest leaves; what is the probability that the guest was a man? 
A.  0.4 
B.  0.2 
C.  0.6 
D.  0.45 
Answer» B. 0.2 
534. 
Problem: Players will play if the weather is sunny. Is this statement correct? 
A.  true 
B.  false 
Answer» A. true 
535. 
For the given weather data, calculate the probability of playing. 
A.  0.4 
B.  0.64 
C.  0.29 
D.  0.75 
Answer» B. 0.64 
536. 
For the given weather data, calculate the probability of not playing. 
A.  0.4 
B.  0.64 
C.  0.36 
D.  0.5 
Answer» C. 0.36 
537. 
For the given weather data, what is the probability that players will play if the weather is sunny? 
A.  0.5 
B.  0.26 
C.  0.73 
D.  0.6 
Answer» D. 0.6 
538. 
100 people are at a party. The given data gives information about how many wear pink or not, and whether each guest is a man or not. Imagine a pink-wearing guest leaves; what is the probability that the guest was a man? 
A.  0.4 
B.  0.2 
C.  0.6 
D.  0.45 
Answer» B. 0.2 
539. 
100 people are at a party. The given data gives information about how many wear pink or not, and whether each guest is a man or not. A pink-wearing guest leaves; was it a man? 
A.  true 
B.  false 
Answer» A. true 
540. 
What do you mean by generalization error in terms of the SVM? 
A.  how far the hyperplane is from the support vectors 
B.  how accurately the svm can predict outcomes for unseen data 
C.  the threshold amount of error in an svm 
Answer» B. how accurately the svm can predict outcomes for unseen data 
541. 
The effectiveness of an SVM depends upon: 
A.  selection of kernel 
B.  kernel parameters 
C.  soft margin parameter c 
D.  all of the above 
Answer» D. all of the above 
542. 
Support vectors are the data points that lie closest to the decision surface. 
A.  true 
B.  false 
Answer» A. true 
543. 
SVMs are less effective when: 
A.  the data is linearly separable 
B.  the data is clean and ready to use 
C.  the data is noisy and contains overlapping points 
Answer» C. the data is noisy and contains overlapping points 
544. 
Suppose you are using an RBF kernel in SVM with a high Gamma value. What does this signify? 
A.  the model would consider even far away points from the hyperplane for modeling 
B.  the model would consider only the points close to the hyperplane for modeling 
C.  the model would not be affected by the distance of points from the hyperplane 
D.  none of the above 
Answer» B. the model would consider only the points close to the hyperplane for modeling 
545. 
The cost parameter in the SVM means: 
A.  the number of cross validations to be made 
B.  the kernel to be used 
C.  the trade-off between misclassification and simplicity of the model 
D.  none of the above 
Answer» C. the trade-off between misclassification and simplicity of the model 
546. 
If I am using all features of my dataset and I achieve 100% accuracy on my training set, but ~70% on validation set, what should I look out for? 
A.  underfitting 
B.  nothing, the model is perfect 
C.  overfitting 
Answer» C. overfitting 
547. 
Which of the following are real world applications of the SVM? 
A.  text and hypertext categorization 
B.  image classification 
C.  clustering of news articles 
D.  all of the above 
Answer» D. all of the above 
548. 
Suppose you have trained an SVM with a linear decision boundary. After training the SVM, you correctly infer that your SVM model is under-fitting. Which of the following options would you be more likely to consider when iterating on the SVM next time? 
A.  you want to increase your data points 
B.  you want to decrease your data points 
C.  you will try to calculate more variables 
D.  you will try to reduce the features 
Answer» C. you will try to calculate more variables 
549. 
We usually use feature normalization before using the Gaussian kernel in SVM. What is true about feature normalization?
1. We do feature normalization so that the new feature will dominate others
2. Sometimes, feature normalization is not feasible in case of categorical variables
3. Feature normalization always helps when we use the Gaussian kernel in SVM 
A.  1 
B.  1 and 2 
C.  1 and 3 
D.  2 and 3 
Answer» B. 1 and 2 
550. 
Linear SVMs have no hyperparameters that need to be set by cross-validation. 
A.  true 
B.  false 
Answer» B. false 
551. 
In a real problem, you should check to see if the SVM is separable and then include slack variables if it is not separable. 
A.  true 
B.  false 
Answer» B. false 
552. 
In reinforcement learning, this feedback is usually called ______. 
A.  overfitting 
B.  overlearning 
C.  reward 
D.  none of above 
Answer» C. reward 
553. 
In the last decade, many researchers started training bigger and bigger models, built with several different layers; that's why this approach is called ______. 
A.  deep learning 
B.  machine learning 
C.  reinforcement learning 
D.  unsupervised learning 
Answer» A. deep learning 
554. 
It is necessary to allow the model to develop a generalization ability and avoid a common problem called ______. 
A.  overfitting 
B.  overlearning 
C.  classification 
D.  regression 
Answer» A. overfitting 
555. 
Techniques that involve the usage of both labeled and unlabeled data are called ______. 
A.  supervised 
B.  semi-supervised 
C.  unsupervised 
D.  none of the above 
Answer» B. semi-supervised 
556. 
Reinforcement learning is particularly efficient when . 
A.  the environment is not completely deterministic 
B.  it's often very dynamic 
C.  it's impossible to have a precise error measure 
D.  all above 
Answer» D. all above 
557. 
During the last few years, many ______ algorithms have been applied to deep neural networks to learn the best policy for playing Atari video games and to teach an agent how to associate the right action with an input representing the state. 
A.  logical 
B.  classical 
C.  classification 
D.  none of above 
Answer» D. none of above 
558. 
If there is only a discrete number of possible outcomes (called categories), the process becomes a ______. 
A.  regression 
B.  classification. 
C.  modelfree 
D.  categories 
Answer» B. classification. 
559. 
Let’s say you are working with categorical feature(s) and you have not looked at the distribution of the categorical variable in the test data. You want to apply one-hot encoding (OHE) on the categorical feature(s). What challenges may you face if you have applied OHE on a categorical variable of the train dataset? 
A.  all categories of categorical variable are not present in the test dataset. 
B.  frequency distribution of categories is different in train as compared to the test dataset. 
C.  train and test always have same distribution. 
D.  both a and b 
Answer» D. both a and b 
560. 
scikit-learn also provides functions for creating dummy datasets from scratch: 
A.  make_classification() 
B.  make_regression() 
C.  make_blobs() 
D.  all above 
Answer» D. all above 
561. 
______ can accept a NumPy RandomState generator or an integer seed. 
A.  make_blobs 
B.  random_state 
C.  test_size 
D.  training_size 
Answer» B. random_state 
562. 
In many classification problems, the target dataset is made up of categorical labels which cannot immediately be processed by any algorithm. An encoding is needed, and scikit-learn offers at least ______ valid options. 
A.  1 
B.  2 
C.  3 
D.  4 
Answer» B. 2 
563. 
It's possible to specify if the scaling process must include both mean and standard deviation using the parameters ______. 
A.  with_mean=True/False 
B.  with_std=True/False 
C.  both a & b 
D.  none of the mentioned 
Answer» C. both a & b 
564. 
Which of the following selects the K highest-scoring features? 
A.  SelectPercentile 
B.  FeatureHasher 
C.  SelectKBest 
D.  all above 
Answer» C. SelectKBest 
565. 
We have been given a dataset with n records in which we have input attribute as x and output attribute as y. Suppose we use a linear regression method to model this data. To test our linear regressor, we split the data into a training set and a test set randomly. Now we increase the training set size gradually. As the training set size increases, what do you expect will happen to the mean training error? 
A.  increase 
B.  decrease 
C.  remain constant 
D.  can’t say 
Answer» D. can’t say 
566. 
We have been given a dataset with n records in which we have input attribute as x and output attribute as y. Suppose we use a linear regression method to model this data. To test our linear regressor, we split the data into a training set and a test set randomly. What do you expect will happen to bias and variance as you increase the size of the training data? 
A.  bias increases and variance increases 
B.  bias decreases and variance increases 
C.  bias decreases and variance decreases 
D.  bias increases and variance decreases 
Answer» D. bias increases and variance decreases 
567. 
Problem: Players will play if the weather is sunny. Is this statement correct? 
A.  true 
B.  false 
Answer» A. true 
568. 
Multinomial Naïve Bayes Classifier uses a ______ distribution. 
A.  continuous 
B.  discrete 
C.  binary 
Answer» B. discrete 
569. 
The minimum time complexity for training an SVM is O(n²). According to this fact, what sizes of datasets are not best suited for SVMs? 
A.  large datasets 
B.  small datasets 
C.  medium sized datasets 
D.  size does not matter 
Answer» A. large datasets 
570. 
We usually use feature normalization before using the Gaussian kernel in SVM. What is true about feature normalization?
1. We do feature normalization so that the new feature will dominate others
2. Sometimes, feature normalization is not feasible in case of categorical variables
3. Feature normalization always helps when we use the Gaussian kernel in SVM 
A.  1 
B.  1 and 2 
C.  1 and 3 
D.  2 and 3 
Answer» B. 1 and 2 
571. 
Which of the following is not supervised learning? 
A.  pca 
B.  decision tree 
C.  naive bayesian 
D.  linear regression 
Answer» A. pca 
572. 
Gaussian Naïve Bayes Classifier uses a ______ distribution. 
A.  continuous 
B.  discrete 
C.  binary 
Answer» A. continuous 
573. 
If I am using all features of my dataset and I achieve 100% accuracy on my training set, but ~70% on validation set, what should I look out for? 
A.  underfitting 
B.  nothing, the model is perfect 
C.  overfitting 
Answer» C. overfitting 
574. 
The cost parameter in the SVM means: 
A.  the number of cross validations to be made 
B.  the kernel to be used 
C.  the trade-off between misclassification and simplicity of the model 
D.  none of the above 
Answer» C. the trade-off between misclassification and simplicity of the model 
575. 
We usually use feature normalization before using the Gaussian kernel in SVM. What is true about feature normalization?
1. We do feature normalization so that the new feature will dominate others
2. Sometimes, feature normalization is not feasible in case of categorical variables
3. Feature normalization always helps when we use the Gaussian kernel in SVM 
A.  1 
B.  1 and 2 
C.  1 and 3 
D.  2 and 3 
Answer» B. 1 and 2 
576. 
The effectiveness of an SVM depends upon: 
A.  selection of kernel 
B.  kernel parameters 
C.  soft margin parameter c 
D.  all of the above 
Answer» D. all of the above 
577. 
The process of forming general concept definitions from examples of concepts to be learned. 
A.  deduction 
B.  abduction 
C.  induction 
D.  conjunction 
Answer» C. induction 
578. 
Computers are best at learning 
A.  facts. 
B.  concepts. 
C.  procedures. 
D.  principles. 
Answer» A. facts. 
579. 
Data used to build a data mining model. 
A.  validation data 
B.  training data 
C.  test data 
D.  hidden data 
Answer» B. training data 
580. 
Supervised learning and unsupervised clustering both require at least one 
A.  hidden attribute. 
B.  output attribute. 
C.  input attribute. 
D.  categorical attribute. 
Answer» C. input attribute. 
581. 
Supervised learning differs from unsupervised clustering in that supervised learning requires 
A.  at least one input attribute. 
B.  input attributes to be categorical. 
C.  at least one output attribute. 
D.  output attributes to be categorical. 
Answer» C. at least one output attribute. 
582. 
A regression model in which more than one independent variable is used to predict the dependent variable is called 
A.  a simple linear regression model 
B.  a multiple regression model 
C.  an independent model 
D.  none of the above 
Answer» B. a multiple regression model 
583. 
A term used to describe the case when the independent variables in a multiple regression model are correlated is 
A.  regression 
B.  correlation 
C.  multicollinearity 
D.  none of the above 
Answer» C. multicollinearity 
584. 
A multiple regression model has the form: y = 2 + 3x1 + 4x2. As x1 increases by 1 unit (holding x2 constant), y will 
A.  increase by 3 units 
B.  decrease by 3 units 
C.  increase by 4 units 
D.  decrease by 4 units 
Answer» A. increase by 3 units 
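The coefficient interpretation behind the answer can be checked directly (the x2 value held constant is arbitrary):

```python
# Estimated model from the question: y = 2 + 3*x1 + 4*x2.
def predict(x1, x2):
    return 2 + 3 * x1 + 4 * x2

# Hold x2 fixed and raise x1 by one unit: y rises by exactly 3,
# the coefficient on x1.
print(predict(2, 5) - predict(1, 5))   # 3
```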
585. 
A multiple regression model has 
A.  only one independent variable 
B.  more than one dependent variable 
C.  more than one independent variable 
D.  none of the above 
Answer» C. more than one independent variable 
586. 
A measure of goodness of fit for the estimated regression equation is the 
A.  multiple coefficient of determination 
B.  mean square due to error 
C.  mean square due to regression 
D.  none of the above 
Answer» A. multiple coefficient of determination 
587. 
The adjusted multiple coefficient of determination accounts for 
A.  the number of dependent variables in the model 
B.  the number of independent variables in the model 
C.  unusually large predictors 
D.  none of the above 
Answer» B. the number of independent variables in the model 
588. 
The multiple coefficient of determination is computed by 
A.  dividing ssr by sst 
B.  dividing sst by ssr 
C.  dividing sst by sse 
D.  none of the above 
Answer» A. dividing ssr by sst 
589. 
For a multiple regression model, SST = 200 and SSE = 50. The multiple coefficient of determination is 
A.  0.25 
B.  4.00 
C.  0.75 
D.  none of the above 
Answer» C. 0.75 
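The computation behind the numbers in the question:

```python
# Multiple coefficient of determination: R^2 = SSR / SST = (SST - SSE) / SST.
SST = 200.0
SSE = 50.0
R2 = (SST - SSE) / SST
print(R2)   # 0.75
```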
590. 
A nearest neighbor approach is best used 
A.  with large-sized datasets. 
B.  when irrelevant attributes have been removed from the data. 
C.  when a generalized model of the data is desirable. 
D.  when an explanation of what has been found is of primary importance. 
Answer» B. when irrelevant attributes have been removed from the data. 
591. 
Another name for an output attribute. 
A.  predictive variable 
B.  independent variable 
C.  estimated variable 
D.  dependent variable 
Answer» D. dependent variable 
592. 
Classification problems are distinguished from estimation problems in that 
A.  classification problems require the output attribute to be numeric. 
B.  classification problems require the output attribute to be categorical. 
C.  classification problems do not allow an output attribute. 
D.  classification problems are designed to predict future outcome. 
Answer» B. classification problems require the output attribute to be categorical. 
593. 
Which statement is true about prediction problems? 
A.  the output attribute must be categorical. 
B.  the output attribute must be numeric. 
C.  the resultant model is designed to determine future outcomes. 
D.  the resultant model is designed to classify current behavior. 
Answer» C. the resultant model is designed to determine future outcomes. 
594. 
Which of the following is a common use of unsupervised clustering? 
A.  detect outliers 
B.  determine a best set of input attributes for supervised learning 
C.  evaluate the likely performance of a supervised learner model 
D.  determine if meaningful relationships can be found in a dataset 
Answer» A. detect outliers 
595. 
The average positive difference between computed and desired outcome values. 
A.  root mean squared error 
B.  mean squared error 
C.  mean absolute error 
D.  mean positive error 
Answer» C. mean absolute error 
596. 
Selecting data so as to assure that each class is properly represented in both the training and test set. 
A.  cross validation 
B.  stratification 
C.  verification 
D.  bootstrapping 
Answer» B. stratification 
597. 
The standard error is defined as the square root of this computation. 
A.  the sample variance divided by the total number of sample instances. 
B.  the population variance divided by the total number of sample instances. 
C.  the sample variance divided by the sample mean. 
D.  the population variance divided by the sample mean. 
Answer» A. the sample variance divided by the total number of sample instances. 
598. 
Data used to optimize the parameter settings of a supervised learner model. 
A.  training 
B.  test 
C.  verification 
D.  validation 
Answer» D. validation 
599. 
Bootstrapping allows us to 
A.  choose the same training instance several times. 
B.  choose the same test set instance several times. 
C.  build models with alternative subsets of the training data several times. 
D.  test a model with alternative subsets of the test data several times. 
Answer» A. choose the same training instance several times. 
600. 
The correlation coefficient for two real-valued attributes is –0.85. What does this value tell you? 
A.  the attributes are not linearly related. 
B.  as the value of one attribute increases the value of the second attribute also increases. 
C.  as the value of one attribute decreases the value of the second attribute increases. 
D.  the attributes show a curvilinear relationship. 
Answer» C. as the value of one attribute decreases the value of the second attribute increases. 