McqMate

## Q. Which of the following method/methods do we use to find the best fit line for data in Linear Regression?

A. least square error

B. maximum likelihood

C. logarithmic loss

D. both A and B

Answer» A. least square error
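Since the answer hinges on least square error, here is a minimal sketch of fitting the best fit line by minimizing the sum of squared errors, assuming NumPy is available; the data points below are made up for illustration:

```python
import numpy as np

# Illustrative data: x is the input attribute, y the output attribute.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# Design matrix [1, x] so the model is y = intercept + slope * x.
X = np.column_stack([np.ones_like(x), x])

# np.linalg.lstsq solves the least squares problem, i.e. it finds the
# coefficients that minimize the sum of squared errors ||X @ beta - y||^2.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, slope = beta
print(intercept, slope)  # slope ≈ 1.99, intercept ≈ 0.05
```

The same coefficients can be obtained from the normal equations `(X.T @ X) beta = X.T @ y`; `lstsq` is simply the numerically stable way to solve them.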

View all MCQs in Machine Learning (ML)

- Which of the following methods do we use to find the best fit line for data in Linear Regression?
- In a linear regression problem, we are using “R-squared” to measure goodness-of-fit. We add a feature to the linear regression model and retrain it. Which of the following options is true?
- Suppose that we have N independent variables (X1, X2, … Xn) and the dependent variable is Y. Now imagine that you are applying linear regression by fitting the best fit line using least square error on this data. You find that the correlation coefficient for one of its variables (say X1) with Y is -0.95. Which of the following is true for X1?
- What is/are true about ridge regression? 1. When lambda is 0, the model works like a linear regression model 2. When lambda is 0, the model doesn't work like a linear regression model 3. When lambda goes to infinity, we get very, very small coefficients approaching 0 4. When lambda goes to infinity, we get very, very large coefficients approaching infinity
- Suppose that we have N independent variables (X1, X2, … Xn) and the dependent variable is Y. Now imagine that you are applying linear regression by fitting the best fit line using least square error on this data. You find that the correlation coefficient for one of its variables (say X1) with Y is 0.95.
- We have been given a dataset with n records in which we have the input attribute x and the output attribute y. Suppose we use a linear regression method to model this data. To test our linear regressor, we randomly split the data into a training set and a test set. What do you expect will happen to bias and variance as you increase the size of the training data?