McqMate
McqMate Copyright © 2026
Q. In the regression equation Y = 75.65 + 0.50X, the intercept is
A. 0.5
B. 75.65
C. 1
D. indeterminable
Answer» B. 75.65 (in the form Y = a + bX, the constant term a = 75.65 is the intercept; the coefficient of X, b = 0.50, is the slope)
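To see why 75.65 is the intercept, here is a minimal sketch (not part of the original question) that generates points lying exactly on the line Y = 75.65 + 0.50X and fits a degree-1 polynomial to recover both coefficients:

```python
import numpy as np

# Points that lie exactly on the line Y = 75.65 + 0.50X
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 75.65 + 0.50 * x

# polyfit with deg=1 returns [slope, intercept]
slope, intercept = np.polyfit(x, y, deg=1)
print(round(slope, 2), round(intercept, 2))  # → 0.5 75.65
```

The fitted line crosses the Y-axis at X = 0, where Y = 75.65, which is exactly the intercept term.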
Related MCQs
What is/are true about ridge regression? 1. When lambda is 0, model works like linear regression model 2. When lambda is 0, model doesn't work like linear regression model 3. When lambda goes to infinity, we get very, very small coefficients approaching 0 4. When lambda goes to infinity, we get very, very large coefficients approaching infinity
We can also compute the coefficient of linear regression with the help of an analytical method called "Normal Equation". Which of the following is/are true about "Normal Equation"? 1. We don't have to choose the learning rate 2. It becomes slow when number of features is very large 3. No need to iterate
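As a hedged sketch of what the Normal Equation question refers to: linear-regression coefficients can be obtained in closed form as theta = (XᵀX)⁻¹ Xᵀy, with no learning rate and no iteration, although solving the XᵀX system becomes slow when the number of features is very large. The data below is illustrative, reusing the line Y = 75.65 + 0.50X from the question above:

```python
import numpy as np

# Closed-form least squares via the Normal Equation: theta = (X^T X)^{-1} X^T y
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 75.65 + 0.50 * x

X = np.column_stack([np.ones_like(x), x])   # prepend a bias (intercept) column
theta = np.linalg.solve(X.T @ X, X.T @ y)   # theta = [intercept, slope]
intercept, slope = theta
print(round(float(intercept), 2), round(float(slope), 2))  # → 75.65 0.5
```

Using `np.linalg.solve` rather than explicitly inverting XᵀX is the standard numerically safer choice; the cubic cost of this solve in the feature dimension is what makes the method slow for very many features.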
How does the bias-variance decomposition of a ridge regression estimator compare with that of ordinary least squares regression?
In a linear regression problem, we are using "R-squared" to measure goodness-of-fit. We add a feature to the linear regression model and retrain the same model. Which of the following options is true?
Suppose we fit "Lasso Regression" to a data set which has 100 features (X1, X2, …, X100). Now we rescale one of these features by multiplying it by 10 (say that feature is X1), and then refit Lasso regression with the same regularization parameter. Which of the following options will be correct?