Machine Learning (ML)
Q. In SVR, we try to fit the error within a certain threshold.
A. True
B. False

Answer: A. True
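For context, the "certain threshold" in SVR is the epsilon-insensitive tube: residuals smaller than epsilon incur no loss, and only points falling outside the tube are penalized. A minimal sketch using scikit-learn's SVR on synthetic data (the dataset, kernel, and epsilon value are illustrative assumptions, not part of the question):

```python
# Minimal SVR sketch: residuals smaller than `epsilon` are ignored by the loss,
# which is the "threshold" the question refers to. Data below is synthetic.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 5, size=(40, 1)), axis=0)
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=40)

# epsilon sets the half-width of the tube within which errors are not penalized
model = SVR(kernel="rbf", C=1.0, epsilon=0.1)
model.fit(X, y)

print("support vectors used:", len(model.support_))
print("training R^2:", round(model.score(X, y), 3))
```

Widening epsilon generally leaves more points inside the tube, so fewer support vectors are needed at the cost of a looser fit.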
Related MCQs
What are the steps for using a gradient descent algorithm? 1) Calculate the error between the actual value and the predicted value 2) Reiterate until you find the best weights of the network 3) Pass an input through the network and get values from the output layer 4) Initialize random weights and bias 5) Go to each neuron that contributes to the error and change its respective values to reduce the error (a code sketch ordering these steps follows this list)
To control the size of the tree, we need to control the number of regions. One approach to do this would be to split tree nodes only if the resultant decrease in the sum of squares error exceeds some threshold. For the described method, which among the following are true? (a) It would, in general, help restrict the size of the trees (b) It has the potential to affect the performance of the resultant regression/classification model (c) It is computationally infeasible (a related code sketch follows this list)
Suppose that we have N independent variables (X1, X2, …, Xn) and the dependent variable is Y. Now imagine that you are applying linear regression by fitting the best-fit line using least square error on this data. You found that the correlation coefficient for one of its variables (say X1) with Y is 0.95. Which of the following is true for X1?
Suppose that we have N independent variables (X1, X2, …, Xn) and the dependent variable is Y. Now imagine that you are applying linear regression by fitting the best-fit line using least square error on this data. You found that the correlation coefficient for one of its variables (say X1) with Y is -0.95. Which of the following is true for X1? (a correlation sketch follows this list)
What is back propagation? a) It is another name given to the curvy function in the perceptron b) It is the transmission of error back through the network to adjust the inputs c) It is the transmission of error back through the network to allow weights to be adjusted so that the network can learn d) None of the mentioned
Which of the following can act as possible termination conditions in K-Means? 1. For a fixed number of iterations. 2. Assignment of observations to clusters does not change between iterations, except for cases with a bad local minimum. 3. Centroids do not change between successive iterations. 4. Terminate when RSS falls below a threshold. (a code sketch follows this list)
The binarize parameter in scikit-learn's BernoulliNB sets the threshold for binarizing sample features. (a code sketch follows this list)
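For the gradient-descent MCQ above (and, relatedly, the backpropagation one), the usual ordering of those steps is: initialize random weights and bias, pass inputs through the network, compute the error, adjust each contributing weight to reduce the error, and reiterate. A minimal NumPy sketch for a single linear neuron; the data, learning rate, and epoch count are assumptions for illustration:

```python
# Minimal gradient-descent loop for one linear neuron (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))              # inputs
y = X @ np.array([2.0, -1.0, 0.5]) + 0.3   # targets from a known linear rule

w = rng.normal(size=3)                     # step 4) initialize random weights and bias
b = 0.0
lr = 0.1

for epoch in range(200):                   # step 2) reiterate until the weights are good
    y_pred = X @ w + b                     # step 3) pass inputs through the network
    error = y_pred - y                     # step 1) error between predicted and actual
    grad_w = X.T @ error / len(X)          # step 5) adjust each weight by its error contribution
    grad_b = error.mean()
    w -= lr * grad_w
    b -= lr * grad_b

print("learned weights:", np.round(w, 2), "bias:", round(b, 2))
```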
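For the tree-size MCQ, scikit-learn offers a comparable (not identical) control: `min_impurity_decrease` allows a split only if the weighted impurity, the squared-error criterion for regression trees, drops by at least that amount. A small sketch comparing tree sizes (the dataset and threshold value are assumptions):

```python
# Sketch: limiting tree growth by requiring a minimum impurity decrease per split.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.2, size=200)

unrestricted = DecisionTreeRegressor(random_state=0).fit(X, y)
restricted = DecisionTreeRegressor(min_impurity_decrease=0.01, random_state=0).fit(X, y)

# Requiring a minimum decrease generally yields a much smaller tree.
print("leaves (unrestricted):", unrestricted.get_n_leaves())
print("leaves (restricted):  ", restricted.get_n_leaves())
```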
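For the two correlation MCQs, the coefficient only measures the strength and direction of a linear relationship, so -0.95 indicates a strong negative linear relationship between X1 and Y. A quick NumPy check on synthetic data (the slope and noise level are chosen so the correlation lands near -0.95):

```python
# Sketch: a strongly negatively correlated predictor still carries strong linear signal.
import numpy as np

rng = np.random.default_rng(3)
x1 = rng.normal(size=500)
y = -3.0 * x1 + rng.normal(scale=1.0, size=500)   # negative slope plus noise

r = np.corrcoef(x1, y)[0, 1]
print("correlation(x1, y) =", round(r, 2))        # close to -0.95 for this noise level
```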
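For the K-Means termination MCQ, scikit-learn's KMeans exposes two of the listed stopping conditions directly: an iteration cap (`max_iter`) and a convergence tolerance on centroid movement (`tol`); its `inertia_` attribute is the RSS mentioned in option 4. A minimal sketch (the data and cluster count are assumptions):

```python
# Sketch: common K-Means stopping criteria -- iteration cap and convergence tolerance.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(100, 2)) for c in (0, 5, 10)])

km = KMeans(n_clusters=3, max_iter=300, tol=1e-4, n_init=10, random_state=0).fit(X)

print("iterations run:", km.n_iter_)      # usually far fewer than max_iter once centroids stop moving
print("final inertia (RSS):", round(km.inertia_, 2))
```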
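And for the last MCQ, the statement is true: BernoulliNB's `binarize` parameter is the cutoff used to map each feature value to 0 or 1 before the Bernoulli model is fit. A small sketch (the data and the 0.5 cutoff are illustrative):

```python
# Sketch: BernoulliNB thresholds continuous features into binary indicators via `binarize`.
import numpy as np
from sklearn.naive_bayes import BernoulliNB

rng = np.random.default_rng(5)
X = rng.uniform(0, 1, size=(200, 5))          # continuous features in [0, 1]
y = (X[:, 0] > 0.5).astype(int)               # label driven by the first feature

# Any feature value above 0.5 is treated as 1, everything else as 0.
clf = BernoulliNB(binarize=0.5).fit(X, y)
print("training accuracy:", round(clf.score(X, y), 2))
```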