

McqMate
These multiple-choice questions (MCQs) are designed to enhance your knowledge and understanding in the following areas: Computer Science Engineering (CSE).
151. |
The running time of the Bellman-Ford algorithm is lower than that of Dijkstra’s Algorithm. |
A. | true |
B. | false |
Answer» B. false | |
Explanation: the number of iterations involved in the bellman-ford algorithm is more than that of dijkstra’s algorithm, so its running time is higher, not lower. |
152. |
Dijkstra’s Algorithm, run on a weighted, directed graph G=(V,E) with non-negative weight function w and source s, terminates with d[u]=delta(s,u) for all vertices u in V. |
A. | true |
B. | false |
Answer» A. true | |
Explanation: the equality d[u]=delta(s,u) holds when vertex u is added to set S, and this equality is maintained thereafter by the upper bound property. |
153. |
Bellman-Ford algorithm provides solution for _______ problems. |
A. | all pair shortest path |
B. | sorting |
C. | network flow |
D. | single source shortest path |
Answer» D. single source shortest path | |
Explanation: the bellman-ford algorithm is used for finding solutions to single source shortest path problems. if the graph has no negative weight cycles reachable from the source, the algorithm produces the shortest paths and their weights. |
154. |
Bellman-Ford algorithm is used to indicate whether the graph has negative weight cycles or not. |
A. | true |
B. | false |
Answer» A. true | |
Explanation: the bellman-ford algorithm returns true if the graph does not have any negative weight cycles reachable from the source and returns false when it does. |
155. |
How many solution/solutions are available for a graph having negative weight cycle? |
A. | one solution |
B. | two solutions |
C. | no solution |
D. | infinite solutions |
Answer» C. no solution | |
Explanation: if the graph has any negative weight cycle then the algorithm indicates that no solution exists for that graph. |
156. |
What is the running time of the Bellman-Ford Algorithm? |
A. | o(v) |
B. | o(v2) |
C. | o(elogv) |
D. | o(ve) |
Answer» D. o(ve) | |
Explanation: the bellman-ford algorithm runs in o(ve) time: initialization takes o(v), and each of the v-1 passes over the edges takes o(e). hence the total time taken by the algorithm is o(ve). |
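As an illustration of the o(ve) bound, here is a minimal Bellman-Ford sketch in Python (function and variable names are our own, not from the question set): v-1 relaxation passes over all e edges, followed by one extra pass that detects a reachable negative-weight cycle.

```python
def bellman_ford(num_vertices, edges, source):
    """edges is a list of (u, v, weight) tuples."""
    INF = float("inf")
    dist = [INF] * num_vertices
    dist[source] = 0
    # V-1 passes, each relaxing every edge once: O(VE) total.
    for _ in range(num_vertices - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One more pass: any further improvement means a negative-weight cycle.
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            return None  # negative-weight cycle reachable from source
    return dist

edges = [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 1)]
print(bellman_ford(4, edges, 0))  # [0, 3, 1, 4]
```

Returning `None` on a negative cycle mirrors the "no solution" behaviour described in question 155.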
157. |
How many times does the outer for loop in the Bellman-Ford Algorithm get executed? |
A. | v times |
B. | v-1 |
C. | e |
D. | e-1 |
Answer» B. v-1 | |
Explanation: the outer for loop in the bellman-ford algorithm gets executed v-1 times. after making v-1 passes, the algorithm checks for a negative weight cycle and returns the appropriate boolean value. |
158. |
Dijkstra’s Algorithm is more efficient than Bellman-Ford Algorithm. |
A. | true |
B. | false |
Answer» A. true | |
Explanation: the running time of the bellman-ford algorithm is o(ve), whereas dijkstra’s algorithm (with a simple array implementation) runs in only o(v2). |
159. |
What is the basic principle behind the Bellman-Ford Algorithm? |
A. | interpolation |
B. | extrapolation |
C. | regression |
D. | relaxation |
Answer» D. relaxation | |
Explanation: relaxation methods, also called iterative methods, progressively replace an approximation to the correct distance with more accurate values until an optimum solution is found. |
160. |
Bellman-Ford Algorithm can be applied for |
A. | undirected and weighted graphs |
B. | undirected and unweighted graphs |
C. | directed and weighted graphs |
D. | all directed graphs |
Answer» C. directed and weighted graphs | |
Explanation: the bellman-ford algorithm can be applied to all directed and weighted graphs. the weight function in the graph may be either positive or negative. |
161. |
Bellman-Ford algorithm was first proposed by |
A. | richard bellman |
B. | alfonso shimbel |
C. | lester ford jr |
D. | edward f. moore |
Answer» B. alfonso shimbel | |
Explanation: alfonso shimbel proposed the algorithm in the year 1955. it was later published by lester ford jr in 1956 and by richard bellman in 1958. hence it is called the bellman-ford algorithm. |
162. |
Bellman-Ford Algorithm is an example of |
A. | dynamic programming |
B. | greedy algorithms |
C. | linear programming |
D. | branch and bound |
Answer» A. dynamic programming | |
Explanation: in the bellman-ford algorithm the shortest paths are calculated in a bottom-up manner, similar to other dynamic programming problems. |
163. |
When is a graph said to have a negative weight cycle? |
A. | the graph has 1 negative weighted edge |
B. | the graph has a cycle |
C. | the total weight of the graph is negative |
D. | the graph has 1 or more negative weighted edges |
Answer» C. the total weight of the graph is negative | |
Explanation: a graph is said to have a negative weight cycle when the weights of the edges along some cycle sum up to a negative number. the bellman-ford algorithm provides no solution for such graphs. |
164. |
The Floyd-Warshall Algorithm is used for solving |
A. | all pair shortest path problems |
B. | single source shortest path problems |
C. | network flow problems |
D. | sorting problems |
Answer» A. all pair shortest path problems | |
Explanation: the floyd-warshall algorithm is used for solving all pair shortest path problems, i.e., for finding the shortest paths between all pairs of vertices in a graph. |
165. |
The Floyd-Warshall Algorithm can be applied on |
A. | undirected and unweighted graphs |
B. | undirected graphs |
C. | directed graphs |
D. | acyclic graphs |
Answer» C. directed graphs | |
Explanation: the floyd-warshall algorithm can be applied to directed graphs. from a given directed graph, an adjacency matrix is framed and then the all pair shortest paths are computed by the floyd-warshall algorithm. |
166. |
What is the running time of the Floyd-Warshall Algorithm? |
A. | big-oh(v) |
B. | theta(v2) |
C. | big-oh(ve) |
D. | theta(v3) |
Answer» D. theta(v3) | |
Explanation: the running time of the floyd-warshall algorithm is determined by the triply nested for loops. since each execution of the innermost loop body takes o(1) time, the algorithm runs in theta(v3) time. |
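The triply nested loops can be sketched as follows (a minimal Python illustration; the matrix convention and names are our own):

```python
def floyd_warshall(dist):
    """dist is an n x n matrix: dist[i][j] = w(i, j), INF if no edge,
    0 on the diagonal. Modified in place; three nested loops -> Theta(n^3)."""
    n = len(dist)
    for k in range(n):          # allowed intermediate vertices 0..k
        for i in range(n):      # source
            for j in range(n):  # destination
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

INF = float("inf")
g = [[0, 3, INF],
     [INF, 0, 1],
     [2, INF, 0]]
print(floyd_warshall(g)[0][2])  # 4 (path 0 -> 1 -> 2)
```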
167. |
What approach is being followed in the Floyd-Warshall Algorithm? |
A. | greedy technique |
B. | dynamic programming |
C. | linear programming |
D. | backtracking |
Answer» B. dynamic programming | |
Explanation: the floyd-warshall algorithm follows the dynamic programming approach because the all pair shortest paths are computed in a bottom-up manner. |
168. |
Floyd-Warshall Algorithm can be used for finding |
A. | single source shortest path |
B. | topological sort |
C. | minimum spanning tree |
D. | transitive closure |
Answer» D. transitive closure | |
Explanation: one of the ways to compute the transitive closure of a graph in theta(n3) time is to assign a weight of 1 to each edge of E and then run the floyd-warshall algorithm. |
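Equivalently, the min/+ operations of Floyd-Warshall can be replaced by logical or/and on a boolean reachability matrix, still Theta(n^3). A sketch (names are ours):

```python
def transitive_closure(adj):
    """adj is an n x n boolean adjacency matrix.
    Returns reach, where reach[i][j] is True iff j is reachable from i."""
    n = len(adj)
    reach = [row[:] for row in adj]  # copy so the input is not modified
    for i in range(n):
        reach[i][i] = True  # every vertex reaches itself
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # j is reachable from i if it already was, or via k
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return reach

adj = [[False, True, False],
       [False, False, True],
       [False, False, False]]
print(transitive_closure(adj)[0][2])  # True (0 -> 1 -> 2)
```

Using single-bit logical operations here is also what question 172 below alludes to.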
169. |
What procedure is being followed in the Floyd-Warshall Algorithm? |
A. | top down |
B. | bottom up |
C. | big bang |
D. | sandwich |
Answer» B. bottom up | |
Explanation: a bottom-up procedure is used to compute the values of the matrix elements dij(k). the input is an n x n matrix. |
170. |
Floyd-Warshall algorithm was proposed by |
A. | robert floyd and stephen warshall |
B. | stephen floyd and robert warshall |
C. | bernad floyd and robert warshall |
D. | robert floyd and bernad warshall |
Answer» A. robert floyd and stephen warshall | |
Explanation: the floyd-warshall algorithm was independently proposed by robert floyd and stephen warshall in the year 1962. |
171. |
What happens when the value of k is 0 in the Floyd-Warshall Algorithm? |
A. | 1 intermediate vertex |
B. | 0 intermediate vertex |
C. | n intermediate vertices |
D. | n-1 intermediate vertices |
Answer» B. 0 intermediate vertex | |
Explanation: when k=0, a path from vertex i to vertex j has no intermediate vertices at all. such a path has at most one edge and hence dij(0) = wij. |
172. |
Using logical operators instead of arithmetic operators saves time and space. |
A. | true |
B. | false |
Answer» A. true | |
Explanation: in computers, logical operations on single bit values execute faster than arithmetic operations on integer words of data. |
173. |
The time taken to compute the transitive closure of a graph is Theta(n2). |
A. | true |
B. | false |
Answer» B. false | |
Explanation: the time taken to compute the transitive closure of a graph is theta(n3). |
174. |
If a problem can be broken into subproblems which are reused several times, the problem possesses the _______ property. |
A. | overlapping subproblems |
B. | optimal substructure |
C. | memoization |
D. | greedy |
Answer» A. overlapping subproblems | |
Explanation: overlapping subproblems is the property in which the value of a subproblem is used several times. |
175. |
When dynamic programming is applied to a problem, it takes far less time as compared to other methods that don’t take advantage of overlapping subproblems. |
A. | true |
B. | false |
Answer» A. true | |
Explanation: dynamic programming calculates the value of a subproblem only once, while methods that don’t take advantage of the overlapping subproblems property may calculate the value of the same subproblem several times. dynamic programming therefore saves the time of recalculation and takes far less time. |
176. |
A greedy algorithm can be used to solve all the dynamic programming problems. |
A. | true |
B. | false |
Answer» B. false | |
Explanation: a greedy algorithm makes a locally optimal choice at each step, but combining these locally optimal choices may not result in a globally optimal solution. hence, a greedy algorithm cannot be used to solve all dynamic programming problems. |
177. |
In dynamic programming, the technique of storing the previously calculated values is called |
A. | saving value property |
B. | storing value property |
C. | memoization |
D. | mapping |
Answer» C. memoization | |
Explanation: memoization is the technique of storing the previously calculated values so that they do not have to be recomputed. |
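A minimal memoization sketch (our own example, using Python's standard `functools.lru_cache` as the value store):

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # caches every previously computed value
def fib(n):
    # each fib(k) is computed once; later calls for the same k hit the cache
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```

Without the cache this recursion would recompute the same subproblems exponentially many times; with it, each of the n subproblems is solved once.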
178. |
When a top-down approach of dynamic programming is applied to a problem, it usually |
A. | decreases both, the time complexity and the space complexity |
B. | decreases the time complexity and increases the space complexity |
C. | increases the time complexity and decreases the space complexity |
D. | increases both, the time complexity and the space complexity |
Answer» B. decreases the time complexity and increases the space complexity | |
Explanation: the top-down approach uses the memoization technique which stores the previously calculated values. due to this, the time complexity is decreased but the space complexity is increased. |
179. |
Which of the following problems is NOT solved using dynamic programming? |
A. | 0/1 knapsack problem |
B. | matrix chain multiplication problem |
C. | edit distance problem |
D. | fractional knapsack problem |
Answer» D. fractional knapsack problem | |
Explanation: the fractional knapsack problem is solved using a greedy algorithm. |
180. |
Which of the following problems should be solved using dynamic programming? |
A. | mergesort |
B. | binary search |
C. | longest common subsequence |
D. | quicksort |
Answer» C. longest common subsequence | |
Explanation: the longest common subsequence problem has both optimal substructure and overlapping subproblems. hence, dynamic programming should be used to solve this problem. |
181. |
What is the time complexity of the recursive implementation used to find the nth fibonacci term? |
A. | o(1) |
B. | o(n2) |
C. | o(n!) |
D. | exponential |
Answer» D. exponential | |
Explanation: the recurrence relation is given by fibo(n) = fibo(n – 1) + fibo(n – 2). so, the time complexity satisfies T(n) = T(n – 1) + T(n – 2) + O(1), which solves to exponential time, roughly o(2n). |
182. |
What is the space complexity of the recursive implementation used to find the nth fibonacci term? |
A. | o(1) |
B. | o(n) |
C. | o(n2) |
D. | o(n3) |
Answer» A. o(1) | |
Explanation: the recursive implementation doesn’t store any computed values and calculates every value from scratch. so, not counting the recursion call stack (which itself grows to a depth of o(n)), the auxiliary space used is o(1). |
183. |
What is the time complexity of an iterative dynamic programming implementation used to find the nth fibonacci term (one that fills an array and ends with return fibo_terms[n])? |
A. | o(1) |
B. | o(n) |
C. | o(n2) |
D. | exponential |
Answer» B. o(n) | |
Explanation: to calculate the nth term, the for loop runs (n – 1) times, and each iteration takes constant time. hence the time complexity is o(n). |
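The original code snippet for this question survives only as the fragment `return fibo_terms[n]`; a table-based implementation consistent with it might look like this (the array name comes from the question, everything else is our reconstruction):

```python
def fibo(n):
    fibo_terms = [0] * (n + 1)  # fibo_terms[i] will hold the ith term
    if n > 0:
        fibo_terms[1] = 1
    # the loop runs (n - 1) times, each iteration doing constant work: O(n)
    for i in range(2, n + 1):
        fibo_terms[i] = fibo_terms[i - 1] + fibo_terms[i - 2]
    return fibo_terms[n]

print(fibo(10))  # 55
```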
184. |
You are given infinite coins of denominations v1, v2, v3,…..,vn and a sum S. The coin change problem is to find the minimum number of coins required to get the sum S. This problem can be solved using |
A. | greedy algorithm |
B. | dynamic programming |
C. | divide and conquer |
D. | backtracking |
Answer» B. dynamic programming | |
Explanation: the coin change problem has overlapping subproblems (the same subproblems are solved multiple times) and optimal substructure (the solution to the problem can be built from the solutions to its subproblems), so it can be solved using dynamic programming. |
185. |
Suppose you have coins of denominations 1,3 and 4. You use a greedy algorithm, in which you choose the largest denomination coin which is not greater than the remaining sum. For which of the following sums, will the algorithm produce an optimal answer? |
A. | 14 |
B. | 10 |
C. | 6 |
D. | 100 |
Answer» D. 100 | |
Explanation: using the greedy algorithm, three coins {4,1,1} will be selected to make a sum of 6. but, the optimal answer is two coins {3,3}. similarly, four coins {4,4,1,1} will be selected to make a sum of 10. but, the optimal answer is three coins {4,3,3}. also, five coins {4,4,4,1,1} will be selected to make a sum of 14. but, the optimal answer is four coins {4,4,3,3}. for a sum of 100, twenty-five coins {all 4’s} will be selected and the optimal answer is also twenty-five coins {all 4’s}. |
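The failure cases in the explanation can be reproduced with a short greedy sketch (function name and structure are ours):

```python
def greedy_coins(coins, s):
    """Repeatedly take the largest denomination not exceeding the remainder."""
    count = 0
    for c in sorted(coins, reverse=True):
        count += s // c  # take as many of this coin as fit
        s %= c           # reduce the remaining sum
    return count

# greedy picks {4,1,1} = 3 coins for a sum of 6, but {3,3} = 2 is optimal
print(greedy_coins([1, 3, 4], 6))    # 3
print(greedy_coins([1, 3, 4], 100))  # 25 (all 4's, which happens to be optimal)
```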
186. |
You are given infinite coins of N denominations v1, v2, v3,…..,vn and a sum S. The coin change problem is to find the minimum number of coins required to get the sum S. What is the time complexity of a dynamic programming implementation used to solve the coin change problem? |
A. | o(n) |
B. | o(s) |
C. | o(n2) |
D. | o(s*n) |
Answer» D. o(s*n) | |
Explanation: the implementation fills a table with one entry per sum from 1 to s, and computing each entry requires trying all n denominations, so the time complexity is o(s*n). |
187. |
You are given infinite coins of denominations 1, 3, 4. What is the minimum number of coins required to achieve a sum of 7? |
A. | 1 |
B. | 2 |
C. | 3 |
D. | 4 |
Answer» B. 2 | |
Explanation: a sum of 7 can be achieved using a minimum of two coins, {3,4}. |
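A dynamic programming sketch for the coin change problem (names are ours); it reproduces the answer of two coins for a sum of 7 and the o(s*n) cost from question 186:

```python
def min_coins(coins, s):
    INF = float("inf")
    # dp[x] = minimum coins needed for sum x; filling s entries,
    # each trying all n denominations -> O(s*n) time
    dp = [0] + [INF] * s
    for x in range(1, s + 1):
        for c in coins:
            if c <= x and dp[x - c] + 1 < dp[x]:
                dp[x] = dp[x - c] + 1
    return dp[s]

print(min_coins([1, 3, 4], 7))  # 2 (coins {3, 4})
```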
188. |
What is the space complexity of the divide and conquer algorithm used to find the maximum sub-array sum? |
A. | o(n) |
B. | o(1) |
C. | o(n!) |
D. | o(n2) |
Answer» B. o(1) | |
Explanation: the divide and conquer algorithm uses a constant amount of extra space at each step; not counting the recursion stack, the space complexity is o(1). |
189. |
What is the space complexity of Kadane’s algorithm? |
A. | o(1) |
B. | o(n) |
C. | o(n2) |
D. | none of the mentioned |
Answer» A. o(1) | |
Explanation: kadane’s algorithm uses a constant amount of extra space. so, the space complexity is o(1). |
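A sketch of Kadane's algorithm (our own code): only two scalars are carried through the scan, which is where the o(1) space bound comes from.

```python
def kadane(a):
    # best_ending_here = best sum of a subarray ending at the current index
    # best = best sum seen anywhere so far
    best_ending_here = best = a[0]
    for x in a[1:]:
        # either extend the previous subarray or start fresh at x
        best_ending_here = max(x, best_ending_here + x)
        best = max(best, best_ending_here)
    return best

print(kadane([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6 (subarray [4, -1, 2, 1])
```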
190. |
The longest increasing subsequence problem is to find the length of the longest subsequence of a given sequence of array elements such that the subsequence is sorted in increasing order. This problem can be solved using |
A. | recursion |
B. | dynamic programming |
C. | brute force |
D. | recursion, dynamic programming, brute force |
Answer» D. recursion, dynamic programming, brute force | |
Explanation: the longest increasing subsequence problem can be solved using all of the mentioned methods. |
191. |
For any given sequence, there will ALWAYS be a unique increasing subsequence with the longest length. |
A. | true |
B. | false |
Answer» B. false | |
Explanation: for a given sequence, it is possible that there is more than one subsequence with the longest length. |
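A standard o(n^2) dynamic programming sketch for the longest increasing subsequence (our own code, not from the question set); note the example admits more than one subsequence of the maximum length, as the explanation says:

```python
def lis_length(a):
    # dp[i] = length of the longest increasing subsequence ending at index i
    n = len(a)
    dp = [1] * n
    for i in range(1, n):
        for j in range(i):
            if a[j] < a[i] and dp[j] + 1 > dp[i]:
                dp[i] = dp[j] + 1
    return max(dp) if dp else 0

# both {2, 3, 7, 18} and {2, 5, 7, 18} have the maximum length
print(lis_length([10, 9, 2, 5, 3, 7, 101, 18]))  # 4
```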
192. |
Given a rod of length n and the selling prices of all pieces of length smaller than or equal to n, find the most beneficial way of cutting the rod into smaller pieces. This problem is called the rod cutting problem. Which of these methods can be used to solve the rod cutting problem? |
A. | brute force |
B. | dynamic programming |
C. | recursion |
D. | brute force, dynamic programming and recursion |
Answer» D. brute force, dynamic programming and recursion | |
Explanation: brute force, dynamic programming and recursion can be used to solve the rod cutting problem. |
193. |
Consider the brute force implementation of the rod cutting problem in which all the possible cuts are found and the maximum value is calculated. What is the time complexity of this brute force implementation? |
A. | o(n2) |
B. | o(n3) |
C. | o(nlogn) |
D. | o(2n) |
Answer» D. o(2n) | |
Explanation: the brute force implementation finds all the possible cuts. this takes o(2n) time. |
194. |
For every rod cutting problem there will be a unique set of pieces that give the maximum price. |
A. | true |
B. | false |
Answer» B. false | |
Explanation: consider a rod of length 3. the prices are {2,3,6} for lengths {1,2,3} respectively. the pieces {1,1,1} and {3} both give the maximum value of 6. |
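A bottom-up dynamic programming sketch of rod cutting (names are ours); it reproduces the tie described in the explanation, where {1,1,1} and {3} both yield 6:

```python
def rod_cut(prices):
    """prices[i] = selling price of a piece of length i + 1."""
    n = len(prices)
    # best[j] = maximum revenue obtainable from a rod of length j
    best = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(1, j + 1):  # first piece has length i
            best[j] = max(best[j], prices[i - 1] + best[j - i])
    return best[n]

# rod of length 3, prices {2, 3, 6} for lengths {1, 2, 3}
print(rod_cut([2, 3, 6]))  # 6
```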
195. |
For a given array, there can be multiple ways to reach the end of the array using minimum number of jumps. |
A. | true |
B. | false |
Answer» A. true | |
Explanation: consider the array {1,2,3,4,5}. it is possible to reach the end in the following ways: {1 -> 2 -> 3 -> 5} or {1 -> 2 -> 4 -> 5}. |
196. |
For any array, it is ALWAYS possible to reach the end of the array using jumps. |
A. | true |
B. | false |
Answer» B. false | |
Explanation: consider the array {1,0,2,3,4}. from index 0 the only reachable index is 1, whose value is 0, so no further jump is possible and the end cannot be reached. |
197. |
What is the space complexity of a dynamic programming implementation used to find the minimum number of jumps (one that stores, for each index, the minimum number of jumps needed to reach it)? |
A. | o(1) |
B. | o(n) |
C. | o(n2) |
D. | o(5) |
Answer» B. o(n) | |
Explanation: such a dynamic programming implementation stores one value per array element, so its space complexity is o(n). |
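The implementation this question refers to is not reproduced in the source; a plausible o(n)-space sketch (all names are our own) looks like this:

```python
def min_jumps(a):
    INF = float("inf")
    n = len(a)
    # jumps[i] = minimum jumps needed to reach index i:
    # an O(n) table, filled in O(n^2) time
    jumps = [INF] * n
    jumps[0] = 0
    for i in range(1, n):
        for j in range(i):
            # index j can reach index i if j + a[j] >= i
            if j + a[j] >= i and jumps[j] + 1 < jumps[i]:
                jumps[i] = jumps[j] + 1
    return jumps[n - 1]

print(min_jumps([1, 2, 3, 4, 5]))  # 3 (e.g. indices 0 -> 1 -> 3 -> 4)
```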
198. |
The Knapsack problem is an example of |
A. | greedy algorithm |
B. | 2d dynamic programming |
C. | 1d dynamic programming |
D. | divide and conquer |
Answer» B. 2d dynamic programming | |
Explanation: the 0-1 knapsack problem is an example of 2d dynamic programming: the table is indexed by both the item under consideration and the remaining capacity. |
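A minimal 2d dynamic programming sketch for 0-1 knapsack (function name and sample data are ours):

```python
def knapsack(weights, values, capacity):
    n = len(weights)
    # 2d table: rows = number of items considered, columns = capacity used
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            dp[i][w] = dp[i - 1][w]  # option 1: skip item i
            if weights[i - 1] <= w:  # option 2: take item i, if it fits
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])
    return dp[n][capacity]

print(knapsack([1, 3, 4, 5], [1, 4, 5, 7], 7))  # 9 (items of weight 3 and 4)
```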
199. |
Which of the following problems is equivalent to the 0-1 Knapsack problem? |
A. | you are given a bag that can carry a maximum weight of w. you are given n items which have a weight of {w1, w2, w3,…., wn} and a value of {v1, v2, v3,…., vn}. you can break the items into smaller pieces. choose the items in such a way that you get the maximum value |
B. | you are studying for an exam and you have to study n questions. the questions take {t1, t2, t3,…., tn} time(in hours) and carry {m1, m2, m3,…., mn} marks. you can study for a maximum of t hours. you can either study a question or leave it. choose the questions in such a way that your score is maximized |
C. | you are given infinite coins of denominations {v1, v2, v3,….., vn} and a sum s. you have to find the minimum number of coins required to get the sum s |
D. | you are given a suitcase that can carry a maximum weight of 15kg. you are given 4 items which have a weight of {10, 20, 15,40} and a value of {1, 2, 3,4}. you can break the items into smaller pieces. choose the items in such a way that you get the maximum value |
Answer» B. you are studying for an exam and you have to study n questions. the questions take {t1, t2, t3,…., tn} time(in hours) and carry {m1, m2, m3,…., mn} marks. you can study for a maximum of t hours. you can either study a question or leave it. choose the questions in such a way that your score is maximized | |
Explanation: in this case, questions are used instead of items: a question’s marks correspond to an item’s value, its study time to the item’s weight, and the total available time t to the knapsack capacity. like an item in 0-1 knapsack, a question is either studied entirely or left. |
200. |
The 0-1 Knapsack problem can be solved using Greedy algorithm. |
A. | true |
B. | false |
Answer» B. false | |
Explanation: the 0-1 knapsack problem cannot be solved using a greedy algorithm. |