# 430+ High Performance Computing (HPC) Solved MCQs

301.

A. p
B. m-1
C. p-1
D. m
302.

A. p+1
B. p-1
C. p*p
D. p
303.

## In an all-to-all broadcast on a mesh, the operation is performed in which sequence?

A. rowwise, columnwise
B. columnwise, rowwise
C. columnwise, columnwise
D. rowwise, rowwise
304.

## Messages get smaller in ______ and stay constant in ______.

C. scatter, gather
305.

## The time taken by all-to-all broadcast on a ring is ______.

A. t= (ts + twm)(p-1)
B. t= ts logp + twm(p-1)
C. t= 2ts(√p – 1) - twm(p-1)
D. t= 2ts(√p – 1) + twm(p-1)
Answer» A. t= (ts + twm)(p-1)
306.

## The time taken by all-to-all broadcast on a mesh is ______.

A. t= (ts + twm)(p-1)
B. t= ts logp + twm(p-1)
C. t= 2ts(√p – 1) - twm(p-1)
D. t= 2ts(√p – 1) + twm(p-1)
Answer» D. t= 2ts(√p – 1) + twm(p-1)
307.

## The time taken by all-to-all broadcast on a hypercube is ______.

A. t= (ts + twm)(p-1)
B. t= ts logp + twm(p-1)
C. t= 2ts(√p – 1) - twm(p-1)
D. t= 2ts(√p – 1) + twm(p-1)
Answer» B. t= ts logp + twm(p-1)
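The three cost formulas above follow the standard model in which ts is the startup time, tw the per-word transfer time, m the message size in words, and p the number of processes. A small sketch (with made-up parameter values) evaluating each one:

```python
import math

def t_ring(ts, tw, m, p):
    # All-to-all broadcast on a ring: p-1 steps, each costing ts + tw*m.
    return (ts + tw * m) * (p - 1)

def t_mesh(ts, tw, m, p):
    # All-to-all broadcast on a sqrt(p) x sqrt(p) mesh:
    # a rowwise phase followed by a columnwise phase.
    q = math.isqrt(p)
    return 2 * ts * (q - 1) + tw * m * (p - 1)

def t_hypercube(ts, tw, m, p):
    # All-to-all broadcast on a hypercube: log2(p) steps,
    # with the accumulated message doubling each step.
    return ts * math.log2(p) + tw * m * (p - 1)

# Example: ts=10, tw=1, m=1, p=16
print(t_ring(10, 1, 1, 16))       # 165
print(t_mesh(10, 1, 1, 16))       # 75
print(t_hypercube(10, 1, 1, 16))  # 55.0
```

As the example shows, for the same parameters the hypercube completes the fastest because its startup term grows only logarithmically in p.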
308.

## The prefix-sum operation can be implemented using the ______ kernel

D. all-to-all reduction
309.

## Select the parameters on which the parallel runtime of a program depends.

A. number of processors
B. communication parameters of the machine
C. all of the above
D. input size
310.

## The time that elapses from the moment the first processor starts to the moment the last processor finishes execution is called ______.

A. parallel runtime
C. excess runtime
D. serial runtime
311.

## Select how the overhead function (To) is calculated.

A. to = p*n tp - ts
B. to = p tp - ts
C. to = tp - pts
D. to = tp - ts
Answer» B. to = p tp - ts
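The overhead function To = p·Tp − Ts measures the total processor-time spent beyond the useful serial work. A tiny sketch with illustrative values:

```python
def overhead(p, tp, ts):
    # Total overhead To = p*Tp - Ts: the combined time of all p processors
    # minus the serial (useful) work Ts.
    return p * tp - ts

# Example: serial time 100, parallel time 30 on 4 processors.
print(overhead(4, 30, 100))  # 20
```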
312.

A. overall time
B. speedup
C. scaleup
D. efficiency
313.

## Which is an alternative option for latency hiding?

A. increase cpu frequency
C. increase bandwidth
D. increase memory
314.

## ______ Communication model is generally seen in tightly coupled system.

A. message passing
C. client-server
D. distributed network
315.

## The principal parameters that determine the communication latency are as follows:

A. startup time (ts) per-hop time (th) per-word transfer time (tw)
B. startup time (ts) per-word transfer time (tw)
C. startup time (ts) per-hop time (th)
D. startup time (ts) message-packet-size(w)
Answer» A. startup time (ts) per-hop time (th) per-word transfer time (tw)
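These three parameters combine into the usual communication-cost model. A sketch assuming cut-through routing over l links (parameter values are made up):

```python
def comm_time(ts, th, tw, l, m):
    # Cut-through routing cost model:
    # startup time + per-hop time over l links + per-word time for m words.
    return ts + l * th + m * tw

# Example: ts=50, th=2, tw=1, 4 hops, 100-word message.
print(comm_time(50, 2, 1, 4, 100))  # 158
```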
316.

## The number and size of tasks into which a problem is decomposed determines the __

A. granularity
C. dependency graph
D. decomposition
317.

## Average Degree of Concurrency is...

A. the average number of tasks that can run concurrently over the entire duration of execution of the process.
B. the average time that can run concurrently over the entire duration of execution of the process.
C. the average in degree of task dependency graph.
D. the average out degree of task dependency graph.
Answer» A. the average number of tasks that can run concurrently over the entire duration of execution of the process.
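For unit-weight tasks, the average degree of concurrency can be computed as total work divided by the work along the critical path; a minimal sketch with made-up numbers:

```python
def average_concurrency(total_work, critical_path_work):
    # Average degree of concurrency = total work / critical-path work.
    return total_work / critical_path_work

# Example: 15 unit-weight tasks whose critical path contains 4 tasks.
print(average_concurrency(15, 4))  # 3.75
```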
318.

## Which task decomposition technique is suitable for the 15-puzzle problem?

A. data decomposition
B. exploratory decomposition
C. speculative decomposition
D. recursive decomposition
319.

## Which of the following method is used to avoid Interaction Overheads?

A. maximizing data locality
B. minimizing data locality
C. increase memory size
D. none of the above.
320.

## Which of the following is not a parallel algorithm model?

A. the data parallel model
B. the work pool model
D. the speculative model
321.

A. mimd
B. simd
C. sisd
D. misd
322.

## What is Critical Path?

A. the length of the longest path in a task dependency graph is called the critical path length.
B. the length of the smallest path in a task dependency graph is called the critical path length.
C. path with loop
D. none of the mentioned.
Answer» A. the length of the longest path in a task dependency graph is called the critical path length.
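The critical path length is straightforward to compute with a longest-path search. A minimal sketch on a hypothetical unit-weight task dependency graph (the graph and task names are invented for illustration):

```python
from functools import lru_cache

# Hypothetical DAG: each task maps to the tasks that depend on it.
deps = {
    "a": ["c"], "b": ["c", "d"],
    "c": ["e"], "d": ["e"], "e": [],
}

def critical_path_length(graph):
    # Longest path through the DAG, counted in (unit-weight) tasks.
    @lru_cache(maxsize=None)
    def longest_from(node):
        succ = graph[node]
        return 1 + (max(longest_from(s) for s in succ) if succ else 0)
    return max(longest_from(n) for n in graph)

print(critical_path_length(deps))  # 3, e.g. a -> c -> e
```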
323.

## Which decomposition technique uses the divide-and-conquer strategy?

A. recursive decomposition
B. data decomposition
C. exploratory decomposition
D. speculative decomposition
324.

## Consider Hypercube topology with 8 nodes then how many message passing cycles will require in all to all broadcast operation?

A. the longest path between any pair of finish nodes.
B. the longest directed path between any pair of start & finish node.
C. the shortest path between any pair of finish nodes.
D. the number of maximum nodes level in graph.
Answer» D. the number of maximum nodes level in graph.
325.

## Scatter is ____________.

A. one to all broadcast communication
B. all to all broadcast communication
C. one to all personalised communication
D. none of the above.
Answer» C. one to all personalised communication
326.

A. 4
B. 6
C. 8
D. 16
327.

## Which issue(s) is/are true about sorting techniques with parallel computing?

A. large sequence is the issue
B. where to store output sequence is the issue
C. small sequence is the issue
D. none of the above
Answer» B. where to store output sequence is the issue
328.

## Partitioning on a series is done after ______________

A. local arrangement
B. processess assignments
C. global arrangement
D. none of the above
329.

A. donor
B. active
C. idle
D. passive
330.

A. 8
B. 4
C. 5
D. 15
331.

## Which are different sources of Overheads in Parallel Programs?

A. interprocess interactions
B. process idling
C. all mentioned options
D. excess computation
332.

## Speedup is defined as ______.

A. the ratio of the time taken to solve a problem on a single processor to the time required to solve the same problem on a parallel computer with p identical processing elements.
B. the ratio of the time taken to solve a problem on a single processor to the time required to solve the same problem on a parallel computer with p identical processing elements
C. the ratio of number of multiple processors to size of data
D. none of the above
Answer» B. the ratio of the time taken to solve a problem on a single processor to the time required to solve the same problem on a parallel computer with p identical processing elements
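The definition in the correct option translates directly into the standard formulas S = Ts/Tp and E = S/p; a sketch with illustrative timings:

```python
def speedup(t_serial, t_parallel):
    # S = Ts / Tp: serial time over parallel time.
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    # E = S / p: speedup per processing element.
    return speedup(t_serial, t_parallel) / p

print(speedup(100, 25))        # 4.0
print(efficiency(100, 25, 8))  # 0.5
```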
333.

## CUDA helps execute code in parallel using __________

A. cpu
B. gpu
C. rom
D. cache memory
334.

A. work
B. worker
D. none of the above
335.

## Which of the following statements about GPUs is/are true?

A. grid contains block
C. all the mentioned options.
D. sm stands for streaming multiprocessor
Answer» C. all the mentioned options.
336.

## The computer system of a parallel computer is capable of ______.

A. decentralized computing
B. parallel computing
C. centralized computing
D. all of these
337.

## In which type of application do distributed systems run well?

A. hpc
B. distributed framework
C. hrc
D. none of the above
338.

## A pipeline is like ______?

A. an automobile assembly line
B. house pipeline
C. both a and b
D. a gas line
Answer» A. an automobile assembly line
339.

## Pipeline implements ?

A. fetch instruction
B. decode instruction
C. fetch operand
D. all of above
340.

## A processor that performs fetch or decode of one instruction while another instruction is being executed is called ______?

A. super-scaling
B. pipe-lining
C. parallel computation
D. none of these
341.

## VLIW stands for ?

A. very long instruction word
B. very long instruction width
C. very large instruction word
D. very long instruction width
Answer» A. very long instruction word
342.

## Which one is not a limitation of a distributed memory parallel system?

A. higher communication time
B. cache coherency
D. none of the above
343.

## Which of these steps can create conflict among the processors?

A. synchronized computation of local variables
B. concurrent write
D. none of the above
344.

## Which one is not a characteristic of NUMA multiprocessors?

A. it allows shared memory computing
B. memory units are placed in physically different location
C. all memory units are mapped to one common virtual global memory
D. processors access their independent local memories
Answer» D. processors access their independent local memories
345.

## Which of these is not a source of overhead in parallel computing?

B. less local memory requirement in distributed computing
C. synchronization among threads in shared memory computing
D. none of the above
Answer» B. less local memory requirement in distributed computing
346.

## Systems that do not have parallel processing capabilities are?

A. sisd
B. simd
C. mimd
D. all of the above
347.

B. linearly
C. cubicly
D. exponentially
348.

## Parallel processing may occur?

A. in the instruction stream
B. in the data stream
C. both[a] and [b]
D. none of the above
349.

## To which class of systems does the von Neumann computer belong?

A. simd (single instruction multiple data)
B. mimd (multiple instruction multiple data)
C. misd (multiple instruction single data)
D. sisd (single instruction single data)
Answer» D. sisd (single instruction single data)
350.

A. instruction-level
B. loop level
D. function-level
351.

## A multiprocessor is a system with multiple CPUs capable of independently executing different tasks in parallel. In which category does every processor and memory module have similar access time?

A. uma
B. microprocessor
C. multiprocessor
D. numa
352.

## The misses that arise from interprocessor communication are called?

A. hit rate
B. coherence misses
C. commit misses
D. parallel processing
353.

## NUMA architecture uses _______in design?

A. cache
B. shared memory
C. message passing
D. distributed memory
354.

A. sisd
B. simd
C. mimd
D. misd
355.

## In message passing, send and receive operations pass messages between ______?

C. processor and instruction
D. instruction and decode
356.

## The First step in developing a parallel algorithm is_________?

A. to decompose the problem into tasks that can be executed concurrently
B. execute directly
C. execute indirectly
D. none of above
Answer» A. to decompose the problem into tasks that can be executed concurrently
357.

A. granularity
B. priority
C. modernity
D. none of above
358.

## The length of the longest path in a task dependency graph is called?

A. the critical path length
B. the critical data length
C. the critical bit length
D. none of above
Answer» A. the critical path length
359.

## The graph of tasks (nodes) and their interactions/data exchange (edges)?

A. is referred to as a task interaction graph
B. is referred to as a task communication graph
C. is referred to as a task interface graph
D. none of above
360.

## Mappings are determined by?

C. both a and b
D. none of above
Answer» C. both a and b
361.

## Decomposition Techniques are?

A. recursive decomposition
B. data decomposition
C. exploratory decomposition
D. all of above
362.

## The Owner Computes Rule generally states that the process assigned a particular data item is responsible for?

A. all computation associated with it
B. only one computation
C. only two computation
D. only occasionally computation
Answer» A. all computation associated with it
363.

## A simple application of exploratory decomposition is_?

A. the solution to a 15 puzzle
B. the solution to 20 puzzle
C. the solution to any puzzle
D. none of above
Answer» A. the solution to a 15 puzzle
364.

## Speculative Decomposition consists of ______?

A. conservative approaches
B. optimistic approaches
C. both a and b
D. only b
Answer» C. both a and b
365.

C. size of data associated with tasks.
D. all of above
366.

## Writing parallel programs is referred to as?

A. parallel computation
B. parallel processes
C. parallel development
D. parallel programming
367.

## Which of the following is a parallel algorithm model?

A. data parallel model
B. bit model
C. data model
D. network model
368.

## The number and size of tasks into which a problem is decomposed determines the?

A. fine-granularity
B. coarse-granularity
D. granularity
369.

A. critical
B. easy
C. difficult
D. ambiguous
370.

A. interaction
B. communication
C. optimization
D. flow
371.

## Interaction overheads can be minimized by ______?

A. maximize data locality
B. maximize volume of data exchange
C. increase bandwidth
D. minimize social media contents
372.

B. instruction
C. data
D. program
373.

A. s=ts/tp
B. s= tp/ts
C. ts=s/tp
D. tp=s /ts
374.

A. bit
B. data
C. instruction
375.

## _________ is a method for inducing concurrency in problems that can be solved using the divide-and-conquer strategy?

A. exploratory decomposition
B. speculative decomposition
C. data-decomposition
D. recursive decomposition
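Recursive decomposition is the pattern behind divide-and-conquer algorithms such as merge sort, where the two halves of the input are independent subtasks that could be mapped to different processors. A sequential sketch of that structure:

```python
def merge_sort(a):
    # Recursive decomposition: the two halves are independent subtasks.
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Combine step: merge the two sorted subtask results.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
```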
376.

A. total
B. average
C. mean
D. sum
377.

## The dual of one-to-all broadcast is ______?

A. all-to-one reduction
C. all-to-one sum
D. none of above
378.

A. 2^d nodes
B. 2d nodes
C. 2n nodes
D. n nodes
379.

## The Prefix Sum Operation can be implemented using the ______?

D. scatter kernel
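The hypercube prefix-sum algorithm reuses the pairwise-exchange pattern of the all-to-all broadcast kernel. A simulated sketch for p nodes, p a power of two (the node loop stands in for what would be concurrent processes):

```python
import math

def prefix_sum_hypercube(values):
    # Each simulated node i keeps `result` (its prefix so far) and `msg`
    # (the sum over its current subcube). In step k it exchanges msg with
    # its partner i XOR 2^k -- the same exchange pattern as all-to-all
    # broadcast on a hypercube.
    p = len(values)
    d = int(math.log2(p))
    result = list(values)
    msg = list(values)
    for k in range(d):
        new_result, new_msg = list(result), list(msg)
        for i in range(p):
            partner = i ^ (1 << k)
            new_msg[i] = msg[i] + msg[partner]
            if partner < i:
                # Partner's subcube precedes node i, so its sum
                # contributes to i's prefix.
                new_result[i] = result[i] + msg[partner]
        result, msg = new_result, new_msg
    return result

print(prefix_sum_hypercube([1, 2, 3, 4]))  # [1, 3, 6, 10]
```

After log2(p) exchange steps, each node holds the inclusive prefix sum of all nodes with a smaller index plus itself.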
380.

## In the scatter operation ?

A. single node send a unique message of size m to every other node
B. single node send a same message of size m to every other node
C. single node send a unique message of size m to next node
D. none of above
Answer» A. single node send a unique message of size m to every other node
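Scatter (one-to-all personalized communication) and its inverse, gather, can be simulated with plain list slicing; a minimal sketch where the "nodes" are list positions:

```python
def scatter(root_data, p):
    # One-to-all personalized communication: the root holds p distinct
    # chunks and node i receives chunk i.
    chunk = len(root_data) // p
    return [root_data[i * chunk:(i + 1) * chunk] for i in range(p)]

def gather(chunks):
    # Exactly the inverse of scatter: concatenate every node's chunk
    # back at the root.
    return [x for c in chunks for x in c]

parts = scatter(list(range(8)), 4)
print(parts)          # [[0, 1], [2, 3], [4, 5], [6, 7]]
print(gather(parts))  # [0, 1, 2, 3, 4, 5, 6, 7]
```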
381.

## The gather operation is exactly the inverse of the ______?

A. scatter operation
C. prefix sum
D. reduction operation
382.

## Parallel algorithms often require a single process to send identical data to all other processes or to a subset of them. This operation is known as _________?

C. one-to-all reduction
D. all to one reduction
383.

## In which of the following operation, a single node sends a unique message of size m to every other node?

A. gather
B. scatter
C. one to all personalized communication
D. both a and c
Answer» D. both a and c
384.

## Gather operation is also known as ________?

A. one to all personalized communication
C. all to one reduction
Answer» A. one to all personalized communication
385.

A. a processor
B. memory system
C. data path.
D. all of above
386.

## Data-intensive applications utilize?

A. high aggregate throughput
B. high aggregate network bandwidth
C. high processing and memory system performance.
D. none of above
387.

## A pipeline is like?

A. overlaps various stages of instruction execution to achieve performance.
B. house pipeline
C. both a and b
D. a gas line
Answer» A. overlaps various stages of instruction execution to achieve performance.
388.

## Scheduling of instructions is determined by?

A. true data dependency
B. resource dependency
C. branch dependency
D. all of above
389.

## VLIW processors rely on?

A. compile time analysis
B. initial time analysis
C. final time analysis
D. mid time analysis
390.

## Memory system performance is largely captured by?

A. latency
B. bandwidth
C. both a and b
D. none of above
Answer» C. both a and b
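Latency and bandwidth combine into a simple first-order model of memory transfer time; a sketch with illustrative (made-up) hardware numbers:

```python
def transfer_time(latency_s, bandwidth_bytes_per_s, nbytes):
    # Time to move a block: fixed latency plus size / bandwidth.
    return latency_s + nbytes / bandwidth_bytes_per_s

# Example: 100 ns latency, 10 GB/s bandwidth, 64-byte cache line.
t = transfer_time(100e-9, 10e9, 64)
print(round(t * 1e9, 1))  # 106.4 (ns)
```

The example illustrates why small transfers are latency-bound: the 100 ns startup dwarfs the 6.4 ns spent actually moving the 64 bytes.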
391.

## The fraction of data references satisfied by the cache is called?

A. cache hit ratio
B. cache fit ratio
C. cache best ratio
D. none of above
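The cache hit ratio feeds directly into the effective (average) memory access time; a sketch with illustrative cycle counts:

```python
def effective_access_time(hit_ratio, t_cache, t_mem):
    # Average access time given the fraction of references
    # satisfied by the cache.
    return hit_ratio * t_cache + (1 - hit_ratio) * t_mem

# Example: 90% hit ratio, 1-cycle cache, 100-cycle memory.
print(round(effective_access_time(0.9, 1, 100), 2))  # 10.9
```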
392.

A. simd
B. spmd
C. mimd
D. none of above
393.

## The primary forms of data exchange between parallel tasks are?

A. accessing a shared data space
B. exchanging messages.
C. both a and b
D. none of above
Answer» C. both a and b
394.

## The First step in developing a parallel algorithm is?

A. to decompose the problem into tasks that can be executed concurrently
B. execute directly
C. execute indirectly
D. none of above
Answer» A. to decompose the problem into tasks that can be executed concurrently
395.

## The Owner Computes Rule generally states that the process assigned a particular data item is responsible for?

A. all computation associated with it
B. only one computation
C. only two computation
D. only occasionally computation
Answer» A. all computation associated with it
396.

## A simple application of exploratory decomposition is?

A. the solution to a 15 puzzle
B. the solution to 20 puzzle
C. the solution to any puzzle
D. none of above
Answer» A. the solution to a 15 puzzle
397.

## Speculative Decomposition consists of ______?

A. conservative approaches
B. optimistic approaches
C. both a and b
D. only b
Answer» C. both a and b
398.

C. size of data associated with tasks.
D. all of above.
399.

## The dual of one-to-all broadcast is?

A. all-to-one reduction
C. all-to-one sum
D. none of above
400.

A. 2^d nodes
B. 3d nodes
C. 2n nodes
D. n nodes