430+ High Performance Computing (HPC) Solved MCQs

301.

In All-to-All Personalized Communication on a Ring, the size of the message reduces by ______ at each step.

A. p
B. m-1
C. p-1
D. m
Answer» D. m
302.

All-to-All Broadcast and Reduction algorithm on a Ring terminates in                   steps.

A. p+1
B. p-1
C. p*p
D. p
Answer» B. p-1
303.

In All-to-all Broadcast on a Mesh, operation performs in which sequence?

A. rowwise, columnwise
B. columnwise, rowwise
C. columnwise, columnwise
D. rowwise, rowwise
Answer» A. rowwise, columnwise
304.

Messages get smaller in ______ and stay constant in ______.

A. gather, broadcast
B. scatter, broadcast
C. scatter, gather
D. broadcast, gather
Answer» B. scatter, broadcast
305.

The time taken by all-to-all broadcast on a ring is ______.

A. t= (ts + twm)(p-1)
B. t= ts logp + twm(p-1)
C. t= 2ts(√p – 1) - twm(p-1)
D. t= 2ts(√p – 1) + twm(p-1)
Answer» A. t= (ts + twm)(p-1)
306.

The time taken by all-to-all broadcast on a mesh is ______.

A. t= (ts + twm)(p-1)
B. t= ts logp + twm(p-1)
C. t= 2ts(√p – 1) - twm(p-1)
D. t= 2ts(√p – 1) + twm(p-1)
Answer» D. t= 2ts(√p – 1) + twm(p-1)
307.

The time taken by all-to-all broadcast on a hypercube is ______.

A. t= (ts + twm)(p-1)
B. t= ts logp + twm(p-1)
C. t= 2ts(√p – 1) - twm(p-1)
D. t= 2ts(√p – 1) + twm(p-1)
Answer» B. t= ts logp + twm(p-1)
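As a quick numerical check, the three broadcast-time formulas above can be evaluated with a small Python sketch (a minimal model in the standard ts/tw cost notation; the sample parameter values are arbitrary):

```python
import math

# ts = startup time, tw = per-word transfer time, m = message size in words, p = nodes.

def ring_time(ts, tw, m, p):
    # p-1 steps, each transferring a message of size m
    return (ts + tw * m) * (p - 1)

def mesh_time(ts, tw, m, p):
    # rowwise phase then columnwise phase on a sqrt(p) x sqrt(p) wraparound mesh
    sq = math.isqrt(p)
    return 2 * ts * (sq - 1) + tw * m * (p - 1)

def hypercube_time(ts, tw, m, p):
    # log2(p) steps; the message doubles each step, so m*(p-1) words move in total
    return ts * math.log2(p) + tw * m * (p - 1)

print(ring_time(1, 1, 1, 8))       # 14
print(mesh_time(1, 1, 1, 16))      # 21
print(hypercube_time(1, 1, 1, 8))  # 10.0
```

With ts = tw = m = 1, a ring of 8 nodes costs (1 + 1)(8 - 1) = 14, matching the formula term by term.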
308.

The prefix-sum operation can be implemented using the ______ kernel.

A. all-to-all broadcast
B. one-to-all broadcast
C. all-to-one broadcast
D. all-to-all reduction
Answer» A. all-to-all broadcast
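For intuition, the hypercube prefix-sum algorithm (which shares its communication structure with all-to-all broadcast) can be simulated sequentially; this is a sketch assuming the number of nodes is a power of two:

```python
# Each node keeps "result" (its running prefix sum) and "total" (a buffer it
# exchanges with its neighbor across each hypercube dimension).
def prefix_sums(values):
    p = len(values)              # assumed to be a power of two
    result = list(values)
    total = list(values)
    d = p.bit_length() - 1       # number of hypercube dimensions = log2(p)
    for k in range(d):
        new_total = list(total)
        new_result = list(result)
        for node in range(p):
            partner = node ^ (1 << k)       # neighbor across dimension k
            new_total[node] = total[node] + total[partner]
            if partner < node:              # message came from a lower-numbered node
                new_result[node] = result[node] + total[partner]
        total, result = new_total, new_result
    return result

print(prefix_sums([1, 2, 3, 4]))  # [1, 3, 6, 10]
```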
309.

Select the parameters on which the parallel runtime of a program depends.

A. number of processors
B. communication parameters of the machine
C. all of the above
D. input size
Answer» C. all of the above
310.

The time that elapses from the moment the first processor starts to the moment the last processor finishes execution is called ______.

A. parallel runtime
B. overhead runtime
C. excess runtime
D. serial runtime
Answer» A. parallel runtime
311.

Select how the overhead function (To) is calculated.

A. to = p*n tp - ts
B. to = p tp - ts
C. to = tp - pts
D. to = tp - ts
Answer» B. to = p tp - ts
312.

What is the ratio of the time taken to solve a problem on a single processor to the time required to solve the same problem on a parallel computer with p identical processing elements?

A. overall time
B. speedup
C. scaleup
D. efficiency
Answer» B. speedup
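The runtime, overhead, and speedup definitions above fit together in three one-line formulas; a minimal sketch in the usual textbook notation (Ts = serial runtime, Tp = parallel runtime, p = processors; sample values are arbitrary):

```python
def speedup(Ts, Tp):
    # S = Ts / Tp
    return Ts / Tp

def efficiency(Ts, Tp, p):
    # E = S / p
    return speedup(Ts, Tp) / p

def overhead(Ts, Tp, p):
    # To = p*Tp - Ts: time spent by all processors beyond the serial work
    return p * Tp - Ts

print(speedup(100, 25))        # 4.0
print(efficiency(100, 25, 8))  # 0.5
print(overhead(100, 25, 8))    # 100
```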
313.

Which is an alternative option for latency hiding?

A. increase cpu frequency
B. multithreading
C. increase bandwidth
D. increase memory
Answer» B. multithreading
314.

______ Communication model is generally seen in tightly coupled system.

A. message passing
B. shared-address space
C. client-server
D. distributed network
Answer» B. shared-address space
315.

The principal parameters that determine the communication latency are as follows:

A. startup time (ts) per-hop time (th) per-word transfer time (tw)
B. startup time (ts) per-word transfer time (tw)
C. startup time (ts) per-hop time (th)
D. startup time (ts) message-packet-size(w)
Answer» A. startup time (ts) per-hop time (th) per-word transfer time (tw)
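These three parameters combine into the two standard point-to-point latency models; a minimal sketch contrasting store-and-forward and cut-through routing (the sample values are arbitrary):

```python
# ts = startup time, th = per-hop time, tw = per-word time,
# m = message size in words, l = number of hops.

def store_and_forward(ts, th, tw, m, l):
    # the whole message is received and retransmitted at every hop
    return ts + (m * tw + th) * l

def cut_through(ts, th, tw, m, l):
    # the header reserves the path; the payload crosses the hops only once
    return ts + l * th + m * tw

print(store_and_forward(10, 1, 2, 100, 4))  # 814
print(cut_through(10, 1, 2, 100, 4))        # 214
```

The gap between the two outputs shows why cut-through routing matters when messages traverse many hops.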
316.

The number and size of tasks into which a problem is decomposed determines the __

A. granularity
B. task
C. dependency graph
D. decomposition
Answer» A. granularity
317.

Average Degree of Concurrency is...

A. the average number of tasks that can run concurrently over the entire duration of execution of the process.
B. the average time that can run concurrently over the entire duration of execution of the process.
C. the average in degree of task dependency graph.
D. the average out degree of task dependency graph.
Answer» A. the average number of tasks that can run concurrently over the entire duration of execution of the process.
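The average degree of concurrency can be computed as total work divided by critical path length; a toy sketch over a hypothetical four-task dependency graph (the task names and unit weights are invented for illustration):

```python
def critical_path_length(work, succs):
    memo = {}
    def longest_from(t):   # longest weighted path starting at task t
        if t not in memo:
            memo[t] = work[t] + max((longest_from(s) for s in succs.get(t, [])),
                                    default=0)
        return memo[t]
    return max(longest_from(t) for t in work)

work = {"a": 1, "b": 1, "c": 1, "d": 1}       # unit-work tasks (assumed)
succs = {"a": ["c"], "b": ["c"], "c": ["d"]}  # edges a->c, b->c, c->d

cp = critical_path_length(work, succs)
print(cp)                       # 3  (path a -> c -> d)
print(sum(work.values()) / cp)  # 4 units of work / 3 = average degree of concurrency
```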
318.

Which task decomposition technique is suitable for the 15-puzzle problem?

A. data decomposition
B. exploratory decomposition
C. speculative decomposition
D. recursive decomposition
Answer» B. exploratory decomposition
319.

Which of the following method is used to avoid Interaction Overheads?

A. maximizing data locality
B. minimizing data locality
C. increase memory size
D. none of the above.
Answer» A. maximizing data locality
320.

Which of the following is not a parallel algorithm model?

A. the data parallel model
B. the work pool model
C. the task graph model
D. the speculative model
Answer» D. the speculative model
321.

Nvidia GPUs are based on which of the following architectures?

A. mimd
B. simd
C. sisd
D. misd
Answer» B. simd
322.

What is Critical Path?

A. the length of the longest path in a task dependency graph is called the critical path length.
B. the length of the smallest path in a task dependency graph is called the critical path length.
C. path with loop
D. none of the mentioned.
Answer» A. the length of the longest path in a task dependency graph is called the critical path length.
323.

Which decomposition technique uses the divide-and-conquer strategy?

A. recursive decomposition
B. data decomposition
C. exploratory decomposition
D. speculative decomposition
Answer» A. recursive decomposition
324.

Consider Hypercube topology with 8 nodes; how many message passing cycles will be required in an all-to-all broadcast operation?

A. 3
B. 5
C. 8
D. 2
Answer» A. 3
325.

Scatter is ____________.

A. one to all broadcast communication
B. all to all broadcast communication
C. one to all personalised communication
D. none of the above.
Answer» C. one to all personalised communication
326.

If there is a 4X4 Mesh Topology, ______ message passing cycles will be required to complete all-to-all reduction.

A. 4
B. 6
C. 8
D. 16
Answer» B. 6
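Step counts for all-to-all broadcast/reduction follow directly from the topology; a minimal sketch of the usual formulas (p assumed a perfect square for the mesh and a power of two for the hypercube):

```python
import math

def steps_ring(p):
    return p - 1                       # one step per remaining node

def steps_mesh(p):
    return 2 * (math.isqrt(p) - 1)     # rowwise phase + columnwise phase

def steps_hypercube(p):
    return int(math.log2(p))           # one step per dimension

print(steps_ring(8))       # 7
print(steps_mesh(16))      # 6
print(steps_hypercube(8))  # 3
```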
327.

Which of the following issue(s) is/are true about sorting techniques with parallel computing?

A. large sequence is the issue
B. where to store output sequence is the issue
C. small sequence is the issue
D. none of the above
Answer» B. where to store output sequence is the issue
328.

Partitioning of a series is done after ______________

A. local arrangement
B. processess assignments
C. global arrangement
D. none of the above
Answer» C. global arrangement
329.

In Parallel DFS, processes have the following roles. (Select multiple choices if applicable)

A. donor
B. active
C. idle
D. passive
Answer» A. donor, C. idle
330.

Suppose there are 16 elements in a series then how many phases will be required to sort the series using parallel odd-even bubble sort?

A. 8
B. 4
C. 5
D. 15
Answer» D. 15
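Odd-even transposition sort alternates odd and even compare-exchange phases; a sequential sketch that runs n phases on n elements (n phases are sufficient to sort any input):

```python
def odd_even_sort(a):
    a = list(a)
    n = len(a)
    for phase in range(n):
        # alternate the starting index: odd-indexed pairs, then even-indexed pairs
        start = 1 if phase % 2 == 0 else 0
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]   # compare-exchange
    return a

print(odd_even_sort([5, 3, 8, 1]))  # [1, 3, 5, 8]
```

In a parallel setting each compare-exchange within a phase runs concurrently, which is why the phase count, not the comparison count, governs the runtime.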
331.

Which are different sources of Overheads in Parallel Programs?

A. interprocess interactions
B. process idling
C. all mentioned options
D. excess computation
Answer» C. all mentioned options
332.

Which of the following defines speedup?

A. the ratio of the time taken to solve a problem on a single processor to the time required to solve the same problem on a parallel computer with p identical processing elements.
B. the ratio of the time taken to solve a problem on a single processor to the time required to solve the same problem on a parallel computer with p identical processing elements
C. the ratio of number of multiple processors to size of data
D. none of the above
Answer» B. the ratio of the time taken to solve a problem on a single processor to the time required to solve the same problem on a parallel computer with p identical processing elements
333.

CUDA helps to execute code in parallel mode using __________

A. cpu
B. gpu
C. rom
D. cache memory
Answer» B. gpu
334.

In a thread-function execution scenario, a thread is a ___________

A. work
B. worker
C. task
D. none of the above
Answer» B. worker
335.

Which of the following statements about GPUs are true?

A. grid contains block
B. block contains threads
C. all the mentioned options.
D. sm stands for streaming multiprocessor
Answer» C. all the mentioned options.
336.

The computer system of a parallel computer is capable of _____________

A. decentralized computing
B. parallel computing
C. centralized computing
D. all of these
Answer» B. parallel computing
337.

In which type of application systems can distributed systems run well?

A. hpc
B. distrubuted framework
C. hrc
D. none of the above
Answer» A. hpc
338.

A pipeline is like .................... ?

A. an automobile assembly line
B. house pipeline
C. both a and b
D. a gas line
Answer» A. an automobile assembly line
339.

Pipeline implements ?

A. fetch instruction
B. decode instruction
C. fetch operand
D. all of above
Answer» D. all of above
340.

A processor performing fetch or decode of a different instruction during the execution of another instruction is called ______ ?

A. super-scaling
B. pipe-lining
C. parallel computation
D. none of these
Answer» B. pipe-lining
341.

VLIW stands for ?

A. very long instruction word
B. very long instruction width
C. very large instruction word
D. very large instruction width
Answer» A. very long instruction word
342.

Which one is not a limitation of a distributed memory parallel system?

A. higher communication time
B. cache coherency
C. synchronization overheads
D. none of the above
Answer» B. cache coherency
343.

Which of these steps can create conflict among the processors?

A. synchronized computation of local variables
B. concurrent write
C. concurrent read
D. none of the above
Answer» B. concurrent write
344.

Which one is not a characteristic of NUMA multiprocessors?

A. it allows shared memory computing
B. memory units are placed in physically different location
C. all memory units are mapped to one common virtual global memory
D. processors access their independent local memories
Answer» D. processors access their independent local memories
345.

Which of these is not a source of overhead in parallel computing?

A. non-uniform load distribution
B. less local memory requirement in distributed computing
C. synchronization among threads in shared memory computing
D. none of the above
Answer» B. less local memory requirement in distributed computing
346.

Systems that do not have parallel processing capabilities are?

A. sisd
B. simd
C. mimd
D. all of the above
Answer» A. sisd
347.

How does the number of transistors per chip increase according to Moore's law?

A. quadratically
B. linearly
C. cubicly
D. exponentially
Answer» D. exponentially
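Exponential growth under Moore's law can be made concrete; a tiny sketch assuming a two-year doubling period (the stated period varies between 18 and 24 months depending on the formulation):

```python
def transistors(n0, years, doubling_period=2):
    # exponential growth: count doubles every `doubling_period` years
    return n0 * 2 ** (years / doubling_period)

print(transistors(1000, 10))  # 32000.0  (five doublings in ten years)
```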
348.

Parallel processing may occur?

A. in the instruction stream
B. in the data stream
C. both[a] and [b]
D. none of the above
Answer» C. both[a] and [b]
349.

To which class of systems does the von Neumann computer belong?

A. simd (single instruction multiple data)
B. mimd (multiple instruction multiple data)
C. misd (multiple instruction single data)
D. sisd (single instruction single data)
Answer» D. sisd (single instruction single data)
350.

Fine-grain threading is considered as a ______ threading?

A. instruction-level
B. loop level
C. task-level
D. function-level
Answer» A. instruction-level
351.

Multiprocessors are systems with multiple CPUs, which are capable of independently executing different tasks in parallel. In which category does every processor and memory module have similar access time?

A. uma
B. microprocessor
C. multiprocessor
D. numa
Answer» A. uma
352.

The misses that arise from inter-processor communication are called ______?

A. hit rate
B. coherence misses
C. conflict misses
D. parallel processing
Answer» B. coherence misses
353.

NUMA architecture uses _______in design?

A. cache
B. shared memory
C. message passing
D. distributed memory
Answer» D. distributed memory
354.

A multiprocessor machine which is capable of executing multiple instructions on multiple data sets?

A. sisd
B. simd
C. mimd
D. misd
Answer» C. mimd
355.

In message passing, send and receive message between?

A. task or processes
B. task and execution
C. processor and instruction
D. instruction and decode
Answer» A. task or processes
356.

The First step in developing a parallel algorithm is_________?

A. to decompose the problem into tasks that can be executed concurrently
B. execute directly
C. execute indirectly
D. none of above
Answer» A. to decompose the problem into tasks that can be executed concurrently
357.

The number of tasks into which a problem is decomposed determines its?

A. granularity
B. priority
C. modernity
D. none of above
Answer» A. granularity
358.

The length of the longest path in a task dependency graph is called?

A. the critical path length
B. the critical data length
C. the critical bit length
D. none of above
Answer» A. the critical path length
359.

The graph of tasks (nodes) and their interactions/data exchange (edges)?

A. is referred to as a task interaction graph
B. is referred to as a task communication graph
C. is referred to as a task interface graph
D. none of above
Answer» A. is referred to as a task interaction graph
360.

Mappings are determined by?

A. task dependency
B. task interaction graphs
C. both a and b
D. none of above
Answer» C. both a and b
361.

Decomposition Techniques are?

A. recursive decomposition
B. data decomposition
C. exploratory decomposition
D. all of above
Answer» D. all of above
362.

The Owner Computes Rule generally states that the process assigned a particular data item is responsible for?

A. all computation associated with it
B. only one computation
C. only two computation
D. only occasionally computation
Answer» A. all computation associated with it
363.

A simple application of exploratory decomposition is_?

A. the solution to a 15 puzzle
B. the solution to 20 puzzle
C. the solution to any puzzle
D. none of above
Answer» A. the solution to a 15 puzzle
364.

Speculative Decomposition consist of _?

A. conservative approaches
B. optimistic approaches
C. both a and b
D. only b
Answer» C. both a and b
365.

Task characteristics include?

A. task generation.
B. task sizes.
C. size of data associated with tasks.
D. all of above
Answer» D. all of above
366.

Writing parallel programs is referred to as?

A. parallel computation
B. parallel processes
C. parallel development
D. parallel programming
Answer» D. parallel programming
367.

Which of the following is a parallel algorithm model?

A. data parallel model
B. bit model
C. data model
D. network model
Answer» A. data parallel model
368.

The number and size of tasks into which a problem is decomposed determines the?

A. fine-granularity
B. coarse-granularity
C. sub task
D. granularity
Answer» D. granularity
369.

A feature of a task-dependency graph that determines the average degree of concurrency for a given granularity is its ___________ path?

A. critical
B. easy
C. difficult
D. ambiguous
Answer» A. critical
370.

The pattern of___________ among tasks is captured by what is known as a task-interaction graph?

A. interaction
B. communication
C. optimization
D. flow
Answer» A. interaction
371.

Interaction overheads can be minimized by____?

A. maximize data locality
B. maximize volume of data exchange
C. increase bandwidth
D. minimize social media contents
Answer» A. maximize data locality
372.

Type of parallelism that is naturally expressed by independent tasks in a task-dependency graph is called _______ parallelism?

A. task
B. instruction
C. data
D. program
Answer» A. task
373.

Speed up is defined as a ratio of?

A. s=ts/tp
B. s= tp/ts
C. ts=s/tp
D. tp=s /ts
Answer» A. s=ts/tp
374.

Parallel computing means to divide the job into several __________?

A. bit
B. data
C. instruction
D. task
Answer» D. task
375.

_________ is a method for inducing concurrency in problems that can be solved using the divide-and-conquer strategy?

A. exploratory decomposition
B. speculative decomposition
C. data-decomposition
D. recursive decomposition
Answer» D. recursive decomposition
376.

The ______ time collectively spent by all the processing elements is Tall = p*TP?

A. total
B. average
C. mean
D. sum
Answer» A. total
377.

The dual of one-to-all broadcast is ?

A. all-to-one reduction
B. all-to-one receiver
C. all-to-one sum
D. none of above
Answer» A. all-to-one reduction
378.

A hypercube has?

A. 2^d nodes
B. 3^d nodes
C. 2^n nodes
D. n nodes
Answer» A. 2^d nodes
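The 2^d node count, together with the rule that hypercube neighbors differ in exactly one bit of their label, can be sketched in a few lines:

```python
def hypercube_neighbors(node, d):
    # flipping each of the d label bits yields the d neighbors
    return [node ^ (1 << k) for k in range(d)]

d = 3
print(2 ** d)                     # 8 nodes in a 3-dimensional hypercube
print(hypercube_neighbors(0, d))  # [1, 2, 4]
```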
379.

The Prefix Sum Operation can be implemented using the ?

A. all-to-all broadcast kernel.
B. all-to-one broadcast kernel.
C. one-to-all broadcast kernel
D. scatter kernel
Answer» A. all-to-all broadcast kernel.
380.

In the scatter operation ?

A. single node send a unique message of size m to every other node
B. single node send a same message of size m to every other node
C. single node send a unique message of size m to next node
D. none of above
Answer» A. single node send a unique message of size m to every other node
381.

The gather operation is exactly the inverse of the ?

A. scatter operation
B. broadcast operation
C. prefix sum
D. reduction operation
Answer» A. scatter operation
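Scatter and gather as inverses can be illustrated with plain lists; a sketch assuming the data length divides evenly among p processes:

```python
def scatter(data, p):
    # root splits the data into p distinct, equal pieces
    chunk = len(data) // p
    return [data[i * chunk:(i + 1) * chunk] for i in range(p)]

def gather(pieces):
    # root reassembles the pieces in rank order, undoing the scatter
    return [x for piece in pieces for x in piece]

data = [1, 2, 3, 4, 5, 6, 7, 8]
pieces = scatter(data, 4)
print(pieces)                   # [[1, 2], [3, 4], [5, 6], [7, 8]]
print(gather(pieces) == data)   # True
```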
382.

Parallel algorithms often require a single process to send identical data to all other processes or to a subset of them. This operation is known as _________?

A. one-to-all broadcast
B. all to one broadcast
C. one-to-all reduction
D. all to one reduction
Answer» A. one-to-all broadcast
383.

In which of the following operation, a single node sends a unique message of size m to every other node?

A. gather
B. scatter
C. one to all personalized communication
D. both b and c
Answer» D. both b and c
384.

Gather operation is also known as ________?

A. all to one personalized communication
B. one to all broadcast
C. all to one reduction
D. all to all broadcast
Answer» A. all to one personalized communication
385.

Conventional architectures coarsely comprise of a?

A. a processor
B. memory system
C. data path.
D. all of above
Answer» D. all of above
386.

Data intensive applications utilize?

A. high aggregate throughput
B. high aggregate network bandwidth
C. high processing and memory system performance.
D. none of above
Answer» A. high aggregate throughput
387.

A pipeline is like?

A. overlaps various stages of instruction execution to achieve performance.
B. house pipeline
C. both a and b
D. a gas line
Answer» A. overlaps various stages of instruction execution to achieve performance.
388.

Scheduling of instructions is determined?

A. true data dependency
B. resource dependency
C. branch dependency
D. all of above
Answer» D. all of above
389.

VLIW processors rely on?

A. compile time analysis
B. initial time analysis
C. final time analysis
D. mid time analysis
Answer» A. compile time analysis
390.

Memory system performance is largely captured by?

A. latency
B. bandwidth
C. both a and b
D. none of above
Answer» C. both a and b
391.

The fraction of data references satisfied by the cache is called?

A. cache hit ratio
B. cache fit ratio
C. cache best ratio
D. none of above
Answer» A. cache hit ratio
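The hit ratio feeds directly into average memory access time; a minimal sketch with invented timings (a 1-cycle cache and 100-cycle main memory are assumptions for illustration):

```python
def hit_ratio(hits, accesses):
    # fraction of data references satisfied by the cache
    return hits / accesses

def avg_access_time(h, t_cache, t_mem):
    # fraction h served by the cache, the remainder by main memory
    return h * t_cache + (1 - h) * t_mem

h = hit_ratio(90, 100)
print(h)                                    # 0.9
print(round(avg_access_time(h, 1, 100), 2)) # 10.9
```

Even a 90% hit ratio leaves the average access time dominated by the misses, which is why cache-friendly data layouts matter so much for memory system performance.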
392.

A single control unit that dispatches the same Instruction to various processors is?

A. simd
B. spmd
C. mimd
D. none of above
Answer» A. simd
393.

The primary forms of data exchange between parallel tasks are?

A. accessing a shared data space
B. exchanging messages.
C. both a and b
D. none of above
Answer» C. both a and b
394.

The First step in developing a parallel algorithm is?

A. to decompose the problem into tasks that can be executed concurrently
B. execute directly
C. execute indirectly
D. none of above
Answer» A. to decompose the problem into tasks that can be executed concurrently
395.

The Owner Computes Rule generally states that the process assigned a particular data item is responsible for?

A. all computation associated with it
B. only one computation
C. only two computation
D. only occasionally computation
Answer» A. all computation associated with it
396.

A simple application of exploratory decomposition is?

A. the solution to a 15 puzzle
B. the solution to 20 puzzle
C. the solution to any puzzle
D. none of above
Answer» A. the solution to a 15 puzzle
397.

Speculative Decomposition consist of ?

A. conservative approaches
B. optimistic approaches
C. both a and b
D. only b
Answer» C. both a and b
398.

Task characteristics include?

A. task generation.
B. task sizes.
C. size of data associated with tasks.
D. all of above.
Answer» D. all of above.
399.

The dual of one-to-all broadcast is?

A. all-to-one reduction
B. all-to-one receiver
C. all-to-one sum
D. none of above
Answer» A. all-to-one reduction
400.

A hypercube has?

A. 2^d nodes
B. 3^d nodes
C. 2^n nodes
D. n nodes
Answer» A. 2^d nodes