430+ High Performance Computing (HPC) Solved MCQs

101.

In a broadcast and reduction on a balanced binary tree, reduction is done in ______

A. recursive order
B. straight order
C. vertical order
D. parallel order
Answer» A. recursive order
102.

If "X" is the message to broadcast, it initially resides at source node ______

A. 1
B. 2
C. 8
D. 0
Answer» D. 0
103.

The logical operators used in the algorithm are

A. xor
B. and
C. both
D. none
Answer» C. both
104.

A generalization of broadcast in which each processor is

A. source as well as destination
B. only source
C. only destination
D. none
Answer» A. source as well as destination
105.

The algorithm terminates in _____ steps

A. p
B. p+1
C. p+2
D. p-1
Answer» D. p-1
106.

Each node first sends to one of its neighbours the data it needs to ______

A. broadcast
B. identify
C. verify
D. none
Answer» A. broadcast
107.

The second communication phase is a columnwise ______ broadcast of the consolidated data.

A. all-to-all
B. one -to-all
C. all-to-one
D. point-to-point
Answer» A. all-to-all
108.

All nodes collect _____ messages corresponding to the √p nodes of their respective rows.

A. √p
B. p
C. p+1
D. p-1
Answer» A. √p
109.

It is not possible to port the ____ to a higher-dimensional network

A. algorithm
B. hypercube
C. both
D. none
Answer» A. algorithm
110.

If we port the algorithm to a higher-dimensional network, it would cause

A. error
B. contention
C. recursion
D. none
Answer» B. contention
111.

In the scatter operation, a ____ node sends a message to every other node

A. single
B. double
C. triple
D. none
Answer» A. single
112.

The gather Operation is exactly the inverse of _____

A. scatter operation
B. recursion operation
C. execution
D. none
Answer» A. scatter operation
113.

A similar communication pattern to all-to-all broadcast, except in the _____

A. reverse order
B. parallel order
C. straight order
D. vertical order
Answer» A. reverse order
114.

Group communication operations are built using which primitives?

A. one to all
B. all to all
C. point to point
D. none of these
Answer» C. point to point
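The point above can be illustrated with a minimal sketch (an assumption-laden simulation, not a real MPI API): a one-to-all broadcast built entirely from point-to-point sends, where the set of informed nodes doubles each step, so p nodes are covered in log2(p) steps.

```python
import math

def broadcast(p, source=0):
    """Return the list of (step, sender, receiver) point-to-point sends."""
    has_data = {source}
    sends = []
    step = 0
    while len(has_data) < p:
        step += 1
        for node in sorted(has_data):
            partner = node ^ (1 << (step - 1))  # hypercube-style partner
            if partner < p and partner not in has_data:
                sends.append((step, node, partner))
        has_data |= {r for (_, _, r) in sends}  # receivers now hold the data
    return sends

sends = broadcast(8)
assert len(sends) == 7                     # p - 1 point-to-point sends total
assert max(s for s, _, _ in sends) == 3    # log2(8) = 3 steps
```

Note how every group operation here decomposes into plain sends between single pairs of nodes, which is exactly the point-to-point primitive the question refers to.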
115.

___ can be performed in an identical fashion by inverting the process.

A. recursive doubling
B. reduction
C. broadcast
D. none of these
Answer» B. reduction
116.

Broadcast and reduction operations on a mesh are performed

A. along the rows
B. along the columns
C. both a and b concurrently
D. none of these
Answer» C. both a and b concurrently
117.

The cost of all-to-all broadcast on a ring is

A. (ts + twm)(p - 1)
B. (ts - twm)(p + 1)
C. (tw + tsm)(p - 1)
D. (tw - tsm)(p + 1)
Answer» A. (ts + twm)(p - 1)
118.

The cost of all-to-all broadcast on a mesh is

A. 2ts(sqrt(p) + 1) + twm(p - 1)
B. 2tw(sqrt(p) + 1) + tsm(p - 1)
C. 2tw(sqrt(p) - 1) + tsm(p - 1)
D. 2ts(sqrt(p) - 1) + twm(p - 1)
Answer» D. 2ts(sqrt(p) - 1) + twm(p - 1)
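The two cost formulas above can be checked numerically with a small sketch (the parameter values below are illustrative assumptions only): ts is the startup time, tw the per-word transfer time, m the message size in words, and p the number of nodes.

```python
import math

def ring_all_to_all(ts, tw, m, p):
    # (ts + tw*m)(p - 1): p - 1 steps, each moving one message of m words
    return (ts + tw * m) * (p - 1)

def mesh_all_to_all(ts, tw, m, p):
    # 2*ts*(sqrt(p) - 1) + tw*m*(p - 1): row phase then column phase
    return 2 * ts * (math.sqrt(p) - 1) + tw * m * (p - 1)

ts, tw, m, p = 10.0, 1.0, 4.0, 16
print(ring_all_to_all(ts, tw, m, p))  # (10 + 4) * 15 = 210.0
print(mesh_all_to_all(ts, tw, m, p))  # 2*10*3 + 4*15 = 120.0
```

For the same parameters the mesh is cheaper because its startup term grows with sqrt(p) rather than p, while the bandwidth term tw·m·(p − 1) is identical on both topologies.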
119.

Communication between two directly linked nodes is

A. cut-through routing
B. store-and-forward routing
C. nearest neighbour communication
D. none
Answer» C. nearest neighbour communication
120.

All-to-one communication (reduction) is the dual of ______ broadcast.

A. all-to-all
B. one-to-all
C. one-to-one
D. all-to-one
Answer» B. one-to-all
121.

Which is known as Reduction?

A. all-to-one
B. all-to-all
C. one-to-one
D. one-to-all
Answer» A. all-to-one
122.

Which is known as Broadcast?

A. one-to-one
B. one-to-all
C. all-to-all
D. all-to-one
Answer» B. one-to-all
123.

The dual of all-to-all broadcast is

A. all-to-all reduction
B. all-to-one reduction
C. both
D. none
Answer» A. all-to-all reduction
124.

All-to-all broadcast algorithm for the 2D mesh is based on the

A. linear array algorithm
B. ring algorithm
C. both
D. none
Answer» B. ring algorithm
125.

In the first phase of 2D Mesh All to All, the message size is ___

A. p
B. m*sqrt(p)
C. m
D. p*sqrt(m)
Answer» C. m
126.

In the second phase of 2D Mesh All to All, the message size is ___

A. m
B. p*sqrt(m)
C. p
D. m*sqrt(p)
Answer» D. m*sqrt(p)
127.

In all-to-all on a hypercube, the size of the message to be transmitted at the next step is ____ by concatenating the received message with the current data

A. doubled
B. tripled
C. halved
D. no change
Answer» A. doubled
128.

The all-to-all broadcast on Hypercube needs ____ steps

A. p
B. sqrt(p) - 1
C. log p
D. none
Answer» C. log p
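Questions 127 and 128 can be seen together in a short sketch (a serial simulation under the assumption that p is a power of two, not real message-passing code): in each of the log2(p) steps every node exchanges its accumulated data with its partner across one hypercube dimension, so the message size doubles per step.

```python
import math

def hypercube_all_to_all(p):
    data = {node: [node] for node in range(p)}      # each node starts with its own item
    steps = int(math.log2(p))
    for d in range(steps):
        new = {}
        for node in range(p):
            partner = node ^ (1 << d)               # neighbour across dimension d
            new[node] = data[node] + data[partner]  # concatenate: size doubles
        data = new
    return data, steps

data, steps = hypercube_all_to_all(8)
assert steps == 3                                            # log2(8) steps
assert all(sorted(v) == list(range(8)) for v in data.values())
```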
129.

One-to-All Personalized Communication operation is commonly called ___

A. gather operation
B. concatenation
C. scatter operation
D. none
Answer» C. scatter operation
130.

The dual of the scatter operation is the

A. concatenation
B. gather operation
C. both
D. none
Answer» C. both
131.

In Scatter Operation on Hypercube, on each step, the size of the messages communicated is ____

A. tripled
B. halved
C. doubled
D. no change
Answer» B. halved
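The halving in question 131 can be sketched as follows (a simulation under the assumption that p is a power of two and node 0 is the source): the source starts with p messages, and at each step a node sends half of what it holds across one hypercube dimension.

```python
def hypercube_scatter(p, source=0):
    holding = {source: list(range(p))}   # p messages, all at the source
    dims = p.bit_length() - 1            # log2(p) hypercube dimensions
    sizes = []
    for step in range(dims - 1, -1, -1):
        for node in list(holding):       # snapshot: partners added this step are skipped
            partner = node ^ (1 << step)
            half = len(holding[node]) // 2
            holding[partner] = holding[node][half:]  # send the upper half away
            holding[node] = holding[node][:half]
            sizes.append(half)           # size of the message just communicated
    return holding, sizes

holding, sizes = hypercube_scatter(8)
assert all(len(v) == 1 for v in holding.values())  # every node ends with one message
assert sizes[0] == 4                               # first transfer: half of 8 messages
```

The recorded transfer sizes shrink as 4, 2, 2, 1, 1, 1, 1 for p = 8, showing the per-step halving.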
132.

Which is also called "Total Exchange" ?

A. all-to-all broadcast
B. all-to-all personalized communication
C. all-to-one reduction
D. none
Answer» B. all-to-all personalized communication
133.

All-to-all personalized communication can be used in ____

A. fourier transform
B. matrix transpose
C. sample sort
D. all of the above
Answer» D. all of the above
134.

In collective communication operations, collective means

A. involve group of processors
B. involve group of algorithms
C. involve group of variables
D. none of these
Answer» A. involve group of processors
135.

The efficiency of a data parallel algorithm depends on the

A. efficient implementation of the algorithm
B. efficient implementation of the operation
C. both
D. none
Answer» B. efficient implementation of the operation
136.

All processes participate in a single ______ interaction operation.

A. global
B. local
C. wide
D. variable
Answer» A. global
137.

Subsets of processes are involved in ______ interaction.

A. global
B. local
C. wide
D. variable
Answer» B. local
138.

The goal of a good algorithm is to implement commonly used _____ patterns.

A. communication
B. interaction
C. parallel
D. regular
Answer» A. communication
139.

Reduction can be used to find the sum, product, maximum, minimum of _____ of numbers.

A. tuple
B. list
C. sets
D. all of above
Answer» C. sets
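The reduction described above can be sketched as a serial simulation of the parallel operation (pairwise combination per step, as on a balanced binary tree):

```python
def tree_reduce(values, op):
    """Tree-based reduction: combine adjacent pairs until one value remains."""
    vals = list(values)
    while len(vals) > 1:
        nxt = [op(vals[i], vals[i + 1]) for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2:                # odd element carries over unchanged
            nxt.append(vals[-1])
        vals = nxt
    return vals[0]

nums = [5, 3, 8, 1, 9, 2, 7, 4]
assert tree_reduce(nums, lambda a, b: a + b) == 39   # sum
assert tree_reduce(nums, max) == 9                   # maximum
assert tree_reduce(nums, min) == 1                   # minimum
```

Any associative operator works here, which is why the same reduction pattern serves sum, product, maximum, and minimum.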
140.

The source ____ is a bottleneck.

A. process
B. algorithm
C. list
D. tuple
Answer» A. process
141.

Using only connections between single pairs of nodes at a time is

A. good utilization
B. poor utilization
C. massive utilization
D. medium utilization
Answer» B. poor utilization
142.

Having all processes that have the data send it again is

A. recursive doubling
B. naive approach
C. reduction
D. all
Answer» A. recursive doubling
143.

The ____ do not snoop the messages going through them.

A. nodes
B. variables
C. tuple
D. list
Answer» A. nodes
144.

Accumulating results and sending with the same pattern is

A. broadcast
B. naive approach
C. recursive doubling
D. reduction symmetric
Answer» D. reduction symmetric
145.

Every node on the linear array has the data and broadcasts on the columns with the linear array algorithm in _____

A. parallel
B. vertical
C. horizontal
D. all
Answer» A. parallel
146.

Using different links every time and forwarding in parallel again is

A. better for congestion
B. better for reduction
C. better for communication
D. better for algorithm
Answer» A. better for congestion
147.

In a balanced binary tree, the number of processing nodes is equal to the number of

A. leaves
B. number of elemnts
C. branch
D. none
Answer» A. leaves
148.

In one-to-all broadcast there is a

A. divide and conquer type algorithm
B. sorting type algorithm
C. searching type algorithm
D. simple algorithm
Answer» A. divide and conquer type algorithm
149.

For the sake of simplicity, the number of nodes is a power of

A. 1
B. 2
C. 3
D. 4
Answer» B. 2
150.

Nodes with zero in the i least significant bits participate in _______

A. algorithm
B. broadcast
C. communication
D. searching
Answer» C. communication
151.

Every node has to know when to communicate, that is, to

A. call the procedure
B. call for broadcast
C. call for communication
D. call the congestion
Answer» A. call the procedure
152.

The procedure is distributed and requires only point-to-point _______

A. synchronization
B. communication
C. both
D. none
Answer» A. synchronization
153.

Renaming relative to the source is done by _____ with the source.

A. xor
B. xnor
C. and
D. nand
Answer» A. xor
154.

A task dependency graph is ------------------

A. directed
B. undirected
C. directed acyclic
D. undirected acyclic
Answer» C. directed acyclic
155.

In a task dependency graph, the longest directed path between any pair of start and finish nodes is called the --------------

A. total work
B. critical path
C. task path
D. task length
Answer» B. critical path
156.

Which of the following is not a granularity type?

A. coarse grain
B. large grain
C. medium grain
D. fine grain
Answer» B. large grain
157.

Which of the following is an example of data decomposition?

A. matrix multiplication
B. merge sort
C. quick sort
D. 15-puzzle
Answer» A. matrix multiplication
158.

Which problems can be handled by recursive decomposition?

A. backtracking
B. greedy method
C. divide and conquer problem
D. branch and bound
Answer» C. divide and conquer problem
159.

In this decomposition, problem decomposition goes hand in hand with its execution

A. data decomposition
B. recursive decomposition
C. explorative decomposition
D. speculative decomposition
Answer» C. explorative decomposition
160.

Which of the following is not an example of explorative decomposition?

A. n queens problem
B. 15-puzzle problem
C. tic tac toe
D. quick sort
Answer» D. quick sort
161.

Topological sort can be applied to which of the following graphs?

A. undirected cyclic graphs
B. directed cyclic graphs
C. undirected acyclic graphs
D. directed acyclic graphs
Answer» D. directed acyclic graphs
162.

In most of the cases, topological sort starts from a node which has __________

A. maximum degree
B. minimum degree
C. any degree
D. zero degree
Answer» D. zero degree
163.

Which of the following is not an application of topological sorting?

A. finding prerequisite of a task
B. finding deadlock in an operating system
C. finding cycle in a graph
D. ordered statistics
Answer» D. ordered statistics
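The topological-sort questions above can be grounded in a minimal sketch of Kahn's algorithm (graph represented as an adjacency dict; names are illustrative): the sort starts from the zero-indegree nodes, which is the "zero degree" answer to question 162, and a non-empty leftover set signals a cycle, which is the detection use in question 163.

```python
from collections import deque

def topo_sort(graph):
    indeg = {u: 0 for u in graph}
    for u in graph:
        for v in graph[u]:
            indeg[v] += 1
    q = deque(u for u in graph if indeg[u] == 0)   # start from zero-indegree nodes
    order = []
    while q:
        u = q.popleft()
        order.append(u)
        for v in graph[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    if len(order) != len(graph):
        raise ValueError("graph has a cycle")      # only DAGs can be fully sorted
    return order

dag = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
assert topo_sort(dag)[0] == "a" and topo_sort(dag)[-1] == "d"
```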
164.

In ------------ tasks are defined before starting the execution of the algorithm

A. dynamic task
B. static task
C. regular task
D. one way task
Answer» B. static task
165.

Which of the following is not an array distribution method of data partitioning?

A. block
B. cyclic
C. block cyclic
D. chunk
Answer» D. chunk
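The three array distribution schemes named in the options can be sketched as index-to-owner maps for n elements over p processes (a minimal illustration, with an assumed block size b for the block-cyclic case):

```python
def block_owner(i, n, p):
    return i // -(-n // p)          # contiguous blocks of ceil(n/p) elements

def cyclic_owner(i, p):
    return i % p                    # elements dealt out one at a time

def block_cyclic_owner(i, b, p):
    return (i // b) % p             # blocks of size b dealt out cyclically

n, p = 8, 4
assert [block_owner(i, n, p) for i in range(n)] == [0, 0, 1, 1, 2, 2, 3, 3]
assert [cyclic_owner(i, p) for i in range(n)] == [0, 1, 2, 3, 0, 1, 2, 3]
assert [block_cyclic_owner(i, 2, 2) for i in range(n)] == [0, 0, 1, 1, 0, 0, 1, 1]
```

Block-cyclic interpolates between the other two: b = 1 gives cyclic, b = ceil(n/p) gives block.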
166.

Blocking optimization is used to improve temporal locality, to reduce

A. hit miss
B. misses
C. hit rate
D. cache misses
Answer» B. misses
167.

CUDA's 'unifying theme' for every form of parallelism is the

A. cda thread
B. pta thread
C. cuda thread
D. cud thread
Answer» C. cuda thread
168.

Topological sort of a Directed Acyclic graph is?

A. always unique
B. always not unique
C. sometimes unique and sometimes not unique
D. always unique if graph has even number of vertices
Answer» C. sometimes unique and sometimes not unique
169.

Threads blocked altogether and executed in sets of 32 threads are called a

A. thread block
B. 32 thread
C. 32 block
D. unit block
Answer» A. thread block
170.

True or False: The threads in a thread block are distributed across SM units so that each thread is executed by one SM unit.

A. true
B. false
Answer» A. true
171.

When the topological sort of a graph is unique?

A. when there exists a hamiltonian path in the graph
B. in the presence of multiple nodes with indegree 0
C. in the presence of single node with indegree 0
D. in the presence of single node with outdegree 0
Answer» A. when there exists a hamiltonian path in the graph
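The uniqueness condition in question 171 can be checked directly (a sketch assuming the input is a DAG): the topological order is unique exactly when, at every step of Kahn's algorithm, only one node has indegree zero, i.e. consecutive nodes of the order are connected and form a Hamiltonian path.

```python
def unique_topo(graph):
    """True iff the DAG `graph` has exactly one topological order."""
    indeg = {u: 0 for u in graph}
    for u in graph:
        for v in graph[u]:
            indeg[v] += 1
    ready = [u for u in graph if indeg[u] == 0]
    while ready:
        if len(ready) > 1:
            return False            # a choice exists, so the order is not unique
        u = ready.pop()
        for v in graph[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return True

assert unique_topo({"a": ["b"], "b": ["c"], "c": []})       # chain: unique
assert not unique_topo({"a": ["c"], "b": ["c"], "c": []})   # two sources: not unique
```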
172.

What is a high performance multi-core processor that can be used to accelerate a wide variety of applications using parallel computing?

A. cpu
B. dsp
C. gpu
D. clu
Answer» C. gpu
173.

A good mapping does not depend on which of the following factors?

A. knowledge of task sizes
B. the size of data associated with tasks
C. characteristics of inter-task interactions
D. task overhead
Answer» D. task overhead
174.

CUDA is a parallel computing platform and programming model.

A. true
B. false
Answer» A. true
175.

Which of the following is not a form of parallelism supported by CUDA

A. vector parallelism - floating point computations are executed in parallel on wide vector units
B. thread level task parallelism - different threads execute different tasks
C. block and grid level parallelism - different blocks or grids execute different tasks
D. data parallelism - different threads and blocks process different parts of data in memory
Answer» A. vector parallelism - floating point computations are executed in parallel on wide vector units
176.

The style of parallelism supported on GPUs is best described as

A. misd - multiple instruction single data
B. simt - single instruction multiple thread
C. sisd - single instruction single data
D. mimd
Answer» B. simt - single instruction multiple thread
177.

True or false: Functions annotated with the __global__ qualifier may be executed on the host or the device

A. true
B. false
Answer» A. true
178.

Which of the following correctly describes a GPU kernel

A. a kernel may contain a mix of host and gpu code
B. all thread blocks involved in the same computation use the same kernel
C. a kernel is part of the gpu's internal micro-operating system, allowing it to act as an independent host
D. kernel may contain only host code
Answer» B. all thread blocks involved in the same computation use the same kernel
179.

A code known as a grid, which runs on the GPU, consists of a set of

A. 32 thread
B. unit block
C. 32 block
D. thread block
Answer» D. thread block
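The grid/block hierarchy in the last few questions can be illustrated with a plain-Python sketch (not CUDA code) of the standard 1-D indexing convention, where each thread's global index is blockIdx * blockDim + threadIdx:

```python
def global_ids(grid_dim, block_dim):
    """Global thread indices for a 1-D grid of 1-D thread blocks."""
    return [b * block_dim + t            # blockIdx * blockDim + threadIdx
            for b in range(grid_dim)
            for t in range(block_dim)]

ids = global_ids(grid_dim=4, block_dim=32)   # 4 blocks of 32 threads each
assert len(ids) == 128
assert ids[0] == 0 and ids[-1] == 127
```

This is how a grid of thread blocks covers a data array: each thread uses its global index to pick the element it processes.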
180.

Which of the following is not a parallel algorithm model?

A. data parallel model
B. task graph model
C. task model
D. work pool model
Answer» C. task model
181.

Having a load before a store in program order, then interchanging this order, results in

A. waw hazards
B. destination registers
C. war hazards
D. registers
Answer» C. war hazards
182.

The model based on passing a stream of data through processes arranged in succession is called the

A. producer consumer model
B. hybrid model
C. task graph model
D. work pool model
Answer» A. producer consumer model
183.

When instruction i and instruction j tend to write the same register or memory location, it is called

A. input dependence
B. output dependence
C. ideal pipeline
D. digital call
Answer» B. output dependence
184.

Multithreading allows multiple threads to share the functional units of a

A. multiple processor
B. single processor
C. dual core
D. corei5
Answer» B. single processor
185.

Allowing multiple instructions to issue in a clock cycle is the goal of

A. single-issue processors
B. dual-issue processors
C. multiple-issue processors
D. no-issue processors
Answer» C. multiple-issue processors
186.

OpenGL stands for:

A. open general liability
B. open graphics library
C. open guide line
D. open graphics layer
Answer» B. open graphics library
187.

which of the following is not an advantage of OpenGL

A. there is more detailed documentation for opengl while other apis don't have such detailed documentation.
B. opengl is portable.
C. opengl is more functional than any other api.
D. it is not a cross-platform api.
Answer» D. it is not a cross-platform api.
188.

The work pool model uses a ---------------- approach for task assignment

A. static
B. dynamic
C. centralized
D. decentralized
Answer» B. dynamic
189.

Which of the following is false regarding the data parallel model?

A. all tasks perform the same computations
B. degree of parallelism increases with size of problem
C. matrix multiplication is example of data parallel computations
D. dynamic mapping is done
Answer» D. dynamic mapping is done
190.

Which of the following are methods for containing interaction overheads?

A. maximizing data locality
B. minimize volume of data exchange
C. minimize frequency of interactions
D. all the above
Answer» D. all the above
191.

Which of the following are classes of the centralized dynamic mapping method?

A. self scheduling
B. chunk scheduling
C. both a and b
D. none of the above
Answer» C. both a and b
192.

Which of the following is not a scheme for static mapping?

A. block distribution
B. block cyclic distributions
C. cyclic distributions
D. self scheduling
Answer» D. self scheduling
193.

A pipeline is like ....................

A. an automobile assembly line
B. house pipeline
C. both a and b
D. a gas line
Answer» A. an automobile assembly line
194.

Data hazards occur when .....................

A. greater performance loss
B. pipeline changes the order of read/write access to operands
C. some functional unit is not fully pipelined
D. machine size is limited
Answer» B. pipeline changes the order of read/write access to operands
195.

Systems that do not have parallel processing capabilities are

A. sisd
B. simd
C. mimd
D. all of the above
Answer» A. sisd
196.

How does the number of transistors per chip increase according to Moore's law?

A. quadratically
B. linearly
C. cubically
D. exponentially
Answer» D. exponentially
197.

Parallel processing may occur

A. in the instruction stream
B. in the data stream
C. both [a] and [b]
D. none of the above
Answer» C. both [a] and [b]
198.

Execution of several activities at the same time.

A. processing
B. parallel processing
C. serial processing
D. multitasking
Answer» B. parallel processing
199.

Cache memory works on the principle of

A. locality of data
B. locality of memory
C. locality of reference
D. locality of reference & memory
Answer» C. locality of reference
200.

SIMD represents an organization that ______________.

A. refers to a computer system capable of processing several programs at the same time.
B. represents the organization of a single computer containing a control unit, processor unit and a memory unit.
C. includes many processing units under the supervision of a common control unit
D. none of the above.
Answer» C. includes many processing units under the supervision of a common control unit