Chapter: Parallel Program Development
1.

An n-body solver is a ___________ that finds the solution to an n-body problem by simulating the behavior of the particles.

A. Program
B. Particle
C. Programmer
D. All of the above
Answer» A. Program
2.

The set of NP-complete problems is often denoted by ____________

A. NP-C
B. NP-C or NPC
C. NPC
D. None of the above
Answer» B. NP-C or NPC
3.

Pthreads has a nonblocking version of pthreads_mutex_lock called __________

A. pthread_mutex_lock
B. pthread_mutex_trylock
C. pthread_mutex_acquirelock
D. pthread_mutex_releaselock
Answer» B. pthread_mutex_trylock
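For illustration, a minimal sketch (not from the source) of how pthread_mutex_trylock differs from pthread_mutex_lock: it returns immediately instead of blocking when the mutex is already held.

#include <pthread.h>

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void worker(void) {
    /* pthread_mutex_trylock never blocks: it returns 0 if the lock
       was acquired, or EBUSY if another thread already holds it. */
    if (pthread_mutex_trylock(&mutex) == 0) {
        /* ... critical section ... */
        pthread_mutex_unlock(&mutex);
    } else {
        /* lock was busy: do other useful work instead of waiting */
    }
}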
4.

What are the algorithms for identifying which subtrees we assign to the processes or threads __________

A. breadth-first search
B. depth-first search
C. depth-first search & breadth-first search
D. None of the above
Answer» C. depth-first search & breadth-first search
5.

What are the scoping clauses in OpenMP _________

A. Shared Variables & Private Variables
B. Shared Variables
C. Private Variables
D. None of the above
Answer» A. Shared Variables & Private Variables
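A short illustrative sketch (not from the source) of the two scoping clauses: shared gives all threads one instance of a variable, while private gives each thread its own uninitialized copy.

#include <omp.h>
#include <stdio.h>

int main(void) {
    int n = 8;       /* shared: one instance visible to every thread */
    int my_rank;     /* private: each thread gets its own copy       */

    #pragma omp parallel num_threads(4) shared(n) private(my_rank)
    {
        my_rank = omp_get_thread_num();
        printf("thread %d sees shared n = %d\n", my_rank, n);
    }
    return 0;
}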
6.

The function My_avail_tour_count can simply return the ________

A. Size of the process’ stack
B. Sub tree rooted at the partial tour
C. Cut-off length
D. None of the above
Answer» A. Size of the process’ stack
7.

MPI provides a function ________, for packing data into a buffer of contiguous memory.

A. MPI_Pack
B. MPI_UnPack
C. MPI_Pack_count
D. MPI_Packed
Answer» A. MPI_Pack
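A minimal sketch of MPI_Pack in use (the buffer size, destination rank, and tag are illustrative assumptions): successive calls advance a position offset through one contiguous buffer, which can then be sent as MPI_PACKED.

#include <mpi.h>

void send_packed(int count, double value, int dest, MPI_Comm comm) {
    char buffer[100];   /* contiguous buffer; 100 bytes is an arbitrary choice */
    int position = 0;   /* MPI_Pack advances this offset on every call */

    MPI_Pack(&count, 1, MPI_INT,    buffer, (int)sizeof(buffer), &position, comm);
    MPI_Pack(&value, 1, MPI_DOUBLE, buffer, (int)sizeof(buffer), &position, comm);

    /* position now holds the number of bytes actually packed */
    MPI_Send(buffer, position, MPI_PACKED, dest, 0, comm);
}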
8.

Two MPI_Irecv calls are made specifying different buffers and tags, but the same sender and request location. How can one determine that the buffer specified in the first call has valid data?

A. Call MPI_Probe
B. Call MPI_Testany with the same request listed twice
C. Call MPI_Wait twice with the same request
D. Look at the data in the buffer and try to determine whether it is valid
Answer» C. Call MPI_Wait twice with the same request
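For contrast, a minimal sketch (not from the source) of the unambiguous pattern: each MPI_Irecv gets its own request handle, so each receive can be completed individually and its buffer trusted.

#include <mpi.h>

void receive_both(int src, MPI_Comm comm) {
    int buf1[10], buf2[10];
    MPI_Request reqs[2];   /* one request handle per outstanding receive */

    MPI_Irecv(buf1, 10, MPI_INT, src, 1, comm, &reqs[0]);
    MPI_Irecv(buf2, 10, MPI_INT, src, 2, comm, &reqs[1]);

    /* Completing each request tells us its buffer now holds valid data.
       Reusing a single request variable for both calls would overwrite
       the first handle, leaving no direct way to wait on the first receive. */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
}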
9.

Which of the following statements is not true?

A. MPI_Isend and MPI_Irecv are non-blocking message passing routines of MPI
B. MPI_Issend and MPI_Ibsend are non-blocking message passing routines of MPI
C. MPI_Send and MPI_Recv are non-blocking message passing routines of MPI
D. MPI_Ssend and MPI_Bsend are blocking message passing routines of MPI
Answer» C. MPI_Send and MPI_Recv are non-blocking message passing routines of MPI
10.

Which of the following is not valid with reference to Message Passing Interface (MPI)?

A. MPI can run on any hardware platform
B. The programming model is a distributed memory model
C. All parallelism is implicit
D. MPI_Comm_size returns the total number of MPI processes in the specified communicator
Answer» C. All parallelism is implicit
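A minimal sketch (not from the source) showing why MPI's parallelism is explicit: the program itself queries MPI_Comm_size and MPI_Comm_rank and decides what each process does.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int comm_sz, my_rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);   /* total number of processes */
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);   /* this process's rank       */

    /* Parallelism is explicit: the program branches on the rank itself. */
    if (my_rank == 0)
        printf("coordinator among %d processes\n", comm_sz);
    else
        printf("worker %d\n", my_rank);

    MPI_Finalize();
    return 0;
}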
11.

An _____________ is a program that finds the solution to an n-body problem by simulating the behavior of the particles.

A. Two N-Body Solvers
B. n-body solver
C. n-body problem
D. Newton's second law
Answer» B. n-body solver
12.

For the reduced n-body solver, a ________________ will best distribute the workload in the computation of the forces.

A. cyclic distribution
B. velocity of each particle
C. universal gravitation
D. gravitational constant
Answer» A. cyclic distribution
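A sketch of why the cyclic distribution balances the reduced solver (function and parameter names here are illustrative, not the textbook's):

void compute_my_forces(int my_rank, int thread_count, int n) {
    /* Thread q handles particles q, q + thread_count, q + 2*thread_count, ...
       In the reduced solver the force loop for particle i does work roughly
       proportional to n - i, so this interleaving balances the load far
       better than contiguous blocks would. */
    for (int i = my_rank; i < n; i += thread_count) {
        /* ... compute the total force on particle i ... */
    }
}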
13.

Parallelizing the two n-body solvers using _______________ is very similar to parallelizing them using OpenMP.

A. thread's rank
B. function Loopschedule
C. Pthreads
D. loop variable
Answer» C. Pthreads
14.

The run-times of the serial solvers differed from the single-process MPI solvers by ______________.

A. More than 1%
B. less than 1%
C. Equal to 1%
D. Greater than 1%
Answer» B. less than 1%
15.

Each node of the tree has an _________________, that is, the cost of the partial tour.

A. Euler's method
B. associated cost
C. three-dimensional problems
D. fast function
Answer» B. associated cost
16.

Using _____________ we can systematically visit each node of the tree that could possibly lead to a least-cost solution.

A. depth-first search
B. Foster's methodology
C. reduced algorithm
D. breadth-first search
Answer» A. depth-first search
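A hedged sketch of iterative depth-first search with an explicit stack, the form used in tree-structured searches such as branch-and-bound TSP; all types and helpers below are hypothetical placeholders, not the textbook's exact API.

typedef struct tour tour_t;

extern int     n;                         /* number of cities              */
extern int     best_cost;                 /* cost of best complete tour    */
extern int     Empty(void);               /* is the local stack empty?     */
extern void    Push(tour_t *tour);        /* push a partial tour           */
extern tour_t *Pop(void);                 /* pop a partial tour            */
extern int     City_count(tour_t *tour);
extern int     Cost(tour_t *tour);
extern int     Feasible(tour_t *tour, int city);  /* legal and not pruned  */
extern tour_t *Extend(tour_t *tour, int city);    /* add city to the tour  */

void depth_first_search(tour_t *root) {
    Push(root);
    while (!Empty()) {
        tour_t *tour = Pop();
        if (City_count(tour) == n) {
            if (Cost(tour) < best_cost)
                best_cost = Cost(tour);   /* found a cheaper complete tour */
        } else {
            /* push only children that could still beat best_cost */
            for (int city = n - 1; city >= 1; city--)
                if (Feasible(tour, city))
                    Push(Extend(tour, city));
        }
    }
}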
17.

After copying the newly created stack into our private stack, we set the new_stack variable to _____________.

A. Infinite
B. Zero
C. NULL
D. None of the above
Answer» C. NULL
18.

The ____________________ is a pointer to a block of memory allocated by the user program, and buffer_size is its size in bytes.

A. tour data
B. node tasks
C. actual computation
D. buffer argument
Answer» D. buffer argument
19.

A _____________ function is called by Fulfill_request.

A. descendants
B. Split_stack
C. dynamic mapping scheme
D. ancestors
Answer» B. Split_stack
20.

The cost of stack splitting in the MPI implementation is quite high; in addition to the cost of the communication, the packing and unpacking are very ________________.

A. global least cost
B. time-consuming
C. expensive tours
D. shared stack
Answer» B. time-consuming
21.

_____________ begins by checking on the number of tours that the process has in its stack.

A. Terminated
B. Send rejects
C. Receive rejects
D. Empty
Answer» A. Terminated
22.

The ____________ is the distributed-memory version of the OpenMP busy-wait loop.

A. For loop
B. while(1) loop
C. Do while loop
D. Empty
Answer» B. while(1) loop
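A hedged sketch of such a while(1) loop; every helper below is a hypothetical stand-in for the solver's own routines, not the textbook's code.

extern int  Stack_empty(void);           /* anything left on the local stack? */
extern void Process_partial_tour(void);  /* expand one partial tour           */
extern int  Terminated(void);            /* global termination test           */

void work_loop(void) {
    while (1) {
        if (!Stack_empty()) {
            Process_partial_tour();
        } else if (Terminated()) {
            break;   /* every process is out of work: leave the loop */
        }
        /* otherwise spin again: the distributed analogue of busy-waiting */
    }
}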
23.

_______________sent is set to false and we continue in the loop.

A. work_request
B. My_avail_tour_count
C. Fulfill_request
D. Split_stack packs
Answer» A. work_request
24.

________________ takes the data in data_to_be_packed and packs it into contig_buf.

A. MPI_Unpack
B. MPI_Pack
C. MPI_Datatype
D. MPI_Comm
Answer» B. MPI_Pack
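On the receiving side, MPI_Unpack reverses the process, walking the same position offset through the buffer; a minimal sketch with illustrative names:

#include <mpi.h>

void unpack_received(char *contig_buf, int buf_size, MPI_Comm comm) {
    int count;
    double value;
    int position = 0;   /* advanced by MPI_Unpack, mirroring MPI_Pack */

    MPI_Unpack(contig_buf, buf_size, &position, &count, 1, MPI_INT,    comm);
    MPI_Unpack(contig_buf, buf_size, &position, &value, 1, MPI_DOUBLE, comm);
    /* count and value now hold the fields in the order they were packed */
}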
25.

The _______________ function, when executed by a process other than 0, sends its energy to process 0.

A. Out of work
B. No_work_left
C. zero-length message
D. request for work
Answer» A. Out_of_work