McqMate
These multiple-choice questions (MCQs) are designed to enhance your knowledge and understanding of Computer Science Engineering (CSE) and Programming Languages.
1. MPI specifies the functionality of _________________ communication routines.
A. High-level
B. Low-level
C. Intermediate-level
D. Expert-level
Answer» A. High-level
2. _________________ generates log files of MPI calls.
A. mpicxx
B. mpilog
C. mpitrace
D. mpianim
Answer» B. mpilog
3. A collective communication in which data belonging to a single process is sent to all of the processes in the communicator is called a ________________.
A. Scatter
B. Gather
C. Broadcast
D. Allgather
Answer» C. Broadcast
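A minimal sketch of a broadcast in C, assuming an MPI implementation is installed and the program is launched with mpiexec; the root's value (42 here is arbitrary) is copied to every process in MPI_COMM_WORLD:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[]) {
    int rank, value = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) value = 42;   /* only the root has the data initially */
    /* After the call, every process's copy of value is 42 */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("Process %d has value %d\n", rank, value);
    MPI_Finalize();
    return 0;
}
```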
4. __________________ is a nonnegative integer that the destination can use to selectively screen messages.
A. Dest
B. Type
C. Address
D. Length
Answer» B. Type
5. The routine ________________ combines data from all processes (by adding them in this case) and returns the result to a single process.
A. MPI_Reduce
B. MPI_Bcast
C. MPI_Finalize
D. MPI_Comm_size
Answer» A. MPI_Reduce
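A short sketch of MPI_Reduce summing one integer per process into a result held only by rank 0; using each process's rank as its local value is just for illustration:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[]) {
    int rank, sum = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    /* Combine every process's rank with MPI_SUM; rank 0 receives the total */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("Sum of ranks: %d\n", sum);
    MPI_Finalize();
    return 0;
}
```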
6. The easiest way to create communicators with new groups is with _____________.
A. MPI_Comm_rank
B. MPI_Comm_create
C. MPI_Comm_split
D. MPI_Comm_group
Answer» C. MPI_Comm_split
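A sketch of MPI_Comm_split dividing MPI_COMM_WORLD into two new communicators, one for even-ranked and one for odd-ranked processes; the color and key choices here are only illustrative:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[]) {
    int world_rank, new_rank;
    MPI_Comm new_comm;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    /* Processes with the same color land in the same new communicator;
       the key (world_rank) orders the ranks within it. */
    MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, world_rank, &new_comm);
    MPI_Comm_rank(new_comm, &new_rank);
    printf("World rank %d -> new rank %d\n", world_rank, new_rank);
    MPI_Comm_free(&new_comm);
    MPI_Finalize();
    return 0;
}
```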
7. _______________ is an object that holds information about the received message, including, for example, the actual count of received data items.
A. buff
B. count
C. tag
D. status
Answer» D. status
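A sketch showing how the status object, together with MPI_Get_count, reveals the actual count of a received message; buffer sizes and values are illustrative, and the program assumes at least two processes:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[]) {
    int rank, count;
    double buf[100];
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        double data[3] = {1.0, 2.0, 3.0};
        MPI_Send(data, 3, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* buf can hold 100 doubles, but fewer may arrive */
        MPI_Recv(buf, 100, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status);
        MPI_Get_count(&status, MPI_DOUBLE, &count);
        printf("Received %d doubles from rank %d\n", count, status.MPI_SOURCE);
    }
    MPI_Finalize();
    return 0;
}
```

This also illustrates question 10 below: the receive completes normally even though fewer than 100 elements arrived.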
8. The _______________ operation similarly computes an element-wise reduction of vectors, but this time leaves the result scattered among the processes.
A. Reduce-scatter
B. Reduce (to-one)
C. Allreduce
D. None of the above
Answer» A. Reduce-scatter
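A sketch using MPI_Reduce_scatter_block (the equal-block-size variant): each of the comm_sz processes contributes a vector of comm_sz integers, element-wise sums are formed, and element i of the result lands on process i. The input values are illustrative:

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char* argv[]) {
    int rank, comm_sz, result;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);
    int* sendbuf = malloc(comm_sz * sizeof(int));
    for (int i = 0; i < comm_sz; i++) sendbuf[i] = rank + i;  /* sample data */
    /* Element-wise sum across processes; element i of the reduced
       vector is scattered to process i. */
    MPI_Reduce_scatter_block(sendbuf, &result, 1, MPI_INT, MPI_SUM,
                             MPI_COMM_WORLD);
    printf("Process %d received %d\n", rank, result);
    free(sendbuf);
    MPI_Finalize();
    return 0;
}
```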
9. __________________ is the principal alternative to shared-memory parallel programming.
A. Multiple passing
B. Message passing
C. Message programming
D. None of the above
Answer» B. Message passing
10. ________________ may complete even if fewer than count elements have been received.
A. MPI_Recv
B. MPI_Send
C. MPI_Get_count
D. MPI_ANY_SOURCE
Answer» A. MPI_Recv
11. A ___________ is a script whose main purpose is to run some program. In this case, the program is the C compiler.
A. wrapper script
B. communication functions
C. wrapper simplifies
D. type definitions
Answer» A. wrapper script
12. ________________ returns in its second argument the number of processes in the communicator.
A. MPI_Init
B. MPI_Comm_size
C. MPI_Finalize
D. MPI_Comm_rank
Answer» B. MPI_Comm_size
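A minimal MPI program tying questions 11 and 12 together: it uses MPI_Init, MPI_Comm_size, MPI_Comm_rank, and MPI_Finalize, and would typically be built with the mpicc wrapper script and run with mpiexec (file and executable names are arbitrary):

```c
/* Compile: mpicc -o mpi_hello mpi_hello.c
   Run:     mpiexec -n 4 ./mpi_hello          */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[]) {
    int comm_sz, my_rank;
    MPI_Init(&argc, &argv);                    /* start up MPI */
    MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);   /* number of processes */
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);   /* this process's rank */
    printf("Hello from process %d of %d\n", my_rank, comm_sz);
    MPI_Finalize();                            /* shut down MPI */
    return 0;
}
```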
13. _____________ always blocks until a matching message has been received.
A. MPI_TAG
B. MPI_SOURCE
C. MPI_Recv
D. MPI_ERROR
Answer» C. MPI_Recv
14. Communication functions that involve all the processes in a communicator are called _____________.
A. MPI_Get_count
B. collective communications
C. buffer the message
D. nonovertaking
Answer» B. collective communications
15. MPI_Send and MPI_Recv are called _____________ communications.
A. Collective Communication
B. Tree-Structured Communication
C. Point-to-point
D. Collective Computation
Answer» C. Point-to-point
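A point-to-point sketch covering questions 4, 13, and 15: rank 0 sends a tagged message and rank 1 blocks in MPI_Recv until a matching message arrives. The tag value and payload are arbitrary, and at least two processes are assumed:

```c
#include <mpi.h>
#include <stdio.h>

#define DATA_TAG 7   /* arbitrary nonnegative tag used to screen messages */

int main(int argc, char* argv[]) {
    int rank, msg;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        msg = 123;
        MPI_Send(&msg, 1, MPI_INT, 1, DATA_TAG, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Blocks until a message with source 0 and tag DATA_TAG arrives */
        MPI_Recv(&msg, 1, MPI_INT, 0, DATA_TAG, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("Rank 1 received %d\n", msg);
    }
    MPI_Finalize();
    return 0;
}
```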
16. The processes exchange partial results instead of using one-way communications. Such a communication pattern is sometimes called a ___________.
A. butterfly
B. broadcast
C. Data Movement
D. Synchronization
Answer» A. butterfly
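The butterfly exchange is the pattern commonly used inside MPI_Allreduce, which leaves the combined result on every process rather than on a single root; a minimal sketch:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[]) {
    int rank, total;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    /* Every process contributes its rank and every process gets the sum;
       implementations often realize this with a butterfly exchange. */
    MPI_Allreduce(&rank, &total, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    printf("Process %d sees total %d\n", rank, total);
    MPI_Finalize();
    return 0;
}
```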
17. A collective communication in which data belonging to a single process is sent to all of the processes in the communicator is called a _________.
A. Broadcast
B. Reduction
C. Scatter
D. Gather
Answer» A. Broadcast
18. In MPI, a ______________ can be used to represent any collection of data items in memory by storing both the types of the items and their relative locations in memory.
A. Allgather
B. derived datatype
C. displacement
D. beginning
Answer» B. derived datatype
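A sketch of building a derived datatype with MPI_Type_create_struct for a record holding an int and a double; as the question says, the type stores the item types plus their relative locations, which are computed here with MPI_Get_address. The struct layout and values are illustrative:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[]) {
    int rank;
    struct { int n; double x; } rec = {0, 0.0};
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Describe the struct: item types and their displacements from the start */
    int blocklens[2] = {1, 1};
    MPI_Datatype types[2] = {MPI_INT, MPI_DOUBLE};
    MPI_Aint base, disps[2];
    MPI_Get_address(&rec, &base);
    MPI_Get_address(&rec.n, &disps[0]);
    MPI_Get_address(&rec.x, &disps[1]);
    disps[0] -= base;
    disps[1] -= base;

    MPI_Datatype rec_type;
    MPI_Type_create_struct(2, blocklens, disps, types, &rec_type);
    MPI_Type_commit(&rec_type);

    if (rank == 0) { rec.n = 5; rec.x = 2.5; }
    MPI_Bcast(&rec, 1, rec_type, 0, MPI_COMM_WORLD);  /* one message, mixed types */
    printf("Process %d: n=%d x=%.1f\n", rank, rec.n, rec.x);

    MPI_Type_free(&rec_type);
    MPI_Finalize();
    return 0;
}
```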
19. MPI provides a function, ____________, that returns the number of seconds that have elapsed since some time in the past.
A. MPI_Wtime
B. MPI_Barrier
C. MPI_Scatter
D. MPI_Comm
Answer» A. MPI_Wtime
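A common timing idiom with MPI_Wtime, using MPI_Barrier so that all processes enter the timed region together; the work being timed is a placeholder:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[]) {
    int rank;
    double start, finish;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);   /* line everyone up before timing */
    start = MPI_Wtime();           /* seconds since some time in the past */
    /* ... code to be timed goes here ... */
    finish = MPI_Wtime();

    printf("Process %d elapsed %e seconds\n", rank, finish - start);
    MPI_Finalize();
    return 0;
}
```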
20. Programs that can maintain a constant efficiency without increasing the problem size are sometimes said to be _______________.
A. weakly scalable
B. strongly scalable
C. send_buf
D. recv_buf
Answer» B. strongly scalable
21. The idea that parallelism can be used to increase the size of the problem applies in ___________________.
A. Amdahl's Law
B. Gustafson-Barsis's Law
C. Newton's Law
D. Pascal's Law
Answer» B. Gustafson-Barsis's Law
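For reference, Gustafson-Barsis's law is usually stated as a scaled speedup, where the problem grows with the machine; a standard form, with s the serial fraction of the scaled workload:

```latex
% Gustafson-Barsis's law: scaled speedup on N processors,
% where s is the serial fraction of the (scaled) workload.
S(N) = N - s\,(N - 1)
```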
22. Synchronization is one of the common issues in parallel programming. The issues related to synchronization include the following, EXCEPT:
A. Deadlock
B. Livelock
C. Fairness
D. Correctness
Answer» D. Correctness
23. Considering whether to use weak or strong scaling is part of ______________ in addressing the challenges of distributed-memory programming.
A. Splitting the problem
B. Speeding up computations
C. Speeding up communication
D. Speeding up hardware
Answer» B. Speeding up computations
24. Which of the following is the BEST description of the Message Passing Interface (MPI)?
A. A specification of a shared memory library
B. MPI uses objects called communicators and groups to define which collection of processes may communicate with each other
C. Only communicators, and not groups, are accessible to the programmer by a "handle"
D. A communicator is an ordered set of processes
Answer» B. MPI uses objects called communicators and groups to define which collection of processes may communicate with each other