120+ Multi-core Architectures and Programming Solved MCQs

These multiple-choice questions (MCQs) are designed to enhance your knowledge and understanding in the following areas: Computer Science Engineering (CSE) and Programming Languages.

Chapters

Chapter: Shared Memory Programming with OpenMP
51.

When compiling an OpenMP program with gcc, what flag must be included?

A. -fopenmp
B. #pragma omp parallel
C. –o hello
D. ./openmp
Answer» A. -fopenmp
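
For reference, a minimal OpenMP "hello world" sketch showing the flag in use (the file name hello_omp.c is just for illustration):

    /* hello_omp.c -- compile with: gcc -fopenmp hello_omp.c -o hello */
    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        #pragma omp parallel
        printf("Hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
        return 0;
    }
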
52.

Within a parallel region, variables declared outside it are by default ________.

A. Private
B. Local
C. Loco
D. Shared
Answer» D. Shared
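
A small sketch of the distinction (note the nuance: variables declared before the region are shared by default, while variables declared inside the region's block are private to each thread):

    #include <omp.h>

    int main(void) {
        int x = 0;           /* declared before the region: shared by default */
        #pragma omp parallel
        {
            int y = x + 1;   /* declared inside the block: private per thread */
            (void) y;        /* silence unused-variable warnings */
        }
        return 0;
    }
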
53.

A ______________ construct by itself creates a “single program multiple data” program, i.e., each thread executes the same code.

A. Parallel
B. Section
C. Single
D. Master
Answer» A. Parallel
54.

_______________ specifies that the iterations of the loop must be executed in the order they would be in a serial program.

A. Nowait
B. Ordered
C. Collapse
D. for loops
Answer» B. Ordered
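
A minimal sketch of the ordered clause; square() here is just a stand-in for any per-iteration computation:

    #include <stdio.h>

    static int square(int i) { return i * i; }

    int main(void) {
        #pragma omp parallel for ordered
        for (int i = 0; i < 8; i++) {
            int v = square(i);     /* computed in parallel */
            #pragma omp ordered
            printf("%d\n", v);     /* printed in serial loop order */
        }
        return 0;
    }
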
55.

___________________ initializes each private copy with the corresponding value from the master thread.

A. Firstprivate
B. lastprivate
C. nowait
D. Private
Answer» A. Firstprivate
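
A short sketch of firstprivate, showing that each thread's copy starts from the master's value and that the original is left untouched:

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        int x = 42;
        #pragma omp parallel firstprivate(x)
        {
            x += omp_get_thread_num();   /* each private copy starts at 42 */
            printf("thread %d sees x = %d\n", omp_get_thread_num(), x);
        }
        printf("after the region, x = %d\n", x);   /* still 42 */
        return 0;
    }
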
56.

The __________________ of a parallel region extends the lexical extent by the code of functions that are called (directly or indirectly) from within the parallel region.

A. Lexical extent
B. Static extent
C. Dynamic extent
D. None of the above
Answer» C. Dynamic extent
57.

The ______________ specifies that the iterations of the for loop should be executed in parallel by multiple threads.

A. Sections construct
B. for pragma
C. Single construct
D. Parallel for construct
Answer» B. for pragma
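
A sketch showing the for pragma splitting a loop's iterations among an existing team of threads (compare the work-sharing rule in question 65):

    int main(void) {
        int a[100];
        #pragma omp parallel       /* create the team */
        {
            #pragma omp for        /* divide the iterations among the team */
            for (int i = 0; i < 100; i++)
                a[i] = 2 * i;
        }
        return (a[99] == 198) ? 0 : 1;
    }
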
58.

The _______________ function returns the number of threads that are currently active in the parallel region.

A. omp_get_num_procs ( )
B. omp_get_num_threads ( )
C. omp_get_thread_num ( )
D. omp_set_num_threads ( )
Answer» B. omp_get_num_threads ( )
59.

The size of the initial chunk is _____________.

A. total_no_of_iterations / max_threads
B. total_no_of_remaining_iterations / max_threads
C. total_no_of_iterations / No_threads
D. total_no_of_remaining_iterations / No_threads
Answer» A. total_no_of_iterations / max_threads
60.

A ____________ in OpenMP is just some text that modifies a directive.

A. data environment
B. clause
C. task
D. Master thread
Answer» B. clause
61.

In OpenMP, the collection of threads executing the parallel block (the original thread and the new threads) is called a ____________

A. team
B. executable code
C. implicit task
D. parallel constructs
Answer» A. team
62.

When a thread reaches a _____________ directive, it creates a team of threads and becomes the master of the team.

A. Synchronization
B. Parallel
C. Critical
D. Single
Answer» B. Parallel
63.

Use the _________ library function to determine if nested parallel regions are enabled.

A. omp_target()
B. omp_declare_target()
C. omp_target_data()
D. omp_get_nested()
Answer» D. omp_get_nested()
64.

The ____________ directive ensures that a specific memory location is updated atomically, rather than exposing it to the possibility of multiple, simultaneous writing threads.

A. Parallel
B. For
C. atomic
D. Sections
Answer» C. atomic
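
A minimal sketch of atomic protecting a single shared update:

    #include <stdio.h>

    int main(void) {
        int count = 0;
        #pragma omp parallel for
        for (int i = 0; i < 1000; i++) {
            #pragma omp atomic
            count += 1;            /* the read-modify-write is done atomically */
        }
        printf("count = %d\n", count);   /* reliably 1000 */
        return 0;
    }
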
65.

A ___________ construct must be enclosed within a parallel region in order for the directive to execute in parallel.

A. Parallel sections
B. Critical
C. Single
D. work-sharing
Answer» D. work-sharing
66.

____________ is a form of parallelization across multiple processors in parallel computing environments.

A. Work-Sharing Constructs
B. Data parallelism
C. Functional Parallelism
D. Handling loops
Answer» B. Data parallelism
67.

In OpenMP, assigning iterations to threads is called ________________

A. scheduling
B. Static
C. Dynamic
D. Guided
Answer» A. scheduling
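
Scheduling is requested with the schedule clause; a hedged sketch (work() is a placeholder for any per-iteration job):

    #include <stdio.h>

    static void work(int i) { printf("iteration %d\n", i); }

    int main(void) {
        /* static hands out fixed blocks of 4 iterations in round-robin order;
           dynamic or guided would instead assign chunks at run time */
        #pragma omp parallel for schedule(static, 4)
        for (int i = 0; i < 16; i++)
            work(i);
        return 0;
    }
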
68.

The ____________ is implemented more efficiently than a general parallel region containing possibly several loops.

A. Sections
B. Parallel Do/For
C. Parallel sections
D. Critical
Answer» B. Parallel Do/For
69.

_______________ causes no synchronization overhead and can maintain data locality when data fits in cache.

A. Guided
B. Auto
C. Runtime
D. Static
Answer» D. Static
70.

What does the difference between the logical view and the reality of parallel architectures primarily affect?

A. Performance
B. Latency
C. Bandwidth
D. Accuracy
Answer» A. Performance
71.

How many assembly instructions does the following C statement take? global_count += 5;

A. 4 instructions
B. 3 instructions
C. 5 instructions
D. 2 instructions
Answer» A. 4 instructions
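
The count of four treats the read-modify-write as separate machine steps. One plausible RISC-style expansion (illustrative only, not the output of any particular compiler) is sketched in the comments below:

    /* global_count += 5;  typically expands to something like:
     *   1. load the address of global_count into a register
     *   2. load the value stored at that address
     *   3. add 5 to the loaded value
     *   4. store the result back to global_count
     * Because these steps can interleave across threads, the update is
     * not atomic, which is why unsynchronized access is a race condition.
     */
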
Chapter: Distributed Memory Programming
72.

MPI specifies the functionality of _________________ communication routines.

A. High-level
B. Low-level
C. Intermediate-level
D. Expert-level
Answer» A. High-level
73.

_________________ generate log files of MPI calls.

A. mpicxx
B. mpilog
C. mpitrace
D. mpianim
Answer» B. mpilog
74.

A collective communication in which data belonging to a single process is sent to all of the processes in the communicator is called a ________________.

A. Scatter
B. Gather
C. Broadcast
D. Allgather
Answer» C. Broadcast
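
A minimal sketch of a broadcast with MPI_Bcast, assuming the usual mpicc/mpiexec toolchain:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        int rank, data = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) data = 100;      /* only the root holds the value */
        MPI_Bcast(&data, 1, MPI_INT, 0, MPI_COMM_WORLD);
        printf("process %d now has data = %d\n", rank, data);
        MPI_Finalize();
        return 0;
    }
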
75.

__________________ is a nonnegative integer that the destination can use to selectively screen messages.

A. Dest
B. Type
C. Address
D. length
Answer» B. Type
76.

The routine ________________ combines data from all processes (by adding them, in this case) and returns the result to a single process.

A. MPI_Reduce
B. MPI_Bcast
C. MPI_Finalize
D. MPI_Comm_size
Answer» A. MPI_Reduce
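
A short sketch of MPI_Reduce summing one value from each process onto rank 0:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        int rank, local, global = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        local = rank + 1;   /* each process contributes one value */
        MPI_Reduce(&local, &global, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0) printf("sum = %d\n", global);
        MPI_Finalize();
        return 0;
    }
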
77.

The easiest way to create communicators with new groups is with_____________.

A. MPI_Comm_rank
B. MPI_Comm_create
C. MPI_Comm_split
D. MPI_Comm_group
Answer» C. MPI_Comm_split
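
A sketch of MPI_Comm_split dividing MPI_COMM_WORLD into even- and odd-ranked halves:

    #include <mpi.h>

    int main(int argc, char *argv[]) {
        int rank;
        MPI_Comm half;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        /* color = rank % 2 separates even and odd ranks;
           key = rank keeps the original ordering within each half */
        MPI_Comm_split(MPI_COMM_WORLD, rank % 2, rank, &half);
        MPI_Comm_free(&half);
        MPI_Finalize();
        return 0;
    }
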
78.

_______________ is an object that holds information about the received message, including, for example, its actual count.

A. buff
B. count
C. tag
D. status
Answer» D. status
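
A sketch showing the status object in use, run with at least two processes (e.g., mpiexec -n 2). It also illustrates why question 81 below holds: the receiver's count argument is only an upper bound, and MPI_Get_count recovers the actual number received:

    #include <mpi.h>
    #include <stdio.h>

    #define MAX 100

    int main(int argc, char *argv[]) {
        int rank, count, buf[MAX] = {0};
        MPI_Status status;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            MPI_Send(buf, 10, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* 10 < MAX */
        } else if (rank == 1) {
            MPI_Recv(buf, MAX, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &status);
            MPI_Get_count(&status, MPI_INT, &count);
            printf("got %d ints from rank %d, tag %d\n",
                   count, status.MPI_SOURCE, status.MPI_TAG);
        }
        MPI_Finalize();
        return 0;
    }
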
79.

The _______________ operation similarly computes an element-wise reduction of vectors, but this time leaves the result scattered among the processes.

A. Reduce-scatter
B. Reduce (to-one)
C. Allreduce
D. None of the above
Answer» A. Reduce-scatter
80.

__________________ is the principal alternative to shared memory parallel programming.

A. Multiple passing
B. Message passing
C. Message programming
D. None of the above
Answer» B. Message passing
81.

________________ may complete even if fewer than count elements have been received.

A. MPI_Recv
B. MPI_Send
C. MPI_Get_count
D. MPI_Any_Source
Answer» A. MPI_Recv
82.

A ___________ is a script whose main purpose is to run some program. In this case, the program is the C compiler.

A. wrapper script
B. communication functions
C. wrapper simplifies
D. type definitions
Answer» A. wrapper script
83.

________________ returns in its second argument the number of processes in the communicator.

A. MPI_Init
B. MPI_Comm_size
C. MPI_Finalize
D. MPI_Comm_rank
Answer» B. MPI_Comm_size
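
The standard skeleton ties several of these questions together; it is compiled with the mpicc wrapper script (question 82) and run with, e.g., mpiexec -n 4:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        int size, rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* number of processes */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
        printf("process %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }
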
84.

_____________ always blocks until a matching message has been received.

A. MPI_TAG
B. MPI_SOURCE
C. MPI_Recv
D. MPI_ERROR
Answer» C. MPI_Recv
85.

Communication functions that involve all the processes in a communicator are called ___________

A. MPI_Get_count
B. collective communications
C. buffer the message
D. nonovertaking
Answer» B. collective communications
86.

MPI_Send and MPI_Recv are called _____________ communications.

A. Collective Communication
B. Tree-Structured Communication
C. point-to-point
D. Collective Computation
Answer» C. point-to-point
87.

The processes exchange partial results instead of using one-way communications. Such a communication pattern is sometimes called a ___________.

A. butterfly
B. broadcast
C. Data Movement
D. Synchronization
Answer» A. butterfly
88.

A collective communication in which data belonging to a single process is sent to all of the processes in the communicator is called a _________.

A. broadcast
B. reductions
C. Scatter
D. Gather
Answer» A. broadcast
89.

In MPI, a ______________ can be used to represent any collection of data items in memory by storing both the types of the items and their relative locations in memory.

A. Allgather
B. derived datatype
C. displacement
D. beginning
Answer» B. derived datatype
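
A hedged sketch of building a derived datatype for a three-field struct (the struct and function names are illustrative); it records each member's type and its displacement from the start of the struct:

    #include <mpi.h>

    typedef struct { double a; double b; int n; } input_t;

    /* Assumes MPI is already initialized. */
    void build_input_type(input_t *in, MPI_Datatype *new_type) {
        int          blocklens[3] = {1, 1, 1};
        MPI_Datatype types[3]     = {MPI_DOUBLE, MPI_DOUBLE, MPI_INT};
        MPI_Aint     displs[3], base;

        MPI_Get_address(in, &base);
        MPI_Get_address(&in->a, &displs[0]);
        MPI_Get_address(&in->b, &displs[1]);
        MPI_Get_address(&in->n, &displs[2]);
        for (int i = 0; i < 3; i++)
            displs[i] -= base;               /* relative locations in memory */

        MPI_Type_create_struct(3, blocklens, displs, types, new_type);
        MPI_Type_commit(new_type);
    }

    int main(int argc, char *argv[]) {
        input_t in = {0.0, 1.0, 1024};
        MPI_Datatype t;
        MPI_Init(&argc, &argv);
        build_input_type(&in, &t);
        MPI_Type_free(&t);
        MPI_Finalize();
        return 0;
    }
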
90.

MPI provides a function, ____________ that returns the number of seconds that have elapsed since some time in the past.

A. MPI_Wtime
B. MPI_Barrier
C. MPI_Scatter
D. MPI_Comm
Answer» A. MPI_Wtime
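
A minimal timing sketch; the barrier lines the processes up so the elapsed time is not skewed by stragglers entering late:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        MPI_Init(&argc, &argv);
        MPI_Barrier(MPI_COMM_WORLD);           /* synchronize before timing */
        double start = MPI_Wtime();
        /* ... code being timed goes here ... */
        double elapsed = MPI_Wtime() - start;
        printf("elapsed: %f seconds\n", elapsed);
        MPI_Finalize();
        return 0;
    }
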
91.

Programs that can maintain a constant efficiency without increasing the problem size are sometimes said to be _______________.

A. weakly scalable
B. strongly scalable
C. send_buf
D. recv_buf
Answer» B. strongly scalable
92.

The idea that parallelism can be used to increase the (parallel) size of the problem is applicable in ___________________.

A. Amdahl's Law
B. Gustafson-Barsis's Law
C. Newton's Law
D. Pascal's Law
Answer» B. Gustafson-Barsis's Law
93.

Synchronization is one of the common issues in parallel programming. The issues related to synchronization include the following, EXCEPT:

A. Deadlock
B. Livelock
C. Fairness
D. Correctness
Answer» D. Correctness
94.

Considering whether to use weak or strong scaling is part of ______________ in addressing the challenges of distributed memory programming.

A. Splitting the problem
B. Speeding up computations
C. Speeding up communication
D. Speeding up hardware
Answer» B. Speeding up computations
95.

Which of the following is the BEST description of the Message Passing Interface (MPI)?

A. A specification of a shared memory library
B. MPI uses objects called communicators and groups to define which collection of processes may communicate with each other
C. Only communicators and not groups are accessible to the programmer only by a "handle"
D. A communicator is an ordered set of processes
Answer» B. MPI uses objects called communicators and groups to define which collection of processes may communicate with each other
Chapter: Parallel Program Development
96.

An n-body solver is a ___________ that finds the solution to an n-body problem by simulating the behaviour of the particles.

A. Program
B. Particle
C. Programmer
D. All of the above
Answer» A. Program
97.

The set of NP-complete problems is often denoted by ____________

A. NP-C
B. NP-C or NPC
C. NPC
D. None of the above
Answer» B. NP-C or NPC
98.

Pthreads has a nonblocking version of pthread_mutex_lock called __________

A. pthread_mutex_lock
B. pthread_mutex_trylock
C. pthread_mutex_acquirelock
D. pthread_mutex_releaselock
Answer» B. pthread_mutex_trylock
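
A sketch of the difference (compile with -pthread): trylock returns immediately instead of blocking, so the thread can do other work while the lock is busy:

    #include <pthread.h>

    static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        (void) arg;
        if (pthread_mutex_trylock(&mutex) == 0) {
            /* acquired the lock without blocking */
            pthread_mutex_unlock(&mutex);
        } else {
            /* lock was busy: do useful work instead of waiting */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);
        pthread_join(t, NULL);
        return 0;
    }
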
99.

What are the algorithms for identifying which subtrees we assign to the processes or threads?

A. breadth-first search
B. depth-first search
C. Both depth-first search and breadth-first search
D. None of the above
Answer» C. Both depth-first search and breadth-first search
100.

What are the scoping clauses in OpenMP?

A. Shared Variables & Private Variables
B. Shared Variables
C. Private Variables
D. None of the above
Answer» A. Shared Variables & Private Variables
