Chapter: Multi-core Processors
1.

A collection of lines that connects several devices is called ______________

A. bus
B. peripheral connection wires
C. Both a and b
D. internal wires
Answer» A. bus
2.

The PC (Program Counter) is also called ____________

A. instruction pointer
B. memory pointer
C. data counter
D. file pointer
Answer» A. instruction pointer
3.

Which MIMD systems are best scalable with respect to the number of processors?

A. Distributed memory computers
B. ccNUMA systems
C. nccNUMA systems
D. Symmetric multiprocessors
Answer» A. Distributed memory computers
4.

Cache coherence: For which shared (virtual) memory systems is the snooping protocol suited?

A. Crossbar connected systems
B. Systems with hypercube network
C. Systems with butterfly network
D. Bus based systems
Answer» D. Bus based systems
5.

The idea of cache memory is based ______

A. on the property of locality of reference
B. on the heuristic 90-10 rule
C. on the fact that references generally tend to cluster
D. all of the above
Answer» A. on the property of locality of reference
6.

When the number of switch ports is equal to or larger than the number of devices, this simple network is referred to as ______________

A. Crossbar
B. Crossbar switch
C. Switching
D. Both a and b
Answer» D. Both a and b
7.

A remote node is a node that has a copy of a ______________

A. Home block
B. Guest block
C. Remote block
D. Cache block
Answer» D. Cache block
8.

A pipeline is like _______________

A. an automobile assembly line
B. house pipeline
C. both a and b
D. a gas line
Answer» A. an automobile assembly line
9.

Which cache miss does not occur in case of a fully associative cache?

A. Conflict miss
B. Capacity miss
C. Compulsory miss
D. Cold start miss
Answer» A. Conflict miss
10.

Bus switches are present in ____________

A. bus window technique
B. crossbar switching
C. linked input/output
D. shared bus
Answer» B. crossbar switching
11.

Systems that do not have parallel processing capabilities are ______________

A. SISD
B. MIMD
C. SIMD
D. MISD
Answer» A. SISD
12.

Parallel programs: Which speedup could be achieved, according to Amdahl's law, for an infinite number of processors if 5% of a program is sequential and the remaining part is ideally parallel?

A. 10
B. 20
C. 30
D. 40
Answer» B. 20
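Worked check: with sequential fraction s = 0.05, Amdahl's law gives

    Speedup(N) = 1 / (s + (1 - s)/N)  -->  1/s = 1/0.05 = 20  as N -> infinity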
13.

SIMD represents an organization that ______________

A. Includes many processing units under the supervision of a common control unit
B. vector supercomputer and MIMD systems
C. logic behind pipelining an instruction as observe
D. receive an instruction from the controlling unit
Answer» A. Includes many processing units under the supervision of a common control unit
14.

Cache memory works on the principle of ____________

A. communication links
B. Locality of reference
C. Bisection bandwidth
D. average access time
Answer» B. Locality of reference
15.

In a shared-bus architecture, the number of processors that may perform a bus cycle (to fetch data or instructions) at any one time is ________________

A. One Processor
B. Two Processor
C. Multi-Processor
D. None of the above
Answer» A. One Processor
16.

The alternative to a snooping-based coherence protocol is called a ____________

A. Write invalidate protocol
B. Snooping protocol
C. Directory protocol
D. Write update protocol
Answer» C. Directory protocol
17.

If no node has a copy of a cache block, its state is known as ______

A. Cached
B. Un-cached
C. Shared data
D. Valid data
Answer» B. Un-cached
18.

The requesting node is sent the requested data from memory, and the requestor is made the only sharing node. This transaction is known as a ________.

A. Read miss
B. Write miss
C. Invalidate
D. Fetch
Answer» A. Read miss
19.

A processor fetching or decoding one instruction while another instruction is executing is called ______.

A. Direct interconnects
B. Indirect interconnects
C. Pipelining
D. Uniform Memory Access
Answer» C. Pipelining
20.

All nodes in each dimension form a linear array in the __________.

A. Star topology
B. Ring topology
C. Connect topology
D. Mesh topology
Answer» D. Mesh topology
21.

The concept of pipelining is most effective in improving performance if the tasks being performed in different stages :

A. require different amounts of time
B. require about the same amount of time
C. require different amounts of time, with the time difference between any two tasks being the same
D. require different amounts of time, with the time difference between any two tasks being different
Answer» B. require about the same amount of time
22.

The expression 'delayed load' is used in the context of

A. processor-printer communication
B. memory-monitor communication
C. pipelining
D. none of the above
Answer» C. pipelining
23.

During the execution of the instructions, a copy of the instructions is placed in the ______.

A. Register
B. RAM
C. System heap
D. Cache
Answer» D. Cache
Chapter: Parallel Program Challenges
24.

The producer-consumer problem can be solved using _____________

A. semaphores
B. event counters
C. monitors
D. All of the above
Answer» D. All of the above
25.

A situation where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which access takes place is called:

A. data consistency
B. race condition
C. aging
D. starvation
Answer» B. race condition
26.

The segment of code in which the process may change common variables, update tables, or write into files is known as:

A. program
B. critical section
C. non – critical section
D. synchronizing
Answer» B. critical section
27.

All deadlocks involve conflicting needs for __________

A. Resources
B. Users
C. Computers
D. Programs
Answer» A. Resources
28.

___________ are used for signaling among processes and can be readily used to enforce a mutual exclusion discipline.

A. Semaphores
B. Messages
C. Monitors
D. Addressing
Answer» A. Semaphores
29.

To avoid deadlock ____________

A. there must be a fixed number of resources to allocate
B. resource allocation must be done only once
C. all deadlocked processes must be aborted
D. inversion technique can be used
Answer» A. there must be a fixed number of resources to allocate
30.

A minimum of _____ variable(s) is/are required to be shared between processes to solve the critical section problem.

A. one
B. two
C. three
D. four
Answer» B. two
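For illustration, Peterson's classic two-process solution uses exactly two shared variables: a flag array and a turn variable. A minimal C sketch (textbook form; real hardware would additionally need memory fences or atomics):

    int flag[2] = {0, 0};   // flag[i] == 1 means process i wants to enter
    int turn = 0;           // which process must yield

    void enter_critical_section(int i) {   // i is 0 or 1
        int other = 1 - i;
        flag[i] = 1;                       // announce intent
        turn = other;                      // give the other process priority
        while (flag[other] && turn == other)
            ;                              // busy-wait
    }

    void leave_critical_section(int i) {
        flag[i] = 0;
    }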
31.

Spinlocks are intended to provide __________ only.

A. Mutual Exclusion
B. Bounded Waiting
C. Aging
D. Progress
Answer» A. Mutual Exclusion
32.

To ensure difficulties do not arise in the readers-writers problem, _______ are given exclusive access to the shared object.

A. readers
B. writers
C. readers and writers
D. none of the above
Answer» B. writers
33.

If a process is executing in its critical section, then no other processes can be executing in their critical sections. This condition is called ___________.

A. Out-of-order execution
B. Hardware prefetching
C. Software prefetching
D. mutual exclusion
Answer» D. mutual exclusion
34.

A semaphore is a shared integer variable ____________.

A. lightweight process
B. that cannot drop below zero
C. program counter
D. stack space
Answer» B. that cannot drop below zero
35.

A critical section is a program segment ______________.

A. where shared resources are accessed
B. single thread of execution
C. improves concurrency in multi-core system
D. Lower resource consumption
Answer» A. where shared resources are accessed
36.

A counting semaphore was initialized to 10. Then 6 P (wait) operations and 4 V (signal) operations were completed on this semaphore. The resulting value of the semaphore is ___________

A. 4
B. 6
C. 9
D. 8
Answer» D. 8
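Worked check: each P (wait) decrements the semaphore and each V (signal) increments it, so the final value is 10 - 6 + 4 = 8.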
37.

A system has 3 processes sharing 4 resources. If each process needs a maximum of 2 units, then _____________

A. Better system utilization
B. deadlock can never occur
C. Responsiveness
D. Faster execution
Answer» B. deadlock can never occur
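Worked check: in the worst case each of the 3 processes holds 1 unit, using 3 of the 4 units and leaving 1 free; that unit lets some process reach its maximum of 2, finish, and release its units, so circular wait can never form (equivalently, the total demand 3 x 2 = 6 is less than 4 + 3 = 7).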
38.

_____________ refers to the ability of multiple processes (or threads) to share code, resources, or data in such a way that only one process has access to the shared object at a time.

A. Readers_writer locks
B. Barriers
C. Semaphores
D. Mutual Exclusion
Answer» D. Mutual Exclusion
39.

____________ is the ability of multiple processes to coordinate their activities by the exchange of information.

A. Deadlock
B. Synchronization
C. Mutual Exclusion
D. Cache
Answer» B. Synchronization
40.

When paths allow an unbounded number of nonminimal hops from packet sources, the situation is referred to as __________.

A. Livelock
B. Deadlock
C. Synchronization
D. Mutual Exclusion
Answer» A. Livelock
41.

Let S and Q be two semaphores initialized to 1. Processes P0 and P1 execute the statements wait(S); wait(Q); ...; signal(S); signal(Q); and wait(Q); wait(S); ...; signal(Q); signal(S); respectively. The above situation depicts a _________.

A. Livelock
B. Critical Section
C. Deadlock
D. Mutual Exclusion
Answer» C. Deadlock
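A minimal sketch of this situation with POSIX semaphores (assuming both are initialized to 1 with sem_init): if P0 acquires S while P1 acquires Q, each then blocks waiting for the semaphore the other holds.

    #include <semaphore.h>

    sem_t S, Q;   // assume: sem_init(&S, 0, 1); sem_init(&Q, 0, 1);

    void *p0(void *arg) {
        sem_wait(&S);        // P0 holds S...
        sem_wait(&Q);        // ...and blocks here if P1 already holds Q
        /* critical section */
        sem_post(&S);
        sem_post(&Q);
        return NULL;
    }

    void *p1(void *arg) {
        sem_wait(&Q);        // P1 holds Q...
        sem_wait(&S);        // ...and blocks here because P0 holds S
        /* critical section */
        sem_post(&Q);
        sem_post(&S);
        return NULL;
    }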
42.

Which of the following conditions must be satisfied to solve the critical section problem?

A. Mutual Exclusion
B. Progress
C. Bounded Waiting
D. All of the mentioned
Answer» D. All of the mentioned
43.

Mutual exclusion implies that ____________.

A. if a process is executing in its critical section, then no other process must be executing in their critical sections
B. if a process is executing in its critical section, then other processes must be executing in their critical sections
C. if a process is executing in its critical section, then all the resources of the system must be blocked until it finishes execution
D. none of the mentioned
Answer» A. if a process is executing in its critical section, then no other process must be executing in their critical sections
44.

Bounded waiting implies that there exists a bound on the number of times a process is allowed to enter its critical section ____________.

A. after a process has made a request to enter its critical section and before the request is granted
B. when another process is in its critical section
C. before a process has made a request to enter its critical section
D. none of the mentioned
Answer» A. after a process has made a request to enter its critical section and before the request is granted
45.

What are the two atomic operations permissible on semaphores?

A. Wait
B. Stop
C. Hold
D. none of the mentioned
Answer» A. Wait
46.

What are Spinlocks?

A. CPU cycles wasting locks over critical sections of programs
B. Locks that avoid time wastage in context switches
C. Locks that work better on multiprocessor systems
D. All of the mentioned
Answer» D. All of the mentioned
47.

What is the main disadvantage of spinlocks?

A. they are not sufficient for many process
B. they require busy waiting
C. they are unreliable sometimes
D. they are too complex for programmers
Answer» B. they require busy waiting
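A minimal C11 sketch of a spinlock, showing the busy waiting the answer refers to:

    #include <stdatomic.h>

    atomic_flag lock = ATOMIC_FLAG_INIT;

    void spin_lock(void) {
        while (atomic_flag_test_and_set(&lock))
            ;   // busy-wait: the CPU spins (wasting cycles) until the flag clears
    }

    void spin_unlock(void) {
        atomic_flag_clear(&lock);
    }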
48.

The signal operation of the semaphore basically works on the basic _______ system call.

A. continue()
B. wakeup()
C. getup()
D. start()
Answer» B. wakeup()
49.

If the semaphore value is negative ____________.

A. its magnitude is the number of processes waiting on that semaphore
B. it is invalid
C. no operation can be further performed on it until the signal operation is performed on it
D. none of the mentioned
Answer» A. its magnitude is the number of processes waiting on that semaphore
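A textbook-style sketch of such a semaphore (queue_t, block_on, and wakeup_one are hypothetical helpers, not a real library API):

    typedef struct {
        int value;          // negative value: |value| processes are waiting
        queue_t waiters;    // hypothetical queue of blocked processes
    } semaphore;

    void wait_op(semaphore *s) {        // P operation
        s->value--;
        if (s->value < 0)
            block_on(&s->waiters);      // hypothetical: suspend the caller
    }

    void signal_op(semaphore *s) {      // V operation
        s->value++;
        if (s->value <= 0)
            wakeup_one(&s->waiters);    // hypothetical: resume one waiter
    }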
Chapter: Shared Memory Programming with OpenMP
50.

Which directive must precede the directive: #pragma omp sections (not necessarily immediately)?

A. #pragma omp section
B. #pragma omp parallel
C. None
D. #pragma omp master
Answer» B. #pragma omp parallel
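A minimal sketch of the required nesting: the parallel directive encloses the sections construct, while section directives appear inside it:

    #pragma omp parallel      // must precede (enclose) the sections construct
    {
        #pragma omp sections
        {
            #pragma omp section
            { /* task A */ }
            #pragma omp section
            { /* task B */ }
        }
    }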
51.

When compiling an OpenMP program with gcc, what flag must be included?

A. -fopenmp
B. #pragma omp parallel
C. –o hello
D. ./openmp
Answer» A. -fopenmp
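For example, assuming a source file named hello.c:

    gcc -fopenmp hello.c -o hello
    ./hello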
52.

Within a parallel region, variables declared outside it are by default ________.

A. Private
B. Local
C. Loco
D. Shared
Answer» D. Shared
53.

A ______________ construct by itself creates a “single program multiple data” program, i.e., each thread executes the same code.

A. Parallel
B. Section
C. Single
D. Master
Answer» A. Parallel
54.

_______________ specifies that the iterations of the loop must be executed in the same order as they would be in a serial program.

A. Nowait
B. Ordered
C. Collapse
D. for loops
Answer» B. Ordered
55.

___________________ initializes each private copy with the corresponding value from the master thread.

A. Firstprivate
B. lastprivate
C. nowait
D. Private
Answer» A. Firstprivate
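A minimal sketch (a fragment inside a function; omp.h assumed included):

    int x = 42;                          // value set by the master thread
    #pragma omp parallel firstprivate(x)
    {
        x += omp_get_thread_num();       // each thread modifies its own private copy
    }
    // the original x is still 42 here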
56.

The __________________ of a parallel region extends the lexical extent by the code of functions that are called (directly or indirectly) from within the parallel region.

A. Lexical extent
B. Static extent
C. Dynamic extent
D. None of the above
Answer» C. Dynamic extent
57.

The ______________ specifies that the iterations of the for loop should be executed in parallel by multiple threads.

A. Sections construct
B. for pragma
C. Single construct
D. Parallel for construct
Answer» B. for pragma
58.

The _______________ function returns the number of threads currently active in the parallel region.

A. omp_get_num_procs ( )
B. omp_get_num_threads ( )
C. omp_get_thread_num ( )
D. omp_set_num_threads ( )
Answer» B. omp_get_num_threads ( )
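A minimal example:

    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        #pragma omp parallel
        {
            if (omp_get_thread_num() == 0)   // let one thread report
                printf("%d threads in the team\n", omp_get_num_threads());
        }
        return 0;
    }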
59.

The size of the initial chunk is _____________.

A. total_no_of_iterations / max_threads
B. total_no_of_remaining_iterations / max_threads
C. total_no_of_iterations / No_threads
D. total_no_of_remaining_iterations / No_threads
Answer» A. total_no_of_iterations / max_threads
60.

A ____________ in OpenMP is just some text that modifies a directive.

A. data environment
B. clause
C. task
D. Master thread
Answer» B. clause
61.

In OpenMP, the collection of threads executing the parallel block (the original thread and the new threads) is called a ____________

A. team
B. executable code
C. implicit task
D. parallel constructs
Answer» A. team
62.

When a thread reaches a _____________ directive, it creates a team of threads and becomes the master of the team.

A. Synchronization
B. Parallel
C. Critical
D. Single
Answer» B. Parallel
63.

Use the _________ library function to determine if nested parallel regions are enabled.

A. Omp_target()
B. Omp_declare target()
C. Omp_target data()
D. omp_get_nested()
Answer» D. omp_get_nested()
64.

The ____________ directive ensures that a specific memory location is updated atomically, rather than exposing it to the possibility of multiple, simultaneous writing threads.

A. Parallel
B. For
C. atomic
D. Sections
Answer» C. atomic
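A minimal sketch:

    int counter = 0;
    #pragma omp parallel
    {
        #pragma omp atomic
        counter++;    // the read-modify-write is performed atomically
    }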
65.

A ___________ construct must be enclosed within a parallel region in order for the directive to execute in parallel.

A. Parallel sections
B. Critical
C. Single
D. work-sharing
Answer» D. work-sharing
66.

____________ is a form of parallelization across multiple processors in parallel computing environments.

A. Work-Sharing Constructs
B. Data parallelism
C. Functional Parallelism
D. Handling loops
Answer» B. Data parallelism
67.

In OpenMP, assigning iterations to threads is called ________________

A. scheduling
B. Static
C. Dynamic
D. Guided
Answer» A. scheduling
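For example, the schedule clause controls this assignment (work() and n are hypothetical):

    #pragma omp parallel for schedule(static, 4)   // fixed chunks of 4 iterations
    for (int i = 0; i < n; i++)
        work(i);

    #pragma omp parallel for schedule(dynamic)     // threads grab chunks at run time
    for (int i = 0; i < n; i++)
        work(i);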
68.

The ____________ is implemented more efficiently than a general parallel region containing possibly several loops.

A. Sections
B. Parallel Do/For
C. Parallel sections
D. Critical
Answer» B. Parallel Do/For
69.

_______________ causes no synchronization overhead and can maintain data locality when data fits in cache.

A. Guided
B. Auto
C. Runtime
D. Static
Answer» D. Static
70.

How does the difference between the logical view and the reality of parallel architectures affect parallelization?

A. Performance
B. Latency
C. Bandwidth
D. Accuracy
Answer» A. Performance
71.

How many assembly instructions does the following C statement take? global_count += 5;

A. 4 instructions
B. 3 instructions
C. 5 instructions
D. 2 instructions
Answer» A. 4 instructions
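One plausible breakdown on a load/store (RISC-style) machine, consistent with this count (the exact number is compiler- and ISA-dependent):

    1. load the address of global_count into a register
    2. load the value of global_count into a register
    3. add 5 to that register
    4. store the result back to global_count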
Chapter: Distributed Memory Programming
72.

MPI specifies the functionality of _________________ communication routines.

A. High-level
B. Low-level
C. Intermediate-level
D. Expert-level
Answer» A. High-level
73.

_________________ generates log files of MPI calls.

A. mpicxx
B. mpilog
C. mpitrace
D. mpianim
Answer» B. mpilog
74.

A collective communication in which data belonging to a single process is sent to all of the processes in the communicator is called a ________________.

A. Scatter
B. Gather
C. Broadcast
D. Allgather
Answer» C. Broadcast
75.

__________________ is a nonnegative integer that the destination can use to selectively screen messages.

A. Dest
B. Type
C. Address
D. length
Answer» B. Type
76.

The routine ________________ combines data from all the processes (by adding them, in this case) and returns the result to a single process.

A. MPI_Reduce
B. MPI_Bcast
C. MPI_Finalize
D. MPI_Comm_size
Answer» A. MPI_Reduce
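A minimal fragment (rank is the process rank obtained from MPI_Comm_rank):

    int local_val = rank;   // assumed per-process value
    int global_sum = 0;
    MPI_Reduce(&local_val, &global_sum, 1, MPI_INT, MPI_SUM,
               0, MPI_COMM_WORLD);   // rank 0 receives the sum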
77.

The easiest way to create communicators with new groups is with _____________.

A. MPI_Comm_rank
B. MPI_Comm_create
C. MPI_Comm_split
D. MPI_Comm_group
Answer» C. MPI_Comm_split
78.

_______________ is an object that holds information about the received message, including, for example, its actual count.

A. buff
B. count
C. tag
D. status
Answer» D. status
79.

The _______________ operation similarly computes an element-wise reduction of vectors, but this time leaves the result scattered among the processes.

A. Reduce-scatter
B. Reduce (to-one)
C. Allreduce
D. None of the above
Answer» A. Reduce-scatter
80.

__________________ is the principal alternative to shared memory parallel programming.

A. Multiple passing
B. Message passing
C. Message programming
D. None of the above
Answer» B. Message passing
81.

________________ may complete even if fewer than count elements have been received.

A. MPI_Recv
B. MPI_Send
C. MPI_Get_count
D. MPI_Any_Source
Answer» A. MPI_Recv
82.

A ___________ is a script whose main purpose is to run some program. In this case, the program is the C compiler.

A. wrapper script
B. communication functions
C. wrapper simplifies
D. type definitions
Answer» A. wrapper script
83.

________________ returns in its second argument the number of processes in the communicator.

A. MPI_Init
B. MPI_Comm_size
C. MPI_Finalize
D. MPI_Comm_rank
Answer» B. MPI_Comm_size
84.

_____________ always blocks until a matching message has been received.

A. MPI_TAG
B. MPI_SOURCE
C. MPI_Recv
D. MPI_ERROR
Answer» C. MPI_Recv
85.

Communication functions that involve all the processes in a communicator are called ___________

A. MPI_Get_count
B. collective communications
C. buffer the message
D. nonovertaking
Answer» B. collective communications
86.

MPI_Send and MPI_Recv are called _____________ communications.

A. Collective Communication
B. Tree-Structured Communication
C. point-to-point
D. Collective Computation
Answer» C. point-to-point
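A minimal point-to-point fragment (rank obtained from MPI_Comm_rank):

    if (rank == 0) {                     // sender
        int msg = 99;
        MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {              // receiver: blocks until the message arrives
        int msg;
        MPI_Status status;
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
    }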
87.

The processes exchange partial results instead of using one-way communications. Such a communication pattern is sometimes called a ___________.

A. butterfly
B. broadcast
C. Data Movement
D. Synchronization
Answer» A. butterfly
88.

A collective communication in which data belonging to a single process is sent to all of the processes in the communicator is called a _________.

A. broadcast
B. reductions
C. Scatter
D. Gather
Answer» A. broadcast
89.

In MPI, a ______________ can be used to represent any collection of data items in memory by storing both the types of the items and their relative locations in memory.

A. Allgather
B. derived datatype
C. displacement
D. beginning
Answer» B. derived datatype
90.

MPI provides a function, ____________ that returns the number of seconds that have elapsed since some time in the past.

A. MPI_Wtime
B. MPI_Barrier
C. MPI_Scatter
D. MPI_Comm
Answer» A. MPI_Wtime
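Typical usage:

    double start = MPI_Wtime();
    /* ... code being timed ... */
    double elapsed = MPI_Wtime() - start;   // elapsed wall-clock seconds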
91.

Programs that can maintain a constant efficiency without increasing the problem size are sometimes said to be _______________.

A. weakly scalable
B. strongly scalable
C. send_buf
D. recv_buf
Answer» B. strongly scalable
92.

The idea that parallelism can be used to increase the (parallel) size of the problem is applicable in ___________________.

A. Amdahl's Law
B. Gustafson-Barsis's Law
C. Newton's Law
D. Pascal's Law
Answer» B. Gustafson-Barsis's Law
93.

Synchronization is one of the common issues in parallel programming. The issues related to synchronization include the following, EXCEPT:

A. Deadlock
B. Livelock
C. Fairness
D. Correctness
Answer» D. Correctness
94.

Considering whether to use weak or strong scaling is part of ______________ in addressing the challenges of distributed memory programming.

A. Splitting the problem
B. Speeding up computations
C. Speeding up communication
D. Speeding up hardware
Answer» B. Speeding up computations
95.

Which of the following is the BEST description of the Message Passing Interface (MPI)?

A. A specification of a shared memory library
B. MPI uses objects called communicators and groups to define which collection of processes may communicate with each other
C. Only communicators and not groups are accessible to the programmer only by a "handle"
D. A communicator is an ordered set of processes
Answer» B. MPI uses objects called communicators and groups to define which collection of processes may communicate with each other
Chapter: Parallel Program Development
96.

An n-body solver is a ___________ that finds the solution to an n-body problem by simulating the behaviour of the particles.

A. Program
B. Particle
C. Programmer
D. All of the above
Answer» A. Program
97.

The set of NP-complete problems is often denoted by ____________

A. NP-C
B. NP-C or NPC
C. NPC
D. None of the above
Answer» B. NP-C or NPC
98.

Pthreads has a nonblocking version of pthread_mutex_lock called __________

A. pthread_mutex_lock
B. pthread_mutex_trylock
C. pthread_mutex_acquirelock
D. pthread_mutex_releaselock
Answer» B. pthread_mutex_trylock
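A minimal sketch: unlike pthread_mutex_lock, the trylock variant returns immediately with a nonzero code (EBUSY) when the mutex is already held:

    #include <pthread.h>

    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

    void try_work(void) {
        if (pthread_mutex_trylock(&m) == 0) {   // acquired without blocking
            /* critical section */
            pthread_mutex_unlock(&m);
        } else {
            /* mutex busy: do other work instead of blocking */
        }
    }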
99.

What are the algorithms for identifying which subtrees to assign to the processes or threads? __________

A. breadth-first search
B. depth-first search
C. depth-first search and breadth-first search
D. None of the above
Answer» C. depth-first search and breadth-first search
100.

What are the scoping clauses in OpenMP? _________

A. Shared Variables & Private Variables
B. Shared Variables
C. Private Variables
D. None of the above
Answer» A. Shared Variables & Private Variables