110+ Parallel Computing Solved MCQs

These multiple-choice questions (MCQs) are designed to enhance your knowledge and understanding in the following areas: Master of Science in Computer Science (MSc CS) and common topics in competitive and entrance exams.

51.

Parallel processing may occur

A. in the instruction stream
B. in the data stream
C. both [a] and [b]
D. none of the above
Answer» C. both [a] and [b]
52.

The cost of parallel processing is primarily determined by:

A. time complexity
B. switching complexity
C. circuit complexity
D. none of the above
Answer» C. circuit complexity
53.

An instruction to provide a small delay in a program is

A. lda
B. nop
C. bea
D. none of the above
Answer» B. nop
54.

A characteristic of the RISC (Reduced Instruction Set Computer) instruction set is

A. three instructions per cycle
B. two instructions per cycle
C. one instruction per cycle
D. none of the above
Answer» C. one instruction per cycle
55.

In the daisy-chaining priority method, all the devices that can request an interrupt are connected in

A. parallel
B. serial
C. random
D. none of the above
Answer» B. serial
56.

Which one of the following is a characteristic of CISC (Complex Instruction Set Computer)?

A. fixed format instructions
B. variable format instructions
C. instructions are executed by hardware
D. none of the above
Answer» B. variable format instructions
57.

During the execution of the instructions, a copy of the instructions is placed in the ______ .

A. register
B. ram
C. system heap
D. cache
Answer» D. cache
58.

Two processors A and B have clock frequencies of 700 MHz and 900 MHz respectively. Suppose A can execute an instruction in an average of 3 steps and B in an average of 5 steps. For the execution of the same instruction, which processor is faster?

A. a
B. b
C. both take the same time
D. insufficient information
Answer» A. a
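The answer follows from a one-line calculation: the time per instruction is the average number of steps divided by the clock frequency. A minimal Python check of the arithmetic:

```python
# Worked check for Q58: time per instruction = steps / clock frequency.
freq_a, steps_a = 700e6, 3   # processor A: 700 MHz, 3 steps on average
freq_b, steps_b = 900e6, 5   # processor B: 900 MHz, 5 steps on average

time_a = steps_a / freq_a    # ~4.29 ns
time_b = steps_b / freq_b    # ~5.56 ns

print(f"A: {time_a * 1e9:.2f} ns, B: {time_b * 1e9:.2f} ns")
# A: 4.29 ns, B: 5.56 ns -> A finishes first, so option A is correct.
```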
59.

A processor performing fetch or decode of a different instruction during the execution of another instruction is called ______.

A. super-scaling
B. pipe-lining
C. parallel computation
D. none of these
Answer» B. pipe-lining
60.

For a given FINITE number of instructions to be executed, which processor architecture provides faster execution?

A. isa
B. ansa
C. super-scalar
D. all of the above
Answer» C. super-scalar
61.

The clock rate of the processor can be improved by,

A. improving the ic technology of the logic circuits
B. reducing the amount of processing done in one step
C. using the overclocking method
D. all of the above
Answer» D. all of the above
62.

An optimizing compiler does,

A. better compilation of the given piece of code.
B. takes advantage of the type of processor and reduces its process time.
C. does better memory management.
D. both a and c
Answer» B. takes advantage of the type of processor and reduces its process time.
63.

The ultimate goal of a compiler is to,

A. reduce the clock cycles for a programming task.
B. reduce the size of the object code.
C. be versatile.
D. be able to detect even the smallest of errors.
Answer» A. reduce the clock cycles for a programming task.
64.

SPEC stands for,

A. standard performance evaluation code.
B. system processing enhancing code.
C. system performance evaluation corporation.
D. standard processing enhancement corporation.
Answer» C. system performance evaluation corporation.
65.

As of 2000, the reference system to find the performance of a system is _____ .

A. ultra sparc 10
B. sun sparc
C. sun ii
D. none of these
Answer» A. ultra sparc 10
66.

The average number of steps taken to execute the set of instructions can be made less than one by following _______.

A. isa
B. pipe-lining
C. super-scaling
D. sequential
Answer» C. super-scaling
67.

If a processor clock is rated as 1250 million cycles per second, then its clock period is ________.

A. 1.9 * 10^-10 sec
B. 1.6 * 10^-9 sec
C. 1.25 * 10^-10 sec
D. 8 * 10^-10 sec
Answer» D. 8 * 10^-10 sec
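The clock period is simply the reciprocal of the clock rate, which a quick Python check confirms:

```python
# Worked check for Q67: clock period = 1 / clock rate.
clock_rate = 1250e6          # 1250 million cycles per second = 1.25 GHz
period = 1 / clock_rate      # seconds per cycle
print(f"{period:.1e} s")     # 8.0e-10 s, i.e. 8 * 10^-10 sec (option D)
```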
68.

If the instruction Add R1,R2,R3 is executed in a pipelined system, then the value of S is (where S is a term of the basic performance equation)

A. 3
B. ~2
C. ~1
D. 6
Answer» C. ~1
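For context, the basic performance equation referenced here is commonly written as T = (N * S) / R, with N the instruction count, S the average steps per instruction, and R the clock rate; pipelining overlaps steps, driving S toward 1. A small sketch of the equation (the instruction count and clock rate below are made-up illustrative values):

```python
# Sketch of the basic performance equation T = (N * S) / R, assuming the
# usual textbook symbols: N = instruction count, S = average steps per
# instruction, R = clock rate in Hz.
def exec_time(n_instructions, steps_per_instruction, clock_rate_hz):
    return n_instructions * steps_per_instruction / clock_rate_hz

print(exec_time(1_000_000, 3, 1e9))  # non-pipelined, S = 3  -> 0.003 s
print(exec_time(1_000_000, 1, 1e9))  # pipelined,     S ~ 1  -> 0.001 s
```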
69.

CISC stands for,

A. complete instruction sequential compilation
B. computer integrated sequential compiler
C. complex instruction set computer
D. complex instruction sequential compilation
Answer» C. complex instruction set computer
70.

As of 2000, the reference systems used to find the SPEC rating are built with the _____ processor.

A. intel atom sparc 300mhz
B. ultra sparc -iii 300mhz
C. amd neutrino series
D. asus a series 450 mhz
Answer» B. ultra sparc -iii 300mhz
71.

The CISC stands for

A. computer instruction set complement
B. complete instruction set complement
C. computer indexed set components
D. complex instruction set computer
Answer» D. complex instruction set computer
72.

Sun Microsystems processors usually follow the _____ architecture.

A. cisc
B. isa
C. ultra sparc
D. risc
Answer» D. risc
73.

The iconic feature of a RISC machine among the following is

A. reduced number of addressing modes
B. increased memory size
C. having a branch delay slot
D. all of the above
Answer» C. having a branch delay slot
74.

Both the CISC and RISC architectures have been developed to reduce the______.

A. cost
B. time delay
C. semantic gap
D. all of the above
Answer» C. semantic gap
75.

Out of the following, which is not a CISC machine?

A. ibm 370/168
B. vax 11/780
C. intel 80486
D. motorola a567
Answer» D. motorola a567
76.

Pipe-lining is a unique feature of _______.

A. risc
B. cisc
C. isa
D. iana
Answer» A. risc
77.

In the CISC architecture, most of the complex instructions are stored in _____.

A. register
B. diodes
C. cmos
D. transistors
Answer» D. transistors
78.

Which of the architecture is power efficient?

A. cisc
B. risc
C. isa
D. iana
Answer» B. risc
79.

It is the simultaneous use of multiple compute resources to solve a computational problem

A. Parallel computing
B. Single processing
C. Sequential computing
D. None of these
Answer» A. Parallel computing
80.

Parallel Execution

A. A sequential execution of a program, one statement at a time
B. Execution of a program by more than one task, with each task being able to execute the same or different statement at the same moment in time
C. A program or set of instructions that is executed by a processor.
D. None of these
Answer» B. Execution of a program by more than one task, with each task being able to execute the same or different statement at the same moment in time
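A minimal sketch of the contrast between serial and parallel execution (questions 79-80), using Python's standard multiprocessing module; the square function, the pool size of 4, and the input data are all invented for the demo:

```python
# Serial execution runs one statement at a time; parallel execution lets
# several tasks work on the same program at the same moment.
from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == "__main__":
    data = list(range(8))
    serial = [square(x) for x in data]     # one statement at a time
    with Pool(processes=4) as pool:        # four tasks executing concurrently
        parallel = pool.map(square, data)
    assert serial == parallel              # same result, different execution
```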
81.

Scalability refers to a parallel system’s (hardware and/or software) ability

A. To demonstrate a proportionate increase in parallel speedup with the removal of some processors
B. To demonstrate a proportionate increase in parallel speedup with the addition of more processors
C. To demonstrate a proportionate decrease in parallel speedup with the addition of more processors
D. None of these
Answer» B. To demonstrate a proportionate increase in parallel speedup with the addition of more processors
82.

Parallel computing can include

A. Single computer with multiple processors
B. Arbitrary number of computers connected by a network
C. Combination of both A and B
D. None of these
Answer» C. Combination of both A and B
83.

Serial Execution

A. A sequential execution of a program, one statement at a time
B. Execution of a program by more than one task, with each task being able to execute the same or different statement at the same moment in time
C. A program or set of instructions that is executed by a processor.
D. None of these
Answer» A. A sequential execution of a program, one statement at a time
84.

Shared Memory is

A. A computer architecture where all processors have direct access to common physical memory
B. It refers to network based memory access for physical memory that is not common.
C. Parallel tasks typically need to exchange data. There are several ways this can be accomplished, such as through a shared memory bus or over a network; however, the actual event of data exchange is commonly referred to as communications regardless of the method employed
Answer» A. A computer architecture where all processors have direct access to common physical memory
85.

Distributed Memory

A. A computer architecture where all processors have direct access to common physical memory
B. It refers to network based memory access for physical memory that is not common
C. Parallel tasks typically need to exchange data. There are several ways this can be accomplished, such as through a shared memory bus or over a network; however, the actual event of data exchange is commonly referred to as communications regardless of the method employed
Answer» B. It refers to network based memory access for physical memory that is not common
86.

Parallel Overhead is

A. Observed speedup of a code which has been parallelized, defined as the ratio of wall-clock time of serial execution to wall-clock time of parallel execution
B. The amount of time required to coordinate parallel tasks. It includes factors such as task start-up time, synchronizations, and data communications.
C. Refers to the hardware that comprises a given parallel system - having many processors
D. None of these
Answer» B. The amount of time required to coordinate parallel tasks. It includes factors such as task start-up time, synchronizations, and data communications.
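A quick numeric sketch of how parallel overhead eats into speedup, using the speedup definition quoted in option A; all timings below are invented for illustration:

```python
# Speedup = serial wall-clock time / parallel wall-clock time, where the
# parallel time includes coordination overhead (start-up, synchronization,
# communication). The numbers here are hypothetical.
t_serial = 10.0                 # wall-clock time of serial execution (s)
t_compute = t_serial / 4        # ideal compute time on 4 processors
t_overhead = 0.5                # start-up + synchronization + communication
t_parallel = t_compute + t_overhead

speedup = t_serial / t_parallel
print(f"speedup = {speedup:.2f}x on 4 processors")  # ~3.33x, not the ideal 4x
```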
87.

Massively Parallel

A. Observed speedup of a code which has been parallelized, defined as the ratio of wall-clock time of serial execution to wall-clock time of parallel execution
B. The amount of time required to coordinate parallel tasks. It includes factors such as task start-up time, synchronizations, and data communications.
C. Refers to the hardware that comprises a given parallel system - having many processors
D. None of these
Answer» C. Refers to the hardware that comprises a given parallel system - having many processors
88.

Fine-grain Parallelism is

A. In parallel computing, it is a qualitative measure of the ratio of computation to communication
B. Here relatively small amounts of computational work are done between communication events
C. Relatively large amounts of computational work are done between communication / synchronization events
D. None of these
Answer» B. Here relatively small amounts of computational work are done between communication events
89.

In shared Memory

A. Changes in a memory location effected by one processor do not affect all other processors.
B. Changes in a memory location effected by one processor are visible to all other processors
C. Changes in a memory location effected by one processor are randomly visible to all other processors.
D. None of these
Answer» B. Changes in a memory location effected by one processor are visible to all other processors
90.

In shared Memory:

A. Here all processors access all memory as a global address space
B. Here all processors have individual memory
C. Here some processors access all memory as a global address space and some do not
D. None of these
Answer» A. Here all processors access all memory as a global address space
91.

In shared Memory

A. Multiple processors can operate independently but share the same memory resources
B. Multiple processors can operate independently but do not share the same memory resources
C. Multiple processors can operate independently but some do not share the same memory resources
D. None of these
Answer» A. Multiple processors can operate independently but share the same memory resources
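A minimal sketch of the shared-memory behavior described in questions 89-91, using Python threads (which share one address space); the counter, the lock, and the loop counts are illustrative choices:

```python
# Threads in one process share the same memory, so an update made by one
# thread is visible to the others. A lock is used because visibility does
# not imply atomicity of read-modify-write operations.
import threading

counter = 0                      # one memory location, shared by all threads
lock = threading.Lock()

def worker():
    global counter
    for _ in range(10_000):
        with lock:               # coordinate access to the shared resource
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                   # 40000: every thread saw the others' updates
```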
92.

In designing a parallel program, one has to break the problem into discrete chunks of work that can be distributed to multiple tasks. This is known as

A. Decomposition
B. Partitioning
C. Compounding
D. Both A and B
Answer» D. Both A and B
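A small sketch of the partitioning in question 92: the helper below (a hypothetical partition function, not from any quoted source) splits a data set into near-equal chunks, one per task:

```python
# Break a problem's data into discrete chunks that can be handed out to
# separate tasks; the data set and chunk count are arbitrary examples.
def partition(data, n_chunks):
    k, m = divmod(len(data), n_chunks)
    return [data[i*k + min(i, m):(i+1)*k + min(i+1, m)] for i in range(n_chunks)]

print(partition(list(range(10)), 3))
# [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]] -- each chunk goes to one task
```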
93.

Latency is

A. Partitioning in which the data associated with a problem is decomposed. Each parallel task then works on a portion of the data.
B. Partitioning in which the focus is on the computation that is to be performed rather than on the data manipulated by the computation. The problem is decomposed according to the work that must be done. Each task then performs a portion of the overall work.
C. It is the time it takes to send a minimal (0 byte) message from one point to another
D. None of these
Answer» C. It is the time it takes to send a minimal (0 byte) message from one point to another
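A rough way to observe latency in practice is to time a minimal round trip between two processes and halve it. A sketch using Python's multiprocessing Pipe (the echo helper is made up for the demo; results vary by machine):

```python
# Approximate one-way latency as half the round-trip time of a minimal
# message bounced between two processes.
from multiprocessing import Pipe, Process
import time

def echo(conn):
    conn.send(conn.recv())       # bounce the message straight back

if __name__ == "__main__":
    parent, child = Pipe()
    p = Process(target=echo, args=(child,))
    p.start()
    t0 = time.perf_counter()
    parent.send(b"")             # minimal (0-byte) payload
    parent.recv()
    rtt = time.perf_counter() - t0
    p.join()
    print(f"one-way latency ~ {rtt / 2 * 1e6:.1f} us")
```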
94.

Domain Decomposition

A. Partitioning in which the data associated with a problem is decomposed. Each parallel task then works on a portion of the data.
B. Partitioning in which the focus is on the computation that is to be performed rather than on the data manipulated by the computation. The problem is decomposed according to the work that must be done. Each task then performs a portion of the overall work.
C. It is the time it takes to send a minimal (0 byte) message from point A to point B
D. None of these
Answer» A. Partitioning in which the data associated with a problem is decomposed. Each parallel task then works on a portion of the data.
95.

Functional Decomposition:

A. Partitioning in which the data associated with a problem is decomposed. Each parallel task then works on a portion of the data.
B. Partitioning in which the focus is on the computation that is to be performed rather than on the data manipulated by the computation. The problem is decomposed according to the work that must be done. Each task then performs a portion of the overall work.
C. It is the time it takes to send a minimal (0 byte) message from point A to point B
D. None of these
Answer» B. Partitioning in which the focus is on the computation that is to be performed rather than on the data manipulated by the computation. The problem is decomposed according to the work that must be done. Each task then performs a portion of the overall work.
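A side-by-side sketch of the two decomposition styles from questions 94-95, on a made-up job; the stage functions and the 50/50 split are arbitrary illustrations:

```python
# Domain decomposition: the same work applied to different pieces of the data.
# Functional decomposition: different stages of the work split across tasks.
data = list(range(100))

# Domain: each task owns a slice of the data and runs the whole computation.
domain_tasks = [data[0:50], data[50:100]]

# Functional: each task owns one stage; every datum flows through all stages.
def stage_filter(xs):    return [x for x in xs if x % 2 == 0]
def stage_transform(xs): return [x * x for x in xs]

functional_pipeline = stage_transform(stage_filter(data))
print(functional_pipeline[:5])   # first few results of the staged pipeline
```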
96.

Synchronous communications

A. It requires some type of “handshaking” between tasks that are sharing data. This can be explicitly structured in code by the programmer, or it may happen at a lower level unknown to the programmer.
B. It involves data sharing between more than two tasks, which are often specified as being members in a common group, or collective.
C. It involves two tasks with one task acting as the sender/producer of data, and the other acting as the receiver/consumer.
D. It allows tasks to transfer data independently from one another.
Answer» A. It requires some type of “handshaking” between tasks that are sharing data. This can be explicitly structured in code by the programmer, or it may happen at a lower level unknown to the programmer.
97.

Collective communication

A. It involves data sharing between more than two tasks, which are often specified as being members in a common group, or collective.
B. It involves two tasks with one task acting as the sender/producer of data, and the other acting as the receiver/consumer.
C. It allows tasks to transfer data independently from one another.
D. None of these
Answer» A. It involves data sharing between more than two tasks, which are often specified as being members in a common group, or collective.
98.

Point-to-point communication referred to

A. It involves data sharing between more than two tasks, which are often specified as being members in a common group, or collective.
B. It involves two tasks with one task acting as the sender/producer of data, and the other acting as the receiver/consumer.
C. It allows tasks to transfer data independently from one another.
D. None of these
Answer» B. It involves two tasks with one task acting as the sender/producer of data, and the other acting as the receiver/consumer.
99.

Uniform Memory Access (UMA) referred to

A. Here all processors have equal access and access times to memory
B. Here if one processor updates a location in shared memory, all the other processors know about the update.
C. Here one SMP can directly access memory of another SMP and not all processors have equal access time to all memories
D. None of these
Answer» A. Here all processors have equal access and access times to memory
100.

Asynchronous communications

A. It involves data sharing between more than two tasks, which are often specified as being members in a common group, or collective.
B. It involves two tasks with one task acting as the sender/producer of data, and the other acting as the receiver/consumer.
C. It allows tasks to transfer data independently from one another.
D. None of these
Answer» C. It allows tasks to transfer data independently from one another.
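A small sketch of the synchronous/asynchronous contrast from questions 96 and 100, using Python's standard queue module; the queue size and payloads are arbitrary:

```python
# A blocking receive waits (handshake-like, as in synchronous communication),
# while a buffered send lets the sender proceed independently (asynchronous).
import queue

q = queue.Queue(maxsize=1)

q.put("hello")                   # asynchronous flavor: enqueue and move on
try:
    q.put_nowait("world")        # buffer full; non-blocking variant raises
except queue.Full:
    print("receiver not ready; sender continues with other work")

print(q.get())                   # synchronous flavor: blocks until data exists
```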
