McqMate
These multiple-choice questions (MCQs) are designed to enhance your knowledge and understanding in the following areas: Computer Science Engineering (CSE).
| 201. |
Coherent PSK and noncoherent orthogonal FSK have a difference of ______ in PB. |
| A. | 1 dB |
| B. | 3 dB |
| C. | 4 dB |
| D. | 6 dB |
| Answer» C. 4 dB | |
| Explanation: The difference in PB is approximately 4 dB between the best case (coherent PSK) and the worst case (noncoherent orthogonal FSK). | |
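The 4 dB figure can be checked numerically. The sketch below is my own, not part of the original question set; it assumes the standard textbook error-rate formulas PB = Q(√(2Eb/N0)) for coherent BPSK and PB = ½·exp(−Eb/2N0) for noncoherent orthogonal FSK, and bisects for the Eb/N0 each scheme needs to reach PB = 10⁻⁵:

```python
import math

def q(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def pb_coherent_psk(ebn0):
    return q(math.sqrt(2 * ebn0))

def pb_noncoherent_fsk(ebn0):
    return 0.5 * math.exp(-ebn0 / 2)

def required_ebn0(pb_func, target=1e-5):
    """Bisect for the (linear) Eb/N0 that reaches the target bit error rate."""
    lo, hi = 0.01, 100.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if pb_func(mid) > target:  # still too noisy -> need more energy
            lo = mid
        else:
            hi = mid
    return hi

psk = required_ebn0(pb_coherent_psk)
fsk = required_ebn0(pb_noncoherent_fsk)
print(f"gap at PB=1e-5: {10 * math.log10(fsk / psk):.1f} dB")  # ~3.8 dB, i.e. roughly 4 dB
```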
| 202. |
Which is easier to implement and is preferred? |
| A. | coherent system |
| B. | noncoherent system |
| C. | coherent & noncoherent system |
| D. | none of the mentioned |
| Answer» B. noncoherent system | |
| Explanation: A noncoherent system is preferred because there may be difficulty in establishing and maintaining a coherent reference. | |
| 203. |
Which is the main system consideration? |
| A. | probability of error |
| B. | system complexity |
| C. | random fading channel |
| D. | all of the mentioned |
| Answer» D. all of the mentioned | |
| Explanation: The major system considerations are error probability, system complexity, and performance over a random fading channel. Considering all of these, a noncoherent system is more desirable than a coherent one. | |
| 204. |
Self information should be |
| A. | positive |
| B. | negative |
| C. | positive & negative |
| D. | none of the mentioned |
| Answer» A. positive | |
| Explanation: Self-information is always non-negative: I(x) = -log p(x), and since 0 < p(x) ≤ 1 the logarithm is never positive. | |
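Self-information is easy to compute directly; a minimal sketch, assuming the usual definition I(x) = −log p(x):

```python
import math

def self_information(p, base=2):
    """I(x) = -log_base(p); non-negative because 0 < p <= 1."""
    return -math.log(p, base)

print(self_information(0.5))               # 1.0 bit: a fair coin flip
print(self_information(1.0))               # 0.0 bits: a certain event carries no information
print(self_information(0.5, base=math.e))  # ~0.693 nats: base-e units (see question 209)
```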
| 205. |
The unit of average mutual information is |
| A. | bits |
| B. | bytes |
| C. | bits per symbol |
| D. | bytes per symbol |
| Answer» A. bits | |
| Explanation: The unit of average mutual information is bits (when the logarithm is taken to base 2). | |
| 206. |
When probability of error during transmission is 0.5, it indicates that |
| A. | channel is very noisy |
| B. | no information is received |
| C. | channel is very noisy & no information is received |
| D. | none of the mentioned |
| Answer» C. channel is very noisy & no information is received | |
| Explanation: When the probability of error is 0.5, each received bit is as likely to be wrong as correct, so the channel is very noisy and no information is received. | |
| 207. |
The event with minimum probability has least number of bits. |
| A. | true |
| B. | false |
| Answer» B. false | |
| Explanation: In binary Huffman coding, the event with the maximum probability is assigned the fewest bits; the event with minimum probability gets the most. | |
| 208. |
The method of converting a word to stream of bits is called as |
| A. | binary coding |
| B. | source coding |
| C. | bit coding |
| D. | cipher coding |
| Answer» B. source coding | |
| Explanation: Source coding is the method of converting a word into a stream of bits, i.e., 0s and 1s. | |
| 209. |
When the base of the logarithm is 2, then the unit of measure of information is |
| A. | bits |
| B. | bytes |
| C. | nats |
| D. | none of the mentioned |
| Answer» A. bits | |
| Explanation: When the base of the logarithm is 2, the unit of measure of information is the bit; base e gives nats. | |
| 210. |
When X and Y are statistically independent, then I(x; y) is |
| A. | 1 |
| B. | 0 |
| C. | ln 2 |
| D. | cannot be determined |
| Answer» B. 0 | |
| Explanation: When X and Y are statistically independent, p(x, y) = p(x)p(y), so I(x; y) = log[p(x, y)/(p(x)p(y))] = log 1 = 0. | |
| 211. |
The self information of a continuous random variable is |
| A. | 0 |
| B. | 1 |
| C. | infinite |
| D. | cannot be determined |
| Answer» C. infinite | |
| Explanation: The self-information of a continuous random variable is infinite, since it can take any of an uncountably infinite set of values. | |
| 212. |
Entropy of a continuous random variable is |
| A. | 0 |
| B. | 1 |
| C. | infinite |
| D. | cannot be determined |
| Answer» C. infinite | |
| Explanation: Like its self-information, the (absolute) entropy of a continuous random variable is also infinite. | |
| 213. |
Which is the more efficient method? |
| A. | encoding each symbol of a block |
| B. | encoding block of symbols |
| C. | encoding each symbol of a block & encoding block of symbols |
| D. | none of the mentioned |
| Answer» B. encoding block of symbols | |
| Explanation: Encoding a block of symbols is more efficient than encoding each symbol separately, since it can exploit the statistics of symbol groups. | |
| 214. |
Lempel-Ziv algorithm is |
| A. | variable to fixed length algorithm |
| B. | fixed to variable length algorithm |
| C. | fixed to fixed length algorithm |
| D. | variable to variable length algorithm |
| Answer» A. variable to fixed length algorithm | |
| Explanation: The Lempel-Ziv algorithm is a variable-to-fixed length algorithm: variable-length strings of source symbols are mapped onto fixed-length code blocks. | |
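A toy LZ78-style parser (my own sketch; the question does not specify which Lempel-Ziv variant it means) illustrates the variable-to-fixed idea: variable-length source phrases are emitted as fixed-size (dictionary index, next symbol) pairs:

```python
def lz78_parse(s):
    """Parse s into (dictionary index, next symbol) pairs, LZ78-style."""
    dictionary = {"": 0}          # phrase -> index
    phrase, output = "", []
    for ch in s:
        if phrase + ch in dictionary:
            phrase += ch          # keep extending the current phrase
        else:
            output.append((dictionary[phrase], ch))
            dictionary[phrase + ch] = len(dictionary)
            phrase = ""
    if phrase:                    # flush a trailing phrase, if any
        output.append((dictionary[phrase], None))
    return output

print(lz78_parse("ababababa"))   # [(0,'a'), (0,'b'), (1,'b'), (3,'a'), (2,'a')]
```

Note how the phrases grow longer as the dictionary fills, while each output pair stays the same size.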
| 215. |
While recovering signal, which gets attenuated more? |
| A. | low frequency component |
| B. | high frequency component |
| C. | low & high frequency component |
| D. | none of the mentioned |
| Answer» B. high frequency component | |
| Explanation: High-frequency components are attenuated more than low-frequency components while recovering the signal. | |
| 216. |
Mutual information should be |
| A. | positive |
| B. | negative |
| C. | positive & negative |
| D. | none of the mentioned |
| Answer» C. positive & negative | |
| Explanation: The mutual information between a pair of events can be positive or negative: it is negative when observing one event makes the other less likely. Only the average mutual information is guaranteed to be non-negative. | |
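A quick numeric illustration (my own, using a hypothetical joint distribution): the pointwise mutual information i(x; y) = log₂[p(x, y)/(p(x)p(y))] can be negative, zero (the independent case of question 210), or positive:

```python
import math

def pointwise_mi(p_xy, p_x, p_y):
    """i(x;y) = log2( p(x,y) / (p(x) p(y)) ) -- can be negative."""
    return math.log2(p_xy / (p_x * p_y))

# Hypothetical joint distribution with p(x) = p(y) = 0.5 for both marginals:
print(pointwise_mi(0.10, 0.5, 0.5))  # log2(0.4) < 0: observing x makes this y less likely
print(pointwise_mi(0.40, 0.5, 0.5))  # log2(1.6) > 0
print(pointwise_mi(0.25, 0.5, 0.5))  # log2(1.0) = 0: the independent case
```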
| 217. |
ASCII code is a |
| A. | fixed length code |
| B. | variable length code |
| C. | fixed & variable length code |
| D. | none of the mentioned |
| Answer» A. fixed length code | |
| Explanation: ASCII is a fixed-length code; each character has a fixed length of 7 bits. | |
| 218. |
Which reduces the size of the data? |
| A. | source coding |
| B. | channel coding |
| C. | source & channel coding |
| D. | none of the mentioned |
| Answer» A. source coding | |
| Explanation: Source coding reduces the size of the data, whereas channel coding increases it by adding redundancy for error protection. | |
| 219. |
In digital image coding which image must be smaller in size? |
| A. | input image |
| B. | output image |
| C. | input & output image |
| D. | none of the mentioned |
| Answer» B. output image | |
| Explanation: In digital image coding, the output image must be smaller (in bits) than the input image. | |
| 220. |
Which coding method uses entropy coding? |
| A. | lossless coding |
| B. | lossy coding |
| C. | lossless & lossy coding |
| D. | none of the mentioned |
| Answer» B. lossy coding | |
| Explanation: Lossy source coding uses entropy coding as its final stage; for example, JPEG applies Huffman coding to its quantized coefficients. Entropy coding itself is lossless; the loss comes from quantization. | |
| 221. |
Which achieves greater compression? |
| A. | lossless coding |
| B. | lossy coding |
| C. | lossless & lossy coding |
| D. | none of the mentioned |
| Answer» B. lossy coding | |
| Explanation: Lossy coding achieves greater compression, whereas lossless coding achieves only moderate compression. | |
| 222. |
Which are uniquely decodable codes? |
| A. | fixed length codes |
| B. | variable length codes |
| C. | fixed & variable length codes |
| D. | none of the mentioned |
| Answer» A. fixed length codes | |
| Explanation: Fixed-length codes (with distinct codewords) are always uniquely decodable, whereas variable-length codes may or may not be uniquely decodable. | |
| 223. |
A rate distortion function is a |
| A. | concave function |
| B. | convex function |
| C. | increasing function |
| D. | none of the mentioned |
| Answer» B. convex function | |
| Explanation: A rate distortion function is a monotonically decreasing and convex function. | |
| 224. |
Which of the following algorithms is the best approach for solving Huffman codes? |
| A. | exhaustive search |
| B. | greedy algorithm |
| C. | brute force algorithm |
| D. | divide and conquer algorithm |
| Answer» B. greedy algorithm | |
| Explanation: The greedy algorithm is the best approach for the Huffman coding problem: repeatedly merging the two least-frequent nodes yields an optimal prefix code. | |
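A compact sketch of the greedy construction (my own implementation using Python's heapq, not code from the question set). It also makes questions 207 and 234 concrete: the most probable symbol ends up with the shortest codeword, and with a binary heap the C − 1 merges cost O(C log C):

```python
import heapq
from itertools import count

def huffman_codes(freqs):
    """Greedy Huffman: repeatedly merge the two least-frequent trees.
    With a binary heap, each of the C-1 merges costs O(log C) -> O(C log C) overall."""
    tiebreak = count()  # keeps heap comparisons away from the dict payloads
    heap = [(f, next(tiebreak), {sym: ""}) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}   # left branch gets 0
        merged.update({s: "1" + c for s, c in right.items()})  # right branch gets 1
        heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
    return heap[0][2]

codes = huffman_codes({"a": 0.45, "b": 0.25, "c": 0.15, "d": 0.10, "e": 0.05})
print(codes)  # the most probable symbol ("a") receives the shortest codeword
```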
| 225. |
How many printable characters does the ASCII character set consist of? |
| A. | 120 |
| B. | 128 |
| C. | 100 |
| D. | 98 |
| Answer» C. 100 | |
| Explanation: Of the 128 characters in the ASCII set, only about 100 (95, strictly) are printable; the rest are control characters. | |
| 226. |
Which bit is reserved as a parity bit in an ASCII set? |
| A. | first |
| B. | seventh |
| C. | eighth |
| D. | tenth |
| Answer» C. eighth | |
| Explanation: In the ASCII character set, seven bits are reserved for character representation, while the eighth bit is a parity bit. | |
| 227. |
How many bits are needed for standard encoding if the size of the character set is X? |
| A. | log X |
| B. | X + 1 |
| C. | 2X |
| D. | X² |
| Answer» A. log X | |
| Explanation: If the size of the character set is X, then ⌈log₂ X⌉ bits are needed for representation in a standard (fixed-length) encoding. | |
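A one-line check of the ⌈log₂ X⌉ rule (a sketch, assuming base-2 logarithms):

```python
import math

def bits_needed(x):
    """ceil(log2(x)) bits give each of x characters a distinct fixed-length code."""
    return math.ceil(math.log2(x))

print(bits_needed(128))  # 7 -- the 7-bit ASCII case (question 226)
print(bits_needed(100))  # 7
print(bits_needed(26))   # 5
```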
| 228. |
The code length does not depend on the frequency of occurrence of characters. |
| A. | true |
| B. | false |
| Answer» B. false | |
| Explanation: The code length depends on the frequency of occurrence of characters: the more frequently a character occurs, the shorter its code. | |
| 229. |
From the following given tree, what is the code word for the character ‘a’? |
| A. | 011 |
| B. | 010 |
| C. | 100 |
| D. | 101 |
| Answer» A. 011 | |
| Explanation: By recording the path from the root to the leaf, the code word for character ‘a’ is found to be 011. | |
| 230. |
From the following given tree, what is the computed codeword for ‘c’? |
| A. | 111 |
| B. | 101 |
| C. | 110 |
| D. | 011 |
| Answer» C. 110 | |
| Explanation: By recording the path from the root to the leaf, assigning 0 to a left branch and 1 to a right branch, the codeword for ‘c’ is 110. | |
| 231. |
What will be the cost of the code if character cᵢ is at depth dᵢ and occurs at frequency fᵢ? |
| A. | cᵢfᵢ |
| B. | ∫cᵢfᵢ |
| C. | ∑fᵢdᵢ |
| D. | fᵢdᵢ |
| Answer» C. ∑fᵢdᵢ | |
| Explanation: If character cᵢ is at depth dᵢ and occurs at frequency fᵢ, the cost of the code is ∑fᵢdᵢ, summed over all characters. | |
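Computing the cost is a single weighted sum; a sketch with hypothetical frequencies and depths:

```python
# Cost of a code: sum over characters of (frequency x depth in the code tree).
# The frequencies and depths below are hypothetical, for illustration only.
freqs  = {"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}
depths = {"a": 1,  "b": 3,  "c": 4,  "d": 3,  "e": 4, "f": 4}
cost = sum(freqs[ch] * depths[ch] for ch in freqs)
print(cost)  # 236
```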
| 232. |
An optimal code will always be represented by a full tree. |
| A. | true |
| B. | false |
| Answer» A. true | |
| Explanation: An optimal code is always represented by a full tree, in which every internal node has two children; otherwise some codeword could be shortened. | |
| 233. |
The type of encoding where no character code is the prefix of another character code is called? |
| A. | optimal encoding |
| B. | prefix encoding |
| C. | frequency encoding |
| D. | trie encoding |
| Answer» B. prefix encoding | |
| Explanation: Even when the character codes are of different lengths, an encoding in which no character code is the prefix of another character code is called prefix encoding; such codes can be decoded unambiguously. | |
| 234. |
What is the running time of the Huffman encoding algorithm? |
| A. | O(C) |
| B. | O(log C) |
| C. | O(C log C) |
| D. | O(N log C) |
| Answer» C. O(C log C) | |
| Explanation: If we maintain the trees in a priority queue ordered by weight, the running time is O(C log C), where C is the number of characters. | |
| 235. |
What is the running time of the Huffman algorithm, if its implementation of the priority queue is done using linked lists? |
| A. | O(C) |
| B. | O(log C) |
| C. | O(C log C) |
| D. | O(C²) |
| Answer» D. O(C²) | |
| Explanation: If the priority queue is implemented with linked lists, each extract-min scan is O(C), so the running time of the Huffman algorithm becomes O(C²). | |
| 236. |
A notch filter is a |
| A. | high pass filter |
| B. | low pass filter |
| C. | band stop filter |
| D. | band pass filter |
| Answer» C. band stop filter | |
| Explanation: A notch filter is a band-stop filter: it allows most frequencies to pass while rejecting frequencies in a specific range, the opposite of a band-pass filter. A high-pass filter passes higher frequencies, while a low-pass filter passes lower frequencies. | |
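A notch filter is simple to construct digitally; a sketch assuming SciPy is available (scipy.signal.iirnotch designs exactly this kind of band-stop filter; the sampling rate, notch frequency, and Q below are illustrative):

```python
import numpy as np
from scipy.signal import iirnotch, freqz

fs = 1000.0  # sampling rate, Hz (illustrative)
f0 = 50.0    # frequency to reject, Hz
q = 30.0     # quality factor: higher Q means a narrower notch

b, a = iirnotch(f0, q, fs=fs)   # band-stop filter centred on f0
w, h = freqz(b, a, fs=fs)       # frequency response, with w in Hz
gain_at_f0 = abs(h[np.argmin(np.abs(w - f0))])
print(f"gain at {f0} Hz: {gain_at_f0:.3f}")  # close to 0 inside the notch
```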
| 237. |
A sine wave is |
| A. | aperiodic signal |
| B. | periodic signal |
| C. | random signal |
| D. | deterministic signal |
| Answer» B. periodic signal | |
| Explanation: A periodic signal repeats itself after a regular interval, and a sine wave does exactly that, so it is periodic. An aperiodic signal does not repeat at any regular interval. Random signals have uncertain values at any given time, while deterministic signals can be completely specified as functions of time. | |
| 238. |
What is the role of channel in communication system? |
| A. | acts as a medium to send message signals from transmitter to receiver |
| B. | converts one form of signal to other |
| C. | allows mixing of signals |
| D. | helps to extract original signal from incoming signal |
| Answer» A. acts as a medium to send message signals from transmitter to receiver | |
| Explanation: The channel acts as the medium that carries the message signal from the source transmitter to the destination receiver. | |
| 239. |
The sum of a periodic and an aperiodic signal will always be an aperiodic signal. |
| A. | true |
| B. | false |
| Answer» B. false | |
| Explanation: A periodic signal repeats itself after a regular interval, while an aperiodic signal does not; but their sum can still be periodic. For example, sin(√2·t) is periodic and sin(t) − sin(√2·t) is aperiodic, yet their sum sin(t) is periodic. | |
| 240. |
Noise is added to a signal |
| A. | in the channel |
| B. | at receiving antenna |
| C. | at transmitting antenna |
| D. | during regeneration of information |
| Answer» A. in the channel | |
| Explanation: Noise is an unwanted signal that gets mixed with the transmitted signal while it passes through the channel, distorting the received signal. The transmitting antenna transmits the modulated message signal, while the receiving antenna receives the transmitted signal. Regeneration of information refers to demodulating the received signal to recover the original message. | |
| 241. |
An agreement between communication devices is called a |
| A. | transmission medium |
| B. | channel |
| C. | protocol |
| D. | modem |
| Answer» C. protocol | |
| Explanation: A protocol is a set of rules that governs data communication, acting as an agreement between communicating devices. The channel is the transmission medium or path through which information travels. A modem is a device that modulates and demodulates data. | |
| 242. |
What is the advantage of superheterodyning? |
| A. | high selectivity and sensitivity |
| B. | low bandwidth |
| C. | low adjacent channel rejection |
| D. | low fidelity |
| Answer» A. high selectivity and sensitivity | |
| Explanation: The main advantage of superheterodyning is that it provides high selectivity and sensitivity. Its bandwidth remains the same, and it has high adjacent-channel rejection and high fidelity. | |
| 243. |
Low frequency noise is |
| A. | flicker noise |
| B. | shot noise |
| C. | thermal noise |
| D. | partition noise |
| Answer» A. flicker noise | |
| Explanation: Flicker noise is a type of electronic noise generated by fluctuations in carrier density. It is also known as 1/f noise, since its power spectral density increases as frequency decreases (i.e., with offset from a signal). | |
| 244. |
Relationship between amplitude and frequency is represented by |
| A. | time-domain plot |
| B. | phase-domain plot |
| C. | frequency-domain plot |
| D. | amplitude-domain plot |
| Answer» C. frequency-domain plot | |
| Explanation: The relationship between amplitude and frequency is represented by a frequency-domain plot; it shows the amplitude of each frequency component of the signal, whereas a time-domain plot shows amplitude versus time. | |
| 245. |
A function f(x) is even, when? |
| A. | f(x) = -f(-x) |
| B. | f(x) = f(-x) |
| C. | f(x) = -f(x)f(-x) |
| D. | f(x) = f(x)f(-x) |
| Answer» B. f(x) = f(-x) | |
| Explanation: Geometrically, a function f(x) is even if its plot is symmetric about the y-axis. Algebraically, f is even when f(x) = f(-x) for all x. | |
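The algebraic test translates directly into a numeric spot-check (a sketch; sampling a few points, not a proof):

```python
import math

def looks_even(f, xs):
    """Spot-check f(x) == f(-x) at the sample points xs."""
    return all(math.isclose(f(x), f(-x)) for x in xs)

print(looks_even(math.cos, [0.5, 1.0, 2.0]))  # True: cosine is even
print(looks_even(math.sin, [0.5, 1.0, 2.0]))  # False: sine is odd, sin(-x) = -sin(x)
```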
| 246. |
The minimum Nyquist bandwidth needed for baseband transmission of Rs symbols per second is |
| A. | Rs |
| B. | 2Rs |
| C. | Rs/2 |
| D. | Rs² |
| Answer» C. Rs/2 | |
| Explanation: The theoretical minimum Nyquist bandwidth needed for baseband transmission of Rs symbols per second without intersymbol interference (ISI) is Rs/2 hertz. | |
| 247. |
The capacity relationship is given by |
| A. | C = W log₂(1 + S/N) |
| B. | C = 2W log₂(1 + S/N) |
| C. | C = W log₂(1 − S/N) |
| D. | C = W log₁₀(1 + S/N) |
| Answer» A. C = W log₂(1 + S/N) | |
| Explanation: The capacity relationship from the Shannon-Hartley capacity theorem is C = W log₂(1 + S/N). | |
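Plugging numbers into the Shannon-Hartley formula (a sketch; the 3.1 kHz bandwidth and 30 dB SNR figures are illustrative, roughly a telephone-grade channel):

```python
import math

def capacity(w_hz, snr_linear):
    """Shannon-Hartley: C = W * log2(1 + S/N), in bits per second."""
    return w_hz * math.log2(1 + snr_linear)

snr = 10 ** (30 / 10)                       # 30 dB -> linear ratio of 1000
print(f"{capacity(3100, snr):,.0f} bit/s")  # ~30,898 bit/s
```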
| 248. |
Which parameter is called as Shannon limit? |
| A. | Pb/N0 |
| B. | Eb/N0 |
| C. | EbN0 |
| D. | none of the mentioned |
| Answer» B. Eb/N0 | |
| Explanation: There exists a limiting value of Eb/N0 (approximately −1.6 dB) below which there can be no error-free communication at any information rate; this value of Eb/N0 is called the Shannon limit. | |
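The limiting value follows from letting the bandwidth grow without bound in the Shannon-Hartley relation, which drives the required Eb/N0 down to ln 2. A two-line check:

```python
import math

shannon_limit = math.log(2)            # minimum Eb/N0 as W -> infinity (linear value)
print(10 * math.log10(shannon_limit))  # ~ -1.59 dB: the Shannon limit
```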
| 249. |
Entropy is the measure of |
| A. | amount of information at the output |
| B. | amount of information that can be transmitted |
| C. | number of error bits from total number of bits |
| D. | none of the mentioned |
| Answer» A. amount of information at the output | |
| Explanation: Entropy is defined as the average amount of information per source output. | |
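The definition translates directly to code; a minimal sketch of H(X) = −∑ p·log₂ p:

```python
import math

def entropy(probs):
    """H(X) = -sum(p * log2(p)): average information per source output, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))  # 1.0 bit: a fair binary source
print(entropy([0.9, 0.1]))  # ~0.469 bits: a biased source carries less information
```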
| 250. |
Equivocation is the |
| A. | conditional entropy |
| B. | joint entropy |
| C. | individual entropy |
| D. | none of the mentioned |
| Answer» A. conditional entropy | |
| Explanation: Shannon uses a correction factor called equivocation to account for uncertainty in the received signal. It is defined as the conditional entropy H(X|Y) of the message X given the received Y. | |
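For a binary symmetric channel with equiprobable inputs, the equivocation works out to the binary entropy of the error probability, which also closes the loop on question 206: at p = 0.5 the equivocation is a full bit and nothing gets through. A sketch:

```python
import math

def h2(p):
    """Binary entropy function, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# For a binary symmetric channel with equiprobable inputs:
# equivocation H(X|Y) = h2(p), information transferred I(X;Y) = 1 - h2(p).
for p in (0.0, 0.1, 0.5):
    print(f"p={p}: H(X|Y)={h2(p):.3f} bits, I(X;Y)={1 - h2(p):.3f} bits")
# At p = 0.5: H(X|Y) = 1 and I(X;Y) = 0 -- the channel delivers no information.
```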