1. Which of the following best defines parallel computing?
a) Executing multiple tasks sequentially on a single processor
b) Executing multiple tasks simultaneously using multiple processors/cores
c) Executing one task on multiple machines in sequence
d) Breaking tasks into smaller ones but running them one after another
👉 Answer: b) Executing multiple tasks simultaneously using multiple processors/cores
2. In Flynn’s taxonomy, which category represents multiple instructions operating on multiple data streams?
a) SISD
b) SIMD
c) MISD
d) MIMD
👉 Answer: d) MIMD
3. What does SIMD stand for in parallel computing?
a) Single Instruction Multiple Data
b) Single Instruction Multiple Devices
c) Simple Instruction Multi Data
d) System Instruction Multiple Data
👉 Answer: a) Single Instruction Multiple Data
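For context, a minimal sketch of SIMD-style execution using OpenMP's simd directive (one instruction stream applied to many data elements; the directive asks the compiler to vectorize the loop):
#include <stdio.h>

int main(void) {
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float c[8];
    /* same instruction (add) applied across multiple data elements */
    #pragma omp simd
    for (int i = 0; i < 8; i++)
        c[i] = a[i] + b[i];
    printf("c[0] = %.1f\n", c[0]);
    return 0;
}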
4. Which of the following is a challenge in parallel computing?
a) Synchronization
b) Load balancing
c) Data communication overhead
d) All of the above
👉 Answer: d) All of the above
5. OpenMP is primarily used for parallel programming on:
a) Distributed memory systems
b) Shared memory systems
c) GPU clusters
d) Cloud computing
👉 Answer: b) Shared memory systems
6. MPI (Message Passing Interface) is mainly used for:
a) Shared memory parallelism
b) Distributed memory parallelism
c) Sequential computing
d) Cloud-only systems
👉 Answer: b) Distributed memory parallelism
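Because each process owns its own memory, data moves by explicit messages. A minimal send/receive sketch (assumes the program is launched with at least 2 processes):
#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, value;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        value = 42;   /* data lives in process 0's private memory */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Process 1 received %d\n", value);
    }
    MPI_Finalize();
    return 0;
}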
7. Which type of parallelism is involved when tasks are divided based on data?
a) Functional parallelism
b) Instruction-level parallelism
c) Data parallelism
d) Pipeline parallelism
👉 Answer: c) Data parallelism
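A minimal data-parallelism sketch in OpenMP (the same operation applied to different elements of one array, with iterations shared among threads):
#include <stdio.h>
#include <omp.h>

int main(void) {
    int a[8];
    /* each thread applies the same operation to its share of the data */
    #pragma omp parallel for
    for (int i = 0; i < 8; i++)
        a[i] = i * i;
    printf("a[7] = %d\n", a[7]);
    return 0;
}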
8. In parallel computing, work is divided into:
a) Only one part
b) Many smaller parts
c) Only two parts
d) None of the above
👉 Answer: b) Many smaller parts
9. What will this OpenMP program print (run with 4 threads)?
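A representative program, as a minimal sketch consistent with the options below (assumes a 4-thread parallel region printing thread IDs):
#include <stdio.h>
#include <omp.h>

int main(void) {
    #pragma omp parallel num_threads(4)
    {
        printf("Hello from thread %d\n", omp_get_thread_num());
    }
    return 0;
}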
a) Always prints thread IDs in order 0 1 2 3
b) Prints thread IDs but order may change
c) Prints only “Hello from thread 0”
d) Compilation error
👉 Answer: b (threads run in parallel, order not guaranteed).
10. In OpenMP, which directive parallelizes a for loop?
a) #pragma omp loop
b) #pragma omp for
c) #pragma omp parallel loop
d) Both b and c
👉 Answer: d (#pragma omp for distributes a loop inside an existing parallel region; #pragma omp parallel loop, available since OpenMP 5.0, creates the region and distributes the loop in one directive).
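A minimal sketch showing option b in use (omp for must appear inside an existing parallel region, whose threads then share the loop iterations):
#include <stdio.h>
#include <omp.h>

int main(void) {
    #pragma omp parallel num_threads(4)
    {
        /* 'for' alone does not create threads; it splits the loop
           among the threads of the enclosing parallel region */
        #pragma omp for
        for (int i = 0; i < 8; i++)
            printf("i=%d on thread %d\n", i, omp_get_thread_num());
    }
    return 0;
}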
11. What is the output of this MPI program (with 4 processes)?
#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    MPI_Init(NULL, NULL);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* each process gets its own rank */
    printf("Hello from process %d\n", rank);
    MPI_Finalize();
    return 0;
}
a) Always prints ranks in order 0 1 2 3
b) Prints ranks but order may vary
c) Only process 0 prints
d) Compilation error
👉 Answer: b (each process prints, order not fixed).
12. Which of these is the correct way to set the number of threads in OpenMP?
a) omp_set_num_threads(n);
b) set_threads(n);
c) num_threads = n;
d) Not possible
👉 Answer: a (omp_set_num_threads(n) is the runtime call; the OMP_NUM_THREADS environment variable is an alternative).
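A quick usage sketch (the call requests a thread count for subsequent parallel regions):
#include <stdio.h>
#include <omp.h>

int main(void) {
    omp_set_num_threads(4);   /* request 4 threads for later parallel regions */
    #pragma omp parallel
    {
        #pragma omp single
        printf("team size: %d\n", omp_get_num_threads());
    }
    return 0;
}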
13. What is the output of this OpenMP program with 2 threads?
#include <stdio.h>
#include <omp.h>

int main() {
    int sum = 0;
    #pragma omp parallel num_threads(2)
    {
        sum += 1;   /* unsynchronized update to a shared variable */
    }
    printf("sum = %d\n", sum);
    return 0;
}
a) Always prints sum = 2
b) Always prints sum = 1
c) May print 1 or 2 due to race condition
d) Error
👉 Answer: c (both threads update the shared variable sum without synchronization, so an increment can be lost).
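One way to make the update safe, as a minimal sketch using an atomic update (a reduction clause, shown in the next question, is another):
#include <stdio.h>
#include <omp.h>

int main(void) {
    int sum = 0;
    #pragma omp parallel num_threads(2)
    {
        #pragma omp atomic
        sum += 1;   /* the increment is now performed atomically */
    }
    printf("sum = %d\n", sum);   /* always prints sum = 2 */
    return 0;
}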
14. In OpenMP, what does the following code do?
int sum = 0;
#pragma omp parallel for reduction(+:sum)
for (int i = 0; i < 100; i++) {
    sum += i;
}
a) Runs sequentially, no parallelism
b) Parallelizes the loop but causes race condition
c) Parallelizes the loop and avoids race condition using reduction
d) Compilation error
👉 Answer: c (each thread accumulates into a private copy of sum, and the copies are combined when the loop ends).
15. What is the output of this OpenMP code (2 threads)?
#include <stdio.h>
#include <omp.h>

int main() {
    #pragma omp parallel num_threads(2)
    {
        printf("Thread %d says Hello\n", omp_get_thread_num());
    }
    return 0;
}
a) Always prints thread 0 then 1
b) Prints thread IDs in any order
c) Only prints thread 0
d) Error
👉 Answer: b (thread scheduling order is not deterministic).
16. What will this OpenMP program print?
#include <stdio.h>
#include <omp.h>

int main() {
    int i;
    #pragma omp parallel for private(i)
    for (i = 0; i < 3; i++) {
        printf("i=%d, thread=%d\n", i, omp_get_thread_num());
    }
    return 0;
}
a) Each i printed once, order may vary
b) Prints wrong values of i
c) Compilation error
d) Only one thread prints
👉 Answer: a (each iteration runs exactly once, but the assignment of iterations to threads, and hence the print order, may vary).