Total Questions: 50
Expected Time: 50 Minutes

1. Discuss the concept of parallel computing granularity and its impact on scalability and efficiency, providing examples from real-world applications.

2. Why is load balancing important in parallel computing?

3. Explain Amdahl's Law and its significance in parallel computing.

4. What is the role of a parallel computing scheduler?

5. In the context of parallel algorithms, what is meant by granularity and how does it impact performance?

6. What is Amdahl's Law used for in parallel computing?

7. What is the role of parallel reduction operations, and how do they contribute to the efficiency of parallel algorithms?

8. Evaluate the role of parallel computing schedulers in managing complex parallel processes and optimizing overall system performance.

9. What is the significance of parallel computing in real-time systems?

10. Discuss the advantages and challenges of integrating GPU-based parallel computing into scientific simulations and data-intensive applications.

11. Discuss the challenges and solutions associated with mitigating race conditions in parallel programming.

12. Evaluate the impact of Amdahl's Law on the design and scalability of parallel algorithms, considering different scenarios and levels of parallelization.

13. Which parallel algorithm is commonly used for searching a key in a large dataset?

14. What is a race condition in parallel programming?

15. Which parallel algorithm is commonly used for sorting large datasets?

16. What is the primary challenge of achieving load balancing in parallel computing?

17. Which parallel computing architecture is characterized by processors having their own local memory and connected through a network?

18. What is the role of barriers in parallel computing, and how do they impact performance?

19. What is the primary advantage of parallel computing in scientific simulations?

20. Describe the difference between task parallelism and data parallelism in parallel computing.

21. Explain the concept of speculative execution in the context of high-performance parallel computing.

22. In parallel computing, what is a deadlock?

23. What is the purpose of parallel prefix sum in parallel algorithms?

24. Examine the impact of Amdahl's Law on the scalability of parallel computing systems, especially in the presence of communication overhead.

25. Discuss the concept of SIMD (Single Instruction, Multiple Data) and its applications in parallel computing.

26. What is the purpose of barrier synchronization in parallel programming?

27. Explain the concept of data parallelism.

28. Discuss the trade-offs involved in choosing between shared-memory and distributed-memory architectures in parallel computing.

29. Examine the role of parallel algorithms in addressing challenges related to big data processing and analytics, highlighting key techniques and optimizations.

30. Which parallel programming concept involves dividing a program into small, independent threads of execution?

31. In parallel computing, what is speculative execution?

32. Discuss the concept of deadlock in parallel computing and provide strategies for preventing and resolving deadlock situations.

33. What is parallel processing?

34. What are the main challenges in designing efficient parallel algorithms?

35. Describe the challenges and solutions associated with achieving fault tolerance in large-scale parallel computing systems, with a focus on real-world applications.

36. In parallel computing, what does Amdahl's Law express?

37. Examine the impact of task granularity on load balancing in parallel computing, discussing strategies for achieving optimal distribution of computational tasks.

38. Discuss the challenges and advantages of achieving load balancing in complex parallel computing environments.

39. Examine the role of parallel computing in optimizing scientific simulations, highlighting specific scenarios where it provides significant advantages.

40. In parallel computing, what does load balancing refer to?

41. What role does cache memory play in optimizing the performance of parallel algorithms?

42. Evaluate the role of parallel computing in the field of artificial intelligence, emphasizing its contributions to training and inference processes in machine learning models.

43. Define a parallel algorithm.

44. Compare and contrast SIMD and MIMD parallel computing architectures, highlighting their respective strengths and weaknesses.

45. What is the significance of parallel for-loops in parallel programming, and how do they contribute to improving computational speed?

46. Which parallel computing model involves a single control unit managing multiple processors?

47. Which programming language is commonly used for parallel computing?

48. Which parallel computing paradigm focuses on dividing a problem into smaller, identical tasks that can be solved independently?

49. What is the role of cache memory in parallel computing?

50. What is the primary goal of parallel computing?