About Parallel computing in brief
Parallel computing is a type of computation where many calculations or the execution of processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. As power consumption by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture.

Multi-core processors brought parallel computing to desktop computers. In 2012 quad-core processors became standard for desktop computers, while servers had 10- and 12-core processors. From Moore’s law it can be predicted that the number of cores per processor will double every 18–24 months. This could mean that after 2020 a typical processor will have dozens or hundreds of cores.

In some cases parallelism is transparent to the programmer, such as in instruction-level parallelism, but explicitly parallel algorithms, particularly those that use concurrency, are more difficult to write than sequential ones. This is because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. A theoretical upper bound on the speed-up of a single program as a result of parallelization is given by Amdahl’s law.

In parallel computing, a computational task is typically broken down into several, often many, very similar sub-tasks that can be processed independently and whose results are combined afterwards, upon completion. The processing elements can be diverse and include resources such as a single computer with multiple processors, several networked computers, specialized hardware, or any combination of the above.
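The race conditions mentioned above are worth a concrete illustration. The following is a minimal sketch, not taken from the article: several threads increment a shared counter without any synchronization, so their read-modify-write sequences can interleave and some updates can be lost.

```python
# Minimal sketch of a race condition (illustrative only, not from the article):
# several threads update a shared counter without synchronization.
import threading

counter = 0

def unsafe_increment(iterations: int) -> None:
    global counter
    for _ in range(iterations):
        counter += 1  # read-modify-write is not atomic, so updates can be lost

threads = [threading.Thread(target=unsafe_increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The intended total is 400000, but interleaved increments can leave the
# counter short; the exact behaviour depends on the interpreter and timing.
print("counter =", counter)
```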
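Amdahl’s law, cited above as the theoretical upper bound on speed-up, can be written as S(n) = 1 / ((1 − p) + p/n), where p is the fraction of the program that can be parallelized and n is the number of processing elements. The short sketch below is an illustration of that formula (the function name and the example numbers are not from the article).

```python
# Minimal sketch of Amdahl's law: S(n) = 1 / ((1 - p) + p / n),
# where p is the parallelizable fraction and n the number of processors.

def amdahl_speedup(parallel_fraction: float, num_processors: int) -> float:
    """Upper bound on speed-up for a given parallel fraction and core count."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / num_processors)

if __name__ == "__main__":
    # Even with 1000 cores, a program that is 90% parallel cannot exceed ~10x.
    for n in (2, 4, 8, 100, 1000):
        print(n, round(amdahl_speedup(0.90, n), 2))
```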
Specialized parallel computer architectures are sometimes used alongside traditional processors to accelerate specific tasks. The core is the computing unit of the processor, and each core in a multi-core processor can access the same memory concurrently. To deal with the problem of power consumption and overheating, the major central processing unit manufacturers started to produce power-efficient processors with multiple cores. This led to the design of parallel hardware and software, as well as high-performance computing. Increasing processor power consumption led ultimately to Intel’s May 8, 2004 cancellation of its Tejas and Jayhawk processors, which is generally cited as the end of frequency scaling as the dominant computer architecture paradigm.

In the past, computer software has been written for serial computation. To solve a problem, an algorithm is constructed and implemented as a serial stream of instructions; only one instruction may execute at a time, and after that instruction is finished, the next one is executed. Parallel computing, by contrast, uses multiple processing elements simultaneously. This is accomplished by breaking the problem into independent parts so that each processing element can execute its part of the algorithm simultaneously with the others. It can be carried out on a number of different processors within one machine, or on clusters, MPPs, and grids. Historically, parallel computing was used for scientific computing and the simulation of scientific problems, particularly in the natural and engineering sciences, including meteorology.
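As an illustration of the decomposition described above, the following sketch (not taken from the article; the summing task, worker count, and function names are arbitrary choices) splits a problem into independent parts, processes each part in a separate worker process, and combines the partial results afterwards.

```python
# Minimal sketch of breaking a problem into independent parts and combining
# the results; summing a large list is used purely as a stand-in task.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk: list[int]) -> int:
    """Work on one independent part of the problem."""
    return sum(chunk)

def parallel_sum(data: list[int], num_workers: int = 4) -> int:
    # Break the problem into roughly equal, independent chunks.
    chunk_size = (len(data) + num_workers - 1) // num_workers
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    # Each worker processes its part simultaneously with the others.
    with ProcessPoolExecutor(max_workers=num_workers) as pool:
        partials = pool.map(partial_sum, chunks)
    # Combine the sub-results upon completion.
    return sum(partials)

if __name__ == "__main__":
    print(parallel_sum(list(range(1_000_000))))
```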
This page is based on the article Parallel computing published in Wikipedia (as of Dec. 03, 2020) and was automatically summarized using artificial intelligence.