Over the past decade, a great deal of analysis and research has gone into Parallel Database Systems and Parallel Query Processing.
The success of a Parallel Database System depends on the relational database model and on good CPU and disk performance. Without good CPU and disk performance, we should not use Parallel Query Processing.
When we execute any SQL query, the system creates an execution plan and submits it to the internal query executor. The query executor collects the execution plans and executes them in sequence.
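As a rough mental model of this sequential behaviour (not the actual PostgreSQL executor), the idea can be sketched in Python; the plan objects here are plain callables, and every name is hypothetical:

```python
from collections import deque

def run_executor(plans):
    """Run submitted plans one at a time, in submission order."""
    queue = deque(plans)        # plans wait in a FIFO queue
    results = []
    while queue:
        plan = queue.popleft()  # pick the next submitted plan
        results.append(plan())  # run it to completion before moving on
    return results

# Usage: each "plan" is just a callable standing in for a real plan tree.
out = run_executor([lambda: 1 + 1, lambda: 2 * 3])
```

Because the executor finishes one plan before starting the next, a single expensive plan blocks everything queued behind it.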
For example, if one SQL query requires 100,000,000 manipulations, a single CPU thread takes a long time to execute it.
In Parallel Query Processing, more than one worker process runs in the background, and the workers are responsible for the same task or for multiple tasks in a shared fashion.
For example, if 4 worker processes execute one single big SQL query that requires 100,000,000 manipulations, the whole process can complete up to four times faster than with a single worker process or a single CPU thread running the same SQL query.
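The divide-and-conquer idea above can be sketched with Python's standard multiprocessing module. A simple summation stands in for the query's manipulations (scaled down for a quick run), and all names here are illustrative, not PostgreSQL internals:

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """One worker's share: sum the half-open range [lo, hi)."""
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    """Split summing range(n) across several worker processes."""
    step = n // workers
    chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with Pool(workers) as pool:
        # Each worker handles one chunk; the results are combined at the end.
        return sum(pool.map(partial_sum, chunks))

# Usage: with 4 workers, each process handles a quarter of the range.
total = parallel_sum(100_000)
```

In practice the speed-up is rarely a perfect 4x, because splitting the work, coordinating the workers, and combining their results all add overhead.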
Basically, there are three main types of Parallel Database System architecture.
In a shared disk system, the same disks are shared among the different background worker processes. This approach is not very popular.
With a shared nothing system, each processor owns a portion of the database, and only that portion may be directly accessed or manipulated by that processor. A two-phase commit protocol is required to coordinate a transaction commit which involves multiple nodes.
A shared nothing system is based on the concept of declustering. Declustering a relation involves distributing its tuples among multiple nodes according to some distribution criterion, such as applying a hash function to the key attribute of each tuple.
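A minimal sketch of hash-based declustering, assuming tuples are plain Python tuples whose first column is the key attribute; the per-node lists stand in for real nodes:

```python
def decluster(tuples, num_nodes, key_index=0):
    """Assign each tuple to a node by hashing its key attribute."""
    nodes = [[] for _ in range(num_nodes)]
    for t in tuples:
        # The hash of the key decides which node owns this tuple.
        node_id = hash(t[key_index]) % num_nodes
        nodes[node_id].append(t)
    return nodes

# Usage: four tuples spread across four hypothetical nodes.
rows = [(1, "a"), (2, "b"), (3, "c"), (4, "d")]
placement = decluster(rows, num_nodes=4)
```

Each tuple lands on exactly one node, so a query on the key can be routed directly to the owning node, while a full scan can run on all nodes at once.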
In the context of query processing, the main advantage of a shared nothing system is its scalability.
One of the best examples of this architecture is the Greenplum relational database.
In a shared everything system, main memory, in addition to disks, is also shared across all the processors, making system management and load balancing much easier.
First, there are no communication delays because messages are exchanged through the shared memory, and synchronization can be accomplished by cheap, low level mechanisms.
Second, load balancing is much easier because the operating system can automatically allocate the next ready process to the first available processor.
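Both points can be illustrated with a shared-memory counter: the processes synchronize through a plain lock on shared memory instead of exchanging messages, and the operating system schedules the worker processes onto available processors. This is a Python sketch of the idea, not a database implementation:

```python
from multiprocessing import Process, Value, Lock

def bump(counter, lock, times):
    """Increment a shared counter; the lock is the cheap, low-level sync."""
    for _ in range(times):
        with lock:
            counter.value += 1

def shared_counter_demo(workers=2, times=10_000):
    counter = Value("i", 0)   # an int living in shared memory
    lock = Lock()
    procs = [Process(target=bump, args=(counter, lock, times))
             for _ in range(workers)]
    for p in procs:
        p.start()             # the OS places each process on a free CPU
    for p in procs:
        p.join()
    return counter.value
```

Without the lock, the two processes would race on the shared counter and some increments would be lost; with it, every increment through shared memory is visible to both.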
Tomorrow, you can read a good article about a performance test of Parallel Query Processing using the PostgreSQL 9.6 database system.