In this post, I am sharing some important parameters for tuning PostgreSQL for better performance.
When we write SQL, we do our best every time to write optimized database code. We analyse the required indexes and the best execution plan, and we shape our SQL queries accordingly.
But sometimes we still do not get good performance, and at that point we should look at the PostgreSQL server-level configuration parameters.
Here, I have listed some important parameters.
You can set all of these parameters in the postgresql.conf file.
max_connections: We can configure the maximum number of client connections, and this is very important because every connection requires memory and other resources.
PostgreSQL can handle a few hundred connections, but if we are planning for thousands of connections, we should use a connection pooling mechanism such as PgBouncer to reduce the per-connection overhead.
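For example, on a mid-sized server expected to serve a few hundred clients (the value here is only illustrative, not a universal recommendation):

```
# postgresql.conf -- illustrative setting
max_connections = 200    # each connection starts a backend process and consumes memory
```

For thousands of clients, keep max_connections modest and put a pooler such as PgBouncer in front of the database.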
shared_buffers: PostgreSQL starts a separate server process for each database connection, and instead of each process having its own big cache, all of them share one dedicated memory area, shared_buffers, which PostgreSQL uses for caching data pages. This cache provides a lot of the performance gain in your system.
If we have a heavier load, we can set a larger value for shared_buffers.
Generally, we should set shared_buffers to about 1/4 of the main memory.
You can calculate it using this formula:
(0.25 * Main Memory)
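As a quick sketch, assuming a dedicated database server with 16GB of RAM (the machine size is an assumption for illustration):

```
# 0.25 * 16GB main memory = 4GB
shared_buffers = 4GB
```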
wal_buffers: This parameter defines how much memory is used for buffering write-ahead log entries before they are flushed to disk. The default is a really very small value, but it is worth increasing on heavily loaded servers.
Recommended value is: 8MB.
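On a busy write-heavy server, that recommendation would look like:

```
wal_buffers = 8MB    # raised from the small default for a heavily loaded server
```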
work_mem: This is also one of the most important parameters; it sets a private work space for each connection.
While shared_buffers is a shared cache used by all connections, work_mem sets how much private memory each individual query can use for operations such as sorts and hash joins.
If it is configured wrongly, it can create a big problem for us, because a big, complex query can use several multiples of work_mem at once, and if it runs many times in parallel the server can run out of memory.
We should configure this parameter based on the number of connections and the type of queries we are running on the server.
We can calculate it using this method:
work_mem = Memory (MB) / max_connections
If the result is very close to max_connections, divide by 2 again.
The result should be at least 16MB; otherwise we should buy more RAM.
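Applying this rule of thumb to a hypothetical server with 8GB of RAM and 200 connections (both values assumed for illustration):

```
# 8192 MB / 200 connections = ~41 MB, which is not close to 200, so no extra halving
work_mem = 41MB
```

Remember that this is only a starting point; each query can use several multiples of work_mem, so watch the actual memory usage.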
maintenance_work_mem: This is similar to work_mem, but it is used only for maintenance operations such as VACUUM, CREATE INDEX, and CLUSTER.
Generally, we do maintenance when the query load is not too high.
I suggest you set a big value for this parameter so that our maintenance tasks complete quickly.
Default is 16MB and recommended is (Memory / 8).
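With the same illustrative 16GB server, the recommendation works out to:

```
# Memory / 8: 16GB / 8 = 2GB
maintenance_work_mem = 2GB
```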
effective_cache_size: This parameter is mostly dedicated to the PostgreSQL query planner, because the query planner is responsible for choosing the best optimized execution path for a query.
It does not allocate any memory itself; it only tells the planner how much memory, counting both shared_buffers and the operating system cache, is likely to be available for caching data, so the planner can favour index scans when the data is likely to be cached.
Default is 65536 and recommended is (Memory (MB) * 0.75).
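Again assuming a server with 16GB of RAM:

```
# Memory (MB) * 0.75: 16384 MB * 0.75 = 12288 MB
effective_cache_size = 12GB
```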
autovacuum: This is also one of the important parameters, because PostgreSQL uses MVCC, so UPDATEs and DELETEs leave behind dead tuples.
Autovacuum is really very much required, and it solves lots of performance-related issues by removing those dead tuples.
Autovacuum runs in the background and vacuums a table once enough dead tuples accumulate, so a manual VACUUM later executes fast because it has very little left to remove.
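Autovacuum is on by default; the settings below show the stock behaviour and are a reasonable starting point:

```
autovacuum = on
autovacuum_naptime = 1min               # minimum delay between autovacuum runs on a database (default)
autovacuum_vacuum_scale_factor = 0.2    # vacuum once ~20% of a table's rows are dead (default)
```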
synchronous_commit / fsync: By default, PostgreSQL flushes every commit to disk, so the disk's flush rate effectively limits how many transactions can commit per second.
Whether synchronous_commit should be ON or OFF is up to our choice: when we put it OFF, bulk insertion is faster, but there is a chance of losing the most recently committed transactions if the server crashes; when we put it ON, bulk insertion may slow down, but the chance of data loss is very small. Turning fsync OFF is far more dangerous, because a crash can corrupt the database itself.
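For a one-off bulk load, a cautious sketch would be:

```
# during a bulk load only: faster commits, but a crash may lose the latest transactions
synchronous_commit = off

# do NOT turn fsync off on data you care about; a crash can corrupt the whole cluster
# fsync = on
```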