Pessimistic locking is the classic locking scheme based on exclusive and shared locks: readers are blocked by writers, and writers are blocked by readers.
A transaction takes its locks up front, and any conflicting transaction is placed into a blocking queue, where it must wait until those locks are released.
Pessimistic locking guarantees that you only ever access live, committed data; there is no chance of reading dirty (uncommitted) data.
Large OLTP systems, such as banking and finance applications, generally prefer pessimistic locking because data accuracy is required for both read and write operations.
Yes, locking is extra overhead in a big system, but some classes of application demand that level of accuracy.
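As a minimal sketch of the pessimistic pattern, the toy Python class below uses `threading.Lock` to stand in for a database row lock (the `Account` class and its methods are illustrative, not a real database API): every transaction must acquire the exclusive lock before touching the data, so concurrent readers and writers simply wait in line, just like `SELECT ... FOR UPDATE` in an RDBMS.

```python
import threading

class Account:
    """Toy account whose balance is guarded by an exclusive lock,
    mimicking row-level pessimistic locking (illustrative only)."""
    def __init__(self, balance):
        self.balance = balance
        self._lock = threading.Lock()  # stands in for a row-level lock

    def withdraw(self, amount):
        # Acquire the lock first: any concurrent reader or writer blocks
        # here until we "commit" (release the lock), so nobody ever sees
        # an intermediate, inconsistent balance.
        with self._lock:
            if self.balance >= amount:
                self.balance -= amount
                return True
            return False

acct = Account(100)
threads = [threading.Thread(target=acct.withdraw, args=(10,)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(acct.balance)  # 50: each withdrawal saw the latest committed balance
```

Because every thread serializes on the lock, no withdrawal is lost, which is exactly the accuracy guarantee the banking-style systems above pay for with waiting time.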
Optimistic locking is also called row versioning, and it never blocks any type of transaction.
The MVCC (Multi-Version Concurrency Control) architecture, which is the most popular approach today, is built on this optimistic locking concept.
RDBMSs such as PostgreSQL and MySQL's InnoDB engine are fully based on MVCC.
Microsoft SQL Server also offers Snapshot Isolation, which is one form of optimistic locking.
Readers never block writers, and writers never block readers.
Internally, the engine maintains multiple versions of each row: while a reader is reading one version, a writer can update the same row by creating a new version of it. Subsequent readers see the new version, and the old version is marked as a dead tuple (to be cleaned up later).
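The row-versioning idea above can be sketched in a few lines of Python. The `VersionedRow` class here is a toy, not a real MVCC engine: each write appends a new `(version, value)` pair instead of overwriting in place, readers always get the latest version without blocking, and a writer whose version check fails knows another transaction got there first and must retry.

```python
class VersionedRow:
    """Toy row with MVCC-style version numbers (illustrative only):
    writers create a new version instead of blocking readers."""
    def __init__(self, value):
        # List of (version, value); superseded entries play the role
        # of "dead tuples" awaiting cleanup.
        self.versions = [(1, value)]

    def read(self):
        # Readers always see the latest committed version; never blocked.
        return self.versions[-1]

    def update(self, expected_version, new_value):
        # Optimistic check: succeed only if nobody changed the row
        # since we read it; otherwise report a conflict to the caller.
        current_version, _ = self.versions[-1]
        if current_version != expected_version:
            return False  # stale read: caller must re-read and retry
        self.versions.append((current_version + 1, new_value))
        return True

row = VersionedRow("a")
version, _ = row.read()
assert row.update(version, "b") is True   # first writer wins
assert row.update(version, "c") is False  # stale version: conflict detected
print(row.read())  # (2, 'b')
```

Note that the losing writer is never blocked; it simply learns at commit time that its snapshot went stale, which is the trade the optimistic scheme makes.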
With optimistic locking there is a good chance of reading slightly stale data (an older committed version, not uncommitted dirty data), but access is fast because it never creates dependencies between transactions.
Generally, most web and mobile applications are fine with slightly stale reads, because managing locks is a tedious and costly task.