SQL Server: Internal Storage and Memory Architecture Interview Questions and Answers (Day-5)

What is the max degree of parallelism option?

When SQL Server runs on a computer with more than one microprocessor or CPU, it detects the best degree of parallelism, that is, the number of processors employed to run a single statement, for each parallel plan execution. You can use the max degree of parallelism option to limit the number of processors to use in parallel plan execution.

What is MAXDOP in SQL Server?

Maximum Degree of Parallelism (MAXDOP) is an option in SQL Server that lets you configure how many of the available CPUs are used to run a single statement with a parallel execution plan. The right MAXDOP configuration depends heavily on the SQL Server edition, CPU type, and operating system.

In my experience, I have had to experiment with MAXDOP for certain statements coming from different servers in order to find the best value.

What is the MAXDOP query hint option?

You can override the max degree of parallelism value in queries by specifying the MAXDOP query hint in the query statement.
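For example (dbo.Orders and its columns are hypothetical names used for illustration), the hint below caps this one query at two processors regardless of the server-wide setting:

```sql
-- Cap this query at 2 processors, overriding the server-wide setting
SELECT CustomerID, COUNT(*) AS OrderCount
FROM dbo.Orders
GROUP BY CustomerID
OPTION (MAXDOP 2);
```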

T-SQL script to change the max degree of parallelism option:

Setting max degree of parallelism to 0 allows SQL Server to use all the available processors, up to a maximum of 64 processors.
To suppress parallel plan generation, set max degree of parallelism to 1. Set the value to a number greater than 1 to restrict the maximum number of processors used by a single query execution.
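A minimal sketch of such a script, using 4 as an example value ('max degree of parallelism' is an advanced option, so 'show advanced options' must be enabled first):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE WITH OVERRIDE;
GO
-- 0 = use all processors, 1 = suppress parallelism, N > 1 = cap at N
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE WITH OVERRIDE;
GO
```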

SQL Server 2014: What is Hekaton?

SQL Server 2014: Create Memory Optimized File Group and Table.
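A minimal sketch of the filegroup and table creation (TestDB, the container path, and the table schema are all hypothetical names for illustration):

```sql
-- Add a memory-optimized filegroup and a container to hold its data
ALTER DATABASE TestDB
    ADD FILEGROUP MemOptFG CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE TestDB
    ADD FILE (NAME = 'MemOptData', FILENAME = 'C:\SQLData\MemOptData')
    TO FILEGROUP MemOptFG;
GO
-- A durable memory-optimized table; it must have at least one index,
-- here a nonclustered hash index on the primary key
CREATE TABLE dbo.SessionState
(
    SessionID INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
    Payload   VARBINARY(8000) NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```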

What is a Read-Ahead mechanism?

The Database Engine supports a performance optimization mechanism called read-ahead. Read-ahead anticipates the data and index pages needed to fulfill a query execution plan and brings the pages into the buffer cache before they are actually used by the query.

What are the types of Read-Ahead?

There are two types of read-ahead: reading data pages and reading index pages.

How are pages internally modified and written to disk?

Page modification happens first in the buffer cache. When a page is modified in the buffer cache, it is not immediately written back to disk; instead, the page is marked as dirty.

This means that a page can have more than one logical write made before it is physically written to disk.

For each logical write, a transaction log record is inserted in the log cache that records the modification. The log records must be written to disk before the associated dirty page is removed from the buffer cache and written to disk. SQL Server uses a technique known as write-ahead logging.

What is Bookmark Lookup?

When a SQL query requests a small number of rows, the optimizer can use the nonclustered index, if available, on the column(s) in the WHERE clause to retrieve the data. If the query refers to columns that are not part of the nonclustered index used to retrieve the data, then navigation is required from the index row to the corresponding data row in the table to access these columns. This operation is called a bookmark lookup.
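A sketch of the situation (dbo.Orders and the index are hypothetical): the index covers only OrderDate, so fetching TotalDue forces a lookup into the base table for each matching row, which shows up in the execution plan as a Key Lookup (clustered table) or RID Lookup (heap):

```sql
-- Nonclustered index on OrderDate only
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
    ON dbo.Orders (OrderDate);
GO
-- TotalDue is not part of the index, so each index row found
-- requires a bookmark lookup into the base table
SELECT OrderID, OrderDate, TotalDue
FROM dbo.Orders
WHERE OrderDate = '20140115';
```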

What is a COLD, DIRTY or CLEAN Buffer?

The cold buffer cache has a couple of variants within SQL Server. Typically it refers to the buffer pool right after a restart: the data cache is not yet loaded (cold) and requires physical reads to populate the cache.

Modified buffers that have not been written to disk are termed DIRTY buffers and are typically written to disk by checkpoint processing.

A clean buffer is a data page in memory that is NOT MODIFIED.


DBCC DROPCLEANBUFFERS removes the UNMODIFIED buffers from the buffer pool (no longer hashed) and places them on the free list.

This is often used in basic performance testing to clear the data cache and force the data to be retrieved from disk as a physical read instead of a logical read.
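A typical test sequence (intended for test environments only, not production), issuing a CHECKPOINT first so that dirty pages become clean and can then be dropped:

```sql
-- Write dirty pages to disk so all buffers are clean
CHECKPOINT;
GO
-- Remove the clean buffers, forcing subsequent queries to do physical reads
DBCC DROPCLEANBUFFERS;
GO
```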

What is affinity I/O mask option?

The affinity I/O mask option binds SQL Server disk I/O to a specified subset of CPUs. In high-end SQL Server online transactional processing (OLTP) environments, this extension can enhance the performance of SQL Server threads issuing I/Os.

What should be the different values for affinity I/O mask?

  • A 1-byte affinity I/O mask covers up to 8 CPUs in a multiprocessor computer.
  • A 2-byte affinity I/O mask covers up to 16 CPUs in a multiprocessor computer.
  • A 3-byte affinity I/O mask covers up to 24 CPUs in a multiprocessor computer.
  • A 4-byte affinity I/O mask covers up to 32 CPUs in a multiprocessor computer.
  • To cover more than 32 CPUs, configure a four-byte affinity I/O mask for the first 32 CPUs and up to a four-byte affinity64 I/O mask for the remaining CPUs.
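For example, to bind SQL Server I/O to the first two CPUs (bits 0 and 1 set, giving a mask value of 3); note that a change to affinity I/O mask only takes effect after an instance restart:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
-- Mask value 3 = bits 0 and 1 set: I/O bound to CPU 0 and CPU 1
EXEC sp_configure 'affinity I/O mask', 3;
RECONFIGURE;
GO
-- Restart the SQL Server instance for the change to take effect
```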

What Is Eager Writing in SQL Server?

Eager writing is a long-standing mechanism in SQL Server that prevents flooding of the buffer pool with pages newly created by minimally logged operations, by writing those pages to disk eagerly.

In SQL Server 2014, eager write does not force dirty pages to be physically written as quickly as in previous versions. This allows dirty pages to spend their lifetime in RAM, be used by SQL Server, and be moved to the free list without ever being physically written to disk, as long as memory is available. This can improve performance by up to 300% compared to previous versions.

Generally, what kind of operations are recorded in the transaction log file?

  • The start and end of each transaction.
  • Every data modification (insert, update, or delete).
  • Every extent and page allocation or deallocation.
  • Creating or dropping a table or index.
  • Rollback operations are also logged.

What happens if FILEGROWTH is enabled or disabled?

  • If the FILEGROWTH setting is enabled for the log and space is available on the disk, the file is extended by the amount specified in growth_increment and the new log records are added to the extension.
  • If the FILEGROWTH setting is not enabled, or the disk holding the log file has less free space than the amount specified in growth_increment, a 9002 error is generated.
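FILEGROWTH is set per file with ALTER DATABASE; a minimal sketch, with hypothetical database and file names:

```sql
-- Grow the log file in fixed 256 MB increments, capped at 20 GB
ALTER DATABASE TestDB
    MODIFY FILE (NAME = 'TestDB_log', FILEGROWTH = 256MB, MAXSIZE = 20GB);
```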

Would you suggest truncating old transaction log records?


If the log records were never deleted from the transaction log, the logical log would grow until it filled all the available space on the disks holding the physical log files. At some point in time, old log records no longer necessary for recovering or restoring a database must be deleted to make way for new log records.

Do you think truncate log operation reduces the size of the physical log file?

Log truncation does not reduce the size of the physical log file; it reduces the size of the logical log and marks as inactive the virtual logs that do not hold any part of the logical log.

A log shrink operation removes enough inactive virtual logs to reduce the log file to the requested size.
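A sketch of such a shrink operation (TestDB and TestDB_log are hypothetical names; the logical file name can be found in sys.database_files):

```sql
USE TestDB;
GO
-- Shrink the log file to a target size of 512 MB
DBCC SHRINKFILE (TestDB_log, 512);
GO
```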

Please share your ideas and opinions about this topic with me, your contribution will add true value to this topic.
If anyone has doubts on this topic then please do let me know by leaving comments or send me an email.

If you like this post, then please share it with others.
Please follow dbrnd.com, I will share my experience towards the success of Database Research and Development Activity.

I put up a post every day, please keep reading and learning.
Discover Yourself, Happy Blogging !
Anvesh M. Patel.
