Latch Contention Troubleshooting

Systematic Latch Contention Troubleshooting in Oracle

By Tanel Poder

This article was originally published in the IOUG Select Journal, Q1 2010. Thanks to John Kanagaraj for editing the IOUG Select article and pushing me on deadlines :-) 

Thanks to the IOUG team for giving me the IOUG Select Journal Editor's Choice award (2011) for this article!


As an Oracle DBA, developer or performance analyst, you may have run into what is termed "latch contention" at various points. So what exactly is a latch, and why do we have contention on it? In this article, we will take an in-depth look at latches and how to detect and resolve such contention. Whether you are a newbie or an experienced old-timer, we hope this article will shed a little more light on this ill-understood subject.

What is a latch?

I will put it very simply: a latch is a lock. In fact, a latch is just another special type of lock in Oracle. Oracle uses many different types of locks internally, for example enqueue locks, library cache locks, library cache pins, buffer cache pins, mutexes and latches, and these different lock types are useful for different purposes. Enqueue locks are sophisticated, provide ordered queuing capabilities and are used for cases where the locks are held for relatively long durations. Latches, on the other hand, are much less sophisticated and more lightweight, and are used for protecting very short operations on memory structures, such as internal linked-list modifications, shared pool memory allocation, library cache object lookups and so on. Latches are very low-level locks managed deep inside the Oracle kernel code, and users or applications cannot directly acquire or release them.


So, how is a latch physically implemented? Every latch is just a memory structure in the SGA, usually 100-200 bytes in size, depending on your Oracle version, hardware platform and whether you are running 32- or 64-bit Oracle.


Contrary to a common misconception, latches are in no way related to OS synchronization mechanisms such as mutexes or semaphores! Latches are just plain memory structures in the SGA, and the OS doesn't even know that latches exist. Taking a latch does not require any system calls or OS interaction; if it did, latch operations would slow down considerably, as such system calls are expensive. If a process wants to take a latch, it just reads the latch memory to check whether the value is zero (i.e. the latch is not taken) and, if so, changes the value to indicate that the latch is taken. For "exclusive-get-only" latches the value will be hardcoded, such as "0xFF" on Intel platforms, but for "shared-get-capable" latches the acquiring process will put its own Oracle PID in there if it takes that latch in exclusive mode. If a shared latch is held by many processes simultaneously, then the latch value will just show the reference count (the number of shared-mode holders), and tracking individual processes is done via state object memory structures.


So, if a latch is just some memory content in the SGA, modified by processes at will, who then guarantees that two processes (running on different CPUs) don't accidentally take the same latch simultaneously, without knowing about each other? This would be a race condition and, if allowed, the Oracle instance could quickly end up in a crash or a hang. The solution lies in the atomic "test-and-set" and "compare-and-swap" (CAS) instructions which all modern hardware supports. These allow the Oracle kernel code to perform latch value checking and changing in a single atomic instruction. On some platforms (such as Intel), this needs to be combined with a special flag which locks the memory line where the latch lives for a single CPU's access for the duration of the latch get. This is done with the LOCK CMPXCHG instruction on the Intel platform. CMPXCHG means Compare and Exchange, and the LOCK prefix guarantees that only one CPU at a time can access that latch (remember, latches are usually held for very short periods, measured as low as just microseconds).

Since the Oracle SGA holds lots of information in various types of memory structures, it is not enough to have just one latch for each type of memory structure to be protected. For example, if you have an 8 GB buffer cache with 8 KB blocks, then you have one million buffers in the buffer cache. Reasonable concurrent access to all these million buffers can't be provided by a single latch (in this case a "cache buffers chains" latch). Likewise, the library cache holds thousands, if not hundreds of thousands, of cursors and objects in it; reasonable access to all of them would not be possible with a single "library cache" latch, especially if you have lots of CPUs in your database server. Therefore, Oracle has split the memory structures that need protection into logical sub-pieces, and each piece is protected by a separate sub-latch known as a child latch. Some structures, such as the redo log buffer, are physically split into ranges, and each range is protected by (and accessed using) a separate latch (a redo allocation latch for the log buffer). In some cases, an algorithm such as a hash or modulus function is used for determining which exact child latch should be used when accessing an object. For example, if you run a database with a large buffer cache (100 GB+), you may have millions of cache buffers chains child latches in use. When accessing a data block buffer, Oracle will run a modulus function on the data block address (DBA) of the block to determine which cache buffers chains latch to get to satisfy that access. The right library cache latch is also determined by running a modulus function on the hash value of the library cache object name (the HASH_VALUE column in V$SQL, for example).


Since latches are held for very short periods, the acquiring process doesn't give up immediately when it cannot get a busy latch on the first try. Instead of going to sleep (and waiting) right away, it retries a few thousand times. The reasoning here is that if a latch is currently busy, it will normally be freed very quickly, so instead of taking the trouble of going to sleep (which requires a syscall and a context switch) we "spin", burning some more CPU, in the hope of getting the latch some (hundreds of) microseconds later. In other words, some CPU time is sacrificed, but ultimately this saves time as it may avoid the need for the process to "sleep" or suspend itself. This is called spinning (busy-waiting). Oracle registers a "latch free" wait event only after the process has failed to acquire the latch with the initial spinning and goes to sleep.

Note that some latch get operations explicitly don't perform a "spin-get", so the latch get fails immediately if the first attempt doesn't succeed. This technique is used for some latches where Oracle doesn't care which exact child latch it takes and is called an immediate or "not willing to wait" latch get. The "redo copy" latch is one such example: it helps the log writer (LGWR) process to know whether anyone is currently copying redo data into the log buffer, but not who exactly is copying and where, as this does not matter to LGWR.

Systematic troubleshooting of latch contention

So, after this brief introduction to latching in Oracle, let's see why latch contention occurs and how to troubleshoot it. Remember: essentially, a latch is a type of lock. It controls concurrent access to SGA memory structures so that we don't end up with race conditions, SGA corruptions and crashes. So, take note of this simple but important concept in latch (or any lock) contention: latch contention occurs only when some other process is already holding the latch which our process wants to acquire!


In other words, latch contention can only happen when these two conditions are satisfied:


1)    Someone must try to get the latch

2)    Someone else must be already holding the latch


Latch contention means that some process is forced to sleep because it cannot acquire the latch it wants, due to someone else already holding it. There will be no latch contention unless both of the above conditions are met! Therefore, whenever we see latch contention, we need to find out two things:


1)    Who is trying to get the latch (and why)?

2)    Who is already holding the latch (and why)?


These two questions need to be asked for any kind of lock contention, and they apply here as well. The following sections explain how to answer them in Oracle.

Question 1 - Who is trying to get the latch and why?

The first question can be answered with the help of any session-level tool based on the Oracle Wait Interface, such as the V$SESSION_WAIT view, Active Session History (ASH) or SQL_TRACE. All these tools show for which exact (child) latch address the wait occurred.


Note that I did not mention instance-level performance data collection tools such as Statspack or AWR reports in the above list. The problem with these tools is that they only gather and store system-wide performance data, such as V$SYSSTAT or V$SQL snapshots, and thus they don't know anything about which exact sessions are experiencing the latch contention and waits. These tools can show that someone is experiencing latch contention in the database instance, but finding out who and exactly why would involve a significant amount of guesswork, previous experience and luck. Because of this information gap, there currently are no systematic approaches to latch contention troubleshooting published, and hence such troubleshooting is still perceived as a very complex task. This article aims to change that: there are ways to fill this information gap, which allow us to become systematic and not depend on luck.


There are two common trigger points for latch contention troubleshooting. One is that a DBA sees (from a Statspack or AWR report or other monitoring tools) that a significant amount of instance-wide response time is spent waiting for some latch (in the "Top 5 Timed Events" section, for example). The other trigger point starts from the end user (who's complaining) or some measured business task which takes too long. In this case the DBA would identify the session(s) used by that user or task and use a session-level tool (as mentioned above) to quantify where the response time is spent.


Question 2 - Who is holding the latch and why?

The second question can be easily answered (starting with Oracle 8.0) by querying the V$LATCHHOLDER view, which lists any process/session currently holding a latch. Its description is shown below:


SQL> describe v$latchholder

 Name                    Null?    Type

 ----------------------- -------- ----------------

 PID                              NUMBER

 SID                              NUMBER

 LADDR                            RAW(4)

 NAME                             VARCHAR2(64)

 GETS                             NUMBER


Although little known, this is an extremely useful view for systematic latch contention troubleshooting as it provides one important bit of information which no other Oracle view or trace provides - the session ID of the latch holder. Once you know the holder's SID, you can focus on that session and see exactly what it is doing that it needs to hold the latch for so long.


The columns of interest are SID (session ID), NAME (latch name) and LADDR (latch address in memory). LADDR is the latch's unique identifier within an instance and helps to determine which exact child latch was held in case the latch has multiple children.


As I said earlier, latches are usually held for very short durations, so manually querying this view may not return anything at all, even after querying it multiple times. For this reason, I have written a tool for high-frequency profiling of latch holders, which I will introduce now. It's called LatchProf and is a plain SQL script which doesn't require any modifications to the database or schema. You can download it for free from my website – just search for "latchprof" using the search button on the right side of the page.


Using LatchProf to find who is holding latches the most

Let's assume that we have noticed shared pool and library cache latch contention, reported by a Statspack/AWR report or some monitoring tool. Remember what I mentioned earlier: latch (and any other lock) contention occurs only if someone is already holding the latch I am trying to get. Instead of guessing who the troublemakers are, I can just run LatchProf, which samples V$LATCHHOLDER at high frequency and reports the top latch holders.


I could start by looking at which latch types are the busiest in the system. Note that as there are many latch types in Oracle, and many of them have lots of child latches, the aggregate "busyness" of a latch type does not necessarily reflect the (child) latch with the highest contention. While the Oracle Wait Interface is the only correct source for detecting contention, LatchProf should be used for drilldown once the latch experiencing contention has been detected.


To illustrate the capabilities of LatchProf, I will start with a simple example which shows an overview of how busy different latch types are. By "busy" I mean how much of the time these latches are held by somebody.


LatchProf takes four parameters: the columns to group the profile by (such as name or sid,name,sqlid), a SID filter, a latch name (or latch address) filter, and the number of samples to take.


So, let's run LatchProf now. After a few seconds of sampling, LatchProf returns the following profile:

SQL> @latchprof name % % 100000

-- LatchProf 1.20 by Tanel Poder ( )

NAME                                      Held       Gets  Held %     Held ms Avg hold ms

----------------------------------- ---------- ---------- ------- ----------- -----------

library cache                             3319       2815    3.32      91.273        .032

shared pool                               1694       1131    1.69      46.585        .041

library cache lock                         580        580     .58      15.950        .028

shared pool simulator                      482        100     .48      13.255        .133

kks stats                                  269        258     .27       7.398        .029

enqueues                                   158        158     .16       4.345        .028

library cache pin                           67         67     .07       1.843        .028

enqueue hash chains                         62         62     .06       1.705        .028

row cache objects                            4          4     .00        .110        .028

shared pool sim alloc                        2          2     .00        .055        .028

10 rows selected.

The output column meanings are as follows:

Name            - Latch name

Held            - During how many samples, out of the total (100000), the particular latch was held by somebody

Gets            - How many latch gets against that latch were detected during LatchProf sampling

Held %          - What percentage of the sampling time the latch was held by somebody. This is the main column to look at in order to see who/what holds the latch the most (the LatchProf output is reverse-ordered by this column)

Held ms         - How many milliseconds in total was the latch held during the sampling

Avg hold ms     - Average latch hold time in milliseconds (normally latches are held from a few to a few hundred microseconds)


Note that LatchProf works on Oracle versions 9.2 and higher, but the Gets and Avg hold ms columns are shown only on Oracle 10g and higher, as the required data isn't externalized in V$LATCHHOLDER until 10g.

So, from the above output we can see that the library cache latches were the busiest during the LatchProf run (3.32% of the sampling time some library cache latch was held by some session).

When a latch is busy, one obvious question would be: "which sessions are holding the latch the most?" Sometimes all the sessions in an instance are both victims and troublemakers (each contributing to the latch contention problem just a little), but sometimes there are one (or a few) heavy troublemakers and most other sessions end up as victims.

Luckily, LatchProf allows us to break down the latch holder statistics by SID, and since I am interested only in the library cache latches for now, I can monitor just this latch type, as shown below:


SQL> @latchprof sid,name % "library cache" 100000

-- LatchProf 1.20 by Tanel Poder ( )

       SID NAME                                      Held       Gets  Held % Avg hold ms

---------- ----------------------------------- ---------- ---------- ------- -----------

       120 library cache                             5059       4618    5.06        .037

       120 library cache lock                         189        188     .19        .034

       120 library cache pin                          135        135     .14        .034

       139 library cache lock                         131        131     .13        .034

       139 library cache                              109        109     .11        .034

       113 library cache lock                          60         60     .06        .034

       113 library cache                               48         48     .05        .034

7 rows selected.

In the listing above, we have found the SID of the main "offender", i.e. SID 120, which apparently holds library cache latches over an order of magnitude more of the time (5.06%) than the next most offending holder sessions (139, 113).

Since we know the SID of the troublemaker, we can now see what this session is doing and determine which SQL it's running, by querying V$SESSION or ASH or by enabling SQL trace. In fact, the easiest way to get an idea of what kind of work a session is doing is to run my Snapper script. This script is also available on my site (and doesn't make any changes to the database, by the way!). Snapper takes four parameters; in the example below they are: stats - what output to show, 5 - how many seconds to sleep between session performance snapshots, 1 - how many snapshots to take, and 120 - the SID to monitor:


SQL> @snapper stats 5 1 120

-- Session Snapper v2.02 by Tanel Poder ( )


 SID, USERNAME, TYPE, STATISTIC                         ,   DELTA, HDELTA/SEC,  %TIME


 120, SYS     , STAT, recursive calls                   ,    8904,      1.78k,

 120, SYS     , STAT, recursive cpu usage               ,     241,       48.2,

 120, SYS     , STAT, CPU used by this session          ,     249,       49.8,

 120, SYS     , STAT, enqueue requests                  ,    4452,      890.4,

 120, SYS     , STAT, enqueue releases                  ,    4452,      890.4,

 120, SYS     , STAT, calls to get snapshot scn: kcmgss ,    4452,      890.4,

 120, SYS     , STAT, parse time cpu                    ,     214,       42.8,

 120, SYS     , STAT, parse time elapsed                ,     452,       90.4,

 120, SYS     , STAT, parse count (total)               ,    4452,      890.4,

 120, SYS     , STAT, parse count (hard)                ,    4452,      890.4,

 120, SYS     , STAT, execute count                     ,    4452,      890.4,

 120, SYS     , TIME, hard parse elapsed time           , 3522257,   704.45ms,  70.4%

 120, SYS     , TIME, parse time elapsed                , 4425428,   885.09ms,  88.5%

 120, SYS     , TIME, PL/SQL execution elapsed time     ,   91410,    18.28ms,   1.8%

 120, SYS     , TIME, DB CPU                            , 2790000,      558ms,  55.8%

 120, SYS     , TIME, sql execute elapsed time          , 5035731,      1.01s, 100.7%

 120, SYS     , TIME, DB time                           , 5035731,      1.01s, 100.7%

--  End of snap 1, end=2009-11-10 03:37:30, seconds=5

PL/SQL procedure successfully completed.


The first thing that immediately jumps out from SID 120's execution profile is that it does heavy hard parsing: 890 hard parses per second! That's a pretty evident reason for the frequent library cache latch holding reported by LatchProf. We can be almost sure that the application code is not using bind variables where it should. The next step would be to identify a couple of SQL statements from this session, either by briefly enabling SQL_TRACE or just by querying V$SESSION.SQL_ID and mapping it to V$SQL, and then call the developer(s) who wrote that code and get them to fix it!

Let's assume that we got this code fixed, but our monitoring system still reports some ongoing library cache latch contention and we don't see an obvious troublemaker anymore (all sessions reported are holding the monitored latches for roughly equal amounts of time), as seen below:


SQL> @latchprof sid,name % "library cache" 100000

-- LatchProf 1.20 by Tanel Poder ( )

       SID NAME                   Held   Gets  Held %     Held ms Avg hold ms

---------- --------------------- ----- ------ ------- ----------- -----------

       116 library cache lock      657    657     .66      12.549        .019

       116 library cache           594    594     .59      11.345        .019

       134 library cache lock      211    211     .21       4.030        .019

       134 library cache           122    122     .12       2.330        .019

       139 library cache            82     82     .08       1.566        .019

       139 library cache lock       64     64     .06       1.222        .019

6 rows selected.                


So, we now have two ways to move forward:

1)    Take one of the reported sessions and see what it is doing, by enabling SQL trace or similar. However, this approach may not tell you which of the SQL statements is causing the latches to be held

2)    Run LatchProf with SQLID option, which also reports the SQLIDs (or hash values in 9i) of the statements causing the latches to be held

I will proceed with option 2 and specify SQLID (or SQLHASH in 9i) in the first parameter to LatchProf:

SQL> @latchprof name,sqlid % "library cache" 100000

-- LatchProf 1.20 by Tanel Poder ( )

NAME                 SQLID          Held   Gets  Held % Avg hold ms

-------------------- ------------- ----- ------ ------- -----------

library cache        5csdgx2jquqvj   858    858     .86        .028

library cache lock   8vdny41vqbq7t   377    377     .38        .028

library cache lock   5csdgx2jquqvj   369    369     .37        .028


This shows me that the main statement causing library cache latches to be held has SQL_ID '5csdgx2jquqvj'.

Using the SQL ID I can find the text of the query as shown below:

SQL> select executions, sql_text from v$sql where sql_id = '5csdgx2jquqvj';


EXECUTIONS SQL_TEXT
---------- -------------------------------------------------

  40522096 select count(*) from dual where rownum = 1


Apparently someone is running the above query really frequently! (Of course, I should check how fast the EXECUTIONS count is increasing right now, instead of just looking at the total number of executions since that cursor was loaded into the library cache.)

Common reasons for such frequently re-executed but useless SQL are either someone's really bad piece of code or a bad connection pool design, which constantly polls the DUAL table to verify whether the connection is still alive.

Note that it's possible to break the latch holders down by both SID and SQLID to get a more detailed view of latch holders:


SQL> @latchprof sid,name,sqlid % "library cache" 100000

-- LatchProf 1.20 by Tanel Poder ( )

 SID NAME                SQLID          Held  Gets  Held % Avg hold ms

---- ------------------- ------------- ----- ----- ------- -----------

 134 library cache       5csdgx2jquqvj   462   462     .46        .020

 134 library cache lock  5csdgx2jquqvj   395   395     .40        .020

 134 library cache lock  8vdny41vqbq7t   305   305     .31        .020

 116 library cache lock  5csdgx2jquqvj   100   100     .10        .020

 139 library cache       5csdgx2jquqvj    98    98     .10        .020

 116 library cache       5csdgx2jquqvj    93    93     .09        .020

 139 library cache lock  5csdgx2jquqvj    80    80     .08        .020

 134 library cache       8vdny41vqbq7t    73    73     .07        .020

 113 library cache       5csdgx2jquqvj    62    62     .06        .020

 139 library cache lock  8vdny41vqbq7t    60    60     .06        .020

 116 library cache lock  8vdny41vqbq7t    60    60     .06        .020

 113 library cache lock  8vdny41vqbq7t    35    35     .04        .020

 113 library cache lock  5csdgx2jquqvj    31    31     .03        .020

 113 library cache                         1     1     .00        .020

14 rows selected.


The above example was meant as an introduction to the capabilities of LatchProf. I used library cache latches as an example, but LatchProf can show the same information for any other type of latch (there are hundreds of different types of latches in Oracle!).

The key differences between LatchProf and other latch contention troubleshooting approaches are:

1)    LatchProf shows you who is holding the latches (who is causing the contention), instead of just showing who's waiting for them

2)    LatchProf shows you which actual sessions are holding the latches (not some ambiguous system-wide metrics), so you can simply focus on the troublemaking sessions and see what they are doing

3)    LatchProf shows you which SQL statement caused the taking of the latch, thus narrowing down the root cause even further


Having established the basics, let's now dig a bit deeper with another example.

Using LatchProfX for finding the hot block causing cache buffers chains latch contention

Let's say your monitoring system reported significant cache buffers chains latch contention and you want to diagnose it. Cache buffers chains (CBC) latch contention can occur due to a number of reasons. One of the most common is an extremely hot block in the buffer cache. Of course, we don't know the root cause in advance, so let's troubleshoot this problem systematically.

First, note that there are many CBC latches in an Oracle instance. On my test server, with a small buffer cache, I have 4096 CBC child latches. A big production database with a large buffer cache can easily have hundreds of thousands of child latches! You can check the count using the following command:

SQL> select count(*) from v$latch_children

  2  where name = 'cache buffers chains';

  COUNT(*)
----------
      4096

So, as a first step we should identify which CBC latches are experiencing the contention – i.e. is the contention lightly spread across many CBC latch children or is there a single child latch experiencing much heavier contention compared to others?

We will need to use the Oracle Wait Interface, since it can report the contention (waits) for latches. This requires us to either use SQL_TRACE or V$SESSION_WAIT/ASH, or calculate deltas of V$LATCH_CHILDREN.WAIT_TIME, to see which child latch experiences the most sleeps. I like the first two approaches more, as they allow us to tie the latch waits to a session and to whatever it was doing. For example, SQL_TRACE reports the following lines when a traced session is experiencing CBC latch contention:


WAIT #1: nam='latch: cache buffers chains' ela= 204 address=15742234376 ...

WAIT #1: nam='latch: cache buffers chains' ela= 8 address=15742234376   ...

WAIT #1: nam='latch: cache buffers chains' ela= 2 address=15742234376   ...

WAIT #1: nam='latch: cache buffers chains' ela= 7 address=15742234376   ...


The "address" part is important: it shows you exactly which child latch is experiencing the contention (every child latch in the instance has a unique memory address). I took the most prevalent address (in decimal) from the tracefile, converted it to hex and queried V$LATCH_CHILDREN with it, as seen below:


SQL> select name, gets, misses, sleeps

  2  from v$latch_children

  3  where addr = hextoraw(trim(to_char(15742234376, '0XXXXXXXXXXXXXXX')));

NAME                            GETS     MISSES     SLEEPS

------------------------- ---------- ---------- ----------

cache buffers chains        62178598     125541        805


Alternatively, you can sample V$SESSION or query ASH (if you have the Diagnostics Pack license!) as shown below:



SQL> select * from (
  2      SELECT

  3          event

  4        , TRIM(TO_CHAR(p1, 'XXXXXXXXXXXXXXXX')) latch_addr

  5        , TRIM(ROUND(RATIO_TO_REPORT(COUNT(*)) OVER () * 100, 1))||'%' PCT

  6        , COUNT(*)

  7      FROM

  8          v$active_session_history

  9      WHERE

 10          event = 'latch: cache buffers chains'

 11      AND session_state = 'WAITING'

 12      GROUP BY

 13          event

 14        , p1

 15      ORDER BY

 16          COUNT(*) DESC

 17  )

 18  WHERE ROWNUM <= 10

 19  /

EVENT                         LATCH_ADDR        PCT      COUNT(*)

----------------------------- ----------------- ------ ----------

latch: cache buffers chains   3AA4F6F08         4.1%           50

latch: cache buffers chains   3AA4D4908         3.7%           45

latch: cache buffers chains   3AA4E2558         3.4%           42

latch: cache buffers chains   3AA4F00E0         3.2%           39

latch: cache buffers chains   3AA519440         2.9%           36

latch: cache buffers chains   3AA527090         2.8%           34

latch: cache buffers chains   3AA4FDD30         2.8%           34

latch: cache buffers chains   3AA50B8B8         2.7%           33

latch: cache buffers chains   3AA4DB730         2.6%           32

latch: cache buffers chains   3AA534C18         2.6%           32

10 rows selected.


From the above query I get the same child latch address (in hex) as reported by SQL_TRACE. Now that I know the actual CBC child latch address, I can run LatchProf filtering by the individual child latch address! That way, I can monitor the holders of the particular child latch experiencing the worst contention.


SQL> @latchprof sid,name,sqlid % 3AA4F6F08 100000 

-- LatchProf 1.20 by Tanel Poder ( )

   SID NAME                   SQLID          Held  Gets  Held % Avg hold ms

----- ---------------------- ------------- ----- ----- ------- -----------

  136 cache buffers chains   f0cxkf0q803f8    58    58     .06        .011

  120 cache buffers chains   f0cxkf0q803f8    28    28     .03        .011

  119 cache buffers chains   f0cxkf0q803f8    16    16     .02        .011

  148 cache buffers chains   f0cxkf0q803f8     4     4     .00        .011

  118 cache buffers chains   75621g9y3xmvd     1     1     .00        .011


The top SQLID causing the holding of the CBC child latch at address 3AA4F6F08 is 'f0cxkf0q803f8'.


SQL> select executions, sql_text from v$sql where sql_id = 'f0cxkf0q803f8';



EXECUTIONS SQL_TEXT
---------- --------------------------------------------------
   2115217 SELECT /*+  first_rows index(customers,
           customers_pk) index(orders, order_status_ix) */
           ...
           <= 4


From this, I know both the top sessions and the top SQL statements causing this latch contention! Since we now know the SQLID, the next step could be to check the execution plan of the given SQL and see why it is accessing the data block protected by our latch so frequently. Sometimes the contention comes from many frequent revisits of the block(s) due to a bad execution plan, which may involve nested loop joins in improper locations, looping over many rows. However, in other situations the problem may just come down to a hot block, such as an index root block, visited by a large number of concurrent sessions.

Still, it's useful to know what kind of hot blocks are causing the latch contention.


The good news is that we can identify the exact data block which is so hot that it causes the contention. There are a few approaches out there for identifying the hot block, but the most accurate of them is the LatchProfX script. LatchProfX is available on my website, again for free. Note the X at the end of the script name: it means LatchProf eXtended, or LatchProf which requires X$ tables, which in turn means extended privileges.


LatchProfX shows an additional column, called "object", which gives information about the object protected by a given latch. For cache buffers chains latches, this column shows the Data Block Address (DBA) of the block whose access caused the latch get. See the example below:


SQL> @latchprofx sid,name,sqlid,object % 3AA4F6F08 100000

-- LatchProfX 1.20 by Tanel Poder ( )

  SID NAME                  SQLID           OBJECT  Held  Gets  Held % Avg hold ms
----- --------------------- ------------- -------- ----- ----- ------- -----------
  148 cache buffers chains  f0cxkf0q803f8  1C0005A    40    40     .04        .010
  136 cache buffers chains  f0cxkf0q803f8  1C0005A    39    37     .04        .011
  120 cache buffers chains  f0cxkf0q803f8  1C0005A     4     4     .00        .010
  118 cache buffers chains  75621g9y3xmvd  1C0005A     1     1     .00        .010


It looks like the block at data block address 1C0005A (in hex) is the troublemaker!

Let's check which segment this block belongs to:


SQL> @dba 1C0005A

    RFILE#     BLOCK#
---------- ----------
         7         90

Press enter to find the segment using V$BH (this may take CPU time), CTRL+C to cancel:

STATE      BLOCK_CLASS         DATA_OBJ# OBJECT_TYPE         object
---------- ------------------ ---------- ------------------- -----------------
xcur       data block              62691 INDEX PARTITION     SOE.CUSTOMERS_PK


As seen above, this block belongs to an index partition of the SOE.CUSTOMERS_PK index. The block itself lives at relative file number 7, block number 90 in that file. It is possible to dump the contents of the block using the ALTER SYSTEM DUMP DATAFILE 7 BLOCK 90 command (note that this ALTER SYSTEM command takes an absolute file number as a parameter, while the number above is a relative file number, so it should first be converted to an absolute file number using DBA_DATA_FILES).
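The arithmetic that the @dba script performs to split a data block address into relative file# and block# can be sketched in a couple of lines of Python. The bit layout used here (10 high bits of relative file number, 22 low bits of block number) is the standard Oracle DBA encoding, the same split that DBMS_UTILITY.DATA_BLOCK_ADDRESS_FILE and DATA_BLOCK_ADDRESS_BLOCK perform:

```python
def decode_dba(dba):
    """Split an Oracle data block address into (relative file#, block#):
    the top 10 bits hold the relative file number and the low 22 bits
    hold the block number within that file."""
    rfile = dba >> 22
    block = dba & ((1 << 22) - 1)
    return rfile, block

print(decode_dba(0x1C0005A))  # (7, 90) - matches the @dba output above
```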


However, the DATA_OBJ# column shows the data_object_id of the index partition segment, and using this information I can query where this index partition's segment header block resides. Keep in mind that index root blocks are always stored immediately after the index segment header block, so that Oracle knows where to physically start traversing the index when accessing it.



SQL> SELECT
  2      header_file
  3    , header_block
  4  FROM
  5      dba_segments
  6  WHERE
  7      (owner, segment_name, partition_name) =
  8          (SELECT owner, object_name, subobject_name
  9           FROM dba_objects WHERE data_object_id = 62691)
 10  /

HEADER_FILE HEADER_BLOCK
----------- ------------
          7           89

1 row selected.


So from the above, the segment header block (7,89) resides immediately before our hot block (7,90), which means the hot block is the index root block. It is normal to have some contention on very commonly accessed blocks such as index root and first-level branch blocks (especially of primary key indexes and so on). Oracle has come up with many optimizations for relieving such contention - for example, it allows shared-mode CBC latch gets for examining index root and branch blocks on most platforms.


Note that sometimes latch contention indicates a physical design that is inefficient for the given concurrency requirement - for example, not spreading the contention points across multiple blocks and latches by partitioning the object, which would spread the "hot" data over multiple blocks in different partitions.


Often, though, CBC latch contention is merely a symptom of an inefficient execution plan, such as a nested loop join operation that revisits the same data blocks again and again in a tight loop. For example, the plan below works well when the number of matching orders returned from the ORDERS table is low, so we iterate through the CUSTOMERS table (and the CUSTOMERS_PK index, which had the contention) a relatively small number of times.



| Id  | Operation                             | Name          |
|   0 | SELECT STATEMENT                      |               |
|   2 |   NESTED LOOPS                        |               |
|   3 |    NESTED LOOPS                       |               |
|*  5 |      INDEX RANGE SCAN                 | ORD_STATUS_IX |
|   6 |     PARTITION HASH ITERATOR           |               |
|*  8 |       INDEX UNIQUE SCAN               | CUSTOMERS_PK  |
|*  9 |    INDEX RANGE SCAN                   | ITEM_ORDER_IX |

Predicate Information (identified by operation id):

   5 - access("O"."ORDER_STATUS"<=4)
   8 - access("O"."CUSTOMER_ID"="C"."CUSTOMER_ID")
   9 - access("O"."ORDER_ID"="OI"."ORDER_ID")
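A quick back-of-envelope model (my own sketch, not an Oracle formula) shows why a plan like this hammers the index root block when the loop count grows: every probe of the inner index in a nested loop starts at the root block, so root-block visits - and the corresponding cache buffers chains latch gets - scale linearly with the number of probes:

```python
def nl_probe_touches(probes, index_height, leaf_blocks_per_probe=1):
    """Rough model of block visits for a nested loop's inner index:
    every probe walks root -> branch(es) -> leaf, so the root block
    is visited once per probe, while the leaf-block visits spread
    over many different blocks. With millions of probes, the root
    block (and its CBC latch) becomes the hot spot."""
    root_visits = probes
    total_visits = probes * (index_height + leaf_blocks_per_probe)
    return root_visits, total_visits

# e.g. ~2.1M probes (like the execution count seen in V$SQL earlier)
# against a hypothetical height-2 index:
print(nl_probe_touches(2_115_217, 2))  # (2115217, 6345651)
```

The model makes the asymmetry obvious: leaf-block touches spread across many blocks and latches, but all 2.1M root-block touches land on the same block, under the same CBC child latch.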

Limitations of V$LATCHHOLDER and LatchProf(X)

There's a limitation in the V$LATCHHOLDER view. While it is enough for troubleshooting 99.9% of latch contention problems, there are a couple of cases where the view does not show the latch holder even though a latch is actually held.


There's a huge variety of reasons for latch contention in Oracle (and there are hundreds of different latch types, too), and I hope to discuss some of the most common reasons in a future article. However, it helps to keep in mind two main facts about latch contention:


1)    Latch contention and waits happen only if someone tries to get a latch that is already held by someone else! Therefore, after finding out which sessions are waiting for which latches, it's good to find out which sessions are holding those latches and why (and that's why I wrote LatchProf and LatchProfX!)


2)    Latch contention is usually a symptom of some other problem, such as bad connection/cursor management, an inefficient execution plan, physical design issues or even CPU starvation.


P.S. If you want to take on an interesting SQL challenge, look into the LatchProf or LatchProfX source code and figure out how it manages to sample the V$LATCHHOLDER view up to a hundred thousand times per second while being just a single SQL statement!

Download LatchProf and LatchProfX