
Why Double-Checked Locking is used at all?

Posted by: admin December 28, 2021


I keep on running across code that uses double-checked locking, and I’m still confused as to why it’s used at all.

I initially didn’t know that double-checked locking is broken, and when I learned it, it magnified this question for me: why do people use it in the first place? Isn’t compare-and-swap better?

if (field == null)
    Interlocked.CompareExchange(ref field, newValue, null);
return field;

(My question applies to both C# and Java, although the code above is for C#.)
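Since the question covers Java as well, a rough Java analogue of the C# snippet above might use `AtomicReference` (a sketch with illustrative names; the class and field names are not from the original post):

```java
import java.util.concurrent.atomic.AtomicReference;

class CasLazy {
    private final AtomicReference<String> field = new AtomicReference<>();

    String get() {
        if (field.get() == null) {
            // compareAndSet stores the new value only if the field is still null;
            // a losing thread's freshly computed value is simply discarded.
            field.compareAndSet(null, computeNewValue());
        }
        return field.get();
    }

    private String computeNewValue() {
        return "initialized";
    }
}
```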

Does double-checked locking have some sort of inherent advantage compared to atomic operations?


Does double-checked locking have some sort of inherent advantage compared to atomic operations?

(This answer only covers C#; I have no idea what Java’s memory model is like.)

The principal difference is the potential race. If you have:

if (f == null)
    CompareExchange(ref f, FetchNewValue(), null);

then FetchNewValue() can be called arbitrarily many times on different threads. One of those threads wins the race. If FetchNewValue() is extremely expensive and you want to ensure that it is called only once, then:

if (f == null)
    lock (someLock)   // a dedicated private lock object
        if (f == null)
            f = FetchNewValue();

guarantees that FetchNewValue() is called only once.
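In Java, the corresponding pattern requires the field to be `volatile` to be correct (on Java 5 and later); a minimal sketch, with illustrative names:

```java
class DoubleCheckedLazy {
    // volatile is essential: without it, double-checked locking is broken
    // even under the Java 5+ memory model.
    private volatile Object f;
    private final Object lock = new Object();

    Object get() {
        if (f == null) {              // first check, without the lock
            synchronized (lock) {
                if (f == null) {      // second check, under the lock
                    f = fetchNewValue();  // runs at most once
                }
            }
        }
        return f;
    }

    private Object fetchNewValue() {
        return new Object();
    }
}
```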

If I personally want to do a low-lock lazy initialization then I do what you suggest: I use an interlocked operation and live with the rare race condition where two threads both run the initializer and only one wins. If that’s not acceptable then I use locks.


In C#, double-checked locking has never been broken, so we can ignore that concern for now.

The code you’ve posted assumes that newValue is already available, or is cheap to (re)calculate. With double-checked locking, you’re guaranteed that only one thread will actually perform the initialization.

That being said, however, in modern C#, I’d normally prefer to just use a Lazy<T> to deal with the initialization.
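Java has no built-in equivalent of C#’s Lazy&lt;T&gt;, but a rough analogue can be sketched as a memoizing wrapper around a Supplier (a hypothetical helper class, not part of the JDK):

```java
import java.util.function.Supplier;

// Runs the factory at most once; coarse synchronization keeps it simple,
// at the cost of taking a lock on every access.
class Lazy<T> {
    private final Supplier<T> factory;
    private T value;
    private boolean created;

    Lazy(Supplier<T> factory) {
        this.factory = factory;
    }

    synchronized T get() {
        if (!created) {
            value = factory.get();
            created = true;
        }
        return value;
    }
}
```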


Double-checked locking is used when the performance degradation encountered when locking on the entire method is significant. In other words, if you do not wish to synchronize on the object (on which the method is invoked) or the class, you may use double-checked locking.

This may be the case if there is a lot of contention for the lock and when the resource protected by the lock is expensive to create; one would like to defer the creation process until it is required. Double checked locking improves performance by first verifying a condition (lock hint) to aid in determining whether the lock must be obtained.

Double-checked locking was broken in Java until Java 5, when the new memory model was introduced. Until then, it was quite possible for the lock hint to be true in one thread and false in another. In any case, the Initialization-on-Demand-Holder idiom is a suitable replacement for the double-checked locking pattern; I find it much easier to understand.
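The Initialization-on-Demand-Holder idiom mentioned above looks like this in Java (class names are illustrative):

```java
class Singleton {
    private Singleton() {}

    // The nested Holder class is not initialized until getInstance() first
    // references it; the JVM's class-initialization guarantees make this
    // lazy and thread-safe with no explicit synchronization.
    private static class Holder {
        static final Singleton INSTANCE = new Singleton();
    }

    static Singleton getInstance() {
        return Holder.INSTANCE;
    }
}
```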


Well, the only advantage that comes to mind is (the illusion of) performance: the non-thread-safe check lets you skip the locking operations, which may be expensive, in the common case. However, since double-checked locking is broken in a way that precludes drawing any firm conclusion from the non-thread-safe check, and it always smacked of premature optimization to me anyway, I would claim no, there is no advantage: it is an outdated pre-Java-5 idiom. But I would love to be corrected.

Edit: to be clearer, I believe double-checked locking is an idiom that evolved as a performance enhancement over locking and checking every time, and that it is roughly equivalent to a non-encapsulated compare-and-swap. I’m personally also a fan of encapsulating synchronized sections of code, though, so I think calling another operation to do the dirty work is better.


It “makes sense” on some level that a value that only changes at startup shouldn’t need a lock to be accessed, but then you add some locking (which you probably aren’t going to need) just in case two threads try to access it at startup, and it works most of the time.
It’s broken, but I can see why it’s an easy trap to fall into.