Is there any point in using a volatile long?


Solution 1

Not sure if I understand your question correctly, but the JLS 8.3.1.4. volatile Fields states:

A field may be declared volatile, in which case the Java memory model ensures that all threads see a consistent value for the variable (§17.4).

and, perhaps more importantly, JLS 17.7 Non-atomic Treatment of double and long:

17.7 Non-atomic Treatment of double and long
[...]
For the purposes of the Java programming language memory model, a single write to a non-volatile long or double value is treated as two separate writes: one to each 32-bit half. This can result in a situation where a thread sees the first 32 bits of a 64 bit value from one write, and the second 32 bits from another write. Writes and reads of volatile long and double values are always atomic. Writes to and reads of references are always atomic, regardless of whether they are implemented as 32 or 64 bit values.

That is, the entire 64-bit variable is protected by the volatile modifier, not just each 32-bit half. This tempts me to claim that it's even more important to use volatile for longs than for ints, since not even a plain read is atomic for non-volatile longs/doubles.
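
To tie that to practice, here is a minimal sketch of the single-writer, lock-free publication that this guarantee makes safe for a 64-bit value (the class and member names are just for illustration, not from the question):

public class LastSeenTimestamp {
    // volatile gives visibility and, per JLS 17.7, atomic 64-bit reads and
    // writes: readers never observe a half-written value.
    private volatile long lastSeenMillis;

    public void touch() {            // called by the single writer thread
        lastSeenMillis = System.currentTimeMillis();
    }

    public long lastSeen() {         // called by any reader thread
        return lastSeenMillis;       // always a value some thread actually wrote
    }
}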

Solution 2

This can be demonstrated by example:

  • constantly toggle two fields, one marked volatile and one not, between all bits set and all bits clear
  • read the field values on another thread
  • see that the foo field (not protected with volatile) can be read in an inconsistent state; this never happens to the bar field, which is protected with volatile

Code

public class VolatileTest {
    private long foo;              // plain long: reads and writes may be torn
    private volatile long bar;     // volatile long: reads and writes are atomic
    private static final long A = 0xffffffffffffffffL; // all 64 bits set
    private static final long B = 0;                   // all 64 bits clear
    private int clock;

    public VolatileTest() {
        // Writer thread: constantly toggles both fields between A and B.
        new Thread(new Runnable() {
            @Override
            public void run() {
                while (true) {
                    foo = clock % 2 == 0 ? A : B;
                    bar = clock % 2 == 0 ? A : B;
                    clock++;
                }
            }
        }).start();

        // Reader loop: reports any value that is neither A nor B,
        // i.e. a torn (half-written) read.
        while (true) {
            long fooRead = foo;
            if (fooRead != A && fooRead != B) {
                System.err.println("foo incomplete write " + Long.toHexString(fooRead));
            }
            long barRead = bar;
            if (barRead != A && barRead != B) {
                System.err.println("bar incomplete write " + Long.toHexString(barRead));
            }
        }
    }

    public static void main(String[] args) {
        new VolatileTest();
    }
}

Output

foo incomplete write ffffffff00000000
foo incomplete write ffffffff00000000
foo incomplete write ffffffff
foo incomplete write ffffffff00000000

Note that this only happens for me when running on a 32-bit VM; on a 64-bit VM I couldn't get a single error in several minutes.
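
To see where those hex values come from, here is a small companion sketch (class and variable names are just for illustration) that reconstructs the torn values by mixing the 32-bit halves of A and B, which is exactly what a reader observes when it lands between the two half-writes:

public class TornValueDemo {
    public static void main(String[] args) {
        long a = 0xffffffffffffffffL; // the A constant above: all bits set
        long b = 0L;                  // the B constant above: all bits clear

        // Upper half already updated, lower half still holds the old value:
        long upperHalfWritten = (a & 0xffffffff00000000L) | (b & 0x00000000ffffffffL);
        // Lower half already updated, upper half still holds the old value:
        long lowerHalfWritten = (a & 0x00000000ffffffffL) | (b & 0xffffffff00000000L);

        System.out.println(Long.toHexString(upperHalfWritten)); // ffffffff00000000
        System.out.println(Long.toHexString(lowerHalfWritten)); // ffffffff
    }
}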

Comments

  • Adamski, almost 2 years ago

    I occasionally use a volatile instance variable in cases where I have two threads reading from/writing to it and don't want the overhead (or potential deadlock risk) of taking out a lock; for example, a timer thread periodically updating an int ID that is exposed as a getter on some class:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class MyClass {
      private volatile int id;

      public MyClass() {
        ScheduledExecutorService execService = Executors.newScheduledThreadPool(1);
        execService.scheduleAtFixedRate(new Runnable() {
          public void run() {
            ++id; // only this scheduler thread writes; volatile makes the update visible to readers
          }
        }, 0L, 30L, TimeUnit.SECONDS);
      }

      public int getId() {
        return id;
      }
    }
    

    My question: Given that the JLS only guarantees that 32-bit reads will be atomic, is there any point in ever using a volatile long (i.e. 64-bit)?

    Caveat: Please do not reply saying that using volatile over synchronized is a case of premature optimisation; I am well aware of how/when to use synchronized, but there are cases where volatile is preferable. For example, when defining a Spring bean for use in a single-threaded application I tend to favour volatile instance variables, as there is no guarantee that the Spring context will initialise each bean's properties in the main thread.

  • jmattheis, about 8 years ago
    You could also add why that output occurs: the long is written in two steps, the first 32 bits in one step and the other 32 bits in the second.
  • shaoyihe, over 5 years ago
    What about float values, are they also 16 bytes?
  • RS1, about 4 years ago
    I am not able to imagine what can lead to the situation where a thread sees the first 32 bits of a 64-bit value from one write and the second 32 bits from another write. Can you please explain?
  • aioobe, about 4 years ago
    One thread writes a 64-bit value to a variable. Since this is not an atomic operation on all hardware, it can in fact be split up into two write operations. Another thread may come along and attempt to read the value between these two write operations. It will then see a value based on 32 bits from the old value and 32 bits from the new value. See Adam's answer (Solution 2 above).
  • hagrawal, about 2 years ago
    What does this line mean: "Writes to and reads of references are always atomic, regardless of whether they are implemented as 32 or 64 bit values"? I am confused by the usage of "references"... @aioobe, could you please clarify?
  • aioobe, about 2 years ago
    When running, for example, Object o = "hello"; o = "world"; the o variable will always refer to either the "hello" object or the "world" object. If the reference to "hello" is 0x11111111 and the reference to the "world" object is 0x22222222, then o will never (even temporarily) equal something like 0x11112222. This is called tearing, and it can actually happen if o were of type long. (Some simplifications were made in this comment, such as o being a local variable and never subject to races, etc.)
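
As a hypothetical companion to VolatileTest above, a sketch along these lines (the class name ReferenceTearTest is just for illustration) runs the same experiment with a plain, non-volatile reference: the reader may see a stale value, but per JLS 17.7 it should never see a torn one, so the check should never fire.

public class ReferenceTearTest {
    private static final Object A = new Object();
    private static final Object B = new Object();
    private Object ref = A;   // deliberately NOT volatile

    public ReferenceTearTest() {
        // Writer thread: toggles the reference between two known objects.
        new Thread(new Runnable() {
            @Override
            public void run() {
                boolean toggle = false;
                while (true) {
                    toggle = !toggle;
                    ref = toggle ? A : B;
                }
            }
        }).start();
        // Reader loop: a stale read is possible, but a torn reference is not,
        // so seen should always be identical to either A or B.
        while (true) {
            Object seen = ref;
            if (seen != A && seen != B) {
                System.err.println("torn reference observed: " + seen);
            }
        }
    }

    public static void main(String[] args) {
        new ReferenceTearTest();
    }
}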