## Multi-core CPUs, AMP, and SMP

CPUs have long been moving toward multi-core designs, an inevitable step in further raising CPU processing performance.

SMP means symmetric multiprocessing.
AMP means asymmetric multiprocessing: the cores are not identical in structure, so they naturally serve different purposes. In general their instruction sets may differ, although identical instruction sets are not ruled out.

(Figure: an AMP configuration; image not included)

(Figure: an SMP configuration; image not included)

With SMP, a single OS runs across all the cores, and that OS assigns CPU cores to the individual applications.



SMP Primer for Android

1.In this document
- Theory
  - Memory consistency models
    - Processor consistency
    - CPU cache behavior
    - Observability
    - ARM's weak ordering
  - Data memory barriers
  - Atomic essentials
  - Atomic + barrier pairing
  - Acquire and release
- Practice
  - What not to do in C
    - C/C++ and "volatile"
    - Examples
  - What not to do in Java
    - "synchronized" and "volatile"
    - Examples
  - What to do
    - General advice
    - Synchronization primitive guarantees
    - Upcoming changes to C/C++
- Closing Notes
- Appendix
  - SMP failure example
  - Implementing synchronization stores
  - Further reading

Android 3.0 and later platform versions are optimized to support multiprocessor architectures. This document introduces issues that can arise when writing code for symmetric multiprocessor systems in C, C++, and the Java programming language (hereafter referred to simply as “Java” for the sake of brevity). It's intended as a primer for Android app developers, not as a complete discussion on the subject. The focus is on the ARM CPU architecture.
If you’re in a hurry, you can skip the Theory section and go directly to Practice for best practices, but this is not recommended.
2.Introduction
SMP is an acronym for “Symmetric Multi-Processor”. It describes a design in which two or more identical CPU cores share access to main memory. Until a few years ago, all Android devices were UP (Uni-Processor).

SMP is short for "Symmetric Multi-Processor": multiple identical CPU cores share access to main memory.

Most — if not all — Android devices do have multiple CPUs, but generally one of them is used to run applications while others manage various bits of device hardware (for example, the radio). The CPUs may have different architectures, and the programs running on them can’t use main memory to communicate with each other.
Most Android devices sold today are built around SMP designs, making things a bit more complicated for software developers. The sorts of race conditions you might encounter in a multi-threaded program are much worse on SMP when two or more of your threads are running simultaneously on different cores. What’s more, SMP on ARM is more challenging to work with than SMP on x86. Code that has been thoroughly tested on x86 may break badly on ARM.

Phones on the market today generally have multi-core CPUs. When a program runs multiple threads, those threads may execute simultaneously on different cores, which makes the developer's job harder.

The rest of this document will explain why, and tell you what you need to do to ensure that your code behaves correctly.
3.Theory
This is a high-speed, glossy overview of a complex subject. Some areas will be incomplete, but none of it should be misleading or wrong.
See Further reading at the end of the document for pointers to more thorough treatments of the subject.
3.1 Memory consistency models
Memory consistency models, or often just “memory models”, describe the guarantees the hardware architecture makes about memory accesses. For example, if you write a value to address A, and then write a value to address B, the model might guarantee that every CPU core sees those writes happen in that order.
The model most programmers are accustomed to is sequential consistency, which is described like this (Adve & Gharachorloo):
- All memory operations appear to execute one at a time.
- All operations on a single processor appear to execute in the order described by that processor's program.
If you look at a bit of code and see that it does some reads and writes from memory, on a sequentially-consistent CPU architecture you know that the code will do those reads and writes in the expected order. It’s possible that the CPU is actually reordering instructions and delaying reads and writes, but there is no way for code running on the device to tell that the CPU is doing anything other than execute instructions in a straightforward manner. (We’re ignoring memory-mapped device driver I/O for the moment.)
To illustrate these points it’s useful to consider small snippets of code, commonly referred to as litmus tests. These are assumed to execute in program order, that is, the order in which the instructions appear here is the order in which the CPU will execute them. We don’t want to consider instruction reordering performed by compilers just yet.
Here’s a simple example, with code running on two threads:
    Thread 1      Thread 2
    A = 3         reg0 = B
    B = 5         reg1 = A
In this and all future litmus examples, memory locations are represented by capital letters (A, B, C) and CPU registers start with “reg”. All memory is initially zero. Instructions are executed from top to bottom. Here, thread 1 stores the value 3 at location A, and then the value 5 at location B. Thread 2 loads the value from location B into reg0, and then loads the value from location A into reg1. (Note that we’re writing in one order and reading in another.)
Thread 1 and thread 2 are assumed to execute on different CPU cores. You should always make this assumption when thinking about multi-threaded code.
Sequential consistency guarantees that, after both threads have finished executing, the registers will be in one of the following states:

    reg0 = 0, reg1 = 0    (thread 2's reads ran before either of thread 1's writes)
    reg0 = 0, reg1 = 3    (thread 2's reads ran between the two writes)
    reg0 = 5, reg1 = 3    (thread 2's reads ran after both writes)

The combination reg0 = 5 with reg1 = 0 cannot occur.
To get into a situation where we see B=5 before we see the store to A, either the reads or the writes would have to happen out of order. On a sequentially-consistent machine, that can’t happen.
Most uni-processors, including x86 and ARM, are sequentially consistent. Most SMP systems, including x86 and ARM, are not.
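To see what this means in practice, here is a minimal C++11 sketch of the litmus test above (my own illustration, not part of the original document). It uses relaxed atomics, which permit the kinds of reordering discussed here; on a strongly-ordered machine such as x86 the forbidden outcome will not appear, and even on ARM it may take many iterations (and luck with thread scheduling) to catch.

    // Runs the two-thread litmus test repeatedly and checks for the outcome that
    // sequential consistency forbids: reg0 == 5 (saw the store to B) while
    // reg1 == 0 (did not see the earlier store to A).
    #include <atomic>
    #include <cstdio>
    #include <thread>

    std::atomic<int> A{0}, B{0};

    int main() {
        for (int i = 0; i < 100000; i++) {
            A.store(0);
            B.store(0);
            int reg0 = -1, reg1 = -1;
            std::thread t1([] {
                A.store(3, std::memory_order_relaxed);
                B.store(5, std::memory_order_relaxed);
            });
            std::thread t2([&] {
                reg0 = B.load(std::memory_order_relaxed);
                reg1 = A.load(std::memory_order_relaxed);
            });
            t1.join();
            t2.join();
            if (reg0 == 5 && reg1 == 0)
                std::printf("non-sequentially-consistent result at iteration %d\n", i);
        }
        return 0;
    }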
Processor consistency
x86 SMP provides processor consistency, which is slightly weaker than sequential. While the architecture guarantees that loads are not reordered with respect to other loads, and stores are not reordered with respect to other stores, it does not guarantee that a store followed by a load will be observed in the expected order.
Consider the following example, which is a piece of Dekker’s Algorithm for mutual exclusion:
    Thread 1                   Thread 2
    A = true                   B = true
    reg1 = B                   reg2 = A
    if (reg1 == false)         if (reg2 == false)
        critical-stuff             critical-stuff
The idea is that thread 1 uses A to indicate that it’s busy, and thread 2 uses B. Thread 1 sets A and then checks to see if B is set; if not, it can safely assume that it has exclusive access to the critical section. Thread 2 does something similar. (If a thread discovers that both A and B are set, a turn-taking algorithm is used to ensure fairness.)
On a sequentially-consistent machine, this works correctly. On x86 and ARM SMP, the store to A and the load from B in thread 1 can be “observed” in a different order by thread 2. If that happened, we could actually appear to execute this sequence (where blank lines have been inserted to highlight the apparent order of operations):
    Thread 1                   Thread 2
    reg1 = B
                               B = true
                               reg2 = A
    A = true
    if (reg1 == false)         if (reg2 == false)
        critical-stuff             critical-stuff
This results in both reg1 and reg2 set to “false”, allowing the threads to execute code in the critical section simultaneously. To understand how this can happen, it’s useful to know a little about CPU caches.
CPU cache behavior
This is a substantial topic in and of itself. An extremely brief overview follows. (The motivation for this material is to provide some basis for understanding why SMP systems behave as they do.)
Modern CPUs have one or more caches between the processor and main memory. These are labeled L1, L2, and so on, with the higher numbers being successively “farther” from the CPU. Cache memory adds size and cost to the hardware, and increases power consumption, so the ARM CPUs used in Android devices typically have small L1 caches and little or no L2/L3.
Loading or storing a value into the L1 cache is very fast. Doing the same to main memory can be 10-100x slower. The CPU will therefore try to operate out of the cache as much as possible. The write policy of a cache determines when data written to it is forwarded to main memory. A write-through cache will initiate a write to memory immediately, while a write-back cache will wait until it runs out of space and has to evict some entries. In either case, the CPU will continue executing instructions past the one that did the store, possibly executing dozens of them before the write is visible in main memory. (While the write-through cache has a policy of immediately forwarding the data to main memory, it only initiates the write. It does not have to wait for it to finish.)
The cache behavior becomes relevant to this discussion when each CPU core has its own private cache. In a simple model, the caches have no way to interact with each other directly. The values held by core #1’s cache are not shared with or visible to core #2’s cache except as loads or stores from main memory. The long latencies on memory accesses would make inter-thread interactions sluggish, so it’s useful to define a way for the caches to share data. This sharing is called cache coherency, and the coherency rules are defined by the CPU architecture’s cache consistency model.
With that in mind, let’s return to the Dekker example. When core 1 executes “A = 1”, the value gets stored in core 1’s cache. When core 2 executes “if (A == 0)”, it might read from main memory or it might read from core 2’s cache; either way it won’t see the store performed by core 1. (“A” could be in core 2’s cache because of a previous load from “A”.)
For the memory consistency model to be sequentially consistent, core 1 would have to wait for all other cores to be aware of “A = 1” before it could execute “if (B == 0)” (either through strict cache coherency rules, or by disabling the caches entirely so everything operates out of main memory). This would impose a performance penalty on every store operation. Relaxing the rules for the ordering of stores followed by loads improves performance but imposes a burden on software developers.
The other guarantees made by the processor consistency model are less expensive to make. For example, to ensure that memory writes are not observed out of order, it just needs to ensure that the stores are published to other cores in the same order that they were issued. It doesn’t need to wait for store #1 to finish being published before it can start on store #2, it just needs to ensure that it doesn’t finish publishing #2 before it finishes publishing #1. This avoids a performance bubble.
Relaxing the guarantees even further can provide additional opportunities for CPU optimization, but creates more opportunities for code to behave in ways the programmer didn’t expect.
One additional note: CPU caches don’t operate on individual bytes. Data is read or written as cache lines; for many ARM CPUs these are 32 bytes. If you read data from a location in main memory, you will also be reading some adjacent values. Writing data will cause the cache line to be read from memory and updated. As a result, you can cause a value to be loaded into cache as a side-effect of reading or writing something nearby, adding to the general aura of mystery.
Observability
Before going further, it’s useful to define in a more rigorous fashion what is meant by “observing” a load or store. Suppose core 1 executes “A = 1”. The store is initiated when the CPU executes the instruction. At some point later, possibly through cache coherence activity, the store is observed by core 2. In a write-through cache it doesn’t really complete until the store arrives in main memory, but the memory consistency model doesn’t dictate when something completes, just when it can be observed.
(In a kernel device driver that accesses memory-mapped I/O locations, it may be very important to know when things actually complete. We’re not going to go into that here.)
Observability may be defined as follows:
"A write to a location in memory is said to be observed by an observer Pn when a subsequent read of the location by Pn would return the value written by the write.""A read of a location in memory is said to be observed by an observer Pm when a subsequent write to the location by Pm would have no effect on the value returned by the read." (Reasoning about the ARM weakly consistent memory model)
A less formal way to describe it (where "you" and "I" are CPU cores) would be:

- I have observed your write when a read that I perform returns the value you wrote.
- I have observed your read when a write that I perform can no longer affect the value your read returns.

The notion of observing a write is intuitive; observing a read is a bit less so (don't worry, it grows on you).
ARM's weak ordering
ARM SMP provides weak memory consistency guarantees. It does not guarantee that loads or stores are ordered with respect to each other.
    Thread 1                   Thread 2
    A = 41                     loop_until (B == 1)
    B = 1  // "A is ready"     reg = A
Recall that all addresses are initially zero. The “loop_until” instruction reads B repeatedly, looping until we read 1 from B. The idea here is that thread 2 is waiting for thread 1 to update A. Thread 1 sets A, and then sets B to 1 to indicate data availability.
On ARM SMP, the loads and stores can be observed in any order. It is possible, after all the code has executed, for reg to hold 0. It’s also possible for it to hold 41. Unless you explicitly define the ordering, you don’t know how this will come out.
(For those with experience on other systems, ARM’s memory model is equivalent to PowerPC in most respects.)
3.2 Data memory barriers
Memory barriers provide a way for your code to tell the CPU that memory access ordering matters. ARM/x86 uniprocessors offer sequential consistency, and thus have no need for them. (The barrier instructions can be executed but aren’t useful; in at least one case they’re hideously expensive, motivating separate builds for SMP targets.)
There are four basic situations to consider:

- store followed by another store
- load followed by another load
- load followed by store
- store followed by load
Recall our earlier example:
    Thread 1                   Thread 2
    A = 41                     loop_until (B == 1)
    B = 1  // "A is ready"     reg = A
Thread 1 needs to ensure that the store to A happens before the store to B. This is a “store/store” situation. Similarly, thread 2 needs to ensure that the load of B happens before the load of A; this is a load/load situation. As mentioned earlier, the loads and stores can be observed in any order.

Going back to the cache discussion, assume A and B are on separate cache lines, with minimal cache coherency. If the store to A stays local but the store to B is published, core 2 will see B=1 but won’t see the update to A. On the other side, assume we read A earlier, or it lives on the same cache line as something else we recently read. Core 2 spins until it sees the update to B, then loads A from its local cache, where the value is still zero.

We can fix it like this:
    Thread 1                   Thread 2
    A = 41                     loop_until (B == 1)
    store/store barrier        load/load barrier
    B = 1  // "A is ready"     reg = A
The store/store barrier guarantees that all observers will observe the write to A before they observe the write to B. It makes no guarantees about the ordering of loads in thread 1, but we don’t have any of those, so that’s okay. The load/load barrier in thread 2 makes a similar guarantee for the loads there.
Since the store/store barrier guarantees that the stores are published in program order, why do we also need the load/load barrier in thread 2? Because thread 2's own loads can be reordered as well: it could load A (picking up a stale value) before it loads B, even though thread 1 published the stores in order.

The store/store barrier could work by flushing all dirty entries out of the local cache, ensuring that other cores see them before they see any future stores. The load/load barrier could purge the local cache completely and wait for any “in-flight” loads to finish, ensuring that future loads are observed after previous loads. What the CPU actually does doesn’t matter, so long as the appropriate guarantees are kept. If we use a barrier in core 1 but not in core 2, core 2 could still be reading A from its local cache.

Because the architectures have different memory models, these barriers are required on ARM SMP but not x86 SMP.
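As a rough C++11 rendering of the barriered example (my own sketch, not code from the document; mapping the store/store and load/load barriers onto release and acquire fences is an assumption, and C++ fences are in fact somewhat stronger than the minimal barriers described here):

    // "A is ready" pattern with explicit fences. The release fence before the
    // flag store plays the role of the store/store barrier; the acquire fence
    // after the flag load plays the role of the load/load barrier.
    #include <atomic>

    int A = 0;                       // payload, plain data
    std::atomic<int> B{0};           // "A is ready" flag

    void thread1() {
        A = 41;
        std::atomic_thread_fence(std::memory_order_release);   // store/store barrier
        B.store(1, std::memory_order_relaxed);                  // "A is ready"
    }

    void thread2() {
        while (B.load(std::memory_order_relaxed) != 1) {
            // spin: loop_until (B == 1)
        }
        std::atomic_thread_fence(std::memory_order_acquire);    // load/load barrier
        int reg = A;                                             // now guaranteed to be 41
        (void)reg;
    }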
The Dekker’s Algorithm fragment shown earlier illustrated the need for a store/load barrier. Here’s an example where a load/store barrier is required:
    Thread 1                        Thread 2
    reg = A                         loop_until (B == 1)
    B = 1  // "I have latched A"    A = 41  // update A

and here is the same example with the barriers added:

    Thread 1                        Thread 2
    reg = A                         loop_until (B == 1)
    load/store barrier              load/store barrier
    B = 1  // "I have latched A"    A = 41  // update A

A store to local cache may be observed before a load from main memory, because accesses to main memory are so much slower. In this case, assume core 1’s cache has the cache line for B but not A. The load from A is initiated, and while that’s in progress execution continues. The store to B happens in local cache, and by some means becomes available to core 2 while the load from A is still in progress. Thread 2 is able to exit the loop before it has observed thread 1’s load from A.
A thornier question is: do we need a barrier in thread 2? If the CPU doesn’t perform speculative writes, and doesn’t execute instructions out of order, can thread 2 store to A before thread 1’s read if thread 1 guarantees the load/store ordering? (Answer: no.) What if there’s a third core watching A and B? (Answer: now you need one, or you could observe B==0 / A==41 on the third core.) It’s safest to insert barriers in both places and not worry about the details.

As mentioned earlier, store/load barriers are the only kind required on x86 SMP.
Barrier instructions
Different CPUs provide different flavors of barrier instruction. For example:
- Sparc V8 has a "membar" instruction that takes a 4-element bit vector. The four categories of barrier can be specified individually.
- Alpha provides "rmb" (load/load), "wmb" (store/store), and "mb" (full). (Trivia: the Linux kernel provides three memory barrier functions with these names and behaviors.)
- x86 has a variety of options; "mfence" (introduced with SSE2) provides a full barrier.
- ARMv7 has "dmb st" (store/store) and "dmb sy" (full).
“Full barrier” means all four categories are included.
It is important to recognize that the only thing guaranteed by barrier instructions is ordering. Do not treat them as cache coherency “sync points” or synchronous “flush” instructions. The ARM “dmb” instruction has no direct effect on other cores. This is important to understand when trying to figure out where barrier instructions need to be issued.
(This is a slightly more advanced topic and can be skipped.)
The ARM CPU provides one special case where a load/load barrier can be avoided. Consider the following example from earlier, modified slightly:
    Thread 1                   Thread 2
    [A+8] = 41                 loop:
    store/store barrier            reg0 = B
    B = 1  // "A is ready"         if (reg0 == 0) goto loop
                               reg1 = 8
                               reg2 = [A + reg1]
This introduces a new notation. If "A" refers to a memory address, "A+n" refers to a memory address offset by n bytes from A. If A is the base address of an object or array, [A+8] could be a field in the object or an element in the array.
The “loop_until” seen in previous examples has been expanded to show the load of B into reg0. reg1 is assigned the numeric value 8, and reg2 is loaded from the address [A+reg1] (the same location that thread 1 is accessing).
This will not behave correctly because the load from B could be observed after the load from [A+reg1]. We can fix this with a load/load barrier after the loop, but on ARM we can also just do this:
    Thread 1                   Thread 2
    [A+8] = 41                 loop:
    store/store barrier            reg0 = B
    B = 1  // "A is ready"         if (reg0 == 0) goto loop
                               reg1 = 8 + (reg0 & 0)
                               reg2 = [A + reg1]
What we’ve done here is change the assignment of reg1 from a constant (8) to a value that depends on what we loaded from B. In this case, we do a bitwise AND of the value with 0, which yields zero, which means reg1 still has the value 8. However, the ARM CPU believes that the load from [A+reg1] depends upon the load from B, and will ensure that the two are observed in program order.
This is called an address dependency. Address dependencies exist when the value returned by a load is used to compute the address of a subsequent load or store. It can let you avoid the need for an explicit barrier in certain situations.
ARM does not provide control dependency guarantees. To illustrate this it’s necessary to dip into ARM code for a moment: (Barrier Litmus Tests and Cookbook).

LDR r1, [r0]
CMP r1, #55
LDRNE r2, [r3]

The loads from r0 and r3 may be observed out of order, even though the load from r3 will not execute at all if [r0] doesn’t hold 55. Inserting AND r1, r1, #0 and replacing the last instruction with LDRNE r2, [r3, r1] would ensure proper ordering without an explicit barrier. (This is a prime example of why you can’t think about consistency issues in terms of instruction execution. Always think in terms of memory accesses.)
While we’re hip-deep, it’s worth noting that ARM does not provide causal consistency:
    Thread 1      Thread 2                 Thread 3
    A = 1         loop_until (A == 1)      loop:
                  B = 1                        reg0 = B
                                               if (reg0 == 0) goto loop
                                           reg1 = reg0 & 0
                                           reg2 = [A+reg1]
It’s possible for reg2 to hold zero at the end of this. The fact that a store in thread 1 causes something to happen in thread 2 which causes something to happen in thread 3 does not mean that thread 3 will observe the stores in that order. (Inserting a load/store barrier in thread 2 fixes this.)
Memory barrier summary
Barriers come in different flavors for different situations. While there can be performance advantages to using exactly the right barrier type, there are code maintenance risks in doing so — unless the person updating the code fully understands it, they might introduce the wrong type of operation and cause a mysterious breakage. Because of this, and because ARM doesn’t provide a wide variety of barrier choices, many atomic primitives use full barrier instructions when a barrier is required.
The key thing to remember about barriers is that they define ordering. Don’t think of them as a “flush” call that causes a series of actions to happen. Instead, think of them as a dividing line in time for operations on the current CPU core.
Atomic operations
Atomic operations guarantee that an operation that requires a series of steps always behaves as if it were a single operation. For example, consider a non-atomic increment (“++A”) executed on the same variable by two threads simultaneously:
    Thread 1           Thread 2
    reg = A            reg = A
    reg = reg + 1      reg = reg + 1
    A = reg            A = reg
If the threads execute concurrently from top to bottom, both threads will load 0 from A, increment it to 1, and store it back, leaving a final result of 1. If we used an atomic increment operation, you would be guaranteed that the final result will be 2.
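A small C++ sketch of the difference (my own illustration; the non-atomic counter is, strictly speaking, a data race and therefore undefined behavior in C++, and is shown only to demonstrate the lost update):

    // Two threads increment two counters. The plain counter can lose updates;
    // the std::atomic counter's fetch_add is an atomic read-modify-write.
    #include <atomic>
    #include <cstdio>
    #include <thread>

    int plainCount = 0;                 // racy: ++ is a load, an add, and a store
    std::atomic<int> atomicCount{0};
    const int kIterations = 100000;

    void worker() {
        for (int i = 0; i < kIterations; i++) {
            plainCount++;                                          // updates can be lost
            atomicCount.fetch_add(1, std::memory_order_relaxed);   // never lost
        }
    }

    int main() {
        std::thread t1(worker), t2(worker);
        t1.join();
        t2.join();
        std::printf("plain: %d (may be less than %d), atomic: %d\n",
                    plainCount, 2 * kIterations, atomicCount.load());
        return 0;
    }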
Atomic essentials
The most fundamental operations — loading and storing 32-bit values — are inherently atomic on ARM so long as the data is aligned on a 32-bit boundary. For example:
    Thread 1               Thread 2
    reg = 0x00000000       reg = 0xffffffff
    A = reg                A = reg
The CPU guarantees that A will hold 0x00000000 or 0xffffffff. It will never hold 0x0000ffff or any other partial “mix” of bytes.

The atomicity guarantee is lost if the data isn’t aligned. Misaligned data could straddle a cache line, so other cores could see the halves update independently. Consequently, the ARMv7 documentation declares that it provides “single-copy atomicity” for all byte accesses, halfword accesses to halfword-aligned locations, and word accesses to word-aligned locations. Doubleword (64-bit) accesses are not atomic, unless the location is doubleword-aligned and special load/store instructions are used. This behavior is important to understand when multiple threads are performing unsynchronized updates to packed structures or arrays of primitive types.

There is no need for 32-bit “atomic read” or “atomic write” functions on ARM or x86. Where one is provided for completeness, it just does a trivial load or store.
Operations that perform more complex actions on data in memory are collectively known as read-modify-write (RMW) instructions, because they load data, modify it in some way, and write it back. CPUs vary widely in how these are implemented. ARM uses a technique called "Load Linked / Store Conditional", or LL/SC.

A linked or locked load reads the data from memory as usual, but also establishes a reservation, tagging the physical memory address. The reservation is cleared when another core tries to write to that address. To perform an LL/SC, the data is read with a reservation, modified, and then a conditional store instruction is used to try to write the data back. If the reservation is still in place, the store succeeds; if not, the store will fail. Atomic functions based on LL/SC usually loop, retrying the entire read-modify-write sequence until it completes without interruption.

It’s worth noting that the read-modify-write operations would not work correctly if they operated on stale data. If two cores perform an atomic increment on the same address, and one of them is not able to see what the other did because each core is reading and writing from local cache, the operation won’t actually be atomic. The CPU’s cache coherency rules ensure that the atomic RMW operations remain atomic in an SMP environment.
This should not be construed to mean that atomic RMW operations use a memory barrier. On ARM, atomics have no memory barrier semantics. While a series of atomic RMW operations on a single address will be observed in program order by other cores, there are no guarantees when it comes to the ordering of atomic and non-atomic operations.
It often makes sense to pair barriers and atomic operations together. The next section describes this in more detail.
Atomic + barrier pairing
As usual, it’s useful to illuminate the discussion with an example. We’re going to consider a basic mutual-exclusion primitive called a spin lock. The idea is that a memory address (which we’ll call “lock”) initially holds zero. When a thread wants to execute code in the critical section, it sets the lock to 1, executes the critical code, and then changes it back to zero when done. If another thread has already set the lock to 1, we sit and spin until the lock changes back to zero.
To make this work we use an atomic RMW primitive called compare-and-swap. The function takes three arguments: the memory address, the expected current value, and the new value. If the value currently in memory matches what we expect, it is replaced with the new value, and the old value is returned. If the current value is not what we expect, we don’t change anything. A minor variation on this is called compare-and-set; instead of returning the old value it returns a boolean indicating whether the swap succeeded. For our needs either will work, but compare-and-set is slightly simpler for examples, so we use it and just refer to it as “CAS”.
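In C++ terms (an analogy of mine, not an API from the document), the compare-and-set described above behaves like std::atomic's compare_exchange_strong:

    // Boolean compare-and-set: store newValue only if the current value equals
    // expected, and report whether the swap happened.
    #include <atomic>

    bool cas(std::atomic<int>& addr, int expected, int newValue) {
        // On failure compare_exchange_strong writes the observed value back into
        // 'expected'; we ignore that here to keep the simple CAS shape of the text.
        return addr.compare_exchange_strong(expected, newValue);
    }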
The acquisition of the spin lock is written like this (using a C-like language):

    do {
        success = atomic_cas(&lock, 0, 1)
    } while (!success)

    full_memory_barrier()

    critical-section

If no thread holds the lock, the lock value will be 0, and the CAS operation will set it to 1 to indicate that we now have it. If another thread has it, the lock value will be 1, and the CAS operation will fail because the expected current value does not match the actual current value. We loop and retry. (Note this loop is on top of whatever loop the LL/SC code might be doing inside the atomic_cas function.)

On SMP, a spin lock is a useful way to guard a small critical section. If we know that another thread is going to execute a handful of instructions and then release the lock, we can just burn a few cycles while we wait our turn. However, if the other thread happens to be executing on the same core, we’re just wasting time because the other thread can’t make progress until the OS schedules it again (either by migrating it to a different core or by preempting us). A proper spin lock implementation would optimistically spin a few times and then fall back on an OS primitive (such as a Linux futex) that allows the current thread to sleep while waiting for the other thread to finish up. On a uniprocessor you never want to spin at all. For the sake of brevity we’re ignoring all this.

The memory barrier is necessary to ensure that other threads observe the acquisition of the lock before they observe any loads or stores in the critical section. Without that barrier, the memory accesses could be observed while the lock is not held.
The full_memory_barrier call here actually does two independent operations. First, it issues the CPU’s full barrier instruction. Second, it tells the compiler that it is not allowed to reorder code around the barrier. That way, we know that the atomic_cas call will be executed before anything in the critical section. Without this compiler reorder barrier, the compiler has a great deal of freedom in how it generates code, and the order of instructions in the compiled code might be much different from the order in the source code.
Of course, we also want to make sure that none of the memory accesses performed in the critical section are observed after the lock is released. The full version of the simple spin lock is:

    do {
        success = atomic_cas(&lock, 0, 1)   // acquire
    } while (!success)
    full_memory_barrier()

    critical-section

    full_memory_barrier()
    atomic_store(&lock, 0)                  // release

We perform our second CPU/compiler memory barrier immediately before we release the lock, so that loads and stores in the critical section are observed before the release of the lock.
As mentioned earlier, the atomic_store operation is a simple assignment on ARM and x86. Unlike the atomic RMW operations, we don’t guarantee that other threads will see this value immediately. This isn’t a problem, though, because we only need to keep the other threads out. The other threads will stay out until they observe the store of 0. If it takes a little while for them to observe it, the other threads will spin a little longer, but we will still execute code correctly.
It’s convenient to combine the atomic operation and the barrier call into a single function. It also provides other advantages, which will become clear shortly.
Acquire and release
When acquiring the spinlock, we issue the atomic CAS and then the barrier. When releasing the spinlock, we issue the barrier and then the atomic store. This inspires a particular naming convention: operations followed by a barrier are “acquiring” operations, while operations preceded by a barrier are “releasing” operations. (It would be wise to install the spin lock example firmly in mind, as the names are not otherwise intuitive.)
Rewriting the spin lock example with this in mind:

    do {
        success = atomic_acquire_cas(&lock, 0, 1)
    } while (!success)

    critical-section

    atomic_release_store(&lock, 0)

This is a little more succinct and easier to read, but the real motivation for doing this lies in a couple of optimizations we can now perform.
First, consider atomic_release_store. We need to ensure that the store of zero to the lock word is observed after any loads or stores in the critical section above it. In other words, we need a load/store and store/store barrier. In an earlier section we learned that these aren’t necessary on x86 SMP -- only store/load barriers are required. The implementation of atomic_release_store on x86 is therefore just a compiler reorder barrier followed by a simple store. No CPU barrier is required.
The second optimization mostly applies to the compiler (although some CPUs, such as the Itanium, can take advantage of it as well). The basic principle is that code can move across acquire and release barriers, but only in one direction.
Suppose we have a mix of locally-visible and globally-visible memory accesses, with some miscellaneous computation as well:

    local1 = arg1 / 41
    local2 = threadStruct->field2
    threadStruct->field3 = local2

    do {
        success = atomic_acquire_cas(&lock, 0, 1)
    } while (!success)

    local5 = globalStruct->field5
    globalStruct->field6 = local5

    atomic_release_store(&lock, 0)

Here we see two completely independent sets of operations. The first set operates on a thread-local data structure, so we’re not concerned about clashes with other threads. The second set operates on a global data structure, which must be protected with a lock.
A full compiler reorder barrier in the atomic ops will ensure that the program order matches the source code order at the lock boundaries. However, allowing the compiler to interleave instructions can improve performance. Loads from memory can be slow, but the CPU can continue to execute instructions that don’t require the result of that load while waiting for it to complete. The code might execute more quickly if it were written like this instead:

    do {
        success = atomic_acquire_cas(&lock, 0, 1)
    } while (!success)

    local2 = threadStruct->field2
    local5 = globalStruct->field5
    local1 = arg1 / 41
    threadStruct->field3 = local2
    globalStruct->field6 = local5

    atomic_release_store(&lock, 0)

We issue both loads, do some unrelated computation, and then execute the instructions that make use of the loads. If the integer division takes less time than one of the loads, we essentially get it for free, since it happens during a period where the CPU would have stalled waiting for a load to complete.
Note that all of the operations are now happening inside the critical section. Since none of the “threadStruct” operations are visible outside the current thread, nothing else can see them until we’re finished here, so it doesn’t matter exactly when they happen.
In general, it is always safe to move operations into a critical section, but never safe to move operations out of a critical section. Put another way, you can migrate code “downward” across an acquire barrier, and “upward” across a release barrier. If the atomic ops used a full barrier, this sort of migration would not be possible.
Returning to an earlier point, we can state that on x86 all loads are acquiring loads, and all stores are releasing stores. As a result:
- Loads may not be reordered with respect to each other. You can't take a load and move it "upward" across another load's acquire barrier.
- Stores may not be reordered with respect to each other, because you can't move a store "downward" across another store's release barrier.
- A load followed by a store can't be reordered, because neither instruction will tolerate it.
- A store followed by a load can be reordered, because each instruction can move across the other in that direction.
Hence, you only need store/load barriers on x86 SMP.
Labeling atomic operations with “acquire” or “release” describes not only whether the barrier is executed before or after the atomic operation, but also how the compiler is allowed to reorder code.
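To tie the section together, here is a rough C++11 version of the acquire/release spin lock (my sketch; the document's atomic_acquire_cas and atomic_release_store are hypothetical primitives, mapped here onto memory_order_acquire and memory_order_release):

    // Spin lock with acquire/release ordering. Critical-section accesses cannot
    // move "upward" past the acquiring CAS or "downward" past the releasing
    // store. A real implementation would back off and fall back to a futex.
    #include <atomic>

    class SpinLock {
        std::atomic<int> lock_{0};
    public:
        void lock() {
            int expected = 0;
            while (!lock_.compare_exchange_weak(expected, 1,
                                                std::memory_order_acquire,
                                                std::memory_order_relaxed)) {
                expected = 0;   // the failed CAS wrote the observed value back
            }
        }
        void unlock() {
            lock_.store(0, std::memory_order_release);
        }
    };

Because only acquire/release ordering is requested, the compiler and CPU keep the freedom, described above, to migrate independent code into the critical section but never out of it.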
4.Practice
Debugging memory consistency problems can be very difficult. If a missing memory barrier causes some code to read stale data, you may not be able to figure out why by examining memory dumps with a debugger. By the time you can issue a debugger query, the CPU cores will have all observed the full set of accesses, and the contents of memory and the CPU registers will appear to be in an “impossible” state.
4.1 What not to do in C
Here we present some examples of incorrect code, along with simple ways to fix them. Before we do that, we need to discuss the use of a basic language feature.
C/C++ and "volatile"
When writing single-threaded code, declaring a variable “volatile” can be very useful. The compiler will not omit or reorder accesses to volatile locations. Combine that with the sequential consistency provided by the hardware, and you’re guaranteed that the loads and stores will appear to happen in the expected order.
However, accesses to volatile storage may be reordered with non-volatile accesses, so you have to be careful in multi-threaded uniprocessor environments (explicit compiler reorder barriers may be required). There are no atomicity guarantees, and no memory barrier provisions, so “volatile” doesn’t help you at all in multi-threaded SMP environments. The C and C++ language standards are being updated to address this with built-in atomic operations.
If you think you need to declare something “volatile”, that is a strong indicator that you should be using one of the atomic operations instead.
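For example (a sketch of my own, assuming the variable is being used to signal between threads), the replacement is usually mechanical:

    // Instead of:   volatile int dataReady = 0;   // no atomicity or ordering guarantees
    // use an atomic, whose plain loads and stores default to sequentially-consistent
    // ordering and are safe to use from multiple threads:
    #include <atomic>

    std::atomic<int> dataReady{0};

    void setReady() { dataReady.store(1); }               // seq_cst store
    bool isReady()  { return dataReady.load() == 1; }     // seq_cst load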
Examples
In most cases you’d be better off with a synchronization primitive (like a pthread mutex) rather than an atomic operation, but we will employ the latter to illustrate how they would be used in a practical situation.
For the sake of brevity we’re ignoring the effects of compiler optimizations here — some of this code is broken even on uniprocessors — so for all of these examples you must assume that the compiler generates straightforward code (for example, compiled with gcc -O0). The fixes presented here do solve both compiler-reordering and memory-access-ordering issues, but we’re only going to discuss the latter.

    MyThing* gGlobalThing = NULL;

    void initGlobalThing()    // runs in thread 1
    {
        MyThing* thing = malloc(sizeof(*thing));
        memset(thing, 0, sizeof(*thing));
        thing->x = 5;
        thing->y = 10;
        /* initialization complete, publish */
        gGlobalThing = thing;
    }

    void useGlobalThing()    // runs in thread 2
    {
        if (gGlobalThing != NULL) {
            int i = gGlobalThing->x;    // could be 5, 0, or uninitialized data
            ...
        }
    }

The idea here is that we allocate a structure, initialize its fields, and at the very end we “publish” it by storing it in a global variable. At that point, any other thread can see it, but that’s fine since it’s fully initialized, right? At least, it would be on x86 SMP or a uniprocessor (again, making the erroneous assumption that the compiler outputs code exactly as we have it in the source).
Without a memory barrier, the store to gGlobalThing could be observed before the fields are initialized on ARM. Another thread reading from thing->x could see 5, 0, or even uninitialized data.
This can be fixed by changing the last assignment to:

atomic_release_store(&gGlobalThing, thing);

That ensures that all other threads will observe the writes in the proper order, but what about reads? In this case we should be okay on ARM, because the address dependency rules will ensure that any loads from an offset of gGlobalThing are observed after the load of gGlobalThing. However, it's unwise to rely on architectural details, since it means your code will be very subtly unportable. The complete fix also requires a barrier after the load:

    MyThing* thing = atomic_acquire_load(&gGlobalThing);
    int i = thing->x;

Now we know the ordering will be correct. This may seem like an awkward way to write code, and it is, but that’s the price you pay for accessing data structures from multiple threads without using locks. Besides, address dependencies won’t always save us:

    MyThing gGlobalThing;

    void initGlobalThing()    // runs in thread 1
    {
        gGlobalThing.x = 5;
        gGlobalThing.y = 10;
        /* initialization complete */
        gGlobalThing.initialized = true;
    }

    void useGlobalThing()    // runs in thread 2
    {
        if (gGlobalThing.initialized) {
            int i = gGlobalThing.x;    // could be 5 or 0
        }
    }

Because there is no relationship between the initialized field and the others, the reads and writes can be observed out of order. (Note global data is initialized to zero by the OS, so it shouldn’t be possible to read “random” uninitialized data.)
We need to replace the store with:

atomic_release_store(&gGlobalThing.initialized, true);
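For comparison, here is how the earlier pointer-publication version of this pattern might look with the C++11 atomics mentioned later in this document (my own sketch; the structure and field names follow the example above):

    // Publish a fully-initialized object through an atomic pointer. The release
    // store keeps the field initialization before the publication, and the
    // acquire load keeps the field reads after it, on any architecture.
    #include <atomic>
    #include <cstdlib>
    #include <cstring>

    struct MyThing { int x; int y; };

    std::atomic<MyThing*> gGlobalAtomicThing{nullptr};

    void initGlobalThing() {                    // runs in thread 1
        MyThing* thing = static_cast<MyThing*>(std::malloc(sizeof(*thing)));
        std::memset(thing, 0, sizeof(*thing));
        thing->x = 5;
        thing->y = 10;
        gGlobalAtomicThing.store(thing, std::memory_order_release);   // publish
    }

    void useGlobalThing() {                     // runs in thread 2
        MyThing* thing = gGlobalAtomicThing.load(std::memory_order_acquire);
        if (thing != nullptr) {
            int i = thing->x;                   // guaranteed to see 5
            (void)i;
        }
    }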

Another example of the same problem occurs when implementing reference-counted data structures. The reference count itself will be consistent so long as atomic increment and decrement operations are used, but you can still run into trouble at the edges, for example:

    void RefCounted::release()
    {
        int oldCount = atomic_dec(&mRefCount);
        if (oldCount == 1) {    // was decremented to zero
            recycleStorage();
        }
    }

    void useSharedThing(RefCountedThing sharedThing)
    {
        int localVar = sharedThing->x;
        sharedThing->release();
        sharedThing = NULL;    // can't use this pointer any more
        doStuff(localVar);     // value of localVar might be wrong
    }

The release() call decrements the reference count using a barrier-free atomic decrement operation. Because this is an atomic RMW operation, we know that it will work correctly. If the reference count goes to zero, we recycle the storage.
The useSharedThing() function extracts what it needs from sharedThing and then releases its copy. However, because we didn’t use a memory barrier, and atomic and non-atomic operations can be reordered, it’s possible for other threads to observe the read of sharedThing->x after they observe the recycle operation. It’s therefore possible for localVar to hold a value from "recycled" memory, for example a new object created in the same location by another thread after release() is called.
This can be fixed by replacing the call to atomic_dec() with atomic_release_dec(). The barrier ensures that the reads from sharedThing are observed before we recycle the object.

In most cases the above won’t actually fail, because the “recycle” function is likely guarded by functions that themselves employ barriers (libc heap free()/delete(), or an object pool guarded by a mutex). If the recycle function used a lock-free algorithm implemented without barriers, however, the above code could fail on ARM SMP.
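A C++11 sketch of the barrier-paired decrement (my own; the fence placement mirrors what reference-counting implementations such as shared pointers commonly use, and the text's atomic_release_dec is expressed as fetch_sub with release ordering):

    // Releasing decrement: the release ordering keeps this thread's earlier reads
    // and writes of the object before the count reaches zero, and the acquire
    // fence makes every other thread's final accesses visible before recycling.
    #include <atomic>

    class RefCounted {
    public:
        void incRef() { mRefCount.fetch_add(1, std::memory_order_relaxed); }
        void release() {
            if (mRefCount.fetch_sub(1, std::memory_order_release) == 1) {
                std::atomic_thread_fence(std::memory_order_acquire);
                recycleStorage();               // safe: all prior accesses are visible
            }
        }
    private:
        void recycleStorage() { /* return the object's storage to its pool */ }
        std::atomic<int> mRefCount{1};
    };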

4.2 What not to do in Java
We haven’t discussed some relevant Java language features, so we’ll take a quick look at those first.
Java's "synchronized" and "volatile" keywords
The “synchronized” keyword provides the Java language’s in-built locking mechanism. Every object has an associated “monitor” that can be used to provide mutually exclusive access.
The implementation of the “synchronized” block has the same basic structure as the spin lock example: it begins with an acquiring CAS, and ends with a releasing store. This means that compilers and code optimizers are free to migrate code into a “synchronized” block. One practical consequence: you must not conclude that code inside a synchronized block happens after the stuff above it or before the stuff below it in a function. Going further, if a method has two synchronized blocks that lock the same object, and there are no operations in the intervening code that are observable by another thread, the compiler may perform “lock coarsening” and combine them into a single block.
The other relevant keyword is “volatile”. As defined in the specification for Java 1.4 and earlier, a volatile declaration was about as weak as its C counterpart. The spec for Java 1.5 was updated to provide stronger guarantees, almost to the level of monitor synchronization.
Non-volatile accesses may be reordered with respect to volatile accesses in the usual ways, for example the compiler could move a non-volatile load or store "above" a volatile store, but couldn't move it "below". Volatile accesses may not be reordered with respect to each other. The VM takes care of issuing the appropriate memory barriers.
It should be mentioned that, while loads and stores of object references and most primitive types are atomic, long and double fields are not accessed atomically unless they are marked as volatile. Multi-threaded updates to non-volatile 64-bit fields are problematic even on uniprocessors.
Examples
Here’s a simple, incorrect implementation of a monotonic counter: (Java theory and practice: Managing volatility).

    class Counter {
        private int mValue;

        public int get() {
            return mValue;
        }
        public void incr() {
            mValue++;
        }
    }

Assume get() and incr() are called from multiple threads, and we want to be sure that every thread sees the current count when get() is called. The most glaring problem is that mValue++ is actually three operations:
    reg = mValue
    reg = reg + 1
    mValue = reg

If two threads execute incr() simultaneously, one of the updates could be lost. To make the increment atomic, we need to declare incr() "synchronized". With this change, the code will run correctly in multi-threaded uniprocessor environments.
It’s still broken on SMP, however. Different threads might see different results from get(), because we’re reading the value with an ordinary load. We can correct the problem by declaring get() to be synchronized. With this change, the code is obviously correct.
Unfortunately, we've introduced the possibility of lock contention, which could hamper performance. Instead of declaring get() to be synchronized, we could declare mValue with "volatile". (Note incr() must still be synchronized.) Now we know that the volatile write to mValue will be visible to any subsequent volatile read of mValue. incr() will be slightly slower, but get() will be faster, so even in the absence of contention this is a win if reads outnumber writes. (See also AtomicInteger.)
Here’s another example, similar in form to the earlier C examples:

    class MyGoodies {
        public int x, y;
    }
    class MyClass {
        static MyGoodies sGoodies;

        void initGoodies() {    // runs in thread 1
            MyGoodies goods = new MyGoodies();
            goods.x = 5;
            goods.y = 10;
            sGoodies = goods;
        }

        void useGoodies() {    // runs in thread 2
            if (sGoodies != null) {
                int i = sGoodies.x;    // could be 5 or 0
                ....
            }
        }
    }

This has the same problem as the C code, namely that the assignment sGoodies = goods might be observed before the initialization of the fields in goods. If you declare sGoodies with the volatile keyword, you can think about the loads as if they were atomic_acquire_load() calls, and the stores as if they were atomic_release_store() calls.
(Note that only the sGoodies reference itself is volatile. The accesses to the fields inside it are not. The statement z = sGoodies.x will perform a volatile load of MyClass.sGoodies followed by a non-volatile load of sGoodies.x. If you make a local reference MyGoodies localGoods = sGoodies, then z = localGoods.x will not perform any volatile loads.)
A more common idiom in Java programming is the infamous “double-checked locking”:

    class MyClass {
        private Helper helper = null;

        public Helper getHelper() {
            if (helper == null) {
                synchronized (this) {
                    if (helper == null) {
                        helper = new Helper();
                    }
                }
            }
            return helper;
        }
    }

The idea is that we want to have a single instance of a Helper object associated with an instance of MyClass. We must only create it once, so we create and return it through a dedicated getHelper() function. To avoid a race in which two threads create the instance, we need to synchronize the object creation. However, we don't want to pay the overhead for the "synchronized" block on every call, so we only do that part if helper is currently null.
This doesn't work correctly on uniprocessor systems, unless you're using a traditional Java source compiler and an interpreter-only VM. Once you add fancy code optimizers and JIT compilers it breaks down. See the "'Double Checked Locking is Broken' Declaration" link in the appendix for more details, or Item 71 ("Use lazy initialization judiciously") in Josh Bloch's Effective Java, 2nd Edition.
Running this on an SMP system introduces an additional way to fail. Consider the same code rewritten slightly, as if it were compiled into a C-like language (I've added a couple of integer fields to represent Helper's constructor activity):

    if (helper == null) {
        // acquire monitor using spinlock
        while (atomic_acquire_cas(&this.lock, 0, 1) != success)
            ;
        if (helper == null) {
            newHelper = malloc(sizeof(Helper));
            newHelper->x = 5;
            newHelper->y = 10;
            helper = newHelper;
        }
        atomic_release_store(&this.lock, 0);
    }

Now the problem should be obvious: the store to helper is happening before the memory barrier, which means another thread could observe the non-null value of helper before the stores to the x/y fields.
You could try to ensure that the store to helper happens after the atomic_release_store() on this.lock by rearranging the code, but that won't help, because it's okay to migrate code upward — the compiler could move the assignment back above the atomic_release_store() to its original position.
There are two ways to fix this:
- Do the simple thing and delete the outer check. This ensures that we never examine the value of helper outside a synchronized block.
- Declare helper volatile. With this one small change, the code in Example J-3 will work correctly on Java 1.5 and later. (You may want to take a minute to convince yourself that this is true.)
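For comparison, a C++ analogue of the corrected pattern (my own sketch, not from the document): the role played by volatile in Java 1.5 is taken by an atomic pointer with acquire/release ordering.

    // Double-checked locking with an atomic pointer. The acquire load on the
    // fast path pairs with the release store made while the mutex is held, so a
    // caller that sees a non-null pointer also sees a fully-constructed Helper.
    #include <atomic>
    #include <mutex>

    class Helper {
    public:
        int x = 5, y = 10;
    };

    class LazyHolder {
        std::atomic<Helper*> helper{nullptr};
        std::mutex lock;
    public:
        Helper* getHelper() {
            Helper* h = helper.load(std::memory_order_acquire);
            if (h == nullptr) {
                std::lock_guard<std::mutex> guard(lock);
                h = helper.load(std::memory_order_relaxed);   // re-check under the lock
                if (h == nullptr) {
                    h = new Helper();
                    helper.store(h, std::memory_order_release);
                }
            }
            return h;
        }
    };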
This next example illustrates two important issues when using volatile:

    class MyClass {
        int data1, data2;
        volatile int vol1, vol2;

        void setValues() {    // runs in thread 1
            data1 = 1;
            vol1 = 2;
            data2 = 3;
        }

        void useValues1() {    // runs in thread 2
            if (vol1 == 2) {
                int l1 = data1;    // okay
                int l2 = data2;    // wrong
            }
        }
        void useValues2() {    // runs in thread 2
            int dummy = vol2;
            int l1 = data1;    // wrong
            int l2 = data2;    // wrong
        }
    }

Looking at useValues1(), if thread 2 hasn't yet observed the update to vol1, then it can't know if data1 or data2 has been set yet. Once it sees the update to vol1, it knows that the change to data1 is also visible, because that was made before vol1 was changed. However, it can't make any assumptions about data2, because that store was performed after the volatile store.
The code in useValues2() uses a second volatile field, vol2, in an attempt to force the VM to generate a memory barrier. This doesn't generally work. To establish a proper "happens-before" relationship, both threads need to be interacting with the same volatile field. You'd have to know that vol2 was set after data1/data2 in thread 1. (The fact that this doesn't work is probably obvious from looking at the code; the caution here is against trying to cleverly "cause" a memory barrier instead of creating an ordered series of accesses.)
4.3 What to do
In C/C++, use the pthread operations, like mutexes and semaphores. These include the proper memory barriers, providing correct and efficient behavior on all Android platform versions. Be sure to use them correctly, for example be wary of signaling a condition variable without holding the corresponding mutex.
It's best to avoid using atomic functions directly. Locking and unlocking a pthread mutex require a single atomic operation each if there’s no contention, so you’re not going to save much by replacing mutex calls with atomic ops. If you need a lock-free design, you must fully understand the concepts in this entire document before you begin (or, better yet, find an existing code library that is known to be correct on SMP ARM).
Be extremely circumspect with "volatile” in C/C++. It often indicates a concurrency problem waiting to happen.
In Java, the best answer is usually to use an appropriate utility class from the java.util.concurrent package. The code is well written and well tested on SMP.
Perhaps the safest thing you can do is make your class immutable. Objects from classes like String and Integer hold data that cannot be changed once the object is created, avoiding all synchronization issues. The book Effective Java, 2nd Ed. has specific instructions in "Item 15: Minimize Mutability". Note in particular the importance of declaring fields "final" (Bloch).
If neither of these options is viable, the Java “synchronized” statement should be used to guard any field that can be accessed by more than one thread. If mutexes won’t work for your situation, you should declare shared fields “volatile”, but you must take great care to understand the interactions between threads. The volatile declaration won’t save you from common concurrent programming mistakes, but it will help you avoid the mysterious failures associated with optimizing compilers and SMP mishaps.
The Java Memory Model guarantees that assignments to final fields are visible to all threads once the constructor has finished — this is what ensures proper synchronization of fields in immutable classes. This guarantee does not hold if a partially-constructed object is allowed to become visible to other threads. It is necessary to follow safe construction practices (Safe Construction Techniques in Java).
Synchronization primitive guarantees
The pthread library and VM make a couple of useful guarantees: all accesses previously performed by a thread that creates a new thread are observable by that new thread as soon as it starts, and all accesses performed by a thread that is exiting are observable when a join() on that thread returns. This means you don’t need any additional synchronization when preparing data for a new thread or examining the results of a joined thread.
Whether or not these guarantees apply to interactions with pooled threads depends on the thread pool implementation.
In C/C++, the pthread library guarantees that any accesses made by a thread before it unlocks a mutex will be observable by another thread after it locks that same mutex. It also guarantees that any accesses made before calling signal() or broadcast() on a condition variable will be observable by the woken thread.
Java language threads and monitors make similar guarantees for the comparable operations.
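A small pthread sketch of the create/join guarantee (my own illustration): no locks or barriers are needed around the handoff.

    // Data written before pthread_create is visible to the new thread, and the
    // new thread's writes are visible after pthread_join returns.
    #include <pthread.h>
    #include <cstdio>

    struct Work { int input; int output; };

    void* worker(void* arg) {
        Work* w = static_cast<Work*>(arg);
        w->output = w->input * 2;       // guaranteed to see input == 21
        return nullptr;
    }

    int main() {
        Work w;
        w.input = 21;                   // prepared before the thread starts
        pthread_t thread;
        pthread_create(&thread, nullptr, worker, &w);
        pthread_join(thread, nullptr);
        std::printf("%d\n", w.output);  // guaranteed to print 42
        return 0;
    }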
Upcoming changes to C/C++
The C and C++ language standards are evolving to include a sophisticated collection of atomic operations. A full matrix of calls for common data types is defined, with selectable memory barrier semantics (choose from relaxed, consume, acquire, release, acq_rel, seq_cst).
See the Further Reading section for pointers to the specifications.
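By way of illustration (my sketch of the C++11 <atomic> form these changes eventually took; the original document predates its finalization):

    // The same atomic variable accessed with different ordering constraints.
    #include <atomic>

    std::atomic<int> counter{0};

    void orderingExamples() {
        counter.fetch_add(1, std::memory_order_relaxed);   // atomicity only, no ordering
        counter.store(1, std::memory_order_release);       // releasing store
        int a = counter.load(std::memory_order_acquire);   // acquiring load
        int b = counter.load();                            // default: memory_order_seq_cst
        (void)a; (void)b;
    }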
5.Closing Notes
While this document does more than merely scratch the surface, it doesn’t manage more than a shallow gouge. This is a very broad and deep topic. Some areas for further exploration:
- Learn the definitions of happens-before, synchronizes-with, and other essential concepts from the Java Memory Model. (It's hard to understand what "volatile" really means without getting into this.)
- Explore what compilers are and aren't allowed to do when reordering code. (The JSR-133 spec has some great examples of legal transformations that lead to unexpected results.)
- Find out how to write immutable classes in Java and C++. (There's more to it than just "don't change anything after construction".)
- Internalize the recommendations in the Concurrency section of Effective Java, 2nd Edition. (For example, you should avoid calling methods that are meant to be overridden while inside a synchronized block.)
- Understand what sorts of barriers you can use on x86 and ARM. (And other CPUs for that matter, for example Itanium's acquire/release instruction modifiers.)
- Read through the java.util.concurrent and java.util.concurrent.atomic APIs to see what's available. Consider using concurrency annotations like @ThreadSafe and @GuardedBy (from net.jcip.annotations).
The Further Reading section in the appendix has links to documents and web sites that will better illuminate these topics.
6.Appendix
6.1 SMP failure example
This document describes a lot of “weird” things that can, in theory, happen. If you’re not convinced that these issues are real, a practical example may be useful.
Bill Pugh’s Java memory model web site has a few test programs on it. One interesting test is ReadAfterWrite.java, which does the following:
    Thread 1                                    Thread 2
    for (int i = 0; i < ITERATIONS; i++) {      for (int i = 0; i < ITERATIONS; i++) {
        a = i;                                      b = i;
        BB[i] = b;                                  AA[i] = a;
    }                                           }
Where a and b are declared as volatile int fields, and AA and BB are ordinary integer arrays.
This is trying to determine if the VM ensures that, after a value is written to a volatile, the next read from that volatile sees the new value. The test code executes these loops a million or so times, and then runs through afterward and searches the results for inconsistencies.
At the end of execution, AA and BB will be full of gradually-increasing integers. The threads will not run side-by-side in a predictable way, but we can assert a relationship between the array contents. For example, consider this execution fragment:
    Thread 1                   Thread 2
    (initially a == 1534)      (initially b == 165)
    a = 1535
    BB[1535] = 165
    a = 1536
    BB[1536] = 165
                               b = 166
                               AA[166] = 1536
                               b = 167
                               AA[167] = 1536
    a = 1537
    BB[1537] = 167
(This is written as if the threads were taking turns executing so that it’s more obvious when results from one thread should be visible to the other, but in practice that won’t be the case.)
Look at the assignment of AA[166] in thread 2. We are capturing the fact that, at the point where thread 2 was on iteration 166, it can see that thread 1 was on iteration 1536. If we look one step in the future, at thread 1’s iteration 1537, we expect to see that thread 1 saw that thread 2 was at iteration 166 (or later). BB[1537] holds 167, so it appears things are working.
Now suppose we fail to observe a volatile write to b:
(initially a == 1534, b == 165)
Thread 1:  a = 1535
Thread 1:  BB[1535] = 165
Thread 1:  a = 1536
Thread 1:  BB[1536] = 165
Thread 2:  b = 166
Thread 2:  AA[166] = 1536
Thread 2:  b = 167
Thread 2:  AA[167] = 1536
Thread 1:  a = 1537
Thread 1:  BB[1537] = 165   // stale b
Now, BB[1537] holds 165, a smaller value than we expected, so we know we have a problem. Put succinctly, for i=166, BB[AA[i]+1] < i. (This also catches failures by thread 2 to observe writes to a, for example if we miss an update and assign AA[166] = 1535, we will get BB[AA[166]+1] == 165.)
If you run the test program under Dalvik (Android 3.0 “Honeycomb” or later) on an SMP ARM device, it will never fail. If you remove the word “volatile” from the declarations of a and b, it will consistently fail. The program is testing to see if the VM is providing sequentially consistent ordering for accesses to a and b, so you will only see correct behavior when the variables are volatile. (It will also succeed if you run the code on a uniprocessor device, or run it while something else is using enough CPU that the kernel doesn’t schedule the test threads on separate cores.)
If you run the modified test a few times you will note that it doesn’t fail in the same place every time. The test fails consistently because it performs the operations a million times, and it only needs to see out-of-order accesses once. In practice, failures will be infrequent and difficult to locate. This test program could very well succeed on a broken VM if things just happen to work out.
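
If you want to experiment outside a Java VM, the test translates naturally to C++: sequentially consistent std::atomic accesses play the role of the volatile fields, and demoting them to memory_order_relaxed (or plain ints) reintroduces the failure on ARM SMP. This is an illustrative sketch, not Bill Pugh's original program:

    #include <atomic>
    #include <cstdio>
    #include <thread>
    #include <vector>

    static const int ITERATIONS = 1000000;
    static std::atomic<int> a{0}, b{0};              // the "volatile" pair
    static std::vector<int> AA(ITERATIONS), BB(ITERATIONS);

    int main() {
        std::thread t1([] {
            for (int i = 0; i < ITERATIONS; i++) {
                a.store(i);                          // write a
                BB[i] = b.load();                    // then read b
            }
        });
        std::thread t2([] {
            for (int i = 0; i < ITERATIONS; i++) {
                b.store(i);                          // write b
                AA[i] = a.load();                    // then read a
            }
        });
        t1.join();
        t2.join();

        // Search for the inconsistency described above: BB[AA[i]+1] < i.
        for (int i = 1; i < ITERATIONS; i++) {
            int j = AA[i];
            if (j + 1 < ITERATIONS && BB[j + 1] < i) {
                std::printf("reordering at i=%d: AA[i]=%d, BB[AA[i]+1]=%d\n",
                            i, j, BB[j + 1]);
                return 1;
            }
        }
        std::printf("no reordering observed\n");
        return 0;
    }
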
6.2 Implementing synchronization stores
(This isn’t something most programmers will find themselves implementing, but the discussion is illuminating.)
Consider once again volatile accesses in Java. Earlier we made reference to their similarities with acquiring loads and releasing stores, which works as a starting point but doesn’t tell the full story.
We start with a fragment of Dekker’s algorithm. Initially both flag1 and flag2 are false:
Thread 1:
    flag1 = true
    if (flag2 == false)
        critical-stuff

Thread 2:
    flag2 = true
    if (flag1 == false)
        critical-stuff

flag1 and flag2 are declared as volatile boolean fields. The rules for acquiring loads and releasing stores would allow the accesses in each thread to be reordered, breaking the algorithm. Fortunately, the JMM has a few things to say here. Informally:

- A write to a volatile field happens-before every subsequent read of that same field. (For this example, it means that if one thread updates a flag, and later on the other thread reads that flag, the reader is guaranteed to see the write.)
- Every execution has a total order over all volatile field accesses. The order is consistent with program order.
Taken together, these rules say that the volatile accesses in our example must be observable in program order by all threads. Thus, we will never see these threads executing the “critical-stuff” simultaneously.
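
The same reasoning can be made concrete in C++ (a hedged sketch, not a drop-in mutex replacement): sequentially consistent std::atomic accesses behave like the volatile fields above, so at most one thread can pass its flag check.

    #include <atomic>
    #include <cassert>
    #include <thread>

    std::atomic<bool> flag1{false}, flag2{false};
    std::atomic<int>  inside{0};          // counts threads in the critical section

    int main() {
        std::thread t1([] {
            flag1.store(true);            // corresponds to the volatile store
            if (flag2.load() == false) {  // corresponds to the volatile load
                inside.fetch_add(1);
                assert(inside.load() == 1);   // "critical-stuff": never two at once
                inside.fetch_sub(1);
            }
        });
        std::thread t2([] {
            flag2.store(true);
            if (flag1.load() == false) {
                inside.fetch_add(1);
                assert(inside.load() == 1);
                inside.fetch_sub(1);
            }
        });
        t1.join();
        t2.join();
        return 0;
    }

(As with the fragment above, it is possible for both threads to skip the critical section; the full Dekker algorithm adds a turn variable to handle that case.)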

Another way to think about this is in terms of data races. A data race occurs if two accesses to the same memory location by different threads are not ordered, at least one of them stores to the memory location, and at least one of them is not a synchronization action (Boehm and McKenney). The memory model declares that a program free of data races must behave as if executed by a sequentially-consistent machine. Because both flag1 and flag2 are volatile, and volatile accesses are considered synchronization actions, there are no data races and this code must execute in a sequentially consistent manner.

As we saw in an earlier section, we need to insert a store/load barrier between the two operations. The code executed in the VM for a volatile access will look something like this:
volatile load:
    reg = A
    load/load + load/store barrier

volatile store:
    store/store barrier
    A = reg
    store/load barrier
The volatile load is just an acquiring load. The volatile store is similar to a releasing store, but we’ve omitted load/store from the pre-store barrier, and added a store/load barrier afterward.
What we’re really trying to guarantee, though, is that (using thread 1 as an example) the write to flag1 is observed before the read of flag2. We could issue the store/load barrier before the volatile load instead and get the same result, but because loads tend to outnumber stores it’s best to associate it with the store.
On some architectures, it’s possible to implement volatile stores with an atomic operation and skip the explicit store/load barrier. On x86, for example, atomics provide a full barrier. The ARM LL/SC operations don’t include a barrier, so for ARM we must use explicit barriers.
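
Expressed with C++ relaxed atomics plus explicit fences (an illustrative sketch only, and slightly stronger than the minimal recipe because a C++ release fence also orders prior loads), the recipe above looks roughly like this:

    #include <atomic>

    std::atomic<int> A{0};

    int volatile_load() {
        int reg = A.load(std::memory_order_relaxed);           // reg = A
        std::atomic_thread_fence(std::memory_order_acquire);   // load/load + load/store barrier
        return reg;
    }

    void volatile_store(int reg) {
        std::atomic_thread_fence(std::memory_order_release);   // store/store barrier (and more)
        A.store(reg, std::memory_order_relaxed);                // A = reg
        std::atomic_thread_fence(std::memory_order_seq_cst);   // store/load barrier
    }
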
(Much of this is due to Doug Lea and his “JSR-133 Cookbook for Compiler Writers” page.)
6.3 Further reading
Web pages and documents that provide greater depth or breadth. The more generally useful articles are nearer the top of the list.

Shared Memory Consistency Models: A Tutorial

Written in 1995 by Adve & Gharachorloo, this is a good place to start if you want to dive more deeply into memory consistency models.

http://www.hpl.hp.com/techreports/Compaq-DEC/WRL-95-7.pdf

Memory Barriers

Nice little article summarizing the issues.

http://en.wikipedia.org/wiki/Memory_barrier

An introduction to multi-threaded programming in C++ and Java, by Hans Boehm. Excellent discussion of data races and basic synchronization methods.

Java Concurrency In Practice

Published in 2006, this book covers a wide range of topics in great detail. Highly recommended for anyone writing multi-threaded code in Java.

http://www.javaconcurrencyinpractice.com

JSR-133 (Java Memory Model) FAQ

A gentle introduction to the Java memory model, including an explanation of synchronization, volatile variables, and construction of final fields.

http://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html

Overview of package java.util.concurrent

The documentation for the java.util.concurrent package. Near the bottom of the page is a section entitled “Memory Consistency Properties” that explains the guarantees made by the various classes.

java.util.concurrent Package Summary

Java Theory and Practice: Safe Construction Techniques in Java

This article examines in detail the perils of references escaping during object construction, and provides guidelines for thread-safe constructors.

http://www.ibm.com/developerworks/java/library/j-jtp0618.html

Java Theory and Practice: Managing Volatility

A nice article describing what you can and can’t accomplish with volatile fields in Java.

http://www.ibm.com/developerworks/java/library/j-jtp06197.html

The “Double-Checked Locking is Broken” Declaration

Bill Pugh’s detailed explanation of the various ways in which double-checked locking is broken. Includes C/C++ and Java.

http://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLocking.html

[ARM] Barrier Litmus Tests and Cookbook

A discussion of ARM SMP issues, illuminated with short snippets of ARM code. If you found the examples in this document too un-specific, or want to read the formal description of the DMB instruction, read this. Also describes the instructions used for memory barriers on executable code (possibly useful if you’re generating code on the fly).

http://infocenter.arm.com/help/topic/com.arm.doc.genc007826/Barrier_Litmus_Tests_and_Cookbook_A08.pdf

Linux Kernel Memory Barriers

Documentation for Linux kernel memory barriers. Includes some useful examples and ASCII art.

http://www.kernel.org/doc/Documentation/memory-barriers.txt

ISO/IEC JTC1 SC22 WG21 (C++ standards) 14882 (C++ programming language), chapter 29 (“Atomic operations library”)

Draft standard for C++ atomic operation features.

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2010/n3090.pdf

(intro: http://www.hpl.hp.com/techreports/2008/HPL-2008-56.pdf)

ISO/IEC JTC1 SC22 WG14 (C standards) 9899 (C programming language) chapter 7.16 (“Atomics <stdatomic.h>”)

Draft standard for ISO/IEC 9899-201x C atomic operation features. (See also n1484 for errata.)

http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1425.pdf

Dekker’s algorithm

The “first known correct solution to the mutual exclusion problem in concurrent programming”. The wikipedia article has the full algorithm, with a discussion about how it would need to be updated to work with modern optimizing compilers and SMP hardware.

http://en.wikipedia.org/wiki/Dekker's_algorithm

An e-mail on the arm-kernel mailing list from Catalin Marinas. Includes a nice summary of address and control dependencies.

http://linux.derkeiler.com/Mailing-Lists/Kernel/2009-05/msg11811.html

What Every Programmer Should Know About Memory

A very long and detailed article about different types of memory, particularly CPU caches, by Ulrich Drepper.

Reasoning about the ARM weakly consistent memory model

This paper was written by Chong & Ishtiaq of ARM, Ltd. It attempts to describe the ARM SMP memory model in a rigorous but accessible fashion. The definition of “observability” used here comes from this paper.

http://portal.acm.org/ft_gateway.cfm?id=1353528&type=pdf&coll=&dl=&CFID=96099715&CFTOKEN=57505711

The JSR-133 Cookbook for Compiler Writers
Doug Lea wrote this as a companion to the JSR-133 (Java Memory Model) documentation. It goes much deeper into the details than most people will need to worry about, but it provides good fodder for contemplation.
http://g.oswego.edu/dl/jmm/cookbook.html

The Semantics of Power and ARM Multiprocessor Machine Code

If you prefer your explanations in rigorous mathematical form, this is a fine place to go next.

http://www.cl.cam.ac.uk/~pes20/weakmemory/draft-ppc-arm.pdf

Reposted from: https://www.cnblogs.com/sjjg/p/5416733.html
1. SMP IRQ Affinity

Hardware interrupts fire frequently and consume a lot of CPU resources. On a multi-core CPU, if the large volume of hardware interrupts can be handed out to different CPU cores, performance can clearly be balanced much better.

Today's servers routinely have multiple multi-core CPUs, multiple NICs, and multiple disks. If the NIC interrupts can be given a CPU core of their own and the disk I/O interrupts another, the burden on any single CPU is greatly reduced and overall processing efficiency improves.

The traditional 8259A interrupt controller only suits single-CPU machines. Modern systems are multi-CPU, multi-core SMP designs, so to take full advantage of the SMP architecture and deliver interrupts to every CPU in the system for better parallelism and performance, Intel introduced the Advanced Programmable Interrupt Controller (APIC).

Hardware support from the APIC alone is not enough; the Linux kernel must also be able to exploit it. Only kernel 2.4 and later versions support assigning specific hardware interrupt requests (IRQs) to particular CPUs, and this binding technique is called SMP IRQ Affinity.

2. Networking

Under very heavy network load, for workloads such as file servers or high-traffic web servers, binding the IRQs of the different NICs evenly across different CPUs relieves the pressure on any single CPU and improves the overall interrupt-handling capacity of the machine.

For workloads such as database servers, binding the disk controller to one CPU and the NIC to another improves database response time and overall performance. Balancing IRQs sensibly according to your own production environment and the characteristics of your application helps raise overall system throughput and performance.

NIC multi-queue and RPS/RFS

Taking NIC interrupts as an example: when SMP IRQ affinity is not configured, all NIC interrupts are routed to CPU0, which overloads CPU0 so that it cannot process network packets quickly enough, creating a bottleneck.

With SMP IRQ affinity, the interrupts of a single NIC's multiple queues can be spread across several CPUs, distributing the CPU load and speeding up packet processing. However, smp_affinity requires a NIC that supports multiple queues: only a multi-queue NIC has multiple interrupt numbers, and only then can the different interrupt numbers be assigned to different CPUs so that the interrupt load is spread relatively evenly across them.

A single-queue NIC can approximate multiple queues with RPS/RFS, but the effect is not as good as a NIC with native multi-queue support plus RPS/RFS enabled.

Linux uses interrupts to coordinate the CPU with other devices. NIC interrupts are handled by cpu0 by default, so in a network environment with a large number of requests cpu0 may run hot while the other CPUs sit idle. NIC multi-queue technology appeared later to solve this problem.

2) RPS

3) RFS

4) XPS

http://www.cnhalo.net/2016/10/14/linux-xps/

Why XPS?

If too many CPUs use the same TX queue, contention on that queue's qdisc lock gets worse. If each CPU is associated with only one TX queue, the contention can even be eliminated.

It can also reduce the cache misses caused by transmit-completion interrupts.

The xps_cpus setting is therefore best configured together with /proc/irq/xxx/smp_affinity, mapping a queue to the same CPU, or at least to CPUs on the same NUMA node.

3. irqbalance

By default all interrupt handling is concentrated on cpu0, so when the server is under heavy load cpu0 becomes the bottleneck while the other CPUs stay idle. Enabling the irqbalance service can both improve performance and reduce power consumption.

irqbalance optimizes interrupt distribution: it automatically collects system data to analyze usage patterns and, based on system load, puts itself into Performance mode or Power-save mode.

In Performance mode, irqbalance distributes interrupts as evenly as possible across the CPU cores to make full use of them and improve performance.

In Power-save mode, irqbalance concentrates interrupts on the first CPU so that the other idle CPUs can sleep longer, reducing power consumption.

In most cases simply enabling irqbalance is good enough, but for scenarios with high network-performance requirements, binding interrupts by hand is the better choice.

irqbalance automatically migrates interrupts according to the system's interrupt load to keep them balanced, while also taking power saving into account. In real-time systems, however, this makes interrupts drift between CPUs, which introduces performance instability; in high-performance deployments it is recommended to turn it off.

Running irqbalance comes with some problems, for example:

a) The computed assignments are sometimes unreasonable, so CPU usage is still unbalanced.

b) When the system is fairly idle and IRQs are in Power-save mode, irqbalance concentrates interrupts on the first CPU to preserve the sleep time of the other idle CPUs and save power. If load suddenly rises, the lag before irqbalance readjusts may cause performance problems.

c) The CPU that handles a given interrupt keeps changing, which leads to more context switches.

d) There are also cases where irqbalance is running but has no real effect and never actually sets which CPU handles an interrupt.

4. Binding the eth1 interrupt to CPU1

By default NIC interrupts are all concentrated on CPU0, so under huge network traffic CPU0 may become overloaded and packet transmission and reception start to lag. To spread the work of handling NIC interrupts, we need to bind the NIC to a specific CPU core.

4.1 NIC IRQs

/root # cat /proc/interrupts | grep eth

30:          0          0          0          0     GIC-0  90 Level     eth0

31:        185        0          0          0     GIC-0  92 Level     eth1

eth0 IRQ: 30

eth1 IRQ: 31

4.2 bind

echo 2 > /proc/irq/31/smp_affinity

/root # cat /proc/interrupts | grep eth

30:          0          0          0          0     GIC-0  90 Level     eth0

31:        185        258      0          0     GIC-0  92 Level     eth1

4.3 Mapping between smp_affinity values and CPUs

smp_affinity:

CPU            Binary      Hex

CPU 0      00000001      1

CPU 1      00000010      2

CPU 2      00000100      4

CPU 3      00001000      8
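
As a small worked example (an illustrative helper, not part of the original article), the mask for CPU n is simply 1 << n written in hex, and binding an IRQ amounts to writing that mask to /proc/irq/<irq>/smp_affinity (root required):

    #include <cstdio>

    // Write the affinity mask for a single CPU to /proc/irq/<irq>/smp_affinity.
    bool bind_irq_to_cpu(int irq, int cpu) {
        char path[64];
        std::snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);

        std::FILE* f = std::fopen(path, "w");
        if (!f) return false;                    // not root, or no such IRQ

        unsigned long mask = 1UL << cpu;         // CPU0 -> 1, CPU1 -> 2, CPU2 -> 4, ...
        std::fprintf(f, "%lx\n", mask);          // the file expects a hex bitmask
        std::fclose(f);
        return true;
    }

    int main() {
        // Equivalent to: echo 2 > /proc/irq/31/smp_affinity  (bind eth1's IRQ to CPU1)
        return bind_irq_to_cpu(31, 1) ? 0 : 1;
    }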

4.4 Notes

If the NIC's hard interrupt is bound to CPU0, then the softirq it raises also runs on CPU0, because the NET_RX_SOFTIRQ flag it sets is a per-CPU variable, and only the ksoftirqd thread on that CPU will read and act on it.

Softirqs follow a "whoever raises it handles it" rule. For example, when a driver's hardware interrupt is dispatched to a particular CPU and that interrupt handler raises a softirq, that CPU is responsible for invoking the action callback registered for that softirq number to process it.


Problem description

Does Ubuntu make use of multi-core CPUs (Windows 7 reportedly does)?
In other words, is it optimized for multitasking, so that users benefit from processors with four or more cores?

Best answer

Yes, Ubuntu is optimized for multi-core CPUs, and has been for many years.
"Version 2.0 (of the Linux kernel) was released on June 9, 1996. There were 41 releases in the series. The major feature of 2.0 was SMP support (that is, symmetric multiprocessing within a single system), along with support for more types of processors."

Second-best answer

Ubuntu is an operating system, a Linux distribution. An operating system is made up of several software components: the kernel, libraries, services/daemons, applications, and so on.
Ubuntu uses the Linux kernel, which supports symmetric multiprocessing (SMP) and multiple cores. It scales well from low-end single-CPU, single-core systems up to high-end supercomputer clusters with thousands of multi-core CPUs.
Ubuntu ships with hundreds of applications, some of which are coded to use multi-core CPUs and some of which are not. In some cases coding an application that way makes sense; in others it is simply not possible. Not every application that could or should use multiple cores actually does.
For example, it makes sense for a game to use multiple cores for graphics, audio, networking, physics, and so on. It makes no sense for a calculator, a screenshot tool, a sudoku game, or a menu editor to be multi-core aware.

Third answer

Ubuntu's kernel supports multiple CPUs; whether they come as one multi-core package or as several separate CPUs in one system does not really matter, both are handled just fine.
One thing you must be aware of, though, is that just because the system supports multiple CPUs/cores and they are present does not mean your applications will automatically run faster. You are often told the opposite these days, but that is marketing.
For an application to take advantage of several CPUs/cores at once, it has to be built to spread its workload across them.
The same goes for the Linux kernel and the Ubuntu distribution: if an application is built around a single thread, there is nothing the extra CPUs can do for it.
One thing operating systems do today to optimize application load across multiple CPUs/cores is to run a task on a core that is less loaded than the others, balancing the load and keeping multi-core utilization as good as possible. In practice this technique only brings a small speedup.
If you really want to see your cores shine, you need CPU-intensive programs that support symmetric multiprocessing, such as video editing.