
Three characteristics of concurrency (2): Ordering

2022-06-22 08:13:00 Hide on jdk

The as-if-serial principle: no matter how the compiler and processor reorder instructions (to improve parallelism), the execution result of a single-threaded program must not be changed.
 

To improve CPU efficiency, compilers and processors do not reorder operations that have data dependencies, because such reordering would change the execution result. However, if there are no data dependencies between operations, the compiler and processor may reorder them. In one sentence: instructions may be reordered as long as the result does not change; otherwise they are not reordered.

Calculating the circumference of a circle: A and B can be reordered, because swapping A and B does not affect the result.

double pi = 3.14;                   // A
double r = 5.0;                     // B
double circumference = 2 * pi * r;  // C: depends on both A and B
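By contrast, here is a minimal sketch (the variable names are illustrative, not from the article) of two statements with a data dependency; because B reads the value written by A, the compiler and processor will not reorder them:

int a = 1;      // A: writes a
int b = a + 1;  // B: reads a, so B depends on A and cannot be moved before A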
 

The happens-before principle:

1. Program order rule: within a single thread, according to program code order, an operation written earlier happens-before an operation written later.
2. Monitor lock rule: an unlock operation happens-before every subsequent lock operation on the same lock.
3. volatile variable rule: a write to a volatile variable happens-before every subsequent read of that variable.
4. Transitivity rule: if operation A happens-before operation B, and operation B happens-before operation C, then operation A happens-before operation C.

The most important ones are rules 2 and 3: the lock rule and the volatile rule.

The happens-before principle is a very important principle in the JMM (Java Memory Model). It is the main basis for judging whether there is a data race and whether code is thread-safe, and it guarantees visibility in a multithreaded environment.
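A minimal sketch of the volatile rule (the class and field names are illustrative): the volatile write to flag in the writer thread happens-before any read that observes it, so once the reader sees flag == true it is also guaranteed to see data = 42.

class HappensBeforeDemo {
    int data = 0;
    volatile boolean flag = false;

    void writer() {
        data = 42;    // ordinary write
        flag = true;  // volatile write: happens-before any later read that sees it
    }

    void reader() {
        if (flag) {                   // volatile read
            System.out.println(data); // guaranteed to print 42, not 0
        }
    }
}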
 

CPU multi-level cache architecture:

A CPU core contains registers, a program counter (PC), caches, and an ALU (arithmetic logic unit). Because there is a large speed gap between the CPU and main memory, roughly 100:1, a three-level cache (L1/L2/L3) is placed between them to improve CPU efficiency.

This three-level cache design has a drawback. For example, CPU1 performs a +3 operation on a shared variable (through the eight atomic memory operations), while CPU2 performs a +5 operation on the same variable. If CPU1 writes its result back to main memory, but CPU2 never re-reads the new value from main memory before writing its own, the value finally read reflects only the +3 instead of the correct combined +8. How can this architecture be made to produce the right result?
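A minimal Java sketch (not from the original article; class name and loop counts are illustrative) of the lost-update problem described above. The two threads play the role of the two CPU cores: the unsynchronized read-modify-write works on each core's own copy, so one thread's updates can overwrite the other's and the final value is usually less than 200000.

class LostUpdateDemo {
    static long counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable add = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++; // read-modify-write: not atomic, updates can be lost
            }
        };
        Thread t1 = new Thread(add);
        Thread t2 = new Thread(add);
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println(counter); // typically less than 200000
    }
}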

To solve this, the MESI mechanism, a cache coherence protocol, was introduced.

M: Modified   E: Exclusive   S: Shared   I: Invalid

Modified (M)
The cache line is dirty: its value differs from main memory. If another CPU core wants to read this data from main memory, the cache line must first be written back to main memory, and its state changes to Shared (S).

Exclusive (E)
The cache line exists only in the current cache and is clean: the cached data matches main memory. When another cache reads it, the state changes to Shared; when the data is written, the state changes to Modified.

Shared (S)
The cache line also exists in other caches and has not been modified. The cache line can be discarded at any time.

Invalid (I)
The cache line is invalid.
 

That is, as soon as CPU1 completes its operation (the eight atomic operations), the bus sniffing mechanism takes effect: the modified data is written back to main memory and the other cores' copies are invalidated, so CPU2 obtains the correct value. There is also a bus arbitration mechanism, which prevents CPU1 and CPU2 from writing their modified data back to main memory at the same time.

The false sharing problem
If threads on multiple cores operate on different variables that happen to sit in the same cache line, the cache line will be invalidated frequently, even though at the code level the data operated on by the two threads is completely unrelated. This unreasonable resource contention is called false sharing.
 

View the cache line size (64 bytes):

cat /proc/cpuinfo

Schemes to avoid false sharing:


class Disp {
    volatile long a;
    // Avoid false sharing: cache line padding
    long p1, p2, p3, p4, p5, p6, p7;
    volatile long b;
}

False sharing case: two threads run on two CPU cores and operate on different variables located in the same cache line. With the two fields declared volatile, the run takes 2548 ms; because each volatile write is implemented with a lock-prefixed instruction and keeps invalidating the shared cache line, it is slow. Removing volatile brings the run down to 42 ms. If seven long fields are inserted between the two variables,

long p1, p2, p3, p4, p5, p6, p7;

the run takes 51 ms, still slightly slower than the plain unpadded version, presumably because the hardware still goes through the underlying MESI mechanism.

From JDK 8 onward, you can simply add an annotation and the padding is applied automatically, ensuring the two fields no longer share a cache line. This avoids false sharing and improves efficiency without using locks.

// @Contended is sun.misc.Contended in JDK 8 (jdk.internal.vm.annotation.Contended in JDK 9+)
// and only takes effect with the JVM parameter -XX:-RestrictContended
class Pointer {
    // Avoid false sharing: @Contended + JVM parameter -XX:-RestrictContended (supported since JDK 8)
    @Contended
    volatile long x; // declared volatile so every increment really hits the cache line, matching the timings above
    // (alternative: manual cache line padding between x and y)
    volatile long y;
}

private static void testPointer(Pointer pointer) throws InterruptedException {
    long start = System.currentTimeMillis();
    Thread t1 = new Thread(() -> {
        for (int i = 0; i < 100000000; i++) {
            pointer.x++;
        }
    });

    Thread t2 = new Thread(() -> {
        for (int i = 0; i < 100000000; i++) {
            pointer.y++;
        }
    });

    // start both threads, wait for them to finish, then print the elapsed time
    t1.start();
    t2.start();
    t1.join();
    t2.join();
    System.out.println(System.currentTimeMillis() - start + " ms");
}
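A minimal way to drive the benchmark above (assuming Pointer and testPointer live in the same class): run it once as shown, then once without @Contended or without the -XX:-RestrictContended flag, and compare the printed times.

public static void main(String[] args) throws InterruptedException {
    testPointer(new Pointer());
}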

Why DCL (double-checked locking) needs volatile:

if (myInstance == null) {
    synchronized (MyClass.class) {
        if (myInstance == null) {
            myInstance = new MyClass();
        }
    }
}
return myInstance;

Object creation involves 3 big steps; the normal order is 1, 2, 3, but steps 2 and 3 may be reordered:
// 1. Allocate the memory space
// 2. Initialize the object
// 3. Point myInstance at the allocated memory

Suppose the first thread comes in and executes steps 1, 2, 3, but steps 2 and 3 are reordered, so the reference is assigned before the object is initialized. The reference is then non-null. If a second thread comes in at that moment, it sees that the reference is not null and returns the half-initialized object created by the first thread, which is obviously wrong.

How to forbid this instruction reordering: declare the field volatile.
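A minimal sketch of a lazy singleton using double-checked locking with volatile (the class name Singleton is illustrative): volatile forbids the 2/3 reordering during new Singleton(), so other threads can never observe a half-initialized instance.

public class Singleton {
    // volatile forbids reordering of "initialize object" and "assign reference"
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                 // first check, without the lock
            synchronized (Singleton.class) {
                if (instance == null) {         // second check, under the lock
                    instance = new Singleton(); // allocate -> initialize -> assign
                }
            }
        }
        return instance;
    }
}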

Copyright notice: this article was written by [Hide on jdk]; please include the original link when reposting: https://yzsam.com/2022/173/202206220808562378.html