
Three caching patterns and their principles

2022-06-22 22:44:00 Autumn leaf flower

Cache aside

In this pattern the application works "alongside" the cache: it talks to both the cache and the database itself.

  1. Query the cache first; if the key exists there, return it, otherwise query the database.
  2. If the database has the value, update the cache and return the data; if it does not, cache a null value with a short validity period to prevent cache penetration.
  3. After updating the database, update or delete the cache (a minimal sketch of this read/write flow follows the list).
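
To make the flow concrete, here is a minimal Python sketch. The dict-based `database` and `cache`, the key names, and the null TTL are illustrative stand-ins (in practice the cache would typically be Redis or similar), not code from the original article.

```python
import time

database = {"user:1": {"name": "alice"}}   # stand-in for the real database
cache = {}                                 # stand-in for Redis: key -> (value, expires_at)
NULL_TTL = 5                               # short TTL, in seconds, for cached "not found" results

def read(key):
    # 1. Query the cache first; on a fresh hit, return directly.
    entry = cache.get(key)
    if entry is not None:
        value, expires_at = entry
        if expires_at is None or time.time() < expires_at:
            return value
        del cache[key]                     # expired entry: fall through to the database
    # 2. On a miss, query the database.
    value = database.get(key)
    if value is not None:
        cache[key] = (value, None)         # found: populate the cache
    else:
        cache[key] = (None, time.time() + NULL_TTL)  # not found: cache a null with a short TTL
    return value

def write(key, value):
    # 3. Update the database first, then invalidate the cache.
    database[key] = value
    cache.pop(key, None)
```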

Application scenarios:

  1. Read-heavy business scenarios (configuration data, logged-in user information, ...)

Advantage: on a cache miss the database is queried and the cache is filled on demand, i.e. lazy loading.

Shortcomings:

1. On a miss the flow is more involved: check the cache, query the database, then update the cache.

2. When data is updated frequently, the cache is refreshed frequently and its benefit drops.

3. It can easily lead to inconsistency between cache and database, producing dirty data.

The four ways Cache Aside can update the cache

The first: update the cache first, then update the database.

The second: delete the cache first, then update the database.

The third: update the database first, then update the cache.

The fourth: update the database first, then delete the cache.

The probability of inconsistency caused by these four strategies, from highest to lowest, is:

(assuming the latency of a cache update is far lower than that of a database update)

the first > the second > the third > the fourth

Applications are generally advised to use the third or the fourth strategy. If the requirements for data consistency are strict, the fourth strategy with an extra optimization (delayed double delete) is recommended: after updating the database, delete the cache immediately, then pause for a few seconds and delete it again.
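
A minimal sketch of that delayed double delete, again with dict stand-ins; the delay length and the use of a background thread are assumptions for illustration.

```python
import threading
import time

database = {}
cache = {}
DOUBLE_DELETE_DELAY = 2   # seconds; chosen to cover the window in which a stale read may repopulate the cache

def write_with_double_delete(key, value):
    database[key] = value                 # 1. update the database
    cache.pop(key, None)                  # 2. delete the cache immediately

    def delayed_delete():
        time.sleep(DOUBLE_DELETE_DELAY)
        cache.pop(key, None)              # 3. delete again to evict anything a concurrent reader re-cached

    threading.Thread(target=delayed_delete, daemon=True).start()
```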

Cache breakdown, cache penetration, and cache avalanche in the Cache Aside pattern, and their solutions

Cache breakdown:

The key is not in the cache but is in the database; under a large number of concurrent queries for that key, the pressure on the database spikes and can make the system unavailable.

Solutions:

Warm up the cache in advance; put a mutex around the database query for the key (sketched below); give the cached key no validity period (not recommended).
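
A sketch of the mutex approach: only one request is allowed to rebuild the hot key while everyone else waits and then re-checks the cache. A single process-wide lock is used here for brevity; a real deployment would use a per-key or distributed lock, and the names are illustrative.

```python
import threading

database = {"hot:item": "value"}
cache = {}
rebuild_lock = threading.Lock()   # in production: a per-key or distributed lock

def read_hot_key(key):
    value = cache.get(key)
    if value is not None:
        return value
    with rebuild_lock:
        value = cache.get(key)    # double-check after acquiring the lock
        if value is not None:
            return value
        value = database.get(key) # only one caller reaches the database
        if value is not None:
            cache[key] = value
        return value
```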

Cache penetration:

The key is in neither the cache nor the database; under a large number of concurrent queries, the pressure on the database spikes and can make the system unavailable.

Solutions:

Validate the query parameters; use a Bloom filter (or a cuckoo filter, sketched below); put a mutex around the database query for the key and, when the key does not exist in the database, cache a null value with a validity period of a few seconds.
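
A Bloom filter screens out keys that certainly do not exist before the database is ever asked. The toy implementation below (fixed bit array, positions derived from one SHA-256 digest) is only for illustration; in practice a library implementation or the cuckoo-filter variant mentioned above would be used.

```python
import hashlib

class TinyBloomFilter:
    """May return false positives, never false negatives."""

    def __init__(self, size_bits=1 << 16, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, key):
        digest = hashlib.sha256(key.encode()).digest()
        for i in range(self.num_hashes):
            yield int.from_bytes(digest[i * 4:(i + 1) * 4], "big") % self.size

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key):
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(key))

# Populate the filter with every key that really exists, then screen queries:
existing_keys = TinyBloomFilter()
existing_keys.add("user:1")
print(existing_keys.might_contain("user:1"))    # True
print(existing_keys.might_contain("user:999"))  # almost certainly False: reject before hitting the database
```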

Cache avalanche:

A large amount of cached data expires at the same time, so a large number of queries fall through to the database at once; this increases the pressure on the database and can make the system unavailable.

Solutions:

Add a random offset to each entry's validity period (sketched below); give hotspot data no validity period; distribute hot data across different servers.
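
Adding a random offset to the expiration is a small change; the base TTL and jitter range below are illustrative values.

```python
import random
import time

BASE_TTL = 600   # 10 minutes, illustrative
JITTER = 120     # up to 2 extra minutes, illustrative

def expires_at():
    # Spread expirations out so keys cached at the same moment
    # do not all expire at the same moment.
    return time.time() + BASE_TTL + random.uniform(0, JITTER)
```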

Read/Write through

How the cache is used

The cache acts as the primary data source and the database is transparent to the application: reading from and updating the database is delegated to the cache layer (a cache proxy).

Application scenarios

Many reads, few updates, and high requirements for data consistency.

Advantage: strong data consistency and fast queries.

Shortcoming: for write-heavy scenarios, the synchronous writes add latency and degrade performance.
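
The practical difference from Cache Aside is that the application only ever talks to the cache layer, which loads and writes the backing store on its own. A minimal sketch of such a layer, with a dict standing in for the database, is below; the class and method names are illustrative.

```python
class ReadWriteThroughCache:
    """The application calls get/put here; the layer itself talks to the backing store."""

    def __init__(self, backing_store):
        self.store = backing_store    # e.g. a database client; a dict here
        self.cache = {}

    def get(self, key):
        if key not in self.cache:     # read-through: the layer loads on a miss
            self.cache[key] = self.store.get(key)
        return self.cache[key]

    def put(self, key, value):
        self.store[key] = value       # write-through: store updated synchronously...
        self.cache[key] = value       # ...and the cache kept in step

database = {"config:theme": "dark"}
layer = ReadWriteThroughCache(database)
print(layer.get("config:theme"))      # loaded from the store, then served from the cache
layer.put("config:theme", "light")    # one call updates both store and cache
```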

Write behind

How the cache is used

The cache is treated as the authoritative data source: every write goes only to the cache, and the database is updated asynchronously.

Application scenarios

Write-heavy, read-light application scenarios (inventory/stock counts, pre-orders).

Advantage: the database is updated asynchronously, which reduces pressure on it; strong ability to absorb concurrent writes, since the system relies entirely on the cache.

Shortcoming: data can easily become inconsistent, and there is a risk of losing writes that have not yet been flushed to the database.
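
A minimal write-behind sketch: writes land in the cache immediately and a background worker flushes them to the store later. The queue, flush loop, and single worker thread are illustrative choices; anything still sitting in the queue when the process dies is lost, which is exactly the data-loss risk noted above.

```python
import queue
import threading

database = {}
cache = {}
pending = queue.Queue()       # writes waiting to be flushed to the database

def write(key, value):
    cache[key] = value        # the write completes as soon as the cache is updated
    pending.put((key, value))

def flush_worker():
    while True:               # background flusher: drains pending writes asynchronously
        try:
            key, value = pending.get(timeout=1.0)
        except queue.Empty:
            continue
        database[key] = value
        pending.task_done()

threading.Thread(target=flush_worker, daemon=True).start()
```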
