Deep analysis of the principles of Redis cache consistency
2022-06-23 14:39:00 【Liu Java】
This article explains in detail three ways to achieve cache consistency with Redis, along with their advantages and disadvantages.
First, understand that there is no absolute consistency between the cache and the database. If you need absolute consistency, you cannot use a cache at all; we can only guarantee eventual consistency of the data and try to keep the window of inconsistency as short as possible.
In addition, to avoid inconsistency between the cache and the database under extreme conditions, the cache must have an expiration time. Once it expires, the cache entry is automatically evicted; only then can the cache and the database reach "eventual consistency".
If concurrency is not very high, it rarely causes problems whether you choose to delete the cache first or afterwards.
1 Update the database first, then delete the cache
If the database update succeeds but the cache deletion fails, the database holds the new data while the cache still holds the old data, so the two become inconsistent.
In a high-concurrency scenario, there is also a more extreme case in which the database and the cache become inconsistent:
- the cache has just expired;
- request A queries the database and gets the old value;
- request B writes the new value to the database;
- request B deletes the cache;
- request A writes the old value it read into the cache.
This leads to inconsistency. Moreover, if you do not set a cache expiration policy, the data stays dirty until the database record is updated again.
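For illustration, here is a minimal Java sketch of this write path, assuming a Jedis client and a hypothetical updateProductInDb DAO method (neither appears in the original article):

```java
import redis.clients.jedis.Jedis;

public class UpdateDbThenDeleteCache {
    private final Jedis jedis = new Jedis("localhost", 6379);

    // Strategy 1: update the database first, then delete the cache.
    public void updateProduct(String productId, String newValue) {
        updateProductInDb(productId, newValue);  // step 1: write the new value to the database
        jedis.del("product:" + productId);       // step 2: invalidate the cached copy
        // If this del() fails, the cache keeps the old value until it expires,
        // which is exactly the inconsistency described above.
    }

    private void updateProductInDb(String productId, String newValue) {
        // placeholder for the real JDBC / MyBatis / JPA update
    }
}
```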
2 Delete the cache first, then update the database
If you delete the cache first and then update the database, then even if the later database update fails, the cache is empty and the next read reloads from the database. Both sides still hold the old data, but at least they are consistent.
In a high-concurrency scenario, there is again a more extreme case in which the database and the cache become inconsistent:
- request A performs a write and deletes the cache;
- request B queries and finds the cache is empty;
- request B queries the database and gets the old value;
- request B writes the old value into the cache;
- request A writes the new value to the database.
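In code, this strategy is simply the reversed order of the two steps. A minimal sketch under the same assumptions as the previous one (Jedis client, hypothetical DAO method), with a comment marking where the race described above can occur:

```java
import redis.clients.jedis.Jedis;

public class DeleteCacheThenUpdateDb {
    private final Jedis jedis = new Jedis("localhost", 6379);

    // Strategy 2: delete the cache first, then update the database.
    public void updateProduct(String productId, String newValue) {
        jedis.del("product:" + productId);       // step 1: delete the cached copy
        // <-- the race from the list above happens here: another request can miss the
        //     cache, read the old database value, and write it back into the cache
        updateProductInDb(productId, newValue);  // step 2: write the new value to the database
    }

    private void updateProductInDb(String productId, String newValue) {
        // placeholder for the real database update
    }
}
```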
3 The delayed double delete strategy
With either of the two approaches above, whether you write the database first and then delete the cache, or delete the cache first and then write the database, data inconsistency is possible. Relatively speaking, the second is safer, so if you only choose between these two simple methods, the second is recommended.
A better approach is the "delayed double delete" strategy: perform the redis.del(key) operation both before and after writing the database. To prevent other threads from missing the cache during the update, reading the old value from the database, and writing it back into the cache, the writer sleeps for a period of time after updating the database and then deletes the cache once more.
This sleep time should be longer than the time it takes another request to read the old value from the database and write it into the cache. If Redis master-slave replication or database sharding is involved, the data-synchronization latency must also be factored in. After the sleep, delete the cache again (whether it holds new or old data). This does not guarantee that cache inconsistency never occurs, but it bounds the inconsistency to the sleep window, reducing the time the cache is inconsistent.
Of course, this strategy requires sleeping for a while, which increases the latency of write requests and lowers request throughput, and that is itself a problem. Therefore, the second deletion can be made asynchronous: the business thread no longer has to sleep before returning, which improves throughput.
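A minimal sketch of delayed double delete with an asynchronous second deletion, again assuming a Jedis client and a hypothetical DAO method; the 500 ms delay is an arbitrary placeholder that should be tuned as described above:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import redis.clients.jedis.Jedis;

public class DelayedDoubleDelete {
    private final Jedis jedis = new Jedis("localhost", 6379);
    // A single-threaded scheduler performs the second delete asynchronously,
    // so the business thread does not have to sleep before returning.
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // DELAY_MS must cover "read old value from DB + write it into the cache"
    // plus any replication / sharding sync lag; 500 ms is an assumption.
    private static final long DELAY_MS = 500;

    public void updateProduct(String productId, String newValue) {
        String key = "product:" + productId;
        jedis.del(key);                          // first delete, before the database write
        updateProductInDb(productId, newValue);  // write the new value to the database
        // second delete, scheduled instead of sleeping in the business thread
        scheduler.schedule(() -> {
            try (Jedis j = new Jedis("localhost", 6379)) {
                j.del(key);
            }
        }, DELAY_MS, TimeUnit.MILLISECONDS);
    }

    private void updateProductInDb(String productId, String newValue) {
        // placeholder for the real database update
    }
}
```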
What if deleting the cache fails? Then a retry mechanism is needed, and a message queue can be used: put the key that needs to be deleted onto the queue and consume it asynchronously. The consumer takes the key, compares the cached value with the database value, deletes the cache entry if they differ, and re-consumes the message until the deletion succeeds; if it still fails after a certain number of retries, the Redis server itself usually has a problem.
Admittedly, introducing message-oriented middleware makes the problem more complicated, because we now also have to ensure the middleware behaves correctly, for example whether the producer's send succeeds or fails. An alternative is to drive cache deletion from the database binlog: a middleware component listens to the binlog and automatically puts messages onto the queue, so we no longer write producer code, only the consumer. This binlog-listening plus message-queue approach is also popular at present.
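As a rough illustration of the retry idea (whether the key arrives from application code or from a binlog listener), the sketch below uses an in-process BlockingQueue as a stand-in for the real message queue; the queue choice, the loadFromDb helper, and the retry limit are all assumptions:

```java
import java.util.Objects;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import redis.clients.jedis.Jedis;

public class CacheDeleteRetryConsumer {
    // Stand-in for a real message queue (Kafka / RocketMQ / RabbitMQ):
    // each message is simply the cache key that still needs to be deleted.
    private final BlockingQueue<String> deleteQueue = new LinkedBlockingQueue<>();
    private final Jedis jedis = new Jedis("localhost", 6379);
    private static final int MAX_RETRIES = 5;

    public void enqueue(String key) {
        deleteQueue.offer(key);
    }

    // Consumer loop: compare the cached value with the database value and
    // delete the cache entry if they differ; retry on failure.
    public void consumeLoop() throws InterruptedException {
        while (true) {
            String key = deleteQueue.take();
            boolean done = false;
            for (int i = 0; i < MAX_RETRIES && !done; i++) {
                try {
                    String cached = jedis.get(key);
                    String fromDb = loadFromDb(key);   // hypothetical database lookup
                    if (cached == null || Objects.equals(cached, fromDb)) {
                        done = true;                   // already consistent, nothing to do
                    } else {
                        jedis.del(key);
                        done = true;
                    }
                } catch (Exception e) {
                    // read/delete failed; loop retries up to MAX_RETRIES times
                }
            }
            if (!done) {
                // repeated failures usually mean the Redis server itself is unhealthy
                System.err.println("giving up on key " + key + ", check the Redis server");
            }
        }
    }

    private String loadFromDb(String key) {
        return null; // placeholder for the real database query
    }
}
```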
4 Why delete the cache
Why do the above methods delete the cache instead of updating it?
If you update the database first and then update the cache, the following may happen: suppose the database is updated 1000 times within 1 hour, then the cache must also be updated 1000 times, yet that cached value may be read only once in the same hour. If you delete instead, then even though the database is updated 1000 times, only the first deletion actually removes the cache entry; the subsequent deletions return immediately, and the value is loaded from the database into the cache only when it is actually read. This reduces the load on Redis.
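For completeness, a minimal cache-aside read path under the same assumed setup, showing that the cache is only repopulated when a key is actually read:

```java
import redis.clients.jedis.Jedis;

public class ProductReader {
    private final Jedis jedis = new Jedis("localhost", 6379);
    private static final int TTL_SECONDS = 3600;  // expiration time, as discussed at the top

    // Cache-aside read: the cache is repopulated only when someone actually reads,
    // which is why deleting (rather than updating) the cache wastes no work.
    public String getProduct(String productId) {
        String key = "product:" + productId;
        String cached = jedis.get(key);
        if (cached != null) {
            return cached;                         // cache hit
        }
        String fromDb = loadFromDb(productId);     // cache miss: load from the database
        if (fromDb != null) {
            jedis.setex(key, TTL_SECONDS, fromDb); // write back with an expiration time
        }
        return fromDb;
    }

    private String loadFromDb(String productId) {
        return null; // placeholder for the real database query
    }
}
```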
If you would like to discuss, or if anything in the article is wrong, please leave a comment. I would also appreciate likes, bookmarks, and follows; I will keep publishing all kinds of Java learning posts!