Redis usage and memory optimization
2022-06-26 05:23:00 【Air transport Alliance】
For Redis installation and configuration, see: https://blog.csdn.net/weixin_46307478/article/details/122204294
For using Redis from Python, see: https://blog.csdn.net/weixin_46307478/article/details/122953512
III. Redis Usage and Memory Optimization
Redis's memory management is in fact quite costly: it uses a lot of memory, trading space for time. The author is well aware of this, and therefore provides a series of parameters and mechanisms to control and save memory.
It is recommended not to enable the VM (virtual memory) option
The VM option was a Redis persistence strategy that stored data exceeding physical memory by swapping it between memory and disk. It severely slows the system down, so keep VM off: check that vm-enabled is set to no in your redis.conf. (The VM feature was deprecated in Redis 2.4 and removed from later releases, so this setting only applies to old versions.)
Set the maximum memory option
It is best to set the maxmemory option in redis.conf. This option tells Redis to start rejecting subsequent write requests once it has used the specified amount of physical memory. It protects Redis from swapping caused by excessive physical memory use, which would severely degrade performance or even cause a crash.
In general, you should also set an eviction policy for when memory is full (maxmemory-policy):
- volatile-lru: evict the least recently used key from the set of keys with an expiration time set (server.db[i].expires)
- volatile-ttl: evict the key closest to expiring from the set of keys with an expiration time set (server.db[i].expires)
- volatile-random: evict a random key from the set of keys with an expiration time set (server.db[i].expires)
- allkeys-lru: evict the least recently used key from the whole key space (server.db[i].dict)
- allkeys-random: evict a random key from the whole key space (server.db[i].dict)
- noeviction: evict nothing; return an error on write requests that need more memory
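As a sketch, tying the memory cap to an eviction policy in redis.conf might look like this (the 2gb figure is an example, not a recommendation; size it to your host):

```conf
# Stop growing past 2 GB of used memory.
maxmemory 2gb
# Evict the least recently used key from the whole key space.
maxmemory-policy allkeys-lru
```

The policy can also be changed at runtime with CONFIG SET maxmemory-policy.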
Parameters that control memory usage
Redis provides a set of parameters for each data type to control its memory usage.
- Hash
The redis.conf configuration file has the following 2 items:
- **hash-max-zipmap-entries 64**
This means that when the hash value (the Map) has no more than this many internal fields, it is stored in a compact linear format. The default is 64: a hash with 64 or fewer fields uses the compact zipmap encoding; beyond that it is automatically converted to a real hash table (ht).
- hash-max-zipmap-value 512
hash-max-zipmap-value means that when every field value inside the hash is no longer than this many bytes, the compact zipmap encoding is used to save space.
If either of the two settings above is exceeded, the hash is converted to a real hash table and no longer saves memory. That does not mean bigger thresholds are always better; the space saved must be weighed against lookup and update efficiency for your actual workload.
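The two thresholds combine as an AND condition. A minimal Python sketch of the decision (the function name and labels are illustrative, not a Redis API):

```python
def hash_encoding(num_fields, longest_value_bytes,
                  max_entries=64, max_value=512):
    """Mimic how Redis chooses between the compact zipmap
    encoding and a real hash table for a hash value."""
    if num_fields <= max_entries and longest_value_bytes <= max_value:
        return "zipmap"     # compact linear encoding
    return "hashtable"      # real HashMap (ht)

print(hash_encoding(64, 512))    # zipmap: both limits respected
print(hash_encoding(65, 100))    # hashtable: too many fields
print(hash_encoding(10, 1024))   # hashtable: a value is too long
```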
- List
- list-max-ziplist-entries 512
A list with no more than this many nodes is stored in the compact, pointer-free ziplist format.
- list-max-ziplist-value 64
A list whose node values are all smaller than this many bytes is stored in the compact ziplist format.
- Set
- set-max-intset-entries 512
If a set contains only integers and has no more than this many members, it is stored in the compact intset format.
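The compact-encoding thresholds for the three types, collected as redis.conf lines (the values shown are the defaults quoted above):

```conf
# Hashes: compact zipmap encoding within these limits.
hash-max-zipmap-entries 64
hash-max-zipmap-value 512
# Lists: pointer-free ziplist encoding within these limits.
list-max-ziplist-entries 512
list-max-ziplist-value 64
# Integer-only sets: compact intset encoding up to this size.
set-max-intset-entries 512
```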
Redis internal optimizations
- The internal implementation does not heavily optimize memory allocation, so some memory fragmentation will occur, but most of the time this is not a Redis performance bottleneck.
- Redis caches a range of small integers as shared objects. For numeric data this can greatly reduce memory overhead. The default is 10,000 values (0 to 9999); it can be changed by modifying the REDIS_SHARED_INTEGERS macro in the source code and recompiling.
How can memory fragmentation be cleaned up?
Restart the Redis instance:
- If the data in Redis has not been persisted, it is lost;
- Even if the data has been persisted, it must be restored from an AOF or RDB file. The recovery time depends on the size of the AOF or RDB file, and if there is only a single Redis instance, the service is unavailable during recovery.
Fortunately, since version 4.0-RC3, Redis has provided automatic memory defragmentation:
The basic mechanism:
- Defragmentation, simply put, means moving allocated data to make way and merging the freed space.
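As a toy illustration of "move to make way, merge the space" (pure Python, no relation to Redis's actual allocator):

```python
def compact(blocks):
    """Defragment a toy memory layout.

    blocks is a list of (kind, size) tuples where kind is
    "used" or "free". Compaction moves every used block to
    the front and merges all free space into one tail block.
    """
    used = [b for b in blocks if b[0] == "used"]
    free_total = sum(size for kind, size in blocks if kind == "free")
    return used + ([("free", free_total)] if free_total else [])

layout = [("used", 4), ("free", 2), ("used", 3), ("free", 5)]
print(compact(layout))  # [('used', 4), ('used', 3), ('free', 7)]
```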

Redis provides parameters dedicated to the automatic defragmentation mechanism:
- controlling when cleanup starts and ends
- capping the proportion of CPU time it consumes
- thereby reducing the impact of cleanup on Redis's own request processing
First, automatic memory defragmentation must be enabled by setting the activedefrag configuration item to yes. The command is:
CONFIG SET activedefrag yes
Conditions that trigger cleanup (both must be met at the same time):
- active-defrag-ignore-bytes 100mb: cleanup starts when the amount of fragmented memory reaches 100MB;
- active-defrag-threshold-lower 10: cleanup starts when fragmented memory reaches 10% of the total space the operating system has allocated to Redis.
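The two trigger conditions are ANDed together. A small Python sketch (names are illustrative; Redis evaluates this internally):

```python
def should_defrag(frag_bytes, frag_pct,
                  ignore_bytes=100 * 1024 * 1024,  # 100mb
                  threshold_lower=10):             # 10 percent
    """Both conditions must hold before cleanup starts,
    mirroring active-defrag-ignore-bytes and
    active-defrag-threshold-lower."""
    return frag_bytes >= ignore_bytes and frag_pct >= threshold_lower

print(should_defrag(150 * 1024 * 1024, 12))  # True: both thresholds met
print(should_defrag(150 * 1024 * 1024, 5))   # False: ratio too low
print(should_defrag(50 * 1024 * 1024, 12))   # False: too few bytes
```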
To minimize the impact of cleanup on normal request processing, the automatic defragmentation function also monitors the CPU time the cleanup occupies, with two parameters setting the lower and upper bounds of that share. They ensure the cleanup can make progress while avoiding a drop in Redis performance:
- active-defrag-cycle-min 25: the cleanup uses no less than 25% of CPU time, guaranteeing it can proceed normally;
- active-defrag-cycle-max 75: the cleanup uses no more than 75% of CPU time; once that is exceeded, cleanup stops immediately, so that heavy memory copying does not block Redis and increase response latency.
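Collecting the settings above into one redis.conf fragment (the values are the examples from the text, not tuned recommendations):

```conf
# Enable automatic memory defragmentation (Redis >= 4.0-RC3).
activedefrag yes
# Start cleanup only once both thresholds are reached.
active-defrag-ignore-bytes 100mb
active-defrag-threshold-lower 10
# Bound the CPU share of the cleanup work.
active-defrag-cycle-min 25
active-defrag-cycle-max 75
```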