Redis is powerful, but if you don't follow the usage conventions you'll squander it
2022-06-25 20:00:00 【Code byte】
This is probably the most practical Redis usage guide you will read.
"Margo, my team lead chewed me out yesterday.
I work at a matchmaking-style Internet company, and for Double 11 we launched a 'place an order, get a girlfriend' promotion.
Unexpectedly, after midnight the number of users surged, a technical failure broke out, users could not place orders, and the boss was furious!
Digging in, we found Redis reporting `Could not get a resource from the pool`.
Connection resources could not be obtained from the pool, and a single Redis node in the cluster had an extremely high connection count.
So we raised the maximum number of connections and the connection wait queue. That made the errors less frequent, but they kept coming.
Later, offline testing revealed that the string data stored in Redis was very large, taking 1s on average to return.
Margo, can you share Redis usage conventions? I want to be a real man whose service never goes down!"
From the earlier article "Why Is Redis So Fast?" we know the pains Redis takes to achieve high performance and save memory.
So only by using Redis according to sound conventions can we actually realize that high performance and memory efficiency; otherwise even a Redis this capable cannot withstand our abuse.
Redis usage conventions unfold along the following dimensions:
- Key-value pair usage conventions;
- Command usage conventions;
- Data storage conventions;
- Operations and maintenance conventions.
Key-value pair usage conventions
There are two points to note:
- Key: design readable, maintainable key names, making it easy to locate problems and find data.
- Value: avoid bigkeys, choose efficient serialization and compression, use the object sharing pool, and pick an efficient, appropriate data type (see "Redis in Action: Using Bitmap Cleverly for Billion-Scale Data Statistics").
Key naming conventions
A well-formed key name makes problems easy to locate. Redis is a NoSQL database with no schema.
So we rely on naming conventions to establish schema-like semantics, just as we would create different databases for different scenarios.
Key point:
Use the 「business module name」 as a prefix (like a database schema), separate segments with a 「colon」, and append the 「specific business name」.
That way, the key prefix distinguishes different business data at a glance.
In short: 「business name:table name:id」.
For example, suppose we want to record the follower count of 「Code Byte」, a tech-category official account:

```
SET official-account:tech:code-byte 100000
```

Margo, is it a problem if the key is too long?
A key is a string whose underlying data structure is SDS; the SDS structure carries metadata such as the string length and the allocated space size.
As the string grows longer, the SDS metadata consumes more memory too.
So when a key name would be too long, use sensible abbreviations.
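To keep names consistent and short, one option is to centralize key construction in a small helper. Here is a minimal sketch; the `KeyBuilder` helper and the Jedis client usage are illustrative assumptions, not part of the original article:

```java
import redis.clients.jedis.Jedis;

public class KeyBuilder {
    // Build keys as "business:table:id" so every module names keys the same way.
    public static String key(String business, String table, String id) {
        return business + ":" + table + ":" + id;
    }

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            // "oa" abbreviates "official-account" to keep the SDS metadata small.
            jedis.set(key("oa", "tech", "code-byte"), "100000");
        }
    }
}
```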
Do not use bigkeys
"Margo, that's exactly what bit me: it caused the errors and the exhausted connection pool."
Because Redis executes read and write commands on a single thread, reading or writing a bigkey blocks that thread and degrades Redis's throughput.
Bigkeys come in two flavors:
- The value itself is large, for example a String value holding 2 MB of data;
- The value is a collection type with many elements, for example a List holding 50,000 elements.
Officially, Redis limits both the key and a String value to 512 MB.
But to avoid saturating the NIC and triggering slow queries, keep String values within 10 KB, and keep hash, list, set, and zset element counts below 5,000. You can locate existing bigkeys with `redis-cli --bigkeys`.
"Margo, what if the business data really is that big? Say we are storing the full text of the classic novel Jin Ping Mei."
For large strings, we can compress the data with gzip to reduce its size:
```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipUtil {

    /** Compress a string with gzip and return it Base64-encoded. */
    public static String compress(String str) throws IOException {
        if (str == null || str.isEmpty()) {
            return str;
        }
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(out)) {
            gzip.write(str.getBytes(StandardCharsets.UTF_8));
        }
        // Base64 keeps the compressed bytes safe to store as a string value.
        return Base64.getEncoder().encodeToString(out.toByteArray());
    }

    /** Decompress a Base64-encoded, gzip-compressed string. */
    public static String uncompress(String compressedStr) throws IOException {
        if (compressedStr == null || compressedStr.isEmpty()) {
            return compressedStr;
        }
        byte[] compressed = Base64.getDecoder().decode(compressedStr);
        try (ByteArrayOutputStream out = new ByteArrayOutputStream();
             GZIPInputStream gzip = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            byte[] buffer = new byte[1024];
            int len;
            while ((len = gzip.read(buffer)) != -1) {
                out.write(buffer, 0, len);
            }
            return out.toString(StandardCharsets.UTF_8.name());
        }
    }
}
```

Collection types
If a collection type really does hold a huge number of elements, split the big collection into several small ones, for example by hashing each element to one of N sub-keys, as sketched below.
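A minimal sketch of that idea, again assuming the Jedis client; the bucket count and key naming are illustrative assumptions:

```java
import redis.clients.jedis.Jedis;

public class SetSharding {
    private static final int BUCKETS = 100; // illustrative bucket count

    // Route each member to one of BUCKETS small sets instead of one giant set.
    private static String bucketKey(String baseKey, String member) {
        int bucket = Math.abs(member.hashCode() % BUCKETS);
        return baseKey + ":" + bucket;
    }

    public static void add(Jedis jedis, String baseKey, String member) {
        jedis.sadd(bucketKey(baseKey, member), member);
    }

    public static boolean contains(Jedis jedis, String baseKey, String member) {
        return jedis.sismember(bucketKey(baseKey, member), member);
    }
}
```

Because the bucket is derived from the member itself, membership checks still touch only one small key.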
Use efficient serialization and compression methods
To save memory, we can reduce the size of values with efficient serialization and compression.
The protostuff and kryo serialization libraries are both more efficient than Java's built-in serialization.
They save memory, but their output is binary, which is hard to read.
So we often serialize to JSON or XML instead; to keep that from bloating memory, compress the result with a tool such as snappy or gzip before saving it to Redis.
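A minimal sketch of that pipeline, assuming the Jackson and xerial Snappy libraries plus Jedis (none of which the original article names):

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import org.xerial.snappy.Snappy;
import redis.clients.jedis.Jedis;

public class JsonSnappyStore {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Serialize to JSON, then snappy-compress before writing to Redis.
    public static void save(Jedis jedis, String key, Object value) throws Exception {
        byte[] json = MAPPER.writeValueAsBytes(value);
        byte[] compressed = Snappy.compress(json);
        jedis.set(key.getBytes(), compressed);
    }

    public static <T> T load(Jedis jedis, String key, Class<T> type) throws Exception {
        byte[] compressed = jedis.get(key.getBytes());
        if (compressed == null) {
            return null;
        }
        return MAPPER.readValue(Snappy.uncompress(compressed), type);
    }
}
```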
Use the integer object sharing pool
Internally, Redis maintains the 10,000 integer objects from 0 to 9999 and uses them as a shared pool.
Even if many key-value pairs store integers in the 0-9999 range, a Redis instance actually keeps only one copy of each integer object, which saves memory.
Note, however, two situations where the pool does not take effect:
If Redis has maxmemory set and an LRU eviction policy enabled (allkeys-lru or volatile-lru), the integer object shared pool cannot be used. This is because LRU needs to track each key-value pair's access time; if different key-value pairs share one integer object, that tracking is impossible.
If collection-type data uses the ziplist encoding and the elements are integers, the shared pool cannot be used either.
That is because ziplist is a compact memory layout, and checking whether an integer object is shared there would be inefficient.
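You can observe the pool with the OBJECT REFCOUNT command. A hedged sketch via Jedis follows; the exact refcount reported for shared integers varies by Redis version (recent versions report 2147483647):

```java
import redis.clients.jedis.Jedis;

public class SharedIntDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            jedis.set("counter", "100"); // 100 falls inside the 0-9999 shared pool
            // A shared integer shows a very large refcount; a plain string
            // is a private object with refcount 1.
            System.out.println(jedis.objectRefcount("counter"));
            jedis.set("greeting", "hello redis");
            System.out.println(jedis.objectRefcount("greeting"));
        }
    }
}
```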
Command usage conventions
Some commands cause serious performance problems when executed, so they deserve special attention.
Commands to disable in production
Redis processes requests on a single thread; if we run a command that touches huge amounts of data or takes a long time, it badly blocks the main thread and other requests cannot be served.
KEYS: this command performs a full scan of the Redis instance's global hash table and severely blocks the main thread;
use SCAN instead, which returns matching keys in batches and avoids blocking the main thread, as in the sketch below.
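For example, a minimal batch iteration with SCAN, assuming a Jedis 3.x-style API:

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.ScanParams;
import redis.clients.jedis.ScanResult;

public class ScanDemo {
    // Iterate keys in batches with SCAN instead of blocking the server with KEYS.
    public static void scanAll(Jedis jedis, String pattern) {
        String cursor = ScanParams.SCAN_POINTER_START; // "0"
        ScanParams params = new ScanParams().match(pattern).count(100);
        do {
            ScanResult<String> result = jedis.scan(cursor, params);
            for (String key : result.getResult()) {
                System.out.println(key);
            }
            cursor = result.getCursor();
        } while (!"0".equals(cursor)); // cursor "0" means the scan is complete
    }
}
```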
FLUSHALL: deletes all data on the Redis instance; with a large dataset it severely blocks the main thread;
FLUSHDB: deletes the current database's data; with a large dataset it likewise blocks the main thread.
Adding the ASYNC option makes FLUSHALL and FLUSHDB run asynchronously.
We can also disable such commands outright by renaming them with the rename-command directive in the configuration file, so clients cannot call them at all.
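For instance, in redis.conf, renaming a command to the empty string disables it entirely:

```
rename-command KEYS ""
rename-command FLUSHALL ""
rename-command FLUSHDB ""
```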
Use the MONITOR command with caution
MONITOR continuously writes the commands it observes into the output buffer.
If the instance handles many commands, the output buffer soon overflows, hurting Redis's performance and possibly even crashing the service.
So use it only when genuinely necessary to watch command execution (for example, when Redis suddenly slows down and we want to see what commands clients are running).
Use full-collection read commands with caution
For example, the commands that fetch every element of a collection: HGETALL for Hash, LRANGE over the whole range for List, SMEMBERS for Set, ZRANGE for Sorted Set, and so on.
These operations scan the entire underlying data structure and block the Redis main thread.
"Margo, what if the business scenario genuinely needs the full data set?"
There are two ways to handle it:
- Use SSCAN, HSCAN, and similar commands to return the collection's data in batches, as sketched below;
- Split the big collection into small ones, for example by time or by region.
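A minimal HSCAN sketch in the same assumed Jedis 3.x style, replacing a blocking HGETALL:

```java
import java.util.Map;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.ScanParams;
import redis.clients.jedis.ScanResult;

public class HscanDemo {
    // Read a big hash in batches instead of one blocking HGETALL.
    public static void readAll(Jedis jedis, String hashKey) {
        String cursor = ScanParams.SCAN_POINTER_START;
        ScanParams params = new ScanParams().count(100);
        do {
            ScanResult<Map.Entry<String, String>> page = jedis.hscan(hashKey, cursor, params);
            for (Map.Entry<String, String> field : page.getResult()) {
                System.out.println(field.getKey() + " = " + field.getValue());
            }
            cursor = page.getCursor();
        } while (!"0".equals(cursor));
    }
}
```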
Data storage conventions
Separate hot and cold data
Although Redis can persist data with RDB snapshots and AOF logs, both mechanisms exist to guarantee data reliability, not to expand data capacity.
Do not put all your data in Redis. Cache only hot data, so that Redis's high performance is fully exploited and its precious memory serves the data that actually needs it.
Business data isolation
Do not put unrelated businesses' data in one Redis instance. This avoids businesses interfering with each other and a single instance bloating, and it shrinks the blast radius of a failure so recovery is fast.
Set expiration time
When saving data, set an expiration time matched to how long the business actually needs it.
Otherwise, data written to Redis occupies memory forever; if it keeps growing, it may hit the machine's memory limit, overflow, and crash the service.
Control the memory capacity of a single instance
2~6 GB per instance is recommended. At that size, RDB snapshots and master-replica data synchronization both finish quickly, without blocking the processing of normal requests.
Prevent cache avalanches
Avoid having a large batch of keys expire at the same moment, which triggers a cache avalanche.
"Margo, what is a cache avalanche?"
When a large swath of the cache becomes invalid at the same time, the flood of requests falls straight onto the database. Under that pressure, in a high-concurrency scenario, the database can go down in an instant.
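A common fix is to add random jitter to each TTL so expirations spread out. A minimal sketch, where the base TTL and the 300-second jitter range are illustrative assumptions:

```java
import java.util.concurrent.ThreadLocalRandom;
import redis.clients.jedis.Jedis;

public class JitteredExpire {
    // Spread expirations out by adding 0-300s of random jitter to the base TTL.
    public static void setWithJitter(Jedis jedis, String key, String value, int baseSeconds) {
        int jitter = ThreadLocalRandom.current().nextInt(300);
        jedis.setex(key, baseSeconds + jitter, value);
    }
}
```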
Operations and maintenance conventions
- Use a Redis Cluster or a sentinel cluster for high availability;
- Set a maximum number of connections per instance, so excessive client connections cannot overload the instance and hurt performance;
- Either do not enable AOF, or configure AOF to fsync once per second, so disk IO does not drag down Redis;
- Set repl-backlog reasonably to reduce the probability of full master-replica synchronization;
- Set the replica client-output-buffer-limit reasonably to avoid master-replica replication being interrupted;
- Choose a memory eviction policy suited to the actual scenario;
- Operate Redis through a connection pool (a config sketch and a pool sketch follow below).
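To make the knobs above concrete, here is a hedged redis.conf sketch. Every value is an illustrative assumption to be tuned per workload, not a recommendation from the original article:

```
maxclients 10000                                  # cap client connections
appendonly yes
appendfsync everysec                              # flush AOF once per second
maxmemory 4gb                                     # keep the instance in the 2~6 GB range
maxmemory-policy volatile-lru                     # pick an eviction policy per scenario
repl-backlog-size 64mb                            # larger backlog, fewer full syncs
client-output-buffer-limit replica 512mb 128mb 60
```

And a minimal connection-pool sketch with JedisPool (an assumed Jedis 3.x-style API; pool sizes are again illustrative):

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class PooledRedis {
    public static void main(String[] args) {
        JedisPoolConfig config = new JedisPoolConfig();
        config.setMaxTotal(32);        // illustrative pool ceiling
        config.setMaxIdle(8);
        config.setMaxWaitMillis(2000); // fail fast instead of piling up waiters
        try (JedisPool pool = new JedisPool(config, "127.0.0.1", 6379)) {
            try (Jedis jedis = pool.getResource()) {
                jedis.set("hello", "redis");
            } // returning the connection to the pool happens on close()
        }
    }
}
```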
Finally, you are welcome to share the usage conventions you follow in the comments; let's discuss.
Recommended reading
Redis persistence: how AOF and RDB ensure high data availability
Redis high availability: the data consistency and synchronization principle of the master-replica architecture
Redis high availability: how the sentinel cluster works
Redis high availability: Cluster theory
Redis in Action: Using Bitmap Cleverly for Billion-Scale Data Statistics
Redis in action: implementing "people nearby" with the Geo type