20 Classic Redis Questions
2022-06-26 02:28:00 【Love dad】
Today I'd like to share some frequently asked Redis interview questions, with the answers compiled for each one. See how many you can get right.
Contents of this article:
What is Redis?
What are the advantages of Redis?
Why is Redis so fast?
Why is Redis single-threaded?
What are the application scenarios of Redis?
What is the difference between Memcached and Redis?
What data types does Redis support?
Redis transactions
Persistence mechanism
RDB
AOF
Master-slave replication
Sentinel
Redis Cluster
What is the deletion strategy for expired keys?
What are the memory eviction policies?
How to keep the cache and the database consistent on double writes?
Cache penetration
Cache avalanche
Cache breakdown
What is the role of pipeline?
Lua scripts
What is Redis?
Redis (Remote Dictionary Server) is a high-performance, non-relational key-value database written in C. Unlike traditional databases, Redis keeps its data in memory, so reads and writes are very fast, which is why it is widely used for caching. Redis can also write its data to disk so that it is not lost, and Redis operations are atomic.
What are the advantages of Redis?
Memory-based: reads and writes in memory are fast.
Redis is single-threaded, which avoids the cost of thread switching and contention between threads. "Single-threaded" here means that a single thread handles all network requests; Redis itself runs more than one thread, for example persistence work is handed off to another thread or child process.
Supports multiple data types, including String, Hash, List, Set, ZSet, etc.
Supports persistence. Redis offers two persistence mechanisms, RDB and AOF, which effectively prevent data loss.
Supports transactions. Individual Redis operations are atomic, and Redis can also group several operations and execute them together.
Supports master-slave replication. The master node automatically synchronizes data to the slave nodes, enabling read-write splitting.
Why is Redis so fast?
Memory-based: Redis stores data in memory, so there is no disk I/O overhead and reads and writes are fast.
Single-threaded execution (before Redis 6.0): Redis handles requests with a single thread, which avoids the overhead of thread switching and of lock contention between threads.
I/O multiplexing: Redis uses I/O multiplexing. A single thread polls the file descriptors and turns all database operations into events, so little time is wasted waiting on network I/O.
Efficient data structures: each of Redis's underlying data structures is optimized for speed.
Why is Redis single-threaded?
It avoids the overhead of context switching: the program always runs in a single thread inside the process, so there is no thread-switching scenario.
It avoids the overhead of synchronization: with a multi-threaded model, Redis would have to handle data synchronization and introduce locking, which adds overhead to every data operation, increases program complexity, and lowers performance.
Simple to implement and easy to maintain: with multiple threads, every underlying data structure would have to be designed to be thread-safe, which would make the implementation of Redis much more complex.
What are the application scenarios of Redis?
Caching hot data to relieve pressure on the database.
Counters: Redis's atomic increment can implement counters, for example counting likes or page views per user (a minimal sketch follows this list).
Simple message queues: Redis's built-in publish/subscribe mode or a List can implement a simple message queue for asynchronous processing.
Rate limiting: restricting how frequently a user may call an interface, for example in flash-sale scenarios, to avoid unnecessary pressure from rapid repeated clicks.
Friend relationships: set commands such as intersection, union, and difference make it easy to implement mutual friends, shared interests, and similar features.
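A minimal sketch of the counter scenario using the Jedis client (the key name and connection details are illustrative assumptions, not part of the original article):
import redis.clients.jedis.Jedis;

public class LikeCounter {
    public static void main(String[] args) {
        // Assumes a local Redis instance on the default port.
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            String key = "article:1001:likes";   // hypothetical key name
            long likes = jedis.incr(key);        // atomic increment, safe under concurrent requests
            System.out.println("current likes: " + likes);
        }
    }
}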
What is the difference between Memcached and Redis?
Redis uses only a single core, while Memcached can use multiple cores.
Memcached supports only one data structure and is used purely for caching, while Redis supports multiple data types.
Memcached does not support persistence, so its data disappears after a restart; Redis supports data persistence.
Redis provides master-slave replication and Cluster deployment, so it can offer highly available services. Memcached has no native cluster mode and relies on the client to shard data across instances.
Redis is much faster than Memcached.
Redis uses a single-threaded I/O multiplexing model, while Memcached uses a multi-threaded, non-blocking I/O model.
What data types does Redis support?
Basic data types:
1、String: the most common type. A String value can hold text, a number, or binary data, up to a maximum of 512 MB.
2、Hash: a collection of field-value pairs.
3、Set: an unordered collection of unique elements. Set provides intersection, union, and other operations, which makes features such as mutual friends and shared follows especially convenient.
4、List: an ordered collection that allows duplicates, implemented on top of a doubly linked list.
5、SortedSet (ZSet): an ordered Set that maintains a score for each member. Suitable for leaderboards and weighted message queues.
Special data types:
1、Bitmap: a bit array. Each cell holds only 0 or 1, and the index of the array is called the offset. The size of a Bitmap does not depend on how many elements are stored, only on the upper bound of the value range.
2、HyperLogLog: an algorithm for cardinality estimation. Its advantage is that the space needed is fixed and very small, even when the number or volume of input elements is extremely large. A typical use case is counting unique visitors.
3、Geospatial: used to store geographic coordinates and query them, for scenarios such as location lookup and "people nearby" (see the sketch after this list).
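A minimal Java sketch exercising several of these types with the Jedis client (all key names and values are illustrative assumptions):
import redis.clients.jedis.Jedis;

public class DataTypeDemo {
    public static void main(String[] args) {
        // Assumes a local Redis instance on the default port.
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            jedis.set("user:1:name", "alice");                  // String
            jedis.hset("user:1", "age", "30");                  // Hash field
            jedis.lpush("task:queue", "job1", "job2");          // List
            jedis.sadd("user:1:friends", "2", "3");             // Set
            jedis.zadd("rank", 99.5, "alice");                  // SortedSet: score + member
            jedis.setbit("sign:2022-06", 25, true);             // Bitmap: mark offset 25
            jedis.pfadd("uv:2022-06-26", "ip1", "ip2", "ip1");  // HyperLogLog
            System.out.println(jedis.pfcount("uv:2022-06-26")); // approximate unique count (2)
        }
    }
}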
Redis transactions
The principle of a transaction is to send several commands to Redis inside one transaction and then have Redis execute them in order.
The life cycle of a transaction:
Start the transaction with MULTI. Once a transaction has been started, each subsequent command is put into a queue instead of being executed immediately.
Commit the transaction with EXEC.
An error in one command inside a transaction does not affect the execution of the other commands; atomicity is not guaranteed:
127.0.0.1:6379> multi
OK
127.0.0.1:6379> set a 1
QUEUED
127.0.0.1:6379> set b 1 2
QUEUED
127.0.0.1:6379> set c 3
QUEUED
127.0.0.1:6379> exec
1) OK
2) (error) ERR syntax error
3) OK
WATCH command
WATCH monitors one or more keys. If any of them is modified before the transaction runs, the transaction will not be executed (it works like an optimistic lock). After EXEC runs, the monitoring is cancelled automatically.
127.0.0.1:6379> watch name
OK
127.0.0.1:6379> set name 1
OK
127.0.0.1:6379> multi
OK
127.0.0.1:6379> set name 2
QUEUED
127.0.0.1:6379> set gender 1
QUEUED
127.0.0.1:6379> exec
(nil)
127.0.0.1:6379> get gender
(nil)
In the example above:
watch name puts the key name under monitoring.
set name 1 then modifies the watched key.
multi opens a transaction, and set name 2 and set gender 1 are queued inside it.
exec tries to commit the transaction, but because the watched key name was modified after WATCH, the whole transaction is discarded and returns (nil).
get gender returns nothing, which confirms that the transaction was not executed.
UNWATCH cancels the monitoring that WATCH placed on keys; all monitoring locks are released.
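The same optimistic-lock pattern from Java, as a minimal sketch with the Jedis client (key names are illustrative; depending on the Jedis version, exec() returns null or an empty list when the transaction is discarded):
import java.util.List;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;

public class WatchDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            jedis.watch("stock:1001");                             // optimistic lock on the key
            int stock = Integer.parseInt(jedis.get("stock:1001")); // assumes the key holds a number
            Transaction tx = jedis.multi();                        // open the transaction
            tx.set("stock:1001", String.valueOf(stock - 1));
            List<Object> result = tx.exec();                       // discarded if the watched key changed
            if (result == null || result.isEmpty()) {
                System.out.println("watched key was modified, transaction discarded");
            }
        }
    }
}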
Persistence mechanism
Persistence means writing the data held in memory to disk, so that the data is not lost when the service goes down.
Redis supports two persistence mechanisms: RDB and AOF. The former periodically dumps the in-memory data to disk according to the configured rules, while the latter records each write command after it is executed. The two are usually used together.
RDB
RDB is Redis's default persistence scheme. During RDB persistence, the in-memory data is written to disk as a dump.rdb file in the configured directory. When Redis restarts, it loads dump.rdb to recover the data.
bgsave is the usual way to trigger RDB persistence. It works as follows:
The BGSAVE command is executed. The Redis parent process checks whether a child process is already running a save; if so, BGSAVE returns immediately.
The parent process forks a child process; the parent is blocked while the fork is in progress.
After the fork completes, the parent continues to accept and handle client requests, while the child process writes the in-memory data to a temporary file on disk.
When the child has written all the data, the temporary file replaces the old RDB file.
Redis reads the RDB snapshot file at startup and loads the data from disk into memory. With RDB persistence alone, if Redis exits abnormally, all changes made since the last snapshot are lost.
Ways to trigger RDB persistence:
Manual trigger: the user runs SAVE or BGSAVE. SAVE blocks all client requests while the snapshot is taken, so avoid it in production. BGSAVE performs the snapshot asynchronously in the background, so the server can keep responding to clients; use BGSAVE when you need to take a snapshot manually (a short Java sketch follows this list).
Passive trigger: automatic snapshots according to the configured rules, e.g. save 100 10 takes a snapshot if at least 10 keys were modified within 100 seconds. When a slave node performs a full resynchronization, the master automatically runs BGSAVE to generate an RDB file and sends it to the slave. By default, when SHUTDOWN is executed and AOF is not enabled, BGSAVE is run automatically.
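As a minimal sketch (Jedis client assumed; not part of the original article), a snapshot can be triggered manually from Java like this:
import redis.clients.jedis.Jedis;

public class SnapshotDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            String reply = jedis.bgsave();        // fork a child and write dump.rdb in the background
            System.out.println(reply);            // typically "Background saving started"
            System.out.println(jedis.lastsave()); // unix timestamp of the last successful save
        }
    }
}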
Advantages:
Loading an RDB file to recover data is much faster than replaying an AOF file.
Persistence is done by a single child process, so the main process performs no disk I/O and Redis keeps its high performance.
Disadvantages:
RDB cannot persist data in real time, because every BGSAVE has to fork a child process, which is a heavyweight operation that is expensive to run frequently.
RDB files use a specific binary format. Several RDB versions exist across Redis releases, so an old Redis may not be compatible with an RDB file produced by a newer version.
AOF
AOF (append only file) persistence: every write is recorded as a log entry, and on restart Redis re-executes the commands in the AOF file to recover the data. AOF mainly solves the real-time aspect of persistence and is the mainstream persistence mechanism of Redis.
AOF is disabled by default; it is enabled with the appendonly parameter: appendonly yes. With AOF enabled, after each write command is executed Redis appends the command to the aof_buf buffer, and the buffer is synced to disk according to the configured policy.
By default the operating system flushes the data roughly every 30 seconds. To avoid losing buffered data, Redis can be told to actively sync the buffer to disk after writing to the AOF file. The sync timing is controlled by the appendfsync parameter:
appendfsync always // sync the AOF file on every write; safest but slowest, not recommended
appendfsync everysec // balances performance and safety, recommended
appendfsync no // let the operating system decide when to sync
The AOF persistence flow:
All write commands are appended to the AOF buffer.
The AOF buffer is synced to disk according to the configured policy.
As the AOF file grows, it is rewritten periodically to compact it. An AOF rewrite converts the data currently held by the Redis process into write commands and writes them to a new AOF file.
When the Redis server restarts, it loads the AOF file to recover the data.
Advantages:
AOF protects data better against loss. With fsync configured to run once per second, at most one second of data is lost if the Redis process crashes.
The AOF file is written in append-only mode, so there is no disk-seek overhead and write performance is very high.
Disadvantages:
For the same dataset, the AOF file is larger than the RDB snapshot.
Data recovery is slower.
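A minimal sketch of adjusting AOF from a Java client via CONFIG SET (Jedis assumed; in production these settings normally live in redis.conf):
import redis.clients.jedis.Jedis;

public class AofDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            jedis.configSet("appendonly", "yes");        // enable AOF at runtime
            jedis.configSet("appendfsync", "everysec");  // the recommended sync policy
            System.out.println(jedis.bgrewriteaof());    // compact the AOF file in the background
        }
    }
}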
Master-slave replication
Redis replication keeps data synchronized across multiple instances. The master can be read from and written to, and whenever its data changes it automatically synchronizes the changes to the slaves. A slave is normally read-only and only receives the data synchronized from its master. A master can have multiple slaves, but a slave can have only one master.
redis-server // start a Redis instance as the master
redis-server --port 6380 --slaveof 127.0.0.1 6379 // start another instance as a slave
slaveof 127.0.0.1 6379 // turn the current instance into a slave of 127.0.0.1:6379
SLAVEOF NO ONE // stop receiving synchronization from other instances and become a master
How does master-slave replication work?
When a slave node starts, it sends a PSYNC command to the master.
If this is the first time the slave connects to the master, a full resynchronization is triggered. The master starts a background process to generate an RDB snapshot file, and meanwhile buffers in memory all new write commands received from clients.
When the RDB file is ready, the master sends it to the slave. The slave first writes the RDB file to its local disk and then loads it from disk into memory.
The master then sends the buffered write commands to the slave, and the slave applies them to catch up with the master.
If the network between the slave and the master fails and the connection drops, the slave reconnects automatically, and after reconnecting the master only synchronizes the missing part of the data to the slave.
Sentinel
Master-slave replication by itself cannot fail over automatically and therefore does not provide high availability. Sentinel mode solves these problems: the sentinel mechanism can switch between master and slave nodes automatically.
When a client connects to Redis, it first connects to a sentinel, which tells it the address of the current master; the client then connects to that master for subsequent operations. When the master goes down, the sentinels detect the failure, elect a good slave to become the new master, and notify the other slaves via publish/subscribe so that they switch to the new master.
How it works
Each Sentinel sends a PING command once per second to the Masters, Slaves, and other Sentinels it knows about.
If an instance takes longer than the configured threshold to return a valid reply to PING, that Sentinel marks the instance as subjectively down.
If a Master is marked as subjectively down, all Sentinels monitoring that Master check once per second whether it really has entered the subjectively-down state.
When enough Sentinels (at least the number specified in the configuration file) confirm within the specified time range that the Master is down, the Master is marked as objectively down. If not enough Sentinels agree that the Master is down, the objectively-down state is lifted; if the Master starts returning valid replies to the Sentinels' PING again, the subjectively-down mark is removed as well.
The Sentinel nodes elect a Sentinel leader, which is responsible for the failover.
The Sentinel leader picks a well-performing slave to become the new master and then notifies the other slaves to update their master information.
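A minimal sketch of a client discovering the master through the sentinels, using Jedis's JedisSentinelPool (the master name "mymaster" and the sentinel addresses are illustrative assumptions):
import java.util.HashSet;
import java.util.Set;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisSentinelPool;

public class SentinelClientDemo {
    public static void main(String[] args) {
        Set<String> sentinels = new HashSet<>();
        sentinels.add("127.0.0.1:26379"); // sentinel addresses, not the Redis master itself
        sentinels.add("127.0.0.1:26380");
        // The pool asks the sentinels for the current master of "mymaster"
        // and follows the new master after a failover.
        try (JedisSentinelPool pool = new JedisSentinelPool("mymaster", sentinels);
             Jedis jedis = pool.getResource()) {
            jedis.set("k1", "v1");
            System.out.println(jedis.get("k1"));
        }
    }
}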
Redis Cluster
Sentinel mode solves the problem that master-slave replication cannot fail over automatically, but the write capacity and storage capacity of the master are still limited by a single machine. Cluster mode implements distributed storage for Redis: each node stores a different part of the data, which removes the single-machine limit on the master's write capacity and storage capacity.
A Redis Cluster requires at least 6 nodes (3 masters and 3 slaves). The master nodes serve reads and writes; the slaves act as standbys, do not serve requests, and exist only for failover.
Redis Cluster uses virtual slot partitioning: a hash function maps every key to an integer slot in the range 0~16383, and each node is responsible for a subset of the slots and for the key-value data mapped to those slots.
How is a hash slot mapped to a Redis instance?
Compute CRC16 of the key.
Take the result modulo 16384; the remainder is the slot the key belongs to.
Locate the instance responsible for that slot from the cluster's slot assignment.
The sketch below shows the same mapping from a Java client.
Advantages:
Decentralized architecture with support for dynamic scaling.
Data is distributed across the nodes by slot, the nodes share the data, and the data distribution can be adjusted dynamically.
High availability: the cluster remains available when some nodes fail. Cluster mode supports automatic failover; nodes exchange status information via the gossip protocol, and a Slave is promoted to Master by voting.
Disadvantages:
Batch operations (pipeline) are not supported.
Data is replicated asynchronously, so strong consistency is not guaranteed.
Transaction support is limited: a transaction is only supported when all of its keys live on the same node; transactions cannot be used when the keys are spread across different nodes.
The key is the minimum granularity of data partitioning, so a single large key-value object such as a hash or a list cannot be split across nodes.
Multiple database spaces are not supported: a standalone Redis supports 16 databases, but in cluster mode only one database space can be used.
What is the deletion strategy for expired keys?
1、Passive (lazy) deletion: when a key is accessed and found to be expired, it is deleted.
2、Active (periodic) deletion: keys are cleaned up periodically. Each round walks through the databases in turn, takes 20 random keys from the db, and deletes the expired ones; if 5 or more of them were expired, it keeps cleaning the same db, otherwise it moves on to the next db.
3、Cleanup when memory is insufficient: Redis has a memory limit, set with the maxmemory parameter. When memory usage exceeds the configured maximum, memory has to be reclaimed, and it is cleaned up according to the configured eviction policy.
What are the memory eviction policies?
When Redis's memory usage exceeds the allowed maximum, Redis triggers its memory eviction policy and removes some rarely used data so that the server keeps running normally.
Before Redis 4.0 there were 6 eviction policies:
volatile-lru: LRU (Least Recently Used); evict the least recently used keys among those with an expiration set.
allkeys-lru: when memory is insufficient for new writes, evict the least recently used keys from the whole keyspace.
volatile-ttl: among the keys with an expiration set, evict those that are about to expire.
volatile-random: evict random keys among those with an expiration set.
allkeys-random: evict random keys from the whole keyspace.
no-eviction: never evict; when memory is insufficient for new writes, new write operations return an error.
Redis 4.0 added two more:
volatile-lfu: LFU (Least Frequently Used); evict the least frequently used keys among those with an expiration set.
allkeys-lfu: when memory is insufficient for new writes, evict the least frequently used keys from the whole keyspace.
The eviction policy is configured via maxmemory-policy in the configuration file; the default is noeviction. The sketch below changes it at runtime instead.
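A minimal sketch of setting the memory limit and the eviction policy at runtime via CONFIG SET (Jedis assumed; the 100mb limit and the chosen policy are illustrative):
import redis.clients.jedis.Jedis;

public class EvictionConfigDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            jedis.configSet("maxmemory", "100mb");              // memory limit
            jedis.configSet("maxmemory-policy", "allkeys-lru"); // evict least recently used keys
            System.out.println(jedis.configGet("maxmemory-policy"));
        }
    }
}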
How to keep the cache and the database consistent on double writes?
1、Delete the cache first, then update the database
On an update, delete the cache first and then update the database. The next read misses the cache, reads from the database, and writes the fresh data back into the cache.
The problem: after the cache is deleted but before the database is updated, a new read request may come in, read the old value from the database, and write it back into the cache. The cache is inconsistent again and all subsequent reads return stale data.
2、Update the database first, then delete the cache
On an update, update MySQL first; once that succeeds, delete the cache. The next read request writes the fresh data back into the cache (a minimal sketch follows).
The problem: between updating MySQL and deleting the cache, reads still return the old cached value, but once the database update completes and the cache is deleted, the two converge again, so the impact is relatively small.
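A minimal sketch of strategy 2 with the Jedis client (UserDao is a hypothetical stand-in for the real MySQL access layer; a single shared Jedis instance is used only for brevity):
import redis.clients.jedis.Jedis;

public class CacheAsideDemo {
    // Hypothetical DAO interface standing in for the database layer.
    interface UserDao { void updateName(long id, String name); }

    private final Jedis jedis = new Jedis("127.0.0.1", 6379);
    private final UserDao userDao;

    public CacheAsideDemo(UserDao userDao) { this.userDao = userDao; }

    public void updateUserName(long id, String name) {
        userDao.updateName(id, name);  // 1. update the database first
        jedis.del("user:" + id);       // 2. then delete the cache; the next read repopulates it
    }
}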
3、Update the cache asynchronously
After the database update completes, the cache is not touched directly. Instead, the operation is packaged as a message and pushed onto a message queue, and a consumer updates Redis from the queue. The message queue preserves the order of operations and keeps the cached data correct.
Cache penetration
Cache penetration means querying data that does not exist. Because the cache is only populated on a miss, and nothing is written to the cache when the database returns no data, every request for such non-existent data goes straight to the database, which defeats the purpose of the cache. Under heavy traffic this can bring the database down.
Cache the empty result, so the same query does not reach the database again.
Use a Bloom filter: hash all data that could possibly exist into a sufficiently large bitmap; queries for non-existent data are intercepted by the bitmap, which shields the database from the query pressure.
How a Bloom filter works: when an element is added to the set, K hash functions map it to K positions in a bit array, and those bits are set to 1. When querying, the element is mapped through the hash functions to K positions; if any of them is 0, the element is definitely not in the set and the query returns immediately; if all of them are 1, the element is probably in the set, and only then are Redis and the database queried.
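A minimal sketch of the Bloom-filter check using Guava's BloomFilter (assumes the Guava library is on the classpath; the ids and sizing parameters are illustrative):
import java.nio.charset.StandardCharsets;
import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;

public class BloomFilterDemo {
    public static void main(String[] args) {
        // Expect about 1,000,000 ids, with roughly a 1% false-positive rate.
        BloomFilter<String> filter =
                BloomFilter.create(Funnels.stringFunnel(StandardCharsets.UTF_8), 1_000_000, 0.01);
        filter.put("user:1001"); // preload the ids that really exist

        // Request path: reject ids the filter has definitely never seen.
        if (!filter.mightContain("user:9999")) {
            System.out.println("definitely not present, skip Redis and the database");
        }
    }
}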
Cache avalanche
A cache avalanche happens when many cached keys are given the same expiration time, so they all expire at the same moment and every request is forwarded to the database, which collapses under the sudden load.
Solution: add a random value to the base expiration time so that the expirations are spread out.
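As a minimal sketch (Jedis client assumed; the key, base TTL, and jitter are illustrative):
import java.util.concurrent.ThreadLocalRandom;
import redis.clients.jedis.Jedis;

public class RandomTtlDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            int baseTtl = 3600;                                       // 1 hour base expiration
            int jitter = ThreadLocalRandom.current().nextInt(300);    // up to 5 extra minutes
            jedis.setex("hot:item:42", baseTtl + jitter, "payload");  // keys no longer expire together
        }
    }
}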
Cache breakdown
Cache breakdown: a large number of requests query the same key just as that key expires, and they all fall through to the database at once. Cache breakdown hits a key that exists but has just expired in the cache, whereas cache penetration queries a key that does not exist at all.
Solution: use a distributed lock. Only the first requesting thread acquires the lock; after it has queried the data it sets the cache. Threads that fail to acquire the lock wait 50 ms and then read from the cache again. This keeps the bulk of the requests away from the database.
// The original snippet is pseudocode; below it is reworked as Java using the Jedis client.
// jedis, systemId, db, and expireSecs are assumed to be provided by the surrounding class,
// and SetParams comes from redis.clients.jedis.params.SetParams.
public String get(String key) {
    String value = jedis.get(key);
    if (value != null) {
        return value;
    }
    // Cache miss: try to take a short-lived distributed lock (SET NX PX 30000 = 30s timeout).
    String lockKey = systemId + ":" + key;
    String locked = jedis.set(lockKey, "1", SetParams.setParams().nx().px(30000));
    if ("OK".equals(locked)) { // lock acquired: load from the database and rebuild the cache
        value = db.get(key);
        jedis.setex(key, expireSecs, value);
        jedis.del(lockKey);
        return value;
    } else { // another thread is rebuilding the cache: wait briefly and retry the cache
        try {
            Thread.sleep(50);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return get(key); // retry
    }
}
What is the role of pipeline?
Executing one command from a Redis client goes through 4 steps: send the command, queue the command, execute the command, return the result. With pipeline, requests are sent in batches and the results come back in batches, which is much faster than executing the commands one by one.
Do not pack too many commands into a single pipeline, otherwise the payload becomes too large, the client waits longer, and the network may become congested; split a large batch of commands into several smaller pipelines instead.
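A minimal pipeline sketch with the Jedis client (key names and batch size are illustrative):
import java.util.List;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;

public class PipelineDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            Pipeline p = jedis.pipelined();
            for (int i = 0; i < 100; i++) {
                p.set("key:" + i, String.valueOf(i)); // queued locally, not sent yet
            }
            List<Object> replies = p.syncAndReturnAll(); // one round trip for the whole batch
            System.out.println(replies.size() + " replies");
        }
    }
}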
Comparison between native batch commands (mset, mget) and pipeline:
Native batch commands are atomic; a pipeline is not atomic. If a pipeline aborts partway through, the commands that already succeeded are not rolled back.
A native batch command is a single command, whereas a pipeline can carry multiple different commands.
Lua scripts
Redis uses Lua scripts to build atomic commands: while a Lua script is running, no other script or Redis command is executed, so a group of commands can be combined into one atomic operation.
There are two ways to run Lua scripts in Redis: eval and evalsha. The eval command evaluates the Lua script with the built-in Lua interpreter.
// the first argument is the Lua script, the second is the number of keys, followed by the key names and the additional arguments
> eval "return {KEYS[1],KEYS[2],ARGV[1],ARGV[2]}" 2 key1 key2 first second
1) "key1"
2) "key2"
3) "first"
4) "second"
What Lua scripts provide
1、A Lua script executes atomically in Redis; no other command is interleaved during its execution.
2、A Lua script can bundle several commands into a single call, which effectively reduces network overhead.
Application scenario
Example: limiting the access frequency of an interface.
Maintain a key-value pair in Redis for the number of calls to an interface: the key is the interface name and the value is the number of calls. On each access to the interface:
Intercept the request with AOP, count the request, and increment the interface's call count by 1 in Redis.
On the first request, set count = 1 and set an expiration time. Because the set() and expire() combination is not an atomic operation, a Lua script is introduced to make it atomic and avoid concurrency problems.
If the maximum number of calls within the given time window is exceeded, an exception is thrown.
private String buildLuaScript() {
return "local c" +
"\nc = redis.call('get',KEYS[1])" +
"\nif c and tonumber(c) > tonumber(ARGV[1]) then" +
"\nreturn c;" +
"\nend" +
"\nc = redis.call('incr',KEYS[1])" +
"\nif tonumber(c) == 1 then" +
"\nredis.call('expire',KEYS[1],ARGV[2])" +
"\nend" +
"\nreturn c;";
}
String luaScript = buildLuaScript();
RedisScript<Number> redisScript = new DefaultRedisScript<>(luaScript, Number.class);
// keys holds the counter key for the interface; limit is the object (e.g. an annotation read
// by the AOP aspect) that carries the allowed count and the time window in seconds.
Number count = redisTemplate.execute(redisScript, keys, limit.count(), limit.period());
PS: this way of rate limiting an interface is fairly simplistic and has quite a few problems, so it is rarely used as-is; interface rate limiting is usually implemented with the token bucket or leaky bucket algorithm.