
Redis learning notes

2022-06-22 21:15:00 fate _ zore


NoSQL

Technology development

Categories of technologies

  1. Solving functional problems: Java, JSP, RDBMS, Tomcat, HTML, Linux, JDBC, SVN

  2. Solving extensibility problems: Struts, Spring, SpringMVC, Hibernate, MyBatis

  3. Solving performance problems: NoSQL, Java threads, Hadoop, Nginx, MQ, ElasticSearch

Introduction to Redis and installation

  • Redis is an open-source key-value storage system.
  • It is similar to Memcached, but supports more value types, including string, list (linked list), set, zset (sorted set) and hash.
  • These data types support push/pop, add/remove, intersection, union, difference and other rich operations, and all of these operations are atomic.
  • On top of this, Redis supports various ways of sorting.
  • Like Memcached, data is cached in memory to ensure efficiency.
  • The difference is that Redis periodically writes updated data to disk, or appends modifications to a log file.
  • On this basis it also implements master-slave replication.

Application scenarios

As a cache in front of a relational database

  1. Caching high-frequency, hot data to reduce database IO
  2. Session sharing in a distributed architecture

Persisting data with a variety of data structures


Installation

The five commonly used data types

Key (key) operations

  • keys * View all keys in the current database (pattern matching: keys *1)

  • exists key Check whether a key exists

  • type key Check what type a key is

  • del key Delete the specified key

  • unlink key Non-blocking delete

    Only removes the key from the keyspace metadata; the real deletion is done asynchronously later.

  • expire key 10 Set a 10-second expiration time on the given key

  • ttl key Check how many seconds remain before expiration; -1 means never expires, -2 means already expired

  • select Switch databases

  • dbsize Check the number of keys in the current database

  • flushdb Empty the current database

  • flushall Empty all databases

Redis strings (String)

Brief introduction

  1. string is the most basic Redis type; it is essentially the same as the Memcached type: one key maps to one value
  2. string is binary-safe: a Redis string can contain any data, such as an image or a serialized object
  3. a string value can be at most 512MB

Common commands:

  • set <key> <value>

    Add a key-value pair

  • get <key>

    Query the value of a key

  • append <key> <value>

    Append the given value to the end of the original value

  • strlen <key>

    Get the length of the value

  • setnx <key> <value>

    Set the value only when the key does not exist

  • incr <key>

    Increment the numeric value stored at key by 1. Works only on numeric values; if the key is empty, the new value is 1

  • decr <key>

    Decrement the numeric value stored at key by 1. Works only on numeric values; if the key is empty, the new value is -1

  • incrby / decrby <key> <step>

    Increment or decrement the numeric value stored at key by a custom step size.

  • mset <key1> <value1> <key2> <value2> ...

    Set one or more key-value pairs at the same time

  • mget <key1> <key2> <key3> ...

    Get one or more values at the same time

  • msetnx <key1> <value1> <key2> <value2> ...

    Set one or more key-value pairs, if and only if none of the given keys exist.

  • getrange <key> <start> <end>

    Get a range of the value, similar to substring in Java, except that both ends are inclusive

  • setrange <key> <start> <value>

    Overwrite part of the stored string with <value>, starting at <start> (index starts at 0).

  • setex <key> <seconds> <value>

    Set the key's value and an expiration time (in seconds) at the same time.

  • getset <key> <value>

    Out with the old, in with the new: set a new value and return the old value.
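The incr/decr semantics above can be sketched in plain Java. This is an illustration only, with a HashMap standing in for the Redis keyspace; it is not the Jedis API:

```java
import java.util.HashMap;
import java.util.Map;

public class IncrSketch {
    static final Map<String, String> store = new HashMap<>();

    // Mimics INCRBY/DECRBY on numeric values: a missing key behaves like 0,
    // then the step (positive or negative) is applied
    static long incrBy(String key, long step) {
        long current = Long.parseLong(store.getOrDefault(key, "0"));
        long updated = current + step;
        store.put(key, String.valueOf(updated));
        return updated;
    }

    public static void main(String[] args) {
        System.out.println(incrBy("counter", 1));   // empty key: the new value is 1
        System.out.println(incrBy("counter", 1));   // 2
        System.out.println(incrBy("counter", -1));  // decr is just a step of -1
    }
}
```

Note that, as the text says, this only makes sense for values that parse as numbers.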

Data structure

The String data structure is the Simple Dynamic String (SDS): a modifiable string whose internal structure is similar to Java's ArrayList, pre-allocating redundant space to reduce frequent memory reallocation.

image-20220531135907520

As shown in the figure, the capacity actually allocated internally for the string is generally larger than the actual string length len. When the string is shorter than 1MB, an expansion doubles the existing space; above 1MB, each expansion adds only 1MB more. Note that the maximum string length is 512MB.
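The expansion rule just described can be written down as a small function. This is a sketch of the rule as stated, not Redis's actual C implementation:

```java
public class SdsGrowth {
    static final long ONE_MB = 1024 * 1024;
    static final long MAX_LEN = 512 * ONE_MB;

    // Capacity after growing to hold newLen bytes:
    // below 1MB, double the needed space; above 1MB, add 1MB at a time
    static long expand(long newLen) {
        if (newLen > MAX_LEN) {
            throw new IllegalArgumentException("max string length is 512MB");
        }
        return newLen < ONE_MB ? newLen * 2 : newLen + ONE_MB;
    }

    public static void main(String[] args) {
        System.out.println(expand(100));          // 200: doubled below 1MB
        System.out.println(expand(2 * ONE_MB));   // 2MB grows to 3MB: +1MB above 1MB
    }
}
```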

Redis lists (List)

Brief introduction

One key, multiple values.

  1. A Redis list is a simple list of strings, sorted by insertion order.
  2. You can add elements to the head (left) or the tail (right) of the list.
  3. The underlying implementation is essentially a doubly linked list, so operations at both ends are fast, while operating on middle nodes by index is slower.

image-20220531140714091

Common commands:

  • lpush/rpush <key> <value1> <value2> <value3> ...

    Insert one or more values from the left/right.

  • lpop/rpop <key>

    Pop a value from the left/right. The key lives with its values: once the last value is popped, the key is gone.

  • rpoplpush <key1> <key2>

    Pop a value from the right side of <key1> and insert it on the left side of <key2>.

  • lrange <key> <start> <stop>

    Get elements by index range (from left to right)

  • lrange mylist 0 -1

    0 is the leftmost element, -1 the rightmost (0 -1 gets everything)

  • linsert <key> before <value> <newvalue>

    Insert <newvalue> before <value>

  • lrem <key> <n> <value>

    Delete n occurrences of value from the left (scanning left to right)

  • lset <key> <index> <value>

    Replace the element at index in the list with value

Data structure:

The data structure of List is the quicklist.

When there are few elements, a single contiguous block of memory is used. This structure is the ziplist (compressed list): it stores all elements next to each other in one contiguous allocation.

When the amount of data grows, the structure switches to a quicklist, because an ordinary linked list needs too much extra pointer space. For example, if the list only stores int values, each node still needs two extra pointers, prev and next, which is wasteful.

image-20220531144030108

Redis therefore combines linked lists and ziplists into the quicklist: multiple ziplists strung together with bidirectional pointers. This provides fast insertion and deletion at both ends without too much space redundancy.

Redis sets (Set)

Brief introduction:

  1. Redis set offers functionality similar to list, except that a set automatically de-duplicates. When you need to store a list of data and don't want duplicates, a set is a good choice. A set also provides an important interface for checking whether a member is in the set, which a list cannot provide.
  2. A Redis Set is an unordered collection of strings. Under the hood it is a hash table whose values are all null, so adding, deleting and looking up are all O(1).
  3. O(1) means that as the data grows, the time to find an element stays constant.

Common commands:

  • sadd <key> <value1> <value2> ...

    Add one or more members to the set key; members that already exist are ignored

  • smembers <key>

    Get all values of the set.

  • sismember <key> <value>

    Check whether the set contains the value: 1 if yes, 0 if no

  • scard <key>

    Return the number of elements in the set.

  • srem <key> <value1> <value2> ...

    Delete elements from the set.

  • spop <key>

    Randomly pop a value out of the set.

  • srandmember <key> <n>

    Randomly take n values from the set, without removing them.

  • smove <source> <destination> value

    Move a value from one set to another

  • sinter <key1> <key2>

    Return the intersection of the two sets.

  • sunion <key1> <key2>

    Return the union of the two sets.

  • sdiff <key1> <key2>

    Return the difference of the two sets (elements in key1 that are not in key2)

Data structure:

The data structure of Set is the dict (dictionary), and dictionaries are implemented with hash tables.

In Java, HashSet is implemented internally with a HashMap whose values all point to the same object. Redis set works the same way: it also uses a hash structure internally, with all values pointing to the same internal value.
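The analogy with Java can be seen directly: sadd-style de-duplication and the O(1) membership check are exactly what a HashSet (a HashMap underneath) gives you:

```java
import java.util.HashSet;
import java.util.Set;

public class SetDedup {
    // sadd-style add: duplicates are ignored; returns true only for new members
    static boolean sadd(Set<String> set, String member) {
        return set.add(member);
    }

    public static void main(String[] args) {
        Set<String> set = new HashSet<>();
        System.out.println(sadd(set, "sv1"));      // true: added
        System.out.println(sadd(set, "sv1"));      // false: duplicate ignored
        System.out.println(set.contains("sv1"));   // sismember-style O(1) membership test
        System.out.println(set.size());            // 1
    }
}
```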

Redis hashes (Hash)

Brief introduction:

A Redis hash is a collection of key-value pairs.

A Redis hash is a mapping table of string fields to string values; hashes are particularly suitable for storing objects.

It is similar to Map<String, Object> in Java.

Take storing a user as an example: the user ID is the lookup key, and the stored value is a user object containing name, age, birthday and other information. If an ordinary key/value structure is used, there are mainly the following two storage approaches:

image-20220531152929829

Common commands:

  • hset <key> <field> <value>

    Assign a value to a field in the hash key

  • hget <key> <field>

    Get the value of a field from the hash

  • hmset <key> <field1> <value1> <field2> <value2> ...

    Set multiple hash fields in one batch

  • hexists <key> <field>

    Check whether the given field exists in hash key.

  • hkeys <key>

    List all fields of the hash

  • hvals <key>

    List all values of the hash

  • hincrby <key> <field> <increment>

    Add the increment (e.g. 1 or -1) to the field in hash key

  • hsetnx <key> <field> <value>

    Set field in hash key to value, if and only if the field does not exist.

Data structure:

The Hash type has two corresponding data structures: ziplist (compressed list) and hashtable. When the field-value pairs are short and few, ziplist is used; otherwise hashtable.

Redis sorted sets (Zset)

Brief introduction:

  1. A Redis sorted set (zset) is very similar to an ordinary set: a collection of strings with no repeated members.
  2. The difference is that every member of a sorted set is associated with a score, which is used to sort the members from the lowest score to the highest. Members are unique, but scores may repeat.
  3. Because the elements are ordered, you can also quickly fetch a range of elements by score or by position.
  4. Accessing elements in the middle of a sorted set is also very fast, so you can use a sorted set as a smart list with no duplicate members.
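The member-to-score mapping plus score ordering can be sketched with plain Java collections. This is a toy model of the semantics only; the real implementation (described in the data structure section of this chapter) uses a hash plus a skip list:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ZsetSketch {
    // member -> score plays the role of the hash part; sorting by score
    // gives the zrange-style ordered view (ties broken by member name)
    static List<String> zrangeByScore(Map<String, Double> zset) {
        List<String> members = new ArrayList<>(zset.keySet());
        members.sort(Comparator.comparingDouble((String m) -> zset.get(m))
                .thenComparing(Comparator.naturalOrder()));
        return members;
    }

    public static void main(String[] args) {
        Map<String, Double> zset = new HashMap<>();
        zset.put("zv1", 100d);   // zadd zk1 100 zv1
        zset.put("zv2", 200d);
        zset.put("zv3", 50d);
        System.out.println(zrangeByScore(zset)); // [zv3, zv1, zv2]
    }
}
```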

Common commands:

  • zadd <key> <score1> <value1> <score2> <value2>

    Add one or more members and their scores to the sorted set key.

  • zrange <key> <start> <stop> [WITHSCORES]

    Return the elements of the sorted set key whose index is between <start> and <stop>.

    With WITHSCORES, the scores are returned to the result set together with the values.

  • zrangebyscore key min max [withscores] [limit offset count]

    Return all members of the sorted set key whose score is between min and max (inclusive), in increasing score order.

  • zrevrangebyscore key max min [withscores] [limit offset count]

    Same as above, but from high score to low.

  • zincrby <key> <increment> <value>

    Add the increment to the element's score

  • zrem <key> <value>

    Delete the element with the specified value from the set

  • zcount <key> <min> <max>

    Count the number of elements in the set whose score falls within the range

  • zrank <key> <value>

    Return the rank of the value in the set, starting from 0.

Data structure:

SortedSet (zset) is a very special Redis data structure. On the one hand it works like a Java Map<String, Double>: each element value is given a weight score. On the other hand it resembles a TreeSet: the elements are sorted internally by score, so you can get the rank of each element, or fetch a list of elements by score range.

zset uses two underlying data structures:

(1) hash, which associates the element value with its weight score, guarantees the uniqueness of values, and lets you look up the score of a given value.

(2) skip list, which keeps the element values sorted by score and supports fetching a list of elements by score range.

Further reading: Redis ziplist and skiplist explained - Nuggets (juejin.cn)

The configuration file

units

Configures size units; some basic units of measurement are defined at the top of the file. Only bytes are supported, not bits.

Unit names are case-insensitive.

INCLUDES

Similar to include in JSP: when running multiple instances, the common configuration can be extracted into a shared file.

Network-related configuration

bind

  1. The default bind=127.0.0.1 only accepts local access requests
  2. If bind is not set, access from any IP address is accepted without restriction
  3. In a production environment you should write the address of your application server; if the server needs remote access, comment the line out
  4. If protected-mode is on, and neither a bind IP nor a password is set, Redis only responds to local requests

protected-mode

Local-access protection mode.

port

Port number, default 6379.

tcp-backlog

  1. Sets the TCP backlog. The backlog is effectively a connection queue: backlog total = queue of incomplete three-way handshakes + queue of completed three-way handshakes.
  2. In a high-concurrency environment you need a high backlog value to avoid slow-client connection problems.
  3. Note that the Linux kernel caps this value at /proc/sys/net/core/somaxconn (128), so to get the desired effect you need to raise both /proc/sys/net/core/somaxconn and /proc/sys/net/ipv4/tcp_max_syn_backlog (128).

timeout

How many seconds an idle client connection stays open before it is closed; 0 disables the feature, i.e. connections are never closed.

tcp-keepalive

  1. A heartbeat check for connected clients, performed every n seconds.
  2. The unit is seconds. If set to 0, no keepalive checks are performed; setting it to 60 is recommended.

GENERAL

daemonize

Whether to run as a background daemon; off by default.

pidfile

Location of the pid file; each instance produces a different pid file.

loglevel

Sets the log level.

logfile

Output file path for the log.

databases

Sets the number of databases, default 16. The default database is number 0; the SELECT command can be used to choose a database id on the connection.

SECURITY

Setting a password

Viewing, setting and clearing the access password

A password set with a command is only temporary: restarting the Redis server restores the old password. A permanent password needs to be set in the configuration file.

LIMITS

Publish and subscribe

What is publish/subscribe

Redis publish/subscribe (pub/sub) is a message communication pattern: the sender (pub) sends messages and subscribers (sub) receive them.

A Redis client can subscribe to any number of channels.

Redis publish and subscribe

A client can subscribe to a channel, as shown in the figure below.

image-20220617162250190

When a message is published to this channel, it is sent to the subscribed clients.

image-20220617162502192

Publish/subscribe on the command line

  1. Open a client and subscribe to channel1:

     SUBSCRIBE channel1

  2. Open another client and publish a message to channel1:

     publish channel1 hello

  3. The subscribing client receives the message.

New data types

Bitmaps

Brief introduction

Modern computers use binary bits as the basic unit of information: 1 byte equals 8 bits. For example, the string "abc" consists of 3 bytes, but it is actually represented in binary when stored in the computer. The ASCII codes of 'a', 'b' and 'c' are 97, 98 and 99, and the corresponding binary values are 01100001, 01100010 and 01100011, as shown below.

image-20220617223831304
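The bit patterns above can be checked directly in Java:

```java
public class AsciiBits {
    // Full 8-bit binary representation of a character's byte value
    static String toByteBits(char c) {
        return String.format("%8s", Integer.toBinaryString(c)).replace(' ', '0');
    }

    public static void main(String[] args) {
        for (char c : "abc".toCharArray()) {
            // prints the ASCII code and its 8-bit binary form
            System.out.println((int) c + " -> " + toByteBits(c));
        }
    }
}
```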

Making good use of bit operations can effectively improve memory utilization and development efficiency.

Redis provides Bitmaps, a "data type" that supports operations on bits:

  1. Bitmaps is not really a separate data type; it is actually a string (key-value), but it allows operating on the bits of the string.
  2. Bitmaps provides its own set of commands, so using Bitmaps in Redis is not quite the same as using strings. You can think of a Bitmaps value as a bit array in which each cell can only hold 0 or 1; the array index is called the offset in Bitmaps.

image-20220617223926119

Commands

  1. setbit <key> <offset> <value>

     Set the bit at a given offset in the Bitmaps to 0 or 1

     Example:

     Store whether each user has visited the website in a Bitmaps: record a visiting user as 1 and a non-visiting user as 0, using the offset as the user id.

     Suppose there are 20 users and the users with userid = 1, 6, 11, 15, 19 visited the site; setting those offsets (counting from 0) initializes the Bitmaps accordingly.

     Note:

     In many applications user ids start at a given number (for example 10000). Mapping the user id directly to the Bitmaps offset is bound to waste space, so the usual practice is to subtract the base number from the user id on every setbit.

     When a Bitmaps is first initialized, a very large offset makes the whole initialization slow and may block Redis.

  2. getbit <key> <offset>

     Get the bit value at a given offset in the Bitmaps

     Example:

     Get whether the user with id=8 visited on 2020-11-06; a return value of 0 means no visit.

  3. bitcount <key> [start end]

     Count the number of bits set to 1 in the string. By default the whole string is counted; by specifying the optional start and end parameters, the count covers only a specific range. start and end may be negative: -1 is the last byte, -2 the second to last. Note that start and end are byte indexes into the bit array, and both are inclusive.

  4. bitop and(or/not/xor) <destkey> [key…]

     bitop is a compound operation that performs and (intersection), or (union), not, or xor across multiple Bitmaps and stores the result in destkey.

     Example:

     On 2020-11-04 the site was visited by userid = 1, 2, 5, 9:

     setbit unique:users:20201104 1 1
     setbit unique:users:20201104 2 1
     setbit unique:users:20201104 5 1
     setbit unique:users:20201104 9 1

     On 2020-11-03 the site was visited by userid = 0, 1, 4, 9:

     setbit unique:users:20201103 0 1
     setbit unique:users:20201103 1 1
     setbit unique:users:20201103 4 1
     setbit unique:users:20201103 9 1

     Count the users who visited the site on both days:

     bitop and unique:users:and:20201104_03 unique:users:20201103 unique:users:20201104
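The setbit/getbit/bitcount/bitop semantics in this example can be mimicked with java.util.BitSet. This is a sketch of the semantics only; Redis stores the bits inside a string value, not a BitSet:

```java
import java.util.BitSet;

public class BitmapSketch {
    // setbit <key> <userid> 1 for every id
    static BitSet fromIds(int... userids) {
        BitSet bits = new BitSet();
        for (int id : userids) bits.set(id);
        return bits;
    }

    // bitop and <destkey>: intersection of two bitmaps
    static BitSet and(BitSet a, BitSet b) {
        BitSet result = (BitSet) a.clone();
        result.and(b);
        return result;
    }

    public static void main(String[] args) {
        BitSet nov04 = fromIds(1, 2, 5, 9);       // 2020-11-04 visitors
        BitSet nov03 = fromIds(0, 1, 4, 9);       // 2020-11-03 visitors
        System.out.println(nov04.get(8));          // getbit: false, userid 8 did not visit
        System.out.println(nov03.cardinality());   // bitcount: 4
        System.out.println(and(nov04, nov03).cardinality()); // visited both days: 2 (userids 1 and 9)
    }
}
```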

Bitmaps vs. set

Suppose the website has 100 million users and 50 million unique visitors per day. Storing the active users with the set type versus Bitmaps gives the following comparison.

Memory used to store one day's active users:

  data type | space per user id | users stored | total memory
  set       | 64 bits           | 50,000,000   | 64 bit × 50,000,000 = 400MB
  Bitmaps   | 1 bit             | 100,000,000  | 1 bit × 100,000,000 = 12.5MB

Obviously, Bitmaps can save a lot of memory in this scenario, and over time the savings become considerable:

  data type | one day | one month | one year
  set       | 400MB   | 12GB      | 144GB
  Bitmaps   | 12.5MB  | 375MB     | 4.5GB

But Bitmaps is not a panacea. If the site has only a few unique visitors per day, say 100,000 (with many zombie users), the comparison flips, and using Bitmaps is no longer worthwhile, because most of the bits are 0.

HyperLogLog

At work we often run into statistics-related requirements. For example, a website's PV (PageView, page visits) can easily be counted with Redis incr/incrby.

But what about problems that require de-duplicated counting, like UV (UniqueVisitor), unique IP count, or the number of distinct search terms? The problem of counting the non-repeating elements of a set is called the cardinality problem.

There are many solutions to the cardinality problem:

(1) Store the data in a MySQL table and compute the non-duplicate count with distinct count

(2) Use Redis hash, set, bitmaps and other data structures

These approaches give exact results, but as the data grows they use more and more space, which is impractical for very large data sets.

Can we trade a little accuracy for storage space? For this, Redis offers HyperLogLog.

Redis HyperLogLog is an algorithm for cardinality estimation. Its advantage is that when the number or volume of input elements is very, very large, the space needed to compute the cardinality is always fixed, and very small.

In Redis, every HyperLogLog key costs only 12 KB of memory to estimate the cardinality of close to 2^64 distinct elements. This contrasts with sets, which consume more memory the more elements there are.

However, because HyperLogLog only uses the input elements to compute the cardinality and does not store the elements themselves, it cannot, like a set, return the individual input elements.

What is cardinality?

Take the data set {1, 3, 5, 7, 5, 7, 8}: its cardinality set is {1, 3, 5, 7, 8}, so the cardinality (number of non-repeating elements) is 5. Cardinality estimation computes the cardinality quickly, within an acceptable error range.
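The example can be reproduced exactly with a HashSet; this exact counting is what HyperLogLog approximates while using only a fixed, small amount of memory:

```java
import java.util.HashSet;
import java.util.Set;

public class Cardinality {
    // Exact cardinality: the number of non-repeating elements
    static int cardinality(int[] data) {
        Set<Integer> distinct = new HashSet<>();
        for (int v : data) {
            distinct.add(v);    // duplicates are absorbed by the set
        }
        return distinct.size();
    }

    public static void main(String[] args) {
        int[] data = {1, 3, 5, 7, 5, 7, 8};
        System.out.println(cardinality(data)); // 5
    }
}
```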

Commands

pfadd

pfadd <key> <element> [element ...]

Add the specified elements to the HyperLogLog.

All elements are added to the specified HyperLogLog data structure. If executing the command changes the estimated approximate cardinality of the HLL, it returns 1; otherwise it returns 0.

pfcount

pfcount <key> [key ...]

Compute the approximate cardinality of one or more HLLs. For example, if an HLL stores each day's UV, a week's UV can be computed by merging the 7 daily HLLs.

pfmerge

pfmerge <destkey> <sourcekey> [sourcekey ...]

Store the merge of one or more HLLs into another HLL. For example, monthly active users can be computed by merging the daily active-user HLLs.

Geospatial

Redis 3.2 added support for the GEO type. GEO is short for Geographic. Elements of this type are 2-dimensional coordinates, i.e. longitude and latitude on a map. On top of this type, Redis provides operations for setting and querying coordinates, range queries, distance queries, geohashing of coordinates, and more.

Commands

geoadd

geoadd <key> <longitude> <latitude> <member> [longitude latitude member...]

Add a location (longitude, latitude, name)

Note: the two poles cannot be added directly. City data is usually downloaded and then imported once through a Java program.

Valid longitudes range from -180 to 180 degrees; valid latitudes range from -85.05112878 to 85.05112878 degrees.

When a coordinate is outside the specified range, the command returns an error.

Data that has already been added cannot be added again.

geopos

geopos <key> <member> [member...]

Get the coordinates of the specified member

geodist

geodist <key> <member1> <member2> [m|km|ft|mi]

Get the straight-line distance between two positions

Units:

  • m: meters [default].
  • km: kilometers.
  • mi: miles.
  • ft: feet.

If no unit parameter is given explicitly, GEODIST defaults to meters.

georadius

georadius <key> <longitude> <latitude> radius m|km|ft|mi

Find the elements within the given radius, centered on the given longitude and latitude

Jedis operating Redis 6

Test

Dependency

<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
    <version>3.2.0</version>
</dependency>

Connection test (failure case)

public static void main(String[] args) {
    // Create a Jedis object
    Jedis jedis = new Jedis("192.168.150.111", 6379);
    // Test whether we can connect
    String ping = jedis.ping();
    System.out.println(ping);
}

Solution

Disable the Linux firewall; on Linux (CentOS 7) execute:

systemctl stop firewalld.service

In redis.conf, comment out bind 127.0.0.1 and set protected-mode no.

Correct result: the ping returns PONG.

Data type testing

@Test
    public void string(){
    
        // Batch addition 
        jedis.mset("k1","v1","k2","v2","k3","v3");
        // Batch query ( return list)
        jedis.mget("k1","k2","k3").forEach(System.out::println);
    }
    @Test
    public void list(){
    

        jedis.lpush("lk","lv1","lv2","lv3");
        List<String> lk = jedis.lrange("lk", 0, -1);
        System.out.println(lk);
    }
    @Test
    public void set(){
    
        jedis.sadd("sk1","sv1","sv1","sv2","sv3","sv4","sv5");
        Set<String> sk1 = jedis.smembers("sk1");
        System.out.println(sk1);
    }
    @Test
    public void hash(){
    
        jedis.hset("user","age","20");
        String hget = jedis.hget("user", "age");
        System.out.println(hget);
    }
    @Test
    public void zSet(){
    
        jedis.zadd("zk1", 100d,"zv1");
        jedis.zadd("zk1", 200d,"zv2");
        jedis.zadd("zk1", 50d,"zv3");
        Set<String> zk1 = jedis.zrange("zk1", 0, -1);
        System.out.println(zk1);
    }
}

Simulating a phone verification code

Idea

Generate a random 6-digit code and store it in Redis with an expiration time; limit each phone number to 3 codes per day with a counter key that expires after 24 hours; verify by comparing the submitted code with the stored one.

package com.fate.jedis;

import org.junit.Test;
import redis.clients.jedis.Jedis;

import java.util.Random;

/** * @author m */
public class CodeController {

    Jedis jedis = new Jedis("192.168.150.111", 6379);

    // Compare the submitted code with the one stored in Redis
    public Boolean isCode(String code, String id) {
        String rightCode = jedis.get(id + "_code");
        return rightCode != null && rightCode.equals(code);
    }

    // Generate a random 6-digit code
    public String getCode() {
        Random random = new Random();
        StringBuilder code = new StringBuilder();
        for (int i = 0; i < 6; i++) {
            code.append(random.nextInt(10));
        }
        return code.toString();
    }

    // Send at most 3 codes per phone number per day
    public String setCode(String id) {
        String count = jedis.get(id);
        if (count == null) {
            // First request today: start the daily counter, expiring after 24 hours
            jedis.setex(id, 24 * 60 * 60, "1");
        } else if (Integer.parseInt(count) < 3) {
            jedis.incrBy(id, 1);
        } else {
            System.out.println("Maximum number of codes reached");
            return null;
        }
        String code = getCode();
        // Store the code, valid for 120*2 seconds
        jedis.setex(id + "_code", 120 * 2, code);
        return code;
    }

    @Test
    public void t() {
        String code = setCode("18848312652");
        Boolean aBoolean = isCode(code, "18848312652");
        System.out.println(aBoolean);
    }
}

Redis 6 integration with Spring Boot

Introduce dependencies

<!-- redis integration -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<!-- connection pool -->
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-pool2</artifactId>
    <version>2.6.0</version>
</dependency>

The configuration file

#Redis Server address 
spring.redis.host=192.168.150.111
#Redis Server connection port 
spring.redis.port=6379
#Redis Database index ( The default is 0)
spring.redis.database= 0
# Connection timeout ( millisecond )
spring.redis.timeout=1800000
# Maximum number of connections in connection pool ( Use a negative value to indicate that there is no limit )
spring.redis.lettuce.pool.max-active=20
# Maximum blocking waiting time ( A negative number means no limit )
spring.redis.lettuce.pool.max-wait=-1
# The maximum free connection in the connection pool 
spring.redis.lettuce.pool.max-idle=5
# The smallest free connection in the connection pool 
spring.redis.lettuce.pool.min-idle=0

Configuration class

import com.fasterxml.jackson.annotation.JsonAutoDetect;
import com.fasterxml.jackson.annotation.PropertyAccessor;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.CachingConfigurerSupport;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.cache.RedisCacheConfiguration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.Jackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.RedisSerializationContext;
import org.springframework.data.redis.serializer.RedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;

import java.time.Duration;

/** * @author m */
@EnableCaching
@Configuration
public class RedisConfig extends CachingConfigurerSupport {
    
    @Bean
    @SuppressWarnings({
     "rawtypes", "unchecked" })
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory factory){
    
        RedisTemplate<String, Object> template = new RedisTemplate<String, Object>();
        template.setConnectionFactory(factory);

        //Json Serialization configuration 
        Jackson2JsonRedisSerializer jackson2JsonRedisSerializer = new Jackson2JsonRedisSerializer(Object.class);
        ObjectMapper om = new ObjectMapper();
        om.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
        om.enableDefaultTyping(ObjectMapper.DefaultTyping.NON_FINAL);
        jackson2JsonRedisSerializer.setObjectMapper(om);

        //String Serialization configuration 
        StringRedisSerializer stringRedisSerializer = new StringRedisSerializer();

        //key and hash Of key All use String The serialization configuration of 
        template.setKeySerializer(stringRedisSerializer);
        template.setHashKeySerializer(stringRedisSerializer);

        //value and hash Of value use Json The serialization configuration of 
        template.setValueSerializer(jackson2JsonRedisSerializer);
        template.setHashValueSerializer(jackson2JsonRedisSerializer);

        template.afterPropertiesSet();

        return template;
    }

    @Bean
    public CacheManager cacheManager(RedisConnectionFactory factory) {
    
        RedisSerializer<String> redisSerializer = new StringRedisSerializer();
        Jackson2JsonRedisSerializer jackson2JsonRedisSerializer = new Jackson2JsonRedisSerializer(Object.class);
// Solve the problem of query cache conversion exception 
        ObjectMapper om = new ObjectMapper();
        om.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
        om.enableDefaultTyping(ObjectMapper.DefaultTyping.NON_FINAL);
        jackson2JsonRedisSerializer.setObjectMapper(om);
//  Configure serialization ( Solve the problem of garbled code ), Expiration time 600 second 
        RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig()
                .entryTtl(Duration.ofSeconds(600))
                .serializeKeysWith(RedisSerializationContext.SerializationPair.fromSerializer(redisSerializer))
                .serializeValuesWith(RedisSerializationContext.SerializationPair.fromSerializer(jackson2JsonRedisSerializer))
                .disableCachingNullValues();
        RedisCacheManager cacheManager = RedisCacheManager.builder(factory)
                .cacheDefaults(config)
                .build();
        return cacheManager;
    }

}

test

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

/** * @author m */
@RestController
@RequestMapping("/redisTest")
public class RedisTestController {
    
    @Autowired
    private RedisTemplate<String, Object> redisTemplate;
    @GetMapping()
    public String test(){
    
        redisTemplate.opsForValue().set("name","lucy");
        return (String) redisTemplate.opsForValue().get("name");
    }
}

RedisTemplate

All the methods

//  Configure the default serialization and deserialization tool classes 
1.afterPropertiesSet
//  Execute the given operation (callback), e.g. a transaction
2.execute
//  Execute pipelined operations
3.executePipelined
//  Execute an operation on a dedicated (sticky) connection
4.executeWithStickyConnection
//  Execute the execute method inside a session
5.executeSession
//  Create a RedisConnection proxy
6.createRedisConnectionProxy
//  Pre-process the connection
7.preProcessConnection
//  Post processing of results , Do nothing by default 
8.postProcessResult
//  Whether to RedisCallback Expose local connections 
9.isExposeConnection
//  Set whether to RedisCallback Expose local connections 
10.setExposeConnection
// 11 to 24 set and get the related serializer tool classes
11.isEnableDefaultSerializer
12.setEnableDefaultSerializer
13.getDefaultSerializer
14.setDefaultSerializer
15.setKeySerializer
16.getKeySerializer
17.setValueSerializer
18.getValueSerializer
19.getHashKeySerializer
20.setHashKeySerializer
21.getHashValueSerializer
22.setHashValueSerializer
23.getStringSerializer
24.setStringSerializer
25.setScriptExecutor
// 26 to 33 are private helper methods, not for external use
26.rawKey
27.rawString
28.rawValue
29.rawKeys
30.deserializeKey
31.deserializeMixedResults
32.deserializeSet
33.convertTupleValues
//  Perform transactions 
34.exec
35.execRaw
//  Delete operation
36.delete
//  Non-blocking delete (unlink keys)
37.unlink
//  Check whether the given key exists
38.hasKey
39.countExistingKeys
//  Set the expiration time
40.expire
41.expireAt
//  Serialize and publish a message to the given channel
42.convertAndSend
//  Get the expiration time
43.getExpire
//  Return all keys matching the given pattern
44.keys
//  Remove the expiration time of the given key
45.persist
//  Move the given key to the database with the given index
46.move
//  Get a random key from the keyspace
47.randomKey
//  Rename the given key to the target key
48.rename
//  Rename the given key to the target key only when the target key does not exist
49.renameIfAbsent
//  Get the type of the value stored at the given key
50.type
//  Retrieve a serialized version of the value stored at key
51.dump
//  Execute the Redis restore command
52.restore
//  Mark the start of a transaction block
53.multi
//  Discard all commands issued after multi
54.discard
//  Watch the given keys for modification between multi and exec
55.watch
//  Flush all previously watched keys
56.unwatch
//  Sort the elements of the given key
57.sort
//  Close a client connection
58.killClient
//  Request information and statistics about connected clients
59.getClientList
//  Change the replication configuration to a new master
60.slaveOf
//  Promote this instance to master
61.slaveOfNoOne
// 62 to 77 obtain the corresponding operations objects
62.opsForCluster
63.opsForGeo
64.boundGeoOps
65.boundHashOps
66.opsForHash
67.opsForHyperLogLog
68.opsForList
69.boundListOps
70.boundSetOps
71.opsForSet
72.opsForStream
73.boundStreamOps
74.boundValueOps
75.opsForValue
76.boundZSetOps
77.opsForZSet
//  Set whether transactions are supported 
78.setEnableTransactionSupport
//  Set up bean Class loader for 
79.setBeanClassLoader

spring-data-redis provides the following features:

  1. Automatic connection-pool management and a highly encapsulated "RedisTemplate" class

  2. Operations are grouped by data type and encapsulated in dedicated operations interfaces:

    ValueOperations: simple K-V operations
    SetOperations: operations on set data
    ZSetOperations: operations on zset (sorted set) data
    HashOperations: operations on map (hash) data
    ListOperations: operations on list data

  3. Convenient "bound" APIs for a key: a BoundKeyOperations wraps a specific key so that a series of operations can follow without explicitly re-specifying the key

    BoundValueOperations
    BoundSetOperations
    BoundListOperations
    BoundZSetOperations
    BoundHashOperations

  4. Encapsulates transaction operations, controlled by the container.

  5. Provides several alternative strategies (RedisSerializer) for data serialization/deserialization

    1. JdkSerializationRedisSerializer: for POJO object access scenarios. Uses the JDK's own serialization mechanism: POJO classes are serialized via ObjectInputStream/ObjectOutputStream, and redis-server stores the resulting byte sequence. Currently the most commonly used serialization strategy.
    2. StringRedisSerializer: for scenarios where the key or value is a string. Encodes the data as a byte sequence with the specified charset; it is a direct wrapper of "new String(bytes, charset)" and "string.getBytes(charset)". The lightest and most efficient strategy.
    3. JacksonJsonRedisSerializer: the jackson-json tool converts between JavaBeans and JSON, so a POJO instance can be serialized into JSON and stored in redis, and JSON data can be turned back into a POJO instance. Because Jackson needs the Class type when serializing and deserializing, this strategy is slightly more complex to encapsulate.
    4. OxmSerializer: converts between JavaBeans and XML; currently supported third-party libraries include jaxb and apache-xmlbeans, and the data stored in redis will be XML. This strategy makes programming harder and is the least efficient; not recommended. [requires the spring-oxm module]
  6. If your data needs to be parsed by third-party tools, use StringRedisSerializer instead of JdkSerializationRedisSerializer.
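The StringRedisSerializer strategy described above is essentially a thin wrapper around charset encoding. A minimal sketch of that round trip in plain Java (the class name is illustrative, not the Spring implementation):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

// Illustrative re-implementation of what a string serializer does:
// serialize = string.getBytes(charset), deserialize = new String(bytes, charset)
public class SimpleStringSerializer {
    private final Charset charset;

    public SimpleStringSerializer() {
        this(StandardCharsets.UTF_8); // UTF-8 is the common default
    }

    public SimpleStringSerializer(Charset charset) {
        this.charset = charset;
    }

    public byte[] serialize(String value) {
        return value == null ? null : value.getBytes(charset);
    }

    public String deserialize(byte[] bytes) {
        return bytes == null ? null : new String(bytes, charset);
    }
}
```

This is why the strategy is the lightest one: there is no object graph to walk, just a charset conversion.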

RedisTemplate Top level method

  1. Determine whether the given key exists; returns true if it does, false otherwise
redisTemplate.hasKey(K key)
  2. Delete the given key
redisTemplate.delete(K key)
  3. Delete the given collection of keys
redisTemplate.delete(Collection<K> keys)
  4. Execute the Redis dump command and return the result: the value of key serialized as byte[]
redisTemplate.dump(K key)
  5. Set the expiration time of the given key, or set it to expire at the given date/timestamp
redisTemplate.expire(K key, long timeout, TimeUnit unit)
redisTemplate.expireAt(K key, Date date)
  6. Find all keys matching the given pattern; returns a Set without duplicates
redisTemplate.keys(K pattern)
  7. Rename oldKey to newKey.
redisTemplate.rename(K oldKey, K newKey)
  8. Get the type of the value at key
redisTemplate.type(K key)
  9. Rename oldKey to newKey only when newKey does not exist.
redisTemplate.renameIfAbsent(K oldKey, K newKey)
  10. Get a random key from redis
redisTemplate.randomKey()
  11. Get the remaining time to live of the given key
redisTemplate.getExpire(K key)
  12. Get the remaining time to live in the given time unit
redisTemplate.getExpire(K key, TimeUnit timeUnit)
  13. Remove the expiration time of key
redisTemplate.persist(K key)
  14. Move the given key to the database with the given index
redisTemplate.move(K key, int dbIndex)
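The expiry-related methods above (expire, getExpire, persist) follow Redis's TTL conventions: -1 means the key has no expiry, -2 means the key is missing or expired. A minimal pure-Java model of those semantics (illustrative only, with a manually advanced clock instead of real time):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative model of key expiry semantics (expire/ttl/persist), not Redis code.
public class ExpiryTable {
    private final Map<String, Long> expireAtMillis = new HashMap<>();
    private final Map<String, String> data = new HashMap<>();
    private long now = 0; // simulated clock, advanced manually in this sketch

    public void tick(long millis) { now += millis; }

    public void set(String key, String value) {
        data.put(key, value);
        expireAtMillis.remove(key); // a fresh set clears any previous expiry
    }

    public void expire(String key, long ttlMillis) {
        if (data.containsKey(key)) expireAtMillis.put(key, now + ttlMillis);
    }

    public void persist(String key) { expireAtMillis.remove(key); }

    public long ttl(String key) {
        Long deadline = expireAtMillis.get(key);
        if (deadline != null && deadline <= now) {   // lazily drop expired keys
            data.remove(key);
            expireAtMillis.remove(key);
        }
        if (!data.containsKey(key)) return -2;       // -2: expired or missing
        deadline = expireAtMillis.get(key);
        return deadline == null ? -1 : deadline - now; // -1: no expiry set
    }
}
```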

RedisTemplate.opsForValue() Method

  1. Set up key Follow value Value

    redisTemplate.opsForValue().set(K key, V value)
    
  2. obtain key Value

    redisTemplate.opsForValue().get(Object key)
    
  3. Set up key Follow value Value , Also set expiration time

    redisTemplate.opsForValue().set(K key, V value, Duration timeout)
    
  4. Get the substring of the value of key between start and end

    redisTemplate.opsForValue().get(K key, long start, long end)
    
  5. Set up key And returns its old value

    redisTemplate.opsForValue().getAndSet(K key, V value)
    
  6. Get multiple key

    redisTemplate.opsForValue().multiGet(Collection<K> keys)
    
  7. Append the given string to the existing value of key

    redisTemplate.opsForValue().append(K key, String value)
    
  8. Incrementally increase double value

    redisTemplate.opsForValue().increment(K key, double increment)
    
  9. Store a long value incrementally via increment(K key, long delta) (a positive delta increments, a negative delta decrements)

    redisTemplate.opsForValue().increment(K key, long increment)
    
  10. Set multiple keys to multiple values using the key-value pairs provided in the map, only if none of the provided keys exist.

    redisTemplate.opsForValue().multiSetIfAbsent(Map<? extends K,? extends V> map)
    
  11. Use the key value pairs provided in the collection to set multiple key Set to multiple values

    Map<String, String> map = new HashMap<>();
    map.put("1", "1");
    map.put("2", "2");
    map.put("3", "3");
    redisTemplate.opsForValue().multiSet(map);  // multiSet(Map<? extends K,? extends V> map)
    
  12. Get specified key The length of the string of

    redisTemplate.opsForValue().size(K key)
    
  13. Overwrite with the given value... Starting at the specified offset key Part of

    redisTemplate.opsForValue().set(K key, V value, long offset)
    
  14. If key does not exist, set it to hold the string value and return true; if it already exists, return false

    redisTemplate.opsForValue().setIfAbsent(key, value)
    
  15. To reset key And add the expiration time

    redisTemplate.opsForValue().set(key, value, timeout, unit)
    
  16. Set the bit at the given offset of the stored value to value

    redisTemplate.opsForValue().setBit(K key, long offset, boolean value)
    
  17. Yes key String value stored , Gets the bit on the specified offset (bit)

    redisTemplate.opsForValue().getBit(K key, long offset)
    

RedisTemplate.opsForHash() Method

  1. Get the value of the given hashKey from the hash at key, i.e. key field(hashKey) value
redisTemplate.opsForHash().get(H key, Object hashKey)
  2. Get the entire hash stored at key, i.e. all entries
redisTemplate.opsForHash().entries(H key)
  3. Set the value of hashKey in the hash
redisTemplate.opsForHash().put(H key, HK hashKey, HV value)
  4. Set multiple hash fields to multiple values using the data provided in m, i.e. assign from a map
redisTemplate.opsForHash().putAll(H key, Map<? extends HK,? extends HV> m)
  5. Set the value of hashKey only when hashKey does not exist.
redisTemplate.opsForHash().putIfAbsent(H key, HK hashKey, HV value)
  6. Delete the given hashKeys from the hash
redisTemplate.opsForHash().delete(H key, Object... hashKeys)
  7. Determine whether the given hashKey exists in the hash
redisTemplate.opsForHash().hasKey(H key, Object hashKey)
  8. Increment the value of hashKey by the given increment
redisTemplate.opsForHash().increment(H key, HK hashKey, double increment)
redisTemplate.opsForHash().increment(H key, HK hashKey, long increment)
  9. Get the set of hashKeys (fields) of the hash at key
redisTemplate.opsForHash().keys(H key)
  10. Get the size of the hash at key.
redisTemplate.opsForHash().size(H key)
  11. Get the values of the hash at key
redisTemplate.opsForHash().values(H key)

  12. Scan for matching key-value pairs
redisTemplate.opsForHash().scan(H key, ScanOptions options)

RedisTemplate.opsForList() Method

  1. Get the element at index in the list at key
redisTemplate.opsForList().index(K key, long index)
  2. Get the elements between start and end from the list at key
redisTemplate.opsForList().range(K key, long start, long end)
  3. Prepend value to the list at key
redisTemplate.opsForList().leftPush(K key, V value)
  4. Prepend multiple values to the list at key
redisTemplate.opsForList().leftPushAll(K key, Collection<V> values)
  5. Prepend value to key only when the list exists
redisTemplate.opsForList().leftPushIfPresent(K key, V value)
  6. Insert value before pivot in the list at key
redisTemplate.opsForList().leftPush(K key, V pivot, V value)
  7. Append value(s) to the list at key
redisTemplate.opsForList().rightPush(K key, V value)
redisTemplate.opsForList().rightPushAll(K key, Collection<V> values)
  8. Insert value after pivot in the list at key
redisTemplate.opsForList().rightPush(K key, V pivot, V value)
  9. Set the list element at index to value
redisTemplate.opsForList().set(K key, long index, V value)
  10. Remove and return the first element of the list stored at key
redisTemplate.opsForList().leftPop(K key)
redisTemplate.opsForList().leftPop(K key, Duration timeout)
redisTemplate.opsForList().leftPop(K key, long count)
redisTemplate.opsForList().leftPop(K key, long timeout, TimeUnit unit)
  11. Remove and return the last element of the list stored at key
redisTemplate.opsForList().rightPop(K key)
redisTemplate.opsForList().rightPop(K key, Duration timeout)
redisTemplate.opsForList().rightPop(K key, long count)
redisTemplate.opsForList().rightPop(K key, long timeout, TimeUnit unit)
  12. Remove the last element from the list at sourceKey, prepend it to the list at destinationKey, and return its value
redisTemplate.opsForList().rightPopAndLeftPush(K sourceKey, K destinationKey)
redisTemplate.opsForList().rightPopAndLeftPush(K sourceKey, K destinationKey, Duration timeout)
  13. Remove the first count occurrences of value from the list stored at key
redisTemplate.opsForList().remove(K key, long count, Object value)
  14. Trim the list at key to the elements between start and end
redisTemplate.opsForList().trim(K key, long start, long end)
  15. Get the size of the list stored at key
redisTemplate.opsForList().size(K key)
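The list operations above behave like a double-ended queue. A minimal pure-Java model (illustrative, not Redis code) of leftPush/rightPop and the rightPopAndLeftPush move:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative model of Redis list semantics using a Deque:
// leftPush prepends, rightPop removes from the tail, and rightPopAndLeftPush
// moves the tail of one list to the head of another.
public class MiniList {
    private final Deque<String> items = new ArrayDeque<>();

    public void leftPush(String value)  { items.addFirst(value); }
    public void rightPush(String value) { items.addLast(value); }
    public String leftPop()  { return items.pollFirst(); }
    public String rightPop() { return items.pollLast(); }
    public int size() { return items.size(); }

    public static String rightPopAndLeftPush(MiniList source, MiniList destination) {
        String value = source.rightPop();
        if (value != null) destination.leftPush(value);
        return value;
    }
}
```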

RedisTemplate.opsForSet() Method

  1. Add the given values to the set at key
redisTemplate.opsForSet().add(K key, V... values)
  2. Remove the given values from the set at key and return the number of removed elements
redisTemplate.opsForSet().remove(K key, Object... values)
  3. Remove and return a random member from the set at key
redisTemplate.opsForSet().pop(K key)
  4. Get the size of the set at key
redisTemplate.opsForSet().size(K key)
  5. Check whether the set at key contains the given value
redisTemplate.opsForSet().isMember(K key, Object o)
  6. Return the members of the intersection of key and all the given otherKeys
redisTemplate.opsForSet().intersect(K key, Collection<K> otherKeys)
  7. Intersect key with all the given otherKeys and store the result in destKey
redisTemplate.opsForSet().intersectAndStore(K key, Collection<K> otherKeys, K destKey)
  8. Intersect key with otherKey and store the result in destKey
redisTemplate.opsForSet().intersectAndStore(K key, K otherKey, K destKey)
  9. Union the sets at key and otherKey
redisTemplate.opsForSet().union(K key, K otherKey)
  10. Union the sets at key and otherKey and store the result in destKey
redisTemplate.opsForSet().unionAndStore(K key, K otherKey, K destKey)
  11. Get the difference of the sets
redisTemplate.opsForSet().difference(key, otherKeys)
  12. Get the difference of the sets and store it in destKey
redisTemplate.opsForSet().differenceAndStore(key, otherKey, destKey)
  13. Get a random element from the set
redisTemplate.opsForSet().randomMember(key)
  14. Get all elements of the set
redisTemplate.opsForSet().members(key)
  15. Get count random elements from the set
redisTemplate.opsForSet().randomMembers(key, count)
  16. Get count random elements from the set, without duplicates
redisTemplate.opsForSet().distinctRandomMembers(key, count)
  17. Iterate over the set
redisTemplate.opsForSet().scan(key, options)

RedisTemplate.opsForZSet() Method

  1. Add an element; elements are ordered by score, ascending
redisTemplate.opsForZSet().add(key, value, score)
  2. Remove multiple values
redisTemplate.opsForZSet().remove(key, values)
  3. Increment the score of the element and return the new score
redisTemplate.opsForZSet().incrementScore(key, value, delta)
  4. Return the rank of the element in the set, ordered ascending
redisTemplate.opsForZSet().rank(key, value)
  5. Return the rank of the element in the set, ordered descending
redisTemplate.opsForZSet().reverseRank(key, value)
  6. Get the elements in the given range of the set, with scores, in descending order
redisTemplate.opsForZSet().reverseRangeWithScores(key, start, end)
  7. Query the elements in the given score range, returned from high to low
redisTemplate.opsForZSet().reverseRangeByScore(key, min, max)
redisTemplate.opsForZSet().reverseRangeByScoreWithScores(key, min, max)
  8. Sort from high to low, then page through the values between min and max
redisTemplate.opsForZSet().reverseRangeByScore(key, min, max, start, end)
  9. Count the elements whose score is within the given range
redisTemplate.opsForZSet().count(key, min, max)
  10. Get the size of the set
redisTemplate.opsForZSet().size(key)
redisTemplate.opsForZSet().zCard(key)
  11. Get the score of the element with the given value in the set at key
redisTemplate.opsForZSet().score(key, value)
  12. Remove the elements in the given index range
redisTemplate.opsForZSet().removeRange(key, start, end)
  13. Remove the members in the given score range
redisTemplate.opsForZSet().removeRangeByScore(key, min, max)
  14. Union key and otherKey and store the result in destKey
redisTemplate.opsForZSet().unionAndStore(key, otherKey, destKey)
  15. Intersect key and otherKey and store the result in destKey
redisTemplate.opsForZSet().intersectAndStore(key, otherKey, destKey)

Redis 6 transaction operations

The definition of a Redis transaction

image-20220619152757114

A Redis transaction is a single isolated operation: all commands in the transaction are serialized and executed in order. While a transaction executes, it is not interrupted by command requests from other clients.

The main purpose of a Redis transaction is to chain multiple commands together and prevent other commands from cutting in.

MULTI, EXEC, DISCARD

Starting from the MULTI command, every subsequent command is placed into a command queue instead of being executed. Once EXEC is entered, Redis executes the queued commands in order.

While commands are being queued, DISCARD can be used to abandon the queue.

image-20220619152848021

Error handling in transactions

  • If a command reports an error while being queued, the entire queue is cancelled at execution time

image-20220619152910457

  • If a command reports an error during execution, only that command is skipped; the other commands still execute, and there is no rollback.

image-20220619152936595

Transaction conflict

Example

One request wants to deduct 8000

Another request wants to deduct 5000

Another request wants to deduct 1000

image-20220619164733463

Two locking approaches

image-20220619153547188

Pessimistic locking

Pessimistic locking (Pessimistic Lock), as the name suggests, is pessimistic: every time it reads data it assumes someone else will modify it, so it locks on every read, and anyone who wants that data blocks until the lock is released. Traditional relational databases rely heavily on such locking mechanisms: row locks, table locks, read locks, write locks, all acquired before the operation.

Optimistic locking

Optimistic locking (Optimistic Lock), as the name suggests, is optimistic: every read assumes nobody else will modify the data, so no lock is taken. Instead, the update checks whether anyone else has modified the data in the meantime, typically using a mechanism such as a version number. Optimistic locking suits read-heavy applications and improves throughput. Redis implements its transactions with this check-and-set mechanism.
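The version-number idea behind optimistic locking can be sketched in plain Java (illustrative only, no Redis involved): read the value together with its version, and apply the update only if the version is still the one that was read.

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative check-and-set: a deduction succeeds only if nobody else
// bumped the version between our read and our write.
public class VersionedBalance {
    private volatile long balance;
    private final AtomicLong version = new AtomicLong(0);

    public VersionedBalance(long initial) { this.balance = initial; }

    public long getBalance() { return balance; }
    public long getVersion() { return version.get(); }

    // Returns true only when expectedVersion still matches and funds suffice.
    public synchronized boolean tryDeduct(long expectedVersion, long amount) {
        if (version.get() != expectedVersion || balance < amount) {
            return false; // stale read or insufficient balance: caller must re-read and retry
        }
        balance -= amount;
        version.incrementAndGet();
        return true;
    }
}
```

A caller that gets false simply re-reads the balance and version, then retries, which is exactly the retry loop optimistic locking implies.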

WATCH key [key ...]

Before executing multi, run watch key1 [key2 ...] to watch one or more keys. If any watched key is modified by another command before the transaction executes, the transaction is aborted.

unwatch

Cancels the WATCH on all keys.

If EXEC or DISCARD is executed after the WATCH command, there is no need to call UNWATCH.

Three characteristics of Redis transactions

Ø A single isolated operation

  • All commands in the transaction are serialized and executed in order. While the transaction executes, it is not interrupted by command requests from other clients.

Ø No concept of isolation levels

  • Commands in the queue are not actually executed until the transaction is committed; no command runs before EXEC.

Ø No guarantee of atomicity

  • If a command fails within the transaction, the remaining commands still execute; there is no rollback.
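The "no atomicity" behavior can be simulated in plain Java (illustrative only, not Redis code): commands are queued first, and at exec time a command that fails at runtime is skipped while the others still run, with no rollback.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Illustrative model of MULTI/EXEC error handling: a runtime error in one
// queued command does not roll back the commands that already succeeded.
public class MiniTx {
    private final List<Supplier<Object>> queue = new ArrayList<>();

    public void queue(Supplier<Object> command) { queue.add(command); } // MULTI phase

    public List<Object> exec() {                                        // EXEC phase
        List<Object> results = new ArrayList<>();
        for (Supplier<Object> cmd : queue) {
            try {
                results.add(cmd.get());
            } catch (RuntimeException e) {
                results.add("(error) " + e.getMessage()); // failing command is skipped, no rollback
            }
        }
        queue.clear();
        return results;
    }

    public void discard() { queue.clear(); } // DISCARD drops everything queued
}
```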

Seckill (flash sale) case

Ideas

image-20220619173036319

  1. Pass in uid and prodid (and validate them)
  2. Generate keys from uid and prodid
  3. Check the inventory (null: the seckill has not started; 0: it has ended) and check for a repeated seckill by the same user
  4. The user performs the seckill: stock -1
  5. Add the user to the seckill success list

Code :

public static boolean doSecKill(String uid,String prodid) throws IOException {
    
   //1 uid and prodid Judge not empty 
   if(uid == null || prodid == null) {
    
      return false;
   }

   //2  Connect redis
   //Jedis jedis = new Jedis("192.168.44.168",6379);
   // Get through connection pool jedis object 
   JedisPool jedisPoolInstance = JedisPoolUtil.getJedisPoolInstance();
   Jedis jedis = jedisPoolInstance.getResource();

   //3  Splicing key
   // 3.1  stock key
   String kcKey = "sk:"+prodid+":qt";
   // 3.2  Second kill successful users key
   String userKey = "sk:"+prodid+":user";

   // Monitor inventory 
   jedis.watch(kcKey);

   //4  Get inventory , If inventory null, The second kill hasn't started yet 
   String kc = jedis.get(kcKey);
   if(kc == null) {
    
      System.out.println(" The second kill hasn't started yet , Please wait ");
      jedis.close();
      return false;
   }

   // 5  Judge whether the user repeats the second kill operation 
   if(jedis.sismember(userKey, uid)) {
    
      System.out.println(" It's a second kill , You can't repeat the second kill ");
      jedis.close();
      return false;
   }

   //6  Judge if the quantity of goods , Inventory quantity is less than 1, End of seckill 
   if(Integer.parseInt(kc)<=0) {
    
      System.out.println(" The second kill is over ");
      jedis.close();
      return false;
   }

   //7  Second kill process 
   // With a transaction 
   Transaction multi = jedis.multi();

   // Team operation 
   multi.decr(kcKey);
   multi.sadd(userKey,uid);

   // perform 
   List<Object> results = multi.exec();

   if(results == null || results.size()==0) {
    
      System.out.println(" The second kill failed ....");
      jedis.close();
      return false;
   }

   //7.1  stock -1
   //jedis.decr(kcKey);
   //7.2  Add successful users to the list 
   //jedis.sadd(userKey,uid);

   System.out.println(" The second kill succeeded ..");
   jedis.close();
   return true;
}

Connection timeout problems can be solved by using a connection pool

public class JedisPoolUtil {
    
   private static volatile JedisPool jedisPool = null;

   private JedisPoolUtil() {
    
   }

   public static JedisPool getJedisPoolInstance() {
    
      if (null == jedisPool) {
    
         synchronized (JedisPoolUtil.class) {
    
            if (null == jedisPool) {
    
               JedisPoolConfig poolConfig = new JedisPoolConfig();
               poolConfig.setMaxTotal(200);
               poolConfig.setMaxIdle(32);
               poolConfig.setMaxWaitMillis(100*1000);
               poolConfig.setBlockWhenExhausted(true);
               poolConfig.setTestOnBorrow(true);  // ping PONG
             
               jedisPool = new JedisPool(poolConfig, "192.168.44.168", 6379, 60000 );
            }
         }
      }
      return jedisPool;
   }

   public static void release(JedisPool jedisPool, Jedis jedis) {
    
      if (null != jedis) {
    
         // returnResource is deprecated in newer Jedis versions;
         // jedis.close() is the modern way to return a connection to the pool
         jedisPool.returnResource(jedis);
      }
   }

}

A Lua script can be used to solve the leftover-inventory (oversell) problem, because the script executes atomically on the server

public class SecKill_redisByScript {
    
   
   private static final  org.slf4j.Logger logger =LoggerFactory.getLogger(SecKill_redisByScript.class) ;

   public static void main(String[] args) {
    
      JedisPool jedispool =  JedisPoolUtil.getJedisPoolInstance();
 
      Jedis jedis=jedispool.getResource();
      System.out.println(jedis.ping());
      
      Set<HostAndPort> set=new HashSet<HostAndPort>();

   // doSecKill("201","sk:0101");
   }
   
   static String secKillScript ="local userid=KEYS[1];\r\n" + 
         "local prodid=KEYS[2];\r\n" + 
         "local qtkey='sk:'..prodid..\":qt\";\r\n" + 
         "local usersKey='sk:'..prodid..\":usr\";\r\n" + 
         "local userExists=redis.call(\"sismember\",usersKey,userid);\r\n" + 
         "if tonumber(userExists)==1 then \r\n" + 
         " return 2;\r\n" + 
         "end\r\n" + 
         "local num= redis.call(\"get\" ,qtkey);\r\n" + 
         "if tonumber(num)<=0 then \r\n" + 
         " return 0;\r\n" + 
         "else \r\n" + 
         " redis.call(\"decr\",qtkey);\r\n" + 
         " redis.call(\"sadd\",usersKey,userid);\r\n" + 
         "end\r\n" + 
         "return 1" ;
          
   static String secKillScript2 = 
         "local userExists=redis.call(\"sismember\",\"{sk}:0101:usr\",userid);\r\n" +
         " return 1";

   public static boolean doSecKill(String uid,String prodid) throws IOException {
    

      JedisPool jedispool =  JedisPoolUtil.getJedisPoolInstance();
      Jedis jedis=jedispool.getResource();

       //String sha1= .secKillScript;
      String sha1=  jedis.scriptLoad(secKillScript);
      Object result= jedis.evalsha(sha1, 2, uid,prodid);

        String reString=String.valueOf(result);
      if ("0".equals( reString )  ) {
    
         System.err.println(" Empty !!");
      }else if("1".equals( reString )  )  {
    
         System.out.println(" Panic buying !!!!");
      }else if("2".equals( reString )  )  {
    
         System.err.println(" The user has robbed !!");
      }else{
    
         System.err.println(" Panic buying exception !!");
      }
      jedis.close();
      return true;
   }
}

Redis 6 persistence: RDB

Within a specified time interval, RDB writes a snapshot of the in-memory dataset to disk (the "Snapshot" in jargon); on recovery, the snapshot file is read straight back into memory.

How backups are performed

Redis forks a separate child process to perform persistence. The data is first written to a temporary file; when the persistence process finishes, the temporary file replaces the previous persisted file. Throughout the whole process the main process performs no disk I/O, which guarantees very high performance. If large-scale data recovery is needed and you are not very sensitive to the integrity of the recovered data, RDB is more efficient than AOF. RDB's drawback is that data written after the last persistence may be lost.
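The "write to a temporary file, then replace the previous dump" step described above can be sketched in plain Java with an atomic rename (class and file names are illustrative, not the Redis implementation):

```java
import java.io.IOException;
import java.nio.file.*;

// Illustrative version of the RDB save pattern: write the full snapshot to a
// temporary file first, then atomically replace the previous dump file, so a
// crash mid-write never leaves a half-written dump.rdb behind.
public class SnapshotWriter {
    public static void save(Path dumpFile, byte[] snapshot) throws IOException {
        Path tmp = dumpFile.resolveSibling(dumpFile.getFileName() + ".tmp");
        Files.write(tmp, snapshot);               // 1. write everything to the temp file
        Files.move(tmp, dumpFile,                 // 2. replace the old dump in one step
                StandardCopyOption.REPLACE_EXISTING,
                StandardCopyOption.ATOMIC_MOVE);
    }
}
```

Readers of the dump file therefore only ever see a complete old snapshot or a complete new one, never a partial write.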

Schematic diagram

image-20220620192300476

fork

  • fork() creates a copy of the current process. All data of the new process (variables, environment variables, program counter, etc.) has the same values as in the original process, but it is a brand-new process running as a child of the original
  • In Linux, fork() produces a child process identical to the parent. Since the child usually calls exec right away, Linux introduces "copy-on-write" for efficiency
  • In general, the parent and child share the same physical memory; only when the contents of a memory segment change is that segment copied to the child.

Persistence process

image-20220620192519641

dump.rdb file

  1. The file name is configured in redis.conf; the default is dump.rdb

    image-20220620192629448

  2. The save path of the rdb file can also be modified; the default is the directory where the Redis startup command line was run: dir "/myredis/"

    image-20220620195612405

    How to trigger an RDB snapshot; retention strategy

    The default snapshot configuration in the configuration file

    image-20220620195640896

Commands: save vs. bgsave

  1. save: only saves, nothing else; everything is blocked while it runs. A manual save; not recommended.
  2. bgsave: Redis performs the snapshot operation asynchronously in the background, and can still respond to client requests while snapshotting.

The lastsave command returns the time of the last successful snapshot.

flushall command

Executing the flushall command also produces a dump.rdb file, but it is empty and therefore meaningless.

Save

Format: save <seconds> <number-of-write-operations>

RDB is a compressed snapshot of the entire memory. You can configure compound conditions that trigger the snapshot.

The defaults are: 10,000 changes within 1 minute,

or 10 changes within 5 minutes,

or 1 change within 15 minutes.
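In redis.conf these defaults correspond to three save lines (a sketch of the stock configuration; check your own file for the exact values):

```conf
# save <seconds> <changes>: snapshot if at least <changes> writes
# happened within <seconds> seconds; any one matching rule triggers a bgsave
save 900 1
save 300 10
save 60 10000
```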

Disabling

Do not configure any save directive, or pass save an empty string

Dynamically stopping RDB:

redis-cli config set save ""  # giving save an empty value disables the save strategy

The configuration file

stop-writes-on-bgsave-error

When Redis cannot write the snapshot to disk, refuse further Redis write operations. Recommended: yes

image-20220620200154338

rdbcompression

For snapshots stored on disk, you can choose whether to compress them. If enabled, redis compresses with the LZF algorithm.

If you do not want to spend CPU on compression, you can turn this feature off. Recommended: yes.

image-20220620200224963

rdbchecksum Check for integrity

After storing a snapshot, redis can also verify the data with the CRC64 algorithm.

Doing so adds roughly 10% performance overhead, so if you want maximum performance you can turn it off.

Recommended: yes.

image-20220620200259309

rdb Backup of

First run config get dir to find the rdb file's directory

Then copy the *.rdb file somewhere else

rdb The recovery of

  1. Stop Redis
  2. Copy the backup file into the working directory: cp dump2.rdb dump.rdb
  3. Start Redis; the backup data is loaded automatically

Pros and cons

Advantages

  • Suitable for large-scale data recovery
  • A better fit when requirements on data integrity and consistency are not strict
  • Saves disk space
  • Fast recovery

image-20220620200532451

Disadvantages

  • During fork, the data in memory is cloned, so roughly double the memory needs to be planned for
  • Although Redis uses copy-on-write during fork, it still costs performance when the dataset is huge.
  • Backups are taken at intervals, so if Redis goes down unexpectedly, all changes since the last snapshot are lost.

image-20220620200658195

Redis6 Persistence is AOF

brief introduction

Persistence as a log, recording each write operation (incremental saving): every write instruction Redis executes is appended to the file (read operations are not recorded). The file may only be appended to, never rewritten in place. When redis starts, it reads the file and rebuilds the data; in other words, on restart redis replays the write instructions in the log file from beginning to end to recover the data

Persistence process

  1. A client's write command is appended to the AOF buffer;
  2. the AOF buffer is synced to the AOF file on disk according to the AOF persistence policy [always, everysec, no];
  3. when the AOF file grows past the rewrite threshold, or on a manual rewrite, the AOF file is rewritten to compress its size;
  4. when the Redis service restarts, it re-loads the AOF file and replays its write operations to recover the data;

image-20220620200959635

AOF Not on by default

The file name can be set in redis.conf; the default is appendonly.aof

The AOF file is saved in the same path as the RDB file.
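A minimal redis.conf sketch for turning AOF on (directive names as in the stock configuration file):

```conf
appendonly yes                    # enable AOF (the default is no)
appendfilename "appendonly.aof"   # default file name
# the file is written under the same directory as dump.rdb (the "dir" setting)
```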

AOF and RDB At the same time open

When AOF and RDB are both enabled, the system loads the AOF data by default (no data is lost)

AOF start-up / Repair / recovery

  • Although AOF's backup mechanism and performance differ from RDB's, backup and recovery work the same way as with RDB: copy the backup file, and when recovery is needed copy it back into the Redis working directory and load it on startup.
  • Normal recovery
    • Change the default appendonly no to yes
    • Make a copy of an aof file that has data and save it to the corresponding directory (check the directory with: config get dir)
    • Recovery: restart redis and the file is reloaded
  • Abnormal recovery
    • Change the default appendonly no to yes
    • If the AOF file is corrupted, repair it with /usr/local/bin/redis-check-aof --fix appendonly.aof
    • Back up the broken AOF file
    • Recovery: restart redis and the file is reloaded

AOF Synchronous frequency setting

appendfsync always

Always sync: every Redis write is logged to disk immediately; worst performance but best data integrity

appendfsync everysec

Sync every second: log once per second; if redis goes down, that second's data may be lost.

appendfsync no

redis does not sync proactively; the operating system decides when to flush.
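In redis.conf the three policies above are chosen with a single directive; a sketch:

```conf
# pick exactly one:
# appendfsync always     # fsync on every write: safest, slowest
appendfsync everysec     # fsync once per second (default): at most ~1s of loss
# appendfsync no         # let the operating system decide when to flush
```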

Rewrite Compress

brief introduction

AOF works by appending to a file, so the file keeps growing. To avoid this, a rewrite mechanism was added: when the AOF file size exceeds the configured threshold, Redis compacts the file's contents, keeping only the minimal set of commands needed to recover the data. A rewrite can also be triggered manually with the command bgrewriteaof

Principle and implementation

When the AOF file keeps growing and becomes too large, Redis forks a new process to rewrite the file (like RDB, it writes a temporary file first and then renames it). Since redis 4.0, a rewrite stores an rdb snapshot as the head of the new aof file, in a hybrid two-layer format, keeping it as historical data in place of the original command-by-command log.

no-appendfsync-on-rewrite:

If no-appendfsync-on-rewrite=yes, writes during a rewrite go only to the cache rather than the aof file, so user requests are not blocked; but if redis crashes during this window, the cached data from that period is lost. (Less data safety, better performance.)

If no-appendfsync-on-rewrite=no, data is still flushed to disk, but while a rewrite is in progress there may be blocking. (Data is safe, but performance degrades.)

Trigger mechanism , When to rewrite

Redis records the AOF size after the last rewrite. The default configuration triggers a rewrite when the AOF file has doubled since the last rewrite and is larger than 64MB.

Rewriting can reclaim a lot of disk space and shorten recovery time, but every rewrite has a cost, so Redis only rewrites when certain conditions are met.

auto-aof-rewrite-percentage: sets the rewrite baseline; a rewrite is triggered when the file has grown by 100% (i.e. reached twice its size after the last rewrite).

auto-aof-rewrite-min-size: sets the minimum size, 64MB by default. Only files at least this large are rewritten.

For example: the file reaches 70MB and is rewritten, shrinking to 50MB. When does the next rewrite start? At 100MB.

When the system loads, and after each rewrite, Redis records the AOF size at that moment as base_size.

A rewrite is performed when the current AOF size >= base_size + base_size*100% (default) and the current size >= 64mb (default).
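The trigger condition above can be sketched as a small check. shouldRewrite is a hypothetical helper (not Redis source code) that just mirrors the base_size formula:

```java
public class AofRewriteTrigger {
    // Mirrors the rule: rewrite when current >= base + base * percentage/100
    // AND current >= minSize (defaults: percentage = 100, minSize = 64MB).
    static boolean shouldRewrite(long currentBytes, long baseBytes,
                                 int percentage, long minBytes) {
        return currentBytes >= baseBytes + baseBytes * percentage / 100
            && currentBytes >= minBytes;
    }
}
```

With base_size recorded at 50MB, a 70MB file does not trigger a rewrite, but a 100MB file does, matching the example above.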

Rewrite process

  1. bgrewriteaof triggers the rewrite. Redis first checks whether a bgsave or bgrewriteaof is already running; if so, it waits for that command to finish before continuing.

  2. The main process forks a child process to perform the rewrite, so the main process is not blocked.

  3. The child process traverses the data in redis memory and writes it to a temporary file. Meanwhile the client's write requests are written both to the aof_buf buffer and to the aof_rewrite_buf rewrite buffer, which keeps the original AOF file intact and ensures that data modified while the new AOF file is being generated is not lost.

  4. 1). After the child process finishes writing the new AOF file, it signals the main process, and the main process updates its statistics.

    2). The main process appends the data in aof_rewrite_buf to the new AOF file.

  5. The new AOF file replaces the old one, completing the AOF rewrite.

image-20220620202521869

Pros and cons

Advantages

image-20220620202559778

  • The backup mechanism is more robust, with a lower probability of data loss.
  • The log is readable text; by editing the AOF file, misoperations can be undone.

Disadvantages

  • Takes more disk space than RDB.
  • Restoring from backup is slower.
  • Syncing on every read and write puts some pressure on performance.
  • Individual bugs exist that can make recovery impossible.

image-20220620202649842

Choose suggestions

  • The official recommendation is to use both.
  • If you are not sensitive to data loss, you can use RDB alone.
  • Using AOF alone is not recommended, because it may have bugs.
  • If you only need a pure in-memory cache, you can use neither.

Redis6 Master-slave replication of

brief introduction

After the master's data is updated, it is automatically synchronized to the replicas according to the configuration and policy: the master/slaver mechanism. The Master handles writes, the Slaves mainly handle reads

effect

  • Read / write separation , Performance expansion
  • Rapid disaster recovery

image-20220621112217034

Specific operation

  1. Copy multiple redis.conf files and include the base file (use an absolute path)
  2. Enable daemonize yes
  3. Set the pid file name: pidfile
  4. Set the port: port
  5. Set the log file name
  6. Set the dump.rdb name: dbfilename
  7. Turn Appendonly off, or give each instance its own file name

Create a new redis<port>.conf per instance, for example:

include /myRedis/redis.conf
pidfile /var/run/redis_6379.pid
port 6379
dbfilename dump6379.rdb

Start three services

image-20220621135348219

On each slave, run slaveof <master-ip> <port>

image-20220621141106758

Three common setups

One master, two slaves

  • If a slave goes down and is started again, it does not resume its previous slave role; it comes back as a master and must be re-attached with slaveof. Once reset, it copies the master's data again.
  • If the master goes down, the slaves keep their role unchanged. When the master starts again, it automatically resumes its previous master role, replication continues as before, and everything stays the same.

Chained replication (passing it on)

The previous Slave can be the Master of the next slave. A Slave can likewise accept connection and sync requests from other slaves, acting as the next master in the chain. This effectively reduces the write-propagation pressure on the top master and decentralizes the risk.

  • Use slaveof
  • Changing direction mid-chain: the slave clears its previous data and recreates a copy of the latest dataset
  • The risk: once some slave in the chain goes down, the slaves behind it cannot back up
  • If the master is down, the slaves remain slaves and no data can be written

image-20220621144809090

Slave takes over as master

When a master goes down, a following slave can be promoted to master, and the slaves behind it need no changes at all.

Run slaveof no one to turn a slave into a master.

Master slave replication principle

  1. After a slave server connects to the master, it sends a data-synchronization request to the master
  2. The master receives the request, persists its data via rdb, and sends the rdb file to the slave, which reads it to load the data
  3. Whenever the master performs a write operation, it notifies the slaves to synchronize the data
  4. The slave only actively requests a full sync on its first connection; after that, the master drives the synchronization

image-20220621143739606

Sentinel mode

brief introduction

The automated version of slave-takeover: sentinels monitor the master in the background, and if it fails, they automatically promote a slave to master based on the number of votes

image-20220621150358229

Implementation steps

  1. Set up one master and two slaves

  2. Self defined /myRedis New under the directory sentinel.conf file

  3. Deploy sentinels , Fill in the content

    sentinel monitor mymaster 127.0.0.1 6379 1

    where mymaster is the name given to the monitored master, and 1 is the minimum number of sentinels that must agree before a failover.

  4. Activate the sentry

    The sentinel binary lives in /usr/local/bin

    (for load testing, redis ships with its own redis-benchmark tool)

    Run redis-sentinel /myredis/sentinel.conf

  5. When the master goes down, a new master is elected from the slaves

    (after about 10 seconds the sentinel window's log shows the switch to a new master)

    Which slave is elected as the new master? It depends on the priority: slave-priority

    After the original master restarts, it becomes a slave.

  6. Replication delay

    Because all write operations happen on the Master first and are then synchronized to the Slaves, there is some delay between the Master and the Slaves. When the system is busy the delay worsens, and adding more Slave machines makes the problem worse still.

Fault recovery

image-20220621152330751

The priority is set in redis.conf; the default is slave-priority 100, and a lower value means higher priority

The offset refers to which slave has the most complete copy of the original master's data

Each redis instance generates a random 40-character runid at startup

Java setup

private static JedisSentinelPool jedisSentinelPool = null;

public static Jedis getJedisFromSentinel() {
    if (jedisSentinelPool == null) {
        Set<String> sentinelSet = new HashSet<>();
        sentinelSet.add("192.168.11.103:26379");

        JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
        jedisPoolConfig.setMaxTotal(10);             // maximum connections
        jedisPoolConfig.setMaxIdle(5);               // maximum idle connections
        jedisPoolConfig.setMinIdle(5);               // minimum idle connections
        jedisPoolConfig.setBlockWhenExhausted(true); // wait when the pool is exhausted
        jedisPoolConfig.setMaxWaitMillis(2000);      // maximum wait time
        jedisPoolConfig.setTestOnBorrow(true);       // ping/pong test on borrow

        jedisSentinelPool = new JedisSentinelPool("mymaster", sentinelSet, jedisPoolConfig);
    }
    return jedisSentinelPool.getResource();
}

Redis6 colony

problem

When capacity is not enough, how does redis scale out?

How does redis spread concurrent write load?

Moreover, in master-slave and chained setups, when the master goes down its ip address changes, and the application's configuration must be updated with the new host address, port, and so on.

Previously this was solved with proxy hosts; redis 3.0 brought its own solution: a decentralized cluster configuration.

brief introduction

A Redis cluster provides horizontal scaling of Redis: start N redis nodes, and the whole database is distributed across these N nodes, each node storing 1/N of the total data.

A Redis cluster provides a degree of availability through partitioning: even if some nodes in the cluster fail or cannot communicate, the cluster can continue to process command requests.

To configure

Configure basic information

Turn on daemonize yes

Pid File name

Designated port

Log File name

Dump.rdb name

Appendonly Turn it off or change the name

redis cluster Configuration modification

cluster-enabled yes Open cluster mode

cluster-config-file nodes-6379.conf Set the node profile name

cluster-node-timeout 15000 sets the node timeout; beyond that time (in milliseconds), the cluster automatically performs a master-slave switchover.

include /home/bigdata/redis.conf
port 6379
pidfile "/var/run/redis_6379.pid"
dbfilename "dump6379.rdb"
dir "/home/bigdata/redis_cluster"
logfile "/home/bigdata/redis_cluster/redis_err_6379.log"
cluster-enabled yes
cluster-config-file nodes-6379.conf
cluster-node-timeout 15000

Start all services

Combine six nodes into a cluster

Before combining, make sure every redis instance has started and all the nodes-xxxx.conf files have been generated normally.

  • cd /opt/redis-6.2.1/src
  • redis-cli --cluster create --cluster-replicas 1 192.168.150.111:6379 192.168.150.111:6380 192.168.150.111:6381 192.168.150.111:6389 192.168.150.111:6390 192.168.150.111:6391

Connect with the cluster strategy; written data is automatically routed to the corresponding master

Use the cluster nodes command to view cluster information

Node allocation

A cluster must have at least three master nodes .

The option --cluster-replicas 1 means we want to create one replica for each master in the cluster.

The allocation tries to place each master on a different server IP address, and each replica on a different IP address from its own master.

slots

[OK] All nodes agree about slots configuration.

>>> Check for open slots…

>>> Check slots coverage…

[OK] All 16384 slots covered.

A Redis cluster contains 16384 hash slots; every key in the database belongs to one of these 16384 slots.

The cluster uses the formula CRC16(key) % 16384 to compute which slot a key belongs to, where CRC16(key) is the CRC16 checksum of the key.

Each node in the cluster is responsible for a portion of the slots. For example, if a cluster has three master nodes:

node A handles slots 0 to 5460,

node B handles slots 5461 to 10922,

node C handles slots 10923 to 16383.
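The slot formula can be reproduced directly. This sketch implements the CRC16 variant the cluster specification names (CRC-16/XMODEM: polynomial 0x1021, initial value 0); the class and method names are my own:

```java
public class ClusterSlot {
    // CRC16-CCITT (XMODEM), the variant used for cluster key hashing
    static int crc16(byte[] data) {
        int crc = 0;
        for (byte b : data) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? ((crc << 1) ^ 0x1021) : (crc << 1);
                crc &= 0xFFFF;
            }
        }
        return crc;
    }

    // CRC16(key) % 16384, as in the formula above
    static int slot(String key) {
        return crc16(key.getBytes(java.nio.charset.StandardCharsets.UTF_8)) % 16384;
    }
}
```

The spec's reference vector is CRC16("123456789") = 0x31C3, so that key lands in slot 12739.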

Enter values in the cluster

Each time you write or query a key in redis-cli, redis computes which slot the key should go to. If that slot does not belong to the server the client is connected to, redis reports an error and tells you the address and port of the redis instance you should go to.

The redis-cli client provides the -c parameter to redirect automatically.

For example, after logging in with redis-cli -c -p 6379, writes and queries of key-value pairs are redirected automatically.

Key-value pairs that are not in the same slot cannot be used with multi-key operations such as mget and mset.

image-20220621165904820

You can use {} to define a group, so that keys with the same content inside {} are placed in the same slot

image-20220621165919111
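A sketch of the hash-tag rule (the helper name is my own): only the text between the first '{' and the next '}' is hashed, and an empty or missing tag means the whole key is hashed, which is why keys sharing a tag share a slot:

```java
public class KeyHashTag {
    // Returns the portion of the key that the cluster actually hashes
    static String effectiveKey(String key) {
        int open = key.indexOf('{');
        if (open != -1) {
            int close = key.indexOf('}', open + 1);
            if (close > open + 1) {  // the tag must be non-empty
                return key.substring(open + 1, close);
            }
        }
        return key;
    }
}
```

So user{1000}.name and user{1000}.age both hash "1000" and therefore land in the same slot, making mget on them possible.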

Query the values in the cluster

CLUSTER GETKEYSINSLOT <slot> <count> returns count keys from the given slot.

image-20220621170004766

Fault recovery

If a master node goes offline, can its slave automatically be promoted to master? Yes; note the 15-second timeout before the switch.

image-20220621170036763

After the failed master recovers, what happens to the master-slave relationship? When the old master comes back, it becomes a slave.

If both the master and the slaves of some slot range go down, can the redis service continue?

If a slot's master and slaves are all down and cluster-require-full-coverage is yes, the whole cluster goes down.

If a slot's master and slaves are all down and cluster-require-full-coverage is no, only that slot's data becomes unavailable and cannot be stored; the rest keeps working.

This is controlled by the cluster-require-full-coverage parameter in redis.conf.

Clustered jedis

Even if the client is not connected to the right host, the cluster automatically routes the write to the correct master. Masters take writes; slaves serve reads.

It is a decentralized master-slave cluster: no matter which host the data is written through, it can be read from the other hosts.

public class JedisClusterTest {

  public static void main(String[] args) {
     Set<HostAndPort> set = new HashSet<HostAndPort>();
     set.add(new HostAndPort("192.168.31.211", 6379));
     JedisCluster jedisCluster = new JedisCluster(set);
     jedisCluster.set("k1", "v1");
     System.out.println(jedisCluster.get("k1"));
  }
}

Pros and cons

Advantages

  • Capacity can be expanded
  • Write pressure is spread out
  • Decentralized configuration is relatively simple

Disadvantages

  • Multi-key operations are not supported
  • Multi-key Redis transactions are not supported; neither are lua scripts across slots
  • Because the cluster scheme appeared late, many companies had already adopted other clustering schemes; migrating a proxy or client-side partitioning scheme to redis cluster requires a wholesale migration rather than a gradual transition, which is highly complex.

Redis6 Apply problem solving

Cache penetration

image-20220621173832905

The requested key's data does not exist in the data source, so every request for that key misses the cache and is pushed through to the data source, which may crush it. For example, querying user information with a nonexistent user id hits neither the cache nor the database; a hacker exploiting this can overwhelm the database.

Solution

The cache only ever holds data that exists and can be queried, because it is written passively on a miss; and, for fault tolerance, when the storage layer finds nothing, nothing is written to the cache. As a result, every request for nonexistent data goes to the storage layer, defeating the purpose of the cache.

  1. **Cache null values:** if a query returns empty data (whether or not the data exists), still cache the empty result (null), with a very short expiration time, five minutes at most
  2. **Set up an accessible list (whitelist):** use the bitmaps type to define a whitelist, with the list ids as bitmap offsets. Each access is checked against the bitmap; an id not in the bitmap is intercepted and denied
  3. Use a Bloom filter. (The Bloom filter was proposed by Bloom in 1970. It is a very long binary vector (bitmap) plus a series of random mapping functions (hash functions), used to test whether an element is in a set. Its advantages are space efficiency and query time far better than ordinary algorithms; its drawbacks are a certain false-positive rate and difficulty of deletion.) Hash all possibly existing data into a large enough bitmap; nonexistent data is intercepted by the bitmap, shielding the underlying storage from the query pressure.
  4. Real-time monitoring: when Redis's hit rate starts dropping sharply, inspect the access patterns and data, and, together with the operations team, set up a blacklist to restrict the offending sources
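Point 3 can be illustrated with a minimal in-process Bloom filter (an illustrative sketch, not Redis's or any library's implementation; the hashing scheme here is an arbitrary choice):

```java
import java.util.BitSet;

public class SimpleBloomFilter {
    private final BitSet bits;
    private final int m; // number of bits
    private final int k; // number of hash functions

    SimpleBloomFilter(int m, int k) {
        this.bits = new BitSet(m);
        this.m = m;
        this.k = k;
    }

    // Derive k indexes from the key; the mixing constant is arbitrary
    private int index(String key, int i) {
        int h = key.hashCode() * 31 + i * 0x9E3779B1;
        return Math.floorMod(h, m);
    }

    void add(String key) {
        for (int i = 0; i < k; i++) bits.set(index(key, i));
    }

    // May return false positives, but never false negatives
    boolean mightContain(String key) {
        for (int i = 0; i < k; i++)
            if (!bits.get(index(key, i))) return false;
        return true;
    }
}
```

On a cache miss, a request whose key fails mightContain can be rejected without ever touching the database.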

Cache breakdown

The key's data exists, but the key has expired in redis. If a huge number of concurrent requests arrive at that moment, they all see the expired cache, load the data from the backend DB, and write it back to the cache; this burst of concurrent requests may instantly crush the backend DB.

image-20220621190414119

Solution

A key may be hit with extremely high concurrency at some moment: it is very "hot" data. At that moment, one problem must be considered: the cache being "broken through".

  1. Pre-load hot data: before an expected traffic peak, put the hot data into redis in advance and lengthen those hot keys' expiration times

  2. **Adjust in real time:** monitor which data is hot on the spot and adjust those keys' expiration times in real time

  3. Use a mutex lock:

    1. When the cache misses (the value is judged empty), do not load from the db immediately.
    2. First use a cache operation that returns a success flag (such as Redis's SETNX) to set a mutex key
    3. If the operation returns success, perform the load-db operation, write the result back to the cache, and finally delete the mutex key;
    4. If the operation returns failure, some other thread is already loading from the db; the current thread sleeps for a while and then retries the whole get-cache method.

    image-20220621191230409

Cache avalanche

The keys' data exists, but many keys expire in redis at the same time. The flood of concurrent requests that follows all sees the expired cache, loads data from the backend DB, and writes it back to the cache; this may instantly crush the backend DB.

The difference between a cache avalanche and a cache breakdown is that an avalanche involves many keys expiring together, whereas a breakdown concerns a single hot key.

image-20220621190608513

Solution

The avalanche effect of mass cache expiry can have a terrible impact on the underlying system!

  1. **Build a multi-level cache architecture:** nginx cache + redis cache + other caches (ehcache, etc.)
  2. Use locks or queues: a lock or queue guarantees that huge numbers of threads never read and write the database all at once, so a cache failure does not dump all the concurrent requests onto the underlying storage. Not suitable for high concurrency
  3. **Set an expiration flag to refresh the cache:** record whether the cached data is about to expire (with some lead time); when it is, trigger another thread in the background to refresh that key's cache.
  4. **Spread out cache expiration times:** for example, add a random value, say 1-5 minutes, to the base expiration time, so expiry times rarely coincide and collective expiry is hard to trigger.
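Point 4 in code: a one-line helper (the name is mine) that adds 1 to 5 random minutes of jitter to a base TTL:

```java
import java.util.concurrent.ThreadLocalRandom;

public class JitteredTtl {
    // Base TTL plus a random 60..300 seconds, so keys cached together
    // do not all expire together
    static long ttlSeconds(long baseSeconds) {
        long jitter = ThreadLocalRandom.current().nextLong(60, 301);
        return baseSeconds + jitter;
    }
}
```

The jittered value would then be passed as the expiration argument of the cache write, e.g. set key value EX ttl.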

Distributed lock

As business grows, a system originally deployed on a single machine evolves into a distributed cluster. With multiple threads and multiple processes spread across different machines, the concurrency-control locking strategies that worked in the single-machine deployment no longer apply, and the pure Java API provides no distributed-lock capability. To solve this we need a cross-JVM mutual-exclusion mechanism to control access to shared resources; that is the distributed-lock problem!

The mainstream implementation of distributed lock :

  1. Implementation of distributed lock based on Database

  2. Cache based (Redis etc. )

  3. be based on Zookeeper

Every distributed lock solution has its own advantages and disadvantages :

  1. performance :redis The highest

  2. reliability :zookeeper The highest

here , We are based on redis Implement distributed locks .

redis Implement distributed locks

redis: command

set sku:1:info "OK" NX PX 10000

EX seconds: set the key's expiration to seconds seconds. SET key value EX seconds is equivalent to SETEX key seconds value.

PX milliseconds: set the key's expiration to milliseconds milliseconds. SET key value PX milliseconds is equivalent to PSETEX key milliseconds value.

NX: set the key only if it does not already exist. SET key value NX is equivalent to SETNX key value.

XX: set the key only if it already exists.

image-20220621194815779

  1. Multiple clients try to acquire the lock at the same time (setnx)

  2. The one that succeeds executes the business logic {get data from the db, put it into the cache}, then releases the lock (del)

  3. The other clients wait and retry

Java code

@GetMapping("testLock")
public void testLock(){
    // 1. Acquire the lock with setnx
    Boolean lock = redisTemplate.opsForValue().setIfAbsent("lock", "111");
    // 2. Lock acquired: query the value of num
    if(lock){
        Object value = redisTemplate.opsForValue().get("num");
        // 2.1 If num is empty, return
        if(StringUtils.isEmpty(value)){
            return;
        }
        // 2.2 Otherwise convert it to int
        int num = Integer.parseInt(value+"");
        // 2.3 Increment num in redis
        redisTemplate.opsForValue().set("num", ++num);
        // 2.4 Release the lock with del
        redisTemplate.delete("lock");
    }else{
        // 3. Failed to acquire the lock: retry every 0.1 second
        try {
            Thread.sleep(100);
            testLock();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

Problem: a lock acquired with setnx can never be released if the business logic throws an exception.

Solution: set an expiration time so the lock is released automatically.

Optimize the lock expiration time

There are two ways to set the expiration time:

  1. The first that comes to mind is expire (not atomic: if an exception occurs between setnx and expire, the lock is never released)

  2. Specify the expiration time in the set command itself (recommended)

image-20220621194959526

Residual problems

Scenario: the business logic takes 7s to run. Execution proceeds as follows:

1. index1's business logic has not finished; after 3 seconds its lock is released automatically.

2. index2 acquires the lock and runs its business logic; after 3 seconds its lock is released automatically.

3. index3 acquires the lock and runs its business logic.

4. index1's business logic completes and calls del, but what it releases is index3's lock, so index3's business runs for only 1s before its lock is deleted by someone else.

Ultimately this degenerates into having no lock at all.

Solution: when acquiring the lock with setnx, set a unique value (e.g. a uuid); before releasing, fetch the value and check that the lock is your own.

Optimize it UUID Error proofing

image-20220621195047836

image-20220621195112099

Problem: the check-then-delete operation is not atomic.

Optimize it LUA Scripts guarantee atomicity of deletion

@GetMapping("testLockLua")
public void testLockLua() {
    
    // 1. Generate a uuid; it will be stored as the value of our lock key
    String uuid = UUID.randomUUID().toString();
    // 2. Build the lock key; the Lua script compares the uuid so that only the owner can delete the lock
    String skuId = "25"; // the sku being accessed, e.g. id 100008348542
    String locKey = "lock:" + skuId; // each product's data gets its own lock

    // 3. Acquire the lock, setting the expiration time atomically (SET ... NX EX)
    Boolean lock = redisTemplate.opsForValue().setIfAbsent(locKey, uuid, 3, TimeUnit.SECONDS);

    // First pitfall: never set the expiration in a separate call; if the process
    // dies between the two calls, the lock never expires.
    // redisTemplate.expire("lock", 10, TimeUnit.SECONDS); // setting expiration separately is unsafe
    if (lock) {
        // business logic starts here
        // read the current value of num
        Object value = redisTemplate.opsForValue().get("num");
        // nothing to do if it is empty
        if (StringUtils.isEmpty(value)) {
            return;
        }
        // Not empty. Note: if an exception were thrown here, a plain delete at the
        // end would never run and the lock would be stuck until it expires.
        int num = Integer.parseInt(value + "");
        // increment num and write it back to the cache
        redisTemplate.opsForValue().set("num", String.valueOf(++num));

        /* Release the lock with a Lua script */
        // the script deletes the key only if it still holds our uuid
        String script = "if redis.call('get', KEYS[1]) == ARGV[1] then return redis.call('del', KEYS[1]) else return 0 end";
        // execute the Lua script through Redis
        DefaultRedisScript<Long> redisScript = new DefaultRedisScript<>();
        redisScript.setScriptText(script);
        // Set the return type to Long: the script returns a number (1 or 0),
        // and the default String result type would fail to deserialize it.
        redisScript.setResultType(Long.class);
        // arguments: the script, the key(s) it checks, and the expected value
        redisTemplate.execute(redisScript, Arrays.asList(locKey), uuid);
    } else {
        // other threads wait and retry
        try {
            // sleep briefly
            Thread.sleep(1000);
            // after waking up, try again
            testLockLua();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
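The delete-only-if-owner rule that the Lua script enforces can be modeled locally in plain Java, with a ConcurrentHashMap standing in for the Redis key space (an illustrative sketch; in real Redis only the Lua script makes the GET-then-DEL pair atomic):

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Toy model of the compare-and-delete unlock. The map stands in for the
// Redis key space; in real Redis the whole unlock must run inside one Lua
// script so that GET and DEL cannot interleave with another client.
public class LockModel {
    private final ConcurrentHashMap<String, String> store = new ConcurrentHashMap<>();

    // Like SET key uuid NX: succeeds only if the key is absent.
    public boolean tryLock(String key, String uuid) {
        return store.putIfAbsent(key, uuid) == null;
    }

    // The Lua script's logic: delete only if the value is still our uuid.
    // ConcurrentHashMap.remove(key, value) is the atomic equivalent here.
    public boolean unlock(String key, String uuid) {
        return store.remove(key, uuid);
    }

    public static void main(String[] args) {
        LockModel redis = new LockModel();
        String a = UUID.randomUUID().toString();
        String b = UUID.randomUUID().toString();

        System.out.println(redis.tryLock("lock:25", a)); // true: A holds the lock
        System.out.println(redis.tryLock("lock:25", b)); // false: B must wait
        System.out.println(redis.unlock("lock:25", b));  // false: B cannot delete A's lock
        System.out.println(redis.unlock("lock:25", a));  // true: only the holder unlocks
    }
}
```

This is exactly the failure the plain `del` version has: a thread whose lock already expired could otherwise delete a lock now held by someone else.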

image-20220621195152423

Redis 6 New Features

ACL

Introduction

Redis ACL is short for Access Control List. The feature restricts what a connection may do, in terms of both the commands it can execute and the keys it can access.

Before Redis 6, the only security mechanisms were password control and renaming high-risk commands such as flushdb, KEYS *, and shutdown via rename-command. Redis 6 ACLs give finer-grained, per-user permission control over:

  1. Access permissions: user name and password
  2. The commands that can be executed
  3. The keys that can be operated on

Commands

  1. Use the acl list command to show the current list of users and their permissions

image-20220621200245075

  2. Use the acl cat command

    View the categories of ACL permission rules

    image-20220621200312998

    Add a category name as a parameter to view the specific commands in that category

    image: https://pic.imgdb.cn/item/62b1b3960947543129cde382.png

  3. Use the acl whoami command to view the current user

    image: https://pic.imgdb.cn/item/62b1b3b10947543129ce094c.png

  4. Use the acl setuser command to create and edit user ACLs

    Rules: some rules are single words that merely activate or remove a flag, or apply a given change to the user's ACL. Other rules are character prefixes combined with a command or category name, a key pattern, and so on.

    ACL rules

    | Type | Parameter | Description |
    | --- | --- | --- |
    | Enable / disable the user | on | Activate the user account |
    |  | off | Disable the user account. Note that already-authenticated connections keep working. If the default user is flagged off, new connections start unauthenticated and must send AUTH (or HELLO with the AUTH option) to authenticate in some way. |
    | Add / remove commands | +&lt;command&gt; | Add a command to the list of commands the user can call |
    |  | -&lt;command&gt; | Remove a command from the list of commands the user can call |
    |  | +@&lt;category&gt; | Add every command in the category to the user. Valid categories are @admin, @set, @sortedset, etc.; run ACL CAT for the complete list. The special category @all means all commands, both those currently in the server and those loaded later via modules. |
    |  | -@&lt;category&gt; | Remove a category of commands from the user |
    |  | allcommands | Alias for +@all |
    |  | nocommands | Alias for -@all |
    | Add / remove operable keys | ~&lt;pattern&gt; | Add a pattern of keys the user may operate on, e.g. ~* allows all keys |
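The glob-style matching behind a key rule such as ~cached:* can be sketched in a few lines of Java (an illustrative model only, not Redis' actual implementation; Redis patterns also support '?' and character classes, omitted here for brevity):

```java
import java.util.regex.Pattern;

// Illustrative model of ACL key-pattern checks: '*' matches any run of
// characters, everything else is treated literally.
public class AclKeyPattern {
    static boolean keyAllowed(String pattern, String key) {
        // Quote the literal parts, then let each '*' match anything.
        String regex = Pattern.quote(pattern).replace("*", "\\E.*\\Q");
        return key.matches(regex);
    }

    public static void main(String[] args) {
        // Suppose a user was created with: acl setuser user2 on >password ~cached:* +get
        System.out.println(keyAllowed("cached:*", "cached:item1")); // true
        System.out.println(keyAllowed("cached:*", "num"));          // false
        System.out.println(keyAllowed("*", "num"));                 // true, ~* allows all keys
    }
}
```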

Create a new user with default permissions:

image: https://pic.imgdb.cn/item/62b1b4210947543129cec73e.png

In the example above no rules were specified, so if the user does not exist it is created with default attributes. If the user already exists, the command does nothing.

Set the user name, password, ACL permissions, and enable the user in a single command:

acl setuser user2 on >password ~cached:* +get

image-20220621200715829

Switch to the new user and verify the permissions:

(image)

IO Multithreading

Introduction

IO multithreading means that the network I/O handling in the client-interaction part is multithreaded, not that command execution is. Redis 6 still executes commands on a single thread.

Principle

Redis 6 introduces multithreading, but differently from Memcached, which is multithreaded all the way from I/O handling to data access: Redis uses multiple threads only for reading and writing network data and parsing the protocol, while command execution remains single-threaded. The reason for this design is to avoid the complexity multithreading would bring; concurrency control would otherwise be needed for keys, Lua scripts, transactions, LPUSH/LPOP, and so on. The overall design:

image: https://pic.imgdb.cn/item/62b1b4cd0947543129cfdc38.png

In addition, threaded I/O is not enabled by default; it must be turned on in the configuration file:

io-threads-do-reads yes

io-threads 4
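A toy model of this design (plain Java, no real networking, assumed names): several I/O threads "parse" requests in parallel, but every command is handed to a single-threaded executor, so shared state needs no locks even under concurrent clients:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Toy model of Redis 6's threaded I/O: four I/O threads handle requests
// concurrently, but every command runs on one single-threaded executor,
// so command execution stays serialized and lock-free.
public class ThreadedIoModel {
    private final ExecutorService ioThreads = Executors.newFixedThreadPool(4);   // io-threads 4
    private final ExecutorService commandThread = Executors.newSingleThreadExecutor();
    private int counter = 0; // unsynchronized: only commandThread ever touches it

    Future<?> handleClientRequest() {
        return ioThreads.submit(() -> {
            // ... network read and protocol parsing would happen here, in parallel ...
            try {
                commandThread.submit(() -> counter++).get(); // execution is serialized
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        });
    }

    int run(int requests) throws Exception {
        Future<?>[] fs = new Future<?>[requests];
        for (int i = 0; i < requests; i++) fs[i] = handleClientRequest();
        for (Future<?> f : fs) f.get(); // wait for every request to finish
        ioThreads.shutdown();
        commandThread.shutdown();
        return counter;
    }

    public static void main(String[] args) throws Exception {
        // No increments are lost even though counter is not synchronized.
        System.out.println(new ThreadedIoModel().run(1000)); // 1000
    }
}
```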

Cluster support

Older versions of Redis required a separately installed Ruby environment to build a cluster; Redis 5 integrated redis-trib.rb into redis-cli. The official redis-benchmark tool also supports cluster mode now, stress-testing multiple shards with multiple threads.

Other

Other new features in Redis 6 include:

1. RESP3, the new Redis communication protocol: optimizes communication between server and client

2. Client-side caching: a client cache feature built on the RESP3 protocol. To further improve caching performance, data the client accesses frequently is cached on the client side, reducing TCP network round trips.
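The client-side caching idea can be sketched as a local map that the server invalidates by pushing the changed key back to the client (a conceptual model only; the real feature is negotiated via CLIENT TRACKING over RESP3):

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual model of RESP3 client-side caching: reads are served from a
// local map; when a value changes on the server, an invalidation message
// evicts the stale local copy so the next read goes back to the server.
public class TrackingCacheModel {
    private final Map<String, String> serverData = new HashMap<>(); // stands in for Redis
    private final Map<String, String> localCache = new HashMap<>(); // client-side cache
    int serverHits = 0;

    String get(String key) {
        return localCache.computeIfAbsent(key, k -> {
            serverHits++;              // only a local miss costs a server round trip
            return serverData.get(k);
        });
    }

    void serverSet(String key, String value) {
        serverData.put(key, value);
        localCache.remove(key);        // the server pushes an invalidation for the key
    }

    public static void main(String[] args) {
        TrackingCacheModel c = new TrackingCacheModel();
        c.serverSet("greeting", "hello");
        c.get("greeting");                     // miss: fetched from the server
        c.get("greeting");                     // hit: no network round trip
        System.out.println(c.serverHits);      // 1
        c.serverSet("greeting", "hi");         // write invalidates the cached copy
        System.out.println(c.get("greeting")); // hi, re-fetched after invalidation
    }
}
```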

3. Proxy cluster-proxy mode: the proxy lets a Cluster be accessed the same way as a single instance, lowering the barrier to using Cluster. Note that the proxy does not remove Cluster's functional limits: commands that Cluster does not support, such as multi-key operations across slots, remain unsupported.

4. Modules API: the module API made great progress in Redis 6, since Redis Labs has been using Redis modules from the start to build complex features. Redis can serve as a framework, using modules to build different systems rather than writing them from scratch and then BSD-licensing them. Redis is set to become an open platform for writing all kinds of systems.

Original site

Copyright notice
This article was written by [fate _ zore]. Please keep the original link when reposting.
https://yzsam.com/2022/173/202206221944520759.html