KunlunDB Backup and Recovery
2022-06-22 23:49:00 【KunlunBase Kunlun Database】
Basic concepts of globally consistent physical backup & recovery
Physical backup: a backup of the database's physical files (data files, transaction log files, parameter files). Physical backups are divided into offline backups (cold backups) and online backups (hot backups).
KunlunDB clusters support online backup: the database keeps running during a backup, and application reads and writes are not blocked. Because the backup runs on a slave node of each master-slave pair, the impact on application performance is minimal.

1. Backup and recovery architecture
Backup and recovery targets: the storage cluster and the metadata cluster
Backup and recovery scheduling center: Cluster Manager
Backup data storage: the backup storage pool
Cluster backup execution unit: Node Manager
2. Basic principles
How KunlunDB cluster backup & recovery works
2.1 Backup objects
In a KunlunDB distributed cluster, the metadata cluster stores the node information of the whole cluster, table structure information, transaction information, backup and recovery information, and so on; it is the foundation of normal cluster operation. The metadata cluster uses a one-master-two-slave high-availability architecture.
The storage cluster is responsible for storing business data. Data is distributed across data shards; the storage cluster consists of multiple shards, and each shard has multiple replicas.
A compute node's data is a subset of the metadata cluster's data, and compute nodes themselves are stateless, so they need no separate backup.
A globally consistent backup can cover all data of the entire cluster (all metadata plus all storage cluster data), or only part of the data, for example a single storage shard.
2.2 Globally consistent backup and recovery
Backup: when a backup starts, the backup and recovery manager first obtains the global transaction information of the whole cluster from the metadata cluster, then begins the backup. For each backup object, both the data files and the transaction logs of the target data are backed up. The database files and transaction logs produced during the backup are written to the backup storage pool; once all files have been copied, the backup ends and the backup information is recorded in the metadata cluster. The cluster runs normally throughout the backup, with no need to stop the business.
Recovery: when a recovery starts, the backup and recovery manager obtains the currently available backup information from the metadata cluster, then copies the data from the backup storage pool to the recovery destination. After the data has been copied, transactions are rolled back or rolled forward according to the transaction logs, which restores the whole cluster to a consistent state.
2.3 Backup types
Full backup: backs up all data of the backup object (data files and transaction logs); it is the basis for incremental backups.
Incremental backup: backs up the incremental data and transaction logs produced since the last backup.
Full-cluster backup: backs up the data of the whole cluster (metadata plus all storage cluster data).
Partial backup: backs up only the data of one shard.
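As an illustration only (not from the original article), a partial backup versus a full-cluster backup might be driven with the backup tool described in section 3; the cluster name, shard names, and HDFS address below are placeholder assumptions:

    # Partial backup: back up a single shard to HDFS (all names are placeholders)
    backup -clustername demo_cluster -shardname shard_1 \
           -coldstoragetype hdfs -HdfsNameNodeService hdfs://192.168.0.10:9000

    # Full-cluster backup: presumably one invocation per shard,
    # plus a backup of the metadata cluster
    for s in shard_1 shard_2 shard_3; do
        backup -clustername demo_cluster -shardname "$s" \
               -coldstoragetype hdfs -HdfsNameNodeService hdfs://192.168.0.10:9000
    done

In practice the cluster manager schedules these invocations, so the loop above is only a sketch of the per-shard pattern.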
2.4 Recovery types
Full recovery: restores the entire cluster.
Partial recovery: restores only one shard.
Point-in-time recovery: restores to a given point in time.
Transaction-based recovery: restores to a given transaction number.
3. Execution steps
Backup
3.1 Set the backup policy
Determine the backup objects and the backup type, and prepare the backup target storage.
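For example, if HDFS is used as the backup storage pool (the backup tool's default cold storage type), preparing the target storage may be as simple as creating a directory on HDFS; the path below is a placeholder assumption:

    # Create and verify a backup directory on HDFS (path is an assumption)
    hdfs dfs -mkdir -p /kunlun/backup
    hdfs dfs -ls /kunlun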
3.2 Perform backup
Schedule the backup through the command line or the UI (KunlunDB provides a web interface):
$ backup --help
Usage of backup:
-HdfsNameNodeService string
specify the hdfs name node service, hdfs://ip:port
-backuptype string
back up storage node or 'compute' node,default is 'storage' (default "storage")
-clustername string
name of the cluster to be backuped
-coldstoragetype string
specify the coldback storage type: hdfs .. (default "hdfs")
-etcfile string
path to the etc file of the mysql instance to be backuped,
if port is specified and the related instance is running,
the tool will determine the etcfile path
-port string
the port of mysql or postgresql instance which to be backuped
-shardname string
name of the current shard
-workdir string
where store the backup data locally for temp use (default "./data")
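A sketch of a backup invocation for a running storage shard follows; the cluster name, shard name, port, and HDFS name node address are placeholder assumptions, not values from the original article:

    # Back up one storage shard; with -port set and the instance running,
    # the tool determines the etcfile path by itself (per the help above)
    backup -clustername demo_cluster -shardname shard_1 -port 6001 \
           -backuptype storage \
           -HdfsNameNodeService hdfs://192.168.0.10:9000 \
           -workdir ./data

3.3 Check the backup results and confirm that the backup succeeded.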
Recovery
3.4 Set the recovery policy
Determine the recovery objects and the recovery type, and prepare the recovery target storage.
3.5 Perform recovery
Restore command usage:
$ restore --help
Usage of restore:
-HdfsNameNodeService string
specify the hdfs name node service, hdfs://ip:port
-enable-globalconsistent
whether restore the new mysql under global consistent restrict
-metaclusterconnstr string
current meta cluster connection string e.g. user:pwd@(ip:port)/mysql
-mysqletcfile string
etc file of the mysql which to be restored, if port is provied and mysqld is alive ,no need
-origclustername string
source cluster name to be restored or backuped
-origmetaclusterconnstr string
orig meta cluster connection string e.g. user:pwd@(ip:port)/mysql
-origshardname string
source shard name to be restored
-port string
the port of mysql/postgresql instance which to be restored and needed to be running state
-restoretime string
time point the new mysql restore to
-restoretype string
restore storage node or 'compute' node,default is 'storage' (default "storage")
-workdir string
temporary work path to store the coldback or other type files if needed (default "./data")
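A sketch of a point-in-time restore (one of the recovery types in section 2.4) follows; the cluster and shard names, connection strings, port, HDFS address, and time value are placeholder assumptions, and the time format is a guess:

    # Restore one shard of a backed-up cluster to a point in time
    restore -origclustername demo_cluster -origshardname shard_1 \
            -port 6101 \
            -metaclusterconnstr 'user:pwd@(192.168.0.11:19001)/mysql' \
            -origmetaclusterconnstr 'user:pwd@(192.168.0.11:19001)/mysql' \
            -HdfsNameNodeService hdfs://192.168.0.10:9000 \
            -restoretype storage -enable-globalconsistent \
            -restoretime '2022-06-22 12:00:00' \
            -workdir ./data

3.6 Check the recovery results and confirm that the recovery succeeded.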
4. Cluster backup and recovery demonstration
This section demonstrates cluster backup and recovery through the KunlunDB UI.
Environment information:
The cluster to be backed up consists of one compute node, one shard (each shard consists of one master and two slave nodes), and the metadata cluster (one master, two slaves).

Data status before backup:

Step 1: Start the backup
Backup operation: click the Backup button in the cluster management interface to start the backup.

Step 2: Check the backup status
Depending on the execution time, once the backup has succeeded you will see the message: backupcluster succeed.

Step 3: Restore the cluster
Select a point in time to restore to, then confirm the recovery.
The cluster enters the recovering state:

When the recovery completes, the system creates a new cluster in the available resource area and restores the backed-up data into it.
Recovery state:

The recovered cluster:

Log in to a compute node of the recovered cluster and view the restored data:

After the cluster recovery, the corresponding data has been restored correctly.
The project is open source:
GitHub: https://github.com/zettadb
Gitee: https://gitee.com/zettadb
END