Raspberry Pi Self-Built NAS Cloud Disk -- Automatic Data Backup
2022-07-24 12:38:00 【Brother Xing plays with the clouds】
Turn your Raspberry Pi into a safe place for your data.
In the first article of the "Raspberry Pi Self-Built NAS Cloud Disk" series, we discussed the basic steps of setting up a NAS, attached two 1TB hard drives (one for data storage and one for data backup), and mounted the data storage disk on a remote device via the network file system (NFS). This second article in the series covers automatic data backup. Automatic backups keep your data safe, make it easy to recover data after a hardware failure, and spare you the trouble caused by accidentally deleting or mishandling files.
Backup strategy
Let's start by devising a backup strategy for our small NAS. I suggest backing up your data on a daily schedule, at a fixed time that does not interfere with normal access to the NAS; in particular, the backup should avoid the times when files are being written to the NAS. For example, you could run the backup at 2 a.m. every day.
You also need to decide how long to keep each daily backup, because without a limit the storage space will fill up quickly. Keeping daily backups for one week is usually enough; if something goes wrong with the data, you can easily restore it from a recent backup. But what if you need to go back further in time? Keep each Monday's backup for a month, and keep one monthly backup for a longer period, say a year; keep one yearly backup for even longer, for example five years.
Over five years, this strategy accumulates a number of backups on the backup disk:
- 7 daily backups per week
- 4 (approximately) weekly backups per month
- 12 monthly backups per year
- 5 yearly backups over five years
Remember that our backup disk is the same size as the data disk (1TB each). How can more than 10 backups of a potentially full 1TB data disk fit on a backup disk that is itself only 1TB? If you create full backups, they obviously can't. Instead, you need to create incremental backups, where each backup builds on the data of the previous one. An incremental backup does not double the required storage space from one day to the next; it only takes up a little additional space each day.
Here is my situation: my NAS has been in operation since August 2016 and holds 20 backups. Currently, I store 406GB of files on the data disk, and the backups occupy 726GB on the backup disk. Of course, how much space the backups use depends heavily on how often your data changes, but as you can see, the incremental backups do not need anywhere near the space that 20 full backups would. Nevertheless, as time goes on, 1TB may no longer be enough space for the backups. Once your data grows close to the 1TB limit (or whatever your backup disk capacity is), you should move your backups to a larger backup disk.
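To keep an eye on how close you are to that limit, check the disk usage from time to time. A minimal sketch, assuming the /nas/data and /nas/backup mount points from the first article:

# Show free space on the data and backup disks
$ df -h /nas/data /nas/backup
# Show how much space each backup directory adds (hard-linked data is counted only once)
$ du -shc /nas/backup/*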
Backing up data with rsync
You can create a full backup with the rsync command-line tool:
$ rsync -a /nas/data/ /nas/backup/2018-08-01
This command makes a complete copy of the data on the data disk mounted under /nas/data/ and stores the backup in the /nas/backup/2018-08-01 directory. The -a flag runs rsync in archive mode, which preserves all kinds of metadata, such as modification dates, permissions, owners, and symbolic links.
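If you want to see what would be copied before anything is written to the backup disk, you can do a dry run first; this is an optional extra, not part of the original workflow:

# Preview the transfer without modifying the backup disk
$ rsync -a --dry-run -v /nas/data/ /nas/backup/2018-08-01
# Run the real backup once the output looks right
$ rsync -a /nas/data/ /nas/backup/2018-08-01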
Now that you have created a full initial backup on August 1st, on August 2nd you will create the first incremental backup:
$ rsync -a --link-dest /nas/backup/2018-08-01/ /nas/data/ /nas/backup/2018-08-02
This command creates another backup of the data in /nas/data, this time under /nas/backup/2018-08-02. The --link-dest parameter points to the directory of a previous backup. rsync compares the new backup against /nas/backup/2018-08-01 and copies only the files that have been modified; unmodified files are not copied again, but are hard-linked to their counterparts in the previous backup.
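As a side note, you do not have to type those dates by hand. A minimal sketch of the same incremental backup with the directory names derived from GNU date (the TODAY and YESTERDAY variables are just illustrative names, assuming a backup was taken yesterday):

# Derive today's and yesterday's backup directory names from the current date
TODAY=$(date +%Y-%m-%d)
YESTERDAY=$(date -d yesterday +%Y-%m-%d)
rsync -a --link-dest /nas/backup/${YESTERDAY}/ /nas/data/ /nas/backup/${TODAY}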
When a backup contains hard-linked files, you generally won't notice any difference between a hard link and an ordinary copy. They behave exactly the same, and if you delete one of them, the other still exists: they are simply two directory entries pointing to the same file. Here is an example:
The left box shows the state of the data after the second backup. The box in the middle is yesterday's backup, which contained only the picture file1.jpg but not file2.txt. The box on the right is today's incremental backup. The incremental backup command copied the newly created file2.txt; since file1.jpg has not been modified since yesterday, today's backup only contains a hard link to it, which takes up hardly any additional space on the disk.
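You can verify this behaviour yourself: when two backup directories hard-link the same file, both entries share one inode and the link count is greater than one. A minimal sketch, reusing the file name from the example above:

# -i prints the inode number; identical inodes mean both entries point to the same data on disk
$ ls -li /nas/backup/2018-08-01/file1.jpg /nas/backup/2018-08-02/file1.jpg
# du counts hard-linked data only once, so the second backup adds very little
$ du -shc /nas/backup/2018-08-01 /nas/backup/2018-08-02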
Automated backup
You certainly don't want to type the backup command by hand every morning. You can create a scheduled task that calls the following script to run the backup automatically.
#!/bin/bash
TODAY=$(date +%Y-%m-%d)
DATADIR=/nas/data/
BACKUPDIR=/nas/backup/
SCRIPTDIR=/nas/data/backup_scripts
# The most recent backup directory is used as the reference for --link-dest
LASTDAYPATH=${BACKUPDIR}/$(ls ${BACKUPDIR} | tail -n 1)
TODAYPATH=${BACKUPDIR}/${TODAY}
if [[ ! -e ${TODAYPATH} ]]; then
    mkdir -p ${TODAYPATH}
fi

rsync -a --link-dest ${LASTDAYPATH} ${DATADIR} ${TODAYPATH}

${SCRIPTDIR}/deleteOldBackups.sh
The first block defines the data path, the backup path, the script path, and the paths of the previous and today's backups. The middle part calls the rsync command. The last line executes the deleteOldBackups.sh script, which cleans up expired backups that are no longer needed. If you prefer not to call deleteOldBackups.sh every time, you can also run it manually.
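Assuming you save this script as /nas/data/backup_scripts/daily.sh (the path used by the scheduled task below), make it executable and run it once by hand before scheduling it; sudo may be needed if the backup disk is writable only by root:

$ chmod +x /nas/data/backup_scripts/daily.sh
# Run it once manually and check that a new dated directory appears on the backup disk
$ sudo /nas/data/backup_scripts/daily.sh
$ ls /nas/backup/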
The following is a simple but complete example script that implements the backup strategy discussed above.
#!/bin/bash
BACKUPDIR=/nas/backup/

# Keep the first backup of this year and of each of the previous five years
function listYearlyBackups() {
    for i in 0 1 2 3 4 5
        do ls ${BACKUPDIR} | egrep "$(date +%Y -d "${i} year ago")-[0-9]{2}-[0-9]{2}" | sort -u | head -n 1
    done
}

# Keep the first backup of this month and of each of the previous twelve months
function listMonthlyBackups() {
    for i in 0 1 2 3 4 5 6 7 8 9 10 11 12
        do ls ${BACKUPDIR} | egrep "$(date +%Y-%m -d "${i} month ago")-[0-9]{2}" | sort -u | head -n 1
    done
}

# Keep the backups of the last five Mondays
function listWeeklyBackups() {
    for i in 0 1 2 3 4
        do ls ${BACKUPDIR} | grep "$(date +%Y-%m-%d -d "last monday -${i} weeks")"
    done
}

# Keep the backups of the last seven days
function listDailyBackups() {
    for i in 0 1 2 3 4 5 6
        do ls ${BACKUPDIR} | grep "$(date +%Y-%m-%d -d "-${i} day")"
    done
}

function getAllBackups() {
    listYearlyBackups
    listMonthlyBackups
    listWeeklyBackups
    listDailyBackups
}

function listUniqueBackups() {
    getAllBackups | sort -u
}

# Everything on the backup disk that is not in the keep list is a candidate for deletion
function listBackupsToDelete() {
    ls ${BACKUPDIR} | grep -v -e "$(echo -n $(listUniqueBackups) | sed "s/ /\\\|/g")"
}

cd ${BACKUPDIR}
listBackupsToDelete | while read file_to_delete; do
    rm -rf ${file_to_delete}
done
This script first lists all the backups that must be kept according to your backup strategy, then deletes all backup directories that are no longer needed.
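Because the script removes directories with rm -rf, it is worth previewing its decisions before letting cron run it. Two quick checks (my own suggestions, not part of the original script):

# Confirm that GNU date on your system understands the relative date expressions used above
$ date +%Y-%m-%d -d "last monday -2 weeks"
# Preview deletion candidates by temporarily replacing the rm -rf loop with an echo, e.g.:
#     listBackupsToDelete | while read file_to_delete; do echo "would delete ${file_to_delete}"; done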
Now create a scheduled task to run the backup script. Open the root user's crontab by running crontab -e as root (for example, sudo crontab -e) and add the following line; it creates a scheduled task that runs /nas/data/backup_scripts/daily.sh at 2 a.m. every day.
0 2 * * * /nas/data/backup_scripts/daily.sh
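To confirm that the job is registered and actually runs, you can use the standard cron tooling (not specific to this article):

# List the root crontab and make sure the new line is present
$ sudo crontab -l
# On Raspberry Pi OS / Raspbian, cron logs each run to the system log
$ grep CRON /var/log/syslog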
For more on creating scheduled tasks, refer to the article on scheduling tasks with cron.
You can also take the following measures to harden your backup strategy against accidental deletion or corruption of the backup data (a sketch of both is shown after this list):

- When no backup job is running, unmount your backup disk or mount it read-only;
- Use a remote server as your backup target, so the data is synchronized over the internet.
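A minimal sketch of both measures, assuming the backup disk is mounted at /nas/backup and that backupserver is a placeholder for a remote host reachable over SSH:

# Keep the backup disk read-only outside the backup window ...
$ sudo mount -o remount,ro /nas/backup
# ... and remount it writable only while the backup runs
$ sudo mount -o remount,rw /nas/backup

# Or push the data to a remote server over SSH instead of a local disk
$ rsync -a /nas/data/ backupserver:/nas/backup/$(date +%Y-%m-%d)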
The backup strategy in this article covers the data that I consider valuable; feel free to adapt it to your own needs.
In the third article of the "Raspberry Pi Self-Built NAS Cloud Disk" series, I will discuss Nextcloud. Nextcloud offers a more convenient way to access the data on your NAS, supports offline use, and lets you synchronize your data with its client applications.