Zabbix 6.0 Upgrade Guide - How to Upgrade the Database Along with It?
2022-06-27 22:48:00 【Zabbix】
Zhang Yu, ZCP Senior Certified Engineer
Zabbix 6.0 LTS has been officially released for some time now, and many of you have probably already set up a fresh test environment to try it out. When upgrading a production environment, however, you need to keep your existing data. So how do you upgrade the database along with it?
Starting with Zabbix 6.0, primary keys are used for all tables in new installations. This section provides instructions on how to manually upgrade the history tables of an existing installation to primary keys.
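If you are not sure whether the history tables in your current installation already have primary keys, a quick check can be run against information_schema (a minimal sketch; it assumes the Zabbix database is named zabbix):
-- List primary-key constraints on the history* tables (assumes the schema is named 'zabbix')
SELECT TABLE_NAME, CONSTRAINT_NAME
FROM information_schema.TABLE_CONSTRAINTS
WHERE TABLE_SCHEMA = 'zabbix'
  AND CONSTRAINT_TYPE = 'PRIMARY KEY'
  AND TABLE_NAME LIKE 'history%';
If no rows are returned for the history tables, they still use the old schema without primary keys.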
MySQL 5.7+/8.0+
Rename the old tables and create new tables by running the history_pk_prepare.sql script.
Path in a binary package installation:
/usr/share/doc/zabbix-sql-scripts/mysql/history_pk_prepare.sql
Path in the source package:
zabbix-6.0.0/database/mysql/history_pk_prepare.sql
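For example, on a binary package installation the script can be applied with the standard mysql client (a sketch; the database name, user, and password may differ in your environment):
mysql -uzabbix -p<password> zabbix < /usr/share/doc/zabbix-sql-scripts/mysql/history_pk_prepare.sql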
Export and import the data:
mysqlsh must be installed. mysqlsh (https://dev.mysql.com/doc/mysql-shell/8.0/en/mysql-shell-install-linux-quick.html) must be able to connect to the database. If the connection is made through a socket, you may need to explicitly specify the path to it.
mysqlsh -uroot -S /run/mysqld/mysqld.sock --no-password -Dzabbix
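The table import below requires local_infile to be enabled on the server. A minimal sketch of checking and, if necessary, enabling it at runtime (this assumes sufficient privileges; it can also be set in the [mysqld] section of the configuration file):
-- Check the current value; 1/ON means loading local CSV files is allowed
SELECT @@GLOBAL.local_infile;
-- Enable it for the running server (requires SUPER or SYSTEM_VARIABLES_ADMIN)
SET GLOBAL local_infile = 1;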
Run the following (this requires the server parameter local_infile = ON, as noted above; the CSVPATH value can be adjusted if needed):
CSVPATH="/var/lib/mysql-files";
util.exportTable("history_old", CSVPATH + "/history.csv", { dialect: "csv" });
util.importTable(CSVPATH + "/history.csv", {"dialect": "csv", "table": "history" });
util.exportTable("history_uint_old", CSVPATH + "/history_uint.csv", { dialect: "csv" });
util.importTable(CSVPATH + "/history_uint.csv", {"dialect": "csv", "table": "history_uint" });
util.exportTable("history_str_old", CSVPATH + "/history_str.csv", { dialect: "csv" });
util.importTable(CSVPATH + "/history_str.csv", {"dialect": "csv", "table": "history_str" });
util.exportTable("history_log_old", CSVPATH + "/history_log.csv", { dialect: "csv" });
util.importTable(CSVPATH + "/history_log.csv", {"dialect": "csv", "table": "history_log" });
util.exportTable("history_text_old", CSVPATH + "/history_text.csv", { dialect: "csv" });
util.importTable(CSVPATH + "/history_text.csv", {"dialect": "csv", "table": "history_text" });
Verify that each step completed successfully.
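For example, one simple sanity check before dropping anything is to compare row counts between each old table and its new counterpart (a minimal sketch; the counts should match if the import finished without skipped rows):
-- Repeat for history_uint, history_str, history_log and history_text
SELECT
  (SELECT COUNT(*) FROM history_old) AS old_rows,
  (SELECT COUNT(*) FROM history)     AS new_rows;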
Delete the old tables:
DROP TABLE history_old;
DROP TABLE history_uint_old;
DROP TABLE history_str_old;
DROP TABLE history_log_old;
DROP TABLE history_text_old;
MySQL < 5.7 or MariaDB (or when mysqlsh cannot be used for some reason)
This option is slower and more time-consuming; use it only when there is a reason not to use mysqlsh.
Rename the old tables and create new tables by executing history_pk_prepare.sql:
mysql -uzabbix -p<password> zabbix < /usr/share/doc/zabbix-sql-scripts/mysql/history_pk_prepare.sql
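To confirm that the script did what was expected, a quick check (a minimal sketch) is to list the history tables; after the script runs you should see both the new history* tables and the renamed history*_old tables:
mysql> SHOW TABLES LIKE 'history%';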
Export and import the data:
Check whether import/export is enabled only for files in a specific path:
mysql> SELECT @@secure_file_priv;
+-----------------------+
| @@secure_file_priv |
+-----------------------+
| /var/lib/mysql-files/ |
+-----------------------+
If the value is a directory path, export/import operations are only allowed for files in that directory; in this case, the file paths in the queries below should be adjusted accordingly. Alternatively, secure_file_priv can be disabled (set to an empty string) for the duration of the upgrade. If the value is empty, files can be exported/imported from any location.
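If you decide to disable secure_file_priv for the duration of the upgrade, keep in mind that it is a read-only variable: a minimal sketch, assuming it is changed in the [mysqld] section of the configuration file followed by a server restart (restore the original value once the upgrade is done):
[mysqld]
# An empty value removes the path restriction on SELECT ... INTO OUTFILE / LOAD DATA INFILE
secure_file_priv=""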
*** max_execution_time should be disabled before exporting data, to avoid timeouts during the export: ***
SET @@max_execution_time=0;
SELECT * INTO OUTFILE '/var/lib/mysql-files/history.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_old;
LOAD DATA INFILE '/var/lib/mysql-files/history.csv' IGNORE INTO TABLE history FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';
SELECT * INTO OUTFILE '/var/lib/mysql-files/history_uint.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_uint_old;
LOAD DATA INFILE '/var/lib/mysql-files/history_uint.csv' IGNORE INTO TABLE history_uint FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';
SELECT * INTO OUTFILE '/var/lib/mysql-files/history_str.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_str_old;
LOAD DATA INFILE '/var/lib/mysql-files/history_str.csv' IGNORE INTO TABLE history_str FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';
SELECT * INTO OUTFILE '/var/lib/mysql-files/history_log.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_log_old;
LOAD DATA INFILE '/var/lib/mysql-files/history_log.csv' IGNORE INTO TABLE history_log FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';
SELECT * INTO OUTFILE '/var/lib/mysql-files/history_text.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_text_old;
LOAD DATA INFILE '/var/lib/mysql-files/history_text.csv' IGNORE INTO TABLE history_text FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';
Verify that each step completed successfully (for example, by comparing row counts of the old and new tables as shown earlier).
Delete the old tables:
DROP TABLE history_old;
DROP TABLE history_uint_old;
DROP TABLE history_str_old;
DROP TABLE history_log_old;
DROP TABLE history_text_old;
Performance tuning
The following parameter settings can improve import performance in both of the above cases:
*** Increase the bulk_insert_buffer_size buffer in the [mysqld] section of the configuration file, or set it with SET before importing: ***
[mysqld]
bulk_insert_buffer_size=256M
mysql cli > SET SESSION bulk_insert_buffer_size= 1024 * 1024 * 256;
mysql cli > ... import queries ...
See "Optimizing InnoDB bulk data loading" in the MySQL documentation (MySQL 5.7, MySQL 8.0).
*** Disable binary logging (depending on your setup): ***
mysql cli > SET SESSION SQL_LOG_BIN=0;
mysql cli > ... import queries ...
Finally, I wish everyone a smooth upgrade. For a more detailed database upgrade procedure, see: https://www.zabbix.com/documentation/current/en/manual/appendix/install/db_primary_keys#mysql