Exporting ES data to a CSV file
2022-06-28 07:37:00 【Courageous steak】
1 Introduction
This post shows how to export data from Elasticsearch (ES) to a CSV file. Efficiency is not a concern for now; the goal is simply to demonstrate one working approach.
2 Python 3
# common/util_es.py
from elasticsearch import Elasticsearch

def connect_elk():
    client = Elasticsearch(
        hosts='http://192.168.56.20:9200',
        http_auth=("elastic", "elastic_password"),
        # sniff the cluster before doing anything
        # sniff_on_start=True,
        # when a node stops responding, refresh the node list and reconnect
        sniff_on_connection_fail=True,
        # refresh the node list every 60 seconds
        sniffer_timeout=60,
    )
    return client
# export script
import csv

# obtain the ES client
from common.util_es import connect_elk

es = connect_elk()

# Query all documents and export them
index = 'blog_rate'
body = {}
item = ["r_id", "a_id"]
# To export only matching documents, pass a query body instead, e.g.:
# body = {
#     "query": {
#         "match": {"name": "Zhang San"},
#     }
# }

def export_csv(index, body, item):
    query = es.search(index=index, body=body, scroll='5m', size=1000)
    # first page of the query results
    results = query['hits']['hits']
    # total number of matching documents
    total = query['hits']['total']["value"]
    # scroll cursor for fetching the remaining results
    scroll_id = query['_scroll_id']
    # page size is 1000, so divide the total by 1000 (not 100)
    for i in range(0, int(total / 1000) + 1):
        # the scroll parameter must be passed here, or an error is raised
        query_scroll = es.scroll(scroll_id=scroll_id, scroll='5m')['hits']['hits']
        results += query_scroll
    with open('./' + index + '.csv', 'w', newline='', encoding="utf_8_sig") as flow:
        csv_writer = csv.writer(flow)
        for res in results:
            row = [res["_source"][field] for field in item]
            csv_writer.writerow(row)
    print('done!')

export_csv(index, body, item)
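The field-extraction and CSV-writing step can be exercised without a live cluster. Below is a minimal sketch: `hits_to_csv` and the sample hits are hypothetical illustrations (not from the author's index), shaped like the `hits` list an ES search response returns.

```python
import csv
import io

def hits_to_csv(hits, fields):
    """Write the selected _source fields of each hit as CSV rows, returned as a string."""
    buf = io.StringIO(newline='')
    writer = csv.writer(buf)
    for hit in hits:
        writer.writerow(hit["_source"][f] for f in fields)
    return buf.getvalue()

# Fake hits shaped like the entries in response['hits']['hits']
sample_hits = [
    {"_source": {"r_id": 1, "a_id": 10}},
    {"_source": {"r_id": 2, "a_id": 20}},
]
print(hits_to_csv(sample_hits, ["r_id", "a_id"]))
```

Separating extraction from I/O like this makes the row logic easy to unit-test before pointing the script at a real index.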
Reference: https://blog.csdn.net/github_27244019/article/details/115351640