[Cloud co-creation] ModelBox: draw your own painting in mid-air
2022-06-24 19:59:00 【HuaweiCloud】
ModelBox is an AI application development and running framework for device-edge-cloud scenarios, and more and more people are learning it. In this article, we use ModelBox to paint in mid-air and draw our own pictures!
1. What is ModelBox
ModelBox is an AI application development and running framework for device-edge-cloud scenarios. It provides a pipeline-based parallel execution flow that helps AI developers go quickly from model files to a running online AI inference application, lowering the barrier to deploying AI algorithms while giving applications high stability and performance.
1.1 ModelBox features
1. Easy to develop
Visual orchestration of AI inference services, modular functions, a rich component library, and multi-language support (C++, Python).
2. Easy to integrate
Components for connecting to cloud services are built in, making cloud integration easier.
3. High performance, high reliability
Pipelines run concurrently, data and compute are scheduled intelligently, and resources are managed at fine granularity, so services run more efficiently.
4. Heterogeneous hardware and software
Support for heterogeneous hardware such as CPU, GPU, and NPU makes resource usage more convenient and efficient.
5. Full-scenario coverage
Covers video, audio, text, and NLP scenarios; service-oriented customization, easier device-cloud integration, and seamless exchange of data between device and cloud.
6. Easy to maintain
Real-time, visual monitoring of service status and of application and component performance makes optimization easier.
1.2 Problems ModelBox solves
In today's AI application development, after a model is trained, multiple models and the application logic must be chained together into an AI application and released online as a service or app. Throughout this process, developers face complex application-engineering problems.
2. A development board adapted to the ModelBox framework: easier for developers to get started
2.1 Why use a development board
Deploying AI applications on the device side lets you:
1. Quickly learn the hardware and software of the system
2. Learn the device-side AI application development workflow
3. Land cloud-side AI applications on the device side
The ModelBox device-cloud collaboration AI development kit: an AI development board with a lower barrier to entry.
Adapted to the ModelBox framework, it supports visual orchestration of AI inference services, provides 0.8 TOPS of compute, and offers one-click AI skill deployment so AI applications get up and running quickly. A strong developer community helps developers customize AI algorithms for their own scenarios.
2.2 AI application development modes
① API mode
② Graph orchestration mode
2.3 Basic concepts in ModelBox
- Flow graph: a directed graph that expresses the application logic and drives how ModelBox executes; it is written in the Graphviz DOT language (see the sketch after this list).
- Flowunit: a vertex in the flow graph, the basic building block of an application and ModelBox's unit of execution; flowunits are the components developers mainly write.

The overall ModelBox application development process:

- Flow graph design: define the business process, split it into flowunits, and sort out how data passes between them.
- Flowunit development: use the common flowunits provided by the framework and implement your own business flowunits.
- Run and test: exercise the application with images, video files, live video streams, and so on.
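To make the flow-graph idea concrete, here is an illustrative DOT sketch of the hand-painting pipeline built later in this article. The flowunit names (hand_detection, yolox_post, extract_roi, pose_detection, painting) match the project's units, but the exact edges and port syntax are our assumption, not the project's actual graph file:

digraph hand_painting {
    video_input -> hand_detection -> yolox_post
    yolox_post:has_hand -> extract_roi -> pose_detection -> painting -> video_output
    yolox_post:no_hand -> video_output
}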
3. Development case
Case link: RK3568 model conversion and validation case (huaweicloud.com)
This case takes the yolox_nano pre-trained model as an example and walks through the whole Pytorch -> onnx -> rknn model conversion and validation process.
3.1 Setting up the development environment
There are two ways to develop ModelBox AI applications with the development board. The first is to connect a display, keyboard, and mouse to the board, install the Ubuntu desktop, and develop directly on the board. The second is to use a remote connection tool (such as Remote-SSH in VS Code) to log in to the board from a PC. We recommend the second approach, because on the PC you can use a more capable and friendlier IDE.
3.1.1 Configuring the network
To connect to the development board, the PC needs to know its IP address, but the board has no fixed IP by default. We provide ModelBox PC Tool, which automatically configures an IP for the board and also makes it easy to push and pull video streams during the inference stage.
PC Tool is located in the connect_wizard directory of the SDK:
Double-click connect_wizard.exe. The page shows two ways to connect the development board; here we connect it with a network cable:
Follow the prompts to unplug or plug in the network cable:
After a short wait you reach the third step. The development board has now been set to the default IP 192.168.2.111, and the PC can log in over SSH using this IP:
3.1.2 Connecting to the development board remotely
We recommend using VS Code on the PC to connect to the development board remotely and operate the device.
For connecting with VS Code, refer to the ModelBox device-cloud collaboration AI development kit (RK3568) getting-started guide. The guide also describes how to register the development board with the HiLens console for more convenient online management.
3.2 Converting the YOLOX pre-trained model to onnx
3.2.1 Preparing the model
Pull the YOLOX code:
!git clone https://github.com/Megvii-BaseDetection/YOLOX.git
cd YOLOX
Set up the YOLOX runtime environment:
!pip install torch==1.9.0 thop loguru torchvision tabulate opencv_python
Obtain the yolox_nano pre-trained model as described in the README:
Model | size | mAPval | Params (M) | FLOPs (G) | weights
----- | ---- | ------ | ---------- | --------- | -------
YOLOX-Nano | 416 | 25.8 | 0.91 | 1.08 | github release (see the wget command below)
!wget https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_nano.pth
Load the model from its configuration file:
from yolox.exp import get_exp
import torch

exp = get_exp("./exps/default/yolox_nano.py", None)
model = exp.get_model()
ckpt_file = "yolox_nano.pth"
ckpt = torch.load(ckpt_file, map_location="cpu")
if "model" in ckpt:
    ckpt = ckpt["model"]
model.load_state_dict(ckpt)
model.head.decode_in_inference = False
In YOLOX/yolox/models/network_blocks.py you can see that the model's Focus layer is defined as:
import torch.nn as nn

class Focus(nn.Module):
    """Focus width and height information into channel space."""

    def __init__(self, in_channels, out_channels, ksize=1, stride=1, act="silu"):
        super().__init__()
        self.conv = BaseConv(in_channels * 4, out_channels, ksize, stride, act=act)

    def forward(self, x):
        # shape of x (b,c,w,h) -> y(b,4c,w/2,h/2)
        patch_top_left = x[..., ::2, ::2]
        patch_top_right = x[..., ::2, 1::2]
        patch_bot_left = x[..., 1::2, ::2]
        patch_bot_right = x[..., 1::2, 1::2]
        x = torch.cat(
            (
                patch_top_left,
                patch_bot_left,
                patch_top_right,
                patch_bot_right,
            ),
            dim=1,
        )
        return self.conv(x)
The stride-2 slicing operations here are not friendly to the later conversion to rknn format, so we replace the slicing with convolutions:
import numpy as np

class FocusInfer(nn.Module):
    def __init__(self, conv):
        super().__init__()
        self.conv = conv
        # Each grouped 2x2 stride-2 convolution below picks one pixel out of every
        # 2x2 patch, reproducing the four slicing operations of Focus.
        self.top_left = nn.Conv2d(3, 3, 2, 2, groups=3, bias=False)
        top_left_weight = torch.Tensor(np.array([[1, 0], [0, 0]]).reshape(1, 1, 2, 2).repeat(3, 0))
        self.top_left.weight = torch.nn.Parameter(top_left_weight)
        self.top_right = nn.Conv2d(3, 3, 2, 2, groups=3, bias=False)
        top_right_weight = torch.Tensor(np.array([[0, 1], [0, 0]]).reshape(1, 1, 2, 2).repeat(3, 0))
        self.top_right.weight = torch.nn.Parameter(top_right_weight)
        self.bot_left = nn.Conv2d(3, 3, 2, 2, groups=3, bias=False)
        bot_left_weight = torch.Tensor(np.array([[0, 0], [1, 0]]).reshape(1, 1, 2, 2).repeat(3, 0))
        self.bot_left.weight = torch.nn.Parameter(bot_left_weight)
        self.bot_right = nn.Conv2d(3, 3, 2, 2, groups=3, bias=False)
        bot_right_weight = torch.Tensor(np.array([[0, 0], [0, 1]]).reshape(1, 1, 2, 2).repeat(3, 0))
        self.bot_right.weight = torch.nn.Parameter(bot_right_weight)

    def forward(self, x):
        top_left = self.top_left(x)
        top_right = self.top_right(x)
        bot_left = self.bot_left(x)
        bot_right = self.bot_right(x)
        # Concatenate in the same channel order as Focus.forward, then apply the shared conv
        x = torch.cat((top_left, bot_left, top_right, bot_right), dim=1)
        return self.conv(x)
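As a quick sanity check (our addition, not part of the original case), you can confirm on random input that FocusInfer reproduces the slicing-based Focus exactly, since both share the same BaseConv:

from yolox.models.network_blocks import Focus

focus = Focus(3, 16, ksize=3).eval()   # original slicing-based implementation
focus_infer = FocusInfer(focus.conv)   # convolution-based replacement sharing the same conv

x = torch.randn(1, 3, 288, 512)
with torch.no_grad():
    diff = (focus(x) - focus_infer(x)).abs().max().item()
print(f"max abs difference: {diff:.2e}")  # expected to be ~0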
Similarly, the nn.SiLU operation is unfriendly to the subsequent conversion, so we replace it with the SiLU implementation from network_blocks:
from yolox.models.network_blocks import SiLU
from yolox.utils import replace_module

focus_infer = FocusInfer(model.backbone.backbone.stem.conv)
model.backbone.backbone.stem = focus_infer
model = replace_module(model, nn.SiLU, SiLU)
model.eval().to("cpu")
print(model.backbone.backbone.stem)
The printout shows the operations were replaced successfully. Next, export the pre-trained model to onnx format:
dummy_input = torch.randn(1, 3, 288, 512)
torch.onnx._export(
    model,
    dummy_input,
    "yolox_nano.onnx",
    opset_version=12,
)
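Optionally (this step is not in the original case), you can sanity-check the exported file with the onnx package, assuming it is installed in the environment, before converting it:

import onnx

onnx_model = onnx.load("yolox_nano.onnx")
onnx.checker.check_model(onnx_model)  # raises if the exported graph is malformed
print([i.name for i in onnx_model.graph.input])
print([o.name for o in onnx_model.graph.output])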
Install rknn-toolkit2
In the steps above we converted the pre-trained model to onnx format. Next, return to the parent directory and install rknn-toolkit2, the model conversion tool for RK3568:
cd ..
import moxing as mox
import os

if not os.path.exists("rknn_toolkit2-1.2.0_f7bb160f-cp36-cp36m-linux_x86_64.whl"):
    mox.file.copy('obs://liuyu291/Notebook/rk3568-model/rknn_toolkit2-1.2.0_f7bb160f-cp36-cp36m-linux_x86_64.whl',
                  'rknn_toolkit2-1.2.0_f7bb160f-cp36-cp36m-linux_x86_64.whl')
Because rknn-toolkit2 only provides an installation package for a Python 3.6 environment, we create a Python 3.6 kernel.
First, use conda to create a Python 3.6 virtual environment named py36:
!/home/ma-user/anaconda3/bin/conda create -n py36 python=3.6 -y
Next, install the dependency package:
!/home/ma-user/anaconda3/envs/py36/bin/pip install ipykernel
Add a kernel configuration file so the virtual environment is recognized in the notebook:
import json
import os
data = {
"display_name": "Python36",
"env": {
"PATH": "/home/ma-user/anaconda3/envs/py36/bin:/home/ma-user/anaconda3/envs/python-3.7.10/bin:/modelarts/authoring/notebook-conda/bin:/opt/conda/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/home/ma-user/modelarts/ma-cli/bin:/home/ma-user/modelarts/ma-cli/bin:/home/ma-user/anaconda3/envs/PyTorch-1.4/bin"
},
"language": "python",
"argv": [
"/home/ma-user/anaconda3/envs/py36/bin/python",
"-m",
"ipykernel",
"-f",
"{connection_file}"
]
}
if not os.path.exists("/home/ma-user/anaconda3/share/jupyter/kernels/py36/"):
    os.mkdir("/home/ma-user/anaconda3/share/jupyter/kernels/py36/")
with open('/home/ma-user/anaconda3/share/jupyter/kernels/py36/kernel.json', 'w') as f:
    json.dump(data, f, indent=4)
When this finishes, you can select the Python36 environment from the kernel list in the upper-right corner (if it is not there, wait a moment or refresh):
After choosing the kernel, check the python and pip versions:
!python -V
!pip -V
Next, install the necessary dependencies and then rknn-toolkit2 itself:
!pip install numpy
!pip install rknn_toolkit2-1.2.0_f7bb160f-cp36-cp36m-linux_x86_64.whl
Inspect rknn-toolkit2 to confirm it installed correctly:
!pip show rknn-toolkit2
After installation you can use the toolkit for model conversion and validation. The rknn-toolkit2 documentation is at https://github.com/rockchip-linux/rknn-toolkit2/tree/master/doc
%%python
from rknn.api import RKNN

rknn = RKNN(verbose=False)

print('--> Config model')
rknn.config(mean_values=[[0., 0., 0.]], std_values=[[1., 1., 1.]])
print('done')

print('--> Loading model')
ret = rknn.load_onnx("YOLOX/yolox_nano.onnx")
if ret != 0:
    print('Load failed!')
    exit(ret)
print('done')

print('--> Building model')
ret = rknn.build(do_quantization=False)
if ret != 0:
    print('Build failed!')
    exit(ret)
print('done')

print('--> Export RKNN model')
ret = rknn.export_rknn("yolox_nano_288_512.rknn")
if ret != 0:
    print('Export failed!')
    exit(ret)
print('done')

rknn.release()
In the above steps, we have successfully achieved yolox_nano_288_512.rknn Model , The next in rknn-toolkit2 The model is verified in the built-in simulator .
Install the packages the demo depends on:
!pip install loguru thop tabulate pycocotools
%%python
import sys

import cv2
import numpy as np

sys.path.append("YOLOX")
from rknn.api import RKNN
from yolox.utils import demo_postprocess, multiclass_nms, vis
from yolox.data.data_augment import preproc as preprocess
from yolox.data.datasets import COCO_CLASSES
rknn = RKNN(False)
rknn.config(mean_values=[[0., 0., 0.]], std_values=[[1., 1., 1.]])
ret = rknn.load_onnx("YOLOX/yolox_nano.onnx")
ret = rknn.build(do_quantization=False)
ret = rknn.init_runtime()
img = cv2.imread("YOLOX/assets/dog.jpg")
start_img, ratio = preprocess(img, (288, 512), swap=(0, 1, 2))
outputs = rknn.inference(inputs=[start_img])
outputs = outputs[0].squeeze()
predictions = demo_postprocess(outputs, (288, 512))
boxes = predictions[:, :4]
scores = predictions[:, 4:5] * predictions[:, 5:]
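# Convert (cx, cy, w, h) box predictions to (x1, y1, x2, y2) corner format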
boxes_xyxy = np.ones_like(boxes)
boxes_xyxy[:, 0] = boxes[:, 0] - boxes[:, 2] / 2.
boxes_xyxy[:, 1] = boxes[:, 1] - boxes[:, 3] / 2.
boxes_xyxy[:, 2] = boxes[:, 0] + boxes[:, 2] / 2.
boxes_xyxy[:, 3] = boxes[:, 1] + boxes[:, 3] / 2.
boxes_xyxy /= ratio
dets = multiclass_nms(boxes_xyxy, scores, nms_thr=0.45, score_thr=0.5)
if dets is not None:
    final_boxes = dets[:, :4]
    final_scores, final_cls_inds = dets[:, 4], dets[:, 5]
    img = vis(img, final_boxes, final_scores, final_cls_inds, conf=0.5, class_names=COCO_CLASSES)
cv2.imwrite("result.jpg", img)
rknn.release()
The result image result.jpg has been saved to the current directory. View the result:
import cv2
from matplotlib import pyplot as plt
%matplotlib inline
img = cv2.imread("result.jpg")
plt.imshow(img[:,:,::-1])
plt.show()
The rknn model's inference result is correct, which completes the model conversion and validation.
To download the converted model, simply right-click it and download.
3.2.2 General-purpose flowunits
A Python general-purpose flowunit needs its own toml configuration file that specifies the unit's basic attributes. In general, the directory structure is:
[FlowUnitName]
|---[FlowUnitName].toml
|---[FlowUnitName].py
|---xxx.py
Compared with an inference unit, a general-purpose unit has not only a configuration file but also the concrete function code to fill in. Taking yolox_post as an example, first the flowunit configuration file:
# Copyright (c) Huawei Technologies Co., Ltd. 2022. All rights reserved.

# Basic config
[base]
name = "yolox_post"       # The FlowUnit name
device = "cpu"            # The flowunit runs on cpu
version = "1.0.0"         # The version of the flowunit
type = "python"           # Fixed value, do not change
description = "description"              # The description of the flowunit
entry = "yolox_post@yolox_postFlowUnit"  # Python flowunit entry function
group_type = "generic"    # flowunit group attribution, change as input/output/image ...

# Flowunit Type
stream = false            # Whether the flowunit is a stream flowunit
condition = true          # Whether the flowunit is a condition flowunit
collapse = false          # Whether the flowunit is a collapse flowunit
collapse_all = false      # Whether the flowunit will collapse all the data
expand = false            # Whether the flowunit is an expand flowunit

# The default Flowunit config
[config]
item = "value"

# Input ports description
[input]
[input.input1]            # Input port number, the format is input.input[N]
name = "in_image"         # Input port name
type = "uint8"            # Input port data type, e.g. float or uint8
device = "cpu"            # Input buffer type

[input.input2]            # Input port number, the format is input.input[N]
name = "in_feat"          # Input port name
type = "uint8"            # Input port data type, e.g. float or uint8
device = "cpu"            # Input buffer type

# Output ports description
[output]
[output.output1]          # Output port number, the format is output.output[N]
name = "has_hand"         # Output port name
type = "float"            # Output port data type, e.g. float or uint8

[output.output2]          # Output port number, the format is output.output[N]
name = "no_hand"          # Output port name
type = "float"            # Output port data type, e.g. float or uint8
Basic config holds basic settings such as the unit name, and Flowunit Type declares the type of the flowunit. yolox_post is a condition unit, so condition is true; there are also expand, collapse, and other attributes. You can find more cases under the ModelBox board in AI Gallery.
config holds the attributes the unit needs to be configured with. This unit needs the feature-map size, thresholds, and similar information, so modify config in the configuration file to:
[config]
net_h = 320
net_w = 320
num_classes = 2
conf_threshold = 0.5
iou_threshold = 0.5
In addition, the input and output type fields may need adjusting to match the actual logic:
# Input ports description
[input]
[input.input1]            # Input port number, the format is input.input[N]
name = "in_image"         # Input port name
type = "uint8"            # Input port data type, e.g. float or uint8
device = "cpu"            # Input buffer type

[input.input2]            # Input port number, the format is input.input[N]
name = "in_feat"          # Input port name
type = "float"            # Input port data type, e.g. float or uint8
device = "cpu"            # Input buffer type

# Output ports description
[output]
[output.output1]          # Output port number, the format is output.output[N]
name = "has_hand"         # Output port name
type = "uint8"            # Output port data type, e.g. float or uint8

[output.output2]          # Output port number, the format is output.output[N]
name = "no_hand"          # Output port name
type = "uint8"            # Output port data type, e.g. float or uint8
Next, look at yolox_post.py. You can see the basic interfaces were generated when the unit was created:
# Copyright (c) Huawei Technologies Co., Ltd. 2022. All rights reserved.

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import _flowunit as modelbox

class yolox_postFlowUnit(modelbox.FlowUnit):
    # Derived from modelbox.FlowUnit
    def __init__(self):
        super().__init__()

    def open(self, config):
        # Open the flowunit to obtain configuration information
        return modelbox.Status.StatusCode.STATUS_SUCCESS

    def process(self, data_context):
        # Process the data
        in_data = data_context.input("in_1")
        out_data = data_context.output("out_1")

        # yolox_post process code.
        # Remove the following code and add your own code here.
        for buffer in in_data:
            response = "Hello World " + buffer.as_object()
            result = response.encode('utf-8').strip()
            add_buffer = modelbox.Buffer(self.get_bind_device(), result)
            out_data.push_back(add_buffer)

        return modelbox.Status.StatusCode.STATUS_SUCCESS

    def close(self):
        # Close the flowunit
        return modelbox.Status()

    def data_pre(self, data_context):
        # Before streaming data starts
        return modelbox.Status()

    def data_post(self, data_context):
        # After streaming data ends
        return modelbox.Status()

    def data_group_pre(self, data_context):
        # Before all streaming data starts
        return modelbox.Status()

    def data_group_post(self, data_context):
        # After all streaming data ends
        return modelbox.Status()
When the flowunit's working mode is stream = false, the unit's open, process, and close interfaces are called; when stream = true, open, data_group_pre, data_pre, process, data_post, data_group_post, and close are called. Implement whichever interfaces your requirements call for.
Given what this unit does, we mainly need to fill in the open and process interfaces:
import _flowunit as modelbox
import numpy as np

from yolox_utils import postprocess, expand_bboxes_with_filter, draw_color_palette

class yolox_postFlowUnit(modelbox.FlowUnit):
    # Derived from modelbox.FlowUnit
    def __init__(self):
        super().__init__()

    def open(self, config):
        self.net_h = config.get_int('net_h', 320)
        self.net_w = config.get_int('net_w', 320)
        self.num_classes = config.get_int('num_classes', 2)
        self.num_grids = int((self.net_h / 32) * (self.net_w / 32)) * (1 + 2*2 + 4*4)
        self.conf_thre = config.get_float('conf_threshold', 0.3)
        self.nms_thre = config.get_float('iou_threshold', 0.4)
        return modelbox.Status.StatusCode.STATUS_SUCCESS

    def process(self, data_context):
        modelbox.info("YOLOX POST")
        in_image = data_context.input("in_image")
        in_feat = data_context.input("in_feat")
        has_hand = data_context.output("has_hand")
        no_hand = data_context.output("no_hand")

        for buffer_img, buffer_feat in zip(in_image, in_feat):
            width = buffer_img.get('width')
            height = buffer_img.get('height')
            channel = buffer_img.get('channel')

            img_data = np.array(buffer_img.as_object(), copy=False)
            img_data = img_data.reshape((height, width, channel))

            feat_data = np.array(buffer_feat.as_object(), copy=False)
            feat_data = feat_data.reshape((self.num_grids, self.num_classes + 5))

            ratio = (self.net_h / height, self.net_w / width)
            bboxes = postprocess(feat_data, (self.net_h, self.net_w), self.conf_thre, self.nms_thre, ratio)
            box = expand_bboxes_with_filter(bboxes, width, height)

            if box:
                buffer_img.set("bboxes", box)
                has_hand.push_back(buffer_img)
            else:
                draw_color_palette(img_data)
                img_buffer = modelbox.Buffer(self.get_bind_device(), img_data)
                img_buffer.copy_meta(buffer_img)
                no_hand.push_back(img_buffer)

        return modelbox.Status.StatusCode.STATUS_SUCCESS

    def close(self):
        # Close the flowunit
        return modelbox.Status()
As you can see, open fetches the configured parameters and process does the actual logic; inputs and outputs are obtained through data_context. Note that what we output is always an image: when a hand is detected, the bounding-box information is attached to the image buffer, where the next unit can retrieve it.
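As an illustration (the unit and port names here are hypothetical, not taken from the project code), a downstream flowunit could read the attached box metadata from its input buffer like this:

import _flowunit as modelbox

class roi_readerFlowUnit(modelbox.FlowUnit):
    def __init__(self):
        super().__init__()

    def open(self, config):
        return modelbox.Status.StatusCode.STATUS_SUCCESS

    def process(self, data_context):
        in_image = data_context.input("in_image")
        out_image = data_context.output("out_image")
        for buffer_img in in_image:
            # Read the metadata attached upstream via buffer_img.set("bboxes", box)
            bboxes = buffer_img.get("bboxes")
            modelbox.info("received {} boxes".format(len(bboxes)))
            out_image.push_back(buffer_img)
        return modelbox.Status.StatusCode.STATUS_SUCCESS

    def close(self):
        return modelbox.Status()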
Fill in the other general-purpose flowunits the same way; see the code we provide for details.
3.2.3 Building and running the application
We need to copy an mp4 file into the data folder (we provide a test video, hand.mp4), then open bin/mock_task.toml in the project directory and modify the task input and output configuration as follows:
# Task input: the mock simulation currently supports only one rtsp stream or a local url
# rtsp camera: type = "rtsp", url is the rtsp address
# Otherwise use type = "url"; the url can be a local file path, an http server address, or a camera (url = "0")
[input]
type = "url"
url = "../data/hand.mp4"

# Task output: currently supports "webhook" and local output "local"
# (url = "0" outputs to the screen; fill in an rtsp address to output to rtsp)
# ("local" can also write to a local file; note that a relative path is relative to this mock_task.toml file)
[output]
type = "local"
url = "../hilens_data_dir/paint.mp4"
After configuring, run build_project.sh in the project path to build the project:
rock@rk3568:~/lxy/examples$ cd workspace/hand_painting/
rock@rk3568:~/lxy/examples/workspace/hand_painting$ ./build_project.sh
dos2unix: converting file /home/rock/lxy/examples/workspace/hand_painting/graph/hand_painting.toml to Unix format...
dos2unix: converting file /home/rock/lxy/examples/workspace/hand_painting/graph/modelbox.conf to Unix format...
dos2unix: converting file /home/rock/lxy/examples/workspace/hand_painting/etc/flowunit/extract_roi/extract_roi.toml to Unix format...
dos2unix: converting file /home/rock/lxy/examples/workspace/hand_painting/etc/flowunit/painting/painting.toml to Unix format...
dos2unix: converting file /home/rock/lxy/examples/workspace/hand_painting/etc/flowunit/yolox_post/yolox_post.toml to Unix format...
dos2unix: converting file /home/rock/lxy/examples/workspace/hand_painting/model/hand_detection/hand_detection.toml to Unix format...
dos2unix: converting file /home/rock/lxy/examples/workspace/hand_painting/model/pose_detection/pose_detection.toml to Unix format...
dos2unix: converting file /home/rock/lxy/examples/workspace/hand_painting/bin/mock_task.toml to Unix format...
build success: you can run main.sh in ./bin
rock@rk3568:~/lxy/examples/workspace/hand_painting$
After the build completes, run the project:
rock@rk3568:~/lxy/examples/workspace/hand_painting$ ./bin/main.sh
Wait a moment and you can see the running results under the hilens_data_dir folder:
Besides mp4 files, many other input and output types are supported. ModelBox PC Tool also provides stream push and pull: choose live video stream as the input and start it:
When running the program, set the output address to the streaming address, and you can watch the results in a local web page:
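For example (the addresses below are placeholders; use the pull and push addresses shown by ModelBox PC Tool), mock_task.toml for live-stream input and rtsp output might look like:

[input]
type = "rtsp"
url = "rtsp://192.168.2.2/video_in"    # placeholder: pull address provided by ModelBox PC Tool

[output]
type = "local"
url = "rtsp://192.168.2.2/video_out"   # placeholder: push address viewed in the local web page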
ModelBox is an AI application development and running framework for device-edge-cloud scenarios. It provides a pipeline-based parallel execution flow that helps AI developers move quickly from model files to a running online AI inference application, lowering the barrier to deploying AI algorithms and giving applications high stability and performance. Now that you have learned to paint in mid-air with ModelBox, have you drawn a picture of your own?
This article is part of round 17 of the Huawei Cloud Community Content Co-creation campaign, Task 12: painting in mid-air with ModelBox.
https://bbs.huaweicloud.com/blogs/358780