LVI Sam summary
2022-06-27 09:56:00 【xiaoma_bk】
LVI-SAM
A lidar-visual-inertial odometry and mapping system.
- Overall framework diagram:

- Data-flow diagram between the nodes

vins
visual_feature
Main function:
- Initialize the ROS node
- Set the log level:
  ros::console::set_logger_level(ROSCONSOLE_DEFAULT_NAME, ros::console::levels::Warn);
- Read the parameters (each node reads them once)
- Read the camera parameters (the N cameras are read separately)
  - If a fisheye camera is used, also read the fisheye mask
- Initialize the depth register DepthRegister (after the parameters are read)
- Subscribe to the laser and image topics; if the laser is not used, sub_lidar.shutdown();
  - img_callback
  - lidar_callback (the point clouds have already been undistorted)
- Published topics:
  - feature, restart (for the visual odometer), feature_img (rviz)
- Multiple threads: MultiThreadedSpinner, for parallel processing (image and lidar)
lidar_callback
1. Frame skipping: return when ++lidar_count % (LIDAR_SKIP+1) != 0
2. Get transNow, the transform from vins_world to the body frame
   - tf listener; return when the lookup fails
   - Convert it to Eigen::Affine3f
3. Point-cloud processing
   - Convert the laser cloud to pcl
   - Downsample with leaf size (0.2, 0.2, 0.2)
   - Filter the cloud (keep only points in the camera view): x >= 0 && y/x <= 10 && z/x <= 10
   - Transform from the laser frame to the camera frame: pcl::transformPointCloud
   - Transform to the global odometry frame, using the transNow obtained from tf
4. Save the cloud into two queues, point cloud + time: cloudQueue, timeQueue
5. Pop old data from the queues, keeping only the last 5 s
6. Fuse the clouds in the queue into depthCloud (concatenate all clouds in the queue)
7. Downsample the fused cloud with leaf size (0.2, 0.2, 0.2)
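The camera-view filter in step 3 can be sketched in standalone form. This is a minimal sketch, not the node's actual code: Point is a hypothetical stand-in for pcl::PointXYZ, and the ratio checks are written with abs, assuming the frustum is symmetric about the forward axis.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Minimal sketch of the camera-view filter from lidar_callback.
// In the lidar frame x points forward, so |y/x| <= 10 and |z/x| <= 10
// keeps only points inside a wide frustum in front of the sensor.
struct Point { double x, y, z; };

std::vector<Point> keepInCameraView(const std::vector<Point>& cloud) {
    std::vector<Point> kept;
    for (const Point& p : cloud)
        if (p.x >= 0 && std::abs(p.y / p.x) <= 10 && std::abs(p.z / p.x) <= 10)
            kept.push_back(p);
    return kept;
}
```

Points behind the sensor (x < 0) and points far off-axis are discarded before the cloud is handed to the depth register, which keeps the fused depth cloud small.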
img_callback
- If first_image_flag, assign first_image_time and last_image_time, then return
- Camera-stream stability check: the time interval is > 1 s or the timestamp jumps backwards
  - On such an anomaly, publish the restart signal and return
- Frequency control for publishing the current frame: PUB_THIS_FRAME
  - Publish when round(1.0 * pub_count / (cur_img_time - first_image_time)) <= FREQ, then ++pub_count
  - Reset pub_count and first_image_time once the achieved rate approaches FREQ
- Convert the image data to cv::Mat and call trackerData[i].readImage; the core is readImage
- When PUB_THIS_FRAME, publish the topic pub_feature; note: the depth comes from depthRegister->get_depth
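The frequency-control bookkeeping above can be sketched as a standalone helper. FreqControl is a hypothetical wrapper around the pub_count / first_image_time state described in the notes; the 0.01 * FREQ reset tolerance follows the VINS convention but is an assumption here.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical standalone form of the publish-frequency control in
// img_callback: a frame is published only while the achieved rate
// round(pub_count / elapsed) stays at or below FREQ.
struct FreqControl {
    double FREQ = 10.0;            // target publish rate in Hz
    double first_image_time = -1.0;
    int pub_count = 0;

    bool shouldPublish(double cur_img_time) {
        if (first_image_time < 0 || cur_img_time <= first_image_time) {
            first_image_time = cur_img_time;
            ++pub_count;
            return true;           // first frame of the window
        }
        double elapsed = cur_img_time - first_image_time;
        if (std::round(1.0 * pub_count / elapsed) > FREQ)
            return false;          // already publishing faster than FREQ
        ++pub_count;
        // Once the achieved rate has settled close to FREQ, restart the
        // window so the long-run average cannot starve future frames.
        if (std::abs(1.0 * pub_count / elapsed - FREQ) < 0.01 * FREQ) {
            first_image_time = cur_img_time;
            pub_count = 0;
        }
        return true;
    }
};
```

Fed a 20 Hz image stream with FREQ = 10, this publishes roughly every other frame.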
readImage
- Histogram equalization, parameters: cv::createCLAHE(3.0, cv::Size(8, 8))
- If forw_img is empty, then prev_img = cur_img = forw_img = img;
- If cur_pts.size() > 0, run optical-flow tracking to get the currently tracked feature points forw_pts
  - Optical flow: cv::calcOpticalFlowPyrLK
  - Delete invalid feature points
- If this frame is to be published:
  - Set the mask: non-maximum suppression
  - If the frame has fewer feature points than the preset maximum, extract extra ones: cv::goodFeaturesToTrack
  - Add the extra points
- Assign, then undistort the points: undistortedPoints
  - Not cv::undistortPoints, but the camera model: m_camera->liftProjective
  - If there are matching points in the previous frame, estimate the point velocities
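The "delete invalid feature points" step after cv::calcOpticalFlowPyrLK compacts every per-feature container by the same status vector. A minimal standalone sketch of that helper (named reduceVector in the VINS feature tracker):

```cpp
#include <cassert>
#include <vector>

// Compacts v in place, keeping only elements whose status flag is nonzero.
// In the feature tracker the same status vector (from calcOpticalFlowPyrLK
// plus a border check) is applied to prev_pts, cur_pts, forw_pts, ids and
// track_cnt, so all containers stay index-aligned.
template <typename T>
void reduceVector(std::vector<T>& v, const std::vector<unsigned char>& status) {
    std::size_t j = 0;
    for (std::size_t i = 0; i < v.size(); ++i)
        if (status[i])
            v[j++] = v[i];
    v.resize(j);
}
```

Applying one shared status vector to every container is what lets a feature keep the same index (and id) across frames after lost tracks are dropped.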
get_depth
- Initialize the depth channel to prepare the return value: name = "depth", values.resize(features_2d.size(), -1)
- If there is no depth cloud yet, return directly (the depth cloud is built by lidar_callback)
- Get transNow, the transform from the body frame to the world frame at the current time
- Move the cloud from the world frame into the camera frame: transNow.inverse()
- Project the feature points onto the unit sphere (their z is always 1 beforehand): features_3d_sphere
  - Switch to the ROS standard frame: x = z, y = -x, z = -y
  - Standard frame: x forward, y left, z up; camera frame: z forward, x right, y down
  - Intensity stores the depth, initialized to -1
- Define the depth image covering (-90°, 90°), with resolution bin_res = 180/360
- Traverse all depth points and compute row_id, col_id; if a point falls inside the image range, keep only the closest one per bin
  - row_angle = atan2(p.z, sqrt(p.x * p.x + p.y * p.y)) * 180.0 / M_PI + 90.0 (90° is added to shift the range to [0, 180])
  - col_angle = atan2(p.x, p.y) * 180.0 / M_PI
  - row_id = row_angle / bin_res, col_id = col_angle / bin_res
  - If a bin has already been filled, keep only the nearer point
- Assemble depth_cloud_local and publish the depth cloud in the vins_body_ros frame
- Normalize depth_cloud_local to get depth_cloud_unit_sphere
  - x, y, z divided by the range; intensity stores the depth value
- Build a kd_tree from the normalized depth cloud depth_cloud_unit_sphere
- Traverse the normalized feature points features_3d_sphere and look up each point's depth
  - Search the kd_tree for the 3 nearest points, with a threshold of 5 pixels squared
  - If 3 points are found and the distance is below the threshold:
    - Take the 3 points A, B, C: the 3D coordinates of each (normalized coordinates * depth) and its depth r
    - Take the normalized feature point V (normalized coordinates)
    - Compute the normal vector N of the plane determined by A, B, C
    - Compute the distance from the origin to that plane:
      N(0) * A(0) + N(1) * A(1) + N(2) * A(2)
    - Compute the same quantity for the normalized feature point V:
      N(0) * V(0) + N(1) * V(1) + N(2) * V(2)
    - The normalized feature depth is s = the quotient of the two
    - If the depth difference among the 3 points exceeds 2 m, or the depth is below 0.5 m, s is left unchanged
    - If s exceeds the maximum depth of the 3 points, clamp it to the maximum; if it is below the minimum, clamp it to the minimum
  - Restore the feature's 3D information (the normalized data multiplied by the depth): features_3d_sphere
- If the depth image is published, color the points by depth for display and publish
- Update each feature's depth depth_of_point, and return
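The core of the depth lookup above is a ray-plane intersection: the three kd-tree neighbours A, B, C (unit-sphere coordinates scaled by their depths) define a plane, and the feature depth s is where the feature's unit ray V pierces it. A minimal sketch with a hypothetical Vec3 helper (the real code uses Eigen/PCL types):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical stand-in for the Eigen vector types used in get_depth.
struct Vec3 { double x, y, z; };

Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 cross(Vec3 a, Vec3 b) { return {a.y * b.z - a.z * b.y,
                                     a.z * b.x - a.x * b.z,
                                     a.x * b.y - a.y * b.x}; }
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Depth of the feature ray V where it pierces the plane through A, B, C:
// the plane is N·X = N·A with N = (B-A)x(C-A); the ray is X = s*V,
// so s = (N·A) / (N·V) — the quotient of the two dot products above.
double interpolateDepth(Vec3 A, Vec3 B, Vec3 C, Vec3 V) {
    Vec3 N = cross(sub(B, A), sub(C, A));
    return dot(N, A) / dot(N, V);
}
```

The 2 m depth-spread and 0.5 m minimum-depth guards then reject interpolations across depth discontinuities, and s is clamped to the [min, max] depth of the three neighbours.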
visual_odometry
Main function:
- Construct the global variable Estimator estimator
- Initialize ros
- Read the parameters and call estimator.setParameter(): camera extrinsics, td, information matrix
- Subscriptions:
  - restart_callback
  - imu_callback
  - odom_callback
  - feature_callback
  - If the laser is not used: sub_odom.shutdown();
- Define the main thread: measurement_process{process};
- Multiple threads: MultiThreadedSpinner, for parallel processing
imu_callback
- If the imu timestamp jumps backwards or does not change, print a warning and return directly
- Push the imu data into imu_buf, under the mutex m_buf
- Condition-wake the main thread
- Publish the latest odometry, for rviz display
odom_callback
- Put the data into odomQueue, under the mutex m_odom
feature_callback
- Put the data into feature_buf, under the mutex m_buf
- Condition-wake the main thread
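The buffer-plus-condition pattern shared by imu_callback and feature_callback can be sketched in standalone form. Plain double timestamps stand in for the ROS messages; the names feature_buf, m_buf and con follow the notes, the rest is an illustrative assumption.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>

std::queue<double> feature_buf;
std::mutex m_buf;
std::condition_variable con;

// Callback side: push under the mutex, then notify.
void feature_callback(double msg) {
    {
        std::lock_guard<std::mutex> lk(m_buf);
        feature_buf.push(msg);
    }
    con.notify_one();          // condition wakes up the main thread
}

// Main-thread side: sleep on the condition until data arrives.
double wait_for_feature() {
    std::unique_lock<std::mutex> lk(m_buf);
    con.wait(lk, [] { return !feature_buf.empty(); });
    double msg = feature_buf.front();
    feature_buf.pop();
    return msg;
}
```

Notifying outside the locked scope avoids waking the consumer only to have it block again on the mutex.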
process main_thread
- while ros::ok
  - Wait on the condition until measurements != 0: measurements = getMeasurements()
  - Traverse measurements
    - IMU preintegration
      - While imu_msg.time <= img_msg.time: estimator.processIMU
      - Otherwise, interpolate from the last linear acceleration and angular velocity so the IMU sample aligns exactly with the image time
    - Build image[feature_id]
    - Get the initialization hint from the lidar odometer: odomRegister->getOdometry
      - Because it uses the odometry data, this runs under the mutex m_odom
    - Process the image: processImage
    - Visualization
      - Publish the odometry, keyframe poses, camera pose, TF, and the keyframes
other_function
getMeasurements
- Loop: while (1)
  - If imu_buf or feature_buf is empty, return
  - If imu_buf.back() does not reach past feature_buf.front()'s time, return
    - The IMU data does not yet cover the feature packet; wait for more
  - If imu_buf.front() is not before feature_buf.front()'s time, pop feature_buf and continue
    - The IMU data starts after the feature time; since timestamps only increase, this feature frame can never be covered, so it is discarded
  - Package the imu_buf data up to feature_buf.front(), i.e. [imus, feature_buf.front()]
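The pairing rules above can be sketched with plain timestamp queues. This is a hypothetical simplification: the real VINS code pairs sensor_msgs pointers and offsets the image time by estimator.td, both omitted here.

```cpp
#include <cassert>
#include <deque>
#include <utility>
#include <vector>

// Pairs each feature-frame timestamp with the IMU timestamps covering it.
std::vector<std::pair<std::vector<double>, double>>
getMeasurements(std::deque<double>& imu_buf, std::deque<double>& feature_buf) {
    std::vector<std::pair<std::vector<double>, double>> measurements;
    while (true) {
        if (imu_buf.empty() || feature_buf.empty())
            return measurements;
        // Newest IMU sample must reach past the oldest feature frame;
        // otherwise wait for more IMU data.
        if (imu_buf.back() <= feature_buf.front())
            return measurements;
        // Oldest IMU sample must precede the oldest feature frame;
        // otherwise that frame can never be covered: drop it.
        if (imu_buf.front() >= feature_buf.front()) {
            feature_buf.pop_front();
            continue;
        }
        double img_t = feature_buf.front();
        feature_buf.pop_front();
        std::vector<double> imus;
        while (imu_buf.front() < img_t) {
            imus.push_back(imu_buf.front());
            imu_buf.pop_front();
        }
        // Keep one IMU sample just past the image time (not popped, so the
        // next frame reuses it) for interpolation at the image timestamp.
        imus.push_back(imu_buf.front());
        measurements.emplace_back(std::move(imus), img_t);
    }
}
```

The un-popped boundary sample is what the process thread later interpolates on so IMU and image times align exactly.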
getOdometry
- Reset odometry_channel(18, -1)
  - id(1), P(3), Q(4), V(3), Ba(3), Bg(3), gravity(1)
- Drop stale odometry frames: pop_front while odom < img_time - 0.05; if the lidar odometry queue is empty, return directly
- Get the closest q_odom_lidar
  - Find odomCur, the odometry frame closest to the image time: the nearest frame no later than the visual image
  - If the interval between odomCur and the image exceeds 0.05 s, return directly
- Convert it to the camera frame: q_odom_cam
- Transform the odometry position and attitude from the lidar frame to the camera frame
  - Convert odomCur to p_eigen, v_eigen
  - p_eigen, v_eigen = q_lidar_to_cam_eigen * p_eigen, v_eigen
  - Write p_eigen, v_eigen back into odomCur
- Return odometry_channel, converted from odomCur
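The 18-value odometry_channel layout can be made concrete with a packing helper. This is a hypothetical standalone form: the field order follows the notes (id(1) + P(3) + Q(4) + V(3) + Ba(3) + Bg(3) + gravity(1) = 18, with -1 meaning "no valid lidar odometry"), but the quaternion component order is an assumption.

```cpp
#include <cassert>
#include <vector>

// Packs the lidar-odometry initialization hint handed to the visual
// odometer. Index layout (18 doubles total):
//   [0]      frame id
//   [1..3]   position P
//   [4..7]   orientation quaternion Q (x, y, z, w — order assumed here)
//   [8..10]  velocity V
//   [11..13] accelerometer bias Ba
//   [14..16] gyroscope bias Bg
//   [17]     gravity magnitude
std::vector<double> packOdometryChannel(double id,
                                        const double P[3], const double Q[4],
                                        const double V[3], const double Ba[3],
                                        const double Bg[3], double gravity) {
    std::vector<double> ch(18, -1);   // -1 = "no valid odometry"
    ch[0] = id;
    for (int i = 0; i < 3; ++i) ch[1 + i]  = P[i];
    for (int i = 0; i < 4; ++i) ch[4 + i]  = Q[i];
    for (int i = 0; i < 3; ++i) ch[8 + i]  = V[i];
    for (int i = 0; i < 3; ++i) ch[11 + i] = Ba[i];
    for (int i = 0; i < 3; ++i) ch[14 + i] = Bg[i];
    ch[17] = gravity;
    return ch;
}
```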
processImage
- addFeatureCheckParallax: add the features to feature, compute the track count and parallax, and judge whether this is a keyframe
  - If it is a keyframe, marginalize the oldest frame; otherwise marginalize the newest frame
  - If the lidar odometry is available and the initialization is valid, marginalize the oldest frame
- Add this frame to all_image_frame, and restart the preintegration
- If the extrinsics need calibration, run the rotational extrinsic calibration CalibrationExRotation and update the status once it succeeds
- If the system is in the initialization state:
  - If the number of frames in the sliding window is below the preset size, push the frame
  - Run the initialization: initialStructure
  - After a successful initialization, the state becomes nonlinear optimization, and then:
    - Solve the odometry: solveOdometry
    - Slide the window: slideWindow
    - Remove failed feature tracks: f_manager.removeFailures()
    - Assign values
  - Otherwise: slide the window, slideWindow
- Otherwise (the system has finished initialization):
  - Solve the odometry: solveOdometry
  - If failure is detected, restart the vins system
  - Slide the window: slideWindow
  - Remove failed feature tracks: f_manager.removeFailures()
  - Fill in the sliding-window values, preparing the VINS output
initialStructure
- Lidar-aided initialization
  - Clear the keyframe flags in the container: traverse all frames, is_key_frame = false
  - Check whether the lidar information in the window is valid; if invalid, break
  - If the lidar information in the window is valid:
    - Update the states in the sliding window
    - Update the gravity direction
    - Reset the depth of all features and triangulate (triangulate)
      - If a feature's depth is already valid, skip its triangulation
    - return true
  - Clear the keyframe flags in the container
- Check the IMU observability
  - Compute the average acceleration between frames from the IMU preintegration (delta_v/dt)
  - Compute the standard deviation of those preintegrated accelerations
  - If the standard deviation is below 0.25, return (this return is commented out)
- Global sfm
  - Traverse all feature points and add the observation constraints: imu_j
    - Traverse with imu_j++, adding every observation to the feature
  - Recover R, t given sufficient parallax: relativePose
  - Pure-visual recovery of the sliding-window poses and features: construct
  - PnP-solve all frames
- Visual-IMU alignment
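The IMU observability check can be sketched on scalar acceleration magnitudes. This is a hypothetical simplification: the real code forms delta_v / dt as a vector per frame from the preintegrations, and since the early return is commented out, a low spread only triggers a warning.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Returns true when the spread of per-frame average accelerations
// (|delta_v| / dt from the IMU preintegrations) indicates enough motion
// excitation; a standard deviation below 0.25 means the trajectory is
// too gentle for reliable visual-inertial initialization.
bool imuExcitationSufficient(const std::vector<double>& frame_accels) {
    if (frame_accels.empty())
        return false;
    double sum = 0;
    for (double a : frame_accels) sum += a;
    const double mean = sum / frame_accels.size();
    double var = 0;
    for (double a : frame_accels) var += (a - mean) * (a - mean);
    const double stddev = std::sqrt(var / frame_accels.size());
    return stddev >= 0.25;
}
```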
triangulate
- Feature triangulation; the difference from the original VINS is that features which already have a depth are simply skipped
visual_loop
Main function:
- ros initialization: node + handle + log level display
- Load the parameters
  - Check that the parameter path is correct
  - Parameters used for loop closure: yaml
- If loop closure is required: parameter setup
  - Initialize the bag of words
  - Initialize the brief extractor
  - Initialize the camera model
- Subscribed topics:
  - image_callback
  - Visual-odometry keyframe poses: pose_callback
  - Visual-odometry keyframe feature points: point_callback
  - Visual-odometry estimated extrinsics: extrinsic_callback
- Published topics:
  - Loop-closure match image: pub_match_img
  - Loop-closure match frame: pub_match_msg
  - Loop-closure keyframe poses: pub_key_pose
- If loop closure is disabled, the subscriptions and publications above are shut down
- Build the main thread: std::thread(process);
callback
- Each callback function just puts its data into a buf
pose_callback
- If loop closure is disabled, return directly
- Put the data into pose_buf, under the mutex m_buf
point_callback
- If loop closure is disabled, return directly
- Put the data into point_buf, under the mutex m_buf
image_callback
- If loop closure is disabled, return directly
- Put the data into image_buf, under the mutex m_buf
- Check the stability of the camera data stream
  - Check the image time interval and for jumps
  - If the interval is > 1 s or the time jumps backwards, reset all queues
extrinsic_callback
- If loop closure is disabled, return directly
- Assign tic, qic, under the mutex m_process
Process
- If loop closure is disabled, return directly
- while ok
  - Data alignment: find image_msg, pose_msg and point_msg with matching timestamps, under the mutex m_buf
  - If pose_msg != NULL, i.e. the three were assigned:
    - Skip the first ten frames (static int)
    - Limit the frequency by skipping some frames (not the same as lowering the rate)
    - Get the keyframe pose: pose_msg -> R, T
    - Add the keyframe
      - the picture
      - all map points of the keyframe
      - construct the new keyframe
    - m_process.addKeyFrame, under the mutex m_process
    - Visualize the keyframe poses: visualizeKeyPoses
  - Sleep between iterations: sleep_for
lio-sam
ImageProjection
Constructor:
- Subscribed topics:
  - the raw imu data: imuHandler
  - the ros odometry provided by vins: odometryHandler
  - the radar point cloud