
LVI-SAM Summary


LVI-SAM

lidar-visual-inertial odometry and mapping system

  • Overall framework diagram (figure omitted)
  • Per-node data transmission diagram (figure omitted)

vins

visual_feature

The main function:

  • Initialize the ROS node

    • Set the log level

      ros::console::set_logger_level(ROSCONSOLE_DEFAULT_NAME, ros::console::levels::Warn);

    • Read parameters (each node reads them once, which is a bit clumsy)

  • Read the camera parameters; the N cameras are read separately

    • If a fisheye camera is used, also read the fisheye mask
  • Initialize the depth register DepthRegister (after the parameters are read)

  • Subscribe to the lidar and image topics; if the lidar is not used, sub_lidar.shutdown();

    • img_callback
    • lidar_callback (the incoming point cloud has already been de-skewed)
  • Advertised topics:

    • feature, restart (for the visual odometry node), feature_img (for rviz)
  • Two spinner threads (ros::MultiThreadedSpinner), so the image and lidar callbacks are processed in parallel, as sketched below
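
A minimal sketch of the node setup described above, assuming placeholder topic names and empty callback bodies (the real node reads the topic names and the use-lidar switch from the yaml parameters):

    #include <ros/ros.h>
    #include <sensor_msgs/Image.h>
    #include <sensor_msgs/PointCloud2.h>

    void img_callback(const sensor_msgs::ImageConstPtr &img_msg) { /* feature tracking */ }
    void lidar_callback(const sensor_msgs::PointCloud2ConstPtr &laser_msg) { /* depth cloud upkeep */ }

    int main(int argc, char **argv)
    {
        ros::init(argc, argv, "vins_feature_tracker");
        ros::NodeHandle n;
        // keep only warnings and above in the console output
        ros::console::set_logger_level(ROSCONSOLE_DEFAULT_NAME, ros::console::levels::Warn);

        bool use_lidar = true;  // placeholder; normally taken from the configuration
        ros::Subscriber sub_img   = n.subscribe("image_topic", 5, img_callback);
        ros::Subscriber sub_lidar = n.subscribe("lidar_topic", 5, lidar_callback);
        if (!use_lidar)
            sub_lidar.shutdown();               // drop the lidar subscription when unused

        ros::MultiThreadedSpinner spinner(2);   // two threads: image and lidar in parallel
        spinner.spin();
        return 0;
    }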

lidar_callback

  • 1. Frame skipping: return if ++lidar_count % (LIDAR_SKIP + 1) != 0

  • 2. Obtain the transform transNow from vins_world to the body frame

    • Listen via tf; return if the lookup fails
    • Convert it to an Eigen::Affine3f
  • 3. Point cloud processing

    • Convert the laser cloud message to a pcl cloud
    • Downsample with a (0.2, 0.2, 0.2) voxel grid
    • Filter the cloud, keeping only points within the camera's view: x >= 0 && y/x <= 10 && z/x <= 10
    • Transform from the lidar frame to the camera frame with pcl::transformPointCloud
    • Transform into the global odometry frame using the transNow obtained from tf
  • 4. Save into the point-cloud queue: two queues, cloudQueue and timeQueue (cloud + timestamp)

  • 5. Pop old data from the queue, keeping only the last 5 s

  • 6. Fuse the clouds remaining in the queue into depthCloud by concatenating them all

  • 7. Downsample the fused cloud again with a (0.2, 0.2, 0.2) voxel grid (steps 4-7 are sketched below)
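
A sketch of steps 4-7, assuming the incoming cloud has already been downsampled, filtered, and transformed into the world frame as in steps 2-3 (the queue and variable names follow the text; pcl::PointXYZ is used for brevity):

    #include <deque>
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <pcl/filters/voxel_grid.h>

    using CloudPtr = pcl::PointCloud<pcl::PointXYZ>::Ptr;
    std::deque<CloudPtr> cloudQueue;
    std::deque<double>   timeQueue;

    CloudPtr fuseDepthCloud(const CloudPtr &laser_cloud_world, double time_new)
    {
        // 4. push the new cloud and its timestamp
        cloudQueue.push_back(laser_cloud_world);
        timeQueue.push_back(time_new);

        // 5. drop clouds older than 5 s
        while (!timeQueue.empty() && time_new - timeQueue.front() > 5.0)
        {
            cloudQueue.pop_front();
            timeQueue.pop_front();
        }

        // 6. concatenate everything that is left into one depth cloud
        CloudPtr depthCloud(new pcl::PointCloud<pcl::PointXYZ>());
        for (const auto &c : cloudQueue)
            *depthCloud += *c;

        // 7. downsample the fused cloud with a 0.2 m voxel grid
        CloudPtr depthCloudDS(new pcl::PointCloud<pcl::PointXYZ>());
        pcl::VoxelGrid<pcl::PointXYZ> downSizeFilter;
        downSizeFilter.setLeafSize(0.2f, 0.2f, 0.2f);
        downSizeFilter.setInputCloud(depthCloud);
        downSizeFilter.filter(*depthCloudDS);
        return depthCloudDS;
    }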

img_callback

  • On the first image (first_image_flag), just record first_image_time and last_image_time and return
  • Check the stability of the camera stream: the time gap exceeds 1 s or the timestamp jumps backwards
    • On such an anomaly, publish the restart flag and return
  • Frequency control of frame publishing (PUB_THIS_FRAME); ++pub_count each time a frame is published (see the sketch after this list)
    • Publish while round(1.0 * pub_count / (cur_img_time - first_image_time)) <= FREQ
    • Reset pub_count and first_image_time when the realized rate gets close to FREQ
  • Convert the image message to cv::Mat and call trackerData[i].readImage; the core is readImage
  • When PUB_THIS_FRAME is set, publish the feature topic pub_feature; note the call to depthRegister->get_depth
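
A sketch of the frequency control, with FREQ, pub_count, and first_image_time as in the text; the 0.01 * FREQ reset tolerance is an assumption borrowed from the standard VINS-Mono feature tracker:

    #include <cmath>

    // Decide whether to publish the current frame so the average rate stays <= FREQ.
    bool decidePublish(double cur_img_time, double &first_image_time,
                       int &pub_count, int FREQ)
    {
        if (std::round(1.0 * pub_count / (cur_img_time - first_image_time)) <= FREQ)
        {
            // when the realized rate is already very close to FREQ, restart the
            // window so the ratio does not drift over long runs
            if (std::fabs(1.0 * pub_count / (cur_img_time - first_image_time) - FREQ)
                < 0.01 * FREQ)
            {
                first_image_time = cur_img_time;
                pub_count = 0;
            }
            return true;   // PUB_THIS_FRAME = true
        }
        return false;      // skip this frame
    }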

readImage

  • Histogram equalization, with parameters cv::createCLAHE(3.0, cv::Size(8, 8))

  • If forw_img is empty, then prev_img = cur_img = forw_img = img;

  • If cur_pts.size() > 0, track by optical flow; the points tracked into the current frame are forw_pts

    • Optical flow via cv::calcOpticalFlowPyrLK
    • Remove the points whose tracking failed
  • If this frame will be published (see the sketch after this list)

    • Set up the mask (non-maximum suppression)
    • If the number of features in this frame is below the preset maximum, extract extra corners with cv::goodFeaturesToTrack
    • and add the extra points
  • Update the point buffers and undistort the points (undistortedPoints)

    • Not with cv::undistortPoints, but through the camera model: m_camera->liftProjective
    • If a point has a match in the previous frame, compute its velocity
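
A sketch of the track-and-replenish step: LK optical flow on the previous points, then cv::goodFeaturesToTrack to top the set back up to the maximum. MAX_CNT, MIN_DIST, and the pyramid/quality parameters are assumptions; the status-based pruning mirrors the "remove failed points" step above:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // cur_img / forw_img are consecutive grayscale frames; cur_pts are the points
    // tracked so far and forw_pts receives their locations in the new frame.
    void trackAndReplenish(const cv::Mat &cur_img, const cv::Mat &forw_img,
                           std::vector<cv::Point2f> &cur_pts,
                           std::vector<cv::Point2f> &forw_pts,
                           const cv::Mat &mask, int MAX_CNT, double MIN_DIST)
    {
        std::vector<uchar> status;
        std::vector<float> err;
        if (!cur_pts.empty())
        {
            cv::calcOpticalFlowPyrLK(cur_img, forw_img, cur_pts, forw_pts,
                                     status, err, cv::Size(21, 21), 3);
            // keep only the successfully tracked points
            std::vector<cv::Point2f> cur_ok, forw_ok;
            for (size_t i = 0; i < status.size(); i++)
                if (status[i]) { cur_ok.push_back(cur_pts[i]); forw_ok.push_back(forw_pts[i]); }
            cur_pts  = cur_ok;
            forw_pts = forw_ok;
        }

        // extract extra corners when below the preset maximum
        int need = MAX_CNT - static_cast<int>(forw_pts.size());
        if (need > 0)
        {
            std::vector<cv::Point2f> extra;
            cv::goodFeaturesToTrack(forw_img, extra, need, 0.01, MIN_DIST, mask);
            forw_pts.insert(forw_pts.end(), extra.begin(), extra.end());
        }
    }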

get_depth

  • Initialize the depth channel to be returned

    • name = "depth", values.resize(features_2d.size(), -1)
  • If there is no depth cloud yet, return immediately; the depth cloud is accumulated by lidar_callback

  • Get the transform transNow from the body frame to the world frame at the current time

  • Transform the depth cloud from the world frame into the camera frame with transNow.inverse()

  • Project the feature points (normalized coordinates, z always 1) onto the unit sphere: features_3d_sphere

    • Switch to the ROS standard frame: x = z, y = -x, z = -y
    • ROS standard: x forward, y left, z up; camera: z forward, x right, y down
    • The intensity field stores the depth, initialized to -1
  • Define a range image for the depth lookup, covering (-90°, 90°), with resolution bin_res = 180.0 / 360

  • Iterate over all depth points and compute row_id and col_id; if a point falls inside the image, keep only the closest point per bin

    • row_angle = atan2(p.z, sqrt(p.x * p.x + p.y * p.y)) * 180.0 / M_PI + 90.0
    • The +90° maps the angle into [0, 180]
    • col_angle = atan2(p.x, p.y) * 180.0 / M_PI
    • row_id = row_angle / bin_res, col_id = col_angle / bin_res
    • If a bin is already filled, keep only the nearer point
  • Fill depth_cloud_local and publish the depth cloud in the vins_body_ros frame

  • Normalize depth_cloud_local to obtain depth_cloud_unit_sphere

    • Divide x, y, z by the range; the intensity stores the depth (range)
  • Build a kd-tree from the normalized depth cloud depth_cloud_unit_sphere

  • Iterate over the normalized feature points features_3d_sphere and obtain the depth of each (see the sketch after this list)

    • Search the kd-tree for the 3 nearest neighbors, with a squared distance threshold of about 5 bins
    • If 3 neighbors are found within the threshold, do the following:
      • Take the 3 points A, B, C: the 3D coordinates of each (normalized coordinates × depth) and their depths r
      • The normalized feature point V (normalized coordinates)
      • Compute the normal vector N of the plane defined by A, B, C
      • Compute the distance from the origin to that plane: N(0) * A(0) + N(1) * A(1) + N(2) * A(2)
      • Compute the distance from the origin to the plane defined by the normalized feature point and the normal N: N(0) * V(0) + N(1) * V(1) + N(2) * V(2)
      • The feature's depth s is the quotient of the two values above
      • If the depth spread among the 3 points exceeds 2 m, or the depth is below 0.5 m, s is not applied
      • If s is greater than the maximum depth of the 3 points, clamp it to that maximum; if it is below the minimum depth, clamp it to the minimum
      • Recover the feature's 3D position (normalized coordinates multiplied by depth) in features_3d_sphere
  • If the depth image is published, color the points by depth for visualization and publish it

  • Update each feature's depth in depth_of_point and return it
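
A sketch of the depth interpolation for one feature, assuming the 3 nearest lidar points have already been recovered to 3D as A, B, C and V is the feature's unit ray (Eigen is used for the vector algebra; the 2 m / 0.5 m thresholds follow the text, and -1 mirrors the "no depth" initialization):

    #include <Eigen/Dense>
    #include <algorithm>

    // A, B, C: neighbor points (unit direction * range); V: unit ray of the feature.
    // Returns the interpolated depth along V, or -1 if the local surface is unreliable.
    float interpolateDepth(const Eigen::Vector3f &A, const Eigen::Vector3f &B,
                           const Eigen::Vector3f &C, const Eigen::Vector3f &V)
    {
        Eigen::Vector3f N = (B - A).cross(C - A);   // normal of the plane through A, B, C
        float s = N.dot(A) / N.dot(V);              // ray-plane intersection depth

        float r1 = A.norm(), r2 = B.norm(), r3 = C.norm();
        float min_depth = std::min({r1, r2, r3});
        float max_depth = std::max({r1, r2, r3});
        if (max_depth - min_depth > 2.0f || s < 0.5f)   // depth spread too large or too close
            return -1.0f;

        // clamp to the depth range spanned by the 3 neighbors
        return std::min(std::max(s, min_depth), max_depth);
    }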

visual_odometry

The main function:

  • Construct the global Estimator estimator
  • Initialize ROS
  • Read parameters and call estimator.setParameter()
    • Camera extrinsics, td, and the information matrix
  • Subscribed callbacks:
    • restart_callback
    • imu_callback
    • odom_callback
    • feature_callback
  • If the lidar is not used, sub_odom.shutdown();
  • Create the main thread: std::thread measurement_process{process};
  • Two spinner threads (ros::MultiThreadedSpinner) for parallel callback processing

imu_callback

  • If the IMU timestamp jumps backwards or does not advance, print a warning and return
  • Push the IMU data into imu_buf, protected by the mutex m_buf (see the sketch after this list)
  • Wake the main thread through the condition variable
  • Publish the latest odometry prediction, used for the rviz display
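
A minimal sketch of the buffer plus condition-variable pattern shared by imu_callback and feature_callback (the payload is simplified to a timestamp; the real queues hold full ROS messages):

    #include <condition_variable>
    #include <mutex>
    #include <queue>

    std::mutex m_buf;
    std::condition_variable con;
    std::queue<double> imu_buf;        // stands in for the IMU message queue

    void imu_callback_sketch(double imu_time)
    {
        {
            std::lock_guard<std::mutex> lk(m_buf);
            imu_buf.push(imu_time);    // push under the mutex
        }
        con.notify_one();              // wake the process (main) thread
    }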

odom_callback

  • Push the data into odomQueue, protected by the mutex m_odom

feature_callback

  • Push the data into feature_buf, protected by the mutex m_buf
  • Wake the main thread through the condition variable

process (main thread)

  • while ros::ok()
    • Wait on the condition variable until measurements is non-empty
      • measurements = getMeasurements()
    • For each measurement
      • IMU pre-integration
        • If imu_msg.time <= img_msg.time, call estimator.processIMU
        • Otherwise, interpolate using the last linear acceleration and angular velocity so the IMU and image are exactly aligned
      • Build image[feature_id]
    • Fetch initialization information from the lidar odometry: odomRegister->getOdometry
      • Because the odometry queue is shared, this takes the mutex m_odom
    • Process the image: processImage
    • Visualization
      • Publish the odometry, keyframe poses, camera pose, TF, and keyframes

Other functions

getMeasurements

  • Loop: while (true)
    • If either imu_buf or feature_buf is empty, return
    • If imu_buf.back() does not cover the time of feature_buf.front(), return
      • The newest IMU timestamp is still earlier than the oldest feature packet, so wait for more IMU data
    • If imu_buf.front() does not cover the time of feature_buf.front(), pop feature_buf and continue
      • The oldest IMU timestamp is already later than the feature time; since time only increases it will never be covered, so throw the feature away
    • Pack the IMU data earlier than feature_buf.front(), i.e. [IMUs, feature_buf.front()] (see the sketch after this list)
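
A sketch of getMeasurements under the assumption of plain structs holding only timestamps (the real code works on ROS message pointers and is woken through the condition variable shown earlier):

    #include <queue>
    #include <utility>
    #include <vector>

    struct Imu     { double t; };
    struct Feature { double t; };

    std::queue<Imu>     imu_buf;
    std::queue<Feature> feature_buf;

    std::vector<std::pair<std::vector<Imu>, Feature>> getMeasurements()
    {
        std::vector<std::pair<std::vector<Imu>, Feature>> measurements;
        while (true)
        {
            if (imu_buf.empty() || feature_buf.empty())
                return measurements;
            // newest IMU still older than the oldest feature: wait for more IMU data
            if (imu_buf.back().t <= feature_buf.front().t)
                return measurements;
            // oldest IMU already newer than the oldest feature: it can never be
            // covered, so discard the feature
            if (imu_buf.front().t >= feature_buf.front().t)
            {
                feature_buf.pop();
                continue;
            }
            Feature img = feature_buf.front();
            feature_buf.pop();
            std::vector<Imu> imus;
            while (imu_buf.front().t < img.t)
            {
                imus.push_back(imu_buf.front());
                imu_buf.pop();
            }
            imus.push_back(imu_buf.front());   // one extra IMU just past the image time
            measurements.emplace_back(imus, img);
        }
    }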

getOdometry

  • Reset odometry_channel(18, -1)

    • id(1), P(3), Q(4), V(3), Ba(3), Bg(3), gravity(1)
  • While the lidar odometry queue is not empty, drop its old frames: odom.time < img_time - 0.05 ? pop_front

  • If the lidar odometry queue is (then) empty, return immediately

  • Obtain the closest q_odom_lidar

    • Find the odometry frame odomCur closest to the image time, i.e. the latest frame not newer than the image
    • If the time gap between odomCur and the image exceeds 0.05 s, return
  • Convert it to the camera frame: q_odom_cam

  • Transform the odometry position and velocity from the lidar frame to the camera frame (see the sketch after this list)

    • Convert odomCur to p_eigen, v_eigen
    • p_eigen, v_eigen = q_lidar_to_cam_eigen * (p_eigen, v_eigen)
    • Write p_eigen, v_eigen back into odomCur
  • Return odometry_channel, converted from odomCur
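
A sketch of that frame change, assuming q_lidar_to_cam is the fixed lidar-to-camera rotation taken from the extrinsic configuration:

    #include <Eigen/Dense>

    // Rotate a lidar-frame position and velocity into the camera frame, matching
    // p_eigen, v_eigen = q_lidar_to_cam_eigen * (p_eigen, v_eigen) above.
    void lidarOdomToCam(const Eigen::Quaterniond &q_lidar_to_cam,
                        Eigen::Vector3d &p, Eigen::Vector3d &v)
    {
        p = q_lidar_to_cam * p;
        v = q_lidar_to_cam * v;
    }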

processImage

  • addFeatureCheckParallax adds the features to the feature manager, computes the track counts and parallax, and decides whether the frame is a keyframe (see the sketch after this list)
    • If it is a keyframe, marginalize the oldest frame; otherwise marginalize the second-newest frame
  • If a lidar odometry prior exists and its initialization is valid, marginalize the oldest frame
  • Add this frame to all_image_frame and start a new pre-integration
  • If the extrinsics need calibration, run the rotation extrinsic calibration CalibrationExRotation and change the state once it succeeds
  • If the system is in the initialization state:
    • If the number of frames in the sliding window is below the preset value, push the frame
    • Otherwise run the initialization: initialStructure
    • If initialization succeeds, switch to the nonlinear-optimization state and continue with
      • Solve the odometry: solveOdometry
      • Slide the window: slideWindow
      • Remove the features that failed tracking: f_manager.removeFailures()
      • Update the states
    • Otherwise: slide the window (slideWindow)
  • Otherwise (the system is past the initialization state):
    • Solve the odometry: solveOdometry
    • If failure is detected, restart the vins system
    • Slide the window: slideWindow
    • Remove the features that failed tracking: f_manager.removeFailures()
    • Update the sliding-window states to prepare the VINS output
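
A sketch of the resulting marginalization decision; the enum and flag names follow the usual VINS convention and are assumptions here:

    enum MarginalizationFlag { MARGIN_OLD = 0, MARGIN_SECOND_NEW = 1 };

    // Keyframes marginalize the oldest frame, non-keyframes drop the second-newest
    // frame; a valid lidar odometry prior also forces marginalizing the oldest frame.
    MarginalizationFlag chooseMarginalization(bool is_keyframe, bool lidar_prior_valid)
    {
        if (is_keyframe || lidar_prior_valid)
            return MARGIN_OLD;
        return MARGIN_SECOND_NEW;
    }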

initialStructure

  • Lidar-aided initialization
    • Clear the keyframe flags in the container
      • Iterate over all frames in the container and set is_key_frame = false
    • Check whether the lidar information in the window is valid; if not, break
    • If the lidar information in the window is valid:
      • Update the states in the sliding window
      • Update the gravity direction
      • Reset the depth of all features and triangulate (triangulate)
        • If a feature already has a valid depth, skip its triangulation
      • return true
  • Check the IMU observability (see the sketch after this list)
    • Compute the average acceleration between frames from the IMU pre-integration (delta_v / dt)
    • Compute the standard deviation of those pre-integrated accelerations
    • If the standard deviation is below 0.25, return (this statement is commented out)
  • Global SFM
    • Iterate over all feature points and add their observation constraints imu_j
      • Iterate with imu_j++ and add every observation to the feature
    • Recover R, t from a frame pair with sufficient parallax: relativePose
    • Purely visual recovery of the sliding-window poses and features: construct
  • Solve PnP for all frames
  • Visual-IMU alignment
triangulate

  • Triangulate the features; the difference from the original VINS is that a feature which already has a valid (lidar) depth is skipped

visual_loop

The main function:

  • ROS initialization: node, handle, and log-level setting

  • Load parameters

    • Check that the parameter path is correct
    • The yaml parameters used for loop closure
  • If loop closure is required, set things up:

    • Initialize the bag-of-words vocabulary
    • Initialize the BRIEF extractor
    • Initialize the camera model
  • Subscribed topics:

    • image_callback
    • Keyframe poses from the visual odometry: pose_callback
    • Keyframe feature points from the visual odometry: point_callback
    • Estimated extrinsics from the visual odometry: extrinsic_callback
  • Advertised topics:

    • The loop-closure match image pub_match_img
    • The loop-closure match frame pub_match_msg
    • The loop-closure keyframe poses pub_key_pose
  • If loop closure is disabled, the subscriptions and publishers above are shut down

  • Create the main thread: std::thread(process);

Callbacks

  • Each callback simply pushes its data into a buffer

pose_callback

  • If loop closure is disabled, return immediately
  • Push the data into pose_buf, protected by the mutex m_buf

point_callback

  • If loop closure is disabled, return immediately
  • Push the data into point_buf, protected by the mutex m_buf

image_callback

  • If loop closure is disabled, return immediately
  • Push the data into image_buf, protected by the mutex m_buf
  • Check the stability of the camera data stream
    • Check the image time interval and for backward jumps
    • If the interval exceeds 1 s or the time jumps backwards, clear all the queues

extrinsic_callback

  • If loop closure is disabled, return immediately
  • Assign tic and qic, protected by the mutex m_process

Process

  • If loop closure is disabled, return immediately

  • while ros::ok()

    • Data alignment
      • Find matching image_msg, pose_msg, and point_msg
      • The three must share the same timestamp; guarded by the mutex m_buf
    • If pose_msg != NULL, i.e. a complete packet was found:
      • Skip the first ten frames (static int counter)
      • Limit the frequency by skipping some frames (not the same as the earlier rate reduction)
      • Get the keyframe pose from pose_msg -> R, T
      • Add the keyframe
        • the image
        • all map points of the keyframe
        • construct the new keyframe
        • addKeyFrame, protected by the mutex m_process
        • visualize the keyframe poses: visualizeKeyPoses
    • Sleep 5 ms per iteration (sleep_for)

lio-sam

ImageProjection

Constructor

  • Subscribed topics:
    • Raw IMU data: imuHandler
    • The ROS odometry provided by vins: odometryHandler
    • The raw lidar point cloud