
[Leaderboard] CARLA Leaderboard: a hands-on guide to running it and participating

2022-06-25 00:08:00 Kin__ Zhang

This post is mainly about participating in the leaderboard ranking: how to build a team and submit your own code. This part is simple, mostly basic teaching and demonstration; you can refer to more open-source code for further study.
Most participants on this list are schools and laboratories; there are still few companies such as Waymo or Uber, perhaps because they already have their own datasets.

References and foreword

  1. Leaderboard ranking list: https://leaderboard.carla.org/leaderboard/
  2. Introduction video for the CARLA Challenge 2020 [including the competition routes and test scenarios]: https://www.bilibili.com/video/bv1r54y1V7wP
  3. GitHub links, etc.: https://github.com/carla-simulator/scenario_runner
  4. Official scenario-testing documentation: https://carla-scenariorunner.readthedocs.io/en/latest/openscenario_support/
  5. I found that the leaderboard itself is open source, and some scenarios are also written there: https://github.com/carla-simulator/leaderboard
  6. A Chinese translation of OpenSCENARIO
  7. OpenSCENARIO support inside CARLA: https://carla-scenariorunner.readthedocs.io/en/latest/openscenario_support.html
  8. Some expert scripts I collected and organized myself, for students who want to collect data with an expert for e2e training. Code: https://github.com/Kin-Zhang/carla-expert (remember to give it a star!). This is suited to students who will need to do this task later.

Cooperation meeting

The CARLA leaderboard cooperates with a conference/journal every year, running a workshop where results are summarized and methods are shared. For example: The CARLA Autonomous Driving Challenge 2021 is organized as part of the Machine Learning for Autonomous Driving Workshop at NeurIPS 2021.

Year  Venue
2019  CVPR
2020  NeurIPS
2021  NeurIPS
2022  TBD

The first-place team of each track is invited to share their work, and the organizing committee selects two more teams as invited guests based on the papers and technical reports submitted by the other contestants.

1. Brief introduction

Sensors

The sensors that can be used and their related configurations (see the screenshot on the official website)

Metrics

An agent's driving proficiency can be characterized by multiple metrics. For this leaderboard, a set of metrics was selected to help understand different aspects of driving. Although all routes share the same types of metrics, each route's values are calculated separately. The specific metrics are as follows:

  • Driving score: $R_i P_i$

    The key metric of the leaderboard, defined as the product of route completion and infraction penalty, where $R_i$ is the percentage of route $i$ completed and $P_i$ is the infraction penalty for route $i$.

  • Route completion: $R_i$

    The percentage of this route completed. There is one additional infraction that affects the route-completion calculation:

    • Off-road driving — If an agent drives off-road, that percentage of the route is not considered when computing the route completion score.

    Besides, some events interrupt the simulation and prevent the agent from continuing. In these cases, the route being simulated is shut down and the leaderboard moves on to the next one, triggering it normally:

    • Route deviation — If an agent deviates more than 30 meters from the assigned route.
    • Agent blocked — If an agent doesn’t take any actions for 180 simulation seconds.
    • Simulation timeout — If no client-server communication can be established in 60 seconds.
    • Route timeout — If the simulation of a route takes too long to finish.
  • Infraction penalty: $P_i = \prod_{j}^{\text{ped., ..., stop}} \left(p_{i}^{j}\right)^{\#\text{infractions}_{j}}$

    The leaderboard tracks many types of infractions. This metric aggregates all infractions triggered by the agent as a geometric series: the agent starts from an ideal base score of 1.0, and every infraction reduces it.

    Summary of all infraction coefficients:

    • Collisions with pedestrians: 0.50
    • Collisions with other vehicles: 0.60
    • Collisions with static elements: 0.65
    • Running a red light: 0.70
    • Running a stop sign: 0.80

Finally, before uploading I usually run local test scenarios myself; you can inspect them through the records. Whenever any of the above occurs, detailed information is recorded and displayed as a list, letting you check each route's metrics. Here is an example of a route where the agent ran a red light, so the score is multiplied by a penalty of 0.7; the whole route was completed, so the route score is 100, and 100 * 0.7 = 70 is the final score for this route:

{
    "index": 53,
    "infractions": {
        "collisions_layout": [],
        "collisions_pedestrian": [],
        "collisions_vehicle": [],
        "outside_route_lanes": [],
        "red_light": [
            "Agent ran a red light 3740 at (x=6.91, y=184.96, z=0.22)"
        ],
        "route_dev": [],
        "route_timeout": [],
        "stop_infraction": [],
        "vehicle_blocked": []
    },
    "meta": {
        "duration_game": 611.9500091187656,
        "duration_system": 589.3781015872955,
        "route_length": 974.0864898139865
    },
    "route_id": "RouteScenario_53",
    "scores": {
        "score_composed": 70.0,
        "score_penalty": 0.7,
        "score_route": 100.0
    },
    "status": "Completed"
},
  • If that is not clear, here is another example

    Here I collided with another vehicle twice and also ran a red light, so the penalty is 0.6 * 0.6 * 0.7 = 0.252; and since I completed the route, 100 * 0.252 = 25.2, so this route scores 25.2.

    {
        "index": 10,
        "infractions": {
            "collisions_layout": [],
            "collisions_pedestrian": [],
            "collisions_vehicle": [
                "Agent collided against object with type=vehicle.audi.etron and id=2373 at (x=242.536, y=88.114, z=0.196)",
                "Agent collided against object with type=vehicle.audi.etron and id=2373 at (x=241.405, y=79.254, z=0.105)"
            ],
            "outside_route_lanes": [],
            "red_light": [
                "Agent ran a red light 2236 at (x=249.4, y=46.07, z=0.2)"
            ],
            "route_dev": [],
            "route_timeout": [],
            "stop_infraction": [],
            "vehicle_blocked": []
        },
        "meta": {
            "duration_game": 467.50000696629286,
            "duration_system": 3596.4442558288574,
            "route_length": 1128.3240988201576
        },
        "route_id": "RouteScenario_10",
        "scores": {
            "score_composed": 25.2,
            "score_penalty": 0.252,
            "score_route": 100.0
        },
        "status": "Completed"
    },
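
The scoring logic above can be sketched in a few lines of Python. This is my own illustrative helper, not the official leaderboard code; it applies the penalty coefficients listed earlier and reproduces both example routes:

```python
# Illustrative sketch of the driving-score computation (my own helper, not
# the official leaderboard implementation). Coefficients are the ones above.
PENALTIES = {
    "collisions_pedestrian": 0.50,
    "collisions_vehicle": 0.60,
    "collisions_layout": 0.65,
    "red_light": 0.70,
    "stop_infraction": 0.80,
}

def driving_score(route_completion, infractions):
    """route_completion: percentage in [0, 100];
    infractions: dict mapping infraction type -> list of recorded events.
    Types without a coefficient (route_dev, vehicle_blocked, ...) only
    interrupt the route and do not multiply the penalty here."""
    penalty = 1.0
    for kind, events in infractions.items():
        penalty *= PENALTIES.get(kind, 1.0) ** len(events)
    return route_completion * penalty, penalty

# First example: one red light on a fully completed route.
score, p = driving_score(100.0, {"red_light": ["event"]})
print(round(score, 1), round(p, 3))   # 70.0 0.7

# Second example: two vehicle collisions plus one red light.
score, p = driving_score(100.0, {"collisions_vehicle": ["e1", "e2"],
                                 "red_light": ["e3"]})
print(round(score, 1), round(p, 3))   # 25.2 0.252
```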
    

2. Start configuration

First, to be clear: as of December 17, 2021, 11:29 AM, the CARLA Leaderboard only supports CARLA 0.9.10.1.

  • Python configuration

    conda create -n py37 python=3.7
    conda activate py37
    cd CARLA_0.9.10.1  # Change ${CARLA_ROOT} for your CARLA root folder
    
    pip3 install -r PythonAPI/carla/requirements.txt
    
  • Download the leaderboard

    cd CARLA_SUBMIT
    git clone -b stable --single-branch https://github.com/carla-simulator/leaderboard.git
    cd leaderboard # Change ${LEADERBOARD_ROOT} for your Leaderboard root folder
    pip3 install -r requirements.txt
    
    git clone -b leaderboard --single-branch https://github.com/carla-simulator/scenario_runner.git
    cd scenario_runner # Change ${SCENARIO_RUNNER_ROOT} for your Scenario_Runner root folder
    pip3 install -r requirements.txt
    
  • Add the paths: add the following to your .bashrc or .zshrc

    # << Leaderboard setting
    export CARLA_ROOT=~/CARLA_0.9.10.1
    export SCENARIO_RUNNER_ROOT=~/CARLA_SUBMIT/scenario_runner
    export LEADERBOARD_ROOT=~/CARLA_SUBMIT/leaderboard
    export PYTHONPATH="${CARLA_ROOT}/PythonAPI/carla/":"${SCENARIO_RUNNER_ROOT}":"${LEADERBOARD_ROOT}":"${CARLA_ROOT}/PythonAPI/carla/dist/carla-0.9.10-py3.7-linux-x86_64.egg":${PYTHONPATH}
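
After reloading your shell, you can sanity-check that these variables are set before running anything. This is a tiny helper of my own, not part of the leaderboard tooling:

```python
# Tiny sanity check (my own helper, not part of the leaderboard tooling):
# report which required environment variables are unset or empty.
import os

REQUIRED_VARS = ("CARLA_ROOT", "SCENARIO_RUNNER_ROOT", "LEADERBOARD_ROOT")

def missing_vars(env):
    """Return the required variables that are unset or empty in `env`."""
    return [v for v in REQUIRED_VARS if not env.get(v)]

if __name__ == "__main__":
    missing = missing_vars(os.environ)
    if missing:
        print("Missing:", ", ".join(missing))
    else:
        print("All leaderboard paths are set.")
```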
    

Manual testing

At this point the required environment is in place, and you can manually test whether the evaluation platform works:

cd CARLA_0.9.10.1
./CarlaUE4.sh -quality-level=Epic -world-port=2000 -resx=800 -resy=600

Open another terminal:

cd ~/CARLA_SUBMIT/leaderboard
touch test_run.sh
chmod +x test_run.sh
gedit test_run.sh

Paste this in:

# Parameterization settings. These will be explained in 2.2. Now simply copy them to run the test.
export SCENARIOS=${LEADERBOARD_ROOT}/data/all_towns_traffic_scenarios_public.json
export ROUTES=${LEADERBOARD_ROOT}/data/routes_devtest.xml
export REPETITIONS=1
export DEBUG_CHALLENGE=1
export TEAM_AGENT=${LEADERBOARD_ROOT}/leaderboard/autoagents/human_agent.py
export CHECKPOINT_ENDPOINT=${LEADERBOARD_ROOT}/results.json
export CHALLENGE_TRACK_CODENAME=SENSORS

./scripts/run_evaluation.sh

Finally, run this script:

./test_run.sh

The end result looks like this:

In the newly opened window you can control the vehicle with WASD, and pedestrians will suddenly appear all over the road.

At the end, the terminal will print the results, like this:
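
The same numbers are also written to the results.json file specified by CHECKPOINT_ENDPOINT. Here is a quick sketch of my own (not official tooling) for summarizing the route records it contains, assuming records in the format shown in the earlier examples:

```python
# Summarize per-route records from a leaderboard results file (my own sketch,
# not official tooling). Each record follows the format shown earlier.
import json

def summarize(records):
    """Return (route_id, composed score, {infraction type: count}) per route."""
    out = []
    for r in records:
        counts = {k: len(v) for k, v in r["infractions"].items() if v}
        out.append((r["route_id"], r["scores"]["score_composed"], counts))
    return out

example = json.loads("""[{
    "route_id": "RouteScenario_53",
    "infractions": {"red_light": ["Agent ran a red light 3740"],
                    "collisions_vehicle": []},
    "scores": {"score_composed": 70.0, "score_penalty": 0.7, "score_route": 100.0}
}]""")
for route, score, counts in summarize(example):
    print(route, score, counts)   # RouteScenario_53 70.0 {'red_light': 1}
```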

3. Build your own agent

This part is large, and an agent cannot be built directly in one step; after all, completing such a system involves many modules [if not end-to-end, you first need object detection → then behavior planning → local planning → trajectory planning → controller, and so on]. For example, this open-source project:

ICRA 2021: Pylot: A Modular Platform for Exploring Latency-Accuracy Tradeoffs in Autonomous Vehicles

See the official website for details :https://leaderboard.carla.org/get_started/#3-creating-your-own-autonomous-agent

This will be covered separately later; after the review I will walk you through the open-source version.

In the next section we will directly use an open-source agent to go through the whole flow: local testing → submitting to the cloud test.

4. How to submit

First of all, for the CARLA leaderboard to evaluate your code, it needs to be packaged into a docker image. Before submitting to the cloud, you must first run your agent with the leaderboard in a local docker container.

The local docker build is also based on the official Dockerfile.master file. By default it already handles the parts for leaderboard and scenario_runner, and we can set up our own environment between the “BEGINNING OF USER COMMANDS” and “END OF USER COMMANDS” markers.

  1. The official base image is FROM nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04

Create the submission folder

Following the first two steps, the folder looks like this: leaderboard and scenario_runner are the official clones from step two, and team_code holds your own code and model files. Everything your code needs at runtime should go inside it.

Build the local docker image

First, before anything else, we need to export all the required paths in the terminal again, because the docker build script extracts this information from them:

export CARLA_ROOT=~/CARLA_0.9.10.1
export SCENARIO_RUNNER_ROOT=~/CARLA_SUBMIT/scenario_runner
export LEADERBOARD_ROOT=~/CARLA_SUBMIT/leaderboard
export TEAM_CODE_ROOT=~/CARLA_SUBMIT/team_code

Then you have to specify what your own team_code agent is and what needs to be sourced:

  • Find the dockerfile provided by the leaderboard at ${LEADERBOARD_ROOT}/scripts/Dockerfile.master. All the dependencies required by the scenario runner and leaderboard are already set up there. Add the dependencies and other packages required by your agent; we recommend adding new commands in the area delimited by the “BEGINNING OF USER COMMANDS” and “END OF USER COMMANDS” labels.

    • Expand to see the code. For example, declare your own agent file in Dockerfile.master like this:

      ########################################################################################################################
      ########################################################################################################################
      ############ BEGINNING OF USER COMMANDS ############
      ########################################################################################################################
      ########################################################################################################################
      
      ENV TEAM_AGENT ${TEAM_CODE_ROOT}/npc_agent.py
      ENV TEAM_CONFIG ${TEAM_CODE_ROOT}/YOUR_CONFIG_FILE
      ENV CHALLENGE_TRACK_CODENAME SENSORS
      
      ########################################################################################################################
      ########################################################################################################################
      ############ END OF USER COMMANDS ############
      ########################################################################################################################
      ########################################################################################################################
      

      For example, Pylot's Dockerfile.master looks like this (see the related issue):

      ########################################################################################################################
      ########################################################################################################################
      ############ BEGINNING OF USER COMMANDS ############
      ########################################################################################################################
      ########################################################################################################################
      
      RUN apt-get update && apt-get install -y clang libgeos-dev python-opencv libqt5core5a libeigen3-dev cmake qtbase5-dev python3-tk
      RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
      ENV PATH="/root/.cargo/bin/:${PATH}"
      RUN rustup default nightly
      RUN packages='wheel setuptools setuptools-rust fire' && pip install ${packages}
      RUN git clone https://github.com/erdos-project/erdos.git && cd erdos && python3 python/setup.py install --user
      
      RUN packages='absl-py cvxpy gdown lapsolver matplotlib==2.2.4 motmetrics numpy<1.17 open3d-python==0.5.0.0 opencv-python>=4.1.0.25 opencv-contrib-python>=4.1.0.25 pillow>=6.2.2 pytest scikit-image<0.15 scipy==1.2.2 shapely==1.6.4 tensorflow-gpu==1.15.4 torch==1.3.1 torchvision==0.2.1 filterpy==1.4.1 numba==0.50.0 scikit-learn==0.20.0 imgaug==0.2.8 nonechucks==0.3.1 cython' \
      	&& pip install ${packages}
      ENV PYTHONPATH ${TEAM_CODE_ROOT}/:${TEAM_CODE_ROOT}/dependencies/:${PYTHONPATH}
      ENV TEAM_AGENT ${TEAM_CODE_ROOT}/pylot/simulation/challenge/ERDOSAgent.py
      ENV TEAM_CONFIG ${TEAM_CODE_ROOT}/pylot/simulation/challenge/challenge.conf
      ENV CHALLENGE_TRACK_CODENAME MAP
      ENV PYLOT_HOME ${TEAM_CODE_ROOT}/
      
      RUN cd ${TEAM_CODE_ROOT}/dependencies/frenet_optimal_trajectory_planner && rm -r build && ./build.sh
      RUN cd ${TEAM_CODE_ROOT}/dependencies/hybrid_astar_planner && rm -r build && ./build.sh
      RUN cd ${TEAM_CODE_ROOT}/dependencies/rrt_star_planner && rm -r build && ./build.sh
      
      ########################################################################################################################
      ########################################################################################################################
      ############ END OF USER COMMANDS ############
      ########################################################################################################################
      ########################################################################################################################
      

      Another example: the USER part of TransFuser's Dockerfile looks like this:

      ENV PYTHONPATH "/workspace":${PYTHONPATH}
      
      RUN apt-get update && apt-get install -y --no-install-recommends libgtk2.0-dev
      
      RUN pip install -r /workspace/team_code/requirements.txt
      
      ENV TEAM_AGENT ${TEAM_CODE_ROOT}/transfuser_agent.py
      ENV TEAM_CONFIG ${TEAM_CODE_ROOT}/model_ckpt/transfuser
      ENV CHALLENGE_TRACK_CODENAME SENSORS
      
      
  • Update the TEAM_AGENT variable to point to your agent file, the entry file that inherits from AutonomousAgent. Do not change the “/workspace/team_code” part of the path. If your agent needs a configuration file for initialization, set the TEAM_CONFIG variable to that file.

  • Make sure everything you need to source is in ${HOME}/agent_sources.sh, because no source call added anywhere else will be run. This file is sourced automatically before your agent runs in the cloud.

Finally, run the official make_docker.sh to build the docker image:

${LEADERBOARD_ROOT}/scripts/make_docker.sh

This step runs very slowly. A few small pitfalls to watch for (if you follow Plan A there is no need to read this, as those bugs have already been fixed):

  1. The original dockerfile.master has a pip upgrade step, but it fails.

    So the right fixes are: (1) do not upgrade pip at all, or (2) edit dockerfile.master to pin the pip upgrade below version 21.0; by default it goes to 21.3 and then always errors out.

    python 3.5 needs to match with pip3 version < 21.0

  2. rm -rf cannot find the path it is supposed to delete.

    Just delete that line.

  3. A system library is missing: libgeos-dev.

    Just add it.

Related: Pull request

Example and test

export CARLA_SUBMIT_FOLDER=~/MMFN_SUBMIT
export CARLA_ROOT=~/CARLA_0.9.10.1
export SCENARIO_RUNNER_ROOT=${CARLA_SUBMIT_FOLDER}/scenario_runner
export LEADERBOARD_ROOT=${CARLA_SUBMIT_FOLDER}/leaderboard
export TEAM_CODE_ROOT=${CARLA_SUBMIT_FOLDER}/team_code
${LEADERBOARD_ROOT}/scripts/make_docker.sh

Test whether the agent inside your docker image can be evaluated:

docker run -it --net=host --gpus all leaderboard-user /bin/bash
./leaderboard/scripts/run_evaluation.sh

Register an account

See the official website for more details; the following is a simplified version:

  1. Register login :https://app.alphadrive.ai/teams

  2. Create a new team

  3. Go to the CARLA benchmark: https://app.alphadrive.ai/benchmarks/3/overview, then click to apply for the carla leaderboard

  4. Set up the alphadrive key on your computer:

    curl http://dist.alphadrive.ai/install-ubuntu.sh | sh -
    
  5. Log in. The last time I tried, the key page was in a 404 state. I had run this before and remember there being a small pitfall, but I forgot where it was; it is probably a two-factor authentication step.

    alpha login
    
    • I will try again once the key page above is no longer 404 (December 17, 2021 9:37 PM) → the staff finally replied (updated January 30, 2022 10:47 AM):

      The key had expired. I do not know whether that was because the carla leaderboard was paused, but the installed alpha tool itself should not expire.

    That is, the terminal displays a code (here QKMB-LBSX); fill it in, and it then shows logged in.

  6. After authentication you can submit, but it is best to test locally first:

    alpha benchmark:submit --split {2, 3} ${YOUR_DOCKERIMAGE_NAME}
    

    2 is the MAP track and 3 is the SENSORS track; by default ${YOUR_DOCKERIMAGE_NAME} is usually leaderboard-user:latest.

    For example, I submitted to the MAP track and waited for the docker image to be pushed up.

5. The result shows

Once the upload succeeds, you can see a submission on the web page:

alphadrive actually provides a free cloud to run your docker image and look at the results. In general it mostly takes around 60 hours: the better your model drives, the longer it runs and the slower the outcome may arrive. The result is not very detailed; only one such report is given, although it does show more about what the infractions are.

So it is recommended to do a round of debugging locally and make sure everything is ready before submitting; otherwise it is easy to wait in vain. Also, each team is limited to 200 hours of submission time and 20 submissions, although from my own tests this quota appears to apply per month.




Copyright notice
This article was written by [Kin__ Zhang]; please include the original link when reposting. Thanks.
https://yzsam.com/2022/176/202206241927443396.html
