
Inaccurate GPU Lidar #2743

Open · amock opened this issue Jan 29, 2025 · 3 comments

amock commented Jan 29, 2025

Hi Gazebo Dev Team,

First of all, thank you for the great software project. I've been using Gazebo Classic for a long time and it helped me a lot. In Gazebo Classic, switching from the CPU ray plugin to the GPU ray plugin resulted in a slight decrease in lidar accuracy. This was particularly noticeable when the lidar hit surfaces at shallow angles, though in most cases, the accuracy remained sufficient for our needs. However, after migrating from Gazebo Classic, the new "gpu_lidar" plugin appeared to be even more approximate than before. In my use cases, where scans are expected to come from spinning devices, this level of inaccuracy became problematic.

Below is a detailed problem description with accompanying images:

Test 1: Gazebo Classic

Setup:

  • Ubuntu 22
  • ROS 2 humble
  • Gazebo Classic version: 11.10.2 (installed from official APT repositories)
  • Tested code: https://github.com/amock/rmcl_examples (I changed the field of view, number of samples, and resolution to match tests 2 & 3.)

CPU Ray Plugin

XML:

<gazebo reference="velodyne">
<sensor type="ray" name="velodyne-vlp16">
    <pose>0 0 0 0 0 0</pose>
    <visualize>false</visualize>
    <update_rate>20</update_rate>
    <ray>
    <scan>
        <horizontal>
            <samples>128</samples>
            <resolution>1</resolution>
            <min_angle>${-M_PI}</min_angle>
            <max_angle>${M_PI}</max_angle>
        </horizontal>
        <vertical>
            <samples>32</samples>
            <resolution>1</resolution>
            <min_angle>${-M_PI / 16.0}</min_angle>
            <max_angle>${M_PI / 4.0}</max_angle>
        </vertical>
    </scan>
    <range>
        <min>0.3</min>
        <max>130.0</max>
        <resolution>0.001</resolution>
    </range>
    <noise>
        <type>gaussian</type>
        <mean>0.0</mean>
        <stddev>0.0</stddev>
    </noise>
    </ray>
    <plugin name="gazebo_ros_laser_controller" filename="libgazebo_ros_velodyne_laser.so">
        <ros>
            <namespace>/velodyne</namespace>
            <remapping>~/out:=points</remapping>
        </ros>
        <frame_name>velodyne</frame_name>
        <organize_cloud>true</organize_cloud>
        <min_range>0.9</min_range>
        <max_range>130.0</max_range>
        <gaussian_noise>0.008</gaussian_noise>
    </plugin>
</sensor>
</gazebo>

Results (left RViz, right Gazebo):

Image

GPU Ray

XML:

<gazebo reference="velodyne">
<sensor type="gpu_ray" name="velodyne-vlp16">
    <pose>0 0 0 0 0 0</pose>
    <visualize>false</visualize>
    <update_rate>20</update_rate>
    <ray>
    <scan>
        <horizontal>
            <samples>128</samples>
            <resolution>1</resolution>
            <min_angle>${-M_PI}</min_angle>
            <max_angle>${M_PI}</max_angle>
        </horizontal>
        <vertical>
            <samples>32</samples>
            <resolution>1</resolution>
            <min_angle>${-M_PI / 16.0}</min_angle>
            <max_angle>${M_PI / 4.0}</max_angle>
        </vertical>
    </scan>
    <range>
        <min>0.3</min>
        <max>130.0</max>
        <resolution>0.001</resolution>
    </range>
    <noise>
        <type>gaussian</type>
        <mean>0.0</mean>
        <stddev>0.0</stddev>
    </noise>
    </ray>
    <plugin name="gazebo_ros_laser_controller" filename="libgazebo_ros_velodyne_laser.so">
        <ros>
            <namespace>/velodyne</namespace>
            <remapping>~/out:=points</remapping>
        </ros>
        <frame_name>velodyne</frame_name>
        <organize_cloud>true</organize_cloud>
        <min_range>0.9</min_range>
        <max_range>130.0</max_range>
        <gaussian_noise>0.008</gaussian_noise>
    </plugin>
</sensor>
</gazebo>

Results (left RViz, right Gazebo):

Image

Note: Accuracy is slightly degraded compared to the CPU ray plugin, but still acceptable.

Test 2: Gazebo Fortress

Setup:

GPU Raycaster:

<gazebo reference="laser3d">
    <sensor name="laser3d" type="gpu_lidar">
      <pose relative_to='laser3d'>0 0 0 0 0 0</pose>
      <topic>/model/robot/cloud</topic>
      <update_rate>10</update_rate>
      <ray>
        <scan>
          <horizontal>
              <samples>128</samples>
              <resolution>1</resolution>
              <min_angle>${-M_PI}</min_angle>
              <max_angle>${M_PI}</max_angle>
          </horizontal>
          <vertical>
              <samples>32</samples>
              <resolution>1</resolution>
              <min_angle>${-M_PI / 16.0}</min_angle>
              <max_angle>${M_PI / 4.0}</max_angle>
          </vertical>
        </scan>
        <range>
            <min>0.2</min>
            <max>100.0</max>
            <resolution>0.001</resolution>
        </range>
      </ray>
      <always_on>1</always_on>
      <visualize>true</visualize>
      <frame_id>laser3d</frame_id>
      <gz_frame_id>laser3d</gz_frame_id>
      <ign_frame_id>laser3d</ign_frame_id>
    </sensor>
</gazebo>

Results (left RViz, right Gazebo):

Image

Test 3: Gazebo Harmonic

  • Ubuntu 24
  • ROS 2 jazzy
  • Gazebo Harmonic, installed from official APT repositories (version: see image)
  • Code: same as test 2

Same effects as in Gazebo Fortress:

Image

Additional Tests

  • Same effects for ros2 launch ros_gz_sim_demos gpu_lidar_bridge.launch.py
  • Briefly tested under ROS 2 rolling. Problems remain.

Questions

  1. Did I do something wrong while parameterizing the new "gpu_lidar" plugin?
  2. Is this problem known?
  3. What causes the GPU LiDAR to have lower accuracy compared to the Gazebo Classic version?
  4. Unrelated to this issue: for Gazebo Harmonic, changing "gpu_lidar" to "lidar" did not work at all. Are the XML interfaces for "lidar" different from those for "gpu_lidar"?

Thanks in advance for the help,
Alex

iche033 (Contributor) commented Jan 29, 2025

The implementation of gpu_lidar with the default render engine in gz-sim (ogre2) is slightly different from the one in gazebo-classic. The <samples> parameter plays a more important role in the accuracy of the readings. If you increase this number, you should get more accurate results, but at the cost of higher resource usage.
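
For example, here is a minimal sketch of the horizontal block from the configs above with the sample count raised; the 512 value is an illustrative assumption, not a tuned recommendation:

    <horizontal>
        <!-- 4x the original 128 samples: more rays per revolution,
             more accurate ranges, but higher GPU cost -->
        <samples>512</samples>
        <resolution>1</resolution>
        <min_angle>-3.14159</min_angle>
        <max_angle>3.14159</max_angle>
    </horizontal>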

Alternatively, you can try switching the sensor system's render engine to ogre (in your world SDF file), which uses the GPU lidar implementation from gazebo-classic, i.e.

    <plugin
      filename="gz-sim-sensors-system"
      name="gz::sim::systems::Sensors">
      <render_engine>ogre</render_engine>
    </plugin>

A CPU-based lidar isn't available in gz-sim yet. I think there's some interest in adding support for this. Some related work was started in #2514, but more needs to be done to make it a sensor.

amock (Author) commented Feb 3, 2025

Thanks! Going back to ogre(1) actually resolved the issue for now. However, this doesn't seem like an ideal solution, as it appears that I can't switch the rendering engine for just this specific sensor, only for all sensors at once. Is that correct?

Suppose I have a very sparse lidar and want to simulate it accurately using the ogre2 implementation. Would that require increasing the number of rays, simulating the scanner, and then downsampling afterward to make it sparse again?

Alternatively, would it be feasible to introduce a parameter that ensures a minimum accuracy level, even at the cost of increased runtime?

iche033 (Contributor) commented Feb 5, 2025

> It appears that I can't switch the rendering engine for just this specific sensor, only for all sensors at once. Is that correct?

Yes, that's correct.

> Suppose I have a very sparse lidar and want to simulate it accurately using the ogre2 implementation. Would that require increasing the number of rays, simulating the scanner, and then downsampling afterward to make it sparse again?

> Alternatively, would it be feasible to introduce a parameter that ensures a minimum accuracy level, even at the cost of increased runtime?

Yes, that would be the workaround. Ideally, Gazebo would do all of that for you. The <resolution> SDF tag is supposed to help with this, but it is not yet implemented in Gazebo.
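
To make that workaround concrete, here is a minimal sketch for a hypothetical 16-channel device (the 4x factor and the channel count are assumptions for illustration): configure the sensor with denser vertical sampling, then keep every 4th ring in a downstream node to restore the device's real sparsity.

    <vertical>
        <!-- hypothetical real device: 16 channels over [-pi/16, pi/4] -->
        <!-- simulate 4x denser (64 channels) for accuracy; a downstream
             node keeps every 4th ring to recover the 16-channel layout -->
        <samples>64</samples>
        <resolution>1</resolution>
        <min_angle>-0.19635</min_angle>
        <max_angle>0.785398</max_angle>
    </vertical>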
