
Render points with per-fragment depth. #6508

Open · wants to merge 2 commits into `main`

Conversation

@kpreid (Collaborator) commented Jun 5, 2024

What

Each point in a point cloud is now rendered with depth that is approximately a full sphere, rather than a flat camera-facing billboard. This is a very approximate calculation, but one which is good as long as the sphere is not too off-axis; it's the same kind of approximation as already used by the brightness shading of the points. The primary goal of this change is to make “ball and stick” constructions of point and line entities look better.
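The geometric idea can be sketched outside the shader. This is an illustrative Rust sketch, not the actual WGSL (the function name and the view-aligned simplification are mine): treat the billboard as the sphere's cross-section and push each fragment toward the camera by the sphere's thickness at that fragment.

```rust
/// Sketch (not the shader source): height of a sphere's surface above its
/// camera-facing billboard plane, at a fragment `dist_from_center` away
/// from the point center. `None` means the fragment lies outside the
/// sphere's silhouette and would be discarded.
fn sphere_depth_offset(radius: f32, dist_from_center: f32) -> Option<f32> {
    let r2 = radius * radius;
    let d2 = dist_from_center * dist_from_center;
    if d2 > r2 {
        return None;
    }
    // Pythagoras: the sphere bulges sqrt(r² - d²) out of the billboard plane.
    Some((r2 - d2).sqrt())
}

fn main() {
    // At the center the sphere bulges by a full radius; at the rim, not at all.
    assert_eq!(sphere_depth_offset(2.0, 0.0), Some(2.0));
    assert_eq!(sphere_depth_offset(2.0, 2.0), Some(0.0));
    assert_eq!(sphere_depth_offset(2.0, 3.0), None);
}
```

The PR additionally has to project this world-space offset into depth-buffer space, which is where the "good as long as the sphere is not too off-axis" caveat comes from.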

Here are screenshots of the “helix” demo before and after this change. Notice that lines no longer extend to the centers of the points, the gray point on the right looks more like a bead on the line, and the gray point on the left is clipped by intersecting the red point.

[Screenshot: before, 2024-06-05 09:19:58]
[Screenshot: after, 2024-06-05 09:20:14]

I intended this algorithm to be equally applicable to lines (making them render as cylinders), but that is not implemented yet.

At least on my machine, it has no observable increase in frame time when testing with demos such as the open_photogrammetry_format example. However, I haven't tested extreme overdraw conditions.

Checklist

  • I have read and agree to Contributor Guide and the Code of Conduct
  • I've included a screenshot or gif (if applicable)
  • I have tested the web demo (if applicable):
  • The PR title and labels are set such as to maximize their usefulness for the next release's CHANGELOG
  • If applicable, add a new check to the release checklist!

To run all checks from `main`, comment on the PR with `@rerun-bot full-check`.

@abey79 added the labels `🔺 re_renderer` (affects re_renderer itself) and `include in changelog` on Jun 5, 2024
@abey79 (Member) commented Jun 6, 2024

@rerun-bot full-check

Edit: that fails on external contributor PRs.

@abey79 (Member) commented Jun 6, 2024

@kpreid thanks for the contribution—looks great! I'll leave it to our GPU expert @Wumpf to review it, but this will likely have to wait until next Tuesday when he's back in the office.

@emilk emilk requested a review from Wumpf June 10, 2024 07:47
@Wumpf (Member) left a comment:

Thank you so much for your contribution!

I previously considered implementing this but got too nervous about the performance implications: today, we have users complaining about extremely slow point cloud rendering due to overdraw. This typically happens in "degenerate" situations when zooming out on a large point cloud, so it's a bit of a corner case, but my fear is that changing depth exacerbates the problem, or at least makes it show up for hidden point clouds where it previously wouldn't. All this is highly speculative; I never got around to doing a proper test series. (Also, re_renderer today has almost no guarantees on draw order, so it may well make little difference.)

It does look a lot better for large spheres, so I'd love to get this in (with the suggested code re-use and the ability to revert to flat), but a little more thorough round of testing with large point clouds, with both small & big points, is in order. I'm curious in particular about Apple Silicon & desktop GPU performance under high overdraw. Happy to assist with this.
Note that the timings the viewer optionally shows at the top are CPU timings only, so we have to be careful to check the whole frame time here, which can be done via puffin. (We haven't done much GPU profiling overall so far; adding wgpu-profiler and overall GPU timing numbers has been on my backlog for a very long time.)
Depending on the results, we'll decide whether we can leave this always on and leave optimization for the future, or whether we already need to do something like limiting this feature to certain point clouds.

@@ -120,6 +133,7 @@ fn vs_main(@builtin(vertex_index) vertex_idx: u32) -> VertexOut {
out.world_position = quad.pos_in_world;
out.point_center = point_data.pos;
out.picking_instance_id = point_data.picking_instance_id;
out.sphere_radius_projected_depth = sphere_radius_projected_depth(point_data.pos, world_radius);
@Wumpf (Member) commented:

This only makes sense when we're actually rendering a sphere. Right now, we use this shader for both flat circles and spheres.
All of the sphere calculation & depth adjustment has to be conditional on `has_any_flag(batch.flags, FLAG_DRAW_AS_CIRCLES)`, otherwise we might get weird artifacts.

Since the threat of arbitrary depth changes will disable early z and hierarchical depth, I think we should ideally use different shaders for both. Implementing that is fairly cumbersome, so if we can prove it's not a big deal I'd rather not go there. However, users previously reported issues for high point cloud overdraw (especially on zoomed-out point clouds) and this is likely to make it worse.
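The gating suggested here could look roughly like this in Rust (a sketch; `has_any_flag` and `FLAG_DRAW_AS_CIRCLES` are named in the comment above, but the bit value and the surrounding code are assumptions for illustration):

```rust
// Bit value assumed for illustration; the real flag lives in the
// point-cloud batch flag definitions.
const FLAG_DRAW_AS_CIRCLES: u32 = 1 << 0;

/// Mirrors the `has_any_flag` helper referenced in the review.
fn has_any_flag(flags: u32, mask: u32) -> bool {
    flags & mask != 0
}

fn main() {
    // Sphere depth only applies when we are *not* drawing flat circles.
    let sphere_batch_flags = 0u32;
    let apply_sphere_depth = !has_any_flag(sphere_batch_flags, FLAG_DRAW_AS_CIRCLES);
    assert!(apply_sphere_depth);

    let circle_batch_flags = FLAG_DRAW_AS_CIRCLES;
    assert!(!has_any_flag(circle_batch_flags, FLAG_DRAW_AS_CIRCLES) == false);
}
```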

@kpreid (Collaborator, Author) replied:

Since the threat of arbitrary depth changes will disable early z and hierarchical depth,

I believe that early Z is already disabled because the fragment shader uses discard. I'm not familiar with optimizations based on hierarchical depth so I can't comment on that.

@Wumpf (Member) commented Jun 11, 2024:

Hmm, I think you're right. I'm a bit hazy on the details, but since depth test + write is supposed to behave like an atomic operation, `discard` would already break that. Found this confirmed here: https://stackoverflow.com/a/68602660

In a way this is great news: what we're doing here won't affect performance nearly as much as I feared (on paper), and right now we only care about whether (and how much) we regress 🙂

Good that you bring up hierarchical Z: for hierarchical Z to still work in some fashion, we should use conservative depth, which is quite easy :) https://docs.rs/wgpu/latest/wgpu/core/naga/enum.ConservativeDepth.html
Unfortunately this hasn't made it into the WGSL spec yet; it's just a Naga extension. So if this is just a drop-in and won't cause issues on web (I don't know if that's the case!), we should just use it. Otherwise a TODO note will suffice.

@@ -0,0 +1,52 @@
#import <../global_bindings.wgsl>

@Wumpf (Member) commented:

This is a pretty neat estimate, and I appreciate how well you documented all of this, but...

> To get it right, we'd need to, essentially, compute the ray-tracing of a sphere.

We're already doing literally that. It's a bit buried and we need to get its results out (or rather pass it in), but we do so as part of the coverage estimation.
The calls go as: coverage -> sphere_quad_coverage -> ray_sphere_distance
With that we have the exact world position already. We can then project it and use the depth from there.
Could the coverage computation use something cheaper instead? Maybe! But we use it to draw accurate spheres in the first place, so as things stand it would be silly to do an approximation here after the fact when we already have everything we need.
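Turning that exact world-space hit position into a depth value is then a single projection step. A minimal Rust sketch (the real code is WGSL in the shader; the matrix layout and names here are illustrative):

```rust
/// Project a world-space position with a column-major clip-from-world
/// matrix and return the NDC depth (z/w) one would write to frag_depth.
fn ndc_depth(clip_from_world: &[[f32; 4]; 4], p: [f32; 3]) -> f32 {
    let hom = [p[0], p[1], p[2], 1.0];
    let mut clip = [0.0_f32; 4];
    for (col, column) in clip_from_world.iter().enumerate() {
        for (row, value) in column.iter().enumerate() {
            clip[row] += value * hom[col];
        }
    }
    // Perspective divide; wgpu's depth range is [0, 1].
    clip[2] / clip[3]
}

fn main() {
    // With an identity matrix, the depth is just the z coordinate.
    let identity = [
        [1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ];
    assert!((ndc_depth(&identity, [0.0, 0.0, 0.5]) - 0.5).abs() < 1e-6);
}
```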

Wumpf added a commit that referenced this pull request Jul 24, 2024
…`Ellipsoids`. (#6953)

### What

With these changes, `Boxes3D` and `Ellipsoids` can now be viewed as
solid objects. This is part of the work on #1361 and the mechanisms
added here will generalize to other shapes.

* Add new component type `SolidColor`. This is identical to `Color`
except that, on shapes where it applies, it sets the color of surfaces
instead of lines. Whether its color is transparent controls whether the
surfaces are drawn at all.
* Add `SolidColor` to the `Boxes3D` and `Ellipsoids` archetypes.
* Add support for solid colors to those archetypes’ visualizers.
* Add `proc_mesh::SolidCache` to cache the calculation of triangle
meshes.
* Add `proc_mesh::ProcMeshKey::Cube` to allow the cached mesh mechanism
to generate solid cubes.
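The cache described above might be structured roughly like this (purely a sketch; only `ProcMeshKey::Cube` and the cache's purpose come from the PR, the rest is assumed):

```rust
use std::collections::HashMap;
use std::rc::Rc;

/// Keys identifying procedurally generated meshes; `Cube` is from the PR.
#[derive(Clone, PartialEq, Eq, Hash)]
enum ProcMeshKey {
    Cube,
}

/// Stand-in for a generated triangle mesh.
struct Mesh {
    triangles: usize,
}

/// Sketch of a `SolidCache`: compute each mesh once, hand out shared handles.
#[derive(Default)]
struct SolidCache {
    entries: HashMap<ProcMeshKey, Rc<Mesh>>,
}

impl SolidCache {
    fn get_or_create(&mut self, key: ProcMeshKey, build: impl FnOnce() -> Mesh) -> Rc<Mesh> {
        self.entries
            .entry(key)
            .or_insert_with(|| Rc::new(build()))
            .clone()
    }
}

fn main() {
    let mut cache = SolidCache::default();
    let first = cache.get_or_create(ProcMeshKey::Cube, || Mesh { triangles: 12 });
    // The second lookup reuses the cached mesh; the build closure is not called.
    let second = cache.get_or_create(ProcMeshKey::Cube, || Mesh { triangles: 999 });
    assert!(Rc::ptr_eq(&first, &second));
    assert_eq!(second.triangles, 12);
}
```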

![Screenshot 2024-07-20 at 17 36 01](https://github.com/user-attachments/assets/ab6b2d1b-20d0-471c-ba49-25d4e10638ea)
![Screenshot 2024-07-20 at 17 35 12](https://github.com/user-attachments/assets/2d6ce740-5bd5-4475-a018-4d286adf2c5b)

Future work (things this PR *doesn't* do):

* Viewer UI needs a version of the color picker that lets you pick
"fully transparent".
* The line renderer does not play well with adjacent faces, causing line
width to be halved some of the time. This could be fixed with something
like #6508, simple depth offsets, or something else.
* Getting naming and semantics right:
    * Should the `colors` of `Boxes3D` be renamed `line_colors`, or something like that? I think so, but that would be a breaking change, so I didn't do it in this PR.
    * Should `Color` be renamed?
    * Should there be all 3 of `SolidColor`, `LineColor`, and `Color` with some override policy between them?
    * Should `SolidColor` be called `SurfaceColor` instead?
    * How should `SolidColor` interact with annotation contexts?
* Figure out whether instanced meshes are the right choice for
performance.

### Checklist
* [X] I have read and agree to [Contributor Guide](https://github.com/rerun-io/rerun/blob/main/CONTRIBUTING.md) and the [Code of Conduct](https://github.com/rerun-io/rerun/blob/main/CODE_OF_CONDUCT.md)
* [X] I've included a screenshot or gif (if applicable)
* [x] I have tested the web demo (if applicable):
* Using examples from latest `main` build:
[rerun.io/viewer](https://rerun.io/viewer/pr/6953?manifest_url=https://app.rerun.io/version/main/examples_manifest.json)
* Using full set of examples from `nightly` build:
[rerun.io/viewer](https://rerun.io/viewer/pr/6953?manifest_url=https://app.rerun.io/version/nightly/examples_manifest.json)
* [X] The PR title and labels are set such as to maximize their usefulness for the next release's CHANGELOG
* [X] If applicable, add a new check to the [release checklist](https://github.com/rerun-io/rerun/blob/main/tests/python/release_checklist)!
* [X] I have noted any breaking changes to the log API in `CHANGELOG.md` and the migration guide
    * The `Ellipsoids` archetype has a renamed field, but that isn't released yet, so it doesn't need noting.

- [PR Build Summary](https://build.rerun.io/pr/6953)
- [Recent benchmark results](https://build.rerun.io/graphs/crates.html)
- [Wasm size tracking](https://build.rerun.io/graphs/sizes.html)

To run all checks from `main`, comment on the PR with `@rerun-bot full-check`.

---------

Co-authored-by: Andreas Reich <[email protected]>