Commit

chore: formatting
seankmartin committed Mar 14, 2024
1 parent 83d95fc commit daa1f89
Showing 2 changed files with 6 additions and 4 deletions.
4 changes: 3 additions & 1 deletion src/layer/image/index.ts
```diff
@@ -322,7 +322,9 @@ export class ImageUserLayer extends Base {
       specification,
       VOLUME_RENDERING_DEPTH_SAMPLES_JSON_KEY,
       (volumeRenderingDepthSamplesTarget) =>
-        this.volumeRenderingGain.restoreState(volumeRenderingDepthSamplesTarget),
+        this.volumeRenderingGain.restoreState(
+          volumeRenderingDepthSamplesTarget,
+        ),
     );
   }
   toJSON() {
```
6 changes: 3 additions & 3 deletions src/volume_rendering/README.md
```diff
@@ -64,14 +64,14 @@ The main modules in this folder are:
 2. `backend.ts` - extends the original chunk manager with a volume-rendering-specific chunk manager to establish chunk priority.
 3. `volume_render_layer.ts` - links up to UI parameters from the `ImageUserLayer`, binds together callbacks and chunk management, etc. The drawing operation happens here. For each visible chunk, all of the shader parameters (e.g. the model view projection matrix) are passed to the shader, and then each chunk that is in GPU memory is processed separately and drawn. The state is considered ready once every chunk in GPU memory has been drawn. The vertex shader and the fragment shader are defined in this file. Additionally, the user-defined fragment shader is injected into the fragment shader here.
 
-* The vertex shader essentially passes normalised screen coordinates for each chunk, along with the inverse of the model view projection matrix, to get back from screen space to model space.
-* The fragment shader uses this information to determine the start and end point of each ray based on the screen position given by the vertex shader. The fragment shader then establishes how color is accumulated along the rays. The ray start and end points are set up such that the rays all lie within the view-clipping bounds and volume bounds. Finally, the rays are marched through that small clipping box, providing the `curChunkPosition` at each step and also allowing access to the scalar voxel value via `getDataValue()` for the nearest voxel, or `getInterpolatedDataValue()` for a weighted contribution from the nearest eight voxels. The returned value is typed, so use `toRaw` or `toNormalized` to convert to a (high precision) float.
+- The vertex shader essentially passes normalised screen coordinates for each chunk, along with the inverse of the model view projection matrix, to get back from screen space to model space.
+- The fragment shader uses this information to determine the start and end point of each ray based on the screen position given by the vertex shader. The fragment shader then establishes how color is accumulated along the rays. The ray start and end points are set up such that the rays all lie within the view-clipping bounds and volume bounds. Finally, the rays are marched through that small clipping box, providing the `curChunkPosition` at each step and also allowing access to the scalar voxel value via `getDataValue()` for the nearest voxel, or `getInterpolatedDataValue()` for a weighted contribution from the nearest eight voxels. The returned value is typed, so use `toRaw` or `toNormalized` to convert to a (high precision) float.
 
 ### Sampling ratio and opacity correction
 
 To avoid overcompositing when the number of depth samples changes, opacity correction is performed. The first step is to calculate the sampling ratio: the ratio of the optimal number of depth samples to the chosen number of depth samples. The optimal number of depth samples is calculated based on the physical spacing of the data and the view spacing. The sampling ratio is then used to correct the opacity of the color at each step along the ray.
 
-For example, if the optimal number of depth samples for the given data resolution is 250, but the user selects 375 depth samples, then the sampling ratio would be 2/3, indicating that we are oversampling. For a voxel with opacity 0.5, the corrected opacity would be 0.5 * (2 / 3) ≈ 0.33. This means that the voxel would contribute less to the final color than it would if the sampling ratio were 1.
+For example, if the optimal number of depth samples for the given data resolution is 250, but the user selects 375 depth samples, then the sampling ratio would be 2/3, indicating that we are oversampling. For a voxel with opacity 0.5, the corrected opacity would be 0.5 \* (2 / 3) ≈ 0.33. This means that the voxel would contribute less to the final color than it would if the sampling ratio were 1.
 
 ### Samples accumulated per chunk
 
```
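The ray-marching and front-to-back compositing described in the fragment-shader bullet above can be sketched outside of GLSL. This is an illustrative TypeScript sketch, not Neuroglancer's actual shader code: `marchRay`, `sampleVolume`, and the trivial transfer function (intensity maps directly to color and opacity) are hypothetical stand-ins for the real shader machinery.

```typescript
// Illustrative front-to-back compositing along one ray. `sampleVolume` stands
// in for getDataValue() / getInterpolatedDataValue() returning a value already
// normalized to [0, 1] (cf. toNormalized); the transfer function is a placeholder.
type Vec3 = [number, number, number];

function marchRay(
  rayStart: Vec3,
  rayEnd: Vec3,
  steps: number,
  sampleVolume: (p: Vec3) => number,
): { color: number; alpha: number } {
  let color = 0;
  let alpha = 0;
  for (let i = 0; i < steps; i++) {
    // Sample at step midpoints between the clipped start and end points.
    const t = (i + 0.5) / steps;
    const p: Vec3 = [
      rayStart[0] + t * (rayEnd[0] - rayStart[0]),
      rayStart[1] + t * (rayEnd[1] - rayStart[1]),
      rayStart[2] + t * (rayEnd[2] - rayStart[2]),
    ];
    const value = sampleVolume(p); // cf. getInterpolatedDataValue()
    const srcAlpha = value; // placeholder transfer function
    // Front-to-back "over" compositing: later samples are attenuated by
    // the opacity already accumulated in front of them.
    color += (1 - alpha) * srcAlpha * value;
    alpha += (1 - alpha) * srcAlpha;
    if (alpha >= 0.99) break; // early ray termination
  }
  return { color, alpha };
}
```

Because compositing is order-dependent, the real shader must march the ray within the clipped bounds in a consistent front-to-back order; the early-termination check is a common optimization once the ray is effectively opaque.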
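The opacity-correction arithmetic above can be checked with a small sketch. This follows the multiplicative correction the README describes; the function names are illustrative, not Neuroglancer API.

```typescript
// Sampling ratio as described above: optimal depth samples / chosen depth
// samples. A ratio below 1 means we are oversampling, so each sample's
// opacity is scaled down to avoid overcompositing. Names are illustrative.
function samplingRatio(optimalSamples: number, chosenSamples: number): number {
  return optimalSamples / chosenSamples;
}

function correctOpacity(opacity: number, ratio: number): number {
  return opacity * ratio;
}

const ratio = samplingRatio(250, 375); // 2/3: 375 chosen vs. 250 optimal
const corrected = correctOpacity(0.5, ratio); // 0.5 * (2/3) ≈ 0.33
```

With a ratio of 1 (chosen equals optimal) the correction is a no-op, so the voxel contributes its full opacity.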
