Given the structure of the code, it should be relatively straightforward to use a "true" AMR kernel rather than an SPH one for AMR datasets.
To that end, we have two options:
- Create a "cell kernel" texture as an equivalent to SPH kernels. This can be done relatively easily by rendering a cube into a texture, mipmap it, and then use it as an SPH kernel. This won't work if the camera is not orthographic, though (since all cells are seen from a different angle).
- Represent cells as cubes. Note that we can quickly compute the length of the intersection of a ray with a cube by subtracting the depths of all front-facing fragments from those of all back-facing ones (see below). This requires 8 vertices and 12 triangles per cell. Vertices could potentially be shared between neighbouring cells to save memory.
```wgsl
/* The integral along the line-of-sight for a given ray is
$$ \sum_i (t_{i+1} - t_i) w_i, $$
where $t_i, t_{i+1}$ are the entrance and exit distances along the ray to a given cell.
The $t$ values also happen to be the values of `input.texcoord.z`!
So we can implement the equation above by summing
- input.texcoord.z * weight
for each front-facing face and
+ input.texcoord.z * weight
for the back-facing ones.
*/
@fragment
fn fragment_main(input: VertexOutput, @builtin(front_facing) front_facing: bool) -> FragmentOutput {
    var output: FragmentOutput;
    // NOTE: this may be missing factors of `dx`
    let value = input.weight * input.texcoord.z;
    // WGSL has no ternary operator; `select(f, t, cond)` returns `t` when `cond` is true.
    // Front-facing fragments contribute -t * w, back-facing ones +t * w (see the comment above).
    let sign: f32 = select(1.0, -1.0, front_facing);
    output.density = sign * vec2<f32>(value, value * input.quantity);
    return output;
}
```
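
For context, here is a minimal sketch of the vertex stage that such a fragment shader assumes. The struct layouts, binding slots, and the convention of storing the ray parameter t in `texcoord.z` are assumptions made for illustration, not the project's actual interface; the depth written below is only exact for an orthographic camera.

```wgsl
// Hypothetical struct and binding layout, for illustration only.
struct VertexInput {
    @location(0) position: vec3<f32>, // cube-corner position in world space
    @location(1) weight: f32,         // cell weight w_i
    @location(2) quantity: f32        // cell-averaged quantity being projected
}

struct VertexOutput {
    @builtin(position) clip_position: vec4<f32>,
    @location(0) texcoord: vec3<f32>, // .z carries the ray parameter t
    @location(1) weight: f32,
    @location(2) quantity: f32
}

struct Camera {
    view: mat4x4<f32>,
    proj: mat4x4<f32>
}
@group(0) @binding(0) var<uniform> camera: Camera;

@vertex
fn vertex_main(input: VertexInput) -> VertexOutput {
    var output: VertexOutput;
    let view_pos = camera.view * vec4<f32>(input.position, 1.0);
    output.clip_position = camera.proj * view_pos;
    // View-space depth of this cube corner. Interpolated over a face, it gives
    // the t at which the ray enters (front face) or exits (back face) the cell.
    // For an orthographic camera the constant offset cancels in the front/back
    // difference; a perspective camera needs the true distance along the view ray.
    output.texcoord = vec3<f32>(0.0, 0.0, -view_pos.z);
    output.weight = input.weight;
    output.quantity = input.quantity;
    return output;
}
```

Note that accumulating the signed front- and back-face contributions into `output.density` also requires additive blending on that render target, so that the sum over faces is carried out by the blend stage.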