Description
The basic idea of the GQE is to enable ray tracing for multiple query sources in order to save computation. Ideally, the GQE would be initialized whenever the developer needs it; other systems are informed that it has been initialized and can then add subscribers for events like ray hits.
A query source may be a:
- renderer
- 3D audio listener
- etc.
What an engine like this enables us to do (notes from AI):
- A robust ray/query system becomes your engine’s spatial backbone for:
- Visibility / line-of-sight (AI perception)
- Bullet hits / weapon traces
- Picking (mouse → world)??
- Sensors (LIDAR-style gameplay, proximity, stealth cones)
- Navigation helpers (raycast down to ground, slope checks)
- SDF sampling / distance queries (if you extend it)
- Lighting probes / bake tools (offline)
NOTE: We might want to build out #13 first and expand the system to support things like #17.
Extra: Can we do the ray calculation on the GPU and pass the results back to the CPU? Are there significant time losses with data transfers like this?
Research: BVH, DXR / VKRT / MetalRT
TODO:
- Geometry layer (shared)
- Shading/response layer (plugin per domain)
- Material mapping? (shared ID, different payload)
Further reading:
https://github.com/JustGoscha/ray-tracing-audio
https://lese.io/blog/how-raytraced-audio-works-for-reverb/
https://www.audiokinetic.com/en/public-library/2025.1.4_9062/?source=SDK&id=raytracing_geometry_guide.html
https://www.patreon.com/posts/raytraced-audio-125968537