The AVerMedia SenseEdge Kit is an integrated development platform designed for real-time computer vision and depth-based AI applications. It combines the AVerMedia D317 Carrier Board (populated with NVIDIA Jetson AGX Orin module) with the RealSense D457 depth camera, allowing developers to quickly build spatial detection, perception, and distance-measurement workloads.
The SenseEdge Kit includes the following components:
- AVerMedia D317 Carrier Board
  - Integrated NVIDIA Jetson AGX Orin module
- RealSense D457 Depth Camera
- GMSL-to-Jetson Interface Board
- GMSL/FAKRA Cable
- Power Adapter
The SenseEdge Kit comes with JetPack 6.2 (L4T 36.4.3) pre-installed. You may access the D317 by connecting a monitor, keyboard, and mouse to use the GUI directly, or by using any remote-access workflow you normally apply in Jetson development.
Note
Unless explicitly noted, all commands in this guide are intended to be executed on the Jetson device (D317).
The SenseEdge Kit relies on a pre-tested JetPack environment on the Jetson device.
Please clone the quick-start repository and run the setup script as follows:
```shell
git clone https://github.com/AVerMedia-Technologies-Inc/SenseEdge-kit-quick-start.git
cd SenseEdge-kit-quick-start
```

The setup.sh script will automatically perform the following steps:
- Check System Prerequisites: Verifies network connectivity and system time to prevent SSL/installation errors.
- Install System Dependencies: Installs the CUDA compiler, TensorRT dependencies, `pip`, and `venv` tools.
- Set Up Virtual Environment: Creates a dedicated environment (`realsense_env`) under `~/avermedia/`.
- Install Python Libraries: Installs `pycuda`, `opencv-python`, and `pyrealsense2`.
- Download AI Models: Prompts to automatically download the necessary models for the demo.
```shell
./setup.sh
```

Figure: The setup script will prompt you to confirm settings, such as power mode configuration and model downloading. You can press Enter to accept the default option (capitalized, e.g., `[Y/n]`).
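To illustrate why step 1 matters, the prerequisite checks can be sketched in a few lines of Python. This is a simplified illustration of the idea (a connectivity probe and a clock sanity check), not the actual logic inside setup.sh; the host, port, and year threshold are hypothetical:

```python
import datetime
import socket

def check_network(host="8.8.8.8", port=53, timeout=2.0):
    """Basic connectivity probe: try opening a TCP socket to a public DNS server."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_clock(min_year=2024):
    """A system clock far in the past makes TLS certificate validation fail,
    which surfaces as SSL errors during package installation."""
    return datetime.datetime.now().year >= min_year

print("network:", "ok" if check_network() else "unreachable")
print("clock:", "ok" if check_clock() else "wrong")
```

If either check fails on your device, fix connectivity or sync the clock (e.g., via NTP) before re-running the setup script.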
If you did not choose to download the models while running setup.sh, you can run the following command later to download them separately:
```shell
./scripts/download_model.sh
```

To verify that the RealSense D457 and the AI stack are working properly, we provide a simple Python example. It covers color and depth streaming, AI inference using a YOLO model, and distance estimation based on depth data.
Follow these steps to activate the virtual environment and run the demo:
- Activate the virtual environment:

  ```shell
  source ~/avermedia/realsense_env/bin/activate
  ```

- Run the Python demo:

  ```shell
  python demo.py
  ```
After execution, the application window will display synchronized color and depth streams with overlaid AI inference results.
Figure: Real-time detection showing bounding boxes and depth heatmaps; the boxes turn red to trigger a proximity warning when the calculated 3D distance between individuals is too close.
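The 3D distance check behind the proximity warning can be sketched with the standard pinhole camera model: each detection's pixel location plus its depth value is back-projected to a 3D point, and the warning fires when two points are too close. This is a minimal illustration with hypothetical intrinsics (`fx`, `fy`, `cx`, `cy`) and a hypothetical 1.0 m threshold, not the demo's actual implementation:

```python
import numpy as np

def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth in meters to a 3D camera-space
    point using the pinhole model (hypothetical intrinsics)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Hypothetical intrinsics and two detected person centroids at 2.0 m depth
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0
p1 = deproject(300, 240, 2.0, fx, fy, cx, cy)
p2 = deproject(420, 240, 2.0, fx, fy, cx, cy)

dist = float(np.linalg.norm(p1 - p2))  # Euclidean distance in 3D, in meters
if dist < 1.0:  # hypothetical proximity threshold
    print(f"proximity warning: {dist:.2f} m")
```

In the real pipeline, the intrinsics come from the camera's calibration (exposed by the RealSense SDK) rather than hard-coded constants.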
The SenseEdge Kit provides a complete, ready-to-use environment for any depth-camera-based AI or computer vision application. The development environment supports core frameworks and libraries like PyTorch, TensorRT, ONNX Runtime, RealSense SDK, and OpenCV.
Once you have access to synchronized color and depth frames, you are free to integrate your own algorithms, models, or processing pipelines, such as:
- Object or person detection
- Pose estimation and segmentation
- 3D scene understanding and depth measurement
- Gesture recognition or human-computer interaction sensing
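As a minimal sketch of building your own measurement step on the depth stream — assuming depth frames arrive as 16-bit arrays whose raw units convert to meters via a hypothetical `DEPTH_SCALE`, and that a detector has already produced a bounding box — a robust per-detection distance estimate might look like:

```python
import numpy as np

DEPTH_SCALE = 0.001  # hypothetical: raw depth units -> meters

def roi_distance(depth_raw, box):
    """Median distance (meters) inside a bounding box, ignoring zero
    (invalid) depth pixels. Median is more robust to edge noise than mean."""
    x1, y1, x2, y2 = box
    roi = depth_raw[y1:y2, x1:x2].astype(np.float64)
    valid = roi[roi > 0]
    if valid.size == 0:
        return None  # no usable depth in this region
    return float(np.median(valid) * DEPTH_SCALE)

# Synthetic 640x480 frame with everything at 1.5 m (1500 raw units)
depth = np.full((480, 640), 1500, dtype=np.uint16)
print(roi_distance(depth, (100, 100, 200, 200)))  # ~1.5 m
```

The same pattern generalizes: swap the median for a percentile, a plane fit, or a point-cloud crop depending on how much geometric detail your application needs.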
Tip
Maximize Performance (Optional)

By default, the Jetson device may run in a power-efficient mode. To unlock maximum performance (MAXN mode) for smoother AI inference, configure the power mode via the GUI (top-right system menu) or run the following command and reboot:

```shell
sudo nvpmodel -m 0
```
