# Home
FINN+ is a framework for deploying quantized neural networks (QNNs) on FPGA hardware, with an emphasis on generating dataflow-style architectures customized for each network.
FINN+ takes your quantized neural networks (in QONNX format) and automatically generates optimized FPGA implementations. Whether you're targeting Zynq SoCs or Alveo data center cards, FINN+ handles the complex hardware generation process for you.
FINN+ is a fork of AMD's FINN framework and is developed as part of the EKI research project. For additional information on the original FINN framework, we refer to the project website: https://xilinx.github.io/finn/
- Automated Compilation - From ONNX model to FPGA bitstream
- High Performance - Optimized dataflow architectures
- Flexible Configuration - Fine-tune performance vs. resource trade-offs
- Resource Estimation - Predict FPGA utilization before synthesis
- Multiple Targets - Support for Zynq and Alveo platforms
- PYNQ Integration - Python driver for rapid prototyping
- High-Performance C++ - Optimized C++ driver for production deployment
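The resource estimates can also be consumed programmatically. The sketch below aggregates a per-layer estimate report into device-level totals; the layer names, field names, and JSON shape are assumptions modeled on FINN-style reports, not a documented format.

```python
# Hypothetical per-layer resource estimate, shaped like the JSON a build
# with "estimate_reports" enabled might emit (names are illustrative).
sample_report = {
    "MatrixVectorActivation_0": {"LUT": 4217, "BRAM_18K": 12, "DSP": 0},
    "MatrixVectorActivation_1": {"LUT": 3980, "BRAM_18K": 8, "DSP": 0},
    "Thresholding_0": {"LUT": 310, "BRAM_18K": 0, "DSP": 0},
}

def total_resources(report):
    """Sum each resource type across all layers of an estimate report."""
    totals = {}
    for layer_costs in report.values():
        for resource, amount in layer_costs.items():
            totals[resource] = totals.get(resource, 0) + amount
    return totals

print(total_resources(sample_report))
# {'LUT': 8507, 'BRAM_18K': 20, 'DSP': 0}
```

Comparing such totals against the capacity of the target device is a quick sanity check before committing to a multi-hour synthesis run.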
Install FINN+ with pip, or clone the repository for development:

```shell
# Using pip (recommended)
pip install finn-plus

# Or using Poetry for development
git clone https://github.com/eki-project/finn-plus.git
cd finn-plus
poetry install
```

Train and export your quantized model using Brevitas:

```python
# Export your Brevitas model to ONNX
from brevitas.export import export_qonnx

export_qonnx(model, dummy_input, "model.onnx")
```

Describe the build in a configuration file:

```yaml
# config.yaml
board: U55C
generate_outputs:
  - estimate_reports
  - bitfile
  - pynq_driver  # Python driver for prototyping
  - cpp_driver   # C++ driver for production
shell_flow_type: vitis_alveo
synth_clk_period_ns: 10.0
target_fps: 1000
```

Then start the build:

```shell
finn build config.yaml model.onnx
```

Check out our Quick-start guide page for detailed setup instructions!
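The two performance knobs in the config interact in a simple way: the synthesis clock period fixes the clock frequency, and the target frame rate then fixes a per-frame cycle budget that the generated dataflow architecture has to meet. A back-of-the-envelope sketch (the helper name is ours, not part of the FINN+ API):

```python
def cycles_per_frame_budget(synth_clk_period_ns: float, target_fps: float) -> int:
    """Cycle budget a single frame must fit into to sustain target_fps."""
    clk_hz = 1e9 / synth_clk_period_ns  # 10.0 ns -> 100 MHz
    return int(clk_hz / target_fps)

# With the example config (10.0 ns period, 1000 fps):
print(cycles_per_frame_budget(10.0, 1000.0))  # 100000 cycles per frame
```

If the slowest layer cannot be folded to fit this budget on the chosen board, either relax `target_fps` or tighten `synth_clk_period_ns`.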
| Section | Description |
|---|---|
| Building an Accelerator | How to configure your accelerator builds |
| DataflowBuildConfig | Complete reference for all configuration options |
| Development Guide | Contributing and development IDE setup |
| API Documentation | The FINN+ API documentation |
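For Zynq targets the same configuration format applies; a minimal sketch, assuming a Pynq-Z1 board and a Vivado-based Zynq shell flow (check the DataflowBuildConfig reference for the exact accepted values):

```yaml
# zynq_config.yaml (illustrative values)
board: Pynq-Z1
generate_outputs:
  - estimate_reports
  - bitfile
  - pynq_driver
shell_flow_type: vivado_zynq
synth_clk_period_ns: 10.0
target_fps: 500
```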
- Image classification on edge devices
- Real-time object detection
- Video processing pipelines
- Audio classification and enhancement
- Sensor data processing
- Time-series analysis
- Predictive maintenance
- Quality control systems
- Process optimization
| Command | Description |
|---|---|
| `finn build <config> <model>` | Build FPGA accelerator from ONNX model |
| `finn run <flow_file>` | Run custom build flow (legacy) |
| `finn config create <path>` | Create configuration template |
| `finn config list` | Show current configuration |
| `finn deps update` | Update FINN+ dependencies |
| `finn test` | Run test suite |
- Documentation: This wiki contains comprehensive guides
- Issues: Report bugs on GitHub
- Discussions: Join the community discussions
- Contributing: See our Development guide
- Brevitas - Quantization-aware training library
- QONNX - Quantized ONNX representation
- PYNQ - Python framework for FPGA acceleration
Ready to get started? Begin with Installation or jump to the Quick Start Guide.