Felix Jentzsch edited this page Oct 16, 2025 · 16 revisions

Welcome to FINN+ πŸš€

FINN+ is a framework for deploying quantized neural networks (QNNs) on FPGA hardware, with an emphasis on generating dataflow-style architectures customized for each network.

🎯 What is FINN+?

FINN+ takes your quantized neural networks (in QONNX format) and automatically generates optimized FPGA implementations. Whether you're targeting Zynq SoCs or Alveo data center cards, FINN+ handles the complex hardware generation process for you.

FINN+ is a fork of AMD's FINN framework and is developed as part of the EKI research project. For additional information on the original FINN framework, see the project website: https://xilinx.github.io/finn/

✨ Key Features

  • πŸ”„ Automated Compilation - From ONNX model to FPGA bitstream
  • ⚑ High Performance - Optimized dataflow architectures
  • πŸŽ›οΈ Flexible Configuration - Fine-tune performance vs. resource trade-offs
  • πŸ“Š Resource Estimation - Predict FPGA utilization before synthesis
  • πŸ”§ Multiple Targets - Support for Zynq and Alveo platforms
  • 🐍 PYNQ Integration - Python driver for rapid prototyping
  • ⚑ High-Performance C++ - Optimized C++ driver for production deployment

πŸš€ Quick Start

1. Install FINN+

# Using pip (recommended)
pip install finn-plus

# Or using Poetry for development
git clone https://github.com/eki-project/finn-plus.git
cd finn-plus
poetry install

2. Prepare Your Model

Train and export your quantized model using Brevitas:

# Export your trained Brevitas model to QONNX
import torch
from brevitas.export import export_qonnx

dummy_input = torch.randn(1, 3, 224, 224)  # shape must match your model's input
export_qonnx(model, dummy_input, "model.onnx")

3. Create Build Configuration

# config.yaml
board: U55C
generate_outputs:
  - estimate_reports
  - bitfile
  - pynq_driver     # Python driver for prototyping
  - cpp_driver      # C++ driver for production
shell_flow_type: vitis_alveo
synth_clk_period_ns: 10.0
target_fps: 1000
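
Two of the fields above jointly fix the cycle budget the compiler must fit each inference into. A quick back-of-the-envelope check in plain Python (illustrative arithmetic only, not a FINN+ API):

```python
# Derive clock frequency and per-inference cycle budget from the config values.
synth_clk_period_ns = 10.0
target_fps = 1000

clock_hz = 1e9 / synth_clk_period_ns      # 10 ns period -> 100 MHz
cycles_per_frame = clock_hz / target_fps  # cycles available per inference

print(int(clock_hz))          # 100000000
print(int(cycles_per_frame))  # 100000
```

A tighter clock period or a higher target_fps shrinks this budget, which generally pushes the compiler toward more parallel (and more resource-hungry) hardware.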

4. Build Your Accelerator

finn build config.yaml model.onnx
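
The example config above targets an Alveo card (U55C); FINN+ also supports Zynq platforms. Assuming the configuration schema follows the Alveo example and upstream FINN's DataflowBuildConfig, a Zynq variant might look like the following sketch (the board name and flow type are illustrative assumptions, not verified values):

```yaml
# config-zynq.yaml (illustrative sketch)
board: Pynq-Z2
generate_outputs:
  - estimate_reports
  - bitfile
  - pynq_driver
shell_flow_type: vivado_zynq
synth_clk_period_ns: 10.0
target_fps: 1000
```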

Check out our Quick-start guide page for detailed setup instructions!

πŸ“š Documentation Overview

  • Building an Accelerator - How to configure your accelerator builds
  • DataflowBuildConfig - Complete reference for all configuration options
  • Development Guide - Contributing and development IDE setup
  • API Documentation - The FINN+ API documentation

🎯 Use Cases

πŸ–ΌοΈ Computer Vision

  • Image classification on edge devices
  • Real-time object detection
  • Video processing pipelines

πŸ”Š Signal Processing

  • Audio classification and enhancement
  • Sensor data processing
  • Time-series analysis

🏭 Industrial Applications

  • Predictive maintenance
  • Quality control systems
  • Process optimization

πŸ› οΈ CLI Commands Overview

  • finn build <config> <model> - Build an FPGA accelerator from an ONNX model
  • finn run <flow_file> - Run a custom build flow (legacy)
  • finn config create <path> - Create a configuration template
  • finn config list - Show the current configuration
  • finn deps update - Update FINN+ dependencies
  • finn test - Run the test suite

🀝 Community & Support

  • πŸ“– Documentation: This wiki contains comprehensive guides
  • πŸ› Issues: Report bugs on GitHub
  • πŸ’¬ Discussions: Join the community discussions
  • πŸ”§ Contributing: See our Development guide

πŸ”— Related Projects

  • Brevitas - Quantization-aware training library
  • QONNX - Quantized ONNX representation
  • PYNQ - Python framework for FPGA acceleration

Ready to get started? πŸ‘‰ Begin with Installation or jump to Quick Start Guide
