MIND is structured as a modular compiler-runtime stack. This document captures the top-level components and how data flows between them.
- Frontend – Lexer, parser, and surface-level validations produce a typed abstract syntax tree (AST).
- Type & Shape System – A constraint solver assigns concrete ranks, shapes, and element types while validating effect capabilities.
- Intermediate Representation (IR) – The typed AST lowers into a static single assignment (SSA) IR purpose-built for tensor programs.
- Lowering Pipelines – Dedicated passes perform canonicalization, fusion, layout selection, and eventually emit MLIR.
- Execution Runtimes – Backends convert MLIR to CPU or GPU executables, or interpret the IR directly for debugging.
- Tooling – Packaging, FFI, benchmarking, and developer tooling live alongside the compiler in feature-gated crates.
The architecture diagram in `../assets/diagrams/architecture.svg` mirrors this flow.
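The flow between these components can be sketched as a chain of stage functions. All types and function names below are illustrative stand-ins, not the actual crate APIs:

```rust
// Illustrative sketch of the compile pipeline. Every type and function
// here is a hypothetical stand-in for the real crate APIs.

struct Ast(String);        // produced by the frontend (mind-syntax)
struct TypedAst(String);   // types/shapes resolved (mind-types)
struct MindIr(String);     // SSA tensor IR (mind-ir)
struct MlirModule(String); // emitted MLIR (mind-mlir)

fn parse(src: &str) -> Ast { Ast(format!("ast({src})")) }
fn infer(ast: Ast) -> TypedAst { TypedAst(format!("typed({})", ast.0)) }
fn lower(t: TypedAst) -> MindIr { MindIr(format!("ir({})", t.0)) }
fn emit_mlir(ir: MindIr) -> MlirModule { MlirModule(format!("mlir({})", ir.0)) }

fn main() {
    // Each stage consumes the previous stage's output by value,
    // mirroring the one-directional dataflow described above.
    let module = emit_mlir(lower(infer(parse("x + y"))));
    println!("{}", module.0); // mlir(ir(typed(ast(x + y))))
}
```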
| Crate/Module | Responsibility |
|---|---|
| `mind-syntax` | Lexer, parser, AST definitions, surface diagnostics |
| `mind-types` | Type lattice, constraint solver, effect tracking |
| `mind-ir` | Core SSA structures, pattern-matching utilities, graph rewrites |
| `mind-lowering` | High-level → mid-level lowering, canonicalization passes |
| `mind-mlir` | Emission of MLIR dialects, translation to LLVM |
| `mind-runtime` | Tensor buffer management, host/device executors |
| `mind-cli` | Command-line interface, REPL, and package tooling |
The root crate uses Cargo features to bring specific components into the final binary. For example, `--features cpu-exec,mlir-exec` compiles both the native interpreter and the MLIR JIT.
Source → AST → Typed AST → MIND IR → MLIR → LLVM IR / Runtime Calls → Execution
Key invariants:
- Types and shapes must be fully resolved before lowering to MLIR.
- Autodiff annotations expand into explicit IR functions prior to optimization.
- Runtime backends operate on host-device descriptors generated during lowering.
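As one illustration of the first invariant, a lowering driver might refuse to emit MLIR while any shape variable is unresolved. The types below are a minimal sketch, not the real `mind-lowering` API:

```rust
// Hypothetical shape representation: a dimension is either a
// concrete size or a still-unresolved shape variable.
#[derive(Debug, Clone)]
enum Dim {
    Known(usize),
    Var(u32), // unresolved shape variable from the constraint solver
}

#[derive(Debug, Clone)]
struct TensorType {
    dims: Vec<Dim>,
}

impl TensorType {
    fn is_resolved(&self) -> bool {
        self.dims.iter().all(|d| matches!(d, Dim::Known(_)))
    }
}

/// Guard sketch: lowering to MLIR is only legal once every tensor
/// type in the module is fully resolved.
fn check_ready_for_mlir(types: &[TensorType]) -> Result<(), String> {
    for (i, t) in types.iter().enumerate() {
        if !t.is_resolved() {
            return Err(format!("value {i} has unresolved shape: {:?}", t.dims));
        }
    }
    Ok(())
}

fn main() {
    let ok = TensorType { dims: vec![Dim::Known(2), Dim::Known(3)] };
    let pending = TensorType { dims: vec![Dim::Known(2), Dim::Var(0)] };
    assert!(check_ready_for_mlir(&[ok]).is_ok());
    assert!(check_ready_for_mlir(&[pending]).is_err());
    println!("shape guard behaves as expected");
}
```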
- Dialect Extensions – New operators are introduced via pattern definitions in `mind-ir` and mirrored in MLIR dialect extensions.
- Backend Plugins – Trait-based executors allow embedding custom accelerators via the `Runtime` trait.
- Pass Pipelines – Pass managers can be configured per target, making it safe to ship experimental transformations behind feature flags.
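A backend plugin might look like the following. The document names a `Runtime` trait; the method signatures here are assumptions for illustration only:

```rust
// Sketch of a backend plugin. The `Runtime` trait name comes from the
// document; its methods below are illustrative assumptions.
trait Runtime {
    fn name(&self) -> &str;
    /// Execute a compiled module entry point over flat f32 buffers.
    fn run(&self, entry: &str, inputs: &[Vec<f32>]) -> Result<Vec<f32>, String>;
}

/// A toy accelerator backend that concatenates its inputs,
/// standing in for a real device executor.
struct EchoBackend;

impl Runtime for EchoBackend {
    fn name(&self) -> &str { "echo" }
    fn run(&self, _entry: &str, inputs: &[Vec<f32>]) -> Result<Vec<f32>, String> {
        Ok(inputs.concat())
    }
}

fn main() {
    // Trait objects let the runtime select a backend at run time.
    let backend: Box<dyn Runtime> = Box::new(EchoBackend);
    let out = backend.run("main", &[vec![1.0, 2.0]]).unwrap();
    assert_eq!(out, vec![1.0, 2.0]);
    println!("ran on backend `{}`", backend.name());
}
```

The trait-object pattern keeps custom accelerators decoupled from the core crates: a plugin only needs to implement the trait and register itself behind a feature flag.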
For detailed discussions of the intermediate representation, see `ir-mlir.md`; for runtime integration details, refer to `ffi-runtime.md`.