# Intermediate Representation
The MIND IR is a typed, SSA-based intermediate representation designed for tensor operations and automatic differentiation.
## IR Structure
MIND IR uses Static Single Assignment (SSA) form with explicit types:
```
// Example IR for: y = relu(x @ w)
%0 = mind.const : tensor<2x2xf32> = [[1.0, 2.0], [3.0, 4.0]]
%1 = mind.randn : tensor<2x2xf32>
%2 = mind.matmul(%0, %1) : tensor<2x2xf32>
%3 = mind.relu(%2) : tensor<2x2xf32>
mind.return %3
```
## Core Operations
| Category | Operations |
|---|---|
| Arithmetic | add, sub, mul, div, neg, pow |
| Linear Algebra | matmul, transpose, dot |
| Activations | relu, sigmoid, tanh, softmax, gelu |
| Reductions | sum, mean, max, min, prod |
| Shape | reshape, broadcast, squeeze, unsqueeze |
| Convolution | conv2d, maxpool2d, avgpool2d |
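These operations compose directly in SSA form. A minimal sketch combining a shape operation and a reduction (the conventions that `reshape` takes its target shape from the result type and that `sum` with no extra arguments reduces over all elements are assumptions for illustration, not part of the spec shown here):

```
%0 = mind.randn : tensor<4x8xf32>
%1 = mind.reshape(%0) : tensor<32xf32>   // target shape from result type (assumed convention)
%2 = mind.sum(%1) : tensor<f32>          // full reduction to a scalar (assumed convention)
mind.return %2
```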
## Type Representation
IR types encode both dtype and shape information:
```
tensor<f32>       // Scalar
tensor<10xf32>    // 1D, static shape
tensor<2x3xf32>   // 2D, static shape
tensor<?x?xf32>   // 2D, dynamic shape
tensor<2x?xf32>   // Mixed static/dynamic
```
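Dynamic dimensions flow through operations during type checking. A sketch assuming standard matmul shape semantics (`%x` and `%w` are hypothetical inputs; how function arguments are bound is not shown in this section):

```
// %x : tensor<?x16xf32> (dynamic batch), %w : tensor<16x10xf32>
%0 = mind.matmul(%x, %w) : tensor<?x10xf32>   // result inherits the dynamic dimension
```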
## Canonicalization
The IR undergoes canonicalization passes that normalize operations (a combined sketch follows the list):
- Constant folding for compile-time-known values
- Identity elimination (x + 0 → x, x * 1 → x)
- Strength reduction (x * 2 → x + x for integers)
- Dead code elimination
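For instance, a sketch of the first three passes working together (op names follow the `mind.<op>` pattern from the example above; `%x` is a hypothetical input):

```
// Before canonicalization
%0 = mind.const : tensor<f32> = 2.0
%1 = mind.const : tensor<f32> = 3.0
%2 = mind.mul(%0, %1) : tensor<f32>    // both operands known at compile time
%3 = mind.const : tensor<f32> = 0.0
%4 = mind.add(%x, %3) : tensor<f32>    // x + 0
%5 = mind.mul(%2, %4) : tensor<f32>
mind.return %5

// After: constants folded, x + 0 → x, dead constants removed
%0 = mind.const : tensor<f32> = 6.0
%1 = mind.mul(%0, %x) : tensor<f32>
mind.return %1
```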
## Lowering Pipeline
```
Source (.mind)
  ↓ Parse
AST
  ↓ Type check
Typed AST
  ↓ Lower
MIND IR (High-level)
  ↓ Canonicalize
MIND IR (Canonical)
  ↓ Autodiff (if needed)
MIND IR + Gradients
  ↓ Lower to MLIR
MLIR Dialects
  ↓ Lower to LLVM
LLVM IR
  ↓ Codegen
Machine Code
```

## Learn More
See the full IR specification at mind-spec/ir.md.