Shapes & Broadcasting

Core v1 treats tensor shapes as ordered lists of extents. This page explains the practical rules used by the compiler and runtime.

Ranks and shapes

  • Scalars are rank-0 tensors with an empty shape [].
  • Vectors and matrices are rank-1 and rank-2, respectively.
  • Higher-rank tensors are just longer shape lists, e.g. [2, 3, 4].
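
For illustration, if a shape is represented as a plain list of extents (a Rust Vec<usize> here, which is an assumption of this sketch rather than the engine's actual type), the rank is simply the length of that list:

let scalar: Vec<usize> = vec![];        // rank 0, shape []
let vector: Vec<usize> = vec![5];       // rank 1, shape [5]
let tensor: Vec<usize> = vec![2, 3, 4]; // rank 3, shape [2, 3, 4]

assert_eq!(scalar.len(), 0);
assert_eq!(tensor.len(), 3);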

Broadcasting in practice

Most Core v1 operators are elementwise and follow NumPy-style broadcasting (a sketch of the rule follows the list):

  • Shapes are aligned from the right; a shorter shape behaves as if padded with leading 1s.
  • In each aligned position, the two extents must either match or one of them must be 1 (the 1 stretches to the other extent).
  • If the extents differ and neither is 1, broadcasting fails.
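
As an illustration of these rules only (not the engine's actual implementation; the function name and signature are invented for this sketch), a minimal right-aligned broadcast can be written as:

// Sketch: NumPy-style broadcasting over two shapes, aligned from the right.
// Returns None when two aligned extents differ and neither is 1.
fn broadcast_sketch(a: &[usize], b: &[usize]) -> Option<Vec<usize>> {
    let rank = a.len().max(b.len());
    let mut out = Vec::with_capacity(rank);
    for i in 0..rank {
        // Missing leading dimensions behave like extent 1.
        let x = a.len().checked_sub(i + 1).map_or(1, |j| a[j]);
        let y = b.len().checked_sub(i + 1).map_or(1, |j| b[j]);
        match (x, y) {
            (x, y) if x == y => out.push(x),
            (1, y) => out.push(y),
            (x, 1) => out.push(x),
            _ => return None,
        }
    }
    out.reverse();
    Some(out)
}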

The reference implementation exposes the helper:

use mind::shapes::engine::broadcast_shapes;

// [2, 3] broadcast against [1, 3]: the leading 1 stretches to 2.
let a = [2usize, 3];
let b = [1usize, 3];
let out = broadcast_shapes(&a, &b).unwrap();
assert_eq!(out, vec![2, 3]);
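
The failure case works the same way: when two aligned extents differ and neither is 1, the helper reports an error (this assumes the helper returns a Result, as the .unwrap() above suggests; the exact error type is not shown here):

// [2, 3] vs [4, 3]: the leading extents 2 and 4 differ and neither is 1.
let bad = broadcast_shapes(&[2usize, 3], &[4usize, 3]);
assert!(bad.is_err());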

Shape rules by operator kind

  • Unary elementwise (tensor.relu, tensor.exp): output shape equals input shape.
  • Binary elementwise (tensor.add, tensor.mul): output shape is the broadcasted shape of the two inputs.
  • Full reduction (tensor.sum_all): reduces all axes to a scalar ([]).
  • 2D matmul (tensor.matmul): both inputs must be rank-2 with shapes A: [M, K] and B: [K, N], producing [M, N] (see the sketch after this list).
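
The matmul rule is easy to check by hand; a minimal sketch of that check (the function name is invented for this example and is not part of the engine's API) could look like:

// Sketch: shape rule for 2D matmul. A: [M, K] and B: [K, N] give [M, N];
// anything else is rejected.
fn matmul_shape_sketch(a: &[usize], b: &[usize]) -> Option<[usize; 2]> {
    match (a, b) {
        ([m, k1], [k2, n]) if k1 == k2 => Some([*m, *n]),
        _ => None,
    }
}

assert_eq!(matmul_shape_sketch(&[2usize, 3], &[3, 4]), Some([2, 4]));
assert_eq!(matmul_shape_sketch(&[2usize, 3], &[5, 4]), None);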

Reference shape engine

The reference shape engine lives in the main compiler repository:

  • Module: mind::shapes::engine
  • Tests: tests/shapes_engine.rs