MIND Core v1 Cookbook

A collection of short, practical recipes demonstrating how to use Core v1 in real workflows.

Recipe 1 — Simple arithmetic (CPU)

fn main(x: tensor<f32>[4]) -> tensor<f32>[4] { return x * 2.0 }

Run:

mindc scale.mind -o scale.ir
runtime run scale.ir --input x=[1,2,3,4]
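
Assuming the runtime prints the result tensor, the expected output is the input scaled by two:

[2, 4, 6, 8]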

Recipe 2 — Autodiff of a loss function

fn main(x: tensor<f32>[3]) -> tensor<f32>[1] {
  let y = sum(x * x)
  return y
}

Gradient IR:

mindc loss.mind --grad --func main -o loss.grad.ir

Expected gradient: 2 * x.
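
For example, with x = [1, 2, 3] the loss is y = 1 + 4 + 9 = 14 and the gradient is [2, 4, 6].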

Recipe 3 — MLIR lowering for CPU

mindc scale.mind --mlir -o scale.mlir
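
The exact output depends on mindc's lowering pipeline and version; as a rough sketch, assuming the Recipe 1 function is lowered onto the standard func and arith dialects, it might resemble:

// Hypothetical lowering of scale.mind (x * 2.0); actual dialects and ops may differ.
func.func @main(%arg0: tensor<4xf32>) -> tensor<4xf32> {
  %cst = arith.constant dense<2.000000e+00> : tensor<4xf32>
  %scaled = arith.mulf %arg0, %cst : tensor<4xf32>
  return %scaled : tensor<4xf32>
}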

Recipe 4 — GPU profile: correct error handling

mindc main.mind --target gpu

Expected result (Core v1-stable):

error[runtime]: backend 'gpu' unavailable
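
When embedding the runtime (see Recipe 5), the same condition can be handled in the host program. A minimal fallback sketch, assuming a hypothetical MindRuntime::new_gpu() constructor symmetric to new_cpu():

// `new_gpu()` is assumed for illustration; only `new_cpu()` appears in this cookbook.
let rt = match MindRuntime::new_gpu() {
    Ok(rt) => rt,
    // If the GPU backend is unavailable, fall back to the CPU backend.
    Err(e) => {
        eprintln!("gpu backend unavailable ({e}), falling back to cpu");
        MindRuntime::new_cpu()?
    }
};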

Recipe 5 — Host embedding via the runtime API

// Create a CPU-backed runtime instance.
let rt = MindRuntime::new_cpu()?;

// Allocate a 2-element f32 input tensor and copy data into it from the host.
let inp = rt.allocate(&tensor_desc_f32(&[2]))?;
rt.write_tensor(inp, &[1.0, 3.0])?;

// Allocate a 1-element output tensor and run the "sum" op over the input.
let out = rt.allocate(&tensor_desc_f32(&[1]))?;
rt.run_op("sum", &[inp], &[out])?;

// Copy the result back to host memory.
let result = rt.read_tensor(out)?;

Expected output: 4.0 (the sum of 1.0 and 3.0).

Recipe 6 — Running the official conformance suite

CPU baseline:

mindc conformance --profile cpu

GPU profile:

mindc conformance --profile gpu