# Language
MIND is a statically-typed, tensor-native language designed for AI and numerical computing. This page covers the core syntax and type system.
## Basic Syntax
MIND uses a Rust-inspired syntax with first-class tensor support:
```
// Function definition
fn relu(x: Tensor<f32, N, M>) -> Tensor<f32, N, M> {
    max(x, 0.0)
}

// Main entry point
fn main() {
    let x: Tensor<f32, 2, 3> = [[1.0, -2.0, 3.0], [-1.0, 2.0, -3.0]];
    let y = relu(x);
    print(y);
}
```

## Type System
MIND features a rich type system with compile-time shape checking:
- **Scalar types:** `f32`, `f64`, `i32`, `i64`, `bool`
- **Tensor types:** `Tensor<dtype, ...dims>` with static or dynamic shapes
- **Generic dimensions:** uppercase letters (`N`, `M`, `K`) for polymorphic shapes
- **Device annotations:** `@cpu`, `@gpu` for placement control
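
To make compile-time shape checking concrete, here is a minimal sketch of a program MIND should reject. The diagnostic wording in the comment is illustrative, not copied from the compiler:

```
fn main() {
    let a: Tensor<f32, 2, 3> = zeros();
    let b: Tensor<f32, 3, 2> = zeros();
    // Error: shape mismatch. Elementwise `+` requires identical shapes,
    // but `a` is Tensor<f32, 2, 3> and `b` is Tensor<f32, 3, 2>.
    let c = a + b;
}
```

Because shapes are part of the type, this mismatch is caught at compile time rather than surfacing as a runtime error.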
## Tensor Literals
```
// 1D tensor
let v: Tensor<f32, 3> = [1.0, 2.0, 3.0];

// 2D tensor (matrix)
let m: Tensor<f32, 2, 3> = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]];

// Random initialization
let w: Tensor<f32, 784, 128> = randn();

// Zeros/ones
let zeros: Tensor<f32, 10, 10> = zeros();
let ones: Tensor<i32, 5> = ones();
```
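
The type-system section above lists `@cpu` and `@gpu` device annotations but this page does not show where they attach. The following is a hypothetical sketch that assumes they sit in the same position as `@differentiable` on a function; consult the language specification for the actual placement rules:

```
// Hypothetical: annotation position mirrors @differentiable and is an
// assumption, not confirmed by this page.
@gpu
fn relu_gpu(x: Tensor<f32, N, M>) -> Tensor<f32, N, M> {
    max(x, 0.0)
}
```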
## Functions
```
// Regular function
fn add(a: Tensor<f32, N>, b: Tensor<f32, N>) -> Tensor<f32, N> {
    a + b
}

// Differentiable function
@differentiable
fn loss(pred: Tensor<f32, N>, target: Tensor<f32, N>) -> f32 {
    mean((pred - target) ** 2)
}

// Generic over dtype
fn identity<T>(x: Tensor<T, N, M>) -> Tensor<T, N, M> {
    x
}
```

## Control Flow
```
// Conditionals
fn activation(x: f32, use_relu: bool) -> f32 {
    if use_relu {
        max(x, 0.0)
    } else {
        tanh(x)
    }
}

// Loops (bounded for determinism)
fn sum_first_n(x: Tensor<f32, 100>, n: i32) -> f32 {
    let mut acc = 0.0;
    for i in 0..n {
        acc = acc + x[i];
    }
    acc
}
```

## Learn More
See the full language specification at `mind-spec/language.md`.