A lightweight, high-performance tensor operations library with automatic differentiation, inspired by PyTorch and powered by a Rust engine.
- High Performance: Rust engine for maximum speed and memory efficiency
- Python-Friendly: Familiar PyTorch-like API for easy adoption
- Neural Networks: Complete neural network layers and optimizers
- NumPy Integration: Seamless interoperability with NumPy arrays
- Automatic Differentiation: Built-in gradient computation for training
- Extensible: Modular design for easy customization and extension
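On the automatic-differentiation bullet: autograd produces exact derivatives, not finite-difference approximations. A NumPy sketch of that distinction (illustrative only, not MiniTensor code):

```python
import numpy as np

def f(x):
    return np.sin(x) * x**2          # example scalar function

def grad_f(x):
    # Analytic derivative, the kind of exact result autograd computes
    return np.cos(x) * x**2 + 2 * x * np.sin(x)

x, h = 1.3, 1e-6
finite_diff = (f(x + h) - f(x - h)) / (2 * h)   # numerical approximation
print(abs(grad_f(x) - finite_diff) < 1e-5)      # True: they agree closely
```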
From PyPI:
pip install minitensor
From Source:
# Clone the repository
git clone https://github.com/neuralsorcerer/minitensor.git
cd minitensor
# Quick install with make (Linux/macOS)
make install
# Or manually with maturin
python -m pip install 'maturin[patchelf]'
maturin develop --release
# Optional: editable install with pip (debug build by default)
python -m pip install -e .
Note: python -m pip install -e . builds a debug version by default; pass --config-settings=--release for a release build.
Using the install script (Linux/macOS/Windows):
bash install.sh
Common options:
bash install.sh --no-venv # Use current Python env (no virtualenv)
bash install.sh --venv .myvenv # Create/use a specific venv directory
bash install.sh --debug # Debug build (default is --release)
bash install.sh --python /usr/bin/python3.12 # Use a specific Python interpreter
The script ensures Python 3.10+, sets up a virtual environment by default, installs Rust (via rustup if needed), installs maturin (with patchelf on Linux), builds MiniTensor, and verifies the installation.
import minitensor as mt
from minitensor import nn, optim
# Create tensors
mt.manual_seed(7)
x = mt.randn(32, 784) # Batch of 32 samples
y = mt.zeros(32, 10) # Target labels
# Build a neural network
model = nn.Sequential([
nn.DenseLayer(784, 128),
nn.ReLU(),
nn.DenseLayer(128, 10)
])
# Set up training
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999), epsilon=1e-8)
print(f"Model type: {type(model).__name__}")
print(f"Input shape: {x.shape}")
Model type: Sequential
Input shape: Shape([32, 784])
MiniTensor ships a full API reference in docs/api_reference.md,
alongside examples and guides. For a runtime overview of what's available, use the
introspection helpers below.
import minitensor as mt
submodules = mt.available_submodules()
nn_api = mt.list_public_api()["nn"]
loss_hits = mt.search_api("loss")
ce_desc = mt.describe_api("nn.CrossEntropyLoss")
print(f"has submodules: {len(submodules) > 0}")
print(f"has nn API entries: {len(nn_api) > 0}")
print(f"loss search non-empty: {len(loss_hits) > 0}")
print(f"CrossEntropyLoss described: {'CrossEntropyLoss' in ce_desc}")
has submodules: True
has nn API entries: True
loss search non-empty: True
CrossEntropyLoss described: True
import minitensor as mt
import numpy as np
# Create tensors
x = mt.zeros(3, 4) # Zeros
y = mt.ones(3, 4) # Ones
z = mt.randn(2, 2) # Random normal
np_array = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.float32)
w = mt.from_numpy(np_array) # From NumPy
# Operations
result = x + y # Element-wise addition
product = x.matmul(y.transpose(0, 1)) # Matrix multiplication
mean_val = x.mean() # Reduction operations
max_val = x.max() # -inf for empty or all-NaN tensors
min_vals, min_idx = x.min(dim=1) # Returns values & indices; empty dims yield (inf, 0)
print(result.shape) # Shape([3, 4])
print(product.shape) # Shape([3, 3])
print(float(mean_val.numpy().ravel()[0])) # 0.0
print(float(max_val.numpy().ravel()[0])) # 0.0
print(min_idx.numpy()) # [0 0 0]
from minitensor import nn
# Layers
dense = nn.DenseLayer(10, 5) # Dense layer (fully connected)
conv = nn.Conv2d(3, 16, 3) # 2D convolution
bn = nn.BatchNorm1d(128) # Batch normalization
dropout = nn.Dropout(0.5) # Dropout regularization
# Activations
relu = nn.ReLU() # ReLU activation
sigmoid = nn.Sigmoid() # Sigmoid activation
tanh = nn.Tanh() # Tanh activation
gelu = nn.GELU() # GELU activation
# Loss functions
mse = nn.MSELoss() # Mean squared error
ce = nn.CrossEntropyLoss() # Cross entropy
bce = nn.BCELoss() # Binary cross entropy
print(type(dense).__name__, type(conv).__name__, type(relu).__name__, type(ce).__name__)
DenseLayer Conv2d ReLU CrossEntropyLoss
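For reference, cross-entropy from raw logits combines a log-softmax with the negative log-likelihood of the true class. A NumPy sketch of the math (the `cross_entropy` helper is illustrative, independent of MiniTensor's implementation):

```python
import numpy as np

def cross_entropy(logits, targets):
    """Cross-entropy from raw logits, averaged over the batch.

    logits:  (N, C) float array of unnormalized scores
    targets: (N,) int array of class indices
    """
    # Stabilized log-softmax: subtract the row max before exponentiating
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Negative log-likelihood of the true class, averaged over the batch
    return -log_probs[np.arange(len(targets)), targets].mean()

logits = np.array([[2.0, 0.5, 0.1], [0.2, 3.0, 0.4]])
targets = np.array([0, 1])
print(float(cross_entropy(logits, targets)))
```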
from minitensor import nn, optim
# Optimizers
model = nn.DenseLayer(10, 5)
params = model.parameters()
sgd = optim.SGD(params, lr=0.01, momentum=0.9, weight_decay=0.0, nesterov=False)
adam = optim.Adam(params, lr=0.001, betas=(0.9, 0.999), epsilon=1e-8, weight_decay=0.0)
adamw = optim.AdamW(params, lr=0.001, betas=(0.9, 0.999), epsilon=1e-8, weight_decay=0.01)
rmsprop = optim.RMSprop(params, lr=0.01, alpha=0.99, epsilon=1e-8, weight_decay=0.0, momentum=0.0)
print(type(sgd).__name__, type(adam).__name__, type(adamw).__name__, type(rmsprop).__name__)
SGD Adam AdamW RMSprop
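Adam keeps running estimates of the gradient's first and second moments and rescales each step by them. A NumPy sketch of a single update using the same hyperparameter names as above (a conceptual illustration of the standard Adam rule, not MiniTensor's internals):

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=0.001, betas=(0.9, 0.999),
              epsilon=1e-8, weight_decay=0.0):
    """One Adam update; t is the 1-based step counter."""
    b1, b2 = betas
    if weight_decay:
        grad = grad + weight_decay * param    # L2 penalty folded into the gradient
    m = b1 * m + (1 - b1) * grad              # first moment (running mean)
    v = b2 * v + (1 - b2) * grad**2           # second moment (uncentered variance)
    m_hat = m / (1 - b1**t)                   # bias correction for zero init
    v_hat = v / (1 - b2**t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + epsilon)
    return param, m, v

w = np.array([1.0])
m = v = np.zeros_like(w)
w, m, v = adam_step(w, np.array([0.5]), m, v, t=1)
print(w)  # first step moves by ~lr regardless of gradient scale
```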
MiniTensor is built with a modular architecture:
┌─────────────────┐ ┌──────────────────┐ ┌─────────────────┐
│ Python API │ │ PyO3 Bindings │ │ Rust Engine │
│ │<-->│ │<-->│ │
│ • Tensor │ │ • Type Safety │ │ • Performance │
│ • nn.Module │ │ • Memory Mgmt │ │ • Autograd │
│ • Optimizers │ │ • Error Handling │ │ • SIMD/GPU │
└─────────────────┘ └──────────────────┘ └─────────────────┘
- Engine: High-performance Rust backend with SIMD optimizations
- Bindings: PyO3-based Python bindings for seamless interop
- Python API: Familiar PyTorch-like interface for ease of use
import minitensor as mt
from minitensor import nn, optim
# Create a simple classifier
model = nn.Sequential([
nn.DenseLayer(784, 128),
nn.ReLU(),
nn.DenseLayer(128, 10),
])
# Initialize model
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999), epsilon=1e-8)
print(type(model).__name__, type(optimizer).__name__)
Sequential Adam
import minitensor as mt
from minitensor import nn, optim
# Synthetic data: y = 3x + 0.5 + noise
mt.manual_seed(7)
x = mt.randn(256, 1)
noise = 0.1 * mt.randn(256, 1)
y = 3 * x + 0.5 + noise
# Model, loss, optimizer
model = nn.DenseLayer(1, 1)
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.05)
for epoch in range(100):
pred = model(x)
loss = criterion(pred, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (epoch + 1) % 20 == 0:
loss_val = float(loss.numpy().ravel()[0])
w = float(model.weight.numpy().ravel()[0])
b = float(model.bias.numpy().ravel()[0])
print(f"Epoch {epoch+1:03d} | Loss: {loss_val:.4f} | w: {w:.3f} | b: {b:.3f}")
Epoch 020 | Loss: 0.2520 | w: 2.545 | b: 0.407
Epoch 040 | Loss: 0.0150 | w: 2.934 | b: 0.485
Epoch 060 | Loss: 0.0103 | w: 2.988 | b: 0.498
Epoch 080 | Loss: 0.0102 | w: 2.995 | b: 0.500
Epoch 100 | Loss: 0.0102 | w: 2.996 | b: 0.501
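The loop above is plain gradient descent on the MSE. Writing the gradients out by hand in NumPy shows what loss.backward() computes for this model (same data-generating process but NumPy's own RNG, so the recovered values only approximate the run above):

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.standard_normal((256, 1))
y = 3 * x + 0.5 + 0.1 * rng.standard_normal((256, 1))

w, b, lr = 0.0, 0.0, 0.05
for _ in range(100):
    err = (w * x + b) - y
    # Gradients of mean((pred - y)^2) with respect to w and b
    grad_w = 2 * (err * x).mean()
    grad_b = 2 * err.mean()
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # recovers roughly w=3, b=0.5
```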
The Python package is a thin wrapper around the compiled Rust engine, so native and Python changes should be validated in a deterministic order.
# 1) One-time contributor setup (installs dev tooling + editable extension)
python -m pip install -e '.[dev]' --config-settings=--release
# 2) Rebuild the extension after changes under engine/ or bindings/
python -m pip install -e . --config-settings=--release
# 3) Run Rust unit/integration tests
cargo test
# 4) Run Python tests
pytest -q
# 5) Run formatting/lint/type hooks
pre-commit run --all-files
Notes:
- Use python -m pip so installs target the same interpreter used for pytest.
- Step 2 is only required when the Rust engine or PyO3 bindings have changed; pure-Python/docs edits can skip it.
- Keep Step 1 as one-time setup unless dev dependencies change.
- Rust: Follow rustfmt and clippy recommendations
- Python: Use black and isort for formatting
MiniTensor is designed for performance:
- Memory Efficient: Zero-copy operations where possible
- SIMD Optimized: Vectorized operations for maximum throughput
- Parallel: Multi-threaded operations for large tensors
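"Zero-copy" here means returning views over an existing buffer instead of allocating a new one. NumPy, which MiniTensor interoperates with, illustrates the idea (this demonstrates the concept, not MiniTensor's internals):

```python
import numpy as np

a = np.arange(12, dtype=np.float32).reshape(3, 4)
t = a.T            # transpose is a view: new strides, same buffer
r = a.reshape(12)  # reshape of a contiguous array is also a view

print(np.shares_memory(a, t), np.shares_memory(a, r))  # True True
a[0, 0] = 99.0
print(t[0, 0], r[0])  # both views see the write: 99.0 99.0
```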
If you use MiniTensor in your work and wish to refer to it, please use the following BibTeX entry.
@misc{sarkar2026minitensorlightweighthighperformancetensor,
title={MiniTensor: A Lightweight, High-Performance Tensor Operations Library},
author={Soumyadip Sarkar},
year={2026},
eprint={2602.00125},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2602.00125},
}
This project is licensed under the Apache License - see the LICENSE file for details.
- Inspired by PyTorch's design and API
- Built with Rust's performance and safety
- Powered by PyO3 for Python integration
