This document consolidates the public MiniTensor surface area available through
`minitensor` and its submodules, using the Python bindings and the Rust engine
as the source of truth. It is intentionally exhaustive and meant to complement
existing guides such as `custom_operations.md`, `plugin_system.md`, and
`performance.md`.
MiniTensor’s top-level module re-exports the Rust-backed core API and a handful of convenience aliases.
| Export | Description |
|---|---|
| `Tensor` / `tensor` | Core tensor type (constructor + alias). |
| `Device` / `device` | Device handle type (CPU/GPU). |
| `cpu`, `cuda` | Convenience constructors for CPU/GPU devices. |
| `functional` | Functional API module (stateless ops). |
| `nn` | Neural network modules and losses. |
| `optim` | Optimizers. |
| `numpy_compat` | NumPy-style helpers (if built). |
| `plugins` | Plugin registry and utilities (if built). |
| `serialization` | Model serialization utilities (if built). |
`__version__` reflects the backend version exposed by the Rust core (if available) or a default fallback version. `__version_tuple__` mirrors the structured version tuple.
| Function | Purpose |
|---|---|
| `get_default_dtype()` | Return the global default dtype string. |
| `set_default_dtype(dtype)` | Set the global default dtype. |
| `default_dtype(dtype)` | Context manager for temporary dtype overrides. |
| `manual_seed(seed)` | Seed the RNG used by random ops. |
| `get_gradient(tensor)` | Access a tensor's gradient in the global graph. |
| `clear_autograd_graph()` | Clear the global autograd graph. |
| `is_autograd_graph_consumed()` | Inspect whether a graph has been consumed. |
| `mark_autograd_graph_consumed()` | Mark the current graph as consumed. |
| `available_submodules()` | Return availability of optional submodules. |
| `list_public_api()` | Return public API symbol lists by module. |
| `api_summary()` | Return version and API counts by module. |
| `search_api(query, module=None)` | Search available symbols by name. |
| `describe_api(symbol)` | Return a one-line description for a symbol. |
| `help()` | Render a formatted MiniTensor API reference. |
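`default_dtype` restores the previous default when the `with` block exits. A minimal pure-Python sketch of that save/override/restore pattern (the `_state` dict and the `"float32"` starting default here are illustrative, not MiniTensor's internal storage):

```python
from contextlib import contextmanager

_state = {"dtype": "float32"}  # illustrative stand-in for the global default

def get_default_dtype():
    return _state["dtype"]

def set_default_dtype(dtype):
    _state["dtype"] = dtype

@contextmanager
def default_dtype(dtype):
    # Save the current default, install the override, restore on exit
    # (even if the body raises).
    previous = get_default_dtype()
    set_default_dtype(dtype)
    try:
        yield
    finally:
        set_default_dtype(previous)

with default_dtype("float64"):
    assert get_default_dtype() == "float64"
assert get_default_dtype() == "float32"  # restored after the block
```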
The custom-ops system is exposed at the top level:
- `execute_custom_op_py(name, inputs)`
- `is_custom_op_registered_py(name)`
- `list_custom_ops_py()`
- `register_example_custom_ops()`
- `unregister_custom_op_py(name)`
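These functions follow a register/query/execute/unregister lifecycle. A pure-Python sketch of that lifecycle with a plain dict registry (the real registry lives in the Rust core; `register`, `execute`, and friends here are illustrative stand-ins, not the actual bindings):

```python
# Illustrative registry mirroring the lifecycle of the custom-op functions.
_registry = {}

def register(name, fn):
    _registry[name] = fn

def is_registered(name):
    return name in _registry

def execute(name, inputs):
    # Executing an unknown op is an error, mirroring the bound API's behavior.
    if name not in _registry:
        raise KeyError(f"custom op '{name}' is not registered")
    return _registry[name](*inputs)

def unregister(name):
    _registry.pop(name, None)

register("double", lambda x: 2 * x)
assert is_registered("double")
assert execute("double", [21]) == 42
unregister("double")
assert not is_registered("double")
```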
Every creation helper is available as either `mt.<name>(...)` or
`Tensor.<name>(...)`.
- `rand`, `rand_like`, `randn`, `randn_like`, `truncated_normal`, `truncated_normal_like`, `uniform`, `uniform_like`, `randint`, `randint_like`, `randperm`
- `xavier_uniform`, `xavier_uniform_like`, `xavier_normal`, `xavier_normal_like`, `he_uniform`, `he_uniform_like`, `he_normal`, `he_normal_like`, `lecun_uniform`, `lecun_uniform_like`, `lecun_normal`, `lecun_normal_like`
- `zeros`, `zeros_like`, `ones`, `ones_like`, `empty`, `empty_like`, `full`, `full_like`, `eye`, `arange`, `linspace`, `logspace`
- `from_numpy(array)`, `from_numpy_shared(array)`, `as_tensor(obj, dtype=None, requires_grad=None, copy=False)`
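The initializer families above conventionally follow the Glorot/Xavier and He/Kaiming formulas. A pure-Python sketch of the underlying math (this illustrates the standard formulas, not MiniTensor's implementation):

```python
import math
import random

def xavier_uniform_bound(fan_in, fan_out):
    # Glorot/Xavier uniform draws from U(-a, a) with a = sqrt(6 / (fan_in + fan_out)).
    return math.sqrt(6.0 / (fan_in + fan_out))

def he_normal_std(fan_in):
    # He/Kaiming normal uses std = sqrt(2 / fan_in), suited to ReLU networks.
    return math.sqrt(2.0 / fan_in)

a = xavier_uniform_bound(256, 128)
assert abs(a - 0.125) < 1e-12          # sqrt(6 / 384) = 0.125
sample = random.uniform(-a, a)
assert -a <= sample <= a
assert abs(he_normal_std(2) - 1.0) < 1e-12
```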
Frequently used tensor attributes:
- `tensor.shape` / `tensor.ndim`
- `tensor.dtype`
- `tensor.device`
- `tensor.requires_grad`
Conversion helpers:
- `tensor.numpy()` → NumPy array
- `tensor.item()` → Python scalar (for 0-d tensors)
- `tensor.tolist()` → Python list
- `tensor.astype(dtype)` → dtype conversion
The following instance methods are exercised by the test suite and are available
on Tensor objects (many also have functional/top-level equivalents):
- `reshape`, `view`, `transpose`, `permute`, `movedim`, `moveaxis`, `swapaxes`, `swapdims`, `squeeze`, `unsqueeze`, `expand`, `flatten`, `ravel`
- `index_select`, `gather`, `narrow`, `flip`, `roll`
- `dot`, `bmm`, `solve`, `diagonal`, `trace`, `triu`, `tril`
- `sum`, `mean`, `median`, `quantile`, `nanquantile`, `nansum`, `nanmean`, `nanmax`, `nanmin`, `logsumexp`
- `softmax`, `log_softmax`, `softsign`, `rsqrt`, `reciprocal`, `sign`, `clip`, `clamp`, `clamp_min`, `clamp_max`, `round`, `floor`, `ceil`, `sin`, `cos`, `tan`, `asin`, `acos`, `atan`, `sinh`, `cosh`, `asinh`, `acosh`, `atanh`, `softplus`, `gelu`, `elu`, `selu`, `silu`, `hardshrink`
- `layer_norm(shape, weight=None, bias=None, eps=1e-5)`
- `backward()` to trigger gradient computation; `fill_(value)` for in-place fills.
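Reductions such as `logsumexp` are conventionally computed with the max-shift trick to avoid overflow. A pure-Python sketch of that trick (illustrating the standard math, not the Rust kernel):

```python
import math

def logsumexp(xs):
    # Numerically stable log(sum(exp(x))): shift by the max before
    # exponentiating so no intermediate exp overflows.
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

# Matches the naive formula on small inputs...
naive = math.log(sum(math.exp(x) for x in [1.0, 2.0, 3.0]))
assert abs(logsumexp([1.0, 2.0, 3.0]) - naive) < 1e-12
# ...and does not overflow where the naive version would.
assert abs(logsumexp([1000.0, 1000.0]) - (1000.0 + math.log(2.0))) < 1e-9
```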
MiniTensor provides stateless functional variants that mirror Tensor methods.
Each of the following names is accessible from:
- `minitensor.<name>`
- `minitensor.functional.<name>`
cat, stack, split, chunk, index_select, gather, narrow, topk, sort, argsort,
median, quantile, nanquantile, nansum, nanmean, nanmax, nanmin, logsumexp,
softmax, log_softmax, masked_softmax, masked_log_softmax, softsign, rsqrt,
reciprocal, sign, reshape, view, triu, tril, diagonal, trace, solve, flatten,
ravel, transpose, permute, movedim, moveaxis, swapaxes, swapdims, squeeze,
unsqueeze, expand, repeat, repeat_interleave, flip, roll, clip, clamp,
clamp_min, clamp_max, round, floor, ceil, sin, cos, tan, asin, acos, atan,
sinh, cosh, asinh, acosh, atanh, where, masked_fill
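A masked softmax conventionally assigns zero probability to masked-out positions, as if they were `-inf` before normalization. A pure-Python sketch of that semantics (illustrative only; the exact mask convention of `masked_softmax` may differ):

```python
import math

def masked_softmax(xs, mask):
    # Positions where mask is False are excluded: treated as -inf before
    # the softmax, so they receive exactly zero probability.
    m = max(x for x, keep in zip(xs, mask) if keep)
    exps = [math.exp(x - m) if keep else 0.0 for x, keep in zip(xs, mask)]
    total = sum(exps)
    return [e / total for e in exps]

probs = masked_softmax([1.0, 2.0, 3.0], [True, False, True])
assert probs[1] == 0.0                 # masked position gets no mass
assert abs(sum(probs) - 1.0) < 1e-12   # remaining mass still normalizes
```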
The functional namespace also exposes:
- `dot`
- `bmm`
Lower-case callable symbols from minitensor.nn are mirrored into
minitensor.functional for convenience (for example, activation functions that
have a functional signature).
- Layers: `Module` (base class), `DenseLayer`, `Conv2d`, `BatchNorm1d`, `BatchNorm2d`, `Dropout`, `Dropout2d`, `Sequential` (container of modules)
- Activations: `ReLU`, `LeakyReLU`, `Sigmoid`, `Tanh`, `GELU`, `ELU`, `Softmax`
- Losses: `MSELoss`, `MAELoss`, `HuberLoss`, `LogCoshLoss`, `SmoothL1Loss`, `CrossEntropyLoss`, `BCELoss`, `FocalLoss`
- `layer.parameters()` returns tensors for optimizers.
- `layer.zero_grad()` clears gradients for trainable tensors.
`SGD`, `Adam`, `AdamW`, `RMSprop`
All optimizer classes share a common interface:
- `step()`: apply parameter updates and clear the global autograd graph.
- `zero_grad(set_to_none: bool = False)`: reset gradients.
- `lr` property: read/write learning rate.
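For plain SGD without momentum, `step()` reduces to the classic update rule. A pure-Python sketch of that rule (illustrative only; the optimizer itself updates tensor parameters in place):

```python
def sgd_step(params, grads, lr=0.01):
    # Vanilla SGD update: p <- p - lr * grad, applied elementwise.
    return [p - lr * g for p, g in zip(params, grads)]

updated = sgd_step([1.0, -2.0], [0.5, -0.5], lr=0.1)
assert abs(updated[0] - 0.95) < 1e-12   # 1.0 - 0.1 * 0.5
assert abs(updated[1] + 1.95) < 1e-12   # -2.0 - 0.1 * -0.5
```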
- `asarray(data, dtype=None, requires_grad=False)`, `zeros_like`, `ones_like`, `empty_like`, `full_like`
- `concatenate`, `stack`, `vstack`, `hstack`, `split`, `hsplit`, `vsplit`
- `dot`, `matmul`, `cross`, `where`, `allclose`, `array_equal`
- `mean`, `nanmean`, `std`, `var`, `prod`, `sum`, `nansum`, `max`, `min`, `nanmax`, `nanmin`
- `ModelVersion`: semantic version for serialized models.
- `ModelMetadata`: name, description, architecture, shapes, custom metadata.
- `SerializationFormat`: `json()`, `binary()`, `messagepack()`.
- `SerializedModel`: metadata + state dict.
- `StateDict`: tensor parameters/buffers.
- `DeploymentModel`: compact model format for inference.
- `ModelSerializer`: `save()`/`load()` helpers.
- `save_model(model, path, format=None)`
- `load_model(path, format=None)`
- `VersionInfo` (parse, current, compatibility checks)
- `PluginInfo` (name, version, author, min/max supported versions)

- `CustomPlugin`: plugin object with init/cleanup/custom-op callbacks.
- `PluginRegistry`: register/unregister/list Python plugins.
- `CustomLayer`: define custom layers in Python.
- `PluginBuilder`: fluent builder for plugin metadata.

- `load_plugin(path)`
- `unload_plugin(name)`
- `list_plugins()`
- `get_plugin_info(name)`
- `is_plugin_loaded(name)`
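`PluginBuilder` is described as a fluent builder; a minimal pure-Python illustration of that pattern (the field names and methods here are hypothetical, not the actual `PluginBuilder` API):

```python
class PluginBuilder:
    # Minimal illustration of the fluent-builder pattern; the real
    # PluginBuilder's fields and method names may differ.
    def __init__(self):
        self._meta = {}

    def name(self, value):
        self._meta["name"] = value
        return self  # returning self is what enables chaining

    def version(self, value):
        self._meta["version"] = value
        return self

    def build(self):
        return dict(self._meta)

meta = PluginBuilder().name("swish-ops").version("0.1.0").build()
assert meta == {"name": "swish-ops", "version": "0.1.0"}
```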
MiniTensor supports custom ops in both Rust and Python. Refer to
docs/custom_operations.md for:
- The `CustomOp` trait and builder pattern.
- Python registration and execution (`execute_custom_op_py`, etc.).
- Example custom ops (Swish, GELU, power).
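As a reference point, the Swish example op computes `x * sigmoid(x)`; a pure-Python rendering of that formula (not the registered Rust implementation):

```python
import math

def swish(x):
    # Swish activation: x * sigmoid(x).
    return x * (1.0 / (1.0 + math.exp(-x)))

assert swish(0.0) == 0.0                        # sigmoid(0) = 0.5, times 0
assert abs(swish(1.0) - 1.0 / (1.0 + math.exp(-1.0))) < 1e-12
```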
The core engine supports CPU execution and can be compiled with CUDA, Metal, or
OpenCL backends where applicable. Device selection flows through the Device
API and tensor creation functions.
- `docs/custom_operations.md`: custom ops and autograd integration.
- `docs/plugin_system.md`: plugin registry and compatibility handling.
- `docs/performance.md`: performance tuning and profiling.
- `examples/` and `examples/notebooks/`: end-to-end usage patterns.