A high-performance C++ deep learning library designed for flexibility and efficiency.
SmartDNN is a modern C++ deep learning framework that offers a clean, intuitive API for building and training neural networks while maintaining C++'s performance advantages. The library focuses on providing a high-level interface that simplifies neural network development without sacrificing computational efficiency.
- Flexible Architecture: Easily build and customize neural network architectures
- High Performance: Optimized C++ implementation with significant runtime improvements
- Comprehensive Layer Support: Full suite of essential neural network layers
- Customizable Training: Multiple loss functions and optimization methods
- Clean API: Intuitive interface for model building and training
SmartDNN leverages templated C++ to deliver exceptional performance gains. Benchmarks comparing the non-templated and optimized templated implementations:

Benchmark 1 (total runtime):
- Non-templated runtime: ~17,680 ms
- Optimized templated runtime: ~8,325 ms
- Performance gain: ~53% improvement

Benchmark 2 (per-epoch training time):
- Non-templated runtime: ~83 minutes (~4,980,000 ms) per epoch
- Optimized templated runtime: ~10,969 ms per epoch
- Performance gain: ~99.8% improvement
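These gains come largely from compile-time dispatch: when the element type and the operation are template parameters, the compiler can inline calls and vectorize loops instead of routing every element through a virtual call. The sketch below illustrates the general technique only; all names in it are hypothetical and none of it is SmartDNN's actual internals.

```cpp
#include <vector>

// Illustrative only -- hypothetical types, not SmartDNN's internals.

// Runtime dispatch: every element pays for a virtual call, which the
// compiler usually cannot inline or vectorize.
struct Activation {
    virtual float apply(float x) const = 0;
    virtual ~Activation() = default;
};
struct DynamicReLU : Activation {
    float apply(float x) const override { return x > 0.f ? x : 0.f; }
};
void activateDynamic(std::vector<float>& v, const Activation& act) {
    for (float& x : v) x = act.apply(x);
}

// Compile-time dispatch: the activation is a template parameter, so the
// call inlines and the loop is a candidate for auto-vectorization.
struct ReLU {
    static float apply(float x) { return x > 0.f ? x : 0.f; }
};
template <typename Act>
void activateTemplated(std::vector<float>& v) {
    for (float& x : v) x = Act::apply(x);
}

int main() {
    std::vector<float> data(1024, -1.f);
    DynamicReLU relu;
    activateDynamic(data, relu);   // virtual dispatch per element
    activateTemplated<ReLU>(data); // resolved and inlined at compile time
}
```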
Creating your first neural network with SmartDNN is straightforward:

```cpp
// Initialize the model
SmartDNN<float> model;

// Define architecture
model.addLayer(FullyConnectedLayer(10, 100));  // Input -> Hidden
model.addLayer(ActivationLayer(ReLU()));       // ReLU activation
model.addLayer(FullyConnectedLayer(100, 100)); // Hidden -> Hidden
model.addLayer(ActivationLayer(Sigmoid()));    // Sigmoid activation
model.addLayer(FullyConnectedLayer(100, 10));  // Hidden -> Output
model.addLayer(ActivationLayer(Softmax()));    // Softmax for classification

// Compile and train
model.compile(MSELoss(), AdamOptimizer());
model.train(inputs, targets, epochs);
```

A fuller example builds a convolutional classifier for MNIST:

```cpp
// Initialize the SmartDNN MNIST model
SmartDNN<float> model;
// Convolutional layers
model.addLayer(Conv2DLayer(1, 32, 3)); // Conv2D layer
model.addLayer(BatchNormalizationLayer(32)); // Batch normalization
model.addLayer(ActivationLayer(ReLU())); // ReLU activation
model.addLayer(MaxPooling2DLayer(2, 2)); // MaxPooling
model.addLayer(DropoutLayer(0.25f)); // Dropout for regularization
// Fully connected layers
model.addLayer(FlattenLayer()); // Flatten layer
model.addLayer(FullyConnectedLayer(5408, 128)); // FC layer (5408 = 32 filters × 13 × 13 after the 3x3 conv and 2x2 pooling on 28x28 MNIST input)
model.addLayer(BatchNormalizationLayer(128)); // Batch normalization
model.addLayer(ActivationLayer(ReLU())); // ReLU activation
model.addLayer(DropoutLayer(0.25f)); // Dropout
// Output layer
model.addLayer(FullyConnectedLayer(128, 10)); // Output layer
model.addLayer(ActivationLayer(Softmax())); // Softmax activation
// Configure optimizer options
AdamOptions adamOptions;
adamOptions.learningRate = learningRate;
adamOptions.beta1 = 0.9f;
adamOptions.beta2 = 0.999f;
adamOptions.epsilon = 1e-8f;
// Compile and train
model.compile(CategoricalCrossEntropyLoss(), AdamOptimizer(adamOptions));
model.train(inputs, targets, epochs);
```

Supported layers:

- Fully Connected Layer: Dense neural network layers
- Convolutional 2D Layer: For image processing tasks
- Activation Layers: ReLU, Sigmoid, Tanh, Softmax, Leaky ReLU
- Regularization Layers: Dropout, Batch Normalization
- Pooling Layers: Max Pooling 2D
- Utility Layers: Flatten

Supported optimizers:

- Adam: Adaptive Moment Estimation optimizer with configurable parameters

Supported loss functions:

- Mean Squared Error (MSE): For regression tasks
- Categorical Cross Entropy: For classification tasks
SmartDNN includes several key optimizations:
- Slice View: Access tensor data without copying (see the sketch after this list)
- Broadcast View: Efficient data broadcasting for better performance
- Transforms: Iterator-based transforms for compiler optimizations
- Clean Architecture: Single responsibility principle for better code organization
- Template Specialization: Type-specific optimizations
- Parallel Directives: Optimized parallelization for computationally expensive operations
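To make the view optimizations concrete, here is a minimal sketch of a non-owning slice view in plain C++. It is illustrative only: the class and its methods are hypothetical and do not represent SmartDNN's actual Tensor API.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Hypothetical, simplified 1-D slice view; not SmartDNN's actual API.
// It references a range of an existing buffer instead of copying it.
class SliceView {
public:
    SliceView(std::vector<float>& data, std::size_t offset, std::size_t length)
        : data_(data.data() + offset), length_(length) {}

    float& operator[](std::size_t i) { return data_[i]; } // writes hit the original buffer
    std::size_t size() const { return length_; }

private:
    float* data_;        // pointer into the original storage -- no copy made
    std::size_t length_;
};

int main() {
    std::vector<float> tensor(10, 1.f);
    SliceView slice(tensor, 2, 4);  // view elements [2, 6) without copying
    slice[0] = 42.f;                // mutates tensor[2]
    std::cout << tensor[2] << '\n'; // prints 42
}
```

A broadcast view works the same way, except that it maps many logical indices onto one stored element instead of onto a contiguous range.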
To build and run locally:

1. Clone the repository:
   `git clone https://github.com/A-Georgiou/SmartDNN.git`
2. Configure the build with CMake:
   `cmake .`
3. Create a `src/main.cpp` file for your neural network code (see the sketch after these steps)
4. Build your project:
   `make`
5. Run the program:
   `./SmartDNN`
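For step 3, here is a minimal sketch of what `src/main.cpp` could contain, reusing the quick-start network from above. The include path is an assumption, not a confirmed SmartDNN header; check the repository's `Examples/` folder for a working reference.

```cpp
// Hypothetical src/main.cpp skeleton. The include path below is an
// assumption -- check the repository for the actual header layout.
#include "SmartDNN/SmartDNN.hpp"

int main() {
    SmartDNN<float> model;
    model.addLayer(FullyConnectedLayer(10, 100));
    model.addLayer(ActivationLayer(ReLU()));
    model.addLayer(FullyConnectedLayer(100, 10));
    model.addLayer(ActivationLayer(Softmax()));
    model.compile(MSELoss(), AdamOptimizer());
    // model.train(inputs, targets, epochs); // supply your data tensors here
    return 0;
}
```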
To build and run with Docker:

1. Clone the repository:
   `git clone https://github.com/A-Georgiou/SmartDNN.git`
2. Create a `src/main.cpp` file (or copy one from the `Examples/` folder)
3. Build the Docker image:
   `docker build -f .docker/Dockerfile -t smartdnn-app .`
4. Run the project:
   `docker run --rm -it smartdnn-app`
Planned improvements:

- Extended Layer Support: Additional layer types, including advanced convolutional and recurrent layers
- Advanced Network Architectures: More flexible and customizable network structures
- GPU Acceleration: CUDA integration for GPU-based training and inference
- Comprehensive Documentation: Detailed guides and examples
Contributions are welcome! If you would like to contribute to the project, please reach out via the contact information below.
This project is licensed under the MIT License.
For questions or inquiries, please contact AndrewGeorgiou98@outlook.com.