Training & Deployment

TinyMind supports both on-device training and a train-offline, deploy-lean workflow: train a model in PyTorch, then export its weights and run inference with TinyMind.

| Topic | Description |
|---|---|
| Advanced Training Techniques | Adam, RMSprop, gradient clipping, weight decay, learning rate scheduling, early stopping |
| PyTorch Interoperability | Train in PyTorch, export the weights, and deploy in C++ with 40-60% less memory |
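The training techniques listed above are all standard PyTorch features, so the offline half of the workflow can be sketched entirely in PyTorch. The snippet below is an illustrative sketch, not TinyMind code: the model, synthetic data, and hyperparameters are assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic regression data (illustrative only).
X = torch.randn(64, 4)
y = X.sum(dim=1, keepdim=True)

model = nn.Sequential(nn.Linear(4, 8), nn.Tanh(), nn.Linear(8, 1))

# Adam with weight decay (L2 regularization on the parameters).
opt = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=1e-4)

# Learning rate scheduling: halve the learning rate every 20 epochs.
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=20, gamma=0.5)

loss_fn = nn.MSELoss()
best, patience, wait = float("inf"), 10, 0

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    # Gradient clipping: cap the global gradient norm at 1.0.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    opt.step()
    sched.step()
    # Early stopping: halt once the loss stops improving for `patience` epochs.
    if loss.item() < best - 1e-4:
        best, wait = loss.item(), 0
    else:
        wait += 1
        if wait >= patience:
            break
```

Swapping `torch.optim.Adam` for `torch.optim.RMSprop` exercises the other optimizer in the table; the rest of the loop is unchanged.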
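For the interoperability row, one common way to hand trained weights to a lean C++ runtime is to dump each parameter tensor as a constant C array in a generated header. TinyMind's actual import format is not specified here, so the `export_weights` helper below is a hypothetical sketch of the general approach, not TinyMind's API.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 8), nn.Tanh(), nn.Linear(8, 1))

def export_weights(model: nn.Module) -> str:
    """Flatten each parameter tensor into a C float array, layer by layer."""
    lines = ["// Auto-generated from PyTorch; consumed by the C++ runtime."]
    for name, p in model.state_dict().items():
        vals = ", ".join(f"{v:.8f}f" for v in p.flatten().tolist())
        ident = name.replace(".", "_")  # e.g. "0.weight" -> "0_weight"
        lines.append(f"static const float w{ident}[{p.numel()}] = {{{vals}}};")
    return "\n".join(lines)

header = export_weights(model)
```

The memory saving quoted in the table comes from the deployed side carrying only these fixed-size arrays and the inference code, with no autograd state or Python runtime.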


Dan McLeran — danmcleran@gmail.com — MIT License
