Importance Of Training
Training Bottlenecks AI: BOLT Provides A Way Forward
A harsh reality of AI models
There is no silver bullet AI model. Better predictions require a constant process of engineering features, tuning hyperparameters, and training and testing the resultant models. The most time-consuming part of developing any AI-powered application is fine-tuning the model, which requires repeated iterations of training the network to find the sweet spot. Even with AutoML and Neural Architecture Search (NAS), engineering features and hammering out other task-specific pipelines in an AI system is unavoidable.
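To make that cost concrete, here is a minimal sketch of the train-evaluate-tune loop described above, using scikit-learn. The dataset, model family, and search grid are illustrative assumptions, not any particular production pipeline.

```python
from itertools import product

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Toy dataset standing in for a real workload (an illustrative assumption).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

best_score, best_params = -1.0, None
# Every (hidden size, learning rate) candidate costs one full training run,
# which is why training speed dominates total iteration time.
for hidden, lr in product([64, 128, 256], [1e-2, 1e-3]):
    model = MLPClassifier(hidden_layer_sizes=(hidden,),
                          learning_rate_init=lr,
                          max_iter=200, random_state=0).fit(X_train, y_train)
    score = model.score(X_val, y_val)
    if score > best_score:
        best_score, best_params = score, (hidden, lr)

print(f"best (hidden, lr): {best_params}, validation accuracy: {best_score:.3f}")
```

Even this tiny grid requires six full training runs; realistic searches run hundreds.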


Domain adaptation and transfer learning are cheap but significantly inferior alternatives to constant retraining
Currently, domain adaptation and transfer learning are the best ways to sidestep the expense of training. But such solutions are appealing only where retraining is entirely prohibitive. If accuracy matters and resources allow, rebuilding the model from scratch with all available information almost always yields a significantly superior outcome. Rather than trying to avoid training, the way forward is more efficient training.
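To illustrate the trade-off, the sketch below contrasts the two options in PyTorch: freezing a pretrained backbone and training only a new head (transfer learning) versus retraining every weight from scratch. The ResNet-18 backbone is an illustrative assumption, not a prescribed choice.

```python
import torch.nn as nn
from torchvision.models import resnet18  # illustrative backbone choice

def transfer_learning_model(num_classes: int) -> nn.Module:
    """Cheap option: freeze the pretrained backbone, train only a new head."""
    model = resnet18(weights="IMAGENET1K_V1")
    for param in model.parameters():
        param.requires_grad = False                     # backbone stays fixed
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # trainable head
    return model

def from_scratch_model(num_classes: int) -> nn.Module:
    """Expensive option: retrain every weight on all available data."""
    model = resnet18(weights=None)                      # random initialization
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model                                        # everything trains
```

The first option trains in a fraction of the time, but as argued above, the second typically wins on accuracy whenever data and compute permit.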
Even efficient inference can be solved with fast training
Improvements to model deployment have also been much discussed as the future of AI development. There, the task is to search for models whose inference is cheap: candidates can be quantized or pruned, or some other efficient design can be brought to bear. But in the end, it all comes back to fast training. The line of work using Deep Reinforcement Learning to design efficient ASICs is bottlenecked by how fast we can train and search for the most energy-efficient configuration.
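For illustration, the snippet below shows the kind of compression steps referred to above, post-training dynamic quantization and magnitude pruning, using standard PyTorch utilities on a toy model (an assumed stand-in for a deployed network).

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune
from torch.ao.quantization import quantize_dynamic

# Toy model standing in for a deployed network (an illustrative assumption).
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

# Quantization: store Linear weights as int8, shrinking memory and compute.
quantized = quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# Pruning: zero out the 50% smallest-magnitude weights of the first layer.
prune.l1_unstructured(model[0], name="weight", amount=0.5)
```

Each compressed candidate must then be fine-tuned and re-evaluated to measure the accuracy loss, so the speed of the whole search is bounded by the speed of training.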


The faster we can validate large and complex network architectures, the quicker we move the AI innovation wheel
Improvements in architecture validation efficiency pay immediate and constant dividends throughout the process of creating AI solutions. The quicker we can validate different methods, the sooner we converge on the best-performing approach for a given task.
ThirdAI’s BOLT
ThirdAI is excited to reveal BOLT, our proprietary solution to the need for faster and more accessible neural network training. Through purely algorithmic innovations, BOLT can convert any commodity CPU into horsepower for training large models. Not only can BOLT train commercial-scale neural networks on CPUs, it can do so with performance superior to GPUs running the strongest competing software. Best of all, our algorithmic improvements don't rely on specialized hardware: BOLT runs on any CPU (Intel, AMD, ARM). Even refurbished CPUs from past generations can be made faster AI trainers than A100 GPUs. Read more about our benchmarks here. Read more about our algorithm, a pioneering technology that brings AI a step closer to replicating the sparsity and efficiency of the brain, here.
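BOLT itself is proprietary, so the following is not its implementation. But to give a flavor of how sparsity can cut training cost, here is a deliberately simplified NumPy sketch of the general idea behind published work in this space (such as the SLIDE line of research from ThirdAI's founders): activating and updating only the top-k neurons of a layer per input.

```python
import numpy as np

def sparse_layer_forward(x, W, b, k):
    """Compute a layer's output but keep only the k most active neurons."""
    pre_activation = W @ x + b
    # Real systems in this line of work use locality-sensitive hashing to
    # *find* the likely-active neurons without the dense product above;
    # this sketch computes it densely only to show the sparsity pattern.
    active = np.argpartition(pre_activation, -k)[-k:]   # top-k neuron indices
    out = np.zeros_like(pre_activation)
    out[active] = np.maximum(pre_activation[active], 0.0)  # sparse ReLU
    return out, active   # backprop would touch only these rows of W

rng = np.random.default_rng(0)
x = rng.standard_normal(256)
W, b = rng.standard_normal((4096, 256)), np.zeros(4096)
out, active = sparse_layer_forward(x, W, b, k=64)  # ~1.6% of neurons fire
```

When only a small fraction of neurons participate in each update, the arithmetic per training step shrinks dramatically, which is what makes CPU-based training of large networks plausible.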
