AUGUST 18, 2022
Billion-scale Deep Learning on CPUs: up to 100x cheaper training and inference.
Sub-1 ms inference for your AI/ML models using CPUs.
ThirdAI’s software-based AI allows commodity hardware (CPUs) to do the job of GPUs or any other specialized hardware for training and inference. We exploit sparsity, a new technique for training AI models that quickly identifies the very small fraction of parameters, out of millions or billions, that is sufficient for a neural network’s decision making. A high-level overview of our technology can be found here.
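ThirdAI’s implementation is proprietary, but the core idea — evaluating only a tiny, input-dependent subset of a layer’s neurons — can be illustrated with a toy sketch. The example below (our own illustration, not ThirdAI’s code) uses a SimHash-style locality-sensitive hash to bucket neurons by weight direction, then computes activations only for neurons whose bucket matches the input, in the spirit of published LSH-based training systems such as SLIDE. All sizes and names here are illustrative assumptions.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)

# Toy layer: 10,000 output neurons over a 256-dim input.
n_neurons, dim = 10_000, 256
W = rng.standard_normal((n_neurons, dim)).astype(np.float32)

# SimHash: sign pattern of projections onto random hyperplanes.
# 8 bits -> 256 buckets, so ~0.4% of neurons share any one bucket.
n_bits = 8
planes = rng.standard_normal((n_bits, dim)).astype(np.float32)

def simhash(v: np.ndarray) -> int:
    code = 0
    for b in (planes @ v) > 0:
        code = (code << 1) | int(b)
    return code

# Pre-index every neuron by the hash of its weight vector
# (done once; in a real system, refreshed periodically during training).
buckets = defaultdict(list)
for i in range(n_neurons):
    buckets[simhash(W[i])].append(i)

def sparse_forward(x: np.ndarray) -> dict[int, float]:
    """Compute activations only for neurons hashed to x's bucket."""
    active = buckets.get(simhash(x), [])
    return {i: float(W[i] @ x) for i in active}
```

Because neurons in the same bucket lie on the same side of every hyperplane as the input, the retrieved subset is biased toward neurons most aligned with `x`, so most of the dense layer’s computation can be skipped. Production systems use multiple hash tables and tuned bit counts to trade recall against speed; this single-table sketch only shows the mechanism.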
Challenges in AI we address:
- You cannot create the models you need, or retrain them often enough, because of the cost of compute or the resources available.
- Inference latency limits how much AI you can do (our typical latencies are <1 ms even with the largest models).
- Training complex models like GPT-3 is impossible on the resources you have available.
- Explainability: we can explain every decision our models make, by design.
- The carbon footprint of AI is staggering. ThirdAI can reduce your AI carbon footprint by orders of magnitude.
We offer solutions for Search and Recommendation, Forecasting, Root Cause Analysis, Text Classification, Question Answering, and many other areas of AI. Sounds too good to be true? Let us prove it to you.