ThirdAI’s Universal Deep Transformers (UDT) are a universal deep learning engine for supervised datasets consisting of input-output pairs. A UDT model can transform most feature vectors in their raw format, such as CSV rows, directly into predictions. UDT supports multi-modal column (or feature) formats, including text, categories (even millions of them), numbers, and timestamps.
Intended Use Case for UDT
UDT reduces the time needed to estimate the value proposition of an AI hypothesis in a business setting. If the model proves valuable, it is ready for deployment immediately: UDT offers a push-button train-and-deploy solution that turns supervised data into a production-ready deep learning model whose business value can be tested out of the box.
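The push-button train-and-deploy flow described above can be sketched as follows. This is a minimal illustration, not ThirdAI's actual API: the class and method names (`UniversalDeepTransformer`, `train`, `predict`) are assumptions modeled on the described workflow, and the class here is a stub that stands in for the real engine.

```python
# Minimal sketch of a push-button train/predict flow.
# NOTE: this is NOT ThirdAI's actual API; the names below are
# illustrative assumptions standing in for the real UDT engine.

class UniversalDeepTransformer:
    """Stub standing in for a UDT-style model."""

    def __init__(self, data_types, target):
        self.data_types = data_types   # column name -> modality
        self.target = target           # output column to predict
        self.trained = False

    def train(self, csv_path, epochs=1):
        # The real engine would auto-tune features and the model here.
        self.trained = True

    def predict(self, sample):
        # The real engine returns prediction scores; we return a dummy.
        assert self.trained, "train() must be called first"
        return {"label": "example_category", "score": 1.0}

# Declare column modalities once; everything else is auto-tuned.
model = UniversalDeepTransformer(
    data_types={"query": "text", "user_id": "categorical",
                "timestamp": "datetime", "product_id": "categorical"},
    target="product_id",
)
model.train("train.csv", epochs=3)
prediction = model.predict({"query": "wireless mouse", "user_id": "u42"})
```

The point of the sketch is the shape of the workflow: declare column types, point at a raw CSV, train, and predict, with no separate feature-engineering or model-selection step.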
What is unique about UDT?
UDT uses one of the most advanced forms of Deep Learning, powered by our BOLT Engine’s sparsity to provide capabilities that are not feasible with alternative AI solutions.
- Universal: The same model and API can leverage a variety of information. UDT can understand text, vectors, categories, timestamps, IDs, and many other types of data, so the AI software stack remains simple and easy to update. By providing more fine-grained information as new columns, the same routine can exploit personalization, NLP, metadata, and even sequential modeling when timestamps are provided. Feature engineering and model selection are automatic.
- Scalability to billions of features, hundreds of millions of outputs (extreme classification), and datasets with billions of samples: UDT can run with billions of input dimensions/categories and can handle hundreds of millions of outputs. Solutions like product/document search and other extreme classification tasks require no code changes.
- 1 ms inference latency on standard CPUs: UDT provides an inference latency of about 1 ms on standard CPUs, irrespective of model size.
- Production-ready from Day 1: UDT is written in C++, and any AI model created with UDT comes pre-optimized for production. UDT models are straightforward to integrate into most standard runtimes.
- Trains on standard CPUs: UDT capitalizes on dynamic sparsity-based acceleration on CPUs, which is faster than top-of-the-line GPUs for training large Neural Networks, even with millions or billions of samples.
- AutoML: UDT does not require model tuning or selection. Everything is auto-tuned, from feature engineering to model selection.
- Root Cause Analysis: UDT leverages sizeable deep learning models but can still provide backflow analysis and give an interpretable explanation for its predictions.
- Automatic Multi-Modal Feature Engineering and Understanding: UDT can consume:
1. Text columns
2. Categorical columns with millions of unique categories
3. Numerical columns for quantitative inputs
4. Timestamp columns, automatically leveraged for sequential modeling
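To make the input format concrete, here is a toy example of the kind of raw multi-modal CSV described above, with text, categorical, numerical, and timestamp columns side by side. The column names and rows are invented for illustration; the snippet only parses the file with Python's standard `csv` module.

```python
import csv
import io
from datetime import datetime

# A toy multi-modal CSV: text, high-cardinality categorical,
# numerical, and timestamp columns in one table.
raw = io.StringIO(
    "query,product_id,price,purchased_at\n"
    "wireless mouse,P-1048576,24.99,2022-10-19T12:30:00\n"
    "usb c hub,P-2097152,39.50,2022-10-19T13:05:00\n"
)

rows = list(csv.DictReader(raw))
first = rows[0]

text_col = first["query"]                           # text column
category = first["product_id"]                      # categorical (could be one of millions)
price = float(first["price"])                       # numerical column
ts = datetime.fromisoformat(first["purchased_at"])  # timestamp column
```

Each column maps to one of the four modalities in the list above; a UDT-style engine would consume such a file directly rather than requiring the user to vectorize it first.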
Business Solutions with UDT
UDT can handle many business problems when the data is simply passed, in its raw format, to the standard UDT API. The engine understands a variety of inputs and hence does not require any hand-holding for feature engineering or model tuning. Below we list some standard business solutions; refer to the demo scripts to see how UDT can unlock the next level of AI by providing additional modalities of information.
- Query to Product Recommendation (or Search).
- Personalization with user meta-data.
- Product to Product Recommendation (or Ads).
- Text Classification and Annotation.
- Sequential Modeling and Forecasting.
- Root Cause Analysis.
Technology: What is inside UDT?
UDT learns models that consist of wide, large neural networks. ThirdAI’s sparsity-based training algorithm lets us quickly train models with hundreds of millions, even billions, of parameters while simultaneously providing sub-one-millisecond inference latency. Read more about the core technology here.
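The core idea behind sparsity-based training and inference can be illustrated with a toy example: instead of computing every neuron in a very wide layer, only a small input-dependent "active" subset is evaluated, so cost scales with the number of active neurons rather than the layer width. The hash-based selection below is a drastic simplification of what an engine like BOLT does and is included only to show that scaling behavior.

```python
import random

LAYER_WIDTH = 1_000_000   # a very wide layer
ACTIVE = 64               # neurons actually computed per input

# Dense weights are never materialized; we generate a weight lazily
# per (neuron, feature) pair to keep the sketch self-contained.
def weight(neuron, feature):
    random.seed(neuron * 2654435761 + feature)  # deterministic pseudo-weight
    return random.uniform(-1, 1)

def pick_active_neurons(features):
    # Stand-in for locality-sensitive hashing: map the input to a
    # small, input-dependent subset of neuron ids.
    seed = sum(features) % LAYER_WIDTH
    return [(seed * (i + 1) * 40503) % LAYER_WIDTH for i in range(ACTIVE)]

def sparse_forward(features, values):
    active = pick_active_neurons(features)
    # Only ACTIVE dot products are computed instead of LAYER_WIDTH of them.
    return {n: sum(weight(n, f) * v for f, v in zip(features, values))
            for n in active}

out = sparse_forward(features=[3, 17, 905], values=[1.0, 0.5, 2.0])
```

Here a million-neuron layer costs only 64 dot products per input; real systems replace the toy hash with trained locality-sensitive hash tables so that the selected neurons are the ones most relevant to the input.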
Model Ownership and Managing AI Lifecycle
UDT gives you full ownership of the model and the complete AI lifecycle, with intuitive and straightforward APIs to train, predict, save, and much more. The trained model is production-ready for inference with ~1-millisecond latency on any standard CPU. Our training costs are low, enabling frequent updates to business-critical models. In addition to all the prediction scores for the output, the model can also extract the internal embeddings of the data samples; these representations and scores can then be fed into most standard pipelines if required.
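As a sketch of how extracted embeddings could feed a downstream pipeline, the snippet below computes cosine similarity between two embedding vectors, the nearest-neighbor primitive most retrieval and recommendation pipelines build on. The embedding values here are fabricated stand-ins; a real pipeline would obtain them from the trained model.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Dummy stand-ins for embeddings a UDT-style model would expose.
emb_query = [0.1, 0.8, -0.3, 0.5]
emb_product = [0.2, 0.7, -0.1, 0.4]

sim = cosine_similarity(emb_query, emb_product)
```

Feeding such vectors into an off-the-shelf similarity index is one common way to reuse a model's internal representations outside the model itself.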
UDT supports distributed training out of the box for training on extremely large datasets. UDT’s data-parallel training is extremely simple to set up and can be up and running in less than 20 minutes. Our distributed training runs on any cloud infrastructure or on-prem system supporting Ray clusters. For details, reach out to us.
UDT is available both on-prem and in the cloud. We also provide docker images and REST APIs, and we integrate with any standard runtime, including Java and Python. Please reach out if you have any custom requests for deployment.
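For REST-style deployment, an inference request typically carries the raw columns as JSON. The endpoint path and field names below are invented for illustration, not ThirdAI's actual REST spec; the snippet only builds and round-trips the payload rather than calling a live service.

```python
import json

# Hypothetical request body for a REST inference endpoint
# (endpoint and field names are assumptions, not ThirdAI's spec).
payload = {
    "columns": {
        "query": "wireless mouse",
        "user_id": "u42",
        "timestamp": "2022-10-19T12:30:00",
    }
}

body = json.dumps(payload).encode("utf-8")

# A real client would POST `body` to the deployed model with any
# standard HTTP client, e.g. urllib.request:
#   req = urllib.request.Request(
#       "https://example.com/udt/predict", data=body,
#       headers={"Content-Type": "application/json"})
#   response = urllib.request.urlopen(req)

decoded = json.loads(body)
```

Because the model consumes raw columns directly, the request body can mirror the training CSV's columns one-to-one, which keeps client integration trivial.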
What is not supported?
Currently, we do not support computer vision and speech applications. We do plan to support them in the near future.
UDT comes with a small, simple, and intuitive set of APIs. Since everything is auto-tuned, you can easily use them in any pipeline. Here are the demo scripts that describe the functionality exposed by the software.
API Documentation for BOLT-0.5.0, releasing 19 Oct 2022.
Please refer to our API docs and demos page for examples of different use cases.