Convert your text data into a private, searchable, and interactive knowledge base using the power of LLMs: no hallucinations, no data transfer, and no internet access required.
Download the app here: https://www.thirdai.com/pocketllm/
PocketLLM in Action
Another advantage of sparsity is that the neural model can be updated and personalized on your device, in real time, by interacting with the displayed results. Simply click a result you like more than the others and hit update. You are fine-tuning the model with your preferences: each time you hit update, a sparse back-propagation algorithm kicks in and the complete model gets updated. With only a small amount of interaction, you can personalize the model to your taste. In the example above, after typing the query, “What is a no contest plea,” I can pick the response I like most and hit update, which updates the whole neural model. When you hit the discover button again, you will see new search results from a model that has been updated with your feedback. In this way, you can personalize the model to any extent you wish.
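To make the feedback loop concrete, here is a minimal sketch of preference-based personalization. This is purely illustrative and not ThirdAI's actual API or training algorithm: it uses a toy linear relevance model and a hand-written pairwise gradient step, where each "update" click nudges the weights so the clicked result scores higher relative to a competing result.

```python
# Illustrative sketch (not the PocketLLM/UDT implementation): personalize a
# toy search model from one piece of click feedback per "update".

def score(weights, query_vec, doc_vec):
    # Relevance = sum_i w_i * q_i * d_i  (a simple learned similarity).
    return sum(w * q * d for w, q, d in zip(weights, query_vec, doc_vec))

def update(weights, query_vec, clicked_doc, other_doc, lr=0.1):
    # Pairwise feedback: push the clicked doc's score above the other's.
    # d/dw_i of (score(other) - score(clicked)) is q_i * (o_i - c_i);
    # step against that gradient.
    return [w - lr * q * (o - c)
            for w, q, c, o in zip(weights, query_vec, clicked_doc, other_doc)]

weights = [1.0, 1.0, 1.0]
query = [1.0, 0.0, 1.0]
doc_a = [0.2, 0.9, 0.1]   # the result the user clicked
doc_b = [0.9, 0.1, 0.8]   # the result that ranked higher before feedback

before = score(weights, query, doc_a) - score(weights, query, doc_b)
for _ in range(20):       # a few "update" clicks
    weights = update(weights, query, doc_a, doc_b)
after = score(weights, query, doc_a) - score(weights, query, doc_b)
print(after > before)     # the clicked result's relative score improves
```

The real app applies the same idea at neural-network scale, with sparse back-propagation making each full-model update cheap enough to run interactively on a laptop CPU.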
Try it Out!
This is an alpha release. Stay tuned for more features. Watch this space: https://www.thirdai.com/pocketllm/
The application can easily be extended to handle millions to billions of documents, on-premises or in the cloud. No hardware infrastructure changes are required; existing CPUs are more than enough. For commercial use or any other feature requests, please reach out to email@example.com.
For Developers: The app is built on our Universal Deep Transformers (UDT) search and embedding model, described here. We look forward to seeing what applications you build! To get started, apply for a free UDT license here and unlock the power of training billion-parameter models on everyday CPU devices.