Dive into more than 1800 ICML Papers Instantly on Your Personal Device via RAG (Retrieval Augmented Generation)
The International Conference on Machine Learning (ICML) 2023, this year's flagship event in machine learning, starts this week in Hawaii. AI is currently the hottest trend, and enthusiasts worldwide are eager to stay on top of a rapidly evolving field by exploring its latest advancements. However, absorbing the vast amount of information in over 1800 dense, technical papers is no easy feat, and simply skimming titles and abstracts won't yield the full insights.
Google Search and ChatGPT won't cut it, as the information in the papers is both too specific and too recent. The good news: there is now a third way, the ThirdAI way of specialized LLMs for this task.
Simplify your ICML 2023 Exploration: Instant Semantic Discovery on your Device. Even works Offline without Internet.
Refer to our PocketLLM Beta user documentation to unlock other capabilities, like reinforcement learning and more.

Next Level Exploration: With semantic search, we elevate information discovery to the next level. Imagine being intrigued by a paragraph in one paper during your exploration and wanting to find similar ideas in other ICML papers. Just type the whole paragraph, and you’ll uncover related concepts. See the figure below for an example of how it works.
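To make the paragraph-as-query idea concrete, here is a minimal, self-contained sketch. It is NOT ThirdAI's NeuralDB; it is a toy bag-of-words cosine-similarity retriever over a tiny hypothetical corpus, illustrating only the interaction pattern of pasting a whole paragraph as the query.

```python
# Toy illustration of paragraph-as-query retrieval (not ThirdAI's NeuralDB).
# The corpus, document IDs, and snippets below are made up for the example.
import math
from collections import Counter

corpus = {
    "paper_A": "sparse neural networks reduce training cost on cpus",
    "paper_B": "vector databases store dense embeddings for retrieval",
    "paper_C": "hash based sparse training enables large models on cpus",
}

def vectorize(text):
    # Simple bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, top_k=2):
    q = vectorize(query)
    scored = [(cosine(q, vectorize(doc)), pid) for pid, doc in corpus.items()]
    return [pid for score, pid in sorted(scored, reverse=True)[:top_k]]

# An entire paragraph can serve as the query, just like in PocketLLM.
print(search("training sparse models efficiently on cpus without gpus"))
# → ['paper_C', 'paper_A']
```

A real neural retriever ranks by learned semantic relevance rather than word overlap, but the interface is the same: paste a long passage, get back the most related documents.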

The power of Neural Search: Drill down on any paragraph to find similar or related ideas from other papers.
(We are only showing NeuralDB Retrieval by switching off Generation in PocketLLM)
Personalized Exploration with Real-Time Incremental Teaching

Know the Limitations
Behind the Scenes: ThirdAI’s NeuralDB with Just 20 minutes of Pre-training on an AMD Desktop!
The complete NeuralDB model was built from scratch on an AMD Milan desktop in under 20 minutes, without any base or foundation models. We pre-trained a 200-million-parameter model from scratch on the 1827 ICML texts using this simple NeuralDB script. The NeuralDB index is less than 1GB in size, and end-to-end retrieval latency on a standard laptop is under 10ms. That's the power of ThirdAI's NeuralDB: no GPUs in the loop, all computation local.
If we instead opt for embedding models and a vector database on the same corpus of roughly 16 million tokens (approximately 1M sentences), the scenario changes significantly. The sentence embeddings alone (excluding the ANN index), assuming 1500 dimensions, would occupy about 6GB of memory to store 1.5 billion numbers. You should also be prepared to spend thousands of dollars every month on cloud services to run the embedding models and vector database, since someone has to cover the cost of GPUs. Even with all that expenditure, search latency would remain high: a slow embedding-model inference followed by a vector-database micro-service query.
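The 6GB figure above is easy to verify with back-of-envelope arithmetic, assuming float32 storage (the corpus size and dimensionality are the ones stated in the text):

```python
# Back-of-envelope check of the embedding-storage estimate from the text.
num_sentences = 1_000_000   # ~1M sentences in the corpus
embedding_dim = 1500        # assumed embedding dimensionality
bytes_per_float = 4         # float32

total_numbers = num_sentences * embedding_dim        # 1.5 billion numbers
total_gb = total_numbers * bytes_per_float / 1e9     # ~6 GB, before any ANN index
print(total_numbers, total_gb)  # 1500000000 6.0
```

Note this counts only the raw vectors; an ANN index and the embedding model itself add further memory on top.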
Level Up: From AI User to AI Builder with PocketLLM!
Conference proceedings are just one illustration. With PocketLLM, anyone can build and share similar capabilities on any custom corpus, using either the user-friendly UI or the simple NeuralDB scripts provided here. If you can use Windows or Mac, you have the power to create your own AI and make it instantly accessible. Get started with PocketLLM today!