Platonic Research: Make Large Language Models Smarter and Cheaper
About Us
Platonic Research is a San Francisco-based startup on a mission to push the boundaries of language models' reasoning abilities. Co-founded by two physicists, we approach problems from first principles, exposing the weaknesses of LLMs and addressing them with new architectures.
The problems with current LLMs
No ability to backtrack
When an LLM notices an error in a word it has already generated, it cannot go back and correct it: every word is final the moment it is produced.
Constant compute allocation per word
LLMs allocate the same amount of compute to predicting every word, no matter how easy or hard that word is. Much of this computation is wasted on simple words, while difficult words may not receive enough to be predicted accurately.
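To make this concrete: a standard decoder-only transformer performs essentially the same number of floating-point operations for every word it generates. The sketch below uses the common approximation of about 2 FLOPs per parameter per generated token; the model sizes are illustrative, not any specific system.

```python
# Per-word compute for a decoder-only transformer, using the standard
# approximation of ~2 FLOPs per parameter per generated token (the
# attention term, which grows with context length, is ignored).
# The model sizes below are illustrative.

def flops_per_word(num_params: float) -> float:
    return 2.0 * num_params

for name, params in [("7B model", 7e9), ("70B model", 70e9)]:
    easy = flops_per_word(params)  # e.g. completing "Paris is the capital of ..."
    hard = flops_per_word(params)  # e.g. the key step of a difficult proof
    print(f"{name}: easy word {easy:.1e} FLOPs, hard word {hard:.1e} FLOPs")
# Both numbers are always identical: the compute never adapts to difficulty.
```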
Sentence tree is too big
The tree of possible sentences grows exponentially with length, so LLMs cannot explore it exhaustively and rely on heuristics instead. This biases generation towards common, predictable sentences, limiting both the creativity and the accuracy of the output.
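To put a number on "too big": with a vocabulary of V tokens there are V^n possible continuations of length n, so the tree explodes after just a few words. A quick illustration (the vocabulary size is a typical order of magnitude, not any particular model's):

```python
# Number of possible continuations of length n for a vocabulary of V
# tokens. V = 32,000 is a typical order of magnitude for modern
# tokenizers, not any specific model's value.
V = 32_000
for n in (1, 2, 5, 10):
    print(f"continuations of length {n:>2}: {V ** n:.2e}")
# Already at length 5 there are ~3e22 possibilities, which is why
# decoders fall back on heuristics such as greedy or beam search.
```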
Information loss from word to word
With every new word it generates, an LLM discards essentially all of the computational work that produced it, wasting significant resources (over 100 MB of activations per word!). Because nothing is carried over, the model cannot reuse previous computation, leading to inefficiency and slower generation.
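The exact figure depends on the model, but a back-of-the-envelope estimate gives the order of magnitude. The sketch below assumes a GPT-3-scale configuration in fp32; the per-layer activation multiplier is a rough rule of thumb rather than an exact count.

```python
# Rough estimate of the intermediate activations computed per word and
# then thrown away. Assumes a GPT-3-scale configuration; the factor of
# ~14 activation values per hidden dimension per layer (attention
# projections, MLP expansion, residuals, norms) is a rule of thumb.
layers = 96
d_model = 12_288
values_per_dim_per_layer = 14
bytes_per_value = 4  # fp32

per_word = layers * d_model * values_per_dim_per_layer * bytes_per_value
print(f"~{per_word / 1e6:.0f} MB of activations discarded per word")
# Roughly 66 MB under these assumptions; larger models and longer
# contexts push the figure past 100 MB.
```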
Our Approach
1. Concept Extraction
We extract and represent the key concepts within the text.
2. Concept-Based Reasoning
Our models use these conceptual representations to perform reasoning in the concept space.
3. Language Decoding
After generating the answer concepts, we decode them into human language and show them to the user.
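In code, the three steps above read like the sketch below. Every name and the toy concept space are hypothetical stand-ins for illustration, not our actual components.

```python
# A minimal sketch of the three-stage pipeline. All names and the toy
# concept space are hypothetical illustrations, not our actual API.
from dataclasses import dataclass

@dataclass
class Concept:
    vector: tuple[float, ...]  # a point in the concept space

def extract_concepts(text: str) -> list[Concept]:
    # 1. Concept Extraction: map text to concept vectors.
    # (Toy stand-in: one dummy one-dimensional vector per word.)
    return [Concept((float(len(word)),)) for word in text.split()]

def reason(concepts: list[Concept]) -> list[Concept]:
    # 2. Concept-Based Reasoning: transform concepts in concept space
    # until the answer concepts are reached. (Toy stand-in: identity.)
    return concepts

def decode_to_language(concepts: list[Concept]) -> str:
    # 3. Language Decoding: translate answer concepts back into words.
    return f"<answer decoded from {len(concepts)} concepts>"

print(decode_to_language(reason(extract_concepts("Why is the sky blue?"))))
```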
How our architecture solves these problems

1. Constant compute allocation per word
Our model spends most of its compute predicting the next concept (the genuinely difficult part) and only a small amount translating it into readable text.

2. Information loss between word generations
Our model works in a rich vector space of concepts and loses no information until reasoning is complete and the result is translated into words. It can also allocate arbitrary test-time compute to the hardest concepts by using reasoning concepts, as sketched below.
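One way to picture that adaptive allocation: refine each concept until the model is confident in it, so that hard concepts naturally receive more iterations than easy ones. The difficulty scores and update rule below are invented for illustration; this is not our production algorithm.

```python
# Illustrative sketch of adaptive test-time compute: each concept is
# refined until a toy confidence score clears a threshold, so harder
# concepts receive more reasoning steps. All numbers are made up.
def refine_until_confident(difficulty: float, threshold: float = 0.9) -> int:
    confidence, steps = 0.0, 0
    while confidence < threshold:
        confidence += (1.0 - difficulty) * 0.3  # easy concepts converge fast
        steps += 1
    return steps

for name, difficulty in [("easy concept", 0.1), ("hard concept", 0.8)]:
    print(f"{name}: {refine_until_confident(difficulty)} reasoning steps")
# With these toy numbers the hard concept gets about four times as
# many reasoning steps as the easy one.
```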

3. Sentence tree is too big
When expressed in terms of concepts, the sentence tree is exponentially smaller: roughly 1000 times smaller for every concept in the text.

4. No ability to backtrack
The model does not emit any text until it is confident the answer is right; until then, it keeps computing in the space of concepts, as sketched below.
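A minimal sketch of that confidence gate, with a made-up threshold and refinement step standing in for our actual method:

```python
import random

# Confidence-gated generation: no text is emitted until the model is
# confident in its answer concepts; until then it keeps refining in
# concept space. The threshold and refinement step are made up.
CONFIDENCE_THRESHOLD = 0.95

def refine(confidence: float) -> float:
    # Toy stand-in for one more reasoning step in concept space:
    # each step nudges confidence upward by a random amount.
    return min(1.0, confidence + random.uniform(0.05, 0.2))

def generate(question: str) -> str:
    confidence, steps = 0.0, 0
    while confidence < CONFIDENCE_THRESHOLD:  # not sure yet: keep reasoning
        confidence = refine(confidence)
        steps += 1
    return f"<answer emitted after {steps} reasoning steps>"  # only now commit to words

print(generate("What is 17 * 23?"))
```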
Our Team
Marco Eterno
Co-founder and CEO
Federico Visintini
Co-founder and CTO
Contact Us
We encourage you to reach out and connect with us. We're excited to discuss our research, explore potential collaborations, and learn about your interest in our work.
info@platonicresearch.com