Welcome to the inaugural edition of Plastic Labs' "Extrusions," a monthly prose-form synthesis of what we've been chewing on. No one needs another curation newsletter, so expect these to be densely linked glimpses into the thought-space of our organization. And if you like, you can engage with the ideas directly.
2023 Recap
Last year was wild. We started as an edtech company and ended as anything but. There's a deep dive on some of the conceptual lore in last week's "Honcho: User Context Management for LLM Apps":
Plastic Labs was conceived as a research group exploring the intersection of education and emerging technology...with the advent of ChatGPT...we shifted our focus to large language models...we set out to build a non-skeuomorphic, AI-native tutor that put users first...we open-sourced that tutor, Tutor-GPT (a.k.a. Bloom), and ran it for thousands of users during the 9 months we hosted it for free...
Building a production-grade, user-centric AI application, then giving it nascent theory of mind and metacognition (inference about inference), made it glaringly obvious to us that social cognition in LLMs was both under-explored and under-leveraged.
We pivoted to address this hole in the stack and build the user context management solution that agent developers need to truly give their users superpowers. Plastic applied and was accepted to Betaworks' AI Camp: Augment.
We spent camp in a research cycle, then published a preprint showing it's possible to enhance LLM theory-of-mind ability with predictive-coding-inspired metaprompting.
Then it was back to building.
2024 Roadmap
2024 will be the year of Honcho. Check out the new site we're launching today.
Last week's "Honcho: User Context Management for LLM Apps" introduced the first iteration of Honcho (name lore), our project to re-define LLM application development through user context management. At this nascent stage, you can think of it as an open-source version of the OpenAI Assistants API. Honcho is a REST API that defines a storage schema to seamlessly manage your application's data on a per-user basis. It ships with a Python SDK; you can read more about how to use it here.
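To make "per-user storage schema" concrete, here's a minimal in-memory sketch of the shape such a store might take: data namespaced by user, grouped into sessions, with each session holding the messages for one conversation. The class and field names below are illustrative assumptions for this post, not Honcho's actual schema or SDK surface; in a real app these calls would go through the Honcho REST API or the Python SDK linked above.

```python
# Illustrative sketch only: a toy, in-memory stand-in for per-user context
# management. Names and fields are assumptions, not Honcho's real schema.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Message:
    is_user: bool   # True if the end user sent it, False if the app/agent did
    content: str


@dataclass
class Session:
    messages: List[Message] = field(default_factory=list)


@dataclass
class User:
    user_id: str
    sessions: Dict[str, Session] = field(default_factory=dict)


class InMemoryUserStore:
    """Toy stand-in for the calls an app would make against Honcho's API."""

    def __init__(self) -> None:
        self.users: Dict[str, User] = {}

    def get_or_create_user(self, user_id: str) -> User:
        return self.users.setdefault(user_id, User(user_id))

    def create_session(self, user_id: str, session_id: str) -> Session:
        session = Session()
        self.get_or_create_user(user_id).sessions[session_id] = session
        return session

    def add_message(self, user_id: str, session_id: str,
                    is_user: bool, content: str) -> None:
        self.users[user_id].sessions[session_id].messages.append(
            Message(is_user=is_user, content=content)
        )


# Every read and write is keyed by user, so each user's context stays isolated.
store = InMemoryUserStore()
store.create_session("alice", "session-1")
store.add_message("alice", "session-1", is_user=True, content="Hi, Bloom!")
```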
And coming up, you can expect a lot more:
- Next we'll drop a fresh paradigm for constructing agent cognitive architectures with users at the center, replete with cookbooks, integrations, and examples
- After that, we've got some dev viz tooling in the works that allows anyone to quickly grok all the inferences and context at play in a conversation, compare sessions, and visualize and manipulate entire architectures, as well as swap and compare the performance of custom architectures across the landscape of models
- Finally, we'll bundle the most useful of all this into an offering of managed, hosted services
Keep in Touch
Thanks for reading.
You can find us on X/Twitter, but we'd really like to see you in our Discord 🫡.
