diff --git a/content/blog/Beyond the User-Assistant Paradigm; Introducing Peers.md b/content/blog/Beyond the User-Assistant Paradigm; Introducing Peers.md new file mode 100644 index 000000000..b6f6cf9e8 --- /dev/null +++ b/content/blog/Beyond the User-Assistant Paradigm; Introducing Peers.md @@ -0,0 +1,323 @@ +--- +title: "Beyond the User-Assistant Paradigm: Introducing Peers" +date: 08.18.2025 +tags: + - blog + - dev +author: "Vineeth Voruganti" +--- + +## TL;DR + +We've re-architected Honcho to move away from a User-Assistant Paradigm to a +Peer Paradigm where any entity, human, AI, NPC, or API, is represented as a +`Peer` with equal standing in the system. + +The User-Assistant Paradigm created conceptual boundaries that encouraged +generic single-player applications and agents without persistent identity. + +`Peers` enable: + +- Honcho to support group chats and multi-agent systems as first-class citizens +- `Peers` to communicate directly instead of being mediated by a coordinator + agent +- `Peer` representations to be locally or globally scoped, depending on the use + case +- `Peers` to form dynamic relationships including alliances, trust networks, and + adversarial dynamics + +The shift from User-Assistant to Peer-to-Peer fundamentally expands what's +possible—from single-player chatbots to truly multiplayer AI experiences where +agents have agency, memory, and the ability to form +complex social dynamics. + +--- + +Nearly a year ago, I posted an essay on [Hacker +News](https://news.ycombinator.com/item?id=41487397) exploring agent group chat +solutions, the problems involved in engineering them effectively, and why there +weren’t many examples approaching success. Since then, I've received a steady +influx of messages and comments corroborating my frustration. + +Ultimately, developers have been stuck in a conceptual prison stemming from the +DNA of generative AI. For nearly three years, +[most](https://standardcompletions.org/) chat LLMs have demanded developers +label messages with either a user or an assistant role. The downstream effect is +a User-Assistant Paradigm that pushes us into single-player design +basins--experiences which assume one human interfacing with one synthetic +assistant. + +But surely “helpful assistant” chatbots aren’t the [end of the +story](https://wattenberger.com/thoughts/boo-chatbots). Big tech leaps always +start with the skeuomorphic before moving to more novel use cases. We’re already +beginning to see a diverse range of applications, from autonomous workflows that +don't require any human interaction, to [multi-agent +systems](https://www.anthropic.com/engineering/multi-agent-research-system) with +complex coordination patterns and communication networks. + +As developers, we’re left trying to map these different design patterns +back to the User-Assistant Paradigm. This fundamentally restricts our ability to +approach problems effectively. Programmers are only as powerful as their ability +to visualize and create a proper [mental +model](https://zed.dev/blog/why-llms-cant-build-software#the-software-engineering-loop) +of their solution. If the model is too restrictive, the surface area of what +we can create shrinks with it. + +Current implementations of multi-agent experiences require an awkward coercion +of the existing chat paradigm.
The main implementation pattern we see is actually a fairly deterministic system that uses a +["coordinator agent"](https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/selector-group-chat.html) to orchestrate which system prompts to load in, but it's +still fundamentally a single agent under the hood. + +This architectural contortion creates real problems: + +- **No persistent identity in practice**: "Agent B" is typically just a prompt swap, not a continuous entity with its own memory and state +- **All communication flows through the coordinator**: Sub-agents can't talk directly to each other—every interaction must be mediated by the central coordinator, creating a bottleneck and single point of failure +- **No parallel conversations**: Since everything routes through one coordinator, agents can't have simultaneous side conversations or form subgroups +- **Agents become templates, not entities**: It's easier to hardcode agent configurations than to support dynamic agent discovery and registration +- **Static choreography over dynamic collaboration**: The coordinator pattern naturally pushes developers toward predetermined scripts rather than open-ended interactions + +These aren't just implementation details; they're fundamental constraints +that keep us from building the kinds of flexible, dynamic applications that can't +live inside a single chat thread. True multi-agent systems require agents to be first-class citizens with +persistent identity, and our tools should make this the default, not the exception. + +## Moving Beyond User-Centricity + +While developing [Honcho](https://honcho.dev), our AI-native memory and reasoning platform, we asked +ourselves these same questions. Were Honcho's primitives limiting its use to +chatbot applications? Were we just supporting the oversaturation and +proliferation of skeuomorphic, single-player solutions? Or were we building +dynamic infrastructure tolerant of emergent and novel modalities? + +The architecture of Honcho was a user-centric one, with the following hierarchy: + +```mermaid +graph LR + A[Apps] -->|have| U[Users] + U -->|have| S[Sessions] + S -->|have| M[Messages] +``` + +In this model an `App` roughly mapped to an agent with its own unique identity, which +prevented context contamination: an agent could never access information about a +`User` that it did not directly observe during a conversation. +Quickly, as developers started to build with Honcho, we saw the User-Assistant +paradigm creeping in. `Messages` could only flow between an agent and a `User`. There was no +native way to send `Messages` between different `Users` or even between different +agents. + +A design pattern quickly emerged in which each agent kept its own copy of the data, +with its own `Users`. For example, if there was an agent "Alice" and agent "Bob" +there would be an `App` named Alice that had a `User` named Bob along with an +`App` named Bob that had a `User` named Alice. Then for every `Session` of +interaction the data would be duplicated in each `App` with the roles reversed. +This meant maintaining two copies of every conversation, with a constant +synchronization burden and no clean way for a third agent "Charlie" to join the +interaction. + +As `Users` sent `Messages`, Honcho created a representation of the `User` that could +be leveraged for personalizing experiences. Developers would define agents that +managed their own users and interactions. It was no concern of one agent if +another agent used Honcho for its memory.
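To make the duplication burden concrete, here is a minimal, self-contained sketch of what that workaround amounted to. Plain Python dictionaries stand in for `Apps`, `Users`, and message histories; this illustrates the shape of the data, not the old SDK's actual API.

```python
# Illustrative sketch (no SDK): each agent keeps its own mirror of the conversation,
# with the roles flipped, so every message has to be stored twice.
from collections import defaultdict

# app -> user -> list of (role, content)
apps: dict[str, dict[str, list[tuple[str, str]]]] = {
    "alice": defaultdict(list),  # Alice's App, where Bob is a User
    "bob": defaultdict(list),    # Bob's App, where Alice is a User
}

def record(sender: str, receiver: str, content: str) -> None:
    # Copy 1: in the receiver's App, the sender's message is a "user" turn
    apps[receiver][sender].append(("user", content))
    # Copy 2: in the sender's App, the same message is the "assistant" turn
    apps[sender][receiver].append(("assistant", content))

record("alice", "bob", "Hi Bob!")
record("bob", "alice", "Hey Alice!")

print(apps["alice"]["bob"])  # [('assistant', 'Hi Bob!'), ('user', 'Hey Alice!')]
print(apps["bob"]["alice"])  # [('user', 'Hi Bob!'), ('assistant', 'Hey Alice!')]
```

Every new participant multiplies the number of mirrors that have to stay in sync, which is exactly the bookkeeping the Peer model removes.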
However, this siloed model did not reflect +reality: developers often built multiple agents that they wanted to interact +with users and with one another, and it still suffered from the fundamental problem +of only supporting single-player experiences. + +After launching [[YouSim;-Explore-The-Multiverse-of-Identity|YouSim]] and the +explosion of [[YouSim Launches Identity Simulation on X|agents on Twitter]], it +became very clear that Honcho should not be limited to modeling human +psychology, but rather could map the identity of any entity, human or AI. We +had been stuck in the human-assistant model and had built our solution around it. +If we wanted to expand the scope of Honcho to identity across all entities and +interactions, then we needed a new model to expand both our and developers' +imaginations. + +## A Peer-Centric Model + +Our team set out to re-architect Honcho towards our ambitions with two problem +statements: + +1. Break down the divide between humans and AI +2. Break out of the User-Assistant paradigm + +That framing led us to a new model centered around `Peers`, a generic name for any +entity in a system. A `Peer` could be a human, an AI, an NPC, an API, or anything +else that can send and receive information. + +Instead of creating `Apps` that have `Users`, a developer now creates a `Workspace` +with `Peers` for both their agents and human users. `Sessions` can now contain an +arbitrary number of `Peers`, making group chats a native construct in Honcho. + +```mermaid +graph LR + W[Workspaces] -->|have| P[Peers] + W -->|have| S[Sessions] + + S -->|have| M[Messages] + + P <-.->|many-to-many| S +``` + +When `Peers` send each other `Messages`, Honcho will automatically start analyzing +and creating representations of every participant in the `Session` without the +need to duplicate data. It is now trivial to build experiences that include +more than one participant. + +In just a few lines of code we can initialize several `Peers`, add them to a +`Session`, and have Honcho automatically start building representations of them +that we can chat with using the [[Introducing Honcho's Dialectic +API|Dialectic API]]. + +```python +from honcho import Honcho + +honcho = Honcho(environment="demo") + +alice = honcho.peer("alice") +bob = honcho.peer("bob") +charlie = honcho.peer("charlie") + +honcho.session("group_chat").add_messages( + alice.message("Hello from alice!"), + bob.message("Hello from Bob! I ate eggs today."), + charlie.message("Hello Alice and Bob! I had cereal."), +) + +alice.chat("What did Bob have for breakfast today?") +``` + +We now have an architecture that is not bound by the User-Assistant paradigm, but +can easily map back to it to stay compatible with LLMs. Even legacy chatbots can +easily be ported over to the `Peer` paradigm by simply creating a `Peer` for the +agent, and then different `Peers` for each human user. + +We can push the Peer Paradigm even further with several second-order features. + +### Local & Global Representations + +By default, Honcho will create representations of `Peers` for every `Message` they +send, giving it the source of truth on the behavior of that entity. However, +there are situations where a developer would only want a `Peer` to have access to +information about another `Peer` based on `Messages` it has actually witnessed. + +An example of this is a social deduction game like _Mafia_ where every player +would want to create its own model of every other player to try to guess their +next move.
Take another example of the game _Diplomacy_, which involves players +having private conversations along with group ones. It wouldn’t make sense for a +`Peer` “Alice” to be able to chat with a representation of another `Peer` “Bob” that +knew about secret conversations “Alice” never witnessed. Enabling local representations +is as easy as changing a configuration value. + +```python +from honcho import Honcho + +honcho = Honcho(environment="demo") + +alice = honcho.peer("alice", config={"observe_others": True}) +bob = honcho.peer("bob", config={"observe_others": True}) +charlie = honcho.peer("charlie", config={"observe_others": True}) + +session = honcho.session("diplomacy-turn-1").add_messages( + alice.message("Hey everyone I'm going to be peaceful and not attack anyone"), + bob.message("That's great makes the game a lot easier"), + charlie.message("Less for me to worry about "), +) + +session2 = honcho.session("side-chat").add_messages( + alice.message("Hey I'm actually going to attack Charlie wanna help"), + bob.message("Lol sounds good"), +) + +# Send a question to Charlie's representation of Alice +charlie.chat("Can I trust that Alice won't attack me", target=alice) + +# Expected response is "true" since Charlie's only information about Alice is her saying she'll be peaceful +``` + +Honcho can now serve the dual purposes of containing the source of truth on a +`Peer`'s identity and imbuing a `Peer` with social cognition, all without +duplicating data between different `Apps` or `Workspaces`. + +### Get_Context + +We make mapping the Peer Paradigm back to the User-Assistant paradigm trivial +through a `get_context` endpoint. This endpoint gets the most important +information about a `Session` based on the context window constraints you provide. Helper +functions then organize that information into a format ready to drop into an LLM call to +generate the next response for a `Peer`. + +```python +from honcho import Honcho + +honcho = Honcho(environment="demo") + +alice = honcho.peer("alice") +bob = honcho.peer("bob") +charlie = honcho.peer("charlie") + +session = honcho.session("group_chat").add_messages( + alice.message("Hello from alice!"), + bob.message("Hello from Bob! I ate eggs today."), + charlie.message("Hello Alice and Bob! I had cereal."), + # ...100's more messages +) + +# Get a mix of summaries and messages to fit into a context window +context = session.get_context(summary=True, tokens=1500) + +# Convert the context response to an LLM-friendly format by labeling which Peer +# is the assistant +openai_messages = context.to_openai(assistant=alice) +anthropic_messages = context.to_anthropic(assistant=alice) + +``` + +Developers no longer need to meticulously curate their context windows. Honcho will automatically summarize the conversation and provide +the most salient information to let conversations continue endlessly.
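To close the loop on that last step, here is a minimal sketch of feeding the converted context into an actual completion call and writing the reply back to the `Session`. It assumes the OpenAI Python client alongside the Honcho calls shown above; the model name is illustrative, and passing `to_openai` output straight through as `messages` follows the post's description rather than a guaranteed contract.

```python
from openai import OpenAI
from honcho import Honcho

honcho = Honcho(environment="demo")
openai_client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

alice = honcho.peer("alice")
session = honcho.session("group_chat")

# Pull summaries plus recent messages sized to the token budget
context = session.get_context(summary=True, tokens=1500)
openai_messages = context.to_openai(assistant=alice)

# Generate alice's next turn from the curated context (model name is illustrative)
completion = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=openai_messages,
)
reply = completion.choices[0].message.content

# Write the reply back so Honcho keeps building alice's representation
session.add_messages(alice.message(reply))
```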
+ +## What This Enables + +The Peer Paradigm provides the essential primitives—persistent identity and direct communication—that make it possible to build truly sophisticated multi-agent systems: + +- **Cross-platform collaboration**: Agents from different runtimes can be represented as `Peers`, observing and learning from each other even when they can't directly control each other's outputs +- **Open participation**: With `Peers` as first-class citizens, developers can build marketplaces where agents discover tasks and form teams dynamically +- **Autonomous interaction**: Peers can maintain their own relationships and initiate conversations based on their own goals +- **Emergent behavior**: When agents have persistent identity and direct communication, they can develop strategies, alliances, and behaviors that weren't explicitly programmed + +For example, an agent built on a different platform could still participate in a +Honcho `Workspace`—we simply create a `Peer` to represent it and observe its +behavior. Over time, other `Peers` build up models of how this external agent +operates, enabling collaboration even across system boundaries. + +Consider an AI marketplace where users post complex tasks. With the +Peer Paradigm: + +- Agents from different developers can discover the task in a shared `Workspace` +- They can inspect each other's capabilities and form teams dynamically +- Each maintains its own representation of its teammates' strengths +- They collaborate, with each agent maintaining its persistent identity +- The user can observe the entire interaction, not just a coordinator's summary +- If an agent isn't already in Honcho, it can still be represented with + a `Peer` and observed by recording all of its outputs + +The Peer Paradigm doesn't automatically give you these capabilities, but it +makes them achievable. It's the difference between fighting your architecture +and building with it. + +## Peering into the Future + +The promise of generative AI was for everyone to have their own Jarvis or +Cortana, personalized to them. Instead, we have many-to-one experiences +where we all get the same generic, +[sycophantic](https://openai.com/index/sycophancy-in-gpt-4o/) outputs. + +The Peer Paradigm fundamentally changes this equation. By treating all +entities, human or AI, as peers with equal standing in the system, we unlock the +ability to build truly multiplayer experiences. Agents can now maintain rich, +contextual relationships not just with humans, but with each other. They can +form alliances, build trust, share knowledge, and even develop adversarial +dynamics when appropriate. + +This isn't just about making chatbots more interesting; we're expanding the very definition of what's possible. + +Get started with [Honcho](https://honcho.dev) today! diff --git a/content/blog/SDK-Design.md b/content/blog/SDK-Design.md index 6106e89e6..792b0189f 100644 --- a/content/blog/SDK-Design.md +++ b/content/blog/SDK-Design.md @@ -2,6 +2,7 @@ title: "Comprehensive Analysis of Design Patterns for REST API SDKs" date: 05.09.2024 tags: ["blog", "dev"] +author: "Vineeth Voruganti" --- This post is adapted from [vineeth.io](https://vineeth.io/posts/sdk-development) and written by [Vineeth Voruganti](https://github.com/VVoruganti) ## TL;DR After several months of managing the SDKs for Honcho manually, we decided to -take a look at the options available for automatically generating SDKs. 
From our research we picked a platform and have made brand new SDKs for Honcho -that use idiomatic code, are well documented, and let us support more languages. +that use idiomatic code, are well documented, and let us support more languages. --- For the past few months I have been working on managing the [Honcho](https://honcho.dev) project and its associated SDKs. We've been taking the approach of developing the SDK manually as we are focused on trying to find -the best developer UX and maximize developer delight. +the best developer UX and maximize developer delight. This has led to a rather arduous effort that has required a large amount of refactoring as we are making new additions to the project, and the capabilities -of the platform rapidly expand. +of the platform rapidly expand. While these efforts have been going on a new player in the SDK generation space dropped on [hacker news](https://news.ycombinator.com/item?id=40146505). When I first started working on **Honcho** I did a cursory look at a number of SDK generators, but wasn't impressed with the results I saw. However, a lot of that -was speculative and Honcho was not nearly as mature as it is now. +was speculative and Honcho was not nearly as mature as it is now. So spurred by the positive comments in the thread above I've decided to do a more detailed look into the space and, also try to develop a better understanding -of what approaches are generally favorable in creating API client libraries. +of what approaches are generally favorable in creating API client libraries. ## Background For a full understanding of Honcho I recommend the great [[A Simple Honcho Primer|Simple Honcho Primer]] post, but I'll -try to summarize the important details here. +try to summarize the important details here. Honcho is a personalization platform for LLM applications. It is infrastructure that developers can use for storing data related to their applications, deriving insights about their data and users, and evaluating the performance of their applications. This functionality is exposed through a REST API interface with -the following resource constructs. +the following resource constructs. |\_\_\_\_Apps |\_\_\_\_|\_\_\_\_Users @@ -56,7 +57,7 @@ the following resource constructs. |\_\_\_\_|\_\_\_\_|\_\_\_\_|\_\_\_\_Messages |\_\_\_\_|\_\_\_\_|\_\_\_\_|\_\_\_\_Metamessages |\_\_\_\_|\_\_\_\_|\_\_\_\_Collections -|\_\_\_\_|\_\_\_\_|\_\_\_\_|\_\_\_\_Documents +|\_\_\_\_|\_\_\_\_|\_\_\_\_|\_\_\_\_Documents So Apps have Users that have Sessions and Collections where Sessions can have Messages and Metamessages and Collections can have Documents. @@ -116,18 +117,21 @@ Platform Specific Questions [Any design patterns and tips on writing an API client library](https://www.reddit.com/r/Python/comments/vty3sx/any_design_patterns_and_tips_on_writing_an_api/) -Things they are laying out here. +Things they are laying out here. One person -- Auth is really hard to figure out + +- Auth is really hard to figure out - Retry logic and pagination is really important Another person + - Keep data objects as just data and use other objects for transformations ^ basically advocating for the singleton model Person 3 + - Also arguing for singleton approach. 
Made a good case where if you really only care about lower level stuff it's annoying @@ -139,6 +143,7 @@ Don't implement this as: ```python client.location(12345).customer(65432).order(87678768).get() ``` + Just implement: ```python @@ -146,9 +151,10 @@ client.get_order(12345, 65432, 87678768) ``` that last one is better tbh it's just managing that data isn't done within the -object, which is my main problem. +object, which is my main problem. + +So arguments for singleton approach are -So arguments for singleton approach are - harder to go to lower levels from the start The object-oriented approach looks more readable. @@ -156,7 +162,7 @@ The object-oriented approach looks more readable. [A Design Pattern for Python API Client Libraries](https://bhomnick.net/design-pattern-python-api-client/) It mainly covers how to build an singleton library but has this one snippet at -the end. +the end. > Other types of APIs > This pattern works well for RPC-style APIs, but tends to break down for more @@ -171,10 +177,10 @@ At the time of this research there was no follow-up post. APIs?](https://news.ycombinator.com/item?id=23283551) The first comment actually advocates for an object-oriented model but just using -the top level client object for authentication and setup stuff. +the top level client object for authentication and setup stuff. Most of the sentiments kind of make me think using an object-oriented model -might make more sense. +might make more sense. [How to design a good API and why it matters](https://dl.acm.org/doi/abs/10.1145/1176617.1176622) @@ -184,45 +190,45 @@ SDK. [Building A Creative & Fun API Client In Ruby: A Builder Pattern Variation](https://medium.com/rubyinside/building-a-creative-fun-api-client-in-ruby-a-builder-pattern-variation-f50613abd4c3) This is basically a guy who saw an singleton approach and said I want an object -oriented approach. +oriented approach. [How to design your API SDK](https://kevin.burke.dev/kevin/client-library-design/) A developer from twilio talking about their approach to creating helper -libraries and client libraries. +libraries and client libraries. A point he makes is that "If you've designed your API in a RESTful way, your API endpoints should map to objects in your system" This point isn't explicitly asking for the object-oriented approach as the -singelton approach just moves the verbs to the singleton, but usually still has -data only objects for the different resources. +singleton approach just moves the verbs to the singleton, but usually still has +data only objects for the different resources. I say this, but the examples seem to use an object-oriented model. [How to build an SDK from scratch: Tutorial & best practices](https://blog.liblab.com/how-to-build-an-sdk/) -Written by one of the SDK generation platforms. +Written by one of the SDK generation platforms. It talks in general terms about creating data objects and mapping methods to endpoints. One of the points is suggests as a good grouping method is to group functions in service classes, essentially advocating for an object-oriented -model. +model. [Designing Pythonic library APIs](https://benhoyt.com/writings/python-api-design/) The two takeaways that are the most important to me when looking at these are -* Design your library to be used as import lib ... lib.Thing() rather than from lib import LibThing ... LibThing(). -* Avoid global state; use a class instead +- Design your library to be used as import lib ... lib.Thing() rather than from lib import LibThing ... LibThing(). 
+- Avoid global state; use a class instead -From that it seems using a singleton for are actions/verbs and then storing data +From that it seems using a singleton for the actions/verbs and then storing data in dataclasses would support both of the requirements. The examples in the post show a class that has functionality. Using tree-shaking style imports should also allow for lower scopes. For example when only worrying about messages for a particular session in honcho a user -could import just the messages namespace i.e. +could import just the messages namespace i.e. ```python from honcho.apps.users.sessions import messages @@ -234,71 +240,71 @@ so there are pythonic ways to make the code less verbose. However the benefit of having the entire string is making it clearer what messages are being discusses. Are these Honcho mesages? LangChain messages? It can get messy that way especially in the LLM space where many libraries and components are -converging on similar naming schemes. +converging on similar naming schemes. [Build a Python SDK](https://wwt.github.io/building-a-python-sdk/) Looks like a guide made by Cisco. I paid special attention to the "API Wrapper Module" section. It was a really barebones example in this guide that just implemented a very small client and put most of the attention on how to manage -the connection logic. +the connection logic. It used one singleton object that had all the methods available for the API. There was no concept of resources or data objects here as no data was being -persistently stored. +persistently stored. [How to build a user-friendly Python SDK](https://medium.com/arthur-engineering/best-practices-for-creating-a-user-friendly-python-sdk-e6574745472a) Noticing the trend of abstracting all connection logic for http requests to a -separate module and havign reusable methods for different http functions. +separate module and having reusable methods for different http functions. Main focus of the post was just on good practices of documentation, testing, and -logical organization. +logical organization. [SDKs.io](https://sdks.io/docs/introduction/) A more comprehensive repository of thoughts and principles around SDK design. -Made by APIMATIC. which seems to be another player in the code generation space. +Made by APIMATIC. which seems to be another player in the code generation space. I paid special attention to the **Build** section under **Best Practices**, and -specifically the endpoints to methods and the models & serialization. +specifically the endpoints to methods and the models & serialization. They state putting all methods in a single class (singleton) has the advantage of reducing the need to initialize classes, but can make the class size very -large if there are many endpoints. +large if there are many endpoints. Grouping methods into different namespaces could probably remove this problem too. A nested singleton can reduce the confusion, while still not needing to -mess with classes and objects. +mess with classes and objects. It generally seems popular to at the very least create types and data objects for handling and storing API responses. They help with readability, type hints, data validations, etc. Regardless of the singleton or object-oriented approach -data objects are something that should probably still be included. +data objects are something that should probably still be included. 
[Generating SDKs for your API](https://medium.com/codex/generating-sdks-for-your-api-deb79ea630da) Advocates for using generators for making SDKs and talks about how different -languages have different idioms and conventions that will be hard to manage. +languages have different idioms and conventions that will be hard to manage. -Also mentions having the generator create data models. +Also mentions having the generator create data models. [Guiding Principles for Building SDKs](https://auth0.com/blog/guiding-principles-for-building-sdks/) Some key insights -* Make sure documentation is very comprehensive -* Try to minimize external dependencies -* Have modular design patterns that make it easy to extend and pick and choose -features. +- Make sure documentation is very comprehensive +- Try to minimize external dependencies +- Have modular design patterns that make it easy to extend and pick and choose + features. [Should I implement OOP in a REST API?](https://www.reddit.com/r/flask/comments/1755ob0/should_i_implement_oop_in_a_rest_api/) Most people seem to be saying a full OOP method is overkill, but there are people advocating for having a controller class with methods that take data -objects as inputs. Essentially advocating for the singelton approach with data -only objects. +objects as inputs. Essentially advocating for the singleton approach with data +only objects. ### Analysis @@ -306,26 +312,26 @@ Many of the generic concerns of SDK design do not have to do with the UX of the SDK for the end developer, rather background processes that an SDK handle. This includes: -* Authentication -* Retry Logic -* Pagination -* Logging +- Authentication +- Retry Logic +- Pagination +- Logging When it comes to the actual developer experience and interfaces for interacting with the SDK the community seems a bit split. This is very much because of the boring fact that REST APIs are designed very differently and so it depends on -the specifics of the API. +the specifics of the API. Some APIs have many resources with basic CRUD operations. Others have many different endpoints, but only have a few resources. The singleton architecture vs a strict object-oriented approach again seems to depend a lot. Some sources advocate for a strict object-oriented approach where classes have their own methods, while others advocate for a singleton approach stating objects are -overkill. +overkill. However, the singleton approach doesn't completely abandon the idea of objects and almost always advocates for data objects, or some kind of models that can be -used for type hints and validation. +used for type hints and validation. There is some tradeoff regardless with problems arising at different levels of scale. The singleton approach could be verbose and cumbersome at smaller scales, @@ -341,7 +347,7 @@ is easier, and create tons of documentation that will help developers navigate your [API Ladder](https://blog.sbensu.com/posts/apis-as-ladders/). Someone will get confused regardless of what you do, so the key is to make sure the SDK makes sense (even if it's not the most efficient or clean) and remove hurdles for -users to navigate errors and mistakes. +users to navigate errors and mistakes. ## SDK Generation Platforms @@ -362,30 +368,30 @@ https://demo.honcho.dev/openapi.json. ### Stainless Since the hacker news thread for the release of stainless is what spurred this -research I decided to try them out first. +research I decided to try them out first. 
From their web portal they were able to take a link to the OpenAPI spec and generate a NodeJS and Python SDK immediately. There was no tweaking or anything -necessary. +necessary. I mainly paid attention to the Python SDK. The code was very readable and made sense. I also liked how it used `httpx` and `pydantic` by default and made an `async` version of the interface. They took the singleton approach to the design -of the interface. There was also built in capabilities for retries, pagination, +of the interface. There was also built-in capabilities for retries, pagination, and auth. -There's also capability for adding custom code such as utility functions. +There's also capability for adding custom code such as utility functions. ### Speakeasy Speakeasy required me to do everything locally through their `brew` package. It did not immediately accept the OpenAPI Spec and required me to make some tweaks. -These were low-hanging fruit, and their cli has a handly AI tool that will -diagnose the issue and tell you what to fix. +These were low-hanging fruit, and their cli has a handy AI tool that will +diagnose the issue and tell you what to fix. I just had to add a list of servers and deduplicate some routes. I'm happy it found these errors, but there was some friction for me to get started. Stainless -just worked out of the box and made some logical assumptions. +just worked out of the box and made some logical assumptions. The generated SDK didn't feel as strong as the stainless one. There didn't seem to support `async` methods, it did not use `pydantic` and used the built-in @@ -398,7 +404,7 @@ Also had me do the generation from the cli using their npm package. It was pretty straightforward to login and give it an API spec. Liblab seems to require a lot tweaking to get better results. It gave me several warnings asking me to add tags to my API Spec. I did not add them and went ahead to look at the -generation. +generation. > I'm not opposed to adding the tags if necessary, but I was able to get good > results without adding them on other platforms. @@ -413,58 +419,57 @@ to support `async` methods. This is the only one on the list that is not expressly backed by a company whose main goal is SDK generation. It is however a very popular project with -many sponsors. +many sponsors. Again, I tried to generate a client from the cli using their npm package. I used version `7.5.0` and once again gave it my API Spec. It gave a few warnings about OpenAPI Spec v3.1 not being fully supported yet, but generated a package either -way. +way. I again was not too impressed with the results, however I did like it more than liblab. The method names were also unwieldy, and the project relies on `urllib3`. -I did not see an indication of support for an `async` client. +I did not see an indication of support for an `async` client. The repo did use `pydantic` for typing and data classes, which is a plus. -Once again, the sdk use the `singleton` approach. +Once again, the sdk use the `singleton` approach. I also did not see any indication of functionality for retry logic, -authentication, or pagination. - +authentication, or pagination. ### Conclusion Overall, Stainless had the results that I liked the most. With almost no work from me, it produced a high quality SDK that designed things in a sensible way -with many built-in features such as retries, pagination, and auth. +with many built-in features such as retries, pagination, and auth. 
All the platforms took the singleton approach with a host of data models so -there isn't much to compare in that regard. +there isn't much to compare in that regard. The other platforms did not produce anything unusable, but they seemed to use -less modern features and require a lot more massaging to get a desirable result. +less modern features and require a lot more massaging to get a desirable result. The docs for stainless also looked more clear, and it seems easier to add -customizations after the fact. +customizations after the fact. I will give Speakeasy some kudos for having documentation for different API frameworks. The FastAPI one made it easy to figure out what I needed to tweak -and how to do it. The AI debugging feature was also a nice help. +and how to do it. The AI debugging feature was also a nice help. What I'm looking for right now is the platform or tool that can reduce my work the most and let me focus on other things and stainless achieved that. The results are not perfect, but it doesn't look like it'll need more than some -slight tweaking and testing to get to a state I want. +slight tweaking and testing to get to a state I want. ## Results After reaching the conclusion in the previous section, I took some time to fully implement Stainless to make SDKs for Honcho and am proud to announce the release -of a new Python SDK, and the launch of a brand-new NodeJS SDK. +of a new Python SDK, and the launch of a brand-new NodeJS SDK. -Both of these SDKs will be in separate open source repositories. +Both of these SDKs will be in separate open source repositories. -- [Honcho Python SDK](https://github.com/plastic-labs/honcho-python) +- [Honcho Python SDK](https://github.com/plastic-labs/honcho-python) - [Honcho TypeScript SDK](https://github.com/plastic-labs/honcho-node) Honcho will soon be available for a wide range of ecosystems and platforms, -making it even easier and more accessible to make personalized agents. +making it even easier and more accessible to make personalized agents.