Courtland Leer 2024-06-01 16:19:56 -04:00
parent 57fb83271d
commit 882713202c
3 changed files with 29 additions and 1 deletion


@@ -0,0 +1,28 @@
---
title: Cope Is the Canary, but Context Is Key (for The End of Software)
date: 06.01.24
tags:
- philosophy
- honcho
- blog
- macro
---
![[Copium Meme.jpg]]
Many reactions to Chris Paik's prescient [The End of Software](https://x.com/cpaik/status/1796633683908005988) carry a distinct signature that readers of the [Pessimist's Archive](https://pessimistsarchive.org/) will recognize instantly: cope.
Cope-y outbursts like this are almost always a canary in the coal mine. As technologists, we're quick to notice the defensive, rationalizing outcry that accompanies the eve of disruption. If there were no threat, there'd be no negative reaction. But like everyone else, it's hard to notice when it's coming for you. When you've got skin in the game.
It's easy for us to see that creators denouncing the quality of image generators or English teachers asserting LLMs “only produce bad writing” herald the advent of serious change. They might be right…right now, but it's only a matter of time (and market forces). No doubt they too can laugh at the historical examples of other groups disparaging stuff we all love and take for granted today.
The key thing to notice is that both positions can be true. New technology often does suck, but it also often gets way, way better. So much better that we can fully dispense with yesterday's drudgery for tomorrow's opportunity. Yet the ways in which the fresh tech sucks today form the roadmap to the ways it will be awesome in the future. It's a mistake to say the problem is solved and a mistake to say it won't be solved.
Chris is right that AI is coming for software like the internet came for journalism[^1]. But he's making a predictive macro argument. And he's not saying this is a done deal. Similarly, those arguing that the way they do software development is more complex than anything LLMs can currently handle are right...but again, not a done deal. If the solution were complete, we'd be on to arguing about the next thing.
So what's missing? What roadmap can we learn from the cope that gets us to disruption? What do LLMs lack, and software engineers have, that's critical for translating ideas and natural language into applications?
At [Plastic Labs](https://plasticlabs.ai), we think it's context. Not just context on how to do a general task, like writing code, but your context. How would you write the code? Why would you write it that way? To bridge the gap, LLMs need access to a model of your identity. How you solve a technical problem is about more than just your technical knowledge. It's about all the elements of your identity and psychology and history that inform how you synthesize unique solutions. That's why we're building [Honcho](https://honcho.dev).
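To make that gap concrete, here's a minimal sketch in Python. `fetch_user_context` is a hypothetical stand-in for an ambient user-modeling service, not Honcho's actual API; the point is just how differently the same coding task reads once a model of the user is in the loop.

```python
# Hedged sketch: enriching a code-generation prompt with personal context.
# `fetch_user_context` is a hypothetical stand-in for an ambient
# user-modeling service; it is NOT the real Honcho API.

def fetch_user_context(user_id: str, task: str) -> list[str]:
    """Return facts about how this particular user works, relevant to the task."""
    # In practice these would be derived from observing the user over time.
    return [
        "prefers small, pure functions over classes",
        "writes exhaustive type hints",
        "optimizes for readability before performance",
    ]

def build_prompt(user_id: str, task: str) -> str:
    facts = fetch_user_context(user_id, task)
    context_block = "\n".join(f"- {fact}" for fact in facts)
    return (
        "You are writing code on behalf of a specific developer.\n"
        f"What we know about how they work:\n{context_block}\n\n"
        f"Task: {task}\n"
        "Write the code the way *they* would write it."
    )

print(build_prompt("user-123", "parse a CSV of orders and total them by region"))
```

Without that context block, every developer gets the same generic answer; with it, the model at least has a shot at producing the solution *you* would have produced.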
And to realize a future replete with trusted autonomous agents reliably working across diverse domains on your behalf, as true extensions of your agency, we'll need Honcho too.
[^1]: There's a distinction to be made re: CS & journalism degrees. Journalism is actually more like software engineering here, & computer science like language. Lang & CS will remain useful to study, but the journalism & engineering trade degrees built on top of those primitives need a serious refresh to be worthwhile. I.e. it's a good idea to have aptitude with symbolic systems & abstract technical knowledge, but application & execution will change as technology evolves.


@@ -7,7 +7,7 @@ tags:
---
There are two reasons that ever-increasing and even functionally infinite context windows won't by default solve personalization for AI apps/agents:
1. **Personal context has to come from somewhere.** Namely, from your head--off your wetware. So we need mechanisms to transfer that data from the human to the model. And there's *[[There's an enormous space of user identity to model|a lot of it]]*. At [Plastic](https://plasticlabs.ai) we think the path here is mimicking human social cognition, which is why we built [Honcho](https://honcho.dev)--to ambiently model users, then generate personal context for agents on demand.
1. **Personal context has to come from somewhere.** Namely, from your head--off your wetware. So we need mechanisms to transfer that data from the human to the model. And there's *[[The model-able space of user identity is enormous|a lot of it]]*. At [Plastic](https://plasticlabs.ai) we think the path here is mimicking human social cognition, which is why we built [Honcho](https://honcho.dev)--to ambiently model users, then generate personal context for agents on demand.
2. **If everything is important, nothing is important.** Even if the right context is stuffed in a crammed context window somewhere, the model still needs mechanisms to discern what's valuable and important for generation. What should it pay attention to? What weight should it give different pieces of context in any given moment? Again, humans do this almost automatically, so mimicking what we know about those processes can give the model critical powers of on-demand discernment. Even what might start to look to us like intuition, taste, or vibes. A toy sketch of that selection step follows below.
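Here's that toy sketch. Every name is hypothetical, and the lexical-overlap score is a deliberately crude stand-in for embeddings or learned salience; none of this is Honcho's interface. The idea is just to weight each candidate fact against the current moment and keep what clears the bar, rather than cramming everything in:

```python
# Toy illustration: weight candidate personal context against the current
# query and keep only the most relevant pieces. All names are hypothetical;
# a real system would use embeddings or learned salience, not word overlap.

def relevance(fact: str, query: str) -> float:
    """Crude lexical-overlap score between a stored fact and the query."""
    fact_words = set(fact.lower().split())
    query_words = set(query.lower().split())
    return len(fact_words & query_words) / max(len(query_words), 1)

def select_context(facts: list[str], query: str, top_k: int = 2) -> list[str]:
    """Rank facts by relevance and return the top_k; the rest stay out."""
    ranked = sorted(facts, key=lambda f: relevance(f, query), reverse=True)
    return ranked[:top_k]

facts = [
    "user prefers functional style in Python",
    "user's favorite color is green",
    "user deploys everything as small containers",
    "user is allergic to shellfish",
]
print(select_context(facts, "refactor this Python service into containers"))
# -> the Python-style and containers facts win; the color and allergy facts
#    are real context, just not important *right now*.
```

The selection step is the whole point: the allergy and the color preference are true, durable facts about the user, but a system that hands them the same weight as the coding-style facts has learned nothing about discernment.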