mirror of https://github.com/jackyzha0/quartz.git (synced 2025-12-19 19:04:06 -06:00)
copium
This commit is contained in: parent 57fb83271d, commit 882713202c
@@ -0,0 +1,28 @@
---
title: Cope Is the Canary, but Context Is Key (for The End of Software)
date: 06.01.24
tags:
- philosophy
- honcho
- blog
- macro
---

![[Copium Meme.jpg]]

Many reactions to Chris Paik’s prescient [The End of Software](https://x.com/cpaik/status/1796633683908005988) carry a distinct signature that readers of the [Pessimist's Archive](https://pessimistsarchive.org/) will recognize instantly: cope.

Cope-y outbursts like this are almost always a canary in the coal mine. As technologists, we’re quick to notice the defensive, rationalizing outcry that accompanies the eve of disruption. If there were no threat, there’d be no negative reaction. But like everyone else, we find it hard to notice when it’s coming for us. When we’ve got skin in the game.

It’s easy for us to see that creators denouncing the quality of image generators or English teachers asserting LLMs “only produce bad writing” herald the advent of serious change. They might be right…right now, but it’s only a matter of time (and market forces). No doubt they too can laugh at historical examples of other groups disparaging stuff we all love and take for granted today.

The key thing to notice is that both positions can be true. New technology often does suck, but it also often gets way, way better. So much better that we can fully dispense with yesterday’s drudgery for tomorrow’s opportunity. Yet the ways in which the fresh tech sucks today form the roadmap to the ways it will be awesome in the future. It’s a mistake to say the problem is solved and a mistake to say it won’t be solved.

Chris is right that AI is coming for software like the internet came for journalism[^1]. But he’s making a predictive macro argument. And he’s not saying this is a done deal. Similarly, those arguing that how they do software development is more complex than what LLMs are currently capable of are right...but again, it’s not a done deal. If the solution were complete, we’d be on to arguing about the next thing.

So what’s missing? What roadmap can we learn from the cope that gets us to disruption? What do LLMs lack, and software engineers have, that’s critical to translating ideas and natural language into applications?

At [Plastic Labs](https://plasticlabs.ai), we think it’s context. Not just context on how to do a general task, like writing code, but your context. How would you write the code? Why would you write it that way? To bridge the gap, LLMs need access to a model of your identity. How you solve a technical problem is about more than just your technical knowledge. It’s about all the elements of your identity, psychology, and history that inform how you synthesize unique solutions. That’s why we’re building [Honcho](https://honcho.dev).

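As a rough illustration of what "access to a model of your identity" could mean in practice, here's a minimal sketch of threading an identity model into a code-generation prompt. Everything here (`UserModel`, its fields, `build_prompt`) is a hypothetical stand-in, not Honcho's actual interface:

```python
# Hypothetical sketch: personalizing a code-generation prompt with a user
# identity model. UserModel and its fields are illustrative, not Honcho's API.
from dataclasses import dataclass, field

@dataclass
class UserModel:
    """A toy stand-in for a learned model of the user's identity."""
    name: str
    conventions: list[str] = field(default_factory=list)  # how they write code
    values: list[str] = field(default_factory=list)       # why they write it that way

def build_prompt(user: UserModel, task: str) -> str:
    """Prepend personal context so the model solves the task *their* way."""
    identity = "\n".join(
        ["Known conventions:"] + [f"- {c}" for c in user.conventions]
        + ["Stated values:"] + [f"- {v}" for v in user.values]
    )
    return f"You are assisting {user.name}.\n{identity}\n\nTask: {task}"

user = UserModel(
    name="Ada",
    conventions=["prefers small pure functions", "exhaustive type hints"],
    values=["optimizes for readability over cleverness"],
)
print(build_prompt(user, "Add retry logic to the HTTP client."))
```

The point isn't the plumbing; it's that the same task string produces meaningfully different code once the model knows who it's writing for.
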
And to realize a future replete with trusted autonomous agents working reliably across diverse domains on your behalf, as true extensions of your agency, we’ll need Honcho too.

[^1]: There’s a distinction to be made re: cs & journalism degrees. Journalism is actually more like software engineering here, & computer science like language. Lang & cs will remain useful to study, but the journalism & engineering trade degrees built on top of those primitives need a serious refresh to be worthwhile. I.e. it’s a good idea to have aptitude with symbolic systems & abstract technical knowledge, but application & execution will change as technology evolves.

@@ -7,7 +7,7 @@ tags:
---

There are two reasons that ever increasing and even functionally infinite context windows won't by default solve personalization for AI apps/agents:

-1. **Personal context has to come from somewhere.** Namely, from your head--off your wetware. So we need mechanisms to transfer that data from the human to the model. And there's *[[There's an enormous space of user identity to model|a lot of it]]*. At [Plastic](https://plasticlabs.ai) we think the path here is mimicking human social cognition, which is why we built [Honcho](https://honcho.dev)--to ambiently model users, then generate personal context for agents on demand.
+1. **Personal context has to come from somewhere.** Namely, from your head--off your wetware. So we need mechanisms to transfer that data from the human to the model. And there's *[[The model-able space of user identity is enormous|a lot of it]]*. At [Plastic](https://plasticlabs.ai) we think the path here is mimicking human social cognition, which is why we built [Honcho](https://honcho.dev)--to ambiently model users, then generate personal context for agents on demand.
2. **If everything is important, nothing is important.** Even if the right context is stuffed in a crammed context window somewhere, the model still needs mechanisms to discern what's valuable and important for generation. What should it pay attention to? What weight should it give different pieces of context in any given moment? Again, humans do this almost automatically, so mimicking what we know about those processes can give the model critical powers of on-demand discernment. Even what might start to look to us like intuition, taste, or vibes.

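To make the two points above concrete, here's a minimal sketch of the general pattern: derive candidate facts from a user's messages (point 1), then weight and select only the most relevant ones at generation time (point 2). All names here (`Fact`, `derive_facts`, `select_context`) and the scoring heuristic are illustrative assumptions, not Honcho's actual API:

```python
# Illustrative sketch only -- not Honcho's API. Shows (1) extracting personal
# context from raw messages and (2) weighting it before it reaches the model.
from dataclasses import dataclass
import time

@dataclass
class Fact:
    text: str
    created_at: float  # unix timestamp, used for recency weighting
    salience: float    # 0..1, how strongly the message supported this fact

def derive_facts(message: str, now: float) -> list[Fact]:
    """Point 1: turn a user's message into candidate identity facts.
    A real system would use a model here; this toy version keys off phrases."""
    facts = []
    if "i prefer" in message.lower():
        facts.append(Fact(text=message, created_at=now, salience=0.9))
    if "i hate" in message.lower():
        facts.append(Fact(text=message, created_at=now, salience=0.8))
    return facts

def select_context(facts: list[Fact], query: str, now: float, k: int = 3) -> list[Fact]:
    """Point 2: if everything is important, nothing is. Score each fact by
    topical overlap with the query, damped by age, and keep only the top k."""
    def score(f: Fact) -> float:
        overlap = len(set(f.text.lower().split()) & set(query.lower().split()))
        age_days = (now - f.created_at) / 86400
        recency = 1.0 / (1.0 + age_days)  # older facts count for less
        return f.salience * (1 + overlap) * recency
    return sorted(facts, key=score, reverse=True)[:k]

now = time.time()
store: list[Fact] = []
for msg in ["I prefer terse error messages", "I hate unsolicited refactors"]:
    store.extend(derive_facts(msg, now))
for fact in select_context(store, "draft an error message", now):
    print(fact.text)
```

In practice both the extraction and the scoring would themselves be model-driven, but the shape is the same: a write path that builds the user model ambiently, and a read path that ranks it on demand instead of dumping everything into the window.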