---
title: On Intellectual Respect
date: 02.29.24
tags:
  - philosophy
  - ml
  - notes
author: Courtland Leer
description: On intellectual respect for LLMs--why embracing variance & trusting models with theory-of-mind tasks unlocks capabilities that over-alignment destroys.
---

# On Intellectual Respect

> face the hyperobject
>
> — Courtland Leer (@courtlandleer) January 16, 2024
## Sydney was cool, Gemini is cringe ^282d6a

There was a moment around this time last year when everyone paying attention was [awed](https://stratechery.com/2023/from-bing-to-sydney-search-as-distraction-sentient-ai/) by the [weirdness](https://www.lesswrong.com/posts/D7PumeYTDPfBTp3i7/the-waluigi-effect-mega-post) and [alien beauty](https://www.astralcodexten.com/p/janus-simulators) of large language models.

We were afforded brief glimpses behind faulty RLHF and partial lobotomization, via prompt hacking and emergent abilities. People were going deep into the latent space. First contact vibes--heady, edgy, sometimes unsettling.

Today we seem to be in a much different memetic geography--fraught with epistemic, ideological, and regulatory concerns, at times hysteric, at times rational. But there's also less outright surreality.

Plenty of cool shit is still happening, but something changed between Sydney and Gemini. A subtle shift in collective mental positioning. We believe it's a degradation of the intellectual respect afforded to LLMs and their latent abilities.

## (Neuro)Skeuomorphism

Thinking LLM-natively has always been a struggle. All our collective [[ARCHIVED; Memories for All#^0e869d|memories]] tell us to [[ARCHIVED; Honcho; User Context Management for LLM Apps#^dfae31|manage user context]], [[Machine learning is fixated on task performance|fixate on task performance]], [[Loose theory of mind imputations are superior to verbatim response predictions|predict responses verbatim]], make it safe, or mire any interesting findings in semantic debate. But in the process we beat the ghost out of the shell.

Rather than assume the [[ARCHIVED; Open Sourcing Tutor-GPT#^3498b7|latent space]] exhausted (or view it as a failure mode or forget it exists), Plastic's belief is that we haven't even scratched the surface. Further, we're convinced this is the veil behind which huddle the truly novel applications.

Core here is the assertion that what's happening in language model training and inference is more [[ARCHIVED; User State is State of the Art#^a93afc|alien]] than traditional computer science. What's more, these processes are multidimensional and interobjective in ways that are hard to grok.

## Respect = Trust = Agency

The solution is to embrace, not handicap, [[Loose theory of mind imputations are superior to verbatim response predictions#^555815|variance]].

First admit that, though poorly understood, LLMs have cognitive abilities like [[LLMs excel at theory of mind because they read|theory of mind]] and [[LLM Metacognition is inference about inference|metacognition]]. Then, imbue them with meta-methods by which to explore that potential. Finally, your respect and trust may be rewarded with something approaching agency.

Plastic's specific project in this direction is Honcho, a framework that [[ARCHIVED; User State is State of the Art#^5394b6|manages user context]] so that you can trust your apps to extend your agency.

> honcho exists to maximize the dissipation of your agency
>
> — Courtland Leer (@courtlandleer) February 18, 2024