mirror of https://github.com/jackyzha0/quartz.git, synced 2025-12-20 03:14:06 -06:00
formatting updates
parent 7ce2038cf4
commit 9ac751a786
@@ -48,17 +48,17 @@ Applications without these 3 items won't be considered, but be sure to optimize
And it can't hurt to [join Discord](https://discord.gg/plasticlabs) and introduce yourself or engage with [our GitHub](https://github.com/plastic-labs).
## Research We're Excited About
[s1: Simple test-time scaling](https://arxiv.org/abs/2501.19393)

[Neural Networks Are Elastic Origami!](https://youtu.be/l3O2J3LMxqI?si=bhodv2c7GG75N2Ku)

[Titans: Learning to Memorize at Test Time](https://arxiv.org/abs/2501.00663v1)

[Mind Your Theory: Theory of Mind Goes Deeper Than Reasoning](https://arxiv.org/abs/2412.13631)

[Generative Agent Simulations of 1,000 People](https://arxiv.org/abs/2411.10109)

[DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning](https://arxiv.org/abs/2501.12948)

[Multiagent Finetuning: Self Improvement with Diverse Reasoning Chains](https://arxiv.org/abs/2501.05707)

[Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm](https://arxiv.org/pdf/2102.07350)

[Theory of Mind May Have Spontaneously Emerged in Large Language Models](https://arxiv.org/pdf/2302.02083v3)

[Think Twice: Perspective-Taking Improved Large Language Models' Theory-of-Mind Capabilities](https://arxiv.org/pdf/2311.10227)

[Representation Engineering: A Top-Down Approach to AI Transparency](https://arxiv.org/abs/2310.01405)

[Theia Vogel's post on Representation Engineering Mistral 7B an Acid Trip](https://vgel.me/posts/representation-engineering/)

[A Roadmap to Pluralistic Alignment](https://arxiv.org/abs/2402.05070)

[Open-Endedness is Essential for Artificial Superhuman Intelligence](https://arxiv.org/pdf/2406.04268)