From 9ac751a7862bff206ff9efe90f297925f86927d3 Mon Sep 17 00:00:00 2001
From: vintro
Date: Mon, 24 Feb 2025 11:17:48 -0500
Subject: [PATCH] formatting updates

---
 content/careers/Founding ML Engineer.md | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/content/careers/Founding ML Engineer.md b/content/careers/Founding ML Engineer.md
index 9e6c8aa87..4caff7bcf 100644
--- a/content/careers/Founding ML Engineer.md
+++ b/content/careers/Founding ML Engineer.md
@@ -48,17 +48,17 @@ Applications without these 3 items won't be considered, but be sure to optimize
 And it can't hurt to [join Discord](https://discord.gg/plasticlabs) and introduce yourself or engage with [our GitHub](https://github.com/plastic-labs).
 
 ## Research We're Excited About
-[s1: Simple test-time scaling](https://arxiv.org/abs/2501.19393)
-[Neural Networks Are Elastic Origami!](https://youtu.be/l3O2J3LMxqI?si=bhodv2c7GG75N2Ku)
-[Titans: Learning to Memorize at Test Time](https://arxiv.org/abs/2501.00663v1)
-[Mind Your Theory: Theory of Mind Goes Deeper Than Reasoning](https://arxiv.org/abs/2412.13631)
-[Generative Agent Simulations of 1,000 People](https://arxiv.org/abs/2411.10109)
-[DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning](https://arxiv.org/abs/2501.12948)
-[Multiagent Finetuning: Self Improvement with Diverse Reasoning Chains](https://arxiv.org/abs/2501.05707)
+[s1: Simple test-time scaling](https://arxiv.org/abs/2501.19393)
+[Neural Networks Are Elastic Origami!](https://youtu.be/l3O2J3LMxqI?si=bhodv2c7GG75N2Ku)
+[Titans: Learning to Memorize at Test Time](https://arxiv.org/abs/2501.00663v1)
+[Mind Your Theory: Theory of Mind Goes Deeper Than Reasoning](https://arxiv.org/abs/2412.13631)
+[Generative Agent Simulations of 1,000 People](https://arxiv.org/abs/2411.10109)
+[DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning](https://arxiv.org/abs/2501.12948)
+[Multiagent Finetuning: Self Improvement with Diverse Reasoning Chains](https://arxiv.org/abs/2501.05707)
 [Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm](https://arxiv.org/pdf/2102.07350)
 [Theory of Mind May Have Spontaneously Emerged in Large Language Models](https://arxiv.org/pdf/2302.02083v3)
-[Think Twice: Perspective-Taking Improved Large Language Models' Theory-of-Mind Capabilities](https://arxiv.org/pdf/2311.10227)
-[Representation Engineering: A Top-Down Approach to AI Transparency](https://arxiv.org/abs/2310.01405)
+[Think Twice: Perspective-Taking Improved Large Language Models' Theory-of-Mind Capabilities](https://arxiv.org/pdf/2311.10227)
+[Representation Engineering: A Top-Down Approach to AI Transparency](https://arxiv.org/abs/2310.01405)
 [Theia Vogel's post on Representation Engineering Mistral 7B an Acid Trip](https://vgel.me/posts/representation-engineering/)
 [A Roadmap to Pluralistic Alignment](https://arxiv.org/abs/2402.05070)
 [Open-Endedness is Essential for Artificial Superhuman Intelligence](https://arxiv.org/pdf/2406.04268)