Recursive self-improvement refers to a model or system capable of improving its own algorithms or design, creating a feedback loop of increasing capability. Once an AI can even slightly outperform humans at designing AI, it could iteratively design ever-better successors, potentially accelerating development beyond humans' ability to oversee it.
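The dynamics of this feedback loop can be illustrated with a toy simulation. This is purely a conceptual sketch, not a model from the literature: the function name, growth rule, and all parameters are illustrative assumptions chosen to show how progress can compound once capability crosses the human baseline.

```python
def self_improvement_loop(initial_capability, human_level, generations):
    """Toy model of recursive self-improvement (illustrative assumptions only).

    Below the human baseline, progress is a fixed increment (human-driven).
    Above it, each generation's successor improves in proportion to the
    designer's edge over humans, so gains compound.
    """
    capability = initial_capability
    history = [capability]
    for _ in range(generations):
        if capability > human_level:
            # AI-designed successor: improvement scales with the current edge
            capability *= 1.0 + 0.1 * (capability - human_level)
        else:
            # Human-designed successor: slow, roughly constant progress
            capability += 0.05
        history.append(capability)
    return history

trajectory = self_improvement_loop(initial_capability=0.9,
                                   human_level=1.0,
                                   generations=20)
```

In this toy setup, growth is linear until capability passes `human_level`, after which each generation's gain is larger than the last, which is the qualitative pattern the feedback-loop argument describes.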
This phenomenon is already emerging across the AI landscape. We see language models evaluating and improving other language models, neural networks designing better neural architectures, and AI systems optimizing the very hardware they run on. Some researchers have suggested that these developments may be early precursors to more advanced forms of AI-driven AI development.
The table below documents concrete examples of AI systems being used to improve other AI systems. This collection is not exhaustive but aims to track this rapidly evolving field. The "Authors" and "Author Affiliations" columns refer to the researchers behind each innovation, while the "Submitter" indicates who brought the example to our database. If you're aware of relevant examples not included here, please submit them through our form.
Note: This initiative was originally created by Thomas Woodside at the Center for AI Safety. It is now maintained by the Algorithmic Research Group. We build on their work and are grateful to the original contributors.
Original Contributors: Herbie Bradley, James Campbell, Jun Shern Chan, Aidan O'Gara, Dan Hendrycks, Esben Kran, Nathaniel Li, Mantas Mazeika, Aaron Scher, Zach Stein-Perlman, Fred Zhang, Oliver Zhang, Andy Zou.
ID | Description | Source | Date Published | Authors | Author Affiliations | Submitter |
---|---|---|---|---|---|---|