The idea: give an AI agent a small but real LLM training setup and let it experiment autonomously overnight. It modifies the code, trains for 5 minutes, checks whether the result improved, keeps or discards the change, and repeats. You wake up in the morning to a log of experiments and (hopefully) a better model.

The training code here is a simplified single-GPU implementation of nanochat. The core idea is that you're not touching any of the Python files like you normally would as a researcher. Instead, you are programming the program.md Markdown files that provide context to the AI agents and set up your autonomous research org.

The default program.md in this repo is intentionally kept as a bare-bones baseline, though it's easy to see how one would iterate on it over time to find the "research org code" that achieves the fastest research progress, how you'd add more agents to the mix, and so on. A bit more context on this project is in this tweet.
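The modify-train-check-keep loop described above can be sketched as a greedy keep-or-discard procedure. This is only an illustrative sketch, not the repo's actual agent harness: the names `overnight_loop`, `mutate`, and `train_and_eval` are hypothetical stand-ins (in the real setup, "mutate" is the agent editing code and "train_and_eval" is a 5-minute training run).

```python
import random

def overnight_loop(baseline_cfg, mutate, train_and_eval, n_rounds=10, seed=0):
    """Greedy keep-or-discard experiment loop (illustrative sketch).

    mutate(cfg) proposes a changed config; train_and_eval(cfg) returns a
    validation loss. A candidate is kept only if it beats the current best.
    Returns the best config, its loss, and a log of every attempt.
    """
    random.seed(seed)
    best_cfg = dict(baseline_cfg)
    best_loss = train_and_eval(best_cfg)
    log = [("baseline", dict(best_cfg), best_loss, True)]
    for i in range(n_rounds):
        candidate = mutate(dict(best_cfg))  # agent edits the setup
        loss = train_and_eval(candidate)    # short training run
        kept = loss < best_loss             # keep only improvements
        if kept:
            best_cfg, best_loss = candidate, loss
        log.append((f"round {i}", candidate, loss, kept))
    return best_cfg, best_loss, log

# Toy stand-in for a 5-minute training run: loss is minimized at lr = 0.01.
def toy_eval(cfg):
    return (cfg["lr"] - 0.01) ** 2

def toy_mutate(cfg):
    cfg["lr"] *= random.choice([0.5, 0.8, 1.25, 2.0])
    return cfg

best, loss, log = overnight_loop({"lr": 0.1}, toy_mutate, toy_eval, n_rounds=50)
```

Because rejected candidates are discarded, the best loss is monotonically non-increasing across the run; the `log` is the morning-after record of what was tried and what stuck.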