Autoresearch
Two weeks after Andrej Karpathy released Autoresearch, here are some noteworthy projects to keep an eye on.
I discovered a better way of converting PDFs to Markdown on Apple silicon, with all mathematical formulas converted to LaTeX.
I rechecked the Days codebase with GPT 5.4 xhigh and GPT 5.4 Pro, and the pair of models found serious issues in the one aspect of the current implementation I asked them to focus on.
A few weeks ago, OpenAI published a blog post on harness engineering. Yesterday, it also open-sourced a component of its workflow, called Symphony.
A quick iOS Codex access tip with Agentboard, plus a strong Rust-over-Python essay for agentic programming.
I redesigned my personal website, featuring not only a simple, minimalist design, but also a streamlined process of writing and publishing new entries via CLI tools.
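A publish-via-CLI flow can be as simple as a script that scaffolds a dated Markdown entry with front matter, then commits and pushes it to deploy. Here is a minimal sketch (the function name, file layout, and front-matter fields are illustrative assumptions, not my exact scripts):

```python
from datetime import date
from pathlib import Path

def new_entry(title: str, root: str = ".") -> Path:
    """Scaffold a dated Markdown entry with minimal front matter.
    Illustrative sketch only; the slug rule and layout are assumptions."""
    slug = "-".join(title.lower().split())          # "Hello World" -> "hello-world"
    path = Path(root) / f"{date.today():%Y-%m-%d}-{slug}.md"
    path.write_text(f"---\ntitle: {title}\ndate: {date.today():%Y-%m-%d}\n---\n\n")
    return path
```

From there, publishing is just `git add`, `git commit`, and `git push`, which a static-site pipeline can pick up.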
Arc — My new browser of choice. I love that bookmarks are organized in a side panel rather than clustered at the top of the window.
How I use LLMs by Andrej Karpathy — A must-watch.
Andriy Burkov’s minimalist implementation of GRPO from scratch — Rather than using a library such as Hugging Face’s TRL.
Transformer Lab — a free, open-source LLM workspace that prepares a custom dataset and fine-tunes a model using MLX on the Mac.
From 0 to Production — The Modern React Tutorial — Theo released it last year, and I have always wanted to learn from this marathon tutorial.
Unsloth.ai’s GRPO — it seems that the Unsloth implementation of GRPO uses less GPU memory, and it supports both QLoRA and LoRA.
GRPO will soon be added to Apple MLX — The PR now works, using about 32 GB of memory when training Qwen2.5-0.5B.
Another simple DeepSeek R1 reproduction — This reproduction of GRPO has one distinct feature: it is exceedingly simple and quite elegant.
Fourth attempt at reproducing DeepSeek R1’s GRPO on small models — The fourth time is the charm. I can successfully run this repo without enabling vLLM.
How to fine-tune open LLMs in 2025 with Hugging Face — Philipp Schmid, a Technical Lead at Hugging Face, posted this article on fine-tuning open LLMs with the Hugging Face stack.
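Since several of the entries above are GRPO reproductions, the core idea is worth a one-function sketch: GRPO samples a group of completions per prompt and scores each one by its reward normalized against the group's mean and standard deviation, replacing PPO's learned value baseline. A minimal sketch (the function name and the `eps` constant are my own choices, not taken from any of the linked repos):

```python
import statistics

def grpo_advantages(rewards: list[float], eps: float = 1e-4) -> list[float]:
    """Group-relative advantages: normalize each sampled completion's
    reward by the mean and std of its group, so no value network is
    needed. `eps` guards against a zero-variance group."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]
```

Completions that beat their group's average get positive advantages and the rest get negative ones, which is what makes the trick cheap enough to run on a single consumer GPU or a Mac.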