How to Fine‑Tune GPT‑4o for Free: Costs, Steps, and Real‑World Benchmarks
OpenAI has launched low‑cost fine‑tuning for GPT‑4o, with a daily allotment of free training tokens, a simple dashboard workflow, and early benchmark results showing measurable performance gains. Meanwhile, the community debates whether fine‑tuning or prompt caching is the better route to efficient AI applications.
OpenAI announced fine‑tuning for GPT‑4o, letting developers adapt the model to their own datasets for $25 per 1 million training tokens, with 1 million free training tokens per organization per day until September 23.
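For a sense of scale, here is a rough back‑of‑the‑envelope estimate. The dataset size, average token count, and epoch count below are illustrative assumptions, not figures from OpenAI:

```python
# Illustrative cost estimate: billed training tokens are roughly
# tokens_in_file * n_epochs, and the first 1M tokens/day are free.
examples = 50        # hypothetical dataset size
avg_tokens = 500     # hypothetical average tokens per example
epochs = 3           # hypothetical number of training epochs

training_tokens = examples * avg_tokens * epochs  # 75,000 tokens
free_daily_tokens = 1_000_000                     # free tier until Sept 23
price_per_token = 25 / 1_000_000                  # $25 per 1M training tokens

billable = max(0, training_tokens - free_daily_tokens)
print(f"{training_tokens:,} training tokens -> ${billable * price_per_token:.2f}")
# 75,000 training tokens -> $0.00 (fits entirely in the free daily allotment)
```

Under these assumptions, a modest dataset trains entirely within the free daily allotment.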
Fine‑tuning can be started from the OpenAI fine‑tuning dashboard by selecting gpt-4o-2024-08-06 as the base model; according to OpenAI, a training set of only a few dozen examples can be enough to produce good results. The same job can also be started programmatically, as sketched below.
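A minimal sketch using the OpenAI Python SDK: upload a chat‑formatted JSONL training file, then create the job against the GPT‑4o snapshot. The file name training_data.jsonl is a placeholder:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Each JSONL line is a chat-formatted example, e.g.:
# {"messages": [{"role": "system", "content": "..."},
#               {"role": "user", "content": "..."},
#               {"role": "assistant", "content": "..."}]}
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),  # placeholder path
    purpose="fine-tune",
)

# Start the fine-tuning job against the GPT-4o snapshot
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",
)
print(job.id, job.status)
```

Once the job completes, the resulting model gets its own ft:gpt-4o-2024-08-06:... identifier and can be called through the regular chat completions API.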
Success Cases
Cosine’s code assistant “Genie” was fine‑tuned on billions of high‑quality code snippets (21% JavaScript/Python, 14% TypeScript/TSX, 3% other languages). After fine‑tuning, it scored a state‑of‑the‑art 43.8% on SWE‑Bench Verified and 30.08% on SWE‑Bench Full, surpassing previous records.
Distyl, an AI solutions provider for Fortune‑500 firms, fine‑tuned a model that ranked first on the BIRD‑SQL benchmark with 71.83% execution accuracy, along with strong performance on query rewriting, intent classification, chain‑of‑thought reasoning, and self‑correction tasks.
Developers’ data (inputs and outputs) is not shared or used to train other models, and fine‑tuned models receive layered safety mitigations, including continuous automated safety evaluations.
Community Debate: Fine‑Tuning vs Prompt Caching
Some users argue that prompt caching (used by Google Gemini and Claude) offers faster, cheaper inference by reusing shared prompt prefixes, while others counter that fine‑tuning yields more reliable structured output, such as consistently valid JSON.
OpenAI’s latency‑optimization guide recommends placing dynamic parts later in the prompt to maximize shared prefixes and reduce input tokens, but it does not confirm that OpenAI has implemented prompt caching.
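To illustrate the guide’s advice, here is a hedged sketch: the system prompt and document context stay fixed across requests (the shared prefix), and only the user question varies. The Acme Corp scenario and the answer helper are invented for illustration:

```python
from openai import OpenAI

client = OpenAI()

# Static prefix: identical on every request, so any prefix-based
# optimization (server-side or client-side) can reuse this work.
STATIC_PREFIX = (
    "You are a support assistant for Acme Corp.\n"  # hypothetical scenario
    "Answer strictly from the product manual excerpts below.\n\n"
    "<long, unchanging manual excerpts would go here>"
)

def answer(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-2024-08-06",
        messages=[
            {"role": "system", "content": STATIC_PREFIX},
            # Dynamic content goes last, keeping the shared prefix maximal.
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```

Keeping volatile content at the end means every request shares the longest possible identical prefix, which is exactly what prefix‑based caching schemes exploit.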
In addition to GPT‑4o, OpenAI extends the offer to GPT‑4o mini, with 2 million free training tokens per organization per day until September 23.
Reference links:
https://openai.com/index/gpt-4o-fine-tuning/
https://x.com/OpenAIDevs/status/1825938486568038569
https://news.ycombinator.com/item?id=41301673