o3 in 2 weeks and Former OpenAI Researcher Predicts 2027 ASI

OpenAI Flip-flops Again

What Happened? Sam Altman told us that o3 would be coming after all, in ‘about two weeks’. This will be OpenAI’s most performant model: the o3 that powers Deep Research was ‘an early version’, and o3 has been ‘really improved’ since then. GPT-5 (bringing the bigger 4.5 base model together with o3-style reasoning) is delayed for ‘a few months’ thereafter.

Great job.

  • But this all comes after Altman said the roadmap for OpenAI would be ‘much clearer from here’ (speaking at the turn of 2025), see above. Initially, o3 was coming ‘shortly’ after o3-mini (which came in January). Then it would ‘never be released separately’. Now it’s two weeks away. 

  • It’s been a tough week for @sama, with testimony of ‘documented lies’ in the Wall Street Journal and books coming out about him that he feels are ‘twisted’.

So What? Did Gemini 2.5 pre-empt this release when it stole the SOTA crown? Do OpenAI want to get ahead of Llama 4 Omni, or DeepSeek R2, as o3 may underperform the now-inflated expectations of us all? Or is the story that OpenAI’s GPUs ‘are melting’ and a GPT-5 release has been pushed to the autumn/fall, so OpenAI felt they needed to give us something? Only one way to find out: use the model ourselves.

Does It Change Everything? Rating =

Speaking of a forthcoming DeepSeek R2, great time to plug my 34-minute Insiders documentary on Patreon, “OpenAI is Not a God” [Liang Wenfeng quote] on the full story behind R1, and the reclusive billionaire behind it all.

————

Weights & Biases have kept Simple Bench going, and we use their platform Weave, to run our benchmark. At the very least, do consider checking out their platform, free courses, and assorted material (they are also the guys behind the No. 1 model on SWE-bench verified).

They Ran the Numbers (kinda), and Got Superintelligence in 2027

What Happened? Daniel Kokotajlo, Scott Alexander, Eli Lifland and Thomas Larsen put together this report on ASI (Artificial Superintelligence) by 2027. Kokotajlo is a recent OpenAI employee (who bravely defied the non-disparagement clause, and caused OpenAI to back down), and another author is a No. 1-rated superforecaster. That doesn’t mean it’s a great paper though.

How dare you imply this is anything but clear. Their October 2027 prediction.

  • I’ll do a deep dive on this paper at some point perhaps, but the key crux is that they think coding will be superhuman in 2026, and ML research likewise in 2027. From this stems a 100-1000x speed-up in AI progress (massively oversimplifying the paper of course), leading to Agent-4, a superintelligence come 2.5 years from now.

  • So much to say about the innumerable assumptions baked into their post, but to take just one: they assume secret sauce in Western labs will produce a set of model weights that China will be desperate to steal (‘if China steals Agent-1’s weights, they could increase their research speed by nearly 50%’), without acknowledging that DeepSeek (see documentary above) didn’t need to steal any weights to be at the forefront (nor did Grok 3). And if secret weights were so crucial, why did DeepSeek open-source its architecture innovations and weights and promise to ‘never be closed source’?

  • More incredibly, they claim that in just 22 months we will have models that ‘could autonomously develop and execute plans to hack into AI servers, install copies of itself, evade detection, and use that secure base to pursue whatever other goals it might have’, despite even OpenAI optimists listing, in the Deep Research system card, 5+ things that o3 lacks even a hint of achieving were it merely to attempt the hacking alone.

So What? Well, not much really, other than that it has been picked up by the NYT, the YouTube-sphere, and much other media besides. I give the authors immense credit on the one hand for laying out detailed predictions (and would happily make a firm bet on the opposite of their Jan 2027 prediction above, to match their earnestness), but on the other hand they relentlessly caveat with lines like ‘though how effectively it would do so as weeks roll by is unknown and in doubt’, and even a co-author said on Dwarkesh’s podcast that ‘the timelines were mainly Daniel, not me’. I think superintelligence is coming, but for me there are myriad obstacles that push it well into the 2030s.

Does It Change Everything? Rating =

To support hype-free journalism, and to get a full suite of exclusive AI Explained videos, explainers and a Discord community of hundreds of (edit: now 1000+) truly top-flight professionals w/ networking, I would love to invite you to our newly discounted $7/month Patreon tier.