Gemini Deep Think Revelations + Ilya Sutskever's 'AGI Bunker'

Gemini 2.5 Pro ‘Deep Think’ - Sergey Brin Gives New Hints
‘5 Teams Were Combined’
What Happened? Sergey Brin gave heretofore-unreleased insights into what went into Gemini 2.5’s Deep Think mode, which broke various benchmark records but has yet to be released (I have been promised early access). It was not just one or two axes of compute spend (bigger base model, longer thinking time), it was ‘five’, and I enumerate what I think they are…

Given the state of the labs, Deep Think *may* be the most performant mode of any LLM for many months.
In an interview with Logan Kilpatrick at I/O, Brin (co-founder of Google) said they used ‘five approaches’, drawing on five different internal teams at Google: ‘we took the best of ideas of all of them, combined it in one go.’ Using these to let Gemini think for hours is ‘...kind of new. And it's nontrivial.’
So what are they? Given Brin’s repeated hints of a ‘parallel’ approach, the key insights, I believe, come from a recent paper I broke down here. The lead author called the core approach - sampling hundreds of responses from the base model, then using the base model to verify those responses - a ‘new axis of compute’ through which they ‘elevated the performance of Gemini 1.5 Pro above o1-preview’.
Let’s then count the five approaches (we can later look back at this and count how many we got right):
1) Scaling up the base model (data and parameters).
2) Those base models outputting longer chains of thought (the key step for the o-series), allowing them to solve problems that require more steps.
3) Increasing the number of candidate responses sampled (k_inf in the paper), as sketched below.
4) Increasing the number of verification attempts for each candidate response (k_verif).
5) The core fine-tuning through RL done after 1), as practiced by all model providers.
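To make 3) and 4) concrete, here is a minimal sketch of the sampling-and-self-verification loop as I understand it from that paper. The function names, the vote-based scoring, and the default values of k_inf and k_verif are my own assumptions for illustration, not details Google or the authors have confirmed.

```python
# Sketch of parallel sampling + self-verification (the 'new axis of compute').
# generate_fn / verify_fn stand in for calls to the base model; they are
# hypothetical placeholders, not a real Gemini API.
from typing import Callable

def sample_and_verify(
    prompt: str,
    generate_fn: Callable[[str], str],      # base model: prompt -> candidate answer
    verify_fn: Callable[[str, str], bool],  # base model as verifier: does this answer look correct?
    k_inf: int = 200,                       # 3) number of candidates sampled (assumed value)
    k_verif: int = 32,                      # 4) verification attempts per candidate (assumed value)
) -> str:
    """Return the candidate that passes self-verification most often."""
    # Candidates are independent, so in practice this step would run in parallel.
    candidates = [generate_fn(prompt) for _ in range(k_inf)]
    best_answer, best_score = candidates[0], -1.0
    for answer in candidates:
        # Ask the same base model to check the answer k_verif times and vote.
        passes = sum(verify_fn(prompt, answer) for _ in range(k_verif))
        score = passes / k_verif
        if score > best_score:
            best_answer, best_score = answer, score
    return best_answer
```

The appeal of these two knobs is that they scale independently of model size and thinking time: you can spend more compute by sampling more candidates (k_inf) or by checking each one more thoroughly (k_verif), and both loops are trivially parallel, which may be why Brin keeps using that word.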
So What? If, as I suspect, Google starts to run away with the AI lead (as measured by many benchmarks, and soon, potentially, by user count), then the Deep Think mode may represent the smartest iteration of any LLM in the world. Understanding its core ingredients may prove to be one of the biggest intrigues of the second half of 2025, and the above are my early findings, based on clues left by the billionaire co-founder.
Does It Change Everything? Rating = ⚂
Sponsored: I personally know MBA candidates who used and enjoyed Target Test Prep and their tools. Target Test Prep’s AI Assist acts like your personal GMAT tutor—available 24/7 to guide your study, answer questions instantly, and help you prep more efficiently.
Here’s what it does:
Instantly answers your questions on any GMAT topic, concept, or problem
Recommends more efficient solving methods based on how you approach problems
Identifies common traps and explains how to avoid them
Generates unlimited AI-adjusted practice questions to match your level
Offers time-management tips and test-taking strategies tailored to your style
Ilya Sutskever’s ‘AGI Bunker’
What Happened? The Atlantic published a detailed exposé of the tumult behind Sam Altman’s firing and, more generally, the atmosphere within OpenAI. Arguably the biggest brain behind the early rise of ChatGPT was the company’s Chief Scientist, Ilya Sutskever, who has gone on to found a company with zero known users or products and a $32 billion valuation: Safe Superintelligence. To get to the point, what we learnt is that Sutskever believed we will ‘build a bunker before we release AGI’. Whether that’s amusing or scary is up to you.

Safe Superintelligence co-founders (Ilya Sutskever, middle)
“Once we all get into the bunker—” he began, according to an [OpenAI] researcher who was present. “I’m sorry,” the researcher interrupted, “the bunker?” “We’re definitely going to build a bunker before we release AGI,” Sutskever replied. Such a powerful technology would surely become an object of intense desire for governments globally. The core scientists working on the technology would need to be protected. “Of course,” he added, “it’s going to be optional whether you want to get into the bunker.” - quote from mid-2023
Now, a lot has happened since 2023, and many would say the models have gotten a lot stronger. Sutskever’s ambitions have, if anything, only risen, with the goal now being ‘a straight shot to superintelligence’ (restated in 2025), passing artificial general intelligence (AGI) by without collecting $200.
Of course, Sam Altman, Sutskever’s former boss, actually has a literal bunker, according to reports, but that’s another story. The point here is to highlight that many of those leading the charge toward self-improving AI have, at one point fairly recently, expressed the idea that the end result of this improvement is something they are personally terrified of. Whether you should be is for your own reflection.
So What? This story is more about the men behind the mission, and the kind of person who leads perhaps the most secretive AI lab on the planet. Is there any precedent in history for such a person leading an effort that he believes will soon require such a bunker - an effort valued more highly than Hewlett-Packard?
Does It Change Everything? Rating = ⚀
To support hype-free journalism, and to get a full suite of exclusive AI Explained videos, explainers and a Discord community of hundreds of (edit: now 1000+) truly top-flight professionals w/ networking, I would love to invite you to our newly discounted $7/month Patreon tier.