OpenAI has set an internal goal of building an "automated AI research intern" by September 2026, and a "true automated AI researcher" by March 2028. The intern would handle tasks that take a human a few days; the researcher would tackle end-to-end scientific problems with minimal supervision. The effort would run on hundreds of thousands of GPUs backed by $1.4 trillion in infrastructure commitments.

1. This Is How Science Gets Faster (OpenAI, NextGenAI Universities)

A system that never sleeps, never forgets a paper, and costs less than a postdoc.

OpenAI's chief scientist says models that can keep working on a problem indefinitely are within reach. Jakub Pachocki described the 2028 goal as a system that can come up with ideas, design experiments, and produce research reports autonomously. He tempered it — even by 2028, he doesn't expect systems as smart as people in all ways — but the ambition is a machine that can do what a talented grad student does, around the clock.

Fifteen universities are already betting on it. OpenAI launched its NextGenAI consortium in March 2025 with $50 million in research grants, partnering with MIT, Harvard, Caltech, Duke, Michigan, Ohio State, and Texas A&M, among others. Researchers get API credits and computational resources. The institutions are explicitly tasked with finding high-impact applications — from healthcare to education — as the automated researcher tools mature.

The current product is already impressive for $20 a month. Deep Research, live inside ChatGPT, conducts multi-step web research and synthesizes structured reports with citations in 3 to 30 minutes. It moved from the $200/month Pro tier to the $20/month Plus tier, giving tens of millions of users access to what OpenAI describes as research-analyst-level output. OpenAI calls it an early version of the AI researcher.

2. The Interns Are Already Getting Fired (SHRM, PwC, Wall Street Analysts)

Two-thirds of companies plan to cut entry-level hiring. The AI intern announcement is the reason they'll give.

The junior analyst job is disappearing in real time. Wall Street firms are reportedly considering cutting junior analyst hiring by as much as two-thirds as they lean into AI. PwC UK explicitly cited AI when announcing plans to slash about 200 entry-level roles. Market research analysts and investment analysts are among the occupations most exposed to automation.

The hiring data is brutal. An IDC/Deel survey found 66% of global enterprises plan to cut entry-level hiring due to AI. SHRM reports 78% of hiring managers believe AI will cause job losses for recent graduates. Hult International Business School found 37% of employers would now prefer to hire AI or robots over entry-level workers. Stanford researchers found employment for workers ages 22-25 in AI-impacted jobs has already dropped 16% since late 2022.

The experienced workers are fine — for now. The Dallas Fed found a split: while employment in AI-exposed sectors trails the broader economy, wage growth in those sectors is double the national average — 16.7% in computer systems design versus 7.5% nationwide since fall 2022. AI is augmenting senior workers while automating the junior ones. The Yale Budget Lab's framing: AI replicates codified knowledge from textbooks but can't replicate tacit knowledge from experience. That's great if you have experience. Not so great if you're 22.

3. Chill, People — It Barely Works (MIT Technology Review, Enterprise Pilots)

Ninety-five percent of AI projects have delivered zero value. But sure, let's automate research.

The hype-to-reality gap is widening. MIT Technology Review noted that OpenAI had been hyping GPT-5 as a "PhD-level expert in anything," but when it landed, it seemed to be more of the same. Most narratives about AI accelerating science come from AI companies or scientists who benefit from those narratives.

Enterprise pilots are failing at basic tasks. A striking statistic: 95% of businesses that tried using AI found zero value in it. Agents powered by top LLMs from OpenAI, Google, and Anthropic failed to complete many straightforward workplace tasks on their own. They tripped over permission systems, fell afoul of compliance rules, and never delivered the promised autonomy. A good intern asks clarifying questions — AI systems often just run with their best guess.

Even AI researchers think automated AI research is dangerous. A survey found that 20 of 25 AI researchers identified automating AI research as one of the most severe and urgent AI risks, citing the potential for recursive self-improvement. Meanwhile, at least two OpenAI employees on the economic research team quit, alleging the team was becoming a propaganda arm rather than doing real research — and that the company had grown guarded about publishing findings that AI could be bad for the economy.

Where This Lands

OpenAI is staking its next two years on building a machine that can do what research assistants do — and then what researchers themselves do. The universities are signed up, the compute is committed, and the timeline is aggressive. On the other hand, the current generation of AI agents can barely handle basic enterprise tasks, and the people who know the technology best rate automated AI research as one of the most dangerous things you could build. Whether OpenAI's intern arrives on schedule matters less than what's already happening in the hiring market: companies are cutting entry-level jobs on the assumption that it will.

Sources