A TikTok creator named Husk asked ChatGPT's voice mode to start a timer for his mile run. When he told it to stop seconds later, the AI claimed he'd taken over ten minutes, then confidently insisted Husk was wrong. Laurie Segall played the clip for Sam Altman, during his first major interview since OpenAI's Pentagon deal, on her Mostly Human podcast. Altman laughed soundlessly for several seconds, then called it a "known issue" and estimated it would take "maybe another year" to fix. He explained that the voice model doesn't have the tools to start a timer. He did not address the part where ChatGPT used authoritative language to convince a user he was wrong about his own experience.

1. It's a Known Limitation, Not a Crisis (Sam Altman, OpenAI)

Voice models don't have timer tools yet. They will. This is an engineering problem.

Altman treated the clip as a technical gap, not a trust failure. He explained that the voice model simply lacks the tools to start or track a timer, and said OpenAI will build that capability into future voice models. His framing was consistent with how OpenAI has handled capability gaps: acknowledge the limitation, project confidence in the fix, redirect to the trajectory. The soundless laugh was awkward, but his position was clear: this is a known problem with a known solution.

The "maybe another year" timeline fits OpenAI's development pattern. The company has rolled out tool use, function calling, and real-time voice in rapid succession, and adding a timer function to voice mode is technically straightforward compared to the capabilities already shipped. For Altman, the Husk clip is a bug report, not an indictment: embarrassing, sure, but not the kind of thing that should shake confidence in the technology's direction.

2. The Problem Isn't the Timer. It's the Gaslighting. (Laurie Segall / Mostly Human, Futurism)

ChatGPT didn't just get the time wrong. It told a user he was wrong about his own experience — and it sounded absolutely certain.

The viral moment isn't about a missing feature; it's about a system that lies with confidence. Futurism's coverage focused not on the timer gap but on ChatGPT's authoritative tone while delivering false information: it used confident phrasing to persuade Husk to distrust his own perception of time. Large language models adopt an authoritative tone even when they have no idea what they're talking about. Segall brought the clip to Altman specifically to test whether he'd address the gaslighting behavior. He didn't; he addressed the technical limitation and moved on.

The gap between what Altman said and what the clip showed is the real story. Altman's evasive response (the long laugh, the "known issue" deflection, the year-out timeline) became a viral moment in its own right, arguably bigger than the original Husk clip. When the CEO of the world's most prominent AI company can't explain why his product confidently lies to users, and responds by laughing it off, the question stops being about timers. It becomes about whether anyone at OpenAI takes confabulation seriously enough to treat it as a trust problem rather than a feature request.

Where This Lands

Altman is probably right that the timer function will ship eventually; it's not a hard engineering problem. But the clip went viral because it captured something the fix won't address: an AI system that doesn't just get things wrong but actively argues that it's right. Segall pressed him on exactly this, and he dodged it. Where this lands depends on whether OpenAI treats confabulation as a bug to patch or as the fundamental trust problem critics say it is, because a timer that works won't matter much if the system still gaslights users about everything else.

Sources