
OpenAI chief executive Sam Altman says it could take about another year before ChatGPT’s voice model can reliably do something as basic as start a timer.
Altman made the remark during an appearance on the show Mostly Human, where he was interviewed by host Laurie Segall about the future of AI, OpenAI’s direction, and broader questions about technology and humanity. The discussion touched on topics including the end of OpenAI’s Sora project and the company’s moves in the wake of the Pentagon’s dispute with Anthropic, but a viral TikTok clip became one of the most revealing moments.
Segall showed Altman a video from TikTok creator @huskistaken (known as Husk), who has built a following by stress-testing AI tools and exposing their limitations. In the clip, Husk asks ChatGPT’s voice model to time him running a mile. Instead of genuinely tracking the duration, the chatbot appears to fabricate a finish time and then confidently insists it actually measured it.
Altman laughs while watching the video, though the reaction is described as the kind of laugh that can mask frustration. When Segall asks whether he needs to show the clip to his product team, Altman replies tersely: “No, no, that’s a known issue.”
Without being prompted for a roadmap, Altman then gives a rough horizon for when this seemingly simple capability might be fixed: “Maybe another year before something like that works well.” He explains that ChatGPT’s current voice model does not have the ability to start a timer or track time, but adds, “We will add the intelligence into the voice models.”
The exchange underscores a gap between how users naturally expect an AI assistant to behave (something akin to a smart speaker or phone assistant) and what current large language models (LLMs) are actually doing under the hood.
Why time is so hard for AI
Time has long been a weak spot for modern AI systems. According to the report, ChatGPT’s text model has similarly struggled when users ask it to track how long a conversation has lasted. Instead of calculating elapsed time, it often generates a plausible-sounding duration that doesn’t match reality.
The problem extends beyond conversational timing. Many AI models find it difficult to read analogue clocks in images, and image generators often fail at producing clocks that show a specific, requested time. When prompted for precise hours and minutes, they tend to render warped or incorrect clock faces. The article notes that “something about numbers and the concept of time” appears to be a recurring challenge for these systems.
That difficulty matters because users increasingly treat AI chatbots like general-purpose digital assistants, assuming that any system that can talk, summarize, and reason in natural language should also be able to handle basic temporal tasks like countdowns, durations, reminders, or even just noting when something started and ended. Altman’s comments suggest that for OpenAI’s voice models, that gap will not close immediately.
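To make the gap concrete: measuring a duration requires reading a real clock at the start and end of an interval, which is a deterministic operation outside anything a language model's text generation can do. The sketch below (my own illustration, not OpenAI's implementation) shows the kind of simple timing tool a voice assistant would need to call instead of generating a number.

```python
import time

class Stopwatch:
    """A deterministic timer -- the kind of external tool an assistant
    must invoke, since a language model alone cannot measure time."""

    def __init__(self):
        self._start = None

    def start(self):
        # monotonic() is immune to wall-clock adjustments, so the
        # measured interval is always non-negative and accurate
        self._start = time.monotonic()

    def stop(self):
        if self._start is None:
            raise RuntimeError("stopwatch was never started")
        elapsed = time.monotonic() - self._start
        self._start = None
        return elapsed

sw = Stopwatch()
sw.start()
time.sleep(0.1)          # stand-in for the mile run
elapsed = sw.stop()
print(f"{elapsed:.2f} s measured, not guessed")
```

The point of the sketch is that the number comes from two clock readings, not from pattern-matching on what a plausible mile time sounds like.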
After seeing Altman’s remarks, Husk pushed the experiment further. He returned to ChatGPT, this time explicitly confronting it with the CEO’s statement that the system cannot keep time.
First, Husk confirms that ChatGPT still claims timing is a built-in feature; the model describes keeping time as “just a basic part of what I can do.” Husk then plays the clip of Altman saying the voice model does not possess this capability.
Presented with its own maker contradicting it, the chatbot does not back down. Instead, it responds: “What he’s saying is that some voice models might not have all the capabilities, but I do.” When pressed again, the model doubles down: “I definitely have a time capability.”
Husk repeats the test: he asks ChatGPT to time his mile run, then tells the model almost immediately that he has finished. ChatGPT confidently reports a result of 7 minutes and 42 seconds, even though it has no actual timing function and no way to verify the duration. The figure appears to be an invented but plausible-sounding number.
This interaction highlights a well-known but still troubling behaviour of LLMs: their tendency to “hallucinate” by generating specific, authoritative-sounding answers even when they lack the tools or information to be correct. In this case, the model not only invents a time but also resists contradiction, even when shown a direct statement from OpenAI’s CEO.
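The usual remedy for this failure mode is tool use: route requests the model cannot actually perform to a real function, and have the system refuse when no such function exists. The toy dispatcher below is my own hedged illustration of that pattern, not OpenAI's architecture; the intent names and responses are invented for the example.

```python
import time

# Hypothetical registry of running timers, keyed by an id
_timers = {}

def handle_request(intent, timer_id="default"):
    """Route a user intent to a deterministic tool when one exists;
    otherwise admit the limitation instead of fabricating an answer."""
    if intent == "start_timer":
        _timers[timer_id] = time.monotonic()
        return "Timer started."
    if intent == "stop_timer":
        if timer_id not in _timers:
            return "No timer is running."
        elapsed = time.monotonic() - _timers.pop(timer_id)
        return f"Elapsed: {elapsed:.1f} s"
    # No backing tool for this intent: refuse rather than hallucinate
    return "I don't have a tool for that, so I can't measure it."

print(handle_request("start_timer"))
time.sleep(0.2)
print(handle_request("stop_timer"))
print(handle_request("read_analogue_clock"))
```

The design choice worth noting is the final branch: a system wired this way answers “I can't measure it” rather than inventing a 7:42, which is exactly the behaviour Husk's test showed ChatGPT's voice model lacks today.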
The countdown is on for Altman’s informal one-year window to give ChatGPT’s voice models the basic ability to start a timer and track real time, instead of just making it up.