The tech world is absolutely buzzing. Jony Ive, the legendary designer behind the iPhone and iPad, has partnered with OpenAI in a $6.5 billion deal to create what they’re calling the future of AI hardware. The headlines write themselves: the man who defined modern consumer electronics meets the company that gave us ChatGPT.
But here’s what’s interesting—once you get past the star power and the breathless coverage, we actually know remarkably little about what they’re building. And what we do know raises some fascinating questions about whether this partnership can deliver on its enormous promise.
What We Know (And Don’t Know) About the Device
The details that have leaked are intriguingly sparse. We’re talking about a pocket-sized, screen-free device that’s designed to be “contextually aware” of your surroundings. Think small pendant or perhaps something the size of an old iPod Shuffle. It’ll likely have cameras, definitely microphones, and the goal is to help users “wean away from screens” entirely.
Ambitious? Absolutely. Novel? Not really.
From a design perspective, there’s nothing here we haven’t seen before. Wearable tech, voice interfaces, ambient computing—these concepts have been floating around Silicon Valley for years. If there’s anything truly radical about what Ive and OpenAI are planning, it’s probably not the hardware itself but rather the fully integrated experience they’re promising.
This is where Ive’s Apple background becomes crucial. Part of Apple’s secret sauce has always been controlling the entire stack—hardware, software, services, and increasingly, the silicon inside. Does Ive want to recreate this level of integration at OpenAI? That would be genuinely ambitious, especially for a company that’s primarily known for software.
Science Fiction’s Vision of AI Companions
To understand what they might be building, it helps to look at how popular culture has envisioned AI devices. Science fiction has been remarkably prescient about technology trends, often serving as a blueprint for real-world innovation.
From HAL 9000’s ominous presence in 2001: A Space Odyssey to KITT’s helpful banter in Knight Rider, we’ve seen AI take many forms. But perhaps no film has been more influential in shaping modern expectations than Spike Jonze’s 2013 masterpiece Her.
In Her, the protagonist Theodore falls in love with Samantha, an AI operating system that exists purely as a voice. No screen, no physical form—just an incredibly sophisticated conversational AI that understands context, learns from experience, and develops genuine emotional intelligence. The film was strikingly prescient about cloud-based AI and the potential for truly natural human-computer interaction.
Here’s the thing: Her feels like a direct inspiration for what Ive and Altman are building. A screenless, voice-first AI companion that’s always present but never intrusive. Samantha could hop between devices seamlessly, understand Theodore’s emotional state, and provide contextual assistance without demanding constant attention.
The parallels are striking. But here’s where things get complicated.
The Voice Assistant Reality Check
Even though popular culture is filled with examples of AI taking voice assistant form, the real-world success of voice interfaces has been lukewarm at best. We’ve had Siri since 2011, Alexa since 2014, and Google Assistant shortly after. Yet by 2025, only about 60% of consumers regularly use voice assistants, and most of those interactions are limited to basic commands like checking the weather or setting a timer.
The problems are well-documented. Voice recognition struggles in noisy environments, commands need to be phrased precisely, and privacy concerns about always-on microphones have left many users wary. There’s also the fundamental issue that voice simply isn’t the best interface for many tasks—try shopping for clothes or comparing product features using only speech.
If Ive and OpenAI are betting on voice as the primary interface for their device, they’re swimming against a tide of consumer skepticism that even Apple, Amazon, and Google haven’t fully overcome.
Lessons from Google Glass
Then there’s the cautionary tale of Google Glass. Launched in 2013—the same year as Her, coincidentally—Glass promised ambient computing through a heads-up display. Like the rumored OpenAI device, it featured always-on cameras and voice control.
The public backlash was swift and brutal. Users were labeled “Glassholes” for wearing them in public, restaurants banned the devices over privacy concerns, and the $1,500 price tag made them accessible only to tech enthusiasts willing to endure social ridicule.
Google discontinued the consumer version in early 2015 and killed the enterprise edition in 2023. The core issues—privacy invasion, social awkwardness, and unclear value proposition—never got resolved.
Now imagine a device that’s even more invasive, designed to be fully aware of your life and surroundings. How will the public react to always-on cameras and microphones from a company already facing federal investigations over data privacy?
The Graveyard of AI Hardware
Recent attempts at AI-first consumer devices haven’t exactly inspired confidence. The Humane AI Pin launched in 2024 with tremendous hype but was discontinued by February 2025 after users discovered it couldn’t perform basic tasks reliably and had severe overheating issues.
The Rabbit R1 fared even worse. Despite selling 150,000 units initially, the orange square device proved unable to handle simple requests, suffered from terrible battery life, and had a 40% return rate within months of launch.
Both devices shared common failure modes: they overpromised on AI capabilities, delivered poor user experiences, and couldn’t articulate why anyone needed a separate AI device when smartphones were rapidly adding AI features.
Jony Ive’s Mixed Post-Jobs Legacy
This brings us to Jony Ive himself. There’s no question he’s one of the most influential designers of the modern era. The iMac, iPod, iPhone—these devices defined entire product categories and changed how we interact with technology.
But was Ive’s best work only possible because of his symbiotic relationship with Steve Jobs? Jobs provided the product vision and user-focused pragmatism that kept Ive’s aesthetic minimalism grounded in real-world usability.
After Jobs’ death in 2011, Ive’s track record became decidedly mixed. Yes, the Apple Watch became a massive success, and AirPods created an entirely new product category. But there were also significant missteps: the butterfly keyboard disaster that led to a $50 million settlement, the iPhone 6’s “Bendgate” structural failures, and the unloved cylindrical Mac Pro that prioritized form over function.
Former Apple executives noted that Ive became increasingly focused on artistic purity at the expense of practical concerns. His interest in luxury fashion and his comment that “art needs the proper space and support to grow” suggest someone who may have lost touch with the everyday user needs that made Apple’s products revolutionary.
Will Ive’s partnership with OpenAI recapture the magic of his Jobs-era work, or will it reflect the mixed legacy of his later Apple years?
OpenAI’s Mounting Challenges
Then there’s OpenAI itself. The company certainly deserves credit for transforming public perception of AI practically overnight with ChatGPT’s launch. But as a hardware partner, they face significant headwinds.
Unlike Apple in 2007, OpenAI isn’t entering the market from a position of strength in consumer devices. They’re fighting multiple copyright lawsuits from The New York Times and other publishers, facing federal privacy investigations, and dealing with growing regulatory scrutiny across multiple jurisdictions.
Recent court orders requiring OpenAI to preserve all ChatGPT logs, including deleted conversations, hardly inspire confidence in their data handling practices. How will consumers react to carrying an always-on OpenAI device when the company is already under federal investigation for privacy violations?
Moreover, while OpenAI pioneered the current AI boom, they’re increasingly playing catch-up to competitors like Google, Anthropic, and others who may have superior models by the time this device launches in late 2026.
Sam Altman, for all his promotional prowess, is no Steve Jobs. OpenAI is no Apple. The company has never shipped consumer hardware, never managed global supply chains, and never navigated the complex regulatory landscape that comes with always-on consumer devices.
The Bottom Line
I don’t mean to dismiss what Ive and OpenAI are attempting. The combination of world-class design talent and cutting-edge AI could potentially create something genuinely transformative. Ive’s stated goal of addressing the “unintended consequences” of smartphones suggests a thoughtful approach to technology’s impact on human behavior.
But once you get past the celebrity names and the cult of personality surrounding this partnership, there are substantial reasons for skepticism. The technical challenges of voice interfaces, the privacy concerns around always-on devices, the graveyard of failed AI hardware, and the mounting legal pressures on OpenAI all suggest this won’t be the slam dunk that headlines might suggest.
We’re still remarkably light on details about what they’re actually building. The ambitious timeline targeting late 2026 and the goal of shipping 100 million units “faster than any company has ever achieved” sound more like marketing hyperbole than realistic product planning.
Maybe they’ll surprise us. Maybe this partnership will crack the code that Google Glass, Humane, Rabbit, and others couldn’t. Maybe Ive’s design genius combined with OpenAI’s AI capabilities will create the seamless, beneficial AI companion that science fiction has long promised.
But until we see actual working devices solving real problems for real people, this remains a fascinating experiment built on enormous assumptions. The mystery isn’t just what they’re building—it’s whether they can overcome the fundamental challenges that have doomed every similar attempt before them.