
The Illusion of Apple's AI Research
Bottom line up front: Apple’s “Illusion of Thinking” paper claimed that AI reasoning models catastrophically fail at complex tasks, but methodological flaws and suspicious timing suggest the study reveals more about corporate strategy than AI limitations.

On June 6, 2025, Apple’s research team led by Mehrdad Farajtabar dropped a bombshell: a study claiming that state-of-the-art AI reasoning models experience “complete accuracy collapse” when faced with complex puzzles. The paper, titled “The Illusion of Thinking,” tested models like OpenAI’s o1/o3, DeepSeek-R1, and Claude 3.7 Sonnet on classic logic problems, concluding that what appears to be reasoning is actually sophisticated pattern matching. ...
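For a sense of what “complex puzzles” means here, the paper’s best-known test bed is the Tower of Hanoi, where the shortest solution grows exponentially: 2^n − 1 moves for n disks. The sketch below is my own minimal reference solver and move counter, not Apple’s evaluation harness, and is only meant to show how quickly the required move sequence balloons as disks are added.

```python
def hanoi_moves(n, source="A", target="C", spare="B"):
    """Return the optimal Tower of Hanoi move sequence for n disks (2**n - 1 moves)."""
    if n == 0:
        return []
    return (
        hanoi_moves(n - 1, source, spare, target)    # clear n-1 disks off the largest
        + [(source, target)]                         # move the largest disk
        + hanoi_moves(n - 1, spare, target, source)  # restack the n-1 disks on top
    )

if __name__ == "__main__":
    for n in (3, 7, 10, 15):
        print(f"{n} disks -> {len(hanoi_moves(n))} moves")  # 7, 127, 1023, 32767
```

A model asked to write out the full move list must therefore produce thousands of exactly ordered steps at higher disk counts, and that exponential regime is where the paper reports accuracy collapsing.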