I logged into a Google Meet expecting the usual tech meetup optimism. What I found instead was something more sobering—a virtual room full of AI researchers, engineers, and enthusiasts genuinely wrestling with whether we’re prepared for what’s coming next.

The New York Artificial Intelligence Meetup Group hosted an in-depth virtual seminar on AGI—“AGI: What, When, How…and Are We Ready?”—and the conversations that emerged were both fascinating and unsettling. Here’s what stuck with me from that evening online.

The Turing Test Is Dead (And We Need Better Measures)

One of the first things that became clear was how outdated our traditional measures of AI intelligence have become. The Turing Test, once considered the gold standard for machine intelligence, feels almost quaint now.

As one participant pointed out, current language models can already fool people in conversations without demonstrating anything close to general intelligence. We’re essentially testing for mimicry, not understanding.

The group spent considerable time discussing the ARC-AGI benchmark (the Abstraction and Reasoning Corpus), which tests visual-spatial reasoning through pattern-recognition puzzles. What makes ARC interesting is that it requires AI systems to generalize from just a few examples—something humans do naturally but machines struggle with.
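For readers who haven’t seen an ARC task, here’s a minimal sketch of the shape of the problem. The public ARC dataset stores each task as JSON with a few “train” demonstration pairs and one or more “test” inputs; the tiny grids and the mirror rule below are invented purely for illustration.

```python
# A minimal ARC-style task, following the structure of the public ARC
# dataset: grids are small 2-D arrays of integers 0-9 (each a color).
# This particular task and its rule are made up for illustration.
task = {
    "train": [  # a handful of demonstration pairs
        {"input": [[0, 1], [1, 0]], "output": [[1, 0], [0, 1]]},
        {"input": [[2, 0], [0, 2]], "output": [[0, 2], [2, 0]]},
    ],
    "test": [   # the solver must infer the rule and produce the output
        {"input": [[3, 0], [0, 3]]},
    ],
}

def solve(grid):
    """Hypothesized rule for this toy task: mirror each row left-to-right."""
    return [list(reversed(row)) for row in grid]

# Check the hypothesized rule against the demonstrations before answering.
assert all(solve(pair["input"]) == pair["output"] for pair in task["train"])
print(solve(task["test"][0]["input"]))  # -> [[0, 3], [3, 0]]
```

The catch is that every task hides a different rule, so a solver can’t memorize answers; it has to induce the transformation from two or three examples on the fly.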

OpenAI’s o3 model recently showed dramatic improvements on ARC-AGI, but here’s the catch: it reportedly required between $15,000 and $30,000 in compute per task to achieve those results. That’s not intelligence—that’s brute force with a massive budget.

We’re Confusing Intelligence with Optimization Power

This distinction became a central theme throughout the evening, and it’s one that keeps me thinking. The group made a compelling case that intelligence, consciousness, and optimization power are separate dimensions that just happen to coincide in humans.

Think about fire. Fire is what they called a “dumb optimizer”—it converts fuel into heat and light very effectively, but it’s not intelligent. Yet fire can still burn down forests and destroy cities. The optimization happens without any understanding or intent.

According to research on AI alignment, the real danger isn’t necessarily from intelligent AI, but from any system that optimizes strongly for goals that don’t align with human values. Intelligence might not even be the primary concern.

The Safety Problem Is Harder Than We Thought

The technical challenges around AI safety came up repeatedly, and they’re genuinely difficult. One issue that particularly caught my attention was something called the “superposition problem.”

In current AI systems, a single direction in a network’s internal vector space can simultaneously encode several unrelated concepts. This makes it nearly impossible to verify what values or goals an AI system actually holds. Imagine trying to check whether someone shares your values, but their thoughts exist in a format where every concept overlaps with dozens of others.
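A toy example makes the inspection problem concrete. Here’s a small NumPy sketch (every number in it is arbitrary, and it isn’t a model of any real network): ten conceptual features get packed into a four-dimensional space, so reading out any single feature unavoidably picks up interference from the others.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_dims = 10, 4  # more concepts than dimensions to store them in

# Assign each feature a random direction in the low-dimensional space.
directions = rng.normal(size=(n_features, n_dims))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

# Activate feature 0 only, then encode the activations as one vector.
activations = np.zeros(n_features)
activations[0] = 1.0
hidden = activations @ directions  # all an inspector gets to see

# Read every feature back out by projection: feature 0 reads ~1.0, but
# the other nine read nonzero "ghost" values, because ten directions
# can't all be orthogonal in only four dimensions.
readout = directions @ hidden
print(np.round(readout, 2))
```

Only one concept is actually active, yet the readout for the other nine isn’t zero. An inspector staring at the hidden vector can’t cleanly say which concepts are on, which is roughly why verifying a system’s goals from its internals is so difficult.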

Then there’s the more basic problem that humans themselves don’t have coherent, codifiable value systems to embed in the first place. We make decisions based on context, emotion, culture, and countless other factors that we can’t easily translate into code.

Mechanistic interpretability research is making progress on understanding how AI systems work internally, but we’re still far from being able to peer inside these systems and understand their actual decision-making processes.

This Economic Disruption Will Be Different

Every technological revolution has displaced workers, but this one feels different in scope and speed. Previous innovations mostly affected manual labor first, then gradually moved up the skill ladder. AI is hitting knowledge workers and professionals directly.

The participants expressed genuine concern about the timeline. Unlike the decades-long transitions of previous industrial revolutions, AI capabilities seem to be advancing faster than human institutions can adapt.

One conversation that stuck with me centered on what happens to human meaning and purpose when cognitive abilities—previously our unique contribution—become commoditized. We don’t have good models for economies where human intelligence isn’t scarce.

Timeline Uncertainty Creates Its Own Problems

Perhaps the most unsettling aspect of the discussion was the uncertainty around timelines. Participants distinguished between two different questions: how long until we achieve AGI, and how fast capabilities will expand once we do.

The concept of recursive self-improvement came up—AI systems potentially improving themselves, leading to rapid capability escalation. But nobody could confidently predict when this might happen or how quickly it would unfold.
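One way to feel why this resists prediction is a back-of-the-envelope growth model. This is a toy, not a forecast: the rate r and the exponent k below are invented, with k standing in for how strongly current capability feeds back into further self-improvement.

```python
# Toy takeoff model: capability C grows at rate C' = r * C**k.
# k < 1: improvement gets harder as you go; k = 1: steady exponential;
# k > 1: each gain accelerates the next (runaway feedback).
def time_to_reach(target, k, r=0.05, c=1.0, dt=0.01, t_max=10_000):
    t = 0.0
    while c < target and t < t_max:
        c += r * c**k * dt  # simple Euler step
        t += dt
    return round(t, 1)

for k in (0.5, 1.0, 1.5):
    print(f"k={k}: ~{time_to_reach(100, k)} time units to 100x capability")
```

With these made-up numbers, k = 0.5 takes roughly 360 time units to reach a 100x capability gain, k = 1.0 about 92, and k = 1.5 about 36, with growth accelerating toward a blowup. When the outcome is that sensitive to a parameter nobody knows how to measure, confident timeline predictions are hard to justify.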

This uncertainty creates a policy nightmare. How do you prepare for something that might happen in five years or fifty? How do you regulate technologies that are advancing faster than regulatory frameworks can evolve?

The US-China AI competition adds another layer of complexity. Safety considerations might take a backseat to geopolitical advantages, creating pressure to move fast rather than move carefully.

What This Means for You

After spending hours absorbing these perspectives through my screen, I came away with a few practical takeaways:

Stay informed, but stay grounded. The AI field moves quickly, and it’s easy to get caught up in either hype or doom. Follow developments from credible sources, but remember that even experts disagree on fundamental questions.

Think about adaptability over specific skills. If rapid AI advancement is possible, the most valuable human traits might be flexibility, creativity, and the ability to work with AI systems rather than compete against them.

Engage with these questions now. The participants agreed that society needs broader conversations about AI’s implications. These aren’t just technical problems—they’re human problems that affect all of us.

The virtual discussion left me with more questions than answers, but that might be the point. We’re potentially facing the most significant technological transition in human history, and the honest truth is that nobody knows exactly how it will unfold.

What became clear is that the people building these systems are taking the challenges seriously. The question is whether the rest of us are ready to join that conversation.