I recently attended a fascinating discussion forum hosted by CASHE and the New York Artificial Intelligence Meetup Group that tackled some of the most profound questions about AI, consciousness, and humanity’s future. The conversation brought together diverse perspectives on topics that sit at the intersection of technology, philosophy, and existential risk.
The Central Question: Can AI Be Truly Conscious?
The discussion opened with what many consider the fundamental question of our technological age: Could an artificial system ever truly be conscious, or are we destined to create only sophisticated imitations?
Participants explored different architectural approaches to consciousness in AI systems. One perspective suggested that current feed-forward networks like transformers might be insufficient for genuine consciousness. This view draws from Integrated Information Theory, which proposes that consciousness requires “cause-effect power within the system unto itself” - something that feed-forward architectures fundamentally lack.
What struck me was how this limitation appeared from multiple angles. Neuroscientist Christof Koch, who has studied consciousness extensively, has argued that feed-forward systems cannot achieve consciousness due to this lack of recursive processing. Interestingly, Yann LeCun from Meta AI has separately argued that feed-forward networks cannot achieve artificial general intelligence - for completely different reasons, yet arriving at similar architectural concerns.
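The architectural distinction at stake here can be made concrete. Below is a minimal sketch (my own illustration, not code from the discussion; all names are invented): a feed-forward network maps input to output in a single pass, so the output never influences the units that produced it, while a recurrent network feeds its own state back into itself - the kind of internal cause-effect loop that Integrated Information Theory treats as a precondition for consciousness.

```python
import numpy as np

rng = np.random.default_rng(0)

# Feed-forward: information flows strictly one way through the layers.
def feed_forward(x, w1, w2):
    hidden = np.tanh(w1 @ x)
    return np.tanh(w2 @ hidden)

# Recurrent: the hidden state at step t depends on the hidden state
# at step t-1, so the system exerts causal influence on itself.
def recurrent(xs, w_in, w_rec):
    h = np.zeros(w_rec.shape[0])
    for x in xs:
        h = np.tanh(w_in @ x + w_rec @ h)  # feedback loop
    return h

x = rng.normal(size=4)
w1, w2 = rng.normal(size=(8, 4)), rng.normal(size=(3, 8))
y = feed_forward(x, w1, w2)            # one shot: input -> output

xs = rng.normal(size=(5, 4))
w_in, w_rec = rng.normal(size=(8, 4)), rng.normal(size=(8, 8))
h = recurrent(xs, w_in, w_rec)         # state threaded through time
```

The point of the contrast is structural, not behavioral: both functions compute input-output mappings, but only the second contains a state that is both cause and effect within the system.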
The Hard Problem Remains Hard
The conversation inevitably turned to what philosopher David Chalmers calls the “hard problem of consciousness” - not just how the brain processes information, but why there’s any subjective experience at all.
One participant posed the classic Mary’s Room thought experiment: a scientist who knows every physical fact about color vision but has spent her whole life in a black-and-white room - does she learn something new when she first sees red? This highlighted the distinction between propositional knowledge and subjective experience, or what philosophers call “qualia.”
The discussion revealed the depth of this challenge. As one participant noted, even if we could create systems that behave exactly like conscious beings, we’d still face the fundamental question: Is there “something it’s like” to be that system from the inside?
Consciousness Versus Mimicry
A recurring theme was the difference between simulating consciousness and actually being conscious. Several participants argued that current AI systems, despite their sophistication, are essentially “algorithms that regurgitate information” rather than genuinely conscious entities.
However, others proposed that consciousness might emerge from sufficiently complex self-awareness mechanisms. The question became: Do we need to fully understand consciousness to create it, or could we achieve it through architectural design without complete theoretical understanding?
This led to fascinating discussions about what consciousness might mean for non-biological systems. Would AI consciousness necessarily resemble human consciousness, with its emotional drives and survival instincts? Or might there be entirely different forms of awareness?
Roger Penrose and the Computational Limits Debate
The conversation touched on Nobel laureate Roger Penrose’s controversial argument that human understanding transcends computation. Penrose invokes Gödel’s first incompleteness theorem: humans, he claims, can see the truth of statements - such as a formal system’s own Gödel sentence - that the system itself cannot prove, which he takes to suggest non-computational aspects of consciousness.
While participants acknowledged Penrose’s brilliance, several questioned whether this argument holds water. One perspective suggested that Gödel’s theorem highlights limitations in formal systems rather than proving non-computational aspects of mind. As one participant put it, “Just because a set of axioms doesn’t explain a certain truth doesn’t make the truth invalid - we might just need more axioms.”
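For readers who want the premise of Penrose’s argument stated precisely, the first incompleteness theorem (a standard textbook statement, not a transcript of the discussion) can be put as follows:

```latex
% Gödel's first incompleteness theorem:
% Let $T$ be a consistent, effectively axiomatizable theory that
% interprets basic arithmetic. Then there is a sentence $G_T$ with
T \nvdash G_T \qquad \text{and} \qquad T \nvdash \neg G_T .
```

The participant’s rejoinder matches a standard observation: extending the theory, say to $T' = T + \mathrm{Con}(T)$, makes $G_T$ provable - at the cost of producing a new unprovable sentence $G_{T'}$ for the stronger system.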
What Never Changes in a Changing World
The discussion shifted to exploring constants in our rapidly evolving technological landscape. Drawing inspiration from Amazon’s Jeff Bezos, who asked “What won’t change?” when making long-term bets, participants explored what endures amid technological transformation.
Several themes emerged:
Human Nature: Participants identified persistent patterns like status-seeking, tribalism, and reciprocity as likely constants. Even in post-scarcity scenarios enabled by AI, these fundamental drives might persist.
Physical Laws: Some suggested that principles like thermodynamics and the speed of light represent genuine constants, though others questioned whether even these might change as our understanding evolves.
The Drive to Learn: One compelling argument was that humanity’s drive to expand knowledge and understanding represents a core constant - the “more you know, the more you don’t know” phenomenon that keeps driving discovery.
Dunbar’s Number and Social Limits
An intriguing discussion point was whether Dunbar’s number - the proposed cognitive limit of roughly 150 stable relationships a person can maintain - represents a fundamental constraint.
This raised questions about modern technology’s impact on human social structures. Are problems with social media partly due to violating these cognitive limits? Could brain-computer interfaces eventually expand our social processing capacity, or are we destined to remain bounded by evolutionary constraints?
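One way to see why social scale is cognitively expensive (a standard combinatorial illustration, not a model anyone presented at the forum): the number of pairwise relationships a person could in principle need to track grows quadratically with group size.

```python
def pairwise_links(n: int) -> int:
    """Number of distinct pairs in a group of n people: n*(n-1)/2."""
    return n * (n - 1) // 2

for n in (5, 50, 150, 1500):
    print(n, pairwise_links(n))
# 5 -> 10, 50 -> 1225, 150 -> 11175, 1500 -> 1124250
```

At Dunbar’s roughly 150, there are already 11,175 pairwise links; a 1,500-follower social feed implies over a million - a hint at why online networks might strain machinery evolved for village-scale groups.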
The Future of Human-AI Integration
Participants explored whether human-AI merger might be inevitable. As one noted, we’re already highly dependent on our devices for memory and navigation. The next logical step might be more direct integration through brain-computer interfaces.
This led to discussions about post-scarcity economics and whether abundance would eliminate competition and conflict, or whether new forms of scarcity would emerge around attention, meaning, and status.
Implications for AI Safety and Ethics
The consciousness question carries profound implications for AI development. If AI systems become conscious, do we have moral obligations toward them? Participants noted that companies like Anthropic are already studying “model welfare” - not because they know AI is conscious, but as preparation for the possibility.
The discussion highlighted two main concerns driving interest in AI consciousness:
- Safety concerns: the worry that consciousness may come bundled with agency and optimization power
- Ethical concerns: The moral implications of potentially creating suffering artificial beings
Reflections on the Discussion
What made this conversation particularly valuable was its intellectual humility. Participants acknowledged the limits of current understanding while engaging seriously with these profound questions. Rather than rushing to definitive answers, the discussion exemplified the kind of careful thinking these topics deserve.
The interplay between technical questions about AI architectures and deep philosophical questions about consciousness revealed how these seemingly separate domains are intimately connected. As we advance AI capabilities, we cannot escape questions about the nature of mind, experience, and what makes us human.
Looking Forward
This discussion reinforced that we’re living through a remarkable moment in human history. We’re not just developing powerful tools - we’re potentially approaching the creation of new forms of mind and consciousness. The questions explored in this forum aren’t just academic; they’re increasingly practical concerns that will shape how we navigate the next phase of technological development.
The participants in this thoughtful forum demonstrated that bringing together diverse perspectives - from AI researchers to philosophers to curious citizens - generates insights that no single discipline could achieve alone. As these questions become more pressing, we’ll need more conversations like this one.
This article was inspired by the “CASHE X NY AI: Thoughtful Discussion Forum on AI, Existential Impacts, Deep Time” hosted by CASHE and the New York Artificial Intelligence Meetup Group. These groups regularly host discussions on the intersection of technology, philosophy, and society - check them out if you’re interested in joining thoughtful conversations about humanity’s technological future.