I have become the living embodiment of my own contradiction.

Almost a week ago, I published “The Vibe Coding Paradox: When Understanding Became Optional”, exploring the unsettling implications of Andrej Karpathy’s confession that he was building functional applications without comprehending the code they contained. I wrote about the psychological weight of surrendering comprehension and the fundamental questions vibe coding raises about human expertise in an AI-driven world.

Then I immediately embarked on my own vibe coding projects, like the one I’m about to tell you about.

The irony wasn’t lost on me as I opened a new Claude project designed to help me build n8n-nodes-perplexity-research-tool—an open source tool node that would enable AI agents in n8n workflows to conduct autonomous deep research using Perplexity’s API. I’m an SVP Engineering and fractional CTO with extensive experience in platform engineering and technical strategy. What I don’t understand is TypeScript.

This created a fascinating experiment: Could traditional engineering discipline solve the vibe coding comprehension problem? Could strategic planning and rigorous project management frameworks transform AI collaboration from vibes-based development into something methodical and reliable?

The Strategic Context: Why This Tool Needed to Exist

The distinction between regular nodes and tool nodes in n8n isn’t merely technical—it’s architectural. According to n8n’s documentation, “The Tools Agent uses external tools and APIs to perform actions and retrieve information. It can understand the capabilities of different tools and determine which tool to use depending on the task.”

Regular nodes are designed for human-directed workflows where each step is explicitly configured. Tool nodes are designed for AI consumption, where agents make autonomous decisions about when and how to use available capabilities.
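To make the distinction concrete, here is a minimal sketch of a node that opts into agent usage, assuming n8n’s `usableAsTool` flag as the mechanism; the rest of the description is illustrative scaffolding, not our final design:

```typescript
import { NodeConnectionType } from 'n8n-workflow';
import type { INodeType, INodeTypeDescription } from 'n8n-workflow';

export class PerplexityResearchTool implements INodeType {
	description: INodeTypeDescription = {
		displayName: 'Perplexity Research Tool',
		name: 'perplexityResearchTool',
		group: ['transform'],
		version: 1,
		// Written for the agent as much as the human: the Tools Agent reads
		// this description when deciding whether the tool fits a task.
		description:
			'Conduct autonomous web research with citations via the Perplexity API',
		// The flag that, as I understand n8n's docs, lets an AI Agent node
		// consume this node as a tool rather than as a human-configured step.
		usableAsTool: true,
		defaults: { name: 'Perplexity Research Tool' },
		inputs: [NodeConnectionType.Main],
		outputs: [NodeConnectionType.Main],
		properties: [],
	};
}
```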

The existing Perplexity integration for n8n provides “access to Perplexity’s large language models through n8n workflows” with support for “chat completions” and “multiple messages with system, user, and assistant roles.” It’s a well-built node that serves its intended purpose: enabling humans to interact with Perplexity’s LLMs within n8n workflows.

But it wasn’t designed for autonomous AI agent consumption. AI agents don’t need chat completion interfaces—they need research tools. They don’t need conversation threading—they need structured output with proper citations. They don’t need model selection guidance for humans—they need comprehensive tool descriptions that help them choose appropriate models for specific research tasks autonomously.

Consider a workflow where an AI agent needs to research market trends, validate claims against current data, and produce a report with proper citations. With existing tools, the agent would need human intervention to conduct research, breaking the autonomous flow that makes agentic workflows powerful. What’s needed is a tool node specifically designed for AI agent consumption—one that accepts research queries, returns structured results with citations, and provides detailed tool descriptions that enable agents to use it effectively.

Engineering Leadership Meets Vibe Coding

My approach began not with code, but with strategy. I created a specialized Claude project with a comprehensive system prompt—nearly 3,000 words establishing Claude as “an expert assistant specializing in n8n workflow automation and AI agent development” with the specific charter: “We are going to build and maintain together a tool type node for n8n that will give AI Agent nodes a tool for working with Perplexity deep research API.”

This wasn’t a casual request for coding help. It was a project charter.

The critical decision was insisting on creating three foundational documents before writing any code. The Product Requirements Document (PRD) would crystallize what we were building and why. The Technical Design Document (TDD) would address how we would build it. The Project Checklist would ensure nothing fell through the cracks during implementation.

This wasn’t Claude’s suggestion—it was mine. After many years of leading engineering teams, I knew that projects succeed or fail based on the clarity of their requirements and the rigor of their planning. I suspected that vibe coding projects failed for the same reasons traditional development projects fail: unclear requirements, insufficient planning, and inadequate risk management.

The three-document framework served multiple purposes. It forced me to articulate what I actually wanted to build rather than relying on vague aspirations. It created concrete deliverables that would persist across context window limitations. Most importantly, it established a collaboration pattern where I provided strategic direction while Claude handled technical implementation details.

But I also had to establish clear boundaries. Following the old adage “measure twice, cut once,” I was explicit with Claude from the beginning: no coding until I gave permission. We would plan first, comprehensively, before touching implementation.

The Planning Phase: Where Strategy Meets Implementation

The initial conversation with Claude revealed the sophistication possible when AI collaboration is properly structured. I presented the project concept, and Claude’s response demonstrated the value of the specialized system prompt. From the chat transcript:

Claude: “This is an excellent project that bridges two powerful technologies. You want to create an open source n8n tool node that enables AI Agent nodes to leverage Perplexity’s search API capabilities. This would give AI agents running in n8n workflows access to real-time web research with proper citations.”

Rather than jumping into implementation, Claude responded with strategic analysis—exactly what I had directed. The conversation stayed focused on requirements and architecture because I had established clear guardrails. This human-in-the-loop discipline proved critical. Without explicit guidance to avoid embellishment, Claude might have invented cost projections, feature roadmaps, or timeline estimates that weren’t part of our discussion. What I lacked in TypeScript experience, I compensated for with engineering leadership discipline.

The PRD process became genuinely collaborative. I provided strategic vision: the gap between existing Perplexity nodes and agentic workflow needs, the importance of citation handling for autonomous research, the community contribution strategy. Claude contributed technical insights about n8n’s node architecture, parameter organization patterns that would optimize AI agent usage, and the specific requirements for community verification.

The TDD development proved equally valuable. Claude designed an API integration architecture using n8n’s declarative style, proposed error handling patterns specifically for AI agent consumption, and detailed the parameter design that would enable autonomous model selection. When Claude recommended exposing all Perplexity models with detailed descriptions—“sonar: Fast general queries, good for simple factual lookups” versus “sonar-deep-research: Comprehensive analysis for complex topics requiring thorough investigation”—I couldn’t evaluate the technical implementation details, but I could assess whether this aligned with our strategic goal of enabling autonomous agent decision-making.

The Project Checklist became the framework for managing the comprehension gap. Claude proposed an eight-phase development process covering everything from repository setup to community verification. Each phase included specific deliverables and success criteria that I could evaluate without understanding implementation details.

By the end of our planning conversation, we had created three comprehensive documents totaling about 9,000 words of requirements, technical specifications, and implementation guidance. Any competent TypeScript developer could use these documents to build the tool node we had designed.

The Current State: Ready for Implementation

As I write this, Claude has expressed readiness to begin implementation. From our conversation: “Based on the documentation you’ve provided… I have a clear understanding of both the n8n ecosystem and the Perplexity API requirements. We can definitely build this tool together.”

This type of collaboration represents a major pivot point in my career. I’m accustomed to having engineers working for me, ready to build whatever I ask for. But I’m doing this one for myself and the broader community, with no budget and no staff to call on. It’s just me and my AI. The strategic foundation is more solid than many traditional projects I’ve managed. The technical details are more comprehensively documented than typical enterprise initiatives. Yet I couldn’t write a single line of the TypeScript that will bring our designs to life.

The question that keeps recurring is whether this represents a sustainable form of technical leadership or an elaborate form of professional self-deception. Traditional engineering leadership assumes that strategic decisions require sufficient technical understanding to evaluate implementation feasibility and quality. My experiment assumes that strategic oversight can be maintained while delegating technical evaluation to AI systems.

The success of this collaboration also depended on preparation. I had pre-loaded the Claude project with comprehensive reference materials—documentation about n8n tool development and the Perplexity API that I had commissioned from Perplexity itself. This homework helped ensure that the AI engineer could get most decisions right the first time, though I’m definitely expecting to hit snags along the way.

What We’re Actually Building

The tool we designed represents one piece of a larger transformation in how I approach technical leadership. At the micro level, we’re building a research tool that enables AI agents in n8n workflows to conduct autonomous, citation-rich research using Perplexity’s API. But at the macro level, I’m building new skills I’ll need as a senior leader to successfully direct hybrid human-AI teams.

This Perplexity tool is just one of many I intend to build. Part of what I’m doing is leveraging new AI capabilities to reduce friction in my own intellectual pursuits. Part of what I’m doing is learning to lead by doing—discovering through practice what humans excel at, what AI excels at, and how to frame productive collaborations between us.

The technical architecture itself represents a significant evolution in AI agent capabilities within workflow automation platforms. The fundamental architectural choice was implementing the node using n8n’s declarative style rather than programmatic style. According to n8n’s documentation, declarative style “uses JSON-based syntax, making it simpler to write with reduced bug risk” and “specifically supports REST API integrations.”
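For a concrete sense of what declarative style means, here is a sketch of a description fragment. The `requestDefaults` and `routing` shapes follow n8n’s declarative-style documentation, but the endpoint wiring and body mapping are assumptions; shaping a flat query into Perplexity’s `messages` array would likely need a `preSend` hook in the real node:

```typescript
import type { INodeTypeDescription } from 'n8n-workflow';

// Declarative style: the HTTP call is described as data on the node,
// and n8n's engine builds and sends the request. With no imperative
// request code, there are fewer places for bugs to hide.
const description: Partial<INodeTypeDescription> = {
	requestDefaults: {
		baseURL: 'https://api.perplexity.ai',
		headers: { 'Content-Type': 'application/json' },
	},
	properties: [
		{
			displayName: 'Query',
			name: 'query',
			type: 'string',
			default: '',
			required: true,
			description: 'The research question to investigate',
			routing: {
				// Illustrative mapping only: in the real node, shaping this
				// into Perplexity's messages array would likely need a
				// preSend transform.
				send: { type: 'body', property: 'query' },
			},
		},
	],
};
```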

The parameter architecture optimizes for AI agent consumption while maintaining human usability. Our design includes required fields for query input and model selection, with optional collapsible sections for search options and advanced configuration. The model descriptions are specifically crafted for AI agent decision-making, as the sketch following this list shows:

  • sonar-pro: “Enhanced capabilities, larger context window, production-quality research”
  • sonar-deep-research: “Comprehensive analysis for complex topics requiring thorough investigation”
  • sonar-reasoning-pro: “Advanced reasoning for complex multi-step problems”
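Here is a sketch of how those descriptions attach to the model parameter using n8n’s `options` property pattern; the descriptive strings come from our design documents, while the surrounding structure is illustrative:

```typescript
// One entry from the node's `properties` array: the model selector.
// The agent reads these descriptions when choosing a model, so they are
// written as decision criteria rather than marketing copy.
const modelProperty = {
	displayName: 'Model',
	name: 'model',
	type: 'options' as const,
	default: 'sonar-pro',
	description: 'Perplexity model best suited to the research task',
	options: [
		{
			name: 'Sonar Pro',
			value: 'sonar-pro',
			description:
				'Enhanced capabilities, larger context window, production-quality research',
		},
		{
			name: 'Sonar Deep Research',
			value: 'sonar-deep-research',
			description:
				'Comprehensive analysis for complex topics requiring thorough investigation',
		},
		{
			name: 'Sonar Reasoning Pro',
			value: 'sonar-reasoning-pro',
			description: 'Advanced reasoning for complex multi-step problems',
		},
	],
};
```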

The error handling philosophy represents a fundamental shift from human-centered to agent-centered design. Our system classifies failures into specific types—rate limits, authentication failures, model unavailability—with structured responses that include retry guidance and suggested alternative actions. When an AI agent encounters a rate limit error, it receives not just an error message but specific guidance about retry timing and alternative model selection.
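To make that concrete, here is a hypothetical sketch of the agent-facing error shape the TDD describes; the type and field names are my own illustration, not the shipped schema:

```typescript
// Hypothetical error contract for agent consumption: instead of a bare
// message, the agent gets a machine-usable classification plus recovery
// guidance it can act on without human intervention.
type ResearchErrorType =
	| 'rate_limit'
	| 'authentication'
	| 'model_unavailable';

interface AgentFacingError {
	error: true;
	type: ResearchErrorType;
	message: string;
	retryable: boolean;
	retryAfterSeconds?: number; // present for rate limits
	suggestedAction?: string; // e.g. fall back to a cheaper model
}

// Example: what an agent might receive after a 429 from the API.
const rateLimited: AgentFacingError = {
	error: true,
	type: 'rate_limit',
	message: 'Perplexity API rate limit reached',
	retryable: true,
	retryAfterSeconds: 30,
	suggestedAction: 'Retry after the delay, or switch to a lighter model',
};
```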

The output format provides clean, structured data that agents can process effectively (a sketch of the contract follows the list):

  • Clean answer text without inline citation markers
  • Structured sources array with title and URL pairs
  • Model usage metadata for cost tracking
  • Search metadata including query count and processing time
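Expressed as a TypeScript contract, that output might look like the sketch below; the field names are my illustration of the design rather than the final schema:

```typescript
// Hypothetical output contract matching the four elements above.
interface ResearchSource {
	title: string;
	url: string;
}

interface ResearchResult {
	answer: string; // clean text, no inline citation markers
	sources: ResearchSource[]; // structured citations as title/URL pairs
	usage: {
		model: string;
		inputTokens: number;
		outputTokens: number; // enables cost tracking
	};
	search: {
		queryCount: number;
		processingTimeMs: number;
	};
}
```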

Broader Implications: The Future of Technical Leadership

This project has become a case study in how technical leadership might evolve as AI capabilities expand. The traditional model assumes that strategic decision-making requires deep technical understanding. My experience suggests that strategic leadership can remain effective when operating across an expanded gap between vision and implementation.

The key insight is developing new evaluation criteria. Rather than evaluating technical proposals through implementation details I would need to understand, I’ve learned to evaluate them on alignment with strategic objectives, compliance with established standards, and consistency with architectural principles.

This shift has broader implications for engineering organizations. If AI systems can generate technically sound implementations from strategic requirements, then the critical skill becomes crafting those strategic requirements with sufficient clarity and comprehensiveness to guide AI implementation effectively.

The three-document framework we developed represents one approach to this challenge. By forcing strategic decisions into explicit documentation before implementation begins, we created a framework that enables AI collaboration while maintaining human strategic control.

For the n8n and Perplexity Communities

The tool we’re building addresses a specific gap in enabling sophisticated AI agent workflows. Current agents often rely on static knowledge or simple API lookups, but they typically can’t conduct comprehensive, multi-source research that human knowledge workers take for granted.

Our Perplexity research tool enables a different class of agentic workflows. An AI agent tasked with market analysis can now autonomously research current trends, validate claims against multiple sources, and produce reports with proper citations. An agent managing customer support can research complex technical issues and provide responses grounded in current documentation.

The key innovation isn’t simply adding research capabilities—it’s designing those capabilities specifically for autonomous consumption. The tool’s parameter design, error handling, and output format all optimize for AI agent decision-making rather than human interaction patterns.

For n8n developers, this project demonstrates design principles that could enhance any tool node intended for agent consumption: comprehensive parameter documentation, structured error handling, and AI-optimized output formats.

For the Perplexity community, the tool represents an expansion of use cases beyond direct human interaction, enabling Perplexity’s researched, cited responses to be deployed autonomously within larger workflows.

Reflection: Understanding vs. Capability in the AI Era

This project has forced me to confront fundamental questions about professional expertise when AI systems can generate technical implementations that exceed individual human capabilities. My experience suggests that the resolution isn’t abandoning understanding but operating at different levels of abstraction.

I don’t understand the TypeScript syntax that Claude will generate, but I understand the strategic requirements that should guide that implementation. I can’t debug n8n routing configurations, but I can evaluate whether our architectural decisions align with project objectives and community standards.

This shift from implementation understanding to strategic understanding may represent a broader evolution in how human expertise contributes to technical outcomes. The strategic decisions I made about architecture, user experience, and integration patterns required substantial technical knowledge—just not implementation-level technical knowledge.

The difference wasn’t between technical and non-technical decision-making—it was between strategic technical decision-making and implementation technical decision-making. Both require technical sophistication, but they operate at different levels of abstraction.

Rather than replacing technical expertise, AI systems may be reshaping it toward higher levels of abstraction. Technical leaders may need to become more sophisticated at strategic planning, requirement articulation, and architectural decision-making while becoming less focused on implementation details.

Conclusion: Engineering Still Requires Humans

Standing at the threshold between comprehensive planning and unknown implementation, I’ve learned that the comprehension paradox that makes vibe coding psychologically uncomfortable for experienced engineers isn’t unsolvable. Solving it requires redefining what we mean by understanding and control in technical projects.

The three-document framework represents more than a set of project management tools. It represents a methodology for maintaining strategic oversight while delegating technical implementation to AI systems. By forcing strategic decisions into explicit documentation before implementation begins, we created evaluation criteria that enable quality assessment without requiring implementation expertise.

This approach transforms vibe coding from an abdication of professional responsibility into a new form of technical leadership. Rather than surrendering understanding, we’re operating at higher levels of abstraction while maintaining accountability for outcomes.

As I prepare to watch Claude implement our designs, the broader implications seem clear: as AI capabilities expand into increasingly sophisticated technical tasks, competitive advantage will belong to individuals and organizations that develop effective patterns for strategic oversight of AI implementation.

Engineering still requires humans, but the nature of that requirement is evolving. There’s much exploration ahead to discover what humans are best at, what AI excels at, and how to frame the collaborations between us—human to human, human to AI, AI to AI—to achieve the best outcomes. The code may write itself, but strategy, architecture, and quality standards still require human judgment, domain expertise, and professional accountability.

This project continues as Claude begins implementation. The next phase will test whether our strategic planning can successfully guide AI implementation to produce a tool that meets community standards and serves its intended purpose. That story—and what it reveals about the practical limits and possibilities of disciplined vibe coding—will be the subject of a follow-up article when the tool reaches functional completeness.