2026-02-10

My AI Workflow

Am I backtracking on Vibe Coding? A look back at my journey to find an effective workflow with AI.


A few months ago, I posted on LinkedIn saying that, basically, Vibe Coding is crap and that I hated the experience. But I also said that, paradoxically, I'd love to one day find an effective way to code with AI.

When I tried vibe coding last year, my problem was the loop I fell into. I'd set up my project, pick the next task (from a Kanban board or TODOs in the code), prompt Claude Code to do it, review the result, and occasionally ask it to write tests if the logic was sensitive. If I spotted anomalies, I'd prompt it to fix them, and once I was satisfied, I'd commit. I also had a specs file, there from the start, that I'd sometimes forget about, and it drifted over time away from the actual state of the project. So maybe it wasn't vibe coding that was the problem; maybe I just didn't know how to do it yet. So what changed? First of all, I now prefer to call it an AI workflow.

BMAD Method

Some time ago, Benjamin Code (a French YouTuber) talked in a video about his new AI workflow, the BMAD Method: a setup of agents, commands, and tools. To summarize, it involves designing a project in depth, scrum-style, with multiple agents, each with its own role (product owner, UX designer, marketing, QA, etc.). Each agent runs several brainstorming workshops to gather requirements, write specs, define personas, and much more, and each workshop produces a document that the next agent builds on.

I tested this method on a really basic project: an interval timer for workouts. I spent almost a week switching from one agent to another, running brainstorming workshops, and waiting out Claude usage cooldowns (I was on a Pro subscription at the time).

I wasn't convinced. There were way too many workshops, too many generated documents. It exhausted me. Zero fun. It was heavy and tedious BUT... also very enriching. Because I have to admit, I was forced to think more deeply about the project's design, better prepare it upfront, and ultimately understand it better before even starting to code. I realized that the key was perhaps in the design and the workflow after all.

Harper Reed Workflow

A few weeks ago, during my daily tech watch, I came across a video about an article that proposes an AI workflow with only 4 very simple prompts, generating just 3 markdown files: the project spec, the implementation plan, and the verification checklist. The emphasis is on an iteration loop that's almost always the same: read the docs, understand the goal, the project, and the current step, read the code, write the code, build the project, fix errors, write and run tests, fix errors, and if everything's green, update the docs and move to the next step.
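
For the curious, here's roughly what that loop looks like once written down as a reusable prompt. This is my own sketch, not the article's exact wording, and the file names (spec.md, plan.md, todo.md) are just the ones I use:

```
Read spec.md, plan.md, and todo.md to understand the project, the goal,
and the current step. Then:

1. Read the relevant code.
2. Implement the current step.
3. Build the project; fix any errors.
4. Write and run the tests for this step; fix any failures.
5. Once everything is green, check the step off in todo.md and update the docs.
6. Commit, then move on to the next step.
```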

And that changed everything. Because this time, I spent only one afternoon brainstorming and one day implementing a functional and relatively complete MVP. And it was really fun to watch my project come to life step by step and so quickly.

I now have a simple, relatively lightweight, easily reusable method, and I have enough confidence in what's been produced. The fun part is watching the project progress on its own, one commit after another. At the end I type "Continue" and off it goes again. It's surprisingly satisfying.

Post-greenfield

Now the workflow has run its course: I have my MVP, and it works. And now what? How do I add a feature without breaking my workflow? That's where I felt a bit without a safety net. The base workflow doesn't account for this; it was ideal for a greenfield project, meaning one started from a blank page. But now that the project exists, what do you do?

I tried simply prompting for what I needed. That can work in some cases, but I now know it's not an effective way to work. I'd like to keep this workflow, but for adding to or modifying what already exists. Do I grow the existing files? Start fresh with new ones? Is there a risk of breaking everything?

So I went back to brainstorm with Claude, and we set up a workflow for this post-greenfield phase, keeping the same iteration loop, the same controls, and the same documentation updates. Basically, we keep the specs file we had before, but it now becomes our reference for understanding the project's context, and each feature gets its own 3 files: specs/plan/todo.
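
Concretely, the layout I've been converging on looks something like this. Locations are still in flux, as I say below, and the feature name is just an example:

```
my-project/
├── CLAUDE.md                  # standing instructions for Claude Code
├── docs/
│   ├── spec.md                # project-level spec: the context reference
│   └── features/
│       └── interval-pause/    # one folder per feature (example name)
│           ├── spec.md        # what the feature should do
│           ├── plan.md        # implementation steps
│           └── todo.md        # checklist, ticked off step by step
└── src/
```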

It works reasonably well, but not everything is sorted out yet regarding the instructions I need to put in CLAUDE.md from project to project. Same for where these post-greenfield files should live, but that will come with experience. I still need to make the whole thing robust, but the essentials are there, and it works. I'll update this article when I've solved this problem.

Conclusion

What I take away from all this is the importance of thorough brainstorming before starting a project, producing implementation documents that are simple but dense with information, and setting up an implementation loop that includes tests and documentation updates at every step.

In the end, where just a few weeks ago I felt mostly frustration at letting AI take on too much responsibility in my projects, I now get real satisfaction from watching my git repo progress on its own, commit after commit. And ultimately, I'm doing what I love most about this job: finding solutions to problems.

I believe I've unlocked a new skill: coding effectively with AI. I have a backlog of ideas and projects as long as my arm that I could never get through, and I feel like that's about to change in the coming months.