2026-03-05

Parallelizing Work with AI

80% preparation, 20% execution. Lessons learned from parallelizing tasks with Claude Code and git worktree.


Parallelizing AI tasks with multiple Claude Code terminals

Parallelizing Work with AI. 80% Preparation, 20% Execution

I've been wanting to try parallelizing tasks with my AI workflow for a while. I felt it was an essential skill, and one I hadn't developed yet.

At the same time, I didn't want to mess around with the workflow I use on Coneko, which is very reliable.

So I'm taking advantage of being in a bit of a waiting period for user feedback and data on Coneko to prototype my new project. And since it's just a POC, I can experiment with things. Parallelization in particular.

What I knew at this point was that I needed git worktree to check out several working copies of the repo, and to launch a Claude Code terminal from each of these worktrees. Once again, the key is to organize everything upfront. We always come back to this with AI: always think and prepare before any implementation.
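The worktree part can be sketched in a few commands. This is a minimal, throwaway-repo version of the idea; task and branch names are illustrative, not from my actual project.

```shell
# Minimal sketch: one worktree + branch per parallel task, so each
# Claude Code session gets its own directory and branch and the
# sessions never clobber each other's files.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "base"

# Carve out a sibling directory per task (names are illustrative).
for task in task-a task-b task-c; do
  git worktree add -q -b "$task" "${repo}-${task}"
done

git worktree list   # shows the main checkout plus the three task worktrees
```

From there, each terminal just `cd`s into its own worktree before starting Claude Code.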

In practice, what I did was ask Perplexity how this is typically organized. Then brainstorm with Claude Desktop on how it would translate to my prototype for parallelizing tasks.

The takeaway is that some tasks must stay sequential because of dependencies: the common foundation that the parallel tasks build on, and the reunification task at the end that merges their results back together.

The other key element is that before implementing anything, we first need to write the specifications for all the tasks. Why? Because the specs adjust to each other as the overall design evolves, and that's far cheaper to do while they're still just documents.

Now we can start implementing. First, the preparatory tasks, sequentially. Then, we create the worktrees, and finally we can ask our AI squad to work in parallel.
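The fan-out step looks roughly like the sketch below, assuming the worktrees already exist (directory names are illustrative). A harmless stand-in command keeps it runnable here; in the real workflow each background job would run something like `claude --dangerously-skip-permissions -p "implement the task per its spec"` inside its worktree.

```shell
# Sketch of launching the parallel phase: each task runs as a background
# job in its own directory, and `wait` blocks until all of them finish.
set -e
base=$(mktemp -d)
for task in task-a task-b; do
  mkdir -p "$base/$task"
  (
    cd "$base/$task"
    echo "done: $task" > result.txt   # stand-in for the agent's actual work
  ) &                                 # & starts each job in parallel
done
wait   # reunification can only begin once every parallel job has finished
cat "$base"/*/result.txt
```

The same pattern works with tmux panes or separate terminal windows; the only invariant is one working directory per agent and a synchronization point before reunification.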

Personally, I couldn't stay in front of my screen and approve each command because I had errands to run. So I launched it with --dangerously-skip-permissions (aka YOLO mode; use at your own risk). But by the time I was ready to head out, it was already done 😅.

When I got back, I launched the final reunification task, and apart from a few bugs related to a somewhat shady scraping API I'm using for the prototype, it worked perfectly.
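In git terms, the reunification step boils down to merging each task branch back into the base branch, one at a time. Here's a self-contained sketch; branch and file names are illustrative, and each branch gets a dummy commit to stand in for the finished work.

```shell
# Sketch of reunification: merge every task branch back into main,
# one branch at a time, so conflicts surface incrementally.
set -e
repo=$(mktemp -d)
cd "$repo"
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
git init -q -b main
git commit -q --allow-empty -m "base"

# Simulate one commit per finished parallel task (file names illustrative).
for task in task-a task-b; do
  git switch -q -c "$task" main
  echo "$task" > "$task.txt"
  git add "$task.txt"
  git commit -q -m "feat: $task"
done

# Reunification: fold each task branch into main sequentially.
git switch -q main
for task in task-a task-b; do
  git merge -q --no-edit "$task"
done
ls *.txt
```

Merging one branch at a time keeps each conflict small and attributable to a single task, which is exactly what you want after an unattended parallel run.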

So, to summarize, what I learned is:

  • Once again, it's all about upfront preparation.
  • Some tasks are necessarily sequential.
  • You need to write the specifications for ALL tasks before getting started.

Did you learn something? I hope so. I certainly did.