Let me jot down how I’m using AI as of March 2026. This is a fast-moving space, so it might be fun to look back on later.

Coding

I use Claude Code, Codex, and GitHub Copilot CLI without any strong preference. All of them are on paid plans or the equivalent. There are small differences between them, but all produce code of satisfactory quality. My customization is minimal — just a few allow/deny-tool rules, some MCP settings, and a handful of custom slash commands.
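For a sense of what those allow/deny rules look like: in Claude Code they live in a settings file such as `.claude/settings.json`. The specific patterns below are illustrative examples, not my actual configuration — just a minimal sketch of the mechanism:

```json
{
  "permissions": {
    "allow": [
      "Bash(git diff:*)",
      "Bash(npm run test:*)"
    ],
    "deny": [
      "Bash(rm -rf:*)",
      "Read(./.env)"
    ]
  }
}
```

The idea is simply to pre-approve the read-only or low-risk commands the agent runs constantly, and to block the few things that should never happen without a human in the loop.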

When I write code, I start in plan mode and draft a plan. If there are open questions, the coding agent now asks me interactively (a hybrid of multiple-choice and freeform input), and I work through those answers until a solid document takes shape. At this stage I also lock in the final output up front. For example, if I want a pull request, I include that instruction; if I want a project management tool updated with a status or comment, I include that too. For tests, I specify the expected coverage and ask the agent to watch CI until it passes. Once that's all settled, I let it run in YOLO mode (also called all-allow or autopilot) and finish everything in one shot. I don't bother with sub-agents or agent teams. I leave the "how" of executing the task to the coding agent, while I as a human focus on defining the spec. That's enough to get things working.

After that, depending on where the PR is being submitted, I sometimes spend a fair amount of time on presentation. Things like whether the commit granularity and messages are appropriate, or whether the code feels consistent with what came before — the surrounding context. Sometimes I set the coding agent's output aside and rewrite from scratch. Even if AI wrote it, the author is still me, the same as always. That part hasn't changed.

Code Review

I’ve built a custom slash command that reviews all pending review requests in bulk. I run it about once a day and use the output as a reference. It’s strictly a reference — the Approve/Reject call is still made by a human. For code I’m familiar with, I continue to review it myself. For unfamiliar or large-scale changes, I throw them at the coding agent first, then deepen my understanding through repeated questions before diving in. Honestly, this is an area where I’m not getting much benefit from coding agents yet. When I review code, I’m not just looking at whether the code is good or bad; I’m also thinking about things like:

  • Will this conflict with other ongoing changes?
  • Can this be deployed safely?
  • Will this affect the workload of related users or teams?
  • Are the right people included as reviewers?
  • What level of completeness should we be aiming for?

There’s a lot going on, and I can’t hand that off to a coding agent unconditionally. Code review is built on long-term trust, and AI isn’t there yet, so the human Approve step remains and doesn’t scale. I suppose once I can articulate the decision criteria clearly enough, the right tooling will follow naturally, and gradually it will move out of human hands.
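For what it's worth, in Claude Code a custom slash command is just a Markdown prompt file under `.claude/commands/`. My actual command is more involved, but a stripped-down hypothetical version of a bulk-review command (the file name and queries here are illustrative) could look like:

```markdown
<!-- .claude/commands/review-queue.md (hypothetical sketch) -->
List every open pull request where my review is requested:

    gh search prs --review-requested=@me --state=open --limit 30

For each one, fetch the diff with `gh pr diff`, summarize the change,
and flag anything risky. Output a table I can skim once a day.
Do not approve or request changes on anything; that call stays with me.
```

The last line matters: the command is deliberately read-only, so the human Approve step described above stays in human hands.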

Documentation

This is an area humans find a bit tedious, yet it’s a great fit for coding agents, so I use them heavily here. The load has dropped considerably. Broad, consistent editing that spans both the codebase and the docs as a whole is where AI really shines. Deployment risk is low and recovery is easy, so it’s a natural fit.

Troubleshooting

Small tasks — handing over an error message and the relevant repository and asking it to pinpoint the cause — are an excellent fit. However, starting from something like “hmm, it fails occasionally” and iterating through hypotheses and investigations still isn’t working. This kind of work is interrupt-driven by nature, often urgent, and puts a high load on humans in every way, mentally and physically. It’s an area where introducing AI could have enormous value. Given a sufficient set of read-only tools, a small set of safe write tools, and a highly autonomous agent, AI has infinite stamina and should be able to handle it well.

Routine Work

Rather than pushing to “adopt AI,” I think the better approach is to use AI day-to-day to advance automation. For example, instead of having AI write runbooks for daily tasks, I use AI’s power to build the automation that makes runbooks unnecessary.

Document Creation

I thought AI would be a great fit here, but it turns out it’s not, at least not for me. Most of the materials I use day-to-day are things meant to spark discussion or align direction with someone, and even if AI writes them, they don’t hold up in the discussion that follows — so I can’t use them as-is. The way it works is: there’s a document, everyone reads it and forms a rough understanding, and then the real conversation begins — which means my own words need to be in there.

Conclusion

I think of AI as one more powerful tool to use to the fullest. I don’t really feel like my job is being taken away, and I don’t think AI will replace humans.

It has vastly superhuman knowledge and reasoning ability, yet it’s passive and has no long-term plan of its own. It feels more like the number of highly capable robots has increased. As of today, it still seems like the phase where humans hold the reins.

There’s also no learning feedback loop the way there is between humans, so when working as a team, the knowledge and experience that human team members would normally accumulate ends up missing. Without that, things could get difficult in the long run. Even if you think something through, build it, and ship it together with AI, it still feels like solo development (because you’re just using AI as a tool). I suppose the casual retrospective conversations — “that was tricky, wasn’t it? Between you and me, that part was actually…” — are what really matter.

I wonder what things will look like in a few years. That’s all.