After two years of living with AI assistants in the editor, my work routines have settled enough to offer a more mature reflection than the one I could have written in 2023. Back then we were all discovering how to use Copilot, experimenting with prompts, and feeling a mix of fascination and skepticism. Today the experience is more sober and more practical, and some concrete habits have stuck while others haven’t.
This post isn’t a tool review but a personal assessment of how pair programming with AI has changed for me and for several teams I’ve worked with. It covers which habits have lasted and which have fallen away, in the hope that someone starting out can skip a few of the same mistakes.
What has changed day-to-day
The most important change is that the friction of writing boilerplate code has all but vanished. Anything repetitive (a CRUD, a standard integration test, a parser for a known format) gets delegated to the assistant and comes out reasonably well. This isn’t a secret, but its cumulative effect over two years is considerable: a certain kind of work that used to eat mornings now takes minutes.
Exploring unknown code has also changed. Entering a new codebase and asking the assistant “how does the authentication flow work here?” or “where is the product cache managed?” is much faster than grep-and-read. It doesn’t replace reading; it guides it: the AI points you to the right places, and you read the relevant parts with the right focus.
The third thing that has changed is how easy it is to try ideas. If I have an idea for how to solve a problem but I’m not sure about it, asking Claude or Cursor to show me what that approach would look like in my code is a five-minute exercise. Before, probing an alternative could eat an afternoon; now I try two or three before deciding. Design decisions come out better because comparisons are cheaper.
Habits that have stuck
Through trial and error, I’ve settled on a few habits that consistently pay off.
Writing brief context before asking for anything significant. Five or six lines explaining what I’m trying to achieve, what constraints exist, and what decisions are already made. It isn’t sophisticated prompt engineering; it’s simply not expecting the AI to guess.
Always reviewing generated code before committing. AI produces code that looks right but can carry subtle errors: poorly named variables, reversed conditions, calls to functions that don’t exist, imagined libraries. Reading it line by line is faster than debugging it later.
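A minimal sketch of the kind of subtle error I mean (all names here are hypothetical, not from any real generation): the function reads naturally, but the retry condition is reversed, and only a line-by-line read catches it.

```python
def should_retry_generated(status_code: int, attempts: int, max_attempts: int = 3) -> bool:
    # As "generated": looks plausible, but the comparison is backwards --
    # it only allows a retry once the attempt budget is already exhausted.
    return status_code >= 500 and attempts >= max_attempts


def should_retry_fixed(status_code: int, attempts: int, max_attempts: int = 3) -> bool:
    # After review: retry server errors while attempts remain.
    return status_code >= 500 and attempts < max_attempts


print(should_retry_generated(503, 0))  # False -- silently never retries
print(should_retry_fixed(503, 0))      # True
```

The buggy version compiles, type-checks, and passes any test that doesn’t exercise the retry path; that is exactly why reading before committing beats debugging after.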
Asking for tests at the same time as production code. If I ask for a new function, I also ask for a couple of tests covering it. They aren’t the definitive tests, but they verify the code does what it should. Developing and testing in the same prompt is more efficient than doing them in separate passes.
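A hypothetical illustration of that habit (the function and its tests are invented for this post): a small utility plus the two sanity checks I would ask for in the same prompt.

```python
import re


def slugify(title: str) -> str:
    """Lowercase, trim, and collapse runs of non-alphanumerics into hyphens."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.strip().lower())
    return slug.strip("-")


# Not a definitive suite -- just enough to verify the code does what it should.
def test_slugify_basic():
    assert slugify("Hello, World!") == "hello-world"


def test_slugify_edges():
    assert slugify("  --Already-Slugged--  ") == "already-slugged"


test_slugify_basic()
test_slugify_edges()
```

The point isn’t coverage; it’s that the generated code arrives with an executable claim about its own behavior, which is much cheaper to check than prose.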
Using “agent” mode with caution. Tools that run commands and modify files by themselves (Claude Code, Cursor Composer, Aider) are very useful when the task is well scoped. But telling the agent “fix all linter warnings” without supervising what it does can end in mass changes you didn’t intend.
Habits that haven’t stuck
There are things I tried for months and ended up abandoning.
Asking for full architectural designs. AI is good at helping you think through alternatives, but asking “design the architecture of this microservice” produces results that sound reasonable yet are neither better nor worse than what I’d do on my own. The decision requires knowing the team’s human context, the project’s history, and its implicit trade-offs, and there the AI adds no clear value.
Trusting explanations of other people’s code without verifying. When I ask the AI “what does this function do?”, the answer is usually right, but sometimes it invents behavior that doesn’t exist. For anything important, reading the code yourself is unavoidable.
Using the assistant as the sole source for learning new technologies. Early on I tried learning a library only through the AI, and the result was serious comprehension gaps. Today I use the official documentation as the first source and the assistant to speed up the applied part.
Trying to make the assistant understand the style of a whole project. With large repositories, current tools don’t yet have the capacity to maintain large-scale style coherence. Reviewing results and adapting manually remains necessary, especially in projects with internal conventions.
What has surprised me most
Three things have surprised me over this time.
The first is how my relationship with programming languages has changed. I used to have a clear preference for certain languages (Python, Go) because they were comfortable. Today, with the assistant, the friction of using a language I know less well (Rust, Kotlin, Elixir) is much lower. I can pick the right language for the problem instead of the language I’m fastest in. This feels like a substantive change.
The second surprise is how much context quality matters. I’ve seen many times that the same prompt, with the same model, produces very different results depending on which files the editor has open. Cursor, Zed, and Claude Code have invested heavily here, and it shows. Part of the difference between teams that get value from AI and teams that don’t comes down to how they manage context.
The third is that the initial enthusiasm has moderated. Two years ago I felt AI was about to change everything; today I feel it already has, to the extent it was going to in the short term. The next jumps (truly autonomous agents, complete code generated without human oversight) aren’t imminent. AI in the editor is a mature tool with clear benefits and clear limitations, and that maturity is healthy.
My recommendation for those starting now
If you’re adding AI to your coding workflow in 2025, my recommendation is short and concrete.
Pick one tool and use it for three months before judging. All the serious options (Copilot, Cursor, Claude Code, Zed with AI, Windsurf) are reasonably good; switching between them every week will keep you from developing effective habits.
Start with tasks where you can immediately verify the result: writing tests, generating standalone scripts, refactoring small functions. Leave the more ambitious cases for when your judgment is formed.
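A hypothetical example of that “immediately verifiable” category (names and the 21% VAT rate are invented for illustration): refactoring a small function where the old version doubles as the oracle for the new one.

```python
def total_price_old(items):
    # Original version: works, but the logic is buried in the loop.
    total = 0
    for item in items:
        if item["taxable"]:
            total += item["price"] * 1.21
        else:
            total += item["price"]
    return total


def total_price_new(items, vat=1.21):
    # Refactored version (the kind an assistant might suggest):
    # same behavior, clearer shape, VAT rate made explicit.
    return sum(i["price"] * (vat if i["taxable"] else 1) for i in items)


cart = [{"price": 10.0, "taxable": True}, {"price": 5.0, "taxable": False}]
# Verification is immediate: old and new must agree on any cart.
assert abs(total_price_old(cart) - total_price_new(cart)) < 1e-9
```

Because the check is a one-line comparison against the existing code, you can judge the assistant’s output on the spot, which is exactly what a beginner needs while calibrating trust.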
Always ask the assistant to explain what it did, even if the code seems obvious. That explanation reveals whether the assistant understood what you wanted, and teaches you new patterns.
And keep the habit of writing code without the assistant now and then. Not out of nostalgia, but because some skills (reasoning about concurrency, designing APIs, reading other people’s code) get rusty if you always delegate. AI is an amplifier, not a substitute.
How I think it will evolve
Short-term, improvements will come from two sides. One, assistants more deeply integrated across the full toolchain (IDE, terminal, git, CI/CD, observability) that can act with broader context; Claude Code and its derivatives are advancing in that direction. Two, better handling of large contexts: more effective use of the context window, better performance on large codebases, deeper integration with project search tools.
Medium-term, the focus will be on reliability: reducing subtle errors, improving the assistant’s ability to say “I don’t know” or “I need more information” instead of inventing an answer, and making generated code more maintainable. This is less spectacular than the 2023-2024 advances, but it’s where most of the work remains.
What seems unlikely short-term, despite what some startups promise, is the “autonomous developer” that writes and deploys software on its own. Small steps toward that destination are useful, but the full vision still requires advances that aren’t visible on the horizon of the coming months. For those who code, the message is reassuring: the tool will keep improving, but your role remains essential, and it’s worth investing in learning to work well with it.