TL;DR: After just over a year of AI coding evolution from copy-paste to spec-driven development, the biggest lesson is this: AI coders work best as partners, not autonomous generators. AI Buddy Coding - treating AI as a capable partner with complementary strengths - delivers better results than point-and-shoot approaches ever will. Success requires clear documentation, language-appropriate tool choice and constant oversight.
The Journey So Far
Late 2022: Copy-paste from ChatGPT into your IDE. Hope it compiles. Debug the inevitable problems. Code snippets at best. Repeat ad nauseam.

Late 2024: Integrated assistants like Cursor and Cline. Chat with AI inside your development environment - as long as it's VS Code. Watch it generate code in context. Still mostly vibe coding - conversational and exploratory.
Early 2025: Optimised tools like Kodu and Roo Code came out. Better context handling, multi-file editing and (sometimes) smarter suggestions. Started noticing that AI coding needs specs and guardrails to deliver half-decent results.
Mid 2025: There are now many AI coders - many are forks of earlier projects like Cline, some are new, and all struggle with exhausting the context window and losing attention. Spec-driven development with GitHub Spec Kit and Kiro comes into the fray. Start with specifications and let AI handle implementation - it actually works quite well when your specs are clear.
End 2025: I've spent nearly the whole year with AI coding, spent considerable time with eight different coding assistants, written huge amounts of code successfully and changed my approach to programming entirely. Spec is everything; document organisation is essential. Oh yes, and I've become completely programming-language agnostic: I now choose the best language for the requirement, not just the languages I know, and that's truly liberating.
The overall trajectory looks like progress toward autonomous coding, but that’s not what’s actually happening.
The Hallucination Problem Nobody Fixed
AI coding tools still hallucinate spectacularly when your documentation isn't perfect. They also struggle to maintain attention - they tend to forget earlier instructions, and that causes much frustration. The answer seems to lie in clear documentation and short, focussed tasks.

Give them unclear specifications and they invent plausible-sounding solutions. Leave gaps in your documentation and they fill those gaps with assumptions. Create duplications and they pick inconsistent interpretations.
Spec-driven tools don’t solve this - they just make the problem more systematic. Garbage specifications still produce garbage code, just more consistently.
During Orange Octopus development, I discovered that documentation quality mattered more than tool sophistication. Clear documentation with index.md navigation, 500-line chunking and Single Source of Truth enforcement made AI coding productive. Vague documentation made even the best AI tools generate problematic code.
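To make the chunking and index discipline concrete, here is a minimal sketch of a check you could run over a docs folder. It assumes the conventions described above - an `index.md` that links every file and a 500-line chunk limit - but the script itself, its `check_docs` function and the `docs/` layout are hypothetical illustrations, not part of Orange Octopus:

```python
from pathlib import Path

MAX_LINES = 500  # the chunk-size limit described above


def check_docs(docs_dir: str) -> list[str]:
    """Return a list of problems: files missing from index.md, and oversized chunks."""
    root = Path(docs_dir)
    index_text = (root / "index.md").read_text(encoding="utf-8")
    problems = []
    for md in sorted(root.rglob("*.md")):
        if md.name == "index.md":
            continue
        rel = md.relative_to(root).as_posix()
        # Every chunk should be reachable from the index navigation.
        if rel not in index_text:
            problems.append(f"{rel}: not linked from index.md")
        # Every chunk should stay within the line limit.
        lines = md.read_text(encoding="utf-8").count("\n") + 1
        if lines > MAX_LINES:
            problems.append(f"{rel}: {lines} lines exceeds the {MAX_LINES}-line limit")
    return problems
```

A check like this runs in seconds in CI, so documentation drift gets caught before the AI consumes it.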
The tools evolved. But the fundamental requirement for explicit, unambiguous documentation didn’t change.
The Language Lottery
Not all AI coders handle all languages equally well. Some AI backends excel at Java. Others handle C better. Most are brilliant at JavaScript. All of them are competent at Python. This matters more than tool features. Using the wrong AI backend for your language stack means constantly fighting against biased training data. Using the right one means AI suggestions align with language idioms.

I've watched teams struggle with AI-generated C code that looked syntactically correct but violated memory management practices. The problem wasn't the team - it was using an AI backend trained primarily on garbage-collected languages, where memory management isn't the developer's job. In C, it is. Choose your AI coding assistant based on what languages you're actually writing, not which tool has the most impressive demo.
The Point-and-Shoot Myth
Here's the dangerous assumption: that AI coding tools will eventually reach "point-and-shoot" capability. Describe what you want, press go and receive working production code. It's not happening - not this year, probably not next year either.

What is happening: AI coding tools are becoming exceptional partners for developers who understand what they're building.
Point-and-shoot assumes AI can understand requirements, make architectural decisions and generate production-ready code autonomously. The reality is AI excels at translation - turning clear specifications into working implementations - but struggles with invention.
This isn’t a limitation we’re about to overcome. It’s fundamental to how these tools work.
Introducing AI Buddy Coding
Let's call this what it actually is: AI Buddy Coding. You're not delegating development to AI. You're pair programming with an AI partner that has different strengths than a human partner would.

Your AI buddy excels at:
- Generating boilerplate quickly
- Translating clear specifications into code
- Remembering syntax and API details
- Maintaining consistency across implementations
- Writing comprehensive tests from examples
Your AI buddy struggles with:
- Understanding vague requirements
- Making architectural decisions
- Evaluating security implications
- Optimising for performance
- Knowing when to question specifications
What AI Buddy Coding Looks Like in Practice
You: Write clear specifications with explicit requirements and architectural decisions
AI: Translates specifications into initial implementations

You: Review the implementation for correctness, security and performance. Question assumptions. Catch edge cases

AI: Refines implementation based on your feedback. Generates tests. Maintains consistency

You: Verify tests are comprehensive. Check integration with existing code. Ensure nothing broke

AI: Updates documentation to reflect implementation decisions

You: Review documentation for completeness and accuracy
This isn’t slower than point-and-shoot would be (if it worked). It’s faster than solo development and it produces better code than either human or AI working alone.
The Oversight Requirement
AI Buddy Coding requires constant oversight. Not micromanagement, but partnership. You're not checking every line of code; you're ensuring the implementation matches your architectural intent, handles security correctly, performs adequately and integrates properly.

During Orange Octopus development, I measured this: systematic oversight through AI Buddy Coding was at least 2x faster than solo development, and significantly more deterministic - things got done in the expected timeframe. Point-and-shoot attempts (when I got lazy and just accepted AI suggestions) created technical debt that cost more time to fix than I saved. The oversight isn't overhead; it's the partnership that makes AI coding productive.
What This Means Going Forward
The AI coding tools will keep improving. Better context handling. Smarter suggestions. More sophisticated spec-driven capabilities. But the fundamental dynamic won't change: AI as translation partner, not autonomous developer.

Organisations adopting AI coding tools need to structure for AI Buddy Coding:

- Clear documentation and specification practices
- Review protocols that catch AI limitations early
- Training on effective partnership with AI tools
- Governance frameworks that maintain quality
The Lanboss Perspective
At Lanboss AI, we help development teams implement AI Buddy Coding effectively. That means:

- Documentation architecture that AI can reliably consume
- Specification standards that translate consistently
- Review protocols appropriate to AI partnership
- Governance frameworks that maintain quality without slowing development
- Team training on productive AI Buddy Coding practices
