

Manpreet Kaur
December 18, 2025
5 min read

This year, we dove deep into all kinds of topics, from the philosophical shift toward “Slow AI” to the practical realities of building with increasingly sophisticated LLMs to why you shouldn’t trust 🚀-filled threads on vibe coding for code you intend to ship to prod.
Here’s a look back at our most impactful posts from the past year in case you missed them:

For years, developers could swap LLMs like interchangeable parts, but those days are over. This piece explores how modern AI models have diverged in fundamental ways, from reasoning approaches to output formats, making model choice a critical product decision rather than a simple configuration change. We break down what this means for developers and why the “one prompt fits all” era has ended.

Fast isn’t always the way to go. While AI coding tools promise lightning-speed development, this article makes the case for slowing down. We explore why AI tools that take time to reason through problems produce better, more maintainable code than those optimized purely for speed. Drawing on data from a number of studies, we examine the paradox of developer confidence versus actual trust in AI-generated code and why “Slow AI” might be an antidote to technical debt.

The title is clickbait (we admit it), but the question remains: how do you measure the impact of AI on your codebase? This post challenges the notion that “percentage of AI-generated code” is a meaningful metric. Instead, we explore what engineering teams should actually measure when evaluating AI’s role in their development process, and why focusing on the wrong metrics can lead to dangerous blind spots in code quality.

The Model Context Protocol (MCP) promised easy integration between LLMs and external tools. In reality, it created a context overload problem. This article tackles the issue of ballooning context windows and how to engineer your way out of them. We explore why MCP’s elegance can become a liability without deliberate context engineering and share strategies for keeping your AI tools sharp and focused rather than drowning in a black hole of data.

When Microsoft and Google both announced that AI generates 30% of their code, it became clear: we’re not talking about single tools anymore, we’re talking about stacks. This post explores the emerging ecosystem of layered AI dev tools across the software development lifecycle. From foundational coding assistants to essential code review layers, we map out what a modern AI dev tool stack looks like and share sample configurations teams are using.

👍 feels good, but is it teaching your AI reviewer anything? This article explores why emoji-based feedback, while universal, falls short at improving AI performance over time. We break down the simplicity trap and explain what kind of nuanced feedback actually builds better AI code reviews. Spoiler: it’s not as simple as a thumbs up or thumbs down.

“Vibe coding,” the practice of prompting AI tools with vibes and hoping for the best, is everywhere. And it’s creating technical debt at an unprecedented scale. What happens when developers rely heavily on AI assistants like Claude Code, ChatGPT, and GitHub Copilot without proper processes in place? We dive into the hidden costs of moving fast and breaking things when your entire codebase depends on it.

Twitter threads promising “10 vibe coding and review tips every dev should know” are everywhere. But here’s the truth: practical code review advice requires full context, nuance, and experience. This blog questions the idea that code review wisdom can be distilled into a tweet, covering everything from fresh eyes to AI-assisted review layers that understand your specific context.

Ever wish your code reviewer could channel Gordon Ramsay? Or maybe your disappointed mom? We talk about CodeRabbit’s tone customization feature, which lets you adjust how your AI code reviewer communicates, from encouraging and gentle to brutally honest. We dive into why tone matters in code review (especially when dealing with AI-generated code), share setup instructions, and celebrate the creative ways developers are customizing their review experience.

Open source is the foundation of modern software development, from package managers to frameworks to the infrastructure we all depend on. This post announced CodeRabbit’s $1 million USD commitment to open-source software sponsorships, reflecting our gratitude for what open source enables and our ongoing support for the developers and projects that power the ecosystem we all build on.
Each of these blogs represents a piece of the larger conversation about how AI is reshaping software development. We hope these insights help you ship better code, refine your AI development setup, tackle context engineering challenges, or simply avoid the technical debt that comes with vibe coding.
Try out CodeRabbit today with a 14-day free trial.