


Consistent enforcement layer across AI and human-written code
Automatically enforces internal markdown policies
Catches subtle, high-impact issues humans miss
High signal-to-noise ratio
Abnormal AI is a behavioral cybersecurity company that uses AI to understand what’s normal across organizations so it can detect suspicious behavior patterns that traditional, signature-based approaches can miss. That mission puts a premium on accuracy, security, and consistency in the codebase.
Abnormal AI has leaned into an AI-native development model, where tools like Cursor and background agents can generate working code at unprecedented speed. But that speed introduced a new set of problems: inconsistent development practices across teams, subtle bugs slipping past fast-moving reviews, and a review process that quickly becomes the bottleneck as implementation is increasingly delegated to agents.
Abnormal AI adopted CodeRabbit early in their transformation to an AI-native SDLC, using it as a constant guardrail through each stage of the transition: from AI-assisted coding to increasingly autonomous, agent-driven workflows.
Shrivu Shankar, VP of AI Strategy at Abnormal AI, works across product and engineering, an organization of about 250 engineers, to determine how AI should be applied throughout the company’s systems. As Abnormal AI embraced what it calls an “AI-native playbook,” the team began generating more code via AI tools, both interactive (like Cursor) and autonomous agents running in the background.
For any company becoming AI-native, that acceleration is powerful. AI-native tooling lets teams produce more code changes much faster, but the “penalty” for getting something wrong doesn’t shrink. A subtle bug, a security issue, or a misaligned implementation still costs real time to diagnose, fix, retest, and potentially respond to. So, when output accelerates, risk can scale too unless stronger guardrails are added.
In Abnormal AI’s case, AI adoption wasn’t uniform. Some engineers were all-in on AI-assisted workflows; others still wrote most of their code by hand. Abnormal AI needed a consistent standard across both without slowing teams down or forcing every engineer into the same tooling preferences.
As Shrivu put it, “CodeRabbit provided a consistent enforcement layer both for AI-generated code and for engineers still writing code manually.” Meanwhile, human review remained non-negotiable for compliance and structural reasons: “A human still reviews every code change that hits our main repo,” Shrivu noted.
That set up a classic AI-native bottleneck. If agents can generate changes in parallel, but humans remain the bottleneck for review, you need a way to increase review throughput and quality without adding noise or creating yet another tool engineers ignore.
Abnormal AI’s engineering workflow evolved significantly beyond traditional agile sprints. Instead of allocating tickets in standups, teams write detailed specs collaboratively and delegate those specs to agents that implement changes in parallel. In practice, the model shifts human attention upstream and downstream:
Upstream: Humans focus on writing clear intent and constraints (the spec).
Midstream: Agents handle implementation, iteration, and testing.
Downstream: Humans review the code and the evidence that it works.
Shrivu described the spectrum of autonomy this way: “The bare minimum is using a Cursor agent to implement changes. At the highest level, you’re completely delegating across a suite of background agents. You’re writing the spec, you're not touching the code at all.”

This kind of system is only sustainable with strong guardrails. Abnormal AI layers multiple safeguards into its workflow, including:
Static analyzers and linters (e.g., Ruff, MyPy)
Custom internal lint rules
Claude Code sub-agents for self-review against internal markdown files
“Review by proof,” where agents provide screenshots or other validation artifacts demonstrating that changes were tested

Even with layers of safeguards, the real pressure point is still the pull request, where high-velocity, agent-generated changes must be validated before they can ship. That’s where CodeRabbit becomes the enforcement layer. CodeRabbit reviews the resulting pull request before merge, raising issues that would otherwise land on the already-saturated human reviewer layer.
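The first safeguards in that list, static analyzers and linters, are standard pre-merge automation. As an illustration only (the case study names Ruff and MyPy but does not describe Abnormal AI’s actual CI setup; the workflow name, file layout, and job structure below are hypothetical), a minimal GitHub Actions workflow running those checks on every pull request might look like:

```yaml
# Hypothetical pre-merge check workflow. Only the tool names (Ruff,
# MyPy) come from the case study; everything else is illustrative.
name: pre-merge-checks
on: [pull_request]

jobs:
  lint-and-typecheck:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install ruff mypy
      - run: ruff check .   # fast style and correctness lints
      - run: mypy .         # static type checking
```

Gates like these catch the mechanical issues cheaply, leaving semantic review of each pull request to CodeRabbit and the human reviewer.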

As a cybersecurity company, Abnormal AI maintained high procurement and compliance standards for any AI tool touching engineering workflows. “As a security company, we needed a mature solution for procurement, one with strong compliance, a privacy policy, and a trust center,” Shrivu noted. Among the tools the team evaluated, CodeRabbit met their key procurement requirements, including compliance documentation, privacy terms, and vendor maturity.
CodeRabbit is not just generating commentary; it is surfacing high-impact issues that engineers act on. Across Abnormal AI pull requests, CodeRabbit's acceptance rate for critical-severity comments is above 65%. For Shrivu, this is the clearest signal that the tool is catching meaningful problems (not noise), especially as code generation scales via agents.
“One of the biggest factors in our decision was how many issues CodeRabbit surfaced that could have realistically been missed in a purely human review,” stated Shrivu. One example stood out in an internal chatbot project, something that was still relatively new and specialized at the time.
“I was impressed that it identified something both technical and relatively new. I would have been surprised if anyone internally had flagged that my code contained a prompt injection vulnerability,” shared Shrivu. This wasn’t simple linting; it required semantic reasoning about AI interaction patterns, a class of issues unlikely to be caught by static rules alone.
In the last 30 days alone, CodeRabbit has saved an estimated 100 hours of reviewer time, increasing review throughput without compromising human validation. And that captures only part of the savings Abnormal sees, since the team also avoids time previously spent on revisions and fixes. Abnormal is confident enough in CodeRabbit’s impact that, were it removed, the team expects measurable downstream effects.
“If there were metrics for maybe the number of revisions later to a piece of code, that would spike,” Shrivu explained. CodeRabbit prevents subtle issues from lying dormant for weeks. “We’d likely uncover a week or two later that an edge case wasn’t handled correctly, resulting in additional revisions and code churn,” noted Shrivu.
Abnormal AI documents everything: architecture, rules, and best practices are captured extensively in Markdown files. “We have an unusually high density of markdown documentation relative to code,” Shrivu explained. Engineers are trained to think less about writing code and more about giving AI the right context.
CodeRabbit automatically pulls in these markdown files, increasing the precision and relevance of its feedback. “Because CodeRabbit pulls in that context automatically, its feedback is much more effective,” Shrivu noted. “We don’t maintain any CodeRabbit-specific configuration. It simply ingests those markdown files directly.” CodeRabbit turns documentation into executable guardrails, policy enforced at review time.
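As a hypothetical sketch of how documentation becomes a guardrail (the file name and every rule below are illustrative, not Abnormal AI’s actual docs), an internal Markdown policy file that a reviewer like CodeRabbit could ingest might look like:

```markdown
<!-- docs/review-guidelines.md (hypothetical example) -->
# Service Code Guidelines

## Error handling
- Every external API call must set an explicit timeout.
- Never log raw request bodies; they may contain customer data.

## AI interactions
- Sanitize user-supplied text before interpolating it into LLM
  prompts (prompt-injection risk).
```

Because rules like these live in plain Markdown rather than tool-specific configuration, the same files that teach engineers and agents also drive automated review feedback.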
That performance translates into measurable impact across high-risk domains, with acceptance rates of 54% for functional correctness, 52% for data integrity and integration, and over 40% for security and privacy.
An unexpected signal of CodeRabbit’s effectiveness came from an engineer who was “not typically as into AI as other engineers.” He suddenly began advocating for paying closer attention to CodeRabbit’s feedback. “He showed me examples of CodeRabbit pointing out non-optimal patterns and telling people to avoid certain things,” shared Shrivu.
For engineers, trust is built on signal quality. The consistency and relevance of CodeRabbit’s comments helped increase adoption organically.
Abnormal AI aims to become a fully AI-native cybersecurity company. That means AI won’t just help write code; it will increasingly plan, implement, deploy, and maintain systems, with humans focusing on intent, oversight, and validation. In that scenario, the question isn’t whether agents can generate code. They already can. The question is whether an organization can scale autonomy without scaling risk.

For Abnormal AI, CodeRabbit plays a foundational role in that equation, acting as the enforcement layer that helps teams move quickly, keep standards consistent, and catch the kinds of issues that become more likely as autonomy increases. As Shrivu summarized it: “CodeRabbit helps Claude write better code.”

San Francisco, United States
https://abnormal.ai/
250+
Go, Java, JavaScript
As Abnormal AI accelerated AI-generated code across 250 engineers, human review became a bottleneck, creating risk of subtle bugs, inconsistent standards, and scalability issues in their AI-native development model.