


54% acceptance rate on critical issues
32.8 weeks of reviewer time saved
Reduced manual code-review burden
Improved code quality
freee is a Japan-based software company whose mission is to “Make small businesses the hero of the world.” The company provides an integrated business management platform that includes products such as freee Accounting, HR/Payroll, and electronic contracts, helping sole proprietors and small businesses run their operations more easily. As freee expanded its use of AI in software development, the company faced a new challenge. More code was being produced, more pull requests were being opened, and human review risked becoming the limiting factor. To help development teams keep pace without sacrificing quality, freee adopted CodeRabbit.
freee has a large, distributed development organization, with hundreds of engineers working across products in small Scrum teams throughout Japan. As the company invested more heavily in AI, including coding agents and other AI-assisted development tools, engineers began generating more pull requests. That increased output put new pressure on code review. “As the bottleneck of reviews was becoming a reality, the challenge was how to reduce the time spent on reviews,” Software Engineer Yuhei Nakayama said. The issue was not necessarily that pull requests were already stacking up in a visible crisis. Rather, freee could see where things were heading. If engineers were going to keep producing more code with AI, the review process had to become more efficient, too. Otherwise, delivery speed would slow at the exact moment coding speed was accelerating.
CodeRabbit first came to freee through personal use. Software Engineer Jeong Jaesoon had used the free version for solo development and saw that it offered something missing from other review tools already in the mix internally.
“The other tool that was already implemented only presented reviews one-sidedly, and I felt it was lacking in that there was no way to have a dialogue or properly configure settings like ‘I want you to review from this perspective,’” Jeong said. “Through using CodeRabbit in personal development, I realized that the usability in those areas was very good, and I thought it would match our internal challenges, so I proposed a comparative verification.”
When freee compared CodeRabbit with alternatives, review accuracy stood out as a major differentiator in favor of CodeRabbit. freee also valued CodeRabbit’s operational fit. The company appreciated the ability to add and manage custom coding rules through Learnings, as well as the dashboard and chat-based workflows.

That accuracy shows up not just anecdotally, but in the results freee saw in practice. Across freee’s use of CodeRabbit, the platform achieved an overall acceptance rate of 48.5%, meaning nearly half of CodeRabbit’s suggestions were accepted by developers.
The impact also cut across multiple review categories; on critical issues alone, the acceptance rate reached 54%. Together, those numbers suggest that CodeRabbit was not only catching superficial issues, but contributing across the range of concerns engineering teams have to manage in production systems.
Unlike usage-based tools, CodeRabbit offers a flat-rate subscription model, which made budgeting far easier.
“With infrastructure like AWS, if usage spikes, you can explain it as ‘access increased and revenue went up.’ However, with development tools, usage fees don't necessarily correlate with productivity. For development tool budgeting where flat-rate pricing was traditionally the norm, predictable costs were a very significant reassurance,” Yuhei said.
freee began with about 30 users, tested CodeRabbit for roughly a month, and then expanded the rollout to 570 seats, to “everyone who writes code,” Yuhei said. Today, the platform is active across 285 repositories, giving the company broad review coverage across its engineering environment.
At scale, the benefits of CodeRabbit have been tangible. CodeRabbit has saved reviewers the equivalent of 32.8 weeks of reviewer time in the last six months, helping teams absorb higher pull request volume without asking humans to shoulder the full burden alone. That matters because freee does not see AI review as a replacement for human judgment. Instead, the company uses CodeRabbit to create a better division of labor between AI and people. Developers are encouraged to work through CodeRabbit’s suggestions before requesting human review, allowing AI to catch basic issues first and freeing human reviewers to focus on higher-order concerns like architecture, intent, and design.
On an internal survey, the average satisfaction score was four out of five; about 84% of freee’s engineers said they were satisfied with CodeRabbit, and roughly 70% said CodeRabbit had reduced their review burden. For junior developers, that means greater confidence before handing off work for review. For senior developers, it means less time spent on routine comments and more time spent on the issues that really require human judgment. “For junior members, there's a sense of security that ‘I can self-check before sending a review request,’ and for senior members, they can trust that ‘AI has already completed basic checks,’ creating positive effects for both sides,” Jeong said.
freee also highlighted CodeRabbit’s walkthroughs and summaries as especially helpful in understanding pull requests faster. And there was another benefit that mattered in everyday team life. CodeRabbit helped remove some of the interpersonal friction around small corrections that can be awkward for humans to point out repeatedly.
Before CodeRabbit vs. After CodeRabbit (comparison)
For freee, adopting AI-assisted development increased output, but it also exposed the limits of a review process that depended too heavily on human bandwidth alone. CodeRabbit helped the company respond by improving review quality, reducing burden on engineers, and saving significant reviewer time across a large development footprint.
More broadly, freee sees AI review as essential infrastructure for modern software development. “I think it's one of the absolutely necessary parts in the AI coding era,” Jeong said of CodeRabbit. “Both having AI create things and having AI do the work is important, but from the perspective of how to ensure quality, I think we can't consider it without review AI as a set, or it becomes a bottleneck.”
With active use across 285 repositories, nearly half of suggestions accepted, and the equivalent of 32.8 weeks of reviewer time saved, CodeRabbit has become an important part of how freee keeps development moving without letting code review slow it down.

Tokyo, Japan
https://corp.freee.co.jp/en/
1,000+ employees
Shell, Ruby, JavaScript, TypeScript
Code generation scaled faster than review capacity.