How to Use an AI Code Reviewer on GitHub in 4 Examples
Nobody prepares you for the hard work of being an OSS project maintainer. When you’re just getting started, it’s exciting to get the word out and generate hype around your project. You start earning followers and stars on your repo, people are using your software, and momentum begins to build.
But over time, things can get unwieldy. More and more people are using your code, which is great, but expectations increase. You have to start thinking differently about your work. There’s mounting pressure for new features, better performance, bug fixes, and (most importantly) security.
What started as a hobby can quickly become a substantial responsibility.
Maintainers of popular projects are turning to AI for help, particularly for code reviews. An AI can quickly scan through pull requests for errors and security flaws, providing some breathing room to project maintainers. It’s especially helpful for contributor PRs, acting as a proactive line of defense for maintainers’ time.
Let’s review some of the most effective use cases we see emerging on some notable repos.
High-level PR summaries and expert code walkthroughs in an instant
The more popular the OSS project, the more difficult it is to track what’s actually happening with code contributions. Maintainers often need a brief summary of what’s going on with a new PR and the code changes it introduces.
Let’s look at a PR summary for Vitwit, an AI and blockchain company based in Hyderabad, India. The PR, opened in January 2024, closes a feature request for new wallet-switching functionality. As you read through the following, think about how much time and energy are saved for the maintainer: fewer clicks, lower cognitive load, and expert commentary that considers the context of the entire codebase.
At a brief glance, you can see the PR contains two code commits with changes across six files. Instead of clicking through the six files and reviewing code changes, there’s a quick summary of the new features and enhancements being introduced.
Further down, we can see an automated comment with a technical walkthrough of the changes. The AI explains the feature’s purpose, the required data integration, and compatibility concerns — all in plain English in a short, easy-to-read paragraph.
File-level changes are summarized in a short table, followed by another table that shows whether the code changes actually address existing software requirements. The maintainer doesn’t need to verify this manually; everything is spelled out in a way that can be visually skimmed for completeness.
Compiling and checking all of this information would take an expert maintainer at least half an hour. With an AI, they get it in an instant. The review is contextually aware, drawing on the code, the PR, all open issues, stylistic expectations, and whether there’s sufficient documentation. The maintainer can focus on the risk of merging the change and home in on where improvements can be made.
Collaborative, interactive PR code reviews and issue tracking in real time
Developer Kevin Mesiab built an interactive, SMS-based nudging feature in his Equilibria engine. For this pull request, he went beyond the PR summary and code walkthrough for some interactive insights and assistance.
The AI made a recommendation to add logging for database connectivity in the PingDatabase function. You can see the suggested change in the “Committable suggestion” section below.
Mesiab responds in a chat by saying logging is already handled in a different way outside of the function. The AI accepts the feedback and retains the knowledge for the future. (Imagine if every dev took feedback so kindly!)
Next, the AI discovers some potential issues in the existing GET-based implementation. It offers thorough feedback and a suggested alternative. Mesiab tells the AI to file the suggestion as an issue, which it does, and then provides the link.
Given that many developers are strapped for time, it’s unlikely they’d offer analysis this thorough, complete with alternative implementation code. And if they did, it would take much, much longer than the AI’s instantaneous response. Not only is Mesiab saving his own time, but he’s getting far more bang for the buck out of his code-reviewing peer. Development time isn’t cheap, so this brief interaction saves both time and money.
Open-minded, long-term learning
Artemis is a popular interactive learning app with individualized feedback for learning reinforcement. Written and supported by the Technical University of Munich, Artemis has many contributors, meaning a larger codebase with more moving parts.
In this chat-based interaction, we can see a suggestion from the AI overruled by the maintainer: “we want to get rid of the star imports.” The AI’s response: “Understood, I’ll remember this preference for explicit imports over wildcard imports in the Artemis project for future reviews.”
Then the AI shares what it’s learned and what it will update about its previous learnings:
Not only can you see what the AI has learned to do (and not do) in the future, but you can clearly see that learnings can be tracked to individual PRs. Those learnings can be corrected with a simple chat-based suggestion. For example, you can say, “Please don’t use explicit imports anymore. We are switching back to wildcard imports.”
Easily maintain project standards and requirements
OpenReplay is a self-hosted browser session replay and analytics tool that helps devs reproduce issues with real-world customer interaction data. It’s a popular repo with more than 8,800 stars. In this pull request, there are new features, a few areas of refactoring, and the removal of outdated code, all summarized by the AI.
In particular, we want to highlight the “codebase verification” feature that happens near the end of the PR.
The AI detects a reference to an old method (GetHandler) and finds that “not all references to the method were updated following its renaming to bGetHandler in the Router struct.” Perhaps this updated function name was a typo that needed correction, or perhaps it was an intentional renaming that wasn’t consistently applied. Either way, a breaking change could have been introduced into the codebase, and the AI caught it.
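To see why stale references matter, here’s a hedged sketch of the rename scenario. The Router struct and bGetHandler name come from the PR’s quoted comment, but the fields, signature, and call site are invented for illustration.

```go
package main

import "fmt"

// Router is a stand-in for the struct mentioned in the PR; its fields
// here are hypothetical.
type Router struct {
	routes map[string]string
}

// bGetHandler reflects the rename the AI flagged (formerly GetHandler).
// Every former call site must be updated to match.
func (r *Router) bGetHandler(path string) (string, bool) {
	h, ok := r.routes[path]
	return h, ok
}

func main() {
	r := &Router{routes: map[string]string{"/health": "healthHandler"}}

	// A call site still written as r.GetHandler(...) would no longer compile
	// in Go; in reflective or string-dispatched code, the same mistake would
	// surface only at runtime, which is why an automated consistency check helps.
	h, ok := r.bGetHandler("/health")
	fmt.Println(h, ok) // healthHandler true
}
```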
Impact of AI on OSS project maintenance
With an AI code reviewer, maintainers have more help than ever in keeping a clean, consistent, and functional codebase. Looking through the examples above, we can clearly see how an AI can assist developers and maintainers with summaries, walkthroughs, interactive code reviews, and consistency. We can also see how much work can be done before a PR ever gets to a maintainer.
AI code reviewers can make a huge difference for open source projects by:
- Identifying errors
- Enforcing coding standards
- Spotting security risks
- Explaining code changes
- Telling maintainers where to focus
When it comes down to it, we’re talking about time management and expertise. As OSS projects expand, there’s more to manage in terms of contributions and complexity. Plus, with AI handling routine and repetitive tasks, project maintainers can allocate human resources to more strategic work such as feature development, bug fixes, and community engagement. It’s an effective strategy for any project aiming for more efficient use of volunteer time and potentially faster project development.
How to use an AI code reviewer in your OSS projects
There are a variety of AI code reviewers available in the GitHub Marketplace, many of which are completely free to use. To implement one in your open source project, follow these steps:
- Explore options: Look for tools on the GitHub Marketplace that best meet the specific needs of your project, such as language support, customization options, and integration capabilities.
- Install and configure: Select an AI tool and install it to your repository (some require only a couple of clicks). Configure the tool according to your project’s coding standards and review processes. This may include setting up rules for code style, defining error checks, and specifying security protocols.
- Integrate into workflow: This might be the most difficult step if you’ve already got a steady workflow going in your project. Consider having the tool automatically review all new pull requests, or configure it to provide periodic codebase scans. Ensure that all contributors understand how to interact with the AI and what to expect from its reviews. (Update your README.)
- Monitor and tweak: As you begin to use the AI tool, monitor its performance and feedback for effectiveness and accuracy. Be open to tweaking its settings and rules based on real-world use to better fit your project’s needs.
- Educate your team: Educate your team and contributors on how to make the most out of the AI code reviews. This includes understanding how to interpret the AI’s feedback, how to make corrections based on its suggestions, and how to override the AI when necessary.
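Most of these tools are configured through a dotfile in the repository root. The fragment below is purely illustrative: the key names are assumptions, not any specific tool’s documented schema, so check your chosen reviewer’s docs for the real options.

```yaml
# Hypothetical AI-reviewer configuration -- key names are illustrative only.
reviews:
  auto_review:
    enabled: true            # review every new pull request automatically
  path_instructions:
    - path: "src/**/*.go"
      instructions: "Prefer explicit error handling; flag ignored errors."
chat:
  auto_reply: true           # let contributors ask follow-up questions in PR comments
```

Committing the configuration to the repo keeps review rules versioned and visible to contributors, the same way a linter config would be.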
With AI code reviewers, project maintainers can significantly reduce the manual burden of code checks and ensure higher standards of quality and security. It saves valuable time while enhancing the overall development process. In the end, you’re making open source projects more robust and reliable.
Want to get started with a completely free AI code reviewer? Try CodeRabbit. Learn more at CodeRabbit.ai.