Cursor’s AI agent for code reviews exits beta

Cursor’s Bugbot AI agent is officially out of beta, promising relief for developers caught in the time-consuming loop of code reviews.

Bugbot gets to work right inside a developer’s workflow, automatically analysing code changes submitted in pull requests (PRs). It’s designed to act as a digital safety net, hunting for logic bugs, tricky edge cases, and security issues before the code ever reaches production.

Cursor says that it initially built Bugbot for its own use, and the tool quickly became a core part of the company’s development process.

For its reviews, Bugbot uses top AI models paired with Cursor’s own custom techniques to understand what a piece of code is meant to do. This focus on intent helps it find meaningful bugs while keeping the “noise” from false positives low.

Teams can even guide the bot with custom rules in a BUGBOT.md file, teaching it about the specific quirks and requirements of their codebase.
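The article doesn’t reproduce the file’s format, but Bugbot’s rules are written as plain-language guidelines in a markdown file. As an illustrative sketch (the rules below are hypothetical examples, not taken from Cursor’s documentation), a team’s BUGBOT.md might look something like this:

    # Review guidelines for this repository

    ## Security
    - Flag any SQL query built by string concatenation; we only use parameterised queries.
    - Every new API route must validate the session token before touching the database.

    ## Project conventions
    - All timestamps are stored in UTC; flag any use of local time.
    - Skip formatting nits; the CI linter already enforces style.

Because the rules are ordinary prose rather than a rigid configuration syntax, teams can encode institutional knowledge, the kind of context a new human reviewer would otherwise need, directly into the review process.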

During its beta period, Bugbot reviewed over one million pull requests and flagged more than 1.5 million potential issues. More importantly, developers found the feedback valuable, with over 50% of the identified bugs being fixed before the code was merged.

Leaders from prominent tech companies that used the Cursor Bugbot beta have been impressed by its performance.

“I’ve tried many AI review tools,” said David Cramer, Co-Founder & CPO of Sentry. “Bugbot produced less noise, caught real bugs, and just slotted perfectly into our flow.”

Bugbot’s AI-powered code reviews also earned the confidence of engineering teams.

“We’ve had PRs approved by humans, and then Bugbot comes in and finds real bugs afterward,” commented Kodie Goodwin, Senior Engineering Manager of AI Tools at Discord. “That builds a lot of trust.”

For some, the subtlety of the bugs it caught was a standout feature.

“Bugbot blew us away with the nuance of the bugs it was catching,” said Vijay Iyengar, an Engineering Leader at Sierra. Iyengar noted its particular strength in a world increasingly reliant on AI-generated code, saying, “The generator-verifier gap is real, and Bugbot is incredibly strong at reviewing AI-generated code.”

When Bugbot spots something, it leaves a comment in the PR exactly where the issue is found. With a single click, developers can send the problem to the Cursor editor or have a web-based agent start working on a fix. This is all tracked on a central analytics dashboard, giving teams an overview of all reviews and statistics.

Cursor says that, having come to rely on Bugbot itself, it’s keen to see how well the AI agent performs code reviews for others. We look forward to seeing that too.

(Photo by Emiliano Vittoriosi)

See also: Can Europe fix the open-source maintenance crisis?





