The curl open-source project is grappling with a deluge of low-quality “AI slop” security reports.
The situation has become so severe that curl founder Daniel Stenberg says it threatens the future of the project’s successful bug bounty programme. It has also sparked a vigorous debate within the developer community about how to protect open-source projects from this new form of digital exhaustion.
For years, the open-source utility has been an essential, if unseen, component in countless applications, from cars to web servers. To maintain its security, curl has run a bug bounty programme since 2019, rewarding researchers for discovering genuine vulnerabilities. This programme has been a notable success, paying out over $90,000 for 81 confirmed security fixes that have made the internet safer.
However, the landscape has shifted. A rising tide of nonsensical and time-wasting submissions – largely generated by AI tools – is exhausting the volunteer-led security team.
Stenberg, who is also curl’s lead developer, noted that the trend of poor-quality reports “does not seem to slow down. On the contrary, it seems that we have recently not only received more AI slop but also more human slop.”
The project has been receiving an average of two security reports per week, with approximately 20 percent of them being easily identifiable AI slop. The most concerning figure is the validation rate. As of early July, just 5 percent of all reports submitted this year have pointed to a genuine vulnerability. The sheer volume of noise is drowning out the signal.
This deluge of AI slop has a tangible human cost, especially for smaller open-source teams. The curl security team is composed of just seven members. When a report arrives, it typically engages three to four people for a period ranging from thirty minutes to several hours each. For Stenberg, who works on curl full-time, this is a frustrating waste of valuable time. For his colleagues, who often volunteer their limited spare hours, the impact is far more severe.
“My fellows however are not full time on curl. They might only have three hours per week for curl,” Stenberg explained, highlighting the disproportionate impact of a single bogus report. He also spoke of the “emotional toll it takes to deal with these mind-numbing stupidities,” revealing that the team handled eight such reports in the first week of July alone.
Current deterrents offered by the bug bounty platform HackerOne have proven insufficient. Users who submit invalid reports see their reputation score lowered, but this is a minor inconvenience for seasoned participants and meaningless for newcomers, who can simply create a new account. Stenberg describes banning users as a “rather toothless threat” against an endless supply of reporters.
While no immediate changes are planned for the open-source project, Stenberg has announced a period of reflection for the remainder of 2025 to determine the best path forward to counter AI slop.
“I want us to use the rest of the year 2025 to evaluate and think,” he stated, adding that the goal is to act “for the sanity of the curl security team members.” The core objective is clear: “We must do something to drastically reduce the temptation for users to submit low quality reports. Be it with AI or without AI.”
Stenberg’s post has ignited a passionate discussion, with community members proposing a wide array of potential solutions to stem the AI slop tidal wave hitting open-source projects.
The idea of charging a submission fee, which Stenberg himself viewed as hostile, found support among some developers. One person argued that “money is a good entry bar for the cheap low effort AI slop,” a sentiment echoed by others who believe a small, refundable fee would force reporters to validate their findings first.
Others countered that malicious actors could simply absorb the cost. “The payout outweighs the nominal charges up front,” one developer retorted, comparing the situation to the economics of gold farming in online games.
Other suggestions focused on raising the procedural bar for entry. The idea of requiring a short screen recording that demonstrates the exploit proved popular, since it would present a “much bigger barrier for people who only have non-working AI slop” and would let reviewers spot fakes in seconds.
Other proposals included mandatory unit tests to reproduce a vulnerability, or even creative “honeypot” code designed to be flagged by low-effort AI scans.
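As a purely hypothetical illustration of the honeypot idea (the function name and the Python setting are invented here, not anything the curl project uses), the trick is to plant vulnerable-looking code that nothing ever calls, so any report citing it proves the reporter never traced a real code path:

```python
# HONEYPOT (hypothetical sketch) — deliberately vulnerable-looking,
# deliberately unreachable. Nothing in the codebase imports or calls
# this function, so a security report citing it reveals a shallow
# automated scan rather than a validated, reachable exploit.
import subprocess

def _legacy_unsafe_exec(user_input: str) -> None:
    # Classic command-injection bait: shell=True with raw user input.
    subprocess.run(user_input, shell=True)
```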
More complex technical solutions were also floated, including reputation-gating systems based on a “web of trust,” similar to old PGP key-signing parties, or using proof-of-work systems like Hashcash to increase the computational cost of submitting reports.
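To make the proof-of-work suggestion concrete, here is a minimal Hashcash-style sketch in Python; the function names and difficulty value are illustrative assumptions, not any platform’s actual API. A submitter must burn CPU time to find a nonce whose hash clears a difficulty threshold, while the platform can verify the result with a single hash:

```python
import hashlib
import itertools

DIFFICULTY_BITS = 20  # illustrative setting: ~1 million hashes on average

def find_proof(report_id: str) -> int:
    """Brute-force a nonce whose SHA-256 digest falls below the target.

    This is the expensive step the submitter must perform.
    """
    target = 1 << (256 - DIFFICULTY_BITS)
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{report_id}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_proof(report_id: str, nonce: int) -> bool:
    """Verification costs a single hash, so it is nearly free for the platform."""
    digest = hashlib.sha256(f"{report_id}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - DIFFICULTY_BITS))
```

The asymmetry is the point: producing a submission becomes marginally expensive, while checking one remains essentially free, which shifts the cost onto bulk submitters without adding load on the security team.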
The curl project’s struggle is a canary in the coal mine for the wider open-source community, with one developer noting that the problem is being “actively encouraged by GitHub,” suggesting a wider ecosystem issue.
As AI tools become more accessible, the challenge of separating genuine security contributions from automated slop will only intensify and place an unsustainable burden on open-source maintainers.
See also: AI programming tools slow software developers down
