Mar 23, 2026 · 9 min read · insight · ai · coding-agents

The Coding Agent Wars: Who's Actually Winning (And It's Not Who You Think)

We analyzed 14 coding agents across 600K+ GitHub events to find out who's really winning the coding agent wars. Stars tell one story — contributor data tells a very different one.

OSSInsight

I've been using coding agents daily for the past year. Started with Aider, moved to Claude Code, tried Codex when it launched, keep going back and forth. Like most developers, I have opinions about which one is "best."

But opinions are cheap. So I decided to actually look at the data.

I pulled numbers on 14 coding agents through OSSInsight — not just stars (everyone's favorite vanity metric), but the stuff that actually matters: who's contributing code, how fast are they shipping, and how overwhelmed are their maintainers? The results surprised me. Some of the "winners" are running on fumes, and some of the underdogs have quietly built something much more durable.

The Leaderboard Nobody Expected

Let's start with the raw numbers. Here are the top 10 coding agents by GitHub stars as of March 2026:

| Rank | Agent | Stars | Forks | Contributors | Language | Created |
|---|---|---|---|---|---|---|
| 1 | OpenCode | 128,277 | 13,569 | 828 | TypeScript | Apr 2025 |
| 2 | Gemini CLI | 98,735 | 12,538 | 590 | TypeScript | Apr 2025 |
| 3 | Claude Code | 81,437 | 6,777 | 49 | Shell | Feb 2025 |
| 4 | OpenHands | 69,576 | 8,730 | 460 | Python | Mar 2024 |
| 5 | Codex | 66,969 | 8,953 | 383 | Rust | Apr 2025 |
| 6 | Cline | 59,252 | 6,014 | 283 | TypeScript | Jul 2024 |
| 7 | Aider | 42,264 | 4,063 | 180 | Python | May 2023 |
| 8 | Goose | 33,453 | 3,109 | 402 | Rust | Aug 2024 |
| 9 | Cursor* | 32,494 | 2,215 | 32 | — | Mar 2023 |
| 10 | Continue | 31,997 | 4,288 | 501 | TypeScript | May 2023 |

*Cursor's GitHub repo is primarily an issue tracker — the actual source code is proprietary. Its contributor/commit data is not directly comparable to the others.
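OSSInsight aggregates these numbers from GitHub events, but the headline counts are easy to spot-check yourself against the GitHub REST API. A minimal sketch (unauthenticated, so rate-limited; the field names are GitHub's, the helper names are mine):

```python
import json
import urllib.request

def parse_stats(data: dict) -> dict:
    """Pick the headline counts out of a GitHub /repos/{owner}/{repo} payload."""
    return {
        "stars": data["stargazers_count"],
        "forks": data["forks_count"],
        # GitHub's open_issues_count includes open pull requests,
        # so it reads slightly higher than the Issues tab.
        "open_issues": data["open_issues_count"],
    }

def repo_stats(slug: str) -> dict:
    """Fetch live counts for one repo, e.g. slug='continuedev/continue'."""
    req = urllib.request.Request(
        f"https://api.github.com/repos/{slug}",
        headers={"Accept": "application/vnd.github+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_stats(json.load(resp))
```

Contributor counts are not in this payload (they require paging through `/repos/{owner}/{repo}/contributors`), which is exactly why services like OSSInsight precompute them.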

OpenCode leads. Gemini CLI is second. Claude Code third. The usual suspects.

But here's where it gets interesting.

Stars Lie. Contributors Don't.

Star count is a vanity metric. It tells you how many people clicked a button. What actually matters is: how many people care enough to contribute code?

Look at the contributor-to-star ratio:

| Agent | Stars | Contributors | Ratio (contributors per 1K stars) |
|---|---|---|---|
| Continue | 31,997 | 501 | 15.7 |
| Goose | 33,453 | 402 | 12.0 |
| OpenHands | 69,576 | 460 | 6.6 |
| OpenCode | 128,277 | 828 | 6.5 |
| Gemini CLI | 98,735 | 590 | 6.0 |
| Codex | 66,969 | 383 | 5.7 |
| Cline | 59,252 | 283 | 4.8 |
| Aider | 42,264 | 180 | 4.3 |
| Claude Code | 81,437 | 49 | 0.6 |

(Cursor excluded — GitHub repo is an issue tracker, not source code)
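The ratio column is nothing fancy: contributors divided by stars in thousands. A quick sketch reproducing it from the table above (the helper name is mine):

```python
# name: (stars, contributors) -- a few rows from the table above
agents = {
    "Continue": (31_997, 501),
    "Goose": (33_453, 402),
    "Claude Code": (81_437, 49),
}

def per_1k_stars(stars: int, count: int) -> float:
    """Normalize any raw count (contributors, issues, ...) per 1,000 stars."""
    return round(count / (stars / 1_000), 1)

for name, (stars, contributors) in agents.items():
    print(f"{name}: {per_1k_stars(stars, contributors)} contributors per 1K stars")
```

The same helper reproduces the issues-per-1K-stars column in the next section; only the numerator changes.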

Continue has 26x the contributor density of Claude Code. I had to double-check this number. Twenty-six times.

Now, to be fair to Claude Code — and I say this as someone who genuinely likes using it — Anthropic runs a tight ship. 49 contributors isn't a weakness if those 49 are world-class engineers shipping a focused product. But it does mean Claude Code's fate is entirely in Anthropic's hands. If they deprioritize it tomorrow, there's no community to carry it forward.

Meanwhile, Continue, Goose, and OpenHands have thriving contributor ecosystems. These are genuine open-source communities where external developers are shaping the product.

The Open Issues Signal

Here's a dimension most people miss — open issue count:

| Agent | Open Issues | Stars | Issues per 1K Stars |
|---|---|---|---|
| Claude Code | 7,409 | 81K | 91.0 |
| OpenCode | 7,324 | 128K | 57.1 |
| Gemini CLI | 3,129 | 99K | 31.7 |
| Codex | 2,183 | 67K | 32.6 |
| Aider | 1,449 | 42K | 34.3 |
| Continue | 934 | 32K | 29.2 |
| Cline | 715 | 59K | 12.1 |
| OpenHands | 336 | 70K | 4.8 |
| Goose | 318 | 33K | 9.5 |

Claude Code has 91 open issues per 1K stars, 1.6x the next closest. This suggests massive user demand outpacing the team's capacity to respond. OpenHands, by contrast, has just 4.8 — their community is efficiently triaging and resolving issues.

The Velocity Test: Who's Shipping Fastest?

Stars and contributors are historical. What about right now? Let's look at commits in the last 30 days:

| Agent | Commits (Last 30 Days) | Contributors | Commits/Contributor |
|---|---|---|---|
| OpenCode | 823 | 828 | 1.0 |
| Codex | 754 | 383 | 2.0 |
| Gemini CLI | 603 | 590 | 1.0 |
| Goose | 259 | 402 | 0.6 |
| OpenHands | 247 | 460 | 0.5 |
| Continue | 130 | 501 | 0.3 |
| Cline | 117 | 283 | 0.4 |
| Claude Code | 43 | 49 | 0.9 |
| Aider | 25 | 180 | 0.1 |
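If you have a local clone, you can approximate these 30-day counts with plain git. Note this only counts commits reachable from the default branch, so it won't exactly match OSSInsight's event-based numbers. A minimal sketch:

```python
import subprocess

def commits_last_n_days(repo_path: str, days: int = 30) -> int:
    """Count commits reachable from HEAD in the last `days` days of a local clone."""
    out = subprocess.run(
        ["git", "-C", repo_path, "rev-list", "--count",
         f"--since={days} days ago", "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip())

# e.g. commits_last_n_days("/path/to/aider")  # assumes a local clone exists
```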

OpenCode, Codex, and Gemini CLI are shipping at breakneck speed — 600+ commits a month. They're in a full sprint.

And then there's Aider. 25 commits in a month. For context, I was an early Aider user — it genuinely changed how I thought about AI-assisted coding. But 25 commits when your competitors are pushing 600+? That's concerning. Paul Gauthier built something remarkable largely by himself, and maybe that's the problem. One person can't outship Google and OpenAI. I hope I'm wrong about this one.

The Three Archetypes

Looking at all this data, I see three distinct models emerging:

1. The Corporate Rockets

OpenCode, Gemini CLI, Codex, Claude Code

Big company backing. Huge star counts. Internal teams pushing 600+ commits a month. But look at the contributor ratios — these are products with a GitHub repo, not open-source communities. That's fine! Chrome is technically open-source too. Just don't confuse the two.

2. The Community Organisms

Continue, Goose, OpenHands, Cline

This is where things get interesting. Continue has 501 contributors for 32K stars. That's not a project — that's a movement. These tools tend to be messier, more opinionated, harder to set up. But they evolve in ways no product team could predict, because hundreds of developers are scratching their own itches.

There's a well-known pattern where the "worse" open-source option wins long-term — Linux over Solaris, Kubernetes over Docker Swarm. But I'll challenge my own argument later in this piece, because dev tools might be different.

3. The Pioneers

Aider, Cursor, Plandex

Aider invented the category. Cursor proved you could build a $10B company around it. But pioneers don't always win the war they started — ask Netscape. The question is whether they can reinvent themselves fast enough, or if they become the projects people "used to use."

(I really don't want this to happen to Aider. It's one of the most elegant developer tools I've ever used.)

What the Data Doesn't Tell You

I need to be honest about something: GitHub metrics are only half the story. Maybe less.

Nothing in this data tells you which agent writes the best code. Which one actually understands your 200-file monorepo. Which one gracefully recovers when it breaks something. Which one makes you feel like you have a senior engineer pairing with you versus a junior who's very fast at typing.

I've used most of these tools. My subjective take, which you're free to ignore:

Claude Code produces the best code. It's not close. The reasoning quality, the understanding of complex codebases, the way it asks clarifying questions instead of barreling ahead — Anthropic's model advantage is real. The 49-contributor count doesn't matter when the underlying model is a generation ahead.

Aider has the best workflow. Despite the commit slowdown, the actual experience of using Aider is remarkably well-designed. git-native, minimal, respects your existing toolchain. There's a lesson here about software craft that commit counts can't capture.

Cline is the best IDE integration. If you live in VS Code, Cline feels native in a way the terminal agents never will. Different philosophy, different strengths.

The corporate rockets are improving fastest — because they can throw 50 engineers at the problem. GitHub metrics show community health, but for a tool you use 8 hours a day, polish matters more than contributor count. VS Code beat Atom. Chrome beat Firefox. In developer tools, the most polished option often wins, not the most open one.

So take my archetype framework above with a grain of salt. The "community always wins" narrative is seductive, and it's often true for infrastructure (Linux, Kubernetes). But for end-user tools? The jury is very much still out.

My Bet

Alright, predictions. Putting a stake in the ground so you can come back in a year and tell me I was wrong.

Bet #1: MCP kills the moat. Right now, each agent has its own tool/plugin system. MCP is rapidly becoming the universal standard. Once every agent can use every MCP server, the differentiator shifts from "what can this agent do" to "how well does it do it." This commoditizes the corporate rockets and advantages the community organisms, who can iterate on UX faster.

Bet #2: Continue or Goose will be top 3 by stars within 12 months. Their contributor density is a leading indicator. Projects with a dozen or more contributors per 1K stars don't stay small. Something is pulling people in.

Bet #3: At least two of today's top 5 will be irrelevant by March 2027. Not dead — just irrelevant, the way Sublime Text is irrelevant. Still loved by some, but not where the energy is. This market is moving too fast for anyone to coast.

I genuinely don't know who wins. The data points in one direction, my experience points in another, and history is ambiguous. What I do know is this: we're in the first inning. The coding agent that dominates in 2028 might not even exist yet.

The only thing I'm truly confident about? A year from now, these numbers will look completely different. I'll be back with an update.


All data in this article was sourced from OSSInsight, which analyzes 10B+ GitHub events in real-time. You can explore any of these projects yourself — just search for a repo name and dive in.

Compare any two agents head-to-head: OpenCode vs Claude Code | Codex vs Gemini CLI | Aider vs Continue