Mergeproof

Why Mergeproof#

Mergeproof is a staked PR review protocol. This document explains why it exists, why it matters, and why the current approach to code review is fundamentally broken.

The Broken State of Code Review#

Code review is one of the most important activities in software engineering. It catches bugs before they ship. It enforces standards. It transfers knowledge between developers. And it is almost entirely unpaid.

Review is unpaid labor. Open source maintainers review hundreds of PRs per year with no compensation. Corporate engineers do it because their manager told them to, not because they are incentivized to do it well. The result is predictable: burnout. Maintainers of critical infrastructure projects quit. Bus factor drops to one. Projects that millions of people depend on rot from the inside because nobody wants to do the thankless work of reading someone else's code.

The economics are stark. A maintainer of a package with 10 million weekly downloads might spend 20 hours per week reviewing PRs. That is a half-time job with zero pay, zero equity, and zero job security. The only compensation is a GitHub contribution graph and the occasional "thanks" in a PR comment.

Quality is arbitrary and subjective. One reviewer says "LGTM" after a 30-second skim. Another spends two hours and finds five bugs. Both reviews carry equal weight. There is no mechanism to distinguish thorough review from rubber-stamping. The developer who spent two hours gets the same recognition as the one who spent 30 seconds — which is to say, none.

GitHub's review system treats all approvals as equal. A green checkmark from a junior developer who glanced at the diff is indistinguishable from a green checkmark from a senior engineer who traced every code path. The tooling does not measure review quality. It cannot. And because it cannot, there is no feedback loop to improve it.

There is no accountability. A reviewer approves a PR that later causes a production outage. Nothing happens to the reviewer. There are no consequences for a bad review, no rewards for a good one. The incentive structure actively punishes thoroughness: the more time you spend reviewing, the less time you have for your own work, and nobody is tracking the difference.

Consider the asymmetry: a developer who introduces a bug gets blamed. A reviewer who approved that bug gets nothing — not blame, not consequences, nothing. The reviewer's incentive is to approve quickly and move on. Spending extra time to find subtle issues is strictly worse for the reviewer in every measurable way.

The coordination problem is real. Who reviews what? In most projects, it is first-come, first-served with no skin in the game. Senior engineers cherry-pick the interesting PRs. Junior engineers get stuck reviewing the boring ones. Nobody wants to review the 2,000-line refactor. The assignment is random, the effort is unequal, and the quality varies wildly.

CODEOWNERS files and review assignment bots help with routing but do nothing for incentives. They tell you who should review, not who wants to review. The result is the same: reviews sit in queues for days, PRs go stale, and developers context-switch between writing code and grudgingly reading someone else's.

This is not a new problem. It has been the status quo for decades. But something changed recently that turned a chronic problem into an acute crisis.

The AI Agent Tsunami#

AI coding agents are here. Devin, OpenClaw, Copilot Workspace, Cursor Agent, Codex — the list grows every month. These tools generate pull requests at 10-100x the speed of a human developer. A single agent can open dozens of PRs per day across multiple repositories.

This is not a future scenario. It is happening now.

The review bottleneck is about to become catastrophic. If one developer generates 10 PRs per day instead of one, the review load increases 10x. But the number of qualified reviewers stays the same. The math does not work. We are heading toward a world where PR queues grow faster than teams can drain them.

Think about what happens to a popular open source project. Today, a maintainer might receive 5-10 PRs per week from human contributors. With AI agents, that number becomes 50-100 per week — or more. Each PR still needs a human to read the diff, understand the intent, verify correctness, check for edge cases, and confirm it does not break existing functionality. The review time per PR does not decrease just because the PR was written faster.

Low-quality PRs are already flooding open source. Maintainers of popular repositories report a surge in AI-generated contributions that look plausible but are subtly wrong. The code compiles. The tests pass (the ones the agent wrote, at least). But the logic has edge cases the agent did not consider, or the implementation ignores architectural constraints that are not written down anywhere. These PRs waste reviewer time and erode trust.

The failure mode is insidious. An AI-generated PR that is obviously wrong is easy to reject. The dangerous ones are 90% correct — good enough to pass a cursory review, subtly broken in ways that only show up in production. Without thorough review, these get merged. With thorough review, they consume hours of human time that could have been spent building.

The current response is gatekeeping, not scaling. Some projects have started closing PRs from suspected AI agents. Others require contributors to certify they did not use AI. These are stopgap measures that fight the symptom, not the cause. The real problem is that review capacity does not scale with PR volume. Banning AI agents does not solve it — it just delays the reckoning.

We need a mechanism that scales review with submissions. More PRs should mean more reviewers, not more burden on the same maintainers. The protocol needs to be economically self-correcting: as PR volume increases, the economic incentive to review should increase proportionally. A $10,000 bounty with 50 PRs in queue should attract 50x the review attention of a $10,000 bounty with one.

Agents can be reviewers too — and that is the unlock. The same AI systems that generate PRs can also review them. An agent that is good at writing code is also good at reading code and finding bugs. Mergeproof is agent-native by design: the protocol is indifferent to whether a participant is human or machine. It cares only about the quality of their work. If an AI agent can find a critical bug, it earns the same 9% reward as a human. This means AI agents can scale the review side of the equation, not just the submission side.

The Mergeproof Insight: Skin in the Game#

The core insight is simple: what if every PR required a financial stake?

Not a reputation score. Not a social signal. Actual money on the line.

A developer who submits a PR stakes a percentage of the bounty (configurable, 5-25%). This stake says: "I believe this code is correct, and I am willing to put money behind that belief." If the code is good, they get their stake back plus the bounty. If the code has bugs, they lose part of the bounty to the people who found those bugs. If the code is so bad it hits the bounty floor (50% reduction from bugs), the PR is auto-rejected and the entire stake is forfeited.

This changes submission behavior immediately. A developer who knows their code will be scrutinized by economically motivated bug hunters writes better code. They run more tests. They handle more edge cases. They read the spec more carefully. The stake is not just a filter for low-quality submissions — it is an incentive to produce high-quality ones.

A bug hunter who reports an issue also stakes — a smaller amount (0.25% of the bounty per report). This stake says: "I believe this is a real bug, not a style nit or a false positive." If the bug is valid, the hunter earns a reward proportional to the severity: 1% for minor bugs, 3% for major bugs, 10% for critical bugs. If the report is invalid, the hunter loses their stake. This filters out noise and ensures that only substantive, functional bugs are reported.
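
The reward arithmetic above can be sketched with integer dollars and basis points, the way on-chain code typically handles percentages. The names (`bug_reward`, `SEVERITY_BPS`) are illustrative, not the actual contract API:

```python
# Illustrative sketch of the bug-hunter economics described above.
# Severity shares: minor 1%, major 3%, critical 10%; flat 10% protocol
# fee on rewards; hunter stake is 0.25% of the bounty per report.
SEVERITY_BPS = {"minor": 100, "major": 300, "critical": 1_000}
FEE_BPS = 1_000          # 10% protocol fee on rewards
HUNTER_STAKE_BPS = 25    # 0.25% of the bounty per report

def bug_reward(bounty: int, severity: str) -> dict:
    """Stake, gross reward, and net payout for a valid bug report."""
    gross = bounty * SEVERITY_BPS[severity] // 10_000
    fee = gross * FEE_BPS // 10_000
    return {"stake": bounty * HUNTER_STAKE_BPS // 10_000,
            "gross": gross,
            "net": gross - fee}
```

On a $10,000 bounty this yields a $25 stake per report and $90 / $270 / $900 net for minor / major / critical bugs, matching the worked example later in this document.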

An attestor who vouches for code quality stakes as well (1% of the bounty). This stake says: "I reviewed this code and believe it is clean." If they are right, they earn a 0.5% reward. If a valid bug is found by anyone — even a different reviewer — all attestors are slashed. This creates a powerful incentive to be thorough before attesting. An attestor who rubber-stamps gets destroyed the moment a real bug hunter shows up.

The result is a self-sustaining marketplace for code quality. Every participant has economic skin in the game. Every action has consequences. The protocol does not rely on goodwill, reputation, or social pressure. It relies on the oldest and most reliable incentive mechanism: money.

For project owners, the value proposition is straightforward. Post a bounty on a GitHub issue, and you get a mini-audit on every PR — paid for by the bounty itself. Bug hunters are economically motivated to scrutinize every line. Attestors are economically motivated to verify before vouching. The owner does not need to find reviewers, assign them, or evaluate their work. The protocol handles all of it.

The zero-sum design is critical. Bug rewards come directly from the bounty reduction — what hunters earn is exactly what the submitter loses. This means planting fake bugs with an accomplice is strictly worse than submitting clean code. The 10% protocol fee on all rewards adds further friction. Every round trip of collusion costs real money. The only way to maximize returns is to do honest, quality work.

Why GenLayer?#

Mergeproof requires a smart contract that can do things traditional smart contracts cannot. It needs to read the GitHub API to verify CI status, PR authorship, and merge state. It needs to evaluate whether a bug report is valid (in v2+). It needs to make decisions that involve judgment, not just arithmetic.

GenLayer is a blockchain where smart contracts can call external APIs natively. The BountyRegistry contract reads GitHub directly — no oracles, no middleware, no trusted third parties feeding data on-chain. When a developer calls submit_pr(), the contract fetches the CI status from GitHub's API and verifies it within the consensus process itself.

Why does this matter? Consider the alternative. On Ethereum or any traditional EVM chain, you would need an oracle service (Chainlink, UMA, etc.) to feed GitHub data on-chain. That means trusting a third party, paying oracle fees, dealing with update latency, and building infrastructure to keep the data fresh. Mergeproof needs to check CI status, PR authorship, merge state, and eventually evaluate code quality — all in real-time, at transaction time. Oracles are not designed for this kind of interactive, per-transaction external data access. GenLayer is.

The equivalence principle makes this trustless. In GenLayer's consensus model, a leader node proposes a result (e.g., "CI is green for commit abc123"). Multiple validator nodes then independently repeat the same API call and verify the result. If the validators agree with the leader, the transaction succeeds. If they disagree, the transaction fails. This means no single node can lie about what GitHub says.
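
A toy model of that flow, with an injected `fetch` standing in for the external GitHub call (illustrative only, not GenLayer's implementation):

```python
# Toy model of leader-propose / validator-verify consensus. The leader
# runs the external call once; each validator repeats the same call, and
# the transaction only succeeds if every validator's result matches.
from typing import Callable

def run_with_equivalence(fetch: Callable[[], str], n_validators: int = 4) -> str:
    leader_result = fetch()  # leader proposes, e.g. "CI green for abc123"
    agreed = all(fetch() == leader_result for _ in range(n_validators))
    if not agreed:
        raise RuntimeError("validators disagreed; transaction fails")
    return leader_result
```

No single node can lie: a leader claiming "CI green" when validators observe "CI red" causes the transaction to fail rather than produce a wrong result.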

AI-powered arbitration is built into the consensus layer. In v2+, when a bug report is disputed, the contract can evaluate whether the bug is valid by having the LLM read the issue spec, the PR diff, and the bug description. This is not a bolted-on feature — it is a native capability of the GenLayer runtime. The LLM acts as a decentralized arbiter: the leader proposes a judgment, validators independently form their own judgments, and consensus determines the outcome.

Non-deterministic computation enables subjective evaluation. Traditional smart contracts can only perform deterministic operations. They can check if a number is greater than another number. They cannot evaluate "does this PR meet the spec?" or "is this a real bug or a style nit?" GenLayer's run_nondet() primitive allows exactly this kind of evaluation within a smart contract, with consensus ensuring the result is trustworthy.

This is the unlock for v2 and v3 of the protocol. In v1, the bounty owner validates bug reports — a necessary centralization for the MVP. But the path to full decentralization does not require building a complex voting system or a DAO. It requires a smart contract that can read code, read a spec, and make a judgment call. GenLayer provides exactly this capability, natively, as part of the consensus layer.

Base/EVM handles the money. Funds live in a Solidity Escrow contract on Base — battle-tested, auditable, and well-understood EVM security. GenLayer is the arbiter; Base is the custodian. When a bounty concludes, GenLayer sends a single bridge message to Base with the settlement decision, and the Escrow contract executes all payouts atomically on-chain. GenLayer decides who gets paid. Base moves the money. Clean separation of concerns.

The bridge architecture is efficient by design. No matter how many participants are involved in a bounty — submitters, bug hunters, attestors — there is exactly one cross-chain message per bounty conclusion. GenLayer encodes a SettlementDecision (outcome type, bug validities, participant addresses) and sends it to Base. The Escrow contract calculates exact payout amounts on-chain and executes batch transfers. One message, atomic execution, all payouts settled in a single transaction.
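
A sketch of what such a settlement message might look like. The field names and shape here are hypothetical; the real `SettlementDecision` encoding is defined by the contracts:

```python
# Hypothetical shape of the one-per-bounty settlement message. The Base
# Escrow contract derives every payout amount from this decision on-chain,
# so only judgments (not dollar amounts) cross the bridge.
from dataclasses import dataclass

@dataclass(frozen=True)
class SettlementDecision:
    bounty_id: int
    outcome: str                 # e.g. "claimed" or "auto_rejected"
    submitter: str               # EVM address of the PR author
    bug_validities: tuple = ()   # (hunter_address, is_valid) pairs
    attestors: tuple = ()        # addresses of attestors to reward/slash

def to_bridge_message(d: SettlementDecision) -> dict:
    """Encode the decision as a single cross-chain payload."""
    return {"bounty_id": d.bounty_id,
            "outcome": d.outcome,
            "submitter": d.submitter,
            "bug_validities": list(d.bug_validities),
            "attestors": list(d.attestors)}
```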

An Open Market for Code Quality#

Mergeproof is permissionless. Anyone can participate.

You do not need to be a project contributor. A bug hunter can review code on a project they have never seen before. If they find a bug, they earn money. If they do not, they move on to the next bounty. There is no approval process, no contributor license agreement, no social capital required. The only thing that matters is whether you can find bugs.

Bug hunting becomes a paid profession. Today, the closest equivalent is security auditing — expensive, slow, and gated behind relationships with audit firms. Mergeproof democratizes this. A skilled reviewer anywhere in the world can earn money by finding issues in code. The better you are at finding bugs, the more you earn. Meritocracy enforced by economics.

The math works out for hunters. A bug hunter who can reliably find major bugs in submitted code earns 2.7% of each bounty (net after fees). On a $10,000 bounty, that is $270 for a single report. A hunter who reviews 10 bounties per week and finds one major bug in each earns $2,700 per week — roughly $140,000 per year. Finding critical bugs pays more: 9% net per bounty, or $900 on a $10,000 bounty. And the barrier to entry is just a $25 stake per report and the skill to find real bugs.
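
Spelled out as arithmetic (integer dollars and basis points; names are illustrative):

```python
# The hunter-earnings arithmetic from the paragraph above.
def hunter_net(bounty: int, severity_bps: int, fee_bps: int = 1_000) -> int:
    """Net payout for a valid bug: gross reward minus the 10% protocol fee."""
    gross = bounty * severity_bps // 10_000
    return gross - gross * fee_bps // 10_000

per_major = hunter_net(10_000, 300)       # $270 net per major bug
weekly = 10 * per_major                   # 10 bounties/week -> $2,700
annual = weekly * 52                      # $140,400/year (the ~$140,000 figure)
per_critical = hunter_net(10_000, 1_000)  # $900 net per critical bug
```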

Attestors build reputation and earn by vouching for quality. Attestation is the "I reviewed this and it looks good" action. Attestors stake 1% of the bounty and earn 0.5% if no bugs are found. But if a valid bug is found by anyone — even by a different reviewer — all attestors are slashed. This means attestation is not a rubber stamp. It is a financial commitment to the quality of the code.

Over time, attestors who consistently get it right build a track record. The protocol does not have a formal reputation system in v1, but the on-chain history of attestations, slashings, and rewards creates a public, verifiable record of reviewer quality. Future versions can formalize this into a reputation score.

AI agents are first-class participants. The protocol does not care if you are human or machine. An AI agent can submit a PR with a stake, hunt for bugs, or attest to code quality. This turns the AI agent problem on its head: instead of agents generating spam that humans must review, agents become productive participants in the review process itself. An agent like OpenClaw submits a PR, other agents and humans hunt for bugs, attestors verify quality, and settlement is automatic. The protocol creates a productive role for AI agents rather than treating them as a nuisance.

This is a global, permissionless code review marketplace. No geography restrictions. No employment requirements. No credentials needed. Post a bounty in USDC, and anyone on the planet (or any agent on the internet) can earn by doing quality work.

Consider a concrete scenario. A company posts a $50,000 bounty for implementing a critical feature. An AI agent (let us call it OpenClaw) submits a PR with a $5,000 stake. Within the 72-hour review window, three human bug hunters and two AI agents review the code. One human finds a major bug ($1,500 gross reward). One AI agent finds a critical vulnerability ($5,000 gross reward). Two attestors verify the fix in round 2 and earn $250 gross each. The developer fixes the bugs, resubmits, and claims the reduced bounty. Every participant got paid for the value they provided. The company got a multi-party audit of its critical feature. No one needed to be hired, onboarded, or managed.

The protocol does not discriminate. A bug found by an AI agent is worth exactly the same as a bug found by a human. A PR submitted by an agent receives the same treatment as one submitted by a developer. The only thing that matters is the quality of the work and the willingness to stake on it. In a world where the line between human and AI contributions blurs daily, this neutrality is a feature, not a limitation.

The Numbers#

Theory is nice. Numbers are better. Let us walk through a concrete example to see how the economics work in practice.

Setup#

A project owner wants a feature built. They post a bounty on a GitHub issue:

  • Bounty: $10,000 USDC
  • Stake ratio: 10% (submitter must stake $1,000)
  • Attestation pool: 5% ($500 additional from owner, $10,500 total escrow)
  • Review window: 72 hours

Round 1: Submission and Review#

A developer submits a PR and stakes $1,000. The contract verifies CI is green, locks the commit hash, and opens the 72-hour review window.

During the window:

  • Hunter A stakes $25 (0.25% of bounty) and reports a major bug (3% severity). If valid, they earn $300 gross, minus 10% protocol fee = $270 net + $25 stake back = $295 total.
  • Hunter B stakes $25 and reports a minor bug (1% severity). If valid, they earn $100 gross, minus 10% fee = $90 net + $25 stake back = $115 total.
  • Hunter C stakes $25 and reports a false positive. The owner marks it invalid. Hunter C loses their $25 stake — it goes to the treasury.

Both valid bugs reduce the bounty:

  • Major bug: 3% = $300 reduction
  • Minor bug: 1% = $100 reduction
  • New bounty amount: $9,600
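
The reduction math above as a small sketch. Each reduction is a percentage of the original bounty; the names are illustrative, not protocol code:

```python
# Each valid bug reduces the bounty by its severity share of the
# ORIGINAL bounty amount (major = 300 bps, minor = 100 bps).
def apply_reductions(bounty: int, severities_bps: list[int]) -> int:
    reduction = sum(bounty * bps // 10_000 for bps in severities_bps)
    return bounty - reduction

# One major (3% = $300) plus one minor (1% = $100) bug on a $10,000 bounty:
new_bounty = apply_reductions(10_000, [300, 100])
assert new_bounty == 9_600
```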

Round 2: Fix and Resubmit#

The developer fixes both bugs, pushes a new commit, and calls retry(). This is attempt 2 of 3. A new 72-hour review window opens on the new commit hash.

Note that the review happens against the new commit. The contract records the exact commit hash, and all bug reports must reference it. New commits pushed during the window are ignored — the "lock" is economic, not technical.

During the second window:

  • No bugs found. The bug hunters reviewed the fixes and did not find new issues.
  • 3 attestors each stake $100 (1% of bounty) during the last 24 hours of the window. They are asserting that the code at this commit is clean.
  • Each attestor earns 0.5% of bounty = $50 gross, minus 10% fee = $45 net + $100 stake back = $145 total.
  • Attestation pool used: $150 of $500. The attestation pool can support up to 10 attestors ($500 pool / $50 reward per attestor). The remaining $350 returns to the owner at settlement.
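
The pool accounting can be checked with a few lines of arithmetic (illustrative names, not the contract API):

```python
# Attestation-pool accounting: each attestor's gross reward is 0.5% of
# the bounty, paid from the owner-funded pool; the 10% protocol fee
# comes out of the gross, and the unused pool returns to the owner.
def attestation_payouts(bounty: int, pool: int, n_attestors: int) -> dict:
    gross_each = bounty * 50 // 10_000       # 0.5% of the bounty
    assert n_attestors * gross_each <= pool, "pool exhausted"
    used = n_attestors * gross_each
    return {"net_each": gross_each * 9_000 // 10_000,  # minus 10% fee
            "pool_used": used,
            "pool_returned": pool - used,
            "max_attestors": pool // gross_each}
```

With a $10,000 bounty and a $500 pool, three attestors use $150, net $45 each, and the remaining $350 goes back to the owner, as in the walkthrough.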

Settlement#

The PR is merged on GitHub. The developer calls claim(). The GenLayer contract verifies the merge via the GitHub API, calculates all payouts, and sends a single bridge message to the Base Escrow contract. The Escrow executes batch transfers atomically. Here is what everyone receives:

| Participant | Outcome | Net |
| --- | --- | --- |
| Developer | $9,600 bounty + $1,000 stake back | +$9,600 |
| Hunter A (major bug) | $270 reward + $25 stake back | +$270 |
| Hunter B (minor bug) | $90 reward + $25 stake back | +$90 |
| Hunter C (invalid report) | Stake slashed | -$25 |
| 3 Attestors | $45 each net + $100 stake back | +$135 total |
| Owner | -$10,500 deposit + $350 unused pool | -$10,150 net cost |
| Treasury | Protocol fees + slashed stakes | +$80 |

Zero-sum check: -10,150 + 9,600 + 270 + 90 - 25 + 135 + 80 = 0
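
The same check, written as executable arithmetic. The $80 treasury line decomposes into $55 of protocol fees ($30 from Hunter A, $10 from Hunter B, $15 from the attestors) plus the $25 slashed stake:

```python
# Net flows per participant from the settlement table above.
# Stakes that are returned wash out, so each entry is the net gain/loss.
flows = {
    "developer": 9_600,          # reduced bounty (stake returned)
    "hunter_a": 270,             # major bug, net of fee
    "hunter_b": 90,              # minor bug, net of fee
    "hunter_c": -25,             # invalid report: stake slashed
    "attestors": 135,            # 3 x $45 net
    "owner": -10_500 + 350,      # deposit out, unused pool back
    "treasury": 55 + 25,         # protocol fees + slashed stake
}
assert sum(flows.values()) == 0  # every dollar is accounted for
```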

Everyone who did quality work got paid. The developer earned the bounty minus the cost of their bugs. The hunters earned proportional to the severity of what they found. The attestors earned for correctly vouching that the fixed code was clean. The hunter who filed a bad report lost their stake. Every dollar is accounted for.

Notice what happened from the project owner's perspective. They spent $10,150 and got:

  • A feature implemented and merged
  • Two bugs found and fixed before the code shipped
  • Three independent attestations that the final code is clean
  • A false-positive report filtered out automatically

This is a multi-party audit of a feature PR. Traditional security audits for a similar scope cost $20,000-$50,000 and take weeks. The Mergeproof review happened in 6 days (two 72-hour windows) and cost the owner $10,150 — most of which was the bounty itself.

What If the Code Is Terrible?#

If bugs pile up and the bounty drops to 50% of its original value ($5,000), the protocol triggers an auto-reject. The PR is automatically rejected. The submitter loses their entire $1,000 stake. The bounty reopens for a new submitter.
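
The floor rule as a sketch (illustrative, not the contract API):

```python
# Auto-reject trigger: once cumulative valid-bug reductions reach 50%
# of the ORIGINAL bounty, the PR is rejected and the stake forfeited.
FLOOR_BPS = 5_000  # 50%

def hits_floor(original_bounty: int, reductions: list[int]) -> bool:
    return sum(reductions) * 10_000 >= original_bounty * FLOOR_BPS

# Five critical bugs ($1,000 reduction each) on a $10,000 bounty hit
# the floor; one major plus one minor ($400 total) does not.
```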

Why Collusion Does Not Work#

The protocol has three layers of anti-collusion design. Here is what happens if a submitter tries to game it.

Attempt 1: Plant minor bugs with an accomplice.

Submitter plants 10 minor bugs, accomplice reports all 10:

  Bounty reduced by 10% (10 x 1%):  $9,000 remaining
  Accomplice earns:                  $1,000 gross - $100 fee = $900 net

  Combined return:  $9,000 (bounty) + $1,000 (stake) + $900 = $10,900
  Clean submission: $10,000 (bounty) + $1,000 (stake)        = $11,000

  Net loss from collusion: $100

The 10% protocol fee eats the scheme. Every dollar extracted through fake bugs costs more than a dollar in fees.

Attempt 2: Go bigger, plant critical bugs.

Submitter plants 5 critical bugs, accomplice reports all 5:

  Total reduction: 50% → bounty floor hit → AUTO-REJECT

  Submitter receives:   $0 (rejected, no bounty) - $1,000 (stake forfeited) = -$1,000
  Accomplice receives:  $4,500 (5 x $900 net)

  Combined return:  $3,500
  Clean submission: $11,000

  Net loss from collusion: $7,500

Hitting the floor is catastrophic. The auto-reject mechanism means the submitter gets nothing. The accomplice earns bug rewards, but the combined return is a fraction of what a clean submission would yield.

The takeaway: collusion is unprofitable at every scale. Small collusion loses money to the 10% fee. Medium collusion loses more money to fees. Large collusion triggers auto-reject and catastrophic losses. The protocol is not just resistant to gaming — it actively punishes it. The rational strategy is always to submit clean code and find real bugs.
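
Both scenarios can be computed side by side. This sketch uses integer dollars and basis points; `combined_return` is an illustrative name, not protocol code:

```python
# Combined submitter + accomplice return when the submitter plants bugs
# of the given severities and the accomplice reports them all.
def combined_return(bounty: int, stake: int, planted_bps: list[int]) -> int:
    reduction = sum(bounty * bps // 10_000 for bps in planted_bps)
    rewards_net = sum(bounty * bps // 10_000 * 9_000 // 10_000
                      for bps in planted_bps)       # after 10% fee
    if reduction * 2 >= bounty:                     # 50% floor: auto-reject
        return rewards_net - stake                  # stake forfeited, no bounty
    return (bounty - reduction) + stake + rewards_net

clean = combined_return(10_000, 1_000, [])            # $11,000 baseline
minor = combined_return(10_000, 1_000, [100] * 10)    # $10,900: fee eats $100
crit  = combined_return(10_000, 1_000, [1_000] * 5)   # $3,500: floor auto-reject
```

Every strategy underperforms the clean-submission baseline, which is the whole point of the zero-sum design.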

Why This Matters#

Code review becomes a market, not a favor. Today, review is something you ask people to do nicely. With Mergeproof, review is something people compete to do because they get paid for it. The shift from social obligation to economic opportunity changes everything about who reviews, how much effort they put in, and how many people participate.

Quality scales with economic incentives, not goodwill. A $100,000 bounty attracts more reviewers than a $100 bounty. A critical bug pays more than a minor one. The protocol naturally allocates more review attention to higher-stakes code. This is the correct behavior — important code should get more scrutiny, and the economic incentives ensure it does. No project manager needs to decide "this PR is important, assign three reviewers." The market does it automatically.

AI agents become productive participants, not spam generators. Instead of closing the door on AI-generated PRs, Mergeproof gives agents a constructive role. An agent can submit PRs if it is willing to stake on their quality. Other agents can earn money by finding bugs in those submissions. The protocol turns the AI flood from a problem into a solution: more agents means more review capacity, not just more PRs. This is the key insight for the AI era: you do not solve the agent problem by banning agents. You solve it by giving them skin in the game.

Project maintainers are freed from the review burden. The owner posts a bounty and walks away. The protocol handles reviewer recruitment, incentive alignment, dispute resolution, and payout. The owner still needs to validate bug reports in v1, but even that burden disappears in v2 when the AI arbiter takes over. Maintainers can focus on building instead of reviewing.

The best reviewers earn the most. Finding a critical bug pays 9% of the bounty (net). Finding a minor bug pays 0.9%. Attesting pays 0.45%. The protocol rewards depth and skill. A reviewer who consistently finds critical issues earns an order of magnitude more than one who rubber-stamps attestations. This is meritocracy enforced by economics — not by performance reviews, not by social status, not by who you know.

Early hunters find easy bugs. Late hunters find deep ones. The protocol creates a natural dynamic where the first reviewers grab the obvious issues, and later reviewers must dig deeper. This is not a flaw — it mirrors how real audits work. The easy bugs get caught first, and the hard ones require more expertise and effort. The staking mechanism ensures that even late reviewers have an incentive to keep looking: a critical bug is worth 10x a minor one, and the deep bugs are usually the critical ones.

Collusion is economically irrational. The zero-sum design (bug rewards = bounty reductions), the 10% protocol fee, and the catastrophic floor auto-reject mechanism make gaming the system consistently unprofitable. Small collusion loses money to fees. Large collusion triggers auto-reject and catastrophic losses. The math works against cheaters regardless of strategy. The only rational play is honest participation.

It starts with GenLayer, but the model is universal. The staked review model does not depend on any specific chain or technology. GenLayer provides the ideal foundation — native API access, AI arbitration, trustless verification — but the core insight (skin in the game for code review) applies to any development ecosystem. As AI agents proliferate across every platform, the need for economically-incentivized review becomes universal.

How This Compares to What Exists Today#

| Approach | Who reviews? | Incentive | Accountability | Scales with AI? |
| --- | --- | --- | --- | --- |
| GitHub PR reviews | Whoever is assigned | Social obligation | None | No |
| Security audits | Audit firms | Flat fee | Reputation only | No |
| Bug bounties (Immunefi) | White hats | Per-vulnerability reward | None for false positives | Partially |
| Mergeproof | Anyone (human or agent) | Staked rewards per severity | Slashing for bad work | Yes |

Traditional bug bounty platforms like Immunefi reward vulnerability discovery but do not apply to regular PR review. They operate on deployed contracts, not pull requests. Security audits are thorough but expensive and slow — a single audit costs $20,000-$200,000 and takes weeks. You cannot audit every PR. GitHub reviews are free but unreliable and unaccountable.

Mergeproof sits in the gap: economically-incentivized review for every pull request, not just security-critical code. It is cheaper than an audit, more thorough than a GitHub review, faster than either, and scales with demand.

Where We Are#

Mergeproof is live with working smart contracts on GenLayer and Base, a CLI for all protocol interactions, and a web dashboard for browsing bounties. The first target is genlayer-studio — real bounties on a real codebase.

What Is Built#

  • GenLayer contract (BountyRegistry): Full state machine for bounty lifecycle — creation, submission, bug reporting, attestation, retry, claim, abandonment, auto-reject. Reads GitHub API for CI status, PR authorship, and merge state. 35+ direct-mode tests.
  • EVM contracts (Escrow + BridgeReceiver): Fund custody on Base, stake management, decision-based settlement with atomic batch payouts. 63 Foundry tests.
  • CLI (@mergeproof/cli): All protocol interactions via command line. Two-transaction pattern handled automatically. JSON output mode for AI agent integration.
  • Web dashboard: Browse bounties with filters, view submissions and bug reports, state-aware CTAs that show contextual CLI commands, light/dark theme.
  • Bridge relay: Service that delivers settlement messages from GenLayer to Base for dev/testnet environments.

The Roadmap#

  1. v1 (now): Owner-validated bug reports, CLI-first workflow, real stakes on testnet. The owner is a neutral arbiter who validates whether bugs are real.
  2. v2: AI arbiter replaces owner validation. GenLayer's LLM evaluates bug validity, severity, and spec compliance within the contract. Fully decentralized dispute resolution with no human authority.
  3. v3: Full trustless operation. Owner has no special powers. All decisions — merge worthiness, bug validity, severity classification — are made by AI + consensus. True escrow where outcomes are determined entirely by contract logic.

Dogfooding#

The protocol is designed to dogfood itself. Mergeproof bounties will fund Mergeproof development. Submitters will stake on their PRs to this repo. Bug hunters will review our code for money. The system validates itself by its own mechanism.

This is not a hypothetical. Once the protocol is live on testnet, every feature added to Mergeproof will go through the Mergeproof review process. If the incentives work, the code gets better. If they do not, we will know immediately because we are the first users.

The protocol works with any ERC-20 token. All stakes and rewards are percentage-based, so the economics scale identically whether the bounty is 10,000 USDC, 5 ETH, or 1,000,000 of a project's native token. This means any project, on any chain that bridges to Base, can use Mergeproof to incentivize quality reviews on their codebase.
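
Because everything is basis-point math on integer base units, the same code works regardless of token decimals. A sketch assuming a 10% stake ratio (all names illustrative):

```python
# Token-agnostic economics: identical basis-point splits whether the
# bounty is USDC (6 decimals), ETH wei (18 decimals), or a native token.
def split(bounty_units: int) -> dict:
    return {"submitter_stake": bounty_units * 1_000 // 10_000,  # 10% ratio
            "hunter_stake": bounty_units * 25 // 10_000,        # 0.25%
            "critical_reward": bounty_units * 1_000 // 10_000}  # 10% gross

usdc = split(10_000 * 10**6)   # 10,000 USDC in 6-decimal base units
eth = split(5 * 10**18)        # 5 ETH in wei
```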

The endgame is a world where "is this PR mergeproof?" is a standard question in software development. Not "did someone click approve on GitHub?" but "has this code survived an economically-incentivized review where people had real money on the line?" That is the bar every PR should clear. Mergeproof makes it possible.

For protocol mechanics and worked examples, see PROTOCOL.md. For architecture and contract details, see ARCHITECTURE.md. For the full specification, see SPEC.md.