
Product Docs


Overview

  • Who are we

  • Team & Jobs

The Impossible Bench

  • What is it

  • What's in it for you

  • Who should apply

Selection Process

Admin stuff

FAQ

What is it

Some claim that fully autonomous SWE is around the corner, that devs will be useless in a few months.

“We just killed software engineering.”

Their sales pitch? Saturated benchmarks. But these evaluations don’t represent the work we do every day as great engineers on a complex codebase.

Coding is not just vibecoding.

It’s a great narrative to push when you're a coding-agent company raising money, but it's simply not true.

The best engineers in the world can perform tasks that are very far beyond the actual capabilities of LLMs.

And to prove it, we need to set up a fair game. Code that has never been seen by LLMs. Well-scoped and feasible tasks.

We will fund 100 incredible engineers working on 100 closed-source, hardcore projects. From this work, we will create the ultimate benchmark.

A coding benchmark, with a 0% success rate from LLMs.

Team & Jobs

The Gang

We are a team of 7: the growth, finance, and tech teams, along with our two co-founders.

The tech team has back-end and front-end devs, each with their own talents and specialties. They are the ones who build the product you see today, and who fix bugs in under 30 minutes. Yep, they are that responsive 😉

The finance team speaks for itself! They make sure that we're not financing weird stuff and that those who use our platform are legit users.

The growth team is probably the most "visible". They're the ones contacting you daily and making sure everything is going well. We are based in San Francisco. Every month, we get together for a team off-site, because we like each other :).

Jobs

Currently, we do not have any job openings! But hey, if we do, we will be sure to announce it.

Who are we

We find and fund elite developers working on side projects so complex that AI agents fail to help them code.

Yep, that's us. But how did we come to this?

2023 - 2025: Becoming the "Open Source home"

We started in 2023 by building a platform to contribute to open source.

Over the years, we’ve worked to:

  • Make it easy for any maintainer to get visibility, contributors, and actual help

  • Make it easy for any contributor to find a project and a good first issue (without reading 14 outdated READMEs)

  • Provide a "matching engine" where contributors are recommended projects based on their skills and interests

  • Show metrics for maintainers: who's looking at your project, who applied, and how experienced they are

2025 - now:

We spent years mastering something most companies never crack: finding elite developers and funding their best work.

By 2024, we'd distributed $15M to 400 open source projects, building expertise in what makes truly exceptional code and how to fund it without micromanagement.

Then we moved to San Francisco, and the top AI labs pulled us aside with a problem.

They confessed a hard reality: their coding agents were acing toy benchmarks but failing catastrophically on real-world systems, the kind of intricate, closed-source, deeply contextual code that OnlyDust's network lived for.

That’s when it clicked.

We reoriented everything toward a new mission: funding engineers whose code breaks AI.

Today, OnlyDust (operating under our Ctrl+G brand for AI labs) partners with the world’s top research teams to turn elite, private codebases into the most rigorous coding benchmarks ever built.

We no longer just fund open source. We fund the impossible, so machines can one day catch up to human ingenuity.

Selection Process

  1. Kickoff, with JB, Head of DevRel: Check motivation, 6+ month scope, and 10 to 15 h/week availability. Review the contract. If it's a fit, book the tech interview.

  2. Before the tech interview: Have at least 10 to 15 hours of code started. Send a short project brief, roadmap, and key challenges 24 hours ahead. Share GitHub access.

  3. Technical interview, with Antho or Olivier (both senior back-end engineers): Evaluate complexity, potential, and code quality. Written feedback and a go or no-go within 24 hours.

  4. Admin validation: If accepted, you get a confirmation email and the contract.

  5. One-month review after contract signature: Committee checks progress and quality. Decision based on delivery, effort, and rule compliance.

  6. Payment and legal: On approval, finance pays the first quarterly €7,500.

Admin stuff

Contract

We will sign a license contract together, with the following principles:

  • You license your repo(s) to OnlyDust for AI use only (benchmarking, training, eval, improving models). The license is exclusive, worldwide, transferable, sublicensable, and irrevocable.

  • If your code stays private, you can still build and sell products with it, just not for AI benchmarking or training purposes.

  • You can open-source your code after 12 months.

Pay

  • Total amount: €30,000 per year, renewable once (2 years in total)

  • Pay: €7,500 every quarter, subject to a committee review (the committee checks that the code has advanced enough during the past 3 months and that the quality is still up to expectations)

  • 1st pay timing:

    • If you have already started a project with an extensive codebase we can review: €7,500 at contract signature.

    • If you haven't started yet and want to use this grant to fund an idea: €7,500 one month after contract signature, subject to review (quantity and quality of the work during that first month).

What's in it for you

Here’s what you get if selected for the Impossible Benchmark Program:

  • Your work becomes the new frontier: your contribution has a direct impact on improving the models & agents of the world's top AI labs (think, for example, OpenAI, Anthropic, DeepMind, Mistral, ...).

  • Invitation to San Francisco (for top contributors): Meet researchers from top AI labs, share your workflow, and shape how AI learns to code like elite engineers, not just autocomplete.

  • €30,000/year grant (paid quarterly, €7,500 per installment), renewable for up to 2 years, no equity taken, no strings attached beyond the program terms. It's a grant, pure and simple, to empower you to keep building on YOUR project.

  • Full ownership of your code: You retain all rights. We only ask for a time-bound, exclusive license to use your closed-source project to build AI benchmarks.

  • Zero pressure to “productize”: This isn’t about building a business. It’s about advancing hard, real-world engineering. If your project solves a meaningful problem and confuses AI, that’s enough.

  • A community of peers: Join 100 of the world’s most technically rigorous builders, people who code in Rust, C++, Go, embedded systems, compilers, and other domains where correctness, context, and complexity matter more than syntax.

FAQ

The questions you might have. This page is updated regularly; contact us if you can't find your answer here.

❓ Not sure I understand the program

Ok cool program, but I don't have 10 to 15 hours per week

  • It's an average, not a strict requirement

  • Some weeks you'll do 3 hours, others you'll binge an entire weekend

  • If you're in France between Christmas and New Year, we're not tracking your time

  • We care about monthly progress, not weekly time logs

What we actually measure:

  • Is the project moving forward?

  • Are you delivering what you said you would at quarterly reviews?

  • Is the code quality up to standard?

Ok cool program, but I don't have a project or ideas

Then this program isn't for you right now, and that's okay.

Here's why we require existing projects or clear ideas:

What we fund: Builders who are already scratching their own itch, solving problems they're obsessed with, and would code this regardless of money.

What we don't fund: Developers looking for project ideas or trying to come up with something "hard for AI" just to get funding.

Why this matters:

  • Projects without intrinsic motivation rarely survive 6+ months

  • The best code comes from solving real problems you deeply understand

  • Labs can tell the difference between genuine complexity and artificial difficulty

What you can do instead:

  1. Build your confidence through open source: contribute to existing projects and get comfortable with complex codebases (we even have a discovery platform for this)

  2. Wait until you have an itch to scratch: the best projects come from frustration with existing solutions

  3. Start small on your own: code something for 20-30 hours first, see if it excites you, then apply

I would make more money freelancing, the pay is not high enough

You're right. If this were a freelance contract, the rate would be low. But here's the key difference: this is a grant to fund a project you'd build anyway, not payment for contract work. This is NOT a salary, it's a grant. Think of it this way:

  • If you're already spending evenings/weekends on a side project you're passionate about, this grant gives you €30k to keep doing exactly that

  • It's not about replacing your income, it's about funding your ambition

  • The real value compounds: you keep all IP rights for commercial use, get visibility with top AI labs (OpenAI, Anthropic, DeepMind), and potentially get flown to SF to meet researchers

  • If your goal is to maximize your hourly rate, freelancing is definitely better. But if you're building something you believe in and want funding without investor dilution or client management, this is designed for that.

I want a full-time job, not a grant

This isn't employment; it's funding for independent builders.

What this is:

  • A grant to fund YOUR project while you maintain autonomy

  • You work when you want, how you want, with quarterly check-ins

  • You keep all commercial rights to build a business if you want

What this isn't:

  • A job interview in disguise

  • A probation period before full-time employment

  • A contractor relationship with daily management

If you want full-time employment:

  • The SF trip for top performers might lead to intros with labs who ARE hiring

  • Your project could become a business you run full-time (many past grant recipients did this)

  • Working on this kind of initiative is a great signal for employers, and definitely something you can flex on your resume

Can I use AI extensively to code on my project?

The problem with AI-heavy development for our use case:

  • If you iterate with Claude for 3.5 hours instead of coding for 5 hours, you're essentially having Claude write your code

  • That creates synthetic data: code written in AI patterns, by AI logic

  • When we create benchmarks from that code, we're testing AI on AI-generated patterns

  • The evaluation becomes circular and useless

What we're actually looking for:

  • Code with YOUR mental models, YOUR architecture decisions, YOUR problem-solving patterns

  • Projects where AI agents currently fail because they haven't seen this thinking before

  • Genuine human expertise at the frontier, not AI-assisted approximations

What IS allowed:

  • Autocompletion is fine (like GitHub Copilot's inline suggestions), you're still structuring the logic

  • Occasional use when you're truly stuck on something (ask us first if unsure)

  • A 50/50 workflow: one day exploring with Claude, the next day refactoring/rewriting by hand

The self-assessment test: If you're thinking "without AI, I'd go 10x slower", your project isn't complex enough for current models. That's actually GOOD for normal development, but it's not what labs need for frontier benchmarks.

Bottom line: We're paying you to write code that AI can't write yet. If AI is writing most of it, we're funding the wrong thing.

I work in a team / want to collaborate with others

This is not an issue; many options are possible to make this work. Apply anyway, and we will discuss it if you're selected.

⚖️ Legal questions

Can I do this alongside my full-time job?

Generally, yes, you can do this in addition to a full-time job, as long as:

  • Your employment contract doesn't have a strict exclusivity clause preventing side projects

  • You're not working on something that directly competes with your employer

  • You're doing this on your own time (evenings/weekends)

Many of our developers have full-time jobs; this is specifically designed for people with day jobs who code on side projects. Your employer typically can't prevent you from working on personal projects unless explicitly stated in your contract.

Do I need to create a legal status for legal compliance and handle taxation?

Usually, yes, but it depends on where you're currently living. We have experience with these topics and can help.

The licensing terms bother me: indefinite exclusive license for AI use

Let's break down what the license actually means, because we drafted it to be as developer-friendly as possible while meeting lab requirements.

What we CAN do (and labs can do):

  • Use your code to create benchmarks and evaluations

  • Train models using your code (post-training, pre-training)

That's it. Literally just AI training and evaluation purposes.

What we CANNOT do:

  • Launch a commercial product using your code

  • Compete with you if you productize your project

  • Sublicense it for non-AI purposes

  • Prevent you from using your own code

What YOU keep:

  • Full commercial rights for non-AI use

  • You can build a SaaS, sell licenses, offer services, whatever you want

  • You own the code; we just have specific usage rights

  • If you want to start your own AI lab someday, we can discuss it (that's the only restriction)

Why this structure?

  • Labs need assurance that the benchmark won't be contaminated by open-sourcing too early

  • You need assurance that you're not signing away your business future

  • The contract is 1.5 pages in plain English; if a sentence confuses you, it's our fault, not legalese

Context matters: This isn't a typical "company wants your IP" situation. We're not building products with your code. We're using it to measure where AI fails so it can improve. You get paid, keep your upside, and help advance the field.

Who should apply

This program isn't for everyone, and that's by design. Below are some key traits of ideal candidates:

  • You already have a project started or a clear idea you want to launch: this isn't about finding you a project, it's about funding the one you're already passionate about.

  • AI agents struggle with your codebase: when you try to use LLMs to assist, they hallucinate, produce broken logic, or require 10+ iterations to get anything usable. (If your features get built in "one-shot" with autocomplete, your project is likely too simple for this program.)

  • You code primarily by hand, using AI only for minor autocompletion, not as a co-pilot for core logic or architecture. The goal is human-original reasoning, not synthetic data.

  • You're committed to 10–15 hours per week over the next 6–12 months to advance your project. Not as a freelancer-for-hire, but as a passionate builder solving a problem you deeply care about.

  • You retain full ownership of your code, but are willing to grant a license so we can turn your work into rigorous, private benchmarks for top AI labs.

  • Your project is closed source (for now); you can open-source it later, but it needs to stay private during the grant period.

  • You're not in it just for the money: the €30k/year grant (paid quarterly) is meant to enable your vision, not replace a salary. We fund builders, not contractors.

If this sounds like you, or if you're unsure but your code regularly makes AI "give up", apply anyway. Many of our strongest candidates initially thought, "My project isn't hard enough." Spoiler: it probably is. Don't self-reject.