Mindburn Labs

We build the control layer for AI agents — an open-source engine, a published standard, and tools for teams that need execution governance.

We built an autonomous trading system and hit a hard problem: nobody can reliably prove what AI agents do with real money, real data, or real infrastructure. Every action was a black box.

So we built HELM — an execution control layer that sits between agents and the tools they use, checks every action against your rules, blocks unsafe operations, and records a signed receipt for every decision. The engine is open source. The problem is universal.
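
In code, the core check-and-block loop looks roughly like the sketch below. This is a minimal Go illustration, not the actual HELM API; the Action type, the decide function, and the tool names are hypothetical.

    package main

    import "fmt"

    // Action is a hypothetical stand-in for an operation an agent proposes.
    type Action struct {
        Tool   string // e.g. "http.get", "payments.transfer"
        Target string
    }

    // decide applies block-by-default semantics: an action runs only if a
    // rule explicitly allows its tool; anything without a rule is blocked.
    func decide(a Action, allowedTools map[string]bool) string {
        if allowedTools[a.Tool] {
            return "allow"
        }
        return "block"
    }

    func main() {
        rules := map[string]bool{"http.get": true} // the only allowed tool
        fmt.Println(decide(Action{Tool: "http.get", Target: "example.com"}, rules))      // allow
        fmt.Println(decide(Action{Tool: "payments.transfer", Target: "acct-42"}, rules)) // block
    }

A real engine would also emit a signed receipt for every decision, allowed or blocked; the sketches under Operating Principles show what that can look like.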

Key Milestones

2025 Q4 — Founded. Thesis: AI agents need a safety layer, not better prompts.
2025 Q4 — Shipped HELM OSS v0.1.0 — open-source execution firewall for AI agents.
2025 Q4 — Shipped TITAN — 8-service autonomous trading system proving HELM under real capital.
2026 Q1 — Shipped HELM OSS v0.3.0 and launched the HELM Platform early access program: team controls and dashboards.
2026 Q2 — Published safety test suite, compliance tooling, enterprise connectors.
2026 H2 — Self-serve launch, EU AI Act compliance module.

Operating Principles

Proof First

Every action creates a tamper-proof record. No record means the action didn't happen.

Block by Default

Every action must be allowed by a rule. If there's no rule, the action is blocked. No shortcuts.

Works Offline

Every proof record can be checked without an internet connection. No server needed.
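
As a concrete illustration of both Proof First and Works Offline, here is a minimal sketch assuming Ed25519 signatures from Go's standard library; the actual HELM record format and algorithm choices are not shown here, and the record contents are hypothetical. The signature covers the record's exact bytes, and checking it needs only the record, the signature, and the signer's public key.

    package main

    import (
        "crypto/ed25519"
        "crypto/rand"
        "fmt"
    )

    func main() {
        // The signer (the engine) holds the private key; anyone holding the
        // public key can verify, with no server or network involved.
        pub, priv, _ := ed25519.GenerateKey(rand.Reader)

        record := []byte(`{"tool":"http.get","decision":"allow"}`)
        sig := ed25519.Sign(priv, record) // signature covers these exact bytes

        fmt.Println(ed25519.Verify(pub, record, sig)) // true: record is intact

        tampered := []byte(`{"tool":"payments.transfer","decision":"allow"}`)
        fmt.Println(ed25519.Verify(pub, tampered, sig)) // false: any edit breaks it
    }

Verification is pure computation over exported bytes; no server round-trip is involved at any point.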

Leadership

Ivan Peychev

Co-founder & CEO

Technical founder who designed and built the entire HELM system — the fail-closed execution firewall for AI agents. Architected the 8-package trusted computing base, signed receipt engine, and multi-SDK platform from first principles.

Aleksey Rusenov

Co-founder & Business Development

Drives enterprise adoption and ecosystem growth for the HELM safety standard. Brings entrepreneurial experience from VAIB and a technology education from CODE University of Applied Sciences.

Kirill Melnikov

Co-founder & COO

Serial founder with deep finance and operations background. Manages capital formation, strategic alliances, and the operational backbone of Mindburn Labs.

Why this becomes the default

Mindburn Labs builds the safety layer for AI agents. We believe the trust gap in AI-driven systems is the defining infrastructure problem of the decade.

Investment Thesis

Every AI agent will need safety rules

As AI agents go from demos to real work, they need something that checks their actions before they run — not after.

Standards win, not features

HTTPS won because it was a standard, not a product. HELM's test levels create the same effect for AI safety.

Open-source adoption turns into paying customers

Every developer who installs HELM OSS is a potential enterprise customer. Free adoption is the growth engine.

Proof is the moat

Others sell dashboards. We create tamper-proof records that work offline. You can't copy a proof system by building a nicer UI.

Traction

Safety Engine (Production): checks every action against your rules before it runs.
Proof Chain (Production): linked proof records that can't be tampered with.
Test Suite (L1/L2 complete): automated tests proving the engine works correctly.
Proof Records (v1.0 shipped): exportable, verifiable, offline-checkable proof bundles.
SDK Coverage (3 languages): Go, Python, TypeScript; works with any AI framework.
Enterprise Pipeline (active): team-wide safety controls and compliance tools in development.

Trust at Machine Speed

Every AI action creates a tamper-proof record. Records link together into a full audit trail.
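
A minimal sketch of one way such linking can work, assuming SHA-256 hash chaining; the real HELM chain format may differ, and the entry type here is hypothetical. Each record stores the hash of the record before it, so editing any past record breaks every link that follows.

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
    )

    // entry links each audit record to the one before it by hash.
    type entry struct {
        Data     string
        PrevHash string
    }

    func (e entry) hash() string {
        sum := sha256.Sum256([]byte(e.PrevHash + e.Data))
        return hex.EncodeToString(sum[:])
    }

    func main() {
        a := entry{Data: "allow http.get example.com"}
        b := entry{Data: "block payments.transfer acct-42", PrevHash: a.hash()}

        // b is valid only if its PrevHash still matches a recomputed hash of a.
        fmt.Println(b.PrevHash == a.hash()) // true: link intact
        a.Data = "allow payments.transfer acct-42"
        fmt.Println(b.PrevHash == a.hash()) // false: the edit breaks the link
    }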

Provenance chain: Git Commit (sha256:abc…) → SLSA 3 Builder (GitHub Actions) → Attestation (Sigstore / TUF) → Ledger Entry (mindburn.org)

Let's Talk

If you're interested in the safety infrastructure for AI agents, we'd love to hear from you.

[email protected]

Work on verifiable autonomy

We're building the safety infrastructure for autonomous AI. Every action checked. Every record tamper-proof. Every proof replayable.

Why Mindburn Labs

Remote First

Work from anywhere. Async-first communication. We optimize for deep focus.

Hard Problems

Formal verification, tamper-proof records, reliable AI execution — problems with permanent solutions.

Research Time

20% dedicated research time. Publish papers, build prototypes, explore ideas.

Real Impact

Your code runs in production. Every AI action checked, every proof record verifiable.

Early Equity

Meaningful ownership in infrastructure that will power the next generation of autonomous systems.

No BS Culture

Small team, flat structure, high trust. Ship code that matters.

Open Roles

We hire for capability and curiosity. If you see a role that fits, reach out with what you've built.

Founding Engineer — Safety Engine

Core Engineering · Remote (EU timezone preferred) · Full-time

Design and implement the AI safety engine — secure sandboxing, proof record generation, and safety test verification. You'll shape the core that every HELM deployment runs on.

We look for engineers who think in systems, build reliable code, and ship like operators.

  • Strong systems programming (Go, Rust, or C++)
  • Experience with sandboxing, WASM, or secure execution
  • Comfort with cryptographic primitives (signing, hashing, merkle trees)
  • Track record of shipping production infrastructure
Apply

Technical Writer / DevRel

Product · Remote · Full-time or Contract

Create documentation, quickstarts, and tutorials that make HELM adoptable in 5 minutes. Build the developer community around AI safety.

We look for writers who think like engineers: precise, technically grounded, and close to the code.

  • Strong technical writing with developer audience focus
  • Ability to read and understand Go/TypeScript codebases
  • Experience building developer documentation or API references
  • Bonus: experience with security/compliance tooling
Apply

Applied Researcher — Formal Methods

Research · Remote · Full-time

Formalize the guarantees we claim in our AI Safety Standard. Work on verification of sandbox security, proof chain integrity, and safety level proofs.

We look for researchers who can connect formal guarantees to practical engineering.

  • PhD or equivalent experience in formal methods, PL theory, or verification
  • Familiarity with model checking, theorem proving, or static analysis
  • Ability to bridge formal work with practical engineering
  • Interest in trust, governance, and autonomous systems
Apply

Problems We're Solving

These are the hard problems at the frontier of AI safety infrastructure. If you have ideas, we want to hear them.

Engine Engineering

Build the AI safety engine — action proposal pipelines, auto-block enforcement, resource metering.

Cryptography & Proofs

Design and implement linked proof chains and proof bundle formats.

Safety Testing & Verification

Build L1/L2/L3 test vectors, safety test runners, and formal verification tooling.

Applied AI Systems

Multi-vendor agent orchestration, trust sharing, and AI model analytics pipelines.

Infrastructure & DevOps

Multi-cluster deployment, CI/CD pipelines, observability, and fleet operations tooling.

Ready to build the future?

We're always looking for exceptional engineers and researchers. Send us what you've built — code speaks louder than resumes.

Get in touch

Building the infrastructure for reliable AI safety. Reach out to learn more about our products, research, or career opportunities.

Send us a message