We built an autonomous trading system and hit a hard problem: nobody can reliably prove what AI agents do with real money, real data, or real infrastructure. Every action was a black box.
So we built HELM — an execution control layer that sits between agents and the tools they use, checks every action against your rules, blocks unsafe operations, and records a signed receipt for every decision. The engine is open source. The problem is universal.
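To make the pattern concrete, here is a minimal sketch in Go of check-then-sign: a deny-by-default rule lookup followed by a signed receipt for every decision. The Action, Decision, and Receipt types, the rule format, and the tool names are illustrative assumptions for this sketch, not HELM's actual API.

```go
package main

import (
	"crypto/ed25519"
	"crypto/sha256"
	"encoding/json"
	"fmt"
	"time"
)

// Action is what an agent proposes to do.
type Action struct {
	Tool string `json:"tool"`
	Args string `json:"args"`
}

// Decision records whether the action was allowed, and when.
type Decision struct {
	Action    Action    `json:"action"`
	Allowed   bool      `json:"allowed"`
	Timestamp time.Time `json:"timestamp"`
}

// Receipt binds a decision to a hash and a signature so anyone
// holding the public key can verify it later.
type Receipt struct {
	Decision  Decision `json:"decision"`
	Hash      [32]byte `json:"hash"`
	Signature []byte   `json:"signature"`
}

// check is deny-by-default: a tool with no matching rule is blocked.
func check(allow map[string]bool, a Action) bool {
	return allow[a.Tool] // missing key -> false -> blocked
}

// sign hashes the decision and signs the hash.
func sign(priv ed25519.PrivateKey, d Decision) Receipt {
	body, _ := json.Marshal(d) // sketch: error ignored for brevity
	h := sha256.Sum256(body)
	return Receipt{Decision: d, Hash: h, Signature: ed25519.Sign(priv, h[:])}
}

func main() {
	_, priv, _ := ed25519.GenerateKey(nil) // nil reader -> crypto/rand
	allow := map[string]bool{"read_market_data": true}
	for _, a := range []Action{
		{Tool: "read_market_data", Args: "AAPL"},
		{Tool: "place_order", Args: "BUY 100 AAPL"}, // no rule: blocked
	} {
		d := Decision{Action: a, Allowed: check(allow, a), Timestamp: time.Now().UTC()}
		r := sign(priv, d)
		fmt.Printf("%s allowed=%v sig=%x...\n", a.Tool, d.Allowed, r.Signature[:8])
	}
}
```

The property the pattern buys: the decision and its evidence are produced in the same step, so there is no path where an action runs without leaving a receipt.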
Operating Principles
Proof First
Every action creates a tamper-proof record. No record means the action didn't happen.
Block by Default
Every action must be allowed by a rule. If there's no rule, the action is blocked. No shortcuts.
Works Offline
Every proof record can be checked without an internet connection. No server needed.
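Continuing the hypothetical Receipt and Decision types from the sketch above, offline checking can be as small as one function: re-derive the hash from the decision, then check the signature against the signer's public key. Nothing here touches the network.

```go
// verify extends the earlier sketch; Receipt and Decision are the
// hypothetical types defined there.
func verify(pub ed25519.PublicKey, r Receipt) bool {
	body, err := json.Marshal(r.Decision)
	if err != nil {
		return false
	}
	h := sha256.Sum256(body)
	// Both links must hold: decision -> hash, and hash -> signature.
	return h == r.Hash && ed25519.Verify(pub, h[:], r.Signature)
}
```

Anyone holding the public key can run this on an air-gapped machine; the receipt carries everything else it needs.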
Leadership
Ivan Peychev
Technical founder who designed and built the entire HELM system — the fail-closed execution firewall for AI agents. Architected the 8-package trusted computing base, signed receipt engine, and multi-SDK platform from first principles.
Aleksey Rusenov
Drives enterprise adoption and ecosystem growth for the HELM safety standard. Brings entrepreneurial experience from VAIB and a technology education from CODE University of Applied Sciences.
Kirill Melnikov
Serial founder with deep finance and operations background. Manages capital formation, strategic alliances, and the operational backbone of Mindburn Labs.
Why this becomes the default
Mindburn Labs builds the safety layer for AI agents. We believe the trust gap in AI-driven systems is the defining infrastructure problem of the decade.
Investment Thesis
Every AI agent will need safety rules
As AI agents go from demos to real work, they need something that checks their actions before they run — not after.
Standards win, not features
HTTPS won because it was a standard, not a product. HELM's L1/L2/L3 safety levels create the same effect for AI safety.
Open-source adoption turns into paying customers
Every developer who installs HELM OSS is a potential enterprise customer. Free adoption is the growth engine.
Proof is the moat
Others sell dashboards. We create tamper-proof records that work offline. You can't copy a proof system by building a nicer UI.
Trust at Machine Speed
Every AI action creates a tamper-proof record. Records link together into a full audit trail.
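One common way to get that linking, sketched here in Go with a simplified record format of our own rather than HELM's actual one: each record carries the hash of its predecessor, so editing any past record breaks every later link.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// Record links to its predecessor by hash. Editing any earlier record
// changes its hash, which breaks every link after it.
type Record struct {
	Payload  string
	PrevHash [32]byte
}

// Hash covers both the payload and the link to the previous record.
func (r Record) Hash() [32]byte {
	return sha256.Sum256(append(r.PrevHash[:], []byte(r.Payload)...))
}

// verifyChain recomputes every link; any mismatch reveals tampering.
func verifyChain(records []Record) bool {
	var prev [32]byte // genesis record links to an all-zero hash
	for _, r := range records {
		if r.PrevHash != prev {
			return false
		}
		prev = r.Hash()
	}
	return true
}

func main() {
	var chain []Record
	var prev [32]byte
	for _, p := range []string{"check: read_market_data", "block: place_order"} {
		r := Record{Payload: p, PrevHash: prev}
		chain = append(chain, r)
		prev = r.Hash()
	}
	fmt.Println("intact:", verifyChain(chain)) // true

	chain[0].Payload = "allow: place_order"      // rewrite history
	fmt.Println("tampered:", verifyChain(chain)) // false
}
```

The audit trail stays honest not because anyone guards it, but because the arithmetic stops working the moment it is edited.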
Let's Talk
If you're interested in the safety infrastructure for AI agents, we'd love to hear from you.
[email protected]
Work on verifiable autonomy
We're building the safety infrastructure for autonomous AI. Every action checked. Every record tamper-proof. Every proof replayable.
Why Mindburn Labs
Remote First
Work from anywhere. Async-first communication. We optimize for deep focus.
Hard Problems
Formal verification, tamper-proof records, reliable AI execution — problems with permanent solutions.
Research Time
20% dedicated research time. Publish papers, build prototypes, explore ideas.
Real Impact
Your code runs in production. Every AI action checked, every proof record verifiable.
Early Equity
Meaningful ownership in infrastructure that will power the next generation of autonomous systems.
No BS Culture
Small team, flat structure, high trust. Ship code that matters.
Open Roles
We hire for capability and curiosity. If you see a role that fits, reach out with what you've built.
Founding Engineer — Safety Engine
Design and implement the AI safety engine — secure sandboxing, proof record generation, and safety test verification. You'll shape the core that every HELM deployment runs on.
We look for engineers who think in systems, build reliable code, and ship like operators.
- Strong systems programming (Go, Rust, or C++)
- Experience with sandboxing, WASM, or secure execution
- Comfort with cryptographic primitives (signing, hashing, Merkle trees)
- Track record of shipping production infrastructure
Technical Writer / DevRel
Create documentation, quickstarts, and tutorials that make HELM adoptable in 5 minutes. Build the developer community around AI safety.
We look for writers who think in systems, read code fluently, and ship like operators.
- Strong technical writing with developer audience focus
- Ability to read and understand Go/TypeScript codebases
- Experience building developer documentation or API references
- Bonus: experience with security/compliance tooling
Applied Researcher — Formal Methods
Formalize the guarantees we claim in our AI Safety Standard. Work on verification of sandbox security, proof chain integrity, and safety level proofs.
We look for researchers who think in systems, prove real guarantees, and ship like operators.
- PhD or equivalent experience in formal methods, PL theory, or verification
- Familiarity with model checking, theorem proving, or static analysis
- Ability to bridge formal work with practical engineering
- Interest in trust, governance, and autonomous systems
Problems We're Solving
These are the hard problems at the frontier of AI safety infrastructure. If you have ideas, we want to hear them.
Engine Engineering
Build the AI safety engine — action proposal pipelines, auto-block enforcement, resource metering.
Cryptography & Proofs
Design and implement linked proof chains and proof bundle formats.
Safety Testing & Verification
Build L1/L2/L3 test vectors, safety test runners, and formal verification tooling.
Applied AI Systems
Multi-vendor agent orchestration, trust sharing, and AI model analytics pipelines.
Infrastructure & DevOps
Multi-cluster deployment, CI/CD pipelines, observability, and fleet operations tooling.
Ready to build the future?
We're always looking for exceptional engineers and researchers. Send us what you've built — code speaks louder than resumes.