The Product
meshql is the engine behind everything we deliver. Open source, battle-tested, and built by us. It turns schema definitions into federated APIs with REST writes, GraphQL reads, and cross-entity resolvers—out of the box.
Think of it like buying a suit. 95% off the shelf, 4% tailored, 1% bespoke.
Define your schema, get REST + GraphQL + federation. An LLM can generate a complete solution in minutes. Work that used to take teams months now takes a conversation.
Swap the backend without changing clients. The same API runs on your laptop with SQLite, on Lambda with a file system, or on Kafka with a 1000-node cluster. Choose the infrastructure that fits—the code doesn't change.
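A sketch of what that swap looks like in practice: entity definitions stay fixed while only the storage section changes per environment. The keys below are illustrative, not meshql's exact configuration format:

```yaml
# Entity definitions shared across every environment (hypothetical layout)
entities:
  - name: farm
    schema: ./schemas/farm.json

# Dev: embedded storage, zero infrastructure
storage:
  type: sqlite
  path: ./dev.db

# Production: point the same entities at PostgreSQL instead
# storage:
#   type: postgres
#   url: ${DATABASE_URL}
```

Clients never see the difference; the REST and GraphQL surface is generated from the entity definitions, not from the storage layer.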
When off-the-shelf doesn't fit, build the piece that's missing. One component, not the whole system. The framework doesn't fight you.
Perspective
Teams spend months debating Rust vs Java vs TypeScript based on benchmarks that vanish behind a single database query. We built the same 13-entity federated API in all three languages, against the same MongoDB infrastructure. Under load, they converge.
The difference between "0.7ms" and "1.8ms" is undetectable to any human or frontend. The real performance lever is infrastructure—indexing, connection pooling, sharding—not language choice. The right question isn't "which language is fastest?" It's "which language lets this team ship reliable software fastest?"
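The arithmetic behind the convergence claim: once a single database query enters the request path, language overhead becomes a rounding error. A back-of-envelope sketch, where the database and network figures are assumptions, not measurements:

```typescript
// Per-request costs in milliseconds. The 0.7 and 1.8 framework figures
// come from the comparison above; the rest are assumed round numbers.
const dbQuery = 5.0;        // one indexed MongoDB lookup (assumed)
const network = 50.0;       // client round-trip (assumed)
const rustOverhead = 0.7;   // framework + serialization
const javaOverhead = 1.8;

const rustTotal = dbQuery + network + rustOverhead;  // ≈ 55.7 ms
const javaTotal = dbQuery + network + javaOverhead;  // ≈ 56.8 ms

// The language gap is ~1.1 ms of a ~56 ms request: about 2%.
const gapShare = (javaTotal - rustTotal) / javaTotal;
console.log(`language accounts for ${(gapShare * 100).toFixed(1)}% of latency`);
```

Swap the assumed database cost for your own p99 and the conclusion rarely changes: the lever is the query, not the language.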
Portability
The same code runs everywhere. No rewrites between environments, no backend-specific logic leaking into your schemas, no "works on my machine."
merkql (embedded event log) or SQLite. Zero infrastructure. cargo run and you're done.
Lambda + EFS. $0 at idle, ~9 writes/sec. No database server, no ops team. A 3.8MB binary, 57ms cold start.
Lambda + Confluent Cloud. ~49 writes/sec, managed Kafka, near-zero ops. Same schemas, same clients. We swapped one configuration.
Kubernetes + PostgreSQL + Kafka. Thousands of nodes. The same API, the same schemas, the same clients.
You promote your environment, not rewrite your code.
Approach
LLMs understand meshql's schema-driven model natively. Describe your domain in natural language, get a working federated API. Config files, GraphQL schemas, JSON definitions, federation resolvers—all generated from a conversation.
Mesher takes this further: point it at a legacy database, describe the domain, and get a complete anti-corruption layer project. Schema introspection, domain mapping, code generation. Minutes, not months.
This isn't "AI-assisted." The framework was designed for it. Config-driven means LLM-generatable.
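The kind of artifact such a conversation produces is small enough to read in one sitting: a GraphQL read schema whose cross-entity fields are wired to federation resolvers. A hypothetical shape (type, field, and resolver names are illustrative, not generated output):

```graphql
type Farm {
  id: ID!
  name: String!
  coops: [Coop]    # federation: resolves Coop entities where farm_id = id
}

type Coop {
  id: ID!
  farm_id: ID!
  name: String!
}
```

Because the whole definition is declarative, an LLM can emit it, a reviewer can diff it, and the framework can validate it before anything runs.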
Services
We deliver working APIs, not proposals. Every engagement ships running software.
Config-driven APIs from schema definitions. AI-generated where possible. A working federated API in days, not quarters.
Match the runtime to the workload. Laptop for dev, serverless for prototypes, managed services for production, full cluster for enterprise. Same code, every time.
Clean API boundaries in front of systems that can't change. CDC, anti-corruption layers, event streaming. The source system stays untouched.
Work
Each example ships with working source, infrastructure configuration, and automated tests.
Legacy PostgreSQL with SCREAMING_SNAKE columns and VARCHAR dates. CDC pipeline surfaces clean domain entities. Three frontend apps, zero changes to the source.
34,936 plants across 167 countries. Point the CLI at the legacy schema, describe the domain, get a working API in minutes.
Two legacy databases—SAP ERP and distribution PostgreSQL—unified behind a single federated API. 13 services, one docker compose up.
Three frontend apps, four entities, eight federation resolvers, Docker and Kubernetes manifests. Built in under 30 minutes.
13 entities, three databases, materialized projections, and a Debezium CDC pipeline. Three frontend apps from three different teams.
More examples
Open Source
Everything under BSL 1.1—free for non-production use, commercial license for production.
Service
The original MeshQL. Schema-driven REST + GraphQL APIs with Express, federation resolvers, four database backends. Where the model was proven.
The same model in Java. Jetty 12, virtual threads, PostgreSQL, MongoDB, SQLite, ksqlDB. Better performance at the cost of more memory—a natural fit for virtual infrastructure.
The same model in Rust. Five pluggable backends, sub-millisecond internals, 3.8MB binary. Lambda, edge, embedded. The hard bits are done.
Storage
Kafka semantics backed by a merkle tree. Topics, partitions, consumer groups, and cryptographic inclusion proofs—without the cluster. The storage engine that makes "no server required" possible.
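The inclusion-proof idea is worth seeing concretely. The sketch below is a generic merkle tree with proof generation and verification, illustrative of the technique rather than merkql's actual implementation:

```typescript
import { createHash } from "crypto";

const sha256 = (data: string): string =>
  createHash("sha256").update(data).digest("hex");

// Build the tree bottom-up; returns every level, leaves first.
function buildTree(leaves: string[]): string[][] {
  let level = leaves.map(sha256);
  const levels = [level];
  while (level.length > 1) {
    const next: string[] = [];
    for (let i = 0; i < level.length; i += 2) {
      const right = level[i + 1] ?? level[i]; // duplicate last node if odd
      next.push(sha256(level[i] + right));
    }
    level = next;
    levels.push(level);
  }
  return levels;
}

// The sibling hashes needed to prove leaf `index` is in the tree.
function inclusionProof(levels: string[][], index: number): string[] {
  const proof: string[] = [];
  for (let d = 0; d < levels.length - 1; d++) {
    const level = levels[d];
    proof.push(level[index ^ 1] ?? level[index]);
    index = Math.floor(index / 2);
  }
  return proof;
}

// Recompute the root from a leaf and its proof path; compare to the trusted root.
function verify(leaf: string, index: number, proof: string[], root: string): boolean {
  let hash = sha256(leaf);
  for (const sibling of proof) {
    hash = index % 2 === 0 ? sha256(hash + sibling) : sha256(sibling + hash);
    index = Math.floor(index / 2);
  }
  return hash === root;
}

const records = ["evt-1", "evt-2", "evt-3", "evt-4"];
const levels = buildTree(records);
const root = levels[levels.length - 1][0];
const proof = inclusionProof(levels, 2);
console.log(verify("evt-3", 2, proof, root)); // true
```

A consumer holding only the root can check that any record is in the log with log(n) hashes, which is what lets an embedded event log offer Kafka-style guarantees without a broker to trust.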
Tooling
Points at a legacy database, introspects the schema, and generates a complete anti-corruption layer project. The AI-first onramp to MeshQL.
Git-style file-based issue tracker. Zero setup, stores issues as markdown, works equally well for humans and LLMs. How we track work.
Working Together
We look at what you have, map the integration points, and tell you what we think. Sometimes the answer is that you don't need us.
Architecture, implementation, and delivery. We build the integration layer, test it against your systems, and hand over the source. Your team operates it.
Production support and knowledge transfer. We stay as long as you need, then get out of the way.
Start a conversation