# OSO RabbitMQ Backup
Back up your RabbitMQ messages without stopping the broker or losing a single message. OSO RabbitMQ Backup is a deployment-agnostic CLI tool that connects as a standard AMQP 0-9-1 client to non-destructively read messages from queues, export definitions via the Management HTTP API, and write compressed segments to pluggable cloud storage.
No broker restarts. No filesystem access. No messages consumed.
## Key Features
- Non-destructive backup -- messages stay in your queues via a cancel-and-requeue strategy
- AMQP 0-9-1 + Stream Protocol -- backs up classic queues, quorum queues, and stream-type queues
- Multi-cloud storage -- write to Amazon S3, Azure Blob Storage, Google Cloud Storage, or local filesystem
- Point-in-time restore (PITR) -- restore messages filtered by timestamp
- Definitions backup/restore -- export and import exchanges, queues, bindings, policies, and users via the Management HTTP API
- YAML-driven configuration -- one config file controls everything
- Prometheus metrics -- expose backup progress and health at /metrics
- Publisher confirms -- reliable restore with per-message acknowledgement
- Compressed segment format (RBAK) -- zstd or lz4 compressed segments with checksums
- Resumable backups -- SQLite checkpoints synced to remote storage so interrupted backups pick up where they left off
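Everything above is driven from a single YAML file. The fragment below is purely illustrative -- every key name, value, and port here is a hypothetical example of what such a config might contain, not the tool's actual schema:

```yaml
# Hypothetical config sketch -- field names are invented for illustration,
# not the tool's real schema.
rabbitmq:
  amqp_url: amqp://guest:guest@localhost:5672/
  management_url: http://localhost:15672
storage:
  backend: s3                      # s3 | azure | gcs | local
  bucket: my-rabbitmq-backups
  prefix: prod
compression: zstd                  # zstd | lz4
queues:
  - vhost: /
    name: orders
metrics:
  listen: 0.0.0.0:9419             # Prometheus scrape endpoint
```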
## Why This Tool?
RabbitMQ is a message broker, not a database -- it was never designed with backup in mind. Existing approaches all have serious drawbacks:
| Approach | Problem |
|---|---|
| Filesystem snapshot | Requires a broker shutdown, or risks corruption |
| rabbitmqctl export_definitions | Only captures topology, not messages |
| Shovel / Federation | Consumes messages from the source queue |
| Custom consumer | Consumes messages -- they leave the queue |
There is no existing tool that backs up RabbitMQ messages without stopping the broker or draining queues. OSO RabbitMQ Backup fills that gap. It uses a cancel-and-requeue strategy: it starts a consumer, reads a batch of messages, then cancels the consumer -- causing all unacknowledged messages to requeue automatically. Your queue depth stays exactly the same before and after the backup.
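The cancel-and-requeue invariant -- every message is copied out, yet queue depth is unchanged -- can be illustrated with a toy in-memory model. No broker is involved; `ToyQueue` below is a stand-in for illustration, not the tool's implementation:

```python
from collections import deque

class ToyQueue:
    """In-memory stand-in for a RabbitMQ queue (illustration only)."""
    def __init__(self, messages):
        self.ready = deque(messages)   # messages available for delivery
        self.unacked = []              # delivered but not yet acknowledged

    def deliver_batch(self, n):
        """A consumer reads up to n messages without acknowledging them."""
        batch = []
        while self.ready and len(batch) < n:
            msg = self.ready.popleft()
            self.unacked.append(msg)
            batch.append(msg)
        return batch

    def cancel_consumer(self):
        """Cancelling the consumer requeues every unacknowledged message."""
        while self.unacked:
            self.ready.appendleft(self.unacked.pop())

    def depth(self):
        return len(self.ready) + len(self.unacked)

q = ToyQueue([f"msg-{i}" for i in range(10)])
before = q.depth()
backup = q.deliver_batch(before)   # copy every message, ack nothing
q.cancel_consumer()                # all unacked messages requeue in order
after = q.depth()                  # depth is unchanged: before == after
```

The backup holds a copy of all ten messages, while the queue still contains all ten in their original order.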
For stream-type queues (x-queue-type: stream), backup is inherently non-destructive -- streams support offset-based reads, so messages are never consumed at all.
## Choose Your Path

### Quickstart

Get running in 5 minutes with a local RabbitMQ instance and filesystem storage.

### First Backup

Walk through a full backup-and-restore cycle step by step, with verification.

### CLI Reference

See every command, flag, and option the CLI supports.
## Architecture Overview
```
                         YAML Config
                              |
                              v
┌──────────────────────────────────────────────────────────────┐
│                     rabbitmq-backup CLI                      │
│                    (clap command router)                     │
└────────────┬────────────────────────────────────┬────────────┘
             │                                    │
             v                                    v
┌────────────────────────┐           ┌──────────────────────────┐
│  rabbitmq-backup-core  │           │    Prometheus Metrics    │
│                        │           │      (GET /metrics)      │
│  ┌──────────────────┐  │           └──────────────────────────┘
│  │  Backup Engine   │  │
│  │  - queue_reader  │  │
│  │  - stream_reader │  │
│  └───────┬──────────┘  │
│          │             │
│  ┌───────▼──────────┐  │
│  │  Segment Writer  │  │
│  │   (zstd / lz4)   │  │
│  └───────┬──────────┘  │
│          │             │
│  ┌───────▼──────────┐  │
│  │  Storage Layer   │  │
│  │  (object_store)  │  │
│  └──────────────────┘  │
└────────────┬───────────┘
             │
             v
┌──────────────────────────────────────────────┐
│               RabbitMQ Broker                │
│  ┌──────────┐  ┌──────────┐  ┌────────────┐  │
│  │   AMQP   │  │  Stream  │  │ Management │  │
│  │  :5672   │  │  :5552   │  │   :15672   │  │
│  └──────────┘  └──────────┘  └────────────┘  │
└──────────────────────────────────────────────┘
             │
             v
┌──────────────────────────────────────────────┐
│                Cloud Storage                 │
│                                              │
│  ┌─────┐  ┌───────┐  ┌─────┐  ┌──────────┐   │
│  │ S3  │  │ Azure │  │ GCS │  │  Local   │   │
│  │     │  │ Blob  │  │     │  │  Disk    │   │
│  └─────┘  └───────┘  └─────┘  └──────────┘   │
└──────────────────────────────────────────────┘
```
The CLI reads messages from RabbitMQ over AMQP 0-9-1 (or Stream Protocol for stream queues), compresses them into segments, and writes them to your chosen storage backend. Definitions are exported via the Management HTTP API. All operations are non-destructive -- your broker keeps running and your queues retain their messages.
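The Segment Writer step -- pack a batch of messages, compress it, checksum it -- can be sketched in a few lines of Python. The sketch substitutes the stdlib's `zlib` for zstd/lz4 and uses an invented byte layout; it shows the idea of length-prefixed messages plus a checksum header, not the actual RBAK format:

```python
import struct
import zlib

def write_segment(messages: list[bytes]) -> bytes:
    """Pack messages length-prefixed, compress, prepend checksum header.
    Illustrative only -- not the real RBAK layout."""
    payload = b"".join(struct.pack(">I", len(m)) + m for m in messages)
    compressed = zlib.compress(payload)
    checksum = zlib.crc32(payload)
    # Header: 4-byte magic, CRC32 of uncompressed payload, compressed size
    header = b"RBAK" + struct.pack(">II", checksum, len(compressed))
    return header + compressed

def read_segment(blob: bytes) -> list[bytes]:
    """Verify the checksum and unpack the messages."""
    assert blob[:4] == b"RBAK", "not a segment"
    checksum, size = struct.unpack(">II", blob[4:12])
    payload = zlib.decompress(blob[12:12 + size])
    assert zlib.crc32(payload) == checksum, "segment corrupted"
    messages, pos = [], 0
    while pos < len(payload):
        (n,) = struct.unpack(">I", payload[pos:pos + 4])
        messages.append(payload[pos + 4:pos + 4 + n])
        pos += 4 + n
    return messages

msgs = [b"order-created", b"order-paid", b"order-shipped"]
assert read_segment(write_segment(msgs)) == msgs   # lossless round trip
```

Checksumming the uncompressed payload means corruption is caught even if the compressed bytes happen to decompress without error.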
## Storage Data Layout
Each backup produces a self-contained directory structure:
```
{prefix}/{backup_id}/
├── manifest.json              # Backup metadata and statistics
├── definitions/
│   └── definitions.json.zst   # Compressed topology export
├── state/
│   └── offsets.db             # SQLite checkpoint (for resumable backups)
└── queues/
    └── {vhost}/
        └── {queue_name}/
            ├── segment-0001.zst   # Compressed message segments
            └── segment-0002.zst
```
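The directory skeleton can be scaffolded in a few lines of Python. The `scaffold_backup` helper, the backup id, and the manifest fields below are all hypothetical illustrations of the layout, not the tool's real output:

```python
import json
import tempfile
from pathlib import Path

def scaffold_backup(root: Path, backup_id: str, vhost: str, queue: str) -> Path:
    """Create the self-contained backup directory structure shown above.
    File contents are placeholders for illustration."""
    base = root / backup_id
    (base / "definitions").mkdir(parents=True)   # topology export lives here
    (base / "state").mkdir()                     # SQLite checkpoint lives here
    qdir = base / "queues" / vhost / queue       # one dir per backed-up queue
    qdir.mkdir(parents=True)
    manifest = {"backup_id": backup_id, "queues": [f"{vhost}/{queue}"]}
    (base / "manifest.json").write_text(json.dumps(manifest, indent=2))
    return base

root = Path(tempfile.mkdtemp())
base = scaffold_backup(root, "backup-2024-01-01", "my-vhost", "orders")
```

Because everything lives under one `{backup_id}` prefix, a backup can be copied, listed, or deleted as a single unit on any object store.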
## Design Heritage
This project follows the architecture established in osodevops/kafka-backup, our Apache Kafka backup tool. If you are familiar with kafka-backup, you will find the configuration style, segment format, and CLI conventions immediately recognizable. The same team maintains both projects.