# post3
**S3-compatible object storage you can run anywhere.**
post3 is a lightweight, self-hosted S3-compatible storage server written in Rust. Store objects in PostgreSQL or on the local filesystem — your choice, same API. Works with any S3 client: the AWS SDK, the AWS CLI, boto3, MinIO client, or plain curl.
## Why post3?
- **Drop-in S3 compatibility** — 20+ S3 operations, validated against the [Ceph s3-tests](https://github.com/ceph/s3-tests) conformance suite (124 tests passing)
- **Two backends, one API** — PostgreSQL (objects chunked into 1 MiB blocks) or local filesystem. Swap at startup with a flag.
- **Zero external dependencies for FS mode** — No database, no message queue, no cloud account. Just the binary and a directory.
- **Multipart uploads** — Full support for creating, uploading parts, completing, aborting, and listing multipart uploads. 5 GiB body limit.
- **Custom metadata** — `x-amz-meta-*` headers preserved and returned on GET/HEAD
- **Rust SDK included** — Ergonomic client wrapping `aws-sdk-s3` with sane defaults. One-liner setup.
- **Built on proven foundations** — axum, tokio, sqlx, tower. Production-grade async Rust.
## Quick Start
### Filesystem backend (no database needed)
```sh
# Build and run
cargo build --release -p post3-server
./target/release/post3-server serve --backend fs --data-dir /tmp/post3-data
```
### PostgreSQL backend
```sh
# Start PostgreSQL and the server
mise run up # docker compose up (PostgreSQL 18)
mise run dev # start post3-server on localhost:9000
```
### Try it out
```sh
# Create a bucket
curl -X PUT http://localhost:9000/my-bucket
# Upload an object
curl -X PUT http://localhost:9000/my-bucket/hello.txt \
-d "Hello, post3!"
# Download it
curl http://localhost:9000/my-bucket/hello.txt
# List objects
curl "http://localhost:9000/my-bucket?list-type=2"
# Delete
curl -X DELETE http://localhost:9000/my-bucket/hello.txt
curl -X DELETE http://localhost:9000/my-bucket
```
Or use the AWS CLI:
```sh
alias s3api='aws s3api --endpoint-url http://localhost:9000 --no-sign-request'
s3api create-bucket --bucket demo
s3api put-object --bucket demo --key readme.md --body README.md
s3api list-objects-v2 --bucket demo
s3api get-object --bucket demo --key readme.md /tmp/downloaded.md
```
## Rust SDK
```toml
[dependencies]
post3-sdk = { path = "crates/post3-sdk" }
```
```rust
use post3_sdk::Post3Client;
// Inside an async fn returning a Result:
let client = Post3Client::new("http://localhost:9000");
client.create_bucket("my-bucket").await?;
client.put_object("my-bucket", "hello.txt", b"Hello, world!").await?;
let data = client.get_object("my-bucket", "hello.txt").await?;
assert_eq!(data.as_ref(), b"Hello, world!");
// Large files — automatic multipart upload
client.multipart_upload("my-bucket", "big-file.bin", &large_data, 8 * 1024 * 1024).await?;
// List with prefix filtering
let objects = client.list_objects("my-bucket", Some("logs/")).await?;
```
Since `post3-sdk` re-exports `aws_sdk_s3`, you can drop down to the raw AWS SDK for anything the convenience API doesn't cover.
## Supported S3 Operations
| Category | Operations |
|----------|-----------|
| **Buckets** | CreateBucket, HeadBucket, DeleteBucket, ListBuckets, GetBucketLocation |
| **Objects** | PutObject, GetObject, HeadObject, DeleteObject |
| **Listing** | ListObjects (v1 & v2), ListObjectVersions, delimiter/CommonPrefixes |
| **Batch** | DeleteObjects (up to 1000 keys) |
| **Multipart** | CreateMultipartUpload, UploadPart, CompleteMultipartUpload, AbortMultipartUpload, ListParts, ListMultipartUploads |
| **Metadata** | Custom `x-amz-meta-*` headers on PUT, returned on GET/HEAD |
## Architecture
```
crates/
post3/ Core library — StorageBackend trait, PostgresBackend,
FilesystemBackend, models, migrations
post3-server/ HTTP server — axum-based, generic over any StorageBackend
post3-sdk/ Client SDK — wraps aws-sdk-s3 with ergonomic defaults
ci/ CI pipeline — custom Dagger-based build/test/package
```
The server is generic over `B: StorageBackend`. Both backends implement the same trait, so the HTTP layer doesn't know or care where bytes end up.
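To illustrate the shape of that abstraction, here is a deliberately simplified, synchronous sketch (the real trait in `crates/post3` is async and far richer; the method names and the `MemBackend` type here are hypothetical):

```rust
use std::collections::HashMap;

// Simplified sketch of a storage abstraction: the HTTP layer only
// talks to this trait and never learns where the bytes live.
trait StorageBackend {
    fn put_object(&mut self, bucket: &str, key: &str, data: Vec<u8>);
    fn get_object(&self, bucket: &str, key: &str) -> Option<&[u8]>;
}

// A toy in-memory backend; Postgres and filesystem backends would
// implement the same trait and be interchangeable at startup.
struct MemBackend {
    objects: HashMap<(String, String), Vec<u8>>,
}

impl StorageBackend for MemBackend {
    fn put_object(&mut self, bucket: &str, key: &str, data: Vec<u8>) {
        self.objects.insert((bucket.to_string(), key.to_string()), data);
    }

    fn get_object(&self, bucket: &str, key: &str) -> Option<&[u8]> {
        self.objects
            .get(&(bucket.to_string(), key.to_string()))
            .map(|v| v.as_slice())
    }
}
```

Swapping a backend is then a type parameter change, not an HTTP-layer change.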
**PostgreSQL backend** splits objects into 1 MiB blocks stored as `bytea` columns. Seven tables with `ON DELETE CASCADE` for automatic cleanup. Migrations managed by sqlx.
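The chunking step itself is straightforward; a minimal sketch of splitting a body into fixed-size blocks (the function name is hypothetical, but the 1 MiB block size matches the backend described above):

```rust
/// Split an object body into fixed-size blocks; the final block may be short.
/// post3's pg backend stores each block as a `bytea` row, keyed by index.
fn chunk_blocks(body: &[u8], block_size: usize) -> Vec<&[u8]> {
    body.chunks(block_size).collect()
}
```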
**Filesystem backend** uses percent-encoded keys, JSON metadata sidecars, and atomic writes (write-to-temp + rename). No database required.
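The write-to-temp + rename pattern can be sketched in a few lines of stdlib Rust (this is a simplified illustration of the technique, not the backend's actual code; a real implementation would also pick a collision-free temp name and handle the metadata sidecar):

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

/// Atomic write: stage the bytes in a temp file, then rename over the
/// destination. Readers never observe a partially written object.
fn atomic_write(dest: &Path, data: &[u8]) -> std::io::Result<()> {
    let tmp = dest.with_extension("tmp");
    {
        let mut f = fs::File::create(&tmp)?;
        f.write_all(data)?;
        f.sync_all()?; // flush to disk before the rename makes it visible
    }
    // rename(2) is atomic on POSIX when source and dest share a filesystem
    fs::rename(&tmp, dest)
}
```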
## S3 Compliance
post3 is validated against the [Ceph s3-tests](https://github.com/ceph/s3-tests) suite — the same conformance tests used by Ceph RGW, s3proxy, and other S3-compatible implementations.
```
124 passed, 0 failed, 0 errors
```
Run them yourself:
```sh
git submodule update --init
mise run test:s3-compliance # run tests
mise run test:s3-compliance:dry # list which tests would run
```
## Development
Requires [mise](https://mise.jdx.dev/) for task running.
```sh
mise run up # Start PostgreSQL via docker compose
mise run dev # Run the server (localhost:9000)
mise run test # Run all tests
mise run test:integration # S3 integration tests only
mise run check # cargo check --workspace
mise run build # Release build
mise run db:shell # psql into dev database
mise run db:reset # Wipe and restart PostgreSQL
```
### Examples
```sh
mise run example:basic # Bucket + object CRUD
mise run example:metadata # Custom metadata round-trip
mise run example:aws-sdk # Raw aws-sdk-s3 usage
mise run example:cli # AWS CLI examples
mise run example:curl # curl examples
mise run example:large # Large file stress test
mise run example:multipart # Multipart upload stress test
```
## Configuration
| Variable | Default | Description |
|----------|---------|-------------|
| `POST3_HOST` | `127.0.0.1:9000` | Address to bind |
| `DATABASE_URL` | — | PostgreSQL connection string (pg backend) |
| `--backend` | `pg` | Storage backend: `pg` or `fs` |
| `--data-dir` | — | Data directory (fs backend) |
## License
Licensed under the [MIT License](LICENSE).