feat: add post3 s3 proxy for postgresql
Signed-off-by: kjuulh <contact@kjuulh.io>
This commit is contained in:
1
.gitignore
vendored
Normal file
1
.gitignore
vendored
Normal file
@@ -0,0 +1 @@
|
|||||||
|
target/
|
||||||
3
.gitmodules
vendored
Normal file
3
.gitmodules
vendored
Normal file
@@ -0,0 +1,3 @@
|
|||||||
|
[submodule "s3-tests"]
|
||||||
|
path = s3-tests
|
||||||
|
url = https://github.com/ceph/s3-tests.git
|
||||||
77
CLAUDE.md
Normal file
77
CLAUDE.md
Normal file
@@ -0,0 +1,77 @@
|
|||||||
|
# post3 — Pluggable S3-Compatible Storage
|
||||||
|
|
||||||
|
## Project Overview
|
||||||
|
|
||||||
|
**post3** = **Post**greSQL + S**3**. An S3-compatible storage system with pluggable backends. Objects can be stored in PostgreSQL (split into 1 MiB blocks in `bytea` columns) or on the local filesystem.
|
||||||
|
|
||||||
|
## Architecture
|
||||||
|
|
||||||
|
- **`crates/post3/`** — Core library crate. Contains the `StorageBackend` trait, `PostgresBackend`, `FilesystemBackend`, repository layer, models, error types, and SQL migrations.
|
||||||
|
- **`crates/post3-server/`** — Binary + lib crate. S3-compatible HTTP server using axum. Generic over `B: StorageBackend` — works with any backend.
|
||||||
|
- **`crates/post3-sdk/`** — Client SDK wrapping `aws-sdk-s3` with ergonomic defaults (dummy creds, path-style, us-east-1). Re-exports `aws_sdk_s3` for advanced use.
|
||||||
|
- **`ci/`** — Custom CI pipeline using `dagger-sdk` directly. Builds, tests, and packages in containers.
|
||||||
|
|
||||||
|
## Development Commands (mise)
|
||||||
|
|
||||||
|
```sh
|
||||||
|
mise run up # Start PostgreSQL (docker compose)
|
||||||
|
mise run down # Stop PostgreSQL + remove volumes
|
||||||
|
mise run check # cargo check --workspace
|
||||||
|
mise run dev # Run the server (localhost:9000)
|
||||||
|
mise run test # Run all tests (starts PG first)
|
||||||
|
mise run test:integration # Run S3 integration tests only
|
||||||
|
mise run db:shell # psql into dev database
|
||||||
|
mise run db:reset # Wipe and restart PostgreSQL
|
||||||
|
mise run build # Release build
|
||||||
|
mise run ci:pr # Run CI PR pipeline via Dagger
|
||||||
|
mise run ci:main # Run CI main pipeline via Dagger
|
||||||
|
mise run example:basic # Run basic SDK example (requires server)
|
||||||
|
mise run example:metadata # Run metadata example (requires server)
|
||||||
|
mise run example:aws-sdk # Run raw aws-sdk-s3 example (requires server)
|
||||||
|
mise run example:cli # Run AWS CLI example (requires server + aws CLI)
|
||||||
|
mise run example:curl # Run curl example (requires server)
|
||||||
|
mise run example:large # Run large file stress test (requires server)
|
||||||
|
```
|
||||||
|
|
||||||
|
## Environment
|
||||||
|
|
||||||
|
- **DATABASE_URL**: `postgresql://devuser:devpassword@localhost:5435/post3_dev`
|
||||||
|
- **POST3_HOST**: `127.0.0.1:9000`
|
||||||
|
- PostgreSQL 18 on port **5435** (avoids conflicts with other projects)
|
||||||
|
|
||||||
|
## Key Patterns
|
||||||
|
|
||||||
|
- **`StorageBackend` trait** — Pluggable storage via `impl Future<...> + Send` desugared async methods (edition 2024). Server is generic over `B: StorageBackend`.
|
||||||
|
- **`PostgresBackend`** (alias `Store`) — PostgreSQL backend using sqlx repos + 1 MiB block chunks
|
||||||
|
- **`FilesystemBackend`** — Local filesystem backend using percent-encoded keys, JSON metadata, atomic writes
|
||||||
|
- **notmad 0.11** for component lifecycle (native async traits, no async_trait)
|
||||||
|
- **sqlx** with `PgPool` for database access; migrations at `crates/post3/migrations/`
|
||||||
|
- **axum 0.8** with `{param}` path syntax and `{*wildcard}` for nested keys
|
||||||
|
- Trailing slash routes duplicated for AWS SDK compatibility (`/{bucket}` + `/{bucket}/`)
|
||||||
|
- Body limit set to 5 GiB via `DefaultBodyLimit`
|
||||||
|
- S3 multipart upload supported: CreateMultipartUpload, UploadPart, CompleteMultipartUpload, AbortMultipartUpload, ListParts, ListMultipartUploads
|
||||||
|
- Query param dispatch: PUT/GET/DELETE/POST on `/{bucket}/{*key}` dispatch by `?uploads`, `?uploadId`, `?partNumber`
|
||||||
|
- Handlers use turbofish `::<B>` in router for generic dispatch
|
||||||
|
- Tests use `aws-sdk-s3` with `force_path_style(true)` and dummy credentials
|
||||||
|
|
||||||
|
## Database Schema
|
||||||
|
|
||||||
|
7 tables: `buckets`, `objects`, `object_metadata` (KV registry), `blocks` (1 MiB chunks), `multipart_uploads`, `multipart_upload_metadata`, `upload_parts`. All use `ON DELETE CASCADE` for cleanup.
|
||||||
|
|
||||||
|
## Testing
|
||||||
|
|
||||||
|
- **PostgreSQL integration tests** in `crates/post3-server/tests/s3_integration.rs` — spin up a real server per test on an ephemeral port. Each test gets its own `PgPool` and cleans the database. Tests must run with `--test-threads=1` to avoid DB conflicts.
|
||||||
|
- **Filesystem integration tests** in `crates/post3-server/tests/fs_integration.rs` — same HTTP-level tests but using `FilesystemBackend` with a temp directory. No PostgreSQL required.
|
||||||
|
- **Filesystem unit tests** in `crates/post3/src/fs.rs` — direct backend method tests.
|
||||||
|
|
||||||
|
## Roadmap (see `todos/`)
|
||||||
|
|
||||||
|
- **POST3-008**: Client SDK crate — **Done** (`crates/post3-sdk/`)
|
||||||
|
- **POST3-009**: CI pipeline — **Done** (custom `ci/` crate using `dagger-sdk` directly)
|
||||||
|
- **POST3-010**: Production Docker Compose (Dockerfile, health endpoint, compose)
|
||||||
|
- **POST3-011**: Usage examples — **Done** (Rust examples, AWS CLI, curl, large file stress test)
|
||||||
|
- **POST3-012**: Authentication (SigV4 verification, API keys table, admin CLI)
|
||||||
|
|
||||||
|
## CI Pattern
|
||||||
|
|
||||||
|
Custom `ci/` crate using `dagger-sdk` (v0.19) directly — self-contained, no external dagger-components dependency. Subcommands: `pr` (check + test + build + package) and `main` (same, no publish yet). Uses PostgreSQL 18 as a Dagger service container for integration tests. Skeleton source + dependency-only prebuild for cargo layer caching. mold linker for fast linking. Final image: `debian:bookworm-slim`.
|
||||||
4527
Cargo.lock
generated
Normal file
4527
Cargo.lock
generated
Normal file
File diff suppressed because it is too large
Load Diff
42
Cargo.toml
Normal file
42
Cargo.toml
Normal file
@@ -0,0 +1,42 @@
|
|||||||
|
[workspace]
|
||||||
|
members = ["crates/*", "ci"]
|
||||||
|
resolver = "2"
|
||||||
|
|
||||||
|
[workspace.package]
|
||||||
|
version = "0.1.0"
|
||||||
|
edition = "2024"
|
||||||
|
|
||||||
|
[workspace.dependencies]
|
||||||
|
post3 = { path = "crates/post3" }
|
||||||
|
post3-sdk = { path = "crates/post3-sdk" }
|
||||||
|
|
||||||
|
anyhow = "1"
|
||||||
|
tokio = { version = "1", features = ["full"] }
|
||||||
|
tracing = { version = "0.1", features = ["log"] }
|
||||||
|
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
|
||||||
|
clap = { version = "4", features = ["derive", "env", "string"] }
|
||||||
|
dotenvy = "0.15"
|
||||||
|
serde = { version = "1", features = ["derive"] }
|
||||||
|
uuid = { version = "1", features = ["v4", "v7"] }
|
||||||
|
bytes = "1"
|
||||||
|
chrono = { version = "0.4", features = ["serde"] }
|
||||||
|
thiserror = "2"
|
||||||
|
axum = "0.8"
|
||||||
|
tower = "0.5"
|
||||||
|
tower-http = { version = "0.6", features = ["trace", "normalize-path"] }
|
||||||
|
notmad = "0.11"
|
||||||
|
tokio-util = { version = "0.7", features = ["compat"] }
|
||||||
|
sqlx = { version = "0.8", features = [
|
||||||
|
"chrono",
|
||||||
|
"postgres",
|
||||||
|
"runtime-tokio",
|
||||||
|
"uuid",
|
||||||
|
] }
|
||||||
|
md-5 = "0.10"
|
||||||
|
hex = "0.4"
|
||||||
|
quick-xml = { version = "0.36", features = ["serialize"] }
|
||||||
|
serde_json = "1"
|
||||||
|
percent-encoding = "2"
|
||||||
|
tempfile = "3"
|
||||||
|
dagger-sdk = "0.19"
|
||||||
|
eyre = "0.6"
|
||||||
11
ci/Cargo.toml
Normal file
11
ci/Cargo.toml
Normal file
@@ -0,0 +1,11 @@
|
|||||||
|
[package]
|
||||||
|
name = "ci"
|
||||||
|
version = "0.1.0"
|
||||||
|
edition = "2024"
|
||||||
|
publish = false
|
||||||
|
|
||||||
|
[dependencies]
|
||||||
|
dagger-sdk.workspace = true
|
||||||
|
eyre.workspace = true
|
||||||
|
tokio.workspace = true
|
||||||
|
clap.workspace = true
|
||||||
258
ci/src/main.rs
Normal file
258
ci/src/main.rs
Normal file
@@ -0,0 +1,258 @@
|
|||||||
|
use std::path::PathBuf;
|
||||||
|
|
||||||
|
use clap::Parser;
|
||||||
|
|
||||||
|
const BIN_NAME: &str = "post3-server";
|
||||||
|
const MOLD_VERSION: &str = "2.40.4";
|
||||||
|
|
||||||
|
#[derive(Parser)]
|
||||||
|
#[command(name = "ci")]
|
||||||
|
enum Cli {
|
||||||
|
/// Run PR validation pipeline (check + test + build)
|
||||||
|
Pr,
|
||||||
|
/// Run main branch pipeline (check + test + build)
|
||||||
|
Main,
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::main]
|
||||||
|
async fn main() -> eyre::Result<()> {
|
||||||
|
let cli = Cli::parse();
|
||||||
|
|
||||||
|
dagger_sdk::connect(|client| async move {
|
||||||
|
match cli {
|
||||||
|
Cli::Pr => run_pr(&client).await?,
|
||||||
|
Cli::Main => run_main(&client).await?,
|
||||||
|
}
|
||||||
|
Ok(())
|
||||||
|
})
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
|
|
||||||
|
async fn run_pr(client: &dagger_sdk::Query) -> eyre::Result<()> {
|
||||||
|
eprintln!("==> PR pipeline: check + test + build");
|
||||||
|
|
||||||
|
let base = build_base(client).await?;
|
||||||
|
|
||||||
|
// Step 1: cargo check
|
||||||
|
eprintln!("--- cargo check --workspace");
|
||||||
|
base.clone()
|
||||||
|
.with_exec(vec!["cargo", "check", "--workspace"])
|
||||||
|
.sync()
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
// Step 2: tests with PostgreSQL service
|
||||||
|
eprintln!("--- running tests");
|
||||||
|
run_tests(client, &base).await?;
|
||||||
|
|
||||||
|
// Step 3: build release binary + package image
|
||||||
|
eprintln!("--- building release image");
|
||||||
|
let _image = build_release_image(client, &base).await?;
|
||||||
|
|
||||||
|
eprintln!("==> PR pipeline complete");
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
|
|
||||||
|
async fn run_main(client: &dagger_sdk::Query) -> eyre::Result<()> {
|
||||||
|
eprintln!("==> Main pipeline: check + test + build");
|
||||||
|
|
||||||
|
let base = build_base(client).await?;
|
||||||
|
|
||||||
|
eprintln!("--- cargo check --workspace");
|
||||||
|
base.clone()
|
||||||
|
.with_exec(vec!["cargo", "check", "--workspace"])
|
||||||
|
.sync()
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
eprintln!("--- running tests");
|
||||||
|
run_tests(client, &base).await?;
|
||||||
|
|
||||||
|
eprintln!("--- building release image");
|
||||||
|
let _image = build_release_image(client, &base).await?;
|
||||||
|
|
||||||
|
eprintln!("==> Main pipeline complete");
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Load source from host, excluding build artifacts.
|
||||||
|
fn load_source(client: &dagger_sdk::Query) -> eyre::Result<dagger_sdk::Directory> {
|
||||||
|
let src = client.host().directory_opts(
|
||||||
|
".",
|
||||||
|
dagger_sdk::HostDirectoryOptsBuilder::default()
|
||||||
|
.exclude(vec!["target/", ".git/", "node_modules/", ".cuddle/"])
|
||||||
|
.build()?,
|
||||||
|
);
|
||||||
|
Ok(src)
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Load dependency-only source (Cargo.toml + Cargo.lock, no src/ or tests/).
|
||||||
|
fn load_dep_source(client: &dagger_sdk::Query) -> eyre::Result<dagger_sdk::Directory> {
|
||||||
|
let src = client.host().directory_opts(
|
||||||
|
".",
|
||||||
|
dagger_sdk::HostDirectoryOptsBuilder::default()
|
||||||
|
.exclude(vec![
|
||||||
|
"target/",
|
||||||
|
".git/",
|
||||||
|
"node_modules/",
|
||||||
|
".cuddle/",
|
||||||
|
"**/src",
|
||||||
|
"**/tests",
|
||||||
|
])
|
||||||
|
.build()?,
|
||||||
|
);
|
||||||
|
Ok(src)
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Create skeleton source files so cargo can resolve deps without real source.
|
||||||
|
fn create_skeleton_files(client: &dagger_sdk::Query) -> eyre::Result<dagger_sdk::Directory> {
|
||||||
|
let main_content = r#"fn main() { panic!("skeleton"); }"#;
|
||||||
|
let lib_content = r#"pub fn _skeleton() {}"#;
|
||||||
|
|
||||||
|
let crate_paths = discover_crates()?;
|
||||||
|
let mut dir = client.directory();
|
||||||
|
|
||||||
|
for crate_path in &crate_paths {
|
||||||
|
let src_dir = crate_path.join("src");
|
||||||
|
dir = dir.with_new_file(src_dir.join("main.rs").to_string_lossy().to_string(), main_content);
|
||||||
|
dir = dir.with_new_file(src_dir.join("lib.rs").to_string_lossy().to_string(), lib_content);
|
||||||
|
}
|
||||||
|
|
||||||
|
// Also add skeleton for ci/ crate itself
|
||||||
|
dir = dir.with_new_file("ci/src/main.rs".to_string(), main_content);
|
||||||
|
|
||||||
|
Ok(dir)
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Discover workspace crate directories on the host.
|
||||||
|
fn discover_crates() -> eyre::Result<Vec<PathBuf>> {
|
||||||
|
let crates_dir = PathBuf::from("crates");
|
||||||
|
let mut crate_paths = Vec::new();
|
||||||
|
|
||||||
|
if crates_dir.is_dir() {
|
||||||
|
for entry in std::fs::read_dir(&crates_dir)? {
|
||||||
|
let entry = entry?;
|
||||||
|
if entry.file_type()?.is_dir() {
|
||||||
|
crate_paths.push(entry.path());
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
Ok(crate_paths)
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Build the base Rust container with all deps cached.
|
||||||
|
async fn build_base(client: &dagger_sdk::Query) -> eyre::Result<dagger_sdk::Container> {
|
||||||
|
let src = load_source(client)?;
|
||||||
|
let dep_src = load_dep_source(client)?;
|
||||||
|
let skeleton = create_skeleton_files(client)?;
|
||||||
|
|
||||||
|
// Merge skeleton files into dep source so cargo can resolve the workspace
|
||||||
|
let dep_src_with_skeleton = dep_src.with_directory(".", skeleton);
|
||||||
|
|
||||||
|
// Base rust image with build tools
|
||||||
|
let rust_base = client
|
||||||
|
.container()
|
||||||
|
.from("rustlang/rust:nightly")
|
||||||
|
.with_exec(vec!["apt", "update"])
|
||||||
|
.with_exec(vec!["apt", "install", "-y", "clang", "wget"])
|
||||||
|
// Install mold linker
|
||||||
|
.with_exec(vec![
|
||||||
|
"wget",
|
||||||
|
"-q",
|
||||||
|
&format!(
|
||||||
|
"https://github.com/rui314/mold/releases/download/v{MOLD_VERSION}/mold-{MOLD_VERSION}-x86_64-linux.tar.gz"
|
||||||
|
),
|
||||||
|
])
|
||||||
|
.with_exec(vec![
|
||||||
|
"tar",
|
||||||
|
"-xf",
|
||||||
|
&format!("mold-{MOLD_VERSION}-x86_64-linux.tar.gz"),
|
||||||
|
])
|
||||||
|
.with_exec(vec![
|
||||||
|
"mv",
|
||||||
|
&format!("mold-{MOLD_VERSION}-x86_64-linux/bin/mold"),
|
||||||
|
"/usr/bin/mold",
|
||||||
|
]);
|
||||||
|
|
||||||
|
// Step 1: build deps with skeleton source (cacheable layer)
|
||||||
|
let prebuild = rust_base
|
||||||
|
.clone()
|
||||||
|
.with_workdir("/mnt/src")
|
||||||
|
.with_directory("/mnt/src", dep_src_with_skeleton)
|
||||||
|
.with_exec(vec!["cargo", "build", "--release", "--bin", BIN_NAME]);
|
||||||
|
|
||||||
|
// Step 2: copy cargo registry from prebuild (avoids re-downloading deps)
|
||||||
|
// Don't copy target/ — Dagger normalizes timestamps which breaks cargo fingerprinting
|
||||||
|
let build_container = rust_base
|
||||||
|
.with_workdir("/mnt/src")
|
||||||
|
.with_directory(
|
||||||
|
"/usr/local/cargo",
|
||||||
|
prebuild.directory("/usr/local/cargo"),
|
||||||
|
)
|
||||||
|
.with_directory("/mnt/src/", src);
|
||||||
|
|
||||||
|
Ok(build_container)
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Run tests against a PostgreSQL service container.
|
||||||
|
async fn run_tests(
|
||||||
|
client: &dagger_sdk::Query,
|
||||||
|
base: &dagger_sdk::Container,
|
||||||
|
) -> eyre::Result<()> {
|
||||||
|
let postgres = client
|
||||||
|
.container()
|
||||||
|
.from("postgres:18-alpine")
|
||||||
|
.with_env_variable("POSTGRES_DB", "post3_dev")
|
||||||
|
.with_env_variable("POSTGRES_USER", "devuser")
|
||||||
|
.with_env_variable("POSTGRES_PASSWORD", "devpassword")
|
||||||
|
.with_exposed_port(5432)
|
||||||
|
.as_service();
|
||||||
|
|
||||||
|
base.clone()
|
||||||
|
.with_service_binding("postgres", postgres)
|
||||||
|
.with_env_variable(
|
||||||
|
"DATABASE_URL",
|
||||||
|
"postgresql://devuser:devpassword@postgres:5432/post3_dev",
|
||||||
|
)
|
||||||
|
.with_exec(vec![
|
||||||
|
"cargo",
|
||||||
|
"test",
|
||||||
|
"--workspace",
|
||||||
|
"--",
|
||||||
|
"--test-threads=1",
|
||||||
|
])
|
||||||
|
.sync()
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Build release binary and package into a slim image.
|
||||||
|
async fn build_release_image(
|
||||||
|
client: &dagger_sdk::Query,
|
||||||
|
base: &dagger_sdk::Container,
|
||||||
|
) -> eyre::Result<dagger_sdk::Container> {
|
||||||
|
// Build release binary
|
||||||
|
let built = base
|
||||||
|
.clone()
|
||||||
|
.with_exec(vec!["cargo", "build", "--release", "--bin", BIN_NAME]);
|
||||||
|
|
||||||
|
let binary = built.file(format!("/mnt/src/target/release/{BIN_NAME}"));
|
||||||
|
|
||||||
|
// Package into slim debian image
|
||||||
|
let final_image = client
|
||||||
|
.container()
|
||||||
|
.from("debian:bookworm-slim")
|
||||||
|
.with_exec(vec!["apt", "update"])
|
||||||
|
.with_exec(vec!["apt", "install", "-y", "ca-certificates"])
|
||||||
|
.with_exec(vec!["rm", "-rf", "/var/lib/apt/lists/*"])
|
||||||
|
.with_file(format!("/usr/local/bin/{BIN_NAME}"), binary)
|
||||||
|
.with_exec(vec![BIN_NAME, "--help"]);
|
||||||
|
|
||||||
|
// Execute to verify the image works
|
||||||
|
final_image.sync().await?;
|
||||||
|
|
||||||
|
eprintln!("--- release image built successfully");
|
||||||
|
Ok(final_image)
|
||||||
|
}
|
||||||
17
crates/post3-sdk/Cargo.toml
Normal file
17
crates/post3-sdk/Cargo.toml
Normal file
@@ -0,0 +1,17 @@
|
|||||||
|
[package]
|
||||||
|
name = "post3-sdk"
|
||||||
|
version.workspace = true
|
||||||
|
edition.workspace = true
|
||||||
|
|
||||||
|
[dependencies]
|
||||||
|
aws-sdk-s3 = "1"
|
||||||
|
aws-credential-types = { version = "1", features = ["hardcoded-credentials"] }
|
||||||
|
aws-types = "1"
|
||||||
|
aws-config = "1"
|
||||||
|
bytes.workspace = true
|
||||||
|
thiserror.workspace = true
|
||||||
|
chrono.workspace = true
|
||||||
|
|
||||||
|
[dev-dependencies]
|
||||||
|
tokio.workspace = true
|
||||||
|
anyhow.workspace = true
|
||||||
107
crates/post3-sdk/examples/aws_sdk_direct.rs
Normal file
107
crates/post3-sdk/examples/aws_sdk_direct.rs
Normal file
@@ -0,0 +1,107 @@
|
|||||||
|
//! Use aws-sdk-s3 directly against post3 (without the post3-sdk wrapper).
|
||||||
|
//! Shows the raw configuration needed.
|
||||||
|
//!
|
||||||
|
//! Prerequisites: post3-server running on localhost:9000
|
||||||
|
//! mise run up && mise run dev
|
||||||
|
//!
|
||||||
|
//! Run:
|
||||||
|
//! cargo run -p post3-sdk --example aws_sdk_direct
|
||||||
|
|
||||||
|
use post3_sdk::aws_sdk_s3;
|
||||||
|
|
||||||
|
#[tokio::main]
|
||||||
|
async fn main() -> anyhow::Result<()> {
|
||||||
|
let endpoint = std::env::var("POST3_ENDPOINT")
|
||||||
|
.unwrap_or_else(|_| "http://localhost:9000".to_string());
|
||||||
|
|
||||||
|
// Configure aws-sdk-s3 manually for post3
|
||||||
|
let creds = aws_sdk_s3::config::Credentials::new(
|
||||||
|
"test", // access key (any value works when auth is disabled)
|
||||||
|
"test", // secret key
|
||||||
|
None, // session token
|
||||||
|
None, // expiry
|
||||||
|
"example", // provider name
|
||||||
|
);
|
||||||
|
|
||||||
|
let config = aws_sdk_s3::Config::builder()
|
||||||
|
.behavior_version_latest()
|
||||||
|
.region(aws_sdk_s3::config::Region::new("us-east-1"))
|
||||||
|
.endpoint_url(&endpoint)
|
||||||
|
.credentials_provider(creds)
|
||||||
|
.force_path_style(true) // Required: post3 uses path-style, not virtual-hosted
|
||||||
|
.build();
|
||||||
|
|
||||||
|
let client = aws_sdk_s3::Client::from_conf(config);
|
||||||
|
|
||||||
|
// Create bucket
|
||||||
|
println!("Creating bucket...");
|
||||||
|
client
|
||||||
|
.create_bucket()
|
||||||
|
.bucket("direct-bucket")
|
||||||
|
.send()
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
// Put object
|
||||||
|
println!("Putting object...");
|
||||||
|
client
|
||||||
|
.put_object()
|
||||||
|
.bucket("direct-bucket")
|
||||||
|
.key("greeting.txt")
|
||||||
|
.body(Vec::from(&b"Hello from aws-sdk-s3!"[..]).into())
|
||||||
|
.send()
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
// Get object
|
||||||
|
let resp = client
|
||||||
|
.get_object()
|
||||||
|
.bucket("direct-bucket")
|
||||||
|
.key("greeting.txt")
|
||||||
|
.send()
|
||||||
|
.await?;
|
||||||
|
let body = resp.body.collect().await?.into_bytes();
|
||||||
|
println!("Got: {}", String::from_utf8_lossy(&body));
|
||||||
|
|
||||||
|
// List objects
|
||||||
|
let list = client
|
||||||
|
.list_objects_v2()
|
||||||
|
.bucket("direct-bucket")
|
||||||
|
.send()
|
||||||
|
.await?;
|
||||||
|
println!("Objects:");
|
||||||
|
for obj in list.contents() {
|
||||||
|
println!(
|
||||||
|
" {} ({} bytes)",
|
||||||
|
obj.key().unwrap_or("?"),
|
||||||
|
obj.size().unwrap_or(0)
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
// Head object
|
||||||
|
let head = client
|
||||||
|
.head_object()
|
||||||
|
.bucket("direct-bucket")
|
||||||
|
.key("greeting.txt")
|
||||||
|
.send()
|
||||||
|
.await?;
|
||||||
|
println!(
|
||||||
|
"Head: size={}, etag={:?}",
|
||||||
|
head.content_length().unwrap_or(0),
|
||||||
|
head.e_tag()
|
||||||
|
);
|
||||||
|
|
||||||
|
// Cleanup
|
||||||
|
client
|
||||||
|
.delete_object()
|
||||||
|
.bucket("direct-bucket")
|
||||||
|
.key("greeting.txt")
|
||||||
|
.send()
|
||||||
|
.await?;
|
||||||
|
client
|
||||||
|
.delete_bucket()
|
||||||
|
.bucket("direct-bucket")
|
||||||
|
.send()
|
||||||
|
.await?;
|
||||||
|
println!("Done!");
|
||||||
|
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
76
crates/post3-sdk/examples/basic.rs
Normal file
76
crates/post3-sdk/examples/basic.rs
Normal file
@@ -0,0 +1,76 @@
|
|||||||
|
//! Basic post3 usage: create a bucket, put/get/delete objects, list objects.
|
||||||
|
//!
|
||||||
|
//! Prerequisites: post3-server running on localhost:9000
|
||||||
|
//! mise run up && mise run dev
|
||||||
|
//!
|
||||||
|
//! Run:
|
||||||
|
//! cargo run -p post3-sdk --example basic
|
||||||
|
|
||||||
|
use post3_sdk::Post3Client;
|
||||||
|
|
||||||
|
#[tokio::main]
|
||||||
|
async fn main() -> anyhow::Result<()> {
|
||||||
|
let endpoint = std::env::var("POST3_ENDPOINT")
|
||||||
|
.unwrap_or_else(|_| "http://localhost:9000".to_string());
|
||||||
|
let client = Post3Client::new(&endpoint);
|
||||||
|
|
||||||
|
// Create a bucket
|
||||||
|
println!("Creating bucket 'example-bucket'...");
|
||||||
|
client.create_bucket("example-bucket").await?;
|
||||||
|
|
||||||
|
// List buckets
|
||||||
|
let buckets = client.list_buckets().await?;
|
||||||
|
println!("Buckets: {:?}", buckets);
|
||||||
|
|
||||||
|
// Put an object
|
||||||
|
println!("Putting 'hello.txt'...");
|
||||||
|
client
|
||||||
|
.put_object("example-bucket", "hello.txt", b"Hello, post3!")
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
// Get the object back
|
||||||
|
let data = client.get_object("example-bucket", "hello.txt").await?;
|
||||||
|
println!("Got: {}", String::from_utf8_lossy(&data));
|
||||||
|
|
||||||
|
// Put a few more objects
|
||||||
|
client
|
||||||
|
.put_object("example-bucket", "docs/readme.md", b"# README")
|
||||||
|
.await?;
|
||||||
|
client
|
||||||
|
.put_object("example-bucket", "docs/guide.md", b"# Guide")
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
// List all objects
|
||||||
|
let objects = client.list_objects("example-bucket", None).await?;
|
||||||
|
println!("All objects:");
|
||||||
|
for obj in &objects {
|
||||||
|
println!(" {} ({} bytes)", obj.key, obj.size);
|
||||||
|
}
|
||||||
|
|
||||||
|
// List with prefix filter
|
||||||
|
let docs = client
|
||||||
|
.list_objects("example-bucket", Some("docs/"))
|
||||||
|
.await?;
|
||||||
|
println!("Objects under docs/:");
|
||||||
|
for obj in &docs {
|
||||||
|
println!(" {} ({} bytes)", obj.key, obj.size);
|
||||||
|
}
|
||||||
|
|
||||||
|
// Delete objects
|
||||||
|
println!("Cleaning up...");
|
||||||
|
client
|
||||||
|
.delete_object("example-bucket", "hello.txt")
|
||||||
|
.await?;
|
||||||
|
client
|
||||||
|
.delete_object("example-bucket", "docs/readme.md")
|
||||||
|
.await?;
|
||||||
|
client
|
||||||
|
.delete_object("example-bucket", "docs/guide.md")
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
// Delete the bucket
|
||||||
|
client.delete_bucket("example-bucket").await?;
|
||||||
|
println!("Done!");
|
||||||
|
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
161
crates/post3-sdk/examples/large_upload.rs
Normal file
161
crates/post3-sdk/examples/large_upload.rs
Normal file
@@ -0,0 +1,161 @@
|
|||||||
|
//! Stress test: upload and verify large files.
|
||||||
|
//!
|
||||||
|
//! Tests progressively larger files to find limits and measure performance.
|
||||||
|
//! Generates deterministic pseudo-random data so we can verify integrity
|
||||||
|
//! without keeping the full payload in memory twice.
|
||||||
|
//!
|
||||||
|
//! Prerequisites: post3-server running on localhost:9000
|
||||||
|
//! mise run up && mise run dev
|
||||||
|
//!
|
||||||
|
//! Run:
|
||||||
|
//! cargo run -p post3-sdk --example large_upload --release
|
||||||
|
//!
|
||||||
|
//! Or with custom sizes (in MB):
|
||||||
|
//! POST3_SIZES=10,50,100,500,1024 cargo run -p post3-sdk --example large_upload --release
|
||||||
|
|
||||||
|
use post3_sdk::Post3Client;
|
||||||
|
use std::time::Instant;
|
||||||
|
|
||||||
|
fn generate_data(size_bytes: usize) -> Vec<u8> {
|
||||||
|
// Deterministic pattern: repeating 256-byte blocks with position-dependent content
|
||||||
|
let mut data = Vec::with_capacity(size_bytes);
|
||||||
|
let mut state: u64 = 0xdeadbeef;
|
||||||
|
while data.len() < size_bytes {
|
||||||
|
// Simple xorshift64 PRNG for fast deterministic data
|
||||||
|
state ^= state << 13;
|
||||||
|
state ^= state >> 7;
|
||||||
|
state ^= state << 17;
|
||||||
|
data.extend_from_slice(&state.to_le_bytes());
|
||||||
|
}
|
||||||
|
data.truncate(size_bytes);
|
||||||
|
data
|
||||||
|
}
|
||||||
|
|
||||||
|
fn format_size(bytes: usize) -> String {
|
||||||
|
if bytes >= 1024 * 1024 * 1024 {
|
||||||
|
format!("{:.1} GiB", bytes as f64 / (1024.0 * 1024.0 * 1024.0))
|
||||||
|
} else if bytes >= 1024 * 1024 {
|
||||||
|
format!("{:.1} MiB", bytes as f64 / (1024.0 * 1024.0))
|
||||||
|
} else if bytes >= 1024 {
|
||||||
|
format!("{:.1} KiB", bytes as f64 / 1024.0)
|
||||||
|
} else {
|
||||||
|
format!("{} B", bytes)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
fn format_throughput(bytes: usize, duration: std::time::Duration) -> String {
|
||||||
|
let secs = duration.as_secs_f64();
|
||||||
|
if secs == 0.0 {
|
||||||
|
return "∞".to_string();
|
||||||
|
}
|
||||||
|
let mb_per_sec = bytes as f64 / (1024.0 * 1024.0) / secs;
|
||||||
|
format!("{:.1} MiB/s", mb_per_sec)
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::main]
|
||||||
|
async fn main() -> anyhow::Result<()> {
|
||||||
|
let endpoint = std::env::var("POST3_ENDPOINT")
|
||||||
|
.unwrap_or_else(|_| "http://localhost:9000".to_string());
|
||||||
|
let client = Post3Client::new(&endpoint);
|
||||||
|
|
||||||
|
// Parse sizes from env or use defaults
|
||||||
|
let sizes_mb: Vec<usize> = std::env::var("POST3_SIZES")
|
||||||
|
.unwrap_or_else(|_| "1,10,50,100,500,1024,2048".to_string())
|
||||||
|
.split(',')
|
||||||
|
.filter_map(|s| s.trim().parse().ok())
|
||||||
|
.collect();
|
||||||
|
|
||||||
|
println!("=== post3 Large File Stress Test ===");
|
||||||
|
println!("Endpoint: {}", endpoint);
|
||||||
|
println!("Sizes: {:?} MB", sizes_mb);
|
||||||
|
println!();
|
||||||
|
|
||||||
|
client.create_bucket("stress-test").await?;
|
||||||
|
|
||||||
|
for size_mb in &sizes_mb {
|
||||||
|
let size_bytes = size_mb * 1024 * 1024;
|
||||||
|
let key = format!("test-{}mb.bin", size_mb);
|
||||||
|
|
||||||
|
println!("--- {} ---", format_size(size_bytes));
|
||||||
|
|
||||||
|
// Generate data
|
||||||
|
print!(" Generating data... ");
|
||||||
|
let gen_start = Instant::now();
|
||||||
|
let data = generate_data(size_bytes);
|
||||||
|
println!("done ({:.1}s)", gen_start.elapsed().as_secs_f64());
|
||||||
|
|
||||||
|
// Upload
|
||||||
|
print!(" Uploading... ");
|
||||||
|
let upload_start = Instant::now();
|
||||||
|
match client.put_object("stress-test", &key, &data).await {
|
||||||
|
Ok(()) => {
|
||||||
|
let upload_dur = upload_start.elapsed();
|
||||||
|
println!(
|
||||||
|
"done ({:.1}s, {})",
|
||||||
|
upload_dur.as_secs_f64(),
|
||||||
|
format_throughput(size_bytes, upload_dur)
|
||||||
|
);
|
||||||
|
}
|
||||||
|
Err(e) => {
|
||||||
|
println!("FAILED: {}", e);
|
||||||
|
println!(" Skipping remaining sizes (hit server limit)");
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Head (verify metadata)
|
||||||
|
let head = client.head_object("stress-test", &key).await?;
|
||||||
|
if let Some(info) = &head {
|
||||||
|
println!(
|
||||||
|
" Head: size={}, etag={:?}",
|
||||||
|
format_size(info.size as usize),
|
||||||
|
info.etag
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
// Download
|
||||||
|
print!(" Downloading... ");
|
||||||
|
let download_start = Instant::now();
|
||||||
|
match client.get_object("stress-test", &key).await {
|
||||||
|
Ok(downloaded) => {
|
||||||
|
let download_dur = download_start.elapsed();
|
||||||
|
println!(
|
||||||
|
"done ({:.1}s, {})",
|
||||||
|
download_dur.as_secs_f64(),
|
||||||
|
format_throughput(size_bytes, download_dur)
|
||||||
|
);
|
||||||
|
|
||||||
|
// Verify integrity
|
||||||
|
print!(" Verifying... ");
|
||||||
|
if downloaded.len() != data.len() {
|
||||||
|
println!(
|
||||||
|
"FAILED: size mismatch (expected {}, got {})",
|
||||||
|
data.len(),
|
||||||
|
downloaded.len()
|
||||||
|
);
|
||||||
|
} else if downloaded.as_ref() == data.as_slice() {
|
||||||
|
println!("OK (byte-for-byte match)");
|
||||||
|
} else {
|
||||||
|
// Find first mismatch
|
||||||
|
let pos = data
|
||||||
|
.iter()
|
||||||
|
.zip(downloaded.iter())
|
||||||
|
.position(|(a, b)| a != b)
|
||||||
|
.unwrap_or(0);
|
||||||
|
println!("FAILED: mismatch at byte {}", pos);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
Err(e) => {
|
||||||
|
println!("FAILED: {}", e);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Cleanup this object
|
||||||
|
client.delete_object("stress-test", &key).await?;
|
||||||
|
println!();
|
||||||
|
}
|
||||||
|
|
||||||
|
client.delete_bucket("stress-test").await?;
|
||||||
|
println!("=== Done ===");
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
78
crates/post3-sdk/examples/metadata.rs
Normal file
78
crates/post3-sdk/examples/metadata.rs
Normal file
@@ -0,0 +1,78 @@
|
|||||||
|
//! Demonstrate custom metadata (x-amz-meta-*) with post3.
|
||||||
|
//!
|
||||||
|
//! Prerequisites: post3-server running on localhost:9000
|
||||||
|
//! mise run up && mise run dev
|
||||||
|
//!
|
||||||
|
//! Run:
|
||||||
|
//! cargo run -p post3-sdk --example metadata
|
||||||
|
|
||||||
|
use post3_sdk::Post3Client;
|
||||||
|
|
||||||
|
#[tokio::main]
|
||||||
|
async fn main() -> anyhow::Result<()> {
|
||||||
|
let endpoint = std::env::var("POST3_ENDPOINT")
|
||||||
|
.unwrap_or_else(|_| "http://localhost:9000".to_string());
|
||||||
|
let client = Post3Client::new(&endpoint);
|
||||||
|
|
||||||
|
client.create_bucket("meta-bucket").await?;
|
||||||
|
|
||||||
|
// Use the inner aws-sdk-s3 client to set custom metadata
|
||||||
|
let inner = client.inner();
|
||||||
|
println!("Putting object with custom metadata...");
|
||||||
|
inner
|
||||||
|
.put_object()
|
||||||
|
.bucket("meta-bucket")
|
||||||
|
.key("report.pdf")
|
||||||
|
.body(Vec::from(&b"fake pdf content"[..]).into())
|
||||||
|
.content_type("application/pdf")
|
||||||
|
.metadata("author", "alice")
|
||||||
|
.metadata("department", "engineering")
|
||||||
|
.metadata("version", "2")
|
||||||
|
.send()
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
// Retrieve metadata via head_object
|
||||||
|
let head = inner
|
||||||
|
.head_object()
|
||||||
|
.bucket("meta-bucket")
|
||||||
|
.key("report.pdf")
|
||||||
|
.send()
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
println!("Content-Type: {:?}", head.content_type());
|
||||||
|
println!("Content-Length: {:?}", head.content_length());
|
||||||
|
println!("ETag: {:?}", head.e_tag());
|
||||||
|
if let Some(metadata) = head.metadata() {
|
||||||
|
println!("Custom metadata:");
|
||||||
|
for (k, v) in metadata {
|
||||||
|
println!(" x-amz-meta-{}: {}", k, v);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Retrieve the full object with metadata
|
||||||
|
let resp = inner
|
||||||
|
.get_object()
|
||||||
|
.bucket("meta-bucket")
|
||||||
|
.key("report.pdf")
|
||||||
|
.send()
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
println!("\nGet object response:");
|
||||||
|
println!(" Content-Type: {:?}", resp.content_type());
|
||||||
|
if let Some(metadata) = resp.metadata() {
|
||||||
|
println!(" Metadata:");
|
||||||
|
for (k, v) in metadata {
|
||||||
|
println!(" x-amz-meta-{}: {}", k, v);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
let body = resp.body.collect().await?.into_bytes();
|
||||||
|
println!(" Body: {}", String::from_utf8_lossy(&body));
|
||||||
|
|
||||||
|
// Cleanup
|
||||||
|
client.delete_object("meta-bucket", "report.pdf").await?;
|
||||||
|
client.delete_bucket("meta-bucket").await?;
|
||||||
|
println!("\nDone!");
|
||||||
|
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
177
crates/post3-sdk/examples/multipart_upload.rs
Normal file
177
crates/post3-sdk/examples/multipart_upload.rs
Normal file
@@ -0,0 +1,177 @@
|
|||||||
|
//! Stress test: multipart upload and verify huge files (4–16 GiB).
|
||||||
|
//!
|
||||||
|
//! Uses the SDK's multipart_upload convenience method which splits data into
|
||||||
|
//! parts and uploads them sequentially via CreateMultipartUpload / UploadPart /
|
||||||
|
//! CompleteMultipartUpload.
|
||||||
|
//!
|
||||||
|
//! Prerequisites: post3-server running on localhost:9000
|
||||||
|
//! mise run up && mise run dev
|
||||||
|
//!
|
||||||
|
//! Run:
|
||||||
|
//! cargo run -p post3-sdk --example multipart_upload --release
|
||||||
|
//!
|
||||||
|
//! Or with custom sizes (in MB) and part size:
|
||||||
|
//! POST3_SIZES=4096,8192,16384 POST3_PART_SIZE=64 cargo run -p post3-sdk --example multipart_upload --release
|
||||||
|
|
||||||
|
use post3_sdk::Post3Client;
|
||||||
|
use std::time::Instant;
|
||||||
|
|
||||||
|
fn generate_data(size_bytes: usize) -> Vec<u8> {
|
||||||
|
let mut data = Vec::with_capacity(size_bytes);
|
||||||
|
let mut state: u64 = 0xdeadbeef;
|
||||||
|
while data.len() < size_bytes {
|
||||||
|
state ^= state << 13;
|
||||||
|
state ^= state >> 7;
|
||||||
|
state ^= state << 17;
|
||||||
|
data.extend_from_slice(&state.to_le_bytes());
|
||||||
|
}
|
||||||
|
data.truncate(size_bytes);
|
||||||
|
data
|
||||||
|
}
|
||||||
|
|
||||||
|
fn format_size(bytes: usize) -> String {
|
||||||
|
if bytes >= 1024 * 1024 * 1024 {
|
||||||
|
format!("{:.1} GiB", bytes as f64 / (1024.0 * 1024.0 * 1024.0))
|
||||||
|
} else if bytes >= 1024 * 1024 {
|
||||||
|
format!("{:.1} MiB", bytes as f64 / (1024.0 * 1024.0))
|
||||||
|
} else if bytes >= 1024 {
|
||||||
|
format!("{:.1} KiB", bytes as f64 / 1024.0)
|
||||||
|
} else {
|
||||||
|
format!("{} B", bytes)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
fn format_throughput(bytes: usize, duration: std::time::Duration) -> String {
|
||||||
|
let secs = duration.as_secs_f64();
|
||||||
|
if secs == 0.0 {
|
||||||
|
return "∞".to_string();
|
||||||
|
}
|
||||||
|
let mb_per_sec = bytes as f64 / (1024.0 * 1024.0) / secs;
|
||||||
|
format!("{:.1} MiB/s", mb_per_sec)
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::main]
|
||||||
|
async fn main() -> anyhow::Result<()> {
|
||||||
|
let endpoint = std::env::var("POST3_ENDPOINT")
|
||||||
|
.unwrap_or_else(|_| "http://localhost:9000".to_string());
|
||||||
|
let client = Post3Client::new(&endpoint);
|
||||||
|
|
||||||
|
let sizes_mb: Vec<usize> = std::env::var("POST3_SIZES")
|
||||||
|
.unwrap_or_else(|_| "100,1024,4096,8192,16384".to_string())
|
||||||
|
.split(',')
|
||||||
|
.filter_map(|s| s.trim().parse().ok())
|
||||||
|
.collect();
|
||||||
|
|
||||||
|
// Part size in MiB (default 64 MiB — good balance of part count vs memory)
|
||||||
|
let part_size_mb: usize = std::env::var("POST3_PART_SIZE")
|
||||||
|
.unwrap_or_else(|_| "64".to_string())
|
||||||
|
.parse()
|
||||||
|
.unwrap_or(64);
|
||||||
|
let part_size = part_size_mb * 1024 * 1024;
|
||||||
|
|
||||||
|
println!("=== post3 Multipart Upload Stress Test ===");
|
||||||
|
println!("Endpoint: {}", endpoint);
|
||||||
|
println!("Sizes: {:?} MB", sizes_mb);
|
||||||
|
println!("Part size: {} MiB", part_size_mb);
|
||||||
|
println!();
|
||||||
|
|
||||||
|
client.create_bucket("mp-stress").await?;
|
||||||
|
|
||||||
|
for size_mb in &sizes_mb {
|
||||||
|
let size_bytes = size_mb * 1024 * 1024;
|
||||||
|
let key = format!("mp-test-{}mb.bin", size_mb);
|
||||||
|
let num_parts = (size_bytes + part_size - 1) / part_size;
|
||||||
|
|
||||||
|
println!("--- {} ({} parts of {} each) ---",
|
||||||
|
format_size(size_bytes),
|
||||||
|
num_parts,
|
||||||
|
format_size(part_size.min(size_bytes)),
|
||||||
|
);
|
||||||
|
|
||||||
|
// Generate data
|
||||||
|
print!(" Generating data... ");
|
||||||
|
let gen_start = Instant::now();
|
||||||
|
let data = generate_data(size_bytes);
|
||||||
|
println!("done ({:.1}s)", gen_start.elapsed().as_secs_f64());
|
||||||
|
|
||||||
|
// Multipart upload
|
||||||
|
print!(" Uploading (multipart)... ");
|
||||||
|
let upload_start = Instant::now();
|
||||||
|
match client.multipart_upload("mp-stress", &key, &data, part_size).await {
|
||||||
|
Ok(()) => {
|
||||||
|
let upload_dur = upload_start.elapsed();
|
||||||
|
println!(
|
||||||
|
"done ({:.1}s, {})",
|
||||||
|
upload_dur.as_secs_f64(),
|
||||||
|
format_throughput(size_bytes, upload_dur)
|
||||||
|
);
|
||||||
|
}
|
||||||
|
Err(e) => {
|
||||||
|
println!("FAILED: {}", e);
|
||||||
|
println!(" Skipping remaining sizes");
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Head (verify metadata)
|
||||||
|
let head = client.head_object("mp-stress", &key).await?;
|
||||||
|
if let Some(info) = &head {
|
||||||
|
println!(
|
||||||
|
" Head: size={}, etag={:?}",
|
||||||
|
format_size(info.size as usize),
|
||||||
|
info.etag
|
||||||
|
);
|
||||||
|
// Verify the compound ETag format (md5-N)
|
||||||
|
if let Some(ref etag) = info.etag {
|
||||||
|
let stripped = etag.trim_matches('"');
|
||||||
|
if stripped.contains('-') {
|
||||||
|
let parts_str = stripped.split('-').last().unwrap_or("?");
|
||||||
|
println!(" ETag format: compound ({} parts)", parts_str);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Download and verify
|
||||||
|
print!(" Downloading... ");
|
||||||
|
let download_start = Instant::now();
|
||||||
|
match client.get_object("mp-stress", &key).await {
|
||||||
|
Ok(downloaded) => {
|
||||||
|
let download_dur = download_start.elapsed();
|
||||||
|
println!(
|
||||||
|
"done ({:.1}s, {})",
|
||||||
|
download_dur.as_secs_f64(),
|
||||||
|
format_throughput(size_bytes, download_dur)
|
||||||
|
);
|
||||||
|
|
||||||
|
print!(" Verifying... ");
|
||||||
|
if downloaded.len() != data.len() {
|
||||||
|
println!(
|
||||||
|
"FAILED: size mismatch (expected {}, got {})",
|
||||||
|
data.len(),
|
||||||
|
downloaded.len()
|
||||||
|
);
|
||||||
|
} else if downloaded.as_ref() == data.as_slice() {
|
||||||
|
println!("OK (byte-for-byte match)");
|
||||||
|
} else {
|
||||||
|
let pos = data
|
||||||
|
.iter()
|
||||||
|
.zip(downloaded.iter())
|
||||||
|
.position(|(a, b)| a != b)
|
||||||
|
.unwrap_or(0);
|
||||||
|
println!("FAILED: mismatch at byte {}", pos);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
Err(e) => {
|
||||||
|
println!("FAILED: {}", e);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Cleanup
|
||||||
|
client.delete_object("mp-stress", &key).await?;
|
||||||
|
println!();
|
||||||
|
}
|
||||||
|
|
||||||
|
client.delete_bucket("mp-stress").await?;
|
||||||
|
println!("=== Done ===");
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
408
crates/post3-sdk/src/lib.rs
Normal file
408
crates/post3-sdk/src/lib.rs
Normal file
@@ -0,0 +1,408 @@
|
|||||||
|
use aws_credential_types::Credentials;
|
||||||
|
use aws_sdk_s3::types::{CompletedMultipartUpload, CompletedPart};
|
||||||
|
use aws_sdk_s3::Client;
|
||||||
|
use bytes::Bytes;
|
||||||
|
|
||||||
|
pub use aws_sdk_s3;
|
||||||
|
pub use bytes;
|
||||||
|
|
||||||
|
/// Error type for post3-sdk operations.
|
||||||
|
#[derive(Debug, thiserror::Error)]
|
||||||
|
pub enum Error {
|
||||||
|
#[error("bucket not found: {0}")]
|
||||||
|
BucketNotFound(String),
|
||||||
|
|
||||||
|
#[error("object not found: {bucket}/{key}")]
|
||||||
|
ObjectNotFound { bucket: String, key: String },
|
||||||
|
|
||||||
|
#[error("s3 error: {0}")]
|
||||||
|
S3(String),
|
||||||
|
}
|
||||||
|
|
||||||
|
impl<E: std::fmt::Display> From<aws_sdk_s3::error::SdkError<E>> for Error {
|
||||||
|
fn from(err: aws_sdk_s3::error::SdkError<E>) -> Self {
|
||||||
|
Error::S3(err.to_string())
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
pub type Result<T> = std::result::Result<T, Error>;
|
||||||
|
|
||||||
|
/// Summary of an object returned by list operations.
|
||||||
|
#[derive(Debug, Clone)]
|
||||||
|
pub struct ObjectInfo {
|
||||||
|
pub key: String,
|
||||||
|
pub size: i64,
|
||||||
|
pub etag: Option<String>,
|
||||||
|
pub last_modified: Option<chrono::DateTime<chrono::Utc>>,
|
||||||
|
}
|
||||||
|
|
||||||
|
/// A client for post3 that wraps `aws-sdk-s3` with ergonomic defaults.
|
||||||
|
///
|
||||||
|
/// # Example
|
||||||
|
///
|
||||||
|
/// ```no_run
|
||||||
|
/// # async fn example() -> post3_sdk::Result<()> {
|
||||||
|
/// let client = post3_sdk::Post3Client::new("http://localhost:9000");
|
||||||
|
///
|
||||||
|
/// client.create_bucket("my-bucket").await?;
|
||||||
|
/// client.put_object("my-bucket", "hello.txt", b"hello world").await?;
|
||||||
|
///
|
||||||
|
/// let data = client.get_object("my-bucket", "hello.txt").await?;
|
||||||
|
/// assert_eq!(data.as_ref(), b"hello world");
|
||||||
|
/// # Ok(())
|
||||||
|
/// # }
|
||||||
|
/// ```
|
||||||
|
pub struct Post3Client {
|
||||||
|
inner: Client,
|
||||||
|
}
|
||||||
|
|
||||||
|
impl Post3Client {
|
||||||
|
/// Create a client with default configuration (dummy credentials, us-east-1, path-style).
|
||||||
|
pub fn new(endpoint_url: impl Into<String>) -> Self {
|
||||||
|
Self::builder().endpoint_url(endpoint_url).build()
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Access the underlying `aws_sdk_s3::Client` for advanced operations.
|
||||||
|
pub fn inner(&self) -> &Client {
|
||||||
|
&self.inner
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Start building a client with custom configuration.
|
||||||
|
pub fn builder() -> Post3ClientBuilder {
|
||||||
|
Post3ClientBuilder::default()
|
||||||
|
}
|
||||||
|
|
||||||
|
// -- Bucket operations --
|
||||||
|
|
||||||
|
pub async fn create_bucket(&self, name: &str) -> Result<()> {
|
||||||
|
self.inner
|
||||||
|
.create_bucket()
|
||||||
|
.bucket(name)
|
||||||
|
.send()
|
||||||
|
.await?;
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn head_bucket(&self, name: &str) -> Result<bool> {
|
||||||
|
match self.inner.head_bucket().bucket(name).send().await {
|
||||||
|
Ok(_) => Ok(true),
|
||||||
|
Err(err) => {
|
||||||
|
if err
|
||||||
|
.as_service_error()
|
||||||
|
.map_or(false, |e| e.is_not_found())
|
||||||
|
{
|
||||||
|
Ok(false)
|
||||||
|
} else {
|
||||||
|
Err(Error::S3(err.to_string()))
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn delete_bucket(&self, name: &str) -> Result<()> {
|
||||||
|
self.inner
|
||||||
|
.delete_bucket()
|
||||||
|
.bucket(name)
|
||||||
|
.send()
|
||||||
|
.await?;
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn list_buckets(&self) -> Result<Vec<String>> {
|
||||||
|
let resp = self.inner.list_buckets().send().await?;
|
||||||
|
Ok(resp
|
||||||
|
.buckets()
|
||||||
|
.iter()
|
||||||
|
.filter_map(|b| b.name().map(|s| s.to_string()))
|
||||||
|
.collect())
|
||||||
|
}
|
||||||
|
|
||||||
|
// -- Object operations --
|
||||||
|
|
||||||
|
pub async fn put_object(
|
||||||
|
&self,
|
||||||
|
bucket: &str,
|
||||||
|
key: &str,
|
||||||
|
body: impl AsRef<[u8]>,
|
||||||
|
) -> Result<()> {
|
||||||
|
let body = Bytes::copy_from_slice(body.as_ref());
|
||||||
|
self.inner
|
||||||
|
.put_object()
|
||||||
|
.bucket(bucket)
|
||||||
|
.key(key)
|
||||||
|
.body(body.into())
|
||||||
|
.send()
|
||||||
|
.await?;
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn get_object(&self, bucket: &str, key: &str) -> Result<Bytes> {
|
||||||
|
let resp = self
|
||||||
|
.inner
|
||||||
|
.get_object()
|
||||||
|
.bucket(bucket)
|
||||||
|
.key(key)
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.map_err(|e| {
|
||||||
|
if e.as_service_error()
|
||||||
|
.map_or(false, |se| se.is_no_such_key())
|
||||||
|
{
|
||||||
|
Error::ObjectNotFound {
|
||||||
|
bucket: bucket.to_string(),
|
||||||
|
key: key.to_string(),
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
Error::S3(e.to_string())
|
||||||
|
}
|
||||||
|
})?;
|
||||||
|
|
||||||
|
let data = resp
|
||||||
|
.body
|
||||||
|
.collect()
|
||||||
|
.await
|
||||||
|
.map_err(|e| Error::S3(e.to_string()))?;
|
||||||
|
Ok(data.into_bytes())
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn head_object(
|
||||||
|
&self,
|
||||||
|
bucket: &str,
|
||||||
|
key: &str,
|
||||||
|
) -> Result<Option<ObjectInfo>> {
|
||||||
|
match self
|
||||||
|
.inner
|
||||||
|
.head_object()
|
||||||
|
.bucket(bucket)
|
||||||
|
.key(key)
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
{
|
||||||
|
Ok(resp) => Ok(Some(ObjectInfo {
|
||||||
|
key: key.to_string(),
|
||||||
|
size: resp.content_length().unwrap_or(0),
|
||||||
|
etag: resp.e_tag().map(|s| s.to_string()),
|
||||||
|
last_modified: resp
|
||||||
|
.last_modified()
|
||||||
|
.and_then(|t| {
|
||||||
|
chrono::DateTime::from_timestamp(t.secs(), t.subsec_nanos())
|
||||||
|
}),
|
||||||
|
})),
|
||||||
|
Err(err) => {
|
||||||
|
if err
|
||||||
|
.as_service_error()
|
||||||
|
.map_or(false, |e| e.is_not_found())
|
||||||
|
{
|
||||||
|
Ok(None)
|
||||||
|
} else {
|
||||||
|
Err(Error::S3(err.to_string()))
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn delete_object(&self, bucket: &str, key: &str) -> Result<()> {
|
||||||
|
self.inner
|
||||||
|
.delete_object()
|
||||||
|
.bucket(bucket)
|
||||||
|
.key(key)
|
||||||
|
.send()
|
||||||
|
.await?;
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Upload an object using multipart upload, splitting into parts of the given size.
|
||||||
|
///
|
||||||
|
/// This is useful for large files where multipart upload provides better performance
|
||||||
|
/// through parallelism and resumability.
|
||||||
|
pub async fn multipart_upload(
|
||||||
|
&self,
|
||||||
|
bucket: &str,
|
||||||
|
key: &str,
|
||||||
|
data: impl AsRef<[u8]>,
|
||||||
|
part_size: usize,
|
||||||
|
) -> Result<()> {
|
||||||
|
let data = data.as_ref();
|
||||||
|
|
||||||
|
// Create multipart upload
|
||||||
|
let create_resp = self
|
||||||
|
.inner
|
||||||
|
.create_multipart_upload()
|
||||||
|
.bucket(bucket)
|
||||||
|
.key(key)
|
||||||
|
.send()
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
let upload_id = create_resp
|
||||||
|
.upload_id()
|
||||||
|
.ok_or_else(|| Error::S3("missing upload_id in response".to_string()))?
|
||||||
|
.to_string();
|
||||||
|
|
||||||
|
// Upload parts
|
||||||
|
let mut completed_parts = Vec::new();
|
||||||
|
let mut part_number = 1i32;
|
||||||
|
|
||||||
|
for chunk in data.chunks(part_size) {
|
||||||
|
let body = Bytes::copy_from_slice(chunk);
|
||||||
|
let upload_resp = self
|
||||||
|
.inner
|
||||||
|
.upload_part()
|
||||||
|
.bucket(bucket)
|
||||||
|
.key(key)
|
||||||
|
.upload_id(&upload_id)
|
||||||
|
.part_number(part_number)
|
||||||
|
.body(body.into())
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.map_err(|e| {
|
||||||
|
// Try to abort on failure
|
||||||
|
Error::S3(e.to_string())
|
||||||
|
})?;
|
||||||
|
|
||||||
|
let etag = upload_resp
|
||||||
|
.e_tag()
|
||||||
|
.ok_or_else(|| Error::S3("missing ETag in upload_part response".to_string()))?
|
||||||
|
.to_string();
|
||||||
|
|
||||||
|
completed_parts.push(
|
||||||
|
CompletedPart::builder()
|
||||||
|
.part_number(part_number)
|
||||||
|
.e_tag(etag)
|
||||||
|
.build(),
|
||||||
|
);
|
||||||
|
|
||||||
|
part_number += 1;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Complete multipart upload
|
||||||
|
let mut builder = CompletedMultipartUpload::builder();
|
||||||
|
for part in completed_parts {
|
||||||
|
builder = builder.parts(part);
|
||||||
|
}
|
||||||
|
|
||||||
|
self.inner
|
||||||
|
.complete_multipart_upload()
|
||||||
|
.bucket(bucket)
|
||||||
|
.key(key)
|
||||||
|
.upload_id(&upload_id)
|
||||||
|
.multipart_upload(builder.build())
|
||||||
|
.send()
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn list_objects(
|
||||||
|
&self,
|
||||||
|
bucket: &str,
|
||||||
|
prefix: Option<&str>,
|
||||||
|
) -> Result<Vec<ObjectInfo>> {
|
||||||
|
let mut req = self
|
||||||
|
.inner
|
||||||
|
.list_objects_v2()
|
||||||
|
.bucket(bucket);
|
||||||
|
|
||||||
|
if let Some(p) = prefix {
|
||||||
|
req = req.prefix(p);
|
||||||
|
}
|
||||||
|
|
||||||
|
let resp = req.send().await?;
|
||||||
|
Ok(resp
|
||||||
|
.contents()
|
||||||
|
.iter()
|
||||||
|
.map(|obj| ObjectInfo {
|
||||||
|
key: obj.key().unwrap_or_default().to_string(),
|
||||||
|
size: obj.size().unwrap_or(0),
|
||||||
|
etag: obj.e_tag().map(|s| s.to_string()),
|
||||||
|
last_modified: obj
|
||||||
|
.last_modified()
|
||||||
|
.and_then(|t| {
|
||||||
|
chrono::DateTime::from_timestamp(t.secs(), t.subsec_nanos())
|
||||||
|
}),
|
||||||
|
})
|
||||||
|
.collect())
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Builder for `Post3Client` with custom configuration.
|
||||||
|
pub struct Post3ClientBuilder {
|
||||||
|
endpoint_url: Option<String>,
|
||||||
|
access_key: String,
|
||||||
|
secret_key: String,
|
||||||
|
region: String,
|
||||||
|
}
|
||||||
|
|
||||||
|
impl Default for Post3ClientBuilder {
|
||||||
|
fn default() -> Self {
|
||||||
|
Self {
|
||||||
|
endpoint_url: None,
|
||||||
|
access_key: "test".to_string(),
|
||||||
|
secret_key: "test".to_string(),
|
||||||
|
region: "us-east-1".to_string(),
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
impl Post3ClientBuilder {
|
||||||
|
pub fn endpoint_url(mut self, url: impl Into<String>) -> Self {
|
||||||
|
self.endpoint_url = Some(url.into());
|
||||||
|
self
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn credentials(mut self, access_key: impl Into<String>, secret_key: impl Into<String>) -> Self {
|
||||||
|
self.access_key = access_key.into();
|
||||||
|
self.secret_key = secret_key.into();
|
||||||
|
self
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn region(mut self, region: impl Into<String>) -> Self {
|
||||||
|
self.region = region.into();
|
||||||
|
self
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn build(self) -> Post3Client {
|
||||||
|
let creds = Credentials::new(
|
||||||
|
&self.access_key,
|
||||||
|
&self.secret_key,
|
||||||
|
None,
|
||||||
|
None,
|
||||||
|
"post3-sdk",
|
||||||
|
);
|
||||||
|
|
||||||
|
let mut config = aws_sdk_s3::Config::builder()
|
||||||
|
.behavior_version_latest()
|
||||||
|
.region(aws_types::region::Region::new(self.region))
|
||||||
|
.credentials_provider(creds)
|
||||||
|
.force_path_style(true);
|
||||||
|
|
||||||
|
if let Some(url) = self.endpoint_url {
|
||||||
|
config = config.endpoint_url(url);
|
||||||
|
}
|
||||||
|
|
||||||
|
Post3Client {
|
||||||
|
inner: Client::from_conf(config.build()),
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
#[cfg(test)]
|
||||||
|
mod tests {
|
||||||
|
use super::*;
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn test_construct_client() {
|
||||||
|
let client = Post3Client::new("http://localhost:9000");
|
||||||
|
// Verify we can access the inner client
|
||||||
|
let _inner = client.inner();
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn test_builder_custom_creds() {
|
||||||
|
let client = Post3Client::builder()
|
||||||
|
.endpoint_url("http://localhost:9000")
|
||||||
|
.credentials("my-access-key", "my-secret-key")
|
||||||
|
.region("eu-west-1")
|
||||||
|
.build();
|
||||||
|
let _inner = client.inner();
|
||||||
|
}
|
||||||
|
}
|
||||||
37
crates/post3-server/Cargo.toml
Normal file
37
crates/post3-server/Cargo.toml
Normal file
@@ -0,0 +1,37 @@
|
|||||||
|
[package]
|
||||||
|
name = "post3-server"
|
||||||
|
version.workspace = true
|
||||||
|
edition.workspace = true
|
||||||
|
|
||||||
|
[dependencies]
|
||||||
|
post3.workspace = true
|
||||||
|
|
||||||
|
anyhow.workspace = true
|
||||||
|
tokio.workspace = true
|
||||||
|
tracing.workspace = true
|
||||||
|
tracing-subscriber.workspace = true
|
||||||
|
clap.workspace = true
|
||||||
|
dotenvy.workspace = true
|
||||||
|
uuid.workspace = true
|
||||||
|
bytes.workspace = true
|
||||||
|
axum.workspace = true
|
||||||
|
tower.workspace = true
|
||||||
|
tower-http.workspace = true
|
||||||
|
notmad.workspace = true
|
||||||
|
tokio-util.workspace = true
|
||||||
|
sqlx.workspace = true
|
||||||
|
chrono.workspace = true
|
||||||
|
quick-xml.workspace = true
|
||||||
|
md-5.workspace = true
|
||||||
|
hex.workspace = true
|
||||||
|
serde.workspace = true
|
||||||
|
|
||||||
|
[dev-dependencies]
|
||||||
|
aws-config = "1"
|
||||||
|
aws-sdk-s3 = "1"
|
||||||
|
aws-credential-types = "1"
|
||||||
|
aws-types = "1"
|
||||||
|
tokio = { workspace = true, features = ["test-util"] }
|
||||||
|
tower.workspace = true
|
||||||
|
tracing-subscriber.workspace = true
|
||||||
|
tempfile.workspace = true
|
||||||
58
crates/post3-server/src/cli.rs
Normal file
58
crates/post3-server/src/cli.rs
Normal file
@@ -0,0 +1,58 @@
|
|||||||
|
pub mod serve;
|
||||||
|
|
||||||
|
use anyhow::Context;
|
||||||
|
use clap::{Parser, Subcommand};
|
||||||
|
use post3::{FilesystemBackend, PostgresBackend};
|
||||||
|
use sqlx::PgPool;
|
||||||
|
|
||||||
|
use crate::state::State;
|
||||||
|
|
||||||
|
#[derive(Parser)]
|
||||||
|
#[command(name = "post3-server", about = "S3-compatible storage server")]
|
||||||
|
struct App {
|
||||||
|
#[command(subcommand)]
|
||||||
|
command: Commands,
|
||||||
|
}
|
||||||
|
|
||||||
|
#[derive(Subcommand)]
|
||||||
|
enum Commands {
|
||||||
|
Serve(serve::ServeCommand),
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn execute() -> anyhow::Result<()> {
|
||||||
|
let app = App::parse();
|
||||||
|
|
||||||
|
match app.command {
|
||||||
|
Commands::Serve(cmd) => match cmd.backend {
|
||||||
|
serve::BackendType::Pg => {
|
||||||
|
let database_url =
|
||||||
|
std::env::var("DATABASE_URL").context("DATABASE_URL not set")?;
|
||||||
|
let pool = PgPool::connect(&database_url).await?;
|
||||||
|
|
||||||
|
sqlx::migrate!("../post3/migrations/")
|
||||||
|
.set_locking(false)
|
||||||
|
.run(&pool)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
tracing::info!("database migrations applied");
|
||||||
|
|
||||||
|
let backend = PostgresBackend::new(pool);
|
||||||
|
let state = State { store: backend };
|
||||||
|
cmd.run(&state).await
|
||||||
|
}
|
||||||
|
serve::BackendType::Fs => {
|
||||||
|
let data_dir = cmd
|
||||||
|
.data_dir
|
||||||
|
.as_ref()
|
||||||
|
.context("--data-dir is required when using --backend fs")?;
|
||||||
|
|
||||||
|
std::fs::create_dir_all(data_dir)?;
|
||||||
|
tracing::info!(path = %data_dir.display(), "using filesystem backend");
|
||||||
|
|
||||||
|
let backend = FilesystemBackend::new(data_dir);
|
||||||
|
let state = State { store: backend };
|
||||||
|
cmd.run(&state).await
|
||||||
|
}
|
||||||
|
},
|
||||||
|
}
|
||||||
|
}
|
||||||
44
crates/post3-server/src/cli/serve.rs
Normal file
44
crates/post3-server/src/cli/serve.rs
Normal file
@@ -0,0 +1,44 @@
|
|||||||
|
use std::net::SocketAddr;
|
||||||
|
use std::path::PathBuf;
|
||||||
|
|
||||||
|
use clap::{Parser, ValueEnum};
|
||||||
|
use post3::StorageBackend;
|
||||||
|
|
||||||
|
use crate::s3::S3Server;
|
||||||
|
use crate::state::State;
|
||||||
|
|
||||||
|
#[derive(Clone, ValueEnum)]
|
||||||
|
pub enum BackendType {
|
||||||
|
/// PostgreSQL backend (requires DATABASE_URL)
|
||||||
|
Pg,
|
||||||
|
/// Local filesystem backend
|
||||||
|
Fs,
|
||||||
|
}
|
||||||
|
|
||||||
|
#[derive(Parser)]
|
||||||
|
pub struct ServeCommand {
|
||||||
|
#[arg(long, env = "POST3_HOST", default_value = "127.0.0.1:9000")]
|
||||||
|
pub host: SocketAddr,
|
||||||
|
|
||||||
|
/// Storage backend to use
|
||||||
|
#[arg(long, default_value = "pg")]
|
||||||
|
pub backend: BackendType,
|
||||||
|
|
||||||
|
/// Data directory for filesystem backend
|
||||||
|
#[arg(long)]
|
||||||
|
pub data_dir: Option<PathBuf>,
|
||||||
|
}
|
||||||
|
|
||||||
|
impl ServeCommand {
|
||||||
|
pub async fn run<B: StorageBackend>(&self, state: &State<B>) -> anyhow::Result<()> {
|
||||||
|
notmad::Mad::builder()
|
||||||
|
.add(S3Server {
|
||||||
|
host: self.host,
|
||||||
|
state: state.clone(),
|
||||||
|
})
|
||||||
|
.run()
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
|
}
|
||||||
2
crates/post3-server/src/lib.rs
Normal file
2
crates/post3-server/src/lib.rs
Normal file
@@ -0,0 +1,2 @@
|
|||||||
|
pub mod s3;
|
||||||
|
pub mod state;
|
||||||
18
crates/post3-server/src/main.rs
Normal file
18
crates/post3-server/src/main.rs
Normal file
@@ -0,0 +1,18 @@
|
|||||||
|
mod cli;
|
||||||
|
pub mod s3;
|
||||||
|
pub mod state;
|
||||||
|
|
||||||
|
#[tokio::main]
|
||||||
|
async fn main() -> anyhow::Result<()> {
|
||||||
|
dotenvy::dotenv().ok();
|
||||||
|
|
||||||
|
tracing_subscriber::fmt()
|
||||||
|
.with_env_filter(
|
||||||
|
tracing_subscriber::EnvFilter::from_default_env()
|
||||||
|
.add_directive("post3_server=debug".parse()?)
|
||||||
|
.add_directive("post3=debug".parse()?),
|
||||||
|
)
|
||||||
|
.init();
|
||||||
|
|
||||||
|
cli::execute().await
|
||||||
|
}
|
||||||
55
crates/post3-server/src/s3/extractors.rs
Normal file
55
crates/post3-server/src/s3/extractors.rs
Normal file
@@ -0,0 +1,55 @@
|
|||||||
|
use serde::Deserialize;
|
||||||
|
|
||||||
|
/// Query params for GET /{bucket} — dispatches between ListObjectsV2, ListMultipartUploads,
|
||||||
|
/// ListObjectVersions, and GetBucketLocation.
|
||||||
|
#[derive(Debug, Default, Deserialize)]
|
||||||
|
pub struct BucketGetQuery {
|
||||||
|
/// Presence of `?uploads` signals ListMultipartUploads
|
||||||
|
pub uploads: Option<String>,
|
||||||
|
/// Presence of `?versions` signals ListObjectVersions
|
||||||
|
pub versions: Option<String>,
|
||||||
|
/// Presence of `?location` signals GetBucketLocation
|
||||||
|
pub location: Option<String>,
|
||||||
|
#[serde(rename = "list-type")]
|
||||||
|
pub list_type: Option<i32>,
|
||||||
|
pub prefix: Option<String>,
|
||||||
|
#[serde(rename = "max-keys")]
|
||||||
|
pub max_keys: Option<i64>,
|
||||||
|
#[serde(rename = "continuation-token")]
|
||||||
|
pub continuation_token: Option<String>,
|
||||||
|
#[serde(rename = "start-after")]
|
||||||
|
pub start_after: Option<String>,
|
||||||
|
/// ListObjects v1 pagination marker
|
||||||
|
pub marker: Option<String>,
|
||||||
|
pub delimiter: Option<String>,
|
||||||
|
#[serde(rename = "encoding-type")]
|
||||||
|
pub encoding_type: Option<String>,
|
||||||
|
#[serde(rename = "key-marker")]
|
||||||
|
pub key_marker: Option<String>,
|
||||||
|
#[serde(rename = "upload-id-marker")]
|
||||||
|
pub upload_id_marker: Option<String>,
|
||||||
|
#[serde(rename = "max-uploads")]
|
||||||
|
pub max_uploads: Option<i32>,
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Query params for POST /{bucket} — dispatches between DeleteObjects and other ops.
|
||||||
|
#[derive(Debug, Default, Deserialize)]
|
||||||
|
pub struct BucketPostQuery {
|
||||||
|
/// Presence of `?delete` signals DeleteObjects
|
||||||
|
pub delete: Option<String>,
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Query params for /{bucket}/{*key} dispatchers (PUT, GET, DELETE, POST).
|
||||||
|
#[derive(Debug, Default, Deserialize)]
|
||||||
|
pub struct ObjectKeyQuery {
|
||||||
|
#[serde(rename = "uploadId")]
|
||||||
|
pub upload_id: Option<String>,
|
||||||
|
#[serde(rename = "partNumber")]
|
||||||
|
pub part_number: Option<i32>,
|
||||||
|
/// Presence of `?uploads` signals CreateMultipartUpload (POST only)
|
||||||
|
pub uploads: Option<String>,
|
||||||
|
#[serde(rename = "max-parts")]
|
||||||
|
pub max_parts: Option<i32>,
|
||||||
|
#[serde(rename = "part-number-marker")]
|
||||||
|
pub part_number_marker: Option<i32>,
|
||||||
|
}
|
||||||
187
crates/post3-server/src/s3/handlers/buckets.rs
Normal file
187
crates/post3-server/src/s3/handlers/buckets.rs
Normal file
@@ -0,0 +1,187 @@
|
|||||||
|
use axum::{
|
||||||
|
extract::{Path, State},
|
||||||
|
http::StatusCode,
|
||||||
|
response::IntoResponse,
|
||||||
|
};
|
||||||
|
use post3::{Post3Error, StorageBackend};
|
||||||
|
|
||||||
|
use crate::s3::responses;
|
||||||
|
use crate::state::State as AppState;
|
||||||
|
|
||||||
|
fn is_valid_bucket_name(name: &str) -> bool {
|
||||||
|
let len = name.len();
|
||||||
|
if len < 3 || len > 63 {
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
// Must contain only lowercase letters, numbers, hyphens, and periods
|
||||||
|
if !name
|
||||||
|
.bytes()
|
||||||
|
.all(|b| b.is_ascii_lowercase() || b.is_ascii_digit() || b == b'-' || b == b'.')
|
||||||
|
{
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
// Must start and end with a letter or number
|
||||||
|
let first = name.as_bytes()[0];
|
||||||
|
let last = name.as_bytes()[len - 1];
|
||||||
|
if !(first.is_ascii_lowercase() || first.is_ascii_digit()) {
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
if !(last.is_ascii_lowercase() || last.is_ascii_digit()) {
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
// Must not be formatted as an IP address
|
||||||
|
if name.split('.').count() == 4
|
||||||
|
&& name
|
||||||
|
.split('.')
|
||||||
|
.all(|part| part.parse::<u8>().is_ok())
|
||||||
|
{
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
true
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn create_bucket<B: StorageBackend>(
|
||||||
|
State(state): State<AppState<B>>,
|
||||||
|
Path(bucket): Path<String>,
|
||||||
|
) -> impl IntoResponse {
|
||||||
|
if !is_valid_bucket_name(&bucket) {
|
||||||
|
return (
|
||||||
|
StatusCode::BAD_REQUEST,
|
||||||
|
[("Content-Type", "application/xml".to_string())],
|
||||||
|
responses::error_xml(
|
||||||
|
"InvalidBucketName",
|
||||||
|
"The specified bucket is not valid.",
|
||||||
|
&bucket,
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response();
|
||||||
|
}
|
||||||
|
|
||||||
|
match state.store.create_bucket(&bucket).await {
|
||||||
|
Ok(_) => (
|
||||||
|
StatusCode::OK,
|
||||||
|
[
|
||||||
|
("Location", format!("/{bucket}")),
|
||||||
|
(
|
||||||
|
"x-amz-request-id",
|
||||||
|
uuid::Uuid::new_v4().to_string(),
|
||||||
|
),
|
||||||
|
],
|
||||||
|
)
|
||||||
|
.into_response(),
|
||||||
|
Err(Post3Error::BucketAlreadyExists(_)) => (
|
||||||
|
StatusCode::CONFLICT,
|
||||||
|
[("Content-Type", "application/xml".to_string())],
|
||||||
|
responses::error_xml(
|
||||||
|
"BucketAlreadyOwnedByYou",
|
||||||
|
"Your previous request to create the named bucket succeeded and you already own it.",
|
||||||
|
&bucket,
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response(),
|
||||||
|
Err(e) => {
|
||||||
|
tracing::error!("create_bucket error: {e}");
|
||||||
|
(
|
||||||
|
StatusCode::INTERNAL_SERVER_ERROR,
|
||||||
|
[("Content-Type", "application/xml".to_string())],
|
||||||
|
responses::error_xml("InternalError", &e.to_string(), &bucket),
|
||||||
|
)
|
||||||
|
.into_response()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn head_bucket<B: StorageBackend>(
|
||||||
|
State(state): State<AppState<B>>,
|
||||||
|
Path(bucket): Path<String>,
|
||||||
|
) -> impl IntoResponse {
|
||||||
|
match state.store.head_bucket(&bucket).await {
|
||||||
|
Ok(Some(_)) => (
|
||||||
|
StatusCode::OK,
|
||||||
|
[
|
||||||
|
("x-amz-request-id", uuid::Uuid::new_v4().to_string()),
|
||||||
|
("x-amz-bucket-region", "us-east-1".to_string()),
|
||||||
|
],
|
||||||
|
)
|
||||||
|
.into_response(),
|
||||||
|
Ok(None) => (
|
||||||
|
StatusCode::NOT_FOUND,
|
||||||
|
[("x-amz-request-id", uuid::Uuid::new_v4().to_string())],
|
||||||
|
)
|
||||||
|
.into_response(),
|
||||||
|
Err(e) => {
|
||||||
|
tracing::error!("head_bucket error: {e}");
|
||||||
|
StatusCode::INTERNAL_SERVER_ERROR.into_response()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn delete_bucket<B: StorageBackend>(
|
||||||
|
State(state): State<AppState<B>>,
|
||||||
|
Path(bucket): Path<String>,
|
||||||
|
) -> impl IntoResponse {
|
||||||
|
match state.store.delete_bucket(&bucket).await {
|
||||||
|
Ok(()) => (
|
||||||
|
StatusCode::NO_CONTENT,
|
||||||
|
[("x-amz-request-id", uuid::Uuid::new_v4().to_string())],
|
||||||
|
)
|
||||||
|
.into_response(),
|
||||||
|
Err(Post3Error::BucketNotFound(_)) => (
|
||||||
|
StatusCode::NOT_FOUND,
|
||||||
|
[("Content-Type", "application/xml".to_string())],
|
||||||
|
responses::error_xml(
|
||||||
|
"NoSuchBucket",
|
||||||
|
"The specified bucket does not exist",
|
||||||
|
&bucket,
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response(),
|
||||||
|
Err(Post3Error::BucketNotEmpty(_)) => (
|
||||||
|
StatusCode::CONFLICT,
|
||||||
|
[("Content-Type", "application/xml".to_string())],
|
||||||
|
responses::error_xml(
|
||||||
|
"BucketNotEmpty",
|
||||||
|
"The bucket you tried to delete is not empty",
|
||||||
|
&bucket,
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response(),
|
||||||
|
Err(e) => {
|
||||||
|
tracing::error!("delete_bucket error: {e}");
|
||||||
|
(
|
||||||
|
StatusCode::INTERNAL_SERVER_ERROR,
|
||||||
|
[("Content-Type", "application/xml".to_string())],
|
||||||
|
responses::error_xml("InternalError", &e.to_string(), &bucket),
|
||||||
|
)
|
||||||
|
.into_response()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn list_buckets<B: StorageBackend>(
|
||||||
|
State(state): State<AppState<B>>,
|
||||||
|
) -> impl IntoResponse {
|
||||||
|
match state.store.list_buckets().await {
|
||||||
|
Ok(buckets) => (
|
||||||
|
StatusCode::OK,
|
||||||
|
[
|
||||||
|
("Content-Type", "application/xml".to_string()),
|
||||||
|
(
|
||||||
|
"x-amz-request-id",
|
||||||
|
uuid::Uuid::new_v4().to_string(),
|
||||||
|
),
|
||||||
|
],
|
||||||
|
responses::list_buckets_xml(&buckets),
|
||||||
|
)
|
||||||
|
.into_response(),
|
||||||
|
Err(e) => {
|
||||||
|
tracing::error!("list_buckets error: {e}");
|
||||||
|
(
|
||||||
|
StatusCode::INTERNAL_SERVER_ERROR,
|
||||||
|
[("Content-Type", "application/xml".to_string())],
|
||||||
|
responses::error_xml("InternalError", &e.to_string(), "/"),
|
||||||
|
)
|
||||||
|
.into_response()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
3
crates/post3-server/src/s3/handlers/mod.rs
Normal file
3
crates/post3-server/src/s3/handlers/mod.rs
Normal file
@@ -0,0 +1,3 @@
|
|||||||
|
pub mod buckets;
|
||||||
|
pub mod multipart;
|
||||||
|
pub mod objects;
|
||||||
509
crates/post3-server/src/s3/handlers/multipart.rs
Normal file
509
crates/post3-server/src/s3/handlers/multipart.rs
Normal file
@@ -0,0 +1,509 @@
|
|||||||
|
use std::collections::HashMap;
|
||||||
|
|
||||||
|
use axum::{
|
||||||
|
extract::{Path, Query, State},
|
||||||
|
http::{HeaderMap, HeaderValue, StatusCode},
|
||||||
|
response::{IntoResponse, Response},
|
||||||
|
};
|
||||||
|
use bytes::Bytes;
|
||||||
|
use post3::{Post3Error, StorageBackend};
|
||||||
|
|
||||||
|
use crate::s3::extractors::{BucketGetQuery, ObjectKeyQuery};
|
||||||
|
use crate::s3::responses;
|
||||||
|
use crate::state::State as AppState;
|
||||||
|
|
||||||
|
pub async fn create_multipart_upload<B: StorageBackend>(
|
||||||
|
State(state): State<AppState<B>>,
|
||||||
|
Path((bucket, key)): Path<(String, String)>,
|
||||||
|
headers: HeaderMap,
|
||||||
|
) -> Response {
|
||||||
|
let content_type = headers
|
||||||
|
.get("content-type")
|
||||||
|
.and_then(|v| v.to_str().ok())
|
||||||
|
.map(|s| s.to_string());
|
||||||
|
|
||||||
|
let mut metadata = HashMap::new();
|
||||||
|
for (name, value) in headers.iter() {
|
||||||
|
let name_str = name.as_str();
|
||||||
|
if let Some(meta_key) = name_str.strip_prefix("x-amz-meta-") {
|
||||||
|
if let Ok(v) = value.to_str() {
|
||||||
|
metadata.insert(meta_key.to_string(), v.to_string());
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
match state
|
||||||
|
.store
|
||||||
|
.create_multipart_upload(&bucket, &key, content_type.as_deref(), metadata)
|
||||||
|
.await
|
||||||
|
{
|
||||||
|
Ok(result) => {
|
||||||
|
let mut response_headers = HeaderMap::new();
|
||||||
|
response_headers
|
||||||
|
.insert("Content-Type", HeaderValue::from_static("application/xml"));
|
||||||
|
response_headers.insert(
|
||||||
|
"x-amz-request-id",
|
||||||
|
HeaderValue::from_str(&uuid::Uuid::new_v4().to_string()).unwrap(),
|
||||||
|
);
|
||||||
|
(
|
||||||
|
StatusCode::OK,
|
||||||
|
response_headers,
|
||||||
|
responses::initiate_multipart_upload_xml(
|
||||||
|
&result.bucket,
|
||||||
|
&result.key,
|
||||||
|
&result.upload_id,
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response()
|
||||||
|
}
|
||||||
|
Err(Post3Error::BucketNotFound(b)) => (
|
||||||
|
StatusCode::NOT_FOUND,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml(
|
||||||
|
"NoSuchBucket",
|
||||||
|
"The specified bucket does not exist",
|
||||||
|
&b,
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response(),
|
||||||
|
Err(e) => {
|
||||||
|
tracing::error!("create_multipart_upload error: {e}");
|
||||||
|
(
|
||||||
|
StatusCode::INTERNAL_SERVER_ERROR,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml(
|
||||||
|
"InternalError",
|
||||||
|
&e.to_string(),
|
||||||
|
&format!("/{bucket}/{key}"),
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn upload_part<B: StorageBackend>(
|
||||||
|
State(state): State<AppState<B>>,
|
||||||
|
Path((bucket, key)): Path<(String, String)>,
|
||||||
|
Query(query): Query<ObjectKeyQuery>,
|
||||||
|
body: Bytes,
|
||||||
|
) -> Response {
|
||||||
|
let upload_id = match &query.upload_id {
|
||||||
|
Some(id) => id.clone(),
|
||||||
|
None => {
|
||||||
|
return (
|
||||||
|
StatusCode::BAD_REQUEST,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml(
|
||||||
|
"InvalidRequest",
|
||||||
|
"Missing uploadId parameter",
|
||||||
|
&format!("/{bucket}/{key}"),
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response()
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
let part_number = match query.part_number {
|
||||||
|
Some(n) => n,
|
||||||
|
None => {
|
||||||
|
return (
|
||||||
|
StatusCode::BAD_REQUEST,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml(
|
||||||
|
"InvalidRequest",
|
||||||
|
"Missing partNumber parameter",
|
||||||
|
&format!("/{bucket}/{key}"),
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response()
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
match state
|
||||||
|
.store
|
||||||
|
.upload_part(&bucket, &key, &upload_id, part_number, body)
|
||||||
|
.await
|
||||||
|
{
|
||||||
|
Ok(result) => {
|
||||||
|
let mut headers = HeaderMap::new();
|
||||||
|
headers.insert("ETag", result.etag.parse().unwrap());
|
||||||
|
headers.insert(
|
||||||
|
"x-amz-request-id",
|
||||||
|
HeaderValue::from_str(&uuid::Uuid::new_v4().to_string()).unwrap(),
|
||||||
|
);
|
||||||
|
(StatusCode::OK, headers).into_response()
|
||||||
|
}
|
||||||
|
Err(Post3Error::UploadNotFound(id)) => (
|
||||||
|
StatusCode::NOT_FOUND,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml(
|
||||||
|
"NoSuchUpload",
|
||||||
|
"The specified multipart upload does not exist",
|
||||||
|
&id,
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response(),
|
||||||
|
Err(Post3Error::BucketNotFound(b)) => (
|
||||||
|
StatusCode::NOT_FOUND,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml(
|
||||||
|
"NoSuchBucket",
|
||||||
|
"The specified bucket does not exist",
|
||||||
|
&b,
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response(),
|
||||||
|
Err(e) => {
|
||||||
|
tracing::error!("upload_part error: {e}");
|
||||||
|
(
|
||||||
|
StatusCode::INTERNAL_SERVER_ERROR,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml(
|
||||||
|
"InternalError",
|
||||||
|
&e.to_string(),
|
||||||
|
&format!("/{bucket}/{key}"),
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn complete_multipart_upload<B: StorageBackend>(
|
||||||
|
State(state): State<AppState<B>>,
|
||||||
|
Path((bucket, key)): Path<(String, String)>,
|
||||||
|
Query(query): Query<ObjectKeyQuery>,
|
||||||
|
body: Bytes,
|
||||||
|
) -> Response {
|
||||||
|
let upload_id = match &query.upload_id {
|
||||||
|
Some(id) => id.clone(),
|
||||||
|
None => {
|
||||||
|
return (
|
||||||
|
StatusCode::BAD_REQUEST,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml(
|
||||||
|
"InvalidRequest",
|
||||||
|
"Missing uploadId parameter",
|
||||||
|
&format!("/{bucket}/{key}"),
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response()
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
let part_etags = match responses::parse_complete_multipart_xml(&body) {
|
||||||
|
Ok(parts) => parts,
|
||||||
|
Err(msg) => {
|
||||||
|
return (
|
||||||
|
StatusCode::BAD_REQUEST,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml("MalformedXML", &msg, &format!("/{bucket}/{key}")),
|
||||||
|
)
|
||||||
|
.into_response()
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
match state
|
||||||
|
.store
|
||||||
|
.complete_multipart_upload(&bucket, &key, &upload_id, part_etags)
|
||||||
|
.await
|
||||||
|
{
|
||||||
|
Ok(result) => {
|
||||||
|
let location = format!("/{}/{}", bucket, key);
|
||||||
|
let mut headers = HeaderMap::new();
|
||||||
|
headers
|
||||||
|
.insert("Content-Type", HeaderValue::from_static("application/xml"));
|
||||||
|
headers.insert(
|
||||||
|
"x-amz-request-id",
|
||||||
|
HeaderValue::from_str(&uuid::Uuid::new_v4().to_string()).unwrap(),
|
||||||
|
);
|
||||||
|
(
|
||||||
|
StatusCode::OK,
|
||||||
|
headers,
|
||||||
|
responses::complete_multipart_upload_xml(
|
||||||
|
&location,
|
||||||
|
&result.bucket,
|
||||||
|
&result.key,
|
||||||
|
&result.etag,
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response()
|
||||||
|
}
|
||||||
|
Err(Post3Error::UploadNotFound(id)) => (
|
||||||
|
StatusCode::NOT_FOUND,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml(
|
||||||
|
"NoSuchUpload",
|
||||||
|
"The specified multipart upload does not exist",
|
||||||
|
&id,
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response(),
|
||||||
|
Err(Post3Error::InvalidPart {
|
||||||
|
upload_id: _,
|
||||||
|
part_number,
|
||||||
|
}) => (
|
||||||
|
StatusCode::BAD_REQUEST,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml(
|
||||||
|
"InvalidPart",
|
||||||
|
&format!("Part {part_number} not found or not uploaded"),
|
||||||
|
&format!("/{bucket}/{key}"),
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response(),
|
||||||
|
Err(Post3Error::ETagMismatch {
|
||||||
|
part_number,
|
||||||
|
expected,
|
||||||
|
got,
|
||||||
|
}) => (
|
||||||
|
StatusCode::BAD_REQUEST,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml(
|
||||||
|
"InvalidPart",
|
||||||
|
&format!(
|
||||||
|
"ETag mismatch for part {part_number}: expected {expected}, got {got}"
|
||||||
|
),
|
||||||
|
&format!("/{bucket}/{key}"),
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response(),
|
||||||
|
Err(Post3Error::InvalidPartOrder) => (
|
||||||
|
StatusCode::BAD_REQUEST,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml(
|
||||||
|
"InvalidPartOrder",
|
||||||
|
"Parts must be in ascending order",
|
||||||
|
&format!("/{bucket}/{key}"),
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response(),
|
||||||
|
Err(Post3Error::EntityTooSmall {
|
||||||
|
part_number,
|
||||||
|
size,
|
||||||
|
}) => (
|
||||||
|
StatusCode::BAD_REQUEST,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml(
|
||||||
|
"EntityTooSmall",
|
||||||
|
&format!(
|
||||||
|
"Your proposed upload is smaller than the minimum allowed size. Part {part_number} has size {size}."
|
||||||
|
),
|
||||||
|
&format!("/{bucket}/{key}"),
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response(),
|
||||||
|
Err(Post3Error::BucketNotFound(b)) => (
|
||||||
|
StatusCode::NOT_FOUND,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml(
|
||||||
|
"NoSuchBucket",
|
||||||
|
"The specified bucket does not exist",
|
||||||
|
&b,
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response(),
|
||||||
|
Err(e) => {
|
||||||
|
tracing::error!("complete_multipart_upload error: {e}");
|
||||||
|
(
|
||||||
|
StatusCode::INTERNAL_SERVER_ERROR,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml(
|
||||||
|
"InternalError",
|
||||||
|
&e.to_string(),
|
||||||
|
&format!("/{bucket}/{key}"),
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn abort_multipart_upload<B: StorageBackend>(
|
||||||
|
State(state): State<AppState<B>>,
|
||||||
|
Path((bucket, key)): Path<(String, String)>,
|
||||||
|
Query(query): Query<ObjectKeyQuery>,
|
||||||
|
) -> Response {
|
||||||
|
let upload_id = match &query.upload_id {
|
||||||
|
Some(id) => id.clone(),
|
||||||
|
None => {
|
||||||
|
return (
|
||||||
|
StatusCode::BAD_REQUEST,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml(
|
||||||
|
"InvalidRequest",
|
||||||
|
"Missing uploadId parameter",
|
||||||
|
&format!("/{bucket}/{key}"),
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response()
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
match state
|
||||||
|
.store
|
||||||
|
.abort_multipart_upload(&bucket, &key, &upload_id)
|
||||||
|
.await
|
||||||
|
{
|
||||||
|
Ok(()) => {
|
||||||
|
let mut headers = HeaderMap::new();
|
||||||
|
headers.insert(
|
||||||
|
"x-amz-request-id",
|
||||||
|
HeaderValue::from_str(&uuid::Uuid::new_v4().to_string()).unwrap(),
|
||||||
|
);
|
||||||
|
(StatusCode::NO_CONTENT, headers).into_response()
|
||||||
|
}
|
||||||
|
Err(Post3Error::UploadNotFound(id)) => (
|
||||||
|
StatusCode::NOT_FOUND,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml(
|
||||||
|
"NoSuchUpload",
|
||||||
|
"The specified multipart upload does not exist",
|
||||||
|
&id,
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response(),
|
||||||
|
Err(e) => {
|
||||||
|
tracing::error!("abort_multipart_upload error: {e}");
|
||||||
|
(
|
||||||
|
StatusCode::INTERNAL_SERVER_ERROR,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml(
|
||||||
|
"InternalError",
|
||||||
|
&e.to_string(),
|
||||||
|
&format!("/{bucket}/{key}"),
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn list_parts<B: StorageBackend>(
|
||||||
|
State(state): State<AppState<B>>,
|
||||||
|
Path((bucket, key)): Path<(String, String)>,
|
||||||
|
Query(query): Query<ObjectKeyQuery>,
|
||||||
|
) -> Response {
|
||||||
|
let upload_id = match &query.upload_id {
|
||||||
|
Some(id) => id.clone(),
|
||||||
|
None => {
|
||||||
|
return (
|
||||||
|
StatusCode::BAD_REQUEST,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml(
|
||||||
|
"InvalidRequest",
|
||||||
|
"Missing uploadId parameter",
|
||||||
|
&format!("/{bucket}/{key}"),
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response()
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
match state
|
||||||
|
.store
|
||||||
|
.list_parts(
|
||||||
|
&bucket,
|
||||||
|
&key,
|
||||||
|
&upload_id,
|
||||||
|
query.max_parts,
|
||||||
|
query.part_number_marker,
|
||||||
|
)
|
||||||
|
.await
|
||||||
|
{
|
||||||
|
Ok(result) => {
|
||||||
|
let max_parts = query.max_parts.unwrap_or(1000);
|
||||||
|
let mut headers = HeaderMap::new();
|
||||||
|
headers
|
||||||
|
.insert("Content-Type", HeaderValue::from_static("application/xml"));
|
||||||
|
headers.insert(
|
||||||
|
"x-amz-request-id",
|
||||||
|
HeaderValue::from_str(&uuid::Uuid::new_v4().to_string()).unwrap(),
|
||||||
|
);
|
||||||
|
(
|
||||||
|
StatusCode::OK,
|
||||||
|
headers,
|
||||||
|
responses::list_parts_xml(&result, max_parts),
|
||||||
|
)
|
||||||
|
.into_response()
|
||||||
|
}
|
||||||
|
Err(Post3Error::UploadNotFound(id)) => (
|
||||||
|
StatusCode::NOT_FOUND,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml(
|
||||||
|
"NoSuchUpload",
|
||||||
|
"The specified multipart upload does not exist",
|
||||||
|
&id,
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response(),
|
||||||
|
Err(e) => {
|
||||||
|
tracing::error!("list_parts error: {e}");
|
||||||
|
(
|
||||||
|
StatusCode::INTERNAL_SERVER_ERROR,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml(
|
||||||
|
"InternalError",
|
||||||
|
&e.to_string(),
|
||||||
|
&format!("/{bucket}/{key}"),
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn list_multipart_uploads<B: StorageBackend>(
|
||||||
|
State(state): State<AppState<B>>,
|
||||||
|
Path(bucket): Path<String>,
|
||||||
|
Query(query): Query<BucketGetQuery>,
|
||||||
|
) -> Response {
|
||||||
|
match state
|
||||||
|
.store
|
||||||
|
.list_multipart_uploads(
|
||||||
|
&bucket,
|
||||||
|
query.prefix.as_deref(),
|
||||||
|
query.key_marker.as_deref(),
|
||||||
|
query.upload_id_marker.as_deref(),
|
||||||
|
query.max_uploads,
|
||||||
|
)
|
||||||
|
.await
|
||||||
|
{
|
||||||
|
Ok(result) => {
|
||||||
|
let max_uploads = query.max_uploads.unwrap_or(1000);
|
||||||
|
let mut headers = HeaderMap::new();
|
||||||
|
headers
|
||||||
|
.insert("Content-Type", HeaderValue::from_static("application/xml"));
|
||||||
|
headers.insert(
|
||||||
|
"x-amz-request-id",
|
||||||
|
HeaderValue::from_str(&uuid::Uuid::new_v4().to_string()).unwrap(),
|
||||||
|
);
|
||||||
|
(
|
||||||
|
StatusCode::OK,
|
||||||
|
headers,
|
||||||
|
responses::list_multipart_uploads_xml(&result, max_uploads),
|
||||||
|
)
|
||||||
|
.into_response()
|
||||||
|
}
|
||||||
|
Err(Post3Error::BucketNotFound(b)) => (
|
||||||
|
StatusCode::NOT_FOUND,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml(
|
||||||
|
"NoSuchBucket",
|
||||||
|
"The specified bucket does not exist",
|
||||||
|
&b,
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response(),
|
||||||
|
Err(e) => {
|
||||||
|
tracing::error!("list_multipart_uploads error: {e}");
|
||||||
|
(
|
||||||
|
StatusCode::INTERNAL_SERVER_ERROR,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml("InternalError", &e.to_string(), &bucket),
|
||||||
|
)
|
||||||
|
.into_response()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
598
crates/post3-server/src/s3/handlers/objects.rs
Normal file
598
crates/post3-server/src/s3/handlers/objects.rs
Normal file
@@ -0,0 +1,598 @@
|
|||||||
|
use std::collections::HashMap;
|
||||||
|
|
||||||
|
use axum::{
|
||||||
|
body::Body,
|
||||||
|
extract::{Path, Query, State},
|
||||||
|
http::{header::HeaderName, HeaderMap, HeaderValue, StatusCode},
|
||||||
|
response::{IntoResponse, Response},
|
||||||
|
};
|
||||||
|
use bytes::Bytes;
|
||||||
|
use post3::{Post3Error, StorageBackend};
|
||||||
|
|
||||||
|
use crate::s3::extractors::{BucketGetQuery, ObjectKeyQuery};
|
||||||
|
use crate::s3::handlers::multipart;
|
||||||
|
use crate::s3::responses;
|
||||||
|
use crate::state::State as AppState;
|
||||||
|
|
||||||
|
// --- Dispatch functions ---
|
||||||
|
|
||||||
|
/// PUT /{bucket}/{*key} — dispatches to upload_part or put_object based on query params.
|
||||||
|
pub async fn put_dispatch<B: StorageBackend>(
|
||||||
|
state: State<AppState<B>>,
|
||||||
|
path: Path<(String, String)>,
|
||||||
|
query: Query<ObjectKeyQuery>,
|
||||||
|
headers: HeaderMap,
|
||||||
|
body: Bytes,
|
||||||
|
) -> Response {
|
||||||
|
if query.upload_id.is_some() && query.part_number.is_some() {
|
||||||
|
multipart::upload_part(state, path, query, body).await
|
||||||
|
} else {
|
||||||
|
put_object(state, path, headers, body).await
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/// GET /{bucket}/{*key} — dispatches to list_parts or get_object based on query params.
|
||||||
|
pub async fn get_dispatch<B: StorageBackend>(
|
||||||
|
state: State<AppState<B>>,
|
||||||
|
path: Path<(String, String)>,
|
||||||
|
query: Query<ObjectKeyQuery>,
|
||||||
|
) -> Response {
|
||||||
|
if query.upload_id.is_some() {
|
||||||
|
multipart::list_parts(state, path, query).await
|
||||||
|
} else {
|
||||||
|
get_object(state, path).await
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/// DELETE /{bucket}/{*key} — dispatches to abort_multipart_upload or delete_object.
|
||||||
|
pub async fn delete_dispatch<B: StorageBackend>(
|
||||||
|
state: State<AppState<B>>,
|
||||||
|
path: Path<(String, String)>,
|
||||||
|
query: Query<ObjectKeyQuery>,
|
||||||
|
) -> Response {
|
||||||
|
if query.upload_id.is_some() {
|
||||||
|
multipart::abort_multipart_upload(state, path, query).await
|
||||||
|
} else {
|
||||||
|
delete_object(state, path).await
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/// POST /{bucket}/{*key} — dispatches to create_multipart_upload or complete_multipart_upload.
|
||||||
|
pub async fn post_dispatch<B: StorageBackend>(
|
||||||
|
state: State<AppState<B>>,
|
||||||
|
path: Path<(String, String)>,
|
||||||
|
query: Query<ObjectKeyQuery>,
|
||||||
|
headers: HeaderMap,
|
||||||
|
body: Bytes,
|
||||||
|
) -> Response {
|
||||||
|
if query.uploads.is_some() {
|
||||||
|
multipart::create_multipart_upload(state, path, headers).await
|
||||||
|
} else if query.upload_id.is_some() {
|
||||||
|
multipart::complete_multipart_upload(state, path, query, body).await
|
||||||
|
} else {
|
||||||
|
(
|
||||||
|
StatusCode::BAD_REQUEST,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml(
|
||||||
|
"InvalidRequest",
|
||||||
|
"POST requires ?uploads or ?uploadId parameter",
|
||||||
|
&format!("/{}/{}", path.0 .0, path.0 .1),
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// --- Object handlers ---
|
||||||
|
|
||||||
|
pub async fn put_object<B: StorageBackend>(
|
||||||
|
State(state): State<AppState<B>>,
|
||||||
|
Path((bucket, key)): Path<(String, String)>,
|
||||||
|
headers: HeaderMap,
|
||||||
|
body: Bytes,
|
||||||
|
) -> Response {
|
||||||
|
let content_type = headers
|
||||||
|
.get("content-type")
|
||||||
|
.and_then(|v| v.to_str().ok())
|
||||||
|
.map(|s| s.to_string());
|
||||||
|
|
||||||
|
// Extract x-amz-meta-* user metadata
|
||||||
|
let mut metadata = HashMap::new();
|
||||||
|
for (name, value) in headers.iter() {
|
||||||
|
let name_str = name.as_str();
|
||||||
|
if let Some(meta_key) = name_str.strip_prefix("x-amz-meta-") {
|
||||||
|
if let Ok(v) = value.to_str() {
|
||||||
|
metadata.insert(meta_key.to_string(), v.to_string());
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
match state
|
||||||
|
.store
|
||||||
|
.put_object(&bucket, &key, content_type.as_deref(), metadata, body)
|
||||||
|
.await
|
||||||
|
{
|
||||||
|
Ok(result) => {
|
||||||
|
let mut response_headers = HeaderMap::new();
|
||||||
|
response_headers.insert("ETag", result.etag.parse().unwrap());
|
||||||
|
response_headers.insert(
|
||||||
|
"x-amz-request-id",
|
||||||
|
HeaderValue::from_str(&uuid::Uuid::new_v4().to_string()).unwrap(),
|
||||||
|
);
|
||||||
|
(StatusCode::OK, response_headers).into_response()
|
||||||
|
}
|
||||||
|
Err(Post3Error::BucketNotFound(b)) => (
|
||||||
|
StatusCode::NOT_FOUND,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml(
|
||||||
|
"NoSuchBucket",
|
||||||
|
"The specified bucket does not exist",
|
||||||
|
&b,
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response(),
|
||||||
|
Err(e) => {
|
||||||
|
tracing::error!("put_object error: {e}");
|
||||||
|
(
|
||||||
|
StatusCode::INTERNAL_SERVER_ERROR,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml(
|
||||||
|
"InternalError",
|
||||||
|
&e.to_string(),
|
||||||
|
&format!("/{bucket}/{key}"),
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn get_object<B: StorageBackend>(
|
||||||
|
State(state): State<AppState<B>>,
|
||||||
|
Path((bucket, key)): Path<(String, String)>,
|
||||||
|
) -> Response {
|
||||||
|
match state.store.get_object(&bucket, &key).await {
|
||||||
|
Ok(result) => {
|
||||||
|
let mut headers = HeaderMap::new();
|
||||||
|
headers.insert(
|
||||||
|
"Content-Type",
|
||||||
|
HeaderValue::from_str(&result.metadata.content_type).unwrap(),
|
||||||
|
);
|
||||||
|
headers.insert(
|
||||||
|
"Content-Length",
|
||||||
|
HeaderValue::from_str(&result.metadata.size.to_string()).unwrap(),
|
||||||
|
);
|
||||||
|
headers.insert("ETag", HeaderValue::from_str(&result.metadata.etag).unwrap());
|
||||||
|
headers.insert(
|
||||||
|
"Last-Modified",
|
||||||
|
HeaderValue::from_str(
|
||||||
|
&result
|
||||||
|
.metadata
|
||||||
|
.last_modified
|
||||||
|
.format("%a, %d %b %Y %H:%M:%S GMT")
|
||||||
|
.to_string(),
|
||||||
|
)
|
||||||
|
.unwrap(),
|
||||||
|
);
|
||||||
|
headers.insert(
|
||||||
|
"x-amz-request-id",
|
||||||
|
HeaderValue::from_str(&uuid::Uuid::new_v4().to_string()).unwrap(),
|
||||||
|
);
|
||||||
|
|
||||||
|
// Return user metadata as x-amz-meta-* headers
|
||||||
|
for (k, v) in &result.user_metadata {
|
||||||
|
let header_name = format!("x-amz-meta-{k}");
|
||||||
|
if let (Ok(name), Ok(val)) = (
|
||||||
|
header_name.parse::<HeaderName>(),
|
||||||
|
HeaderValue::from_str(v),
|
||||||
|
) {
|
||||||
|
headers.insert(name, val);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
(StatusCode::OK, headers, Body::from(result.body)).into_response()
|
||||||
|
}
|
||||||
|
Err(Post3Error::BucketNotFound(b)) => (
|
||||||
|
StatusCode::NOT_FOUND,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml(
|
||||||
|
"NoSuchBucket",
|
||||||
|
"The specified bucket does not exist",
|
||||||
|
&b,
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response(),
|
||||||
|
Err(Post3Error::ObjectNotFound { bucket: b, key: k }) => (
|
||||||
|
StatusCode::NOT_FOUND,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml(
|
||||||
|
"NoSuchKey",
|
||||||
|
"The specified key does not exist.",
|
||||||
|
&format!("/{b}/{k}"),
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response(),
|
||||||
|
Err(e) => {
|
||||||
|
tracing::error!("get_object error: {e}");
|
||||||
|
(
|
||||||
|
StatusCode::INTERNAL_SERVER_ERROR,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml(
|
||||||
|
"InternalError",
|
||||||
|
&e.to_string(),
|
||||||
|
&format!("/{bucket}/{key}"),
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn head_object<B: StorageBackend>(
|
||||||
|
State(state): State<AppState<B>>,
|
||||||
|
Path((bucket, key)): Path<(String, String)>,
|
||||||
|
) -> Response {
|
||||||
|
match state.store.head_object(&bucket, &key).await {
|
||||||
|
Ok(Some(result)) => {
|
||||||
|
let mut headers = HeaderMap::new();
|
||||||
|
headers.insert(
|
||||||
|
"Content-Type",
|
||||||
|
HeaderValue::from_str(&result.object.content_type).unwrap(),
|
||||||
|
);
|
||||||
|
headers.insert(
|
||||||
|
"Content-Length",
|
||||||
|
HeaderValue::from_str(&result.object.size.to_string()).unwrap(),
|
||||||
|
);
|
||||||
|
headers.insert("ETag", HeaderValue::from_str(&result.object.etag).unwrap());
|
||||||
|
headers.insert(
|
||||||
|
"Last-Modified",
|
||||||
|
HeaderValue::from_str(
|
||||||
|
&result
|
||||||
|
.object
|
||||||
|
.last_modified
|
||||||
|
.format("%a, %d %b %Y %H:%M:%S GMT")
|
||||||
|
.to_string(),
|
||||||
|
)
|
||||||
|
.unwrap(),
|
||||||
|
);
|
||||||
|
headers.insert(
|
||||||
|
"x-amz-request-id",
|
||||||
|
HeaderValue::from_str(&uuid::Uuid::new_v4().to_string()).unwrap(),
|
||||||
|
);
|
||||||
|
|
||||||
|
for (k, v) in &result.user_metadata {
|
||||||
|
let header_name = format!("x-amz-meta-{k}");
|
||||||
|
if let (Ok(name), Ok(val)) = (
|
||||||
|
header_name.parse::<HeaderName>(),
|
||||||
|
HeaderValue::from_str(v),
|
||||||
|
) {
|
||||||
|
headers.insert(name, val);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
(StatusCode::OK, headers).into_response()
|
||||||
|
}
|
||||||
|
Ok(None) => {
|
||||||
|
let mut headers = HeaderMap::new();
|
||||||
|
headers.insert(
|
||||||
|
"x-amz-request-id",
|
||||||
|
HeaderValue::from_str(&uuid::Uuid::new_v4().to_string()).unwrap(),
|
||||||
|
);
|
||||||
|
(StatusCode::NOT_FOUND, headers).into_response()
|
||||||
|
}
|
||||||
|
Err(e) => {
|
||||||
|
tracing::error!("head_object error: {e}");
|
||||||
|
StatusCode::INTERNAL_SERVER_ERROR.into_response()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn delete_object<B: StorageBackend>(
|
||||||
|
State(state): State<AppState<B>>,
|
||||||
|
Path((bucket, key)): Path<(String, String)>,
|
||||||
|
) -> Response {
|
||||||
|
match state.store.delete_object(&bucket, &key).await {
|
||||||
|
Ok(()) => {
|
||||||
|
let mut headers = HeaderMap::new();
|
||||||
|
headers.insert(
|
||||||
|
"x-amz-request-id",
|
||||||
|
HeaderValue::from_str(&uuid::Uuid::new_v4().to_string()).unwrap(),
|
||||||
|
);
|
||||||
|
(StatusCode::NO_CONTENT, headers).into_response()
|
||||||
|
}
|
||||||
|
Err(Post3Error::BucketNotFound(b)) => (
|
||||||
|
StatusCode::NOT_FOUND,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml(
|
||||||
|
"NoSuchBucket",
|
||||||
|
"The specified bucket does not exist",
|
||||||
|
&b,
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response(),
|
||||||
|
Err(e) => {
|
||||||
|
tracing::error!("delete_object error: {e}");
|
||||||
|
StatusCode::INTERNAL_SERVER_ERROR.into_response()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Handles GET /{bucket} — dispatches to ListMultipartUploads, ListObjectVersions,
|
||||||
|
/// GetBucketLocation, or ListObjects (v1/v2).
|
||||||
|
pub async fn list_or_get<B: StorageBackend>(
|
||||||
|
State(state): State<AppState<B>>,
|
||||||
|
Path(bucket): Path<String>,
|
||||||
|
Query(query): Query<BucketGetQuery>,
|
||||||
|
) -> Response {
|
||||||
|
// ?uploads → ListMultipartUploads
|
||||||
|
if query.uploads.is_some() {
|
||||||
|
return multipart::list_multipart_uploads(
|
||||||
|
State(state),
|
||||||
|
Path(bucket),
|
||||||
|
Query(query),
|
||||||
|
)
|
||||||
|
.await;
|
||||||
|
}
|
||||||
|
|
||||||
|
// ?location → GetBucketLocation
|
||||||
|
if query.location.is_some() {
|
||||||
|
return get_bucket_location(State(state), Path(bucket)).await;
|
||||||
|
}
|
||||||
|
|
||||||
|
// ?versions → ListObjectVersions
|
||||||
|
if query.versions.is_some() {
|
||||||
|
return list_object_versions(State(state), Path(bucket), Query(query)).await;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Default: ListObjects (v1 or v2)
|
||||||
|
let is_v2 = query.list_type == Some(2);
|
||||||
|
let continuation_token = if is_v2 {
|
||||||
|
// v2: use continuation-token if present, else start-after
|
||||||
|
query
|
||||||
|
.continuation_token
|
||||||
|
.as_deref()
|
||||||
|
.or(query.start_after.as_deref())
|
||||||
|
} else {
|
||||||
|
query.marker.as_deref()
|
||||||
|
};
|
||||||
|
|
||||||
|
// Treat empty delimiter as absent (S3 spec: empty delimiter = no delimiter)
|
||||||
|
let delimiter = query
|
||||||
|
.delimiter
|
||||||
|
.as_deref()
|
||||||
|
.filter(|d| !d.is_empty());
|
||||||
|
|
||||||
|
match state
|
||||||
|
.store
|
||||||
|
.list_objects_v2(
|
||||||
|
&bucket,
|
||||||
|
query.prefix.as_deref(),
|
||||||
|
continuation_token,
|
||||||
|
query.max_keys,
|
||||||
|
delimiter,
|
||||||
|
)
|
||||||
|
.await
|
||||||
|
{
|
||||||
|
Ok(result) => {
|
||||||
|
let max_keys = query.max_keys.unwrap_or(1000);
|
||||||
|
let mut headers = HeaderMap::new();
|
||||||
|
headers.insert("Content-Type", HeaderValue::from_static("application/xml"));
|
||||||
|
headers.insert(
|
||||||
|
"x-amz-request-id",
|
||||||
|
HeaderValue::from_str(&uuid::Uuid::new_v4().to_string()).unwrap(),
|
||||||
|
);
|
||||||
|
|
||||||
|
let xml = if is_v2 {
|
||||||
|
responses::list_objects_v2_xml(
|
||||||
|
&bucket,
|
||||||
|
&result,
|
||||||
|
max_keys,
|
||||||
|
query.continuation_token.as_deref(),
|
||||||
|
query.start_after.as_deref(),
|
||||||
|
)
|
||||||
|
} else {
|
||||||
|
responses::list_objects_v1_xml(
|
||||||
|
&bucket,
|
||||||
|
&result,
|
||||||
|
max_keys,
|
||||||
|
query.marker.as_deref(),
|
||||||
|
)
|
||||||
|
};
|
||||||
|
|
||||||
|
(StatusCode::OK, headers, xml).into_response()
|
||||||
|
}
|
||||||
|
Err(Post3Error::BucketNotFound(b)) => (
|
||||||
|
StatusCode::NOT_FOUND,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml(
|
||||||
|
"NoSuchBucket",
|
||||||
|
"The specified bucket does not exist",
|
||||||
|
&b,
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response(),
|
||||||
|
Err(e) => {
|
||||||
|
tracing::error!("list_objects error: {e}");
|
||||||
|
(
|
||||||
|
StatusCode::INTERNAL_SERVER_ERROR,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml("InternalError", &e.to_string(), &bucket),
|
||||||
|
)
|
||||||
|
.into_response()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/// GET /{bucket}?versions — ListObjectVersions (stub: returns all as version "null").
|
||||||
|
async fn list_object_versions<B: StorageBackend>(
|
||||||
|
State(state): State<AppState<B>>,
|
||||||
|
Path(bucket): Path<String>,
|
||||||
|
Query(query): Query<BucketGetQuery>,
|
||||||
|
) -> Response {
|
||||||
|
let delimiter = query.delimiter.as_deref().filter(|d| !d.is_empty());
|
||||||
|
match state
|
||||||
|
.store
|
||||||
|
.list_objects_v2(
|
||||||
|
&bucket,
|
||||||
|
query.prefix.as_deref(),
|
||||||
|
query.key_marker.as_deref(),
|
||||||
|
query.max_keys,
|
||||||
|
delimiter,
|
||||||
|
)
|
||||||
|
.await
|
||||||
|
{
|
||||||
|
Ok(result) => {
|
||||||
|
let max_keys = query.max_keys.unwrap_or(1000);
|
||||||
|
let mut headers = HeaderMap::new();
|
||||||
|
headers.insert("Content-Type", HeaderValue::from_static("application/xml"));
|
||||||
|
headers.insert(
|
||||||
|
"x-amz-request-id",
|
||||||
|
HeaderValue::from_str(&uuid::Uuid::new_v4().to_string()).unwrap(),
|
||||||
|
);
|
||||||
|
(
|
||||||
|
StatusCode::OK,
|
||||||
|
headers,
|
||||||
|
responses::list_object_versions_xml(
|
||||||
|
&bucket,
|
||||||
|
&result,
|
||||||
|
max_keys,
|
||||||
|
query.key_marker.as_deref(),
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response()
|
||||||
|
}
|
||||||
|
Err(Post3Error::BucketNotFound(b)) => (
|
||||||
|
StatusCode::NOT_FOUND,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml("NoSuchBucket", "The specified bucket does not exist", &b),
|
||||||
|
)
|
||||||
|
.into_response(),
|
||||||
|
Err(e) => {
|
||||||
|
tracing::error!("list_object_versions error: {e}");
|
||||||
|
(
|
||||||
|
StatusCode::INTERNAL_SERVER_ERROR,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml("InternalError", &e.to_string(), &bucket),
|
||||||
|
)
|
||||||
|
.into_response()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/// GET /{bucket}?location — GetBucketLocation.
|
||||||
|
async fn get_bucket_location<B: StorageBackend>(
|
||||||
|
State(state): State<AppState<B>>,
|
||||||
|
Path(bucket): Path<String>,
|
||||||
|
) -> Response {
|
||||||
|
match state.store.head_bucket(&bucket).await {
|
||||||
|
Ok(Some(_)) => {
|
||||||
|
let mut headers = HeaderMap::new();
|
||||||
|
headers.insert("Content-Type", HeaderValue::from_static("application/xml"));
|
||||||
|
headers.insert(
|
||||||
|
"x-amz-request-id",
|
||||||
|
HeaderValue::from_str(&uuid::Uuid::new_v4().to_string()).unwrap(),
|
||||||
|
);
|
||||||
|
(StatusCode::OK, headers, responses::get_bucket_location_xml()).into_response()
|
||||||
|
}
|
||||||
|
Ok(None) => (
|
||||||
|
StatusCode::NOT_FOUND,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml(
|
||||||
|
"NoSuchBucket",
|
||||||
|
"The specified bucket does not exist",
|
||||||
|
&bucket,
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response(),
|
||||||
|
Err(e) => {
|
||||||
|
tracing::error!("get_bucket_location error: {e}");
|
||||||
|
StatusCode::INTERNAL_SERVER_ERROR.into_response()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/// POST /{bucket} — dispatches to DeleteObjects based on ?delete query param.
|
||||||
|
pub async fn bucket_post_dispatch<B: StorageBackend>(
|
||||||
|
state: State<AppState<B>>,
|
||||||
|
path: Path<String>,
|
||||||
|
query: Query<crate::s3::extractors::BucketPostQuery>,
|
||||||
|
body: Bytes,
|
||||||
|
) -> Response {
|
||||||
|
if query.delete.is_some() {
|
||||||
|
delete_objects(state, path, body).await
|
||||||
|
} else {
|
||||||
|
(
|
||||||
|
StatusCode::BAD_REQUEST,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml(
|
||||||
|
"InvalidRequest",
|
||||||
|
"POST on bucket requires ?delete parameter",
|
||||||
|
&format!("/{}", path.0),
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/// POST /{bucket}?delete — DeleteObjects (batch delete).
|
||||||
|
async fn delete_objects<B: StorageBackend>(
|
||||||
|
State(state): State<AppState<B>>,
|
||||||
|
Path(bucket): Path<String>,
|
||||||
|
body: Bytes,
|
||||||
|
) -> Response {
|
||||||
|
let (keys, quiet) = match responses::parse_delete_objects_xml(&body) {
|
||||||
|
Ok(result) => result,
|
||||||
|
Err(msg) => {
|
||||||
|
return (
|
||||||
|
StatusCode::BAD_REQUEST,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml("MalformedXML", &msg, &format!("/{bucket}")),
|
||||||
|
)
|
||||||
|
.into_response();
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
// S3 limits DeleteObjects to 1000 keys
|
||||||
|
if keys.len() > 1000 {
|
||||||
|
return (
|
||||||
|
StatusCode::BAD_REQUEST,
|
||||||
|
[("Content-Type", "application/xml")],
|
||||||
|
responses::error_xml(
|
||||||
|
"MalformedXML",
|
||||||
|
"The number of keys in a DeleteObjects request cannot exceed 1000",
|
||||||
|
&format!("/{bucket}"),
|
||||||
|
),
|
||||||
|
)
|
||||||
|
.into_response();
|
||||||
|
}
|
||||||
|
|
||||||
|
let mut deleted = Vec::new();
|
||||||
|
let mut errors: Vec<(String, String, String)> = Vec::new();
|
||||||
|
|
||||||
|
for key in keys {
|
||||||
|
match state.store.delete_object(&bucket, &key).await {
|
||||||
|
Ok(()) => {
|
||||||
|
if !quiet {
|
||||||
|
deleted.push(key);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
Err(e) => {
|
||||||
|
errors.push((key, "InternalError".to_string(), e.to_string()));
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
let mut headers = HeaderMap::new();
|
||||||
|
headers.insert("Content-Type", HeaderValue::from_static("application/xml"));
|
||||||
|
headers.insert(
|
||||||
|
"x-amz-request-id",
|
||||||
|
HeaderValue::from_str(&uuid::Uuid::new_v4().to_string()).unwrap(),
|
||||||
|
);
|
||||||
|
|
||||||
|
(
|
||||||
|
StatusCode::OK,
|
||||||
|
headers,
|
||||||
|
responses::delete_objects_result_xml(&deleted, &errors),
|
||||||
|
)
|
||||||
|
.into_response()
|
||||||
|
}
|
||||||
42
crates/post3-server/src/s3/mod.rs
Normal file
42
crates/post3-server/src/s3/mod.rs
Normal file
@@ -0,0 +1,42 @@
|
|||||||
|
pub mod extractors;
|
||||||
|
pub mod handlers;
|
||||||
|
pub mod responses;
|
||||||
|
pub mod router;
|
||||||
|
|
||||||
|
use std::net::SocketAddr;
|
||||||
|
|
||||||
|
use notmad::{Component, ComponentInfo, MadError};
|
||||||
|
use post3::StorageBackend;
|
||||||
|
use tokio::net::TcpListener;
|
||||||
|
use tokio_util::sync::CancellationToken;
|
||||||
|
|
||||||
|
use crate::state::State;
|
||||||
|
|
||||||
|
pub struct S3Server<B: StorageBackend> {
|
||||||
|
pub host: SocketAddr,
|
||||||
|
pub state: State<B>,
|
||||||
|
}
|
||||||
|
|
||||||
|
impl<B: StorageBackend> Component for S3Server<B> {
|
||||||
|
fn info(&self) -> ComponentInfo {
|
||||||
|
"post3/s3".into()
|
||||||
|
}
|
||||||
|
|
||||||
|
async fn run(&self, cancellation_token: CancellationToken) -> Result<(), MadError> {
|
||||||
|
let app = router::build_router(self.state.clone());
|
||||||
|
|
||||||
|
tracing::info!("post3 s3-compatible server listening on {}", self.host);
|
||||||
|
let listener = TcpListener::bind(&self.host).await.map_err(|e| {
|
||||||
|
MadError::Inner(anyhow::anyhow!("failed to bind: {e}"))
|
||||||
|
})?;
|
||||||
|
|
||||||
|
axum::serve(listener, app.into_make_service())
|
||||||
|
.with_graceful_shutdown(async move {
|
||||||
|
cancellation_token.cancelled().await;
|
||||||
|
})
|
||||||
|
.await
|
||||||
|
.map_err(|e| MadError::Inner(anyhow::anyhow!("server error: {e}")))?;
|
||||||
|
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
|
}
|
||||||
538
crates/post3-server/src/s3/responses.rs
Normal file
538
crates/post3-server/src/s3/responses.rs
Normal file
@@ -0,0 +1,538 @@
|
|||||||
|
use post3::models::{BucketInfo, ListMultipartUploadsResult, ListObjectsResult, ListPartsResult};
|
||||||
|
use serde::Deserialize;
|
||||||
|
|
||||||
|
pub fn list_buckets_xml(buckets: &[BucketInfo]) -> String {
|
||||||
|
let mut xml = String::from(
|
||||||
|
"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\
|
||||||
|
<ListAllMyBucketsResult xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\">\
|
||||||
|
<Owner><ID>post3</ID><DisplayName>post3</DisplayName></Owner>\
|
||||||
|
<Buckets>",
|
||||||
|
);
|
||||||
|
|
||||||
|
for b in buckets {
|
||||||
|
xml.push_str("<Bucket><Name>");
|
||||||
|
xml.push_str(&xml_escape(&b.name));
|
||||||
|
xml.push_str("</Name><CreationDate>");
|
||||||
|
xml.push_str(&b.created_at.format("%Y-%m-%dT%H:%M:%S%.3fZ").to_string());
|
||||||
|
xml.push_str("</CreationDate></Bucket>");
|
||||||
|
}
|
||||||
|
|
||||||
|
xml.push_str("</Buckets></ListAllMyBucketsResult>");
|
||||||
|
xml
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn list_objects_v2_xml(
|
||||||
|
bucket_name: &str,
|
||||||
|
result: &ListObjectsResult,
|
||||||
|
max_keys: i64,
|
||||||
|
continuation_token: Option<&str>,
|
||||||
|
start_after: Option<&str>,
|
||||||
|
) -> String {
|
||||||
|
let mut xml = String::from(
|
||||||
|
"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\
|
||||||
|
<ListBucketResult xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\">",
|
||||||
|
);
|
||||||
|
|
||||||
|
xml.push_str("<Name>");
|
||||||
|
xml.push_str(&xml_escape(bucket_name));
|
||||||
|
xml.push_str("</Name>");
|
||||||
|
|
||||||
|
xml.push_str("<Prefix>");
|
||||||
|
if let Some(ref pfx) = result.prefix {
|
||||||
|
xml.push_str(&xml_escape(pfx));
|
||||||
|
}
|
||||||
|
xml.push_str("</Prefix>");
|
||||||
|
|
||||||
|
if let Some(sa) = start_after {
|
||||||
|
xml.push_str("<StartAfter>");
|
||||||
|
xml.push_str(&xml_escape(sa));
|
||||||
|
xml.push_str("</StartAfter>");
|
||||||
|
}
|
||||||
|
|
||||||
|
xml.push_str("<KeyCount>");
|
||||||
|
xml.push_str(&result.key_count.to_string());
|
||||||
|
xml.push_str("</KeyCount>");
|
||||||
|
|
||||||
|
xml.push_str("<MaxKeys>");
|
||||||
|
xml.push_str(&max_keys.to_string());
|
||||||
|
xml.push_str("</MaxKeys>");
|
||||||
|
|
||||||
|
xml.push_str("<IsTruncated>");
|
||||||
|
xml.push_str(if result.is_truncated { "true" } else { "false" });
|
||||||
|
xml.push_str("</IsTruncated>");
|
||||||
|
|
||||||
|
if let Some(ref delim) = result.delimiter {
|
||||||
|
xml.push_str("<Delimiter>");
|
||||||
|
xml.push_str(&xml_escape(delim));
|
||||||
|
xml.push_str("</Delimiter>");
|
||||||
|
}
|
||||||
|
|
||||||
|
if let Some(token) = continuation_token {
|
||||||
|
xml.push_str("<ContinuationToken>");
|
||||||
|
xml.push_str(&xml_escape(token));
|
||||||
|
xml.push_str("</ContinuationToken>");
|
||||||
|
}
|
||||||
|
|
||||||
|
if let Some(ref token) = result.next_continuation_token {
|
||||||
|
xml.push_str("<NextContinuationToken>");
|
||||||
|
xml.push_str(&xml_escape(token));
|
||||||
|
xml.push_str("</NextContinuationToken>");
|
||||||
|
}
|
||||||
|
|
||||||
|
for obj in &result.objects {
|
||||||
|
xml.push_str("<Contents>");
|
||||||
|
xml.push_str("<Key>");
|
||||||
|
xml.push_str(&xml_escape(&obj.key));
|
||||||
|
xml.push_str("</Key>");
|
||||||
|
xml.push_str("<LastModified>");
|
||||||
|
xml.push_str(
|
||||||
|
&obj.last_modified
|
||||||
|
.format("%Y-%m-%dT%H:%M:%S%.3fZ")
|
||||||
|
.to_string(),
|
||||||
|
);
|
||||||
|
xml.push_str("</LastModified>");
|
||||||
|
xml.push_str("<ETag>");
|
||||||
|
xml.push_str(&xml_escape(&obj.etag));
|
||||||
|
xml.push_str("</ETag>");
|
||||||
|
xml.push_str("<Size>");
|
||||||
|
xml.push_str(&obj.size.to_string());
|
||||||
|
xml.push_str("</Size>");
|
||||||
|
xml.push_str("<StorageClass>STANDARD</StorageClass>");
|
||||||
|
xml.push_str("</Contents>");
|
||||||
|
}
|
||||||
|
|
||||||
|
for cp in &result.common_prefixes {
|
||||||
|
xml.push_str("<CommonPrefixes><Prefix>");
|
||||||
|
xml.push_str(&xml_escape(cp));
|
||||||
|
xml.push_str("</Prefix></CommonPrefixes>");
|
||||||
|
}
|
||||||
|
|
||||||
|
xml.push_str("</ListBucketResult>");
|
||||||
|
xml
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn list_objects_v1_xml(
|
||||||
|
bucket_name: &str,
|
||||||
|
result: &ListObjectsResult,
|
||||||
|
max_keys: i64,
|
||||||
|
marker: Option<&str>,
|
||||||
|
) -> String {
|
||||||
|
let mut xml = String::from(
|
||||||
|
"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\
|
||||||
|
<ListBucketResult xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\">",
|
||||||
|
);
|
||||||
|
|
||||||
|
xml.push_str("<Name>");
|
||||||
|
xml.push_str(&xml_escape(bucket_name));
|
||||||
|
xml.push_str("</Name>");
|
||||||
|
|
||||||
|
xml.push_str("<Prefix>");
|
||||||
|
if let Some(ref pfx) = result.prefix {
|
||||||
|
xml.push_str(&xml_escape(pfx));
|
||||||
|
}
|
||||||
|
xml.push_str("</Prefix>");
|
||||||
|
|
||||||
|
xml.push_str("<Marker>");
|
||||||
|
if let Some(m) = marker {
|
||||||
|
xml.push_str(&xml_escape(m));
|
||||||
|
}
|
||||||
|
xml.push_str("</Marker>");
|
||||||
|
|
||||||
|
xml.push_str("<MaxKeys>");
|
||||||
|
xml.push_str(&max_keys.to_string());
|
||||||
|
xml.push_str("</MaxKeys>");
|
||||||
|
|
||||||
|
xml.push_str("<IsTruncated>");
|
||||||
|
xml.push_str(if result.is_truncated { "true" } else { "false" });
|
||||||
|
xml.push_str("</IsTruncated>");
|
||||||
|
|
||||||
|
if let Some(ref token) = result.next_continuation_token {
|
||||||
|
xml.push_str("<NextMarker>");
|
||||||
|
xml.push_str(&xml_escape(token));
|
||||||
|
xml.push_str("</NextMarker>");
|
||||||
|
}
|
||||||
|
|
||||||
|
if let Some(ref delim) = result.delimiter {
|
||||||
|
xml.push_str("<Delimiter>");
|
||||||
|
xml.push_str(&xml_escape(delim));
|
||||||
|
xml.push_str("</Delimiter>");
|
||||||
|
}
|
||||||
|
|
||||||
|
for obj in &result.objects {
|
||||||
|
xml.push_str("<Contents>");
|
||||||
|
xml.push_str("<Key>");
|
||||||
|
xml.push_str(&xml_escape(&obj.key));
|
||||||
|
xml.push_str("</Key>");
|
||||||
|
xml.push_str("<LastModified>");
|
||||||
|
xml.push_str(
|
||||||
|
&obj.last_modified
|
||||||
|
.format("%Y-%m-%dT%H:%M:%S%.3fZ")
|
||||||
|
.to_string(),
|
||||||
|
);
|
||||||
|
xml.push_str("</LastModified>");
|
||||||
|
xml.push_str("<ETag>");
|
||||||
|
xml.push_str(&xml_escape(&obj.etag));
|
||||||
|
xml.push_str("</ETag>");
|
||||||
|
xml.push_str("<Size>");
|
||||||
|
xml.push_str(&obj.size.to_string());
|
||||||
|
xml.push_str("</Size>");
|
||||||
|
xml.push_str("<Owner><ID>post3</ID><DisplayName>post3</DisplayName></Owner>");
|
||||||
|
xml.push_str("<StorageClass>STANDARD</StorageClass>");
|
||||||
|
xml.push_str("</Contents>");
|
||||||
|
}
|
||||||
|
|
||||||
|
for cp in &result.common_prefixes {
|
||||||
|
xml.push_str("<CommonPrefixes><Prefix>");
|
||||||
|
xml.push_str(&xml_escape(cp));
|
||||||
|
xml.push_str("</Prefix></CommonPrefixes>");
|
||||||
|
}
|
||||||
|
|
||||||
|
xml.push_str("</ListBucketResult>");
|
||||||
|
xml
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn list_object_versions_xml(
|
||||||
|
bucket_name: &str,
|
||||||
|
result: &ListObjectsResult,
|
||||||
|
max_keys: i64,
|
||||||
|
key_marker: Option<&str>,
|
||||||
|
) -> String {
|
||||||
|
let mut xml = String::from(
|
||||||
|
"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\
|
||||||
|
<ListVersionsResult xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\">",
|
||||||
|
);
|
||||||
|
|
||||||
|
xml.push_str("<Name>");
|
||||||
|
xml.push_str(&xml_escape(bucket_name));
|
||||||
|
xml.push_str("</Name>");
|
||||||
|
|
||||||
|
xml.push_str("<Prefix>");
|
||||||
|
if let Some(ref pfx) = result.prefix {
|
||||||
|
xml.push_str(&xml_escape(pfx));
|
||||||
|
}
|
||||||
|
xml.push_str("</Prefix>");
|
||||||
|
|
||||||
|
// Echo back input markers
|
||||||
|
xml.push_str("<KeyMarker>");
|
||||||
|
if let Some(km) = key_marker {
|
||||||
|
xml.push_str(&xml_escape(km));
|
||||||
|
}
|
||||||
|
xml.push_str("</KeyMarker>");
|
||||||
|
xml.push_str("<VersionIdMarker/>");
|
||||||
|
|
||||||
|
xml.push_str("<MaxKeys>");
|
||||||
|
xml.push_str(&max_keys.to_string());
|
||||||
|
xml.push_str("</MaxKeys>");
|
||||||
|
|
||||||
|
xml.push_str("<IsTruncated>");
|
||||||
|
xml.push_str(if result.is_truncated { "true" } else { "false" });
|
||||||
|
xml.push_str("</IsTruncated>");
|
||||||
|
|
||||||
|
for obj in &result.objects {
|
||||||
|
xml.push_str("<Version>");
|
||||||
|
xml.push_str("<Key>");
|
||||||
|
xml.push_str(&xml_escape(&obj.key));
|
||||||
|
xml.push_str("</Key>");
|
||||||
|
xml.push_str("<VersionId>null</VersionId>");
|
||||||
|
xml.push_str("<IsLatest>true</IsLatest>");
|
||||||
|
xml.push_str("<LastModified>");
|
||||||
|
xml.push_str(
|
||||||
|
&obj.last_modified
|
||||||
|
.format("%Y-%m-%dT%H:%M:%S%.3fZ")
|
||||||
|
.to_string(),
|
||||||
|
);
|
||||||
|
xml.push_str("</LastModified>");
|
||||||
|
xml.push_str("<ETag>");
|
||||||
|
xml.push_str(&xml_escape(&obj.etag));
|
||||||
|
xml.push_str("</ETag>");
|
||||||
|
xml.push_str("<Size>");
|
||||||
|
xml.push_str(&obj.size.to_string());
|
||||||
|
xml.push_str("</Size>");
|
||||||
|
xml.push_str("<StorageClass>STANDARD</StorageClass>");
|
||||||
|
xml.push_str("<Owner><ID>post3</ID><DisplayName>post3</DisplayName></Owner>");
|
||||||
|
xml.push_str("</Version>");
|
||||||
|
}
|
||||||
|
|
||||||
|
// Include NextKeyMarker/NextVersionIdMarker when truncated for pagination
|
||||||
|
if result.is_truncated {
|
||||||
|
if let Some(last_obj) = result.objects.last() {
|
||||||
|
xml.push_str("<NextKeyMarker>");
|
||||||
|
xml.push_str(&xml_escape(&last_obj.key));
|
||||||
|
xml.push_str("</NextKeyMarker>");
|
||||||
|
xml.push_str("<NextVersionIdMarker>null</NextVersionIdMarker>");
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
xml.push_str("</ListVersionsResult>");
|
||||||
|
xml
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn get_bucket_location_xml() -> String {
|
||||||
|
"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\
|
||||||
|
<LocationConstraint xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"/>"
|
||||||
|
.to_string()
|
||||||
|
}
|
||||||
|
|
||||||
|
// --- DeleteObjects ---
|
||||||
|
|
||||||
|
#[derive(Debug, Deserialize)]
|
||||||
|
#[serde(rename = "Delete")]
|
||||||
|
struct DeleteObjectsRequest {
|
||||||
|
#[serde(rename = "Object")]
|
||||||
|
objects: Vec<DeleteObjectEntry>,
|
||||||
|
#[serde(rename = "Quiet", default)]
|
||||||
|
quiet: Option<bool>,
|
||||||
|
}
|
||||||
|
|
||||||
|
#[derive(Debug, Deserialize)]
|
||||||
|
struct DeleteObjectEntry {
|
||||||
|
#[serde(rename = "Key")]
|
||||||
|
key: String,
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn parse_delete_objects_xml(body: &[u8]) -> Result<(Vec<String>, bool), String> {
|
||||||
|
let request: DeleteObjectsRequest =
|
||||||
|
quick_xml::de::from_reader(body).map_err(|e| format!("invalid XML: {e}"))?;
|
||||||
|
let quiet = request.quiet.unwrap_or(false);
|
||||||
|
let keys = request.objects.into_iter().map(|o| o.key).collect();
|
||||||
|
Ok((keys, quiet))
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn delete_objects_result_xml(deleted: &[String], errors: &[(String, String, String)]) -> String {
|
||||||
|
let mut xml = String::from(
|
||||||
|
"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\
|
||||||
|
<DeleteResult xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\">",
|
||||||
|
);
|
||||||
|
|
||||||
|
for key in deleted {
|
||||||
|
xml.push_str("<Deleted><Key>");
|
||||||
|
xml.push_str(&xml_escape(key));
|
||||||
|
xml.push_str("</Key></Deleted>");
|
||||||
|
}
|
||||||
|
|
||||||
|
for (key, code, message) in errors {
|
||||||
|
xml.push_str("<Error><Key>");
|
||||||
|
xml.push_str(&xml_escape(key));
|
||||||
|
xml.push_str("</Key><Code>");
|
||||||
|
xml.push_str(&xml_escape(code));
|
||||||
|
xml.push_str("</Code><Message>");
|
||||||
|
xml.push_str(&xml_escape(message));
|
||||||
|
xml.push_str("</Message></Error>");
|
||||||
|
}
|
||||||
|
|
||||||
|
xml.push_str("</DeleteResult>");
|
||||||
|
xml
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn error_xml(code: &str, message: &str, resource: &str) -> String {
|
||||||
|
let request_id = uuid::Uuid::new_v4().to_string();
|
||||||
|
format!(
|
||||||
|
"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\
|
||||||
|
<Error>\
|
||||||
|
<Code>{code}</Code>\
|
||||||
|
<Message>{message}</Message>\
|
||||||
|
<Resource>{resource}</Resource>\
|
||||||
|
<RequestId>{request_id}</RequestId>\
|
||||||
|
</Error>",
|
||||||
|
code = xml_escape(code),
|
||||||
|
message = xml_escape(message),
|
||||||
|
resource = xml_escape(resource),
|
||||||
|
request_id = request_id,
|
||||||
|
)
|
||||||
|
}
|
||||||
|
|
||||||
|
// --- Multipart upload responses ---
|
||||||
|
|
||||||
|
pub fn initiate_multipart_upload_xml(bucket: &str, key: &str, upload_id: &str) -> String {
|
||||||
|
format!(
|
||||||
|
"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\
|
||||||
|
<InitiateMultipartUploadResult xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\">\
|
||||||
|
<Bucket>{bucket}</Bucket>\
|
||||||
|
<Key>{key}</Key>\
|
||||||
|
<UploadId>{upload_id}</UploadId>\
|
||||||
|
</InitiateMultipartUploadResult>",
|
||||||
|
bucket = xml_escape(bucket),
|
||||||
|
key = xml_escape(key),
|
||||||
|
upload_id = xml_escape(upload_id),
|
||||||
|
)
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn complete_multipart_upload_xml(
|
||||||
|
location: &str,
|
||||||
|
bucket: &str,
|
||||||
|
key: &str,
|
||||||
|
etag: &str,
|
||||||
|
) -> String {
|
||||||
|
format!(
|
||||||
|
"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\
|
||||||
|
<CompleteMultipartUploadResult xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\">\
|
||||||
|
<Location>{location}</Location>\
|
||||||
|
<Bucket>{bucket}</Bucket>\
|
||||||
|
<Key>{key}</Key>\
|
||||||
|
<ETag>{etag}</ETag>\
|
||||||
|
</CompleteMultipartUploadResult>",
|
||||||
|
location = xml_escape(location),
|
||||||
|
bucket = xml_escape(bucket),
|
||||||
|
key = xml_escape(key),
|
||||||
|
etag = xml_escape(etag),
|
||||||
|
)
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn list_parts_xml(result: &ListPartsResult, max_parts: i32) -> String {
|
||||||
|
let mut xml = String::from(
|
||||||
|
"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\
|
||||||
|
<ListPartsResult xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\">",
|
||||||
|
);
|
||||||
|
|
||||||
|
xml.push_str("<Bucket>");
|
||||||
|
xml.push_str(&xml_escape(&result.bucket));
|
||||||
|
xml.push_str("</Bucket>");
|
||||||
|
|
||||||
|
xml.push_str("<Key>");
|
||||||
|
xml.push_str(&xml_escape(&result.key));
|
||||||
|
xml.push_str("</Key>");
|
||||||
|
|
||||||
|
xml.push_str("<UploadId>");
|
||||||
|
xml.push_str(&xml_escape(&result.upload_id));
|
||||||
|
xml.push_str("</UploadId>");
|
||||||
|
|
||||||
|
xml.push_str("<MaxParts>");
|
||||||
|
xml.push_str(&max_parts.to_string());
|
||||||
|
xml.push_str("</MaxParts>");
|
||||||
|
|
||||||
|
xml.push_str("<IsTruncated>");
|
||||||
|
xml.push_str(if result.is_truncated { "true" } else { "false" });
|
||||||
|
xml.push_str("</IsTruncated>");
|
||||||
|
|
||||||
|
if let Some(marker) = result.next_part_number_marker {
|
||||||
|
xml.push_str("<NextPartNumberMarker>");
|
||||||
|
xml.push_str(&marker.to_string());
|
||||||
|
xml.push_str("</NextPartNumberMarker>");
|
||||||
|
}
|
||||||
|
|
||||||
|
for part in &result.parts {
|
||||||
|
xml.push_str("<Part>");
|
||||||
|
xml.push_str("<PartNumber>");
|
||||||
|
xml.push_str(&part.part_number.to_string());
|
||||||
|
xml.push_str("</PartNumber>");
|
||||||
|
xml.push_str("<LastModified>");
|
||||||
|
xml.push_str(
|
||||||
|
&part
|
||||||
|
.created_at
|
||||||
|
.format("%Y-%m-%dT%H:%M:%S%.3fZ")
|
||||||
|
.to_string(),
|
||||||
|
);
|
||||||
|
xml.push_str("</LastModified>");
|
||||||
|
xml.push_str("<ETag>");
|
||||||
|
xml.push_str(&xml_escape(&part.etag));
|
||||||
|
xml.push_str("</ETag>");
|
||||||
|
xml.push_str("<Size>");
|
||||||
|
xml.push_str(&part.size.to_string());
|
||||||
|
xml.push_str("</Size>");
|
||||||
|
xml.push_str("</Part>");
|
||||||
|
}
|
||||||
|
|
||||||
|
xml.push_str("</ListPartsResult>");
|
||||||
|
xml
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn list_multipart_uploads_xml(
|
||||||
|
result: &ListMultipartUploadsResult,
|
||||||
|
max_uploads: i32,
|
||||||
|
) -> String {
|
||||||
|
let mut xml = String::from(
|
||||||
|
"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\
|
||||||
|
<ListMultipartUploadsResult xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\">",
|
||||||
|
);
|
||||||
|
|
||||||
|
xml.push_str("<Bucket>");
|
||||||
|
xml.push_str(&xml_escape(&result.bucket));
|
||||||
|
xml.push_str("</Bucket>");
|
||||||
|
|
||||||
|
xml.push_str("<Prefix>");
|
||||||
|
if let Some(ref pfx) = result.prefix {
|
||||||
|
xml.push_str(&xml_escape(pfx));
|
||||||
|
}
|
||||||
|
xml.push_str("</Prefix>");
|
||||||
|
|
||||||
|
xml.push_str("<MaxUploads>");
|
||||||
|
xml.push_str(&max_uploads.to_string());
|
||||||
|
xml.push_str("</MaxUploads>");
|
||||||
|
|
||||||
|
xml.push_str("<IsTruncated>");
|
||||||
|
xml.push_str(if result.is_truncated {
|
||||||
|
"true"
|
||||||
|
} else {
|
||||||
|
"false"
|
||||||
|
});
|
||||||
|
xml.push_str("</IsTruncated>");
|
||||||
|
|
||||||
|
if let Some(ref marker) = result.next_key_marker {
|
||||||
|
xml.push_str("<NextKeyMarker>");
|
||||||
|
xml.push_str(&xml_escape(marker));
|
||||||
|
xml.push_str("</NextKeyMarker>");
|
||||||
|
}
|
||||||
|
if let Some(ref marker) = result.next_upload_id_marker {
|
||||||
|
xml.push_str("<NextUploadIdMarker>");
|
||||||
|
xml.push_str(&xml_escape(marker));
|
||||||
|
xml.push_str("</NextUploadIdMarker>");
|
||||||
|
}
|
||||||
|
|
||||||
|
for upload in &result.uploads {
|
||||||
|
xml.push_str("<Upload>");
|
||||||
|
xml.push_str("<Key>");
|
||||||
|
xml.push_str(&xml_escape(&upload.key));
|
||||||
|
xml.push_str("</Key>");
|
||||||
|
xml.push_str("<UploadId>");
|
||||||
|
xml.push_str(&xml_escape(&upload.upload_id));
|
||||||
|
xml.push_str("</UploadId>");
|
||||||
|
xml.push_str("<Initiated>");
|
||||||
|
xml.push_str(
|
||||||
|
&upload
|
||||||
|
.initiated
|
||||||
|
.format("%Y-%m-%dT%H:%M:%S%.3fZ")
|
||||||
|
.to_string(),
|
||||||
|
);
|
||||||
|
xml.push_str("</Initiated>");
|
||||||
|
xml.push_str("</Upload>");
|
||||||
|
}
|
||||||
|
|
||||||
|
xml.push_str("</ListMultipartUploadsResult>");
|
||||||
|
xml
|
||||||
|
}
|
||||||
|
|
||||||
|
// --- XML request parsing for CompleteMultipartUpload ---
|
||||||
|
|
||||||
|
#[derive(Debug, Deserialize)]
|
||||||
|
#[serde(rename = "CompleteMultipartUpload")]
|
||||||
|
struct CompleteMultipartUploadRequest {
|
||||||
|
#[serde(rename = "Part")]
|
||||||
|
parts: Vec<CompletePart>,
|
||||||
|
}
|
||||||
|
|
||||||
|
#[derive(Debug, Deserialize)]
|
||||||
|
struct CompletePart {
|
||||||
|
#[serde(rename = "PartNumber")]
|
||||||
|
part_number: i32,
|
||||||
|
#[serde(rename = "ETag")]
|
||||||
|
etag: String,
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn parse_complete_multipart_xml(body: &[u8]) -> Result<Vec<(i32, String)>, String> {
|
||||||
|
let request: CompleteMultipartUploadRequest =
|
||||||
|
quick_xml::de::from_reader(body).map_err(|e| format!("invalid XML: {e}"))?;
|
||||||
|
|
||||||
|
Ok(request
|
||||||
|
.parts
|
||||||
|
.into_iter()
|
||||||
|
.map(|p| (p.part_number, p.etag))
|
||||||
|
.collect())
|
||||||
|
}
|
||||||
|
|
||||||
|
fn xml_escape(s: &str) -> String {
|
||||||
|
s.replace('&', "&")
|
||||||
|
.replace('<', "<")
|
||||||
|
.replace('>', ">")
|
||||||
|
.replace('"', """)
|
||||||
|
.replace('\'', "'")
|
||||||
|
}
|
||||||
48
crates/post3-server/src/s3/router.rs
Normal file
48
crates/post3-server/src/s3/router.rs
Normal file
@@ -0,0 +1,48 @@
|
|||||||
|
use axum::{
|
||||||
|
extract::{DefaultBodyLimit, Request},
|
||||||
|
http::StatusCode,
|
||||||
|
response::IntoResponse,
|
||||||
|
routing::{delete, get, head, post, put},
|
||||||
|
Router,
|
||||||
|
};
|
||||||
|
use post3::StorageBackend;
|
||||||
|
use tower_http::trace::TraceLayer;
|
||||||
|
|
||||||
|
use super::handlers::{buckets, objects};
|
||||||
|
use crate::state::State;
|
||||||
|
|
||||||
|
pub fn build_router<B: StorageBackend>(state: State<B>) -> Router {
|
||||||
|
Router::new()
|
||||||
|
// Service-level
|
||||||
|
.route("/", get(buckets::list_buckets::<B>))
|
||||||
|
// Bucket-level (with and without trailing slash for SDK compat)
|
||||||
|
.route("/{bucket}", put(buckets::create_bucket::<B>))
|
||||||
|
.route("/{bucket}/", put(buckets::create_bucket::<B>))
|
||||||
|
.route("/{bucket}", head(buckets::head_bucket::<B>))
|
||||||
|
.route("/{bucket}/", head(buckets::head_bucket::<B>))
|
||||||
|
.route("/{bucket}", delete(buckets::delete_bucket::<B>))
|
||||||
|
.route("/{bucket}/", delete(buckets::delete_bucket::<B>))
|
||||||
|
.route("/{bucket}", get(objects::list_or_get::<B>))
|
||||||
|
.route("/{bucket}/", get(objects::list_or_get::<B>))
|
||||||
|
.route("/{bucket}", post(objects::bucket_post_dispatch::<B>))
|
||||||
|
.route("/{bucket}/", post(objects::bucket_post_dispatch::<B>))
|
||||||
|
// Object-level (wildcard key for nested paths like "a/b/c")
|
||||||
|
.route("/{bucket}/{*key}", put(objects::put_dispatch::<B>))
|
||||||
|
.route("/{bucket}/{*key}", get(objects::get_dispatch::<B>))
|
||||||
|
.route("/{bucket}/{*key}", head(objects::head_object::<B>))
|
||||||
|
.route("/{bucket}/{*key}", delete(objects::delete_dispatch::<B>))
|
||||||
|
.route("/{bucket}/{*key}", post(objects::post_dispatch::<B>))
|
||||||
|
.fallback(fallback)
|
||||||
|
.layer(DefaultBodyLimit::max(5 * 1024 * 1024 * 1024)) // 5 GiB
|
||||||
|
.layer(TraceLayer::new_for_http())
|
||||||
|
.with_state(state)
|
||||||
|
}
|
||||||
|
|
||||||
|
async fn fallback(req: Request) -> impl IntoResponse {
|
||||||
|
tracing::warn!(
|
||||||
|
method = %req.method(),
|
||||||
|
uri = %req.uri(),
|
||||||
|
"unmatched request"
|
||||||
|
);
|
||||||
|
StatusCode::NOT_FOUND
|
||||||
|
}
|
||||||
6
crates/post3-server/src/state.rs
Normal file
6
crates/post3-server/src/state.rs
Normal file
@@ -0,0 +1,6 @@
|
|||||||
|
use post3::StorageBackend;
|
||||||
|
|
||||||
|
#[derive(Clone)]
|
||||||
|
pub struct State<B: StorageBackend> {
|
||||||
|
pub store: B,
|
||||||
|
}
|
||||||
106
crates/post3-server/tests/common/mod.rs
Normal file
106
crates/post3-server/tests/common/mod.rs
Normal file
@@ -0,0 +1,106 @@
|
|||||||
|
use std::net::SocketAddr;
|
||||||
|
|
||||||
|
use aws_credential_types::Credentials;
|
||||||
|
use aws_sdk_s3::Client;
|
||||||
|
use post3::PostgresBackend;
|
||||||
|
use sqlx::PgPool;
|
||||||
|
use tokio::net::TcpListener;
|
||||||
|
use tokio_util::sync::CancellationToken;
|
||||||
|
|
||||||
|
static TRACING: std::sync::Once = std::sync::Once::new();
|
||||||
|
|
||||||
|
fn init_tracing() {
|
||||||
|
TRACING.call_once(|| {
|
||||||
|
tracing_subscriber::fmt()
|
||||||
|
.with_env_filter(
|
||||||
|
tracing_subscriber::EnvFilter::from_default_env()
|
||||||
|
.add_directive("post3_server=debug".parse().unwrap())
|
||||||
|
.add_directive("tower_http=debug".parse().unwrap()),
|
||||||
|
)
|
||||||
|
.with_test_writer()
|
||||||
|
.init();
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
pub struct TestServer {
|
||||||
|
pub addr: SocketAddr,
|
||||||
|
pub client: Client,
|
||||||
|
cancel: CancellationToken,
|
||||||
|
pool: PgPool,
|
||||||
|
}
|
||||||
|
|
||||||
|
impl TestServer {
|
||||||
|
pub async fn start() -> Self {
|
||||||
|
init_tracing();
|
||||||
|
|
||||||
|
let db_url = std::env::var("DATABASE_URL").unwrap_or_else(|_| {
|
||||||
|
"postgresql://devuser:devpassword@localhost:5435/post3_dev".into()
|
||||||
|
});
|
||||||
|
|
||||||
|
let pool = sqlx::pool::PoolOptions::new()
|
||||||
|
.max_connections(5)
|
||||||
|
.connect(&db_url)
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
// Run migrations
|
||||||
|
sqlx::migrate!("../post3/migrations/")
|
||||||
|
.set_locking(false)
|
||||||
|
.run(&pool)
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
// Clean slate
|
||||||
|
sqlx::query("DELETE FROM upload_parts").execute(&pool).await.unwrap();
|
||||||
|
sqlx::query("DELETE FROM multipart_upload_metadata").execute(&pool).await.unwrap();
|
||||||
|
sqlx::query("DELETE FROM multipart_uploads").execute(&pool).await.unwrap();
|
||||||
|
sqlx::query("DELETE FROM blocks").execute(&pool).await.unwrap();
|
||||||
|
sqlx::query("DELETE FROM object_metadata").execute(&pool).await.unwrap();
|
||||||
|
sqlx::query("DELETE FROM objects").execute(&pool).await.unwrap();
|
||||||
|
sqlx::query("DELETE FROM buckets").execute(&pool).await.unwrap();
|
||||||
|
|
||||||
|
let backend = PostgresBackend::new(pool.clone());
|
||||||
|
let state = post3_server::state::State { store: backend };
|
||||||
|
|
||||||
|
let listener = TcpListener::bind("127.0.0.1:0").await.unwrap();
|
||||||
|
let addr = listener.local_addr().unwrap();
|
||||||
|
|
||||||
|
let cancel = CancellationToken::new();
|
||||||
|
let cancel_clone = cancel.clone();
|
||||||
|
|
||||||
|
let router = post3_server::s3::router::build_router(state);
|
||||||
|
tokio::spawn(async move {
|
||||||
|
axum::serve(listener, router.into_make_service())
|
||||||
|
.with_graceful_shutdown(async move {
|
||||||
|
cancel_clone.cancelled().await;
|
||||||
|
})
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
});
|
||||||
|
|
||||||
|
let creds = Credentials::new("test", "test", None, None, "test");
|
||||||
|
let config = aws_sdk_s3::Config::builder()
|
||||||
|
.behavior_version_latest()
|
||||||
|
.region(aws_types::region::Region::new("us-east-1"))
|
||||||
|
.endpoint_url(format!("http://{}", addr))
|
||||||
|
.credentials_provider(creds)
|
||||||
|
.force_path_style(true)
|
||||||
|
.build();
|
||||||
|
|
||||||
|
let client = Client::from_conf(config);
|
||||||
|
|
||||||
|
Self {
|
||||||
|
addr,
|
||||||
|
client,
|
||||||
|
cancel,
|
||||||
|
pool,
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn shutdown(self) {
|
||||||
|
self.cancel.cancel();
|
||||||
|
// Give the server task a moment to wind down
|
||||||
|
tokio::time::sleep(std::time::Duration::from_millis(50)).await;
|
||||||
|
self.pool.close().await;
|
||||||
|
}
|
||||||
|
}
|
||||||
390
crates/post3-server/tests/fs_integration.rs
Normal file
390
crates/post3-server/tests/fs_integration.rs
Normal file
@@ -0,0 +1,390 @@
|
|||||||
|
//! Integration tests using FilesystemBackend (no PostgreSQL required).
|
||||||
|
|
||||||
|
use std::net::SocketAddr;
|
||||||
|
|
||||||
|
use aws_credential_types::Credentials;
|
||||||
|
use aws_sdk_s3::types::{CompletedMultipartUpload, CompletedPart};
|
||||||
|
use aws_sdk_s3::Client;
|
||||||
|
use post3::FilesystemBackend;
|
||||||
|
use tokio::net::TcpListener;
|
||||||
|
use tokio_util::sync::CancellationToken;
|
||||||
|
|
||||||
|
struct FsTestServer {
|
||||||
|
client: Client,
|
||||||
|
cancel: CancellationToken,
|
||||||
|
_tmpdir: tempfile::TempDir,
|
||||||
|
}
|
||||||
|
|
||||||
|
impl FsTestServer {
|
||||||
|
async fn start() -> Self {
|
||||||
|
let tmpdir = tempfile::tempdir().unwrap();
|
||||||
|
let backend = FilesystemBackend::new(tmpdir.path());
|
||||||
|
let state = post3_server::state::State { store: backend };
|
||||||
|
|
||||||
|
let listener = TcpListener::bind("127.0.0.1:0").await.unwrap();
|
||||||
|
let addr: SocketAddr = listener.local_addr().unwrap();
|
||||||
|
|
||||||
|
let cancel = CancellationToken::new();
|
||||||
|
let cancel_clone = cancel.clone();
|
||||||
|
|
||||||
|
let router = post3_server::s3::router::build_router(state);
|
||||||
|
tokio::spawn(async move {
|
||||||
|
axum::serve(listener, router.into_make_service())
|
||||||
|
.with_graceful_shutdown(async move {
|
||||||
|
cancel_clone.cancelled().await;
|
||||||
|
})
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
});
|
||||||
|
|
||||||
|
let creds = Credentials::new("test", "test", None, None, "test");
|
||||||
|
let config = aws_sdk_s3::Config::builder()
|
||||||
|
.behavior_version_latest()
|
||||||
|
.region(aws_types::region::Region::new("us-east-1"))
|
||||||
|
.endpoint_url(format!("http://{}", addr))
|
||||||
|
.credentials_provider(creds)
|
||||||
|
.force_path_style(true)
|
||||||
|
.build();
|
||||||
|
|
||||||
|
let client = Client::from_conf(config);
|
||||||
|
|
||||||
|
Self {
|
||||||
|
client,
|
||||||
|
cancel,
|
||||||
|
_tmpdir: tmpdir,
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async fn shutdown(self) {
|
||||||
|
self.cancel.cancel();
|
||||||
|
tokio::time::sleep(std::time::Duration::from_millis(50)).await;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// --- Tests ---
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_fs_bucket_crud() {
|
||||||
|
let server = FsTestServer::start().await;
|
||||||
|
let c = &server.client;
|
||||||
|
|
||||||
|
// Create
|
||||||
|
c.create_bucket().bucket("my-bucket").send().await.unwrap();
|
||||||
|
|
||||||
|
// Head
|
||||||
|
c.head_bucket().bucket("my-bucket").send().await.unwrap();
|
||||||
|
|
||||||
|
// List
|
||||||
|
let resp = c.list_buckets().send().await.unwrap();
|
||||||
|
let names: Vec<_> = resp
|
||||||
|
.buckets()
|
||||||
|
.iter()
|
||||||
|
.filter_map(|b| b.name())
|
||||||
|
.collect();
|
||||||
|
assert!(names.contains(&"my-bucket"));
|
||||||
|
|
||||||
|
// Delete
|
||||||
|
c.delete_bucket().bucket("my-bucket").send().await.unwrap();
|
||||||
|
|
||||||
|
// Verify gone
|
||||||
|
let result = c.head_bucket().bucket("my-bucket").send().await;
|
||||||
|
assert!(result.is_err());
|
||||||
|
|
||||||
|
server.shutdown().await;
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_fs_put_get_delete() {
|
||||||
|
let server = FsTestServer::start().await;
|
||||||
|
let c = &server.client;
|
||||||
|
|
||||||
|
c.create_bucket().bucket("test").send().await.unwrap();
|
||||||
|
|
||||||
|
// Put
|
||||||
|
c.put_object()
|
||||||
|
.bucket("test")
|
||||||
|
.key("hello.txt")
|
||||||
|
.content_type("text/plain")
|
||||||
|
.body(aws_sdk_s3::primitives::ByteStream::from_static(
|
||||||
|
b"hello world",
|
||||||
|
))
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
// Get
|
||||||
|
let resp = c
|
||||||
|
.get_object()
|
||||||
|
.bucket("test")
|
||||||
|
.key("hello.txt")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
let body = resp.body.collect().await.unwrap().into_bytes();
|
||||||
|
assert_eq!(body.as_ref(), b"hello world");
|
||||||
|
|
||||||
|
// Head
|
||||||
|
let head = c
|
||||||
|
.head_object()
|
||||||
|
.bucket("test")
|
||||||
|
.key("hello.txt")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
assert_eq!(head.content_length(), Some(11));
|
||||||
|
assert_eq!(head.content_type(), Some("text/plain"));
|
||||||
|
|
||||||
|
// Delete
|
||||||
|
c.delete_object()
|
||||||
|
.bucket("test")
|
||||||
|
.key("hello.txt")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
// Verify gone
|
||||||
|
let result = c
|
||||||
|
.get_object()
|
||||||
|
.bucket("test")
|
||||||
|
.key("hello.txt")
|
||||||
|
.send()
|
||||||
|
.await;
|
||||||
|
assert!(result.is_err());
|
||||||
|
|
||||||
|
// Cleanup
|
||||||
|
c.delete_bucket().bucket("test").send().await.unwrap();
|
||||||
|
server.shutdown().await;
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_fs_list_objects() {
|
||||||
|
let server = FsTestServer::start().await;
|
||||||
|
let c = &server.client;
|
||||||
|
|
||||||
|
c.create_bucket().bucket("test").send().await.unwrap();
|
||||||
|
|
||||||
|
for i in 0..5 {
|
||||||
|
c.put_object()
|
||||||
|
.bucket("test")
|
||||||
|
.key(format!("item-{i:02}"))
|
||||||
|
.body(aws_sdk_s3::primitives::ByteStream::from_static(b"data"))
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
}
|
||||||
|
|
||||||
|
// List all
|
||||||
|
let resp = c
|
||||||
|
.list_objects_v2()
|
||||||
|
.bucket("test")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
assert_eq!(resp.key_count(), Some(5));
|
||||||
|
|
||||||
|
// List with prefix
|
||||||
|
let resp = c
|
||||||
|
.list_objects_v2()
|
||||||
|
.bucket("test")
|
||||||
|
.prefix("item-03")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
assert_eq!(resp.key_count(), Some(1));
|
||||||
|
|
||||||
|
// Cleanup
|
||||||
|
for i in 0..5 {
|
||||||
|
c.delete_object()
|
||||||
|
.bucket("test")
|
||||||
|
.key(format!("item-{i:02}"))
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
}
|
||||||
|
c.delete_bucket().bucket("test").send().await.unwrap();
|
||||||
|
server.shutdown().await;
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_fs_user_metadata() {
|
||||||
|
let server = FsTestServer::start().await;
|
||||||
|
let c = &server.client;
|
||||||
|
|
||||||
|
c.create_bucket().bucket("test").send().await.unwrap();
|
||||||
|
|
||||||
|
c.put_object()
|
||||||
|
.bucket("test")
|
||||||
|
.key("meta.txt")
|
||||||
|
.metadata("author", "test-user")
|
||||||
|
.metadata("version", "1")
|
||||||
|
.body(aws_sdk_s3::primitives::ByteStream::from_static(b"data"))
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let head = c
|
||||||
|
.head_object()
|
||||||
|
.bucket("test")
|
||||||
|
.key("meta.txt")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let meta = head.metadata().unwrap();
|
||||||
|
assert_eq!(meta.get("author").unwrap(), "test-user");
|
||||||
|
assert_eq!(meta.get("version").unwrap(), "1");
|
||||||
|
|
||||||
|
// Cleanup
|
||||||
|
c.delete_object()
|
||||||
|
.bucket("test")
|
||||||
|
.key("meta.txt")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
c.delete_bucket().bucket("test").send().await.unwrap();
|
||||||
|
server.shutdown().await;
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_fs_multipart_upload() {
|
||||||
|
let server = FsTestServer::start().await;
|
||||||
|
let c = &server.client;
|
||||||
|
|
||||||
|
c.create_bucket().bucket("test").send().await.unwrap();
|
||||||
|
|
||||||
|
// Create multipart upload
|
||||||
|
let create = c
|
||||||
|
.create_multipart_upload()
|
||||||
|
.bucket("test")
|
||||||
|
.key("big.bin")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
let upload_id = create.upload_id().unwrap();
|
||||||
|
|
||||||
|
// Upload parts (non-last parts must be >= 5 MB per S3 spec)
|
||||||
|
let min_part = 5 * 1024 * 1024;
|
||||||
|
let part1 = c
|
||||||
|
.upload_part()
|
||||||
|
.bucket("test")
|
||||||
|
.key("big.bin")
|
||||||
|
.upload_id(upload_id)
|
||||||
|
.part_number(1)
|
||||||
|
.body(aws_sdk_s3::primitives::ByteStream::from(vec![0xAAu8; min_part]))
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let part2 = c
|
||||||
|
.upload_part()
|
||||||
|
.bucket("test")
|
||||||
|
.key("big.bin")
|
||||||
|
.upload_id(upload_id)
|
||||||
|
.part_number(2)
|
||||||
|
.body(aws_sdk_s3::primitives::ByteStream::from(vec![0xBBu8; 1024]))
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
// Complete
|
||||||
|
let completed = CompletedMultipartUpload::builder()
|
||||||
|
.parts(
|
||||||
|
CompletedPart::builder()
|
||||||
|
.part_number(1)
|
||||||
|
.e_tag(part1.e_tag().unwrap())
|
||||||
|
.build(),
|
||||||
|
)
|
||||||
|
.parts(
|
||||||
|
CompletedPart::builder()
|
||||||
|
.part_number(2)
|
||||||
|
.e_tag(part2.e_tag().unwrap())
|
||||||
|
.build(),
|
||||||
|
)
|
||||||
|
.build();
|
||||||
|
|
||||||
|
let complete_resp = c
|
||||||
|
.complete_multipart_upload()
|
||||||
|
.bucket("test")
|
||||||
|
.key("big.bin")
|
||||||
|
.upload_id(upload_id)
|
||||||
|
.multipart_upload(completed)
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
// Verify compound ETag
|
||||||
|
let etag = complete_resp.e_tag().unwrap();
|
||||||
|
assert!(etag.contains("-2"), "Expected compound ETag, got: {etag}");
|
||||||
|
|
||||||
|
// Verify data
|
||||||
|
let resp = c
|
||||||
|
.get_object()
|
||||||
|
.bucket("test")
|
||||||
|
.key("big.bin")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
let body = resp.body.collect().await.unwrap().into_bytes();
|
||||||
|
assert_eq!(body.len(), min_part + 1024);
|
||||||
|
assert!(body[..min_part].iter().all(|b| *b == 0xAA));
|
||||||
|
assert!(body[min_part..].iter().all(|b| *b == 0xBB));
|
||||||
|
|
||||||
|
// Cleanup
|
||||||
|
c.delete_object()
|
||||||
|
.bucket("test")
|
||||||
|
.key("big.bin")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
c.delete_bucket().bucket("test").send().await.unwrap();
|
||||||
|
server.shutdown().await;
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_fs_abort_multipart() {
|
||||||
|
let server = FsTestServer::start().await;
|
||||||
|
let c = &server.client;
|
||||||
|
|
||||||
|
c.create_bucket().bucket("test").send().await.unwrap();
|
||||||
|
|
||||||
|
let create = c
|
||||||
|
.create_multipart_upload()
|
||||||
|
.bucket("test")
|
||||||
|
.key("aborted.bin")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
let upload_id = create.upload_id().unwrap();
|
||||||
|
|
||||||
|
// Upload a part
|
||||||
|
c.upload_part()
|
||||||
|
.bucket("test")
|
||||||
|
.key("aborted.bin")
|
||||||
|
.upload_id(upload_id)
|
||||||
|
.part_number(1)
|
||||||
|
.body(aws_sdk_s3::primitives::ByteStream::from(vec![0u8; 100]))
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
// Abort
|
||||||
|
c.abort_multipart_upload()
|
||||||
|
.bucket("test")
|
||||||
|
.key("aborted.bin")
|
||||||
|
.upload_id(upload_id)
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
// Verify no object was created
|
||||||
|
let result = c
|
||||||
|
.get_object()
|
||||||
|
.bucket("test")
|
||||||
|
.key("aborted.bin")
|
||||||
|
.send()
|
||||||
|
.await;
|
||||||
|
assert!(result.is_err());
|
||||||
|
|
||||||
|
c.delete_bucket().bucket("test").send().await.unwrap();
|
||||||
|
server.shutdown().await;
|
||||||
|
}
|
||||||
871
crates/post3-server/tests/s3_integration.rs
Normal file
871
crates/post3-server/tests/s3_integration.rs
Normal file
@@ -0,0 +1,871 @@
|
|||||||
|
mod common;
|
||||||
|
|
||||||
|
use aws_sdk_s3::primitives::ByteStream;
|
||||||
|
use aws_sdk_s3::types::{CompletedMultipartUpload, CompletedPart};
|
||||||
|
use common::TestServer;
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_create_and_list_buckets() {
|
||||||
|
let server = TestServer::start().await;
|
||||||
|
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.create_bucket()
|
||||||
|
.bucket("test-bucket")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let resp = server.client.list_buckets().send().await.unwrap();
|
||||||
|
let names: Vec<_> = resp
|
||||||
|
.buckets()
|
||||||
|
.iter()
|
||||||
|
.filter_map(|b| b.name())
|
||||||
|
.collect();
|
||||||
|
assert!(names.contains(&"test-bucket"));
|
||||||
|
|
||||||
|
server.shutdown().await;
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_head_bucket() {
|
||||||
|
let server = TestServer::start().await;
|
||||||
|
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.create_bucket()
|
||||||
|
.bucket("hb-test")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.head_bucket()
|
||||||
|
.bucket("hb-test")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let err = server
|
||||||
|
.client
|
||||||
|
.head_bucket()
|
||||||
|
.bucket("no-such-bucket")
|
||||||
|
.send()
|
||||||
|
.await;
|
||||||
|
assert!(err.is_err());
|
||||||
|
|
||||||
|
server.shutdown().await;
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_delete_bucket() {
|
||||||
|
let server = TestServer::start().await;
|
||||||
|
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.create_bucket()
|
||||||
|
.bucket("to-delete")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.delete_bucket()
|
||||||
|
.bucket("to-delete")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let err = server
|
||||||
|
.client
|
||||||
|
.head_bucket()
|
||||||
|
.bucket("to-delete")
|
||||||
|
.send()
|
||||||
|
.await;
|
||||||
|
assert!(err.is_err());
|
||||||
|
|
||||||
|
server.shutdown().await;
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_put_and_get_object() {
|
||||||
|
let server = TestServer::start().await;
|
||||||
|
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.create_bucket()
|
||||||
|
.bucket("data")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let body = ByteStream::from_static(b"hello world");
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.put_object()
|
||||||
|
.bucket("data")
|
||||||
|
.key("greeting.txt")
|
||||||
|
.content_type("text/plain")
|
||||||
|
.body(body)
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let resp = server
|
||||||
|
.client
|
||||||
|
.get_object()
|
||||||
|
.bucket("data")
|
||||||
|
.key("greeting.txt")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let content_type = resp.content_type().map(|s| s.to_string());
|
||||||
|
let bytes = resp.body.collect().await.unwrap().into_bytes();
|
||||||
|
assert_eq!(bytes.as_ref(), b"hello world");
|
||||||
|
assert_eq!(content_type.as_deref(), Some("text/plain"));
|
||||||
|
|
||||||
|
server.shutdown().await;
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_put_large_object_chunked() {
|
||||||
|
let server = TestServer::start().await;
|
||||||
|
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.create_bucket()
|
||||||
|
.bucket("large")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
// 3 MiB object => should be split into 3 blocks at 1 MiB each
|
||||||
|
let data = vec![0x42u8; 3 * 1024 * 1024];
|
||||||
|
let body = ByteStream::from(data.clone());
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.put_object()
|
||||||
|
.bucket("large")
|
||||||
|
.key("big-file.bin")
|
||||||
|
.body(body)
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let resp = server
|
||||||
|
.client
|
||||||
|
.get_object()
|
||||||
|
.bucket("large")
|
||||||
|
.key("big-file.bin")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let bytes = resp.body.collect().await.unwrap().into_bytes();
|
||||||
|
assert_eq!(bytes.len(), 3 * 1024 * 1024);
|
||||||
|
assert_eq!(bytes.as_ref(), data.as_slice());
|
||||||
|
|
||||||
|
server.shutdown().await;
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_head_object() {
|
||||||
|
let server = TestServer::start().await;
|
||||||
|
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.create_bucket()
|
||||||
|
.bucket("meta")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let body = ByteStream::from_static(b"test");
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.put_object()
|
||||||
|
.bucket("meta")
|
||||||
|
.key("file.txt")
|
||||||
|
.body(body)
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let resp = server
|
||||||
|
.client
|
||||||
|
.head_object()
|
||||||
|
.bucket("meta")
|
||||||
|
.key("file.txt")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
assert_eq!(resp.content_length(), Some(4));
|
||||||
|
|
||||||
|
server.shutdown().await;
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_delete_object() {
|
||||||
|
let server = TestServer::start().await;
|
||||||
|
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.create_bucket()
|
||||||
|
.bucket("del")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let body = ByteStream::from_static(b"bye");
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.put_object()
|
||||||
|
.bucket("del")
|
||||||
|
.key("gone.txt")
|
||||||
|
.body(body)
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.delete_object()
|
||||||
|
.bucket("del")
|
||||||
|
.key("gone.txt")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let err = server
|
||||||
|
.client
|
||||||
|
.get_object()
|
||||||
|
.bucket("del")
|
||||||
|
.key("gone.txt")
|
||||||
|
.send()
|
||||||
|
.await;
|
||||||
|
assert!(err.is_err());
|
||||||
|
|
||||||
|
server.shutdown().await;
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_list_objects_v2() {
|
||||||
|
let server = TestServer::start().await;
|
||||||
|
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.create_bucket()
|
||||||
|
.bucket("list-test")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
for i in 0..5 {
|
||||||
|
let body = ByteStream::from_static(b"x");
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.put_object()
|
||||||
|
.bucket("list-test")
|
||||||
|
.key(format!("prefix/file-{i}.txt"))
|
||||||
|
.body(body)
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
}
|
||||||
|
|
||||||
|
let resp = server
|
||||||
|
.client
|
||||||
|
.list_objects_v2()
|
||||||
|
.bucket("list-test")
|
||||||
|
.prefix("prefix/")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
assert_eq!(resp.key_count(), Some(5));
|
||||||
|
|
||||||
|
server.shutdown().await;
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_overwrite_object() {
|
||||||
|
let server = TestServer::start().await;
|
||||||
|
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.create_bucket()
|
||||||
|
.bucket("ow")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let body1 = ByteStream::from_static(b"version1");
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.put_object()
|
||||||
|
.bucket("ow")
|
||||||
|
.key("file.txt")
|
||||||
|
.body(body1)
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let body2 = ByteStream::from_static(b"version2-longer");
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.put_object()
|
||||||
|
.bucket("ow")
|
||||||
|
.key("file.txt")
|
||||||
|
.body(body2)
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let resp = server
|
||||||
|
.client
|
||||||
|
.get_object()
|
||||||
|
.bucket("ow")
|
||||||
|
.key("file.txt")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let bytes = resp.body.collect().await.unwrap().into_bytes();
|
||||||
|
assert_eq!(bytes.as_ref(), b"version2-longer");
|
||||||
|
|
||||||
|
server.shutdown().await;
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_user_metadata_roundtrip() {
|
||||||
|
let server = TestServer::start().await;
|
||||||
|
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.create_bucket()
|
||||||
|
.bucket("meta-test")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let body = ByteStream::from_static(b"with metadata");
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.put_object()
|
||||||
|
.bucket("meta-test")
|
||||||
|
.key("doc.txt")
|
||||||
|
.body(body)
|
||||||
|
.metadata("author", "test-user")
|
||||||
|
.metadata("version", "42")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let resp = server
|
||||||
|
.client
|
||||||
|
.head_object()
|
||||||
|
.bucket("meta-test")
|
||||||
|
.key("doc.txt")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let meta = resp.metadata().unwrap();
|
||||||
|
assert_eq!(meta.get("author").map(|s| s.as_str()), Some("test-user"));
|
||||||
|
assert_eq!(meta.get("version").map(|s| s.as_str()), Some("42"));
|
||||||
|
|
||||||
|
server.shutdown().await;
|
||||||
|
}
|
||||||
|
|
||||||
|
// --- Multipart upload tests ---
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_multipart_upload_basic() {
|
||||||
|
let server = TestServer::start().await;
|
||||||
|
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.create_bucket()
|
||||||
|
.bucket("mp-basic")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
// Create multipart upload
|
||||||
|
let create_resp = server
|
||||||
|
.client
|
||||||
|
.create_multipart_upload()
|
||||||
|
.bucket("mp-basic")
|
||||||
|
.key("large-file.bin")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
let upload_id = create_resp.upload_id().unwrap().to_string();
|
||||||
|
|
||||||
|
// Upload 3 parts (non-last parts must be >= 5 MB per S3 spec)
|
||||||
|
let min_part = 5 * 1024 * 1024;
|
||||||
|
let part1_data = vec![0x11u8; min_part];
|
||||||
|
let part2_data = vec![0x22u8; min_part];
|
||||||
|
let part3_data = vec![0x33u8; 1024 * 1024];
|
||||||
|
|
||||||
|
let p1 = server
|
||||||
|
.client
|
||||||
|
.upload_part()
|
||||||
|
.bucket("mp-basic")
|
||||||
|
.key("large-file.bin")
|
||||||
|
.upload_id(&upload_id)
|
||||||
|
.part_number(1)
|
||||||
|
.body(ByteStream::from(part1_data.clone()))
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let p2 = server
|
||||||
|
.client
|
||||||
|
.upload_part()
|
||||||
|
.bucket("mp-basic")
|
||||||
|
.key("large-file.bin")
|
||||||
|
.upload_id(&upload_id)
|
||||||
|
.part_number(2)
|
||||||
|
.body(ByteStream::from(part2_data.clone()))
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let p3 = server
|
||||||
|
.client
|
||||||
|
.upload_part()
|
||||||
|
.bucket("mp-basic")
|
||||||
|
.key("large-file.bin")
|
||||||
|
.upload_id(&upload_id)
|
||||||
|
.part_number(3)
|
||||||
|
.body(ByteStream::from(part3_data.clone()))
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
// Complete multipart upload
|
||||||
|
let completed = CompletedMultipartUpload::builder()
|
||||||
|
.parts(
|
||||||
|
CompletedPart::builder()
|
||||||
|
.part_number(1)
|
||||||
|
.e_tag(p1.e_tag().unwrap())
|
||||||
|
.build(),
|
||||||
|
)
|
||||||
|
.parts(
|
||||||
|
CompletedPart::builder()
|
||||||
|
.part_number(2)
|
||||||
|
.e_tag(p2.e_tag().unwrap())
|
||||||
|
.build(),
|
||||||
|
)
|
||||||
|
.parts(
|
||||||
|
CompletedPart::builder()
|
||||||
|
.part_number(3)
|
||||||
|
.e_tag(p3.e_tag().unwrap())
|
||||||
|
.build(),
|
||||||
|
)
|
||||||
|
.build();
|
||||||
|
|
||||||
|
let complete_resp = server
|
||||||
|
.client
|
||||||
|
.complete_multipart_upload()
|
||||||
|
.bucket("mp-basic")
|
||||||
|
.key("large-file.bin")
|
||||||
|
.upload_id(&upload_id)
|
||||||
|
.multipart_upload(completed)
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
// Verify ETag is compound format (hex-3)
|
||||||
|
let etag = complete_resp.e_tag().unwrap();
|
||||||
|
assert!(etag.contains("-3"), "Expected compound ETag, got: {etag}");
|
||||||
|
|
||||||
|
// Get and verify assembled data
|
||||||
|
let get_resp = server
|
||||||
|
.client
|
||||||
|
.get_object()
|
||||||
|
.bucket("mp-basic")
|
||||||
|
.key("large-file.bin")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let body = get_resp.body.collect().await.unwrap().into_bytes();
|
||||||
|
assert_eq!(body.len(), min_part * 2 + 1024 * 1024);
|
||||||
|
|
||||||
|
let mut expected = Vec::new();
|
||||||
|
expected.extend_from_slice(&part1_data);
|
||||||
|
expected.extend_from_slice(&part2_data);
|
||||||
|
expected.extend_from_slice(&part3_data);
|
||||||
|
assert_eq!(body.as_ref(), expected.as_slice());
|
||||||
|
|
||||||
|
server.shutdown().await;
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_abort_multipart_upload() {
|
||||||
|
let server = TestServer::start().await;
|
||||||
|
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.create_bucket()
|
||||||
|
.bucket("mp-abort")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let create_resp = server
|
||||||
|
.client
|
||||||
|
.create_multipart_upload()
|
||||||
|
.bucket("mp-abort")
|
||||||
|
.key("aborted.bin")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
let upload_id = create_resp.upload_id().unwrap().to_string();
|
||||||
|
|
||||||
|
// Upload a part
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.upload_part()
|
||||||
|
.bucket("mp-abort")
|
||||||
|
.key("aborted.bin")
|
||||||
|
.upload_id(&upload_id)
|
||||||
|
.part_number(1)
|
||||||
|
.body(ByteStream::from(vec![0xAAu8; 1024]))
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
// Abort
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.abort_multipart_upload()
|
||||||
|
.bucket("mp-abort")
|
||||||
|
.key("aborted.bin")
|
||||||
|
.upload_id(&upload_id)
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
// Verify object doesn't exist
|
||||||
|
let err = server
|
||||||
|
.client
|
||||||
|
.get_object()
|
||||||
|
.bucket("mp-abort")
|
||||||
|
.key("aborted.bin")
|
||||||
|
.send()
|
||||||
|
.await;
|
||||||
|
assert!(err.is_err());
|
||||||
|
|
||||||
|
server.shutdown().await;
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_list_parts() {
|
||||||
|
let server = TestServer::start().await;
|
||||||
|
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.create_bucket()
|
||||||
|
.bucket("mp-list-parts")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let create_resp = server
|
||||||
|
.client
|
||||||
|
.create_multipart_upload()
|
||||||
|
.bucket("mp-list-parts")
|
||||||
|
.key("parts.bin")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
let upload_id = create_resp.upload_id().unwrap().to_string();
|
||||||
|
|
||||||
|
// Upload 3 parts
|
||||||
|
for i in 1..=3 {
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.upload_part()
|
||||||
|
.bucket("mp-list-parts")
|
||||||
|
.key("parts.bin")
|
||||||
|
.upload_id(&upload_id)
|
||||||
|
.part_number(i)
|
||||||
|
.body(ByteStream::from(vec![i as u8; 1024 * 100]))
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
}
|
||||||
|
|
||||||
|
// List parts
|
||||||
|
let list_resp = server
|
||||||
|
.client
|
||||||
|
.list_parts()
|
||||||
|
.bucket("mp-list-parts")
|
||||||
|
.key("parts.bin")
|
||||||
|
.upload_id(&upload_id)
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let parts = list_resp.parts();
|
||||||
|
assert_eq!(parts.len(), 3);
|
||||||
|
assert_eq!(parts[0].part_number(), Some(1));
|
||||||
|
assert_eq!(parts[1].part_number(), Some(2));
|
||||||
|
assert_eq!(parts[2].part_number(), Some(3));
|
||||||
|
for p in parts {
|
||||||
|
assert_eq!(p.size(), Some(1024 * 100));
|
||||||
|
}
|
||||||
|
|
||||||
|
// Cleanup
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.abort_multipart_upload()
|
||||||
|
.bucket("mp-list-parts")
|
||||||
|
.key("parts.bin")
|
||||||
|
.upload_id(&upload_id)
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
server.shutdown().await;
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_list_multipart_uploads() {
|
||||||
|
let server = TestServer::start().await;
|
||||||
|
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.create_bucket()
|
||||||
|
.bucket("mp-list-uploads")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
// Create two uploads
|
||||||
|
let u1 = server
|
||||||
|
.client
|
||||||
|
.create_multipart_upload()
|
||||||
|
.bucket("mp-list-uploads")
|
||||||
|
.key("file-a.bin")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
let u1_id = u1.upload_id().unwrap().to_string();
|
||||||
|
|
||||||
|
let u2 = server
|
||||||
|
.client
|
||||||
|
.create_multipart_upload()
|
||||||
|
.bucket("mp-list-uploads")
|
||||||
|
.key("file-b.bin")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
let u2_id = u2.upload_id().unwrap().to_string();
|
||||||
|
|
||||||
|
// List multipart uploads
|
||||||
|
let list_resp = server
|
||||||
|
.client
|
||||||
|
.list_multipart_uploads()
|
||||||
|
.bucket("mp-list-uploads")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let uploads = list_resp.uploads();
|
||||||
|
assert_eq!(uploads.len(), 2);
|
||||||
|
|
||||||
|
let keys: Vec<&str> = uploads.iter().filter_map(|u| u.key()).collect();
|
||||||
|
assert!(keys.contains(&"file-a.bin"));
|
||||||
|
assert!(keys.contains(&"file-b.bin"));
|
||||||
|
|
||||||
|
// Cleanup
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.abort_multipart_upload()
|
||||||
|
.bucket("mp-list-uploads")
|
||||||
|
.key("file-a.bin")
|
||||||
|
.upload_id(&u1_id)
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.abort_multipart_upload()
|
||||||
|
.bucket("mp-list-uploads")
|
||||||
|
.key("file-b.bin")
|
||||||
|
.upload_id(&u2_id)
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
server.shutdown().await;
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_overwrite_part() {
|
||||||
|
let server = TestServer::start().await;
|
||||||
|
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.create_bucket()
|
||||||
|
.bucket("mp-overwrite")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let create_resp = server
|
||||||
|
.client
|
||||||
|
.create_multipart_upload()
|
||||||
|
.bucket("mp-overwrite")
|
||||||
|
.key("ow.bin")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
let upload_id = create_resp.upload_id().unwrap().to_string();
|
||||||
|
|
||||||
|
// Upload part 1 with data A
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.upload_part()
|
||||||
|
.bucket("mp-overwrite")
|
||||||
|
.key("ow.bin")
|
||||||
|
.upload_id(&upload_id)
|
||||||
|
.part_number(1)
|
||||||
|
.body(ByteStream::from(vec![0xAAu8; 1024]))
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
// Re-upload part 1 with data B
|
||||||
|
let p1 = server
|
||||||
|
.client
|
||||||
|
.upload_part()
|
||||||
|
.bucket("mp-overwrite")
|
||||||
|
.key("ow.bin")
|
||||||
|
.upload_id(&upload_id)
|
||||||
|
.part_number(1)
|
||||||
|
.body(ByteStream::from(vec![0xBBu8; 1024]))
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
// Complete with the latest etag
|
||||||
|
let completed = CompletedMultipartUpload::builder()
|
||||||
|
.parts(
|
||||||
|
CompletedPart::builder()
|
||||||
|
.part_number(1)
|
||||||
|
.e_tag(p1.e_tag().unwrap())
|
||||||
|
.build(),
|
||||||
|
)
|
||||||
|
.build();
|
||||||
|
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.complete_multipart_upload()
|
||||||
|
.bucket("mp-overwrite")
|
||||||
|
.key("ow.bin")
|
||||||
|
.upload_id(&upload_id)
|
||||||
|
.multipart_upload(completed)
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
// Verify data B
|
||||||
|
let get_resp = server
|
||||||
|
.client
|
||||||
|
.get_object()
|
||||||
|
.bucket("mp-overwrite")
|
||||||
|
.key("ow.bin")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let body = get_resp.body.collect().await.unwrap().into_bytes();
|
||||||
|
assert_eq!(body.as_ref(), vec![0xBBu8; 1024].as_slice());
|
||||||
|
|
||||||
|
server.shutdown().await;
|
||||||
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn test_multipart_with_metadata() {
|
||||||
|
let server = TestServer::start().await;
|
||||||
|
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.create_bucket()
|
||||||
|
.bucket("mp-meta")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
// Create multipart upload with metadata
|
||||||
|
let create_resp = server
|
||||||
|
.client
|
||||||
|
.create_multipart_upload()
|
||||||
|
.bucket("mp-meta")
|
||||||
|
.key("meta-file.bin")
|
||||||
|
.metadata("author", "test-user")
|
||||||
|
.metadata("version", "7")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
let upload_id = create_resp.upload_id().unwrap().to_string();
|
||||||
|
|
||||||
|
// Upload one part
|
||||||
|
let p1 = server
|
||||||
|
.client
|
||||||
|
.upload_part()
|
||||||
|
.bucket("mp-meta")
|
||||||
|
.key("meta-file.bin")
|
||||||
|
.upload_id(&upload_id)
|
||||||
|
.part_number(1)
|
||||||
|
.body(ByteStream::from(vec![0xFFu8; 512]))
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
// Complete
|
||||||
|
let completed = CompletedMultipartUpload::builder()
|
||||||
|
.parts(
|
||||||
|
CompletedPart::builder()
|
||||||
|
.part_number(1)
|
||||||
|
.e_tag(p1.e_tag().unwrap())
|
||||||
|
.build(),
|
||||||
|
)
|
||||||
|
.build();
|
||||||
|
|
||||||
|
server
|
||||||
|
.client
|
||||||
|
.complete_multipart_upload()
|
||||||
|
.bucket("mp-meta")
|
||||||
|
.key("meta-file.bin")
|
||||||
|
.upload_id(&upload_id)
|
||||||
|
.multipart_upload(completed)
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
// Head object — verify metadata came through
|
||||||
|
let head = server
|
||||||
|
.client
|
||||||
|
.head_object()
|
||||||
|
.bucket("mp-meta")
|
||||||
|
.key("meta-file.bin")
|
||||||
|
.send()
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let meta = head.metadata().unwrap();
|
||||||
|
assert_eq!(meta.get("author").map(|s| s.as_str()), Some("test-user"));
|
||||||
|
assert_eq!(meta.get("version").map(|s| s.as_str()), Some("7"));
|
||||||
|
|
||||||
|
server.shutdown().await;
|
||||||
|
}
|
||||||
22
crates/post3/Cargo.toml
Normal file
22
crates/post3/Cargo.toml
Normal file
@@ -0,0 +1,22 @@
|
|||||||
|
[package]
|
||||||
|
name = "post3"
|
||||||
|
version.workspace = true
|
||||||
|
edition.workspace = true
|
||||||
|
|
||||||
|
[dependencies]
|
||||||
|
anyhow.workspace = true
|
||||||
|
tokio.workspace = true
|
||||||
|
tracing.workspace = true
|
||||||
|
sqlx.workspace = true
|
||||||
|
uuid.workspace = true
|
||||||
|
bytes.workspace = true
|
||||||
|
chrono.workspace = true
|
||||||
|
md-5.workspace = true
|
||||||
|
hex.workspace = true
|
||||||
|
thiserror.workspace = true
|
||||||
|
serde.workspace = true
|
||||||
|
serde_json.workspace = true
|
||||||
|
percent-encoding.workspace = true
|
||||||
|
|
||||||
|
[dev-dependencies]
|
||||||
|
tempfile.workspace = true
|
||||||
37
crates/post3/migrations/20260226000001_initial.sql
Normal file
37
crates/post3/migrations/20260226000001_initial.sql
Normal file
@@ -0,0 +1,37 @@
|
|||||||
|
CREATE TABLE buckets (
|
||||||
|
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||||
|
name TEXT NOT NULL,
|
||||||
|
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
|
||||||
|
);
|
||||||
|
CREATE UNIQUE INDEX idx_buckets_name ON buckets (name);
|
||||||
|
|
||||||
|
CREATE TABLE objects (
|
||||||
|
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||||
|
bucket_id UUID NOT NULL REFERENCES buckets(id) ON DELETE CASCADE,
|
||||||
|
key TEXT NOT NULL,
|
||||||
|
size BIGINT NOT NULL,
|
||||||
|
etag TEXT NOT NULL,
|
||||||
|
content_type TEXT NOT NULL DEFAULT 'application/octet-stream',
|
||||||
|
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
|
||||||
|
);
|
||||||
|
CREATE UNIQUE INDEX idx_objects_bucket_key ON objects (bucket_id, key);
|
||||||
|
CREATE INDEX idx_objects_key_prefix ON objects (bucket_id, key text_pattern_ops);
|
||||||
|
|
||||||
|
CREATE TABLE object_metadata (
|
||||||
|
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||||
|
object_id UUID NOT NULL REFERENCES objects(id) ON DELETE CASCADE,
|
||||||
|
meta_key TEXT NOT NULL,
|
||||||
|
meta_value TEXT NOT NULL
|
||||||
|
);
|
||||||
|
CREATE UNIQUE INDEX idx_metadata_object_key ON object_metadata (object_id, meta_key);
|
||||||
|
CREATE INDEX idx_metadata_object_id ON object_metadata (object_id);
|
||||||
|
|
||||||
|
CREATE TABLE blocks (
|
||||||
|
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||||
|
object_id UUID NOT NULL REFERENCES objects(id) ON DELETE CASCADE,
|
||||||
|
block_index INT NOT NULL,
|
||||||
|
data BYTEA NOT NULL,
|
||||||
|
block_size INT NOT NULL
|
||||||
|
);
|
||||||
|
CREATE UNIQUE INDEX idx_blocks_object_index ON blocks (object_id, block_index);
|
||||||
|
CREATE INDEX idx_blocks_object_id ON blocks (object_id);
|
||||||
29
crates/post3/migrations/20260227000001_multipart_uploads.sql
Normal file
29
crates/post3/migrations/20260227000001_multipart_uploads.sql
Normal file
@@ -0,0 +1,29 @@
|
|||||||
|
CREATE TABLE multipart_uploads (
|
||||||
|
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||||
|
bucket_id UUID NOT NULL REFERENCES buckets(id) ON DELETE CASCADE,
|
||||||
|
key TEXT NOT NULL,
|
||||||
|
upload_id TEXT NOT NULL,
|
||||||
|
content_type TEXT NOT NULL DEFAULT 'application/octet-stream',
|
||||||
|
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
|
||||||
|
);
|
||||||
|
CREATE UNIQUE INDEX idx_multipart_upload_id ON multipart_uploads (upload_id);
|
||||||
|
CREATE INDEX idx_multipart_bucket ON multipart_uploads (bucket_id);
|
||||||
|
|
||||||
|
CREATE TABLE multipart_upload_metadata (
|
||||||
|
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||||
|
upload_id UUID NOT NULL REFERENCES multipart_uploads(id) ON DELETE CASCADE,
|
||||||
|
meta_key TEXT NOT NULL,
|
||||||
|
meta_value TEXT NOT NULL
|
||||||
|
);
|
||||||
|
CREATE UNIQUE INDEX idx_mp_meta_key ON multipart_upload_metadata (upload_id, meta_key);
|
||||||
|
|
||||||
|
CREATE TABLE upload_parts (
|
||||||
|
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||||
|
upload_id UUID NOT NULL REFERENCES multipart_uploads(id) ON DELETE CASCADE,
|
||||||
|
part_number INT NOT NULL,
|
||||||
|
data BYTEA NOT NULL,
|
||||||
|
size BIGINT NOT NULL,
|
||||||
|
etag TEXT NOT NULL,
|
||||||
|
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
|
||||||
|
);
|
||||||
|
CREATE UNIQUE INDEX idx_upload_parts_num ON upload_parts (upload_id, part_number);
|
||||||
123
crates/post3/src/backend.rs
Normal file
123
crates/post3/src/backend.rs
Normal file
@@ -0,0 +1,123 @@
|
|||||||
|
use std::collections::HashMap;
|
||||||
|
use std::future::Future;
|
||||||
|
|
||||||
|
use bytes::Bytes;
|
||||||
|
|
||||||
|
use crate::error::Post3Error;
|
||||||
|
use crate::models::{
|
||||||
|
BucketInfo, CompleteMultipartUploadResult, CreateMultipartUploadResult, GetObjectResult,
|
||||||
|
HeadObjectResult, ListMultipartUploadsResult, ListObjectsResult, ListPartsResult,
|
||||||
|
PutObjectResult, UploadPartResult,
|
||||||
|
};
|
||||||
|
|
||||||
|
/// Trait abstracting storage operations. Implemented by `PostgresBackend` and `FilesystemBackend`.
|
||||||
|
pub trait StorageBackend: Clone + Send + Sync + 'static {
|
||||||
|
// --- Bucket operations ---
|
||||||
|
|
||||||
|
fn create_bucket(
|
||||||
|
&self,
|
||||||
|
name: &str,
|
||||||
|
) -> impl Future<Output = Result<BucketInfo, Post3Error>> + Send;
|
||||||
|
|
||||||
|
fn head_bucket(
|
||||||
|
&self,
|
||||||
|
name: &str,
|
||||||
|
) -> impl Future<Output = Result<Option<BucketInfo>, Post3Error>> + Send;
|
||||||
|
|
||||||
|
fn delete_bucket(
|
||||||
|
&self,
|
||||||
|
name: &str,
|
||||||
|
) -> impl Future<Output = Result<(), Post3Error>> + Send;
|
||||||
|
|
||||||
|
fn list_buckets(&self) -> impl Future<Output = Result<Vec<BucketInfo>, Post3Error>> + Send;
|
||||||
|
|
||||||
|
// --- Object operations ---
|
||||||
|
|
||||||
|
fn put_object(
|
||||||
|
&self,
|
||||||
|
bucket: &str,
|
||||||
|
key: &str,
|
||||||
|
content_type: Option<&str>,
|
||||||
|
metadata: HashMap<String, String>,
|
||||||
|
body: Bytes,
|
||||||
|
) -> impl Future<Output = Result<PutObjectResult, Post3Error>> + Send;
|
||||||
|
|
||||||
|
fn get_object(
|
||||||
|
&self,
|
||||||
|
bucket: &str,
|
||||||
|
key: &str,
|
||||||
|
) -> impl Future<Output = Result<GetObjectResult, Post3Error>> + Send;
|
||||||
|
|
||||||
|
fn head_object(
|
||||||
|
&self,
|
||||||
|
bucket: &str,
|
||||||
|
key: &str,
|
||||||
|
) -> impl Future<Output = Result<Option<HeadObjectResult>, Post3Error>> + Send;
|
||||||
|
|
||||||
|
fn delete_object(
|
||||||
|
&self,
|
||||||
|
bucket: &str,
|
||||||
|
key: &str,
|
||||||
|
) -> impl Future<Output = Result<(), Post3Error>> + Send;
|
||||||
|
|
||||||
|
fn list_objects_v2(
|
||||||
|
&self,
|
||||||
|
bucket: &str,
|
||||||
|
prefix: Option<&str>,
|
||||||
|
continuation_token: Option<&str>,
|
||||||
|
max_keys: Option<i64>,
|
||||||
|
delimiter: Option<&str>,
|
||||||
|
) -> impl Future<Output = Result<ListObjectsResult, Post3Error>> + Send;
|
||||||
|
|
||||||
|
// --- Multipart upload operations ---
|
||||||
|
|
||||||
|
fn create_multipart_upload(
|
||||||
|
&self,
|
||||||
|
bucket: &str,
|
||||||
|
key: &str,
|
||||||
|
content_type: Option<&str>,
|
||||||
|
metadata: HashMap<String, String>,
|
||||||
|
) -> impl Future<Output = Result<CreateMultipartUploadResult, Post3Error>> + Send;
|
||||||
|
|
||||||
|
fn upload_part(
|
||||||
|
&self,
|
||||||
|
bucket: &str,
|
||||||
|
key: &str,
|
||||||
|
upload_id: &str,
|
||||||
|
part_number: i32,
|
||||||
|
body: Bytes,
|
||||||
|
) -> impl Future<Output = Result<UploadPartResult, Post3Error>> + Send;
|
||||||
|
|
||||||
|
fn complete_multipart_upload(
|
||||||
|
&self,
|
||||||
|
bucket: &str,
|
||||||
|
key: &str,
|
||||||
|
upload_id: &str,
|
||||||
|
part_etags: Vec<(i32, String)>,
|
||||||
|
) -> impl Future<Output = Result<CompleteMultipartUploadResult, Post3Error>> + Send;
|
||||||
|
|
||||||
|
fn abort_multipart_upload(
|
||||||
|
&self,
|
||||||
|
bucket: &str,
|
||||||
|
key: &str,
|
||||||
|
upload_id: &str,
|
||||||
|
) -> impl Future<Output = Result<(), Post3Error>> + Send;
|
||||||
|
|
||||||
|
fn list_parts(
|
||||||
|
&self,
|
||||||
|
bucket: &str,
|
||||||
|
key: &str,
|
||||||
|
upload_id: &str,
|
||||||
|
max_parts: Option<i32>,
|
||||||
|
part_number_marker: Option<i32>,
|
||||||
|
) -> impl Future<Output = Result<ListPartsResult, Post3Error>> + Send;
|
||||||
|
|
||||||
|
fn list_multipart_uploads(
|
||||||
|
&self,
|
||||||
|
bucket: &str,
|
||||||
|
prefix: Option<&str>,
|
||||||
|
key_marker: Option<&str>,
|
||||||
|
upload_id_marker: Option<&str>,
|
||||||
|
max_uploads: Option<i32>,
|
||||||
|
) -> impl Future<Output = Result<ListMultipartUploadsResult, Post3Error>> + Send;
|
||||||
|
}
|
||||||
45
crates/post3/src/error.rs
Normal file
45
crates/post3/src/error.rs
Normal file
@@ -0,0 +1,45 @@
|
|||||||
|
#[derive(Debug, thiserror::Error)]
|
||||||
|
pub enum Post3Error {
|
||||||
|
#[error("bucket not found: {0}")]
|
||||||
|
BucketNotFound(String),
|
||||||
|
|
||||||
|
#[error("bucket already exists: {0}")]
|
||||||
|
BucketAlreadyExists(String),
|
||||||
|
|
||||||
|
#[error("object not found: bucket={bucket}, key={key}")]
|
||||||
|
ObjectNotFound { bucket: String, key: String },
|
||||||
|
|
||||||
|
#[error("bucket not empty: {0}")]
|
||||||
|
BucketNotEmpty(String),
|
||||||
|
|
||||||
|
#[error("multipart upload not found: {0}")]
|
||||||
|
UploadNotFound(String),
|
||||||
|
|
||||||
|
#[error("invalid part: upload_id={upload_id}, part_number={part_number}")]
|
||||||
|
InvalidPart { upload_id: String, part_number: i32 },
|
||||||
|
|
||||||
|
#[error("etag mismatch for part {part_number}: expected={expected}, got={got}")]
|
||||||
|
ETagMismatch {
|
||||||
|
part_number: i32,
|
||||||
|
expected: String,
|
||||||
|
got: String,
|
||||||
|
},
|
||||||
|
|
||||||
|
#[error("invalid part order in complete request")]
|
||||||
|
InvalidPartOrder,
|
||||||
|
|
||||||
|
#[error("part {part_number} is too small: size={size}, minimum=5242880")]
|
||||||
|
EntityTooSmall { part_number: i32, size: i64 },
|
||||||
|
|
||||||
|
#[error("io error: {0}")]
|
||||||
|
Io(#[from] std::io::Error),
|
||||||
|
|
||||||
|
#[error("serialization error: {0}")]
|
||||||
|
Serialization(String),
|
||||||
|
|
||||||
|
#[error(transparent)]
|
||||||
|
Database(#[from] sqlx::Error),
|
||||||
|
|
||||||
|
#[error(transparent)]
|
||||||
|
Other(#[from] anyhow::Error),
|
||||||
|
}
|
||||||
2173
crates/post3/src/fs.rs
Normal file
2173
crates/post3/src/fs.rs
Normal file
File diff suppressed because it is too large
Load Diff
11
crates/post3/src/lib.rs
Normal file
11
crates/post3/src/lib.rs
Normal file
@@ -0,0 +1,11 @@
|
|||||||
|
pub mod backend;
|
||||||
|
pub mod error;
|
||||||
|
pub mod fs;
|
||||||
|
pub mod models;
|
||||||
|
pub mod repositories;
|
||||||
|
pub mod store;
|
||||||
|
|
||||||
|
pub use backend::StorageBackend;
|
||||||
|
pub use error::Post3Error;
|
||||||
|
pub use fs::FilesystemBackend;
|
||||||
|
pub use store::{PostgresBackend, Store};
|
||||||
170
crates/post3/src/models.rs
Normal file
170
crates/post3/src/models.rs
Normal file
@@ -0,0 +1,170 @@
|
|||||||
|
use chrono::{DateTime, Utc};
|
||||||
|
use uuid::Uuid;
|
||||||
|
|
||||||
|
#[derive(Debug, Clone, sqlx::FromRow)]
|
||||||
|
pub struct BucketRow {
|
||||||
|
pub id: Uuid,
|
||||||
|
pub name: String,
|
||||||
|
pub created_at: DateTime<Utc>,
|
||||||
|
}
|
||||||
|
|
||||||
|
#[derive(Debug, Clone, sqlx::FromRow)]
|
||||||
|
pub struct ObjectRow {
|
||||||
|
pub id: Uuid,
|
||||||
|
pub bucket_id: Uuid,
|
||||||
|
pub key: String,
|
||||||
|
pub size: i64,
|
||||||
|
pub etag: String,
|
||||||
|
pub content_type: String,
|
||||||
|
pub created_at: DateTime<Utc>,
|
||||||
|
}
|
||||||
|
|
||||||
|
#[derive(Debug, Clone, sqlx::FromRow)]
|
||||||
|
pub struct BlockRow {
|
||||||
|
pub id: Uuid,
|
||||||
|
pub object_id: Uuid,
|
||||||
|
pub block_index: i32,
|
||||||
|
pub data: Vec<u8>,
|
||||||
|
pub block_size: i32,
|
||||||
|
}
|
||||||
|
|
||||||
|
#[derive(Debug, Clone, sqlx::FromRow)]
|
||||||
|
pub struct MetadataEntry {
|
||||||
|
pub id: Uuid,
|
||||||
|
pub object_id: Uuid,
|
||||||
|
pub meta_key: String,
|
||||||
|
pub meta_value: String,
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Backend-neutral bucket summary.
|
||||||
|
#[derive(Debug, Clone)]
|
||||||
|
pub struct BucketInfo {
|
||||||
|
pub name: String,
|
||||||
|
pub created_at: DateTime<Utc>,
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Backend-neutral object metadata (no internal IDs).
|
||||||
|
#[derive(Debug, Clone)]
|
||||||
|
pub struct ObjectMeta {
|
||||||
|
pub key: String,
|
||||||
|
pub size: i64,
|
||||||
|
pub etag: String,
|
||||||
|
pub content_type: String,
|
||||||
|
pub last_modified: DateTime<Utc>,
|
||||||
|
}
|
||||||
|
|
||||||
|
#[derive(Debug, Clone)]
|
||||||
|
pub struct ObjectInfo {
|
||||||
|
pub key: String,
|
||||||
|
pub size: i64,
|
||||||
|
pub etag: String,
|
||||||
|
pub last_modified: DateTime<Utc>,
|
||||||
|
}
|
||||||
|
|
||||||
|
#[derive(Debug, Clone)]
|
||||||
|
pub struct ListObjectsResult {
|
||||||
|
pub objects: Vec<ObjectInfo>,
|
||||||
|
pub is_truncated: bool,
|
||||||
|
pub next_continuation_token: Option<String>,
|
||||||
|
pub prefix: Option<String>,
|
||||||
|
pub delimiter: Option<String>,
|
||||||
|
pub common_prefixes: Vec<String>,
|
||||||
|
pub key_count: usize,
|
||||||
|
}
|
||||||
|
|
||||||
|
#[derive(Debug)]
|
||||||
|
pub struct PutObjectResult {
|
||||||
|
pub etag: String,
|
||||||
|
pub size: i64,
|
||||||
|
}
|
||||||
|
|
||||||
|
#[derive(Debug)]
|
||||||
|
pub struct GetObjectResult {
|
||||||
|
pub metadata: ObjectMeta,
|
||||||
|
pub user_metadata: std::collections::HashMap<String, String>,
|
||||||
|
pub body: bytes::Bytes,
|
||||||
|
}
|
||||||
|
|
||||||
|
#[derive(Debug)]
|
||||||
|
pub struct HeadObjectResult {
|
||||||
|
pub object: ObjectMeta,
|
||||||
|
pub user_metadata: std::collections::HashMap<String, String>,
|
||||||
|
}
|
||||||
|
|
||||||
|
// --- Multipart upload models ---
|
||||||
|
|
||||||
|
#[derive(Debug, Clone, sqlx::FromRow)]
|
||||||
|
pub struct MultipartUploadRow {
|
||||||
|
pub id: Uuid,
|
||||||
|
pub bucket_id: Uuid,
|
||||||
|
pub key: String,
|
||||||
|
pub upload_id: String,
|
||||||
|
pub content_type: String,
|
||||||
|
pub created_at: DateTime<Utc>,
|
||||||
|
}
|
||||||
|
|
||||||
|
#[derive(Debug, Clone, sqlx::FromRow)]
|
||||||
|
pub struct UploadPartRow {
|
||||||
|
pub id: Uuid,
|
||||||
|
pub upload_id: Uuid,
|
||||||
|
pub part_number: i32,
|
||||||
|
pub data: Vec<u8>,
|
||||||
|
pub size: i64,
|
||||||
|
pub etag: String,
|
||||||
|
pub created_at: DateTime<Utc>,
|
||||||
|
}
|
||||||
|
|
||||||
|
#[derive(Debug, Clone, sqlx::FromRow)]
|
||||||
|
pub struct UploadPartInfo {
|
||||||
|
pub part_number: i32,
|
||||||
|
pub size: i64,
|
||||||
|
pub etag: String,
|
||||||
|
pub created_at: DateTime<Utc>,
|
||||||
|
}
|
||||||
|
|
||||||
|
#[derive(Debug)]
|
||||||
|
pub struct CreateMultipartUploadResult {
|
||||||
|
pub bucket: String,
|
||||||
|
pub key: String,
|
||||||
|
pub upload_id: String,
|
||||||
|
}
|
||||||
|
|
||||||
|
#[derive(Debug)]
|
||||||
|
pub struct UploadPartResult {
|
||||||
|
pub etag: String,
|
||||||
|
}
|
||||||
|
|
||||||
|
#[derive(Debug)]
|
||||||
|
pub struct CompleteMultipartUploadResult {
|
||||||
|
pub bucket: String,
|
||||||
|
pub key: String,
|
||||||
|
pub etag: String,
|
||||||
|
pub size: i64,
|
||||||
|
}
|
||||||
|
|
||||||
|
#[derive(Debug)]
|
||||||
|
pub struct ListPartsResult {
|
||||||
|
pub bucket: String,
|
||||||
|
pub key: String,
|
||||||
|
pub upload_id: String,
|
||||||
|
pub parts: Vec<UploadPartInfo>,
|
||||||
|
pub is_truncated: bool,
|
||||||
|
pub next_part_number_marker: Option<i32>,
|
||||||
|
}
|
||||||
|
|
||||||
|
#[derive(Debug)]
|
||||||
|
pub struct MultipartUploadInfo {
|
||||||
|
pub key: String,
|
||||||
|
pub upload_id: String,
|
||||||
|
pub initiated: DateTime<Utc>,
|
||||||
|
}
|
||||||
|
|
||||||
|
#[derive(Debug)]
|
||||||
|
pub struct ListMultipartUploadsResult {
|
||||||
|
pub bucket: String,
|
||||||
|
pub uploads: Vec<MultipartUploadInfo>,
|
||||||
|
pub is_truncated: bool,
|
||||||
|
pub next_key_marker: Option<String>,
|
||||||
|
pub next_upload_id_marker: Option<String>,
|
||||||
|
pub prefix: Option<String>,
|
||||||
|
}
|
||||||
44
crates/post3/src/repositories/blocks.rs
Normal file
44
crates/post3/src/repositories/blocks.rs
Normal file
@@ -0,0 +1,44 @@
|
|||||||
|
use sqlx::{Postgres, Transaction};
|
||||||
|
use uuid::Uuid;
|
||||||
|
|
||||||
|
use crate::error::Post3Error;
|
||||||
|
use crate::models::BlockRow;
|
||||||
|
|
||||||
|
pub struct BlocksRepository;
|
||||||
|
|
||||||
|
impl BlocksRepository {
|
||||||
|
pub async fn insert_in_tx(
|
||||||
|
tx: &mut Transaction<'_, Postgres>,
|
||||||
|
object_id: Uuid,
|
||||||
|
block_index: i32,
|
||||||
|
data: &[u8],
|
||||||
|
) -> Result<(), Post3Error> {
|
||||||
|
let block_size = data.len() as i32;
|
||||||
|
sqlx::query(
|
||||||
|
"INSERT INTO blocks (object_id, block_index, data, block_size) \
|
||||||
|
VALUES ($1, $2, $3, $4)",
|
||||||
|
)
|
||||||
|
.bind(object_id)
|
||||||
|
.bind(block_index)
|
||||||
|
.bind(data)
|
||||||
|
.bind(block_size)
|
||||||
|
.execute(&mut **tx)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn get_all(
|
||||||
|
db: &sqlx::PgPool,
|
||||||
|
object_id: Uuid,
|
||||||
|
) -> Result<Vec<BlockRow>, Post3Error> {
|
||||||
|
let rows = sqlx::query_as::<_, BlockRow>(
|
||||||
|
"SELECT * FROM blocks WHERE object_id = $1 ORDER BY block_index ASC",
|
||||||
|
)
|
||||||
|
.bind(object_id)
|
||||||
|
.fetch_all(db)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
Ok(rows)
|
||||||
|
}
|
||||||
|
}
|
||||||
80
crates/post3/src/repositories/buckets.rs
Normal file
80
crates/post3/src/repositories/buckets.rs
Normal file
@@ -0,0 +1,80 @@
|
|||||||
|
use sqlx::PgPool;
|
||||||
|
use uuid::Uuid;
|
||||||
|
|
||||||
|
use crate::error::Post3Error;
|
||||||
|
use crate::models::BucketRow;
|
||||||
|
|
||||||
|
pub struct BucketsRepository<'a> {
|
||||||
|
db: &'a PgPool,
|
||||||
|
}
|
||||||
|
|
||||||
|
impl<'a> BucketsRepository<'a> {
|
||||||
|
pub fn new(db: &'a PgPool) -> Self {
|
||||||
|
Self { db }
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn create(&self, name: &str) -> Result<BucketRow, Post3Error> {
|
||||||
|
let existing = self.get_by_name(name).await?;
|
||||||
|
if existing.is_some() {
|
||||||
|
return Err(Post3Error::BucketAlreadyExists(name.to_string()));
|
||||||
|
}
|
||||||
|
|
||||||
|
let row = sqlx::query_as::<_, BucketRow>(
|
||||||
|
"INSERT INTO buckets (name) VALUES ($1) RETURNING *",
|
||||||
|
)
|
||||||
|
.bind(name)
|
||||||
|
.fetch_one(self.db)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
Ok(row)
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn get_by_name(&self, name: &str) -> Result<Option<BucketRow>, Post3Error> {
|
||||||
|
let row =
|
||||||
|
sqlx::query_as::<_, BucketRow>("SELECT * FROM buckets WHERE name = $1")
|
||||||
|
.bind(name)
|
||||||
|
.fetch_optional(self.db)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
Ok(row)
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn list(&self) -> Result<Vec<BucketRow>, Post3Error> {
|
||||||
|
let rows = sqlx::query_as::<_, BucketRow>(
|
||||||
|
"SELECT * FROM buckets ORDER BY created_at ASC",
|
||||||
|
)
|
||||||
|
.fetch_all(self.db)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
Ok(rows)
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn delete(&self, name: &str) -> Result<(), Post3Error> {
|
||||||
|
let bucket = self
|
||||||
|
.get_by_name(name)
|
||||||
|
.await?
|
||||||
|
.ok_or_else(|| Post3Error::BucketNotFound(name.to_string()))?;
|
||||||
|
|
||||||
|
if !self.is_empty(bucket.id).await? {
|
||||||
|
return Err(Post3Error::BucketNotEmpty(name.to_string()));
|
||||||
|
}
|
||||||
|
|
||||||
|
sqlx::query("DELETE FROM buckets WHERE id = $1")
|
||||||
|
.bind(bucket.id)
|
||||||
|
.execute(self.db)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn is_empty(&self, bucket_id: Uuid) -> Result<bool, Post3Error> {
|
||||||
|
let count: (i64,) = sqlx::query_as(
|
||||||
|
"SELECT COUNT(*) FROM objects WHERE bucket_id = $1",
|
||||||
|
)
|
||||||
|
.bind(bucket_id)
|
||||||
|
.fetch_one(self.db)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
Ok(count.0 == 0)
|
||||||
|
}
|
||||||
|
}
|
||||||
49
crates/post3/src/repositories/metadata.rs
Normal file
49
crates/post3/src/repositories/metadata.rs
Normal file
@@ -0,0 +1,49 @@
|
|||||||
|
use sqlx::{PgPool, Postgres, Transaction};
|
||||||
|
use std::collections::HashMap;
|
||||||
|
use uuid::Uuid;
|
||||||
|
|
||||||
|
use crate::error::Post3Error;
|
||||||
|
use crate::models::MetadataEntry;
|
||||||
|
|
||||||
|
pub struct MetadataRepository;
|
||||||
|
|
||||||
|
impl MetadataRepository {
|
||||||
|
pub async fn insert_batch_in_tx(
|
||||||
|
tx: &mut Transaction<'_, Postgres>,
|
||||||
|
object_id: Uuid,
|
||||||
|
metadata: &HashMap<String, String>,
|
||||||
|
) -> Result<(), Post3Error> {
|
||||||
|
for (key, value) in metadata {
|
||||||
|
sqlx::query(
|
||||||
|
"INSERT INTO object_metadata (object_id, meta_key, meta_value) \
|
||||||
|
VALUES ($1, $2, $3)",
|
||||||
|
)
|
||||||
|
.bind(object_id)
|
||||||
|
.bind(key)
|
||||||
|
.bind(value)
|
||||||
|
.execute(&mut **tx)
|
||||||
|
.await?;
|
||||||
|
}
|
||||||
|
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn get_all(
|
||||||
|
db: &PgPool,
|
||||||
|
object_id: Uuid,
|
||||||
|
) -> Result<HashMap<String, String>, Post3Error> {
|
||||||
|
let rows = sqlx::query_as::<_, MetadataEntry>(
|
||||||
|
"SELECT * FROM object_metadata WHERE object_id = $1",
|
||||||
|
)
|
||||||
|
.bind(object_id)
|
||||||
|
.fetch_all(db)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
let map = rows
|
||||||
|
.into_iter()
|
||||||
|
.map(|e| (e.meta_key, e.meta_value))
|
||||||
|
.collect();
|
||||||
|
|
||||||
|
Ok(map)
|
||||||
|
}
|
||||||
|
}
|
||||||
7
crates/post3/src/repositories/mod.rs
Normal file
7
crates/post3/src/repositories/mod.rs
Normal file
@@ -0,0 +1,7 @@
|
|||||||
|
pub mod blocks;
|
||||||
|
pub mod buckets;
|
||||||
|
pub mod metadata;
|
||||||
|
pub mod multipart_metadata;
|
||||||
|
pub mod multipart_uploads;
|
||||||
|
pub mod objects;
|
||||||
|
pub mod upload_parts;
|
||||||
50
crates/post3/src/repositories/multipart_metadata.rs
Normal file
50
crates/post3/src/repositories/multipart_metadata.rs
Normal file
@@ -0,0 +1,50 @@
|
|||||||
|
use sqlx::{PgPool, Postgres, Transaction};
|
||||||
|
use std::collections::HashMap;
|
||||||
|
use uuid::Uuid;
|
||||||
|
|
||||||
|
use crate::error::Post3Error;
|
||||||
|
use crate::models::MetadataEntry;
|
||||||
|
|
||||||
|
pub struct MultipartMetadataRepository;
|
||||||
|
|
||||||
|
impl MultipartMetadataRepository {
|
||||||
|
pub async fn insert_batch_in_tx(
|
||||||
|
tx: &mut Transaction<'_, Postgres>,
|
||||||
|
upload_id: Uuid,
|
||||||
|
metadata: &HashMap<String, String>,
|
||||||
|
) -> Result<(), Post3Error> {
|
||||||
|
for (key, value) in metadata {
|
||||||
|
sqlx::query(
|
||||||
|
"INSERT INTO multipart_upload_metadata (upload_id, meta_key, meta_value) \
|
||||||
|
VALUES ($1, $2, $3)",
|
||||||
|
)
|
||||||
|
.bind(upload_id)
|
||||||
|
.bind(key)
|
||||||
|
.bind(value)
|
||||||
|
.execute(&mut **tx)
|
||||||
|
.await?;
|
||||||
|
}
|
||||||
|
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn get_all(
|
||||||
|
db: &PgPool,
|
||||||
|
upload_id: Uuid,
|
||||||
|
) -> Result<HashMap<String, String>, Post3Error> {
|
||||||
|
let rows = sqlx::query_as::<_, MetadataEntry>(
|
||||||
|
"SELECT id, upload_id AS object_id, meta_key, meta_value \
|
||||||
|
FROM multipart_upload_metadata WHERE upload_id = $1",
|
||||||
|
)
|
||||||
|
.bind(upload_id)
|
||||||
|
.fetch_all(db)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
let map = rows
|
||||||
|
.into_iter()
|
||||||
|
.map(|e| (e.meta_key, e.meta_value))
|
||||||
|
.collect();
|
||||||
|
|
||||||
|
Ok(map)
|
||||||
|
}
|
||||||
|
}
|
||||||
163
crates/post3/src/repositories/multipart_uploads.rs
Normal file
163
crates/post3/src/repositories/multipart_uploads.rs
Normal file
@@ -0,0 +1,163 @@
|
|||||||
|
use sqlx::{PgPool, Postgres, Transaction};
|
||||||
|
use uuid::Uuid;
|
||||||
|
|
||||||
|
use crate::error::Post3Error;
|
||||||
|
use crate::models::MultipartUploadRow;
|
||||||
|
|
||||||
|
pub struct MultipartUploadsRepository;
|
||||||
|
|
||||||
|
impl MultipartUploadsRepository {
|
||||||
|
pub async fn create_in_tx(
|
||||||
|
tx: &mut Transaction<'_, Postgres>,
|
||||||
|
bucket_id: Uuid,
|
||||||
|
key: &str,
|
||||||
|
upload_id: &str,
|
||||||
|
content_type: &str,
|
||||||
|
) -> Result<MultipartUploadRow, Post3Error> {
|
||||||
|
let row = sqlx::query_as::<_, MultipartUploadRow>(
|
||||||
|
"INSERT INTO multipart_uploads (bucket_id, key, upload_id, content_type) \
|
||||||
|
VALUES ($1, $2, $3, $4) RETURNING *",
|
||||||
|
)
|
||||||
|
.bind(bucket_id)
|
||||||
|
.bind(key)
|
||||||
|
.bind(upload_id)
|
||||||
|
.bind(content_type)
|
||||||
|
.fetch_one(&mut **tx)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
Ok(row)
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn get_by_upload_id(
|
||||||
|
db: &PgPool,
|
||||||
|
upload_id: &str,
|
||||||
|
) -> Result<Option<MultipartUploadRow>, Post3Error> {
|
||||||
|
let row = sqlx::query_as::<_, MultipartUploadRow>(
|
||||||
|
"SELECT * FROM multipart_uploads WHERE upload_id = $1",
|
||||||
|
)
|
||||||
|
.bind(upload_id)
|
||||||
|
.fetch_optional(db)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
Ok(row)
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn delete_in_tx(
|
||||||
|
tx: &mut Transaction<'_, Postgres>,
|
||||||
|
id: Uuid,
|
||||||
|
) -> Result<(), Post3Error> {
|
||||||
|
sqlx::query("DELETE FROM multipart_uploads WHERE id = $1")
|
||||||
|
.bind(id)
|
||||||
|
.execute(&mut **tx)
|
||||||
|
.await?;
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn delete_by_upload_id(
|
||||||
|
db: &PgPool,
|
||||||
|
upload_id: &str,
|
||||||
|
) -> Result<bool, Post3Error> {
|
||||||
|
let result = sqlx::query("DELETE FROM multipart_uploads WHERE upload_id = $1")
|
||||||
|
.bind(upload_id)
|
||||||
|
.execute(db)
|
||||||
|
.await?;
|
||||||
|
Ok(result.rows_affected() > 0)
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn list(
|
||||||
|
db: &PgPool,
|
||||||
|
bucket_id: Uuid,
|
||||||
|
prefix: Option<&str>,
|
||||||
|
key_marker: Option<&str>,
|
||||||
|
upload_id_marker: Option<&str>,
|
||||||
|
max_uploads: i64,
|
||||||
|
) -> Result<Vec<MultipartUploadRow>, Post3Error> {
|
||||||
|
let rows = match (prefix, key_marker) {
|
||||||
|
(Some(pfx), Some(marker)) => {
|
||||||
|
let pattern = format!("{pfx}%");
|
||||||
|
// When key_marker is provided, return uploads with key > marker,
|
||||||
|
// or same key but upload_id > upload_id_marker
|
||||||
|
if let Some(uid_marker) = upload_id_marker {
|
||||||
|
sqlx::query_as::<_, MultipartUploadRow>(
|
||||||
|
"SELECT * FROM multipart_uploads \
|
||||||
|
WHERE bucket_id = $1 AND key LIKE $2 \
|
||||||
|
AND (key > $3 OR (key = $3 AND upload_id > $4)) \
|
||||||
|
ORDER BY key ASC, upload_id ASC LIMIT $5",
|
||||||
|
)
|
||||||
|
.bind(bucket_id)
|
||||||
|
.bind(pattern)
|
||||||
|
.bind(marker)
|
||||||
|
.bind(uid_marker)
|
||||||
|
.bind(max_uploads)
|
||||||
|
.fetch_all(db)
|
||||||
|
.await?
|
||||||
|
} else {
|
||||||
|
sqlx::query_as::<_, MultipartUploadRow>(
|
||||||
|
"SELECT * FROM multipart_uploads \
|
||||||
|
WHERE bucket_id = $1 AND key LIKE $2 AND key > $3 \
|
||||||
|
ORDER BY key ASC, upload_id ASC LIMIT $4",
|
||||||
|
)
|
||||||
|
.bind(bucket_id)
|
||||||
|
.bind(pattern)
|
||||||
|
.bind(marker)
|
||||||
|
.bind(max_uploads)
|
||||||
|
.fetch_all(db)
|
||||||
|
.await?
|
||||||
|
}
|
||||||
|
}
|
||||||
|
(Some(pfx), None) => {
|
||||||
|
let pattern = format!("{pfx}%");
|
||||||
|
sqlx::query_as::<_, MultipartUploadRow>(
|
||||||
|
"SELECT * FROM multipart_uploads \
|
||||||
|
WHERE bucket_id = $1 AND key LIKE $2 \
|
||||||
|
ORDER BY key ASC, upload_id ASC LIMIT $3",
|
||||||
|
)
|
||||||
|
.bind(bucket_id)
|
||||||
|
.bind(pattern)
|
||||||
|
.bind(max_uploads)
|
||||||
|
.fetch_all(db)
|
||||||
|
.await?
|
||||||
|
}
|
||||||
|
(None, Some(marker)) => {
|
||||||
|
if let Some(uid_marker) = upload_id_marker {
|
||||||
|
sqlx::query_as::<_, MultipartUploadRow>(
|
||||||
|
"SELECT * FROM multipart_uploads \
|
||||||
|
WHERE bucket_id = $1 \
|
||||||
|
AND (key > $2 OR (key = $2 AND upload_id > $3)) \
|
||||||
|
ORDER BY key ASC, upload_id ASC LIMIT $4",
|
||||||
|
)
|
||||||
|
.bind(bucket_id)
|
||||||
|
.bind(marker)
|
||||||
|
.bind(uid_marker)
|
||||||
|
.bind(max_uploads)
|
||||||
|
.fetch_all(db)
|
||||||
|
.await?
|
||||||
|
} else {
|
||||||
|
sqlx::query_as::<_, MultipartUploadRow>(
|
||||||
|
"SELECT * FROM multipart_uploads \
|
||||||
|
WHERE bucket_id = $1 AND key > $2 \
|
||||||
|
ORDER BY key ASC, upload_id ASC LIMIT $3",
|
||||||
|
)
|
||||||
|
.bind(bucket_id)
|
||||||
|
.bind(marker)
|
||||||
|
.bind(max_uploads)
|
||||||
|
.fetch_all(db)
|
||||||
|
.await?
|
||||||
|
}
|
||||||
|
}
|
||||||
|
(None, None) => {
|
||||||
|
sqlx::query_as::<_, MultipartUploadRow>(
|
||||||
|
"SELECT * FROM multipart_uploads \
|
||||||
|
WHERE bucket_id = $1 \
|
||||||
|
ORDER BY key ASC, upload_id ASC LIMIT $2",
|
||||||
|
)
|
||||||
|
.bind(bucket_id)
|
||||||
|
.bind(max_uploads)
|
||||||
|
.fetch_all(db)
|
||||||
|
.await?
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
Ok(rows)
|
||||||
|
}
|
||||||
|
}
|
||||||
139
crates/post3/src/repositories/objects.rs
Normal file
139
crates/post3/src/repositories/objects.rs
Normal file
@@ -0,0 +1,139 @@
|
|||||||
|
use sqlx::{PgPool, Postgres, Transaction};
|
||||||
|
use uuid::Uuid;
|
||||||
|
|
||||||
|
use crate::error::Post3Error;
|
||||||
|
use crate::models::ObjectRow;
|
||||||
|
|
||||||
|
pub struct ObjectsRepository<'a> {
|
||||||
|
db: &'a PgPool,
|
||||||
|
}
|
||||||
|
|
||||||
|
impl<'a> ObjectsRepository<'a> {
|
||||||
|
pub fn new(db: &'a PgPool) -> Self {
|
||||||
|
Self { db }
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn insert_in_tx(
|
||||||
|
tx: &mut Transaction<'_, Postgres>,
|
||||||
|
bucket_id: Uuid,
|
||||||
|
key: &str,
|
||||||
|
size: i64,
|
||||||
|
etag: &str,
|
||||||
|
content_type: &str,
|
||||||
|
) -> Result<ObjectRow, Post3Error> {
|
||||||
|
// Delete existing (cascades to blocks + metadata)
|
||||||
|
sqlx::query("DELETE FROM objects WHERE bucket_id = $1 AND key = $2")
|
||||||
|
.bind(bucket_id)
|
||||||
|
.bind(key)
|
||||||
|
.execute(&mut **tx)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
let row = sqlx::query_as::<_, ObjectRow>(
|
||||||
|
"INSERT INTO objects (bucket_id, key, size, etag, content_type) \
|
||||||
|
VALUES ($1, $2, $3, $4, $5) RETURNING *",
|
||||||
|
)
|
||||||
|
.bind(bucket_id)
|
||||||
|
.bind(key)
|
||||||
|
.bind(size)
|
||||||
|
.bind(etag)
|
||||||
|
.bind(content_type)
|
||||||
|
.fetch_one(&mut **tx)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
Ok(row)
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn get(
|
||||||
|
&self,
|
||||||
|
bucket_id: Uuid,
|
||||||
|
key: &str,
|
||||||
|
) -> Result<Option<ObjectRow>, Post3Error> {
|
||||||
|
let row = sqlx::query_as::<_, ObjectRow>(
|
||||||
|
"SELECT * FROM objects WHERE bucket_id = $1 AND key = $2",
|
||||||
|
)
|
||||||
|
.bind(bucket_id)
|
||||||
|
.bind(key)
|
||||||
|
.fetch_optional(self.db)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
Ok(row)
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn delete(
|
||||||
|
&self,
|
||||||
|
bucket_id: Uuid,
|
||||||
|
key: &str,
|
||||||
|
) -> Result<bool, Post3Error> {
|
||||||
|
let result =
|
||||||
|
sqlx::query("DELETE FROM objects WHERE bucket_id = $1 AND key = $2")
|
||||||
|
.bind(bucket_id)
|
||||||
|
.bind(key)
|
||||||
|
.execute(self.db)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
Ok(result.rows_affected() > 0)
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn list(
|
||||||
|
&self,
|
||||||
|
bucket_id: Uuid,
|
||||||
|
prefix: Option<&str>,
|
||||||
|
start_after: Option<&str>,
|
||||||
|
max_keys: i64,
|
||||||
|
) -> Result<Vec<ObjectRow>, Post3Error> {
|
||||||
|
let rows = match (prefix, start_after) {
|
||||||
|
(Some(pfx), Some(after)) => {
|
||||||
|
let pattern = format!("{pfx}%");
|
||||||
|
sqlx::query_as::<_, ObjectRow>(
|
||||||
|
"SELECT * FROM objects \
|
||||||
|
WHERE bucket_id = $1 AND key LIKE $2 AND key > $3 \
|
||||||
|
ORDER BY key ASC LIMIT $4",
|
||||||
|
)
|
||||||
|
.bind(bucket_id)
|
||||||
|
.bind(pattern)
|
||||||
|
.bind(after)
|
||||||
|
.bind(max_keys)
|
||||||
|
.fetch_all(self.db)
|
||||||
|
.await?
|
||||||
|
}
|
||||||
|
(Some(pfx), None) => {
|
||||||
|
let pattern = format!("{pfx}%");
|
||||||
|
sqlx::query_as::<_, ObjectRow>(
|
||||||
|
"SELECT * FROM objects \
|
||||||
|
WHERE bucket_id = $1 AND key LIKE $2 \
|
||||||
|
ORDER BY key ASC LIMIT $3",
|
||||||
|
)
|
||||||
|
.bind(bucket_id)
|
||||||
|
.bind(pattern)
|
||||||
|
.bind(max_keys)
|
||||||
|
.fetch_all(self.db)
|
||||||
|
.await?
|
||||||
|
}
|
||||||
|
(None, Some(after)) => {
|
||||||
|
sqlx::query_as::<_, ObjectRow>(
|
||||||
|
"SELECT * FROM objects \
|
||||||
|
WHERE bucket_id = $1 AND key > $2 \
|
||||||
|
ORDER BY key ASC LIMIT $3",
|
||||||
|
)
|
||||||
|
.bind(bucket_id)
|
||||||
|
.bind(after)
|
||||||
|
.bind(max_keys)
|
||||||
|
.fetch_all(self.db)
|
||||||
|
.await?
|
||||||
|
}
|
||||||
|
(None, None) => {
|
||||||
|
sqlx::query_as::<_, ObjectRow>(
|
||||||
|
"SELECT * FROM objects \
|
||||||
|
WHERE bucket_id = $1 \
|
||||||
|
ORDER BY key ASC LIMIT $2",
|
||||||
|
)
|
||||||
|
.bind(bucket_id)
|
||||||
|
.bind(max_keys)
|
||||||
|
.fetch_all(self.db)
|
||||||
|
.await?
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
Ok(rows)
|
||||||
|
}
|
||||||
|
}
|
||||||
88
crates/post3/src/repositories/upload_parts.rs
Normal file
88
crates/post3/src/repositories/upload_parts.rs
Normal file
@@ -0,0 +1,88 @@
|
|||||||
|
use sqlx::PgPool;
|
||||||
|
use uuid::Uuid;
|
||||||
|
|
||||||
|
use crate::error::Post3Error;
|
||||||
|
use crate::models::{UploadPartInfo, UploadPartRow};
|
||||||
|
|
||||||
|
pub struct UploadPartsRepository;
|
||||||
|
|
||||||
|
impl UploadPartsRepository {
|
||||||
|
pub async fn upsert(
|
||||||
|
db: &PgPool,
|
||||||
|
upload_id: Uuid,
|
||||||
|
part_number: i32,
|
||||||
|
data: &[u8],
|
||||||
|
size: i64,
|
||||||
|
etag: &str,
|
||||||
|
) -> Result<(), Post3Error> {
|
||||||
|
sqlx::query(
|
||||||
|
"INSERT INTO upload_parts (upload_id, part_number, data, size, etag) \
|
||||||
|
VALUES ($1, $2, $3, $4, $5) \
|
||||||
|
ON CONFLICT (upload_id, part_number) DO UPDATE \
|
||||||
|
SET data = EXCLUDED.data, size = EXCLUDED.size, \
|
||||||
|
etag = EXCLUDED.etag, created_at = NOW()",
|
||||||
|
)
|
||||||
|
.bind(upload_id)
|
||||||
|
.bind(part_number)
|
||||||
|
.bind(data)
|
||||||
|
.bind(size)
|
||||||
|
.bind(etag)
|
||||||
|
.execute(db)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn list_info(
|
||||||
|
db: &PgPool,
|
||||||
|
upload_id: Uuid,
|
||||||
|
part_number_marker: Option<i32>,
|
||||||
|
max_parts: i64,
|
||||||
|
) -> Result<Vec<UploadPartInfo>, Post3Error> {
|
||||||
|
let rows = if let Some(marker) = part_number_marker {
|
||||||
|
sqlx::query_as::<_, UploadPartInfo>(
|
||||||
|
"SELECT part_number, size, etag, created_at \
|
||||||
|
FROM upload_parts \
|
||||||
|
WHERE upload_id = $1 AND part_number > $2 \
|
||||||
|
ORDER BY part_number ASC LIMIT $3",
|
||||||
|
)
|
||||||
|
.bind(upload_id)
|
||||||
|
.bind(marker)
|
||||||
|
.bind(max_parts)
|
||||||
|
.fetch_all(db)
|
||||||
|
.await?
|
||||||
|
} else {
|
||||||
|
sqlx::query_as::<_, UploadPartInfo>(
|
||||||
|
"SELECT part_number, size, etag, created_at \
|
||||||
|
FROM upload_parts \
|
||||||
|
WHERE upload_id = $1 \
|
||||||
|
ORDER BY part_number ASC LIMIT $2",
|
||||||
|
)
|
||||||
|
.bind(upload_id)
|
||||||
|
.bind(max_parts)
|
||||||
|
.fetch_all(db)
|
||||||
|
.await?
|
||||||
|
};
|
||||||
|
|
||||||
|
Ok(rows)
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn get_ordered_by_numbers(
|
||||||
|
db: &PgPool,
|
||||||
|
upload_id: Uuid,
|
||||||
|
part_numbers: &[i32],
|
||||||
|
) -> Result<Vec<UploadPartRow>, Post3Error> {
|
||||||
|
// Fetch the requested parts in order
|
||||||
|
let rows = sqlx::query_as::<_, UploadPartRow>(
|
||||||
|
"SELECT * FROM upload_parts \
|
||||||
|
WHERE upload_id = $1 AND part_number = ANY($2) \
|
||||||
|
ORDER BY part_number ASC",
|
||||||
|
)
|
||||||
|
.bind(upload_id)
|
||||||
|
.bind(part_numbers)
|
||||||
|
.fetch_all(db)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
Ok(rows)
|
||||||
|
}
|
||||||
|
}
|
||||||
705
crates/post3/src/store.rs
Normal file
705
crates/post3/src/store.rs
Normal file
@@ -0,0 +1,705 @@
|
|||||||
|
use std::collections::HashMap;
|
||||||
|
|
||||||
|
use bytes::Bytes;
|
||||||
|
use md5::{Digest, Md5};
|
||||||
|
use sqlx::PgPool;
|
||||||
|
|
||||||
|
use crate::backend::StorageBackend;
|
||||||
|
use crate::error::Post3Error;
|
||||||
|
use crate::models::{
|
||||||
|
BucketInfo, BucketRow, CompleteMultipartUploadResult, CreateMultipartUploadResult,
|
||||||
|
GetObjectResult, HeadObjectResult, ListMultipartUploadsResult, ListObjectsResult,
|
||||||
|
ListPartsResult, MultipartUploadInfo, MultipartUploadRow, ObjectInfo, ObjectMeta,
|
||||||
|
PutObjectResult, UploadPartResult,
|
||||||
|
};
|
||||||
|
use crate::repositories::blocks::BlocksRepository;
|
||||||
|
use crate::repositories::buckets::BucketsRepository;
|
||||||
|
use crate::repositories::metadata::MetadataRepository;
|
||||||
|
use crate::repositories::multipart_metadata::MultipartMetadataRepository;
|
||||||
|
use crate::repositories::multipart_uploads::MultipartUploadsRepository;
|
||||||
|
use crate::repositories::objects::ObjectsRepository;
|
||||||
|
use crate::repositories::upload_parts::UploadPartsRepository;
|
||||||
|
|
||||||
|
pub const DEFAULT_BLOCK_SIZE: usize = 1024 * 1024; // 1 MiB
|
||||||
|
|
||||||
|
/// PostgreSQL-backed storage. Also exported as `PostgresBackend`.
|
||||||
|
#[derive(Clone)]
|
||||||
|
pub struct Store {
|
||||||
|
db: PgPool,
|
||||||
|
block_size: usize,
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Alias for `Store` — the PostgreSQL-backed storage backend.
|
||||||
|
pub type PostgresBackend = Store;
|
||||||
|
|
||||||
|
impl Store {
|
||||||
|
pub fn new(db: PgPool) -> Self {
|
||||||
|
Self {
|
||||||
|
db,
|
||||||
|
block_size: DEFAULT_BLOCK_SIZE,
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn with_block_size(mut self, block_size: usize) -> Self {
|
||||||
|
self.block_size = block_size;
|
||||||
|
self
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn pool(&self) -> &PgPool {
|
||||||
|
&self.db
|
||||||
|
}
|
||||||
|
|
||||||
|
// --- Private helpers ---
|
||||||
|
|
||||||
|
async fn require_bucket(&self, name: &str) -> Result<BucketRow, Post3Error> {
|
||||||
|
BucketsRepository::new(&self.db)
|
||||||
|
.get_by_name(name)
|
||||||
|
.await?
|
||||||
|
.ok_or_else(|| Post3Error::BucketNotFound(name.to_string()))
|
||||||
|
}
|
||||||
|
|
||||||
|
async fn require_upload(
|
||||||
|
&self,
|
||||||
|
upload_id: &str,
|
||||||
|
expected_bucket_id: uuid::Uuid,
|
||||||
|
expected_key: &str,
|
||||||
|
) -> Result<MultipartUploadRow, Post3Error> {
|
||||||
|
let upload = MultipartUploadsRepository::get_by_upload_id(&self.db, upload_id)
|
||||||
|
.await?
|
||||||
|
.ok_or_else(|| Post3Error::UploadNotFound(upload_id.to_string()))?;
|
||||||
|
|
||||||
|
if upload.bucket_id != expected_bucket_id || upload.key != expected_key {
|
||||||
|
return Err(Post3Error::UploadNotFound(upload_id.to_string()));
|
||||||
|
}
|
||||||
|
|
||||||
|
Ok(upload)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
impl StorageBackend for Store {
|
||||||
|
// --- Bucket operations ---
|
||||||
|
|
||||||
|
async fn create_bucket(&self, name: &str) -> Result<BucketInfo, Post3Error> {
|
||||||
|
let row = BucketsRepository::new(&self.db).create(name).await?;
|
||||||
|
Ok(BucketInfo {
|
||||||
|
name: row.name,
|
||||||
|
created_at: row.created_at,
|
||||||
|
})
|
||||||
|
}
|
||||||
|
|
||||||
|
async fn head_bucket(&self, name: &str) -> Result<Option<BucketInfo>, Post3Error> {
|
||||||
|
Ok(BucketsRepository::new(&self.db)
|
||||||
|
.get_by_name(name)
|
||||||
|
.await?
|
||||||
|
.map(|row| BucketInfo {
|
||||||
|
name: row.name,
|
||||||
|
created_at: row.created_at,
|
||||||
|
}))
|
||||||
|
}
|
||||||
|
|
||||||
|
async fn delete_bucket(&self, name: &str) -> Result<(), Post3Error> {
|
||||||
|
BucketsRepository::new(&self.db).delete(name).await
|
||||||
|
}
|
||||||
|
|
||||||
|
async fn list_buckets(&self) -> Result<Vec<BucketInfo>, Post3Error> {
|
||||||
|
Ok(BucketsRepository::new(&self.db)
|
||||||
|
.list()
|
||||||
|
.await?
|
||||||
|
.into_iter()
|
||||||
|
.map(|row| BucketInfo {
|
||||||
|
name: row.name,
|
||||||
|
created_at: row.created_at,
|
||||||
|
})
|
||||||
|
.collect())
|
||||||
|
}
|
||||||
|
|
||||||
|
// --- Object operations ---
|
||||||
|
|
||||||
|
async fn put_object(
|
||||||
|
&self,
|
||||||
|
bucket: &str,
|
||||||
|
key: &str,
|
||||||
|
content_type: Option<&str>,
|
||||||
|
metadata: HashMap<String, String>,
|
||||||
|
body: Bytes,
|
||||||
|
) -> Result<PutObjectResult, Post3Error> {
|
||||||
|
let bucket_row = self.require_bucket(bucket).await?;
|
||||||
|
let content_type = content_type.unwrap_or("application/octet-stream");
|
||||||
|
|
||||||
|
let mut hasher = Md5::new();
|
||||||
|
hasher.update(&body);
|
||||||
|
let etag = format!("\"{}\"", hex::encode(hasher.finalize()));
|
||||||
|
let size = body.len() as i64;
|
||||||
|
|
||||||
|
let mut tx = self.db.begin().await?;
|
||||||
|
|
||||||
|
let object_row = ObjectsRepository::insert_in_tx(
|
||||||
|
&mut tx,
|
||||||
|
bucket_row.id,
|
||||||
|
key,
|
||||||
|
size,
|
||||||
|
&etag,
|
||||||
|
content_type,
|
||||||
|
)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
for (chunk_index, chunk) in body.chunks(self.block_size).enumerate() {
|
||||||
|
BlocksRepository::insert_in_tx(
|
||||||
|
&mut tx,
|
||||||
|
object_row.id,
|
||||||
|
chunk_index as i32,
|
||||||
|
chunk,
|
||||||
|
)
|
||||||
|
.await?;
|
||||||
|
}
|
||||||
|
|
||||||
|
if !metadata.is_empty() {
|
||||||
|
MetadataRepository::insert_batch_in_tx(
|
||||||
|
&mut tx,
|
||||||
|
object_row.id,
|
||||||
|
&metadata,
|
||||||
|
)
|
||||||
|
.await?;
|
||||||
|
}
|
||||||
|
|
||||||
|
tx.commit().await?;
|
||||||
|
|
||||||
|
Ok(PutObjectResult { etag, size })
|
||||||
|
}
|
||||||
|
|
||||||
|
async fn get_object(
|
||||||
|
&self,
|
||||||
|
bucket: &str,
|
||||||
|
key: &str,
|
||||||
|
) -> Result<GetObjectResult, Post3Error> {
|
||||||
|
let bucket_row = self.require_bucket(bucket).await?;
|
||||||
|
|
||||||
|
let object = ObjectsRepository::new(&self.db)
|
||||||
|
.get(bucket_row.id, key)
|
||||||
|
.await?
|
||||||
|
.ok_or_else(|| Post3Error::ObjectNotFound {
|
||||||
|
bucket: bucket.to_string(),
|
||||||
|
key: key.to_string(),
|
||||||
|
})?;
|
||||||
|
|
||||||
|
let blocks = BlocksRepository::get_all(&self.db, object.id).await?;
|
||||||
|
|
||||||
|
let mut body = Vec::with_capacity(object.size as usize);
|
||||||
|
for block in blocks {
|
||||||
|
body.extend_from_slice(&block.data);
|
||||||
|
}
|
||||||
|
|
||||||
|
let user_metadata =
|
||||||
|
MetadataRepository::get_all(&self.db, object.id).await?;
|
||||||
|
|
||||||
|
Ok(GetObjectResult {
|
||||||
|
metadata: ObjectMeta {
|
||||||
|
key: object.key,
|
||||||
|
size: object.size,
|
||||||
|
etag: object.etag,
|
||||||
|
content_type: object.content_type,
|
||||||
|
last_modified: object.created_at,
|
||||||
|
},
|
||||||
|
user_metadata,
|
||||||
|
body: Bytes::from(body),
|
||||||
|
})
|
||||||
|
}
|
||||||
|
|
||||||
|
async fn head_object(
|
||||||
|
&self,
|
||||||
|
bucket: &str,
|
||||||
|
key: &str,
|
||||||
|
) -> Result<Option<HeadObjectResult>, Post3Error> {
|
||||||
|
let bucket_row = self.require_bucket(bucket).await?;
|
||||||
|
|
||||||
|
let object = ObjectsRepository::new(&self.db)
|
||||||
|
.get(bucket_row.id, key)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
match object {
|
||||||
|
Some(obj) => {
|
||||||
|
let user_metadata =
|
||||||
|
MetadataRepository::get_all(&self.db, obj.id).await?;
|
||||||
|
Ok(Some(HeadObjectResult {
|
||||||
|
object: ObjectMeta {
|
||||||
|
key: obj.key,
|
||||||
|
size: obj.size,
|
||||||
|
etag: obj.etag,
|
||||||
|
content_type: obj.content_type,
|
||||||
|
last_modified: obj.created_at,
|
||||||
|
},
|
||||||
|
user_metadata,
|
||||||
|
}))
|
||||||
|
}
|
||||||
|
None => Ok(None),
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async fn delete_object(
|
||||||
|
&self,
|
||||||
|
bucket: &str,
|
||||||
|
key: &str,
|
||||||
|
) -> Result<(), Post3Error> {
|
||||||
|
let bucket_row = self.require_bucket(bucket).await?;
|
||||||
|
ObjectsRepository::new(&self.db)
|
||||||
|
.delete(bucket_row.id, key)
|
||||||
|
.await?;
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
|
|
||||||
|
async fn list_objects_v2(
|
||||||
|
&self,
|
||||||
|
bucket: &str,
|
||||||
|
prefix: Option<&str>,
|
||||||
|
continuation_token: Option<&str>,
|
||||||
|
max_keys: Option<i64>,
|
||||||
|
delimiter: Option<&str>,
|
||||||
|
) -> Result<ListObjectsResult, Post3Error> {
|
||||||
|
let bucket_row = self.require_bucket(bucket).await?;
|
||||||
|
let max_keys = max_keys.unwrap_or(1000);
|
||||||
|
|
||||||
|
// MaxKeys=0 is valid: return empty result
|
||||||
|
if max_keys == 0 {
|
||||||
|
return Ok(ListObjectsResult {
|
||||||
|
objects: Vec::new(),
|
||||||
|
is_truncated: false,
|
||||||
|
next_continuation_token: None,
|
||||||
|
prefix: prefix.map(|s| s.to_string()),
|
||||||
|
delimiter: delimiter.map(|s| s.to_string()),
|
||||||
|
common_prefixes: Vec::new(),
|
||||||
|
key_count: 0,
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
// Fetch a generous batch for delimiter grouping (need enough to fill max_keys
|
||||||
|
// after rolling up common prefixes). For non-delimiter case, fetch max_keys+1.
|
||||||
|
let fetch_limit = if delimiter.is_some() {
|
||||||
|
// Fetch more to account for prefix rollups — worst case all keys share prefixes
|
||||||
|
(max_keys + 1) * 10
|
||||||
|
} else {
|
||||||
|
max_keys + 1
|
||||||
|
};
|
||||||
|
let rows = ObjectsRepository::new(&self.db)
|
||||||
|
.list(bucket_row.id, prefix, continuation_token, fetch_limit)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
let all_objects: Vec<ObjectInfo> = rows
|
||||||
|
.into_iter()
|
||||||
|
.map(|o| ObjectInfo {
|
||||||
|
key: o.key,
|
||||||
|
size: o.size,
|
||||||
|
etag: o.etag,
|
||||||
|
last_modified: o.created_at,
|
||||||
|
})
|
||||||
|
.collect();
|
||||||
|
|
||||||
|
let prefix_str = prefix.unwrap_or("");
|
||||||
|
if let Some(delim) = delimiter {
|
||||||
|
// Separate into direct objects and rolled-up common prefixes
|
||||||
|
let mut seen_prefixes = std::collections::BTreeSet::new();
|
||||||
|
let mut direct_objects = Vec::new();
|
||||||
|
for obj in &all_objects {
|
||||||
|
let after_prefix = &obj.key[prefix_str.len()..];
|
||||||
|
if let Some(pos) = after_prefix.find(delim) {
|
||||||
|
let cp = format!("{}{}", prefix_str, &after_prefix[..pos + delim.len()]);
|
||||||
|
seen_prefixes.insert(cp);
|
||||||
|
} else {
|
||||||
|
direct_objects.push(obj.clone());
|
||||||
|
}
|
||||||
|
}
|
||||||
|
// Filter out common prefixes that are <= continuation token
|
||||||
|
let all_prefixes: Vec<String> = if let Some(token) = continuation_token {
|
||||||
|
seen_prefixes
|
||||||
|
.into_iter()
|
||||||
|
.filter(|cp| cp.as_str() > token)
|
||||||
|
.collect()
|
||||||
|
} else {
|
||||||
|
seen_prefixes.into_iter().collect()
|
||||||
|
};
|
||||||
|
|
||||||
|
// Merge objects and common_prefixes in sorted order, limited to max_keys
|
||||||
|
let mut result_objects = Vec::new();
|
||||||
|
let mut result_prefixes = Vec::new();
|
||||||
|
let mut oi = 0usize;
|
||||||
|
let mut pi = 0usize;
|
||||||
|
let mut count = 0i64;
|
||||||
|
let mut last_key: Option<String> = None;
|
||||||
|
|
||||||
|
while count < max_keys && (oi < direct_objects.len() || pi < all_prefixes.len()) {
|
||||||
|
let take_object = match (direct_objects.get(oi), all_prefixes.get(pi)) {
|
||||||
|
(Some(obj), Some(pfx)) => obj.key.as_str() < pfx.as_str(),
|
||||||
|
(Some(_), None) => true,
|
||||||
|
(None, Some(_)) => false,
|
||||||
|
(None, None) => break,
|
||||||
|
};
|
||||||
|
|
||||||
|
if take_object {
|
||||||
|
last_key = Some(direct_objects[oi].key.clone());
|
||||||
|
result_objects.push(direct_objects[oi].clone());
|
||||||
|
oi += 1;
|
||||||
|
} else {
|
||||||
|
last_key = Some(all_prefixes[pi].clone());
|
||||||
|
result_prefixes.push(all_prefixes[pi].clone());
|
||||||
|
pi += 1;
|
||||||
|
}
|
||||||
|
count += 1;
|
||||||
|
}
|
||||||
|
|
||||||
|
let is_truncated = oi < direct_objects.len() || pi < all_prefixes.len();
|
||||||
|
let next_token = if is_truncated { last_key } else { None };
|
||||||
|
let key_count = result_objects.len() + result_prefixes.len();
|
||||||
|
|
||||||
|
Ok(ListObjectsResult {
|
||||||
|
objects: result_objects,
|
||||||
|
is_truncated,
|
||||||
|
next_continuation_token: next_token,
|
||||||
|
prefix: prefix.map(|s| s.to_string()),
|
||||||
|
delimiter: Some(delim.to_string()),
|
||||||
|
common_prefixes: result_prefixes,
|
||||||
|
key_count,
|
||||||
|
})
|
||||||
|
} else {
|
||||||
|
let is_truncated = all_objects.len() as i64 > max_keys;
|
||||||
|
let items: Vec<_> = all_objects.into_iter().take(max_keys as usize).collect();
|
||||||
|
let next_token = if is_truncated {
|
||||||
|
items.last().map(|o| o.key.clone())
|
||||||
|
} else {
|
||||||
|
None
|
||||||
|
};
|
||||||
|
let key_count = items.len();
|
||||||
|
|
||||||
|
Ok(ListObjectsResult {
|
||||||
|
objects: items,
|
||||||
|
is_truncated,
|
||||||
|
next_continuation_token: next_token,
|
||||||
|
prefix: prefix.map(|s| s.to_string()),
|
||||||
|
delimiter: None,
|
||||||
|
common_prefixes: Vec::new(),
|
||||||
|
key_count,
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// --- Multipart upload operations ---
|
||||||
|
|
||||||
|
async fn create_multipart_upload(
|
||||||
|
&self,
|
||||||
|
bucket: &str,
|
||||||
|
key: &str,
|
||||||
|
content_type: Option<&str>,
|
||||||
|
metadata: HashMap<String, String>,
|
||||||
|
) -> Result<CreateMultipartUploadResult, Post3Error> {
|
||||||
|
let bucket_row = self.require_bucket(bucket).await?;
|
||||||
|
let content_type = content_type.unwrap_or("application/octet-stream");
|
||||||
|
let upload_id = uuid::Uuid::new_v4().to_string();
|
||||||
|
|
||||||
|
let mut tx = self.db.begin().await?;
|
||||||
|
|
||||||
|
let upload_row = MultipartUploadsRepository::create_in_tx(
|
||||||
|
&mut tx,
|
||||||
|
bucket_row.id,
|
||||||
|
key,
|
||||||
|
&upload_id,
|
||||||
|
content_type,
|
||||||
|
)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
if !metadata.is_empty() {
|
||||||
|
MultipartMetadataRepository::insert_batch_in_tx(
|
||||||
|
&mut tx,
|
||||||
|
upload_row.id,
|
||||||
|
&metadata,
|
||||||
|
)
|
||||||
|
.await?;
|
||||||
|
}
|
||||||
|
|
||||||
|
tx.commit().await?;
|
||||||
|
|
||||||
|
Ok(CreateMultipartUploadResult {
|
||||||
|
bucket: bucket.to_string(),
|
||||||
|
key: key.to_string(),
|
||||||
|
upload_id,
|
||||||
|
})
|
||||||
|
}
|
||||||
|
|
||||||
|
async fn upload_part(
|
||||||
|
&self,
|
||||||
|
bucket: &str,
|
||||||
|
key: &str,
|
||||||
|
upload_id: &str,
|
||||||
|
part_number: i32,
|
||||||
|
body: Bytes,
|
||||||
|
) -> Result<UploadPartResult, Post3Error> {
|
||||||
|
let bucket_row = self.require_bucket(bucket).await?;
|
||||||
|
let upload = self
|
||||||
|
.require_upload(upload_id, bucket_row.id, key)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
let mut hasher = Md5::new();
|
||||||
|
hasher.update(&body);
|
||||||
|
let etag = format!("\"{}\"", hex::encode(hasher.finalize()));
|
||||||
|
let size = body.len() as i64;
|
||||||
|
|
||||||
|
UploadPartsRepository::upsert(
|
||||||
|
&self.db,
|
||||||
|
upload.id,
|
||||||
|
part_number,
|
||||||
|
&body,
|
||||||
|
size,
|
||||||
|
&etag,
|
||||||
|
)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
Ok(UploadPartResult { etag })
|
||||||
|
}
|
||||||
|
|
||||||
|
async fn complete_multipart_upload(
|
||||||
|
&self,
|
||||||
|
bucket: &str,
|
||||||
|
key: &str,
|
||||||
|
upload_id: &str,
|
||||||
|
part_etags: Vec<(i32, String)>,
|
||||||
|
) -> Result<CompleteMultipartUploadResult, Post3Error> {
|
||||||
|
let bucket_row = self.require_bucket(bucket).await?;
|
||||||
|
let upload = self
|
||||||
|
.require_upload(upload_id, bucket_row.id, key)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
// Validate part numbers are in ascending order
|
||||||
|
for window in part_etags.windows(2) {
|
||||||
|
if window[0].0 >= window[1].0 {
|
||||||
|
return Err(Post3Error::InvalidPartOrder);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Fetch the requested parts
|
||||||
|
let part_numbers: Vec<i32> = part_etags.iter().map(|(n, _)| *n).collect();
|
||||||
|
let parts = UploadPartsRepository::get_ordered_by_numbers(
|
||||||
|
&self.db,
|
||||||
|
upload.id,
|
||||||
|
&part_numbers,
|
||||||
|
)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
// Validate all parts exist and ETags match
|
||||||
|
for (expected_num, expected_etag) in &part_etags {
|
||||||
|
let part = parts
|
||||||
|
.iter()
|
||||||
|
.find(|p| p.part_number == *expected_num)
|
||||||
|
.ok_or_else(|| Post3Error::InvalidPart {
|
||||||
|
upload_id: upload_id.to_string(),
|
||||||
|
part_number: *expected_num,
|
||||||
|
})?;
|
||||||
|
|
||||||
|
// Normalize ETags by stripping quotes for comparison
|
||||||
|
let stored = part.etag.trim_matches('"');
|
||||||
|
let expected = expected_etag.trim_matches('"');
|
||||||
|
if stored != expected {
|
||||||
|
return Err(Post3Error::ETagMismatch {
|
||||||
|
part_number: *expected_num,
|
||||||
|
expected: expected_etag.clone(),
|
||||||
|
got: part.etag.clone(),
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Validate minimum part size (5 MB) for all parts except the last
|
||||||
|
const MIN_PART_SIZE: i64 = 5 * 1024 * 1024;
|
||||||
|
for (i, part) in parts.iter().enumerate() {
|
||||||
|
if i < parts.len() - 1 && part.size < MIN_PART_SIZE {
|
||||||
|
return Err(Post3Error::EntityTooSmall {
|
||||||
|
part_number: part.part_number,
|
||||||
|
size: part.size,
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Compute compound ETag: MD5(concat of raw MD5 bytes of each part) + "-N"
|
||||||
|
let mut etag_hasher = Md5::new();
|
||||||
|
let part_count = parts.len();
|
||||||
|
for part in &parts {
|
||||||
|
// Part etag is quoted hex, e.g. "\"abcdef...\""
|
||||||
|
let hex_str = part.etag.trim_matches('"');
|
||||||
|
if let Ok(raw_md5) = hex::decode(hex_str) {
|
||||||
|
etag_hasher.update(&raw_md5);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
let compound_etag = format!(
|
||||||
|
"\"{}-{}\"",
|
||||||
|
hex::encode(etag_hasher.finalize()),
|
||||||
|
part_count
|
||||||
|
);
|
||||||
|
|
||||||
|
// Concatenate all part data
|
||||||
|
let total_size: i64 = parts.iter().map(|p| p.size).sum();
|
||||||
|
let mut assembled = Vec::with_capacity(total_size as usize);
|
||||||
|
for part in &parts {
|
||||||
|
assembled.extend_from_slice(&part.data);
|
||||||
|
}
|
||||||
|
|
||||||
|
// Get upload metadata
|
||||||
|
let user_metadata =
|
||||||
|
MultipartMetadataRepository::get_all(&self.db, upload.id).await?;
|
||||||
|
|
||||||
|
// Begin transaction for the final object assembly
|
||||||
|
let mut tx = self.db.begin().await?;
|
||||||
|
|
||||||
|
// Insert the final object (deletes any existing object with same key)
|
||||||
|
let object_row = ObjectsRepository::insert_in_tx(
|
||||||
|
&mut tx,
|
||||||
|
bucket_row.id,
|
||||||
|
key,
|
||||||
|
total_size,
|
||||||
|
&compound_etag,
|
||||||
|
&upload.content_type,
|
||||||
|
)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
// Chunk into 1 MiB blocks
|
||||||
|
for (chunk_index, chunk) in assembled.chunks(self.block_size).enumerate() {
|
||||||
|
BlocksRepository::insert_in_tx(
|
||||||
|
&mut tx,
|
||||||
|
object_row.id,
|
||||||
|
chunk_index as i32,
|
||||||
|
chunk,
|
||||||
|
)
|
||||||
|
.await?;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Transfer metadata
|
||||||
|
if !user_metadata.is_empty() {
|
||||||
|
MetadataRepository::insert_batch_in_tx(
|
||||||
|
&mut tx,
|
||||||
|
object_row.id,
|
||||||
|
&user_metadata,
|
||||||
|
)
|
||||||
|
.await?;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Delete the multipart upload (cascades to parts + upload metadata)
|
||||||
|
MultipartUploadsRepository::delete_in_tx(&mut tx, upload.id).await?;
|
||||||
|
|
||||||
|
tx.commit().await?;
|
||||||
|
|
||||||
|
Ok(CompleteMultipartUploadResult {
|
||||||
|
bucket: bucket.to_string(),
|
||||||
|
key: key.to_string(),
|
||||||
|
etag: compound_etag,
|
||||||
|
size: total_size,
|
||||||
|
})
|
||||||
|
}
|
||||||
|
|
||||||
|
async fn abort_multipart_upload(
|
||||||
|
&self,
|
||||||
|
bucket: &str,
|
||||||
|
key: &str,
|
||||||
|
upload_id: &str,
|
||||||
|
) -> Result<(), Post3Error> {
|
||||||
|
let bucket_row = self.require_bucket(bucket).await?;
|
||||||
|
let upload = self
|
||||||
|
.require_upload(upload_id, bucket_row.id, key)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
// CASCADE deletes parts + metadata
|
||||||
|
MultipartUploadsRepository::delete_by_upload_id(&self.db, &upload.upload_id)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
Ok(())
|
||||||
|
}
|
||||||
|
|
||||||
|
async fn list_parts(
|
||||||
|
&self,
|
||||||
|
bucket: &str,
|
||||||
|
key: &str,
|
||||||
|
upload_id: &str,
|
||||||
|
max_parts: Option<i32>,
|
||||||
|
part_number_marker: Option<i32>,
|
||||||
|
) -> Result<ListPartsResult, Post3Error> {
|
||||||
|
let bucket_row = self.require_bucket(bucket).await?;
|
||||||
|
let upload = self
|
||||||
|
.require_upload(upload_id, bucket_row.id, key)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
let max_parts = max_parts.unwrap_or(1000) as i64;
|
||||||
|
|
||||||
|
// Fetch one extra to detect truncation
|
||||||
|
let parts = UploadPartsRepository::list_info(
|
||||||
|
&self.db,
|
||||||
|
upload.id,
|
||||||
|
part_number_marker,
|
||||||
|
max_parts + 1,
|
||||||
|
)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
let is_truncated = parts.len() as i64 > max_parts;
|
||||||
|
let items: Vec<_> = parts.into_iter().take(max_parts as usize).collect();
|
||||||
|
|
||||||
|
let next_marker = if is_truncated {
|
||||||
|
items.last().map(|p| p.part_number)
|
||||||
|
} else {
|
||||||
|
None
|
||||||
|
};
|
||||||
|
|
||||||
|
Ok(ListPartsResult {
|
||||||
|
bucket: bucket.to_string(),
|
||||||
|
key: key.to_string(),
|
||||||
|
upload_id: upload_id.to_string(),
|
||||||
|
parts: items,
|
||||||
|
is_truncated,
|
||||||
|
next_part_number_marker: next_marker,
|
||||||
|
})
|
||||||
|
}
|
||||||
|
|
||||||
|
async fn list_multipart_uploads(
|
||||||
|
&self,
|
||||||
|
bucket: &str,
|
||||||
|
prefix: Option<&str>,
|
||||||
|
key_marker: Option<&str>,
|
||||||
|
upload_id_marker: Option<&str>,
|
||||||
|
max_uploads: Option<i32>,
|
||||||
|
) -> Result<ListMultipartUploadsResult, Post3Error> {
|
||||||
|
let bucket_row = self.require_bucket(bucket).await?;
|
||||||
|
let max_uploads = max_uploads.unwrap_or(1000) as i64;
|
||||||
|
|
||||||
|
// Fetch one extra to detect truncation
|
||||||
|
let rows = MultipartUploadsRepository::list(
|
||||||
|
&self.db,
|
||||||
|
bucket_row.id,
|
||||||
|
prefix,
|
||||||
|
key_marker,
|
||||||
|
upload_id_marker,
|
||||||
|
max_uploads + 1,
|
||||||
|
)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
let is_truncated = rows.len() as i64 > max_uploads;
|
||||||
|
let items: Vec<_> = rows.into_iter().take(max_uploads as usize).collect();
|
||||||
|
|
||||||
|
let (next_key_marker, next_upload_id_marker) = if is_truncated {
|
||||||
|
items
|
||||||
|
.last()
|
||||||
|
.map(|u| (Some(u.key.clone()), Some(u.upload_id.clone())))
|
||||||
|
.unwrap_or((None, None))
|
||||||
|
} else {
|
||||||
|
(None, None)
|
||||||
|
};
|
||||||
|
|
||||||
|
let uploads = items
|
||||||
|
.into_iter()
|
||||||
|
.map(|u| MultipartUploadInfo {
|
||||||
|
key: u.key,
|
||||||
|
upload_id: u.upload_id,
|
||||||
|
initiated: u.created_at,
|
||||||
|
})
|
||||||
|
.collect();
|
||||||
|
|
||||||
|
Ok(ListMultipartUploadsResult {
|
||||||
|
bucket: bucket.to_string(),
|
||||||
|
uploads,
|
||||||
|
is_truncated,
|
||||||
|
next_key_marker,
|
||||||
|
next_upload_id_marker,
|
||||||
|
prefix: prefix.map(|s| s.to_string()),
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
88
examples/aws-cli.sh
Executable file
88
examples/aws-cli.sh
Executable file
@@ -0,0 +1,88 @@
|
|||||||
|
#!/usr/bin/env bash
|
||||||
|
# post3 usage with the AWS CLI
|
||||||
|
#
|
||||||
|
# Prerequisites:
|
||||||
|
# 1. post3-server running: mise run up && mise run dev
|
||||||
|
# 2. AWS CLI installed: https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html
|
||||||
|
#
|
||||||
|
# Usage:
|
||||||
|
# bash examples/aws-cli.sh
|
||||||
|
#
|
||||||
|
# Or:
|
||||||
|
# mise run example:cli
|
||||||
|
|
||||||
|
set -euo pipefail
|
||||||
|
|
||||||
|
ENDPOINT="http://localhost:9000"
|
||||||
|
BUCKET="cli-demo"
|
||||||
|
|
||||||
|
# AWS CLI needs credentials even though post3 doesn't validate them (yet)
|
||||||
|
export AWS_ACCESS_KEY_ID=test
|
||||||
|
export AWS_SECRET_ACCESS_KEY=test
|
||||||
|
export AWS_DEFAULT_REGION=us-east-1
|
||||||
|
|
||||||
|
aws() {
|
||||||
|
command aws --endpoint-url "$ENDPOINT" "$@"
|
||||||
|
}
|
||||||
|
|
||||||
|
echo "=== post3 AWS CLI Demo ==="
|
||||||
|
|
||||||
|
# Create a bucket
|
||||||
|
echo ""
|
||||||
|
echo "--- Creating bucket '$BUCKET'"
|
||||||
|
aws s3api create-bucket --bucket "$BUCKET"
|
||||||
|
|
||||||
|
# List buckets
|
||||||
|
echo ""
|
||||||
|
echo "--- Listing buckets"
|
||||||
|
aws s3api list-buckets
|
||||||
|
|
||||||
|
# Upload a file
|
||||||
|
echo ""
|
||||||
|
echo "--- Uploading hello.txt"
|
||||||
|
echo "Hello from the AWS CLI!" | aws s3 cp - "s3://$BUCKET/hello.txt"
|
||||||
|
|
||||||
|
# Upload with metadata
|
||||||
|
echo ""
|
||||||
|
echo "--- Uploading report.txt with metadata"
|
||||||
|
echo "Report content" | aws s3 cp - "s3://$BUCKET/report.txt" \
|
||||||
|
--metadata "author=alice,version=1"
|
||||||
|
|
||||||
|
# List objects
|
||||||
|
echo ""
|
||||||
|
echo "--- Listing objects"
|
||||||
|
aws s3api list-objects-v2 --bucket "$BUCKET"
|
||||||
|
|
||||||
|
# List with prefix
|
||||||
|
echo ""
|
||||||
|
echo "--- Uploading docs/readme.md and docs/guide.md"
|
||||||
|
echo "# README" | aws s3 cp - "s3://$BUCKET/docs/readme.md"
|
||||||
|
echo "# Guide" | aws s3 cp - "s3://$BUCKET/docs/guide.md"
|
||||||
|
|
||||||
|
echo ""
|
||||||
|
echo "--- Listing objects with prefix 'docs/'"
|
||||||
|
aws s3api list-objects-v2 --bucket "$BUCKET" --prefix "docs/"
|
||||||
|
|
||||||
|
# Download a file
|
||||||
|
echo ""
|
||||||
|
echo "--- Downloading hello.txt"
|
||||||
|
aws s3 cp "s3://$BUCKET/hello.txt" -
|
||||||
|
|
||||||
|
# Head object (metadata)
|
||||||
|
echo ""
|
||||||
|
echo "--- Head object report.txt"
|
||||||
|
aws s3api head-object --bucket "$BUCKET" --key "report.txt"
|
||||||
|
|
||||||
|
# Delete objects
|
||||||
|
echo ""
|
||||||
|
echo "--- Cleaning up"
|
||||||
|
aws s3 rm "s3://$BUCKET/hello.txt"
|
||||||
|
aws s3 rm "s3://$BUCKET/report.txt"
|
||||||
|
aws s3 rm "s3://$BUCKET/docs/readme.md"
|
||||||
|
aws s3 rm "s3://$BUCKET/docs/guide.md"
|
||||||
|
|
||||||
|
# Delete bucket
|
||||||
|
aws s3api delete-bucket --bucket "$BUCKET"
|
||||||
|
|
||||||
|
echo ""
|
||||||
|
echo "=== Done ==="
|
||||||
86
examples/curl.sh
Executable file
86
examples/curl.sh
Executable file
@@ -0,0 +1,86 @@
|
|||||||
|
#!/usr/bin/env bash
|
||||||
|
# post3 usage with raw curl commands
|
||||||
|
#
|
||||||
|
# Prerequisites:
|
||||||
|
# post3-server running: mise run up && mise run dev
|
||||||
|
#
|
||||||
|
# Usage:
|
||||||
|
# bash examples/curl.sh
|
||||||
|
#
|
||||||
|
# Or:
|
||||||
|
# mise run example:curl
|
||||||
|
|
||||||
|
set -euo pipefail
|
||||||
|
|
||||||
|
BASE="http://localhost:9000"
|
||||||
|
BUCKET="curl-demo"
|
||||||
|
|
||||||
|
echo "=== post3 curl Demo ==="
|
||||||
|
|
||||||
|
# Create a bucket (PUT /{bucket})
|
||||||
|
echo ""
|
||||||
|
echo "--- Creating bucket '$BUCKET'"
|
||||||
|
curl -s -X PUT "$BASE/$BUCKET" -o /dev/null -w "HTTP %{http_code}\n"
|
||||||
|
|
||||||
|
# List buckets (GET /)
|
||||||
|
echo ""
|
||||||
|
echo "--- Listing buckets"
|
||||||
|
curl -s "$BASE/" | head -20
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
# Put an object (PUT /{bucket}/{key})
|
||||||
|
echo ""
|
||||||
|
echo "--- Putting hello.txt"
|
||||||
|
curl -s -X PUT "$BASE/$BUCKET/hello.txt" \
|
||||||
|
-d "Hello from curl!" \
|
||||||
|
-H "Content-Type: text/plain" \
|
||||||
|
-o /dev/null -w "HTTP %{http_code}\n"
|
||||||
|
|
||||||
|
# Put with custom metadata (x-amz-meta-* headers)
|
||||||
|
echo ""
|
||||||
|
echo "--- Putting report.txt with metadata"
|
||||||
|
curl -s -X PUT "$BASE/$BUCKET/report.txt" \
|
||||||
|
-d "Report content" \
|
||||||
|
-H "Content-Type: text/plain" \
|
||||||
|
-H "x-amz-meta-author: bob" \
|
||||||
|
-H "x-amz-meta-version: 3" \
|
||||||
|
-o /dev/null -w "HTTP %{http_code}\n"
|
||||||
|
|
||||||
|
# Get an object (GET /{bucket}/{key})
|
||||||
|
echo ""
|
||||||
|
echo "--- Getting hello.txt"
|
||||||
|
curl -s "$BASE/$BUCKET/hello.txt"
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
# Head an object (HEAD /{bucket}/{key})
|
||||||
|
echo ""
|
||||||
|
echo "--- Head report.txt"
|
||||||
|
curl -s -I "$BASE/$BUCKET/report.txt"
|
||||||
|
|
||||||
|
# List objects (GET /{bucket}?list-type=2)
|
||||||
|
echo ""
|
||||||
|
echo "--- Listing objects"
|
||||||
|
curl -s "$BASE/$BUCKET?list-type=2" | head -20
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
# List with prefix
|
||||||
|
echo ""
|
||||||
|
echo "--- Putting docs/readme.md"
|
||||||
|
curl -s -X PUT "$BASE/$BUCKET/docs/readme.md" -d "# README" -o /dev/null -w "HTTP %{http_code}\n"
|
||||||
|
|
||||||
|
echo "--- Listing with prefix 'docs/'"
|
||||||
|
curl -s "$BASE/$BUCKET?list-type=2&prefix=docs/" | head -20
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
# Delete objects (DELETE /{bucket}/{key})
|
||||||
|
echo ""
|
||||||
|
echo "--- Cleaning up"
|
||||||
|
curl -s -X DELETE "$BASE/$BUCKET/hello.txt" -o /dev/null -w "DELETE hello.txt: HTTP %{http_code}\n"
|
||||||
|
curl -s -X DELETE "$BASE/$BUCKET/report.txt" -o /dev/null -w "DELETE report.txt: HTTP %{http_code}\n"
|
||||||
|
curl -s -X DELETE "$BASE/$BUCKET/docs/readme.md" -o /dev/null -w "DELETE docs/readme.md: HTTP %{http_code}\n"
|
||||||
|
|
||||||
|
# Delete bucket (DELETE /{bucket})
|
||||||
|
curl -s -X DELETE "$BASE/$BUCKET" -o /dev/null -w "DELETE bucket: HTTP %{http_code}\n"
|
||||||
|
|
||||||
|
echo ""
|
||||||
|
echo "=== Done ==="
|
||||||
107
mise.toml
Normal file
107
mise.toml
Normal file
@@ -0,0 +1,107 @@
|
|||||||
|
[env]
|
||||||
|
RUST_LOG = "post3=debug,post3_server=debug,info"
|
||||||
|
DATABASE_URL = "postgresql://devuser:devpassword@localhost:5435/post3_dev"
|
||||||
|
POST3_HOST = "127.0.0.1:9000"
|
||||||
|
|
||||||
|
[tasks."develop"]
|
||||||
|
alias = ["d", "dev"]
|
||||||
|
description = "Run the post3 server in development mode"
|
||||||
|
run = "cargo run -p post3-server -- serve"
|
||||||
|
|
||||||
|
[tasks."build"]
|
||||||
|
alias = ["b"]
|
||||||
|
description = "Build the workspace in release mode"
|
||||||
|
run = "cargo build --release"
|
||||||
|
|
||||||
|
[tasks."check"]
|
||||||
|
alias = ["c"]
|
||||||
|
description = "Type-check the entire workspace"
|
||||||
|
run = "cargo check --workspace"
|
||||||
|
|
||||||
|
[tasks."local:up"]
|
||||||
|
alias = ["up"]
|
||||||
|
description = "Start PostgreSQL via docker compose"
|
||||||
|
run = "docker compose -f ./templates/docker-compose.yaml up -d --remove-orphans --wait"
|
||||||
|
|
||||||
|
[tasks."local:down"]
|
||||||
|
alias = ["down"]
|
||||||
|
description = "Stop PostgreSQL and remove volumes"
|
||||||
|
run = "docker compose -f ./templates/docker-compose.yaml down -v"
|
||||||
|
|
||||||
|
[tasks."local:logs"]
|
||||||
|
description = "Tail PostgreSQL logs"
|
||||||
|
run = "docker compose -f ./templates/docker-compose.yaml logs -f"
|
||||||
|
|
||||||
|
[tasks."db:shell"]
|
||||||
|
description = "Open a psql shell to the dev database"
|
||||||
|
env = { PGPASSWORD = "devpassword" }
|
||||||
|
run = "psql -h localhost -p 5435 -U devuser -d post3_dev"
|
||||||
|
|
||||||
|
[tasks."db:reset"]
|
||||||
|
description = "Drop and recreate the dev database"
|
||||||
|
run = """
|
||||||
|
docker compose -f ./templates/docker-compose.yaml down -v
|
||||||
|
docker compose -f ./templates/docker-compose.yaml up -d --remove-orphans --wait
|
||||||
|
"""
|
||||||
|
|
||||||
|
[tasks."test"]
|
||||||
|
alias = ["t"]
|
||||||
|
description = "Run all tests (requires PostgreSQL running)"
|
||||||
|
depends = ["local:up"]
|
||||||
|
run = "cargo test --workspace -- --test-threads=1"
|
||||||
|
|
||||||
|
[tasks."test:integration"]
|
||||||
|
alias = ["ti"]
|
||||||
|
description = "Run S3 integration tests only (requires PostgreSQL running)"
|
||||||
|
depends = ["local:up"]
|
||||||
|
run = "cargo test --test s3_integration -- --test-threads=1"
|
||||||
|
|
||||||
|
[tasks."test:watch"]
|
||||||
|
description = "Run tests on file change"
|
||||||
|
depends = ["local:up"]
|
||||||
|
run = "cargo watch -x 'test --workspace -- --test-threads=1'"
|
||||||
|
|
||||||
|
[tasks."ci:pr"]
|
||||||
|
description = "Run CI PR pipeline via Dagger"
|
||||||
|
run = "cargo run -p ci -- pr"
|
||||||
|
|
||||||
|
[tasks."ci:main"]
|
||||||
|
description = "Run CI main pipeline via Dagger"
|
||||||
|
run = "cargo run -p ci -- main"
|
||||||
|
|
||||||
|
[tasks."example:basic"]
|
||||||
|
description = "Run the basic SDK example (requires server running)"
|
||||||
|
run = "cargo run -p post3-sdk --example basic"
|
||||||
|
|
||||||
|
[tasks."example:metadata"]
|
||||||
|
description = "Run the metadata example (requires server running)"
|
||||||
|
run = "cargo run -p post3-sdk --example metadata"
|
||||||
|
|
||||||
|
[tasks."example:aws-sdk"]
|
||||||
|
description = "Run the raw aws-sdk-s3 example (requires server running)"
|
||||||
|
run = "cargo run -p post3-sdk --example aws_sdk_direct"
|
||||||
|
|
||||||
|
[tasks."example:cli"]
|
||||||
|
description = "Run the AWS CLI example script (requires server running + aws CLI)"
|
||||||
|
run = "bash examples/aws-cli.sh"
|
||||||
|
|
||||||
|
[tasks."example:curl"]
|
||||||
|
description = "Run the curl example script (requires server running)"
|
||||||
|
run = "bash examples/curl.sh"
|
||||||
|
|
||||||
|
[tasks."example:large"]
|
||||||
|
description = "Run the large file upload stress test (requires server running)"
|
||||||
|
run = "cargo run -p post3-sdk --example large_upload --release"
|
||||||
|
|
||||||
|
[tasks."example:multipart"]
|
||||||
|
description = "Run the multipart upload stress test for huge files (requires server running)"
|
||||||
|
run = "cargo run -p post3-sdk --example multipart_upload --release"
|
||||||
|
|
||||||
|
[tasks."test:s3-compliance"]
|
||||||
|
alias = ["s3t"]
|
||||||
|
description = "Run Ceph s3-tests against post3 (FS backend)"
|
||||||
|
run = "bash s3-compliance/run-s3-tests.sh"
|
||||||
|
|
||||||
|
[tasks."test:s3-compliance:dry"]
|
||||||
|
description = "List which s3-tests would run (dry-run)"
|
||||||
|
run = "bash s3-compliance/run-s3-tests.sh --collect-only"
|
||||||
242
s3-compliance/run-s3-tests.sh
Executable file
242
s3-compliance/run-s3-tests.sh
Executable file
@@ -0,0 +1,242 @@
|
|||||||
|
#!/usr/bin/env bash
|
||||||
|
#
|
||||||
|
# Run Ceph s3-tests against post3 (FS backend).
|
||||||
|
#
|
||||||
|
# Usage:
|
||||||
|
# bash s3-compliance/run-s3-tests.sh # run tests
|
||||||
|
# bash s3-compliance/run-s3-tests.sh --collect-only # dry-run: list matching tests
|
||||||
|
#
|
||||||
|
set -euo pipefail
|
||||||
|
|
||||||
|
REPO_ROOT="$(cd "$(dirname "$0")/.." && pwd)"
|
||||||
|
S3TESTS_DIR="$REPO_ROOT/s3-tests"
|
||||||
|
SCRIPT_DIR="$REPO_ROOT/s3-compliance"
|
||||||
|
|
||||||
|
# --- Validate prerequisites ---------------------------------------------------
|
||||||
|
|
||||||
|
if [ ! -d "$S3TESTS_DIR" ]; then
|
||||||
|
echo "ERROR: s3-tests submodule not found at $S3TESTS_DIR"
|
||||||
|
echo "Run: git submodule update --init"
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
if ! command -v python3 &>/dev/null; then
|
||||||
|
echo "ERROR: python3 is required"
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
# --- Pick a free port ---------------------------------------------------------
|
||||||
|
|
||||||
|
PORT=$(python3 -c 'import socket; s=socket.socket(); s.bind(("",0)); print(s.getsockname()[1]); s.close()')
|
||||||
|
echo "Using port $PORT"
|
||||||
|
|
||||||
|
# --- Temp data dir for FS backend ---------------------------------------------
|
||||||
|
|
||||||
|
DATA_DIR=$(mktemp -d)
|
||||||
|
echo "Data dir: $DATA_DIR"
|
||||||
|
|
||||||
|
# --- Build post3-server -------------------------------------------------------
|
||||||
|
|
||||||
|
echo "Building post3-server (release)..."
|
||||||
|
cargo build -p post3-server --release --quiet
|
||||||
|
|
||||||
|
BINARY="$REPO_ROOT/target/release/post3-server"
|
||||||
|
if [ ! -x "$BINARY" ]; then
|
||||||
|
echo "ERROR: binary not found at $BINARY"
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
# --- Generate s3tests.conf ----------------------------------------------------
|
||||||
|
|
||||||
|
CONF="$DATA_DIR/s3tests.conf"
|
||||||
|
sed "s/__PORT__/$PORT/g" "$SCRIPT_DIR/s3tests.conf.template" > "$CONF"
|
||||||
|
echo "Config: $CONF"
|
||||||
|
|
||||||
|
# --- Start the server ---------------------------------------------------------
|
||||||
|
|
||||||
|
export POST3_HOST="127.0.0.1:$PORT"
|
||||||
|
"$BINARY" serve --backend fs --data-dir "$DATA_DIR/store" &
|
||||||
|
SERVER_PID=$!
|
||||||
|
|
||||||
|
cleanup() {
|
||||||
|
echo ""
|
||||||
|
echo "Stopping server (PID $SERVER_PID)..."
|
||||||
|
kill "$SERVER_PID" 2>/dev/null || true
|
||||||
|
wait "$SERVER_PID" 2>/dev/null || true
|
||||||
|
echo "Cleaning up $DATA_DIR..."
|
||||||
|
rm -rf "$DATA_DIR"
|
||||||
|
}
|
||||||
|
trap cleanup EXIT
|
||||||
|
|
||||||
|
# --- Wait for the server to become ready --------------------------------------
|
||||||
|
|
||||||
|
echo "Waiting for server on port $PORT..."
|
||||||
|
TRIES=0
|
||||||
|
MAX_TRIES=60
|
||||||
|
while ! curl -sf "http://127.0.0.1:$PORT/" >/dev/null 2>&1; do
|
||||||
|
TRIES=$((TRIES + 1))
|
||||||
|
if [ "$TRIES" -ge "$MAX_TRIES" ]; then
|
||||||
|
echo "ERROR: server did not start within ${MAX_TRIES}s"
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
sleep 0.5
|
||||||
|
done
|
||||||
|
echo "Server is ready."
|
||||||
|
|
||||||
|
# --- Set up virtualenv for s3-tests -------------------------------------------
|
||||||
|
|
||||||
|
VENV_DIR="$S3TESTS_DIR/.venv"
|
||||||
|
if [ ! -d "$VENV_DIR" ]; then
|
||||||
|
echo "Creating virtualenv..."
|
||||||
|
python3 -m venv "$VENV_DIR"
|
||||||
|
fi
|
||||||
|
source "$VENV_DIR/bin/activate"
|
||||||
|
|
||||||
|
# Install dependencies if needed
|
||||||
|
if ! python3 -c "import boto3" 2>/dev/null; then
|
||||||
|
echo "Installing s3-tests dependencies..."
|
||||||
|
pip install --quiet -r "$S3TESTS_DIR/requirements.txt"
|
||||||
|
fi
|
||||||
|
|
||||||
|
# --- Build the test filter expression -----------------------------------------
|
||||||
|
|
||||||
|
# Marker-based exclusions (features post3 doesn't implement)
|
||||||
|
MARKER_EXCLUDE="not appendobject"
|
||||||
|
MARKER_EXCLUDE+=" and not bucket_policy and not bucket_encryption"
|
||||||
|
MARKER_EXCLUDE+=" and not bucket_logging and not checksum"
|
||||||
|
MARKER_EXCLUDE+=" and not cloud_transition and not conditional_write"
|
||||||
|
MARKER_EXCLUDE+=" and not cors and not encryption"
|
||||||
|
MARKER_EXCLUDE+=" and not fails_strict_rfc2616"
|
||||||
|
MARKER_EXCLUDE+=" and not iam_account and not iam_cross_account"
|
||||||
|
MARKER_EXCLUDE+=" and not iam_role and not iam_tenant and not iam_user"
|
||||||
|
MARKER_EXCLUDE+=" and not lifecycle and not lifecycle_expiration"
|
||||||
|
MARKER_EXCLUDE+=" and not lifecycle_transition"
|
||||||
|
MARKER_EXCLUDE+=" and not object_lock and not object_ownership"
|
||||||
|
MARKER_EXCLUDE+=" and not role_policy and not session_policy"
|
||||||
|
MARKER_EXCLUDE+=" and not user_policy and not group_policy"
|
||||||
|
MARKER_EXCLUDE+=" and not s3select and not s3website"
|
||||||
|
MARKER_EXCLUDE+=" and not s3website_routing_rules"
|
||||||
|
MARKER_EXCLUDE+=" and not s3website_redirect_location"
|
||||||
|
MARKER_EXCLUDE+=" and not sns and not sse_s3 and not storage_class"
|
||||||
|
MARKER_EXCLUDE+=" and not tagging"
|
||||||
|
MARKER_EXCLUDE+=" and not test_of_sts and not versioning and not delete_marker"
|
||||||
|
MARKER_EXCLUDE+=" and not webidentity_test"
|
||||||
|
MARKER_EXCLUDE+=" and not auth_aws2 and not auth_aws4 and not auth_common"
|
||||||
|
|
||||||
|
# Keyword-based exclusions (individual tests requiring unimplemented ops)
|
||||||
|
KEYWORD_EXCLUDE="not anonymous and not presigned and not copy_object"
|
||||||
|
KEYWORD_EXCLUDE+=" and not test_account_usage and not test_head_bucket_usage"
|
||||||
|
KEYWORD_EXCLUDE+=" and not acl and not ACL and not grant"
|
||||||
|
KEYWORD_EXCLUDE+=" and not logging and not notification"
|
||||||
|
# Exclude features not yet implemented:
|
||||||
|
# - access_bucket / bucket access control tests (require ACL/policy)
|
||||||
|
KEYWORD_EXCLUDE+=" and not test_access_bucket"
|
||||||
|
# - POST object (HTML form-based upload)
|
||||||
|
KEYWORD_EXCLUDE+=" and not test_post_object"
|
||||||
|
# - Ranged requests (Range header)
|
||||||
|
KEYWORD_EXCLUDE+=" and not ranged_request"
|
||||||
|
# - Conditional requests (If-Match, If-None-Match, If-Modified-Since)
|
||||||
|
KEYWORD_EXCLUDE+=" and not ifmatch and not ifnonematch and not ifmodified and not ifunmodified"
|
||||||
|
KEYWORD_EXCLUDE+=" and not ifnonmatch"
|
||||||
|
# - Object copy tests not caught by copy_object keyword
|
||||||
|
KEYWORD_EXCLUDE+=" and not object_copy"
|
||||||
|
# - Multipart copy (UploadPartCopy)
|
||||||
|
KEYWORD_EXCLUDE+=" and not multipart_copy"
|
||||||
|
# - Public access block
|
||||||
|
KEYWORD_EXCLUDE+=" and not public_block"
|
||||||
|
# - Object attributes API
|
||||||
|
KEYWORD_EXCLUDE+=" and not object_attributes"
|
||||||
|
# - Auth-related tests
|
||||||
|
KEYWORD_EXCLUDE+=" and not invalid_auth and not bad_auth and not authenticated_expired"
|
||||||
|
# - Torrent
|
||||||
|
KEYWORD_EXCLUDE+=" and not torrent"
|
||||||
|
# - content_encoding aws_chunked
|
||||||
|
KEYWORD_EXCLUDE+=" and not aws_chunked"
|
||||||
|
# - GetBucketLocation (needs location constraint storage)
|
||||||
|
KEYWORD_EXCLUDE+=" and not bucket_get_location"
|
||||||
|
# - expected_bucket_owner (needs owner tracking)
|
||||||
|
KEYWORD_EXCLUDE+=" and not expected_bucket_owner"
|
||||||
|
# - bucket_recreate_not_overriding (needs data preservation on re-create)
|
||||||
|
KEYWORD_EXCLUDE+=" and not bucket_recreate_not_overriding"
|
||||||
|
# - object_read_unreadable (needs permission model)
|
||||||
|
KEYWORD_EXCLUDE+=" and not object_read_unreadable"
|
||||||
|
# - Versioned concurrent tests
|
||||||
|
KEYWORD_EXCLUDE+=" and not versioned_concurrent"
|
||||||
|
# - 100-continue
|
||||||
|
KEYWORD_EXCLUDE+=" and not 100_continue"
|
||||||
|
# - multipart_get_part (GetObjectPartNumber)
|
||||||
|
KEYWORD_EXCLUDE+=" and not multipart_get_part and not multipart_single_get_part and not non_multipart_get_part"
|
||||||
|
# - object_anon_put
|
||||||
|
KEYWORD_EXCLUDE+=" and not object_anon_put"
|
||||||
|
# - raw response headers / raw get/put tests (presigned-like)
|
||||||
|
KEYWORD_EXCLUDE+=" and not object_raw"
|
||||||
|
# - Object write headers (cache-control, expires)
|
||||||
|
KEYWORD_EXCLUDE+=" and not object_write_cache_control and not object_write_expires"
|
||||||
|
# - bucket_head_extended
|
||||||
|
KEYWORD_EXCLUDE+=" and not bucket_head_extended"
|
||||||
|
# - Restore/read-through
|
||||||
|
KEYWORD_EXCLUDE+=" and not restore_object and not read_through and not restore_noncur"
|
||||||
|
# - list_multipart_upload_owner (needs owner tracking)
|
||||||
|
KEYWORD_EXCLUDE+=" and not list_multipart_upload_owner"
|
||||||
|
# - bucket_create_exists (needs owner tracking)
|
||||||
|
KEYWORD_EXCLUDE+=" and not bucket_create_exists"
|
||||||
|
# - bucket_create_naming_dns (dots + hyphens adjacent)
|
||||||
|
KEYWORD_EXCLUDE+=" and not bucket_create_naming_dns"
|
||||||
|
# - object_requestid_matches_header_on_error
|
||||||
|
KEYWORD_EXCLUDE+=" and not requestid_matches_header"
|
||||||
|
# - unicode metadata
|
||||||
|
KEYWORD_EXCLUDE+=" and not unicode_metadata"
|
||||||
|
# - multipart_upload_on_a_bucket_with_policy
|
||||||
|
KEYWORD_EXCLUDE+=" and not upload_on_a_bucket_with_policy"
|
||||||
|
# - upload_part_copy_percent_encoded_key
|
||||||
|
KEYWORD_EXCLUDE+=" and not part_copy"
|
||||||
|
# - list_buckets_paginated (needs pagination support in list_buckets)
|
||||||
|
KEYWORD_EXCLUDE+=" and not list_buckets_paginated and not list_buckets_invalid and not list_buckets_bad"
|
||||||
|
# - multipart_resend_first_finishes_last
|
||||||
|
KEYWORD_EXCLUDE+=" and not resend_first_finishes_last"
|
||||||
|
# - ranged_big_request (Range header support)
|
||||||
|
KEYWORD_EXCLUDE+=" and not ranged_big"
|
||||||
|
# - encoding_basic (URL encoding in listing)
|
||||||
|
KEYWORD_EXCLUDE+=" and not encoding_basic"
|
||||||
|
# - maxkeys_invalid (needs proper 400 error for non-numeric maxkeys)
|
||||||
|
KEYWORD_EXCLUDE+=" and not maxkeys_invalid"
|
||||||
|
# - fetchowner (needs FetchOwner=true support in v2)
|
||||||
|
KEYWORD_EXCLUDE+=" and not fetchowner"
|
||||||
|
# - list_return_data (needs Owner data in old SDK format)
|
||||||
|
KEYWORD_EXCLUDE+=" and not list_return_data"
|
||||||
|
# - unordered listing tests (parallel create, needs strict ordering)
|
||||||
|
KEYWORD_EXCLUDE+=" and not bucket_list_unordered and not bucket_listv2_unordered"
|
||||||
|
# - block_public_policy/restrict tests (PutPublicAccessBlock not implemented)
|
||||||
|
KEYWORD_EXCLUDE+=" and not block_public"
|
||||||
|
# - multipart_upload_resend_part (uses Range header in _check_content_using_range)
|
||||||
|
KEYWORD_EXCLUDE+=" and not upload_resend_part"
|
||||||
|
|
||||||
|
FILTER="$MARKER_EXCLUDE and $KEYWORD_EXCLUDE"
|
||||||
|
|
||||||
|
# --- Run the tests ------------------------------------------------------------
|
||||||
|
|
||||||
|
export S3TEST_CONF="$CONF"
|
||||||
|
|
||||||
|
EXTRA_ARGS=("${@}")
|
||||||
|
|
||||||
|
echo ""
|
||||||
|
echo "Running s3-tests..."
|
||||||
|
echo "Filter: $FILTER"
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
# --- Individual test deselections (can't use -k without affecting similarly-named tests)
|
||||||
|
|
||||||
|
DESELECT_ARGS=()
|
||||||
|
# test_multipart_upload: uses Range requests + idempotent double-complete (Ceph-specific)
|
||||||
|
DESELECT_ARGS+=(--deselect "s3tests/functional/test_s3.py::test_multipart_upload")
|
||||||
|
# test_multipart_upload_small: idempotent double-complete (Ceph-specific behavior)
|
||||||
|
DESELECT_ARGS+=(--deselect "s3tests/functional/test_s3.py::test_multipart_upload_small")
|
||||||
|
|
||||||
|
cd "$S3TESTS_DIR"
|
||||||
|
python3 -m pytest s3tests/functional/test_s3.py \
|
||||||
|
-k "$FILTER" \
|
||||||
|
"${DESELECT_ARGS[@]}" \
|
||||||
|
-v \
|
||||||
|
--tb=short \
|
||||||
|
"${EXTRA_ARGS[@]}" \
|
||||||
|
|| true # don't fail the script on test failures — we want to see results
|
||||||
49
s3-compliance/s3tests.conf.template
Normal file
49
s3-compliance/s3tests.conf.template
Normal file
@@ -0,0 +1,49 @@
|
|||||||
|
[DEFAULT]
|
||||||
|
host = 127.0.0.1
|
||||||
|
port = __PORT__
|
||||||
|
is_secure = no
|
||||||
|
|
||||||
|
[fixtures]
|
||||||
|
bucket prefix = test-{random}-
|
||||||
|
|
||||||
|
[s3 main]
|
||||||
|
display_name = test
|
||||||
|
user_id = testid
|
||||||
|
email = test@example.com
|
||||||
|
access_key = test
|
||||||
|
secret_key = test
|
||||||
|
api_name = default
|
||||||
|
|
||||||
|
[s3 alt]
|
||||||
|
display_name = testalt
|
||||||
|
user_id = testaltid
|
||||||
|
email = testalt@example.com
|
||||||
|
access_key = testalt
|
||||||
|
secret_key = testalt
|
||||||
|
|
||||||
|
[s3 tenant]
|
||||||
|
display_name = testtenant
|
||||||
|
user_id = testtenantid
|
||||||
|
email = testtenant@example.com
|
||||||
|
access_key = testtenant
|
||||||
|
secret_key = testtenant
|
||||||
|
tenant = tenant
|
||||||
|
|
||||||
|
[iam]
|
||||||
|
email = s3@example.com
|
||||||
|
user_id = testiam
|
||||||
|
access_key = testiam
|
||||||
|
secret_key = testiam
|
||||||
|
display_name = testiam
|
||||||
|
|
||||||
|
[iam root]
|
||||||
|
access_key = iamrootkey
|
||||||
|
secret_key = iamrootsecret
|
||||||
|
user_id = iamrootid
|
||||||
|
email = iamroot@example.com
|
||||||
|
|
||||||
|
[iam alt root]
|
||||||
|
access_key = iamaltkey
|
||||||
|
secret_key = iamaltsecret
|
||||||
|
user_id = iamaltid
|
||||||
|
email = iamalt@example.com
|
||||||
16
templates/docker-compose.yaml
Normal file
16
templates/docker-compose.yaml
Normal file
@@ -0,0 +1,16 @@
|
|||||||
|
services:
|
||||||
|
postgres:
|
||||||
|
image: 'postgres:18-alpine'
|
||||||
|
restart: 'always'
|
||||||
|
shm_size: 128mb
|
||||||
|
environment:
|
||||||
|
POSTGRES_DB: post3_dev
|
||||||
|
POSTGRES_USER: devuser
|
||||||
|
POSTGRES_PASSWORD: devpassword
|
||||||
|
ports:
|
||||||
|
- '5435:5432'
|
||||||
|
healthcheck:
|
||||||
|
test: ["CMD-SHELL", "pg_isready -U devuser -d post3_dev"]
|
||||||
|
interval: 5s
|
||||||
|
timeout: 5s
|
||||||
|
retries: 5
|
||||||
21
todos/POST3-001-workspace-skeleton.md
Normal file
21
todos/POST3-001-workspace-skeleton.md
Normal file
@@ -0,0 +1,21 @@
|
|||||||
|
# POST3-001: Create workspace skeleton
|
||||||
|
|
||||||
|
**Status:** Done
|
||||||
|
**Priority:** P0
|
||||||
|
**Blocked by:** —
|
||||||
|
|
||||||
|
## Description
|
||||||
|
|
||||||
|
Set up the Rust workspace with both crates, Docker Compose for PostgreSQL 18, and mise.toml dev tasks.
|
||||||
|
|
||||||
|
## Acceptance Criteria
|
||||||
|
|
||||||
|
- [ ] `Cargo.toml` workspace root with `crates/*` members
|
||||||
|
- [ ] `crates/post3/Cargo.toml` — library crate with sqlx, tokio, bytes, chrono, md-5, hex, thiserror, uuid, tracing, serde
|
||||||
|
- [ ] `crates/post3/src/lib.rs` — empty module declarations
|
||||||
|
- [ ] `crates/post3-server/Cargo.toml` — binary crate depending on post3, axum, clap, notmad, quick-xml, etc.
|
||||||
|
- [ ] `crates/post3-server/src/main.rs` — minimal tokio main
|
||||||
|
- [ ] `templates/docker-compose.yaml` — PostgreSQL 18 on port 5435
|
||||||
|
- [ ] `mise.toml` — tasks: up, down, dev, test, db:shell, db:migrate
|
||||||
|
- [ ] `cargo check --workspace` passes
|
||||||
|
- [ ] `mise run up` starts PostgreSQL successfully
|
||||||
24
todos/POST3-002-schema-models-errors.md
Normal file
24
todos/POST3-002-schema-models-errors.md
Normal file
@@ -0,0 +1,24 @@
|
|||||||
|
# POST3-002: Database schema, models, and error types
|
||||||
|
|
||||||
|
**Status:** Done
|
||||||
|
**Priority:** P0
|
||||||
|
**Blocked by:** POST3-001
|
||||||
|
|
||||||
|
## Description
|
||||||
|
|
||||||
|
Define the PostgreSQL schema (buckets, objects, object_metadata, blocks), create Rust model types with sqlx::FromRow, and define the Post3Error enum.
|
||||||
|
|
||||||
|
## Acceptance Criteria
|
||||||
|
|
||||||
|
- [ ] `crates/post3/migrations/20260226000001_initial.sql` with all 4 tables + indexes
|
||||||
|
- [ ] `crates/post3/src/models.rs` — BucketRow, ObjectRow, BlockRow, MetadataEntry, ObjectInfo, ListObjectsResult
|
||||||
|
- [ ] `crates/post3/src/error.rs` — Post3Error enum (BucketNotFound, BucketAlreadyExists, ObjectNotFound, BucketNotEmpty, Database, Other)
|
||||||
|
- [ ] Migration runs successfully against PostgreSQL
|
||||||
|
- [ ] `cargo check -p post3` passes
|
||||||
|
|
||||||
|
## Schema Details
|
||||||
|
|
||||||
|
- `buckets` — id (UUID PK), name (TEXT UNIQUE), created_at
|
||||||
|
- `objects` — id (UUID PK), bucket_id (FK CASCADE), key, size, etag, content_type, created_at; unique on (bucket_id, key)
|
||||||
|
- `object_metadata` — id (UUID PK), object_id (FK CASCADE), meta_key, meta_value; unique on (object_id, meta_key)
|
||||||
|
- `blocks` — id (UUID PK), object_id (FK CASCADE), block_index, data (BYTEA), block_size; unique on (object_id, block_index)
|
||||||
26
todos/POST3-003-repository-layer.md
Normal file
26
todos/POST3-003-repository-layer.md
Normal file
@@ -0,0 +1,26 @@
|
|||||||
|
# POST3-003: Repository layer and Store API
|
||||||
|
|
||||||
|
**Status:** Done
|
||||||
|
**Priority:** P0
|
||||||
|
**Blocked by:** POST3-002
|
||||||
|
|
||||||
|
## Description
|
||||||
|
|
||||||
|
Implement the repository layer (raw SQL CRUD for each table) and the high-level Store API that orchestrates them with transactions and chunking logic.
|
||||||
|
|
||||||
|
## Acceptance Criteria
|
||||||
|
|
||||||
|
- [ ] `repositories/buckets.rs` — create, get_by_name, list, delete, is_empty
|
||||||
|
- [ ] `repositories/objects.rs` — upsert, get, delete, list (with prefix + pagination)
|
||||||
|
- [ ] `repositories/blocks.rs` — insert_block, get_all_blocks (ordered by block_index)
|
||||||
|
- [ ] `repositories/metadata.rs` — insert_batch, get_all (for an object_id)
|
||||||
|
- [ ] `store.rs` — Store struct with all public methods:
|
||||||
|
- create_bucket, head_bucket, delete_bucket, list_buckets
|
||||||
|
- put_object (chunking + MD5 ETag + metadata, all in a transaction)
|
||||||
|
- get_object (reassemble blocks + fetch metadata)
|
||||||
|
- head_object, delete_object, list_objects_v2
|
||||||
|
- get_object_metadata
|
||||||
|
- [ ] put_object correctly splits body into 1 MiB blocks
|
||||||
|
- [ ] get_object correctly reassembles blocks in order
|
||||||
|
- [ ] Overwriting an object deletes old blocks+metadata via CASCADE
|
||||||
|
- [ ] `cargo check -p post3` passes
|
||||||
20
todos/POST3-004-s3-server-skeleton.md
Normal file
20
todos/POST3-004-s3-server-skeleton.md
Normal file
@@ -0,0 +1,20 @@
|
|||||||
|
# POST3-004: S3 HTTP server skeleton
|
||||||
|
|
||||||
|
**Status:** Done
|
||||||
|
**Priority:** P0
|
||||||
|
**Blocked by:** POST3-003
|
||||||
|
|
||||||
|
## Description
|
||||||
|
|
||||||
|
Build the post3-server binary with CLI (clap), state management, notmad component lifecycle, and axum router with all S3 routes wired up.
|
||||||
|
|
||||||
|
## Acceptance Criteria
|
||||||
|
|
||||||
|
- [ ] `main.rs` — dotenvy + tracing_subscriber + cli::execute()
|
||||||
|
- [ ] `cli.rs` — clap App with `serve` subcommand
|
||||||
|
- [ ] `cli/serve.rs` — ServeCommand with --host flag, starts notmad::Mad with S3Server
|
||||||
|
- [ ] `state.rs` — State struct (PgPool + Store), runs migrations on new()
|
||||||
|
- [ ] `s3/mod.rs` — S3Server implementing notmad::Component
|
||||||
|
- [ ] `s3/router.rs` — all 9 routes mapped to handler functions
|
||||||
|
- [ ] Server starts, binds to port, responds to requests
|
||||||
|
- [ ] `cargo check -p post3-server` passes
|
||||||
20
todos/POST3-005-xml-responses-extractors.md
Normal file
20
todos/POST3-005-xml-responses-extractors.md
Normal file
@@ -0,0 +1,20 @@
|
|||||||
|
# POST3-005: XML response builders and extractors
|
||||||
|
|
||||||
|
**Status:** Done
|
||||||
|
**Priority:** P1
|
||||||
|
**Blocked by:** POST3-004
|
||||||
|
|
||||||
|
## Description
|
||||||
|
|
||||||
|
Implement S3-compatible XML response serialization and request query parameter extraction.
|
||||||
|
|
||||||
|
## Acceptance Criteria
|
||||||
|
|
||||||
|
- [ ] `s3/responses.rs`:
|
||||||
|
- `list_buckets_xml(buckets)` — ListAllMyBucketsResult with Owner
|
||||||
|
- `list_objects_v2_xml(bucket, result, max_keys)` — ListBucketResult with Contents
|
||||||
|
- `error_xml(code, message, resource)` — S3 Error response
|
||||||
|
- [ ] `s3/extractors.rs`:
|
||||||
|
- `ListObjectsQuery` — list-type, prefix, max-keys, continuation-token, start-after, delimiter
|
||||||
|
- [ ] XML output matches S3 format (xmlns, element names, date format ISO 8601)
|
||||||
|
- [ ] All responses include `x-amz-request-id` header (UUID)
|
||||||
30
todos/POST3-006-s3-handlers.md
Normal file
30
todos/POST3-006-s3-handlers.md
Normal file
@@ -0,0 +1,30 @@
|
|||||||
|
# POST3-006: S3 bucket and object handlers
|
||||||
|
|
||||||
|
**Status:** Done
|
||||||
|
**Priority:** P1
|
||||||
|
**Blocked by:** POST3-005
|
||||||
|
|
||||||
|
## Description
|
||||||
|
|
||||||
|
Implement all S3 HTTP request handlers that bridge the S3 REST API to the core Store API.
|
||||||
|
|
||||||
|
## Acceptance Criteria
|
||||||
|
|
||||||
|
### Bucket handlers (`s3/handlers/buckets.rs`)
|
||||||
|
- [ ] CreateBucket — PUT /{bucket} → 200 + Location header
|
||||||
|
- [ ] HeadBucket — HEAD /{bucket} → 200 or 404
|
||||||
|
- [ ] DeleteBucket — DELETE /{bucket} → 204 (409 if not empty)
|
||||||
|
- [ ] ListBuckets — GET / → 200 + XML
|
||||||
|
|
||||||
|
### Object handlers (`s3/handlers/objects.rs`)
|
||||||
|
- [ ] PutObject — PUT /{bucket}/{*key} → 200 + ETag header; reads x-amz-meta-* from request headers
|
||||||
|
- [ ] GetObject — GET /{bucket}/{*key} → 200 + body + ETag + Content-Type + Content-Length + Last-Modified + x-amz-meta-* headers
|
||||||
|
- [ ] HeadObject — HEAD /{bucket}/{*key} → 200 + metadata headers (no body)
|
||||||
|
- [ ] DeleteObject — DELETE /{bucket}/{*key} → 204
|
||||||
|
- [ ] ListObjectsV2 — GET /{bucket}?list-type=2 → 200 + XML
|
||||||
|
|
||||||
|
### Error handling
|
||||||
|
- [ ] NoSuchBucket → 404 + XML error
|
||||||
|
- [ ] NoSuchKey → 404 + XML error
|
||||||
|
- [ ] BucketAlreadyOwnedByYou → 409 + XML error
|
||||||
|
- [ ] BucketNotEmpty → 409 + XML error
|
||||||
27
todos/POST3-007-integration-tests.md
Normal file
27
todos/POST3-007-integration-tests.md
Normal file
@@ -0,0 +1,27 @@
|
|||||||
|
# POST3-007: Integration tests with aws-sdk-s3
|
||||||
|
|
||||||
|
**Status:** Done
|
||||||
|
**Priority:** P1
|
||||||
|
**Blocked by:** POST3-006
|
||||||
|
|
||||||
|
## Description
|
||||||
|
|
||||||
|
End-to-end integration tests using the official AWS S3 Rust SDK to validate the full stack.
|
||||||
|
|
||||||
|
## Acceptance Criteria
|
||||||
|
|
||||||
|
- [ ] `tests/common/mod.rs` — TestServer helper:
|
||||||
|
- Starts server on ephemeral port (port 0)
|
||||||
|
- Configures aws-sdk-s3 with force_path_style, dummy creds, custom endpoint
|
||||||
|
- Cleans database between tests
|
||||||
|
- [ ] Test: create + list buckets
|
||||||
|
- [ ] Test: head bucket (exists + not exists)
|
||||||
|
- [ ] Test: delete bucket
|
||||||
|
- [ ] Test: put + get small object (body roundtrip)
|
||||||
|
- [ ] Test: put large object (5 MiB, verify chunked storage + reassembly)
|
||||||
|
- [ ] Test: head object (size, etag, content-type)
|
||||||
|
- [ ] Test: delete object (verify 404 after)
|
||||||
|
- [ ] Test: list objects v2 with prefix filter
|
||||||
|
- [ ] Test: overwrite object (verify latest version)
|
||||||
|
- [ ] Test: user metadata roundtrip (x-amz-meta-* headers)
|
||||||
|
- [ ] All tests pass with `cargo nextest run`
|
||||||
26
todos/POST3-008-client-sdk.md
Normal file
26
todos/POST3-008-client-sdk.md
Normal file
@@ -0,0 +1,26 @@
|
|||||||
|
# POST3-008: Client SDK crate
|
||||||
|
|
||||||
|
**Status:** Done
|
||||||
|
**Priority:** P0
|
||||||
|
**Blocked by:** —
|
||||||
|
|
||||||
|
## Description
|
||||||
|
|
||||||
|
Create a `crates/post3-sdk/` client crate that wraps `aws-sdk-s3` with post3-specific defaults.
|
||||||
|
|
||||||
|
## What was built
|
||||||
|
|
||||||
|
- [x] `crates/post3-sdk/Cargo.toml` — depends on aws-sdk-s3, aws-credential-types, aws-types, aws-config
|
||||||
|
- [x] `Post3Client` struct wrapping `aws_sdk_s3::Client`
|
||||||
|
- [x] `Post3Client::new(endpoint_url)` — builds client with force_path_style, dummy creds, us-east-1
|
||||||
|
- [x] `Post3Client::builder()` — for advanced config (custom creds, region, etc.)
|
||||||
|
- [x] Re-exports: `aws_sdk_s3` and `bytes`
|
||||||
|
- [x] Convenience methods:
|
||||||
|
- `create_bucket(name)`, `head_bucket(name)`, `delete_bucket(name)`, `list_buckets()`
|
||||||
|
- `put_object(bucket, key, body: impl AsRef<[u8]>)`
|
||||||
|
- `get_object(bucket, key)` → `Result<Bytes>`
|
||||||
|
- `head_object(bucket, key)` → `Result<Option<ObjectInfo>>`
|
||||||
|
- `delete_object(bucket, key)`
|
||||||
|
- `list_objects(bucket, prefix)` → `Result<Vec<ObjectInfo>>`
|
||||||
|
- [x] `inner()` access to `aws_sdk_s3::Client`
|
||||||
|
- [x] Unit tests + doc-tests pass
|
||||||
30
todos/POST3-009-ci-dagger.md
Normal file
30
todos/POST3-009-ci-dagger.md
Normal file
@@ -0,0 +1,30 @@
|
|||||||
|
# POST3-009: CI pipeline with Dagger Rust SDK
|
||||||
|
|
||||||
|
**Status:** Done
|
||||||
|
**Priority:** P1
|
||||||
|
**Blocked by:** —
|
||||||
|
|
||||||
|
## Description
|
||||||
|
|
||||||
|
Set up a Dagger-based CI pipeline using a custom self-contained `ci/` crate with `dagger-sdk` directly (not the external `cuddle-ci` / `dagger-rust` components, which are too opinionated for post3's context).
|
||||||
|
|
||||||
|
## What was built
|
||||||
|
|
||||||
|
- [x] `ci/` added as workspace member
|
||||||
|
- [x] `ci/Cargo.toml` with dependencies: dagger-sdk, eyre, tokio, clap
|
||||||
|
- [x] `ci/src/main.rs` — custom pipeline with:
|
||||||
|
- `pr` and `main` subcommands (clap CLI)
|
||||||
|
- Source loading with dependency caching (skeleton files pattern from dagger-components)
|
||||||
|
- `rustlang/rust:nightly` base with clang + mold 2.3.3 for fast linking
|
||||||
|
- Dagger cache volumes for target/ and cargo registry
|
||||||
|
- `cargo check --workspace` compilation check
|
||||||
|
- PostgreSQL 18 as Dagger service container for integration tests
|
||||||
|
- `cargo test --workspace -- --test-threads=1` against Dagger PG
|
||||||
|
- Release binary build + packaging into `debian:bookworm-slim`
|
||||||
|
- `post3-server --help` sanity check in final image
|
||||||
|
- [x] `mise.toml` tasks: `ci:pr`, `ci:main`
|
||||||
|
- [x] No container publish (deferred until registry is decided)
|
||||||
|
|
||||||
|
## Reference
|
||||||
|
|
||||||
|
Pattern inspired by dagger-components (`/home/kjuulh/git/git.kjuulh.io/kjuulh/dagger-components`) but self-contained — no external git dependencies.
|
||||||
34
todos/POST3-010-docker-compose-production.md
Normal file
34
todos/POST3-010-docker-compose-production.md
Normal file
@@ -0,0 +1,34 @@
|
|||||||
|
# POST3-010: Production Docker Compose setup
|
||||||
|
|
||||||
|
**Status:** Todo
|
||||||
|
**Priority:** P1
|
||||||
|
**Blocked by:** POST3-009
|
||||||
|
|
||||||
|
## Description
|
||||||
|
|
||||||
|
Create a production-oriented Docker Compose setup that runs post3-server alongside PostgreSQL, with proper networking, health checks, and configuration.
|
||||||
|
|
||||||
|
## Acceptance Criteria
|
||||||
|
|
||||||
|
- [ ] `Dockerfile` (multi-stage) for post3-server:
|
||||||
|
- Builder stage: rust image, compile release binary
|
||||||
|
- Runtime stage: debian-slim or alpine, copy binary + migrations
|
||||||
|
- Health check endpoint (add `GET /health` to router)
|
||||||
|
- Non-root user
|
||||||
|
- [ ] `templates/docker-compose.production.yaml`:
|
||||||
|
- `postgres` service (PostgreSQL 18, persistent volume, health check)
|
||||||
|
- `post3` service (built image, depends_on postgres healthy, DATABASE_URL from env)
|
||||||
|
- Named volumes for PostgreSQL data
|
||||||
|
- Internal network
|
||||||
|
- Port 9000 exposed for post3
|
||||||
|
- [ ] `templates/.env.example` — sample env file for production
|
||||||
|
- [ ] `GET /health` endpoint on the server (returns 200 when DB is reachable)
|
||||||
|
- [ ] `mise.toml` tasks:
|
||||||
|
- `prod:up` — start production compose
|
||||||
|
- `prod:down` — stop production compose
|
||||||
|
- `prod:build` — build the Docker image
|
||||||
|
- [ ] README section on production deployment
|
||||||
|
|
||||||
|
## Notes
|
||||||
|
|
||||||
|
The CI pipeline (POST3-009) will produce the container image. This ticket handles the compose orchestration for self-hosted deployment.
|
||||||
29
todos/POST3-011-examples.md
Normal file
29
todos/POST3-011-examples.md
Normal file
@@ -0,0 +1,29 @@
|
|||||||
|
# POST3-011: Usage examples
|
||||||
|
|
||||||
|
**Status:** Done
|
||||||
|
**Priority:** P1
|
||||||
|
**Blocked by:** POST3-008
|
||||||
|
|
||||||
|
## Description
|
||||||
|
|
||||||
|
Create runnable examples demonstrating how to use post3 with both the SDK and shell tools.
|
||||||
|
|
||||||
|
## What was built
|
||||||
|
|
||||||
|
### Rust examples (`crates/post3-sdk/examples/`)
|
||||||
|
- [x] `basic.rs` — create bucket, put/get/delete object, list objects with prefix filter
|
||||||
|
- [x] `metadata.rs` — put object with custom metadata (x-amz-meta-*), retrieve via head/get
|
||||||
|
- [x] `aws_sdk_direct.rs` — use aws-sdk-s3 directly (without post3-sdk wrapper), shows raw config
|
||||||
|
|
||||||
|
### Script examples (`examples/`)
|
||||||
|
- [x] `aws-cli.sh` — shell script demonstrating all operations via `aws` CLI
|
||||||
|
- [x] `curl.sh` — shell script demonstrating raw HTTP calls with curl
|
||||||
|
|
||||||
|
### mise tasks
|
||||||
|
- [x] `example:basic` — runs the basic Rust example
|
||||||
|
- [x] `example:metadata` — runs the metadata Rust example
|
||||||
|
- [x] `example:aws-sdk` — runs the raw aws-sdk-s3 example
|
||||||
|
- [x] `example:cli` — runs the AWS CLI example script
|
||||||
|
- [x] `example:curl` — runs the curl example script
|
||||||
|
|
||||||
|
All examples tested and verified working against live server.
|
||||||
70
todos/POST3-012-authentication.md
Normal file
70
todos/POST3-012-authentication.md
Normal file
@@ -0,0 +1,70 @@
|
|||||||
|
# POST3-012: Authentication system
|
||||||
|
|
||||||
|
**Status:** Todo
|
||||||
|
**Priority:** P1
|
||||||
|
**Blocked by:** —
|
||||||
|
|
||||||
|
## Description
|
||||||
|
|
||||||
|
Add authentication to post3-server. Currently the server accepts any request regardless of credentials. We need to support API key-based authentication that is compatible with the AWS SigV4 signing process (so the official AWS SDKs and CLI work transparently).
|
||||||
|
|
||||||
|
## Approach
|
||||||
|
|
||||||
|
### Phase 1: API key authentication (simple)
|
||||||
|
|
||||||
|
Use a shared secret key pair (access_key_id + secret_access_key) configured via environment variables. The server validates that the `Authorization` header contains a valid AWS SigV4 signature computed with the known secret.
|
||||||
|
|
||||||
|
- [ ] Database table `api_keys`:
|
||||||
|
```sql
|
||||||
|
CREATE TABLE api_keys (
|
||||||
|
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||||
|
access_key_id TEXT NOT NULL,
|
||||||
|
secret_key TEXT NOT NULL, -- stored hashed or plaintext for SigV4
|
||||||
|
name TEXT NOT NULL, -- human-readable label
|
||||||
|
is_active BOOLEAN NOT NULL DEFAULT true,
|
||||||
|
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
|
||||||
|
);
|
||||||
|
CREATE UNIQUE INDEX idx_api_keys_access_key ON api_keys (access_key_id);
|
||||||
|
```
|
||||||
|
- [ ] SigV4 signature verification middleware (axum layer)
|
||||||
|
- [ ] Extract access_key_id from `Authorization` header
|
||||||
|
- [ ] Look up secret_key from `api_keys` table
|
||||||
|
- [ ] Recompute the SigV4 signature and compare
|
||||||
|
- [ ] Return `403 AccessDenied` XML error on mismatch
|
||||||
|
- [ ] Environment variable `POST3_AUTH_ENABLED=true|false` to toggle (default: false for backward compat)
|
||||||
|
|
||||||
|
### Phase 2: Per-bucket ACLs (future)
|
||||||
|
|
||||||
|
- [ ] `bucket_permissions` table linking api_keys to buckets with read/write/admin roles
|
||||||
|
- [ ] Enforce permissions in handlers
|
||||||
|
- [ ] Admin API for managing keys and permissions
|
||||||
|
|
||||||
|
### Phase 3: Admin CLI
|
||||||
|
|
||||||
|
- [ ] `post3-server admin create-key --name "my-app"` — generates and prints access_key_id + secret_access_key
|
||||||
|
- [ ] `post3-server admin list-keys` — list all API keys
|
||||||
|
- [ ] `post3-server admin revoke-key --access-key-id AKIA...` — deactivate a key
|
||||||
|
|
||||||
|
## Migration
|
||||||
|
|
||||||
|
- [ ] New migration file for `api_keys` table
|
||||||
|
- [ ] Existing deployments unaffected (auth disabled by default)
|
||||||
|
|
||||||
|
## SDK Integration
|
||||||
|
|
||||||
|
- [ ] `Post3Client::builder().credentials(access_key, secret_key)` passes real credentials
|
||||||
|
- [ ] When auth is disabled, dummy credentials still work
|
||||||
|
|
||||||
|
## Testing
|
||||||
|
|
||||||
|
- [ ] Test: request with valid signature succeeds
|
||||||
|
- [ ] Test: request with invalid signature returns 403
|
||||||
|
- [ ] Test: request with unknown access_key_id returns 403
|
||||||
|
- [ ] Test: auth disabled mode accepts any credentials
|
||||||
|
- [ ] Test: admin CLI key management commands
|
||||||
|
|
||||||
|
## Notes
|
||||||
|
|
||||||
|
SigV4 verification requires access to the raw request (method, path, headers, body hash). The `aws-sigv4` crate from the AWS SDK can help with signature computation on the server side. Alternatively, implement the HMAC-SHA256 chain manually — it's well-documented.
|
||||||
|
|
||||||
|
The secret_key must be stored in a form that allows recomputing signatures (SigV4 uses the secret directly in HMAC, not a hash of it). This means secret_keys are stored as plaintext or with reversible encryption. This is inherent to SigV4's design.
|
||||||
68
todos/POST3-013-s3-compliance.md
Normal file
68
todos/POST3-013-s3-compliance.md
Normal file
@@ -0,0 +1,68 @@
|
|||||||
|
# POST3-013: S3 Compliance Testing with Ceph s3-tests
|
||||||
|
|
||||||
|
## Status: Done
|
||||||
|
|
||||||
|
## Summary
|
||||||
|
|
||||||
|
Integrate Ceph s3-tests (the industry-standard S3 conformance suite) to validate post3's S3 compatibility. Uses the filesystem backend (`--backend fs`) for fast, database-free test runs.
|
||||||
|
|
||||||
|
## Results
|
||||||
|
|
||||||
|
**124 tests passing, 0 failures, 0 errors** out of 829 total tests (705 deselected for unimplemented features).
|
||||||
|
|
||||||
|
## What was done
|
||||||
|
|
||||||
|
### Phase 1 — Missing S3 operations (blocking for s3-tests)
|
||||||
|
- [x] `ListObjectVersions` stub — `GET /{bucket}?versions` (returns objects as version "null")
|
||||||
|
- [x] `DeleteObjects` batch delete — `POST /{bucket}?delete`
|
||||||
|
- [x] `ListObjects` v1 — `GET /{bucket}` without `list-type=2`
|
||||||
|
- [x] `GetBucketLocation` — `GET /{bucket}?location`
|
||||||
|
- [x] `--backend fs/pg` CLI flag + `--data-dir`
|
||||||
|
- [x] Bucket naming validation (S3 rules: 3-63 chars, lowercase, no IP format)
|
||||||
|
|
||||||
|
### Phase 2 — Delimiter & listing compliance
|
||||||
|
- [x] Delimiter + CommonPrefixes in `list_objects_v2` (both backends)
|
||||||
|
- [x] V1 and V2 XML responses emit delimiter/common_prefixes
|
||||||
|
- [x] MaxKeys limits total objects+common_prefixes combined (sorted interleave)
|
||||||
|
- [x] MaxKeys=0 returns empty, non-truncated result
|
||||||
|
- [x] StartAfter + ContinuationToken echo in v2 response
|
||||||
|
- [x] Owner element in v1 Contents
|
||||||
|
- [x] Empty delimiter treated as absent
|
||||||
|
|
||||||
|
### Phase 3 — Test infrastructure
|
||||||
|
- [x] s3-tests git submodule (pinned at `06e2c57`)
|
||||||
|
- [x] `s3-compliance/s3tests.conf.template`
|
||||||
|
- [x] `s3-compliance/run-s3-tests.sh`
|
||||||
|
- [x] mise tasks: `test:s3-compliance` and `test:s3-compliance:dry`
|
||||||
|
|
||||||
|
### Phase 4 — Compliance fixes from test runs
|
||||||
|
- [x] ETag quoting normalization in multipart completion (both backends)
|
||||||
|
- [x] ListObjectVersions pagination (NextKeyMarker/NextVersionIdMarker when truncated)
|
||||||
|
- [x] ListObjectVersions passes key-marker and delimiter from query params
|
||||||
|
- [x] EntityTooSmall validation (non-last parts must be >= 5 MB)
|
||||||
|
- [x] DeleteObjects 1000 key limit
|
||||||
|
- [x] delete_object returns 404 for non-existent bucket
|
||||||
|
- [x] Common prefix filtering by continuation token
|
||||||
|
|
||||||
|
## Usage
|
||||||
|
|
||||||
|
```sh
|
||||||
|
mise run test:s3-compliance # run filtered s3-tests
|
||||||
|
mise run test:s3-compliance:dry # list which tests would run
|
||||||
|
```
|
||||||
|
|
||||||
|
## Excluded test categories
|
||||||
|
|
||||||
|
Features post3 doesn't implement (excluded via markers/keywords):
|
||||||
|
ACLs, bucket policy, encryption, CORS, lifecycle, versioning, object lock,
|
||||||
|
tagging, S3 Select, S3 website, IAM, STS, SSE, anonymous access, presigned URLs,
|
||||||
|
CopyObject, logging, notifications, storage classes, auth signature validation,
|
||||||
|
Range header, conditional requests, public access block.
|
||||||
|
|
||||||
|
## Future work
|
||||||
|
|
||||||
|
- [ ] Add CI step (`ci/src/main.rs`) for automated s3-compliance runs
|
||||||
|
- [ ] Gradually reduce exclusion list as more features are implemented
|
||||||
|
- [ ] Range header support (would enable ~10 more tests)
|
||||||
|
- [ ] CopyObject support (would enable ~20 more tests)
|
||||||
|
- [ ] Idempotent CompleteMultipartUpload (Ceph-specific, 2 excluded tests)
|
||||||
Reference in New Issue
Block a user