# Upload Files
Add an `Upload` parameter to any mutation. The frontend passes a `File` or `Blob`, and the generated client automatically encodes the request as `multipart/form-data`.
## The Code
```rust
use forge::prelude::*;

#[forge::mutation]
pub async fn upload_avatar(
    ctx: &MutationContext,
    file: Upload,
    user_id: Uuid,
) -> Result<String> {
    let bytes = file.bytes();
    let filename = file.name();
    let content_type = file.content_type();

    // Store in S3, R2, local disk, etc.
    let url = format!("https://storage.example.com/avatars/{}/{}", user_id, filename);
    ctx.http()
        .put(&url)
        .body(bytes.to_vec())
        .header("Content-Type", content_type)
        .send()
        .await?;

    Ok(url)
}
```
Frontend:
```ts
const url = await uploadAvatar({
  file: inputElement.files[0],
  userId: currentUser.id,
});
```
## What Happens
When the generated client detects a `File` or `Blob` in your arguments, it automatically switches from JSON to `multipart/form-data`. Non-file arguments are serialized to JSON in a `_json` field. The gateway parses the multipart payload and reconstructs the full argument set with `Upload` instances.
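For a concrete picture of what goes over the wire, here is the `upload_avatar` call reproduced by hand; a sketch using reqwest's multipart support, with the field names (`file`, `_json`) and route shape (`/rpc/{function}/upload`) taken from the progress-tracking example below. The base URL and JSON key casing are assumptions.

```rust
use reqwest::multipart::{Form, Part};

// Sketch: the multipart request the generated client produces, rebuilt with
// reqwest. The file travels as a named part; everything else rides in `_json`.
async fn send_upload_manually(token: &str) -> anyhow::Result<String> {
    let file_part = Part::bytes(std::fs::read("avatar.png")?)
        .file_name("avatar.png")
        .mime_str("image/png")?;

    let form = Form::new()
        .part("file", file_part)
        // Assumed key casing; mirrors the camelCase the frontend passes.
        .text("_json", r#"{"userId":"123e4567-e89b-12d3-a456-426614174000"}"#);

    let resp = reqwest::Client::new()
        .post("https://api.example.com/rpc/upload_avatar/upload") // assumed base URL
        .bearer_auth(token)
        .multipart(form)
        .send()
        .await?;

    Ok(resp.text().await?)
}
```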
## Upload Properties
| Property | Type | Description |
|---|---|---|
| `name()` | `&str` | Original filename from the client |
| `content_type()` | `&str` | MIME type (e.g., `image/png`) |
| `bytes()` | `&Bytes` | File content as a bytes reference |
| `into_bytes()` | `Bytes` | Consume and return owned bytes |
| `len()` | `usize` | File size in bytes |
| `is_empty()` | `bool` | `true` if zero bytes |
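Note the ownership split: `bytes()` borrows, while `into_bytes()` consumes the `Upload`. Copy out any metadata you still need before consuming; a minimal sketch (`store_blob` is a hypothetical storage helper, not part of forge):

```rust
#[forge::mutation]
pub async fn upload_raw(ctx: &MutationContext, file: Upload) -> Result<String> {
    // Read metadata first: into_bytes() consumes the Upload, so name(),
    // content_type(), and len() are unavailable afterwards.
    let name = file.name().to_owned();
    let bytes = file.into_bytes(); // owned Bytes, no extra copy

    // store_blob is a hypothetical helper standing in for your storage SDK.
    store_blob(ctx, &name, bytes).await?;
    Ok(name)
}
```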
## Patterns
### Multiple Files
```rust
#[forge::mutation]
pub async fn upload_gallery(
    ctx: &MutationContext,
    images: Vec<Upload>,
    album_id: Uuid,
) -> Result<Vec<String>> {
    let mut urls = Vec::new();
    for image in images {
        let url = store_image(ctx, &image).await?;
        urls.push(url);
    }
    Ok(urls)
}
```
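The `store_image` helper is left undefined above; one plausible implementation, sketched with the same `ctx.http()` PUT pattern as the avatar example (the storage host is a placeholder):

```rust
// Sketch of the store_image helper: PUT the bytes to object storage
// and return the resulting URL.
async fn store_image(ctx: &MutationContext, image: &Upload) -> Result<String> {
    let url = format!("https://storage.example.com/images/{}", image.name());
    ctx.http()
        .put(&url)
        .body(image.bytes().to_vec())
        .header("Content-Type", image.content_type())
        .send()
        .await?;
    Ok(url)
}
```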
### With Metadata
```rust
#[derive(Deserialize)]
pub struct DocumentUpload {
    title: String,
    tags: Vec<String>,
}

#[forge::mutation]
pub async fn upload_document(
    ctx: &MutationContext,
    file: Upload,
    metadata: DocumentUpload,
) -> Result<Document> {
    let doc = sqlx::query_as!(
        Document,
        r#"
        INSERT INTO documents (title, filename, content_type, size, tags)
        VALUES ($1, $2, $3, $4, $5)
        RETURNING *
        "#,
        metadata.title,
        file.name(),
        file.content_type(),
        file.len() as i64,
        &metadata.tags,
    )
    .fetch_one(ctx.db())
    .await?;

    // Store file bytes separately
    store_file_bytes(&doc.id, file.bytes()).await?;

    Ok(doc)
}
```
Frontend:
```ts
const doc = await uploadDocument({
  file: selectedFile,
  metadata: {
    title: "Q4 Report",
    tags: ["finance", "quarterly"],
  },
});
```
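One caveat with this ordering: the row is inserted before the bytes are written, so a failed `store_file_bytes` leaves an orphaned `documents` row. A best-effort cleanup, sketched as a drop-in replacement for the `store_file_bytes` line above:

```rust
// If byte storage fails, delete the orphaned row before propagating the error.
if let Err(e) = store_file_bytes(&doc.id, file.bytes()).await {
    sqlx::query!("DELETE FROM documents WHERE id = $1", doc.id)
        .execute(ctx.db())
        .await?;
    return Err(e);
}
```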
### Validate Before Processing
```rust
#[forge::mutation]
pub async fn upload_image(
    ctx: &MutationContext,
    file: Upload,
) -> Result<String> {
    // Check content type
    if !file.content_type().starts_with("image/") {
        return Err(Error::validation("Only images allowed"));
    }

    // Check size (already enforced by gateway, but double-check business rules)
    if file.len() > 5 * 1024 * 1024 {
        return Err(Error::validation("Image must be under 5MB"));
    }

    store_image(ctx, &file).await
}
```
### Progress Tracking
For large uploads, use XMLHttpRequest to track progress, since fetch does not expose upload progress events:
```ts
function uploadWithProgress(
  file: File,
  onProgress: (percent: number) => void
): Promise<string> {
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    const formData = new FormData();
    formData.append("file", file);
    formData.append("_json", JSON.stringify({}));

    xhr.upload.onprogress = (e) => {
      if (e.lengthComputable) {
        onProgress(Math.round((e.loaded / e.total) * 100));
      }
    };

    xhr.onload = () => {
      const response = JSON.parse(xhr.responseText);
      if (response.success) {
        resolve(response.data);
      } else {
        reject(new Error(response.error?.message));
      }
    };

    xhr.onerror = () => reject(new Error("Upload failed"));

    xhr.open("POST", `${API_URL}/rpc/upload_image/upload`);
    xhr.setRequestHeader("Authorization", `Bearer ${getToken()}`);
    xhr.send(formData);
  });
}
```
## Context Methods
| Method | Return Type | Description |
|---|---|---|
| `ctx.db()` | `&PgPool` | Database connection pool |
| `ctx.http()` | `HttpClient` | HTTP client for external calls |
| `ctx.auth` | `AuthContext` | Current user authentication |
| `ctx.env(key)` | `Option<String>` | Environment variable lookup |
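For example, the storage host hard-coded in the examples above is better read from configuration via `ctx.env`; a small sketch (the `STORAGE_BASE_URL` variable name is an assumption):

```rust
// STORAGE_BASE_URL is a hypothetical variable name; substitute your own.
fn storage_url(ctx: &MutationContext, filename: &str) -> String {
    let base = ctx
        .env("STORAGE_BASE_URL")
        .unwrap_or_else(|| "https://storage.example.com".to_string());
    format!("{}/{}", base, filename)
}
```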
## Under the Hood
**Streaming upload:** The gateway processes multipart data in chunks using `axum::extract::Multipart`. Each field is read incrementally, preventing memory exhaustion from large files.
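Roughly, the parse loop looks like the following; a sketch written against axum's `Multipart` API, not the gateway's actual source:

```rust
use axum::extract::Multipart;
use bytes::Bytes;

const MAX_FILE_SIZE: usize = 10 * 1024 * 1024;

// Sketch of a gateway-style parse: read each field chunk by chunk, enforce
// the size limit as data arrives, and keep `_json` as the non-file payload.
async fn parse_upload(mut multipart: Multipart) -> anyhow::Result<(String, Vec<(String, Bytes)>)> {
    let mut json_args = String::from("{}");
    let mut files = Vec::new();

    while let Some(mut field) = multipart.next_field().await? {
        let name = field.name().unwrap_or("").to_string();
        if name == "_json" {
            json_args = field.text().await?; // serialized non-file arguments
            continue;
        }

        let mut data = Vec::new();
        while let Some(chunk) = field.chunk().await? {
            if data.len() + chunk.len() > MAX_FILE_SIZE {
                anyhow::bail!("file exceeds size limit");
            }
            data.extend_from_slice(&chunk);
        }
        files.push((name, Bytes::from(data)));
    }

    Ok((json_args, files))
}
```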
**Size limits:** The gateway enforces a 10MB default file size limit and a maximum of 20 upload fields. These limits prevent resource exhaustion attacks.
```rust
const MAX_FILE_SIZE: usize = 10 * 1024 * 1024; // 10MB
const MAX_UPLOAD_FIELDS: usize = 20;
```
**Content-type detection:** The MIME type comes from the `Content-Type` header the browser sets on each multipart field. If the client omits it, the server defaults to `application/octet-stream`.
**Automatic encoding:** The generated TypeScript client inspects arguments recursively. Any `File` or `Blob` triggers multipart encoding:
```ts
// Client detects File/Blob in args
const hasFiles = this.containsFiles(args);
if (hasFiles) {
  const formData = this.buildFormData(args);
  // POST to /rpc/{function}/upload with multipart
} else {
  // POST to /rpc/{function} with JSON
}
```
**Type mapping:** Rust's `Upload` type maps to TypeScript's `File | Blob`. The generated client accepts either.
## Testing
```rust
#[tokio::test]
async fn test_upload_avatar() {
    let user_id = Uuid::new_v4();

    let ctx = TestMutationContext::builder()
        .as_user(user_id)
        .mock_http("storage.example.com/*", |_req| {
            HttpResponse::ok().body("https://storage.example.com/avatar.png")
        })
        .build()
        .await;

    let file = Upload::new(
        "avatar.png",
        "image/png",
        Bytes::from(include_bytes!("../fixtures/test-image.png").to_vec()),
    );

    let url = upload_avatar(&ctx, file, user_id).await.unwrap();
    assert!(url.contains("avatar.png"));
}
```