
Bun S3 Presigned POST Policy Support #16667

Open
avarayr opened this issue Jan 23, 2025 · 3 comments
Labels
enhancement New feature or request

Comments

@avarayr

avarayr commented Jan 23, 2025

What is the problem this feature would solve?

When implementing browser-based uploads to S3, we need to enforce security policies (file size limits, content types) at the storage level.

Currently, Bun's S3 client supports basic presigned POST but lacks S3's POST policy system, forcing developers to:

  1. Maintain two S3 clients (Bun + AWS SDK) (redundant)
  2. Use PUT uploads with client-side validation (not secure)
  3. Implement server-side proxies for upload validation (not practical)

What is the feature you are proposing to solve the problem?

Add support for S3 POST policies in Bun's native S3 client:

const { url, fields } = s3file.presignPost({
  conditions: [
    ["content-length-range", 0, 10_000_000], // Enforce max file size
    ["eq", "$Content-Type", "image/jpeg"], // Enforce content type
    ["starts-with", "$key", "uploads/"] // Restrict upload path
  ],
  fields: {
    "Content-Type": "image/jpeg",
    "success_action_status": "201"
  }
});

This would:

  • Generate policy documents for S3
  • Return necessary form fields for multipart uploads
  • Enable S3-level validation of uploads before transfer begins
  • Maintain Bun's performance advantages
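For reference, the mechanics behind such a feature are well defined: an S3 POST policy is a JSON document (an expiration plus a list of conditions) that gets base64-encoded, signed with AWS Signature V4, and returned alongside the matching form fields. Below is a minimal sketch of what a presignPost implementation could do internally, assuming standard SigV4 signing; all credential and bucket values are placeholders for illustration, not real API surface:

```typescript
import { createHmac } from "node:crypto";

// Placeholder credentials and settings for illustration only.
const accessKeyId = "AKIAEXAMPLE";
const secretAccessKey = "secret";
const region = "us-east-1";
const bucket = "my-bucket";

const now = new Date();
const dateStamp = now.toISOString().slice(0, 10).replace(/-/g, ""); // YYYYMMDD
const amzDate = dateStamp + "T000000Z";
const credential = `${accessKeyId}/${dateStamp}/${region}/s3/aws4_request`;

// The POST policy document: an expiration plus the conditions
// S3 evaluates before accepting the upload.
const policy = {
  expiration: new Date(now.getTime() + 3600_000).toISOString(),
  conditions: [
    { bucket },
    ["content-length-range", 0, 10_000_000],
    ["starts-with", "$key", "uploads/"],
    { "x-amz-algorithm": "AWS4-HMAC-SHA256" },
    { "x-amz-credential": credential },
    { "x-amz-date": amzDate },
  ],
};

// The base64-encoded policy is both a form field and the
// string-to-sign for Signature V4.
const policyB64 = Buffer.from(JSON.stringify(policy)).toString("base64");

// Derive the SigV4 signing key: an HMAC chain over date, region, service.
const hmac = (key: Buffer | string, data: string) =>
  createHmac("sha256", key).update(data).digest();
const kDate = hmac("AWS4" + secretAccessKey, dateStamp);
const kRegion = hmac(kDate, region);
const kService = hmac(kRegion, "s3");
const kSigning = hmac(kService, "aws4_request");
const signature = createHmac("sha256", kSigning)
  .update(policyB64)
  .digest("hex"); // 64 hex characters

// The form fields a browser would POST along with the file.
const fields = {
  key: "uploads/example.jpg",
  policy: policyB64,
  "x-amz-algorithm": "AWS4-HMAC-SHA256",
  "x-amz-credential": credential,
  "x-amz-date": amzDate,
  "x-amz-signature": signature,
};
```

Note this is exactly what @aws-sdk/s3-presigned-post does under the hood, which suggests the feature is self-contained enough to implement natively without provider-specific logic.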

What alternatives have you considered?

  1. Current Workaround: Using AWS SDK alongside Bun's client
import { createPresignedPost } from "@aws-sdk/s3-presigned-post";

const { url, fields } = await createPresignedPost(awsClient, {
  Bucket, Key,
  Conditions: [["content-length-range", 0, maxSize]]
});
  2. PUT with Client Validation:
const url = s3file.presign({ method: "PUT" });
// Requires client-side size validation, which is not secure
// No S3-level security guarantees
  3. Server Proxy:
// Stream through the server to validate before forwarding to S3
app.post('/upload', (req, res) => {
  if (Number(req.headers['content-length']) > maxSize) {
    return res.status(413).send();
  }
  // Stream to S3...
});

All of these alternatives compromise security, performance, or developer experience.
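For completeness, here is how a client would consume the url and fields returned by the proposed API (or by the AWS SDK workaround above). This is a hedged sketch; the buildPostForm helper is hypothetical, and the field layout follows the standard S3 POST form convention:

```typescript
// Build the multipart form for a presigned POST upload.
// `fields` is assumed to be the map returned by a presignPost-style call.
function buildPostForm(
  fields: Record<string, string>,
  file: Blob,
): FormData {
  const form = new FormData();
  // S3 requires every policy field to precede the file entry.
  for (const [name, value] of Object.entries(fields)) {
    form.append(name, value);
  }
  form.append("file", file);
  return form;
}

// Usage (browser or any fetch-capable runtime):
//   await fetch(url, { method: "POST", body: buildPostForm(fields, file) });
```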

@avarayr avarayr added the enhancement New feature or request label Jan 23, 2025
@zigazajc007

zigazajc007 commented Jan 23, 2025

Something like this:

const upload = s3.presign("my-file", {
  method: 'PUT',
  expiresIn: 3600,
  type: 'application/json',
  minFileSize: 1,
  maxFileSize: 5 * 1024 * 1024 * 1024
});

and Bun does all the magic in the background.

@avarayr
Author

avarayr commented Jan 23, 2025

> Something like this:
>
> const upload = s3.presign("my-file", {
>   method: 'PUT',
>   expiresIn: 3600,
>   type: 'application/json',
>   minFileSize: 1,
>   maxFileSize: 5 * 1024 * 1024 * 1024
> });
>
> and Bun does all the magic in the background.

Not the best idea, to be honest. I've encountered issues where one S3-compatible provider supports only a small subset of restrictions, some use completely different or nonstandard names, and some don't support them at all. It's probably best to leave this to the user.

@zigazajc007

Bun should be able to detect which S3 provider you are using based on the endpoint (except for self-hosted ones, which are usually MinIO or GarageS3).
