Production servers often read, upload, and write files to S3-compatible object storage services instead of the local filesystem. Historically, that has meant the local filesystem APIs you use in development can’t be used in production. When you use Bun, things are different.
Bun provides fast, native bindings for interacting with S3-compatible object storage services. Its S3 API is designed to feel similar to fetch’s Response and Blob APIs (like Bun’s local filesystem APIs).
s3.ts
```ts
import { s3, write, S3Client } from "bun";

// Bun.s3 reads environment variables for credentials
// file() returns a lazy reference to a file on S3
const metadata = s3.file("123.json");

// Download from S3 as JSON
const data = await metadata.json();

// Upload to S3
await write(metadata, JSON.stringify({ name: "John", age: 30 }));

// Presign a URL (synchronous - no network request needed)
const url = metadata.presign({
  acl: "public-read",
  expiresIn: 60 * 60 * 24, // 1 day
});

// Delete the file
await metadata.delete();
```
S3 is the de facto standard internet filesystem. Bun’s S3 API works with S3-compatible storage services like:

- AWS S3
- Cloudflare R2
- DigitalOcean Spaces
- MinIO
- Supabase Storage
Bun.s3 is equivalent to new Bun.S3Client(), relying on environment variables for credentials. To explicitly set credentials, pass them to the Bun.S3Client constructor.
s3.ts
```ts
import { S3Client } from "bun";

const client = new S3Client({
  accessKeyId: "your-access-key",
  secretAccessKey: "your-secret-key",
  bucket: "my-bucket",
  // sessionToken: "..."
  // acl: "public-read",
  // endpoint: "https://s3.us-east-1.amazonaws.com",
  // endpoint: "https://<account-id>.r2.cloudflarestorage.com", // Cloudflare R2
  // endpoint: "https://<region>.digitaloceanspaces.com", // DigitalOcean Spaces
  // endpoint: "http://localhost:9000", // MinIO
});

// Bun.s3 is a global singleton that is equivalent to `new Bun.S3Client()`
```
The file method in S3Client returns a lazy reference to a file on S3.
s3.ts
```ts
// A lazy reference to a file on S3
const s3file: S3File = client.file("123.json");
```
Like Bun.file(path), the S3Client’s file method is synchronous. It performs zero network requests until you call a method that requires one.
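As a quick illustration (assuming the client configured above; the key is made up), creating the reference is free, and the network is touched only when you consume it:

```ts
// No network request happens here
const ref = client.file("huge-file.bin");

// A request is only made when you consume the reference
if (await ref.exists()) {
  console.log(await ref.text());
}
```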
If you’ve used the fetch API, you’re familiar with the Response and Blob APIs. S3File extends Blob. The same methods that work on Blob also work on S3File.
s3.ts
```ts
// Read an S3File as text
const text = await s3file.text();

// Read an S3File as JSON
const json = await s3file.json();

// Read an S3File as an ArrayBuffer
const buffer = await s3file.arrayBuffer();

// Get only the first 1024 bytes
const partial = await s3file.slice(0, 1024).text();

// Stream the file
const stream = s3file.stream();
for await (const chunk of stream) {
  console.log(chunk);
}
```
Methods like text(), json(), bytes(), or arrayBuffer() avoid duplicating the string or bytes in memory when possible. If the text happens to be ASCII, Bun directly transfers the string to JavaScriptCore (the engine) without transcoding and without duplicating the string in memory. When you use .bytes() or .arrayBuffer(), it will also avoid duplicating the bytes in memory.

These helper methods not only simplify the API, they also make it faster.
Bun automatically handles multipart uploads for large files and provides streaming capabilities. The same API that works for local files also works for S3 files.
s3.ts
```ts
// Write a large file
const bigFile = Buffer.alloc(10 * 1024 * 1024); // 10MB

const writer = s3file.writer({
  // Automatically retry on network errors up to 3 times
  retry: 3,

  // Queue up to 10 requests at a time
  queueSize: 10,

  // Upload in 5 MB chunks
  partSize: 5 * 1024 * 1024,
});

for (let i = 0; i < 10; i++) {
  writer.write(bigFile);
  await writer.flush();
}
await writer.end();
```
When your production service needs to let users upload files to your server, it’s often more reliable for the user to upload directly to S3 instead of your server acting as an intermediary.

To facilitate this, you can presign URLs for S3 files. This generates a URL with a signature that allows a user to securely upload that specific file to S3, without exposing your credentials or granting them unnecessary access to your bucket.

The default behavior is to generate a GET URL that expires in 24 hours. Bun attempts to infer the content type from the file extension. If inference is not possible, it will default to application/octet-stream.
s3.ts
```ts
import { s3 } from "bun";

// Generate a presigned URL that expires in 24 hours (default)
const download = s3.presign("my-file.txt"); // GET, text/plain, expires in 24 hours

const upload = s3.presign("my-file", {
  expiresIn: 3600, // 1 hour
  method: "PUT",
  type: "application/json", // Sets response-content-type in the presigned URL
});

// Presign with content disposition (e.g. force download with a specific filename)
const downloadUrl = s3.presign("report.pdf", {
  expiresIn: 3600,
  contentDisposition: 'attachment; filename="quarterly-report.pdf"',
});

// You can also call .presign() on a file reference, but avoid creating
// one just to presign it (to avoid unnecessary memory usage).
const myFile = s3.file("my-file.txt");
const presignedFile = myFile.presign({
  expiresIn: 3600, // 1 hour
});
```
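To make the round trip concrete, here is a minimal sketch (key name and payload are illustrative) of a client consuming a presigned PUT URL with plain fetch:

```ts
import { s3 } from "bun";

// Server side: presign a PUT URL for a specific key
const uploadUrl = s3.presign("my-file", {
  expiresIn: 3600,
  method: "PUT",
  type: "application/json",
});

// Client side: upload directly to S3 with the presigned URL,
// without ever seeing your credentials
const res = await fetch(uploadUrl, {
  method: "PUT",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ name: "John", age: 30 }),
});
console.log(res.ok);
```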
To redirect users to a presigned URL for an S3 file, pass an S3File instance to a Response object as the body. Bun automatically responds with a redirect to the file’s presigned URL, saving you the memory, time, and bandwidth cost of downloading the file to your server and sending it back to the user.
s3.ts
```ts
const response = new Response(s3file);
console.log(response);
```
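For instance, a minimal sketch of a Bun.serve() handler that redirects each request to a presigned URL for the matching object (the path-to-key mapping is illustrative):

```ts
import { s3 } from "bun";

Bun.serve({
  port: 3000,
  fetch(req) {
    // Map the request path to an S3 key (illustrative routing)
    const key = new URL(req.url).pathname.slice(1);

    // Responding with an S3File body redirects to its presigned URL
    return new Response(s3.file(key));
  },
});
```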
To use Bun’s S3 client with MinIO, set endpoint to the URL that MinIO is running on in the S3Client constructor.
s3.ts
```ts
import { S3Client } from "bun";

const minio = new S3Client({
  accessKeyId: "access-key",
  secretAccessKey: "secret-key",
  bucket: "my-bucket",

  // Make sure to use the correct endpoint URL
  // It might not be localhost in production!
  endpoint: "http://localhost:9000",
});
```
To use Bun’s S3 client with Supabase, set endpoint to the Supabase endpoint in the S3Client constructor. The Supabase endpoint includes your account ID and the /storage/v1/s3 path. Make sure to turn on “Enable connection via S3 protocol” in the Supabase dashboard at https://supabase.com/dashboard/project/<account-id>/settings/storage, and to use the region shown in that same section.
s3.ts
```ts
import { S3Client } from "bun";

const supabase = new S3Client({
  accessKeyId: "access-key",
  secretAccessKey: "secret-key",
  bucket: "my-bucket",
  region: "us-west-1",
  endpoint: "https://<account-id>.supabase.co/storage/v1/s3",
});
```
Using Bun’s S3Client with S3 Virtual Hosted-Style endpoints
When using an S3 Virtual Hosted-Style endpoint, you need to set the virtualHostedStyle option to true.

If you don’t specify an endpoint, Bun will automatically determine the AWS S3 endpoint using the provided region and bucket.

- If no region is specified, Bun defaults to us-east-1.
- If you explicitly provide an endpoint, you don’t need to specify a bucket name.
s3.ts
```ts
import { S3Client } from "bun";

// AWS S3 endpoint inferred from region and bucket
const s3 = new S3Client({
  accessKeyId: "access-key",
  secretAccessKey: "secret-key",
  bucket: "my-bucket",
  virtualHostedStyle: true,
  // endpoint: "https://my-bucket.s3.us-east-1.amazonaws.com",
  // region: "us-east-1",
});

// AWS S3
const s3WithEndpoint = new S3Client({
  accessKeyId: "access-key",
  secretAccessKey: "secret-key",
  endpoint: "https://<bucket-name>.s3.<region>.amazonaws.com",
  virtualHostedStyle: true,
});

// Cloudflare R2
const r2WithEndpoint = new S3Client({
  accessKeyId: "access-key",
  secretAccessKey: "secret-key",
  endpoint: "https://<bucket-name>.<account-id>.r2.cloudflarestorage.com",
  virtualHostedStyle: true,
});
```
Credentials are one of the hardest parts of using S3. By default, Bun reads the following environment variables for credentials.
| Option name | Environment variable |
| --- | --- |
| `accessKeyId` | `S3_ACCESS_KEY_ID` |
| `secretAccessKey` | `S3_SECRET_ACCESS_KEY` |
| `region` | `S3_REGION` |
| `endpoint` | `S3_ENDPOINT` |
| `bucket` | `S3_BUCKET` |
| `sessionToken` | `S3_SESSION_TOKEN` |
For each of the above options, if the S3_* environment variable is not set, Bun will also check the corresponding AWS_* environment variable.
| Option name | Fallback environment variable |
| --- | --- |
| `accessKeyId` | `AWS_ACCESS_KEY_ID` |
| `secretAccessKey` | `AWS_SECRET_ACCESS_KEY` |
| `region` | `AWS_REGION` |
| `endpoint` | `AWS_ENDPOINT` |
| `bucket` | `AWS_BUCKET` |
| `sessionToken` | `AWS_SESSION_TOKEN` |
These environment variables are read from .env files or from the process environment at initialization time (process.env is not used for this).

These defaults are overridden by the options you pass to s3.file(credentials), new Bun.S3Client(credentials), or any of the methods that accept credentials. So if, for example, you use the same credentials for different buckets, you can set the credentials once in your .env file and then pass bucket: "my-bucket" to the s3.file() function without having to specify all the credentials again.
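For example, a sketch assuming S3_ACCESS_KEY_ID and S3_SECRET_ACCESS_KEY are set in your .env, with only the bucket varying per call (bucket and key names are made up):

```ts
import { s3 } from "bun";

// Credentials come from the S3_* / AWS_* environment variables;
// only the bucket differs between these two references
const invoice = s3.file("invoices/2024-01.json", { bucket: "billing" });
const backup = s3.file("2024-01.tar.gz", { bucket: "backups" });
```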
S3File instances are created by calling file() on an S3Client instance or the s3.file() function. Like Bun.file(), S3File instances are lazy. They don’t refer to something that necessarily exists at the time of creation. That’s why all the methods that don’t involve network requests are fully synchronous.
Type Reference
```ts
interface S3File extends Blob {
  slice(start: number, end?: number): S3File;

  text(): Promise<string>;
  json(): Promise<any>;
  bytes(): Promise<Uint8Array>;
  arrayBuffer(): Promise<ArrayBuffer>;
  stream(options?: S3Options): ReadableStream;

  write(
    data:
      | string
      | Uint8Array
      | ArrayBuffer
      | Blob
      | ReadableStream
      | Response
      | Request,
    options?: BlobPropertyBag,
  ): Promise<number>;

  exists(options?: S3Options): Promise<boolean>;
  unlink(options?: S3Options): Promise<void>;
  delete(options?: S3Options): Promise<void>;
  presign(options?: S3Options): string;
  stat(options?: S3Options): Promise<S3Stat>;

  /**
   * Size is not synchronously available because it requires a network request.
   *
   * @deprecated Use `stat()` instead.
   */
  size: NaN;

  // ... more omitted for brevity
}
```
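Since size can’t be known without a request, here is a short sketch of the stat() path the deprecation note points to (this assumes S3Stat exposes a size field; the key is illustrative):

```ts
import { s3 } from "bun";

const file = s3.file("123.json");

// stat() performs the network request explicitly and returns metadata
const info = await file.stat();
console.log(info.size); // assumes S3Stat exposes a size field
```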
Like Bun.file(), S3File extends Blob, so all the methods that are available on Blob are also available on S3File. The same API for reading data from a local file is also available for reading data from S3.
| Method | Output |
| --- | --- |
| `await s3File.text()` | `string` |
| `await s3File.bytes()` | `Uint8Array` |
| `await s3File.json()` | JSON |
| `await s3File.stream()` | `ReadableStream` |
| `await s3File.arrayBuffer()` | `ArrayBuffer` |
That means using S3File instances with fetch(), Response, and other web APIs that accept Blob instances works out of the box.
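For example, a sketch of passing an S3File straight to fetch() as a request body (the destination URL is a placeholder):

```ts
import { s3 } from "bun";

const s3file = s3.file("123.json");

// S3File extends Blob, so it works anywhere a Blob is accepted,
// such as a fetch() request body (placeholder URL)
await fetch("https://example.com/upload", {
  method: "POST",
  body: s3file,
});
```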
To read a partial range of a file, you can use the slice method.
s3.ts
```ts
const partial = s3file.slice(0, 1024);

// Read the partial range as a Uint8Array
const bytes = await partial.bytes();

// Read the partial range as a string
const text = await partial.text();
```
Internally, this works by using the HTTP Range header to request only the bytes you want. This slice method is the same as Blob.prototype.slice.
Like Response and Blob, S3File assumes UTF-8 encoding by default. When calling one of the text() or json() methods on an S3File:

- When a UTF-16 byte order mark (BOM) is detected, the data is treated as UTF-16. JavaScriptCore natively supports UTF-16, so Bun skips the UTF-8 transcoding process (and strips the BOM). This is mostly good, but it does mean that invalid surrogate pairs in your UTF-16 string are passed through to JavaScriptCore (the same as source code).
- When a UTF-8 BOM is detected, it gets stripped before the string is passed to JavaScriptCore, and invalid UTF-8 codepoints are replaced with the Unicode replacement character (\uFFFD).