API Docs
USDC Top-Ups
Browser billing uses the intent/submit flow. Agent and headless clients can use x402. Jump straight to the x402 Credits section for details.
OpenAPI spec: https://us-west-01-firestarter.pipenetwork.com/openapi.yaml
Quickstart
Authentication
Sign in with your Solana wallet in the portal to get your API key. Use it with the Authorization: ApiKey YOUR_KEY header. Your API key is available in Settings. For S3 operations, use S3 keys from Storage → Keys instead.
Set your API base and API key:
export API_BASE=https://us-west-01-firestarter.pipenetwork.com
export PIPE_TOKEN=YOUR_API_KEY
Upload a file (async):
curl -X POST "$API_BASE/upload?file_name=hello.txt" \
  -H "Authorization: ApiKey $PIPE_TOKEN" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @hello.txt
Download raw bytes (stream):
curl -H "Authorization: ApiKey $PIPE_TOKEN" \
  "$API_BASE/download-stream?file_name=hello.txt" \
  -o hello.txt
Create a public link:
curl -H "Authorization: ApiKey $PIPE_TOKEN" \
-H "Content-Type: application/json" \
-d '{"file_name":"hello.txt"}' \
"$API_BASE/createPublicLink"
Fetch via public link (no auth):
curl "$API_BASE/publicDownload?hash=PUBLIC_LINK_HASH" -o hello.txt
Note: `file_name` is relative to your root. URL-encode `/` when using query params. Use `/download` for base64, `/download-stream` for raw bytes. The OpenAPI spec is public at https://us-west-01-firestarter.pipenetwork.com/openapi.yaml.
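The URL-encoding rule above is easy to get wrong when file names contain `/`. A minimal sketch of building a correctly encoded download URL (the helper name is hypothetical; the endpoint is the one from the quickstart):

```python
import os
from urllib.parse import quote

API_BASE = os.environ.get("API_BASE", "https://us-west-01-firestarter.pipenetwork.com")

def download_stream_url(file_name: str) -> str:
    # Encode the root-relative name, including any "/" separators,
    # so "dir/hello.txt" becomes "dir%2Fhello.txt" in the query string.
    return f"{API_BASE}/download-stream?file_name={quote(file_name, safe='')}"

print(download_stream_url("dir/hello.txt"))
# e.g. https://.../download-stream?file_name=dir%2Fhello.txt
```

Pass the result to your HTTP client with the `Authorization: ApiKey ...` header as shown above.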
Public endpoints such as `/publicDownload` and `/openapi.yaml` do not require auth.
Programmatic Authentication
No browser needed
AI agents, scripts, and CI/CD pipelines can authenticate using a Solana keypair file. The flow is: get a challenge, sign it with your private key, exchange for tokens. Once authenticated, use the access_token as a Bearer token, or retrieve your permanent user_app_key from /user/me and prefer Authorization: ApiKey YOUR_KEY for agent/headless x402 calls.
Prerequisites
You need a Solana keypair file. Generate one with:
solana-keygen new --outfile keypair.json --no-bip39-passphrase
Then connect this wallet to the portal once to create your account, or use the keypair directly with the SIWS flow below.
curl
# 1. Get challenge
CHALLENGE=$(curl -s -X POST "$API_BASE/auth/siws/challenge" \
-H "Content-Type: application/json" \
-d '{"wallet_public_key":"YOUR_WALLET_PUBKEY"}')
# 2. Sign the "message" field with your private key (use solana CLI or SDK)
# 3. Verify
curl -X POST "$API_BASE/auth/siws/verify" \
-H "Content-Type: application/json" \
-d '{
"wallet_public_key":"YOUR_WALLET_PUBKEY",
"nonce":"FROM_CHALLENGE",
"message":"FROM_CHALLENGE",
"signature_b64":"BASE64_ENCODED_SIGNATURE"
}'
# Returns: { "access_token": "...", "refresh_token": "...", "csrf_token": "..." }
Python (solders + requests)
import json, base64, requests
from solders.keypair import Keypair
API = "https://us-west-01-firestarter.pipenetwork.com"
# Load Solana CLI keypair (64-byte JSON array)
kp = Keypair.from_bytes(bytes(json.load(open("keypair.json"))))
wallet = str(kp.pubkey())
# 1. Get challenge
r = requests.post(f"{API}/auth/siws/challenge",
json={"wallet_public_key": wallet})
challenge = r.json()
# 2. Sign the challenge message
sig = kp.sign_message(challenge["message"].encode())
sig_b64 = base64.b64encode(bytes(sig)).decode()
# 3. Verify and get tokens
r = requests.post(f"{API}/auth/siws/verify", json={
"wallet_public_key": wallet,
"nonce": challenge["nonce"],
"message": challenge["message"],
"signature_b64": sig_b64,
})
tokens = r.json()
access_token = tokens["access_token"]
csrf_token = tokens["csrf_token"]
# Use it — get your API key
headers = {"Authorization": f"Bearer {access_token}"}
me = requests.get(f"{API}/user/me", headers=headers).json()
print("API key:", me["user_app_key"])
Install: pip install solders requests
Node.js (@solana/web3.js + tweetnacl)
import { Keypair } from "@solana/web3.js";
import nacl from "tweetnacl";
import { readFileSync } from "fs";
const API = "https://us-west-01-firestarter.pipenetwork.com";
// Load Solana CLI keypair
const secret = new Uint8Array(JSON.parse(readFileSync("keypair.json", "utf8")));
const kp = Keypair.fromSecretKey(secret);
const wallet = kp.publicKey.toBase58();
// 1. Get challenge
const { nonce, message } = await fetch(`${API}/auth/siws/challenge`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ wallet_public_key: wallet }),
}).then(r => r.json());
// 2. Sign
const sig = nacl.sign.detached(new TextEncoder().encode(message), kp.secretKey);
const signature_b64 = Buffer.from(sig).toString("base64");
// 3. Verify
const tokens = await fetch(`${API}/auth/siws/verify`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ wallet_public_key: wallet, nonce, message, signature_b64 }),
}).then(r => r.json());
// Use it
const me = await fetch(`${API}/user/me`, {
headers: { Authorization: `Bearer ${tokens.access_token}` },
}).then(r => r.json());
console.log("API key:", me.user_app_key);
Install: npm install @solana/web3.js tweetnacl
After authentication
- The `access_token` is a short-lived JWT. Use `POST /auth/refresh` with the `refresh_token` to get a new one before it expires.
- For mutations (POST/PATCH/DELETE), include the `X-CSRF-Token` header.
- Your `user_app_key` from `/user/me` is a permanent API key — use `Authorization: ApiKey YOUR_KEY` for agent/headless x402 calls and file operations.
- See the Agent Setup guide for a complete end-to-end walkthrough.
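The refresh step above can be sketched as a small request builder. The `POST /auth/refresh` endpoint and the `X-CSRF-Token` rule come from these docs; the exact JSON body field name (`refresh_token`) is an assumption, so check the OpenAPI spec before relying on it:

```python
# Sketch of a token-refresh helper. Building the request is separated from
# sending it so the shape is easy to inspect; send with requests.post(...).
API = "https://us-west-01-firestarter.pipenetwork.com"

def build_refresh_request(refresh_token: str, csrf_token: str):
    """Return (url, headers, json_body) for a refresh call.
    NOTE: the "refresh_token" body field name is assumed, not confirmed."""
    return (
        f"{API}/auth/refresh",
        {"Content-Type": "application/json", "X-CSRF-Token": csrf_token},
        {"refresh_token": refresh_token},
    )

url, headers, body = build_refresh_request("FROM_VERIFY", "FROM_VERIFY_CSRF")
# requests.post(url, headers=headers, json=body)  # should return a fresh access_token
```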
x402 Credits (USDC Top-Ups)
Agent / headless flow
Use x402 for agent and server-to-server USDC top-ups. The server returns a Payment-Required header (base64 JSON), you pay on-chain with the supplied reference pubkey, then retry with Payment-Signature.
Prefer Authorization: ApiKey <user_app_key> for this flow. The browser portal should use the separate billing intent flow instead of x402.
Browser / portal billing flow
The portal web UI should not call x402 directly. Use the explicit intent → wallet send → submit → poll contract:
# Browser / portal flow
POST /api/credits/intent -> returns pending intent + payment instructions
wallet sends USDC -> include reference_pubkey in the transaction
POST /api/credits/submit -> returns 200 credited or 202 processing
GET /api/credits/intent/{intent_id} -> poll until credited or pending + error_message
curl + python decode
export PIPE_API_KEY=YOUR_USER_APP_KEY
# 1) Request x402 payment requirements
# Preferred for agents/headless clients: use user_app_key as ApiKey
curl -i -X POST "$API_BASE/api/credits/x402" \
-H "Authorization: ApiKey $PIPE_API_KEY" \
-H "Content-Type: application/json" \
-d '{"amount_usdc_raw": 10000000}'
# 2) Decode the Payment-Required header (base64 JSON)
python - <<'PY'
import base64, json
raw = "PASTE_BASE64_HEADER"
print(json.dumps(json.loads(base64.b64decode(raw)), indent=2))
PY
# 3) Send USDC with the reference, then retry:
curl -i -X POST "$API_BASE/api/credits/x402" \
-H "Authorization: ApiKey $PIPE_API_KEY" \
-H "Payment-Signature: BASE64_JSON_INTENT_AND_TXSIG" \
-H "Content-Type: application/json"
# 4) If confirm returns 202 Accepted, poll the intent:
curl -s "$API_BASE/api/credits/intent/INTENT_ID" \
-H "Authorization: ApiKey $PIPE_API_KEY"
Headers
- `Payment-Required` contains `accepts[0]` with `asset` (USDC mint), `payTo` (treasury ATA), `amount`, and `extra.reference_pubkey`.
- `Payment-Signature` is base64 JSON: `{"intent_id":"...","tx_sig":"..."}`.
- If confirm returns `202 Accepted`, poll `GET /api/credits/intent/{intent_id}` until the intent reaches `credited` or a recoverable `pending + error_message` state.
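Both headers are just base64-wrapped JSON, so the encode/decode step can be sketched in a few lines (the payload values below are placeholders, not real chain data; field names are the ones listed above):

```python
import base64
import json

def decode_payment_required(header_value: str) -> dict:
    """Decode the base64-JSON Payment-Required header into a dict."""
    return json.loads(base64.b64decode(header_value))

def encode_payment_signature(intent_id: str, tx_sig: str) -> str:
    """Build the base64-JSON Payment-Signature header value."""
    payload = {"intent_id": intent_id, "tx_sig": tx_sig}
    return base64.b64encode(json.dumps(payload).encode()).decode()

# Round-trip demo with a made-up Payment-Required payload:
required = {"accepts": [{"asset": "USDC_MINT", "payTo": "TREASURY_ATA",
                         "amount": "10000000",
                         "extra": {"reference_pubkey": "REF"}}]}
header = base64.b64encode(json.dumps(required).encode()).decode()
print(decode_payment_required(header)["accepts"][0]["extra"]["reference_pubkey"])
# -> REF
```

After paying on-chain, send `encode_payment_signature(...)` as the `Payment-Signature` header on the retry.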
Agent & CLI Setup
For AI agents, scripts, and CI/CD pipelines
Pipe Storage is S3-compatible. Any tool that speaks S3 (boto3, AWS SDK, rclone, MinIO client) works out of the box. Create an S3 key in the portal, download the .env file, and hand it off to your agent.
1. Get your credentials
Go to Storage → Keys, click "Create key", and download the .env file. Your agent needs these environment variables:
# .env — source this or add to your agent's environment
AWS_ACCESS_KEY_ID=<from portal: Storage → Keys → Create key>
AWS_SECRET_ACCESS_KEY=<shown once at creation — download the .env>
AWS_DEFAULT_REGION=auto
AWS_ENDPOINT_URL=<s3-endpoint>
PIPE_S3_BUCKET=<your-bucket>
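If your runtime doesn't source shell files, a minimal `.env` loader sketch (the filename is illustrative; for quoting, export syntax, or variable expansion use python-dotenv instead):

```python
import os

def load_env_file(path: str = "pipe.env") -> None:
    """Load KEY=VALUE lines into the environment, skipping blanks and comments.
    Existing environment variables are not overwritten."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Usage: load_env_file("pipe.env"); then boto3 etc. pick up the AWS_* vars.
```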
2. Python (boto3)
import boto3
s3 = boto3.client(
"s3",
endpoint_url="<s3-endpoint>",
region_name="auto",
aws_access_key_id="YOUR_ACCESS_KEY_ID", # from S3 key
aws_secret_access_key="YOUR_SECRET_ACCESS_KEY",
)
# Upload
s3.upload_file("./test.txt", "<your-bucket>", "test.txt")
# Download
s3.download_file("<your-bucket>", "test.txt", "./test.txt")
# List objects
for obj in s3.list_objects_v2(Bucket="<your-bucket>").get("Contents", []):
print(obj["Key"], obj["Size"])
3. Node.js (AWS SDK v3)
import { S3Client, PutObjectCommand, GetObjectCommand } from "@aws-sdk/client-s3";
import { readFileSync, writeFileSync } from "fs";
const s3 = new S3Client({
endpoint: "<s3-endpoint>",
region: "auto",
credentials: {
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
},
forcePathStyle: true,
});
// Upload
await s3.send(new PutObjectCommand({
Bucket: "<your-bucket>",
Key: "test.txt",
Body: readFileSync("./test.txt"),
}));
// Download
const resp = await s3.send(new GetObjectCommand({
Bucket: "<your-bucket>",
Key: "test.txt",
}));
writeFileSync("./test.txt", await resp.Body.transformToByteArray());
4. AWS CLI
Source the .env file and use the standard AWS CLI:
source .env
aws s3 ls s3://<your-bucket> --endpoint-url <s3-endpoint>
aws s3 cp ./file.txt s3://<your-bucket>/file.txt --endpoint-url <s3-endpoint>
Tips for agent integrations
- Use the `AWS_ENDPOINT_URL` env var — most S3 SDKs pick it up automatically, no code changes needed.
- Set `forcePathStyle: true` in JS/TS SDKs for best compatibility.
- S3 keys don't expire. Store them in your secrets manager or .env file.
- Download the .env from the portal when creating a key — the secret is only shown once.
- You can export credentials as JSON or AWS credentials file format from the key creation screen.
Migrating from AWS S3
Pipe Storage is S3-compatible
If your code already uses AWS S3, switching to Pipe usually means adding an --endpoint-url flag or setting AWS_ENDPOINT_URL. No SDK changes, no code rewrites. Every account includes 1 GB free storage + 100 GB free egress per month.
What's different from AWS S3?
| Feature | AWS S3 | Pipe Storage |
|---|---|---|
| Addressing | Virtual-hosted + path-style | Path-style only (forcePathStyle: true) |
| Buckets per account | Up to 100 | Default bucket + more with prepaid credits |
| Batch delete | DeleteObjects (1000/req) | Not supported — delete one at a time |
| Versioning | Supported | Not supported |
| Lifecycle rules | Supported | Not supported |
| Bucket policies / ACLs | IAM + bucket policies | Key prefix scoping + public read toggle |
| Multipart uploads | Up to 5 TB | Supported (use smaller part sizes for >5 GB) |
| Auth | IAM / STS | S3 keys (HMAC) — same SigV4 signing |
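Since `DeleteObjects` is unsupported, a bulk delete becomes a paginated loop of single deletes. A sketch that works with any boto3-style client (the prefix and helper name are illustrative):

```python
def delete_prefix(s3, bucket: str, prefix: str) -> int:
    """Delete all objects under a prefix one at a time — Pipe Storage has no
    DeleteObjects batch call. Returns the number of objects deleted."""
    deleted = 0
    token = None
    while True:
        kwargs = {"Bucket": bucket, "Prefix": prefix}
        if token:
            kwargs["ContinuationToken"] = token
        page = s3.list_objects_v2(**kwargs)
        for obj in page.get("Contents", []):
            s3.delete_object(Bucket=bucket, Key=obj["Key"])
            deleted += 1
        if not page.get("IsTruncated"):
            return deleted
        token = page["NextContinuationToken"]

# Usage (client setup as in the boto3 example above):
# n = delete_prefix(s3, "<your-bucket>", "tmp/")
```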
AWS CLI migration
# Point your existing AWS CLI at Pipe Storage — just add --endpoint-url
aws s3 ls s3://<your-bucket> --endpoint-url <s3-endpoint>

# Migrate from AWS S3 to Pipe (one-liner)
aws s3 sync s3://my-aws-bucket s3://<your-bucket> \
  --source-region us-east-1 \
  --endpoint-url <s3-endpoint>

# Or copy a single file
aws s3 cp s3://my-aws-bucket/data.csv s3://<your-bucket>/data.csv \
  --endpoint-url <s3-endpoint>
rclone (S3-to-S3 sync)
# ~/.config/rclone/rclone.conf
[pipe]
type = s3
provider = Other
endpoint = <s3-endpoint>
access_key_id = YOUR_ACCESS_KEY_ID
secret_access_key = YOUR_SECRET_ACCESS_KEY
region = auto
acl = private

# Usage:
# rclone ls pipe:<your-bucket>
# rclone copy ./local-dir pipe:<your-bucket>/remote-dir
# rclone sync pipe:<your-bucket> ./backup

# Migrate from AWS S3 to Pipe in one command:
# rclone sync aws-remote:my-aws-bucket pipe:<your-bucket> --progress
Install: rclone.org/install
MinIO client (mc)
# Configure MinIO client
mc alias set pipe <s3-endpoint> YOUR_ACCESS_KEY_ID YOUR_SECRET_ACCESS_KEY

# Usage:
mc ls pipe/<your-bucket>
mc cp ./file.txt pipe/<your-bucket>/file.txt
mc mirror ./local-dir pipe/<your-bucket>/remote-dir

# Migrate from AWS S3:
mc alias set aws https://s3.amazonaws.com AWS_KEY AWS_SECRET
mc mirror aws/my-aws-bucket pipe/<your-bucket> --overwrite
Install: min.io/docs
SDK & Tool Configs
Copy-paste configurations for popular tools and languages.
Terraform (S3 backend + provider)
# Use Pipe Storage as a Terraform S3 backend for remote state
terraform {
backend "s3" {
bucket = "<your-bucket>"
key = "terraform/state.tfstate"
region = "auto"
endpoint = "<s3-endpoint>"
access_key = "YOUR_ACCESS_KEY_ID"
secret_key = "YOUR_SECRET_ACCESS_KEY"
# Required for S3-compatible providers
skip_credentials_validation = true
skip_metadata_api_check = true
skip_region_validation = true
force_path_style = true
}
}
# Or manage objects with the aws_s3_object resource
provider "aws" {
alias = "pipe"
region = "auto"
access_key = "YOUR_ACCESS_KEY_ID"
secret_key = "YOUR_SECRET_ACCESS_KEY"
endpoints {
s3 = "<s3-endpoint>"
}
s3_use_path_style = true
# Skip AWS-specific checks
skip_credentials_validation = true
skip_metadata_api_check = true
skip_requesting_account_id = true
}
resource "aws_s3_object" "example" {
provider = aws.pipe
bucket = "<your-bucket>"
key = "hello.txt"
source = "hello.txt"
}
Docker / docker-compose
# docker-compose.yml — pass Pipe credentials to any container
services:
app:
image: your-app:latest
env_file:
- pipe-agent-bundle.env # downloaded from portal
# Or set explicitly:
environment:
AWS_ACCESS_KEY_ID: ${PIPE_AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${PIPE_AWS_SECRET_ACCESS_KEY}
AWS_DEFAULT_REGION: auto
AWS_ENDPOINT_URL: <s3-endpoint>
PIPE_S3_BUCKET: <your-bucket>
# Example: backup with rclone sidecar
backup:
image: rclone/rclone:latest
command: sync /data pipe:${PIPE_S3_BUCKET}/backups
volumes:
- app-data:/data:ro
- ./rclone.conf:/config/rclone/rclone.conf:ro
environment:
AWS_ACCESS_KEY_ID: ${PIPE_AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${PIPE_AWS_SECRET_ACCESS_KEY}
Go (aws-sdk-go-v2)
package main
import (
"context"
"fmt"
"os"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/credentials"
"github.com/aws/aws-sdk-go-v2/service/s3"
)
func main() {
client := s3.New(s3.Options{
Region: "auto",
BaseEndpoint: aws.String("<s3-endpoint>"),
Credentials: credentials.NewStaticCredentialsProvider(
os.Getenv("AWS_ACCESS_KEY_ID"),
os.Getenv("AWS_SECRET_ACCESS_KEY"),
"",
),
UsePathStyle: true,
})
// List objects
result, err := client.ListObjectsV2(context.TODO(), &s3.ListObjectsV2Input{
Bucket: aws.String("<your-bucket>"),
})
if err != nil {
panic(err)
}
for _, obj := range result.Contents {
fmt.Printf("%s (%d bytes)\n", *obj.Key, *obj.Size)
}
// Upload
file, _ := os.Open("test.txt")
defer file.Close()
_, err = client.PutObject(context.TODO(), &s3.PutObjectInput{
Bucket: aws.String("<your-bucket>"),
Key: aws.String("test.txt"),
Body: file,
})
if err != nil {
panic(err)
}
}
Install: go get github.com/aws/aws-sdk-go-v2/service/s3
Rust (aws-sdk-s3)
use aws_config::Region;
use aws_sdk_s3::config::{Credentials, Builder};
use aws_sdk_s3::Client;
use aws_sdk_s3::primitives::ByteStream;
use std::path::Path;
#[tokio::main]
async fn main() {
let creds = Credentials::new(
std::env::var("AWS_ACCESS_KEY_ID").unwrap(),
std::env::var("AWS_SECRET_ACCESS_KEY").unwrap(),
None, None, "pipe",
);
let config = Builder::new()
.region(Region::new("auto"))
.endpoint_url("<s3-endpoint>")
.credentials_provider(creds)
.force_path_style(true)
.build();
let client = Client::from_conf(config);
// List objects
let result = client
.list_objects_v2()
.bucket("<your-bucket>")
.send()
.await
.unwrap();
for obj in result.contents() {
println!("{} ({} bytes)", obj.key().unwrap_or("?"), obj.size().unwrap_or(0));
}
// Upload
let body = ByteStream::from_path(Path::new("test.txt")).await.unwrap();
client
.put_object()
.bucket("<your-bucket>")
.key("test.txt")
.body(body)
.send()
.await
.unwrap();
}
Add to Cargo.toml: aws-sdk-s3 = "1", aws-config = "1", tokio = { version = "1", features = ["full"] }
Python (boto3), Node.js (AWS SDK v3), and AWS CLI examples are in the Agent & CLI Setup section above; rclone and MinIO client (mc) configs are in the Migrating from AWS S3 section.
S3 Compatibility
Pipe Storage implements the S3 API via standard SigV4 signing. Most S3 tools work with zero code changes — just set the endpoint.
Supported operations
SDK compatibility
| SDK / Tool | Status | Notes |
|---|---|---|
| AWS CLI | Works | Add --endpoint-url or set AWS_ENDPOINT_URL |
| boto3 (Python) | Works | Pass endpoint_url to client constructor |
| AWS SDK v3 (JS/TS) | Works | Set forcePathStyle: true |
| aws-sdk-go-v2 | Works | Set UsePathStyle: true |
| aws-sdk-rust | Works | Set force_path_style(true) |
| rclone | Works | Provider: Other |
| MinIO mc | Works | Standard S3 alias |
| Terraform S3 backend | Works | Skip validation flags required |
| Cyberduck / Mountain Duck | Works | Use "S3 (generic)" connection type |
| s3cmd | Works | Set host_base and host_bucket |
Key differences from AWS S3
- Path-style only — virtual-hosted style (`bucket.endpoint.com`) is not supported. Set `forcePathStyle` / `s3_use_path_style` in your SDK config.
- One bucket per account — your bucket is auto-created. Use key prefixes for logical separation.
- No batch delete — `DeleteObjects` is not supported. Delete one object at a time.
- No versioning or lifecycle — objects are overwritten on re-upload. No automatic expiration or transition rules.
- S3 keys don't expire — revoke them explicitly via the portal or API.
- SigV4 signing — standard AWS SigV4. No SigV2 support.