TAG is fully compatible with standard S3 tools and SDKs. This guide shows how to configure common S3 clients to use TAG as an endpoint.
Prerequisites:

- TAG running locally or accessible over the network
- TAG's own Tigris credentials configured via `AWS_ACCESS_KEY_ID`/`AWS_SECRET_ACCESS_KEY` (see configuration.md)

Client credentials depend on the mode:

- Transparent proxy mode (default): clients use their own Tigris credentials directly when making S3 requests
- Signing mode: clients must use credentials known to TAG's credential store
Configure the AWS CLI to use TAG as the endpoint:

```shell
# Set credentials (if not already configured)
export AWS_ACCESS_KEY_ID=your_access_key
export AWS_SECRET_ACCESS_KEY=your_secret_key

# Use the --endpoint-url flag with each command
aws s3 ls --endpoint-url http://localhost:8080
```

Alternatively, create a profile in `~/.aws/config`:

```ini
[profile tag]
endpoint_url = http://localhost:8080
```

And in `~/.aws/credentials`:

```ini
[tag]
aws_access_key_id = your_access_key
aws_secret_access_key = your_secret_key
```

Then use the profile:

```shell
aws s3 ls --profile tag
```
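To avoid repeating `--endpoint-url`, newer AWS CLI releases (v2.13+, an assumption worth checking with `aws --version`) can also read the endpoint from environment variables:

```shell
# Applies to all AWS commands in this shell session
export AWS_ENDPOINT_URL=http://localhost:8080

# Or scope the override to S3 only
export AWS_ENDPOINT_URL_S3=http://localhost:8080
```

After this, a plain `aws s3 ls` talks to TAG without extra flags.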
Common commands:

```shell
# List buckets
aws s3 ls --endpoint-url http://localhost:8080

# List objects in a bucket
aws s3 ls s3://my-bucket --endpoint-url http://localhost:8080

# Download a file
aws s3 cp s3://my-bucket/my-key ./local-file --endpoint-url http://localhost:8080

# Upload a file
aws s3 cp ./local-file s3://my-bucket/my-key --endpoint-url http://localhost:8080

# Sync a directory
aws s3 sync ./local-dir s3://my-bucket/prefix --endpoint-url http://localhost:8080

# Delete an object
aws s3 rm s3://my-bucket/my-key --endpoint-url http://localhost:8080

# Get object metadata
aws s3api head-object --bucket my-bucket --key my-key --endpoint-url http://localhost:8080
```

Check the `X-Cache` response header to verify caching:
```shell
# Using curl to see cache headers
curl -I http://localhost:8080/my-bucket/my-key \
  -H "Authorization: AWS4-HMAC-SHA256 ..."

# The response will include one of:
# X-Cache: HIT   (served from cache)
# X-Cache: MISS  (fetched from upstream, now cached)
```

Install boto3:

```shell
pip install boto3
```
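The same cache check can be scripted from Python, since boto3 exposes raw response headers under `ResponseMetadata`. A minimal sketch: the `cache_status` helper is hypothetical, and it assumes botocore lower-cases header names, so TAG's `X-Cache` appears as `x-cache`:

```python
def cache_status(response):
    """Return TAG's X-Cache header ('HIT' or 'MISS') from a boto3 response."""
    headers = response.get('ResponseMetadata', {}).get('HTTPHeaders', {})
    return headers.get('x-cache', 'UNKNOWN')

# Hypothetical usage with an s3 client configured for TAG (as shown in this guide):
# resp = s3.head_object(Bucket='my-bucket', Key='my-key')
# print(cache_status(resp))
```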
Create an S3 client with TAG as the endpoint:

```python
import boto3
from botocore.config import Config

# Create S3 client with TAG endpoint (path-style required)
s3 = boto3.client(
    's3',
    endpoint_url='http://localhost:8080',
    aws_access_key_id='your_access_key',
    aws_secret_access_key='your_secret_key',
    config=Config(s3={'addressing_style': 'path'}),
)
```
Or read credentials from the environment:

```python
import boto3
from botocore.config import Config
import os

# Credentials come from the environment variables
# AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
s3 = boto3.client(
    's3',
    endpoint_url=os.getenv('TAG_ENDPOINT', 'http://localhost:8080'),
    config=Config(s3={'addressing_style': 'path'}),
)
```

List buckets:

```python
response = s3.list_buckets()
for bucket in response['Buckets']:
    print(bucket['Name'])
```

List objects under a prefix:

```python
response = s3.list_objects_v2(Bucket='my-bucket', Prefix='data/')
for obj in response.get('Contents', []):
    print(f"{obj['Key']} - {obj['Size']} bytes")
```

Download objects:

```python
# Download to a file
s3.download_file('my-bucket', 'my-key', 'local-file.txt')

# Download into memory
response = s3.get_object(Bucket='my-bucket', Key='my-key')
data = response['Body'].read()
```

Upload objects:

```python
# Upload from a file
s3.upload_file('local-file.txt', 'my-bucket', 'my-key')

# Upload from memory
s3.put_object(Bucket='my-bucket', Key='my-key', Body=b'Hello, World!')
```

Delete an object:

```python
s3.delete_object(Bucket='my-bucket', Key='my-key')
```

Get object metadata:

```python
response = s3.head_object(Bucket='my-bucket', Key='my-key')
print(f"Size: {response['ContentLength']}")
print(f"ETag: {response['ETag']}")
print(f"Last Modified: {response['LastModified']}")
```
The boto3 resource API works the same way:

```python
import boto3
from botocore.config import Config

# Create S3 resource (path-style required)
s3 = boto3.resource(
    's3',
    endpoint_url='http://localhost:8080',
    aws_access_key_id='your_access_key',
    aws_secret_access_key='your_secret_key',
    config=Config(s3={'addressing_style': 'path'}),
)

# Get a bucket
bucket = s3.Bucket('my-bucket')

# List objects
for obj in bucket.objects.filter(Prefix='data/'):
    print(obj.key)

# Download a file
bucket.download_file('my-key', 'local-file.txt')

# Upload a file
bucket.upload_file('local-file.txt', 'my-key')
```

For large files, use streaming to avoid loading entire objects into memory:
```python
# Streaming download
response = s3.get_object(Bucket='my-bucket', Key='large-file.bin')
with open('local-file.bin', 'wb') as f:
    for chunk in response['Body'].iter_chunks(chunk_size=1024 * 1024):
        f.write(chunk)

# Streaming multipart upload
from boto3.s3.transfer import TransferConfig

config = TransferConfig(
    multipart_threshold=8 * 1024 * 1024,  # 8 MB
    max_concurrency=10,
    multipart_chunksize=8 * 1024 * 1024,
)
s3.upload_file(
    'large-file.bin',
    'my-bucket',
    'large-file.bin',
    Config=config,
)
```

Ensure TAG is running and accessible:
```shell
curl http://localhost:8080/health
```

Verify credentials match those configured in TAG:

```shell
# Check the environment variables
echo $AWS_ACCESS_KEY_ID
echo $AWS_SECRET_ACCESS_KEY
```

For large files or slow networks, increase the client timeouts:
```python
import boto3
from botocore.config import Config

config = Config(
    connect_timeout=30,
    read_timeout=300,
    s3={'addressing_style': 'path'},
)
s3 = boto3.client('s3', endpoint_url='http://localhost:8080', config=config)
```

If you receive a 405 error when creating buckets, ensure path-style addressing is configured:
```python
import boto3
from botocore.config import Config

config = Config(s3={'addressing_style': 'path'})
s3 = boto3.client('s3', endpoint_url='http://localhost:8080', config=config)
```

TAG does not support virtual-hosted-style requests (e.g., `bucket.localhost:8080`).