Install s3fs

sudo apt update
sudo apt install s3fs

Create a credentials file

You'll need an IAM access key with the right bucket permissions. In the AWS console go to IAM > Users > (your user) > Security credentials > Create access key. Scope the key narrowly — a dedicated IAM user with an inline policy granting s3:GetObject, s3:PutObject, and s3:ListBucket on just this bucket is much safer than a root account key.
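An inline policy of that shape might look like the following sketch (the bucket name my-bucket is a placeholder; note that s3:ListBucket attaches to the bucket ARN while the object actions attach to the objects under it, and you'd add s3:DeleteObject if the mount needs to delete files):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::my-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
```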

Write the key pair to a file that only your user can read:

echo "ACCESSKEY:SECRETKEY" > ~/.aws_s3.key
chmod 600 ~/.aws_s3.key

Watch the format

The separator is a single colon, no spaces, and the file should end with a newline. A surprising fraction of "authentication failed" reports on s3fs bug trackers trace back to a stray space or a \r\n (CRLF) line ending after pasting from Windows.
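A quick sanity check catches both problems. This sketch uses a throwaway copy in /tmp with placeholder values; point the same checks at your real ~/.aws_s3.key:

```shell
# Demo file with placeholder values
printf 'ACCESSKEY:SECRETKEY\n' > /tmp/aws_s3.key
chmod 600 /tmp/aws_s3.key

# Exactly one colon, no spaces anywhere on the line:
grep -qE '^[^:[:space:]]+:[^:[:space:]]+$' /tmp/aws_s3.key && echo "format OK"

# No stray carriage returns (CRLF line endings):
grep -q $'\r' /tmp/aws_s3.key && echo "CRLF found" || echo "line endings OK"
```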

Mount the bucket

mkdir -p ~/mount/my-bucket

s3fs my-bucket ~/mount/my-bucket \
    -o passwd_file=$HOME/.aws_s3.key \
    -o use_path_request_style

Confirm it's there:

ls ~/mount/my-bucket
mount | grep s3fs
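If the mount fails or the directory comes up empty, re-running s3fs in the foreground with its debug options usually reveals the cause (these are standard s3fs flags; same placeholder bucket as above):

```shell
# -f keeps s3fs in the foreground instead of daemonizing;
# dbglevel=info and curldbg print credential and HTTP request detail
s3fs my-bucket ~/mount/my-bucket \
    -o passwd_file=$HOME/.aws_s3.key \
    -o use_path_request_style \
    -f -o dbglevel=info -o curldbg
```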

To unmount:

fusermount -u ~/mount/my-bucket

Make it survive a reboot

Add a line to /etc/fstab. Use the system-wide credentials file /etc/passwd-s3fs (root-owned, mode 600) instead of ~/.aws_s3.key:

sudo cp ~/.aws_s3.key /etc/passwd-s3fs
sudo chmod 600 /etc/passwd-s3fs

# Add to /etc/fstab:
my-bucket /mnt/my-bucket fuse.s3fs _netdev,allow_other,use_path_request_style 0 0

Mount immediately to test:

sudo mkdir -p /mnt/my-bucket
sudo mount -a
ls /mnt/my-bucket

The _netdev option tells systemd that this mount depends on the network being up, which avoids boot hangs when your instance starts before the NIC has an address.
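If you'd rather not block boot on the mount at all, a systemd automount variant defers mounting until the path is first accessed (standard fstab/systemd options, same placeholder bucket):

```
my-bucket /mnt/my-bucket fuse.s3fs _netdev,noauto,x-systemd.automount,allow_other,use_path_request_style 0 0
```

With noauto plus x-systemd.automount, systemd mounts the bucket lazily on first access rather than during boot.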

Caveats

  • s3fs does not emulate POSIX semantics fully. Random writes in the middle of a large file require s3fs to download, modify, and re-upload the whole object — every time. Rename on S3 is a copy + delete, which makes it expensive for large files. Assume append-only or whole-file workloads.
  • No atomic directory operations. Directories are a fiction layered on top of object keys; moving or renaming one is not atomic.
  • Latency is real. Every open() is at least one HTTP round-trip. Tools that stat hundreds of files per operation (backup scripts, IDE indexers, ls -lR) will feel slow.

When to use something else

rclone mount — the modern default. Better performance, configurable VFS cache (--vfs-cache-mode full gives you local read/write semantics backed by S3), and works against dozens of backends. Worth it for any new workload.
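A minimal sketch of that mount, assuming you have already created an rclone remote named s3remote via `rclone config` (the remote name and bucket are placeholders):

```shell
# --vfs-cache-mode full buffers reads and writes on local disk,
# giving normal file semantics; --daemon backgrounds the mount
rclone mount s3remote:my-bucket ~/mount/my-bucket \
    --vfs-cache-mode full \
    --daemon
```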

goofys — faster than s3fs on metadata-heavy workloads, but trades off some POSIX correctness. Reasonable choice for read-heavy analytics pipelines.

AWS CLI (aws s3 sync/aws s3 cp) — if your workflow is "ship a directory of build artefacts to S3 at the end of the job," skip the FUSE layer entirely and sync directly.
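For the end-of-job sync case, that might look like the following (local and remote paths are placeholders):

```shell
# Upload the local directory; --delete also removes remote objects
# that no longer exist locally, keeping the prefix an exact mirror
aws s3 sync ./build-artifacts s3://my-bucket/artifacts/ --delete
```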