How to install JuiceFS as a shared filesystem
Updated: 2026-05-03
After living with JuiceFS on a three-node cluster for a while, I think it’s worth writing the install walkthrough I would have liked to find when starting. There’s good official documentation, but it’s fragmented across very different use cases, and someone arriving from NFS or from a classic bind mount gets lost on the first real decision: which metadata backend to pick and how to mount the thing without surprises.
This walkthrough assumes a specific scenario: three modern Linux servers that need to see the same files, a PostgreSQL database already available, and S3-compatible storage as the final destination for the data.
Key takeaways
- JuiceFS delegates data to an object store (S3, MinIO, Hetzner) and metadata to a database you already operate (Redis, PostgreSQL, MySQL).
- From the client side it’s a POSIX FUSE filesystem with configurable local cache — mount it and use it like any other directory.
- The object store bucket must be exclusive to JuiceFS: it uses an internal key structure that assumes exclusivity.
- The `--free-space-ratio 0.3` option is important and often forgotten: without it, a spike can fill the cache disk.
- For simple single-network cases, NFS remains valid. JuiceFS pays off when you want replication or node-failure resilience.
Why JuiceFS and not something else
The classic alternative is NFS. It works, it’s well-known, and it’s supported by every Linux kernel, but it comes with a well-documented pile of problems: fragile cache semantics, difficulty scaling reads, and a network footprint that admins usually end up wrapping in dedicated firewall rules.
CephFS solves the problem at scale, but installing Ceph to share a few terabytes across three machines is overkill.
JuiceFS sits in the middle. It delegates data to an object store and metadata to a database you already know how to operate. The elegant part is that all the distributed-systems complexity is absorbed by components you already monitor and understand.
Preparing the backend
Before touching JuiceFS, two pieces should be in place.
The object store: you need an endpoint, credentials, and the bucket name. The bucket must not be shared with other systems — JuiceFS uses an internal key structure that assumes exclusivity.
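As a quick sanity check before formatting, it’s worth confirming that the endpoint and credentials actually work. A minimal sketch, assuming the AWS CLI is installed; the bucket name and endpoint are placeholders for your own:

```bash
# Verify the bucket exists and these credentials can reach it
aws s3api head-bucket \
    --bucket jfs-data \
    --endpoint-url https://endpoint.example.com
```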
The metadata database: I recommend PostgreSQL if you already have an instance. JuiceFS creates its tables in a dedicated schema, the load is moderate, and the ops side benefits from all the infrastructure you already have for PG.
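If the database doesn’t exist yet, a minimal sketch for creating it (role name, password, and database name are placeholders):

```bash
# Create a dedicated role and database for JuiceFS metadata
sudo -u postgres psql <<'SQL'
CREATE ROLE juicefs LOGIN PASSWORD 'change-me';
CREATE DATABASE juicefs OWNER juicefs;
SQL
```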
Initial formatting
With the backend ready, format the volume. You do it once from any node:
```bash
juicefs format \
    --storage s3 \
    --bucket https://endpoint/bucket \
    --access-key <KEY> \
    --secret-key <SECRET> \
    --compress lz4 \
    "postgres://user:password@host:5432/juicefs" \
    my-volume
```

Two options deserve attention:
- `lz4` compression is a good default: it adds a small CPU cost and reduces the volume of data sent to the object store by 20-40%.
- The volume name is internal JuiceFS metadata and appears in metrics and logs.
Don’t format with `--trash-days 0` in production. JuiceFS’s internal trash has saved files from more than one accidental deletion.
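To confirm the format took, `juicefs status` queries the metadata engine and prints the volume’s settings:

```bash
# Should print JSON including the volume name, storage type,
# bucket, compression, and trash-days setting
juicefs status "postgres://user:password@host:5432/juicefs"
```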
Mounting on each node
Once formatted, each node mounts the filesystem with a single command:
```bash
juicefs mount \
    --cache-dir /var/juicefs-cache \
    --cache-size 20480 \
    --free-space-ratio 0.3 \
    "postgres://user:password@host:5432/juicefs" \
    /mnt/jfs
```

Cache options are where you’ll have the most variation between nodes. Size is in MiB; 20480 is 20 GiB. `--free-space-ratio 0.3` is important and often forgotten: it tells JuiceFS that if the disk hosting the cache drops below 30% free space, it should start evicting entries.
For a persistent mount across reboots, the clean approach is a systemd unit that runs the same command with ExecStart.
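A minimal sketch of such a unit; the binary path and unit name are assumptions, adjust to your layout:

```ini
# /etc/systemd/system/juicefs-jfs.service (hypothetical name)
[Unit]
Description=JuiceFS mount at /mnt/jfs
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
# Foreground mount (no -d flag) so systemd supervises the FUSE process
ExecStart=/usr/local/bin/juicefs mount \
    --cache-dir /var/juicefs-cache \
    --cache-size 20480 \
    --free-space-ratio 0.3 \
    "postgres://user:password@host:5432/juicefs" \
    /mnt/jfs
ExecStop=/usr/local/bin/juicefs umount /mnt/jfs
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now juicefs-jfs.service` on each node.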
Verification and first tests
Before considering it operational, there are three tests worth running (a scriptable sketch of the first two follows the list):
- Create a file on one node and read it from another: with correct configuration, it should appear within seconds.
- Write a large file (several GB) from one node while monitoring CPU and network.
- Power off the node that wrote the file and check that the others keep reading it. This is the test that separates a real shared filesystem from a mirage.
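A sketch of the first two checks, assuming hypothetical hostnames node-a and node-b with SSH access between them:

```bash
# Test 1: write on one node, read it from another (should appear within seconds)
ssh node-a 'echo "written on $(hostname) at $(date -u)" > /mnt/jfs/probe.txt'
ssh node-b 'cat /mnt/jfs/probe.txt'

# Test 2: push a 4 GiB file while watching CPU and network on node-a
ssh node-a 'dd if=/dev/zero of=/mnt/jfs/big.bin bs=1M count=4096 conv=fsync status=progress'
```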
Maintenance and monitoring
Once running, watch its dependencies:
- The metadata database needs normal PostgreSQL monitoring.
- The object store needs cost and error-rate monitoring.
- JuiceFS exposes Prometheus metrics on a local HTTP endpoint worth scraping.
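A quick check that the exporter is alive; recent versions default to 127.0.0.1:9567, but verify the port on your build:

```bash
# A non-zero count of juicefs_* series confirms the endpoint is up
curl -s http://127.0.0.1:9567/metrics | grep -c '^juicefs_'
```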
The trickiest operation you’ll eventually need is cleaning up orphan blocks (`juicefs gc`). Running `gc` weekly with `--compact` is good hygiene, especially if your workload involves lots of deletes.
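Without flags, `juicefs gc` is a dry run that only reports; destructive work requires `--delete`:

```bash
# Dry run: scan and report leaked objects without touching anything
juicefs gc "postgres://user:password@host:5432/juicefs"

# Weekly routine: compact fragmented chunks, then delete leaked blocks
juicefs gc --compact --delete "postgres://user:password@host:5432/juicefs"
```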
If you ever need to migrate to a different object store, JuiceFS has a sync command that parallelizes the copy without stopping the service.
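A sketch of such a migration; the endpoints and credentials below are placeholders, in the `s3://KEY:SECRET@bucket.endpoint/` form that `juicefs sync` expects:

```bash
# Copy every object from the old bucket to the new one with 32 workers
juicefs sync --threads 32 \
    "s3://OLD_KEY:OLD_SECRET@old-bucket.s3.old-provider.example.com/" \
    "s3://NEW_KEY:NEW_SECRET@new-bucket.s3.new-provider.example.com/"
```

Once the copy converges, the volume can be pointed at the new bucket with `juicefs config` before remounting the clients.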
Conclusion
What makes JuiceFS a good choice isn’t any single feature, but its alignment with components you already operate. Every Linux team knows how to back up Postgres, monitor an S3 bucket, and read Prometheus metrics. JuiceFS turns that into a shared filesystem, and does so without requiring a new operational plane.
The honest alternative is still NFS for very simple cases where all clients are on the same network and failure risk isn’t critical. For anything a little more serious, or where you want cross-region replication without reconfiguring clients, this route is worth a try.