
File Storage

There are two data storage systems available to ShaRCS users.

1. BlueArc Storage Areas

This system has three user areas: user home, group home, and user scratch. All three are NFS storage areas, not parallel file systems. In every NFS path, <cluster_name> is either mako (North Cluster) or thresher (South Cluster).

User Home Area

Each user receives two gigabytes of space in the home storage system. This storage is not purged, and backups will be performed once the system is in full production (not yet in place as of Spring 2010).

  • 2 GB per user
  • not purged
  • backups when in full production
  • /global/<cluster_name>/users/<username>

User home directories appear at /global/<cluster_name>/users/<username>, with a two-gigabyte quota per user.
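
As a rough, hypothetical illustration, the Python sketch below assembles the home path for the current user and totals its apparent disk usage against the 2 GB quota. The cluster name is supplied by hand (mako here), and the walk simply sums file sizes, which may not match exactly what the quota system counts.

    import getpass
    import os

    QUOTA_BYTES = 2 * 1024**3  # 2 GB home quota per user

    def home_usage(cluster_name):
        """Sum apparent file sizes under the current user's home area."""
        home = os.path.join("/global", cluster_name, "users", getpass.getuser())
        total = 0
        for dirpath, _dirnames, filenames in os.walk(home):
            for name in filenames:
                try:
                    total += os.path.getsize(os.path.join(dirpath, name))
                except OSError:
                    pass  # file removed or unreadable during the walk
        return total

    if __name__ == "__main__":
        used = home_usage("mako")  # use "thresher" on the South Cluster
        print(f"home usage: {used / 2**20:.1f} MiB of {QUOTA_BYTES / 2**20:.0f} MiB")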

Group Home Area

Group storage areas are not created by default. Groups needing project space should send a request to ShaRCS Help to have their group directory created. Each approved project will receive 2 terabytes of disk space, owned by either the PI or the primary user.

  • 2 TB per group
  • request from ShaRCS Help
  • not purged
  • not backed up
  • /global/<cluster_name>/groups/<groupname>

Group home directories are located at /global/<cluster_name>/groups/<groupname>, with a two-terabyte quota per group.
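
As another illustrative sketch (with a hypothetical group name), shutil.disk_usage gives a quick view of space on the group mount. Whether the reported total reflects the 2 TB group quota or the whole underlying export depends on how the NFS quota is presented, so treat this only as a rough check.

    import shutil

    # Hypothetical group directory; substitute your project's path.
    group_dir = "/global/mako/groups/myproject"

    usage = shutil.disk_usage(group_dir)  # statvfs-based total/used/free
    print(f"total: {usage.total / 2**40:.2f} TiB")
    print(f"used:  {usage.used / 2**40:.2f} TiB")
    print(f"free:  {usage.free / 2**40:.2f} TiB")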

NFS Scratch Area

Each user has a scratch area in the NFS storage space: a temporary scratch directory with no quota and no backup. This space should be considered volatile and not used for long-term storage. Files become eligible for purging two weeks after they were last accessed.

  • volatile, short-term storage
  • 25 TB total for all users (no individual quotas)
  • not backed up
  • purged two weeks after last access
  • /<cluster_name>/scratch/<username>

User scratch directories are located at /<cluster_name>/scratch/<username>, with 25 terabytes per cluster shared among all users.
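
For illustration, the sketch below walks a scratch directory and lists files whose last access time is older than two weeks, i.e. files at risk under the purge policy. The actual purge is performed by the system; this only flags candidates, and the path components are placeholders.

    import getpass
    import os
    import time

    PURGE_AGE = 14 * 24 * 3600  # two weeks, in seconds

    def purge_candidates(scratch_root):
        """Yield files whose last access time is older than the purge window."""
        cutoff = time.time() - PURGE_AGE
        for dirpath, _dirnames, filenames in os.walk(scratch_root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    if os.stat(path).st_atime < cutoff:
                        yield path
                except OSError:
                    pass  # skip files that vanish or are unreadable

    if __name__ == "__main__":
        scratch = "/mako/scratch/" + getpass.getuser()  # placeholder cluster name
        for path in purge_candidates(scratch):
            print(path)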

2. Lustre Parallel Storage Area

Notice! This space is not available during the early user pilot project phase.

ShaRCS includes a Lustre parallel file system from DDN. The entire area is scratch space: it is not backed up, and files are purged two weeks after they were last accessed. About 100 terabytes per cluster are available to all users, with no individual quotas.

  • parallel file system for high performance on large files
  • volatile, short-term storage
  • 100 TB total for all users (no individual quotas)
  • not backed up
  • purged two weeks after last access

Parallel scratch space is located at /scratch/<userid>.
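
Lustre performs best on large files when they are striped across multiple storage targets. The sketch below shells out to the standard Lustre lfs client tool (assuming it is installed on the cluster) to set and inspect the stripe count on a directory; the directory name and the stripe count of 4 are placeholder choices, not site recommendations.

    import subprocess

    def set_stripe(path, count=4):
        """Stripe new files created under `path` across `count` storage targets."""
        subprocess.run(["lfs", "setstripe", "-c", str(count), path], check=True)

    def show_stripe(path):
        """Print the current Lustre striping layout for `path`."""
        subprocess.run(["lfs", "getstripe", path], check=True)

    # Example (placeholder path): widen striping on a job's output directory.
    # set_stripe("/scratch/myuserid/large_output", count=4)
    # show_stripe("/scratch/myuserid/large_output")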

Purge Policy: Users will have two weeks from the last access time to move data before risking automated removal.

Backup Policy: Files in the Lustre file system are not backed up. Disk failures and inadvertent removal may cause permanent loss of data.
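
Since scratch data is neither backed up nor retained, anything worth keeping should be copied to home or group space (or off the system) before the two-week window closes. The sketch below, with placeholder paths, copies files that have not been accessed within the window from scratch into a group directory, preserving the relative layout and timestamps.

    import os
    import shutil
    import time

    PURGE_AGE = 14 * 24 * 3600  # two-week purge window, in seconds

    def rescue(scratch_root, dest_root):
        """Copy files at risk of purging out of scratch, preserving layout."""
        cutoff = time.time() - PURGE_AGE
        for dirpath, _dirnames, filenames in os.walk(scratch_root):
            for name in filenames:
                src = os.path.join(dirpath, name)
                if os.stat(src).st_atime >= cutoff:
                    continue  # recently accessed; not at immediate risk
                rel = os.path.relpath(src, scratch_root)
                dst = os.path.join(dest_root, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)  # copy2 preserves timestamps and mode

    # Example with placeholder paths:
    # rescue("/scratch/myuserid", "/global/mako/groups/myproject/from_scratch")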