University of California

User Guide

This page is an organizational guide to the user documentation for the ShaRCS clusters at UC. Here you will find introductory descriptions of all the user-oriented technical documentation on this site; the detail pages for each topic below provide in-depth coverage of running jobs on ShaRCS. Use this page to locate specific information within the other documents, to get a sense of how the user documentation is organized, or to familiarize yourself with how the clusters are used.

Getting Started Guide

Read this section for a quick rundown of everything you need to know before you start to work on ShaRCS. It provides brief summaries with enough information to get you started, including accessing the system, preparing and running jobs, storing data, and moving files. Go to Getting Started...

Hardware Description

There are two ShaRCS clusters, located at Lawrence Berkeley National Laboratory and the San Diego Supercomputer Center. They are virtually identical in terms of physical hardware: both have 272 compute nodes, each with eight processing cores and 24 gigabytes of memory. The North Cluster is named Mako, and the South Cluster is named Thresher. See the Cluster Description page for details... View the ShaRCS Architecture Overview Diagram...

Login Information

Users must log in to the cluster assigned to their project. For users on the North Cluster, the login host is mako.srcs.edu. South Cluster users must log in to thresher.srcs.edu. Read more...
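For example, a session from your local workstation might begin as follows (replace myusername with your actual ShaRCS username; you will be prompted for a one-time password, described in the next section):

  ssh myusername@mako.srcs.edu        # North Cluster users
  ssh myusername@thresher.srcs.edu    # South Cluster users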

One-Time Passwords

Users must generate a One-Time Password (OTP) to authenticate a session on ShaRCS. A physical encryption device known as a CRYPTOCard is used for this purpose. You must request a CRYPTOCard from a ShaRCS administrator (or your PI). This is the only method of authentication supported by ShaRCS. Read more...

Compiling Codes

Several compilers are available on ShaRCS, including Intel and GNU compilers for both C/C++ and Fortran. You can compile and test codes on the login nodes. Read more...
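As a brief illustration, a C or Fortran code could be built on a login node along these lines (the compiler flags and file names are placeholders; see the detail page for recommended options):

  icc -O2 -o myprog myprog.c          # Intel C compiler
  gcc -O2 -o myprog myprog.c          # GNU C compiler
  ifort -O2 -o myprog myprog.f90      # Intel Fortran compiler
  gfortran -O2 -o myprog myprog.f90   # GNU Fortran compiler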

Porting Codes

Software Environment

ShaRCS uses the Modules system to control user environments. Modules make it easy to switch between compiler versions and libraries required by different packages. Read more...
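Typical Modules commands look like the following (the package and version names shown are illustrative; run module avail to see what is actually installed on ShaRCS):

  module list                     # show modules currently loaded in your environment
  module avail                    # list all modules available on the cluster
  module load openmpi             # load a package (name is illustrative)
  module swap gcc/4.3 gcc/4.4     # switch between versions (versions are illustrative)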

Running Jobs

Jobs are run through a Resource Manager called TORQUE and a Workload Manager called Moab. Users submit jobs to the queues either interactively or via a batch script using the PBS scripting language. Codes are compiled and tested on the login nodes (four per cluster), while interactive and batch jobs run on the compute nodes (268 per cluster), which the scheduler allocates based on job submission parameters. You will typically share the login nodes with other users, so they are not intended for production runs or time-critical processes. Read more...
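As a rough sketch, a batch submission might look like the following (the queue name, resource requests, and executable are placeholders; consult the Running Jobs detail page for the queues and limits actually configured on ShaRCS):

  #!/bin/bash
  #PBS -N myjob                 # job name
  #PBS -l nodes=2:ppn=8         # request 2 nodes with 8 cores each
  #PBS -l walltime=00:30:00     # 30-minute wall-clock limit
  #PBS -q batch                 # queue name is illustrative

  cd $PBS_O_WORKDIR             # run from the directory the job was submitted from
  mpirun -np 16 ./myprog        # launch the (hypothetical) MPI executable

Submit the script with qsub myjob.pbs and check its status with qstat -u $USER.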

File Storage

Notice! Important data should not be left on the ShaRCS file storage systems, as it may be irretrievably erased or destroyed.

ShaRCS has two file storage systems on each cluster for users to read and write data: an NFS area from BlueArc and a Lustre area from DDN. The NFS area consists of user, group and scratch areas. The Lustre area is completely scratch.

The user /home filesystem is NFS-based, and each user is limited to two gigabytes. Projects can also request a group area, which provides two terabytes per group. The NFS scratch area totals about 25 terabytes, shared by all users.

The paths for the NFS storage areas are:

  • /global/<cluster_name>/users/<username>
  • /global/<cluster_name>/groups/<groupname>
  • /<cluster_name>/scratch/<username>

The Lustre scratch area is a parallel file system providing shared storage space for all users, with a total capacity of about 100 terabytes per cluster. This storage is located at:

  • /scratch/<userid>

Find out more...

Data Transfer

There are two preferred mechanisms for moving data to and from ShaRCS. Small files should be copied using scp. Large files, and large collections of small files, should be transferred with bbFTP, which uses a parallel transfer algorithm to speed up large data movements on parallel file systems such as Lustre. Read more...
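For example, transfers from your workstation might look like the following (the user name, file names, and destination paths are placeholders based on the storage layout above; exact bbFTP options may vary with the installed version):

  # small file via scp to your NFS user area
  scp input.dat myusername@mako.srcs.edu:/global/mako/users/myusername/

  # large file via bbFTP to your Lustre scratch area
  bbftp -s -u myusername -e 'put bigfile.dat /scratch/myusername/bigfile.dat' mako.srcs.edu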

Get Help

ShaRCS User Support is available to help make your experience easier and more productive. Please contact ShaRCS Help with questions about the content on this Web site or about any ShaRCS topic not addressed here. You can also visit our User Support page for other options.