University of California

Cluster Description

The Shared Research Computing Services (ShaRCS) Pilot Project is sponsored by the University of California Office of the President (UCOP). The project consists of two Linux compute clusters: the North Cluster, known as Mako, located at the University of California, Berkeley (UCB), and the South Cluster, known as Thresher, located at the University of California, San Diego (UCSD). The clusters are referred to as ShaRCS resources, or simply ShaRCS, and are often described collectively since they are virtually identical.

Compute Cluster Overview

Each ShaRCS cluster is an IBM system based on the Intel Xeon (Nehalem) processor. Each cluster comprises 272 dual-socket, quad-core Linux nodes connected by a high-performance, low-latency InfiniBand interconnect, making it a stable, high-performing resource for a wide diversity of scientific applications. The cores are configured as symmetric multiprocessing (SMP) units: each core can be used as a separate processor, with full access to all data and memory of the node. The eight-core nodes thus form a 2176-core cluster.

Cluster Configurations (each)

Component Value
Processor Type Intel Nehalem E5530
Node Count 272
Sockets per Node 2
Cores per Socket 4
Cores per Node 8
Clock Speed 2.4 GHz
Memory per Core 3 GB
Memory per Node 24 GB
Interconnect InfiniBand
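The per-node figures above multiply out to the cluster-level totals quoted elsewhere on this page. A minimal arithmetic check in Python (values taken from this page):

```python
# Cluster-level totals derived from the per-node configuration above.
NODES = 272
SOCKETS_PER_NODE = 2
CORES_PER_SOCKET = 4
MEMORY_PER_NODE_GB = 24

cores_per_node = SOCKETS_PER_NODE * CORES_PER_SOCKET       # 8 cores per node
total_cores = NODES * cores_per_node                       # 2176 cores per cluster
memory_per_core_gb = MEMORY_PER_NODE_GB // cores_per_node  # 3 GB per core
total_memory_gb = NODES * MEMORY_PER_NODE_GB               # 6528 GB per cluster

print(cores_per_node, total_cores, memory_per_core_gb, total_memory_gb)
```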

See the ShaRCS Architecture Overview Diagram page for a visual perspective.

Compute Nodes

Each compute node is an IBM iDataPlex dx360 M2 server with dual Intel Xeon 5500 (Nehalem) quad-core 64-bit processors (eight processing cores in total) configured as an SMP unit. The core frequency is 2.4 gigahertz and each core supports four floating-point operations per clock cycle. This delivers a peak performance of 9.6 billion floating point operations per second (gigaFLOPS or GFLOPS) per core, or 76.8 GFLOPS per node. Each node contains 24 gigabytes of memory and an eight-megabyte Level 3 cache. See details on the Intel specs page.
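The peak-performance figures follow directly from the clock speed and the per-cycle floating-point rate. The sketch below (plain Python, using only values stated on this page) reproduces them:

```python
# Peak floating-point performance, per the node specifications above.
CLOCK_GHZ = 2.4       # Nehalem E5530 core frequency
FLOPS_PER_CYCLE = 4   # floating-point operations per clock cycle per core
CORES_PER_NODE = 8    # two quad-core sockets
NODES_PER_CLUSTER = 272

gflops_per_core = CLOCK_GHZ * FLOPS_PER_CYCLE        # 9.6 GFLOPS per core
gflops_per_node = gflops_per_core * CORES_PER_NODE   # 76.8 GFLOPS per node
tflops_per_cluster = gflops_per_node * NODES_PER_CLUSTER / 1000.0

print(gflops_per_core, gflops_per_node, round(tflops_per_cluster, 2))
```

Scaled across all 272 nodes, this yields roughly 20.9 teraFLOPS of peak performance per cluster.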


Compute nodes are equipped with Mellanox ConnectX single-port 4X Quad Data Rate (QDR) InfiniBand Host Channel Adapters (HCAs) and are interconnected via a 324-port Voltaire QDR InfiniBand Grid Director 4700 switch. The switch delivers 40 Gb/s per port with extremely low latency (100 to 300 ns), for a peak of 51.8 Tb/s of non-blocking bandwidth.


Each cluster is equipped with BlueArc-based NFS storage and a Data Direct Networks (DDN)-based Lustre file system. The NFS storage is currently available; the Lustre storage is a future enhancement.

User home storage area

Each cluster has an installation of the BlueArc NFS file system, which serves as its home storage area. This area is accessible at /global/<cluster_name>/users/<username>. Each cluster's home storage area is cross-mounted and accessible from the other cluster: on the North Cluster, for example, the local home area is at /global/mako/users/<username> and the South Cluster's home area is accessible at /global/thresher/users/<username>. The home storage areas are cross-mounted over a dedicated 1 gigabit-per-second CENIC Layer 2 network connection. Each user has a quota of two gigabytes in this area, intended for storing source code and scripts.
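Because the home areas follow a fixed naming scheme, both the local and the cross-mounted paths for a user can be constructed programmatically. The sketch below is a hypothetical helper, not a ShaRCS-provided tool; the cluster names are those stated on this page:

```python
# Home storage paths under the ShaRCS naming scheme: /global/<cluster>/users/<username>
CLUSTERS = ("mako", "thresher")  # North and South Clusters

def home_paths(username):
    """Return each cluster's home-area path for a user.

    Both paths are visible from either cluster via the cross-mount.
    """
    return {cluster: f"/global/{cluster}/users/{username}" for cluster in CLUSTERS}
```

For example, `home_paths("alice")` maps "mako" to "/global/mako/users/alice" and "thresher" to "/global/thresher/users/alice" (a hypothetical username for illustration).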

Scratch storage area

At present, the scratch storage area is also based on a BlueArc file system; in the future, a Lustre parallel file system based on DDN hardware will host the scratch file system on each cluster. The BlueArc NFS scratch file systems are accessible at /<cluster_name>/scratch/<username>. This space is purged periodically: any data not accessed within the previous 14 days is subject to removal without notification.

Optional group home storage area

Projects can request a shared storage area of two terabytes. This area is accessible at /global/<cluster_name>/groups/<groupname>. These areas are not created by default and will not exist unless requested by the project.

See complete details on the File Storage page.

Technical Summary

Component Description
System Name ShaRCS (Shared Research Computing Services Pilot Project)
Login Host URLs (North Cluster) (South Cluster)
Operating System CentOS 5.4 x86_64 (based on Red Hat Enterprise Linux 5.4)
Number of nodes (processing cores) 272 (2176) each cluster
Total Aggregate Memory 6.5 TB (272 nodes x 24 GB) each cluster
Performance 9.6 GFLOPS per processing core; 76.8 GFLOPS per node
Processor Intel Xeon quad-core 64-bit Nehalem E5530
Network interconnect Quad Data Rate InfiniBand
Resource Manager TORQUE (PBS)
Job Scheduler Moab
Disk (Storage) BlueArc NFS (current); Data Direct Networks (DDN) Lustre parallel file system (future)
User Environment Management Modules
Serial Compilers Intel ver 11.1: Fortran 90/95, C, C++
GNU ver 4.1.2: Fortran 90/95, C, C++
PGI ver 10.2: Fortran 90/95, C, C++
Parallel Compilers Open MPI ver 1.4 (built with the above compilers)