University of California

Frequently Asked Questions

Click on a question in the FAQ index to display its answer. You can also click on a section heading to go to the documentation page for that topic.

FAQ Index

About ShaRCS

  1. What is the ShaRCS Project?
  2. Who sponsors ShaRCS?
  3. What is the mission of ShaRCS?
  4. What are the phases of the ShaRCS project?
  5. Who can use the ShaRCS compute clusters?

Hardware

  1. What are the ShaRCS clusters?
  2. What are the system hardware details?
  3. What are the system performance specifications?
  4. What interconnect hardware is used?
  5. What storage is available?

Software

  1. What software applications are available?
  2. What compilers are available?
  3. What debugger is available?
  4. How is the system environment managed?

Access

  1. Who can use the ShaRCS resources?
  2. How can I obtain an account on ShaRCS?
  3. How is authentication on ShaRCS managed?
  4. How can I log in to ShaRCS?

Jobs

  1. How do I compile codes for ShaRCS?
  2. What are the job queues?
  3. What is the workload manager?
  4. What is the scheduler?
  5. How can I run a batch job?
  6. How can I run an interactive job?
  7. Can I run a job that requires an exception to the job queue limits?

FAQ Answers

About ShaRCS

  1. What is the ShaRCS Project?

    Read about the origin, purpose, and history of the ShaRCS project on the ShaRCS Main page and the History and Goals page.

  2. Who sponsors ShaRCS?

    ShaRCS is a University of California project, overseen by an Oversight Board of UC faculty, technical staff, and administrators.

  3. What is the mission of ShaRCS?

    The mission of the Shared Research Computing Services project is to provide highly available, extremely powerful, leading-edge research compute resources to all campuses of the University of California. These resources are provided through the UC system, exclusively for UC research projects, with costs borne by the university rather than directly by the projects. This will allow projects to concentrate on producing results and delivering scientific breakthroughs without the overhead of competing for, and managing the logistics of, high-performance computing infrastructure.

  4. What are the phases of the ShaRCS project?

    The first phase of ShaRCS is for Early Users only. It allows one or two projects to have access to each cluster for several weeks, with the understanding that the system will be unstable and have limited reliability. System administrators will develop and install features and configuration settings during this period, and users should expect to encounter some of the fundamental problems common to all such environments while the resource is being built out. Some software will be installed during this phase, and users will contribute to resolving problems with newly installed applications. This phase should run from February through early spring 2010.

    The second phase is for Pilot Projects to use the clusters in a semi-formal production setting. Each cluster will have 10-12 projects assigned, and users will have unlimited access to the resource. Most of the software stack will be available by the start of this phase, so users should find a fairly stable and reliable environment in which to conduct research. The emphasis of this phase is to determine the long-term viability of such resources for the UC community, and to establish standards and guidelines for the future provision of similar resources to a larger pool of UC researchers and affiliates. This phase will begin in early Spring 2010 and last about one year.

    The third phase will involve extending the ShaRCS compute resources to more UC research projects. This phase is dependent on successful completion of the Pilot Projects phase and a favorable analysis of the benefits of extending these types of services to a broader segment of the UC community. It will likely involve a greater number of users, possibly upgraded hardware and storage components, and will incorporate lessons learned from the earlier phases to improve usability and productivity toward successful research results. This phase may also include seamless job migration between the North and South clusters. Moab GridSuite will be used to support this service.

  5. Who can use the ShaRCS compute clusters?

    Only approved projects have access to ShaRCS during the Pilot Projects phase. If a third phase is justified, it will be announced and procedures for obtaining access will be publicized on this Web site.

[ Back to Top ]

Hardware

  1. What are the ShaRCS clusters?

    There are two clusters in ShaRCS. During the Pilot Projects phase, each project will be restricted to one cluster or the other. Eventually, projects may be given access to both clusters if accommodations can be made for all.

    One cluster supports the northern campuses and sites; the other supports the southern ones. The North Cluster is named Mako and is accessible at mako.berkeley.edu. The South Cluster is named Thresher and is accessible at thresher.sdsc.edu. See the Logging In and History and Goals pages for more details.

  2. What are the system hardware details?

    Each cluster has 272 IBM iDataPlex dx360 M2 servers with dual Intel Xeon E5530 (Nehalem) quad-core 64-bit processors configured as SMP nodes, providing 2176 processing cores per cluster. Jobs will be allowed to request up to 256 nodes at a time. Each node has 24 GB of memory and 8 MB of L3 cache. See the Cluster Description page for more details.

  3. What are the system performance specifications?

    The core frequency is 2.4 GHz, and each core can perform four floating-point operations per clock cycle. This yields a peak performance of 9.6 billion floating-point operations per second (GFLOPS) per core, or 76.8 GFLOPS per node.

  4. What interconnect hardware is used?

    Compute nodes are equipped with Mellanox ConnectX single-port 4X Quad Data Rate (QDR) Host Channel Adapters (HCAs) and are interconnected via a QLogic 12800-180 4X QDR InfiniBand switch.

  5. What storage is available?

    Each cluster has a home and a scratch storage area, both provided as NFS services on BlueArc servers. Individual user home areas have a 2 GB limit. The scratch areas are 25 TB per cluster with no individual limits. Projects can also request 2 TB of shared project space.

    A DDN-based parallel file system running Lustre will be available in the future, providing about 100 TB per cluster.

    More details on the sizes of the storage areas, as well as the backup and purge policies, are provided on the File Storage page.
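
    If you want to check how much of your 2 GB home quota you are using, a standard Linux disk-usage command works; this is a generic example, not a ShaRCS-specific tool:

    $ du -sh $HOME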

[ Back to Top ]

Software

  1. What software applications are available?

    The software applications available on ShaRCS are listed on the Software Applications page.

  2. What compilers are available?

    Several compiler choices are available for both serial and parallel programs on ShaRCS. Complete descriptions with examples are available on the Compiling page. A brief summary is provided here:

    Serial Programs

    Serial source code should be compiled for the ShaRCS system with the following compiler commands:

    • icc [options] file.c (C programs)
    • icpc [options] file.cpp (C++ programs)
    • ifort [options] file.f (fixed-form Fortran source code)
    • ifort [options] file.f90 (free-format Fortran source code)
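
    For example, a serial C or Fortran program might be compiled with optimization as shown below; the source and executable names are illustrative only:

    $ icc -O2 -o myprog myprog.c
    $ ifort -O2 -o myprog myprog.f90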

    Parallel Programs

    MPI source code should be compiled for the ShaRCS system with the following default compiler commands:

    • mpicc [options] file.c (MPI parallel C programs)
    • mpicxx [options] file.cpp (MPI parallel C++ programs)
    • mpif77 [options] file.f (Fortran 77 source code)
    • mpif90 [options] file.f90 (free-format Fortran source code, with dynamic memory allocation and object-oriented features)

    Other MPI stack/compiler combinations may be obtained by loading the appropriate module in the Modules Environment.
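
    As a sketch, an MPI program written in C could be compiled with the default MPI stack like this (the file names are illustrative, and the default MPI module is assumed to be loaded):

    $ mpicc -O2 -o hello_mpi hello_mpi.c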

  3. What debugger is available?

    ShaRCS has a license for the DDT debugger. The user guide is available as a PDF in /opt/ddt/doc. You can also find useful help on the DDT Web site.
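
    A typical session starts from a build with debugging symbols and then launches the program under DDT. The commands below are a sketch; the program name is illustrative, and DDT may first need to be loaded through the Modules Environment:

    $ icc -g -O0 -o myprog myprog.c
    $ ddt ./myprog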

  4. How is the system environment managed?

    ShaRCS uses the Environment Modules package ("Modules") to manage user environments. Modules makes it easy to load and unload the default paths, libraries, and environment variables for each software package. You can read about this on the Modules Environment page. Some quick examples are presented on SourceForge, and you can read the module and modulefile man pages.
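
    A few representative Modules commands are shown below; the module name in the load and unload examples is a placeholder, so run module avail to see what is actually installed on ShaRCS:

    $ module avail (list the available modules)
    $ module load modulename (add a package to your environment)
    $ module list (show the currently loaded modules)
    $ module unload modulename (remove a package from your environment)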

[ Back to Top ]

Access

  1. Who can use the ShaRCS resources?

    To use the ShaRCS resources, you must belong to a project that has been given an allocation. See the Getting Started Guide for more details.

  2. How can I obtain an account on ShaRCS?

    All requests for user accounts on ShaRCS should go through the project PI (or a designated representative). See the Getting Started Guide for more details.

  3. How is authentication on ShaRCS managed?

    Users who are authorized to log in and use the resources will receive a One-Time Password (OTP) device (or software tool, pending future availability) to generate passwords for use on the login nodes. OTP will be the only method for logging in to ShaRCS. See the One-Time Password page for complete details.

  4. How can I log in to ShaRCS?

    Access to ShaRCS during the early user phase will be through login nodes specific to each cluster. One-time passwords (OTPs) will be required for access. See the Logging In page for complete details.
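
    For example, a user whose project is assigned to the North Cluster might connect with ssh as shown below, supplying a one-time password when prompted. Replace username with your ShaRCS account name; South Cluster users would connect to thresher.sdsc.edu instead:

    $ ssh username@mako.berkeley.edu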

[ Back to Top ]

Jobs

  1. How do I compile codes for ShaRCS?

    For full details, please see the Compiling on ShaRCS page.

    Serial source code should be compiled for the ShaRCS system with the following compiler commands:

    • icc [options] file.c (C programs)
    • icpc [options] file.cpp (C++ programs)
    • ifort [options] file.f (fixed-form Fortran source code)
    • ifort [options] file.f90 (free-format Fortran source code)

    MPI source code should be compiled for the ShaRCS system with the following default compiler commands:

    • mpicc [options] file.c (MPI parallel C programs)
    • mpicxx [options] file.cpp (MPI parallel C++ programs)
    • mpif77 [options] file.f (Fortran 77 source code)
    • mpif90 [options] file.f90 (free-format Fortran source code, with dynamic memory allocation and object-oriented features)

  2. What are the job queues?

    Job queues are used to schedule jobs for execution and are the only mechanism for submitting batch jobs. Users select a queue based on its purpose and characteristics. For example, a queue may offer higher-priority access in exchange for limiting how long a job can run.

    ShaRCS has four user-accessible queues:

    • normal
    • short
    • long
    • express

    Please see the Running Jobs page for details about these queues and information on how to submit jobs to the ShaRCS queuing system.
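
    For example, a job can be directed to a particular queue at submission time; the script name below is illustrative:

    $ msub -q short my_batch_script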

  3. What is the workload manager?

    The workload manager for ShaRCS is TORQUE. Please see the Running Jobs page for more information.
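
    TORQUE provides the standard PBS-style client commands on the login nodes. For example, assuming qstat is in your path, you can list your own queued and running jobs with:

    $ qstat -u $USER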

  4. What is the scheduler?

    ShaRCS uses Moab for its scheduler software. Please see the Running Jobs page for more information.
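
    Moab's client commands show how the scheduler views the workload. The job ID below is a placeholder, and these examples assume the standard Moab clients are available on the login nodes:

    $ showq (show all jobs known to the scheduler)
    $ checkjob <jobid> (show scheduling details for a single job)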

  5. How can I run a batch job?

    To submit a script to TORQUE, use the following syntax:

    $ msub <batch_script>

    Please see the Running Jobs page for an example and more details.
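
    As a minimal sketch, a batch script for one 8-core node might look like the following. The job name, resource requests, queue, and executable are illustrative assumptions, not ShaRCS defaults; the actual limits are documented on the Running Jobs page.

    #!/bin/bash
    # Illustrative job: one node, all 8 cores, 30 minutes, normal queue.
    #PBS -N testjob
    #PBS -q normal
    #PBS -l nodes=1:ppn=8
    #PBS -l walltime=00:30:00

    # Run from the directory the job was submitted from.
    cd $PBS_O_WORKDIR
    mpirun -np 8 ./hello_mpi

    The script would then be submitted with msub as shown above.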

  6. How can I run an interactive job?

    To request an interactive session, use the following syntax:

    $ msub -I

    To terminate your interactive session, use the exit command.

    $ exit

    Please see the Running Jobs page for an example and more details.
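
    An interactive session can also request specific resources and a queue. The values below are illustrative, not ShaRCS defaults:

    $ msub -I -q short -l nodes=1:ppn=8,walltime=00:30:00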

  7. Can I run a job that requires an exception to the job queue limits?

    You can get special permission to run jobs that exceed the limits allowed for standard job submissions to the scheduler. Please contact ShaRCS support to make your request.

[ Back to Top ]