Getting started

Welcome to the Purdue Analysis Facility!

This guide will help you quickly set up a working environment for your analysis.

🚀 Login to Purdue Analysis Facility

1. Choose a login method

  • Purdue University account - recommended if you are a Purdue user

  • CERN account (CMS users only)

  • FNAL account

2. Select resources

After a successful login, you will be redirected to a page where you can select the number of CPU cores, RAM, and GPUs for your session.

The default values are enough to get started; if you need more resources, you can close the session (the Shut Down button in the top right corner) and recreate it with a different selection.


There are two options for GPU selection:

  • 5 GB “slice” of an NVIDIA A100 GPU - immediately available, but less powerful

  • Full 40 GB instance of an NVIDIA A100 GPU - more powerful, but subject to availability


If for any reason the session creation fails but you need urgent access to your files, use the Minimal JupyterLab interface option.

3. Review storage volumes

After the session has started, review the available storage options:

  • The default directory in the file browser and Terminal is /home/<username>; it has a 25 GB quota.

  • In the file browser you will see symlinks to the following directories:

    • work (also mounted at /work/) - shared storage for AF users.

      There are 100 GB personal directories under work/users, and project directories under work/projects.

    • depot (also mounted at /depot/cms) - shared storage available only to Purdue users.

      Any code that uses SLURM or Dask Gateway should be stored here.

    • eos-purdue (also mounted at /eos/purdue) - read-only directory that stores large datasets and users’ Grid directories.
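From a Terminal you can check these volumes directly; a short sketch (only the $HOME-based commands below work outside an AF session):

```shell
# Inspect home storage and the symlinks described above.
du -sh "$HOME"   # usage counted against the 25 GB home quota
ls -l "$HOME"    # the work, depot, and eos-purdue symlinks live here
```

Inside a session, `df -h /work /depot/cms` additionally shows free space on the shared volumes.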


4. Review kernels and Conda envs

There is a pre-installed Python3 kernel that includes all of the most common packages used in HEP analyses (see full list of packages). This kernel will be automatically selected when you create a new Jupyter notebook.

When working in a Terminal instead of a Jupyter Notebook, you need to activate the environment explicitly:

conda activate /depot/cms/kernels/python3

If you need a package that is missing from the default kernel, please contact Purdue AF support.

You can also create and share custom kernels.
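Under the hood, a Jupyter kernel is just a kernelspec directory containing a kernel.json. The sketch below writes one by hand to illustrate the standard format; normally `python -m ipykernel install --user --name my-env` generates it for you after you build your Conda environment. The environment name and interpreter path here are examples, not facility defaults.

```shell
# Sketch: what a custom Jupyter kernelspec looks like on disk.
# The interpreter path below is an example, not a real facility path.
mkdir -p ~/.local/share/jupyter/kernels/my-env
cat > ~/.local/share/jupyter/kernels/my-env/kernel.json <<'EOF'
{
  "argv": ["/depot/cms/users/my-env/bin/python", "-m", "ipykernel_launcher",
           "-f", "{connection_file}"],
  "display_name": "My Env (example)",
  "language": "python"
}
EOF
```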

5. Set up GitHub access

Follow GitHub's documentation on generating an SSH key and adding it to your account.
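A non-interactive sketch of the key-generation step (the filename and email address are examples; drop -f and -N to be prompted for a path and passphrase instead):

```shell
# Generate an ed25519 key pair (GitHub's recommended key type).
mkdir -p ~/.ssh
ssh-keygen -t ed25519 -C "your_email@example.com" -f ~/.ssh/id_ed25519_github -N ""
cat ~/.ssh/id_ed25519_github.pub   # paste this into GitHub -> Settings -> SSH and GPG keys
```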

After you have generated an SSH key and added it to your GitHub account, run the following command in a Terminal to finish GitHub authentication:

ssh -T git@github.com

6. Set up VOMS proxy

  1. In order to access data via XRootD, you will need a VOMS certificate. To obtain and install your CMS VOMS certificate, follow the instructions at CMS TWiki, specifically the section “Obtaining and installing your Certificate”.

    Uploading files to Purdue AF

    There is no SSH access to the Purdue Analysis Facility. To upload a VOMS certificate or any other file to your /home/ storage at Purdue AF, you can do one of the following:

    • Drag-and-drop a file from your local file browser into Purdue AF file browser.

    • OR (Purdue users only):

      1. Upload the file from your computer to the /home/ directory at Hammer cluster:

        scp /local/path/mycert.p12 <username>@hammer.rcac.purdue.edu:~/
      2. SSH into Hammer cluster:

        ssh <username>@hammer.rcac.purdue.edu
      3. Copy the file to your Depot directory where it will be visible from Purdue AF:

        cp /hammer/path/mycert.p12 /depot/cms/users/<username>/
      4. Open your Purdue AF session and copy the file from Depot:

        mkdir ~/.globus
        cp /depot/cms/users/<username>/mycert.p12 ~/.globus
  2. (Optional) Specify the path where your VOMS proxy will be stored. If you are using SLURM or Dask Gateway, the proxy location must be on Depot (currently only possible for users with a Purdue account):

    export X509_USER_PROXY=/depot/cms/users/$USER/x509up_u$NB_UID
  3. Activate the VOMS proxy:

    voms-proxy-init --rfc --voms cms --valid 192:00
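You can inspect the resulting proxy with `voms-proxy-info --all` (a standard VOMS client command). If X509_USER_PROXY is not set, Grid tools fall back to the conventional default path; a sketch of how the location is resolved:

```shell
# X509_USER_PROXY overrides the Grid default of /tmp/x509up_u<uid>.
proxy_path="${X509_USER_PROXY:-/tmp/x509up_u$(id -u)}"
echo "proxy location: $proxy_path"
```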

7. Subscribe to Purdue AF mailing list

Instructions to subscribe to the mailing list.


Currently, subscribing is only possible for users with Purdue email accounts.