LSST UK
General
The LSST UK setup currently uses the VOMS servers at FNAL, and its CVMFS software area is also hosted at FNAL.
User setup
Sites setup
Here is the Operations Portal LSST page.
VOMS config
VOMS_SERVERS="'vomss://voms2.fnal.gov:8443/voms/lsst?/lsst'"
VOMSES="'lsst voms1.fnal.gov 15003 /DC=org/DC=opensciencegrid/O=Open Science Grid/OU=Services/CN=voms1.fnal.gov lsst'
        'lsst voms2.fnal.gov 15003 /DC=org/DC=opensciencegrid/O=Open Science Grid/OU=Services/CN=voms2.fnal.gov lsst' "
VOMS_CA_DN="'/DC=org/DC=cilogon/C=US/O=CILogon/CN=CILogon OSG CA 1' '/DC=org/DC=cilogon/C=US/O=CILogon/CN=CILogon OSG CA 1' "
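For reference, the VOMSES entries above correspond to the following /etc/vomses lines, as a sketch assuming the standard five-field format (alias, host, port, host DN, VO name); sites may instead keep these as per-VO files under /etc/vomses/:
 "lsst" "voms1.fnal.gov" "15003" "/DC=org/DC=opensciencegrid/O=Open Science Grid/OU=Services/CN=voms1.fnal.gov" "lsst"
 "lsst" "voms2.fnal.gov" "15003" "/DC=org/DC=opensciencegrid/O=Open Science Grid/OU=Services/CN=voms2.fnal.gov" "lsst"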
Roles
"/lsst/Role=pilot/Capability=NULL" "/lsst/Role=pilot" "/lsst/Role=NULL/Capability=NULL" "/lsst"
Accounts
 - 10 normal accounts mapped to the generic role
 - 10 pilot accounts mapped to the pilot role
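A minimal sketch of the corresponding entries in a site group map file, assuming hypothetical pool account prefixes lsstplt and lsst (the actual account names are chosen by each site and are not defined on this page):
 # pilot role -> pilot pool accounts (e.g. lsstplt001-lsstplt010)
 "/lsst/Role=pilot" .lsstplt
 # generic LSST members -> normal pool accounts (e.g. lsst001-lsst010)
 "/lsst" .lsst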
 
Storage
There is not yet a precise requirement on the size, but LSST users are expected to need a few TB spread across different sites; they currently use 8 TB across 4 sites.
Software area
VO_LSST_SW_DIR = /cvmfs/lsst.opensciencegrid.org/uk
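A quick check of the software area on a worker node, as a sketch (requires the CVMFS client to be installed and the repository configured; VO_LSST_SW_DIR is the variable exported on YAIM-configured worker nodes):
 # verify the repository mounts cleanly
 cvmfs_config probe lsst.opensciencegrid.org
 # list the UK software area
 ls ${VO_LSST_SW_DIR:-/cvmfs/lsst.opensciencegrid.org/uk}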
Requested software
xrootd_utils
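Assuming xrootd_utils refers to the standard xrootd client utilities, a basic worker-node check might look like the following (the endpoint in the copy command is a placeholder, not a real LSST server):
 # confirm the client tools are available
 xrdcp --version
 # example transfer from a hypothetical xrootd endpoint
 xrdcp root://xrootd.example.org//store/lsst/example.fits /tmp/example.fits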
Data Challenge
DC1 at NERSC
Resources
 - 500 nodes (Haswell x 32 cores each) at NERSC
 - DC1 utilization at NERSC: http://portal.nersc.gov/project/lsst/glanzman/graph6.html
 
DC2 on the grid
Resources
 - 2x-3x the DC1 resources
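For scale, taking the DC1 figures above (500 Haswell nodes x 32 cores = 16,000 cores), 2x-3x DC1 corresponds very roughly to 32,000-48,000 cores; this is an indicative scaling only, not a stated requirement.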