RAL Tier1 Operations Report for 27th July 2016
| Review of Issues during the week 20th to 27th July 2016.
- Saturation of the inbound 10Gbit OPN link has continued; the link has now shown complete saturation for a fortnight.
| Resolved Disk Server Issues
- GDSS650 (LHCbUser, D1T0), which failed on Monday (19th July), was returned to service on Wednesday afternoon (20th). A single file that was being written when the server failed was lost.
- GDSS634 (AtlasTape, D0T1) crashed on Thursday 21st July. It was returned to service on Monday 25th July. This looks like a disk controller failure. Eleven files that were being written when it failed were reported lost to Atlas.
- GDSS678 (CMSTape, D0T1) crashed on Saturday (23rd July). It was returned to service, initially read-only, the following day. Seven files were reported lost to CMS.
| Current operational status and issues
- LHCb are seeing a low but persistent rate of failures when copying the results of batch jobs to Castor. A further problem sometimes occurs when these (failed) writes are then attempted to storage at other sites. A recent modification has improved, but not completely fixed, this.
- The intermittent, low-level, load-related packet loss seen over external connections is still being tracked. Likewise, we are working to understand a remaining low level of packet loss seen within part of our Tier1 network.
| Ongoing Disk Server Issues
- GDSS675 (CMSTape, D0T1) was taken out of service on Tuesday morning (26th July). It suffered a second disk failure while the first failed disk was being rebuilt. All files awaiting migration to tape were flushed off the server before it was taken out of service.
| Notable Changes made since the last meeting.
- The LSST VO has been enabled on the batch farm.
- Three disk servers, each of 57TB capacity, have been deployed into each of the following tape caches: AtlasTape, cmsTape and lhcbRawRdst (i.e. nine servers in total). These will enable the withdrawal of some of the older disk servers from these service classes.
- The Condor configuration has been modified to make use of shared ports. This has significantly reduced the incidence of batch job restarts (a minimal configuration sketch follows this list).
- Both "production" and "test" FTS3 services have been upgraded to version 3.4.7
- The migration of Atlas data from T10KC ("C") to T10KD ("D") tapes continues; over 900 of the 1300 tapes have been migrated so far.
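Shared-port mode routes all of a host's HTCondor daemon traffic through a single listening TCP port rather than one port per daemon. The snippet below is a minimal, illustrative sketch of such a configuration; the exact settings and port number used on the RAL farm are assumptions, not taken from this report.

```
# Funnel all inbound daemon connections through the condor_shared_port daemon,
# so each host needs only one open TCP port.
USE_SHARED_PORT = True

# Port on which condor_shared_port listens; 9618 is the HTCondor default.
SHARED_PORT_PORT = 9618
```

Needing only one persistent, well-known port per host reduces the scope for firewall and connection problems between submit and worker nodes, which may help explain the drop in job restarts noted above.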
| Declared in the GOC DB
| Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
| lfc.gridpp.rl.ac.uk | SCHEDULED | WARNING | 01/08/2016 12:00 | 01/08/2016 17:00 | 5 hours | RAC Oracle backend migration to new hardware |
| lfc.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 01/08/2016 09:00 | 01/08/2016 12:00 | 3 hours | RAC Oracle backend migration to new hardware |
| Advanced warning for other interventions
| The following items are being discussed and are still to be formally scheduled and announced.
Listing by category:
- Castor:
  - Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
  - Update to Castor version 2.1.15. This awaits successful resolution and testing of the new version.
  - Migration of data from T10KC to T10KD tapes (affects Atlas & LHCb data).
- Networking:
  - Replace the UKLight Router. Then upgrade the 'bypass' link to the RAL border routers to 2*40Gbit.
- Fabric:
  - Firmware updates on older disk servers.
| Entries in GOC DB starting since the last report.
| Open GGUS Tickets (Snapshot during morning of meeting)
| GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
| 122827 | Green | Less Urgent | In Progress | 2016-07-12 | 2016-07-13 | SNO+ | Disk area at RAL |
| 122818 | Green | Less Urgent | In Progress | 2016-07-12 | 2016-07-12 | Atlas | Object Store at RAL |
| 122804 | Green | Less Urgent | Waiting Reply | 2016-07-12 | 2016-07-15 | SNO+ | glite-transfer failure |
| 122364 | Green | Less Urgent | In Progress | 2016-06-27 | 2016-07-15 | | cvmfs support at RAL-LCG2 for solidexperiment.org |
| 121687 | Yellow | Less Urgent | On Hold | 2016-05-20 | 2016-05-23 | | packet loss problems seen on RAL-LCG perfsonar |
| 120810 | Green | Urgent | In Progress | 2016-04-13 | 2016-06-24 | Biomed | Decommissioning of SE srm-biomed.gridpp.rl.ac.uk - forbid write access for biomed users |
| 120350 | Green | Less Urgent | Waiting Reply | 2016-03-22 | 2016-07-26 | LSST | Enable LSST at RAL |
| 119841 | Red | Less Urgent | On Hold | 2016-03-01 | 2016-04-26 | LHCb | HTTP support for lcgcadm04.gridpp.rl.ac.uk |
| 117683 | Yellow | Less Urgent | On Hold | 2015-11-18 | 2016-04-05 | | CASTOR at RAL not publishing GLUE 2 |
| Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 729); CMS HC = CMS HammerCloud
| Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
| 20/07/16 | 100 | 100 | 100 | 98 | 100 | 100 | 100 | Single SRM test failure: User Timeout. |
| 21/07/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
| 22/07/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
| 23/07/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
| 24/07/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
| 25/07/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
| 26/07/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |