Recently there have been some spectacular failures of OLE at a number of high-profile locations, giving rise to some pretty dreadful ‘bad wire’ days.

Well, there’s a lot going on behind the scenes. Network Rail has a plan. In fact, it has a ten point plan. Grahame Taylor has a look at what’s involved…

If you look back at the April edition of The Rail Engineer magazine (issue 102), you will find our article on OLE (Overhead Line Equipment) trials and tribulations. It described how, over a period of about a month, the OLE failed spectacularly at a number of high-profile locations, giving rise to some pretty dreadful ‘bad wire’ days. The article explained how the ‘knitting’ is put together, what types are out there and the root causes of some of the main problems encountered.

If that was all there was to OLE design and management, then you’d be right in thinking that we’re in for a regular diet of equipment failures. It could just look as if that’s what always happens and there’s not much that can be done about it.

Well, there’s a lot more going on behind the scenes. Network Rail has a plan. In fact, it has a ten point plan and in this concluding article on OLE we’ll be having a look at what’s involved.

Building up a picture

Our guide, as before, is Nigel Edwards, who is Network Rail’s reliability improvement manager E&P (Electrification and Plant) working in the maintenance services reliability team. His main focus is OLE and signal power supplies but he, Paul Ramsey and Steve Price also cover other E&P assets like points heating and distribution equipment.

The reliability team is part of Maintenance Services in Network Operations and has been running the programme for a couple of years.

It looks at all equipment failures and failure types, whether they are delay-causing or not.

It would be easy to focus on just the latest high-profile incident, such as the St Neots insulator failure, and then decide on a mass change of all insulators. That would cost an absolute fortune and would not necessarily be the right thing to do in the light of all the other data available.

“We try and develop a picture from everything we gather, right down to the ten minute failures, rather than just concentrate on the big incidents. So we’re trying to build up a picture of the reliability of the whole OLE infrastructure. The ten point plan was developed for OLE because we knew it needed to improve.”

Ten workstreams

The plan focussed on reducing all delay-causing incidents on OLE with a coordinated approach. The E&P community has been very good at doing things individually, so this was an opportunity to identify ten workstreams with clear accountability and a focus – a leader – looking after each one.

The idea was to discover and accelerate best practice around the business and flush out any ‘blockers’ or problems to secure capital funding. It was envisaged that the exercise would operate for a limited period. Addressing underlying causes would mean that, at the end of the process, there’s a return to ‘business as usual’ (or in fact ‘much better than usual’!) without the need for an ongoing ten point plan. The steering part of the programme is largely complete but the findings are still being implemented and monitored across the routes.

So the workstreams (WS) were divided out like this:

1. policy and standards
2. required data and information
3. future inspection techniques
4. best practice in maintenance management
5. failure modes and analysis
6. renewals planning
7. work delivery quality
8. staff competence and training
9. project interface issues
10. first response strategy

WS1 (policy and standards) – in many cases the basis of good maintenance is a quality set of work instructions, and many of these have been improved by the route teams supporting the ten point plan. They also form the basis of training and development regimes.

WS2 (required data and information) provided management information in the form of the period short circuit fault report and delivered the maintenance compliance report.

Test vehicles

WS3 (future inspection techniques) was not straightforward. Before it is possible to look too far forward, it’s necessary to check that current systems are up and running and fit for purpose. There are three test vehicles monitoring the OLE. There’s the NMT (New Measurement Train), the EMV (Electrification Monitoring Vehicle) and Mentor. The latter is currently being refurbished. And there are up to three measuring systems on each of these vehicles:

» Contact wire height and stagger position

» Contact wire wear

» Pantograph force and longitudinal acceleration.

The wire wear system has been fine – that’s on EMV and NMT. But there have been problems with the height and stagger system. The NMT does an excellent job with track data, but perhaps E&P monitoring has been a poor relation over the years.

So this was an early task in WS3 – to get the existing systems up and running reliably and repeatedly.

There’s now an agreed improvement plan in place with the data collection services team that puts OLE monitoring back in the spotlight. Thereafter, new techniques can be investigated – and there are plenty to find from around the world.

WS4 (best practice in maintenance management) is the gathering of all the maintenance procedures and a critical evaluation of which ones, across the network, are the most effective. The process for managing vegetation encroaching on the OLE came from this workstream.

WS5 (failure modes and analysis) has already been touched on. This is the careful analysis of the impact and frequency of component failures to assess what is best spent where, avoiding knee-jerk expenditure. A total of 85 campaign changes have been identified to improve the inherent reliability of original designs.
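As a rough illustration of the kind of prioritisation WS5 describes – ranking component classes by expected delay cost (failure frequency multiplied by typical impact) rather than reacting to the latest headline incident – a minimal sketch might look like this. All component names and figures below are invented for the example; they are not Network Rail data.

```python
# Hypothetical WS5-style prioritisation: rank component classes by
# expected delay-minutes per year (frequency x impact), so spending
# targets the worst offenders. All figures are invented for illustration.

components = [
    # (component class, failures per year, average delay minutes per failure)
    ("insulator", 12, 40),
    ("registration arm", 30, 15),
    ("neutral section", 4, 180),
    ("dropper", 60, 5),
]

# Expected annual delay per class, worst first.
ranked = sorted(
    ((name, rate * impact) for name, rate, impact in components),
    key=lambda item: item[1],
    reverse=True,
)

for name, expected_delay in ranked:
    print(f"{name}: {expected_delay} expected delay-minutes/year")
```

On these invented numbers the infrequent but high-impact failure (the neutral section) outranks the most frequent one (the dropper) – which is exactly why the workstream weighs impact as well as frequency.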

WS6 (renewals planning) is linked to understanding asset condition, the risk of catastrophic failure and prioritisation of improvement works. A standard process and forms are available so that the routes see proposals in the same way.

WS7 (work delivery quality) – the examination of work delivery quality is an interesting topic. Drawing on data revealed through the other workstreams, it became apparent that roughly 30% of failures were caused by the maintenance regime or maintenance interventions and another 30% by design or construction interventions. Some of these were effectively ‘own goals’. So, by focussing on construction and maintenance quality, it is possible to make significant reductions in failures, some of which might only occur a considerable time after the work was done.

WS8 (staff competence and training) was another major exercise. Focussed initially on contractors, the aim was to expand the number of competent staff available for major commissioning. Hard lessons had been learnt from the experience at Rugby a couple of Christmases ago. A brand new OLE training span facility has been built at Walsall, and the Peterborough and Romford training spans have been enhanced, to be followed by the facility at Carstairs. There are now plans in place to deliver national training at these venues and at places such as Preston and, of course, there will be a need for whole new setups to serve the Great Western electrification scheme.

WS9 (project interface issues) examines the issue of managing the interface and interactions between the existing railway and major schemes. This can be tricky as there are rarely clear-cut boundaries between the maintainers and the installers.

And finally there is WS10 (first response strategy) – the development of techniques and expertise that will cut down the time needed to restore systems to full working order if a failure occurs.

A camera-on-a-pole has been developed in collaboration with KML (Kongsberg Maritime Ltd). It sounds like a simple device and, to some extent, it is. The advent of very small digital cameras with astonishing resolution and wireless connections to handheld devices has enabled problems to be examined in fine detail without having to get the ladders out – and without having to turn off the power.

“Steve Price also developed a pole mounted cutter prior to the Olympics to respond better to faults. We work very closely with critical product suppliers like Arthur Flury who produce the single rod neutral sections, and WT Henley who produce the insulators, to make sure they design and build reliability into their products.”


The ten point plan is a flexible system that can cope with anything that needs to be measured or monitored.

“Vegetation management was a classic example where we put in a new process, rolled it out and now we’re monitoring how the routes are taking it up. We’ve got a system that’s flexible and that makes sure that the routes are focussing on things that are found to be important. Obviously, devolution has come along and ultimately the routes will make their own decisions, but we are still working closely with them, tailoring solutions to prevent the risk of catastrophic failure.”

All these developments were already underway before the particularly awful ‘bad wire’ days at Hanslope Junction and St Neots. Failures of that magnitude just serve as a reminder that doing nothing is not an option – best to have a plan!