Andy Bartlett returns to the SpheraNOW podcast to discuss hazardous area management and its implications for business with Alex Studd, one of Sphera’s product marketers.
Listen to other episodes of “Andy’s Almanac on Accidents.”
The following transcript was edited for style, length and clarity.
Alex Studd:
Hello, everyone. Welcome to SpheraNOW ESG Podcast, a program focused on safety, sustainability and productivity goals. My name is Alex Studd, product marketer at Sphera with a focus on operational risk management. Today, we welcome back to the program Andy Bartlett, Sphera’s solution consultant for operational risk management, for part nine of Andy’s Almanac on Accidents. Andy, thank you so much for joining us today.
Andy Bartlett:
Well, nice to hear your voice again, Alex, and this is the first one of these we’ve done together. Let’s hope it’s as successful as the others were, and we get some feedback. I’m pleased to be doing this.
Alex Studd:
Terrific. I feel the same way. Andy, in our previous episode of Andy’s Almanac, we discussed human error. On today’s podcast, we’re excited to talk about successfully managing hazardous areas.
Specifically, I’m eager to talk with you about what can be achieved and what can be avoided by simply reviewing lessons from the past. So, Andy, what would you say is an important metric that companies in 2022 should be measuring or tracking?
Andy Bartlett:
The big metric now is environmental, social and governance (ESG) performance. It’s become an essential metric for successful companies. Those with strong ESG performance produce higher returns on investment, lower risks and better resilience during a crisis. Managing personnel and work in hazardous facilities is a complex process. If an incident pollutes air, land or water, it can affect environmental performance. If an incident harms people, it can affect social performance, resulting in litigation and fines. And if a company gains a reputation for incidents and poor risk management, it can be fined by regulators, and its governance processes can be questioned by shareholders. As we know, share price is one of the important things that keeps a company going.
Alex Studd:
Now, that all makes sense, Andy, but around the world, accidents continue to cause life-altering injuries, fatalities and, obviously, environmental damage. Why is this still happening?
Andy Bartlett:
Basically, it’s not learning from previous incidents. You know, how many times have we switched on a TV after an incident and seen those in charge quoting the overused phrase, “We will learn lessons and apply them to prevent it happening again”? And the question I always ask is, “Will they?” Because I’ve seen it quite a few times. There was a man called Trevor Kletz, one of the founding fathers of safety in the chemical industry, who was a safety advisor at Imperial Chemical Industries (ICI). He was often tempted to tell a manager who had allowed a familiar accident to happen on his plant, “Don’t bother to write an accident report. I’ll send you one from my files,” which tells you the accidents continue to happen over and over again. We’re not learning lessons. As Carl Sagan, the American astrophysicist, once famously put it, “You have to know the past to understand the present.”
One of the examples Trevor uses is about a compressor up on a platform with some steel posts around it. In the winter, the people working on this compressor said, “Oh, wouldn’t it be nice if we had some metal shielding to stop the wind.” So they went ahead and boxed it in, making a shed out of it. Then they had a gas release, and it affected the people who were trapped inside working on the compressor. And when they went to do the investigation report, they said, “Oh, that’s why the fence was taken down the last time. The same thing happened.” He said it was a generational thing: about every 10 years, corporate memory tends to get lost. We’re not learning the lessons.
Alex Studd:
That’s really interesting. I know, Andy, you spent a lot of time in the chemical and hydrocarbon industries. I’m sure you witnessed a lot of accidents and took part in investigations where people had unfortunately been killed or seriously injured. Can you talk about that?
Andy Bartlett:
Yes, Alex. Some incidents or accidents, whatever people want to call them, have been unique, but the majority happened before in one form or another. For example, an employee was opening a manual valve in a sulfur recovery plant on a platform about two meters above grade. The valve gland started to weep, and the employee noticed a rotten egg smell. He knew what that meant and descended the ladder quickly but fainted a few feet from the ground. I arrived at the scene just as his coworker put on a breathing set and pulled the employee to safety. The coworker then went up the ladder and closed the valve to stop the gland weeping. This was an H2S release. The employee was taken to the onsite clinic and given oxygen to treat his shock.
He fully recovered, but unfortunately, he never went back to operations, which was a shame because he was quite a good operator. So, what is H2S? At low concentrations, it can be detected by its rotten egg smell. Continuous low-level exposure and high concentrations of H2S rapidly reduce the ability to smell the gas. If you get an H2S release and you don’t smell it, well, you could end up going down. At exposures of 100 parts per million and above, you can immediately go into shock, experience convulsions, lose the ability to breathe, fall into a coma and even die. The effects can be extremely rapid, usually within a few breaths. I worked with a Canadian man who had a big scar across his throat where they’d had to cut in to open up his windpipe because his breathing had stopped.
He did recover from the H2S exposure, but that was because there were medics on site who knew how to treat it. On this particular occasion, I was pleased to see the employees’ training kick in. Lessons learned from previous incidents had been shared with employees by corporate safety organizations, so H2S was a well-known hazard. At the facilities I worked at, self-contained breathing sets were placed strategically around the site. Training in their use was required, with an annual reminder to complete refresher training. We held monthly competitions for speed donning, and the winner got a little prize, which was usually a ticket for the local restaurant to go out and have a meal on the company.
Alex Studd:
Well, I’m glad they took that so seriously. So, where else can we share information and obtain lessons learned from accidents and incidents?
Andy Bartlett:
Right. Let’s stick with H2S, because I’ve kept that as a theme running through today. Let’s start with the U.S. Chemical Safety Board (CSB), an independent, non-regulatory federal agency that investigates the root causes of major chemical incidents. The CSB website contains very informative films, which are also available on YouTube. I’ve used them several times for training purposes. There was an H2S incident published by the CSB that tragically illustrates several safety issues, and every time I read it, I get upset. In 1999, in the USA, an employee responded to a pump oil level alarm at a water flood station. The employee closed the pump’s discharge valve and partially closed the pump suction valve to isolate the pump from the process. The employee did not perform lockout/tagout first to isolate the pump from energy sources before preparing to work on it.
At some point, the pump automatically turned on, water containing H2S, the toxic gas, was released from the pump, and the employee was fatally injured by his exposure. The spouse of the employee went looking for him because he hadn’t returned home from work and saw his truck at the water flood station. She parked alongside it and went inside. I don’t know whether she saw her husband dead on the floor, but she soon went down herself and died too. The only good point in all this was that she had two children in the car and she hadn’t left the windows open, so those two children survived. But again, they lost both their parents.
Alex Studd:
Wow, that’s horrible, Andy. What lessons learned did the CSB emphasize from that particular incident?
Andy Bartlett:
Well, one good thing about the CSB is that when they put these reports out, they put them in the public domain so everybody can see them and ask the question, “Am I doing this on my facility?” This employee didn’t wear his H2S detector; whether it was a requirement to be worn or not, they weren’t very clear on. Lockout/tagout was not performed, which was a requirement for working on any rotating equipment, and that’s happening all over the world. You’ll see incidents where equipment isn’t locked out and people get electrocuted or injured by rotating equipment. Then there was the confinement of H2S inside the pump house. Watching the CSB video, I noticed there were no safety data sheets or signs on display. Maybe the video missed them, but I would think they should have had them. There was a lack of a company safety management program; the program that existed didn’t include the type of data that the Chemical Safety Board recommends.
The H2S detector alarm system inside the pump house was not functioning. If it had been, it would have given an alarm. In the facilities I worked in, when an H2S alarm went off, you could see it from the next unit, a bit like the light on a police car going round and round. You didn’t have to hear the alarm; you could see the light. And of course, refineries and gas plants are noisy places, but they have a light you can see, and that wasn’t there. The gate wasn’t locked, so the employee’s spouse was able to drive in and park her vehicle right alongside his. Training and competency gaps, human errors and poor safety practices all led to this unfortunate incident.
Alex Studd:
Now, I noticed in that specific example, Andy, there’s no mention of permit to work. All of that work should have required a permit to be issued, right?
Andy Bartlett:
Well, whether you call it a permit, a template or a risk assessment, there should have been something that the employee looked at before he went in to do this job. And of course, not having that had dire consequences. The risk assessment or template would have reminded the employee to take safety precautions such as wearing his H2S detector and performing isolation before going to look at the pump. So yes, they would need to have something like that in place. They probably did; he just didn’t use it.
Alex Studd:
Now, that example, Andy, took place in 1999. Are similar accidents with H2S still happening?
Andy Bartlett:
Unfortunately, yes. I reviewed the United States Bureau of Labor Statistics census of fatal occupational injuries for deaths related to hydrogen sulfide from 1993 to 1999. Fifty-two workers died of hydrogen sulfide toxicity in that seven-year period, and in 21 percent of the cases, a coworker also died in an attempt to save the fallen worker. And that’s true worldwide. In most H2S fatalities I’ve been aware of, more than one person went down, because somebody else thought, “Oh, he’s fainted, I’ll go and save him.” The first thought is not, “Oh, it’s H2S that has knocked him down.” In 2006, the Arab News in Saudi Arabia reported on a gas leak at a gas facility that resulted in the death of two employees.
Four other employees were also injured, one of them critically. In 2009, the United Arab Emirates newspaper Arabian Business reported on three workers who died during an accident on an oil field involving deadly, toxic hydrogen sulfide gas. During that particular incident, one person went down into a sump and collapsed. The next guy went down to save him, and he collapsed. The next guy went down to save that one, and he collapsed too. And it wasn’t until they were missed at the end of the shift that anybody knew this incident had happened. In Australia in 2018, two fatalities from cardiac arrest occurred due to hydrogen sulfide inhalation at a paper mill.
The company was fined over a million dollars, and the judge ordered it to fund an educational animated video highlighting the incident, the lessons that could be learned and how the risks could have been reduced. Similar to the Chemical Safety Board’s films, the video shows what can happen and how it happened. That is a fine point for the judge to make, but unfortunately, two people died and another four were seriously injured to make it.
Alex Studd:
You’re right. This obviously continues to happen. So where else can we obtain lessons learned from incidents?
Andy Bartlett:
I looked at the 26th edition of the Marsh report, which covers the 100 largest losses in the hydrocarbon industry. There are always 100 in the report; new ones come in and others drop out, but it’s always the top 100 largest property damage losses from the hydrocarbon extraction, transport and processing industries. Marsh highlights four key factors that prevent lessons being learned from losses. The first is distance: parties unconsciously feel less affected by events a long way away. Imagine that in a big company with sites in different countries around the world.
The second is culture, which prevents lessons learned from being implemented effectively. People think, “Oh, we don’t have to do that. We don’t have those bad things happening here. We’re good here.” The third is tunnel vision, which prevents people from realizing the wider relevance of lessons. And the fourth is that, over time, lessons are learned but then forgotten, or solutions are insufficiently robust, as we talked about with Trevor Kletz’s example earlier. After an accident or case of ill health, many organizations find when they do the investigation that they already had systems, rules, procedures or instructions that would have prevented the event, but they were not complied with, which takes us back to the last podcast about human error.
Alex Studd:
Andy, that leads me to ask, if the systems and the rules were already in place, why were they missed?
Andy Bartlett:
Well, in many cases, there are barriers within the organization. Different departments operate in silos, which inhibits organizational learning, and organizational learning is a key aspect of health and safety management. If reporting and follow-up systems aren’t fit for purpose, for example, if a blame culture acts as a disincentive to report near misses, then valuable knowledge will be lost.
Talking about near misses, I worked closely with insurance companies at my previous company. They came out from Lloyd’s of London and would survey the facilities before deciding what the next round of insurance costs was going to be. At one facility, they found a near miss reporting system, which was great. Everybody reported near misses, and there was a prize draw for those who reported the most effective near miss. Then there was another place they went to where there had only been six near misses in two years.
They found that a blame culture was quite prevalent there, and it was all down to management. The manager in charge didn’t want near misses; he considered them a stain on his character. Leaders and managers need to be aware of the people-related and cultural organizational issues that may prevent lessons from being learned effectively across their organization. A management-supported, structured program is required to share and act on lessons learned. We had a good, structured program where, every time there was an incident, the report came in from corporate safety.
The lessons learned were shared in a PowerPoint presentation. Everybody in the facility had to see it, and there was a check system to make sure those particular lessons learned had been shared among the workforce. Audits were done where people were asked about particular lessons learned. We didn’t always have this; it was something put in place after those insurance surveys showed we weren’t doing as well as we could have.
Alex Studd:
It’s a shame, Andy, because the technology to assist with these endeavors already exists today.
Andy Bartlett:
That’s true, Alex. We worked on a paper-based system, but today there are solutions that can assist in sharing lessons learned: experiences distilled from past activities that should be actively taken into account in future actions and behaviors. And where do you find these valuable lessons? In accident investigations, near miss reports, organizational vulnerabilities identified during monitoring, audit and review processes, rework of repairs, and difficulties in performing repairs as planned.
I’ll give you an example of that last one. As a member of a job safety analysis (JSA) preparation team at a mega turnaround, we developed over 40,000 job safety analyses. In one case, we were preparing a JSA to remove a tube bundle from an exchanger located three floors up on a platform. Looking at the job in the field and reading the job plan, there was no indication that a steel beam would have to be removed to dismantle the equipment using the crane.
That information came from an interview with the unit operations supervisor. When we asked him whether any unusual activity had occurred in the last turnaround, five years earlier, when he’d been an operator on the same unit, he talked about this beam having to be removed, but that important lesson had not been shared anywhere. It was not captured. So, when we did the JSA, we made sure we put it in there, and it was archived so that the information is there for the next turnaround. We put in the tag number, the actual crane size and quite a few other bits of information that the planners would need the next time they did this job.
I have seen companies that invest heavily in planned shutdowns and turnarounds hold a session at the end of the turnaround to gather lessons learned, what went wrong and what worked well, as part of the continuous improvement process. This information is stored for use by planners in the next turnaround cycle. But with the technology available today, these lessons can be cataloged, embedded and linked by location, equipment type, substance, value and risk. So, I think we’ve gone on quite a little journey there, Alex.
Alex Studd:
Andy, I think you’ve given us a lot to think about. Thank you for another great episode of Andy’s Almanac on Accidents. And, Andy, I can’t wait for the next one.
Andy Bartlett:
Okay, Alex. I look forward to that.