
IRM 4.0: The Digital Transformation of Risk Mitigation

By Sphera’s Editorial Team | April 22, 2020

IRM 4.0 is the intersection of process, progress and performance. State-of-the-art technologies gather and analyze data from a host of sources across an organization so companies can make more-informed, strategic business decisions.

It’s said that “with great risk comes great reward,” but in the digital age, great reward comes from great risk insights. For decades, risk mitigation details were devilishly hidden, sometimes in plain sight, in spreadsheets and even handwritten logs. While these types of data collection and storage methods have not moved into woolly mammoth territory as of yet, the future itself will be composed of a mammoth amount of information that can be processed and analyzed quickly and automatically. Until recently, there simply was no easy way to use the information to its full potential. Outside of meeting regulatory, internal and customer compliance requirements, the data was more integer than integrated. Even the biggest brainiacs in the world couldn’t employ all that information.

Those days are numbered. Thanks to a host of technological innovations and the explosion in the number of sensors available to continually monitor equipment and record data (a global market estimated to reach $266 billion by 2023, according to Market Research Future), being able to use that stored information for risk mitigation purposes has ushered in an unprecedented era of learning, prediction and prescription. It’s a concept known as Integrated Risk Management 4.0: an integrated, Industry 4.0 software platform approach that brings together disparate technologies, data and information from across the organization.

IRM 4.0 is designed to deliver timely information to help organizations predict and manage when, where and how risk will impact operations, production, people and the supply chain by constantly recording risk-related data.

In May 2019, Paul Marushka, Sphera’s president and CEO and publisher of Spark, revealed the concept of IRM 4.0 at Sphera’s inspire user conference. He stated that employing IRM 4.0 strategies is a key component of any Digital Transformation. He later wrote: “In IRM 4.0, we see the strategic applications of Industry 4.0 technologies. By that I mean the process is data-driven.”

In line with his point, Deloitte wrote in a 2017 article titled “Forces of Change: Industry 4.0”: “For business leaders accustomed to traditional linear data and communications, the shift to real-time access to data and intelligence enabled by Industry 4.0 would fundamentally transform the way they conduct business. The integration of digital information from many different sources and locations can drive the physical act of doing business in an ongoing cycle.”

With the evolution of cloud, mobile, sensors and other Internet of Things technologies, the Digital Transformation has changed the way companies do business, and risk management is no exception. That includes Digital Twin technology, which is designed to enable risk modeling that simulates what-if scenarios for better planning and informed decision-making (more on that later).

Whether it’s for Operational Risk Management, Environmental Health & Safety or Product Stewardship, the number of outlets to collect and record data to the second or even microsecond is truly staggering. Combine that with the emergence of software solutions that offer predictive and prescriptive capabilities for mitigating risk, and companies have the most powerful tool ever produced to ensure safety and sustainability in any industrial setting.

IRM 4.0 is the intersection of process, progress and performance. It uses state-of-the-art technologies to gather and analyze data from a host of sources across an organization so companies can make more-informed, strategic business decisions to manage their risk while saving money by, among other things, eliminating time-consuming, inefficient data collection and ensuring machinery runs smoothly and efficiently.

In this article we’ll explore some of the emerging technologies and how they could be used to generate the data necessary to predict risk … and help mitigate it as well.



BRAIN TEASER

First, we need to take a step back and talk about the anatomy of this evolution, starting with the human brain. It has an uncanny capacity for knowledge—really.

The 3-pound organ inside your cranium has about 100 billion neurons and an almost unimaginable 600 trillion synapses firing within, and a 2016 study from the Salk Institute found that the human memory capacity is even larger than previously thought. Ten times larger, to be precise.

By learning that synapses are not all the same size, the researchers estimated that the brain’s memory capacity is really a petabyte—or 1 million gigabytes. This is big data indeed. Just to put that in perspective, an answer posted on the BBC’s Science Focus magazine website to the question of how much information there is on the web explained that the amount of data stored by Amazon, Facebook, Google and Microsoft alone is roughly 1,200 petabytes. If that guesstimate is accurate, it’s about the same amount of brainpower that’s stored in the noggins of the estimated 1,276 people who call Craig, Alaska, home.

The problem is that, even though a mega amount of information can be and is stored in the brain, accessing much of it is a different story. So when someone says, “I’ve forgotten more than you know,” believe them, or, if you’re feeling salty, just reply, “Likewise, I’m sure.”

Paul Reber, a professor at Northwestern University, told Scientific American at the time the Salk study was released: “Any analysis of the number of neurons will lead to a sense of the tremendous capacity of the human brain. But it doesn’t matter because our storage process is slower than our experience of the world. Imagine an iPod with infinite storage capacity. Even if you can store every song ever written, you still have to buy and upload all that music and then pull individual songs up when you want to play them.”

Humans, unlike machines, cannot tap into all the copious information within, and even if they could, they wouldn’t necessarily know what to do with it. That could change, though, if Elon Musk and his Neuralink colleagues have something to say about it. The company is developing robots and sensors to implant in the brain, initially to help people who have been paralyzed communicate. If successful, the long-term goal is to give anyone the opportunity to transmit data between their brain and a computer via sensors. In a recent presentation, Musk said, “With a high-bandwidth machine interface, we can actually go along for the ride, and we can effectively have the option of merging with AI.” But that’s well down the road at best.

While humans struggle to pick their own brains, technology has no such shortcomings.

A recent study by International Data Corp. predicted that, by 2025, the “global datasphere,” which includes information from traditional and cloud-based data centers, infrastructure points like cell towers and PCs, and other electronic storage vehicles, will hit 175 zettabytes. That’s 175 trillion gigabytes, by the way. To put that into perspective, at the recent Content Marketing World conference in Cleveland, Christopher Penn, the co-founder and chief data scientist at Trust Insights, explained that, “If you started binge-watching Netflix 55 million years ago, you’d get to 1 zettabyte today.” Even the Mind Flayer from “Stranger Things” couldn’t accomplish that. We think. At that rate, you’d need to watch Netflix for about 9.6 billion years to consume all the information in the “datasphere.” They better make some new episodes of “The Office” first and rehire Michael Scott while they’re at it.
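The back-of-the-envelope math behind those comparisons is easy to check. Here is a quick sketch, using only the figures cited above (the variable names are just for illustration):

# Rough arithmetic behind the "datasphere" comparisons, using only the
# figures cited in the article: 175 zettabytes by 2025, and roughly
# 55 million years of binge-watching per zettabyte.

ZETTABYTE_IN_GB = 1_000_000_000_000      # 1 zettabyte = 1 trillion gigabytes
datasphere_zb = 175

print(f"{datasphere_zb} ZB = {datasphere_zb * ZETTABYTE_IN_GB:,} GB")
# -> 175 ZB = 175,000,000,000,000 GB, i.e., 175 trillion gigabytes

years_per_zb = 55_000_000                # Penn's Netflix estimate
total_years = datasphere_zb * years_per_zb
print(f"Netflix time: {total_years / 1e9:.1f} billion years")
# -> Netflix time: 9.6 billion years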

The abundance of info out there is why Industry 4.0, the Industrial Internet of Things, smart factories, intelligent factories or whatever you choose to call them have revolutionized the way companies do business. These approaches use the data supplied by sensors and other inputs derived from Internet of Things technologies for predictive and prescriptive purposes as well as typical recordkeeping ones. Whether it’s a food company knowing exactly when to order more supplies or a fire department being able to better predict the path of a blaze from a wildfire, the Internet of Things, mobile and cloud-based technology, artificial intelligence, machine and deep learning, augmented reality, virtual reality, robotics and more are changing the game for companies.

So why shouldn’t risk management follow suit?



A 2017 report from Accenture found that artificial intelligence alone could increase corporate profitability by 38% by 2035 and productivity by 40%. In a news release, Paul Daugherty, Accenture’s chief technology and innovation officer, said: “To realize this significant opportunity, it’s critical that businesses act now to develop strategies around AI that put people at the center, and commit to develop responsible AI systems that are aligned to moral and ethical values that will drive positive outcomes and empower people to do what they do best—imagine, create and innovate.”

Of course, part of the data-collection equation is persuading your people that this is not for Big Brother purposes.

For people concerned about why exactly we need to monitor and track so much data, “I would question how many of those folks then are out at the bar taking selfies and putting them on Instagram,” Penn told Spark. “It is the nonvoluntary nature of it, in like a warehouse or a workplace, compared to the voluntary nature where you do all sorts of crazy and inadvisable things on your personal social media that you probably shouldn’t. The biggest thing companies can do there is ‘informed consent.’ This is a condition of working here. ‘Here’s what we’re going to use the data for. Here’s how we will not use the data. We will not use it to discriminate against you based on race or gender or religious background. And here’s how we use it.’”



GETTING THERE

David Metcalfe is the CEO and co-founder of Verdantix, an independent research and consulting firm based in London and New York. Spark caught up with him when he was visiting Chicago recently to get his thoughts on IRM 4.0 and where this is all heading.

“What’s happening is, historically, we had separate teams and technology stacks for different risks,” Metcalfe said. “So the EHS team would manage the safety risk related to workers, and then you would have maintenance and asset reliability looking at all the risks, equipment failures, posed to the production process. Now what we’re seeing is, especially with the rise of edge devices and real-time risk analytics, that you can put those two datasets together. So that’s a huge prize. It’s going to be a long journey to get there, but that’s definitely what the more sophisticated organizations are trying to do.”

In terms of the predictive applications, he explained that human behavior is not as predictable as, say, equipment failures. “You’ve got a much more controlled system than the human being—or multiple human beings,” Metcalfe said. “You’ve got much higher levels of predictability, and therefore you can have machine learning, artificial intelligence approaches to them prescribing what actions should be—and optimizing things like maintenance activities.”

In other words, we’re only human after all, but machines are not.

So, piggybacking on what Metcalfe said, AI will be a big part of the future of IRM 4.0. At Sphera, one of the people playing a key role in that evolution is Jerry Shaughnessy, Sphera’s chief architect. He talked to Spark about some of the things he thinks are moving the needle and how artificial intelligence fits into the equation, but he also cautioned that AI can mean different things to different people.

“We have an initiative in our innovation pipeline that deals specifically with machine learning and artificial intelligence,” he said. “Now, just like anything else in the world, there are so many definitions. You have companies that say, ‘Oh, we have AI that does this or that or the other thing.’ The reality is that it’s pretty much a Wild West.”

He continued, “We’re just taking some of that human interaction that used to be necessary to make some of those decisions. We’re augmenting that. We’re not replacing it.”

One area that is ripe for an “augmented” risk mitigation process is Maintenance, Repair & Operations (MRO). Think about it: Every component of every machine used by every company has a finite life cycle. It may be one year, it may be five years, it may be 25 years or more, but eventually every part will fail. Being able to predict when that might occur is a boon not only for safety but for productivity as well. That way, parts can be changed strategically instead of reactively. As more and more sensors and Internet of Things devices enter the workplace, there is more opportunity to extract the data needed to maintain equipment and parts.

“That’s an example of a very specific type of prediction called ‘time series forecasting,’” Trust Insights’ Penn said. “Where MTBF, mean time between failure or as the more colloquial and profane, more time between F-ups, is taking known data, because almost everything that fails in an industrial setting has predictable failure, right? A part wears down over time. Most of the time, in most safe, otherwise safe working conditions, things don’t fail randomly. They fail very predictably.”
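As a rough illustration of the idea Penn describes (treating failure prediction as time-series forecasting on known failure data), here is a minimal sketch. The pump seal, its failure dates and the simple averaging are all illustrative assumptions, not a description of any particular vendor’s model:

from datetime import datetime, timedelta
from statistics import mean

# Hypothetical failure history for a single pump seal (illustrative data only).
failure_dates = [
    datetime(2017, 3, 1),
    datetime(2018, 1, 15),
    datetime(2018, 11, 30),
    datetime(2019, 10, 10),
]

# Mean time between failures (MTBF), in days, from the observed intervals.
intervals = [
    (later - earlier).days
    for earlier, later in zip(failure_dates, failure_dates[1:])
]
mtbf_days = mean(intervals)

# A naive "time series forecast": expect the next failure roughly one MTBF
# after the most recent one. Real systems would also weight sensor readings,
# usage hours and recent wear data, but the principle is the same.
next_expected_failure = failure_dates[-1] + timedelta(days=mtbf_days)

print(f"MTBF: {mtbf_days:.0f} days")
print(f"Next failure expected around {next_expected_failure:%Y-%m-%d}")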

A 2017 Gartner report, for example, said that there would be more than 20 billion connected “things” by 2020, and in 2018 Juniper Research predicted that there would be more than 50 billion combined IoT devices and sensors by 2022. Connectivity equals opportunity for risk mitigation.

“It’s going to enable all sorts of crazy stuff like your fridge self-ordering its food,” said David Stroud, Sphera’s vice president of MRO. “But much more valuable to us is the machine being able to really transmit, ‘I’m in trouble. I need a spare part, I need a repair, I need more oil,’ all that kind of stuff. It won’t happen by surprise anymore.”

Perhaps one day, you’ll even see your machines order their own parts or even print them out on location via a 3-D printer, eliminating the need to wait days for a delivery or pay a premium to get the part sooner, he explained. Even though companies often store parts in house, Murphy’s law cannot be ruled out: There’s always that one part that a worker desperately needs, and it just isn’t available. For older machinery, it might not be that easy to find Original Equipment Manufacturer parts at all, so 3-D printing could be a faster and more cost-effective way to get the exact part you need in those situations.

“I think, like any technology, as it becomes more ubiquitous, the cost will come down and people will, instead of sending for a spare part, just send the specifications to the printer, and it’ll be printed overnight or even quicker,” Stroud said.

Having technology that knows what you need and when you need it would also mean less of a requirement to carry a warehouse full of spare parts. Parts come in handy when they’re close at hand, but many have a shelf life and go to waste if they’re not used in time. “If you hold them for too long, they can deteriorate,” Stroud said. “The less money we tie up holding spare parts we may never need, the better. And once the information is out there, you can get into another level of saving, which is pooling. Now you could have companies co-operating with each other to pool their spare parts. We can just share one expensive spare between five different companies in a region.”

The Oil & Gas industry, he said, has looked at this quite a bit, especially in remote regions with difficult supply chains like Alaska. “You know it’s quite hard to get spares up there,” Stroud said. “If they actually just knew what each other had and could maybe standardize the machines a bit, then they’re going to save a lot of money.”


DEEPER DIVE INTO THE DATA

Data can be used in many areas of risk mitigation, Trust Insights’ Penn said, through machine learning and deep learning as well as other technologies. So what’s the difference? Since we’ll be crunching some numbers here, it’s probably not too surprising that Penn used an example with some crunch to it: cookies. Machine learning is basically a way for the technology to “learn” how you want to bake the cookies, if you will, and do the baking for you. Deep learning, on the other hand, is where the computer would consider “every possible combination of the ingredients, every possible combination of oven temperature and type.” From there, “the machine would figure out, through this massive trial and error, this is the best way to bake this kind of cookie.”

Similar to how Stroud explained that sharing data could be a boon for spare parts replacement, there is an opportunity for “blending data” in other areas, Penn said. “Blending data really is all about bringing in different data sources to augment your existing data. Again, if we go back to the model of cookies, right? If all your company does is bake chocolate chip cookies, you have a pretty good sense of what goes into a chocolate chip cookie and how to make one. But you have no clue, other than rough ideas, about what an oatmeal cookie is or what a frosted fondant cookie is. If you were to start bringing in data from other manufacturers, from Jim’s Oatmeal Co., you might learn new sources, new techniques, new ingredients that could make your chocolate chip cookies better.”
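In practice, “blending” often amounts to little more than combining an external dataset with your own records so a model can learn from techniques and ingredients you don’t have in-house. A minimal sketch, assuming two hypothetical recipe tables:

import pandas as pd

# Your own production data (hypothetical).
our_recipes = pd.DataFrame({
    "cookie": ["chocolate chip", "chocolate chip"],
    "batch": [101, 102],
    "oven_temp_f": [350, 365],
    "quality_score": [8.1, 7.4],
})

# External data brought in to augment it (a hypothetical "Jim's Oatmeal Co.").
partner_recipes = pd.DataFrame({
    "cookie": ["oatmeal", "chocolate chip"],
    "oven_temp_f": [375, 340],
    "quality_score": [8.8, 9.0],
})

# "Blending" here is simply stacking the sources with a label so downstream
# models can learn from both; columns missing from one source stay empty.
blended = pd.concat(
    [our_recipes.assign(source="us"), partner_recipes.assign(source="partner")],
    ignore_index=True,
)
print(blended)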


Another area primed for risk mitigation performance is blockchain. Shipping companies, for instance, Penn said, employ “tons” of IoT sensors that can be used for moment-by-moment recordkeeping, creating an “immutable record of what happened on this journey.” When a container arrives in a port, the receiver can look at the blockchain data and say, “‘Our contract was these 10 containers should never go above 90 degrees Fahrenheit, and it was at 108 for two days, so we’re not taking the shipment because there’s a problem.’”
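The contract check Penn describes boils down to scanning an immutable log of sensor readings against an agreed threshold. A minimal sketch, assuming a hypothetical list of hourly temperature records pulled from the shipment’s ledger:

from datetime import datetime, timedelta

THRESHOLD_F = 90   # contractual maximum temperature for the containers

# Hypothetical hourly readings retrieved from the shipment's ledger entries:
# roughly two days in spec, then two days at 108 degrees Fahrenheit.
start = datetime(2020, 3, 1)
readings = [
    {"timestamp": start + timedelta(hours=h), "temp_f": 86 if h < 48 else 108}
    for h in range(96)
]

breach_hours = sum(1 for r in readings if r["temp_f"] > THRESHOLD_F)

if breach_hours:
    print(f"Reject shipment: {breach_hours} hours "
          f"({breach_hours / 24:.0f} days) above {THRESHOLD_F} F on record")
else:
    print("Shipment stayed within the contracted temperature range")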

In terms of parts, for instance, Penn said being able to predict the likely time when a part or piece of machinery will fail gives business owners the opportunity to strategize: “At what threshold of probability of failure do we feel it is unacceptable and we want to switch out the part before then?” he said.
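In code, that strategy is simply a comparison of a predicted failure probability against a risk tolerance the business has chosen. A minimal sketch, with the probability values stubbed in as assumptions:

# Turn a predicted probability of failure into a replace-or-wait decision.
# The probability would come from a forecasting model such as the MTBF
# sketch above; here it is just a stand-in value.

FAILURE_RISK_TOLERANCE = 0.10   # business-defined: act once risk tops 10%

def should_replace(predicted_failure_probability):
    """Return True when the predicted risk exceeds what the business accepts."""
    return predicted_failure_probability > FAILURE_RISK_TOLERANCE

print(should_replace(0.04))   # False: keep running, re-evaluate next cycle
print(should_replace(0.23))   # True: schedule the swap before the part fails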

Still, Penn cautions that companies can use artificial intelligence incorrectly in any area of business.

“Most companies screw up on AI,” he said, “not at the technology level, but at the people and the process level. They have internal silos. They have internal politics. I was doing some work with one customer, on a data science project, and two of their departments are at war with each other. These two department heads are literally undercutting each other, stabbing each other in the back. I’m like, ‘No amount of machine learning is going to fix the problem that you basically have two jerks here who are crapping on each other all day long.’ ”

The other potential problem area is in the data itself at the process level. “If you don’t have great data, if you don’t have clean data, if you don’t have well-defined processes for managing that data, for governance, for compliance, for risk management, your AI projects are going to go haywire.”

For risk mitigation, one potential outlet for data collection that Sphera’s Shaughnessy sees (and one you might already be familiar with from home) is Alexa. “You don’t always have a laptop in front of you or a desktop,” he said. “You’re on a shop floor, you’re the foreman, you’re walking around, you see an issue, something, so this is just another way to interact with the system.”

He foresees that you would be able to use the virtual assistant not only to document incidents and near-misses but also to pull up safety-related information on the spot by saying things like, “How many incidents occurred in the northeast region where someone was actually injured?” he explained.
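Behind a spoken question like that one, the assistant ultimately has to translate the utterance into a structured query over incident records. The sketch below skips the voice layer entirely and shows only that query step, run against a hypothetical in-memory incident list:

# Hypothetical incident records; a real deployment would query the EHS
# system the voice assistant is wired into.
incidents = [
    {"id": 1, "region": "northeast", "injury": True},
    {"id": 2, "region": "northeast", "injury": False},
    {"id": 3, "region": "southwest", "injury": True},
]

def count_incidents(records, region=None, injury=None):
    """Count incidents matching the optional region and injury filters."""
    matches = records
    if region is not None:
        matches = [r for r in matches if r["region"] == region]
    if injury is not None:
        matches = [r for r in matches if r["injury"] == injury]
    return len(matches)

# "How many incidents occurred in the northeast region where someone
# was actually injured?"
print(count_incidents(incidents, region="northeast", injury=True))  # -> 1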

This type of technology could also be used not only for documenting risks but also for calling emergency personnel, e.g., “Alexa, there has been an accident. Call 911.” Alexa could then ask a series of questions about what happened, and everything would get documented—and analyzed—along the way. Since actor Samuel L. Jackson recently signed a deal to be a celebrity voice for Alexa, just imagine the possibilities there.



MORE INPUT

When the World’s Fair was held in Chicago in 1933 for the Century of Progress exposition (Chicago was founded in 1833, of course), the Museum of Science and Industry was established in a building created 40 years earlier for the Columbian Exposition: the Palace of Fine Arts. One of the exhibits available to attendees when the museum opened was the Coal Mine, which is still considered one of the must-see attractions. The museum, which was recently renamed the Kenneth C. Griffin Museum of Science and Industry, calls it “A subterranean tour for the senses.”

Speaking of senses, what we touch, see, hear, smell and taste all create data points in the mind. They also create potential data points for sensors, so Spark recently ventured to Chicago’s Hyde Park neighborhood on the South Side to learn more about the wearables housed in the current Wired to Wear exhibit.

People use wearables to monitor more than just their vital signs, such as heart rate, blood pressure, respiratory rate, etc. The technology is often also designed to detect and alert users if there are any abnormalities.
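At its simplest, that detect-and-alert loop is a stream of readings checked against allowed ranges. A minimal sketch, with thresholds chosen purely for illustration rather than taken from any clinical or device specification:

# Threshold-based alerting on wearable readings. The limits below are
# purely illustrative, not clinical guidance.
LIMITS = {
    "heart_rate_bpm": (40, 140),
    "systolic_bp_mmhg": (90, 160),
    "respiratory_rate_bpm": (10, 25),
}

def check_vitals(reading):
    """Return alert messages for any vital sign outside its allowed range."""
    alerts = []
    for vital, (low, high) in LIMITS.items():
        value = reading.get(vital)
        if value is not None and not low <= value <= high:
            alerts.append(f"{vital} out of range: {value}")
    return alerts

print(check_vitals({"heart_rate_bpm": 152, "respiratory_rate_bpm": 18}))
# -> ['heart_rate_bpm out of range: 152']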

“A wearable is something that gives you a new ability by what you put on your body,” said Kathleen McCarthy, who is the museum’s director of collections and head curator. “It can be something that protects you or it can be something that gives you something new that humans didn’t have before you put it on. So, we see it as synonymous with possibility.”

One of those possibilities on display at the recent Wired to Wear exhibit is the ability for people to maneuver themselves around without being able to see. It’s called SpiderSense, and it was designed by University of Illinois at Chicago alumnus Victor Mateevitsi. In the demonstration area, your friendly neighborhood editor donned a vest that resembles a flak jacket and walked around a small space with his eyes closed. As he approached the wall, haptic feedback (vibration, that is) would alert said user that there was something that needed to be avoided.

It wasn’t perfect as there were a couple of knee bumps along the way—although that could be, ahem, user error—but it’s easy to see how this type of technology could not only help those with vision impairment but also people working in mines or other very dark spaces. Maybe even a coal mine. This technology is still being developed and not yet ready for commercial use.

Still, there was much more to explore, including the Dainese D-Air Racing Suit. It’s basically a wearable airbag suit for motorcycle riders, alpine skiers and people participating in similar activities. It’s designed to sense when a collision is about to happen—and then soften the landing. According to the company’s website, “a patented internal Microfilaments Technology ensures that the air that inflates the airbag propagates in a balanced way. The result is that every single centimeter of the area is covered by effective protection.”

There was also a simple-looking cap that hung in the upper corner of the Wall of Wearables display. It looks like the kind of hat that they’d give away at a minor league baseball game, but don’t let the looks deceive you. The Safe Cap designed by Ford Motor Co./GTB Brasil is a wearable that helps prevent truckers from falling asleep at the wheel. It vibrates, sounds off and flashes when it senses someone’s about to be snoozing to help keep them awake and safe on the road. In a news release on Ford’s website, Lyle Watters, president of Ford South America, said that with the technology “We are able to reinforce our commitment on bringing embedded technology not only for vehicles, but also through accessories that are capable of making the lives of drivers easier and the focus on safety as a priority in our technology investments.”

The exhibit also showcased a group of sensors designed to monitor everything from vital signs for stroke patients to sweat hydration levels for NBA players. In an article on Northwestern University’s website, John Rogers, an engineering professor who developed the technology, explained, “Stretchable electronics allow us to see what is going on inside patients’ bodies at a level traditional wearables simply cannot achieve.”

These devices offer “physiolytic” capabilities, as H. James Wilson called it in a 2013 Harvard Business Review article. This means they can link wearable technology with “data analysis and quantified feedback to improve performance.”

It’s that intersection of technology and data where IRM 4.0 really comes into play.

The more data input opportunities a company has, the more capabilities emerge for using the data for predictive and prescriptive means.

Dainese D-Air Racing Suit

“Riding a motorcycle at high speeds is thrilling–but dangerous. What if your clothes had an airbag? This smart suit monitors you 1,000 times a second, reporting your speed and motion to a built-in computer. When a collision is about to happen, the airbag inflates, giving you a softer landing.

Lino Dainese began sewing protective clothing in Venice, Italy, in 1968. He invented the knee slider, back protector, and aerodynamic back hump. When he approached scientists about an airbag in clothing, his idea was dismissed as impossible. His airbags now protect motorcyclists, mountain bikers, sailors, horse riders and skiers.” – Wired to Wear Exhibit | Museum of Science and Industry


For Operational Risk Management, that could be sensors that monitor heart rate and other vital signs to ensure workers who are in extreme environments aren’t being pushed too far.

For Product Stewardship, that could be a scenario where a sensor on a product or package tells companies when certain chemical hazard thresholds are approached or surpassed, or it could even be a sensor that alerts people working with chemical hazards if they’re not wearing the right personal protective equipment. The technology is not there yet, of course, but one day, who knows? That said, in a 2014 research paper titled “A Hazardous Chemical-oriented Monitoring and Tracking System Based on Sensor Network,” the authors from the North University of China wrote: “Through an experimental simulation performed on sea and land, the experimental data demonstrated that the monitoring and tracking system is perfect with fine monitoring ability and adequately met technical requirements. With continued and penetrating studies, the system holds significant promise for the future and may become widely applied in the field of containers and carrier vehicles for hazardous chemicals. Therefore, the monitoring and tracking system can be widely applied in monitoring and tracking hazardous chemicals and can protect lives and property from danger.”

For EHS, it could mean smokestack sensors that measure emissions from coal-fired power plants, which the National Institute of Standards and Technology recently tested. An article on NIST.gov says: “When NIST researchers analyzed the data, their results were promising, agreeing to within 2% with their laboratory findings.” The article also explained that “coal-fired power plants must have their smokestack emissions audited or checked by an independent third party. NIST researchers wanted to make this test quicker to save the plants money during their audits, while also improving accuracy of the sensors.”

Additionally, the MSI’s McCarthy said: “The opportunity with wearables to improve our medical health and well-being is huge. If we can figure out those, gathering and using the data in a way that benefits everybody, that will be really one of the most amazing things that wearables can do.”


FROM DRONES TO DIGITAL TWINS

We don’t mean to hover too long over the potential for capturing data, but for EHS and ORM in particular, there are enormous opportunities for drones to do the jobs that humans can’t do or shouldn’t do.

A recent study from the University of Kentucky, partially titled “Monitoring Tropospheric Gases with Small Unmanned Aerial Systems,” used drones to monitor trace atmospheric gases. According to the study, small unmanned aerial systems (sUAS) “can be effectively employed in the petroleum industry, e.g., to constrain leaking regions of hydrocarbons from long gasoducts,” which are natural gas pipelines. The report goes on to say that current greenhouse gas (GHG) emissions estimates are “incomplete” and “[r]educing the uncertainty of low-altitude (<100 m) trace gas emissions is critical to fully understanding emission processes and implementing sustainable industrial practices.”

Besides being able to collect GHG emissions data, drones are also very good at tracking GPS coordinates—latitudinally and longitudinally—and taking videos, Sphera’s Shaughnessy said. Sphera recently purchased its own drone for research purposes to learn more about the potential for using the airborne technology to mitigate risk. If an “IoT sensor went off, and you’re not sure what’s going on,” he said, “can I get in there? Can I look at a video feed to do a visual inspection of the facility before I’m risking anyone? Because, again, it’s all about decreasing risk and harm and everything else. So can I get immediate feedback without endangering anyone? A drone is one way to get that.”

But if you’re looking for information about drones, Sphera does have an enthusiast in-house: Adrian Engele, the company’s RFP manager, who is a drone hobbyist studying to take his Part 107 certificate test from the U.S. Federal Aviation Administration, which would allow him to fly drones commercially. He explained how Oil & Gas and utility companies are using drones to monitor miles and miles of pipeline for leaks—some drones include thermal imaging cameras, he said, that can detect leaks based on temperature differences—or for assessing storm damage or other incidents in the field.

 

The Agriculture, Civil Engineering, Energy and Utilities, Mining and Oil & Gas industries can all benefit from aerial craft. Drones can be a valuable tool in industrial applications and are already being used in such ways today.


Echoing Shaughnessy’s thoughts, Engele sees drones playing a bigger role in risk mitigation in the future to keep humans from having to visit dangerous sites. “We can send a drone in, where it doesn’t care,” he said. “And very often all you’re really doing is just inspecting something to get an initial idea of the severity of an incident or potential incident. This provides additional situational awareness to make an informed decision before sending in a repair crew into a hazardous environment.”

Of course, Verdantix’s Metcalfe cautions not to get too far ahead of ourselves. “Drones obviously are very interesting,” Metcalfe said. “They’re just another way of picking up specific types of data, which previously were hard to acquire. Definitely you’re going to see more digital sensors deployed on equipment. We’re starting to see more trials of digital PPE wearables in terms of workers. However, I think at the moment 90% of data is really being collected from operator rounds or human observation, so I think we need to be careful not to get too carried away with the new technologies.”

Fair enough, but what about current technologies, such as Digital Twins? It’s a recent concept that actually traces its roots back to a critical time in NASA history, when Apollo 13 was in trouble. You’ve heard it over and over: “Houston, we’ve had a problem here.” As Abhilash Menon, Sphera’s business consultant for digital transformation technologies, recently wrote in a blog post: “NASA had to find a way for the three astronauts to fix the space vessel quickly before they ran out of oxygen. The team in Houston had to find a way to visualize the exact issue based on the description that the team in space relayed to them from the vessel, and then they had to find a way to help the team in space fix the problem so the astronauts at risk could return to Earth safely.” Today, Menon wrote, Digital Twin for Operational Risk Management can give companies a bird’s-eye view of their risk exposure.

“Using all the information available from the individual sensors and equipment,” Menon continued, “we can now draw a virtual picture of what the real-life plant status is. We can then overlay information on what human interactions need to happen in the plant to then start to get a better picture of what is the true operational reality of the plant. Once we start to get the picture and we plan our operations, first in the digital space and then in the physical asset, we can start to manage the real Operational Risk of the asset.”

With all this information, Digital Twin software can be used to predict what’s going to happen to an asset, simulate the changes and offer prescriptive behavior, he added, to “minimize the chances of a disaster occurring at the asset.”
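Stripped down to its core, the pattern Menon describes is a virtual model that mirrors live sensor state and can be run forward under what-if changes before anything is touched in the physical plant. A minimal sketch, with the pump, its sensors and the simple pressure rule all assumed purely for illustration:

from dataclasses import dataclass, field

@dataclass
class PumpTwin:
    """A toy digital twin of one pump, mirroring its live sensor readings."""
    pressure_psi: float = 0.0
    temperature_f: float = 0.0
    history: list = field(default_factory=list)

    def ingest(self, reading):
        """Update the twin's state from the latest sensor payload."""
        self.pressure_psi = reading["pressure_psi"]
        self.temperature_f = reading["temperature_f"]
        self.history.append(reading)

    def simulate(self, pressure_increase_psi):
        """Run a what-if change digitally before touching the physical asset."""
        projected = self.pressure_psi + pressure_increase_psi
        if projected > 900:   # illustrative design limit for this toy asset
            return f"UNSAFE: projected {projected} psi exceeds the rated limit"
        return f"OK: projected {projected} psi is within limits"

twin = PumpTwin()
twin.ingest({"pressure_psi": 820, "temperature_f": 140})
print(twin.simulate(pressure_increase_psi=120))   # -> UNSAFE ...
print(twin.simulate(pressure_increase_psi=40))    # -> OK ...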

The dawn of IRM 4.0 is here, and it will offer an opportunity for a safer future for everyone who adopts and embraces it.

“As a result of all this technology that’s out there in the marketplace, there’s a paradigm shift taking place,” Sphera’s Marushka said at the inspire user conference in San Antonio earlier this year, “a shift with respect to data, integration, insights and decisions. Data is going from nondigital to digital. Integration is going from silos of information to truly connected information across an enterprise. It’s going from difficult manual information-gathering from reports from people in organizations to easy insights that are available at your fingertips when you want it and where you want it. And lastly, decisions as a result … they’re agile and nimble because you have all the information that you need.”


DOWN THE ROAD

There’s a ton of technology out there already that ties into the IRM 4.0 risk-mitigation formula nicely, but—and you guessed it—there’s more to come. For instance, the brain, as we’ve discussed, has the potential to do so much more than is currently possible, so why should risk mitigation be any different? CTRL-Labs is a startup company based in New York that Facebook recently agreed to buy for between $500 million and $1 billion, according to Bloomberg.

We reached out to CTRL-Labs for a comment, but a company representative said, “We’re pausing on communications efforts at the moment given the recent company news but may be in touch down the line.”

The company produces a “neural interface platform” that lets people control devices with their minds. Really. You still need to produce the motions—it doesn’t offer Professor X telepathy powers just yet—but there are no cameras involved, as is typical with augmented and virtual reality devices. In an NPR video, tech reporter Elise Hu is shown using the device to control a robotic spider. “It takes a little bit to get used to,” she said in the video, “but I’m amazed at how little it takes to navigate and negotiate this object. We are not connected in any way except for digitally, and then these are just intentions and the force and strength of my arm, which means there’s so many potentialities for the future.”

Could workplace safety be one of them by allowing humans to handle dangerous things vicariously through a robot? Perhaps. Think about it; your brain likely already has.
