Consequential, Certain & Disruptive: 3 Cybersecurity Risks that Will Impact Operations in 2022

2021 was a challenging year for manufacturers, energy producers, and utilities. A chaotic pandemic year gave threat actors an opening to exploit disruptions to infrastructure integrity and IT-to-OT operational dependencies, and they did so with frightening speed and effectiveness.

As many organizations transitioned to a hybrid workforce, novel integrations between IT and OT systems created new vulnerabilities that threat actors exploited, leading to surging ransomware attacks, infrastructure compromise, and other problematic repercussions.

According to one industry survey, 63 percent of respondents indicated that their organization experienced an ICS/OT cybersecurity incident in the past two years. With the average ICS/OT cybersecurity incident costing companies nearly $3 million, organizations have plenty of reasons to improve their defensive posture in the year ahead.

It’s critical that they do. Manufacturers, energy producers, and utilities should not expect heightened cybersecurity risk to subside alongside the pandemic. Instead, they should expect OT-related cybersecurity threats to be a certainty — and more expensive, consequential, and disruptive in the year ahead.

Expensive

As last year’s Data Breach Investigations Report glibly notes, “money makes the cyber-crime world go round.” In 2022, that price is going up.

For example, in 2020, the average ransomware payment exceeded $200,000, nearly four times the amount from just a year prior. In 2021, several high-profile ransomware payments netted multi-million dollar payouts as organizations and utilities worked to restore system access as quickly as possible.

Organizations should expect ransomware demands to continue increasing in the year ahead. Meanwhile, opportunity cost, regulatory implications, and other factors are making cybersecurity failures increasingly expensive. Therefore, timely and effective investments in holistic defensive capacity are essential to mitigating the financial implications of a cybersecurity incident.

Consequential

In 2021, cybersecurity failures halted manufacturing operations, exposed sensitive data, and eroded brand reputation – significantly raising the stakes for companies of every size in every sector.

Moving forward, companies should expect that the consequences of a cybersecurity incident will be more severe than ever before. For example, ransomware gangs are increasingly looking to leverage their network access to acquire and leak sensitive company data. Data exfiltration incidents surged in 2020, something that will inevitably continue in 2022.

Most prominently, when utilities and energy producers are compromised, public safety is often at risk as threat actors can disrupt critical services. It’s clear that without proper cyber protection, the consequences of failure are likely to become more extreme each year.

Disruptive

In November 2021, the Federal Bureau of Investigation (FBI) released a memo to companies completing “time-sensitive financial events,” warning that ransomware gangs are targeting these companies, looking to capitalize on the urgent and public nature of their operations. This warning most prominently applies to the financial sector, where mergers and acquisitions are time-sensitive, and public events, which can be derailed by a ransomware attack.

However, given the prominent attacks on critical infrastructure in the past year, it’s likely that threat actors will look to exploit companies and municipalities with time-sensitive operations, hoping to capitalize on the high-stakes nature of their sector to maximize payment opportunities.

Implementing Solutions That Work

Recognizing the immense challenges posed by today’s cybersecurity threats, manufacturers, energy producers, and utilities should turn to a simple-to-deploy zero-trust access control platform that can keep companies secure and operational, especially where IT and OT platforms converge.

Taken together, it’s clear that cybersecurity needs to be a top priority for every company in 2022, and they should start preparing today to meet tomorrow’s challenges.

Getting to Resilience

When I turned 7, I got my first BMX bike. Of course, within a week my best friend and I built a ramp with plywood and cinderblock. I remember the first jump vividly. I sped down the street like a miniature Evel Knievel and hit the ramp at a pretty good clip. A moment after I caught “big air,” my front tire hit the road, and I went over the handlebars – leaving a fair amount of skin on the road.

Clearly, the operational process of pedaling the bike up the ramp, into the air, and back to a landing had not been done the right way. The data was clear: all I had to do was look at the blood on my knee and my stinging hands to recognize that I needed guidance. Fortunately, an older kid on his bike had watched the whole thing and, with the wisdom of Socrates, said, “You have to lean back when you jump.”

This was the moment I learned about resiliency. I not only found out that I could endure adversity, but I also now had the knowledge to recover and make sure that the next time I went off that ramp, I would likely stay on the bike…though wearing knee pads probably wouldn’t be a bad idea either.

Over the last 18 months, we have all learned more about resiliency. Large corporations have gone remote practically overnight, and our critical industrial sectors have had to adjust as well to limited travel schedules, while also needing to protect OT assets and interdependent IT systems from nefarious threat actors.

Recent shutdowns of these systems due to cyber-attacks, and the cascading effects on society, cannot be overstated. Most of us have now experienced first-hand the fragility of operational processes that don’t have proper logical access safeguards in place. We all need the “older kid” who knows how operational processes work, so we are not crashing the bike or leaving it unlocked in an open area.

There are a lot of folks, including politicians and many in the media, talking about the problems with aging insecure infrastructure and the need for more money and resources for upgrading systems and putting in cybersecurity tools.

Unfortunately, this money is often spent on politically aligned companies that implement expensive and complex technology – resulting in solutions that are not effectively integrated and are handed off to people who are untrained or too busy with other tasks, such as operating a power plant. This approach will not make our critical infrastructure resilient, and it can often lead to misconfiguration and the exposure of critical systems to cyber-attack.

Getting to resilience requires the “older kid” experience paired with simple solutions that can make managing critical operations less expensive and more secure. The right resources are in almost every control room – the challenge is to put operational processes and technology in place that enable more effective operational management and reduce cyber risks simultaneously.

The Colonial Pipeline Incident Fallout and Building Zero-Trust

Colonial is an archetype of critical infrastructure.

Back in March, a hacking group known as DarkSide began a campaign against Colonial Pipeline’s IT network and billing systems. On May 7th, Colonial publicly announced the attack, shut down servers and some pipelines, and paid DarkSide $4.4M in ransom. On May 12th, Colonial restored operations and announced fuel delivery timelines amidst panic buying at gas stations.

While Colonial was able to get operations back up and running after the 6-day shutdown, the incident’s economic ripple effects were stark.

  • Gas Stations: At the height of the shortage, 71% of gas stations in North Carolina, 55% in Virginia, 54% in South Carolina, and 49% in Georgia were dry.
  • Air Travel: American Airlines altered schedules and announced adding refueling stops for long-haul routes out of Charlotte, NC.
  • Department of Transportation: The DoT announced a regional state of emergency for 17 states, easing restrictions for transport of fuel.

Clearly, the closure of the 5,500-mile pipeline system ranks among the most disruptive cyberattacks on record.

Colonial’s OT network uses automation systems to control and monitor the flow of fuel from refineries and tank farms into Colonial’s pipeline, and from Colonial’s pipeline into the tanks and transportation facilities belonging to suppliers and distributors.

According to CNN, people briefed on the matter were concerned they wouldn’t be able to figure out how much to bill customers, and the billing system is central to the unfettered operation of the pipeline.

The interdependency between the IT billing system and the OT automation system is clear: Colonial’s automated fuel monitoring and control data from the OT network is fed into the IT billing system so the company knows how much to bill customers.

The Problem – lack of proper access controls for critical systems

Colonial said it shut down the pipelines as a precaution to prevent the infection from spreading. The reality is that automating processes creates cascading dependencies, with IT systems dependent on OT systems and vice versa. In addition to billing systems, Colonial’s IT network includes HR/payroll systems, supplier data, business analytics, pipeline schematics, and other systems that are not interdependent with the pipeline automation system.

I don’t doubt that Colonial was taking a precautionary measure to “prevent spreading” – but this statement illuminates a bigger problem. Why would an attack on a critical billing system spread to other IT systems or the OT network? The likely answer is that this critical system was not properly segmented with separate logical access controls including multi-factor authentication and granular system or application authorization. There appears to be a lack of appreciation or recognition of the difference between a “critical” system and a “confidential” or “sensitive” system within Colonial’s IT operations.

IT systems that are interdependent on OT systems become critical infrastructure systems and must have separate logical access controls based on zero-trust. 

The Solution – Zero-Trust access platform for both critical IT and OT systems

While corporate IT networks must be connected to the internet, there are critical systems that need additional authentication and authorization. For example, it is no problem to give keys to the janitor to clean your office, but would you give him the combination to the safe under your desk? This is the concept of “zero-trust.”

For critical IT systems such as Colonial’s billing system, a zero-trust access layer including multi-factor authentication (MFA) and granular role and time-based authorization should be required. In addition, full user session logging, monitoring and recording of access to these systems is paramount.
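As a sketch of what such an access layer might check, the decision below combines MFA status, role-based entitlement, and a time window before granting access to a critical system. The roles, system names, and time windows are illustrative assumptions, not Colonial’s actual configuration:

```python
from datetime import datetime, time

# Hypothetical policy table: role -> (allowed systems, allowed time window).
# All names and windows here are invented for illustration.
POLICY = {
    "billing-operator": {"systems": {"billing"}, "window": (time(6, 0), time(18, 0))},
    "ot-engineer": {"systems": {"hmi", "scada"}, "window": (time(0, 0), time(23, 59))},
}

def authorize(role: str, system: str, mfa_verified: bool, now: datetime) -> bool:
    """Grant access only when MFA has succeeded AND the role is entitled
    to this system AND the request falls inside the role's time window."""
    if not mfa_verified:
        return False  # zero-trust: no MFA, no access
    policy = POLICY.get(role)
    if policy is None or system not in policy["systems"]:
        return False  # role has no entitlement to this system
    start, end = policy["window"]
    return start <= now.time() <= end

# A billing operator with MFA at 10:00 is allowed; without MFA, denied.
print(authorize("billing-operator", "billing", True, datetime(2021, 5, 7, 10, 0)))   # True
print(authorize("billing-operator", "billing", False, datetime(2021, 5, 7, 10, 0)))  # False
```

In a real platform each decision would also be written to the session log so that access to critical systems can be monitored and recorded, as described above.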

The risk of ransomware is mitigated when a separate “zero-trust” user access layer is deployed between the “sensitive” corporate network and the “critical” billing systems.

There also needs to be a secure operational link between critical IT systems and the OT network. This can be accomplished by additional segmentation, logging, and monitoring.

The corporate IT network needs to have a separate zero-trust user access platform for connecting to the OT network. There may be OEMs that need access to control systems, and this access should also be controlled through MFA, user-to-asset connection control, logging, monitoring and recording.
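A minimal sketch of user-to-asset connection control might look like the following, assuming a hypothetical deny-by-default allowlist mapping each OEM or staff account to the specific OT assets it may reach (all account and asset names are invented for illustration):

```python
# Hypothetical user-to-asset connection map: each OEM or staff account may
# reach only the specific OT assets it is responsible for.
CONNECTION_MAP = {
    "oem-turbine-vendor": {"turbine-plc-07"},
    "plant-engineer": {"hmi-01", "hmi-02", "scada-historian"},
}

def may_connect(user: str, asset: str) -> bool:
    """Deny by default: a connection is allowed only if the (user, asset)
    pair appears in the map. Every decision is emitted for monitoring."""
    allowed = asset in CONNECTION_MAP.get(user, set())
    print(f"AUDIT user={user} asset={asset} allowed={allowed}")  # monitoring hook
    return allowed

print(may_connect("oem-turbine-vendor", "turbine-plc-07"))  # True
print(may_connect("oem-turbine-vendor", "hmi-01"))          # False
```

The design choice is the default: an unknown user or unmapped asset is denied rather than allowed, which is the essence of zero-trust access to control systems.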

Summary

Critical infrastructure systems need to be identified in every large organization, and measures need to be taken as soon as possible to ensure that those systems – whether on the IT network or OT network – are protected with a separate “zero-trust” user access platform. A system housing credit card data is not critical infrastructure: 17,000 gas stations don’t run out of gas when a few hundred or thousand people need new credit cards. We must understand relative risk and impact and apply separate, granular authentication and authorization to critical systems. A zero-trust methodology mitigates risks from threat actors such as DarkSide, as well as from other nefarious and skilled actors.

Taking an IT-Focused Approach to Securing OT Remote Operations at Municipal Utilities May be Risking Lives

The Oldsmar, Florida, water breach is two months behind us, but the lessons learned will continue to reverberate for thousands of budget-constrained municipal utilities in North America, as well as other regions across the world.

Lesson #1: Technology Budget Constraints

Oldsmar, like many other municipal utilities, occasionally needed remote access to its site, so it chose TeamViewer because it “didn’t cost anything extra.” Reading between the lines, the key point here is that the IT department had already purchased TeamViewer for its own needs and had extra licenses that OT could use. The IT department probably had secure infrastructure around TeamViewer, but it could not forklift this infrastructure over to the water treatment plant because it would be too expensive to replicate for a few “critical” HMIs and other systems. TeamViewer in itself is not the issue – the problem is the complex and expensive proposition of scaling IT cybersecurity architecture to OT.

Lesson #2: Cybersecurity Resource Constraints

Senior plant managers typically have mechanical and/or electrical engineering backgrounds and are not well versed in IT protocols, 2FA, firewalls, VPNs, Jump Servers, and the like. They don’t have the time or expertise to manage IT cybersecurity stacks. If they have to remote into a plant at 2 a.m. to check systems, they want something that just works. Some utilities may invest in integrating a cybersecurity tool, but plant managers will not know whether everything is properly configured; they just want it to work. The need for easy plant access can drive behavior away from complex secure remote access through IT infrastructure and toward the free “easy” button.

Lesson #3: IT and OT Cultural Differences – Confidentiality vs. Availability

A utility’s IT network consists of billing, accounting, and HR systems, which contain PCI and PII data that must be kept confidential. IT operations and cybersecurity personnel need to make sure that access to these systems is limited and controlled through several integrated secure authentication and authorization mechanisms. IT operations is hyper-focused on providing secure access to sensitive and confidential data for its users.

The OT network consists of process and automation controls and distributed control systems for valves, pumps, meters, etc., as well as human machine interface (HMI) computing systems and SCADA applications that interact with these real-time systems. The safety and availability of these real-time systems is paramount.

The very culture of OT operations is keeping systems running. IT is focused on protecting confidential data. These differing priorities mean that cybersecurity in the OT context needs to be built-in with unique features for both senior managers and technicians.

The Final Lesson: IT Remote Access Solutions Can Increase Risks to Public Safety in OT Environments

The nature of OT requires a very secure and simple remote operations platform that doesn’t break the bank. IT/OT converged networks can create complexity where insecure protocols such as RDP can be exposed into the IT network and out to the internet. Critical OT systems that have exposed protocols can be found with tools such as Shodan. Complex IT cybersecurity infrastructure and Security Operations Centers are focused on IT networks and not built to look for issues within OT networks. While larger utilities do implement OT-specific cybersecurity stacks, smaller municipalities cannot usually afford these, as was the case with the breach in Oldsmar.

In addition, there are specific operational needs that require OT-specific secure remote operations platforms. OT-specific user access and operations can reduce risks to public safety by including unique features such as:

  1. User access screen recording on HMIs and other OT systems – this can help diagnose user errors and help with training junior technicians to mitigate automation and control issues that could lead to disastrous consequences
  2. Granular role-based access controls such as a Remote Access Manager and File Transfer Manager – these roles can be given to specific individuals for specific tasks, thus limiting access privileges and mitigating risks associated with oversubscribed access to non-IT OT managers
  3. Live user connection monitoring – which provides senior managers visibility to technician input to walk through processes and provide real-world training

Summary

Enterprise IT remote access technologies such as VPNs and Jump Servers, when used with multi-factor authentication, intrusion detection systems and firewalled network segmentation can reduce risks associated with confidential data compromise; however, these integrated enterprise technologies cannot be forklifted and replicated for OT. Often, an OT staff will deploy a subset of these technologies to enable remote access, which then opens up the OT network to compromise. OT has very specific needs to ensure operational availability and public safety. They cannot afford the vulnerabilities associated with incomplete enterprise remote access tools or complex full stacks, which are too expensive to acquire and maintain in resource-limited OT environments.

To learn about XONA’s user access solution built for OT that puts all of these lessons into action, schedule a demo now.

Cybersecurity & Remote Workers: How to Protect Your Data & OT Infrastructure

Even before the Coronavirus pandemic created an environment ripe for bad actors to exploit, cybersecurity was a top priority at many companies. Most industries identified cyber threats as a serious risk to their business continuity and longevity. Since the onset of COVID-19, 75% of business leaders view cybersecurity as a top priority while navigating the new normal.

It’s easy to see why. According to IBM’s annual Cost of a Data Breach Study, the average data breach will cost companies nearly $4 million, a significant sum at a time when most organizations are already facing serious business disruptions.

Unfortunately, these risks are amplified in a remote work environment as unsecured connections, careless employees, and unsophisticated data privacy standards put company data at risk.

These risks are amplified in Operational Technology (OT), as compromised data and systems can lead to catastrophic incidents and put lives at risk. Therefore, as companies increasingly embrace a hybrid workforce and the remote operations capacity that comes with it, it’s vitally important to ensure that access to your organization’s OT systems is cyber-secure.

Here are three steps that every organization can take today to begin this process.

#1 Ensure that remote workers operate in a safe OT environment.

From fraud attempts to compromised connections, remote workers face a deluge of cybersecurity threats that put companies at risk. In this environment, employees need a comprehensive, secure remote operations platform that provides:

  • Protocol isolation
  • VDI access – no data-in-transit
  • Multi-factor authentication
  • Application and system segmentation
  • Time-based access control
  • Session logging
  • Screen recording

These zero-trust features provide a level of accountability for employees while also ensuring safe access to critical infrastructure.
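As a rough illustration of two of the features above, the sketch below combines time-based access control with session logging: a grant expires after a fixed window, and every action is timestamped for later review. The class and field names are hypothetical, not any particular platform’s API:

```python
from datetime import datetime, timedelta

class AccessGrant:
    """A time-boxed remote access grant: valid only inside its window,
    after which the worker must re-request access. Names are illustrative."""

    def __init__(self, user, system, duration_minutes):
        self.user = user
        self.system = system
        self.granted_at = datetime.utcnow()
        self.expires_at = self.granted_at + timedelta(minutes=duration_minutes)
        self.audit_log = []  # session events recorded for accountability

    def record(self, action):
        """Session logging: every action is timestamped and kept for review."""
        self.audit_log.append((datetime.utcnow().isoformat(), self.user, action))

    def is_valid(self, now=None):
        """Time-based access control: the grant lapses at its expiry."""
        return (now or datetime.utcnow()) < self.expires_at

# A 30-minute grant is valid now but not a minute after it expires.
grant = AccessGrant("remote-tech", "hmi-01", 30)
grant.record("login")
print(grant.is_valid())                                             # True
print(grant.is_valid(now=grant.expires_at + timedelta(minutes=1)))  # False
```

Expiring grants force re-authentication rather than leaving standing access open, which is the accountability the list above is driving at.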

#2 Implement zero-trust technology.

In the past several years, companies have spent extravagant sums to fortify their on-site defensive posture. Unfortunately, those efforts do little to keep a hybrid workforce cyber-secure.

While VPN services and other security-focused technologies offer a basic level of network access protection, remote operations require more granular authorization and monitoring controls for access to critical systems. A zero-trust architecture is needed, as it combines strong multi-factor authentication, segmented system authorization, and full user access monitoring and recording.

#3 Require moderated unidirectional secure file transfer capability to move files into an OT environment.

The past several years have seen an unprecedented number of data breaches, and billions of digital records have been compromised in the process. The consequences can be much more devastating to public safety in OT.

However, simple strategies, like moderated unidirectional secure file transfer, can provide better safeguards to ensure files moved into the OT environment are audited and validated.

For example, enabling a technician to update the software on a critical system should require that only unidirectional access is allowed from the remote technician, and a supervisor must also approve the file to be moved. In addition, the integrity of the file should be validated and also checked for malware. These features are often optional, but companies should make them standard when public safety is at stake. The extra step can help prevent a consequential data or network breach leading to a disastrous outcome.
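A minimal sketch of such a moderated transfer gate, assuming a vendor-published SHA-256 digest and a supervisor sign-off field (both hypothetical), might look like this:

```python
import hashlib
from pathlib import Path

def moderated_transfer(src: Path, expected_sha256: str, approved_by=None) -> bool:
    """Allow a file into the OT environment only if (1) a supervisor has
    signed off and (2) its SHA-256 digest matches the value published by
    the vendor. A real deployment would also invoke a malware scanner."""
    if not approved_by:
        return False  # no supervisor approval: block the transfer
    digest = hashlib.sha256(src.read_bytes()).hexdigest()
    if digest != expected_sha256:
        return False  # corrupted or tampered file: block the transfer
    # scan_for_malware(src)  # hypothetical AV/sandbox hook
    return True

# A patch file with a matching digest and supervisor sign-off is allowed.
patch = Path("firmware_patch.bin")
patch.write_bytes(b"example payload")
good_digest = hashlib.sha256(b"example payload").hexdigest()
print(moderated_transfer(patch, good_digest, approved_by="supervisor"))  # True
print(moderated_transfer(patch, good_digest, approved_by=None))          # False
```

The key property is that a single technician cannot move a file into OT alone: both the human approval and the integrity check must pass before anything crosses the boundary.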

Conclusion

Cybersecurity is a bottom-line issue for every organization. The economic implications of COVID-19 are forcing many companies to make difficult concessions, which increases the importance of addressing cyber threats with an integrated zero-trust user access and remote operations platform.

Simply put, getting the most bang for your proverbial buck means turning to solutions that include cybersecurity as a built-in, baseline part of the product.

How Remote Operations Capacity Improves Organizational Efficiency

The Coronavirus pandemic is proving to be one of the most disruptive forces of our generation. In addition to being a profound public health emergency that’s tragically cost the lives of hundreds of thousands of people, the economic implications have been vast and far-reaching.

As a result, companies of every size in nearly every sector are contending with a new financial reality. Shrinking consumer demand, decreased revenues, and increased costs associated with safety and cybersecurity will collectively force organizations to assess their priorities and maximize their efficiency.

In this environment, optimizing workflows, mitigating pain points, and otherwise increasing agility will be critical to ensuring operational continuity and long-term success.  These pain points extend to industrial control systems in critical industries such as Energy, Oil and Gas, Manufacturing and Transportation.  Remote operations capacity, the ability to communicate and collaborate from anywhere and interact with these critical infrastructure systems, can help organizations gain new operational efficiencies.

Here’s how.

#1 Access and optimize a global talent pool.

Moving forward, it’s clear that a hybrid workforce that accommodates in-person, remote, and distributed teams will be a defining feature of the future of work.

To make this change successfully, teams will need more than a Zoom account and a Slack chat. They need to operate critical infrastructure, diagnose problems, implement solutions, and safely and securely collaborate with on-site employees. Most importantly, they need to be able to do this from anywhere at any time.

In doing so, companies gain access to a global cadre of professionals, bringing once-inaccessible talent to bear on pressing problems. A global talent pool lets companies hire the most qualified people from around the world, but without the right tools, it becomes a bottleneck with costly implications.

Whether you’re accommodating an international organization or hiring individual talent abroad, remote operations capacity is key to maximizing efficiency.

#2 Monitor and maintain decentralized and multi-site infrastructure.

Multi-site workspaces are especially difficult to manage during a pandemic. Not only is this work less tenable when safety restrictions and other measures hinder travel and in-person meetings; it’s also profoundly inefficient.

Remote operations capacity equips employees to monitor and maintain infrastructure from anywhere, giving them the ability to:

  • Centrally monitor on-site operations
  • Diagnose and troubleshoot alarms and issues
  • Instruct, guide, and dispatch on-site personnel
  • Remotely operate, start up, and shut down physical infrastructure

This capacity can reduce travel and personnel costs while ensuring that critical infrastructure is optimized and well-run.

#3 Reduce costs associated with on-site facilities management.

Even for employees working on-site, remote operations capacity allows for new efficiencies that maximize growth and opportunity. For instance, this technology allows workers to easily collaborate with remote staff and experts.

Similarly, as social distancing protocols keep group meetings to a minimum, this technology ensures that organizations operate reliably with reduced on-site staffing. Most importantly, all employees – whether on-site or remote – can quickly and easily respond to incidents and real-time needs from anywhere.

Conclusion

As companies are forced to do more with less, the right tools can be the difference between flourishing and failure. Remote operations capacity isn’t the only ingredient for successfully navigating this challenging time, but it’s a powerful tool for maximizing efficiency without compromise.

Empower, Maintain, Attract: 3 Priorities When Adapting to a Hybrid Workforce

2020 is a transformative year in many ways. Most prominently for businesses, the on-the-ground reality of the Coronavirus pandemic ushered in what Time Magazine described as “the world’s largest work-from-home experiment.”

Of course, what began as a grand experiment has quickly become the new normal. More than half of corporate executives and small and medium-sized businesses plan to continue offering a remote work option even after the COVID-19 pandemic eventually comes to a close. Collectively, Gartner estimates that nearly three-quarters of all companies plan to make remote work a central part of the present and future of work.

Unfortunately, there is a growing chasm between remote work ambitions and the on-the-ground reality, as many companies lack the capabilities to empower off-site employees in a way that maintains operational continuity, workplace flexibility, and cybersecurity.

In this regard, adaptability is critical, and successful companies will develop remote operations capacity that’s ready to meet the moment. Here’s why.

#1 Empower distributed teams from anywhere

While much attention is given to the growth of popular remote work tools like Zoom and Slack, these technologies fail to bridge the gap between off-site workers and on-site responsibilities.

Especially when physical infrastructure is involved, employees need a way to assess problems, coordinate functions, and implement solutions.

A long-term remote work arrangement requires companies to provide more than just communication and collaboration tools. Teams need to be able to remain operational in physical spaces, even from a distance.

Developing this capacity empowers a hybrid workforce, and it poses an opportunity to introduce new efficiencies to existing workflows. Specifically, workers can:

  • Operate on-site infrastructure.
  • Identify and respond to on-site problems.
  • Securely communicate and collaborate with team members.

Empowering distributed teams to be effective from anywhere requires comprehensive remote operations capacity that unites on-site, remote, and distributed teams.

#2 Maintain business continuity in any environment

The long-lasting, far-reaching consequences of the COVID-19 pandemic are devastating, and they underscore a broader reality that companies need to embrace: the only certainty is unpredictability.

Natural disasters, international conflict, and shifting consumer trends continually threaten to undermine business continuity. For instance, scientists at the National Oceanic and Atmospheric Administration have described this hurricane season as “the most active in history,” and catastrophic wildfires on the West Coast offer a startling reminder that significant disruptions can occur at any moment.

In other words, the COVID-19 pandemic is astounding, but it isn’t an aberration. Moving forward, companies should be prepared to maintain business continuity regardless of circumstances.

Remote operations capacity is a prominent component of this reality. Especially for critical sectors like energy, oil & gas, and government, businesses need to be ready regardless of circumstances.

#3 Attract and retain a mobile-first generation of employees

Today’s workers are increasingly mobile-first, something that is likely to increase now that many have proven off-site productivity during a pandemic. What’s more, the transition to remote work is well-received by today’s workers, and, according to Gallup, nearly 60% want to continue working off-site indefinitely.

The benefits are obvious. Many employees achieve better work-life balance, endure less commute-related stress, and attain affordable housing in desired areas.

Many of the same industries that stand to benefit from remote operations capacity, including energy, oil & gas, and government, are struggling to recruit top talent, and comprehensive remote work capabilities can support these initiatives. For example, an energy industry survey by Perkbox found that flexibility is one of the top ways for companies to attract new, and especially younger, talent.

For many, flexible work is a distinguishing feature, separating the most compelling companies from the rest. To compete for top talent, companies will need to adapt to the moment by developing remote operations capacity that allows them to offer flexible work arrangements without compromise.

Conclusion

The concept of workplace “agility” is a corporate buzzword with renewed meaning and importance in 2020. The COVID-19 pandemic is forcing companies to adapt to new workplace arrangements that include remote, on-site, and distributed teams.

However, this transition is about more than just this moment. Companies that can harness this transformative time to retool and adapt their practices to address the challenges and opportunities ahead will be better positioned for long-term success. Our circumstances might be unique and the challenges immense, but what started as a grand experiment is now the new normal, and we need to be ready to adapt.

3 Priorities for Securely Transitioning to Remote Plant Operations

Few industries are undergoing a digital transformation as quickly or as thoroughly as the energy sector. Complex market forces and unique challenges have converged to create an environment where new digital solutions are required to produce greater efficiencies, better safety standards, and a more compelling work environment.

A critical part of this change is a transition to a remote environment, which has only accelerated in response to the Coronavirus pandemic. In-person, on-site work is unlikely to return as the de facto arrangement anytime soon.

Instead, it’s increasingly clear that the new normal will consist of on-site, remote, and distributed teams. To accommodate this environment, power producers will need to scale their remote operations capacity quickly.

The Benefits of Remote Operations Capacity

Perhaps most obviously, remote operations capacity addresses a clear desire from workers to have more flexibility in their workplace. This is both a response to the COVID-19 pandemic and a long-developing trend. Within the energy industry, 70% of employees indicate that they want to continue working remotely after the pandemic. At the same time, companies will bolster their ability to ensure productivity from a mobile workforce while enabling sharable expertise and services.

At the same time, distributed workers are empowered like never before, as remote operations capacity allows them to:

  • Centrally monitor plant operations
  • Diagnose and troubleshoot alarms and issues
  • Instruct, guide, and dispatch on-site personnel
  • Remotely start up, shut down, and operate plants.

While remote operations capacity is often associated with off-site work, it also bolsters the abilities of on-site workers. For example, these employees can:

  • Collaborate with remote staff and experts
  • Increase mobile staff effectiveness and flexibility
  • Improve employee health and safety
  • Operate reliably with reduced staffing.

Remote operations capacity addresses many other important issues for the industry. It empowers an emerging, multi-generational workforce, updates cybersecurity standards for the digital age, and allows energy producers to remain agile during catastrophic events (like a global pandemic).

Priorities That Matter Most

Power generators are moving in the right direction by adopting technologies that enable remote or mobile control procedures that ensure business continuity and staffing flexibility. Ensuring a secure, effective transition is top-of-mind for many leaders. Failure in this regard could have long-term implications for companies’ bottom lines and their customers’ ability to receive reliable, affordable services. To achieve this, leaders should focus on three priorities:

Simplicity. Technological advancement is only forward-thinking if people can harness these capabilities to improve upon existing infrastructures. Companies are making this transition during an already-disruptive time, so they should strive to implement software that is simple to install, easy to use, and powerful.

Cybersecurity. Today, most power plants are equipped with next-generation firewall (NGFW) products, a defensive standard for preventing bad actors from accessing and meddling with these critical networks. These products enable powerful functions like sandboxing, application-level inspection, and intrusion prevention. However, this technology isn’t designed for remote access, which is why a new kind of “connection broker,” a zero-trust OT platform, is helpful. It allows users to authenticate with any standard browser on their PC or tablet: users simply log onto the broker over an encrypted HTTPS connection and are screened through a multi-factor authentication process to verify their identity. This standard ensures that power producers stay cyber secure without limiting their employees’ ability to operate effectively off-site.
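The multi-factor screening step described above commonly relies on time-based one-time passwords. As a minimal sketch, assuming a standard RFC 6238 TOTP flow (the article doesn’t specify the broker’s actual implementation), the verification logic can be written with only the Python standard library:

```python
import base64
import hmac
import struct
import time


def totp(secret_b32: str, for_time=None, digits: int = 6, step: int = 30) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset derived
    # from the last nibble, mask the sign bit, then take the low digits.
    offset = digest[-1] & 0x0F
    code = (int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)


def verify(secret_b32: str, submitted: str, now: float, window: int = 1) -> bool:
    """Accept codes from adjacent time steps to tolerate clock drift,
    comparing in constant time to avoid timing side channels."""
    return any(
        hmac.compare_digest(totp(secret_b32, now + i * 30), submitted)
        for i in range(-window, window + 1)
    )
```

The shared secret would be provisioned to the user’s authenticator app once; the broker then checks each submitted code against the current time step before granting a session.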

Immediacy. Undoubtedly, the trend toward remote work and the need for remote operations capacity is a long-term reality that will reshape the industry in its image. It’s also an immediate concern. Not only are the cost-saving efficiencies needed to compete in an already troubled energy market, but keeping workers safe and productive during the pandemic is a top priority. Embracing an off-the-shelf software solution means not having to build something from the ground up, which is important when time is of the essence.

Conclusion

Secure remote operations enhance the flexibility, capability, and responsiveness of utilities to effectively meet this transformative moment by bringing together on-site and remote operators to increase operational efficiency and public safety. Moving forward, whatever the need, the solutions must be flexible and adaptable, able to exist as a temporary band-aid solution and as a long-term, comprehensive digital transformation.

The energy sector is undoubtedly an industry in transition, and companies that make adjustments will be poised to flourish moving forward while those that stand still will struggle to keep up.

Assume Everything is Broken

“Change is the law of life. And those who look only to the past or present are certain to miss the future.” – John F. Kennedy

Early in my IT career, I worked as a Novell LAN Administrator for a government contractor. (For the millennials out there, Novell was a network operating system that was really popular in the 1990s, when you still needed to dial in to the internet…so LANs, or “high speed” Local Area Networks, were the hip, cool thing).

In a 150-year-old government building, at the end of a long hallway of high-walled cubicles, sat my first real IT mentor, Chris, a Novell guru and an amateur philosopher. Chris taught me how to troubleshoot. One day, I remember struggling with a network printing issue, and I probably spent a few hours trying to solve the problem until I decided to go ask Chris for help.

Chris looked at me as I went through a dissertation on how I had tried to solve the issue and calmly said, “Assume Everything is Broken.” I blankly looked back and asked him to just come look at the problem. I wanted the lazy way out, to just have Chris fix it…so I could move on to less difficult tasks.

Chris didn’t let me off the hook and told me to sit down and think about the problem differently to first find out what works. I went back to my cube and methodically started from the assumption that everything is broken and was able to narrow the issue down to a faulty interface card and solve the problem.

Today, every device is becoming part of a network, from your car to your light bulbs. Phones can create their own LANs or live in the cloud. This hyper-networked world has created a risk management nightmare for enterprises. There is a dizzying array of cybersecurity companies, tools, and “buzzwords” for trying to secure this constantly changing environment: automated AI-driven threat detection and response, instrumented orchestration, identity management and governance, endpoint protection, user behavior analytics, network monitoring; the list goes on and on.

The goal of putting all this cybersecurity stuff in the enterprise seems generally to be driven by an assumption that everything is at risk (broken)…so we need more actionable data about our devices, networks and people so that we can reduce risk to our enterprise, brand, sensitive data, critical assets, etc.

Assuming that everything is broken or vulnerable means that we as an industry may need to rethink “everything,” including our legacy network architecture, which was initially designed around a perimeter, a moat around the enterprise to protect the crown jewels. Many of the technologies used for securing network communications that are still operational today are 20, 30, or even 60 years old.

Below are some legacy technology examples:

  • Passwords – the first computer password was developed at MIT in 1961, yet passwords are still used by many enterprises as the sole means of authentication for most or all of their employees. Enterprises are breached regularly because many passwords can be compromised fairly easily. Most enterprises still do not employ multi-factor authentication (MFA) for all network endpoint devices.
  • Firewalls – or packet filters (as 1st generation firewalls were called) were invented in 1989. Firewalls have been an integral network architecture component over the last 30 years for protecting enterprises from the nefarious actors operating on the public internet. Firewalls have improved greatly over the last 10 years, and now provide more granular security for applications and users. The problem is that even newer Next-Generation Firewalls must be integrated with several other technologies, such as multi-factor authentication and user access monitoring and recording, to substantially reduce risk to critical systems and data. This adds complexity and cost that can be too onerous for many under-resourced IT or OT enterprises.
  • VPNs – Virtual Private Networks (VPNs) have been around since 1996 and are still used by enterprises to connect to critical data and systems. VPNs provide a secure channel of communication but do not protect access to individual critical systems. The problem is that when VPN credentials are compromised on an endpoint, the attacker now has a secure channel through which to exfiltrate data from your enterprise.
  • Jump Servers – have been around since the 1990s and were created for secure access between two dissimilar security zones. Basically, a jump server is a machine that provides a checkpoint through which you connect to other, more sensitive systems. The problem with jump servers is that many are built on systems that are not hardened or patched regularly, and many expose insecure communication protocols across security zones and out to the internet.

These technologies, which are part of most enterprises’ network architectures, were initially designed to assume only specific things were broken or at risk – they were point solutions to solve specific problems with either securing a communications channel or user access to a system.

If you took a poll in the 19th century on what folks needed to get from their farm to town faster, they would have unanimously said a faster horse. They couldn’t have anticipated the automobile – just as computer scientists in the 1990s were not thinking about smart toasters and networked fish tanks. We don’t need better passwords, firewalls, VPNs or jump servers – we need a better holistic secure network architecture.

Zero Trust Networks – A 21st Century Approach

“The art of war teaches us to rely not on the likelihood of the enemy’s not coming, but on our own readiness to receive him; not on the chance of his not attacking, but rather on the fact that we have made our position unassailable.” – Sun Tzu

Legacy network architecture assumes a security perimeter: those outside the network are not trusted, and those inside are. Zero-Trust network architecture recognizes that perimeter security is just one component of establishing trust between a user or machine and its connection to a specific resource.

Zero Trust calls for enterprises to leverage micro-segmentation and granular perimeter enforcement based on users, their locations and other data to determine whether to trust a user, machine or application seeking access to a particular part of the enterprise.

The concept of zero trust was built on the premise that you cannot trust the network, device, or user independent of one another. Each individual connection between a user and a system must be authenticated, authorized, encrypted, audited, and monitored.
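The per-connection discipline above can be sketched as a simple policy decision point. This is a minimal illustration of the principle, not any vendor’s implementation; the user names, resource identifiers, and policy table are all hypothetical:

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("zero-trust")


@dataclass
class Request:
    user: str
    device_trusted: bool  # device posture check passed
    mfa_passed: bool      # strong authentication completed
    resource: str         # the specific system being requested


# Micro-segmentation in miniature: each user is authorized
# for specific resources only (hypothetical entries).
POLICY = {
    "operator-a": {"hmi/unit-1"},
    "engineer-b": {"hmi/unit-1", "historian/read"},
}


def decide(req: Request) -> bool:
    """Evaluate every connection independently: authenticate (MFA),
    check device posture, authorize against the resource, and audit
    the decision regardless of outcome."""
    allowed = (
        req.mfa_passed
        and req.device_trusted
        and req.resource in POLICY.get(req.user, set())
    )
    log.info("user=%s resource=%s decision=%s",
             req.user, req.resource, "ALLOW" if allowed else "DENY")
    return allowed
```

The point is that no request is trusted by virtue of where it originates: every connection re-proves identity, device state, and entitlement, and every decision leaves an audit trail.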

While cybersecurity tools such as threat detection and response are still needed, they will be more effective if they are deployed in conjunction with a zero-trust network architecture.

If you assume everything you are protecting in your enterprise today is broken, you can start testing and validating to see what is working and then build a secure network architecture based on zero-trust principles for 2020 instead of “checking the box” with 20-year-old integrated security technology.

Zero-Trust: A New Buzzword That Ought to Stick

It has been a couple of weeks since RSA Conference, so I thought I would share some observations on the cybersecurity industry in general and on what I believe needs to be employed in every enterprise that has crown jewels…or at least important customer and corporate data or critical industrial control systems.

At RSA, I was fortunate enough to visit some cybersecurity technology vendors who claimed their next-generation AI data-driven threat intelligence platform would combine endpoint protection with deep learning behavioral and network analytics in a zero-trust model in order to realize an actionable real-time reduction in risk.

That’s right, I went to all 37,455 booths!

Kidding aside, CISOs and other buyers of security technology must contend with a dizzying array of technologies and buzzwords that have popped up over the last several years in the cybersecurity industry. One of the newer entries becoming UBER popular is “Zero-Trust.” The term essentially means enforcing least privilege: never trust, always verify. All users should authenticate with strong authentication (2FA), should have only granular authorization to a given system or application, and all system access should be monitored.

The employment of Zero-Trust may sound draconian at first, but this concept forces strategic thinking into the equation of how to mitigate risks. It additionally has the potential to eliminate many duplicative data-driven reactive technologies, as well as many of the current buzzwords, or at least make them just a subset of Zero-Trust. This is a huge bonus, since true employment of this principle and methodology would actually radically reduce risk across every organization.

The best analogy I can think of today for a Zero-Trust model in use is your local bank. Banks employ “Zero-Trust” every day in their branches. Why? Because they are protecting a really important asset: cold hard cash. A bank employs debit cards and PINs (2FA authentication), and users are only authorized to access cash in their own accounts (granular authorization). You also have to have a special key to get into a deposit box in a vault (application micro-segmentation). Every time you access cash, there is a transaction ID and other details (session logging). There is also a security guard and video cameras (continuous monitoring and recording of access). Sure, there will still be a few bank robbers willing to take the risk – but most will be deterred because of Zero-Trust.

In the first few years of this century, we had desktops and laptops with anti-virus, a corporate firewall, and maybe an intrusion detection system at the perimeter, and we were reducing corporate risk. Zero-Trust wasn’t as important because there were only a few bank robbers and millions of organizations. This was the old world.

Today, the world is hyper-connected. We now communicate data more over mobile devices than with laptops or desktops. We have an internet of things both at home and in our workplace that communicate on their own. Piling on more analysis to all of this communication to find threats with a limited talent pool dissecting this data is, at best, arduous and, at worst, entirely fruitless. We also now have millions of “bank robbers” because they know most organizations are not employing a Zero-Trust model.

We must build a better foundation of security where we are not drowning in data to be analyzed and buzzwords to be digested. Zero-Trust is the foundational model that every organization trading in data or systems protection needs to employ.