Cybercrime is ever-present, and a steady rise in breached data and compromised devices has become the de facto expectation. In response to this continued growth in cybersecurity threats, most organisations and individuals are actively evolving their defences. To put things into perspective, the global information security market was predicted to reach approximately US$170 billion in 2022, continuing the year-on-year growth we’ve seen over the past decade. Much of this new spending is driven by increased awareness and regulation, as organisations’ needs evolve toward addressing more complex challenges.
Over the past five years or so, bandwidth has become less of a constraint than in preceding decades: broadband adoption is near-universal in the consumer market, and corporate networks have access to increasingly fast connections. These changes have allowed for, and been influenced by, more complex topologies as the appetite for data – and more importantly, its utilisation – has grown meteorically, boosted by the likes of video consumption and IoT.
With an uptick in the sheer volume of data and the adoption of IoT devices by consumers and organisations alike, coupled with the increasingly distributed nature of work (a trend accelerated by COVID), the amount of formerly confidential information available on the dark web is skyrocketing. As cybercrime and data breaches have become part of today’s digital reality, what’s the solution?
What is Edge Computing?
Edge computing is the practice of positioning devices to collect, process, and share data without needing to transport that data to other environments. In other words, data is collected, processed, and analysed closer to its originating source than in traditional topologies.
Edge computing was originally adopted to minimise bandwidth costs: processing data locally reduces the amount sent to a cloud-based or centralised location. Today, however, those cost savings are a secondary benefit, as bandwidth is typically not an operational or financial constraint for most use cases. Problems still arise when large numbers of devices transmit data simultaneously – for example, hundreds of CCTV cameras streaming live footage, where quality may degrade due to latency. So today we are seeing the rise of edge computing primarily to ensure that data doesn’t suffer latency issues that would affect application performance or user experience.
The increased deployment of IoT devices has bolstered the rise and further development of edge computing. The solution was needed because one of the main IoT challenges has been the collective volume of data these devices create: they must connect to a network, most often the internet, to deliver data or to receive information and instructions. While latency is the primary catalyst for edge computing, the reduced footprint – data traverses fewer network hops and is less likely to become ‘data at rest’ on a server – can also alleviate some security concerns.
Examples of edge computing applications include drone-enabled crop management, flow systems on oil rigs, and monitoring the current load on a local power grid. In each case, data from each device is collected and processed locally, and only the relevant output is sent on for presentation or further processing and analysis. For instance, an edge gateway might receive and aggregate data from a fleet of autonomous vehicles, then send only the relevant information to the cloud, significantly decreasing latency and reducing bandwidth requirements.
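The aggregate-then-forward pattern described above can be sketched in a few lines. This is a minimal illustration, not a production gateway: the device names, readings, and alert threshold are all hypothetical, and a real deployment would stream data continuously rather than batch-process a dictionary.

```python
from statistics import mean

# Hypothetical raw sensor readings collected at the edge (one list per device).
device_readings = {
    "vehicle-01": [41.2, 41.5, 43.8],
    "vehicle-02": [38.9, 39.1, 39.0],
}

ALERT_THRESHOLD = 43.0  # illustrative limit; real systems would tune this per sensor


def summarise_at_edge(readings: dict[str, list[float]]) -> list[dict]:
    """Aggregate raw readings locally and keep only what is worth forwarding."""
    outbound = []
    for device_id, values in readings.items():
        # Forward a compact summary instead of every raw sample, flagging
        # only the devices that breached the threshold.
        outbound.append({
            "device": device_id,
            "avg": round(mean(values), 2),
            "peak": max(values),
            "alert": max(values) > ALERT_THRESHOLD,
        })
    return outbound


print(summarise_at_edge(device_readings))
```

Only the short summaries cross the network; the raw samples never leave the edge, which is where the latency and bandwidth savings come from.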
What is a Zero Trust Architecture?
Zero Trust is a strategic initiative that eliminates the inherent trust placed in everything inside an organisation’s network, and thereby suppresses data breaches. The concept is based on the principle of never implicitly trusting any user (or activity) and always seeking verification. In essence, a zero trust architecture assumes that all users are potential threats: it withholds access to resources and data until the user is verified and their access confirmed, and then grants only the access needed to perform their primary duties. If a device is compromised, zero trust ensures that the damage is constrained. Conceptually, you don’t assume that someone inside your castle walls is a friend; you challenge them, especially when their activities are outside their norm.
Attackers typically use lateral movement techniques to navigate through networks in search of valuable data and assets. Under a zero trust architecture, the system distrusts the user even though their operations take place inside the network, thereby limiting their movement. The attempted attack is identified in time, allowing prompt action and subsequent remediation by a SOC or IT team.
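The deny-by-default evaluation that blocks lateral movement can be sketched as a simple policy check. This is an illustrative toy, not a real zero trust product: the roles, resources, and policy table are hypothetical, and a genuine implementation would evaluate many more signals (device posture, location, time of request) on every access.

```python
# Hypothetical role-to-resource policy: anything not explicitly listed is denied.
ACCESS_POLICY = {
    "hr-analyst": {"hr-records"},
    "db-admin": {"billing-db", "audit-logs"},
}


def is_allowed(role: str, resource: str, verified: bool) -> bool:
    """Grant access only to a verified identity whose role explicitly covers the resource."""
    if not verified:
        return False  # never trust an unverified session, even inside the network
    return resource in ACCESS_POLICY.get(role, set())


# A verified HR analyst can reach HR records...
print(is_allowed("hr-analyst", "hr-records", verified=True))   # True
# ...but lateral movement toward the billing database is blocked,
# even though the request originates inside the network.
print(is_allowed("hr-analyst", "billing-db", verified=True))   # False
```

The key design choice is the default: absence from the policy means denial, so a compromised account can only reach the handful of resources its role was ever granted.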
The proliferation of industrial IoT services, mobile devices, and cloud computing has dissolved conventional network boundaries, and hardened network perimeters are no longer sufficient to provide reliable security. Zero trust architecture reduces organisations’ exposure to vulnerabilities in this “perimeter-less” environment by preventing lateral movement, offering Layer 7 threat protection, leveraging network segmentation, and streamlining user-access control.
How does Zero Trust Impact Edge Computing and IoT?
Zero trust architecture is highly proactive at preventing IoT security issues. Rather than allowing further access into an organisation’s network, zero trust protocols trigger authentication whenever a user (or application) seeks deeper or atypical access, preventing threat actors from penetrating further into the corporate network.
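The "challenge on atypical access" behaviour can be sketched as a check against an identity's usual footprint. The identities, resources, and baseline table below are hypothetical, and real systems derive the baseline from observed behaviour rather than a static table, so treat this as a sketch of the idea only.

```python
# Hypothetical baseline of what each identity normally touches.
TYPICAL_ACCESS = {
    "sensor-gateway": {"telemetry-queue"},
    "ops-engineer": {"telemetry-queue", "dashboard"},
}


def requires_step_up(identity: str, resource: str) -> bool:
    """Flag any request outside the identity's usual footprint for re-authentication."""
    return resource not in TYPICAL_ACCESS.get(identity, set())


# Routine telemetry upload proceeds without friction...
print(requires_step_up("sensor-gateway", "telemetry-queue"))  # False
# ...but the same gateway reaching for an admin console triggers a challenge.
print(requires_step_up("sensor-gateway", "admin-console"))    # True
```

An IoT device that suddenly requests resources outside its norm is exactly the compromised-device scenario zero trust is designed to contain: the anomalous request is challenged instead of waved through.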
Organisations usually begin deploying zero trust architecture at the edge, introducing strict policies for data centre security and devices in an effort to strengthen network defences against the key security threats posed by IoT. Given the weak security standards of many IoT devices, this approach of restricting unauthorised and unnecessary access to the network is apposite.
With correctly designed and implemented zero trust policies, the potential security issues introduced by IoT devices can be mitigated. Deploying a zero trust architecture helps guard organisations against known and unknown threats, while still allowing them to realise the advantages of IoT adoption.
Starting to implement zero trust needn’t be a complicated initiative. Security teams aren’t required to install expensive new software or replace existing network segments; relatively simple refinements to access policies, segmentation gateways, and applications can be put in place to improve security posture.
CyAmast is committed to providing unique and effective IoT solutions without compromising security or user experience. We help organisations identify assets and provide context for device behaviours at the edge (covering IoT) and across IT and Operational Technology. This insight forms the basis for decision-making and for shaping access policies, without the need for additional security solutions or further network segmentation and redesign. Because CyAmast sits parallel to your current network, there’s no interruption to your operations, and new initiatives like IoT adoption are easily accommodated.