Edge computing enables processing of data closer to where it’s created (eg: at motors, pumps, generators, and other assets and sensors), reducing the need to transfer that data back and forth to the Cloud.
Think of Edge computing in manufacturing as a network of micro data centres that host, store, compute and analyse data on a localised basis, while pushing aggregate data to a centralised plant or enterprise data centre, or even the Cloud (private or public, on-premise or off), for further analysis, deeper learning, or to feed an artificial intelligence (AI) engine hosted elsewhere.
There is no distinct hardware definition of industrial edge computing today; it’s in the eye of the beholder as to how much compute power or data response may be required in a given application or across a specific production process. Dedicated servers with virtualisation can host apps with significant footprints, store related production data, communicate with the Cloud, and perform onboard analytics in the footprint of an appliance, a server, or a PC in a PLC rack. For the purpose of this discussion, we will focus on the functionality just described rather than on the specific hardware footprint it may occupy.
Above all else, organisations must agree internally on standards of functionality required for various processes and then on the appropriate hardware and vendor(s) to fulfil the need. In this discussion, Edge will almost always mean on-premise or at-asset deployment, to avoid over-generalising about the IT infrastructure at the plant level.
Industry 4.0 and Smart Manufacturing: Impact of Operational Architecture
Manufacturing organisations benefit as they leverage the IIoT for the Operational Architecture of the future. Technology and tools that are continuously evolving will bring about the widely discussed Smart Manufacturing models. There are dozens of associated technologies and uses, but this discussion focuses primarily on manufacturing and process operations and the link to the Digital Twin, operational improvements and the technology shifts affecting the metrics that matter.
The Shift from Traditional Tech to Modern Equivalents
Edge can sit at the control level or above it, providing real-time responses and structured cost benefits.
The new LNS Research take on Operational Architecture based on the IIoT platform views analytics in the same context as all other applications. It also supports the concept of Cloud to Edge without implying any difference between them. The definition of Edge leans towards a hardware-centric view of the enterprise – any system that is below a plant data centre (or corporate one if no plant data centre exists) is considered part of the Edge. That’s not a hard and fast rule, but excellent guidance to further the discussion about Operational Architecture with distributed applications.
Depending on organisational viewpoint, edge infrastructure can be complementary to or inclusive of level 1 or 2 control and information layers of the production process. An organisation can define industrial Edge as an extension of Cloud activities or as an extension of Control activities based on requirements of speed, data structure, volume and velocity.
Organisational agreement on location (eg: an unmanned pumping station), capabilities, use cases and desired outcomes is critical before conducting any pilots or scalable implementations.
Edge Is Powering Up
Manufacturing is only one of many IoT universes where cloud and edge coexist. However, the growing number of IoT devices and applications requiring immediacy of response dictates that more cloud-like functions will move back to the Edge, where management and response can be handled on-site in much the same way as in a hosted public environment.
Eventually, the cloud will be overwhelmed by smart transportation, cities, infrastructure, agriculture, and healthcare providers, and the resulting glut of cloud data will come at the expense of time-critical applications.
Further, the sophistication of algorithms and the data thirst of machine learning will require ever more compute power in the cloud. Every major cloud provider recognises that the growth of IoT devices will require more localised cloud services with less latency and deterministic closed-loop response. Therefore, they are investing billions of dollars in edge compute infrastructure. Similarly, PLC vendors are providing more robust in-rack solutions for compute and minor app hosting that deliver high response without dependencies on outside providers. The ultimate answer will be a compute power shift to smaller, at-the-asset devices that OT staff can manage with minimal support post-startup.
Response, reasoning, and reaction in real time will be in the fluid edge domain, shared by PLC-based compute and stand-alone edge hosts. This, in turn, will reserve cloud services for non-time-critical data aggregation and analytic functions.
We don’t differentiate between applications and analytics running at the edge from those running in the cloud. An Operational Architecture is primarily software-based, and applications and analytics can run anywhere in the enterprise architecture that makes sense. This approach means you can build the Operational Architecture without concern for hardware limitations. For example, a company could decide to provide sufficient processing power in a PLC to run local analytics. That might be cost-effective and fit with the analytics goals, but it doesn’t preclude the company from running analytics elsewhere, as long as there is a logical connection to the architecture without being tied to hardware.
Ultimately, decisions about PLC versus Edge versus Cloud-based hosting and analytics depend on data volume, structures, and velocity accompanied by the response latency and the degree of difficulty in data normalisation between disparate data producers.
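As a rough illustration of the placement trade-off just described, the sketch below encodes latency, velocity and context requirements as a simple decision function. All thresholds and the helper itself are hypothetical, invented for this example; real deployments would also weigh cost, security, and the effort of data normalisation between disparate data producers.

```python
def placement(latency_budget_ms: float,
              msgs_per_sec: float,
              needs_fleet_context: bool) -> str:
    """Suggest where an analytic workload might run.

    Thresholds are illustrative only, not vendor guidance.
    """
    if latency_budget_ms < 10:
        return "PLC / controller"      # deterministic, closed-loop response
    if latency_budget_ms < 500 or msgs_per_sec > 1_000:
        return "on-premise edge host"  # fast response, high-velocity local data
    if needs_fleet_context:
        return "cloud"                 # aggregate, cross-site analytics
    return "plant data centre"         # local but not time-critical
```

For example, a vibration-trip interlock with a 5 ms budget would land on the controller, while a fleet-wide energy benchmark with no latency pressure would land in the cloud.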
The Evolving Role of Edge Computing
Distributed control and compute will merge as cloud applications become more portable and edge devices become more powerful, including additional power at the controller. Real-time analytics, artificial intelligence (AI) and app hosting at the Edge will form virtualised intelligent agents, supplementing traditional PLC or DCS control. In some cases, companies will embed AI or analytics chips directly in the devices themselves.
This movement to the Edge, regardless of platform, solves many issues relating to IT infrastructure and cloud-based hosting or aggregation: network availability, latency, bandwidth, and security specifically.
Edge computing will become more necessary as analytics become more mainstream and cloud becomes more crowded. As we move into more advanced fields such as edge analytics and big data analytics in the cloud, data abstraction and cleansing will become ever more important. Managing local analytics at the deep Edge (eg: on a motor controller) and directly feeding the control system changes the dynamics of data. We often talk about the “four V’s” of data – velocity, volume, variety, and veracity.
In the deep edge example, we want to be able to store fast and voluminous data locally for a short time while we conduct local analytics. Longer term decision making will take place at a higher level in the data stack (perhaps in MOM or in the Cloud) and require reduced velocity and volume through consolidation. Similarly, the longer-term feedback loops will not require much speed or volume, but they must deliver the necessary feedback to the system as designed.
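The deep-edge pattern above can be sketched in a few lines: buffer fast samples locally for a short time, raise any low-latency alarm to the control layer immediately, and forward only a consolidated, reduced-velocity summary up the data stack. The class, window size and alarm threshold are all hypothetical, purely for illustration.

```python
from collections import deque
from statistics import mean

class EdgeAggregator:
    """Short-lived local store for fast sensor data, with a
    local check and a consolidated summary for upstream systems.
    Window size and alarm limit are illustrative only."""

    def __init__(self, window: int = 100, alarm_limit: float = 80.0):
        self.buffer = deque(maxlen=window)  # old samples roll off automatically
        self.alarm_limit = alarm_limit

    def ingest(self, sample: float) -> bool:
        """Store a sample; return True if a local, low-latency
        alarm should be fed straight to the control system."""
        self.buffer.append(sample)
        return sample > self.alarm_limit

    def summary(self) -> dict:
        """Reduced-volume aggregate to push to MOM or the Cloud."""
        return {
            "count": len(self.buffer),
            "mean": mean(self.buffer),
            "max": max(self.buffer),
        }
```

A motor-controller temperature feed, say, would call `ingest()` at full sample rate, while `summary()` might be pushed upstream once a minute, trading velocity and volume for the longer-term feedback loop.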
Aligning People, Processes and Technology
When mapping out an Edge strategy, manufacturers should avoid making it merely a discussion about technology; rather, it is about properly aligning people, processes and technology to create the right environment to drive outcomes.
By Matthew Littlefield, Principal Analyst, LNS Research.