Introduction
Virtualization and Cloud Computing are becoming common terms, and global leaders in Information Technology (IT) advertise new solutions built on both.
Is Cloud Computing the same as Virtualization? Are these solutions proven technology or hype? Can a virtualized environment handle real-time data? What are the benefits of these solutions and, moreover, what are their challenges? This white paper provides insight into these matters and into how to phase these solutions in with existing and new systems.
A brief explanation of Virtualization technology
A look inside a typical server park today reveals racks with a multitude of servers, each with its own dedicated task: one server handles all Manufacturing Execution System (MES) data, another hosts the SCADA platform, and yet another runs the Enterprise Resource Planning (ERP) system. There will probably also be a group of servers for the IT infrastructure, such as a Windows Server Update Services (WSUS) server, a Domain Controller (DC), or an Active Directory (AD) server.
Each of these servers runs its own operating system (OS): for instance, the MES runs on Windows Server 2003, the ERP system on Windows Server 2008, and the SCADA solution on Linux. Logically, these applications with their own OSs cannot run on one physical server.
Virtualization allows several types of OS to run on one physical server while maintaining a dedicated platform per application. On a virtualized server no conventional OS is installed; instead, a hypervisor is installed. On this hypervisor, OS instances are installed on which the applications run, and multiple OS instances of different types can run side by side. As Figure 1 shows, virtualization consolidates a combination of logical servers into one physical server while maintaining the same functionality.
Figure 1 Server Consolidation
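As a minimal illustration of this consolidation idea, the sketch below models a hypervisor hosting the logical servers from the example above. The class names, memory sizes, and capacity check are hypothetical simplifications, not a real hypervisor API.

# Minimal sketch of server consolidation: several logical servers, each with
# its own guest OS, run as instances on one physical server's hypervisor.
# Names and memory sizes are hypothetical.

from dataclasses import dataclass

@dataclass
class OsInstance:
    name: str        # logical server, e.g. "MES"
    guest_os: str    # operating system of the instance
    ram_gb: int      # memory allocated to the instance

class Hypervisor:
    def __init__(self, physical_ram_gb):
        self.physical_ram_gb = physical_ram_gb
        self.instances = []

    def install(self, instance):
        used = sum(i.ram_gb for i in self.instances)
        if used + instance.ram_gb > self.physical_ram_gb:
            raise MemoryError(f"not enough RAM for {instance.name}")
        self.instances.append(instance)

# One physical server replaces three dedicated machines:
host = Hypervisor(physical_ram_gb=32)
host.install(OsInstance("MES", "Windows Server 2003", 8))
host.install(OsInstance("ERP", "Windows Server 2008", 8))
host.install(OsInstance("SCADA", "Linux", 8))
print([i.name for i in host.instances])  # ['MES', 'ERP', 'SCADA']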
Hypervisors
A hypervisor is the backbone of Virtualization: the OS instances run on it, each in its own isolated environment. There are two sorts of hypervisors: the native hypervisor and the hosted hypervisor.
Until now we have been discussing the native hypervisor, which is mostly applied when server consolidation is required. Note that an additional workstation is required to configure and manage the OS instances by means of the Hypervisor Management Console (HMC).
A more common type of hypervisor is the hosted hypervisor, which, in contrast to the native hypervisor, is installed on a host OS. This type is found mostly on workstations such as desktops and laptops, which already have their own OS installed, usually configured and equipped with applications for daily use. The hosted hypervisor can then be used to run applications on OS instances that differ from the host OS. A common use of this client virtualization software is to configure OS instances dedicated to development or test environments: in the unfortunate event of a corrupted environment, the host OS is not affected. This makes it ideally suited for test beds and for evaluating applications without disturbing the host OS environment.
Figure 2 Configuration Difference Native & Hosted Hypervisor
Benefits of Virtualization
Figure 3 Disaster Recovery
The availability of data within industrial automation is paramount. Besides virtualization there are other IT solutions that provide high availability, for instance servers equipped with redundant hardware; unfortunately these are mostly expensive and have a short service life. The industry demands sustainable solutions that further enhance system availability, and virtualization can contribute to this demand.
Commonly, servers are set up by installing a single OS on the server, and the applications, e.g. the SCADA application, are installed on this OS/server combination. If such a server goes down, for instance through a failure of the power supply or CPU, the application goes down as well. This leads to data loss and loss of functionality of the IT infrastructure, eventually leading to unanticipated costs.
Efficient disaster recovery ensures continuity and therefore contributes to process availability.
With Fault Tolerance Virtualization, availability is enhanced by running two virtualized servers simultaneously. One of these runs the applications in the same way a single virtualized system would; this is the active server. The difference is that the active server is connected to a second virtualized server, which runs a shadow image of the active server's virtualized instance. The shadow server thus holds an exact runtime copy of the active server.
When the system is stable, the primary communication to and from the server goes to the active system. For a SCADA system this implies that the active server communicates with the controllers in the field and the HMIs in the control room.
When a malfunction occurs, such as a failure of the power supply or CPU, the primary communication shifts from the active server to the shadow system. Since the shadow server is in sync with the active server, it can take over the operation of the active server seamlessly; with that, the shadow server becomes the active one. Figure 3 shows how this can be established.
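The takeover behaviour described above can be sketched as follows. This is a simplified illustration, not a vendor implementation: the server names, the sync step, and the health flag are hypothetical, and a real fault-tolerant hypervisor mirrors the full runtime state continuously.

# Illustrative sketch of fault-tolerant failover: a shadow server keeps an
# in-sync runtime copy of the active server and takes over when it fails.

class VirtualServer:
    def __init__(self, name):
        self.name = name
        self.state = {}       # runtime state, mirrored on the shadow
        self.healthy = True

class FaultTolerantPair:
    def __init__(self, active, shadow):
        self.active, self.shadow = active, shadow

    def handle(self, tag, value):
        # Primary communication goes to the active server while it is healthy.
        if not self.active.healthy:
            # Seamless takeover: the in-sync shadow becomes the active server.
            self.active, self.shadow = self.shadow, self.active
        self.active.state[tag] = value
        self.shadow.state = dict(self.active.state)  # keep the shadow in sync
        return self.active.name

pair = FaultTolerantPair(VirtualServer("scada-a"), VirtualServer("scada-b"))
print(pair.handle("FT-101.PV", 42.0))  # handled by scada-a
pair.active.healthy = False            # e.g. a power supply or CPU failure
print(pair.handle("FT-101.PV", 43.5))  # scada-b takes over with the same state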
Figure 4 Fictitious display of performance usage of an AD Server
Another significant benefit of Virtualization is performance, or resource, balancing. Some applications demand high server performance only during certain parts of the day; an Active Directory server, for example, requires its resources mainly in the morning and just after lunch. In the contemporary setup a high-performance server is dedicated to this demand, even though the peak occurs only for a short period per day; for the remainder of the day the server is minimally utilized.
With Virtualization, the HMC manages the performance of the physical server and allocates resources to the virtual machine that requires them at a given moment. Virtualization allows multiple applications to run on the same server, and by installing applications with different resource requirements together, the available hardware is used more efficiently. Because the HMC delegates resources according to the demand of the OS instances, two heavy applications and their OSs can make use of one physical server without limiting each other's performance.
Moreover, when several virtualized servers are connected, the HMC can even move an OS instance that requires more than the available resources to a neighboring server that is running under a lower performance demand.
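The balancing idea can be illustrated with a small sketch. The hosts, demand profiles, and rebalance rule below are hypothetical; a real HMC balances CPU, memory, and I/O continuously rather than per period.

# Minimal sketch of HMC-style resource balancing. Host sizes and demand
# profiles are fictitious, mirroring the AD/MES example in the text.

hosts = {"host1": 16, "host2": 16}   # usable RAM (GB) per physical server

# Demand in GB: the AD server peaks in the morning, the MES server in the
# evening, so their combined peak fits on one host.
demand = {
    "AD":  {"morning": 12, "evening": 2},
    "MES": {"morning": 2,  "evening": 12},
}

def load(host, placement, period):
    return sum(demand[vm][period] for vm, h in placement.items() if h == host)

def rebalance(placement, period):
    # Move the hungriest instance off any overloaded host to the
    # least-loaded neighboring server.
    for host, ram in hosts.items():
        while load(host, placement, period) > ram:
            vm = max((v for v, h in placement.items() if h == host),
                     key=lambda v: demand[v][period])
            target = min((h for h in hosts if h != host),
                         key=lambda h: load(h, placement, period))
            placement[vm] = target
    return placement

placement = {"AD": "host1", "MES": "host1"}
print(rebalance(placement, "morning"))  # 12 + 2 = 14 GB <= 16 GB: no move
demand["MES"]["morning"] = 12           # if both instances peaked at once...
print(rebalance(placement, "morning"))  # ...one instance moves to host2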
Challenges of Virtualization
Figure 5 Fictitious display of performance usage of an AD Server and an MES Server on a virtualized machine
Resource allocation is one of the many features within Virtualization. When configuring resource allocation on a hosted hypervisor, the total amount of RAM in the workstation must be taken into account, because a running OS instance claims its allocated resources. Take the example of Figure 6: once the OS instances are configured with a certain resource allocation, they will use those resources whenever they are activated. The sum of the allocated resources in Figure 6 is 8 GB and the memory for the main OS is 4 GB, so in this case the laptop needs at least 12 GB of RAM to run all the OS instances. Conversely, if the laptop has 8 GB of RAM installed, then besides the main OS it can only run OS instances that together require no more than 4 GB. Even if an instance is merely activated and no applications are executed, it will still claim its allocated memory; when the sum of the allocations exceeds the amount of physical RAM, the machine will perform sluggishly.
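The arithmetic of this example can be captured in a few lines. The instance names and allocations below are illustrative, chosen to match the 8 GB + 4 GB case above.

# RAM budget check for a hosted hypervisor, matching the example above:
# the host OS needs 4 GB and the configured instances together claim 8 GB.

HOST_OS_RAM_GB = 4      # memory reserved for the main (host) OS
PHYSICAL_RAM_GB = 12    # RAM installed in the workstation

instances = {           # allocated RAM per OS instance (hypothetical names)
    "dev_windows": 4,
    "test_linux": 2,
    "scada_eval": 2,
}

required = HOST_OS_RAM_GB + sum(instances.values())
if required <= PHYSICAL_RAM_GB:
    print(f"OK: {required} GB needed, {PHYSICAL_RAM_GB} GB installed")
else:
    print(f"Overcommitted by {required - PHYSICAL_RAM_GB} GB: "
          f"expect sluggish performance")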
When a hosted hypervisor runs an OS instance that performs network activity, this can conflict with the network activity of the main OS. Virtualization uses hardware emulation, in which the hypervisor pretends to be a hardware device such as a CPU or network card. With hardware emulation the hypervisor can, for instance, present a virtual network card: to the OS instance it appears as if it has its own hardware, whereas it is actually sharing the same device. The OS instance runs its executions as if it had full control over the hardware; each command issued to the hardware is intercepted by the hypervisor, which runs the command in emulation and returns the expected result to the OS instance.
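The intercept-and-emulate cycle can be sketched as follows. The virtual network card and the guest's commands are deliberately oversimplified illustrations of the pattern, not an actual device model.

# Sketch of hardware emulation: the hypervisor intercepts a guest's I/O
# command, executes it against a software device model, and returns the
# result the guest expects from real hardware.

class VirtualNic:
    # Software model of a network card, maintained by the hypervisor.
    def __init__(self):
        self.tx_queue = []

    def handle(self, command, payload=None):
        if command == "send":
            self.tx_queue.append(payload)
            return "OK"
        if command == "status":
            return f"{len(self.tx_queue)} frame(s) queued"
        return "UNSUPPORTED"

class Hypervisor:
    def __init__(self):
        self.devices = {"nic0": VirtualNic()}

    def intercept(self, device, command, payload=None):
        # Every guest I/O command traps into the hypervisor first;
        # the emulation runs in software, which costs performance.
        return self.devices[device].handle(command, payload)

# The guest OS instance behaves as if it drives real hardware:
hv = Hypervisor()
print(hv.intercept("nic0", "send", b"hello"))  # -> OK
print(hv.intercept("nic0", "status"))          # -> 1 frame(s) queued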
An important remark regarding this solution is that it is slower than direct control over the hardware, since everything runs through software. A more efficient solution for OS-instance-to-hardware communication is hardware-assisted virtualization, which uses the VT-x and AMD-V technology built into recent generations of Intel and AMD chipsets. This technology provides the same functionality as the hypervisor's hardware emulation; however, instead of running the emulation in software, the hypervisor intercepts the command from the OS instance and hands it to the hardware-assisted virtualization technology. Because the command of the OS instance is executed in hardware, the process becomes much faster. Hardware-assisted virtualization is now commonplace, since all current-generation processors from the major chip manufacturers support virtualization instruction sets.
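Whether a machine offers this hardware support can be checked before enabling it in a hypervisor. On Linux, for example, the CPU flags in /proc/cpuinfo advertise it ("vmx" for Intel VT-x, "svm" for AMD-V); the sketch below assumes a Linux host.

# Detect hardware-assisted virtualization on a Linux host by inspecting
# the CPU flags in /proc/cpuinfo: "vmx" = Intel VT-x, "svm" = AMD-V.

def virtualization_support(path="/proc/cpuinfo"):
    try:
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags = set(line.split(":", 1)[1].split())
                    if "vmx" in flags:
                        return "Intel VT-x"
                    if "svm" in flags:
                        return "AMD-V"
    except OSError:
        return "unknown (no /proc/cpuinfo; not a Linux host?)"
    return "no hardware-assisted virtualization flag found"

print(virtualization_support())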
Virtualization of the SCADA environment
Figure 6 Resource allocations Hosted Hypervisor
The most obvious reason to virtualize the SCADA environment is to reduce hardware, infrastructure, and facilitation costs. Besides this obvious reason there are other, perhaps equally significant, reasons for virtualizing the SCADA environment.
Virtualization enables better integration of SCADA software into the existing virtualized IT environment. In the past there was a clear line between Process Control departments and IT departments when it came to responsibilities and decision making. That line has faded, which makes this adaptation all the more important. The collaboration between these departments has reached such a level that Process Control solutions already comply with both Process Control and IT needs, for instance through the integration of cyber security.
All industrial processes are constantly subject to change. This change can be physical, for instance when the Piping & Instrumentation Diagram changes: adding pumps or valves, a change in process flow, and so on. However, the changes can also be at the application level, such as the integration of new or substitute software components in the SCADA environment: data historians, MES functionality, ERP integration, or logbook functionality.
Working in a real-time environment with high production demands and hazardous conditions makes seamless implementation a high priority. Test environments offer a reliable solution, but they are often very costly, since they incur hardware, software, infrastructure, maintenance, and facilitation costs similar to those of the live system. This is where virtualization becomes significantly useful: the test environment can be run virtualized, reducing costs, and the environment can be downsized once the test phase is completed.
When and how will the next step towards virtualization in SCADA be taken? We are experiencing an increasing demand for SCADA to adapt to the corporate IT environment, and with that an increasing demand for virtualization, since it is commonly used at the corporate level. At the moment it is mainly applied to less critical processes, where communication loss for a short period of time is allowed. Virtualization will continue to evolve, and the expectation is that it will reach a state in which the technology is applicable in the field of real-time automation.
Please be advised that this document solely provides a global overview of Virtualization. The most suitable solution must be determined on a per-case basis. The Yokogawa Global SCADA Center can be contacted for advice on these matters.