I didn't think about the determinism issue at all. It's conceivable that sharing cycles on a CPU might affect the timing parameters of a control system's programs... I'd be especially concerned with those that do some form of internal timing, most often seen in really old control systems. This would become a huge concern on EMS state estimators and transmission applications, but would it affect operations down at the controller level? Controller events are timestamped independently from the workstation hardware, but I can't speak for how the HMI would handle fluctuating times between controllers and HMI. It would certainly have an effect on SOE for HMI alarms, which are based off the HMI clock.

I'm also wondering about certification of the hardware platform by the vendor, and looking forward to the day when I send them a set of virtual hardware instead of the whole box. Supposedly, the virtual hardware is the same no matter what physical hardware you put it on. It would certainly reduce the complexity of choosing a CPU, motherboard, and network card if Invensys/Foxboro (or any other vendor) could link their validation process to VMware's virtual hardware. Might even be another market segment for VMware?

If you're referring to network throughput, I'm looking to mitigate that by using multiple Gig/E interfaces (or even 10Gig) on the host system, trunked using VLAN tagging to compatible (i.e. 10/100, and beefy) switches. Those VLANs would then split out on the switch to individual ports. The concept is sound, as it preserves the 10/100 nature of most of these networks while also ensuring that the host system's NICs don't hobble communications of its guest VMs. There are several issues to contend with, though, namely appropriate failover in case of host NIC failure or switch failure.

It's remarkable how many of these efforts end with "more research is necessary", isn't it?
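For what it's worth, the host side of that trunking scheme can be sketched with Linux bonding plus VLAN sub-interfaces (on ESXi the equivalent lives in the vSwitch/port group config). The interface names, VLAN IDs, and the choice of active-backup failover mode below are illustrative assumptions, not a tested recipe:

```shell
# Two physical host NICs bonded for failover (active-backup), with
# link-state monitoring every 100 ms -- this is the NIC-failure mitigation.
ip link add bond0 type bond mode active-backup miimon 100
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up

# One tagged VLAN sub-interface per control network segment; each would
# back a separate guest network so guests stay on their own 10/100 segment.
ip link add link bond0 name bond0.10 type vlan id 10
ip link add link bond0 name bond0.20 type vlan id 20
ip link set bond0.10 up
ip link set bond0.20 up
```

The switch end would carry those tags on a trunk port and break each VLAN back out to individual access ports; failover behavior when a whole switch (rather than a NIC) fails is exactly the "more research is necessary" part.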
Mike Toecker
Digital Bond, Inc

On Thu, Jun 7, 2012 at 10:28 AM, Corey R Clingo <corey.clingo@xxxxxxxx> wrote:
> The proprietary driver/"certified" hardware issue was one reason I'm
> interested in VMs. One side benefit for users of a vendor supporting
> their software running in a VM guest is that they have to ditch the
> proprietary drivers, and they can't make a "you must buy certified
> hardware from us" argument :)
>
> As for robustness, it's speculation on my part. I've played a lot with
> workstation-class virtualization, and while it works very well, I've seen
> the occasional hiccup. I've read that server-class virtualization can have
> throughput issues, though for process control that's likely less of a
> problem than determinism. Dedicating or "passing through" hardware,
> particularly with the newer hardware-assist mechanisms for that, might go
> a long way towards eliminating both throughput and determinism problems.
>
> I personally wouldn't co-host guests at different trust levels, either. I
> too am looking at virtualizing some ancillary systems, to save space as
> much as anything, but we'll be doing a risk assessment and evaluating as
> part of that assessment the scenario where the hypervisor/host OS is
> breached from a guest. However, even in cases where I decide I don't want
> a certain guest to share a host with other guests, I'll likely be looking
> at virtualizing it anyway just to isolate myself somewhat from the
> hardware (the fun of reinstalling Windows is no longer there for me :),
> and to facilitate backups and disaster recovery.
>
> Corey Clingo
> BASF
>
> From: Michael Toecker <michael.toecker@xxxxxxxxx>
> To: foxboro@xxxxxxxxxxxxx
> Date: 06/06/2012 06:05 PM
> Subject: Re: [foxboro] how many cores?
> Sent by: foxboro-bounce@xxxxxxxxxxxxx
>
> Yeah, I'm dealing with this on a concept project right now. The biggest
> concern I've got is the drivers in use by many control systems.
> All of them have them: IA has the ones mentioned, Ovation has the OHI
> drivers, and GE Mark is a little better but has some legacy drivers and
> hardware in certain applications that don't play well in a virtual
> environment.
>
> I'm less concerned about robustness, and would love to hear your
> concerns. I've been working with the notion that appropriate assignment
> of RAM, CPU, and disk space (in other words, hard assignments and not
> pools) would mitigate this issue. Also, dedicated assignment of the
> network adapters would help, rather than working through a virtual
> interface.
>
> As far as co-hosting less trusted guests, I haven't been doing that at
> all. It's definitely a potential waste of resources, but CIP
> requirements have taken precedence in many environments I look at, and
> mixing inside/outside the ESP has been a compliance issue in the past.
> I've been able to offset the waste by looking into virtualizing
> ancillary systems that interact with control systems, such as
> environmental, cyber security, and things like PI/eDNA. Still not ideal.
>
> Mike Toecker
> Digital Bond, Inc
>
> _______________________________________________________________________
> This mailing list is neither sponsored nor endorsed by Invensys Process
> Systems (formerly The Foxboro Company). Use the info you obtain here at
> your own risks. Read http://www.thecassandraproject.org/disclaimer.html
>
> foxboro mailing list: //www.freelists.org/list/foxboro
> to subscribe: mailto:foxboro-request@xxxxxxxxxxxxx?subject=join
> to unsubscribe: mailto:foxboro-request@xxxxxxxxxxxxx?subject=leave