The proprietary driver / "certified hardware" issue is one reason I'm interested in VMs. One side benefit for users of a vendor supporting its software in a VM guest is that the vendor has to ditch the proprietary drivers, and can't make a "you must buy certified hardware from us" argument :)

As for robustness, it's speculation on my part. I've played a lot with workstation-class virtualization, and while it works very well, I've seen the occasional hiccup. I've read that server-class virtualization can have throughput issues, though for process control that's likely less of a problem than determinism. Dedicating or "passing through" hardware, particularly with the newer hardware-assist mechanisms for that, might go a long way toward eliminating both throughput and determinism problems. I personally wouldn't co-host guests at different trust levels, either.

I too am looking at virtualizing some ancillary systems, to save space as much as anything, but we'll be doing a risk assessment, and as part of it we'll evaluate the scenario where the hypervisor/host OS is breached from a guest. However, even where I decide a certain guest shouldn't share a host with other guests, I'll likely virtualize it anyway, both to isolate myself somewhat from the hardware (the fun of reinstalling Windows is no longer there for me :) and to facilitate backups and disaster recovery.

Corey Clingo
BASF

From: Michael Toecker <michael.toecker@xxxxxxxxx>
To: foxboro@xxxxxxxxxxxxx
Date: 06/06/2012 06:05 PM
Subject: Re: [foxboro] how many cores?
Sent by: foxboro-bounce@xxxxxxxxxxxxx

Yeah, I'm dealing with this on a concept project right now. The biggest concern I've got is the drivers in use by many control systems. All of them have them: IA has the ones mentioned, Ovation has the OHI drivers, and GE Mark is a little better but has some legacy drivers and hardware in certain applications that don't play well in a virtual environment.
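[For readers unfamiliar with the hypervisor side: the hardware dedication and passthrough ideas discussed in this thread can be expressed directly in a guest definition. Below is a minimal sketch for a libvirt/KVM host with VT-d (IOMMU) enabled. The guest name, core numbers, memory size, and PCI address are illustrative assumptions, not anything from this thread or from any vendor's certified configuration.]

```xml
<!-- Sketch of a libvirt/KVM guest with hard resource assignments and
     PCI passthrough. All names, sizes, and addresses are illustrative. -->
<domain type='kvm'>
  <name>control-guest</name>

  <!-- Fixed memory: current == max, so there is no ballooning "pool" -->
  <memory unit='GiB'>8</memory>
  <currentMemory unit='GiB'>8</currentMemory>
  <memoryBacking>
    <locked/>  <!-- keep guest RAM resident; never swapped by the host -->
  </memoryBacking>

  <!-- Static vCPUs pinned to dedicated host cores (here, cores 2 and 3) -->
  <vcpu placement='static'>2</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='3'/>
  </cputune>

  <devices>
    <!-- Pass a physical NIC through via VT-d/AMD-Vi, bypassing the
         virtual switch entirely (PCI address is a placeholder) -->
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </source>
    </hostdev>
  </devices>
</domain>
```

[Pinning vCPUs and locking memory addresses the determinism concern (no scheduler or swap contention with other guests), while the hostdev passthrough gives the guest the real NIC rather than a virtual interface.]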
I'm less concerned about robustness, and would love to hear your concerns. I've been working on the notion that appropriate assignment of RAM, CPU, and disk space (in other words, hard assignments, not pools) would mitigate this issue. Dedicated assignment of the network adapters, rather than working through a virtual interface, would help as well.

As far as co-hosting less trusted guests, I haven't been doing that at all. It's definitely a potential waste of resources, but CIP requirements have taken precedence in many environments I look at, and mixing inside/outside the ESP has been a compliance issue in the past. I've been able to offset the waste by looking into virtualizing ancillary systems that interact with control systems, such as environmental, cyber security, and things like Pi/eDNA. Still not ideal.

Mike Toecker
Digital Bond, Inc

_______________________________________________________________________
This mailing list is neither sponsored nor endorsed by Invensys Process Systems (formerly The Foxboro Company). Use the info you obtain here at your own risks. Read http://www.thecassandraproject.org/disclaimer.html
foxboro mailing list: //www.freelists.org/list/foxboro
to subscribe: mailto:foxboro-request@xxxxxxxxxxxxx?subject=join
to unsubscribe: mailto:foxboro-request@xxxxxxxxxxxxx?subject=leave