Re: OM Optimization

The simplest way to reduce CP OM loading caused by 3rd-party packages, e.g., an OPC-based historian, is to use AIM*OPC DA Server on the same machine as AIM*Historian. The two consumers of data will share the same OM connections. Thus, if your AIM*Historian and the OPC-based historian use the same tag, the CP will see a single use. If you have multiple AIM*Historians, you'd put AIM*OPC DA Server on each and have the OPC-based historian access the tag from the "right" server. If the 3rd-party application uses FoxAPI, e.g., PI's fxbais data collector or AspenTech's CIMIO for FoxAPI, the same sharing can be implemented by setting a flag in the AIM*API configuration file.

Re: For all: I read somewhere that FoxView updates at 1 second by default (not sure about Display Manager). Does this mean the FV opens an OM list in the source CP instructing that CP to scan the list every second and send an update if it changes by more than the change delta configured for the graphic? Otherwise, no data is sent. Or, does it mean the CP sends the data every second no matter what?

Data will be sent every second IF a change is noted. A heartbeat message is sent every 2 s if data updates do not occur because the data is "calm".

Re: For all: I have been assuming that the bottleneck in my system (23 nodes connected to the mesh via ATSs) would be the old CPs, as the nodebus is good for 1,600 pkts/s, the ATS for about 1,000 pkts/s, and the MESH should rock. Am I thinking correctly or not?

The ATS can handle any and all traffic on the Nodebus. Its rated capacity is lower than its actual performance; it won't be a bottleneck. Generally, communication is constrained by the CPs and DIs (FDGs). There are five major limits in the control stations:

1) OM Connections - the number of stations with which a CP can sustain a conversation.

2) OM Lists - the number of "sets" of tags that a CP can manage. The maximum OM list size is 255, but list sizes vary a lot.
The old I/A Series historian used lists of 50 tags, FoxView uses lists of 75 tags, CP peer-to-peer communications use lists of 150 tags, and AIM*Historian is often configured to use lists of 255 tags.

3) OM Scanner Table entries - The number of entries is a multiple of 20 (think columns), and the total table size (rows * columns) varies with the station type, but 600 is common. Note that a row can be used by only one list, so space in the OM Scanner Table is likely to be wasted, making the maximum more than somewhat theoretical.

4) Free memory in the control station - The OM lists require RAM to hold the tag names and other data.

5) CPU capacity of the controller - When heavily loaded, the CP can be slow to respond and can miss certain types of messages.

Regards,

Alex Johnson
Invensys Operations Management
10900 Equity Drive
Houston, TX 77041
+1 713 329 8472 (desk)
+1 713 329 1600 (operator)
+1 713 329 1944 (SSC Fax)
+1 713 329 1700 (Central Fax)
alex.johnson@xxxxxxxxxxxx (current)
alex.johnson@xxxxxxxxxxxxxxxx (good until September 2010)

-----Original Message-----
From: foxboro-bounce@xxxxxxxxxxxxx [mailto:foxboro-bounce@xxxxxxxxxxxxx] On Behalf Of dave.caldwell@xxxxxxxxxxxxxx
Sent: Wednesday, March 16, 2011 1:34 PM
To: foxboro@xxxxxxxxxxxxx
Subject: Re: [foxboro] Matrikon OPC Server for Foxboro I/A

Thanks to all for the replies! I am learning a lot. William Ricker's application most closely matches my own. The intention would be to remove all OM loading for historians and DM or FoxView from the I/A system (a situation of each CP to many AWs and WPs) and replace it with at least one collection server (maybe more) to serve data to many HMIs on a separate network (a situation of each CP to one AW collector). I have been told OM loading can in effect be reduced on the CPs, nodebus, and so on, provided limits are placed on the number of tags continuously scanned and the frequency with which they update.
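(As an aside, the OM Scanner Table arithmetic Alex describes above is easy to work through. The sketch below is only a rough illustration, assuming rows of 20 entries and a 600-entry table as he mentions; the names rows_used, wasted_entries, ROW_WIDTH, etc. are mine for illustration and do not come from any Foxboro software.)

```python
import math

# Assumed figures, taken from the discussion above (not from official docs):
# the scanner table is organized in rows of 20 entries, a row can hold
# entries from only one OM list, and a typical table holds 600 entries.
ROW_WIDTH = 20
TABLE_ENTRIES = 600
TOTAL_ROWS = TABLE_ENTRIES // ROW_WIDTH  # 30 rows in a typical station

def rows_used(list_size: int) -> int:
    """Rows consumed by one OM list; a partially filled row is still taken."""
    return math.ceil(list_size / ROW_WIDTH)

def wasted_entries(list_size: int) -> int:
    """Entries left unusable in the list's final, partially filled row."""
    return rows_used(list_size) * ROW_WIDTH - list_size

# The list sizes quoted in the thread:
for name, size in [("old historian", 50), ("FoxView", 75),
                   ("CP peer-to-peer", 150), ("AIM*Historian", 255)]:
    print(f"{name:16s} {size:3d} tags -> {rows_used(size):2d} rows, "
          f"{wasted_entries(size):2d} entries wasted")
```

Under these assumptions, a 75-tag FoxView list takes 4 rows and strands 5 entries, and a 255-tag AIM*Historian list takes 13 of the 30 rows — which is why the 600-entry maximum is "more than somewhat theoretical".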
This method of providing HMI still leaves some challenges: single points of failure, latency, and maintainability. That said, it appears to have been successfully achieved by William (well done)! Now for more questions:

* For William: From the description, all tags were read all the time. As Russ points out, the option exists to read the historian tags all the time and the HMI tags only as needed, to decrease OM loading. My hallucination is that "always-on" decreases latency and the broadcasts/multicasts needed to open/close OM lists dynamically. With "make-break" as Russ describes it, the number of points scanned continuously is decreased, but the price is paid in "initial call-up" time, broadcast/multicast traffic, and OM list operations. Do I have this right? Is this why you chose the "always-on" approach?

* For William: Could you describe how you chose to handle alarms? It appears that you collected enough info from the Foxboro side to recreate alarms on the WW side. How were alarms accomplished?

* For William: The issue of client write-back before OM list initialization is complete... Would this issue exist only when the AW collector / OPC server was first turned on, since you collect all tags all the time?

* For William: Would the fastest_rsr parameter in foxapi.cfg that Terry mentions have purposely been set to 2 (1 second) in your application to provide for the one-second scan?

* For all: I read somewhere that FoxView updates at 1 second by default (not sure about Display Manager). Does this mean the FV opens an OM list in the source CP instructing that CP to scan the list every second and send an update if it changes by more than the change delta configured for the graphic? Otherwise, no data is sent. Or, does it mean the CP sends the data every second no matter what?
* For all: I have been assuming that the bottleneck in my system (23 nodes connected to the mesh via ATSs) would be the old CPs as the nodebus is good for 1,600 pkts/s, the ATS for about 1,000 pkts/s, and the MESH should rock. Am I thinking correctly or not?

Thanks,

_______________________________________________________________________
This mailing list is neither sponsored nor endorsed by Invensys Process Systems (formerly The Foxboro Company). Use the info you obtain here at your own risks. Read http://www.thecassandraproject.org/disclaimer.html

foxboro mailing list: //www.freelists.org/list/foxboro
to subscribe: mailto:foxboro-request@xxxxxxxxxxxxx?subject=join
to unsubscribe: mailto:foxboro-request@xxxxxxxxxxxxx?subject=leave