
The Opus Real-time Database Server


What is the Opus Real-time Database Server?

The Opus database server is the task that runs at the core of our Phocus and Sitex SCADA and HMI software product range. It acts as a central point of contact, providing systems with all the functionality and control you would expect from a mature, well-developed SCADA and control software product. It maintains memory-resident, tag-oriented records that can be updated from external devices such as PLCs, data acquisition units or discrete controls, and distributes that data to the client tasks attached to the server. Beyond this basic functionality, the Opus server provides many other facilities, both directly and indirectly through various sub-servers. Direct services include user access control, alarm distribution management, client data update management, IO Server management and control interface management. Indirect services, provided by sub-server tasks, include the trend data historian and the alarm/event logger.

Initially the user creates a database to reflect the process I/O attached to an internal or external remote physical I/O device. Each database point is attached to an IO Server through a unique IO Server Id. The IO Server is responsible for interfacing to the physical I/O system, extracting the relevant data and writing it to the Opus server using the library function ProcessData(). This function provides many standard facilities, such as alarm generation and engineering unit conversion.
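
To make this flow concrete, here is a minimal C sketch of an IO Server scan loop. ProcessData() is the library call named above, but its signature as shown here, the IOSRV_ID value and the plc_read_register() helper are illustrative assumptions, not the documented API.

    /*
     * Hypothetical IO Server scan loop. ProcessData() is the library call
     * named above, but its signature here, the IOSRV_ID value and the
     * plc_read_register() helper are illustrative assumptions only.
     */
    #include <stdio.h>
    #include <unistd.h>

    #define IOSRV_ID 3  /* the unique IO Server Id configured for this task */

    /* Assumed signature: writes one tagged value into the Opus server;
       alarm generation and unit conversion happen inside the call. */
    extern int ProcessData(int iosrv_id, const char *tag, double value);
    extern int plc_read_register(int reg, double *value);  /* assumed helper */

    int main(void)
    {
        double value;

        for (;;) {                                 /* one pass per device scan */
            if (plc_read_register(100, &value) == 0) {
                if (ProcessData(IOSRV_ID, "FIC-101_PV", value) != 0)
                    fprintf(stderr, "ProcessData failed\n");
            }
            sleep(1);                              /* wait for the next scan */
        }
    }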

Once the data is received by the server it goes through a number of internal processes. First it is checked to see whether it has changed significantly enough to be historically recorded; if so, the data is written to the special historian queue. Next it is checked for alarms: if the alarm flag has been set or cleared, the data is passed on to the alarm manager, which constructs an alarm message and places it in its internal rotating alarm buffer.

Finally the change of data is passed to the update manager function, which checks whether any registered client is interested in that particular point and, if so, adds the data header portion of the record to that client's update list, to be read the next time the client contacts the server.

This has given us an overview of the basic functionality of the Opus server and some of its sub-services. These topics are covered in more detail in the following sections. The following graphic shows an overview of the Opus services and the structure of the sub-services and client utilities that surround it.


[Diagram: overview of the Opus server, its sub-services and the client utilities that surround it]

How is the server identified?

When the server starts up (as does any client application, for that matter) it reads the environment variable SRVRNAME (set to Demo by the /usr/bin/photon/phsdemo file in the demonstration installation). The server uses this name to identify itself uniquely to client utilities and other sub-services.

When starting a new database configuration, it is recommended that the server name be chosen to reflect the overall system purpose, application or manufacturing area for which the server will be used.

The server name can be up to twelve characters long, but the more concise the name, the quicker clients can locate the server when multiple distributed servers are in use. The first few characters of the name should also differ from those of other server names on a networked system, to improve search speed.

How is the server located by a client (Server List)?

The server list environment variable (SRVRLIST) identifies the logical network node number, or IP-based host name, on which a server can be found. When an Active/Standby system is configured, the server list includes the node numbers of both the active and standby server nodes. The server list is used by all client utilities to define the servers they are allowed to attach to. Changing its contents allows very flexible control over the access of remote LAN- or WAN-based workstations and other server nodes running client tasks.
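
As a rough illustration of how a client might use these variables, the following C sketch reads SRVRNAME and SRVRLIST and tries each listed node in turn. The variable names come from this document; the comma-separated list format and the attach step are assumptions for illustration.

    /*
     * Minimal sketch of a client resolving its server from the environment.
     * SRVRNAME and SRVRLIST are the documented variables; the comma-separated
     * list format and the attach step are assumptions.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        const char *name  = getenv("SRVRNAME");        /* e.g. "Demo" */
        const char *nodes = getenv("SRVRLIST");

        if (name == NULL || nodes == NULL) {
            fprintf(stderr, "SRVRNAME/SRVRLIST not set\n");
            return 1;
        }

        char *list = strdup(nodes);
        /* Try each node in the list until the named server is found. */
        for (char *node = strtok(list, ","); node; node = strtok(NULL, ",")) {
            printf("looking for server '%s' on node %s\n", name, node);
            /* ...attempt to attach here; break on success... */
        }
        free(list);
        return 0;
    }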

Does Opus support a backup configuration?

Yes. The backup system works in Active/Standby mode, the most reliable form of standby redundancy. All data from PLCs and field devices is received and recorded by both the active and the standby servers, but only the active server sends control signals out to the field. The active server sends regular notifications to the standby system; if a notification fails to arrive, the standby server becomes active.

What types of record does it support?

The Opus server currently provides four basic record types: Numeric, Logical, Text and Accumulator (Numeric Array); new record types will become available in time. With the release of the Phocus2 graphics interface it is now easier than ever to create your own record types and integrate them into both the database builder utility and the custom mimic display builder and viewer. Each database record is identified by its type and its tag name. The tag name may be up to 12 characters long; it must not contain spaces or commas, but it may include hyphens and underscores.

Numeric points are represented as double precision floating point numbers and include data conversion, multiple alarm limits, description and units fields. Logical points are represented as one, two or three bits, providing 2, 4 or 8 possible states; alarm levels and custom alarm messages can be defined, with up to 10 characters available for each state description. Text points can hold up to 300 characters of text, which might, for example, be read from an external device such as a PLC or a bar code reader. Finally, Accumulator points are stored as an array of 12 double precision floating point numbers and are intended for use in meter accumulation and other totalisation applications.
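
The following C sketch is one possible way of mapping the four record types onto a data structure. Only the sizes and limits come from the text above; the names and layout are illustrative guesses, not the server's actual internal format.

    /*
     * One possible mapping of the four record types onto a C structure.
     * Only the sizes and limits come from the text above; the names and
     * layout are illustrative, not the server's internal format.
     */
    typedef enum { REC_NUMERIC, REC_LOGICAL, REC_TEXT, REC_ACCUMULATOR } rec_type_t;

    typedef struct {
        char       tag[13];        /* up to 12 characters plus terminator */
        rec_type_t type;
        union {
            double   value;        /* Numeric: double precision           */
            unsigned state;        /* Logical: 1-3 bits, 2/4/8 states     */
            char     text[301];    /* Text: up to 300 characters          */
            double   accum[12];    /* Accumulator: array of 12 doubles    */
        } data;
    } opus_record_t;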

More details of the fields and the functionality of each point type can be found in the Database builder documentation.

How are the records structured within the server?

The Opus server is structured in Groups. A Group may represent a logical collection of data points or a physical one. There can be any number of Groups, and within each Group any number of individual database points, up to the maximum allowed by the selected Opus server product. As with individual records, each Group is given a twelve character tag name.

Server names must be unique within a network-distributed system. Group names must be unique within a server but may be repeated across servers. Individual points must have a unique name within their point type within a Group; they need not be unique across different Groups. The following picture gives a graphical interpretation of how a database server might be structured:


[Diagram: example structure of a database server, showing the server, its Groups and the points within each Group]

More about the various Opus sub-services

As already briefly mentioned, as well as providing standard memory-resident record maintenance, Opus provides a number of other sub-services that are an essential part of a complete SCADA, control or data acquisition product. We will now look at each of these facilities in a little more detail: first the services provided internally by the Opus server, then those provided by external tasks.

I/O server management

The Opus server is responsible for I/O Server management. Its responsibilities include starting I/O Server tasks and maintaining statistical data on the communications performed by each I/O Server. The IO Server list is set up and controlled by the IO Server configuration client utility, which allows IO Servers to be installed on the system, stopped, started and their command line options changed.

Each IO Server is assigned a unique IO Server identity, which acts as a connection index between an IO Server and the individual database records it is responsible for updating. As mentioned earlier, it is a good idea to know, right from the outset of your system design, exactly which devices you are going to interface with and exactly which points within each device you will be reading from or writing to.

Each server may be assigned as many as 32 IO Servers, allowing you to interface to up to 32 different types of IO device, though in practice this many is very unlikely. Each IO Server can be connected to several field devices, depending upon the type of physical interface available. For example, a data acquisition system might have several field units connected together over some kind of network, perhaps Ethernet or a multi-drop RS485 link, or you might interface with remote telemetry units over a dial-up modem. In each case only a single IO Server is required to interface to multiple remote devices.

During initialization an IO Server constructs a list of groups, or fetch blocks, based on the records within the database it is responsible for (i.e. those with a matching IO Server Id). Details of each block, comprising a description and a block identifier, are sent back to the server. After each scan of the I/O device the IO Server informs the server whether or not the scan was successful, using the block identifier as a reference. This allows the server to maintain comprehensive communications statistics and status information for the IO Server. If a scan fails, the total scan count is incremented but the valid scan count is not; if it succeeds, both are incremented. The percentage efficiency is calculated as the ratio of valid scans to total scans.
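
The statistics arithmetic is simple enough to sketch directly in C. The structure and function names below are illustrative; only the counting rules and the efficiency ratio come from the description above.

    /*
     * Scan statistics as described above. The structure and function
     * names are illustrative; only the counting rules and the
     * efficiency ratio come from the text.
     */
    typedef struct {
        unsigned long total_scans;
        unsigned long valid_scans;
    } scan_stats_t;

    void record_scan(scan_stats_t *s, int scan_ok)
    {
        s->total_scans++;              /* every scan is counted            */
        if (scan_ok)
            s->valid_scans++;          /* only successful scans count here */
    }

    double efficiency_pct(const scan_stats_t *s)
    {
        return s->total_scans ? 100.0 * s->valid_scans / s->total_scans : 0.0;
    }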

The Opus server also periodically checks the health of the IO Servers it has started. If an IO Server fails (perhaps due to a programming fault) this is indicated on the IO Server statistics display and produces a high priority system alarm. The IO Server statistics display program requests this data from the Opus server.

Controls management

A control is performed either by an operator, through a custom mimic or the Data Tables display, or by custom control applications such as those written in SBL or in "C". A control is issued in order to send a value, through the Server and IO Server, on to the physical I/O device.

The Opus server provides Control Management facilities in the form of a multiple entry queue mechanism responsible for buffering control operation requests between client applications and the I/O server tasks. Any I/O Server capable of performing control operations registers itself with the Opus server as it starts. When a client program wishes to perform a control, it sends a control message to the server, which in turn places it in the appropriate I/O server queue. The client application can choose to wait until the control is complete or can continue immediately.

A signal is sent to the I/O server, which subsequently reads the control entry from its private queue. If the I/O server does not perform the control within the required period (the time-out value can be set for each individual record), the server generates a control-failed alarm. This should only occur in extreme circumstances, i.e. when the I/O server terminates abnormally and thereby fails to acknowledge that a control has been completed. Client functions do not register with the server control management facility directly; they simply fill in the control request and send it to the server.

When the IO Server has completed the control request it sends a control complete confirmation message to the Opus server so that the control can be removed from the queue. The operation of controls depends on the structure of the IO Server: some can read multiple controls from the queue and execute them all in a single pass, while others read controls one at a time, perhaps only issuing controls between complete data scans. This very often depends on the type of physical device the IO Server is communicating with.
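
A client-side control request might look something like the following C sketch. SendControl() and the request structure are assumed names used to illustrate the queueing flow described above; they are not the real Opus control API.

    /*
     * Hypothetical client-side control request. SendControl() and the
     * request structure are assumed names illustrating the queueing
     * flow described above, not the real Opus control API.
     */
    typedef struct {
        char   tag[13];     /* target database record                 */
        double value;       /* value to send to the field device      */
        int    wait;        /* 1 = block until complete, 0 = continue */
    } control_req_t;

    extern int SendControl(const control_req_t *req);   /* assumed call */

    void open_valve(void)
    {
        control_req_t req = { "XV-101_CMD", 1.0, 1 };

        /* The server queues the request for the responsible IO Server;
           a control-failed alarm is raised if it is not actioned in time. */
        if (SendControl(&req) != 0) {
            /* react to the failure here */
        }
    }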

User Access management

When a client application with a user interface starts up, the first thing it does is connect to its default server (as defined by the SRVRNAME environment variable). Next it requests the current access level for the user logged in on its node. The user access level is then used to determine which facilities are available to the operator, or indeed whether the operator is allowed to use the requested application at all.

When a user logs on through a Phocus GI terminal main menu (on either the main system or a network workstation), the server supplies a list of possible user names. The user selects the appropriate name and, if necessary, enters a password. The user name is checked against the pre-defined user list and the password, where required, is verified. The server also checks whether the user is already logged on at another node; if so, the user is denied access until logged off from the original node.

When constructing the user list the server checks the node number of the originator of the request (in the case of a networked workstation) against the network access list defined for that user. This facility allows certain users to be restricted to selected workstations. Finally, the user access level is returned, and any subsequent requests made by a client task on that node are subject to that level. The user access levels currently supported are listed below (up to 254 levels can be supported if required, by using sub-levels within each of these levels); a short code sketch after the list shows one way they might be represented:

  • View Only - the default logged-out access level; no user interaction allowed.

  • Operator - general access, alarm acknowledgment and controls.

  • Supervisor - alarm disable and manual overwrite.

  • Manager - report generation and some configuration utilities.

  • Engineer - system configuration utilities.

  • Super User - Access Builder utility and access from any network node.
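
The following sketch shows these levels as a C enumeration. The numeric values are illustrative (the text says only that up to 254 levels are possible), and the gating function is a hypothetical example of how a client might use the level returned by the server.

    /*
     * The documented access levels as a C enumeration. The numeric
     * values are illustrative assumptions, and the gating function is
     * a hypothetical example of client-side use.
     */
    typedef enum {
        ACCESS_VIEW_ONLY  = 0,     /* default logged-out level    */
        ACCESS_OPERATOR   = 1,
        ACCESS_SUPERVISOR = 2,
        ACCESS_MANAGER    = 3,
        ACCESS_ENGINEER   = 4,
        ACCESS_SUPER_USER = 5      /* no node access restrictions */
    } access_level_t;

    int can_acknowledge_alarms(access_level_t level)
    {
        return level >= ACCESS_OPERATOR;
    }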

The general rules of access described above may change from application to application and may be altered by developers using the custom development software.

* Note: Super Users do not have any node access restrictions applied to them.

Finally, if configured to do so, the server will produce a special "User Log" event message each time a user logs on or off the system.

* Note: The user list is specific to a particular instance of an Opus server. Each Opus server active on the network has its own user access list. Any client task is governed by the access list of its DEFAULT Opus server node.

Update Manager

The Opus update management system provides a method of informing clients of data changes across large numbers of records. Certain classes of client task are designed to display the real-time values of a large number of records within the Opus server, for example the Data Tables display or the Mimic Viewer.

Consider a custom mimic display with 400 dynamic links to the real-time values of Opus records, updated twice a second: the Mimic Viewer would need to read each record for each update, resulting in 800 hits per second for a single display. Now imagine five workstations displaying similar mimics; that would mean some 4000 hits per second.

The update manager removes the need to poll each record. During initialization the Mimic Viewer, for example, reads each record it needs to display, then registers the list of records with the update manager. From then on the update manager checks each record as it is updated; if the record has changed, the newly changed data is added to the private buffer maintained for that Mimic Viewer client. When the Mimic Viewer is ready, it reads all of the accumulated changes in a single transaction. As you can imagine, this significantly reduces loading on the server and enables clients displaying quite large amounts of real-time data to work well even across slow communications links, such as the Internet.
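
The register-then-read-changes pattern might look like the following sketch. RegisterUpdates() and ReadChanges() are assumed names standing in for the real client library calls; the pattern, not the API, is the point.

    /*
     * The register-then-read-changes pattern. RegisterUpdates() and
     * ReadChanges() are assumed names, not the documented client API.
     */
    #include <stddef.h>

    typedef struct { char tag[13]; double value; } update_t;

    extern int RegisterUpdates(const char *tags[], size_t ntags); /* assumed */
    extern int ReadChanges(update_t *buf, size_t max);            /* assumed */

    void mimic_refresh(void)
    {
        static const char *tags[] = { "FIC-101_PV", "XV-101_CMD" };
        static int registered = 0;
        update_t changes[64];

        if (!registered) {
            RegisterUpdates(tags, 2);    /* one-off registration */
            registered = 1;
        }
        /* One transaction returns every change since the last read,
           instead of one read per dynamic link per refresh cycle. */
        int n = ReadChanges(changes, 64);
        for (int i = 0; i < n; i++) {
            /* redraw the dynamic link bound to changes[i].tag */
        }
    }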

Alarm and Event Manager

When a client or IO Server writes or updates a record in the database, the server checks the condition of the standard alarm flag. If this flag is set, the Alarm/Event manager is informed; it then generates an Alarm/Event message record and stores it in the rotating alarm buffer. The size of the alarm buffer varies depending upon the Opus server product selected.

When a client utility needs to be aware of changes in alarm conditions it registers with the alarm manager. The alarm manager maintains a private pointer into the rotating alarm buffer for each subscriber; in this way a slower client, for example the event logger that writes alarms and events to disk, does not interfere with or hold up a higher speed client, such as the Alarm display.

When a client first registers, the alarm manager sends as much of the alarm buffer as it can until the client catches up with the most recent alarms; after that the client periodically checks with the alarm manager and receives all the alarm updates since its last request.
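
One way to picture the rotating buffer with private per-subscriber pointers is the C sketch below. The sizes, structures and wrap-around handling are all illustrative assumptions about an internal mechanism the document does not specify in detail.

    /*
     * Rotating alarm buffer with a private read cursor per subscriber.
     * Sizes, structures and wrap-around handling are illustrative
     * assumptions, not the server's actual internals.
     */
    #define ALARM_BUF_SIZE  1024
    #define MAX_SUBSCRIBERS 32

    typedef struct { unsigned long seq; char msg[80]; } alarm_t;

    static alarm_t       ring[ALARM_BUF_SIZE];
    static unsigned long head;                    /* next sequence number */
    static unsigned long cursor[MAX_SUBSCRIBERS]; /* one per subscriber   */

    void push_alarm(const alarm_t *a)
    {
        ring[head % ALARM_BUF_SIZE] = *a;
        ring[head % ALARM_BUF_SIZE].seq = head;
        head++;
    }

    /* Each subscriber drains from its own cursor, so a slow reader never
       holds up a fast one. Returns 1 with an alarm, or 0 if caught up. */
    int next_alarm(int sub, alarm_t *out)
    {
        if (cursor[sub] == head)
            return 0;
        if (head - cursor[sub] > ALARM_BUF_SIZE)  /* buffer has wrapped    */
            cursor[sub] = head - ALARM_BUF_SIZE;  /* skip overwritten data */
        *out = ring[cursor[sub] % ALARM_BUF_SIZE];
        cursor[sub]++;
        return 1;
    }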

The Historian

Writing Historical Data

The Opus Historian is a simple but very robust historical data recording system. Our design goal was to create a historian that was extremely reliable, had good recording and retrieval speeds and, should the worst happen and a data file become corrupted, used a format simple enough to be repaired.

The Historian is responsible for collecting changing data provided by the Opus real-time server and storing it in chronological order on disk. A special queue between the Historian and the server can buffer up to 1000 records. The queue signals the Historian either every few seconds or when it hits a high water mark of 500 samples; the Historian then reads the queue in a quick burst and sorts the data before writing it to disk. This queue mechanism allows the Historian to cope with large bursts of data changes.

Data is stored relative to a record's database index. Information is stored in two files: one contains a list of indexes with details of the positions of the first and last records and any data inserted in between; the second simply contains the time-stamped data itself. The index file allows rapid access to the data stored in the data file and also allows the Historian to store data that arrives in non-chronological order. Files are created daily and only the files for the current day are kept permanently open.

The HistWriter sub-service has been optimized to take advantage of features of the QNX file system. It includes a progressive block allocation algorithm that gives more regularly updated data larger contiguous blocks of disk space. As data is received from the queue it is sorted on a point-by-point basis. The first few times a point is written, it is allocated a single record at a time. As more data is received, a larger block of records is allocated; each new record is added to this block until it becomes full, at which point a new, even larger block is allocated. This is repeated as the amount of recorded data increases. This method increases the rate at which data can be recorded and significantly increases the speed at which it can be recalled by the historian reader task.
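
The progressive allocation idea can be sketched as follows. The doubling growth factor and the block size cap are assumptions; the text says only that each point's blocks grow progressively larger as more data arrives.

    /*
     * Progressive block allocation in miniature: each time a point
     * fills its current block, the next block is larger. The doubling
     * factor and the size cap are assumptions.
     */
    #define MAX_BLOCK_RECORDS 4096

    typedef struct {
        long block_start;    /* file offset (in records) of current block */
        int  block_size;     /* records in the current block              */
        int  used;           /* records written into it so far            */
    } point_alloc_t;

    /* Returns the record slot for the next sample of this point. */
    long alloc_record(point_alloc_t *p, long *file_end)
    {
        if (p->used == p->block_size) {               /* block is full    */
            if (p->block_size == 0)
                p->block_size = 1;                    /* start small      */
            else if (p->block_size < MAX_BLOCK_RECORDS)
                p->block_size *= 2;                   /* grow next block  */
            p->block_start = *file_end;               /* contiguous space */
            *file_end += p->block_size;
            p->used = 0;
        }
        return p->block_start + p->used++;
    }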

The Historian only keeps the current day's files permanently open; data prior to this can be manipulated as the user sees fit. It may be compressed or copied to remote high capacity storage devices; the choice is yours. Please feel free to contact Nautsilus to discuss any specific requirements you may have.

The standard Historian is limited to a time resolution of one second. This is adequate for most applications but can sometimes be a restriction, so Opus also includes an optional High Speed Historian capable of recording data with millisecond accuracy. It too creates a data file for each day, but there is only a single file per day and it always assumes that data is received in chronological order.

Reading Historical Data

Historical data is read through the Opus server and is provided by the HistReader sub-server, which deals specifically with historical data requests. When the Opus server receives a history read request it passes it on to the HistReader task and then gets on with its normal duties; some time later, when the HistReader has read the required data from disk, it sends the data to the server, which in turn replies to the original client request. The HistReader service reads historical data from both the normal and the high-speed historian files. Each request is limited to 3600 samples, so applications retrieving larger amounts of data fetch it as a series of blocks, each continuing where the last left off, although this is transparent when using any of our standard applications such as the Trend or Report Viewer.
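
Fetching a long span of history therefore reduces to a simple block loop, as in this sketch. ReadHistory() is an assumed name for the client-side call; only the 3600-sample limit per request comes from the text.

    /*
     * Fetching a long span of history as consecutive blocks.
     * ReadHistory() is an assumed name; only the limit of 3600
     * samples per request comes from the text.
     */
    #include <time.h>

    typedef struct { time_t stamp; double value; } sample_t;

    /* Assumed: fills buf with up to max samples for tag between from and
       to, returning the number of samples read (0 when exhausted). */
    extern int ReadHistory(const char *tag, time_t from, time_t to,
                           sample_t *buf, int max);

    void fetch_span(const char *tag, time_t from, time_t to)
    {
        sample_t buf[3600];
        int n;

        while ((n = ReadHistory(tag, from, to, buf, 3600)) > 0) {
            /* ...process the n samples... */
            from = buf[n - 1].stamp + 1;    /* next block continues here */
        }
    }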

The Event Logger and Reader

When Opus is started it starts an optional Event Logger task. This task registers with the Alarm manager function of the Opus server to receive all alarms and events generated by the server or any clients attached to it. As it reads the alarms and events it buffers them and then writes them to individual daily event files. The records stored in these files are in a simple binary format.

Client applications can read the contents of event files through the Opus server, which passes each request on to the Event Reader sub-service task. Using much the same method as the trend data reader, the Opus server can continue servicing other clients while the Event Reader reads the alarm/event data and constructs a reply.

 




© Nautsilus, Ltd. 2007