A key process manufacturing problem yet to be solved is the management of knowledge: the know-how, know-who, and know-when. Just as we have been eliminating data silos by introducing common data repositories, and creating common information (semantic) services by adding structure to the data, we need a common rules repository. Today rules are distributed across Excel spreadsheets, workflows, documents, government regulations, and so on, which creates silos of rules, inconsistency, difficulty in ensuring consistent compliance, and many more problems.
Semantic models (RDF, RDFS, and OWL) are excellent at federating information from multiple sources, although there are competing approaches to information integration. Semantic models are also well suited to expressing rules (for example with SPIN), because the rules are then described as data within the same database, unlike other technologies where the rules have to be expressed as code.
Given the importance of consistent application of rules in the process manufacturing business, is this then the sweet spot for semantic technologies?
Is data transparent?
We have been struggling to manage knowledge about our process plants for many years. We tried to solve this problem starting in the 80's with sophisticated data collection applications and real-time databases. However, a data-centric solution alone only provides a view of the process plant as a very long list of measurement tags, reinforcing the definition of data as “being discrete, objective facts or observations, which are unorganized and unprocessed and therefore have no meaning or value because of lack of context and interpretation.”
Thus our view of the knowledge about the plant, its equipment, its operation, and its performance suggests a new definition of ‘transparency’, or the lack thereof. If we want to obtain knowledge we must use application programs within which our knowledge-extraction rules are encoded. Most prevalent is the ubiquitous spreadsheet, in which there are typically:
- Object names written into cells
- RTDB Tag names in hidden cells
- A fixed number of feed/product rows
- New spreadsheet for each unit/report
The consequence of this data-without-information approach is a ‘gaggle’ of spreadsheets: information is encoded in the tag names; knowledge is encoded in the Excel layout and formulae; action relies on a user running the application and using their experience to detect problems and deduce remedial actions; and there is uncertainty as to whether the rules are applied consistently throughout the organization, among many other problems.
Data + Model = Information
The problems of the data-centric approach have driven the process manufacturing industries to seek better information management, where information is defined as “organized or structured data, which has been processed in such a way that the information now has relevance for a specific purpose or context, and is therefore meaningful, valuable, useful and relevant.” Recognizing the deficiency of a data-only approach, the industry has expended much effort over the last 20 years on adding context to this data, usually in the form of a database schema that contextualizes the data within a model of the process business and plant.
One of the ways that structure is added to data is to use a relational data model. A cornerstone of the relational model is the use of referential integrity rules. An example of a referential integrity rule within the context of a process plant data model might be that a material movement must have one, and only one, source and destination. However there are limits to what rules can be expressed with referential integrity constraints alone.
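For illustration, a rule of this kind can be expressed directly in application code, which is exactly where such rules end up once they exceed what referential-integrity constraints can capture. This is a minimal sketch; the `Movement` record and its field names are hypothetical, not taken from any particular product schema:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Movement:
    movement_id: str
    source: Optional[str]       # tank or unit the material leaves
    destination: Optional[str]  # tank or unit the material enters

def validate_movement(m: Movement) -> List[str]:
    """Return rule violations; an empty list means the movement is valid."""
    errors = []
    if not m.source:
        errors.append(f"{m.movement_id}: a movement must have exactly one source")
    if not m.destination:
        errors.append(f"{m.movement_id}: a movement must have exactly one destination")
    return errors
```

A relational database would enforce this particular rule with NOT NULL foreign keys; the point is that anything beyond such constraints spills out of the schema into code like this.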
To obtain knowledge using this information-centric approach we must query the database. The advantage over the data approach is that we can ask the database complex questions. For example, to obtain a unit’s material balance:
- For selected unit
- Find all feed streams
- For each feed stream fetch desired criteria
This allows one report to serve all units throughout the enterprise, compared with a new spreadsheet for every unit. These reports are also more robust: if the plant changes in some way, the report will reflect those changes. This certainly tackles some of the inconsistency problems faced by a data-centric approach, but many problems remain. For example, it would be exceedingly difficult to express the rule that an operator’s training currency expires, say, 2 years after their last training. Thus we still need to resort to complex reports, application programs, Excel macros, etc., into which we encode our ‘rules’ about the information.
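The training-currency rule just mentioned is awkward to state in a schema or a query, which is precisely why such logic migrates into macros and applications. A sketch of the rule as code, assuming the two-year window from the text (interpreted here as 730 days for simplicity):

```python
from datetime import date, timedelta

# Assumption: the text's "2 years" is taken as 730 days.
TRAINING_VALIDITY = timedelta(days=730)

def training_current(last_training: date, today: date) -> bool:
    """The rule from the text: currency expires two years after the last training."""
    return today <= last_training + TRAINING_VALIDITY
```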
Information + Rules = Knowledge
As good as an information-centric approach may be, it still fails to solve many of the business problems that we face, problems that can only be solved by creating knowledge, defined as “know-how, know-who, and know-when; knowledge is action, not a description of action”. Some of the business challenges that can only be addressed by a rule-centric solution are shown below.
| Business Rule Challenge | Consequences | Rule-centric Solution Advantages |
| --- | --- | --- |
| Business rules are distributed throughout the business | Very difficult to know all of the business rules in place, whether they are duplicated, and whether they are consistently applied | Provide a common ‘rules repository’ for the entire organization, paralleling the concept of data warehousing |
| Many Excel spreadsheets containing knowledge of how to handle information | Data management is left to IT or DBAs; end users cannot modify the model or *their* data, so they resort to Excel on the path to hell | Use a rules server to consistently apply the semantic rules, which are then common to all applications; use Excel for reporting against data, information, and the inferred results of rules |
| Custom reports written against custom information models have encoded rules | Reporting languages such as SQL end up containing many business rules; rules are duplicated in similar but not identical reports | Report against the information server and the inferred results of applying the rules to the information |
| Manual work processes, some of which are not documented | Exercises such as ISO 9001 remain only documented procedures with no means of automating them | Use the common rules repository to define the rules; documentation of the rules can then be generated from the repository |
| Regulatory compliance rules exist only as documents | Difficult to assure compliance with regulations when it is left to individuals who must be familiar with the entire regulation | Translate regulations into rules deposited in the rules repository |
| Loss of skills with an aging workforce | Loss of knowledge in the form of rules (aka experience) for how to handle situations | Capture the experience as rules within the rules repository |
| Difficult to audit adherence to rules | Since the rules are not formalized it is difficult to ensure that procedures are followed; personnel might be trained on procedures, but if the system does not enforce them then management cannot be certain they are followed | Since most actions are recorded, it is possible to verify that the actions taken comply with the rules even when users are not forced to follow them in the form of a workflow |
| Complexity of data and information | Difficult for users to determine which rules should apply | The results of rules become inferred information that is available for reporting |
| Impossible end-user reporting | The holy grail, but never achieved; even with a good information model that provides context to the data, the ‘knowledge’ that should be the result of a report (‘poor yields’, ‘excessive material loss’, ‘pending equipment failure’) is impossible for end users to encode into their reports | Semantic information can be reported using ‘query by example’, far easier than any other reporting; the inferred results of rules are available for reporting using the same technique |
| Knowledge, defined as “know-how, know-who, and know-when”, requires rules about information, which requires contextualized data | Measurement tags disguise the model, forcing users to interact with abstract tags (10FI107.OP); general information models become too complex; customers want to support standards, but competing standards are backed by different constituencies (PPDMA/ProdML/WitsML/ISA95/MIMOSA/IEC-CIM etc.) | Although rules can be applied to any information model, a semantic information model is a better match to semantic rules |

To obtain knowledge from a data-centric approach we encoded many rules into applications such as Excel. Although the information-centric approach can encode certain types of rules into the database schema, such as the referential-integrity examples above, there are many rules about the business that cannot be expressed in this way. Throughout any business, rules are distributed across spreadsheets, reports, application programs, workflows, procedures, and so on. Below are examples of rules that can be found throughout process manufacturing.
Example operational compliance rules throughout the Process Manufacturing Model (PMM) fall into four types:

- Validations: checking that the information is consistent with known rules, such as that a movement must have a source and a destination
- Calculations: deriving additional data statements, such as that the power of a pump is the product of the flow and the pressure rise
- Deductions: deriving additional object statements, such as knowing that the measurement of something downstream is the same as the upstream measurement
- Invocations: launching an external process to ensure the correct action is taken when the outcome of a rule changes

Examples of these rule types across the business include:

- Validate that equipment in use has a valid HAZOP assessment
- Initiate the safety review work process after an incident; initiate a HAZOP review whenever major equipment is changed
- Initiate a review of encoded rules when the document containing those rules is revised

- Validate that equipment has undergone the recommended repair or upgrade
- Calculate Overall Equipment Effectiveness (OEE) from availability and planned versus actual performance
- Deduce the onset of increased operational risk from past observations and the planned use of equipment
- Initiate the maintenance or repair process
- Provide efficiency and energy-consumption calculations based on the data and the model
- Deduce links to MSDS sheets, maintenance records, maintenance procedures, and other documents

- Validate that critical equipment has a valid security policy, and that users with access to critical equipment have valid training
- Deduce connectivity between critical equipment through the LAN
- Initiate remedial action to update software and utilities when a risk is identified

- Validate that a user has the correct access privileges to perform an action
- Calculate the currency of a user's privileges to perform an action
- Deduce that the building containing critical equipment has secure access controls
- Initiate a security review when equipment is moved to a new location

- Validate the training status of individuals
- Calculate the time remaining on the currency of their training
- Deduce which assets and facilities an individual has access to based on their training
- Initiate a retraining program when retraining is deduced to be necessary

- Validate that the inventory of consumables matches the measured consumption
- Deduce the route of consumable materials (additives etc.) into the product stream from the topology so that the costs can be correctly calculated
- Calculate the quantities of utilities in the absence of complete measurements
- Deduce the route of utilities (water, electricity, fuel etc.) into the production facilities from the topology so that the costs can be correctly assigned

- Validate that no measured emissions exceed regulations
- Calculate emissions that are not directly measured, and calculate total emissions
- Deduce the flow of regulated material from the plant topology

- Validate that the correct procedures are being followed
- Deduce which business processes should apply in particular situations; for example, if an area is designated as secure, then all processes applied to its sub-areas must follow the same designated work processes
- Initiate a process to update work processes when deviations from the recommended work processes are detected

- Validate that valid exploration rights are associated with options
- Calculate the time remaining to take advantage of exploration rights
- Initiate a review of exploration rights prior to their expiration

- Validate that the field has active contracts
- Calculate the royalty payments based on the individual contracts
- Deduce the applicable contract rules
- Initiate contract reviews and payment processes

- Validate that each well has an active contract and is operating in accordance with its operating permits
- Calculate the actual flow from pressure and temperature (in the absence of a flow measurement), and calculate the variables required for regulatory reporting
- Infer the line-up between well and receiving station from the topology of the lines
- Initiate transmission of regulatory reports

- Validate that the nomination and routing information is complete: source and destination, quantity, and quality
- Validate that a new crude from the pipeline is not being run into the incorrect storage
- Calculate the overall assay of the inventory from the component assays
- Deduce the assay available at the crude unit from the line-up of the crude tanks to units
- Initiate rerouting of incoming crude to more appropriate storage, or a crude switchover on the crude unit based on the assay of the new crude tank

- Validate that the configured mode of operation matches the planned or scheduled mode of operation
- Calculate material balances, yields, qualities, and efficiencies
- Deduce the measurements of downstream elements from the operational configuration and knowledge of where the actual measurements are located; deduce the operational configuration and flow model of the plant from the material movements and battery-limit flows
- Initiate a workflow to switch modes of operation

- Check that material planned to flow into storage is compatible with the in-store material
- Calculate the actual contents of the storage
- Deduce the grade of the stored material from the existing stored material and inbound movements
- Invoke rescheduling of blends when an actual blend is found to be out of specification

- Validate that material is not planned for a line that would contaminate either the contents of the line or the planned material
- Calculate the material movement from either the source or the destination quantity measurements
- Invoke a custody-transfer dispute when a transfer falls outside the acceptable measurement deviation

- Validate that sufficient quantity is available for planned shipments
- Calculate the inventory remaining after current shipment commitments
- Deduce the grade of material from the mixed assay of the storage or stockpile
- Initiate pull-through of more inventory when commitments exceed current inventory and planned receipts

- Check that the vessel is compatible with the scheduled berth
- Calculate demurrage charges based on the agreed rules
- Deduce the stored material's destination from the vessel's berth
- Initiate a loading re-schedule in the event that a vessel is delayed

- Validate that a customer order has a valid contract upon which the transfer can be based

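To make the "calculate" rule type concrete, the pump-power example above reduces to a one-line, unit-consistent function. This is only a sketch; the choice of SI units is my assumption:

```python
def pump_power_watts(flow_m3_per_s: float, pressure_rise_pa: float) -> float:
    """Hydraulic power P = Q * dP: flow in m^3/s times pressure rise in Pa gives watts."""
    return flow_m3_per_s * pressure_rise_pa
```

Held in a common rules repository, a calculation like this is defined once and reused by every report and application, rather than being re-derived in each spreadsheet.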
These rules exhibit the same characteristics that drove us to resolve the original data-silo problem. A simple example: we want to calculate the corrected custody-transfer quantities for both operational and financial needs. We can also observe that rules span multiple business areas; for example, the currency of an operator’s privileges spans the training records, access control to the building housing the equipment, the maintenance records of the equipment, and more. Finally, we do not want the rules to be passive: any deviation from the rules should initiate, or at least recommend, the correct remedial action. Thus we want our rules and information to be combined to achieve active knowledge, as shown below.
Knowledge + Action = Results
Even if we have the perfect set of rules, they have no business value unless we act upon the know-how, know-when, and know-who. Thus it is important to close the business loop by taking action on the knowledge to produce the desired results; no action, then no results. This means that the knowledge must have a mechanism for invoking the remediation process.
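One way to picture this closing of the loop is a dispatch mechanism in which a rule violation invokes a registered remediation action rather than merely being reported. This is a minimal sketch; the rule name and handler are hypothetical, and a real system would start a workflow via a web-service call:

```python
from typing import Callable, Dict

# Registry mapping rule names to remediation actions.
actions: Dict[str, Callable[[dict], str]] = {}

def on_violation(rule_name: str):
    """Decorator registering a remediation handler for a named rule."""
    def register(handler):
        actions[rule_name] = handler
        return handler
    return register

@on_violation("emissions-limit-exceeded")
def schedule_review(event: dict) -> str:
    # Hypothetical handler: in practice this would invoke a workflow engine.
    return f"work order raised for {event['unit']}"

def dispatch(rule_name: str, event: dict) -> str:
    """Invoke the remediation action registered for the violated rule."""
    return actions[rule_name](event)
```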
Realizing a rule-centric solution
How is it possible to abstract the handling of rules away from the individual applications into which they are encoded? Our recommendation is that there should be a separate rules repository: a container that defines all of the rules. Although there are several candidate languages for describing rules, a favored choice would be to define the rules semantically using something like SPARQL Rules (SPIN). Since information is conveniently modeled semantically, it then makes sense to harmonize the technologies and use the same approach for the rules repository. The complete rule-centric architecture is shown below.
Data: Raw measurements collected from the instruments and data entry, stored in real-time databases and historians.
Technology: Real-time historians
Model: Context and structure added to the data to create information. It takes the form of a database schema in the case of a relational model, or an ontology in the case of a semantic model, plus the configuration that represents the plant: equipment, topology, etc.
Technology: Relational schema, object structure or semantic ontology
Information: The combination of data and model manifested as a database, relational, object, or semantic.
Technology: Relational, object or semantic data store
Rules: A repository of the rules. Traditional information system architectures fail to separate this out as a distinct element; instead the rules are distributed throughout the application systems. We propose that all rules should be held in a common repository, just like data and information. This repository should be able to handle all rule types: validations, calculations, deductions, and invocations. The best choice for organizing such a repository is semantic, as this allows both information and rules to share the same technology.
Technology: Semantic rules data store
Knowledge: The combination of information and rules, manifested as an inference engine capable of executing the rules. However, it is unrealistic to expect rules to be executed only within the inference engine, so rules within spreadsheets, workflows, applications, and calculation engines should be synchronized with the rules repository.
Technology: Rules inference engine together with synchronization interfaces
Action: The actions invoked by the knowledge manifested as a workflow engine capable of invoking external actions via web-service interfaces.
Technology: Workflow or temporal rules engine.
Visualize: A portal through which the data, information, knowledge, and actions can be presented, as well as through which the model and rules can be configured.
Technology: Portal, preferably one whose presentation is semantically deduced from the action, knowledge, information, and data
Control: Either part of the visualization portal or a separate application through which the users can execute control based on the actions.
Technology: Conventional control system interface through which users can control the plant.
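The distinctive element of this architecture, rules held as data alongside the information they act on, can be sketched with a toy triple store. The vocabulary and the hard-coded two-antecedent matcher below are illustrative only; a real deployment would express the rule in SPARQL/SPIN and let an inference engine do the matching:

```python
# Facts and the rule live in the same store, both as plain data (triples).
facts = {
    ("pump101", "feeds", "exchanger201"),
    ("exchanger201", "feeds", "column301"),
}

# The rule is itself data: if A feeds B and B feeds C, then A is upstream of C.
rule = {
    "if": [("?a", "feeds", "?b"), ("?b", "feeds", "?c")],
    "then": ("?a", "upstream-of", "?c"),
}

def apply_rule(facts, rule):
    """Naive matcher, specialized to the two-antecedent chain pattern above."""
    p1 = rule["if"][0][1]
    p2 = rule["if"][1][1]
    inferred = set()
    for (a, q1, b) in facts:
        for (b2, q2, c) in facts:
            if q1 == p1 and q2 == p2 and b == b2:
                inferred.add((a, rule["then"][1], c))
    return inferred
```

Because the rule is data, it can be stored, queried, audited, and revised in the same repository as the information, which is the crux of the argument for a semantic rules store.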
Is it feasible?
One of the key architectural elements is the management of rules as data, along with the closely related model and action elements. Outside of the process manufacturing industry, especially in finance and insurance, rules engines have been in use for some time, and there are several vendors, listed below. Of particular interest are TopQuadrant, the sponsor of SPARQL Rules (SPIN), which provides a standards-based way to define rules and constraints for Semantic Web data, and OntoRule, an EU project that brings together leading vendors of knowledge-based systems and top research institutions to develop the technology that will empower business policies in the enterprise of the future.
- Corticon: Decision table or rulesheet-centric business rules management system
- FICO Blaze Advisor: General purpose business rules management system with .Net, Java and COBOL deployment
- IDIOM: Decision-centric business rules management system
- IBM ILOG Rules: General purpose business rules management system with .Net, Java and COBOL deployment
- InRule: .Net-based business rules management system
- JBoss Drools/JBoss Enterprise BRMS: Open source business rules management system that is working on updating its decision tables
- Modellica: European business rules management system focused on the credit risk business, available in the US through GDS Link
- OntoRule: Leading vendors of knowledge-based systems and top research institutions joining forces to develop the technology that will empower business policies in the enterprise of the future
- OpenRules Decision Management System: Open source Excel-based business rules management system
- Pegasystems: A unified business rules and process management environment, now including the Chordiant decision management products
- Progress BRMS: Drools-based business rules management system acquired with Savvion
- Sparkling Logic: A “social-logic” platform for managing business rules
- TopQuadrant: TopBraid Suite™ leverages emerging technology to help customers connect silos of data, systems and infrastructure and to build flexible applications from linked data models. SPIN is a standards-based way to define rules and constraints for Semantic Web data
- Visual Rules: Java-based business rules management system from Bosch Innovations
- XpertRule: Develops advanced business rules management and expert system software that helps organizations:
  - Capture expertise and skills in risk assessment, advising, and performance improvement as well as in selling and supporting both products and services
  - Comply with regulations, policies, laws and legislation
  - Automate process orchestration for intelligent front-end user interface navigation, back-end process flow, and data interchange
- Zementis: A cloud-based execution platform for business rules and analytic models