OData2SPARQL is an OData proxy protocol converter for any SPARQL/RDF triple store. Comparing SPARQL with OData directly is somewhat misleading: SPARQL has its roots as a very powerful query language for RDF data and is not intended as a RESTful protocol, while OData has its roots as an abstract interface to any type of datastore, not as a specification of that datastore. Some have said “OData is the equivalent of ODBC for the Web”.
The data management strengths of SPARQL/RDF can be combined with the application development strengths of OData with a protocol proxy: OData2SPARQL, a Janus-point between the application development world and the semantic information world.

Figure 1: OData2SPARQL Proxy between Semantic data and Application consumers

What is OData?

OData is a standardized protocol for creating and consuming data APIs. OData builds on core protocols like HTTP and commonly accepted methodologies like REST. The result is a uniform way to expose full-featured data APIs (OData.org). Version 4.0 was standardized at OASIS and released in March 2014.

OData RESTful APIs are easy to consume. The OData metadata, a machine-readable description of the data model of the APIs, enables the creation of powerful generic client proxies and tools. Some have said “OData is the equivalent of ODBC for the Web” (OASIS Approves OData 4.0 Standards for an Open, Programmable Web, 2014). Thus a comprehensive ecosystem of applications and development tools has emerged, a few of which are listed below:

  • LINQPad: a tool for building OData queries interactively.
  • OpenUI5 is an open source JavaScript UI library, maintained by SAP and available under the Apache 2.0 license. OpenUI5 lets you build enterprise-ready web applications, responsive to all devices, running on almost any browser of your choice. It’s based on JavaScript, using jQuery as its foundation, and follows web standards. It eases your development with a client-side HTML5 rendering library including a rich set of controls, and supports data binding to different models including OData.
  • Power Query for Excel is a free Excel add-in that enhances the self-service Business Intelligence experience in Excel by simplifying data discovery, access and collaboration.
  • Tableau – an excellent client-side analytics tool – can now consume OData feeds.
  • Teiid allows you to connect to and consume sources (OData services, relational data, web services, files, etc.) and deploy a single source that is available as an OData service out-of-the-box.
  • Telerik not only provides native support for the OData protocol in its products, but also offers several applications and services which expose their data using the OData protocol.
  • TIBCO Spotfire is a visual data discovery tool which can connect to OData feeds.
  • SharePoint: any data you’ve got on SharePoint as of version 2010 can be manipulated via the OData protocol, which makes the SharePoint developer API considerably simpler.
  • XOData is a generic web-based OData Service visualization & exploration tool that will assist in rapid design, prototype, verification, testing and documentation of OData Services.

OData vs SPARQL/RDF

To compare SPARQL with OData is somewhat misleading. After all, SPARQL has its roots as a very powerful query language for RDF data, and is not intended as a RESTful protocol. Similarly OData has its roots as an abstract interface to any type of datastore, not as a specification of that datastore. Recently JSON-LD has emerged (Manu Sporny, 2014), providing a method of transporting Linked Data using JSON. However JSON-LD focuses on the serialization of linked data (RDF) as JSON rather than defining the protocol for a RESTful CRUD interface. Thus it is largely an alternative to, say, the Turtle or RDF/XML serialization formats.

OData and SPARQL/RDF: Contradictory or Complementary?

OData strengths:

  • Schema discovery
  • Provides a data source neutral web service interface, which means application components can be developed independently of the back-end datasource
  • Supports CRUD
  • Not limited to any particular physical data storage
  • Client tooling support
  • Easy to use from JavaScript
  • Growing set of OData productivity tools such as Excel, SharePoint, Tableau and BusinessObjects
  • Growing set of OData frameworks such as SAPUI5, OpenUI5, and KendoUI
  • Growing set of independent development tools such as LINQPad and XOData
  • Based on open (OASIS) standards after being initiated by Microsoft
  • Strong commercial support from Microsoft, IBM, and SAP
  • Not limited to traditional RDBMS applications: vendors of real-time data such as OSI are publishing their data as OData endpoints

SPARQL/RDF strengths:

  • Extremely flexible schema that can change over time
  • Vendor independent
  • Portability of data between triple stores
  • Federation over multiple, disparate data-sources is inherent in the intent of RDF/SPARQL
  • Increasingly standard format for publishing open data
  • Linked Open Data expanding
  • Identities globally defined
  • Inferencing allows deduction of additional facts not originally asserted, which can be queried via SPARQL
  • Based on open (W3C) standards

OData weaknesses:

  • Was perceived as a vendor (Microsoft) protocol
  • Built around the use of a static data-model (RDBMS, JPA, etc.)
  • No concept of federation of data-sources
  • Identities defined with respect to the server
  • Inferencing limited to sub-classes of objects

SPARQL/RDF weaknesses:

  • Application development frameworks aligned with RDF/SPARQL are limited
  • Difficult to access from de-facto standard BI tools such as Excel
  • Difficult to report using popular reporting tools

Table 1: OData and SPARQL/RDF: Contradictory or Complementary?

OData2SPARQL: OData complementing RDF/SPARQL

The data management strengths of SPARQL/RDF can be combined with the application development strengths of OData with a protocol proxy: OData2SPARQL, the Janus-point between the application development world and the semantic information world.

  • Brings together the strength of a ubiquitous RESTful interface standard (OData) with the flexibility and federation ability of RDF/SPARQL.
  • SPARQL/OData Interop: a proposed W3C interoperation proxy between OData and SPARQL (Kal Ahmed, 2013).
  • Opens up many popular user-interface development frameworks and tools such as OpenUI5.
  • Acts as a Janus-point between application development and data-sources.
  • User interface developers are not, and do not want to be, database developers. Therefore they want to use a standardized interface that abstracts away the database, even to the extent of what type of database: RDBMS, NoSQL, or RDF/SPARQL.
  • By providing an OData2SPARQL server, it opens up any SPARQL data-source to the C#/LINQ development world.
  • Opens up many productivity tools such as Excel/PowerQuery and SharePoint to be consumers of SPARQL data such as DBpedia, ChEMBL, ChEBI, BioPAX and any of the Linked Open Data endpoints!
  • Microsoft has been joined by IBM and SAP in using OData as their primary interface method, which means there will be many application developers familiar with OData as the means to communicate with a back-end data source.
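To make this concrete, the sketch below shows how an OData entity request might be translated by such a proxy into SPARQL. The URL pattern, prefix, and property names are illustrative assumptions, not the exact mapping used by OData2SPARQL:

# Illustrative OData request against a proxied Northwind model:
#   GET /odata2sparql/northwind/Customers('ALFKI')?$select=companyName,country
# One possible SPARQL translation executed against the triple store:
PREFIX model: <http://northwind.com/model/>
SELECT ?companyName ?country
WHERE {
  ?customer a model:Customer ;
            model:customerId "ALFKI" ;
            model:companyName ?companyName ;
            model:country ?country .
}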

Consuming RDF via OData: OData2SPARQL

All of the following tools are demonstrated accessing an RDF triple store via the OData2SPARQL protocol proxy.

Development Tools

XOData

A recent online OData development tool is XOData from PragmatiQa (PragmatiQa, n.d.). Unlike other OData tools, XOData renders very useful relationship diagrams. The Northwind RDF model published via the OData2SPARQL endpoint is shown below:

Figure 2: Browsing the EDM model published by OData2SPARQL using XOData

XOData also allows the construction of queries, as shown below:

Figure 3: Querying the OData2SPARQL endpoint using XOData

LINQPad

LINQPad (LINQPad, n.d.) is a free development tool for interactively querying databases using C#/LINQ. It supports Object, SQL, EntityFramework, WCF Data Services, and, most importantly for OData2SPARQL, OData services. Since LINQPad is centered on the Microsoft frameworks (WCF, WPF, etc.), this illustrates how the use of OData can bridge between the Java world of many semantic tools and the Microsoft world of corporate applications such as SharePoint and Excel.

LINQPad shows the contents of the EDM model as a tree. One can then select an entity within that tree, and then create a LINQ or Lambda query. The results of executing that query are then presented below in a grid.

Figure 4:  Browsing and Querying the OData2SPARQL Endpoints Using LINQPad

LINQPad and XOData are good for testing out queries against any datasource. They are therefore also demonstrated here against the DBpedia SPARQL endpoint, as shown below:

Figure 5: Browsing the DBpedia SPARQL endpoint using LINQPad via OData2SPARQL

Browsing Data

One of the primary motivations for the creation of OData2SPARQL is to allow access to Linked Open Data and other SPARQL endpoints from ubiquitous enterprise and desktop tools such as SharePoint and Excel.

Excel/PowerQuery

“Power Query is a free add-in for Excel 2010 and up that provide users an easy way to discover, combine and refine data all within the familiar Excel interface.” (Introduction to Microsoft Power Query for Excel, 2014)

PowerQuery allows a user to build their personal data-mart from external data, such as that published by OData2SPARQL. The user can fetch data from the datasource, add filters to that data, navigate through that data to other entities, and so on with PowerQuery recording the steps taken along the way. Once the data-mart is created it can be used within Excel as a PivotTable or a simple list within a sheet. PowerQuery caches this data, but since the steps to create the data have been recorded, it can be refreshed automatically by allowing PowerQuery to follow the same processing steps. This feature resolves the issue of concurrency in which the data-sources are continuously being updated with new data yet one cannot afford to always query the source of the data. These features are illustrated below using the Northwind.rdf endpoint published via OData2SPARQL:

Figure 6: Browsing the OData2SPARQL endpoint model with PowerQuery

Choosing an entity set allows one to start filtering and navigating through the data, as shown in the ‘Applied Steps’ frame on the right.

Note that the selected source shows all values as ‘List’, since each property can have zero, one, or more values, as is allowed for RDF DatatypeProperties.

Figure 7: Setting Up Initial Source of Data in PowerQuery

As we expand the data, such as the companyProperty, we see that the Applied Steps pane records the steps taken so that they can be repeated.

Figure 8: Expanding Details in PowerQuery

The above example expanded a DatatypeProperty collection. Alternatively we may navigate through a navigation property such as Customer_orders, the orders that are related to the selected customer:

Figure 9: Navigating through related data with PowerQuery

Once complete, the data is imported into Excel:

Figure 10: Importing data from OData2SPARQL with PowerQuery

Unlike conventional importing of data into Excel, the personal data-mart that was created in the process of selecting the data is still available.

Application Development Frameworks

There are a great number of superb application development frameworks that allow one to create rich, cross-platform (desktop, web, iOS, and Android) applications with a large selection of components such as grids, charts, and forms. Most of these are based on the MVC or MVVM pattern, both of which require systematic and complete (CRUD) access to the back-end data via a RESTful API. Now that OData has been adopted by OASIS, the number of companies offering explicit support for OData is increasing, ranging from Microsoft, IBM, and SAP to real-time database vendors such as OSI. Similarly there are a number of frameworks, one of which is SAPUI5 (UI Development Toolkit for HTML5 Developer Center, n.d.), which has an open source version, OpenUI5 (OpenUI5, n.d.).

SAPUI5

SAPUI5 is an impressive framework which makes MVC/MVVM application development easy via its Eclipse-based IDE. Given that OData2SPARQL publishes any SPARQL endpoint as an OData endpoint, this development environment is immediately available for semantic application development. The following illustrates a master-detail example against the Northwind.rdf SPARQL endpoint via OData2SPARQL.

Figure 11: SAPUI5 application using the OData2SPARQL endpoint

Yes, we could have cheated and used the Northwind OData endpoint directly, but the QNames of the Customer ID and Order Number reveal that the origin of the data is really RDF.

Handling Contradictions between OData and RDF/SPARQL

RDF is an extremely powerful way of expressing data, so a natural question to ask is what could be lost when that data is published via an OData service. The answer is very little! The following table lists the potential issues and their mitigation:

  • OData 3NF versus RDF 1NF
    Description: RDF inherently supports multiple values for a property, whereas OData up to V2 only supported scalar values.
    Mitigation: OData V3+ supports collections of property values, which are supported by the OData2SPARQL proxy server.
  • RDF language tagging
    Description: RDF supports language tagging of strings.
    Mitigation: OData supports complex types, which are used to map a language-tagged string to a complex type with separate language tag and string value.
  • DatatypeProperties versus object attributes
    Description: OWL DatatypeProperties are concepts independent of the class, bound to a class by a domain, range, or OWL restriction. Conversely, OData treats such properties as bound to the class.
    Mitigation: In OData2SPARQL the OWL DatatypeProperty is converted to an OData EntityType property for each of the DatatypeProperty’s domains.
  • Multiple inheritance
    Description: OData only supports single inheritance, via the OData baseType declaration within an EntityType definition.
  • Multiple domain properties
    Description: An OWL ObjectProperty is mapped to an OData Association. An Association can be between only one FromRole and one ToRole, and the Association must be unique within the namespace.
    Mitigation: OData Associations are created for each domain; OData2SPARQL names these associations {Domain}_{ObjectProperty}, ensuring uniqueness.
  • Cardinality
    Description: The capabilities of OData V3 allow all DatatypeProperties to be OData collections; however, the ontology might place further restrictions on cardinality.
    Mitigation: OData2SPARQL assumes cardinality will be managed by the back-end triple store. However, in future versions, if the cardinality is restricted to one or less, the EntityType property can be declared as a scalar rather than a collection.

Table 2: Contradictions between OData and RDF/SPARQL

Availability of OData2SPARQL

Two versions of OData2SPARQL are freely available as listed below:

  1. inova8.odata2sparql.v2: OData V2, based on the Olingo V2 library
  2. inova8.odata2sparql.v4: OData V4, based on the Olingo V4 library (in progress)

 


Answering complex queries with an easy-to-use graphical interface

The objectives of lens2odata are to provide a simple method of OData query construction, driven by the metadata provided by OData services:

  • Provides metamodel-driven OData query construction
    • Eliminates any configuration required to expose any OData service to lens2odata
  • Allows searches to be saved and rerun
    • Allows ease of use by casual users
  • Allows queries to be pinned to ‘Lens’ dashboard panels
    • Provides simple-to-use dashboard
  • Searches can be parameterized
    • Allows for easy configuration of queries
  • Compatible with odata2sparql, a service that exposes any triple store as an OData service
    • Provides a Query-Answering-over-Linked-Data (QALD) interface to any linked data.

Concept of Operation

Lens2odata consists of 3 primary pages with which users interact:

  1. Query: is the page in which users can
    1. add new OData services,
    2. compose queries,
    3. save those queries for reuse, and
    4. pin the queries as result fragments on a Lens
  2. Search: is the page in which users can
    1. select an existing query, and execute that query to explore the results
    2. from where they can navigate to Lens pages for specific entities or collections of entities
  3. Lens: are the pages, composed by users, which display fragments of details, optionally grouped into tabs, about a specific entity or collections of entities. Fragments can either be forms or tables. Other fragment layouts are being added.

Navigation between these pages is shown in the diagram below. Specifically, these navigation paths are:

  1. Toggle between Search and Query to explore how the results would appear to a casual user
  2. Navigate to a concept’s Lens from Query preview hyperlinks
  3. Navigate to a concept’s Lens from Search results’ hyperlinks

Figure 1: lens2odata Navigation

Quick Start

Login to lens2odata

  1. Navigate to the Url provided by your administrator for lens2odata, http://<server>/lens2odata
  2. Enter your user name and password
  3. Since no service has been previously set up, you will be prompted to enter the service display name and Url of that service. Check to use the default proxy if it is not a local service
  4. After ‘save’, as long as your service was validated, you will immediately enter the Query page with a new query initialized with the first collection found in the service
  5. Execute ‘Preview’ to populate the Results Preview with a few values from the collection:
  6. You are now ready to:
    1. View the query via Search
    2. Explore the results further via Lens
    3. Expand the query with more values and filters

Let’s explore Search first of all

  1. Click on at the top-left of the Query page to navigate to the search page:

Search is the page that general users will access. From this page they can select a predefined query, and execute that query to start their discovery journey.

  1. Click on to populate the results form:

 

This shows more details than the Query page because, in the absence of any specific definition about what details of the instances of collections should be displayed, search will display whatever it can find.

  1. Click on the Url “Categories(1)” to navigate to the Lens for that type of instance.

The Lens page is an information dashboard that can be constructed for any type of instance that is discovered. In this case, since no specific lens page has been setup for ‘Category’ types of instance, a default page has been used with a single tab.

  1. There is another Uri on this page “Products”. Navigating this link will take you to a similar lens page, but one for a collection of instances:

 

  1. This Lens for Products shows a list of instances of products. The first column is a Uri to the lens page for the individual product. Click one to navigate to its Lens:
  2. You are now discovering information by navigating through the data, having started by querying a collection. Next steps would be:
  • The original query was not particularly specific. Lens2odata allows that query to be further refined by specifying which attributes should be displayed, adding navigation properties to other entities, adding filters to the query to restrict results, or even parameterizing the query so that a user can execute the saved query and modify the results just by supplying parameter values.
  • The Lens dashboard page can be further refined by adding more fragments of information on a tab, or adding more tabs to the lens page to logically group the information.

 

Availability of Lens2OData

A version of Lens2OData is available on GitHub at com.inova8.lens2odata.


SKOS, the Simple Knowledge Organization System, offers an easy-to-understand schema for vocabularies and taxonomies. However, modeling precision is lost when skos:semanticRelation predicates are introduced.

Combining SKOS with RDFS/OWL allows the precision of owl:ObjectProperty to be combined with the flexibility of SKOS. However, clarity is then lost as the number of core concepts (aka owl:Class) grows.
Many models are not just documenting the ‘state’ of an entity. Instead they are often tracking the actions performed on entities by agents at locations. Thus aligning the core concepts to the Activity, Entity, Agent, and Location classes of the PROV ontology provides a generic upper-ontology within which to organize the model details.

Vehicle Manufacturing Example

This example captures information about vehicle manufacturing. The following concepts are involved:

  1. Manufacturers: the manufacturer of models of cars in various production lines sited at plants
  2. Models: the models that the manufacturer produces
  3. ProductionLines: the production lines set up to produce models of vehicles on behalf of a manufacturer
  4. Plants: the plants that house the production lines

In addition there are different ‘styles’ of manufacturing that occur for various models and various sites:

  1. Manufacturing: the use of a ProductionLine for a particular Model

SKOS Modeling

If we follow a pure SKOS model, we proceed by first creating a VehicleManufacturingScheme skos:ConceptScheme:

s:VehicleManufacturingScheme
  rdf:type skos:ConceptScheme .

Then we create the top concepts Manufacturer, Model, Plant, and Production as follows:

 

s:Manufacturer
  rdf:type owl:Class ;
  rdfs:subClassOf skos:Concept ;
  skos:topConceptOf s:VehicleManufacturingScheme .

s:Model
  rdf:type owl:Class ;
  rdfs:subClassOf skos:Concept ;
  skos:topConceptOf s:VehicleManufacturingScheme .

These top-level concepts are created as instances of owl:Class and as subclasses of skos:Concept. This is the pattern recommended in (Bechhofer, et al.).

Finally we can create skos:broader concepts as follows:

s:Ford
  rdf:type s:Manufacturer ;
  skos:broader s:Manufacturer ;
  skos:inScheme s:VehicleManufacturingScheme .

s:Fusion
  rdf:type s:Model ;
  skos:broader s:Model ;
  skos:inScheme s:VehicleManufacturingScheme .

The resultant SKOS taxonomy of the VehicleManufacturingScheme skos:ConceptScheme then appears as follows:

Figure 1: SKOS taxonomy

Why?

By starting with a pure SKOS model we provide access to the underlying concepts in a more accessible style for the less proficient user, as illustrated by the SKOS taxonomy above. Yet we have not sacrificed the ontological precision of owl:Classes.

Thus we can ask questions about all concepts:

SELECT * WHERE
{
   ?myConcepts rdfs:subClassOf+ skos:Concept .
}

Or we can list everything that has one of these concepts, say s:Model, as a broader concept:

SELECT * WHERE
{
   ?myBroaderConcepts skos:broader s:Model .
}

SKOS+OWL Modeling

Although skos:semanticRelation allows one to link concepts together, this predicate is often too broad when trying to create an ontology that documents specific relations between specific types of concept.

In our VehicleManufacturingScheme we might want to know the following:

  1. isManufacturerBy: which manufacturer manufactures a particular model
  2. operatedBy: which manufacturer operates a particular production facility
  3. performedAt: which plant is the location of a production facility
  4. wasManufacturedAt: which production facility was used to manufacture a particular model

Figure 2: SKOS+OWL model of Relations

 

These predicates can be defined using RDFS as follows:

so:isManufacturerBy
  rdf:type owl:ObjectProperty ;
  rdfs:domain s:Model ;
  rdfs:range s:Manufacturer ;
  rdfs:subPropertyOf skos:semanticRelation .

so:operatedBy
  rdf:type owl:ObjectProperty ;
  rdfs:domain s:Production ;
  rdfs:range s:Manufacturer ;
  rdfs:subPropertyOf skos:semanticRelation .

Note that the definition of Model, Manufacturer, etc. as subclasses of skos:Concept allows us to define the domain and range precisely. Instances can then be related as follows:

s:Fusion
  so:isManufacturerBy s:Ford ;
  so:wasManufacturedAt s:Halewood-SmallVehicle .

s:Dagenham-Truck
  so:operatedBy s:Ford ;
  so:performedAt s:Dagenham .

Thus we have combined the flexibility of SKOS with the greater modeling precision of RDFS/OWL.

Why?

By building upon the SKOS model, one can ask an expansive question such as what concepts are semantically related to, say, the concept s:Fusion with a simple query:

SELECT * WHERE
{
   s:Fusion ?p ?y .
   ?p rdfs:subPropertyOf* skos:semanticRelation
}

Yet with the same model we can ask a specific question about a relationship of a specific instance:

SELECT * WHERE
{
   s:Camry so:isManufacturerBy ?o .
}

SKOS+OWL+PROV Modeling

One of the attractions of SKOS is that a taxonomy can grow organically. One of the problems of SKOS is that a taxonomy can grow organically!

As the taxonomy grows it can be useful to add another layer of structure beyond a catalog of concepts. Many models are not just documenting the ‘state’ of an entity. Instead they are often tracking the actions performed on entities by agents at locations. Thus aligning the core concepts to the Activity, Entity, Agent, and Location classes of the PROV ontology (Lebo, et al.) provides a generic upper-ontology within which to organize the model details.

Figure 3: PROV model

Thus our VehicleManufacturingScheme has each core PROV concept:

  1. Manufacturers: the Agents who manufacture models, and operate plants
  2. Models: the Entities
  3. ProductionLines: the Activities that produce Models on behalf of Manufacturers.
  4. Plants: the Locations at which Activities take place, and at which Agents and Entities are located.

Figure 4: PROV Model

 

s:Production
  rdfs:subClassOf prov:Activity .

s:Model
  rdfs:subClassOf prov:Entity .

s:Manufacturer
  rdfs:subClassOf prov:Organization .

s:Plant
  rdfs:subClassOf prov:Location .

Similarly we can cast our predicates into the same PROV model as follows:

so:isManufacturerBy
  rdfs:subPropertyOf prov:wasAttributedTo .

so:operatedBy
  rdfs:subPropertyOf prov:wasAssociatedWith .

so:performedAt
  rdfs:subPropertyOf prov:atLocation .

so:wasManufacturedAt
  rdfs:subPropertyOf prov:wasGeneratedBy .

Why?

The PROV model is closer to the requirements of most enterprise models, which are trying to ‘model the business’, than a simple E-R model. The latter concentrates on capturing the attributes of an entity that record the current state of that entity. Often those attributes focus on documenting the process by which the entity gained its current state:

  • The agent that created the entity
  • The activity used to create the entity
  • The location when things were performed
  • The date of the activity, etc.

Superimposing the PROV model formalizes this, and thus provides a structure within which a more casual user can navigate, rather than facing a sea of entities.

By building upon the PROV model, one can ask an expansive question such as what entities behave as Agents and in which entities are they involved:

SELECT * WHERE
{
   ?organization a ?Agent .
   ?Agent rdfs:subClassOf* prov:Agent .
   ?entity ?predicate ?organization
}

SKOS+OWL+PROV-Qualified Modeling

Within the structure of PROV, predicates define the relationships between Activities, Entities, Agents, and Locations. However it is sometimes necessary to qualify these relationships. For example, the so:wasManufacturedAt predicate defines that a s:Production facility was used to manufacture a s:Model. When? How was it used? Why?

To extend the model, PROV adds the concept of a qualified influence, which allows the relationship to be further defined.

Figure 5: Qualified PROV for some predicates

We do this first of all by creating sopq:Manufacturing:

sopq:Manufacturing
  rdf:type owl:Class ;
  rdfs:subClassOf skos:Concept ;
  rdfs:subClassOf prov:Generation ;
  skos:topConceptOf s:VehicleManufacturingScheme .

Note that this is a subclass (rdfs:subClassOf) of prov:Generation, the reification of the predicate prov:wasGeneratedBy.

We then add two predicates, one (sopq:wasManufacturedUsing) from the prov:Entity to the prov:Generation, and one (sopq:production) from the prov:Generation to the prov:Activity as follows:

sopq:wasManufacturedUsing
  rdf:type owl:ObjectProperty ;
  rdfs:domain s:Model ;
  rdfs:range sopq:Manufacturing ;
  rdfs:subPropertyOf skos:semanticRelation ;
  rdfs:subPropertyOf prov:qualifiedGeneration .

sopq:production
  rdf:type owl:ObjectProperty ;
  rdfs:subPropertyOf skos:semanticRelation ;
  rdfs:subPropertyOf prov:activity .

Finally we can create a Manufacturing qualified generation concept as follows:

sopq:L-450H_at_Swindon-Hybrid
  rdf:type sopq:Manufacturing ;
  sopq:production s:Swindon-Hybrid ;
  skos:broader sopq:Manufacturing .

s:L-450H
  sopq:wasManufacturedUsing sopq:L-450H_at_Swindon-Hybrid .

In the figure below we can see that these qualified actions simply extend the SKOS taxonomy:

Figure 6: Taxonomy extended with Qualified Actions

Why?

Using qualified actions provides a systematic, rather than ad hoc, way to add more precision to a model.
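For example, a query such as the following sketch, which reuses the sopq: predicates defined above, returns the production facility used to manufacture each model via its qualifying Manufacturing node:

SELECT * WHERE
{
   ?model sopq:wasManufacturedUsing ?manufacturing .
   ?manufacturing sopq:production ?production .
}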

Remaining Issues

  1. The PROV structure does not manifest itself within the taxonomy. Should Activity, Entity, Agent, and Location therefore be ConceptSchemes?

Model

The model files used in this example are included here: model2rdf

  • skos.ttl
  • skos+owl.ttl
  • skos+owl+prov.ttl
  • skos+owl+provqualified.ttl

References

Bechhofer, Sean and Miles, Alistair. Using OWL and SKOS. [Online] W3C. https://www.w3.org/2006/07/SWD/SKOS/skos-and-owl/master.html.

Lebo, Timothy, Sahoo, Satya and McGuinness, Deborah. PROV-O: The PROV Ontology. [Online] W3C. https://www.w3.org/TR/prov-o/.


A large community exists “that sees Linked Data, let alone the full Semantic Web, as an unnecessarily complicated technology” (Phil Archer). However, early adopters are always known for zealotry rather than pragmatism: mainframe-to-mini, mini-to-PC, PC-to-web and many more are examples where the zealots often threw the baby out with the bathwater.

Rather than abandoning the semantic web perhaps we should take a pragmatic view and identify its strengths, together with some soul-searching to be honest about its weaknesses:

SPARQL is unlikely ever to be end-user friendly:

However, one can say exactly the same of SQL (which, believe it or not, used to be called User-Friendly Interface, or UFI for short). SQL is now hidden behind much more developer-friendly facades that deliver user-friendly experiences, such as Hibernate/JPA, C#+LINQ, OData, and many more. Even so, SQL remains the language of choice for manipulating and querying an RDBMS.

So we still need SPARQL for manipulating semantic data, but we need developer- and user-friendly facades that make the semantic information more accessible. Some initiatives are JPA for RDF, LINQtoRDF, and OData-SPARQL, but the effort is fragmented, with few standards initiatives.

RDF/OWL is unlikely to supplant Entity-Relational, JSON-Object databases:

However we have all learnt the lesson that the data model is often at the core of any application. Furthermore any changes to that data model can be costly, especially in the later stages of the development lifecycle. Who has not created an object-attribute data model deployed in an RDBMS to retain design flexibility without the need to change the underlying database schema? Are we not reinventing the semantic data model?

So the principles and flexibility of semantic (RDF, RDFS, and OWL) data modeling could be adopted as a data modeling paradigm, representing the evolution that started with Entity-Relationship (ER) modeling, went through Object-Role Modeling (ORM or NIAM), to Semantic Data Modeling (SDM). Used in combination with dynamic REST/JSON, such as OData, one could truly respond to the Dynamic Business Application Imperative (Forrester).

Triple-stores are unlikely to supplant Big-Data:

However, of volume, velocity, and variety, many big data solutions are weaker when handling variety, especially when the variety of data is changing over time as it does with evolving user needs. Most of the time the variety at the data source will be solved by Extract-Transform-Load (ETL) into a big-data store. Is this any different than data warehousing which has matured over the last 20 years, except for the use of different data storage technology? Like any goods in transit, data gets damaged and contaminated when it is moved. It is far better to use the data in-situ if at all possible, but this has been the unachievable Holy Grail of data integration for many years.

So the normalization of any and all data into triples might not be the best way to store data, but it can be the way to mediate variable information from a variety of data-sources: Ontology Based Data Access (OBDA). The ontology acts as a semantic layer between the user and the data. The semantics of the ontology are used to enrich the information in the sources and/or cope with incomplete information in them.

In summary …

Semantic technologies will see more success if they pursue the more pragmatic approach of solving those problems that are not satisfactorily solved elsewhere, such as querying and reasoning, dynamic data models, and data-source mediation, instead of re-solving the already solved.

References

Phil Archer http://semanticweb.com/tag/phil-archer

LINQtoRDF: https://code.google.com/p/linqtordf/

JPA/RDF: https://github.com/mhgrove/Empire

ODATA/SPARQL: https://github.com/brightstardb/odata-sparql

Dynamic Business Imperative: http://www.forrester.com/The+Dynamic+Business+Applications+Imperative/fulltext/-/E-RES41397?objectid=RES41397

Ontology Based Data Access: http://obda.inf.unibz.it/


Enterprises create data cathedrals with an enforced dogma to control data purity, causing much information to be outside their walls, where informal information bazaars thrive. These information bazaars have suspect quality and uncertain provenance, yet are responsive to users’ needs. Metcalfe’s law suggests that the benefit gained from integrated information grows geometrically1 with the number of data communities that are integrated. How can we balance the dogma of the data cathedrals and the spontaneity of the information bazaar?

Enterprises’ database cathedrals reflect corporate dogma. Nothing gets changed without approval from on high. Change is very slow. New databases get integrated only after a considerable time, and then only if the new data is 100% squeaky clean. So there are a lot of databases that are entirely outside the database cathedrals’ walls. Badly behaved sources of data might even be excommunicated.

Where does the other data go? It is not as though this other data does not exist, although many would like to pretend it to be so. Instead they are all in the information bazaar. Anyone with any information can set up their own information stall, and store their own data in Excel, Access, anywhere they want. They only specialize in their own data for their own use. This data is pretty good because that is all they need for their business. They share well with others but on a barter basis. In fact the information bazaar is chaotic, but lively, always changing to users’ demands, and a fun place to be. 

Why do we have the conflict between the database cathedral and the information bazaars?

The data cathedral offers security, quality, and good provenance. It provides the system of record for users who then should have complete confidence in their decision making. It does this using accurate relational models capturing enterprise information. But a relational model is designed by the cathedral hierarchy based on the closed model: only pure data can be entered into the database; impure data can lead to excommunication. 

The information bazaar has few rules of entry. As demonstrated by the web, it allows anyone to say anything about anything (AAA). Even with this deficiency we will regularly search the web to help us with our decision making, not exploring sources that are suspect, and filtering information that we feel lacks accuracy until we end up with information to support our decision.

Can we resolve these conflicting objectives?

Can we expect the cathedral hierarchy to relax its admittance criteria to let in as much of the information bazaar as possible? Somewhat, but we cannot expect miracles.

Can we expect the information bazaar to become more sober and responsible so that it can securely provide information with guaranteed quality and provenance? Somewhat, but we cannot expect an evangelical conversion.

Really this is not optimal, because the benefit of having data integrated grows geometrically with the number of interconnected sources, yet the database cathedral cannot grow because the information bazaar does not meet their purity dogma.

So how can these conflicting objectives be redeemed?

One path to redemption is to unite the information bazaar through a common semantic model. This allows all information to be available within a universal graph (model). Of course some riff-raff will get in, but again that is an advantage for the semantic model as you can also declare rules that will verify the accuracy of the data even though it is already stored. 

At the same time the data cathedral can continue to expand, hopefully at faster pace, by integrating those graphs that meet their criteria. 

However, we can allow users to access both the data cathedral, from where they can obtain the system of record, and the information bazaar. We could even report results federated from the two data-sources, annotating the information from the information bazaar with its provenance and hence less certain data quality. Doing this in a standards-compliant way turns existing enterprise information resources into connectable, responsive and interoperable semantic assets.

Harmony

Using this approach we don’t need to force the data cathedral to relax its dogma, nor do we ask the information bazaar to shut down. Yet we can offer users access to 99% of the enterprise information, providing users the ‘Metcalfe’1 benefits of full integration. As semantic assets grow and connect, they enable a resilient semantic ecosystem of meaningful interactions between people, applications and data, irrespective of the differences in structures, data schemas, governance and technologies. The dividing boundaries between the cathedral and the bazaar no longer need to be obstacles to information users. A semantic ecosystem seamlessly embraces and provides integrated access to data cathedrals and information bazaars alike.

 

1 If I have 10 database systems running my business that are entirely disconnected, then the benefits are 10 * K, some constant. If I integrate these databases in pairs (operations + accounting, accounting + payroll, etc.), then the benefits increase to 10 * K * 2. If I integrate in threes (operations + accounting + maintenance, accounting + payroll + receiving, etc.), then the benefits increase four-fold (a corollary of Metcalfe’s law) to 10 * K * 4. For quad-wise integration my benefits would be 10 * K * 8, and so on. Now it might not be 8-fold, but the point is that there is a geometric, not linear, growth in benefits as I integrate all of my information across my organization.


A key process manufacturing problem yet to be solved is the management of knowledge: the know-how, know-who, and know-when. Just as we have been eliminating data silos by introducing common data repositories, and creating common information (semantic) services by adding structure to the data, we need a common rules repository rather than having rules distributed in Excel spreadsheets, workflows, documents, government regulations, etc., which causes silos of rules, lack of consistency, difficulty in ensuring consistent compliance, and many more problems.

Semantic models (RDF, RDFS, and OWL) are excellent at federating information from multiple sources, but there are competing approaches for information integration. However semantic models are also the perfect way to express rules (SPIN) because rules are also described as data within the same database, unlike other technologies where the rules have to be expressed as code.
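As a minimal sketch of what ‘rules as data’ can look like, the following SPARQL CONSTRUCT expresses a simple calculation rule; the ex: vocabulary is illustrative only, and a SPIN engine would store an equivalent rule as triples attached to the relevant class:

PREFIX ex: <http://example.org/plant#>
# Calculation rule: the power of a pump is the product of its flow and pressure rise
CONSTRUCT { ?pump ex:power ?power }
WHERE {
  ?pump a ex:Pump ;
        ex:flow ?flow ;
        ex:pressureRise ?pressureRise .
  BIND ((?flow * ?pressureRise) AS ?power)
}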

Given the importance of consistent application of rules in the process manufacturing business, is this then the sweet spot for semantic technologies?

Is data transparent?

We have been struggling with managing knowledge about our process plants for many years. We tried to solve this problem starting in the 80s with sophisticated data collection applications and real-time databases. However, a data-centric solution alone only provides a view of the process plant as a very long list of measurement tags, reinforcing the definition of data as “being discrete, objective facts or observations, which are unorganized and unprocessed and therefore have no meaning or value because of lack of context and interpretation”.

Thus our view of the knowledge about the plant, its equipment, its operation, and its performance provides a new definition of ‘transparency’, or the lack thereof. If we do want to obtain knowledge we must make use of application programs within which our knowledge-extraction rules are encoded. Most prevalent is the ubiquitous spreadsheet, in which there are:

  • Object names written into cells
  • RTDB Tag names in hidden cells
  • A fixed number of feed/product rows
  • New spreadsheet for each unit/report

The consequences of this data-without-information-or-knowledge approach are a gaggle of spreadsheets: information is encoded in the tag names; knowledge is encoded in the Excel layout and formulae; action relies on a user running the application and using their experience to detect problems and deduce remedial actions; there is uncertainty as to the consistent application of the rules throughout the organization; and many more problems.

Data + Model = Information

The problems of the data-centric approach have driven the process manufacturing industries to seek better information management where information is defined as “organized or structured data, which has been processed in such a way that the information now has relevance for a specific purpose or context, and is therefore meaningful, valuable, useful and relevant.” Recognizing the deficiency of a data-only approach, much effort in the last 20 years has been expended on adding context to this data. This usually takes the form of a database schema which contextualizes the data within a model of the process business and plant.

One of the ways that structure is added to data is to use a relational data model. A cornerstone of the relational model is the use of referential integrity rules. An example of a referential integrity rule within the context of a process plant data model might be that a material movement must have one, and only one, source and destination. However there are limits to what rules can be expressed with referential integrity constraints alone.
To obtain knowledge using this information approach we must ask the database via a database query. The advantage of the information approach over the data approach is that it allows one to ask the database complex questions. For example, to obtain a unit’s material balance:

  • For selected unit
    • Find all feed streams
    • For each feed stream fetch desired criteria

This allows for the use of one report for all units throughout the enterprise, compared with a new spreadsheet for every unit. These reports are also more robust since, for example, if the plant changes in some way then the report will reflect those changes. This certainly tackles some of the inconsistency problems faced by a data-centric approach; however, many problems remain. For example, it would be exceedingly difficult to express the rule that an operator’s training currency expires, say, 2 years after their last training. Thus we still need to resort to complex reports, application programs, Excel macros, etc., into which we can encode our ‘rules’ about the information.
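The unit material-balance report described above can then be expressed as a single query. A minimal sketch in SPARQL, assuming an illustrative ex: plant vocabulary (the relational equivalent would be a SQL join), might be:

PREFIX ex: <http://example.org/plant#>
# For a selected unit, find all feed streams and fetch the desired criteria
SELECT ?feedStream ?flow ?quality
WHERE {
  ?unit a ex:Unit ;
        ex:unitName "CrudeUnit-1" ;    # the selected unit (illustrative)
        ex:feedStream ?feedStream .
  ?feedStream ex:flow ?flow ;
              ex:quality ?quality .
}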

Information + Rules = Knowledge

As good as an information-centric approach may be, it still fails to solve many of the business problems that we face that can only be solved by creating knowledge, defined as “know-how, know-who, and know-when; knowledge is action, not a description of action”. Some of the business challenges that can only be solved by a rule-centric solution are shown below.

Business Rule Challenges
Business Issue: Business rules are distributed throughout the business
Problem: Very difficult to know all of the business rules in place, whether they are duplicated, and whether they are consistently applied
Rule-centric Solution Advantages: Provide a common ‘rules repository’ for the entire organization; parallels the concept of ‘data warehousing’ of data

Business Issue: Many Excel spreadsheets containing knowledge of how to handle information
Problem: Data management is left to IT or DBAs; end users cannot modify the model or *their* data; instead they resort to Excel on the path to hell!
Rule-centric Solution Advantages: Use a rules server to consistently apply the semantic rules that are then common to all applications; use Excel for reporting against data, information, and inferred results of rules

Business Issue: Custom reports written against custom information models have encoded rules
Problem: Reporting languages such as SQL end up containing many business rules; rules are duplicated in similar but not identical reports
Rule-centric Solution Advantages: Report against the information server and the inferred results of the application of rules against information

Business Issue: Manual work processes, some of which are not documented
Problem: Exercises such as ISO 9001 remain only as documented procedures with no means of automating those procedures
Rule-centric Solution Advantages: Use the common rules repository to define the rules; documentation of the rules can then be created from the rules repository

Business Issue: Regulatory compliance rules exist only as documents
Problem: Difficult to assure compliance to regulations when it is left to individuals who must be familiar with the entire regulation
Rule-centric Solution Advantages: Translate regulations to rules deposited in the rules repository

Business Issue: Loss of skills with an aging workforce
Problem: Loss of knowledge in the form of the rules (aka experience) as to how to handle situations
Rule-centric Solution Advantages: Capture the experience as rules within the rules repository

Business Issue: Difficult to audit the adherence to rules
Problem: Since the rules are not formalized it is difficult to ensure that procedures are followed. Personnel might be trained on procedures, but if the system does not enforce them then management cannot be certain they are followed.
Rule-centric Solution Advantages: Since most actions are recorded, it is possible to verify that the actions taken comply with the rules even if users were not being forced to follow the rules in the form of a workflow

Business Issue: Complexity of data and information
Problem: Difficult for users to determine what rules should apply
Rule-centric Solution Advantages: The results of rules become inferred information that is available for reporting

Business Issue: Impossible end-user reporting
Problem: The Holy Grail, but never achieved. Even with a good informational model that provides context to the data, the ‘knowledge’ that should be the result of a report (‘poor yields’, ‘excessive material loss’, ‘pending equipment failure’) is impossible for end-users to encode into their reports
Rule-centric Solution Advantages: Semantic information can be reported using ‘query by example’, far easier than any other reporting; the inferred results of rules are available for reporting using the same technique

Business Issue: Knowledge, defined as “know-how, know-who, and know-when”, requires rules about information, which requires contextualized data
Problem: Measurement tags disguise the model: users are forced to interact with abstract measurement tags (10FI107.OP); general information models become too complex; customers desire to support standards, but competing standards are supported by different constituencies: PPDMA/ProdML/WitsML/ISA95/MIMOSA/IEC-CIM etc.
Rule-centric Solution Advantages: Although rules can be applied to any informational model, a semantic informational model is a better match to semantic rules

To obtain knowledge from a data-centric approach we encoded many rules into applications such as Excel. Although the information-centric approach can encode certain types of rules into the database schema, such as the referential-integrity examples above, there are many rules about the business that cannot be expressed in this way. Throughout any business we have many rules distributed throughout spreadsheets, reports, application programs, workflows, procedures, etc. Below are examples of rules that can be found throughout process manufacturing.

Example operational compliance rules throughout the Process Manufacturing Model (PMM)

The rules fall into four types:

  • Validation: validating that the information is consistent with known rules, such as a movement must have a source and a destination
  • Calculation: calculating another (data) statement, such as the power of a pump being the product of the flow and pressure rise
  • Deduction: deducing additional (object) statements, such as knowing that the measurement of something downstream is the same as the upstream measurement
  • Invocation: invoking an external process to ensure the correct action is taken based upon the change of a rule

Materiel Chain

Finance
  (no examples listed)

Safety
  Validation: Validate that equipment in use has a valid HAZOP assessment
  Invocation: Initiate the safety review work process after an incident; initiate the HAZOP review whenever major equipment is changed

Docs
  Invocation: Initiate review of encoded rules when a document containing rules is revised

Assets

HME
  Validation: Validate that the equipment has undergone appropriate repair or upgrade as recommended
  Calculation: Calculate Overall Operational Efficiency (OEE) based on availability, planned, and actual
  Deduction: Deduce the onset of increased operational risk based on past observations and planned use of equipment
  Invocation: Initiate the maintenance or repair process

Fixed Plant
  Calculation: Provide efficiency and energy consumption calculations based on data and model
  Deduction: Deduce links to MSDS, maintenance records, maintenance procedures and other documents

Technical Infrastructure
  Validation: Validate that critical equipment has a valid security policy; validate that users with access to critical equipment have valid training
  Deduction: Deduce connectivity between critical equipment through the LAN
  Invocation: Initiate remedial action to update software and utilities when a risk is identified

Security
  Validation: Validate that the user has the correct access privileges to perform this action
  Calculation: Calculate the currency of any user’s privileges to perform an action
  Deduction: Deduce that the building containing critical equipment has secure access controls
  Invocation: Initiate a security review when equipment is moved to a new location

People
  Validation: Validate the correct training status of individuals
  Calculation: Calculate the time remaining for the currency of their training
  Deduction: Deduce what assets and facilities an individual has based on their training
  Invocation: Initiate a retraining program when retraining is deduced to become necessary

Consumables
  Validation: Validate that the inventory of consumables matches the measured consumption
  Deduction: Deduce the route of consumable materials (additives etc.) into the product stream based on the topology so that the costs can be correctly calculated

Raw Materials
  (no examples listed)

Utilities
  Calculation: Calculate the quantities of utilities in the absence of complete measurements
  Deduction: Deduce the route of utilities (water, electricity, fuel etc.) into the production facilities based on the topology so that the costs can be correctly assigned

Emissions
  Validation: Validate that no measured emissions are exceeding regulations
  Calculation: Calculate emissions that are not directly measured; calculate total emissions
  Deduction: Deduce the flow of regulated material from the plant topology

Business Procedures
  Validation: Validate that the correct procedures are being followed
  Deduction: Deduce which business processes should be applied in particular situations. For example, if an area is designated as secure, then all processes applied to sub-areas must follow that same designated work process
  Invocation: Initiate a process to update work processes when deviations from the recommended work processes are detected

Intellectual Property
  (no examples listed)

Product

Resource
  Validation: Validate that there are valid exploration rights associated with options
  Calculation: Calculate the time remaining to take advantage of exploration rights
  Invocation: Initiate review of exploration rights prior to their expiration

Field
  Validation: Validate that the field has active contracts
  Calculation: Calculate the royalty payments based on the individual contracts
  Deduction: Deduce the applicable contract rules
  Invocation: Initiate contract reviews and payment processes

Well
  Validation: Validate that each well has an active contract; validate that each well is operating in accordance with its operating permits
  Calculation: Calculate the actual flow based on pressure and temperature (in the absence of a flow measurement); calculate variables required for regulatory reporting
  Deduction: Infer the line-up between well and receiving station based on the topology of lines
  Invocation: Initiate transmission of regulatory reporting requirements

Pipeline
  Validation: Validate that the nomination and routing information is complete: source and destination, quantity, and quality

Crude Storage
  Validation: Validate that a new crude from the pipeline is not being run into the incorrect storage
  Calculation: Calculate the overall assay of the inventory based on component assays
  Deduction: Deduce the assay available at the crude unit based on the line-up of the crude tanks to units
  Invocation: Initiate rerouting of incoming crude to more appropriate storage; initiate crude-switchover on the crude unit based on the assay of the new crude tank

Processing
  Validation: Validate that the configured mode of operation matches the planned or scheduled mode of operation
  Calculation: Calculate material balances, yields, qualities, efficiencies
  Deduction: Deduce measurements of downstream elements based on the operational configuration and knowledge of the location of actual measurements; deduce the operational configuration and flow model of the plant given the material movements and battery-limit flows
  Invocation: Initiate a workflow to switch modes of operation

Storage
  Validation: Check to ensure that material planned to flow into storage is compatible with the in-store material
  Calculation: Calculate the actual contents of the storage
  Deduction: Deduce the grade of the stored material based on existing stored material and inbound movements
  Invocation: Invoke rescheduling of blends when an actual blend is found to be out of specification

Pipeline
  Validation: Validate that material is not planned for a line that would contaminate the contents of the line or the planned material
  Calculation: Calculate the material movement based on either source or destination quantity measurements
  Invocation: Invoke a custody transfer dispute when a transfer is outside of the acceptable measurement deviation

Port Storage
  Validation: Validate that the quantity is available for planned shipments
  Calculation: Calculate the inventory remaining after current shipment commitments
  Deduction: Deduce the grade of material based on the mixed assay of the storage or stockpile
  Invocation: Initiate pull-through of more inventory when commitments exceed current inventory and planned receipts

Shipping
  Validation: Check that the vessel is compatible with the scheduled berth
  Calculation: Calculate demurrage charges based on agreed rules
  Deduction: Deduce the stored material destination based on the vessel berth
  Invocation: Initiate a loading re-schedule in the event that a vessel is delayed

Customer
  Validation: Validate that the customer order has a valid contract upon which the transfer can be based

 

These rules exhibit the same characteristics that caused us to resolve the original data-silo problem. A simple example is that we would want to calculate the corrected custody-transfer quantities for both operational and financial needs. We can also observe that rules span multiple business areas. For example, the currency of an operator’s privileges spans the training records, access control to the building housing the equipment, maintenance records of the equipment, and more. Finally, we do not want the rules to be passive. Instead we want any deviation from the rules to initiate, or at least recommend, the correct remedial action. Thus we want our rules and information to be combined to achieve active knowledge, as shown below.

Knowledge + Action = Results

Even if we have the perfect set of rules, they have no business value unless we act upon the know-how, know-when, and know-who. Thus it is important to close the business loop by taking action on the knowledge to produce the desired results; no action, then no results. This means that the knowledge must have a mechanism for invoking the remediation process.

Realizing a rule-centric solution

How is it possible to abstract the handling of rules away from the individual applications into which they are encoded? Our recommendation is that there should be a separate rules repository: a container that defines all of the rules. Although there are several candidates for describing rules, one favored choice would be to define the rules semantically using something like SPARQL Rules or SPIN. Since information is conveniently modeled semantically, it then makes sense to harmonize the technologies and use the same approach for the rules repository. The complete rules-centric architecture is shown below.

 

Data:                Raw measurements collected from the instruments and data entry, stored in real-time databases and historians.

Technology:      Real-time historians

Model:             Context and structure added to the data to create information. It takes the form of a database schema in the case of a relational model, or an ontology in the case of a semantic model, plus the configuration that represents the plant: equipment, topology, etc.

Technology:      Relational schema, object structure or semantic ontology

Information:     The combination of data and model manifested as a database, relational, object, or semantic.

Technology:      Relational, object or semantic data store

Rules:               A repository of the rules. Traditional information system architectures fail to separate this out as a distinct element. Instead the rules are distributed throughout the application systems. We propose that all rules should be held in a common repository, just like data and information. This repository should be able to handle all rules: validations, calculations, deductions, and invocations. The best choice for organizing such a repository is semantically, as this allows both information and rules to share the same technology.

Technology:      Semantic rules data store

Knowledge:     The combination of information and rules, manifested as an inference engine capable of executing the rules. However it is unrealistic to expect rules to be executed only within the inference engine, so rules within spreadsheets, workflows, applications, and calculation engines should be synchronized with the rules repository.

Technology:      Rules inference engine together with synchronization interfaces

Action:             The actions invoked by the knowledge manifested as a workflow engine capable of invoking external actions via web-service interfaces.

Technology:      Workflow or temporal rules engine.

Visualize:         A portal through which the data, information, knowledge, and actions can be presented, as well as through which the model and rules can be configured.

Technology:      Portal, preferably one whose presentation is semantically deduced from the action, knowledge, information, and data

Control:            Either part of the visualization portal or a separate application through which the users can execute control based on the actions.

Technology:      Conventional control system interface through which users can control the plant.

Is it feasible?

One of the key architectural elements is the management of rules as data, along with the closely related model and action elements. Outside of the process manufacturing industry, especially in finance and insurance, rules engines have been in use for some time, so there are several vendors, listed below. Of particular interest are TopQuadrant, the sponsor of SPARQL Rules or SPIN, which provides a standards-based way to define rules and constraints for Semantic Web data, and OntoRule, an EU project that brings together leading vendors of knowledge-based systems and a handful of top research institutions to develop the technology that will empower business policies in the enterprise of the future.

  • Corticon: Decision table or rulesheet-centric business rules management system
  • FICO Blaze Advisor: General purpose business rules management system with .Net, Java and COBOL deployment
  • IDIOM: Decision-centric business rules management system
  • IBM ILOG Rules: General purpose business rules management system with .Net, Java and COBOL deployment
  • InRule: .Net based business rules management system
  • JBoss Drools/JBoss Enterprise BRMS: Open source business rules management system that is working on updating its Decision Tables
  • Modellica: European business rules management system focused on the credit risk business, available in the US through GDS Link
  • OntoRule: Leading vendors of knowledge-based systems and a handful of top research institutions join their efforts to develop the technology that will empower business policies in the enterprise of the future.
  • OpenRules Decision Management System: Open source Excel-based business rules management system.
  • Pegasystems: A unified business rules and process management environment, now including the Chordiant decision management products.
  • Progress BRMS: Drools-based business rules management system acquired with Savvion
  • Sparkling Logic: A “social-logic” platform for managing business rules
  • TopQuadrant: TopBraid Suite™ leverages emerging technology to help our customers connect silos of data, systems and infrastructure and to build flexible applications from linked data models. SPIN is a standards-based way to define rules and constraints for Semantic Web data.
  • Visual Rules: Java-based business rules management system from Bosch Innovations
  • XpertRule: XpertRule develops advanced Business Rules Management and Expert System software that helps organizations:
    • Capture expertise and skills in risk assessment, advising, and performance improvement as well as in selling and supporting both products and services.
    • Comply with regulations, policies, laws and legislations.
    • Automate process orchestration both for intelligent front-end user interface navigation, back-end process flow and data interchange.
  • Zementis: A cloud-based execution platform for business rules and analytic models.


Excel Solver has some limitations for production-ready data reconciliation, even though Excel is often used for process manufacturing production and operational accounting. To overcome these limitations, Resolver is an Excel add-in that uses the same model configuration as Solver, but with a solver that is optimized for the data reconciliation problem. The result is a fast, robust, and very accurate solver for linear, nonlinear, equality, and inequality data reconciliation problems. The combination of Excel and a robust solver could bring ubiquity to process data reconciliation.

Resolver is an Excel add-in that performs data reconciliation using a high performance, accurate, and robust algorithm optimized for the data reconciliation problem. It uses the same problem setup as the Solver Excel add-in; in fact the problems are interchangeable. You can set up and solve a problem in Resolver and then use the same setup and problem definition to try to solve it with Solver, just in case you want verification of the results!

Resolver came about as a result of trying to use Solver to solve process data reconciliation problems, as reported in ‘Is Excel Solver suitable for Production Data Reconciliation?’. As noted there, Excel has a lot of advantages for setting up process data reconciliation problems, and is widely used for operational and production accounting. Alongside those advantages, however, some issues were revealed, and these have been overcome with Resolver.

Excel Data Reconciliation Problem Setup

Resolver uses a reconciliation model defined via constraint equations expressed as Excel cell formula in the same way as Solver.

Solving with Resolver

The data reconciliation problem is defined in a very similar way to Excel Solver. In fact the model definitions are interchangeable, although Resolver can only solve for the Gaussian objective function, and Solver cannot produce the reconciliation diagnostics.

Resolver writes the results back into the variable cells, and also produces a report to help diagnose reconciliation problems. This report provides the well-established gross, constraint, and measurement error diagnostics.

Solver versus Resolver Excel Process Data Reconciliation

Advantages of Resolver-based Data Reconciliation

  • The use of symbolic differentiation of the constraint equations eliminates the need for numeric differentiation and its inherent inaccuracy (see the sketch after this list).
  • Resolver does not rely on Excel to recalculate the objective function which makes it very fast.
  • The constraint-sequential solution technique ensures solution stability even when the problem is poorly formulated, such as zero tolerances, and redundant constraints.
  • Covariance factorization improves numeric stability and accuracy.
  • Compatible (and interchangeable) with Solver problems
  • Data reconciliation can take place within the same spreadsheet environment in which many production and operational accounting systems already exist.
  • Constraints are defined as Excel cell formula, not strings within Excel cells that use the reconciliation application’s own syntax.
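
As a concrete illustration of the symbolic differentiation mentioned in the first advantage above, the following sketch uses the sympy library (the variable names are invented for illustration) to differentiate a simple mass balance and a bi-linear component balance into exact Jacobian rows. It is a sketch of the technique only, not Resolver’s internal code.

```python
import sympy as sp

# Differentiate constraint formulae symbolically to get exact Jacobian rows,
# avoiding the inaccuracy of numeric perturbation.
feed, product1, product2 = sp.symbols('feed product1 product2')

mass_balance = feed - (product1 + product2)          # feed = product1 + product2
print([sp.diff(mass_balance, v) for v in (feed, product1, product2)])   # [1, -1, -1]

# Bi-linear (component balance) constraints differentiate just as easily.
grade_feed, grade1 = sp.symbols('grade_feed grade1')
component_balance = feed * grade_feed - product1 * grade1
print([sp.diff(component_balance, v) for v in (feed, grade_feed, product1, grade1)])
# [grade_feed, feed, -grade1, -product1]
```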

Disadvantages of Resolver-based Data Reconciliation

  • There is no visual modeler or designer of the flowsheet or problem.
  • Constraint formulae need to be differentiable.

Advantages of Solver-based Data Reconciliation

  • “Very easy to prepare data within Excel and to post process the results for presentation.”
    • Resolver uses the same setup and problem definition. In fact they are interchangeable.
  • “Solver is not limited to linear-only constraints, thus component balance (bi-linear) reconciliation problems can be handled, although they were not tested here.”
    • Resolver can handle the same range of problems which includes the ‘traditional’ linear balances, non-linear balances, and inequality constraints. To achieve its accelerated and robust convergence Resolver uses symbolic differentiation of the constraint formula provided in the Excel cells and their precedents. At present not all Excel functions are supported. Additional functions will be supported in further releases of Resolver.
  • “Solver is not limited to equality constraints, thus for example all estimates can be forced to be positive.”
    • Resolver supports inequality constraints, as well as the convenient option that forces all variables to be positive (Assume Non-Negative).

Limitations of Solver-based Data Reconciliation

  • “Solver uses a generic solver but is not optimized for the characteristics of a data reconciliation problem, such as linear or bi-linear problems, and quadratic cost functions.”
    • Resolver uses an algorithm[1] optimized for the data reconciliation problem that is fast, accurate and robust.
  • “It is easy to define a problem that becomes unstable. In the example, adding the redundant constraint of the ‘world’ causes solution instability.”
    • Resolver uses an algorithm[1] that is very efficient and robust. It is optimized to handle some of the quirks of the process data reconciliation problem such as variables that differ significantly in numeric value, unobservable variables, and variables that we wish to ‘fix’[2].
  • “Irrespective of objective function, Solver took 25~75 iterations to converge. This could be an issue for realistically sized (100~1000’s of variables) reconciliation problems.”
    • Resolver will solve linear data reconciliation problems with equality and inequality constraints immediately. Nonlinear problems are usually solved within a few (5~15) iterations.
  • “There is no access to such intermediate products such as the covariance matrix with which to perform more sophisticated gross error detection.”
    • Resolver produces all of the intermediate diagnostics needed to perform gross error detection, measurement error detection, and constraint error detection. These are made available so users can superimpose their own constraint and measurement removal algorithms.

Excel Data Reconciliation Model Setup

Model Constraints

Resolver uses a reconciliation model defined via constraint equations expressed as Excel cell formula in the same way as Solver.

Thus we can express the reconciliation model constraints as Excel formula within cells using the familiar Excel syntax. As usual Excel will display the numeric result of these formulae as shown below. I chose to organize the formula to show the total feed and total product in adjacent cells, but other formulations are possible.

However Excel also allows us to ‘Show Formulae’, offering the best of both worlds. Since I chose to use named ranges, the constraint formulae are very readable. Not only that, we can use ‘Trace Precedents’, and ‘Trace Dependents’ to track the relationship between constraints, a level of user interaction that is difficult to duplicate in dedicated reconciliation applications.

Also, of course, there is nothing to limit you to linear reconciliation problems in Excel.

Model Variables

Reconciled variables are organized as named cells so that the constraint equations are more readable. These will be the variables adjusted by Solver and Resolver. In principle these variables could be distributed throughout the spreadsheet. However it is more convenient to group them with the tolerances and estimates (measurements) in adjacent cells:

Furthermore the named ranges in the Labeled Variable column were in fact ‘arrays’ pointing to the Solution, Tolerance, and Estimates.

This is a convenient way of grouping the solutions with the corresponding tolerances and estimates in a way that is compatible with both Solver, which only requires access to the variable value, and Resolver, which needs access to the corresponding measurement and tolerance. As a reminder, to create an array variable select the cells as shown below and complete the edit with ‘Control-Shift-Enter’.

This will be shown as follows in the Excel edit box. Note the curly-brace {…} indicating that this is an array formula.

Reconciliation objective function

Resolver minimizes the straightforward sum-of-weighted-squares (Gaussian) objective function, the formulation used by most reconciliation systems. It can be argued that the measurement errors are drawn from a Gaussian distribution. Aside from that, the Gaussian formulation is popular mainly because, in the case of linear constraints such as mass balances, the objective function can be minimized without the need for an iterative search. A sketch of this objective is shown below.
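
The sketch below is a minimal illustration of that objective (not Resolver’s code): the weighted sum of squared adjustments, with weights derived from the measurement tolerances, here assumed to be 95% confidence intervals (roughly 1.96 standard deviations).

```python
def gaussian_objective(reconciled, measured, tolerance):
    """Sum of ((x - m) / sigma)^2 over the measured variables."""
    total = 0.0
    for x, m, tol in zip(reconciled, measured, tolerance):
        if m is None or tol == 0.0:       # unmeasured or fixed variables add no cost
            continue
        sigma = tol / 1.96                # assume tolerance is a 95% confidence interval
        total += ((x - m) / sigma) ** 2
    return total

# Example: three flows adjusted against their measurements and tolerances.
print(gaussian_objective([100.5, 59.8, 40.7], [101.0, 59.5, 40.5], [2.0, 1.5, 1.5]))
```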

For convenience the Resolver add-in includes a Gaussian function that will calculate the same objective function as used by Resolver. When a model is saved, a cell is created that contains a suitable target which Solver can use.

Reconciling with Resolver

Define and Solve a Reconciliation Problem

On the Data tab, in the Resolver group, click Resolver.

  • If the Resolver command is not available, load the Resolver Add-in (add-in: A supplemental program that adds custom commands or custom features to Microsoft Office.)
  1. Click the Microsoft Office Button, click Excel Options, and then click the Add-Ins category.
  2. In the Manage box, click Excel Add-ins, and then click Go.
  3. In the Add-ins available box, select the Resolver Add-in check box, and then click OK.

In the Subject to constraints frame, enter any constraints (constraints: the limitations placed on a Solver problem. You can apply constraints to adjustable cells, the target cell, or other cells that are directly or indirectly related to the target cell) that you want to apply.

  • How to add a constraint or a range of constraints
  1. In the Constraint frame, under Subject to the Constraints, click on the ellipsis (…) to identify the cell or range that defines the right-hand or left-hand side of the constraint.
  2. In the Cell Reference box, enter the cell reference (cell reference: The set of coordinates that a cell occupies on a worksheet. For example, the reference of the cell that appears at the intersection of column B and row 3 is B3.) or name (name: A word or string of characters that represents a cell, range of cells, formula, or constant value. Use easy-to-understand names, such as Products, to refer to hard to understand ranges, such as Sales!C20:C30.) of the cell range for which you want to constrain the value.
  3. Repeat for the other side of the constraint, and select an alternative equality if required. This equality will be applied to all of the constraints within the selected range.
  4. Click the Add button to add this constraint to the set of constraints that define the reconciliation problem. The constraint will be checked to ensure the same number of cells is defined in the ranges on the left-hand and right-hand side of the constraint.
  5. This can be repeated until all constraints have been added to the reconciliation problem.
  • How to update or delete a constraint
  1. Select a constraint to be updated or deleted from the constraints list.
  2. If Edit is chosen the constraint is transferred to the Constraint frame where it can be updated.
  3. If Delete is chosen the constraint is removed from the model.

In the By adjusting variables frame, enter the variables that you want to reconcile.

  • How to add a variable or range of variables
  1. In the Variable frame click on the ellipsis (…) to identify the cell or range that defines the variables. These are the array formula cells described above. The use of an array formula provides consistency with Solver and the ability for Resolver to access the corresponding measurement and tolerance values.
  2. Click the ‘Add’ button to add this variable or variables to the set of variables that define the reconciliation problem. This can be repeated until all variables have been added to the reconciliation problem.

  • How to update or delete a variable
  1. Select a variable or range of variables to be updated or deleted from the variable list.
  2. If Edit is chosen the variable is transferred to the Variable frame where it can be updated.
  3. If Delete is chosen the variable is removed from the model.
  • How to let Resolver guess the reconciled variables
  1. Use the Guess button to use the defined constraints to guess the adjustable (reconciled) variables.
  2. Guess will add any additional variables to the set of variables. It usually does a good job, but success is not guaranteed.
  • How to consolidate the list of reconciled variables
  1. Use the Consolidate button to merge the variables into one or more ranges if the list of variables becomes fragmented.

Click Resolve to initiate the reconciliation.

  • To use the reconciled values on the worksheet, click Keep Resolver Solution in the Resolver Results dialog box.
  • To restore the original state, click the Restore Original Values.
  • To create a report that is based on your solution, you can click Answer Report. The report is created on a new worksheet in your workbook. If Resolver does not find a solution, the report option is unavailable.

Save or load a problem model

  1. In the Resolver Parameters dialog click Save Model or Load Model.
  2. When you click Save Model enter the reference for the first cell of a vertical range of empty cells in which you want to place the problem model. The number of cells used depends on the number of constraint ranges defined. The saved model is compatible with Solver, but has an additional cell that defines an equivalent objective function that Solver would need to use as a target.
  3. When you load a model, enter a reference for the first cell.

Control how Resolver finds solutions

  1. In the Resolver Parameters dialog, click Options.

The Option button provides access to the convergence control options available to Resolver. These options are designed to be consistent with Solver.

  • How to control solution time and iterations
    1. In the Max Time box, type the number of seconds that you want to allow for the solution time.
    2. In the Iterations box, type the maximum number of iterations that you want to allow.
  • How to control convergence and accuracy
    1. In the Precision box, type the degree of precision that you want, the smaller the number the higher the precision. The precision is a measure of the magnitude of deviations of constraints.
    2. In the Convergence box, type in the amount of relative change that you want to allow in the objective function (Gaussian) before Resolver stops with a solution. The smaller the number, the less relative change is allowed.
  • How to initialize estimates to improve convergence
    1. Select Initialize Values if you want Resolver to use the current values of the adjusted variables as a starting point. Otherwise it will use measurements as the starting point wherever available.
  • How to add non-negativity constraints
    1. Select Assume Non Negative if you want Resolver to force all variables to be greater than zero. This is equivalent to adding additional inequality constraints, one for each adjusted variable.

Resolver Options

You can control advanced features of the solution process. Each option has a default setting that is appropriate for most problems.

Max time Limits the time taken by the solution process. While you can enter a value as high as 32,767, the default value of 10 (seconds) is adequate for most problems.

Iterations Limits the time taken by the solution process by limiting the number of interim calculations. While you can enter a value as high as 32,767, the default value of 30 is adequate for most problems.

Precision Controls the precision of solutions by using the number that you enter to determine whether a composite measure of the constraint residuals is small enough. Precision must be indicated by a fractional number between 0 (zero) and 1. Higher precision is indicated when the number that you enter has more decimal places — for example, 0.0001 is higher precision than 0.01.

Convergence When the relative change in the objective function (Gaussian) is less than the number in the Convergence box for the last five iterations, Resolver stops. Convergence applies only to nonlinear problems and must be indicated by a fractional number between 0 (zero) and 1. A smaller convergence is indicated when the number that you enter has more decimal places — for example, 0.0001 is less relative change than 0.01. The smaller the convergence value, the more time Resolver takes to reach a solution.

Assume Non-Negative Causes Resolver to assume a lower limit of 0 (zero) for all adjustable variables.

Interpreting the Results

Error Detection Diagnostics

The diagnostics used to detect gross measurement and balance errors are just as important as (if not more important than) the reconciled results. Thus Resolver provides a comprehensive set of diagnostics as follows:

Scope Measure Metric Names
Measured (A Priori): metrics calculated from the measurement values, using reconciled values only where no measurement is available.
Reconciled (A Posteriori): metrics calculated from the reconciled results.

Global Cost Measured Cost: The value of the objective function at initialization. Reconciled Cost: The value of the objective function at convergence, as calculated by Resolver
Critical Value n/a Global Critical Value: the global gross error detection value as defined by Madron[3]. If the Reconciled Cost exceeds this value, then there is likely to be a gross error … somewhere.
Variables Value Measured Value: the value of the measurement, if any. Reconciled Value: the values of the adjusted variable at convergence.
Tolerance Measured Tolerance: the tolerance (95% confidence level), of the measurement Reconciled Tolerance: the tolerance of the result as determined by Resolver.
Test Measured Test: the difference between the Measurement and Reconciled Value normalized by the Measurement Tolerance. This is used only as an indicator of the magnitude of the measurement adjustment. Reconciled Test:  the difference between the Measurement Value and Reconciled Value, normalized by the Reconciled Tolerance. This should not exceed the Measurement Critical Value unless there is a gross error.
Critical Value n/a Measurement Critical Value: the measurement gross error detection value as defined by Tamhane[4]. If a measurement error exceeds this value, then it is likely to be a gross error
Balance Residual Measured Residual: the residual of the constraint calculated using the original Measurement Value values where available and the Reconciled Value where not. Reconciled Residual: the residual of the constraint using the reconciled results. This should be zero or close to zero. Closeness to zero is controlled by the Precision option.
Deviation Measured Deviation: the expected variability of residual errors as defined by the measured tolerances. Reconciled Deviation:  the expected variability of residual errors as determined by Resolver.
Test Measured Test: the Measured Residual normalized by the Measured Deviation. This is used only as an indicator of the magnitude of constraint adjustments. Reconciled Test: the Measured Residual normalized by the Reconciled Deviation. This should not exceed the Constraint Critical Value unless there is a gross error.
Critical Value n/a Constraint Critical Value: the constraint or nodal gross error detection value as defined by Mah et al[5]. If a constraint error exceeds this value, then it is likely to be a gross error caused either by an associated measurement error or a missing variable from the constraint equation.
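
As an illustration of how the measurement test above can be screened, the sketch below (with illustrative values; the critical value itself would come from the gross error statistics referenced later) flags measurements whose normalized adjustment exceeds a supplied critical value.

```python
def suspect_measurements(measured, reconciled, reconciled_tolerance, critical_value):
    """Yield (index, test) for measurements whose Reconciled Test exceeds the critical value."""
    for i, (m, x, tol) in enumerate(zip(measured, reconciled, reconciled_tolerance)):
        test = abs(m - x) / tol            # Reconciled Test from the table above
        if test > critical_value:
            yield i, test

# Example with illustrative numbers: no measurement is flagged here.
print(list(suspect_measurements([101.0, 59.5, 40.5],
                                [100.5, 59.8, 40.7],
                                [1.2, 1.0, 1.0],
                                critical_value=1.96)))
```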

 

Reconciliation Report

The reconciliation report has three sections that provide both the results and the diagnostics.

  1. Problem
    1. Total Constraints: The total number of linear, nonlinear, equality, and inequality constraints specified in the model.
    2. Linear: Number of linear equality constraints.
    3. Nonlinear: Number of nonlinear equality constraints
    4. Active Linear: Number of linear inequality constraints set to active.
    5. Active Nonlinear: Number of nonlinear inequality constraints set to active.
    6. Variables: Number of adjustable (reconciled) variables.
    7. Fixed variables: Number of variables that are fixed by virtue of a 0.0 tolerance.
  2. Convergence
    1. Converged: True if converged within the specified precision and convergence.
    2. Termination: The condition that caused termination of convergence, whether or not successful.
    3. Time: Time in seconds to achieve convergence within the specified precision and convergence.
    4. Iterations: Number of iterations needed to achieve convergence.
    5. Precision: The precision at convergence.
    6. Convergence: The measure of convergence, the relative change of the cost, at convergence.
  3. Target
    1. Cell: The cell containing the Gaussian objective function that would be minimized by Solver
    2. Reconciled Cost: The value of the objective function at convergence, as calculated by Resolver
    3. Global Critical Value: the global gross error detection value as defined by Madron[6]. If the Cost exceeds this value, then there is likely to be a gross error … somewhere.
    4. Redundancy Degree: the number of degrees of freedom in the defined model.
    5. Measurement Critical Value: the measurement gross error detection value as defined by Tamhane[7]. If a measurement error exceeds this value, then it is likely to be a gross error.
    6. Constraint Critical Value: the constraint or nodal gross error detection value as defined by Mah et al[8]. If a constraint error exceeds this value, then it is likely to be a gross error caused either by an associated measurement error or a missing variable from the constraint equation.
  4. Adjusted Variables
    1. Cell: the cell address or name containing the array of solution, tolerance, measurement values.
    2. Reconciled Value: the values of the adjusted variable at convergence.
    3. Measured Value: the value of the measurement, if any.
    4. Solvability: the solvability of the variable:
      1. Unobservable: the variable is neither measured nor deducible from the model.
      2. Observable: the variable is unmeasured but is deducible from the model.
      3. Determined: the variable is measured but there is insufficient information in the model and other measurements to provide alternate estimates.
      4. Redundant: the variable is measured and there is information in the model and other measurements to provide alternate estimates.
      5. Fixed: the variable is effectively constrained by virtue of its 0 (zero) tolerance.
    5. Reconciled Test: the difference between the Measurement Value and Reconciled Value, normalized by the Reconciled Tolerance. This should not exceed the Measurement Critical Value unless there is a gross error.
    6. Measured Test: the difference between the Measurement and Reconciled Value normalized by the Measurement Tolerance. This is used only as an indicator of the magnitude of the measurement adjustment.
    7. Measured Tolerance: the tolerance (95% confidence level), of the measurement.
    8. Reconciled Tolerance: the tolerance of the result as determined by Resolver.
  5. Constraints
    1. Cell: a simplified expression of the two cells and equality that define the constraint.
    2. Active: an indicator of the state of the constraint. Inequality constraints are False until admitted to the active-set.
    3. Measured Residual: the residual of the constraint calculated using the original Measurement Value values where available and the Reconciled Value where not.
    4. Reconciled Residual: the residual of the constraint using the reconciled results. This should be zero or close to zero. Closeness to zero is controlled by the Precision option.
    5. Measured Test: the Measured Residual normalized by the Measured Deviation. This is used only as an indicator of the magnitude of constraint adjustments.
    6. Reconciled Test: the Measured Residual normalized by the Reconciled Deviation. This should not exceed the Constraint Critical Value unless there is a gross error.
    7. Reconciled Deviation: the expected variability of residual errors as determined by Resolver.
    8. Formula: the constraint equation spelt out using the variable names, and including any dependent cells.

Resolver Frequently Asked Questions

Does Resolver always converge to the correct solution?

If the problem is linear, then convergence is guaranteed. In fact it will converge to the global minimum in a single iteration.

If the problem includes nonlinearities then convergence might take several iterations before it arrives at the minimum. There is no absolute guarantee that this is the global minimum, but for most data reconciliation-like problems it is invariably the correct solution. 

If the problem includes inequalities as well as nonlinearities then it is possible to define an infeasible problem.

Can Resolver access cells on other sheets?

No. All cells that are directly or indirectly referred to in the constraints equations need to be on the same worksheet.

What are the limits on the constraint equations?

The constraint equations need to involve only differentiable functions: +, -, /, *, ^, SUM(…), PRODUCT(…),… We are working on ways to handle the other, non-differentiable, functions.

Can I use my own user defined functions?

Not yet. Resolver needs to be able to differentiate the constraint, and this is not possible for user-defined functions.

Why do I need a license to run Resolver?

Resolver needs either a trial license valid for 20 days, or a full license usable for an unlimited duration but limited to a single machine. The license price is modest to encourage the use of process data reconciliation.

What algorithm and methods are used by Resolver?

The unique Resolver solution algorithm is best described as constraint sequential. It starts off with a solution surface based on the initial estimates and no constraints. It then takes one of the constraints and adjusts the estimates taking into account this individual constraint. This moves the solution to a new position, and the process is repeated for the next constraint. If the constraints are linear (equality or inequality), then each step is the same as ‘Newtonian’ convergence. The advantage of this algorithm is very high numerical stability, due in part to the UDUT covariance factorization, even for badly formed numerical problems. If the constraints are non-linear, however, Newtonian convergence can be too severe, so the algorithm reduces the size of the step it takes towards the optimum. At the same time the direction of the step swings away from the ‘Newtonian’ direction towards the ‘steepest descent’ direction. As the step size reduces, the algorithm guarantees convergence, albeit more slowly.

One pass through all of the constraints is sufficient for linear-only constraints. If there are non-linear constraints then the solution is used as a new starting point, and the procedure repeated.

Inequality constraints are handled by the use of the active-set methodology. At each stage the most ‘active’ inequality constraints are added until no more inequality constraints are in violation.
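
The sketch below illustrates the constraint-sequential idea for linear equality constraints only; it is not Resolver’s implementation (which, as described above, also factorizes the covariance as UDUT and handles non-linear and inequality constraints). Each constraint is applied in turn as an exact ‘measurement’, moving the solution and shrinking its covariance.

```python
import numpy as np

def reconcile_sequential(x, P, constraints):
    """x: measured estimates; P: covariance of the estimates;
    constraints: list of (a, b) representing a . x = b."""
    x, P = x.copy(), P.copy()
    for a, b in constraints:
        residual = b - a @ x              # violation of a . x = b at the current estimate
        s = a @ P @ a                     # variance of that residual
        gain = P @ a / s
        x = x + gain * residual           # adjust estimates to satisfy the constraint
        P = P - np.outer(gain, a @ P)     # remove uncertainty along the constraint direction
    return x, P

# Example: a single mass balance, feed - product1 - product2 = 0.
measured = np.array([101.0, 59.5, 40.5])
P0 = np.diag((np.array([2.0, 1.5, 1.5]) / 1.96) ** 2)   # tolerances assumed to be 95% intervals
x_hat, P_hat = reconcile_sequential(measured, P0, [(np.array([1.0, -1.0, -1.0]), 0.0)])
print(x_hat, x_hat[0] - x_hat[1] - x_hat[2])             # reconciled flows now balance (residual ~0)
```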

References

[1] “Multi-frequency Testing of Nonlinear Systems”; Lawrence, P (Oct 1980); PhD Thesis, University of Wales Institute of Science and Technology, Chap 7

[2] “Filtering and Error Analysis via the UDUT Covariance Factorization”; Thornton, C; Bierman, G (Oct 1978); IEEE Trans AC-23 Issue 5, pp 901-907

[3] “Statistical Analysis of Material Balance of a Chemical Reactor”; Madron, T., Veverka, V., and Venecek, V. (1977); The American Institute of Chemical Engineers Journal, 23, 482-486.

[4] “A Note on the Use of Residuals for Detecting an Outlier in Linear Regression”; Tamhane, A. C. (1982); Biometrika, 69, 488- 489.

[5] “Reconciliation and Rectification of Process Flow and Inventory Data”; Mah, R. S. H., Stanley, G. M., and Downing, D. M (1976); Industrial and Engineering and Chemical Process Design and Development, 15, 175-183.

[6] “Statistical Analysis of Material Balance of a Chemical Reactor”; Madron, T., Veverka, V., and Venecek, V. (1977); The American Institute of Chemical Engineers Journal, 23, 482-486.

[7] “A Note on the Use of Residuals for Detecting an Outlier in Linear Regression”; Tamhane, A. C. (1982); Biometrika, 69, 488- 489.

[8] “Reconciliation and Rectification of Process Flow and Inventory Data”; Mah, R. S. H., Stanley, G. M., and Downing, D. M (1976); Industrial and Engineering and Chemical Process Design and Development, 15, 175-183.


If you do not plan where you are going you will not get there, but you will probably get what you deserve. This software development model, which accurately predicts resources and schedule given scope, can greatly help in planning when you will get there and what will be needed on the way. It may not be a perfect model, but perfection is the enemy of the good. What software development model experience can you share?

In a previous posting Innovation != R&D$, I reasoned that the impact of R&D expenditure is not well correlated with gross operating margins because a significant proportion of R&D monies is absorbed by the less revenue generating activities of maintenance and support rather than innovation, by which I mean in this context the creation of new software applications. However it is still important to ensure that whatever monies are invested in innovation are wisely invested. Unfortunately our industry is beset with project delivery problems. We have earned the IT Rule-of-5: 5 times over-budget, 5 times schedule, 1/5th functionality.  Perhaps an exaggeration but there is a problem that few would deny.

One problem is that of creating unrealistic expectations. The development team’s answer to a feature request of ‘it is just a small matter of programming’ (one unit of SMOP) gets heard as ‘it will be on my desk tomorrow morning, tested and with a quality and quantity of documentation that would shame Charles Dickens’.  

The answer is a good-looking model. I love models. Not the type you are thinking of, but simple mathematical formulae that allow me to estimate what will happen. Having been long involved with automation, MES, and software development, I always want to know how long a software development will take. ‘As long as a piece of string’ is not the most useful answer when customer expectations or development budgets need to be met. So over the years I have developed a model that estimates total development effort and tracks development progress over the life-cycle of the project with surprising accuracy. Before you say that most models have so many ‘tuning factors’ that you can of course make them always fit, I want to point out that this model has just one factor. I also want to point out that most of this model originates with Capers Jones’ seminal work ‘Applied Software Measurement’.

Estimating Model

The objective of the estimating model is to use a measure of the size of the development and come up with estimates for project duration, the number of project resources, total development effort and average productivity. From these estimates the project cost can be derived.

Tuning factor (J)

This is the only factor you need for this model. Fortunately Capers Jones also provides a range of suggested values, as shown below. I would suggest 0.4 as a starting point.

Kind of software Best in class Average Worst in class
Systems 0.43 0.45 0.48
Business 0.41 0.43 0.46
Shrink-wrap 0.39 0.42 0.45

Estimated Scope (LOC)

Estimated size for the project (lines-of-code). OK, it can be difficult to come up with a really accurate estimate, but there is much written on the subject. Quick ways are to simply say that this application is approximately the same size as a similar one done in the past. For greater accuracy, Function Point or Story Point estimating can be used and then converted to lines-of-code.

Estimated Function Point (FP)

= Lines-of-Code (LOC) / LOC-per-FP, where LOC-per-FP is taken as 54 for languages such as C#

Estimated Project Duration (D)

= FP^J (months)

The ideal project duration given the ideal number of resources for the project.

Development Resources (R)

= FP^(2*J) / 27 (persons)

The ideal number of full-time project-persons to complete the project.

Total development effort (Effort)

= D * R (person-months)

Average Productivity (P)  

= LOC/Effort (lines-of-code per person-month)

Below is an example of applying this model to an example project:

Development Estimate Model
Factor Units Formula Example Project
Tuning Factor dim J 0.4
Lines of Code LOC LOC 75000
LOC/FP LOC/FP LOC-per-FP 54
Function Points FP FP 1389
Estimated project duration months FP^J 18
Development resources persons FP^(2*J)/27 12
Total development effort person-months FP^(3*J)/27 219
Average productivity LOC/person-month LOC/Effort 343
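
A sketch of the estimating model above, reproducing the example project (assuming J = 0.4, 54 LOC per function point, and 75,000 lines of code, as in the table); the function name is purely illustrative.

```python
def estimate(loc, j=0.4, loc_per_fp=54):
    fp = loc / loc_per_fp            # back-cast lines-of-code to function points
    duration = fp ** j               # months
    resources = fp ** (2 * j) / 27   # full-time persons
    effort = duration * resources    # person-months, i.e. FP^(3*J)/27
    productivity = loc / effort      # lines-of-code per person-month
    return fp, duration, resources, effort, productivity

fp, d, r, e, p = estimate(75000)
print(round(fp), round(d), round(r), round(e), round(p))   # approximately 1389 18 12 219 343
```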

 

The graphs below show the project effort and productivity plotted for various sizes of project. As would be expected, we can see the productivity falling off as project size increases (see The Mythical Man-Month).

Tracking Model

The Tracking Model recognizes that the assumptions in the original estimate might not hold in practice. For example the number of resources assigned to the project might change, or the scope decreases or, more likely, increases. Thus the tracking model uses known measurements of the project, such as code produced to date, resources actually assigned, and current total project size, to calculate what the progress to date should be and to predict the revised project completion.

Estimated Scope (LOC)

Estimated size for the project (lines-of-code). This is the estimate at the beginning of the project, because scope creep and scope additions will inevitably occur.

Scope Creep (Creep)

Percent change month-on-month of the project scope (%). This requires careful measurement because, as has been shown elsewhere, projects can only tolerate a small amount of scope creep before they become ‘runaways’.

Estimated Scope including creep (ELOC)

= LOC * (1+Creep) + Any additional scope     

Estimated Scope (EFP)

= ELOC / LOC-per-FP

Project Duration (D)                              

Duration of project, measured from the original project start, assuming optimal resource allocation, no scope creep, and sustained productivity

= EFP^J

Required Project Resources (R)

Resources that should be allocated to the project for the duration

= EFP^(2*J) / 27

Actual Project Resources (AR)

Actual person-month assigned to the project for the period.

Estimated productivity (P)

Estimated lines-of-code per assigned person-month. Note that productivity reduces as the assigned resources increase (see The Mythical Man-Month).

= 27 * LOC-per-FP * (27*AR)^((1-3*J)/(2*J))

Estimated Production (PR)

Estimated lines-of-code produced in period based on assigned resources and estimated productivity.

= P * AR

Accumulated lines-of-code (ALOC)

Accumulated lines of code to-date based on assigned resources and estimated productivity.

Remaining Scope                                

Estimated scope less estimated accumulated code

= ELOC – ALOC

Accumulated Cost (AC)

Cost of actual resources assigned based on annual rate.

Cost per Line-of-code

A metric indicating how much each line-of-code is costing

= Accumulated Cost / Accumulated lines-of-code

Agile Productivity Ratio  

The promise of Agile/SCRUM is that productivity will increase as large teams are split into Scrum teams. As indicated before, productivity reduces as team size increases. This is the ratio of the productivity of a single team versus multiple Scrum teams with the same total resources.

Allowable scope change

Scope creep is a project killer. If each month the project scope is allowed to increase, then the project size will increase. If the project size increases, then the duration and the number of project resources follow. As project resources increase, productivity will reduce, further extending the project duration. As project duration increases, more scope changes can accumulate, further increasing project duration, and so on. The tipping point at which the project becomes a runaway has been determined and is calculated as follows:

= 1 / EFP^J
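
The sketch below pulls the tracking-model quantities together for a single month (illustrative values, with the same assumed J and LOC-per-FP as before): the scope including creep, the resulting duration and required resources, and the allowable monthly scope change before the project risks becoming a runaway.

```python
def tracking_snapshot(loc, creep, extra_loc=0.0, j=0.4, loc_per_fp=54):
    eloc = loc * (1 + creep) + extra_loc   # estimated scope including creep
    efp = eloc / loc_per_fp                # estimated scope in function points
    duration = efp ** j                    # months from the original project start
    resources = efp ** (2 * j) / 27        # required full-time persons
    allowable_creep = 1 / (efp ** j)       # tipping-point fraction of scope per month
    return eloc, duration, resources, allowable_creep

# Example: 2% scope creep this month on the 75,000 LOC project.
print(tracking_snapshot(loc=75000, creep=0.02))
```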

The graphs below show the results from a typical project. We see the project scope increasing (scope-creep), resulting in more resources being pulled in to tackle the retreating deadline. Note that without scope creep the project would have completed in month 23 even with the same assigned resource profile. This signals the hidden dangers of even mild scope-creep.

The above graph is based on the actual resources assigned. The graph below compares the estimated with the actual production of code over the same period, confirming the accuracy of the model.

Observations

  • Although there is only one factor in this model, estimates are quite sensitive to the value chosen. Therefore it is best to track the estimates with actual measurements to ensure the accuracy of the model and hence the tuning factor.
  • Include test code or not? If the development teams are using test-driven development (TDD) or any form of Agile/SCRUM then I think it is important to include test code along with production code in the estimates, at the same time including the test resources with the overall project resources. Generally I expect to see test code lines-of-code to be approximately 50% of the production code.
  • Counting lines-of-code should use the same tool, such as Visual Studio, for consistency. There is much debate about what should be included and excluded. However I think it is more important to simply be consistent because you will end up deriving your own tuning factor based on your assumptions.
  • There is a difference between development productivity and productivity of customer expectations. Just because the lines-of-code have been efficiently produced, error free and with great documentation, it does not mean that customer expectations have been met, because they might have wanted an entirely different solution.
  • Scaling lines-of-code to function points. Despite the superiority of Function Points (or Story Points) over lines-of-code as a measure of software size, lines-of-code seems to be more tangible to management. Thus although the model is expressed in function points, I expect most will estimate lines-of-code or convert (‘back-cast’ to quote Capers Jones) from lines-of-code to function points using a factor based on the type of programming language being used. For example there are approximately 54 C# lines-of-code per function point.

Associated Spreadsheet

To those who have got this far, I am sharing a spreadsheet version of this model that you are free to download and use for your own estimating. I hope it works as well for you as it has for me.

References

  1. Applied Software Measurement: Global Analysis of Productivity and Quality. Third Edition; Capers Jones; 2008
  2. The Mythical Man-Month: Essays on Software Engineering; Frederick Brooks; 1995


Our desire to add features and functionality to applications in order to solve the ever-changing challenges facing Process Manufacturing has driven up complexity. There has been a lot of focus on application product and program management using such techniques as Lean, Agile, and SCRUM, justifiable given the failure rate of IT projects. However our thesis is that the sustainability of an application over its 5-7 year deployment life deserves far greater attention; what is the advantage of having an optimal application that becomes shelf-ware after 18 months due to its user-complexity, when a sub-optimal application remains in use throughout its life? Such minimalism has been embraced very successfully at Apple (Steve Jobs and Minimalism).

IT Project Rule of Five

Some may have heard of the IT rule of 5: 5 times projected cost, 5 times projected schedule, 1/5th projected functionality! Not entirely unjustified when The Standish Group reported:

Specifically, 32 percent of IT projects were considered successful, having been completed on time, on budget and with the required features and functions. Nearly one-in-four (24 percent) IT projects were considered failures, having been cancelled before they were completed, or having been delivered but never used. The rest (44 percent) were considered challenged: they were finished late, over budget, or with fewer than the required features and functions.

Thus there is justifiable focus on the development and deployment phases of a Process Manufacturing application with some notable successes. For example we believe that adopting Lean Software Principles, using Agile development methodologies such as SCRUM, provides many advantages over the traditional waterfall approach, allowing usability problems to be detected early and avoiding excessive functionality that is not required by the end-user.

Projects are justified by the ongoing benefits to the organization. However we find that many successfully deployed applications are no longer in use a few years down the road. Ask yourself how many applications that you know have become shelf-ware, no longer used and fulfilling no useful purpose for the organization.

Given that most applications are justified using an assumption of 7-year use, an active life measured in months or a few years puts that application into the category of ‘challenged’, along with those that were late, over budget, or delivered with fewer than the required features and functions.

Declining Application Benefits

What is happening to cause the ongoing benefits to decline? There are many reasons, some of which are as follows:

  • Degradation of quality of application configuration:
    • Applications are becoming increasingly complex, so they depend on configuration that accurately reflects the underlying process. As soon as the originally deployed configuration starts to drift away from the actual changes occurring on the plant, the application becomes less useful and consequently less beneficial. An example might be an enterprise material balancing application whose model no longer reflects reality.
  • Loss of expertise, no business processes:
    • There is usually an internal application champion who sees the application through to successful deployment. This catches the eye of management so the champion is moved on to higher and better things. However without a champion succession plan, the application may lose its appeal, and usage will drop.
  • Failure to adapt to changing business processes:
    • Most applications are designed to reflect the business process, whether or not that is the ‘as-is’ or ‘to-be’. Unfortunately business processes are a moving target. New regulations might change the way you have to do things. Better operational procedures might demand a change in business processes.

Process Manufacturing Application Focus over the Years

Perhaps the problem of sustainability lies in the process manufacturing application focus. Once the problem was that of measurement data in silos, solved by the introduction of real-time historians. However that created the problems of data visibility, solved by the introduction of graphical user interfaces. This introduced data overload which was partially solved by the introduction of analytical tools to digest the information and produce diagnostics. Unfortunately these tools were difficult to deploy across all assets within an organization, so we have been trying to solve that problem with information models. The current problem is how to convert the diagnostics into actionable knowledge with the use of work-flow engines.

As we solve one problem, another emerges. Each layer of technology, however, increases the overall complexity making it more challenging to sustain any application.

Process Manufacturing Application Problems and Solutions over the Years
Period: 1985-1995 | 1990-2000 | 1995-2005 | 2000-2010 | 2005-2015 | 2010-
Problem: Measurement data in silos | Data access and visualization | Analysis and business intelligence | Contextualized information | Consistent actioning | Sustainability
Industry response: Real-time databases collecting measurements (proprietary) | Graphical user interfaces, trending and reporting tools (proprietary) | Analytical tools to digest data into information and diagnostics | Plant data models (ProdML, ISA-95, ISO15926, IEC 61970/61968, Proprietary) | ISO-9001, Outsourcing, Standards |
Consequence: Data but no user access | Data overload | Deployability of analysis to all assets | Interpretation limited to experts | Complexity, much more than RTDB, limiting sustainability | Improved ongoing application benefits
Response now: Composable integration (Semantic Federation of external data) | Composable UI (Portal) | Composable analysis (Complex Event Processing, OData entity descriptions) | Composable model (Semantic model) | Composable workflow | Composable platform and data (PaaS/DaaS), Composable applications

 

Changing Focus to Sustainability

So if we want to sustain an application’s benefits we need to start focusing on those aspects of the application’s technology, development, and deployment that are designed to improve sustainability. This may lead to less than optimal applications, but what would you rather have: the optimal application that yields 100$/year for 2 years, or a sub-optimal application that yields 80$/year for 7 years?

Sustainability of an application is a multifaceted problem. Below is a catalog of some of the problems, together with the proposed solutions.

All problems lead to sustainability
Problem Proposed Solutions Issue  
We cannot afford experts to create a rigid 5 year plan nor will we follow it with the staff turnovers we have. Solutions must deliver incremental value with additional solutions building upon the installed platform. Agility Sustainability
We are too small to have a large IT department. Solutions cannot increase the size of the internal IT. Outsource IT. Affordability
We cannot afford to be good custodians of our data, nor want to manage its increasing complexity. Outsource data management to the ‘cloud’: Data management as a Service, DaaS = Google Fusion/Dydra. Capability
We do not have in-house experts to continually adapt to my changing business needs and changing regulations. We must make the solutions easily adaptable with built-in best practices (data, presentation, analysis, and work flows). Adaptability
We are growing, and do not want the IT apps to impede that growth. We must make the solutions self-configurable or configurable with the minimum of skill. Configurability
We do not have the in-house skills to deploy such technology. We must make the solutions self-deploying with built-in best practices. Deployability
We will be employing Gen Y/Gen Z who are used to iPhone, Twitter, Facebook, and other social applications where they can change their own experience. “A true web2.0 app gets better the more people use it…” Tim O’Reilly. Sociability
I only want to pay for what I use, and want to avoid large capital investments. PaaS, SaaS, DaaS Cloud. Affordability
I don’t want my people to have to become IT experts to use any application. Applications should have the same intuitiveness of use as the iPad, Facebook, etc. Simplicity
My business and how I run it is different than everyone else’s; this is my competitive advantage. Composability of applications without constructing new, custom code. Composability
I do not want to put all my bets on a single vendor because they cannot be as innovative as the ‘crowd’. We should use standards and de-facto standards so that additional applications and utilities can be ‘crowd-sourced’. Constructability
I want my best practices to be owned and managed by the business rather than individual experts within the company. Analytics, diagnostics, work-flow automation, event-processing, inferencing, knowledge management. Quality
Regulations demand that we demonstrate strong governance over information and decisions Audited data and transactions, change management procedures. Auditability

Ensuring Solution/Application Sustainability

In the article ‘Solving or simply displacing problems: Application HAZOPS?’ the issue of problem displacement was discussed: do information technology ‘solutions’ create more problems than they solve, or do they simply displace problems to another group? HAZOPS have been used to systematically assess safety risks in processes and operations. Why not ‘HAZOP’ the supposed information technology solutions that we propose?

Thus given the potential problem of sustainability, we propose conducting a Solution Sustainability Assessment, based on HAZOP principles as shown below. This should be included within the solution evaluation along with conventional criteria such as project and deployment costs and schedules. 

Solution Sustainability Assessment (HAZOP) – Example
Solution Characteristic Intention Deviation Causes Consequences
Agility Solution must deliver incremental value with additional solutions building upon the installed platform. Business requires extended functionality due to change of business practice or regulation. Solution does not offer the required extended functionality. Add an independent solution offering the extra features and integrate together.
Affordability Solution cannot increase the size of the internal IT. Solution has increased internal IT support requirements beyond that which we can sustain. Reduced IT budget. Increased solution portfolio without increasing IT budget. IT support deteriorates.
Capability Solution cannot depend on establishing our own good custodianship of our data, nor manage its increasing complexity. Increased complexity of custodianship of configuration data. Effective use of the solution requires increased complexity in the custodianship of the configuration data. Trust in solution results deteriorates.
Adaptability Solutions easily adaptable with built-in best practices (data, presentation, analysis, and work flows) Business requires adapted business practices encoded in solution. Business practices change in response to new competition and regulations. Failure to capture benefits of improved business practices.
Configurability Solutions should be self-configurable or configurable with the minimum of skill. User fails to configure application to meet their changed needs. User training insufficient to allow self-configuration. Solution more complex than expected, inhibiting users from making their own changes. Users rely on in-house experts or IT to make changes, increasing costs.
Deployability Solutions should be self-deploying with built-in best practices. Upgrade of solution not self-deploying. Solution is within a domain with rapidly changing regulations. Solution upgrades delayed so that users are forced to use obsolete solution and outdated business practices.
Sociability Value of solution should increase with more users (see Should Enterprise Applications be evolving to Socialized Enterprise?). Addition of more users. Addition of more users simply increases IT load with no improvement of business; users not taking advantage of the intra-solution features (notifications, document sharing etc) to improve their business practices. Solution does not improve interactions between departments as planned.
Affordability Should only pay for what I use, and want to avoid large capital investments. Cost not proportional to users or usage. Licensed by server or site, not by users or usage. Reluctance to sustain license as cost per user greater than anticipated.
Simplicity Solution should not depend on my people becoming IT experts to use any application. Needs a local expert for effective use. Solution more complex or intricate than expected or solution’s configuration became overly complex. Hidden cost of sustainment.
Composability Composability of applications without constructing new, custom code. Need to improve efficiency in use. User needs to create new mash-up to avoid viewing separate reports and/or pages. User’s efficiency degrades
Constructability Solution should use standards and de-facto standards so that additional applications and utilities can be ‘crowd-sourced’. Need another utility. Need another utility. Expensive to add a new utility.
Quality Solution should have built-in process quality management: Analytics, diagnostics, work-flow automation, event-processing, inferencing, knowledge management. Measurability of process quality. Need to ensure that the process quality is adhering to Sarbanes-Oxley, ISO-9001, Lean and other guidelines/regulations. Lack of confidence in the quality of the processes that are automated by the solution.
Auditability Audited data and transactions, change management procedures. Some transactions now need to be audited. Change of business practice now requires additional transactions be audited. Solution may need to be replaced if auditing is mandated.