
 

Tuesday, November 15, 2005

 

 The BCNGroup Beadgames


 

 

Additional comments on web service execution environment and ontology → [200]

 

 

Semantic Execution Environment

 

To

 

James Bryce Clark <jamie.clark@oasis-open.org>

 

The concept of a program execution environment for "communicating about services" seems right.

 

Certainly the examples of ontologies used, or usable, in web services, as seen at:

 

http://www.wsmo.org/

 

are well done and could be of use in many environments. 

 

But there seems to be an incompleteness in the set of ontology models available for public use.  The issue may lie in the complexity of real deployments; it is this complexity that one faces in large-scale enterprise modeling.  A model of time and duration seems proper, but one also needs to understand the actual ontology of transactions of various types.  A foundation is developed in the WSMO non-functional properties (see [200-1]).
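
To make the point concrete, here is a minimal sketch, in Python, of the kind of transaction model with time and duration that real deployments seem to call for.  The type and field names are my own invention for illustration; they are not drawn from the WSMO specifications.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical sketch: a transaction type that carries the temporal
# and non-functional properties real deployments need. None of these
# names are taken from WSMO.

@dataclass
class NonFunctionalProperties:
    provider: str
    cost_per_call: float    # billing, as the WSMO passage below anticipates
    reliability: float      # expected fraction of calls that succeed

@dataclass
class Transaction:
    kind: str               # e.g. "purchase-order", "funds-transfer"
    started: datetime
    duration: timedelta
    properties: NonFunctionalProperties

    def ends(self) -> datetime:
        # A model of time and duration: the end point is derived,
        # not stored, so the two can never disagree.
        return self.started + self.duration

tx = Transaction(
    kind="purchase-order",
    started=datetime(2005, 11, 15, 9, 30),
    duration=timedelta(minutes=2),
    properties=NonFunctionalProperties("acme-ws", 0.01, 0.999),
)
print(tx.ends())    # 2005-11-15 09:32:00
```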

 

Part of the true requirement is expressed in a WSMO working document (authors not specified):

 

The evolution of service-delivery technology and the growing complexity of services demand a service engineering discipline just as the growing complexity of information systems demanded an information engineering discipline. Competitive advantage is moving toward communities of service providers and service consumers who share a social experience and co-create value in experience networks.  This dynamic marketplace has market participants entering and leaving the market continuously.  This places a demand on architectures to support communication and collaboration in ad-hoc environments and create billable services in near real-time.  The architecture must include social and technical infrastructures to support heterogeneous experiences and facilitate the co-construction of experiences and experience networks. Furthermore, the architecture must be “future proof” to support emerging technologies such as peer-to-peer computing, grid computing, and extreme mobility infrastructures for wearable computing.

 

This incompleteness is present at a time in history of heightened political awareness of the (potential) importance of web services.  These services need to be seen in real systems, having real social acceptance and use.  By this we mean that the examples at the WSMO web site are not sufficient in specific ways.  They are useful, but one wonders how to move to the more general case.

 

The most significant insufficiency is the absence of a stable web environment for the use of ontological modeling (perhaps without inference).  The notion of ontological modeling without inference is developed in my work, and the key difference between the notion of Human-centric Information Production and AI is described at:

 

On the AI Myth and Human Information Production

 

The semantic execution environment for web services might need a deeper understanding of event structure analysis than what is being proposed in the charter:

 

http://www.oasis-open.org/committees/semantic-ex/charter.php
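
To suggest what ontological modeling without inference could look like in such an execution environment, here is a small Python sketch: the ontology acts only as a shared, human-curated vocabulary for tagging service events, and no inference engine derives new facts.  The event types are hypothetical, invented for this illustration.

```python
# A minimal sketch of "ontological modeling without inference": the
# ontology is a fixed, human-curated vocabulary used to tag service
# events. No new facts are deduced. All names here are hypothetical.

EVENT_TYPES = {"request", "response", "fault", "compensation"}

def tag_event(event_type: str, payload: dict) -> dict:
    # Tagging is a lookup against the shared vocabulary, not a
    # deduction; an unknown type is rejected, never inferred.
    if event_type not in EVENT_TYPES:
        raise ValueError(f"unknown event type: {event_type}")
    return {"type": event_type, "payload": payload}

record = tag_event("request", {"service": "quote", "item": "A-100"})
print(record)
```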

 

 

The language being used by the Semantic Web community often leads to an incomplete technical specification, inconsistent with the common (and/or political) expectations.  Humans understand each other "easily", but how this occurs involves a lot more than knowing how to put words into a data structure and do computations on it.

 

Modern natural science suggests that human beings rely on there being a common substructure to the mechanisms that are involved in an aggregation from that substructure into mental events and speech acts.  This substructure certainly exists in physical chemistry, in the form of atoms.  Speech is produced from an aggregation and expression of phonemes [1].  Event structure for web interactions should have a common set of substructural atoms having the property of openness to new atoms and re-synthesis (clean-up) of the full set of atoms, in ways that do not disrupt existing or future web interactions.  There are various examples of execution environments that have this property (CoreSystem and Wikipedia being the two best examples).
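
The following sketch illustrates, under my own assumptions (it is not taken from CoreSystem or Wikipedia), the two properties just named: openness to new atoms, and re-synthesis (clean-up) of the atom set that does not disrupt aggregates already built from it.

```python
# A sketch of a substructural "atom" registry. New atoms may be added
# at any time (openness), and duplicates may later be merged into a
# canonical atom (re-synthesis) without breaking anything that already
# refers to the old name.

class AtomRegistry:
    def __init__(self):
        self._atoms = {}    # name -> canonical name

    def add(self, name: str) -> None:
        # Openness: a new atom may be introduced at any time.
        self._atoms.setdefault(name, name)

    def merge(self, duplicate: str, canonical: str) -> None:
        # Re-synthesis: the duplicate becomes an alias of the
        # canonical atom, so existing references keep resolving.
        self.add(canonical)
        self._atoms[duplicate] = canonical

    def resolve(self, name: str) -> str:
        seen = set()
        while self._atoms.get(name, name) != name and name not in seen:
            seen.add(name)
            name = self._atoms[name]
        return name

reg = AtomRegistry()
reg.add("purchase")
reg.add("buy")
reg.merge("buy", "purchase")    # clean-up, not disruption
assert reg.resolve("buy") == "purchase"
```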

 

Even in these systems, Wikipedia and CoreSystem, the computer merely reaches an efficient level of interoperability.  Understanding is not the same as interoperability, and the term "interoperability using substructural (program) aggregation" [2] is more proper than the phrase "machines understanding each other".  CoreSystem achieves this substructural program aggregation through the introduction of an iconic language having an advanced theory of substructural type, called by Klausner "cubism".

 

There is real progress being made within a limited set of "services" and service types, and this real progress cannot be pointed out enough.  Budgets depend on the perception of success (where success is defined in some way).  But will the progress be stable?  Will the work done lead to long-term stable (and hopefully freely available) standards?  Is there a larger issue related to the drivers for standards (business interests narrowly defined, as opposed to social interests more broadly defined)?  Is there a business community resistant to the development of interoperability using substructural aggregation?

 

Suppose that everyone uses the "machines that understand each other" language (common to Semantic Web committees).  "Everyone" might mean "those in the business community trying to do constructive work that leads to a paycheck".  The definition of "everyone" is then set.  But many individuals are doing other things; i.e., the context of "everyone" is not consistent with social reality.  The business community might complain that it has a right to earn a living, but if there is shallowness in how the standards committees define information technology standards, then this right impinges on the right of people using these standards to get work done.

 

This is an old controversy, which has become well ignored by the W3C and OASIS committees, although there was hope when Topic Maps were first being discussed.

 

There are those who feel that something should be said about the use of Semantic Web language.  I am one of these people. 

 

Over the past years, we have really had to retreat from the standards activity.  The scientists are perplexed about what to do in an environment where it seems as if the infrastructure for reasonable knowledge representation is a moving target.  The driving personalities (Jim Hendler, Tim Berners-Lee, etc.) always act as if some ultimate solution to something is about to appear for the first time, and then be stable from then on.

 

When one uses the phrase "interoperability using substructural aggregation", one introduces a stratified theory of event causation.  Stability comes from the discovery of a periodic table of atomic elements.  In chemistry this periodic table is now known and provides stability to the chemical sciences.  In speech acts, the characteristic use of a set of phones [3] by an individual provides stability to the motor system and cognitive system.  In future "semantic execution environments" one needs sets of "substructural" elements and some common way to align web service requests when separate computational processes are using different substructural sets.  Breanna Anderson's work on "reconciliation of terminology differences" [4] is directly relevant here.
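
As a hedged illustration of such alignment (the vocabularies and mappings below are invented for this example; this is not a description of Anderson's actual method), two services using different substructural sets can reconcile their terms into a shared set before a request is compared or routed:

```python
# Two services name the same actions differently. Each side's terms
# are mapped into one shared substructural set, and requests are then
# aligned in that shared set. All vocabularies here are hypothetical.

SHARED = {"order", "cancel", "status"}

SERVICE_A = {"placeOrder": "order", "void": "cancel", "query": "status"}
SERVICE_B = {"submit": "order", "abort": "cancel", "lookup": "status"}

def align(term: str, vocabulary: dict) -> str:
    shared = vocabulary.get(term)
    if shared not in SHARED:
        raise ValueError(f"no reconciliation for term: {term}")
    return shared

# Two differently worded requests reconcile to the same shared atom.
assert align("void", SERVICE_A) == align("abort", SERVICE_B) == "cancel"
```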

 

My conjecture is that if the approach to web services is not "stratified", with substructural invariances forming the basis for interoperability, then the correct notions of the Semantic Web community will not be achieved, and their use of AI-type language will not be set aside.  Real enterprise interaction has unique elements that have to be understood and accommodated.

 

The use of pattern recognition techniques, from AI, seems right until one gets to the stability issue.  (There is more to say here.)

 

 

Dr Paul S Prueitt

founder

www.bcngroup.org

 

703-981-2676

 

 

 

 

 



[1] (From http://en.wikipedia.org/wiki/Main_Page: In human language, a phoneme is the basic theoretical unit that can be used to distinguish words or morphemes. That is, changing a phoneme in a word produces either nonsense, or a different word with a different meaning.  Phonemes in oral languages are not physical sounds, but mental abstractions of speech sounds. A phoneme is a family of speech sounds (phones) that the speakers of a language think of as being, and usually hear as, the same sound. A "perfect" alphabet is one that has one symbol for each phoneme.  In sign languages, a phoneme is a similarly basic theoretical unit of hand shape, motion, position, or facial expression. It was formerly called a chereme (or cheireme), but usage changed to phoneme when it was recognized that the mental abstractions involved are essentially the same as in oral languages.  Phonemics, a branch of phonology, is the study of the systems of phonemes of languages.)

[2] Googling the phrase "interoperability using substructural aggregation" returns the work by Prueitt on a National Project to define the knowledge sciences.

[3] See first footnote.

[4] See product literature from www.schemalogic.com and a description of a possible integration path in http://www.bcngroup.org/area1/2005beads/GIF/RoadMap.htm