
On the separation of syntax and semantics and the issue of (real-time) pragmatics

 

 

Wednesday, August 18, 2004

 

The BCNGroup Beadgames

 

 

Substructural Semantics

Is one of the differences between the Semantic Web and the Anticipatory Web

 

Comment by Ken Ewell, founder of Readware

 

Response by Dr. Paul Prueitt, founder of BCNGroup Inc

 

Still being edited:  12:01 PM

Ken’s comment

Paul’s response

 

Others’ comments? (to be posted)

 

Footnotes inserted by Prueitt

 

Ken’s comment

 

The patterns are the regular recurring patterns found in reality, memory and language: Mother, Ball, Light, Mind, Man, Model, etc. 

 

The first phase of our work was finding the regularity in the substructural semantics.  The second phase was both singling out and then 'fitting' the patterns into a coherent schema that could represent any part of reality or domain of knowledge (the memory of concepts, names, processes, entities, etc.).

 

It is this schema that can be laid on top of, or within, the PriMentia system.  We have 4000 essential elements.  In our system each one is a hash mark in a binary map, and the files that contain these elements become a linked list.  I imagine that each one of the concepts will be a file in the PriMentia system. 

 

We did this once in an Oracle database and we had nearly instant indexing of data records and instant recognition of concepts.
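
 

To make Ken’s description concrete, here is a minimal sketch, in Python, of the kind of bitmap-plus-postings indexing he describes.  The integer element ids, the record ids, and the class and method names are illustrative assumptions, not the PriMentia implementation.

# Minimal sketch of the bitmap-plus-postings indexing described above.
# Assumptions (not from the original text): elements are identified by
# integer ids 0..3999, and a "record" is just a set of element ids.

NUM_ELEMENTS = 4000

class SubstructuralIndex:
    def __init__(self):
        # One postings list per element: the "linked list" of records
        # (files) that contain that element.
        self.postings = [[] for _ in range(NUM_ELEMENTS)]

    def add_record(self, record_id, element_ids):
        """Index a record: set a 'hash mark' in its binary map and
        append the record to each element's postings list."""
        bitmap = bytearray(NUM_ELEMENTS // 8)
        for e in element_ids:
            bitmap[e // 8] |= 1 << (e % 8)       # mark element e as present
            self.postings[e].append(record_id)   # enables near-instant lookup
        return bitmap

    def records_with(self, element_id):
        """Instant recognition of a concept: all records containing it."""
        return self.postings[element_id]

index = SubstructuralIndex()
index.add_record("doc-1", [12, 407, 3999])
print(index.records_with(407))   # -> ['doc-1']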

 


Paul’s comments

 

The BCNGroup board has made several new communications to the board members of the Santa Fe Institute regarding participation in the planning process.  One of them called, but has not yet agreed to participate in the planning for the National Project.  The National Project should nevertheless be kept in mind as we talk about the PriMentia patent and the various notions of substructural semantics. 

 

The process you describe follows a generally agreed-upon concept of how science is done. 

 

The National Project moves us closer to the type of financial support for innovation, separated from purely business processes, that is envisioned in the BCNGroup Charter.  I mention this now because of the absolute need for transparency in future funding decisions, and the documented diminishment of transparency in these decisions since 9-11-2001.  The need is not based simply on what is fair, but on what leads to functional deployments of a simpler information technology that is more useful and has higher measures of performance. 

 

Ken, as you know, a set of observed patterns can be acquired from any “target of investigation”.  As the pattern repository development process continues, new patterns are added to the set of patterns already obtained.  Pattern repository development is expressed in many types of organizational methodologies.  At a deeper level, this process is the process of doing science. 

 

In any pattern repository development process, one has a number of very specific issues to deal with:

 

1)  Measurement: The measurement process might be reasonable or not.

2)  Adjustment: The measurements might need interpretation, and thus early categorizations might need to be adjusted.

3)  Regularity: One would expect that, if the object of investigation is a naturally occurring “regular phenomenon”, the set of patterns becomes relatively complete at some point.

 

The measurement process is a critical aspect of our actionable intelligence process model, as discussed in the Anticipatory Web design document at [70]. 

 

SAIC’s Latent Semantic Indexing (LSI) engine is a good example where first measurements need to be modified by some type of reification process, so that the basins of attraction produced by the LSI algebraic transform, defined over incoming text elements, can be modified.  The modification assists the pattern repository development process so that patterns of occurrence can be studied.  More can be done in this area, including benchmarking against stochastic LSI and generalized LSI.  Generalized LSI is unpublished. 
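
 

For readers unfamiliar with LSI, here is a minimal sketch of the general technique (not SAIC’s engine): a term-by-document matrix is factored with the singular value decomposition, and documents are compared in the reduced space, where clusters of nearby documents play the role of the basins of attraction mentioned above.  The toy matrix and the retained rank k are illustrative assumptions.

# Minimal LSI sketch: project a term-document matrix into a low-rank
# space via SVD, then compare documents in that latent space.
import numpy as np

# rows = terms, columns = documents (toy counts; purely illustrative)
A = np.array([
    [2, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 2, 0, 1],
    [0, 0, 1, 2],
], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                        # retained latent dimensions
doc_vectors = (np.diag(s[:k]) @ Vt[:k]).T    # documents in the latent space

# Cosine similarity between documents in the reduced space; high
# similarity suggests membership in the same basin of attraction.
def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cos(doc_vectors[0], doc_vectors[1]))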

 

While I was senior scientist at Object Sciences and working on this for Army Intelligence, I independently advocated using link analysis in which the links, again ordered triples of the form < a, r, b >, are developed by a machine examination of the basins of attraction in the range of the LSI transform.  “Range” of a transform is the technical term for the output space ( see slides at (#) ).  This advocacy is discussed in my work on differential and formative ontology, in particular in the notion of “differential ontology”.  But neither the managers at Object Sciences, SAIC, Hicks Associates, and BAH, nor the INSCOM Senior Scientist, nor In-Q-Tel understood at that time what I proposed.  What I proposed, I developed at private cost into the Orb technology.  This technology has been precisely expressed in the Orb notational system. 
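
 

A hedged sketch of the kind of machine examination described above, under the assumption that a “basin” can be approximated by a set of items that land close together in the output space; the relation label and the item names are placeholders, not the Orb notational system.

# Derive ordered triples < a, r, b > from basins found in the range of
# an LSI-style transform.  Each basin is treated as a set of co-located
# items; the relation label "near" is a placeholder assumption.
from itertools import combinations

basins = {
    "basin-1": ["mother", "child", "family"],
    "basin-2": ["ball", "game", "play"],
}

def triples_from_basins(basins, relation="near"):
    out = []
    for members in basins.values():
        for a, b in combinations(members, 2):
            out.append((a, relation, b))    # one ordered triple < a, r, b >
    return out

for t in triples_from_basins(basins):
    print(t)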

 

It has taken many years for the business consultants to get to the point where such refinements in the use of LSI can be envisioned, and perhaps built.  Within a community of scholars, we have been over this ground many times over the past two decades.  The scholars better understand why one would develop technology in a particular way, that is, if the purpose of the process is to develop reasonable information technology.    

 

The imbalance between the weight of scientific opinion and MBA decisions is the primary reason the community of scholars developed the specific mechanisms described in the BCNGroup Charter.

 

Ontologystream Inc stands ready to work with SAIC on the use of LSI and the LSI “Ontology lens” that we invented and placed in the public domain in 2002.  As remarked by Ballard in the previous bead, the best discrete form of localized knowledge representation is the n-ary relation, but the simplest form is the ordered triple < a, r, b >.  The Orb constructions can encode either representation of data regularity within contextual constraints.  There is more to say about this.
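
 

As a small illustration of the n-ary versus triple point, the sketch below re-expresses an n-ary fact as a set of ordered triples < a, r, b > by introducing an identifier for the fact itself.  The names and relation labels are assumptions for illustration; this is not the Orb construction itself.

# Re-express an n-ary fact as ordered triples by reifying the fact:
# the fact gets its own identifier, and each argument role becomes
# one triple < fact, role, value >.

def nary_to_triples(fact_id, relation, **arguments):
    triples = [(fact_id, "is-a", relation)]
    for role, value in arguments.items():
        triples.append((fact_id, role, value))   # one triple per role
    return triples

# A 3-ary fact: purchase(buyer=alice, item=book, year=2004)
for t in nary_to_triples("fact-1", "purchase",
                         buyer="alice", item="book", year="2004"):
    print(t)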

 

The issue of regularity might be addressed with high specificity.  This specificity can appeal to the work done by the Soviet cybernetics community and captured by the formal notational system called quasi-axiomatic theory.  Ballard has, in the past, objected to this appeal to work that he does not know well. 

 

Or one can frame this as a process of descriptive enumeration (see slide 7).  In this case, the methodology for descriptively enumerating the set of patterns is something that children easily understand.  The challenge is to keep some so-called expert from declaring that descriptive enumeration will not work. 

 

Given the huge importance of semantic stratification, and the huge sums being spent on information systems that never seem to get completed, a test might be a reasonable thing to do at this time. 

 

But in all these cases, the most critical aspect of the work is that it cannot be developed in a proprietary and/or secret environment.  The issues need a broad and open review of the foundational concepts by financially disinterested scholars.  Many scientists have given written testimony to the BCNGroup that this is not what has been occurring.

 

The PriMentia technology has a patent (2003), as do the CCM technology and the Readware technology.  SAIC has rights to the LSI methods, and has an internal research group that knows what to do, if given the chance to show real progress.

 

So protection exists for those who develop real innovation.  Many in the community of information technology innovators feel, I might add quietly, that our problem is not so much protection as exposure and peer review.  The marketing forces involved in placing technology into government and industry are very powerful, while the marketing abilities of the single lone innovator are zero.  The same economic interests that govern the direct marketing of consulting services to government and industry also capture the processes related to government awards for Small Business Innovation.  It is a type of impenetrable groupthink. 

 

The BCNGroup Charter privatizes the objective evaluation of software patents by creating a testing environment where new innovations can be deployed in operational settings, those settings being in purely commercial and entertainment contexts.

 

Knowledge Sharing Foundation

 

And by creating a comprehensive stratified ontology over the concepts and claims expressed in software patent descriptions.  (The notion of an IP map of the patent space is similar to the M-CAM model for IP evaluation.)