Saturday, August 21, 2004

 

The BCNGroup Beadgames

 

History and foundational work in Knowledge Science

 

John:

 

As always, your scholarship is a prize for this community. It does suggest that your writings on this field are a very significant historical record, into which we should all be contributing artifacts from our own experience. Clearly you are the historian and teacher who has put his arms around this whole long period of significant achievement.

 

You were signally important in introducing me to declarative forms. The multiplicity of executable procedures which you favored initially was evidence to me that the forms and languages themselves were likely not foundational. When you footnoted, years ago, that the declarative form was a singular alternative, and then used it yourself with conceptual graphs -- that made me feel quite certain that I too was riding the more powerful of stallions.

 

I have a large piece of what is so far missing from your record and will go through my records to see if I can put together the more significant milestone artifacts for inclusion in your scholarly records.

 

Retroactively, the different periods have acquired Mark X labels since 1989. The Mark 0 period (1970-83) covers the NSF-sponsored "Technical Innovations in Education" centers-of-excellence projects (Irvine, Plato, and Mitre-Ticcit) under Andy Molnar; the subsequent shift of those authoring and production methodologies to micro-based platforms (Tektronix, Altair, Radio Shack, Apple, Commodore, Atari); and the formation of the micro-based software industry from 1975 (NSF MISSIP, Educulture, Hayden, Milliken, etc.) through 1985, as seen through all the micro-based software publishing start-ups (virtually every publisher) that were clients of the Apple Education Foundation and International Data Corporation TALMIS research and development support consultancies.

 

From my perspective, that period opened with the advance of computer graphics on time-shared networks and the introduction of a massive microcomputing market, and closed with the "Cost of Knowledge Crash" of 1984-87, which ended the micro's "first global computer literacy epoch."

 

The Mark 1 period (for us 1983-1990, for everyone else 1983-2004) is for us the model of most of the efforts we now see (mis)labeled as "industry leading." It saw the first significant tests of the system-of-systems aggregations of AI, object orientation, and the script-based method interpreters that foreshadowed mark-up languages and the Internet. It hangs on in the last faint hopes that execution, language, and a priori schema representations might someday be made "scalable" in complexity growth. FAT CHANCE!!!

 

From the perspective of the "computer science" record of both periods, the deficiencies are glaring because "academic computer science" never found its science and lost the bubble of even knowing where science or the engineering world was likely to go next.

 

The Mark 0 period was the first discontinuity, because the economics of software creation and the scope of production talent teaming required a dramatic step up from programs to products to projects to industries. Even those with strictly scholarly intents had to decide between writing about what others did in journal papers or becoming directly involved in leading and documenting their own inventions and discoveries in transferable "computing codes".

 

That period drained most departments of their engineering talent and brought in waves of mathematicians who scorned anything practical as too vexing and focused on making small set-piece artifacts provably correct. Ultimately the deans and alumni had to step in and force some measure of useful practicality through "computer science/engineering department mergers." Unfortunately, students assumed programming went with computer science and not with systems engineering, so they created today's constituency that props up departmental, and now school-sized, "technology following" scholarly cultures waiting for industry to name the next wave for them.

 

The Mark 1 period spanned the time when AI, the last practitioners of non-systematic, hand-made (LISP) codes, made their much-ballyhooed attempt to attract their own "outside" engineering constituency and industry sponsorship. Their gift was "knowledge = rules + databases", backward chaining from objectives to methods, procedural inheritance attachments, agents, case-based reasoning, etc.
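
For anyone who never touched those shells, the "gift" amounted to roughly the following pattern. This is only a minimal, hypothetical sketch, written in Python rather than in any shell's actual language; the rule and fact names are invented for illustration.

    # Illustrative sketch: a toy "knowledge = rules + facts" backward chainer,
    # in the spirit of the expert-system shells of that era. Hypothetical names;
    # no particular shell's language or API is implied.

    RULES = {
        # goal: list of alternative premise sets, any one of which establishes the goal
        "diagnose_overheat": [["high_temp", "fan_failed"]],
        "fan_failed": [["fan_rpm_zero"], ["fan_current_zero"]],
    }

    FACTS = {"high_temp", "fan_rpm_zero"}

    def prove(goal, facts=FACTS, rules=RULES):
        """Backward chaining: work from the objective back toward supporting facts."""
        if goal in facts:
            return True
        return any(all(prove(p, facts, rules) for p in premises)
                   for premises in rules.get(goal, []))

    print(prove("diagnose_overheat"))  # True

Note that all of the "intelligence" sits in the a priori rule base; nothing in the loop learns or changes from experience.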

 

After attracting military attention from "Star Wars" (SDI -- the Strategic Defense Initiative), they (AI) were told to sell directly to engineering firms already immersed in SDI system-of-systems programming. The C-based expert system shell was their public sub-system response to that demand. It is right there that academic computing lost the bubble of what happened next -- in part because the public record of SDI developments was presumed to be classified (though it never actually was), and the champions of advanced research had to create their own profitable business plans and methods for sustaining what became 20 years of R&D through advanced engineering development. I kept my eye on you, John, and on Fritz and Doug and the repeated efforts out of Stanford with KIF and ontology developments.

 

The Mark 1 period is the heretofore unknown, but unclassified, picture of what happened within the SDI architectural experiments and the origins of the "brittleness" clouds that caused engineers to throw out AI with little concern for telling the academic world why. When the grade is an "F," who wants to argue with the students by telling them why? The problem is that, not knowing why, the university teachers go on teaching all the same dumb stuff as if it should be important. The bottom line is that logic is a way to preserve forever what you think is "true" and worthy. If it is not worthy, its "trueness" is the trap that keeps you forever locked in bullshit up to your armpits. It guarantees that Eric Miller can sell RDF by claiming it is logically correct, and can make a priori schemas look scalable when they are not -- if given a reality that learns and changes behavior from direct experience.

 

I am convinced that knowledge science is the necessary vehicle for rescuing computer science from its preoccupation with executing procedure. The only excuse for creating and relying upon an ensemble of singular, non-reversible reasoning paths/procedures has been the cost of storage memory, and that constraint ended long ago. The endless, mindless, repetitious execution of the same procedure, without learning or changing anything, is not a science worth keeping -- especially when blessed as forever sacred, true, and unchangeable.

 

John, you have been the bastion in preserving our respect for logic and its history. That is why I think you are most important as a fair critic and judge of this science, one who fairly states what can and cannot be made of proving something logically true. Einstein put it this way:

 

"As far as the propositions of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality."

 

But everyone assumes Einstein is forever mysterious and deep, not to be asked questions like, "When is your 'truth' relevant to anything?"

 

I will start sending you some of the things I have in boxes marked Mark 0 and 1, and you can tell me if you find them useful.

 

Thanks to all.

 

Dick