23 January 2007
Appendix to the Resilience Project’s White
Paper [1]
One of the core problems in the first-school/second-school controversy is a claim made by the founders of the second school: that the first school used the status quo to inhibit the development of a competing paradigm for artificial intelligence. How this was done, and what the long-term consequences will be, is a subject for historians' work.
From the current vantage point, the notion that business, and information-sector business in particular, has not served the public interest optimally is still very novel. The notion that market forces always work to advance our social values should be regarded as a hypothesis, not a religious principle. In the current case, the second school claims that market forces reinforced a poor evolutionary path for modern information technology development and design. These market forces achieved the current outcome when consumers and suppliers became the same community.
A set of observations suggests that federal funding decisions over the past thirty years compounded the error of reductionism. [2]
In the first school, intelligence is defined to be a process governed by a precise model that is located in some specific place. This school is best exemplified by the academic discipline of artificial intelligence, where the goal has been to create a replacement for human intelligence. The current Semantic Web activity has the related goal of producing ontological models, formal models driving computational machines, of natural social complexity. According to the first-school paradigm, systems that are naturally intelligent always use a model to express intention; the more fully a precise model is used, the more intelligent the behavior of the system. In the Semantic Web activities it is asserted that this model must be produced using the Resource Description Framework standard equipped with an ontology inference layer. [3]
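To make the kind of model at issue concrete, the sketch below uses Python's rdflib library to assert a few RDF triples with OWL class declarations, the raw material on which an ontology inference layer operates. The http://example.org/ namespace and the Person/Employee/alice names are hypothetical illustrations of our own, not drawn from the white paper or from any particular ontology.

    # A few RDF triples asserting a tiny OWL class hierarchy. The namespace
    # and the Person/Employee/alice names are hypothetical illustrations.
    from rdflib import Graph, Namespace, RDF, RDFS
    from rdflib.namespace import OWL

    EX = Namespace("http://example.org/")

    g = Graph()
    g.bind("ex", EX)
    g.bind("owl", OWL)

    # Declare two OWL classes and relate them: every Employee is a Person.
    g.add((EX.Person, RDF.type, OWL.Class))
    g.add((EX.Employee, RDF.type, OWL.Class))
    g.add((EX.Employee, RDFS.subClassOf, EX.Person))

    # Assert one individual; an ontology inference layer could then deduce
    # that ex:alice is also an ex:Person.
    g.add((EX.alice, RDF.type, EX.Employee))

    print(g.serialize(format="turtle"))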
How can the great majority of federal funding go towards supporting a paradigm that is in clear opposition to recent developments in natural science? Our answer is simple and direct. The W3C's definition is asserted to be the foundation of human knowledge representation in order to develop lines of business. We claim that these lines of business developed without adequate exploration of what types of system properties might be expected when capitalism works without any form of moderating influence.
Important cultural concepts do not consider the formal model to be the perfect expression of human intelligence. For example, we see in sports a clear recognition that the best performances occur without a pre-fixed model, and that agility develops when the behavioral expression is allowed to fix the pragmatics of situations in real time. Natural science contains a great deal of evidence that model building is not the ultimate expression of intelligence; there are other elements, including a real-time perception-action cycle that measures reality and makes real-time assessments about the consequences of actions. These measurements drive learning. Learning does not lead to a precise model that is held onto dogmatically.
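A minimal sketch of the cycle just described may be useful, under a deliberately simplified assumption: an agent repeatedly measures a noisy reality, assesses the consequence of acting on its current estimate, and revises that estimate rather than fixing a final model. The update rule and all names below are illustrative choices, not claims about how natural intelligence actually works.

    # A toy perception-action cycle: perceive, act, assess, revise.
    # The estimate is always provisional, never a finished model.
    import random

    estimate = 0.0        # provisional belief about the measured quantity
    learning_rate = 0.2   # how strongly each assessment revises the belief

    for step in range(100):
        measurement = 1.0 + random.gauss(0.0, 0.1)  # perceive: a noisy reading
        action = estimate                            # act on the current estimate
        error = measurement - action                 # assess the consequence
        estimate += learning_rate * error            # learn: revise, don't fix

    print(f"current estimate after 100 cycles: {estimate:.3f}")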
Of course, paradoxically, the presence of model-building activity is also vital. The point, beyond the paradox, has to do with the origin of the design of that model. If the user is not designing the model, then the model can be usurped by business simply to increase the economic value of a line of business. This is precisely what historians will record: the IT sector became its own customer, and the normal evolution of a marketplace became blocked.
Model-building behavior is clearly important, but is it essential to natural intelligence? The science is still out. Business, particularly the information technology and entertainment businesses of the last half of the last century, has acted as if there is no principled issue that natural science might raise in opposition to what has been done by the information industry. The principled issue is that of perceptual measurement and human awareness.
In an environment where models are built out of software and then owned, the clear danger is that participants in business activities will not engage fully in using the model, because the question of ownership has already been settled. The issue is not one of mere preference but one of mechanism. Ownership and creativity go hand in hand. Consistent with first-school thinking, individual participation is placed under the thumb of Weberian bureaucracy. [4] We see this in software systems designed to govern administrative and financial services, such as the software developed by the OMB's e-Gov project.
The technical issue compounds the legal and social issues. Current information technology is clearly confused about the difference, observed by natural science, between deductive and inductive inference, and more generally about the neuro-cognitive-quantum reality underlying how human brain systems support action and perception. This confusion has consequences when coupled with the self-centeredness of industry. History will see the first school as having been shaped by its subservient role to business interests. The conflict between business's need to own and science's current need to communicate was resolved in favor of business interests. These interests have shown a willingness to ignore key aspects of the nature of human communication. Due to federal funding mechanisms, academic computer science has developed to support an ever more powerful professional computer science, not to focus on the issues of human communication.
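Since the deductive/inductive distinction carries much of the weight here, a small sketch may help, using a hypothetical rain-and-wet-ground example of our own choosing: deduction applies a fixed rule with certainty, while induction generalizes a revisable estimate from observed instances.

    # Deduction: apply the fixed rule "if it rains, the ground is wet".
    # The conclusion is certain, given the premise and the rule.
    def deduce(is_raining):
        return is_raining

    # Induction: generalize from (raining, wet) observations to a revisable
    # estimate of how often rain co-occurs with a wet ground.
    def induce(observations):
        rainy = [wet for raining, wet in observations if raining]
        return sum(rainy) / len(rainy) if rainy else 0.0

    print(deduce(True))                                         # True, by rule
    print(induce([(True, True), (True, True), (True, False)]))  # ~0.67, from data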
Science should deal with the actual nature of reality rather than with assertions, imposed by business processes, that limit science to the production of ownership. Science must trump business. Currently business is represented as "living in reality," whereas scholarship is represented as bad for business. As in other debates, like the debate over global warming, the role of government must shift away from the support of a specific economic sector. The American democracy is a government by and for the People, not a government by and for corporate interests.
The Resilience Project will start by increasing the collective value of our common cultural heritage, demonstrating the higher value that natural science yields when business interests are separated from basic science. Positive value from the natural sciences will not be blocked from public view so that narrow business interests can impose yet one more layer of ownership and unnecessary costs. National interests in an increasingly competitive world social structure are served by increased transparency.
The Resilience Project may be the instrument
through which the American people will balance what is out of balance in our
economic, social and environmental systems.
Honesty will be returned to science.
The infrastructure supporting human communication and knowledge sharing
will be placed on a responsible footing.
Software ownership is not producing the best tools for our modern world. One can easily imagine a world in which ownership of algorithms did in fact drive innovation towards a better kind of computer science. However, this is not the world we live in. In our world, the drive to own is coupled with legalized deception, as well as with the confusion that developed at the heart of computer science over the notion of complexity.
Who owns the knowledge gained when an individual human interacts with models of reality? If the human perceiving the model interprets the model, does not that individual enjoy some rights of ownership, due to the individual creativity involved in the act of interpretation? The mechanics of perception involve creative private intentionality. By blindly allowing business interests to "own" models, one truncates the pragmatics that is added in the moment. The greatest power of human intelligence is blocked at the very point where it arises.
The core principle of the second school is that categorical meaning arises in the moment through an interpretative act of an individual. This interpretation involves human intentionality. Natural science suggests that an "induction" of categorical meanings always occurs as part of conscious experience, and that this experience occurs in the present moment, in real time. The interpretation becomes "owned" in that moment. If new aspects of ownership are not allowed, because of non-disclosure agreements and patents on software, the process of interpretation breaks down. The process of innovation is warped by undue burdens on human communication. In this circumstance, it is quite natural that local economic aspects become the controlling factor in what is reported and what is funded.
The asserted ownership of innovations in the
design of software may have acted against the public interest, by bringing to
the market those things that are asserted to be owned, while inhibiting the
development of collaborative tools and tools that facilitate several aspects of
natural human communication. The
history of the e-Gov project provides the perfect setting to examine the role
of software ownership. [5]
[1] Resilience Project White Paper. URL:
http://www.ontologystream.com/beads/nationalDebate/ResilienceProjectWhitePaper.htm
[2] We are referring to the paradigm that asserts that all causality is local and that these local reactions can be completely reduced to Newtonian mechanics.
[3] This is the so-called OWL (Web Ontology Language) standard from the W3C.
[4] See footnote #9 in the Resilience Project's White Paper, on Max Weber's viewpoint.
URL: http://www.ontologystream.com/beads/nationalDebate/ResilienceProjectWhitePaper.htm#_ftn9
[5] Fountain, Jane E. (2004). "Prospects for the Virtual State." National Center for Digital Government.