RE: [unrev-II] Is "bootstrapping" part of the problem?

From: Garold L. Johnson (dynalt@dynalt.com)
Date: Tue Dec 19 2000 - 09:55:01 PST


    I am new to this group, and I hesitate to step in here as it looks a lot
    like a mine field. That is clearly not enough to stop me!

    -----Original Message-----
    From: Paul Fernhout [mailto:pdfernhout@kurtz-fernhout.com]
     First let me summarize: there is more to living than "intelligence".
    Intelligence doesn't call one to act, "desire" does that. "Intelligence"
    doesn't define why one should do one thing rather than another, unless
    one already has "values".
    <SNIP>
    We are talking
    about putting ever more powerful "intelligence" in the hands of
    organizations that have already shown themselves capable of building
    50,000 nuclear warheads, letting close to a billion people starve, and
    dumping PCBs in water bodies and resisting attempts to clean them up.
    One must question the desires and values of such organizations, even if
    to an extent some of those decisions may have also been due to faulty
    reasoning or lack of knowledge (i.e. nukes=MAD, starvation=racism,
    PCBs=ignorance).

    [Garold L. Johnson] While what you say is true, I believe that there are
    some points being missed.
    The expansion of technological ability continues to outstrip our ability to
    make use of it, to reason about it, and to deal with the issues of values
    and desires. This is nothing new, though; it has been going on for a very
    long time, because those who can think have allowed those who don’t to set
    the values agenda. Ever since science agreed to stay out of certain aspects
    of knowledge in order to keep from being destroyed by the church, it has
    refused to deal with any of the “soft” issues. The result is a strong
    tendency for those working on the soft issues (hardly sciences) to be
    unqualified in science, and for those who are qualified in science to avoid
    the soft sciences.
    The ability to think better empowers those who think and does very little
    for those who won’t.
    If there is a promise for the future, IMO, it lies in the fact that
    continuing growth in computing capability makes it possible for small teams
    to tackle and accomplish feats which only a few years ago were possible
    only to major corporations or governments.
    As we develop the tools and techniques for organizing knowledge into
    accessible information and increase the possibility of learning supported by
    better information tools, we begin to break the stranglehold that
    governments have on education, and the dependence on large organizations of
    all sorts.
    When a small group of individuals can perform the research required to bring
    about some of the goals that you consider important, there is a chance of it
    getting done. If the future relies on our ability to convert bureaucracies
    or mass humanity to any better way of doing things, we are indeed doomed.

    1) Value Affirmation. There should be an affirmation of core human
    values and humane purposes in a statement of purpose for "bootstrapping"
    as defined by the Bootstrap Institute.
    [Garold L. Johnson] Unfortunately, we can start that debate and expend all
    of our energy on it and get nothing accomplished. I would prefer that we
    create a set of tools that make it possible for those who are willing to
    work through the mammoth amount of knowledge required to investigate the
    major issues that you raise. Part of the reason for staying out of the soft
    areas is that the amount of information that has to be understood and
    manipulated to deal with even the simplest of social issues continues to
    outstrip the abilities of those who would do so. Until we can begin to
    understand and model how we work together to achieve any goal, it seems
    unlikely that we will have much impact on it.
    I believe that we are at a point where we need to begin to take charge of
    our own intellectual evolution or perish. If we allow our next set of
    institutions to develop with no more thought than the current set, we are
    indeed headed for trouble.
    However, the view that all of our problems would be solved if only others
    saw the issues as clearly as we do is self-defeating. All utopian ideas
    basically amount to “all that has to happen is for human nature to change
    to the way I would like it to be”. It isn’t going to happen.
    If social goals are going to be met, it will be done by people who: already
    have such goals, develop the necessary tools and abilities to accomplish
    those goals, and set about getting it done.
    As a consequence, developing the tools that make it possible and providing
    them to the small groups that have the values and the desire seems to me to
    be the only realistic road out.
    I submit that our problem isn’t so much too much technology as an inability
    to marshal the knowledge necessary to apply it well. As we gain more
    information on how natural systems work, for example, approaches such as
    organic farming, which works with natural systems to produce more and
    better food without massive amounts of chemicals, offer the possibility of
    bypassing the large dinosaur systems that currently have to supply the
    chemicals. If there is going to be a $5 box that will power a village, it
    will far more likely be the result of a small group working to solve that
    problem than of the existing system deciding to build such a device. This
    is knowledge and research that is just now becoming available to groups
    small enough to care.
    2) Understanding Exponential Growth. To the extent the colloquium still
    operates and desires to discuss issues that will have great (possibly
    negative) impact over the next few decades, the colloquium needs to have
    a focus on dealing with this problem of rapid exponential change itself
    and what it is leading towards.
    [Garold L. Johnson] I agree that some energy needs to be devoted to the
    problem of exponential growth, if only to address the technical issue that
    data inflow overwhelms all attempts to organize it, whatever the tools, and
    that answers are obsolete by the time you discover what they are.
    Addressing exponential growth with the expectation that our efforts will
    change it is wasted effort – it isn’t going to happen. The best we can hope
    for is to empower those willing to make a difference in the face of the
    growth.
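    To make the point concrete, here is a minimal sketch in Python; every
    number in it (growth rates, time scales) is invented purely for
    illustration and is not a measurement of anything. If data inflow doubles
    on a fixed period while the capacity to organize it improves only
    linearly, the unorganized backlog grows without bound, and any answer
    computed from a snapshot is stale before it is finished.

        # Toy model, not a measurement: all constants are invented for illustration.

        def data_inflow(year, base=1.0, doubling_period=2.0):
            # Data arriving per year, doubling every `doubling_period` years.
            return base * 2 ** (year / doubling_period)

        def organizing_capacity(year, base=1.0, improvement_per_year=0.5):
            # Data we can organize per year, improving by a fixed amount each year.
            return base + improvement_per_year * year

        backlog = 0.0  # data that arrived faster than it could be organized
        for year in range(21):
            inflow = data_inflow(year)
            capacity = organizing_capacity(year)
            backlog += max(inflow - capacity, 0.0)
            print(f"year {year:2d}: inflow {inflow:8.1f}  "
                  f"capacity {capacity:5.1f}  backlog {backlog:10.1f}")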

    3) Accepting the Politics of Meeting Human Needs. Addressing human needs
    (beyond designing an OHS/DKR) was one of Doug's major goals and
    something that occupied many presenters in the Colloquium. The
    colloquium needs to accept that there are effectively no technical
    issues requiring extensive innovation related to supporting contemporary
    society that are of any significant importance.
    [Garold L. Johnson] This is true, but not terribly relevant, I am afraid.
    For years we have had the technological ability to carry out nearly any set
    of goals that we could get sufficiently widespread agreement to tackle. To
    the extent that there is hunger in the world, for example, it is held in
    place by governments and by those in power for whom that power is all that
    matters. This is lamentable, but it is a fact. Continuing to lament it
    isn’t going to change it. What will change it is empowering those with the
    will to do something more than talk about it. This is where the efforts we
    are discussing can have value.
    We need the ability to manage knowledge in much greater volume much faster
    than we can today before we can even think meaningfully about why it is that
    the conditions we decry exist and what can be done about them in human
    terms.
    Very few people think in any measure. Even fewer think clearly to any
    degree. We have yet to devise the tools and techniques for dealing with
    human values and motivations in any meaningful way. There is no agreement
    about how to reason about issues of values, since reasoning about values has
    almost never been done in human history. It is a new area of discovery. We
    don’t have any rules of evidence, nor any concept of what proof means in
    this context.
    Additionally, when we enter the realm of social interactions, group
    dynamics, social mechanics, evolution of organizations, etc. there is no way
    to model the massive problems that arise. This is the entire area of
    “wicked” problems – problems where what we think of as independent
    variables turn out to be mutually dependent. This is an area for philosophical
    inquiry, certainly, but it is also an area in which the ability to model
    systems and make the information available is of utmost importance. This was
    Buckminster Fuller’s focus, and that effort has yet to succeed.
    <SNIP>
    So to the
    extent the Colloquium wants to focus on current issues (world hunger,
    California electricity crisis) it needs to support tools more related to
    dealing with politics or social consensus.
    [Garold L. Johnson] That is consistent with what I have been saying, but I
    believe that the issues for this forum are of the nature of “what factors
    involved in problems of the scale of human social and political interactions
    impact the requirements and design of the knowledge tools that we propose to
    build to assist in solving these problems?” That brings the effort into one
    of requirements elicitation in order to build an information management
    technology of sufficient power and scope to allow it to be used to address
    such problems.
    In my youth, I believed that what was needed to improve the world was a way
    to allow those who make decisions that impact the rest of us to have the
    relevant information and knowledge to make those decisions in an informed
    manner. It took several years for me to realize that until we did something
    about the unwillingness and the inability of those decision makers to think,
    and to think about the value systems they used to address the problems, just
    more information or better organized information wasn’t going to solve the
    problem.
    I now believe that this is a task that can only be accomplished without
    asking for or expecting support from the existing institutions. That happens
    by making it increasingly possible for individuals and small groups to live
    independently of those institutions (or at least more so than today). For
    example, IMO, our entire educational system is beyond redemption as the
    basic underlying belief structure is mistaken. Trying to get education to
    fix itself is not going to solve this problem. It won’t even be solved
    directly by private or home schooling, since the technology for both comes
    from the same pool that has caused the problem. If we can make it possible
    for individuals and families to learn, to educate themselves, and to
    discover things for themselves, then the educational system can be bypassed
    and allowed to wither. The vast majority of people wouldn’t use such a
    system if we had it, but some would, and they might be able to make a
    difference.

    If corporations now doing IT have the major goal of profit as opposed to
    "meeting unmet social needs" (to quote William C. Norris)
      http://www.digitalcentury.com/encyclo/update/william_norris.html
    then corporations whether they do IT or KM are irrelevant to human
    survival.
    <SNIP>
    The only
    hope to resist this is some form of government intervention or worker
    (individual or union) resistance. These decisions will all be made in
    bits and pieces, each one seemingly sensible at the time.
    [Garold L. Johnson] How can we seriously expect governments to provide the
    solution when they are the major source of the problem? The problem is not
    so much with the organizations as with the way that we as humans think or
    fail to think – organizations reflect that failure magnified. “The mind set
    that got us into this mess is not the mind set that will get us out of it”
    (loosely paraphrasing Einstein).
    Expecting the organizations and thought processes that brought about the
    current situation to resolve that situation is simply not reasonable. We
    need better ways of approaching knowledge and thinking. We need ways to
    model human behavior and human social systems far better than we can
    currently. Just decrying the fact that the current reality isn’t to our
    liking doesn’t move us closer to changing that reality. Complaining about
    corporations pursuing profit is pointless, as profit is an essential part
    of what they must do to survive. The way they pursue it is possibly open to
    change, but the fact is that an organization that doesn’t survive doesn’t
    have any chance of making a difference, as well-meaning but non-functional
    organizations demonstrate repeatedly. For example, we once had major
    problems with corporations pouring all sorts of smoke-related pollutants
    into the
    atmosphere. The places where I saw it resolved most quickly were those that
    discovered that the minerals and materials that could be extracted from that
    smoke were of more value than the cost of installing equipment to clean the
    smoke. If you want corporate behavior to change, change the profit picture.
    It is here that small scale research has some real potential.

    The corporate social form has had little time to evolve (a few hundred
    years?) so there is no guarantee that contemporary corporate
    organization forms will be capable of doing more than exhausting
    convenient resources (passing on external costs when possible) and then
    collapsing.
    [Garold L. Johnson] We should be so lucky that they will just collapse. They
    will get a lot worse before that happens. Worse, we have no better options
    to offer as a replacement. The problem is that the organization of the
    corporate social form as well as all our other social forms was completely
    undirected by any coherent human thought. Our problem remains that the
    evolution of social forms is far too slow to handle the expected rates of
    change, and that all attempts to devise a better scheme than “just let it
    happen” have been uniform disasters – social planning has failed nearly
    every time it has been tried. Is this failure because we lack the tools to
    model systems of this complexity, because we lack any way of thinking about
    the problems in the first place, or because we haven’t yet stepped up to
    expend the effort, energy, and thought necessary to address these problems?

    Obviously, to the extent KM could transform an organization like GE into
    one that makes good on their corporate slogan "if we can dream it we can
    do it" and deliver on their implied promises in their 1986 Disney Epcot
    center pavilion (underwater cities, space habitats) then KM will be
    useful.
    [Garold L. Johnson] KM will be useful whether GE uses it as you would like
    them to or not; it will be more useful to those who cannot currently offer
    any viable alternative to the GE’s of the world.
    There is one obvious exception to saying KM won't change the direction
    of organizations, which is to the extent humans as individuals in
    corporations have access to KM tools and might see the bigger picture
    and act as individuals. The only other hope is that a general increase
    in organizational capacity in large corporations or governments will let
    some small amount leak through for unsanctioned human ends (but the cost
    in human suffering to that approach is high
    [Garold L. Johnson] The human cost is high, yes. The problem remains that we
    haven’t yet demonstrated any system that can accomplish “unsanctioned human
    ends” with a lower human cost. The human cost of all other such efforts has
    been far higher than that of markets and corporations. While it is
    true that markets and corporations appear to be inefficient in many ways, we
    haven’t yet devised any system that works better. I think that devising and
    modeling such a system would be a great thing to do. We need useful KM at
    the individual level even to attempt that.

    <SNIP>
    One of the debaters made the point that even if capitalism is good at
    generating wealth, it is not good at distributing it. That is why I say
    capitalism without charity is evil. Taken to an extreme when machine
    intelligence is possible on a human level, capitalism as we now know it may
    leave (most) people behind, while at the same time owning or controlling
    all the resources, preventing most people from earning a living
    ("shading them out"). Historically, this has happened many times before
    [Garold L. Johnson] I have heard this countless times. If you think that
    capitalism is inefficient at distribution, try any other competing system
    and see how efficient it is at either production or distribution. Capitalism
    needs improvement, to be sure, but it is still the best that man has ever
    done in terms of the general well-being of the population. Don’t be too
    quick to discard it.

    <SNIP>
    I hope the situation does not come down to this, and that in the end
    charity will win out over avarice and a mentally disturbed need for
    excessive power. But it is by no means certain charity will win out,
    given the power of technology to amplify both the best and worst in
    people.

    [Garold L. Johnson] If you want certainty, you are in the wrong universe,
    sorry. What stands a chance is to improve the individual’s ability to
    cope with the world as it is evolving and to assist individuals in creating
    successful groups that can survive while accomplishing other worthwhile
    goals. The growth of computing has come closer to offering that than ever
    before in human history. What we could use is a way to leverage that
    development for worthwhile goals.

    Rod, what you are doing is worthwhile, as is what Doug is doing. But the
    deeper point is simply that dealing with overwhelming complexity due to
    rapid change is a different issue than meeting basic human needs right
    now. Both are important, but they are different issues.

    The technology and material resources to feed and educate all children
    (and adults) exists right now. There is enough to go around right now.
    The reason this does not happen is for political and social reasons --
    not technological. Technology could and will make some of the choices
    less hard (i.e. when $5 can feed a village forever instead of a few
    people for a few days) but still the issue is not primarily a
    technological one.

    [Garold L. Johnson] True, but not relevant. As you say, they are (somewhat)
    different problems. However, I don’t see that there is any way to solve
    these problems with the mind set that created them. It seems that you are
    advocating dropping everything and solving these basic human needs. Not only
    will that not happen, but I think that it is exactly the wrong direction.
    Well-meaning individuals and groups have been pushing for this for decades and
    the situation remains. Provide those people with better tools, and maybe
    they can build a door in the wall instead of continuing to beat their heads
    against the wall.

    <SNIP>
    I am not leveling this criticism directly at "bootstrapping" as the
    Bootstrap Institute and Doug tries to define it. What I am trying to say
    is that "bootstrapping" in terms of exponential growth of technology
    (which enables more technology etc.) is already happening. Bootstrapping
    is the given. So the issue is, how do we use related exponential growth
    processes to deal with this? To the extent Doug's techniques are used
    just to drive the technological innovation process faster, in no
    specific direction, they are potentially just making things worse. To
    the extent such techniques are used for specific human ends (example,
    dealing with world hunger, making medical care more accessible, ensuring
    children don't grow up in ignorance and poverty, reducing conflicts and
    arms races) they make things better.
    [Garold L. Johnson] How would you suggest that the development of knowledge
    tools can be accomplished in such a way that only those of good social
    conscience can make use of them? Since I know of no way to do that, the best
    that I see that we can do is to aim our requirements at the scale of these
    major social problems. If we develop anything less, it will help those with
    lesser goals without providing what is needed by those who would tackle
    problems of this scale. We would provide those we oppose with tools that
    they can use without gaining the tools that we need. It seems to me that if
    we are ever to tackle problems of the scale of human social systems, we are
    going to need tools and techniques that are far beyond what we currently
    have. That is what this effort seems to me to be all about.
    The thing is, in a world where
    competition (the arms race) has moved from physical weapons to infotech
    (both corporate and military), simply saying you will speed the arms
    race is not enough. In my thinking, it is the arms race itself that is
    the potential enemy of humankind, and the issue is transcending the arms
    race (whatever grounds it is fought on -- nuclear, biological,
    infotech).

    [Garold L. Johnson] Perhaps so, but without tools that can handle problems
    of the level of social systems, we aren’t going to fix it either.

    > I think
    > there is another way to explain bootstrapping that avoids this conflict,
    > but you
    > seem to be arguing against it. Can you clarify?

    I don't have a conflict in thinking about an OHS/DKR or working towards
    one. I accept the possibility that this bootstrap process may end badly
    for most of humanity. It is a shame, and humanity should try to avoid
    this looming disaster, and may well, but I have accepted that one can
    not save everyone.

    For over a decade I have wanted to build a library of human knowledge
    related to sustainable development. I as a small mammal am using the
    crumbs left over by the dinosaurs to try to do so (not with great
    success, but a little, like our garden simulator intended to help people
    learn to grow their own food).
    [Garold L. Johnson] This is exactly the sort of approach that I think has
    merit. The better the tools you have for building such a library, the more
    useful the result can be, both because of the design of the system and
    because of the degree to which you can accomplish it without government
    or corporate support.
    <SNIP>
    The way to put it is that "bootstrapping" has linked itself conceptually
    to an exponential growth process happening right now in our
    civilization. Almost all explosions entail some level of exponential
    growth. So, in effect, our civilization is exploding. The meaning of
    that as regards human survival is unclear, but it is clear people are
    only slowly coming to take this seriously.

    [Garold L. Johnson] The first step is to take it seriously. The second is to
    investigate what can be done about it. That is what I see going on here.
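    As an aside, the bare arithmetic of the point above can be sketched in a
    few lines of Python; every constant here is invented purely for
    illustration, so this is a toy comparison rather than a model of any real
    trend. Unbounded exponential growth and logistic (S-curve) growth that
    eventually saturates look almost identical early on, which is part of why
    the qualitative shift is so easy to ignore until it is well under way.

        # Toy comparison, not a forecast: constants are invented for illustration.
        import math

        def exponential(t, x0=1.0, r=0.5):
            # Unbounded exponential growth from an initial value x0 at rate r.
            return x0 * math.exp(r * t)

        def logistic(t, x0=1.0, r=0.5, K=1000.0):
            # Logistic (S-curve) growth that flattens as it nears the capacity K.
            return K / (1 + ((K - x0) / x0) * math.exp(-r * t))

        for t in range(0, 31, 5):
            print(f"t={t:2d}  exponential={exponential(t):12.1f}  "
                  f"logistic={logistic(t):8.1f}")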

    As one example, lots of trends:
      http://www.duke.edu/~mccann/q-tech.htm
    Lou Gerstner (IBM's Chairman) was recently quoted as talking about a near
    term e-commerce future of 10X users, 100X bandwidth, 1000X devices, and
    1,000,000X data. Obviously, IBM wants to sell the infrastructure to
    support that. But I think the bigger picture is lost.

    Even when people see the "trees" of individual quantitative changes, the
    "forest", the fact that these quantitative changes will have a qualitative
    effect on the business or human landscape, is ignored. Or if people see it, it
    is the "elephant in the living room" no one talks about (well obviously
    a few like Kurzweil or Moravec or Joy). More of everything yes, but
    always business as usual.

    To be relevant and good for humanity, Bootstrapping must address how
    this quantitative exponential growth will lead to qualitative changes,
    at what point if any an "S-curve" effect will set in, and how
    "bootstrapping" as an intellectual concept will do good amidst this
    setting.

    [Garold L. Johnson] I think that it is important to discuss how
    bootstrapping can support the goals and values that we bring to it, but
    primarily as a source of requirements for the technology itself. These
    issues are all part of the reality into which we wish to introduce
    bootstrapping, and they need to be taken into account. The issues themselves
    are part of the motivation of the effort. Attempting to solve these problems
    directly rather than developing the tools with which to address them is an
    invitation to more pointless debate and no accomplishment. Either you will
    meet with agreement regarding your social views, in which case you are
    preaching to the choir, or you won’t, in which case we will just expend more
    effort on the debate and still not have built the tools that might
    make a difference.
    This is the reason that I haven’t commented on any of the social views you
    express – my views or yours on any of this are, IMO, relevant only to the
    extent that we need to strive to evolve tools that will allow us to
    investigate the true nature of the problems and to model proposed solutions
    to see that they do what we intend, rather than produce some dramatically
    different result because we try to solve the problems (again) with
    inadequate tools
    and techniques.

    Thanks,

    Garold (Gary) L. Johnson
    DYNAMIC Alternatives <http://www.dynalt.com/>
    dynalt@dynalt.com <mailto:dynalt@dynalt.com>


