Re: [unrev-II] Is "bootstrapping" part of the problem?

From: Garold L. Johnson (dynalt@dynalt.com)
Date: Thu Dec 21 2000 - 10:23:25 PST


    To reiterate my main points, which perhaps are getting lost in the
    debate over future economics:
    1) Technological (as opposed to social or political) progress, while
    desirable for many reasons, is not required to solve basic human
    problems.
    2) The exponential growth of technology is both a threat and a blessing,
    and at this point is a given, and like fire we need to do what we can
    with it for good ends (however we define those, where we may not agree).
    3) To an extent the exponential growth of technology may help meet human
    needs of the disenfranchised through reduced costs it may seem
    desirable, but it is not required to do so. This means people driving
    technological innovation, including the Bootstrap Institute, should be
    clearer about what it is they are trying to accomplish. Is it simply to
    escalate the infotech arms race, is it to make charity more effective,
    is it in some belief in "progress", or is it for other reasons?

    -Paul Fernhout

    It seems this should probably start a different thread, but …

    >1) Technological (as opposed to social or political) progress, while
    >desirable for many reasons, is not required to solve basic human
    >problems.

    The question that I have in this regard is whether the social or political
    changes are really doable without better ability to collaborate, share
    knowledge, investigate systems and options, etc.
    One of the areas that I study is the question of how to reason about just
    the sorts of problems that are fueling this debate.
    * In complex systems, the obvious is not always correct. In fact, in
    most complex systems, many properties that seem obvious are simply not
    correct.
    * The penalty for answers based on incorrect models of the way the
    world works is failure to solve the problem. It is possible to hypothesize
    solutions that are simply unworkable if the model is incorrect.
    o E.g., it is possible to set up certain chess checkmate positions that
    can never occur in actual play, in the sense that there is no possible
    sequence of legal moves that results in the position. (See the short
    sketch after this list.)
    o I read a book (I think it was “Wasted Wealth” by ??? Smith – I’ll
    check it) in which the author makes some very good arguments that the amount
    of work that needs to be done is improperly allocated among the people doing
    the work. His thesis was that about 50% of the work being done was taking
    twice as many people as necessary, just to provide individuals a slice of the
    economic pie. His basic observations were sound, but his implicit assumption
    was that (some unspecified sort of) central planning would allocate work
    more equitably and efficiently, resulting in phenomenal increases in
    efficiency and productivity. The only problem is that the rearrangement he
    recommends cannot be accomplished by any mechanism he proposes, nor in any
    way that I can see working. People are not arbitrarily reassignable to
    tasks as the statistical approach would suggest.
    * In social and political debate there is a very strong tendency to
    assume that anything and everything that is in pursuit of a “good cause” is
    in fact possible simply because the cause is “good”. Supply your own
    definitions. The problem is that this is not true. The workability of a
    solution depends on the nature of reality and on how well the solution
    corresponds to that reality. The merit of the cause is no guarantee that a
    proposed solution can be implemented. The tendency, however, is to brand
    anyone who suggests that a proposed solution is unworkable as opposed to
    seeing the problem solved, and therefore (clearly!) in favor of continuing
    the problem, and therefore evil. While this mechanism tends to be more
    evident on the left, it has no monopoly. Until we can get a handle on the
    fact that reality is not malleable just because it is inconvenient, we will
    keep mistaking good intentions for workable solutions. There was a story,
    likely apocryphal, about a legislature trying to pass a law setting the
    value of pi to exactly 3 because the current value was too terribly
    inconvenient. It is difficult to credit that even legislators could be this
    dumb, but other proposed legislation ignores truth in less obvious ways.
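    To make the “unreachable state” point above concrete, here is a minimal,
    purely hypothetical sketch in Python (a toy state machine rather than real
    chess; the board size, move set, and coordinates are all invented for
    illustration). A target state can look perfectly reasonable on inspection
    and still be impossible to reach under the actual move rules.

        from collections import deque

        # Toy illustration (not chess): a piece on a 4x4 board that may only
        # jump exactly two squares horizontally or vertically. The "proposed
        # solution" is a target square; the question is whether any sequence
        # of legal moves actually reaches it from the starting square.
        MOVES = [(2, 0), (-2, 0), (0, 2), (0, -2)]
        SIZE = 4

        def reachable(start, target):
            """Breadth-first search over the (finite) state space."""
            seen, frontier = {start}, deque([start])
            while frontier:
                x, y = frontier.popleft()
                if (x, y) == target:
                    return True
                for dx, dy in MOVES:
                    nx, ny = x + dx, y + dy
                    if 0 <= nx < SIZE and 0 <= ny < SIZE and (nx, ny) not in seen:
                        seen.add((nx, ny))
                        frontier.append((nx, ny))
            return False

        print(reachable((0, 0), (2, 2)))  # True  -- same parity class as the start
        print(reachable((0, 0), (1, 1)))  # False -- legal-looking target, no legal path to it

    A real chess version of this check would require full retrograde analysis,
    but the moral is the same: whether a desired end state can be reached
    depends on the actual rules of the system, not on how desirable it is.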
    If this is a correct assessment, political and social problems are not
    totally independent of the ability to understand complex systems,
    particularly social systems. There was an attempt to replace a series of
    “water temples”, the traditional mechanism for allocating irrigation water,
    with a “scientific” system. The eventual discovery (I don’t know whether it
    was before or after the temples were displaced) was that the temple system
    came closer to an optimum solution than any software mechanism they were
    able to devise.
    Therefore, I contend that the problems that are social or political rather
    than technical may well require that we understand more about the nature of
    the social or political systems that have to be modified than we ever have
    before, and *that* is a KM problem of real magnitude. The solutions to the
    social and political problems are not going to happen just because it would
    be convenient.

    We don’t have models for even the most obvious issues. Consider the way
    polarization on a problem works, for example.
    * A problem is stated as being a major issue.
    * One or several solutions are proposed.
    * Nobody bothers to define what the desired outcomes really are or
    whether there is any set of outcomes upon which agreement can be reached.
    * Forces polarize on the nature of the solutions, some adamantly
    opposed, others adamantly in support. The other viewpoint is characterized as
    benighted, misguided, and (eventually) evil.
    * At this point, any attempt to investigate either the validity of the
    proposed solutions or the existence of actually workable alternatives is
    attacked by both factions.
    * From there, there is no chance of arriving at any workable
    proposal, because only those in one faction or the other are ever heard.
    If we can’t find a way around this problem, the chance of solving other
    social and political problems seems to me to be vanishingly small.
    We don’t understand how groups organize or what contributes to their success
    or failure. There are all sorts of explanations for business failure rates,
    for example, but the only things that can be said with any definiteness are:
    * Every enterprise that fails does so because one or more things that
    were essential to its survival were not accomplished correctly or to an
    adequate level. This is a tautology, and yet it gets lost in the myriad of
    “single point” explanations.
    * We still haven’t identified a workable set of success factors for
    organizational success.
    * As a result, every new organization begins in ignorance of whatever
    success principles there might be and ends up having to discover them by
    trial and error; worse, the search for success factors is usually not even
    explicit within the group.
    * We are having similar problems with this forum. We have little
    agreement on what we are trying to do, why we are trying to do it, or even
    how to frame these questions in a way that stands a chance of arriving at
    answers rather than endless, largely pointless debate.

    In short, I contend that certain technological advances are essential to the
    solution of some social and political problems, and that among those
    advances are tools that allow people to collaborate effectively and to
    investigate the workings of complex systems. Without these we cannot form
    successful groups that can
    * Formulate problems in ways that permit of solution
    * Allow self-organization of individual efforts
    * Evaluate proposed solutions for actual workability, resulting in
    workable programs for achieving the solution.
    * See that solutions are implemented effectively, and are modified
    when (and only when) reality contradicts preconceived notions.
    We can’t accomplish this in the relatively simple case of defining and
    implementing a set of software tools. Let’s not even consider the next
    larger problem of how to organize efforts to develop successful software
    systems (any candidate definitions for what it means for a software
    development project to be successful?). Just how does anyone suggest that we
    go about tackling world-scale problems of vastly greater complexity when we
    can’t begin to handle such a small-scale endeavor?

    >2) The exponential growth of technology is both a threat and a blessing,
    >and at this point is a given, and like fire we need to do what we can
    >with it for good ends (however we define those, where we may not agree).

    Here I agree. There are some forces that we aren’t going to be successful at
    opposing no matter how we view them. The best that I can see is to try to
    find ways to attack problems of interest to us while the rest of the world
    does what it will.
    Realize that as bad as things may appear, we have more people with more
    energy that doesn’t have to be devoted directly to survival, and more tools
    for them to work with than at any time in history. A cynic would say that
    this results in too many people with too much time on their hands.
    While the remaining problems may indeed need solutions, it is necessary to
    maintain some degree of historical perspective. In short, a far greater
    percentage of humanity has a higher standard of living than at any time in
    history, and that seems to be improving. Even that supposition can’t be
    evaluated with currently existing KM capability. Certainly, just stating that
    there is a problem, and then that anyone who disagrees with the currently
    proposed solution (workable or not, known to be workable or not) is somehow
    part of the problem, is not going to get these problems solved.

    >3) To an extent the exponential growth of technology may help meet human
    >needs of the disenfranchised through reduced costs it may seem
    >desirable, but it is not required to do so. This means people driving
    >technological innovation, including the Bootstrap Institute, should be
    >clearer about what it is they are trying to accomplish. Is it simply to
    >escalate the infotech arms race, is it to make charity more effective,
    >is it in some belief in "progress", or is it for other reasons?

    If you don’t believe that the tools will support the efforts that you
    consider socially worthy, don’t support them.
    The intent is to build a tool of such generality that I don’t see how the
    use of the result can be constrained by anything but its lack of capacity.
    Nor do I see how better tools for collaboration and for helping groups
    manage their efforts are in any way detrimental to the accomplishment of
    social agendas.
    How do you build a system of the generality being proposed that can be used
    only for “good” uses or that cannot be used for “good” uses?
    I can see no way to force such constraints on a system like this except to
    build it on models of authoritarian management, or to develop a solution
    that is so limited that it cannot manage efforts of the scale of social or
    political solutions. Since I can’t see how we could possibly create a system
    that has the problems attributed to it, I can’t see how this debate
    is useful.

    If we really want to see that the evolution of such a system is appropriate
    to the sorts of problems that we want to tackle, we need to look at the
    requirements on the system that are levied by the nature of the efforts
    required to address problems of the complexity that we face, not at the
    specific problems, their proposed solutions, or the moral benefit to be
    derived from their solution.

    As a simple example, consider a tool that would allow proponents to create
    proposals that are at least self-consistent and make some attempt at
    completeness.
    Take a look at any piece of legislation as a document, and it is clear that
    we need better ways to evolve and organize knowledge and information. This
    is completely aside from whether you agree with the legislation or can even
    understand what it proposes.
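    As one hypothetical sketch of the smallest useful version of such a tool
    (the variable names and the constraints below are invented purely for
    illustration): if the commitments a proposal makes could be captured as
    simple logical constraints, even a brute-force check could report whether
    the proposal is self-consistent at all.

        from itertools import product

        # Purely illustrative: three yes/no commitments a proposal might make,
        # plus a constraint standing in for the rest of the document's text.
        VARIABLES = ("raise_spending", "cut_taxes", "balanced_budget")

        CONSTRAINTS = [
            lambda v: v["raise_spending"],    # the proposal promises more spending,
            lambda v: v["cut_taxes"],         # lower taxes,
            lambda v: v["balanced_budget"],   # and a balanced budget...
            # ...while (in this toy) the rest of its text makes the combination impossible:
            lambda v: not (v["raise_spending"] and v["cut_taxes"] and v["balanced_budget"]),
        ]

        def consistent_assignments():
            """Enumerate every truth assignment and keep those meeting all constraints."""
            for values in product((True, False), repeat=len(VARIABLES)):
                assignment = dict(zip(VARIABLES, values))
                if all(check(assignment) for check in CONSTRAINTS):
                    yield assignment

        print(list(consistent_assignments()))  # [] -- the proposal contradicts itself as written

    Real proposals obviously do not reduce to booleans this neatly; the point is
    only that even minimal machine-checkable structure would expose
    contradictions that prose hides.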
    If we could add some ability to model at least some of the possible effects
    of implementing these proposals, we could advance dramatically the ability
    of people to achieve the ends they agree upon and organize to achieve.
    *Then* we might have tools that would allow a debate such as this to be more
    than an exercise in using bandwidth.
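    In the same spirit, “modeling at least some of the possible effects” could
    start as small as a discrete-time projection that makes a proposal’s own
    assumptions explicit. The variables, growth rates, and ten-year horizon
    below are hypothetical numbers chosen only to show the shape of such a tool.

        # Toy projection: what a proposal's own assumptions imply over ten years.
        def project(spending=100.0, spending_growth=0.05,
                    revenue=95.0, revenue_growth=0.03, years=10):
            """Step a two-variable model forward and track the cumulative shortfall."""
            shortfall = 0.0
            rows = []
            for year in range(1, years + 1):
                spending *= 1 + spending_growth
                revenue *= 1 + revenue_growth
                shortfall += spending - revenue
                rows.append((year, round(spending, 1), round(revenue, 1), round(shortfall, 1)))
            return rows

        for row in project():
            print(row)  # (year, projected spending, projected revenue, cumulative shortfall)

    Even a model this crude forces assumptions into the open, where they can be
    argued about directly instead of by proxy.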

    Thanks.

    Garold (Gary) L. Johnson


