[unrev-II] WILL SPIRITUAL ROBOTS REPLACE HUMANITY

From: Eric Armstrong (eric.armstrong@eng.sun.com)
Date: Thu Apr 06 2000 - 20:30:29 PDT

    Frode Hegland wrote:
    >
    > >> WILL SPIRITUAL ROBOTS REPLACE HUMANITY BY 2100? A SYMPOSIUM AT
    > >> STANFORD
    >
    > So, how was it? Was anybody there from this group? Have I missed
    > reports?
    >
    You had to ask. Now I need to write up the summary I've been planning
    for a while.

    Bill Joy pointed to the old enemies we have so far faced down since the
    middle of the last century: atomic power, germ warfare, and chemical
    weapons. But he noted that those technologies required big, expensive
    programs and were the province of a few governments (although the list
    is growing). The additional issues that face us in the next century,
    though -- biotech, nanotech, and robotics -- differ in two fundamental
    ways:
      1) They are potentially self-replicating
         Rather than being confined to a single instance, therefore,
         harmful effects can multiply exponentially.

      2) They do not require big, expensive programs.
         Especially with the growth of computing power and information
         access, we are "democratizing the capacity for evil", such that
         a Kaczynski in our midst could bring down major portions of our
         species, our civilization, or even the biosphere. (Biotech, for
         example, holds the ability to build "designer viruses" that
         lethally attack a given race.)

    The alternative he presented was "relinquishment" -- giving up the
    pursuit of knowledge in those areas.

    The panel offered a number of interesting counter-arguments and
    counter-proposals, though most were buried beneath a flurry of bad
    logic and specious reasoning. I did a fair amount of tongue-biting
    during the moments when the arguments "missed the point", but between
    those moments there were some well-reasoned hypotheses.

    To summarize:

    Ralph Merkle made the best-reasoned defense of nanotechnology, pointing
    out that "replication capacity" does not necessarily imply
    "self-replication". If the DNA-like information is stored on board, as
    in a cell, then the device is self-replicating. But if the instructions
    are broadcast from afar, then any component that is cut off from that
    stream can no longer reproduce. So the technology is controllable.
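
    [To make the broadcast architecture concrete, here is a toy Python
    sketch of the idea -- every name in it is invented for illustration,
    and no real nanotech control scheme is implied. The replicator holds
    no blueprint of its own, so cutting the broadcast halts replication:

        class Replicator:
            """Holds no on-board blueprint; it can only build a copy
            while instructions keep arriving from outside."""
            def build_copy(self, instruction_stream):
                steps = list(instruction_stream)   # consume the broadcast
                return Replicator() if steps else None

        def broadcast(blueprint, transmitting):
            # The external station: the instruction stream exists only
            # while the station is transmitting.
            if transmitting:
                yield from blueprint

        blueprint = ["frame", "motor", "assembler"]

        # Broadcast on: replication proceeds.
        print(Replicator().build_copy(broadcast(blueprint, True)))

        # Broadcast cut: the component can no longer reproduce.
        print(Replicator().build_copy(broadcast(blueprint, False)))

    Contrast that with a cell, where the blueprint travels on board and
    replication needs no outside stream at all.]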

    [He also made the point that business does not want to kill off its
    customer base, so it will never intentionally do harmful things.
    Unfortunately, that argument misses the problem. It is not what business
    intends that is the problem, but rather what businesses might do in a
    shortsighted quest for profit (witness MTBE) and, more importantly, what
    some badly misguided individual might do.]

    On the other hand, Ralph also made the point that if we run from this
    technology, and some malevolent person or government *does* pursue it,
    we will be left without any means to defend ourselves. So if
    there is a problem, we want to know about it as far ahead of time as
    possible. And if there is no problem, we'd like to know about that, too.

    Another counter-argument made by John Holland (I think) was that if a
    problem were unleashed by an individual, the massive resources of
    government and industry would immediately be brought to bear to find a
    solution. Although that approach has not been particularly effective
    with AIDS, the
    reasoning is that the massive computing power that is coming into our
    hands (the power of one thinking person in a single affordable system by
    2010, of *all* thinking people by 2030) will make it possible to solve
    such problems before they do egregious harm. [On the other hand, the
    equivalent of putting oil back into the Valdez always seems like a much
    easier problem to solve when you haven't been faced with it.]

    As for robotics and thinking machines replacing us, Holland pointed out
    that there are serious discrepancies between what we can get a computer
    to do and what humans do. He mentioned Herbert Simon, who estimated that
    it would take 10 years to develop a machine capable of beating the best
    human chess players -- in 1950. He also pointed out that for Deep Blue
    to play a great game of chess after analyzing millions of combinations
    every *second* was not very amazing -- what *was* astonishing was that a
    human could do so. [This is all the more astonishing given that humans
    -- including Grandmasters -- look at 35 positions, on average, before
    selecting a move. I have long felt that chess programs should be
    restricted to evaluating 50 positions. To play well under that
    constraint, they would have to do the same kind of pattern recognition
    that humans do; see the sketch below.]
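
    [Here is a toy Python sketch of what that restriction might look like
    -- the position encoding, the scoring functions, and the numbers are
    all made up for illustration, and no real chess engine is assumed.
    With only 50 evaluations allowed, brute force is useless, so the
    budget has to go to moves a pattern heuristic already ranks as
    promising:

        EVAL_BUDGET = 50   # the proposed restraint

        def pattern_rank(position, move):
            # Stand-in for human-style pattern recognition: scores a
            # candidate move *before* any lookahead is spent on it.
            return len(move)               # placeholder heuristic

        def evaluate(position, move):
            # Stand-in static evaluation of the resulting position;
            # each call consumes one unit of the budget.
            return hash((position, move)) % 100

        def choose_move(position, legal_moves):
            budget = EVAL_BUDGET
            best_move, best_value = None, float("-inf")
            # Consider only the top-ranked candidates, much as a
            # Grandmaster considers ~35 positions per move.
            for move in sorted(legal_moves,
                               key=lambda m: pattern_rank(position, m),
                               reverse=True):
                if budget == 0:
                    break
                value = evaluate(position, move)
                budget -= 1
                if value > best_value:
                    best_move, best_value = move, value
            return best_move

        print(choose_move("start", ["e4", "d4", "Nf3", "c4"]))

    The interesting research problem, of course, is pattern_rank -- which
    is exactly the part today's brute-force programs get to skip.]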

    He also gave some insight into the human pattern-recognition
    process -- the process behind patterns like the ones players see on a
    chessboard. It turns out the human eye darts from place to place,
    absorbing bits of the picture, and that the "darting" action is governed
    by deep cognitive processes we don't understand. The darting actions
    themselves are called "saccades" (sah-cahds), and they are integral to
    human pattern matching. [Another note on chess: given a chess position
    in which the best move found by analysis was reasonably obscure, two of
    seven Grandmasters considered the move in their deliberations, while
    none of the five or six Masters in the study considered it -- another
    indication of the degree to which the limited set of moves considered is
    controlled by pattern recognition.]

    Anyway, the point was made that many of the things we simulate with
    computers today don't really come near to performing in ways that we
    could consider "intelligent", much less "self-aware", "conscious", or
    "spiritual". So we probably don't have to worry about robots for a while
    yet.

    [There were other predictions about glorious futures, like biotech
    having the capability to eliminate world hunger. That strikes me as a
    good idea because, if there *is* a possibility that a small government
    or even an individual could wreak massive harm, then we really *want*
    everyone on the planet to be as happy and comfortable as possible, and
    we had better get busy thinking about how to get them that way just as
    soon as we can. However, I note with chagrin that no business is funding
    genetic research for a wild tomato that grows like crazy in adverse
    conditions and produces abundant fruit. Instead, I see funded research
    for a tomato that can withstand stronger pesticides -- so we can pump
    even *more* pesticides into the ecosystem and keep profits up! So it
    seems to me that even with a magic ray gun that produces powerful
    orgasms, the so-and-sos that run our businesses wouldn't have the sense
    to know which way to point it...]

    Thus endeth the sermons, diatribes, and soapbox standing, cleverly (?)
    disguised as a summary...




