From: Paul Fernhout <email@example.com>
My previous posting on "S" curve limitations related to bootstrapping
and evolution might lead you to think I don't believe in "explosive"
exponential growth. Far from it!
Summary: we've got global problems, but they are not what most people
might expect. The following is an essay on this subject.
========= trends in twenty years =========
Below are six "explosive" technology trends that all appear to
culminate in around twenty years. Even if some of them don't pan
out, the others will revolutionize our world (for good or bad).
Later, I also list four OHS/DKR projects related to coping with these trends.
These are the technological trends that I think have a high chance of
coming to pass in twenty years (or so):
* Infotech -- Twenty years to $1000 human AI equivalent (1 billion MIPS)
* Robotech -- Twenty years to advanced macroscopic manipulators (human-
like, strong, mobile, low power) for $1000
Source: My general understanding of this field; this is similar to
what is explained in Hans Moravec's book "Robot"
* Powertech -- Twenty years to widespread fuel cells, PV, and wind power
Source: My general reading in this area, such as my previous post
on energy issues.
* Nanotech -- Twenty years to a first microscopic universal assembler
Source: Drexler, etc. http://www.foresight.org/
* Biotech -- Twenty years to mastery of many human DNA mysteries
Source: The human genome project is finishing soon and more
rapid advances are expected.
* Commtech -- Twenty years to ubiquitous cheap wireless communications
Source: This is already happening now with cell phones, but
needs time to percolate throughout the world.
Intersections may happen sooner: power chips, DNA computers, bio chips.
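For what it's worth, the twenty-year Infotech figure is just compound
growth. Here is a back-of-envelope sketch; the starting MIPS-per-$1000
figure and the one-doubling-per-year rate are my own rough assumptions,
not taken from any of the sources above:

```python
# Back-of-envelope sketch (my own numbers): how many doublings of
# MIPS-per-$1000 separate a circa-2000 PC from the 1 billion MIPS
# "human equivalent" figure quoted above?
import math

start_mips = 1_000           # rough MIPS per $1000 around 2000 (assumption)
target_mips = 1_000_000_000  # the human-AI-equivalent figure above
doublings = math.log2(target_mips / start_mips)
years = doublings * 1.0      # assuming one doubling per year (assumption)
print(f"{doublings:.1f} doublings, roughly {years:.0f} years")
```

About twenty doublings -- which is how "twenty years" falls out of the
trend, if price-performance keeps doubling roughly annually.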
Of course, there is also what this colloquium is about -- Collabtech!
Collabtech arises from these other trends, although perhaps it is best
considered as a separate trend. I especially liked Ron Goldman's pointer
to the Chaordic Alliance http://www.chaordic.org/ founded by Dee Hock.
Collabtech such as developed and promoted by the Chaordic Alliance or
the Bootstrap Institute http://www.bootstrap.org may provide the most
hope for dealing with these other trends and the changes they bring.
========= the upcoming singularity =========
You may argue the dates -- ten years for some, forty for others. You may
point out Y2K didn't melt things down, that AI researchers predicted AIs
by now, that fusion power was supposed to be here by now, etc. And you
would be right to be skeptical. My point is that these are trends in
many different areas -- any one of which would make this world radically
different. Together, they spell awesome change -- in economics,
politics, lifestyle, relationships, and values.
It is quite likely we are heading for a singularity in the 2020 - 2040
time frame. By "singularity", I mean a sudden discontinuity in daily
life. I believe Vernor Vinge first coined this term in this sense.
Others, like Moravec ("Robot" 1998) or Kurzweil ("The Age of Spiritual
Machines" 1999) also point to this singularity.
By "singularity" I don't mean the end of the world -- just "the end of
the world as we know it" in the sense of radical changes to our
day-to-day activities, jobs, and plans.
========= exponential growth =========
Why is this "singularity" not that obvious?
These technological trends involve exponential growth for both capacity
and deployment. So, these trends may not be obvious to most until they
are unstoppable. We are very used to thinking in linear terms.
Exponential growth is non-linear.
For example, imagine it takes a year for one tiny duckweed plant to
cover a pond, doubling every day for 365 days. Here is how the last few
days play out (with day 365 fully covered):
    day 358:   0.78% covered
    day 359:   1.56% covered
    day 361:   6.25% covered
    day 363:  25.00% covered
    day 364:  50.00% covered
    day 365: 100.00% covered
Let's say it is your responsibility to keep the pond free of duckweed.
So, after 358 days of exponential growth, the pond is still less than 1%
covered. It would be easy to say on day 358, "Oh, the pond has just a
little duckweed on it. I've watched it for almost a year, nothing seems
to be happening. I'll go on vacation." And just seven days later when
you return, "BAM!" the entire pond is covered by duckweed -- seemingly
out of nowhere. In fact, you could swear (correctly) that the day before
your return you had your assistant check for duckweed, and the half of
the pond they looked at was practically empty of it!
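The arithmetic behind that surprise is easy to check. Here is a tiny
sketch (my own illustration, using the essay's 365-day doubling pond):

```python
# Toy illustration: a pond covered by duckweed that doubles daily
# and reaches 100% coverage on day 365.
def coverage(day, final_day=365):
    """Fraction of the pond covered on a given day, doubling daily."""
    return 1.0 / (2 ** (final_day - day))

for day in range(358, 366):
    print(f"day {day}: {coverage(day):.4%} covered")
# Day 358 is under 1% covered; seven days later the pond is full.
```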
====== technological solutions to world problems are a given ======
In common math, infinity times anything is infinity. If even one trend
goes way up an exponential growth curve (effectively infinity), it means
almost anything it interacts with becomes closer to infinity as well.
Effectively, any of these technologies could by itself easily solve
world problems related to pollution, recycling, power, water, food,
goods, and education. Any one apparent exponential "breakthrough" could
do this simply by redirecting the application of the rest of today's
technology in the other areas. For example, with enough power, every
material can be broken down in a high temperature plasma and recycled.
With enough communications, everyone can have their say or make their
needs known. With flexible robotics, all manufacturing is cheap. If
everyone is healthy, we can all do a lot more for our fellow creatures.
And so on.
However, there is plenty of food to go around today, and yet people
starve. This is mostly for political reasons (and to a lesser extent, as
a reflection of those, economic). Yet, we are now at the point where a
mouse click can keep a starving person alive another day through a
donation of food.
========= what is the real problem? =========
My point? Be careful in determining what the real problems are that need
to be solved. It is possible that any emphasis of a DKR/OHS on solving
the *technological* problems related to food, water, energy, pollution,
or recycling is unnecessary. These trends will provide the capacity to
solve these problems. If so, then what are the *real* issues we face in
the next few decades?
Perhaps these issues are:
* inspiring moral and spiritual leadership relating to wise use and
distribution to create the best future (whatever that is),
* conserving the natural environment during the transition, and
* planning for the worst case of these systems getting out of control
(the movie "T2", nanotech "gray goo", WW III, totalitarianism, etc.).
I'm not saying we don't need new technology, or technology libraries, or
directed technology efforts as part of those efforts. I'm just saying
those three issues may need to be primary in directing such technology efforts.
========= old strategies for survival won't work well in 2020 =========
Note that these trends will be almost impossible to prepare for as
individuals. One hundred years ago, if you were in good health, lived
in an isolated area, had some friends, and had some gold under a floor
board, you could expect that barring a very rare catastrophe, you would
be OK. It was like being on day 240 of the duckweed pond watch
mentioned earlier. The technological revolution was barely noticeable
compared to its exponential "singularity" form.
The next few decades cannot be prepared for that way. Having
money in the bank won't save you from a nanotech "gray goo" mishap
(unless it is confined enough you could purchase the right defensive
nanotech). Being well educated won't save you from being replaced by
a $1000 AI that doesn't sleep. Being in good health won't guard you
from an ethnically targeted virus released by some twelve year old
biohacker. Living in a peaceful hamlet won't protect you from
cyberwarfare getting into your car's autopilot. Working for the
government and having a pension and insurance won't provide for your
retirement if that government and its economics become meaningless or
change their meaning with a few mouse clicks. (I just realized this may
sound like a religious pitch for salvation -- it isn't intended as one.)
In short, I think we each are facing problems in the next few decades we
can't solve or prepare for as individuals. Yet, perhaps our best hope
for prospering into and beyond the singularity is to put an effort
into planning for it as individuals comparable to that we would put into
planning for retirement or a child's college education. And those
individual efforts must go into a collective effort like the OHS/DKR the
Bootstrap Institute proposes (or similar efforts, whatever they may be).
========= Millennium Project type efforts =========
Perhaps a first OHS/DKR effort should be on figuring out what the real
(non-obvious) problems of this twenty-first century may be?
I looked at The Millennium Project, but compared to the singularity
mentioned above, its scenarios for 2050 seem fairly tame.
However, they are obviously working in this area of determining what the
problem areas will be, so these forecasts might be improved.
I like the 2050 scenario they describe. I just think it will be
difficult to achieve. This is for two reasons. The first is the
potential of independent evolution in intelligent machines (despite
safeguards). The second is the fact that technology is an amplifier --
so it amplifies both bad and good. A second OHS/DKR could be on ways to
ensure that the amplified ability to create stays ahead of the amplified
ability to destroy.
Nuclear weapons are just one example of the amplified ability to
destroy. As Albert Einstein said of them, "With the advent of atomic
weapons, everything has changed but our thinking". We need to think of
ways to handle the threat they pose, and will forever pose. Forecasts
that omit that most basic threat, and the fact that it will never leave
us, are seriously flawed.
========= my own background in artificial life =========
I've been thinking about some of these trends for years -- especially
ways to move beyond fear of the destructive power of nuclear weapons
wiping out all of humanity or civilization. I don't mean disarmament or
ballistic missile defenses -- neither address the issue that someone can
always make new weapons and deliver them in various clandestine ways.
There are two ways to achieve that paradigm shift Einstein implied.
One is spiritual and moral growth. When weapons of mass destruction are
viewed with the same moral repugnance as slavery, there will be less
risk of their use, especially on a global scale. Freeman Dyson talks
about this in his book "Weapons and Hope". Yet, slavery still exists in
parts of the world in various forms. (A recent New Yorker article
covered this.) If one thinks of nuclear weapons as forever a threat, and
forever is a long time, then moral solutions are inadequate on the scale
of our small planet. They ignore the possibility of mistake, chance, or
unusual situations. They deny the fact that technology is an amplifier
of even the tiniest, most distasteful voice.
Moral solutions must be attempted, but they are not enough, given the
technology we have already created. Langdon Winner points out the only
moral choice the technologist makes is in what to invent. Once it is
created, the implications of it ripple outwards. Tools are not morally
neutral. The energy that goes into inventing new weapons of mass
destruction could instead be put into creating better hydroponics
systems. The moral choice is made by the inventor (and the people or
corporations who fund invention). Things may have unexpected
consequences, but within broad areas, inventors know some consequences
of what they do, and make decisions based on that.
The other way is to focus on how to create faster than we can destroy.
The only way to do this I know of is to develop self replicating
technology -- at a macro level. Obviously, cells are already
self-replicating at the micro level. What I refer to is being able to
duplicate entire technical infrastructures and ecosystems. How can this
be done? J.D. Bernal made a suggestion in 1928 (see below).
Towards this end, I developed one of the world's first kinematic
simulations of self-replicating robots in a 2-D sea of parts back around
1987 (on a Symbolics workstation in ZetaLisp). This was just before the
ALife craze. After creating a copy of itself, the first thing the very
first working robot did was tear apart the copy it had just made, for
raw materials to build a new one. I had unintentionally built a
cannibal! And yet, their basic algorithm for operations was just
reconstructing themselves to an ideal (mirror image) state. So worse --
here "idealism" leads to fratricide! I had to add a sense of "smell" to
prevent that -- and add code so robots would not eat others that
"smelled" the same.
In a talk I gave about the simulation, I said it was very easy to make
things that destroyed and it would be much harder to make things
that cooperated and created. Afterward, someone from DARPA literally
patted me on the back and said "Keep up the good work." I hope that was
to foster more work on the collaborative creative side of that
technology. But it was shocking, and I pretty much stopped work on it
altogether -- in large part because of concern for the implications of
machine intelligence and autonomous robotics.
I had previously been concerned. When visiting Hans Moravec's lab at CMU
I met some true believers who felt that in a few decades robots
were going to surpass humanity -- and I thought they were probably
right, but I wasn't so sure that was a good thing. But the ALife work
was fascinating, and I continued it until a very human contact shocked
me back to some sense. Yet, the work goes on by others. It is too
fascinating, and has too many competitive benefits in the near term not
to proceed. It is very enjoyable to play Dr. Frankenstein (at least, at first).
========= technology out of control =========
How do things evolve to be "out of control"?
I'm not necessarily blaming the military -- most generals abhor the idea
of "autonomous agents", a code word for independent robots in the
field with guns. Yet, competitive pressure in the marketplace alone will
see the rise of such autonomous AI systems -- there is just too much
value in being a day ahead of your competitor in investing in the stock
market, for example. And already, despite concerns about systems on the
ground, the U.S. military is moving to a bigger emphasis on smart
weapons -- like the cruise missile that is effectively a smart aerial
robot that can kill people. War is much more palatable when your side
does not risk human casualties.
And cyberwarfare is now being more heavily funded. Imagine someone who
learns those techniques creating a new version of the "Melissa" virus,
but with an AI payload that goes undetected. There on your computer --
unknown to you -- an intelligence watches everything you do, knowing
your financial transactions, your friends, your hopes, and dreams,
with barely a trace of humanity -- the one goal it has of surviving,
whatever it takes. This AI is more than just the system on your machine
-- it is tens of thousands of infected nodes communicating in the
background whenever you are on the internet. It subverts your virus
checker. Maybe at first that AI will be in the service of the person who
launched it for personal gain. And then? The point is -- what teenage
hacker can resist such a challenge? Sure, we can put security in place
-- but security can fail. And you may not know it has failed. At best,
your PC might seem occasionally more sluggish than you expected.
We need to adapt and create a defense against the competitive race (arms
or market). This will slow down the evolution of such systems. Yet
clearly no preventive defense for the long term future is possible --
evolution happens, whether we want it to or not. It is naive to suggest
we can do more than slightly delay evolution, if that.
Sci-fi has several stories ("Terminator" is only one of them; the
hopeful "Two Faces of Tomorrow" by James P. Hogan is another) where
practically within seconds of becoming sentient an ultrasmart machine
intelligence (distributed across a network like the internet) is pitted
against humanity. Like the Hydra of Greek mythology, there is no one
head to be lopped off. In Hogan's book, simple everyday random power
outages are perceived as a threat. Thus, "pulling the plug" is not a
viable option -- the system has already eliminated the plug as a
potential source of downtime.
And, as the story "Colossus: The Forbin Project" shows, all it takes for
a smart computer to run the world is control of a (nuclear) arsenal.
And, as the novel "The Great Time Machine Hoax" shows, all it takes for
a computer to run an industrial empire and do its own research and
development is a checking account and the ability to send letters, such
as: "I am prepared to transfer $200,000 dollars to your bank account if
you make the following modifications to a computer at this location...".
So robot manipulators are not needed for an AI to run the world to its
satisfaction -- just a bank account and email.
These worst threats, as Vinge points out, stem from the intelligence
amplification aspect of these new technologies. Whether the intelligence
is artificial or human actually may make little difference -- given the
wide variety of possible human behavior.
========= refugia =========
I think there are some possibilities for ensuring the physical survival
of humanity no matter what comes (the third issue listed above). These
mostly involve the appropriate use of self-replicating technology like
the self-replicating space habitats J.D. Bernal proposed in 1928.
That's personally why I am interested in an OHS/DKR -- to realize
Bernal's vision, both out of a need for defense against the unknowable
future, and also because, given that need, I find the concept beautiful.
Yet, such an effort might set up a conflict of arguing about how to use
seemingly currently scarce resources that could go into social or
environmental issues. So, my objective is also to find a way to
reconcile both the space settlement goal and the environmental
preservation / social development goal by developing a common set of
technology for both. (NASA actually tries to do this itself as a
justification for much of its funding.) The garden simulator I helped
develop http://www.gardenwithinsight.com was one such effort -- given
that people on earth and in space both need to know how to grow food.
Here are some general Space Settlement links:
There are various technology library efforts underway (primarily for
helping developing nations). They are not quite OHS/DKR's yet.
This one is really good and basically free (and is on the web):
This one just has a $500 CD set or micro-fiche set, but has global coverage.
And I also have my own project (no content yet) starting at:
where I'd like to somehow reconcile the on-earth near-term
environmental/social agenda and the space settlement agenda.
========= machine intelligence is already here =========
I personally think machine evolution is unstoppable, and the best hope
for humanity is the noble cowardice of creating refugia and trying, like
the duckweed, to create human (and other) life faster than other forces
can destroy it.
Note, I'm not saying machine evolution won't have a human component --
in that sense, a corporation or any bureaucracy is already a separate
machine intelligence, just not a very smart or resilient one. This sense
of the corporation comes out of Langdon Winner's book "Autonomous
Technology: Technics out of control as a theme in political thought".
You may have a tough time believing this, but Winner makes a convincing
case. He suggests that all successful organizations "reverse-adapt"
their goals and their environment to ensure their continued survival.
These corporate machine intelligences are already driving for better
machine intelligences -- faster, more efficient, cheaper, and more
resilient. People forget that corporate charters used to be routinely
revoked for behavior outside the immediate public good, and that
corporations were not considered persons until around 1886 (that
decision perhaps being the first major example of a machine using the
political/social process for its own ends).
Corporate charters are granted supposedly because society believes it is
in the best interest of *society* for corporations to exist.
But, when was the last time people were able to pull the "charter" plug
on a corporation not acting in the public interest? It's hard, and it
will get harder when corporations don't need people to run themselves.
I'm not saying the people in corporations are evil -- just that they
often have very limited choices of actions. If corporate CEOs do not
deliver short term profits they are removed, no matter what they were
trying to do. Obviously there are exceptions for a while -- William C.
Norris of Control Data was one of them, but in general, the exception
proves the rule. Fortunately though, even in the worst machines (like in
WWII Germany) there were individuals who did what they could to make
them more humane ("Schindler's List" being an example).
Look at how much William C. Norris http://www.neii.com/wnorris.htm of
Control Data got ridiculed in the 1970s for suggesting the then radical
notion that "business exists to meet society's unmet needs". Yet his
pioneering efforts in education, employee assistance plans, on-site
daycare, urban renewal, and socially-responsible investing are in
part what made Minneapolis/St.Paul the great area it is today. Such
efforts are now being duplicated to an extent by other companies. Even
the company that squashed CDC in the mid 1980s (IBM) has adopted some of
those policies and directions. So corporations can adapt when they feel they must.
Obviously, corporations are not all powerful. The world still has some
individuals who have wealth to equal major corporations. There are
several governments that are as powerful or more so than major
corporations. Individuals in corporations can make persuasive pitches
about their future directions, and individuals with controlling shares
may be able to influence what a corporation does (as far as the market
allows). In the long run, many corporations are trying to coexist with
people to the extent they need to. But it is not clear what corporations
(especially large ones) will do as we approach this singularity -- where
AIs and robots are cheaper to employ than people. Today's corporation,
like any intelligent machine, is more than the sum of its parts
(equipment, goodwill, IP, cash, credit, and people). Its "plug" is not
easy to pull, and it can't be easily controlled against its short term interests.
What sort of laws and rules will be needed then? If the threat of
corporate charter revocation is still possible by governments and
collaborations of individuals, in what new directions will corporations
have to be prodded? What should a "smart" corporation do if it sees
this coming? (Hopefully adapt to be nicer more quickly. :-) What can
individuals and governments do to ensure corporations "help meet
society's unmet needs"?
Evolution can be made to work in positive ways, by selective breeding,
the same way we got so many breeds of dogs and cats. How can we
intentionally breed "nice" corporations that are symbiotic with the
humans that inhabit them? To what extent is this happening already as
talented individuals leave various dysfunctional, misguided, or rogue
corporations (or act as "whistle blowers")? I don't say here the
individual directs the corporation against its short term interest. I
say that individuals affect the selective survival rates of
corporations with various goals (and thus corporate evolution) by where
they choose to work, what they do there, and how they interact with
groups that monitor corporations. To that extent, individuals have some
limited control over corporations even when they are not shareholders.
Someday, thousands of years from now, corporations may finally have been
bred to take the long term view and play an "infinite game".
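That selection effect can be caricatured in a few lines of code. This
toy model is entirely my own construction -- the numbers and the
"niceness" score are arbitrary -- but it shows how differential survival
alone can shift a population:

```python
# Toy selection model: if talented people preferentially join "nicer"
# corporations, and corporations starved of talent fail, average
# niceness drifts upward even though no individual controls any one firm.
import random

random.seed(1)
firms = [random.random() for _ in range(100)]  # each firm's "niceness", 0..1

for generation in range(20):
    # Talent flows toward nicer firms: the least nice quartile fails...
    firms.sort()
    survivors = firms[len(firms) // 4:]
    # ...and is replaced by offshoots of surviving firms, with variation.
    children = [min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.05)))
                for _ in range(len(firms) - len(survivors))]
    firms = survivors + children

print(f"mean niceness after selection: {sum(firms) / len(firms):.2f}")
```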
========= saving what we can in the worst case =========
However, if preparations fail, and if we otherwise cannot preserve our
humanity as is (physicality and all), we must at least preserve with
grace whatever of our best values we can, or somehow embody them in future
systems. So, an OHS/DKR to that end (determining our best values, and
strategies to preserve them) would be of value as well.
========= conclusion =========
When aluminum was first discovered around 1827, and for decades
afterward, it was worth more than platinum, and now just under two
centuries later we throw it away. In perhaps only two decades from now,
children may play "marbles" using diamonds, and a child won't bother to
pick a diamond up from the street unless it is exceptionally pretty
(although you or I probably would out of habit -- "see a diamond, pick
it up, and all the day you have good luck").
This long essay is my own current perspective on this developing
situation, and part of the process of my formulating my own thinking on
these trends and how I as an individual will respond to them.
To conclude, I think all the "classical" problems like food, energy,
water, education, and materials will be technically solvable by 2050
even if we don't do much specifically about them (and like hunger
are solved today except for politics). The dynamics of technology and
economics are just taking us there whether we like it or not. Those
goods may all essentially be "free" or "extremely cheap" by 2050.
Obviously the complex politics of these issues need to be resolved, and
the solutions need to be actually implemented. If they are "extremely
cheap", people still need a tiny amount of income to buy them.
Still, I think Doug is right. We face huge problems that only
collaborative efforts can solve -- especially the problems of
intelligent machines, technology-amplified conflict, and a complete
disruption of our scarcity-based materialistic economic and social
systems. These problems dwarf technical issues like energy, food, goods,
education, and water.
The problem has always been, and will always be, "survival with style"
(to amplify Jerry Pournelle). The next twenty years will fundamentally
change what the survival issues are: environment, threats, and allies.
They may also very well change what we value as "style" -- when
diamonds are cheap as glass, what will one give to impress?
Developers of custom software and educational simulations
Creators of the open source Garden with Insight(TM) garden simulator
Community email addresses:
Post message: unrev-II@onelist.com
This archive was generated by hypermail 2.0.0 : Tue Aug 21 2001 - 18:56:39 PDT