Re: [ba-unrev-talk] Continuation of Doug's Colloquium
Thanks. Besides, as you mentioned, when we talk about extending the
colloquium, we are talking about an activity that is organizationally
and legally unrelated to the Bootstrap/Stanford effort.
Maybe we can get back to brass tacks.
One urgent, complex problem is what stance to take with respect to Iraq.
And w.r.t. Mugabe. And w.r.t. Kim Jong Il. And w.r.t. the methods used in
interrogating Al Qaeda prisoners. Etc., etc.
I believe that, if we wish to maintain a democratic society, the road
toward solving this problem runs through a better-informed public, and
that, moreover, this public needs standards and means for evaluating
that information and formulating a stance for action on it.
While improving public education (through better instruction and
journalism) is an obvious path to follow, mere "meatware" is not enough.
We urgently need digital augmentation for individuals and groups. We
also need communal acceptance of standards of conduct, i.e. a great
deal of commonality in moral stance. The Iraq situation demonstrates
this, as does the not unrelated Israeli-Palestinian conflict. Moral
stance, therefore, is something that bears on digital augmentation.
For a period of time, I was looking at programs for conferences, mostly
those listed on the site of the ACM. While I found quite a bit about
(morally blind, people-excluding) artificial intelligence, I can't
remember seeing anything about digital augmentation of the human
intellect. From an academic point of view, that subject by its very
nature needs to be multidisciplinary; from a societal point of view, it
calls for citizens fully informed about what may or may not be decided
in the digital domain, and about what direction science and technology
will permit digital augmentation to take.
I said, "from an academic point of view." Academe cannot stand apart
from the world it functions in; it is merely its servant, even when its
role is to lead. I see Fleabyte (intended as a continuation of the
colloquium) as the public counterpart of academe, as well as of
governance, in the domain of digital augmentation: a means to
stimulate and maintain control. A check of its contents page will
show this is the range we have been working in.
I had hoped that a forum of intellectuals originally dedicated to
extending Doug's vision would be responsive to the above issue. The
question now is whether this is still the case. And, if so, shouldn't
thought be given to how to make digital augmentation a useful instrument
for enhancing (saving??) a democratic mode of living together on a
small planet?
On Fri, 2003-01-24 at 03:18, Peter P. Yim wrote:
> Ref. Paul's cautionary message of 22-Jan-03
> 1. It might be worth revisiting (i) the scope of the colloquium, and (ii)
> the intent of the "permission to use" statement.
> (i) The colloquium was offered as a Stanford University professional
> education course. The course description says,
> See http://scpd.stanford.edu/SOL/courses/proEd/EC/
> Course Name:
> Engelbart Colloquium at Stanford
> An In-Depth Look at "The Unfinished Revolution"
> Faculty: Douglas C. Engelbart, Ph.D.
> This colloquium will offer professionals and executives a rare opportunity
> to listen to and learn from visionary Doug Engelbart as he talks about his
> life's work, creative process, and his concerns and vision for the future.
> To help refresh everyone's memory, I have brought up, from the archives, the
> original colloquium page from the Bootstrap website (dated around early
> March 2000) where the public information on the colloquium is posted. That
> and the course pages on the Stanford would have been the main source of
> public information about the colloquium. I have it, now, at
> A lots of the links don't work any more, but it clearly explains what the
> colloquium was about. I'm going to leave that page there for the next couple
> of weeks in case people want to look at it.
> Based on the above descriptions, developing the OHS or any other system
> obviously does not fall within the scope of this Stanford course. I,
> therefore, suggest that we stop talking about system development as if it
> were a colloquium activity, and even less so an extended activity (because
> it is out of scope).
> (ii) The intent of putting a "permission to use" statement in place was to
> facilitate the broadcast, webcast, taping and the subsequent publication of
> the colloquium content into courseware, a book or something in that vein.
> The indemnification clause (which, actually, is fairly standard) is there to
> make sure that participants are responsible for their own acts.
> If someone had spoken during the colloquium (since dialog was a feature)
> and, either intentionally or unintentionally, divulged his/her employer's
> trade secrets, and the employer sues -- neither Bootstrap Institute nor
> Stanford University would want to be (nor should they be) involved. This is
> the type of situation the indemnity clause was there for. It does take a
> long stretch to get from this to interpreting it as being akin to "asking
> volunteers to absorb liability."
> 2. All this discussion is interesting, but the real issue is -- is there a
> real and appreciable risk and threat, to the extent that we should stop
> certain pursuits because of it? I guess each of us will have to answer that
> for himself or herself. I don't believe a legal professional, whom we pay
> some money to, could do it for us.