Engelbart Colloquium at Stanford

                                         "The Unfinished Revolution"

                                       Tape 1 Session 8 February 24, 2000

 

Engelbart: Welcome to session eight of the "Unfinished Revolution" colloquium that we are holding here. This session is entitled Pilot Outposts on the Frontier. That frontier has a special meaning that we will review. During the day we are going to go through different aspects that we talked about before, trying to tie them in with high performance: high performance in knowledge work, leading to high performance in all sorts of capabilities.

Slide: Session 8: Pilot Outposts on the Frontier

Engelbart: The fundamental challenge, in the bootstrapping sense, is to develop an optimal evolutionary environment for organizations. A lot of change is coming at organizations: change in their environments, change in what they are going to have to adapt to, and change in the way they can work. With these multiple levels of change coming about, the evolutionary environment is what we really have to focus on.

Slide: Fundamental Challenge

Engelbart: We use the term social organisms in referring to organizations of almost any size, to help get oriented toward the evolutionary concept. The evolutionary process gets better and better at determining which direction represents improvement. I have spent a lot of years evolving these sorts of concepts, so some of them don't really seem to fit other people's frameworks.

Slide: The World's Organization Space

Engelbart: One of the things we want to remind everybody about is that we have been talking about the co-evolution of the tool system and the human system. Each of those is a quite complex system in itself, and their co-evolution is what has to happen. You don't just get driven by the technologies, and especially you don't get driven by technologies that merely automate what we used to do, because there are going to be so many changes brought about throughout our society. The processes by which we work are going to change, so organizations are going to want to look at new ways of thinking and working so that they can better harness the new options of the technology. That is one of the things.

Slide: Co-evolution Frontier

Engelbart: In a simple-minded way you could put up two dimensions, the tool system and the human system, although each is really a multidimensional vector. So you don't get just a plane when you try to position any given organization by how far out in the tool system it is and how far up in the human system it is. Still, you can make a rough guess that organizations throughout the world are distributed something like this.
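(A minimal sketch of the positioning idea just described, not anything from the slides: it assumes each organization has hypothetical sub-scores along a few invented tool-system and human-system dimensions, and collapses each vector to one rough coordinate by averaging.)

    # Sketch only: hypothetical sub-scores, collapsed to a rough (tool, human) position.
    def position(org):
        tool = sum(org["tool_system"].values()) / len(org["tool_system"])
        human = sum(org["human_system"].values()) / len(org["human_system"])
        return tool, human

    example_org = {
        # 0-10 scores along a few illustrative dimensions (names are made up)
        "tool_system": {"networking": 7, "shared_repositories": 4, "authoring_tools": 6},
        "human_system": {"work_processes": 3, "training": 5, "conventions": 4},
    }

    print(position(example_org))  # roughly (5.7, 4.0): one point in the co-evolution space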

Slide: The World's Organization in Human-Tool Space

Engelbart: For all these years the evolution has been such that there was some boundary within which people could anticipate the way the technologies were going to emerge over the coming decade or so, and the same for the way organizations could change. So evolution was slow, and a lot of it organic.

Slide: Co-evolution Frontier: Probable View

Engelbart: This is what we assume we are facing now, with technology that is explosive. It may not be all the way out there yet, but already you can anticipate things that are going to be available technically; how are you going to harness them? There are new options for the way people work together, think, and so on. So there is this big frontier. We call it the co-evolution frontier because this is what has to happen: you have to work your way not just over here or up here, but evolving through there. This is the challenge.

Slide: Some recommended principles of "Facilitated Evolution"

Engelbart: There isn't anybody who is going to be able to say exactly where we are going to go, or to tell you exactly where you belong out there in that space. It is an evolutionary process, because there is so much complexity of interacting change. The best we can do is to provide an environment in which evolution can take place, in ways that give natural adaptation and evolution the maximum room to work. Within that, nobody knows exactly where you are going to end up, because it is too complex. No dictators, no monopolies. To make an analogy: how did our cooking and taste in food evolve? How would they evolve if there were only one restaurant in the world? It would have one marketplace that keeps it happy enough, and that would be all there is to it, unless you have a lot of restaurants evolving, trying things, communicating, and people finding out what is going on.

Slide: Some recommended principles cont.

Engelbart: So every social organism, in order to evolve, needs the best visibility of what is happening in the world, what is likely to happen, and what other organizations of a similar nature are doing, so that it can decide and start maneuvering and evolving. There is a lot of clustering and moving. The best way to provide an evolutionary environment is to maximize visibility: visibility of the guesses about what lies ahead, of case studies and assessments of what is going on today, and of the options that are out there in experiments, along with how to integrate good ideas so they have a chance to move forward. If these things get caught up with proprietary hooks, it would be as if the only way you could get a restaurant would be to buy the whole thing from one vendor. Something has to evolve in that sense, and it is an interesting challenge.

So one of the things that comes up is this concept of very high-performance organizations. We are assuming that organizations are going to get much higher in their performance; if they don't get higher in their collective ability to cope with complex problems, we are in trouble. That is the evolution we need to provide for, in order to help evolve organizations that are smart enough to cope with the complexity that is coming. My orientation begins as an engineer: you need to build prototypes and try them, fly them, test them, assess them, improve them, and rebuild them. So this is the evolution of real working organizations.

Slide: Clear Need for Hi-Performance Organizational Prototypes

Engelbart: So way back in the sixties we said: our laboratory is going to use it, we live on it every day and make it work better and better for us. Then by 1974, in order for the same thing to reach real organizations, not just the laboratory and research institute, we started selling service over the ARPANET. We made the contract with each of them in such a way that they actually had a "B" person assigned. We formed a community of those people, and we called the role the knowledge workshop architect, so the community was called the knowledge workshop architect community. Then everybody told me: you know what the acronym for that is; nobody will like that, it's QUACK.

The community in fact loved it. The chairman nominated himself as King Quack. The cooperation among them was very interesting. We started talking about the things they would like to see improved. One guy said, my boss says I can get an extra thirty thousand dollars to plunk into the kitty to do some improvement of this; what should we do? The other groups said, this is what we would really like, they all talked it over, and the one guy said, okay, let's do that. That is the kind of cooperation. It was just beginning to take off when our world shut down on us. The commercial owners of our system were telling us: no, no, don't do that; you guys are from some ivory tower, and all those users will do is want us to spend money improving the system; if we shut down that kind of user group, they will be quiet. We never could get that kind of cooperation in the commercial world. What we learned was that you really need to build and try and evolve. You really need multiple units that are evolving in their own ways, with a lot of communication between them, so that everybody can look over there and see what the others are doing, instead of being closed and saying, I am going to have one up on them. Otherwise you flounder because you didn't know that someone else already tried something, and that it didn't work, or why. This is the kind of cycle: engineering: build, fly, test; research: study, analyze, understand; engineering: build, fly, test. There would not have been any other way to get into this space except to build and try, do lots of assessment and analysis, and then build and try again.

If we are going to populate the undersea with domes, well, you can't go far with people going down in swimming pools and learning to live in little domes there. No, you have to build real ones, with people down there really carrying out their full lives and doing what they are going to do. So let's build them and make them work. That is the case here; this is what we face on the frontier.

Slide: Outposts on the Co-Evolution Frontier

Engelbart: We really need outposts. We really need real-life organizations that are trying to do this. General Motors, why don't you move way out there and try to do this? Obviously there are practical problems in moving big organizations, and yet what can the little ones teach you? As a way around this, there are strategic concepts like the following.

Slide: Let's call them HPATs

Engelbart: Let's call them High-Performance Augmented Teams. They are specially recruited, specially equipped, specially trained, targeted at special capabilities, and specially deployed. To get that, I don't think it is enough to say we have this team of engineers over there and we are going to turn them into a high-performance team. The kind I am talking about is more like the Navy SEALs or something like that: I want something where they really stretch to try to do something. They need to be recruited for the task, specially equipped, specially trained. Then there is the question of how you put such a team to work.

Slide: This Social Organism is a specially developed, High Performance Team

Engelbart: You have seen this picture before; it shows the capability infrastructure. We are going to turn some group of people into a high-performance team, so what is going to change in this picture? The same picture can represent a high-performance team. We need to have some feeling for where we are going to change things. You can do some things just by having team-spirit training and things like that, but the co-evolution needs evolutionary things happening on both sides, so you can get some unusual things going on in there to get started. Later today I would like to get through the framework that we have been generating among us, to say how we could get started doing something like that.

Slide: Strategic Issue: How best to invest in Creating & Using Hi-Performance Teams?

Engelbart: Assuming, then, that you are going to create a high-performance team, how do you best invest in creating and using it? The answer is to place it so that it can work closely and usefully with important parts of the real working organization. This is something we watched and learned about too. It is an important consideration, because it provides working-day visibility to other people in the organization as to what this high-performance team can do. They should be able to witness things happening, spurring them to upgrade their own capabilities. A little later we will talk about actually having capability ratings. This is something we have learned about and watched.

Slide: Strategic Choice

Engelbart: I am not skillful enough with the graphics to build a chart of the core knowledge-work proficiency of the individuals distributed across an organization. You know it is going to slant; it is going to be some kind of declining curve. I didn't know how to draw it in PowerPoint. Suppose you try to lift the whole organization at once: where are you going to lift it to? It would cost a hell of a lot to move everyone very far, and if you don't have a good idea of where you are going to move them, it is going to be a huge amount to spend. So here is the idea of getting a small group that is going to be working within the organization, a prototype that is pushed out there. Nothing I have pictured or watched or observed is equal to this. Go to a conference on groupware and they say, we have this really neat system for people to work together; we had five or six graduate students work on a project together three afternoons a week, and here is all of the data that we took. I look at that and think: after only three years of an experience like ours, we had already learned that there is so much going on within a real organization that such a study is not going to tell you that you can take that system and plug it in someplace.

Slide: Strategic Choice cont.

Engelbart: There are several ways you can start evolving. You can say, I can move the whole system by introducing e-mail; that will pull the whole system up. But if I really want to go for high performance, I could start with high-performance augmented support teams. That is the way my head has moved. We are not going to augment the whole design team; we are going to put a support team in the middle that is going to help them significantly with their knowledge work by acting as a support.

One thing we could do is expand that until eventually the whole design team is in that mode. Or we can start replicating it and move it over to the marketing team. I even got approached by the people who do auditing, and they said, we have a complex job. Well, I hadn't realized that, but I guess you do. So the auditing teams could be augmented to high performance; so could the marketing teams and the strategic planning teams. You have other places to put these high-performance augmented support teams. That to me is a significant thing. It is so central to really learning how to make an evolutionary environment that I cannot imagine not looking at it.

Slide: Strategic Choice cont.

Engelbart: This diagram may look familiar. In this case it represents a large parent organization in which you are going to harness a high-performance support team. Some capability down inside the capability infrastructure is where I am going to plug in my team. Where? It warrants an explicit goal and role in there; it is a good thing to do. I am going to keep plaguing you with this basic diagram, because it so centrally represents a lot of these things.

Slide: Parent Organization

Engelbart: Think about all of the things going on inside the capability infrastructure of any big organization. If I plug something in here, you realize that capabilities a few levels down often get drawn upon by the higher-level capabilities that depend on them; that is what infrastructure means. So the strategic issue is: inside our organization, what jobs will these teams do?
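(A minimal sketch of that dependency idea, with made-up capability names: lower-level capabilities are drawn upon by the higher-level capabilities that depend on them.)

    # Sketch only: a toy capability infrastructure as a dependency map (names are illustrative).
    capability_depends_on = {
        "ship_products": ["design_products", "coordinate_teams"],
        "design_products": ["share_documents", "run_meetings"],
        "coordinate_teams": ["share_documents"],
        "share_documents": [],   # basic capabilities sit at the bottom
        "run_meetings": [],
    }

    def supporting_capabilities(cap, graph):
        """Everything a capability ultimately draws upon, a few levels down."""
        seen = []
        for lower in graph.get(cap, []):
            if lower not in seen:
                seen.append(lower)
                seen.extend(c for c in supporting_capabilities(lower, graph) if c not in seen)
        return seen

    print(supporting_capabilities("ship_products", capability_depends_on))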

Slide: Next Strategic Issue

Engelbart: A strategic answer is process facilitation and knowledge integration within the organization's dynamic knowledge repository. As an example payoff: even by 1974, and certainly today, a high-performance helper can connect, share screens, and show you how to do something. You can watch and see how much faster they do things; you can see that they have this whole set of macros or something. These things belong to a category of capabilities that is several grades above the rating of what you have now. You can work your way up to it in about four or five months, take the next course or something. It is like pedaling along on my bicycle and someone swishes by in a car: that is mobility. In the mobility and the capability to manipulate and portray things, there is a huge amount of progress to be made. A claim like that is one of the problems: how do you get people to form some perception of how much change there could be? I keep trying to make examples of some of that.

We have seen from the shared-screen kind of thing that people can call us from anyplace in the country, and the trainer can connect with them and say, let me see your screen; now show me what you were doing. Or, another way: if you pass me the controls, I will show you how to do it. So they pass the controls, and the trainer talks it through and shows how to do it. It is a terrific way of facilitating people so that they start learning. It is amazing how many people, when we are having a training class, are clamoring to come. They would come into our lab during the training and watch how other people worked. Then they would come back to the class and say, those people were doing things I had never heard of before; they are using a higher-grade user interface, and you were giving us the beginner's vocabulary. The whole class rebelled: no, we want the real thing. If we had not shown them the difference and had just started them out as beginners, they wouldn't have objected and been so gung-ho; and just laying it on from the outset, saying look at all of the verbs and nouns and everything, wouldn't have worked either. It is an important thing for them to see.

Slide: What a Bootstrap would recommend

Engelbart: There is the bootstrapping, too. It is a strategic matter of giving them dynamic knowledge repository support. If you have some operational, advanced-prototype, high-performance augmented support teams, where do you use them? Our best investment leverage would be supporting our capability-improvement infrastructure: that is a bootstrapping strategy. There are a number of things here that we are going to get into in dialogue.

Slide: Special Focus Domains for early DKRs

Engelbart: Going into these dynamic knowledge repositories, there are special focus domains. Intelligence collection and analysis regarding external events and trends of concern: a very important, basic one. Scenarios regarding future trends of concern. The current state of the internal capability infrastructure and the improvements being planned; we have to have that under control. Current relations with customers, suppliers, and partners, and improvement planning. New products and services. There are quite a few domains of specialized knowledge that the dynamic knowledge repositories of many organizations must keep up on. The critical ones that we keep talking about for the improvement infrastructure are the first three, and we are going to have some special discussion about them. If you have a team that is going to help you with dynamic knowledge repositories, you need to realize that any large organization really has nested dynamic knowledge repositories.

Slide: Multiple DKRs in any large organization

Engelbart: Here is the engineering group, or the sales, marketing, financial, and other groups. Each of those groups has to have its own way to operate; this is what we have been seeing. And yet the whole organization has to have one too. This brings about a strong need for concurrency in their evolution. It also raises the question: where in there would you plug in some higher-performance dynamic knowledge repository support? That is a very good question, and it is one of the strategic issues to work on. This is why we said earlier to put the teams into the improvement infrastructure. If you are going to do effective bootstrapping, you need effective dynamic knowledge repository capabilities. You have to learn how to do this better, and you have to learn how to apply that to your capability-improvement capability. This needs to be a core part of what the NICs (networked improvement communities) help their member organizations improve. NICs are there, they are operating; they have members that are joined together, with the synergy of seeing what is going on. Helping its members see how they can use dynamic knowledge repositories to improve themselves is a very important thing a NIC can do. It is a basic bootstrapping step, to take before the members start working on the other capabilities they say they want: I want better communications with my customers, or I want a better source of leather for the shoes I am building. Those other capabilities need attention too, but the one that has strategic leverage is the capability to improve. The core of that is the collective IQ with the CoDiak capabilities, and the core of that is the dynamic knowledge repository.
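(A minimal sketch of the nesting idea, with invented group names and entries: each group keeps its own DKR, and the organization-wide DKR has to draw on all of them concurrently.)

    # Sketch only: nested dynamic knowledge repositories (DKRs); names and entries are illustrative.
    class DKR:
        def __init__(self, name, entries=None, sub_dkrs=None):
            self.name = name
            self.entries = entries or []        # this group's own recorded knowledge
            self.sub_dkrs = sub_dkrs or []      # nested repositories of member groups

        def all_entries(self):
            """The whole organization's view: its own entries plus everything nested below."""
            collected = list(self.entries)
            for sub in self.sub_dkrs:
                collected.extend(sub.all_entries())
            return collected

    org_dkr = DKR("whole_organization",
                  entries=["corporate improvement plan"],
                  sub_dkrs=[DKR("engineering", ["design rationale for product X"]),
                            DKR("marketing", ["customer feedback survey"])])

    print(org_dkr.all_entries())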

Slide: Effective Bootstrapping

Engelbart: Effective scenario capacity. For the scale and rate of change that the Unfinished Revolution needs to prepare for, the knowledge-product part of the DKRs of the many critical social organisms needs the best possible, up-to-date, look-ahead scenarios. How to do that is very important, and we have a talk prepared by Rod Swigart.

We have had these basic pictures of what a DKR would be since the early 80s, and some of the papers on our website actually describe that. One of them talks about special kinds of communities: mission-oriented communities and discipline-oriented communities. Like an individual, a community would have a community knowledge workshop, and if the community is distributed, it has to be a knowledge workshop with special characteristics. Because distributed knowledge work is going to be so critical to all of the issues we are talking about, that is going to be a central point.

Swigart: I am primarily interested in storytelling. I come to this not as an engineer but as a fiction writer with an interest in what is going to happen, because human beings have always wanted to know what is going to happen. One of the tools that a lot of people use in business environments, and in our research, is the scenario process. How many people here are familiar with scenario planning? That is quite a few. I'll skim over this material pretty fast, because I am mostly interested in my addition to it.

Slide: What is a Scenario?

Swigart: Very briefly then, what is a scenario? When you get a team together and start to build a scenario, you want a lot of different people with a lot of different input who begin to generate a story. People will start to say, if such and such happens, then this will happen. You start to get a snippet of a story, a cause-and-effect relationship. You collect a lot of those, condense them together, and your snippets build into a story line. You try to show how all of these factors interrelate, and you build what a story is. You get that nailed down, make everything logically and internally consistent, and you make a scenario out of it. So a scenario is a way of anticipating. It is not predictive; you are not forecasting, you are creating a set of options for what might happen, and making it useful in business planning or in anticipating what is going to happen. So instead of calling it planning, since we live in a world in which plans seem to get thrown out before they can come to fruition, I call it strategic anticipation: being ready for what might happen, rather than planning for what you are erroneously convinced will happen. A scenario is a kind of memory of the future, something you can sit down and read and say, this is a consistent story of what might happen; I remember that this is a possible future.

Experience tells us that four scenarios, four different options for the future, work the best. Two is okay; the problem with three is illustrated by an exercise once held in Thailand, where they said, let's get a group together and decide what would be the best future for Thailand, the worst future for Thailand, and something in the middle. Everybody gravitated toward the middle one. You want to challenge people by creating a number of scenarios that don't allow for that black-hole gravity well into which people tend to fall. You also don't want to create scenarios about what you hope will happen. In thinking about the future there is a certain time horizon up to which you can be fairly confident in your forecast: there is a high degree of certainty, or a low degree of uncertainty, and a fairly high number of predetermined things that you are fairly sure are going to happen. There comes a point where those things begin to decay. Uncertainty increases, the predetermined elements begin to disappear, and the possibility of surprises increases; things come out and blindside you. That is the area where you want to start thinking about scenarios, where you start creating alternatives. There is another break point further beyond that where you move into the realm of hope; scenarios aren't going to help much there, and usually nothing will. These scenarios are usually a few pages long: too long and you lose people's attention, too short and they don't contain enough of a story. The ones that I have seen are written from a third-person point of view. I am going to give you an example at the end: a couple of paragraphs from a scenario, and a couple of paragraphs from what I am adding to this. Scenarios require constant revision, because the future never gets here; it is constantly changing. So revising the scenarios, or throwing them out and starting over, is an unending process.

Slide: Developing Scenarios

Swigart: This is an easy way to start building a scenario. Just make a four-square grid: put something low toward the lower corner and something high toward the upper and outer corners, and see if the quadrants fall into four different kinds of stories. As an example, we did a project a couple of years ago looking at the evolution of consumer technology, particularly computers and things like that. We put low integration of devices, that is, lots of different kinds of devices, in the lower left.

Slide: Fleshing the Scenario

Swigart: The other axis is the origin of the content for these devices, moving from externally developed, where Disney develops all of the content for your Palm Pilot, to self-generated. So you have two coordinates, and you can find your own. Then you try to figure out the kinds of stories you will get. These are the kinds of things that we came up with.

Slide: An example of four scenarios

Swigart: The top row is called the box: everything gets integrated into one box. Turbocharged couch potatoes, which is the title of that scenario, is where people have big-screen TV, web access, telephone, Internet, everything hooked up in one box with a giant remote-control device. Most of the content is downloaded via satellite or cable and pumped into your brain. On the right side, where you have self-generated content, you have everybody's a producer: we have access to digital video and audio, we create our own content and ship it via e-mail or the Internet, so everybody is beginning to produce high-quality content. Lower left, pocket watchers: ubiquitous devices, lots of hand-held TVs; the Palm VII could be called one of the pocket-watch devices. You are connected to a source of information, but it is pumped out to you, not generated by you. The mobile sociables are the cell phone, pager, Palm Pilot beaming crowd, with an exoskeleton of media devices, which I sometimes feel like I am wearing. Then you start to build a scenario; you start to tell a story about what these things are. You want to push the story out and make it challenging: not the obvious stuff, but try to make it extreme, so that when you think about the future you may decide that one or more of these is not likely and throw it out. You need to push and challenge it before you can decide that. The fact of the matter is that it looks like none of these is going to take over; they are all going to happen. Then it is a question of where the weight falls, which of these domains is the largest. If you are in a technology business this is significant.
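(A minimal sketch of the four-square exercise just described, using the two axes and quadrant names from this example; the data structure itself is only an illustration.)

    # Sketch only: a 2x2 scenario matrix keyed by (device integration, content origin).
    scenario_matrix = {
        ("high_integration", "external_content"): "Turbocharged Couch Potatoes",
        ("high_integration", "self_generated"):   "Everybody's a Producer",
        ("low_integration",  "external_content"): "Pocket Watchers",
        ("low_integration",  "self_generated"):   "The Mobile Sociables",
    }

    def scenario_for(integration, content_origin):
        return scenario_matrix[(integration, content_origin)]

    print(scenario_for("low_integration", "self_generated"))  # The Mobile Sociables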
What I have done with scenarios now is add what I call vignettes. These are not the high-level, third-person summary stories that scenarios are. Most of you who have read scenarios know that they describe the world: they say, in 2004 the oil economy collapsed and global warming took off; these three or four major events happened and the world begins to look like this; as a result, regulations came in and did that; and so on. You get this overview.

Slide: Why Vignettes?

Swigart: Vignettes are something we started to develop about five years ago as little short scenes, little pieces of story that we put inside some reports as a way of illustrating significant points. The first project we did was a study of how people used technology in the household in Silicon Valley. We came up with about eighteen lessons from some ethnographic interviews. The one I remember best was that people were acquiring technology as a sort of social plumage: they weren't actually using the technology that much, but they liked to have it sitting around so that people would admire them for it. A couple of years later we did a global survey like this, with ethnographic interviews in a number of countries. One of them was Chile, and one of the researchers there reported that it is illegal to speak on a cell phone while driving in Chile, and fully thirty percent of the people arrested for speaking on a cell phone while driving were using a fake cell phone. That is an illustration of how people use technology as plumage. If you were to create a vignette around that, you would actually have a character in the scene and would see the stuff happening. Vignettes, storytelling with characters and all sorts of other stuff, are a way of transmitting cultural information. People learn in different ways. There are people who are very fond of graphs and charts and learn a lot from them; there are other people who like to read analysis; some people like stories, and I am one of them, so it is fun to do. Vignettes are also more emotionally compelling, at least for me, than graphs and charts. I am not a data person.

So what is a vignette? It is a short scene. Originally we kept them to less than a page. It presents a problem. The character has to overcome something.
There is a challenge that has to be met.

Slide: What is a vignette? Cont.

Swigart: These are basic storytelling elements. A vignette illustrates an important part of the scenario, so vignettes become a kind of illustration, maybe like a cartoon panel, of various aspects of the scenario itself. They show characters interacting with each other: real people in real situations, doing human things. They use dialogue; people talk to each other and there is communication going on. They contain emotion, and for me the most important reason to use them is that they reveal the human meaning of this overview scenario you have created. They show how it affects people's lives.

Slide: Story Structure

Swigart: Now a little about story structure. I love this because it is a formula: an event (E) causes a person (P) to desire a goal (G). This is basic story structure; it goes back to Aristotle. Something happens that causes a person to desire a goal. That is why we identify with characters in TV shows, novels, and movies. Basically, what happens in a story is that P tries to get G until P gets G, or gives up, or gets run over. Now it gets a bit more complicated with the graphics here.
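(The formula above, written as a tiny sketch; the outcomes and their randomness here are invented, just to show the loop of trying until success, giving up, or getting run over.)

    # Sketch only: "an event E causes person P to desire goal G; P tries to get G
    # until P gets G, gives up, or gets run over."
    import random

    def run_story(person, goal, max_attempts=5):
        for attempt in range(1, max_attempts + 1):
            outcome = random.choice(["succeeds", "fails", "gives up", "gets run over"])
            if outcome == "succeeds":
                return f"{person} gets {goal} on attempt {attempt}"
            if outcome in ("gives up", "gets run over"):
                return f"{person} {outcome} on attempt {attempt}"
        return f"{person} never gets {goal}"

    print(run_story("P", "G"))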

Slide: Character Quad

Swigart: There are four major characters in any story, and a satisfying and successful story will have all four of them. I use this a lot because it has turned out to be an extremely useful tool. The protagonist, who is the main character, the hero, the star of the story, is in pursuit of the goal. They are trying to figure out how to achieve the goal, how to get it: how to break into the bank vault or get the girl. That is Luke Skywalker, by the way. The antagonist is obviously in opposition to the protagonist, trying to block the protagonist from achieving the goal, or to force the protagonist to give up or reconsider. Then there are two characters that are orthogonal to those, and in opposition to each other. The one on the left is the guardian, Obi-Wan: helping the protagonist, but also a kind of conscience, a character that tries to keep the protagonist on the straight and narrow. Opposing that is the thief, who is trying to hinder and tempt; you will see the use of this in a minute. The antagonist is the Empire, and the thief is actually Darth Vader: come over to the dark side, trying to tempt the protagonist off the path toward the goal. I have added the material about past-oriented stability and future-oriented change because I think it is useful to think about when you are doing scenarios. The antagonist is trying to keep the status quo, maintain things the way they are, and stop the protagonist from making the change. The guardian represents tradition and values, helping but still representing some kind of past. The thief is trying to change things too, but in his or her own way. The white part of the square is the future. What does this mean for doing scenarios?
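(A minimal sketch of the character quad as a data structure, using the Star Wars roles given above.)

    # Sketch only: the four roles of the character quad, with the Star Wars example.
    character_quad = {
        "protagonist": "Luke Skywalker",   # pursues the goal (future-oriented change)
        "antagonist":  "The Empire",       # blocks the goal (past-oriented stability)
        "guardian":    "Obi-Wan",          # helps, acts as conscience, represents tradition
        "thief":       "Darth Vader",      # hinders and tempts, wants change his own way
    }

    for role, character in character_quad.items():
        print(f"{role}: {character}")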

Slide: Story Quad: Example

Swigart: Well, if you are going to do a study for a client, which we do a lot of, you might want to plug the client into the protagonist role. Let's say for the sake of argument that we are doing this for a non-profit. What is trying to stop the non-profit? The for-profit world, let's say. Maybe the non-profit is going to come in and provide free housing, and the Association of American Realtors doesn't like that. They are in opposition to each other: you have someone who is trying to do something, make a change, and you have someone who is trying to stop them. Maybe the regulators are on the side of conscience and help; that is the guardian. Maybe you have some start-ups out here that are not directly opposed to non-profits, but in fact see an opportunity and want to get around the regulations. They are hindering and tempting: come over and help us out, we are just a little start-up, we need some non-profit help.

Now, the nice thing about this is that you can actually rotate it. You could put the regulators at the top as the protagonist; here I just substituted a couple of others. Let's say your client or business is a start-up and you are worried about how this story might unfold: then you plug different characters into the quad. It is very useful, because any good story has to have all four. It is interesting that it falls out the same as the four squares.
Four scenarios, and four characters.

Slide: Story Quad: Example Two

Swigart: If you figure out who the four major players are, you can rotate them around and create four different stories with different protagonists.
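(A minimal sketch of that rotation idea: keep the same four players and shift them one position around the quad, so each gets a turn as protagonist. The player names come from the non-profit example above; the casting order is illustrative.)

    # Sketch only: rotate four players through the quad roles to get four different stories.
    roles = ["protagonist", "antagonist", "guardian", "thief"]
    players = ["non-profit", "realtors association", "regulators", "start-ups"]

    def rotated_stories(players, roles):
        stories = []
        for shift in range(len(players)):
            rotated = players[shift:] + players[:shift]
            stories.append(dict(zip(roles, rotated)))
        return stories

    for story in rotated_stories(players, roles):
        print(story)   # four castings, each with a different protagonist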

Slide: Drama has a curve

Swigart: Drama has a curve. There is an initial situation: the event that triggers P to want G. There is a setting, a place; that is important because it is a real place that has real meaning for people, not "the world." It establishes a theme: why are we doing this, what is the message, what domain are we talking about? Is it the ecology of the world, or are we talking about markets? What, thematically, are we looking at? Then there is an episode: something happens, and again the episode has a setting and a problem. Then there is a resolution: does P get G or not?

Slide: Story structure continued

Swigart: This is a little complicated; it is just a graphic depiction of those points. A story includes all of them: it has the four characters, and it has a beginning, a middle, and an end. It is very convenient; it's a nice three-act structure. Any book on screenwriting will tell you they all have three acts; it goes back to Aristotle. I don't believe the hypertext post-modernists who say that you don't have to have them; I think they are important. So you have an initial situation. You have an episode, or maybe a series of episodes, which are like small versions of the overall story. You have a resolution and a conclusion. Down at the bottom you see the state of the world. Sometimes the state can go negative on you, if P gives up or gets run over by a bus. I thought I would finish with a quote from one scenario, from Peter Schwartz's book, The Art of the Long View.

Slide: A Scenario

Swigart: This is from a scenario called Change Without Progress. "The change without progress scenario is the dark side of market world, a future of chaos and crisis in which people see themselves as the Lone Ranger fighting the system, and the system falls apart. It is a future similar to the world in the movie Blade Runner: a world with fast-paced economic activity, but one in which ruthless self-interest and corruption run rampant. Social conflict, a widening gap between those who have made it and those who are permanently locked out, and environmental decay are all commonplace. Economic volatility and disdain for the welfare of average people color public policy and corporate practice. There is little threat of big wars; imagine global gang wars instead: small, short-lived, incoherent fights, more out of pride and bad temper than conflicts of real interest. The principle that the enemy of my enemy is my friend dominates international politics. Instead of rigid alliances there are quick-paced, uneasy, tense marriages of convenience...."

That gives you the flavor of it: an overview, a high-level, fast-paced summary, third person, no characters. The next one is from a project we did last fall, where we were trying to look at technology about twenty years out; it is set in 2020. There were four of these stories. We are actually doing vignettes now that are short stories, more like four or five pages long. This one asked, well, what is the dark side? We were looking at biotech, and this is a story about gene hacking. What are teenagers going to be doing in 2020? They probably won't bring down Yahoo; they'll do something else. This is Columbine, the next generation.

"Jose Stewart Narcado and Billy Yackmanhan hacked into the gene bank early Saturday morning before any of the watchdog agents were back from the
subway attack in Paris, which had rubbed them half over the wrong way. First, who ever did it had hacked a bug that ate pollomber insulation, and then
decomposed. Leaving nothing but a few traces of organic molecules. That was one bitchin bug, and you had to admire. Second the electrical short
spired off a number of microwave frequencies disrupting not only communication channels, but also all sensor traffic in the sixteenth aroundisment.
That was cool too. What wasn't cool was people got killed and hurt. Very uncool. Stewart, aka. Demo Man, resented it. Billy he said, the two of them
crouched beside a junction box at the edge of their Buenos Airs neighborhood, we have to do this right. Yeah we got to show them, Billy said. As the
youngest he was keeping watch, the soft air hiss of high-speed vehicle traffic was only centimeters away, on the other side of the wall.

The story goes on and you see what they do. Again the flavor there is what I was after. I was trying to make it personal, make it real, and make it into a
story. That is what I have to say about scenarios today. Questions?

Audience: Are you concerned at all about people reading a story, and being unhappy maybe about the style and the nuances, and kind of overlooking
the point, which is the scenario that you are putting together?

Swigart: No. I am a good writer. I think that if it is done right, the point of it makes the impression. What we did at the meeting was that I read these to a small group; we had about forty people at a workshop, and then there was discussion about the stories. There was a lot of material around it to keep it focused on the story.

Audience: I am notorious in this seminar for bringing up law. I can't help bringing it up again in connection with your comment about story being a powerful mode of communication. The fact that law teaches itself and reproduces itself in the form of stories, otherwise known as cases, is probably the most forgivable thing about the field. If we only cited rules, I think we would have been annihilated a long time ago.

Swigart: No, I think it's true. It's the thing that makes law human. It is a human activity.

Audience: I'd like you to say a little more about your comment about hope. In particular, scenarios are formed by projecting possibilities into the future, and the reason for doing that is to come back to the present and ask what kinds of choices we might make today that can influence those outcomes. I'd like that part of it to be explicit and linked to the notion of...you sort of treated hope as though it's way out in the future, when everything is so misty that all you have left is hope. I would like you to bring hope into the scenarios as one of the compelling drivers of the choices that we make.

Swigart: That is a great question. On hope: there is a time frame beyond which you are just speculating, and we don't know what that time frame is. What we do at the institute is try to go five to ten years out, and sometimes twenty; in twenty years, who remembers what we said? The institute has been putting out a ten-year forecast for twenty-one years now, so we have outlived the first ten forecasts, and the results have actually been good. The whole point of doing the scenarios is to make good decisions today. To paraphrase an anthropologist friend of mine, the stories that we tell about the past are a way of understanding the social realities of the present. I say that the stories we tell about the future are also a way of understanding the social realities of the present. They don't really tell us about the future, but they tell us how we feel about it, and hopefully they help us make better decisions today.

Engelbart: I'll just introduce the next speaker, Norman McCachron from SRI, who is going to be talking about benchmarking.

Slide: Benchmarking to improve management practices

McCachron: One of the things Doug mentioned in his remarks that I found interesting is the importance of, in effect, learning from the activities and successes of today, and in fact running experiments. Part of the challenge he mentioned about laboratory-scale experiments is the low level of complexity in which the events take place, and the difficulty, in effect, of carrying the information generated in that social setting over to a larger environment. There is another approach that I have run a lot at SRI over the last twenty years, which tries to get information directly from those highly complex environments by, in effect, treating each organization as a series of natural experiments in performance, and trying to identify the good performers from a particular point of view. Say I am working with a technology company that wants to identify how to do product development more quickly, which was a classic subject for benchmarking activities in the late 80's and early 90's. They take a look at companies that seem to be bringing products to market pretty well, and they ask us to do a survey across those companies to determine what practices those companies are undertaking that really allow them to bring things to market more quickly. What we end up doing is looking at the organizations, discerning the critical factors in speed to market, if that is the subject, and then looking at the combinations of practices that seem to characterize the companies that are actually more effective in bringing products to market more quickly.

That is an example of benchmarking: identifying good practices by taking, in effect, the working organization as a laboratory, through comparison rather than through a controlled experiment. That is why I call them natural experiments. I suppose my spiritual progenitor on this one is Donald Campbell.

Slide: Using best practices to improve

McCachron: What we are looking at, then, is identifying best practices as a tool for systematic improvement. If we are looking at how to bring new products to market, that might be sort of a level one. We could equally well look, and we have, at things like how to manage and improve the quality of the product you are delivering, which is in effect looking at how organizations manage an improvement community. So there are various ways you can do this. The focus of the best practice can vary from level to level, so this same activity of benchmarking can apply to the improvement community just as well as to a fundamental practice of delivering products or managing technology.

Slide: Using best practices to improve cont.

McCachron: Today, because it is perhaps our most recent large-scale benchmarking study, I am going to focus on a particular area of practice: the variety of things companies have found to effectively manage their technology, including the knowledge base for those technologies, the knowledge repository. Benchmarking is an interesting discipline. At one level it is a systematic learning process in which the learning is shared across the participating organizations, not just with the sponsoring organizations; it's fundamental that everybody has to win. A second part of benchmarking that is absolutely critical is that you have to go into the organization and observe in a natural setting, collecting and observing as a third party. You don't just take their word for it. Sometimes the companies themselves don't realize what is an astonishingly interesting, effective, sometimes simple good practice, because they aren't outside their organization looking in. It is particularly appropriate, when you want to look at a target area, to understand what the organization that is sponsoring or cosponsoring the study is struggling with. Once you understand that, you understand what is new and different and what could improve their performance, and in that sense there is a lot of learning that can take place. There is another part of the process I want to bring to your attention that is not explicitly on the slide, but appears in that second bullet: one of the things that is powerful in getting change is to convince people that another course of action is possible.

Slide: Using best practices to improve cont.

McCachron: One of the ways you can do that in a business setting is to look at a competitor, or a company in another industry that is admired by your client, look for the practices the admired company is following, and bring them back. The reality of those practices, combined with the superior performance of the other organization, can overcome the political opposition that says: we have never done it that way around here, we couldn't possibly consider doing it that way, you are crazy. Part of the challenge for any consultant is to become credible. My fondest quote from the Bible is the one that says a prophet is honored everywhere except in his own house. Those of you who have tried to achieve change from within an organization know exactly how valid that is; one hundred thousand percent. So to get credence you have to, in effect, line up your evidence, and one of the best ways is to get credible evidence from outside the organization. This is our approach to taking both the research and learning process on one side and the political convincing and communication process on the other, and making it a dynamic, living thing that generates useful results. We have it laid out as five steps, and it all looks nice and digested, but believe me, doing it in the real world is anything but linear.

Slide: SRIC's approach

McCachron: There is a lot of interaction that takes place, and a lot of dispute. In some ways, if you are engaged in the benchmarking process with hot, passionate involvement by various parts of the organization in understanding what is going to be done, you have a pretty good idea that there are stakeholders out there who are really going to pay attention to the results. The thing you don't want is a study that doesn't have any passion in it at all; that is somewhat akin to what our previous speaker said. So, assessing the client situation: we have these dry words, diagnostic interviews and gaps. That is where you step into the organization with some background, the kinds of things that come from having done this a lot, so you have both the perspective of past good performance and some additional points of view and critical questions. You begin to identify areas of potential weakness and gaps in the organization, and start generating some controversy and involvement. Then you get a sense of what the most critical tasks are. At that stage you are trying to elicit the doubts, questions, and challenges that you know you will face at the end, but you identify them up front so that you can go out and collect the evidence to address them. You don't want to be blindsided at the end by a question that didn't come up at the beginning.

Slide: SRIC's approach cont.

McCachron: The second thing you do is develop a compendium of the relevant practices that address those gaps, issues, and challenges. You can start from an experience base, but it is very often the case that the most critical information is the freshest, from the companies that are regarded by opinion leaders as being the best from the standpoint of leadership. So you end up collecting information from those technology leaders and feeding back similar kinds of profiles to them, so that everyone is a winner.

Slide: SRIC's approach cont.

McCachron: Rating the performance against others. This is often the most challenging part from the standpoint of the science of it. You end up with a very rich set of indicators, evidence, and observations, and making those assessments is not an analytical task with survey questionnaires and standardized responses; it is heavily dependent on judgment. So a lot of the richness and experience base of the benchmarking team goes into this performance rating. That is one place where we have found it critical that every benchmarking discussion or interview be done by two people, two observers. One of them may take the role of asking the questions, and the other the role of recording the answers, so there are two independent viewpoints present as to what is being said and what the critical follow-on issues to address are. You have at least two points of view. Then, when you do the rating, you come back and work together as a team to understand the assessment involved.

Slide: SRIC's approach cont.

McCachron: The fourth step is converting the results into an action plan that is appropriate for the organization that paid for the study in the first place. There the critical question is transferability: in what circumstances do the best practices you found in other organizations work, and what is their relevance or non-relevance to the client organization? Then there is developing the action plan. Even though I spend very little time on it here, that can be a critical step. So here is the flow chart of how this process works.

Slide: SRIC's 'Best Practice' benchmarking and improvement process

McCachron: You start with the internal assessment, looking at the gaps and the existing practices that lead to those gaps. Then you take that information to guide the formulation of your external assessment, looking for the practices that have achieved superior performance by some concrete measurement. Sometimes you can actually form metrics beyond just the pure business success of the organization; other times it is just a judgment call. Then you bring those relative ratings together to compare the company with the other organizations in the survey.

Slide: Collecting and organizing best practices: a recent example

McCachron: This is an example of good practices; in effect, we should probably say that we collected the best from each of the companies, over four hundred practices from thirteen different participating companies, most of which are household names in the U.S., in pharmaceuticals, electronics, petrochemicals, and oil exploration and production. The sponsoring organization was an oil company, but it had the wisdom to look outside the oil industry, because much of what it was interested in was done better in other industries. Then we organized the performance around eight critical areas.

Slide: Performance areas: an example set in managing technology

McCachron: These were the areas; each of them had on average about four excellent practices associated with it. You begin to see here some of the themes, I think, Doug, that you have been talking about: organization, alignment and direction, leadership, managing the human resource base, information, leveraging and deploying technology, and managing, capturing, and deploying business knowledge as well, along with assessing and tracking performance. These areas were basically developed as a construct for asking what these four hundred practices really boil down to: which are the best of the best, and how do they fit together into a structure? This is our current representation at SRI of the eight critical areas for managing technology effectively. It is just an illustration; you can do a benchmark on any number of different things. One thing I might comment on: in every benchmark we have done, certain common elements appear. Leadership is one, alignment is another. Alignment means the correlation between the activities done on the business level, the technical level, and the marketing level. Is the organization all pointing in the same direction, or is R&D off doing stuff over here while the company management thinks it's over there? I remember a classic misalignment case at a company that we all know, headquartered in St. Louis, where they had a group doing research on a new process for manufacturing polyethylene. It was a very capital-intensive process, requiring at that time, which was the early 80's, about 800 million dollars in capital investment to carry it into operation. It turned out that five years before the completion of their research, the board of directors of the company had decided that they weren't going to make any more capital investments in that industry, but nobody had communicated that to the dozens of researchers who were diligently pursuing the research. It became a total loss and a devastation to the careers of several of those leading professionals, because the parent company sold off that unit without being able to gain much from that technology, from an evaluation viewpoint.

Alignment is one of those issues. Another one that has become much more prominent in the last five years, and which is consistent with what we are talking about here, is the knowledge management arena. It is probably the fastest-growing cross-industry concern. What do we have as intellectual capital, not just property in the sense of documented and legally controlled assets, but also intellectual capital in people's heads, and how do we identify it and bring it to bear where we need it? As one book title I just love says, if only we knew (conscious, deployable) what we know (unconscious, not deployed, somewhere). So that issue is very relevant in the benchmarking arena as well, because in effect internal benchmarking helps you know what you know.

Slide: Company Ratings

McCachron: When we do these company ratings we are, as I mentioned before, using a team judgment process, but we find it very helpful, and sometimes highly controversial (controversy is good in this business because it involves people), to put the results together in graphical form. So we identify the key elements, which are shown here as a radar diagram, and then we look at how companies rate compared to their industry.

Slide: Company G vs. electronics industry average and overall average

McCachron: Here is company G in the electronics industry vs. the industry average. The company itself, interestingly, was good at managing knowledge compared to the industry average, but was having some struggles with respect to leadership and commitment and managing opportunities; in a couple of cases it had fumbled the ball on some new technology. So the change that was required, interestingly enough, wasn't located primarily in the technical arena. They knew about those technologies; the problem was in the management process that made decisions about how to use that technical knowledge. Now here is a comparison that shows how best practices help even when you are dealing with a company that is pretty good. Company G relative to the electronics industry wasn't too bad.

Slide: Company G vs. overall best

McCachron: When you start comparing it with the best of the best, in other words when you take the best ratings across all of the practices and all of the scales, you will see there are some real opportunities for improvement in nearly all of the areas. The one area where that wasn't true is where the company set the standard, which happens to be in managing knowledge internally and with its customers and suppliers. So, one of the things that happens in benchmarking is that everybody can win. The best company in the sample always finds that it can still learn things from somebody else, because it is never best in everything in every area at the same time.

Slide: Leader Profiles

McCachron: So here are examples that show how that chart is a useful communication tool. You see company G there, and you also see the best company in the sample right next to it, which was company F; it happened to also be in the electronics industry and is a household name. One of the interesting questions for company F became: are we actually trying to be too good in too many areas, instead of focusing on the right areas to be good in? So it isn't always the case that you want to be best in every area.

Slide: Implementation Planning

McCachron: So, just to make some comments on the nature of implementation planning: the gaps that we identify at the front end of the process, on the left, are the critical gaps that we aim at when we try to identify the best practices. Then, when we come up with those practices, we filter them down to those that are applicable to the company, transferable, and adaptable. Then we can identify the improvement targets.

Slide: Example

McCachron: This shows an example of an improvement plan that is related to the knowledge management arena, looking at some of the gaps that are there and some of the best practices that we identified as addressing those gaps. Naturally, each bullet has richer detail beneath it; we are keeping it at a summary level. Let me read through what the example gaps were within the original company. Limited documentation, no formal follow-up process: this is a typical problem, companies neglect to document what they learned from the last project, so they continually make the same mistake on the next project. Reluctance to share knowledge. Does this sound familiar? I've got my knowledge; why should I share it with Jack? Jack might get a better bonus, so why don't I keep the knowledge here and let Jack stew in his own juice. Technical knowledge not transferred to remote locations: this was a particular problem because this was a global company with widely distributed technical exploration and production activities. And laboratory staff alienated from operations; we chose that word carefully, because they really were alienated from operations. The best practices: establishing knowledge values and priorities. Using a common infrastructure worldwide, which sounds like a familiar theme we have been talking about here, about having the ability to share knowledge on a common basis. Creating a best-practices catalogue, both internally and with respect to external observation. Integrating an external technology intelligence system: what's going on out there, what will it tell us about what we need to do, and maybe what opportunities we could take by acquisition or purchase, instead of having to do our own technical development. Lastly, adopting a company intranet as the platform for distributing this knowledge. So those were the key best practices as of 1998 in this arena of knowledge management. Now, we have come some distance in the last two years, but in many companies these are still critical areas, maybe with some slight technical advancement, but still critical areas. So the improvement plan requires selecting a team, chartering a redesign team, specifying a knowledge management system, and developing a rollout plan: the standard kinds of management activities for implementation. Now, given the best practices, you are in a more solid position to understand what it will do for you, and even to have a series of contacts with those companies that have already done it, to say: since you are not a competitor, but you have also done a knowledge management system, what are some of the lessons learned in implementation that can help us out and avoid problems? So one of the things we have seen from benchmarking is that a community of interest in improvement is formed, sharing practices, experiences, and implementations over time, which fits with one of the things Doug had mentioned in his introduction as well. Thank you.

 

 

                                        Engelbart Colloquium at Stanford

                                         "The Unfinished Revolution"

                                       Tape 2 Session 8 February 24, 2000

 

Engelbart: Welcome back to the second half of session 8. What I think we will do right away is introduce Marcelo Hoffmann, who will describe some of the activities associated with doing intelligence collection. He has been working in business intelligence for five or six years now. He has also been very active in running the colloquium. Here is Marcelo Hoffmann.

Slide: Intelligence: Collection, Analysis, Dissemination

Hoffmann: What I will say today is based on our experience working on this for a few more years than Doug remembers; I have been at this for about twelve years, but time goes fast. What I will take as the example is the business intelligence center activities. This is a group that has been doing multi-client research and some consulting for some forty-plus years. I'll use this as an example of what Doug calls a DKR, a dynamic knowledge repository, and some of the issues that come about from this.

Slide: SRI Website

Hoffmann: What we have here is seven multi-client programs. You can check them out on the web at future.sri.com. Think of them as seven syndicated research activities. The whole point is to organize these types of activities around customers in ways that they particularly find useful. We have sixty-five or seventy people working on these types of activities. That is also important because we are distributed, our clients are distributed, and that affects the way that we do our business and how we connect with our clients. So we have people in Menlo Park, Princeton, Tokyo, and London, around these different activities: intelligent transportation systems, psychographics, consumer financial decisions, learning on demand, media futures, the business intelligence program, and Explorer.

Slide: SRI Website

Hoffmann: I'll limit most of my comments to the Explorer program, which is about technology monitoring. It is perhaps the closest to what Doug looks at in terms of intelligence collection, dissemination, and that sort of thing.

We have found that there are some classic design problems with this. Given a limited set of resources, you have the dilemma: do you go deeper, or do you go wider in breadth, in terms of what you cover with the limited resources, funding, people, or whatever resources you are dealing with? Then, how do you do this? What do you collect, how, and for whom? I will get more specific.

Slide: Classic Design Problems

Hoffmann: These are not rhetorical questions; they are very specific and rather problematic. How do you analyze what you get? For whom? The "for whom", who the people are, is extremely important. Then how do you disseminate? This is also a major issue, and I will use one example, again under the Explorer program, as a case. In general, the answer to these kinds of issues is that we do what is actionable for our clients and in ways that make sense to them. That is highly variable, but we have to do it because there is no other measure.

Slide: Example: Commercialization of Internet Commerce

Hoffmann: This is a classic example of one of the graphics that we use in one of the multi-client programs. It's how we disseminate the information, in this case about the commercialization of Internet commerce itself. We tend to use lots of graphics, which unfortunately don't show up here after many conversions from Mac to Windows to Mac, and so on. If you could see it, you would see that the graphics describe something about synergistic technologies; there is a graphic in the upper left that shows how far along different parameters are when it comes to commercial development, things like whether the resources required for the technology are higher or lower than average, and so on. We try to summarize things as much as we can in graphical form. This is important because we have clients all over the world, and they tend not to have too much time to spend analyzing lots of text. This is something that I would bring back, if we were thinking about a dynamic knowledge repository in the sense that Doug describes: we have to start thinking about what kind of dissemination we would want, how we would want to present things, represent things, and analyze them. We emphasize summarization for executives, visualization, and packaging for non-English speakers.

Slide: Purpose: Company Growth

Hoffmann: The purpose. Typically the clients we serve are concerned about growth and doing business, and this is generic. If you could see this better, you would see on the left-hand side all kinds of inputs to a company. Company X is the white box there; then on the right is where the natural growth would be for the particular company. If nothing is done, if they don't get additional inputs, they would still have some type of growth, or lack thereof, and then the question is what we provide. I think this would be pretty much generic for any outside provision of knowledge and understanding to expand their capabilities. We sell what amounts to a balance of understanding between demand, technology, and competition.

Slide: Approach: A Balance of Understanding

Hoffmann: Then the question is how do you analyze this combination in ways that make sense? On the right hand side you would see something about
commercial development parameters. Basically, the parameters that the company and clients want to see to help them make better decisions. When you
superimpose that on top of the existing functioning of the company, the client, then hopefully what you would have is the greater expansion of
opportunities on the right hand side.

Slide: Using Explorer to Understand Implications and Opportunity

Hoffmann: That is very important to consider in terms of matching: whatever inputs a company or organization takes in have to match the decision-making styles they have, in ways that let them better absorb it, use it, and so on. These are not rhetorical questions; I will expand on this a little bit more.

Slide: Lessons

Hoffmann: The lessons that we have found are quite broad. The collection, analysis, and dissemination are extremely contextual. If I were to take a case of, let's say, packaging or bottling, it's not good enough to know that companies like Coca-Cola and Pepsi are both interested in bottles and glass. It is more important to know who is in charge of finding this out, and what exactly they need to know, because if not, you could find out something that they already know, or don't care about, or that is too futuristic or too far back in their time horizon. So timing, frameworks, and ways to present information are very important. We have found that we have to continuously evolve the content of what we are doing, how we present it, and in what styles. As far as that goes, we have found that whereas we used to publish lots of text, nowadays we have more graphics, much more use of the Internet, the web, presentations, and so on. People have less time to digest text, so they want us to do things faster, in sound bites just about. So we do more presentations, we do much more interaction. So I think that is something that should be considered if one were thinking of intelligence collection with regard to bootstrapping and the DKR. What kinds of multimedia would be appropriate for whom, and in what context? One of the ongoing challenges we have found is how you define a multi-client universe. Who all needs to get what kind of information?

Slide: Lessons

Hoffmann: If you make it too broad, then nobody is interested. If you make it too deep, then very few people are interested. There is literally an art to defining the collective base that you are going to study collectively.

Just for example, in our multi-client designs it might take us six months, sometimes a year, to actually define this and go back with clients and find out what is needed, what isn't, how, and so on. It is a really difficult exercise. Oftentimes, what you find is not what you'd expected. We had a case some time back, around 1995, when we decided that the environment was really a big deal. What we found was that the definition of the environment was very different for everybody that we talked to. There was no commonality that we could find, that we could study, and sell as a multi-client. I would not be surprised if this happens many times over again. Semantics and syntax are very problematic. You don't realize that until you actually get in and say: ok, what if we give you this, would you want it? Oftentimes you get the answer yes, maybe, and then you come back and ask: would you pay for it? You might get a different answer at that point, for different reasons. So these things are not easy to define; you literally have to go through the exercise. Another thing we have found is that to understand who the clients may be, you have to go through specific individuals. It's very difficult to use generic titles, you know, "marketers would want this." Actually you have to find out who, and what specifics they want, and whether that matches with somebody else's needs, and whether there is enough of a universe to make a syndicated research activity worth pursuing. Other than that, the other challenge that we have found is that to improve the intelligence collection you need feedback. In order to get better at it you have to find out what you are doing well and what you are not doing well. It is very problematic.

Slide: Lessons

Hoffmann: We tend to get that ad hoc, although we try to get as much information from our clients as we can. It is always difficult, and you always get partial information. I would also suggest that even for this colloquium, although the organizers, including myself, have been trying to get feedback, it is always difficult, because those who would benefit don't always want to spend the time giving feedback. So there is a dislocation there. You usually end up doing what you think is right, then get some sort of feedback. You kind of go along, but you don't optimize. That is pretty much what I have, except one other lesson that we picked up that is quite interesting. There is a difference between those who pay and those who use. They often do not match. What you get in terms of requests from one is not what the other one wanted. You end up bifurcating your responses. You get into a sort of schizoid situation that is somewhat problematic, but you have to accept it, go with it, and try to satisfy both. That is pretty much what I have.

Slide: Intelligence, DKR and CoDiak

Hoffmann: I am hoping that we will have this discussion further on, in terms of intelligence collection with respect to the DKR and CoDiak. It is a practice that you cannot define beforehand; it is like a sport. You get better as you do it.

Engelbart: We only have two of our victims left, so I'd like to take a few minutes and get some dialogue and questions. My question is this: in both cases you guys were doing things for clients. You have to go out and sell it to the clients. The kind of thing that I am picturing that would be different is that part of the business of a NIC, and especially of a MetaNIC, is to keep watching the future and keep doing the intelligence work, so that if they want help, they are likely to come to you. Could you stand such a difference? Or, another thing: presumably what they would like to be doing is to start evolving better processes, methods, and tools by which organizations can do that kind of work, so organizations say, why don't we do it and you coach us. So otherwise it is like asking how much value high performance support teams could be to what you guys do.

Audience: I don't personally go out and sell to the clients, so what I get is kind of a backstage view of it. I suspect there is a kind of inertia about designing the research in order to sell it to the clients, then trying to figure out a way to go and do what you want anyway, then convincing the clients that that is what they wanted all along. Maybe I am being cynical. I think in a way, when you are talking about the future, companies, and it is usually a group inside the company, aren't sure what they want to know because it is in the future, unless it is a specific project they want to develop.

Engelbart: What we are talking about with a NIC is a bunch of organizations saying we are going to work together, among other things, to keep a better idea of the future. They have already come together to pool resources to do this sort of thing.

Audience: I think in fact that is what the clients of the institute are. They are looking for that.

Audience: I think my experience is somewhat similar, but with a slight bent. Regardless of whether you do it on purpose or not, within an organization or outside, there is always a client, and usually they tend to give you answers that don't quite match what you expected beforehand. You present stuff, like a prototype, and people say, that is really nice, and then you come back a month from now... oh, but we wanted something else. So oftentimes our clients don't know what they want. They agree to things that they haven't approved of, and vice versa. It is really problematic. I wish that we had a better way of understanding what clients want, more like the consumer arena: do we buy the shoes, or do we not buy the shoes. In terms of the capabilities, and doing this in a feedback loop of sorts, it is also very problematic that the feedback loops are not defined. You never know exactly what you will get back.

Engelbart: Does anyone have some questions? We have a few minutes.

Audience: Marcelo, the point that you made about the people who pay: they may have different needs than the people who are actually going to use it. Can you expand on that? That is something I have seen in lots of different places. The decision makers, maybe because they are so disconnected from the actual work that is going on.

Hoffmann: It is a good question. What we have found is that the whole issue that Norm mentioned about alignment is oftentimes nowhere near as clear once you get into the details as it is from the top level. Senior managers assume that the whole corporation is aligned. What we have found is: not necessarily. Particularly when we go to the low and middle levels, oftentimes there are almost opposite goals and opposite views. For us, looking at this from a psychographics perspective, often what the senior manager wants is a summary. What the operational person wants is much more detail. And the lowest person, say an engineer in the design stage, wants all the data that can possibly be gotten from the world. What kind of package do you engage when you sell to multiple buyers? It is really problematic. Oftentimes, somebody approves it and then another person doesn't like it. It goes back and forth. It's really not just about money; it is about understanding what is useful.

Audience: I am reminded of that old saying: chickens don't buy chicken feed, farmers do.

Audience: I was wondering if you find, in the new media, any solutions to that problem: through linkages and layers, having a graphic organizer that within it links to a summary, which in turn can link to in-depth information. Are you beginning to use that? Does that somehow address those issues?

Audience: We are experimenting with that, and trying to find different ways of shaping the reports and information so that it is accessible at different levels. We had a meeting on it the other day, and we have a lot of ideas coming out about how to do that. There are so many different learning strategies and needs that people have. Finding some way to tailor that information to meet all of the different clients' needs is a non-trivial problem. Maybe storytelling is the answer here.

Engelbart: I think those are just examples of what the knowledge environment is going to be in the future. What an organization needs, and what its different divisions, projects, and parts need, are going to keep changing, and the business of being able to generate usable knowledge as it shifts, and portray it, is going to be more universal. It has to be a continuing thing. It may or may not be something that you outsource like this. I think we will move on.

Audience: I just had one anecdote, and it has to do with the whole knowledge management issue. It is a story I love. A very large consumer products company was denied a patent on the grounds that they already owned it.

Audience: There is a question that this discussion prompts in my mind. That is: do we have to be thinking about storing knowledge in lots of different ways, taking the time to actually put the knowledge away in different forms so that different people can retrieve it and understand what it is about? Or do we think we are clever enough to have the viewers change it into the form that people need?

Engelbart: That is such a general problem. I don't think you have to lay it on them.

Audience: Basically you are providing different views. Stories vs. neat little charts vs. reports and things. In the disability field we have the problem of converting information that was meant for a graphical user interface into a form that works for blind people. It is a huge job. It would be a lot easier if there were a word version of it.

Hoffmann: I think the way that we are trying to deal with this is to put what used to be our reports and documents on the web and assume that it is accessible by everyone. From there we are seeing how many of our clients want what, and then we start doing that on a talking basis, presentations, conversations, and so on, because it is too expensive to do in a formal, organized fashion until we have enough demand. Say half of our clients want it this way; then we optimize it and go with that. Otherwise the whole issue of conversion is really problematic. Ideally there would be a way to do it with XML, and then turn a knob and make it graphic or make it textual, stories or something, but I don't know of anything on the horizon that is going to do this in any easy way yet.
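
As a rough illustration of the "turn the knob" idea, here is a minimal Python sketch that renders one XML source either as an executive summary or as a detailed listing. The element names, attributes, and view names are assumptions for illustration only, not an existing SRI format.

    import xml.etree.ElementTree as ET

    # A made-up single-source report; in practice this would be richer.
    REPORT = """
    <report topic="Internet commerce">
      <finding priority="high">Synergistic technologies maturing quickly</finding>
      <finding priority="low">Capital requirements above industry average</finding>
    </report>
    """

    def render(xml_text: str, view: str) -> str:
        root = ET.fromstring(xml_text)
        findings = root.findall("finding")
        if view == "executive":      # short summary for senior managers
            top = [f.text for f in findings if f.get("priority") == "high"]
            return f"{root.get('topic')}: " + "; ".join(top)
        if view == "detail":         # full listing for operational staff
            return "\n".join(f"[{f.get('priority')}] {f.text}" for f in findings)
        raise ValueError(f"unknown view: {view}")

    print(render(REPORT, "executive"))
    print(render(REPORT, "detail"))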

Engelbart: Thank you for bringing that up. When I use the terms portrayal and views, that is what we have to be more flexible about generating; that is the kind of thing I mean. That is the kind of thing high performance support teams can fly in and say, I'll do that for you. That is a little downstream.

Audience: I don't think it is an opposition between stories and graphics. They are mutually supportive. What is happening is that there is a real shift in the way information is organized initially, so that instead of going into formal reports that can be fifty, sixty, seventy pages long, it's broken up into what I call NITs, narrative units. They are small things that can be hypertextually linked back and forth. I think that is changing the way people are thinking about and experiencing even literature, for example. This is going to have a big impact. It is going to come with a new generation that thinks this way, as opposed to the formal essay structure that we are all used to.

Audience: The problem that we are experiencing at the moment is that, having worked that through for blind people, now multimedia is making life really difficult for deaf people, because information was always available in printed form, and now a lot of the key information is being presented in a verbal form. There is no translation of it, so we have to develop technologies to do that. That brings up the question: should we as a society be thinking that we need to have different views for different cognitive levels, for different perceptual levels?

Engelbart: Absolutely. Furthermore, things are moving faster, and for a lot of things we can't spoon-feed important people. They have got to get in and be learning all the time. We all think that the learning system is K-12 and universities, etc., but learning is going to be continual. Everybody has got to keep learning. How you provide for that is important. Your dynamic knowledge repository essentially needs to be flexibly adaptable to whatever learning techniques there are for grabbing, showing, digesting, and testing. So it is all evolving together. That is a good example to bring up. I have some things I have been wanting to dig out too. It has been a problem for me: I show diagrams and make statements, but I don't know how they hit you. I am going to try something here. I have five slides which you have seen before. I would like to spend a few minutes with each one and hear what kinds of questions and comments about these come from you guys. Here is the first one.

Slide: This Source of Human Capability Individual and Collective

Engelbart: I would enjoy hearing some questions or comments. What do you see when you look at this? I think it is so rich.

Audience: One of the first things I see is that I want to know more about those different tool systems and human systems. Do you have to drill down deeper to see what they consist of and how they interact?

Engelbart: Are you by any chance an engineer? Now I want somebody to ask about this left side too.

Audience: I meant the human system also, both sides.

Engelbart: I got some good interactions in just the last year from people trying to go down inside that, and also in the middle. What is a capability infrastructure like within the organization? What are the capabilities up high, and what do they depend upon? It sort of spreads down like this. Down at the medium to low levels there are some basic ones about how people write, talk, communicate, interact, and things like that. Those are things that are going to get affected tremendously in the future. When they get affected like that, it means the layers above them in the infrastructure would adapt to harness that. It is because such fundamental things down low in that infrastructure will be impacted so heavily if we really look at how our knowledge work, our thinking, and our collaborating are going to change. That changes a lot, and it goes down into the way that you are going to harness, sense, actuate, and perceive. What goes on in your head, both conscious and unconscious, is going to shift a lot. It is just a very good lesson to learn as you watch how you operate through different kinds of activities: what kinds of unconscious skills you have that had to develop.

Audience: It is implicit in your structure, but I just wanted to make it explicit: our culture tends to think of knowledge as what is between our ears, instead of what is between people. It is a relationship between us and the external world. Knowledge is very much a social process. It emerges in a social dynamic. Like I said, that is implicit in there, but it might be good to try to get at it more explicitly.

Engelbart: I would like to have some working groups get together with me and, as the weeks go by, really try to drill down into those things. When I do it by myself, it runs off the table.

Slide: This Source of Human Capability Individual and Collective Cont.

Audience: I was wondering if you had anything in here about how to deal with failures. Is dealing with failures something that belongs in procedures, paradigms, and customs? Because it seems to me that we make our own failures more visible by having a knowledge repository that confronts us with the fact that we failed, time and again, last week. We have to learn to live with that. It is very uncomfortable for some folks. We need training in living with the fact that we are really failing a lot more often than we convince ourselves we are.

Engelbart: These are the kinds of things that, when you bring them up, are going to get more collaboration. This goes into the unconscious part of the human: how you unconsciously deal with people, how you react and send messages to them, like, oh, you failed, or whether you are just supportive: let's try again. It reminds me of my one shot at making an aphorism. Did I tell you about earning thirty-five dollars? Somehow it fell into the hands of Reader's Digest and they actually published it there as a footnote someplace. Mine is: the rate at which you mature is directly proportional to the embarrassment that you can tolerate. So maybe we can build that kind of thing into a social sense at a larger scale. There is so much richness through all those things there. The little funny thing about technology exploding in that right-hand column just pervades all through that to make changes. Over and over again today we are trapped into looking at the technology that is there to kind of automate the kinds of things we used to do. I really want to spend some more time drilling down in there to say there are things one can do about how technology and people work together that would start to make a real difference. That is something I would like to toss up to people too. Before that I have a few more of these.

Audience: What are those capability infrastructures that have been identified so far? Is that published anywhere?

Engelbart: Sometimes when people pat me on the back and give me awards, I think that there is really so much that is left to do. I don't know how to do
a lot of it. We need to collaborate. That's a dirty word I know. That is why we need an evolutionary environment.

Audience: Speaking of stories, when we analyze things so far, eventually they get a little bit fuzzy. Many of the things that... the stories... our culture and many cultures are built on are really myths of various kinds. I don't mean myths in the sense that they are untrue, but perhaps symbolic stories in many cases. Someone who is very famous for pursuing this was Joseph Campbell, who made studies across different cultures and across different times to try to extract what might be almost cultural invariants for different types of societies. If we are going to look at this sort of thing, it might be useful to read some of his writings or see some of his videotapes.

Engelbart: When you start looking at some of these things, there is a term we use, CoDiak. We are trying to identify the core set of capabilities for being collectively smarter. The middle initial I, for integration, is just a terrific challenge. There are all of these kinds of things that people have generated and come up with over the years that somehow have to get integrated, so we can integrate them into a sort of useful body of understanding and knowledge. I look at some of the books out there, and the situation is only going to get worse. So really striving to find out how we collectively can do that integration into applicable forms is terribly important. Every time I hear about something like that... When people are putting things on our forum, Eric here just pours out book-like things, so pretty soon I am going to challenge him: who better to start integrating that? It is an extremely important capability, and it is going to take focus to do that.

Slide: NOTE: Multiple DKRs in Any Large Organization

Engelbart: Let's go to a second slide that you see a lot of. What does this do for people? It is a very important one in my framework. For a few years I was a young aspiring professor at Berkeley, and I began realizing that everybody who has been through very much schooling has that face. You bring up a question and they look just interested enough, just awake enough, so that you won't call on them or think that they are asleep, but not enough to be called on. Suddenly we get that professional student face out there.

Audience: The thing I like in this diagram whenever I see it is all of those interconnections. You call out the concurrency, that you really are going to be doing all of these things at the same time. One of the questions that I have is that I keep looking at the little ones under there: what is going on underneath that supports these pieces? When I look at the dialogue that has been going on in the on-line discussion, how do you take that and make it into a real DKR? What does that take? What do we really need to do to support that recorded dialogue, and intelligence collection, and knowledge products? Especially when you think about some of Neil's concerns for other people with differing capabilities. Is that all happening under here? What does that take for us? How do we do that?

Engelbart: It has been done organically, slowly, in imperfect ways for a long time. As the world is changing, that is part of the scaling thing that you pointed to before. The issues are going to be more complex and happening faster. We have got to have it; we will fall apart if we don't find a better way to manage. Then I put CoDiak down as the challenge for that. What does this knowledge products box mean to you?

Slide: NOTE: Multiple DKRS in Any Large Organization

Audience: I am noticing that what you are not making clear is that the DKR has to be contributed to by everybody. Everybody is an author in this. If you have a meeting and you don't put it into a DKR, your personal knowledge repository about what happened at that meeting, it's gone. You have to collect it. If you can collect it, you can share it quite effectively. You have seen Rod Welche's.

Engelbart: That is part of recorded dialogue in my book. What is going to be valuable from the interactions, and from your own notes, in fact?

Audience: The point that Rod makes is that you have to go back over your appointments for today. Say what it was that happened and what needs to
happen next in each of those things.

Engelbart: Great. That is integration. Then you also have your intelligence collection: what is happening out there. So the dialogue is in the intelligence box that is coming in. The knowledge product box over here is a current summary status. That is where you stay applicable.

Audience: My main point is that this is not something that somebody can do for you.

Engelbart: Well, that is right. Everybody has to be a part of it. That is what the concurrency part is: you have to concurrently be doing your part while your team is doing its part. The project will develop like that. That concurrency thing, and integration. Integration is tough, but the concurrency one is the real challenge. That is the kind of thing that leads us to say that we can't have these islands: you can't have these guys using Microsoft and those using Lotus Notes, so that they are islands. This is why the standards for the way documents and communications work have to evolve, and the evolution of that has to be a lot more than just what comes out every two years as the next version. So the evolutionary process for the whole thing to grow is part of it. The knowledge products we used to call the handbook. Then people started objecting: well, that is too much. Then during some of the meetings people said, why don't we start calling it the encyclopedia. It's got the same sense to it: where you go find what it is you need to know about the current state. If there are given issues flying around like that, what is the current state? If you are not quite sure about that, you should be able to backtrack the steps of the issue's development. The back-link management is extremely important in there.
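
A minimal sketch, in Python, of the back-link bookkeeping idea: whenever one item in a repository cites another, record the reverse link so the development of an issue can be backtracked later. The item identifiers are hypothetical.

    from collections import defaultdict

    forward_links = defaultdict(set)   # item -> items it cites
    back_links = defaultdict(set)      # item -> items that cite it

    def add_link(source: str, target: str) -> None:
        forward_links[source].add(target)
        back_links[target].add(source)

    add_link("summary-2000-02", "meeting-notes-117")
    add_link("meeting-notes-117", "issue-42")

    def backtrack(item: str, seen=None) -> set:
        """Everything that builds on 'item', directly or indirectly."""
        seen = set() if seen is None else seen
        for citer in back_links[item] - seen:
            seen.add(citer)
            backtrack(citer, seen)
        return seen

    print(backtrack("issue-42"))   # {'meeting-notes-117', 'summary-2000-02'}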

Audience: So do you have a list of twenty or thirty of these DKRs that are analyzed for how good and effective they are?

Engelbart: No. That is what I was going to get Norm to do, but he left. One of the things that needs doing in the improvement infrastructure is to learn how to give that kind of rating and benchmarking: how well is this being done? It would be terrific when it starts going downstream, and you can say: this company advertises that it is going to be doing all kinds of things in its products for people, but it rates very low internally for doing this, so how come they are telling us what to do? Individuals need ratings too. Things are changing rapidly. You can't say, hey, I got good grades in high school, so what the hell. Think of benchmarking and having ratings so that you know where you rate. Suppose you come to work some place and they look at these different categories: your general knowledge management rating shows that you have not upgraded. That sort of rating you had two years ago: well, that is fine, but things have moved on. In order for you to keep up with your peers you are going to have to move yourself up some notches.

Jon: I would like to follow up on that question. A concern I have about this picture is the implied amount of overhead. It is not just the matter of making all of us responsible for what got said; this also relates to Neil's question earlier: to make that information useful, you need to add information to it. Meta-information, to provide hooks, links, and summaries. I've never seen a system where that can be provided automatically. To me it boils down to a philosophical question of where meaning comes from anyway. It seems to keep coming back to the fact that the meaning has to come from the people. This relates to what Neil asked, because with the sight-impaired people... the reason things are so flat is that there is organized, hierarchical, logical information being conveyed visually by the spatial organization of things on a page. That isn't going to come through in the bare text. Adding that information always takes an added twenty, forty percent of effort. My problem with this is that, to make this work, we actually have to put more time into the information, while what we are being pressed to do constantly is spend less time doing it. I don't understand the way out of the dilemma.

Engelbart: I get it. That is part of what is going to happen anyway: more complexity, happening more rapidly. How are you going to cope? If you look at that, the big central part of coping is how you do this integration of innovative ideas, thoughts, and issues, while keeping track of what it is you need to know to be applicable. It is going to take more overhead. Therefore it becomes more of a team sort of thing. The trade-off for that would be, as Jeff Wilson likes to talk about, boosting your collective IQ: ten people can do what twenty used to, as a team effort. Somehow in there it has to be done. What are the practices? One of the things I am really interested in is for people to have a discussion. I got one of these speech recognizers and I want to start training it. What I would like next is that when I call up someone or talk to them, they have one of their own. It may be a different brand, but it works. So there is a network connection between the two, and they fuse into a transcript of your dialogue. You will learn as you go along, like that, how to make tags as you are talking, just as a natural kind of thing. You also have smart agents that you can call on to do things. Today, how do you juggle and use those? The world is going to be more complex, and if we are going to survive in a sane and coherent way, a central part of it is the way you deal with your knowledge.

Eric: I wanted to reply to what Jon was saying. I think he is absolutely right that the necessity to add information to get value out starts to become a limiting factor. So I think the high leverage point is to work with the information that you are already recording in the form of documents, specifically at this point. I do think multimedia becomes important as we go forward. We are already recording design documents, we are already making decisions, and to the extent that we can capture those and relate them to what we are doing, I think we can get a high percentage return for a lower percentage of effort. Certainly not a zero percentage, but lower than trying to track every meeting that I have been to.

Engelbart: It depends what you mean by tracking. One of the things it would be interesting to do would be to lay out some propositions and see how people vote about what is going to happen some day. Maybe they would bet on it or something. Look at automobile transportation: if you had told people a hundred years ago, they maybe wouldn't have believed that you could spend that much money on a wagon and that you could travel so far. Things change. I am betting that we will get better at that. A lot of things are going to change. There are things such that, as you walk along in your day, you will learn to toss off the tags and the meta kind of stuff; it will just be part of your skill, instead of just talking and walking away.

Audience: I want to address part of that and also raise another issue that is related. Part of what you have to consider, as part of the bigger picture, is that the DKR can't necessarily be the end-all solution. The person who is actually using the knowledge that is stored within the DKR has to bring something to it. Ideally, we want to take all of that information that is stored in people's heads and make it available to others, but maybe realistically it has to be the responsibility of the person coming in, looking at the information, to understand the context of the information and go discover in other ways what that context is. For people who are looking at design documents and don't understand them, there is that social process, the human system side, the left side of the previous chart, where people learn things: they read the documents and they discuss them with other people, and it is converted into knowledge, clarifying things that way.

A comment I wanted to raise that I think is also related, and may refute what I just said, is a point that Marcelo made. It is a point that concerns me about this document, and it is about context. If, in intelligence collection, the context is so important in determining what you are trying to collect, does that act as an impediment to the scalability of this? In other words, if you are going to take a bunch of DKRs, and the DKRs are so contextual in nature, is that going to be useful if you try to group them together, having other people looking at the information and trying to gather knowledge out of it?

Engelbart: The feeling I have is that because it is going to be such a strenuous job to do the scenarios and the intelligence collection, it will be something sensibly shared out there. The larger communities will be doing that just as a matter of course, all the time.

Hoffmann: I have a comment on that, following up on what you just said. I am reminded of what happened with object-oriented software. At SRI there is a fellow who did a project some time back looking at writing software in the sense of objects, and whether it was cost effective. He found that the objects had to be used at least five to ten times before it was worthwhile. It comes back to this: unless there is some sort of scale-up, the overhead kills you. Once there is a scale-up that is large enough, then the overhead is not that important. The question then is how to get up high enough so that the scaling is favorable. The problem is how you start and get enough folks involved, with low enough overhead, and get all of the benefits.
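
As a rough worked example of the reuse economics being recalled here, the break-even point falls out of a one-line calculation. The cost figures are made up for illustration, not from the SRI project.

    def breakeven_uses(one_off_cost: float, reusable_cost: float,
                       reuse_cost: float) -> float:
        """Number of uses at which the reusable version becomes cheaper.

        Reusable wins once reusable_cost + n * reuse_cost < n * one_off_cost.
        """
        return reusable_cost / (one_off_cost - reuse_cost)

    # E.g. if building the reusable version costs 8x a one-off solution and
    # each reuse costs 0.1x of a one-off:
    print(breakeven_uses(one_off_cost=1.0, reusable_cost=8.0, reuse_cost=0.1))
    # -> about 8.9 uses, in the same five-to-ten range mentioned above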

Neil: What this suggests, when you first talk about it, is that each of those circles is a self-contained blob of knowledge about something. If all of us went home and did a report, we would put all of that stuff into our own blob, and we still haven't accumulated what has happened in this meeting. It still is not clear who that blob belongs to. Do I belong to many blobs, or do I have a blob around me? That is the thing. I was going to ask the question before Jon came up, and that was about my understanding of database indexing. That implies so much that can never be done unless people put the knowledge into the correct blob. That is the individual's responsibility.

Engelbart: You have got something there. It is something that we need to do. I am going to talk about high performance support teams. This is a
challenge and I appreciate this kind of dialogue. I have a few more of these slides that are basic like that. Thank you very much Neil.

Neil: I was fascinated by something I read in a science magazine a while ago: the olfactory system is something that is used to make our neurons grow to the right place when we are forming. It smells its way to your fingertips. How do we put our olfactory nodes into this so that the neurons will smell their way to the right place?

Engelbart: These are the kinds of portrayals that help talk about the scale of the challenge.

Audience: The one thing that occurs to me when I look at this picture concerns the knowledge products, intelligence collection, and recorded dialogue at higher levels of the organization. Say we have a group, a division in a company. If you were to take the collection of groups, what's going to make it useful to see at the division level is some kind of inference. You have to take the stories learned from each of the individual groups, generalize them to some level, and make them useful for the entire corporation. The marketing people, the engineers, and the sales people are all different cultures, but there are interesting lessons to be learned from each of them. Since they are all from the same pool or division, there has to be some common thing. If you are going to build one of these things, it seems that you are going to need some facility to cheaply and easily allow people at the division level to create generalizations that are partially true, partially false, and then support those generalizations with specific cases pointing to the individual DKRs at the level below. There are a couple of ways of going about doing that. You could annotate them with links; you could annotate the links with whoever created the generalization and thought it was interesting. You could use storytelling, or hearsay, like the Xerox repair people sitting around telling war stories. These are excellent ways of communicating information across organizations that don't have a lot in common. So if you are going to build a system like this, it seems that you should build in the capability of generalization, and then look at generalizations that may or may not be true, rate them, and support or refute those generalizations with specific instances at the lower levels.
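
A minimal Python sketch of the structure being described: a division-level generalization annotated with its author, plus links to specific cases in lower-level DKRs that support or refute it. All field names and identifiers are assumptions for illustration.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class CaseLink:
        dkr: str          # which group-level DKR the case lives in
        item_id: str      # identifier of the specific case
        supports: bool    # True = supports the generalization, False = refutes it

    @dataclass
    class Generalization:
        text: str
        author: str
        cases: List[CaseLink] = field(default_factory=list)

        def rating(self) -> float:
            """Crude confidence score: share of linked cases that support it."""
            if not self.cases:
                return 0.0
            return sum(c.supports for c in self.cases) / len(self.cases)

    g = Generalization("Field teams reuse war stories more than formal reports",
                       author="division-analyst")
    g.cases.append(CaseLink("sales-dkr", "case-12", supports=True))
    g.cases.append(CaseLink("engineering-dkr", "case-7", supports=False))
    print(g.rating())   # 0.5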

Engelbart: I agree. A lot of these things have to be tried and have to evolve; that is why you need the evolutionary environment to go after these kinds of things. Not just in one organization, but also out here on a big scale, it has to take place. I'd like to go on.

Audience: Do you really think there are going to be multiple DKRs, or one DKR with people usually living in part of it? It seems that one of the problems that comes up is with this alignment question that came up earlier, and this generalizing at the divisional level. It is not just information flowing up; it's also those ideas flowing down. When it all gets linked together, is it just one big DKR or multiple ones?

Engelbart: They all have to be interlinked. Then there are lots of dynamic questions about, say, if I link to something that you've got, there has to be some kind of understanding between us about how permanent yours is, the changes, or the access control. We have thought about the details of these things; I hoped to give a short presentation about what we would like to kick off early on, something that would be different and could evolve. This is extremely valuable for me, because when I toss these things up, I have the meaning that is built into my mind. So I put it up there to carry that kind of meaning, but I don't know how much it carries to other people. I needed this; that is why I put it in one slide. You may not need it, but I do. Here is the second of those, and here is the third one. What kind of meaning, questions, and issues does that bring to you?

Slide: Outposts on the Co-Evolution Frontier

Audience: One of the things that comes up is tool system utilization and human system development; it seems like a lot of those things are not yet done. With the tool system, there is the Internet now, so there are a lot more capabilities out there, and how do we use them? That might be pushing out there, yet we don't know how to use them. As for human system development, I don't know what has been developed that has not been used.

Engelbart: That is the frontier, and exploring it. It is really only what is inside the green envelope that is used today. How are we going to move out into that space?

Audience: How much of that, along those axes, is exploring and developing new tools and new human systems, and how much is simply applying the ones that someone else has made up?

Slide: Outposts on the Co-Evolution Frontier Cont.

Engelbart: One thing about the marketplace is that it is pretty clever about not putting things on the market that they don't think will sell. All of these outposts are out beyond what you would expect to see an organization using. There is the necessity of exploring that multi-dimensional space, with a lot of things out there that we can't picture. This is what we talk about with scenarios and case studies. How do we provide a useful picture of what is probably out there to go after? What kinds of new tools, techniques, processes, customs, and skills... all the things in the tool and human systems. What one perceives and assumes about how far the boundaries of the frontier are makes a huge difference in how much they will want to invest in exploring it. The prevailing paradigm is the label I'd give to what those people think the boundaries are. If they think, we already have the Internet, the web, and e-commerce, then we are practically there.

Audience: There is one thing I don't get from this diagram that I hear in the talk. I don't know how to do it visually, but I think it would be worth doing. It is the question about the hope in the stories. We have these ways to go, and we have the outposts. If there were some way to show, with an overlay, that these can evolve and we can go beyond this little green thing, so that it can keep moving. The one problem I have with this is that the outposts look lonely out there. I am kind of going, well, how can you go out there? If we show it is possible to do, if we keep moving these systems, if we keep co-evolving, it can maybe motivate people to put some energy, money, effort, and commitment into it, knowing that they can get out there and that there are rewards in doing that.

Engelbart: So the thing they have to create is at least technology tools that can potentially be applied out there. Then how do you put those to work? Then there is the whole question about where you put them to work so that it is realistic. Then there is the whole thing about every organization that is going to evolve, moving out there: it has to get this explicit and implicit kind of feeling and knowledge about where it wants to go. That is all part of it. I want to start talking about some of the things in the tool system with which you can kick off something that is an evolutionary base. The criteria that we are using for it are that it has to work with what other people are already using; you have to have higher-performance capabilities moving around that will do things with knowledge bases and the web pages that people are not doing now; and you have to give it some time, so quite a few people can see it in that context and you can have an evolution. It doesn't have to wait until it comes out in some big product. So there is that evolutionary tool system; there is no way you are going to get the real evolution unless the tool systems can be evolving, like open source provides for now.

Audience: Hi, I am Pete Jacobs. One of the things that occurs to me now, as I look at these various charts and the modules of information that we look at in the colloquium, is that it might be helpful to me to see some kind of graph or representation of that space that represents the difference between what is desired at any given point and what is attainable. I am thinking, for example, about a gradient that might start out with something like the unknown, then go into confusion, then deal with something like conflict. That is present in the things that we have been talking about all along. I am thinking about even adversarial situations where you have, for example, the intelligence community, which may not want to give up its authority and power, and it is hiding behind its dark cloak. You have governments that might not want to give up power to democratic systems like a DKR, and so forth. If you had one graph, it might help explain all of these others, and some of the questions that have been raised about them.

Engelbart: I wouldn't even know how to start with that because I don't get the picture. Would you like to help me do something like that?

Jacobs: I would be glad to help you. Perhaps I haven't been that articulate. I see something that seems to be present in all of the things that we have discussed. There seems to be a starting point where we are. We never quite know where we are going, because we are involved in an evolutionary system. There is always a chase going on between the verb and the noun, the lion and the antelope, so to speak. The object-oriented software, you could think about it like that.

Engelbart: There is a big challenge to me, how do you set up the evolutionary environment that would be the healthiest that you could do?

Jacobs: I think that the evolutionary environment is the thing that is undefined. It does have certain known characteristics. We tend to use these terms: well, we are confused now; where do we go from here; so-and-so over there disagrees; how do we get from this point to that point; what is it that we want. Those are all, to me, ways of saying or defining, in vague ways and sometimes very accurate ways, this unknown territory. In some sense, to me at least, it is a continuity, a string that ties together all of these charts and all of the things that we talk about. It is always present in all of these discussions, but we never address the subject itself. So I find that we are chasing our tails when we say, how are we going to get there? We say, when we get there we will have to figure out what to do about it, or we will have to figure out a way to get there because it is part of the evolution. So it just becomes language without practice, without knowing what that terrain really looks like in abstract terms.

Engelbart: If you can turn it into something more concrete, that is great. The way I see it is that you are painting a space out there, and some of these concepts can be used to formulate the basis of what the strategy would be. Part of the strategy has got to be that you have ways to show people, or to satisfy those types of questions. Other than that I don't know how to help.

Audience: As I look at this co-evolution map, it occurs to me that some of the outposts that we are seeing are what you normally get when you are taking risks. For the human development system, I'll say the evolution of the economic system: the capital market system has tried to do the evolution. Some of the critique that we have of that market system is that it is not efficient enough to cope with the rate of change or the complexity. I would say that for some of those outposts, some people talk about loneliness, or who will pay for it; so there is the factor of how we make the risks or failures more teachable to us. People take risks and they fail in the evolutionary system, but the rest of us do not benefit from those failures. People feel that the capital market is the way that the evolution is taking place. It is a critique of the inefficiency of the system; it gives us something that we have used to date. How, in fact, can we change economic practices to cope with the problem? That is one contribution I have on the evolution.

On the tool system side, the engineering discipline has coped with it very well. For the human system, if you use the term social engineering for systematic social evolution, it carries lots of baggage with it. It just has historical baggage, and you can't go up that axis without sounding... people don't like that term, social engineering. They are comfortable with the terms markets and capitalism as ways of evolving the social mechanism. I think there is a sense in which it is a good graph, but the language for talking about these things could be a little difficult.

Slide: Suppose Only Part of What This SO Produces Can Improve an SO's "Capability-Improvement Capability"

Engelbart: There have been a few difficulties over the decades in communicating this, but the longer I stay with it, the more my intuition says it has a high probability of being right. Here is another one. Out of any organization there are products that it produces: knowledge, among other things. University people put out something, research people put out something, manufacturing people put out something. Suppose you divide that set of output into two kinds: the part that goes out into the world, and the part that comes back to improve the organization's own improvement process. That is a fundamental picture about bootstrapping. If there isn't any feedback like that, then the organization can't do any bootstrapping by itself. If you join something larger and start pooling, you can contribute things, the others can contribute things, and among you all you can produce things that come back and do that bootstrapping. The term bootstrapping has to be tied to what comes back to improve your improvement process. That is why that picture is there. I hear people use the term in ways that sometimes make me wince. It's fine for people to generate their own ideas about what it means, but not if they are trying to echo or restate what I have been trying to say. Are there questions about this particular thing?
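To make that feedback idea concrete, here is a minimal sketch under assumptions of my own; the feedback fraction, growth rates, and time horizon are illustrative numbers, not figures from the talk. The point is only the structure: an organization that feeds part of its output back into its own improvement process compounds its improvement rate, while one that feeds nothing back cannot bootstrap.

```python
# A toy model (my own, not from the talk) of the bootstrapping feedback loop:
# part of an organization's output is fed back to improve its own
# improvement capability, so the improvement rate itself compounds.

def simulate(feedback_fraction, years=30):
    capability = 1.0          # how much the organization can produce per year
    improvement_rate = 0.05   # how fast capability grows per year
    shipped = 0.0             # cumulative output delivered to the outside world

    for _ in range(years):
        output = capability
        # Most output goes out as product; the rest feeds the improvement process.
        shipped += output * (1 - feedback_fraction)
        # The fed-back portion improves the improvement rate itself.
        improvement_rate *= 1 + 0.1 * feedback_fraction * output
        capability *= 1 + improvement_rate
    return shipped

# With no feedback there is no bootstrapping; with a modest feedback fraction
# the organization ships less each year at first, but in this toy model its
# cumulative output pulls ahead after a couple of decades.
print(simulate(0.0))   # everything shipped, improvement rate never changes
print(simulate(0.1))   # 10% reinvested in improving the improvement process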

Audience: The piece that I don't see is the part that you just said. That is the interaction between the different organizations. I feel like that needs to be
part of the feedback loop too.

Engelbart: It is too. I made another picture, which had a bunch of these organizations sitting there with various amounts of what could come out; then they pooled it, and it came back to all of them. It is that coming back to improve your improvement that is the bootstrapping sense. On a national scale, suppose we decided to start investing more federal money in boosting the nation's improvement infrastructure. First you need to go through and clarify what that is. That could be a large-scale issue.

Audience: I think by bootstrapping now you mean establishing a positive feedback loop. That is not completely obvious to everybody.

Engelbart: Right. The first time I heard the term, it was for something called a bootstrap circuit in a radar set. That was just positive feedback, and it has been around a long time. What we are trying to identify strategically is that if we are going to invest money in trying to improve things, then as early as possible it would be nice to improve whatever improves the improvement process, on a scale like that. If we look at the capability-improvement challenge that is out there, with the way the world is changing, wow, we really have a challenge. That is one way.

Audience: Part of this is about developing a clear view. Then I have to caution you never to confuse a clear vision with a short path.

Engelbart: I actually had a little bit of practical training as an engineer, and one of the things I like about that discipline is this: they like to build it to see it work, and then improve it. That reflex shows up tremendously in the strategy I talk about. The high-performance teams are something you can build and put to work. Not build them and watch them play games or demonstrate that they can jump around pretty flexibly, but really put them to work in key places. What does this tell you?

Slide: The Bootstrap Alliance

Engelbart: We can move on then. It uses the A, B, C's: by pooling your C resources you have an improvement community, and if you are really trying to do it in the networked way, we use the term NIC. Then we point out that NICs themselves would like to improve. So if they form an improvement community that is itself a NIC, that is a good way for them to bootstrap their way up. That is what we call a MetaNIC, and we said it is going to be a key thing. It is part of a scalable infrastructure for improving, for bootstrapping.
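As a rough illustration of this structure, here is a small sketch with names and numbers of my own invention, not anything from the talk: a NIC pools its members' C-level contributions and feeds the pool back to every member, and a MetaNIC is simply a NIC whose members are themselves NICs.

```python
# Illustrative sketch (my own naming, not from the talk) of NICs and MetaNICs.

class Organization:
    def __init__(self, name, c_output):
        self.name = name
        self.c_output = c_output     # improvement-infrastructure work it can share

    def contribute(self):
        return self.c_output

class NIC(Organization):
    """A networked improvement community: pools members' C contributions
    and feeds the pooled result back to every member."""
    def __init__(self, name, members):
        self.name = name
        self.members = members

    def contribute(self):
        # What the community can offer upward is the pool of its members' C work.
        return sum(m.contribute() for m in self.members)

    def feed_back(self):
        pooled = self.contribute()
        return {m.name: pooled for m in self.members}  # each member gets the whole pool

# A MetaNIC is just a NIC whose members are themselves NICs.
nic1 = NIC("health-care NIC", [Organization("clinic", 2), Organization("lab", 3)])
nic2 = NIC("education NIC", [Organization("school", 1), Organization("college", 4)])
meta = NIC("MetaNIC", [nic1, nic2])
print(meta.feed_back())   # {'health-care NIC': 10, 'education NIC': 10}
```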

Audience: Ray Glocker again. This is going back to number two, the concept of a knowledge product; that is what my remarks are addressed to. In a sense, I am seeing knowledge products as we have traditionally experienced them: either something that is relatively static but attempts to be rather comprehensive, as connoted by the world encyclopedia, or alternatively something dynamic, ongoing, and current. The latter is not particularly comprehensive but is a particular point, either a research report or a new point of insight or whatever, which is connoted by the term journal. Part of the difficulty is that the encyclopedia, the static one, is by definition obsolete as a comprehensive scheme by the time we start to articulate its second sentence. Unless we are willing to tolerate an obsolete presentation, we will never get the sense of wholeness. So we have these trade-offs in our knowledge products: either you are obsolete, or you are so fragmented that you are not integrated. That is what I like about the CoDiak concept, because it is an invitation to somehow participate in the experience of both of these knowledge products simultaneously. While we currently have our encyclopedias with their seventeenth and eighteenth editions, there are more dynamic and current things; when you cite an Internet site, you have to say, this is the way it was on December 19th, 1998. Somehow part of this has to be the way we integrate the static with the dynamic in our knowledge products.

Engelbart: At the NIC and MetaNIC level, that is a very central concern: how you do that. The hypothesis is that the CoDiak dynamic knowledge repository is very central to how you run a NIC effectively. Yet we really note that you have to transfer experience also; you have to have people getting involved at the NIC level and moving back and forth from their member organizations. I don't know if we make that part clear in the picture, but there is no way it is going to happen purely electronically for some time, until we have virtual ways of being together.

Audience: I want to reply to that with a general outline, certainly not in specifics. When you talk about an encyclopedia, you are talking about a codified knowledge base, a point in time of what we know. One of Marcelo's slides really caught my eye today, the one that said education on demand, just as a blanket concept. What a knowledge repository has to be, fundamentally, is a system that educates you on demand on a specific subject. It gives you the information you need to compensate for whatever level of ignorance you have outside your specialty area. We are all ignorant of a vast variety of subjects: plumbing, ditch digging. There are an amazing number of things to know about the best way to dig a ditch. If you look at doctors, there is this huge body of medical knowledge growing every day; by the time they graduate, they are already ten years out of date. They need a system that will educate them on demand when they are facing something new. That, in general outline, has to be the solution to the problem.

Slide: Our "Large Scale" Has Two Ends

Engelbart: That has been a basic picture all along: how you integrate the teaching machine, or educational facility, into the actual structuring of your knowledge products. You have to provide for the books and guides so that you can have learning on demand. I have to close off the meeting. This large-scale problem situation we have is large-scale in another sense: it goes from the very big to the very small. Down in the details are some of the important things we have to start getting going. One of those is the new things we have to get worked into the tool system that start helping us do things like flexibly portray information, and really break away from the what-you-see-is-what-you-get version of a page. That opens up a lot of interesting potentials. You can color-code; you can have smart agents that color-code the parts of speech. How much training and skill will that take before it helps you go through material much faster? People can try it, but they also need to put it to work and see what role it would play.

We started painting a little design. In session four we started pointing out this web-based intermediary, something that IBM puts on the market. I don't know if it is terribly unique, but it offers a lot: between your browser and the products you use, you have this thing that would translate for you. What kind of translations? When we go get something, we tell it what we want to get, and we also tell it that when we get it, what we want to see is a particular view, so it can transform it. You still have to trust it not to distort what is there, but just to give you the view that you wanted. It needs to be something that is available as people start browsing around our world, and even from our world looking out to others; it would be a marvelous experience. Another thing you could do is just name the kind of programming source code, C, C++, or Java, and you could set up transcoding so that, as you address a point in there, it goes and gets the material, transforms it into what it takes to portray on your browser, and installs tags in it that become target points for links. So every time you see it, it has those tags on it that you could use for spot linking. Then you say that could make a lot of difference in the programming world, where the specifications and everything else go together. We could also do it for computer-aided design systems. That sort of thing doesn't wait for a big, ponderous new application to come out. We could experiment with things like that, and that would reveal and show a lot. We actually want to get something like that started; we have some volunteer energy going, and we hope we can push some doorbells for other places to do it. Something like that can come into other people's world and show them that they can look at whatever they want: hey, I can look at a Word document now and see it in different ways, or Lotus Notes, spreadsheets, or whatever you have. So this transcoding intermediary opens up tremendous things to experiment with, and maybe actually produce. It is a way to say: maybe we call this OHS1, and next month it is going to go up and change. We are going to get more and more ideas, like the people out there who have been experimenting with ways in which you can depict knowledge, study it, take it apart, parse it, and sketch it. Things like that are all very good, but they have to have some place in the evolutionary setup where you can come in and find the ways you could integrate them. They will all get integrated in a way in which the new capabilities are there, and they have to be lived. It doesn't work to have something sitting off all by itself; you have to go and learn about that environment, work within it, and then see what you do.
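To suggest what such a transcoding intermediary might look like in its simplest form, here is a sketch under my own assumptions; it is not the IBM intermediary or the OHS itself, and every name in it is made up. It takes plain source code, gives every line an anchor so an outside link can target it (spot linking), and color-codes a few keywords as a stand-in for smarter agents.

```python
# Minimal transcoding sketch (illustrative only): plain source text in,
# HTML out, with a per-line anchor for spot linking and naive keyword coloring.

import html
import re

KEYWORDS = {"class", "def", "return", "if", "else", "for", "while", "import"}

def transcode(source: str, doc_id: str) -> str:
    out = ["<pre>"]
    for n, line in enumerate(source.splitlines(), start=1):
        escaped = html.escape(line)
        for kw in KEYWORDS:
            # A naive keyword pass; a real agent would tokenize properly.
            escaped = re.sub(rf"\b{kw}\b",
                             f'<span style="color:blue">{kw}</span>', escaped)
        # Every line becomes a link target, e.g. page.html#mydoc-L2,
        # so a reader can spot-link to any point in the file.
        out.append(f'<span id="{doc_id}-L{n}">{escaped}</span>')
    out.append("</pre>")
    return "\n".join(out)

if __name__ == "__main__":
    sample = "def greet(name):\n    return 'hello ' + name"
    print(transcode(sample, "mydoc"))
```

The same pattern would apply to any format the intermediary knows how to fetch and transform; the browser never needs to understand the original source, only the transcoded view.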

Next week we will have an example of that. We have a young man, Ted Nelson, who is coming. You have to have faith that somehow that stuff will get integrated into the rest of the world; it is very stimulating. There are quite a few others that pop up who would really like to get on a list about how you start integrating the proposed new ways of thinking and working that some of them offer. We just have to do something like that for evolution. The kind of standards that Jon Bosak has fostered are very important, and it is worth trying to be consistent with them so that what you do can be more generalized. I think we are just about out of time. This is the second time I have brought along slides of potential ways of working that I was going to show you, and there isn't time. I really wanted to go through and get in the basic pictures and discuss them with you; I appreciated that a lot. I think we are going to have to close now. Thank you, gentlemen, for giving your special talks. Maybe you'll come along with this for part three.