“[Doug] envisioned people sitting in front of cathode-ray-tube displays, "flying around" in an information space where they could formulate and portray their concepts in ways that could better harness sensory, perceptual and cognitive capabilities heretofore gone untapped. Then they would communicate and communally organize their ideas with incredible speed and flexibility.” http://www.bootstrap.org/engelbart/index.jsp#nid04
"Sharing the Bootstrap" - Doug once said:
"Oh, look, what if, as we paid early, special attention to learning how to be more effective collectively, we applied as much of the new capability as possible to improving selected parts of society's improvement infrastructure. From the early '60s, I'd been calling that 'bootstrapping.' The better that gets, the better we're going to improve all the other things we want to improve, including how you're going to improve improvement."
But, Robert Cringely says: "No Good Deed Goes Unpunished. If You Think Microsoft Can’t Undermine Open Source, You Are Wrong."
Croquet is also a totally ad hoc multi-user network. It mirrors the current incarnation of the World Wide Web in many ways, in that any user has the ability to create and modify a "home world" and create links to any other such world. But in addition, any user, or group of users (assuming appropriate sharing privileges), can visit and work inside any other world on the net. Just as the World Wide Web has links between the web pages, Croquet allows fully dynamic connections between worlds via spatial portals. The key differences are that Croquet is a fully dynamic environment, everything is a collaborative object, and Croquet is fully modifiable at all times.
Croquet is a joint project being developed by David A. Smith, Alan Kay, David P. Reed, and Andreas Raab. More information is available at: http://www.opencroquet.org
"Simply put, he said, we have to start looking at spectrum as an almost limitless commodity, not a scarce one. The current regulatory regime that allocates spectrum "is a legal metaphor that does not correspond to physical reality," he said.
Why not? First, he said, the notion of interference has more to do with the equipment we use to send and receive signals than with the physics of radio waves. "Radio waves pass through each other," Reed said. "They do not damage each other." In the early days of radio, the gear could easily be confused by overlapping signals. But we can now make devices that can sort out the traffic.
The second way that reality defies the old logic is what happens when you add wireless devices to networks. I won't go into the details of Reed's argument, which you can find on his site, but he contends that you end up with more capacity -- the ability to move bits of data around -- than when you started.
"In principle, the capacity of a certain bandwidth in a certain physical space increases with the number of transceivers in a given space," he said. Yet the FCC regulates the airwaves as if the capacity were a fixed amount.
Yes, he said, this is counter-intuitive. And, to be sure, there are experts who disagree with him.
But if he and others in his camp are right, we have a lot of work ahead to fix a hopelessly broken regulatory system. And if that happens, the sky is literally the limit for future communications -- but the consequences for some of the most powerful companies in our economy may be grim."
John, Thank you for drawing this important relationship; it certainly resonates with me. Bayesian inference is a "version" of pattern recognition (knowledge pattern recognition) that, for me, is the kind of thing Doug is pointing to when he talks about augmenting human intelligence. My own background includes some time as an analyst in an Operations Research department, so it makes a great deal of sense to me that you would identify these relationships.

I'll take the opportunity to point out the degree of interest that has occurred due to the SWITCH interview. It was "slashdotted," which resulted in 14,860 downloads of the transcript of Doug's interview (35 printed pages) in the last week. Not sure how many people actually read the whole thing. Here's the Slashdot reference for those interested in seeing the comments, which are all over the map. One post wondered if it was a hoax, because they couldn't believe that all this was true and they didn't know about Doug until that post. http://slashdot.org/articles/03/04/15/0437224.shtml?tid=189&tid=95

Mei Lin

-------Original Message-------
From: "John J. Deneen" <email@example.com>
Sent: 04/16/03 01:17 PM
To: firstname.lastname@example.org
Subject: Re: [ba-ohs-talk] We the Networked People....

Jack and Mei Lin,

I believe I've discovered something very important in the following quotes regarding 1) "We the Networked People" by Jim Downing, April 10th 2003, 2) Doug's hyperscope (OHS) "compass", 3) Prof. Pierre Levy's Project - Is a Science of Collective Intelligence Possible? <http://www.collectiveintelligence.info/>, 4) the invitational transcript by SWITCH, 5) the Keynote address at the World Library Summit in Singapore, and 6) Autonomy technology <http://www.autonomy.com/Content/Technology/>: (snip)
Jack and Mei Lin,
I believe I've discovered something very important in the following quotes regarding 1) "We the Networked People" by Jim Downing April 10th 2003. 2) Doug's hyperscope (OHS) "compass", 3) Prof. Pierre Levy's Project - Is a Science of Collective Intelligence Possible?, 4) the invitational transcript by SWITCH, 5) the Keynote address at the World Library Summit in Singapore, and 6) Autonomy technology:
1) "We the Networked People" by Jim Downing, April 10th 2003

On pp. 20-21 of Doug's invitational transcript with SWITCH he mentioned:
"Computers most radically and usefully extend our capabilities when they extend our ability to collaborate to solve problems beyond the compass of any single human mind." - Engelbart, 2002
2) Hyperscope (OHS) "Compass"
"Wisdom has to do with not only intuiting the long view, understanding systems in the context of their larger whole, but also acting in resonance with what is known as true and lasting. Only wisdom can guide effective decisions on how we invest our attention, both individual and organizational, in the conditions of galloping "complexity multiplied by urgency." - Doug Engelbart
3) Prof. Pierre Levy's Project - Is a Science of Collective Intelligence Possible?
"Starting from this position, the CIG will simulate evolutionary scenarios about the ideas ecosystem of the community. These simulations will give relevant indications on the course to be followed in order to progress - in each different situation - towards an increase in collective intelligence (the "compass" shows "North"), while avoiding the reefs of imbalance and unsustainability."
4) "SWITCH aims to critically evaluate developments in art and technology in order to contribute to the formation of alternative viewpoints with the intention of expanding the arena in which new art and technology emerge."
Engelbart: "That would be interesting but I never have. I would really like to talk to some deep theoretical linguist who has thought about how our sensory perceptual machinery works and how much beyond the terms and words we are using now could that sensory perceptual thing cope with more complex stuff. One thing I think would be really fun is I'm going to take up the Artificial Intelligence guys that have been a plague in my life for all these years and say, "great, you bring me your sentence parser and I am going to have that parser running ahead of a person who is reading and parse the sentences and then give the different parts of speech a different color brightness or maybe actually a diagram." Just to see if you got used to it if it would speed up your comprehension of a successive chain of sentences. It could."
Slayton: "This is similar to one of the claims that PARC made when they did the "future of reading" exhibition at The Tech Museum a few years ago. One of the exhibits that they had was a demonstration of reading that would display one word at a time to you very, very quickly. Through their research they claimed that they had made comprehension levels increase dramatically when the text was taken out of context in that way. What do you think about that?"
Engelbart: "I wish I would have known about that. Can somebody write down a note and give me..."
Slayton: "I'll give you the person that was working on that. I'm not sure if they're there anymore."
Engelbart: "I can track him down."
5) Keynote address, World Library Summit in Singapore (2002) <http://www.wls.com.sg/main.htm>
... "Another difference between innovation at the C level and innovation that is more focused on specific results is that, at the C level, context is tremendously important. We are not trying to solve a specific problem, but, instead, are reaching for insight into a broad class of activities and opportunities for improvement. That means attending to external information as well as to the specifics of the particular work at hand. In fact, in my own work, I have routinely found that when I seem to reach a dead end in my pursuit of a problem, the key is usually to move up a level of abstraction, to look at the more general case." - Doug Engelbart
6) Engelbart: "I can track him down." (Michael Lynch, Ph.D., Managing Director and CEO, Autonomy)
301 Howard Street
Tel: (415) 243 9955
Fax: (415) 243 9984
"Autonomy employs advanced pattern matching technology (non-linear adaptive digital signal processing) to extract a document's digital essence and determine the characteristics that give the text meaning. Once Autonomy's technology has identified and encoded the unique "signature" of the key concepts, Concept Agents are created to seek out similar ideas in websites, news feeds, email archives and other documents. Because it does not rely on keywords, Autonomy's technology can work with any language.
Autonomy's architecture combines innovative high-performance pattern-matching algorithms with sophisticated contextual analysis and concept extraction to automate the categorization and cross-referencing of information, improve the efficiency of information retrieval and enable the dynamic personalization of digital content.
Autonomy's strength lies in a unique combination of technologies that employs advanced pattern matching techniques (non-linear adaptive digital signal processing), utilizing Bayesian Inference and Claude Shannon's principles of information theory. Autonomy software identifies the patterns that naturally occur in text, based on the usage and frequency of words or terms that correspond to specific ideas or concepts. Based on the preponderance of one pattern over another in a piece of unstructured information, Autonomy enables computers to understand that there is X% of probability that a document in question is about a specific subject. In this way, Autonomy is able to extract a document's digital essence, encode the unique "signature" of the key concepts, then enable a host of operations to be performed on that text, automatically.
The innovative high-performance pattern-matching algorithms that provide the sophisticated contextual analysis and concept extraction automate the categorization and cross-referencing of information, thereby dynamically improving the efficiency of information retrieval and enabling the dynamic personalisation of digital content. For the first time, computers can be enabled to automatically form an understanding of a page of text, web pages, e-mails, voice, documents and people, and to automate operations dependent upon them.
The theoretical underpinnings for Autonomy's approach can be traced back to Thomas Bayes, an 18th century English cleric whose works on mathematical probability were not published until after his death ( "Philosophical Transactions of the Royal Society of London" 1763). Bayes' work centred on calculating the probabilistic relationship between multiple variables and determining the extent to which one variable impacts on another.
A typical problem is to judge how relevant a document is to a given query or agent profile. Bayesian theory aids in this calculation by relating the judgement to details that we already know, such as the model of an agent. More formally, the resulting "a posteriori" distribution p(theta | x), which is applicable in judging relevance, can be given as a function of the known "a priori" model p(theta) and the likelihood p(x | theta):

p(theta | x) = p(x | theta) * p(theta) / p(x)
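The Bayes'-rule relevance judgement described above can be sketched in a few lines of Python. The agent model, term probabilities and smoothing value below are invented for illustration; this is a naive-independence sketch, not Autonomy's actual implementation.

```python
# Sketch of Bayesian relevance judgement under a naive independence
# assumption between terms. An "agent" is modelled by per-term
# likelihoods p(term | relevant) and p(term | irrelevant); Bayes' rule
# combines these with a prior into a posterior p(relevant | terms).
from functools import reduce

def relevance(terms, p_given_rel, p_given_irr, prior=0.5):
    """Posterior probability that a document is relevant to the agent."""
    like_rel = reduce(lambda a, t: a * p_given_rel.get(t, 0.01), terms, 1.0)
    like_irr = reduce(lambda a, t: a * p_given_irr.get(t, 0.01), terms, 1.0)
    evidence = like_rel * prior + like_irr * (1.0 - prior)
    return like_rel * prior / evidence

# Hypothetical agent trained on information-theory documents:
p_rel = {"entropy": 0.30, "bayes": 0.25, "signal": 0.20}
p_irr = {"entropy": 0.01, "bayes": 0.01, "signal": 0.05}

score = relevance(["entropy", "bayes"], p_rel, p_irr)  # close to 1.0
```

A document containing the agent's characteristic terms scores near 1; an empty document falls back to the prior.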
Extensions of the theory go further than relevance information for a given query against a text.
Adaptive probabilistic concept modelling (APCM) analyses correlation between features found in documents relevant to an agent profile, finding new concepts and documents. Concepts important to sets of documents can be determined, allowing new documents to be accurately classified.
Shannon's Information Theory
Information Theory is the mathematical foundation for all digital communications systems.
Claude Shannon's innovation as described in his "Mathematical Theory of Communication" (1949) was to discover that "information" could be treated as a quantifiable value in communications.
Consider the basic case where the units of communication (for example, words or phrases) are independent of each other. If p_i is the probability of the ith unit of communication, then Shannon's entropy, the average quantity of information conveyed by a unit (a measure of uncertainty), is:

H = - SUM_i ( p_i * log2(p_i) )
This formula reaches its maximum when the probabilities are all equal; in this case the resulting text would be random. If this is not the case the information conveyed by the text will be less than this maximum; in other words there is some redundancy. This result is then extended, by more sophisticated mathematical arguments, to when units are related.
Natural languages contain a high degree of redundancy. A conversation in a noisy room can be understood even when some of the words cannot be heard; the essence of a news article can be obtained by skimming over the text. Information theory provides a framework for extracting the concepts from the redundancy.
Autonomy's approach to concept modelling relies on Shannon's theory that the less frequently a unit of communication occurs, the more information it conveys. Therefore ideas which are more rare within the context of a communication, tend to be more indicative of its meaning. It is this theory which enables Autonomy's software to determine the most important (or informative) concepts within a document.
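The "rarer means more informative" principle corresponds to the surprisal of a unit, -log2(p). A minimal sketch, with frequencies invented for illustration:

```python
import math

def surprisal(p):
    """Information content, in bits, of a unit occurring with probability p."""
    return -math.log2(p)

# A frequent, function-word-like unit conveys little information; a
# rare, specialist term conveys much more, so it is weighted as more
# indicative of the document's meaning.
common_bits = surprisal(0.05)     # hypothetical frequent word
rare_bits = surprisal(0.0001)     # hypothetical rare concept term
```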
The Dynamic Reasoning Engine (DRE™)
At the heart of Autonomy's software is the Dynamic Reasoning Engine (DRE™). The DRE™ is based on advanced pattern-matching technology that exploits high-performance probabilistic modelling techniques.
The DRE™ performs the core information operations:
- Concept Matching : The DRE™ accepts a piece of content* or reference (identifier) as an input and returns references to conceptually related documents ranked by relevance, or contextual distance. This is used to generate automatic hyperlinks between pieces of content.
- Agent Creation : The DRE™ accepts a piece of content* and returns an encoded representation of the concepts, including each concept's specific underlying patterns of terms and associated probabilistic ratings.
- Agent Retraining : The DRE™ accepts an agent and a piece of content* and adapts the agent using the content.
- Agent Matching : The DRE™ accepts an agent and returns similar agents ranked by conceptual similarity. This is used to discover users with similar interests, or find experts in a field.
- Agent Alerting : The DRE™ accepts a piece of content and returns similar agents ranked by conceptual similarity. This is used to discover users who are interested in the content, or find experts in a field.
- Categorization : The DRE™ accepts a piece of content and returns categories ranked by conceptual similarity. This is used to discover which categories the content is most appropriate for, allowing subsequent tagging, routing or filing.
- Summarization : The DRE™ accepts a piece of content and returns a summary of the information containing the most salient concepts of the content. In addition, summaries can be generated that relate to the context of the original inquiry - allowing the most applicable dynamic summary to be provided in the results of a given inquiry.
- Clustering : The DRE™, in conjunction with Classification Server, can organize large volumes of content or large numbers of profiles into self-consistent clusters. Clustering is an automatic agglomerative technique which partitions a corpus by grouping together information containing similar concepts.
- Active Matching : The DRE™ can accept textual information describing the current user task and returns a list of documents ordered by contextual relevance to the active task.
- Retrieval : The DRE™ accepts natural language queries and returns a list of documents containing the concepts looked for, ordered by contextual relevance to the query. The DRE™ also supports Boolean queries and Parametric Searches.
* piece of content refers to a sentence, paragraph or page of text, the body of an e-mail, a record containing human readable information, or the derived contextual information of an audio or speech snippet.
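To make the "ranked by relevance, or contextual distance" idea in Concept Matching concrete, here is a minimal stand-in that ranks documents by cosine similarity over term-frequency vectors. The corpus and query are invented, and this simple measure is only an illustrative substitute for the DRE's proprietary probabilistic modelling.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def rank(query_text, corpus):
    """Return (doc_id, similarity) pairs, most similar first."""
    q = Counter(query_text.lower().split())
    scored = [(doc_id, cosine(q, Counter(text.lower().split())))
              for doc_id, text in corpus.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

docs = {"a": "bayesian inference and probability theory",
        "b": "gardening tips for early spring"}
ranking = rank("probability and inference", docs)  # "a" ranks first
```

The top-ranked references could then be used to generate automatic hyperlinks between pieces of content, as the Concept Matching entry describes.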
By understanding concepts as expressed within a particular context, Autonomy's technology represents a significant advance beyond simple text searching, collaborative filtering, parsing or manual tagging. As the amount of unstructured text available to users explodes, companies are in greater need of an intelligent infrastructure that can automate the management, personalisation and delivery of information while reducing the level of manual effort required to get the right information to the right people at the right time.
Computers need to be smarter. They need to understand more about the information being communicated, and less about where and how information is stored.
Autonomy looks forward to continuing progress in enabling computers to better understand the meaning of digital information in any form.
- "Interactive Technology Strategies - Personalize or Perish?" - The Forrester Report; Volume Two, Number Three, May 1997
- "Bayesian Methods Applied to Neural Networks" - by Radford Neal, Springer Verlag
- "Bayesian Forecasting" - by Mike West, Springer Verlag
- "Applied Bayesian Forecasting and Times Series Analysis" - by Pole, West & Harrison, Chapman & Hall
- "Bayesian Statistics" - by P.Lee, John Wiley & Sons
- "Numerical Bayesian Methods Applied to Signal Processing" - by J.J.K. Ó Ruanaidh and W.J. Fitzgerald, Springer Verlag"
Mei Lin Fung wrote:
I came across this recent post related to Doug's World Library Summit talk and thought it might be interesting for some on this list. For those of you who weren't on the list in April last year, Henry van Eyken has posted the entire talk at http://www.fleabyte.org/eic-11.html. Thank you for continuing to make it available to everyone, Henry!
We the Networked People
By Jim Downing April 10th 2003
Douglas Engelbart, the man who invented the computer mouse, wrote a paper in 1962 entitled "Augmenting Human Intellect: A Conceptual Framework." Taking Vannevar Bush's idea of the mind working by association, where:
“We refer to a way of life in an integrated domain where hunches, cut-and-try, intangibles, and the human "feel for a situation" usefully co-exist with powerful concepts, streamlined terminology and notation, sophisticated methods, and high-powered electronic aids.”
He calls for man's intellect to be augmented by computers.
“Man's population and gross product are increasing at a considerable rate, but the complexity of his problems grows still faster, and the urgency with which solutions must be found becomes steadily greater in response to the increased rate of activity and the increasingly global nature of that activity. Augmenting man's intellect, in the sense defined above, would warrant full pursuit by an enlightened society if there could be shown a reasonable approach and some plausible benefits. “
In April last year, delivering the keynote address to the World Library Summit held in Singapore, Engelbart continued to advance the vision he has held for over 50 years: that computers should augment what people do. At the core of this vision lies the idea that people can use computers to work collaboratively to solve complex problems, something that is going on every day as people use the internet.
“So, it is not the computer, working alone, that produces a solution. It is the combination of people, augmented by computers. The key word here is "augment." The reason I was interested in interactive computing, even before we knew what that might mean, arose from this conviction that we would be able to solve really difficult problems only through using computers to extend the capability of people to collect information, create knowledge, manipulate and share it, and then to put that knowledge to work. Just as the tractor extends the human's ability to work the earth, and planes extend our ability to move, so does the computer extend our ability to process and use knowledge. And that knowledge production is a group activity, not an individual one. Computers most radically and usefully extend our capabilities when they extend our ability to collaborate to solve problems beyond the compass of any single human mind.”
With the arrival of the Internet it would seem that much of Engelbart's vision has been realized. Yet at its core the ideas of his work on "augmentation" remain unrealized. The personal computer has allowed us to work better, but we still work, for the most part, alone. Today's computers, the internet and the World Wide Web are advances, but the challenges remain. My area of study would be to explore how the internet can be used to increase cooperation amongst people. Cooperation amplification, if you like.
….. (MLF: it goes on to reflect on other ideas that extend this line of thought.)
Mei Lin Fung