[unrev-II] Fwd: Re: [topicmapmail] loss-less transformation of topic maps

From: Jack Park (jackpark@thinkalong.com)
Date: Mon Apr 30 2001 - 06:28:39 PDT

    This discussion, going on at the topicmapmail group (available by surfing
    http://www.infoloom.com), speaks directly to the kinds of issues any OHS
    might try to cover. Steve Newcomb is one of the inventors of what we call
    Topic Maps. He came to this kind of thinking because he got involved with
    the computer representation of music, which led to the SGML standard we
    now call HyTime, and he needed a way to navigate in HyTime space. Topic
    Maps fell out naturally.

    Bernard Vatant is one of those really deep thinkers who often pops up with
    something profound. I, personally, am so taken with the following
    articulation of his views that I thought the Unrev group might see fit to
    extend the discussion.

    Cheers
    Jack

    >----- Original Message -----
    >From: Bernard Vatant <b.vatant@wanadoo.fr>
    >
    > > [Steve N.]
    > >
    > > ... my perception is that we are both dyed-in-the-wool text-editing
    > > bigots, and that we are both totally unreconstructable. I rarely admit
    > > this in public, though. It is not at all fashionable to care about
    > > exactly how things are expressed for interchange ...
    > >
    > > Steve
    > >
    > > As another "dyed-in-the-wool text-editing bigot", I'd like to add some
    > > other "non-fashionable" comments. Here again, an interesting insight
    > > comes from looking back at the history of mathematics, which has the
    > > advantage of being scores of centuries old; the history of interchange
    > > syntax looks like a baby by comparison.
    > > Mathematics history shows a long struggle between text editing and
    > > other representations - schemas, figures, algorithms, etc. One
    > > interesting thing is that the validation of any mathematical result is
    > > still, nowadays, done through peer review of "natural language" proofs.
    > > Even if these proofs are stuffed with formal calculus, graphical
    > > representations, etc., they are basically edited text whose
    > > meta-structure is all natural language. It is quite striking that the
    > > most pervasive tools of scientific and technical development are
    > > grounded in such peer agreement over linear, text-like,
    > > natural-language presentations.
    > > For example, a strong debate arose in the late 1970s when the first
    > > computer-assisted proof appeared, namely for a long-standing conjecture
    > > known as the "four-color conjecture" (asserting that four colors are
    > > enough to color any 2-D territory map so that the color changes every
    > > time you cross a frontier - a very tricky question, where not the least
    > > difficult part was to define properly what a map, a territory, and a
    > > frontier are). The human-readable part of that proof shows that the
    > > infinite set of possible maps can be reduced to a finite - but very
    > > large - set of particular maps, and a computer program is shown to be
    > > able to construct a proper coloring for each of them [a toy sketch of
    > > such a coloring search follows the quoted message]. This program was
    > > successfully run, so the authors considered the proof validated.
    > > Controversy raged over that validation: even though the computer work -
    > > about a week of Cray 1 at the time - could be reproduced and could *in
    > > theory* be translated into text, the procedure was so long that writing
    > > this text out would have taken billions of pages at least, which no
    > > human mathematician could ever read - let alone understand - in a
    > > reasonable life span ... so the usual peer review rules were broken.
    > >
    > > What do we learn from that? Basically, we are still in the text age,
    > > even if it is sometimes pretended otherwise here and there. Despite the
    > > fact that we all claim the true nature of knowledge is
    > > multidimensional, the references of the highest acknowledged level in
    > > every domain are still texts. Laws, regulations, and international
    > > treaties are texts; mathematical proofs are texts; syntax and language
    > > specifications are texts, in which "exactly how things are expressed"
    > > is a fundamental feature ... and computer programs are texts too - not
    > > to speak of the books about all that, of course.
    > >
    > > So, Steve, when you say "we" [are both dyed-in-the-wool text-editing
    > > bigots], it's not only you and Murray - and Bernard and Paul and <...>
    > > (enter your name here). "We" means all knowledge and IT workers, even
    > > if they don't acknowledge it. Shifting from that to a truly
    > > multidimensional paradigm - a world where non-linear, graph-like
    > > representations would be the primitive references - would be an
    > > unprecedented historical shift. Although I am a strong graph and
    > > hypertext true believer, my hunch is that we are very, very far from
    > > ready for this paradigm shift. As the history of mathematics shows, we
    > > are more than dyed-in-the-wool; we are really - culturally speaking -
    > > built out of linear text ... Centuries and centuries of "Book
    > > Civilization" and Biblical culture certainly have something to do with
    > > that, and the Ten Commandments are always in some corner of our
    > > minds ...
    > >
    > > Therefore, sticking to texts is maybe a prudent strategy, since they
    > > have proven so versatile and efficient, and no one really knows whether
    > > shifting to the multidimensional as the basis of all representation is
    > > possible or even desirable. And the debate about what the normative
    > > reference representation of a TM should be - a text-like document (in
    > > XTM or any other serialization syntax) or a multidimensional object (a
    > > semantic graph representation) - is not a minor one. Any sort of proof
    > > that these representations can be considered equivalent will reinforce
    > > the "text-above-all" paradigm, and a proof that they are not - that
    > > there is something in the graph representation which is not reducible
    > > to linear text, and/or the other way round - would be interesting
    > > indeed! [A toy round-trip sketch in that spirit also follows the
    > > quoted message.]
    > >
    > > But my hunch is that either of those proofs would have to be somehow
    > > written out as text to be peer reviewed and validated :)
    > >
    > > Regards
    > >
    > > Bernard
    > >
    > > _______________________________________________
    > > topicmapmail mailing list
    > > topicmapmail@infoloom.com
    > > http://www.infoloom.com/mailman/listinfo/topicmapmail
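
    For readers who want to see concretely what "constructing a proper
    coloring" means in the four-color story above, here is a minimal sketch
    in Python. It is emphatically not the Appel-Haken program (which checked
    a large finite set of reducible configurations); it is only a
    backtracking search that four-colors one small planar graph, given as an
    adjacency list, and the graph and names below are illustrative only.

        # Toy sketch: backtracking search for a proper 4-coloring of a
        # small planar graph. "Proper" means adjacent nodes never share
        # a color.
        COLORS = range(4)

        def four_color(graph):
            """Return a node -> color dict, or None if no 4-coloring exists."""
            nodes = list(graph)
            coloring = {}

            def extend(i):
                if i == len(nodes):
                    return True          # every node has a color
                node = nodes[i]
                for color in COLORS:
                    # legal iff no already-colored neighbor uses this color
                    if all(coloring.get(nb) != color for nb in graph[node]):
                        coloring[node] = color
                        if extend(i + 1):
                            return True
                        del coloring[node]   # backtrack
                return False

            return coloring if extend(0) else None

        # A wheel with a 5-cycle rim genuinely needs all four colors.
        wheel = {
            "hub": ["a", "b", "c", "d", "e"],
            "a": ["hub", "b", "e"],
            "b": ["hub", "a", "c"],
            "c": ["hub", "b", "d"],
            "d": ["hub", "c", "e"],
            "e": ["hub", "d", "a"],
        }
        print(four_color(wheel))
        # -> e.g. {'hub': 0, 'a': 1, 'b': 2, 'c': 1, 'd': 2, 'e': 3}

    The point of the historical controversy is visible even at this scale:
    the printed trace of such a search, run over thousands of configurations,
    is "text" only in theory.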
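
    And here is an equally toy version of the "loss-less transformation"
    question itself: a graph-like, topic-map-flavored structure is serialized
    to a canonical linear text and parsed back, with equality of the two
    in-memory structures as a (weak) loss-lessness check. The field names and
    the JSON serialization are assumptions chosen for brevity; real XTM is
    far richer, and this sketch decides nothing about it.

        # Toy round trip: graph-like structure -> linear text -> structure.
        import json

        topic_map = {
            "topics": {
                "hytime": {"name": "HyTime"},
                "newcomb": {"name": "Steve Newcomb"},
            },
            "associations": [
                {"type": "invented-by",
                 "roles": {"work": "hytime", "person": "newcomb"}},
            ],
        }

        def to_text(tm):
            # sort_keys makes the text canonical: equal structures always
            # serialize to the same linear string.
            return json.dumps(tm, sort_keys=True)

        def from_text(text):
            return json.loads(text)

        assert from_text(to_text(topic_map)) == topic_map  # round trip holds
        print(to_text(topic_map))

    Note what the equality check quietly assumes: that the dictionary view
    already captures everything in the graph. Whether that assumption holds
    for a full topic map is exactly Bernard's open question.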
