
Metaphorical Web #18: Forty (Text Only)

      The Metaphorical Web #18
      By Kurt Cagle
      July 6, 2003

      ========
      Forty
      ========

      I turned forty a few days ago. Normally I am not one to bring up birthdays,
      but this one, of course, has a great deal of significance. Each decade has
      its own character, its own demeanor. At ten, you make that transition
      from child to teen (and all the angst and havoc that produces - I have a
      teenager at home myself). At twenty, you enter the age of being an adult, so
      cocksure of yourself, determined to take on the world because you are
      immortal. Upon reaching thirty, the wind has been knocked out of your sails
      a bit, and quite typically you have taken on the responsibilities of raising
      a family. You can no longer afford to live solely in the present - rather,
      you now understand that others are dependent upon you and the decisions you
      make, and all of a sudden everything has ramifications that extend out ten,
      twenty, thirty years.

      Forty. My eyes are no longer as sharp as they once were, and the glasses I
      wear sport bifocals. The paunch that I carry around is not getting any
      smaller, and seems to have persisted in becoming a part of me that will not
      go away. The writing habit that I took up more than a decade ago as a
      curious pastime has rather become attached as well - I consult periodically,
      but my life has become a life of words and phrases, chapters and articles
      and books. I've written four books completely and substantial parts of
      eleven more, with another three on the way. There is even, in the works, my
      first novel, a book that may be of more interest to programmers than to the
      average populace, but you write what you know.

      Those who have followed the fits and starts of the Metaphorical Web know
      that the last year has not been the greatest for me - the decade, for that
      matter, was not exactly what I'd call the most sterling, even though it is
      the one that in theory should define one's successes. I am abashed that I am
      older now than the oldest Mariner baseball player on the field (at least
      officially - Edgar Martinez is no more thirty-nine than I am). I look around
      at peers and compatriots who have had their names and faces in Wired or
      other contemporary magazines, while mine have only occasionally peered out
      from deeply buried articles in such titles as Visual Basic Programmer's
      Journal
      and XML Magazine. Important titles, admittedly, but not exactly the kind
      that you expect to see on the grocery newsstand. I've had friends who've
      gone on to become millionaires, to become heads of large companies, to get
      the wealth and the accolades. Ah well.

      I write. I walk upon the beach with my white trousers rolled, eating a
      peach, listening to the songs of the mermaids. 'Tis not a bad life, all
      told. Perhaps the forties are to be my philosophical decade. What is XML?
      XML is what you use to frame the question, and perhaps, to phrase the
      answer. What is the purpose of programming? There has to be more of a motive
      in life than material profit, and more to this philosophical game of
      building grand castles in the sky that is a programmer's stock in trade.

      I have decided that I will enjoy forty, savor it, to paraphrase Frank
      Sinatra, as a good vintage wine. It sure as hell beats the alternative.

      ==================================
      Event Loops, XSLT, and XForms
      ==================================
      A thought struck me the other day, one that has been perhaps crystallizing
      for some time. XSLT is a powerful paradigm for doing transformations, and
      especially with the advent of XSLT2/XPath2 is capable of things that are
      more characteristic of compilers and macro-engines than they are of "simple"
      style sheets. However, from an application standpoint, XSLT is essentially a
      black box - you push data into it, perhaps set some parameters, and after a
      bit you get data out the back end. In that respect XSLT is a purely
      functional language.

      Functional languages are extremely useful, mind you, but they lack one
      characteristic that even the most rudimentary Windows application has. Deep
      within the bowels of any GUI application you will find, somewhere, tucked
      away so deep that it's usually very hard to find it, a small snippet of code
      that looks something like:

      while (!application.end()) {
          application.update();
      }

      Admittedly, with threading it'll look a little bit different, but the intent
      is the same - unless the application is done, update it to reflect any
      changes in its state. This bit of code can turn functional code into an
      engine that will be applied repeatedly to change the state of the
      application continuously.

      However, such functionality is not a part of XSLT, and in all honesty it
      shouldn't be. There are some very sound mathematical reasons for keeping
      XSLT as declarative and functional as possible, reasons that have actually
      contributed to its adoption on such a wide variety of platforms. An event
      loop such as this implies that you are maintaining state between calls to
      the transform, something that is very much at odds with the underlying
      programming model that XSLT uses. That does not mean, though, that you
      couldn't set up a transformation like this:

      XmlState = loadInitialState();
      XslState = loadTransformation("stateTransform.xsl");
      RenderXsl = loadTransformation("render.xsl");
      while (!XmlState.AtEndCondition()) {
          View = RenderXsl.Transform(XmlState);
          Viewer.Render(View);
          XmlState = XslState.Transform(XmlState);
      }

      This is, in fact, almost a textbook definition of a Turing machine - each
      state is directly dependent upon the previous state through a well-known and
      clearly defined set of transformations. There is no theoretical limitation
      to working with this, as there are no real side-effects here; the
      transformation has no knowledge about the history of the state maintainer,
      and is not itself materially changed by the state maintainer.

      The renderer is an independent transformation. It works upon the model (in
      this case the XmlState) to produce a view object in XML that is in turn
      passed to a viewer, or user agent. Significantly, the view object again has
      no direct connections with the XML model ... yet. This kind of application
      works in situations where there is no real external input, but since one of
      the goals here is to understand how to create that input, it needs to be
      fleshed out a bit more. Ideally, you do not want the
      transformation to introduce side effects, so it is dangerous here to assume
      that event management is something that is intrinsic to the transformation.

      Yet such events have to come from somewhere. Looking at this model again,
      the most obvious source for such events is the Viewer object, which would be
      responsible for matching events coming into the requisite window via mouse
      actions or keyboard interactions and making these available to the
      application. Completing the loop, then, you get an application which looks
      something like this:

      XmlState = loadInitialState();
      XslState = loadTransformation("stateTransform.xsl");
      RenderXsl = loadTransformation("render.xsl");
      while (!XmlState.AtEndCondition()) {
          View = RenderXsl.Transform(XmlState);
          Viewer.Render(View);
          XslState.setParameter("events", Viewer.GetEvents());
          XmlState = XslState.Transform(XmlState);
      }

      where Viewer.GetEvents() returns a grove of XML event trees.

      Notice what is going on here. There is a very clear path of processing at
      play that ensures that everything gets updated properly:
      1. The model updates the view, which gets displayed.
      2. The state transformer gets updated by the events generated in the view.
      3. The state transformer, acting on the old model, generates a new model.
      4. Go back to Step #1.
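
      To make this concrete, here is a minimal sketch of that loop in Python using
      lxml's XSLT support. The file names echo the pseudocode above but are
      assumptions, as are the at_end_condition(), render_view() and
      collect_events() hooks; the pending events are spliced into a wrapper
      document rather than passed as a stylesheet parameter, since passing
      node-sets as parameters varies by processor.

      from lxml import etree

      # Assumed file names, echoing the pseudocode above.
      state = etree.parse("initialState.xml").getroot()
      state_xsl = etree.XSLT(etree.parse("stateTransform.xsl"))
      render_xsl = etree.XSLT(etree.parse("render.xsl"))

      def at_end_condition(model):
          # Hypothetical test: the state transform flags a terminal state on the root.
          return model.get("done") == "true"

      def render_view(view):
          # Hypothetical viewer hook; here we just serialize the view XML.
          print(etree.tostring(view, pretty_print=True).decode())

      def collect_events():
          # Hypothetical event source; a real viewer would return the user events
          # gathered since the last render as an XML tree (a grove of event trees).
          return etree.Element("events")

      while not at_end_condition(state):
          view = render_xsl(state)            # model -> view, a pure transformation
          render_view(view)                   # hand the view to the user agent
          frame = etree.Element("frame")      # wrap old model plus pending events
          frame.append(state)
          frame.append(collect_events())
          state = state_xsl(frame).getroot()  # old state + events -> new state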

      The question remains of course whether this is necessarily an efficient way
      of working. The first-blush answer to this is that, no, of course it's
      not. The cost of converting the view from an XML representation into a
      representation within the viewer is particularly expensive. Typically such
      representations are binary objects, and the cost of changing a property (or
      even a whole set of properties) on such objects is usually much lower than
      the cost of destroying the objects then rebuilding them.

      Note that this is much less true in the case where latency concerns - i.e.,
      client/server interactions - are at play. In that case, the cost of
      destroying then rebuilding the objects is comparable to the cost of changing
      a single property, and the above event loop by itself begins to become
      feasible. This point is not trivial, and I'll come back to it later, but for
      now, let's concentrate on the low latency scenario.

      One possible solution to the problem of performance comes by questioning
      what the goal of the XSLT itself is. XSLT has two components to it -
      establishing the set of nodes that will need to be transformed, then
      performing the transformation on those nodes. The former of course is more
      properly the province of XPath - it establishes the initial bindings that
      identify the set of nodes to be manipulated and associate them with the
      appropriate manipulations. In some cases the manipulations consist of the
      creation of new sets of elements - a clear role for XSLT - while in other
      cases the manipulation involves simply the replacement of one scalar value
      with another, a role that can be handled by XPath operations.

      In essence, then, the view representation passed to the viewer can be
      thought of as being a delta map - instead of replacing the entire structure,
      the delta map would contain a set of imperative instructions, such as:

      Locate the node with id "foo" and set its value from "0" to "1".
      Locate all nodes of type <bar> and replace them with <bar_prime>.
      Locate all nodes of type <foobar> and add <boofar> elements as children.
      Locate all nodes of type <goobar> and remove them from the tree.

      Such commands differ from XSLT because they explicitly assume that they are
      changing the state of an existing structure rather than creating a
      completely new structure. However, these also differ from classical
      imperative programming because they are generated specifically for use at
      run time, in response to changes that occur due to event interactions.
      Moreover, the specific changes could be described easily in an XML context:

      Locate the node with id "foo" and set its value from "0" to "1".
      <bind select="id('foo')" value="1"/>

      Locate all nodes of type <bar> and replace them with <bar_prime>.
      <bind select="//bar" action="replace">
        <bar_prime/>
      </bind>

      Locate all nodes of type <foobar> and add <boofar> elements as children.
      <bind select="//foobar" action="append">
        <boofar/>
      </bind>

      Locate all nodes of type <goobar> and remove them from the tree.
      <bind select="//goobar" action="remove"/>

      Intriguingly, because the initial model can potentially be altered by the
      sequence of steps outlined above, it also means that the same model can be
      altered more than once; one of the big differences between declarative,
      template-based processing and imperative (command-based) processing is the
      fact that no assumption can or should legitimately be made about the order
      of processing in the former, but it perforce must be considered in the
      latter.

      If these concepts seem vaguely familiar, it's because they are to a great
      extent how XForms operates. XForms assumes that it has an abstract user
      interface space (defined by the variety of controls), one or more models
      that represent the underlying state of the system which can be changed
      dynamically, and a binding mechanism that permits both selection and
      delta-oriented changes on both the inherent model and the interface. Within
      a specific XForms action, it is possible to have multiple binding operations
      (among other operations) that act upon the underlying model, and the
      operations are monotonic in nature.

      I bring this point up to emphasize, to a greater extent than I have in the
      past, that one of the principal roles of XForms is to act as the XML analog
      to an event loop application. Personally I suspect that XForms will face the
      same kind of uphill struggle for recognition that XSLT has, and for much
      the same reason - the name of the standard tends to obscure its real
      significance. When people hear references to XForms, they make the
      understandable assumption that XForms is the W3C's forms description
      language, and almost immediately they try to liken it to commercial forms
      tools such as Microsoft's InfoPath. It's actually fairly marginal at
      handling forms in the traditional sense - it provides no indication of
      the positioning of elements, defines "form" elements in a very abstract
      manner, and in fact an XForms application may have no "form" elements at
      all.

      That's because the real significance of XForms is that it provides a
      declarative, yet imperative, foundation for graphical user interfaces. It
      actually strikes a pretty nice balance - imperative enough to ensure that
      multiple actions against a model will be handled in the
      proper order, a necessary requirement when event handling (i.e.,
      asynchronous transactions) becomes a part of the underlying system, yet at
      the same time declarative enough that it can be easily generated via
      transformations, the issue I want to address next.

      I'm going to try to flesh this thread out more and build it into a formal
      application to illustrate the principles covered here. I think that issues
      such as this will be an integral part of moving SVG from being a cool
      graphics language to the principal language used for graphical user
      interfaces on the Internet.

      ======================
      Code Gen Redux
      ======================
      My last Metaphorical Web talked about the use of XSLT in its role as a code
      generator. This time around, perhaps because of the overall more
      philosophical direction of this particular issue, I wanted to explore why I
      am beginning to feel that code generation is a huge part of the future of
      XML, and why I simultaneously want to raise some worries about it.

      I think this comes back for me to the question - what is the role of the
      programmer? While the flip answer is to say a programmer is someone who
      writes programs, I'm not really quite sure I would agree with that
      statement. However, taking the answer on its face, I would restate the
      question as "What is a program?"

      I think in the business world, such a question would be answered in terms of
      its applicability to a certain problem. A word processor is an assistance
      device - most of us are (or should be, at any rate) capable of stringing
      together sequences of words in the right order to create a cogent document
      without a computer being the mediating instrument; a pen and paper will do
      much the same thing, albeit more slowly. In the hands of a novice, a word
      processor will let them get past the mechanical aspects of writing and
      concentrate more on the techniques of writing well, so in that sense it can
      improve their writing skills somewhat, but in general there are still
      relatively few exceptional novelists out there, because that skill requires
      both perseverance and a good sense of narrative ... of story-telling.

      Drawing and painting programs are assistance devices - Photoshop will give
      you an instant mastery of the media tools such as airbrushes, watercolor
      brushes and so forth, mastery that can often take years of practice (as one
      who is at best an indifferent airbrush artist, I can attest to this).
      However, Photoshop will not give you any more ability to create good
      artwork; there's still the matter of the artist's eye, the sense of what is
      technically competent work vs. true artistic genius. A world-class painter
      who spends a couple of weeks working with Photoshop will produce better work
      than an indifferent artist who's used Photoshop daily for several years.

      I think this holds true in pretty much any business category one wanted to
      discuss - a program's principal effect is to automate the mechanical aspects
      of the job at hand. In some cases, where the mechanical aspects constituted
      a significant amount of the job at hand, this could in effect mean that the
      automation would likely serve to replace the function of the person who
      previously did that job. A person who processes invoices typically performs
      a number of mechanical steps to insure that the invoice is valid and
      reasonable (the two are not the same thing - an invalid invoice would be one
      in which a negative number was found in the number of items field, whereas
      an unreasonable invoice would be one where a person made an order for
      9,250,306,152 new cars).

      Automating the validity here is almost trivial - it is in automating the
      reasonableness where things get to be tricky. This is where the concept of
      "business logic" really comes into its own - the purpose of business logic
      is essentially to ensure that information being processed is both valid and
      reasonable, and is, to a certain extent, an attempt to capture the notion of
      common sense within the lines of code. Typically, one way to do that is to
      ensure that there are strong limitations placed upon the points of entry for
      data - the laxer these restrictions, the more likely unreasonable data is
      to get through. However, this is far from being simple, because the
      more you create gateways for assuring reasonableness on the input side,
      the more complex the applications become, and the more code gets
      involved in writing them.
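
      To make the distinction concrete, here is a toy sketch in Python (with
      entirely invented field names and thresholds) of the difference between a
      validity check and a reasonableness check for an invoice line item:

      def is_valid(item):
          # Structural checks: the right fields exist and hold non-negative numbers.
          return (isinstance(item.get("quantity"), int)
                  and item["quantity"] >= 0
                  and isinstance(item.get("unit_price"), (int, float))
                  and item["unit_price"] >= 0)

      def is_reasonable(item, typical_order_size=100):
          # Heuristic check: flag anything wildly out of line with past orders.
          # Real reasonableness logic is contextual and far harder to pin down.
          return item["quantity"] <= 100 * typical_order_size

      order = {"quantity": 9250306152, "unit_price": 24000.0}
      print(is_valid(order))        # True  - structurally fine
      print(is_reasonable(order))   # False - nobody orders nine billion new cars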

      In essence, the real skill of that invoice person was their ability to look
      at an invoice and determine, at a glance, that it was a reasonable document.
      That "glance" of course encapsulated the ability to do pattern matching
      based upon experience with thousands of other invoices to determine whether
      or not there was something that appeared "odd" about the document, and hence
      worth a second look. In essence, the invoice person's "glance" serves the
      same purpose as the writer's "ear" or the artist's "eye", the ability to
      discern discordant patterns and to emphasize good ones within a particular
      work.

      In a way, this can be thought of as "talent". You can be an artistic genius
      and have poor eye-hand coordination or color sense (indeed, Kelly Freas, one
      of the best-known painters in the science fiction genre and an artist whose
      work has appeared on book and magazine covers for more than six decades, is
      completely color blind), or be a sterling writer and an absolutely abysmal
      typist. The talent is essentially discernment, in being able to see what
      makes for true art, or good story-telling ... or a reasonable invoice. And
      it's damnably difficult to automate, thankfully.

      Yet returning to the initial question of what defines a programmer, I think
      you could reasonably say that a programmer is a person who is able to
      discern the patterns of automation and then build applications to exploit
      those patterns. A bad programmer may in fact understand the tools fairly
      well, but if they lack the ability to see those patterns of automation then
      the solutions that they come up with will typically be fairly inefficient,
      will rely to a great extent upon pre-existing libraries, may end up using
      the wrong tools for the wrong problems, and will usually end up compromising
      the degree to which the solution models the problem.

      The irony is that this description sounds to a certain extent like the
      standard "best practices" in programming. Code re-use for instance has often
      been seen as being one of the great holy grails of programming, and is the
      foundation for the Algol derived languages (C, C++, Java, C#, etc.)
      Unfortunately, one problem that occurs with such languages is that they
      encode solutions into large, complex frameworks with thousands of
      interrelated classes, and programming then comes down to the degree to which
      you know which particular foundation classes are intended for what purposes.
      Mind you, this is great for vendors of software, who recognize that there is
      profit to be had in "simplifying" this morass of classes, but in the process
      you become increasingly dependent upon architectures that you have to
      have faith will work, simply because the space becomes too complex.

      I suspect that there is a semantic equivalent to Shannon's laws about
      entropy and complexity within information systems. Claude Shannon, for those
      of you who aren't familiar with his work, was an engineer working for Bell
      Labs in the 1940s. In one of his seminal papers on information and entropy,
      he showed that there was a direct equivalence between information (which he
      defined at a VERY low level, essentially the encoding of bits) and energy
      flow, and consequently that the second law of thermodynamics, which
      defines the notion of entropy, can similarly be applied to information
      manipulation. Note that the level of definition of information in Shannon's
      case made no implications about semantics, and it's dangerous in fact to make
      any assumptions concerning any higher-order structures and their
      applicability to entropy or energy dynamics.

      I am, however, going to coin a few basic observations - call them Cagle's
      Principles - though I have no doubt that they have been encapsulated in far
      better form than I'm presenting here (I'm in fact looking for such, and
      would definitely welcome any links to more in-depth information on this
      topic). To wit:

      1) The degree of complexity of a problem can never be reduced; it can only
      be transferred from one location to another within the solution.
      2) The complexity of a problem usually does not reside in the part which
      ensures the validity of the model, but instead in the part which determines
      the reasonableness of the model.
      3) Validation code is highly structured; reasonableness code is highly
      relational.

      The first principle is essentially my statement of semantic entropy - there
      is a certain minimal configuration of code that can be thought of as the
      ideal configuration, and no amount of coding can take the application below
      this ideal configuration. The problem is that such a configuration is not
      knowable in advance (and in fact I suspect that the notion of semantic
      entropy and the class of NP (nondeterministic polynomial-time) problems are
      intimately intertwined). Simplifying one aspect of the code will
      usually end up pushing the complexity to some other point in the system.

      The second principle comes from experience. I've worked on a number of
      software projects over the years, and I keep finding that the 80-20 rule
      seems to hold consistently - it takes 20% of the time to do 80% of the work,
      and 80% of the time to do the remaining 20%. I think that this occurs
      because many program designers and architects assume that all aspects of
      programming are pretty much equally easy, when in fact you have this
      distinction between valid and reasonable models. The solution is often to
      truncate the reasonableness model at some point as your code becomes
      increasingly filled with handling more and more sophisticated exceptions.

      To many programmers, these exceptions are troubling because they can often
      be seen as a failure of the model, or worse, of their own programming
      ability. This is one of the reasons that exception-handling code is usually
      placed into the code fairly late in the game - and honestly, too late for
      most applications. In point of fact, I am beginning to
      think that our entire orientation of programming is wrong, because we assume
      that there is in fact one right solution with a number of exceptions to
      handle the "odd" cases. Instead, it seems to me that a better solution
      would be to recognize that the space of all exceptions includes the primary
      code, and to essentially build our programs with the viewpoint that all
      programs are in fact exceptions, because these exceptions define the
      reasonableness of the model.

      The last principle is again just an observation, and one filled no doubt
      with countervailing examples, but it seems to hold true often enough to
      bring it up. Validation code in general lends itself very much toward
      structured programming. In other words, validation code tends to recur in
      patterns, to such an extent that understanding those patterns can
      significantly reduce the amount of "noisy" code that is written.
      Significantly, such patterns mean that it is possible to automate much of
      the production of validation code via code generators. A pattern is not a
      static template. There are variations within each pattern - parameters to
      those patterns, if you will - but if you can reasonably ascertain the nature
      of those parameters then the code generators (such as XSLT) can in fact
      build the requisite code via a series of transformations on descriptions of
      the model.
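
      As a small illustration of the idea (the field descriptions and generated
      checks below are invented; in practice the description would more likely be
      an XML document and the generator an XSLT transform), a code generator can
      stamp out the repetitive validation code from a declarative description of
      the model:

      # Sketch of generating validation code from a declarative model description.
      fields = [
          {"name": "quantity",   "type": "int",   "min": 0},
          {"name": "unit_price", "type": "float", "min": 0.0},
          {"name": "zipcode",    "type": "str",   "pattern": r"^\d{5}$"},
      ]

      def generate_validator(fields):
          lines = ["import re", "", "def validate(record):", "    errors = []"]
          for f in fields:
              name, ftype = f["name"], f["type"]
              lines.append(f"    if not isinstance(record.get({name!r}), {ftype}):")
              lines.append(f"        errors.append('{name}: expected {ftype}')")
              if "min" in f:
                  lines.append(f"    elif record[{name!r}] < {f['min']}:")
                  lines.append(f"        errors.append('{name}: below minimum')")
              if "pattern" in f:
                  lines.append(f"    elif not re.match({f['pattern']!r}, record[{name!r}]):")
                  lines.append(f"        errors.append('{name}: malformed')")
          lines.append("    return errors")
          return "\n".join(lines)

      print(generate_validator(fields))   # emits the source of the validation module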

      So far, we've done fairly well at determining the first order patterns
      (structured imperative code) and are doing okay, though not great, on
      determining the second order patterns (class frameworks). I think as more
      people begin working with XML and XSLT, however, they are beginning to
      realize that there are higher order patterns that can in fact be automated,
      as we begin to abstract out the notion of classes in terms of XML based
      interface descriptions. To me, that will be where the bulk of the conceptual
      programming efforts will take place over the course of the next decade.

      However, the reasonableness code is a little more problematic, because it is
      largely contextual. What is the best interface to use for retrieving
      information about a person's interests? What if that person is three years
      old? What if that person can't see? What if the interests only lie within
      the domain of computation, or politics, or choosing a date for the evening?
      In other words, it is the relationship of the user to the external context
      that determines the nature of the application, the "business logic". The
      underlying computational models may in fact be very similar from one use
      case to the next, but the interfaces for the applications couldn't be more
      different. Here, structural patterns do not help us, but instead can often
      prove to limit the domain in which commonality of structure can be re-used.

      Notice that I've focused on the use of interfaces here. This was not an
      accidental choice. Quite typically, reasonableness is directly correlated
      with effective human/computer interface design. With the exception of
      multimedia developers, most interface design tends to get fairly short
      shrift in programming efforts because of the perception that such interfaces
      are "easy" compared to the complexities of data manipulation, socket
      management, searching, etc., and so it is often assigned as a secondary task
      in the application development. Nothing could be further from the truth.

      It is relatively simple to automate interface editors - tools that assist
      designers in interface development. The problem here is that what is being
      automated is the validation code - the tools that provide the minimal
      constraints for inputting information, and usually work upon the deployment
      of a set of "standard" interfaces that are used both because they are
      available and because users have become acclimatized to them (think treeview
      controls, editor panes, and so forth). However, the code that's necessary to
      provide reasonableness on top of that data (to ensure, for instance, that
      the city, state and zip code in an invoice are all internally consistent)
      necessitates building in additional code - extending the standard models -
      and often introducing dependencies and coupling in the code that make code
      reuse difficult. Building bindings so that the same underlying code can be
      used for the three-year-old and the visually impaired user adds even more to
      the complexity of the code, especially since the components typically have
      wildly divergent programmatic interfaces.

      I think that certain of the second-generation XML technologies - XForms,
      SVG, XML Events, etc. - will be central to breaking this conundrum.
      XML is fundamentally an abstracting relational mechanism. XML makes it
      possible to create multiple layers of abstraction that can be manipulated
      via a single set of interfaces (either DOM- or XPath-oriented). It can move
      the complexity
      out of the imperative code (which ultimately should be responsible for
      handling the low level mechanisms for keeping the program running, such as
      those described in the previous piece) and can consequently encapsulate it
      within a declarative document. Put another way, in order for business logic,
      for reasonableness, to become manageable, it needs to migrate to XML.

      This doesn't solve the problem of complexity, by the way - XML can become
      complex in its own right - but it increases the degrees of freedom by which
      that complexity can be represented, and in doing so makes the patterns
      associated with those bundled sets of information more obvious. Procedural
      code typically assumes linear patterns and models, and each deviation from
      linearity makes the code more complex. However, using declarative pattern
      matching templates as embodied in XSLT or XForms can make it possible for
      data to engage in "best-fit" matches that remove the developer from having
      to arbitrarily make these decisions based upon some predefined "type"
      association.

      This model, of course, is not yet in wide use, but it will be, simply because
      XML is becoming more pervasive. That I can make the above arguments and
      comments at all is indicative of the fact that people are beginning to
      realize that meta-languages imply meta-structure, and meta-structure can of
      course be manipulated programmatically. In essence, what is happening is
      that we are seeing a gradual transition to a higher-order programming model
      than the one that was ushered in with object-oriented programming. It is one
      that increasingly assumes anonymity of devices, programmatic interfaces
      exposed via an XPath like mechanism upon a virtual representation of the
      device in XML. It doesn't make OOP go away, any more than OOP eliminated the
      need for structured programming. Rather, it encapsulates OOP, pushing it down
      the stack so that it provides the efficient lower-level architecture upon
      which the more general XML space can operate.

      =================================
      The Dark Side of Productivity
      =================================
      Of course, given the above, given that one can essentially define talent as
      the ability to create a set of pattern-oriented rules for discerning what
      makes for good (reasonable) vs. bad (unreasonable) content, what does this
      do for the writer, the artist, or (perhaps more germane to this crowd) the
      programmer? Well, this is where the technologist in me collides with the
      economist and the philosopher. There is a term that our current economic
      managers (most especially the Fed) seem to use with relish. That term is
      "productivity". It is, in effect, the measure of "how much" a typical person
      is able to accomplish in a certain period of time ... essentially the Taylor
      notion of the work-hour.

      In the mid-1960s, we started automating the collation and accounting
      functions of businesses, because these basically are problems that have
      relatively low reasonableness to validation (R/V) ratios (and can be solved
      with relatively low applications of computing power). By the 1980s, we had
      automated a significant portion of purely mechanical work - the creation of
      everything from automobiles to toys is now accomplished largely by robots,
      with the role of human beings reduced to doing the initial designs and
      assuring reasonableness. In the 1990s, the increase in computing power and
      the rise of the Internet made it possible to deal with problems that had
      much higher R/V ratios - the design process for those same cars could now be
      handled largely via CAD tools by people who were 1000 miles from the design
      centers. This has had the effect of decreasing the requirement for managers
      significantly, since one manager could ensure reasonableness for a much
      larger number of projects (though perhaps not quite as well).

      The automation that had hit areas such as typesetting in the 1980s was now
      hitting mainstream artists in the late 1990s. The animation field has been
      largely decimated by computer technology, since most animators were
      tweeners - the people who handled the production of intermediate cels in an
      animation between the key frames - or were colorists. At first, animation
      software meant that one could dispense with both of these people, as these
      were fairly mechanical activities, but increasingly, specialized filters on
      3D software make it possible to create 2D animations with 3D resources,
      cutting out even the initial designers. Yes, new jobs have been created in
      the initial design work and the story interactions (those areas which have
      very high R/V ratios) but the number needed is far below the number of other
      "technicians" who are now displaced.

      The same can be said for almost every area that involved the production of
      goods, or the mediation of services. Productivity is going through the
      roof - but that means fewer and fewer people needed for any particular task.
      This doesn't hold true for all jobs, certainly, but even in those positions
      that haven't been immediately impacted by automation the fallout is felt.
      There is currently a surfeit of lawyers, for instance, because many of the
      managers and technologists who lost their jobs in the dot-com fallout have
      since gone back to school to get law degrees, even though certain of the
      more vital services that lawyers performed - finding laws that could be used
      to help prospective clients, for instance - can increasingly be performed by
      lay-people on the web.

      One of the major trends of this decade is of course wireless technology.
      Wireless doesn't mean that salespeople can now enter orders at the place
      where they pitch their product - it means you can dispense with the
      salesperson altogether. It has also caused other really weird dislocations -
      coffeehouses have become work-centers, one of the reasons that Starbucks is
      one of the fastest growing chains on the planet.

      Installers and repair people still have jobs of course (indeed, those are
      two of the safest professions to be in right now), but even there automation
      is changing the nature of the game. Auto-diagnostic cars mean that a person
      spends less time trying to figure out what's wrong with the car, which means
      that you have a smaller wait to get your car serviced. But it also means
      that auto-repair shops don't have to hire as many knowledgeable mechanics,
      only ones that can be trained to use the diagnostics; great if you're
      running an auto repair shop, but not so good if you're looking to get into
      that field.

      By pushing our technology increasingly up the R/V curve, we are creating a
      society that requires far fewer workers to sustain it. This would be fine if
      our societal structures changed to accommodate it (reducing the number of
      working hours per week, for instance, instituting a living wage, or any
      number of other ideas), but that's simply not happening right now.

      Instead, if you are not working a traditional job, you are essentially
      penalized by not receiving wages. You may receive unemployment, but the
      purpose of unemployment is not to provide an alternative way of living but
      rather to be an incentive to find another job, and quickly. If you are, like
      so many computer professionals, employed by a contract service or are a
      small business consultant in your own right, things can be even stickier, as
      unemployment is basically geared only to former employees of medium-to-large
      corporations who have been employed steadily for some time.

      This imbalance will continue, and while I find fault with our current
      president on a number of issues, I think I can safely say that he has had
      only a minor deleterious effect on the job market - the problems that we are
      facing right now vis-à-vis unemployment are much deeper and more endemic
      than any one president, no matter how inept, can claim responsibility for.
      We are seeing some fundamental instabilities building in the infrastructure
      of capitalism due to the computer/network revolution, and it will cause some
      major societal shifts before it's all said and done with.

      The second danger of moving up the R/V curve, even with the advent of
      meta-language programming, is the fact that the reasonableness portion of an
      application typically can be thought of as a series expansion with an
      infinite number of terms (or harmonics, for those more musically inclined) -
      these basically map to the exceptions in a given application. In developing
      applications, we are typically forced into truncating this series at some
      point to get the application out the door. The problem with this is that the
      curve that a given series maps to is thus only an approximation of the
      "real" application curve, and the wider the audience the more likely that
      there will be meaningful information that isn't properly handled.

      For instance, take a loan application. Loan applications provide a mixture
      of validation and reasonableness code. Typically, a loan officer is
      employed within the loop to review loan applications, using the
      recommendations provided by software along with more difficult-to-quantify
      gut feelings for the special cases - a person comes from a low-income family,
      had one bad spate when a company he worked for went under, but has good
      school habits and grades, has shown considerable initiative, and has some
      solid references. Because of the recent period of unemployment, his credit
      rating was negatively impacted, but the less tangible considerations would
      suggest that this was an aberration, and he gets the loan.

      The bank gets bought out, and the local loan officers are let go because the
      new bank has what it sees as a superior application system. However, this
      software still is not really able to factor in these intangibles, even
      though the belief is there that it handles more of the "reasonable" domain.
      In essence, the software is now asked not to suggest a course of action, but
      rather to make the decision on that course of action. The same person, with
      the same background, would be turned down by the loan application software,
      because it doesn't have the ability to make subjective rather than objective
      judgements.

      I think we're several years, if not several decades, away from software that
      is capable of doing so. This is not to say that I think the human brain is
      inherently a better computer; it's manifestly not. The problem rather is
      that we do not yet fully understand code to a point where reasonableness can
      be made manifest in code. In time, I suspect we may, but at that point we
      will also be reaching a point where we have made ourselves obsolete.

      Language, programming, society and economics are all interrelated. That
      there are processes that can be automated does not imply that they should
      be, but unfortunately means they likely will be. The over-emphasis in our
      culture on efficiency (and hence short-term profitability) all too often has
      an impact on the most vulnerable in society - but this has been true for a
      long time. What is relatively new is the perhaps too-late realization that
      our economic structure is in fact built upon the system needing a certain
      amount of inefficiency in it, a modicum of oversight to ensure that
      reasonableness is maintained within the applications that we build.

      One final thought. The transition from an agricultural to an industrial
      society caused a fair amount of dislocation, but the innovations that were
      brought about by the industrial society forced a change in society that
      required specialization of skills, a greater need for education, and the
      increased mobility of the populace. It did not, in general, put people out
      of work - it simply shifted the burden of work from agrarian tillage and
      cottage industries to factory work and factory management. The problem that
      we are potentially facing now is more severe, because more and more of the
      work is not shifting to different forms - it is disappearing outright.

      There are emergent forms of society that seem to be adapting to this
      reality, but in many ways they run very much counter to the prevailing
      culture of this country. In the next issue of Metaphorical Web I want to
      talk about them in greater detail.

      ===============
      SVG Open
      ===============
      I will be presenting a class on SVG Component Development on Sunday, July
      13th at the SVG Open Conference in Vancouver, British Columbia, and will
      also be presenting a paper on Friday, July 18th at the same conference. I
      hope to be there throughout the conference. If any of you are also planning
      to attend, please look me up or let me know ... I'd love to talk shop.

      The Metaphorical Web is copyright 2003 by Kurt Cagle, and is also reprinted
      at http://www.metaphoricalweb.com. If you have any questions or comments,
      feel free to contact me at kurt@....