
The Metaphorical Web #17: When Description Becomes Program

Posted Jun 19, 2003 (written 2003-06-01)

      The Metaphorical Web #17

      By Kurt Cagle




      Up and down and up and down … I took my family to the Pacific Science Center a couple of weeks ago to see the carousel exhibit there. It was quite a show: dozens upon dozens of delicately carved horses, dragons, hares, and other, more mythological denizens were on display around the exhibit, and a full-size, fully functional carousel bore my two daughters, my wife, and me on a brief but exhilarating trip away from the cares of the world. To me, though, what was even more fascinating was the intricately engineered mechanism that combined strength, safety, and ease of transportability, and the calliope that played a dozen instruments at once through punched scrolls that hinted at (but predated) the Hollerith cards of computer usage by at least a few years.


      There is a tendency to dismiss carousels and merry-go-rounds as childish entertainments, yet at one time such machines represented the pinnacle of technology, combining many of the features that robotics engineers would not reinvent for another eight decades. Even in the realm of technology, newer is not necessarily better, and there are few “solutions” out there, to use 21st century parlance, that have not been at least considered at some time in the past.


      For me, a fairly long drought is finally ending; I am writing once again and staying busy, focusing on books, articles, and a likely teaching assignment in the fall at the University of Washington (evening extension classes in XML). For those who have been wondering, the server that hosts my blog finally succumbed to the beating and battering of multiple moves, my daughter's swan dive from the case, and the accumulated wear and tear from a less than healthy fan. As near as I can tell, the problem seems centered in the CPU, not the hard drive (I hope), so I should be back online soon once I get the computer back from the shop.


      For those interested in SVG, I will be teaching a day-long course on Advanced SVG Programming at the SVG Open Conference, July 13-18, 2003, in Vancouver, Canada (http://www.svgopen.org).





      One of my favorite classical pieces is Ravel's Bolero. At first barely audible, the piece takes a simple (albeit very sinuous) theme and tosses it from lowly oboe to French horn to violin to bassoon, building in intensity and complexity as percussion and deep-throated strings pick up the thread, becoming polyphonic and highly textured in the process. In the end, it becomes a veritable wall of music, chaotic and frenzied, before finally collapsing of its own weight in one grand crescendo.


      I got to thinking about this as I was looking over a trade paper a few days ago. Microsoft had just announced that it was licensing SCO Unix, a move that would likely send major tremors through the Open Source community.


      Bolero – the same theme expressed in different, overlapping ways, reinforcing one another. Microsoft, or at least the leaders at Microsoft, is facing a real problem. The ground upon which Microsoft has built its predominance, desktop computing, is fading away as hard economic times collide with the conflicting demands of keeping an OS stable and shipping commercial products on a real timetable. They can improve and innovate (or at least purchase those who are innovating), yet the irony is that the desktop paradigm has developed a very strong inertia that is becoming increasingly costly for them to fight.


      They have cash, but what they don't have is the ability to say "Okay, everyone start all over." This was proven by the Windows XP licensing debacle, a move that brought them more short-term cash but at the cost of a PR nightmare. The same theme echoes in the slower-than-anticipated adoption of C# and .NET in general, in the growing specter of Linux eating into the server, set-top, and hand-held markets, and in Linux beginning to appear on the radar of the desktop market.


      The current model that Microsoft employs is unsustainable, and they know it, but they also have at least one more iteration of Windows left before the day of reckoning comes. Significantly, there are several possible reasons why SCO Unix was licensed. The most cynical reading assumes that the actions by SCO, specifically its allegations that IBM incorporated much of the SCO Unix architecture into its open source submissions to the Linux and Apache working groups, provide a rationale that Microsoft can use to sue the various Linuxes, or IBM. It's possible that this is in fact the case, but I rather doubt it.


      For one thing, Microsoft suing IBM would be, at best, counterproductive, and at worst devastating, especially in the wake of the Sun debacle. Arguably Microsoft may have won the browser wars, but at the cost of locking itself into a decade-long legal battle that could very well re-erupt in the event that a less business-friendly administration comes into power. Additionally, by attempting (whether directly or through SCO) to attack movements such as UnitedLinux, there is a very real possibility of a backlash, especially among governments that are already leery of Microsoft's significant hold on their IT infrastructure.


      Instead, I envision a time in the not-too-distant future when Windows may run natively on Linux. Windows can be run non-natively even now using such emulation tools as CrossOver Office (making it possible to run Office and other Windows-centric applications within Linux) or VMware (which creates a separate virtual machine layer under Linux that can be used to run Windows itself), and the .NET strategy could very readily migrate to a Unix platform, as the fairly advanced state of both DotGNU and Mono can attest. The next generation of Windows could very well run on a Unix platform. Thus the issue here is not so much a technical one as a licensing and business-model one.


      I would contend that Linux arose in great part as a reaction to the hegemony that Microsoft had in the desktop sphere. The open source model, which I think is an important and significant part of its heritage, was the key to helping it survive the treacherous shoals that have wrecked companies such as Be, Amiga, or even Apple (and, just to forestall the inevitable complaints from Mac lovers, Apple now exists as a boutique company producing a designer box that essentially runs a form of Unix; it would not have survived without the shift away from a completely proprietary OS).


      Yet the open source model is already under attack from the commercial sector in a thousand different ways, more so now even than when Linux was a fairly obscure hobbyist operating system. The kernel of Linux will probably always remain open source, if only because it is rapidly becoming the de facto operating system of most national governments, and it is just as likely that the principal shell running on top of this kernel, KDE, will remain open source for the same reason.


      A common, stable shell idiom makes possible the development of applications. This was the lesson of Windows, and this is the lesson of KDE. Yet the poison pill of the GPL has led many in the commercial sector to be skittish about developing for Linux. The strategy that instead seems to be emerging is the one that SCO Unix followed, which protected the intellectual property of developers by stating that they could build their applications on top of the shell without giving up their rights to protect their own proprietary code.


      Will this become the dominant paradigm in the Open Source world? I think that's an experiment that is taking place now. One of the principal purposes of the GPL was to assert, in essence, that the operating system is a community effort, and it is the rare programmer who could build an application that did not in some way have dependencies upon what others produced. The SCO model would in essence create a new kind of symbiosis – applications could be developed around SCO, which would protect the company in question from having to contribute explicitly to the maintenance of the underlying OS, though obviously the owner of SCO could negotiate with the developers of new services to acquire those services for the operating system.


      It would essentially be a fork of the Unix tree, just as Linux was, and it would be close enough to Linux that applications developed for one could work on the other … for a little while, anyway, especially if a deliberate effort were made to maintain some stability in interfaces. That this might take place anyway through the efforts of both Java and the various .NET clones (DotGNU and Mono) means that the split would be gradual indeed. This may lead to a migration by businesses away from Linux to whatever SCO Unix becomes, no doubt the desired goal of Microsoft in working with SCO in the first place.


      Or it could create another situation altogether, one that I suspect may actually arise. The UnitedLinux efforts are leading to a situation where you have a core system that is largely homogeneous in its kernel and core desktop services but that varies in the face presented to the world (and in the kinds of non-core support services). This leads to Linux deployments that are keyed to a particular country or region – one for China, one for Japan, one for Brazil, one for Germany, and so forth. I suspect such trends will continue – there are some powerful economic incentives to having a native software industry, as well as a few (perhaps not always benign) social reasons for a country to be able to control its own operating system.


      Interoperability and kernel-level conformance become key to keeping such systems from diverging into a mish-mash of incompatible systems, so it is likely that a worldwide effort toward maintaining this base will result in something analogous to the emergence of a W3C or OASIS (or the migration of Linux into OASIS, which I personally think has a very high probability of occurring). Consequently, should this happen, a very powerful incentive emerges for computer and system device vendors to converge upon the Linux architecture as it becomes the de facto global operating system.


      Bill Gates is far from stupid … he no doubt recognizes that the dominance Windows enjoyed in the 1990s was largely the result of the confluence of a number of factors, only some of which Microsoft can claim to have actively initiated. With Carbon, Apple effectively threw in the towel on the Unix front (although admittedly it also had A/UX as an alternative platform for Macintosh computers), and IBM has been actively moving its operations to Linux for several years. The recent takeover rumors about Sun illustrate the fragility of its position: while Sun still has strength in higher-end systems, Linux is having a devastating effect upon Solaris systems.






      As you may have guessed, this column is not completed all in one chunk. Rather, I work on it around my other projects when I can. The last section is a case in point, having been written about two weeks ago. I felt that the section above was still relevant, but at the same time a few additional events have taken place since then that color the story considerably.


      SCO is in the process of shooting themselves in the foot … repeatedly … with a very large magnum. As the case has evolved, it has become increasingly evident that SCO may have prevaricated itself into a very nasty legal box. The difficulty came about when SCO, which has been on the ropes financially for some time because of the encroachment of Linux (and perhaps more directly, IBM's pushing of Linux), decided to drop a bombshell by charging that core parts of the code that ended up in Linux were originally in SCO Unix, and that IBM appropriated that code and illegally pushed it into Linux. The intent, no doubt, has been to put a chill in the Linux market by raising the specter that Linux itself was compromised, which possibly would have benefited SCO.


      However, things have become considerably muddier the more people begin to examine SCO's position. SCO and IBM did work together for a while on Project Monterey, which was designed to create a 64-bit version of Unix on the Intel architecture. After a great number of problems occurred with the project (principally on SCO's side), IBM pulled out to develop alternatives that did make their way into Linux, even as SCO incorporated many of the changes from the failed project into its own distribution.


      Significantly, SCO has also been unable to produce patents indicating that its work has in fact made its way into Linux; indeed, a number of authorities in the field (including the inestimable Bruce Perens) have wondered whether a great deal of the Linux core has instead found its way into the SCO shell. Perhaps the most damning piece of counter-evidence is the fact that the work SCO did for Project Monterey was itself done under the GPL, which was designed for the express purpose of ensuring that once someone commits material under the GPL, they cannot later change their mind and pull it out. This is the intrinsic cost of the GPL – you essentially relinquish the right to later claim a proprietary stake in a technology should that technology work out, in exchange for getting access to the body of work that has already taken place.


      Many see the actions of the SCO Group as the noisy death-cries of a dying company, with the threats becoming ever more bizarre as more dirt gets unearthed for the grave. Recently, SCO CEO Darl McBride stated that if their case proved strong enough, they would even file a lawsuit against Linus Torvalds, the principal creator of Linux. If there were a better way to harden everyone in the Open Source and Unix communities against SCO, I can't see what it would be.


      I think that Microsoft should save itself a serious headache and either buy up SCO rather than just license its technology, or get out before the ink dries. SCO's lawsuit against IBM is extraordinarily weak, especially given SCO's own GPL licensing, the role of Novell in illustrating that many of the patents SCO is hinting at are in Novell's portfolio and not SCO's, and the tortured and inconsistent timeline that SCO offers about the critical events toward proving the suit.


      I honestly feel that it is in Microsoft’s best interest to get into the Unix game, for reasons mentioned previously. However, sticking with SCO is not the way to do it. The company is racing towards its own self-destruction.


      When Description Becomes Program


      I follow an XSL mailing list on the web when I get the chance, in between articles or chapters (and sometimes even during them when the right gerund or code sample fails to come to hand).  Recently, a heated discussion caught my attention, talking about the use of XSLT as a means of generating code … not just XML code, but C#, Java, Javascript, even (*gasp*) C++ code. Real programming language code. This wasn’t exactly new to me … I had written a fairly sophisticated Javascript code generator for a client late last year, and found that while there were occasional kludges that I had to introduce, XSLT was actually pretty efficient at creating the requisite code (and really good at it once you began moving to XSLT2).


      Yet what I found fascinating was that others had begun to explore the code generation aspects of XSLT as well. I've actually run into a number of people who routinely use XSLT to generate, at a very minimum, the initial interface code based upon an XML representation of that code. This idea may seem counterintuitive at first – XML is usually considerably more verbose than C++ or C#, and as such it would seem that going through XML as a way of building your code might introduce more effort than it saves.


      Surprisingly enough, there are a number of reasons why using an XML model for a class (or set of classes) makes a great deal of sense. A class represents a natural container/contained relationship: it contains methods and properties, exposes events, and perhaps even holds anonymous child classes. You can use XML to define the various entities within the class without necessarily needing to use a specific language's code notation.


      For instance, consider a simple Button class. Such a button, represented in a language such as C#, might look something like this:


      Listing 17-1. A Button Class (Button.cs)

      using System;
      using System.Drawing;

      namespace SimpleButton
      {
            /// <summary>
            /// SimpleButton.Button provides a simple abstract button layer
            /// that describes the various states of a button and exposes
            /// the methods Press(), Release().
            /// </summary>
            public delegate void StateChangedEventHandler(object sender, EventArgs e);

            public class Button
            {
                  public event StateChangedEventHandler StateChanged;

                  protected virtual void OnStateChanged(EventArgs e)
                  {
                        if (StateChanged != null) StateChanged(this, e);
                  }

                  public enum ButtonState { Normal, Pressed, Highlighted, Disabled }

                  private ButtonState m_State = ButtonState.Normal;
                  private String m_Text = "Press Me";

                  public Button(String p_Text)
                  {
                        m_Text = p_Text;
                  }

                  public String Text
                  {
                        get
                        {
                              return m_Text;
                        }
                        set
                        {
                              m_Text = value;
                        }
                  }

                  public ButtonState State
                  {
                        get
                        {
                              return m_State;
                        }
                        set
                        {
                              m_State = value;
                              OnStateChanged(EventArgs.Empty);
                        }
                  }

                  public void Press()
                  {
                        State = ButtonState.Pressed;
                  }

                  public void Release()
                  {
                        State = ButtonState.Normal;
                  }
            }
      }

      This particular button doesn't have a physical manifestation – it is simply the embodiment of a four-state named system (and a VERY simple one at that).


      In looking at this code, one of the first things that may become obvious, especially if you are not used to reading C#, is that there is no real distinction describing what is a method, what is an event, or what is a property. You can ferret it out if you know the language (the get and set accessors are a dead giveaway for a property, for instance), but what you have here is the program's view of a particular abstract object.


      Here's a bit of an exercise: how would you cast this same class as an XML object? Assume, for the moment, that the specific implementation details are irrelevant – the trick here is expressing the interfaces. The result might look something like this:


      Listing 17-2. A Button Class as XML (Button.xml)


      <namespace name="SimpleButton">

            <include href="System"/>
            <include href="System.Drawing"/>

            <delegate name="StateChangedEventHandler" scope="public" type="void">
                  <parameter name="sender" type="Object"/>
                  <parameter name="e" type="EventArgs"/>
            </delegate>

            <class name="Button" inherits="Object">

                  <summary>
                        SimpleButton.Button provides a simple abstract button layer
                        that describes the various states of a button and exposes
                        the methods Press(), Release().
                  </summary>

                  <constructor scope="public">
                        <parameter name="p_Text" type="String"/>
                  </constructor>

                  <enum name="ButtonState" scope="public">
                        <item name="Normal"/>
                        <item name="Pressed"/>
                        <item name="Highlighted"/>
                        <item name="Disabled"/>
                  </enum>

                  <property name="Text" type="String" scope="public" accessors="get,set"/>
                  <property name="State" type="ButtonState" scope="public" accessors="get,set"/>

                  <method name="Press" type="void" scope="public"/>
                  <method name="Release" type="void" scope="public"/>

                  <method name="OnStateChanged" type="void" scope="protected" virtual="yes">
                        <parameter name="e" type="EventArgs"/>
                  </method>

                  <event name="StateChanged" type="StateChangedEventHandler" scope="public"/>

            </class>
      </namespace>

      This particular view provides the basic interfaces that were described within the previous class definition, but it does so in a more descriptive manner. You can readily identify the methods, events, properties, enumerations, and so forth because they are labeled explicitly as such. By itself, such a descriptive document gives you little more than what you had originally. However, it is what you can do with such a document that makes this description useful.
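As a purely illustrative sketch of that point, the fragment below uses Python (as a stand-in for an XSLT pass; the trimmed-down Button.xml excerpt embedded in it is an assumption) to enumerate the members by kind. No C# parsing is needed, precisely because every member is labeled explicitly by its element name:

```python
# Sketch: pull explicitly labeled members out of an XML class description.
import xml.etree.ElementTree as ET

BUTTON_XML = """
<class name="Button" inherits="Object">
    <property name="Text" type="String" scope="public" accessors="get,set"/>
    <property name="State" type="ButtonState" scope="public" accessors="get,set"/>
    <method name="Press" type="void" scope="public"/>
    <method name="Release" type="void" scope="public"/>
    <event name="StateChanged" type="StateChangedEventHandler" scope="public"/>
</class>
"""

def summarize(xml_text):
    root = ET.fromstring(xml_text)
    # Group member names by element tag: property, method, event, ...
    summary = {}
    for child in root:
        summary.setdefault(child.tag, []).append(child.get("name"))
    return summary

print(summarize(BUTTON_XML))
# → {'property': ['Text', 'State'], 'method': ['Press', 'Release'], 'event': ['StateChanged']}
```

The same grouping is exactly what a documentation transform does when it orders the output page by functionality.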


      One of the first potential uses for an XML interface description is simple documentation. Because various pieces are clearly delineated, you could generate an HTML page via XSLT that will clearly indicate the pieces, ordered by functionality. A basic example of this, written in XSLT 2.0, is shown in Listing 17-3 (ViewClass.xsl):


      Listing 17-3. XSLT2 Based Class Viewer (ViewClass.xsl)

      // To Come


      When run against the button class, the ViewClass.xsl transformation generates a web page as shown in Figure 17-1.


      What is perhaps more significant about Button.xml is the fact that you can actually infer a great deal of information from the structure without necessarily needing to incorporate linguistic information into the XML. For instance, consider the <property> elements within the document. In C#, properties use special functions called accessors that, at a bare minimum, assign or retrieve the value of an internal variable. You can actually generate these internal variables from XSLT, as well as the relevant accessor functions.


      For instance, the following XSLT 2.0 fragment creates the local internal variable in C# from the XML property definition for the Text property. If the given property were:



      <property name="Text" type="String" scope="public" accessors="get,set" default=""/>



      the XSLT template to generate the local variable would be given as follows:



      <xsl:template match="property" mode="defineLocal">
            <xsl:value-of select="('private', @type, concat('m_', @name), if (@default) then concat('= &quot;', @default, '&quot;') else '', ';')" separator=" "/>
      </xsl:template>




      The separator attribute within the value-of tells the XSLT processor to use that separator string (in this case a single space) between the text representations of each item in the sequence. If a default attribute is supplied, its value is assigned to the new variable; if not, the declaration is terminated right after the local variable name. For the Text property, the result would be the simple statement:
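The sequence-plus-separator step is easy to see outside of XSLT as well. The following Python sketch (illustrative only, not part of the original column) mimics what the value-of does for one <property>: build the token sequence, then join it with the separator. Note that in XPath the bare test `if (@default)` is true whenever the attribute node exists, even if its value is empty, which is mirrored here by testing for presence rather than truthiness:

```python
# Sketch: the XSLT 2.0 sequence + separator join, mimicked in Python.
import xml.etree.ElementTree as ET

def define_local(prop_xml):
    prop = ET.fromstring(prop_xml)
    default = prop.get("default")
    parts = [
        "private",
        prop.get("type"),
        "m_" + prop.get("name"),
        # Presence of the attribute (even empty) triggers the initializer,
        # just as the attribute node does in the XPath conditional.
        '= "%s"' % default if default is not None else "",
        ";",
    ]
    # Drop empty tokens, then join with the separator (a single space).
    return " ".join(p for p in parts if p)

print(define_local(
    '<property name="Text" type="String" scope="public" accessors="get,set" default=""/>'))
# → private String m_Text = "" ;
```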


      private String m_Text = "";


      Similarly, the properties can be reduced to accessors through another template:
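The accessor template itself is not preserved in this archive copy, so as a hypothetical stand-in (names and bracing style assumed, Python again substituting for XSLT), the kind of get/set expansion such a template would produce might look like:

```python
# Sketch: expand a <property> element into C# get/set accessor text.
import xml.etree.ElementTree as ET

def define_accessors(prop_xml):
    prop = ET.fromstring(prop_xml)
    name, ptype = prop.get("name"), prop.get("type")
    accessors = prop.get("accessors", "").split(",")
    lines = ["public %s %s" % (ptype, name), "{"]
    if "get" in accessors:
        lines.append("      get { return m_%s; }" % name)
    if "set" in accessors:
        lines.append("      set { m_%s = value; }" % name)
    lines.append("}")
    return "\n".join(lines)

print(define_accessors(
    '<property name="Text" type="String" scope="public" accessors="get,set"/>'))
```

Run against the Text property, this emits a public String Text block with the get and set bodies wired to the m_Text local that the previous template declared.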

