
Re: [rest-discuss] Principles for designing RESTful systems to back-up choices?

  • Jan Algermissen
    Sep 2, 2005

      Just found a design principle that could be applied to my problem.

      Sean McGrath wrote[1] about differentiating between system components
      that produce data for general consumption (an RSS feed, for example)
      and system components that produce data with the intent to change the
      behaviour of another system component ('illocution' vs. 'perlocution').

      Design principle: if there is an illocution, consume the data using
      GET; if there is a perlocution, the data-producing component should
      be an actor and POST the data.

      In my example below, the database component has no special intent
      related to publishing the data, thus the data should be consumed by
      the monitoring app (via GET) instead of being POSTed to it.
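      A quick sketch of what that pull-style consumption could look like
      (Python; the SITE URI is from my mail below, while the function names,
      the shape of the extracted host records, and the config line format
      are all made up for illustration):

```python
# Pull-style consumption: the monitoring host issues the GET itself.
# The SITE URI is from the mail below; the config line format and the
# shape of the host records are invented for illustration.
from urllib.request import Request, urlopen

SITE_URI = "http://ex.org/db/items/SITE"

def fetch_site_graph(uri=SITE_URI):
    """GET the site description, asking for RDF/XML."""
    req = Request(uri, headers={"Accept": "application/rdf+xml"})
    with urlopen(req) as resp:
        return resp.read()  # raw RDF/XML bytes; parse e.g. with rdflib

def generate_monitoring_config(hosts):
    """Render host records (dicts with 'name' and 'address' keys, as one
    might extract from the RDF graph) into monitoring config lines."""
    lines = ["host %(name)s address %(address)s" % h
             for h in sorted(hosts, key=lambda h: h["name"])]
    return "\n".join(lines) + "\n"
```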

      [Hmmm... writing this, I wonder whether the illocution/perlocution
      distinction is semantically bound to the data (the message), too,
      and how this relates to the message's self-descriptiveness.]


      [1] http://www.itworld.com/AppDev/1562/nls_ebizutterance060528/

      On Aug 31, 2005, at 5:32 PM, Jan Algermissen wrote:

      > Hi,
      >
      > I am puzzling over a question regarding a system design choice. My
      > problem is that I see at least two options and I cannot base a choice
      > for either one of them on a design principle (and I just hate
      > arbitrary decisions not backed up by a reason). The issue is probably
      > interesting and maybe someone can help me out. Here is the situation:
      > I have a network of several hosts (say C1...C42), a special host
      > M running a network monitoring application, and a host X running a
      > database with configuration information about the hosts C1...C42.
      > The goal is to generate part of the monitoring application's config
      > file (what hosts to monitor, their IP addresses, names, etc.) from
      > the database running on X.
      >
      > The database comes as a Web application and among other things there
      > is a resource for the site as a whole (suppose it is named 'SITE'):
      >
      > http://ex.org/db/items/SITE
      >
      > A GET on this URI with an Accept header of application/rdf+xml will
      > give me an RDF graph including general site information, all the
      > hosts that make up the site, and information about the hosts
      > (addresses, functionality and the like).
      >
      > Now, how do I use that information to generate the monitoring config
      > file?
      > Option 1: Write a script to be started on the monitoring host M that
      > does the following:
      >
      > - GET the site RDF graph
      > - extract the host data from it and generate the file
      > - put the file in the right place
      > - restart the monitoring application
      > - side issue: send errors/debugging information via POST to a
      >   resource that is the foo:error_processor_resource of the site
      >   (we discovered the URI of that resource from the site's RDF
      >   graph, too)
      >
      > There is an OO pattern called 'Transaction Script' which seems to
      > cover option 1 (conceptually; there might be other HTTP requests
      > involved in the script's flow).
      > Option 2: Since the monitoring application is a noun, shouldn't it
      > actually be a resource? As should the config file itself? This
      > option would mean setting up a tiny HTTP server with at least two
      > resources:
      >
      > http://host-M.ex.org/monitoring-app
      > http://host-M.ex.org/monitoring-app/host-config-file
      >
      > The config file resource could be implemented in a way that would
      > allow me to POST the site RDF graph to it and have the rest be
      > managed by the resource itself. After that, I could somehow send a
      > request to the monitoring system resource to have it restart.
      >
      > With this option, logging would (very naturally) be done to a local
      > HTTP access log, which is a pretty common place to look for stuff
      > if something breaks.
      > Let's ignore any issues of the amount of work, system admin's
      > decisions and the like for now. What principles could I use to make
      > a choice for option one or two?
      >
      > What I find interesting in particular is the issue of who actually
      > triggers 1. the start of the script or 2. the sequence of GET on
      > the site URI and POST to the config file URI (a transaction script
      > too, I suppose). In case 1 it can only be some administrator
      > logging in to M and starting the script. With option 2, there are
      > many more possibilities for having other events trigger the config
      > file generation.
      >
      > Hmm... maybe that is the principle already: greater reusability, or
      > being able to plug in filters between the GET and the POST in case
      > there is a semantic change in the RDF graph I GET.
      >
      > Thoughts?
      > Jan
      > ________________________________________________________________________
      > Jan Algermissen, Consultant & Programmer
      > http://jalgermissen.com
      > Tugboat Consulting, 'Applying Web technology to enterprise IT'
      > http://www.tugboat.de
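      Rereading the quoted mail, the tiny HTTP server of Option 2 could
      start out roughly like this (a sketch only: the resource paths are
      the ones from the mail, the handler body is a placeholder, and the
      RDF parsing and restart steps are left as comments):

```python
# Sketch of Option 2's tiny HTTP server on host M. The resource paths
# come from the mail; everything inside the handler is a placeholder.
# With this design the standard access log doubles as the place to look
# when something breaks, as noted in the mail.
from http.server import BaseHTTPRequestHandler, HTTPServer

class MonitoringApp(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path == "/monitoring-app/host-config-file":
            length = int(self.headers.get("Content-Length", 0))
            rdf_xml = self.rfile.read(length)  # the POSTed site RDF graph
            # ... parse the RDF graph, regenerate the config file,
            # and signal the monitoring application to restart ...
            self.send_response(204)  # accepted; nothing to return
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

# To run on host M:
# HTTPServer(("", 8080), MonitoringApp).serve_forever()
```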

      Jan Algermissen, Consultant & Programmer
      Tugboat Consulting, 'Applying Web technology to enterprise IT'