
What are the actors?

  • Jan Algermissen
    Message 1 of 8, Aug 31, 1:04 PM
      Vincent,

      On Aug 31, 2005, at 8:31 PM, Vincent D Murphy wrote:

      > Furthermore, it would seem appropriate for the monitoring process to
      > subscribe to the host database resource,

      I just realized that the general question behind my initial mail is
      how to decide which components of the overall system will be 'actors'
      (components that issue HTTP requests).

      Given all the resources that make up my system, how do I decide where
      to put the application code that actually calls methods on my resources
      (the code that advances through the Web application using hypermedia)?

      Is that an arbitrary decision?

      This touches on another thought I have been having: REST's application
      model is that the user agent advances through the application by
      traversing links (hypermedia as the engine of application state). But
      what is *driving* the user agent? How does the program code that
      apparently gets executed in the traditional sense (driven by some
      CPU's cycles) fit into REST's architectural style? What significance
      do the events have that actually trigger the start of these programs?

      [sorry, can't articulate that better yet and sorry if it's nonsense]


      Jan






      _______________________________________________________________________________
      Jan Algermissen, Consultant & Programmer
      http://jalgermissen.com
      Tugboat Consulting, 'Applying Web technology to enterprise IT'
      http://www.tugboat.de
    • Vincent D Murphy
      Message 2 of 8 , Sep 1, 2005
        On Wed, 31 Aug 2005 20:59:43 +0200, "Jan Algermissen"
        <jalgermissen@...> said:
        [snip]
        > > Why does an intermediate 'config file' have
        > > to be generated?
        >
        > Well, mostly because
        >
        > - there will need to be manual additions

        Shouldn't those manual additions be in the host database? Imagine
        a host database which can be exposed as an HTTP server AND as a
        DHCP server: two different processes, which speak different
        protocols but share the underlying state.

        Is there a need to explicitly repeat this state in the monitoring
        process? Surely it can download a copy on demand.

        > - nagios just needs that file and there is likely no chance to install
        > a modified (with REST interface) nagios

        This reason has little to do with the architecture, you know, and is
        more of a design or implementation detail.

        That said, you could have a startup script for nagios which GETs the
        host database and generates a config file from it, and then starts
        nagios. Perhaps this script could run on HUP/restart as well. This
        could be supplemented with a trigger mechanism which sends nagios a
        SIGHUP on updates to the host database.
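        To make the startup-script idea concrete, here is a minimal Python
        sketch. SITE_URI is the host-database resource named later in this
        thread; the Nagios field names follow its object-configuration
        format, but the config path, extract_hosts() helper and PID handling
        in the comments are hypothetical placeholders.

```python
from urllib.request import Request, urlopen  # used in the sketched GET step

# Hypothetical: the site resource from this thread.
SITE_URI = "http://ex.org/db/items/SITE"

def generate_nagios_config(hosts):
    """Render Nagios 'define host' blocks from (name, address) pairs.

    The pairs would come from parsing the RDF graph GETted from the
    host database; the extraction step is elided here."""
    blocks = []
    for name, address in hosts:
        blocks.append(
            "define host {\n"
            f"    host_name {name}\n"
            f"    address   {address}\n"
            "    use       generic-host\n"
            "}\n"
        )
    return "\n".join(blocks)

# The full startup script would look roughly like:
#   req = Request(SITE_URI, headers={"Accept": "application/rdf+xml"})
#   graph = urlopen(req).read()          # parse with an RDF library
#   hosts = extract_hosts(graph)         # hypothetical extraction helper
#   with open("/etc/nagios/hosts.cfg", "w") as f:
#       f.write(generate_nagios_config(hosts))
#   os.kill(nagios_pid, signal.SIGHUP)   # have nagios re-read its config

print(generate_nagios_config([("C1", "10.0.0.1"), ("C2", "10.0.0.2")]))
```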

        > > I have imagined doing this with a DHCP server and nagios or similar.
        >
        > You mean adding a REST interface?
        >
        > And....what role does the DHCP server play? (sorry to be slow...but this
        > sounds interesting)

        I imagine the network as having abstract elements. I mean abstract in
        the sense that what application protocol they use is not relevant. So
        rather than a DHCP server, you have a 'host database' which manages
        all the relevant state. If you want to modify or read the state you
        can use an HTTP interface; a host can configure itself from the same
        database using DHCP or BOOTP or whatever. In effect DHCP is a
        special-purpose/legacy 'read' application protocol. Of course it
        works with layer 2 (broadcast) addresses, so it has its place
        alongside HTTP. The important thing is how you manage the state. If
        it's in one place, other processes can use it, such as:

        - A logging process, like syslog, can be just an HTTP server that
        gets PUTs or POSTs from other elements on the network, perhaps after
        subscribing to them.

        - A monitoring/watchdog process, like nagios, could just do GETs of
        different URIs if they were exposed on the elements it is interested
        in. For example, UNIX/Windows machines could expose
        http://whatever/interfaces, which is the equivalent of the output of
        ifconfig. Same for routing tables, firewall rules etc.

        - Now imagine exposing these same resources as PUTable. Now this is
        starting to smell like SNMP. Rather than

        ssh whatever
        ifconfig eth0 up

        or the SNMP way, you do

        PUT http://whatever/interfaces/eth0/status

        up

        Maybe we could start a wiki and brainstorm other ideas, if you like
        where this is going.
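        As a rough illustration of such a PUTable resource, here is a sketch
        in Python: the in-memory state, the URI layout and the 'up'/'down'
        vocabulary are assumptions, and a real agent would shell out to
        ifconfig/ip instead of mutating a dict.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

# Hypothetical in-memory interface state; a real agent would query and
# configure the OS (ifconfig / ip link) instead.
INTERFACES = {"eth0": {"status": "down"}}

class InterfaceHandler(BaseHTTPRequestHandler):
    def _interface(self):
        # expected path shape: /interfaces/<name>/status
        parts = self.path.strip("/").split("/")
        if (len(parts) == 3 and parts[0] == "interfaces"
                and parts[1] in INTERFACES and parts[2] == "status"):
            return parts[1]
        return None

    def do_GET(self):
        name = self._interface()
        if name is None:
            self.send_response(404); self.end_headers(); return
        body = INTERFACES[name]["status"].encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

    def do_PUT(self):
        name = self._interface()
        length = int(self.headers.get("Content-Length", 0))
        state = self.rfile.read(length).decode().strip()
        if name is None or state not in ("up", "down"):
            self.send_response(400); self.end_headers(); return
        INTERFACES[name]["status"] = state  # real code: run `ip link set ...`
        self.send_response(204); self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), InterfaceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_address[1]}"

# The moral equivalent of `ssh whatever; ifconfig eth0 up`:
urlopen(Request(f"{base}/interfaces/eth0/status", data=b"up", method="PUT"))

status = urlopen(f"{base}/interfaces/eth0/status").read().decode()
print(status)  # -> up
server.shutdown()
```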

        > > It would be nice if all networking hardware and software was exposed
        > > as resources over HTTP with RDF representations.
        >
        > Yes, it would. OTOH, I am working in an environment where sometimes
        > all you have is bash (do not even think of an HTTP server) and no
        > chance to install any libraries that are not already on the hosts.

        Most machines have perl. HTTP::Daemon should be enough to get
        started. In my experience you will end up exposing the resources
        anyway, through ad-hoc ssh tunnels or whatever, so you might as well
        have a little bit of structure in there from the beginning.

        > Hell...I am even facing the need to sort of 'tunnel' HTTP through
        > email + someone using a floppy to get data from one network to the
        > other, due to restrictive policies. (Not sure whether this is
        > doable at all with clever caching.)

        This may be more difficult than normal HTTP, but not impossible. Even
        if you are using a sneakernet, you are still doing
        REpresentational-State-Transfer. Remember, you can use the REST style
        to guide your choices.

        RFC 822 bodies, or files on a filesystem, are just as much a
        representation as the entity body of an HTTP message. Sometimes it's
        tougher to link them with their URIs or metadata, though (mail
        messages have headers, files can be tar'd with their metadata).
      • Jan Algermissen
        Message 3 of 8 , Sep 2, 2005
          Hi,

          just found a design principle that could be applied to my problem.

          Sean McGrath wrote[1] about differentiating between system components
          that produce data for general consumption (an RSS feed, for example)
          and system components that produce data with the intent to change the
          behaviour of another system component ('illocution' vs. 'perlocution').

          Design principle: if there is an illocution, consume the data using
          GET; if there is a perlocution, the data-producing component should
          be an actor and POST the data.

          In my example below, the database component has no special intent
          related to publishing the data, thus the data should be consumed by
          the monitoring app instead of being POSTed to it.
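          Under that principle, the two cases differ only in who builds and
          sends the request. A small Python sketch (the URIs are the
          hypothetical ones from this thread; only the requests are
          constructed here, nothing is sent):

```python
from urllib.request import Request

# Hypothetical URIs from this thread.
SITE_URI = "http://ex.org/db/items/SITE"
CONFIG_URI = "http://host-M.ex.org/monitoring-app/host-config-file"

def consume_request(uri):
    """Illocution: data published for general consumption, so the
    *consumer* (here the monitoring app) is the actor and GETs it."""
    return Request(uri, headers={"Accept": "application/rdf+xml"})

def push_request(uri, data):
    """Perlocution: the producer intends to change another component's
    behaviour, so the *producer* is the actor and POSTs the data."""
    return Request(uri, data=data, method="POST")

pull = consume_request(SITE_URI)
push = push_request(CONFIG_URI, b"<rdf/>")
print(pull.get_method(), push.get_method())  # GET POST
```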

          [Hmmm....writing this, I wonder whether the illocution/perlocution
          distinction is semantically bound to the data (the message), too,
          and how this relates to the message's self-descriptiveness.]


          Jan



          [1] http://www.itworld.com/AppDev/1562/nls_ebizutterance060528/pfindex.html

          On Aug 31, 2005, at 5:32 PM, Jan Algermissen wrote:

          > Hi,
          >
          > I am puzzling over a question regarding a system design choice. My
          > problem is that I see at least two options and I cannot base a
          > choice for either one of them on a design principle (and I just
          > hate arbitrary decisions not backed up by a reason).
          >
          > The issue is probably interesting and maybe someone can help me
          > out. Here is the situation:
          >
          > I have a network of several hosts (say C1...C42), a special host
          > M running a network monitoring application, and a host X running a
          > database with configuration information about the hosts C1...C42.
          >
          > The goal is to generate part of the monitoring application's config
          > file (what hosts to monitor, their IP addresses, names etc.) from the
          > database running on X.
          >
          > The database comes as a Web application and among others there is
          > a resource for the site (suppose it is named 'SITE') as a whole:
          >
          > http://ex.org/db/items/SITE
          >
          > A GET on this URI with an Accept header of application/rdf+xml will
          > give me an RDF graph including general site information, all the
          > hosts that make up the site, and information about the hosts
          > (addresses, functionality and the like).
          >
          > Now, how do I use that information to generate the monitoring config
          > file?
          >
          > Option 1: Write a script to be started on the monitoring host M
          > that does the following:
          > - GET the site RDF graph
          > - extract the host data from it and generate the file
          > - put the file in the right place
          > - restart the monitoring application
          > - side issue: send errors/debugging information via POST to
          >   a resource that is the foo:error_processor_resource of the
          >   site (we discovered the URI of that resource from the site's
          >   RDF graph, too)
          >
          > There is an OO pattern called 'Transaction Script' which seems
          > to cover option 1 (conceptually, there might be other HTTP
          > requests involved in the script's flow).
          >
          > Option 2: Since the monitoring application is a noun, shouldn't it
          > actually be a resource? As well as the config file itself? This
          > option would mean setting up a tiny HTTP server with at least two
          > resources:
          >
          > http://host-M.ex.org/monitoring-app
          > http://host-M.ex.org/monitoring-app/host-config-file
          >
          > The config file resource could be implemented in a way that would
          > allow me to POST the site RDF graph to it and have the rest be
          > managed by the resource itself. After that, I could somehow send
          > a request to the monitoring system resource to have it restart.
          >
          > With this option, logging would (very naturally) be done to a
          > local HTTP access log, which is a pretty common place to look for
          > stuff if something breaks.
          >
          > Let's ignore any issues of the amount of work, system admins'
          > decisions and the like for now. What principles could I use to
          > make a choice for option one or two?
          >
          > What I find interesting in particular is the issue of who actually
          > triggers 1. the start of the script or 2. the sequence of GET on
          > the site URI and POST to the config file URI (a transaction script
          > too, I suppose). In case 1 it can only be some administrator
          > logging in to M and starting the script. With option two, there
          > are many more possibilities for having other events trigger the
          > config file generation.
          >
          > Hmm...maybe that is the principle already: greater reusability, or
          > being able to plug in filters between the GET and the POST in case
          > there is a semantic change in the RDF graph I GET.
          >
          > Thoughts?
          >
          > Jan

        • Jan Algermissen
          Message 4 of 8 , Sep 5, 2005
            On Sep 1, 2005, at 4:15 PM, Vincent D Murphy wrote:
            >
            > Shouldn't those manual additions be in the host database?
            > Imagine a host database which can be exposed as an HTTP server
            > AND as a DHCP server: two different processes, which speak
            > different protocols but share the underlying state.

            Ok, I understand you now. Good idea (though my scenario is more
            complex and will definitely require manual additions for service
            checks (a Nagios term)).

            And...the IP numbers of the hosts are determined by scripts running
            on the hosts and POSTed to the host database.

            But conceptually I like your idea of letting the two servers share
            the same state.

            >
            > Is there a need to explicitly repeat this state in the monitoring
            > process? Surely it can download a copy on demand.
            >

            On startup, yes. That would be fine I think.

            >
            >> - nagios just needs that file and there is likely no chance to
            >> install
            >> a modified (with REST interface) nagios
            >>
            >
            > This reason has little to do with the architecture, you know,
            > and is more of a design or implementation detail.
            >
            > That said, you could have a startup script for nagios, which GETs the
            > host database and generates a config file from it, and then starts
            > nagios.
            > Perhaps this script could run on HUP/restart as well. This could be
            > supplemented with a trigger mechanism which sends nagios a SIGHUP on
            > updates to the host database.

            Yes, that is sort of what I was thinking (better than touching
            Nagios itself). And yes - it is an implementation detail as long
            as we think 'GET'.

            >
            >
            >>> I have imagined doing this with a DHCP server and nagios or similar.
            >>>
            >>
            >> You mean adding a REST interface?
            >>
            >> And....what role does the DHCP server play? (sorry to be
            >> slow...but this
            >> sounds interesting)
            >>
            >
            > I imagine the network as having abstract elements. I mean
            > abstract in the sense that what application protocol they use
            > is not relevant. So rather than a DHCP server, you have a 'host
            > database' which manages all the relevant state. If you want to
            > modify or read the state you can use an HTTP interface; a host
            > can configure itself from the same database using DHCP or BOOTP
            > or whatever. In effect DHCP is a special-purpose/legacy 'read'
            > application protocol. Of course it works with layer 2
            > (broadcast) addresses, so it has its place alongside HTTP. The
            > important thing is how you manage the state.

            I like 'managing the state' wrt network configurations. Nice!

            > If it's in one place, other processes can use it, such as:
            >
            > - A logging process, like syslog, can be just an HTTP server
            > that gets PUTs or POSTs from other elements on the network,
            > perhaps after subscribing to them.
            >
            > - A monitoring/watchdog process, like nagios, could just do
            > GETs of different URIs if they were exposed on the elements it
            > is interested in. For example, UNIX/Windows machines could
            > expose http://whatever/interfaces, which is the equivalent of
            > the output of ifconfig. Same for routing tables, firewall rules
            > etc.

            Yes, I see. OTOH, in (my current) reality there would not be any
            chance of doing this to most of the machines.

            >
            > - Now imagine exposing these same resources as PUTable. Now
            > this is starting to smell like SNMP. Rather than
            >
            > ssh whatever
            > ifconfig eth0 up
            >
            > or the SNMP way, you do
            >
            > PUT http://whatever/interfaces/eth0/status
            >
            > up
            >
            > Maybe we could start a wiki and brainstorm other ideas, if you
            > like where this is going.

            Would like to do that (or at least join) but I cannot promise any
            significant amount of time from my side. Perhaps a REST wiki page
            ('RESTfulNetworkManagement' or so) would be a good place to start?
            >
            >
            >>> It would be nice if all networking hardware and software was
            >>> exposed as resources over HTTP with RDF representations.
            >>
            >> Yes, it would. OTOH, I am working in an environment where
            >> sometimes all you have is bash (do not even think of an HTTP
            >> server) and no chance to install any libraries that are not
            >> already on the hosts.
            >>
            >
            > Most machines have perl.

            Yak...not in my world. Some only have a shell, and some only have
            Perl 4 - and forget about installing additional Perl libs anyhow.
            (I think this is interesting because it shows that you cannot
            simply assume the availability of HTTP libs and instead might
            have to code something yourself.)

            > HTTP::Daemon should be enough to get started. In my experience
            > you will end up exposing the resources anyway, through ad-hoc
            > ssh tunnels or whatever, so might as well have a little bit of
            > structure in there from the beginning.
            >
            >
            >> Hell...I am even facing the need to sort of 'tunnel' HTTP
            >> through email + someone using a floppy to get data from one
            >> network to the other, due to restrictive policies. (Not sure
            >> whether this is doable at all with clever caching.)
            >>
            >
            > This may be more difficult than normal HTTP, but not
            > impossible. Even if you are using a sneakernet, you are still
            > doing REpresentational-State-Transfer. Remember, you can use
            > the REST style to guide your choices.
            >
            Yes!

            > RFC 822 bodies, or files on a filesystem, are just as much a
            > representation as the entity body of an HTTP message. Sometimes
            > it's tougher to link them with their URIs or metadata, though
            > (mail messages have headers, files can be tar'd with their
            > metadata).

            Good thoughts and I am still digesting, thanks.

            Jan



