
Re: [linux] Re: Configuration management

  • ed
    Message 1 of 10 , Apr 13, 2013
      On Sat, Apr 13, 2013 at 12:27:15AM -0000, thad_floryan wrote:
      > --- In linux@yahoogroups.com, ed <ed@...> wrote:
      > >
      > > Super quick question:
      > >
      > > What do people use for configuration management these days?
      >
      > Hi Ed,
      >
      > Please define 'configuration management' or, better, exactly what you're
      > seeking to do.

      Ah, yes, error 10b, forgetting people cannot read minds.

      There are some programs out there, such as puppet, which I believe (have
      not researched properly) can keep a group/class of computers running the
      same versions of configuration. We have a custom program at $job where
      a perl script distributes config over ssh to hosts which are in a given
      class. Telling it "push everything under /etc" will go through all
      classes that the host belongs to before arriving at a decision of what to
      actually copy to the host.
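
      To give a rough idea of the class concept, something like this (a made-up
      Python sketch, not our actual perl; hostnames, class names and paths are
      invented):

      #!/usr/bin/env python3
      # Sketch of class-based config distribution: each host belongs to
      # one or more classes, and each class owns a directory of files
      # that overlay /etc on the host.
      import subprocess

      # Which classes a host belongs to (example data only).
      HOST_CLASSES = {
          "web01.example.net": ["base", "webserver"],
          "db01.example.net": ["base", "database"],
      }

      def push_etc(host):
          # Later classes win, because rsync copies their files over
          # the ones pushed by earlier classes.
          for cls in HOST_CLASSES.get(host, []):
              src = "config/%s/etc/" % cls
              subprocess.check_call(
                  ["rsync", "-a", "-e", "ssh", src, "root@%s:/etc/" % host])

      if __name__ == "__main__":
          push_etc("web01.example.net")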

      I'm wondering what people use at home and what they use at work to
      do such jobs.

      --
      Best regards,
      Ed http://www.s5h.net/
    • Pascal Bernhard
      Message 2 of 10 , Apr 13, 2013
        <snip>

        > I'm wondering what people use at home and what they use at work
        > to do such jobs.


        As a home user you could have a look at the package 'etckeeper'; it
        should be in the repositories of every major distribution. This package
        keeps your /etc directory under version control with git. You can push
        the history to GitHub if you have an account there, although you could
        also use a different service or have it store versions on a server of
        your choice, in case you have a private server for example.
        It is nice to be able to switch back to previous versions of your
        configuration when something goes wrong. And you can see which changes
        had which effect.
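
        The underlying idea, which etckeeper automates for you (including
        hooking into the package manager so upgrades get committed too), is
        roughly the following; a rough Python illustration, not etckeeper's
        actual code, and the commit message is made up:

        #!/usr/bin/env python3
        # Illustration of the idea etckeeper automates: /etc is a git
        # repository, and every change gets committed so you can roll back
        # or diff configurations later.
        import subprocess

        def commit_etc(message):
            # Assumes a git repository already exists in /etc
            # (etckeeper sets this up when you initialise it).
            subprocess.check_call(["git", "add", "-A"], cwd="/etc")
            subprocess.check_call(["git", "commit", "-m", message], cwd="/etc")

        if __name__ == "__main__":
            commit_etc("before editing sshd_config")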




        --
        Pascal Bernhard

        Schwalbacher Straße 7
        12161 Berlin

        Telefon: 030 / 32 66 58 00
        Mobil: 0152 / 38 50 23 63
      • J
        Message 3 of 10 , Apr 13, 2013
          On Sat, Apr 13, 2013 at 4:53 AM, ed <ed@...> wrote:
          > On Sat, Apr 13, 2013 at 12:27:15AM -0000, thad_floryan wrote:

          >> Hi Ed,
          >>
          >> Please define 'configuration management' or, better, exactly what you're
          >> seeking to do.
          >
          > Ah, yes, error 10b, forgetting people cannot read minds.
          >
          > There are some programs out there, such as puppet, which I believe (have
          > not researched properly) can keep a group/class of computers running the
          > same versions of configuration. We have a custom program at $job where
          > a perl script distributes config over ssh to hosts which are in a given
          > class. Telling it "push everything under /etc" will go through all
          > classes that the host belongs to before arriving at a decision of what to
          > actually copy to the host.
          >
          > I'm wondering what people use at home and what they use at work to
          > do such jobs.


          Yay! So I at least get to keep my first answer: vim

          At least in the sense that I used vim to write a very brief shell
          script that uses rsync. But for home use, I only do what you've
          described for a couple of laptops that I use for work and personal
          stuff. One is a big desktop replacement machine, the other a
          light-weight Thinkpad X201 that I use when I travel. So I have a very
          brief shell script that rsyncs a few select directories and files
          between them before and after a trip. My other systems have vastly
          different roles so they have nothing in common that would require
          syncing. One is a file server/IRC proxy/gateway machine that runs
          24/7, the other is a 1U server I use for development and
          testing/debugging of tools and it is constantly re-installed with
          various versions of Ubuntu Server, Xen, Rackspace Cloud (forget the
          actual name of their product) and some assorted Openstack based
          projects like Devstack.
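
          For what it's worth, the before/after-trip script amounts to roughly
          this sort of thing (a Python rendering of the same idea; the hostname
          and paths here are invented, not my real ones):

          #!/usr/bin/env python3
          # Push a few chosen directories/files to the travel laptop before
          # a trip, pull them back afterwards.  Run from $HOME.
          import subprocess
          import sys

          TRAVEL = "x201.local"                    # hypothetical hostname
          PATHS = ["Documents/", "Projects/", ".vimrc"]

          def sync(direction):
              for path in PATHS:
                  if direction == "push":
                      src, dst = path, "%s:%s" % (TRAVEL, path)
                  else:  # "pull"
                      src, dst = "%s:%s" % (TRAVEL, path), path
                  subprocess.check_call(["rsync", "-a", src, dst])

          if __name__ == "__main__":
              sync(sys.argv[1] if len(sys.argv) > 1 else "push")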

          For actual system config, I also rsync some files in /etc. As for
          package differences, that's where my ad hoc system breaks down. There
          are apt tools out there that can ensure two systems have the same
          packages installed, I've just never used them, mainly because I've
          found that the majority of things I install on the primary system are
          not needed on the secondary, because while I may use the secondary for
          work, I often don't have the same goals during use, so I don't need a
          lot of the development and testing tools I keep on the primary.
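
          (One well-known trick for that, if I ever need it, is to clone the
          dpkg package selections from one box to the other; a rough, untested
          sketch, with an arbitrary filename:)

          #!/usr/bin/env python3
          # Export the package selections on the primary, apply them on the
          # secondary (as root), then let apt install whatever is missing.
          import subprocess
          import sys

          SELECTIONS = "selections.txt"   # arbitrary filename

          def export_selections():
              with open(SELECTIONS, "w") as f:
                  subprocess.check_call(["dpkg", "--get-selections"], stdout=f)

          def apply_selections():
              with open(SELECTIONS) as f:
                  subprocess.check_call(["dpkg", "--set-selections"], stdin=f)
              subprocess.check_call(["apt-get", "-y", "dselect-upgrade"])

          if __name__ == "__main__":
              if len(sys.argv) > 1 and sys.argv[1] == "apply":
                  apply_selections()
              else:
                  export_selections()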

          At work, we have a home-grown system that performs identical network
          installs across systems in the labs. It handles all the bits
          necessary for PXE booting installers and launching pre-seed
          installations and post-install configuration so that every system is
          installed identically.
        • ed
          Message 4 of 10 , Apr 15, 2013
            On Sat, Apr 13, 2013 at 12:26:00PM -0400, J wrote:
            > On Sat, Apr 13, 2013 at 4:53 AM, ed <ed@...> wrote:
            > > On Sat, Apr 13, 2013 at 12:27:15AM -0000, thad_floryan wrote:
            >
            > >> Hi Ed,
            > >>
            > >> Please define 'configuration management' or, better, exactly what you're
            > >> seeking to do.
            > >
            > > Ah, yes, error 10b, forgetting people cannot read minds.
            > >
            > > There are some programs out there, such as puppet, which I believe (have
            > > not researched properly) can keep a group/class of computers running the
            > > same versions of configuration. We have a custom program at $job where
            > > a perl script distributes config over ssh to hosts which are in a given
            > > class. Telling it "push everything under /etc" will go through all
            > > classes that the host belongs to before arriving at a decision of what to
            > > actually copy to the host.
            > >
            > > I'm wondering what people use at home and what they use at work to
            > > do such jobs.
            >
            >
            > Yay! So I at least get to keep my first answer: vim
            >
            > At least in the sense that I used vim to write a very brief shell
            > script that uses rsync. But for home use, I only do what you've
            > described for a couple of laptops that I use for work and personal
            > stuff. One is a big desktop replacement machine, the other a
            > light-weight Thinkpad X201 that I use when I travel. So I have a very
            > brief shell script that rsyncs a few select directories and files
            > between them before and after a trip. My other systems have vastly
            > different roles so they have nothing in common that would require
            > syncing. One is a file server/IRC proxy/gateway machine that runs
            > 24/7, the other is a 1U server I use for development and
            > testing/debugging of tools and it is constantly re-installed with
            > various versions of Ubuntu Server, Xen, Rackspace Cloud (forget the
            > actual name of their product) and some assorted Openstack based
            > projects like Devstack.
            >
            > For actual system config, I also rsync some files in /etc. As for
            > package differences, that's where my ad hoc system breaks down. There
            > are apt tools out there that can ensure two systems have the same
            > packages installed, I've just never used them, mainly because I've
            > found that the majority of things I install on the primary system are
            > not needed on the secondary, because while I may use the secondary for
            > work, I often don't have the same goals during use, so I don't need a lot
            > of the development and testing tools I keep on the primary.

            Similar thing here. I try and store all system changes in a Makefile
            that just gets what I need from package management and copies /etc
            into place. Although of late, UUID partition names get in the way of
            that. Damn things changing under me.

            > At work, we have a home-grown system that performs identical network
            > installs across systems in the labs. It handles all the bits
            > necessary for PXE booting installers and launching pre-seed
            > installations and post-install configuration so that every system is
            > installed identically.

            What happens when your global configuration needs to be updated, say,
            changing /etc/motd and that needs to be copied to all the servers? Or
            perhaps just to the servers in the US?

            Just wondering, would the approach be to reinstall with updated
            configuration from boot server?

            --
            Best regards,
            Ed http://www.s5h.net/
          • J
            Message 5 of 10 , Apr 15, 2013
              On Mon, Apr 15, 2013 at 12:44 PM, ed <ed@...> wrote:
              > On Sat, Apr 13, 2013 at 12:26:00PM -0400, J wrote:
              >> At least in the sense that I used vim to write a very brief shell
              >> script that uses rsync. But for home use, I only do what you've
              >> described for a couple of laptops that I use for work and personal
              >> stuff. One is a big desktop replacement machine, the other a
              >> light-weight Thinkpad X201 that I use when I travel. So I have a very
              >> brief shell script that rsyncs a few select directories and files
              >> between them before and after a trip. My other systems have vastly
              >> different roles so they have nothing in common that would require
              >> syncing. One is a file server/IRC proxy/gateway machine that runs
              >> 24/7, the other is a 1U server I use for development and
              >> testing/debugging of tools and it is constantly re-installed with
              >> various versions of Ubuntu Server, Xen, Rackspace Cloud (forget the
              >> actual name of their product) and some assorted Openstack based
              >> projects like Devstack.
              >>
              >> For actual system config, I also rsync some files in /etc. As for
              >> package differences, that's where my ad hoc system breaks down. There
              >> are apt tools out there that can ensure two systems have the same
              >> packages installed, I've just never used them, mainly because I've
              >> found that the majority of things I install on the primary system are
              >> not needed on the secondary, because while I may use the secondary for
              >> work, I often don't have the same goals during use, so I don't need a lot
              >> of the development and testing tools I keep on the primary.
              >
              > Similar thing here. I try and store all system changes in a Makefile
              > that just gets what I need from package management and copies /etc into
              > place. Although of late, UUID partition names get in the way of that.
              > Damn things changing under me.
              >
              >> At work, we have a home-grown system that performs identical network
              >> installs across systems in the labs. It handles all the bits
              >> necessary for PXE booting installers and launching pre-seed
              >> installations and post-install configuration so that every system is
              >> installed identically.
              >
              > What happens when your global configuration needs to be updated, say,
              > changing /etc/motd and that needs to be copied to all the servers? Or
              > perhaps just to the servers in the US?
              >
              > Just wondering, would the approach be to reinstall with updated
              > configuration from boot server?

              Yes but that rarely happens as the target machines are all test
              systems that could be re-installed multiple times a day.

              So basically, we have 4 servers in 3 labs and 1 datacenter. All the
              config stuff for installing the systems goes into the same source code
              tree on Launchpad via Bazaar (not Git, because it's Ubuntu, eh?).
              Changes are then pulled from Launchpad by the servers that control
              each lab/DC (this is manual, though could be automated via cron, but
              to be honest, we don't have to change things but once a month or so at
              most).

              So the setup isn't quite like yours where, I imagine, the servers
              you're updating are production systems, or at least have longer life
              expectancy than a few hours as ours do. When I install a system on
              one of those satellite servers, that system could remain up and
              running for a month, or only long enough to run a couple quick tests
              before being re-installed for other tasks.

              As for location, as mentioned above, the code that contains all the
              config data, scripts and other things necessary for operation is
              stored on Launchpad.net. Changes that are required for the Taipei
              satellite, for example, would be pushed there, and then the lab
              manager (or whomever) would update the Taipei satellite pulling down
              the latest changes. The other labs wouldn't need to update unless
              changes had been applied to the code base that affect their labs or
              for general bug fix updates to the code base, but that is even less
              frequent than the individual lab updates.

              TBH, it's a somewhat confusing system, and a bit kludgey, as home
              grown systems often are.
            • ed
              Message 6 of 10 , Apr 15, 2013
                On Mon, Apr 15, 2013 at 02:59:49PM -0400, J wrote:
                > On Mon, Apr 15, 2013 at 12:44 PM, ed <ed@...> wrote:
                > > [...]
                > > Just wondering, would the approach be to reinstall with updated
                > > configuration from boot server?
                >
                > Yes but that rarely happens as the target machines are all test
                > systems that could be re-installed multiple times a day.
                >
                > So basically, we have 4 servers in 3 labs and 1 datacenter. All the
                > config stuff for installing the systems goes into the same source code
                > tree on Launchpad via Bazaar (not Git, because it's Ubuntu, eh?).
                > Changes are then pulled from Launchpad by the servers that control
                > each lab/DC (this is manual, though could be automated via cron, but
                > to be honest, we don't have to change things but once a month or so at
                > most).
                >
                > So the setup isn't quite like yours where, I imagine, the servers
                > you're updating are production systems, or at least have longer life
                > expectancy than a few hours as ours do. When I install a system on
                > one of those satellite servers, that system could remain up and
                > running for a month, or only long enough to run a couple quick tests
                > before being re-installed for other tasks.

                We have around 1000 or so systems in different DCs around the globe.
                Well, some in NY, some in Amsterdam, some in Sydney, some in Singapore,
                the vast majority in UK DCs.

                Mostly web servers with yearly uptimes, one box has an uptime of 15
                years I think... can't remember without logging back in and I'm not
                going to do the logging-back-into-work-and-check-mail dance again
                tonight.

                I hadn't thought of a situation where boxes would have such short life
                spans that configuration of the host could be carried out through rebuilds.

                One of Sun/Oracle's training centres in the UK had some labs which were
                rebuilt daily when classes finished, come to think of it.

                > As for location, as mentioned above, the code that contains all the
                > config data, scripts and other things necessary for operation is
                > stored on Launchpad.net. Changes that are required for the Taipei
                > satellite, for example, would be pushed there, and then the lab
                > manager (or whomever) would update the Taipei satellite pulling down
                > the latest changes. The other labs wouldn't need to update unless
                > changes had been applied to the code base that affect their labs or
                > for general bug fix updates to the code base, but that is even less
                > frequent than the individual lab updates.
                >
                > TBH, it's a somewhat confusing system, and a bit kludgey, as home
                > grown systems often are.

                The department I'm working in (Internet Operations) has been around
                since 1996 or earlier, so the configuration for the hosts seems peculiar
                to outsiders who use industry-standard tools such as puppet. I was kind
                of hoping someone might describe how they use puppet in their workplace
                or at home.

                --
                Best regards,
                Ed http://www.s5h.net/