
Re: [solarisx86] s10u7

  • Ian Collins
    Message 1 of 5, May 1 1:00 AM
      dick hoogendijk wrote:
      > I wonder...
      > I have a mirrored Solaris 10u6 system running off a ZFS root with three
      > non-global zones installed.
      >
      > Do I need to halt these zones (as advised with UFS) if I lucreate a new
      > BE for upgrading? Or can I just lucreate a new BE, update that one, and
      > do a luactivate on the running system?
      >
      > Does anyone have some advice on this?

      I often do this with Nevada builds (I've never seen the advice to halt
      zones!).

      --
      Ian.
    • Bob Netherton
      Message 2 of 5, May 1 6:57 AM
        dick hoogendijk wrote:
        > Do I need to halt these zones (as advised with UFS) if I lucreate a
        > new BE for upgrading?

        Under what circumstances did you receive the advice to halt the
        zones? In general that is not required, although there are some
        corner cases where LU can get a bit aggressive in copying zone data
        (NFS mounts is one that I recall). But in general you should not have
        to stop your zones to do the maintenance - that defeats the purpose
        of LU.

        What you can't do is change zone state while LU is running (lucreate,
        lumake, or luupgrade). If the zones are down they need to stay down.
        If they are running, they need to stay running. Fortunately, ZFS
        clone-based lucreates run pretty quickly.
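
        A cheap way to be defensive about that rule is a before/after
        snapshot of zone states (a sketch; the temp file names here are
        arbitrary, not part of LU itself):

        # zoneadm list -cv > /var/tmp/zones.before
        ... run lucreate / luupgrade / luactivate ...
        # zoneadm list -cv > /var/tmp/zones.after
        # diff /var/tmp/zones.before /var/tmp/zones.after \
            && echo "zone states unchanged"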
        > Or can I just lucreate a new BE, update that one, and do a
        > luactivate on the running system?
        >
        > Does anyone have some advice on this?

        Just make sure that you are up to date on your patching, packaging,
        and LU patches. I just did this from the 4/27 Recommended patch
        cluster. Hopefully there will be an update to the LU infodoc soon,
        but in the meantime the latest Recommended patch cluster would
        probably be a good place to be.
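
        A quick pre-flight sketch for that advice (the patch IDs and media
        path below are from memory - verify them against the current LU
        infodoc before relying on them):

        # showrev -p | egrep '12143[01]'   (121430 = SPARC, 121431 = x86 LU patch)

        The usual companion step is to replace the LU packages with the ones
        from the release you are upgrading to:

        # pkgrm SUNWlucfg SUNWluu SUNWlur
        # pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWlucfg SUNWlur SUNWluu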



        # zoneadm list -cv
          ID NAME      STATUS     PATH                  BRAND   IP
           0 global    running    /                     native  shared
          51 web1      running    /zones/web1           native  shared
          52 web2      running    /zones/web2           native  shared
          53 webstack  running    /zones/webstack       native  shared
          54 mysql     running    /zones/mysql          native  shared
          56 test      running    /archives/zones/test  native  shared
          57 webmin    running    /zones/webmin         native  shared
           - baseline  installed  /zones/baseline       native  shared



        So you can see that I have most of my zones running. One of the
        zones is on UFS; the rest are on ZFS in the root pool in a separate
        dataset.

        # lucreate -n scooby
        Checking GRUB menu...
        System has findroot enabled GRUB
        Analyzing system configuration.
        Comparing source boot environment <s10u7-baseline> file systems with the file system(s) you specified for the new boot environment. Determining which file systems should be in the new boot environment.
        Updating boot environment description database on all BEs.
        Updating system configuration files.
        Creating configuration for boot environment <scooby>.
        Source boot environment is <s10u7-baseline>.
        Creating boot environment <scooby>.
        Cloning file systems from boot environment <s10u7-baseline> to create boot environment <scooby>.
        Creating snapshot for <rpool/ROOT/s10u7-baseline> on <rpool/ROOT/s10u7-baseline@scooby>.
        Creating clone for <rpool/ROOT/s10u7-baseline@scooby> on <rpool/ROOT/scooby>.
        Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/scooby>.
        Creating snapshot for <rpool/zones/web1> on <rpool/zones/web1@scooby>.
        Creating clone for <rpool/zones/web1@scooby> on <rpool/zones/web1-scooby>.
        Creating snapshot for <rpool/zones/web2> on <rpool/zones/web2@scooby>.
        Creating clone for <rpool/zones/web2@scooby> on <rpool/zones/web2-scooby>.
        Creating snapshot for <rpool/zones/webstack> on <rpool/zones/webstack@scooby>.
        Creating clone for <rpool/zones/webstack@scooby> on <rpool/zones/webstack-scooby>.
        Creating snapshot for <rpool/zones/mysql> on <rpool/zones/mysql@scooby>.
        Creating clone for <rpool/zones/mysql@scooby> on <rpool/zones/mysql-scooby>.
        Creating dataset <rpool/ROOT/scooby/zoneds/test-scooby> for zone <test>
        Copying root of zone <test>.
        Creating snapshot for <rpool/zones/webmin> on <rpool/zones/webmin@scooby>.
        Creating clone for <rpool/zones/webmin@scooby> on <rpool/zones/webmin-scooby>.
        Creating snapshot for <rpool/zones/baseline> on <rpool/zones/baseline@scooby>.
        Creating clone for <rpool/zones/baseline@scooby> on <rpool/zones/baseline-scooby>.
        Saving existing file </boot/grub/menu.lst> in top level dataset for BE <s10u6_baseline> as <mount-point>//boot/grub/menu.lst.prev.
        Saving existing file </boot/grub/menu.lst> in top level dataset for BE <test> as <mount-point>//boot/grub/menu.lst.prev.
        Saving existing file </boot/grub/menu.lst> in top level dataset for BE <scooby> as <mount-point>//boot/grub/menu.lst.prev.
        Saving existing file </boot/grub/menu.lst> in top level dataset for BE <route66> as <mount-point>//boot/grub/menu.lst.prev.
        Saving existing file </boot/grub/menu.lst> in top level dataset for BE <nv95> as <mount-point>//boot/grub/menu.lst.prev.
        Saving existing file </boot/grub/menu.lst> in top level dataset for BE <nv112> as <mount-point>//boot/grub/menu.lst.prev.
        File </boot/grub/menu.lst> propagation successful
        Copied GRUB menu from PBE to ABE
        No entry for BE <scooby> in GRUB menu
        Population of boot environment <scooby> successful.
        Creation of boot environment <scooby> successful.
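
        At this point lustatus should show the clone as complete but not yet
        active - a sketch of what to expect, trimmed to the two relevant BEs:

        # lustatus
        Boot Environment           Is       Active Active    Can    Copy
        Name                       Complete Now    On Reboot Delete Status
        -------------------------- -------- ------ --------- ------ ------
        s10u7-baseline             yes      yes    yes       no     -
        scooby                     yes      no     no        yes    -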


        and......

        # luupgrade -t -s /export/patches/10x_Recommended-2009-04-27 -n scooby

        you really don't want to see this output, do you :-)
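
        (If you do want a look before activating, the patched ABE can be
        mounted and inspected; the mount point here is arbitrary.)

        # lumount scooby /mnt
        /mnt
        # ls /mnt/var/sadm/patch | tail   (freshly applied patches land here)
        # luumount scooby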



        # luactivate scooby
        System has findroot enabled GRUB
        Generating boot-sign, partition and slice information for PBE <s10u7-baseline>
        A Live Upgrade Sync operation will be performed on startup of boot environment <scooby>.

        Generating boot-sign for ABE <scooby>
        Saving existing file </etc/bootsign> in top level dataset for BE <scooby> as <mount-point>//etc/bootsign.prev.
        Generating partition and slice information for ABE <scooby>
        Copied boot menu from top level dataset.
        Generating multiboot menu entries for PBE.
        Generating multiboot menu entries for ABE.
        Disabling splashimage
        Re-enabling splashimage
        No more bootadm entries. Deletion of bootadm entries is complete.
        GRUB menu default setting is unaffected
        Done eliding bootadm entries.

        <blah>

        Modifying boot archive service
        Propagating findroot GRUB for menu conversion.
        File </etc/lu/installgrub.findroot> propagation successful
        File </etc/lu/stage1.findroot> propagation successful
        File </etc/lu/stage2.findroot> propagation successful
        File </etc/lu/GRUB_capability> propagation successful
        Deleting stale GRUB loader from all BEs.
        File </etc/lu/installgrub.latest> deletion successful
        File </etc/lu/stage1.latest> deletion successful
        File </etc/lu/stage2.latest> deletion successful
        Activation of boot environment <scooby> successful.

        # init 0


        And we are done.



        Bob
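
        Two hedged footnotes to that last step: LU runs its sync scripts
        from the shutdown/boot path, so always shut down with init or
        shutdown, never reboot or halt, and keep the old BE around until you
        trust the new one. A sketch:

        # init 6                       (or init 0 and power on, as above)

        ...after booting into scooby...

        # lustatus                     (scooby should now be active)
        # luactivate s10u7-baseline    (fallback path if something is wrong)
        # ludelete <old-BE>            (only once satisfied; frees the clones)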
      • dick hoogendijk
        Message 3 of 5, May 1 12:16 PM
          On Fri, 01 May 2009 08:57:59 -0500
          Bob Netherton <progbob@...> wrote:

          > dick hoogendijk wrote:
          > > Do I need to halt these zones (as advised with UFS) if I lucreate
          > > a new BE for upgrading?
          >
          > Under what circumstances did you receive the advice to halt the zones

          I've read it somewhere on SunSolve. Can't remember when/where ;-)

          > So you can see that I have most of my zones running. One of the
          > zones is on UFS; the rest are on ZFS in the root pool in a
          > separate dataset.

          I upgraded from u6 -> u7 just now. It went very, very smoothly. At
          last! I've never had an upgrade with zones go so well.

          Two things:
          [1] A freshly installed u7 system already has 14 NEW patches.
          I'm downloading them now (with pca) :)
          [2] I -HATE- that my sendmail.cf files get overwritten every time.
          Why in heaven's name does Sun do that? :-(

          For the rest: applause.

          --
          Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
          + http://nagual.nl/ | nevada / opensolaris sharing the same ROOT pool
          + All that's really worth doing is what we do for others (Lewis Carroll)
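
        Two sketches for Dick's two points, both hedged - exact pca options
        and sendmail paths vary by version, so treat these as assumptions to
        verify on your own system:

        [1] pca (Patch Check Advanced) can list, download, and install the
        missing patches:

        # pca missing        (list patches not yet installed)
        # pca -d missing     (download only)
        # pca -i missing     (download and install)

        [2] For sendmail.cf, one common defense is to keep your own .mc
        source and regenerate the .cf after every upgrade instead of
        hand-editing the generated file (paths assume the stock Solaris 10
        layout under /etc/mail/cf):

        # cd /etc/mail/cf/cf
        # cp sendmail.mc myhost.mc    (put local changes in myhost.mc)
        # m4 ../m4/cf.m4 myhost.mc > /etc/mail/sendmail.cf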