
Re: Running on idle systems

  • Michael Tokarev
    Message 1 of 23, May 3, 2012
      On 03.05.2012 17:16, Stan Hoeppner wrote:
      []
      > To who at Debian? Lamont Jones? Has he replied to your idiotic idea yet?

      Please refrain from using such words in a public forum.
      Such usage only makes you look like that kind of person yourself.

      >> Thank you for making my worst nightmares come true. I will do
      >> my best to prevent this from happening, and if I find out that
      >> they do it anyway, then I will raise hell and it won't be pretty.
      >
      > All of this nonsense because one guy on the planet feels he can't simply
      > use an MUA with submission like everyone else does, but demands he be
      > able to run an MTA on his damn desktop/laptop, and demands the default
      > MTA config allows him to do what he wants seamlessly, possibly to the
      > detriment of others, mainly the guy who wrote this MTA for your use in
      > the first place. At least that's my read of this thread.

      Your read is incorrect. The world is much larger than your imagination.

      Thanks,

      /mjt
    • Wietse Venema
      Message 2 of 23, May 3, 2012
        I already thanked Michael for his contributions in private email.

        Michael, does editing master.cf and s/fifo/unix/ solve the mtime
        file system updates problem? This is already supported by existing
        code, works on Linux and *BSD, and I can make a config parameter
        that makes this configurable with system-dependent defaults.
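
        For reference, against the usual stock master.cf entries that edit
        would look something like this (the column values shown are common
        defaults and may differ per installation; the premise is that
        writes to a unix-domain socket, unlike writes to a FIFO, do not
        update the inode's mtime):

            # service  type  private unpriv chroot wakeup maxproc command
            pickup     fifo  n       -      n      60     1       pickup
            qmgr       fifo  n       -      n      300    1       qmgr

        becomes

            pickup     unix  n       -      n      60     1       pickup
            qmgr       unix  n       -      n      300    1       qmgr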

        If so, then we can avoid controversial changes, such as support for
        mounting over a critical Postfix directory without Postfix's knowledge
        of such things happening.

        Wietse
      • Stan Hoeppner
        Message 3 of 23, May 3, 2012
          On 5/3/2012 8:48 AM, Michael Tokarev wrote:
          > On 03.05.2012 17:16, Stan Hoeppner wrote:
          > []
          >> To who at Debian? Lamont Jones? Has he replied to your idiotic idea yet?
          >
          > Please refrain from using such words in a public forum.
          > Such usage only makes you look like that kind of person yourself.

          My apologies for allowing my passion to transform into abrasiveness.

          >>> Thank you for making my worst nightmares come true. I will do
          >>> my best to prevent this from happening, and if I find out that
          >>> they do it anyway, then I will raise hell and it won't be pretty.
          >>
          >> All of this nonsense because one guy on the planet feels he can't simply
          >> use an MUA with submission like everyone else does, but demands he be
          >> able to run an MTA on his damn desktop/laptop, and demands the default
          >> MTA config allows him to do what he wants seamlessly, possibly to the
          >> detriment of others, mainly the guy who wrote this MTA for your use in
          >> the first place. At least that's my read of this thread.
          >
          > Your read is incorrect. The world is much larger than your imagination.

          Please (re)explain the use case you have in mind. It seemed to me the
          changes you're proposing will have a positive effect, immediately
          anyway, for only a very small subset of Postfix users, for a niche
          configuration.

          This request seems very similar to one made on the XFS list not all that
          long ago. A user with a home theater PC and a single large WD Green
          drive was irked that the drive wouldn't stay asleep for more than 30
          seconds. He debugged it himself, and found a long-standing XFS behavior
          of accessing the journal or filesystem superblock every 30s IIRC. He
          said this wasn't necessary and pleaded with the devs to change this
          behavior, just so his HTPC drive could sleep. XFS was never intended
          for such a setup; this behavior has existed since ~1994/95. The average
          XFS setup is a server with a dozen to a few hundred or more drives in
          hardware RAID running 24x7--no sleeping. An SGI employee mentioned just
          a couple of weeks ago working with a single XFS filesystem spanning 600
          drives in an IS16000 array. Not your average XFS drive count, but it is
          a typical large XFS configuration, and quite a contrast from a single
          drive HTPC server in a living room.

          IIRC a patch was eventually developed after many months, when it was
          determined there was likely no downside, and mainlined after much
          regression testing and tweaking. All for the benefit of very very few
          non-typical XFS users.

          Anyway, I see this as a similar case, and a similar waste of resources
          expended for the benefit of very few users, when there is nothing
          inherently "wrong" with the current Postfix implementation, as far as I
          understand the request. Maybe I simply don't fully understand the issue
          and the potential benefits yet.

          --
          Stan
        • john
          Message 4 of 23, May 3, 2012
            I do not see where Stan was abusive.
            Abrasive maybe, but then sometimes bumps on logs need sanding down,
            and this would appear to be one of those occasions.

            On 03/05/2012 11:29 AM, Stan Hoeppner wrote:
            > On 5/3/2012 8:48 AM, Michael Tokarev wrote:
            >> On 03.05.2012 17:16, Stan Hoeppner wrote:
            >> []
            >>> To who at Debian? Lamont Jones? Has he replied to your idiotic idea yet?
            >> Please refrain from using such words in a public forum.
            >> Such usage only makes you look like that kind of person yourself.
            > My apologies for allowing my passion to transform into abrasiveness.
            >
            >>>> Thank you for making my worst nightmares come true. I will do
            >>>> my best to prevent this from happening, and if I find out that
            >>>> they do it anyway, then I will raise hell and it won't be pretty.
            >>> All of this nonsense because one guy on the planet feels he can't simply
            >>> use an MUA with submission like everyone else does, but demands he be
            >>> able to run an MTA on his damn desktop/laptop, and demands the default
            >>> MTA config allows him to do what he wants seamlessly, possibly to the
            >>> detriment of others, mainly the guy who wrote this MTA for your use in
            >>> the first place. At least that's my read of this thread.
            >> Your read is incorrect. The world is much larger than your imagination.
            > Please (re)explain the use case you have in mind. It seemed to me the
            > changes you're proposing will have a positive effect, immediately
            > anyway, for only a very small subset of Postfix users, for a niche
            > configuration.
            >
            > This request seems very similar to one made on the XFS list not all that
            > long ago. A user with a home theater PC and a single large WD Green
            > drive was irked that the drive wouldn't stay asleep for more than 30
            > seconds. He debugged it himself, and found a long-standing XFS behavior
            > of accessing the journal or filesystem superblock every 30s IIRC. He
            > said this wasn't necessary and pleaded with the devs to change this
            > behavior, just so his HTPC drive could sleep. XFS was never intended
            > for such a setup; this behavior has existed since ~1994/95. The average
            > XFS setup is a server with a dozen to a few hundred or more drives in
            > hardware RAID running 24x7--no sleeping. An SGI employee mentioned just
            > a couple of weeks ago working with a single XFS filesystem spanning 600
            > drives in an IS16000 array. Not your average XFS drive count, but it is
            > a typical large XFS configuration, and quite a contrast from a single
            > drive HTPC server in a living room.
            >
            > IIRC a patch was eventually developed after many months, when it was
            > determined there was likely no downside, and mainlined after much
            > regression testing and tweaking. All for the benefit of very very few
            > non-typical XFS users.
            >
            > Anyway, I see this as a similar case, and a similar waste of resources
            > expended for the benefit of very few users, when there is nothing
            > inherently "wrong" with the current Postfix implementation, as far as I
            > understand the request. Maybe I simply don't fully understand the issue
            > and the potential benefits yet.
            >
          • Stan Hoeppner
            Message 5 of 23, May 4, 2012
              On 5/3/2012 6:54 PM, Bill Cole wrote:
              ...
              > For many of these systems,
              > the OS resides on a mirrored pair of local disks which see very
              > infrequent writes because every filesystem with significant flux is
              > physically resident across the SAN. Spinning disks draw power. Anything
              > drawing power generates heat. Heat requires cooling. Cooling typically
              > requires more power than the devices it is compensating for. Cooling
              > also requires careful attention to the details of physical server
              > density and rack design and so on...

              This could be completely resolved by PXE/bootp and NFS-mounted root
              filesystems, and save you $200-500/node in disk drive costs after
              spending $1000-2000 for the NFS server hardware, or nothing using a VM
              server. It would also save you substantial admin time by using
              templates for new node deployments. This diskless node methodology has
              been around for ~30 years.
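
              A minimal sketch of that architecture, assuming ISC dhcpd,
              pxelinux, and a Linux kernel built with NFS-root support (all
              addresses and paths below are illustrative):

                  # dhcpd.conf on the boot server: point PXE clients at the
                  # TFTP server and bootloader
                  subnet 192.168.0.0 netmask 255.255.255.0 {
                      range 192.168.0.100 192.168.0.200;
                      next-server 192.168.0.1;
                      filename "pxelinux.0";
                  }

                  # /etc/exports: one read-only root image shared by all nodes
                  /srv/netboot/rootfs 192.168.0.0/24(ro,no_root_squash)

                  # pxelinux.cfg/default (fragment): root mounted over NFS
                  LABEL linux
                      KERNEL vmlinuz
                      APPEND root=/dev/nfs nfsroot=192.168.0.1:/srv/netboot/rootfs ip=dhcp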

              > A local mail submission and trivial outbound transport subsystem is a
              > normal feature of any Unix-like machine. To operate robustly, it needs a
              > queueing and retry mechanism. It is helpful for environments with power
              > and cooling concerns if a mechanical disk (or worse: a mirrored pair of
              > disks) isn't forced to spin up every time that mechanism activates.
              > Every little wattage savings is useful, and avoiding truly pointless
              > disk writes is never a bad thing.

              SSD is a perfect solution here for non-netboot machines. And
              right now small SSDs are less expensive than their rusty disk
              counterparts. If one is truly concerned about spurious spin-ups
              eating power and generating heat, I would think one would not go
              after the software stack in a piecemeal fashion to solve the
              problem. The MTA isn't the only software waking the disk; the
              kernel will write logs far more often in many/most situations.
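
              On Linux this is easy to verify with the kernel's block_dump
              knob (assuming a kernel that provides it; stop syslog first,
              or it will busily log its own logging):

                  # echo 1 > /proc/sys/vm/block_dump
                  # sleep 60; dmesg | tail    # "program(pid): dirtied inode ..." lines
                  # echo 0 > /proc/sys/vm/block_dump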

              > Well, beyond the data center environment there is also a very widespread
              > deployment of Postfix as the legacy mail subsystem on MacOS personal
              > machines, where the mail flow is typically extremely low.
              ...
              > Ultimately the result is having to choose
              > between power management and timely delivery. If the periodic wakeups
              > didn't force a disk write, it would be less onerous to let master run in
              > its normal persistent mode for a lot of Postfix users (many of whom may
              > not even be aware that they are Postfix users.)

              This is only true if two things persist into the future:

              1. Postfix isn't modified in order to perform a power management role
              2. Laptops will forever have spinning rust storage

              Addressing the first point, should it be the responsibility of
              application software to directly address power management concerns? Or
              should this be left to the OS and hardware platform/BIOS?

              Addressing the second, within a few years all new laptops will
              ship with SSD instead of SRD specifically to address battery
              run-time issues. Many are shipping now with SSDs. All netbooks
              already do, and smartphones use other flash types.

              > Whether it is actually worthwhile to make a change that is only
              > significant for people who are barely using Postfix isn't a judgment I
              > can make. It's obvious that Dr. Venema takes significantly more care
              > with his code than I can really relate to, so I don't really know what
              > effort a conceptually small change in Postfix really entails.

              Wietse will make his own decisions as he always has.

              I'm simply making the point that issues such as power/cooling,
              wake/sleep, etc. should be addressed at the hardware platform/OS
              level or the system or network architecture level, not at the
              application level, especially if the effort to implement it is
              more than trivial.

              This is especially true when any such coding effort may only produce
              very short term gains, as these issues are already being addressed and
              will be completely resolved by other means (SSD) in the near
              future, or have already been resolved by 30-year-old
              technology/architecture methods (netboot/NFS), depending on the platform
              scenario.

              --
              Stan
            • Bill Cole
              Message 6 of 23, May 4, 2012
                On 4 May 2012, at 17:00, Stan Hoeppner wrote:

                > On 5/3/2012 6:54 PM, Bill Cole wrote:
                > ...
                >> For many of these systems,
                >> the OS resides on a mirrored pair of local disks which see very
                >> infrequent writes because every filesystem with significant flux is
                >> physically resident across the SAN. Spinning disks draw power.
                >> Anything
                >> drawing power generates heat. Heat requires cooling. Cooling
                >> typically
                >> requires more power than the devices it is compensating for. Cooling
                >> also requires careful attention to the details of physical server
                >> density and rack design and so on...
                >
                > This could be completely resolved by PXE/bootp and NFS-mounted root
                > filesystems, and save you $200-500/node in disk drive costs after
                > spending $1000-2000 for the NFS server hardware, or nothing using a VM
                > server. It would also save you substantial admin time by using
                > templates for new node deployments. This diskless node methodology
                > has
                > been around for ~30 years.

                Yes, it is possible to fundamentally re-architect working environments
                that have been "organically" developed over years by adding significant
                new infrastructure to save on capital costs of hypothetical growth and
                maybe on future admin time. The idea that a server in the $1000-$2000
                range would be part of a global conversion to diskless servers or even
                the largest capital cost of such a project reveals that I failed to
                communicate an accurate understanding of the environment, but that's not
                terribly important. There's no shortage of well-informed well-developed
                specific proposals for comprehensive infrastructure overhaul, and in the
                interim between now and the distant never when one of those meets up
                with a winning lottery ticket and an unutilized skilled head or three, I
                have sufficient workarounds in place.

                I didn't mention that environment because I was seeking a
                solution, but rather to point out that there are real-world
                systems that take advantage of the
                power management capabilities of modern disks and have nothing else in
                common with the average personal system. I think that was responsive to
                the paragraph of yours that I originally quoted. It's easy to come up
                with flippant advice for others to spend time and money to replace
                stable working systems, but it is also irrelevant and a bit rude.

                [...]
                >> Ultimately the result is having to choose
                >> between power management and timely delivery. If the periodic wakeups
                >> didn't force a disk write, it would be less onerous to let master run
                >> in
                >> its normal persistent mode for a lot of Postfix users (many of whom
                >> may
                >> not even be aware that they are Postfix users.)
                >
                > This is only true if two things persist into the future:
                >
                > 1. Postfix isn't modified in order to perform a power management role

                No reason for it to "perform" but it would be nice for it to "stop
                thwarting."

                > 2. Laptops will forever have spinning rust storage

                Who said anything about laptops?

                > Addressing the first point, should it be the responsibility of
                > application software to directly address power management concerns?
                > Or
                > should this be left to the OS and hardware platform/BIOS?

                Applications should not do things that are actively hostile to
                housekeeping functions of lower-level software (in this case: drive
                firmware) without a functional justification. It's not wrong for a
                filesystem to change the mtime on a pipe with every write to it, nor is
                it wrong for a filesystem to commit every change in a timely manner.
                This is not really fixable at a lower level without eliminating the
                hardware in question or making changes to filesystem software that could
                cause wide-ranging problems with other software.
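
                The FIFO half of that is easy to demonstrate in isolation
                (a shell sketch; the path is arbitrary):

                    $ mkfifo /tmp/wakeup.fifo
                    $ stat -c '%y' /tmp/wakeup.fifo  # note the mtime
                    $ cat /tmp/wakeup.fifo >/dev/null &  # a reader, so the write won't block
                    $ echo wakeup > /tmp/wakeup.fifo
                    $ stat -c '%y' /tmp/wakeup.fifo  # mtime has advanced; the fs must commit it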

                > Addressing the second, within a few years all new laptops
                > will ship with SSD instead of SRD specifically to address
                > battery run-time issues. Many are shipping now with SSDs.
                > All netbooks already do, and smartphones use other flash
                > types.

                This is not about laptops. Really.

                Systems can live a long time without drive replacements. Spinning rust
                with power management firmware is not going to be rare in running
                systems until at least 5 years after dependable & fast SSDs
                hit $1/GB for devices larger than 100GB. Of course, those
                drives may die a lot
                faster where applications do periodic pointless writes that keep them
                running continuously.

                Note that the reason this issue exists *AT ALL* is to work around a bug
                in Solaris 2.4. I have spent most of the last 14 years working
                on Solaris systems in change-averse places, and the last time
                I saw Solaris
                2.4 was 1999. I don't have the details of the bug or the free time to
                rig up a test system to prove it gone in whatever version Postfix needs
                to work on today, but I have no gripe with that relatively ancient and
                *likely* inoperative history being the blocking issue. I hope someone
                else can settle the issue. An argument that time will soon make this fix
                pointless is a bit ironic.


                >> Whether it is actually worthwhile to make a change that is only
                >> significant for people who are barely using Postfix isn't a judgment
                >> I
                >> can make. It's obvious that Dr. Venema takes significantly more care
                >> with his code than I can really relate to, so I don't really know
                >> what
                >> effort a conceptually small change in Postfix really entails.
                >
                > Wietse will make his own decisions as he always has.
                >
                > I'm simply making the point that issues such as
                > power/cooling, wake/sleep, etc. should be addressed at the
                > hardware platform/OS level or the system or network
                > architecture level, not at the application level,
                > especially if the effort to implement it is more than
                > trivial.

                See his discussion of the details. The code exists; what
                remains is the harder work of testing and getting all the
                defaults right.


                P.S.: Note that I have respected your Reply-To header. Please
                return that courtesy.
              • Reindl Harald
                Message 7 of 23, May 4, 2012
                  On 05.05.2012 03:05, Bill Cole wrote:
                  > Systems can live a long time without drive replacements.

                  But only if you do not constantly spin them up and down;
                  power management is the death of a drive.

                  I have disks here with more than 35,000 hours of uptime;
                  you can be sure that with "power management" they would
                  long since be dead.

                  Try it out: spin down a drive that has been running for
                  some years and there is a very good chance that the next
                  spin-up will be its last.

                  > Spinning rust with power management firmware is not going
                  > to be rare in running systems until at least 5 years

                  and it should be the first thing disabled in the real world.

                  The power you save by spinning down a disk is negligible;
                  the power it takes to produce a new drive, because yours
                  died from constant spin-up/spin-down cycles, is much higher,
                  as is the cost of the new drive, compared with just letting
                  it run.
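
                  You can check what your own drives have been through, and
                  switch the spindown timer off, with standard Linux tools
                  (attribute names vary by vendor; /dev/sda is an example):

                      # smartctl -A /dev/sda | egrep 'Power_On|Start_Stop|Load_Cycle'
                      # hdparm -S 0 /dev/sda    # spindown timeout 0 = never spin down
                      # hdparm -B 254 /dev/sda  # highest APM level short of disabling APM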