verify db with mysql

  • Stefan Jakobs, Message 1 of 25, Jun 13, 2010
      Hello list,

      I refer to my question of August 2008
      (http://archives.neohapsis.com/archives/postfix/2008-08/0747.html; see
      below).

      What are the necessary steps to add update support to the mysql client
      (Postfix 2.5.6 or newer)?

      Has someone already done this and is willing to share the code?

      Thanks for your help and kind regards
      Stefan


      Wietse wrote on August 22nd 2008:
      > Stefan Jakobs:
      > > Hello list,
      > >
      > > I use Postfix 2.4.3 on two (actually four, but let's assume two)
      > > mailgateways. Both do recipient verification and cache the results with
      > > the address_verify_map (verify.db). Sometimes it happens that a foreign
      > > server tries to deliver a message and gets a 4xx response from one of my
      > > servers because the recipient verification doesn't finish in time. Then it
      > > waits some time and tries the other one. From that one it gets a 4xx
      > > response, too, because the second server doesn't know that the first
      > > server has already done the recipient verification and does it again by
      > > itself. This process delays the delivery of a message and I would like
      > > to avoid that.
      > > My idea: Is it possible to use one verify.db, let's say on a NFS share,
      > > with two (or more) postfix servers? Or will it produce problems with
      > > accessing the file?
      >
      > You can't have more than one process writing to a Berkeley DB verify
      > database, not even when they run on the same machine.
      >
      > If you want to share the file with more than one Postfix server,
      > you have several options.
      >
      > Option 1 uses the Postfix 2.5 proxywrite service, plus some
      > source code changes:
      <snip>
      >
      > Option 2 adds "update" support to the Postfix mysql client.
      >
      > Wietse
    • Wietse Venema, Message 2 of 25, Jun 13, 2010
        Stefan Jakobs:
        > Hello list,
        >
        > I refer to my question of august 2008
        > (http://archives.neohapsis.com/archives/postfix/2008-08/0747.html, and see
        > below).
        >
        > What are the necessary steps to add update support to the mysql client
        > (Postfix 2.5.6 or newer)?

        I think this involves writing, testing, and documenting code. The
        design stage can pretty much be skipped for this fill-in-the-blanks
        exercise.

        Wietse

        > Has someone already done this and is willing to share the code?
        >
        > Thanks for your help and kind regards
        > Stefan
        >
        >
        > Wietse wrote on August 22nd 2008:
        > > Stefan Jakobs:
        > > > Hello list,
        > > >
        > > > I use Postfix 2.4.3 on two (actually four, but let's assume two)
        > > > mailgateways. Both do recipient verification and cache the results with
        > > > the address_verify_map (verify.db). Sometimes it happens that a foreign
      • Stefan, Message 3 of 25, Oct 1, 2010
          Hi list,

          I'm in the process of adding write support to postfix's mysql client (you will
          find a patch against postfix-2.7.1 in the appendix). But I have two problems:
          1) the dict_cache_clean_event writes _LAST_CACHE_CLEANUP_COMPLETED_ to the
          database. Is this the intended behaviour?

          2) If I'm guessing right then the dict_cache_clean_event will iterate
          with the help of dict->sequence through the database and will look for
          keys to expire. But I don't know how to implement this iteration/traversal
          process with mysql. My first thought was to use "SELECT * FROM verify"
          and mysql_use_result() but I'm wondering if there is a better solution.
          Does anyone have an idea of how to do this?

          Thanks for your help and best regards
          Stefan

          > > by Stefan Jakobs on 2010-06-13T19:43:00+00:00
          > > Hello list,
          > > I refer to my question of august 2008
          > > (http://archives.neohapsis.com/archives/postfix/2008-08/0747.html, and see
          > > below).
          > > What are the necessary steps to add update support to the mysql client
          > > (Postfix 2.5.6 or newer)?
          > > Has someone already done this and is willing to share the code?
          > > Thanks for your help and kind regards
          > > Stefan
          > Wietse wrote on August 22nd 2008:
          > Stefan Jakobs:
          > I think this involves writing, testing, and documenting code. The
          > design stage can pretty much be skipped for this fill-in-the-blanks
          > exercise.
          > Wietse
        • Wietse Venema, Message 4 of 25, Oct 1, 2010
            Stefan:
            > Hi list,
            >
            > I'm in the process of adding write support to postfix's mysql client (you will
            > find a patch against postfix-2.7.1 in the appendix). But I have two problems:
            > 1) the dict_cache_clean_event writes _LAST_CACHE_CLEANUP_COMPLETED_ to the
            > database. Is this the intended behaviour?

            This record is needed by the cache cleanup pseudo-thread. This code
            assumes that the verify(8) daemon is responsible for cleaning up
            the verify(8) cache.

            > 2) If I'm guessing right then the dict_cache_clean_event will iterate with
            > help of dict->sequence through the database and will look for keys to expire.
            > But I don't know how to implement this iteration/traverse process with mysql.
            > My first thought was to use "SELECT * FROM verify" and mysql_use_result() but
            > I'm wondering if there is a better solution.
            > Has anyone an idea of how to do this?

            Does the database support a first/next operation?

            Wietse

            > Thanks for your help and best regards
            > Stefan
            >
            > > > by Stefan Jakobs on 2010-06-13T19:43:00+00:00
            > > > Hello list,
            > > > I refer to my question of august 2008
            > > > (http://archives.neohapsis.com/archives/postfix/2008-08/0747.html, and see
            > > > below).
            > > > What are the necessary steps to add update support to the mysql client
            > > > (Postfix 2.5.6 or newer)?
            > > > Has someone already done this and is willing to share the code?
            > > > Thanks for your help and kind regards
            > > > Stefan
            > > Wietse wrote on August 22nd 2008:
            > > Stefan Jakobs:
            > > I think this involves writing, testing, and documenting code. The
            > > design stage can pretty much be skipped for this fill-in-the-blanks
            > > exercise.
            > > Wietse

            [ Attachment, skipping... ]
          • Stefan Jakobs, Message 5 of 25, Oct 1, 2010
              On Friday 01 October 2010 18:58:26 Wietse Venema wrote:
              > Stefan:
              > > Hi list,
              > >
              > > I'm in the process of adding write support to postfix's mysql client (you
              > > will find a patch against postfix-2.7.1 in the appendix). But I have two
              > > problems: 1) the dict_cache_clean_event writes
              > > _LAST_CACHE_CLEANUP_COMPLETED_ to the database. Is this the intended
              > > behaviour?
              >
              > This record is needed by the cache cleanup pseudo-thread. This code
              > assumes that the verify(8) daemon is responsible for cleaning up
              > the verify(8) cache.

              Ah, fine. So that's OK.

              > > 2) If I'm guessing right then the dict_cache_clean_event will iterate
              > > with help of dict->sequence through the database and will look for keys
              > > to expire. But I don't know how to implement this iteration/traverse
              > > process with mysql. My first thought was to use "SELECT * FROM verify"
              > > and mysql_use_result() but I'm wondering if there is a better solution.
              > > Has anyone an idea of how to do this?
              >
              > Does the database support a first/next operation?

              The operation that comes closest to that is to select the whole table
              and then fetch the keys row by row. Yes, I think that is a first/next
              operation (with bad performance).

              What would be the answer if there wasn't a first/next operation?

              > Wietse

              Thank you in advance.
              Stefan

              <snip>
            • Wietse Venema, Message 6 of 25, Oct 1, 2010
                Wietse Venema:
                > Stefan:
                > > Hi list,
                > >
                > > I'm in the process of adding write support to postfix's mysql client (you will
                > > find a patch against postfix-2.7.1 in the appendix). But I have two problems:
                > > 1) the dict_cache_clean_event writes _LAST_CACHE_CLEANUP_COMPLETED_ to the
                > > database. Is this the intended behaviour?
                >
                > This record is needed by the cache cleanup pseudo-thread. This code
                > assumes that the verify(8) daemon is responsible for cleaning up
                > the verify(8) cache.
                >
                > > 2) If I'm guessing right then the dict_cache_clean_event will iterate with
                > > help of dict->sequence through the database and will look for keys to expire.
                > > But I don't know how to implement this iteration/traverse process with mysql.
                > > My first thought was to use "SELECT * FROM verify" and mysql_use_result() but
                > > I'm wondering if there is a better solution.
                > > Has anyone an idea of how to do this?
                >
                > Does the database support a first/next operation?
                >

                Another desirable option may be to disable cache cleanup by the
                verify(8) daemon. Supposedly, the cache is meant to be shared,
                otherwise why incur the overhead?

                Wietse

                > > Thanks for your help and best regards
                > > Stefan
                > >
                > > > > by Stefan Jakobs on 2010-06-13T19:43:00+00:00
                > > > > Hello list,
                > > > > I refer to my question of august 2008
                > > > > (http://archives.neohapsis.com/archives/postfix/2008-08/0747.html, and see
                > > > > below).
                > > > > What are the necessary steps to add update support to the mysql client
                > > > > (Postfix 2.5.6 or newer)?
                > > > > Has someone already done this and is willing to share the code?
                > > > > Thanks for your help and kind regards
                > > > > Stefan
                > > > Wietse wrote on August 22nd 2008:
                > > > Stefan Jakobs:
                > > > I think this involves writing, testing, and documenting code. The
                > > > design stage can pretty much be skipped for this fill-in-the-blanks
                > > > exercise.
                > > > Wietse
                >
                > [ Attachment, skipping... ]
                >
                >
                >
              • Wietse Venema, Message 7 of 25, Oct 2, 2010
                  Stefan Jakobs:
                  > > Does the database support a first/next operation?
                  >
                  > The operation which comes close to that, is to select the whole table and then
                  > fetch the keys row by row. Yes, I think that is a first/next operation (with a
                  > bad performance).
                  >
                  > What would be the answer if there wasn't a first/next operation?

                  A DBMS without an iterator does not seem plausible.

                  The dict_cache cleanup code slowly scans the DBMS for obsolete
                  records and removes them while allowing the verify or postscreen
                  process to handle requests from other Postfix processes.

                  This means that the MySQL client will need to handle two streams
                  of requests that are interleaved:

                  1 - One stream of first/next/lookup/delete requests from the cache
                  cleanup code.

                  2 - One stream of lookup/update requests that are triggered by
                  smtpd (lookup) and by delivery agents (update).

                  These two streams must be able to co-exist. Cache cleanup can take
                  a long time, and it is not acceptable that the cache cleanup (stream
                  1) must run from start to completion without allowing requests from
                  stream 2.
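
                  As a sketch, the two interleaved streams might look like this
                  against a "verify" table (table name and addresses here are
                  illustrative only, not taken from the patch):

                  ```sql
                  -- Stream 1: cache cleanup, slowly walking the whole table
                  SELECT address, data FROM verify;                        -- first/next scan
                  DELETE FROM verify WHERE address = 'stale@example.com';  -- expire one entry

                  -- Stream 2: lookups from smtpd(8) and updates from delivery
                  -- agents, arriving while stream 1 is still in progress
                  SELECT data FROM verify WHERE address = 'user@example.com';
                  UPDATE verify SET data = '...' WHERE address = 'user@example.com';
                  ```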

                  Wietse
                • Stefan, Message 8 of 25, Oct 15, 2010
                    Hi list,

                    in the appendix you will find a patch against Postfix 2.7.1 which adds write
                    support to Postfix' MySQL client.

                    If someone would like to test it, they will find Postfix RPMs with
                    MySQL write support for recent versions of *SUSE Linux here:
                    http://download.opensuse.org/repositories/home:/rusjako/

                    To use a MySQL verify db, you have to:
                    - add "address_verify_map = mysql:/etc/postfix/verify.cf" to your main.cf
                    - Content of /etc/postfix/verify.cf:
                    user = postfix
                    password = <secret>
                    dbname = postfix
                    query = SELECT data FROM verify WHERE address='%s'
                    delete = DELETE FROM verify WHERE address='%s'
                    insert = INSERT verify SET address='%s', data='%v'
                    update = UPDATE verify SET data='%v' WHERE address='%s'
                    sequence = SELECT address,data FROM verify
                    - Create the MySQL table postfix.verify:
                    mysql> CREATE TABLE verify(
                    address VARCHAR(255) primary key,
                    data TEXT NOT NULL,
                    created TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
                    );
                    - Grant the rights SELECT, INSERT, DELETE, UPDATE to the user 'postfix'
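
                    That last step, written out as a concrete statement (the
                    'localhost' host part is an assumption; adjust it to wherever
                    Postfix connects from):

                    ```sql
                    GRANT SELECT, INSERT, DELETE, UPDATE ON postfix.verify
                        TO 'postfix'@'localhost' IDENTIFIED BY '<secret>';
                    FLUSH PRIVILEGES;
                    ```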

                    best regards
                    Stefan
                  • Victor Duchovni, Message 9 of 25, Oct 15, 2010
                      On Fri, Oct 15, 2010 at 03:05:33PM +0200, Stefan wrote:

                      > in the appendix you will find a patch against Postfix 2.7.1 which adds write
                      > support to Postfix' MySQL client.
                      >
                      > If someone would like to test it, they will find Postfix RPMs with
                      > MySQL write support for recent versions of *SUSE Linux here:
                      > http://download.opensuse.org/repositories/home:/rusjako/
                      >
                      > To use a MySQL verify db, you have to:
                      > - add "address_verify_map = mysql:/etc/postfix/verify.cf" to your main.cf
                      > - Content of /etc/postfix/verify.cf:
                      > user = postfix
                      > password = <secret>
                      > dbname = postfix
                      > query = SELECT data FROM verify WHERE address='%s'
                      > delete = DELETE FROM verify WHERE address='%s'
                      > insert = INSERT verify SET address='%s', data='%v'
                      > update = UPDATE verify SET data='%v' WHERE address='%s'
                      > sequence = SELECT address,data FROM verify

                      Have you found any issues with lock contention between the
                      "sequence" pseudo-thread and INSERT/DELETE/UPDATE operations during a
                      garbage-collection sweep? Is the code known to be safe against dead-lock?

                      Also it probably makes sense to retain a compatible db_common_expand()
                      wrapper around the extended code that also handles a second "value"
                      element in addition to the lookup key. This would obviate the need
                      to modify the other table drivers that don't do updates...

                      --
                      Viktor.
                    • Stefan Jakobs, Message 10 of 25, Oct 28, 2010
                        On Friday 15 October 2010 16:53:40 Victor Duchovni wrote:
                        > On Fri, Oct 15, 2010 at 03:05:33PM +0200, Stefan wrote:
                        > > in the appendix you will find a patch against Postfix 2.7.1 which adds
                        > > write support to Postfix' MySQL client.
                        > >
                        > > If someone would like to test it, they will find Postfix RPMs
                        > > with MySQL write support for recent versions of *SUSE Linux here:
                        > > http://download.opensuse.org/repositories/home:/rusjako/
                        > >
                        > > To use a MySQL verify db, you have to:
                        > > - add "address_verify_map = mysql:/etc/postfix/verify.cf" to your main.cf
                        > > - Content of /etc/postfix/verify.cf:
                        > > user = postfix
                        > > password = <secret>
                        > > dbname = postfix
                        > > query = SELECT data FROM verify WHERE address='%s'
                        > > delete = DELETE FROM verify WHERE address='%s'
                        > > insert = INSERT verify SET address='%s', data='%v'
                        > > update = UPDATE verify SET data='%v' WHERE address='%s'
                        > > sequence = SELECT address,data FROM verify
                        >
                        > Have you found any issues with lock contention between the
                        > "sequence" pseudo-thread and INSERT/DELETE/UPDATE operations during a
                        > garbage-collection sweep?

                        No.

                        > Is the code known to be safe against dead-lock?

                        I'm not aware of any dead-lock issues. The sequence pseudo-thread will
                        query the database only once, with the first key. For every next key
                        the sequence pseudo-thread works with the results in memory. With a
                        very large database the size of the response may be a problem. But an
                        INSERT/DELETE/UPDATE operation will not conflict with the sequence
                        pseudo-thread. Finally, I cannot prove that the code is deadlock-safe.

                        > Also it probably makes sense to retain a compatible db_common_expand()
                        > wrapper around the extended code that also handles a second "value"
                        > element in addition to the lookup key. This would obviate the need
                        > to modify the other table drivers that don't do updates...

                        Yes, good idea. I fixed that in the appended patch.

                        Best regards
                        Stefan
                      • Wietse Venema, Message 11 of 25, Oct 28, 2010
                          Thanks for the patch.

                          Stefan Jakobs:
                          > I'm not aware of any dead-lock issues. The sequence pseudo-thread
                          > will query the database only once, with the first key. For every
                          > next key the sequence pseudo-thread works with the results in
                          > memory. With a very large database the size of the response may
                          > be a problem. But an INSERT/DELETE/UPDATE operation will not
                          > conflict with the sequence pseudo-thread. Finally, I cannot
                          > prove that the code is deadlock-safe.

                          It appears that this sequence() implementation uses memory in
                          proportion to the database size. That is not acceptable. Would it
                          be possible to maintain state with a limited amount of memory for
                          a database cursor?
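
                          One way to keep the cursor state bounded, sketched here as a
                          suggestion rather than taken from the patch, is keyset
                          pagination: the driver remembers only the last address
                          returned (%s below is the driver's usual key placeholder):

                          ```sql
                          -- first: smallest key in the table
                          SELECT address, data FROM verify
                              ORDER BY address LIMIT 1;
                          -- next: %s is the address returned by the previous step
                          SELECT address, data FROM verify
                              WHERE address > '%s' ORDER BY address LIMIT 1;
                          ```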

                          By design, Postfix memory usage must not keep growing with increasing
                          data size, queue size, message size, database size etc. This is
                          necessary to ensure sane handling of overload. It is not acceptable
                          that Postfix becomes deadlocked under overload.

                          The alternative would be to disable database cleanup by Postfix
                          and to rely on other software to clean the database, but that
                          is a problem because the database format is not public.

                          Wietse
                        • Wietse Venema, Message 12 of 25, Dec 7, 2010
                            Wietse Venema:
                            > Thanks for the patch.
                            >
                            > Stefan Jakobs:
                            > > I'm not aware of any dead-lock issues. The sequence pseudo-thread
                            > > will query the database only once, with the first key. For every
                            > > next key the sequence pseudo-thread works with the results in
                            > > memory. With a very large database the size of the response may
                            > > be a problem. But an INSERT/DELETE/UPDATE operation will not
                            > > conflict with the sequence pseudo-thread. Finally, I cannot
                            > > prove that the code is deadlock-safe.
                            >
                            > It appears that this sequence() implementation uses memory in
                            > proportion to the database size. That is not acceptable. Would it
                            > be possible to maintain state with a limited amount of memory for
                            > a database cursor?
                            >
                            > By design, Postfix memory usage must not keep growing with increasing
                            > data size, queue size, message size, database size etc. This is
                            > necessary to ensure sane handling of overload. It is not acceptable
                            > that Postfix becomes deadlocked under overload.

                            Looking over MySQL documentation, there appears to be a HANDLER
                            primitive that supports FIRST/NEXT sequential access without
                            using an amount of memory that grows with database size.

                            http://dev.mysql.com/doc/refman/5.5/en/handler.html
                            http://dev.mysql.com/doc/refman/4.1/en/handler.html

                            This approach seems to have similar consistency limitations as
                            other Postfix maps that support FIRST/NEXT sequential access while
                            database updates are happening, and that is not a problem. When
                            we use FIRST/NEXT for database cleanup, it is sufficient if we
                            clean up most of the stale entries.
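
                            Against the verify table from earlier in the thread, the
                            HANDLER interface would look roughly like this (a sketch;
                            natural row order is enough for a cleanup scan):

                            ```sql
                            HANDLER verify OPEN;
                            HANDLER verify READ FIRST;  -- first tuple
                            HANDLER verify READ NEXT;   -- one more tuple per call,
                                                        -- bounded memory on the client
                            HANDLER verify CLOSE;
                            ```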

                            Wietse
                          • Stefan, Message 13 of 25, Dec 14, 2010
                              On Tuesday, 7th of December 2010, 21:57:00 Wietse Venema wrote:
                              > Wietse Venema:
                              > > Thanks for the patch.
                              > >
                              > > Stefan Jakobs:
                              > > > I'm not aware of any dead-lock issues. The sequence pseudo-thread
                              > > > will query the database only once, with the first key. For every
                              > > > next key the sequence pseudo-thread works with the results in
                              > > > memory. With a very large database the size of the response may
                              > > > be a problem. But an INSERT/DELETE/UPDATE operation will not
                              > > > conflict with the sequence pseudo-thread. Finally, I cannot
                              > > > prove that the code is deadlock-safe.
                              > >
                              > > It appears that this sequence() implementation uses memory in
                              > > proportion to the database size. That is not acceptable. Would it
                              > > be possible to maintain state with a limited amount of memory for
                              > > a database cursor?
                              > >
                              > > By design, Postfix memory usage must not keep growing with increasing
                              > > data size, queue size, message size, database size etc. This is
                              > > necessary to ensure sane handling of overload. It is not acceptable
                              > > that Postfix becomes deadlocked under overload.
                              >
                              > Looking over MySQL documentation, there appears to be a HANDLER
                              > primitive that supports FIRST/NEXT sequential access without
                              > using an amount of memory that grows with database size.
                              >
                              > http://dev.mysql.com/doc/refman/5.5/en/handler.html
                              > http://dev.mysql.com/doc/refman/4.1/en/handler.html
                              >
                              > This approach seems to have similar consistency limitations as
                              > other Postfix maps that support FIRST/NEXT sequential access while
                              > database updates are happening, and that is not a problem. When
                              > we use FIRST/NEXT for database cleanup, it is sufficient if we
                              > clean up most of the stale entries.

                              Great hint. I wasn't aware of the HANDLER primitive. I would have used
                              mysql_use_result(), but that would have needed its own connection to
                              the server and would have made the code more complicated.
                              I changed my code and it now uses the HANDLER primitive. You will find
                              the new patch against Postfix 2.7.1 in the appendix. The advantage is
                              that the sequence() implementation fetches only one data set (tuple)
                              per query, so the memory used doesn't grow with the database size. A
                              drawback is that this solution is not as configurable/flexible as the
                              other one. And it's still the case that the first two values of a
                              fetched tuple must be the address and its corresponding cache timings
                              (data). But I guess that is acceptable.
                              I'm not aware of any deadlock issues. But again, I cannot prove it.

                              With this new patch a configuration for using the verify db with mysql looks
                              like this:
                              /etc/postfix/verify.cf:
                              user = postfix
                              password = <secret>
                              dbname = postfix
                              cache_tblname = verify
                              query = SELECT data FROM verify WHERE address='%s'
                              delete = DELETE FROM verify WHERE address='%s'
                              insert = INSERT verify SET address='%s', data='%v'
                              update = UPDATE verify SET data='%v' WHERE address='%s'

                              > Wietse

                              Thank you for your help and suggestions.
                              Kind regards
                              Stefan
                            • Wietse Venema
                              ... Do you really mean that the implementation is usable only for the Postfix verify(8) database format? Have you tested this with query and update activity
                              Message 14 of 25 , Dec 14, 2010
                                Stefan:
                                > A drawback is that this
                                > solution is not as configurable/flexible as the other one. And it's still the
                                > case that the first two values of a fetched tuple must be the address and its
                                 > corresponding cache timings (data). But I guess that is acceptable.

                                Do you really mean that the implementation is usable only for the
                                Postfix verify(8) database format?

                                Have you tested this with query and update activity while a sequence
                                operation is in progress? That is required by the Postfix dict_cache
                                implementation.

                                Wietse
                              • Stefan Jakobs
                                Message 15 of 25 , Dec 14, 2010
                                  On Tuesday 14 December 2010 14:43:23 Wietse Venema wrote:
                                  > Stefan:
                                  > > A drawback is that this
                                  > > solution is not as configurable/flexible as the other one. And it's still
                                  > > the case that the first two values of a fetched tuple must be the
                                   > > address and its corresponding cache timings (data). But I guess that is
                                  > > acceptable.
                                  >
                                  > Do you really mean that the implementation is usable only for the
                                  > Postfix verify(8) database format?

                                   Sorry, my description wasn't clear.
                                  No, the implementation isn't bound to the verify(8) database format. The
                                  dict_mysql_sequence() function will return any key-value pair, as demanded by
                                  the interface. But the mysql HANDLER NEXT call will return a complete row of
                                  the table as an array. And the first element of this array must be the key and
                                  the second element must be the value. So the implementation of
                                   dict_mysql_sequence() expects the database to be in a specific format, e.g.
                                  (key|data|...).
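
                                   For readers unfamiliar with it, the MySQL HANDLER interface described above
                                   works roughly like this (the table name follows the earlier example; this is
                                   illustrative, not code from the patch):

```sql
-- Illustrative only: the low-level statements behind a sequence scan.
HANDLER verify OPEN;
HANDLER verify READ FIRST;  -- "first" operation: returns one complete row
HANDLER verify READ NEXT;   -- "next" operation: one row per call
HANDLER verify CLOSE;       -- end of the scan
```

                                   Because HANDLER reads return rows one at a time, memory use stays constant
                                   regardless of table size.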

                                  > Have you tested this with query and update activity while a sequence
                                  > operation is in progress? That is required by the Postfix dict_cache
                                  > implementation.

                                  Yes, I have tested that and it worked without problems. If you are interested
                                  then I will send you the logs of that test.

                                  > Wietse

                                  Regards
                                  Stefan
                                • Wietse Venema
                                  Message 16 of 25 , Dec 14, 2010
                                    Stefan Jakobs:
                                    >
                                    > On Tuesday 14 December 2010 14:43:23 Wietse Venema wrote:
                                    > > Stefan:
                                    > > > A drawback is that this
                                    > > > solution is not as configurable/flexible as the other one. And it's still
                                    > > > the case that the first two values of a fetched tuple must be the
                                     > > > address and its corresponding cache timings (data). But I guess that is
                                    > > > acceptable.
                                    > >
                                    > > Do you really mean that the implementation is usable only for the
                                    > > Postfix verify(8) database format?
                                    >
                                    > Sorry, my describtion wasn't clear.
                                    > No, the implementation isn't bound to the verify(8) database format. The
                                    > dict_mysql_sequence() function will return any key-value pair, as demanded by
                                    > the interface. But the mysql HANDLER NEXT call will return a complete row of
                                    > the table as an array. And the first element of this array must be the key and
                                    > the second element must be the value. So the implementation of
                                     > dict_mysql_sequence() expects the database to be in a specific format, e.g.
                                    > (key|data|...).

                                    OK, that is not too restrictive, just a matter of documentation.

                                    > > Have you tested this with query and update activity while a sequence
                                    > > operation is in progress? That is required by the Postfix dict_cache
                                    > > implementation.
                                    >
                                    > Yes, I have tested that and it worked without problems. If you
                                    > are interested then I will send you the logs of that test.

                                    Yes, it would help when I want to run some tests (the results should
                                    be similar).

                                    One more question: what happens if a "first" sequence operation is
                                    requested before the last one is finished? Should the code maintain
                                    an internal flag that the "handler open" is still in effect, and
                                    do the "right" thing when another "first" sequence operation is
                                    requested?

                                    Wietse

                                  • Stefan Jakobs
                                    Message 17 of 25 , Jan 5, 2011
                                      On Tuesday, 14th of December 2010, 20:09:26 Wietse Venema wrote:
                                      <snip>
                                      > > Yes, I have tested that and it worked without problems. If you
                                      > > are interested then I will send you the logs of that test.
                                      >
                                      > Yes, it would help when I want to run some tests (the results should
                                      > be similar).

                                       I sent you the logs in a separate message.

                                      > One more question: what happens if a "first" sequence operation is
                                      > requested before the last one is finished? Should the code maintain
                                      > an internal flag that the "handler open" is still in effect, and
                                      > do the "right" thing when another "first" sequence operation is
                                      > requested?

                                       The call of a "first" sequence operation would reset the cleanup scan, so
                                       the last elements would never be seen by the cleanup process.
                                       I introduced a semaphore to prevent that: if a second sequence operation
                                       starts before the first one has finished, it will quit and return as if the
                                       database were empty.
                                      You will find the new patch in the appendix.

                                      I tried to produce a situation where two cleanup processes were running
                                      simultaneously, but I couldn't. Even with a
                                      address_verify_cache_cleanup_interval of 2s and a database with more than
                                       100,000 entries (the cleanup took 82s) only one cleanup process was running.
                                      Another one started 2 seconds after the first one finished.

                                      # egrep "dict_cache_clean_event: (done|start)" /var/log/mail
                                      Jan 5 15:04:34 mx2 postfix/verify[30223]: dict_cache_clean_event: start
                                      /etc/mx/verify.cf cache cleanup
                                      Jan 5 15:05:56 mx2 postfix/verify[30223]: dict_cache_clean_event: done
                                      /etc/mx/verify.cf cache cleanup scan
                                      Jan 5 15:05:58 mx2 postfix/verify[30223]: dict_cache_clean_event: start
                                      /etc/mx/verify.cf cache cleanup

                                      I'm not sure if it is possible that two cleanup processes can run
                                      simultaneously. Wietse, how are the cleanup processes scheduled and executed?
                                      From the above it looks as if the next cleanup process will not be scheduled
                                      until the current one has finished. Is that the case?

                                      Thanks for your patience and help.

                                      > Wietse

                                      Stefan
                                    • Wietse Venema
                                      Message 18 of 25 , Jan 5, 2011
                                        Stefan Jakobs:
                                        > On Tuesday, 14th of December 2010, 20:09:26 Wietse Venema wrote:
                                        > <snip>
                                        > > > Yes, I have tested that and it worked without problems. If you
                                        > > > are interested then I will send you the logs of that test.
                                        > >
                                        > > Yes, it would help when I want to run some tests (the results should
                                        > > be similar).
                                        >
                                         > I sent you the logs in a separate message.
                                        >
                                        > > One more question: what happens if a "first" sequence operation is
                                        > > requested before the last one is finished? Should the code maintain
                                        > > an internal flag that the "handler open" is still in effect, and
                                        > > do the "right" thing when another "first" sequence operation is
                                        > > requested?
                                        >
                                        > The call of a "first" sequence operation will reset the cleanup process, so
                                        > the last elements will never be seen by the cleanup process.
                                        > I introduced a semaphore to circumvent that. If a second sequence operation
                                        > starts before the first one has finished, it will quit and return as if the
                                        > database was empty.
                                        > You will find the new patch in the appendix.
                                        >
                                        > I tried to produce a situation where two cleanup processes were running
                                        > simultaneously, but I couldn't. Even with a
                                        > address_verify_cache_cleanup_interval of 2s and a database with more than
                                         > 100,000 entries (the cleanup took 82s) only one cleanup process was running.
                                        > Another one started 2 seconds after the first one finished.
                                        >
                                        > # egrep "dict_cache_clean_event: (done|start)" /var/log/mail
                                        > Jan 5 15:04:34 mx2 postfix/verify[30223]: dict_cache_clean_event: start
                                        > /etc/mx/verify.cf cache cleanup
                                        > Jan 5 15:05:56 mx2 postfix/verify[30223]: dict_cache_clean_event: done
                                        > /etc/mx/verify.cf cache cleanup scan
                                        > Jan 5 15:05:58 mx2 postfix/verify[30223]: dict_cache_clean_event: start
                                        > /etc/mx/verify.cf cache cleanup
                                        >
                                        > I'm not sure if it is possible that two cleanup processes can run
                                        > simultaneously. Wietse, how are the cleanup processes scheduled and executed?
                                        > From the above it looks as if the next cleanup process will not be scheduled
                                        > until the current one has finished. Is that the case?
                                        >
                                        > Thanks for your patience and help.

                                        Each verify or postscreen or tlsmgr process will at set times
                                        scan the database for old entries.

                                        If it so happens that this scan doesn't finish before the new one
                                        starts, then it would really be a W*T*F* moment if the code decides
                                        that the new scan completes immediately, especially if it means
                                        that the old scan is stopped too.

                                        Wietse
                                      • Victor Duchovni
                                        Message 19 of 25 , Jan 5, 2011
                                          On Wed, Jan 05, 2011 at 06:56:31PM -0500, Wietse Venema wrote:

                                          > Each verify or postscreen or tlsmgr process will at set times
                                          > scan the database for old entries.
                                          >
                                          > If it so happens that this scan doesn't finish before the new one
                                          > starts, then it would really be a W*T*F* moment if the code decides
                                          > that the new scan completes immediately, especially if it means
                                          > that the old scan is stopped too.

                                          When the Postfix queue-manager is scanning the incoming or deferred queue,
                                          if another scan request comes in in the middle of an existing scan, the
                                           existing scan continues, but a flag is set (idempotent) that indicates
                                          that a new scan should start as soon as the old scan completes.

                                          In this case, it is not as critical to set such a flag, but it is important
                                          to allow the existing scan to continue to completion, and ignore or
                                          (just note) new requests until it does. Once a scan completes, new
                                          scans can proceed either immediately (saved flag) or when next requested.

                                          --
                                          Viktor.
                                        • Stefan Jakobs
                                          Message 20 of 25 , Jan 6, 2011
                                            On Thursday 06 January 2011 01:45:00 Victor Duchovni wrote:
                                            > On Wed, Jan 05, 2011 at 06:56:31PM -0500, Wietse Venema wrote:
                                            > > Each verify or postscreen or tlsmgr process will at set times
                                            > > scan the database for old entries.
                                            > >
                                            > > If it so happens that this scan doesn't finish before the new one
                                            > > starts, then it would really be a W*T*F* moment if the code decides
                                            > > that the new scan completes immediately, especially if it means
                                            > > that the old scan is stopped too.
                                            >
                                            > When the Postfix queue-manager is scanning the incoming or deferred queue,
                                            > if another scan request comes in in the middle of an existing scan, the
                                             > existing scan continues, but a flag is set (idempotent) that indicates
                                            > that a new scan should start as soon as the old scan completes.
                                            >
                                            > In this case, it is not as critical to set such a flag, but it is important
                                            > to allow the existing scan to continue to completion, and ignore or
                                            > (just note) new requests until it does. Once a scan completes, new
                                            > scans can proceed either immediately (saved flag) or when next requested.

                                            That's what I have implemented. If a cleanup process is already running and a
                                            second cleanup process starts then the second process will quit as if the
                                            database was empty and it will log a warning that the cleanup was skipped due
                                            to an already running process. The first cleanup process will continue and
                                            complete the database scan. A subsequent cleanup process will start as
                                            scheduled.
                                             I don't use a flag to put a simultaneously started cleanup process on hold;
                                             it will just be skipped. That shouldn't be a problem, because the cleanup
                                             process isn't time-critical.

                                            I'm sorry for the confusion.

                                            regards
                                            Stefan
                                          • Victor Duchovni
                                            Message 21 of 25 , Jan 6, 2011
                                              On Thu, Jan 06, 2011 at 04:56:48PM +0100, Stefan Jakobs wrote:

                                              > > In this case, it is not as critical to set such a flag, but it is important
                                              > > to allow the existing scan to continue to completion, and ignore or
                                              > > (just note) new requests until it does. Once a scan completes, new
                                              > > scans can proceed either immediately (saved flag) or when next requested.
                                              >
                                              > That's what I have implemented. If a cleanup process is already running and a
                                              > second cleanup process starts then the second process will quit as if the
                                              > database was empty and it will log a warning

                                              No warning is necessary. With a large database the cleanup thread may
                                              run longer than the scheduled interval between threads. This is fine.

                                              --
                                              Viktor.
                                            • Stefan
                                              Message 22 of 25 , Jan 10, 2011
                                                On Thursday, 6th Januar 2011, 21:02:17 Victor Duchovni wrote:
                                                > On Thu, Jan 06, 2011 at 04:56:48PM +0100, Stefan Jakobs wrote:
                                                > > > In this case, it is not as critical to set such a flag, but it is
                                                > > > important to allow the existing scan to continue to completion, and
                                                > > > ignore or (just note) new requests until it does. Once a scan
                                                > > > completes, new scans can proceed either immediately (saved flag) or
                                                > > > when next requested.
                                                > >
                                                > > That's what I have implemented. If a cleanup process is already running
                                                > > and a second cleanup process starts then the second process will quit as
                                                > > if the database was empty and it will log a warning
                                                >
                                                > No warning is necessary. With a large database the cleanup thread may
                                                > run longer than the scheduled interval between threads. This is fine.

                                                OK, I attached the final(?) version of the mysql-write-support patch.
                                                Is there any chance that the patch will make it into a stable Postfix release?

                                                Regards
                                                Stefan
                                              • Tuomo Soini
                                                Message 23 of 25 , Mar 10, 2014
                                                  On Mon, 10 Jan 2011 17:41:05 +0100
                                                  "Stefan" <stefan@...> wrote:

                                                  > On Thursday, 6th Januar 2011, 21:02:17 Victor Duchovni wrote:
                                                  > > No warning is necessary. With a large database the cleanup thread
                                                  > > may run longer than the scheduled interval between threads. This is
                                                  > > fine.
                                                  >
                                                  > OK, I attached the final(?) version of the mysql-write-support patch.
                                                  > Is there any chance that the patch will make it into a stable Postfix
                                                  > release?

                                                   What happened to this patch? There is a real use case in any bigger
                                                   system for a shared verify db, and the mysql engine patch looks like a
                                                   really good solution to the problem.

                                                   I can see this is not yet included in the experimental releases, and I'd
                                                   vote for including it.

                                                  --
                                                  Tuomo Soini <tis@...>
                                                  Foobar Linux services
                                                  +358 40 5240030
                                                  Foobar Oy <http://foobar.fi/>
                                                • Wietse Venema
                                                  Message 24 of 25 , Mar 10, 2014
                                                    Tuomo Soini:
                                                    > On Mon, 10 Jan 2011 17:41:05 +0100
                                                    > "Stefan" <stefan@...> wrote:
                                                    >
                                                    > > On Thursday, 6th Januar 2011, 21:02:17 Victor Duchovni wrote:
                                                    > > > No warning is necessary. With a large database the cleanup thread
                                                    > > > may run longer than the scheduled interval between threads. This is
                                                    > > > fine.
                                                    > >
                                                    > > OK, I attached the final(?) version of the mysql-write-support patch.
                                                    > > Is there any chance that the patch will make it into a stable Postfix
                                                    > > release?
                                                    >
                                                     > What happened to this patch? There is a real use case in any bigger
                                                     > system for a shared verify db, and the mysql engine patch looks like a
                                                     > really good solution to the problem.

                                                    Postfix software evolves from "it handles normal cases as expected"
                                                    via "it handles all normal AND all abnormal cases" to code that has
                                                     been thoroughly reviewed and tested. I ran out of time after helping
                                                    the code from stage 1 to stage 2.

                                                    Wietse

                                                     > I can see this is not yet included in the experimental releases, and I'd
                                                     > vote for including it.
                                                    >
                                                    > --
                                                    > Tuomo Soini <tis@...>
                                                    > Foobar Linux services
                                                    > +358 40 5240030
                                                    > Foobar Oy <http://foobar.fi/>
                                                    >
                                                  • Stefan Jakobs
                                                    Message 25 of 25 , Mar 10, 2014
                                                      Tuomo Soini:
                                                      > On Mon, 10 Jan 2011 17:41:05 +0100
                                                      >
                                                      > "Stefan" <stefan@...> wrote:
                                                      > > On Thursday, 6th Januar 2011, 21:02:17 Victor Duchovni wrote:
                                                      > > > No warning is necessary. With a large database the cleanup thread
                                                      > > > may run longer than the scheduled interval between threads. This is
                                                      > > > fine.
                                                      > >
                                                      > > OK, I attached the final(?) version of the mysql-write-support patch.
                                                      > > Is there any chance that the patch will make it into a stable Postfix
                                                      > > release?
                                                      >
                                                       > What happened to this patch? There is a real use case in any bigger
                                                       > system for a shared verify db, and the mysql engine patch looks like a
                                                       > really good solution to the problem.
                                                      >
                                                       > I can see this is not yet included in the experimental releases, and I'd
                                                       > vote for including it.

                                                       Wietse complained about some missing safety checks, and I didn't have the
                                                       time (or the need) to add them. With the memcached support in recent Postfix
                                                       versions, I don't think there is a need for mysql write support any longer.
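
                                                       For completeness, the memcached-based setup looks roughly like this. The
                                                       host, port, and paths are placeholders; see memcache_table(5) for the full
                                                       parameter list:

```
# main.cf (sketch)
address_verify_map = memcache:/etc/postfix/verify-memcache.cf

# /etc/postfix/verify-memcache.cf (sketch)
memcache = inet:127.0.0.1:11211
backup = btree:$data_directory/verify_cache
key_format = verify:%s
```

                                                       The optional backup map keeps a persistent local copy, so the cache
                                                       survives a memcached restart.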

                                                      Regards
                                                      Stefan