
Re: [larscontest] Re: New record

  • Bertram Felgenhauer
    Message 1 of 18, Apr 18, 2005
      jcmeyrignac wrote:
      >
      >
      > Just for reference, here is the original matrix (it may be useful to
      > detect how it was built)
      >
      > (56,9.50015e+032)

      That was me.

      The initial pattern is based on the construction
      A A
      A -A
      for Hadamard matrices of size 2^n.

      It's only good near powers of 2 though, which was to be expected.

      Code:
      int o = random() % (((N-1)|(N>>1)|(N>>2)|(N>>3)|(N>>4)|(N>>5))+1-N);
      for (int i=0; i<N; i++)
          for (int j=0; j<N; j++) {
              int k = i+1;
              int l = j+1;
              if (j>N/2)       /* shift the right half of the columns by a random offset */
                  l = l+o;
              int m = 0;       /* m = parity of the bitwise AND of k and l */
              while (k) {
                  m ^= k&l&1;
                  k >>= 1;
                  l >>= 1;
              }
              weight += matrix[i][j] = m;
          }
      if (random()%2) {        /* randomly swap the first two rows */
          for (int i=0; i<N; i++)
              std::swap(matrix[0][i], matrix[1][i]);
      }
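For reference, the bit-parity trick in the loop above is exactly the Sylvester construction: entry (i,j) of the 2^k x 2^k ±1 matrix is -1 precisely when i AND j has an odd number of 1 bits. A self-contained sketch (mine, not Bertram's program) checking that this pattern really yields Hadamard matrices:

```cpp
#include <cassert>

// Parity of the bits of x (0 or 1), as in the while-loop above.
int parity(int x) {
    int p = 0;
    while (x) { p ^= x & 1; x >>= 1; }
    return p;
}

// Entry (i, j) of the 2^k x 2^k Sylvester-type Hadamard matrix
// (0-based indices; the construction A A / A -A unrolled).
int sylvester(int i, int j) {
    return parity(i & j) ? -1 : 1;
}

// Check the Hadamard property: distinct rows are orthogonal,
// and each row has squared norm n.
bool isHadamard(int n) {
    for (int r = 0; r < n; r++)
        for (int s = 0; s < n; s++) {
            int dot = 0;
            for (int c = 0; c < n; c++)
                dot += sylvester(r, c) * sylvester(s, c);
            if (dot != (r == s ? n : 0)) return false;
        }
    return true;
}
```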

      HTH,

      Bertram
    • w_orrick
      Message 2 of 18, Apr 19, 2005
        > That was me.
        >
        > The initial pattern is based on the construction
        > A A
        > A -A
        > for Hadamard matrices of size 2^n.
        >
        > It's only good near powers of 2 though, which was to be expected.
        >


        Very interesting. Congratulations to JCM and to Bertram Felgenhauer! The new record
        matrix has a fascinating structure. If you convert it to a 57x57 {-1,1} matrix you find that
        the new record is closely related to a particular 56x56 Hadamard matrix. When suitably
        normalized, this Hadamard matrix contains a 7x7 submatrix consisting entirely of 1s.
        By negating this submatrix you obtain a matrix which is no longer Hadamard - its
        determinant is 75% of the Hadamard bound, but by appending an appropriate extra
        row and column you get this spectacular new record. The determinant value is 89%
        of Barba's bound.

        Bertram:
        I'm very interested in where this Hadamard matrix came from. Unfortunately I wasn't
        quite able to understand your algorithm. Can you say more?

        -Will
      • Bertram Felgenhauer
        Message 3 of 18, Apr 20, 2005
          > > The initial pattern is based on the construction
          > > A A
          > > A -A
          > > for Hadamard matrices of size 2^n.

          > Bertram:
          > I'm very interested in where this Hadamard matrix came from.
          > Unfortunately I wasn't quite able to understand your algorithm.
          > Can you say more?

          Ok, in effect, I started with the 64x64 Hadamard matrix obtained
          from the above construction, and then took rows 1..57 (leaving out
          the last 7 rows) and columns 1..29, 37..64 (leaving out 7 columns
          in the center) of that, and then converted that to a {0,1}-matrix.

          The code I posted tried a few other matrices, taking columns 1..29
          and x..x+27 for x in 30 to 37, but x = 37 turned out to be the only
          value giving a positive determinant for the resulting matrix
          (oops).
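The {0,1} conversion mentioned here is presumably the standard one: with the first row and column normalized to all 1s, strip them and map each remaining entry a to (1-a)/2, so that det(A) = (-2)^(n-1)·det(B). A sketch (my reconstruction) checking this relation on the 8x8 Sylvester matrix, whose associated 7x7 {0,1} matrix reaches the order-7 maximum of 32:

```cpp
#include <cassert>
#include <utility>
#include <vector>

typedef std::vector<std::vector<long long> > Mat;

// Exact integer determinant (Bareiss fraction-free elimination).
long long det(Mat a) {
    int n = (int)a.size();
    long long prev = 1, sign = 1;
    for (int k = 0; k + 1 < n; k++) {
        if (a[k][k] == 0) {                     // pivot if necessary
            int p = -1;
            for (int r = k + 1; r < n; r++)
                if (a[r][k] != 0) { p = r; break; }
            if (p < 0) return 0;
            std::swap(a[k], a[p]);
            sign = -sign;
        }
        for (int i = k + 1; i < n; i++)
            for (int j = k + 1; j < n; j++)
                a[i][j] = (a[i][j] * a[k][k] - a[i][k] * a[k][j]) / prev;
        prev = a[k][k];
    }
    return sign * a[n - 1][n - 1];
}

int parityBits(int x) { int p = 0; while (x) { p ^= x & 1; x >>= 1; } return p; }

// n x n Sylvester +-1 matrix; its first row and column are already all 1s.
Mat sylvester(int n) {
    Mat a(n, std::vector<long long>(n));
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            a[i][j] = parityBits(i & j) ? -1 : 1;
    return a;
}

// Strip the normalized first row/column and map entries a -> (1-a)/2.
Mat toZeroOne(const Mat& a) {
    int n = (int)a.size();
    Mat b(n - 1, std::vector<long long>(n - 1));
    for (int i = 1; i < n; i++)
        for (int j = 1; j < n; j++)
            b[i - 1][j - 1] = (1 - a[i][j]) / 2;
    return b;
}
```

The relation is the reason an order-n record for ±1 matrices corresponds to an order n-1 record for {0,1} matrices.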

          This matrix is actually not very good; its determinant is 1.014E+31.

          This step was followed by simple greedy hill-climbing (flipping the
          single entry of the matrix that gave the maximum improvement in the
          determinant), with the best resulting matrix having determinant
          9.65944E+32 (as far as I remember). The matrix JCM started with
          is also a result of this process.
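Greedy single-entry hill climbing of this kind can be sketched as follows (a minimal reconstruction on a tiny {0,1} matrix with a floating-point determinant, not Bertram's actual program):

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

typedef std::vector<std::vector<double> > DMat;

// Determinant by Gaussian elimination with partial pivoting.
double det(DMat a) {
    int n = (int)a.size();
    double d = 1.0;
    for (int k = 0; k < n; k++) {
        int p = k;
        for (int r = k + 1; r < n; r++)
            if (std::fabs(a[r][k]) > std::fabs(a[p][k])) p = r;
        if (a[p][k] == 0.0) return 0.0;
        if (p != k) { std::swap(a[k], a[p]); d = -d; }
        d *= a[k][k];
        for (int r = k + 1; r < n; r++) {
            double f = a[r][k] / a[k][k];
            for (int c = k; c < n; c++) a[r][c] -= f * a[k][c];
        }
    }
    return d;
}

// Repeatedly flip the single 0/1 entry that improves the determinant
// the most; stop at a local maximum. Returns the final determinant.
double hillClimb(DMat& m) {
    double best = det(m);
    for (;;) {
        int n = (int)m.size(), bi = -1, bj = -1;
        double bestNext = best;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                m[i][j] = 1.0 - m[i][j];   // try the flip
                double d = det(m);
                m[i][j] = 1.0 - m[i][j];   // undo it
                if (d > bestNext) { bestNext = d; bi = i; bj = j; }
            }
        if (bi < 0) return best;           // no flip improves: local maximum
        m[bi][bj] = 1.0 - m[bi][bj];
        best = bestNext;
    }
}
```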

          JCM's optimization program did a much better job than mine did.

          So I don't deserve any credit for this result either.

          Bertram
        • w_orrick
          Message 4 of 18, Apr 22, 2005
            --- In larscontest@yahoogroups.com, Bertram Felgenhauer <bf3@m...> wrote:

            > Ok, in effect, I started with the 64x64 Hadamard matrix obtained
            > from the above construction, and then took rows 1..57 (leaving out
            > the last 7 rows) and columns 1..29, 37..64 (leaving out 7 columns
            > in the center) of that, and then converted that to a {0,1}-matrix.
            >
            > The code I posted tried a few other matrices, taking columns 1..29,
            > and x..x+27 for x in 30 to 37 but starting at 37 turned out to be
            > the only value with a positive determinant of the resultant matrix
            > (oops).
            >
            > This matrix is actually not very good; its determinant is 1.014E+31.
            >
            > This step was followed by simple greedy hill-climbing (changing one
            > of the entries in the matrix which gave maximum improvement for the
            > determinant) with a best resultant matrix with determinant
            > 9.65944E+32 (as far as I remember). The matrix JCM started with
            > is also a result of this process.
            >
            > JCM's optimization program did a much better job than mine did.
            >
            > So I don't deserve any credit for this result either.


            Bertram,

            Thanks for the explanation. I gather from what you said previously that this method gave
            good results in other orders close to 64. Order 61 (-1/+1, which is order 60 for 0/1) is of
            particular interest. The best determinant value known in that order cannot be improved,
            but nevertheless I'm curious how well your method did. I guess you also did well near
            order 32?

            There must be something interesting going on with your construction. My experience with
            hill climbing in higher orders (above order 22 or 23, say) is that it only does well with a
            good starting matrix, so I believe that your matrix was the key here. Many thousands of
            Hadamard matrices of order 56 are known, and it's easy to generate millions more. I
            spent quite a bit of time running various search and hill climbing programs, starting from
            56x56 Hadamard matrices augmented in various ways, to try to improve the value in order
            57, and I only achieved modest success. There are several senses in which the new record
            matrix has optimal properties that all previous searches missed.

            As for the subsequent optimization, it may be that hill climbing works better if you modify
            entire rows or columns at a time, rather than single entries. Perhaps JCM can say how his
            program works.

            -Will
          • Jean-Charles Meyrignac
            Message 5 of 18, Apr 22, 2005
              >
              > As for the subsequent optimization, it may be that hill climbing works
              > better if you modify
              > entire rows or columns at a time, rather than single entries. Perhaps JCM
              > can say how his
              > program works.
              >
              My program is very simple.
              It optimizes the matrices with the following methods:
              1) full inversion of the matrix.
              2) inversion of every row and every column. When an inversion doesn't
              increase the score, it is undone, otherwise the inversion is kept.
              3) freezing pass. This process was designed by Jaroslaw Wroblewski, and I
              already gave its source code.
              4) inversion of every single cell. When an inversion doesn't increase the
              score, it's undone, otherwise the inversion is kept.
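Passes 1, 2 and 4 can be sketched as follows (my reconstruction, not JC's program; the freezing pass is omitted since its source was given separately, "inversion" is read as complementing 0 <-> 1, and the determinant is taken as the score):

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

typedef std::vector<std::vector<double> > DMat;

// Determinant by Gaussian elimination with partial pivoting.
double det(DMat a) {
    int n = (int)a.size();
    double d = 1.0;
    for (int k = 0; k < n; k++) {
        int p = k;
        for (int r = k + 1; r < n; r++)
            if (std::fabs(a[r][k]) > std::fabs(a[p][k])) p = r;
        if (a[p][k] == 0.0) return 0.0;
        if (p != k) { std::swap(a[k], a[p]); d = -d; }
        d *= a[k][k];
        for (int r = k + 1; r < n; r++) {
            double f = a[r][k] / a[k][k];
            for (int c = k; c < n; c++) a[r][c] -= f * a[k][c];
        }
    }
    return d;
}

// One pass: try complementing the whole matrix, then every row and
// every column, then every single cell; keep a change only if the
// determinant increases, otherwise undo it. Returns the final score.
double optimizePass(DMat& m) {
    int n = (int)m.size();
    double best = det(m), d;
    // method 1: full inversion
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) m[i][j] = 1 - m[i][j];
    d = det(m);
    if (d > best) best = d;
    else for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) m[i][j] = 1 - m[i][j];
    // method 2: row and column inversions
    for (int r = 0; r < n; r++) {
        for (int j = 0; j < n; j++) m[r][j] = 1 - m[r][j];
        d = det(m);
        if (d > best) best = d;
        else for (int j = 0; j < n; j++) m[r][j] = 1 - m[r][j];
    }
    for (int c = 0; c < n; c++) {
        for (int i = 0; i < n; i++) m[i][c] = 1 - m[i][c];
        d = det(m);
        if (d > best) best = d;
        else for (int i = 0; i < n; i++) m[i][c] = 1 - m[i][c];
    }
    // method 4: single-cell inversions
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            m[i][j] = 1 - m[i][j];
            d = det(m);
            if (d > best) best = d;
            else m[i][j] = 1 - m[i][j];
        }
    return best;
}
```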

              For N=56, it seems that the freezing pass is useless, but column/row
              inversion is very important!
              If you know other useful transformations for the hill climbing, I'll be glad
              to include them in my program. Inverting blocks seems a good idea (for
              example, 7x7 blocks for N=56), but I have not tested it yet.

              I previously implemented an algorithm similar to Bertram's to optimize the
              matrices (that is: invert the cell that gives the biggest score), but the
              optimization was too slow. The current method works very well.

              I'm currently exploring signed matrices truncation.
              It's basically Bertram's idea:
              - take the 64x64 Sylvester matrix
              - for every N, remove 64-N consecutive columns and rows, and optimize the
              resulting matrix.

              For N=57, my program found 2 record matrices at 1.17303e33 by removing:
              - columns 24-30 and rows 5-11
              - columns 44-50 and rows 17-23
              (maybe some other removals give the same score, since we cannot fully
              trust the current precision of the computation)

              Attached, a zip file containing the program and the 2 signed matrices.

              The program started at N=59, and after one day of computation, it's
              currently at N=57. Perhaps we'll get a new solution this weekend.
              The reduction and optimization of the contest matrices is still running on 4
              Pentiums, but the chances of getting a new result are slim.
              Finally, starting with a matrix different from Sylvester's could lead to
              better results, but the process is very slow.

              JC
            • sterten@aol.com
              Message 6 of 18, Apr 22, 2005
                How about combining two good matrices by inserting a subrectangle
                (or square) from one good matrix into the other one?
                The matrices can be of different sizes.
                Just an idea, I didn't try it.

                Do you have statistics on which rectangles occur more than once
                in the matrices from your database?
              • Bertram Felgenhauer
                Message 7 of 18, Apr 22, 2005
                  Jean-Charles Meyrignac wrote:
                  > My program is very simple.
                  > It optimizes the matrices with the following methods:
                  > 1) full inversion of the matrix.
                  > 2) inversion of every row and every column. When an inversion doesn't
                  > increase the score, it is undone, otherwise the inversion is kept.

                  These are useless in Bertram.c, because you work in the +-1-matrix
                  domain - all these steps do is change the sign of the determinant.

                  When working with 0/1-matrices, these correspond to changing the
                  top left entry (step 1) or another entry of the first column
                  or row (step 2) in the associated +-1-matrix, which are important.

                  In fact, it's because these two steps were missing that my program
                  failed to find the 1.17303e+33 record.

                  > I previously implemented an algorithm similar to Bertram's to optimize the
                  > matrices (that is: invert the cell that gives the biggest score), but the
                  > optimization was too slow. The current method works very well.

                  I calculate the adjoint matrix and pick the biggest value (with sign
                  adjusted according to the respective entry in the transposed original
                  matrix) from there ... that works quite well.

                  For N=60 my program gets to 4.2855e+35.
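The update rule behind this: flipping a(i,j) to -a(i,j) in a ±1 matrix changes the determinant by -2·a(i,j)·C(i,j), where the cofactor C(i,j) is the (j,i) entry of the adjugate ("adjoint") matrix. A sketch verifying the rule (my reconstruction, not Bertram's code):

```cpp
#include <cassert>
#include <utility>
#include <vector>

typedef std::vector<std::vector<long long> > Mat;

// Exact integer determinant (Bareiss fraction-free elimination).
long long det(Mat a) {
    int n = (int)a.size();
    long long prev = 1, sign = 1;
    for (int k = 0; k + 1 < n; k++) {
        if (a[k][k] == 0) {                     // pivot if necessary
            int p = -1;
            for (int r = k + 1; r < n; r++)
                if (a[r][k] != 0) { p = r; break; }
            if (p < 0) return 0;
            std::swap(a[k], a[p]);
            sign = -sign;
        }
        for (int i = k + 1; i < n; i++)
            for (int j = k + 1; j < n; j++)
                a[i][j] = (a[i][j] * a[k][k] - a[i][k] * a[k][j]) / prev;
        prev = a[k][k];
    }
    return sign * a[n - 1][n - 1];
}

// Cofactor C(i,j): signed determinant of the minor obtained by
// deleting row i and column j; equals adj(A)(j,i).
long long cofactor(const Mat& a, int i, int j) {
    int n = (int)a.size();
    Mat m;
    for (int r = 0; r < n; r++) {
        if (r == i) continue;
        std::vector<long long> row;
        for (int c = 0; c < n; c++)
            if (c != j) row.push_back(a[r][c]);
        m.push_back(row);
    }
    return ((i + j) % 2 ? -1 : 1) * det(m);
}
```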

                  Thanks,

                  Bertram
                • w_orrick
                  Message 8 of 18, Apr 24, 2005
                    Some small progress:

                    The 7x7 block of 1s contained in the 56x56 Hadamard matrix that gives rise to the new
                    record is an unusual feature. It appears that this particular Hadamard matrix is related to
                    a "product" construction that uses "skew" Hadamard matrices. A skew Hadamard matrix
                    is a Hadamard matrix that equals the negation of its transpose except for its diagonal,
                    which is all +1s. (In other words, H+H'=2I.) The construction takes a skew Hadamard
                    matrix of size h and produces a new Hadamard matrix of size h(h-1). The resulting
                    matrix will contain h blocks of size (h-1)x(h-1) consisting entirely of 1s. The details are
                    described in books by J. Seberry, but I could try to reproduce them here if there's interest.

                    The matrix in question is the h=8 version of this construction. Interestingly, the h=4
                    version produces the 12x12 Hadamard matrix, and negating one of its 3x3 blocks of 1s
                    and appending the appropriate row and column produces the record matrix in size 13.

                    I tried the same idea with h=12 to obtain a Hadamard matrix of size 132 with 11x11
                    blocks. Applying the analogous modifications produces a matrix of size 133 whose
                    determinant attains 84% of Barba's bound. This is quite good for a matrix so big. Other
                    "standard" constructions, such as maximal excess, cannot even in principle produce a
                    determinant so large.

                    There are other ways of getting Hadamard matrices with large blocks of 1s that don't rely
                    on skew Hadamard matrices, but the construction doesn't seem to work well on those
                    matrices. Clearly there are features of the skew Hadamard family besides large blocks of
                    1s that explain why they work so well.

                    One possible avenue to explore is the following: The "right" 56x56 matrix showed up
                    when Bertram deleted some rows and columns from the 64x64 Sylvester matrix. One
                    can regard the latter as the tensor product of an 8x8 Hadamard matrix with itself.
                    Perhaps deleting some rows and columns of other tensor product matrices will be similarly
                    productive. I would bet that the new matrix in order 133 comes from deleting rows and
                    columns from the tensor product of a 12x12 matrix with itself. To get other sizes one
                    might try tensoring Hadamard matrices of different sizes, such as an 8x8 with a 12x12.

                    -Will
                  • sterten@aol.com
                    Message 9 of 18, Apr 25, 2005
                       >Some small progress:
                       >
                       >The 7x7 block of 1s contained in the 56x56
                       >Hadamard matrix that gives rise to the new
                       >record is an unusual feature.  It appears that
                       >this particular Hadamard matrix is related to
                       >a "product" construction that uses "skew"
                       >Hadamard matrices.  A skew Hadamard matrix 
                       >is a Hadamard matrix that equals the negation
                       >of its transpose except for its diagonal,
                       >which is all +1s.  (In other words, H+H'=2I.)
                       
                       This could be viewed as an undirected graph, and
                       then the 7*7 blocks would be 7-cliques.
                       Define the "skew-determinant" of a graph G as det(G')
                       where G' has 1s on the diagonal and the skewed
                       values in the lower left triangle.
                       Then we are looking for graphs with maximal
                       skew-determinants. What graph-theoretic
                       properties will these have?
                       
                       >The construction takes a skew Hadamard
                       >matrix of size h and produces a new Hadamard
                       >matrix of size h(h-1).  The resulting
                       >matrix will contain h blocks of size (h-1)x(h-1)
                       >consisting entirely of 1s.  The details are
                       >described in books by J. Seberry, but I could
                       >try to reproduce them here if there's interest.
                       
                      I didn't find it in the wdwchap.ps on Seberry's page.
                       
                       Suppose you have a binary matrix and then replace
                       each 1 by a (h-1)*(h-1) matrix
                       consisting entirely of 1s (call it 1_(h-1)?)
                       and each 0 by a 0_(h-1).

                       Well, it's probably not so simple, else you would
                       have told us already.
                       Is the product matrix also skew? Could it help
                       if we (JC's database) restricted our search to skew matrices?
                       
                       >The matrix in question is the h=8 version of
                       >this construction.  Interestingly, the h=4
                       >version produces the 12x12 Hadamard matrix,
                       >and negating one of its 3x3 block of 1s
                       >and appending the appropriate row and column
                       >produces the record matrix in size 13.
                       >
                       >I tried the same idea with h=12 to obtain a
                       >Hadamard matrix of size 132 with 11x11
                       >blocks.  Applying the analogous modifications
                       >produces a matrix of size 133 whose
                       >determinant attains 84% of Barba's bound.
                       >This is quite good for a matrix so big.  Other
                       >"standard" constructions, such as maximal excess,
                       >cannot even in principle produce a
                       >determinant so large.
                       
                       
                       
                       >There are other ways of getting Hadamard matrices
                       >with large blocks of 1s that don't rely
                       >on skew Hadamard matrices, but the construction
                       >doesn't seem to work well on those
                       >matrices.  Clearly there are features of the
                       >skew Hadamard family besides large blocks of
                       >1s that explain why they work so well.
                       >
                       >One possible avenue to explore is the following:
                       >The "right" 56x56 matrix showed up
                       >when Bertram deleted some rows and columns
                       >from the 64x64 Sylvester matrix.
                       >One can regard the latter as the tensor product
                       
                       What's a tensor product? Is it the above construction?
                       
                       >of an 8x8 Hadamard matrix with itself. 
                       >Perhaps deleting some rows and columns of other
                       >tensor product matrices will be similarly
                       >productive.  I would bet that the new matrix
                       >in order 133 comes from deleting rows and
                       >columns from the tensor product of a 12x12 matrix
                       >with itself..  To get other sizes one might try
                       >tensoring Hadamard matrices of different sizes,
                       >such as an 8x8 with a 12x12.
                       >
                       >-Will
                       
                       This reminds me of Latin squares, n-queens solutions
                       and magic squares, where this construction is also often
                       used to produce larger squares with the same properties.

                       Another idea: couldn't we also measure the "amount of
                       Hadamardness" of a matrix (rather than the determinant)
                       by counting orthogonal row/column pairs or some such,
                       and then maximize that?
                       Hoping for better convergence than with the determinant.
                       

                      -Guenter
                    • jcmeyrignac
                      Message 10 of 18, Apr 25, 2005
                        --- In larscontest@yahoogroups.com, "w_orrick" <w_orrick@y...> wrote:
                        >
                        > I tried the same idea with h=12 to obtain a Hadamard matrix of size
                        > 132 with 11x11 blocks. Applying the analogous modifications produces
                        > a matrix of size 133 whose determinant attains 84% of Barba's bound.
                        > This is quite good for a matrix so big. Other "standard" constructions,
                        > such as maximal excess, cannot even in principle produce a
                        > determinant so large.
                        >
                        Nice result, along with your new 106 record!


                        Cutting consecutive rows and columns didn't produce any new results
                        here, and my optimizer is still trying to reduce the contest matrices
                        (it has only reached N=33).

                        Perhaps you could integrate Tomas' latest results on your site?
                        It seems that he's not progressing anymore.

                        JC
                      • w_orrick
                        Message 11 of 18, Apr 25, 2005
                          > >The construction takes a skew Hadamard
                          > >matrix of size h and produces a new Hadamard
                          > >matrix of size h(h-1). The resulting
                          > >matrix will contain h blocks of size (h-1)x(h-1)
                          > >consisting entirely of 1s. The details are
                          > >described in books by J. Seberry, but I could
                          > >try to reproduce them here if there's interest.
                          >
                          > I didn't find it in the wdwchap.ps on Seberry's page.

                          I should have given an exact reference. The construction I had
                          in mind doesn't seem to be discussed in any of the more recent
                          review articles. For complete details you need to go back to

                          W. D. Wallis, Anne Penfold Street, and Jennifer Seberry Wallis,
                          Combinatorics: Room squares, Sum-free sets, Hadamard
                          matrices, Lecture Notes in Mathematics 292, Springer-Verlag,
                          Berlin, 1972.

                          The relevant section is 8.2. I'll try to summarize.

                          First some background: An example of the tensor product
                          is the construction

                          A  A
                          A -A

                          You can regard this as

                            1   1
                                      x   A
                            1  -1

                          where I use "x" to indicate the tensor product, also
                          known as the Kronecker product.

                          In general, if G and H are Hadamard matrices, then

                          G x H

                          is another Hadamard matrix.

                                      g(11) H   g(12) H   ...   g(1n) H
                                         .         .               .
                          G x H  =       .         .               .
                                         .         .               .
                                      g(n1) H   g(n2) H   ...   g(nn) H


                          One can actually be more flexible: If G and
                          H1, H2, ... Hn are Hadamard matrices, then

                          g(11) H1   g(12) H2   ...   g(1n) Hn
                              .          .                .
                              .          .                .
                              .          .                .
                          g(n1) H1   g(n2) H2   ...   g(nn) Hn

                          is another Hadamard matrix, but is not, strictly
                          speaking, a tensor product.

                          The construction described by Seberry is the sum of
                          two tensor products:

                          U x Z + I x J

                          where

                          U = H - I
                          H = an (h x h) skew Hadamard matrix, normalized so
                              the first row is all 1s and the first column is all -1s.
                              (The top left corner is a 1.)
                          Z = the (h-1) x (h-1) matrix obtained by stripping off the
                              first row and column of H
                          I = the (h x h) identity matrix
                          J = the (h-1) x (h-1) matrix of 1s.

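For h = 4 the construction can be checked directly; a sketch (the particular 4x4 skew Hadamard matrix used in the test is my choice, one of several possible):

```cpp
#include <cassert>
#include <vector>

typedef std::vector<std::vector<int> > M;

// Build X = U x Z + I x J for an h x h skew Hadamard matrix H,
// where U = H - I, Z = H with its first row and column stripped,
// and J is the (h-1) x (h-1) all-ones matrix.
M seberry(const M& H) {
    int h = (int)H.size(), b = h - 1, n = h * b;
    M X(n, std::vector<int>(n));
    for (int i = 0; i < h; i++)
        for (int j = 0; j < h; j++) {
            int u = H[i][j] - (i == j ? 1 : 0);       // entry of U = H - I
            for (int k = 0; k < b; k++)
                for (int l = 0; l < b; l++)
                    X[i*b + k][j*b + l] = u * H[k+1][l+1] + (i == j ? 1 : 0);
        }
    return X;
}

// Check the Hadamard property of a +-1 matrix.
bool isHadamard(const M& X) {
    int n = (int)X.size();
    for (int r = 0; r < n; r++)
        for (int s = 0; s < n; s++) {
            int dot = 0;
            for (int c = 0; c < n; c++) dot += X[r][c] * X[s][c];
            if (dot != (r == s ? n : 0)) return false;
        }
    return true;
}
```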


                          -Will
                        • w_orrick
                            Message 12 of 18, Apr 25, 2005
                            > Perhaps could you integrate Tomas latests results on your site ?
                            > It seems that he's not progressing anymore.

                            Hope to have this done soon. The problem is that Tomas has set just too many records!

                            -Will
                          • sterten@aol.com
                              Message 13 of 18, Apr 26, 2005
                               >>  >The construction takes a skew Hadamard
                               >> >matrix of  size h and produces a new Hadamard
                               >> >matrix of size h(h-1).  The  resulting
                               >> >matrix will contain h blocks of size  (h-1)x(h-1)
                               >> >consisting entirely of 1s.  The details are 
                               >> >described in books by J. Seberry, but I could
                               >> >try to  reproduce them here if there's interest.
                               >> 
                               >> I didn't find it in the wdwchap.ps on Seberry's page.
                               >
                               >I should have given an exact reference.  The construction I had
                               >in mind doesn't seem to be discussed in any of the more recent
                               >review articles.  For complete details you need to go back to
                               >
                               >  W. D. Wallis, Anne Penfold Street, and Jennifer Seberry Wallis,
                               
                               Are W. D. Wallis and Jennifer Seberry Wallis related?
                               
                               >  Combinatorics: Room squares, Sum-free sets, Hadamard
                               >  matrices, Lecture Notes in Mathematics 292, Springer-Verlag,
                               >  Berlin, 1972.
                               >
                               >The relevant section is 8.2.  I'll try to summarize.
                               >
                               >First some background:  An example of the tensor product
                               >is the construction
                               >
                               >A    A
                               >
                               >A  -A
                               >
                               >You can regard this as
                               >
                               >  1   1
                               >              x   A
                               >  1  -1
                               >
                               >where I use "x" to indicate the tensor product, also
                               >known as the Kronecker product.
                               >
                               >In general, if G and H are Hadamard matrices, then
                               
                               G is m*m, H is n*n
                               
                               >  G x H
                               >
                               >is another Hadamard matrix.
                               >
                               >                  g(11) H    g(12) H  ...  g(1n) H
                               >                     .
                               >  G x H =       .
                               >                     .
                               >                 g(n1) H     g(n2) H ...   g(nn) H
                               >
                               
                              GxH is m*n * m*n
                              (GxH)(n*(i-1)+k,n*(j-1)+l)=G(i,j)*H(k,l)
                              i,j=1..m
                              k,l=1..n
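A quick sketch checking this index formula (0-based in the code, so the 1-based row n*(i-1)+k becomes i*n+k):

```cpp
#include <cassert>
#include <vector>

typedef std::vector<std::vector<int> > M;

// Kronecker (tensor) product of an m x m matrix G with an n x n matrix H:
// entry (i*n+k, j*n+l) of the result is G(i,j)*H(k,l).
M kron(const M& G, const M& H) {
    int m = (int)G.size(), n = (int)H.size();
    M K(m * n, std::vector<int>(m * n));
    for (int i = 0; i < m; i++)
        for (int j = 0; j < m; j++)
            for (int k = 0; k < n; k++)
                for (int l = 0; l < n; l++)
                    K[i*n + k][j*n + l] = G[i][j] * H[k][l];
    return K;
}
```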
                               
                               >One can actually be more flexible:  If G and
                               >H1, H2, ... Hn are Hadamard matrices, then
                               >
                               >  g(11) H1    g(12) H2   ...   g(1n) Hn
                               >     .
                               >     .
                               >     .
                               >  g(n1) H1   g(n2) H2   ...   g(nn) Hn
                               >
                               >is another Hadamard matrix, but is not, strictly
                               >speaking, a tensor product.
                               
                               (G x (H_1,...,H_n))(n*(i-1)+k,n*(j-1)+l)=G(i,j)*H_j(k,l)
                              i,j=1..m
                              k,l=1..n
                               
                               >The construction described by Seberry is the sum of
                               >two tensor products:
                               >
                               >  U x Z + I x J
                               >
                               >where
                               >
                               >  U = H-I
                               >  H = an (h x h) skew Hadamard matrix, normalized so
                               >         the first row is all 1s and the first columns is all -1s.
                               >         (The top left corner is a 1.)
                               >  Z = the (h-1) x (h-1) matrix obtained by stripping off the
                               >         first row and column of H
                               >  I = the (h x h) identity matrix
                               >  J = the (h-1) x (h-1) matrix of 1s.
                               

                               n=h
                               X is n*(n-1)  *  n*(n-1)
                               X = (H_n-I_n) x Z_{n-1}   +   I_n x 1_{n-1}
                               
                               X((n-1)*(i-1)+k,(n-1)*(j-1)+l)=(H(i,j)+(i=j))*H(k+1,l+1)) - (k=l)
                               
                              i,j=1..n
                              k,l=1..n-1
                               
                               >
                               >
                               >-Will
                               

                              -Guenter
                            • w_orrick
                                  Message 14 of 18, Apr 26, 2005
                                > >The construction described by Seberry is the sum of
                                > >two tensor products:
                                > >
                                > > U x Z + I x J
                                > >
                                > >where
                                > >
                                > > U = H-I
                                > > H = an (h x h) skew Hadamard matrix, normalized so
                                > > the first row is all 1s and the first columns is all -1s.
                                > > (The top left corner is a 1.)
                                > > Z = the (h-1) x (h-1) matrix obtained by stripping off the
                                > > first row and column of H
                                > > I = the (h x h) identity matrix
                                > > J = the (h-1) x (h-1) matrix of 1s.
                                >
                                >
                                > n=h
                                > X is n*(n-1) * n*(n-1)
                                > X = (H_n-I_n) x Z_{n-1} + I_n x 1_{n-1}
                                >
                                > X((n-1)*(i-1)+k,(n-1)*(j-1)+l)=(H(i,j)+(i=j))*H(k+1,l+1)) - (k=l)
                                >
                                > i,j=1..n
                                > k,l=1..n-1


                                Does the expression (i=j) take the value 1 when i=j and 0 otherwise? It looks to me like
                                the formula for the entries should be

                                 X((n-1)*(i-1)+k,(n-1)*(j-1)+l)=(H(i,j)-(i=j))*H(k+1,l+1) + (i=j)
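Reading (i=j) as a 0/1 indicator, the corrected formula can be checked numerically: built this way from a skew Hadamard matrix of order 4, the 12x12 result is again a Hadamard matrix (a sketch; the particular order-4 skew matrix is my choice of test case):

```cpp
#include <cassert>
#include <vector>

typedef std::vector<std::vector<int> > M;

// Build the n(n-1) x n(n-1) matrix directly from the entry formula
//   X((n-1)(i-1)+k,(n-1)(j-1)+l) = (H(i,j)-(i=j))*H(k+1,l+1) + (i=j),
// with (i=j) read as a 0/1 indicator and all indices 1-based.
M fromFormula(const M& H) {
    int n = (int)H.size(), b = n - 1;
    M X(n * b, std::vector<int>(n * b));
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= n; j++)
            for (int k = 1; k <= b; k++)
                for (int l = 1; l <= b; l++) {
                    int eq = (i == j) ? 1 : 0;
                    X[b*(i-1) + k-1][b*(j-1) + l-1] =
                        (H[i-1][j-1] - eq) * H[k][l] + eq;
                }
    return X;
}

// Check the Hadamard property of a +-1 matrix.
bool isHadamard(const M& X) {
    int n = (int)X.size();
    for (int r = 0; r < n; r++)
        for (int s = 0; s < n; s++) {
            int dot = 0;
            for (int c = 0; c < n; c++) dot += X[r][c] * X[s][c];
            if (dot != (r == s ? n : 0)) return false;
        }
    return true;
}
```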


                                -Will