Re: [larscontest] Re: New record
jcmeyrignac wrote:
> Just for reference, here is the original matrix (it may be useful to
> detect how it was built)
>
> (56, 9.50015e+032)

That was me.

The initial pattern is based on the construction
A  A
A -A
for Hadamard matrices of size 2^n.
It's only good near powers of 2 though, which was to be expected.
Code:

int o = random() % (((N-1)|(N>>1)|(N>>2)|(N>>3)|(N>>4)|(N>>5))+1-N);
for (int i=0; i<N; i++)
    for (int j=0; j<N; j++) {
        int k = i+1;
        int l = j+1;
        if (j>N/2)
            l = l+o;
        int m = 0;
        while (k) {
            m ^= k&l&1;
            k>>=1;
            l>>=1;
        }
        weight += matrix[i][j] = m;
    }
if (random()%2) {
    for (int i=0; i<N; i++)
        std::swap(matrix[0][i], matrix[1][i]);
}
HTH,
Bertram

--- Bertram Felgenhauer wrote:
> That was me.
>
> The initial pattern is based on the construction
> A  A
> A -A
> for Hadamard matrices of size 2^n.
>
> It's only good near powers of 2 though, which was to be expected.

Very interesting. Congratulations to JCM and to Bertram Felgenhauer! The new record
matrix has a fascinating structure. If you convert it to a 57x57 {-1,1} matrix you find that
the new record is closely related to a particular 56x56 Hadamard matrix. When suitably
normalized, this Hadamard matrix contains a 7x7 submatrix consisting entirely of 1s.
By negating this submatrix you obtain a matrix which is no longer Hadamard; its
determinant is 75% of the Hadamard bound, but by appending an appropriate extra
row and column you get this spectacular new record. The determinant value is 89%
of Barba's bound.
Bertram:
I'm very interested in where this Hadamard matrix came from. Unfortunately I wasn't
quite able to understand your algorithm. Can you say more?
Will

--- w_orrick wrote:
> Bertram:
> I'm very interested in where this Hadamard matrix came from.
> Unfortunately I wasn't quite able to understand your algorithm.
> Can you say more?

Ok, in effect, I started with the 64x64 Hadamard matrix obtained
from the above construction, and then took rows 1..57 (leaving out
the last 7 rows) and columns 1..29, 37..64 (leaving out 7 columns
in the center) of that, and then converted that to a {0,1}-matrix.

The code I posted tried a few other matrices, taking columns 1..29
and x..x+27 for x in 30 to 37, but starting at 37 turned out to be
the only value with a positive determinant of the resultant matrix
(oops).
This matrix is actually not very good; its determinant is 1.014E+31.
This step was followed by simple greedy hillclimbing (changing one
of the entries in the matrix which gave maximum improvement for the
determinant) with a best resultant matrix with determinant
9.65944E+32 (as far as I remember). The matrix JCM started with
is also a result of this process.
JCM's optimization program did a much better job than mine did.
So I don't deserve any credit for this result either.
Bertram

--- In larscontest@yahoogroups.com, Bertram Felgenhauer <bf3@m...> wrote:
> Ok, in effect, I started with the 64x64 Hadamard matrix obtained
> from the above construction, and then took rows 1..57 (leaving out
> the last 7 rows) and columns 1..29, 37..64 (leaving out 7 columns
> in the center) of that, and then converted that to a {0,1}-matrix.
>
> The code I posted tried a few other matrices, taking columns 1..29,
> and x..x+27 for x in 30 to 37 but starting at 37 turned out to be
> the only value with a positive determinant of the resultant matrix
> (oops).
>
> This matrix is actually not very good; its determinant is 1.014E+31.
>
> This step was followed by simple greedy hillclimbing (changing one
> of the entries in the matrix which gave maximum improvement for the
> determinant) with a best resultant matrix with determinant
> 9.65944E+32 (as far as I remember). The matrix JCM started with
> is also a result of this process.
>
> JCM's optimization program did a much better job than mine did.
>
> So I don't deserve any credit for this result either.

Bertram,

Thanks for the explanation. I gather from what you said previously that this method gave
good results in other orders close to 64. Order 61 (-1/+1, which is order 60 for 0/1) is of
particular interest. The best determinant value known in that order cannot be improved,
but nevertheless I'm curious how well your method did. I guess you also did well near
order 32?
There must be something interesting going on with your construction. My experience with
hill climbing in higher orders (above order 22 or 23, say) is that it only does well with a
good starting matrix, so I believe that your matrix was the key here. Many thousands of
Hadamard matrices of order 56 are known, and it's easy to generate millions more. I
spent quite a bit of time running various search and hill climbing programs, starting from
56x56 Hadamard matrices augmented in various ways, to try to improve the value in order
57, and I only achieved modest success. There are several senses in which the new record
matrix has optimal properties that all previous searches missed.
As for the subsequent optimization, it may be that hill climbing works better if you modify
entire rows or columns at a time, rather than single entries. Perhaps JCM can say how his
program works.
Will

--- w_orrick wrote:
> As for the subsequent optimization, it may be that hill climbing works
> better if you modify entire rows or columns at a time, rather than
> single entries. Perhaps JCM can say how his program works.

My program is very simple.
It optimizes the matrices with the following methods:
1) full inversion of the matrix.
2) inversion of every row and every column. When an inversion doesn't
increase the score, it is undone, otherwise the inversion is kept.
3) freezing pass. This process was designed by Jaroslaw Wroblewski, and I
already gave its source code.
4) inversion of every single cell. When an inversion doesn't increase the
score, it's undone, otherwise the inversion is kept.
For N=56, it seems that the freezing pass is useless, but column/row
inversion is very important !
If you know other useful transformations for the hill climbing, I'll be glad
to include them in my program. Inverting blocks seems a good idea (for
example 7x7 blocks on N=56), but I have not yet tested it.
I previously implemented an algorithm similar to Bertram's to optimize the
matrices (that is: invert the cell that gives the biggest score), but the
optimization was too slow. The current method works very well.
I'm currently exploring signed matrices truncation.
It's basically Bertram's idea:
- take the 64x64 Sylvester matrix
- for every N, remove 64-N consecutive columns and rows, and optimize the
resulting matrix.
For N=57, my program found 2 record matrices at 1.17303e+33 by removing:
- columns 24-30 and lines 5-11
- columns 44-50 and lines 17-23
(maybe some other removals can give the same score, since we cannot trust
the precision of the current computation)
Attached, a zip file containing the program and the 2 signed matrices.
The program started at N=59, and after one day of computation, it's
currently at N=57. Perhaps we'll get a new solution during this weekend.
The reduction and optimization of the contest matrices is still running on 4
Pentiums, but chances are slim to get a new result.
Finally, starting with a matrix different from Sylvester's could lead to
better results, but the process is very slow.
JC

How about combining two good matrices by inserting a subrectangle (or square) from one
good matrix into the other one? The matrices can be of different size. Just an idea, I
didn't try it. Do you have statistics on which rectangles occur more than once in the
matrices from your database?
--- Jean-Charles Meyrignac wrote:
> My program is very simple.
>
> It optimizes the matrices with the following methods:
> 1) full inversion of the matrix.
> 2) inversion of every row and every column. When an inversion doesn't
> increase the score, it is undone, otherwise the inversion is kept.

These are useless in Bertram.c, because you work in the -1/+1-matrix
domain; all these steps do is change the sign of the determinant.
When working with 0/1-matrices, these correspond to changing the
top left entry (step 1) or another entry of the first column
or row (step 2) in the associated -1/+1-matrix, which are important.
In fact, it's because these two steps were missing that my program
failed to find the 1.17303e+33 record.
--- Jean-Charles Meyrignac wrote:
> I previously implemented an algorithm similar to Bertram's to optimize the
> matrices (that is: invert the cell that gives the biggest score), but the
> optimization was too slow. The current method works very well.

I calculate the adjoint matrix and pick the biggest value (with sign
adjusted according to the respective entry in the transposed original
matrix) from there ... that works quite well.
For N=60 my program gets to 4.2855e+35.
Thanks,
Bertram

Some small progress:
The 7x7 block of 1s contained in the 56x56 Hadamard matrix that gives rise to the new
record is an unusual feature. It appears that this particular Hadamard matrix is related to
a "product" construction that uses "skew" Hadamard matrices. A skew Hadamard matrix
is a Hadamard matrix that equals the negation of its transpose except for its diagonal,
which is all +1s. (In other words, H+H'=2I.) The construction takes a skew Hadamard
matrix of size h and produces a new Hadamard matrix of size h(h-1). The resulting
matrix will contain h blocks of size (h-1)x(h-1) consisting entirely of 1s. The details are
described in books by J. Seberry, but I could try to reproduce them here if there's interest.
The matrix in question is the h=8 version of this construction. Interestingly, the h=4
version produces the 12x12 Hadamard matrix, and negating one of its 3x3 blocks of 1s
and appending the appropriate row and column produces the record matrix in size 13.
I tried the same idea with h=12 to obtain a Hadamard matrix of size 132 with 11x11
blocks. Applying the analogous modifications produces a matrix of size 133 whose
determinant attains 84% of Barba's bound. This is quite good for a matrix so big. Other
"standard" constructions, such as maximal excess, cannot even in principle produce a
determinant so large.
There are other ways of getting Hadamard matrices with large blocks of 1s that don't rely
on skew Hadamard matrices, but the construction doesn't seem to work well on those
matrices. Clearly there are features of the skew Hadamard family besides large blocks of
1s that explain why they work so well.
One possible avenue to explore is the following: The "right" 56x56 matrix showed up
when Bertram deleted some rows and columns from the 64x64 Sylvester matrix. One
can regard the latter as the tensor product of an 8x8 Hadamard matrix with itself.
Perhaps deleting some rows and columns of other tensor product matrices will be similarly
productive. I would bet that the new matrix in order 133 comes from deleting rows and
columns from the tensor product of a 12x12 matrix with itself. To get other sizes one
might try tensoring Hadamard matrices of different sizes, such as an 8x8 with a 12x12.
Will

>Some small progress:
>
>The 7x7 block of 1s contained in the 56x56
>Hadamard matrix that gives rise to the new
>record is an unusual feature. It appears that
>this particular Hadamard matrix is related to
>a "product" construction that uses "skew"
>Hadamard matrices. A skew Hadamard matrix
>is a Hadamard matrix that equals the negation
>of its transpose except for its diagonal,
>which is all +1s. (In other words, H+H'=2I.)

This could be viewed as an undirected graph and
then the 7*7 blocks were 7-cliques.
Define the "skew-determinant" of a graph G as det(G')
where G' has 1s on the diagonal and the skewed
values in the lower left triangle.
Then we are looking for graphs with max.
skew-determinants. What graph-theoretic
properties will these have?

>The construction takes a skew Hadamard
>matrix of size h and produces a new Hadamard
>matrix of size h(h1). The resulting
>matrix will contain h blocks of size (h1)x(h1)
>consisting entirely of 1s. The details are
>described in books by J. Seberry, but I could
>try to reproduce them here if there's interest.

I didn't find it in the wdwchap.ps on Seberry's page.

Suppose you have a binary matrix and then replace
each 1 by a (h-1)*(h-1) matrix
consisting entirely of 1s (call it 1_(h-1) ?)
and each 0 by a 0_(h-1). Well, it's probably not so simple, else you would
have told us already.
Is the product matrix also skew? Could it help
if we (JC's database) restrict our search to skew matrices?

>The matrix in question is the h=8 version of
>this construction. Interestingly, the h=4
>version produces the 12x12 Hadamard matrix,
>and negating one of its 3x3 block of 1s
>and appending the appropriate row and column
>produces the record matrix in size 13.
>
>I tried the same idea with h=12 to obtain a
>Hadamard matrix of size 132 with 11x11
>blocks. Applying the analogous modifications
>produces a matrix of size 133 whose
>determinant attains 84% of Barba's bound.
>This is quite good for a matrix so big. Other
>"standard" constructions, such as maximal excess,
>cannot even in principle produce a
>determinant so large.

>There are other ways of getting Hadamard matrices
>with large blocks of 1s that don't rely
>on skew Hadamard matrices, but the construction
>doesn't seem to work well on those
>matrices. Clearly there are features of the
>skew Hadamard family besides large blocks of
>1s that explain why they work so well.
>
>One possible avenue to explore is the following:
>The "right" 56x56 matrix showed up
>when Bertram deleted some rows and columns
>from the 64x64 Sylvester matrix.
>One can regard the latter as the tensor product

What's a tensor product? Is it the above construction?

>of an 8x8 Hadamard matrix with itself.
>Perhaps deleting some rows and columns of other
>tensor product matrices will be similarly
>productive. I would bet that the new matrix
>in order 133 comes from deleting rows and
>columns from the tensor product of a 12x12 matrix
>with itself.. To get other sizes one might try
>tensoring Hadamard matrices of different sizes,
>such as an 8x8 with a 12x12.
>
>Will

Reminds me of latin squares or n-queen solutions
or magic squares, where this construction is also often
used to produce larger squares with the same properties.

Another idea: couldn't we also measure the "amount of Hadamardness
of a matrix" (rather than the determinant) by counting orthogonal
row-col pairs or such and then maximize this?
Hoping for better convergence than with determinant
Guenter

--- In larscontest@yahoogroups.com, "w_orrick" <w_orrick@y...> wrote:
>
> I tried the same idea with h=12 to obtain a Hadamard matrix of size
> 132 with 11x11 blocks. Applying the analogous modifications produces
> a matrix of size 133 whose determinant attains 84% of Barba's bound.
> This is quite good for a matrix so big. Other "standard" constructions,
> such as maximal excess, cannot even in principle produce a
> determinant so large.
Nice result, along with your new 106 record !
Cutting consecutive rows and columns didn't produce any new result
here, and my optimizer is always trying to reduce contest matrices (it
only reached N=33).
Perhaps you could integrate Tomas' latest results on your site?
It seems that he's not progressing anymore.
JC

--- Guenter wrote:
> >The construction takes a skew Hadamard
> >matrix of size h and produces a new Hadamard
> >matrix of size h(h-1). The resulting
> >matrix will contain h blocks of size (h-1)x(h-1)
> >consisting entirely of 1s. The details are
> >described in books by J. Seberry, but I could
> >try to reproduce them here if there's interest.
>
> I didn't find it in the wdwchap.ps on Seberry's page.

I should have given an exact reference. The construction I had
in mind doesn't seem to be discussed in any of the more recent
review articles. For complete details you need to go back to
W. D. Wallis, Anne Penfold Street, and Jennifer Seberry Wallis,
Combinatorics: Room squares, Sum-free sets, Hadamard
matrices, Lecture Notes in Mathematics 292, Springer-Verlag,
Berlin, 1972.
The relevant section is 8.2. I'll try to summarize.
First some background: An example of the tensor product
is the construction
A  A
A -A

You can regard this as

1  1
1 -1  x  A
where I use "x" to indicate the tensor product, also
known as the Kronecker product.
In general, if G and H are Hadamard matrices, then
G x H
is another Hadamard matrix.
        g(11) H  g(12) H ... g(1n) H
          .
G x H =   .
          .
        g(n1) H  g(n2) H ... g(nn) H
One can actually be more flexible: If G and
H1, H2, ... Hn are Hadamard matrices, then
g(11) H1 g(12) H2 ... g(1n) Hn
.
.
.
g(n1) H1 g(n2) H2 ... g(nn) Hn
is another Hadamard matrix, but is not, strictly
speaking, a tensor product.
The construction described by Seberry is the sum of
two tensor products:
U x Z + I x J
where
U = H - I
H = an (h x h) skew Hadamard matrix, normalized so
the first row is all 1s and the first column is all -1s.
(The top left corner is a 1.)
Z = the (h-1) x (h-1) matrix obtained by stripping off the
first row and column of H
I = the (h x h) identity matrix
J = the (h-1) x (h-1) matrix of 1s.
Will

--- Jean-Charles Meyrignac wrote:
> Perhaps you could integrate Tomas' latest results on your site?
> It seems that he's not progressing anymore.

Hope to have this done soon. The problem is that Tomas has set just too many records!
Will

>> >The construction takes a skew Hadamard
>> >matrix of size h and produces a new Hadamard
>> >matrix of size h(h-1). The resulting
>> >matrix will contain h blocks of size (h-1)x(h-1)
>> >consisting entirely of 1s. The details are
>> >described in books by J. Seberry, but I could
>> >try to reproduce them here if there's interest.
>>
>> I didn't find it in the wdwchap.ps on Seberry's page.
>
>I should have given an exact reference. The construction I had
>in mind doesn't seem to be discussed in any of the more recent
>review articles. For complete details you need to go back to
>
> W. D. Wallis, Anne Penfold Street, and Jennifer Seberry Wallis,
> Combinatorics: Room squares, Sum-free sets, Hadamard
> matrices, Lecture Notes in Mathematics 292, Springer-Verlag,
> Berlin, 1972.

Are W. D. Wallis and Jennifer Seberry Wallis related?
>
>The relevant section is 8.2. I'll try to summarize.
>
>First some background: An example of the tensor product
>is the construction
>
>A  A
>A -A
>
>You can regard this as
>
> 1  1
> 1 -1  x  A
>
>where I use "x" to indicate the tensor product, also
>known as the Kronecker product.
>
>In general, if G and H are Hadamard matrices, then

G is m*m, H is n*n

> G x H
>
>is another Hadamard matrix.
>
>         g(11) H  g(12) H ... g(1n) H
>           .
> G x H =   .
>           .
>         g(n1) H  g(n2) H ... g(nn) H

GxH is m*n x m*n
(GxH)(n*(i-1)+k, n*(j-1)+l) = G(i,j)*H(k,l)
i,j = 1..m
k,l = 1..n

>One can actually be more flexible: If G and
>H1, H2, ... Hn are Hadamard matrices, then
>
> g(11) H1 g(12) H2 ... g(1n) Hn
> .
> .
> .
> g(n1) H1 g(n2) H2 ... g(nn) Hn
>
>is another Hadamard matrix, but is not, strictly
>speaking, a tensor product.

(G x (H_j))(n*(i-1)+k, n*(j-1)+l) = G(i,j)*H_j(k,l)
i,j = 1..m
k,l = 1..n

>The construction described by Seberry is the sum of
>two tensor products:
>
> U x Z + I x J
>
>where
>
> U = HI
> H = an (h x h) skew Hadamard matrix, normalized so
> the first row is all 1s and the first columns is all 1s.
> (The top left corner is a 1.)
> Z = the (h1) x (h1) matrix obtained by stripping off the
> first row and column of H
> I = the (h x h) identity matrix
> J = the (h1) x (h1) matrix of 1s.
n = h
X is n*(n-1) x n*(n-1)
X = (H_n - I_n) x Z_{n-1} + I_n x 1_{n-1}

X((n-1)*(i-1)+k, (n-1)*(j-1)+l) = (H(i,j)+(i=j))*H(k+1,l+1) - (k=l)

i,j = 1..n
k,l = 1..n-1
>
>Will
Guenter

> >The construction described by Seberry is the sum of
> >two tensor products:
> >
> > U x Z + I x J
> >
> >where
> >
> > U = H - I
> > H = an (h x h) skew Hadamard matrix, normalized so
> > the first row is all 1s and the first column is all -1s.
> > (The top left corner is a 1.)
> > Z = the (h-1) x (h-1) matrix obtained by stripping off the
> > first row and column of H
> > I = the (h x h) identity matrix
> > J = the (h-1) x (h-1) matrix of 1s.
>
> n = h
> X is n*(n-1) x n*(n-1)
> X = (H_n - I_n) x Z_{n-1} + I_n x 1_{n-1}
>
> X((n-1)*(i-1)+k, (n-1)*(j-1)+l) = (H(i,j)+(i=j))*H(k+1,l+1) - (k=l)
>
> i,j = 1..n
> k,l = 1..n-1

Does the expression (i=j) take the value 1 when i=j and 0 otherwise? It looks to me like
the formula for the entries should be

X((n-1)*(i-1)+k, (n-1)*(j-1)+l) = (H(i,j)-(i=j))*H(k+1,l+1) + (i=j)
Will