Speed Solving Rubik's Cube - All about speed solving the Rubik's Cube - Public Group, 2883 members

Re: [Speed cubing group] Statistical analysis of (mostly megaminx) scramblers
 Hi Ron,
I did not test the old scramble program, mostly because the sticker
tracking code would have to be modified significantly. Honestly, now
that I think of it, I could probably manage the code in an hour or so;
keep an eye out for it in the next few days.
And if anyone is curious, tracking the 1000 move sequences 1,000,000
times, essentially tracking 1 billion (that's a US billion) megaminx
twists, took around 45 minutes. For the cube, 1 billion twists took
about 20 minutes if I remember correctly.
Best,
Daniel
In speedsolvingrubikscube@yahoogroups.com, "Ron van Bruchem"
<ron@...> wrote:
> Hi Daniel,
>
> Thanks! Very interesting post.
>
> Did you also investigate the results of the old Megaminx scramble
> program?
> Or is that the 3x3x3 analysis that you did?
http://www.worldcubeassociation.org/regulations/scrambles/scramble_megaminx.htm
>
>
> Have fun,
>
> Ron

In speedsolvingrubikscube@yahoogroups.com, "Daniel Hayes" <swedishlf@...> wrote:
> Megaminx:
> WCA Scrambler
> Turns  Std Dev  Expected Std Dev  Confidence
> 10  108483.94  279  0.3%
> 50  21986.84  279  0.9%
> 70  12559.95  279  1.7% *official scramble length
> 100  5546.20  279  5.0%
> 150  1451.23  279  19.2%
> 200  469.90  279  59.4%
> 210  398.26  279  70.1%
> 220  351.96  279  79.3%
> 230  312.99  279  89.2%
> 240  307.22  279  90.9%
> 250  294.32  279  94.8%
> 260  285.37  279  97.8%
> 270  280.47  279  99.5%
> 300  278.30  279  99.7%
> 400  275.66  279  98.7%
> 500  269.56  279  96.6%
> 1000  264.62  279  94.8%

So 270 turns are considerably better than 1000? That seems wrong to
me, along with the standard deviation dropping considerably below the
expected one.

> 3x3x3 Cube, generic scrambler (avoids redundant turns, etc):
> Turns  Std Dev  Expected Std Dev  Confidence
> 1  264953.60  360  0.1%
> 10  26320.07  360  1.4%
> 20  4274.47  360  8.4%
> 30  794.28  360  45.3%
> 35  514.75  360  69.9%
> 40  387.39  360  92.9%
> 45  347.40  360  96.5%
> 50  341.04  360  94.7%
> 75  350.58  360  97.4%
> 100  372.78  360  96.6%
> 500  347.09  360  96.4%
> 1000  355.14  360  98.7%

Same strange behaviour as the megaminx analysis, but in addition,
here the standard deviation repeatedly jumps up and down at the end.

Cheers!
Stefan

I have a bit of statistical background. What you have done so far
seems promising, and is well presented, but with how the values jump
lower sometimes when the scramble length goes up, I think it's an
indication that one of your initial premises is not as sound as it
could be.
It does discount the variable-length scramble approach though, or at
least I'm convinced.
I wouldn't be surprised to see a few minor jumps in the negative
direction, but I'm concerned with how far it jumps and how often.
I think that the problem is with your original assumption (quoted
below). It even sounds like something a non-cuber math geek from
xkcd would come up with: to look at sticker distributions. And
although it would yield rough results that are useful to some extent,
I believe comparing, for instance, how often every corner piece lands in
every location (CP itself) might yield more evened-out results.
Depending on how the programs are written, this could be very hard to
change up. You could have separate charts for CP, EP, CO, EO. (Although
I'm not sure of how to designate orientations on Megaminx.)
Minor point: why is the line for 25 turns missing from the 3x3 chart?
That's the most important one!
Doug
In speedsolvingrubikscube@yahoogroups.com, "Daniel Hayes"
<swedishlf@...> wrote:
> The test I conducted was the simplest I could imagine, I applied
> scrambles to the puzzle and kept track of how often each color landed
> in each position. The assumption: Given a scramble algorithm
> generator that generates n move scrambling algorithms, if we take a
> large number of solved puzzles and apply those algorithms, each
> position should end up with each color a roughly equal number of
> times.
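Daniel's counting test can be sketched in a few lines. This is a hypothetical stand-in, not his actual tracker: a plain random shuffle plays the role of an n-move scramble, and the 48-position/6-color layout is only there to illustrate the bookkeeping.

```python
import random
import statistics

# Toy stand-in: 48 sticker positions, 8 stickers of each of 6 colors.
# Daniel's real tracker applies actual n-move scrambles; here a plain
# random shuffle stands in for one, purely to show the bookkeeping.
COLORS = "ROGBYW"
SOLVED = [c for c in COLORS for _ in range(8)]  # 48 stickers

def sticker_flatness(trials, scramble=random.shuffle):
    # counts[pos][color] = how often that color landed in that position
    counts = [{c: 0 for c in COLORS} for _ in range(len(SOLVED))]
    for _ in range(trials):
        stickers = SOLVED[:]
        scramble(stickers)                 # stand-in for one scramble
        for pos, color in enumerate(stickers):
            counts[pos][color] += 1
    # Ignore positions and take the std dev of all the counts.
    flat = [n for row in counts for n in row.values()]
    return statistics.pstdev(flat)
```

A perfectly flat scrambler drives this toward roughly sqrt(trials * (1/6) * (5/6)), while a "scrambler" that never moves anything pins it at trials * sqrt(5)/6.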
Doug,
The 25 case:
StdDev = 1838.03
CI = 19.6%
I didn't include it because it still seemed pretty far from random.
My actual data includes many more points than what I posted here; I
was just trying to isolate some of the more interesting ones (i.e.,
extremes and when the std dev began to settle down).
As far as the bouncing around is concerned, I agree with you and
Stefan, it seems to be less consistent than it should be. Perhaps I
should run a few longer scrambles (2000-5000 perhaps) and see if the
behavior continues.
However, my understanding is not that higher confidence = better
scramble (as in the 270 vs 1000 case). As I tried to articulate in my
post, confidence is just an indicator; we pick some value (by
convention 95%) and say the following:
Hypothesis: this scramble generator is truly random at this length.
Test: Given that I project 1 million truly random scrambles to
generate a standard deviation of about 370, what is the probability
that 1 million truly random scrambles would give me the observed
standard deviation?
If that probability is less than 95% (or our threshold) then we reject
the hypothesis that the scramble generator is truly random. If it is
>= 95%, we choose not to reject that hypothesis.
Thus the confidence is just an answer to the question "Is this a bad
approximation of random?"
Not an answer to the question "How good is this at approximating random?"
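The "confidence" figures in the tables behave like a simple ratio of the smaller standard deviation to the larger one. A sketch of that reading (my inference from the published rows, not Daniel's actual code):

```python
def confidence(observed_std, expected_std):
    # "How close is the observed spread to the expected spread?",
    # symmetric so that overshooting and undershooting both count
    # against the scrambler. Reproduces e.g. 294.32 vs 279 -> ~94.8%
    # and 269.56 vs 279 -> ~96.6% from the megaminx table.
    lo, hi = sorted((observed_std, expected_std))
    return 100.0 * lo / hi
```

This matches Daniel's point above: the ratio only flags "clearly not random yet"; once observed and expected overlap, small wobbles in the ratio do not rank scramblers against each other.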
I do find it odd that the 1000 move scramble fails this test for the WCA
scrambler though, and agree that there may be a problem with my basic
assumptions. It could also be that I did not directly calculate the
expected std dev, and instead used observed values.
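On directly calculating the expected std dev: if each position gets each color uniformly at random, every (position, color) count is roughly Binomial(trials, 1/colors), which suggests a closed form. This is a hedged back-of-the-envelope, not a verified derivation, but it lands close to the 360 and 279 baselines used in the tables:

```python
import math

def expected_std(trials, colors):
    # Std dev of a Binomial(trials, p) count with p = 1/colors.
    p = 1.0 / colors
    return math.sqrt(trials * p * (1.0 - p))

print(expected_std(1_000_000, 6))   # 3x3x3, 6 colors: ~372.7
print(expected_std(1_000_000, 12))  # megaminx, 12 colors: ~276.4
```

That the observed long-scramble averages (360 and 279) sit near these values lends some support to the sticker-counting setup.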
I briefly considered tracking position and orientation of pieces
instead of stickers, but the problem was, as you say, that on the megaminx I
couldn't think of a good way to determine orientation. Stefan, you
had to determine orientation when you BLD solved it; could you share how?
I will begin implementing the piece / orientation tracking for the
cube though and see what the results look like there.
Thanks for the feedback guys. If you come up with any more
suggestions, do share!
Best,
Daniel
Btw, the XKCD thread:
http://forums.xkcd.com/viewtopic.php?f=17&t=21433
In speedsolvingrubikscube@yahoogroups.com, d_funny007
<no_reply@...> wrote:
> I have a bit of statistical background. What you have done so far
> seems promising, and is well presented, but with how the values jump
> lower sometimes when the scramble length goes up, I think it's an
> indication that one of your initial premises is not as sound as it
> could be.
>
> It does discount the variable-length scramble approach though, or at
> least I'm convinced.
>
> I wouldn't be surprised to see a few minor jumps in the negative
> direction, but I'm concerned with how far it jumps and how often.
>
> I think that the problem is with your original assumption (quoted
> below). It even sounds like something a non-cuber math geek from
> xkcd would come up with: to look at sticker distributions. And
> although it would yield rough results that are useful to some extent,
> I believe comparing, for instance, how often every corner piece lands in
> every location (CP itself) might yield more evened-out results.
> Depending on how the programs are written, this could be very hard to
> change up. You could have separate charts for CP, EP, CO, EO. (Although
> I'm not sure of how to designate orientations on Megaminx.)
>
> Minor point: why is the line for 25 turns missing from the 3x3 chart?
> That's the most important one!
>
> Doug
>
> In speedsolvingrubikscube@yahoogroups.com, "Daniel Hayes"
> <swedishlf@> wrote:
> > The test I conducted was the simplest I could imagine, I applied
> > scrambles to the puzzle and kept track of how often each color landed
> > in each position. The assumption: Given a scramble algorithm
> > generator that generates n move scrambling algorithms, if we take a
> > large number of solved puzzles and apply those algorithms, each
> > position should end up with each color a roughly equal number of
> > times.
Okay, I've now fully read through your post, and I don't see anything
wrong statistically with the number crunching, except, like I said
before, the idea of sticker distribution is not ideal, but it's very
practical, quick, and easy. One obvious problem is that if, say, a
corner piece is in place and oriented, it somehow triple counts in
this scheme, and it's unclear what a more 'correct/useful' *weight*
should be for it. This is a simple method of tracking, because it
pays no attention to distinguishing pieces as corners or as edges,
which may or may not be problematic as well.
One thing I'd really like to see done is to make a simple tweak of
your program to do only 2/5 turns and not 1/5 turns and post the
results here side by side with the other one. This is because I am
very curious about Stefan's assertion that 2/5 turns are so much
superior that doing 1/5 is a waste. Initially, I thought this to be
counterintuitive, but everything I've found so far agrees with the
claim. Performing your analysis on it would be enough to put that
debate to rest for me, so I would appreciate it.
The other thing I am not 100% sure of is something you pointed out.
You are using observed values, albeit out of 1 million trials, and
the actual SD values can not be realistically calculated. What
happens if you change this to 2 million? Well, if you are curious
about a trend, then I suggest somehow estimating the *limit*. Try it
at 250000, 500000, 1000000, 2000000, 4000000 and see if it settles.
Also, you could rerun them at 1 million trials, and possibly get
different results. Compare the two data sets and see if 1 million was
significant.
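Doug's limit-estimation idea can be sketched on a stand-in scrambler that is uniform by construction (each position independently gets a random color): rerun at growing trial counts and watch observed/expected settle toward 1. The trial counts below are scaled-down stand-ins for Doug's 250000..4000000, purely for illustration:

```python
import math
import random
import statistics

def std_ratio(trials, positions=48, colors=6, seed=0):
    # Stand-in for the sticker tracker: each position independently
    # draws a uniform color per trial, so this scrambler is "random"
    # by construction and the ratio should hover near 1.
    rng = random.Random(seed)
    counts = [[0] * colors for _ in range(positions)]
    for _ in range(trials):
        for row in counts:
            row[rng.randrange(colors)] += 1
    flat = [n for row in counts for n in row]
    p = 1.0 / colors
    expected = math.sqrt(trials * p * (1.0 - p))
    return statistics.pstdev(flat) / expected

for t in (250, 500, 1000, 2000):
    print(t, round(std_ratio(t), 3))
```

If the ratio keeps drifting rather than settling as trials double, that would point at the estimation, not the scrambler.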
Anyhow, I have some time to code these days; if there's some
tedious/bulk/manual entering of tables for various things, I'd be
happy to do it if given an example.
Doug
In speedsolvingrubikscube@yahoogroups.com, "Daniel Hayes"
<swedishlf@...> wrote:
> I am NOT a statistician, but I do have a strong math background. If
> you notice any errors in my reasoning or methodology, please tell
> me.
>
> After busting out the stats book and talking to some of the smart folk
> at the xkcd message boards, I developed a rudimentary test for
> comparing the effectiveness of scramblers. I have since conducted a
> few experiments to compare the WCA megaminx scrambler and a scrambler
> which behaves a little differently. (Also tested a standard 3x3x3
> scrambler). I will explain the methodology first, so if you just want
> the results, skip to the end.
>
> The test I conducted was the simplest I could imagine, I applied
> scrambles to the puzzle and kept track of how often each color landed
> in each position. The assumption: Given a scramble algorithm
> generator that generates n move scrambling algorithms, if we take a
> large number of solved puzzles and apply those algorithms, each
> position should end up with each color a roughly equal number of
> times.
>
> That is, if an n move scramble from a generator truly approximates
> random, all colors should have equal probability of showing up in all
> locations. Essentially this is a test for how flat the distribution
> of colors over locations is.
>
> Methodology: Apply many scrambles to a cube and track how often each
> color shows up in each position. The standard deviation of this list
> should tell us approximately how well the scrambles approximate true
> random.
> So a list would look like this for 10 scrambles on a cube:
> Position  R O G B Y W
> 0  3 2 1 2 2 0
> 1  4 1 4 0 0 1
> ....
> 47  1 1 2 1 3 2
>
> Ignore the position column and take the standard deviation of the
> rest. Centers are ignored as well since they are always the same
> after scrambling. On the megaminx, after a scramble was applied, the
> puzzle was reoriented so that the same center color was on top and the
> same was on the front face every time; this prevents puzzle rotations
> skewing the results.
>
> The expected standard deviations which are used as a baseline were NOT
> calculated directly, as I could not figure out how. Instead, they
> were taken from an average of very long scramble algorithms which were
> presumed to be random based on the data at hand. This bugs me, so if
> you can figure out how to directly calculate the expected stdDev,
> please let me know.
>
> On to the results:
> Each of these cases is conducted with 1 million (10^6) trials. Turns
> is the number of non-puzzle-rotating turns (except in the WCA megaminx
> scrambler case, where puzzle rotations are counted). StdDev is the
> observed standard deviation. Expected std dev is the standard
> deviation which a truly random scramble generator should give.
> Confidence is the % confidence that this scrambler / length
> combination approximates true random in the long term (essentially the
> ratio of expected stdDev to observed.) The standard practice I
> believe is to take 95% confidence as the threshold for rejecting the
> null hypothesis that sigma = s. Thus if the confidence is less than
> 95%, by convention we can assume the scramble is not truly random. In
> practice, I would be willing to go as low as 93% or so. The custom
> 3x3x3 and megaminx scramblers are the same as from the excel sheet
> generator I posted a few days ago.
>
> 3x3x3 Cube, generic scrambler (avoids redundant turns, etc):
> Turns  Std Dev  Expected Std Dev  Confidence
> 1  264953.60  360  0.1%
> 10  26320.07  360  1.4%
> 20  4274.47  360  8.4%
> 30  794.28  360  45.3%
> 35  514.75  360  69.9%
> 40  387.39  360  92.9%
> 45  347.40  360  96.5%
> 50  341.04  360  94.7%
> 75  350.58  360  97.4%
> 100  372.78  360  96.6%
> 500  347.09  360  96.4%
> 1000  355.14  360  98.7%
>
> When a range for the length was used, the results were pretty
> miserable. I have been converted to an advocate for fixed length
> scrambles. When allowed to range from 30-40 moves, the CI was
> only 73.1%!
>
> Based on these results my scrambles for practice will always be at
> least 40 moves, and I'd rather use 45.
>
> On to the megaminx. It should be noted that the entirety of the
> testing I did was in Java, and as such I had to modify the code from
> the WCA scrambler to Java code. I'm not 100% familiar with
> JavaScript, but I do think that my conversion is accurate. All code
> used in this project is available on request.
>
> Megaminx:
> WCA Scrambler
> Turns  Std Dev  Expected Std Dev  Confidence
> 10  108483.94  279  0.3%
> 50  21986.84  279  0.9%
> 70  12559.95  279  1.7% *official scramble length
> 100  5546.20  279  5.0%
> 150  1451.23  279  19.2%
> 200  469.90  279  59.4%
> 210  398.26  279  70.1%
> 220  351.96  279  79.3%
> 230  312.99  279  89.2%
> 240  307.22  279  90.9%
> 250  294.32  279  94.8%
> 260  285.37  279  97.8%
> 270  280.47  279  99.5%
> 300  278.30  279  99.7%
> 400  275.66  279  98.7%
> 500  269.56  279  96.6%
> 1000  264.62  279  94.8%
>
> This is the heart of why I'm posting. Based on this analysis the
> official WCA scrambler does not appear to be random until you reach
> 250 turns or more. And the official scramble length of 70 turns gives
> a feeble 1.7% confidence of randomness.
>
> Now let's examine a different scramble generator; the big difference is
> that puzzle rotations are randomly interspersed and do not count
> against the move count. So a 10 move scramble may in fact be 15 moves
> long. As such, for a scramble of length n, there will be at least
> n/10 more non-puzzle-rotating moves than in an n move WCA scramble.
> So an 80 move scramble here should only be compared to 88 or more move
> WCA scrambles. Also 1/5 twists are allowed as opposed to just 2/5:
>
> Turns  Std Dev  Expected Std Dev  Confidence
> 10  91955.36  279  0.3%
> 50  5529.65  279  5.0%
> 70  1572.39  279  17.8%
> 100  380.54  279  73.4%
> 110  315.88  279  88.4%
> 115  287.75  279  97.0%
> 120  291.12  279  95.9%
> 125  279.61  279  99.8%
> 150  273.86  279  98.1%
> 200  282.68  279  98.8%
> 500  279.16  279  100.0%
> 1000  286.71  279  97.4%
>
> Clearly this converges toward random much more quickly, in fact around
> twice as quickly. I would put 115 moves as the bare minimum and will
> probably henceforth use 125 moves myself. I suspect the difference
> comes from having so many extra puzzle rotations, not from having 1/5
> and 2/5 turns included, though I have not yet tested this.
>
> There you have it. I will probably be posting this to the WCA boards
> sometime soon, as the megaminx situation concerns me a bit.
>
> I will consider in the future analyzing the WCA scramble lengths /
> generators for the 4x4x4 and 5x5x5 cubes as well, but writing the
> scramble tracking code is mind-numbingly tedious and my wife is
> complaining about me spending too much time on this ;) . As I
> mentioned, if you are better at stats than I am (not hard to do) and
> there is any error in my assumptions or methodology, please LET ME
> KNOW! All source code used is available upon request.
>
> I'm going to bed now, cheers.
> Daniel
As requested: I have only included the cases that I deem interesting,
but I have many more.
This scrambler permits only 2/5 turns when making "puzzle breaking"
twists, and only 1/5 turns when making puzzle rotations. The
rotations are randomly included throughout the scramble instead of at
fixed intervals.
Turns  Std Dev  Expected Std Dev  Confidence
10  93628.55 279  0.3%
50  14191.19 279  2.0%
100  2310.15 279  12.1%
150  463.14 279  60.3%
175  318.79 279  87.6%
180  303.87 279  91.9%
190  296.05 279  94.3%
200  283.16 279  98.6%
210  278.01 279  99.6%
500  Pending
1000  Pending
(These are not included as it seems to even out around this length.)
It would appear that with this scheme, 190 moves is about the minimum
for random.
This prompted me to test again, this time allowing both 1/5 and 2/5
puzzle rotations:
Turns  Std Dev  Expected Std Dev  Confidence
10  98269.43 279  0.3%
50  15582.28 279  1.8%
100  2760.56 279  10.1%
150  550.70 279  50.7%
190  302.10 279  92.4%
200  282.31 279  98.9%
So here, it seems like 190 is not quite enough, but 200 is.
I don't have time tonight, but I will run my scrambler with both 1/5
and 2/5 twists, but only 1/5 puzzle rotations, tomorrow.
The conclusions I feel I can draw: restricting the puzzle to only ++
and -- twists is a less accurate approximation of random than allowing
both 1/5 and 2/5 twists. However, restricting to 1/5 puzzle
rotations appears, at least in the 2/5-only twists case, to provide a
small gain in terms of randomness.
And yes, when I re-run trials, I seem to get a swing of as much as
15-25 in either direction for the ~95% range cases.
Over the weekend I plan to run some cases using more trials and more
twists as suggested. I'll let you know if I need any help getting the
older megaminx scramble tracker programmed, and I appreciate the offer!
Best,
Daniel
In speedsolvingrubikscube@yahoogroups.com, d_funny007
<no_reply@...> wrote:
> Okay, I've now fully read through your post, and I don't see anything
> wrong statistically with the number crunching, except, like I said
> before, the idea of sticker distribution is not ideal, but it's very
> practical, quick, and easy. One obvious problem is that if, say, a
> corner piece is in place and oriented, it somehow triple counts in
> this scheme, and it's unclear what a more 'correct/useful' *weight*
> should be for it. This is a simple method of tracking, because it
> pays no attention to distinguishing pieces as corners or as edges,
> which may or may not be problematic as well.
>
> One thing I'd really like to see done is to make a simple tweak of
> your program to do only 2/5 turns and not 1/5 turns and post the
> results here side by side with the other one. This is because I am
> very curious about Stefan's assertion that 2/5 turns are so much
> superior that doing 1/5 is a waste. Initially, I thought this to be
> counterintuitive, but everything I've found so far agrees with the
> claim. Performing your analysis on it would be enough to put that
> debate to rest for me, so I would appreciate it.
>
> The other thing I am not 100% sure of is something you pointed out.
> You are using observed values, albeit out of 1 million trials, and
> the actual SD values can not be realistically calculated. What
> happens if you change this to 2 million? Well, if you are curious
> about a trend, then I suggest somehow estimating the *limit*. Try it
> at 250000, 500000, 1000000, 2000000, 4000000 and see if it settles.
>
> Also, you could rerun them at 1 million trials, and possibly get
> different results. Compare the two data sets and see if 1 million was
> significant.
>
> Anyhow, I have some time to code these days; if there's some
> tedious/bulk/manual entering of tables for various things, I'd be
> happy to do it if given an example.
>
> Doug

Same suggestions as Doug, plus...
- I think sticker color distribution is alright. It causes triple-wrong
counting of corners, but also triple-right counting. I believe this
evaluation is fine.
- Might be possible to do *exact* computations of averages and
standard deviations if you use dynamic programming similar to my 3x3
scramble analyzer here: http://stefanpochmann.info/spocc/other_stuff/tools/
- When blindsolving the megaminx (or the 3x3), pieces not at the
correct place simply have no "correct/wrong" orientation for me. And
those at the correct place are obvious. However, you can simply, for
each piece, pick any reference orientation you like. Gosh, I wish I
had written that article already (it's on my to-do list).
- I had actually thought about analyzing the 5x5 centers; that's
something else I'd like to see. So after N scramble moves, how many
moves does it take to solve 1) the white center 2) the easiest
center? This is something you could do here as well, i.e., analyze
the solution length or the position distribution of a group of pieces
or stickers. But as mentioned above, I believe single sticker color
distribution to be fine.
Cheers!
Stefan

So you do sticker cycles for Megaminx BLD as well?
As for scrambler-randomness calculation wishlists... they always grow
faster than any of us want to code them up. Whenever I read
something like that, say about 5x5 centers, I get chills down my
spine and nausea. I wonder if it's just me or do a lot of other
programmers feel that way. It's like I did it enough while in
college and at work, that coding drives me crazy even if it's for
something "fun" like cubing.
Although, a couple months ago I was bored enough to write a solver
program for the Pentultimate. It's a 12-sided puzzle (dodecahedron),
face-turning down the middle such that each pentagon face has 6
stickers. Or you can look it up if you're curious. I guess I found
that to be an enjoyable exercise.
I'd say the sticker distribution approach is *okay*. For sufficiently
high trial length / sample size it'll still yield very useful results.
It's just that somehow I find that it's not entirely *ideal*, that's
all.
Doug (why am i up this late... /early?)
In speedsolvingrubikscube@yahoogroups.com, "Stefan Pochmann"
<stefan.pochmann@...> wrote:
> Same suggestions as Doug, plus...
>
> - I think sticker color distribution is alright. Causes triple-wrong
> counting of corners, but also triple-right counting. I believe this
> evaluation is fine.
>
> - Might be possible to do *exact* computations of averages and
> standard deviations if you use dynamic programming similar to my 3x3
> scramble analyzer here: http://stefanpochmann.info/spocc/other_stuff/tools/
>
> - When blindsolving the megaminx (or the 3x3), pieces not at the
> correct place simply have no "correct/wrong" orientation for me. And
> those at the correct place are obvious. However, you can simply for
> each piece pick any reference orientation you like. Gosh, I wish I
> had written that article already (it's on my to-do list).
>
> - I had actually thought about analyzing the 5x5 centers, that's
> something else I'd like to see. So after N scramble moves, how many
> moves does it take to solve 1) the white center 2) the easiest
> center. This is something you could do here as well, i.e., analyze
> the solution length or the position distribution of a group of pieces
> or stickers. But as mentioned above, I believe single sticker color
> distribution to be fine.
>
> Cheers!
> Stefan
Hi :)
What I'm most interested in is to know the following.
If you run your experiment(s) more than once, do you get the same
results again? Or, to put it another way: are you really generating
representative scrambles so as to analyse correctly? If you get
divergent results each time, the results are less valuable...
Yes, I know it takes a lot of time to do "one pass" of the
calculations.
On the other side, I do not really think randomness is of crucial
importance for competition scrambles. What happens to the fun of
puzzle solving if we scrutinize the scrambles too much?
I want fair scrambles much rather than truly random scrambles. The
purpose of the scrambles is to distinguish relative solving
capability (read: relative solving speed), nothing more ;)
 Per
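Per's repeatability question can be probed cheaply before committing to a full pass: rerun a scaled-down experiment with fixed seeds. A toy sketch (a bare color counter standing in for the real tracker; the seeds and trial count are arbitrary):

```python
import random
import statistics

def trial_std(seed, trials=2000, colors=6):
    # Stand-in experiment: count how often each color comes up and
    # take the std dev of the counts. Seeding makes a run exactly
    # reproducible; varying the seed shows the run-to-run swing.
    rng = random.Random(seed)
    counts = [0] * colors
    for _ in range(trials):
        counts[rng.randrange(colors)] += 1
    return statistics.pstdev(counts)

same = trial_std(42) == trial_std(42)     # identical seeds agree
swing = [trial_std(s) for s in range(5)]  # spread across seeds
```

If the spread across seeds is large relative to the statistic itself, the full-size runs would need more trials before their rankings mean much.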
> In speedsolvingrubikscube@yahoogroups.com, "Stefan Pochmann"
> <stefan.pochmann@...> wrote:
>
> In speedsolvingrubikscube@yahoogroups.com, "Daniel
> Hayes" <swedishlf@> wrote:
> >
> > Megaminx:
> > WCA Scrambler
> > Turns  Std Dev  Expected Std Dev  Confidence
> > 10  108483.94  279  0.3%
> > 50  21986.84  279  0.9%
> > 70  12559.95  279  1.7% *official scramble length
> > 100  5546.20  279  5.0%
> > 150  1451.23  279  19.2%
> > 200  469.90  279  59.4%
> > 210  398.26  279  70.1%
> > 220  351.96  279  79.3%
> > 230  312.99  279  89.2%
> > 240  307.22  279  90.9%
> > 250  294.32  279  94.8%
> > 260  285.37  279  97.8%
> > 270  280.47  279  99.5%
> > 300  278.30  279  99.7%
> > 400  275.66  279  98.7%
> > 500  269.56  279  96.6%
> > 1000  264.62  279  94.8%
>
> So 270 turns are considerably better than 1000? That seems wrong to
> me, along with the standard deviation dropping considerably below the
> expected one.
>
> > 3x3x3 Cube, generic scrambler (avoids redundant turns, etc):
> > Turns  Std Dev  Expected Std Dev  Confidence
> > 1  264953.60  360  0.1%
> > 10  26320.07  360  1.4%
> > 20  4274.47  360  8.4%
> > 30  794.28  360  45.3%
> > 35  514.75  360  69.9%
> > 40  387.39  360  92.9%
> > 45  347.40  360  96.5%
> > 50  341.04  360  94.7%
> > 75  350.58  360  97.4%
> > 100  372.78  360  96.6%
> > 500  347.09  360  96.4%
> > 1000  355.14  360  98.7%
>
> Same strange behaviour as the megaminx analysis, but in addition,
> here the standard deviation repeatedly jumps up and down at the end.
>
> Cheers!
> Stefan
In speedsolvingrubikscube@yahoogroups.com, "Daniel
Hayes" <swedishlf@...> wrote:
> 3x3x3 Cube, generic scrambler (avoids redundant turns, etc):
> Turns  Std Dev  Expected Std Dev  Confidence
> 1  264953.60  360  0.1%
> 10  26320.07  360  1.4%
> 20  4274.47  360  8.4%
> 30  794.28  360  45.3%
> 35  514.75  360  69.9%
> 40  387.39  360  92.9%
> 45  347.40  360  96.5%
> 50  341.04  360  94.7%
> 75  350.58  360  97.4%
> 100  372.78  360  96.6%
> 500  347.09  360  96.4%
> 1000  355.14  360  98.7%

I propose the following scrambler, which is hopefully an improvement,
which could be tested against Daniel's analyzer.
The scrambler works like the generic one, except at each turn it
doesn't choose completely randomly between the possible sides.
Instead it prefers the side which breaks the most sticker pairs, more
precisely the one that leads to the fewest sticker pairs after the turn.
Here a "sticker pair" is two adjacent stickers of the same color.
It does this for the first half of the moves; for the second half it
works the old generic way. So first it tries to break a lot quickly,
then goes on scrambling from there.
To make it less deterministic, it chooses randomly from the three
sides which break the most. Or chooses the biggest breaker 1/2 of the
time, the second biggest 1/4 of the time, etc. Something like that.
Open to experimentation.
Daniel, can you do this?
Btw this is somewhat how I scramble at least the 5x5 and the megaminx
when I don't have computer generated scrambles. And I believe it
could lead to better/shorter scrambles.
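The pair-breaking selection above can be sketched on a made-up toy ring puzzle rather than a real cube; the ring, its moves, and the geometric 1/2, 1/4, ... pick are all stand-ins for illustration, not Stefan's actual code:

```python
import random

# Toy stand-in puzzle: a ring of colored cells; a "move" reverses one
# fixed arc of the ring. A "pair" is two adjacent cells of the same
# color, mirroring the same-color sticker pairs described above.
SOLVED = list("RRRRGGGGBBBB")
MOVES = [(0, 6), (3, 9), (6, 12), (2, 8)]  # arcs to reverse

def count_pairs(ring):
    return sum(ring[i] == ring[(i + 1) % len(ring)] for i in range(len(ring)))

def apply_move(ring, move):
    a, b = move
    out = ring[:]
    out[a:b] = reversed(out[a:b])
    return out

def pick_breaking_move(ring):
    # Rank moves by how few pairs remain afterwards, then take the
    # best with probability 1/2, the next with 1/4, and so on.
    ranked = sorted(MOVES, key=lambda m: count_pairs(apply_move(ring, m)))
    for move in ranked[:-1]:
        if random.random() < 0.5:
            return move
    return ranked[-1]

def scramble(length):
    ring = SOLVED[:]
    for i in range(length):
        if i < length // 2:                  # first half: break pairs
            move = pick_breaking_move(ring)
        else:                                # second half: plain random
            move = random.choice(MOVES)
        ring = apply_move(ring, move)
    return ring
```

The same pick-the-biggest-breaker loop would plug into a real sticker model once "pairs after a move" can be counted there.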
Cheers!
Stefan

Yo Stefan :)
Any chance of releasing the code for this new "climbing" scrambler?
Or is this too trivial to be released? I guess it depends on the data
structures being used. Do you optimise the data structures for this
somehow?
I actually had a similar idea a couple of years ago about breaking pairs
deliberately, but I never formalised it or implemented anything :D
Good to see an old idea coming back to life. This way of scrambling
should be generalisable for almost every kind of twisty puzzle.
Another benefit is that no solver whatsoever is required. It is also
closer to RANDOM SCRAMBLING than previous positional approach(es).
Per
> In speedsolvingrubikscube@yahoogroups.com, "Stefan Pochmann"
> <stefan.pochmann@...> wrote:
>
> In speedsolvingrubikscube@yahoogroups.com, "Daniel
> Hayes" <swedishlf@> wrote:
> >
> > 3x3x3 Cube, generic scrambler (avoids redundant turns, etc):
> > Turns  Std Dev  Expected Std Dev  Confidence
> > 1  264953.60  360  0.1%
> > 10  26320.07  360  1.4%
> > 20  4274.47  360  8.4%
> > 30  794.28  360  45.3%
> > 35  514.75  360  69.9%
> > 40  387.39  360  92.9%
> > 45  347.40  360  96.5%
> > 50  341.04  360  94.7%
> > 75  350.58  360  97.4%
> > 100  372.78  360  96.6%
> > 500  347.09  360  96.4%
> > 1000  355.14  360  98.7%
>
> I propose the following scrambler, which is hopefully an improvement,
> which could be tested against Daniel's analyzer.
>
> The scrambler works like the generic one, except at each turn it
> doesn't choose completely randomly between the possible sides.
> Instead it prefers the side which breaks the most sticker pairs, more
> precisely leads to the fewest sticker pairs after the turn. Here a
> "sticker pair" are two adjacent stickers of the same color.
>
> It does this for the first half of the moves, the second half it
> works the old generic way. So first it tries to break a lot quickly,
> then goes on scrambling from there.
>
> To make it less deterministic, it chooses randomly from the three
> sides which break the most. Or chooses the biggest breaker 1/2 of the
> time, the second biggest 1/4 of the time, etc. Something like that.
> Open to experimentation.
>
> Daniel, can you do this?
>
> Btw this is somewhat how I scramble at least the 5x5 and the megaminx
> when I don't have computer generated scrambles. And I believe it
> could lead to better/shorter scrambles.
>
> Cheers!
> Stefan