- Begin forwarded message:
**From:** JACK SARFATTI <sarfatti@...>
**Subject:** Re: Woodward's Machian Star Ship Propulsion Strategy
**Date:** July 1, 2012 11:45:25 AM PDT
**To:** "Woodward, James" <jwoodward@...>

On Jul 1, 2012, at 10:04 AM, Woodward, James wrote:

It's in the book (and here and there in the peer-reviewed literature over the years). The book may be out before the end of the year.

Not very helpful, because I think I have made a fatal objection, on very fundamental matters of principle, to any scheme at all that proposes to "reduce rest mass density". I cannot even conceive of any sensible argument to the contrary. Therefore, you should at least give the list a short qualitative plausibility argument here and now as to how I am, in your view, mistaken. Many wrong arguments are published in books and even in peer-reviewed prestige journals - normal science proceeds by recursive correction of errors, both theoretical and experimental.

**From:** JACK SARFATTI [sarfatti@...]
**Sent:** Sunday, July 01, 2012 10:21 AM
**To:** Woodward, James
**Subject:** Re: Woodward's Machian Star Ship Propulsion Strategy

Again I do not understand "driving the rest density to zero". The rest density of matter is determined by:

1) the Higgs vacuum field for the rest masses of isolated quarks and leptons. The LHC has now found the Higgs at 125 GeV - not much doubt of that. It's only mop-up from this point on, getting better statistical analysis - a matter of time.

2) the confined kinetic motion of trapped real quarks in the virtual gluon/quark-antiquark plasma of quantum chromodynamics. And if you could reduce the rest density of matter to zero, you would have an uncontrolled super-fusion explosion!

3) In warp drive, the ship is on a self-created timelike geodesic - changing the effective mass of the ship as a whole, even if you could do it without destroying the ship, is completely irrelevant because of the equivalence principle.

### Martin Rees's Six Numbers

Martin Rees, in his book *Just Six Numbers*, mulls over the following six dimensionless constants, whose values he deems fundamental to present-day physical theory and the known structure of the universe:

- *N* ≈ 10^{36}: the ratio of the fine structure constant (the dimensionless coupling constant for electromagnetism) to the gravitational coupling constant, the latter defined using two protons. In Barrow and Tipler (1986) and elsewhere in Wikipedia, this ratio is denoted α/α_{G}. *N* governs the relative importance of gravity and electrostatic attraction/repulsion in explaining the properties of baryonic matter;^{[3]}
- ε ≈ 0.007: the fraction of the mass of four protons that is released as energy when fused into a helium nucleus. ε governs the energy output of stars, and is determined by the coupling constant for the strong force;^{[4]}
- Ω ≈ 0.3: the ratio of the actual density of the universe to the critical (minimum) density required for the universe to eventually collapse under its gravity. Ω determines the ultimate fate of the universe. If Ω > 1, the universe will experience a Big Crunch. If Ω < 1, the universe will expand forever;^{[3]}
- λ ≈ 0.7: the ratio of the energy density of the universe, due to the cosmological constant, to the critical density of the universe. Others denote this ratio by Ω_{Λ};^{[5]}
- *Q* ≈ 10^{−5}: the energy required to break up and disperse an instance of the largest known structures in the universe, namely a galactic cluster or supercluster, expressed as a fraction of the energy equivalent to the rest mass *m* of that structure, namely *mc*^{2};^{[6]}
- *D* = 3: the number of macroscopic spatial dimensions.

*N* and ε govern the fundamental interactions of physics. The other constants (*D* excepted) govern the size, age, and expansion of the universe. These five constants must be estimated empirically. *D*, on the other hand, is necessarily a nonzero natural number and cannot be measured. Hence most physicists would not deem it a dimensionless physical constant of the sort discussed in this entry. There are also compelling physical and mathematical reasons why *D* = 3.

Any plausible fundamental physical theory must be consistent with these six constants, and must either derive their values from the mathematics of the theory, or accept their values as empirical.
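The first two of these numbers can be checked directly from standard physical constants. A minimal sketch in Python; the constant values below are rounded standard (CODATA-style) figures, not taken from the text:

```python
# Sketch: reproducing Rees's N ~ 10^36 from standard constants.
# All values are rounded SI constants; treat final digits as approximate.
G     = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar  = 1.055e-34      # reduced Planck constant, J s
c     = 2.998e8        # speed of light, m/s
m_p   = 1.673e-27      # proton mass, kg
alpha = 1 / 137.036    # fine structure constant (dimensionless)

# Gravitational coupling constant defined using two protons
alpha_G = G * m_p**2 / (hbar * c)

# N is the ratio of electromagnetic to gravitational coupling
N = alpha / alpha_G
print(f"alpha_G ~ {alpha_G:.2e}")       # ~ 5.9e-39
print(f"N = alpha/alpha_G ~ {N:.2e}")   # ~ 1.2e36
```

The result, a bit over 10^{36}, is the familiar statement that gravity between two protons is about 36 orders of magnitude weaker than their electrostatic repulsion.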

*Just Six Numbers: the deep forces that shape the universe*, by Martin Rees. ISBN 0-75381-022-0.

The laws of nature seem to have too many arbitrary constants in them; numbers for whose values we can see no explanation; numbers that, for all we can tell, were chosen at random by whatever gods there may be. One interesting thing about these numbers (which has led some people to think that those gods shouldn't be taken too metaphorically) is that it seems that some of them couldn't be very different from what they are without making life as we know it impossible. In other words, we seem to have been very lucky that there was a universe fit for us to live in.

In this book, Martin Rees discusses six of them:

- The relative strengths of gravity and the other fundamental forces. If gravity were too strong, then stars wouldn't live long enough for the likes of us to evolve. (No very awful consequences seem to ensue if gravity is too weak; so perhaps this one isn't really so very finely tuned.)
- The ratio of the binding energy of a helium nucleus to the rest mass of its constituents. This is determined by the strength of the strong nuclear force, and it determines the amount of energy released by nuclear fusion of hydrogen to form helium. If this were much smaller than it is, stars wouldn't burn and elements heavier than hydrogen wouldn't form. If it were much greater, there'd be no hydrogen left and (for instance) water couldn't form.
- The density of the universe, relative to the "critical" density at which it just barely escapes a Big Crunch. Supposedly, if this wasn't incredibly close to 1 when the universe was very young, it would now have to be either very close to 0 or terribly large, and neither option produces a universe hospitable to life.
- The cosmological constant. This seems to be very small but not 0; if it weren't very small, then the early universe would have expanded too fast for the formation of galaxies.
- The nonuniformity of the distribution of matter in the universe. If this were much smoother, galaxies and stars and the like wouldn't form; if it were much rougher, the universe would be all black holes and very tightly grouped clusters of stars.
- The number of macroscopic dimensions. Too few dimensions and connecting up brains is too hard; too many and there are no stable orbits.
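The second number in the list, ε ≈ 0.007, can be reproduced from tabulated atomic masses. A minimal sketch, assuming standard mass values for hydrogen-1 and helium-4 (the electron-mass bookkeeping cancels when atomic rather than nuclear masses are used):

```python
# Sketch: checking epsilon ~ 0.007, the fraction of rest mass released
# when four hydrogen nuclei fuse into helium. Masses in unified mass
# units (u), rounded standard atomic-mass values.
m_H1  = 1.007825   # mass of hydrogen-1 atom, u
m_He4 = 4.002602   # mass of helium-4 atom, u

# Mass defect of the fusion, as a fraction of the initial mass
epsilon = (4 * m_H1 - m_He4) / (4 * m_H1)
print(f"epsilon ~ {epsilon:.4f}")  # ~ 0.0071
```

About 0.7% of the hydrogen's rest mass comes out as energy, which is what powers main-sequence stars.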

I find myself unconvinced by several of these (but, note, I am not a cosmologist or even a physicist, so maybe I'm missing important things); the obvious hole in claims of fine tuning is that there may be a big difference between life *as we know it* and life simpliciter.

Anyway, the discussion of these six numbers gives Rees a chance to digress on black holes, antimatter, nucleosynthesis, inflation, dark matter, and all the other usual suspects of popular cosmology. He does so very competently.

Finally, Rees addresses the question of how come these constants are (allegedly) so finely tuned. He doesn't think much of the prospects for a theory that makes their values inevitable; he prefers, like most people at present, a "multiverse" theory (in which there are many "universes" with different values for the constants) plus the anthropic principle. (I was surprised to see no mention of Smolin's evolutionary variation on this theme.) He dismisses the possibility that the tuning is a one-off coincidence, and passes over the theistic (or deistic) explanation almost without comment.

Rees does a good workmanlike job of explaining this material for a lay audience.

## Anthropic coincidences

Main article: Fine-tuned Universe

In 1961, Robert Dicke noted that the age of the universe, as seen by living observers, cannot be random.^{[9]} Instead, biological factors constrain the universe to be more or less in a "golden age," neither too young nor too old.^{[10]} If the universe were one tenth as old as its present age, there would not have been sufficient time to build up appreciable levels of metallicity (levels of elements besides hydrogen and helium), especially carbon, by nucleosynthesis; small rocky planets would not yet exist. If the universe were 10 times older than it actually is, most stars would be too old to remain on the main sequence and would have turned into white dwarfs, aside from the dimmest red dwarfs, and stable planetary systems would have already come to an end. Thus Dicke explained away the rough coincidence between large dimensionless numbers constructed from the constants of physics and the age of the universe, a coincidence which had inspired Dirac's varying-G theory.

Dicke later reasoned that the density of matter in the universe must be almost exactly the critical density needed to prevent the Big Crunch (the "Dicke coincidences" argument). The most recent measurements suggest that the observed density of baryonic matter, together with theoretical predictions of the amount of dark matter, accounts for about 30% of this critical density, with the rest contributed by a cosmological constant.
Steven Weinberg^{[11]} gave an anthropic explanation for this fact: he noted that the cosmological constant has a remarkably low value, some 120 orders of magnitude smaller than the value particle physics predicts (this has been described as the "worst prediction in physics").^{[12]} However, if the cosmological constant were more than about 10 times its observed value, the universe would suffer catastrophic inflation, which would preclude the formation of stars, and hence life.

The observed values of the dimensionless physical constants (such as the fine-structure constant) governing the four fundamental interactions are balanced as if fine-tuned to permit the formation of commonly found matter and subsequently the emergence of life.^{[13]} A slight increase in the strong nuclear force would bind the dineutron and the diproton, and nuclear fusion would have converted all the hydrogen in the early universe to helium. Water and the long-lived stable stars essential for the emergence of life as we know it would not exist. More generally, small changes in the relative strengths of the four fundamental interactions can greatly affect the universe's age, structure, and capacity for life.
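The critical density that the Dicke argument turns on follows from the Friedmann equation, ρ_c = 3H₀²/(8πG). A quick check, assuming a round H₀ = 70 km/s/Mpc (the exact value is still a matter of measurement):

```python
import math

# Sketch: the critical density separating eternal expansion from a
# Big Crunch in a matter-only model. H0 = 70 km/s/Mpc is an assumed
# round value, not a precise measurement.
G   = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
m_p = 1.673e-27            # proton mass, kg
Mpc = 3.086e22             # meters per megaparsec
H0  = 70e3 / Mpc           # Hubble constant converted to s^-1

rho_c = 3 * H0**2 / (8 * math.pi * G)
print(f"rho_c ~ {rho_c:.2e} kg/m^3")                         # ~ 9e-27
print(f"~ {rho_c / m_p:.1f} proton masses per cubic meter")  # ~ 5.5
```

With observed matter at roughly 30% of this (Ω ≈ 0.3) and the cosmological constant contributing the rest (λ ≈ 0.7), the total comes out close to critical, i.e. a spatially near-flat universe.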

On Jul 1, 2012, at 5:59 AM, Woodward, James wrote:

Yes Jack, you are right. The effect here produces propulsion, but it doesn't necessarily produce the sort of spacetime distortions needed for warp/wormhole effects. So if it is simply scaled up for thrust, g-forces would be felt in the spacecraft.

To get to warp/wormhole effects, further steps are required. The effect has to be made large enough to (transiently) produce exotic effects (by driving the rest density to zero), which triggers the non-linear behavior that makes possible the generation of sufficient exotic matter to do the starship/stargate thing. Actually, a bootstrap process may make this possible with the leading term only. But it's easier if you use the second (wormhole) term. It's all in the book. . . .

The main point at this juncture is that theory (when done correctly) and observation are sufficiently close to have confidence that this will actually work. If the first term is really there -- and that's what the experimental results say -- then the second term is necessarily present.

Jim

________________________________________

From: JACK SARFATTI [sarfatti@...]

Sent: Sunday, July 01, 2012 1:16 AM

To: jfwoodward@...

Subject: Re: late June

I assume here you mean proper tensor acceleration deviation away from a timelike geodesic. So this is not a warp drive. Assuming you could scale up your device the crew would feel g-force just like in an ordinary rocket. Now if there is also a geodesic warp drive term in your theory how big is it compared to your acceleration term? And can you change that ratio so that the alleged warp drive term is much bigger than the acceleration term?

On Jun 30, 2012, at 11:08 PM, jfwoodward@... wrote:

Thanks for the kind words, George. As you know, the importance of the acceleration dependence has been obvious ever since Nembo pointed out that we weren't treating it consistently several years ago. But the earlier way of doing the predictions was so easy. . . . As it turns out, doing the predictions with explicit acceleration dependence isn't very hard either. You just have to put dP/dt into appropriate mechanical terms for the device in question. The calculation is already in the draft of the JPC paper Heidi and I are doing. I can excerpt it and send it to you if you like.

As for other dependencies, none are as important as the acceleration, I think. The effects, after all, are gravitational/inertial effects, and accelerations and their related forces are the key dynamical quantities there. Calculations, of course, can be bent to highlight this or that parameter. But if you lose track of the acceleration, evidently, you are asking for trouble.

As for thermal issues, you may want to take a look at the section on thermal effects in that humongous PPT file I sent out about a month ago (123 slides). These aren't thermal effects. Indeed, I look at the (unexpected) power pulse at the beginning of the frequency sweeps as the Great Spirit's way of saying, "hey dummy, this isn't a thermal effect," just in case anyone might be inclined to think so. I've put together the formal argument using the present data in the attached revision of yesterday's PPT file.

As for scaling, these results are completely consistent with simple power scaling. The peak voltage at resonance is about half that of the first device when it produced the 10 uN thrusts last January. That is, the present device is running at resonance at about a quarter of the power of the first device. The thrusts now are about 2.5 uN, a quarter of the 10 uN thrusts of last January. By the way, simple power scaling has been the case all along.
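The arithmetic behind that scaling claim can be spelled out: at fixed device impedance, power goes as the square of the drive voltage, so halving the resonant voltage quarters the power, and on the simple power-scaling claim the thrust falls by the same factor. A minimal sketch; `thrust_scaled` is a hypothetical helper for illustration, not anything from the actual apparatus or its analysis code:

```python
# Sketch of the simple power-scaling argument: P ~ V^2 at fixed
# impedance, and the claim in the email is thrust ~ P. The 10 uN
# reference value is the January figure quoted in the email.
def thrust_scaled(thrust_ref_uN: float, v_ratio: float) -> float:
    """Thrust after scaling drive voltage by v_ratio, assuming F ∝ P ∝ V²."""
    return thrust_ref_uN * v_ratio**2

# Half the resonant voltage -> a quarter of the power -> a quarter of
# the thrust: 10 uN scales down to 2.5 uN, matching the reported signal.
print(thrust_scaled(10.0, 0.5))
```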

On the details of the thrust responses, I'd only mention that there is notable inertial lag and damping to take into account. (There's a section on this in 123 too.) And in the data taken when the device is cold, evolution of resonance conditions is an issue. But not as big an issue as one might expect -- as can be ascertained from the stack accelerometer results -- which I didn't put in the plots. You may recall that I took some flak a while ago for putting more than three traces into plots. :-)

Best,

Jim

Date: Sat, 30 Jun 2012 17:19:39 -0400

Jim,

Congrats on re-calculating with "full acceleration dependence" and seeing the thrust more closely matching the revised predictions. Are those calculations going to be available? Could you have predicted the shape of the thrust response (as seen in the PPT slides) from the new calculations? Any other "full X dependence" calculations required, where X is some other parameter?

These look like pretty good traces, but I'm wondering why the effect does not seem to scale with input power consistently (e.g. slide 25). Also, can you show a plot of dT/dt (temperature differential)? Some of the thrust plots look suspiciously like the derivative of temperature, at least at the leading edge.

Cheers - George ghathaway@...

(sent from home - please reply to ghathaway@...)

-----Original Message-----

From: jfwoodward@... [mailto:jfwoodward@...]

Sent: Saturday, June 30, 2012 3:18 AM

To: jfwoodward@...

Subject: Re: late June

Gentlefolk,

This time there's a bit more to report than in the past few updates. Some of the things relate to events in progress for more than a week. You may recall (and in part can read below from earlier emails) that Heidi and I have been working through problems with the experimental apparatus. First was the problem with the data acquisition system -- that Heidi finally correctly diagnosed as a bum power supply, and I fixed (in a klugey sort of way). She got running taking data with the fixed system a couple of weeks ago, only to have the power circuit fail after one day of getting good data.

Last weekend, on a short LA trip, I went after the problem with the power system. At first it looked like a simple problem, so I started doing some housekeeping around the lab. When I got serious, it turned out to be a bit more complicated. Indeed, I was ready to pull the balance out of the vacuum chamber and tear into it. But in a last-minute re-check of the components between the power amplifier and chamber, I finally saw what I should have seen much earlier. Someone -- after Heidi had gotten a day's worth of data a week earlier -- had placed a circuit patch in the line with a removable plug -- removed -- that broke the power circuit. I had been looking for the sort of fault that happens by accident or inadvertence. Custodians bumping into stuff, or others dropping things, or some such. I wasn't looking for a deliberate fault. But there it was. The fix was simple. Remove the patch. You'll find a couple of pictures of the box in the attached PPT file.

No, I have no idea who would have done this. 'Twarnt me or Heidi. But I can say that I don't really care. Heidi was able to get some really cute data this past week with device #3. The on-resonance voltage across the device has been about 200 volts -- better than for the runs I did with it a month and more ago. Retorquing and ageing seem to have agreed with the device. The computed averages of the various types of runs done are also in the PPT file attached. There is a clean signal (SNR >= 10) in the 2 to 3 uN range. And in the data with the frequency sweeps, there's a power spike at the beginning of the sweep that really heats the device up -- but off resonance, so there is no prompt thrust response (as one would expect were the signal a thermal effect). It's the real deal, folks.

Why would I say that it's the real deal when this signal is orders of magnitude smaller than the predictions suggest it should be? Because we've found these past two weeks that those predictions are wrong. Heidi and I have been working now for a while on the JPC paper that will go with the presentation I'll do at that conference in about a month. While working on the theory section of that paper, I decided to include a section on explicit acceleration dependence of Mach effects. While writing that out, I decided to derive the prediction based on full acceleration dependence -- rather than doing the prediction the way it's been done for years. It turns out that this calculation is not difficult at all. A bit tedious for an old duffer like me; but not difficult.

SI units are really scary. Completely unintuitive for me. So catching some arithmetic errors took longer than it should have. But the end result is a prediction of 10 uN for the present system -- whereas observation is ~ 3 uN. And that with the assumption that the electrostrictive constant is the same as the piezoelectric constant. It is surely smaller. But without allowance for mechanical resonance amplification -- which is surely present. These two considerations will be largely offsetting, I expect. And the resulting prediction will likely be in the uN range.

I've already mentioned that Heidi is a first-rate theorist. She is also a natural at experimental work. The next steps are already set in motion. You may see starships and stargates in your lifetimes after all. . . .

May you have a good weekend,

Jim

<Late June-Thermal.ppt>