### Numerical Simulation of a Polywell

Posted: **Sun May 17, 2009 9:33 pm**

by **TimTruett**

Is it possible to simulate a polywell in a computer? If so, then such a simulation should be able to provide answers to questions about any particular polywell design.

It is clear that a simulation would require a large number of calculations, but computing power has increased quite a bit since the time that Robert Bussard first started thinking about polywells.

Here is a quote from "High-Performance Computer Architecture" (published in 1987): "Some very large programs require 10E12 to 10E15 floating-point operations to achieve solutions at the desired level of accuracy. ... a conventional high-performance machine can complete roughly one floating-point operation per one microsecond."

Today, my Dell desktop PC can do 100 million multiplications per second. That is essentially the same as a supercomputer from twenty years ago.

I propose that we set up a project to put together software that can simulate a polywell. Then, we could harness the computing power of thousands of computers and make some real progress in evaluating polywell designs.

Posted: **Sun May 17, 2009 11:14 pm**

by **MSimon**

Software is the hard part. What you suggest has been suggested before. After quite a bit of discussion, the conclusion was that, because of the particle interactions, there might not be a workable software solution.

It has to do with the fact that moving charged particles generate magnetic fields.

Dr. Mike was working on it for a while but seems to have dropped out. There were one or two others. It is a very complicated problem. They had gotten as far as slicing the cube into 48 equivalent segments to simplify the computation.

I'll send Dr. Mike an e-mail and see if I can get his thoughts.

Posted: **Mon May 18, 2009 1:52 am**

by **icarus**

Proposing projects seems to be becoming an affliction around here.

Who's going to do it?

or better yet

Who wants to pay for it?

Remember, pay peanuts and you get monkeys ..... and sometimes you get monkeys regardless.

People who are willing to do it for free are either unqualified or inexperienced and are looking to "cut their teeth" or make their mark.

The rare qualified/experienced altruists will treat it as a hobby project ... and then run out of time and energy eventually.

The numerical problem is enormous; if it were easy (even with current computational power), it would have been done. Art C. can describe in detail the pitfalls and shortcomings of the numerics of plasma simulations.

I could put you in touch with people who have the skill set and experience you'd want to begin with.

Me? I'm busy at the moment ... unless you have a much better offer?

Posted: **Mon May 18, 2009 12:58 pm**

by **drmike**

To accurately simulate is really difficult. Every model that does not follow 1e23 particles with 1e-12 second precision must have approximations - right now 1e15 flops is pretty darn good. We need 1e34 to simulate perfection, and that isn't going to happen in the next 20 years. There are many good models and all should be done, but ultimately it has to be compared to reality. At this point it is just as cheap to build experiments as it is to model on software.

I've been slowly building my welding circuit - most of my time is going into robotics with my kids. When I get a chance I drop by here to see what's new - but mostly I want to build something so I'm just doing it. Slow going for now, I'll report when I actually get things to work!

Posted: **Mon May 18, 2009 1:00 pm**

by **MSimon**

Thank you drmike !!!!

Posted: **Mon May 18, 2009 3:25 pm**

by **TimTruett**

Suppose you had 1E18 floating-point operations to play with (1000 fast PCs running for a few weeks). What kind of progress would that enable?

Would that be enough to find the magnetic field that would result from some particular configuration of magnets? Would it be enough to simulate a million electrons zipping around the fields, and thereby evaluate the resulting electric potential well, and electron losses?
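To make the first part concrete, here is the kind of calculation I mean: a minimal Biot-Savart sketch for a single circular coil (current, radius, and field point are illustrative values, not any real polywell geometry), checked against the known analytic formula for the on-axis field:

```python
import math

mu0 = 4e-7 * math.pi          # vacuum permeability, T*m/A
I, R, z = 1000.0, 0.15, 0.05  # coil current (A), radius (m), on-axis field point (m); illustrative

# Discretize the coil into segments and sum the Biot-Savart contributions.
n = 10000
dtheta = 2.0 * math.pi / n
Bz = 0.0
for k in range(n):
    theta = (k + 0.5) * dtheta
    sx, sy = R * math.cos(theta), R * math.sin(theta)                        # segment midpoint
    dlx, dly = -R * math.sin(theta) * dtheta, R * math.cos(theta) * dtheta   # segment vector dl
    rx, ry, rz = -sx, -sy, z                           # vector from segment to field point
    r3 = (rx * rx + ry * ry + rz * rz) ** 1.5
    Bz += mu0 * I / (4.0 * math.pi) * (dlx * ry - dly * rx) / r3  # z-component of dl x r / r^3

# Analytic on-axis field of a circular loop, for comparison
B_exact = mu0 * I * R * R / (2.0 * (R * R + z * z) ** 1.5)
print(Bz, B_exact)  # the two agree to many digits
```

The vacuum field is the easy, well-posed part; the self-consistent particle dynamics is where the cost explodes.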

Posted: **Mon May 18, 2009 3:31 pm**

by **MSimon**

TimTruett wrote: Suppose you had 1E18 floating-point operations to play with (1000 fast PCs running for a few weeks). What kind of progress would that enable?

Would that be enough to find the magnetic field that would result from some particular configuration of magnets? Would it be enough to simulate a million electrons zipping around the fields, and thereby evaluate the resulting electric potential well, and electron losses?

Maybe. The trouble is that one million electrons do not behave like one billion electrons or one trillion or 1E30 electrons.

If you need 1E34 flops for a good simulation, 1E18 is not even in the ballpark. It is short by about 16 orders of magnitude.

Posted: **Mon May 18, 2009 4:37 pm**

by **KitemanSA**

If we used the ever-popular 1.5-dimensional analysis vice 3D, this would bring the need down to ~1E17, wouldn't it?

Posted: **Tue May 19, 2009 10:55 am**

by **MSimon**

KitemanSA wrote: If we used the ever-popular 1.5-dimensional analysis vice 3D, this would bring the need down to ~1E17, wouldn't it?

Dr. Mike is a physicist who designs particle accelerators. If he says 1E34 I believe him.

You could always ask Art Carlson for a second opinion.

Posted: **Tue May 19, 2009 2:20 pm**

by **KitemanSA**

MSimon,

drmike wrote:To accurately simulate is really difficult. Every model that does not follow 1e23 particles with 1e-12 second precision must have approximations ...

The 1.5D analysis is one of those approximations that drmike was talking about. If we use said approximation with today's computing power, can we get better answers than we got before? Would it be worth it?

Drmike has said it would be better to build the experiment than to run the analysis. Does anyone disagree, assuming the use of a 1.5D analysis?

Posted: **Tue May 19, 2009 9:22 pm**

by **MSimon**

KitemanSA wrote:MSimon,

drmike wrote:To accurately simulate is really difficult. Every model that does not follow 1e23 particles with 1e-12 second precision must have approximations ...

The 1.5D analysis is one of those approximations that drmike was talking about. If we use said approximation with today's computing power, can we get better answers than we got before? Would it be worth it?

Drmike has said it would be better to build the experiment than to run the analysis. Does anyone disagree, assuming the use of a 1.5D analysis?

Short of Dr. Nebel, Dr. Mike is who I would most like to have helping me with Polywell experiments. If Dr. Mike says experiments are the way to go I'm inclined to assign a high probability to his opinion.

You will also note that Dr. Nebel - despite access to supercomputer time - has put minimal effort into simulations.

Dr. Mike does particle accelerators. For those you can make simplifying assumptions. Ignore the electrons if you are accelerating ions. Or vice versa. In a Polywell they interact. And in fact that interaction is a part of the design. Then you have Wiffle Ball formation. It gets very complicated. And to make it accurate you may need to go to (tens? hundreds?) picosecond time slices.

It is for all practical purposes intractable. At this time. For $10 or $20 million I can do a quantum computer that does computations on Planck time scales. With accuracy as good as your measurement equipment. And it solves 1E30 or 1E50 simultaneous equations per 1E-22 second. It don't get any better than that.

And the solutions are exact and only limited by your read-out equipment. For some of the parameters that would amount to 1 part per thousand in 10 ns intervals with equipment that is not too expensive, i.e. a 100 MHz, 16-bit nominal A-to-D.

Posted: **Wed May 20, 2009 11:29 am**

by **tomclarke**

"It doesn't get any better than that"

I am not disputing the judgement of those who have tried. Lots of effort has gone into plasma simulation and if those who have tried it think Polywell is intractable they are probably right.

But not necessarily right. There are always new approximations, and the fact that you have two (or more) species to track, not one, does not have to make simulation impossible. There may also be specific questions that can be answered by simulation.

And from simulations you gain a different type of insight than you do from experiments. You can instrument simulations more precisely. You can vary conditions more easily over a wider range.

So it does, in principle, get better than that - even if not for Polywell.

I'm not however saying simulation even if possible is enough! It is one tool.

Posted: **Wed May 20, 2009 1:39 pm**

by **MSimon**

tomclarke wrote: There are always new approximations,

And when we have enough experimental evidence we will know what they are.

Posted: **Wed May 20, 2009 2:11 pm**

by **KitemanSA**

MSimon wrote: Short of Dr. Nebel, Dr. Mike is who I would most like to have helping me with Polywell experiments. If Dr. Mike says experiments are the way to go I'm inclined to assign a high probability to his opinion.

You will also note that Dr. Nebel - despite access to supercomputer time - has put minimal effort into simulations.

It is for all practical purposes intractable. At this time. For $10 or $20 million I can do a quantum computer that does computations on Planck time scales. With accuracy as good as your measurement equipment. And it solves 1E30 or 1E50 simultaneous equations per 1E-22 second. It don't get any better than that.

And the solutions are exact and only limited by your read-out equipment. For some of the parameters that would amount to 1 part per thousand in 10 ns intervals with equipment that is not too expensive, i.e. a 100 MHz, 16-bit nominal A-to-D.

I too am an experimentalist. I smash things for a living. And when it comes to the final answer, you can't beat the perfect analog computer that is the full-scale test. I make this statement regularly and frequently to those who wish to replace testing with analysis. BUT, without some reasonable model of what you are testing, you can't easily determine what isn't going per plan, **AND WHY.** Without a decent model, knowing what to measure, and where, when, and HOW, becomes more difficult. There are many reasons to model. The question then becomes: when is it good enough?

Anyway, I guess the answer to Tim Truett's original question (Is it possible to simulate a polywell in a computer?) is:

*Yes, but with many approximations and limitations. Such approximate and limited simulations have been done. To simulate without the approximations and limitations would take MANY orders of magnitude of improvement in computer power. At Moore's-law improvement rates, this might take another 3/4 of a century. Recent improvements have not warranted additional significant simulation efforts. The issue awaits a breakthrough to change the status quo.*
By the way, I get the 3/4 century from this equation:

T = N / C / R

where **T** is the time required, **N** is the needed flops (~1E34), **C** is the current flops (~1E18), and **R** is the improvement rate (~2×/18 months). A 10× improvement therefore takes about 4.5 to 5 years, and we need about 16 cycles of 10×, which gives something like 72 to 80 years. Close enough.
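That Moore's-law arithmetic can be checked in a few lines. The 1E34 and 1E18 figures are the thread's estimates, and the 18-month doubling period is an assumption:

```python
import math

needed_flops = 1e34    # drmike's estimate for an un-approximated simulation
current_flops = 1e18   # assumed aggregate rate available today
doubling_years = 1.5   # assumed Moore's-law doubling period (~18 months)

# Number of 2x doublings needed to close a 1E16 gap, then convert to years.
doublings = math.log2(needed_flops / current_flops)  # about 53 doublings
years = doublings * doubling_years
print(round(years, 1))  # about 79.7 years, i.e. roughly 3/4 of a century
```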

Should this answer be added to the FAQ?

Posted: **Fri May 22, 2009 3:58 am**

by **TimTruett**

I appreciate everyone's input.

If someone can produce software that would make tangible progress towards a working reactor, I would be happy to donate my computing power for that purpose. So too, I suspect, would thousands of other people.

SETI@home had great success in using the computing power of (I think) about a million people. Their software has evolved into BOINC, which can be used to support other massively parallel computing projects.

When each of N particles can interact with all the others, the time needed to simulate the interactions is proportional to N^2. But N^2 is still polynomial, not exponential, and in the computer business polynomial time is considered feasible.
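To illustrate the scaling claim: a naive all-pairs force loop (made-up unit charges at random positions, arbitrary units, no real physics) performs exactly N * (N - 1) interaction evaluations per time step:

```python
import random

random.seed(1)
N = 200
# Made-up particle positions in a unit cube; charges and units are arbitrary.
pos = [(random.random(), random.random(), random.random()) for _ in range(N)]

forces = []
pair_evals = 0
for i in range(N):
    fx = fy = fz = 0.0
    for j in range(N):
        if i == j:
            continue
        dx = pos[i][0] - pos[j][0]
        dy = pos[i][1] - pos[j][1]
        dz = pos[i][2] - pos[j][2]
        r2 = dx * dx + dy * dy + dz * dz + 1e-12  # softening avoids divide-by-zero
        w = r2 ** -1.5                            # Coulomb-like 1/r^2 along the separation
        fx += dx * w
        fy += dy * w
        fz += dz * w
        pair_evals += 1
    forces.append((fx, fy, fz))

print(pair_evals)  # 39800 = N * (N - 1): quadratic growth in N
```

Doubling N quadruples the work; at a million particles that is ~1E12 evaluations per time step, which is where approximation schemes come in.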

Simplifying assumptions are made in simulations all the time. People have simulated galaxy evolution and somehow obtained meaningful results. Traffic flow can be simulated without considering each individual car.

A cursory Google search turned up some public domain plasma simulation software.

Finally, it is worth noting that evolutionary computing may be applicable here. Start with a design, vary it, and then evaluate the fitness of each variation. Impose some Darwinian selection, and repeat for a few hundred generations.

Wasn't there an early polywell that had the magnet coils touching, and which did not work well as a result? It would not have taken much of a simulation to evolve that design into one that had the magnets spaced a little bit apart.
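That evolutionary loop can be sketched in a few lines. The fitness function here is a pure stand-in (it just rewards a coil spacing near an arbitrary 3 cm target); a real version would call the plasma simulation instead:

```python
import random

random.seed(42)

def fitness(spacing_cm):
    # Stand-in objective, NOT a plasma model: in reality this would run
    # the (expensive) polywell simulation and score the potential well.
    return -(spacing_cm - 3.0) ** 2  # hypothetical optimum at 3 cm spacing

# Start with a random population of candidate coil spacings.
population = [random.uniform(0.0, 10.0) for _ in range(20)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                                 # Darwinian selection
    children = [s + random.gauss(0.0, 0.5) for s in survivors]  # mutation
    population = survivors + children

best = max(population, key=fitness)
print(best)  # close to the assumed 3.0 cm optimum
```

Keeping the survivors alongside their mutated children means the best candidate never regresses between generations.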