Simulation Barriers

Discuss the technical details of an "open source" community-driven design of a polywell reactor.

Moderators: tonybarry, MSimon

luked
Posts: 8
Joined: Tue Jul 06, 2010 4:26 pm

Simulation Barriers

Post by luked »

Can anyone speak to the difficulties in simulating polywells? I seem to remember reading at some point that Dr. B estimated a budget of $8M for simulation, and that construction would cost much less.

Does anyone know where this number comes from?

Computing resources should get cheaper by about a factor of two every two years or so, and effective throughput is also heavily influenced by implementation efficiency, e.g., MATLAB or Python vs. custom CUDA. In addition, distributed projects (like the @HOME efforts) can harness a huge amount of computing power basically for free. I imagine people would be happy to sign up for Fusion@HOME.

It seems like it should be possible to build a substantial simulation infrastructure today for much less than $8M. How well understood is the small-scale physics of the polywell? Is it pretty much the case that we know how to describe the physics of the system at a fundamental level, and just don't know what the complex behavior of all the moving parts is when they're put together?

It seems like simulation is the kind of thing that a community like this can actually hope to do with lots of decentralized effort.

Anyway, frustrated with the current (temporary) information embargo and thinking out loud.

D Tibbets
Posts: 2775
Joined: Thu Jun 26, 2008 6:52 am

Post by D Tibbets »

Computational power wasn't the only issue. In his Google talk, Bussard mentioned that the electron/ion imbalance (~1 ppm) was so small that the calculations did not have the necessary resolution. The effects of these conditions were so small at the particle level that any results were lost in the noise. At least that was my understanding.
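
To make that concrete, here's a toy sketch in Python/NumPy (numbers invented, not from the talk): a ~1 ppm signal is only a few machine epsilons wide in single precision, so a single rounding step distorts it badly, while double precision has billions of epsilons of headroom.

[code]
import numpy as np

# How wide is a 1 ppm signal, measured in units of machine epsilon?
for dtype in (np.float32, np.float64):
    eps = np.finfo(dtype).eps
    print(dtype.__name__, "eps =", eps, "-> 1e-6 is", 1e-6 / eps, "eps wide")

# One rounding step on the imbalance itself:
a = np.float32(1.0)
b = np.float32(1.0 + 1e-6)   # rounds to the nearest float32, ~8 eps away
print(b - a)                 # ~9.54e-7: already ~5% off the true 1e-6

a64 = np.float64(1.0)
b64 = np.float64(1.0 + 1e-6)
print(b64 - a64)             # ~1e-6, good to ~10 significant digits
[/code]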

Dan Tibbets
To error is human... and I'm very human.

MSimon
Posts: 14335
Joined: Mon Jul 16, 2007 7:37 pm
Location: Rockford, Illinois

Post by MSimon »

D Tibbets wrote:Computational power wasn't the only issue. In his Google talk, Bussard mentioned that the electron/ion imbalance (~1 ppm) was so small that the calculations did not have the necessary resolution. The effects of these conditions were so small at the particle level that any results were lost in the noise. At least that was my understanding.

Dan Tibbets
Yep. 64-bit or, better, 128-bit calculations are needed. With 128 bits you might as well go fixed point and speed things up some.
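For the "go fixed point" part, a minimal sketch using Python's big integers to emulate a 128-bit Q64.64 fixed-point format (the 64/64 split is just my assumption for illustration, not anything from EMC2):

[code]
# Q64.64 fixed point in plain Python ints: 64 integer bits, 64 fraction bits.
FRAC_BITS = 64
ONE = 1 << FRAC_BITS             # resolution ~5.4e-20

def to_fixed(x: float) -> int:
    return round(x * ONE)        # note: conversion from float is the weak link

def to_float(f: int) -> float:
    return f / ONE

def fx_mul(a: int, b: int) -> int:
    return (a * b) >> FRAC_BITS  # exact 256-bit product, then one rescale

ne = to_fixed(1.0) + (ONE >> 20)   # add an exactly representable ~1e-6 bump
ni = to_fixed(1.0)
print(to_float(ne - ni))           # ~9.54e-7, carried with no rounding at all
[/code]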
Engineering is the art of making what you want from what you can get at a profit.

luked
Posts: 8
Joined: Tue Jul 06, 2010 4:26 pm

Post by luked »

D Tibbets wrote:Computational power wasn't the only issue. In his Google talk, Bussard mentioned that the electron/ion imbalance (~1 ppm) was so small that the calculations did not have the necessary resolution. The effects of these conditions were so small at the particle level that any results were lost in the noise. At least that was my understanding.
Ah, that excludes a lot of hardware (and at the time, it absolutely meant buying supercomputer time).
MSimon wrote:Yep. 64-bit or, better, 128-bit calculations are needed. With 128 bits you might as well go fixed point and speed things up some.
Any idea whether 64 bits is good enough? This makes a big difference: a single OpenCL card now offers about 1 TFLOPS (peak) at 64 bits for $700 on a 650 W supply, but 128-bit fixed point is a problem as far as I know.

MSimon
Posts: 14335
Joined: Mon Jul 16, 2007 7:37 pm
Location: Rockford, Illinois

Post by MSimon »

Roughly: 64-bit floating point gives about 1E16 range in the significand (the non-exponent part). Normally you go to 80 bits to get a guard band for calculations. Given 64 bits, an exponent, and a guard band, you are down to around 48 bits, or roughly 1E14 actual range (assuming differences of 1E-6 are important, that leaves about 1E8 of headroom). It might work. But 64 more bits would give a lot of comfort.
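
You can check that arithmetic in three lines (IEEE 754 doubles carry a 53-bit significand):

[code]
print(2**53)               # 9007199254740992 ~ 9e15: the "about 1E16" range
print(2**48)               # 281474976710656 ~ 2.8e14: after ~5 bits of guard band
print(int(2**48 * 1e-6))   # ~2.8e8: headroom left once 1e-6 differences matter
[/code]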
Engineering is the art of making what you want from what you can get at a profit.

luked
Posts: 8
Joined: Tue Jul 06, 2010 4:26 pm

Post by luked »

MSimon wrote:Roughly: 64-bit floating point gives about 1E16 range in the significand (the non-exponent part). Normally you go to 80 bits to get a guard band for calculations. Given 64 bits, an exponent, and a guard band, you are down to around 48 bits, or roughly 1E14 actual range (assuming differences of 1E-6 are important, that leaves about 1E8 of headroom). It might work. But 64 more bits would give a lot of comfort.
If you can live with 64 bits you can use modern GPGPUs. In any case, my real point was that very few can build a useful polywell, but almost everyone could participate in simulation if that was an option. I would certainly join Fusion@HOME if it were around.

Anyway, just ran into http://talk-polywell.org/bb/viewtopic.php?t=1291 which has great information, so I'll spend more time educating myself there.

Uthman
Posts: 7
Joined: Sat Jul 03, 2010 6:30 am

Post by Uthman »

Someone mentioned in another thread that the nature of the polywell simulation might not lend itself well to distributed computing. The basis of the assertion was that the time it takes to move the data around would be far greater than the time it takes to actually crunch it: the polywell contains a huge number of charged particles interacting with one another, and every single one of them has to take into account every single other one. Given that, it would be better to handle the sim on a single machine, because it isn't really accurate to distribute a 'piece' of the puzzle that can't be solved alone.
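
For illustration, a toy direct-summation force kernel (Python/NumPy, unnormalized units, softening parameter made up) shows the coupling: every particle's update reads every other particle's position, which is exactly what makes naive distribution communication-bound.

[code]
import numpy as np

def coulomb_accels(pos, charge, mass, soft=1e-3):
    # Pairwise displacements d[i, j] = pos[j] - pos[i], shape (N, N, 3).
    d = pos[None, :, :] - pos[:, None, :]
    r2 = (d ** 2).sum(axis=-1) + soft**2     # softened to avoid r = 0
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)            # no self-force
    # Force on i: sum over j of q_i * q_j * (pos_i - pos_j) / r^3.
    f = (charge[:, None] * charge[None, :] * inv_r3)[:, :, None] * (-d)
    return f.sum(axis=1) / mass[:, None]

# Toy usage: 2,000 particles is already 4e6 pair terms per time step, and
# every worker would need every position, every step.
rng = np.random.default_rng(0)
N = 2000
a = coulomb_accels(rng.normal(size=(N, 3)), np.ones(N), np.ones(N))
print(a.shape)   # (2000, 3)
[/code]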

hanelyp
Posts: 2261
Joined: Fri Oct 26, 2007 8:50 pm

Post by hanelyp »

If 64-bit floating point is enough, the CPUs in current-generation PCs can do the job, if perhaps a bit slowly.

If I had the spare energy, I'd be inclined to try a simulation of one or another component effect, rather than a full-up polywell.

- How does the plasma interact with the magnetic field to produce the wiffleball effect at beta = 1? This one is not so sensitive to plasma charge balance.
- Assuming an idealized spherical containment bubble for the electrons and reactor, what is our electrostatic well profile? Highly symmetric, so it should be much easier to handle the precise charge balance (a rough sketch of this case follows below).
- Assuming a potential well, what kind of collision profile do we get, and how does that impact annealing?

As these are idealized cases they might miss something, but they are much more approachable. And they should give useful results to compare with actual experiments, perhaps giving insight not easily found with limited instruments.
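
For the second case there is even a closed form to check any code against: a uniform spherical electron cloud gives a parabolic well. A minimal sketch (Python, SI units; the excess electron count and radius are arbitrary placeholders):

[code]
import numpy as np

EPS0 = 8.8541878128e-12   # F/m
QE   = 1.602176634e-19    # C

def well_potential(r, n_excess, R):
    """Potential (volts) of a uniform sphere of excess electrons, radius R.

    Inside:  phi(r) = Q * (3R^2 - r^2) / (8 pi eps0 R^3)   (parabolic well)
    Outside: phi(r) = Q / (4 pi eps0 r)
    """
    Q = -n_excess * QE
    r = np.asarray(r, dtype=float)
    inside = Q * (3 * R**2 - r**2) / (8 * np.pi * EPS0 * R**3)
    outside = Q / (4 * np.pi * EPS0 * np.maximum(r, 1e-30))
    return np.where(r < R, inside, outside)

# Placeholder numbers: 1e12 excess electrons in a 0.15 m radius cloud.
print(well_potential(0.0, 1e12, 0.15))   # well depth at center, ~ -14 kV
[/code]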

MSimon
Posts: 14335
Joined: Mon Jul 16, 2007 7:37 pm
Location: Rockford, Illinois

Post by MSimon »

luked wrote:
MSimon wrote:Roughly: 64-bit floating point gives about 1E16 range in the significand (the non-exponent part). Normally you go to 80 bits to get a guard band for calculations. Given 64 bits, an exponent, and a guard band, you are down to around 48 bits, or roughly 1E14 actual range (assuming differences of 1E-6 are important, that leaves about 1E8 of headroom). It might work. But 64 more bits would give a lot of comfort.
If you can live with 64 bits you can use modern GPGPUs. In any case, my real point was that very few can build a useful polywell, but almost everyone could participate in simulation if that was an option. I would certainly join Fusion@HOME if it were around.

Anyway, just ran into http://talk-polywell.org/bb/viewtopic.php?t=1291 which has great information, so I'll spend more time educating myself there.
This has been brought up many times and yet none of the folks involved in simulation have ever shown any interest in it.
Engineering is the art of making what you want from what you can get at a profit.

luked
Posts: 8
Joined: Tue Jul 06, 2010 4:26 pm

Post by luked »

Uthman wrote:Someone mentioned in another thread that the nature of the polywell simulation might not lend itself well to distributed computing. The basis of the assertion was that the time it takes to move the data around would be far greater than the time it takes to actually crunch it: the polywell contains a huge number of charged particles interacting with one another, and every single one of them has to take into account every single other one. Given that, it would be better to handle the sim on a single machine, because it isn't really accurate to distribute a 'piece' of the puzzle that can't be solved alone.
Sure, I understand the basic idea. The numerical methods that are normally used aren't suitable for a distributed setting. I'm sure that there are people interested in this, and if they haven't come up with an analysis technique that can be widely distributed (such that computation time dominates communication time) then maybe there isn't one.

luked
Posts: 8
Joined: Tue Jul 06, 2010 4:26 pm

Post by luked »

MSimon wrote:
luked wrote: I would certainly join Fusion@HOME if it were around.
This has been brought up many times and yet none of the folks involved in simulation have ever shown any interest in it.
If no one's done it, then no doubt it can't be done. It would make a good addition to the FAQ if it comes up that much.

Anyway, my main observation was that (approximate) simulation was budgeted at $8M more than four years ago, that it might cost $2M today, and that rough or small-scale runs might even be possible on a home machine with 2-4 GPUs.

It's just frustrating waiting for information.

luked
Posts: 8
Joined: Tue Jul 06, 2010 4:26 pm

Post by luked »

hanelyp wrote:If 64-bit floating point is enough, the CPUs in current-generation PCs can do the job, if perhaps a bit slowly.

If I had the spare energy, I'd be inclined to try a simulation of one or another component effect, rather than a full-up polywell.
Hear, hear.
hanelyp wrote: - How does the plasma interact with the magnetic field to produce the wiffleball effect at beta = 1? This one is not so sensitive to plasma charge balance.
- Assuming an idealized spherical containment bubble for the electrons and reactor, what is our electrostatic well profile? Highly symmetric, so it should be much easier to handle the precise charge balance.
- Assuming a potential well, what kind of collision profile do we get, and how does that impact annealing?

As these are idealized cases they might miss something, but they are much more approachable. And they should give useful results to compare with actual experiments, perhaps giving insight not easily found with limited instruments.
And it would create a sense of interest, investment, and community amongst the (potentially large) number of participants, perhaps leading to a higher profile and more money for people to actually build these things.

[off-topic]It's a crime that there seems to be so little investment in something with such high potential. What is $200M to BP or Westinghouse (or the Gates Foundation, for that matter)? Are the patents a deterrent?[/off-topic]

MSimon
Posts: 14335
Joined: Mon Jul 16, 2007 7:37 pm
Location: Rockford, Illinois

Post by MSimon »

As I expected, the article on Famulus/Mark Suppes has kicked up VC interest. I'm not at liberty to discuss details, but I can tell you there has been a noticeable spike in interest. And that is just what crosses my desk. Who knows what EMC2 has been seeing? Not me.
Engineering is the art of making what you want from what you can get at a profit.

kcdodd
Posts: 722
Joined: Tue Jun 03, 2008 3:36 am
Location: Austin, TX

Post by kcdodd »

Bit precision is not the problem I have had in simulating high densities. If you use a PIC code, very small fluctuations lead to large E fields that grow very quickly. So the number of particles needed for stability gets very large, and the time steps get very small, demanding lots of memory and CPU time.
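
To put rough numbers on it, the standard explicit-PIC rules of thumb (cell size at or below the Debye length, time step well under the inverse plasma frequency) can be sketched like this; the densities, temperature, and box size are arbitrary examples:

[code]
import math

EPS0 = 8.8541878128e-12   # F/m
QE   = 1.602176634e-19    # C
ME   = 9.1093837015e-31   # kg

def pic_constraints(n_e, T_eV, L=0.3):
    """Explicit electrostatic PIC rules of thumb for a box of size L (m)."""
    T = T_eV * QE                                  # temperature in joules
    debye = math.sqrt(EPS0 * T / (n_e * QE**2))    # Debye length (m)
    w_pe = math.sqrt(n_e * QE**2 / (EPS0 * ME))    # plasma frequency (rad/s)
    cells = (L / debye) ** 3                       # need dx <~ debye
    dt = 0.2 / w_pe                                # need w_pe * dt << 1
    return debye, dt, cells

for n in (1e16, 1e18, 1e20):                       # example densities, m^-3
    debye, dt, cells = pic_constraints(n, T_eV=100.0)
    print(f"n={n:.0e}: debye={debye:.2e} m, dt={dt:.2e} s, cells~{cells:.1e}")
[/code]

Each factor of 100 in density shrinks the Debye length tenfold, so the cell count grows a thousandfold while the time step shrinks too, which is exactly the memory and CPU blow-up described above.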
Carter

Jeff Mauldin
Posts: 17
Joined: Thu Feb 21, 2008 8:41 pm

Possible supercomputer access, best software starting point

Post by Jeff Mauldin »

Due to recent (not entirely desired) career jumps, I have landed at an excellent spot at Sandia National Labs in Albuquerque. I'm a computer scientist (master's, UNC-CH) with an EE/Math/CS degree from Duke.

Most of my career has been in data visualization, including a fair amount of supercomputer simulation visualization. A year or so ago I fiddled around with visualizing some of Dr. Mike's simulations of the polywell electric and magnetic fields generated by a hypothetical device. I sent him pictures and we exchanged a few emails, but at the time I couldn't do anything else (the visualization was interactive, so pictures were a bit of a letdown).

As luck would have it, I'm working towards doing some visualization work related to the new supercomputer to be built at Los Alamos National Labs. It's way big. In fact, some of the scuttlebutt is that we don't really have much experience with what to do at this scale, other than straightforwardly running much-higher-fidelity versions of the simulations we've run previously.

So I see a possible window for proposing some polywell simulations on this new computer. My biggest question at this point is what the appropriate simulation software would be to look at, or whether this needs to be a roll-your-own simulation project. My gut tells me I want to simulate the magnetic and electric fields from the grid on a (probably adaptive) spatial mesh, and then simulate a massive number of electrons and their interactions with the fields from the grid and with each other. I haven't yet looked into what the z-pinch folks are doing beyond the top-level information the company makes available, and I don't know what they are doing for simulations other than being pretty certain they are doing lots of simulating.
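
As a tiny example of the "fields from the grid" piece, the on-axis field of a single circular magrid coil has a closed form that any mesh-based solver can be sanity-checked against (the coil radius and ampere-turns below are placeholders I made up):

[code]
import math

MU0 = 4 * math.pi * 1e-7   # T*m/A

def loop_field_on_axis(I, R, z):
    """B_z (tesla) on the axis of a circular current loop, distance z from
    its center: B = mu0 * I * R^2 / (2 * (R^2 + z^2)**1.5).
    Useful as a sanity check for a mesh-based field solver.
    """
    return MU0 * I * R**2 / (2.0 * (R**2 + z**2) ** 1.5)

# Placeholder coil: 0.15 m radius carrying 1e5 ampere-turns.
print(loop_field_on_axis(1e5, 0.15, 0.0))   # ~0.42 T at the coil center
[/code]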
