Virtual Polywell

Discuss how polywell fusion works; share theoretical questions and answers.

Moderators: tonybarry, MSimon

drmike
Posts: 825
Joined: Sat Jul 14, 2007 11:54 pm
Contact:

Post by drmike »

Well, I ground through a bunch of numbers and got underflow instead of overflow; if I tried to continue I'd have 0/0 everywhere in space. That obviously makes no sense, so I need to decide whether this method is worth trying to salvage, or just give up and say - "that's a dead end".

From a statistical thermodynamics standpoint, the math is telling me there is only one possible state. Since all possible magnetic fields exist, that can't be true.

I can do the numbers if I assume a ratio of voltage on the MaGrid to current in the MaGrid on the order of V/I^2 ~ 0.1 -> 10. At 50 kV and 100k Amp-turns the ratio is on the order of 0.0005, so a realistic calculation isn't possible. So maybe I'll do some calculations with a system that can be computed, and see if any trends show up. But I'm not sure there's much point, since a real system can't be done. Well, it could be, I suppose, but the exponent would need to be as big as the mantissa!

I think I'll sleep on it and try again tomorrow. Funny that a simple formula is impossible to compute.

MSimon
Posts: 14335
Joined: Mon Jul 16, 2007 7:37 pm
Location: Rockford, Illinois
Contact:

Post by MSimon »

drmike wrote:
laksindiaforfusion wrote: Would that mean computational savings? Like, the "ionstateupdate()" function could be called, say, once every 100 calls of the "electronstateupdate()" function.

Another way to put it is: the ion motion can be updated in the simulation after letting the ion sit where it is for a time interval t', counting the vector sum of all Coulomb collision forces in that interval, and then updating the position with the motion caused by the vector sum of all the electron collisions during t'. That way we can model both plasmas together.
Yes, that would work. It would be a PIC simulation with different update rates. You could do the same thing for the H vs. B ions as well, updating the B once every 10 times you update H because it has about 10 times the mass.
Wouldn't the ratio be the square root of the ratio of the masses, since the interest is in velocity?
Engineering is the art of making what you want from what you can get at a profit.

Mikos
Posts: 27
Joined: Wed Jan 16, 2008 3:19 pm
Location: Prague, Czech Republic

Post by Mikos »

drmike wrote:I will be pretty happy with 125 GFLOP double precision at $1000! 15 years ago I was thinking about building that level of processing power for a million dollars. Even if AMD is late on what they thought they could do initially, it will be worth waiting for.
Why don't you just buy a Sony PlayStation 3 console? It has the Cell processor (8 cores: 1 central PPE core (64-bit, Power architecture) plus 7 synergistic SPE cores (128-bit SIMD)), which gives you 218 GFLOPS. And it officially runs Linux.

If you also include the power of the GPU, it gives you 2 teraFLOPS. You can use NVidia GPUs for general computing with their CUDA development environment, but I don't know if the GPU in the PlayStation 3 is accessible from Linux (I have read somewhere that access to it from Linux is somehow restricted).

The PlayStation 3 costs only $400. The only problem I can think of is the amount of memory: it has only 256 MB RAM (powerful XDR DRAM) + 256 MB video RAM (GDDR3).
"Those who would give up Essential Liberty to purchase a little Temporary Safety, deserve neither Liberty nor Safety."
-- Benjamin Franklin

drmike
Posts: 825
Joined: Sat Jul 14, 2007 11:54 pm
Contact:

Post by drmike »

MSimon wrote:Wouldn't the ratio be the as square root of the ratio of the masses since the interest is in velocity?
I'd have to dig through the formulas, but that certainly makes sense. The fundamental relation is F = ma = dp/dt, so the change in time is proportional to mass. I'm using energy so I can integrate out the velocity and just deal with space, so it is possible the method lets me get away with it.
Mikos wrote:Why don't you just buy Sony PlayStation 3 console? It has Cell processor (8 cores - 1 central PPE core (64 bit, Power architecture), 7 synergistic SPE cores (128 bit SIMD)) which gives you 218 GFLOPS. And it officially runs Linux.
Because the problem isn't processing power, it's numerical representation. A double precision floating point in IEEE has an 11 bit exponent and I need 16 to 18 for the range of problem I'm facing. I've written software to do things like that, but it is probably better to formulate the problem in a better way.

For the hell of it, I'll change the numbers so the formulas work and figure out what it means later. It may be a dead end, but there are always interesting corners to look at in a box canyon.

laksindiaforfusion
Posts: 22
Joined: Sat Feb 16, 2008 9:48 pm
Contact:

Post by laksindiaforfusion »

drmike wrote:
Because the problem isn't processing power, it's numerical representation. A double precision floating point in IEEE has an 11 bit exponent and I need 16 to 18 for the range of problem I'm facing. I've written software to do things like that, but it is probably better to formulate the problem in a better way.

For the hell of it, I'll change the numbers so the formulas work and figure out what it means later. It may be a dead end, but there are always interesting corners to look at in a box canyon.
Well, one approach is to use a mantissa-exponent method and scale the representation (IEEE 32- or 64-bit IS a mantissa and exponent), so you can have something like

struct {
    unsigned int scale;
    double value;   /* or float */
} NewRepresentation;

You can always get the real value back, at the expense of resolution, as realvalue = NewRep.scale * NewRep.value;

Again, the compromise here is an increase in dynamic range at the expense of memory, computation [for checking whether we are saturated and then updating (incrementing) the scale field, etc.], and, last but not least, resolution.

Several optimizations can be done here with simple algebraic identities. For example, if we keep the scale constant for things that are repeatedly multiplied, added, or (even better) divided, then we just do the operations on the value and adjust the scale accordingly.

We can also share one scale across a whole array. There are other things you can do... but ultimately, the more bits, the better :D
The believer's burden and a skeptic's purpose

drmike
Posts: 825
Joined: Sat Jul 14, 2007 11:54 pm
Contact:

Post by drmike »

Yeah, you can play all kinds of games. But it has to be worth the effort.

I put up some pictures in the take_2 version of the "electron fluid" paper. What it boils down to is that the shape of the potential does not change no matter how much magnetic field is present. That just doesn't make sense to me.

I'm less rusty than when I started, so that's a good warm up. But it'd be nice to have a powerful theory of how a BFR should behave. Time to sit back and think.

Edit: point to the right file!

scareduck
Posts: 552
Joined: Wed Oct 17, 2007 5:03 am

Post by scareduck »

Bussard's number crunchers had problems with underflow, so at least you're consistent. Don't know whether I should add a smiley there or not.

MSimon
Posts: 14335
Joined: Mon Jul 16, 2007 7:37 pm
Location: Rockford, Illinois
Contact:

Post by MSimon »

scareduck wrote:Bussard's number crunchers had problems with underflow, so at least you're consistent. Don't know whether I should add a smiley there or not.
I believe this was discussed extensively upthread. The conclusion was we needed a native 128-bit FP representation, as much for the added bits in the mantissa as for the exponent. Sixty-four bits was close - but no cigar. The problem is you give away 20 bits at the start because the electron density is so little different from the ion density.

Dr Mike,

Have you considered limiting electrostatic particle calculations to within a Debye length or three?
Engineering is the art of making what you want from what you can get at a profit.

drmike
Posts: 825
Joined: Sat Jul 14, 2007 11:54 pm
Contact:

Post by drmike »

MSimon wrote:
scareduck wrote:Bussard's number crunchers had problems with underflow, so at least you're consistent. Don't know whether I should add a smiley there or not.
scareduck - yes, let's keep smiling!!
8)
I believe this was discussed extensively upthread. The conclusion was we needed a native 128-bit FP representation, as much for the added bits in the mantissa as for the exponent. Sixty-four bits was close - but no cigar. The problem is you give away 20 bits at the start because the electron density is so little different from the ion density.
It'd be nice to have 30 bits of that be exponent too, but good luck getting that thru the IEEE!
Dr Mike,

Have you considered limiting electrostatic particle calculations to within a Debye length or three?


No, but that's a good idea. I'm thinking about the assumptions - what does particle distribution really mean? I sort of started with a static view of things, and I put the magnetic field - particle interaction at the same level as an energy distribution. None of the books do that (and now I know why!!) - they make the B field fixed and go from there. There is nothing fixed in the Polywell!

A lot of people have struggled with this for a long time. It's one of the reasons I want to do basic physics in my basement - there's no way to get really low level basic science funded otherwise. The neat thing is I've already explored a few dead ends (several I haven't even bothered to post) so I've got plenty more ideas to try. And once I do find a theory that makes sense, I should be able to test it with the crude equipment I've got at hand.

Then we can build rocket engines to get us to Mars.
:D

scareduck
Posts: 552
Joined: Wed Oct 17, 2007 5:03 am

Post by scareduck »

drmike wrote:scareduck - yes, let's keep smiling!!
I appreciate your enthusiasm! And MSimon's, and everyone else, too.
It'd be nice to have 30 bits of that be exponent too, but good luck getting that thru the IEEE!
The real benefit there is to have commodity hardware that supports these formats. But it occurred to me -- do we really need that, after all? How difficult would it be to build an FPGA that can handle MDAS operations in some nonstandard format? I started looking into this a little bit. A number of companies sell math-oriented FPGA devices, though I don't know if any of them support even the proposed IEEE 754R spec.

Also, if you can tolerate the performance hit, I found a list of software-only arbitrary-precision floating point libraries at Lawrence Berkeley Labs' website. You might want to investigate some of those.
Last edited by scareduck on Sun Feb 24, 2008 6:53 pm, edited 1 time in total.

scareduck
Posts: 552
Joined: Wed Oct 17, 2007 5:03 am

Post by scareduck »

Duplicated post.

drmike
Posts: 825
Joined: Sat Jul 14, 2007 11:54 pm
Contact:

Post by drmike »

Yeah, I actually program FPGAs as part of my day job, so that's definitely a possibility. But before I head down that road, I want to be certain it's the right road.

I've been reading all my old books and thinking about things a lot. I really have two problems. One is philosophical in a way: what assumptions are the right ones? I hate playing that game, because the whole point of doing the physics is to understand what is going on. It is just too darn complicated, though!

Another problem is where to start. Up to now I've just sort of wanted to look at the system as a final state and have tried to look for a self-consistent solution.

I'm kind of back to square one in a way - but this time I think doing a full blown 6D phase space is the right thing to do, and I should start with nothing. Let the electrons flow from the source points (those balls off in the cusp corners of the WB-6 pictures) and just follow the fluid. With a boundary value problem, I can at least define what the walls are in phase space, and then watch it evolve. There are fewer assumptions that way.

I still have to make assumptions about what rules apply where, so I'll ignore collisions and radiation to start with. It should be computable with a desktop computer, darn it!!
:)

MSimon
Posts: 14335
Joined: Mon Jul 16, 2007 7:37 pm
Location: Rockford, Illinois
Contact:

Post by MSimon »

Some interesting stuff on number theory in computing:

the table makers dilemma (pdf)

It deals with the fact that there are no irrational numbers inside computers. i.e. the number line is quantized.

They say that to do correct rounding of transcendentals (sin, cos, 2^x where x is a fraction, 1/x, etc.), internal representations of numbers need about twice as many bits as the final result. So a guard band of 16 bits for a 64-bit calculation is inadequate.

What you hope is that the method chosen averages the errors in a series of calculations rather than propagating them.

What does all this mean? One thing is that you have to choose the rounding method according to the problem. - Not an option in most systems.

Say we want final results to 1 part in 1,000. That is 10 bits of accuracy. We need another 20 bits to account for the small excess of electrons. Doubling that says you need a significand (also sometimes called a mantissa) of 60 bits (actually a few more, depending) to get the desired accuracy in one calculation, if a transcendental or division is involved.

You probably need to add one bit for every time a calculation is repeated in a sequence. Maybe two bits. This gets out of hand quickly.

I posted up thread (or in another thread) how this causes some plasma calculations under certain circumstances to blow up.

So what we may want to take into consideration is that the overall problem may not be computable without simplifying assumptions and we do not yet know what those assumptions are.

Another paper on rounding:

http://www.cs.berkeley.edu/~wkahan/Mindless.pdf
Engineering is the art of making what you want from what you can get at a profit.

jmc
Posts: 427
Joined: Fri Aug 31, 2007 9:16 am
Location: Ireland

Post by jmc »

Hello, I'm back after several months and I've finally read the whole thread! Just like to ask a few questions and make a few comments.

Regarding shielding the electron gun in particular from alphas: I don't really understand. Isn't an electron gun simply a piece of ordinary metal set to a voltage? Why is it important that alphas don't hit it? If heat dissipation is an issue there, then it will be an issue everywhere the alphas hit. If you ask me, the biggest challenge in the magrid idea will be protecting the coils from alpha bombardment, since alpha bombardment will make them brittle and increase their resistivity; it might even cause them to short out. I'm not sure even a 2 tesla field will protect you from alpha particles (a 2 MeV alpha has a Larmor radius of about 20 cm).

Secondly, how much fiddling around in this virtual polywell has been dedicated to coil spacing? If the coils are very close together you get Bussard's 'funny cusp', which causes the electrons to go through a corridor of zero field into the coils; if you increase the coil spacing you get big massive line cusps in between all the coils. (I've got some beautiful pictures of the field lines, but I'm not a seasoned blogger, so I don't know how to attach them to posts.)

You can't stop the field lines from touching solid surfaces; all cusp lines go radially outward from the polywell directly into a solid surface. By charging the magrid to a high potential, however, you might be able to ensure that most of the emerging electrons get turned back by the electric field. Upscattering of electrons from collisions in the well will still cause some net loss of energy. As the density of electrons gets higher and higher, the electrons might 'short circuit' the center of the well with the vacuum vessel; this might allow ions to escape through the cusps, although the loss radii for the ions would still be the electron rather than the ion Larmor radius.

drmike
Posts: 825
Joined: Sat Jul 14, 2007 11:54 pm
Contact:

Post by drmike »

Simon: thanks, that looks like an interesting read. I do think the theory needs to be robust otherwise floating point representation will kill it. It is clearly pretty easy to make a bad theory :)

jmc: Welcome back! Definitely an impressive feat reading this whole thread!

I agree, electron gun is a misnomer. I prefer electron source. Alphas smacking them is not a problem.

I haven't played with coil spacing yet. It is a free parameter, so I can play with it when I get a chance. So far I don't even have finite size for the coils - one step at a time!

The B field lines fall off as 1/r^3 in intensity, so keeping the wall 3 or more coil radii away from the center reduces the B field to 1/27th or less. I think the main idea of the coils is to keep the electrons away from the positive grid, not so much to contain the plasma. However, the 3D cusp shape is pretty ideal for containment.

I first want to see how a pure electron gas behaves, then I'll look at plasma. It's been an interesting learning curve, and at least it is fun!

Post Reply