videos of polywell phase space

Discuss how polywell fusion works; share theoretical questions and answers.

Moderators: tonybarry, MSimon

icarus
Posts: 819
Joined: Mon Jul 07, 2008 12:48 am

Post by icarus »

happyjack:
but when i do, it'll be much more accurate. and take a lot longer.
... you are assuming it will be more accurate but you don't know because you don't calculate errors.

Most of your simulations are worthless as analytical tools because you have no charge on the MaGrid, so at this point you are simulating a different problem, not the Polywell.

ladajo
Posts: 6266
Joined: Thu Sep 17, 2009 11:18 pm
Location: North East Coast

Post by ladajo »

Thanks much for the indulgence. Instructive when compared side by side.
There was a definite improvement as you noted, 0.1 to 0.8 T.
I noticed that when you changed your settings, they were not exactly the same for each run in the later view modes. I am not sure how much this impacts the presentation.
For the numbers part, understood: not ready yet to spit out data. But when you are, I am sure they will be plugged forthwith into the existing models we have from Bussard and Rogers to compare notes and see how it stacks up.
Duly noted on the WAG aspect of this as to errors; however, something is better than nothing, even pretty graphics...
Thanks for the efforts, keep on chugging...

happyjack27
Posts: 1439
Joined: Wed Jul 14, 2010 5:27 pm

Post by happyjack27 »

icarus wrote:happyjack:
but when i do, it'll be much more accurate. and take a lot longer.
... you are assuming it will be more accurate but you don't know because you don't calculate errors.

Most of your simulations are worthless as analytical tools because you have no charge on the MaGrid, so at this point you are simulating a different problem, not the Polywell.
of course it will be more accurate, it's higher resolution. that is by definition more accurate.

i know what the errors are. i know the exact precision of the numbers, the exact precision of the time step, and the exact representation rate of the particles.

and with the magrid charge off (or rather at 10E-16, to be more precise), i AM simulating a magrid. to be more precise, i am simulating a magrid in a very particular parameter regime, and one that is particularly useful in that it allows me to simplify the problem and gather information about a lower-dimensional version of parameter space. once i have that, then i can add the charge to the magrid and gain more information. but without that first, it would be difficult if not impossible to sort out what's responsible for the differences. so you see i'm really doing things the only practical way there is.

93143
Posts: 1142
Joined: Fri Oct 19, 2007 7:51 pm

Post by 93143 »

happyjack27 wrote: of course it will be more accurate, it's higher resolution. that is by definition more accurate.
Not necessarily. Ever heard of the Faucet problem?

I'm not claiming your simulation will exhibit an effect like that, but higher resolution is not "by definition" more accurate.

rjaypeters
Posts: 869
Joined: Fri Aug 20, 2010 2:04 pm
Location: Summerville SC, USA

Post by rjaypeters »

To which "Faucet problem" do you refer?
"Aqaba! By Land!" T. E. Lawrence

R. Peters

93143
Posts: 1142
Joined: Fri Oct 19, 2007 7:51 pm

Post by 93143 »

It's an initial-boundary-value problem (IBVP) used to test multiphase CFD codes. I don't know the exact conditions used, but it looks like a propagating void-fraction discontinuity. You can see what happens if the problem is posed as nonhyperbolic in Figure 2 of Liou, Chang, Nguyen, and Theofanous (2008). Relevant quote: "...solutions are divergent as errors increase with decreasing grid spacing." Hyperbolizing the system results in grid convergence in Figure 6, but the solution still doesn't appear to be converging to the analytic result (to be fair, a canonical Fourier series wouldn't either, under these circumstances... but TVD schemes are supposed to be better than that).

Figure 3 shows how bad an underwater shock tube problem can look, regardless of resolution, if the flux function doesn't handle stiff fluids well. This one doesn't have to do with the analytic structure of the equations, but simply with the numerical method used to solve them. CFD is rife with this stuff.

Since happyjack27 is using a Lagrangian method, I don't expect him to have these specific problems. But to a computational physicist, the claim that higher resolution is "by definition" formally equivalent to higher accuracy is something of a red flag. Heck, I've had practice codes produce unstable solutions because my time step was too small...
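
A throwaway illustration of the same point (a generic numerical-analysis example in Python, nothing to do with happyjack27's simulator): a forward-difference derivative improves as the step shrinks only until floating-point roundoff takes over, after which finer resolution makes the answer strictly worse.

Code:

# Minimal sketch: finer resolution is not automatically more accurate.
# A forward-difference estimate of d/dx sin(x) at x = 1 improves as the
# step h shrinks -- until roundoff error dominates and it gets worse again.
import math

x = 1.0
exact = math.cos(x)

for exponent in range(1, 16):
    h = 10.0 ** (-exponent)
    approx = (math.sin(x + h) - math.sin(x)) / h   # forward difference
    error = abs(approx - exact)
    print(f"h = 1e-{exponent:02d}   error = {error:.3e}")

# Typical output: the error bottoms out around h ~ 1e-8 and then grows,
# because subtracting nearly equal floats loses precision faster than the
# truncation error shrinks.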

rjaypeters
Posts: 869
Joined: Fri Aug 20, 2010 2:04 pm
Location: Summerville SC, USA

Post by rjaypeters »

Thanks.
"Aqaba! By Land!" T. E. Lawrence

R. Peters

MITlurker
Posts: 5
Joined: Sat Dec 04, 2010 11:54 am

Post by MITlurker »

happyjack27 wrote:
oh, and the electron and ion sources are both uniform throughout a sphere of half the magrid radius, centered at the origin. this roughly simulates injection by ionization of a neutral gas, with a few extra electrons shot in from the outside.

i call this one "W is for Wiffleball".

http://www.youtube.com/watch?v=JBObkl0EQGg
A description of the settings (distances, field strength, etc.) would be helpful.

Why were the molecules in the first video expanding in a wafer? Wafer-like starting conditions?

happyjack27
Posts: 1439
Joined: Wed Jul 14, 2010 5:27 pm

Post by happyjack27 »

MITlurker wrote: A description of the settings (distances, field strength, etc.) would be helpful.

Why were the molecules in the first video expanding in a wafer? Wafer-like starting conditions?
eh, no. perhaps i failed to mention that: what you're seeing there is an effect of the "particle filters". starting conditions are just random throughout a sphere of half the magrid radius, and all the particles are simulated, even if you don't see them. the "particle filters" just select which particles to _display_. and that applies to all views, normal x,y,z space and phase space views alike.

the "wafer" as you see is the filter "z bottom" and "z thickness". you will only see particles with a z-coordinate between where the z-bottom slider (on the left) is at and where said slider is at plus the z thickness slider. e.g. if z-bottom is -0.1 and z-thickness is 0.3 you will only see particles w/a z-coordinate between -0.1 and 0.2. and that's the reason why you'll see particles appear and disappear in the phase space views, because they're entering and leaving this "z-slice".

all the relevant parameters, besides the magrid radius, are in the upper left. the bottom ones ("z slice bottom" and down) are just display settings; they don't affect the simulation, just what you see. they're there to help you make more sense of what's going on.

oh, and the sliders with "(log10)" in their name mean take 10 to the power of that number and that's the value. for instance, the total ion charge in that video is 10^(-6.035) coulombs, so that's something on the order of 10^12 simulated ions.
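
for illustration only (this isn't the actual sim code, and the variable names are invented), the z-slice filter and the log10 sliders amount to something like:

Code:

# hypothetical sketch of the display-only z-slice filter and the log10 sliders;
# none of this is the real simulator code, just the arithmetic described above.
z_bottom = -0.1      # "z slice bottom" slider
z_thickness = 0.3    # "z thickness" slider

def visible(particle_z):
    """True if a particle falls inside the displayed z-slice."""
    return z_bottom <= particle_z <= z_bottom + z_thickness

print(visible(0.15), visible(0.25))   # True False -- only the slice is drawn

# sliders labelled "(log10)" hold the exponent; the physical value is 10^slider
total_ion_charge = 10.0 ** (-6.035)              # coulombs
ions = total_ion_charge / 1.602e-19              # divide by the elementary charge
print(f"{total_ion_charge:.2e} C ~ {ions:.1e} ions")   # on the order of 10^12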

MITlurker
Posts: 5
Joined: Sat Dec 04, 2010 11:54 am

Post by MITlurker »

happyjack27 wrote:
MITlurker wrote: A description of the settings (distances, field strength, etc.) would be helpful.

Why were the molecules in the first video expanding in a wafer? Wafer-like starting conditions?
eh, no. perhaps i failed to mention that: what you're seeing there is an effect of the "particle filters". starting conditions are just random throughout a sphere of half the magrid radius, and all the particles are simulated, even if you don't see them. the "particle filters" just select which particles to _display_. and that applies to all views, normal x,y,z space and phase space views alike.

the "wafer" you see is the effect of the "z bottom" and "z thickness" filters. you will only see particles with a z-coordinate between where the z-bottom slider (on the left) is set and where said slider is set plus the z-thickness slider. e.g. if z-bottom is -0.1 and z-thickness is 0.3 you will only see particles with a z-coordinate between -0.1 and 0.2. and that's the reason why you'll see particles appear and disappear in the phase space views: they're entering and leaving this "z-slice".

all the relevant parameters, besides the magrid radius, are in the upper left. the bottom ones ("z slice bottom" and down) are just display settings; they don't affect the simulation, just what you see. they're there to help you make more sense of what's going on.

oh, and the sliders with "(log10)" in their name mean take 10 to the power of that number and that's the value. for instance, the total ion charge in that video is 10^(-6.035) coulombs, so that's something on the order of 10^12 simulated ions.
I see, that is obvious now that I look at it.
I'm completely unfamiliar with this software, so please excuse my ignorance. The z-slice as described makes much more sense.

happyjack27
Posts: 1439
Joined: Wed Jul 14, 2010 5:27 pm

Post by happyjack27 »

my interpretation of what's going on in the phase space views:

axial momentum.

picture a particle traveling in a straight line at constant velocity past a fixed point. when it's very far from the point (whether it's going towards or away), most of its velocity relative to the point is radial. but as it gets closer to the point, the vector from the fixed point to the moving particle - the radial vector - rotates, while the vector representing the particle's absolute velocity stays constant. so the portion of the particle's velocity vector that projects onto the radial vector decreases, while the portion projected onto the vector tangent to it -- the axial (or angular, if you divide by 2*pi*r) component -- increases, until it passes the point of minimal distance, at which point the situation reverses.

so in other words, as the particle gets closer, without actually changing its absolute velocity - since it's not passing right _through_ the point - its radial velocity becomes axial velocity. and then as it goes away, it's converted back into radial velocity.

now for particles orbiting a point charge via a 1/r^2 force this is not exactly the case. the particle will travel in an oscillating spiral, an elliptical orbit, or a hyperbolic "fly by", depending on whether its velocity is suborbital, orbital, or super-orbital, respectively. and its absolute speed will also be greater closer to the fixed point. but the general principle established for the linear trajectory still stands, especially for the super-orbital case. that is, closer to the fixed point, radial velocity becomes axial velocity.

so what we should expect to see, if this is the case, is greater axial momentum near the center. but not necessarily less radial momentum, as the total speed will be faster closer to the center, and that will counteract the decrease in the fraction of the velocity that is radial to the point.
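
for illustration (not the sim code, just the geometry described above, in arbitrary units): a straight-line pass at constant velocity, with the velocity split into radial and tangential ("axial") parts.

Code:

# a particle moving in a straight line at constant speed past the origin;
# near closest approach, the speed shows up almost entirely in the
# tangential ("axial") component rather than the radial one.
import numpy as np

v = np.array([1.0, 0.0, 0.0])          # constant velocity
r0 = np.array([-5.0, 0.5, 0.0])        # start well away, impact parameter 0.5

for t in np.linspace(0.0, 10.0, 11):
    r = r0 + v * t
    r_hat = r / np.linalg.norm(r)
    v_radial = np.dot(v, r_hat)                          # toward/away from origin
    v_tangential = np.linalg.norm(v - v_radial * r_hat)  # "axial" component
    print(f"t={t:4.1f}  |r|={np.linalg.norm(r):5.2f}  "
          f"v_r={v_radial:+.2f}  v_t={v_tangential:.2f}")

# far away, v_r ~ +/-1 and v_t ~ 0; at closest approach (t = 5 here) the
# radial component passes through zero and the speed is entirely tangential.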

you can see this is clearly the case in both of the sims that show the phase space views:

http://www.youtube.com/watch?v=BKuWlbm3tbQ
http://www.youtube.com/watch?v=aqyBA4eCt6c

(axial momentum towards the end of the videos, z-mode between 7 and 8)


radial momentum, right near the center.

you'll notice in the videos (perhaps i haven't posted one that shows this yet) that in the radial momentum view, right in the center, some ions will just shoot through the center. that is, their radial momentum will go from totally inward to totally outward very quickly (or vice versa). now if a particle were traveling right through the center, its radial momentum would have its sign flip instantly - i.e. if you graphed it there'd be a singularity at zero. but since it's never passing through the exact center, its radial velocity smoothly becomes completely axial momentum, and then goes back to radial momentum with the opposite sign. and when a particle is traveling very close to the center, this occurs over a very short amount of time. so in such situations, in the radial momentum view, we would expect to see, well, exactly what we see: ions smoothly and quickly changing the sign of their radial momentum component. though this only explains going from inward (negative sign) to outward (positive sign); particles traveling quickly "down" through the center in this view would require some other explanation.
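
for illustration (again not the sim code; a softened central attraction stands in for the core's space charge, and the numbers are arbitrary), a single near-center pass integrated with leapfrog shows that smooth sign flip:

Code:

# one particle falling in past the center with a small impact parameter;
# the softened 1/r^2 attraction is a stand-in, not the sim's force model.
import numpy as np

eps = 0.05                      # softening length: avoids the point-charge singularity
dt = 5e-4

def accel(r):
    """attractive, softened inverse-square pull toward the origin"""
    return -r / (np.dot(r, r) + eps**2) ** 1.5

r = np.array([-1.0, 0.02, 0.0])     # small impact parameter: a near-center pass
v = np.array([1.0, 0.0, 0.0])       # heading inward

for step in range(6000):
    # leapfrog (kick-drift-kick)
    v += 0.5 * dt * accel(r)
    r += dt * v
    v += 0.5 * dt * accel(r)
    if step % 400 == 0:
        r_hat = r / np.linalg.norm(r)
        p_r = np.dot(v, r_hat)                       # radial momentum per unit mass
        p_t = np.linalg.norm(v - p_r * r_hat)        # axial momentum per unit mass
        print(f"t={step*dt:5.2f}  |r|={np.linalg.norm(r):5.3f}  "
              f"p_r={p_r:+6.2f}  p_t={p_t:6.2f}")

# p_r swings from strongly negative (inward) to positive (outward) around
# closest approach with no discontinuity, while p_t rises and falls briefly --
# the "square corner" seen in the phase-space views.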

happyjack27
Posts: 1439
Joined: Wed Jul 14, 2010 5:27 pm

Post by happyjack27 »

i'm currently uploading two new videos that show the above phenomena quite clearly. i've chosen a time step for this parameter range that's fairly fast (relatively, of course) but not so fast that the dynamics are affected: 100 picoseconds per second (or a little under 10 per frame).

what these videos will show is the radial momentum and the axial momentum of a sim after it's reached equilibrium (after an hour or so, aka a hundred picohours or so). in the first video x is radial, y is axial, and z is distance from center. the second video is the same, except z is density- (volume-) adjusted distance from center, i.e. distance from center cubed, so what looks like density IS density. and i rotate that one around a bit so you can see it better, and adjust the radius scaling factor so you can see the density at different "volume magnification levels", so to speak.
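
for illustration of why the cubed axis does that (not from the sim): binning a uniform-density ball in r^3 gives flat counts, because the volume enclosed within radius r grows as r^3.

Code:

# plotting distance-from-center cubed makes a uniform-density ball look uniform
import numpy as np

rng = np.random.default_rng(0)

# sample points uniformly inside a unit ball (rejection from a cube)
pts = rng.uniform(-1.0, 1.0, size=(200000, 3))
pts = pts[np.einsum('ij,ij->i', pts, pts) <= 1.0]
r = np.linalg.norm(pts, axis=1)

# histogram counts in equal-width bins of r vs. of r**3
for label, coord in [("r    ", r), ("r**3 ", r**3)]:
    counts, _ = np.histogram(coord, bins=5, range=(0.0, 1.0))
    print(label, counts)

# bins of r are heavily weighted toward large r (there's more volume out there);
# bins of r**3 come out roughly equal -- what looks like density IS density.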

so the second is probably far more interesting, but i only created it after i was half done uploading the first.

anyways, you can clearly see the particles at the left edge (high inward radial velocity) suddenly accelerating down a 45 degree diagonal line, and then decelerating likewise on the other side, before slowly reducing their radial momentum while keeping their axial momentum constant.

these are the particles passing close by the center. after they pass the center, they're still travelling outward, but their outward momentum slowly diminishes due to their electrostatic attraction towards the center, until it reaches zero (the particle passes through x = 0 in phase space), at which point it accelerates back toward the center (left side = inward momentum) for another pass (making the square corner again).

notice that the line being very near 45 degrees here means that axial momentum + radial momentum remains nearly constant, i.e. they don't gain/lose a lot of inertia on this fly-by/through - relative to their current momentum, at least.

first one uploaded: http://www.youtube.com/watch?v=sO1M4OLOIDU
second one: http://www.youtube.com/watch?v=cYvdRlKwEtE

oh, and i should note that i "cheat" in this one by starting electrons out in a sphere of 0.05 times the magrid radius. that's why i did the "brute force" approach to test the validity of that.

if the wb theory is correct and it does form a sphere of zero flux, well, that means no electrons escape from inside that sphere, but it also means that no electrons get IN. so i'm thinking you just have to have electrons from the outside pass through that sphere with just a little inertia. then they'll jostle things up a little and maybe add to the mirror coils, maybe not, maybe kick another electron out. but over time i imagine that's how you'd form an electron core without starting them out there. though it takes much more than 100 picohours.

just did the calcs: 100 picohours is 0.36 microseconds. so at this rate it takes roughly 3 hours to simulate a microsecond.
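
spelling out the arithmetic (assuming the ~100 picoseconds of simulated time per second of run time mentioned above):

Code:

# quick sanity check of the picohour arithmetic in the post above
picohour = 1e-12 * 3600                  # one picohour, in seconds
print(100 * picohour)                    # 3.6e-07 s, i.e. 0.36 microseconds

rate = 100e-12                           # ~100 ps of plasma time per second of run time
print(1e-6 / rate / 3600)                # ~2.8 -> roughly 3 hours per simulated microsecond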

oh, and finally, just normal spatial coordinates:

http://www.youtube.com/watch?v=LFmkRTZwwxY

rcain
Posts: 992
Joined: Mon Apr 14, 2008 2:43 pm
Contact:

Post by rcain »

happyjack27 wrote: just did the calcs: 100 picohours is 0.36 microseconds. so at this rate it takes roughly 3 hours to simulate a microsecond.
hmm, by my rough mental arithmetic that'll be about 4 months of computing time for a millisecond of real-time sim - and ideally i'd love to see a whole second play out (e.g. to compare with real experimental shots, and especially for thermalisation and transport studies).

a fatter box sure would be nice at this point. do you suppose your sim would transcode to parallel/grid computing, e.g. BOINC - http://boinc.berkeley.edu/ ?

other than that you might have to consider compromising the model, i guess.

fantastic to see the basic structures and symmetries emerging, however, even at these scales. thanks for your own written narrative btw - it really helps - and it all sounds well reasoned/logical, imho, and reassuringly sort of what we were expecting.

i am thinking of taking some still images of interesting bits so we can kick off some further discussion. what will really help us is to be able to quickly derive 'real' (instrument) values. any chance of a consolidated cheat-sheet at some point?

lastly, great to see 'cross section' there in your phase space repertoire. the holy grail. it will be really interesting to see animated graphs of this as we vary parameter space, explore optimisation strategies such as coil config, etc. later.

keep up the great work - and as someone has said - don't forget to eat.

happyjack27
Posts: 1439
Joined: Wed Jul 14, 2010 5:27 pm

Post by happyjack27 »

rcain wrote: a fatter box sure would be nice at this point. do you suppose your sim would transcode to parallel/grid computing, e.g. BOINC - http://boinc.berkeley.edu/ ?
i've thought about the difficulties of doing that. it's an n-body all-pairs algorithm, which means each data element eventually has to get paired with every other data element. that memory access pattern is not conducive to distributed computing. for efficient distributed computing you need to be able to break the problem up into data-independent chunks.

there are other algorithms, however, that would be more compatible with that architecture. and at particle counts much higher than 14k they tend to outperform an all-pairs one, so you'd want to switch over to them anyways. particularly i'm referring to barnes-hut treecodes and "mesh" i.e. "grid" based algorithms. a barnes-hut treecode would be a little better and scales much better, but would still have a lot of data interdependence, and fairly unpredictable interdependence at that. to go distributed you'd probably want to go with a mesh algorithm. that is, you spatially divide up the sim space into little cubic sections and assign a cubic section to each processing node. then you still have to do some treecode-like stuff, but the pattern is pretty regular. though that works best if the particle density throughout the space is uniform; otherwise you've got to cut it up into different-size chunks, and that's where a proper barnes-hut treecode algorithm comes in.

all in all it's not easy, particularly because of the i/o demands, since it's a highly inter-dependent system, and it will only start to pay off at very high particle counts. but there are ways to approach it, all of which require deep and well thought-out changes to the algorithm.
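
for a rough idea of what the mesh/monopole approach looks like (an illustrative single-node sketch, not the sim code; a distributed version would hand each block of cells to its own processing node):

Code:

# bin particles into cubic cells, treat the own + neighbouring cells exactly,
# and approximate every distant cell by its total charge at its center of charge.
import numpy as np

rng = np.random.default_rng(1)
N, box, ncell = 2000, 1.0, 4              # particles, box size, cells per axis
pos = rng.uniform(0.0, box, size=(N, 3))
q = np.full(N, 1.0 / N)                   # equal charges (arbitrary units)

cell_w = box / ncell
idx = np.minimum((pos // cell_w).astype(int), ncell - 1)   # (N, 3) cell indices

# per-cell monopole: total charge and center of charge
Q = np.zeros((ncell, ncell, ncell))
C = np.zeros((ncell, ncell, ncell, 3))
for i in range(N):
    a, b, c = idx[i]
    Q[a, b, c] += q[i]
    C[a, b, c] += q[i] * pos[i]
C[Q > 0] /= Q[Q > 0][:, None]

def potential_approx(i):
    """exact sum over own + neighbouring cells, monopole for every other cell"""
    a, b, c = idx[i]
    phi = 0.0
    for ca in range(ncell):
        for cb in range(ncell):
            for cc in range(ncell):
                if Q[ca, cb, cc] == 0.0:
                    continue
                if max(abs(ca - a), abs(cb - b), abs(cc - c)) <= 1:
                    # near cell: direct particle-particle sum
                    mask = (idx[:, 0] == ca) & (idx[:, 1] == cb) & (idx[:, 2] == cc)
                    for j in np.flatnonzero(mask):
                        if j != i:
                            phi += q[j] / np.linalg.norm(pos[i] - pos[j])
                else:
                    # far cell: its whole charge lumped at its center of charge
                    phi += Q[ca, cb, cc] / np.linalg.norm(pos[i] - C[ca, cb, cc])
    return phi

def potential_exact(i):
    d = np.linalg.norm(pos - pos[i], axis=1)
    d[i] = np.inf
    return np.sum(q / d)

for i in range(3):
    print(f"particle {i}: approx {potential_approx(i):.4f}   exact {potential_exact(i):.4f}")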

rcain
Posts: 992
Joined: Mon Apr 14, 2008 2:43 pm
Contact:

Post by rcain »

agreed. i thought better of the suggestion almost as soon as i'd written it.

did a little scouring of the net, and the nearest i could find to the type of model i was thinking of - one that might lend itself to a parallel/grid approach for this type of (time-like) problem - was the following:

http://www.cours.polymtl.ca/ele6705/Art ... 6705_3.pdf

- 'A Weighted Z Spectrum, Parallel Algorithm, and Processors for Mathematical Model Estimation', Michael J. Corinthios, IEEE Transactions on Computers, Vol. 45, No. 5, May 1996.

thought it might interest you. though i suspect you might be able to suggest better-refined methods using a similar pattern that are now available.

(edit: ps. like Barnes–Hut it also exhibits algorithmic complexity of 'n log n' rather than 'n^2')

but as you say, such a hybrid design is not easy, to say the least. still, it's apparent from your rendering metrics that we are nearing a physical performance ceiling already.

any other thoughts about getting your sim projecting into a 1-3 second scale?
Last edited by rcain on Thu Dec 09, 2010 11:36 pm, edited 1 time in total.
