Vlasov Solver [work in progress]

Discuss how polywell fusion works; share theoretical questions and answers.

Moderators: tonybarry, MSimon

kcdodd
Posts: 722
Joined: Tue Jun 03, 2008 3:36 am
Location: Austin, TX

Vlasov Solver [work in progress]

Post by kcdodd »

I just thought I would post an update on the solver I am working on. Even with the adaptive mesh, it is proving difficult to get the memory usage into a workable range. The actual solver code has been done since I posted my introduction in the general forum, but that was apparently the easy part :(. I'm having to change a lot of things, one being my density integration algorithm, to try to save memory, so it's taking more time. But anyway, for those who are curious what the heck it may (hopefully) eventually do, here is a quick rundown.

Starting out, the program is given the min and max position and momentum of anything that will be in the simulation (electrons, ions, etc.). It generates an initial mesh by permutation of the min/max coordinates, creating a 6-D box in phase space, which is subdivided into simplices (the 6-D version of triangles/tetrahedra, etc.), forming a binary partition tree. The target number of divisions is reached by subdividing the simplices in such a way that the vertices all line up, and any two bordering simplices can differ by at most 2x (or 1/2) in size. So areas of interest in the mesh can be refined much more than others.
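
For the curious, here is a minimal C++ sketch of the kind of structure that implies. This is not the actual solver code; the names Vertex6D and Simplex and the layout are invented for illustration.

Code:

#include <array>
#include <cstddef>

// A point in 6-D phase space: three position and three momentum coordinates.
struct Vertex6D {
    std::array<double, 6> x;   // (x, y, z, px, py, pz)
};

// One cell of the mesh. A simplex in 6 dimensions has 7 vertices; adjacent
// simplices share vertex indices. Subdivision produces two children, giving
// the binary partition tree; the 2:1 size balance between neighbors is
// enforced by the refinement logic, which is not shown here.
struct Simplex {
    std::array<std::size_t, 7> verts;        // indices into a shared vertex pool
    Simplex* child[2] = {nullptr, nullptr};  // binary subdivision
    int level = 0;                           // refinement depth

    bool isLeaf() const { return child[0] == nullptr; }
};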

Each vertex represents a point in phase space, and the densities of the various populations are attached to it. The gradient of the densities can then be used in the Vlasov equation to find the rate of change of the density in phase space for each population. Several samples are taken, and the Runge-Kutta method is used to integrate the total change in density at all the vertices over time.
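
As a sketch of that integration step (illustrative only; rhs() stands in for evaluating the Vlasov right-hand side, -v.grad_x(f) - F.grad_p(f), from the mesh gradients):

Code:

#include <cstddef>
#include <functional>
#include <vector>

using State = std::vector<double>;   // one density value per phase-space vertex

// Classic fourth-order Runge-Kutta step: take four samples of the rate of
// change and combine them to advance the densities by dt.
State rk4Step(const State& f, double dt,
              const std::function<State(const State&)>& rhs)
{
    auto shifted = [](const State& a, double s, const State& b) {
        State r(a.size());
        for (std::size_t i = 0; i < a.size(); ++i) r[i] = a[i] + s * b[i];
        return r;
    };
    State k1 = rhs(f);
    State k2 = rhs(shifted(f, 0.5 * dt, k1));
    State k3 = rhs(shifted(f, 0.5 * dt, k2));
    State k4 = rhs(shifted(f, dt, k3));
    State out(f.size());
    for (std::size_t i = 0; i < f.size(); ++i)
        out[i] = f[i] + (dt / 6.0) * (k1[i] + 2.0 * k2[i] + 2.0 * k3[i] + k4[i]);
    return out;
}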

Then the densities can be integrated over momentum space to find the spatial density distribution. At that point you can use that information to calculate new E-fields and B-fields and continue on to (hopefully) find a steady-state solution.
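
In equation form, that projection is n(x) = integral of f(x, p) over d^3p. A toy version of the quadrature (illustrative; a real version would walk the momentum-space simplices sitting over each spatial point):

Code:

#include <cstddef>
#include <vector>

// Spatial density at one point: sum the phase-space density over the momentum
// samples that project there, weighted by each sample's momentum-space volume.
double spatialDensity(const std::vector<double>& f,        // f at momentum samples
                      const std::vector<double>& pVolume)  // momentum volume per sample
{
    double n = 0.0;
    for (std::size_t i = 0; i < f.size(); ++i)
        n += f[i] * pVolume[i];
    return n;
}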

Right now the collisional terms are not included. I still haven't learned exactly what they mean, so I can't implement them yet, lol.
Carter

tonybarry
Posts: 219
Joined: Sun Jul 08, 2007 4:32 am
Location: Sydney, Australia
Contact:

Post by tonybarry »

Hello Carter,
This sounds good. Keep up the effort. How much RAM does your sim currently consume? And what hardware are you running it on?

Regards,
Tony Barry

drmike
Posts: 825
Joined: Sat Jul 14, 2007 11:54 pm
Contact:

Post by drmike »

Sounds great! Are you keeping the E and B fields on the same mesh, or are you using the "leapfrog" method so they are offset by half a mesh position?
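
For readers unfamiliar with the term: in a leapfrog scheme E and B are advanced at interleaved half steps. A 1-D vacuum sketch, not anything from Carter's code:

Code:

#include <cstddef>
#include <vector>

// E lives at integer time steps, B at half steps (and at staggered grid
// points), so each field is advanced using the other at the midpoint of its
// own step. 1-D vacuum Maxwell; assumes E.size() == B.size().
void leapfrogStep(std::vector<double>& E, std::vector<double>& B,
                  double dt, double dx, double c)
{
    for (std::size_t i = 0; i + 1 < B.size(); ++i)   // B: t - dt/2 -> t + dt/2
        B[i] -= dt * (E[i + 1] - E[i]) / dx;
    for (std::size_t i = 1; i < E.size(); ++i)       // E: t -> t + dt
        E[i] -= c * c * dt * (B[i] - B[i - 1]) / dx;
}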

I am not surprised about the memory - I ran into the same problem.

Ignore collisions for now. Once you get the solver to work, we can include that level at a later time. It's just one more term.

Good luck!

kcdodd
Posts: 722
Joined: Tue Jun 03, 2008 3:36 am
Location: Austin, TX

Post by kcdodd »

I'm running Win2k with 1 GB of memory and a 2 GHz processor. It's a few-years-old computer, lol.

I've already made some changes. I think one problem was some memory fragmentation, but I believe I fixed that. It's currently using about 150 MB of memory for every 100k simplices, which works out to an average of about 1500 bytes per simplex for all the data. I'm using double precision, so a 6-vector takes 48 B, and each simplex has 7 vertices, but adjacent ones share them. Plus the adjacency lists etc., plus the gradient and density derivative, forces, etc., to actually do the simulation. It still doesn't seem like it should need that much space, but that's just a feeling haha.
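
To make the accounting concrete, here is a guess at the per-simplex payload. The 150 MB per 100k figure is Carter's; the struct layout below is invented for illustration.

Code:

#include <cstdio>

struct Vertex6D { double x[6]; };      // 48 B, as stated above

struct SimplexGuess {
    int vertex[7];                     // indices into the shared vertex pool
    int neighbor[7];                   // one adjacent simplex per face
    double gradient[6];                // density gradient for one population
    double dfdt;                       // density time derivative
};

int main() {
    std::printf("vertex: %zu B, simplex: %zu B\n",
                sizeof(Vertex6D), sizeof(SimplexGuess));
    // The fixed fields here come to well under 200 B, versus ~1500 B observed
    // per simplex: tree nodes, per-population copies, RK stages, and
    // allocator overhead have to account for the difference.
}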

I create two meshes: a 6-D phase-space mesh and a corresponding 3-D mesh. The way I have it right now, the density data is integrated onto the 3-D mesh, which then calculates the E and B fields on the 3-D mesh, and the 6-D mesh goes back and does a lookup. The two meshes are independent, but the refinement should roughly match the "projection" of the 6-D mesh.
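
The coupling step, sketched (names illustrative): each 6-D vertex keeps only its spatial half for the field lookup on the 3-D mesh.

Code:

#include <array>

struct Vec3 { double x, y, z; };

// Project a phase-space vertex (x, y, z, px, py, pz) onto configuration space.
Vec3 spatialPart(const std::array<double, 6>& phase)
{
    return { phase[0], phase[1], phase[2] };
}
// A field container on the 3-D mesh would then supply E and B at that point,
// e.g. fields3d.lookup(spatialPart(v)), located via the 3-D mesh's own tree.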
Carter

drmike
Posts: 825
Joined: Sat Jul 14, 2007 11:54 pm
Contact:

Post by drmike »

So the 3D mesh is an integral of the 6D mesh (over velocity)?

I used the index to determine position and nearest neighbor. It would be interesting to see how the methods compare.

Keep us posted!

I've decided it's time for a new computer. Everything I've tried to do in the past week has failed because my software is so far out of date with the new stuff I want to try out (about 5 years old, so even older than yours!). Hopefully I'll be able to play soon too. :)

kcdodd
Posts: 722
Joined: Tue Jun 03, 2008 3:36 am
Location: Austin, TX

Post by kcdodd »

Yeah. It's interpreted as relativistic momentum, though, and the velocities for the forces etc. are found using the momentum = gamma*m*v relation.
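
Written out (a sketch of the conversion, not Carter's code): from p = gamma*m*v it follows that gamma = sqrt(1 + (|p|/(m*c))^2), so v = p/(gamma*m).

Code:

#include <cmath>

// Recover velocity from relativistic momentum: v = p / (gamma * m),
// with gamma = sqrt(1 + |p|^2 / (m*c)^2).
void momentumToVelocity(const double p[3], double m, double c, double v[3])
{
    const double p2 = p[0] * p[0] + p[1] * p[1] + p[2] * p[2];
    const double gamma = std::sqrt(1.0 + p2 / (m * m * c * c));
    for (int i = 0; i < 3; ++i)
        v[i] = p[i] / (gamma * m);
}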

The cells are not even regular, so I can't use direct indexing. I use a binary space partition tree to find the nearest neighbors.
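
A generic picture of that lookup (illustrative; the real tree partitions simplices, but the descent is the same idea):

Code:

#include <array>

// A node of a binary space partition tree over 6-D phase space. Interior
// nodes split on one axis; leaves point at a simplex.
struct BspNode {
    int axis = 0;                      // which of the 6 coordinates to test
    double split = 0.0;                // splitting plane: x[axis] == split
    BspNode* child[2] = {nullptr, nullptr};
    int leafSimplex = -1;              // valid only at leaves

    bool isLeaf() const { return child[0] == nullptr; }
};

// Descend from the root to the leaf containing query point q.
int locate(const BspNode* n, const std::array<double, 6>& q)
{
    while (!n->isLeaf())
        n = n->child[q[n->axis] < n->split ? 0 : 1];
    return n->leafSimplex;
}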
Carter

charliem
Posts: 218
Joined: Wed May 28, 2008 8:55 pm

Post by charliem »

Some past attempts to write programs of this sort have been limited by a lack of sufficient memory or processor speed.

It could be worth studying (now that it is in its beginning stages) whether it could be designed to run on a loosely-coupled multiprocessor grid instead of just one CPU.

I have access to a small multiprocessor of 32 x 3 GHz CPUs, and most probably there are people over here who have even more powerful ones available to them.

And now that I come to think about it, it may sound like sci-fi, but if we had the right program I'm sure that a cooperative effort between fusion fans could lend a LOT of computer power, much more than has been possible till now, in the range of teraflops (sort of like the SETI@home initiative).

dch24
Posts: 142
Joined: Sat Oct 27, 2007 10:43 pm

Post by dch24 »

charliem wrote: And now that I come to think about it, it may sound like sci-fi, but if we had the right program I'm sure that a cooperative effort between fusion fans could lend a LOT of computer power, much more than has been possible till now, in the range of teraflops (sort of like the SETI@home initiative).
The computation may be "tightly coupled," meaning the separate nodes need to communicate a lot.

We could still run multiple "jobs". I'd be happy to donate CPU cycles any time. (But I'd need to compile it from source.)

tonybarry
Posts: 219
Joined: Sun Jul 08, 2007 4:32 am
Location: Sydney, Australia
Contact:

Post by tonybarry »

Carter's code base is in Matlab, which does allow multiprocessor operation - if the required toolbox is installed.

The running of the sim on a worker farm requires a Matlab worker licence for each worker. If you already have the farm and licences, then there is no problem. Otherwise, it can get expensive.

It would be great to have a look at both the physics and the code behind it all ... I have Matlab running on a dual-core PowerMac G5 here (at work).

It might be worth a paper ... I'd read it to enlarge my understanding, though I'd probably not be able to contribute much.

Regards,
Tony Barry

Tom Ligon
Posts: 1871
Joined: Wed Aug 22, 2007 1:23 am
Location: Northern Virginia
Contact:

Post by Tom Ligon »

That's a constant irritation in the working world ... Matlab and its $^*^% licenses! I've got it on my machine, and it seems every time a project comes up where I need it, my license has expired. So I have IT get me current again, and try to run the script one of the MIT whiz-kids upstairs has devised, and it won't run because I don't have some toolkit installed, which requires a $1500 license fee, just so I can use one function they like.

Which is why I usually wind up using Excel as much as possible ... which is still better than what we used to manage with Quattro. It is rather limited for running 3-D finite element models, though.

Personally, I'd probably go the route of ab initio programming, likely in something I'm comfortable with, like Pascal, or go ahead and get comfortable with C. Run on a Linux partition, turn off anything that looks remotely like Windows, compile some really slick, efficient code, and see what a 3 GHz Pentium can really do if you get out of its way.

drmike
Posts: 825
Joined: Sat Jul 14, 2007 11:54 pm
Contact:

Post by drmike »

I have been looking at new computers, but I think I will hold off a bit. I really want to get my hands on a small or a big parallel processor. The day is near when powerful multiprocessing will be mind-bogglingly cheap.

None of my calculations were relativistic. There's no real need for that with energies below 10 MeV on the electrons, and absolutely no need for it on the ions. You can save a lot of computation time sticking with Newtonian physics, at least for particle motion.

kcdodd
Posts: 722
Joined: Tue Jun 03, 2008 3:36 am
Location: Austin, TX

Post by kcdodd »

I'm not using Matlab for this one. I'm doing it in C++, for more control in general. But I just don't want to worry about distribution and networking and all that, at least for now anyway. It may be possible to do it, but that will wait till after I know whether what I'm doing will even give good results. I think it's possible to build a PC with up to 4 CPUs (quad-core each, so 16 cores) and up to something like 32 GB of memory, in the few-$k range.

And about relativistic vs. not: it's actually not that much more processing to make it relativistic; it's basically one line to convert momentum to velocity. So I figured, why build in non-relativistic code when the other is more general and adds little overhead compared to finding a gradient?
Carter

Mike Holmes
Posts: 308
Joined: Thu Jun 05, 2008 1:15 pm

Post by Mike Holmes »

Heck, if you want you can go right now to apple.com and buy a machine with two 3.2 GHz quad-core Intel Xeons and 32 GB of RAM. Of course, right there you're already over 13 grand, but if you can get this without even looking hard, I'm sure you can find the machine you describe, KC.

Mike

MSimon
Posts: 14334
Joined: Mon Jul 16, 2007 7:37 pm
Location: Rockford, Illinois
Contact:

Post by MSimon »

Carter,

More CPUs don't help much with sustained speed. They get throttled back when they get hot (my dual core slows to a crawl when the fan turns on). What you need is a server PC designed for heavy sustained loads.
Engineering is the art of making what you want from what you can get at a profit.

dweigert
Posts: 24
Joined: Tue Sep 11, 2007 1:09 am

Post by dweigert »

That's why I invested in water cooling for my beast. I have two dual-core Opterons in my workstation with copper water blocks that keep the whole thing nice and cool.

Dan
