Virtual Polywell

Discuss how polywell fusion works; share theoretical questions and answers.

Moderators: tonybarry, MSimon

Mikos
Posts: 27
Joined: Wed Jan 16, 2008 3:19 pm
Location: Prague, Czech Republic

BOINC - Polywell@Home

Post by Mikos »

What about BOINC (Berkeley Open Infrastructure for Network Computing)? You can write a BOINC application like SETI@home, Einstein@Home, Climateprediction.net, Rosetta@home, LHC@home, etc., and then you have a HUGE amount of distributed computing power. But I don't know if Polywell simulations can be scaled like this. If they can, I can promise you at least 6 machines. Polywell@Home would be really great ;-) You can get hundreds of thousands of machines working for you with a project like this.

dch24
Posts: 142
Joined: Sat Oct 27, 2007 10:43 pm

Post by dch24 »

drmike, you are right. :) It's exciting to be able to do something now, but the hardware will get better and the simulations will get better, so it's a win/win to wait.

Mikos, I like the idea of using BOINC. I think we might be able to get several things going at once. I can see at least three parallel (heh!) attack vectors:

1. the simple sim code, for validation and verification that the sim is working;
2. single-machine codes that run in parallel -- works on multi-core desktops and compute servers;
3. distributed computing -- BOINC, clusters.

Why not do all at once? This is all, after all, free. :)

scareduck
Posts: 552
Joined: Wed Oct 17, 2007 5:03 am

Post by scareduck »

BOINC definitely looks interesting, thanks for the link.

I'm still drooling over the AMD FireStream 9170, which hasn't shipped yet.

tonybarry
Posts: 219
Joined: Sun Jul 08, 2007 4:32 am
Location: Sydney, Australia
Contact:

Post by tonybarry »

Matlab has a Distributed Computing Toolbox which will run parallel code on four cores in one machine. For an additional fee, you can buy worker licences to run on a farm. These run on Apple as well as PC hardware. Unfortunately, Not Cheap At All.

dch24, I like the idea. I can put in AUD 100, via PayPal. An octo-core Apple Mac Pro with as much RAM as possible ... perhaps we could do something with Apple ...

Regards,
Tony Barry

(edited to remove errant HTML)

drmike
Posts: 825
Joined: Sat Jul 14, 2007 11:54 pm
Contact:

Post by drmike »

JohnP wrote:I've seen some discussion about multi-core machines and thought I'd ask if everyone's sure that the problem is, or can reasonably be, parallelized. I did some work a couple of years ago on a Beowulf-type system that turned out to be a complete dog. Beowulf is not a shared-memory model like multi-core, but even multi-core is not suitable for all problems. Please excuse this comment if it's obvious to you - I haven't seen your code.

A lot of work has been done to vectorize and parallelize plasma codes. Beowulf is good if you have _one_ problem to solve. Unfortunately plasmas are not ideal for that kind of cluster.

I too have drooled over BOINC, but I'm not sure what the right problem is to solve yet.

I have to admit this is fun though!

drmike
Posts: 825
Joined: Sat Jul 14, 2007 11:54 pm
Contact:

Post by drmike »

It's been 24 hours and I got to level 241 out of 400. So now I know what *not* to do.

The number of steps scales as the cube of the number of divisions along each axis, so if I cut back from 400^3 to 100^3 it should help by a factor of 64 :shock: I'm also going to tell the integration routine to use the smallest number of steps per interval, which should help by about 6^3 per integral.
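The back-of-envelope numbers above can be checked with a few lines of C (just an illustration of the scaling argument, not part of electron_fluid.c):

```c
#include <assert.h>

/* Back-of-envelope check of the scaling argument: the number of
 * integration steps grows as the cube of the divisions per axis. */
static long cube(long n) { return n * n * n; }

/* Going from 400^3 to 100^3 shrinks the grid by a factor of 64, and
 * dropping to the minimum quadrature order saves roughly another
 * 6^3 = 216 per integral, for a combined factor of about 13,824. */
static long grid_factor(void) { return cube(400) / cube(100); }
static long quad_factor(void) { return cube(6); }
```

So if the 400^3 run took about 24 hours, the smaller run should be in minutes, which matches what drmike reports later in the thread.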

I was thinking for BOINC it would be useful to have a whole lot of parameters as "knobs" which could be set up for each set of calculations. You could vary voltage and current on the MaGrid, current in the electron sources, etc. Once you have a model worth pounding on, BOINC would be a great way to do the pounding.
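One way to picture those "knobs" is a plain parameter struct that each BOINC work unit would carry; every volunteer machine runs the simulation at one point in parameter space. All names and values here are hypothetical, not taken from the actual code:

```c
#include <assert.h>

/* Hypothetical parameter block for a BOINC-style work unit. Each
 * field is one "knob"; the field names and values are illustrative
 * only, not from electron_fluid.c. */
typedef struct {
    double magrid_voltage;  /* drive voltage on the MaGrid [V] */
    double magrid_current;  /* coil current [A]                */
    double egun_current;    /* electron source current [A]     */
    int    maxsteps;        /* grid divisions per axis         */
} SimKnobs;

/* Sweep generator: work unit i varies one knob (here, the MaGrid
 * voltage in 1 kV increments) while holding the others fixed. */
static SimKnobs make_workunit(int i) {
    SimKnobs k;
    k.magrid_voltage = 10000.0 + 1000.0 * i;
    k.magrid_current = 50.0;
    k.egun_current   = 0.1;
    k.maxsteps       = 100;
    return k;
}
```

A server would hand out `make_workunit(0)`, `make_workunit(1)`, ... to clients; since each work unit is independent, this is exactly the embarrassingly parallel shape BOINC wants.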

drmike
Posts: 825
Joined: Sat Jul 14, 2007 11:54 pm
Contact:

Post by drmike »

Apparently I did not upload the correct code patch to fix the integration problem - I'll try to do that this evening.

I did get the program to run in 40 minutes for one integration, so that's a lot better than not finishing at all in 24 hours. Still not useful for debugging and helping me understand the physics, but getting there.

I'll try a few attempts at going a touch quicker, then upload the file.
It should be easy to make parallel - the outer loop can be broken up into lots of sections. The trick is that every section needs access to the same RAM.

dch24
Posts: 142
Joined: Sat Oct 27, 2007 10:43 pm

Post by dch24 »

That's great, drmike! I'll give it a go and see if it runs on my machine this time.

Then I'll see about running things in parallel. You mentioned some good ideas about dividing up the volume before.

scareduck
Posts: 552
Joined: Wed Oct 17, 2007 5:03 am

Post by scareduck »

Looking at the hardware requirements for a BOINC installation:
What resources are needed to create a BOINC project?

If you have an existing application, figure on about three man-months to create the project: one month of an experienced sys admin, one month of a programmer, and one month of a web developer (these are very rough estimates). Once the project is running, budget a 50% FTE (mostly system admin) to maintain it. In terms of hardware, you'll need a mid-range server computer (e.g. a Dell PowerEdge) with plenty of memory and disk. Budget about $5,000 for this. You'll also need a fast connection to the commercial Internet (T1 or faster).

It may be difficult for some scientists to provide these resources. In this case, it may be possible to create a BOINC project at a higher organizational level, to serve the needs of multiple scientists. For example, such a project might be created at the university campus level. Several U.S. funding agencies (NSF, NIH) have programs that could support this.
So if we already had this kind of hardware available (let alone budget!) we wouldn't be talking about BOINC.

drmike
Posts: 825
Joined: Sat Jul 14, 2007 11:54 pm
Contact:

Post by drmike »

Yikes! Yeah, that's really a setup for already-large organizations. It'd be great PR, though, helping to spread the word about the technology.

I uploaded a running version (at least on my machine) of electron_fluid.c at http://www.eskimo.com/~eresrch/Fusion - it's 70% done in 10 minutes, so not too bad.

The bad part is that all data is zeros - so I have some debugging to do!

scareduck
Posts: 552
Joined: Wed Oct 17, 2007 5:03 am

Post by scareduck »

drmike -- are you getting close to a point where a CVS/SVN repository would be helpful (hint, hint)?

drmike
Posts: 825
Joined: Sat Jul 14, 2007 11:54 pm
Contact:

Post by drmike »

Yeah, it's true. But I hate those things - they make me think I'm at work, not at play :)

It's living code - it changes by the minute (or at least by the day, when I get a chance!). The purpose is to get clues and try to learn about the complexity of the physics. When it becomes a job, I'll let you guys who know how to make things like Subversion work correctly take care of it (the guys I work with pull off miracles in terms of how far back they can recover stuff - it's been a butt saver many times!)

I've been digging through my code and looking at the math, and I finally realized that zero is a valid stable solution. It is just not a very useful one! An alternative starting point would be one where the electron potential exactly balances the grid potential. That makes the exponential term zero everywhere on the first pass, and the density function determines the potential.

I'll see how that goes on the 100 step version.

I'll be happy to help you set up a code repository :wink: I can assure you that if I try to set it up, it will fail!!

drmike
Posts: 825
Joined: Sat Jul 14, 2007 11:54 pm
Contact:

Post by drmike »

I changed one line from this

Code: Select all

ElecPot0[dex] = 0.0;
to this

Code: Select all

ElecPot0[dex] = -GridPot[dex];
and it goes from running in minutes to hours, because the adaptive integration routine takes a lot more interpolated points.

No wonder people make lots of assumptions so they can do *some* kind of calculation!
I haven't even gotten to an "interesting" problem yet! But this is at least a lot of fun....

dch24
Posts: 142
Joined: Sat Oct 27, 2007 10:43 pm

Post by dch24 »

drmike, I think I now understand what my error, "gsl: qag.c:261: ERROR: could not integrate function", means. I don't think it's a bug in your code. The error basically means the integration didn't converge - it diverged.

So I need to be sure the input data is OK. I'm doing MAXSTEPS=100, and I want to verify that my potential.dat is correct. Can you check that the output below is what you get, too?

Code: Select all

localhost $ md5sum potential.dat 
e234099bd4142c01d4da056389781573  potential.dat
localhost $ ls -l potential.dat 
-rw-r--r-- 1 dch24 dch24 1414808 Jan 29 22:38 potential.dat

drmike
Posts: 825
Joined: Sat Jul 14, 2007 11:54 pm
Contact:

Post by drmike »

Size looks right, I'll check it when I get home. I bet potential is correct, but there is something wrong with electron_fluid. Something I'm not understanding about either the math or the implementation of it.

I ran it with 10 steps to watch a lot of printf's. It gave me numbers. So I set it up to run with 100 MAXSTEPS and left it till morning. I dumped the data - and it is all zeros again!

So I'm confused. I suspect your hardware is giving NaNs where mine gives underflow - it's the same problem, just a different outcome.

I will see if I can do some integrals by hand along specific radial lines (like the x axis) and then set up the program to do the same thing. I can at least compare what I get with what I expect.

I did that for potential.dat last night - and the computer matched my special cases. I think the input is right - but clearly my understanding of how the potential is computed is wrong. Confusion is the first step in learning, so this is good!

I'll do the md5sum this evening - I bet they match.
