icarus wrote: Then there is the huge problem of validating the code output with real world data, or how else do you know if you are not just generating meaningless pretty, colourful graphics? (cartoon engineering).

A.k.a. climate modeling!
happyjack27 wrote: i believe the main gist of what's being said is that with the right computer code, we could do a mixed-particle 3d-simulation of a polywell configuration on our home computers.

icarus wrote: Probably wrong, and definitely ill-informed. GPU can give massive speed up over multi-core and parallelised CPUs BUT only for specific problems (like graphics or graphics-like).

i resent that. especially the ill-informed part. i am a professional computer program and as such i am well aware of the difficulties. i am well aware that gpus are only good for data-parallel problems. this is, obviously, exactly such a problem. and there already exist very good gpu codes for very similar problems (n-body problems). although it would not be trivial to modify them, that was precisely the intent of the original GRAPE code - to be flexible enough that it could be easily adapted to similar problems, such as particle physics.
Super computing is problem specific: it depends on the type of equations, BCs, ICs, etc. as to what type of hardware will work 'best' (fastest, most accurate).

Also "the right computer code" is a massive understatement, since you must develop, tailor and tune that code for the hardware you are running on, and this may take years or even a decade to achieve for a real, physical solution. Then there is the huge problem of validating the code output with real world data, or how else do you know you are not just generating meaningless pretty, colourful graphics? (cartoon engineering).
happyjack27 wrote: i am a professional computer program

btw, you pass the Turing Test very well!
BenTC wrote: I thought the main problem with GPU codes was random errors in the quality of the cards. For fast frame games an odd rogue pixel is not a problem, but for simulations it could be. Perhaps dependent on consumer versus professional workstation priced cards.
```c
#define C 299792458.0   /* .0 matters: C*C as an int constant would overflow */
#define PI 3.14159265358979323846264338327950288419716939937510
#define MAGNETIC_CONSTANT (4.0*PI*0.0000001)
#define ELECTRIC_CONSTANT (1.0/(MAGNETIC_CONSTANT*C*C))
#define ECONST (1.0/(4.0*PI*ELECTRIC_CONSTANT))   /* Coulomb constant 1/(4*pi*eps0) */
#define BCONST (MAGNETIC_CONSTANT/(4.0*PI))
#define RC2 (1.0/(C*C))                           /* 1/c^2 */
/* the last term here was a typo (c.z*c.z) originally; it must be a.z*b.z */
#define dot(a,b) (a.x*b.x + a.y*b.y + a.z*b.z)
#define crossx(a,b) (a.y*b.z - a.z*b.y)
#define crossy(a,b) (a.z*b.x - a.x*b.z)
#define crossz(a,b) (a.x*b.y - a.y*b.x)

typedef float _float;

struct vector {
    _float x, y, z, scale;
};

struct particle {
    vector p;        /* position */
    _float charge;
    vector v;        /* velocity */
    _float mass;
};

/* electromagnetic force exerted on particle p0 by particle p */
__device__ vector calc_em_force(particle p0, particle p) {
    /* separation */
    vector pdiff;
    pdiff.x = p0.p.x - p.p.x;
    pdiff.y = p0.p.y - p.p.y;
    pdiff.z = p0.p.z - p.p.z;
    _float p2 = dot(pdiff, pdiff);
    pdiff.scale = rsqrtf(p2);                           /* CUDA intrinsic: 1/r */
    _float scale = pdiff.scale*pdiff.scale*pdiff.scale; /* 1/r^3 */
    /* relative velocity (used for the B-field, so computed in both builds) */
    vector vdiff;
    vdiff.x = p.v.x - p0.v.x;
    vdiff.y = p.v.y - p0.v.y;
    vdiff.z = p.v.z - p0.v.z;
    /* E-field */
    _float estrength = p.charge * ECONST * scale;
#ifdef RELATIVISTIC
    _float v2 = dot(vdiff, vdiff);
    vdiff.scale = rsqrtf(v2);
    _float pdotv = dot(pdiff, vdiff)*pdiff.scale*vdiff.scale;
    _float st2 = 1 - pdotv*pdotv;                 /* = sin^2(acos(pdotv)) */
    _float v2rc2 = v2*RC2;
    _float rcd = (1 - v2rc2*st2);
    _float relativistic_correction = (1 - v2rc2)*rsqrtf(rcd*rcd*rcd);
    estrength *= relativistic_correction;
#endif
    vector e;
    e.x = estrength * pdiff.x;
    e.y = estrength * pdiff.y;
    e.z = estrength * pdiff.z;
    /* B-field of the moving charge: B = v x E / c^2 */
    vector b;
    b.x = crossx(vdiff, e)*RC2;
    b.y = crossy(vdiff, e)*RC2;
    b.z = crossz(vdiff, e)*RC2;
    /* Lorentz force: F = q(E + v x B) */
    vector f;
    f.x = p0.charge*(e.x + crossx(p0.v, b));
    f.y = p0.charge*(e.y + crossy(p0.v, b));
    f.z = p0.charge*(e.z + crossz(p0.v, b));
    return f;
}
```
```
B strength = triple integral of

    dx*dy*dz
    /
    [(y*sx*sy + z*cx*sy + x*cy + tx)^2 + (y*cx - z*sx + ty)^2 + (y*sx*cy + z*cx*cy - x*sy + tz)^2]
```

(The denominator is a squared distance: the integration coordinates rotated and then offset by tx, ty, tz.)
2nd:

happyjack27 wrote: i believe the main gist of what's being said is that with the right computer code, we could do a mixed-particle 3d-simulation of a polywell configuration on our home computers.
happyjack27 wrote: i don't mean to be presumptuous

Ok, so let's say you are not presumptuous, thus you can do what you said at the outset: a full 3-D mixed-particle validated simulation on your home computer (presumably using your own code that you are writing for some gee whiz GPU CUDA platform or some such).
icarus wrote: happyjack: Ok, so let's say you are not presumptuous, thus you can do what you said at the outset. A full 3-D mixed-particle validated simulation on your home computer (presumably using your own code that you are writing for some gee whiz GPU CUDA platform or some such). I'll be waiting for some code outputs to prove that you're not being presumptuous. PS: do you have any experience numerically modelling physical systems?

that's kind of a convolution of what i said, but fair enough.
happyjack27 wrote: oh, and no, i don't have any experience numerically modeling physical systems.

Ok, so my "ill-informed" comment was not far off the mark.