Hypothesis on Electron and Ion Behavior Inside the Polywell.
Hello Polywell Fans!
I have a hypothesis on electron and ion behavior inside the polywell. You can check it out here:
http://thepolywellblog.blogspot.com/201 ... avior.html
Only had time to read it once, so just a quick thought: it seemed that only the magnetic mirror effect was used to explain how electrons recirculate, slowing down and reversing direction when outside the magrid coils. While the magnetic mirror (and whiffle ball) effect keeps the majority of electrons inside, I thought that when one leaks out, it is mainly (only?) the positive charge held on the coils that causes the electron to return.
In theory there is no difference between theory and practice, but in practice there is.
For what it is worth, from a YouTube video of some early 1960s nuclear weapon tests in space, I have the impression that in a constant magnetic field the mirroring, or bouncing back and forth along the field, is consistent. But where the magnetic field strength becomes stronger (field lines compressed, as at the Earth's poles, or at the cusps in a Polywell or a mirror machine), there is a greater tendency for the particles to reverse, which is good for containment (up to the limits that can be achieved with mirror confinement, which are not that great).
But if a particle does reverse outside the closest/strongest magnetic region of a cusp, it may subsequently not manage to get back inside, and/or it will continue to travel outside the magrid until the field line hits something. The positive charge on the magrid biases this process: it exerts a strong accelerating force on non-upscattered electrons, reversing them as they travel along field lines outside the magrid. This is independent of the magnetic mirroring and dominates it. Remember that the potential well is only ~80% of the drive potential on the magrid.
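To put that electrostatic argument in numbers, here is a minimal energy-balance sketch in Python. The 12 kV drive voltage is an assumed, illustrative figure; the ~80% well depth is the number quoted above. A non-upscattered electron that leaks out of a cusp has less kinetic energy than the full magrid bias, so it cannot climb the electrostatic hill outside and turns back, independent of any mirroring.

```python
# Hedged sketch: electrostatic recirculation of a leaked electron.
# Assumed numbers (illustrative only): 12 kV magrid drive; well depth ~80% of drive (per the post).
V_drive = 12e3          # magrid bias in volts (assumed)
well_fraction = 0.8     # potential well depth as a fraction of the drive potential

# Kinetic energy (in eV) of a non-upscattered electron escaping through a cusp:
# it gained at most well_fraction * V_drive falling into the well.
escape_ke_eV = well_fraction * V_drive

# Outside the magrid the electron must climb the full e*V_drive electrostatic hill
# to reach the walls. If its kinetic energy is smaller, it is reflected back.
if escape_ke_eV < V_drive:
    print(f"Reflected: {escape_ke_eV:.0f} eV < {V_drive:.0f} eV barrier")
else:
    print("Escapes to the wall (upscattered electron)")
```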
I believe that these electrons are accelerated towards the center along nearly parallel field lines in the cusp, and they can subsequently escape the field lines and again enter the B-field free core (just as the new virgin electrons from the electron guns do).
The description of the electrons bouncing around freely and bouncing off the Wiffleball border, much like billiard balls, is useful for describing the Wiffleball effect and recirculation. Just imagine that the billiard table's pockets are smaller (or at least that the beveled surfaces on the edges of the pockets are smaller), and that there is a short up-sloping ramp behind each pocket that reverses the balls that are not traveling too fast.
Electrons that become trapped on magnetic field lines have more complex motions where mirroring plays a role, but the end results are apparently similar, except for those electrons which manage to be transported through the field and impact the magrid. As the electrons are transported deeper into the magnetic field they are at a lower potential energy, and this might be a mechanism to remove these slower electrons more rapidly. Thus there are mechanisms to preferentially remove both upscattered electrons and downscattered electrons, which helps to extend the thermalization time beyond the average lifetime of the electrons in the system. Also, the recirculated electrons are reset to their original energy due to one aspect of Gauss's law. This selective removal of electrons does not lose much energy either: the upscattered electrons give back most (?) of their energy to the potential well, while the downscattered electrons, during their random-walk trip through the magnetic field, give energy back to other electrons (as one electron is knocked deeper into the magnetic field, the electron that did the knocking is knocked away, higher up the electron potential well).
Dan Tibbets
To error is human... and I'm very human.
I have read over the explanations. Unfortunately there are several major misunderstandings of the physics.
In section 2, "When are the electrons moving fast?", the field energy density is different from the energy of the particle. Just because you have strong fields does not mean you have fast particles. This explanation is just not correct at all.
In section 3, "The magnetic mirror line": the magnetic mirror effect has to do with the ratio of velocity perpendicular to the field compared to parallel to the field, as well as the ratio of magnetic field strengths, not total energy. It can also be formulated in terms of conservation of magnetic moment, but not the moment you have specified; see below. I am not even sure where your equations come from: charge times electric field gives force, not energy, so it does not even have the correct units.
In section 4, "The Whiffle Ball Theory", there is a misunderstanding as to which magnetic moment is useful in magnetic confinement. It is the moment due to the circular motion of the electron in a magnetic field (moment = KE_perp/B), not the intrinsic spin. There are almost 6 orders of magnitude difference in their effects for a plasma of kT = 1 eV. For a fusion plasma at 10 keV that is 4 more orders of magnitude difference.
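To make those two points concrete, here is a minimal sketch assuming nonrelativistic electrons and an illustrative field of B = 0.1 T rising to 1 T at a cusp throat (those field values are my assumptions, not numbers from the thread). It compares the orbital moment KE_perp/B with the intrinsic (Bohr magneton) spin moment, and checks the standard mirror-trapping condition sin^2(pitch angle) > B/B_max.

```python
import math

# Assumed, illustrative numbers: field at the particle and at the cusp throat.
B, B_max = 0.1, 1.0          # tesla
KE_perp_eV = 1.0             # perpendicular kinetic energy, eV (a kT ~ 1 eV plasma)

# Orbital magnetic moment of the gyro-motion, mu = KE_perp / B (eV per tesla).
mu_orbit = KE_perp_eV / B

# Intrinsic spin moment: the Bohr magneton, expressed in eV per tesla.
mu_bohr = 5.788e-5

print(f"mu_orbit / mu_Bohr ~ {mu_orbit / mu_bohr:.1e}")   # several orders of magnitude apart

# Mirror trapping: with mu and total KE conserved, a particle reflects before the
# cusp throat if sin^2(pitch angle) > B / B_max; otherwise it is in the loss cone.
pitch_deg = 30.0
trapped = math.sin(math.radians(pitch_deg))**2 > B / B_max
print(f"pitch {pitch_deg} deg at mirror ratio {B_max/B:.0f}: "
      f"{'reflected' if trapped else 'in loss cone'}")
```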
Carter
mattman,
I think it is strange that your discussion doesn't include mention of a) the Brillouin limit, b) grad B drift [as per Dan's comments], c) inertial drift (across ExB vector), d) diamagnetic drift and e) ambipolar diffusion chasing after differential [q/m] drift currents.
So I think you might be trying to re-invent some basic physics here. That's not necessarily a Bad Thing, but it would be better to put it all into a context alongside some standard physical descriptions. If you are not familiar with a-to-e then each are easy concepts to pick up from standard texts, but it is the complexity of their mutual behaviour that is the point of interest here.
I'm sure you may have some valid points embedded in your discussion, but I get to a few paragraphs in and think "but what about.... he's not talking about... &c., &c." so I begin to switch off.
e.g. to me, just speaking for my own perceptions, piling into a discussion about a completely pure electron behaviour without even a hat-tip to ambipolar behaviour kinda makes it ring a bit hollow in substance. I'm not saying you can't make a good discussion from the starting point of looking at electrons alone, but I would expect to see an argument string at the start that leads us to a point which sets the scene as to why we might be reasonably able to gain some insight by considering an analysis of electrons in isolation.
What kcdodd said. I can't add much to that, except that I don't like the electron-only simulation because I don't think space charge limits allow interesting densities.
Oh, and I think the virtual anode has been measured with laser fluoroscopy in Japan.
n*kBolt*Te = B**2/(2*mu0) and B^.25 loss scaling? Or not so much? Hopefully we'll know soon...
and how hard is it to add another species of charged particles (e.g. deuterium nuclei)?
though yeah, talking about density, simulation time would grow at something like the square of the density. at that point you'd probably want to look for an algorithm that can do a good approximation in less than n^2 time. (i remember seeing something somewhere like that, i believe.) also you'd want to use a gpu instead of a cpu 'cause you'd need much more computing power. one could google "fast n-body cuda" for some examples.
(edit) ah, here are some good ones: http://gpgpu.org/tag/n-body
(if someone wants to write/modify the code, i've got a 1GB 460GTX i'd be happy to run it on.)
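as a concrete illustration of that n^2 scaling, here is a minimal brute-force Coulomb kernel in Python (a toy sketch with placeholder units, not a Polywell model); this pair loop is exactly the sort of thing the fast n-body / GPU codes linked above try to replace.

```python
import numpy as np

# Brute-force O(n^2) Coulomb force kernel -- a toy sketch, not a Polywell model.
# Units and constants are placeholders; the point is only the n^2 pair loop
# that fast-multipole / tree-code / GPU n-body methods try to avoid.

def coulomb_forces(pos, charge, k=1.0, soft=1e-3):
    """Return the force on each particle from all the others (pairwise sum)."""
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n):                     # n * (n - 1) pair evaluations
        for j in range(n):
            if i == j:
                continue
            r = pos[i] - pos[j]
            d2 = np.dot(r, r) + soft**2    # softening avoids divide-by-zero
            forces[i] += k * charge[i] * charge[j] * r / d2**1.5
    return forces

# Tiny demo: 100 random "electrons" (charge -1) in a unit box.
rng = np.random.default_rng(0)
pos = rng.random((100, 3))
charge = -np.ones(100)
print(coulomb_forces(pos, charge)[0])      # force on particle 0
```

doubling the particle count quadruples the work, which is why the approximate (tree / fast-multipole) and GPU approaches matter once densities get interesting.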
By the way, Bussard mentioned trying to do supercomputer simulations in the early 1990s, but the math was too complex for anything other than setting up the initial conditions. This was partly speed and perhaps precision (was 32-, 64-, or 128-bit processing available at the time?).
How do supercomputers do now in comparison? And how does a hot multi-core desktop computer with a good graphics accelerator compare with a supercomputer circa ~1993? I understand even 64-bit processing would strain to achieve meaningful results.
Dan Tibbets
To error is human... and I'm very human.
D Tibbets wrote: By the way, Bussard mentioned trying to do supercomputer simulations in the early 1990s, but the math was too complex for anything other than setting up the initial conditions. This was partly speed and perhaps precision (was 32-, 64-, or 128-bit processing available at the time?). How do supercomputers do now in comparison? And how does a hot multi-core desktop computer with a good graphics accelerator compare with a supercomputer circa ~1993? I understand even 64-bit processing would strain to achieve meaningful results.
with the right algorithm, 64 bits of precision should be plenty.
a 64-bit fp (called a "double" in programmer speak) has a dynamic range and precision, to put it in terms of scientific notation, of:
+/- X.XXXXXXXXXXXXXXXX(~16 decimal places of precision) * 10^(from about -308 to +308)
(i'm not sure there are names for numbers like that but if there are they would probably fill up multiple pages -- each.)
in comparison, a proton weighs about 1.67 * 10^(-24) grams.
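a quick way to see those limits directly, using nothing but Python's standard library (the exact figures printed are whatever the runtime's IEEE 754 doubles report):

```python
import sys

# IEEE 754 double-precision limits, straight from the runtime.
info = sys.float_info
print(info.dig)        # 15 decimal digits always representable (~16 in practice)
print(info.max)        # ~1.7976931348623157e+308
print(info.min)        # ~2.2250738585072014e-308 (smallest normal value)

# The proton mass in grams sits comfortably inside that range.
proton_g = 1.67e-24
print(info.min < proton_g < info.max)   # True
```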
nowadays you don't do multi-core processors. nowadays for a problem like this you use GPGPUs, such as those produced by nvidia or ati. to compare cpu power vs 1993 just use moore's law, but then to convert that over to gpgpu power you've got to multiply by a factor of 50 or so. (that's why we use GPUs for high performance computing instead of CPUs now.)
my desktop computer has a gpu with a theoretical peak performance of over a teraflop. it cost me a little over $200.
in comparison, as of 2010, the fastest six-core PC processor has a theoretical peak performance of 107.55 GFLOPS (Intel Core i7 980 XE, from wikipedia). that's less than a 10th of what my graphics card can do, and it costs way more than $200.
according to wikipedia: http://en.wikipedia.org/wiki/FLOPS ,
in 1997 it cost about $30,000 per gigaflop.
in 2009 it cost about $0.14 per gigaflop.
so for the same $$ you get about 214,286 times as many floating point operations per second as you would have in 1997.
in 1984 it cost about $15 billion per gigaflop.
i could buy a much more powerful computer than the biggest supercomputer they had then with the change in my pocket.
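the ratio quoted above is just the two wikipedia price points divided, e.g.:

```python
# Cost per GFLOPS, from the Wikipedia FLOPS page cited above.
cost_1997 = 30_000      # dollars per GFLOPS in 1997
cost_2009 = 0.14        # dollars per GFLOPS in 2009
print(cost_1997 / cost_2009)   # ~214286: flops per dollar improved by ~2.1e5
```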
D Tibbets wrote: And how does a hot multi-core desktop computer with a good graphics accelerator compare with a supercomputer circa ~1993? I understand even 64-bit processing would strain to achieve meaningful results.
No need to guess. Top500 has their list from 1993, which tops out at 124 gflops.
http://www.top500.org/lists/1993/11
"i believe the main gist of what's being said is that with the right computer code, we could do a mixed-particle 3d-simulation of a polywell configuration on our home computers."
Probably wrong, and definitely ill-informed. GPUs can give a massive speed-up over multi-core and parallelised CPUs, BUT only for specific problems (like graphics or graphics-like ones).
Supercomputing is problem-specific: it depends on the type of equations, BCs, ICs, etc. as to what type of hardware will work 'best' (fastest, most accurate).
Also, "the right computer code" is a massive understatement, since you must develop, tailor and tune that code for the hardware you are running on, and this may take years or even a decade to achieve for a real, physical solution. Then there is the huge problem of validating the code output against real-world data; otherwise, how do you know you are not just generating meaningless pretty, colourful graphics? (Cartoon engineering.)