Rough thoughts on CSI second talk

Discuss how polywell fusion works; share theoretical questions and answers.

Moderators: tonybarry, MSimon

D Tibbets
Posts: 2775
Joined: Thu Jun 26, 2008 6:52 am

Re: Rough thoughts on CSI second talk

Post by D Tibbets »

I concede computer modeling is advancing and that certain insights may be gained, especially when the focus is on the simplest levels or on a small area within the overall plasma. My point is that even with profoundly capable researchers at the best and best-equipped computational labs (like MIT), with relatively quick feedback from their actual in-house machines, the road is still very difficult and laced with mines.

Fluid dynamics, radar stealth, etc. have all been pursued vigorously with computer modeling, and this may have allowed for much more predictive models. But wind tunnels, flights, radar tests, etc. are still necessary to verify and fine-tune the predictions.

It seems that such computer modeling would save considerably on time and development costs, but programs like the F-35 fighter are still slow and extremely costly endeavors. Basically, from the real-world perspective, I perceive that the Hofstadters and Wolowitzes are more important than the Sheldons. Conversely...

Dan Tibbets
To error is human... and I'm very human.

asdfuogh
Posts: 77
Joined: Wed Jan 23, 2013 6:58 am
Location: California

Re: Rough thoughts on CSI second talk

Post by asdfuogh »

>I concede computer modeling is advancing and that certain insights may be gained, especially when the focus is on the simplest levels or on a small area within the overall plasma. My point is that even with profoundly capable researchers at the best and best-equipped computational labs (like MIT), with relatively quick feedback from their actual in-house machines, the road is still very difficult and laced with mines.

The "best" supercomputers in the world are currently housed in China (Tianhe-2) and Oak Ridge (Titan). Most simulation codes aren't using all of the cores available on a cluster at any given time, so it doesn't matter that much unless you're going for some record in max flops. In addition, being slightly closer to an experimental facility does not mean that the facility runs faster. Data released by a facility is equally available to collaborators, because the internet makes communication fast.

>Fluid dynamics, radar stealth, etc. have all been pursued vigorously with computer modeling, and this may have allowed for much more predictive models. But wind tunnels, flights, radar tests, etc. are still necessary to verify and fine-tune the predictions.

The scientific method is based in empiricism, such that all theoretical and computational models require final validation by experimental data. However, it's ignorant to equate the need for experimental validation with the "uselessness" of computational methods. Wind tunnels, and other physical experiments, are expensive. Models provide a way to do general parameter searches, and other relevant factor searches, BEFORE you go and look with the wind tunnels. That's why the number of wind tunnels in the US has decreased as CFD advanced and computational power increased.

>It seems that such computer modeling would save considerably on time and development costs, but programs like the F-35 fighter are still slow and extremely costly endeavors. Basically, from the real-world perspective, I perceive that the Hofstadters and Wolowitzes are more important than the Sheldons.

And that slowness and cost have nothing to do with theoretical/computational/experimental approaches. Innovation comes at a cost. Also, The Big Bang Theory (I assume that's what you're referring to) is a pretty bad representation of how physicists are, from the point of view of an engineer. I wish Saltzberg did more than just consult them on the science of the show.

D Tibbets
Posts: 2775
Joined: Thu Jun 26, 2008 6:52 am

Re: Rough thoughts on CSI second talk

Post by D Tibbets »

Just to drift further off topic: for non-experimental physics, the prime example might be string theory. It is well accepted as science, but offers little if any opportunity for experimental testing.

Also, while wind tunnel utilization may have decreased, at least a large part of that is related to decreased development, not a shift to dependence on computer modeling. During the 1950s there was a frenzy of aircraft development; now there is much less. Part of that is economic and political considerations. Part of it is the learning curve: gains are near a plateau, so increased effort is needed for smaller gains. Computer modeling certainly provides an avenue to pursue this without the cost and danger of suiting up test pilots, provided again that the modeling translates into reality, which is built and tested/used.

And I've never said computer modeling was useless, or even only marginally useful. My opinion is that much of current scientific advancement seems to be simulations. Reports in the media are often revealed to be "in our computer models, we have shown that..." This virtual reality is just that. It is interesting, and educational, but until hardware is produced, it has no practical significance. Admittedly, the exception to this is the modeling of natural processes in order to make predictions. The National Weather Service is perhaps the prime example of this. They are getting pretty good at predicting severe weather. Now, if only they could extend the predictions past a few days...

Modeling, or theory, is often shaken by observation/experimentation. The Standard Model is a good example of this: things have to be massaged to match observation. It is this balance that is paramount. Theory without experimentation is not significant (it is philosophy). Experimentation without the input of modeling may gain useful ends, but it is much more cumbersome, and perhaps more likely to fail.

An example of computer modeling actually being counterproductive may be the field-reversed configuration. This was considered a dead end by mainstream science. But apparently that understanding was challenged by alternative modeling (and experiment?). The point being that the modeling may be useful for prediction, and useful for priority assignment, but it is not infallible yet. Advances are being made in number crunching and precision, and perhaps even in applicable assumptions. But there are still a lot of uncertainties in complex systems like plasma physics and fluid dynamics, which lead to increasing, um... uncertainties as the simulation progresses.

Dan Tibbets
To error is human... and I'm very human.

hanelyp
Posts: 2261
Joined: Fri Oct 26, 2007 8:50 pm

Re: Rough thoughts on CSI second talk

Post by hanelyp »

I see computer modeling as very useful. It can help identify points of interest in parameter space for a closer look. It can give insight to a process that may be difficult to directly examine.

It is also not to be trusted without validation. There are too many ways even a well designed model can go wrong. Perhaps the biggest flaw in "scientific process" in recent decades is presenting unvalidated, or even invalidated, models as settled science.
The daylight is uncomfortably bright for eyes so long in the dark.

asdfuogh
Posts: 77
Joined: Wed Jan 23, 2013 6:58 am
Location: California

Re: Rough thoughts on CSI second talk

Post by asdfuogh »

>The point being that the modeling may be useful for prediction, and useful for priority assignment, but it is not infallible yet. Advances are being made in number crunching and precision, and perhaps even in applicable assumptions. But there are still a lot of uncertainties in complex systems like plasma physics and fluid dynamics, which lead to increasing, um... uncertainties as the simulation progresses.

At what point did I, anyone else on this forum, or other computational physicists ever argue that modeling is infallible? I seem to recall that I've pointed out several times that computational and theoretical models must always be subjected to experimental validation. The point of scientific research isn't to hammer out some black-box device to get some particular result. An experiment without some theoretical model underlying it is just tinkering as an engineer. The point of these models is to try to isolate effects and phenomena, and then to see if your model is actually accurate.

>Modeling, or theory, is often shaken by observation/experimentation. The Standard Model is a good example of this

Sorry, if you wanted something that isn't shaken by physical results, perhaps you should consult with some religious faction instead? If a theoretical model fails, you have to figure out WHY it fails (i.e., which assumptions did you make that are probably wrong?). You don't just throw out the mathematics. When the Newtonian theory of gravity failed in higher-order tests, did we just throw it out? No, we figured out what the incorrect assumption was, and went from there. Again, what do you think science is supposed to be?

> Admittedly, the exception to this is the modeling of natural processes in order to make predictions. The National Weather Service is perhaps the prime example of this. They are getting pretty good at predicting severe weather. Now, if only they could extend the predictions past a few days...

This is stupid. What else would we be modeling, supernatural processes? You realize that weather is also subject to non-linear dynamics, kind of like plasma physics, yeah? Except in the case of plasmas, we're maybe modeling a few seconds (for some codes). "Now, if only they could extend these predictions past a few seconds..." You have to start somewhere.

> for non-experimental physics, the prime example might be string theory. It is well accepted as science, but offers little if any opportunity for experimental testing.

This isn't true. There are proponents and opponents of string theory. A lot of people actually have an issue with calling it science, since there isn't yet a way to test it experimentally. However, there are also a lot of people who are okay with it because it offers a possible experimental test at a level of energy that is technologically out of our reach. I don't know enough about this to comment further, but "well accepted as science" shows signs of not paying attention to what is actually happening in the real world.

----------------------------

>An example of computer modeling actually being counterproductive may be the field-reversed configuration. This was considered a dead end by mainstream science.

Forgive me if I don't trust your little statement here, but you've shown yourself to be a bit lacking in understanding of the modeling process. According to Steinhauer's "Review of Field-Reversed Configurations," Phys. Plasmas 18, 070501 (2011), the first end of FRCs in the US was due to the tearing and rotational instabilities observed in experiments. It lacked the stability of tokamaks in the 1960s. The second end of FRCs was because the US stopped paying for it for economic reasons. I'm relatively sure that was mostly because tokamaks were more popular and more studied than FRCs.

-----------------------------

>And I've never said computer modeling was useless, or even only marginally useful.

And yet, you've come across as implying that. Let's not quibble over how overtly you attempted to pass the message. In any case, I think my point, not just for you, but especially for you, is that scientists already know this. Experimental validation is the defining point of a theory. That gravitational-wave theorist was overcome with joy when the recent experimental results verified his theoretical model. That's because experimental validation is the defining moment of a model, whether it's just conjecture or possible truth. We know that, but maybe you should learn it as well.

happyjack27
Posts: 1439
Joined: Wed Jul 14, 2010 5:27 pm

Re: Rough thoughts on CSI second talk

Post by happyjack27 »

i've been just observing for a while, as i feel the conversation has been pretty good so far; i always find myself seeing someone already saying what i was going to say, but better. :)

i guess i'll just briefly state my general position, though, for what it's worth.

computer modeling and simulation can be very helpful in augmenting research.

its main weakness is its scale vs. the physical world. the physical world vastly outdoes a computer at exactly the one thing a computer is supposed to be good at: computing.

but the computer's strength is malleability. any hypothesis that can be formally stated can be tested, and any observation can be made. in fact, the whole process can be fully automated. you can measure things with a computer that you can't measure in real life. and you can test parameters with a computer that you can't test in real life. i can bring my polywell simulation up to 1,000,000,000 tesla, easier than it was to type "1,000,000,000 tesla" (because i use a slider, so it's just dragging the mouse). there are things a computer can do that are very useful and helpful in understanding phenomena and the ramifications of hypotheses that the "physical world" just can't do. on the flip side, the "physical world" excels at what a computer, in comparison, is horrible at: computing.

so if you want a full-scale simulation, use physical objects. if you want to toy around with specific aspects or parts to learn more about them and explore configurations, etc., computer simulations can be very useful for that.

hell, we're constantly discovering new drugs and new materials from fully automated parameter space searches of computer simulations and analysis (e.g. folding@home).
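
a toy sketch of that kind of automated sweep, in octave/matlab. run_polywell_sim below is just a hypothetical placeholder for whatever simulation you actually have, and every number is arbitrary:

Code:
% toy sketch of an automated parameter-space sweep, as described above.
% run_polywell_sim is a hypothetical stand-in: replace it with a call into a
% real simulation. the placeholder below just returns a made-up curve so the
% script runs end to end.
run_polywell_sim = @(B) 1 - exp(-B/10);          % hypothetical placeholder
B_values = logspace(-1, 9, 21);                  % field strengths to try [T]
merit = zeros(size(B_values));
for k = 1:numel(B_values)
    merit(k) = run_polywell_sim(B_values(k));    % one "experiment" per value
end
semilogx(B_values, merit, 'o-');
xlabel('B [T]'); ylabel('figure of merit (arbitrary)');

swap the swept variable for grid voltage, gas pressure, whatever; the loop doesn't care, which is the whole point.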

mattman
Posts: 459
Joined: Tue May 27, 2008 11:14 pm

Re: Rough thoughts on CSI second talk

Post by mattman »

I have to work through all this material. I got a copy of Birdsall and Langdon. You can download the whole textbook for free as a PDF here:

http://en.bookfi.org/book/654217

I am reading it. I also went and got a copy of Happy Jack's code.

http://sourceforge.net/u/happyjack27/profile/

I want to broadly describe PIC modeling and couple that with CSI's second talk. The way to do this is to integrate everything that has been said, with references and some "hands-on" modeling. Here is what I propose to model:

[Image]

The WB6 reactor can be broken into 1/48th wedges. From this, a PIC block can be extracted. This block extends about five inches from the central volume and is about one inch in height. If we use representative particles, we can whittle the number of particles down to 1.2E8. The simulation can use planes of symmetry around this block.
I have MATLAB code which tells me the magnetic field inside the reactor.

https://github.com/ThePolywellGuy/Matlab-Modeling

I plan to run Indrek's Octave code in MATLAB. The goal is to explore a wide variety of operating conditions. Now, Joel Rogers has already done this. I am getting his code and hope to upload it.
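
In the meantime, here is a rough Octave/MATLAB sketch of the usual way to compute such a field: sum discretized Biot-Savart contributions from six circular coils on the faces of a cube. This is not the code from the repository above, and the coil radius, spacing, and current are placeholder values, not WB6 specifications.

Code:
% Minimal sketch: field of six circular coils on the faces of a cube,
% evaluated by discretized Biot-Savart. All numbers are placeholders.
mu0 = 4*pi*1e-7;        % vacuum permeability [T*m/A]
I   = 4e4;              % assumed coil current [ampere-turns]
a   = 0.15;             % assumed coil radius [m]
d   = 0.15;             % assumed distance of each coil center from the origin [m]

% one coil per cube face; current sense chosen so every coil's magnetic
% moment points radially outward (a polywell-like cusp arrangement)
centers = d * [1 0 0; -1 0 0; 0 1 0; 0 -1 0; 0 0 1; 0 0 -1];
normals =     [1 0 0; -1 0 0; 0 1 0; 0 -1 0; 0 0 1; 0 0 -1];

r_obs = [0.02 0 0];     % point at which to evaluate the field [m]
Nseg  = 200;            % straight segments used to approximate each coil
B     = [0 0 0];

for c = 1:size(centers,1)
    n = normals(c,:);
    t = [1 0 0]; if abs(dot(t,n)) > 0.9, t = [0 1 0]; end
    u = cross(n,t); u = u/norm(u);     % two unit vectors spanning the coil plane
    v = cross(n,u);
    phi = linspace(0, 2*pi, Nseg+1)';
    pts = centers(c,:) + a*(cos(phi)*u + sin(phi)*v);   % points around the loop
    for k = 1:Nseg
        dl   = pts(k+1,:) - pts(k,:);                   % current segment
        rmid = (pts(k+1,:) + pts(k,:))/2;
        r    = r_obs - rmid;
        B    = B + mu0*I/(4*pi) * cross(dl, r) / norm(r)^3;   % Biot-Savart law
    end
end
fprintf('|B| at [%.2f %.2f %.2f] m: %.4f T\n', r_obs, norm(B));

The same loop evaluated over a grid of observation points gives the kind of field map the PIC block would need.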

====
Lastly, I was very encouraged by the UK's response to Jamie Edwards. I think this political cartoon sums it up:

[Image]

D Tibbets
Posts: 2775
Joined: Thu Jun 26, 2008 6:52 am

Re: Rough thoughts on CSI second talk

Post by D Tibbets »

One complication about using a wedge that is 1/48th(?) of the circumference of a 2D model is that assuming a sphere or circle ignores behavior near or in the cusps. That this is important is illustrated by the confounding results found with WB5. Bussard used terms like 'quasi-spherical', meaning (I think) that he recognized the lumpy/spiky surface features, but because they are symmetrical he could ignore them in his simulations. This led to unreasonable expectations with WB5. The changed assumptions/observations that led to WB6 included the recognition that the computer models were flawed because the magnet surfaces were assumed to have almost zero surface area (they were mathematical lines in his simulations), so ExB and other magnetic diffusion effects could be ignored. In WB4 the square magnet forms, and the touching on the sides, led to much greater losses than he anticipated. In the second patent application it is mentioned that ExB losses were about 1-10% of total electron losses in WB6. Whether this is before or after recirculation I don't know. I have not seen what these losses were in WB4, but I suspect they may have actually matched or exceeded the cusp losses.

In WB5 the shape and separation issues were not addressed, but electron repellers located very near the midplane of the cusps were used to reduce electron cusp losses. This did improve electron confinement performance some, but when additional power was applied in an attempt to drive the machine to higher internal densities, the gains were feeble. This approach led to accumulations of cold electrons right in the central cusp, and these competed with the internal potential well, such that once ions gained great enough radii near the cusps they were more attracted to this external electron space charge, and electrostatic ion confinement went to pot.

These experimental deviations from his computer models led to the several profound assumption changes that led to the claimed successful WB6 design.

My understanding of these assumptions is:

1) A closed-box machine will not work. You can stop and turn the electrons in the cusps (at the midplane, at the radius of the narrowest portion of the cusp; what I think of as the actual cusp, as opposed to the cusp throat or collection area). Electron repellers can improve electron cusp confinement, but if done at the radius of the cusp the associated space charge attracts internal ions. You might have repellers outside the cusps provided they are far enough away. This limits how close you can place the e-guns. Recirculation apparently works because the electrons remain fast in the cusp; they are not slowed and mostly reversed until they travel some distance beyond this. This turnaround point (accumulation of cold electrons) is far enough outside the cusp that the associated space charge does not overwhelm the internal potential well for the ions (at least locally). The inverse square law applies, and the previously assumed (?) Gauss's law considerations were not as significant as assumed. Gauss's law applies perfectly in perfect conditions, but it starts to degrade as holes are introduced. It might still dominate up to a limit, but this limit was exceeded in WB5.

2) Despite the electrons' much smaller gyroradius compared to the ions', ExB diffusion is still significant. You need to keep the magnet surfaces as deep within the insulating magnetic field as possible. Thus the conformal magnet cans: they work better than square cans because the corners do not project as far into the diminishing B field. Also, ExB drift requires that the magnets be separated, the more the better. But this results in larger cusps. There is a Goldilocks compromise that works best. That is another assumption applied to WB6: the separation was assumed to work best at 3-5 gyroradii. Actual performance may vary from this (by a small amount?). Experimentation with various separations, perhaps ~2-8 gyroradii, might fine-tune this optimization.

What is the electron gyroradius in the cusp anyway? The gyroradius is defined by the B field strength and the component of the electron's velocity that is perpendicular to the field. The question is how much of the electron's velocity is parallel and how much is perpendicular? It depends on the angular momentum of the electrons in the cusp, as driven by collisions. A simulation might answer this specific question. If the electron is mirrored at the midplane of the cusp, as opposed to in the throat of the cusp, the parallel vector goes to zero and the perpendicular vector is maximized. I think this implies that the turnaround point results in the greatest gyroradius(?)*, and this might be another disadvantage of having electron repellers too close to the cusp midplane. The electrons would be slow, with smaller gyroradii, but all of the remaining energy would be in the perpendicular motion, so the net gyroradius may be relatively larger than expected. Then you need to consider...
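
As a rough illustration of how strongly this depends on the perpendicular fraction, here is a minimal Octave/MATLAB sketch; the cusp field strength and electron energy are assumed round numbers, not measured WB6 values.

Code:
% Minimal sketch: electron gyroradius vs. pitch angle in an assumed cusp field.
% Only the velocity component perpendicular to B sets the gyroradius.
qe   = 1.602e-19;    % elementary charge [C]
me   = 9.109e-31;    % electron mass [kg]
B    = 0.1;          % assumed field strength where the electron turns [T]
E_eV = 1e4;          % assumed electron energy [eV]

v = sqrt(2*E_eV*qe/me);             % non-relativistic speed [m/s]
for pitch = [5 30 60 90]            % angle between velocity and B [degrees]
    v_perp = v*sind(pitch);         % only this component matters
    r_g = me*v_perp/(qe*B);         % gyroradius r = m*v_perp/(q*B)
    fprintf('pitch %2d deg: gyroradius = %.2f mm\n', pitch, r_g*1e3);
end

Scaling the 90-degree value by 3-5 gives a feel for the magnet-spacing assumption above, and the same r = m*v_perp/(q*B) relation, with the alpha particle's mass, charge, and energy, bears on the fusion-product question below.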

As a sidelight, I wondered how fusion alpha particles at high velocity (KE), and thus high gyroradii, could squeeze through a cusp without hitting a magnet can. With the recognition of the importance of the vectors, though, I can see how they could do this, provided that most of their KE is in motion parallel to the cusp B fields, that is, radial to the center of the machine. This would apply assuming most of the fusions occurred near the center, the angular momentum was not increased too much by the several thousand bounces off of the sloping B field surfaces, and collisions were at a minimum. The last is reasonable considering their Coulomb collision cross section at such high energies; the others are open questions (for me). Modeling could certainly answer these questions, or at least define the limits that would have to apply in the machine for this to occur.



There are a lot of questions that focused modeling could answer with perhaps greater accuracy and precision than more generalized modeling, which requires correspondingly more correct assumptions and suffers greater error accumulation as the sim progresses.

* The gyroradius changing in the B field: that sounds foolish on the surface, but consider that what I am talking about is the electrons that enter the B-field-dominated region. Due to the wiffleball effect the B field gradient is very steep. On the inside the B field may be minimal and the electrons' motion dominated by field-free collisions, while a few mm greater in radius the B field dominates the motion. It is not the electron velocity vector in a constant B field, but the vector present as the electron crosses this threshold, that determines the resultant gyroradius, and this transition may occur on every orbit. On one side the B field dominates, on the other collisions dominate. Maximal density at a given energy versus the B field strength once the magnetic domain is entered determines the parallel vs. perpendicular vectors of the particles, and thus the variations in their gyroradius. Admittedly this is nebulous thinking, but I'm uncertain how to pin it down without further exhausting you with my ramblings.

Dan Tibbets
To error is human... and I'm very human.

asdfuogh
Posts: 77
Joined: Wed Jan 23, 2013 6:58 am
Location: California

Re: Rough thoughts on CSI second talk

Post by asdfuogh »

It's okay to start with simplifications, as long as you remember that simplifications come with certain assumptions that will have to be changed as you get more computing power. Go with the 1/48 model, ignore the cusps for now, and maybe use periodic boundary conditions. The more complications you add, the longer it will take for you to get up to speed on coding simulations. Once you've got some working model, you have to figure out your limitations in particles-per-grid, your grid-size, your time-step, etc. It's not going to be perfect, but you just have to keep that in mind as you figure out the logistics of coding a PIC code (especially if you're going to do it in 3D, which is going to be much harder).
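
As a concrete example of those limitations, here is a quick back-of-the-envelope check of the usual explicit-PIC constraints in Octave/MATLAB; the density and temperature are round illustrative numbers, not WB6 values.

Code:
% rough sanity check of explicit PIC constraints, with assumed plasma numbers
e    = 1.602e-19;   me = 9.109e-31;
eps0 = 8.854e-12;   kB = 1.381e-23;
n    = 1e19;              % assumed electron density [m^-3] (illustrative only)
Te   = 100 * 11604;       % assumed electron temperature: 100 eV, in kelvin

lambda_D = sqrt(eps0*kB*Te/(n*e^2));   % Debye length; grid spacing dx should be
                                       % of this order or the code heats unphysically
wpe = sqrt(n*e^2/(eps0*me));           % plasma frequency; explicit schemes need
                                       % wpe*dt well below 2 (~0.2 is a common choice)
fprintf('Debye length      : %.3e m\n', lambda_D);
fprintf('dt for wpe*dt=0.2 : %.3e s\n', 0.2/wpe);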

If you were doing this just for the blog, I'd say a 1D-in-space code (but with all components of velocity and fields), run along lines at the interesting points (maybe one along an axis, and one along the line that would define a cusp), would suffice. If you were doing this as a foundation for further simulations, I'd still start with the 1D code, because you have to learn the basics of programming a PIC code by practice anyway. Trying to include all those complications in a first attempt is something like trying to cook a feast for an emperor before trying out individual dishes first.
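
To make that concrete, here is a bare-bones 1D electrostatic PIC sketch in Octave/MATLAB: 1D in space with a single velocity component (even simpler than the 1D3V version suggested above), periodic boundaries, cloud-in-cell deposition, an FFT Poisson solve, and a leapfrog-style push. This is not Indrek's or Joel Rogers' code, just a generic textbook-style loop in the spirit of Birdsall and Langdon; it models electron plasma oscillations against a fixed neutralizing background, every number is an arbitrary normalized choice, and the proper half-step velocity initialization is skipped for brevity.

Code:
L      = 2*pi;        % domain length (normalized units: eps0 = me = 1)
Ng     = 64;          % grid cells
Np     = 10000;       % macro-particles
dt     = 0.05;        % time step
nsteps = 400;
dx     = L/Ng;
qm     = -1;                  % electron charge-to-mass ratio (normalized)
wp     = 1;                   % target plasma frequency
q      = wp^2/qm * L/Np;      % macro-particle charge giving that wp
rho_b  = -q*Np/L;             % uniform neutralizing background density

x = linspace(0, L, Np+1)'; x = x(1:end-1);     % uniform positions
x = mod(x + 0.01*cos(x), L);                   % small perturbation seeds a wave
v = zeros(Np,1);                               % cold start

for it = 1:nsteps
    % charge deposition: cloud-in-cell (linear) weighting onto the grid
    xi = x/dx;
    j  = mod(floor(xi), Ng);  w = xi - floor(xi);
    jp = mod(j+1, Ng);
    rho = rho_b + (accumarray(j+1,  q*(1-w), [Ng 1]) + ...
                   accumarray(jp+1, q*w,     [Ng 1])) / dx;

    % field solve: d2(phi)/dx2 = -rho via FFT, then E = -d(phi)/dx
    k    = 2*pi/L * [0:Ng/2-1, -Ng/2:-1]';
    rhok = fft(rho);
    phik = [0; rhok(2:end) ./ k(2:end).^2];
    phi  = real(ifft(phik));
    E    = (circshift(phi,1) - circshift(phi,-1)) / (2*dx);

    % gather E back to the particles and push them (periodic wrap in x)
    Ep = (1-w).*E(j+1) + w.*E(jp+1);
    v  = v + qm*Ep*dt;
    x  = mod(x + v*dt, L);
end
fprintf('rms velocity after %d steps: %.3e\n', nsteps, sqrt(mean(v.^2)));

Once a loop like this behaves (energy roughly conserved, oscillation near the plasma frequency), adding the extra velocity components, fields, and eventually dimensions is bookkeeping on top of the same structure.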

happyjack27
Posts: 1439
Joined: Wed Jul 14, 2010 5:27 pm

Re: Rough thoughts on CSI second talk

Post by happyjack27 »

if you're doing a slice, i think you should "mirror" it, with periodic boundary conditions (a wrap-around universe), so when calculating the forces on particles you have the whole thing, except you really have 48 copies of a 1/48th slice. and then you roll forward the 1/48th slice. interesting to note, theoretically if you rolled forward the whole thing, the 1/48th slices would still be exact copies of each other, since it's symmetrical. so the only simplifying assumption you're really adding is that each 1/48th slice starts out in exactly the same configuration as all the others.

though ultimately i'm not sure how big of an improvement in efficiency / modeling resolution this will give you.
there is something gained by just doing the whole thing: "redundancy" to help error correction. you're essentially doing 48 experiments at once, except better: you're automatically mixing their results / averaging them together at every time slice.
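
a toy octave/matlab sketch of that bookkeeping, simplified to a plain 48-fold rotation about one axis (the real wb6 symmetry group also includes reflections, so this is only an illustration, and every number is arbitrary): particles live in one wedge, the rotated copies supply the forces, and only the wedge itself gets pushed.

Code:
n_copies = 48;                        % number of wedge copies
Np   = 200;                           % particles in the one real wedge
qe   = -1.602e-19;                    % electron charge [C]
eps0 = 8.854e-12;

theta = (2*pi/n_copies) * rand(Np,1);          % random angles inside the wedge
r     = 0.05 + 0.05*rand(Np,1);                % radii between 5 and 10 cm
pos   = [r.*cos(theta), r.*sin(theta), 0.02*(rand(Np,1)-0.5)];

% build the 48 rotated copies that act as force sources
src = zeros(Np*n_copies, 3);
for c = 0:n_copies-1
    a  = 2*pi*c/n_copies;
    Rz = [cos(a) -sin(a) 0; sin(a) cos(a) 0; 0 0 1];
    src(c*Np+1:(c+1)*Np, :) = pos * Rz';
end

% coulomb force on each wedge particle from every copy (own image excluded)
F = zeros(Np,3);
for i = 1:Np
    d  = pos(i,:) - src;              % displacement from every source charge
    r2 = sum(d.^2, 2);
    r2(i) = inf;                      % the particle's own copy-0 image is itself
    F(i,:) = qe^2/(4*pi*eps0) * sum(d ./ r2.^(3/2), 1);
end
% only these Np particles would then be pushed; a particle crossing the wedge
% boundary gets rotated back by 2*pi/48 (the sector-periodic condition)
fprintf('mean force magnitude: %.3e N\n', mean(sqrt(sum(F.^2,2))));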
