Rough thoughts on CSI second talk

Discuss how polywell fusion works; share theoretical questions and answers.

Moderators: tonybarry, MSimon

mattman
Posts: 459
Joined: Tue May 27, 2008 11:14 pm

Rough thoughts on CSI second talk

Post by mattman »

Rough thoughts on CSI second talk - 11/5/2014:

Introduction: Slides 1-3:

Image

The Introduction explains why CSI is interested in simulating the polywell instead of experimenting with it.
1. It is cheaper to simulate than build.
2. It is faster to simulate than build.
3. Simulations help you understand physical effects, such as:
a. The rate of particle loss through the cusps
b. The ion temperature: center and edge
c. The electron temperature: center and edge
d. A predicted fusion rate

One thing that was not made clear: what is the benchmark for these simulations?

Slide 2 - Particle motion overview:

Image

A typical plasma has the following characteristics:
1. ~10^19 particles in one cubic meter of space
2. 10,000 electron volts as an average energy
a. Is this a thermalized plasma, with a nice bell curve of energies? If so, this is the plasma Devlin is describing. The electron energy distribution would look like this:

Image

b. By the way, the likely ideal distribution for WB-6 is shown below. This assumes that the ions and electrons can be at different temperatures (something Dr. Rider disagreed with; he wrote an energy transfer paper that almost nobody cited). It helps to have lots of cold electrons and a few hot ions. There are many reasons for this. First, the plasma should be mostly electrons to maintain the potential drop needed for fusion. Second, if the electrons are cold, it lowers the radiation losses (energy being sapped away as light). Third, if the ion population is small and hot, it drives up the fusion rate. This distribution optimizes the polywell.

Image

c. Or is CSI modeling a non-thermal plasma, with energies like the following:

Image

3. Devlin states this plasma has a collision rate of 5E+6 collisions per second. Do we have an equation to predict this? (A rough attempt is sketched just after this list.)

Image

4. How do you handle long-wave/short-wave electromagnetic interactions? I imagine it is Maxwell's equations, which describe how electromagnetic fields and waves behave.
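
On the collision-rate question in item 3: one possible starting point (not from the talk) is the NRL Plasma Formulary electron collision frequency, nu_e ~ 2.91e-6 * n_e * ln(Lambda) * Te^(-3/2) per second, with n_e in cm^-3 and Te in eV. A minimal Python sketch, where the Coulomb logarithm (~10) and the density/temperature are guesses rather than Devlin's actual inputs:

# Rough electron collision frequency, NRL Plasma Formulary form.
# Assumptions: ln_lambda ~ 10; the density and temperature below are
# placeholders, not necessarily the values CSI actually used.
def electron_collision_freq(n_e_m3, T_e_eV, ln_lambda=10.0):
    n_e_cm3 = n_e_m3 * 1e-6                                  # m^-3 -> cm^-3
    return 2.91e-6 * n_e_cm3 * ln_lambda * T_e_eV ** -1.5    # collisions per second

print(electron_collision_freq(1e19, 1e4))   # ~3e2 per second at 10^19 m^-3 and 10 keV

The result is very sensitive to temperature (it scales as T^-3/2), so whether this reproduces the quoted 5E+6 per second depends entirely on the density and temperature actually assumed.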

Slide 5: Introduction to Particle in Cell:

Image

Instead of doing the full treatment of modeling every particle interaction in the plasma, the particle-in-cell method breaks the problem into two pieces. This is shown below.

Image

Do I have that right? I imagine that these graphics represent data structures in a computer code.

asdfuogh
Posts: 77
Joined: Wed Jan 23, 2013 6:58 am
Location: California

Re: Rough thoughts on CSI second talk

Post by asdfuogh »

The particle-in-cell method basically approaches the simulation from a kinetic (instead of fluid) route. You track computational particles (each of which represents a large number of real particles) in phase space, such as f(x, y, z, vx, vy, vz), although the coordinates may be different for simpler calculations (for example, a popular coordinate system for tokamaks is magnetic Boozer coordinates, in which the magnetic field lines are represented as straight lines). You also keep track of other quantities at the computational grid points (which are discrete because computers need that).

The basic steps are

1. Initialize the system (ie. set up particles, the computational grid, etc.).

2. Calculate density at the grid points (by weighting the particles that are near each point, etc.).

3. Calculate fields at the grid points (can use potentials or straight up fields, but potential usage is more common).

4. Calculate fields at the particle positions (by weighting the grid points each particle is near, etc.).

5. Calculate new particle phase-positions (just time integration of equations of motion).

6. Calculate changes due to collisions, sources, sinks, etc. (depends on your model).

7. Loop back to 2 until satisfactory or until code breaks down.

Of course, the more particles you use and the finer the mesh you use, the more accurate the results should be, if your theoretical model is good and if your computational model is stable and converges. If it is a fully kinetic PIC simulation, that would be very computationally expensive because that's a full 6 dimensions for each computational particle. The code I work with is gyro-averaged so you drop one dimension with the assumption that our simulations are looking at low frequency waves and instabilities, and that k_perp*rho_i is smaller than one (k_perp being wave number in the perpendicular direction and rho_i being gyroradius of the ions). If this isn't clear enough, please tell me what isn't exactly understandable so I can try to clarify. Of course, there might be subtleties that I won't know either because I'm still technically a newbie.
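
To make steps 1-7 concrete, here is a bare-bones 1D electrostatic PIC loop in Python (a toy sketch, not CSI's code and not gyro-averaged): a periodic box, mobile electrons on a fixed ion background, linear (cloud-in-cell) weighting, an FFT Poisson solve, and a simple push. Step 6 (collisions, sources, sinks) is skipped entirely.

# Toy 1D electrostatic PIC: periodic box, electrons on a fixed ion background.
# Normalized units (plasma frequency = 1); a sketch, not a production code.
import numpy as np

L, ng, nparticles, dt, steps = 2 * np.pi, 64, 10000, 0.1, 200
dx = L / ng

# Step 1: initialize particles (cold electrons plus a small density perturbation).
x = np.linspace(0.0, L, nparticles, endpoint=False)
x += 0.01 * np.cos(x)                      # seed a perturbation (L = 2*pi)
v = np.zeros(nparticles)
q_over_m = -1.0                            # electrons, normalized
weight = L / nparticles                    # charge carried by each computational particle

k = 2 * np.pi * np.fft.rfftfreq(ng, d=dx)  # wavenumbers for the field solve

for step in range(steps):
    # Step 2: charge density on the grid via linear (CIC) weighting.
    xg = x / dx
    left = np.floor(xg).astype(int) % ng
    frac = xg - np.floor(xg)
    dep = np.zeros(ng)
    np.add.at(dep, left, (1 - frac) * weight)
    np.add.at(dep, (left + 1) % ng, frac * weight)
    rho = 1.0 - dep / dx                   # uniform ion background minus electrons

    # Step 3: field on the grid from Gauss's law, dE/dx = rho (solved in k-space).
    rho_k = np.fft.rfft(rho)
    E_k = np.zeros_like(rho_k)
    E_k[1:] = rho_k[1:] / (1j * k[1:])     # drop the k = 0 (charge-neutral) mode
    E = np.fft.irfft(E_k, n=ng)

    # Step 4: field at the particle positions (same linear weighting).
    E_part = (1 - frac) * E[left] + frac * E[(left + 1) % ng]

    # Step 5: push the particles, then loop back to step 2 (step 7).
    v += q_over_m * E_part * dt
    x = (x + v * dt) % L

print("final field energy:", 0.5 * np.sum(E**2) * dx)

Even this toy shows the key structure: the particles never interact with each other directly, only through the fields stored on the grid.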

By the way,

> This is assuming that the ions and electrons can be at different temperatures (something Dr. Rider disagreed with, he wrote an energy transfer paper that almost nobody cited).

was this specific to Polywell? Also, you are very correct about the lack of mention of benchmarking. Unfortunately, there aren't many groups who are working on a Polywell-type simulation so what they *really* need to do is validate their results, at least, qualitatively.

Finally, can you try to think of how the Polywell might be reduced to a 1D or 2D version? I'd like to take a stab at doing a tiny bit of simulation on the side. I previously wasn't too sure about this because the time I have on our accessible supercomputers was granted specifically for a different project. However, I recently realized that our university has a cluster that can probably run a decent simulation as well, so... perfect place to try it out!

prestonbarrows
Posts: 78
Joined: Sat Aug 03, 2013 4:41 pm

Re: Rough thoughts on CSI second talk

Post by prestonbarrows »

Plasma Physics via Computer Simulation by Birdsall and Langdon is probably the best explanation of how PIC codes work. The first few chapters clearly go through all the mathematical nuts and bolts of using finite grids to describe fields and how discrete particles interact with those grids. It also includes a few fully implemented 1D (and maybe 2D?) electrostatic and electromagnetic codes written out in Fortran or something silly. But it's enough to see what is happening and then implement it in something more contemporary. Being so old, the sections dealing with hardware implementation are mostly useless.

hanelyp
Posts: 2261
Joined: Fri Oct 26, 2007 8:50 pm

Re: Rough thoughts on CSI second talk

Post by hanelyp »

I figure a lot of issues surrounding the Polywell can be addressed by a spherically symmetric simulation with 1 position axis (radius) + 2 momentum axes (radial, tangential), punting on the magrid itself. Profiles of particle density, energy, and radial/tangential velocity as a function of radius cover most of the really big questions.
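
A minimal Python sketch of that reduced phase space (a toy: a made-up static radial field stands in for the potential well, there is no self-consistent field solve, and units are normalized, so only the bookkeeping is meant to be illustrative):

# Toy reduced model: 1D in position (radius), 2D in velocity (radial, tangential).
# Assumptions: a prescribed inward radial field instead of a real well,
# reflecting outer wall at r = R, no magrid, no collisions, normalized units.
import numpy as np

R, nparticles, dt, steps = 1.0, 5000, 1e-3, 4000
rng = np.random.default_rng(0)

r = rng.uniform(0.05, R, nparticles)           # radial positions
v_r = np.zeros(nparticles)                     # radial velocities
v_t = 0.1 * rng.standard_normal(nparticles)    # tangential velocities
ell = r * v_t                                  # angular momentum, conserved per particle

def radial_field(r):
    # Placeholder: uniform inward acceleration on ions (charge/mass = 1).
    return -np.ones_like(r)

for step in range(steps):
    a = ell**2 / np.maximum(r, 1e-6)**3 + radial_field(r)   # centrifugal + electric
    v_r += a * dt
    r += v_r * dt
    out = r > R                                # reflect at the outer boundary
    r[out] = 2 * R - r[out]
    v_r[out] *= -1
    inside = r < 0                             # bounce off the axis
    r[inside] = -r[inside]
    v_r[inside] *= -1

# Density vs radius: counts divided by spherical shell volume.
counts, edges = np.histogram(r, bins=30, range=(0.0, R))
shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
print(np.round(counts / shell_vol, 1))

Swapping the placeholder field for one computed from the net charge profile (and adding collisions) is where the real work would be.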
The daylight is uncomfortably bright for eyes so long in the dark.

D Tibbets
Posts: 2775
Joined: Thu Jun 26, 2008 6:52 am

Re: Rough thoughts on CSI second talk

Post by D Tibbets »

Misconceptions, at least by my understanding:

The average ion and electron temperatures are, and probably must be, the same, at least for thermalized plasmas. In a Polywell, talking about cold or hot ions or electrons is based on location (radius from center). It is a spatial distribution. On the edge the ions are slow (cold), while the electrons are fast (hot). The opposite holds in the core (small radius). But the average temperature of the ions and electrons distributed over the volume of the machine is approximately the same. When cold electrons are invoked to minimize Bremsstrahlung radiation, it is referring to those electrons in the core. The electrons near the edge are hot. This works because of the spherical geometry. As the ions converge towards the center they not only accelerate (get hotter as they fall down their potential well), they become more dense. Not only electron speed is important, but so is the frequency with which electrons pass close to an ion. The radiation rate scales as ~ the 1.75 power of the density. If the electrons remained hot as they approached the center, the Bremsstrahlung radiation would be devastating. This is what Rider pointed out, especially for P-B11, as the Z of 5 for B11 results in ~25 times the Bremsstrahlung radiation compared to D-D plasmas (Z=1). Note that this temperature gradient for ions and electrons based on radial position is not absolute. There is slurring and expansion of the temperature range due to various interactions. In an ideal situation Bremsstrahlung would be trivial: the ion density would be tremendously greater in the center compared to the edge, but this core ion density would not contribute much to Bremsstrahlung radiation because the electrons in this region are very slow/cold.
On the edge, where the electron speeds are maximal, the corresponding ion density is trivial. Why is this important? Because the ion density not only figures into the Bremsstrahlung rate, it also figures into the fusion rate. The ratio of the overall fusion rate to the Bremsstrahlung rate is improved, and this allows for profitable fusion output versus the additional energy input needed to keep the average plasma temperature adequate.

The thermalized spread of the plasma also contributes. If you are talking about the average energy/temperature in a thermalized plasma like a Tokamak, the fusion rate relative to the Bremsstrahlung rate suffers. This is because the fusion cross-section is significantly lower at the average temperature of a Tokamak plasma - perhaps 10-20 keV. Most of the fusion is coming from the high-temperature thermal-tail ions, while Bremsstrahlung comes from this tail but also from the cooler, average-temperature ions. This is why thermalized plasmas cannot easily achieve breakeven unless the average temperature is in a window - not too cool and not too hot. And this is why Tokamaks are only good for fuels with a high fusion cross-section at relatively low temperature, like D-T. As Bussard pointed out, the temperature target for thermalized machines like Tokamaks is perhaps ~5 keV to 60 keV; below or above this, Bremsstrahlung losses will always exceed fusion output for D-T fuel. D-D fuel has a different fusion cross-section curve (not a peak and then fall-off as for D-T), but consider that once the slope of the D-D fusion cross-section curve becomes less than the ~1.75 power, ground is lost against Bremsstrahlung.

The thermalized temperature distribution tends to look the opposite of what was presented above. The slope is steeper on the low-temperature side and more stretched out on the high-temperature side, and this becomes more pronounced as the average temperature is increased. This has some benefits if you are burning D-T fuel, but for other fuels, where the fusion cross-section lags by ~2 orders of magnitude or more, the high-temperature thermal tail does not provide fusion rates sufficient to overcome the additional Bremsstrahlung losses that derive not only from the thermal tail but also from the mostly fusion-inert lower-temperature ions (average and below). If a Tokamak can be heated enough, D-D fusion might squeak by, but certainly aneutronic fuels would not. By having a non-thermalized plasma, so-called monoenergetic plasma/ions maintain the entire ion population at a relatively narrow temperature that is selected for the best fusion rate versus loss rates. Bremsstrahlung concerns, while not eliminated, are favorably modified. Again, keep in mind that the important consideration is the temperature of the ions and electrons (hot/cold) in the core; that is the parameter that determines the overall fusion rate versus Bremsstrahlung rate. Having the ions and electrons closely clustered around a spatially (radius) dependent average temperature allows for optimization of the fusion rate and the Bremsstrahlung loss rate. This is not possible when the plasma is thermalized - the temperatures are all over the place no matter where you are in the machine.

As I mentioned, the monoenergetic moniker is actually a misleading term. It should be taken to mean that the ion or electron population (in the Polywell, remember that radius is always a consideration) is clustered close to the average temperature at that location (like 10,000 eV +/- 1 keV), not that there is only one possible energy there.

In the Polywell, Nebel pointed out that profitable D-D fusion was possible even without confluence of the ions towards the center. The angular-momentum component of the ion motion leads to an even distribution (density) of the ions and a near-even temperature distribution (the potential-well acceleration only occurs at the very edge; the potential well is square rather than parabolic). Even so, the monoenergetic temperature distribution of all of the ions during the majority of their lifetimes gives the benefits I mentioned above - the temperature-dependent fusion cross-section is optimized against the Bremsstrahlung rate for almost all of the ions, as opposed to only the high thermal-tail ions in a thermalized plasma.

I think that there has to be at least some ion confluence/focus towards the center for D-He3 or P-B11 fusion to work, due to the above arguments about hot/cold distributions of ions and electrons dependent on radius.
Note that even this advantage was not considered adequate by Bussard. He proposed an additional manipulation to push the system over the edge. By having a ~10 to 1 ratio of protons to boron nuclei, the fusion rate suffers some, but not as much as the reduced Bremsstrahlung benefits. Bremsstrahlung scales as Z squared. With a Z of 1 for a proton and a Z of 5 for boron, a 1:1 mixture of p and B gives an effective factor of ((1*1^2) + (1*5^2))/2, or (1+25)/2, or 13. A 10:1 ratio would yield ((10*1^2) + (1*5^2))/11, or (10+25)/11, or ~3.2.
The fusion rate suffers by, I think, a small amount, perhaps 1/2 (?), but the Bremsstrahlung rate decreases by 3.2/13, or about 1/4. This is a doubling of the ratio of fusion output versus Bremsstrahlung losses. I'm uncertain how much the fusion suffers in the diluted plasma, as the overall ion density would be unchanged but the available boron targets would be fewer. Perhaps the penalty would be logarithmic, and the fusion loss would be only 10%. The opposite - 90% or even 99% loss - seems unlikely, or the proposition would not even be considered by knowledgeable individuals like Bussard.
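
The dilution arithmetic above is easy to play with. A small Python sketch of just that bookkeeping (assuming the Bremsstrahlung weight goes as the average of n_i*Z_i^2 per ion and the fusion rate simply as n_p*n_B at fixed total ion density; real scalings carry temperature factors this ignores):

# Compare 1:1 vs 10:1 proton:boron-11 mixes at fixed total ion density.
# Assumptions: Bremsstrahlung weight ~ average Z^2 per ion; fusion rate ~ n_p * n_B.
def mix(p_parts, b_parts):
    total = p_parts + b_parts
    brems = (p_parts * 1**2 + b_parts * 5**2) / total    # effective Z^2 per ion
    fusion = (p_parts / total) * (b_parts / total)       # proportional to n_p * n_B
    return brems, fusion

b1, f1 = mix(1, 1)    # brems weight 13.0
b2, f2 = mix(10, 1)   # brems weight ~3.2
print("Bremsstrahlung ratio, 10:1 vs 1:1:", round(b2 / b1, 2))   # ~0.25
print("fusion-rate ratio, 10:1 vs 1:1:", round(f2 / f1, 2))      # ~0.33

Under that crude assumption the fusion rate drops to roughly a third rather than a half or 90%, though again this ignores everything temperature- and profile-dependent.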


Note that I insist that the average temperature of the ions and electrons throughout the volume of the Polywell is the same. Perhaps it would be more useful, if less accurate, to say that the mean temperature is the same.

Dan Tibbets
To error is human... and I'm very human.

D Tibbets
Posts: 2775
Joined: Thu Jun 26, 2008 6:52 am

Re: Rough thoughts on CSI second talk

Post by D Tibbets »

hanelyp wrote: I figure a lot of issues surrounding the Polywell can be addressed by a spherically symmetric simulation with 1 position axis (radius) + 2 momentum axes (radial, tangential), punting on the magrid itself. Profiles of particle density, energy, and radial/tangential velocity as a function of radius cover most of the really big questions.
Much more condensed than what I was trying to say. Just like real estate, the important consideration is location, location, location...

Dan Tibbets
To error is human... and I'm very human.

D Tibbets
Posts: 2775
Joined: Thu Jun 26, 2008 6:52 am

Re: Rough thoughts on CSI second talk

Post by D Tibbets »

Another point: the comments and graph seem to suggest that the number of ions in the machine is much less than the number of electrons. This is not so. As stressed by Bussard, the difference between ions and electrons is only ~1 part per million. In the graph, such a difference could not be seen without a very powerful magnifying glass. Note that this condition applies to plasmas with a density where useful fusion could occur. That would be densities of ~10^19 to 10^22 particles per cubic meter. If you modeled an extremely thin plasma with a density of perhaps 10^6 particles per cubic meter (and ignored the stupendously long confinement times that would be required for any detectable fusion to occur), the MFP (mean free path) would be millions or even billions or more of passes across a 1 meter machine before a fusion collision, or even a Coulomb collision, occurred. Even at the edge, where the ion energy/velocity is minimal, the MFP may still be many times the width of the edge region, so edge annealing perhaps could not be addressed in this model.* Also, the Coulomb pressure (essentially the voltage) necessary to maintain an imbalance much above ~1 ppm would quickly increase to many millions of volts. The ~1 ppm imbalance in charge is enough to create a useful potential well, but not so much that the voltage becomes ridiculous.

Even at these densities of 2*10^6 charged particles per cubic meter, the ion and electron populations would be ~1,000,001 electrons and 999,999 ions (with a Z of one) per cubic meter.

* Coulomb collisions scale as ~1/the temperature squared; the collision rate per unit volume scales as the density squared, but the mean free path of an individual particle scales as only ~1/the density. The fusion collisions are the same except modified by the cross-section curve. At tens of keV and a density of ~10^19 /m^3 the Coulomb MFP might be in the neighborhood of 100 meters. At a density of 10^6 /m^3 the MFP would be longer by a factor of ~10^13, something like 10^15 meters. I think this could be considered a collisionless plasma, and it could be very difficult to determine any parameters dependent on collisions, especially collisions related to radial position in a machine with a diameter many, many orders of magnitude less than the MFP. While a million particles in a computer model might allow for relatively short computational time, the signal-to-noise ratio would be so low that any difference would possibly be much smaller than even 64-bit calculations could resolve.
My understanding of computer models is that this is useless within available computational times and bit resolutions. That is why efforts are made to use packets of particles to effectively increase the density many fold. The problem, though, is that you are then tracking packets of particles (perhaps billions or more particles per packet), so the resulting numbers are approximations based on assumptions that can lead to tremendous deviations in the results if the assumptions are off by even a tiny amount. This is reflected in the competing models that come up with much different results - Rider versus Nevins, etc.

This goes along with Bussard's comments in the Google talk, where the resolution available with the supercomputers of the time was totally inadequate for making predictions. Supercomputers have improved considerably since then, but as discussed in some threads here, there are still many orders of magnitude of improvement needed before reasonable results can be obtained. Perhaps when quantum computers come along, a massively parallel run might give definitive answers after only a few months of calculation. Until then, compromised computer models can be used, but only with the understanding that the predictions have low confidence, perhaps very low confidence, unless they are consistent with experiments. The models, starting from matched experiments, may be useful for extrapolating limited design changes, but the greater the design changes the more uncertain the models become.

Dan Tibbets
To error is human... and I'm very human.

asdfuogh
Posts: 77
Joined: Wed Jan 23, 2013 6:58 am
Location: California

Re: Rough thoughts on CSI second talk

Post by asdfuogh »

> The models, starting from matched experiments, may be useful for extrapolating limited design changes, but the greater the design changes the more uncertain the models become.

Hmm, I feel like there are a lot of misconceptions about what computational plasma physics is actually for. Sure, it'd be a fantastic dream to be able to design new experiments based purely on computational simulation, but at our current computational power and with current algorithms, that's just not possible due to the incredibly wide ranges of time and length scales. So a large part of simulation work is running a particular physics model to see what probable effects happen in isolated conditions. For example, you take some experimental plasma shot, take those physical profiles and put them into your simulation, then vary a shit ton of stuff to see what kind of instabilities and waves might pop up. You can get a sense of which waves will be most important, or which waves will dominate because of the specific kind of profiles you might have (for instance, a large temperature gradient at the edge), but it's not like you're simulating more than 5 milliseconds of real time in that kind of simulation.

>While a million particles in a computer model might allow for relatively short computational time, the signal-to-noise ratio would be so low that any difference would possibly be much smaller than even 64-bit calculations could resolve.

Really depends on what you're trying to resolve. Linear simulations don't really require that many particles per cell, but non-linear simulations get much messier and uglier reaaaal fast.

> the comments and graph seem to suggest that the number of ions in the machine is much less than the number of electrons.

I think most computational models directed toward fusion research are based on quasi-neutrality.

happyjack27
Posts: 1439
Joined: Wed Jul 14, 2010 5:27 pm

Re: Rough thoughts on CSI second talk

Post by happyjack27 »

D Tibbets wrote: * Coulomb collisions scale as ~1/the temperature squared; the collision rate per unit volume scales as the density squared, but the mean free path of an individual particle scales as only ~1/the density. The fusion collisions are the same except modified by the cross-section curve. At tens of keV and a density of ~10^19 /m^3 the Coulomb MFP might be in the neighborhood of 100 meters. At a density of 10^6 /m^3 the MFP would be longer by a factor of ~10^13, something like 10^15 meters. I think this could be considered a collisionless plasma, and it could be very difficult to determine any parameters dependent on collisions, especially collisions related to radial position in a machine with a diameter many, many orders of magnitude less than the MFP. While a million particles in a computer model might allow for relatively short computational time, the signal-to-noise ratio would be so low that any difference would possibly be much smaller than even 64-bit calculations could resolve.
To be clear, these are floating-point calculations, and for scientific calculations such as these it's pretty standard to use double precision (64-bit): http://en.wikipedia.org/wiki/Double-pre ... int_format That gives you 11 bits for your exponent, so roughly 2^-1022 up to 2^1024, which translates to a dynamic range between about 10^-308 and 10^308.

Now you still have the issue that your fractional part is only 52 bits, so if your numbers differ by more than a factor of 2^52, adding them together will simply give you the larger number as the result, since the smaller number was below the precision.

Usually this is fine.

There are some ways to improve upon this. One, you can do a sort of modulus arithmetic - just store the remainder and keep accumulating it until it gets beyond a certain threshold.

Another way is to see if you can rework it to add up the smaller numbers first, so the precision difference isn't so large. Generally speaking, try to do the operations in an order that keeps them on roughly the same scale.
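
The "store the remainder" trick is essentially compensated (Kahan) summation. A quick Python illustration, using a deliberately exaggerated example (one huge value swamping a million small ones):

# Kahan (compensated) summation: carry the low-order bits plain addition drops.
def naive_sum(values):
    total = 0.0
    for v in values:
        total += v                  # small terms below the precision just vanish
    return total

def kahan_sum(values):
    total, carry = 0.0, 0.0
    for v in values:
        y = v - carry               # re-inject what was lost on the previous add
        t = total + y               # big + small: low-order bits of y may be lost
        carry = (t - total) - y     # recover exactly what was lost
        total = t
    return total

values = [1e16] + [1.0] * 1_000_000
print(naive_sum(values) - 1e16)     # 0.0 - the million 1.0s disappeared
print(kahan_sum(values) - 1e16)     # ~1000000.0 - the remainder was carried along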

Though in all this, I'm partial to the "fast multipole method" (considered one of the top 10 algorithms of the century), or my implementation with slight modifications: http://sourceforge.net/projects/octreem ... particles/
(I have to work on improving memory management in that.) It explicitly takes the finite precision into account to dramatically reduce the computational load, with negligible effect on the simulation accuracy. And the gain in efficiency goes up the more particles you add. It's an O(N) solution to what is typically an O(N^2) problem!

happyjack27
Posts: 1439
Joined: Wed Jul 14, 2010 5:27 pm

Re: Rough thoughts on CSI second talk

Post by happyjack27 »

To clarify O(n) vs O(n^2):

It's called "big O notation", so look that up on Google if you want more detail.

Basically it means that an O(n^2) algorithm, where n is the number of particles, will take (for n = 1000) a thousand squared units of time, times a constant. And an O(n) algorithm, where n = 1000, will just take a thousand units of time, period.

At n = 1000, the O(n) algorithm will solve it 1000 times faster than the O(n^2) algorithm. At n = 1,000,000, the O(n) will solve it a million times faster.

This is where the saying "algorithm trumps hardware" comes from.
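
Putting numbers on it (a trivial sketch; the constant factors in front of each algorithm are ignored):

# Operation counts for O(n) vs O(n^2); the ratio n^2/n = n is the speedup factor.
for n in (1_000, 1_000_000):
    print(f"n={n:>9,}  O(n) steps={n:,}  O(n^2) steps={n*n:,}  ratio={n:,}x")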
Last edited by happyjack27 on Thu Mar 13, 2014 12:53 pm, edited 2 times in total.

hanelyp
Posts: 2261
Joined: Fri Oct 26, 2007 8:50 pm

Re: Rough thoughts on CSI second talk

Post by hanelyp »

happyjack27 wrote:At n=1000, the o(n^2) algorithm will solve it 1000 times faster than the o(n) algorithm.
That's backwards. The O(n^2) algorithm will be slower than the O(n).
The daylight is uncomfortably bright for eyes so long in the dark.

happyjack27
Posts: 1439
Joined: Wed Jul 14, 2010 5:27 pm

Re: Rough thoughts on CSI second talk

Post by happyjack27 »

hanelyp wrote:
happyjack27 wrote:At n=1000, the o(n^2) algorithm will solve it 1000 times faster than the o(n) algorithm.
That's backwards. The O(n^2) algorithm will be slower than the O(n).
Oops, thanks for spotting that. Will edit.

D Tibbets
Posts: 2775
Joined: Thu Jun 26, 2008 6:52 am

Re: Rough thoughts on CSI second talk

Post by D Tibbets »

My appreciation of the computer modeling limitations for the Polywell comes mostly from what Dr Bussard said in his Google talk- time slice 1:22:00 to 1:24:20
http://www.youtube.com/watch?v=rk6z1vP4Eo8
At that time, millions of dollars were anticipated for even a limited start-up analysis. Admittedly those calculations could now possibly be done on a souped-up desktop computer in a reasonable amount of time, but again these are only limited analyses.


A text that may be useful for someone truly determined to model the Polywell computationally is this:
http://books.google.com/books?hl=en&lr= ... WYztwBq6th

That someone does not include me, as I have neither the computer skills nor the persistence to tackle the problem. Reading some of the early pages of the excerpts, though, reveals some obvious issues. They admit that modeling with full-up particle-in-cell approaches is impossible for the foreseeable future unless some gross simplifications are used. The argument then becomes how much simplification is possible and how much uncertainty in the results is tolerable.

They mention using MHD theory and particle-in-cell modeling. According to Bussard, MHD approaches are worthless for modeling the Polywell. That leaves particle-in-cell, with its limitations. They mention using collisionless conditions (I don't understand how that would be very useful...) and few particles - super-particles (clumps of many particles) - to simplify the calculations in neutral plasmas. The Polywell is not a neutral plasma, and some of the FRC plasma schemes may also be non-neutral. This presumably complicates the process, perhaps profoundly.

This is all taken from only the first few pages of the book, but does start to illustrate some of the considerable challenges in computer modeling of fusion plasmas.

Recently an effort at MIT to model the edge instabilities in their Tokamak was reported. They admitted that more work was needed. I interpret that as meaning that they can't do it to any degree that would be highly useful. Perhaps some intermediate indication of trends might be implied, but again only as they compared to experiment, or as used to design experiments to get answers. The modeling itself does not give the answers.

Dan Tibbets
To error is human... and I'm very human.

asdfuogh
Posts: 77
Joined: Wed Jan 23, 2013 6:58 am
Location: California

Re: Rough thoughts on CSI second talk

Post by asdfuogh »

>Perhaps some intermediate indication of trends might be implied, but again only as they compared to experiment, or as used to design experiments to get answers. The modeling itself does not give the answers.

Dan, you're misunderstanding the point of most simulations. Sure, the end goal of simulations is to have something akin to aerospace: replace most physical experiments with great plasma modeling software. However, that isn't what we do right now, because the long-range Coulomb interaction makes it much more complicated than neutral fluids.

So, what most fusion simulations (at least, from my still fresh perspective) aim for, is to isolate and establish the phenomena that can be seen in a plasma. A big part of it is maybe taking some equilibrium data from an experiment, playing with different profiles of plasma properties, slowly changing and tweaking parameters, all to see what kind of instabilities and waves might pop out. You isolate some linear properties to identify the wave (oh, look, it's got some particular dispersion relation, some particular growth), you isolate the causes (oh, looks like a temperature gradient that has length scale greater than blahblah makes it go pop), and you do a lot of other linear analysis. Then you try and do some nonlinear runs which is even harder to understand (when you look at the data analysis). It's kind of like doing theoretical modeling, except you use computers to establish some results that you then argue with theoretical models and try to validate using experimental data.

And, oh yes, of course, what you look at and what you see depends on the time and length scales of your simulation. But guess what, plasmas are sadistic little devils with a really wide range of time and length scales. So a lot of the focus usually stays on low-frequency waves, because those are the dominant ones in transport and turbulence.

>Recently an effort at MIT to model the edge instabilities in their Tokamak was reported. They admitted that more work was needed. I interpret that as meaning that they can't do it to any degree that would be highly useful.

Being cautious is a good thing. It's definitely the smarter route to undersell your product in the plasma community. If you are too confident, raise too much hope, and then realize it's more complicated than you originally thought, you end up with a situation like NIF. Also, I'm not exactly sure which model you are talking about, but there's been a recent-ish edge physics model which has had great success in both fitting old experimental data and predicting new experimental data. It is called EPED, and it relates the peeling-ballooning modes and kinetic ballooning modes to the tokamak pedestal height and width. Is this the model that isn't "highly useful"?

asdfuogh
Posts: 77
Joined: Wed Jan 23, 2013 6:58 am
Location: California

Re: Rough thoughts on CSI second talk

Post by asdfuogh »

Also, Birdsall and Langdon is a classic, but it's pretty old. Anyone looking to try their hand ought to look into Computational Plasma Physics by T. Tajima. It's more recent, more detailed, and also doesn't assume a specific language. My advice would be to thumb through it somewhat (assuming some plasma physics and computational knowledge), then go and find the OSIRIS code by UCLA. From there, either modify the OSIRIS code until it suits your purpose, or base your code on similar algorithms and organization. No need to re-invent PIC codes when they already exist. In addition, to run any decent simulation, you'll probably need a lot more computational power; if you don't have access to a cluster, I would suggest buying some computational time from Amazon.
