On first reflection, with D-D fuel several parameters can be combined. The calibrated neutron count directly determines the fusion rate, and the known MeV per reaction gives the energy. The only real variable here is the neutron count.
Ion injection energy ideally will always be as close to zero as you can get it, presuming injection is at the Wiffleball border (not deeper in the machine). An alternative might be to inject the ions from the outside with just enough KE to overcome the magrid repulsion. This would cost more energy, but it might have benefits in terms of avoiding electron sinks like a positive ion gun located at the Wiffleball border.
A potential well depth of ~ 1.2 million volts is not desirable. While it is true that the D-D cross section continues to increase with increased ion KE, the increase is not linear. The best voltage from a raw efficiency viewpoint is ~ 15,000 eV. This is where the D-D cross section curve is steepest (a small increase in voltage will increase the fusion rate by the greatest relative amount). If this raw efficiency were the only consideration then this would be the ideal potential well depth- this is what Joel Rogers used for his simulations.
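To get a feel for why the low-energy part of the curve is where the relative payoff is largest, here is a rough sketch. It uses the standard non-resonant form sigma(E) ~ (S/E)*exp(-B_G/sqrt(E)) with B_G ~ 31.4 sqrt(keV) for D-D, and it assumes a constant astrophysical S-factor, so the numbers are only illustrative, not from any real cross-section table:

```python
import math

B_G = 31.4  # Gamow constant for D-D, in sqrt(keV) (approximate)

def rel_sigma(E_keV):
    """D-D cross section up to a constant factor, non-resonant form,
    assuming a roughly constant S-factor (an idealization)."""
    return math.exp(-B_G / math.sqrt(E_keV)) / E_keV

def fractional_gain_per_keV(E_keV, dE=0.1):
    """Fractional cross-section increase per extra keV of ion energy."""
    s0, s1 = rel_sigma(E_keV), rel_sigma(E_keV + dE)
    return (s1 - s0) / (s0 * dE)

for E in (15, 40, 80):
    print(E, "keV:", round(fractional_gain_per_keV(E), 3))
```

In this crude model an extra keV buys roughly 20% more cross section at 15 keV, but only about 1% more at 80 keV, which is the sense in which the low-voltage region gives the biggest relative return per volt.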
But, there are many other concerns- thermalization issues, Bremsstrahlung, arcing, energy output density and thermal loads, etc. I believe Dr. Bussard considered all of these issues and decided that a ~ 80,000 volt potential well depth in a 3 meter, 10 T machine was the best compromise for D-D fusion.
Consider Bremsstrahlung. It scales as roughly the 1.75 power of the temperature. This means that the Bremsstrahlung X-ray losses increase by a factor of ~ 60 for a temperature increase from 100,000 to 1,000,000 eV. Unless the fusion rate increases at a faster rate within this temperature range you are losing ground.
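The factor of ~ 60 follows directly from the quoted scaling exponent:

```python
# Bremsstrahlung loss scaling quoted above: roughly the 1.75 power of temperature.
T_low, T_high = 100_000, 1_000_000  # eV
brem_factor = (T_high / T_low) ** 1.75
print(round(brem_factor))  # ~56, i.e. the "factor of ~60" above
```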
Also, above ~ 1 million eV, other endothermic nuclear reactions become increasingly significant.
http://en.wikipedia.org/wiki/Oppenheime ... ps_process
Also, as the energy of the fuel ions is increased towards the energy of the fusion reactions, unless your heat-to-electricity conversion is 100% efficient you are losing ground.
eg: 100 keV fuel ion energy + ~ 3 MeV fusion product energy = 3.1 MeV. That energy converted to electrical energy via a steam cycle (assume ~ 30% efficiency) might yield ~ 0.93 MeV of electrical output. Subtract from this the input energy- even if you are 100% efficient in accelerating and maintaining the fuel ions, without Bremsstrahlung losses, etc., you would yield a net electrical energy of 0.93 MeV - 0.1 MeV = 0.83 MeV net gain per fusion reaction.
If the fuel ion were at 1 MeV, then the thermal output would be 3.0 MeV + 1 MeV = 4.0 MeV. This fusion output at a conversion efficiency of 30% = 1.2 MeV of electrical energy. Subtract the input energy from this: 1.2 MeV - 1.0 MeV = 0.2 MeV. So, not only do you lose ground on the Bremsstrahlung issue, but also on the output / input energy ratio per fusion reaction.
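A quick check of this per-reaction bookkeeping, under the same idealizations (all drive plus fusion energy recovered as heat, a 100%-efficient ion drive, no Bremsstrahlung losses):

```python
def net_gain_ev(drive_ev, fusion_ev=3.0e6, conversion=0.30):
    """Net electrical energy per D-D reaction, in eV.
    Idealizations: all drive + fusion energy is recovered as heat at the
    given conversion efficiency, and the ion drive itself is lossless."""
    return (drive_ev + fusion_ev) * conversion - drive_ev

print(round(net_gain_ev(100e3) / 1e6, 2))  # 0.83 MeV net at 100 keV drive
print(round(net_gain_ev(1.0e6) / 1e6, 2))  # 0.2 MeV net at 1 MeV drive
```

Roughly a 4x drop in net gain per reaction for the 10x higher drive energy, before even counting the Bremsstrahlung penalty.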
This is one area where P-B11 fuel has an advantage if direct conversion is used. There is not as much penalty (at ~ 80% conversion efficiency with direct conversion) for using higher drive energies. The ~ 9 MeV per P-B11 fusion reaction also helps to offset the higher drive voltages needed (still well below a million volts). The need to have excess protons in the mix to minimize Bremsstrahlung losses does increase the energy costs for maintaining the mix of particles at ~ 200 keV, but I suspect the P-B11 mixture would still have an advantage, or at least less of a disadvantage than would otherwise be appreciated.
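Applying the same crude per-reaction bookkeeping to P-B11 with direct conversion (using the ~ 9 MeV and ~ 200 keV figures from above, and ignoring the extra cost of maintaining the proton-rich mix, so this overstates the P-B11 side somewhat):

```python
def net_gain_ev(drive_ev, fusion_ev, conversion):
    # Same idealized bookkeeping as for D-D: all drive + fusion energy is
    # recovered at the stated efficiency, with a lossless ion drive.
    return (drive_ev + fusion_ev) * conversion - drive_ev

dd   = net_gain_ev(100e3, 3.0e6, 0.30)  # D-D at 100 keV, steam cycle
pb11 = net_gain_ev(200e3, 9.0e6, 0.80)  # P-B11 at 200 keV, direct conversion
print(round(dd / 1e6, 2), round(pb11 / 1e6, 2))  # 0.83 vs 7.16 MeV net
```

Even with the higher drive energy, the larger per-reaction yield and the ~ 80% conversion efficiency leave far more net energy per reaction, which is the advantage being claimed.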
Also, the vacuum pumping required to prevent charged and neutral particle buildup outside the magrid from reaching levels that lead to potential-well-destroying arcing will be very challenging. Increasing the voltage too much compounds the problem, as the arcing is proportional to the external density AND the voltage.