Posted: Fri Dec 18, 2009 10:39 pm
a discussion forum for Polywell fusion
https://talk-polywell.org/bb/
BenTC wrote: Innovation would accelerate.

Kurzweil argues convincingly that it has been for a long time.
TallDave wrote: Those were basically curiosities. The first practical steam engine is generally ascribed to Savery in 1698. It wasn't very good (tended to explode). Newcomen's was better, Watt's better yet, and then Richard Trevithick started using high pressure. It's hard to imagine any of this could have been done by Hellenistic Greeks.

alexjrgreen wrote: Ctesibius of Alexandria invented the double-action piston pump in the third century BC and Damascus steel was available a century earlier, so your point about the cheapness of slave labour is more convincing.

TallDave wrote: The Greeks understood the basic principle of the steam engine. They lacked the materials science and engineering to make a steam locomotive or mining pump, and their economies were wracked by constant wars.
There's a certain level below which slaves make way more economic sense.
I guess that also argues that markets were as important as invention.
Heron's steam engines were probably just as good as Savery's, possibly better, but they were only used as religious gadgets to impress visitors to the temple.

TallDave wrote: Those were basically curiosities. The first practical steam engine is generally ascribed to Savery in 1698. It wasn't very good (tended to explode). Newcomen's was better, Watt's better yet, and then Richard Trevithick started using high pressure. It's hard to imagine any of this could have been done by Hellenistic Greeks.
But I suppose that also argues that markets and labor costs were as important as the other elements.
That is true. That is why I believe that the first implementation of actual strong AI will not (or should not) be based on simulation of existing neural networks.

kurt9 wrote: I believe the development of such AI is unlikely in the next 50 years for neuro-biological reasons.
Brains are very different from semiconductor-based computers.
But they are also very slow. Silicon implementations can run circles around single-neuron processing. Even if we go down that inefficient neural-network simulation path, we can achieve the goal, simply because CPUs can simulate a lot of neurons and synapses quickly. Silicon itself is not as flexible, but it is much faster, and algorithms will implement the flexibility.

Brains are dynamically reconfigurable. The synaptic connections reconfigure themselves all the time (I think this occurs during sleep and is one of the reasons why we sleep). No semiconductor technology has this dynamism. FPGAs do not count, as they are not reconfigurable in the same manner.
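The brute-force point above, that fast silicon can compensate for an inefficient neuron-by-neuron simulation, can be sketched with a toy example. The leaky integrate-and-fire model and every parameter below are illustrative assumptions, not anything specified in this thread:

```python
import numpy as np

def simulate_lif(n_neurons=10_000, steps=1_000, dt=1e-3, seed=0):
    """Toy leaky integrate-and-fire network: each timestep leaks the
    membrane potential toward a noisy input current, fires any neuron
    that crosses threshold, and resets it. Returns total spike count."""
    rng = np.random.default_rng(seed)
    v = np.zeros(n_neurons)                  # membrane potentials
    tau, v_thresh, v_reset = 0.02, 1.0, 0.0  # assumed model constants
    spikes = 0
    for _ in range(steps):
        i_drive = rng.normal(1.1, 0.5, n_neurons)  # random input current
        v += (dt / tau) * (i_drive - v)            # leaky integration
        fired = v >= v_thresh
        spikes += int(fired.sum())
        v[fired] = v_reset                         # reset after a spike
    return spikes
```

On commodity hardware this vectorized loop steps tens of thousands of model neurons through a simulated second in a fraction of a second of wall time, which is the point being made; real synaptic dynamics are of course far richer than this sketch.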
Implementation details.

Memory storage is chemical in nature, not electronic. Synapses vary as to chemical type. Also, dendrites are not the only way neurons communicate with each other. They also use diffusion-based chemistry as well.
There are various kinds of memory storage. There is short-term storage, there is long-term potentiation, and then there is the really long-term memory, which is still not understood. The various kinds of memory and the communication mechanisms all interact with each other.
In fact, I believe we already have the hardware for the most optimized strong AI implementation. What is missing is software....

By Moravec's estimates, we already have them.
One way is to go 3D. All current silicon is 2D only.

CMOS semiconductors are reaching their limits. If 22nm is not the limit, most certainly that limit will be the 15nm design rule. The reason is that the deposition, patterning, and etching fabrication technology cannot make structures smaller than this. Also, there are quantum-mechanical issues with structures smaller than this that make CMOS transistors impossible. A new fabrication technology is necessary.
There are nearly 7 billion people on the earth. Will you personally be the only one improved?

I would rather improve myself than make my superior replacement. What say you?
And just how do you verify such a system?

Some more sophisticated software in specialized apps rewrites its own code in response to various situations.
Yep. Because they'll get more stuff done, you know, faster and whatnot. Am I missing something?

There are nearly 7 billion people on the earth. Will you personally be the only one improved?
If not, does it really matter that certain citizens will be built of silicon instead of organic matter?
MRAM has a better shot and is currently in production.

alexjrgreen wrote: The "prediction of future states" that Hawkins discusses is a property of memristance: Memristor minds: The future of artificial intelligence.

Luzr wrote: Yes, I guess there is sort of consensus that the real AI must "self-emerge" or "grow".

djolds1 wrote: There's only one AI route I think is plausible - bottom-up "growing" of minds, just like raising a child.
It may be that this ability to predict is what a network scales up to achieve higher levels of intelligence.
The Micro$haft philosophy.

BenTC wrote: 2. Proprietary software developers only debug their software to an "acceptable" level of pain for their users - they need to get the product out the door to make some money.
Interesting. Thanks.

BenTC wrote: 3. Open Source developers doing it for interest (like you might do a crossword or play a game), whose egos are tied up in their code, often put more effort into quality debugging. Homesteading the Noosphere examines the property and ownership customs of the open-source culture, which reveal an underlying gift culture in which hackers compete amicably for peer repute.
Perhaps. However, an absolute reduction in the number of people involved must have a negative impact on the range of options pursued and the rate of work.

BenTC wrote: 4. Any tradeoff in numbers is replaced by an increase in the quality of those that remain (but I think the numbers will go up anyway when people have more time on their hands).
Contra. Leisure time was FAR more available to hunter-gatherers. Agriculture actually led to a DECREASE in living standards for the producing peasant classes. It forced far more investment of time to achieve subsistence results, which made learning specialized skills economically productive. The plus side was that once you achieved subsistence it was relatively easy to achieve surplus (sunk costs already invested), which allows you to ride out unexpected downturns.

BenTC wrote: Innovation would accelerate. Similar to how the agricultural revolution generated free time for people to specialise away from hand-to-mouth living, a post-scarce world would leave more time for technically inclined people to follow their interests.
Yes.

BenTC wrote: Take an example close to home. If you had the opportunity, would you build yourself a Polywell to experiment with?
Me - skill sets, patent law, the Nuclear Regulatory Commission, and the prospect of FBI HRT (or Delta) snipers becoming involved with extreme prejudice when the word "nuclear" comes up.

BenTC wrote: What is stopping you? Stopping me is the equipment expense and time away from my day job feeding my family.
Yup. Tho as I pointed out earlier - that pushes AGAINST large societies such as we have today, in favor of smaller human bands (the Rule of 150 - the largest number of human beings that can run a social group informally) being self-sufficient. Absent a critical density of human beings (FAR larger than 150), supporting R&D (and supporting education) as we understand it is not doable. Abundant leisure & low population densities look a lot like the hunter-gatherer phase, and that was technologically static for 100,000 years.

BenTC wrote: In a post-scarce world, those constraints drop away and I, and everyone in this forum, could build a Polywell each. Imagine the discussions we could have here on the physics, using raw experimental results, rather than the data-blind speculation we have - and the multiplying effect on the rate of innovation.
The Hero engine was a velocity engine (analogous to a turbine), while the engine that actually got things going was a torque engine. To make turbines work, they need to operate at constant high velocity. Which says that you need gearing to do useful work.

alexjrgreen wrote: Heron's steam engines were probably just as good as Savery's, possibly better, but they were only used as religious gadgets to impress visitors to the temple.

TallDave wrote: Those were basically curiosities. The first practical steam engine is generally ascribed to Savery in 1698. It wasn't very good (tended to explode). Newcomen's was better, Watt's better yet, and then Richard Trevithick started using high pressure. It's hard to imagine any of this could have been done by Hellenistic Greeks.
But I suppose that also argues that markets and labor costs were as important as the other elements.
Without a secular market there was no equivalent of Newcomen or Watt, even though it was easily within the Greeks' abilities.
And yet there are people making money selling water. In North America. Right next to some of the largest fresh water lakes on the planet.

Bottom line, I think post-scarcity manufacturing has a real possibility of radically SLOWING the rate of innovation once it's achieved. It will nuke the profit motive for new advancements in physical technological durables - bluetooth, iPod, etc. We will still see dedicated academics and interested amateurs like the Royal Society of the 17th-19th centuries, or the Open Source movement of today, but the drive to newer and newer technologies is swamped as the only remaining profit sector is interpersonal.
Only because we are accustomed to it. There is no inherent linkage per se beyond sci fi fashion.

kurt9 wrote: All discussions of post-scarcity economics sooner or later lead to discussion of sentient AI.
I agree that AIs will need to be "grown" from the bottom up, just like a human consciousness takes years to develop. See here

kurt9 wrote: I believe the only way that sentient AI will be possible will be to use the same fabrication methods that are used to grow brains to grow artificial brains based on synthetic biology or some kind of biology-like nanotechnology. This technology will be developed sooner or later.
Perhaps. There's already some movement away from absolute highest performance in electronics towards ease of manufacture.

kurt9 wrote: Our computer chips will be wet and squishy, just like our bodies. Making AI will involve "growing" an artificial brain that will be remarkably similar, both in fabrication process as well as molecular structure, to our own brains.
While I support enhancing humanity and not creating our successors, I'm not sure it will be as easy as you seem to think. The human brain & the mind it forms are a lifelong development, with many of the basics of any single mind being "burned in" during the first few years of life. Adding quasi-AI "enhancements" that will seamlessly merge with that massive, undesigned structure is... daunting. Far more daunting than just growing a functional AI itself. We don't need to know HOW the AI (or human) brain and mind matured; just that they did. Interfacing with a matured (or maturing) mind is another thing entirely.

kurt9 wrote: In other words, it's not going to be that much different from us. The same set of technologies will make it easier to redesign and restructure our own selves as well.
I would rather improve myself than make my superior replacement. What say you?
Depends on how you mean. "Psychic" controllers for game stations came out this year. Interfaces of that type will explode over the coming few years. If you mean "uploading," AI is MUCH easier. Sapient AI is easier than uploading because we don't need to know HOW a brain matures. Put the silicon or protoplasm for the brain in place, pump in stimuli, and "decant" an adult in 4-25 years. How the 100 trillion neural or quasi-neural connections "line up" is unimportant, so long as the "product" (typically called a child) works. For uploading, you need to understand each and every connection in that 100-trillion-strong network: which connections have reinforced, which have atrophied.

MirariNefas wrote: Along that line of thought, what comes first, the mind-machine interface, or the strong AI?
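That "100 trillion connections" figure invites a quick back-of-envelope check. A minimal sketch, assuming (purely for illustration) that an uploading snapshot stores one small weight per synapse and nothing else:

```python
def connectome_snapshot_bytes(connections=100e12, bytes_per_synapse=4):
    """Naive lower bound on storing one static weight per synaptic
    connection, ignoring addressing overhead, neuron state, and all
    of the chemistry discussed earlier in the thread."""
    return connections * bytes_per_synapse

# 100 trillion synapses at an assumed 4 bytes each is 4e14 bytes,
# roughly 400 terabytes, before any dynamics are captured at all.
print(connectome_snapshot_bytes())
```

Even this deliberately lowballed number makes the point: a static wiring map is huge, and the reinforced-vs-atrophied detail uploading would need is far beyond it.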
Possibly. AI would have the advantage of much faster processing speed; silicon "clicks" orders of magnitude faster than neurons.

MirariNefas wrote: Personally, I think interfaces capable of transferring data between electronics and brains are much more achievable than an artificial consciousness. If so, we'll have the capacity to improve ourselves well before we could create our replacements. Moreover, this will remove the incentive to build our replacements, because you could just hire a human to interface up and do the same job.
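The speed gap above can be put in rough numbers. The rates below are commonly cited orders of magnitude assumed for illustration, not figures from this thread (neurons fire at most around 1 kHz; silicon logic switches around 1 GHz):

```python
NEURON_RATE_HZ = 1e3    # assumed upper bound on biological firing rate
SILICON_RATE_HZ = 1e9   # assumed ~GHz switching rate for silicon logic

# Raw per-element speed ratio; this says nothing about parallelism,
# connectivity, or energy use, only the "click" rate mentioned above.
speedup = SILICON_RATE_HZ / NEURON_RATE_HZ
print(f"silicon vs neuron switching: ~{speedup:.0e}x")
```

So even before architecture is considered, the per-element tempo differs by about six orders of magnitude under these assumptions.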
LOL. Touche.

MSimon wrote: And yet there are people making money selling water. In North America. Right next to some of the largest fresh water lakes on the planet.

Bottom line, I think post-scarcity manufacturing has a real possibility of radically SLOWING the rate of innovation once it's achieved. It will nuke the profit motive for new advancements in physical technological durables - bluetooth, iPod, etc. We will still see dedicated academics and interested amateurs like the Royal Society of the 17th-19th centuries, or the Open Source movement of today, but the drive to newer and newer technologies is swamped as the only remaining profit sector is interpersonal.