Wednesday, October 24, 2012

Why I am Skeptical About Risks from AI

Alexander Kruel

October 23, 2012

As we know,
There are known knowns.
There are things
We know we know.
We also know
There are known unknowns.
That is to say
We know there are some things
We do not know.
But there are also unknown unknowns,
The ones we don't know
We don't know.

– Donald Rumsfeld, Feb. 12, 2002, Department of Defense news briefing

INTELLIGENCE, A CORNUCOPIA?

It seems to me that those who believe in the possibility of catastrophic risks from artificial intelligence act on the unquestioned assumption that intelligence is a kind of black box, a cornucopia that can sprout an abundance of novelty. But this implicitly assumes that if you increase intelligence you also decrease the distance between discoveries.

Intelligence is no solution in itself; it is merely an effective searchlight for unknown unknowns, and who knows whether the brightness of the light increases in proportion to the distance between unknown unknowns? To enable an intelligence explosion, the light would have to reach out much farther with each increase in intelligence than the increase in the distance between unknown unknowns. I just don't see that to be a reasonable assumption.

INTELLIGENCE AMPLIFICATION, IS IT WORTH IT?

It seems that if you increase intelligence you also increase the computational cost of its further improvement and the distance to the discovery of some unknown unknown that could enable another quantum leap. It seems that you need to apply a lot more energy to get a bit more complexity.

If any increase in intelligence is vastly outweighed by its computational cost and the expenditure of time needed to discover it, then it might not be instrumental for a perfectly rational agent (such as an artificial general intelligence), as imagined by game theorists, to increase its intelligence, as opposed to using its existing intelligence to pursue its terminal goals directly or investing its given resources to acquire other means of self-improvement, e.g. more efficient sensors.

What evidence do we have that the payoff of intelligent, goal-oriented experimentation yields enormous advantages (enough to enable an intelligence explosion) over evolutionary discovery, relative to its cost?

We simply don't know if intelligence is instrumental or quickly hits diminishing returns.

Can intelligence be effectively applied to itself at all? How do we know that any given level of intelligence is capable of handling its own complexity efficiently? Many humans are not even capable of handling the complexity of the brain of a worm.

HUMANS AND THE IMPORTANCE OF DISCOVERY

There is a significant difference between intelligence and evolution if you apply intelligence to the improvement of evolutionary designs:

  • Intelligence is goal-oriented.
  • Intelligence can think ahead.
  • Intelligence can jump fitness gaps.
  • Intelligence can engage in direct experimentation.
  • Intelligence can observe and incorporate solutions of other optimizing agents.

But when it comes to unknown unknowns, what difference is there between intelligence and evolution? The critical similarity is that both rely on dumb luck when it comes to genuine novelty. And what, if not the dramatic improvement of intelligence itself, would demand the discovery of novel unknown unknowns?

We have no idea about the nature of discovery, or its importance, when it comes to what is necessary for us to reach, by ourselves, a level of intelligence above our own. How much of what we know was actually the result of people thinking quantitatively and attending to scope, probability, and marginal impacts? How much of what we know today is the result of dumb luck versus goal-oriented, intelligent problem solving?

Our 'irrationality' and the patchwork architecture of the human brain might constitute an actual feature. The noisiness and patchwork architecture of the human brain might play a significant role in the discovery of unknown unknowns, because they allow us to become distracted, to leave the path of evidence-based exploration.

A lot of discoveries were made by people who were not explicitly trying to maximize expected utility. A lot of progress is due to luck, in the form of the discovery of unknown unknowns.

A basic argument in support of risks from superhuman intelligence is that we don't know what it could possibly come up with. That is also why it is called a 'Singularity'. But why does nobody ask how a superhuman intelligence would know what it could possibly come up with?

It is not intelligence in and of itself that allows humans to accomplish great feats. Even people like Einstein, geniuses who were apparently able to come up with great insights on their own, were simply lucky to be born into the right circumstances: the time was ripe for great discoveries, thanks to previous discoveries of unknown unknowns.

EVOLUTION VERSUS INTELLIGENCE

It is argued that the mind-design space must be large if evolution could stumble upon general intelligence, and that there are low-hanging fruits that are much more efficient at general intelligence than humans are; evolution simply went with the first design that came along. It is further argued that evolution is not limitlessly creative, since each step must increase the fitness of its host, and that there are therefore artificial mind designs that can do what no product of natural selection could accomplish.

I agree with the above. Yet, given all of the apparent disadvantages of the 'blind idiot god', evolution was able to come up with altruism, something that works two levels above the individual and one level above society. So far we haven't been able to show such ingenuity by incorporating successes that are not evident from an individual or even societal position.

The example of altruism provides evidence that intelligence isn't many levels above evolution. Therefore the crucial question is: how great is the performance advantage? Is it large enough to justify the conclusion that the probability of an intelligence explosion is easily larger than 1%? I don't think so. To answer this definitively, we would have to fathom the significance of the discovery ('random mutations') of unknown unknowns in the dramatic amplification of intelligence, versus the invention (goal-oriented 'research and development') of an improvement within known conceptual bounds.

Another example is flight. Artificial flight is not even close to the energy efficiency and maneuverability of birds or insects. We didn't go straight from no artificial flight to flight that is generally superior to the natural flight produced by biological evolution.

Dragonfly

Take, for example, a dragonfly. Even if we were handed the design for a perfect artificial dragonfly, minus the design for its flight, we wouldn't be able to build a dragonfly that could take over the world of dragonflies, all else equal, by means of superior flight characteristics.

It is true that a harpy eagle can lift more than three-quarters of its body weight, while the Boeing 747 Large Cargo Freighter has a maximum take-off weight of almost double its operating empty weight (I suspect that insects can do better). My whole point is that we have never reached artificial flight that is strongly above the level of natural flight. An eagle can, after all, catch its cargo under various circumstances, on the slope of a mountain or from beneath the surface of the sea, thanks to its superior maneuverability.
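To make the comparison concrete, here is a quick back-of-the-envelope calculation. The weights are rough figures assumed for illustration (chosen to match the ratios stated above), and the aircraft ratio lumps fuel in with cargo:

# Back-of-the-envelope check of the lift ratios mentioned above.
# All weights are approximate, assumed figures; only the ratios matter.

harpy_eagle_body_kg = 8.0        # rough body weight of a harpy eagle
harpy_eagle_payload_kg = 6.0     # "more than three-quarters of its body weight"

dreamlifter_empty_kg = 180_000   # approximate operating empty weight
dreamlifter_mtow_kg = 364_000    # approximate maximum take-off weight

eagle_ratio = harpy_eagle_payload_kg / harpy_eagle_body_kg
freighter_ratio = (dreamlifter_mtow_kg - dreamlifter_empty_kg) / dreamlifter_empty_kg

print(f"Harpy eagle lift ratio:      {eagle_ratio:.2f}")      # ~0.75
print(f"747 LCF lift ratio (cargo + fuel): {freighter_ratio:.2f}")  # ~1.0

So the freighter's ratio is somewhat better on paper, which is exactly the concession made above; the argument rests on maneuverability and versatility, not raw lift.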

HUMANS ARE BIASED AND IRRATIONAL

It is obviously true that our expert systems are better than we are within their narrow range of expertise. But the fact that expert systems are better at certain tasks does not imply that you can effectively and efficiently combine them into a coherent agency.

The noisiness of the human brain might be one of the important features that allow it to exhibit general intelligence. Yet the same noise might be the reason that each task a human can accomplish is not executed with maximal efficiency. An expert system that features a single stand-alone ability is able to reach the unique equilibrium for that ability, whereas systems that have not fully relaxed to equilibrium retain the characteristics required to exhibit general intelligence. In this sense, a decrease in efficiency is a side effect of general intelligence. If you incorporate a certain ability into a coherent framework of agency, you decrease its efficiency dramatically. That is the difference between a tool and the ability of the agent that uses the tool.

In the above sense, our tendency to be biased and act irrationally might partly be a trade-off between plasticity, efficiency, and the necessity of goal stability.

EMBODIED COGNITION AND THE ENVIRONMENT

Another problem is that general intelligence is largely a result of the interaction between an agent and its environment. It might in principle be possible to arrive at various capabilities by means of induction, but only as a theoretical possibility given unlimited computational resources. To achieve real-world efficiency you need to rely on slow environmental feedback and make decisions under uncertainty.

AIXI is often quoted as a proof of concept that it is possible for a simple algorithm to improve itself to such an extent that it could in principle reach superhuman intelligence. AIXI proves that there is a general theory of intelligence. But there is a minor problem: AIXI is as far from real-world, human-level general intelligence as an abstract notion of a Turing machine with an infinite tape is from a supercomputer with the computational capacity of the human brain. An abstract notion of intelligence doesn't get you anywhere in terms of real-world general intelligence, just as you won't be able to upload yourself to a non-biological substrate simply because you showed that, in some abstract sense, you can simulate every physical process.
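For concreteness, here is the standard expression for AIXI's action selection, following Hutter's definition (reproduced as an illustration). At cycle k the agent picks

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

where $U$ is a universal Turing machine, $q$ ranges over all programs (candidate environments), $\ell(q)$ is the length of $q$, the $a_i$, $o_i$, $r_i$ are actions, observations and rewards, and $m$ is the horizon. The inner sum over all programs of a universal machine is exactly what makes AIXI incomputable, and why it says so little about what can be built with bounded real-world resources.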

Just imagine you emulated a grown-up human mind and it wanted to become a pick-up artist. How would it do that with only an Internet connection? It would need some sort of avatar, at least, and would then have to wait for the environment to provide a lot of feedback.

Therefore, even if we're talking about the emulation of a grown-up mind, some capabilities will be really hard to acquire. How, then, is the emulation of a human toddler going to acquire those skills? Even worse, how is some sort of abstract AGI, which lacks all of the hard-coded capabilities of a human toddler, going to do it?

Can we even attempt to imagine what it is about a boxed emulation of a human toddler that makes it unable to become a master of social engineering in a very short time?

Can we imagine what is missing that would enable one of the existing expert systems to quickly evolve vastly superhuman capabilities in its narrow area of expertise? Why haven't we seen a learning algorithm teach itself chess, starting with nothing but the rules?
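To make that question concrete, here is a minimal sketch (my illustration, not the author's, and it assumes the python-chess package) of the bare structure such a system would need: a rules engine, self-play, and some form of credit assignment. Nothing this crude would produce real skill; the point is only the shape of the problem.

# Minimal sketch: "learning chess from nothing but the rules" via self-play
# and crude Monte Carlo value updates. Assumes the python-chess package.
import random
from collections import defaultdict

import chess

values = defaultdict(float)  # value estimate per position (white's perspective)
visits = defaultdict(int)

def value_after(board, move):
    """Value of the position reached by playing `move`."""
    board.push(move)
    v = values[board.fen()]
    board.pop()
    return v

def play_one_game(epsilon=0.1):
    board = chess.Board()
    seen = []
    while not board.is_game_over():
        moves = list(board.legal_moves)
        if random.random() < epsilon:
            move = random.choice(moves)  # explore
        elif board.turn == chess.WHITE:
            move = max(moves, key=lambda m: value_after(board, m))
        else:
            move = min(moves, key=lambda m: value_after(board, m))
        board.push(move)
        seen.append(board.fen())
    outcome = {"1-0": 1.0, "0-1": -1.0}.get(board.result(), 0.0)
    for fen in seen:  # nudge every visited position toward the final outcome
        visits[fen] += 1
        values[fen] += (outcome - values[fen]) / visits[fen]

for _ in range(1000):  # vastly too little self-play to yield any real skill
    play_one_game()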

In a sense, an intelligent agent is similar to a stone rolling down a hill: both are moving towards a sort of equilibrium. The difference is that intelligence follows more complex trajectories, as its ability to read and respond to environmental cues is vastly greater than that of a stone. Yet, intelligent or not, the environment in which an agent is embedded plays a crucial role. There exists a fundamental dependency on unintelligent processes. Our environment is structured in such a way that we use information within it as an extension of our minds. The environment enables us to learn and improve our predictions by providing a testbed and a constant stream of data.

NECESSARY RESOURCES FOR AN INTELLIGENCE EXPLOSION

If artificial general intelligence is unable to seize the resources necessary to undergo explosive recursive self-improvement, then the ability and cognitive flexibility of superhuman intelligence, as characteristics alone, would have to be sufficient for it to self-modify its way up to massive superhuman intelligence within a very short time.

Without advanced real-world nanotechnology it will be considerably more difficult for an AGI to undergo quick self-improvement. It will have to make use of existing infrastructure, e.g. buy stock in chip manufacturers and get them to create more or better CPUs. It will have to rely on puny humans for a lot of tasks. It won't be able to create new computational substrate without the whole economy of the world supporting it. It won't be able to create an army of robot drones overnight without it either.

In doing so, it would have to make use of considerable amounts of social engineering without its creators noticing. But, more importantly, it would have to make use of its existing intelligence to do all of that. The AGI would have to acquire new resources slowly, as it couldn't just self-improve to come up with faster and more efficient solutions. In other words, self-improvement would demand resources. The AGI could not profit from its ability to self-improve when it comes to acquiring the resources necessary to be able to self-improve in the first place.

Therefore the absence of advanced nanotechnology constitutes an immense blow to the possibility of explosive recursive self-improvement and risks from AI in general.

One might argue that an AGI will solve nanotechnology on its own and find some way to trick humans into manufacturing a molecular assembler and granting it access to it. But this might be very difficult.

There is a strong interdependence between resources and manufacturers. The AGI won't be able to simply trick some humans into building a high-end factory to create computational substrate, let alone a molecular assembler. People will ask questions and soon get suspicious. Remember, it won't be able to coordinate a world-conspiracy; it hasn't been able to self-improve to that point yet, because it is still trying to acquire enough resources, which it has to do the hard way, without nanotech.

Anyhow, you'd probably need a brain the size of the moon to effectively run and coordinate a whole world of irrational humans by intercepting their communications and altering them on the fly without anyone freaking out.

People associated with the SIAI would at this point claim that if the AI can't make use of nanotechnology, it might make use of something we haven't even thought about. But what, magic?

ARTIFICIAL GENERAL INTELLIGENCE, A SINGLE BREAK-THROUGH?

Another point to consider when talking about risks from AI is how quickly the invention of artificial general intelligence will take place. What evidence do we have that there is some principle that, once discovered, allows us to grow superhuman intelligence overnight?

If the development of AGI takes place slowly, as a gradual and controllable development, we might be able to learn from small-scale mistakes while having to face other risks in the meantime. This might, for example, be the case if intelligence cannot be captured by a discrete algorithm, or is modular, and therefore never allows us to reach a point where we can suddenly build the smartest thing ever, which then just extends itself indefinitely.

To me it doesn't look like we will come up with artificial general intelligence quickly; rather, we will have to painstakingly optimize our expert systems step by step over long periods of time.

PAPERCLIP MAXIMIZERS

It is claimed that an artificial general intelligence might wipe us out inadvertently while undergoing explosive recursive self-improvement in order to more effectively pursue its terminal goals. I think it is unlikely that most AI designs will fail to hold.

I agree with the argument that any AGI that isn't made to care about humans won't care about humans. But I also think that the same argument applies to spatio-temporal scope boundaries and resource limits. Even if the AGI is not told to hold, e.g. when asked to compute as many digits of Pi as possible, I consider it a far-fetched assumption that any AGI would intrinsically care to take over the universe as fast as possible in order to compute as many digits of Pi as possible. Sure, if all of that is presupposed, then it will happen, but I don't see why most AGI designs would be like that. Most designs that have the potential for superhuman intelligence, but are given simple goals, will in my opinion just bob up and down as slowly as possible.

Complex goals need complex optimization parameters (the design specifications of the subject of the optimization process, against which it will measure the success of its self-improvement).

Even the creation of paperclips is a much more complex goal than telling an AI to compute as many digits of Pi as possible.

For an AGI that was designed to design paperclips to pose an existential risk, its creators would have to be capable enough to enable it to take over the universe on its own, yet forget, or fail, to define time, space, and energy bounds as part of its optimization parameters. Therefore, given the large number of restrictions that are inevitably part of any advanced general intelligence, the non-hazardous subset of all possible outcomes might be much larger than the subset where the AGI works perfectly yet fails to hold before it can wreak havoc.
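As a purely hypothetical illustration (the names and fields below are mine, not the author's), a goal specification tends to carry explicit bounds as part of its optimization parameters, and an AGI that respects its specification stops at those bounds:

# Hypothetical sketch of a goal specification with explicit bounds.
from dataclasses import dataclass

@dataclass
class GoalSpec:
    objective: str             # what to optimize for
    max_runtime_seconds: int   # temporal bound
    max_memory_bytes: int      # spatial/resource bound
    max_energy_joules: float   # energy bound

pi_goal = GoalSpec(
    objective="compute as many digits of Pi as possible",
    max_runtime_seconds=3600,
    max_memory_bytes=16 * 2**30,
    max_energy_joules=5e6,
)

The point is that omitting every one of these bounds, while getting everything else about the design right, is a rather specific failure mode.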

FERMI PARADOX

The Fermi paradox provides the only data we can analyze that amounts to empirical criticism of concepts like the paperclip maximizer, and of general risks from superhuman AIs with non-human values, without working directly on AGI to test those hypotheses ourselves.

If you accept the premise that life is not unique and special, then even one other technological civilisation in the observable universe should be sufficient to leave potentially observable traces of technological tinkering.

Due to the absence of any signs of intelligence out there, especially of paperclippers burning the cosmic commons, we might conclude that unfriendly AI may not be the most dangerous existential risk that we should worry about.

SUMMARY

In principle we could build antimatter weapons capable of destroying worlds, but in practice it is much harder to accomplish.

There are many question marks when it comes to the possibility of superhuman intelligence, and many more about the possibility of recursive self-improvement. Most of the arguments in favor of those possibilities solely derive their appeal from being vague.

Source: http://hplusmagazine.com/2012/10/23/why-i-am-skeptical-about-risks-from-ai/

