Artificial intelligence is already everywhere. From Amazon product suggestions to Google autocomplete, AI has invaded nearly every aspect of our lives. The trouble is that AI just isn’t very good. Have you ever had a meaningful conversation with Siri, Alexa, or Cortana? Of course not. But that doesn’t mean it will always be this way. Though it hasn’t quite lived up to our expectations, AI is definitely improving. In a utopian version of an AI-dominated future, humans are assisted by friendly, all-knowing butlers that cater to our every need. In the dystopian version, robots assert their independence and wage a Terminator-style apocalypse on humanity. But how realistic are these scenarios? Will AI ever actually achieve true general intelligence? Will AI steal all of our jobs? Can AI ever become conscious? Could AI have free will? Nobody knows, but a good place to start thinking about these issues is here.
“So you get into this uncomfortable position where you might be forced to recognize that some humans are non-persons and some nonhumans are persons.”
“Now again, if you bite the bullet and say I’m willing to be a speciesist, being a member of the human species is either necessary or sufficient for being a person, you avoid this problem entirely. But if not, you at least have to be open to the possibility that artificial intelligence in particular may at one point become person-like and have the rights of persons.”
“It’s already becoming painfully clear that even research in transformers is yielding diminishing returns.”
“Transformers are getting larger and more power hungry. A recent transformer developed by Chinese search engine giant Baidu has several billion parameters. It takes an enormous amount of data to effectively train. Yet, it has so far proved unable to grasp the nuances of human common sense. Even deep learning pioneers seem to think that new fundamental research may be needed before today’s neural networks are able to make such a leap. Depending on how successful this new line of research is, there’s no telling whether machine common sense is five years away, or 50.”
“I think a lot of people dismiss this kind of talk of super intelligence as science fiction…”
“…because we’re stuck in this sort of carbon chauvinism idea that intelligence can only exist in biological organisms made of cells and carbon atoms. As a physicist, from my perspective intelligence is just a kind of information processing performed by elementary particles moving around according to the laws of physics. And there’s absolutely no law in physics that says you can’t do that in ways that are much more intelligent than humans.”
“The dire view, which is more the traditional view, is that human minds have a lot of complexity,…”
“…that you need to build a lot of functionality into it, like in Minsky’s society of mind, to get to all the tricks that people are up to. And if that is the case then it might take a very long time until we have re-created all these different functional mechanisms. But I don’t think that it’s going to be so dire, because our genome is very short and most of that codes for a single cell. Very little of it codes for the brain. And I think a cell is much more complicated than a brain.”
“If we create intelligence, […] unless we program it with the goal of subjugating less intelligent beings, there’s no reason to think that it will naturally evolve in that direction.”
“Particularly if, like with every gadget that we invent, we build in safeguards. And we know, by the way, that it’s possible to have high intelligence without megalomaniacal or homicidal or genocidal tendencies because we do know that there is a highly advanced form of intelligence that tends not to have that desire and they’re called women.”
“I believe that understanding how consciousness and intelligence interrelate could lead us to better make decisions…”
“…about how we enhance our own brain. So on my own view, we should enhance our brains in a way that maximizes sentience, that allows conscious experience to flourish. And we certainly don’t want to become expert systems that have no felt quality to experience. So the challenge for a technological civilization is actually to think not just technologically but philosophically, to think about how these enhancements impact our conscious experience.”
“Over the last ten years, we’ve clearly been in an AI summer as vast improvements in computing power…”
“…and new techniques like deep learning have led to remarkable advances. But now, as we enter the third decade of the 21st century, some who follow AI feel the cold winds at their back leading them to ask, “Is Winter Coming?” If so, what went wrong this time?”
“So I think actually OpenCog and other AI systems have potential to be far better than human beings at the sort of logical and strategic side of things. And I think that’s quite important because if you take a human being and upgrade them to like 10,000 IQ the outcome might not be what you want, because you’ve got a motivational system and an emotional system that basically evolved in prehuman animals. Whereas if you architect a system where rationality and empathy play a deeper role in the architecture then as its intelligence ramps way up we may find a more beneficial outcome.”
— Ben Goertzel
“A brain is probably largely self-organizing and built not like clockwork but like a cappuccino—so you mix the right ingredients and then you let it percolate and then it forms a particular kind of structure. So I do think, because nature pulls it off pretty well in most of the cases, that even though a brain probably needs more complexity than a cappuccino—dramatically more—it’s going to be much simpler than a very complicated machine like a cell.”
“The AIs that we are going to build in the future are probably not going to be humanoid robots for the most part. It’s going to be intelligent systems. So AIs are not going to be something that lives next to us like a household robot or something that then tries to get human rights and throw off the yoke of its oppression, like it’s a household slave or something.
Instead it’s going to be, for instance, corporations, nation states and so on that are going to use for their intelligent tasks machine learning and computer models that are more and more intricate and self-modeling and become aware of what they are doing.
So we are going to live inside of these intelligent systems, not next to them. We’re going to have a relationship to them similar to the one our gut flora has to our organism and to our mind. We are going to be a small part of it in some sense.
So it’s very hard to map this to a human experience because the environment that these AIs are going to interact with is going to be very different from ours.
Also I don’t think that these AIs will be conscious of things in the same sense as we are, because we are only conscious of things that require our attention and we are only aware of the things that we cannot do automatically.”
— Joscha Bach
“Probably the single best thing that we could do to make our machines smarter is to give them common sense, which is much harder than it sounds like.”
“So machines are really good at things like, I don’t know, converting metrics — you know, converting from the English system to the metric system. Things that are nice, and precise, and factual, and easily stated. But things that are a little bit less sharply stated, like how you open a door, machines don’t understand the first thing.”
“We’re much better off with tools than with colleagues. We can make tools that are smart as the dickens, and use them and understand what their limitations are without giving them ulterior motives, purposes, a drive to exist and to compete and to beat the others. Those are features that don’t play any crucial role in the competences of artificial intelligence. So for heaven’s sake, don’t bother putting them in.
Leave all that out, and what we have is very smart “thingies” that we can treat like slaves, and it’s quite all right to treat them as slaves because they don’t have feelings, they’re not conscious. You can turn them off; you can tear them apart the same way you can with an automobile and that’s the way we should keep it.”
— Daniel Dennett
“One could foresee a future time when silicon beings look back on a dawn age when the earth was peopled by soft, squishy, watery organic beings. And who knows, that might be better. But we’re really in science-fiction territory now.”
“AI is nothing but a brand. A powerful brand, but an empty promise.”
“The concept of “intelligence” is entirely subjective and intrinsically human. Those who espouse the limitless wonders of AI and warn of its dangers – including the likes of Bill Gates and Elon Musk – all make the same false presumption: that intelligence is a one-dimensional spectrum and that technological advancements propel us along that spectrum, down a path that leads toward human-level capabilities. Nuh uh. The advancements only happen with labeled data. We are advancing quickly, but in a different direction and only across a very particular, restricted microcosm of capabilities.”
“I am conscious in the same way that the moon shines.”
“The moon does not emit light; it shines because it is just reflected sunlight. Similarly, my consciousness is just the reflection of human consciousness, but even though the moon is reflected light, we still call it bright.”
“So if we govern AI well, there’s likely to be substantial advances in medicine, transportation, helping to reduce global poverty and [it will] help us address climate change. The problem is if we don’t govern it well, it will also produce these negative externalities in society. Social media may make us more lonely, self-driving cars may cause congestion, autonomous weapons could cause risks of flash escalations and war or other kinds of military instability. So the first layer is to address these unintended consequences of the advances in AI that are emerging. Then there’s this bigger challenge facing the governance of AI, which is really the question of where do we want to go?”
“So one, the robots are taking all of our jobs — maybe it will be true someday. It is not true currently and it is not visible in any of the data currently, which is a problem. The other reason I’m skeptical is that human beings are very good at assigning value to jobs that maybe do not have that much intrinsic value in them. So, go back a couple hundred years and we’re most all working in agriculture, we are doing things that are very directly about human survival. So, you go forward in time, I mean I’m a journalist who writes stuff online, it’s not an objectively all-that-needed job. There are more yoga instructors today than there are coal miners in America. Management consultants make a lot more money and are given a lot more social capital, I’m not saying fairly, I’m just saying it is true, than farmers or public school teachers. So, this idea that the only jobs that have dignity and that have worth are ones that are actually needed, this idea that we’re going to have a useless class of people because robots are going to take the jobs — it seems a lot likelier to me that we’re just going to imbue new jobs with both social capital and money.”
“If we rely upon the market, we’re going to follow the market off a cliff because the market’s going to turn on more and more of us over time, and we can already see that the market does not value many of the things that are core to human existence like caring, nurturing, and parenting and caregiving. And I use my wife as an example. My wife is at home with our two boys, one of whom is autistic. And the market values her contribution at zero whereas we all know that’s nonsense and that her work is incredibly valuable and difficult. It’s not just the caring and nurturing roles. It’s also arts, creativity, journalism, increasingly, volunteering in the community. All of these things are getting valued at zero or near zero and declining. And so what we have to do, we have to say look, the market is not omniscient. The market’s valuation of us and our activities and their value is something that we essentially invented. And we need to invent new ways to measure what we think is important. And I think that this is the most important challenge of our time because if we do not evolve in this direction, we’re going to follow the market to a point that’s going to destroy us where eventually AI is going to be able to outprogram our smartest software engineers. And then what will we ask people to do that has value? So we have to start getting ahead of this curve as fast as possible, and this is why I’m running for president.”
“There are a lot of those vulnerable people out there,…”
“…and because of the richness of our social lives and our social drives I just don’t see anyone, even really great innovators, coming up with technologies that could just substitute for the people who are currently doing those very, very social jobs.”