
Daniel Dennett Discusses the Problem of Robotic Warfare.

Question: Should we be worried about high-tech warfare?

Dennett:    Yeah, I worry about that. I've been worried about it for years. And let me try to put it in even more basic terms. Quite a few years ago there was a movie called War Games, and the opening scene seemed to me the most chilling scene in the whole movie, because it showed the… actually it was a mock-up of the control room where the decision had to be reached about whether or not to fire retaliatory nuclear weapons in the case of an enemy strike. So the idea was: the nuclear warheads are on their way, and are we going to fire off our retaliatory strike? In the movie there was a fail-safe system, two people had to turn the key or something, and some of the soldiers wouldn't turn the key. And the chilling thing was that those soldiers were washed out of the program; they didn't have the guts to turn the key, so they were deemed incapable. They were disqualified for that particular job.

Well, look, if you're not really going to let people make the moral decision there, and it's the only decision that counts, then throw out the people altogether, give up on that fantasy, and say: all right, our fate is now in the hands of our machines, and there's no oversight possible. Now, that's a chilling idea, but it's also common in areas like medicine, where it takes some real courage, and maybe foolhardy courage, on the part of a doctor to overrule the diagnosis or even the treatment regimen proposed by some artificially intelligent analytic machine. We are delegating more and more of our difficult decision making to technology, and there are good reasons to do that, because the artificial devices we've made can sift through much, much more information than any human being could ever digest, and do it very fast. But the downside is that we're really cut off then from playing the direct controlling role in some very important matters.

Now, how we sort out that issue is, I think, a very important question, and we're only beginning to appreciate how serious it is. I have tremendous sympathy for, and appreciation of, those people who dimly or acutely perceive that the encroaching use of decision-making technology is both threatening and promising to take decisions out of our hands. This is a very big change, and it isn't a conscious robot. If you're worried about a conscious robot, you're looking in the wrong place for something ominous; that's not where the threat lies.

Question: Do you believe the Singularity is near?

Dennett:    Well, I think that the idea of the Singularity is possible in principle, of course, but I think that the idea that it's in the near future is just not possible. I disagree with Ray Kurzweil on that. I think that he hugely underestimates the amount of design work that has to go into the software. Let Moore's Law reign triumphant; we've got petaflop machines now, and so yes, we can perform trillions of floating-point operations a second, but we don't know how to harness that power, and it's going to be a long time before we do.

Question: Would you ever advocate regulating scientific inquiry?

Dennett:    I can certainly imagine that. I think that we should look ahead as best we can and see what we can do to steer research and development in benign directions, and I think there are real pitfalls to avoid. I think that the gene-splicing technologies have given us a pretty good example of the sorts of safeguards and controls that we want to include in that area of research, and I think there's room for more of that. I think we do want to think really carefully about the side effects of nanotechnology, for instance. And look, we've had some great successes which should be more dramatized. Sherry Rowland drew the world's attention to the hole in the ozone layer and was so quick and so convincing on this score that it changed the way we dealt with refrigerants, chlorofluorocarbons, and that was a very important thing that didn't happen. Crises averted, catastrophes averted, are not very exciting in a way, but we should make the most of them and encourage people to realize that we can avert catastrophes.
