Sophia the robot, an AI machine made by Hanson Robotics (left). The Sentient Machine: The Coming Age of Artificial Intelligence by Amir Husain. Photo credit: ITU Images / Wikimedia (CC BY 2.0), Scribner

Be Afraid or Be Involved: Artificial Intelligence Is Close at Hand

Artificial Intelligence May Be the Most Important Story of 2018

01/02/18

‘Tis the season for looking back at what made 2017 tick. At WhoWhatWhy it’s also a time for looking at what might emerge as the top stories for the year ahead. It certainly appears that Artificial Intelligence (AI) may be a big part of that future.

Talk to any scientist working in the field of AI, and they will tell you we are at an inflection point, where more and more work will be done by machines, changing everything from warfare to the basic social contract.

Just as the mechanical replication of human muscle was the driver of the Industrial Age, the synthetic replication of the human mind is the driver of the AI Age. As a result, the very notion of gainful employment will be disrupted, and with it the underpinnings of democratic government.

In this week’s WhoWhatWhy podcast, one of the leaders in the field of AI, Amir Husain, talks to Jeff Schechtman about the perils we face if policymakers don’t start paying attention to the revolution that is already upon us.

At the moment, he says, we have no plan, no sense of urgency, and nowhere near enough money devoted to the issue. According to Husain, the US is on pace to spend only about $5 billion on AI over the next five years. By contrast, the Chinese are planning to spend $150 billion over the same period. And Husain reminds us that Russian President Vladimir Putin recently told a group of students, “Whoever becomes the leader in this sphere will become the ruler of the world.”

The march of technology is unstoppable, Husain says: We can start adjusting to AI today — or complain about our lost opportunities tomorrow.

Amir Husain is the author of The Sentient Machine: The Coming Age of Artificial Intelligence (Scribner, November 2017).

Full Text Transcript:

As a service to our readers, we provide transcripts with our podcasts. We try to ensure that these transcripts do not include errors. However, due to resource constraints, we are not always able to proofread them as closely as we would like, and we hope that you will excuse any errors that slipped through.

Jeff Schechtman: Welcome to Radio WhoWhatWhy. I’m Jeff Schechtman. One of the criticisms often leveled at Silicon Valley is that so much engineering talent goes toward the creation of minor advancements: a new dating app, a new form of banking, or even games.
But all of this belies what’s really going on beneath the surface in the world of artificial intelligence, a world, and indeed a term, that conjures up whole new fears and confusion. Perhaps it comes from too many science fiction movies, or maybe it’s just a fear of the ultimate change and loss of control. Either way, it is coming to every aspect of our lives. We can choose to have the conversation now, or complain and protest later. The former seems like a much better idea. And we’re going to have that conversation today with my guest Amir Husain. He’s a serial entrepreneur and an inventor. He serves on IBM’s advisory board for Watson and Cognitive Computing. And he’s the founder and CEO of SparkCognition, a company specializing in cognitive computing software solutions.
His work has been featured in such publications as Fast Company, Wired, Forbes, and The New York Times. And it is my pleasure to welcome Amir Husain here to talk about The Sentient Machine: The Coming Age of Artificial Intelligence.
Amir, thanks so much for joining us here on Radio WhoWhatWhy.
Amir Husain: Jeff, thank you so much for having me.
Jeff Schechtman: When we talk about artificial intelligence, we tend to talk about it as sort of one giant thing. Talk a little bit about the different classifications of artificial intelligence that help put it into clearer perspective.
Amir Husain: Yeah. Absolutely. Artificial intelligence has been a field of study now for many decades. In the mid-fifties, at a very famous conference at Dartmouth, the fathers of artificial intelligence, John McCarthy and Marvin Minsky, along with many others, got together. And originally, their thought was that it wouldn’t be very complex to write programs that would cause a computer to exhibit the kind of thought that human beings exhibit. But as more and more work happened in this area, they realized that this wasn’t a simple challenge. And many of the things that we think of as very complex, such as playing chess at the grandmaster level, ended up being simpler. They got addressed, and artificial intelligence programs that could defeat a human grandmaster at chess got built much sooner than we could replicate the behaviors of, let’s say, a three-year-old that has this sort of innate curiosity, that’s discovering the world, that’s able to learn across multiple domains.
So the classifications of artificial intelligence … it’s very important to understand these, particularly in the context of what’s going on today. Much of what you see now, the kind of technology that can recognize images and discover objects in those images, or can listen to human speech and then transcribe it as text, these sorts of capabilities fall into the realm of what’s referred to as artificial narrow intelligence, ANI. You’re using smart programs, techniques that might leverage machine learning or deep learning, which is one of the very popular techniques being used these days. But these are all specialized applications focused on a specific domain.
So we might have a program that can defeat a human grandmaster at chess. And in that sense it’s very smart, but only within the narrow domain of chess. It can’t drive a car, for example. And that same program can’t recognize objects inside pictures. It can’t carry on a conversation. Now other ANI systems, other narrow intelligence systems, might be able to do that. So that’s one classification, ANI.
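To make that narrowness concrete, here is a minimal sketch of an ANI-style system: a model trained to recognize handwritten digits, and nothing else. The library and settings are illustrative choices of ours, not tools Husain mentions.

```python
# A minimal sketch of artificial narrow intelligence (ANI): a model that
# learns to classify handwritten digits very well and can do nothing else.
# scikit-learn and the SVC parameters are illustrative assumptions.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()  # 8x8 grayscale images of the digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = SVC(gamma=0.001)  # a classic "narrow" learner
model.fit(X_train, y_train)

print(f"digit accuracy: {model.score(X_test, y_test):.2%}")
# The same trained model cannot transcribe speech, play chess, or drive a
# car: its competence exists only in the single domain it was fit to.
```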
And then the other one that’s worth talking about a little bit is what’s called Artificial General Intelligence. And that’s sort of the dream of artificial intelligence researchers. That’s been the goal that many of the earliest artificial intelligence researchers really targeted, which is the ability to build synthetic intelligence that can replicate the general purpose nature of the human intellect. In other words, the one program that can go and learn about sports, and can learn about law, and can learn about art, and can exhibit that generality in both its expertise as well as its ability to gain further information and continue learning.
So those are two worth really thinking about, ANI and AGI. There’s a third that people talk about, which is artificial superintelligence, which is really AGI at a level that’s sort of past human, that’s operating so fast that it’s gone far beyond the capabilities of a human mind. So that’s the third one, an aspirational goal again, but nothing that we’re close to at the moment.
Jeff Schechtman: One of the things these classifications raise, and really it’s the other part of the conversation, I suppose, is the degree to which artificial intelligence either augments or replaces human involvement. Talk about that.
Amir Husain: Absolutely. So one of the fears around artificial intelligence and the advent of smart machines is that they’ll take away our jobs, and much of what we do can be done by machine intelligence, at least as far as the performance of economic labors is concerned. Now in the meantime, of course, we’re talking about building systems that augment human decision making. We know that there are many cognitive biases. For example, in making legal judgments, there’s a very famous example covered by Daniel Kahneman in his book Thinking, Fast and Slow, where he talks about judges adjudicating parole applications. When the glucose level in their bodies is diminished, they start defaulting to fast thinking, in other words, not really thinking through the case and just defaulting to a rejection.
And after lunch, when the glucose level spikes again, they’re able to think through the decision more rationally, and the number of approvals goes up. That is just one example of a cognitive bias, but there are many such biases. And we definitely want to work toward systems that can augment human beings and address some of these shortcomings, some of these issues in our thought patterns. So in the near term that’s definitely a goal. In the medical profession, in energy … my own company is working in those areas, in the military for decision making as well as in actually taking action … Today, it’s mostly about augmentation.
But very quickly in some areas, now we’re moving into autonomous action where these machines can actually perform a job end to end without needing a human in the loop. My view is that while we are still at that point in the development of AI where it’s mostly about augmentation, we need to start having the policy conversation around what that future social contract looks like. What does an economy, what does a country, what does a democracy that has this new form of synthetic intelligence performing labors … what does that look like? And how does that change the social contract that we have in place right now? We need to have that discussion because my view is that ultimately more and more of today’s economic labors will be performed by machines.
It’s not going to happen overnight. But that’s the direction we’re headed in. So this time that we have now should be used not to talk about platitudes: there’ll be some jobs we don’t know about yet that’ll come about, it always happens, don’t worry about the job losses now, we’ll just discover some new form of employment for human beings in the future. That very well may be the case. But in my view, when we replicated the human muscle, back in the late 1600s with the steam engine, from that point on you saw in history that the employment of human muscle for significant work diminished drastically. So when we replicate capabilities of the human mind, I think a similar case can be made, because between mind and muscle, those are really the two contributions that humans make to the performance of economic labor.
So while there may be new jobs, I don’t think you’re going to have a one-to-one replacement. So I think it’s essential to have the conversation about what a society where machines do much of our labor looks like.
Jeff Schechtman: But whether it was moving from the agrarian age to the industrial age, or even the degree to which we are in the digital age now, history doesn’t really give us any examples of where we’ve been willing to have the conversation on an a priori basis, to have the conversation before these changes in fact take hold. And that’s part of the problem, I suppose.
Amir Husain: Well, that’s a very fine observation that you’ve made there. And you’re absolutely right. That unfortunately is what history shows us. Sometimes realities are staring us in the face, but we just can’t develop a consensus. So I think it’s the responsible thing to push for that conversation. Will it happen? Certainly if you look at history as a guide, probably not. However, I will say one thing, which is that we are not at the cusp of Artificial General Intelligence right now. And there are two elements to these concerns. One, of course, is that AI will take away our jobs. And the second is the sort of Terminator-like concern that AI will kill us. Now in that latter area of concern in particular, I think that AI researchers can, on their own, even in the absence of a large societal discourse, contribute quite a bit.
And that is through the pursuit of technologies labeled safe AI and explainable AI: techniques and algorithms that make the deployment of smart machines safer. At least in that way, wherever we find opportunities, as policymakers, as researchers, as scientists, as engineers, to make a difference and contribute something that, even in the absence of that larger societal discourse, can pave the way for the successful implementation of a development I think is inevitable, we should.
And so I’m doing that. My team’s work and my own research interests have focused in part on safe AI. And we are huge believers that that’s an area that needs to be invested in more. I wish that policymakers, folks who work in economic planning for example, would take on the cause in a similar way. Make a difference where you can. You can’t force the societal discourse, but you can make some noise. And we are trying to make noise in both of these areas, on the economic side as well as on the AI safety side.
Jeff Schechtman: On the AI safety side, is there a danger of creating a kind of false job, if you will, for humans in this process? It’s a little like these meal kits that are very popular right now, things like Blue Apron, where they give you just enough to do to make you feel like you’re participating in the process. Is there a danger that we’re going to do the same thing with AI, in a way that makes people feel good but isn’t really constructive and significant in the long run?
Amir Husain: That again is an excellent question. A line from the film Flight of the Phoenix comes to mind, where there’s hopelessness: a crashed aircraft in the middle of the desert, everybody trying to band together to figure things out, nothing working, everybody frustrated. And the leader of the group faces some pushback from one of the other passengers. He looks at him and says, “Listen. Give people hope. And if you can’t give them hope, give them something to do.”
So in that spirit, I think you’re right. There is a concern that it could be buying time: we are capable of replacing certain activities and certain jobs and certain tasks with artificial intelligence, but at the same time, we have not had that conversation around the renewed social contract. Our governments have not put our economy in a position where technology can support the populace, and can free the populace up to do other things, many of which I talk about in my book. The pursuit of knowledge is a great one. It would just have to be done in a way, and under a social contract, that recognizes that the notion of what we used to call gainful employment has shifted. But in the absence of all of that, what you’re suggesting might be one of the avenues of last resort that some governments take: some oversight just for the sake of oversight, some involvement, some human in the loop just for the sake of having a human in the loop … that might happen.
But again, when I talk about some of these things I don’t always have a happy audience, because a lot of people want to hear that it’ll all be okay, that things will continue the way they are. And the reality is that the development of technology, the march of technology, just cannot be reversed. And the direction in which we’re heading, which is the replication of the human cognitive function and going beyond it, will lead to changes in society that are fundamental. So these sorts of stopgaps may happen, like the ones you suggest. But in the long term, a stopgap is all they are. And in many tasks you realize that having that human in the loop was okay for that initial period, but now it’s actually leading to a compromise of that job function.
There’s a lot being written now about how, when autonomous cars are numerous and form the majority of vehicles out on the roads, it might actually be unsafe to allow humans to drive: all cars being autonomous is a safer situation than most cars being autonomous and some cars being human-driven. There’s a similar debate going on in the military sphere, where autonomous weapons are under development now. The UN has been talking about defining what an autonomous weapon is for the last four years. They just got together three weeks ago at the Convention on Certain Conventional Weapons, a UN group with over 100 member countries. For the third or fourth year running, I think, they couldn’t agree on a definition of what an autonomous weapon is, while at the same time all the major military powers are developing autonomous weapons capability.
So technology has a habit of leaving policy behind, if policymakers aren’t motivated to keep up with the march and the progress that science and technology bring.
Jeff Schechtman: Is the inflection point really about decision making? It seems to stand out in even bolder relief when you’re talking about the military and weapons systems: the degree to which the decision making is further and further removed from the human mind seems to be the sweet spot in the discussion.
Amir Husain: Certainly. The cognitive function is, at the end of the day, a decision-making function. I’ve done a lot of work with General John Allen, a retired four-star Marine Corps general. He ran all three of our wars: he was deputy commander of CENTCOM, commander of US forces in Afghanistan, and oversaw Iraq and Syria operations. And he and I have written extensively. In fact, we published a piece in the US Naval Institute’s Proceedings magazine titled On Hyperwar. The way we’ve defined this concept of hyperwar is the application of artificial intelligence to the battlefield. And while there are many, many implications, the core premise rests on a concept in military terms called the OODA loop: observe, orient, decide, and act.
That’s a complex way of saying decision and action. Right? You look at something. You make a decision. And then you act. How quickly you can do that in a military context can make the difference between a win and a loss. Even if you have a smaller force, even if you have a smaller amount of firepower, the ability to operate inside the enemy’s decision-action loop, where you’re able to observe, orient, decide, and act before the enemy can do the same thing, means that a smaller force can overpower a larger force.
It’s a huge advantage. And artificial intelligence, in military terms, has the potential for shrinking that decision-action loop down to nothingness. And that’s one example where, again, we will have to really deeply study what this means, how rapidly these capabilities will be implemented, and what the role of human decision makers will be when, at some point, it’s discovered that adding a human to the loop is actually making you less competitive than an enemy that is choosing to remove the human from the loop.
Because the highest latency, the highest amount of delay, is going to be added by having a human decision-making cycle in that loop. And there, it’s sort of a fait accompli: if somebody opposing you is removing that human from the loop, what do you do? Do you keep a human in the loop at your end and have a slower decision-action cycle? Or do you match your opponent? These are very difficult questions, but they will all have to be grappled with and addressed over time. The revolution, as you very rightly pointed out, stems from the capability of artificial intelligence platforms, software, and applications to exhibit the cognitive function, that decision-making function, and to do so at massive scale and with immense speed. That is a revolution unlike any other we’ve experienced in the history of humanity.
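The latency argument reduces to back-of-the-envelope arithmetic. The sketch below, using invented latency numbers rather than figures from the interview, simply counts how many complete observe-orient-decide-act cycles each side fits into the same engagement window.

```python
# A toy illustration of the OODA-loop argument: over a fixed engagement
# window, the side with the shorter observe-orient-decide-act cycle
# completes more decision-action cycles. All numbers are hypothetical.

HUMAN_IN_LOOP_SECONDS = 2.5   # assumed latency with a human decision maker
AUTONOMOUS_SECONDS = 0.05     # assumed latency of a fully autonomous loop
ENGAGEMENT_WINDOW = 60.0      # one minute of engagement

def complete_cycles(loop_seconds: float, window: float) -> int:
    """Number of full OODA cycles that fit in the window."""
    return int(window // loop_seconds)

human = complete_cycles(HUMAN_IN_LOOP_SECONDS, ENGAGEMENT_WINDOW)
machine = complete_cycles(AUTONOMOUS_SECONDS, ENGAGEMENT_WINDOW)
print(f"human in the loop: {human} cycles; autonomous: {machine} cycles")
# With these assumed numbers the autonomous side acts 50 times for each
# human decision -- operating inside the opponent's loop, as Husain says.
```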
Jeff Schechtman: Of course, this goes to the extent that artificial intelligence, by shrinking this loop, becomes a kind of force multiplier. The other side of that is that we get into a kind of artificial intelligence arms race.
Amir Husain: So again, we are already in the midst of an artificial intelligence arms race. Vladimir Putin, speaking to students in Russia just a couple of months ago, said that whoever controls artificial intelligence will control the world. I don’t disagree with him. China has announced and published a national AI plan. They have stated publicly that their goal is to dominate the field of artificial intelligence and be the number one destination for AI by the year 2030. They have allocated 150 billion dollars in government spending over the next five years. And this is in contrast with US government spending on AI, which was 1.1 billion dollars in 2015 and 1.2 billion in 2016. Think about that: roughly 5 or so billion over five years at that run rate in the US, versus 150 billion in government spending by China over the next five years.
So at the very time when technology is poised to change the shape of our future, to rewrite what will become the history of the century … and I refer to the century we’re living in today as the AI century, because it will be the monumentally powerful technological force that shapes not just economies and militaries but our collective future … at that very time, when we require leadership, when we require a national plan, when we require a JFK speech talking about that great moonshot, about doing these things because they are hard and not easy, we are faced with some of the most challenging, most determined competition that we have ever known in the history of this country.
And you’re hearing leaders of these near-peer states talk about AI in terms of world domination. Yet, sadly, even though America is the birthplace of artificial intelligence, at least at a formal level, we don’t yet have a national plan for artificial intelligence. We don’t have a collective strategy.
Jeff Schechtman: How do we overcome … as trite as it sounds, I think it is a big part of whatever debate is going to happen in any form on this. How do we overcome that cultural inertia? The sort of science fiction images, the fear they create in people? Certainly from a science fiction point of view, it never turns out well for humans. That’s sort of the general premise.
Amir Husain: Science fiction has helped a lot in cases like Star Trek, painting the picture of a future world where much of what we’re grappling with now has been addressed. One of the things to think about with regard to Star Trek, for example, is Gene Roddenberry’s amazing vision. I reference this in my book also. It’s basically a post-income society. Nobody really talks about their paycheck or monetary concerns. Technology has provided for the needs of the populace. And it’s not WALL-E, where people are just letting themselves go and not doing anything productive. They’re choosing to discover and explore and do what humanity has always excelled at, which is to uncover knowledge and chart new frontiers.
So in that sense it’s helped. But it’s also hurt, because there are so many dystopian visions. For every good outcome, as you pointed out, there are probably 10 dystopian sci-fi outcomes. You know what? That makes for a good story. But the reality is that the only way to engage successfully with the future is to play a role in shaping it. That is something that needs to be communicated and understood. Technology is inevitable. If Albert Einstein had never been born, special relativity and general relativity would still have been uncovered, perhaps a decade or two later than we gained them from the brilliant mind of Einstein, but they would have come about all the same.
There are many examples of this. Take the Nobel Prize won by Abdus Salam and Steven Weinberg: the two researchers never collaborated. They were doing independent work in different countries, on opposite ends of the world. And yet they both came to the same conclusion and shared the Nobel Prize, along with Sheldon Glashow. The point here is that technological and scientific progress is inevitable. So the only way to contend with the future is to take an active part in shaping it. And here, in the case of artificial intelligence, in very tangible and concrete terms, that means involving yourself in the debate. If you’re scientifically inclined, if you’re a computer scientist, if you’re an AI researcher, get involved with safe AI. Understand explainable AI. If you’re a policymaker, look at the implications of an unchanged model that could lead to the automation of a large number of jobs and to high levels of unemployment. Redefine what it means to be gainfully employed.
Leverage technology to take care of the needs of the populace. And rethink many of the economic constructs we’ve put into place. There’s traditional thinking around supply and demand, around constraining supply because you want to keep prices up, and so on and so forth; without getting into the details, a lot of that has happened over the last several decades. Think about that. Think about what the future looks like and what that new social contract with our citizens needs to be. Getting involved is the only answer. Whether there’s a good outcome or a bad outcome, like I said, I think involvement can make the difference.
Jeff Schechtman: And how do you respond to people who are just negative about the whole thing and who sow this fear, people like Elon Musk at the moment?
Amir Husain: I don’t agree with the most publicized statements Elon Musk has made on this topic. He has said some things that make for great soundbites, like “AI is like summoning the demon,” and so on and so forth. But at the same time, Elon Musk is a brilliant man, and I don’t think the totality of his views is reflected by any one of these statements. But to the extent that they are … of course, I don’t know what’s going on in the man’s mind. I do want to give him the benefit of the doubt.
But that being said, I think saying that AI is summoning the demon, and things of this nature, is unhelpful. So what? What are you going to do? Ban AI? It’s not going to work. There’s a construct in game theory referred to as the prisoner’s dilemma, and basically that whole area of science is concerned with how we make decisions in competitive situations. So what do you do? You ban AI, or the development of certain kinds of AI? How are you going to enforce that? Will China comply with that ban? Will Russia? And if one of the two of them tells you they are complying, will you believe them? You probably won’t. And neither will they believe you. So at the end of the day, what’s going to happen is technological development behind closed doors, which is essentially what happens in situations like these, because absent that technological edge, the outcome for any one of the adherents to the ban can be a very negative one.
None of them will be willing to take that risk. So none will trust the others, and all will continue these developments. A ban, in my view, is unenforceable; even a ban on autonomous weapons is unenforceable. And that’s what’s actually happening: when a hundred and some countries get together at a UN convention to talk about autonomous weapons and their dangers, potentially even about a ban, they can’t agree on the definition of what an autonomous weapon is, partly because it’s a difficult problem and partly because many of these countries have made advancements that make it not in their interest to impose a ban at this stage.
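Husain’s ban argument is the classic prisoner’s dilemma, and it can be written down in a few lines. The payoff numbers below are invented purely for illustration; only their ordering matters.

```python
# The AI-ban dilemma as a two-player game. Each side either COMPLIES with
# a ban or DEVELOPS in secret. Payoffs are illustrative; higher is better.
COMPLY, DEVELOP = "comply", "develop"

PAYOFFS = {  # (payoff to A, payoff to B)
    (COMPLY, COMPLY):   (3, 3),   # mutual restraint
    (COMPLY, DEVELOP):  (0, 5),   # A honors the ban, B gains the edge
    (DEVELOP, COMPLY):  (5, 0),   # the mirror image
    (DEVELOP, DEVELOP): (1, 1),   # covert arms race
}

def best_reply(opponent_choice: str) -> str:
    """A's highest-payoff response to a given choice by B."""
    return max((COMPLY, DEVELOP),
               key=lambda mine: PAYOFFS[(mine, opponent_choice)][0])

# Whatever the other side does, developing beats complying...
print(best_reply(COMPLY), best_reply(DEVELOP))  # -> develop develop
# ...so without verification both sides defect, and the ban unravels.
```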
So my point is, even if you’re scared, even if you think Elon Musk has a point, even if you think all those things: then what? You’re not going to be able to put a stop to this. The only way is to get involved and to shape a better outcome. We don’t control all seven-and-a-half billion people on this planet. And AI developments are unlike, for example, building a nuclear reactor. When somebody’s building a nuclear reactor, you can see it. You can see from space that there’s a lot of construction activity. There are huge buildings. There’s a lot of earth being moved. You get a lot of advance notice.
In the case of artificial intelligence, you’re talking about programmers sitting at computer terminals, with laptops or desktops. You won’t know what somebody’s working on in the realm of AI software, and so you won’t really be able to curb it. The only way to deal with this is to get proactively involved.
Jeff Schechtman: Amir Husain. His book is The Sentient Machine: The Coming Age of Artificial Intelligence.
Amir, I thank you so much for spending time with us today here on Radio WhoWhatWhy.

Amir Husain: Jeff, thank you so much. This was a real pleasure.
Jeff Schechtman: Thank you.
And thank you for listening and for joining us here on Radio WhoWhatWhy. I hope you join us next week for another Radio WhoWhatWhy podcast. I’m Jeff Schechtman.
If you liked this podcast, please feel free to share and help others find it by rating and reviewing it on iTunes. You can also support this podcast and all the work we do by going to WhoWhatWhy.org/donate.

Related front page panorama photo credit: Adapted by WhoWhatWhy from O’Reilly Conferences / Flickr (CC BY-NC 2.0)

Author

  • Jeff Schechtman

    Jeff Schechtman's career spans movies, radio stations, and podcasts. After spending twenty-five years in the motion picture industry as a producer and executive, he immersed himself in journalism, radio, and, more recently, the world of podcasts. To date, he has conducted over ten thousand interviews with authors, journalists, and thought leaders. Since March 2015, he has produced almost 500 podcasts for WhoWhatWhy.

