In his keynote address, Hod Lipson, the James and Sally Scapa Professor of Innovation at Columbia University, spoke about why the AI revolution has accelerated so powerfully and considered its ethical implications.

Lipson offered a four-part answer to the first question. Moore’s law – the observation that computing power doubles, and its cost halves, roughly every two years – has held, he said, and in recent years, computers have gotten exponentially faster, cheaper and better.

“Now we look back on those people in 2010 and think their computers were almost indistinguishable from those in 1950. And it will be the same in 2029 when we look back at today.”

Currently, according to Lipson, most AI is “rule-based” – a major drawback, he said, because “it requires experts to tell you the rules, and as we all know, experts are expensive, slow and wrong.”

But AI is now undergoing a massive change, with machine learning technologies becoming mainstream.


“Basically, with machine learning, you don’t tell the computer, you show it,” he said. “You give it examples, and it calculates the probabilities of these things happening again or not.” Example: tic tac toe. Rather than giving a computer rules for winning, you can show it hundreds of actual games played by humans, and “it pulls out the odds, and bam, it wins every time. And that works not just for tic tac toe, but also driving a car or how people move around a supermarket.”
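Lipson’s tic-tac-toe framing can be sketched in a few lines of Python. This is a hypothetical illustration, not anything shown at the keynote: the program is never told the rules for winning; it only counts how often each move led to a win in example games, then picks the move with the best observed odds.

```python
from collections import defaultdict

# Hypothetical training data: (board, move, won) triples recorded from
# example games. Boards are 9-character strings read left to right, top
# to bottom; moves are square indices 0-8. We show the computer games
# rather than telling it rules.
games = [
    ("X..OX..O.", 8, True),   # square 8 completed the 0-4-8 diagonal: a win
    ("X..OX..O.", 1, False),  # a different move from the same board lost
    ("X..OX..O.", 8, True),
]

def learn(examples):
    """Count plays and wins for every (board, move) pair in the examples."""
    wins, plays = defaultdict(int), defaultdict(int)
    for board, move, won in examples:
        plays[(board, move)] += 1
        if won:
            wins[(board, move)] += 1
    return wins, plays

def best_move(board, wins, plays):
    """Pick the empty square with the highest observed win probability."""
    legal = [i for i, cell in enumerate(board) if cell == "."]
    def win_rate(m):
        return wins[(board, m)] / plays[(board, m)] if plays[(board, m)] else 0.0
    return max(legal, key=win_rate)

wins, plays = learn(games)
print(best_move("X..OX..O.", wins, plays))  # -> 8
```

With enough recorded games, the counts converge on the probabilities Lipson describes; the same show-don’t-tell idea, scaled up, underlies learning to drive a car or to predict shopper movement.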


Machine learning has been around since the 1950s, but until recently there were some things it couldn’t do – such as distinguishing between a dog and a cat. Humans get that right 19 out of 20 times, but until a few years ago, the best AI software was accurate only 75 percent of the time. Now, thanks to a technique known as deep learning, AI can outperform humans.


“People are building self-driving cars now because we finally have AI that can tell the difference between a fire hydrant and a toddler,” Lipson said – and that same technology can identify cancerous skin cells better than trained medical staff in top research centers and hospitals. “You can say, those people will lose their jobs, but think of the millions of people with no access to doctors. Their lives will be saved.”

Finally, AI has taken a quantum leap thanks to the creation of cloud computing (the on-demand availability of computer systems’ data storage and computing power, without direct active management by the user). The cloud has enabled intelligent systems to learn from one another. Lipson’s example:

“We humans have one lifetime of driving experience, but driverless cars can learn from the experiences of all the other driverless cars on the road.”

How powerful might AI ultimately become, and what is its promise in education? Machines still cannot hold a real conversation, Lipson said – a capability that is critical in education. For that reason, he argued, teachers are in no danger of being replaced.

AI also is not yet very far along in being “generative.” It can improve on existing designs, but humans are still better at “working with their hands in an unstructured environment.” Hence, he said, plumbers in New York City still make good money.

And then there is the final frontier: actual sentience, including emotions. 

“People say, yeah, but AI can’t have emotions and free will,” he said. “I think that will happen when AI turns inward to think about itself.”

Hang on, everyone. It is indeed a brave new world.

— Joe Levine

Speakers’ quotations may have been edited for clarity.
