Meet Geoffrey Hinton, U of T’s Godfather of Deep Learning

That’s right, I’ve never taken a computer science course. So here’s a very good trick that everybody needs to know: if you know nothing about a topic, get yourself made a professor of it, and nobody will ever ask you again whether you actually know anything about it.

Actually, I got involved in the field when I was in high school. I got interested in how the brain might work, and I had a very smart friend who learned about holograms when they first came out. With a hologram, you can chop a corner off and you don’t lose a corner of the image; the whole image just gets slightly blurrier. The representation of any one part of the image is spread over the whole hologram. I got interested in the idea that the brain might similarly have representations distributed over lots of neurons.

Then I went to university and studied physiology, then switched to philosophy, then switched to psychology, then went into AI, because I decided at that point that you’re never going to understand psychology unless you understand the brain.

In 2009, two graduate students at U of T applied
some work I’d been doing on a new learning algorithm to the problem of recognizing speech. It was clear then, to people who knew a lot about speech recognition, that if two graduate students could produce something better than the existing technology over a summer, then with a significant amount of development work it would become much better than the existing technology. And that’s exactly what happened. The other systems then started using neural nets too. That was a big success for a new kind of neural net that was developed in Toronto.

In the old days, there was what’s now called
good old-fashioned artificial intelligence, which most people doing AI still believe in. The basic idea was that human reasoning is the core of intelligence, and that to understand human reasoning we’d better somehow get something like logic into the computer. Then the computer would reason away, and maybe we could make it reason like people. It’s quite tricky to reason like people, mainly because people don’t do most of their thinking by reasoning.

The alternative view was that we should look at biology and try to make systems that work roughly like the brain. The brain doesn’t do most of its thinking by reasoning; it uses things like analogies. It’s a great big neural network with huge amounts of knowledge in the connections. It’s got so much knowledge in it that you couldn’t possibly program it all in by hand. So the key question became: how could you take a great big neural network and get it to learn stuff from data?

It’s very important to fund basic, curiosity-driven
science. That’s where the really big progress comes from; the huge breakthroughs don’t come from money earmarked for applications. Obviously, though, if basic curiosity-driven science has led to something that really works, then you should exploit it. So the Vector Institute is mainly about exploiting this deep learning technology that is working really well at present, and that can presumably be improved a lot by more basic research.

Scientists do their best work when they’re working on something that really interests them. You’ve got to give scientists the freedom to work on what they believe in. That’s when you’re going to get the really good stuff.