Machine Learning & Artificial Intelligence: Crash Course Computer Science #34


Hi, I'm Carrie Anne, and welcome to Crash Course Computer Science! As we've touched on many times in this series, computers are incredible at storing, organizing, fetching, and processing huge volumes of data. That's perfect for things like e-commerce websites with millions of items for sale, and for storing billions of health records for quick access by doctors. But what if we want to use computers not just to fetch and display data, but to actually make decisions about data? This is the essence of machine learning: algorithms that give computers the ability to learn from data, and then make predictions and decisions. Computer programs with this ability are extremely useful in answering questions like: Is an email spam? Does a person's heart have an arrhythmia? What video should YouTube recommend after this one? While useful, we probably wouldn't describe these programs as "intelligent" in the same way we think of human intelligence. So, even though the terms are often interchanged, most computer scientists would say that machine learning is a set of techniques that sits inside the even more ambitious goal of Artificial Intelligence, or AI for short.

[INTRO]

Machine learning and AI algorithms tend to be pretty sophisticated, so rather than wading into the mechanics of how they work, we're going to focus on what the algorithms do conceptually. Let's start with a simple example: deciding whether a moth is a Luna Moth or an Emperor Moth.
This decision process is called classification, and an algorithm that does it is called a classifier. Although there are techniques that can use raw data for training – like photos and sounds – many algorithms reduce the complexity of real-world objects and phenomena into what are called features. Features are values that usefully characterize the things we wish to classify. For our moth example, we're going to use two features: "wingspan" and "mass".

In order to train our machine learning classifier to make good predictions, we're going to need training data. To get that, we'd send an entomologist out into a forest to collect data for both Luna and Emperor Moths. These experts can recognize different moths, so they not only record the feature values, but also label that data with the actual moth species. This is called labeled data.
Because we only have two features, it's easy to visualize this data in a scatterplot. Here, I've plotted data for 100 Emperor Moths in red and 100 Luna Moths in blue. We can see that the species form two groupings, but there's some overlap in the middle, so it's not entirely obvious how to best separate the two. That's what machine learning algorithms do – find optimal separations! I'm just going to eyeball it and say anything less than 45 millimeters in wingspan is likely to be an Emperor Moth. We can add another division that says, additionally, mass must be less than 0.75 for our guess to be Emperor Moth.
These lines that chop up the decision space are called decision boundaries. If we look closely at our data, we can see that 86 Emperor Moths would correctly end up inside the Emperor decision region, but 14 would end up incorrectly in Luna Moth territory. On the other hand, 82 Luna Moths would be correct, with 18 falling onto the wrong side. A table like this, showing where a classifier gets things right and wrong, is called a confusion matrix... which probably should have also been the title of the last two movies in the Matrix trilogy!

Notice that there's no way for us to draw lines that give us 100% accuracy. If we lower our wingspan decision boundary, we misclassify more Emperor Moths as Lunas. If we raise it, we misclassify more Luna Moths. The job of machine learning algorithms, at a high level, is to maximize correct classifications while minimizing errors. On our training data, we get 168 moths correct, and 32 moths wrong, for an average classification accuracy of 84%.
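To make that arithmetic concrete, here's a minimal sketch (not from the video) that tallies the confusion matrix and the accuracy from the counts above:

    # A minimal sketch: the species names and counts come from the transcript;
    # the dictionary layout is just one possible way to hold a confusion matrix.
    confusion_matrix = {
        # (actual species, predicted species): count
        ("Emperor", "Emperor"): 86,
        ("Emperor", "Luna"):    14,
        ("Luna",    "Luna"):    82,
        ("Luna",    "Emperor"): 18,
    }

    correct = sum(n for (actual, predicted), n in confusion_matrix.items() if actual == predicted)
    total = sum(confusion_matrix.values())
    accuracy = correct / total

    print(f"{correct} correct, {total - correct} wrong, accuracy = {accuracy:.0%}")
    # -> 168 correct, 32 wrong, accuracy = 84%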
Now, using these decision boundaries, if we go out into the forest and encounter an unknown moth, we can measure its features and plot it onto our decision space. This is unlabeled data. Our decision boundaries offer a guess as to what species the moth is. In this case, we'd predict it's a Luna Moth. This simple approach, of dividing the decision space up into boxes, can be represented by what's called a decision tree, which would look like this pictorially, or could be written in code using if-statements, like this.
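The exact code shown on screen isn't reproduced here, but a sketch of that decision tree as if-statements, using the thresholds eyeballed earlier (wingspan under 45 millimeters and mass under 0.75), might look like this:

    def classify_moth(wingspan_mm, mass):
        # Decision boundaries from earlier: small wingspan AND low mass -> Emperor Moth,
        # anything else -> Luna Moth.
        if wingspan_mm < 45 and mass < 0.75:
            return "Emperor Moth"
        else:
            return "Luna Moth"

    print(classify_moth(wingspan_mm=40, mass=0.6))   # Emperor Moth
    print(classify_moth(wingspan_mm=55, mass=1.1))   # Luna Moth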
A machine learning algorithm that produces decision trees needs to choose which features to divide on, and then, for each of those features, what values to use for the division.
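In practice, you rarely pick those splits by hand. As an illustration that isn't from the video, a library like scikit-learn can learn the split features and thresholds automatically from labeled training data; the feature values below are hypothetical:

    # A minimal sketch, assuming scikit-learn is installed and that we have
    # labeled (wingspan, mass) training data like the entomologist collected.
    from sklearn.tree import DecisionTreeClassifier

    X_train = [[38, 0.60], [42, 0.70], [52, 1.10], [58, 0.90]]   # hypothetical feature values
    y_train = ["Emperor", "Emperor", "Luna", "Luna"]             # labels from our entomologist

    tree = DecisionTreeClassifier(max_depth=2)
    tree.fit(X_train, y_train)            # the algorithm picks the split features and thresholds
    print(tree.predict([[44, 0.65]]))     # e.g. ['Emperor']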
Decision trees are just one basic example of a machine learning technique. There are hundreds of algorithms in the computer science literature today, and more are being published all the time. A few algorithms even use many decision trees working together to make a prediction. Computer scientists smugly call those forests... because they contain lots of trees. There are also non-tree-based approaches, like Support Vector Machines, which essentially slice up the decision space using arbitrary lines. And these don't have to be straight lines; they can be polynomials or some other fancy mathematical function. Like before, it's the machine learning algorithm's job to figure out the best lines to provide the most accurate decision boundaries.
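As a hedged sketch (again, not from the video), scikit-learn's support vector machine classifier with a polynomial kernel learns exactly that kind of curved boundary; it reuses the same hypothetical training data as the decision-tree sketch above:

    # A minimal sketch, assuming scikit-learn and hypothetical (wingspan, mass) data.
    from sklearn.svm import SVC

    X_train = [[38, 0.60], [42, 0.70], [52, 1.10], [58, 0.90]]
    y_train = ["Emperor", "Emperor", "Luna", "Luna"]

    svm = SVC(kernel="poly", degree=3)   # polynomial kernel -> the boundary can curve
    svm.fit(X_train, y_train)
    print(svm.predict([[44, 0.65]]))     # the SVM's guess for an unknown moth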
So far, my examples have only had two features, which is easy enough for a human to figure out. If we add a third feature, let's say length of antennae, then our 2D lines become 3D planes, creating decision boundaries in three dimensions. These planes don't have to be straight either. Plus, a truly useful classifier would contend with many different moth species. Now I think you'd agree this is getting too complicated to figure out by hand... but even this is a very basic example – just three features and five moth species – and we can still show it in this 3D scatterplot. Unfortunately, there's no good way to visualize four features at once, or twenty features, let alone hundreds or even thousands of features. But that's what many real-world machine learning problems face. Can YOU imagine trying to figure out the equation for a hyperplane rippling through a thousand-dimensional decision space? Probably not, but computers, with clever machine learning algorithms, can... and they do, all day long, on computers at places like Google, Facebook, Microsoft, and Amazon.
Techniques like decision trees and Support Vector Machines are strongly rooted in the field of statistics, which dealt with making confident decisions using data long before computers ever existed. There's a very large class of widely used statistical machine learning techniques, but there are also some approaches with no origins in statistics. Most notable are artificial neural networks, which were inspired by neurons in our brains! For a primer on biological neurons, check out our three-part overview here, but basically, neurons are cells that process and transmit messages using electrical and chemical signals. They take one or more inputs from other cells, process those signals, and then emit their own signal. These form huge interconnected networks that are able to process complex information – just like your brain watching this YouTube video.
Artificial neurons are very similar: each takes a series of inputs, combines them, and emits a signal. Rather than electrical or chemical signals, artificial neurons take numbers in and spit numbers out. They are organized into layers that are connected by links, forming a network of neurons – hence the name.

Let's return to our moth example to see how neural nets can be used for classification. Our first layer – the input layer – provides data from a single moth needing classification. Again, we'll use mass and wingspan. At the other end, we have an output layer with two neurons: one for Emperor Moth and another for Luna Moth. The most excited neuron will be our classification decision. In between, we have a hidden layer that transforms our inputs into outputs and does the hard work of classification.
To see how this is done, let's zoom into one neuron in the hidden layer. The first thing a neuron does is multiply each of its inputs by a specific weight – let's say 2.8 for its first input, and 0.1 for its second input. Then it sums these weighted inputs together, which, in this case, gives a grand total of 9.74. The neuron then applies a bias to this result – in other words, it adds or subtracts a fixed value, for example minus six, for a new value of 3.74. The bias and the input weights are initially set to random values when a neural network is created. Then, an algorithm goes in and starts tweaking all those values to train the neural network, using labeled data for training and testing. This happens over many iterations, gradually improving accuracy – a process very much like human learning.

Finally, neurons have an activation function, also called a transfer function, that gets applied to the output, performing a final mathematical modification to the result – for example, limiting the value to a range from negative one to positive one, or setting any negative values to zero. We'll use a linear transfer function that passes the value through unchanged, so 3.74 stays as 3.74. So for our example neuron, given the inputs 0.55 and 82, the output would be 3.74.
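Here's a minimal sketch of that single artificial neuron in code. The weights, bias, and inputs are the ones from the walkthrough above; the function names are just illustrative:

    def linear(x):
        # Linear (identity) transfer function: passes the value through unchanged.
        return x

    def neuron(inputs, weights, bias, activation=linear):
        # Weight each input, sum them, add the bias, then apply the activation function.
        weighted_sum = sum(x * w for x, w in zip(inputs, weights))
        return activation(weighted_sum + bias)

    # The worked example: weights 2.8 and 0.1, bias minus six, inputs 0.55 and 82.
    output = neuron(inputs=[0.55, 82], weights=[2.8, 0.1], bias=-6)
    print(round(output, 2))   # 0.55*2.8 + 82*0.1 = 9.74, then 9.74 - 6 = 3.74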
This is just one neuron, but this process of weighting, summing, biasing, and applying an activation function is computed for all neurons in a layer, and the values propagate forward through the network, one layer at a time. In this example, the output neuron with the highest value is our decision: Luna Moth.
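Propagating values forward layer by layer can be sketched the same way. The layer sizes and weights below are made up for illustration, not taken from the video:

    def neuron(inputs, weights, bias):
        # Weighted sum plus bias, with a linear (pass-through) activation as before.
        return sum(x * w for x, w in zip(inputs, weights)) + bias

    def layer(inputs, neurons):
        # A layer is a list of (weights, bias) pairs; compute every neuron's output.
        return [neuron(inputs, weights, bias) for weights, bias in neurons]

    # Hypothetical network: 2 inputs -> 2 hidden neurons -> 2 output neurons.
    hidden_layer = [([2.8, 0.1], -6.0), ([-0.5, 0.9], 1.0)]
    output_layer = [([1.2, -0.3], 0.0), ([-0.4, 0.8], 0.2)]

    hidden_out = layer([0.55, 82], hidden_layer)   # wingspan/mass in, hidden values out
    scores = layer(hidden_out, output_layer)       # one score per species
    labels = ["Emperor Moth", "Luna Moth"]
    print(labels[scores.index(max(scores))])       # the most "excited" output neuron wins -> Luna Moth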
Importantly, the hidden layer doesn't have to be just one layer... it can be many layers deep. This is where the term deep learning comes from. Training these more complicated networks takes a lot more computation and data. Despite the fact that neural networks were invented over fifty years ago, deep neural nets have only become practical very recently, thanks to powerful processors, but even more so, wicked-fast GPUs. So, thank you, gamers, for being so demanding about silky-smooth framerates!

A couple of years ago, Google and Facebook demonstrated deep neural nets that could find faces in photos as well as humans can – and humans are really good at this! It was a huge milestone. Now deep neural nets are driving cars, translating human speech, diagnosing medical conditions, and much more.
These algorithms are very sophisticated, but it's less clear whether they should be described as "intelligent". They can really only do one thing, like classify moths, find faces, or translate languages. This type of AI is called Weak AI, or Narrow AI: it's only intelligent at specific tasks. But that doesn't mean it's not useful; I mean, medical devices that can make diagnoses and cars that can drive themselves are amazing! But do we need those computers to compose music and look up delicious recipes in their free time? Probably not. Although that would be kinda cool.

Truly general-purpose AI, one as smart and well-rounded as a human, is called Strong AI. No one has demonstrated anything close to human-level artificial intelligence yet. Some argue it's impossible, but many people point to the explosion of digitized knowledge – like Wikipedia articles, web pages, and YouTube videos – as the perfect kindling for Strong AI. Although you can only watch a maximum of 24 hours of YouTube a day, a computer can watch millions of hours. For example, IBM's Watson consults and synthesizes information from 200 million pages of content, including the full text of Wikipedia. While not a Strong AI, Watson is pretty smart, and it crushed its human competition in Jeopardy! way back in 2011.
Not only can AIs gobble up huge volumes of information, but they can also learn over time, often much faster than humans. In 2016, Google's DeepMind debuted AlphaGo, a Narrow AI that plays the fiendishly complicated board game Go. One of the ways it got so good, and able to beat the very best human players, was by playing clones of itself millions and millions of times. It learned what worked and what didn't, and along the way, discovered successful strategies all by itself. This is called reinforcement learning, and it's a super powerful approach.
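As a rough illustration of that "learn what works by trial and error" idea – and definitely not how AlphaGo itself works – here's a tiny sketch of reinforcement learning on a made-up two-move game, where the program gradually comes to prefer the move that wins more often:

    import random

    # Hypothetical toy game: move "A" wins 70% of the time, move "B" wins 30%.
    # The learner doesn't know this; it estimates each move's value by trial and error.
    win_rate = {"A": 0.7, "B": 0.3}
    value = {"A": 0.0, "B": 0.0}
    learning_rate, exploration = 0.1, 0.1

    for _ in range(10_000):
        # Mostly pick the move that currently looks best, but sometimes explore.
        if random.random() < exploration:
            move = random.choice(["A", "B"])
        else:
            move = max(value, key=value.get)
        reward = 1.0 if random.random() < win_rate[move] else 0.0
        # Nudge the estimated value toward the observed outcome.
        value[move] += learning_rate * (reward - value[move])

    print(value)   # after many games, move "A" is valued higher, so it gets chosen more often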
In fact, it's very similar to how humans learn. People don't just magically acquire the ability to walk... it takes thousands of hours of trial and error to figure it out. Computers are now on the cusp of learning by trial and error, and for many narrow problems, reinforcement learning is already widely used. What will be interesting to see is whether these types of learning techniques can be applied more broadly, to create human-like Strong AIs that learn much like kids do, but at super-accelerated rates. If that happens, there are some pretty big changes in store for humanity – a topic we'll revisit later.

Thanks for watching. See you next week.
