English | MP4 | AVC 1280×720 | AAC 44KHz 2ch | 12h 07m | 10.0 GB
We live on a planet with billions of people—but also billions of computers, many of them programmed to evaluate and make decisions much as humans do. We don’t yet reside among truly intelligent machines, but they are getting there, and knowing how machines learn is crucial for everyone from professionals to students to ordinary citizens. Machine learning pervades our culture in a multitude of ways, through tools and practices from medical diagnosis and data management to speech synthesis and search engines.
An offshoot of artificial intelligence, machine learning takes programming a giant step beyond the traditional role of computers in routine data processing, such as scheduling, keeping accounts, and making calculations. Now computers are being programmed to figure out how to solve problems by themselves—problems that are so complex that humans often don’t know where to begin. Indeed, machine learning has become so advanced that, often, even the experts don’t know how a computer arrives at the solution it does.
Introduction to Machine Learning demystifies this revolutionary discipline in 26 try-it-yourself lessons taught by award-winning educator and researcher Michael L. Littman, the Royce Family Professor of Teaching Excellence in Computer Science at Brown University. Dr. Littman guides you through the history, concepts, and techniques of machine learning, using the popular computer language Python to give you hands-on experience with the most widely used programs and specialized libraries.
For those new to Python, this course includes a lecture that is a dedicated tutorial on how to get started with this versatile, easy-to-use language. Professor Littman includes approximately one Python demonstration in each lesson. Even if you have never written code in Python, or any language, you can still run these programs for yourself to get a feeling for the amazing power of machine learning.
Get Started with Machine Learning
Backed by Bach-inspired music composed by a machine learning program, Professor Littman opens the course with playful displays of the technology: automatic voice transcription, word prediction, face aging, foreign language translation, voice simulation, and more. Then he launches into a real-world example: how to use machine learning to listen to heartbeats and diagnose heart disease. Traditional computer programs only do what you tell them to, and medical software would typically match a set of symptoms to already well-established diagnoses. But the advantage of machine learning is that the computer is set loose to find patterns that may have escaped human observation.
How does it do it? Professor Littman walks you through the process, which starts with choosing a “representational space”—a formal description that defines how to approach the problem. The representational space is the domain of all possible rules, or algorithms, which the machine-learning program should consider. It’s called a space because it encompasses an array of possibilities that can be made more or less expansive depending on the data and time available. The next step is defining the “loss function,” which determines how the possible rules in the representational space are assessed; better rules get better scores. Finally, a program called the “optimizer” rummages through the representational space to find the rules that score well. One or more of these rules become the preferred solution to the problem.
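The three ingredients just described can be sketched in a few lines of Python. This is a minimal illustration, not code from the course: the data and the threshold-rule family are made up. The representational space is a set of candidate threshold rules, the loss function counts misclassifications, and the optimizer is an exhaustive search.

```python
# Toy data for illustration only: measurements with 0/1 labels.
examples = [(55, 0), (60, 0), (65, 0), (70, 0), (80, 1), (90, 1), (95, 1)]

# Representational space: rules of the form "predict 1 if x >= t".
candidate_thresholds = range(50, 101)

def loss(t):
    """Loss function: number of examples the rule misclassifies."""
    return sum((x >= t) != bool(y) for x, y in examples)

# Optimizer: search the space for the rule with the lowest loss.
best_t = min(candidate_thresholds, key=loss)
print(best_t, loss(best_t))  # -> 71 0
```

Real optimizers rarely enumerate the whole space; gradient descent and other search methods do the "rummaging" far more efficiently, but the division of labor is the same.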
Dig into the Details
In Introduction to Machine Learning, you investigate three major types of representational spaces, focusing on the types of problems they excel at solving.
Decision Trees: Anyone who has dealt with a phone menu has faced a decision tree. “For sales, press 1. For accounts, press 2.” Each choice is followed by additional choices, until you get the person or department you want. Decision trees are a natural fit for machine-learning problems that require “if-then” reasoning, such as many medical diagnoses.
Bayesian Networks: In contrast to decision trees, which rely on a sequence of deductions, Bayesian networks involve inferences from probability. They are well-suited to cases where you need to work backwards from the data to their likely causes. A prominent example is software that identifies probable spam messages.
Neural Networks: Designed to work like neurons in the brain, neural networks excel at perceptual tasks, such as image recognition, language processing, and data classification. Deep neural networks are composed of networks of networks and are the heart of the “deep learning” revolution that Professor Littman covers in detail.
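The first of these, the decision tree, really is just nested if-then choices, like the phone menu above. As a minimal sketch (a hand-written tree with hypothetical triage rules, not one learned from data or taken from the course), it might look like this:

```python
def diagnose(patient):
    """Tiny illustrative decision tree; the rules are invented."""
    if patient["chest_pain"]:
        if patient["age"] >= 50:
            return "refer to cardiologist"
        return "order stress test"
    if patient["fever"]:
        return "check for infection"
    return "routine follow-up"

print(diagnose({"chest_pain": True, "age": 62, "fever": False}))
# -> refer to cardiologist
```

A learning algorithm's job is to grow such a tree automatically, choosing at each node the question that best splits the training examples.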
You delve into the mechanics of each of these strategies as well as their pitfalls, especially overfitting, in which a rule fits the training data too well. Overfitting may sound like a good thing, but it is a sign that the rule is tailored too closely to the original data and may fail on new data, where only a more general rule would succeed. Professor Littman explains how to steer clear of this hazard and how to deal with other problems, such as hidden biases, sampling flaws, and false positives.
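The contrast between a rule that memorizes and a rule that generalizes can be shown in miniature. In this made-up illustration, an extreme overfit rule (a lookup table of the exact training inputs) is perfect on the data it has seen but stumbles on new data, while a simpler threshold rule transfers:

```python
# Invented data: (measurement, label) pairs.
train = [(1.0, 0), (2.0, 0), (3.0, 1), (4.0, 1)]
test = [(1.5, 0), (3.5, 1)]

# Overfit rule: memorize the training inputs exactly.
memorized = dict(train)
def overfit_rule(x):
    return memorized.get(x, 0)  # unseen inputs default to 0

# General rule: a single threshold.
def general_rule(x):
    return 1 if x >= 2.5 else 0

def accuracy(rule, data):
    return sum(rule(x) == y for x, y in data) / len(data)

print(accuracy(overfit_rule, train), accuracy(overfit_rule, test))  # 1.0 0.5
print(accuracy(general_rule, train), accuracy(general_rule, test))  # 1.0 1.0
```

Both rules score perfectly on the training data; only held-out test data reveals which one actually learned something general.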
Embark on Your Own Coding Adventures
Another way to classify machine-learning programs is by the degree of human input involved. Does the programmer specify a desired outcome or leave it to the computer—or is the approach something in between? These different strategies are:
Supervised Learning: Here, the desired answer is supplied by the programmer as a training dataset that acts as a teacher to guide the learning process. Recommender systems where the user rates a product work like this. So do a host of other machine-learning programs where examples are labeled with their relevant attribute.
Unsupervised Learning: This approach is like having no teacher at all. There is no right answer, just training data to be compared to test data in a search for similarity. News story recommender systems typically work this way, since people rarely rate the news.
Reinforcement Learning: This hybrid strategy is Dr. Littman’s favorite style of machine learning. Think of it as having access to a critic. You are not being told what to do; you are simply getting feedback on how well you did. The many examples of reinforcement learning in this course include an entire lesson on game-playing programs.
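The "critic" idea behind reinforcement learning can be sketched with a tiny two-armed bandit. This is an illustration only, with made-up payout probabilities: an epsilon-greedy agent is never told which arm is correct; it only receives a reward signal and gradually learns which arm pays off more.

```python
import random

random.seed(0)
payout = [0.3, 0.7]   # hidden reward probability per arm (invented)
values = [0.0, 0.0]   # the agent's running reward estimate per arm
counts = [0, 0]

for step in range(2000):
    if random.random() < 0.1:                 # explore 10% of the time
        arm = random.randrange(2)
    else:                                     # otherwise exploit the best guess
        arm = values.index(max(values))
    reward = 1 if random.random() < payout[arm] else 0  # the "critic"
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # running mean

print([round(v, 2) for v in values])  # estimates drift toward 0.3 and 0.7
```

No one labels the right arm, as a supervised dataset would; the feedback is only a score, which is exactly the critic relationship described above.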
Throughout this extraordinary course, you dig deeply into the uses of machine learning for cutting-edge problems in research, education, business, entertainment, and daily life. You also consider the social implications of machine learning, which are likely to loom ever larger as its influence grows. Dr. Littman stresses that it’s up to each of us to ensure that this technology is applied in ways that benefit us all.
Therefore, it’s up to us to boost our machine-learning literacy. Many people regard the subject as a black box, where inscrutable things happen that lead to today’s technological wonders. Fortunately, Professor Littman has a gift for making opaque processes not only clear, but captivating. Introduction to Machine Learning will open your eyes to this thrilling field and, better yet, pave the way for your own coding adventures in machine learning.
The Great Courses
1 Introduction to Machine Learning
2 Telling the Computer What We Want
3 Starting with Python Notebooks and Colab
4 Decision Trees for Logical Rules
5 Neural Networks for Perceptual Rules
6 Opening the Black Box of a Neural Network
7 Bayesian Models for Probability Prediction
8 Genetic Algorithms for Evolved Rules
9 Nearest Neighbors for Using Similarity
10 The Fundamental Pitfall of Overfitting
11 Pitfalls in Applying Machine Learning
12 Clustering and Semi-Supervised Learning
13 Recommendations with Three Types of Learning
14 Games with Reinforcement Learning
15 Deep Learning for Computer Vision
16 Getting a Deep Learner Back on Track
17 Text Categorization with Words as Vectors
18 Deep Networks That Output Language
19 Making Stylistic Images with Deep Networks
20 Making Photorealistic Images with GANs
21 Deep Learning for Speech Recognition
22 Inverse Reinforcement Learning from People
23 Causal Inference Comes to Machine Learning
24 The Unexpected Power of Over-Parameterization
25 Protecting Privacy within Machine Learning
26 Mastering the Machine Learning Process