Cluster Analysis and Unsupervised Machine Learning in Python

English | MP4 | AVC 1280×720 | AAC 48 kHz 2ch | 5 Hours | 890 MB

Data science techniques for pattern recognition, data mining, k-means clustering, hierarchical clustering, and kernel density estimation (KDE).

Cluster analysis is a staple of unsupervised machine learning and data science.

It is very useful for data mining and big data because, unlike supervised machine learning, it automatically finds patterns in the data without the need for labels.

In a real-world environment, you can imagine that a robot or an artificial intelligence won’t always have access to the optimal answer, or maybe there isn’t an optimal correct answer. You’d want that robot to be able to explore the world on its own, and learn things just by looking for patterns.

Do you ever wonder how we get the data that we use in our supervised machine learning algorithms?

We always seem to have a nice CSV or a table, complete with Xs and corresponding Ys.

If you haven’t been involved in acquiring data yourself, you might not have thought about this, but someone has to make this data!

Those Ys have to come from somewhere, and a lot of the time that involves manual labor.

Sometimes, you don’t have access to this kind of information or it is infeasible or costly to acquire.

But you still want to have some idea of the structure of the data. If you’re doing data analytics, automating pattern recognition in your data would be invaluable.

This is where unsupervised machine learning comes into play.

In this course we are first going to talk about clustering. This is where, instead of training on labels, we try to create our own labels! We’ll do this by grouping together data that looks alike.

There are 2 methods of clustering we’ll talk about: k-means clustering and hierarchical clustering.
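
To give a flavor of what this looks like in practice, here is a minimal, illustrative sketch of plain (hard) k-means in Numpy. It is not the course’s code, and the toy data and the choice of 2 clusters are made up for the example:

    import numpy as np

    # toy 2-D data: two made-up blobs
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.5, (50, 2)),
                   rng.normal(3, 0.5, (50, 2))])

    K = 2
    centers = X[rng.choice(len(X), K, replace=False)]  # random initial means
    for _ in range(100):
        # assignment step: each point joins its nearest center
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # update step: each center moves to the mean of its points
        new_centers = np.array([X[labels == k].mean(axis=0) for k in range(K)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers

    print(centers)  # should land near (0, 0) and (3, 3)

The course builds this up carefully (including the soft/fuzzy variant), but the assign-then-update loop above is the core idea.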

Next, because in machine learning we like to talk about probability distributions, we’ll go into Gaussian mixture models and kernel density estimation, where we talk about how to “learn” the probability distribution of a set of data.
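
As a taste of what “learning” a distribution looks like in code, here is a small sketch using off-the-shelf libraries (scikit-learn’s GaussianMixture and Scipy’s gaussian_kde) on made-up 1-D data. The course writes the GMM from scratch instead, so treat this purely as an illustration:

    import numpy as np
    from scipy.stats import gaussian_kde
    from sklearn.mixture import GaussianMixture

    # made-up 1-D data drawn from two bumps
    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(-2, 1.0, 300), rng.normal(3, 0.5, 200)])

    # Gaussian mixture model: a weighted sum of Gaussians, fit with EM
    gmm = GaussianMixture(n_components=2, random_state=0).fit(x.reshape(-1, 1))

    # kernel density estimation: a small bump placed on every data point
    kde = gaussian_kde(x)

    grid = np.linspace(-6, 6, 200)
    gmm_density = np.exp(gmm.score_samples(grid.reshape(-1, 1)))  # p(x) under the GMM
    kde_density = kde(grid)                                       # p(x) under the KDE
    print(gmm_density.max(), kde_density.max())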

One interesting fact is that under certain conditions, Gaussian mixture models and k-means clustering are exactly the same! We’ll prove why this is the case.
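
Very roughly, the connection is that the E-step of a spherical GMM gives each point a “responsibility” for each cluster, and as the shared variance shrinks toward zero those responsibilities collapse into the hard nearest-mean assignments that k-means uses. Here is a tiny Numpy sketch of just that E-step, with made-up points, fixed means, and equal mixture weights (the course derives this properly):

    import numpy as np

    X = np.array([[0.0], [0.2], [4.0], [4.3]])   # made-up 1-D points
    means = np.array([[0.1], [4.1]])             # two fixed cluster means

    for var in (2.0, 0.01):
        # responsibilities for equal-weight spherical Gaussians with variance var
        sq_dist = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
        log_r = -sq_dist / (2 * var)
        r = np.exp(log_r - log_r.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        print(f"variance={var}:\n{np.round(r, 3)}")

    # as the variance shrinks, each row becomes one-hot: the hard
    # nearest-mean assignment used by k-means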

All the algorithms we’ll talk about in this course are staples in machine learning and data science, so if you want to know how to automatically find patterns in your data with data mining and pattern extraction, without needing someone to put in manual work to label that data, then this course is for you.

All the materials for this course are FREE. You can download and install Python, Numpy, and Scipy with simple commands on Windows, Linux, or Mac.

This course focuses on “how to build and understand”, not just “how to use”. Anyone can learn to use an API in 15 minutes after reading some documentation. It’s not about “remembering facts”; it’s about “seeing for yourself” via experimentation. It will teach you how to visualize what’s happening in the model internally. If you want more than just a superficial look at machine learning models, this course is for you.

What you’ll learn

  • Understand the regular K-Means algorithm
  • Understand and enumerate the disadvantages of K-Means Clustering
  • Understand the soft or fuzzy K-Means Clustering algorithm
  • Implement Soft K-Means Clustering in Code
  • Understand Hierarchical Clustering
  • Explain algorithmically how Hierarchical Agglomerative Clustering works
  • Apply Scipy’s Hierarchical Clustering library to data (see the short sketch after this list)
  • Understand how to read a dendrogram
  • Understand the different distance metrics used in clustering
  • Understand the difference between single linkage, complete linkage, Ward linkage, and UPGMA
  • Understand the Gaussian mixture model and how to use it for density estimation
  • Write a GMM in Python code
  • Explain when GMM is equivalent to K-Means Clustering
  • Explain the expectation-maximization algorithm
  • Understand how GMM overcomes some disadvantages of K-Means
  • Understand the Singular Covariance problem and how to fix it
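
For the Scipy-based bullets above, the library calls involved look roughly like the sketch below. The data is made up, and 'ward' is just one of the linkage options the course compares:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.cluster.hierarchy import linkage, dendrogram, fcluster

    # made-up 2-D data: three loose groups
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(c, 0.4, (20, 2)) for c in (0, 3, 6)])

    Z = linkage(X, method='ward')                    # agglomerative clustering, Ward linkage
    labels = fcluster(Z, t=3, criterion='maxclust')  # cut the tree into 3 clusters

    dendrogram(Z)   # the tree diagram you learn to read in the course
    plt.show()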

Table of Contents

Introduction to Unsupervised Learning
1 Introduction and Outline
2 What is unsupervised learning used for?
3 Why Use Clustering
4 How to Succeed in this Course

K-Means Clustering
5 An Easy Introduction to K-Means Clustering
6 Using K-Means on Real Data: MNIST
7 One Way to Choose K
8 K-Means Application: Finding Clusters of Related Words
9 Visual Walkthrough of the K-Means Clustering Algorithm
10 Soft K-Means
11 The K-Means Objective Function
12 Soft K-Means in Python Code
13 Visualizing Each Step of K-Means
14 Examples of where K-Means can fail
15 Disadvantages of K-Means Clustering
16 How to Evaluate a Clustering (Purity, Davies-Bouldin Index)

Hierarchical Clustering
17 Visual Walkthrough of Agglomerative Hierarchical Clustering
18 Agglomerative Clustering Options
19 Using Hierarchical Clustering in Python and Interpreting the Dendrogram
20 Application: Evolution
21 Application: Donald Trump vs. Hillary Clinton Tweets

Gaussian Mixture Models (GMMs)
22 Description of the Gaussian Mixture Model and How to Train a GMM
23 Comparison between GMM and K-Means
24 Write a Gaussian Mixture Model in Python Code
25 Practical Issues with GMM: Singular Covariance
26 Kernel Density Estimation
27 Expectation-Maximization
28 Future Unsupervised Learning Algorithms You Will Learn

Appendix
29 What is the Appendix?
30 What order should I take your courses in (part 1)
31 What order should I take your courses in (part 2)
32 Windows-Focused Environment Setup 2018
33 How to install Numpy, Scipy, Matplotlib, Pandas, IPython, Theano, and TensorFlow
34 How to Code by Yourself (part 1)
35 How to Code by Yourself (part 2)
36 How to Succeed in this Course (Long Version)
37 Is this for Beginners or Experts? Academic or Practical? Fast or Slow-Paced?
38 Proof that using Jupyter Notebook is the same as not using it
39 Python 2 vs Python 3