English | MP4 | AVC 1920×1080 | AAC 48KHz 2ch | 1h 41m | 616 MB
Implement machine learning algorithms and evaluate how well they perform with the Scala programming language
Programmers face multiple challenges when implementing machine learning; dealing with unstructured data and picking the right ML model are among the hardest.
In this course we will go through day-to-day challenges that programmers face when implementing ML pipelines and consider different approaches and models to solve complex problems.
You will learn about the most effective machine learning techniques and how to use them to your advantage. You will implement algorithms in practical hands-on projects, building data models and understanding how they work by using different types of algorithms.
Each section of the course deals with a specific machine learning problem and analysis and gives you insights by using real-world datasets.
By the end of this course, you will be able to take huge datasets, extract features from them, and apply a machine learning model that is well suited to your problem.
This is a fast-paced, step-by-step guide that will help you learn how to create an ML model using the Apache Spark ML toolkit. With this practical approach, you will take your skills to the next level and be able to create ML pipelines effectively.
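As a taste of what such a pipeline looks like, here is a minimal sketch of a Spark ML pipeline that chains tokenization, feature hashing, and logistic regression. The toy training data and object name are illustrative, not from the course:

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{HashingTF, Tokenizer}
import org.apache.spark.sql.SparkSession

// Hypothetical sketch: a three-stage Spark ML pipeline on toy data.
object PipelineSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("pipeline-sketch")
      .master("local[*]") // local mode, for experimentation only
      .getOrCreate()
    import spark.implicits._

    // Tiny labeled corpus standing in for a real-world dataset.
    val training = Seq(
      ("spark rocks", 1.0),
      ("boring text", 0.0),
      ("spark is great", 1.0),
      ("dull and boring", 0.0)
    ).toDF("text", "label")

    // Stage 1: split text into words; Stage 2: hash words into
    // term-frequency vectors; Stage 3: fit a logistic regression.
    val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
    val tf = new HashingTF().setInputCol("words").setOutputCol("features")
    val lr = new LogisticRegression().setMaxIter(10)

    val pipeline = new Pipeline().setStages(Array(tokenizer, tf, lr))
    val model = pipeline.fit(training)
    model.transform(training).select("text", "prediction").show()

    spark.stop()
  }
}
```

Fitting the `Pipeline` returns a `PipelineModel` whose `transform` runs all three stages end to end, which is what lets you treat feature extraction and model training as one reusable unit.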
What You Will Learn
- Extract features from data
- Write Scala code implementing ML algorithms for prediction and clustering
- Analyze the structure of datasets with exploratory data analysis techniques using Scala
- Get to grips with the most popular machine learning algorithms used in the areas of regression, classification, clustering, dimensionality reduction, PCA, and neural networks
- Use the power of MLlib libraries to implement machine learning with Spark
- Use GMMs to reason about time-series data
- Work with the k-means and Naive Bayes algorithms and their methods and implement them in Scala with real datasets
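The feature-extraction skills above center on turning text into numeric vectors. A minimal sketch of that idea, using Spark ML's TF-IDF transformers on a toy corpus (the corpus and object name are assumptions for illustration):

```scala
import org.apache.spark.ml.feature.{HashingTF, IDF, Tokenizer}
import org.apache.spark.sql.SparkSession

// Hypothetical sketch: raw text -> TF-IDF feature vectors with Spark ML.
object TfIdfSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("tfidf-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Toy documents; the course works with real-world datasets instead.
    val docs = Seq(
      (0, "spark implements machine learning pipelines"),
      (1, "scala makes machine learning pipelines concise")
    ).toDF("id", "text")

    // Tokenize, hash tokens into term-frequency vectors,
    // then rescale by inverse document frequency.
    val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
    val words = tokenizer.transform(docs)

    val tf = new HashingTF()
      .setInputCol("words").setOutputCol("rawFeatures")
      .setNumFeatures(1 << 10) // hash space size, a tunable assumption
    val featurized = tf.transform(words)

    val idf = new IDF().setInputCol("rawFeatures").setOutputCol("features")
    val idfModel = idf.fit(featurized)
    idfModel.transform(featurized).select("id", "features").show(truncate = false)

    spark.stop()
  }
}
```

The resulting `features` column holds sparse vectors, which is exactly the representation the course's classification and clustering models consume.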
1 The Course Overview
2 Analyzing Text Input Data
3 Feature Generation from Text – Count Vectorizer, TFIDF, LDA
4 Extracting Features from Data – Transforming Text into Vector of Numbers
5 Bag-of-Words and Skip Gram
6 Training Classification Models – Implementing Word2Vec Using Apache Spark
7 Logistic Regression Explanation
8 Writing a Logistic Regression Model Per Author in Apache Spark
9 Training Regression Model
10 Key Concepts, Machine Learning Pipelines, and Operations
11 Learn How to Validate Models Using Cross-Validation
12 Analyzing Time of Post Using Clustering – (GMM Explanation)
13 Implementing GMM in Apache Spark
14 K-Means Clustering Explanation and Use Cases
15 Implementing K-Means Clustering in Apache Spark
16 Measure Accuracy Using Area Under ROC
17 Dimensionality Reduction Using Singular Value Decomposition (SVD)
18 Building Recommendation Engine in Spark Using Collaborative Filtering
19 Using Recommendation Engine to Get Top Recommendations
20 Dense and Sparse Vectors
21 LabeledPoints, Rating, and Other Data Types
22 The Spark versus Deep Learning Use Case
23 Spark for Parallelizing Deep Learning Evaluation
24 Deep Learning As a Feature Generator for Existing Spark ML Algorithms
25 Spark Deep Learning Made Simple
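To preview the clustering material in lessons 14 and 15, here is a minimal k-means sketch with Spark ML on toy 2-D points; the data and object name are assumptions, not course code:

```scala
import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.SparkSession

// Hypothetical sketch: k-means clustering of toy 2-D points in Spark ML.
object KMeansSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kmeans-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Six points forming two well-separated clusters.
    val points = Seq(
      Vectors.dense(0.0, 0.1), Vectors.dense(0.2, 0.0), Vectors.dense(0.1, 0.2),
      Vectors.dense(9.0, 9.1), Vectors.dense(9.2, 9.0), Vectors.dense(8.9, 9.2)
    ).map(Tuple1.apply).toDF("features")

    // Fit k-means with k = 2; the seed makes the run repeatable.
    val kmeans = new KMeans().setK(2).setSeed(42L)
    val model = kmeans.fit(points)

    // One center should land near each cluster of points.
    model.clusterCenters.foreach(println)

    spark.stop()
  }
}
```

Swapping `KMeans` for `GaussianMixture` from the same package gives the GMM variant covered in lessons 12 and 13, with cluster membership expressed as probabilities rather than hard assignments.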