
Modern Scala Projects : Leverage the Power of Scala for Building Data-Driven and High-performant Projects.

By:
Material type: Text
Publisher: Birmingham : Packt Publishing, Limited, 2018
Copyright date: ©2018
Edition: 1st ed.
Description: 1 online resource (328 pages)
Content type:
  • text
Media type:
  • computer
Carrier type:
  • online resource
ISBN:
  • 9781788625272
Subject(s):
Genre/Form:
Additional physical formats: Print version: Modern Scala Projects
DDC classification:
  • 005.114
LOC classification:
  • QA76.73.S28 .G878 2018
Online resources:
Contents:
Cover -- Title Page -- Copyright and Credits -- Packt Upsell -- Contributors -- Table of Contents -- Preface -- Chapter 1: Predict the Class of a Flower from the Iris Dataset -- A multivariate classification problem -- Understanding multivariate -- Different kinds of variables -- Categorical variables -- Fischer's Iris dataset -- The Iris dataset represents a multiclass, multidimensional classification task -- The training dataset -- The mapping function -- An algorithm and its mapping function -- Supervised learning - how it relates to the Iris classification task -- Random Forest classification algorithm -- Project overview - problem formulation -- Getting started with Spark -- Setting up prerequisite software -- Installing Spark in standalone deploy mode -- Developing a simple interactive data analysis utility -- Reading a data file and deriving DataFrame out of it -- Implementing the Iris pipeline -- Iris pipeline implementation objectives -- Step 1 - getting the Iris dataset from the UCI Machine Learning Repository -- Step 2 - preliminary EDA -- Firing up Spark shell -- Loading the iris.csv file and building a DataFrame -- Calculating statistics -- Inspecting your SparkConf again -- Calculating statistics again -- Step 3 - creating an SBT project -- Step 4 - creating Scala files in SBT project -- Step 5 - preprocessing, data transformation, and DataFrame creation -- DataFrame Creation -- Step 6 - creating, training, and testing data -- Step 7 - creating a Random Forest classifier -- Step 8 - training the Random Forest classifier -- Step 9 - applying the Random Forest classifier to test data -- Step 10 - evaluate Random Forest classifier -- Step 11 - running the pipeline as an SBT application -- Step 12 - packaging the application -- Step 13 - submitting the pipeline application to Spark local -- Summary -- Questions.
Chapter 2: Build a Breast Cancer Prognosis Pipeline with the Power of Spark and Scala -- Breast cancer classification problem -- Breast cancer dataset at a glance -- Logistic regression algorithm -- Salient characteristics of LR -- Binary logistic regression assumptions -- A fictitious dataset and LR -- LR as opposed to linear regression -- Formulation of a linear regression classification model -- Logit function as a mathematical equation -- LR function -- Getting started -- Setting up prerequisite software -- Implementation objectives -- Implementation objective 1 - getting the breast cancer dataset -- Implementation objective 2 - deriving a dataframe for EDA -- Step 1 - conducting preliminary EDA -- Step 2 - loading data and converting it to an RDD[String] -- Step 3 - splitting the resilient distributed dataset and reorganizing individual rows into an array -- Step 4 - purging the dataset of rows containing question mark characters -- Step 5 - running a count after purging the dataset of rows with questionable characters -- Step 6 - getting rid of header -- Step 7 - creating a two-column DataFrame -- Step 8 - creating the final DataFrame -- Random Forest breast cancer pipeline -- Step 1 - creating an RDD and preprocessing the data -- Step 2 - creating training and test data -- Step 3 - training the Random Forest classifier -- Step 4 - applying the classifier to the test data -- Step 5 - evaluating the classifier -- Step 6 - running the pipeline as an SBT application -- Step 7 - packaging the application -- Step 8 - deploying the pipeline app into Spark local -- LR breast cancer pipeline -- Implementation objectives -- Implementation objectives 1 and 2 -- Implementation objective 3 - Spark ML workflow for the breast cancer classification task -- Implementation objective 4 - coding steps for building the indexer and logit machine learning model.
Extending our pipeline object with the WisconsinWrapper trait -- Importing the StringIndexer algorithm and using it -- Splitting the DataFrame into training and test datasets -- Creating a LogisticRegression classifier and setting hyperparameters on it -- Running the LR model on the test dataset -- Building a breast cancer pipeline with two stages -- Implementation objective 5 - evaluating the binary classifier's performance -- Summary -- Questions -- Chapter 3: Stock Price Predictions -- Stock price binary classification problem -- Stock price prediction dataset at a glance -- Getting started -- Support for hardware virtualization -- Installing the supported virtualization application -- Downloading the HDP Sandbox and importing it -- Hortonworks Sandbox virtual appliance overview -- Turning on the virtual machine and powering up the Sandbox -- Setting up SSH access for data transfer between Sandbox and the host machine -- Setting up PuTTY, a third-party SSH and Telnet client -- Setting up WinSCP, an SFTP client for Windows -- Updating the default Python required by Zeppelin -- What is Zeppelin? -- Updating our Zeppelin instance -- Launching the Ambari Dashboard and Zeppelin UI -- Updating Zeppelin Notebook configuration by adding or updating interpreters -- Updating a Spark 2 interpreter -- Implementation objectives -- List of implementation goals -- Step 1 - creating a Scala representation of the path to the dataset file -- Step 2 - creating an RDD[String] -- Step 3 - splitting the RDD around the newline character in the dataset -- Step 4 - transforming the RDD[String] -- Step 5 - carrying out preliminary data analysis -- Creating DataFrame from the original dataset -- Dropping the Date and Label columns from the DataFrame -- Having Spark describe the DataFrame -- Adding a new column to the DataFrame and deriving Vector out of it.
Removing stop words - a preprocessing step -- Transforming the merged DataFrame -- Transforming a DataFrame into an array of NGrams -- Adding a new column to the DataFrame, devoid of stop words -- Constructing a vocabulary from our dataset corpus -- Training CountVectorizer -- Using StringIndexer to transform our input label column -- Dropping the input label column -- Adding a new column to our DataFrame -- Dividing the DataSet into training and test sets -- Creating labelIndexer to index the indexedLabel column -- Creating StringIndexer to index a column label -- Creating RandomForestClassifier -- Creating a new data pipeline with three stages -- Creating a new data pipeline with hyperparameters -- Training our new data pipeline -- Generating stock price predictions -- Summary -- Questions -- Chapter 4: Building a Spam Classification Pipeline -- Spam classification problem -- Relevant background topics -- Multidimensional data -- Features and their importance -- Classification task -- Classification outcomes -- Two possible classification outcomes -- Project overview - problem formulation -- Getting started -- Setting up prerequisite software -- Spam classification pipeline -- Implementation steps -- Step 1 - setting up your project folder -- Step 2 - upgrading your build.sbt file -- Step 3 - creating a trait called SpamWrapper -- Step 4 - describing the dataset -- Description of the SpamHam dataset -- Step 5 - creating a new spam classifier class -- Step 6 - listing the data preprocessing steps -- Step 7 - regex to remove punctuation marks and whitespaces -- Step 8 - creating a ham dataframe with punctuation removed -- Creating a labeled ham dataframe -- Step 9 - creating a spam dataframe devoid of punctuation -- Step 10 - joining the spam and ham datasets -- Step 11 - tokenizing our features -- Step 12 - removing stop words.
Step 13 - feature extraction -- Step 14 - creating training and test datasets -- Summary -- Questions -- Further reading -- Chapter 5: Build a Fraud Detection System -- Fraud detection problem -- Fraud detection dataset at a glance -- Precision, recall, and the F1 score -- Feature selection -- The Gaussian Distribution function -- Where does Spark fit in all this? -- Fraud detection approach -- Project overview - problem formulation -- Getting started -- Setting up Hortonworks Sandbox in the cloud -- Creating your Azure free account, and signing in -- The Azure Marketplace -- The HDP Sandbox home page -- Implementation objectives -- Implementation steps -- Create the FraudDetection trait -- Broadcasting mean and standard deviation vectors -- Calculating PDFs -- F1 score -- Calculating the best error term and best F1 score -- Maximum and minimum values of a probability density -- Step size for best error term calculation -- A loop to generate the best F1 and the best error term -- Generating predictions - outliers that represent fraud -- Generating the best error term and best F1 measure -- Preparing to compute precision and recall -- A recap of how we looped through a ranger of Epsilons, the best error term, and the best F1 measure -- Function to calculate false positives -- Summary -- Questions -- Further reading -- Chapter 6: Build Flights Performance Prediction Model -- Overview of flight delay prediction -- The flight dataset at a glance -- Problem formulation of flight delay prediction -- Getting started -- Setting up prerequisite software -- Increasing Java memory -- Reviewing the JDK version -- MongoDB installation -- Implementation and deployment -- Implementation objectives -- Creating a new Scala project -- Building the AirlineWrapper Scala trait -- Summary -- Questions -- Further reading -- Chapter 7: Building a Recommendation Engine.
Problem overviews.
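
The chapter listings above enumerate several Spark ML workflows. As a rough illustration of the Chapter 1 workflow (classifying Iris flowers with a Random Forest), the following minimal Scala sketch shows the shape of such a pipeline. It assumes Spark's DataFrame-based ML API; the file path, column names, split ratio, and seed are illustrative assumptions, not values taken from the book.

import org.apache.spark.sql.SparkSession
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.RandomForestClassifier
import org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
import org.apache.spark.ml.feature.{StringIndexer, VectorAssembler}

object IrisPipelineSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").appName("IrisPipeline").getOrCreate()

    // Load the CSV into a DataFrame; schema inference keeps the sketch short.
    val iris = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("iris.csv") // assumed local path

    // Assemble the four numeric measurements into a single feature vector.
    val assembler = new VectorAssembler()
      .setInputCols(Array("sepal_length", "sepal_width", "petal_length", "petal_width"))
      .setOutputCol("features")

    // Index the string species column into a numeric label column.
    val labelIndexer = new StringIndexer().setInputCol("species").setOutputCol("label")

    val rf = new RandomForestClassifier().setLabelCol("label").setFeaturesCol("features")

    val Array(train, test) = iris.randomSplit(Array(0.8, 0.2), seed = 42L)

    // Fit the three-stage pipeline on the training split and score the test split.
    val model = new Pipeline().setStages(Array(assembler, labelIndexer, rf)).fit(train)
    val predictions = model.transform(test)

    val accuracy = new MulticlassClassificationEvaluator()
      .setLabelCol("label")
      .setPredictionCol("prediction")
      .setMetricName("accuracy")
      .evaluate(predictions)
    println(s"Test accuracy: $accuracy")

    spark.stop()
  }
}

Packaged with SBT, a pipeline of this shape could be run locally with spark-submit, as the final steps of the Chapter 1 listing describe.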
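The Chapter 2 listing describes a two-stage logistic regression pipeline built from a StringIndexer and a LogisticRegression classifier, evaluated as a binary classifier. A minimal sketch of that shape follows; the DataFrame column names and hyperparameter values are assumptions for illustration.

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.feature.StringIndexer
import org.apache.spark.sql.DataFrame

// Assumes `df` already holds a "features" vector column and a string "diagnosis" column.
def trainAndEvaluateLogit(df: DataFrame): Double = {
  // Stage 1: index the string diagnosis into a numeric label.
  val indexer = new StringIndexer().setInputCol("diagnosis").setOutputCol("label")

  // Stage 2: logistic regression with illustrative hyperparameters.
  val lr = new LogisticRegression()
    .setMaxIter(50)
    .setRegParam(0.01)
    .setFeaturesCol("features")
    .setLabelCol("label")

  val Array(train, test) = df.randomSplit(Array(0.7, 0.3), seed = 1L)
  val model = new Pipeline().setStages(Array(indexer, lr)).fit(train)

  // Evaluate the binary classifier by area under the ROC curve.
  new BinaryClassificationEvaluator()
    .setLabelCol("label")
    .setMetricName("areaUnderROC")
    .evaluate(model.transform(test))
}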
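The Chapter 3 and Chapter 4 listings both lean on text-feature steps such as tokenization, stop-word removal, and CountVectorizer training. The sketch below strings those transformers together; the input column name and vocabulary size are assumed for illustration.

import org.apache.spark.ml.feature.{CountVectorizer, RegexTokenizer, StopWordsRemover}
import org.apache.spark.sql.DataFrame

// Assumes `messages` has a string "text" column; returns it with a "features" vector column.
def buildTextFeatures(messages: DataFrame): DataFrame = {
  // Split on non-word characters, which also drops punctuation.
  val tokenizer = new RegexTokenizer()
    .setInputCol("text")
    .setOutputCol("tokens")
    .setPattern("\\W+")

  // Remove common English stop words from the token lists.
  val remover = new StopWordsRemover().setInputCol("tokens").setOutputCol("filtered")

  // Build a bounded vocabulary and emit term-count vectors.
  val vectorizer = new CountVectorizer()
    .setInputCol("filtered")
    .setOutputCol("features")
    .setVocabSize(10000)

  val tokens = remover.transform(tokenizer.transform(messages))
  vectorizer.fit(tokens).transform(tokens)
}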
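The Chapter 5 headings point to an anomaly-detection style of fraud detection: compute probability densities, then sweep an error term (epsilon) across their range and keep the value that maximises the F1 score. A small, self-contained Scala sketch of that loop follows; the function names and the 1,000-step sweep are assumptions, not the book's code.

object FraudThresholdSketch {

  // Univariate Gaussian probability density function.
  def gaussianPdf(x: Double, mu: Double, sigma: Double): Double =
    math.exp(-math.pow(x - mu, 2) / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.Pi))

  // densities: one density per validation example; labels: 1.0 = fraud, 0.0 = normal.
  // Returns the (epsilon, F1) pair with the highest F1 score.
  def bestEpsilonAndF1(densities: Array[Double], labels: Array[Double]): (Double, Double) = {
    val (minP, maxP) = (densities.min, densities.max)
    val step = (maxP - minP) / 1000.0
    var best = (minP, 0.0)
    for (i <- 0 to 1000) {
      val eps = minP + i * step
      // Low density marks the example as an outlier, which we flag as fraud.
      val preds = densities.map(d => if (d < eps) 1.0 else 0.0)
      val tp = preds.zip(labels).count { case (p, y) => p == 1.0 && y == 1.0 }.toDouble
      val fp = preds.zip(labels).count { case (p, y) => p == 1.0 && y == 0.0 }.toDouble
      val fn = preds.zip(labels).count { case (p, y) => p == 0.0 && y == 1.0 }.toDouble
      if (tp > 0.0) {
        val precision = tp / (tp + fp)
        val recall = tp / (tp + fn)
        val f1 = 2 * precision * recall / (precision + recall)
        if (f1 > best._2) best = (eps, f1)
      }
    }
    best
  }
}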
Summary: Scala is a multipurpose programming language that is especially well suited to analyzing large datasets without compromising application performance. Its functional libraries can interact with databases and can be used to build scalable frameworks that produce robust data pipelines. This book showcases how you can use Scala and its constructs to meet specific project demands.

Description based on publisher supplied metadata and other sources.

Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2024. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries.
