Applied Learning

Scorecard Building in R – Part V – Rejected Sample Inference, Grade Analysis and Scoring Techniques

In the previous section, Part IV of the scorecard building process, I trained, validated and tested the logistic regression model that serves as the heart of the scorecard. In this section, I address the obvious sample selection problem: loans are accepted based on merit and personal credit information, and rejected for lack of credentials, so the modelling data reflects only accepted applicants. I also analyze the model's output by using the predicted scores for the training set to build a grading scheme. As an extra exercise, I scale the log-odds score into a more understandable scoring function.

To account for the sample selection bias in the model, performance inference methods are used to predict how rejected clients would have performed had they actually been given a loan. The first step is creating a function that maps the features shared by both populations to the log-odds of the accepted customers. This function is then used to predict the probabilities of the rejected customers. Since the rejected data set does not indicate which applicants applied for a 36-month loan term, I assume that all of them did.

I will utilize the following packages:

library(dplyr)
library(ggplot2)
library(scales)

I will need the data set of rejected applicants and the fields available for them. This can be downloaded here.

rejected_data <- read.csv("C:/Users/artemior/Desktop/Lending Club Model/RejectStatsD.csv")

To start off, I create a new logistic regression model with the same Bad_Binary response and only the features that are common to both accepted and rejected applicants; in this case, both data sets contain zip code, state and employment length. The data I use comes from Part II, more specifically WOE_matrix and Bad_Binary. It is assumed that building this inference model takes into account all of the variable WOE transformations. Bad_Binary_Original calls upon the original Bad variable from the features_36 data set.

Bad_Binary_Original <- features_36$Bad
sample_inference_features <- WOE_matrix_final[c("zip_code", "addr_state", "emp_length")]
sample_inference_features["Bad_Binary"] <- Bad_Binary_Original

I fit a simple generalized linear model (a logistic regression) on the accepted applicant data set.

sample_inference_model <- glm(Bad_Binary ~ ., family = binomial(link = "logit"), data = sample_inference_features)

Here, I calculate the WOEs for the rejected applicants by applying the WOE tables from the accepted applicants to the rejected applicant data set. The code and transformation methodology are exactly as in Part III.

features_36_inference <- rejected_data %>% select(-Amount.Requested, -Application.Date,
                                                  -Loan.Title, -Risk_Score,
                                                  -Debt.To.Income.Ratio, -Policy.Code)

features_36_inference_names <- colnames(features_36_inference)

Initiate cluster for parallel processing.

require(parallel)

number_cores <- detectCores() - 1
cluster <- makeCluster(number_cores)
clusterExport(cluster, c("IV", "min_function", "max_function",
"features_36_inference",
"features_36_inference_names",
"only_features_36", "recode", "WOE_tables"))

Create the WOE matrix table for the rejected data applicants.

WOE_matrix_table_inference <- parSapply(cluster, as.matrix(features_36_inference_names),
FUN = WOE_tables_function)

WOE_matrix_inference is the WOE-transformed matrix for the rejected applicant data set. This is the data set I will use to predict their probabilities for model performance inference.

WOE_matrix_inference <- parSapply(cluster, features_36_inference_names,
FUN = create_WOE_matrix)

I use WOE_matrix_inference to generate predicted probabilities from sample_inference_model, which was built on the accepted applicant data set.

rejected_inference_prob <- predict(sample_inference_model,
                                   newdata = WOE_matrix_inference,
                                   type = "response")
rejected_inference_prob_matrix <- as.matrix(rejected_inference_prob)
rejected_inference_prob_dataframe <- as.data.frame(rejected_inference_prob_matrix)
colnames(rejected_inference_prob_dataframe) <- c("Probabilities")

stopCluster(cluster)

Now I obtain the predicted probabilities for all accepted applicants using the cross-validated elastic-net logistic regression model from Part IV.

require(doParallel)

number_cores <- detectCores()
cluster <- makeCluster(number_cores)
registerDoParallel(cluster)

accepted_prob <- predict(glmnet.fit, LC_WOE_Dataset, type = "prob")
accepted_prob_matrix <- as.matrix(accepted_prob[,2])
accepted_prob_dataframe <- as.data.frame(accepted_prob_matrix)
colnames(accepted_prob_dataframe) <- c("Probabilities")

I combine the probabilities from both the rejected and accepted applicants and generate a graph that depicts the distribution of the probabilities.

probability_matrix <- rbind(accepted_prob_matrix, rejected_inference_prob_matrix)
probability_matrix <- as.data.frame(probability_matrix)
colnames(probability_matrix) <- c("Probabilities")

I use ggplot to plot the distribution of the probabilities of default. Here we see that the distribution is left-skewed. This represents both accepted and rejected applicants together, so it is also useful to analyze the distributions of the accepted and rejected applicants separately.

probability_distribution <- ggplot(data = probability_matrix, aes(Probabilities))
probability_distribution <- probability_distribution + geom_histogram(bins = 50)

accepted_probability_distribution <- ggplot(data = accepted_prob_dataframe, aes(Probabilities))
accepted_probability_distribution <- accepted_probability_distribution + geom_histogram(bins = 50)

accepted_prob_dist

rejected_probability_distribution <- ggplot(data = rejected_inference_prob_dataframe, aes(Probabilities))
rejected_probability_distribution <- rejected_probability_distribution + geom_histogram(bins = 50)

rejected_prob_dist.png

The accepted probability distribution is left-skewed while the distribution for the rejected applicants is roughly normal. This could be interpreted as rejected applicants exhibiting a normally distributed score had they been funded. Rejected sample selection bias is therefore minimal if applicants are effectively being rejected at random from a normal distribution; if this were not the case, I would suspect some bias in the acceptance and rejection of applicants.
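
To make the comparison easier to see, the two sets of probabilities can also be overlaid on a single density plot. Here is a minimal sketch, assuming the accepted_prob_dataframe and rejected_inference_prob_dataframe objects created above:

accepted_prob_dataframe$Group <- "Accepted"
rejected_inference_prob_dataframe$Group <- "Rejected (inferred)"
combined_prob_dataframe <- rbind(accepted_prob_dataframe, rejected_inference_prob_dataframe)

overlay_distribution <- ggplot(data = combined_prob_dataframe, aes(x = Probabilities, fill = Group))
overlay_distribution <- overlay_distribution + geom_density(alpha = 0.4)

overlay_distribution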

After gaining some insight into how the model performs on a population it has never seen before, I formalize the scorecard by creating a grading scheme that defines several levels of risk.

Here, I organize the accepted applicant probabilities into bins to initiate a lift analysis. The lift analysis helps determine which bins of scores should be described by particular letter grades. I create 25 bins to mimic the sub-grade system LC uses in its public data set. I then append the “Bad” column from features_36 to this vector and summarize the information so that I can calculate the proportion of bads within each bin.

bins = 25
Bad_Binary_Values <- features_36$Bad
prob_bad_matrix <- as.data.frame(cbind(accepted_prob_matrix, Bad_Binary_Values))
colnames(prob_bad_matrix) <- c("Probabilities", "Bad_Binary_Values")

I sort the probabilities and binary values in increasing order based on the probabilities column; ordering the first column also sorts the corresponding values in the second column.

Probabilities <- prob_bad_matrix[,1]
Bad_Binary_Values <- prob_bad_matrix[,2]
order_accepted_prob <- prob_bad_matrix[order(Probabilities, Bad_Binary_Values, decreasing = FALSE),]

I create the bins based on the sorted probabilities and create a new data frame consisting of only the bins and bad binary values. This will be the data frame I use to conduct a lift analysis.

bin_prob <- cut(order_accepted_prob$Probabilities, breaks = bins, labels = 1:bins)
order_bin <- as.data.frame(cbind(bin_prob, order_accepted_prob[,2]))
colnames(order_bin) <- c("Bin", "Bad")

I summarize the information where I calculate the proportion of bads within each bin.

bin_table <- table(order_bin$Bin, order_bin$Bad)

Bin_Summary <- group_by(order_bin, Bin)

Bad_Summary <- summarize(Bin_Summary, Total = n(), Good = sum(Bad), Bad = 1 - Good/Total)

Using Bad_Summary, I plot a bar plot that represents the lift analysis.

lift_plot <- ggplot(Bad_Summary, aes(x = Bin, y = Bad))
lift_plot <- lift_plot + geom_bar(stat = "identity", colour = "skyblue", fill = "skyblue")
lift_plot <- lift_plot + xlab("Bin")
lift_plot <- lift_plot + ylab("Proportion of Bad")

lift_plot

Here, the graph shows the 25 bins and the proportion of bad customers within each bin. As expected, the proportion of bads decreases across the bins, which shows the effectiveness of the classifier. By separating the scores into 25 bins, I mimic LC’s sub-grade system and could apply the exact same logic to this scorecard.
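
As a sketch of how a grading scheme could be attached to these bins, the following maps the 25 bins to hypothetical sub-grade labels A1 through E5, with A1 given to the bin with the lowest proportion of bads. The labels are my own illustration, not LC’s actual grades:

# Hypothetical sub-grade labels: A1 (safest) through E5 (riskiest)
grade_labels <- paste0(rep(LETTERS[1:5], each = 5), 1:5)

# Rank the bins by their observed proportion of bads and attach a label to each
Bad_Summary$Grade <- grade_labels[rank(Bad_Summary$Bad, ties.method = "first")]

Bad_Summary

The observed bad rate per grade can then be reported when the scorecard is presented.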

To finalize the scorecard, I generate a linear function of log-odds and apply a three-digit score mapping system that will assist upper management in understanding the risk score obtained from the scorecard. First I convert the probability scores into log-odds form and figure out what type of linear transformation I would like to apply.

Accepted_Probabilities <- Probabilities
LogOdds <- log(Accepted_Probabilities)

The choice of score scale is up to the analyst and what they feel is the best way to present it to upper management for easy interpretation. Here, I apply a three-digit score transformation to the log-odds using a simple linear function. To calculate the slope of this line, I use the minimum and maximum log-odds as the range corresponding to a score range of 100 to 1000.

max_score <- 1000
min_score <- 100
max_LogOdds <- max(LogOdds)
min_LogOdds <- min(LogOdds)

linear_slope <- (max_score - min_score)/(max_LogOdds - min_LogOdds)
linear_intercept <- max_score - linear_slope * max_LogOdds

Here, the linear slope is 82.6 and the intercept is 1000. This means that the average applicant will have a risk score near the maximum, essentially guaranteed funding. As the log-odds decreases based on the features fed into the model, the score decreases significantly. The way to interpret the score is that for every one-unit decrease in log-odds, the score decreases by 82.6 units; someone who is twice as risky as an applicant at one unit of log-odds will see their score drop by 165.2. Each 82.6-unit step in the score therefore indicates a level of riskiness that is easily understood by non-technical audiences.
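
Putting the pieces together, the mapping from an applicant's log-odds to the final score is a single linear transformation. Here is a minimal sketch using the slope and intercept computed above (the Scores and Score names are my own):

# Map each applicant's log-odds onto the 100 - 1000 scale
Scores <- linear_intercept + linear_slope * LogOdds

# Attach the scaled scores to the accepted applicant probabilities and inspect the range
accepted_prob_dataframe$Score <- Scores
summary(Scores)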

Concluding Remarks

There you have it! After five parts of the scorecard building process, the scorecard is ready for presentation and production. I could go even further into some of the things that could be changed, such as adjusting scoring thresholds to meet business requirements or accepted levels of risk. I could also explain how to link the scorecard and grades to expected losses and profit margins for the company. That extends beyond what I have demonstrated here, but it is certainly worth considering, especially for a financial company.

Source Code

The R code for all 5 parts of the scorecard building process can be found at my Github page.

Scorecard Building in R – Part II – Data Preparation and Analysis

I used the dataframe manipulation package ‘dplyr’, some basic parallel processing to get the code running faster with the package ‘parallel’, and the ‘Information’ package which allows me to analyze the features within the data set using weight-of-evidence and information value.

library(dplyr)
library(parallel)
library(Information)

First, I read in the Lending Club CSV file downloaded from the Lending Club website. The file is saved on my local desktop and is easily read in with the read.csv function.

data <- read.csv("C:/Users/artemior/Desktop/Lending Club model/LoanStats3d.csv")

Next, I create a column that indicates whether I will keep an observation (row) or not. This is based on the loan status because, for a predictive logistic regression model, I only want statuses that can be strictly defined as a ‘Good’ or a ‘Bad’ loan.

data <- mutate(data,
Keep = ifelse(loan_status == "Charged Off" |
loan_status == "Default" |
loan_status == "Fully Paid" |
loan_status == "Late (16-30 days)" |
loan_status == "Late (31-120 days)",
"Keep", "Remove"))

After creating the ‘Keep’ column I filter the data depending on whether the observation had “Keep” or “Remove”.

sample <- filter(data, Keep == "Keep")

I further filter the data set to create two new samples. Lending Club offers two mutually exclusive loan terms, so to better predict the riskiness of its loans we can create two sub-models: one for 36-month term loans and one for 60-month term loans.

sample_36 <- filter(sample, term == " 36 months")
sample_60 <- filter(sample, term == " 60 months")

For the purposes of this scorecard building demonstration, I will create a model using the 36-month term loans. Using the mutate function, I create a new column called ‘Bad’, which will be the binary dependent variable used in the logistic regression.

sample_36 <- mutate(sample_36, Bad = ifelse(loan_status == "Fully Paid", 1, 0))

The next step is to clean up the table to remove any data points I do not want to include in the prediction model. Variables such as employment title would take more time to analyze so for the purposes of this analysis I remove them.

features_36 <- sample_36 %>% select(-id, -member_id, -loan_amnt,
-funded_amnt, -funded_amnt_inv, -term, -int_rate, -installment,
-grade, -sub_grade, -pymnt_plan, -purpose, -loan_status,
-emp_title, -out_prncp, -out_prncp_inv, -total_pymnt, -total_pymnt_inv,
-total_rec_int, -total_rec_late_fee, -recoveries, -last_pymnt_d, -last_pymnt_amnt,
-next_pymnt_d, -policy_code, -total_rec_prncp, -Keep)

To further understand the data, I take a look at the number of observations per category under each variable. This helps weed out any variables that could be problematic in later steps.
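
One quick way to do this is to count the distinct values in each column and tabulate the levels of any variable that looks suspicious. A minimal sketch using base R on the features_36 table (home_ownership is one of the variables discussed later):

# Number of distinct values per feature, from most to fewest
distinct_counts <- sapply(features_36, function(x) length(unique(x)))
sort(distinct_counts, decreasing = TRUE)

# Observation counts per category for a single variable, e.g. home ownership
table(features_36$home_ownership)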

Once the features table is complete, I use the information value methodology to transform the raw feature data. In theory, transforming the raw data into a proportional log-odds value, as seen in the Weight-of-Evidence, maps better onto a logistic regression fitted curve.

IV <- create_infotables(data = features_36, y = "Bad")

We can generate a summary of the IVs for each feature. The IV for a particular feature represents the sum of its individual bin IVs.

IV$summary

We can also check the IV tables for individual features and see how each feature was binned, the percentage of observations that each bin represents out of the total number of observations, the WOE attributed to the bin, as well as the IV. The following code presents the feature summary for the last credit pull date.

IV$Tables$last_credit_pull_d

I analyze the behaviors of continuous and ordered-discrete variables by plotting their weight-of-evidences. In theory, the best possible transformation occurs when the weight-of-evidences exhibit a monotonic relationship. First, I define features_36_names_plot as the vector of column names over which I will apply a function that plots a WOE graph for each feature in features_36. I remove categorical features that would generate far too many bins to plot; for example, I removed zip_code since it has over 500 distinct values.

features_36_names_plot <- colnames(features_36)[c(-7, -11, -ncol(features_36))]

Here is the code for the plotWOE function I just mentioned. This function generates a WOE plot for input x, where x is a string representing the column name of a specific feature. Recall that I generated this vector of strings as features_36_names_plot.

plotWOE <- function(x) {
  p <- plot_infotables(IV, variable = x, show_values = TRUE)
  return(p)
}

To keep the for loop code clean and fast, I store the length of the feature name vector.

feature_name_vector_length_plot <- length(features_36_names_plot)

Now for the fun part: to generate a graph for each feature, I use a for loop that goes over every string in the features_36_names_plot vector and plots a WOE graph for each name corresponding to a feature in features_36. To be safe, I wrap the call in error handling, because somewhere in this large set of features there may be one or two for which a WOE plot cannot be created. This happens when a particular feature contains only one category or value across all observed loans.

for (i in 1:feature_name_vector_length_plot) {
  p <- tryCatch(plotWOE(features_36_names_plot[i]),
                error = function(e) {
                  print(paste("Removed variable: ", features_36_names_plot[i]))
                  NaN
                })
  print(p)
}

About 90 graphs are generated by the for loop. Below I present and discuss two examples of the kinds of graphs produced and what they mean.

home_ownership_WOE_plot

The home ownership weight-of-evidence plot shows that a greater proportion of good consumer loan customers own their homes, while a greater proportion of bad customers pay rent where they live. Those still paying a mortgage are slightly better customers.

months_since_delin_WOE_plot

The months since delinquency (or time since you failed to pay off some form of credit) weight-of-evidence plot presents another intuitive relationship. The more months that have passed since a customer’s most recent delinquency, the more likely they are to be a good customer who pays off their loan. A low number of months since the most recent delinquency means the customer has only recently failed to pay off other forms of credit. This goes to show that even if you have had a delinquency in your lifetime, you can improve your credit management and behaviors over time.

In the plot, something odd happens when customers had their delinquency between 19 and 31 months before they received another consumer loan. This could suggest a lagging effect, where it takes some time to fully chase down a customer; sometimes months and months of notification are given before the customer is actually classified as delinquent.

In the next post, Scorecard Building – Part III – Data Transformation, I am going to describe how the data we prepared and analyzed using Information Theory will be transformed to better suit a logistic regression model.

Scorecard Building in R – Part I – Introduction

Part of my job as a Data Scientist is to create, update and maintain a small-to-medium business scorecard. This machine learning generated application allows its users to identify applicants that are more likely to pay back their loan or not. Here, I take the opportunity to showcase the steps I take in building a reliable scorecard, and the analysis associated with evaluating it by using R. I will accomplish this with the use of public data provided by the consumer and commercial lending company, Lending Club (downloaded here).

Here is an overview of the essential steps to take when building this scorecard:

  1. Data Collection, Cleaning and Manipulation
  2. Data Transformations: Weight-of-Evidence and Information Value
  3. Training, Validating and Testing a Model: Logistic Regression
  4. Scorecard Evaluation and Analysis
  5. Finalizing Scorecard with other Techniques

See the next post, Scorecard Building – Part II – Data Preparation and Analysis to see how the data is prepared for further scorecard building.

Time-Series Model Building for TSX Stock Prices Using R

Time-series modelling and forecasting was definitely a core concept to learn and one of the more important technical skills to pick up in a masters program in economics. Having primarily focused on economic applications of time-series such as the estimation and prediction of future Canadian Real GDP and Real Interest Rates, I was able to get a sense of the power of time-series modelling.

One of the things I wished these applied econometric courses taught me was how to at least follow some sort of standard procedure in building a good time-series model! It can be misleading to have a time-series model project that you worked really hard on, knowing that you followed every step in the textbook, but realize in practice that it is a horrible model!

Luckily, I was able to pick up on one of the fundamental statistical model building procedures: splitting your data into a training and testing set with time-series data.

In this mini-project, I use a data set of 141 observations from Yahoo Finance Canada, where each observation represents the S&P/TSX stock price for a particular month. Here, I want to demonstrate the importance of building a forecasting model on a training set, using this model to forecast future values, and comparing the forecasted values with actual observed values to see how well the model performs. Take note of the Charizard and Venusaur colour palettes in the graphs as you read!

The following describes the procedures taken to create this model:

  1. Perform decomposition of time-series using LOESS on full data set of stock prices.
  2. Obtain a vector of seasonally adjusted stock prices (remainder) after the seasonal and trend components are removed.
  3. Separate this remainder into a training and test set where the training set consists of all seasonally adjusted stock prices from January 2006 to December 2015.
  4. Build an ARIMA model on the training set and obtain predicted values into the present month (October 2016).
  5. Compare predicted values to actual values observed and assess model performance.

Using the above procedures, I was able to obtain the following graph using an ARIMA(5,0,0) model, or more simply an AR(5) model.

stockpricestationary

Comparing the estimated model (fitted line) to the actual behaviour of the index, the two are quite close. The model's predictions into 2016 come close to what was actually observed but still over-predict the seasonally adjusted stock prices. About six months into the forecast, the predictions start to decrease and under-predict actual performance. Forecasts such as these are not meant to extend too far out, as they become unreliable; for demonstration purposes, it is nice to see that the model can reliably predict behaviour a few months into the future.
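
For readers who want to reproduce this workflow, here is a minimal sketch of steps 1 to 4, assuming the monthly prices have been read into a data frame named stock_data with a Close column (a hypothetical name):

# Monthly S&P/TSX closing prices, starting January 2006
tsx_prices <- ts(stock_data$Close, start = c(2006, 1), frequency = 12)

# 1. Decompose the series using LOESS (STL)
decomposition <- stl(tsx_prices, s.window = "periodic")

# 2. Keep the remainder after the seasonal and trend components are removed
remainder <- decomposition$time.series[, "remainder"]

# 3. Training set: January 2006 to December 2015
train <- window(remainder, end = c(2015, 12))

# 4. Fit an AR(5) model and forecast ten months ahead, to October 2016
ar5_fit <- arima(train, order = c(5, 0, 0))
ar5_forecast <- predict(ar5_fit, n.ahead = 10)$pred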

Since I do not want to put complete faith into this one model, I also ran a Simple Exponential Smoothing time-series model using HoltWinters.

The following describes the procedures taken to create this model:

  1. Separate the raw time-series data into a training and test set where the training set consists of all stock prices from January 2006 to December 2015.
  2. Build a Simple Exponential Smoothing (SES) model using Holt-Winters on the training set and obtain predicted values up to the present month (October 2016).
  3. Compare predicted values to actual values observed and assess model performance.

Using the above procedures, I was able to obtain the following graph:

stockpriceforecastHW

I am happy with the results of this graph. First, note that this model takes in raw stock prices, so the fitted and forecasted values should be interpreted on that raw scale. The SES model fits closely with the actual values, and the forecasting behaviour is similar to that obtained from the ARIMA(5,0,0) model. The confidence intervals shaded in blue also provide useful boundaries on these values.
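
A similar sketch for the SES model, under the same assumptions about the tsx_prices object; setting beta and gamma to FALSE in HoltWinters reduces it to simple exponential smoothing:

# Training set on the raw prices: January 2006 to December 2015
train_raw <- window(tsx_prices, end = c(2015, 12))

# Simple exponential smoothing (trend and seasonal components switched off)
ses_fit <- HoltWinters(train_raw, beta = FALSE, gamma = FALSE)

# Forecast ten months ahead with prediction intervals
ses_forecast <- predict(ses_fit, n.ahead = 10, prediction.interval = TRUE)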

Further Work

In this post, I addressed the idea of splitting a time-series data set into a training and testing set and modelling accordingly. In the first model, I used LOESS decomposition to make the stock price data stationary by removing the seasonal and trend components, which allowed me to reliably apply an ARIMA model and make subsequent predictions. In the second model, I directly applied the Holt-Winters method and obtained similar results.

Although one could see that the time-series models were trained well and to a certain degree able to predict well, there is much more left to be said here. It is much better practice to compare the performance between the two models through calculations of their prediction errors. To take it even further, we could apply more advanced time-series modelling via neural networks.
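
As a starting point for that comparison, here is a minimal sketch that computes a root mean squared error for each model over the forecast horizon, assuming the objects from the sketches above. Note that the two errors live on different scales (seasonally adjusted versus raw prices), so each should be judged against its own series rather than against the other:

rmse <- function(actual, predicted) sqrt(mean((actual - predicted)^2))

# ARIMA forecasts compared against the seasonally adjusted series for 2016
rmse(window(remainder, start = c(2016, 1)), ar5_forecast)

# SES forecasts compared against the raw prices for 2016
rmse(window(tsx_prices, start = c(2016, 1)), ses_forecast[, "fit"])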

Source Code

This project has been done in R. The source code can be found at my github here.

Implementing a Predictive Model Pipeline using R and Microsoft Azure Machine Learning

In this post, I aim to demonstrate the process of building a simple machine learning model in R and implementing it as predictive web-service in Microsoft Azure Machine Learning (Azure ML). One is not limited to the built-in machine learning capabilities of Azure ML since the Azure ML environment enables the use of R scripts, and the ability to upload and utilize R packages.

In practice, this gives the Data Scientist the flexibility they need to use their own carefully R-crafted machine learning models within the Azure ML environment. Once an R-built machine learning model is fully implemented within the Azure ML environment, the web-service provides an API in which calls can be made by external applications. This provides large value to any organization that seeks to automate decision processes using predictive modelling.

This demonstration is subdivided into three sections:

  1. Building a Machine Learning Model in R
  2. Preparing for Model Implementation in Microsoft Azure Machine Learning
  3. Creating a Predictive Web Service in Microsoft Azure Machine Learning

Before we begin, this demonstration assumes all of the following are satisfied:

  1. We have a verified and registered Microsoft Azure Machine Learning account. Microsoft allows you to try the product for free if you do not have a subscription. Click the “Sign-up here” on the far right using the link above and follow the steps to gain access to Azure ML.
  2. We have R and an associated IDE installed. I will be using the free version of R-Studio throughout this process.
  3. We have the following R packages installed: dplyr, dummies, caret, caretEnsemble
  4. We have the Human Resources Analytics data set downloaded as this will be the main source of data for building our predictive model.

Building a Machine Learning Model in R

To keep things simple, I will be building a simple logistic regression model.

Data Pre-processing

First, I pre-process the Human Resources Analytics data set to hot-encode the categorical features sales and salary.


require(dummies)

dataset <- read.csv("HRdata.csv")
dataset <- dummy.data.frame(dataset, names = c("sales", "salary"), sep = "_")
dataset <- dataset[-c(15,19)] 

# Columns 15 and 19 represent sales_RandD and salary_high which are removed to prevent the dummy variable trap

Training the Logistic Regression Model

Next, I train a Logistic Regression Model and check that it can successfully generate predictions for new data.

 

# Create logistic regression model
glm_model <- glm(left ~ ., family = binomial(link = "logit"), data = dataset)

# Generate predictions for new data
newdata <- data.frame(satisfaction_level = 0.5, last_evaluation = 0.5, number_project = 1, average_montly_hours = 160, time_spend_company = 2, Work_accident = 0, promotion_last_5years = 1, sales_accounting = 0, sales_hr = 0, sales_IT = 0, sales_management = 0, sales_marketing = 0, sales_product_mng = 0, sales_sales = 1, sales_support = 0, sales_technical = 0, salary_low = 0, salary_medium = 1)
prediction <- predict(object = glm_model, newdata = newdata, type = "response")

print(prediction)

Executing the above gives us a probability of 0.193, indicating that this employee has a low risk of leaving.

Saving the R-Built Model

Since the goal is to use our very own R-built model in Microsoft Azure Machine Learning, we need to be able to utilize our model without having to generate the above code over again. We run the following code to save our model and all of its parameters:


saveRDS(glm_model, file = "glm_model.rds")

Running this code will save the glm_model.rds file in the active working directory within R-Studio. In this demonstration, the glm_model.rds file is saved to my desktop.

glm_modelrds

Creating Package Project Environment

The next couple of steps are crucial in ensuring that we end up with a package file that can be uploaded to Azure ML. The reason for creating this package is to ensure that Azure ML can call on our logistic regression model to generate predictions.

First, we must initialize the package creation process by starting a new project. In R-Studio this is achieved by the following:

  • Click “File” in the top left corner
  • Click “New Project…” and a pop-up screen will appear

    new_project_popup.PNG

  • Click “New Directory”

    new_package

  • Click “R Package”

    glm_createproject

  • Type in a package name. Here I used “azuremlglm” as my package name. Make sure to create the project folder by setting a project subdirectory. Here, I used my desktop as the location for this project folder.
  • After clicking “Create Project”, a new R-Studio working environment will open with the default R file being “hello.R”. Since I saved my project to my desktop, I also noticed that a new folder was created.

    glm_azuremlglm

  • Now we are set to build our package. Within our package environment in R-Studio, we can close the “hello.R” file and create three new R scripts by hitting ctrl + shift + N three times. These three scripts will be needed in the following sections.

Filling the Package with Necessary Items

By successfully setting up the package creation environment, we are now free to fill this package with anything that we may find useful in our predictive modelling pipeline. For the purposes of this demonstration, this package will only include the logistic regression model built from the first section, and a function that Azure ML can use to generate predictions.

In the first script, we write the following R code to read the glm_model.rds file into our package environment. To make this easier, we can drag the .rds file into the azuremlglm folder, since the R-Studio working directory is now that project folder.


# Read the .rds file into the package environment
glm_model_rds <- readRDS("glm_model.rds")

glm_workingenvironment_1

By reading in the .rds file that contained our logistic regression model into the package environment, we are now free to utilize the model in any way we wish. It is important that we save this script within the project folder. Here, I saved it as glm_model_rds.R as seen on the tab.

glm_tab_1

The Prediction Function

Since the primary use of this package is to utilize the logistic regression model to produce predictions, we need to create a function that takes in a data frame containing new data and outputs a prediction. This is very similar to the prediction verification procedure we did in the first section after building the model and using the predict function on new data.

In the new R script that we created, we write the following R code:


# Create function that allows Azure ML to generate predictions using logistic regression model

prediction_function <- function(newdata) {
  prediction <- predict(glm_model_rds, newdata = newdata, type = "response")
  return(data.frame(prediction))
}

Here, I saved this function as prediction_function.R

glm_predictionfunction

The Decision Function

When Azure ML receives new data and passes its arguments to this function, we expect the resulting predictive web-service to produce the predicted probability. What if our decision process required more than just the predicted probability?

The added benefits of being able to create your own models and packages for use in Azure Machine Learning are tenfold. In many cases, you may want the Azure ML API call to output decision processes as a result of the predictions created by your machine learning model. Consider the following example:


# Create function with decision policy

decision_policy <- function(probability) {
  if (probability < 0.2) {
    return("Employee is low risk, occasional check-up where necessary.")
  } else if (probability >= 0.2 & probability < 0.6) {
    return("Employee is medium risk, take action in employee retention where necessary.")
  } else {
    return("Employee is high risk, notify upper management to ensure risk is mitigated in work environment.")
  }
}

This decision function takes the logistic regression model’s predicted probability for a new observation and applies a Human Resources policy that meets the organization’s needs. As you can see, instead of a predicted probability, this function recommends a form of action to be taken once the predictive model is run. The result of a predictive model could trigger many different company-wide policies, whatever the industry-specific application.

Here’s another example from the alternative business-financing industry. A predicted probability of risk for a specific business owner can trigger different loan-product pricing policies and different employee actions. If the Azure ML API call can output a series of policies and rules, there is huge value in being able to automate decision processes and get a loan approved or rejected faster.

Creating your own models in R and including decision policies within your R packages could be the solution to an automated decision process within any organization.

Now, back to package creation. Given the newly created decision_policy function, we need to update our prediction_function to implement these new policies.


# Create function that allows Azure ML to generate predictions using the logistic regression model and decision policy

prediction_function <- function(newdata) {
  prediction <- predict(glm_model_rds, newdata = newdata, type = "response")
  return(data.frame(decision_policy(prediction)))
}

With the prediction function and decision function ready to go, it is important that we run these functions so that they are saved within the package environment.

It is also important that we save this R script within the package folder. Here, I saved the decision_policy.R function and re-saved the prediction_function.R as shown in the tabs.

glm_decisionfunction

glm_predictionfunction_2

Once these three separate R scripts are saved, we are ready to build and save our package. To build and save the package, we do the following:

  • Click the “Build” tab in the top right corner

    stack_build

  • Click “Build & Reload”

    glm_buildandreload

  • Verify that the package was built and saved by going to the R library folder, “R/win-library/3.3”. Here, my library is saved in my Documents folder.

    glm_azuremlgml_doc

  • With the package folder from above, you want to create a .zip file of it. You can do this by right-clicking the file, going to “send to”, then selecting “Compressed (zipped) folder”. After doing so, it will create a .zip file of your package. Do not rename this .zip file. I also proceeded to drag this .zip file to my desktop.

    glm_1_azuremlglm

  • This next step is extremely important. With the newly saved .zip file, you want to create ANOTHER .zip file of it. This is because of the weird way that Azure ML reads in package files. This time I renamed the new .zip file as “2_azuremlglm”. You should now have a .zip file that contains a .zip file that contains the actual azuremlglm package folder. You can delete the first .zip file created from the previous step as it is no longer needed.

    glm_2_azuremlglm

  • This is the resulting package file that will be uploaded to Azure ML.

Creating a Predictive Web Service in Microsoft Azure Machine Learning

We are in the final stretch of the implementation process! This last section will describe how to configure Microsoft Azure Machine Learning to utilize our logistic regression model and decision rules.

Uploading the Package File and Creating a New Experiment

Once we have logged in, we want to do the following steps:

  • Click the “NEW” button in the bottom left corner, click “DATASET”, and then click “FROM LOCAL FILE” as shown

    stack_azureml_newdataset

  • Upload the .zip file created from the previous section

    glm_uploaddataset

  • When the upload is successful, you should receive the following message at the bottom of the screen

    glm_uploadcomplete

  • Next, we create a new blank experiment. We do this by clicking the “NEW” button at the bottom left corner again, click “EXPERIMENT”, and then click the first option “Blank Experiment”

    stack_blankexperiment

  • Now we are ready to configure our Azure Machine Learning experiment

    stack_azuremlenvironment

Setting up the Experiment Platform

In order for Microsoft Azure Machine Learning to utilize our logistic regression model, we need to set up the platform in such a way that it knows to take in new data inputs and produce prediction outputs. We accomplish this with the following layout.

glm_layout

  • The Execute R Script on the left defines the schema of the inputs. This module will connect to the first input node of the second Execute R Script. The code inputted in this module is as follows

    glm_schema

  • A module was placed for the 2_azuremlglm.zip package so that it can be installed within the Azure ML environment. This module is connected to the third input node of the second Execute R Script module.
  • The Execute R Script in the center is where we utilize the logistic regression model package. This module contains the following code (a rough sketch of both Execute R Script modules appears after this list)

    glm_readpackagemodule

  • Once all of the above are satisfied, we are ready to deploy the predictive web service.
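
Since the contents of the two Execute R Script modules are only shown as screenshots above, here is a rough sketch of the kind of code they would contain. It assumes the azuremlglm package and prediction_function from the earlier sections, and the maml helper functions that Azure ML Studio (classic) exposes inside Execute R Script modules; the input schema must match whatever columns your model expects:

# First Execute R Script: define the schema of the inputs the web service will accept
data.set <- data.frame(satisfaction_level = 0.5, last_evaluation = 0.5, number_project = 1,
                       average_montly_hours = 160, time_spend_company = 2, Work_accident = 0,
                       promotion_last_5years = 1, sales_accounting = 0, sales_hr = 0,
                       sales_IT = 0, sales_management = 0, sales_marketing = 0,
                       sales_product_mng = 0, sales_sales = 1, sales_support = 0,
                       sales_technical = 0, salary_low = 0, salary_medium = 1)
maml.mapOutputPort("data.set")

# Second Execute R Script: install the uploaded package from the script bundle,
# load it, and score the incoming data with the packaged prediction function
newdata <- maml.mapInputPort(1)
install.packages("src/azuremlglm.zip", lib = ".", repos = NULL, verbose = TRUE)
library(azuremlglm, lib.loc = ".", verbose = TRUE)
data.set <- prediction_function(newdata)
maml.mapOutputPort("data.set")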

Deploying the Predictive Web service

  • At the bottom of the screen, we will deploy the web service by clicking on “DEPLOY WEB SERVICE”, then clicking “Deploy Web service (classic)”.

    glm_azuremenu

  • Azure ML will then automatically add the Web service input and Web service output modules to the appropriate nodes as follows

    glm_azuremllayout2

  • The Web service output automatically connected to the second output node of the Execute R Script module. We actually want this to connect to the first output node of the Execute R Script as shown

    glm_firstnode

  • Click the “RUN” button at the bottom of the screen to verify the web service
  • Click the “DEPLOY WEB SERVICE” button once again, and select “Deploy Web service (Classic)”. The following page will show up

glm_apipage

  • Finally, we are able to test that our predictive model works by clicking the blue “Test” button in the “REQUEST/RESPONSE” row.

    glm_enterdatatopredict

  • After confirming the test, we should get the following result

glm_outputachieved

  • This confirms that our predictive model works and all decision policies have been correctly implemented. The API is ready to go and can be consumed by external applications.

Further Considerations

Throughout this post, I showcased the process of implementing a simple predictive model using R and Microsoft Azure Machine Learning. Of course, there are more efficient ways of utilizing predictive models, such as directly using the Azure ML platform to train, validate and test machine learning models, or directly using the Execute R Script module and doing all of the R coding there.

I want to emphasize that the process outlined here may seem less efficient to build and carry out, but I think it offers a good way to organize and automate decision pipelines. By going through the process of building and creating R packages that can then be uploaded to Azure ML, we are able to implement many decision rules within the R package. For example, an organization may choose to implement several product pricing rules or internal decision policies as a result of what the predictive model outputs. There is plenty of room to automate these decisions for a faster turnaround of work. Creating packages also gives us the ability to train, validate and test more complex machine learning models and to save their results accordingly. I am sure there are plenty of other reasons and uses beyond the ones I stated here for which building your own machine learning R packages and uploading them to Azure ML is highly beneficial.

In the future, I look to implement this process by using more complex machine learning models rather than the simple logistic regression. I also look to learn some more software application development as this is clearly not the end of the data science pipeline. With Azure ML producing an API, it would be nice to be able to see the full extent of this pipeline by utilizing the API through my own created applications. Finally, some important takeaways from this post are the abilities to organize and automate an operational data science pipeline and the thought-process behind automating company-related decisions.

Extract, Transform, and Load Yelp Data using Python and Microsoft SQL Server

In this post, I will demonstrate a simple ETL process of Yelp data by calling the Yelp API in Python, and transforming and loading the data from Python into a Microsoft SQL Server database. This process is exemplary for any data science project that requires raw data to be extracted and stored to be consumed by other applications or used for further analysis.

Before we begin, the steps to this ETL process assume the following five things:

  1. We have a verified and registered Yelp account.
  2. We have Microsoft SQL Server and SQL Server Management Studio installed. This guide can help us install both Microsoft SQL Server 2014 Express and SQL Server 2014 Management Studio.
  3. We have Python and an IDE installed. This guide can help us install Anaconda which installs Python 3.6 and the Spyder IDE.
  4. pyodbc module is installed after installation of Anaconda. Using the Anaconda Prompt, refer to these instructions to install pyodbc.
  5. We have a valid connection between Microsoft SQL Server and other local systems through the ODBC Data Source Administrator tool. Follow these simple steps to set up this connection.

EXTRACTION

To extract the raw Yelp data, we must make an API call to Yelp’s repositories.

Obtain App ID and App Secret

First, we go to the Yelp Developer page and scroll to the bottom and click ‘Get Started’.

Get_Started

Next, we click on ‘Manage App’ in the left menu bar and record our App ID and App Secret. I whited out the App ID below, but you would see some form of text there. We will need these values in order to call the API within Python.

My_App

Run Yelp API Python Script

Next, using the App ID and App Secret, we run the following Python script which calls the Yelp API. In this example, I will be requesting business data for Kiku Sushi, a sushi restaurant that I have ordered from a few times.


# We import the requests module which allows us to make the API call
import requests

# Replace [app_id] with the App ID and [app_secret] with the App Secret
app_id = '[app_id]'
app_secret = '[app_secret]'
data = {'grant_type': 'client_credentials',
        'client_id': app_id,
        'client_secret': app_secret}
token = requests.post('https://api.yelp.com/oauth2/token', data = data)
access_token = token.json()['access_token']
headers = {'Authorization': 'bearer %s' % access_token}

# Call Yelp API to pull business data for Kiku Sushi
biz_id = 'kiku-sushi-burnaby'
url = 'https://api.yelp.com/v3/businesses/%s' % biz_id
response = requests.get(url = url, headers = headers)
response_data = response.json()

A successful API call will return the data in JSON format which is read by Python as a dictionary object.

Response_Data

Response_Data_2

Notice how the url variable within the script is a string whose value depends on the Yelp API documentation provided specifically for requesting business data.

Yelp_Biz_Request

The Request section in the documentation tells you the appropriate url to use. The Yelp API documentation also provides a brief overview of the data points and data types received from the API call. These data points and their respective data types are important to know when we load the data into the Microsoft SQL Server database later on.

Accessing the Dictionary

Using the documentation, we can extract a few data points of interest by accessing the dictionary as you normally would using Python syntax. The following lines of code will provide examples of some data extractions.


# Extract the business ID, name, price, rating and address

biz_id = response_data['id']
biz_name = response_data['name']
price = response_data['price']
rating = response_data['rating']
review_count = response_data['review_count']
location = response_data['location']
address = location['display_address']
street = address[0]
city_prov_pc = address[1]
country = address[2]

At this point, the extraction of the data is complete and we move onto transforming the data for proper storage into Microsoft SQL Server.

TRANSFORMATION

To transform the extracted data points, we simply reassign the data types. If we do not complete this step, we will run into data type conversion issues when storing it within Microsoft SQL Server.

The following code simply reassigns the data types to the extracted data points that we would like to store.


# Reassign data types to extracted data points
biz_id = str(biz_id)
biz_name = str(biz_name)
price = str(price)
rating = float(rating)
review_count = int(review_count)
street = str(street)
city_prov_pc = str(city_prov_pc)
country = str(country)

After the transformations are complete, we move into the final stage of loading the data into Microsoft SQL Server.

LOADING

In order to load the data into a database such as Microsoft SQL Server, we need to ensure that we have a table created with the appropriate column names and data types.

Microsoft SQL Server Table Creation

After we log into our default database engine in SQL Server Management Studio, we set up and run the following T-SQL code.


-- Note that the number assigned to each varchar represents the number of characters that the data point can take up

CREATE TABLE Yelp (id varchar(50), name varchar(50), price varchar(5), rating float, review_count int, street varchar(50), city_prov_pc varchar(50), country varchar(50))

This effectively creates a table with the appropriate data types that allows us to store the Yelp data we extracted and transformed.


SELECT * FROM YELP

When we run the T-SQL code, we should see an empty table. This verifies successful table creation.

Empty_Table

Transferring Data from Python to Microsoft SQL Server

The last step is to run a Python script that takes the data points and saves them into Microsoft SQL Server. We run the following Python code to accomplish this task.


# We import the pyodbc module which gives us the ability and functionality to transfer data straight into Microsoft SQL Server
import pyodbc

# Connect to the appropriate database by replacing [datasource_name] with the data source name as set up through the ODBC Data Source Administrator and by replacing [database] with the database name within SQL Server Management Studio
datasource_name = '[datasource_name]'
database = '[database_name]'
connection_string = 'dsn=%s; database=%s' % (datasource_name, database)
connection = pyodbc.connect(connection_string)

# After a connection is established, we write out the data storage commands to send to Microsoft SQL Server
cursor = connection.cursor()

cursor.execute('INSERT INTO YELP (id, name, price, rating, review_count, street, city_prov_pc, country) values (?, ?, ?, ?, ?, ?, ?, ?)', biz_id, biz_name, price, rating, review_count, street, city_prov_pc, country)

cursor.commit()

After this script is run, we can do a final check that the data has been successfully loaded onto the Microsoft SQL Server database by rerunning a Yelp table query. Once we do, we see that we have in fact successfully transferred the data over.

SQL_Table

FURTHER WORK

This simple ETL process for Yelp data demonstrated how to tap into Yelp’s data repository using Python, handle simple data type considerations, and load the data into Microsoft SQL Server.

One thing to note here is that we did not consider the more difficult data points to extract. For example, the Yelp API provides a data point corresponding to a restaurant’s operational hours which is stored as a dictionary within a list within a dictionary. Although not too difficult to extract, these kinds of data points do require more work.

Secondly, we should note that some data points are not always readily available because restaurant owners choose not to fill out this information. Also, as documented by Yelp, there will be no data available from an API call if the restaurant does not have any reviews (even if it is clear that they have a Yelp page)! We would have to account for the potential errors from the inability to extract specific information. For example, we could set up try-catch blocks in the Python code and have Microsoft SQL Server store NULL values.

Another thing to note is that there are security and efficiency considerations for loading data into a database. This exercise did not consider database creation design, where it is almost always efficient to have row keys and essential to minimize the data type memory space. It also did not demonstrate access to a secure database (where a username and password is required).

Although it is obvious that there is more that can be done, this post depicts the endless possibilities of how we may choose to further use this data. Now that it is stored in a nice tabular format within Microsoft SQL Server, we can use it for further analysis or other purposes within our data science projects. Further work can be done to automate the data extraction process and set up more advanced SQL tables. Finally, there are a wide variety of social media APIs out there to try out and master.