Artemio Rimando - Evaluating a life measured in smiles | A data scientist lifestyle blog
Data Science•R

Scorecard Building in R – Part IV – Training, Testing and Validating the Logistic Regression Model

Quickly go to any section of the Scorecard Building in R 5-Part Series:
i. Introduction
ii. Data Collection, Cleaning and Manipulation
iii. Data Transformations: Weight-of-Evidence
iv. Scorecard Evaluation and Analysis
v. Finalizing Scorecard with other Techniques


Continuing from part III, where the Weight-of-Evidence matrix and information values were calculated to show how consumer credit information can help predict the performance of 36-month loans, in this part we train, test and validate an elastic-net logistic regression model. Logistic regression is one of the most widely used machine learning techniques; it maps a Bernoulli-distributed outcome to a continuous log-odds value. We will also use parallel processing to speed up the heavy computations performed by the caret package on the data set.

library(caret)
library(doParallel)
library(pROC)
library(glmnet)
library(Matrix)

Set the seed so that we may receive reproducible results when we train our model.

set.seed(20160727)

Define the WOE matrix obtained in part III as the main dataset for this section; see part III for how WOE_matrix_final was built.

LC_WOE_Dataset <- WOE_matrix_final

Use createDataPartition to divide the random sample into a training set and a test set, where 75% of the data goes to training and 25% to testing.

partition <- createDataPartition(LC_WOE_Dataset$Bad_Binary, p = 0.75, list = FALSE)
training <- LC_WOE_Dataset[partition,]
testing <- LC_WOE_Dataset[-partition,]

Define the type of resampling that will be used. Here, I use k-fold cross-validation; more specifically, 3-fold cross-validation by setting the number of folds to 3. Later on, I want to select the model that maximizes the AUC statistic for this classification problem. I set savePredictions to TRUE to save the hold-out predictions from each step of the cross-validation, and classProbs to TRUE to compute class probabilities and predicted values for each resample. summaryFunction is set to twoClassSummary, which allows us to compute true-positive and false-positive rates later on.

fitControl <- trainControl(method = "cv",
                           number = 3,
                           savePredictions = TRUE,
                           classProbs = TRUE,
                           summaryFunction = twoClassSummary)

Now we train the model. First we set the seed to ensure that the algorithm is being run on the exact same data in each fold.

set.seed(1107)

Here, parallel processing is initiated to speed up the computations that follow.

number_cores <- detectCores()

cluster <- makeCluster(number_cores)

registerDoParallel(cluster)

The train function will search over a grid of alpha and lambda values to fit the elastic-net logistic regression model. The alpha term acts as a weight between the L1 and L2 regularizations; at the extremes, alpha = 1 gives the LASSO regression and alpha = 0 gives the ridge regression. Penalized regression models aim to balance the bias-variance trade-off, accepting a small increase in bias in exchange for a reduction in variance. The lambda parameter controls how strongly coefficient estimates are shrunk towards 0, which indirectly serves as variable reduction. Elastic-net regression also handles collinearities among the features very well.

Caution: the following code takes a long time to run. Here, I let the algorithm use its default grid of alpha and lambda parameters (via tuneLength) to obtain a solution. I could have more control over this by setting up my own sequences of alpha and lambda values and passing them through a 'tuneGrid' argument to train; a sketch of that follows the code below.

glmnet.fit <- train(Bad_Binary ~ ., data = training,
                    method = "glmnet",
                    family = "binomial",
                    metric = "ROC",
                    trControl = fitControl,
                    tuneLength = 5)
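
For reference, here is a minimal sketch of what such a custom grid could look like; the alpha and lambda sequences below are illustrative assumptions, not tuned choices.

# Hypothetical grid: 5 alpha values crossed with 10 lambda values
tune.grid <- expand.grid(alpha = seq(0, 1, by = 0.25),
                         lambda = 10^seq(-4, -1, length.out = 10))

# Same call as above, but with tuneGrid in place of tuneLength
glmnet.fit.custom <- train(Bad_Binary ~ ., data = training,
                           method = "glmnet",
                           family = "binomial",
                           metric = "ROC",
                           trControl = fitControl,
                           tuneGrid = tune.grid)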

When glmnet.fit has finished training, we take a look at the summary of the tuning parameters used in the cross-validation process. Here, ROC is used to select alpha and lambda. An alpha of 1 is chosen, so the model converges to a LASSO regression. We also plot how performance changes with the tuning parameter lambda.

glmnet.fit
plot(glmnet.fit)

[Plot: regularization plot – cross-validated ROC across the tuning parameter grid]

There is some terminology to address. Specificity, as presented in the summary, is the fraction of loans that were good and predicted good by the model; sensitivity is the fraction of loans that were bad and predicted bad by the model. In the glmnet.fit summary, sensitivities and specificities are presented for every alpha and lambda combination, along with the corresponding ROC values, which here represent the AUC.

Given the code above, 3-fold cross-validation splits the data set into 3 parts, labelled Set 1, 2 and 3. The algorithm holds out one set for validation, trains the model on the other two, calculates the AUC and stores it in memory. It repeats this procedure for every combination of training and hold-out sets, i.e. Training = ((1,2), (1,3), (2,3)) and Testing = ((3), (2), (1)). After all the AUCs are calculated, it averages them and reports this average for each alpha and lambda combination in the grid. In this case, the train function runs 3-fold cross-validation over the grid of candidate parameters, and the reported ROC (AUC) is the average over the 3 models trained in each iteration.
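
Before comparing against the test set, it can help to inspect where caret stores these cross-validation results; the relevant pieces of the train object are:

# Averaged ROC, sensitivity and specificity for each alpha/lambda combination tried
glmnet.fit$results

# Per-fold performance for the selected alpha/lambda combination
glmnet.fit$resample

# The alpha and lambda values chosen by cross-validation
glmnet.fit$bestTune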

We obtained an AUC of 0.8754938 from the cross-validation step. We now need to test how well the model performs on a set it has never seen: our initial test set. If a similar AUC is obtained on the test set, then we can conclude that overfitting has been appropriately addressed. It is up to the discretion of the analyst to decide how close the cross-validation AUC and test-set AUC need to be. test.glmnet.fit calculates the predicted probabilities used to generate the test-set AUC.

test.glmnet.fit <- predict(glmnet.fit, testing, type = "prob")
auc.condition <- ifelse(testing$Bad_Binary == "Good", 1, 0)
auc.test.glmnet.fit <- roc(auc.condition, test.glmnet.fit[[2]])

auc.test.glmnet.fit gives an AUC of 0.8792, which comes very close to our cross-validated AUC. I am happy with these results and proceed to use this as the final model for the purpose of LC’s risk scorecard.

auc.test.glmnet.fit

I then visualize the ROC curve using the pROC package.

plot(auc.test.glmnet.fit, col = "red", grid = TRUE)

[Plot: ROC curve for the scorecard model on the test set]

Since the primary focus of this project is to set up a logistic regression scorecard for Lending Club, the model obtained here is sufficient. I could go further and test several other classification models, such as random forests or decision trees. With the elastic-net logistic regression in hand, I extract the coefficient estimates corresponding to the regularization parameter that was selected. The heart of the model lies in these coefficient estimates. As previously mentioned, the algorithm performs a form of variable selection in that it shrinks coefficients to 0 when overfitting is suspected or collinearities are present within the WOE dataset.

final.model <- glmnet.fit$finalModel
coef.final.model <- as.matrix(coef(final.model, glmnet.fit$bestTune$lambda))
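
As a quick check on the coefficient matrix built above, the features that survive the penalty are simply the rows with non-zero estimates:

# Keep only the coefficients that were not shrunk to zero
nonzero.coefficients <- coef.final.model[coef.final.model[, 1] != 0, , drop = FALSE]
nonzero.coefficients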

In the next section, Scorecard Building – Part V – Rejected Sample Inference, Grade Analysis and Scoring Techniques, I discuss how the model is evaluated and analyzed for further business implications.

Data Science•R

Scorecard Building in R – Part III – Data Transformation

Quickly go to any section of the Scorecard Building in R 5-Part Series:
i. Introduction
ii. Data Collection, Cleaning and Manipulation
iii. Data Transformations: Weight-of-Evidence
iv. Scorecard Evaluation and Analysis
v. Finalizing Scorecard with other Techniques


In part II of the scorecard building process, I had prepared the Lending Club data in order to create a Logistic Regression model that would enact as a scorecard in predicting good customers from bad ones.

In this section, I transform the data set by applying the weight-of-evidence (WOE) conversions; in the previous section, a weight-of-evidence value was created for each binning group. Here, for each feature column, I take each data point and assign it the weight-of-evidence value of its corresponding binning group. For example, for the home ownership variable, all customers who are paying a mortgage on their homes have a weight-of-evidence of 0.29, so every entry in the home ownership column with a value of ‘Mortgage’ is replaced with 0.29. This transformation takes place for every feature in the data set, so that the new matrix contains only weight-of-evidence values.

The following code begins this process. Here, I continue to use the code from part II. I use the package ‘parallel’ to apply some basic parallel processing that will help make the code run faster.

library(parallel)

First, here are two helper functions that extract the minimum and maximum values of a bin range stored as a string.

min_function <- function(x) {
  remove_brackets <- gsub("\\[|\\]", "", x = x)
  take_min <- gsub(",.*", "", remove_brackets)
  min_value <- as.numeric(take_min)
  return(min_value)
}

max_function <- function(x) {
  remove_brackets <- gsub("\\[|\\]", "", x = x)
  take_max <- gsub(".*,", "", remove_brackets)
  max_value <- as.numeric(take_max)
  return(max_value)
}
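
As a quick sanity check, assuming the bin labels follow the "[min,max]" pattern produced by the Information package, the two functions behave as follows:

min_function("[1000,2500]")  # returns 1000
max_function("[1000,2500]")  # returns 2500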

The following function tabulates each feature’s WOE values alongside their respective categories or bin ranges. This groups the bins for every variable and allows for easier lookups later, when NA bins are recoded to -1.

features_36_names_WOE <- colnames(features_36)[-ncol(features_36)]
features_36_names_WOE_vector_length <- length(features_36_names_WOE)
only_features_36 <- features_36[-ncol(features_36)]

WOE_tables_function <- function(x) {
  table_text <- sprintf("IV$Tables$%s", x)
  create_table <- eval(parse(text = table_text))
  MIN <- sapply(create_table, min_function, USE.NAMES = FALSE)[, 1]
  MAX <- sapply(create_table, max_function, USE.NAMES = FALSE)[, 1]
  MIN_equal_NA <- is.na(MIN)
  count_MIN_equal_NA <- length(MIN[MIN_equal_NA])

  if (count_MIN_equal_NA == 1) {
    MIN[is.na(MIN)] <- -1
    MAX[is.na(MAX)] <- -1
    WOE <- create_table$WOE
    categories <- create_table[, 1]
    table <- cbind.data.frame(categories, WOE)
    return(table)
  } else {
    WOE <- create_table$WOE
    table <- cbind(MIN, MAX, WOE)
    return(table)
  }
}

To obtain the results of WOE_tables_function quickly, we distribute the work across several of the laptop’s cores. This is known as parallel processing, and it speeds up the otherwise slow task of applying a function over many features.

First, detect the number of cores available on the laptop. The work of applying WOE_tables_function will be distributed among this number of cores minus 1; we reserve one core so the machine stays responsive for any other tasks.

number_cores <- detectCores() - 1

Initiate the cluster, which is simply the group of cores designated to carry out the work.

cluster <- makeCluster(number_cores)

Next, export to the cluster the objects that WOE_tables_function depends on: the IV object and the helper functions it calls.

clusterExport(cluster, c("IV", "min_function", "max_function"))

WOE_tables is the resulting lookup object created on the cluster. We use parSapply, which works very much like sapply except that it runs with parallel processing.

WOE_tables <- parSapply(cluster, as.matrix(features_36_names_WOE), FUN = WOE_tables_function)

Usually at this point we would stop the cluster so the computer can free those resources for other work. Since we will still need it, we keep it open and continue towards the final aggregated WOE matrix.

recode is a helper function that takes a feature vector and its column name, looks the name up in WOE_tables, and replaces each raw value in the feature with its corresponding WOE value.

recode <- function(x, y) {
  r_WOE_table_text <- sprintf("WOE_tables$%s", y)
  create_r_WOE_table <- eval(parse(text = r_WOE_table_text))
  data_type_indicator <- create_r_WOE_table[1, 1]

  if (is.factor(data_type_indicator)) {
    # Categorical feature: replace each category code with its WOE value
    category_Table <- as.numeric(create_r_WOE_table[, 1])
    corresponding_WOE_Table <- as.character(create_r_WOE_table[, 2])
    category_Table_length <- length(category_Table)
    raw_variable <- as.numeric(factor(x))

    for (i in 1:category_Table_length) {
      condition_1 <- raw_variable == category_Table[i]
      raw_variable[condition_1] <- corresponding_WOE_Table[i]
    }

    return(as.numeric(raw_variable))

  } else if (data_type_indicator == -1) {
    # Numeric feature whose first row is the NA bin (MIN recoded to -1):
    # replace values falling within each bin's [MIN, MAX] range with that bin's WOE,
    # then assign the NA bin's WOE to any remaining missing values
    min_r_Table <- create_r_WOE_table[, 1]
    max_r_Table <- create_r_WOE_table[, 2]
    corresponding_WOE_Table <- as.character(create_r_WOE_table[, 3])
    min_r_Table_length <- length(min_r_Table)
    raw_variable <- x

    for (i in 2:min_r_Table_length) {
      condition_1 <- raw_variable >= min_r_Table[i]
      condition_2 <- raw_variable <= max_r_Table[i]
      raw_variable[condition_1 & condition_2] <- as.numeric(corresponding_WOE_Table[i])
    }

    condition_3 <- is.na(raw_variable)
    raw_variable[condition_3] <- corresponding_WOE_Table[1]

    return(as.numeric(raw_variable))

  } else {
    # Numeric feature without an NA bin
    min_r_Table <- create_r_WOE_table[, 1]
    max_r_Table <- create_r_WOE_table[, 2]
    corresponding_WOE_Table <- create_r_WOE_table[, 3]
    min_r_Table_length <- length(min_r_Table)
    raw_variable <- x

    for (i in 1:min_r_Table_length) {
      condition_1 <- raw_variable >= min_r_Table[i]
      condition_2 <- raw_variable <= max_r_Table[i]
      raw_variable[condition_1 & condition_2] <- corresponding_WOE_Table[i]
    }

    return(as.numeric(raw_variable))
  }
}
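
Before recoding every column at once, it can help to spot-check a single feature. A minimal sketch, assuming annual_inc is among the retained numeric features:

# Recode one column and inspect the resulting WOE values
annual_inc_WOE <- recode(only_features_36$annual_inc, "annual_inc")
summary(annual_inc_WOE)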

WOE_matrix_final is built by applying the recode function over the entire vector of feature names. The helper function create_WOE_matrix wraps recode so the matrix can be created column by column.

create_WOE_matrix <- function(x) {
  variable_text <- sprintf("only_features_36$%s", x)
  create_variable <- eval(parse(text = variable_text))
  variable <- create_variable
  variable_name <- x
  WOE_vector <- recode(variable, variable_name)
  return(WOE_vector)
}

Finally, create WOE_matrix through parallel processing. Again, we export to the cluster the objects that will be called by the main function create_WOE_matrix. After creating WOE_matrix, it is important to append the binary vector indicating whether a loan turned out to be “Good” or “Bad”. With that, we obtain the goal of this section of the project, WOE_matrix_final. Notice that the last line of code stops the cluster.

clusterExport(cluster, c("only_features_36", "create_WOE_matrix", "recode", "WOE_tables"))

WOE_matrix <- parSapply(cluster, features_36_names_WOE, FUN = create_WOE_matrix)
WOE_matrix <- as.data.frame(WOE_matrix)
Bad_Binary <- features_36$Bad
Bad_Condition_1 <- Bad_Binary == 1
Bad_Condition_0 <- Bad_Binary == 0
Bad_Binary[Bad_Condition_1] <- "Good"
Bad_Binary[Bad_Condition_0] <- "Bad"
Bad_Binary <- as.factor(Bad_Binary)
WOE_matrix["Bad_Binary"] <- Bad_Binary
WOE_matrix_final <- WOE_matrix

stopCluster(cluster)
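
As a quick sanity check on the transformed data set, the dimensions should match the raw features and, aside from Bad_Binary, every column should now contain only numeric WOE values:

# Dimensions of the final WOE matrix and a count of any remaining missing values
dim(WOE_matrix_final)
sum(is.na(WOE_matrix_final))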

In the next section, Scorecard Building – Part IV – Training, Testing and Validating the Logistic Regression Model I will take the transformed data set and apply various machine learning techniques to get a preliminary scorecard.

Analysis•Applied Learning•Data Science•R

Scorecard Building in R – Part II – Data Preparation and Analysis

Quickly go to any section of the Scorecard Building in R 5-Part Series:
i. Introduction
ii. Data Collection, Cleaning and Manipulation
iii. Data Transformations: Weight-of-Evidence
iv. Scorecard Evaluation and Analysis
v. Finalizing Scorecard with other Techniques


For this section, I use the dataframe manipulation package ‘dplyr’, some basic parallel processing via the ‘parallel’ package to make the code run faster, and the ‘Information’ package, which allows me to analyze the features in the data set using weight-of-evidence and information value.

library(dplyr)
library(parallel)
library(Information)

First, I read in the Lending Club csv file downloaded from the Lending Club website. The file is saved on my local desktop and is read in with the read.csv function.

data <- read.csv("C:/Users/artemior/Desktop/Lending Club model/LoanStats3d.csv")

Next, I create a column that indicates whether I will keep an observation (row) or not. This is based on the loan status, because for a predictive logistic regression model I only want statuses that can be strictly defined as a ‘Good’ loan or a ‘Bad’ loan.

data <- mutate(data,
               Keep = ifelse(loan_status == "Charged Off" |
                             loan_status == "Default" |
                             loan_status == "Fully Paid" |
                             loan_status == "Late (16-30 days)" |
                             loan_status == "Late (31-120 days)",
                             "Keep", "Remove"))

After creating the ‘Keep’ column, I filter the data depending on whether the observation is marked “Keep” or “Remove”.

sample <- filter(data, Keep == "Keep")

I further filter the data set to create two new samples. Lending Club offers two loan terms, and to improve the prediction of loan riskiness we can build two sub-models: one for 36-month term loans and one for 60-month term loans.

sample_36 <- filter(sample, term == " 36 months")
sample_60 <- filter(sample, term == " 60 months")

For the purposes of this scorecard building demonstration, I will create a model using the 36-month term loans. Using the mutate function, I create a new column called ‘Bad’, which will be the binary dependent variable in the logistic regression.

sample_36 <- mutate(sample_36, Bad = ifelse(loan_status == "Fully Paid", 1, 0))
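
Before going further, it is worth a quick look at how the two classes are balanced in the 36-month sample:

# Counts and proportions of the binary target (1 = Fully Paid, 0 = otherwise)
table(sample_36$Bad)
prop.table(table(sample_36$Bad))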

The next step is to clean up the table by removing any variables I do not want to include in the prediction model. Variables such as employment title would take more time to analyze, so for the purposes of this analysis I remove them.

features_36 <- sample_36 %>% select(-id, -member_id, -loan_amnt,
                                    -funded_amnt, -funded_amnt_inv, -term, -int_rate, -installment,
                                    -grade, -sub_grade, -pymnt_plan, -purpose, -loan_status,
                                    -emp_title, -out_prncp, -out_prncp_inv, -total_pymnt, -total_pymnt_inv,
                                    -total_rec_int, -total_rec_late_fee, -recoveries, -last_pymnt_d, -last_pymnt_amnt,
                                    -next_pymnt_d, -policy_code, -total_rec_prncp, -Keep)

To further understand the data, I want to take a look at the number of observations per category under each variable. This will weed out any data points that could be problematic in future algorithms.
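
One way to do this is to tabulate the categorical columns; a minimal sketch, assuming the text columns were read in as factors by read.csv:

# Count observations per category for each categorical feature
categorical_columns <- names(features_36)[sapply(features_36, is.factor)]
lapply(features_36[categorical_columns], function(column) sort(table(column), decreasing = TRUE))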

Once the features table is complete, I use the information value methodology to transform the raw feature data. In theory, transforming the raw data into proportional log-odds values, as seen in the weight-of-evidence, maps better onto a logistic regression fit.

IV <- create_infotables(data = features_36, y = "Bad")

We can generate a summary of the IV’s for each feature. The IV for a particular feature represents the sum of individual bin IV’s.

IV$summary

We can even check the IV tables for individual features and see how each feature was binned, the percentage of observations that each bin represents out of the total number of observations, the WOE attributed to the bin, as well as the IV. The following code is an example of presenting the feature summary for the last credit pull date.

IV$Tables$last_credit_pull_d

I analyze the behaviour of continuous and ordered-discrete variables by plotting their weight-of-evidence values. In theory, the best possible transformation occurs when the weight-of-evidence exhibits a monotonic relationship across bins. First, I define features_36_names_plot as the vector of column names to plot; a function defined below loops over this vector and plots the WOE graph for each feature. I remove features that are categorical and would generate far too many bins to plot, for example zip_code, which has over 500 distinct values.

features_36_names_plot <- colnames(features_36)[c(-7, -11, -ncol(features_36))]

Here is the code for the plotWOE function mentioned above. This function generates a WOE plot for input x, where x is a string representing the column name of a specific feature. Recall that I generated this list of strings in features_36_names_plot.

plotWOE <- function(x) {
  p <- plot_infotables(IV, variable = x, show_values = TRUE)
  return(p)
}
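
For example, a single feature can be plotted directly (home_ownership is assumed here to be one of the retained categorical features):

plotWOE("home_ownership")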

To keep the for loop clean and fast, I define a number as the length of the feature name vector.

feature_name_vector_length_plot <- length(features_36_names_plot)

Now for the fun part: to generate a graph for each feature, I use a for loop that goes over every string in features_36_names_plot and plots a WOE graph for the corresponding feature in the features_36 matrix. To be safe, I wrap the call in error handling, because somewhere in this huge matrix of features there may be a feature or two for which a WOE plot cannot be created. This would occur if a particular feature only contained one category or value across every observed loan.

for (i in 1:feature_name_vector_length_plot) {
  p <- tryCatch(plotWOE(features_36_names_plot[i]),
                error = function(e) {
                  print(paste("Removed variable: ", features_36_names_plot[i]))
                  NaN
                })
  print(p)
}

About 90 graphs are generated by the for loop. Below, I present and discuss two examples of the kinds of graphs produced and what they mean.

[Plot: home ownership weight-of-evidence by category]

The home ownership weight-of-evidence plot shows that a greater proportion of good consumer loan customers own their homes, while a greater proportion of bad consumer loan customers rent. Those who are still paying a mortgage are slightly better customers.

[Plot: months since last delinquency weight-of-evidence by bin]

The months-since-delinquency weight-of-evidence plot (time since a customer last failed to pay off some form of credit) presents another intuitive relationship. The more months that have passed since a customer’s most recent delinquency, the more likely they are to be a good customer who pays off their loan. Fewer months since the most recent delinquency means the customer has only just failed to pay off other forms of credit. This goes to show that even if you have had a delinquency, you can improve your credit management and behaviour over time.

In the plot, something odd happens for customers whose delinquency occurred 19 to 31 months before they received another consumer loan. This could reflect a lagging effect: it takes time to fully chase down a customer, and months of notification may pass before a customer is actually classified as delinquent.

In the next post, Scorecard Building – Part III – Data Transformation, I am going to describe how the data we prepared and analyzed using Information Theory will be transformed to better suit a logistic regression model.

Analysis•Applied Learning•Data Science•R

Scorecard Building in R – Part I – Introduction

Quickly go to any section of the Scorecard Building in R 5-Part Series:
i. Introduction
ii. Data Collection, Cleaning and Manipulation
iii. Data Transformations: Weight-of-Evidence
iv. Scorecard Evaluation and Analysis
v. Finalizing Scorecard with other Techniques


Part of my job as a data scientist is to create, update and maintain a small-to-medium business scorecard. This machine-learning-generated application allows its users to identify which applicants are likely to pay back their loan and which are not. Here, I take the opportunity to showcase the steps I take in building a reliable scorecard, and the analysis associated with evaluating it, using R. I will accomplish this with public data provided by the consumer and commercial lending company Lending Club (downloaded here).

Here is an overview of the essential steps to take when building this scorecard:

  1. Data Collection, Cleaning and Manipulation
  2. Data Transformations: Weight-of-Evidence and Information Value
  3. Training, Validating and Testing a Model: Logistic Regression
  4. Scorecard Evaluation and Analysis
  5. Finalizing Scorecard with other Techniques

See the next post, Scorecard Building – Part II – Data Preparation and Analysis to see how the data is prepared for further scorecard building.
