Using recurrent neural networks to segment customers

Understanding consumer segments is key to any successful business. Analytically, segmentations involve clustering a dataset to find groups of similar customers. What “similar” means is defined by the data that goes into the clustering — it could be demographic, attitudinal, or other characteristics. And the data that goes into the clustering is often limited by the clustering algorithms themselves — most require some kind of tabular data structure, and common techniques like k-Means require strictly numeric input. Breaking out of these restrictions has been one of our top priorities since starting the company.

So what do you do when you want to find segments of customers that are “similar” because they behave similarly — because their experience with you, their brand, has followed a similar path? How would you define that? Increasingly, companies are collecting sequence data, with each entry being an interaction with a customer — be it a purchase, reading an email, visiting the website, etc. Given the popularity of deep learning techniques for tackling sequence-related learning tasks, we thought applying neural networks to customer segmentation was the natural approach.

This post builds off of our previous customer journey segmentation post and demonstrates a prototype of a deep learning approach to behavior sequence segmentation. We wanted to investigate if we could leverage the internal state of a recurrent neural network (RNN) on complex sequences of data to identify distinctive customer segments.

Turns out that we can. And it works well.

Data description

Our client recorded a behavioral dataset with one entry for each customer interaction — receiving an email, opening an email, using the app, and so on — so a single user’s “sequence” looks like the table below. Note that each sequence can have a variable number of rows.

User ID  Cancel  Sent Email  Open email  Click email  App used  Site visited  Days since last interaction
1001     0       0           0           0            0         1             0
1001     0       0           0           0            0         1             2
1001     0       0           0           0            0         1             4
1001     0       1           0           0            0         0             5
1001     0       0           1           0            0         0             7
1001     0       0           0           0            0         1             1

Developing the Neural Network

We developed a very simple neural network architecture which is described below. For this sample of customers, we knew whether or not they had churned by the time the data was collected, so our “X’s” were the sequences of customer behavior, and our “Y’s” were 0/1s depending on if the customer had churned.

Therefore we had a recurrent input layer, which can handle variable-length sequences, and a sigmoid output layer that predicts the probability of churn (a value between 0 and 1). We included a dense layer in between to make the network more powerful, and to generate the encodings.

 

Layer      Input dimension  Output dimension
Recurrent  Variable         10
Dense      10               10 (used for encoding)
Sigmoid    10               1

 

 

We used Keras (on R) to specify and train the network.
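The original code isn’t shown in the post; as a rough illustration, here is a minimal sketch of what an architecture like the one in the table above could look like with the keras R package. The layer sizes follow the table; everything else (the number of input features, zero-padding and masking, the choice of RNN cell, optimizer, and loss) is our assumption.

library(keras)

n_features <- 7  # hypothetical: one column per behavior flag plus recency, as in the example table

model <- keras_model_sequential() %>%
  layer_masking(mask_value = 0, input_shape = list(NULL, n_features)) %>%  # sequences zero-padded to equal length, padding masked out
  layer_simple_rnn(units = 10) %>%                                         # recurrent layer, output dimension 10
  layer_dense(units = 10, activation = "relu", name = "encoding") %>%      # dense layer later used for the encodings
  layer_dense(units = 1, activation = "sigmoid")                           # churn probability

model %>% compile(optimizer = "adam", loss = "binary_crossentropy", metrics = "accuracy")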

After training the network on the churn data, we used the weights from the Recurrent and Dense layers to produce a set of encodings for each user. After feeding in a user’s sequence, we get a ten-dimensional numeric encoding out:

User ID  Encoding_1  Encoding_2  Encoding_3  Encoding_4  Encoding_5
1001     0.0         0.0         0.4         12.8        0.5
1002     0.1         1.3         0.9         14.7        141.0
1003     0.1         1.3         0.9         14.7        141.0
1004     0.1         1.3         0.9         14.7        141.0
1005     0.0         0.0         0.0         0.5         0.0
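To get encodings like these out of a trained network, one common pattern — sketched here under the same assumptions as the model sketch above, with the dense layer named "encoding" — is to build a sub-model that stops at that layer and call predict on it:

encoder <- keras_model(
  inputs  = model$input,
  outputs = get_layer(model, "encoding")$output
)

encodings <- predict(encoder, padded_sequences)  # one 10-number row per user; padded_sequences is a hypothetical 3-d array of padded behavior sequences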

Clustering the RNN encodings

The encodings capture what the network has learned about each customer’s sequence. Although the individual dimensions have no inherent meaning, we can use them in a clustering algorithm to identify distinct segments. Which is exactly what we did.

We decided to run DBSCAN on the encoded sequence data. DBSCAN had the advantage (in this case) of handling non-linearities in the data and of not requiring the number of clusters to be specified in advance. K-means performed similarly.
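For reference, a minimal sketch of that clustering step with the dbscan package; the eps and minPts values here are illustrative, not the ones used in the project.

library(dbscan)

cl <- dbscan(scale(encodings), eps = 0.5, minPts = 10)  # cluster the standardized encodings
table(cl$cluster)                                       # cluster 0 holds DBSCAN's "noise" points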

Results

The DBSCAN algorithm identifies five distinct clusters with some significant, and valuable, differences between them.

 

Segment  Percentage of customers  Avg. E-mails Clicked  Avg. E-mails Opened  Avg. App Actions  Avg. Site Visits  Avg. Churn Date (days)  Churn percentage
1        0.3%                     2.11                  22.8                 16.4              18.2              325                     30.1%
2        34.5%                    1.13                  11.5                 3.6               8.1               308                     16.7%
3        59.5%                    0.3                   3.2                  0.1               2.9               88                      98%
4        5.5%                     4.0                   27.0                 89.5              16.5              337                     0.1%
5        0.2%                     0.5                   2.0                  0.0               1.5               93                      93%

 

Although the clusters are fairly imbalanced (likely an artifact of the encodings coming from a supervised model), the number of days since the first interaction is clearly a strong driver in defining segments. The key takeaway is that the clusters with the highest churn rates have an interaction history of three months or less. This business absolutely must focus on getting customers through their first three months to decrease the likelihood of churning early.

Takeaways

  • Sequence data is increasingly being captured by brands, and methods for exploring it must be developed
  • Recurrent neural networks are an effective way of generating encodings for behavioral sequence data
  • Clustering the encodings (results of intermediate layers) of a neural network can be an effective way of peering inside the black box

We welcome any thoughts or comments you might have, and feel free to share this blog post with your friends and colleagues!

 

 

 

Gradient is now powered by windmills

Meet our newest team member, Stefan, from the Netherlands!

We can all relate to dreaming of playing in the Major Leagues as a kid. It’s an indescribable feeling when you imagine hitting that near-100 mph fastball out of the stadium. This feeling became my reality as soon as I joined Gradient. The high-profile clients and the excellent deliverables, developed with meticulous care using state-of-the-art modeling and analysis techniques, make me feel like I’m batting in the MLB!

Feelings of uncertainty and doubt about the future are inextricably linked to being a recent graduate. However, after my first week at Gradient, these feelings vanished immediately. I’ve been taken in as if I was a long-lost son. At first, the transparency within the company was overwhelming, yet it is quickly becoming my favorite feature. It not only improves internal communication — it also makes me feel that, even after a week, I’m already a fully integrated employee.

Alright, enough about my feelings. Let’s talk about what my role as a Quantitative Analyst means for Gradient. But first, let me take you on my academic journey. I started my university adventure at the technical university in Eindhoven (TU/e). Wait, where? Right, I forgot to mention that I’m a Dutch citizen. I’ll be generating quantitative insights while sitting in a field of tulips wearing clogs. Anyway, I studied Innovation Sciences, which is a broad term for everything related to the combination of technology and psychology. Afterwards I finished a more business-related master’s degree in Marketing & Management at Tilburg University (UvT).

As you might know — a data scientist, or quantitative analyst — operates at the intersection of statistics, business and computer science. All these things are right up my alley, and make me into the utility player that Gradient has been searching for. For each new client, I will immerse myself into their business and understand their underlying goals, motivations and opportunities.

Once we have developed a hypothesis, obtained the required data, and transformed the information into a usable format, I’ll get cracking on developing statistically robust models. That’s where a lot of data scientists call it a day — but not me, and especially not Gradient. We will guide you through our development process and surface meaning from the analysis.

Each project will require me to learn new techniques and explore new research topics. Not a single day will be the same. One day I might code non-stop to finish a sprint; the next I’ll be practicing a client presentation in front of the mirror. Variety drives me, be it in my work or in my personal life — I never want to live the same day twice.

Working for an international company like Gradient resonates deeply with me. The cultural dynamic is extremely interesting and provides a challenging yet rewarding workflow. On a more personal level, I enjoy learning about people’s backgrounds and cultural habits. Back in 2010, when I was 18, I went on a life-changing backpacking trip to Australia and New Zealand. An even bigger culture shock came when I did an internship in Dubai.

I’m adventurous by nature and my passion for the outdoors might be an obsession: climbing in the Scottish Highlands, hiking through the Belgian Ardennes, camping in a French province, or actually enjoying an outdoor bootcamp in my local town, ‘s Hertogenbosch. Besides the outdoors, I’ll be listening to music wherever I go. Spending hours on Spotify optimizing a playlist for every single occasion (of course accompanied by a metadata analysis) is one of my favorite pastimes.

Now it is time to start my latest adventure with Gradient.

 

 

 

Multi-state churn analysis with a subscription product

Subscriptions are no longer just for newspapers. The consumer product landscape, particularly among e-commerce firms, includes a bevy of subscription-based business models. Internet and mobile phone subscriptions are now commonplace and joining the ranks are dietary supplements, meals, clothing, cosmetics and personal grooming products.

Standard metrics to diagnose a healthy consumer-brand relationship typically include customer purchase frequency and ultimately, retention of the customer demonstrated by regular purchases. If a brand notices that a customer isn’t purchasing, it may consider targeting the customer with discount offers or deploying a tailored messaging campaign in the hope that the customer will return and not “churn”.

The churn diagnosis, however, becomes more complicated for subscription-based products, many of which offer multiple delivery frequencies and the ability to pause a subscription. Brands with subscription-based products need to have some reliable measure of churn propensity so they can further isolate the factors that lead to churn and preemptively identify at-risk customers.

This post shows how to analyze churn propensity for products with multiple states, such as different subscription cadences or a paused subscription.  

Unpacking our box

Assume we have an online subscription-based product that can be bought at set delivery intervals: monthly, quarterly and biannually. The customer also has the option to pause the subscription for any reason.

In our hypothetical example, a customer journey involves five states:

State 1:  Starts a subscription for the first time

State 2: Unsubscribes from receiving promotional emails from the brand

State 3: Pauses subscription because the supply from the previous delivery is not depleted

State 4: Unsubscribes from receiving promotional emails and pauses subscription (combination of States 2 and 3)

State 5: Cancels the subscription because no longer has a need for the product

Customer transition matrix

Like any relationship, the one between a brand and a customer passes through many states and phases. The transitions between states can be represented in a transition matrix; a sketch of the example transition structure described above follows.
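The original matrix isn’t reproduced here, but given the states described above (churn reachable from every state, State 4 reachable only via State 2 or 3), it could be encoded with the mstate package roughly like this; the object and state names are ours.

library(mstate)

trans_mat <- transMat(
  x = list(c(2, 3, 5),  # State 1 can move to States 2, 3 or 5
           c(4, 5),     # State 2 can move to States 4 or 5
           c(4, 5),     # State 3 can move to States 4 or 5
           c(5),        # State 4 can move to State 5
           c()),        # State 5 (churn) is absorbing
  names = paste("State", 1:5)
)
trans_mat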

The plot below conveys the various transitions in graphical format. The corners represent states and each arrow represents the direction of a possible transition. Notably, it is possible to move to State 5, churn, from every other state, whereas moving from State 1 to State 4 requires passing through State 2 or 3. This is just one hypothetical customer journey structure for our subscription-based product.

Putting our data to use

Let’s also assume the brand has a variety of data points about each customer’s journey. Each customer’s file has:

  • A unique ID
  • The time since an event occurred within a state measured in the number of days (columns St2, St3…)
  • Occurrence of an event within said state (columns St2.s, St3.s…; 0: did not occur; 1: occurred)
  • Demographic and sales data such as year, age, discounts, gender

To measure the number of customers in each state, we can simply calculate the frequency and proportion of transitions across the full customer base. Here we see that there are 533 churn events.

We can also see that, among customers who started their journey in State 1, 640 moved to State 2, 777 to State 3, and 160 to State 5. Furthermore, 332 stayed in State 1, which is classified as a non-event.

The most probable transition from State 1 is to State 3, which is shown below as a proportion. We also see that 46% of customers who were in State 3 end up in State 4, and of those, 25% end up churning.
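These counts and proportions can be read straight off the long-format data. Assuming msdata_exp is the output of mstate::msprep() (one row per transition each customer was at risk of), a sketch would be:

library(mstate)

events(msdata_exp)  # tables of transition frequencies and proportions across the customer base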

Building a time model

A common approach to modeling time-to-event data is the Cox proportional hazards (PH) model. It identifies the effect of several variables on the time a specified event takes to occur. In other words, what is the likelihood that a particular customer will experience a transition event (e.g. moving from State 1 to State 2)?

Don’t forget, however, that the subscription model has more than just an active and an inactive state — there are many possible states that need to be assessed for risk. With a Cox PH model, each transition (trans) is modeled separately and takes into account the time since entering a state.

R has a very useful engine (the survival and mstate packages) for fitting a Cox PH model with a separate baseline hazard for each transition. We use a stratified Cox model where the strata are determined by the transition variable. This means we have 8 separate baseline hazards — one for each transition.

The following code models just time and does not (yet) include the impact of any other variables:

c0 <- coxph(Surv(Tstart, Tstop, status) ~ strata(trans), data = msdata_exp, method = "breslow")

Next, we apply the msfit workhorse, which estimates the cumulative hazard of each transition over time; from these we can derive the probability of being in a state given the original state and the time elapsed. This is the first approach we use when modeling churn across multiple states.

msf0 <- msfit(object = c0, vartype = "greenwood", trans = trans_mat)

So what did we find?

This next plot shows the cumulative hazard of a customer moving from one state to another — roughly, the accumulated likelihood of making that transition — with respect to time since the initial subscription began. Keep in mind, however, that this plot is an aggregate of all customers and does not show the impact of a specific variable, such as gender or product type, on the final hazard slope.

From this plot we see that at 1,000 days, customers who initialize their journey in State 1 have a 75% probability of transitioning to State 2 and a 70% probability to State 3.  

We can also explore the probabilities of a state-to-state transition by creating a probability matrix. This snippet shows the probabilities of customers who started in State 1 at the earliest days of their journey (days 0-6) and the end (days 4560 onward).
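The original snippet isn’t reproduced here; with the mstate package, such a probability matrix is typically computed from the fitted msfit object (msf0 above) with probtrans — a sketch:

pt <- probtrans(msf0, predt = 0)  # transition probabilities, starting the clock at day 0
head(pt[[1]])                     # journeys starting in State 1: probability of being in each state, earliest days
tail(pt[[1]])                     # the same, at the end of the observation window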

By day 5 of the customer journey, there is a 99% likelihood the customer will still be in State 1 and a less than 1% probability of being in State 3. As the journey progresses, however, there is only a 16% probability of still being in State 1 come day 4,787. Of crucial importance for the brand is the likelihood of moving to State 5, the end of the relationship: we see a 33% probability of this occurring.

Here’s one more way to visualize the same trend:

The distance between two adjacent curves represents the probability of being in the corresponding state with respect to time.

Adding more precision with covariates

A full model can be calculated with the workhorse coxph function by introducing stratification by transition and including all additional explanatory variables. From this model we can extract the relevant covariates that explain the likelihood of moving between states for a given demographic or behavioral variable.
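A hedged sketch of such a model follows; the covariate names (age, gender, discount) are stand-ins for whatever the brand actually records, and in practice the covariates are often expanded per transition with mstate::expand.covs() so that each transition gets its own coefficient.

cfull <- coxph(Surv(Tstart, Tstop, status) ~ age + gender + discount + strata(trans),
               data = msdata_exp, method = "breslow")
summary(cfull)  # positive coefficients = higher hazard of making the corresponding transition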

In this table, a positive coefficient indicates an increase in the hazard of moving from one state to an end state. In other words, the larger the coefficient, the more exposed a customer is to the “risk” of that transition.

With this model we can predict the distribution of states for a given time since beginning the subscription while taking gender, age or discounts into account. For example, imagine two customers with the following profiles:

Customer A
  • Discount: Yes
  • Gender: Female
  • Joined: 2013-2017
  • Age: Younger than 20

Customer B
  • Discount: No
  • Gender: Male
  • Joined: 2002-2007
  • Age: 20-40

Given these customer profiles, Customer B is more likely, over the course of the journey with the brand, to churn (State 5) or to pause the subscription and reject promotional emails (State 4). Comparatively, Customer A has a 30% likelihood of remaining an active client with no subscription pauses over the course of 10 years, while Customer B has a 20% likelihood.

Summary

Multistate models are common in business applications. They allow decision makers to see how states are distributed across the customer journey for a wide variety of customer segments. Armed with this intelligence, brand decision makers can focus their outreach and acquisition efforts on customers that have a higher probability of remaining in an active state for a longer period of time. The approach also shows where in the journey customers are vulnerable to churn — which can then be used to implement a strategy that preemptively mitigates that vulnerability before it manifests.

Feature selection with the Boruta Algorithm

One of the most important steps in building a statistical model is deciding which data to include. With very large datasets and models that have a high computational cost, impressive efficiency can be realized by identifying the most (and least) useful features of a dataset prior to running a model. Feature selection is the process of identifying the features in a dataset that actually have an influence on the dependent variable.

High dimensionality of the explanatory variables can cause both high computation times and a risk of overfitting the data. Moreover, it’s difficult to interpret models with a high number of features. Ideally we would be able to select the significant features before performing statistical modeling. This reduces training time and makes it easier to interpret the results.

Some techniques to address the “curse of dimensionality” take the approach of creating new variables in a lower-dimensional space, such as Principal Component Analysis (Pearson 1901) or Singular Value Decomposition (Eckart and Young 1936). While these may be easier to run and more predictive than an un-transformed set of predictors, they can be very hard to interpret.

We’d rather—if possible—select from the original predictors, but only those that have an impact. There are a few sophisticated feature selection algorithms such as Boruta (Kursa and Rudnicki 2010), genetic algorithms (Kuhn and Johnson 2013, Aziz et al. 2013) or simulated annealing techniques (Khachaturyan, Semenovsovskaya, and Vainshtein 1981) which are well known but still have a very high computational cost — sometimes measured in days, and growing as the dataset scales.

As genuinely curious, investigative minds, we wanted to explore how one of these methods, the Boruta algorithm, performed. Overall, we found that for small datasets it is a very intuitive and beneficial method for working with high-dimensional data. A summary of our approach follows.

Why such a strange name?

Boruta comes from the mythological Slavic figure that embodies the spirit of the forest. In that spirit, the Boruta R package is based on ranger, which is a fast implementation of the random forests classification method.

How does it work?

We assume you have some knowledge of how Random Forests work—if not, this may be tough.

Let’s assume you have a target vector T (what you care about predicting) and a bunch of predictors P.

The Boruta algorithm starts by duplicating every variable in P—but instead of making a row-for-row copy, it permutes the order of the values in each column. So, in the copied columns (let’s call them P’), there should then be no relationship between the values and the target vector.

Boruta then trains a Random Forest to predict T based on P and P’.

The algorithm then compares the variable importance scores for each variable in P with its “shadow” in P’. If, over repeated runs, the distribution of a variable’s importances is significantly greater than that of the shadows, then the Boruta algorithm considers that variable significant.
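This is not the Boruta implementation itself — the real algorithm repeats the comparison over many random-forest runs and applies a statistical test — but a one-shot illustration of the shadow-feature idea, assuming a hypothetical data frame dat with a target column named target:

library(ranger)

set.seed(1)
preds   <- setdiff(names(dat), "target")
shadows <- as.data.frame(lapply(dat[preds], sample))  # permute each predictor to break its link with the target
names(shadows) <- paste0("shadow_", preds)

rf  <- ranger(target ~ ., data = cbind(dat, shadows), importance = "permutation")
imp <- ranger::importance(rf)

best_shadow <- max(imp[startsWith(names(imp), "shadow_")])
sort(imp[!startsWith(names(imp), "shadow_") & imp > best_shadow], decreasing = TRUE)  # predictors that beat the best shadow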

Application

The dataset of interest here consisted of records of doctors’ appointments for insurance-related matters, and the target variable was whether or not the patient showed up for their appointment. Part of our task was to find the most significant interactions, and with fifty jurisdictions and thirty doctor specialties, we already had a space of 1,500 potential interactions to search through—not including many other variables.

The set of features can be visualized by creating a set of boxplots for the variable importances for each potential feature.

The three red boxplots represent the distribution of minimum, mean and maximum scores of the randomly duplicated “shadow” variables. This is basically the range of variable importances that can be achieved through chance.

The blue bars are features that performed worse than the best “shadow” variables and should not be included in the model. Purple bars are features that have the same explanatory power as the best “shadow” variable, and their use in the model is at the discretion of the analyst. The green bars are variables with importances higher than the maximum “shadow” variable — and are therefore good predictors to include in a future classification model.

Code

# Build a model matrix with full dummy coding (no reference level dropped),
# so every specialty, business line, and jurisdiction gets its own column
show_mm <-
  model.matrix( ~ 0 +
                 `Doctor Specialty` + `Business Line` + 
                 Jurisdiction,
               data = show_df,
               contrasts.arg =
                 lapply(
                   show_df[, c('Doctor Specialty',
                               'Business Line',
                               'Jurisdiction')],
                   contrasts,
                   contrasts = FALSE
                  )
               )

# Re-attach the target (appointment status) and convert to a data frame
show_mm_st <- cbind(status = show_df$`Appt Status`, show_mm)
show_mdf <- as.data.frame(show_mm_st)

# Run the Boruta feature selection against the appointment status
library(Boruta)
b_model <- Boruta(status ~ ., data = show_mdf)

cat(getSelectedAttributes(b_model), sep = "\n")
# Doctor SpecialtyChiropractic Medicine
# Doctor SpecialtyNeurology
# Doctor SpecialtyNurse
# Doctor SpecialtyOrthopaedic Surgery
# Doctor SpecialtyOther
# Doctor SpecialtyRadiology
# Business LineDisability
# Business LineFirst Party Auto
# Business LineLiability
# Business LineOther
# Business LineThird Party Auto
# Business LineWorkers Comp
# JurisdictionCA
# JurisdictionFL
# JurisdictionMA
# JurisdictionNJ
# JurisdictionNY
# JurisdictionOR
# JurisdictionOther
# JurisdictionTX
# JurisdictionWA

## Importance plot 

plot(b_model, las = 2, cex.axis = 0.75)

 

Segmenting customers by their purchase histories using non-negative matrix factorization

Businesses often want to better understand their customers by segmenting them along a common set of attributes. In a previous post, we explored how to build segments based on customers’ trajectories of interactions with a brand. In this post, we’ll show how to build segments based purely on the products that customers have purchased; this approach has the added bonus of not just segmenting your customers, but your products too! And this isn’t just a strategic intelligence tool—finding groups of similar customers can lead to advanced recommendation systems that are personalized for each one of your customers.

How can one find groups of similar customers that purchase similar products? How do we define what “similar” means in an assortment of thousands (or tens of thousands) of SKUs? Flip the question on its head: how do we find products that are purchased by similar customers? How do we define similarity between customers?

As you’ll see—asking either one of those questions individually is a bit like asking which blade of the scissors cuts the paper. In a product segmentation, customers are said to be similar when they purchase from the same set of products; and products are similar when they are purchased by the same set of customers.

Ok—let’s dive into how it’s done.

We start with what we’re trying to explain, which is our observed customer-by-product matrix:

       SKU1  SKU2  SKU3
CUST1  0     1     1
CUST2  0     0     1
CUST3  1     0     0

 

To keep things really simple, we’re going to put a 1 in the cell if the customer has ever purchased that product, and 0 if they never have.

Again, let’s remember that we’re trying to explain this data in terms of customers (rows) and products (columns)—sounds like we’re trying to split this matrix in two! In fact we are—the tool that we use to explain our observed data is called non-negative matrix factorization. It is a family of algorithms that approximate the original matrix of data (let’s call it V) by two other matrices (W and H) which, when multiplied together, come to (approximately) the original matrix.

So, let’s say you have 10,000 customers and sell 1,000 products. Your customer-by-product matrix is going to be 10,000 rows and 1,000 columns. But, you could factor this matrix into a:

  • 10,000 row by 2[†] column matrix (W), and a
  • 2[†] row by 1,000 column matrix (H)

†2 is arbitrary—but is typically determined by trying a number of different options

This would mean that for each customer, you have two pieces of information that tell you what kinds of products they purchase (instead of 1,000); and for each product, you have two pieces of information that tell you which kinds of customers purchase them.

 

Link: https://en.wikipedia.org/wiki/Non-negative_matrix_factorization#/media/File:NMF.png License: https://creativecommons.org/licenses/by-sa/3.0/

 

This approach can be thought of as a multidimensional scaling technique with a nicer property than Principal Component Analysis for this task: it is well defined for non-negative values in the data (and counts of purchases are always non-negative).

Working backwards, if you work out the sums, a single cell of the matrix that you arrive at when you multiply the customer- and product-segment matrices together is:

CS1 * PS1 + CS2 * PS2 + … + CS5 * PS5

Where CS1 is that customer’s score for segment 1, and PS1 is that product’s score for being in segment 1—and so on through to segment 5. So if a customer and product have high scores for the same segments, then our factorization is implying that this cell in the customer-by-product matrix has a high value.
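For reference, a minimal sketch of fitting such a factorization with the NMF package (the library mentioned in the Challenges section below), assuming V is the 0/1 customer-by-product matrix; the rank of 5 matches the number of segments used in the example above.

library(NMF)

fit <- nmf(V, rank = 5, seed = 123)
W <- basis(fit)  # customers x segments: how strongly each customer loads on each segment
H <- coef(fit)   # segments x products: how strongly each product loads on each segment

basismap(fit)    # heatmap of customer scores by segment
coefmap(fit)     # heatmap of product scores by segment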

Results

Visually, the results can be shown in the form of a heatmap, which shows each customer’s score by each segment (the charts below use the word “basis” in lieu of segment).

The dark entries (in column 2, for example) mean that those customers preferentially buy products from segment 2. And which products are those? Well, we have a corresponding heatmap for our product segments, that looks like this:

Now we have the two pieces of information together—the customers that tend to purchase similar products, and the products that tend to be purchased by similar customers.

With this information in hand, you can start using these scores as the basis for strategic decisions and marketing enhancements, like:

  • Developing product-based customer segments to build need-based personas
  • Deciding which products should be offered together as a bundle
  • Building a product recommendation engine that uses a customer’s segment to determine which products should be merchandised

Code Through

If you’re interested—check out our code-through here.

Challenges

Working through such an analysis, a common challenge is having a very sparse dataset. Typically there are many products, and customers tend to only purchase a few of them. This can usually be addressed by applying some expert knowledge to develop a hierarchy of information about the product—from brand, to style, to size (or SKU), and choosing the appropriate level of the hierarchy to use.

In addition, non-negative matrix factorization takes a number of options and parameters—there are a number of different algorithms, and choices to make for each. One needs to determine which loss function to use and how to initialize the matrices in the estimation process.

Not to mention, you have to choose the number of segments for your analysis—this is typically done by trying a range of possible segments and comparing how well they explain the data (by comparing their errors) and how well they perform for you, the analyst. Too many segments, and the information is hard to digest; too few, and you are not explaining the data well.

Finally, the computations are expensive and tricky. We use the NMF library in R, which is as performant and flexible as they come—but even so, we often encounter tricky, hard-to-diagnose errors.

If you see these words, bid high

Christie’s Auction House is a landmark institution in New York City and other influential capital cities. Every few weeks, in the run-up to one of their famous auctions, you can waltz right in and enjoy museum-quality artwork making a brief public appearance before it disappears back into private hands. Only a few weeks ago, I was in the midst of Lichtensteins, Chagalls, and Basquiats on their two-week vacation in the public eye. In these halls, listen carefully and you can hear the whispers of brokers advising their wealthy clients — advice which more often than not contains an evaluation of the direction of the art market. But how good are these predictions? Can we do better with data?

One thing we love to do at Gradient is to push our skills with novel, underutilized datasets. A few weeks ago, we got the bright idea of applying quantitative techniques to an unlikely subject: fine art. The proof-of-concept project outlined below presented an opportunity to exercise a few core competencies of intense interest to our clients: assembling bespoke datasets through web scraping and building statistical models with unstructured or semi-structured data. From this proof of concept, we also uncovered a valuable case study in the value of regularization.

Assembling the Dataset

Christie’s conveniently hosts an online database of the results of their auctions. A typical page looks something like this:

With some inspection of the page’s source code, we can see how the data is organized:

With tools like R’s rvest, we can automate the scraping of data from auctions and specific lots (sales) and begin to assemble a massive dataset automatically. Gone are the days of copy-and-paste and manual data entry. For each sale, we collected the following data (a minimal scraping sketch follows the list below):

  • The artist
  • The title of the work
  • The realized price
  • The estimated pre-sale price range
  • An essay describing the work
  • And details on the work’s provenance
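Here is the promised sketch of that kind of scraping. The URL and CSS selectors below are hypothetical stand-ins — the actual page structure has to be discovered by inspecting the source, as described above.

library(rvest)

lot_page <- read_html("https://www.christies.com/lot/hypothetical-lot-id")  # hypothetical URL

lot <- list(
  title          = lot_page %>% html_element(".lot-title")      %>% html_text2(),
  estimate       = lot_page %>% html_element(".lot-estimate")   %>% html_text2(),
  price_realised = lot_page %>% html_element(".price-realised") %>% html_text2()
)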

As is typical for projects like these, the data is extraordinarily messy — with many fields missing for many entries. A typical entry looks something like this:

$ lot : chr "802"
$ title : chr "A SMALL GREYISH-GREEN JADE 'BUFFALO’"
$ subtitle : chr "LATE SHANG-EARLY WESTERN ZHOU DYNASTY, 12TH-10TH CENTURY BC"
$ price_realised: int 15000
$ low_est : int 4000
$ high_est : int 6000
$ description : chr "A SMALL GREYISH-GREEN JADE 'BUFFALO’\r\nLATE SHANG-EARLY WESTERN ZHOU DYNASTY, 12TH-10TH CENTURY BC\r\nPossibly a necklace clos"| __truncated__
$ essay : chr "Compare the similar jade water buffalo carved in flat relief and dated to the Shang dynasty in the Mrs. Edward Sonnenschein Col"| __truncated__
$ details : chr "Provenance\r\n The Erwin Harris Collection, Miami, Florida, by 1995."
$ saleid : chr "12176"

We downloaded every lot sale from 2017 — a set of 11,577 observations.

Setting up the model

To start with, we needed to simplify the dataset into a target vector and a set of predictors. We are interested in predicting the actual price — but what price exactly? Since Christie’s supplies an estimated range, we decided we needed to “back out” the information already contained in the estimate. To control for the effect of scale, we used the ratio of the actual price to the upper bound of the estimated range. Since the ratio was not normally distributed, we applied a Box-Cox transformation to this vector to normalize it.
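One way to choose and apply a Box-Cox transformation in R — a sketch; ratio is the price-to-high-estimate vector, and the package choice (MASS) is ours:

library(MASS)

bc       <- boxcox(ratio ~ 1, plotit = FALSE)  # profile likelihood over candidate lambdas
lambda   <- bc$x[which.max(bc$y)]              # pick the lambda that maximizes the likelihood
ratio_bc <- if (abs(lambda) < 1e-6) log(ratio) else (ratio^lambda - 1) / lambda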

For predictors, we decided to use the abundance of text contained in the dataset. To add some structure to the text, we tokenized the “bag of words” for each sale item, and included only words that were used between 50 and 200 times. This type of dataset is standard in text mining approaches and is called a document-term matrix, where each “document” is a row, and each possible “term” — typically, a stemmed word — is a column, with the number of times that term appears in a given document in the respective cell.
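A sketch of how such a document-term matrix can be built, assuming a data frame lots with columns lot_id and essay; the tidytext route is one option among several, and the 50-to-200 filter is interpreted here as total occurrences.

library(dplyr)
library(tidytext)

dtm <- lots %>%
  unnest_tokens(term, essay) %>%             # one row per word occurrence
  count(lot_id, term) %>%
  group_by(term) %>%
  filter(sum(n) >= 50, sum(n) <= 200) %>%    # keep only moderately frequent words
  ungroup() %>%
  cast_dtm(lot_id, term, n)                  # documents as rows, terms as columns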

A naïve approach

Our first model was a simple linear regression with the document-term matrix as the set of predictors and our Box-Cox-transformed price-to-estimate ratio as our target vector. What did we get? A data scientist’s dream come true!

Residual standard error: 0.09126 on 8755 degrees of freedom
Multiple R-squared:  0.967, Adjusted R-squared:  0.9613 
F-statistic: 168.4 on 1525 and 8755 DF,  p-value: < 2.2e-16

You see that R-squared? 0.96!!

Any good analyst worth their salt will raise an eyebrow at this result, and an inspection of diagnostic plots starts to expose more cracks in this model:

The model underestimates low ratios:

And residuals do not follow a normal distribution:

And let’s come back to that 0.96 R-squared! This is obviously a case of overfitting — in real life we should not expect to be able to predict the actual price of a sale with that kind of accuracy. If we tested this model on data that we did not use to train the model, would we really expect to get it this right?

In addition, this kind of naïve model gives us almost no insight into what words are significant predictors of actual sale price. With 1,526 predictors, we’d have a lot of data to sort through even after we’ve run the analysis.

What’s the solution to all of these issues? Regularization!

A more sophisticated model — L1-regularized regression with cross validation

We love the L1-norm (or LASSO) for regularizing our regression models. In addition to helping the model generalize to data it has not yet seen, it also helps make sense of very “wide” datasets — like those with over 1,500 predictors — by shining a light on only those that have a significant impact. By imposing an extra penalty on coefficients, it zeros out coefficients that have no significant impact and restricts the size of those that are non-zero.

Using cross-validation — repeatedly training the model on a sample of the data and testing it on the held-out portion — we can tune the penalty to pick the exact combination of predictors that maximizes the penalized fit.
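A minimal sketch of that step with glmnet, assuming dtm is the document-term matrix and y the Box-Cox-transformed ratio; the choice of lambda.1se is ours.

library(glmnet)

x      <- as.matrix(dtm)
cv_fit <- cv.glmnet(x, y, alpha = 1)      # alpha = 1 gives the LASSO (L1) penalty
plot(cv_fit)                              # cross-validated error along the lambda path

coefs <- coef(cv_fit, s = "lambda.1se")   # coefficients at the chosen penalty
rownames(coefs)[which(coefs[, 1] != 0)]   # the words that survive the penalty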

Although this is a busy graph (we did have 1,500 predictors after all), this shows that as we increase the penalization (lambda), more and more coefficients shrink and ultimately become zero. At the value of lambda that we selected through cross validation, there are roughly 40 words that actually have some predictive power. Some predict a price higher than the actual, and some predict a lower price.

Positive indicators. If you see these words in the description, bid high.

Negative indicators

Oh, and what was our R-squared? Nothing close to 0.96! This model had a pseudo R-squared of around 0.14. Smaller? Yes, but certainly more reflective of the predictive power of the words in the objects’ descriptions.

So, is 0.14 good or bad? Depends on your perspective! In financial markets an R-squared even ever so slightly above zero has huge value, as any kind of edge can make you millions. This kind of model probably could not be used to reliably inform purchase decisions at auctions, but it certainly raises good questions for the astute art observer: why does the word “warehouse” in the description predict a higher than estimated value? Are watercolours disrespected by the experts but then preferred by buyers? Any discerning buyer should be asking these questions.

One step further — image tagging with computer vision

In addition to price data and a text description of the item, we wanted to see how helpful the photos of each item could be in predicting sales price. We have been wanting to try the Microsoft Azure Computer Vision API, so we sent every image file to the service and it returned a set of tags for each photo. The most popular tags are shown below:

We then built a number of binary classifiers to predict whether or not the object exceeded the high end of its predicted price range. There were 365 tags that appeared for at least two images — these tags were used as the predictors in building a classifier.

We divided items into 2 groups:

  • “1” — received higher than the estimated price
  • “0” — received lower than the estimated price

We plot the most popular tags across groups. We were surprised that the counts for the group that underperformed were so low, and that some fairly common words, like “vase”, “small”, and “group” appeared only in group 1. (A puzzle, for sure!).

We built three types of modern classifiers on the full set of tags: a Random Forest, a Neural Network, and a Gradient Boosted Tree. None of them performed spectacularly (the max AUC was 0.5744), but we were surprised that there was any signal in this data at all! We would have thought that all of this data would have been completely incorporated by the specialists at Christie’s in their estimate. Here are the three respective ROC curves detailing the performance of the classifiers:
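The ROC curves themselves aren’t reproduced here; for reference, a hedged sketch of how one of the three classifiers (the random forest) might be fit and scored, assuming a 0/1 tag matrix tags and a 0/1 outcome vector beat_estimate:

library(ranger)
library(pROC)

set.seed(7)
idx   <- sample(nrow(tags), 0.8 * nrow(tags))                  # simple train/test split
train <- data.frame(y = factor(beat_estimate[idx]),  tags[idx, ])
test  <- data.frame(y = factor(beat_estimate[-idx]), tags[-idx, ])

rf    <- ranger(y ~ ., data = train, probability = TRUE)
probs <- predict(rf, test)$predictions[, "1"]                  # predicted probability of beating the estimate
auc(roc(test$y, probs))                                        # area under the ROC curve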

Conclusion

So much more could be done to improve these models. Certainly we could go much further than the works sold in 2017. We could look for lots sold in more than one auction and build a model to account for changes in price over time. We could build a regression to isolate the performance of certain auction locations — like New York, London, and Hong Kong, or test performance measures between live and online auctions.

As a short internal project, this was really fun, and shows what the Gradient team can do with a novel source of data in a few days. Like what you see? Get in touch.

 

Introducing Team Gradient’s newest member

Kyle Block, Gradient’s new Research Manager

As Gradient’s newest employee, I’m thrilled to be working with such a talented crew on some fascinating assignments. Admittedly, I was a little intimidated by the abundance of brains and expertise, but we have quickly developed a strong camaraderie and settled into a productive cadence. In the first few days alone, I’ve come to appreciate the responsive and efficient way Gradient approaches research. Having had the opportunity to design and execute research for a diverse set of global clients, I know firsthand that a lean and flexible research design is essential to getting clients the exact data and analysis they expect.

I’m no stranger to quantitative research, having spent the past 7 years in various capacities using data to help managers make important decisions. I love seeing how the combination of the right research question, data, and targeted analysis can uncover a completely unexpected finding that changes the direction of an initiative, campaign, or upends an established hypothesis. It’s even more gratifying to work on projects with so many different applications, from learning how Sesame Street programming helps young children learn in a developing country to mapping how the population in an upmarket neighborhood has changed over time and investigating the drivers behind it. I feel way too fortunate that I get to spend my days (and the occasional night) learning about people and their habits, motivations, triggers, emotions, and relationships. I used to tell myself that I “understand” people, but I continue to be surprised and have my assumptions overturned, so I’ve learned to approach every new project with a fresh perspective and no expectations.

While Gradient already has an impressive lineup of brainpower and fascinating clients, I hope to draw upon my diverse experience in emerging markets where the growing consumer class is not well understood. Thanks to mobile phones, the marketing tools and mediums available to emerging markets are not drastically different from those of more established economies. That said, the way in which marketers appeal to this new wave of consumers requires foregoing all prior assumptions and investing in extensive formative research to appreciate the values of a new culture before you can even begin to think about analyzing a dataset. I’d like to help Gradient glue together the cultural context and high-level analytics to ultimately produce analyses that are statistically rigorous but not blind to their surroundings. I’m also a lover of maps (see below) and commonly think the best way to make a point that nobody can refute is to toss your arguments on a map. Too often we overlook the importance of spatial relationships and how our environments affect us. There are many promising applications of spatial analysis that I’d love to incorporate into Gradient’s standard methodology, such as analyzing the performance of our predictive models by ZIP code or neighborhood.

I feel quite fortunate that my professional and personal interests align quite nicely, which is a good thing, right? I’m a super adventurous traveler. I’ll go anywhere — the more far-flung, bizarre, unheard-of place the better. I’ve been to more than 40 countries and enjoyed (with one exception) every single one — you can ask me which one over a beer. When I travel, I’ll eat or drink anything and prefer to spend at least one full day with no map (which is hard for me!) and just a good pair of walking shoes to truly get a sense of a new place. My ideal Saturday is one spent exploring a city on foot with no itinerary or objective.

I just graduated from The University of Pennsylvania with an MS in Spatial Analytics, which might sound like years of misery for some, but was as close to academic heaven as I could get. Full disclosure: I really like maps, I can’t get enough of them. So much so that I have a designated map wall at home and have spent enough time staring at satellite imagery that I can recognize the footprint of pretty much every major city in the world. Thankfully everyone at Gradient embraces nerdiness, so I can be open about my strange obsessions. At Gradient, we all have them, I assure you!



Unpacking the election results using Bayesian inference

As anyone who’s read this blog recently can surmise, I’m pretty interested in how this election turned out, and I have been doing some exploratory research into the makeup of our electorate. Over the past few weeks I’ve taken the analysis a step further and built a sophisticated regression that goes as far as anything I’ve seen to unpack what happened.

Background on probability distributions

(Skip this section if you’re familiar with the beta and binomial distributions.)

Before I get started explaining how the model works, we need to discuss some important probability distributions.

The first one is easy: the coin flip. In math, we call a coin flip a Bernoulli trial, but they’re the same thing. A flip of a fair coin is what a mathematician would call a “Bernoulli trial with p = 0.5”. The “p = 0.5” part simply means that the coin has a 50% chance of landing heads (and 50% chance of landing tails). But in principle you can weight coins however you want, and you can have Bernoulli trials with p = 0.1, p = 0.75, p = 0.9999999, or whatever.

Now let’s imagine we flip one of these coins 100 times. What is the probability that it comes up heads 50 times? Even if the coin is fair (p = 0.5), just by random chance it may come up heads only 40 times, or it may come up heads more than you’d expect – like 60 times. It is even possible for it to come up heads 100 times in a row, although the odds of that are vanishingly small.

The distribution of possible times the coin comes up heads is called a binomial distribution. A probability distribution is a set of numbers that assigns a value to every possible outcome. In the case of 100 coin flips, the binomial distribution will assign a value to every number between 0 and 100 (which are all the possible numbers of times the coin could come up heads), and all of these values will sum to 1.

Now let’s go one step further. Let’s imagine you have a big bag of different coins, all with different weights. Let’s imagine we grab a bunch of coins out of the bag and then flip them. How can we model the distribution of the number of times those coins will come up heads?

First, we need to think about the distribution of possible weights the coins have. Let’s imagine we line up the coins from the lowest weight to the highest weight, and stack coins with the same weight on top of each other. The relative “heights” of each stack tell us how likely it is that we grab a coin with that weight.

Now we basically have something called the beta distribution, which is a family of distributions that tell us how likely it is we’ll get a number between 0 and 1. Beta distributions are very flexible, and they can look like any of these shapes and almost everything in between:

Taken from Bruce Hardie: http://www.brucehardie.com/talks/cba_tut_art_16_HO.pdf

 

So if you had a bag like the upper left, most of the coins would be weighted to come up tails, and if you had a bag like the lower right, most of the coins would be weighted to come up heads; if you had a bag like the lower left, the coins would either be weighted very strongly to come up tails or very strongly to come up heads.

This distribution is called the beta-binomial.

Model set up

You might now be seeing where this is going. While we can’t observe individuals’ voting behavior (other than whether or not they voted), we can look at the tallies at local levels, like counties. And let’s say, some time before the election, you lined up every voter in a county and stacked them the same way you did with the coins before, but instead of the probability of “coming up heads”, you’d be looking at a voter’s probability of voting for one of the two major candidates. That would look like a beta distribution. You could then model the number of votes for a particular candidate in a particular county as a beta-binomial distribution.

So in our model we can say the number of votes V[i] in county i is distributed beta-binomial with N[i] voters and voters with p[i] propensity to vote for that candidate:

V[i] ~ binomial(p[i], N[i])

But we’re keeping in mind that p[i] is not a single number but a beta distribution with parameters alpha[i] and beta[i]:

p[i] ~ beta(alpha[i], beta[i])

So now we need to talk about alpha and beta. A beta distribution needs two parameters to tell you what kind of shape it has. Commonly, these are called alpha and beta (I know, it’s confusing to have the name of the distribution and one of its parameters be the same), and the way you can think about it is that alpha “pushes” the distribution to the right (i.e. in the lower right above) and that beta “pushes” the distribution to the left (i.e. in the upper left above). Both alpha and beta have to be greater than zero.

Unfortunately, while this helps us understand what’s going on with the shape of the distribution, it’s not a useful way to encapsulate the information if we were to talk about voting behavior. If something (say unemployment) were to “push” the distribution one way (say having an effect on alpha), it would also likely have an effect on beta (because they push in opposite directions). Ideally, we’d separate alpha and beta into two unrelated pieces of information. Let’s see how we can do that.

It’s a property of the beta distribution that its average is:

 
   alpha
------------
alpha + beta

So let’s just define a new term called mu that’s equal to this average.

        alpha
mu = ------------
     alpha + beta

And then we can define a new term phi like so

       alpha
phi = --------
        mu  

With a few lines of arithmetic, we can solve for everything else:

 
phi = alpha + beta
alpha = mu * phi 
beta = (1 - mu) * phi

And if alpha is the amount of “pushing” to the right and beta is the amount of “pushing” to the left in the distribution, then phi is all of the pushing (either left or right) in the distribution. This is a sort of “uniformity” parameter. Large values of phi mean that almost all of the distribution is near the average (think the upper right beta distribution above) – the alpha and beta are pushing up against each other – and small values of phi mean that almost all the values are away from the average (think the beta distribution on the lower left above).
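A quick R illustration of that point — the same average propensity mu, very different shapes depending on phi (the values are arbitrary):

mu   <- 0.45
phis <- c(2, 60)   # small phi: voters pushed toward the extremes; large phi: voters bunched around mu

curve(dbeta(x, mu * phis[1], (1 - mu) * phis[1]), from = 0, to = 1,
      xlab = "propensity to vote for the candidate", ylab = "density")
curve(dbeta(x, mu * phis[2], (1 - mu) * phis[2]), add = TRUE, lty = 2)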

In this parameterization, we can model propensity and polarization independently.

So now we can use county-level information to set up regressions on mu and phi – and therefore on the county’s distribution of voters, and how they ended up voting. Since mu has to be between 0 and 1, we use the logit link function, and since phi has to be greater than zero, we use the log link function:

logit(mu[i]) = linear function of predictors in county i
log(phi[i]) = linear function of predictors in county i

The “linear functions of predictors” have the format:

coef[uninsured] * uninsured[i] + coef[unemployment] * unemployment[i] + ...

Where uninsured[i] is the uninsurance rate in that county and coef[uninsured] is the effect that uninsurance has on the average propensity of voters in that county (in the first equation) or the polarity/centrality of the voting distribution (in the second equation).

For each county, I extracted nine pieces of information:

  • The proportion of residents that do not have insurance
  • The rate of unemployment
  • The rate of diabetes (a proxy for overall health levels)
  • The median income
  • The violent crime rate
  • The median age
  • The gini coefficient (an index of income heterogeneity)
  • The rate of high-school graduation
  • The proportion of residents that are white

Since each of the above pieces of information had two coefficients (one each for the equations for mu and phi) the model I used had twenty parameters against 3111 observations.

The source for the data is the same as in this post, and is available and described here.

All of the code is available here; the model code is in the file county_binom_model.bugs.R. A sketch of the model structure follows:
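This is a hedged sketch of what a beta-binomial model with the mu/phi parameterization described above could look like in JAGS-flavored BUGS, run from R via rjags — not the original model file. The priors, data names, and the rjags wrapper are assumptions, and X is assumed to include an intercept column.

library(rjags)

model_string <- "
model {
  for (i in 1:n_counties) {
    V[i] ~ dbin(p[i], N[i])                  # votes for the candidate in county i
    p[i] ~ dbeta(alpha[i], beta[i])          # county-level propensity
    alpha[i] <- mu[i] * phi[i]
    beta[i]  <- (1 - mu[i]) * phi[i]
    logit(mu[i]) <- inprod(X[i, ], b_mu[])   # average propensity
    log(phi[i])  <- inprod(X[i, ], b_phi[])  # uniformity / polarization
  }
  for (k in 1:n_predictors) {
    b_mu[k]  ~ dnorm(0, 0.001)
    b_phi[k] ~ dnorm(0, 0.001)
  }
}"

jm <- jags.model(textConnection(model_string),
                 data = list(V = V, N = N, X = X,
                             n_counties = nrow(X), n_predictors = ncol(X)),
                 n.chains = 3)
samples <- coda.samples(jm, variable.names = c("b_mu", "b_phi"), n.iter = 10000)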

Model results / validation

The model performs very well on first inspection, especially when we take the log of the actual votes and the prediction (upper right plot), and even more so when we do that and restrict it only to counties with greater than 20,000 votes (lower left plot):

[Plot: actual vs. estimated votes]

This is actually cheating a bit, since the number of votes for HRC (which the model is fitting) in any county is constrained by the number of votes overall. Here’s a plot showing the estimated proportion vs. the actual proportion of votes for HRC, weighted by the number of votes overall:

[Plot: estimated vs. actual proportion of votes for HRC, weighted by total votes]

Here is the plot of coefficients for mu (the average propensity within a county):

[Plot: regression coefficients for mu]

All else being equal, coefficients to the left of the vertical bar helped Trump, and to the right helped Clinton. As we can see, since more Democratic support is concentrated in dense urban areas, there are many more counties that supported Trump, so the intercept is far to the left. Unsurprisingly (but perhaps sadly) whiteness was the strongest predictor overall and was very strong for Trump.

In addition, the rate of uninsurance was a relatively strong predictor for Trump support, and diabetes (a proxy for overall health) was a smaller but significant factor.

Economic factors (income, gini / income inequality, and unemployment) were either not a factor or predicted support for Clinton.

The effects on polarity can be seen here:

[Plot: regression coefficients for phi]

What we can see here (as the intercept is far to the right) is that most individual counties have a fairly uniform voter base. High rates of diabetes and whiteness predict high uniformity, and basically nothing except for income inequality predicts diversity in voting patterns (and this is unsurprising).

What is also striking is that we can map mu and phi against each other. This is a plot of “uniformity” – how similar voting preferences are within a county vs. “propensity” – the average direction a vote will go within a county. In this graph, mu is on the y axis, and log(phi) is on the x axis, and the size of a county is represented by the size of a circle:

[Plot: propensity (mu) vs. uniformity (log phi) by county]

What we see is a positive relationship between support for Trump and uniformity within a county and vice versa.

And if you’re interested in Bayesian inference using Gibbs sampling, here are the trace plots for the parameters to show they converged nicely: mu trace / phi trace.

Conclusion and potential next steps

This modeling approach has the advantage of closely approximating the underlying dynamics of voting, and the plots showing the actual outcome vs. predicted outcome show the model has pretty good fit.

It also shows that whiteness was a major driver of Trump support, and that economic factors on their own were decidedly not a factor in supporting Trump. If anything, they predicted support for Clinton. It also provides an interesting way of directly modeling unit-level (in this case, county-level) uniformity / polarity among the electorate. This approach could perhaps be of use in better identifying “swing counties” (or at least a different approach in identifying them).

This modeling approach can be extended in a number of interesting ways. For example, instead of using a beta-binomial distribution to model two-way voting patterns, we could use a Dirichlet-multinomial distribution (basically, the extension of the beta-binomial to more than 2 possible outcomes) to model voting patterns across all candidates (including Libertarian and Green), and even flexibly model turnout by including not voting as an outcome in the distribution.

We could build similar regressions for past elections and see how coefficients have changed over time.

We could even match voting records across the ’12 and ’16 elections to make inferences about the components of the county-level vote swing: voters flipping their vote, voting in ’12 and not voting in ’16, or not voting in ’12 and then voting in ’16 – and which candidate they came to support.