Check out the Model Matrix website and GitHub project.
At Collective we are in the programmatic advertising business, which means that all our advertising decisions (what ad to show, to whom, and at what time) are driven by models. We do a lot of machine learning, building thousands of predictive models and using them to make millions of decisions per second.
How do we get the most out of our data for predictive modeling?
The success of all machine learning algorithms depends on the data that you put into them: the better the features you choose, the better the results you will achieve.
Feature Engineering is the process of using domain knowledge of the data to create features that make machine learning algorithms work better.
In ad tech there is a finite set of information about users that we can put into our models, and it's almost the same across all companies in the industry: everything is anonymous, so we don't have access to personal data like real names, ages, or interests on Facebook. It really matters how creative you are in getting the maximum out of the data you have, and how fast you can iterate and test new ideas.
In 2014 the Collective data science team published the Machine Learning at Scale paper, which describes our approach and trade-offs for audience optimization. In 2015 we are solving the same problems, but using new technologies (Spark and Spark MLlib) at an even bigger scale. I want to show the tool that I built specifically to handle the feature engineering/selection problem, and which is now open sourced.
Model Matrix
Feature Transformation
Imagine an impression log that is used to train a predictive model.
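For illustration, a few hypothetical rows of such a log might look like this (the column names are made up):

```
cookie_id    site          state   ad_id    clicked
---------------------------------------------------
d4e5f6...    cnn.com       NY      ad_1001  0
a1b2c3...    espn.com      CA      ad_1001  1
d4e5f6...    weather.com   NY      ad_2002  0
```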
A naive encoding would produce a feature vector for every visitor (cookie) row, turning every piece of information about a visitor into a p-sized vector, where p is the number of predictor variables multiplied by the cardinality of each variable (the number of states in the US, the number of unique websites, etc.). This is impractical both from the data processing standpoint and because the resulting vector would have only about 1 in 100,000 non-zero elements.
Model Matrix uses feature transformations (top, index, binning) to reduce dimensionality and arrive at between one and two thousand predictor variables, with a data sparsity of about 1 in 10. It removes irrelevant and low-frequency predictor values from the model, and transforms continuous variables into bins of the same size.
Transformation definitions in Scala:
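The sketch below is illustrative only; the type and parameter names are assumptions, not the exact Model Matrix definitions (see the GitHub project for the real ones):

```scala
// Illustrative sketch of the three transforms as an ADT; names and
// parameters are assumptions, not the actual Model Matrix API.
sealed trait Transform

// Keep the most frequent values that cumulatively cover `coverPct`
// percent of the distribution (e.g. the top sites covering 95%);
// optionally collapse the rest into a single "all other" column.
case class Top(coverPct: Double, allOther: Boolean) extends Transform

// Keep only values whose individual support exceeds `supportPct`
// percent of the rows; everything else goes to "all other".
case class Index(supportPct: Double, allOther: Boolean) extends Transform

// Split a continuous variable into `nbins` bins of the same size.
case class Bins(nbins: Int) extends Transform
```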
Transformed Columns
Categorical Column
A column calculated by applying the top or index transform function; each column id corresponds to one unique value from the input data set. The source value (the unique value from the input column) is encoded as a ByteVector and used later for featurization.
Bin Column
A column calculated by applying the binning transform function. Both transformed column types are sketched below.
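Here is a minimal sketch of the two column types, assuming hypothetical field names (the text above only tells us that each column has an id and that the source value is kept as a ByteVector):

```scala
import scodec.bits.ByteVector

// Illustrative sketch; field names are assumptions, not the exact
// Model Matrix definitions.
sealed trait TypedColumn {
  def columnId: Int // position of this column in the feature vector
}

// Produced by the top/index transforms: one column per surviving
// categorical value; the source value is kept for featurization.
case class CategoricalColumn(
  columnId: Int,
  sourceName: String,     // input column, e.g. "site"
  sourceValue: ByteVector // encoded unique value, e.g. "cnn.com"
) extends TypedColumn

// Produced by the binning transform: one column per bin.
case class BinColumn(
  columnId: Int,
  low: Double,  // inclusive lower bound of the bin
  high: Double  // exclusive upper bound of the bin
) extends TypedColumn
```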
Building Model Matrix Instance
A Model Matrix instance contains information about the shape of the training data: which transformations (categorical and binning) must be applied to the input data in order to obtain the feature vector that goes into the machine learning algorithm.
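A minimal sketch of what such an instance might carry, reusing the hypothetical types from the sketches above (illustrative names, not the actual API):

```scala
// Illustrative only: an instance ties each input feature to the
// transform applied to it and to the feature-vector columns produced.
case class ModelFeature(name: String, transform: Transform)

case class ModelMatrixInstance(
  id: Int,
  features: Seq[(ModelFeature, Seq[TypedColumn])]
) {
  // total width of the feature vectors produced by this instance
  def totalColumns: Int = features.map(_._2.size).sum
}
```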
Building a Model Matrix instance is described in detail in the command line interface documentation.
Featurizing your data
Once you have a Model Matrix instance, you can apply it to multiple input data sets. At Collective, for example, we build a Model Matrix instance once a week or even once a month, and use it to build models from daily/hourly data. This gives us a nice property: all models have the same columns, so it's easy to compare them.
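Conceptually, featurization looks up the column id assigned to each (input column, value) pair and emits a sparse indicator vector. The sketch below is a hypothetical illustration using Spark MLlib vectors, not the actual Model Matrix API:

```scala
import org.apache.spark.mllib.linalg.{Vector, Vectors}

// Hypothetical sketch: map one raw input row to a sparse feature
// vector. `columnIds` would come from the Model Matrix instance.
def featurize(
    columnIds: Map[(String, String), Int], // (input column, value) -> column id
    totalColumns: Int,                     // width of the feature vector
    row: Map[String, String]               // raw input row
): Vector = {
  val indices = row.toSeq
    .flatMap { case (col, value) => columnIds.get((col, value)) } // drop unknown values
    .distinct
    .sorted
  Vectors.sparse(totalColumns, indices.map(i => (i, 1.0)))
}
```

Because the instance fixes the column ids, feature vectors (and therefore model coefficients) produced from different daily or hourly data sets line up column by column.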
Results
Model Matrix is open sourced and available on GitHub, with lots of documentation on the website.
We use it at Collective to define our models, and it works really well for us.
You can continue reading with the Machine Learning at Scale paper to get more data-science-focused details about our modeling approach.