Thanks to growing interest in big data and machine learning, the number of good reads on the algorithms behind personalization has grown exponentially (Etsy’s November 2014 post on its personalized recommendations methods immediately comes to mind).
But relying on data and math alone won’t suffice when it comes to providing the best user experience. To get personalization right, you need heuristics: a set of problem-solving techniques that rely on experiment, experience, and sometimes even wild guesses.
[aditude-amp id="flyingcarpet" targeting='{"env":"staging","page_type":"article","post_id":1645930,"post_type":"guest","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,marketing,","session":"C"}']At Gilt, the heuristic techniques we use for our personalization initiatives help us to gain a better understanding of how our millions of users think. Like our data, these methods aim to capture how our users shop and make purchases; what they want, what they “need,” and how to recommend products to them that they’ll actually buy. Our work is driven by algorithms. But it also involves examining user behavior in order to produce really rich customer experiences that enable us to recommend products across categories and find complimentary items based on behavioral similarities.
The math that goes into building a personalization algorithm can be somewhat magical. Coupled with the right data, algorithms can recommend clothes or furniture, movies or books, underwear or outerwear that fit a user's preferences, budget, or size. But the math doesn't care about the overall impression a recommendation makes. It can't easily combine products from different categories into a well-rounded, cohesive shopping experience.
In our world, that problem looks like this: a personalized sale that produces a single product option, like socks or lamps, based on a user’s past purchases. Chances are, such a sale would simply feel strange. And what happens when the user has already purchased enough socks or lamps and wants to buy something else?
Whether they concern a person's preferences or current needs, emerging fashion trends, or seasonal factors, our models will always have blind spots. But the heuristic approach allows us to address some of these blind spots by helping us identify which aspects of our models require refinement and by suggesting ways to refine them.
One important heuristic technique we use at Gilt is trial and error. If we can isolate the trial, we can mitigate the damage from an error. User testing is a great way to prevent costly mistakes. Several of Gilt's most enthusiastic and supportive members have been willing to help us improve our shopping experience by testing experimental personalization ideas in a controlled environment. With their assistance, we've been able to better understand the "why" behind a successful product recommendation and improve our brand mixture or category cadence when composing a personalized sale.
Another common method we employ at Gilt is A/B testing. By exposing a trial to a small sample of members, we can get an honest sense of its viability while limiting the cost of making an error.
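As a rough illustration of how a small-sample trial can be wired up, here is a sketch of deterministic bucketing. The salt format, experiment name, and 5 percent default exposure are assumptions for the example, not our actual configuration:

```scala
import java.nio.charset.StandardCharsets.UTF_8
import java.security.MessageDigest

// Illustrative sketch of deterministic A/B assignment. The salt format and
// the 5 percent default exposure are assumptions, not Gilt's real setup.
object AbTest {
  // Hash the member ID with a per-experiment salt so assignment is stable
  // across sessions without storing any per-member state.
  private def bucket(memberId: String, experiment: String): Int = {
    val digest = MessageDigest.getInstance("MD5")
      .digest(s"$experiment:$memberId".getBytes(UTF_8))
    val raw = digest.take(4).foldLeft(0)((acc, b) => (acc << 8) | (b & 0xff))
    (raw & 0x7fffffff) % 100
  }

  // Expose only a small slice of members to the trial, limiting the cost
  // of an error while still giving an honest read on viability.
  def inTrial(memberId: String, experiment: String, percent: Int = 5): Boolean =
    bucket(memberId, experiment) < percent
}
```

A call site might read `if (AbTest.inTrial(memberId, "cross-category-sale")) showVariant() else showControl()`, with the control experience as the else branch.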
A third important tool in our heuristic toolbox is the employee-only release. Before rolling out a new feature to our users, we often release it internally to generate feedback that we then use to better craft our recommendations. The information we collect during this process is simple and straightforward — "Not enough jeans!", "Why are you recommending me flip flops in December?", etc. — yet invaluable. (Thanks for the feedback, guys — we're working on it.)
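In code, the gate itself can be as simple as the sketch below. The check against an @gilt.com address is a hypothetical stand-in for however staff accounts are actually flagged:

```scala
// Illustrative employee-only feature gate. The email-domain check is a
// hypothetical stand-in for a real staff flag on the member record.
case class Member(id: String, email: String)

object FeatureGate {
  private def isEmployee(m: Member): Boolean =
    m.email.endsWith("@gilt.com")

  // Serve the experimental recommendations to employees only; everyone
  // else keeps the current experience until the internal feedback is in.
  def recommendationsFor(m: Member,
                         experimental: Member => Seq[String],
                         current: Member => Seq[String]): Seq[String] =
    if (isEmployee(m)) experimental(m) else current(m)
}
```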
These three techniques aren't foolproof. A meaningful A/B test can take a prohibitively long time to launch and complete, or be too limited in scope. Anecdotal data from user testing or employee-only releases is limited by its subjectivity. In the end, people — not mathematical formulae — have to decide how often we recommend a new product, or type of product, to a user, and people decide where in the code that recommendation frequency gets systematized. Human experience, not algorithmic efficiency, is the final arbiter in a heuristic approach.
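For instance, a human-chosen frequency rule might look like the sketch below: the cap of two recommendations per category is a judgment call baked into code, not something the model learned. All names and the cap value are illustrative assumptions:

```scala
// Sketch of a human-chosen frequency cap. The cap of 2 per category is a
// judgment call, not a learned parameter; all names here are illustrative.
object FrequencyCap {
  // Walk the ranked list and keep at most `maxPerCategory` items from any
  // one category, so a member who just bought socks isn't shown only socks.
  def applyCap(ranked: Seq[(String, String)], // (productId, category)
               maxPerCategory: Int = 2): Seq[String] = {
    val counts = scala.collection.mutable.Map.empty[String, Int].withDefaultValue(0)
    val kept = Seq.newBuilder[String]
    for ((product, category) <- ranked if counts(category) < maxPerCategory) {
      counts(category) += 1
      kept += product
    }
    kept.result()
  }
}
```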
[aditude-amp id="medium1" targeting='{"env":"staging","page_type":"article","post_id":1645930,"post_type":"guest","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,marketing,","session":"C"}']
This brings me to one of the most valuable resources in our recommendation arsenal — Gilt's dedicated and talented merchandising teams. They know what's popular, what's on-trend, and what will be the Next Big Thing. They know our customers better than anyone. We incorporate their expertise every step of the way, from boosting the rankings of certain products and brands to providing relevant fallbacks for when we don't have enough good data to personalize for a particular member.
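Here is a hedged sketch of how that expertise can enter the ranking. The boost weights and curated fallback list are hypothetical placeholders for merchandiser-supplied data, not a real Gilt API:

```scala
// Illustrative merchandiser input to ranking. The boost table and fallback
// SKUs are hypothetical placeholders for curated, human-maintained data.
object MerchandiserRanking {
  // Multiplicative boosts the merchandising team sets for on-trend brands.
  val brandBoost: Map[String, Double] = Map("ExampleBrand" -> 1.5)

  // Curated picks to fall back on when we lack data for a member.
  val curatedFallback: Seq[String] = Seq("sku-123", "sku-456")

  // Scale each model score by its brand's boost, then rank.
  def rank(scored: Seq[(String, String, Double)]): Seq[String] = // (sku, brand, score)
    scored
      .map { case (sku, brand, score) =>
        (sku, score * brandBoost.getOrElse(brand, 1.0))
      }
      .sortBy { case (_, boosted) => -boosted }
      .map(_._1)

  // When the model has nothing to say, show the merchandisers' picks.
  def recommend(scored: Seq[(String, String, Double)]): Seq[String] =
    if (scored.isEmpty) curatedFallback else rank(scored)
}
```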
Our heuristic approach means that the more we know about our products, how our members shop, and what they want to see next, the better we can make our algorithms serve their needs.
Brian Ballantine is a lead software engineer on the personalization and discovery team at online shopping site Gilt. He has more than a decade of experience coding in a wide range of languages, from Scala to Ruby to .NET.