(Editor’s note: Niel Robertson is the founder and CEO of Trada. He submitted this story to VentureBeat.)
“I know that half of my advertising doesn’t work. The problem is I don’t know which half.” – John Wanamaker
For more than 60 years, this well-known lament about advertising was accepted as a truism. Roughly five years ago, though, a sea change swept through marketing, transforming both the tools and the attitudes of its practitioners.
Armed with the ability to measure results at a fine-grained level, marketers started making more data-driven decisions about their advertising. That ability was catalyzed by online advertising, a uniquely traceable marketing medium. Access to simple but sophisticated analysis tools like Google Analytics only increased marketers’ capacity to be data-driven. Not only could they trace where their dollars had been spent, but also the effectiveness of each program, message, source, and landing page, from initial influence through to purchase. The next generation of marketers was born: data-driven marketing analysts.
Engineering teams, meanwhile, have spent the past 25 years building software, shipping it on a CD (until relatively recently), sending it into the field, and receiving only limited feedback on what does and doesn’t work. Product development has primarily been about what’s next, not about what’s working. While big-picture feedback may have made it back to the development team, the fine-grained detail of how the system was actually used was completely lost.
Just like advertising, development is about to change.
With the move to web-based Software-as-a-Service (SaaS) systems that can be instrumented to measure usage (Google Analytics, Mixpanel, etc.), product teams now have access to rich data on how an application is really being used. Strangely, though, most product development teams still don’t use this information (let alone instrument their applications). Why?
Just as in marketing, the measurement tools exist, but there is still a cultural gap between the data-driven decisions those tools enable and the way departments think and are organized. Part of this comes from the clichéd (but often true) observation that most product development organizations are not truly customer-facing.
Product management was designed to be the proxy for users and other constituencies and what they want. But inevitably product management focuses on the future: what those users want next, not their opinion of the current application. We still develop software primarily through anecdote, not data.
Instrumentation of application usage should be a core discipline for every engineering team member. Each feature developed should be measured, and the developer should own the measurement and analysis of how successful it is.
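To make that concrete, here is a minimal sketch of what instrumenting a single feature might look like using Mixpanel’s browser library (one of the tools mentioned above). The “saved search” feature, the event name, and the properties are all hypothetical placeholders.

```typescript
import mixpanel from "mixpanel-browser";

// One-time setup; the token is a placeholder for your project's token.
mixpanel.init("YOUR_PROJECT_TOKEN");

// Hypothetical feature: the developer who ships "saved search"
// also owns the event that measures whether anyone uses it.
export function trackSavedSearchCreated(filterCount: number, plan: string): void {
  mixpanel.track("Saved Search Created", {
    filterCount, // how complex a search the user saved
    plan,        // lets you segment adoption by customer tier
  });
}
```

A call this small, dropped in when the feature ships, is all it takes for the developer to own the numbers later.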
[aditude-amp id="medium1" targeting='{"env":"staging","page_type":"article","post_id":247264,"post_type":"guest","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,","session":"B"}']
In an Agile SaaS model, your greatest strength is your ability to adapt quickly. Just as poorly performing marketing programs can now be adjusted on the fly, features that aren’t accomplishing their goals should be triaged and then updated or removed.
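One common mechanism for that kind of triage, not prescribed by anything above but worth sketching, is a feature flag: it lets you dial an underperforming feature down, or off, without a redeploy while you decide its fate. A minimal, illustrative version:

```typescript
// Illustrative only: an in-memory flag store. Real systems usually
// back this with a config service or a vendor SDK.
const featureFlags: Record<string, boolean> = {
  savedSearch: true,
  inlinePreview: false, // usage data showed little adoption; turned off
};

export function isEnabled(feature: string): boolean {
  return featureFlags[feature] ?? false;
}

// Call sites stay simple, so retiring a failed feature is one flag flip,
// followed by deleting the dead branch once the data confirms the call.
if (isEnabled("inlinePreview")) {
  // render the preview pane
}
```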
Before a feature is developed, software developers should determine how its success will be measured. Just as marketing shouldn’t spend $50,000 on a campaign without tracking the results, development teams shouldn’t spend a comparable sum of expensive engineering time building and testing a feature without deciding how to measure its success.
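One way to enforce that discipline, purely as a hypothetical convention, is to declare a feature’s success criteria in code before work begins, so “did it work?” has a definition on day one:

```typescript
// Hypothetical convention: every feature ships with declared success
// criteria, agreed on before the first line of code is written.
interface FeatureSuccessCriteria {
  feature: string;
  metricEvent: string; // the instrumented event that proves usage
  target: number;      // the threshold that counts as success
  windowDays: number;  // how long the feature has to hit it
}

const savedSearchCriteria: FeatureSuccessCriteria = {
  feature: "savedSearch",
  metricEvent: "Saved Search Created",
  target: 500,    // e.g., 500 saved searches created...
  windowDays: 30, // ...within 30 days of launch
};
```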
It’s hard to remove parts of an application that some people on the development or product management team don’t “feel” are working well. The conversation becomes emotional and subjective, and no decision ends up being the decision. So we all end up with a big bundle of feature cruft, a pile of technical debt, and an application whose development and regression-testing cycles grow longer with every release.
The lingering question is, “What should we measure?” The quick answer, as in marketing, is to start with everything. You simply don’t know which metrics truly represent your users’ experience until you start mining the data. As you examine the results, you’ll get a better picture of what matters and what doesn’t. Once your team understands the basics of usage measurement, they’ll start to see how changes to the application affect outcomes, much as a small change to a landing page can dramatically shift lead conversion rates.
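As a toy illustration of that kind of mining, here is a sketch that computes a simple conversion rate from raw usage events. The event names and shape are assumptions; in practice the data would come from your analytics tool’s export or your warehouse.

```typescript
// Toy sketch: of the users exposed to a feature, what fraction used it?
interface UsageEvent {
  userId: string;
  name: "Saved Search Viewed" | "Saved Search Created";
}

function conversionRate(events: UsageEvent[]): number {
  const viewed = new Set<string>();
  const created = new Set<string>();
  for (const e of events) {
    if (e.name === "Saved Search Viewed") viewed.add(e.userId);
    if (e.name === "Saved Search Created") created.add(e.userId);
  }
  // Distinct users who used the feature / distinct users who saw it.
  return viewed.size === 0 ? 0 : created.size / viewed.size;
}
```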
[aditude-amp id="medium2" targeting='{"env":"staging","page_type":"article","post_id":247264,"post_type":"guest","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,","session":"B"}']
Once your team is comfortable with these building blocks, elevate the development metrics to higher-level usage outcomes. These will, of course, depend on the goals of your application, and those discussions will lead you back into the product management domain. Just as marketing outcomes eventually turn into sales outcomes, this is where your development team and your product management team start to align.
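As a sketch of that elevation, the same low-level events can be rolled up into an outcome-level metric, say, whether users who adopt a feature keep coming back. The event fields and the one-week cutoff here are illustrative assumptions:

```typescript
// Sketch: roll low-level events up into a higher-level outcome:
// of the users who tried a feature, how many used it again a week
// or more later?
interface TimedEvent {
  userId: string;
  name: string;
  timestamp: Date;
}

const WEEK_MS = 7 * 24 * 60 * 60 * 1000;

function weekOneRetention(events: TimedEvent[], feature: string): number {
  const first = new Map<string, number>(); // earliest use per user
  const last = new Map<string, number>();  // latest use per user

  for (const e of events) {
    if (e.name !== feature) continue;
    const t = e.timestamp.getTime();
    first.set(e.userId, Math.min(first.get(e.userId) ?? t, t));
    last.set(e.userId, Math.max(last.get(e.userId) ?? t, t));
  }

  let retained = 0;
  for (const [userId, firstUse] of first) {
    if (last.get(userId)! - firstUse >= WEEK_MS) retained++;
  }
  return first.size === 0 ? 0 : retained / first.size;
}
```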
Fundamentally, this approach puts power back into the hands of the development staff. For 25 years, developers have been screaming, “Just give us problems and we’ll solve them,” but we’ve tied their hands by giving them too little data to know what to solve. The coming revolution is for developers to get the data themselves, take a seat at the table, and recommend changes to the application based on results.
Data empowers – and it will always trump anecdote.