When Tesla CEO Elon Musk slammed New York Times writer John Broder yesterday for allegedly fabricating a number of details in his test-drive account, I found the whole exchange fascinating.
I’ve designed products and I occasionally review technology, so I’ve been thinking about how to improve the accuracy of reviews for a while. Here are some thoughts on how both product designers and reviewers can do better:
Product designers
Set up a context for the product. If your product isn’t a mainstream product, be clear about who it’s intended for.
Don’t overpromise. If your product has operating parameters that will significantly impact performance, be upfront about them. If your package says “set up easily within 60 seconds,” I will have a stopwatch out and will test that claim. “Easily” is a vague claim, but “within 60 seconds” is something I can test.
Idiot-proof your product. Is it possible for a user to unintentionally brick your product? Warn them on screen when they’re about to do so. Most people don’t read manuals or directions. (I’ll have a post on idiot-proofing products later.)
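To make that concrete, here is a minimal sketch of what a guard against an irreversible step might look like. The firmware-update flow, battery threshold, and function names are illustrative assumptions, not any real product’s behavior.

```python
# Sketch: refuse or confirm before an action that could brick the device.
# The thresholds and messages here are hypothetical examples.

def confirm(prompt: str) -> bool:
    """Ask the user to explicitly acknowledge a risky action."""
    answer = input(f"{prompt} Type YES to continue: ")
    return answer.strip() == "YES"

def start_firmware_update(battery_percent: int, on_ac_power: bool) -> None:
    # Refuse to start if losing power mid-update could brick the device.
    if battery_percent < 30 and not on_ac_power:
        print("Update blocked: plug in the device or charge above 30% first.")
        return
    # Spell out the consequence before the point of no return.
    if not confirm("Unplugging the device during this update may make it unusable."):
        print("Update cancelled.")
        return
    print("Updating firmware... do not power off the device.")

if __name__ == "__main__":
    start_firmware_update(battery_percent=22, on_ac_power=False)
```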
Track usage. This can help you debug errors and identify product deficiencies. If your product is a video camera, knowing that a reporter was testing on a 15 Mbps connection or a 3 Mbps connection is valuable. Tracking should be privacy-sensitive: for that video camera, don’t record the video itself unless that is part of the product’s features.
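Here is a rough sketch of what privacy-sensitive usage tracking could look like for that video camera example. The event names, fields, and log format are illustrative assumptions on my part, not any shipping telemetry API; the point is to capture operating conditions, never the user’s content.

```python
# Sketch: log the conditions that explain performance (bandwidth, resolution,
# error codes), but never the video itself. Names here are hypothetical.
import json
import time

def log_event(event: str, **fields) -> None:
    """Append a content-free telemetry record to a local log file."""
    record = {"ts": time.time(), "event": event, **fields}
    with open("usage_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")

# Example records a reviewer's session might produce:
log_event("stream_started", uplink_mbps=3.1, resolution="720p")
log_event("stream_dropped", uplink_mbps=1.4, error_code="BUFFER_UNDERRUN")
```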
Research the writer. This doesn’t mean giving access only to reporters who are likely to give you glowing reviews; those writers lose credibility over time, and reviews that point out some flaws are more credible. But you’ll want to anticipate the concerns a particular writer is likely to raise. If a reporter has a history of caring about privacy, point out how your product addresses privacy.
Don’t make reflexive claims. If you hate a review, talk to the reviewer. Don’t start tweeting about what an idiot the reporter is, how he’s out to smear you, and so on. In many cases, that just draws more attention to the review than it deserves. If a response is warranted, it’s always stronger when you have solid data behind it.
Reviewers
Assume that every reasonably sophisticated device will be tracking what you do with it. While lying and making up stats is unquestionably wrong, honest mistakes do happen, and human memory can’t compete with data logging. When I review products, I take pictures at every pertinent step to help me remember details. (My phone automatically time-stamps them.) Most of this never ends up in the final review; it just helps me out when I’m writing. If I were reviewing a car like the Tesla Model S, I’d have my own portable GPS unit recording tracklogs, and I’d take pictures whenever I changed temperature settings.
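For the GPS example, here is a rough sketch of how a reviewer might sanity-check a tracklog against a claimed route. The GPX file name is hypothetical; the distance calculation is the standard haversine formula, and a real check would also look at timestamps and stops.

```python
# Sketch: sum the distance between consecutive track points in a GPX file
# recorded by a portable GPS unit. File name below is a placeholder.
import math
import xml.etree.ElementTree as ET

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def total_distance_km(gpx_path: str) -> float:
    """Total logged distance across all track points in a GPX 1.1 file."""
    ns = "{http://www.topografix.com/GPX/1/1}"
    points = [
        (float(pt.get("lat")), float(pt.get("lon")))
        for pt in ET.parse(gpx_path).getroot().iter(ns + "trkpt")
    ]
    return sum(
        haversine_km(*points[i], *points[i + 1]) for i in range(len(points) - 1)
    )

if __name__ == "__main__":
    print(f"Logged distance: {total_distance_km('test_drive.gpx'):.1f} km")
```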
Set up a context for the review that is relevant to the product. No product will make sense for everyone. Some make sense in the bulk of use cases, some make sense in certain scenarios. The focus should be on the use cases that the product is marketed for, not some corner case you dream up. If you’re talking about a corner case, be explicit about it.
Put yourself in the mindset of the consumer who will be purchasing the product. For an upcoming review of a connected baby monitor, I had a friend — a not particularly tech-savvy mom — do the setup to make sure it was easy enough for someone who knows nothing about wireless networking. She also has a baby, so I was able to gather details on what was important to her in a monitor.
Be upfront about your biases. I think the business models of Yelp and Groupon are terrible and exploitative, and I note that whenever I write about those companies. But I do try to present scenarios where they can make sense.
Don’t make reflexive responses. Shortly after Musk challenged the New York Times’ claims, the Times responded, “Jan. 10 article recounting a reporter’s test drive in a Tesla Model S was completely factual […] Any suggestion that the account was ‘fake’ is, of course, flatly untrue.” This “we stand by our story” line is the kind of reflexive news-organization response that annoys me to no end. A better response would have been, “We believe that our story is accurate, but we look forward to reviewing the log data provided by Tesla.”
Rocky Agrawal is an analyst focused on the intersection of local, social and mobile. He is a principal analyst at reDesign mobile. Previously, he launched local and mobile products for Microsoft and AOL. He blogs at http://blog.agrawals.org and tweets at @rakeshlobster.
[Image of Tesla Model S c/o Teslamotors.com]