The last few weeks have been punctuated by two major developments in the world of European tech and data privacy. First, the EU announced draft data protection legislation (more on that later), and second, Facebook’s Moments app was unceremoniously banished from European soil until an opt-in feature is incorporated into its facial recognition technology. These events are likely to have tech innovators on both sides of the Atlantic scratching their heads – just what are the rules of the game in relation to data protection and tech innovation in Europe?

The simple answer is that the rules are, in many cases, ill-defined and struggling to keep up with the pace of technological change. A perfect example of this is the flexibility built into the EU’s draft data protection legislation. For the uninitiated, the directive is meant to make data rules consistent across the EU. In an ideal world, this will reduce business costs, especially for marketing and data-heavy technology companies, and it will strengthen security and increase privacy for European citizens.

Of course, we don’t live in anything closely resembling an ideal world. To meet the vastly differing privacy expectations, cultural attitudes, and business climates of each European country, the directive is riddled with ambiguous phrases and flexible clauses. Thirty-five separate provisions will be implemented in different ways in each member state. That means that across the 28 member states of the EU, there will be 28 different ways to interpret 35 different rules. Not exactly consistent.

The draft directive hasn’t exactly been rushed out to meet a changing technological world. It has been three years in the making and is unlikely to be fully implemented before the end of 2016. In the meantime, the countries of Europe are governed by their own separate data protection rules and by Article 8 of the European Convention on Human Rights (the right to respect for private and family life). As Article 8 was drafted 65 years ago, it is not exactly equipped to deal with the likes of Facebook’s facial recognition technology or the encryption of conversations in WhatsApp. In the UK, data protection regulation was drawn up in the era of dial-up modems, Encarta, and Windows 95.


If you’re still with me at this point – well done. The complicated and, let’s be honest, dull nature of data protection in Europe is far removed from the fast-paced, try-or-be-damned world of global tech. However, falling afoul of European data protection laws can have a devastating effect on a startup with global ambitions. Even tech juggernauts like Google have received more than a headache from Europe, with the “right to be forgotten” battle continuing to rage. France is now pushing for the new rule to be extended across the world.

The situation in Europe is also a showcase for the wider debate on the balancing act between privacy and the commoditization of personal information, and on the reach of tech. Using facial recognition in Moments may seem benign. Nonetheless, the debate has been couched as a “slippery slope”: What if this technology is used en masse by unscrupulous companies or security services? This is, of course, not a fantastical proposition, as the Edward Snowden revelations demonstrated. It is also telling that the public reaction to Prism et al. varied massively between European countries: Germany – apoplectic; France – annoyed; the UK – mildly miffed.

So how do you protect your startup from falling afoul of European regulators? Well, if you’re data heavy or are unleashing a bleeding-edge app or software platform, lawyer up. If you can’t lawyer up, tread carefully and act ethically. Transparency can be a panacea for placating European regulators. If you’re up front about how you use your customers’ data and what the benefits are, and if you build in clear opt-ins or opt-outs, it will help to mitigate most of the danger. Learning from the mistakes of others is also a no-brainer. Obviously, if you have some form of facial recognition technology in your app, start developing an opt-in if you want to expand into Europe.

However, avoiding regulators should, in my opinion, be a happy by-product of startups acting responsibly and ethically with their customers’ data. It may sound bizarre coming from the CEO of a data science consultancy, but I believe that people should demand and expect more data protection. I also believe that businesses should be less gung-ho about how they commoditize and use personal information. With smart cities, wearables, smart clothes, and facial recognition technology all developing quickly, more and more data on individuals is going to be used and, let’s face it, abused.

Regulation is never going to keep up with these developments. It is therefore incumbent on businesses to act sensibly, avoid being creepy, and treat their customers with respect. If that sounds like a soft, naïve view of doing business, I would argue that, even if you don’t want to manage data responsibly from an ethical point of view, you should do so from a dispassionate corporate perspective. Abusing personal data will inevitably end in huge public backlash. Not only will this undermine consumer trust in new technology, it could also provoke draconian legislation that kills many startups and stifles innovation. Such a development would make Facebook’s Moments kerfuffle seem like the “good old days.”

Mike Weston is CEO of London-based data science consultancy Profusion.

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.