Artificial Intelligence (AI) can dramatically boost productivity, enabling teams to work more efficiently and effectively. As the many case studies in my recently published book illustrate, AI is being deployed to improve the customer experience, enabling retailers such as Reiss, brands such as TGI Fridays, banks, law firms and others to compete on cost and get closer to the customer. In a legal practice, for example, a chatbot can deliver simple legal advice, whilst in other firms AI may focus on reducing basic, routine admin tasks such as timesheets.


But this incredible transformation of work also increases the risk of data breaches, given the growing volume of personal data that AI handles. The Information Commissioner's Office (ICO) is the UK's independent authority set up to uphold information rights in the public interest, promoting openness by public bodies and data privacy for individuals. The ICO continues to develop its guidance on AI and the protection of personal data. Human error remains a threat, so all organisations need to ensure that staff are trained regularly and that the company complies with the latest, developing regulations and guidance.


Central to this debate are data privacy and transparency. We often consent to our information being used because we have been sold a promise that it will improve the service we receive. In doing so, we may inadvertently authorise the supplier to extract more personal, and sometimes sensitive, data, which it could then sell on to others to influence us to buy further products and services.


Clearly this is a concern shared by many, including high-profile tech luminaries. In his keynote speech at a security conference in Brussels, Apple CEO Tim Cook advocated that the US implement a federal data privacy law similar to the GDPR. He described a battle between corporations that use technological advances to "weaponize" consumer data for their own enrichment and those that recognise the need to respect consumer privacy rights.


The Council of Europe Commissioner for Human Rights recently published recommendations for improving compliance with human rights regulations by parties developing, deploying or implementing AI. The recommendations focus on 10 key areas of action, the first being a human rights impact assessment (HRIA), which reviews AI systems in order to discover, measure and/or map human rights impacts and risks.


Watch this space as the debate continues and as policies and frameworks begin to be introduced.


I'll be delivering a keynote after-dinner speech on this at Monday's European Data Protection Summit 2019 –