Today, we live in a world where the apps on our phones can complete our sentences, give us directions, manage our money, and much more. Let's just say there are very few things we cannot do at the click of a button, or in this case, with a 'tap, tap, tap' on the phone.
Ever stopped to wonder how this is possible?
We owe this rapid growth to technology, and especially to Artificial Intelligence (AI) and machine learning. Simply put, much of modern AI is built on machine learning, which involves algorithms that constantly improve with experience. This is possible because these algorithms feast and thrive on data, data that enables them to get better and better at what they do. While AI has its advantages, it has also raised serious concerns about data privacy. The revelation that Cambridge Analytica used Facebook data to target voters during the 2016 U.S. Presidential Election and the Brexit referendum points to this grave issue.
On one hand, there is no doubt that AI has significantly improved our lives; think of AI assistants like Alexa and Google Home. Even in industries like manufacturing, it has helped reduce costs and the time required to complete tasks.
On the other hand, the problem is how much trust we can put in AI and the people who use it. According to one study, over seventy percent of people polled in countries such as the USA, UK, and Australia chose privacy over the perceived benefits of AI, because they are wary of their information being misused or used to manipulate their behavior. Yet people are not shying away from sharing their data either. In a survey of 1,000 consumers, over forty percent said they were ready to share data with companies in exchange for personalized promotions and the like, while 39% said they would share data in return for quicker resolution of problems. The main point of contention is this: how much data can be safely divulged without putting individuals and companies at risk?
So what can we do?
Regulating the collection and use of data, funding research, and educating the public are all steps in the right direction. For example, simplifying privacy policies, making user agreements more comprehensible to consumers, taking steps to keep data secure, and implementing data protection laws like the EU-wide GDPR are a good starting point. These measures need to be constantly reviewed and updated to keep pace with advances in technology.
Research, especially, will help as much in the development of AI as it will in understanding and safeguarding individual and collective interests, and it should be done in collaboration with multiple stakeholders, including civil society and expert networks.
In Elon Musk’s words, “AI is a rare case where I think we need to be proactive in regulation instead of reactive.”