Artificial intelligence technologies can co-exist with privacy, panellists agreed during a session on the cloud, AI and privacy.
The panellists discussed the different approaches to the ‘digital world order’ in the US, China and Europe. The premise of the debate was that as inherently global entities, major financial institutions serving the fast-unifying needs of corporates and citizens have a vested interest in a collaborative approach, whereby regional strengths are combined to create a universal platform for competition and innovation.
Samik Chandarana, head of CIB Data and Analytics at JP Morgan, started the discussion with reference to the badges all Sibos delegates are sporting this week. As delegates move around ExCel, their badges are scanned and they can also share their business card details with those they meet. Such scanning is ubiquitous in our personal lives, he observed. As a banker, Chandarana said: “I have to think differently. Our customers are global and the relationship is built on trust. It has been the bastion of the relationship banks have with their clients for years.”
Different data rules exist in every jurisdiction, he said, meaning global banks are constantly asking how data can be used, and for what purposes, in those jurisdictions. JP Morgan has spent much time looking at different technologies to ensure data use is secure and within regulatory guidelines.
Pooma Kimis, director, Autonomous Research/Bernstein, said the question of privacy should depend on the individual owners of data. She described the AI world as having “good guys” and “bad guys”. Cyber-attacks such as the Pegasus attack on the WhatsApp application had surprised individuals with the ease with which personal information could be drawn down. But any privacy models would have to be led by the permissions given by individual data owners.
While there are bad guys, there are also good guys, using the same technologies to protect data owners from the bad guys. The good guys depend to a large extent on the permissions provided by individual data owners, she said.
Yves-Alexandre de Montjoye, assistant professor, Imperial College London, said the concept that “you can only get one – protecting data or you will lose out in AI” was “fundamentally wrong”. The issue was not about not using data; it was about using data properly. Organisations that do so will achieve more innovation in AI, he added.
Qiang Yang, chief AI officer at China’s WeBank, speaking via a phone link, said he did not see AI and privacy “facing each other off”. Rather, AI is an engine, powered by data. The area is evolving, but by using data, developers will be able to create more powerful AI engines to deliver systems that yield better predictions and better security.
Asked by session moderator Francesco Guerrera from Dow Jones whether different definitions of privacy existed in different countries, Yang disagreed. “The notion of privacy is universal,” he said. “Data protection regulations are being launched in China and they are very much aligned with those of the General Data Protection Regulation in Europe.” In fact, he pointed out, China’s laws are even more restrictive, as privacy violators not only face financial penalties but can also serve prison time.
Promising technologies for building AI systems while preserving privacy include federated learning. Yang said progress was being made in this technology, which enables different parties to collectively build AI engines without exchanging data. The data resides locally, but the AI model can be built by “parameter exchange” in a secure way. Such systems would satisfy privacy requirements, he said.
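The idea behind the “parameter exchange” Yang describes can be illustrated with a minimal sketch of federated averaging. The toy model, data values and function names below are purely illustrative, not WeBank’s implementation: two parties each train a one-parameter linear model on their own private data, and only the fitted parameters travel to a coordinator, which averages them into a shared model.

```python
# Minimal federated-averaging sketch: each party fits y = w*x on its own
# data and shares only the parameter w, never the raw (x, y) records.

def local_fit(data, w, lr=0.1, epochs=50):
    """One party's local training: plain gradient descent on squared error."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# Two private datasets drawn from the same underlying rule y = 3x.
# These points stay local to each party throughout.
party_a = [(x, 3 * x) for x in (0.1, 0.4, 0.7)]
party_b = [(x, 3 * x) for x in (0.2, 0.5, 0.9)]

w_global = 0.0
for _ in range(10):
    w_a = local_fit(party_a, w_global)   # local training on private data
    w_b = local_fit(party_b, w_global)
    w_global = (w_a + w_b) / 2           # coordinator averages parameters only

print(round(w_global, 2))  # converges towards the shared rule, w ≈ 3.0
```

The privacy point is in the message flow: the coordinator sees only `w_a` and `w_b`, so no party’s records ever leave its premises, yet the averaged model reflects both datasets. Production systems add secure aggregation and encryption on top of this basic loop.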
Financial institutions’ understanding of privacy is deepening, said Yang, and the development of AI is moving forward with security and privacy “in mind”.
De Montjoye said he was pleased that the panellists had agreed that there was no need to choose between AI and privacy: “You can have both and get a lot more out of AI”.