How to protect your data in the face of the rise of artificial intelligence?

While it is impossible to prevent cyber attacks completely, detecting them early can help limit their costs. (Photo: Shutterstock)

We offer you this excellent contribution from Infodujour.fr

In the age of digitalization and new technologies, purchasing habits and consumption patterns are evolving. But this is not without risks.

Connected refrigerators, automated lighting control at home, autonomous vehicles, drone deliveries, robots capable of answering all your questions in several languages… While artificial intelligence (AI) is making life easier for consumers and meeting their needs, it is not without risks, particularly regarding the security of their personal data. This is why Europe wants to supplement its General Data Protection Regulation (GDPR) with a set of harmonized rules on the use of AI. A few days before European Data Protection Day, on January 28, the European Consumer Centre France explains the challenges and expectations of these texts in the face of the digitalization of consumption.

Increasingly connected and digitalized consumption

Calculating electricity consumption to offer tailored deals, a smartwatch that detects certain pathologies through an abnormal gait or a rapid heart rate, a chatbot serving as customer service, a remote program to turn on the heating at home… Artificial intelligence has gradually invaded our consumption habits.
And this is only the beginning! Many companies are already working on technologies and business practices that use other types of artificial intelligence. For example, drone deliveries, autonomous taxis, virtual reality marketing, and voicebots are all currently being developed.

What are the risks for consumers?

All these new modes of consumption are not without risks for users. Because artificial intelligence involves many stakeholders (developer, supplier, importer, distributor, AI user), the system remains opaque to the consumer. It is therefore difficult to know who actually has access to personal data and who would be responsible in the event of a problem.
On the other hand, since the AI system is programmed and automated, the risk of technical failure must be taken into account, and the consequences could be damaging: uncontrollable autonomous cars, widespread power outages, false information, incorrect diagnoses, etc.
Finally, the risk of leaks or loss of control over recorded personal data is high: cyberattacks, computer hacking, phishing and other targeted digital marketing techniques, fake news, fraud, etc.

European protection on the use of artificial intelligence

Faced with the growth, but also the risks, of AI, Europe wants to strengthen its protective rules. In addition to the GDPR and the European Data Governance Act, the European Union has proposed three texts: a regulatory framework on artificial intelligence, a directive on AI liability, and a directive on product liability.
Europe particularly wants to ban and punish “AIs with unacceptable risks” — for example, those that remotely detect individuals in real time in public spaces in order to arrest or punish them. It wants to evaluate and control “high-risk AIs,” particularly those related to product safety (such as self-driving cars). And the EU wants to regulate “AIs with acceptable risks” by requiring, for example, digital giants and other platforms and social networks to better inform users about their algorithms.
