Published on March 15, 2024, 5:11 pm

The European Commission has sent formal requests for information to major tech platforms, including Bing, Google Search, Facebook, Instagram, Snapchat, TikTok and YouTube, under the Digital Services Act (DSA), seeking details of their measures to mitigate risks associated with generative AI. These risks include so-called "hallucinations," in which AI systems generate false information, the proliferation of deepfakes, and automated manipulation tactics that could mislead voters. Generative AI is highlighted as a significant risk factor in the Commission's preliminary guidelines on the integrity of electoral processes.

Beyond risk mitigation strategies, the Commission is also requesting internal documents on risk evaluations related to generative AI, as well as information on measures concerning the spread of illegal content, the protection of fundamental rights, gender-based violence, the protection of minors, mental well-being, data privacy, consumer protection, and intellectual property. The inquiries cover both the dissemination and the creation of generative AI content. The Commission will evaluate the responses it receives to determine further action.

If companies fail to respond adequately to the Commission's questions, it can compel answers by formal decision, and providing inaccurate, incomplete or misleading information can result in fines. The deadline for the requested details is April 5, 2024 for election-related questions and April 26, 2024 for all other inquiries.

Recently approved by the European Parliament, the AI Act establishes a risk-based regulatory framework for AI technologies. High-risk AI systems, such as those used in medical devices or critical infrastructure, must meet strict safety requirements. Prohibited applications include those that infringe on citizens' rights, such as biometric categorisation based on sensitive characteristics and the untargeted scraping of facial images from the internet or surveillance footage. The legislation also outlaws emotion recognition systems in workplaces and educational settings, along with social scoring mechanisms. For law enforcement purposes, however, specific loopholes permit powerful surveillance technologies such as real-time facial recognition and behavioural tracking in public spaces, a carve-out that has raised concerns among advocacy groups such as AlgorithmWatch, which urge member states to close these surveillance gaps promptly.

The AI Act carries profound implications for companies operating in this fast-evolving technological landscape.
