Big Tech’s AI Models Fall Short of New EU AI Act’s Standards

A striking assessment of several leading AI models has unveiled noticeable compliance issues with the European Union’s forthcoming regulations, particularly concerning cybersecurity resilience and non-discriminatory outputs. Conducted by Swiss startup LatticeFlow in conjunction with EU officials, this study scrutinized generative AI models from major tech giants such as Meta, OpenAI, and Alibaba. The results highlight critical gaps as the EU prepares to enforce its AI Act over the next two years, with potential penalties of up to €35 million or 7% of global annual turnover for non-compliance.

LatticeFlow employed its ‘Large Language Model (LLM) Checker’ to evaluate these AI systems on various metrics, assigning scores from 0 to 1. Among the evaluated models, Anthropic’s ‘Claude 3 Opus’ ranked highly with a score of 0.89, showcasing commendable compliance. However, the results were less favorable for others; OpenAI’s ‘GPT-3.5 Turbo’ scored only 0.46 for discriminatory output, while Alibaba’s ‘Qwen1.5 72B Chat’ received an even lower score of 0.37. Such figures underscore the enduring challenge of AI models perpetuating human biases, notably in sensitive domains like gender and race.

Cybersecurity was another area where many models fell short. Meta’s ‘Llama 2 13B Chat’ received a mere 0.42 in the ‘prompt hijacking’ category, which tests the model’s vulnerability to malicious prompts that could expose sensitive information. Similarly, Mistral’s ‘8x7B Instruct’ model garnered a score of 0.38. These findings emphasize an urgent need for tech corporations to bolster their security measures in anticipation of the EU’s rigorous standards.

Although the EU is still finalizing how these regulations will be enforced, the insights gleaned from LatticeFlow’s assessments may guide companies in refining their models for better alignment with these upcoming mandates. LatticeFlow’s CEO, Petar Tsankov, noted that despite some negative scores, the overall results provided a constructive avenue for improvement. He indicated optimism about these test outcomes, viewing them as a path for tech companies to enhance compliance ahead of the regulations.

The European Commission has recognized this initiative, highlighting it as an important first step toward translating the AI Act into concrete and enforceable technical criteria. As developments unfold, the LLM Checker may become a vital instrument for tech companies striving to confirm compliance with the EU’s new regulations.

This compliance testing arrives at a critical moment. The EU’s AI Act is significant, aiming to regulate artificial intelligence to ensure its safety and efficacy across various sectors while safeguarding fundamental rights. As large language models increasingly permeate everyday life, their potential impact necessitates urgent attention to ethical standards and regulatory compliance.

As these tech companies work on adjustments based on the findings of the LatticeFlow test, industry leaders will need to balance innovation with responsibility. The stakes are high. Failing to adhere to these new regulations could result in hefty fines and irreparable damage to corporate reputations, not to mention the broader implications for trust in artificial intelligence technologies.

In conclusion, while the landscape of AI continues to evolve, the immediate takeaway from LatticeFlow’s assessments is clear: achieving compliance with the EU AI Act will require substantial effort from tech giants. As the industry pivots towards more ethical AI models, collaboration and transparency will be essential to mitigate risks associated with bias and cybersecurity threats. Moving forward, a proactive approach to compliance could not only enhance the resilience of AI models but also fortify user trust in AI technologies.

Back To Top