Striking the Balance: Global Approaches to Mitigating AI-Related Risks


It’s no secret that for the last few years, modern technologies have been pushing ethical boundaries under existing legal frameworks that weren’t designed to fit them, resulting in legal and regulatory minefields. To combat the effects of this, regulators in different countries and regions are choosing to proceed in different ways, increasing global tensions when agreement can’t be found.

These regulatory differences were highlighted at the recent AI Action Summit in Paris. The event’s final statement focused on matters of inclusivity and openness in AI development. Interestingly, it mentioned safety and trustworthiness only in broad terms, without emphasising specific AI-related risks such as security threats. Although the statement was drafted by 60 nations, the UK and US were conspicuously missing from its signatories, which shows how little consensus there is right now among key countries.

Tackling AI risks globally

AI development and deployment are regulated differently in each country. Nonetheless, most countries fall somewhere between two extremes – the stances of the United States and the European Union (EU).

The US way: first innovate, then regulate

The United States has no federal-level act regulating AI specifically; instead, it relies on market-based solutions and voluntary guidelines. However, there are some key pieces of legislation relevant to AI, including the National AI Initiative Act, which aims to coordinate federal AI research; the Federal Aviation Administration Reauthorisation Act; and the National Institute of Standards and Technology’s (NIST) voluntary risk management framework.

The US regulatory landscape remains fluid and subject to big political shifts. For example, in October 2023, President Biden issued an Executive Order on Safe, Secure and Trustworthy Artificial Intelligence, putting in place standards for critical infrastructure, enhancing AI-driven cybersecurity and regulating federally funded AI projects. However, in January 2025, President Trump revoked this executive order, in a pivot away from regulation and towards prioritising innovation.

The US approach has its critics. They note that its “fragmented nature” leads to a complex web of rules that “lack enforceable standards” and leave “gaps in privacy protection.” However, the stance as a whole is in flux – in 2024, state legislators introduced almost 700 pieces of new AI legislation, and there have been multiple hearings on AI in governance as well as on AI and intellectual property. It’s apparent that while the US government doesn’t shy away from regulation, it is clearly looking for ways to implement it without compromising innovation.

The EU way: prioritising prevention

The EU has chosen a different approach. In August 2024, the European Parliament and Council introduced the Artificial Intelligence Act (AI Act), widely considered the most comprehensive piece of AI regulation to date. Employing a risk-based approach, the act imposes strict rules on high-sensitivity AI systems, such as those used in healthcare and critical infrastructure. Low-risk applications face only minimal oversight, while some applications, such as government-run social scoring systems, are banned outright.

In the EU, compliance is mandatory not only for organisations within its borders but also for any provider, distributor, or user of AI systems operating in the EU or offering AI solutions to its market – even if the system was developed elsewhere. This is likely to pose challenges for US and other non-EU providers of integrated products as they work to adapt.

Criticisms of the EU’s approach include its alleged failure to set a gold standard for human rights, along with excessive complexity and a lack of clarity. Critics are also concerned that the act’s highly exacting technical requirements come at a time when the EU is seeking to bolster its competitiveness.

Finding the regulatory middle ground

Meanwhile, the United Kingdom has adopted a “lightweight” framework that sits somewhere between the EU and the US, and is based on core values such as safety, fairness and transparency. Existing regulators, like the Information Commissioner’s Office, hold the power to implement these principles within their respective domains.

The UK government has published an AI Opportunities Action Plan, outlining measures to invest in AI foundations, drive cross-economy adoption of AI and foster “homegrown” AI systems. In November 2023, the UK founded the AI Safety Institute (AISI), which evolved from the Frontier AI Taskforce. AISI was created to evaluate the safety of advanced AI models, collaborating with major developers to achieve this through safety testing.

However, criticisms of the UK’s approach to AI regulation include limited enforcement capabilities and a lack of coordination between sectoral legislation. Critics have also noted the absence of a central regulatory authority.

Like the UK, other major countries have found their own place on the US-EU spectrum. Canada, for example, has proposed a risk-based approach with the Artificial Intelligence and Data Act (AIDA), designed to strike a balance between innovation, safety and ethical considerations. Japan has adopted a “human-centric” approach to AI, publishing guidelines that promote trustworthy development. In China, AI regulation is tightly controlled by the state, with recent laws requiring generative AI models to undergo security assessments and align with socialist values. Similarly to the UK, Australia has released an AI ethics framework and is looking into updating its privacy laws to address emerging challenges posed by AI innovation.

How to establish international cooperation?

As AI technology continues to evolve, the differences between regulatory approaches are becoming increasingly apparent. Divergent national approaches to data privacy, copyright protection and other issues make a coherent global consensus on key AI-related risks harder to reach. In these circumstances, international cooperation is crucial to establish baseline standards that address key risks without curtailing innovation.

The answer could lie with global organisations such as the Organisation for Economic Co-operation and Development (OECD), the United Nations and several others, which are currently working to establish international standards and ethical guidelines for AI. The path forward won’t be easy, as it requires everyone in the industry to find common ground. With innovation moving at light speed, the time to discuss and agree is now.
