Monday, September 16, 2024

The US, UK, and EU have signed an AI treaty


The US, UK, and EU have signed an international treaty that obliges signatories to introduce measures ensuring that the use of AI systems within their countries is consistent with “human rights, democracy, and the rule of law.” The Framework Convention on Artificial Intelligence sets out the general obligations and key principles that every country signed up to the treaty is expected to implement.

Powerful AI tools such as ChatGPT and Google Gemini have exploded seemingly out of nowhere and continue to develop at what some see as an alarming pace, with some of the people involved in building them warning that AI has the potential to destroy humanity. Understandably, many countries have indicated a need for laws or oversight to ensure that we don’t end up at the point where Skynet becomes self-aware and launches our own nukes against us.

The Council of Europe posted on X that this was a historic moment, representing the “first-ever legally binding global treaty on AI and human rights.” But what exactly does the treaty deliver, and how “legally binding” is it? We take a look at what the document says and whether it’s even possible to enforce it.


The focus of the treaty is in the title

The emphasis is on human rights, democracy, and the rule of law


The full name of the treaty is the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. These three areas make up the general obligations of the treaty.

Article 4 of the treaty states that each country should “adopt or maintain measures to ensure that the activities within the lifecycle of artificial intelligence systems are consistent with obligations to protect human rights, as enshrined in applicable international law and its domestic law.” In other words, each country must have rules in place that prevent AI companies from breaching human rights laws.

Article 5 relates to the integrity of democratic processes and respect for the rule of law. This states that each country should “adopt or maintain measures that seek to ensure that artificial intelligence systems are not used to undermine the integrity, independence and effectiveness of democratic institutions and processes, including the principle of the separation of powers, respect for judicial independence and access to justice.”

This is something that could be a real challenge. To what extent does an AI-generated attack ad on a political opponent undermine the integrity of democratic institutions? What about a deepfake such as the one shared by Elon Musk on his own social media app? The lines are blurry at best, and one country’s interpretation may not be the same as another’s.


The treaty also includes some common principles

Each country is expected to implement them in some form


The treaty lays out some common principles that all signatories are expected to adhere to within the scope of their own legal systems. These include the following:

  • Human dignity: AI systems must respect human dignity and individual autonomy
  • Transparency and oversight: This includes the identification of content generated by AI
  • Accountability and responsibility: AI companies should be held accountable for breaches of human rights, democracy, or the rule of law
  • Equality and non-discrimination: AI activities must respect equality and prohibit discrimination
  • Privacy and personal data protection: The privacy rights of individuals must be protected by effective safeguards
  • Reliability: Measures should promote the reliability of, and trust in, the output of AI systems
  • Safe innovation: Controlled environments should be established for the safe development of AI systems


These principles are not unreasonable, but they could cause significant problems for AI companies. AI chatbots, for example, are notorious for hallucinating, producing content that is neither accurate nor reliable. However, the principle of reliability only requires countries to “promote” trust and reliability, so as long as AI companies are seen to be working towards making their products as reliable as possible, they may not be breaching the rules.

AI companies also can’t directly control exactly what comes out of an AI chatbot, due to the nature of how generative AI works. We’ve already seen examples of popular chatbots producing results that definitely do not respect equality or prohibit discrimination. Again, as long as companies are seen to be working to minimize these issues, that may be good enough.


The treaty doesn’t lay down any laws itself

Each country is obligated to implement its own measures


One of the most important things to understand about this treaty is that it doesn’t lay down any AI laws itself. In essence, it’s an agreement between multiple countries to introduce measures to ensure that the general obligations and common principles are followed. The exact wording is that “each party shall adopt or maintain appropriate legislative, administrative, or other measures to give effect to the provisions set out in this Convention.”

In other words, countries can create new laws that uphold these principles, but that’s not their only option. They can also put in place administrative measures that are not legally binding, or any other type of measures that they see fit.


In reality, many countries are likely to enact laws relating to these obligations, but since nothing specific is prescribed in the treaty, different countries may take very different approaches and pass very different laws. It’s unclear how this will work for AI tools that are used across multiple countries, each with its own rules on how they should operate.

There are no explicit consequences for countries that don’t stick to the treaty

Signatories are expected to keep each other in line



If the treaty ultimately tells countries to come up with their own measures to ensure that these obligations are followed, what’s to stop a country that’s signed the treaty from simply not bothering? Well, in reality, not much.

The treaty lays out plans for a “Conference of the Parties,” essentially a big meeting of representatives of all the signatories, to look at how well the obligations are being implemented and to consider amendments to the treaty. Each country must report to the Conference of the Parties within the first two years on how the obligations are being met. If a country isn’t meeting the obligations, however, there’s no specific action that can be taken against it. It’s a treaty built on trust rather than having any punitive actions in place for those countries that don’t comply.


Which countries have signed up to the AI treaty?

The US, UK, and EU have all signed on, along with a few other countries


The three biggest signatories are the US, UK, and EU, but a handful of other countries have also signed up: Israel, Norway, Iceland, the Republic of Moldova, Georgia, San Marino, and Andorra.

The treaty is open to any other country that wishes to sign up, however. For this kind of treaty to be truly effective, it would need every country on board, but it’s highly unlikely that countries such as China or Russia will be queuing up to add their signatures any time soon.


The treaty isn’t perfect but does offer hope of effective AI regulation

Ultimately, each country is responsible for its own rules


The fact that the treaty is toothless when it comes to dealing with countries that don’t live up to their obligations is far from ideal. However, the general concept of the treaty is a good one. AI continues to develop at a rapid pace, and things that weren’t possible a year ago are more than possible now. In particular, the ability to generate AI content that is indistinguishable from the genuine article could have a significant impact on our lives. AI-generated images and videos are already being used in the run-up to the US election, and the technology is only going to improve from here. Legislation ensuring that AI-generated content is always clearly marked as such would go a long way towards stopping the spread of misinformation.
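
What might such marking look like in practice? Here’s a minimal sketch in Python using the Pillow imaging library, which simply embeds a provenance label in a PNG file’s metadata. The key names here are hypothetical examples, not part of any standard, and a plain text tag like this is trivial to strip; real-world schemes such as C2PA’s Content Credentials rely on cryptographically signed manifests instead.

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    # Write PNG text chunks that label the image as AI-generated.
    # The "ai-generated" and "generator" keys are illustrative only,
    # not part of any standard.
    image = Image.open("generated.png")
    metadata = PngInfo()
    metadata.add_text("ai-generated", "true")
    metadata.add_text("generator", "example-model-v1")
    image.save("generated_labeled.png", pnginfo=metadata)

    # Anyone handling the file can then check the label.
    labeled = Image.open("generated_labeled.png")
    print(labeled.text.get("ai-generated", "unlabeled"))

The ease with which a label like this can be removed, or simply never added, is exactly why voluntary tagging isn’t enough, and why the treaty’s transparency principle would need legislation and technical standards behind it.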



Another major flaw is that there’s nothing stopping countries that haven’t signed up from doing whatever they wish. A US law banning the use of deepfakes in political elections does nothing to stop people in other countries from creating them anyway.

We’re still a long way from a globally agreed set of rules on how AI can and can’t be used, but this treaty is a significant step in the right direction.


