AI-caused panic: The EU, the US House and governments ban use of AI tools


  • The United States, the European Union and governments across the globe have flagged the dangers these technologies might pose and have gone all out to create AI restrictions and regulations.
  • Countries that have banned the use of ChatGPT include China, Russia, Iran, Syria, Chad, South Sudan, Eswatini, the Central African Republic and Eritrea. 
  • Fines for failure to comply with the EU AI Act are hefty. The penalties can go as high as 40 million euros or 7 per cent of annual revenue, whichever is higher.

This year has seen a wave of artificial intelligence innovations, from ChatGPT Plus to Google's Bard and Microsoft's workplace AI tools, to the point of causing tension. The panic stems from the security threats posed by unregulated AI technologies, given how much user data they handle.

The United States, the European Union and governments across the globe have flagged the dangers these technologies might pose and have gone all out to create AI restrictions and regulations. Among other significant concerns are security, job losses, misinformation and bias.

For example, countries that have banned ChatGPT include China, Russia, Iran, Syria, Chad, South Sudan, Eswatini, the Central African Republic and Eritrea. 

Read: The EU Council approves the world's first comprehensive crypto regulations

European Union regulates AI tools.

On June 14, the European Parliament approved the world's first comprehensive set of AI rules, the EU AI Act. The vote passed with 499 in favour and 28 against.

The Act states that all generative AI developers must submit their systems for review before any commercial release. The parliament also maintained a ban on real-time biometric identification systems such as facial, fingerprint and iris recognition. The ban extends to the use of AI in predictive policing, emotion recognition systems and social scoring.

The EU also intends to strictly regulate AI use in critical infrastructure, education, essential services, employment and hiring, and justice and democratic processes. AI systems in these sectors will be expected to meet a detailed set of compliance requirements, including:

  • Providing a thorough risk assessment
  • Ensuring that their data sets contain no bias
  • Keeping track of everything they do
  • Providing clear information to everyone using their technologies

EU imposes heavy fines for breaches of the EU AI Act.

Fines for failure to comply with the EU AI Act are hefty. The most severe sanctions apply to those running prohibited AI technologies, with penalties of up to 40 million euros or 7 per cent of annual revenue, whichever is higher. Failure to comply with data-related requirements attracts a fine of up to 20 million euros or 4 per cent of annual revenue, whichever is higher. Failure to comply with other provisions of the Act attracts a fine of up to 10 million euros or 2 per cent of annual revenue.
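As a rough illustration of how the "whichever is higher" rule works, here is a minimal Python sketch. It assumes that rule applies to each tier and uses the caps cited above; the tier names and function are hypothetical, for illustration only.

```python
# Illustrative sketch of the tiered fine ceilings reported above; the figures are
# the caps cited in this article, not legal advice.

# (fixed cap in euros, share of annual revenue), per violation tier
TIERS = {
    "prohibited_ai": (40_000_000, 0.07),
    "data_requirements": (20_000_000, 0.04),
    "other_provisions": (10_000_000, 0.02),
}

def max_fine(tier: str, annual_revenue_eur: float) -> float:
    """Return the ceiling for a tier: the fixed cap or the revenue share, whichever is higher."""
    fixed_cap, revenue_share = TIERS[tier]
    return max(fixed_cap, revenue_share * annual_revenue_eur)

# Example: a firm with 1 billion euros in annual revenue running a prohibited system.
# 7 per cent of revenue (70 million) exceeds the 40 million cap, so 70 million applies.
print(max_fine("prohibited_ai", 1_000_000_000))
```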

"The EU AI Act aims to build safeguards on the development and use of these technologies to ensure we have an innovation-friendly environment for these technologies such that society can benefit from them," said Jens-Henrik Jeppesen, senior director of public policy at Workday.

However, the private sector has argued that the EU is writing rules for the industry without consulting its creators. Sam Altman, the CEO of OpenAI (the creator of ChatGPT), noted that the company is ready to cease operations in Europe if the EU's legislation becomes too stringent.

US House bans the use of AI tools.

The United States House of Representatives recently implemented new rules prohibiting its members from using artificial intelligence (AI) large language models, with the exception of OpenAI's ChatGPT Plus service. The decision, made in the interest of security, was outlined in a notice by Catherine Szpindor, the House's chief administrative officer.

The memo states that House offices may only use the ChatGPT Plus version because it incorporates essential privacy features to safeguard House data. No other versions of ChatGPT or similar AI language models are currently permitted for use in the House.

Despite these measures, regulating generative AI tools will require more than just a US House ban.

Is it possible to regulate AI tools?

The only way to regulate AI tools is to control the internet, and internet regulation is on the agenda of almost every government. However, this is only feasible if all citizens have digital identification documents for easier monitoring. That is no mean feat, and it will likely take years before AI regulation succeeds.

Read: European Union to curb crypto mining to save energy


JOSEPH KANGETHE
I am a tech, business, and investment news reporter covering Africa. Most of what is good in Africa is obscured by preconceptions, yet there is still a lot of good going on. Technology is what is driving the continent and this is my passion. For Africa, I share the stories that are important to Africans.