Ethical AI: A Perfect World Or A Perfect Storm? Blog 2


This is the second blog in the Ethical AI short series, looking at the positive and negative aspects of AI. The first blog focused on AI failures and identified lessons to be learned by leaders advancing their AI product innovations, or internal usage of AI solutions.

The 2021 European Commission AI Regulations

In a major step forward, the European Commission this year officially released its AI regulations, designed to ensure a well-functioning internal market for artificial intelligence systems based on EU values and fundamental rights. The new regulations represent the world’s strongest attempt yet to create a uniform legal and ethical framework that can guide businesses and countries for years to come. The regulations apply to:

  • providers (both public and private actors) offering or putting into service AI systems within the EU, irrespective of whether the providers are situated inside or outside the EU
  • AI users located within the EU
  • AI providers and users located outside the EU provided that the output produced by the system is utilized within the EU
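Taken together, these three bullets amount to a simple disjunction: the regulations apply if any one of the conditions holds. A minimal sketch of that scope test (the function and parameter names are my own shorthand, not terms from the regulation):

```python
def in_scope(provider_offers_in_eu: bool,
             user_located_in_eu: bool,
             output_used_in_eu: bool) -> bool:
    """Hypothetical predicate summarizing the territorial scope rules:
    the regulations apply if any of the three conditions holds."""
    return provider_offers_in_eu or user_located_in_eu or output_used_in_eu

# A provider outside the EU whose system's output is used inside the EU
# is still in scope:
print(in_scope(False, False, True))  # True
```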

The new EU AI Regulations follow a risk-based approach and differentiate between the following: (i) prohibited AI systems whose use is considered unacceptable and that contravene Union values (e.g., by violating fundamental rights); (ii) uses of AI that create a high risk; (iii) uses that create a limited risk (e.g., where there is a risk of manipulation, for instance via the use of chatbots); and (iv) uses of AI that create minimal risk.

Under the requirements of the new AI Regulations, the greater the potential of algorithmic systems to cause harm, the more far-reaching the intervention. Limited-risk uses of AI face minimal transparency requirements, and minimal-risk uses can be developed and used without additional legal obligations. However, makers of "limited" or "minimal" risk AI systems will be encouraged to adopt non-legally-binding codes of conduct. "High risk" uses will be subject to specific regulatory requirements before and after they are placed on the market (e.g., ensuring the quality of data sets used to train AI systems, applying a level of human oversight, creating records to enable compliance checks, and providing relevant information to users). Some obligations may also apply to distributors, importers, users, or any other third parties, thus affecting the entire AI supply chain.
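As a rough illustration, the tiered structure can be sketched as a simple mapping from risk tier to the obligations described above (the tier names and obligation wording are my own shorthand, not official terms from the regulation):

```python
from enum import Enum

class RiskTier(Enum):
    """Shorthand for the four risk tiers in the EU AI Regulations."""
    UNACCEPTABLE = "banned outright"
    HIGH = "regulatory requirements before and after market launch"
    LIMITED = "transparency requirements; voluntary codes of conduct"
    MINIMAL = "no additional legal obligations; voluntary codes of conduct"

def obligations(tier: RiskTier) -> str:
    """Return the shorthand obligation summary for a given tier."""
    return tier.value

print(obligations(RiskTier.HIGH))
# regulatory requirements before and after market launch
```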

Enforcement

Member states will be responsible for enforcing these regulations. Penalties for noncompliance can reach 6% of global annual turnover or EUR 30 million, whichever is greater.
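The "whichever is greater" rule is a simple maximum of the two caps. A minimal sketch of that calculation (the function name and integer-euro interface are my own; the actual fine in any case would be set by the enforcing authority):

```python
def max_penalty_eur(global_annual_turnover_eur: int) -> int:
    """Illustrative maximum fine under the EU AI Regulations:
    6% of global annual turnover or EUR 30 million, whichever is greater.
    Hypothetical helper; not an official calculation method."""
    return max(global_annual_turnover_eur * 6 // 100, 30_000_000)

# A company with EUR 1 billion in turnover: 6% = EUR 60M, above the 30M floor.
print(max_penalty_eur(1_000_000_000))  # 60000000
# A company with EUR 100 million in turnover: 6% = EUR 6M, so the 30M floor applies.
print(max_penalty_eur(100_000_000))  # 30000000
```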

Conclusion

What I really liked about the Commission report is that it clearly bans specific uses of AI. Examples of banned AI systems are those that use subliminal methods to manipulate a person’s behaviour in a way that causes, or is likely to cause, that person physical or mental harm. Furthermore, AI systems used to exploit the vulnerabilities of a specific group of people, where the purpose is to change the behaviour of a person in that group, are prohibited. Further prohibitions are proposed for AI systems that identify people in public places in real time, based on biometric data, for law enforcement purposes. This ban allows Member States to make exceptions, including in the event of a significant risk of a terrorist attack.
