The use of Artificial Intelligence (AI) is growing worldwide with every passing day, and it has been making headlines for both good and bad reasons. On one hand, companies are deploying AI to benefit mankind; on the other, some people are using it to disrupt the lives of others. With both sides in the frame, guidelines, laws or regulations are needed to keep people safe.
Google once said, “AI is too important not to regulate, and too important not to regulate well.” A few nations started early and have tried to frame regulations around AI and its use. Do you know which nations are leading this race and what regulations they have introduced? We will dive into this today and understand the rules these countries have put in place for the safety of their citizens.
European Union’s Regulations On AI
The comprehensive AI Act received a favourable vote from the EU Parliament on March 13, 2024. The Act is expected to enter into force in the coming months, but with a two-year grace period, meaning it will become fully applicable around June 2026. It is a comprehensive regulation aimed at ensuring the responsible and fair use of AI in the market. Contrary to misconceptions, it is not designed to be an ‘AI-Killer’ but rather to establish safeguards for decision-making processes involving AI systems. In essence, the Act introduces due diligence obligations in the development of AI systems from the outset, mechanisms to verify the correctness of their decisions, and avenues to hold individuals accountable if a decision is found to be incorrect.
The Act classifies AI systems into four risk categories, namely — Minimal, Limited, High, and Unacceptable Risk. This classification is based on their potential impact on users and society. Notably, AI systems classified as ‘High-Risk’ are subject to more stringent measures, while those classified as ‘Unacceptable Risk’ are prohibited outright.
The AI Act sets up the European Artificial Intelligence Board (EAIB). The job of EAIB will be primarily to offer guidance, share effective strategies, and make sure the AI Act is applied consistently across all EU member states. This body is inspired by the European Data Protection Board (EDPB), which was created by the GDPR.
Failure to comply with the AI Act can result in significant penalties. These fines can range up to 7 per cent of the worldwide annual revenue of the entity accountable for the AI system, depending on the seriousness of the violation.
United Kingdom’s Regulations On AI
The UK has not introduced a comprehensive AI regulation akin to the EU’s, and it does not plan to. Instead, it advocates a context-sensitive, balanced approach, relying on existing sector-specific laws for AI guidance. In March 2023, the UK issued a white paper on its domestic AI regulation, indicating a clear intention to create a proportionate, pro-innovation regulatory framework that focuses on the context in which AI is deployed rather than on the technology alone.
The framework rests on five guiding principles. The first is safety, security and robustness, ensuring the reliability and security of AI systems. The second is transparency and explainability, ensuring that AI operations are transparent and easily understood by users. The third is fairness, ensuring AI does not contribute to unfair bias or discrimination. The fourth is accountability and governance, holding AI systems and their operators accountable for their actions. The fifth is contestability and redress, providing mechanisms for challenging AI decisions and seeking redress.
At the moment, the regulations applicable to AI and its use are: the Data Protection Act 2018/General Data Protection Regulation (GDPR), the Human Rights Act 1998, and the Equality Act 2010.
United States Of America’s Regulations On AI
While the USA lacks a unified AI regulation, it has established a number of guidelines and frameworks to govern the AI sector at the federal level. The existing legal landscape encompasses:
Executive Orders such as Maintaining American Leadership in AI and Promoting the Use of Trustworthy AI in the Federal Government
Legislations and Proposed Bills: AI Training Act, National AI Initiative Act, Algorithmic Accountability Act (Proposed), Transparent Automated Governance Act (Proposed), Global Technology Leadership Act (Proposed)
Put simply, the US is adopting a case-by-case approach to AI governance and enforcement, avoiding an overly precautionary stance.
In the absence of a dedicated AI law in the US, oversight and regulation of AI deployment fall to existing agencies. The agencies responsible for enforcing AI guidelines include — the Federal Trade Commission, the Department of Justice, the Consumer Financial Protection Bureau, and the Equal Employment Opportunity Commission.
Switzerland’s Regulations On AI
Akin to the UK, Switzerland has opted not to introduce a comprehensive AI regulation. Instead, it is focusing on selectively amending existing laws to accommodate AI. This approach encompasses integrating AI transparency rules into existing data protection laws, and modifying local competition laws, product liability laws, and general civil laws to address the needs of AI systems.
Presently, two Swiss authorities manage AI-related regulation, namely — the Federal Data Protection and Information Commissioner, and the Competition Commission.
Canada’s Regulations On AI
Canada is advancing the Artificial Intelligence and Data Act (AIDA) to keep Canadians safe from high-risk AI and to promote responsible AI practices in line with global norms. AIDA proposes three main approaches.
The first approach builds on existing Canadian consumer protection and human rights law, ensuring that high-impact AI systems meet the same expectations of safety and human rights to which citizens are accustomed. The second empowers the Minister of Innovation, Science, and Industry to administer and enforce the Act, so that policy and enforcement move forward in sync as the technology evolves. The third creates new criminal law provisions prohibiting reckless and malicious uses of AI that cause serious harm to citizens and their interests.
At present, two significant entities oversee and guide on AI matters, namely — the Ministry of Innovation, Science and Economic Development, and the Office of the Privacy Commissioner of Canada.
Brazil’s Regulations On AI
Brazil is in the process of crafting an extensive AI Bill aimed at banning certain high-risk AI systems, establishing a specialized regulatory authority, and assigning civil responsibility to developers and users of AI. Moreover, the bill would mandate the timely disclosure of major security breaches and ensure individuals’ rights to comprehend AI-based decisions and rectify any biases.
At the moment, two authorities oversee the compliance of AI regulation. They are — the Ministry of Science, Technology and Innovation, and the National Data Protection Authority.
China’s Regulations On AI
China appears to be at the forefront of jurisdictions actively introducing AI regulations. While a comprehensive AI framework is still being crafted, certain specific AI applications are already regulated under current laws. These encompass — the Algorithmic Recommendations Management Provisions, the Ethical Norms for New Generation AI, and the Opinions on Strengthening the Ethical Governance of Science and Technology.
AI governance initiatives in progress include — Draft Provisions on Deep Synthesis Management, Measures for the Management of Generative AI Services.
Prominent regulatory entities in the AI landscape in China are the Cyberspace Administration of China, the Ministry of Industry and Information Technology, and the State Administration for Market Regulation.
Japan’s Regulations On AI
Japan does not have strict AI regulations. Instead, the Japanese government relies on guidelines and allows the private sector to manage its AI use on its own. Key guidelines laid down by the government include the Guidelines for Implementing AI Principles, and AI Governance in Japan Ver. 1.1.
Though Japan has not designed laws specifically for AI, its sector-specific laws, including data protection, antimonopoly, and copyright, remain relevant in this context.
The authorities that are involved in AI policy matters are — the Ministry of Economy, Trade and Industry, the Council for Science, Technology and Innovation, and the Personal Information Protection Commission.
India’s Regulations On AI
Presently, India lacks specific legislation for AI governance. However, the upcoming Digital India Act is set to focus on regulating high-risk AI applications. Apart from this, a specialised task force has been established in response to the challenges posed by AI. This task force is charged with examining AI’s ethical, legal, and societal facets, paving the way for a prospective AI regulatory body and strengthening the nation’s AI legislative landscape.
The authorities that are involved in AI-related policies are — NITI Aayog, the Ministry of Electronics and Information Technology, and the Ministry of Commerce and Industry.
Australia’s Regulations On AI
Australia also has not introduced specific AI governance laws or policies so far. It plans to adopt a risk-based approach, seeking to prevent the harms associated with AI by regulating its development, deployment and use in high-risk contexts only, while allowing lower-risk forms of AI to flourish largely unimpeded.
The Government intends to enhance its set of obligatory standards by conducting additional consultations with the public and industry, as well as establishing an advisory body. Initially, this effort will focus on the themes of Testing and Audit, Transparency, and Accountability.
Presently, these Australian authorities are involved in AI-related policies — the Department of Industry, Science and Resources, the Office of the eSafety Commissioner, the Office of the Victorian Information Commissioner, and the Competition and Consumer Commission.
UAE’s Regulations On AI
In October 2017, the UAE Government launched ‘UAE Strategy for Artificial Intelligence (AI)’ which aims to:
Achieve the objectives of UAE Centennial 2071
Boost government performance at all levels
Use an integrated smart digital system that can overcome challenges and provide quick efficient solutions
Make the UAE the first in the field of AI investments in various sectors
Create a new vital market with high economic value.
The strategy is centred around five themes:
Formation of the UAE AI Council
Workshops and initiatives, including field visits to government bodies
Developing the capabilities and skills of all staff operating in the field of technology, and conducting training courses for government officials
Providing all services via AI and fully integrating AI into medical and security services
Launching a leadership strategy and issuing a government law on the safe use of AI.
Singapore’s Regulations On AI
Singapore has so far created no hard law for AI. As it advances its digital economy, Singapore sees a trusted ecosystem as crucial: one that allows organisations to leverage technological innovations effectively while ensuring consumers feel secure in adopting and using AI. In the global conversation on AI ethics and governance, Singapore is confident that its balanced strategy can encourage innovation, protect consumer interests, and become a universally recognised benchmark.
Singapore’s Model AI Governance Framework provides guidance on implementing responsible AI systems. The Personal Data Protection Act (PDPA) includes provisions relevant to AI and data privacy.