Transformative technology like AI should not be stifled by undue restrictions
Trust is imperative; this is the principle underlying the new regulations unveiled by the European Union to govern the use of artificial intelligence. Trust can only arise from transparency. Therefore, under the draft rules, companies that provide AI services and those that use them should be able to explain exactly how AI makes decisions, be open to risk assessments, and provide human oversight over how these systems are built and used. It is an ambitious goal; experts say that as AI systems become more complex and feed on more data, it may be impossible to understand why a machine makes a particular decision. Regulations to control AI — an ecosystem yet to be defined either in law or in the industry — will therefore have to be fluid and evolve over time. More importantly, a one-size-fits-all approach cannot govern a system that feeds on data from around the world and performs a wide range of functions, from operating self-driving cars to making rental and lending decisions in banks and scoring exams. The draft rules not only set limits on the use of AI in these areas, but also place checks and balances on “high-risk” applications of AI by law enforcement agencies and courts, to protect the fundamental rights of individuals. While some uses, such as live facial recognition in public places, may be banned altogether, several exemptions in the name of national security leave room for the invasion of privacy and fundamental rights. It is dangerous for governments, even as they demand accountability from tech companies, to keep loopholes open to mine data and harness the invasive reach of AI.
The world has been looking to the EU — whose General Data Protection Regulation of 2018 became the framework for similar laws around the world — to find a way forward in regulating AI. While the emphasis on transparency is important, the burden of accountability falls on those who develop AI. Moreover, the draft sidesteps the problems of racial and gender bias that have dogged new technologies since their inception. Such biases can make the use of AI in the name of “national security” contrary to democratic principles. In India, where regulation of the Internet and its services tends to weigh in favor of state control, such imbalances can make the difference between a democracy and a surveillance state. Transformative technologies like AI should not be stifled by undue restrictions. It is the interests of citizens, not the state, that should be at the center of legislation regulating AI.