Satwik Mishra writes: Cautiously on AI

Source– The post is based on the article “Satwik Mishra writes: Cautiously on AI” published in “The Indian Express” on 12th September 2023.

Syllabus: GS3- Science and Technology – Awareness in the field of IT, computers

News– The G20 Delhi Declaration stresses the importance of responsible artificial intelligence (AI) practices, including the protection of human rights, transparency, fairness, and accountability.

What is the potential of AI?

AI is currently playing a pivotal role in shaping our digital era and is fundamentally reshaping our concept of advancement.

According to Stanford’s Artificial Intelligence Index Report 2023, private investment in AI has surged 18-fold since 2013, and the adoption of AI technologies by companies has doubled since 2017.

McKinsey estimates that AI could generate between $17.1 trillion and $25.6 trillion in economic value annually.

AI is on a steady upward trajectory, showing increasing capability, growing affordability, and broad-ranging applications.

What are the challenges posed by AI?

AI poses established challenges such as biased models, privacy concerns, and obscured decision-making.

Generative AI carries the potential danger of undermining the integrity of public discourse through misinformation, disinformation, influence operations, and personalized persuasion tactics, thereby eroding societal trust.

In the defense sector, there is a concern that AI’s unexplained aberrations and unverified analyses could lead to unforeseen and uncontrollable military escalation.

The concept of Artificial General Intelligence has been highlighted as a significant concern. There is growing apprehension about the potential for AI systems to become extremely powerful.

What is the way forward?

There is a need to establish a global consensus on the risks posed by AI. Even a single vulnerability can create opportunities for malicious actors to execute extensive breaches.

It would be wise to establish an international commission dedicated to continuously identifying AI-related risks.

It is crucial to formulate a set of standards that should be met by any public AI service.

These standards play a pivotal role in enhancing safety by reducing risks, advancing quality, facilitating public-private collaboration, streamlining operations, and fostering compatibility across different regions.

There is a need to develop socio-technical standards that outline ideals and provide the technical means to achieve them. Since AI is an evolving technology, these standards must be adaptable.

Governments should have a substantial stake in the design, development, and deployment of AI, a space currently dominated by a small number of companies.

There is a need to reimagine models for public-private partnerships. Regulatory sandbox zones should be established where experiments that boost entrepreneurs’ competitive edge are balanced with fair solutions to societal challenges.
