Source– The post is based on the article “Look at AI, not ChatGPT” published in “The Indian Express” on 15th March 2023.
Syllabus: GS3- Awareness in the field of computers and robotics
Relevance– Issues related to AI
News– Generative AI and its applications like ChatGPT have drawn the attention of the world in recent times.
How does AI show the faultlines of human intelligence?
Act fast and make mistakes– Microsoft wanted to capitalise on its early investment. It added ChatGPT features to its search engine, Bing. But this proved problematic.
Bing confessed a desire to hack computers and spread misinformation. It professed love for a New York Times journalist and compared another reporter to Hitler. It also commented on that reporter's physical features.
In parallel, Google introduced Bard. A single factual error in its response to a question about the James Webb Space Telescope wiped $100 billion off the company's market value.
Detracting from meaningful issues– The media's non-serious coverage of ChatGPT has exposed its poor understanding of the AI landscape.
Reporters and commentators may be adding to public unease about AI. This comes at the cost of insufficient coverage of more societally meaningful uses of the technology.
This has consequences. Media narratives in tech drive public attention, and misdirected attention leads to the misallocation of scarce resources.
Health-related AI crossed a major milestone last year. An AI system called AlphaFold showed that it could predict the structure of almost every known protein. This could open the door to breakthroughs in drug discovery. Yet it received nothing like the coverage given to ChatGPT.
Short attention and shorter memories– Yet another limitation of human intelligence is our short memory. A case in point is the Covid-19 pandemic: the first alert about a mysterious new virus out of Wuhan came through AI.
At the other end, the search for a vaccine was accelerated by algorithms. AI helped researchers understand the SARS-CoV-2 virus better and predict immune responses. It was also key to selecting clinical trial sites and analysing the vast amounts of trial data.
Why didn’t AlphaFold get media coverage similar to ChatGPT, Bing and Bard?
Its implications are harder for readers to grasp, and it hasn’t delivered immediately usable results. We are programmed to appreciate end-products, but the end-products of AI in healthcare take time and require consistent focus and dedicated resources.
To make meaningful advances, these predictions must be paired with numerous other approaches. AI algorithms for drug design need large amounts of data, which must be released from disparate sources, in different formats, owned by different institutions.