UNIVERSALLY called the ‘Godfather of AI (Artificial Intelligence)’, Geoffrey Hinton, the man who tweaked and refined Artificial Intelligence as we know it today, quit his job at Google just the other day. Observers thought he was doing so to speak freely about the perils and possibilities of this new genie that could well turn into Frankenstein. Not really, said Hinton, as he clarified in a tweet: “… Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google.
Google has acted very responsibly.” Later, in an interview with the New York Times, he said: “It is hard to see how you can prevent the bad actors from using it for bad things.” And in another interview, with the BBC, he added, “I can now just speak freely about what I think the dangers might be. And some of them are quite scary. Right now, as far as I can tell, they’re not more intelligent than us. But I think they soon may be.”
Fastest-growing app
Hinton is not alone in looking at what AI could do in the coming years and decades as it transforms the digital landscape and, frankly, permeates every facet of our day-to-day life.
Take, for example, a generative AI tool like ChatGPT, a chatbot (software that simulates human-like conversation with users via chat) developed by OpenAI, an American AI research laboratory with billions in funding from Microsoft. Launched last November, it hit 100 million monthly active users by January, a mere two months later. That 60-day record makes it the fastest-growing consumer application in history.
There is no doubt that AI is cutting-edge, even as it remains a tool in its infancy, only now being integrated with a plethora of apps, programmes and tools. But as its inventiveness grows, so do the perils of how crooks will use AI. Perhaps one of the greatest dangers comes from a cybersecurity perspective. As Forbes said in a perceptive piece on the many ramifications of this new evolution, “AI opens up a new can of worms—a dimension of risk never seen before. There are several ways that AI can be weaponized by bad actors. AI will probably help automate fraud operations.
In other words, AI can help design “heat-seeking missiles” that carefully handpick their targets and adapt conversations in real-time—winning trust to maximize persuasion and doing so at scale. Political opponents and state actors can harness AI to manipulate public opinion and spread false narratives. What’s worse, since AI has vast amounts of factual knowledge, it can make highly logical arguments and be convincing and authoritative in its conversations.”
Dangers very real
For a diverse nation like India, the dangers of AI bots acting as mirror images of real people and rallying a hysterical, fanatical group of followers to carry out their diktat are very real. It will then take little effort for them to spin their web around innocent people across states and regions. What this army of AI-armed bots can do is consistently and continually produce high-quality content on a range of topics.
“As soon as certain topics or keywords are triggered, the bots kick into action, immediately spreading false narratives and misinformation,” said Forbes. Europe, in fact, has already moved to regulate AI tools. A new EU law will stipulate that companies using tools like ChatGPT must disclose any copyrighted material that was used to create their systems. Although the AI Act has been in the works in the EU for close to two years, it is only now coming close to fruition. In fact, this Act could well be a precursor of how the G7 formulates its response.
A resolution moved at a recent meeting of the group’s IT Ministers in Japan (among other notable invitees was India’s IT Minister Ashwini Vaishnaw) said that “risk-based” regulation of AI could be a first step as they move forward to create a template for regulating tools such as OpenAI’s ChatGPT and Google’s Bard, to make sure that they remain tools and not weapons.
However, India has no plans as yet to regulate AI, Vaishnaw told Parliament: “AI has ethical concerns and risks due to issues such as bias and discrimination in decision-making, privacy violations, lack of transparency in AI systems, and questions about responsibility for harm caused by it.” He added, though, that Government institutions are now working to standardise responsible AI and create a set of best practices.