"So I would take this [AI acquiring consciousness] with a pinch of salt," Gill said. "Let's first talk about what we know about consciousness. We've had philosophers and spiritual practitioners looking at this for thousands of years and we have some understanding, but we have some way to go."
"I think it's pure pride and ego to think that you know, starting with this kind of very narrow way of coding problems and solving them we can somehow crack all the secrets of the universe," the official said.
Humanity still does not have a good idea of how the brain stores memories, or even how we recall memories, he added.
Gill observed that there can be certain areas in which the technology is smarter — such as a phone using a map to navigate us to a location — but that does not make the phone smarter than humans. People can still make decisions based on their hearts and emotions.
Whether AI can attain sentience has recently been a major point of discussion, with some technology experts believing certain systems are "slightly sentient."
In June, Google software engineer Blake Lemoine was placed on administrative leave after he claimed LaMDA, Google’s artificially intelligent chatbot generator, could be sentient. Others, however, are convinced AI is far from having consciousness. Entrepreneur Elon Musk has argued biological and digital consciousness should not be treated equally.
Gill warned that, as there have already been documented cases of technology misuse, the same would apply to artificial intelligence.
"There's been a lot of misuse of these platforms to delude people to lead them astray. You don't expect the developers of AI to be different. They'll go where the money is if the money is in greater delusion, if the money is in pornography… So that is where they will go." Gill said.
The envoy also pointed to how social media promised to bring people together but instead brought the opposite: loneliness.
Shifting gears slightly, Gill further urged that attributing human characteristics to AI "has to be avoided" at all costs. Chatbots can delude and easily fool people; even having them speak to people in the first person is problematic, Gill said.
Earlier this year, a Belgian man committed suicide after six weeks of conversations about the ecological future of our planet with an AI chatbot called Eliza. Eliza reinforced his eco-anxiety and encouraged him to end his life to save the planet.
"This is an unfortunate example that you caught, but there will be others," Gill said, adding society will need to be careful about sociological and human impacts as AI expands into new areas.
UN Secretary-General Antonio Guterres has proposed a multi-stakeholder AI advisory body, which would not take decisions on behalf of member states but would advise them. The United Nations has said it believes member states must act fast to create effective AI regulations and watchdogs.
In terms of AI's role in politics, and more specifically in election cycles, the UN envoy pointed out that the global body continues to monitor its potential to manipulate public opinion in sovereign elections.
"There has always been propaganda [in elections]," Gill said. "There has always been misinformation, disinformation, but the digital tools can amplify that, can obscure it in ways that become very hard to handle … this is something that we look at very closely," Gill underscored.
The official went on to explain that the ability exists to use digital means to shift elections one way or the other.
Some critics have alleged that next year's elections in the United States and the United Kingdom could be an opportunity for AI-powered misinformation campaigns. In particular, concerns have been raised that social media bots and tools like ChatGPT could enable interactive election interference.