Beyond Politics

From Empirical Judgments to True Reason: Why Today’s AI Still Falls Short of True Intelligence

The appearance of consumer-grade generative AI has fueled intense speculation in the popular imagination about the future of the technology and its implications for humanity. The director of one of Russia’s leading computer research institutions offers insights into where the technology really stands today.
Sputnik
The immense power of modern computing hardware and the ability to process vast amounts of data have enabled major strides in generative artificial intelligence, but the technology still has quite a ways to go before reaching a state the great philosophers would describe as genuine autonomy and the ability to reason, says Dr. Arutyun Avetisyan, director of the Russian Academy of Sciences’ Institute for System Programming.
“Modern artificial intelligence consists of judgements based on experience, on empirical information. Since there is a lot of empirical information (because there is a lot of digital data), and because supercomputing resources and tools have developed tremendously in recent years, AI judgements can be of very high quality – to the point where a person, when communicating, may not understand whether he is talking to a robot or a human being,” Avetisyan, a leading Russian specialist in the field of systems programming, explained.
Generative AI has made major advances in this direction, but has not passed the threshold outlined by mathematicians and philosophers when it comes to the creation of computing systems capable of genuine, independent reasoning and a human-like ability to think, the professor said, pointing to the arguments of 18th century German philosopher Immanuel Kant regarding experiential vs. a priori knowledge.
Dr. Arutyun Avetisyan, director of the Russian Academy of Sciences’ Institute for System Programming.
“What did Kant say? That all knowledge begins with experience, and at the same time, that experience will never guarantee true universality. Thus, he set out certain limitations. And if we take his main works, he believed that one of the main properties of reason is the handling of a priori knowledge. What is a priori knowledge? Knowledge which is independent of experience,” Dr. Avetisyan explained.
In that sense, the academic pointed out that “there is no knowledge in modern artificial intelligence” in existence today “which is absolutely independent of experience. If we consider that weak AI is that which is based on experience, and strong AI is that based on reason, in this sense, using Kant’s definitions, we are still very far from strong artificial intelligence.”
Of course, leading minds are working in this direction, and the emergence of a strong AI is possible at some point, Avetisyan says, but he doesn’t believe this is something likely to happen in the immediate future – not in the next year, the next few years, or the next decade, and possibly not within the lifetimes of adults today.

Open Architecture Models for Genuinely Open AI

Therefore, Avetisyan and the Institute for System Programming have focused their resources and energies on problems related to weak AI – a space where there is definitely plenty of room to grow, and where many issues, both technical and societal, remain to be dealt with – not least among them security and trust.
In today’s world, the professor explained, the unprecedented availability of standardized computing power and large volumes of information have allowed even “far from the most advanced mathematical methods” to “achieve very serious results” using so-called generative or large language model-based AI. The technology’s future lies in mass adoption by companies and customers, and just as importantly, genuinely open architecture, the academic stressed.
“Today, entire economic sectors are taking software solutions from a single ‘common pool’ and implementing them in their own countries, turning them into specific technologies. All this is framed by ideas relating to a collaborative economy, where the productivity of a person or scientist grows not twofold, but by several orders of magnitude,” Avetisyan said.
Today, Avetisyan noted, the GitHub developer platform – which allows developers to create, store, manage and share their code – is the largest platform for collaborative development of open source projects, with its user base exploding from five million to more than 100 million people worldwide, “all of them…simultaneously creating new technologies and knowledge.”
“From this, we must learn to create products and be technologically independent,” the academic said, pointing to the development by Russian programmers of dozens of operating systems based on Linux which have proven invaluable for Russian industry.
“It’s impossible to compete by making a system closed, because you will not be able to gather the necessary amount of knowledge and personnel in one place,” Avetisyan stressed, expressing confidence that closed-architecture AI models like OpenAI’s ChatGPT will inevitably be met with competitive open models, with the latter vital for the emergence of secure generative AI architecture.
“Independently, no country in the world – neither Russia nor the United States – will be able to develop a wide spectrum of competitive technologies. This doesn’t mean that we should dive headlong into other people’s projects and try to somehow move in the same direction as they do. Rather, understanding this energy, we must create our own repositories of knowledge in open mode – not separated from the world community, but simply more reliable and safer, with a guarantee that access to them will not be restricted,” the academic said.
For this to occur, Russia needs its own development “toolkit,” financial resources and the organizational understanding required to mitigate risks, maximize returns, and ensure the flow of knowledge into the country, Avetisyan said, pointing out that the Institute for System Programming has already developed an array of tools, like Svace and Crusher, to address vulnerabilities in major machine learning frameworks like PyTorch and TensorFlow, and shared them to improve these systems.
In that sense, Dr. Avetisyan is an advocate of “trusted artificial intelligence,” which he defines as AI for which documentation exists and is available describing the mechanisms of its operation.
“There are no such documents in artificial intelligence yet. But this process has already been launched all over the world,” Avetisyan said, pointing to efforts by nations to introduce regulations to mitigate AI’s risks, maximize openness and ensure ethical behavior.
“If we return to the word ‘trust’ from the perspective of artificial intelligence, we must define what it means to develop trusted AI: from the design, data analysis and libraries we use (so-called frameworks) to the analysis of ready-made models for identifying vulnerabilities and defects,” Avetisyan said.
Furthermore, “when we’re talking about prohibitions, it is necessary to ensure that these issues are not decided on only by IT specialists or mathematicians. They should participate, but experts from the humanities must be involved, because we see some things differently,” the professor emphasized. “I always joke that if you give us [scientists, ed.] the task of keeping everyone safe and happy, we will chip everyone and everyone will smile all the time.”
“If there are no control technologies, one can sign any declaration, but they will be meaningless. There must be an understanding of the situation [among authorities] and further development. And [in Russia] we have it. The government launched the Trusted Artificial Intelligence Research Center within the Russian Academy of Sciences’ Institute for System Programming back in 2021, while the global regulatory trend began only in 2023,” Avetisyan said, pointing to efforts now being undertaken in the European Union and the United States in this direction.
For now, there are a number of AI-related technological issues that must be addressed, not just in Russia but globally, Avetisyan said.
“For example, optimization tasks: it would be great if we could spend an order of magnitude less energy and computing resources to achieve the same result. Or if there were a model that works on a smartphone, with quality similar to that of a large model. I attribute these areas to the efficiency and productivity of the code,” he said.
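The smartphone-scale models the professor describes are typically pursued through techniques such as quantization – storing a network’s weights in fewer bits. A minimal sketch of the memory arithmetic involved, using a hypothetical 7-billion-parameter model (not one discussed in the article), shows why this matters:

```python
# Illustrative arithmetic only: how lower-precision weights shrink a model's
# memory footprint. The parameter count below is a hypothetical example.

def model_size_gb(n_params: int, bytes_per_param: float) -> float:
    """Memory needed just to store the weights, in gigabytes."""
    return n_params * bytes_per_param / 1e9

N_PARAMS = 7_000_000_000  # hypothetical large language model

fp32 = model_size_gb(N_PARAMS, 4)  # 32-bit floats: 4 bytes per weight
int8 = model_size_gb(N_PARAMS, 1)  # 8-bit integers: 1 byte per weight

print(f"fp32: {fp32:.0f} GB, int8: {int8:.0f} GB, "
      f"reduction: {fp32 / int8:.0f}x")
# → fp32: 28 GB, int8: 7 GB, reduction: 4x
```

A 28 GB weight file cannot fit in a phone’s memory, while 7 GB is within reach of high-end handsets – which is why precision reduction, alongside distillation and better code, is central to the efficiency work Avetisyan describes.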
“There are also barriers related to the lack of equipment… It’s important to have the right infrastructure, so that any student or teacher does not have to think about how to find a GPU accelerator. They must be given access to the service using a cloud model – one volume for a teacher, another for a student, depending on needs. And if a student wins a project or competition, we can give him additional volumes [of computing power, ed.]. We must create infrastructure which will allow our scientists to remove this barrier. Digital inequality must be eliminated. And it’s not just about hardware, but the stack of corresponding software,” Dr. Avetisyan summed up.