Article contributed by Professor Ioannis Pitas, Director of the Artificial Intelligence and Information Analysis (AIIA) lab at the Aristotle University of Thessaloniki (AUTH) and Management Board Chair of the AI Doctoral Academy (AIDA).
Can AI research be stopped, even temporarily? In my view, no. AI is humanity's response to a global society and a physical world of ever-increasing complexity.
As physical and social complexity increases, the underlying processes deepen and seem relentless. AI, together with citizen morphosis (education and formation), is our only hope for a smooth transition from the current Information Society to a Knowledge Society. Otherwise, we may face catastrophic social implosion.
Perhaps we have reached the limits of AI research being engineered primarily by Big Tech companies, which treat powerful AI systems (such as LLMs) as marvellous black boxes whose functionality is poorly understood, both because their technical details are inaccessible and because the systems themselves are immensely complex.
Naturally, this lack of knowledge and related confusion about the nature of human and machine intelligence entails serious social risks.
The Open Letter calling for a pause on AI experiments reflects genuine concerns about social risks, as well as financial concerns about risk management related to future AI investments or the possibility of expensive lawsuits (in an unregulated and unlegislated environment) if things go wrong.
However, I doubt that the proposed six-month ban on large-scale experiments is the solution. It is impractical for geopolitical reasons and would likely yield few benefits, particularly if it targets only LLM training rather than LLM deployment.
Worse still, the melodramatic tone of this Open Letter may only fuel technophobia in the wider population. Scientific views discounting the value of LLMs (such as those expressed by Chomsky) are outdated (reminiscent of the rejection of the perceptron by Minsky and Papert) and unproductive.
Of course, AI research needs to change. It must be more open, democratic and scientific. Here are my proposals for how AI research must change if it is to promote societal progress:
- AI research issues that have a far-reaching social impact should be entrusted to elected Parliaments and Governments, rather than to corporations or individual scientists.
- Every effort should be made to facilitate the exploration of the positive aspects of AI in social and financial progress and to minimise its negative aspects. The positive impact of AI systems can greatly outweigh their negative aspects if proper regulatory measures are taken. Technophobia is neither justified nor a solution.
- In my view, the biggest current threat is that such AI systems can remotely deceive people who have little capacity to verify what they are told. This can be extremely dangerous to democracy and to any form of socioeconomic progress. Criminals could also use LLMs for illegal activity (cheating in university exams is a rather benign example of what we have seen so far). If we act on these risks, the impact on labour and markets will be very positive in the medium to long run.
- As AI systems have a huge societal impact, key advanced AI technologies should be opened up if we are to maximise their positive impact on socio-economic progress. To this end, AI systems should be required by international law to be registered in an ‘AI system register’, and should notify their users that they are conversing with, or using the results of, an AI system.
- AI-related data should also be (at least partially) democratised, again to maximise their benefits for socio-economic progress. Proper financial compensation schemes must be provided for AI technology champions, to offset any profit loss due to the aforementioned openness and to ensure strong future investment in AI R&D (e.g., through technology patenting and obligatory licensing schemes).
- The balance of AI research between academia and industry should be rethought to maximise research output while maintaining competitiveness and rewarding the R&D risks undertaken. Education practices should be revisited at all levels to maximise the benefit of AI technologies while fostering a new breed of creative and adaptable citizens and (AI) scientists.
To make all of the above possible, proper AI regulatory, supervision and funding mechanisms should be put in place and substantially strengthened.
Several of these points were already discussed in the 2021 AI Mellontology workshop and are also included in my recent book, ‘AI Science and Society’.