Theoretical and legal analysis of the risks of introducing technologies based on artificial intelligence in the socio-economic sphere
This paper considers the main risks of applying artificial intelligence (AI) in the socio-economic sphere and the measures that can be taken to minimize them. The following risks have been identified: the disappearance of many professions, loss of jobs, vulnerability of AI-based systems to hacker attacks, the formation of monopolies, discrimination against individual citizens and social groups, etc. Methodology: the work applied general scientific research methods (analysis, synthesis, induction, deduction, the systematic method and the method of abstraction), as well as specific scientific methods: the comparative legal and formal legal methods. Legal measures to minimize the risks of AI implementation are recommended, and prospects for further research in this area are identified.
1. Standards (GOSTs) should be created with which AI systems must comply. These standards should regulate transparency of operation and contain security protocols for AI-based systems, as well as protect human and civil rights and freedoms. The key principles, in our view, should be security, transparency and respect for human rights and freedoms. Defining ethical standards also involves making decisions about the collection and use of personal data.
2. Any AI-based system must be understandable not only to operators and programmers but also to the majority of citizens. Accordingly, it is necessary to develop a scale for assessing the degree of explainability of individual AI-based technologies. Such a scale can help both developers and educators in designing appropriate educational programs, advanced training courses and professional retraining.
3. AI should first operate in an experimental mode. Before an AI system is directly implemented in any sphere of human activity, developers should obtain and take into account feedback from users. This approach makes it possible to identify and correct errors and malfunctions that may arise during the operation of the AI.
4. AI-based systems should be subject to systematic and continuous monitoring by the authorities. For this purpose, consideration should be given to creating special interdepartmental expert commissions consisting of specialists in the legal, technical, ethical and other aspects of AI application.
5. When developing AI systems, the socio-cultural characteristics and needs of each particular community must be taken into account.
We believe that the following laws are needed: on the transparency of algorithms, to ensure that they work properly in various sectors of the economy, from commerce to social services; on verification of proper functioning, whereby AI must be available for systematic auditing and monitoring to confirm that it works correctly and complies with government standards; on non-discrimination; and on the fair distribution of AI-based high technologies to all members of society.
The author declares no conflict of interest.
Keywords
state, artificial intelligence, law, risks, theoretical and legal analysis
Authors
Name | Organization | E-mail
Kiselev Aleksandr S. | Financial University under the Government of the Russian Federation | alskiselev@fa.ru
Theoretical and legal analysis of the risks of introducing technologies based on artificial intelligence in the socio-economic sphere | Tomsk State University Journal of Law. 2025. № 55. DOI: 10.17223/22253513/55/4