REGULATORY FRAMEWORK FOR THE USE OF ARTIFICIAL INTELLIGENCE IN EUROPEAN HIGHER EDUCATION

MARCO NORMATIVO DEL USO DE LA INTELIGENCIA ARTIFICIAL EN LA EDUCACIÓN SUPERIOR EUROPEA


DOI: https://doi.org/10.17981/juridcuc.21.1.2025.07


Fecha de Recepción: 2025/03/27 Fecha de Aceptación: 2025/06/20


Andreea Elena Tabacu

National University of Science and Technology POLITEHNICA Bucharest, Romania

andreea.tabacu@ubp.ro


Para citar este artículo:

Tabacu, A. (2025). Marco normativo del uso de la inteligencia artificial en la educación superior europea. Jurídicas CUC, 21(1), pp. 124 - 152. DOI: https://doi.org/10.17981/juridcuc.21.1.2025.07


Resumen

La integración de la inteligencia artificial (IA) en la educación superior resalta la necesidad de establecer regulaciones institucionales que disciplinen la conducta de estudiantes y personal académico en las actividades esenciales de aprendizaje e investigación. Aunque el uso de tecnologías de IA se expande, estudios recientes posteriores a 2022 muestran que los usuarios carecen frecuentemente de formación suficiente y de una comprensión completa tanto de los beneficios como de los riesgos éticos, legales y académicos asociados. Los marcos jurídicos europeos existentes, especialmente el Reglamento (UE) 2024/1689, regulan solo aspectos limitados, dejando en gran medida sin regulación directa las actividades de enseñanza, aprendizaje e investigación. Este estudio adopta una metodología cualitativa, combinando revisión de literatura, análisis jurídico normativo y análisis comparativo de contenido. Se revisaron estudios empíricos sobre percepciones de los actores, disposiciones legales de la UE y políticas institucionales de universidades europeas líderes, identificando patrones, significados e inconsistencias normativas. Los resultados evidencian la necesidad de regulaciones institucionales vinculantes que disciplinen el uso académico de la IA. El modelo propuesto contempla: formación obligatoria en IA para estudiantes y docentes; uso limitado de la IA como herramienta complementaria; obligación de declarar el trabajo asistido por IA; adaptación de los métodos de evaluación al detectarse su uso; y responsabilidad del usuario por violaciones de derechos de autor, datos personales y otros derechos de terceros. Estas normas son esenciales para proteger la integridad académica, los derechos individuales, la seguridad jurídica y el desarrollo profesional ético en la educación superior europea.

Palabras clave: Inteligencia Artificial, educación superior, derecho, ética, legislación

Abstract

The integration of Artificial Intelligence (AI) into higher education underscores the necessity for institutional regulations to oversee the conduct of both students and academic staff in key learning and research activities. Despite the increasing adoption of AI technologies, recent studies conducted after 2022 indicate that many users lack adequate training and a thorough understanding of both the benefits and the significant ethical, legal, and academic risks that AI may pose. Current European legal frameworks, particularly Regulation (EU) 2024/1689, offer only limited provisions addressing specific aspects of higher education. Consequently, teaching, learning, and research activities remain largely unregulated. This study employs a qualitative methodology that incorporates literature review, normative legal analysis, and comparative content analysis. It reviews empirical studies on stakeholder perceptions, analyzes EU legal provisions, and examines institutional policies from leading European universities to identify patterns, meanings, and regulatory inconsistencies. The findings emphasize the urgent need for binding institutional regulations that govern academic conduct regarding the use of AI. The proposed regulatory model includes mandatory AI training for both staff and students, limiting AI to a supportive role in academic work, requiring full disclosure of AI-assisted outputs, adapting evaluation methods when AI use is identified, and assigning user responsibility for any violations of third-party rights, including copyright and data protection. Implementing these regulations is essential for safeguarding academic integrity, individual rights, legal compliance, and ethical professional development within European higher education.

Keywords: Artificial Intelligence, higher education, law, ethics, legislation


INTRODUCTION

In a world where technological evolution has become essential, it takes precedence and defines the trajectory to be followed in the fundamental areas of social life. The human being is no longer understood through what this notion has traditionally meant but takes on a new hypostasis: the human assisted by technology (Harari, 2018).

Education has also been, and continues to be, influenced by technology. Still, the step taken by making AI tools available to the general public, free of charge, opened a path different from the one expected to define national education systems. These systems were structured on traditional bases, in which the contribution of each actor participating in this vital field rested on the understanding, acquisition, and application of knowledge useful for future professional life.

Although the tendency is to consider the world to be only at the beginning of this journey, the extremely rapid evolution of technology shows that urgent adaptation to the challenges it brings is necessary so as not to lose sight of the purpose and essence of university education. Under the Global Convention on the Recognition of Qualifications concerning Higher Education (UNESCO, 2019), the “optimal use of human and educational resources” contributes to “structural, economic, technological, cultural, democratic and social development for all societies” (Article II, paragraph 10). Similarly, Article 3, paragraph 1, letter a) of the Romanian Law of Higher Education sets forth the mission of higher education as ensuring the professional and personal development of students, facilitating the integration of graduates into the labor market, and meeting the competence needs of the socioeconomic environment (Law of Higher Education, 2023).

Education must “strengthen the respect for human rights and fundamental freedoms” and “enable all persons to participate effectively in a free society,” as provided in Article 13, paragraph 1, of the International Covenant on Economic, Social and Cultural Rights (United Nations, 1967). The individual is to develop freely, integrally, and harmoniously, to form an autonomous personality, to cultivate an entrepreneurial spirit, and to participate actively in society, as required by Art. 2 of Law no. 199/2023 (Law of Higher Education, 2023).

In that case, those responsible for education must ensure that the individual preserves, develops, and exercises their autonomy without becoming dependent on others, whether these are already recognized subjects of law or technological instruments, systems, or applications about to become participants, in the broad sense, in legal relations (Dutu, 2025; Visoiu, 2025). Such tools are meant to help, yet they risk becoming permanent crutches for the conduct of social relations.

The ideal of higher education appears to be eroding, as shown by the continuous and pervasive use of AI systems to perform tasks in the academic environment.

Safeguarding values in higher education can be effectively achieved by regulating the specific use of AI to discipline individual conduct and prevent negative consequences on the rights of individuals, whether they are users themselves or third parties.

A solution to preserving the purpose and mission of higher education institutions lies in regulation that clearly defines whether, under what conditions, and with what consequences AI systems may be used in teaching, research, and learning, as has been proposed in other states for any critical activity (Zabala Leal & Zuluaga Ortiz, 2021), for digital platforms (Kirillova et al., 2021), or for the use of social media or digital networks by minors exposed to products with sexual content (Delva Benavides & Gonzalez Lopez, 2022; Ualzhanova et al., 2020).

The analysis of Regulation (EU) 2024/1689 indicates that it contains only limited provisions concerning AI in education, while essential activities related to teaching, learning, and research remain unregulated.

Additionally, the documentary analysis reveals that European universities primarily aim to guide members of the academic community toward the responsible use of AI through non-binding guidelines and recommendations, deliberately avoiding strict, mandatory rules that would hinder the academic environment’s adaptation to these emerging technologies. Instead, they cautiously take shelter behind general norms, user instructions, warnings, and guidelines.

In the absence of higher-level legal norms regulating the use of AI in education, the primary responsibility for addressing this issue lies with higher education institutions, which must establish explicit legal norms for their members to prevent misconduct and avoid negative consequences.

Indeed, the same rapid technological evolution may challenge legislators’ ability to keep pace with appropriate regulation. Such regulation must avoid both excessively restricting the use of these technologies and permitting their unrestricted application: both extremes risk undermining the mission and fundamental objectives of higher education if the individual relies solely on AI and ceases to exercise independent judgment, critical thinking, and personal responsibility.

This study aims to analyze how AI systems are perceived and utilized in university education, to assess the potential consequences arising from their use, to examine the regulations adopted at the European level, and to determine the extent to which these regulations address the academic environment. Additionally, it analyzes how universities address this issue. One of the key findings emerging from the analysis is the need for regulation, which the study further examines in detail. Building on the guidelines and recommendations provided by leading European universities to their academic communities, this study seeks to identify concrete solutions by formulating binding rules for the use of AI in higher education. These rules aim to respect the rights of the individuals involved, address potential consequences, and preserve the fundamental mission of higher education: to prepare competent, engaged, and independent professionals for society.


METHODS

To achieve its objectives, the study applies a qualitative research design, combining documentary analysis and content analysis methods. It conducts a targeted literature search (keywords: AI use, higher education) in major databases (e.g., ScienceDirect, Springer, MDPI), primarily selecting open-access articles published after 2022 in English, to address participants’ perceptions of AI use in higher education. Normative legal analysis and interpretation are conducted on the provisions related to higher education contained in Regulation (EU) 2024/1689. The study identifies and examines the current AI policies and guidelines adopted by leading universities in European capitals, publicly available in English on their official websites, to assess whether binding institutional regulations governing AI use for teaching, learning, and research in higher education exist.

The research employs thematic content analysis to identify recurring patterns and meanings in institutional approaches to regulating the use of artificial intelligence in higher education, which serve as the basis for formulating binding norms. From the identified elements, those addressing the mode, scope, or extent of AI use in specific academic activities are selected, along with provisions ensuring transparency in its use and addressing potential consequences for the rights of individuals involved. Additionally, aspects related to AI literacy are incorporated. Also, an exploratory and interpretative approach is adopted to assess the current state of institutional governance, justify the need for institutional regulation, and propose minimum binding standards that ensure the protection of individual rights and ethical principles in the responsible integration of artificial intelligence into European higher education.


RESULTS AND DISCUSSION

Studies addressing participants’ perceptions of AI use in education

A multitude of studies in various fields have analyzed the influence, importance, and role of AI in education (Barus et al., 2025; C. Meniado, 2023; Chiu, 2024; Dong et al., 2025; Parambil et al., 2024; Stöhr et al., 2024), examining how teachers, on the one hand, and students, on the other, relate to this technology.

Initially, AI was not widely accepted in education (Elsen, 2023). Attempts were made to implement it, but there was a noticeable tendency to rely on traditional methods, especially due to a lack of understanding of how to use these new tools effectively (Lyu et al., 2025). Over time, however, participants in the educational process began to embrace AI (Wang et al., 2024), with the level of acceptance varying based on their role in the educational context (Heaven, 2023).

Thus, teachers initially responded that they could do without this tool, since they possess the knowledge to be transmitted to students through traditional methods, which ensure teacher–student contact, essential empathy, and the support that the recipient of the educational act must receive from the professional (Rane, 2024; Saihi et al., 2024). Yet it was also found that AI can be an effective tool for improving the educational act (Jin et al., 2025) and that adaptation is necessary (Buzkurt, 2023).

Students, on the other hand, have embraced AI more quickly, especially for the individual assignments they receive, as AI has proven that, in an extremely short time and without effort or personal involvement, it can provide the answers they need, freeing up considerable time for activities they prefer (Forman et al., 2023; Meyer et al., 2023; Stöhr et al., 2024).

A series of ethical issues related to the use of AI in education has also been reported, including the use of invasive technologies that may infringe on individual rights (Moise & Nicoară, 2024) as well as moral dilemmas related to how such technology helps in decision-making (Zhang et al., 2022).

Alarm signals have been raised in the academic environment. Researchers have discussed the challenges caused by uncontrolled use (Atlas, 2019) and the impossibility of identifying the author of a given assignment, since AI can generate completely different essays, compositions, or papers in the shortest possible time (Basic, 2023; Newport, 2024). AI invites misconduct, with a particular competition emerging between AI technologies and plagiarism detection programs (Grassini, 2023; Lo, 2023), and a series of measures has been proposed either to prevent the use of AI or to keep it under control (Baidoo-Anu & Owusu Ansah, 2023).

Nevertheless, it has been found that AI must be accepted. Its use in school preparation activities can be helpful (Huang et al., 2023), even if it can be difficult to detect whether the submitted work belongs to the person presenting it or reflects the contribution of AI (Cotton et al., 2024; Khalil, 2023), and concerns have been expressed about the erosion of critical thinking skills (Hadi Mogavi et al., 2024).

Teachers can use AI technology in the educational process to facilitate teaching methods, increase their attractiveness, and give the activity a high degree of interactivity (Baidoo-Anu & Owusu Ansah, 2023); to personalize education and training according to the specifics of each student; to prepare test batteries, build course materials, or even assist in student assessment (Grassini, 2023); and to obtain direct feedback during the learning process (Yu, 2023).

According to the literature, AI as currently known is not a threat, provided the situation of AI systems is fully and correctly understood and regulated: AI can ensure easy access to educational resources (Kovari, 2025), support prediction and evaluation, help personalize learning (Vieriu & Petrea, 2025; Zawacki-Richter et al., 2019), and facilitate administrative processes (Pini et al., 2025).

It cannot be stated that AI should be refused, eliminated, or marginalized. On the contrary, there is a tendency to implement it in the act of education (Bin-Nashwan et al., 2023; Lim et al., 2023; Ou et al., 2024; Pavlik, 2023; Southworth et al., 2023).

For the effective implementation of AI in the educational act without replacing humans, three conditions are required: training of all participants in the academic environment on the use of AI (Tzirides et al., 2024), a rethinking of examination and evaluation methods (Chan, 2023), and cooperation between teachers and students to identify the benefits and risks of its use in education (Lee et al., 2024).

However, specialists have expressed concern about AI systems that read emotions and sort them into categories in various fields of activity. Invoking the notion of responsible AI, they are convinced that legislative intervention, or the establishment of precise and secure measures to protect fundamental human rights, is necessary when such AI technology is used (Shehu et al., 2025).

European norms on AI use in education

The White Paper on Artificial Intelligence (European Commission, 2020) demonstrates an essential openness of the European Union to the implementation of AI systems in education in the service of the public interest, provided guarantees for the protection of individual rights and freedoms are in place.

A group of experts on artificial intelligence set up by the European Commission has developed the Ethics Guidelines for Trustworthy Artificial Intelligence (High-Level Expert Group on Artificial Intelligence, 2019), noting that, to this end, AI must comply with all laws (lato sensu), ethical principles, and values, and be robust both technically and socially. The guidelines repeatedly stress that the use of AI must respect human rights and freedoms and be oriented towards enhancing people’s quality of life and autonomy. The values and principles that AI may not affect in any way are human freedom and dignity, equality, non-discrimination, solidarity, respect for democracy, justice, the rule of law, and individual rights. The ethical principles to be respected in the implementation and use of AI are respect for human autonomy, prevention of harm, fairness, and transparency. Very suggestively, the guidelines speak of human-centric AI, meaning that “AI is not an end in itself” but a tool that must serve people to increase human well-being.

Likewise, the Report on Artificial Intelligence in education, culture and the audiovisual sector (European Parliament, 2021) speaks about “the importance of developing quality, compatible and inclusive AI (...) for use in deep learning which respect and defend the values of the Union” (General observations, point 5). It states that “the real objective of AI in education systems should be to make education as individualized as possible” (point 33) and stresses concern about “the lack of specific higher education programs for AI” (point 37). Europe needs a legal framework for AI (point 43) to ensure fundamental rights and freedoms, because the authorities are concerned that “schools and other education providers are becoming increasingly dependent on educational technology (edtech) services, including AI applications” (point 44).

In this context, after lengthy discussions (Huruba, 2023), the European forums adopted a binding normative act directly applicable in the Member States to regulate the legal regime of AI. This is the Regulation (EU) establishing harmonized rules on artificial intelligence, the so-called Regulation on artificial intelligence (Regulation (EU) 2024/1689, 2024).

After acknowledging the advantages of AI technologies, the Preamble, which sets out the context of the norm’s adoption and the legislator’s intention, attempts to define the areas in which they should be used under certain conditions. While AI “contributes to a wide array of economic, environmental and societal benefits across the entire spectrum of industries and societal activities,” its use should only support education and training (recital 4 of the Preamble).

The legislator is usually cautious, especially in this area, where technological evolution has outpaced regulation: the norm was adopted in response to the emergence of AI and, above all, to its exceptionally rapid development and expansion.

In recital 56 of the Preamble, the European Regulation admits the implementation of AI systems in education, recognizing that promoting high-quality digital education and training is crucial for active and effective participation in society and in professional and economic activity. Any democratic process requires acquiring digital skills and competencies appropriate to current reality, without removing or diminishing critical thinking and each person’s own discerning analysis of the situations in which they are placed.

The European legislator was cautious and, limited by university autonomy, could not impose a general regulation on implementing AI in education; it did, however, adopt minimum norms in this field.

By classifying AI systems into four degrees of risk based on the possibility of affecting the safety, livelihoods, and rights of the individual, the European norm qualifies AI systems according to the field in which they are used.

The legislative intervention concerns AI that identifies or infers the emotions or intentions of individuals based on their biometric data, which poses an unacceptable risk and should be prohibited in education. Subject to narrow exceptions, the placing on the market, putting into service, and use of such AI systems are not allowed, as they may generate discriminatory results and affect the rights and freedoms of the individuals assessed (recital 44 of the Preamble to R(EU) 2024/1689). Article 5(1)(f) of the Regulation prohibits the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer the emotions of an individual within educational institutions, with exceptions for medical or safety reasons (Hsu et al., 2025).

Therefore, even if AI systems can provide real help to performance in certain areas, their use must be limited so as not to allow intrusion into a person’s life in terms of their emotional background and manner of manifestation in the context of the act of education.

With respect to the four degrees of risk, the Regulation goes further and provides that AI systems present a high degree of risk if they are intended to be used to determine access or admission, or to assign individuals to education and vocational training institutions at any level (Art. 6 para. 2 read in conjunction with Annex III).

The consequence of this qualification is that the AI technology must comply with the requirements set out for the risk management system, depending on its intended purpose and the recognized state of the art of AI and AI-related technologies. It is necessary to identify and analyze the known and reasonably foreseeable risks that the high-risk AI system may pose to health, safety, or fundamental rights when used in accordance with its intended purpose, and to take appropriate and targeted measures to manage these risks. It is also necessary to estimate and assess the risks that may arise when the AI system, although used in accordance with its intended purpose, operates under reasonably foreseeable conditions of misuse (Art. 9 para. 2). Other risks identified through the analysis of data collected from the post-market monitoring system must likewise be assessed, and high-risk AI systems must be tested before being placed on the market or put into service (Art. 8).

Training, validation, and testing data are subject to governance and management practices (Art. 10) which refer, among others, to: data collection processes and the origin of the data; relevant processing operations for the preparation of the data; the formulation of assumptions, in particular regarding the information that the data should measure and represent; examination to identify possible biases likely to affect the health and safety of individuals, harm fundamental rights, or lead to discrimination prohibited under Union law, in particular where output data influence input data for future operations; and appropriate measures to detect, prevent, and mitigate such possible biases (Art. 10 para. 2).

In general, in education, an AI system should not decide without such checks who can access or be admitted to an educational institution, or to which educational program each person is assigned (Annex III, point 3). Ensuring that the rights of participants in the act of education are not affected can be supported by checks on the training data of the AI system, which must show that the data bear no traces of discrimination based on gender, health status, age, racial origin, sexual orientation, etc., or of prejudices that may harm essential values such as access to education (Stivers, 2018).

Likewise, AI systems used to assess a person’s learning outcomes or the appropriate training level (Annex III, point 3, letter b) are high-risk systems and subject to the same checks and limitations.

Similarly, AI technology that helps monitor and detect prohibited student behavior during tests (Annex III, point 3, letter d) presents a high degree of risk, as it is intrusive and its consequences for the individual can be harmful, especially when the technology does not accurately reflect the truth.

AI systems intended to assess the appropriate level of education that a person will receive or be able to access within educational institutions (Annex III, point 3, letter c) are also considered high-risk. This appreciation must remain a human assessment, left to a subject of law, not to a machine that may err and give the individual a distorted image of their ability to participate in real life.

Likewise, AI systems that profile the user must be controlled and limited. Annex III of the Regulation specifies that an AI system is always considered to present a significant risk if it carries out profiling of natural persons.

The Regulation requires that high-risk AI systems be designed and developed so that, through appropriate human–machine interface tools, humans can effectively oversee them during use (Article 14 para. 1). They must also technically allow the automatic recording of events (log files) throughout the system’s lifetime (Article 12 para. 1) and ensure an appropriate type and degree of transparency, so as to comply with the relevant obligations of the provider and the deployer, achieve an appropriate level of accuracy, robustness, and cybersecurity, and perform consistently in these respects throughout their life cycle (Article 15 para. 1).

For the third type of AI technologies, presenting limited risk to the safety, livelihoods, and rights of the individual, the law imposes an obligation of transparency: the individual must know, before any interaction, that they are interacting with an AI system and not with a person (Art. 50).

If the AI system generates synthetic content in different formats, in addition to marking the outputs, the provider must ensure that the technical solutions are effective; however, if the system only performs “an assistive function for standard editing or does not substantially alter the input data,” the provider is not bound by this obligation (Art. 50 para. 2).

Regarding AI systems that generate or manipulate image, audio, or video content constituting deepfakes, the Regulation requires, as a rule, disclosure that the content has been artificially generated or manipulated. The same obligation applies where the AI system generates or manipulates text published to inform the public on matters of public interest, except where such use is authorized by law for a criminal procedure or where the AI-generated content has undergone a process of human or editorial review, editorial responsibility resting with the human (Art. 50 para. 4).

A necessary provision is found in Art. 6 para. 3 of the Regulation, which allows an AI system mentioned in Annex III to be assigned a lower degree of risk when it does not pose a significant risk of harm to the values protected by the law. For example, this risk is diminished when the AI performs a narrow procedural task, improves the result of a previous human activity, or detects decision-making patterns without replacing or influencing a previously completed human assessment.

For systems with minimal (or zero) risk, in the fourth category, the law imposes no obligations (video games, spam filters, recommendation or sorting algorithms, search engines, and translation tools) (Moldovan, 2025).

How is AI seen in European universities?

The position of the European University Association (2023) on the use of AI in higher education is clear. This entity rules out the possibility of universities banning AI systems and refers to their responsible use for teaching and learning.

The Open University (2025) allows students to use Generative AI as a learning or assessment aid, with the requirements to disclose its use, not to rely on AI to consistently generate the work, not to use personal data, and not to infringe copyright, including by uploading protected material (e.g., exam questions developed within the university). Students are also warned that the answers may be incorrect and that using AI to complete tasks will affect their acquisition and development of the skills needed for further studies and later employment.

The Russell Group (2023) has adopted principles for the use of AI in education, showing that it is vital to prepare participants in the act of education for the use of AI, to adapt teaching and assessment methods to incorporate AI, and to respect the rules of ethics and academic integrity.

Oxford University (2025) provides rules for the use of AI by those involved in the act of education, showing, among other things, that the unauthorized use of an AI system for assessment attracts the consequences associated with plagiarism, and that, where the use of AI is allowed by the respective department, students must disclose that they have used AI and to what extent.

The University of Cambridge (2025) guides students and teaching staff on how to use AI systems, warns of the violation of ethical norms when a work submitted for assessment is the exclusive result of an AI system, and encourages teachers to design assessment tests that reveal learning outcomes involving higher-order skills that AI cannot handle.

Other universities in England have also established rules for the use of AI in learning and preparation for assessment, the most essential being those aimed at the protection of personal data, the prevention of discrimination, and the prohibition of submitting work that is exclusively the result of an AI system without disclosing this fact (Swansea University, 2025).

Erasmus University Rotterdam (2024) leaves it to the teachers to decide whether a student may use AI. Still, given that students use these free systems anyway, the university warns of the danger looming over assessment and provides guidance on identifying use cases for Generative AI.

Freie Universität Berlin (2024) provides rules for scholarly publishing that exclude AI tools from authorship or intellectual property and compel the author to disclose which content was generated using AI tools, with the entire paper remaining the responsibility of the human who uses them.

The Universität Wien (2025) presents a guide for students that allows AI tools to be used depending on the subject and the course. Still, where this is allowed, users are advised not to enter personal data into the AI system and to check the results, all responsibility remaining their own.

The ELTE Faculty of Education and Psychology Budapest (2023) elaborated a precise guide on using content-generating AI, with several examples of AI use, recommendations and suggestions for teachers and students, and an overview of possible misconduct.

Uppsala Universitet (2024), Sweden, presents the relevant law and guidelines on this subject and considers that banning AI is unsuitable. The most prominent issues to be considered are academic integrity, personal privacy, and copyright.

DTU, the Technical University of Denmark (2024), allows the use of AI by both teachers and students and has elaborated an AI guide according to which students take full responsibility for the work they submit for an exam and must never copy the work of others without acknowledging the source, even when that source is an AI system. SDU, the University of Southern Denmark (2024), accepts generative AI for various projects and papers but not for exams and written tests. The user must state the use of AI and indicate which parts are AI-generated.

Hanken School of Economics, Finland (2023) established guidelines for using AI in teaching and learning, which state that using language models in theses at any level is forbidden because the work must be the author's own. Where AI is allowed, students can use LLMs to correct grammar and improve the text. Also, the University of Helsinki (2025) guides researchers in using AI tools in their activity, with principles and advice on responsibility, transparency, and ethics.

Vilnius University (2024) adopted a guide with recommendations for using AI in learning and teaching in the situations expressly mentioned, and the responsibility for using personal data and copyright lies with the user. AI tools are not forbidden in research, but the author must respect ethical norms and disclose the use of such instruments.

Riga Stradins University (2024) provides recommendations for students and academic staff. Teachers have the right to decide whether and how AI tools can be used, in accordance with guidelines that call for responsible use.

The University of Warsaw, Poland (2025) addresses researchers (staff and students) with principles that require responsible and limited use, because human expertise must not be replaced. It is necessary to observe academic ethics rules, acknowledge the use of AI, and consider the limitations and biases of the tool.

Comenius University of Bratislava (2024) adopted an internal regulation on the use of AI, with rules and recommendations for teachers and students, as the institution promotes the use of AI in all its activities. However, AI tools are not accepted for creating works that must result from independent student activity. Academic rights and freedoms, ethical principles, copyright, and transparency must also be respected.

Charles University in Prague (2023), aware that the use of AI tools cannot be prohibited, provides guidelines for both students and academic staff; teachers are required to instruct students on the ethical use of AI so that they understand that the user bears responsibility for the use and its results.

The University of Antwerp (2024) adopted a detailed guide for using generative AI in research, which prohibits using AI to create the core content of publications without verification and additional substantive editing. The guide explains the ethical challenges and limitations of AI and makes clear that responsibility for the information provided by AI lies with the researcher.

In Romania, at the leading universities in Bucharest, Cluj, Craiova, Iasi, and Timisoara, no norms, guidelines, or indications have been identified regarding the concrete use of AI systems or the possible consequences in teaching, learning, and research activities. The National University of Science and Technology POLITEHNICA Bucharest provides, in a short text, for the possibility of using AI in learning and teaching activities if academic ethics are respected (Regulation on the Organization and Operation of Undergraduate University Studies, 2024).

There is a need for institutional regulation

The analysis of European legal norms reveals that they do not address the use of AI in higher education for teaching, learning, and research activities, except for a few rules regarding the assessment of students, because the European legislator is mainly concerned with specific activities that could affect the rights of the individual.

Nor does Romanian national law regulate the use of AI systems in higher education for this kind of activity. Considering the relationship between the domestic and supranational norms, the binding nature of Regulation No. 2024/1689, its direct applicability, and its primacy in the Member States, it follows that it covers the field of use of AI in university education only in terms of the aspects regulated.

European norms impose conditions and limitations on the implementation and use of AI systems only with respect to access to university education, the assessment of learning outcomes, the appropriate level of training a person can acquire, and the supervision and identification of prohibited student behavior during evaluation. The aim is to prevent the technology, especially when not correctly built and used, from unduly influencing the development and educational trajectory of its recipients. Such a situation may affect the right to education and training and lead to discrimination, with negative consequences for the individual's ability to carry out a professional activity or fulfill obligations in an employment relationship, thereby affecting their existence.

The guarantees required by the European norm are intended to ensure respect for fundamental human rights and for the idea that humans must be at the center of social life, assisted and helped by technology rather than harmed by it; the tools created by humans are intended to help them develop, not to become a source of dependence, or even addiction, or loss of autonomy.

For the European legislator, access to education must be ensured without discrimination and without allowing an AI system that does not benefit from the guarantees imposed by the Regulation to decide who can access a higher education institution or how candidates are allocated to different forms of education. According to the public order rules in the Regulation, the assessment of learning outcomes or the appropriate level of education that a person can access in an educational institution cannot become the attribute of AI systems if they do not meet the multiple requirements provided by the Regulation for high-risk systems. Similarly, the use of AI for recording and detecting student behavior during an assessment activity must be accompanied by proof of compliance with all conditions relating to its placing on the market and its appropriate use.

No restrictive conditions are imposed on AI systems when they present a low degree of risk, apart from the obligation of transparency and, in some instances, of marking the results, with the provider ensuring that the technical solutions are effective and reliable.

Moreover, the European norm allows a system with a high degree of risk to receive a lower qualification where the respective AI system does not present a significant risk of harming the values protected by the law. These situations are related to the extent to which AI participates in decision-making, in the preparation of material, and in the performance of a work. The hypotheses provided in Art. 6 para. 3 of the Regulation reveal that in all these cases AI participation is minimal, its activity is not decisive, and it is not essential for the result. Applying the rule of teleological interpretation, considering the purpose of the norm, corroborated with a systematic reading of the paragraphs, it follows that the degree of risk is linked to the mention of an AI system in Annex III of the Regulation but also to its contribution to the final decision-making process. In other words, if the AI system is connected to the procedures for admission to the educational institution, the assessment of learning outcomes, or monitoring during exams and tests, it does not have to meet all the requirements imposed for a high-risk system if the decision ultimately belongs to a person or committee, or if the contribution of the technology concerns only the collection of data, the creation of a database, or its organization, without a decision on the intended outcome.

In this context, it follows that a specific system is categorized at a certain level of risk depending on the fields or activities in or for which it is used. Hence, an AI system presents a high degree of risk if it is used in education only in the hypotheses mentioned above, included in Annex III, point 3 of the Regulation, while AI systems such as ChatGPT, Bard, Gemini, LLaMA, Mistral, and DALL-E fall within a limited degree of risk if they are used as aids for writing, proofreading, translation, resource identification, summarization, or conversation.

Beyond the European provisions, which do not specifically address higher education-related learning and research activities and briefly regulate the use of AI in student assessment, it is necessary to establish the scope of application of an AI system in the university environment when used in teaching, learning, and research activities.

At this point, it should be noted that Romanian higher education law has not regulated the use of AI in academic education. It provides rules for carrying out teaching, learning, and research activities which, although ethical and moral in nature, fall within the category of legal norms. Art. 168 and Art. 174 para. 3 of Law no. 199/2023 provide rules regarding teaching and research activity that were initially considered ethical or moral norms. They are transformed into legal norms with a specific structure, necessarily providing for the sanction that may apply in case of violation. Although it continues to use the notion of deontological and ethical norms, the Romanian legislator determines their legal regime within the scope of the law, provides for the possible sanctions that can be applied, and, in Art. 171-173, provides a procedure to be followed for investigating deviations from these rules.

The assessment of facts that could constitute violations of deontological and ethical rules in activities specific to higher education is no longer left to the discretion of the entities providing education. The legislator chose to apply a mandatory regulation, implicitly uniform for all universities.

This approach of transforming the ethical norm into a legal one, in the true sense of the notion, is justified by the need to protect the authentic values of higher education and not to allow their violation or severe damage due to the lack of a regulation backed by coercive force.

At the same time, in Art. 161 para. 2 letter d) of Law no. 199/2023, the legislator allows universities to develop, through their own regulations, aspects related to the violation of ethical and deontological rules, as it considers that the law covers only the core of the matter. Any other elements not provided by the mandatory legal norm but related to the protection of values in higher education can be regulated by the entities providing education.

This power of universities is also grounded in the idea of university autonomy. Every entity may regulate as it deems necessary, provided it does not exceed the limits of the norm with higher legal force.

Given that a university charter or a regulation drafted by a higher education institution is a law only in the broad sense, belongs to a local-level entity, and has a lower legal force than normative acts drawn up by legislative authorities at the state or supranational level, it must not exceed the limits set out in acts with higher legal force; it must comply with them, neither contradicting them nor weakening their application. According to Art. 78 and Art. 81 of Law no. 24/2000 (Law on Legislative Technical Norms for the Development of Normative Acts, 2000), in the matter of issuing acts with the force of law or binding on an indeterminate number of persons, a basic rule requires that, in the system of normative acts, the one with a lower rank must respect the provisions of the one with a higher rank.

It is therefore necessary first to observe what the higher norm regulates, and what aspects it does not refer to even indirectly. Then, it must be determined what conditions and what express limitations or restrictions it brings to the field of interest, what the nature of its norms is, whether of public or private order, and whether the authorities or entities that develop subsequent norms, whether implementing norms or norms of lower legal force, are allowed to exercise a degree of discretion.

On this basis, the analysis of supranational provisions on the use of AI in education demonstrates acceptance of the use of AI in education, subject to certain limitations and conditions provided to protect the human person and their fundamental rights, an aspect that reveals the spirit of the regulation.

It follows that the scope of legal relations arising in connection with the use of AI in higher education teaching, learning, and research activities remains at the disposal of the domestic legislator. Since the domestic legislator does not regulate them either, it must be concluded that AI and its use in these types of activities specific to higher education do not benefit from any regulation that would, above all, constitute a model of conduct and help those interested to use it correctly and without negative consequences.

In addition to the argument of the need to discipline individual conduct by developing binding legal norms to avoid negative consequences for the users themselves, it must also be remembered that regulation comes with sanctions. These also warn the recipient about the conduct he is required to follow and indirectly influence his decision because, being aware of the possible negative consequences, he will choose to comply with the norm.

Specific legal sanctions such as nullity, forced adaptation of the agreement between the parties, and compensation for damage may be adopted in this matter. For example, a member of the academic community who violates a specific rule regarding the use of AI in research activities to develop a work needed either for the completion of studies or to fulfill the obligation to carry out research may suffer the consequence of being denied the right to invoke that work, which will thus be deprived of the desired legal effects. Or, in the case of an examination, depending on the seriousness of the act, the author may be subject to a different evaluation from the one announced, one that determines the real level of knowledge of the person being evaluated. If the violation of the rule was not noticed in time and other negative consequences occurred, for example, a breach of another person's copyright, compensation for the damage suffered by the latter may come into question.

If the use of AI in education raises a series of concerns, especially of an ethical nature (Balalle & Pannilage, 2025; Chisega-Negrila, 2024), it means that this tool cannot become an exclusive one in the conduct of university activity; accordingly, the intention of the legislator set out in the Preamble of the European regulation is oriented towards qualifying AI only as an aid or support in education.

The mandatory European regulation does not explicitly address the academic ethics challenges that the use of AI in education may cause. Still, it offers a way forward by establishing conditions for the limited use of AI systems, depending on the risk to essential values related to the life and development of the human individual, with specific issues remaining at the discretion of the entities providing education.

However, universities are cautious in developing mandatory regulations for participants in education that show how an AI system can be used, its consequences, and its limitations. For many reasons connected to the need to understand AI tools, universities usually provide guidelines and recommendations, not binding rules. Some European universities indicate to their academic communities whether and how they may use AI in specific activities, especially in research, and provide clear explanations regarding the limits of use, while others only offer guidance and generally do not impose rules of conduct in this area, limiting themselves to showing that consequences may otherwise occur in the field of academic ethics.

But, as argued by researchers (Balalle & Pannilage, 2025; Gimpel, 2023), such an approach is necessary. Users must understand the risk that non-compliant, uncontrolled use can pose both to the university and to the individual's future professional, research, and personal standing, so that they discipline their conduct accordingly.

An exclusively legal approach, limited to the development of norms, is not enough to ensure a coherent framework and the efficient application of AI systems in university education; it is necessary to think of a complex policy that includes pedagogical and technical aspects (Chan, 2023).

However, rules are needed in any democratic society, as they guarantee uniform action by individuals in the regulatory field to prevent certain legal subjects from being discriminated against or disadvantaged, mainly since one of the values identified in the specialized literature is represented by ensuring equal access to resources and fair education (McGrath, 2023).

Some of the main problems are that the use of AI without measure, discernment, or rules can cause the loss of individual autonomy, confusion between the virtual and the real, and misrepresentation of reality based on information provided by AI that is not always accurate or correct (Atlas, 2019; Lo, 2023), as well as violations of ethical norms (An et al., 2024), loss of data confidentiality (Vakulov, 2023), and copyright infringement (Al-Busaidi, 2024).

The irresponsible use of AI can eliminate critical thinking, reflection on reality, or the situations in which the individual finds himself. The concept of learning can be lost, as the student, the young person, knows that they have a tool at hand that will answer their questions and solve their problems, so they no longer need to learn. If intellectual effort is no longer made, cognitive capacity is reduced, classical values lose their importance, fall into derision, and humanity risks becoming a consumerist tool whose trajectory is dictated by a minority that maintains control over technology, or worse, even by technology itself.

This is a bleak outlook, contrary to the purpose of a university, if the risks are not seen; on the other hand, a significant loss occurs if it is not understood how much help AI systems can offer. Therefore, minimal regulation at the university level could limit the tendency to rely too heavily on technology rather than on one's own strengths.

How can it be regulated?

Having demonstrated the need to develop regulations at the university level, it is necessary to analyze what the content of mandatory rules in a university regulation may be, rules that do not represent simple recommendations, guidelines, or help in use, but legal norms.

In the context of accepting the use of AI by those involved in the act of education, the essential aspects that must be provided for in an internal regulation concern: the accepted limits of the use of AI, the consequences, the literacy or training of teachers and students on the understanding and use of AI systems, respect for the rights of third parties, compliance with the rules of academic ethics.

Regulations on the organization of undergraduate, master's, doctoral, or professional studies, which specify the criteria for evaluating and grading students, may establish a minimum legal regime for using AI systems. General regulations may guide students to adapt their behavior to the efficient and purposeful use of this technology, while teachers may become proficient users of these systems (Gimpel, 2023; Luckin et al., 2022) and learn how they work so as to become reliable guides for students in educational activity using AI systems.

First, the conditions under which AI can be used must be clarified.

The Romanian Higher Education Law (no. 199/2023) refers in particular to regulations that ensure compliance with ethical rules in the activity carried out by participants in the academic environment, especially where the originality of works is in question, but does not expressly address AI systems, which is also explained by the fact that, at the time of its adoption, Europe was still discussing how to regulate this area.

If the use of an AI system for the preparation of a specific work produces legal consequences, since the work must be presented in the student's assessment activity, it may represent a violation of ethical norms requiring that works be original and not violate the rules of university ethics, that is, that they represent the work of that student (Art. 18 letter a) in relation to Art. 168 para. 1 letter g) of Law no. 199/2023). This means that the use of the systems in question must be limited.

Equally, if a professor uses an AI system to prepare teaching materials for his specific activity and ends up declaring these in the research activity imposed by the university, there is a risk of violating ethical rules requiring the paper to be the fruit of the work of the natural person and not of an AI system, even if, when the material is prepared by AI, it matters how the questions were asked and what requirements were set for the system to respond.

Thus, the University Charter could stipulate that the use of AI systems by those involved in the educational act (teaching staff, students, auxiliary teaching and research staff) is permitted under the conditions established in the regulation for the organization of each study cycle and without violating the norms of university ethics. Furthermore, the regulation for the organization of university studies should include a text that provides for the possibility of using AI systems only as a helping, complementary tool, teaching or learning assistant, which, consequently, is not sufficient for the development of scientific research, for the preparation of papers, exams and for the acquisition of the skills sought in the respective specialization.

The rule on how to use AI systems could have the following content:

The AI system can only be used as a complementary tool, teaching or learning assistant, without being able to serve exclusively for the development of a paper, assignment, project, material, or result that must be presented for the evaluation of the teaching staff or the student, for grading or graduation.

The AI system can be used for the following types of activities without being limited to them: developing different forms in which a particular idea can be expressed, identifying information and data on a specific subject or arguments in support of an idea, summarizing materials, correcting and improving the text, ensuring cohesion in a working group, generating tests and questions to verify knowledge, providing personalized feedback and determining the level of expertise, training for an interview, exam, improving teaching methods.

Regarding research activity, AI systems, depending on the specifics of each one, can present research themes or ideas, identify materials with source information for citation from various databases and archives, translate the created text, write code, improve the writing, the language used, carry out reviews, generate specialized articles, text with the technical and specialized aspects necessary to be presented in a project proposal or research grant.

In the context of a total lack of regulation, academic integrity suffers, on the one hand, because such a capacity of AI systems can lead to fraud, since the result does not represent the work of the one who uses and declares it, and, on the other hand, because the existing tools for detecting plagiarism or identifying the artificial nature of a text have been outpaced by the massive evolution of AI systems.

Another problem arises from the data used for training: an AI system can provide wrong or false answers if it has not been trained to reason by corroborating or systematically analyzing data to reach a specific result.

In this context, the text regarding the use of AI systems in research could have the following content:

In research activity, teaching staff, researchers, and students can use AI systems as support to identify necessary materials, organize, systematize, and translate them, and improve writing or language, without declaring as their own material a result obtained exclusively from the use of an AI system with minimal intervention by the researcher, under the sanctions provided by law for violating ethical norms.

Also, the teacher or student will have to disclose the use of an AI system for research or for preparing a paper required for evaluation, regardless of whether the assessment is partial or final, during the semester, at the end of the year, or during studies.

The norm could have the following content:

The use of an AI system for preparing a paper, assignment, project, material, or research result that must be presented to produce consequences in the assessment determines the obligation to declare such use.

When AI is used for training or learning or for preparing materials necessary for assessment, grading, and graduation, the regulation should provide for the adaptation of the examination or evaluation method, especially since this is possible given that the higher education law provides that examinations must preferably be conducted face-to-face (Art. 32 para. 2 of Law no. 199/2023).

The text could have the following content:

In the event of a declaration of significant use of an AI system for the preparation of a paper, or if such use is detected in the absence of a declaration, the evaluation will target topics that provoke critical thinking, analysis, interpretation, and heuristic discussion, so as to reveal the authentic preparation of the student.

The regulation may allow the university to use AI detection tools for automatically generated content (Smodin, Originality AI, Copyleaks, GPTZero, Writer AI Content Detector) to identify deviations from ethical norms.

Understanding how the system works, together with frequent use, makes it possible to use it effectively rather than be controlled by it. Hence, teachers are the first who must adapt to these new realities. Students must also know how AI systems work, that they have certain limits and can make mistakes, and what kinds of errors they can commit in the course of use.

Therefore, the obligation of teachers and students to train themselves must be provided for, with the support of the entities where they work: first, literacy in the AI systems proposed by the educational institution for use, as well as in the open ones available on the market, and then continuous improvement in the way these systems are used.

The content of the norm could be as follows:

Teachers and students are required to participate in training sessions on the use of AI systems proposed by the higher education institution and those available to the public, which can be used in teaching activities, in the individual preparation of the student, or for the final examination/evaluation.

The regulation must also consider the fact that AI systems may pose risks, on the one hand, to the security of personal data or to the copyright or other rights of third parties and, on the other hand, to the person using them if he unconditionally trusts the information received. If the information is not checked, to discern what is valid and valuable from what is imaginary and to identify misinterpretations, incompleteness, and incoherence caused by the system's insufficient training, the responsibility must lie with the user (Gimpel, 2023).

The norm could have the following content:

The user of an AI system in connection with the activity of teaching, research, and learning must ensure that he does not violate the confidentiality of personal data, copyright, or fundamental rights of another person through the information he provides to the system.

The user of data provided by the AI system, which he includes in his specific activity and which he uses for any evaluation, bears full responsibility for the correctness of the respective data and for violating the norms of academic ethics.


CONCLUSIONS

The specialized literature has examined the role of AI in education from both educators’ and students’ perspectives. In the initial phases of adoption, certain countries exhibited reluctance to integrate AI, often due to limited familiarity and a preference for traditional pedagogical approaches. While educators initially prioritized direct interaction and personal engagement, many eventually acknowledged AI’s potential to enhance instruction and personalize learning experiences. Students, by contrast, adopted AI more rapidly for academic tasks, attracted by its efficiency and time-saving advantages, though concerns were raised regarding diminished personal involvement and the erosion of critical thinking skills. The expanding use of AI has also highlighted significant ethical concerns, including data privacy risks, moral dilemmas, academic misconduct, and challenges in detecting AI-generated content. Nonetheless, AI offers considerable advantages, such as improving interactivity, personalizing learning pathways, supporting assessment processes, and streamlining administrative operations. The literature consistently emphasizes that AI should serve as a complement to, rather than a replacement for human involvement, requiring responsible integration through comprehensive training, adapted evaluation methods, and continuous collaboration between educators and students.

European higher education institutions have adopted a permissive approach to AI use, emphasizing the responsible integration of AI rather than outright prohibition. They develop guidelines that ensure transparency, ethical use, and accountability in teaching, learning, and research activities. Most institutions allow AI to support learning, personalize education, assist in language correction, or aid in research preparation but usually restrict its use in assessments that are meant to reflect individual work. Disclosure of AI use is widely required, and users are held fully responsible for the content generated. Key concerns addressed by these policies include academic integrity, protection of personal data, respect for intellectual property, prevention of plagiarism, and the promotion of independent critical thinking. Institutions also emphasize the need to adapt teaching and assessment methods, train both staff and students on the proper use of AI and maintain human expertise at the center of academic work. Although implementation varies across institutions, there is broad consensus that while AI offers valuable support, it must be used transparently, ethically, and with full awareness of its limitations and risks.

AI systems offer significant advantages in academic work, including generating research ideas, identifying sources, translating texts, writing code, refining language and writing style, conducting reviews, and creating specialized content for projects or grant proposals. However, their uncontrolled and uncritical use poses considerable risks. These include the loss of individual autonomy, confusion between the virtual and real worlds, the dissemination of inaccurate information, ethical violations, breaches of data confidentiality, and copyright infringement. Excessive reliance on AI may erode critical thinking, reflection, and genuine learning, as individuals may avoid intellectual effort, leading to diminished cognitive abilities. Traditional educational values risk being undermined, reducing individuals to passive consumers influenced by a technological elite or even by autonomous AI systems.

Under EU law, particularly the Artificial Intelligence Act (Regulation (EU) 2024/1689), AI systems are classified according to the level of risk they pose, which is primarily determined by their intended use. When AI systems engage in high-risk activities, such as student admissions, assessment, grading, or behavioral monitoring, they are subject to strict compliance obligations due to their potential impact on fundamental rights and educational outcomes. Despite this structured risk-based approach, the EU legal framework remains insufficient in specifically addressing the use of AI in core educational processes such as teaching, learning, and research. The current legislation focuses primarily on administrative and assessment-related uses, without comprehensively regulating the complex and evolving role of AI within the academic environment.

As centers of education and institutions entrusted with the mission of generating, certifying, and disseminating knowledge within the social, economic, and cultural spheres, universities must take an active role in anticipating challenges and developing their own ethical and regulatory frameworks. In doing so, they become key actors in defining standards of responsibility, transparency, and academic integrity tailored to the specific requirements of education and research processes. They cannot limit themselves to merely complying with European regulations on the use of AI, which remain insufficient for the specific activities of higher education, especially teaching, learning, and research; instead, they must exercise normative authority within the boundaries established by the applicable higher legal framework. Through this proactive approach, universities can ensure the balanced integration of AI, safeguard the fundamental values of education, and contribute to the formation of competent and ethically responsible professionals, thus supporting societal progress and well-being.

The adoption of binding regulations at the university level is crucial for guiding the responsible use of AI in higher education, ensuring that both teachers and students uphold academic integrity and ethical standards. Students, as the future of society, must be educated not only in knowledge but also in responsibility and discernment in using innovative technologies. The legal norm serves as a model of conduct, disciplining behavior, providing guidance for appropriate action, preventing misuse, and ensuring certainty and security. Far from discouraging critical thinking, regulation stimulates it, as any norm can be subject to critique and analysis regarding its usefulness, clarity, and predictability. These processes contribute to preserving the fundamental mission of higher education: to train professionals capable of making significant contributions to societal development while maintaining personal integrity and intellectual independence in an increasingly digitalized and AI-driven world.

Given the limitations of existing legislation, this study emphasizes the need for universities to establish binding internal regulations governing the use of AI in teaching, learning, and research. The regulatory framework proposed herein stipulates that AI systems should serve strictly as complementary tools, supporting tasks such as information retrieval, language enhancement, test generation, feedback provision, and material organization, without substituting the essential intellectual effort required for academic evaluations and research outputs. Full disclosure of AI use must be mandatory in all assessment activities to ensure transparency and academic integrity. Where significant AI involvement is identified, adapted evaluation formats emphasizing critical thinking and independent reasoning should be implemented. Moreover, universities should require systematic training for both faculty and students to promote informed and responsible use of AI while actively addressing associated risks, including data privacy breaches, copyright violations, and content accuracy issues. Additionally, the responsibility for verifying the validity of AI-generated information and ensuring compliance with academic ethics should remain entirely with the users who integrate such content into their academic work.

Thus, AI represents a valuable tool in higher education, provided it is subject to responsible regulation and remains subordinate to the fundamental objectives of education: the development of knowledge, the cultivation of critical thinking, the preservation of personal integrity, and the preparation of competent and autonomous professionals.


REFERENCES

Al-Busaidi, A. S., Raman, R., Hughes, L., Albashrawi, M. A., Malik, T., Dwivedi, Y. K., Al- Alawi, T., AlRizeiqi, M., Davies, G., Fenwick, M., Gupta, P., Gurpur, S., Hooda, A., Jurcys, P., Lim, D., Lucchi, N., Misra, T., Raman, R., Shirish, A., & Walton, P. (2024). Redefining boundaries in innovation and knowledge domains: Investigating the impact of generative artificial intelligence on copyright and intellectual property rights. Journal of Innovation & Knowledge, 9(4), 100630. https://doi.org/10.1016/j.jik.2024.100630

An, Q., Yang, J., Xu, X., Zhang, Y., & Zhang, H. (2024). Decoding AI ethics from Users’ lens in education: A systematic review. Heliyon, 10(20), e39357. https://doi.org/10.1016/j.heliyon.2024.e39357

Atlas, S. (2023). ChatGPT for Higher Education and Professional Development: A Guide to Conversational AI. https://digitalcommons.uri.edu/cba_facpubs/548

Baidoo-Anu, D., & Owusu Ansah, L. (2023). Education in the Era of Generative Artificial Intelligence (AI): Understanding the Potential Benefits of ChatGPT in Promoting Teaching and Learning. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4337484

Balalle, H., & Pannilage, S. (2025). Reassessing academic integrity in the age of AI: A systematic literature review on AI and academic integrity. Social Sciences & Humanities Open, 11, 101299. https://doi.org/10.1016/j.ssaho.2025.101299

Barus, O. P., Hidayanto, A. N., Handri, E. Y., Sensuse, D. I., & Yaiprasert, C. (2025). Shaping generative AI governance in higher education: Insights from student perception. International Journal of Educational Research Open, 8, 100452. https://doi.org/10.1016/j.ijedro.2025.100452

Basic, Z., Banovac, A., Kruzic, I., & Jerkovic, I. (2023). ChatGPT-3.5 as writing assistance in students’ essays. Humanities and Social Sciences Communications, 10, 750. https://doi.org/10.48550/arXiv.2302.04536

Bin-Nashwan, S. A., Sadallah, M., & Bouteraa, M. (2023). Use of ChatGPT in academia: Academic integrity hangs in the balance. Technology in Society, 75, 102370. https://doi.org/10.1016/j.techsoc.2023.102370

Bozkurt, A. (2023). Generative artificial intelligence (AI) powered conversational educational agents: The inevitable paradigm shift. Asian Journal of Distance Education, 18(1). https://www.asianjde.com/ojs/index.php/AsianJDE/article/view/718

Meniado, J. C. (2023). The Impact of ChatGPT on English Language Teaching, Learning, and Assessment: A Rapid Review of Literature. Arab World English Journal, 14(4), 3–18. https://doi.org/10.24093/awej/vol14no4.1

Chan, C. K. Y. (2023). A comprehensive AI policy education framework for university teaching and learning. International Journal of Educational Technology in Higher Education, 20(1), 38. https://doi.org/10.1186/s41239-023-00408-3

Chisega-Negrila, A.-M. (2024). Predarea și învățarea într-o lume a inteligenței artificiale [Teaching and learning in a world of artificial intelligence]. Buletinul Universității Naționale de Apărare „Carol I”, 13(3), 59–71. https://doi.org/10.53477/2065-8281-24-22

Chiu, T. K. F. (2024). Future research recommendations for transforming higher education with generative AI. Computers and Education: Artificial Intelligence, 6, 100197. https://doi.org/10.1016/j.caeai.2023.100197

Comenius University of Bratislava. (2024). Internal Regulations on the Use of Artificial Intelligence Tools. https://www.jfmed.uniba.sk/fileadmin/jlf/Dekanat/studijne-oddelenie/Akreditacia_2022/Akreditacia_2023/Internal_Regulation_No_2_Volume_2024.pdf

Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2024). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 61(2), 228–239. https://doi.org/10.1080/14703297.2023.2190148

Delva Benavides, J. E., & Gonzalez Lopez, I. S. (2022). Venta sexual digital: las redes sociales y su regulación internacional [Digital sex sale: Social networks and their international regulation]. JURÍDICAS CUC, 18(1). https://doi.org/10.17981/juridcuc.18.1.2022.11

Dong, L., Tang, X., & Wang, X. (2025). Examining the Effect of Artificial Intelligence in Relation to Students’ Academic Achievement in Classroom: A Meta-Analysis. Computers and Education: Artificial Intelligence, 100400. https://doi.org/10.1016/j.caeai.2025.100400

DTU University Denmark. (2024). DTU opens up for the use of artificial intelligence in teaching. https://www.dtu.dk/english/newsarchive/2024/01/dtu-opens-up-for-the-use-of-artificial-intelligence-in-teaching

Dutu, M. (2025, January 14). Towards the fourth category of human rights, human rights regarding artificial intelligence? For an international pact on new rights for the AI era. UJ Premium. https://www.universuljuridic.ro/spre-cea-de-a-patra-categorie-de-drepturi-ale-omului-drepturile-umane-privind-inteligenta-artificiala-pentru-un-pact-international-relativ-la-noile-drepturi-ale-erei-ia/

Elsen-Rooney, M. (2023, January 4). NYC education department blocks ChatGPT on school devices, networks. https://www.chalkbeat.org/newyork/2023/1/3/23537987/nyc-schools-ban-chatgpt-writing-artificial-intelligence/

Erasmus University Rotterdam. (2024). Generative AI in education. https://www.eur.nl/en/about-university/vision/community-learning-and-innovation/lecturer-professionalisation/genai-education

European Commission. (2020). White Paper On Artificial Intelligence. https://op.europa.eu/en/publication-detail/-/publication/ac957f13-53c6-11ea-aece-01aa75ed71a1

European Parliament. (2021). REPORT on artificial intelligence in education, culture and the audiovisual sector. https://www.europarl.europa.eu/doceo/document/A-9-2021-0127_EN.html

European University Association. (2023, February 14). Artificial intelligence tools and their responsible use in higher education learning and teaching [EUA Position]. https://www.eua.eu/publications/positions/artificial-intelligence-tools-and-their-responsible-use-in-higher-education-learning-and-teaching.html

Faculty of Education and Psychology Budapest. (2023). Using Content Generating Artificial Intelligence at the Faculty of Education and Psychology. https://www.ppk.elte.hu/media/21/72/a72cb868ca869721454aa7f40829e0db7a89593a0b7ec6290a47782d40e6/MI%20kari%20iranymutatas%20EN.pdf

Forman, N., Udvaros, J., & Avornicului, M. S. (2023). ChatGPT: A new study tool shaping the future for high school students. International Journal of Advanced Natural Sciences and Engineering Researches, 7(4), 95–102. https://doi.org/10.59287/ijanser.562

Freie Universitat Berlin. (2024). Guideline for Dealing with Artificial Intelligence. https://www.berlin-universities-publishing.de/en/ueber-uns/policies/ki-leitlinie/index.html

Gimpel, H., Hall, K., Decker, S., Eymann, T., Lämmermann, L., Mädche, A., Röglinger, M., Ruiner, C., Schoch, M., Schoop, M., Urbach, N., & Vandirk, S. (2023). Unlocking the Power of Generative AI Models and Systems such as GPT-4 and ChatGPT for Higher Education [Discussion paper]. https://www.econstor.eu/bitstream/10419/270970/1/1840457716.pdf

Grassini, S. (2023). Shaping the Future of Education: Exploring the Potential and Consequences of AI and ChatGPT in Educational Settings. Education Sciences, 13(7), 692. https://doi.org/10.3390/educsci13070692

Hadi Mogavi, R., Deng, C., Juho Kim, J., Zhou, P., D. Kwon, Y., Hosny Saleh Metwally, A., Tlili, A., Bassanelli, S., Bucchiarone, A., Gujar, S., Nacke, L. E., & Hui, P. (2024). ChatGPT in education: A blessing or a curse? A qualitative study exploring early adopters’ utilization and perceptions. Computers in Human Behavior: Artificial Humans, 2(1), 100027. https://doi.org/10.1016/j.chbah.2023.100027

Hanken University Finland. (2023). Guidelines for the use of AI in teaching and learning. https://www.hanken.fi/sites/default/files/2023-05/guidelines_for_the_use_of_ai_in_teaching_and_learning.pdf

Harari, Y. N. (2018). 21 lessons for the 21st century (pp. 65–66). Polirom.

Heaven, W. D. (2023, April 6). ChatGPT is going to change education, not destroy it. MIT Technology Review. https://www.technologyreview.com/2023/04/06/1071059/chatgpt-change-not-destroy-education-openai/

High-Level Expert Group on Artificial Intelligence. (2019). Ethics Guidelines for Trustworthy Artificial Intelligence. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

Hsu, C.-F., Mudiyanselage, S. P. K., Agustina, R., & Lin, M.-F. (2025). Basic emotion detection accuracy using artificial intelligence approaches in facial emotions recognition system: A systematic review. Applied Soft Computing, 172, 112867. https://doi.org/10.1016/j.asoc.2025.112867

Huang, A. Y. Q., Lu, O. H. T., & Yang, S. J. H. (2023). Effects of artificial Intelligence–Enabled personalized recommendations on learners’ learning engagement, motivation, and outcomes in a flipped classroom. Computers & Education, 194, 104684. https://doi.org/10.1016/j.compedu.2022.104684

Huruba, E. (2023). Artificial intelligence, a European concern. Romanian Journal of Forced Execution, (3), 24–31. https://www.ceeol.com/search/article-detail?id=1244194

Jin, Y., Yan, L., Echeverria, V., Gašević, D., & Martinez-Maldonado, R. (2025). Generative AI in higher education: A global perspective of institutional adoption policies and guidelines. Computers and Education: Artificial Intelligence, 8, 100348. https://doi.org/10.1016/j.caeai.2024.100348

Khalil, M., & Er, E. (2023). Will ChatGPT get you caught? Rethinking of Plagiarism Detection. arXiv. https://arxiv.org/pdf/2302.04335

Kirillova, E. A., Zulfugarzade, T. E., Blinkov, O. E., Serova, O. A., & Mikhaylova, I. A. (2021). Prospects for developing the legal regulation of digital platforms. JURÍDICAS CUC, 18(1). https://doi.org/10.17981/juridcuc.18.1.2022.02

Kovari, A. (2025). A systematic review of AI-powered collaborative learning in higher education: Trends and outcomes from the last decade. Social Sciences & Humanities Open, 11, 101335. https://doi.org/10.1016/j.ssaho.2025.101335

Law of Higher Education, Pub. L. No. 199, Official Monitor no. 614 of July 5, 2023 (2023). https://legislatie.just.ro/Public/DetaliiDocument/271898

Law on Legislative Technical Norms for the Development of Normative Acts, Pub. L. No. 24, Republished Official Monitor No. 260 of April 21, 2010 (2000). https://legislatie.just.ro/public/detaliidocument/21698

Lee, D., Arnold, M., Srivastava, A., Plastow, K., Strelan, P., Ploeckl, F., Lekkas, D., & Palmer, E. (2024). The impact of generative AI on higher education learning and teaching: A study of educators’ perspectives. Computers and Education: Artificial Intelligence, 6, 100221. https://doi.org/10.1016/j.caeai.2024.100221

Lim, W. M., Gunasekara, A., Pallant, J. L., Pallant, J. I., & Pechenkina, E. (2023). Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators. The International Journal of Management Education, 21(2), 100790. https://doi.org/10.1016/j.ijme.2023.100790

Lo, C. K. (2023). What Is the Impact of ChatGPT on Education? A Rapid Review of Literature. Education Sciences, 13(4), 410. https://doi.org/10.3390/educsci13040410

Luckin, R., Cukurova, M., Kent, C., & du Boulay, B. (2022). Empowering educators to be AI-ready. Computers and Education: Artificial Intelligence, 3, 100076. https://doi.org/10.1016/j.caeai.2022.100076

Lyu, W., Zhang, S., Chung, T., Sun, Y., & Zhang, Y. (2025). Understanding the practices, perceptions, and (dis)trust of generative AI among instructors: A mixed-methods study in the U.S. higher education. Computers and Education: Artificial Intelligence, 8, 100383. https://doi.org/10.1016/j.caeai.2025.100383

McGrath, C., Cerratto Pargman, T., Juth, N., & Palmgren, P. J. (2023). University teachers’ perceptions of responsibility and artificial intelligence in higher education - An experimental philosophical study. Computers and Education: Artificial Intelligence, 4, 100139. https://doi.org/10.1016/j.caeai.2023.100139

Meyer, J. G., Urbanowicz, R. J., Martin, P. C. N., O’Connor, K., Li, R., Peng, P.-C., Bright, T. J., Tatonetti, N., Won, K. J., Gonzalez-Hernandez, & Moore, J. H. (2023). ChatGPT and large language models in academia: opportunities and challenges. BioData Mining, 16(1), 20. https://doi.org/10.1186/s13040-023-00339-9

Moise, G., & Nicoară, E. S. (2024). Ethical aspects of automatic emotion recognition in online learning. In Ethics in Online AI-based Systems (pp. 71–95). Elsevier. https://doi.org/10.1016/B978-0-443-18851-0.00003-2

Moldovan, X. (2025). The relationship between artificial intelligence and legal regulation, UJ Premium, February 19, 2025. https://www.universuljuridic.ro/relatia-dintre-inteligenta-artificiala-si-reglementarea-legala/2/

Newport, C. (2024, October 3). What Kind of Writer Is ChatGPT? The New Yorker. https://www.newyorker.com/culture/annals-of-inquiry/what-kind-of-writer-is-chatgpt

Ou, A. W., Stöhr, C., & Malmström, H. (2024). Academic communication with AI-powered language tools in higher education: From a post-humanist perspective. System, 121, 103225. https://doi.org/10.1016/j.system.2024.103225

Oxford University. (2025). Use of generative AI tools to support learning. https://www.ox.ac.uk/students/academic/guidance/skills/ai-study

Parambil, M. M. A., Rustamov, J., Ahmed, S. G., Rustamov, Z., Awad, A. I., Zaki, N., & Alnajjar, F. (2024). Integrating AI-based and conventional cybersecurity measures into online higher education settings: Challenges, opportunities, and prospects. Computers and Education: Artificial Intelligence, 7, 100327. https://doi.org/10.1016/j.caeai.2024.100327

Pavlik, J. V. (2023). Collaborating With ChatGPT: Considering the Implications of Generative Artificial Intelligence for Journalism and Media Education. Journalism & Mass Communication Educator, 78(1), 84–93. https://doi.org/10.1177/10776958221149577

Pini, B., Dolci, V., Gianatti, E., Petroni, A., Bigliardi, B., & Barani, A. (2025). Artificial Intelligence as a Facilitator for Public Administration Procedures: A Literature Review. Procedia Computer Science, 253, 2537–2546. https://doi.org/10.1016/j.procs.2025.01.313

Rane, N. (2024). Enhancing the quality of teaching and learning through ChatGPT and similar large language models: Challenges, future prospects, and ethical considerations in education. TESOL and Technology Studies, 5(1), 1–6. https://doi.org/10.48185/tts.v5i1.1000

Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) (2024). https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng

Regulation on the Organization and Operation of Undergraduate University Studies (2024). https://upb.ro/wp-content/uploads/2022/11/Regulament-org-studii-univ-licenta_2024.pdf

Riga Stradins University. (2024). Artificial intelligence in higher education. https://www.rsu.lv/en/artificial-intelligence-higher-education

Russell Group. (2023). Principles on the use of generative AI tools in education, July 3, 2023. https://www.russellgroup.ac.uk/policy/policy-briefings/principles-use-generative-ai-tools-education

Saihi, A., Ben-Daya, M., Hariga, M., & As’ad, R. (2024). A Structural equation modeling analysis of generative AI chatbots adoption among students and educators in higher education. Computers and Education: Artificial Intelligence, 7, 100274. https://doi.org/10.1016/j.caeai.2024.100274

SDU University Denmark. (2024). The use of generative artificial intelligence (generative AI) at SDU. https://mitsdu.dk/en/mit_studie/naturvidenskabelige_uddannelser/vejledning-og-support/aipaasdu

Shehu, H. A., Browne, W. N., & Eisenbarth, H. (2025). Emotion categorization from facial expressions: A review of datasets, methods, and research directions. Neurocomputing, 624, 129367. https://doi.org/10.1016/j.neucom.2025.129367

Southworth, J., Migliaccio, K., Glover, J., Glover, J., Reed, D., McCarty, C., Brendemuhl, J., & Thomas, A. (2023). Developing a model for AI Across the curriculum: Transforming the higher education landscape via innovation in AI literacy. Computers and Education: Artificial Intelligence, 4, 100127. https://doi.org/10.1016/j.caeai.2023.100127

Stivers, S. (2018). AI and bias in university admissions, Perspectives, 2018. https://ism.edu/images/ismdocs/ISM-Perspectives-Magazine-Dec2018.pdf

Stöhr, C., Ou, A. W., & Malmström, H. (2024). Perceptions and usage of AI chatbots among students in higher education across genders, academic levels and fields of study. Computers and Education: Artificial Intelligence, 7, 100259. https://doi.org/10.1016/j.caeai.2024.100259

Swansea University. (2025). Policy on the use of artificial intelligence (AI) in student assessment. https://myuni.swansea.ac.uk/academic-life/academic-regulations/aqs-policies/policy-on-the-use-of-artificial-intelligence-ai-in-student-assessment/#ethical-use-of-ai=is-expanded

The Open University. (2025). Generative AI for students, Policy and Reports. https://about.open.ac.uk/policies-and-reports/policies-and-statements/gen-ai/generative-ai-students

Tzirides, A. O. (Olnancy), Zapata, G., Kastania, N. P., Saini, A. K., Castro, V., Ismael, S. A., You, Y., Santos, T. A. dos, Searsmith, D., O’Brien, C., Cope, B., & Kalantzis, M. (2024). Combining human and artificial intelligence for enhanced AI literacy in higher education. Computers and Education Open, 6, 100184. https://doi.org/10.1016/j.caeo.2024.100184

Ualzhanova, A., Zakirova, D., Tolymbek, A., Hernández García de Velazco, J. J., & Chumaceiro Hernandez, A. C. (2020). Innovative-entrepreneurial universities in the postmodern world concept: Possibilities of implementation. Entrepreneurship and Sustainability Issues, 8(1), 194–202. https://doi.org/10.9770/jesi.2020.8.1(12)

UNESCO. (2019). Global Convention on the Recognition of Qualifications concerning Higher Education. https://www.unesco.org/en/legal-affairs/global-convention-recognition-qualifications-concerning-higher-education?hub=70286

United Nations. (1966). International Covenant on Economic, Social and Cultural Rights. https://treaties.un.org/doc/treaties/1976/01/19760103%2009-57%20pm/ch_iv_03.pdf

Universitat Wien. (2025). AI in studies and teaching. https://studieren.univie.ac.at/en/studying-exams/ai-in-studies-and-teaching/

University of Antwerp. (2024). Guidelines for (Generative) Artificial Intelligence in Research at UAntwerp. https://acortar.link/Au3GB7

University of Cambridge. (2025). Blended Learning Service, Using generative AI. https://blendedlearning.cam.ac.uk/artificial-intelligence-and-education/using-generative-ai

University of Helsinki. (2025). Use of Generative Artificial Intelligence in Research. https://www.helsinki.fi/en/research/research-integrity/ai-research

University of Warsaw. (2025). Responsible Use of AI. https://www.wne.uw.edu.pl/en/research/research/responsible-use-ai

Univerzita Karlova. (2023). Artificial intelligence (AI) Recommendations for the academic members of staff of Charles University. https://ai.cuni.cz/AI-37-version1-ai_elearning_en.pdf

Uppsala Universitet. (2024). Regulations of AI - Laws and Guidelines. https://www.uu.se/en/staff/gateway/teaching/ai-in-teaching-and-learning/general-information-about-generative-ai/regulations-of-ai---laws-and-guidelines

Vakulov, A. (2023). Ethical considerations of using AI for academic purposes. https://www.unite.ai/ro/consideratii-etice-ale-utilizării-ai-în-scopuri-academice/

Vieriu, A. M., & Petrea, G. (2025). The Impact of Artificial Intelligence (AI) on Students’ Academic Development. Education Sciences, 15(3), 343. https://doi.org/10.3390/educsci15030343

Vilnius University. (2024). The Guidelines on Artificial Intelligence Usage at Vilnius University. https://www.vu.lt/site_files/Vertimai/EN_Translation_Dirbtinio_intelekto_naudojimo_Vilniaus_universitete_gairės.pdf

Visoiu, R. (2025, January 21). Artificial intelligence and moral rights, UJ Premium. https://www.universuljuridic.ro/inteligenta-artificiala-si-drepturile-morale/

Wang, H., Dang, A., Wu, Z., & Mac, S. (2024). Generative AI in higher education: Seeing ChatGPT through universities’ policies, resources, and guidelines. Computers and Education: Artificial Intelligence, 7, 100326. https://doi.org/10.1016/j.caeai.2024.100326

Yu, H. (2023). Reflection on whether Chat GPT should be banned by academia from the perspective of education and teaching. Frontiers in Psychology, 14. https://doi.org/10.3389/fpsyg.2023.1181712

Zabala Leal, T. D., & Zuluaga Ortiz, P. A. (2021). Los retos jurídicos de la inteligencia artificial en el derecho en Colombia. JURÍDICAS CUC, 17(1). https://doi.org/10.17981/juridcuc.17.1.2021.17

Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 39. https://doi.org/10.1186/s41239-019-0171-0

Zhang, Z., Chen, Z., & Xu, L. (2022). Artificial intelligence and moral dilemmas: Perception of ethical decision-making in AI. Journal of Experimental Social Psychology, 101, 104327. https://doi.org/10.1016/j.jesp.2022.104327


FINANCING

This article is part of the research plan Ethics and Academic Integrity in Higher Education, 2024-2025, within the Department of Law and Public Administration, Faculty of Economic and Administrative Sciences, National University of Science and Technology POLITEHNICA Bucharest, Romania.


CONFLICT OF INTEREST

The author declares that there is no conflict of interest.


BIODATA

Andreea Elena Tabacu. Ph.D. Associate Professor in the Department of Law and Administrative Sciences, Economic and Law School, University Center Pitesti of the National University of Science and Technology POLITEHNICA Bucharest, Romania.