Between Opportunity and Ordeal: Welcoming Artificial Intelligence Is Not Enough; We Must Understand It
Artificial intelligence did not enter education as a supplementary tool that can be ignored or used only when needed. It arrived as a profound transformation that forces us to rethink the very meaning of learning, our roles as educators and researchers, and our relationship with knowledge, language, and power. Between voices that celebrate AI as a comprehensive solution to all educational problems and those that reject it out of fear of losing the human role, I have found myself in a different space—a space of questioning rather than quick judgment.
Because of my work in education and language across multicultural contexts, I have never been able to view artificial intelligence as a neutral tool. Language itself is not neutral; it is imbued with history, culture, and assumptions about what counts as “correct” and “acceptable.” What, then, of a system that generates language automatically? From this perspective, it became clear to me that artificial intelligence is at once a genuine opportunity and an ethical and epistemological ordeal.
The opportunity becomes especially visible when we look at learners, particularly those who feel vulnerable or afraid of making mistakes. I have seen how some AI tools can reduce anxiety around writing, offer learners a starting point instead of a blank page, and create space for experimentation without immediate judgment. In multilingual environments, this initial support can make a real difference—between silence and participation, between withdrawal and persistence.
When used thoughtfully, these tools can also help educators rethink their practices and free up time to focus on human interaction, guidance, and deep discussion. In this sense, artificial intelligence carries undeniable empowering potential.
But this picture remains incomplete if we ignore the other side. Artificial intelligence does not operate in a vacuum, nor does it produce “pure” knowledge. It is built on data, and that data reflects dominant languages, specific cultural contexts, and predefined models of what counts as good knowledge or acceptable discourse. The real danger lies not in using the tool, but in using it without questioning it—without awareness of its limits and biases.
Here begins the ordeal: when the tool turns into an unquestioned epistemic authority; when learning is measured by the polish of generated text rather than the depth of thinking; and when the lure of quick solutions tempts both teachers and learners to bypass difficulty, reflection, doubt, and revision. At that moment, we do not merely lose a skill—we risk losing the human relationship with language as an act of thinking and self-expression.
In educational contexts, the most serious risk is the gradual silencing of the learner’s voice: becoming accustomed to accepting what is offered rather than reviewing it, challenging it, or simply saying, “This does not sound like me.” Artificial intelligence can support voice, but it can also replace it if learners are not taught to use it critically and consciously.
This is where it becomes clear that the role of the teacher has not diminished—it has become more complex. We are no longer only transmitters of knowledge or designers of activities; we are ethical mediators. We are the ones who set boundaries, frame questions, and teach learners that the tool is not a substitute mind, but a medium that requires human thinking and decision-making. Teaching how to use AI without teaching how to question it is a new form of educational neglect.
For women in education and research, this transformation carries a particular responsibility—not because we are required to prove our competence, but because we have long experience navigating systems that were not always designed with our voices in mind. This lived experience gives us heightened sensitivity to issues of voice, representation, and justice. Using artificial intelligence consciously is not a departure from this path; it is an extension of it.
The awareness I call for is not merely technical, but human and ethical. It is an awareness that asks: What do we gain? What might we lose? Who is represented in these texts, and who is excluded? And how do we ensure that language remains a space for thinking—not merely a final product delivered without accountability?
Sharing these questions is no longer optional for me. In moments of major transformation, silence does not signal neutrality; it allows decisions to be made without informed, critical voices. We need a discourse that neither glorifies technology nor fears it, but assigns it its proper place and returns the human being to the center of the scene.
In the end, artificial intelligence alone will not determine the future of education. The way we choose to engage with it, the boundaries we draw around its use, and the questions we insist on asking will determine whether it becomes a tool that expands human possibility—or another ordeal added to a long list of unexamined transformations.