index●comunicación
Revista científica de comunicación aplicada
nº 16(1) 2026 | Pages 13-35
e-ISSN: 2174-1859 | ISSN: 2444-3239
Received on 07/11/2025 | Accepted on 04/12/2025 | Published on 15/01/2026
https://doi.org/10.62008/ixc/16/01Artifi
Odiel Estrada Molina | Universidad de Valladolid
odiel.estrada@uva.es | https://orcid.org/0000-0002-0918-418X
Elvira G. Rincon-Flores | Tecnológico de Monterrey
elvira.rincon@tec.mx | https://orcid.org/0000-0001-5957-2335
Juanjo Mena | Universidad de Salamanca
juanjo_mena@usal.es | https://orcid.org/0000-0002-6925-889X
Resumen: El estudio tiene como objetivo analizar el impacto de la inteligencia artificial (IA) en la comunicación desde un enfoque educomunicativo y crítico, identificando sus implicaciones éticas, cognitivas y formativas. Se desarrolló una revisión teórica y documental de literatura académica reciente, contrastando enfoques epistemológicos, tendencias comunicativas y políticas educativas internacionales. Los resultados evidencian que la IA ha pasado de ser una herramienta técnica a convertirse en un agente cognitivo y simbólico que redefine la producción de sentido, el periodismo, el marketing y la educación. Asimismo, se constata la emergencia de un paradigma algorítmico que exige alfabetización mediática y ética digital. Se concluye que la mediación pedagógica y la educomunicación crítica son esenciales para humanizar la tecnología, garantizar la transparencia algorítmica y fortalecer la agencia humana en entornos comunicativos automatizados.
Palabras clave: inteligencia artificial; comunicación; educomunicación; alfabetización mediática; ética digital; paradigma algorítmico.
Abstract: This study aims to analyze the impact of artificial intelligence (AI) on communication from a critical and educommunicative perspective, identifying its ethical, cognitive, and educational implications. A theoretical and document-based review of recent academic literature was conducted, contrasting epistemological approaches, communicative trends, and international educational policies. The results show that AI has evolved from a technical tool into a cognitive and symbolic agent that redefines the production of meaning, journalism, marketing, and education. Furthermore, an emerging algorithmic paradigm is identified that demands media literacy and digital ethics. The study concludes that pedagogical mediation and critical educommunication are essential to humanize technology, ensure algorithmic transparency, and strengthen human agency within automated communicative environments.
Keywords: Artificial Intelligence; Communication; Educommunication; Media Literacy; Digital Ethics; Algorithmic Paradigm.
To quote this work: Estrada Molina, O.; Rincón-Flores, E. G. & Mena, J. (2026). Artificial Intelligence and Communication: Critical and Educommunicative Mediation. index.comunicación, 16(1), 13-35. https://doi.org/10.62008/ixc/16/01Artifi
The emergence of artificial intelligence (AI) in contemporary media ecosystems has profoundly transformed communication and learning processes. Beyond its technological dimension, AI has become a cultural and cognitive device that reorganizes how individuals produce, circulate, and legitimize information. In this new digital ecology, algorithms not only mediate interactions but also condition the visibility of knowledge, the user experience, and the configuration of public spaces for deliberation (van Dijck et al., 2018; Floridi, 2019).
Understanding AI from a communication perspective means recognizing it as an agent of symbolic mediation that actively participates in the generation of meaning. Its expansion into fields such as automated journalism, algorithmic marketing, or digital community management reveals a trend towards personalization and behavioral prediction, in which information is adjusted to consumption patterns rather than to criteria of plurality or veracity (Shneiderman, 2020; Sundar, 2020). This process has ethical, cognitive, and social implications that require a critical and humanistic approach.
From an educommunicative perspective, AI raises the need to rethink the links between technology, education, and culture. It is not only a matter of integrating intelligent tools into communicative or training practices, but also of strengthening critical and ethical skills that enable the interpretation of their logic and effects. Media literacy is thus extended to algorithmic literacy, focusing on the understanding of the automated processes that organize information, discourse, and social relations. Only from this critical awareness is it possible to guarantee democratic communication oriented to the common good.
The analysis presented below addresses AI as a communicative and educational phenomenon from three complementary dimensions. First, the theoretical and epistemological foundations for understanding AI as a new form of cultural mediation are examined. Second, the main communicative transformations derived from the algorithmic paradigm are analyzed, with special attention to their impact on information production, digital marketing, and networked communities. Finally, the formative and political dimensions of AI are considered, with a focus on the need to design inclusive policies and literacy strategies that promote an ethical, transparent, and socially responsible use of technology.
Understanding these transformations requires reviewing the epistemological and communicative foundations that underpin AI. In this sense, the following section explores its conceptual evolution and the transition from an essentially technical intelligence to a communicative intelligence focused on producing meaning.
AI has evolved from being a technological development focused on automating tasks to becoming a cognitive infrastructure that reconfigures contemporary communication culture. Beyond its instrumental dimension, it constitutes a socio-technical phenomenon that transforms the production, interpretation, and distribution of meaning. From this perspective, AI can be defined, following Russell and Norvig (2020), as the ability of computer systems to perform functions that traditionally required human intelligence, such as learning from experience, recognizing patterns, reasoning, and adapting to changing contexts.
However, the communicative approach requires going beyond this operational definition. Floridi (2019) argues that AI not only expands the scope of human cognition but also establishes a new regime of knowledge based on algorithmic correlation and automated information generation. In this sense, AI is not a simple tool, but an epistemological actor that participates in the construction of mediated reality. This algorithmic co-authorship alters the modes of representation and legitimation of knowledge, affecting both educational and communicative processes.
The historical development of AI —from the Turing test (1950) to contemporary generative models— shows a convergence between data processing and symbolic mediation. What was once an engineering problem has today become a communication challenge: the coexistence of human and artificial intelligence in the public sphere. Thus, the central question is no longer how «intelligent» the machine is, but how its operations modify the ecology of communication and learning.
Contemporary communication takes place in environments where algorithms act as agents of production and circulation of meaning. This «machinic agency» (Sundar, 2020) redefines the traditional notions of sender and receiver, generating a co-presence of humans and intelligent systems. Chatbots, conversational assistants, and recommendation platforms mediate the exchange, shape the interaction, prioritize information, and influence the user experience.
From an educommunication perspective, this phenomenon poses a double tension: the expansion of access and the loss of critical agency. Although AI enhances the personalization of content and interactivity, it also conditions the interpretive autonomy of the subject. As Gunkel (2012) warns, intelligent systems produce «meaningful responses without awareness» (Gunkel, 2012: 8), generating an illusion of dialogue that can replace critical deliberation with mere automated exchanges.
Thus, media and algorithmic literacy become a structural requirement of digital citizenship. It is not just a matter of learning to use technologies, but of understanding their operating logic, biases, and cultural effects. Floridi (2019) and Shneiderman (2020) agree that trust in AI should be based on its auditability and transparency, principles that communication education should incorporate as ethical and cognitive competencies. Consequently, AI literacy involves training subjects capable of critically dialoguing with systems, recognizing technical mediation, and exercising conscious control over the automation of their symbolic environments.
This perspective connects directly with the processes described in the subsequent sections of the essay —the algorithmic paradigm, strategic communication, and pedagogical mediation— by situating AI as a mediator between learning, technology, and citizenship. Communication, understood as the collective construction of meaning, faces the challenge of preserving human agency in ecosystems governed by algorithmic prediction.
Recognizing AI as a communicative agent implies assuming a new educational and communicative responsibility. The challenge is not only technical or regulatory, but epistemological and ethical. In the words of Shneiderman (2020), the goal should be human-centered AI that is oriented towards reliability, security, and transparency. For educommunication, this means integrating reflection on algorithms as cultural actors in training and media processes.
From a critical communication ethics perspective, the main risk lies in the automation of attention and thought. When mediation is delegated to systems that prioritize efficiency over deliberation, education must reorient its role towards critical thinking literacy and data ethics. Pedagogical mediation, then, becomes a form of symbolic resistance: teaching how to interpret, contextualize, and question the messages generated or filtered by AI.
In this framework, educommunication emerges as the necessary bridge between technology, culture, and ethics. Its function is not limited to integrating tools, but to building criteria for meaning and responsibility that guide the coexistence of human and artificial intelligence. AI can and should act as a cognitive mediator, but its legitimacy will depend on the degree to which the systems are understandable, auditable, and socially just.
Thus, the contemporary challenge is not to «domesticate» AI, but to educate human intelligence to coexist critically with it. Communication and education share a common horizon in this task: to ensure that the automation of knowledge does not lead to the automation of thought.
AI has ceased to be a peripheral technological phenomenon and has become a structuring principle of contemporary communicative culture. More than a tool, it is a cognitive matrix that redefines the ways of producing, distributing, and legitimizing media knowledge. Current communication practices are shaped by systems capable of analyzing big data, generating content, and anticipating behavior, displacing professional mediation towards a logic of prediction and machine learning. Communication, traditionally understood as a symbolic process based on human intentionality, is thus transformed into an ecosystem of hybrid meaning, where algorithms and human agents co-produce the informational social reality.
Understanding the trends in this process involves examining how AI reshapes epistemologies, ethics, and communication practices. The phenomenon is not reduced to a technical innovation but instead inaugurates an algorithmic regime of knowledge that permeates the strategic, media, and relational levels. In this framework, the first axis of analysis focuses on the consolidation of the algorithmic paradigm, which is the structural core that sustains subsequent transformations in contemporary communication.
The so-called algorithmic paradigm constitutes an epistemological, ontological, and professional reconfiguration of the communicational field. Its consolidation derives from the progressive integration of AI, big data, and machine learning systems in the production, circulation, and reception of meaning. Since the mid-2010s, what began as technological experimentation has evolved into a structuring infrastructure that shapes media practice, digital governance, and the institutional organization of communication.
Striphas’ (2015) concept of algorithmic culture is foundational to understanding this mutation: human cultural and editorial judgment is moving towards automated infrastructures of classification and symbolic ordering. Algorithms are ceasing to be mere technical instruments and are becoming agents of symbolic production, determining what is visible, credible, and able to circulate in the public sphere. This logic of automated mediation, as documented by Garde et al. (2024), produces «dark journalism», characterized by the opacity of processes, the loss of professional agency, and the subordination of information discourse to predictive optimization.
Recent empirical evidence confirms this structural change. In their review of a decade of research on AI and journalism, Ioscote et al. (2024) demonstrate a cross-cutting expansion of automation in all phases of the communication process and a rapid institutionalization of the algorithm as a scientific object. Complementarily, De Sousa and Fontes (2024) document the terminological convergence of automated, computational, and algorithmic journalism, thereby legitimizing it as a disciplinary subfield. This shift reveals a transition towards communicative rationality based on statistical correlation and prediction, displacing traditional interpretative logic.
The algorithmic paradigm extends beyond the media sphere and permeates political and organizational structures. De la Garza Montemayor and Gómez Díaz de León (2024) show its influence on digital governance, where predictive models and automated management of public opinion support decision-making. Along the same lines, López (2025) describes the emergence of predictive strategic communication, in which algorithms anticipate behaviors and reconfigure institutional planning. Both studies agree that the algorithm today acts as an actor of symbolic governance, reorganizing decisions and power flows in the public sphere.
In the traditional media arena, automation has become normalized. García-Orosa et al. (2022) show how audience recommendation and analysis systems have transformed the production and distribution of content, confirming the human and machinic hybridization in the management of media visibility. In the Latin American context, Suárez Poveda and Del Campo Saltos (2024) document that the media compete in ecosystems dominated by social media algorithms. This requires new technical and ethical competencies to guarantee professional autonomy.
Rojas-Calderón (2024) highlights the ethical dimension of the phenomenon, warning of the risks of dehumanization and the erosion of moral sense in information, and proposes a model of humanist journalism based on transparency and responsibility. This position converges with the notion of «critical attention» proposed by Codina (2024) and with the idea of epistemic responsibility by Oke (2025), who argues that generative AI introduces an epistemological crisis of confidence by replacing empirical reference with algorithmic synthesis. In his view, communication enters a «generative» phase that requires rethinking the ethics of knowledge from the perspective of cognitive responsibility.
In coherence with the above, the literature converges on three fundamental findings:
· Cognitive delegation: automatic systems take on tasks of analysis, classification, and narration, displacing human functions.
· Institutionalization of the algorithm: automation ceases to be experimental and becomes the structural core of communicative practice and academic research.
· Emerging epistemic ethics: transparency, auditability, and traceability are essential deontological principles in the face of the expansion of machine learning.
Therefore, contemporary communication must be understood as a hybrid system of meaning production, where algorithms operate as co-authors, mediators, and regulators of the information experience. This new algorithmic regime redefines the discipline's theoretical frameworks and poses the challenge of integrating a critical educommunication capable of preserving human agency and epistemic responsibility in the era of cognitive automation.
Contemporary strategic communication is undergoing a structural reconfiguration driven by AI, altering the logic of planning, segmentation, and public relationships. In this new environment, algorithms not only automate tasks but also assume an agency function in symbolic production: they determine which messages circulate, in what format, and under what optimization criteria. As Guzman and Lewis (2019) argue, machine agency turns intelligent systems into autonomous communicative actors capable of intervening in interactions and redefining traditional notions of sender and receiver.
From this perspective, communication with AI should be understood as a human-machine co-production of meaning, in which technology mediates the creation of messages, the management of relationships, and the construction of trust. In this way, marketing communication evolves from a reactive model to an adaptive and predictive ecosystem, in which collaboration between human and artificial intelligence becomes the structural condition of communicative effectiveness.
Various studies have described this turn towards algorithmic rationality. Haleem et al. (2022) identify three central axes: the automation of strategic decisions, predictive personalization, and the continuous optimization of relational performance. AI not only assists planning but also co-directs decision-making, establishing a cyclical model of analysis, prediction, and permanent adjustment of messages. This dynamic confirms the consolidation of the algorithmic paradigm described above (see 1.1), in which statistical correlation and machine learning replace traditional empirical judgment.
Advances in deep learning have extended this logic to personalized digital communication, enabling systems to adapt advertising messages in real time based on psychographic and emotional variables, thereby increasing interaction and conversion rates. This transition from demographic segmentation to cognitive-affective personalization reconfigures communication as an algorithmic feedback system, where each interaction feeds new predictions. In line with this trend, Binlibdah (2024) confirms that AI increases «media richness» by improving responsiveness and contextualization, thereby strengthening perceptions of personalization and the symbolic loyalty of audiences.
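The adaptive feedback loop described above, where each interaction feeds the next prediction, can be illustrated with a minimal sketch. This is not drawn from the studies cited: the message variants, the epsilon-greedy selection rule, and the engagement rates are all hypothetical, chosen only to show how an automated system converges on the message style its own data rewards.

```python
import random

# Hypothetical message styles a personalization system might choose between.
VARIANTS = ["informative", "emotional", "urgent"]
EPSILON = 0.1  # fraction of interactions used for exploration

clicks = {v: 0 for v in VARIANTS}
shows = {v: 0 for v in VARIANTS}

def choose_variant():
    """Mostly exploit the best-performing variant; occasionally explore."""
    if random.random() < EPSILON or all(n == 0 for n in shows.values()):
        return random.choice(VARIANTS)
    return max(VARIANTS, key=lambda v: clicks[v] / shows[v] if shows[v] else 0.0)

def record_interaction(variant, clicked):
    """Each interaction updates the statistics driving the next prediction."""
    shows[variant] += 1
    if clicked:
        clicks[variant] += 1

# Simulated audience: the 'emotional' variant is (hypothetically) the most
# engaging, so repeated feedback pushes the system towards it over time.
random.seed(42)
true_rates = {"informative": 0.05, "emotional": 0.20, "urgent": 0.10}
for _ in range(5000):
    v = choose_variant()
    record_interaction(v, random.random() < true_rates[v])

best = max(VARIANTS, key=lambda v: shows[v])
print(best, shows)
```

The sketch makes the essay's point concrete: no editorial judgment intervenes, yet the system progressively narrows what audiences see, because optimization is defined solely by past engagement. Real deployments replace this toy selection rule with contextual deep-learning models, but the feedback structure is the same.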
However, technical efficiency does not guarantee social legitimacy. From an ethical and sociocultural perspective, Barroso Huertas (2025) proposes an inclusive approach to predictive marketing that balances algorithmic efficiency with communicative justice. The author warns that trust does not derive from technical precision, but from compliance with ethical expectations and respect for cultural diversity. In this line, the transparency and auditability of the systems are essential conditions for the credibility of the automated issuer.
In summary, the literature reviewed allows us to identify three converging features of the new strategic communication regime:
· Machinic agency, which redistributes decision-making towards autonomous learning systems (Guzman & Lewis, 2019).
· Predictive personalization, which replaces static segmentation with adaptive and context-aware architectures (Haleem et al., 2022; Wen et al., 2022).
· Ethics of algorithmic trust, focused on transparency, equity, and social responsibility (Barroso Huertas, 2025).
In this way, strategic communication and algorithmic marketing are consolidated as hybrid fields in which technical intelligence and symbolic responsibility converge. Communicative management is redefined as a cognitive, adaptive, and ethical process, consistent with the predictive rationality that sustains the algorithmic paradigm of contemporary communication.
Journalism is a field where AI most clearly demonstrates the epistemological transformation of communication. Since the first experiments in newsroom automation in the 2010s, we have moved to a stage of comprehensive algorithmic information production, in which intelligent systems not only process data but also generate narratives, interpret context, and evaluate content reception. This transition is conceptualized as algorithmic accountability journalism, that is, journalism based on the transparency and traceability of computational decisions.
Dörr (2015) anticipates this evolution, distinguishing between workforce automation, predictive analytics, and automated distribution, and warning that AI should be understood as cognitive augmentation, not as a replacement for the professional. This logic of co-authorship is confirmed in the hybrid media model described by Meso Ayerdi et al. (2023), where journalistic agency is reconfigured around functions of curatorship, supervision, and ethical control. In this model, AI acts as an invisible infrastructure that conditions the production and reception of information, without eliminating human intervention.
The degree of algorithmic journalism maturity can be seen in the studies by Fieiras Ceide et al. (2025), who analyzed the first comprehensive synthetic media —such as Intar Radio and NewsGPT— managed almost entirely by AI. These media environments confirm the operational delegation of editorial functions: thematic selection, writing, voice-over, and automated dissemination. However, they also reveal a crisis of responsibility: the opacity of sources, the veracity of content, and the authorship of news pieces pose ethical and legal dilemmas that remain unresolved.
On the other hand, Illescas Reinoso et al. (2025) document the massive adoption of generative tools in newsrooms, e.g., Copy.ai, Jasper, Perplexity, and DeepL, which increases productivity but deepens technological dependence and professional precariousness. Their empirical analysis reinforces the need for critical algorithmic literacy, which equips journalists with the skills to understand and audit automated processes. Along the same lines, Vayas Ruiz et al. (2025) show that, although professionals recognize the usefulness of AI in verification and editing, concerns persist about the loss of editorial control and the algorithmic manipulation of narratives.
Recent literature allows us to synthesize three structural features of consolidated algorithmic journalism:
· Operational delegation, where algorithms take on traditional human tasks in selection, writing, and distribution (Fieiras Ceide et al., 2025).
· Epistemological reconfiguration, which replaces the classical notion of objectivity with algorithmic precision based on statistical plausibility (Meso Ayerdi et al., 2023; Illescas Reinoso et al., 2025).
· Ethical and trust tension, which demands transparency, explainability, and human supervision (Vayas Ruiz et al., 2025).
These studies outline a post-human ecosystem of journalistic co-authorship, consistent with the algorithmic paradigm described in the previous sections. More than a professional substitution, automation involves a cognitive and ethical rearticulation of the journalistic profession. The central challenge is not to integrate technologies, but to preserve the critical and interpretive function of the journalist amid the growing autonomy of AI. Only an ethics of epistemic responsibility —already proposed by Oke (2025) and extended to the practice of information— can guarantee that algorithmic innovation does not erode the democratic foundations of the public sphere.
The expansion of AI in communicative environments ushers in a phase in which conversational systems —chatbots, virtual assistants, and generative agents— become social partners and not just technical intermediaries. This shift reconfigures classic categories of the field —interaction, agency, belonging— and shifts the axis of interpersonal communication towards hybrid human-machine forms, in which algorithms actively participate in the production of meaning and in the affective management of the relationship.
From a historical perspective, the attribution of human traits to communicative artifacts —proposed by the CASA theory— showed early on that people interact socially with machines (Nass & Moon, 2000). This anticipated a cultural shift that, in the 2020s, deepens with generative platforms capable of sustaining linguistic, contextual, and emotional exchanges. At the same time, ethical displacements become apparent: technical transparency does not equate to human understanding, and trust requires responsibility and accountability in the design and deployment of conversational agents (Edwards & Veale, 2017). In fact, West's (2017) warning about the «automation of affective life» helps to locate these practices in a sociotechnical register that exceeds efficiency and compromises identities, emotions, and bonds.
From a phenomenological approach, the human-machine relationship can be understood as an enactive co-construction: AI simulates intentionality and generates subjectively real experiences of interlocution, without implying consciousness (Jacomuzzi & Alioto, 2024). This pseudo-agency would explain why conversational systems garner trust, modulate empathy, and structure belonging in digital communities, with tangible effects on the dynamics of influence and symbolic authority.
Recent empirical evidence confirms this shift. Among young Latin American university students, the habitual use of chatbots as a communicative practice coexists with ambivalence: the informative and relational usefulness is recognized, but losses in human contact and reliability are perceived (Neme Pinto, 2024). At the macro scale, bibliometrics identifies human–machine communication and chatbots as emerging, highly dynamic topics. At the same time, comparative reviews identify a «third phase» of digital communication, defined by personalized interactivity, real-time feedback, and affective simulation (Gholami & Abdwani, 2024). This advance, however, installs a regime of relational datafication: the conversational experience becomes a source of emotional and behavioral data extraction, straining the tensions between personalization and surveillance, and between authenticity and simulacrum.
In a normative key, this displacement connects with the ethical-epistemic axis set out in 3.1: beyond effectiveness, there is an urgent need for an epistemic responsibility that addresses who responds, who interprets, and who assumes the consequences of AI-mediated conversations (Edwards & Veale, 2017; Oke, 2025). Today's digital communities are increasingly articulated around algorithmic conversational mediations: the boundary between the interpersonal and the assisted is blurred, and conversational systems act as symbolic co-actors that organize interaction, manage trust, and shape collective affectivity. This observation anticipates the closing argument of this block: without critical pedagogical mediation, the conversational regime risks normalizing technical opacity and asymmetries of informational power, as already discussed in sections 1.1–1.3.
The above journey allows us to maintain, empirically and conceptually, that AI has ceased to be a set of tools to become an epistemological infrastructure that reorders the production, interpretation, and circulation of meaning. At the three levels analyzed —strategic, media-informative, and relational-community— algorithms move from mediating to co-authoring: they are involved in the generation of discourse, the management of attention, and the regulation of interaction. This consolidates the algorithmic paradigm described above and crystallizes the tripod of contemporary mutation: automation, prediction, and generation.
This common movement has progressively delegated cognitive and decisional agency to automatic systems, while raising the demands of responsibility: explainability, traceability, meaningful human control, and the public ethics of processes (Codina, 2024; Oke, 2025). The consequence is unequivocal: today's communication functions as a laboratory for coexistence between intelligences —human, artificial, and collective— and demands new formative mediations to preserve human agency, critical judgment, and epistemic pluralism.
Under this premise, the transition to pedagogical mediation is not an accessory but a condition of possibility for a reliable, inclusive, and fair communicative ecosystem. Educommunication emerges as the space where learning and automation, creativity and prediction, ethics and efficiency are held in productive tension. In operational terms, this implies:
· Transversal human competencies (critical thinking, creativity, collaboration, info-media-data literacies).
· Teacher training to translate technical potential into didactic value and to design evaluation practices suited to the generative era.
· Algorithmic governance (transparency, auditing, fairness, and data protection), particularly relevant for affective datafication (3.4) and editorial automation (3.3).
Therefore, if the algorithmic paradigm redefines practices, media, and links, then pedagogical mediation is the structural bridge to augmented educommunication —a model in which human intelligence and AI coexist to produce knowledge, creativity, and social responsibility. This closure naturally links to the next section, dedicated to policies, competencies, and uses of AI/IAG in the training of professionals (3.1–3.2), where the curricular, regulatory, and ethical architecture that sustains the ecosystem described in this block will be developed.
The impact of AI on higher education has transcended the limits of technological innovation, becoming an epistemological and ethical problem of the first order. Its presence polarizes the academic debate into positions of fascination and alarm, spanning a spectrum from techno-pedagogical enthusiasm to cultural resistance. As Shata (2025) and Tripathi et al. (2025) warn, reactions to AI range from amazement at its educational potential to fear of its replacing human functions, especially in the professional trajectories of teachers and communicators. However, history shows that all disruptive technologies —from the printing press to television, from the Internet to social networks— initially provoke symbolic resistance, which tends to moderate as social actors integrate their uses and resignify their risks.
In this context, reflection on AI in education —and particularly in educommunication— requires overcoming the dichotomy between uncritical acceptance and catastrophic rejection. On the one hand, AI can enhance learning processes by personalizing trajectories, automating tasks, expanding resources, and enabling new forms of multimodal creation. On the other hand, it introduces communicational risks that affect the veracity of information and the ethical training of students, by facilitating the circulation of disinformation and manipulated synthetic content (Bañuelos & Abbruzzese, 2023).
Hence, the debate should not focus on whether to incorporate AI, but on how to integrate it in ways that are pedagogically relevant, technologically equitable, and in line with public ethics, so that its benefits translate into better learning without compromising information integrity or citizen training. Educommunication —understood as a field of critical mediation between digital culture, education, and communication— is thus configured as a laboratory of articulation between human and artificial intelligence, adopting the horizon of epistemic responsibility outlined in the previous section.
The advancement of AI in educational systems poses the challenge of regulating innovation without inhibiting it, ensuring that its adoption responds to principles of equity, transparency, and pedagogical purpose. As has already happened with television, smartphones, and social networks, AI cannot be excluded from academic life; it must be framed within policies of responsible use.
At the international level, UNESCO (2023) has proposed the first global framework on the use of generative AI in education and research, underlining the need for safeguards regarding age, privacy, and didactic validation of tools before institutional adoption. The OECD (2024) complements this perspective by warning that AI can both reduce and widen inequality gaps, depending on the level of digital infrastructure and access policies. These warnings are pertinent, given that, according to UNICEF (2020), more than 460 million children were excluded from remote learning during the pandemic due to a lack of connectivity, suggesting that digital inclusion is a precondition for any educational AI policy.
At the regional and national levels, the European Commission disseminated the Ethical Guidelines on the Use of AI and Data in Teaching and Learning for Educators, which have been adopted as a reference by many ministries. The United Kingdom, the United States, and Australia have made progress on their own guidelines (Department for Education, 2023; Education Ministers, 2023; U.S. Department of Education, 2023; Ofqual, 2024). In contrast, Latin America still lacks robust sectoral regulation. Soft law approaches (recommendations, guidelines, or strategies) predominate instead of binding regulatory frameworks. However, Brazil, Chile, Uruguay, Colombia, and Argentina are beginning to show significant institutional progress in the incorporation of AI in public policies and higher education (Rivas, 2025).
The global panorama reveals a paradox: the abundance of ethical guidelines amid the scarcity of mandatory legal frameworks. This asymmetry has two structural causes:
· The accelerated pace of technological development in the face of slow regulatory and curricular adaptation.
· The socioeconomic and connectivity gaps that limit equitable access to digital tools.
Consequently, it is urgent to design comprehensive policies that regulate not only the pedagogical application of AI but also data traceability, algorithmic transparency, and the governance of generative models. Without these components, any educational innovation risks reproducing inequalities.
The consolidation of inclusive educational AI, therefore, requires three principles:
· Equity, guaranteeing universal access and teacher training.
· Transparency, through the auditability of algorithms and the ethical use of data.
· Pedagogical relevance, so that technical innovation translates into meaningful learning and human development.
Only through these criteria can AI become an instrument of educational justice rather than exclusion, consistent with the notion of the social responsibility of technological mediation set out above.
Among the most recurrent promises of AI in higher education is the personalization of learning, conceived as a process of dynamic adaptation that optimizes training trajectories and frees teachers from routine tasks. Several universities have begun using it to level initial skills and assist students with cognitive or formative gaps (Aldape-Valdes et al., 2026). However, access to these capabilities remains asymmetrical, concentrated in institutions with greater infrastructure and connectivity.
Recent meta-analyses confirm the predominance of adaptive systems, intelligent tutors, and predictive algorithms in higher education (Bond et al., 2024; Wang et al., 2024). These models reinforce the trend towards immediate feedback and assisted teaching management, favoring the consolidation of learning and differentiated attention to students (Luckin et al., 2016; Holmes et al., 2019). In addition, there are applications for early dropout detection (Rodríguez-Hernández et al., 2023), predictive performance analysis (Rincón-Flores et al., 2022), and the automation of multilingual transcription, verification, and translation tasks.
In the field of educommunication, the expansion of AI tools has created a functional ecosystem for the production, analysis, and verification of information. Platforms such as Ecree or SourceWrite act as writing tutors with immediate feedback; NotebookLM and Pinpoint assist in desk research; Whisper automates transcriptions and translations; and InVID-WeVerify or TinEye support the detection of disinformation and deepfakes. These technologies not only increase productivity but also redefine the professional competencies required for Communication and Journalism degrees, expanding the spectrum of digital, critical, and creative skills.
However, various studies show a mismatch between academic offerings and the demands of the media ecosystem. Communication degree programs still insufficiently incorporate emerging AI competencies, and teachers report limited training and a lack of clear ethical guidelines (Tejedor et al., 2024; Babacan et al., 2025; Garzón et al., 2025; Medina-Cambrón et al., 2025). This creates a gap between the graduate profile and the demands of a professional market dominated by automated analysis tools, generative narratives, and algorithm-assisted communication.
Faced with this gap, AI training for both teachers and students must be oriented toward an integrated competency model that combines technical, ethical, and reflective skills. These include:
· Advanced search, assisted writing, and data analysis, along with prompt engineering fundamentals and applied ethics (Babacan et al., 2025).
· Convergent literacies (information, media, and data literacy); fact-checking, traceability, and disclosure of AI use; and an understanding of algorithmic bias, intellectual property, and data privacy.
· Transversal human competencies, such as critical thinking, creativity, collaborative work, and systems thinking (Fernández-Barrero et al., 2024).
Critical thinking acquires particular relevance as the core of human competencies, indispensable for the responsible exercise of communication. In an environment saturated with disinformation, deepfakes, and synthetic narratives, the professional must be able to evaluate sources, contrast evidence, and contextualize meanings. At the same time, teachers must update their practices to guide this learning in AI-mediated contexts.
AI (especially generative AI) offers transformative potential when pedagogically integrated with transparency and fairness criteria. It enables the synthesis of large volumes of information, the generation of visual representations, and the promotion of systems-thinking approaches, enriching the creative and analytical processes in the communication field. But its responsible adoption depends on three inseparable pillars:
· Human competencies that guide ethical judgment and decision-making.
· Continuous teacher training capable of translating technical potential into valuable teaching practices.
· Institutional governance that establishes clear rules on the design, training, and application of algorithmic models.
Only the convergence of these factors will guarantee a reliable, creative, and fair educommunicative ecosystem, where AI acts not as a substitute, but as a cognitive and pedagogical mediator that expands human agency and strengthens the critical training of future communicators.
AI has ceased to be an instrumental support and has become the cognitive infrastructure of contemporary communicative culture. In this transition, algorithms not only streamline processes but also exercise agency by co-authoring the production, circulation, and legitimization of meaning. This redefines the trade-offs between efficiency and deliberation: predictive optimization tends to compress information diversity and shift interpretative judgment toward statistical plausibility calculations. Therefore, rather than asking how «intelligent» the machine is, it is crucial to elucidate how its mediation reconfigures the communicative experience, the criteria of truth, and learning practices.
This panorama poses an inescapable epistemic and ethical risk: technical opacities, training biases, affective datafication, and ambiguities of authorship challenge public trust and professional responsibility. The answer must not be technophilic or technophobic, but educommunicative: a pedagogical mediation that provides institutions, professionals, and citizens with info-media-data-algorithmic literacy, capable of understanding operating logics, auditing decisions, and maintaining meaningful human control. Transparency, auditability, and explainability must evolve from abstract principles to operational protocols in newsrooms, communication offices, and online communities.
The areas analyzed illustrate this mutation. In journalism, operational delegation coexists with the need for verification criteria, source traceability, and editorial supervision. In strategic communication and marketing, predictive personalization requires frameworks that balance effectiveness with communicative justice and respect for cultural diversity. In digital communities and conversational systems, the simulation of interlocution and empathy forces us to discuss authorship, responsibility, and the extraction of emotional data. All this confirms that current communication is a human-machine hybrid and requires new forms of symbolic governance.
Finally, higher education and public policies concentrate the challenge and the opportunity to integrate technical, critical, and ethical competencies into the curriculum; professionalize teachers to translate technical potential into didactic value; and establish institutional rules on data, rights, and impact assessment. Humanizing AI does not mean slowing down innovation; instead, it means orienting it toward preserving pluralism, dignity, and deliberation. The desirable horizon is AI-augmented communication, where technology expands human agency without automating thinking.
The authors appreciate the institutional support of their respective universities. The authors acknowledge the technical and financial support of the Writing Lab, Institute for the Future of Education, Tecnologico de Monterrey, Mexico, in the production of this work (professional translation).
The authors declare that there is no conflict of interest.
This work has been funded by the Spanish Ministry of Science, Innovation and Universities within the framework of the project "Inclusión social, bienestar psicológico y disminución de la soledad de personas mayores a través del juego interactivo digital" (EDU-SENIORGAMES), Ref. PID2024-160462NB-I00.
Author Contributions
| Contribution | Author 1 | Author 2 | Author 3 | Author 4 |
|---|---|---|---|---|
| Conceptualization | X | | | |
| Data curation | | X | X | |
| Formal Analysis | X | X | X | |
| Funding acquisition | X | | | |
| Investigation | | | | |
| Methodology | | | | |
| Project administration | X | | | |
| Resources | | | | |
| Software | | | | |
| Supervision | X | | | |
| Validation | | | | |
| Visualization | | X | X | |
| Writing – original draft | X | X | X | |
| Writing – review & editing | X | X | X | |
Not applicable.
Aldape-Valdes, P., Rincon-Flores, E. G., Castano, L., & Guerrero, S. (2026). Smart leveling: an AI-driven adaptive learning strategy in higher education. RIED-Revista Iberoamericana de Educación a Distancia, 29(1). https://doi.org/10.5944/ried.45482
Babacan, H., Arık, E., Bilişli, Y., Akgün, H., & Özkara, Y. (2025). Artificial intelligence and journalism education in higher education: Digital transformation in undergraduate and graduate curricula in Türkiye. Journalism and Media, 6(2), 52. https://doi.org/10.3390/journalmedia6020052
Bañuelos, J. & Abbruzzese, M. (2023). From Deepfake to Deeptruth: Toward a Technological Resignification with Social and Activist Uses. In M. Cebral-Loureda, E. G. Rincon-Flores, & G. Sánchez-Ante (Eds.), What AI can do, strengths and limitations of artificial intelligence (pp. 75-92). Taylor & Francis Group.
Barroso Huertas, O. (2025). Predictive and inclusive Fashion Marketing: Advanced Segmentation Strategies for diverse audiences with Artificial Intelligence. GDI. Revista de investigación de Género, Diseño e Innovación, (2), 87-104. https://doi.org/10.63206/GDI.2025.2.5
Binlibdah, S. (2024). Investigating the Role of Artificial Intelligence to Measure Consumer Efficiency: The Use of Strategic Communication and Personalized Media Content. Journalism and Media, 5(3), 1142-1161. https://doi.org/10.3390/journalmedia5030073
Bond, M., Khosravi, H., De Laat, M., Bergdahl, N., Negrea, V., Oxley, E., Pham, P., Chong, S. W. & Siemens, G. (2024). A meta systematic review of artificial intelligence in higher education: A call for increased ethics, collaboration, and rigour. International Journal of Educational Technology in Higher Education, 21, Article 4. https://doi.org/10.1186/s41239-023-00436-z
Codina, L. (2024). La inteligencia artificial y el mundo de la comunicación: paradigmas y atención crítica. adComunica. Revista Científica de Estrategias, Tendencias e Innovación en Comunicación, (28), 319-322.
De la Garza Montemayor, D. J. & Gómez Díaz De León, C. (2024). Artificial Intelligence and Big Data: New Paradigms of Political Communication and Digital Governance. Más Poder Local, (56), 9-26. https://doi.org/10.56151/maspoderlocal.214
De Sousa, M. E. & Fontes, A. J. (2024). Conceptual trend for Automated Journalism: an Analysis between 2018 and 2022. E-Compós, 27. https://doi.org/10.30962/ecomps.3035
Department of Education (Victoria) (2023, December 1). Generative artificial intelligence: Policy. Policy and Advisory Library (VIC.GOV.AU). https://shorturl.at/PVN2Y
Dörr, K. N. (2015). Mapping the field of Algorithmic Journalism. Digital Journalism, 4(6), 700-722. https://doi.org/10.1080/21670811.2015.1096748
Education Ministers. (2023, October). Communiqué — Education Ministers’ Meeting [Communiqué]. Australian Government Department for Education. https://shorturl.at/Z5V0h
Edwards, L. & Veale, M. (2017). Slave to the algorithm? Why a “right to an explanation” is probably not the remedy you are looking for. Duke Law & Technology Review, 16(1), 18-84. https://shorturl.at/k0kLo
Fernández-Barrero, M., López-Redondo, I. & Aramburú-Moncada, L. (2024). Possibilities and challenges of Artificial Intelligence in the teaching and learning process of Journalism Writing. The experience in Spanish universities. Communication & Society, 37(4), 241-256. https://doi.org/10.15581/003.37.4.241-256
Fieiras Ceide, C., Fernández Lombao, T. & Túñez López, M. (2025). Journalism without journalists: Operational and AI-generated content analysis in the first comprehensive synthetic media. Revista ICONO 14, 23(1), e2275. https://doi.org/10.7195/ri14.v23i1.2275
Floridi, L. (2019). The logic of information: A theory of philosophy as conceptual design. Oxford University Press.
García-Orosa, B., Canavilhas, J. & Vázquez-Herrero, J. (2022). Algorithms and communication: A systematized literature review. Comunicar, 31(74), 9-21. https://doi.org/10.3916/c74-2023-01
Garde Cano, C., Gayà Morlà, C. & Vidal Castell, D. (2024). Dark Journalism: How Algorithms Have Invaded the Media. Clivatge. Estudis i Testimonis del Conflicte i el Canvi Social, (12). https://doi.org/10.1344/CLIVATGE2024.12.6
Gholami, M. J. & Abdwani, T. A. (2024). The Rise of Thinking Machines: A Review of Artificial Intelligence in Contemporary Communication. Journal of Business Communication & Technology, 3(1), 29-43. https://doi.org/10.56632/bct.2024.3103
Gunkel, D. J. (2012). The machine question: Critical perspectives on AI, robots, and ethics. MIT Press.
Guzman, A. L. & Lewis, S. C. (2019). Artificial intelligence and communication: A Human-Machine Communication research agenda. New Media & Society, 22(1), 70-86. https://doi.org/10.1177/1461444819858691
Haleem, A., Javaid, M., Qadri, M. A., Singh, R. P. & Suman, R. (2022). Artificial intelligence (AI) applications for marketing: A literature-based study. International Journal of Intelligent Networks, 3, 119-132. https://doi.org/10.1016/j.ijin.2022.08.005
Holmes, W., Bialik, M. & Fadel, C. (2019). Artificial intelligence in education: Promises and implications for teaching and learning. Center for Curriculum Redesign. https://curriculumredesign.org
Ioscote, F., Gonçalves, A. & Quadros, C. (2024). Artificial Intelligence in Journalism: A Ten-Year Retrospective of Scientific Articles (2014-2023). Journalism and Media, 5(3), 873-891. https://doi.org/10.3390/journalmedia5030056
Jacomuzzi, A. C. & Alioto, B. P. (2024). People and machines in communication. Studies in Psychology - Estudios de Psicología, 45(1), 145-165. https://doi.org/10.1177/02109395241241380
López, C. A. (2025). IA y narrativas emergentes hacia una reconfiguración de la comunicación social en la cultura digital. Cuadernos del Centro de Estudios de Diseño y Comunicación, (283). https://doi.org/10.18682/cdc.vi283.12703
Luckin, R., Holmes, W., Griffiths, M. & Forcier, L. B. (2016). Intelligence unleashed: An argument for AI in education. Pearson.
Meso Ayerdi, K., Larrondo Ureta, A. & Peña Fernández, S. P. (2023). Algoritmos, inteligencia artificial y periodismo automatizado en el sistema híbrido de medios. Textual & Visual Media, 17(1), 1-6. https://doi.org/10.56418/txt.17.1.2023.0
Nass, C. & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81-103. https://doi.org/10.1111/0022-4537.00153
Neme Pinto, J. E. (2024). Computer Mediated Communication: The Intrusion of Artificial Intelligence into Digital Communication in Young People. Ciencia Latina Revista Científica Multidisciplinar, 8(5), 3302-3319. https://doi.org/10.37811/cl_rcm.v8i5.13817
OECD (2024). The potential impact of artificial intelligence on equity and inclusion in education (OECD Artificial Intelligence Papers No. 23). OECD Publishing. https://doi.org/10.1787/15df715b-en
Ofqual (2024, April 24). Ofqual’s approach to regulating the use of artificial intelligence in the qualifications sector. GOV.UK.
Oke, T. (2025). Algorithmic narrativity as a new narrative mode. AI & Society, 40. https://doi.org/10.1007/s00146-025-02297-8
Rincón-Flores, E. G., López-Camacho, E., Mena, J. & Olmos, O. (2022). Teaching through learning analytics: Predicting student learning profiles in a physics course at a higher education institution. International Journal of Interactive Multimedia and Artificial Intelligence, 7(7). https://doi.org/10.9781/ijimai.2022.01.005
Rivas, A. (2025). La llegada de la IA a la educación en América Latina: En construcción. ProFuturo & Organización de Estados Iberoamericanos (OEI).
Rodríguez-Hernández, C. F., Kyndt, E. & Cascallar, E. (2023). A cluster analysis of academic performance in higher education through self-organizing maps. In M. Cebral-Loureda, E. G. Rincon-Flores, & G. Sánchez-Ante (Eds.), What AI can do, strengths and limitations of artificial intelligence (pp. 115-134). Taylor & Francis Group.
Rojas-Calderón, A. (2024). The uses of images generated with AI in Spanish politics: between creativity and manipulation. Revista de Comunicación Política, 6(1), 1-26. https://doi.org/10.29105/rcp.v6i1.60
Russell, S. J. & Norvig, P. (2020). Artificial intelligence: A modern approach (4th ed.). Pearson.
Shata, A. (2025). Artificial intelligence and communication technologies in academia: Faculty perceptions and the adoption of generative AI. International Journal of Educational Technology in Higher Education, 22, 51. https://doi.org/10.1186/s41239-025-00511-7
Shneiderman, B. (2020). Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human–Computer Interaction, 36(6), 495-504. https://doi.org/10.1080/10447318.2020.1741118
Striphas, T. (2015). Algorithmic culture. European Journal of Cultural Studies, 18(4-5), 395-412. https://doi.org/10.1177/1367549415577392
Sundar, S. S. (2020). Rise of machine communication: How algorithms shape conversational media. Journal of Computer-Mediated Communication, 25(2), 74-88. https://doi.org/10.1093/jcmc/zmz028
Tripathi, T., Sharma, S. R., Singh, V., Bhargava, P. & Raj, C. (2025). Teaching and learning with AI: A qualitative study on K-12 teachers’ use and engagement with artificial intelligence. Frontiers in Education, 10, Article 1651217. https://doi.org/10.3389/feduc.2025.1651217
U.S. Department of Education, Office of Educational Technology. (2023). Artificial intelligence and the future of teaching and learning: Insights and recommendations.
UNESCO (2023). Guidance for generative AI in education and research. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000386693
UNICEF (2020). COVID-19: Are children able to continue learning during school closures? Remote learning reachability factsheet. UNICEF.
van Dijck, J., Poell, T. & de Waal, M. (2018). The platform society: Public values in a connective world. Oxford University Press.
Vayas Ruiz, E. C., Proaño Zurita, J. D. A. & Herdoíza Mancheno, F. G. (2025). Artificial intelligence. Challenges in the professional practice of journalism. Chasqui Revista Latinoamericana de Comunicación, 1(158), 245-258. https://doi.org/10.16921/chasqui.v1i158.5060
Wang, S., Wang, F., Zhu, Z., Wang, J., Tran, T. & Du, Z. (2024). Artificial intelligence in education: A systematic literature review. Expert Systems with Applications, 252, 124167. https://doi.org/10.1016/j.eswa.2024.124167
Wen, L., Lin, W. & Guo, M. (2022). Study on Optimization of Marketing Communication Strategies in the Era of Artificial Intelligence. Mobile Information Systems, 1-11. https://doi.org/10.1155/2022/1604184
West, S. M. (2017). Data Capitalism: Redefining the Logics of Surveillance and Privacy. Business & Society, 58(1), 20-41. https://doi.org/10.1177/0007650317718185