Journal of Advances in Linguistics and English Teaching (JALET) Vol. No. July-December 2025 e-ISSN: 3089-9648

PSYCHOLINGUISTICS AND ARTIFICIAL INTELLIGENCE: A COMPARATIVE ANALYSIS OF HUMAN AND MACHINE LANGUAGE PROCESSING MECHANISMS

Habib Azizi Nasution1*, Muhammad Hasyimsyah Batubara2
1,2Sekolah Tinggi Agama Islam Negeri Mandailing Natal, Indonesia
Corresponding Author's Email: sangvenom000@gmail.com

Article Info
Article history: Received December 18th, 2025; Revised December 22nd, 2025; Accepted December 23rd, 2025
Keywords: Psycholinguistics; Artificial Intelligence; Natural Language Processing; Large Language Models; Human-Machine Interaction

ABSTRACT
The rapid development of Artificial Intelligence (AI), particularly Natural Language Processing (NLP), has brought together two fundamental disciplines: Psycholinguistics and Computer Science. This research seeks to bridge the theoretical gap between how the human brain processes and generates language (psycholinguistics) and how machines model and replicate these processes (AI). This research employed a comparative-analytical literature review. Data were collected from leading academic journals focused on AI language models (such as Transformers and Large Language Models/LLMs) and theories of human language processing (such as Serial Processing and the Connectionist Model). The analysis focused on three main dimensions: lexicon acquisition, syntactic processing, and pragmatic understanding. It was found that while modern AI excels at predicting word order and syntactic structure based on probability (like statistical approaches in cognition), it still falls short of fully replicating semantic processing tied to experience, awareness, and social context (a hallmark of human processing). Current AI models demonstrate impressive speed in lexical inference but often fail at tasks requiring a theory of mind or a multi-layered understanding of pragmatics. Integrating psycholinguistic principles into AI architectures holds great potential for developing systems that are not only efficient but also more natural and human-like in their interactions. Further research is needed to build AI models that reflect the complex bottom-up and top-down processes in the human brain.

Introduction
The interaction between humans and machines has evolved significantly from rigid, command-based inputs to increasingly sophisticated and context-aware dialogue systems. This transformation has been largely driven by advances in Artificial Intelligence (AI), particularly within the field of Natural Language Processing (NLP), which aims to enable machines to process, generate, and respond to human language in a natural and meaningful manner (Jurafsky & Martin, 2023). Contemporary NLP systems, especially Transformer-based Large Language Models (LLMs), demonstrate remarkable fluency and coherence in text generation, positioning them as central technologies in education, communication, and information systems. Despite these advancements, a fundamental theoretical question remains insufficiently addressed: to what extent do computational language processes resemble the cognitive mechanisms underlying human language use? While LLMs achieve high performance through large-scale statistical learning and pattern recognition (Vaswani et al., 2017), their operational principles differ substantially from the psychologically grounded processes described in human language cognition.
This discrepancy raises concerns regarding whether current AI systems genuinely approximate human-like language understanding or merely simulate linguistic competence at a surface level (Bender & Koller, 2020).

In this context, psycholinguistics offers a critical and theoretically rich framework for examining the cognitive plausibility of AI language systems. As a discipline concerned with the mental processes involved in language comprehension and production, psycholinguistics has developed influential models that explain how humans access lexical items, construct syntactic structures, and produce meaningful utterances in real time. Notably, models of lexical access such as the Cohort Model (Marslen-Wilson, 1987) emphasize incremental and context-sensitive word recognition, while Levelt's Model of Speech Production (Levelt, 1989) delineates distinct yet interconnected stages of conceptualization, formulation, and articulation in human speech. These models highlight cognitive constraints such as working memory limitations, intentionality, and goal-directed processing, features that are largely absent from current AI architectures.

Previous research has established that Transformer-based LLMs excel at capturing distributional regularities in language and generating grammatically acceptable text across diverse domains. However, empirical studies also indicate that human language processing operates under strict temporal and cognitive constraints, often described as the "now-or-never bottleneck" (Christiansen & Chater, 2016), which forces rapid, incremental encoding and interpretation of linguistic input. LLMs, by contrast, rely on parallel processing over extensive contexts and massive datasets, raising questions about their alignment with human cognitive processing mechanisms.

Despite growing scholarly interest in comparing human and machine language abilities, a clear research gap persists. Most evaluations of LLMs prioritize performance-based metrics such as accuracy, fluency, or task completion, while psycholinguistic theories are rarely employed as evaluative or explanatory frameworks. Consequently, there is limited understanding of how, or whether, AI language generation corresponds to human processes of lexical selection, sentence planning, and meaning construction. This lack of interdisciplinary integration constrains both the theoretical development of cognitively informed AI and the application of psycholinguistic insights to emerging language technologies.

This study therefore seeks to address this gap by positioning psycholinguistic models as analytical lenses through which contemporary AI language systems can be examined. By aligning key constructs from psycholinguistics with the operational characteristics of Transformer-based LLMs, this research aims to contribute to a deeper understanding of the similarities and divergences between human and machine language processing, ultimately informing the development of more cognitively grounded and human-aligned NLP systems. Specifically, this research aims to analyze the fundamental similarities and differences between human language processing mechanisms (psycholinguistics) and machine language models (AI), and to identify the challenges and opportunities in integrating psycholinguistic findings to create more natural and ethical human-machine language interactions.
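Before turning to the method, the statistical character of machine language prediction described above can be made concrete with a small illustration. The Python sketch below is ours rather than drawn from the reviewed studies: the miniature corpus and the function name are invented, and a bigram counter is a toy, not an LLM. It nonetheless shows the core principle that Transformer models scale up with neural networks, subword tokens, and massive corpora: estimate P(next word | context) from data and prefer the statistically most likely continuation.

```python
# Toy sketch of probability-based next-word prediction (illustrative only).
# Real LLMs use neural networks over subword tokens, but the underlying
# idea is the same: estimate P(next word | context) from observed data.
from collections import Counter, defaultdict

# A miniature, invented corpus; "." marks sentence boundaries.
corpus = (
    "the brain processes language . the model processes text . "
    "the brain generates language . the model generates text ."
).split()

# Count how often each word follows each other word (bigram statistics).
follow = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev][nxt] += 1

def next_word_distribution(word):
    """Estimate P(next | word) from raw co-occurrence counts."""
    counts = follow[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("brain"))  # {'processes': 0.5, 'generates': 0.5}
print(next_word_distribution("the"))    # {'brain': 0.5, 'model': 0.5}
```

Nothing in these counts encodes experience, intention, or context beyond the immediately preceding word; the analysis below examines how far scaled-up versions of this distributional principle approach, and where they fall short of, human processing.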
Method
This research adopted a comparative-analytical literature review approach to systematically examine the convergences and divergences between psycholinguistic theories of human language processing and computational architectures of AI language models.

Data Sources: The primary data for this study come from recent academic publications published between 2018 and 2024, focusing on (1) Cognitive Psycholinguistics and (2) Deep Learning Model Development in NLP. Sources include journals from the fields of Computational Linguistics, Cognitive Science, and Artificial Intelligence.

Data Collection Procedure: The process began with identifying and categorizing the literature based on three selected comparative domains:
(1) Lexical Access: A comparison of human word retrieval speed with tokenization efficiency in AI models.
(2) Syntactic/Structural Processing: A comparison of human syntactic dependency models (Minimal Attachment vs. Late Closure) with the ability of AI attention mechanisms to track long-range dependencies.
(3) Pragmatic/Contextual Understanding: An analysis of human abilities in contextual inference and Theory of Mind compared with AI's abilities in handling ambiguity and meta-communication.

Data Analysis Techniques: Data were analyzed qualitatively using Thematic Content Analysis. Each finding from the psycholinguistic literature was contrasted and compared with the architecture and performance of relevant AI models. The differences and similarities that emerged were then grouped into discourse themes, which formed the basis for the Discussion section.

Findings and Discussions

Lexical Access: Speed versus Cognitive Depth
LLMs demonstrate exceptional speed in lexical prediction through high-dimensional word embeddings and probabilistic token mapping, enabling near-instantaneous lexical selection during text generation. However, this process relies entirely on statistical regularities from the training data. In contrast, human lexical access is influenced not only by word frequency but also by semantic priming, episodic memory, and experiential associations. These results align with the Cohort Model (Marslen-Wilson, 1987), which posits that human word recognition is incremental and dynamically constrained by semantic, contextual, and experiential cues. While LLMs achieve surface-level fluency through distributional statistics, humans achieve coherence through meaning grounded in real-world experience (Levelt, 1989). Previous NLP research demonstrates that LLMs struggle with low-frequency or culturally rich lexical items requiring experiential grounding (Bender & Koller, 2020), confirming that LLM lexical representations lack the flexible, interconnected semantic networks of the human mental lexicon. Thus, although LLMs outperform humans in speed, they do not replicate the cognitive depth of human lexical access.

Syntactic Processing: Attention Mechanisms versus Hierarchical Representation
Transformer-based LLMs effectively capture long-distance syntactic dependencies using self-attention mechanisms. In complex sentence structures, such as subject-verb agreement across embedded clauses, LLMs perform with high accuracy, sometimes exceeding human performance under cognitive load conditions. These results confirm the strength of self-attention in modeling syntactic dependencies (Vaswani et al., 2017; Linzen et al., 2016).
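The agreement results summarized above can be probed directly at the output level. The script below is a minimal sketch of such a probe, assuming the Hugging Face transformers library and the public bert-base-uncased checkpoint; the test sentence and verb pair are our own illustrative choices in the spirit of the agreement tests of Linzen et al. (2016), not the exact procedure of the reviewed studies.

```python
# Sketch: probing subject-verb agreement across an embedded phrase with a
# masked language model. Assumes `pip install torch transformers` and the
# public bert-base-uncased checkpoint; the sentence and verbs are ours.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Plural head noun ("keys") separated from the verb by a singular distractor
# ("cabinet"), the classic agreement-attraction configuration.
sentence = f"The keys to the old wooden cabinet {tokenizer.mask_token} on the table."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Probability distribution over the vocabulary at the masked position.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
probs = logits[0, mask_pos].softmax(dim=-1)

for verb in ("are", "is"):
    verb_id = tokenizer.convert_tokens_to_ids(verb)
    print(f"P({verb!r}) = {probs[verb_id].item():.4f}")
# A model that tracks the long-range dependency should rank "are" above "is".
```

High scores on probes like this demonstrate sensitivity to the statistical signature of the dependency; as the next paragraph argues, they do not by themselves show that the model builds an explicit hierarchical parse.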
However, the underlying processing mechanism differs fundamentally from human syntax processing, which relies on constructing hierarchical syntactic trees as described in generative and psycholinguistic frameworks (Friederici, 2017). LLMs process syntax sequentially and probabilistically without explicitly representing hierarchical grammatical structures, which explains their vulnerability to syntactic illusions or structurally atypical sentences that humans detect through structural awareness (Christiansen & Chater, 2016). While LLMs approximate syntactic outcomes, they do not emulate the cognitive architecture underlying human syntactic processing.

Pragmatic and Contextual Understanding
Pragmatic competence represents the largest gap between human and machine language processing. Although LLMs generate contextually appropriate responses, they frequently fail to reliably interpret sarcasm, irony, humor, and indirect speech acts, particularly in contexts requiring understanding of speaker intention or shared background knowledge. Human pragmatic understanding is rooted in Theory of Mind, the capacity to infer others' beliefs, intentions, and emotional states (Tomasello, 2008). According to Grice's Cooperative Principle, meaning extends beyond literal content to implied intent (Grice, 1975). LLMs, lacking consciousness and experiential grounding, can only estimate pragmatic intent through learned statistical correlations. Previous research confirms that LLMs simulate pragmatic behavior without genuine intentionality (Dennett, 1987; Bender & Koller, 2020). This limitation carries both linguistic and ethical implications, as misinterpretation of intent may lead to inappropriate or misleading responses in sensitive communicative contexts, constraining AI systems from producing fully natural, socially aware, and ethically aligned language interactions.

Synthesis of Findings
The comparative analysis reveals that:
(1) Lexical processing: LLMs prioritize statistical efficiency over meaning grounded in experience, achieving speed without cognitive depth.
(2) Syntactic processing: LLMs demonstrate strong output-level performance but lack the hierarchical cognitive representations employed by humans.
(3) Pragmatic understanding: This remains fundamentally human-centered, relying on intentionality and social cognition absent in AI systems.

These findings collectively demonstrate that while LLMs approximate human language performance, they do not replicate the cognitive mechanisms underlying human language processing. The fundamental difference lies in processing foundations: AI operates on statistical probability, while humans process language through cognition tied to consciousness, experience, and social context.

Conclusions
This comparative analysis confirms that modern AI language models, particularly LLMs, have achieved impressive capabilities in frequency-based lexical prediction and syntactic dependency modeling, yet they fundamentally differ from human language processing in three critical dimensions. First, while LLMs excel in processing speed, they lack the semantic depth and experiential grounding characteristic of human lexical access. Second, although Transformer attention mechanisms effectively capture syntactic patterns, they do not employ the hierarchical structural representations central to human sentence processing.
Third, and most critically, pragmatic understanding, which requires Theory of Mind, intentionality, and social cognition, remains beyond current AI capabilities. The core distinction lies in processing architecture: AI models operate through statistical probability derived from large-scale pattern recognition, whereas human language processing is fundamentally grounded in consciousness, experiential knowledge, and intentional communication. This divergence has significant implications for both AI development and psycholinguistic research.

Implications
For AI Development: Future architectures should incorporate modules explicitly reflecting human cognitive processes, such as working memory models, hierarchical structure building, and intention-modeling systems, to enhance contextual understanding and produce more natural human-machine interaction. Integration of psycholinguistic principles represents a promising direction toward cognitively grounded AI systems.

For Psycholinguistic Research: AI models can serve as computational testbeds for validating and refining theories of human language processing, enabling empirical examination of cognitive hypotheses at scales previously unattainable.

For Ethical AI Design: Recognition that current systems lack genuine pragmatic understanding necessitates careful consideration in deploying AI for sensitive communicative contexts where misinterpretation of intent could have serious consequences.

Limitations and Future Research
This study was limited to a literature review methodology. Future research should employ empirical comparative experiments directly testing human participants and AI models on identical psycholinguistic tasks. Additionally, investigation of emerging architectures incorporating cognitive principles (e.g., memory-augmented networks, neuro-symbolic systems) would provide valuable insights into bridging the human-machine gap in language processing.

References