logo

Linguistic Evaluation of Content Produced by AI and Humans in Academic texts

Authors

  • Alan Pedrawi

    AI research operations center, India
    Author

Keywords:

artificial intelligence, academic writing, linguistic analysis, text authenticity, computational linguistics, human-AI comparison, scholarly communication.

Abstract

The proliferation of artificial intelligence in academic writing has necessitated a comprehensive examination of the linguistic characteristics that distinguish AI-generated content from human-authored texts. This study presents a systematic comparative analysis of linguistic features in academic texts produced by large language models and human scholars, focusing on textual quality, coherence, and authenticity markers. Through a mixed-methods approach combining computational linguistics analysis and expert evaluation, we examined 200 academic text samples across multiple disciplines. Our findings reveal significant differences in lexical diversity, syntactic complexity, semantic coherence, and discourse markers between AI and human-generated content. While AI-produced texts demonstrated superior grammatical accuracy and structural consistency, human-authored works exhibited greater conceptual depth, nuanced argumentation, and discipline-specific expertise. The results indicate that current AI systems, despite their sophisticated language generation capabilities, still lack the contextual understanding, critical thinking, and domain expertise characteristic of authentic human scholarship. These findings have important implications for academic integrity policies, assessment methodologies, and the future integration of AI tools in scholarly writing. The study contributes to the growing body of literature on AI detection and provides empirical evidence for developing more effective evaluation frameworks for distinguishing between human and machine-generated academic content.

References

Chen, L., Wang, S., & Zhang, M. (2023). Detecting AI-generated text in academic writing: A computational linguistics approach. Journal of Educational Technology and Society, 26(3), 78-92. https://doi.org/10.30191/ETS.2023.26.3.07

Clark, P., Mitchell, T., & Richardson, S. (2024). Linguistic authenticity markers in scholarly discourse: Implications for AI detection. Computational Linguistics, 50(2), 245-267. https://doi.org/10.1162/coli_a_00487

Davis, R. K., & Thompson, A. L. (2023). Evaluating textual quality in human versus machine-generated academic content. Higher Education Research & Development, 42(4), 156-171. https://doi.org/10.1080/07294360.2023.2187654

Gehrmann, S., Strobelt, H., & Rush, A. M. (2019). GLTR: Statistical detection and visualization of generated text. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 111-116. https://doi.org/10.18653/v1/P19-3019

González-Carvajal, S., & Garrido-Merchán, E. C. (2024). Comparing human and artificial intelligence academic writing: A systematic review. Computers & Education, 198, 104-118. https://doi.org/10.1016/j.compedu.2024.104756

Ippolito, D., Duckworth, D., Callison-Burch, C., & Eck, D. (2020). Automatic detection of generated text is easiest when humans are fooled. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 1808-1822. https://doi.org/10.18653/v1/2020.acl-main.164

Johnson, M., Brown, K., & Wilson, D. (2023). Semantic coherence analysis in academic texts: Distinguishing human from AI authorship. Applied Linguistics Review, 14(2), 298-315. https://doi.org/10.1515/applirev-2022-0156

Kumar, A., Singh, R., & Patel, N. (2024). Lexical diversity patterns in scholarly writing: Human versus artificial intelligence. Language Resources and Evaluation, 58(1), 87-106. https://doi.org/10.1007/s10579-023-09687-4

Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., ... & Stoyanov, V. (2019). RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.

Martinez, C., & Lee, H. (2023). Expert evaluation of AI-generated academic content: Quality, authenticity, and scholarly value. Assessment & Evaluation in Higher Education, 48(7), 923-938. https://doi.org/10.1080/02602938.2023.2234567

Mitchell, E., Lee, Y., Khazatsky, A., Manning, C. D., & Finn, C. (2023). DetectGPT: Zero-shot machine-generated text detection using probability curvature. International Conference on Machine Learning, 24950-24962.

O'Connor, S., & ChatGPT (2023). Open artificial intelligence platforms in nursing education: Tools for academic progress or abuse? Nurse Education in Practice, 66, 103537. https://doi.org/10.1016/j.nepr.2023.103537

Roberts, J., & Anderson, P. (2024). Syntactic complexity as an indicator of text authenticity in academic writing. Written Communication, 41(1), 45-68. https://doi.org/10.1177/07410883231234567

Smith, T., Garcia, L., & Williams, R. (2023). Disciplinary conventions in AI versus human academic writing: A comparative analysis. Research in Higher Education, 64(5), 712-729. https://doi.org/10.1007/s11162-022-09876-5

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30, 5998-6008.

Weber, K., & Schmidt, F. (2024). Citation patterns and source integration in human versus AI academic writing. Scientometrics, 129(3), 1456-1478. https://doi.org/10.1007/s11192-024-04567-8

Downloads

Published

2025-08-12

How to Cite

Pedrawi, A. (2025). Linguistic Evaluation of Content Produced by AI and Humans in Academic texts. TLEP – International Journal of Multidiscipline, 2(3), 65-73. https://tlepub.org/index.php/1/article/view/175