Towards Transparent Learning Analytics: A Study on Explainable AI in Cognitive Skill Prediction
Keywords:
Explainable Artificial Intelligence, Learning Analytics, Cognitive Skill Prediction, SHAP, LIME, Educational Data Mining, Interpretability, Transparent AI

Abstract
The increasing adoption of artificial intelligence (AI) in educational technology has enabled advanced predictive models capable of analyzing and forecasting students' cognitive skills. However, the opaque nature of many high-performance AI models limits their trustworthiness and practical utility in educational settings. This paper explores the integration of Explainable AI (XAI) techniques into cognitive skill prediction to achieve transparency, interpretability, and fairness in learning analytics. By employing methods such as Shapley Additive Explanations (SHAP), Local Interpretable Model-Agnostic Explanations (LIME), and attention-based neural networks, the proposed framework identifies the factors influencing cognitive skill development while maintaining robust predictive performance. Experimental results on large-scale educational datasets show marked improvements in interpretability without compromising accuracy, enabling educators to make data-driven decisions for personalized learning interventions and equitable assessment practices.
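To make the SHAP-based attribution mentioned above concrete, the sketch below computes exact Shapley values from scratch for a toy predictor. This is a minimal illustration of the underlying game-theoretic idea, not the paper's actual pipeline: the SHAP library uses efficient approximations rather than this exponential enumeration, and the feature names (`time_on_task`, `quiz_accuracy`, `forum_posts`) are hypothetical, not drawn from the study's datasets.

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values for `model` at instance `x`.

    Features outside a coalition are replaced by `baseline` values.
    Exponential in the number of features; fine for small toy examples.
    """
    n = len(x)
    phi = [0.0] * n
    features = list(range(n))
    for i in features:
        others = [j for j in features if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight of a coalition of size |S| out of n players
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in features]
                without_i = [x[j] if j in S else baseline[j] for j in features]
                phi[i] += w * (model(with_i) - model(without_i))
    return phi

# Hypothetical linear "cognitive skill" score over three learner features.
def predict(v):  # v = [time_on_task, quiz_accuracy, forum_posts]
    return 0.5 * v[0] + 2.0 * v[1] + 0.1 * v[2]

x = [10.0, 0.8, 5.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(predict, x, baseline)
# For a linear model with a zero baseline, phi_i = w_i * x_i,
# and the attributions sum to predict(x) - predict(baseline).
```

The additivity property checked in the comment (attributions summing to the difference between the model's output on the instance and on the baseline) is what lets an educator read each value as that feature's contribution to the predicted skill score.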
