<record>
  <header>
    <identifier>oai:eurokd.com:article/2081</identifier>
    <datestamp>2026-04-23</datestamp>
  </header>
  <metadata>
    <oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/">
      <dc:title>Personalized Applications of Large Language Models in Pre-University Computer Science Education: Bridging Global Equity, Mitigating Bias, and Addressing Structural Challenges</dc:title>
      <dc:relation>Volume 3</dc:relation>
      <dc:creator>Yang Xia</dc:creator>
      <dc:subject>Large Language Models</dc:subject>
      <dc:subject>Computer Science Education</dc:subject>
      <dc:subject>Personalized Learning</dc:subject>
      <dc:subject>Bias Mitigation</dc:subject>
      <dc:subject>Educational Equity</dc:subject>
      <dc:subject>Digital Divide</dc:subject>
      <dc:subject>Intelligent Tutoring</dc:subject>
      <dc:description>&lt;p style="text-align: justify;"&gt;The rapid development of Large Language Models (LLMs) has offered new opportunities for personalized education while raising concerns about bias amplification, equity gaps, and the digital divide. This quasi-experimental study explores LLM applications in high school Computer Science (CS) education, focusing on bias mitigation and bridging to higher education. A sample of 616 students from a key high school in northern China was randomly assigned to an experimental group (LLM personalized instruction, n=308) or a control group (traditional instruction, n=308). The 8-week intervention built CS skills through progressive tasks from basic syntax to comprehensive projects, aligned with university domains (e.g., ACM CS2023). Methods integrated UNESCO (2025) and OECD (2025) guidelines, emphasizing low-bandwidth solutions and bias mitigation strategies such as prompt engineering and multi-model comparison. Data analysis combined quantitative approaches (t-tests, ANOVA, bridging index) and qualitative NVivo thematic coding to assess performance gains, subgroup equity, and bias indicators (MAB and MDB). Results showed significant improvements in the LLM group (p&amp;lt;0.05), with a 15% average bridging index and an 80%&amp;plusmn;5% bias mitigation rate. However, urban-rural and gender biases still require further mitigation. This study provides empirical insights into responsible LLM use in education and proposes policy frameworks and model optimization strategies (unified super models vs. mixture-of-experts) to advance global equity and ethical AI integration.&lt;/p&gt;</dc:description>
      <dc:publisher>Individual Differences in Language Education: An International Journal</dc:publisher>
      <dc:date>2026-04-23</dc:date>
      <dc:type>Text</dc:type>
      <dc:identifier>https://api.eurokd.com/Uploads/Article/2081/idle.2025.03.04.pdf</dc:identifier>
      <dc:identifier>https://doi.org/10.32038/idle.2025.03.04</dc:identifier>
      <dc:language>en</dc:language>
      <dc:coverage>Pages 45–70</dc:coverage>
    </oai_dc:dc>
  </metadata>
</record>