Abstract
This paper presents a Socratic Agent: a small, auditable tutoring model that foregrounds the learner’s reasoning rather than exposing the model’s chain-of-thought. To mitigate the cognitive-offloading risks of general-purpose LLMs, we target edge-capable Small Language Models adapted via lightweight fine-tuning for classroom use. The agent runs a compact loop—Elicit → Structure → Test → Summarize—modulated by a stance s_t ∈ {explore, verify} and a readiness score R_t that gates the disclosure of answers via a logit-bias–based deference mechanism. Dialogue is constrained to a small set of speech acts (ask, clarify, probe, challenge, summarize, verify). Evidence and metacognition are externalized into two artifacts: a Learner Reasoning Trace (claims, steps, evidence, counterexamples) and a metacognitive ledger (goals, assumptions, plans, criteria, confidence). Tool use follows an ask-first, test-on-demand policy; curated tools (e.g., calculator, unit check, code runner, rubric check) are executed solely to evaluate learner hypotheses without revealing solutions. We outline an evaluation plan across numeric and units tasks, diagram reading, rubric-graded responses, and conceptual probes, with outcome measures for learning and retention, metacognitive coverage, trace quality, deference compliance, and cost and latency, and we discuss limitations, ablations, and a practical path from design to evidence. This work articulates an auditable tutoring architecture and a concrete evaluation plan to guide future empirical validation.
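The deference-gated loop described above can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the `Stance` enum, the readiness threshold, and the move-selection rule are all assumptions standing in for the paper's logit-bias–based gating.

```python
# Hypothetical sketch of the deference-gated Socratic loop.
# All names and thresholds here are illustrative assumptions.
from enum import Enum

class Stance(Enum):
    EXPLORE = "explore"
    VERIFY = "verify"

# The paper's constrained set of speech acts.
SPEECH_ACTS = {"ask", "clarify", "probe", "challenge", "summarize", "verify"}

def next_move(stance: Stance, readiness: float, threshold: float = 0.8) -> str:
    """Pick a speech act; answer disclosure is gated on readiness R_t.

    A simple threshold stands in for the logit-bias-based deference
    mechanism: only when R_t clears it does the agent verify/confirm.
    """
    if readiness >= threshold:
        return "verify"   # gate open: confirm the learner's answer
    if stance is Stance.EXPLORE:
        return "ask"      # elicit the learner's own reasoning first
    return "probe"        # verify stance, gate closed: test the claim

# Low readiness keeps the agent asking or probing, never telling.
print(next_move(Stance.EXPLORE, 0.3))  # ask
print(next_move(Stance.VERIFY, 0.5))   # probe
print(next_move(Stance.VERIFY, 0.9))   # verify
```

Under this sketch, the ask-first policy falls out of the gate: no branch ever discloses a solution, and curated tools would only be invoked from the "probe" branch to evaluate the learner's hypothesis.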
📖 Citation
Urteaga-Reyesvera, J.C., Cadena Martínez, R. (2026). Ask First, Test on Demand: A Deference-Gated Socratic Agent Design. In: Martínez-Villaseñor, L., et al. Advances in Computational Intelligence. MICAI 2025 International Workshops. MICAI 2025. Lecture Notes in Computer Science(), vol 16265. Springer, Cham. https://doi.org/10.1007/978-3-032-17933-3_3
BibTeX
@InProceedings{10.1007/978-3-032-17933-3_3,
author="Urteaga-Reyesvera, J. Carlos
and Cadena Mart{\'i}nez, Rodrigo",
editor="Mart{\'i}nez-Villase{\~{n}}or, Lourdes
and V{\'a}zquez, Roberto A.
and Ochoa-Ruiz, Gilberto
and Montes Rivera, Mart{\'i}n
and Zapotecas-Mart{\'i}nez, Sa{\'u}l
and Barr{\'o}n-Estrada, Mar{\'i}a Luc{\'i}a
and Mezura-Montes, Efr{\'e}n
and Gomez Chavez, Arturo",
title="Ask First, Test on Demand: A Deference-Gated Socratic Agent Design",
booktitle="Advances in Computational Intelligence. MICAI 2025 International Workshops",
year="2026",
publisher="Springer Nature Switzerland",
address="Cham",
pages="21--29",
abstract="Large Language Models (LLMs) are increasingly embedded in everyday study practices, assisting with content generation, explanations, and exam preparation. However, general-purpose LLMs are prone to exposing their internal chain-of-thought, raising cognitive-offloading risks. In this paper, a Socratic Agent is presented as an auditable tutoring model that foregrounds the learner's reasoning rather than exposing its internal chain of thought. An evaluation plan is outlined across numeric and unit-conversion tasks, diagram reading, rubric-graded responses, and conceptual probes; outcome measures cover learning and retention, metacognitive coverage, trace quality, deference compliance, and cost/latency. Limitations, ablations, and a practical path from design to evidence are also detailed.",
isbn="978-3-032-17933-3"
}