Abstract
This paper presents a Socratic Agent: a small, auditable tutoring model that foregrounds the learner's reasoning rather than exposing the model's chain-of-thought. To mitigate the cognitive-offloading risks of general-purpose LLMs, we target edge-capable Small Language Models adapted via lightweight fine-tuning for classroom use. The agent runs a compact loop—Elicit → Structure → Test → Summarize—modulated by a stance s_t ∈ {explore, verify} and a readiness score R_t that gates the disclosure of answers via a logit-bias–based deference mechanism. Dialogue is constrained to a small set of speech acts (ask, clarify, probe, challenge, summarize, verify). Evidence and metacognition are externalized into two artifacts: a Learner Reasoning Trace (claims, steps, evidence, counterexamples) and a metacognitive ledger (goals, assumptions, plans, criteria, confidence). Tool use follows an ask-first, test-on-demand policy; curated tools (e.g., calculator, unit check, code runner, rubric check) are executed solely to evaluate learner hypotheses, never to reveal solutions. We outline an evaluation plan spanning numeric and unit tasks, diagram reading, rubric-graded responses, and conceptual probes, with outcome measures for learning and retention, metacognitive coverage, trace quality, deference compliance, and cost and latency; we also discuss limitations, ablations, and a practical path from design to evidence. This work is offered as a design paper that articulates an auditable tutoring architecture and a concrete evaluation plan to guide future empirical validation.
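The core loop described above can be illustrated with a minimal sketch. All names here (`SocraticLoop`, the smoothed readiness update, the 0.7 disclosure threshold) are hypothetical simplifications, not the paper's implementation; in particular, the readiness-gated deference is modeled as a simple threshold on R_t rather than an actual logit-bias mechanism:

```python
# Hypothetical sketch of the Elicit -> Structure -> Test -> Summarize loop
# with a readiness-gated deference policy. Names and constants are
# illustrative assumptions, not the authors' implementation.

SPEECH_ACTS = {"ask", "clarify", "probe", "challenge", "summarize", "verify"}


def deference_gate(readiness: float, threshold: float = 0.7) -> bool:
    """Stand-in for the logit-bias mechanism: answers may be
    disclosed only once the readiness score R_t clears a threshold."""
    return readiness >= threshold


class SocraticLoop:
    def __init__(self) -> None:
        self.readiness = 0.0          # R_t, starts with no evidence of mastery
        self.stance = "explore"       # s_t in {explore, verify}
        self.trace: list[str] = []    # Learner Reasoning Trace (claims, steps, ...)

    def step(self, learner_utterance: str, evidence_score: float) -> str:
        """Record the learner's move, update R_t and s_t, and pick the
        next speech act. evidence_score in [0, 1] is assumed to come
        from curated tools (calculator, unit check, rubric check, ...)."""
        self.trace.append(learner_utterance)
        # Smoothed readiness update (illustrative exponential average).
        self.readiness = 0.8 * self.readiness + 0.2 * evidence_score
        # Shift stance from exploring ideas to verifying claims.
        self.stance = "verify" if self.readiness > 0.5 else "explore"
        if deference_gate(self.readiness):
            return "summarize"        # readiness gate open: consolidate
        return "probe" if self.stance == "verify" else "ask"
```

With these assumptions, repeated well-evidenced learner turns move the agent from `ask` (explore) through `probe` (verify) to `summarize` once the gate opens, without the agent ever emitting a solution itself.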
📖 Citation
Urteaga-Reyesvera, J. C., & Cadena Martinez, Rodrigo (2025).
Ask First, Test on Demand: A Deference-Gated Socratic Agent Design.
In MICAI 2025 Workshops (Lecture Notes in Artificial Intelligence).
Springer Nature Switzerland AG. (Forthcoming)
BibTeX
@inproceedings{urteaga2025agentcards,
author = {Urteaga-Reyesvera, J. Carlos and Cadena Martinez, Rodrigo},
title = {Ask First, Test on Demand: A Deference-Gated Socratic Agent Design},
booktitle = {Proceedings of the MICAI 2025 Workshops},
series = {Lecture Notes in Artificial Intelligence},
publisher = {Springer Nature Switzerland AG},
note = {Forthcoming},
year = {2025},
}