Research

LocoLab’s research explores how local AI can support education, what small language models can actually do on consumer hardware, and how students interact with AI systems. Each paper lives in its own repository; published papers become public with links to the full text.

Each paper enacts one or more of the lab’s five principles. The artifacts behind these papers — built but not yet published — live in findings.

These projects have scaffolded repositories and are under active development.

| Paper | Description | Status | Principles |
| --- | --- | --- | --- |
| Cognitive Strategy Transfer | Framework for understanding how cognitive strategies transfer across AI-assisted learning contexts (4-paper series) | In progress | conversation; vary |
| DSR AI Education Simulation | Design science research on AI-powered education simulations | In progress | conversation; methodological |
| Keep Asking — Study 1: Does the Nudge Work? | Using frontier models, test whether a conversational nudge shifts students from passive delegation to active conversation and improves task outcomes | In progress | conversation |
| Keep Asking — Study 2: Does Conversation Compensate for Model Quality? | Test whether nudged students using a weak local model can match un-nudged students using a frontier model, reframing AI equity as a habits problem | Planned (pending Study 1) | conversation; vary |

Early-stage ideas with initial notes. Not yet under active development.

| Paper | Description | Principles |
| --- | --- | --- |
| PCIe Multi-GPU Inference Scaling | Does VRAM tier or architecture generation matter more? GTX vs RTX scaling comparison on consumer hardware (experiment design) | engineer |
| Context Length Effects on Small Language Models | How context window size affects small language model performance on consumer hardware | specialize |
| Perceived Intelligence vs Token Rate | Relationship between perceived AI intelligence and token generation speed | methodological |

No papers published yet — check back soon.