Michael Borck · Curtin University

AI, education, and human-computer interaction

I lead LocoLab, an applied AI research initiative at the School of Marketing and Management, Curtin University. We study what local, privacy-first AI can do for teaching and learning — on modest consumer hardware, honestly benchmarked.

"We map the floor because most people live there, and nobody is documenting it honestly."
Under Submission · 5 papers
Submitted 2025 · AI in Education · Prompt Design
Keep Asking: Conversational Prompts and Active AI Engagement in Student Learning
Michael Borck

Investigates whether structured conversational prompts help students move beyond passive AI use toward active engagement. Tested with advanced frontier models, measuring depth of inquiry, iteration rates, and learning outcomes.

Submitted 2025 · Cognitive Science · AI in Education
Cognitive Strategy Transfer Across AI-Assisted Learning Environments
Michael Borck

A four-paper framework examining how cognitive approaches developed in one AI-assisted context transfer to new domains. Studies metacognitive regulation, strategy adaptation, and the role of AI feedback in transfer.

Submitted 2025 · Design Science · Simulation
Design Science Research for AI-Powered Educational Simulation
Michael Borck

Applies the Design Science Research methodology to the development and evaluation of AI-powered workplace simulations for professional skills education. Reports design cycles, artefact evaluation, and generalised design principles.

Submitted 2025 · Learning Analytics · AI in Education
Patterns in Educational AI Chat Interactions: A Large-Scale Analysis
Michael Borck

Analyses interaction logs from AI-assisted learning environments to identify recurring patterns in student questioning behaviour, AI response quality, and correlation with learning outcomes.

Submitted 2025 · Pedagogy · AI in Education
Deliberate Friction as a Pedagogical Strategy in AI-Assisted Learning
Michael Borck

Explores whether intentionally introducing friction into AI-assisted workflows — slower responses, constrained outputs, forced reflection steps — improves long-term retention and metacognitive development.

In Progress · 3 papers
Draft 2026 · AI in Education · Local Models
Keep Asking Study 2: Local Model Constraints and Engagement Parity
Michael Borck

Tests whether students nudged toward deeper engagement with weaker local models can match the learning outcomes of un-nudged users working with frontier systems. Examines the role of model capability as a confound.

Draft 2026 · Local Models · Benchmarking
Context Length Effects on Small Language Models for Consumer Hardware
Michael Borck

Examines how model context window sizes impact performance, coherence, and throughput on consumer-grade GPU hardware. Establishes practical guidance for deploying local models in resource-constrained educational environments.
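A minimal sketch of the kind of sweep such a benchmark implies. Everything here is hypothetical — the `generate` function is a stub standing in for a call to a local inference server, and the cost constants and context lengths are illustrative, not the study's actual measurements:

```python
import time  # a real harness would time actual model calls

def generate(prompt_tokens: int, output_tokens: int = 128) -> float:
    """Hypothetical stand-in for a local model call; returns elapsed seconds.
    Simulates prefill cost growing with context length plus a fixed
    per-token decode cost, which is the shape real runs tend to show."""
    return prompt_tokens * 1e-4 + output_tokens * 0.02

def throughput(prompt_tokens: int, output_tokens: int = 128) -> float:
    """End-to-end throughput in tokens per second at a given context length."""
    elapsed = generate(prompt_tokens, output_tokens)
    return output_tokens / elapsed

# Sweep context lengths typical of consumer-GPU deployments.
for ctx in (512, 2048, 8192):
    print(f"{ctx:>5}-token context: {throughput(ctx):.1f} tok/s")
```

Swapping the stub for a real client call turns this into a usable harness; the sweep structure (fixed output length, varied context) is what isolates the context-window effect.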

Draft 2026 · HCI · Local Models
Perceived Intelligence vs Token Rate in Local Language Models
Michael Borck

Explores the relationship between a model's token generation speed and users' perception of its intelligence and usefulness. Investigates whether slower models are systematically underestimated in educational contexts.
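One way such a manipulation might be instrumented — a hypothetical pacing wrapper that throttles a capable model's output to a fixed token rate, isolating speed from answer quality. This is an illustrative sketch, not the study's actual apparatus:

```python
import time

def stream_at_rate(tokens, tokens_per_second):
    """Yield tokens at a fixed pace. Throttling a strong model this way
    lets an experiment vary perceived speed while holding content constant."""
    delay = 1.0 / tokens_per_second
    for tok in tokens:
        time.sleep(delay)  # enforce the target token rate
        yield tok

# Example: pace five tokens at 50 tokens/second (~0.1 s total).
start = time.monotonic()
out = list(stream_at_rate(["The", " answer", " is", " forty", "-two"], 50))
elapsed = time.monotonic() - start
```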

Research Studies · 6 studies
Keep Asking · active

A two-study programme examining whether structured conversational nudges help students engage more deeply with AI — and whether that engagement transfers across model capability tiers.

Cognitive Strategy Transfer · active

A four-paper framework examining how cognitive approaches transfer across AI-assisted learning environments. Covers metacognitive regulation, strategy adaptation, and the role of AI feedback.

DSR AI Education Simulation · active

Design science research programme applying rigorous artefact-evaluation cycles to AI-powered professional simulation tools used in workplace readiness education.

Educational AI Chat Analysis · active

Large-scale analysis of interaction logs from AI-assisted learning environments, examining questioning patterns, response quality, and correlations with outcomes.

Deliberate Friction · active

Investigates whether introducing intentional friction into AI-assisted workflows — slower responses, forced reflection, constrained outputs — improves long-term retention.

Consumer Hardware Benchmarking · active

Systematic performance testing of local language models across consumer GPU memory tiers — context length effects, token rate, and perceived intelligence.

Themes
AI in Education How local, privacy-first AI tools change the teaching and learning dynamic — from assessment to simulation to tutoring.
Human-Computer Interaction How people interact with AI systems, especially when hardware constraints shape the experience.
Constraint-Driven Innovation What emerges when you take away the cloud, the budget, and the assumptions. Honest benchmarks on modest hardware.