Michael Borck · Curtin University
I lead LocoLab, an applied AI research initiative at the School of Marketing and Management, Curtin University. We study what local, privacy-first AI can do for teaching and learning — on modest consumer hardware, honestly benchmarked.
"We map the floor because most people live there, and nobody is documenting it honestly."
A two-study programme investigating whether structured conversational prompts help students move beyond passive AI use toward active engagement. Evaluated with advanced frontier models, measuring depth of inquiry, iteration rates, and learning outcomes.
A four-paper framework examining how cognitive approaches developed in one AI-assisted context transfer to new domains. Studies metacognitive regulation, strategy adaptation, and the role of AI feedback in transfer.
Applies the Design Science Research methodology to the development and evaluation of AI-powered workplace simulations for professional skills education. Reports design cycles, artefact evaluation, and generalised design principles.
Analyses interaction logs from AI-assisted learning environments at scale, identifying recurring patterns in student questioning behaviour, assessing AI response quality, and testing correlations with learning outcomes.
Explores whether intentionally introducing friction into AI-assisted workflows — slower responses, constrained outputs, forced reflection steps — improves long-term retention and metacognitive development.
Tests whether students nudged toward deeper engagement with weaker local models can match the learning outcomes of un-nudged users working with frontier systems. Examines the role of model capability as a confound.
Examines how context window size affects performance, coherence, and throughput across consumer GPU memory tiers. Establishes practical guidance for deploying local models in resource-constrained educational environments.
Explores the relationship between a model's token generation speed and users' perception of its intelligence and usefulness. Investigates whether slower models are systematically underestimated in educational contexts.
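A minimal sketch of the kind of throughput harness the benchmarking work above implies: time token generation as the prompt grows, and compare rates across context lengths. The `generate` callable, the filler-padding scheme, and the run counts are illustrative assumptions, not the lab's actual protocol.

```python
import time

def measure_throughput(generate, prompt, n_runs=3):
    """Average tokens/second over n_runs calls to `generate`.

    `generate` is any callable that takes a prompt string and returns
    the list of tokens it produced (a hypothetical interface standing
    in for a local model server).
    """
    rates = []
    for _ in range(n_runs):
        start = time.perf_counter()
        tokens = generate(prompt)
        elapsed = max(time.perf_counter() - start, 1e-9)  # guard against zero
        rates.append(len(tokens) / elapsed)
    return sum(rates) / len(rates)

def sweep_context_lengths(generate, base_prompt, filler_counts):
    """Measure throughput as the prompt grows, by prepending filler words.

    Crude filler only controls length; a real study would also control
    prompt content and measure coherence separately.
    """
    return {
        n: measure_throughput(generate, ("lorem " * n) + base_prompt)
        for n in filler_counts
    }
```

In practice the sweep would be repeated at each GPU memory tier, with perceived-intelligence ratings collected alongside the raw token rates.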