News · March 4, 2026 · 6 min read

University of Miami Researcher Tests If AI Chatbots Actually Help Learning

While universities debate AI policies, Dr. Selma Sabanovic is conducting actual research on whether AI chatbots enhance student learning or just enable faster cheating.

Tags: AI in education, ChatGPT, University of Miami, learning technology, academic research

AI as Your Study Buddy: Finally, Someone's Testing If These Tools Actually Help Students Learn

A researcher at the University of Miami is doing something that should've been done the moment ChatGPT hit campus: actually studying whether AI chatbots help students learn or just help them cheat faster.

While universities scramble to update their academic integrity policies and professors debate whether to ban or embrace AI tools, Dr. Selma Sabanovic and her team are taking a more pragmatic approach. They're treating AI as what most students already use it for — a learning companion — and figuring out if it's any good at the job.

What's Actually Happening Here

The research focuses on how students interact with AI tools like ChatGPT when studying complex subjects. Instead of the usual hand-wringing about whether students are using AI to write their essays (spoiler: they are), this study examines the learning process itself. Are students engaging critically with AI responses? Are they fact-checking? Are they using AI as a springboard for deeper understanding, or just as a fancy copy-paste machine?

The early findings suggest something most of us already suspected: it depends entirely on how you use it.

When students treat AI like a magic answer box — asking it to solve problems and accepting responses at face value — they learn approximately nothing. But when they use it as an interactive tutor that explains concepts, breaks down complex ideas, and helps them work through problems step-by-step, the results look more promising.

The Problem With How We're Talking About AI in Education

Here's my issue with most of the AI-in-education discourse: it's either utopian ("AI will revolutionize learning!") or dystopian ("Students will never learn to think!"). Both perspectives miss the point.

AI tools aren't going away. Banning them is like banning calculators in 1985: technically possible, practically stupid. The real question isn't whether students should use AI, but how they can use it without turning their brains into mush.

The University of Miami research matters because it's asking the right questions:

  • How do students actually interact with these tools in practice?
  • What interaction patterns lead to better learning outcomes?
  • Can we teach students to use AI in ways that enhance rather than replace critical thinking?

The Learning Buddy Framework

The "learning buddy" framing is surprisingly apt. A good study buddy doesn't just give you answers — they ask questions, challenge your assumptions, and help you work through problems. A bad study buddy lets you copy their homework.

Current AI tools can be either, depending on how you prompt them. Consider these two approaches:

Bad approach:

User: "Explain quantum entanglement"
AI: [Provides complete explanation]
User: [Copies explanation, moves on]

Better approach:

User: "I'm trying to understand quantum entanglement. Can you ask me 
questions to check my current understanding?"
AI: "Sure! Let's start with the basics. In your own words, what do you 
think 'quantum' refers to in physics?"
User: [Attempts explanation]
AI: "Good start. Now let's dig deeper..."

The second approach forces active engagement. The student has to articulate their understanding, identify gaps, and build knowledge iteratively. The AI becomes a Socratic dialogue partner rather than a vending machine for facts.
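The difference between the two patterns doesn't have to depend on each student's prompting skill; it can be baked into the tool itself through a system prompt. Here's a minimal sketch of that idea using the openai Python SDK; the prompt wording, model choice, and helper function are illustrative assumptions on my part, not anything from the study:

# Minimal sketch of a "Socratic tutor" wrapper, assuming the openai
# Python SDK (v1.x). Prompt wording and model choice are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SOCRATIC_PROMPT = (
    "You are a study partner, not an answer machine. Never hand over a "
    "complete explanation up front. Ask the student one question at a "
    "time to probe their understanding, point out gaps in their answers, "
    "and only fill in details after they've made an attempt."
)

messages = [{"role": "system", "content": SOCRATIC_PROMPT}]

def tutor_turn(student_message: str) -> str:
    """Send one student message and return the tutor's reply."""
    messages.append({"role": "user", "content": student_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

print(tutor_turn("I'm trying to understand quantum entanglement."))

Wrapped this way, even a student who asks a lazy question gets a question back instead of a finished answer.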

Why This Matters Beyond Universities

The education sector is the canary in the coal mine for AI integration everywhere. How we figure out the learning buddy problem will inform how we use AI in professional development, corporate training, and skill acquisition across industries.

If we can't teach college students — presumably smart, motivated people — to use AI tools effectively for learning, what hope do we have for the broader workforce?

The research also highlights a critical skill gap that nobody's talking about: prompt literacy. Knowing how to ask good questions of an AI system is becoming as important as knowing how to Google effectively was 15 years ago. Maybe more important, because AI can be far more helpful or far more misleading depending on how you engage with it.

The Technical Reality Check

Let's be honest about what current AI models can and can't do as learning tools:

What they're good at:

  • Breaking down complex concepts into simpler components
  • Providing multiple explanations from different angles
  • Generating practice problems and examples
  • Answering follow-up questions patiently (unlike human TAs at 2 AM)

What they suck at:

  • Knowing when they're wrong (hallucinations are still a massive problem)
  • Understanding what you specifically don't understand
  • Pushing back when you're on the wrong track
  • Adapting to different learning styles without explicit instruction

The hallucination issue is particularly thorny for education. An AI that confidently explains a concept incorrectly is worse than useless — it's actively harmful. Students need to develop a healthy skepticism and fact-checking habit, which runs counter to how we usually interact with authoritative-sounding text.
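One low-tech way to build that skepticism into the routine is a canned follow-up that students send after any AI explanation. The wording below is a hypothetical sketch, not something from the research:

# Canned fact-check follow-up; send after any explanation the model gives.
VERIFY_FOLLOWUP = (
    "List every factual claim in your last answer as a numbered list. "
    "For each one, say how confident you are and name the kind of source "
    "(textbook chapter, review paper, official documentation) I could "
    "check it against."
)
print(VERIFY_FOLLOWUP)

It won't catch everything, since a model can be confidently wrong about its own confidence, but it turns "trust but verify" into a habit instead of an aspiration.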

What Educators Should Actually Do

Rather than fighting a losing battle against AI use, educators should focus on:

  1. Teaching AI literacy explicitly: Make "how to use AI as a learning tool" part of the curriculum, not an afterthought.

  2. Designing AI-resistant assessments: Focus on synthesis, analysis, and application rather than regurgitation. If ChatGPT can ace your exam, your exam sucks.

  3. Modeling good AI use: Show students how you use AI tools in your own work and research. Demystify the process.

  4. Creating structured AI interactions: Provide frameworks and prompts that guide students toward productive engagement rather than lazy shortcuts (see the sketch below).
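Point four is the most actionable, so here's one concrete form it could take: a fill-in-the-blank prompt template handed out alongside an assignment. This is a hypothetical sketch in Python; the template wording and names are mine, not from the research:

# Hypothetical "structured interaction" handout: students must write out
# their own understanding before the chatbot is allowed to explain.
STUDY_PROMPT_TEMPLATE = """\
I'm studying {topic} for {course}.
Here is my current understanding, in my own words:
{my_explanation}

Do three things, in order:
1. Point out anything wrong or missing in my explanation.
2. Ask me one follow-up question that tests whether I really get it.
3. Do NOT give a complete explanation unless I explicitly ask for one.
"""

# A student fills in the blanks before ever talking to the chatbot.
filled = STUDY_PROMPT_TEMPLATE.format(
    topic="quantum entanglement",
    course="intro physics",
    my_explanation=(
        "Two particles share one state, so measuring one instantly "
        "determines the other, even at a distance."
    ),
)
print(filled)  # paste into any chatbot, or send through an API

The template forces the self-explanation step the research suggests matters, regardless of which chatbot sits on the other end.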

The Bottom Line

The University of Miami research is valuable because it treats AI as a tool rather than a threat or a panacea. AI chatbots won't replace learning any more than calculators replaced math education. But they will change what we need to teach and how we teach it.

The students who learn to use AI as a genuine learning buddy — questioning it, challenging it, using it to deepen rather than shortcut understanding — will have a massive advantage. The ones who treat it as a cheat code will find themselves unprepared for work that requires actual thinking.

The real test isn't whether AI can help students learn. It's whether we can teach students to use AI in ways that make them smarter rather than just faster at appearing smart. Based on this research, the answer is yes — but it requires intention, instruction, and a willingness to engage with these tools honestly rather than pretending they don't exist.

