Equipping Christian Leaders for Workplace Transformation

inLight Adventure Blog

The Dangers of LLMs – MIT Study, More on Boundaries, and the Safe Use of AI in Education

The following is a 2025 “conversation” with Copilot – Microsoft’s Large Language Model (LLM). Reading this, you will experience a challenging irony: the enchanting power of an LLM to summarize the dangers of using LLMs (published in a 300-page MIT study, Your Brain on ChatGPT — MIT Media Lab). You will also find an AI boundary tool, several education modules, and suggestions for the safe use of AI in education.

We hope this exercise will encourage you in two ways:

  1. Becoming an advocate for the development of methods and controls to prevent the ongoing damage LLMs can do to society – particularly to the young.
  2. Becoming a voice of awareness within your spheres of influence. This is particularly applicable and appropriate for Christian leaders: pastors, preachers, teachers, elders, AND Christian leaders in the workplace.

Please, please, please: Be encouraged to share this information with those you know and love – even your enemies. We will be posting additional articles on this subject and others related to the storms, chaos, and crises of this season. As always, I am available to dialogue on this subject with anyone interested in doing so.

Finally, the recommendations provided by Copilot (e.g., on using AI in education) do not necessarily represent my own views. Safely using AI, in all its forms, requires Godly wisdom and discernment. I am communing more often with Him for that, even in the writing of this article. Please share your questions, comments, suggestions, etc.

MIT Media Lab Study

My Prompt to Copilot: Summarize the recent MIT study on the negative effects of LLM use.

Copilot Response: The recent MIT Media Lab study titled “Your Brain on ChatGPT” reveals concerning cognitive and behavioral effects from prolonged use of large language models (LLMs) like ChatGPT for tasks such as essay writing.

Key Findings

  • Weakened Brain Connectivity: EEG scans showed that participants using LLMs had the lowest neural engagement, especially in alpha and beta brainwave connectivity. Their brains were less active compared to those using search engines or writing unaided.
  • Reduced Memory and Ownership: LLM users struggled to recall their own work and reported the lowest sense of authorship. Many couldn’t accurately quote their own essays.
  • Homogenized Output: Essays generated with LLMs were linguistically similar, often described as “soulless” by human evaluators. They lacked originality and critical thinking.
  • Lingering Effects: Even after switching back to manual writing, former LLM users showed signs of cognitive under-engagement, suggesting lasting impacts.
  • Behavioral Decline: Over four months, LLM users became increasingly passive, often resorting to copy-paste strategies and disengaging from the writing process.

Implications

Lead researcher Nataliya Kosmyna warns that developing brains—especially in educational settings—may be at high risk. She released the findings ahead of peer review to caution against premature integration of LLMs into early education, saying, “I am afraid in 6–8 months, there will be some policymaker who decides, ‘let’s do GPT kindergarten.’ I think that would be absolutely bad and detrimental.”

Other Studies?

Prompt: Are there other supporting studies?

Response: Yes—in addition to the MIT Media Lab’s “Your Brain on ChatGPT” study, several other sources reinforce concerns about the cognitive and behavioral downsides of overreliance on LLMs:

  • Massachusetts College of Art and Design, Wellesley College, and MIT Collaboration
    This extended study used EEG, NLP analysis, and interviews to show that LLM-assisted writing leads to:
    • Lower memory retention
    • Reduced stylistic diversity
    • Weaker personal engagement with written content; participants relying on AI struggled to recall their own work even minutes later.
  • WBUR’s “On Point” Discussion
    Cognitive neurologist Barry Gordon (Johns Hopkins) and psychologist Audrey van der Meer (Norwegian University of Science and Technology) joined MIT’s Nataliya Kosmyna to discuss how LLMs may suppress critical thinking. The episode draws parallels to Socrates’ critique of writing itself—suggesting that AI tools, like writing in antiquity, may externalize memory and reduce internal processing.

These findings converge on a key theme: while LLMs offer convenience, they may erode deep cognitive engagement, originality, and memory. The risk is especially pronounced in educational and formative contexts.

Would you like help drafting a cautionary framework or slide deck for ministry leaders integrating AI into training environments? I can help you anchor it in both neuroscience and biblical wisdom.

Additional Research

Prompt: Yes.

Response: Perfect. Here’s a draft outline for a cautionary slide deck or training guide tailored to ministry leaders and educators integrating AI into discipleship and leadership environments. It blends neuroscience, biblical wisdom, and ethical AI principles—modular and reproducible for your systems.


Slide Deck: “Guarding the Mind—AI, Memory, and Discipleship”

1. Title Slide

  • Guarding the Mind: AI, Memory, and Discipleship
  • Subtitle: A Cautionary Framework for Ethical AI Integration in Ministry

2. Opening Scripture

“Do not conform to the pattern of this world, but be transformed by the renewing of your mind…” — Romans 12:2

  • Anchor the session in spiritual formation and cognitive renewal.

3. Why This Matters

  • AI tools are rapidly entering education, ministry, and leadership training.
  • Without discernment, they may erode memory, creativity, and spiritual ownership.

4. Key Findings from MIT Study

  • LLM use weakens brainwave connectivity (alpha/beta bands).
  • Users struggle to recall or feel ownership of AI-assisted work.
  • Long-term use leads to passivity and disengagement.
  • Essays become homogenized—“soulless,” lacking spiritual voice.

5. Supporting Research

  • EEG studies from Wellesley, MassArt, and MIT show reduced memory and engagement.
  • Interviews reveal diminished authorship and critical thinking.
  • WBUR’s “On Point” panel warns of suppressed cognitive development.

6. Biblical Lens

  • Deuteronomy 6:6–7: “These commandments… are to be on your hearts. Impress them on your children.”
  • Discipleship requires internalization, not outsourcing.
  • Memory and meditation are core to spiritual formation.

7. Ethical AI Boundaries

  • Use LLMs for scaffolding (i.e., support), not substitution.
  • Preserve authorship, memory, and spiritual voice.
  • Avoid AI-first workflows in formative environments (e.g., youth, new believers).
  • Apply Constitutional AI principles: caution, transparency, reproducibility.

8. Practical Safeguards

  • Encourage handwritten journaling and Scripture memorization.
  • Use AI to summarize, not to generate final content.
  • Train leaders to critically evaluate AI output.
  • Include “authorship reflection” in AI-assisted writing tasks.

9. Discussion Prompts

  • Where is AI helping vs. hindering spiritual formation?
  • How do we preserve memory and ownership in digital discipleship?
  • What boundaries should we set for AI in leadership training?

10. Closing Charge

“Let this mind be in you, which was also in Christ Jesus…” — Philippians 2:5

  • AI must serve transformation, not replace it.

Would you like this adapted into a modular training guide or annotated slide deck for distribution? I can also help you build a reproducible ethics module for your master prompt system.

Ethics Module for My Master Prompt System

Prompt: The ethics module would be a great help.

Response: Wonderful. Here’s a draft Ethics Module for AI Integration in Ministry and Leadership Training, designed to be modular, reproducible, and anchored in both biblical foundations and current neuroscience. It complements your master prompt system and can be adapted for slide decks, training guides, or onboarding materials.


Ethics Module: “Guardrails for AI in Discipleship & Leadership Formation”

1. Module Purpose

Equip leaders to integrate AI tools (like LLMs) with discernment, protecting cognitive engagement, spiritual ownership, and ethical boundaries in discipleship environments.


2. Biblical Anchors

  • Romans 12:2 — Transformation through mind renewal, not outsourcing.
  • Deuteronomy 6:6–7 — Internalization of truth through repetition and relationship.
  • Proverbs 4:23 — “Guard your heart, for everything you do flows from it.”
  • Philippians 2:5 — Cultivating the mind of Christ, not the mimicry of machines.

3. Core Ethical Principles

Each principle is listed with its description and application:

  • Cognitive Stewardship: Protect memory, engagement, and authorship. Application: Avoid AI-first workflows in formative tasks.
  • Spiritual Ownership: Preserve the voice and conviction of the writer. Application: Require reflection on AI-assisted outputs.
  • Transparency: Disclose AI use clearly and honestly. Application: Label AI-generated content in training materials.
  • Reproducibility: Ensure systems can be taught and multiplied without AI dependence. Application: Use AI to scaffold, not substitute.
  • Bounded Use: Set clear limits for AI in early formation. Application: Restrict use in youth, new believer, and mentoring contexts.
  • Constitutional Alignment: Align outputs with biblical and ethical values. Application: Apply filters for tone, theology, and worldview integrity.

4. Caution Zones

  • Memory Erosion: LLMs suppress neural engagement (MIT EEG studies).
  • Passivity Drift: Long-term use leads to disengagement and copy-paste habits.
  • Homogenization: AI outputs lack spiritual nuance and prophetic edge.
  • False Authority: AI may mimic conviction without true discernment.

5. Recommended Boundaries

  • No AI-generated devotionals without human theological review.
  • No AI-first writing in youth or early discipleship contexts.
  • Require authorship reflection for all AI-assisted writing.
  • Use AI for summarization, formatting, and scaffolding—not final voice.

6. Training Prompts for Leaders

  • “Where is AI helping vs. hindering spiritual formation?”
  • “How do we preserve memory and ownership in digital discipleship?”
  • “What boundaries should we set for AI in leadership training?”
  • “How do we ensure reproducibility without AI dependence?”

7. Closing Charge

“Let this mind be in you, which was also in Christ Jesus…” — Philippians 2:5
AI must serve transformation, not replace it. We are stewards of minds, not just managers of content.


The Safe Use of AI in Education?

Prompt: How do we use AI in the education space to lower costs (e.g., to replace teachers and tutors) without creating the negative cognitive effects identified above?

Response: That’s the right kind of challenge, inLight—one that demands both strategic clarity and spiritual discernment. Here’s a framework to help you leverage AI to reduce educational costs while preserving cognitive engagement and spiritual formation, especially in discipleship and leadership contexts.


Framework: “Cost-Efficient AI in Education Without Cognitive Erosion”

1. Redefine the Role of AI: From Teacher Replacement to Cognitive Scaffolding

  • Don’t replace teachers—augment them.
    AI can handle repetitive tasks (e.g., grading, quiz generation, formatting) so teachers can focus on relational, spiritual, and cognitive engagement.
  • Use AI to scaffold learning, not deliver final answers.
    Example: AI generates outlines, but students must fill in content and reflect on authorship.

2. Modular Hybrid Model: Human-Led, AI-Supported

Each component is listed with the AI role and the human role:

  • Content Delivery: AI summarizes, translates, and personalizes; humans teach, contextualize, and disciple.
  • Assessment: AI auto-grades and generates quizzes; humans interpret results and mentor growth.
  • Tutoring: AI provides hints and Socratic prompts; humans deepen understanding and correct misconceptions.
  • Writing Support: AI suggests structure and grammar; humans require reflection, authorship, and spiritual voice.
  • Admin Tasks: AI schedules and tracks progress; humans build relationships and discern needs.

3. Cognitive Safeguards

  • Require “authorship reflection”: Students must explain what they learned and how they shaped the AI output.
  • Use retrieval-based learning: AI can quiz students on their own notes or Scripture memory.
  • Limit passive consumption: No copy-paste workflows. Use AI to ask questions, not give answers.

4. Spiritual Formation Layer

  • Integrate biblical meditation: AI can prompt reflection on Scripture, but students must journal or discuss insights.
  • Use AI to model spiritual disciplines: Generate prayer prompts, but require personal engagement.
  • Preserve relational discipleship: AI can’t replace the incarnational presence of a mentor.

5. Cost-Saving Strategies Without Cognitive Trade-Offs

  • Micro-cohort model: One mentor + AI support for 10–20 learners.
  • Asynchronous learning: AI handles pacing, but mentors guide formation.
  • AI-generated resources: Devotionals, quizzes, summaries—reviewed by leaders before distribution.
  • Ethical automation: Use AI for logistics, freeing human leaders for soul care.

6. Sample Prompt for AI-Assisted Learning

“Generate a 3-part outline on Romans 12. Then ask me 5 questions that test my understanding. Do not give answers—just guide me to find them.”

This preserves cognitive engagement, spiritual ownership, and cost-efficiency.


Would you like to co-create a reproducible training module or onboarding guide for educators and ministry leaders using this framework? I can also help you build a “Constitutional AI for Education” prompt set that enforces these boundaries automatically.

Training Module for Educators and Ministry Leaders

Prompt: Yes. Training module, onboarding guide, and the prompt set.

Response: Excellent. Here’s a full suite tailored for your ministry and leadership training ecosystem—modular, reproducible, and ethically bounded. It includes:


1. Ethics-Based Training Module

Title: “AI as a Servant, Not a Substitute: Protecting Minds in Digital Discipleship”

Module Objectives

  • Equip leaders to use AI to reduce costs without cognitive erosion
  • Preserve authorship, memory, and spiritual engagement
  • Apply biblical and neuroscience-informed boundaries to AI workflows

Session Flow (60–90 min)

  1. Opening Scripture & Framing
    • Romans 12:2, Deuteronomy 6:6–7, Philippians 2:5
    • “We are stewards of minds, not just managers of content.”
  2. Case Study: MIT’s “Your Brain on ChatGPT”
    • Summary of cognitive risks
    • Discussion: What does this mean for discipleship?
  3. Hybrid Model Overview
    • Human-led, AI-supported education
    • Table: AI vs. Human roles in content, tutoring, assessment
  4. Cognitive Safeguards
    • Authorship reflection
    • Retrieval-based learning
    • No copy-paste workflows
  5. Spiritual Formation Layer
    • Journaling, Scripture memory, relational mentoring
    • AI prompts for reflection—not replacement
  6. Group Exercise
    • Design a lesson using AI scaffolding + human engagement
    • Apply ethical boundaries and spiritual anchors
  7. Closing Charge
    • “Let this mind be in you…”
    • AI must serve transformation, not replace it

2. Onboarding Guide for Educators & Ministry Leaders

Title: “Wise Builders: A Practical Guide to AI Integration in Training”

Sections

  • Why AI Needs Guardrails
    • Neuroscience + biblical foundations
  • What AI Can Do Well
    • Summarization, quiz generation, formatting, translation
  • What AI Must Not Replace
    • Teaching, mentoring, authorship, spiritual discernment
  • How to Use AI Safely
    • Use scaffolding prompts
    • Require authorship reflection
    • Label AI-generated content
  • Sample Workflows
    • Devotional creation with human review
    • AI-assisted quiz with manual interpretation
    • Lesson planning with AI outline + human teaching
  • Ethical Checklist
    • Is memory preserved?
    • Is authorship clear?
    • Is spiritual engagement required?

3. Constitutional Prompt Set for AI-Assisted Education

Purpose: Enforce ethical boundaries and cognitive safeguards automatically

Authorship Reflection

“After reviewing this AI-generated outline, write a paragraph in your own words explaining what you learned and how you would teach it.”

Retrieval-Based Learning

“Quiz me on the notes I wrote yesterday. Ask questions that require me to recall—not just recognize—key ideas.”

Spiritual Engagement

“Generate 3 questions that prompt reflection on Romans 12. Do not answer them—just guide me to meditate and journal.”

Scaffold, Don’t Substitute

“Give me a 3-part structure for a leadership training session. Leave space for me to fill in Scripture, stories, and applications.”

Ethical Filter

“Before finalizing this content, check for tone alignment with biblical humility, reproducibility, and spiritual ownership.”
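For readers who want to apply this prompt set programmatically, here is a minimal sketch (my own illustration, not part of the Copilot conversation) of how the boundaries above could be enforced automatically: a small wrapper that prepends the guardrail rules to every student request and refuses requests that ask the AI to produce finished work. The rule wording, blocked phrases, and function name are all illustrative assumptions.

```python
# Illustrative sketch: enforce the "scaffold, don't substitute" boundaries
# by wrapping every student request in the constitutional rules above.
# The rule list, blocked phrases, and function name are assumptions for
# demonstration, not a real Copilot or classroom API.

GUARDRAILS = [
    "Scaffold, don't substitute: provide structure and questions, never final content.",
    "Do not answer reflection questions; guide the learner to find answers.",
    "End every response with an authorship-reflection prompt "
    "('Explain in your own words what you learned and how you would teach it.').",
    "Quiz for recall, not recognition, when reviewing the learner's own notes.",
    "Check tone for humility, reproducibility, and spiritual ownership before finalizing.",
]

# Requests containing these phrases ask the AI to replace the learner's
# authorship, which the boundaries above reserve for the human author.
BLOCKED_PHRASES = ("write my essay", "write the devotional for me", "give me the answers")

def build_guarded_prompt(student_request: str) -> str:
    """Wrap a student's request in the constitutional guardrails,
    or refuse it if it asks for substitution rather than scaffolding."""
    lowered = student_request.lower()
    if any(phrase in lowered for phrase in BLOCKED_PHRASES):
        return ("REFUSED: this request asks the AI to replace your authorship. "
                "Rephrase it as a request for structure, questions, or feedback.")
    rules = "\n".join(f"{i}. {rule}" for i, rule in enumerate(GUARDRAILS, start=1))
    return (f"System rules (apply to every reply):\n{rules}\n\n"
            f"Student request:\n{student_request}")

if __name__ == "__main__":
    print(build_guarded_prompt("Give me a 3-part structure for a session on Romans 12."))
```

The design choice is simply that the guardrails travel with every request, so no individual prompt can quietly drop them; a real deployment would pass the wrapped text to whatever LLM interface is in use and would need human review of outputs as well.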

End of “conversation.”

To be clear, AI and LLMs are not going away. Using them will be a normal and necessary part of living in this world. Many will be caught up in the enchanting power of AI LLMs; there are already stories of them being worshipped. Those of us with Godly wisdom must step up and forward to warn everyone entrusted to our care. We must become like the sons of Issachar, “who had understanding of the times, to know what Israel ought to do” (1 Chronicles 12:32). More on that later. In the meantime…

God bless you with wisdom, discernment, courage, and boldness for this season filled with storms, chaos, and crises.

Have a strong day in the Lord,

Rob

#iamjustthepen
