AI Risks & Concerns

Understanding and mitigating AI risks in education

AI is powerful, but it's not perfect. Understanding the risks helps you use it responsibly and maintain the trust of your community.

Hallucinations

The problem: AI can confidently state things that are completely false. It doesn't "know" when it's wrong.

Why it happens

AI generates text that is statistically likely to follow from your prompt, not necessarily true. It has no concept of truth, only patterns.

Education examples

  • Citing non-existent research studies
  • Attributing quotes to the wrong person
  • Inventing statistics that sound plausible
  • Creating fictional legal precedents

Mitigation

  • Always verify facts, citations, and statistics independently
  • Use AI for drafting and ideation, not as a source of truth
  • Ask AI to include sources, then check those sources exist
  • Be especially careful with numbers and dates
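One practical way to act on the advice above is to pull citation identifiers out of AI-generated text so a human can check each one. The sketch below is a minimal, hypothetical helper: it extracts DOI-shaped strings with a regex, and a well-formed DOI is not proof the paper exists, since models can invent plausible-looking ones.

```python
import re

# DOI-shaped strings: "10.", a registrant number, a slash, then a suffix.
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/[-._;()/:A-Za-z0-9]+\b")

def extract_dois(ai_output: str) -> list[str]:
    """Return every DOI-shaped string in the text, for manual verification."""
    return DOI_PATTERN.findall(ai_output)

# Example: a draft with two fabricated-looking citations.
draft = (
    "As shown by Smith et al. (doi:10.1234/fake.2021.001), scores rose 40%. "
    "See also 10.5555/invented.567 for a replication."
)
for doi in extract_dois(draft):
    # Resolve each one by hand, e.g. at https://doi.org/<doi>
    print("Verify manually:", doi)
```

The same pattern works for extracting case names, URLs, or statistics into a checklist before publication.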

Bias

The problem: AI reflects biases present in its training data, which includes historical inequities and societal biases.

Manifestations

  • Stereotyping in generated content
  • Uneven quality of responses about different groups
  • Reinforcing historical inequities
  • Lack of diverse perspectives

Education implications

  • Student accommodation documentation that inadvertently stereotypes
  • Curriculum suggestions that lack cultural diversity
  • Assessment questions with embedded bias
  • Communications that don't resonate with all families

Mitigation

  • Review AI outputs critically for bias
  • Provide diverse examples in your prompts
  • Include explicit instructions about inclusive language
  • Have diverse reviewers check important content

Privacy & Data Security

The problem: Information you share with AI may be stored, used for training, or potentially exposed.

Concerns

  • Student personally identifiable information (PII)
  • FERPA compliance
  • Staff personnel information
  • Sensitive institutional communications

Best practices

  • Never input student PII into general AI tools
  • Use enterprise versions with data protection agreements
  • Anonymize data before analysis
  • Understand your vendor's data retention policies
  • Follow your institution's acceptable use policy
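To make "anonymize data before analysis" concrete, here is an illustrative sketch of regex-based scrubbing. The patterns (emails, an assumed 6-to-9-digit student ID format, US-style phone numbers) are examples, not a complete de-identification solution: names, context clues, and small cohort sizes can still re-identify students, so always review scrubbed text by hand.

```python
import re

# Obvious-PII patterns. The student ID format is an assumption; adjust
# to match your own student information system.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "student_id": re.compile(r"\b\d{6,9}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace obvious PII patterns with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

note = "Contact jdoe@school.edu about student 44812907, phone 555-123-4567."
print(scrub(note))
```

Running the scrubber before pasting anything into a general-purpose AI tool removes the most common accidental disclosures, but it does not replace an enterprise tool with a proper data protection agreement.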

Over-reliance

The problem: AI can make us lazy thinkers if we stop engaging critically with its outputs.

Warning signs

  • Accepting first drafts without review
  • Stopping your own research process
  • Deferring to AI on judgment calls
  • Losing skills through disuse

Maintaining balance

  • Use AI as a starting point, not an endpoint
  • Continue developing your own expertise
  • Question AI recommendations
  • Maintain human decision-making authority

Academic Integrity

The problem: The line between AI assistance and AI replacement is unclear.

Considerations

  • When does AI help cross into AI doing the work?
  • How should policies address AI use?
  • What are the learning implications?
  • How do we prepare students for an AI-assisted world?

A framework

Instead of banning AI, consider:

  1. Define acceptable use clearly
  2. Focus on process, not just product
  3. Teach effective AI collaboration
  4. Assess understanding, not just output

AI Governance Framework

Effective AI governance isn't about control. It's about enabling responsible innovation while protecting your community.

Governance Principles

1. Transparency

  • Be open about where and how AI is used
  • Communicate clearly with stakeholders
  • Document AI-assisted decision-making

2. Accountability

  • Human oversight on all consequential decisions
  • Clear ownership of AI outputs
  • Defined escalation paths

3. Equity

  • Audit for bias regularly
  • Ensure equitable access to AI benefits
  • Monitor for disparate impacts

4. Privacy

  • Data minimization principles
  • Clear data handling procedures
  • Vendor due diligence

Policy Components

Your AI policy should address:

Area               | Key Questions
-------------------|--------------------------------------------------------
Acceptable Use     | What AI tools are approved? What data can be used?
Student Data       | How is FERPA compliance maintained? What is prohibited?
Staff Use          | What training is required? What disclosure is needed?
Academic Integrity | How should students use AI? How do we assess fairly?
Procurement        | What vendor requirements exist? Who approves new tools?
Incident Response  | What happens when something goes wrong?

Implementation Roadmap

Phase 1: Foundation (Months 1-3)

  • Form AI governance committee
  • Audit current AI tool usage
  • Develop draft acceptable use policy
  • Identify training needs

Phase 2: Pilot (Months 4-6)

  • Select pilot use cases
  • Train pilot participants
  • Establish feedback mechanisms
  • Refine policy based on learnings

Phase 3: Scale (Months 7-12)

  • Roll out approved tools broadly
  • Implement ongoing training program
  • Establish regular policy review cycle
  • Share learnings with community

Board-Level Considerations

Institutional leaders should prepare boards for:

  • Budget implications: AI tools have costs, but also ROI potential
  • Liability questions: Who is responsible for AI errors?
  • Community concerns: Students, staff, and community members will have questions
  • Competitive positioning: How are peer institutions approaching AI?
  • Staff impact: How will roles evolve?

Red Lines

Some things should never involve AI without explicit human review:

  • Special education placement decisions
  • Disciplinary actions
  • Hiring/firing recommendations
  • Budget allocations affecting students
  • Communications to families about sensitive matters
  • Legal or compliance documentation

Identifying and Managing AI Risk

These risks apply whether your institution formally adopts generative AI or not. Staff and students are already using these tools. The question is whether you manage the risks proactively or reactively.

Area of Risk                     | Likelihood | Impact   | Recommended Action
---------------------------------|------------|----------|---------------------------------------------------
Utilizing Hallucinated Data      | High       | Critical | Policy and training for content ownership
Workslop                         | High       | Moderate | Policy, training, and a culture for ways of working
Working Dual Jobs                | Moderate   | Low      | Clear policy and manager training
Always-On Recording              | High (*)   | High     | Prohibit where possible, or strictly limit
PRA: Organizational Use          | High       | High     | Training for PRA awareness
PRA: Data Analysis               | High       | Moderate | Stress-test PRA submissions for risk
Mental Health Issues             | High       | High     | Consistent training, monitoring, and communication
Personal Health Information      | High       | Moderate | Consistent training, monitoring, and communication
Homegrown Apps                   | Moderate   | Moderate | Clear policy and governance pipelines
Staff-Developed Apps / Shadow IT | Moderate   | Moderate | Clear policy and governance pipelines
Deepfake Audio, Video, and Docs  | High       | High     | Establish verification procedures
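A risk matrix like this can be triaged programmatically. The sketch below maps the qualitative ratings to numbers and sorts by a simple likelihood-times-impact score; the numeric scale is an assumption for illustration, and your institution may weight impact more heavily.

```python
# Assumed numeric scale for the qualitative ratings (illustrative only).
SCALE = {"Low": 1, "Moderate": 2, "High": 3, "Critical": 4}

# A subset of the matrix: (risk, likelihood, impact).
risks = [
    ("Utilizing Hallucinated Data", "High", "Critical"),
    ("Workslop", "High", "Moderate"),
    ("Always-On Recording", "High", "High"),
    ("Deepfake Audio, Video, and Docs", "High", "High"),
    ("Homegrown Apps", "Moderate", "Moderate"),
]

def priority(likelihood: str, impact: str) -> int:
    """Likelihood x impact score; higher scores warrant earlier attention."""
    return SCALE[likelihood] * SCALE[impact]

# Highest-priority risks first (stable sort preserves table order on ties).
ranked = sorted(risks, key=lambda r: priority(r[1], r[2]), reverse=True)
for name, likelihood, impact in ranked:
    print(f"{priority(likelihood, impact):>2}  {name}")
```

Scoring forces a conversation about relative priorities; the numbers themselves matter less than the ranking they produce.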

Key Takeaways from the Risk Matrix

  • High-likelihood, high-impact risks (always-on recording, PRA organizational use, mental health, deepfakes) demand immediate policy attention
  • Workslop is a quality risk that compounds over time: it erodes trust in AI-assisted work across your institution
  • Homegrown and staff-developed apps represent shadow IT risk that grows as AI tools become more accessible
  • PRA (Public Records Act) exposure is a unique concern for public institutions: AI conversations and outputs may be subject to records requests

Consider: AI Risk Tabletop Exercises

Just as your institution likely runs tabletop exercises for cybersecurity incidents or campus emergencies, consider running scenario-based exercises for AI risks. Walk through realistic scenarios (a deepfake of the chancellor, a PRA request for AI chat logs, a data breach through an unapproved AI tool) and test your response procedures.


The Balanced Approach

Managing risk doesn't mean avoiding AI. It means adopting it thoughtfully, with appropriate safeguards.

For every AI use case, ask:

  1. What could go wrong?
  2. What's the worst-case impact?
  3. What verification steps are needed?
  4. Who needs to review the output?
  5. What's our fallback if AI fails?

The goal is confident, responsible use, not fearful avoidance or reckless adoption.

