These 10 recommendations are not theoretical; they reflect what we have seen work in real implementations.
1. Overcommunicate, Train Extensively, and Set Clear Policy
This is number one for a reason. The single biggest mistake institutions make is adopting AI tools without investing adequately in training and communication.
What this looks like in practice:
- Develop a written AI acceptable use policy before (or alongside) tool deployment
- Offer regular training sessions, not just a one-time workshop
- Communicate clearly about what AI tools are approved, what data can and cannot be shared, and what quality standards apply
- Create reference guides and quick-start materials that staff can consult on their own
Why it matters: Without training, you get workslop. Without policy, you get risk. Without communication, you get confusion and resistance. All three must happen together.
2. Leaders Should Model and Display Mature AI Habits
If leadership is not visibly using AI in thoughtful, effective ways, the rest of the organization will not either.
What this looks like in practice:
- Mention when AI contributed to a report, analysis, or presentation
- Share examples of how you use AI in your own work during staff meetings
- Demonstrate critical evaluation of AI output, not blind acceptance
- Be transparent about both the value and the limitations you encounter
Why it matters: Culture follows leadership. When executives model mature AI habits, it signals that AI use is expected, professional, and valued. When leaders are absent from the conversation, AI becomes an underground activity with no quality standards.
3. Maximize All Available Privacy Settings
Every major AI platform offers privacy and data protection settings. Most users never configure them.
What this looks like in practice:
- Enable "do not train on my data" settings across all platforms
- Use enterprise or education versions when available (they typically offer stronger contractual data protections and exclude your data from model training by default)
- Audit which AI tools staff are using and ensure approved tools have appropriate privacy configurations
- Establish clear rules about what types of data can be entered into AI tools (never student PII in consumer AI tools)
Why it matters: The default settings on most AI platforms are not configured for institutional data protection. Taking 10 minutes to configure privacy settings can prevent significant compliance and reputational risks.
4. Develop a Center of Excellence and Identify Internal Change Agents
Sustainable AI adoption requires internal champions: people who are enthusiastic, knowledgeable, and willing to help their colleagues.
What this looks like in practice:
- Identify 3-5 staff members across different departments who are already experimenting with AI
- Formalize their role as AI champions or a Center of Excellence (COE)
- Give them time and resources to learn, experiment, and support others
- Create channels (Slack, Teams, regular meetings) for sharing tips, use cases, and lessons learned
- Have the COE evaluate new tools and make recommendations
Why it matters: External consultants can help you get started, but sustainable adoption requires internal expertise. Change agents create peer-to-peer learning networks that scale far more effectively than top-down mandates.
5. Start on Paper: Write Out What You Are Trying to Accomplish
Before typing a prompt, clarify your own thinking. What exactly are you trying to achieve?
What this looks like in practice:
- Before opening an AI tool, spend 2-3 minutes writing down your goal, your audience, and your constraints
- Draft an outline or bullet points of what you want the output to include
- Identify what "good" looks like before you start, so you can evaluate the result
- Consider what context and background the AI needs to do its best work
Why it matters: The quality of AI output is directly proportional to the quality of the input. Vague prompts produce vague results. When you take time to clarify your thinking first, your prompts become more specific, and the results improve dramatically. This is true at every level of AI use, from simple emails to strategic analysis.
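The "start on paper" habit can even be made mechanical. Below is a minimal sketch, in Python, of one way to turn a short written brief into a structured prompt; the field names (goal, audience, constraints, outline) mirror the bullets above and are illustrative, not a prescribed format.

```python
# Minimal sketch: turn a written "on paper" brief into a structured prompt.
# Field names mirror the bullets above; they are illustrative, not prescriptive.

def build_prompt(goal: str, audience: str, constraints: list[str],
                 outline: list[str]) -> str:
    """Assemble a specific, reviewable prompt from a short written brief."""
    lines = [
        f"Goal: {goal}",
        f"Audience: {audience}",
        "Constraints:",
        *[f"- {c}" for c in constraints],
        "The output should cover:",
        *[f"- {point}" for point in outline],
    ]
    return "\n".join(lines)

prompt = build_prompt(
    goal="Draft a one-page memo announcing our new AI acceptable use policy",
    audience="All staff, most of whom have not used AI tools",
    constraints=["Plain language, no jargon", "Under 400 words"],
    outline=["Why the policy exists", "Approved tools", "Where to get help"],
)
print(prompt)
```

The point is not the code itself but the discipline it encodes: by the time you have filled in those four fields, you have already done the thinking that separates a vague prompt from a specific one.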
6. Actively Utilize, Experiment, and Test Boundaries
You cannot learn AI from a slide deck. You learn it by using it.
What this looks like in practice:
- Set a personal goal of using AI at least once per day for the first month
- Try different models and platforms to understand their strengths
- Push AI with challenging tasks and observe where it excels and where it struggles
- Keep a running list of what works well and what does not
- Share your experiments with colleagues
Why it matters: Comfort comes from repetition, not instruction. The most effective AI users we have seen are the ones who committed to daily experimentation. Even 15 minutes a day builds significant capability over a few weeks.
7. Stay Current on Latest Releases and News
AI capabilities are expanding at an extraordinary pace. What was impossible six months ago may be standard today.
What this looks like in practice:
- Subscribe to 2-3 newsletters that cover AI developments (we recommend platform-specific blogs from Anthropic, OpenAI, and Google)
- Attend quarterly AI briefings or webinars
- Task your Center of Excellence with monitoring releases and communicating relevant updates
- Review your AI strategy quarterly, not annually; the landscape shifts too quickly for annual reviews
Why it matters: The compute used to train frontier AI models has been roughly doubling every six months. That means the AI you are evaluating today will be meaningfully more capable by the time you finish implementing it. Staying current ensures your strategy reflects actual capabilities, not outdated assumptions.
8. Focus on the Potential, Not the Limitations
Every AI tool has limitations. Hallucinations, bias, privacy concerns, and quality variability are all real. But leading with limitations creates paralysis.
What this looks like in practice:
- Acknowledge risks honestly (see our AI Risks page) while maintaining a forward posture
- Frame discussions around "How can we use this responsibly?" rather than "Why should we be cautious?"
- Celebrate early wins and share success stories across the organization
- When AI makes a mistake, treat it as a learning opportunity, not evidence that AI should be avoided
Why it matters: Organizations that lead with fear adopt slowly, adopt poorly, and fall behind. Organizations that lead with informed optimism build cultures that can adapt and improve. Risk management and enthusiasm are not opposites; they are complementary.
9. Focus on Amplification and Growth More Than Efficiency
Efficiency is the most commonly cited benefit of AI, but it is not the most important one.
What this looks like in practice:
- Ask "what can we do now that we could not do before?" not just "how can we do the same things faster?"
- Look for opportunities where AI enables new programs, services, or analyses that were previously impossible due to resource constraints
- Use AI to expand access, improve equity, and reach students who might otherwise fall through the cracks
- Think about institutional growth and student outcomes, not just administrative time savings
Why it matters: If AI only saves time, you get incremental improvement. If AI amplifies your capabilities, you get transformation. A community college that uses AI to create personalized intervention strategies for at-risk students is getting fundamentally more value than one that just uses AI to write emails faster.
10. Communicate Hope and Opportunity
AI is a story of hope, not fear. The institutions that lead with this message will attract talent, build trust, and create momentum.
What this looks like in practice:
- Frame AI adoption as an investment in your team's capabilities, not a replacement for their skills
- Share concrete examples of how AI is helping real people do meaningful work
- Be honest about challenges while maintaining confidence in your institution's ability to navigate them
- Connect AI adoption to your institutional mission: student success, equitable access, community service
Why it matters: Your faculty, staff, students, and community are watching how you respond to this moment. The narrative you set matters. Institutions that communicate hope and opportunity inspire engagement. Institutions that communicate anxiety and restriction inspire avoidance.
Putting It All Together
These 10 recommendations are not independent; they reinforce each other:
- Policy (#1) creates the foundation for privacy (#3) and experimentation (#6)
- Leadership modeling (#2) drives the culture that makes a Center of Excellence (#4) effective
- Starting on paper (#5) improves the quality of active experimentation (#6)
- Staying current (#7) ensures you keep focusing on potential (#8) based on real capabilities
- Amplification thinking (#9) gives you the success stories needed to communicate hope (#10)
Start with the ones that feel most urgent for your institution. There is no wrong order, but doing nothing is the only wrong answer.