AI Policies That Work: Practical Guidance for Public Health
AI is everywhere, and it’s not going away. Today, the loudest need isn’t “more AI,” but clear policies, safeguards, and shared guardrails that help community health teams use AI correctly. To meet this need, Metopio recently hosted a webinar with AAron Davis (Wichita State University Community Engagement Institute) and Tatiana Lin (Kansas Health Institute).
This conversation delivered practical frameworks you can adapt now, not someday. Here’s the recap.
Why this matters (and where AI is already showing up)
AI isn’t just one thing. Beyond public tools like ChatGPT and Copilot, AI is embedded across software: Metopio, for example, uses LLMs to assist with community health assessment (CHA) drafting, text-to-SQL agents to turn plain-English questions into queries against survey data, and sentiment analysis to classify qualitative inputs. With AI woven into everyday workflows, governance must move from ad hoc to intentional, balancing innovation with privacy, ethics, equity, and safety.
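To make "embedded AI" concrete, here is a minimal, hypothetical sketch of the kind of guardrail a text-to-SQL step can use: the plain-English question is wrapped in a prompt that constrains the model to a known survey schema, and the generated SQL is checked before it runs. This is an illustration only, not Metopio's actual implementation; the model call is stubbed out.

# Hypothetical text-to-SQL guardrail sketch; the LLM call is a stand-in.
import re

SCHEMA = "survey_responses(respondent_id, zip_code, question_id, answer, collected_at)"

def build_prompt(question: str) -> str:
    """Constrain the model to a known schema and to read-only SQL."""
    return (
        f"You may query only this table: {SCHEMA}.\n"
        f"Write a single read-only SQL SELECT statement that answers: {question}"
    )

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call; returns a canned answer for illustration."""
    return "SELECT zip_code, COUNT(*) FROM survey_responses GROUP BY zip_code;"

def safe_sql(question: str) -> str:
    """Generate SQL, then reject anything that is not a single SELECT (human review still applies)."""
    sql = call_llm(build_prompt(question)).strip()
    if not re.match(r"(?is)^select\b", sql) or ";" in sql[:-1]:
        raise ValueError("Generated SQL failed the read-only check; route to a human.")
    return sql

print(safe_sql("How many survey responses came from each ZIP code?"))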
The AI Fluency Ladder
AAron outlined an “AI fluency ladder” for organizations:
Awareness & shared literacy – common language for benefits/limits
Individual application – safe, purposeful use with training and guardrails
Projects & workflows – cross-functional workgroup (IT, legal, HR, program) to pilot, measure value, and iterate
Enterprise transformation – AI becomes ubiquitous, governed by live policies and ongoing quality improvement
How do you know it’s time for a formal policy?
Tatiana called out some red flags that might point to the need for a formal policy: staff using AI without guidance; AI outputs influencing public-facing materials; unclear accountability if something goes wrong. But before drafting one for your team, you’ll need to align on:
What you’re safeguarding/promoting (e.g., patient privacy vs. accessibility)
Capacity (who will assess vendors, monitor risk, handle procurement/compliance)
Policy fit with existing SOPs, cybersecurity, procurement, records, and communications
How to avoid common pitfalls:
Write a tool-agnostic policy
Scope beyond “gen AI” to AI more broadly
Lean into feasible provisions (e.g., require documentation and “explainability when possible,” not absolute explainability)
Build in adaptivity (review cadence, sunset/trigger clauses)
Set up a cross-functional governance team
Three policy structures you can copy and adapt
Operations-first: Lead with how tools are approved, monitored, and documented, then layer on governance and enforcement. This approach is practical and innovation-friendly.
Governance-first: Front-load oversight bodies, criteria, and risk classification. This approach works best for organizations with mature committees and processes.
Enforcement-first: Begin with incident handling (breach, algorithmic harm, equity concerns), then move on to prevention. This approach is powerful but capacity-intensive.
A risk-based, ethical oversight framework
Structure requirements around risk, not tool names. Here are some examples of how this could look (a minimal sketch follows the list):
Low-risk (internal ops, minimal public impact): light approvals; staff review and human oversight
Medium-risk (supports public-facing work): added documentation, review, and testing
High-risk (influences services, eligibility, or trust): formal assessment, equity review, stronger controls, and governance sign-off
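As a sketch of what "structure requirements around risk" might look like in practice, here is a hypothetical risk-tier mapping. The tier names and controls mirror the list above; the example uses and field names are illustrative assumptions, not a prescribed standard from the webinar.

# Hypothetical risk-tier mapping; adapt the examples and controls to your own policy.
RISK_TIERS = {
    "low": {                      # internal ops, minimal public impact
        "examples": ["meeting notes summary", "internal draft cleanup"],
        "controls": ["light approval", "staff review", "human oversight"],
    },
    "medium": {                   # supports public-facing work
        "examples": ["draft CHA narrative", "public FAQ drafting"],
        "controls": ["documentation", "supervisor review", "output testing"],
    },
    "high": {                     # influences services, eligibility, or trust
        "examples": ["eligibility screening support", "resource allocation analysis"],
        "controls": ["formal assessment", "equity review", "stronger controls", "governance sign-off"],
    },
}

def required_controls(tier: str) -> list[str]:
    """Look up the controls a proposed AI use must satisfy before approval."""
    if tier not in RISK_TIERS:
        raise ValueError(f"Unknown risk tier: {tier}")
    return RISK_TIERS[tier]["controls"]

print(required_controls("medium"))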
AAron and Tatiana recommend differentiating procured enterprise tools (evaluated for security, data handling, and suitability for higher-risk tasks) from publicly accessible tools (allowed only for low-risk tasks under clear guidance). Specify applicability to staff, contractors, and grantees, and how and when new contracts must comply.
They also called on teams to address limits and harms directly in policy sections:
Human oversight to catch incorrect outputs/hallucinations
Bias mitigation & equity review for models, data, and outcomes
Privacy & data minimization; vendor obligations; breach protocols
Transparency & documentation standards (what was used, for what, with what safeguards)
How to get started right away
Stand up an AI workgroup: Include IT, legal/compliance, HR, program leaders, and communications
Baseline literacy: Offer AI 101 and safe-use trainings (talk through prompting, privacy, bias)
Run low-risk pilots: Define success metrics up front; measure time saved/quality gains; capture lessons learned
Adopt a risk-based policy (operations- or governance-first), then review on a cadence
Tap funding: Explore public health infrastructure/data modernization resources for AI-readiness and governance support
The bottom line:
There’s no one-size-fits-all template — but there is a repeatable path. Build shared literacy, pilot with purpose, and adopt a risk-based, tool-agnostic policy that centers ethics and equity. That’s how hospitals and public health agencies are moving from AI hesitation to safe, accountable value.