AI is reshaping the HHS landscape
Artificial intelligence (AI) is rapidly transforming how Health and Human Services (HHS) agencies operate, from streamlining administrative processes to enhancing the quality, access, and delivery of public health services. However, this technological advancement requires careful consideration. The integration of AI into HHS functions, such as patient care management, predictive analytics for public health, and administrative efficiency, raises complex questions about procedural fairness, equity, and bias mitigation. To ensure AI serves the public good, HHS agencies must adopt governance frameworks prioritizing ethical considerations from the outset.
HHS AI in action: Successes and challenges
Across the healthcare sector, many leaders are already exploring or adopting generative AI to improve operations and stakeholder engagement. These capabilities have been used to boost clinical productivity and administrative efficiency, highlighting AI's potential in healthcare settings.
As HHS entities explore AI-driven innovation, the outcomes have varied. For instance, Maryland attempted to use AI to determine social assistance program eligibility, hoping to process applications more efficiently. However, post-deployment analysis revealed that the system disproportionately flagged applications from minority communities for additional scrutiny. This unintended bias, likely stemming from historical inequities embedded in the training data, delayed access to services and prompted public concern. In response, Maryland established an AI Subcabinet to oversee and coordinate efforts to study and provide recommendations on AI use across state government.1
In contrast, Tennessee's Medicaid program has strategically implemented AI to improve administrative efficiency, deploying a chatbot that assists beneficiaries navigating intricate policy guidelines. Aligned with Tennessee's Enterprise Generative AI Policy, which emphasizes protecting sensitive data like protected health information (PHI), this initiative improved service delivery by reducing the time staff spends on routine inquiries.2
These cases underscore a critical reality: while AI can enhance HHS operations, its implementation must be approached thoughtfully. Without oversight and ethical safeguards, AI can inadvertently perpetuate biases, reduce transparency, and erode public trust.
Core principles for HHS AI implementation
For AI to function ethically and deliver true value in HHS, agencies should adhere to four fundamental principles:
- Equity: AI technologies, if not carefully designed and tested, risk reinforcing existing disparities. This is especially apparent when historical biases are embedded in training datasets or when algorithms operate without transparency. HHS agencies must proactively evaluate their AI systems for equity, using tools like pre-deployment bias reviews and ongoing algorithmic monitoring. Some states have demonstrated how natural language processing can be employed to extract insights from complex case narratives, while maintaining a focus on equity throughout the process.
- Clarity: AI should never be a “black box” to the people it serves. Clear, understandable explanations of how AI systems reach decisions—especially those that affect eligibility, benefits, or care—are essential. Public disclosure of where and how AI is used, combined with efforts to make system logic understandable to both staff and constituents, increases trust and allows for more informed oversight.
- Responsibility: Maintaining human oversight is critical. HHS agencies must establish governance mechanisms that clarify roles, provide channels for feedback and redress, and regularly evaluate the ethical performance of AI systems. This may include forming advisory boards, scheduling independent audits, and developing formal appeal processes that allow the public to contest AI-driven outcomes when necessary.
- Security and Confidentiality: Because AI systems in HHS routinely handle sensitive information—including medical and socioeconomic data—agencies must prioritize strong data protection protocols. This includes using encryption, data masking, and rigorous access controls, as well as complying with applicable privacy regulations at both the state and federal level. Safeguarding constituent data isn’t just a compliance issue; it’s fundamental to earning public confidence in AI-driven programs.
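As one illustration of the pre-deployment bias reviews described above, a simple audit can compare how often an AI screening tool flags applications for extra scrutiny across demographic groups. The data, group names, and the four-fifths-style threshold mentioned below are illustrative assumptions for this sketch, not a standard any agency has adopted:

```python
# Illustrative pre-deployment bias review: compare the rate at which an
# AI screening model flags applications for extra scrutiny across groups.

def flag_rate(decisions):
    """Fraction of applications flagged for additional review."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(flags_by_group, reference_group):
    """Ratio of each group's flag rate to the reference group's rate.
    Ratios far outside roughly 0.8-1.25 (a four-fifths-style band)
    would warrant further investigation."""
    ref_rate = flag_rate(flags_by_group[reference_group])
    return {
        group: flag_rate(decisions) / ref_rate
        for group, decisions in flags_by_group.items()
    }

# Hypothetical audit data: 1 = flagged for extra scrutiny, 0 = processed normally
audit_sample = {
    "group_a": [1, 0, 0, 0, 1, 0, 0, 0, 0, 0],  # 2 of 10 flagged
    "group_b": [1, 1, 0, 1, 0, 1, 0, 0, 1, 0],  # 5 of 10 flagged
}

ratios = disparate_impact_ratio(audit_sample, reference_group="group_a")
for group, ratio in ratios.items():
    print(f"{group}: flag-rate ratio vs. group_a = {ratio:.2f}")
```

In practice, a review like this would run on representative historical data before deployment and continue as part of ongoing algorithmic monitoring.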
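Data masking, one of the safeguards listed above, can be sketched as a pattern-based redaction pass applied before case text ever reaches an AI tool. The field patterns and placeholder labels here are hypothetical; production systems would rely on vetted de-identification tooling and formal agency policy:

```python
# Minimal sketch of field-level data masking for case notes.
import re

# Illustrative patterns for common PHI formats (assumed, not exhaustive)
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
    "DOB": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}

def mask_phi(text):
    """Replace common PHI patterns with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

note = "Client (SSN 123-45-6789, DOB 01/02/1980) called from 555-867-5309."
print(mask_phi(note))
```

A redaction layer like this sits alongside, not in place of, the encryption and access controls the principle describes.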
Overcoming barriers to ethical HHS AI implementation
Despite growing awareness of AI ethics, many HHS agencies face challenges in effective implementation. Addressing these barriers is crucial to unlocking the true potential of AI and fostering public trust.
As agencies consider integrating AI into their operations, several key challenges require deliberate attention and coordinated action:
- Inconsistent regulatory frameworks: The current landscape of AI-related policies across jurisdictions poses significant challenges to consistency and accountability. Without unified guidance, agencies risk diverging interpretations and uneven implementation. To mitigate this, policymakers should prioritize intergovernmental collaboration, both horizontally across states and vertically with federal partners, to establish clear, harmonized standards for ethical AI use in HHS. Engagement with national organizations like the American Public Human Services Association (APHSA) offers a valuable forum for shaping consensus-driven policy approaches.
- Capacity and skill gaps: A shortage of in-house AI knowledge remains a major barrier to responsible adoption. Many HHS organizations lack the technical capacity to vet, implement, and monitor AI systems effectively. Closing this gap requires sustained investment in workforce development. Strategic partnerships with academic institutions, public-sector training collaboratives, and technology experts can help agencies build internal capacity and develop a pipeline of AI-literate public servants equipped to manage emerging technologies with confidence and integrity.
- Public trust and perception: Public hesitation around AI remains high, particularly when systems influence decisions tied to essential services such as benefits eligibility or access to care. To foster legitimacy and reduce skepticism, agencies must lead with transparency and prioritize community engagement. This includes clearly communicating where and how AI is used, ensuring language and materials are accessible to diverse audiences, and creating inclusive spaces for stakeholder feedback. These efforts not only reinforce public accountability but also help build the civic trust necessary for sustainable innovation.
Navigating the complexities of AI integration requires a strategic approach that embraces collaborative governance and leverages partnerships. Establishing AI Centers of Excellence (CoEs) within agencies is instrumental in fostering good governance and optimizing the use of AI tools. These centers bring together multidisciplinary teams spanning security, operations, policy, and customer service, enabling agencies to explore innovative use cases and develop responsible AI strategies.
Public awareness and education
In addition to technical strategies, cultivating public awareness and education is equally important to responsible AI in HHS. Prioritizing human-centered design and empathetic communication is crucial for achieving true value gains and public trust.
Many individuals affected by AI, especially in interactions with HHS agencies, lack an understanding of how these systems operate, what data they use, and the potential implications of their decisions on personal well-being. This lack of understanding is exacerbated by a digital environment where users are conditioned to "blindly click" their agreement to extensive, opaque terms and conditions simply to access essential services. When this consent culture extends to interactions with AI-driven HHS systems, it can create a disconnect between the technology's capabilities and the public’s right to informed engagement.
Raising awareness means creating opportunities for community education, offering simplified disclosures, and fostering open dialogue between agencies and constituents. It is crucial to design AI-related communications that respect users’ time, literacy levels, and accessibility needs, ensuring that everyone can engage with the technology meaningfully. When individuals understand how AI impacts their health and well-being, and how to seek recourse, they are better equipped to advocate for themselves and trust in the systems designed to serve them.
Incorporating constituent feedback and involving frontline staff in AI system design ensure that solutions meet real-life needs and foster a culture of empathy and trust. Effective change management also plays a pivotal role in building a positive AI culture within agencies. Leaders must clearly communicate with constituents to illustrate how AI enhances interactions with government services, while internally sharing a clear vision for AI to gain buy-in from staff. This holistic approach combines technology with human-centric values, helping HHS agencies harness AI's potential and enhance service delivery.
Case studies: Leading HHS AI strategy and governance
Maryland’s AI strategy for HHS delivery
Maryland is advancing a thoughtful AI strategy by focusing on improving service access, streamlining eligibility and case management, and safeguarding equity. Through its 2025 AI Enablement Strategy, the state is using AI to simplify complex application processes, better predict client needs, and enhance decision-making while embedding rigorous data governance and bias mitigation practices. Maryland’s approach balances innovation with responsibility, aiming to create more equitable, efficient, and proactive services for residents, supported by dedicated workforce training and ethical oversight.3
Utah's Office of Artificial Intelligence Policy
In March 2024, Utah established the Office of Artificial Intelligence Policy, becoming the first state to enact AI regulations under its consumer protection laws. This office focuses on regulating AI applications in healthcare, with an initial emphasis on mental health chatbots. The initiative aims to ensure that AI technologies used in healthcare are safe, effective, and ethically deployed. Utah's proactive approach serves as a model for other states considering AI governance in health services.4
California’s GenAI equity guidelines for inclusive public services
California has established pioneering GenAI equity guidelines to ensure the responsible and equitable adoption of generative AI across state government programs, including health and human services. These guidelines provide a structured framework for evaluating potential impacts of AI tools on vulnerable and marginalized communities, both before procurement and throughout deployment. Agencies are required to conduct equity impact assessments, engage affected communities, and use an iterative evaluation checklist to monitor risks of bias and exclusion. This proactive approach underscores California’s commitment to embedding equity into every stage of AI implementation, fostering more inclusive and just public services.5
Getting started with your HHS AI strategy
As AI becomes integral to public health services, HHS agencies must balance leveraging its benefits with safeguarding ethical standards.
CAI’s many years of experience working with state and local HHS agencies put us in a unique position to assist in navigating this complex landscape.
Working together, we can help with:
- Policy development: Assisting in crafting guidelines that align AI deployment with ethical principles, ensuring compliance with federal, state, and local regulations.
- Risk assessment: Conducting thorough evaluations of AI systems to identify and mitigate potential biases and ethical concerns.
- Data-private AI: Equipping HHS employees and caseworkers with secure AI tools that prioritize data privacy and compliance, while fostering innovation at scale.
- Testing protocols: Designing and implementing tailored testing protocols and scenarios to ensure system quality, reduce risk, and support compliance across agency IT initiatives.
- Training programs: Offering educational initiatives to equip HHS employees with the knowledge to effectively manage and oversee AI technologies.
- Public engagement: Developing strategies to maintain transparency and build public trust in AI applications within health and human services.
By collaborating with agencies, CAI can help establish policies that ensure fairness, accountability, transparency, and the protection of individual rights.
If you’re interested in learning more about ethical AI integration, contact us.
Endnotes
- “Governor’s Artificial Intelligence Subcabinet.” Maryland Manual On-Line, 2024. https://msa.maryland.gov/msa/mdmanual/08conoff/cabinet/html/ai.html.
- “Enterprise Generative AI Policy.” Tennessee.gov. Accessed August 4, 2025. https://www.tn.gov/content/dam/tn/finance/aicouncil/documents/TN%20Enterprise%20Generative%20AI%20Policy.pdf.
- “2025 Maryland AI Enablement Strategy & AI Study Roadmap.” doit.maryland.gov, 2025. https://doit.maryland.gov/About-DoIT/Offices/Documents/2025%20Maryland%20AI%20Enablement%20Strategy%20and%20AI%20Study%20Roadmap.pdf.
- “Artificial Intelligence Amendments.” le.utah.gov, 2024. https://le.utah.gov/~2024/bills/static/SB0149.html.
- “State of California Guidelines for Evaluating Impacts Of ...” genai.ca.gov, December 2024. https://www.genai.ca.gov/wp-content/uploads/sites/360/2024/12/GenAI-Equity-Guidelines-2024.12.19.pdf.