When the algorithm picks last and human bias kicks in

How AI bias mitigation strategies and modernized data governance can produce accurate and ethically responsible workflow systems.


How bias in AI tools shows up

Have you ever been in a situation where others assumed you had certain skills and capabilities? You’re in third grade and the class is divided into two teams to play kickball. Each team has a captain, and the captains begin to select from the remaining classmates. Are you the last one chosen? Why? Were you small in stature? Were you a child who tripped over your own two feet? Or were you a girl, viewed as less capable of playing kickball than the boys? That last example is what I would call implicit bias, but it is still bias, and bias has been around for centuries. These biases develop over a lifetime of situations we experience and respond to.1 Bias is not something new. It’s a lurking factor that can affect what milk you buy, where you live, and even how, or IF, you get hired.

Fast forward to today’s environment, where artificial intelligence (AI) solutions are springing up in a multitude of work solutions. Those same biases are present and can be even more pronounced. Situations such as hiring practices, housing locations, permit approvals, and property tax assessments are just a few of the local government opportunities where bias can be encountered.

To better understand this distinction, this article will define bias versus ethics, explain how AI exacerbates bias and why it is important to address, and then offer steps you can put into practice to mitigate that bias.

Bias versus ethics

Bias is a symptom. Ethics is the moral code that determines whether we catch it or allow it.

Based on research and real-world applications, ethics is the framework by which each of us operates. Bias occurs when an individual or a group of individuals is judged and adversely treated based on differing characteristics. Put in simpler terms: bias is something you fix; ethics is something you decide and enforce.

In artificial intelligence, this means that either AI produces unfair or skewed results based on underlying data cultivated through bad sourcing, or that human intervention applies conscious or unconscious bias to the results.

Ethical AI is about ensuring AI serves humanity responsibly, without causing harm, by developing and deploying artificial intelligence systems that are:

  • Fair: Treating all users and groups equitably (e.g., filling positions based on skill rather than relationships)
  • Transparent: Providing clarity about how systems work (e.g., explaining the formulas used when assessing property taxes with AI)
  • Accountable: Taking responsibility for AI decisions and outcomes (e.g., leadership reviews and audits results, adjusting as needed)

How does AI affect bias?

AI is a tool that relies on its underlying data being accurate, up to date, and representative of the people and scenarios it will be applied to. However, the algorithms or formulas can be written in such a way as to make correlations and assumptions that produce biased results. This is how AI ethical issues propagate through standard prompts, even without user guidance or intervention. For example, if your underlying employee-history data shows that 75% of successful hires are men, results built on that data will favor men. Or, if you ask an AI tool to build an RFP that excludes particular vendors, that is an example of conscious bias, and the results will not reflect the skills or specific requirements actually needed.
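To make the 75/25 hiring example concrete, here is a minimal sketch. The data and the scoring rule are invented for illustration: a naive "model" that scores candidates by how often their group appears among past hires will reproduce the historical skew as if it were a merit signal.

```python
from collections import Counter

# Hypothetical history: 75% of past successful hires are men.
# These numbers are illustrative, not drawn from any real dataset.
history = ["M"] * 75 + ["F"] * 25

rates = Counter(history)

def score(gender: str) -> float:
    """Score a candidate purely by how common their group is in past hires."""
    return rates[gender] / len(history)

print(score("M"))  # 0.75 -- favored only for matching the historical pattern
print(score("F"))  # 0.25 -- penalized by the same correlation, not by merit
```

Nothing in this "model" looks at skills at all; it simply reproduces the distribution of its source data, which is exactly how skewed training data becomes biased output.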

Why this matters

The impacts of bias continue to evolve at an alarming rate and can come from both data and users. Here are a few examples:

  • A Bellingham City staffer used ChatGPT to develop an RFP that would exclude a particular vendor from a city contract. The bias was in how the staff person wrote the prompt.2
  • An AI-generated police report incorrectly stated that an officer “turned into a frog.” Because of the underlying data, in this case body-worn camera footage, the generated report contained misrepresentations.3
  • Hiring and recruitment solutions are using AI in their application screening processes, which in some cases is resulting in bias by screening out certain applications.4

Each of these situations demonstrates the need for ethical use of AI, as conscious and unconscious bias in these platforms is escalating, creating discriminatory and unjust results. Accountability and transparency in AI must come from all who utilize it so that results can be trusted.

Steps to mitigate bias in data

The good news is that you can address ethics and bias in the use of AI. These steps can help foster an ethical approach to the responsible use of AI and should become an integral part of your strategy.

  1. Develop an Ethical AI use policy or integrate ethics and bias into your existing Artificial Intelligence policy. The goal is to define acceptable AI uses, prohibited uses, data requirements, and the importance of human oversight.
  2. Check that the data includes all demographics, time periods, and scenarios that AI will encounter. This includes automated checks on incoming data (missing fields, anomalies, skewed distributions).
  3. Document the data lineage end-to-end for AI transparency. Every prediction should be traceable back to its source data, which is critical for explainability in audits.
  4. Protect sensitive data points such as Personally Identifiable Information (PII) and Protected Health Information (PHI) by using automated tagging. Implement automated redaction of data such as Social Security numbers, health records, and individual financial data that are not needed in AI training.
  5. Apply data classification and metadata tagging through automation. This can be done with scripts or tools that automatically label data types, assign sensitivity levels, and apply quality scores.
  6. Implement bias detection approaches that validate demographic balance in data. In some use cases, it may be necessary to obscure data elements that could lead to bias (e.g., age, gender, volunteer associations, geographic location, income level).
  7. Adopt AI and data governance quality standards that include fairness metrics. This can be accomplished with quality checks that measure representation rates. If the historical data AI ingests reflects real-world bias, AI results will reproduce it. For example, if the qualified population is 50% men and 50% women but your hiring database reflects 80% men and 20% women, a model trained on that data will carry the bias forward.
  8. Ensure data access controls prevent unauthorized AI use (e.g., permissions that limit who can use sensitive data for AI training or experimentation).
  9. Define data stewardship roles with their AI accountability frameworks spelled out. It is important to identify the humans who need to be in the loop to review AI data inputs and sign off on both the data being used and the results. Include roles for reviewing flagged issues, approving exceptions, and providing mitigation. Keep in mind this may involve more than one person and may also include other reviewers, such as community members.
  10. Practice continuous monitoring of data results to track data drift/quality. Dashboards are a fantastic way to show when data distributions change, quality degrades, or biases emerge. Identifying data results by geographic and demographic means helps to uncover disparities, and adding automated alerts can improve the early detection of bias.
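The automated redaction in step 4 can be sketched with simple pattern matching. The patterns and placeholder labels below are illustrative only; a production system would rely on a vetted PII-detection tool rather than hand-rolled regular expressions.

```python
import re

# Illustrative patterns only -- real deployments should use a vetted
# PII-detection library, not hand-rolled regexes.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched sensitive value with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

print(redact("Applicant SSN 123-45-6789, phone 555-867-5309"))
```

Running the redaction before data ever reaches an AI training or prompt pipeline keeps the sensitive values out of the model entirely, rather than trying to suppress them afterward.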
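The representation-rate check in step 7 can be sketched as a comparison of each group's share of the data against an expected share. The counts, the 50/50 expected split, and the 10-point tolerance below are assumptions for illustration.

```python
def representation_gap(counts: dict[str, int],
                       expected: dict[str, float]) -> dict[str, float]:
    """Difference between each group's actual share and its expected share."""
    total = sum(counts.values())
    return {group: counts[group] / total - expected[group] for group in expected}

# Hypothetical hiring database: 80/20 against an expected 50/50 split.
gaps = representation_gap({"men": 800, "women": 200},
                          {"men": 0.5, "women": 0.5})

# Flag any group more than 10 percentage points off its expected share.
flagged = {group: gap for group, gap in gaps.items() if abs(gap) > 0.10}
print(flagged)  # both groups are roughly 30 points off their expected share
```

A check like this, run as part of routine data quality gates, surfaces the 80/20 skew before a model trains on it rather than after biased results appear.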
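The continuous monitoring in step 10 can likewise be sketched as a periodic comparison of current category shares against a baseline, raising an alert when any share moves past a tolerance. The districts, shares, and 5-point tolerance here are hypothetical.

```python
def drift_alert(baseline: dict[str, float],
                current: dict[str, float],
                tol: float = 0.05) -> list[str]:
    """Return the categories whose share moved more than `tol` from baseline."""
    return [g for g in baseline if abs(current.get(g, 0.0) - baseline[g]) > tol]

# Hypothetical permit-approval shares by district: baseline vs. this quarter.
baseline = {"downtown": 0.40, "north": 0.35, "south": 0.25}
current = {"downtown": 0.55, "north": 0.32, "south": 0.13}

print(drift_alert(baseline, current))  # ['downtown', 'south']
```

Wiring an alert like this into a dashboard turns "data drift" from an abstract worry into a concrete, reviewable event: the flagged districts tell the stewards from step 9 exactly where to look.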

Ethical AI in practice

The kickball captain who picked you last didn't set out to be unfair. They simply acted on what they knew, what they'd seen, and what felt familiar. AI does the same thing, only faster, at a greater scale, and with far less accountability. The difference is that we can use AI to have a positive impact.

Addressing bias in AI is not a one-time fix; it is an organizational commitment. It requires strong policy, intentional data practices, honest self-examination, and humans who are willing to stay in the loop even when the algorithm seems confident. AI governance and accountability for local government agencies (and the broader public sector) require close attention and up-to-date practices around prompt control and data validity.

The changes we are experiencing because of AI are real. Jobs, housing, healthcare, and public services are increasingly shaped by algorithmic decisions. While you can’t get rid of bias in a day, you can start a reformation by developing a strategy centered on accountability and transparency in AI. That way, we can choose a more equitable future for both local government and the communities we serve. By using these new advances, we can turn the tide and potentially have AI reduce bias in government.

For assistance with that strategy, CAI has developed a data governance toolkit that consolidates the six different areas of data governance, including ethics and data.

To learn more about how CAI can help your government organization develop stronger data governance for AI, fill out the form below.


Endnotes

  1. “Understanding Different Types of Bias.” NHS choices, October 13, 2022. https://nshcs.hee.nhs.uk/about/equality-diversity-and-inclusion/conscious-inclusion/understanding-different-types-of-bias/.
  2. Nate Sanford. “Bellingham staffer asked ChatGPT to ‘exclude’ vendor from city contract.” KNKX Public Radio, January 5, 2026. https://www.knkx.org/government/2026-01-05/city-of-bellingham-chatgpt-ai-contract-vendor.
  3. Mya Constantino. “Artificial Intelligence programs used by Heber City police claim officer turned into a frog.” Fox 13 Utah, December 2025. https://www.fox13now.com/news/local-news/summit-county/how-utah-police-departments-are-using-ai-to-keep-streets-safer.
  4. Virginia Backaitis. “Why AI Hiring Discrimination Lawsuits Are About to Explode.” Reworked, October 2025. https://www.reworked.co/talent-management/why-ai-hiring-discrimination-lawsuits-are-about-to-explode.
