Reducing Bias from AI

This week is the GovAI Coalition Summit in San José, CA. GovAI is a coalition of municipal leaders and technologists working to make artificial intelligence (AI) safer, more equitable, and more transparent for everyone. AI holds the promise of transforming public services, making them more efficient, equitable, and accessible. Realizing that potential, however, requires a proactive commitment to identifying and reducing bias in AI systems.

As a sponsor of the GovAI Coalition Summit happening this week, City Detect proudly supports advancing responsible AI practices that prioritize fairness, transparency, and equity. By combining innovative technology with ethical safeguards, we ensure our AI-powered platform not only optimizes urban management but also upholds the trust and values essential to public service.

City Detect’s AI-powered platform transforms urban development by helping municipalities optimize code enforcement, assess property conditions, and make data-driven decisions. However, the conversation around AI often raises concerns about bias, especially in public-sector applications. Coalitions like GovAI are taking a proactive approach to public-sector AI, and City Detect is part of these conversations. At City Detect, we take these concerns seriously: reducing bias in AI is a central tenet across all of our operational and business units. By integrating responsible AI practices and bringing diverse voices into the process, City Detect ensures fairness, transparency, and accuracy across all the communities we serve.

Why AI Bias Matters

AI bias occurs when an algorithm produces systematically prejudiced results due to erroneous assumptions or biased data used to train it. These biases can stem from underrepresentation within datasets, flawed training processes, or the absence of human oversight. In government applications, biases, whether human or algorithmic, can skew the resources allocated to support communities or lead to unequal treatment of different neighborhoods. Left unchecked, biases in the cloud can turn prejudiced data into systemic inequalities in the real world.
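To make this concrete, here is a minimal, hypothetical sketch (a toy example, not drawn from City Detect's models or data) of how skewed training data can hide failure on an underrepresented group: a naive model that always predicts the majority label scores 95% overall while being wrong on every parcel from the smaller neighborhood.

```python
# Toy illustration: how skewed training data can hide poor performance
# on an underrepresented group. All data here is invented.
from collections import Counter

# Hypothetical labeled parcels: 95 from neighborhood A, only 5 from B.
training_data = [("A", "no_violation")] * 95 + [("B", "violation")] * 5

# A naive model that simply predicts the most common label it saw.
majority_label = Counter(label for _, label in training_data).most_common(1)[0][0]

def naive_model(_neighborhood):
    return majority_label

# Overall accuracy looks great...
correct = sum(naive_model(n) == label for n, label in training_data)
print(f"Overall accuracy: {correct / len(training_data):.0%}")      # 95%

# ...but the model is wrong on *every* parcel from neighborhood B.
b_parcels = [(n, label) for n, label in training_data if n == "B"]
b_correct = sum(naive_model(n) == label for n, label in b_parcels)
print(f"Neighborhood B accuracy: {b_correct / len(b_parcels):.0%}")  # 0%
```

A single headline accuracy number can therefore mask exactly the kind of neighborhood-level inequity that matters most in public-sector applications.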

So how can we start reducing bias in a technology as new as AI? Let’s look to a more familiar space: physical product supply chains. In the world of physical products, companies and consumers scrutinize the supply chain: where a product was made, what materials were used, the labor practices involved, and how waste is disposed of. Digital service companies should hold themselves to the same standard, maintaining a high degree of transparency and understanding every decision point and every input.

At the heart of City Detect is the belief that AI should be an enabler of equity, not a barrier. That’s why we’ve adopted a rigorous framework to minimize bias in our AI systems, in the data they produce, and in the reports municipalities use to make critical resource decisions.

City Detect’s Approach to Responsible AI

City Detect’s Responsible AI Strategy prioritizes fairness, transparency, and continuous improvement to ensure that our AI models operate ethically and without bias. Here’s how we tackle bias at each stage of our process:

High-Quality Data Collection

Our AI models are trained on high-quality, real-world images collected through legal and transparent methods, much as local government officials capture property images from public spaces or Google Street View captures street images. By using accurate, relevant, and diverse datasets, we ensure our AI systems can handle the full range of conditions found across diverse neighborhoods and communities. This prevents results from being skewed by underrepresented data.

Population-Level Data for Comprehensive Insights

Bias in AI models often arises when they are trained on poorly sampled data or when data classes are disproportionately represented. Unlike many AI systems that rely on sample data, City Detect collects population-level data. This means that for a given municipality, we collect data on every parcel across entire neighborhoods, reducing the risk of sampling errors and ensuring that all areas are represented fairly. We collaborate with various municipal departments, including IT and GIS, to ensure full coverage of the relevant jurisdiction. Because our training data encompasses entire cities, we control for biases that arise from under- or overrepresentation. This approach ensures that our AI provides equitable outputs through accurate reports and data visualization.
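As an illustration of what a population-level coverage check can look like, here is a minimal sketch; the parcel counts and names are hypothetical, but the idea is simply to compare the parcels imaged in a collection pass against the municipality's full GIS parcel inventory, neighborhood by neighborhood.

```python
# Illustrative coverage audit (all counts and names are hypothetical):
# confirm population-level collection rather than sampling by checking
# imaged parcels against the full GIS parcel inventory.

# Total parcels per neighborhood, e.g. from a municipal GIS layer.
gis_parcel_counts = {"Downtown": 1200, "Eastside": 3400, "Riverbend": 870}

# Parcels actually imaged during a collection pass.
collected_counts = {"Downtown": 1200, "Eastside": 3391, "Riverbend": 870}

for neighborhood, total in gis_parcel_counts.items():
    collected = collected_counts.get(neighborhood, 0)
    coverage = collected / total
    status = "OK" if coverage >= 0.99 else "RECOLLECT"
    print(f"{neighborhood:<10} {collected}/{total} parcels "
          f"({coverage:.1%}) {status}")
```

Any neighborhood falling short of full coverage gets flagged for recollection before the data feeds training or reporting.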

Human Oversight with a Transparent Process

Human-in-the-loop feedback is a cornerstone of our responsible AI strategy. By involving human experts in reviewing AI outputs, we catch and correct potential biases before they impact decision-making. These reviewers are specifically trained on the objects we detect (roofs, litter, graffiti, etc.), and their output is itself evaluated through an additional review that uses statistically rigorous random sampling. City Detect also uses “manual labelers”: people who help train the AI models.

When AI is learning, it needs examples, and manual labelers provide these examples by “labeling,” or identifying, what certain pieces of data mean. For instance, if an AI model is being trained to recognize blight, manual labelers will look at pictures and tag each one as “overgrown lawn,” “broken window,” “tarp on roof,” and so on. Our manual labelers work exclusively with City Detect data and receive weekly feedback from experts. Additionally, our transparency extends to customer onboarding and customer service, ensuring everyone using our platform fully understands how our AI systems operate and is empowered to make informed decisions.
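To sketch what a random-sampling review of labeler output might look like, here is a hypothetical example (the counts and label names are invented for illustration): draw a simple random sample of labeled items for expert re-review, then estimate the labeler agreement rate with a confidence interval.

```python
# Hypothetical sketch of a random-sampling QA pass over manual labels.
import math
import random

# Simulated labeled items; in practice these would be image annotations
# such as "overgrown lawn", "broken window", or "tarp on roof".
labels = [{"item_id": i, "label": "tarp_on_roof"} for i in range(5000)]

random.seed(42)                       # reproducible audit sample
sample = random.sample(labels, 200)   # simple random sample for expert review

# Suppose experts agree with the labeler on 188 of the 200 sampled items.
agreements = 188
p_hat = agreements / len(sample)

# Normal-approximation 95% confidence interval for the agreement rate.
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / len(sample))
print(f"Estimated agreement: {p_hat:.1%} ± {margin:.1%}")  # 94.0% ± 3.3%
```

If the estimated agreement falls below an acceptable threshold, the affected batch goes back for relabeling and the labeler receives targeted feedback.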

Continuous Monitoring and Improvement

We can’t treat bias as a one-time fix; it demands constant vigilance. City Detect uses performance monitoring and feedback loops to continuously refine our AI models and reduce bias. This adaptive approach incorporates municipal input, keeping the system relevant, ethical, and effective as community needs evolve. Continuous oversight and monitoring occur at every stage of the AI lifecycle, from model training to beta testing and, finally, production environments. For example, before creating a case from an AI-flagged code violation, an officer reviews the actual photo, ensuring the parcel or roadside condition is assessed accurately.
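As a rough illustration of such a feedback loop, the sketch below (with hypothetical metrics and thresholds) compares current per-neighborhood precision against a baseline and flags any regression for human review.

```python
# Illustrative monitoring sketch (metrics and thresholds are hypothetical):
# compare current model precision per neighborhood against a baseline and
# flag regressions for human review and possible retraining.
baseline_precision = {"Downtown": 0.92, "Eastside": 0.91, "Riverbend": 0.93}
current_precision  = {"Downtown": 0.93, "Eastside": 0.84, "Riverbend": 0.92}

ALERT_DROP = 0.05  # flag any neighborhood whose precision falls 5+ points

for neighborhood, baseline in baseline_precision.items():
    current = current_precision[neighborhood]
    if baseline - current >= ALERT_DROP:
        print(f"ALERT: {neighborhood} precision fell "
              f"{baseline:.2f} -> {current:.2f}; queue for review/retraining")
```

Checking the metric per neighborhood, rather than in aggregate, is what lets a regression affecting one community surface instead of washing out in a citywide average.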

The Importance of Explainable AI

One of the key challenges in AI is ensuring that decision-makers can understand how the models work. City Detect prioritizes model explainability, favoring simple solutions over opaque ones. This allows platform users to focus on actionable insights without needing a deep technical background. Making AI decisions transparent and explainable allays “black box” fears and reduces the potential for bias. Unlike generative AI models, whose output and reasoning are almost entirely opaque, City Detect’s systems rely on predictive AI: their outputs are easily interpretable and consistent, and the model’s reasoning is readily visible.
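For a sense of what “easily interpretable” means in practice, here is a hypothetical record (not City Detect’s actual schema): every field in a predictive detection maps directly to something a human reviewer can verify.

```python
# Hypothetical example of an interpretable predictive-AI output: each
# detection carries the class, a confidence score, and a pointer back to
# the source image, so a reviewer can check it directly.
from dataclasses import dataclass

@dataclass
class Detection:
    parcel_id: str      # which parcel the detection belongs to
    label: str          # e.g. "tarp_on_roof"
    confidence: float   # 0.0 - 1.0
    image_path: str     # the actual photo an officer reviews

d = Detection(parcel_id="APN-1042-003", label="tarp_on_roof",
              confidence=0.91, image_path="images/APN-1042-003.jpg")
print(f"{d.parcel_id}: {d.label} ({d.confidence:.0%}) -> {d.image_path}")
```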

AI Public Service Thought Leadership & Coalitions

The GovAI Coalition is a nationwide initiative comprising over 1,000 members across more than 350 local, state, and federal agencies. It aims to promote responsible, ethical AI governance and use of AI tools within the public sector. GovAI brings together government agencies and vendors to ensure AI serves the social good, focusing on fairness, transparency, and accountability. The coalition sets industry standards, improves public services, and supports equitable AI systems, addressing bias, privacy, and digital equity.

City Detect’s participation in the GovAI Coalition demonstrates our commitment to the responsible and ethical use of artificial intelligence in public governance. The GovAI Coalition emphasizes ethical governance, vendor accountability, and collaboration—core values that City Detect upholds in all AI solutions.

As a registered vendor and participant, we collaborate with GovAI members to promote the responsible use of AI for social good. City Detect partners with the coalition to shape the future of AI technologies in public service, ensuring their development and use remain equitable. This includes refining AI systems to reduce bias, improve transparency, and uphold digital privacy. Our ethical responsibility as an AI provider goes beyond creating cutting-edge technology: we also ensure that our products align with the values of public safety, privacy, and fairness the coalition promotes.

Moving Toward a Fairer Future

Reducing bias in AI is not just a technical challenge; it’s an ethical responsibility. At City Detect, our engineers and our customers are creating the future of computing and AI: we build AI systems that empower municipalities to serve their communities more effectively and fairly. By collecting population-level data and integrating human oversight, we ensure our technology upholds equity and justice in public service, and we maintain transparency and actively engage with the community to reinforce these principles.

City Detect’s AI isn’t just about efficiency. It’s about empowering Americans to keep the promise we make to our children: to leave the country a better place than we found it.

For more information about how City Detect ensures responsible AI, visit https://citydetect.com/responsible-ai-strategy/

Schedule a call and see a demo of responsible AI in action: https://citydetect.com/contact/ 

Jonathan

Jonathan Richardson is an AI Engineer specializing in computer vision, with five years of experience developing state-of-the-art models for vision recognition, object detection, and instance segmentation. Passionate about advancing AI and committed to leveraging it for social impact, he fine-tunes cutting-edge model architectures via transfer learning to solve real-world problems. With a background in quantitative economics and statistical methods, he knows that clean, robust data can paint a picture worth well more than 1,000 words; that’s why he believes empowering local leaders with actionable insights is the key to shaping the city of tomorrow.