Dr. Rumman Chowdhury: Culturally Aware A.I. Starts With Local Realities
From classrooms to city halls, Chowdhury argues that A.I. only works if humans keep control, test it rigorously and design with local realities in mind.

Dr. Rumman Chowdhury, featured on this year’s A.I. Power Index, advocates for grounding artificial intelligence in local realities. As founder of Humane Intelligence, a nonprofit focused on “bias bounties” and “institutionalized red teaming,” she assesses A.I. systems for vulnerabilities and sociotechnical risks across industries. Chowdhury fundamentally rejects the assumption that A.I. will replace people, arguing instead that “novel ideas originate in human minds” and that A.I. should augment rather than supplant human judgment, creativity and critical thinking. As a member of New York City’s A.I. Steering Committee, she tackles the unique challenge of translating A.I. ethics into operational guidelines, from public benefits to policing algorithms. Her greatest concern centers on A.I. evaluations being treated as regulatory afterthoughts rather than critical readiness assessments, warning that without rigorous real-world testing with affected communities, deployed A.I. systems will remain “brittle, unaccountable and out of step with people’s needs.”
What’s one assumption about A.I. that you think is dead wrong?
The belief that A.I. will replace people is fundamentally mistaken. A.I., at its core, is a tool created and shaped by humans. The true value still lies with human ingenuity; A.I. augments rather than supplants the irreplaceable qualities of human judgment, creativity and critical thinking. Delegating our thinking or agency to A.I. not only underestimates ourselves but undercuts the value of genuine, human-led innovation.
Was there one moment in the last few years where you thought, “Oh no, this changes everything” about A.I.?
This past year, collaborating with an edtech company to test A.I. with real students was a turning point. Hearing firsthand how students understand and interact with A.I.—and seeing how their experiences are deeply shaped by the broader structures of education—revealed that A.I.’s impact is far from automatic. Simply put, access to the benefits of A.I. in education still maps tightly to pre-existing socioeconomic divides. Unless we’re intentional, these tools will end up amplifying advantage for privileged students and deepening gaps for those already underserved. A.I. won’t solve education’s core inequities on its own—it could make them worse if we aren’t careful.
What’s something about A.I. development that keeps you up at night, that most people aren’t talking about?
One thing that worries me is how evaluations are treated as a regulatory afterthought, not as a critical part of readiness. Building trustworthy A.I. isn’t just about the technical ingredients—data, compute or clever models. It’s about whether we rigorously test these systems in real-world settings, ideally with the people who are actually affected. If evaluations remain just a checkbox for compliance, rather than a meaningful process for stress-testing and improvement, we’ll end up deploying A.I. that’s brittle, unaccountable and out of step with people’s needs.
You’ve said that novel ideas come from human brains, not A.I. systems. How do you help organizations implement this philosophy practically when designing A.I. systems, and what guardrails do you recommend to preserve human creativity and critical thinking?
The core principle is that novel ideas originate in human minds—not data sets or pre-trained models. Translating this into practice, I advise organizations to take a few concrete steps. Implement participatory design and evaluation, involving diverse stakeholders early and often rather than after deployment. Create clear guidelines and “guardrails” that ensure decisions requiring creativity, ethical reasoning or contextual understanding are retained for humans, not delegated to A.I. Institutionalize red teaming and public feedback cycles, requiring evidence that system outputs reflect genuine stakeholder values and priorities. These steps guard against over-automation and help preserve space for authentic human contribution throughout the innovation process.
Through your work with Humane Intelligence and various organizations, you’ve emphasized letting local realities guide A.I. innovation. Can you give specific examples of how culturally aware A.I. deployment differs from one-size-fits-all approaches, and what mistakes do you see companies making?
Culturally aware A.I. starts with local realities—local data, user needs and lived experience—rather than assuming a global model will work everywhere equally. Take our multilingual red teaming exercises: in Singapore, we brought together testers from nine countries to reveal biases and failures invisible in monolingual, monocultural lab settings. By contrast, companies often deploy global solutions without this adaptation, missing harmful edge cases and undermining trust where the model doesn’t “fit” the context. Effective organizations understand that building, testing and governing A.I. must be grounded in local agency.
As a member of New York City’s A.I. Steering Committee, you’re working on A.I. governance at the municipal level. What unique challenges do cities face in regulating A.I. compared to federal approaches, and how do you balance innovation with protecting citizens from algorithmic bias and harm?
Cities face unique challenges: their problems are intensely practical, close to daily life and directly impact millions—from policing algorithms to school placement or housing applications. Local agencies must balance limited resources, urgent service delivery and the imperative for fairness and transparency. Unlike federal regulators, city officials can’t simply issue broad principles—they must translate A.I. ethics into operational guidelines and procurement standards. The solution is strong cross-agency governance, external expert panels and robust public participation, all anchored in formal principles (like those adopted by New York City) that prioritize transparency, appropriateness and equity, while making room for innovation.