What Americans Really Think About AI Algorithms: Public Confidence and Transparency in Government

Artificial intelligence has wasted little time in reshaping how we live, work, and interact with institutions. From facial recognition in policing and algorithmic hiring tools to government chatbots and AI-generated content, the technology is being woven deep into the fabric of public life.
But as AI becomes more prevalent, a growing body of research reveals a nation that is equal parts intrigued and uneasy. New data from the Pew Research Center, a Bentley University-Gallup survey, and recent public sector polls highlight a clear trend: Americans want AI that works for the public good, but they’re not convinced that’s what they’re getting.
The Mood: Cautious, Concerned, and Not Quite Ready
Despite the breathless optimism surrounding AI in business and innovation circles, public sentiment tells a different story. According to Pew Research, only 18% of Americans say they’re more excited than concerned about AI in daily life. A much larger group (37%) expresses more concern than excitement, while 45% land somewhere in the middle.
This hesitation stems from a variety of sources. Many Americans worry that AI could exacerbate inequality, threaten jobs, infringe on privacy, or make opaque decisions with serious consequences, often without meaningful human oversight. As one woman in her 70s put it, “It will eventually eliminate jobs. Then what will those people do to survive?”
Trust in Government Use of AI Is on Shaky Ground
Nowhere is public caution more evident than in the government sector. While agencies across the country are exploring AI for digital services, automation, and data analysis, many residents remain wary of the implications.
A recent Bentley University-Gallup survey found that 77% of Americans do not trust businesses or government agencies to use AI responsibly. Concerns range from bias in automated decision-making to surveillance, data breaches, and algorithmic errors that could affect people’s lives in ways they don’t fully understand or consent to.
While a recent poll showed that 56% of respondents are at least somewhat comfortable with AI in government, the remaining 44% are not. The dividing line often comes down to how AI is used: Americans generally support AI handling repetitive or mundane administrative tasks, but they draw the line at AI making high-stakes decisions, particularly in areas like law enforcement, public benefits, or hiring.
The Risk of Misinformation and the Decline of Trust
One of the most consistent themes in the research is fear of misinformation. In an era where AI-generated content can be nearly indistinguishable from authentic material, the potential for deception is enormous, and people know it. A full 76% of Americans say they are concerned about AI tools producing false or misleading information.
And that concern isn’t limited to text. Deepfakes, AI-manipulated images, and voice cloning are making it harder to trust what we see and hear. In fact, 98% of consumers in one study said that authentic imagery is crucial for building trust, underscoring just how fragile that trust has become in the digital age.
AI and Jobs: Assist or Replace?
Employment is another flashpoint. AI’s potential to streamline workflows and boost productivity is widely recognized: 55% of respondents anticipate greater workplace efficiency due to AI. Yet fears of job loss remain acute: in one study, 77% of people expressed concern that AI could lead to job displacement within the next year.
Interestingly, concerns about AI replacing human workers are most pronounced in sectors where empathy, clarity, and accountability matter, like customer service, public outreach, or healthcare. A recent Gartner report found that over half of consumers would consider switching to a different provider if they learned a company planned to replace human customer support with AI. People don’t mind getting help from a chatbot until that’s the only option left.
Equity, Representation, and the Missing Voices
While public sentiment varies by use case, one issue cuts across demographics: representation in AI design. Pew’s data shows that only 30% of Americans believe AI systems can be designed to make fair decisions in complex situations. Even more telling, large portions of the population, especially Black, Hispanic, and Asian adults, don’t believe their perspectives are reflected in how AI tools are built or deployed.
This lack of inclusion raises serious questions about bias, especially in public sector AI. If the systems aren’t designed with diverse communities in mind, how can they be trusted to make equitable decisions?
Regulating AI: Challenges, Possibilities, and the Road Ahead
As AI’s influence grows, so do calls for regulation. But confidence in the government’s ability to provide meaningful oversight is low. According to Pew Research Center’s study, 62% of Americans across political and demographic lines say they have little or no confidence in federal agencies to regulate AI effectively, a figure echoed by 53% of AI experts.
Even those who are optimistic about the technology’s potential worry that, without clear guidelines, ethical frameworks, and public input, AI could become another force that reinforces existing power imbalances instead of challenging them.
Despite widespread skepticism, Americans are not anti-technology. Many recognize that AI can enhance education, speed up tedious tasks, and make services more accessible. Nearly two-thirds (65%) say they still trust businesses that use AI, as long as they know how and why it’s being used.
The path forward is not to abandon AI in public life but to build it better, with transparency, equity, and community involvement at the center.
Here’s how government agencies and institutions can start:
- Design with people in mind: Use AI to enhance services, not replace the human connection that makes them trustworthy.
- Be radically transparent: Communicate how AI tools work, what data they use, and how residents can raise concerns.
- Engage the community: Invite public input before rolling out new tools, and take feedback seriously.
- Educate, don’t just deploy: Host learning sessions and provide clear resources to help people understand what AI is doing and what it isn’t.
- Plan for accountability: Be clear about who’s responsible when things go wrong and how errors will be corrected quickly and fairly.
The Future Needs Data-Literate Policy Leaders
As artificial intelligence continues to evolve, the decisions made today will determine whether it becomes a tool for public good or a source of deeper distrust and inequity. That’s why the next generation of public leaders must be fluent not only in the language of policy but also in the logic of algorithms.
Graduates of the Cornell Brooks School’s Master of Science in Data Science for Public Policy (DSPP) are uniquely positioned to lead this transformation. Armed with advanced analytical skills, ethical training, and a deep understanding of how public institutions work, they are already stepping into roles that require both precision and principle. Whether working on AI governance, risk assessment in finance, strategy and operations in healthcare, algorithmic accountability in federal agencies, equity in smart city planning, or public trust in global nonprofit initiatives, Cornell DSPP alumni are turning complex data into smarter, more inclusive policy.
In a future where AI will shape everything from social services to civic engagement, these leaders will set the standard. They’ll design systems that center people, not just efficiency. They’ll champion transparency, demand fairness, and ensure that innovation doesn’t come at the expense of public trust.
The challenges are real, but so is the opportunity. With thoughtful leadership and interdisciplinary expertise, AI can help governments become more responsive, equitable, and human-centered. And it will be graduates of programs like the DSPP who ensure we don’t lose sight of what matters most: serving people, not just data.
MS in Data Science for Public Policy (DSPP):
The world needs data-literate policy leaders. That could be you.
Master technical and ethical data skills to inform smarter, more just policy in a fast-moving digital world with the MS-DSPP.


