Technology | February 14, 2025

New AI Agents Survey: Trust, Control, and Reliability Remain Concerns

Developers and organizations are gung-ho about building AI agents, but many still have misgivings about their ability to trust and control these highly autonomous AI systems.

A new DataStax survey found that nearly half (48.3%) of respondents are concerned about the ethical implications of deploying AI agents in their industries. It also found that trust and safety are the top barriers to adopting AI agents.

We surveyed 178 attendees of our Jan. 30 Hacking Agents event (more on that below). It wasn't a huge sample, but it generated some noteworthy findings that reflect the views of developers ranging from experienced AI application builders to those just starting out.

Too much autonomy?

Concerns about the governance required to manage AI responsibly aren't new, but the rise of agents and agentic frameworks over the past year or so has the potential to introduce a much higher level of autonomy into the business process layer. Agentic AI can orchestrate complex workflows and coordinate with other agents, all without human intervention. That autonomy, though, brings with it concerns about potential data leaks and socioeconomic risks, like job displacement.

For 32% of respondents, trust and safety is the main barrier preventing them from adopting AI agents (29.2%, on the other hand, felt no hindrance to adoption and said they are diving in now). To help reduce the risks that can crop up when deploying agents, 47% said guardrails are required, while 44.4% said humans should remain in the loop. Traceability and evaluations were considered important controls as well, cited by 38.2% and 37.6% of respondents, respectively.
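To make those safeguards concrete, here is a minimal, illustrative sketch in Python of how a guardrail and a human-in-the-loop approval gate might wrap an agent's proposed action. It is not taken from the survey or from any DataStax tooling; every name in it (AgentAction, is_low_risk, require_human_approval) is hypothetical.

```python
# Illustrative only: a guardrail allowlist plus a human approval gate
# around an agent's proposed tool call. All names are hypothetical.

from dataclasses import dataclass

LOW_RISK_ACTIONS = {"search_docs", "summarize", "draft_reply"}  # assumed allowlist

@dataclass
class AgentAction:
    name: str          # e.g. "send_refund"
    arguments: dict    # tool arguments proposed by the agent

def is_low_risk(action: AgentAction) -> bool:
    """Guardrail: only allowlisted actions run without oversight."""
    return action.name in LOW_RISK_ACTIONS

def require_human_approval(action: AgentAction) -> bool:
    """Human in the loop: pause and ask an operator before proceeding."""
    answer = input(f"Approve agent action {action.name}({action.arguments})? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: AgentAction) -> str:
    # Traceability: a real system would log every decision made here.
    print(f"[trace] executing {action.name} with {action.arguments}")
    return "done"

def run_with_safeguards(action: AgentAction) -> str:
    if is_low_risk(action) or require_human_approval(action):
        return execute(action)
    return "blocked by guardrail"

if __name__ == "__main__":
    print(run_with_safeguards(AgentAction("send_refund", {"amount": 250})))
```

The pattern mirrors what respondents asked for: low-risk actions proceed automatically, anything else waits for a human, and every execution leaves a trace.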

Agents for augmentation

AI replacing human workers remains a concern for many. Nearly 32% of the people we surveyed said they see AI taking over mundane tasks from humans. Yet a significant share of respondents expect AI agents to accelerate human productivity: 20% said agents would lead to cost savings, and 25% saw speed improvements through augmentation. And a full 64% said they trust autonomous agents to make low-risk decisions without human oversight.

Our survey respondents are moving to take advantage of these benefits. Forty percent said they were working on proof-of-concept agent projects. Yet 45.5% said they're still unsure whether agent frameworks adequately address the real-world challenges of building agentic applications for production.

Langflow to the rescue

Delivering agentic AI to production has been a challenge, but DataStax has been working hard to change that. Langflow, our low-code visual development environment backed by the world's most-scalable AI cloud, simplifies the creation of multi-agent applications. 
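If you want a feel for how a flow built visually in Langflow plugs into application code, here is a minimal sketch. It assumes a Langflow server running locally, uses a hypothetical flow ID, and the endpoint path and payload fields reflect recent Langflow releases, so check your version's API reference before relying on them; the `requests` package is also assumed.

```python
# Minimal sketch: calling a Langflow-built flow from application code.
# The URL, flow ID, and payload fields below are assumptions; verify them
# against your Langflow version's API documentation.

import requests

LANGFLOW_URL = "http://localhost:7860/api/v1/run/my-agent-flow"  # hypothetical flow ID

def ask_agent(question: str) -> str:
    """Send a chat input to the flow and return the raw JSON response text."""
    response = requests.post(
        LANGFLOW_URL,
        json={"input_value": question, "input_type": "chat", "output_type": "chat"},
        timeout=60,
    )
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    print(ask_agent("Summarize the latest support tickets."))
```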

Attendees at our recent Hacking Agents event learned all about Langflow's simplicity and its capabilities for building production agentic AI. You can view the complete Hacking Agents session recordings in our YouTube playlist here.

Up next: The Hacking Agents Hackathon

We’re keeping the energy from Hacking Agents rolling with a hackathon that kicks off in San Francisco on Feb. 28. Join us, along with Cloudflare, Unstructured, Twilio, and OpenAI, for an epic 24 hours where we'll be diving into what developers can build with the latest and greatest in AI tooling. Request to join the Hacking Agents Hackathon here.

