Despite tech companies’ promises about the potential of AI agents, knowledge workers remain cautious about the technology. New research from Asana’s Work Innovation Lab shows that today, workers offload 27 percent of their workload to agents, a figure expected to rise to 43 percent within three years. So what’s behind the hesitation? According to the data, 62 percent of respondents say agents are unreliable.
Even with concerns about trust and accountability, workers are still adopting agents—they just aren’t willing to delegate much of their work to them. More than three-quarters (77 percent) said they’re using a bot in some capacity, and a nearly equal number (76 percent) consider agents to be a “fundamental shift” in the future of work.
“I think the thing that was most surprising to me was actually the combination of the lack of trust with how much they expect to delegate in the future,” Dr. Mark Hoffman, the lead at Asana’s Work Innovation Lab, told me. He finds this dichotomy interesting because it suggests that although workers are skeptical now, they realize there’s future potential for agents. It’s a problem Hoffman finds intriguing—how to bridge the current trust gap with what needs to change in the next five years to make workers more comfortable with AI delegation.
So what are workers actually using AI agents for? Unsurprisingly, the most common tasks are taking meeting notes (43 percent), organizing documents (31 percent), and scheduling meetings (27 percent).
Who’s Responsible for Agents?
When an AI agent fails, all eyes turn to who is responsible for fixing its mistakes and ensuring they aren’t repeated. However, identifying who’s accountable isn’t clear-cut. A third of knowledge workers (33 percent) don’t believe any human is responsible or aren’t sure whom to blame. Fewer than a quarter (22 percent) assign the blame to the end user, 20 percent point to IT teams, and nine percent think it’s the person who built the bot.
This is further compounded by the fact that most organizations lack AI guardrails. Asana’s research showed only 14 percent of businesses have ethical frameworks for agents, 15 percent have development processes in place, and just 12 percent actually review employee-created agents. Nearly a third (31 percent) reportedly take a laissez-faire approach, allowing employees to develop agents without management approval.
And when it comes to monitoring AI error rates, only 19 percent of organizations do so today, even though nearly two-thirds of workers (64 percent) say accuracy should be a top metric.
Agents Aren’t Team Players
While we often hear about the benefits AI agents can provide, workers aren’t drinking the Kool-Aid, at least not for now. They’re frustrated that bots aren’t delivering on their promise: reducing routine tasks and freeing up time. In addition to the 62 percent who see agents as unreliable, 57 percent of workers report that agents confidently share wrong information, 56 percent complain that agents aren’t learning, and 54 percent say AI agents just create extra work that forces them to redo everything.
Attitudes only get more negative from there, with respondents expressing frustration that agents can’t understand their team’s work (49 percent) and focus on the wrong priorities (48 percent).
The Training Gap
Organizations must stop assuming that simply dropping new technology on employees will produce positive results. When it comes to AI agents, an overwhelming 82 percent of workers believe proper training is essential for success. The reality is that only 38 percent of firms appear willing to provide it.
More than half of workers (52 percent) have also asked their employers to clarify the division of responsibilities between humans and AI, and 56 percent want formal usage guidelines.
In short, the absence of training and rules will continue to hinder employees from integrating agents into their workflow.
The Work Forward
What can be done to convince workers to delegate work to agents? After all, Asana’s findings don’t paint these bots in a positive light. With a good percentage of people expressing doubt about the potential of AI agents, there’s undoubtedly more work organizations need to do to assuage any fear, uncertainty, and doubt. Asana’s recommendation: “Organizations will only see value from AI agents if they treat them like teammates, not tools. That means giving them the right context, defining responsibilities, embedding feedback loops, measuring accuracy as the top metric, and training employees to use agents effectively.”
Workers and executives are bombarded with pitches for agentic solutions, so how should they evaluate which ones will produce the right results? Hoffman said Asana’s data doesn’t contain enough information to generate a “knowledge worker playbook,” but conceded that it’s important feedback for the company. He added, however, that workers should start by evaluating automation candidates along one of three dimensions: energy, risk, or frequency.
With energy, workers list which tasks drain their energy or fail to restore it. As for risk, which tasks are too risky to automate? Procurement and legal processes may not be the first choice, Hoffman pointed out, but what are the low-risk tasks an agent can easily handle? Lastly, when it comes to frequency, which tasks do workers perform often enough that the time investment warrants having an agent handle them? These shouldn’t be plans or tasks that workers undertake on a quarterly basis, but rather daily or weekly assignments that are non-essential to the work getting done. Hoffman warns against becoming too dependent on agents, in case the bot breaks and “everything falls apart.”
“Just start with one,” he urged. “There’s no reason to try and put agents in every part of your workflow yet, especially if you’re early on and trying to understand how they work. Pick one dimension that you can then evaluate different options for that one thing you want to do.”
When asked what he hoped people would take away from Asana’s annual work survey, Hoffman said it was hope. To him, knowledge workers hold incredible power in determining whether and how agents get adopted in an organization. Executives and developers must recognize that, despite the pressure they place on workers to use agents, there are three areas where people don’t appear willing to relinquish control to AI.
Hoffman’s team referred to them as the ownership paradox, the creativity fortress, and the relationship red line. “When you think about developers, what are you developing to automate away?” he asked. “The issue right now is people are focused on where the most money is spent in organizations, which is how someone who’s purely focused on the economics of their build would think.” AI proponents may seek to automate away designers, engineers, creative writers, or other cost centers.
“But it’s clear…from the research that the ownership paradox is essentially that people are happy to delegate administrative overhead, things that are kind of for them, but they don’t like to delegate things that represent them.” Hoffman elaborated that workers don’t wish to automate away “the things that are core to their professional identity and their creative, strategic work.”
Another area where workers draw the proverbial red line is relationships. They’re not fans of using AI agents to handle social interactions on their behalf. Hoffman posits that organizations should take this as a cue to explore where energy is best spent to drive tool adoption. Resources shouldn’t go toward automating relationships or creative output.
Hoffman stressed that Asana’s research isn’t meant to take sides on AI—it’s not ammunition for either the skeptics or the evangelists. Regardless, he believes that agentic technology will be built, but “how do we collectively build that technology in a way that leads to an outcome that we all collectively would actually want?” To achieve this, builders and businesses must identify what people want.
“It’s clear to me that negativity spikes the second you start entering into the more human and creative processes that people enjoy about their work, and it goes away as people start thinking about things like administrative stuff that takes up their time, but they don’t like doing.”
More than 2,000 knowledge workers in the U.S. and the UK, all of whom self-identified as such, were surveyed for Asana’s 2025 State of AI report. The company said it didn’t filter by industry or job title, but noted that the majority of respondents work in a corporate desk environment.
Featured Image: An AI-generated image of a human passing work to a robot, symbolizing trust and delegation. Credit: Adobe Firefly