A growing share of the global workforce has made peace with a once-unthinkable idea: letting artificial intelligence call the shots at work. Not just scheduling meetings or sorting emails. Actually managing them. Assigning tasks, evaluating performance, making decisions that shape careers. And according to recent survey data, many employees don’t just tolerate the concept — they actively prefer it to the alternative of a flawed human supervisor.
The numbers are striking. A survey by Businessolver, a benefits technology company, found that 42% of workers would be comfortable reporting to an AI manager, according to TechRadar. That figure is up sharply from 2024, when only 26% said the same. In a single year, the share of employees willing to take orders from a machine jumped by more than half.
But here’s the catch. Workers don’t trust AI to do everything a manager does. They’re comfortable with it handling scheduling, tracking productivity, and providing data-driven feedback. When it comes to compensation decisions — raises, bonuses, promotions — confidence drops fast. Only a fraction of respondents believed AI should have a say in pay. The technology, it seems, is welcome as an administrator. Not yet as a judge of human worth.
This split reveals something fundamental about how workers relate to authority in 2025. The appeal of an AI boss isn’t that it’s smarter than a human manager. It’s that it’s perceived as more consistent. More transparent. Less political. Employees who’ve suffered under erratic, biased, or simply absent leadership see an algorithm as a form of relief. No mood swings. No favoritism. No forgetting what you said in last quarter’s review.
The Businessolver data aligns with a broader pattern visible across multiple surveys this year. A January 2025 report from Resume Builder found that nearly half of companies already use AI in some management functions, and that employees in those organizations reported higher satisfaction with day-to-day task management than those working under purely human oversight. Meanwhile, research published by the Harvard Business Review in late 2024 showed that workers rated AI-generated feedback as fairer than feedback from human supervisors — provided they knew the AI was involved. Transparency, not secrecy, drove the positive perception.
So what’s actually happening on the ground? In practice, very few companies have replaced managers with AI wholesale. The more common model is augmentation: AI tools that handle the mechanical parts of management while humans retain authority over sensitive decisions. Think of it as splitting the manager role in two. The machine handles logistics, pattern recognition, and routine communication. The human handles empathy, negotiation, and judgment calls that require context no model can fully grasp.
Companies like Lattice, 15Five, and BetterUp have built products around this exact division of labor. Their platforms use AI to summarize employee performance data, flag potential burnout, suggest development plans, and even draft talking points for one-on-one meetings. The manager still sits in the chair. But the chair is increasingly bolted to an AI chassis.
Not everyone is enthusiastic. Labor advocates have raised pointed concerns about algorithmic management, particularly in industries like warehousing, logistics, and gig work where AI systems already exert enormous control over workers’ daily experience. Amazon’s warehouse management software, which tracks productivity down to the second and can trigger disciplinary action automatically, has drawn sustained criticism from unions and regulators alike. The European Union’s AI Act, which began phased implementation in 2025, specifically classifies employment-related AI systems as “high-risk” and subjects them to mandatory transparency and oversight requirements.
The distinction matters. There’s a world of difference between an AI system that helps a knowledge worker organize their week and one that penalizes a warehouse employee for taking an unscheduled bathroom break. Workers expressing comfort with AI management in surveys are likely envisioning the former. The latter is already here and already controversial.
Generational divides play a role too. Younger workers — Gen Z and younger millennials — consistently show higher comfort with AI management than their older colleagues. This isn’t surprising. They’ve grown up with algorithmic recommendation systems shaping what they watch, read, listen to, and buy. An algorithm shaping their work assignments doesn’t feel like a radical departure. It feels like the next logical step.
And yet the resistance to AI-driven compensation decisions cuts across age groups. Even workers who happily accept AI task assignments balk at the idea of a machine deciding their salary. The reason is partly emotional and partly practical. Compensation is where work becomes personal. It’s the number that determines whether you can afford rent, whether you feel valued, whether you stay or leave. Handing that decision to an algorithm feels like removing the one lever employees believe they can influence through persuasion, relationship-building, and demonstrated loyalty.
There’s also a trust deficit rooted in recent experience. Workers have watched companies deploy AI tools that promised objectivity but delivered bias. Amazon famously scrapped an AI recruiting tool in 2018 after discovering it systematically penalized women. Hiring algorithms at other firms have been shown to discriminate by race, age, and disability status. If AI can’t be trusted to select candidates fairly, the reasoning goes, why would it be trusted to determine pay fairly?
The Businessolver survey, as noted by TechRadar, also surfaced an interesting nuance about empathy. A significant number of respondents said they believed AI could be more empathetic than their current boss. That finding sounds paradoxical — machines don’t feel empathy — but it makes sense when you consider what workers actually mean. They’re not saying AI understands their emotions. They’re saying it might respond to their needs more reliably than a distracted, overworked, or indifferent human manager. Consistency is being mistaken for, or perhaps redefined as, empathy.
This redefinition has implications for how companies train and evaluate their human managers. If workers are benchmarking their bosses against algorithms — and finding the humans wanting — that’s a damning indictment of current management practice, not a ringing endorsement of AI capability. The bar, in many workplaces, is apparently that low.
Corporate leadership is paying attention. A May 2025 Gartner survey found that 67% of HR leaders are actively exploring AI-augmented management tools, up from 38% in 2023. The acceleration is driven by two forces: cost pressure and talent retention. AI management tools promise to reduce the administrative burden on human managers, freeing them to focus on strategic work. They also promise to standardize the employee experience, reducing the variability that causes some teams to thrive under great managers while others languish under poor ones.
But standardization carries its own risks. Management is, at its best, an act of improvisation. Good managers read rooms. They sense when an employee is struggling before the data shows it. They know when to push and when to back off. They make exceptions to rules because they understand that rules, applied uniformly, sometimes produce unjust outcomes. An AI system optimized for consistency will, by definition, struggle with these judgment calls. It will be fair on average and wrong in specific cases that matter enormously to the individuals involved.
The legal dimension is evolving rapidly. In the United States, the Equal Employment Opportunity Commission issued updated guidance in 2024 clarifying that employers remain liable for discriminatory outcomes produced by AI management tools, even if those tools were developed by third-party vendors. Several states, including Illinois, New York, and Colorado, have enacted or proposed legislation requiring employers to disclose when AI is used in employment decisions and to conduct regular bias audits. Companies adopting AI management tools without robust compliance frameworks are walking into significant legal exposure.
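What a "bias audit" actually checks is often simpler than the term suggests. The long-standing benchmark in US employment law is the EEOC's four-fifths rule: if any group's selection (or promotion, or raise) rate falls below 80% of the highest group's rate, that is generally treated as evidence of adverse impact. A minimal sketch of that check, using hypothetical group names and counts purely for illustration:

```python
# Minimal sketch of an adverse-impact check (the EEOC "four-fifths rule").
# A group whose selection rate is below 80% of the highest group's rate
# is flagged. Group labels and counts below are hypothetical.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: selection rate}"""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def adverse_impact(outcomes, threshold=0.8):
    """Return groups whose rate ratio vs. the best-off group is below threshold."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items() if r / top < threshold}

# Hypothetical example: group_b's rate (0.30) is 60% of group_a's (0.50),
# below the 80% threshold, so it would be flagged.
example = {"group_a": (50, 100), "group_b": (30, 100)}
flagged = adverse_impact(example)
```

Real audits required under laws like New York City's Local Law 144 are more involved (intersectional categories, statistical significance, independent auditors), but this ratio test is the core arithmetic they build on.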
Then there’s the question no one has fully answered: what happens to middle management? If AI absorbs the administrative and analytical functions that consume most of a manager’s time, the role itself must change. Some analysts predict a dramatic thinning of management ranks, with AI enabling wider spans of control — one human manager overseeing fifty or a hundred direct reports instead of ten. Others argue that the human manager role will become more specialized, focused entirely on coaching, mentoring, and organizational culture. A smaller number of managers, but more skilled ones.
Neither outcome is painless. Middle management has long served as the primary career advancement path in most organizations. Shrink it, and you remove the rungs of the ladder that ambitious employees expect to climb. Companies that deploy AI management tools aggressively may find they’ve solved an efficiency problem while creating a retention crisis.
The worker attitudes captured in the Businessolver survey are real and significant. But they should be read carefully. Saying you’d be comfortable with an AI boss in a survey is different from actually reporting to one. The abstract appeal of algorithmic consistency may collide with the concrete frustration of being managed by a system that can’t understand why you were late because your child was sick, or why your productivity dipped the week after a family death. Humans are messy. Good management accounts for the mess.
Still, the trend line is unmistakable. Comfort with AI management is rising fast. The technology is improving faster. And the economic incentives pushing companies toward adoption are enormous. Within five years, some form of AI-augmented management will likely be the norm in large enterprises across most white-collar industries. The question isn’t whether AI will manage workers. It’s how much authority it will hold — and who will be accountable when it gets things wrong.
For now, workers have drawn a clear line. Manage my calendar. Fine. Manage my career. Not yet.
Your Next Boss Might Be an Algorithm — And Workers Say They’re Fine With That first appeared on Web and IT News.
