The Hidden Challenge of Managing Employees in the Age of Generative AI
Generative AI is revolutionizing workplaces—but are bosses prepared to manage its use effectively?
As generative AI tools like ChatGPT become part of everyday work routines, employers are struggling to define how these technologies should be used, or whether they should be used at all. Adoption has outpaced policy, opening a gap between the capabilities employees are unlocking and the rules companies have in place. This imbalance presents a significant challenge for leaders: maintaining control and oversight while encouraging innovation.
Based on an original story by the Financial Times.
AI in the Shadows: A Rapid Adoption with Limited Oversight
Take "Matt" (a pseudonym), a researcher at a pharmaceutical company. Like many workers, Matt turned to ChatGPT to handle complex coding tasks when he started his new job. It wasn't laziness, he explained, but a belief that AI could improve accuracy and efficiency. Yet he admitted feeling embarrassed and uncertain: his company had no clear policy on generative AI use.
Matt's experience reflects a broader trend. According to a Federal Reserve Bank of St. Louis survey, nearly 25% of the U.S. workforce was using generative AI tools at least weekly by mid-2023, a share that rose to almost 50% in the software and finance industries. Yet most employers have yet to establish clear guidelines or training for AI use, leaving employees to navigate these tools on their own.
From Blanket Bans to Strategic Integration
Many organizations initially responded to the rise of generative AI with blanket bans. Companies like Apple, Samsung, and Goldman Sachs cited concerns over data privacy and intellectual property. However, as AI becomes increasingly critical for maintaining competitive advantage, leaders are reevaluating this approach.
Walmart, for instance, shifted from a restrictive policy to a "controlled experimentation" model. Jerry Geisler, Walmart's Chief Information Security Officer, explained that the company developed internal AI tools while allowing limited use of external platforms. Walmart monitors employee AI use in real time and intervenes when necessary, not to punish employees but to steer them toward secure alternatives.
This pragmatic approach helps balance innovation with security. "We don't want them to think they're in trouble," Geisler said. "We just want to help them achieve their goals while minimizing risk."
The Culture Clash: Productivity Gains vs. Transparency
Despite these evolving policies, a cultural barrier remains. Workers often hesitate to admit they use generative AI tools, fearing accusations of laziness or incompetence. A survey by Slack found that almost half of desk workers would avoid disclosing their AI use to their managers. Some feared that productivity gains achieved with AI could be treated as a reason to cut headcount.
Ethan Mollick, a management professor at the University of Pennsylvania, notes another dimension: workers who leverage AI for exceptional results may be reluctant to share their methods, preferring to maintain their competitive edge. “They look like geniuses,” Mollick says. “They don’t want to not look like geniuses.”
The Business Case for Proactive Policies
For employers, the risks of ignoring AI adoption are manifold. Unregulated use could lead to breaches of data privacy, intellectual property violations, or reliance on unreliable outputs. Conversely, restrictive policies risk stifling innovation and alienating employees who see AI as an essential productivity tool.
Victoria Usher, CEO of communications agency GingerMay, illustrates a balanced approach. After starting with a blanket ban, her company now allows AI use for internal purposes, provided employees request permission. While this system is not yet seamless, Usher sees flexibility as key: "We’re really happy to keep changing our policies."
Large organizations are also investing in bespoke solutions. JPMorgan Chase’s "LLM Suite," McKinsey’s "Lilli," and Walmart’s "My Assistant" are examples of how companies can create secure environments tailored to their needs.
Lessons for Leaders: Navigating the AI Evolution
As AI technologies evolve, so too must the strategies for managing them. Here are key takeaways for leaders:
Establish Clear Policies: Organizations must communicate clear guidelines on AI use, addressing security, ethical concerns, and acceptable use cases.
Invest in Internal Tools: Developing in-house AI systems or securing enterprise-grade tools can provide employees with safe, efficient options.
Foster a Culture of Transparency: Encouraging open dialogue about AI use can dispel fears of reprimand and unlock shared innovation.
Adapt Policies Over Time: AI technology is evolving rapidly. Companies must regularly review and update their policies to stay relevant.
A New Era of Workforce Management
Generative AI is a double-edged sword for employers. On the one hand, it promises unprecedented gains in efficiency and creativity; on the other, it challenges traditional management structures, creating uncertainty and risk.
For leaders, the question is no longer whether to adopt AI but how to guide its use responsibly. Striking the right balance between oversight and empowerment will define the success of AI integration in the workplace—and the competitiveness of businesses in the years ahead.