A recent survey conducted by Software AG reveals that half of all desk-based professionals, the group often referred to as knowledge workers, rely on personal AI tools. Their reasons vary: some adopt personal AI tools because their company’s IT department does not provide them, while others prefer the flexibility of choosing their own software.
John, a software developer at a fintech firm, exemplifies this trend. His philosophy is to act first and deal with any consequences later, and, like many others, he uses AI tools at work without formal approval from the IT team.
Although his company offers GitHub Copilot for AI-assisted coding, John opts for Cursor, a tool he finds more efficient. “It’s an advanced autocomplete, but it’s incredibly effective,” he explains. John advises businesses to remain adaptable when it comes to AI tools. “I’ve told my colleagues not to commit to year-long team licenses because the AI landscape shifts dramatically within a few months,” he points out. “People will want to explore different options, and a long-term contract might restrict them.”
Peter, a product manager for a data storage firm, is another example of someone navigating AI policies at work. Although his company restricts the use of external AI tools and provides access to Google Gemini, he bypasses this limitation by using ChatGPT via the search platform Kagi. For him, AI serves as an intellectual counterpart, helping him assess his plans from multiple customer viewpoints.
He takes advantage of ChatGPT’s ability to analyze videos, summarizing competitor content in a fraction of the time it would take to watch manually. He notes that this efficiency equates to having an extra third of an employee’s output at no additional cost.
The unauthorized use of AI, often referred to as “shadow AI,” falls under the broader category of “shadow IT,” where employees deploy unapproved services or software. Harmonic Security, a firm specializing in AI risk mitigation, actively monitors this trend, tracking over 10,000 AI applications, 5,000 of which have been identified in active use.
While widely adopted, shadow AI presents risks. AI systems are trained on large amounts of data, and about 30% of the tools tracked by Harmonic Security use the data their users provide for continued training, which could expose proprietary information.
Though some worry about AI models revealing sensitive company data, Harmonic Security CEO Alastair Paterson believes direct leaks are unlikely. However, businesses remain cautious of data being stored in external AI systems without oversight, which could make them vulnerable to breaches.
Despite these concerns, AI’s efficiency makes it appealing, especially to younger employees. Simon Haighton-Williams, CEO of The Adaptavist Group, compares AI to tools such as encyclopedias and calculators, which enhance knowledge and productivity. “AI won’t replace experience entirely, but it can condense years of expertise into seconds through effective prompting,” he notes.
His advice to companies grappling with shadow AI is to accept that it is already happening: rather than banning AI outright, understand how employees use it and find ways to integrate and regulate it.
Trimble, a company specializing in hardware and software for built-environment data management, has taken a proactive approach. To ensure safe AI use, it developed Trimble Assistant, an internal AI app built on models similar to those in ChatGPT. Employees utilize it for diverse functions, including market research, product development, and customer service. The company also offers GitHub Copilot for developers.
Karoliina Torttila, Trimble’s AI director, encourages employees to experiment with AI in their personal lives but stresses the need for caution in professional settings. “Employees need to develop the skill of recognizing sensitive information and exercising judgment about where to share it,” she explains.
She believes that personal experience with AI can shape corporate policies as the technology evolves. “There must be continuous dialogue about which tools best serve our needs,” she says. As companies such as Microsoft Corp. (NASDAQ: MSFT) continue introducing new AI solutions to the market, the trend of workers smuggling these tools into their workplaces is likely to gather momentum.
About AINewsWire
AINewsWire (“AINW”) is a specialized communications platform with a focus on the latest advancements in artificial intelligence (“AI”), including the technologies, trends and trailblazers driving innovation forward. It is one of 70+ brands within the Dynamic Brand Portfolio @ IBN that delivers: (1) access to a vast network of wire solutions via InvestorWire to efficiently and effectively reach a myriad of target markets, demographics and diverse industries; (2) article and editorial syndication to 5,000+ outlets; (3) enhanced press release solutions to ensure maximum impact; (4) social media distribution via IBN to millions of social media followers; and (5) a full array of tailored corporate communications solutions. With broad reach and a seasoned team of contributing journalists and writers, AINW is uniquely positioned to best serve private and public companies that want to reach a wide audience of investors, influencers, consumers, journalists, and the general public. By cutting through the overload of information in today’s market, AINW brings its clients unparalleled recognition and brand awareness.
AINW is where breaking news, insightful content and actionable information converge.
To receive SMS alerts from AINewsWire, text “AI” to 888-902-4192 (U.S. Mobile Phones Only)
For more information, please visit www.AINewsWire.com
Please see full terms of use and disclaimers on the AINewsWire website applicable to all content provided by AINW, wherever published or re-published: https://www.AINewsWire.com/Disclaimer
AINewsWire
Los Angeles, CA
www.AINewsWire.com
310.299.1717 Office
Editor@AINewsWire.com
AINewsWire is powered by IBN