ChatGPT is proving to be a rather alluring assistant in many professions, but it's not without risks, and some companies have banned the chatbot at work.
It may seem obvious that uploading work-related information to an online artificial intelligence platform owned by another company is a potential security and privacy breach. Still, ChatGPT can be a real boon for employees feeling the time crunch.
In particular, software engineers find ChatGPT useful for writing, testing, or debugging code, even though the technology is prone to errors.
Around 43 percent of employees use AI tools such as ChatGPT at work, mostly without telling their boss, according to a survey of about 12,000 professionals.
Samsung Electronics recently cracked down on the use of generative AI after an engineer realized a tech company's worst nightmare by pasting sensitive source code into ChatGPT.
Like many companies, Samsung is worried that anything uploaded to AI platforms like OpenAI's ChatGPT or Google's Bard will get stored on those companies' servers, with no way to access or delete the information.
OpenAI can use anything typed into AI systems like ChatGPT to improve the system. The fear is that proprietary or sensitive company information given to ChatGPT could be unintentionally shared with other users.
And OpenAI is still ironing out security issues: it temporarily shut down ChatGPT in March to fix a bug that let users see the titles of other users' chat histories.
Then, in April, OpenAI made it possible for users to turn off their chat history, which the company said would stop ChatGPT from using the data to train its AI model.
In response to these security concerns, around half of human resources leaders are issuing ChatGPT guidelines for staff, while 3 percent have banned the chatbot outright, according to a survey by consulting firm Gartner.
However, some companies have recognized that the AI cat is already out of the bag and have developed, or are in the process of developing, their own AI platforms as safer alternatives to the freely accessible ChatGPT.
Amazon banned ChatGPT in January and has urged its developers to use its in-house AI called CodeWhisperer if they want coding advice or shortcuts.
In May, Apple restricted the use of ChatGPT for some employees to prevent the exposure of confidential information. Apple is also developing its own AI platform to compete with ChatGPT, whose maker, OpenAI, is backed by a multi-billion-dollar investment from Microsoft.
The Commonwealth Bank of Australia restricted the use of ChatGPT in June and directed technical staff to use a similar tool called CommBank Gen.ai Studio, which was developed in partnership with Silicon Valley tech company H2O.ai.
Other banks, including Bank of America, Citigroup, Deutsche Bank, Goldman Sachs, Wells Fargo & Co, and JP Morgan, have banned ChatGPT outright.
Accounting firm PwC has encouraged staff to play around with ChatGPT but warned them not to use the program for client work.
"Our policies don't allow our people to use ChatGPT for client usage pending quality standards that we apply to all technology innovation to ensure safeguards," PwC's chief digital information officer Jacqui Visch told Financial Review.
Around 15 percent of law firms have issued warnings about ChatGPT, according to a survey of more than 400 legal professionals in the US, UK, and Canada. Mishcon de Reya, a UK-based law firm with around 600 lawyers, has banned the use of the AI platform due to risks to sensitive data.
In May, staff at five hospitals in Western Australia were told to stop using ChatGPT after some used the platform to write private medical notes.
"Crucially, at this stage, there is no assurance of patient confidentiality when using AI bot technology, such as ChatGPT, nor do we fully understand the security risks," said Paul Forden, who heads up Perth's South Metropolitan Health Service.
"For this reason, the use of AI technology, including ChatGPT, for work-related activity that includes any patient or potentially sensitive health service information must cease immediately."
The companies openly embracing ChatGPT see it as a way to save on content generation costs. The Coca-Cola Company plans to use ChatGPT and AI image generator DALL·E for branding and content. In January, BuzzFeed announced a partnership to create quizzes and other content for Meta using OpenAI's publicly available API.
Blogging platform Medium has "welcomed the responsible use of AI-assistive technology" but requires authors to disclose its use. CNET had quietly experimented with AI-written stories but paused the practice in January.
Undoubtedly, generative AI will eventually have a place in the office and may even replace some staff. But for now, many companies see more risks than benefits.