While ChatGPT and other generative AI tools are finding their place in the workflows of some companies, they are proving to be liabilities for others. According to Bloomberg, Samsung has cracked down, banning the use of such AI tools after it was revealed that staff had uploaded “sensitive code” to ChatGPT.
In a memo reviewed by Bloomberg, Samsung said an internal survey found that 65% of those polled believed ChatGPT posed a security risk to the company. The memo also revealed that last month, engineers accidentally leaked source code to the platform. As a result, use of ChatGPT and similar platforms is banned at Samsung until further notice.
Per the memo, “Interest in generative AI platforms such as ChatGPT has been growing internally and externally. While this interest focuses on the usefulness and efficiency of these platforms, there are also growing concerns about security risks presented by generative AI. We ask that you diligently adhere to our security guideline and failure to do so may result in a breach or compromise of company information resulting in disciplinary action up to and including termination of employment. HQ is reviewing security measures to create a secure environment for safely using generative AI to enhance employees’ productivity and efficiency. However, until these measures are prepared, we are temporarily restricting the use of generative AI.”
See also: Samsung and LG resume negotiations over OLED panels