AI data leak crisis: New tool prevents company secrets from being fed to ChatGPT

The surge in the use of artificial intelligence tools like ChatGPT has already led workers to leak sensitive company data to chatbots, but a new technology aims to prevent those leaks.

The meteoric rise in the everyday use of artificial intelligence has also raised the risk that workers – inadvertently or otherwise – could leak sensitive company data to new AI-powered tools like ChatGPT, whether or not their company has banned such tools.

In fact, it's already happening. Samsung recently experienced a series of leaks after employees purportedly pasted source code into the new bot, potentially exposing proprietary information. 

Serial tech entrepreneur Wayne Chang has worked in the AI space for years, and anticipated that breaches like Samsung's would become more common as workers embraced the new technology. Now, he has rolled out an AI tool of his own that blocks such leaks by preventing chatbots and the large language models (LLMs) behind them from ingesting company secrets.

Chang told FOX Business that when OpenAI's ChatGPT was released to the public in November 2022, he saw how powerful it would be, but said it also "comes with huge, huge risks."


So in December, he began working to develop LLM Shield, a product for companies and governments that uses "technology to fight technology" by scanning everything that is downloaded or transmitted by a worker and blocking any sensitive data from being entered into the AI tools – including ChatGPT and its rivals like Google's Bard and Microsoft's Bing. 

LLM Shield was just released last week, and it alerts organizations whenever an attempt is made to upload sensitive information. 

The way it works is that administrators set guardrails for the types of data the company wants to protect. LLM Shield then warns users whenever they are about to send sensitive data, obfuscates details so the content remains useful but is not legible to humans, and stops users from sending messages containing keywords that indicate the presence of sensitive data.
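The article does not detail how LLM Shield implements these rules, so the sketch below is only a rough illustration of the general approach it describes: outbound text is checked against admin-defined guardrails and then allowed, warned on, obfuscated, or blocked. It is written in Python, and the rule categories, patterns, and function name are hypothetical placeholders, not the product's actual configuration or API.

```python
import re

# Hypothetical guardrails an administrator might configure. The article describes
# keyword-based rules; the categories and patterns here are placeholders, not
# LLM Shield's actual configuration.
GUARDRAILS = {
    "block": [r"\bconfidential\b", r"BEGIN RSA PRIVATE KEY"],
    "obfuscate": [r"\b\d{3}-\d{2}-\d{4}\b"],      # e.g. SSN-like numbers
    "warn": [r"\binternal use only\b"],
}

def check_outbound_text(text: str) -> tuple[str, str]:
    """Return (action, possibly-redacted text) for a message bound for an AI chatbot."""
    # Stop the message outright if a hard-block keyword is present.
    for pattern in GUARDRAILS["block"]:
        if re.search(pattern, text, re.IGNORECASE):
            return "block", ""
    # Obfuscate details so the content stays useful but the sensitive parts are hidden.
    redacted = text
    for pattern in GUARDRAILS["obfuscate"]:
        redacted = re.sub(pattern, "[REDACTED]", redacted, flags=re.IGNORECASE)
    # Warn the user before sending if softer indicators are present.
    for pattern in GUARDRAILS["warn"]:
        if re.search(pattern, text, re.IGNORECASE):
            return "warn", redacted
    return ("obfuscate" if redacted != text else "allow"), redacted

# Example: a block-listed keyword stops the prompt from ever reaching the chatbot.
print(check_outbound_text("Here is our confidential roadmap for Q3"))  # ('block', '')
```

In practice such checks would sit between the user's browser or clipboard and the chatbot, but the placement and matching logic above are assumptions for illustration only.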


And just like the AI tools it is tasked with reining in, LLM Shield will continue to get smarter. It works much like a spam filter: as more AI bots come onto the market, the software updates automatically to bolster protection.

The company also has plans in the works to release a personal edition for individuals to download for home use.

While many companies are simply banning the use of AI tools altogether out of fear of leaks, the LLM Shield team is trying to mitigate those risks and encourage broader AI adoption rather than blocking LLM systems outright.

Chang says the emergence of these new AI tools marks the beginning of a massive shift in productivity, and he believes the workforce as a whole will benefit from the technology's positive effects.


"Things are going to speed up quite rapidly – that's both the positive and the negative," he told FOX Business. "My focus here is that I want to make sure we can hopefully steer AI more towards the positive and avoid as much downside as possible."
