ChatGPT reached 100 million users in January 2023, only two months after its release. That is a record-breaking pace for an app. Numbers at that scale indicate that generative AI (AI that creates new content such as text, images, audio and video) has arrived. But with it come new security and intellectual property (IP) issues for businesses to address.

ChatGPT is being used — and misused — by businesses and criminal enterprises alike. This has security implications for your business, employees and the intellectual property you create, own and protect.

How is ChatGPT being used?

With over 100 million users, ChatGPT's applications are legion, and there are already many real-world examples of how businesses are leveraging the app. IT companies are applying it to software development, debugging, chatbots, data analysis and more. Service companies are streamlining sales, improving customer service and automating routine tasks. Government and public service sectors see benefits in drafting language for laws and bills and in creating content in multiple languages. And countless individuals are using the app as a personal productivity tool.

Of course, as with all innovations, thieves discover uses as well. Generative AI tools are being used in phishing attempts, making them faster to execute, harder to detect and easier to fall for. ChatGPT imitates real human conversation. That means the typos, odd phrasing and poor grammar that often alert users to phishing foul play may soon disappear. Fortunately, while generative AI can be used by criminals to create problems, cybersecurity pros can use ChatGPT to counter them.
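For instance, a defender might ask a model to triage a suspicious message. Below is a minimal sketch, assuming the openai Python package (v1+ client) and an API key in the environment; the model name and prompt are illustrative assumptions, not a production detector.

```python
# A minimal sketch of LLM-assisted phishing triage. Assumes the openai
# Python package (v1+ client) and an API key in the OPENAI_API_KEY
# environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

def triage_email(email_body: str) -> str:
    """Ask the model whether an email shows common phishing indicators."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; use any available chat model
        messages=[
            {"role": "system",
             "content": "You are a security analyst. Reply PHISHING or "
                        "LIKELY SAFE, then list the indicators you see."},
            {"role": "user", "content": email_body},
        ],
    )
    return response.choices[0].message.content

print(triage_email("Your account is locked. Click here to verify now."))
```

A triage helper like this would supplement, not replace, existing email security controls, since model output still needs human review.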

Pitfalls of ChatGPT and its intellectual property implications

OpenAI, the developer of ChatGPT, notes the hazards of its generative AI app. The company states that “…outputs may be inaccurate, untruthful and otherwise misleading at times” and that the tool will, in OpenAI’s words, “hallucinate,” or simply invent, outputs. Generative AI models improve as they learn from ever-larger language data sets, but inaccuracy remains common. Any output generated by the app requires human fact-checking and quality control before use or distribution.

These inaccuracies can complicate your company’s IP rights. IP rights fall into four main categories: patents, trademarks, copyrights and trade secrets. If you claim IP rights to something even partially AI-generated, you need to ensure its accuracy first. To make matters muddier, one big question remains unresolved about AI-generated IP: ownership.

Who owns ChatGPT output? It’s complicated.

Per current ChatGPT terms of use, where permitted by law, you own the input (such as the prompts, questions or texts you enter when seeking output from the tool). Based on your input, the service delivers output. Collectively, the input and output are known as “content” per the terms of use. The terms state that OpenAI assigns to you all its rights, title and interest in and to the output.

However, OpenAI can’t assign rights to content it didn’t initially own. The terms of use also state that the user is responsible for generated content, including ensuring it does not violate applicable laws or OpenAI’s terms of use. The terms further note that one user’s output may be exactly the same as another’s, illustrating the point with the query “Why is the sky blue?”: two different users might ask that same question, and the output could be the same for both.

Many issues revolve around the intersection of AI and intellectual property. A few have been decided, while others have not yet been litigated and remain unresolved. Thaler v. Vidal decided the issue of patents in the U.S. In August 2022, the Federal Circuit ruled that an AI system cannot be named as an inventor, and in April 2023 the U.S. Supreme Court declined to hear the case, leaving in place the rule that patents can only be obtained by humans. However, Congress is now considering the issue and seeking guidance on how AI inventorship should be treated.

In March 2023, the U.S. Copyright Office delivered guidance on registering copyright for works containing AI-generated material. During the copyright application, the applicant must disclose whether the work contains AI-generated content. The guidance also states that the applicant must explain the human author’s contributions to the work, and that sufficient human authorship must be established for that part of the work to receive copyright protection.

What about user input? That’s complicated too.

AI language models use data to continuously improve. ChatGPT captures your chat history to help train its model, which means your input could become training data. If you input confidential or proprietary information, you could put your company’s intellectual property at risk of theft or dissemination. Samsung discovered this the hard way when its engineers accidentally leaked internal source code in an upload to ChatGPT. In response, the company temporarily banned staff from using generative AI tools on company-owned devices.

Samsung isn’t alone. One data security service found that 4.2% of the 1.6 million workers at its client companies had tried to input confidential data into ChatGPT, and it blocked those requests. The inputs included client data, source code and other proprietary and confidential information. One executive pasted corporate strategy into the app and requested a PowerPoint deck. In another incident, a doctor input a patient’s name and condition into the model to help write a letter to an insurance company. The fear is that this confidential data could resurface as output in response to the right query.
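As an illustration of that kind of blocking, here is a minimal sketch of a pre-submission screen that a data loss prevention (DLP) proxy might apply. The patterns and the screen_prompt helper are illustrative assumptions, not any vendor’s actual rules.

```python
import re

# Hypothetical patterns a security team might block before a prompt
# leaves the corporate network; real DLP products use far richer rules.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # key material
    re.compile(r"\b(?:confidential|internal use only)\b", re.IGNORECASE),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # U.S. Social Security number shape
]

def screen_prompt(prompt: str) -> bool:
    """Return True only if the prompt looks safe to send to an external AI tool."""
    return not any(p.search(prompt) for p in CONFIDENTIAL_PATTERNS)

print(screen_prompt("Internal use only: Q3 corporate strategy draft"))  # False
print(screen_prompt("Summarize the benefits of unit testing."))         # True
```

Pattern matching catches only obvious leaks; it cannot recognize, say, a patient’s name, which is why policy and training still matter.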

What can security teams do to safeguard IP?

Generative AI is a fast-moving target. Keeping your employees and confidential information secure takes vigilance. Review and update your security posture regularly. For now, here are some simple things you can do to safeguard your IP.

  • Opt out of model training. Turn off chat history and model training in ChatGPT’s data controls settings. OpenAI notes in its terms of use that disabling these features may limit the app’s functionality, but that may be a reasonable price to pay for IP safety.
  • Provide employee training. Tell staff how these models work and that their inputs could become public, harming the company, partners, customers, patients or other employees. Also, teach staff how generative AI improves phishing and vishing schemes to increase their vigilance for those types of attacks.
  • Review terms of use. ChatGPT’s terms of use are updated in response to issues that arise with users. Check the terms for this and other generative AI tools frequently to ensure you stay protected.
  • Follow relevant IP legal proceedings. Globally, there will be more laws and rulings about IP and its intersection with generative AI. Corporate legal teams need to follow court proceedings and keep security teams informed of how they might affect security guidelines and adherence to the law.
  • Use the least privilege principle. Give employees the least access and authorization required to perform their jobs (a minimal sketch follows this list). This might help cut down on unauthorized access to information that could be shared with external AI tools.
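To make the least privilege principle concrete, here is a minimal sketch of a role-based access check. The role and resource names are hypothetical; production systems enforce this through IAM or RBAC policies rather than application code.

```python
# A minimal sketch of role-based least privilege. The roles and resources
# below are hypothetical examples, not a real organization's policy.
ROLE_PERMISSIONS = {
    "support_agent": {"ticket_history"},
    "engineer": {"ticket_history", "source_code"},
    "analyst": {"ticket_history", "sales_reports"},
}

def can_access(role: str, resource: str) -> bool:
    """Grant access only when the role explicitly includes the resource."""
    return resource in ROLE_PERMISSIONS.get(role, set())

# An engineer may read source code; a support agent may not, so there is
# less confidential material available to paste into an external AI tool.
assert can_access("engineer", "source_code")
assert not can_access("support_agent", "source_code")
```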

The easy proliferation of generative AI has democratized and accelerated its adoption. This tech-led trend will drive disruption, and questions about intellectual property protection will arise with it. Learn more about how IBM helps you embrace the opportunities of generative AI while also protecting against the risks.
