
OpenAI Offers Cash Bounties for ChatGPT Bugs

ChatGPT on a Google Pixel smartphone.
Justin Duino / Review Geek

OpenAI has declared open season for bounty hunters. This week, the company announced a Bug Bounty program that offers researchers, ethical hackers, and technology enthusiasts cash rewards for finding and reporting bugs in its generative AI chatbot, ChatGPT.

The Bug Bounty program is administered by Bugcrowd and pays out anywhere from $200 to $6,500 per discovered vulnerability, with a maximum reward of $20,000. As of this writing, the average payout for the program is $1,287.50.

However, bug bounty hunters must discover bugs within the program’s scope. That scope covers issues relating to OpenAI’s APIs and public cloud resources or infrastructure, such as cloud storage accounts and cloud compute servers. ChatGPT issues are also within scope, including bugs related to logins, subscriptions, OpenAI-created plugins, and more.

In particular, OpenAI is interested in bugs involving authentication and authorization, data exposure, payments, the browsing and code interpreter plugins, and security issues with the plugin creation system, among others.

Additionally, the company wants to find vulnerabilities that might be exposed through third-party applications such as Google Workspace, Trello, Evernote, Zendesk, Salesforce, and Stripe. Participants in the bug bounty program are not authorized to conduct tests against those outside companies, however, and must limit their research to confidential OpenAI information exposed through them. These sorts of bugs carry a higher payout rate.

Potential bug bounty hunters should also keep in mind what falls outside the program’s scope. Excluded are model issues such as jailbreaking, or manipulating ChatGPT to “say bad things,” “tell you how to do bad things,” or “write malicious code for you.” Model hallucinations, such as “getting the model to pretend to do bad things,” “getting the model to pretend to give you answers to secrets,” and “getting the model to pretend to be a computer and execute code,” are also excluded.

Danny Chadwick
Danny has been a technology journalist since 2008. He served as senior writer, as well as multimedia and home improvement editor, at Top Ten Reviews until 2019. Since then, he has been a freelance contributor to Lifewire and a ghostwriter for Fit Small Business. His work has also appeared on Laptop Mag, Tom’s Guide, and business.com.