Saturday, July 27, 2024

Google Presents Bug Bounties for Generative AI Safety Vulnerabilities

[ad_1]

Google’s Vulnerability Reward Program affords as much as $31,337 for locating potential hazards. Google joins OpenAI and Microsoft in rewarding AI bug hunts.

Google logo at Googleplex Silicon Valley Mountain View in California.
Picture: Markus Mainka/Adobe Inventory

Google expanded its Vulnerability Rewards Program to incorporate bugs and vulnerabilities that may very well be present in generative AI. Particularly, Google is searching for bug hunters for its personal generative AI, merchandise similar to Google Bard, which is accessible in lots of international locations, or Google Cloud’s Contact Heart AI, Agent Help.

“We imagine this may incentivize analysis round AI security and safety, and convey potential points to mild that may finally make AI safer for everybody,” Google’s Vice President of Belief and Security Laurie Richardson and Vice President of Privateness, Security and Safety Engineering Royal Hansen wrote in an Oct. 26 weblog put up. “We’re additionally increasing our open supply safety work to make details about AI provide chain safety universally discoverable and verifiable.”

Bounce to:

Google’s bug bounty program: Limitations and rewards

There are limitations as to what counts as a vulnerability in generative AI; a whole record of what vulnerabilities Google considers in scope or out of scope for the Vulnerability Rewards Program is in this Google safety weblog.

Generative AI introduces dangers conventional computing doesn’t; these dangers embody unfair bias, mannequin manipulation and misinterpretations of knowledge, Richardson and Hansen wrote. Notably, AI “hallucinations” — misinformation generated inside a personal shopping session — don’t rely as vulnerabilities for the needs of the Vulnerability Rewards Program. Assaults that expose delicate info, change the state of a Google consumer’s account with out their consent or present backdoors right into a generative AI mannequin are inside scope.

In the end, anybody taking part within the bug bounty must show that the vulnerability they uncover may “pose a compelling assault situation or possible path to Google or consumer hurt,” in accordance with the Google safety weblog.

Potential Google AI bug bounty rewards

Rewards for the Vulnerability Rewards Program vary from $100 to $31,337, relying on the kind of vulnerability. Particulars on rewards, payouts may be discovered on Google’s Bug Hunters website.

Different bug bounties and customary assault varieties in generative AI

OpenAI, Microsoft and different organizations supply bug bounties for white hat hackers who discover vulnerabilities in generative AI techniques. Microsoft affords between $2,000 and $15,000 for qualifying bugs. OpenAI’s bug bounty program will give between $200 and $20,000.

SEE: IBM X-Power researchers discovered phishing emails written by individuals are barely extra more likely to get clicks than these written by ChatGPT. (TechRepublic)

In an October 26 report, HackerOne and OWASP discovered that the most typical vulnerability in generative AI was immediate injection (i.e., utilizing prompts to make the AI mannequin do one thing it was not meant to do), adopted by insecure output dealing with (i.e., when LLM output is accepted with out scrutiny) and the manipulation of coaching knowledge.

study to make use of generative AI

Builders and safety researchers simply beginning out with generative AI have loads of choices in terms of studying how you can use it, from experimenting with free functions similar to ChatGPT to taking skilled programs. DeepLearning.AI has programs at each newbie and superior ranges for professionals who wish to discover ways to use and develop for synthetic intelligence and machine studying.

[ad_2]

Related Articles

LEAVE A REPLY

Please enter your comment!
Please enter your name here

Latest Articles