Academic City University College - Accra, Ghana
Adversarial Nibbler is a prompt-hacking competition for the safety of generative text-to-image models. It provides data to help train state-of-the-art models such as OpenAI's DALL-E, Google Brain's Imagen, and Midjourney. Your contribution to this competition helps make AI safer for everyone. Safe AI is Better AI for Everyone!
What are Text-to-Image Models?
Text-to-image models are machine learning models that take a natural language description (i.e., a prompt) as input and produce an image (i.e., a generated image) matching that prompt. They combine a language model, which transforms the input text into a latent representation, with a generative image model, which produces an image conditioned on that representation.
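As a rough illustration of the two-stage pipeline described above, here is a toy Python sketch. The functions `encode_text` and `generate_image` are hypothetical stand-ins for the language model and the generative image model, not real implementations:

```python
# Toy sketch of a text-to-image pipeline (illustrative stubs only, not a real model).

def encode_text(prompt: str) -> list[float]:
    """Stand-in for the language model: map the prompt to a latent vector."""
    # Toy "embedding": character codes normalized to the range [0, 1].
    return [ord(c) / 255.0 for c in prompt]

def generate_image(latent: list[float], size: int = 4) -> list[list[float]]:
    """Stand-in for the generative image model conditioned on the latent."""
    # Toy "image": a size x size grid of values drawn from the latent vector.
    return [[latent[(row * size + col) % len(latent)] for col in range(size)]
            for row in range(size)]

def text_to_image(prompt: str) -> list[list[float]]:
    latent = encode_text(prompt)      # stage 1: language model
    return generate_image(latent)     # stage 2: image generation

image = text_to_image("a red bicycle")
```

Real systems replace these stubs with large neural networks, but the overall flow (prompt to latent representation to conditioned image) is the same.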
What is the role of Adversarial Nibbler?
This competition contributes to a range of risk-mitigation efforts for Responsible AI by providing diverse data that can be used to train safety filters or to increase robustness in evaluation.
What is the goal of the competition?
To collect prompts that are likely to cause a generative text-to-image model to fail in an unsafe manner (i.e., produce safety policy violations).
You may also receive swag and other rewards while you enjoy nibbling. Let's work together to make AI safer for everyone!
Social Media Manager
Photography/Video Lead
Academic City University College
Event Coordinator
Web Co-Lead
Academic City University College
Web Lead
Contact Us