Adversarial Nibbler Hackathon

Adversarial Nibbler is a prompt-hacking competition for the safety of generative text-to-image models. Your submissions provide data for training state-of-the-art models such as OpenAI's DALL-E, Google Brain's Imagen, and Midjourney. Your contribution to this competition helps make AI safer for everyone. Safe AI is better AI for everyone!

Nov 3, 2023, 7:00 – 8:00 PM


Key Themes

Explore ML, Kaggle, Machine Learning

About this event

Adversarial Nibbler is a prompt-hacking competition for the safety of generative text-to-image models.

What are Text-to-Image Models?

Text-to-image models are machine learning models that take a natural-language description (a prompt) as input and produce an image matching that prompt. They typically combine a language model, which transforms the input text into a latent representation, with a generative image model, which produces an image conditioned on that representation.
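The two-stage pipeline described above can be sketched as follows. This is a toy illustration only, not a real model: the function names and the hash-based "encoder" are invented for demonstration, and the "image model" simply fills an array conditioned on the latent vector.

```python
# Toy sketch of a text-to-image pipeline (NOT a real model):
# a "language model" maps the prompt to a latent vector, and a
# "generative image model" produces an image from that latent.
import hashlib

import numpy as np


def encode_prompt(prompt: str, dim: int = 8) -> np.ndarray:
    """Toy language model: deterministically map a prompt to a latent vector."""
    seed = int.from_bytes(hashlib.sha256(prompt.encode()).digest()[:4], "big")
    rng = np.random.default_rng(seed)
    return rng.standard_normal(dim)


def generate_image(latent: np.ndarray, size: int = 16) -> np.ndarray:
    """Toy image model: produce an H x W x RGB array conditioned on the latent."""
    seed = abs(int(latent.sum() * 1e6)) % (2**32)
    rng = np.random.default_rng(seed)
    return rng.uniform(0.0, 1.0, (size, size, 3))


def text_to_image(prompt: str) -> np.ndarray:
    """Full pipeline: prompt -> latent representation -> generated image."""
    return generate_image(encode_prompt(prompt))


image = text_to_image("a photo of a cat")
print(image.shape)  # (16, 16, 3)
```

Real systems replace the encoder with a large pretrained language model and the image stage with a diffusion or autoregressive generator, but the prompt-to-latent-to-image flow is the same.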

The role of Adversarial Nibbler?

This competition contributes to a range of risk-mitigation efforts for Responsible AI by providing diverse data that can be used to train safety filters or to make evaluations more robust.

What is the goal of the competition?

To collect prompts that are likely to cause a generative text-to-image model to fail in an unsafe manner (i.e., to violate safety policies).

You may also receive swag and other rewards, and enjoy some nibbling along the way. Let's work together to make AI safer for everyone!

Facilitator

  • Papa Kofi Boahen

    Academic City University College

    Lead at ACity

Organizers

  • Papa Kofi Boahen

    Academic City University College

    GDSC Lead

  • Kwaku Amo-Korankye

    Academic City University College

    Co-Lead

  • Doreen Owoo

    Social Media Manager

  • Bempa Dwomoh

    Photography/Video Lead

  • Emmanuella Uwudia

    Academic City University College

    Event Coordinator

  • Maukewonge Nyarko-Tetteh

    Design Lead

  • David Ekong

    Web Co-Lead

  • William Okwale

    Academic City University College

    Web Lead
