Transformer Model for Language Understanding Part 01 - DLMC47

Mar 30, 2021, 4:00 – 5:00 PM

39 RSVP'd

Key Themes

Machine Learning

About this event

The core idea behind the Transformer model is self-attention: the ability to attend to different positions of the input sequence to compute a representation of that sequence. The Transformer builds stacks of self-attention layers, explained below in the sections Scaled dot product attention and Multi-head attention.
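For context, the scaled dot product attention mechanism named above can be sketched in a few lines of NumPy. This is a minimal illustration of the idea, not the session's reference material; the function name, mask convention, and shapes are chosen for the example only.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v, mask=None):
    """Attention over queries q, keys k, and values v of shape (..., seq_len, depth).

    A minimal sketch of the mechanism described above, with illustrative names.
    """
    depth = q.shape[-1]
    # Similarity of every query with every key, scaled by sqrt(depth)
    # so the softmax logits stay in a well-behaved range.
    scores = q @ k.swapaxes(-2, -1) / np.sqrt(depth)
    if mask is not None:
        # Positions where mask is False receive a large negative score,
        # so their attention weight after softmax is effectively zero.
        scores = np.where(mask, scores, -1e9)
    # Softmax over the key dimension yields the attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted sum of the value vectors.
    return weights @ v

# Tiny usage example: one sequence of 3 tokens with depth 4.
q = k = v = np.random.rand(3, 4)
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # (3, 4)
```

Multi-head attention, covered in the session alongside this, runs several such attention computations in parallel on learned projections of q, k, and v and concatenates the results.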

Speaker

  • Muhammad Huzaifa Shahbaz

    Lenaar Digital

    Co-founder

Partners

Office of Research, Innovation & Commercialization - ORIC

Amal4Ajar

IEEE SSUET - Student Branch

IEEE Computer Society SSUET

Organizers

  • Laiba Rafiq

    GDSC Lead

  • Maaz Farman

SPARK⚡BIZ

    Community Mentor

  • Shayan Faiz

    techrics

    Outreach Coordinator

  • Muhammad Ahmer Zubair

    Sharp Edge

    Media Creative Lead

  • Ehtisham Ul Haq

    Tech Lead

  • Mohammad Nabeel Sohail

    AI and Chatbot Developer | Full Stack Web | PAFLA Ambassador | Public Speaker | Trainer

    Communications Lead

  • Kashan Khan Ghori

    Softseek International

    Operations Lead

  • Daniyal Jamil

    Technology Links

    Marketing Lead

  • Sami Faiz Qureshi

    ConciSafe

    Event Management Lead

  • Maham Amjad

    Content Writing Lead

  • Syed Affan Hussain

    HnH Soft Tech Solutions Pvt Ltd

    Host
