Introduction to Autoencoders in TensorFlow - DLMC26

Jan 27, 2021, 3:00 – 4:30 PM

56 RSVP'd

Key Themes

Machine Learning

About this event

An autoencoder is a special type of neural network that is trained to copy its input to its output. For example, given an image of a handwritten digit, an autoencoder first encodes the image into a lower-dimensional latent representation, then decodes the latent representation back to an image. An autoencoder learns to compress the data while minimizing the reconstruction error. In this session, we will walk through an introduction to autoencoders in core TensorFlow. A minimal sketch of the idea in code follows below.
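
To make the idea concrete, here is a minimal sketch of a fully connected autoencoder in TensorFlow's Keras API, in the spirit of what the session covers. The MNIST handwritten-digit dataset, the 64-dimensional latent space, and the layer sizes are illustrative assumptions, not the exact code from the event.

    # Minimal autoencoder sketch in TensorFlow/Keras (illustrative assumptions only).
    import tensorflow as tf
    from tensorflow.keras import layers, losses

    # Load the MNIST handwritten digits and scale pixel values to [0, 1].
    (x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
    x_train = x_train.astype("float32") / 255.0
    x_test = x_test.astype("float32") / 255.0

    latent_dim = 64  # size of the compressed latent representation (assumed here)

    class Autoencoder(tf.keras.Model):
        def __init__(self, latent_dim):
            super().__init__()
            # Encoder: flatten the 28x28 image and compress it to latent_dim values.
            self.encoder = tf.keras.Sequential([
                layers.Flatten(),
                layers.Dense(latent_dim, activation="relu"),
            ])
            # Decoder: expand the latent vector back into a 28x28 image.
            self.decoder = tf.keras.Sequential([
                layers.Dense(28 * 28, activation="sigmoid"),
                layers.Reshape((28, 28)),
            ])

        def call(self, x):
            encoded = self.encoder(x)
            return self.decoder(encoded)

    autoencoder = Autoencoder(latent_dim)
    # Train the network to reproduce its own input, minimizing reconstruction error.
    autoencoder.compile(optimizer="adam", loss=losses.MeanSquaredError())
    autoencoder.fit(x_train, x_train,
                    epochs=10,
                    shuffle=True,
                    validation_data=(x_test, x_test))

With this setup, each 784-pixel image is compressed to 64 latent values (roughly a 12x reduction), and the decoder is trained to reconstruct the original image from that compressed representation.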

Speaker

  • Muhammad Huzaifa Shahbaz

    Lenaar Digital

    Co-founder

Partners

  • Amal4Ajar

  • Office of Research, Innovation & Commercialization - ORIC

  • IEEE SSUET - Student Branch

  • IEEE Computer Society SSUET

Organizers

  • Laiba Rafiq

    GDSC Lead

  • Maaz Farman

    SPARK⚑BIZ

    Community Mentor

  • Shayan Faiz

    techrics

    Outreach Coordinator

  • Muhammad Ahmer Zubair

    Sharp Edge

    Media Creative Lead

  • Ehtisham Ul Haq

    Tech Lead

  • Mohammad Nabeel Sohail

    AI and Chatbot Developer | Full Stack Web | PAFLA Ambassador | Public Speaker | Trainer

    Communications Lead

  • Kashan Khan Ghori

    Softseek International

    Operations Lead

  • Daniyal Jamil

    Technology Links

    Marketing Lead

  • Sami Faiz Qureshi

    ConciSafe

    Event Management Lead

  • Maham Amjad

    Content Writing Lead

  • Syed Affan Hussain

    HnH Soft Tech Solutions Pvt Ltd

    Host