Hiding from Facial Recognition

Modern facial recognition systems are highly capable and can be misused in many ways by both law enforcement agencies and private companies. We therefore believe there is a need to protect photos that people share online from being used to identify them. To address this problem, we design an adversarial perturbation that, when applied to a photo, is invisible to the human eye yet fools industrial facial recognition systems. The main goal of the project is to develop an app where users can apply a "protective" adversarial perturbation to their photos before posting them online.
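To give a feel for how such a "protective" perturbation can be computed, here is a minimal PGD-style sketch. It assumes a PyTorch face-embedding model (the `embed` module, the `epsilon` budget, and all other names here are illustrative assumptions, not the project's actual method): the idea is to push the photo's identity embedding away from the original while keeping the pixel change within a small, imperceptible budget.

```python
# Minimal PGD-style sketch of an identity-hiding perturbation.
# ASSUMPTION: `embed` is any PyTorch module mapping an image tensor
# [1, 3, H, W] in [0, 1] to an identity embedding [1, D]. The budget and
# step sizes are illustrative, not the project's actual parameters.
import torch


def protect(image: torch.Tensor, embed: torch.nn.Module,
            epsilon: float = 4 / 255, steps: int = 40,
            step_size: float = 1 / 255) -> torch.Tensor:
    """Perturb `image` so its face embedding moves away from the original,
    under an L-infinity budget `epsilon` small enough to stay invisible."""
    target = embed(image).detach()           # embedding of the clean photo
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        # Minimize similarity between perturbed and original embeddings.
        loss = torch.nn.functional.cosine_similarity(
            embed(image + delta), target).mean()
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()      # descend similarity
            delta.clamp_(-epsilon, epsilon)             # imperceptibility
            delta.add_(image).clamp_(0, 1).sub_(image)  # keep valid pixels
        delta.grad.zero_()
    return (image + delta).detach()
```

Keeping the perturbation inside a small L-infinity ball is what makes the protected photo look unchanged to a person while its embedding, as seen by a recognition model, drifts away from the owner's identity.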

Valeria Cherepanova
PhD Student in Applied Math

My research focuses on adversarial machine learning and fairness in deep learning