For years, privacy experts and civil rights advocates have warned about the potential misuse of facial recognition technology, especially if it includes the scientifically iffy “emotion recognition”.
That skepticism makes sense, considering not even humans can do accurate “emotion recognition” on one another. Think about the famous Mona Lisa smile and how differently each viewer reads it – or take a look at the image and venture a guess about what the woman is feeling.
Impossible to tell for sure? Now imagine an AI doing that same exercise, then making a decision based on the emotion it “read.”
Fortunately, Microsoft is finally scrapping that technology from its Azure Face facial recognition service, an important first win for those concerned about how facial recognition technology is deployed.
Under new AI ethics policies adopted by the company, Microsoft will stop offering “emotion recognition” in its AI services and will phase out capabilities that attribute gender and age to people.
“Effective today, new customers need to apply for access to use facial recognition operations in Azure Face API, Computer Vision, and Video Indexer.
Existing customers have one year to apply and receive approval for continued access to the facial recognition services based on their provided use cases. By introducing Limited Access, we add an additional layer of scrutiny to the use and deployment of facial recognition to ensure use of these services aligns with Microsoft’s Responsible AI Standard and contributes to high-value end-user and societal benefit. This includes introducing use case and customer eligibility requirements to gain access to these services. […]. Starting June 30, 2023, existing customers will no longer be able to access facial recognition capabilities if their facial recognition application has not been approved,” says Microsoft.
Microsoft will also attempt to address more of the socio-technical risks posed by facial recognition technology, like the potential for discrimination and invasion of privacy.
“In another change, we will retire facial analysis capabilities that purport to infer emotional states and identity attributes such as gender, age, smile, facial hair, hair, and makeup. We collaborated with internal and external researchers to understand the limitations and potential benefits of this technology and navigate the tradeoffs. In the case of emotion classification specifically, these efforts raised important questions about privacy, the lack of consensus on a definition of “emotions,” and the inability to generalize the linkage between facial expression and emotional state across use cases, regions, and demographics. API access to capabilities that predict sensitive attributes also opens up a wide range of ways they can be misused—including subjecting people to stereotyping, discrimination, or unfair denial of services,” adds Microsoft.
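The failure mode Microsoft describes is easy to see in code. The sketch below mimics the general shape of the legacy Face detect response (a `faceAttributes.emotion` object with per-label scores); the field names follow that pattern, but the numbers are invented for illustration, and the `top_emotion` helper is a hypothetical example of how downstream applications often consumed these scores:

```python
# Invented example response, loosely shaped like the legacy Azure Face
# detect output. The scores below are made up for illustration only.
legacy_response = {
    "faceAttributes": {
        "emotion": {
            "anger": 0.01, "contempt": 0.02, "disgust": 0.01, "fear": 0.05,
            "happiness": 0.31, "neutral": 0.28, "sadness": 0.27, "surprise": 0.05,
        }
    }
}

def top_emotion(face):
    """Pick the highest-scoring label, the way downstream code often did."""
    emotions = face["faceAttributes"]["emotion"]
    label = max(emotions, key=emotions.get)
    return label, emotions[label]

label, score = top_emotion(legacy_response)
print(label, score)
```

Here “happiness” wins with a score of only 0.31 – barely ahead of “neutral” and “sadness” – yet an application taking the argmax would treat the person as confidently happy. That collapse of an ambiguous distribution into a single hard label is exactly the kind of misuse the retirement is meant to prevent.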
You can read more about how Microsoft is implementing responsible AI practices here.