Image Synthesis from Yahoo’s open_nsfw

Yahoo’s recently open-sourced neural network, open_nsfw, is a fine-tuned Residual Network which scores images on a scale of 0 to 1 on their suitability for use in the workplace. In the documentation, Yahoo notes:

Defining NSFW material is subjective and the task of identifying these images is non-trivial. Moreover, what may be objectionable in one context can be suitable in another.

What makes an image NSFW, according to Yahoo? I explore this question with a clever new visualization technique by Nguyen et al. Like Google’s Deep Dream, this visualization trick works by maximally activating certain neurons of the classifier. Unlike Deep Dream, we optimize these activations by performing gradient descent on a parameterization of the manifold of natural images. This parameterization takes the form of a generative network, G, trained adversarially on an unrelated dataset of natural images.
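To make the procedure concrete, here is a minimal sketch of the optimization in PyTorch (an assumption on my part; the released model itself is in Caffe). The generator and classifier below are tiny placeholders standing in for the real GAN and open_nsfw weights; only the structure of the loop, gradient ascent on the classifier’s score with respect to the latent code rather than the pixels, reflects the technique.

```python
import torch
import torch.nn as nn

# Placeholder generator G: maps a latent code z to a 64x64 RGB image.
# A real run would load pretrained GAN weights here instead.
G = nn.Sequential(
    nn.Linear(100, 3 * 64 * 64),
    nn.Tanh(),
    nn.Unflatten(1, (3, 64, 64)),
)

# Placeholder classifier: maps an image to a single NSFW score.
# A real run would wrap the open_nsfw network instead.
classifier = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 1),
)

# Optimize the latent code, not the pixels: gradient ascent on the
# classifier's score, constrained to the generator's image manifold.
z = torch.randn(1, 100, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)

for step in range(200):
    opt.zero_grad()
    score = classifier(G(z)).mean()
    (-score).backward()  # minimizing the negative score maximizes it
    opt.step()
```

Descending in z rather than in pixel space is what keeps the optimizer on G’s manifold of natural-looking images, and it is exactly this constraint that distinguishes the method from Deep Dream’s direct pixel optimization.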

The “space of natural images”, according to G, looks mostly like abstract art. Unsurprisingly, these random pictures, lacking any kind of semantics, receive low scores from the classifier.

https://open_nsfw.gitlab.io/
https://github.com/yahoo/open_nsfw

Open Sourcing a Deep Learning Solution for Detecting NSFW Images

Automatically identifying that an image is not suitable/safe for work (NSFW), including offensive and adult images, is an important problem which researchers have been trying to tackle for decades. Since images and user-generated content dominate the Internet today, filtering NSFW images becomes an essential component of Web and mobile applications. With the evolution of computer vision, improved training data, and deep learning algorithms, computers are now able to automatically classify NSFW image content with greater precision.

Defining NSFW material is subjective and the task of identifying these images is non-trivial. Moreover, what may be objectionable in one context can be suitable in another. For this reason, the model we describe below focuses only on one type of NSFW content: pornographic images. The identification of NSFW sketches, cartoons, text, images of graphic violence, or other types of unsuitable content is not addressed with this model.

To the best of our knowledge, there is no open source model or algorithm for identifying NSFW images. In the spirit of collaboration and with the hope of advancing this endeavor, we are releasing our deep learning model that will allow developers to experiment with a classifier for NSFW detection, and provide feedback to us on ways to improve the classifier.

Our general purpose Caffe deep neural network model (GitHub code) takes an image as input and outputs a probability (i.e., a score between 0 and 1) which can be used to detect and filter NSFW images. Developers can use this score to filter images below a certain suitable threshold based on an ROC curve for specific use-cases, or use this signal to rank images in search results.
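As an illustration of that workflow, the sketch below scores one image and applies a threshold, following the pattern of the repository’s classify_nsfw.py. The file paths match the repository layout, but the mean values and the threshold here are assumptions to be tuned per use-case, not prescriptions from the post.

```python
import caffe
import numpy as np

# Load the released model (paths follow the yahoo/open_nsfw repo layout).
net = caffe.Net('nsfw_model/deploy.prototxt',
                'nsfw_model/resnet_50_1by2_nsfw.caffemodel',
                caffe.TEST)

# Standard Caffe preprocessing: CHW layout, BGR channels, mean subtraction.
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))             # HWC -> CHW
transformer.set_mean('data', np.array([104, 117, 123]))  # assumed BGR mean
transformer.set_raw_scale('data', 255)
transformer.set_channel_swap('data', (2, 1, 0))          # RGB -> BGR

image = caffe.io.load_image('input.jpg')
net.blobs['data'].data[...] = transformer.preprocess('data', image)
nsfw_score = net.forward()['prob'][0][1]  # outputs are [SFW, NSFW]

THRESHOLD = 0.8  # hypothetical cut-off; pick per use-case from an ROC curve
print('NSFW score %.3f -> %s'
      % (nsfw_score, 'filter' if nsfw_score > THRESHOLD else 'keep'))
```

Raising the threshold trades false positives for false negatives along the ROC curve, which is why the post leaves the cut-off to the specific use-case rather than fixing one.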

https://yahooeng.tumblr.com/post/151148689421/open-sourcing-a-deep-learning-solution-for
