
Google Opens Pixel's Depth-Of-Field Effect Code to Researchers


Mar 15, 2018, 8:21 AM   by Eric M. Zeman   @phonescooper
updated Mar 15, 2018, 8:49 AM

Updated: typo

Google this week made its semantic image segmentation model, which powers the portrait effect on Pixel smartphones, available to academics and researchers. Google uses semantic image segmentation to define the shape of objects within photos and then assign each object a label, such as person, dog, or car. In short, it's what allows smartphones such as the Pixel to recognize the shape of a subject's head in a portrait, draw a line around it, and then act on the other regions of the photo, such as blurring the background.

Google is open-sourcing the newest version of its semantic image segmentation model, DeepLab-v3+, as implemented in TensorFlow. It is built on top of a convolutional neural network (CNN) backbone architecture, which Google says is meant for server-side deployments.

Google's immediate goal is not to bring Pixel-level portrait shooting to other smartphones. "We hope that publicly sharing our system with the community will make it easier for other groups in academia and industry to reproduce and further improve upon state-of-art systems, train models on new datasets, and envision new applications for this technology," said the company. Researchers and academics interested in the technology can find the necessary resources on GitHub.
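The released model does the hard part: labeling every pixel of the photo. The portrait effect itself is then just compositing. As a rough illustration (not Google's actual pipeline), given a boolean "person" mask of the kind a segmentation model produces, a background blur can be composited in a few lines of NumPy; `box_blur` and `portrait_blur` here are hypothetical helper names:

```python
import numpy as np

def box_blur(channel, radius=4):
    """Naive box blur: average of shifted copies (edges wrap, for brevity)."""
    acc = np.zeros_like(channel)
    offsets = range(-radius, radius + 1)
    for dy in offsets:
        for dx in offsets:
            acc += np.roll(np.roll(channel, dy, axis=0), dx, axis=1)
    return acc / (2 * radius + 1) ** 2

def portrait_blur(image, mask, radius=4):
    """Blur the background of `image` while keeping masked pixels sharp.

    image: HxWx3 float array; mask: HxW bool array, True where the
    segmentation model labeled the pixel as the subject (e.g. "person").
    """
    blurred = np.stack(
        [box_blur(image[..., c], radius) for c in range(image.shape[-1])],
        axis=-1,
    )
    m = mask[..., None].astype(image.dtype)
    # Keep the subject from the original image, take everything else
    # from the blurred copy.
    return m * image + (1.0 - m) * blurred
```

In practice a real pipeline would soften the mask edge and use a lens-like blur rather than a box filter, but the compositing step is the same idea.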

more info at Google »





 

All content Copyright 2001-2018 Phone Factor, LLC. All Rights Reserved.
Content on this site may not be copied or republished without formal permission.