
Retouch Your Photos Before You Even Click Them

Google and MIT’s brand-new machine learning algorithm retouches your photos before you take them

August 2, 2017

After pushing phone-camera resolution about as far as it can go, developers are finding it quite hard to come up with something even more advanced.

This is the main reason Google is embracing computational photography.

Machine learning and algorithms work together to improve the quality of your photos.

Google collaborated with scientists from MIT to take this mission to a whole new level. Together they produced algorithms capable of retouching your photos like a highly skilled Photoshop expert, and doing it in real time, before you even press the shutter.

The scientists used machine learning to build their software, training neural networks on a dataset of more than 5,000 photos created by Adobe and MIT.

The algorithm retouches a photo the same way a professional photographer would.

Google and MIT’s algorithm then studied this data to learn which enhancements suit which photos: where to increase brightness, where to reduce saturation, and so on.
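As a rough illustration of what such enhancements look like in code (this is a simple numpy sketch, not the paper’s actual method, and the `adjust` function is hypothetical), brightness and saturation edits can be expressed as per-pixel arithmetic:

```python
import numpy as np

def adjust(image, brightness=1.0, saturation=1.0):
    """Apply simple global brightness and saturation tweaks.

    image: float array of shape (H, W, 3) with values in [0, 1].
    """
    # Brightness: scale all channels uniformly.
    out = image * brightness
    # Saturation: blend each pixel with its grayscale (luma) value.
    luma = out @ np.array([0.299, 0.587, 0.114])
    out = luma[..., None] + saturation * (out - luma[..., None])
    return np.clip(out, 0.0, 1.0)

img = np.full((2, 2, 3), 0.5)       # a gray test image
img[0, 0] = [0.8, 0.2, 0.2]         # with one reddish pixel
dimmed = adjust(img, brightness=0.9, saturation=1.2)
```

A learned system decides these parameters per photo (and per region) instead of taking them as fixed inputs, but the underlying edits are of this kind.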

This is not the first time machine learning has been used to improve photos. The real advance here is slimming the algorithms down so they are small and efficient enough to run on a user’s device without lag.

The entire piece of software is about the size of a single digital image, and according to a blog post from MIT it is fully equipped “to process images in a range of styles.”

Because the neural networks can be trained on new sets of images, they could learn to replicate an individual photographer’s signature look. Facebook and Prisma do something similar, having created artistic filters that replicate the work of famous painters.

Smartphones already process snaps in real time, but these new techniques are subtler: they respond to the user and to the requirements of a particular image, rather than applying one general rule to every photo.

To shrink the algorithms, the scientists applied several techniques. The goal of reducing the size was to let the software fit on a user’s smartphone without taking up much space.

These techniques included turning the changes made to each photo into formulae, and using grid-like coordinates to map out the pictures, so that the edits are expressed as mathematics rather than as full-scale photos.
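A minimal sketch of that idea, assuming a coarse grid where each cell stores a small affine formula mapping input colors to output colors (the real system learns these coefficients with a neural network; the function name and shapes here are illustrative):

```python
import numpy as np

def apply_grid_transforms(image, coeffs):
    """Apply a per-cell affine color transform.

    image:  (H, W, 3) float array in [0, 1].
    coeffs: (gh, gw, 3, 4) array; each grid cell holds a 3x4 affine
            matrix mapping [r, g, b, 1] -> [r', g', b'].
    """
    h, w, _ = image.shape
    gh, gw = coeffs.shape[:2]
    out = np.empty_like(image)
    for y in range(h):
        for x in range(w):
            cell = coeffs[y * gh // h, x * gw // w]   # nearest grid cell
            rgb1 = np.append(image[y, x], 1.0)        # homogeneous color
            out[y, x] = cell @ rgb1
    return np.clip(out, 0.0, 1.0)

# An identity transform in every cell leaves the image unchanged.
identity = np.tile(np.hstack([np.eye(3), np.zeros((3, 1))]), (4, 4, 1, 1))
img = np.random.default_rng(0).random((8, 8, 3))
```

Storing a 4×4 grid of 3×4 matrices is far cheaper than storing an edited full-resolution photo, which is why this representation keeps the model so small.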

“This technology has the potential to be very useful for real-time image enhancement on mobile platforms,” Google researcher Jon Barron told MIT. “Using machine learning for computational photography is an exciting prospect but is limited by the severe computational and power constraints of mobile phones. This paper may provide us with a way to sidestep these issues and produce new, compelling, real-time photographic experiences without draining your battery or giving you a laggy viewfinder experience.”

