Google's Cutting-Edge AI: Turning Photos into a Short Movie

 

According to a report this week from the US technology blog Gizmodo, MIT Technology Review has published an article describing DeepStereo, a new system developed by Google that uses artificial intelligence to seamlessly turn a collection of still photographs into video.

The paper's lead author, John Flynn, is a Google engineer, and the three co-authors also work at Google. In the paper, Flynn and his colleagues explain how the DeepStereo system was developed.

Long before DeepStereo, similar technologies for producing animation from static images already existed. ACM's computer graphics group, SIGGRAPH, had previously showcased animations produced from time-lapse images found online.

But compared with other technologies that generate animation from static images, the biggest difference with DeepStereo is that it can guess the missing parts of a scene, creating new views that do not exist in the source photographs. According to the British outlet The Register, unlike traditional animation and stop-motion techniques, DeepStereo can "imagine" the frames between two still images.

Flynn and his co-authors wrote in the paper, "This technology is very different from previous work: we attempt to directly synthesize new images using a new deep architecture, without requiring preset depth of field, focal length, or other such training data."

The network architecture behind the system is complex and draws on earlier work, but the authors describe a unique technique in the paper: the system uses two independent network architectures. One predicts the depth of each pixel from the existing 2D data; the other predicts color. Together, the depth and color predictions form a complete 2D image, and the synthesized frames are assembled into the final video.
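As a rough illustration of that two-branch idea, here is a minimal sketch (not Google's code; the plane-sweep framing, array shapes, and function name are assumptions made for illustration) of how per-pixel depth scores and per-depth color proposals could be combined into one synthesized frame: the depth branch's scores are turned into probabilities, and the output color is the probability-weighted mix of the color branch's proposals.

```python
import numpy as np

def synthesize_pixel_colors(selection_scores, plane_colors):
    """Combine a depth-branch prediction with a color-branch prediction.

    selection_scores: (H, W, D) raw scores over D candidate depth planes per pixel.
    plane_colors:     (H, W, D, 3) RGB proposed by the color branch at each plane.
    Returns an (H, W, 3) synthesized image.
    """
    # Softmax over the depth-plane axis turns raw scores into a per-pixel
    # probability distribution (a soft depth estimate).
    scores = selection_scores - selection_scores.max(axis=-1, keepdims=True)
    probs = np.exp(scores)
    probs /= probs.sum(axis=-1, keepdims=True)
    # The output color is the expected color under that distribution:
    # a probability-weighted sum of the per-plane color proposals.
    return (probs[..., None] * plane_colors).sum(axis=2)

if __name__ == "__main__":
    H, W, D = 4, 4, 8  # toy sizes, purely for demonstration
    rng = np.random.default_rng(0)
    scores = rng.normal(size=(H, W, D))
    colors = rng.uniform(size=(H, W, D, 3))
    image = synthesize_pixel_colors(scores, colors)
    print(image.shape)  # (4, 4, 3)
```

The soft, probability-weighted combination also suggests why the system degrades gracefully: where the depth branch is uncertain, several planes contribute and the result is a blur rather than a hard error.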

DeepStereo still has shortcomings: the edges of the video frames are not very sharp. "Regions the algorithm cannot account for are often blurry, and where it has no coverage it cannot fill in pixels," the team explains. The system does, however, hint at moving objects by blurring them in the generated frames: "Moving objects are very common in the training data, and our model handles them gracefully: they start out ambiguous and then transition gradually into a motion-blur effect."

Although the final product looks little different from animation produced by simple image stitching, the technology could be a welcome addition to Google's Street View, and it also provides a more practical showcase for Google's artificial intelligence technology.

This month, Google's "dream robot" became popular on the Internet. It is an advanced artificial neural network developed by a team of Google engineers, designed as a practical way to make computers recognize the content of images. While teaching these artificial "brains" to identify animals or architecture, Google engineers have also made them "dream" in strange, incomprehensible ways, with results that are both astonishing and unsettling.