Ten things you need to know about Google’s Deep Dream

Google's incredible psychedelic software has some mysteries to reveal


What is Deep Dream?

These strange images look like the stuff of nightmares, but they are actually the product of Google’s Deep Dream algorithm. The code is part of Google’s ‘machine learning’ artificial intelligence software. Intended to make image searching more intuitive, the software has been taught to recognise what things look like. This helps it understand the context of an image search: type ‘fork’, for example, and it will display pictures of cutlery rather than forks in the road.

Why is it being used?

However, while the main aim of Deep Dream is to help contextualise the world around us and improve online searching, a quirk in the code produces spellbinding surrealist art. When shown images, the software tries to slightly change them to match the patterns it has learnt to identify, often transforming them beyond recognition, creating vivid and lucid images. Realising its artistic potential, Google has now made the Deep Dream code open to the general public.
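The core trick can be sketched in a few lines of Python: repeatedly nudge an image in the direction that strengthens a pattern the network has learnt, until the pattern starts to dominate. The tiny hand-made ‘pattern’, the dot-product score and the step size below are invented purely for illustration; the real system climbs the gradient of activations inside a trained Caffe network.

```python
import numpy as np

# A minimal sketch of Deep Dream's core idea: change an image slightly,
# over and over, so that it more strongly matches a pattern the network
# has already learnt. Here the "learnt pattern" is a single toy filter.

def pattern_response(image, pattern):
    """How strongly the image matches the pattern (a simple dot product)."""
    return float(np.sum(image * pattern))

def dream_step(image, pattern, step_size=0.1):
    """One step of gradient ascent. The gradient of the dot product with
    respect to the image is the pattern itself, so each step mixes a
    little more of the pattern into the image."""
    return image + step_size * pattern

pattern = np.array([[1.0, -1.0],
                    [-1.0, 1.0]])   # a toy "learnt" edge-like pattern
image = np.zeros((2, 2))            # start from a blank image

for _ in range(10):
    image = dream_step(image, pattern)

# After repeated small steps, the image increasingly resembles the
# pattern the network was looking for - the source of the surreal art.
```

Run on a real photograph with a real network, the same feedback loop is what turns faint resemblances into fully formed eyes, animals and spirals.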

Try it for yourself

Google has made the Deep Dream code used to generate these bizarre images open source, publishing it as an IPython notebook. The code is based on Caffe, uses available open-source packages and is designed to have as few dependencies as possible. You can view dream.ipynb directly on GitHub, or clone the repository, install the dependencies listed in the notebook and play with the code locally. Download it for yourself from GitHub.

Or you can always find someone to do it for you

If GitHub means absolutely nothing to you, trying to play around with the complex code is probably a bad idea. Fortunately, numerous websites have sprung up to put your own photos through Deep Dream’s unique framework for free. However, for the best results and no delay in image processing, Deep Dreamer by Realmac Software produces great images for £9.90. For more information, visit the Realmac Software website.

How does it work?

To test how well it works, Google engineers showed millions of images to an artificial neural network (ANN), which filters each image through ten to 30 stacked layers of artificial neurons. Each layer extracts more information about the image, eventually spitting out what the network thinks the image is. If it’s wrong, the engineers can adjust the parameters and dig into the layers to find out at what point in this game of Chinese whispers the error occurred.
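The stacked-layer idea can be illustrated with a toy network: each layer re-describes the previous layer’s output, and a final layer turns the result into a guess. The three layers, random weights and two-class output below are made up for illustration; the real network has ten to 30 trained convolutional layers.

```python
import numpy as np

# A toy stand-in for a stacked neural network. Each layer transforms the
# output of the layer before it, and the final layer scores each possible
# answer. The weights here are random, purely to show the structure;
# Google's real network learns its weights from millions of images.

rng = np.random.default_rng(0)
layers = [rng.standard_normal((4, 4)) for _ in range(3)]  # stacked layers
output_weights = rng.standard_normal((4, 2))              # two candidate labels

def classify(image_features):
    x = image_features
    for w in layers:
        x = np.maximum(0.0, x @ w)   # each layer extracts a new description
    scores = x @ output_weights      # final layer: one score per label
    return int(np.argmax(scores))    # the network's best guess

label = classify(np.array([1.0, 0.5, -0.2, 0.3]))
```

Debugging a wrong answer means inspecting the intermediate `x` at each layer, which is exactly the step that revealed Deep Dream’s hallucinations.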

It’s the basis for Now on Tap

We were fascinated by the possibilities of Google Now on Tap, the evolution of Android’s personal assistant. It promises to deliver better results and to understand vague queries based on the context of your search, such as asking ‘What band was he in?’ while listening to a particular artist. It will also pick up keywords on your screen and deliver search results based on them. ANN is the framework Now on Tap will use.

People are going to get creative with Deep Dream

Whenever something like this exists, you know that people are going to take the technology and run with it. This is already happening with Deep Dream as people mash series of images together to create Deep Dream GIFs that are truly terrifying. The best example we have seen so far is from YouTuber Roelof Pieters, who recreated a scene from Fear and Loathing in Las Vegas using Deep Dream. When you can make that film even freakier than it already was, you know you’re playing with some crazy technology.

It’s gone a bit rogue

As with all experimental technology, there are teething problems, and ANN can sometimes get confused. When Google engineers asked it to recognise a tree, it thought it was a building. It also changed a plant into a bird and added disembodied arms to images of dumbbells. As most of the dumbbell images it has seen have people’s arms in shot, it naturally assumed the arms were part of the object. ANN recognises the shapes perfectly well, but not the finer details.

You will see puppyslugs

Yes, it sounds horrible, but for some reason Deep Dream sees a huge number of puppy faces and slugs in the images it has been fed. As a result, a lot of the images created by the software are littered with unsettling visions of dogs and slugs merged together, especially in places where they have no right to be, such as the sky or bowls of cereal. Check out #puppyslug on Twitter to see just what we mean, but don’t expect to sleep easily afterwards.

Deep Dream is going to shape your online future

Even though it’s a fun thing to play with, everything created using Deep Dream is going to have a practical benefit. An eye that appears on your arm will provide the network with data for understanding the human form, and your puppyslug will reveal further intriguing bugs in the system. This will help Google make every future search you do a lot more accurate, and we think that’s worth a few sleepless nights.