Behind the Scenes: "Bullet Time" Demo with Raspberry Pi
A technical look at how to make the Matrix Bullet Time Demo for a fraction of the Hollywood budget using Raspberry Pi.
By Richard Bair

Before diving into some of the technical challenges that went into building the Matrix Bullet Time Demo, it is useful to review the general problem statement and the hardware design solution that went with it. The goal for this demo was to take instantaneous photos of a subject from 360 degrees, and then stitch these photos together to form a movie. The intended final effect was for it to appear as though the camera was moving around a subject frozen in time.

To accomplish this, Jasper Potts and I needed to mount a large number of cameras in a 360-degree circle. To add some visual interest, we wanted to design it such that the cameras were mounted in a kind of helix. Each of these cameras needed to be focused on a single, distinct point. Each camera also had to be connected to a central server so that it could receive the command to take a picture at the same moment in time and transfer its image back to the server to be turned into a movie.

Figure 1. Me writing the software beneath the assembled helix

We built the Matrix Bullet Time Demo from 60 individual Raspberry Pi 3 single-board computers with Raspberry Pi cameras. There were a few interesting problems to solve in trying to mount 60 Raspberry Pi units in such a way that we could surround the subject in 360 degrees! We needed to design a mounting system and a method of powering that many Raspberry Pi units. We also needed to take into account how to break down and transport the rig between locations. (During JavaOne, we had the demo in two different locations on different days, and we also wanted to design it to be shipped internationally.) Jasper found a lighting track system with curved tracks which, when joined together, formed a circle. By hanging this track from adjustable stands, he was able to vary the height of the track as it circled, forming the helix shape we were looking for.

Figure 2. Close-up of a Raspberry Pi and a camera

One of the benefits of using a lighting track system is that it handles power distribution. You provide 120-volt input power to the track, and it carries that power through copper wires built into the track. At any point where you want to have a light, you use a mount designed for the track system, which transfers the power through the mount to the light. What we had to do instead was route this power to a transformer for each Raspberry Pi that would step the 120 volts down to the 5 volts a Raspberry Pi needs. Jasper designed custom boards, printed them with a 3D printer, and mounted these to the light mounts. In this way, power was delivered to each of the 60 Raspberry Pi units.

Originally we tried to use Wi-Fi dongles for each Raspberry Pi for communicating with the server, but we had a horrible time getting consistent latencies and consistent connectivity. Instead, we ran an Ethernet cable from each Raspberry Pi along the track to switches and from there to the server. Jasper and his wife Fiona put in all the hard work designing, printing, and assembling the hardware for this demo.

Figure 3. Jasper assembling the hardware

On the software side, we needed to run software both on the Raspberry Pi units and on a central coordinating server. We also had a web UI for running the demo. Users entered their Twitter username so that the final video that we uploaded to Twitter could be linked back to their own personal Twitter account. The overall system worked like this:

  1. The user would input their Twitter handle on the Oracle JavaScript Extension Toolkit (Oracle JET) web UI we built for this demo, which was running on a Microsoft Surface tablet.
  2. The user would then click a button on the Oracle JET web UI to start a 10-second countdown.
  3. The web UI would invoke a REST API on the Java server to start the countdown.
  4. After a 10-second delay, the Java server would send a multicast message to all the Raspberry Pi units at the same moment, instructing them to take a picture (see the sketch after this list).
  5. Each camera would take a picture and send the picture data back up to the server.
  6. The server would make any adjustments necessary to the picture (see below), and then using FFMPEG, the server would turn those 60 images into an MP4 movie.
  7. The server would respond to the Oracle JET web UI's REST request with a link to the completed movie.
  8. The Oracle JET web UI would display the movie and allow the user to either upload it to Twitter or discard it.
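
To make step 4 concrete, here is a minimal sketch of what the multicast trigger might look like on the Java server. The group address, port, and message format shown here are illustrative assumptions, not the demo's actual values.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Minimal sketch of the step-4 trigger: one multicast datagram that every
// Raspberry Pi is listening for. The group address, port, and payload format
// are assumptions for illustration, not the values used in the demo.
public class TriggerSender {
    private static final String MULTICAST_GROUP = "239.1.1.1"; // assumed group
    private static final int PORT = 4446;                      // assumed port

    public static void sendTakePicture(String sessionId) throws Exception {
        byte[] payload = ("TAKE_PICTURE " + sessionId).getBytes(StandardCharsets.UTF_8);
        try (DatagramSocket socket = new DatagramSocket()) {
            DatagramPacket packet = new DatagramPacket(
                    payload, payload.length,
                    InetAddress.getByName(MULTICAST_GROUP), PORT);
            socket.send(packet); // a single send reaches every subscribed Pi at once
        }
    }
}

Because a single datagram reaches every subscriber, all the Pi units receive the trigger at very nearly the same instant, which is exactly what the frozen-in-time effect depends on.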

In general, this system worked really well. The primary challenge that we encountered was getting all 60 cameras to focus on exactly the same point in space. If the cameras were not precisely focused on the same point, the "virtual" camera (the resulting movie) would appear to jump all over the place. One camera might be pointed a little higher, the next a little lower, the next a little left, and the next rotated a little. This would create a disturbing "bouncy" effect in the movie.

We took two approaches to solve this. First, each Raspberry Pi camera was mounted with a series of adjustable parts, so we could visit each Raspberry Pi and manually adjust the camera's yaw, pitch, and roll. We placed a tripod with a pyramid target mounted on it in the center of the camera helix as a focal point and then, using a hand-held HDMI monitor, went from camera to camera, lining each one up on the pyramid target as best we could. Even so, this was only a rough adjustment, and the resulting videos were still very bouncy.

The next approach was a software-based approach to adjusting the translation (pitch and yaw) and rotation (roll) of the camera images. We created a JavaFX app to help configure each camera with settings for how much translation and rotation was necessary to line up every camera on the same exact target point. Within the app, we would take a picture from the camera and then click the target location; the software would then know how far it had to shift the image along the x and y axes for that point to end up in the dead center of the frame. Likewise, we would rotate the image to line it up relative to a "horizon" line superimposed on the image. We had to visit each of the 60 cameras to perform both the physical and the virtual configuration.
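
The arithmetic behind that click is simple. The sketch below shows one way it might be expressed, assuming the per-camera calibration is stored as an x/y offset plus a roll angle; the class and field names are made up for illustration.

// Sketch of the per-camera calibration math described above: given the pixel
// the operator clicked and the image dimensions, compute the x/y translation
// that moves the clicked target to the dead center of the frame. The names
// here are illustrative assumptions, not the demo's actual code.
public class CameraCalibration {
    public final double dx;           // horizontal shift in pixels
    public final double dy;           // vertical shift in pixels
    public final double rollDegrees;  // rotation needed to match the horizon line

    public CameraCalibration(double dx, double dy, double rollDegrees) {
        this.dx = dx;
        this.dy = dy;
        this.rollDegrees = rollDegrees;
    }

    public static CameraCalibration fromClick(double clickedX, double clickedY,
                                              double imageWidth, double imageHeight,
                                              double rollDegrees) {
        // Translation that carries the clicked point to the image center.
        double dx = imageWidth / 2.0 - clickedX;
        double dy = imageHeight / 2.0 - clickedY;
        return new CameraCalibration(dx, dy, rollDegrees);
    }
}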

At runtime, the server would query the cameras to get their adjustments. Then, when images were received from the cameras (see step 6 above), we used the Java 2D API to transform those images according to the translation and rotation values previously configured. We also had to crop the images, so we set each Raspberry Pi camera to take the highest-resolution image possible and then cropped it to 1920x1080 for the resulting hi-def movie.
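
A minimal sketch of that correction step with the Java 2D API might look like the following; the method signature, the order of operations, and the use of AffineTransform are assumptions based on the description above rather than the demo's actual code.

import java.awt.Graphics2D;
import java.awt.geom.AffineTransform;
import java.awt.image.BufferedImage;

// Sketch of the server-side correction: rotate and translate each frame by its
// per-camera calibration values, then draw it onto a fixed 1920x1080 canvas.
public class FrameCorrector {
    public static BufferedImage correct(BufferedImage source,
                                        double dx, double dy, double rollDegrees) {
        BufferedImage frame = new BufferedImage(1920, 1080, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = frame.createGraphics();
        try {
            AffineTransform t = new AffineTransform();
            // Center the (larger) source image on the 1920x1080 canvas, which
            // performs the crop, then apply this camera's configured translation.
            t.translate((1920 - source.getWidth()) / 2.0 + dx,
                        (1080 - source.getHeight()) / 2.0 + dy);
            // Roll correction around the center of the source image.
            t.rotate(Math.toRadians(rollDegrees),
                     source.getWidth() / 2.0, source.getHeight() / 2.0);
            g.drawImage(source, t, null);
        } finally {
            g.dispose();
        }
        return frame;
    }
}

Drawing onto a fixed 1920x1080 canvas performs the crop implicitly: whatever falls outside the canvas after the translation and rotation is simply clipped.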

On each Raspberry Pi, we used a simple Python app. All communication between the Pi units and the server was done over a multicast connection. On the server, when images were received they were held in memory and streamed to FFMPEG, such that only the resulting movie was actually written to disk. All communication between the Oracle JET web UI and the server was done using REST. The server itself was a simple Java 9 application (we just used the built-in Java web server for our REST API). I would have liked to revisit this and make use of some of the lightweight Java microservice web servers out there, because that would have resulted in our having less code. But the end result was still rather pleasant for such a small project.
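
The article does not spell out how the in-memory frames reached FFMPEG, but one way to do it, consistent with only the finished movie touching the disk, is to pipe JPEG frames into an ffmpeg process over stdin. The flags and frame rate below are common choices, not necessarily the ones the demo used.

import java.awt.image.BufferedImage;
import java.io.OutputStream;
import java.nio.file.Path;
import java.util.List;
import javax.imageio.ImageIO;

// Sketch of streaming in-memory frames to FFMPEG so that only the finished MP4
// is written to disk. The exact ffmpeg invocation used by the demo is not
// documented; these are common options for encoding piped JPEG frames as H.264.
public class MovieEncoder {
    public static void encode(List<BufferedImage> frames, Path output) throws Exception {
        Process ffmpeg = new ProcessBuilder(
                "ffmpeg", "-y",
                "-f", "image2pipe",     // read the frames from stdin
                "-framerate", "30",     // assumed playback rate
                "-i", "-",
                "-c:v", "libx264",
                "-pix_fmt", "yuv420p",
                output.toString())
                .redirectErrorStream(true)
                .redirectOutput(ProcessBuilder.Redirect.DISCARD) // ignore ffmpeg's logging
                .start();

        try (OutputStream stdin = ffmpeg.getOutputStream()) {
            for (BufferedImage frame : frames) {
                ImageIO.write(frame, "jpg", stdin); // frames never touch the disk
            }
        }
        ffmpeg.waitFor();
    }
}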

Figure 4. Super Bruno poses for the cameras
About the Author
Richard Bair is currently the cloud architect for the Oracle Internet of Things suite of products. Previously, he spent several years as the Chief Java Client Architect at Oracle. He has presented numerous times at JavaOne over the past 12 years.