Video

by Daniel Shiffman

This tutorial is from the book Learning Processing, 2nd Edition by Daniel Shiffman, published by Morgan Kaufmann, © 2015 Elsevier Inc. All rights reserved. If you see any errors or have comments, please let us know.

Live Video

Now that you’ve explored static images in Processing, you are ready to move on to moving images, specifically from a live camera (and later, from a recorded movie). I’ll begin by walking through the basic steps of importing the video library and using the Capture class to display live video.

Step 1. Import the Processing video library.

Although the video library is developed and maintained by the Processing Foundation, due to its size it must still be downloaded separately through the contributions manager. Select Add Library... from the Import Library... submenu within the Sketch menu, then search for or scroll down to the Video library.

Once you’ve got the library installed, the next step is to import the library in your code. This is done by selecting the menu option Sketch → Import Library → Video, or by typing the following line of code (which should go at the very top of your sketch):

import processing.video.*;

Using the Import Library menu option does nothing other than automatically insert that line into your code, so manual typing is entirely equivalent.

Step 2. Declare a Capture object.

Capture video;

Step 3. Initialize the Capture object.

The Capture object video is just like any other object — to construct an object, you use the new operator followed by the constructor. With a Capture object, this code typically appears in setup().

video = new Capture();

The above line of code is missing the appropriate arguments for the constructor. Remember, this is not a class you wrote yourself, so there is no way to know what is required between the parentheses without consulting the online reference.

The reference will show there are several ways to call the Capture constructor. A typical way to call the constructor is with three arguments:

void setup() {
  video = new Capture(this, 640, 480);
}

Let’s walk through the arguments used in the Capture constructor.

  • this: If you’re confused by what this means, you are not alone. Technically speaking, this refers to the instance of a class in which the word this appears. Unfortunately, such a definition is likely to induce head spinning. A nicer way to think of it is as a self-referential statement. After all, what if you needed to refer to your Processing program within your own code? You might try to say me or I. Well, these words are not available in Java, so instead you say this. The reason you pass this into the Capture object is that you are telling it: Hey listen, I want to do video capture, and when the camera has a new image, I want you to alert this sketch.
  • 640: Fortunately, the first argument, this, is the only confusing one. 640 refers to the width of the video captured by the camera.
  • 480: The height of the video.

There are some cases, however, where the above will not do. For example, what if you have multiple cameras attached to your computer? How do you select the one you want to capture? In addition, in some rare cases, you might also want to specify a frame rate from the camera. For these cases, Processing will give you a list of all possible camera configurations via Capture.list(). You can display these in your message console, for example, by saying:

printArray(Capture.list());

You can use the text of these configurations to create a Capture object. On a Mac with a built-in camera, for example, this might look like:

video = new Capture(this, "name=FaceTime HD Camera (Built-in),size=640x480,fps=30");

Capture.list() actually gives you an array, so you can also simply refer to the index of the configuration you want.

video = new Capture(this, Capture.list()[0]);

Step 4. Start the capture process.

Once the camera is ready, it’s up to you to tell Processing to start capturing images.

void setup() {
  video = new Capture(this, 640, 480);
  video.start();
}

In almost every case you want to begin capturing right in setup(). Nevertheless, start() is its own method, and you do have the option of, say, not starting capturing until some other time (such as when a button is pressed, etc.).
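
For example, here is one way you might defer capturing: a minimal sketch of my own, assuming a single attached camera, that waits for a mouse press before calling start(). (Reading frames in draw() is covered in Step 5.)

import processing.video.*;

Capture video;
boolean capturing = false;

void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480);
  // Note that start() is deliberately not called here.
}

void mousePressed() {
  // Begin capturing only once the user clicks.
  if (!capturing) {
    video.start();
    capturing = true;
  }
}

void draw() {
  // Reading frames with available() and read() is covered in Step 5.
  if (video.available()) {
    video.read();
  }
  image(video, 0, 0);
}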

Step 5. Read the image from the camera.

There are two strategies for reading frames from the camera. I will briefly look at both and choose one for the remainder of the examples in this chapter. Both strategies, however, operate under the same fundamental principle: I only want to read an image from the camera when a new frame is available to be read.

In order to check if an image is available, you use the function available(), which returns true or false depending on whether a new frame has arrived. If it has, the function read() is called and the frame from the camera is read into memory. You can do this over and over again in the draw() loop, always checking to see if a new image is available to be read.

void draw() {
  if (video.available()) {
    video.read();
  }
}

The second strategy, the event approach, requires a function that executes any time a certain event — in this case a camera event — occurs. For example, the function mousePressed() is executed whenever the mouse is pressed. With video, you have the option to implement the function captureEvent(), which is invoked any time a capture event occurs, that is, whenever a new frame is available from the camera. These event functions (mousePressed(), keyPressed(), captureEvent(), etc.) are sometimes referred to as callbacks. And as a brief aside, if you’re following closely, this is where this fits in. The Capture object, video, knows to notify this sketch by invoking captureEvent() because you passed it a reference to this sketch when creating the Capture object video.

captureEvent() is a function and therefore needs to live in its own block, outside of setup() and draw().

void captureEvent(Capture video) {
  video.read();
}

You might notice something odd about captureEvent(): it includes an argument of type Capture in its definition. This might seem redundant to you; after all, in this example I already have a global variable video. Nevertheless, in the case where you have more than one capture device, the same event function can be used for all of them, and the video library will make sure that the correct Capture object is passed in to captureEvent().
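
For instance, here is a minimal sketch, assuming at least two attached cameras and that the first two entries of Capture.list() describe the devices you want, in which a single captureEvent() serves both:

import processing.video.*;

Capture cam1;
Capture cam2;

void setup() {
  size(640, 240);
  String[] configs = Capture.list();
  // Assumes configs[0] and configs[1] describe two different cameras
  cam1 = new Capture(this, configs[0]);
  cam2 = new Capture(this, configs[1]);
  cam1.start();
  cam2.start();
}

// The same callback fires for both devices; the library passes in
// whichever Capture object has a new frame ready.
void captureEvent(Capture c) {
  c.read();
}

void draw() {
  image(cam1, 0, 0, 320, 240);
  image(cam2, 320, 0, 320, 240);
}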

To summarize, I want to call the function read() whenever there is something to read, and I can do so either by checking manually with available() within draw() or by letting the callback captureEvent() handle it. The callback approach allows sketches to operate more efficiently by separating the logic for reading from the camera out of the main animation loop.

Again, anything you can do with a PImage (resize, tint, move, etc.) you can do with a Capture object. As long as you read() from that object, the video image will update as you manipulate it. See the following example:

import processing.video.*;

Capture video;

void setup() {
  size(320, 240);
  video = new Capture(this, 320, 240);
  video.start();
}

void captureEvent(Capture video) {
  video.read();
}

void draw() {
  background(255);
  tint(mouseX, mouseY, 255);
  translate(width/2, height/2);
  imageMode(CENTER);
  rotate(PI/4);
  image(video, 0, 0, mouseX, mouseY);
}

Note that a video image can be tinted just like a PImage. It can also be moved, rotated, and sized just like a PImage. Following is the adjusting brightness example with a video image:

// Step 1. Import the video library
import processing.video.*;

// Step 2. Declare a Capture object
Capture video;

void setup() {
  size(320, 240);

  // Step 3. Initialize Capture object via Constructor
  video = new Capture(this, 320, 240);
  video.start();
}

// An event for when a new frame is available
void captureEvent(Capture video) {
  // Step 4. Read the image from the camera.
  video.read();
}

void draw() {
  loadPixels();
  video.loadPixels();

  for (int x = 0; x < video.width; x++) {
    for (int y = 0; y < video.height; y++) {
      // Calculate the 1D location from a 2D grid
      int loc = x + y * video.width;

      // Get the red, green, blue values from a pixel
      float r = red  (video.pixels[loc]);
      float g = green(video.pixels[loc]);
      float b = blue (video.pixels[loc]);

      // Calculate an amount to change brightness based on proximity to the mouse
      float d = dist(x, y, mouseX, mouseY);
      float adjustbrightness = map(d, 0, 100, 4, 0);
      r *= adjustbrightness;
      g *= adjustbrightness;
      b *= adjustbrightness;

      // Constrain RGB to make sure they are within 0-255 color range
      r = constrain(r, 0, 255);
      g = constrain(g, 0, 255);
      b = constrain(b, 0, 255);

      // Make a new color and set pixel in the window
      color c = color(r, g, b);
      pixels[loc] = c;
    }
  }

  updatePixels();
}

Recorded video

Displaying recorded video follows much of the same structure as live video. Processing’s video library accepts most video file formats; for specifics, visit the Movie reference.

Step 1. Instead of a Capture object, declare a Movie object.

Movie movie;

Step 2. Initialize Movie object.

movie = new Movie(this, "testmovie.mov");

The only necessary arguments are this and the movie’s filename enclosed in quotes. The movie file should be stored in the sketch’s data directory.

Step 3. Start movie playing.

There are two options: play(), which plays the movie once, or loop(), which loops it continuously.

movie.loop();

Step 4. Read frame from movie.

Again, this is identical to capture. You can either check to see if a new frame is available, or use a callback function.

void draw() {
  if (movie.available()) {
    movie.read();
  }
}

Or:

void movieEvent(Movie movie) {
  movie.read();
}

Step 5. Display the movie.

image(movie, 0, 0);

The following code shows the program all together:

import processing.video.*;

// Step 1. Declare a Movie object.
Movie movie;

void setup() {
  size(320, 240);

  // Step 2. Initialize Movie object. The file "testmovie.mov" should live in the data folder.
  movie = new Movie(this, "testmovie.mov");

  // Step 3. Start playing the movie. To play it just once, use play() instead.
  movie.loop();
}

// Step 4. Read new frames from the movie.
void movieEvent(Movie movie) {
  movie.read();
}

// Step 5. Display movie.
void draw() {
  image(movie, 0, 0);
}

Although Processing is by no means the most sophisticated environment for displaying and manipulating recorded video, there are some more advanced features available in the video library. There are functions for obtaining the duration (length measured in seconds) of a video, for speeding it up and slowing it down, and for jumping to a specific point in the video (among others). If you find that performance is sluggish and the video playback is choppy, I would suggest trying the P2D or P3D renderers.
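
For example, here is a minimal sketch, again assuming a file named "testmovie.mov" in the data folder, that uses speed() to map the mouse’s horizontal position to a playback rate:

import processing.video.*;

Movie movie;

void setup() {
  size(320, 240);
  movie = new Movie(this, "testmovie.mov");
  movie.loop();
}

void movieEvent(Movie m) {
  m.read();
}

void draw() {
  // Map mouseX to a playback rate between 0.1x (slow motion) and 4x (fast forward)
  float rate = map(mouseX, 0, width, 0.1, 4);
  movie.speed(rate);
  image(movie, 0, 0);
}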

Following is an example that makes use of jump(), which jumps immediately to a specific point in time within the video, and duration(), which returns the total length of the movie in seconds. In this example, if mouseX equals 0, the video jumps to the beginning; if mouseX equals width, it jumps to the end; any other value falls somewhere in between.

import processing.video.*;

Movie movie;

void setup() {
  size(200, 200);
  background(0);
  movie = new Movie(this, "testmovie.mov");
  // The movie must be started for movieEvent() to deliver frames
  movie.play();
}

void movieEvent(Movie movie) {
  movie.read();
}

void draw() {
  // Ratio of mouse X over width
  float ratio = mouseX / (float) width;

  movie.jump(ratio * movie.duration());

  image(movie, 0, 0);
}

Software mirrors

With small video cameras attached to more and more personal computers, developing software that manipulates an image in real-time is becoming increasingly popular. These types of applications are sometimes referred to as “mirrors,” as they provide a digital reflection of a viewer’s image. Processing’s extensive library of functions for graphics and its ability to capture from a camera in real-time make it an excellent environment for prototyping and experimenting with software mirrors.

You can apply basic image processing techniques to video images, reading and replacing the pixels one by one. Taking this idea one step further, you can read the pixels and apply the colors to shapes drawn onscreen.

I will begin with an example that captures a video at 80 × 60 pixels and renders it on a 640 × 480 window. For each pixel in the video, I will draw a rectangle eight pixels wide and eight pixels tall.

Let’s first write the program that displays the grid of rectangles. In the following example, the videoScale variable stores the ratio of the window’s pixel size to the grid’s size, and for every column and row a rectangle is drawn at an (x,y) location scaled and sized by videoScale.

// Size of each cell in the grid, ratio of window size to video size
int videoScale = 8;
// Number of columns and rows in the system
int cols, rows;

void setup() {
  size(640, 480);
  // Initialize columns and rows
  cols = width/videoScale;
  rows = height/videoScale;
}

void draw() {
  // Begin loop for columns
  for (int i = 0; i < cols; i++) {
    // Begin loop for rows
    for (int j = 0; j < rows; j++) {
      // Scaling up to draw a rectangle at (x,y)
      int x = i*videoScale;
      int y = j*videoScale;
      fill(255);
      stroke(0);
      rect(x, y, videoScale, videoScale);
    }
  }
}

Knowing that I want to have squares eight pixels wide by eight pixels high, I can calculate the number of columns as the width divided by eight and the number of rows as the height divided by eight.

  • 640/8 = 80 columns
  • 480/8 = 60 rows

I can now capture a video image that is 80 × 60. This is useful because capturing a 640 × 480 video from a camera can be slow compared to 80 × 60. I only want to capture the color information at the resolution required for the sketch.

For every square at column i and row j, I look up the color at pixel (i, j) in the video image and color it accordingly. See the following example; the new video-related parts are noted in the comments:

import processing.video.*;

// Size of each cell in the grid, ratio of window size to video size
int videoScale = 8;
// Number of columns and rows in the system
int cols, rows;
// Variable to hold onto Capture object
Capture video;

void setup() {
  size(640, 480);
  // Initialize columns and rows
  cols = width/videoScale;
  rows = height/videoScale;
  background(0);
  video = new Capture(this, cols, rows);
  video.start();
}

// Read image from the camera
void captureEvent(Capture video) {
  video.read();
}

void draw() {
  video.loadPixels();
  // Begin loop for columns
  for (int i = 0; i < cols; i++) {
    // Begin loop for rows
    for (int j = 0; j < rows; j++) {
      // Where are you, pixel-wise?
      int x = i*videoScale;
      int y = j*videoScale;
      color c = video.pixels[i + j*video.width];
      fill(c);
      stroke(0);
      rect(x, y, videoScale, videoScale);
    }
  }
}

As you can see, expanding the simple grid system to include colors from video only requires a few additions. I have to declare and initialize the Capture object, read from it, and pull colors from the pixel array.

Less literal mappings of pixel colors to shapes in the grid can also be applied. In the following example, only the colors black and white are used. Squares are larger where brighter pixels in the video appear, and smaller for darker pixels.

// Each pixel from the video source is drawn as
// a rectangle with size based on brightness.

import processing.video.*;
// Size of each cell in the grid
int videoScale = 10;
// Number of columns and rows in the system
int cols, rows;
// Variable for capture device
Capture video;

void setup() {
  size(640, 480);
  // Initialize columns and rows
  cols = width / videoScale;
  rows = height / videoScale;
  // Construct the Capture object
  video = new Capture(this, cols, rows);
  video.start();
}

void captureEvent(Capture video) {
  video.read();
}

void draw() {
  background(0);
  video.loadPixels();
  // Begin loop for columns
  for (int i = 0; i < cols; i++) {
    // Begin loop for rows
    for (int j = 0; j < rows; j++) {
      // Where are you, pixel-wise?
      int x = i*videoScale;
      int y = j*videoScale;

      // Reverse the column to mirror the image.
      int loc = (video.width - i - 1) + j * video.width;

      color c = video.pixels[loc];
      // A rectangle's size is calculated as a function of the pixel’s brightness.
      // A bright pixel is a large rectangle, and a dark pixel is a small one.
      float sz = (brightness(c)/255) * videoScale;

      rectMode(CENTER);
      fill(255);
      noStroke();
      rect(x + videoScale/2, y + videoScale/2, sz, sz);
    }
  }
}

It’s often useful to think of developing software mirrors in two steps. This will also help you think beyond the more obvious mapping of pixels to shapes on a grid.

Step 1. Develop an interesting pattern that covers an entire window.

Step 2. Use a video’s pixels as a look-up table for coloring that pattern.

Say for Step 1, I write a program that scribbles a random line around the window. Here is my algorithm, written in pseudocode.

  • Start with an (x,y) position at the center of the screen.
  • Repeat forever the following:
    • Pick a new (x,y), staying within the window.
    • Draw a line from the old (x,y) to the new (x,y).
    • Save the new (x,y).

// Two global variables
float x;
float y;

void setup() {
  size(640, 480);
  background(255);
  // Start x and y in the center
  x = width/2;
  y = height/2;
}

void draw() {
  float newx = constrain(x + random(-20, 20), 0, width);
  float newy = constrain(y + random(-20, 20), 0, height);

  // Line from (x,y) to the (newx,newy)
  stroke(0);
  strokeWeight(4);
  line(x, y, newx, newy);

  x = newx;
  y = newy;
}

Now that I have finished the pattern-generating sketch, I can change stroke() to set a color according to the video image. The new video-related lines are noted in the comments in the following code:

import processing.video.*;
// Two global variables
float x;
float y;

// Variable to hold onto Capture object.
Capture video;

void setup() {
  size(640, 480);
  background(255);
  // Start x and y in the center
  x = width/2;
  y = height/2;
  // Start the capture process
  video = new Capture(this, width, height);
  video.start();
}

void captureEvent(Capture video) {
  // Read image from the camera
  video.read();
}

void draw() {
  video.loadPixels();
  // Constrain the new point so the pixel lookup below stays in bounds
  float newx = constrain(x + random(-20, 20), 0, width-1);
  float newy = constrain(y + random(-20, 20), 0, height-1);

  // Find the midpoint of the line
  int midx = int((newx + x) / 2);
  int midy = int((newy + y) / 2);
  // Pick the color from the video, reversing x
  color c = video.pixels[(width-1-midx) + midy*video.width];

  // Draw a line from (x,y) to (newx,newy)
  stroke(c);
  strokeWeight(4);
  line(x, y, newx, newy);

  // Save (newx,newy) in (x,y)
  x = newx;
  y = newy;
}