Design Process

Iteration 6

Following on from my previous post, I wanted to experiment with adding more videos that play one after the other. First I loaded the new videos alongside the first one using the code:

Movie wave = new Movie(this, "beth wave.mov");
Movie smile = new Movie(this, "beth smile.mov");
Movie kiss = new Movie(this, "beth kiss.mov");

Once the videos were loaded I created an if statement that checked whether a face was present. Inside it I nested a second if statement that checked how far through the first video playback had got; once it reached the end, the next video would load and play on top. I had a problem with this: getting the video to pause and start again when a face wasn't seen by the camera was too difficult for me to work out.
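The nested check I was describing can be sketched in plain Java. This is a hypothetical sketch, not my actual Processing sketch: the numbers stand in for Processing's Movie.time() and Movie.duration() calls, and the small margin is an assumption, since the playhead rarely lands exactly on the final frame.

```java
// Hypothetical sketch of the end-of-clip check. A clip counts as
// finished when its playhead is within a small margin of its duration.
public class EndOfClipCheck {
    // time and duration stand in for Movie.time() and Movie.duration()
    static boolean clipFinished(float time, float duration) {
        return time >= duration - 0.05f; // margin: playback rarely hits the exact end
    }

    public static void main(String[] args) {
        System.out.println(clipFinished(2.1f, 10.0f));  // false: still playing
        System.out.println(clipFinished(9.97f, 10.0f)); // true: close enough to the end
    }
}
```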

If I were to carry on with this I would also need to work out a way to loop the videos back to the beginning once they had all been played, but this is too advanced for my current level of Processing knowledge, so I may return to it before the deadline.
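The looping-back part I got stuck on boils down to cycling an index through a playlist. A hypothetical sketch of just that logic, with the clip names from above and the Processing playback calls left out:

```java
// Hypothetical sketch: cycle through a playlist of clip names, wrapping
// back to the first once the last has played. The modulo step is the
// piece that makes the whole sequence loop around.
public class PlaylistSketch {
    static String[] clips = {"beth wave.mov", "beth smile.mov", "beth kiss.mov"};
    static int current = 0;

    // would be called when the current clip reaches its duration
    static String advance() {
        current = (current + 1) % clips.length;
        return clips[current];
    }

    public static void main(String[] args) {
        System.out.println(clips[current]); // beth wave.mov
        System.out.println(advance());      // beth smile.mov
        System.out.println(advance());      // beth kiss.mov
        System.out.println(advance());      // wraps back to beth wave.mov
    }
}
```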

Iteration 5

//media files go in the sketch's data folder

import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Movie react;
Capture video;
OpenCV opencv;

void setup() {
size(1280, 720);
//scale video down, so it runs smoother
video = new Capture(this, 640/2, 480/2);
//loading open cv, and face tracking
opencv = new OpenCV(this, 640/2, 480/2);
opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);

video.start();

//loading my video
react = new Movie(this, "beth wave.mov");
//looping video
react.loop();
}

void draw() {

//scaling video back up to fit the canvas
//scale(2);
opencv.loadImage(video);

//displaying the camera video, will be removed
image(video, 0, 0 );

//adjusting the reaction video
//pushMatrix();
//scaling video down to fit canvas
//scale(0.5);
//tint to make transparent
//tint(255, 185);
//display my reaction video
image(react, 0, 0);
//popMatrix();

//styling for the face tracking rectangle
//noFill();
//stroke(0, 255, 0);
//strokeWeight(3);
Rectangle[] faces = opencv.detect();
println(faces.length);

//draw rectangle around the face
//for (int i = 0; i < faces.length; i++) {
//  println(faces[i].x + "," + faces[i].y);
//  rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
//}

//if one or more faces are seen, reaction video will play, else it is paused.
if (faces.length >= 1) {
react.play();
} else {
//when there's no face, the reaction video jumps back to the beginning
react.jump(0);
react.pause();
}
}

void captureEvent(Capture c) {
c.read();
}

void movieEvent(Movie m) {
m.read();
}

In this iteration I have used the final video for my project: the smile-and-wave video. In the video you can see that the face animates until no face is seen; the video then pauses and jumps back to the beginning. I am happy with this because it is how I wanted it to be, but I'm going to experiment with it and see if I can add the other videos so that they play after the first one has finished.

This is a video of the piece, with me in the corner showing how it works:

The video below shows how the video would display to the audience:

Planning/filming Beth for the video

For my project I have decided to use my friend Beth for my videos. I have decided on a few different facial expressions for her to pull:

  • smile and wave
  • blow a kiss
  • excited
  • smile

I got her to stand in front of a white wall so that the focus is on her, which makes it obvious when she moves on screen. Once I had filmed her, I imported the clips into iMovie to remove the sound and add a filter. After the videos were edited and exported, I named the files after their expressions to make coding easier and added them to my data folder, ready for my next attempt in Processing.


Iteration 4

Since my last post I feel I've made progress with my idea and it's starting to come together into what I want. Now that the video stops and jumps to the beginning when a face isn't seen, and plays when a face is seen, I have cleaned up the code a bit.

//media files go in the sketch's data folder

import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Movie react;
Capture video;
OpenCV opencv;

void setup() {
size(640, 480);
//scale video down, so it runs smoother
video = new Capture(this, 640/2, 480/2);
//loading open cv, and face tracking
opencv = new OpenCV(this, 640/2, 480/2);
opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);

video.start();

//loading my video
react = new Movie(this, "movie.mp4");
//looping video
react.loop();
}

void draw() {

//scaling video back up to fit the canvas
scale(2);
opencv.loadImage(video);

//displaying the camera video, will be removed
image(video, 0, 0 );

//adjusting the reaction video
pushMatrix();
//scaling video down to fit canvas
scale(0.5);
//tint to make transparent
//tint(255, 185);
//display my reaction video
image(react, 0, 0);
popMatrix();

//styling for the face tracking rectangle
noFill();
stroke(0, 255, 0);
strokeWeight(3);
Rectangle[] faces = opencv.detect();
println(faces.length);

//draw rectangle around the face
//for (int i = 0; i < faces.length; i++) {
// println(faces[i].x + "," + faces[i].y);
// rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
// }

//if one or more faces are seen, reaction video will play, else it is paused.
if (faces.length >= 1) {
react.play();
} else {
//when there's no face, the reaction video jumps back to the beginning
react.jump(0);
react.pause();
}
saveFrame("###.jpg");
}

void captureEvent(Capture c) {
c.read();
}

void movieEvent(Movie m) {
m.read();
}

The code above stops the green rectangle from displaying and removes the tint, so now only the playing video shows. Now that I have working code, I am going to find someone to be in my video who will make the facial expressions once a face is seen. I have removed the tint from this video and future videos because I want to work with it as it will be seen when presented. So every time the video stops and starts again, it's just me covering and uncovering my face to make it work.

Iteration 3


Since my last attempt I have added code to stop the video when a face isn't seen and jump back to the first frame until a face is seen again. I simply did this by adding this line of code, which I found in the video library reference:

react.jump(0);

to make the video jump to its first frame. Now that I have all the code working how I want it, I am going to remove the unnecessary parts that I only needed while testing. I will show how I did this in my next post.

Attempt 2

The problem with my previous attempt was that the video simply disappeared when a face wasn't seen, so in my previous post I said I would try to pause the video instead.


This time I have been trying to make the video pause when a face isn’t seen. I have done this by adding this extra part of code that I found in the video reference library:

if (faces.length >= 1) {
  react.play();
} else {
  react.pause();
}

This means that once one or more faces are seen the video plays; otherwise it pauses. It pauses on the current frame and then resumes from that frame.

Next time I am going to code it so that when a face isn’t seen, it stops the video and jumps it to the beginning frame of the video.

Iteration 1

Now that I have my chosen idea I have been able to begin learning and making my Processing piece. First of all I had to find out how I can use face tracking in my project. I researched 'processing face tracking' and found the OpenCV library, which includes a face-tracking sketch. The face tracking in OpenCV uses Haar cascades, which are explained here. After I had a play around with the face detection, I found the reference for the video library so I could combine video playback with the face tracking and see what would happen.

The above code is my first attempt at creating my Processing piece. For my first few attempts I am using a placeholder video of myself spinning on a chair. I have implemented the face tracking from the OpenCV library, and on top of it added my own code to control video playback when a face is seen. This code plays a video when a face is seen; when a face is not seen, the video is not displayed and the screen shows what the camera sees (as seen in the video below). I am leaving the green rectangle displayed around faces for my first few attempts so I know that it's tracking them. I have also made the video slightly see-through so I can show both the video and what the camera sees, to demonstrate how it works. I have commented the code so that I can return to it and remember what it all means, to hopefully speed up the process. For my next attempt I am going to try to pause the video when a face is not seen.
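Since the first attempt's code isn't shown here, the display logic it describes can be sketched in plain Java. This is a hypothetical reconstruction under the assumptions above, with the Processing drawing calls replaced by labels: what gets drawn depends only on whether any faces were detected.

```java
// Hypothetical sketch of the first attempt's logic: with no face the
// screen shows only the camera feed; with a face, the (see-through)
// reaction video is drawn over the camera feed.
public class FirstAttemptLogic {
    static String frameToShow(int faceCount) {
        return faceCount >= 1 ? "camera + reaction video" : "camera only";
    }

    public static void main(String[] args) {
        System.out.println(frameToShow(0)); // camera only
        System.out.println(frameToShow(2)); // camera + reaction video
    }
}
```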

My chosen Idea

The idea I am working on is a social experiment to see whether people present themselves differently once they are being watched, drawing on Erving Goffman's theory from The Presentation of Self in Everyday Life. The screen will show a video of a person from the shoulders up; while no one is in front of the camera the video is paused, so it looks like a still image of a person. When a face is recognised by the camera, the video plays, showing the person reacting to seeing someone. This reaction could be happy, sad, shocked, etc. I am hoping it will make people feel as though they are being watched, so that they may change their performance.

First Ideas

1) After looking at the uses and gratifications media theory, I liked the sound of the diversion theory. I wanted to create something a user could interact with that would help them 'escape' from reality. My idea was to use face tracking: once a face was seen it would trigger a video of a kind of escape, such as an empty beach, to play until they looked away. I liked the idea, but didn't think it would work properly in the chosen environment.

2) I've taken an interest in identity and performance, and I have the idea of creating a social experiment to see how people perform once they become public or 'watched'. The idea is to create a false sense of being watched and interacted with from the other side of the screen. I am hoping this will change people's behaviour within the environment once they've been seen, causing them to act differently from how they would if they were not watched (putting on a performance).