Testing Day 1

Today I conducted the first live test of my project in the foyer. In the video below you can see a working test: when someone is in front of the camera, the "beth smile and wave" video plays. In my next post I will be evaluating the time spent testing.


Iteration 6

Following on from my previous post, I wanted to experiment with adding more videos so that they play one after the other. First I loaded the new videos in along with the first one using this code:

//each clip needs its own Movie variable, otherwise each new load overwrites the last one
reactWave = new Movie(this, "beth wave.mov");
reactSmile = new Movie(this, "beth smile.mov");
reactKiss = new Movie(this, "beth kiss.mov");

Once I had loaded the videos I created an if statement that checked whether a face was present, and inside it a nested if statement that checked how far through the first video playback was. If it had reached the end of the video, the next video would be loaded and played on top. I had a problem with this: getting the videos to pause and start again when a face wasn't seen by the camera was too difficult for me to work out.

If I were to carry on with this I would also need to work out a way to get the videos to loop back around to the beginning once they had all been played, but this is beyond my current level of Processing knowledge, so I may return to it before the deadline.
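As a rough idea of how I might approach it later, here is a minimal sketch of the sequencing logic, assuming the three clips are kept in an array and that each clip's time() is compared against its duration() to decide when to move on. The names reacts, current and playSequence are placeholders I have made up, not part of my actual sketch:

Movie[] reacts;
int current = 0;

void loadClips() {
  //each clip gets its own Movie object
  reacts = new Movie[3];
  reacts[0] = new Movie(this, "beth wave.mov");
  reacts[1] = new Movie(this, "beth smile.mov");
  reacts[2] = new Movie(this, "beth kiss.mov");
}

void playSequence(boolean faceSeen) {
  if (faceSeen) {
    reacts[current].play();
    //small tolerance because time() can stop just short of duration()
    if (reacts[current].time() >= reacts[current].duration() - 0.1) {
      reacts[current].jump(0);
      reacts[current].pause();
      //move on to the next clip, wrapping back to the first after the last one
      current = (current + 1) % reacts.length;
    }
  } else {
    //no face: reset and pause the current clip ready for the next person
    reacts[current].jump(0);
    reacts[current].pause();
    current = 0;
  }
}

draw() would still need to call image(reacts[current], 0, 0) so that the clip which is currently active is the one shown on screen.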

Testing Expectations

Tomorrow I will be testing my project in the foyer of Weymouth House, using one of the screens and a webcam, with my laptop plugged into the screen. I am hoping to spend roughly an hour there: half an hour during peak time and half an hour during a quieter period when the majority of people are in lectures. Doing this will allow me to see how the piece works when lots of faces are detected compared to when there are very few. I expect that when lots of faces are seen my piece may not work as well, because the video will be playing constantly. As well as this, when there are lots of people around I don't think people will feel comfortable walking slowly past a screen and interacting with it. During the quieter time I am hoping people may be more curious to investigate.

Iteration 5

//create image in folder i save it in, in data folder.

import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Movie react;
Capture video;
OpenCV opencv;

void setup() {
size(1280, 720);
//scale video down, so it runs smoother
video = new Capture(this, 640/2, 480/2);
//loading open cv, and face tracking
opencv = new OpenCV(this, 640/2, 480/2);
opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);

video.start();

//loading my video
react = new Movie(this, "beth wave.mov");
//looping video
react.loop();
}

void draw() {

//scaling video back up to fit the canvas
//scale(2);
opencv.loadImage(video);

//displaying the camera video, will be removed
image(video, 0, 0 );

//adjusting the reaction video
//pushMatrix();
//scaling video down to fit canvas
//scale(0.5);
//tint to make transparent
//tint(255, 185);
//display my reaction video
image(react, 0, 0);
//popMatrix();

//styling for the face tracking rectangle
//noFill();
//stroke(0, 255, 0);
//strokeWeight(3);
Rectangle[] faces = opencv.detect();
println(faces.length);

//draw rectangle around the face
//for (int i = 0; i < faces.length; i++) {
// println(faces[i].x + "," + faces[i].y);
// rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
// }

//if one or more faces are seen, reaction video will play, else it is paused.
if (faces.length >= 1) {
react.play();
} else {
//when there's no face, reaction video jumps to beginning
react.jump(0);
react.pause();
}
}

void captureEvent(Capture c) {
c.read();
}

void movieEvent(Movie m) {
m.read();
}

In this iteration I have used the final video for my project: the smile and wave video. In the video you can see that the face animates until no face is seen; this pauses the video and jumps it back to the beginning. I am happy with this because it is how I wanted it to be, but I think I'm going to experiment with it and see if I can add the other videos so that they play after the first one has finished.

This is a video of the piece, with me in the corner showing how it works:

The video below shows how the video would display to the audience:

How the Illusion of Being Observed Can Make You a Better Person

I have found an interesting article by Sander van der Linden which fits into my theme of identity and performance. In this article van der Linden explains how being watched makes you behave as a better person: you are less likely to do something wrong when someone is watching you, such as sticking gum under a table or not picking up after your dog in a public place when no one is around.

Hawthorne Effect and My Idea

The Hawthorne effect relates to my project as it is about how human behaviour changes when people are being watched. Some people will work harder and perform better; they may change their behaviour because of the attention they are receiving.

Henry A. Landsberger described the Hawthorne effect during his analysis of experiments conducted in the 1920s and 1930s at the Hawthorne Works electric company. The company had commissioned research to determine whether the environment people worked in, for example being watched by cameras, would affect how they worked for the better.

  • “The original data have since been re-analysed, and it is not so clear whether the original results hold up. Nevertheless, the concept has been established – the very fact that people are under study, observation or investigation can have an effect on them and the results.” (Earl-Slater, 2002)
  • “One way to deal with the Hawthorne effect (and demand characteristics) is to observe the participants unobtrusively. This can be done using the naturalistic observation technique. However, this is not always possible for all behaviours. Another way to deal with the Hawthorne effect is to make the participants’ responses in a study anonymous (or confidential). This may eliminate some of the effects of this source bias.” (McBride, 2013)

http://psychology.about.com/od/hindex/g/def_hawthorn.htm

Earl-Slater, A. (2002). The handbook of clinical trials and other research. London: Radcliffe Medical Press.

McBride, D. M. (2013). The process of research in psychology. London: Sage Publications.

Planning/Filming Beth for the Videos

For my project I have decided to use my friend Beth for my videos. I have decided on a few different facial expressions for her to pull:

  • smile and wave
  • blow a kiss
  • excited
  • smile

I got her to stand in front of a white wall so that the focus is on her; this makes it obvious when she moves on the screen. Once I had filmed her I imported the clips into iMovie to remove the sound and add a filter. Once these videos were edited and exported I named the files by their expressions, to make things easier when coding, and added them into my data folder ready for my next attempt in Processing.


Iteration 4

Since my last post I feel that I've made progress with my idea and it's starting to come together into what I want. Now that the video stops and jumps to the beginning when a face isn't seen, and plays when a face is seen, I have cleaned up the code a bit.

//create image in folder i save it in, in data folder.

import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Movie react;
Capture video;
OpenCV opencv;

void setup() {
size(640, 480);
//scale video down, so it runs smoother
video = new Capture(this, 640/2, 480/2);
//loading open cv, and face tracking
opencv = new OpenCV(this, 640/2, 480/2);
opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);

video.start();

//loading my video
react = new Movie(this, "movie.mp4");
//looping video
react.loop();
}

void draw() {

//scaling video back up to fit the canvas
scale(2);
opencv.loadImage(video);

//displaying the camera video, will be removed
image(video, 0, 0 );

//adjusting the reaction video
pushMatrix();
//scaling video down to fit canvas
scale(0.5);
//tint to make transparent
//tint(255, 185);
//display my reaction video
image(react, 0, 0);
popMatrix();

//styling for the face tracking rectangle
noFill();
stroke(0, 255, 0);
strokeWeight(3);
Rectangle[] faces = opencv.detect();
println(faces.length);

//draw rectangle around the face
//for (int i = 0; i < faces.length; i++) {
// println(faces[i].x + "," + faces[i].y);
// rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
// }

//if one or more faces are seen, reaction video will play, else it is paused.
if (faces.length >= 1) {
react.play();
} else {
//when there's no face, reaction video jumps to beginning
react.jump(0);
react.pause();
}
//saves each frame as a numbered jpg
saveFrame("###.jpg");
}

void captureEvent(Capture c) {
c.read();
}

void movieEvent(Movie m) {
m.read();
}

The code above stops the green rectangle from displaying and removes the tint, so now only the playing video shows. Now that I have code that works correctly, I am going to find someone to be in my video who will make the facial expressions when a face is seen. I have removed the tint from this video and future videos because I want to work with it as it will be seen when presented. So every time the video stops and starts again, it's just me covering and uncovering my face to make it work.

Iteration 3


Since my last attempt I have added code to stop the video when a face isn't seen and then jump back to the first frame until a face is seen again. I simply did this by adding this line of code, which I found in the Video library reference:

react.jump(0);

to make the video jump to its first frame. Now that I have all the code working how I want it, I am going to remove the unnecessary code that I won't need when testing. I will show how I did this in my next post.
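For context, this is roughly where that line sits in my sketch, inside the same face check used in the later iterations:

//if one or more faces are seen the reaction video plays, otherwise it pauses on its first frame
if (faces.length >= 1) {
  react.play();
} else {
  react.jump(0); //jump back to the first frame
  react.pause();
}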