
Project Evaluation

When I was first given the brief to create an interactive infographic, I was not sure I would be able to complete the project with a final piece. Looking back through my blog now, I feel a great sense of accomplishment that I have a working final piece, since I started out knowing nothing about the theories I have come to learn about or about the Processing language. When I first started thinking about what my idea could be, I always knew in the back of my mind that I wanted to use a camera rather than a microphone, as I thought this would be more effective in the space. Before the trip to London I was adamant that my idea was going to use the diversion theory; I thought it was an interesting concept to base my project on, but I failed to find an interesting way to portray it interactively. It was after the London trip, when I interacted with the advertisement at the bus shelter, that my ideas really started flowing. My aims with my chosen idea were to achieve:

  • Working, implemented face tracking
  • Use the face tracking to detect a face
  • Import a video into Processing
  • Use the face tracking to control playback, so the video plays when a face is seen and pauses when it is not (see the snippet after this list)
  • Make people feel watched or noticed
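The heart of that playback aim turned out to be a simple if/else around the number of detected faces, taken from the Iteration 5 sketch further down this page:

Rectangle[] faces = opencv.detect();

//if one or more faces are seen the reaction video plays, else it is paused
if (faces.length >= 1) {
  react.play();
} else {
  //when there is no face, the reaction video jumps back to the beginning
  react.jump(0);
  react.pause();
}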

My finished piece achieves all of the above aims. The coding part of the project went well and all my aims were met with few problems; however, my theory goals (seen below) were not quite realised in the way I thought they would be. I was hoping to use Erving Goffman's theory from "The Presentation of Self in Everyday Life" to create a piece that would make the audience feel as if they were being interacted with from behind the screen, creating a sense of being watched that I hoped would make people act differently in front of the camera. This failed in the environment, and I think partly because of the video I chose to use. The chosen environment lacked forced interaction and advertisement; in a museum, science centre or art gallery, people know a piece is there and that it is to be viewed and interacted with. As there was no forewarning that the projects would be displayed, I feel people were unaware of what was going on on the screens and too shy to find out, so they went on with what they were originally doing unless a piece was very eye-catching or they could see themselves. As my piece appeared as just a face until a face was detected, it could easily have been dismissed by passers-by as simply an image of someone's face on the screen. I also came to the realisation that my chosen video was probably quite boring and could have been different.

If I were to do this again, or had longer to work on this project, I would invite people to see the screens and create a bigger build-up to the display, so that people who are interested would know when to come along and view it; I think this would create the right type of audience for my project. I would also create a more interesting video, and perhaps have it display different videos for different faces. While testing I also had an idea for an extension: with more advanced face tracking I could detect mouth movement, so that when a smile is detected the video smiles, or when a laugh is detected the video laughs; the video would play back whatever emotion it sees.
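I have not tried this, but a rough starting point for the smile idea might be the mouth cascade that ships with the OpenCV library (a visible mouth is only a crude stand-in for real smile detection, and reactSmile is a hypothetical second clip):

//untested sketch: a second OpenCV object, so the face cascade stays loaded
OpenCV mouthCV = new OpenCV(this, 640/2, 480/2);
mouthCV.loadCascade(OpenCV.CASCADE_MOUTH);

//in draw(), alongside the face check
mouthCV.loadImage(video);
Rectangle[] mouths = mouthCV.detect();
if (mouths.length >= 1) {
  reactSmile.play(); //e.g. a smiling clip such as "beth smile.mov"
} else {
  reactSmile.pause();
}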

Testing Day 2

After my first test in Weymouth House I found that the speed of the video was an issue. So today I tested my project again after changing the video back to the original version to see if this would be an improvement. The original video seemed to work better when a face was seen, but because people's faces were not recognised from further away, it was still difficult for people to understand what was happening. Without moving the position of the camera I am unsure how to fix this, and unfortunately moving the camera away from the screen was not possible as it was connected to the screen by a wire.

My predictions for the testing were that when lots of faces were seen my piece might not work as well, as the video would be playing constantly because of all the faces. I also thought that when there were lots of people around, people would not feel comfortable walking slowly past a screen and interacting with it, whereas during the quieter time I hoped people might be more curious to investigate. During testing I compared my predictions with what actually happened. Because faces were not detected well unless they were in close proximity, there was no problem with the video playing constantly when there were lots of faces in the room. When the room was busy I did notice that people walking through the space would move quickly rather than dawdle.

Testing Evaluation

Today I went to the Weymouth House foyer to do some testing. It was during a busy time of day (12:45), which meant there were lots of people around to test with. From testing I noticed that I need to speed the video up, as I had previously slowed it down thinking it would work better. After testing, comments as well as my own judgement suggested returning the video to its original speed, and also having the video play continuously and then stop and start again when a face is seen. Because of this I am going to use the original video in my next testing session tomorrow. I also noticed that someone walking past had to be quite close to the camera for it to recognise their face, which made it difficult to get people interested in my piece. One way I could have overcome this would be to move the camera to a different location, further away from the screen but close enough that people would notice they were triggering the video.
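For reference, if the slowing had been done in code rather than in the clip itself, the video library's Movie speed() method would have been one option:

//an alternative way to change playback speed in code
react.speed(0.5); //half speed
react.speed(1.0); //back to normal speed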

Iteration 6

Following on from my previous post, I wanted to experiment with adding more videos so that they play one after the other. First I loaded in the new videos along with the first one, each into its own Movie variable:

react1 = new Movie(this, "beth wave.mov");
react2 = new Movie(this, "beth smile.mov");
react3 = new Movie(this, "beth kiss.mov");

Once I had loaded the videos I created an if statement to check whether there was a face, with a nested if statement inside it that checked how far through the first video playback was. If the first video had reached its end, the next video would be loaded and played on top. I had a problem with this, though: getting the videos to pause and start again when a face wasn't seen by the camera was too difficult for me to work out.
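Roughly, the structure I was attempting looked like this (a simplified sketch rather than the final working code, using the react1 and react2 variables from above):

if (faces.length >= 1) {
  react1.play();
  //nested check: has the first video reached its end?
  if (react1.time() >= react1.duration()) {
    //start the next video playing on top of the first
    react2.jump(0);
    react2.play();
  }
}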

If I were to carry on with this I would also need to work out a way to get the videos to loop back around to the beginning once they had all been played, but this is too advanced for my current level of Processing knowledge, so I may return to it before the deadline.
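One possible approach, which I have not tested, would be to keep the clips in an array and wrap an index back to zero after the last one:

//untested idea: store the clips in an array and cycle through them
Movie[] reactions; //filled in setup() with the three videos
int current = 0;

void advance() {
  reactions[current].pause();
  //move to the next video, wrapping back to the first after the last
  current = (current + 1) % reactions.length;
  reactions[current].jump(0);
  reactions[current].play();
}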

Testing Expectations

Tomorrow I will be testing my project in the foyer of Weymouth House. I will be using one of the screens and a webcam, plugging my laptop into the screen. I am hoping to spend roughly an hour there: half an hour during peak time and half an hour during a quieter period when the majority of people are in lectures. Doing this will let me see how the piece works when lots of faces are seen compared to when there are not many. I expect that when lots of faces are seen my piece may not work as well, as the video will constantly be playing because of all the faces. I also don't think people will feel comfortable walking slowly past a screen and interacting with it when there are lots of people around, whereas during the quieter time I am hoping people may be more curious to investigate.

Iteration 5

//the reaction video files are saved in the sketch's data folder

import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Movie react;
Capture video;
OpenCV opencv;

void setup() {
  size(1280, 720);
  //scale video down, so it runs smoother
  video = new Capture(this, 640/2, 480/2);
  //loading open cv, and face tracking
  opencv = new OpenCV(this, 640/2, 480/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);

  video.start();

  //loading my video
  react = new Movie(this, "beth wave.mov");
  //looping video
  react.loop();
}

void draw() {

  //scaling video back up to fit the canvas
  //scale(2);
  opencv.loadImage(video);

  //displaying the camera video, will be removed
  image(video, 0, 0);

  //adjusting the reaction video
  //pushMatrix();
  //scaling video down to fit canvas
  //scale(0.5);
  //tint to make transparent
  //tint(255, 185);
  //display my reaction video
  image(react, 0, 0);
  //popMatrix();

  //styling for the face tracking rectangle
  //noFill();
  //stroke(0, 255, 0);
  //strokeWeight(3);
  Rectangle[] faces = opencv.detect();
  println(faces.length);

  //draw rectangle around the face
  //for (int i = 0; i < faces.length; i++) {
  //  println(faces[i].x + "," + faces[i].y);
  //  rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  //}

  //if one or more faces are seen, reaction video will play, else it is paused
  if (faces.length >= 1) {
    react.play();
  } else {
    //when there's no face, reaction video jumps to beginning
    react.jump(0);
    react.pause();
  }
}

void captureEvent(Capture c) {
  c.read();
}

void movieEvent(Movie m) {
  m.read();
}

In this iteration I have used the final video for my project: the smile and wave video. In the video you can see that the face animates until no face is seen, which pauses the video and jumps it back to the beginning. I am happy with this because it is how I wanted it to be, but I am going to experiment with it and see if I can add the other videos so that they play after the first one has finished.

This is a video of the piece, with me in the corner showing how it works:

The video below shows how the video would display to the audience:

How the Illusion of Being Observed Can Make You a Better Person

I have found this interesting article by Sander Van Der Linden, which fits into my theme of identity and performance. In the article Van Der Linden explains how being watched makes you behave as a better person, because you are less likely to do something wrong when someone is watching you, such as sticking gum under a table or leaving your dog's mess behind in a public place when no one is around.