Hawthorne Effect and My Idea

The Hawthorne effect relates to my project because it describes how people's behaviour changes when they know they are being watched. Some people will work harder and perform better; they may change their behaviour simply because of the attention they are receiving.

Henry A. Landsberger described the Hawthorne effect during his analysis of experiments conducted in the 1920s and 1930s at the Hawthorne Works, a Western Electric factory. The company commissioned research to determine whether changes to the working environment, and in particular workers knowing they were being observed, would improve how they worked.

  • “The original data have since been re-analysed, and it is not so clear whether the original results hold up. Nevertheless, the concept has been established – the very fact that people are under study, observation or investigation can have an effect on them and the results.” (Earl-Slater, 2002)
  • “One way to deal with the Hawthorne effect (and demand characteristics) is to observe the participants unobtrusively. This can be done using the naturalistic observation technique. However, this is not always possible for all behaviours. Another way to deal with the Hawthorne effect is to make the participants’ responses in a study anonymous (or confidential). This may eliminate some of the effects of this source bias.” (McBride, 2013)

http://psychology.about.com/od/hindex/g/def_hawthorn.htm

Earl-Slater, A. (2002). The handbook of clinical trials and other research. London: Radcliffe Medical Press.

McBride, D. M. (2013). The process of research in psychology. London: Sage Publications.


Planning/filming Beth for video

For my project I have decided to use my friend Beth in my videos. I have decided on a few different facial expressions for her to pull:

  • smile and wave
  • blow a kiss
  • excited
  • smile

I got her to stand in front of a white wall so that the focus is on her; this makes it obvious when she moves on the screen. Once I had filmed her I imported the clips into iMovie to remove the sound and add a filter. Once the videos were edited and exported I named the files after their expressions to make coding easier, and added them to my data folder ready for my next attempt in Processing.
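The file names below are just placeholders for whatever I end up calling the exported clips, but loading them from the data folder should look something like this rough sketch:

import processing.video.*;

// one clip per expression; these file names are placeholders for the
// exported clips sitting in the sketch's data folder
String[] clipNames = { "smile_wave.mp4", "blow_kiss.mp4", "excited.mp4", "smile.mp4" };
Movie[] clips;

void setup() {
  size(640, 480);
  clips = new Movie[clipNames.length];
  for (int i = 0; i < clipNames.length; i++) {
    clips[i] = new Movie(this, clipNames[i]);
  }
  // loop the first clip just to check that the files import correctly
  clips[0].loop();
}

void draw() {
  image(clips[0], 0, 0);
}

void movieEvent(Movie m) {
  m.read();
}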


Iteration 4

Since my last post I feel that I've made progress with my idea and it's starting to come together into what I want. Now that the video stops and jumps to the beginning when a face isn't seen, and plays when a face is seen, I have cleaned up the code a bit.

import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Movie react;
Capture video;
OpenCV opencv;

void setup() {
  size(640, 480);
  // scale the camera feed down so the sketch runs smoother
  video = new Capture(this, 640/2, 480/2);
  // load OpenCV and the frontal-face cascade for face tracking
  opencv = new OpenCV(this, 640/2, 480/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);

  video.start();

  // load my reaction video
  react = new Movie(this, "movie.mp4");
  // loop the video
  react.loop();
}

void draw() {
  // scale the camera feed back up to fit the canvas
  scale(2);
  opencv.loadImage(video);

  // display the camera video (will be removed later)
  image(video, 0, 0);

  // adjust the reaction video
  pushMatrix();
  // scale the video down to fit the canvas
  scale(0.5);
  // tint to make it transparent
  //tint(255, 185);
  // display my reaction video
  image(react, 0, 0);
  popMatrix();

  // styling for the face-tracking rectangle
  noFill();
  stroke(0, 255, 0);
  strokeWeight(3);
  Rectangle[] faces = opencv.detect();
  println(faces.length);

  // draw a rectangle around each face (commented out for now)
  //for (int i = 0; i < faces.length; i++) {
  //  println(faces[i].x + "," + faces[i].y);
  //  rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  //}

  // if one or more faces are seen the reaction video plays, otherwise it is paused
  if (faces.length >= 1) {
    react.play();
  } else {
    // when there is no face, the reaction video jumps back to the beginning
    react.jump(0);
    react.pause();
  }
  // save an image of each frame into the sketch folder
  saveFrame("###.jpg");
}

void captureEvent(Capture c) {
  c.read();
}

void movieEvent(Movie m) {
  m.read();
}

The code above stops the green rectangle from displaying and removes the tint, so now only the playing video shows. Now that I have code that works correctly, I am going to find someone to be in my video who will make the facial expressions once a face is seen. I have removed the tint from this video and future videos because I want to work with it as it will be seen when presented. So every time the video stops and starts again, it's just me covering and uncovering my face to make it work.

Iteration 3


Since my last attempt I have written code to stop the video when a face isn't seen and jump back to the first frame until a face is seen again. I did this simply by adding a line of code that I found in the video reference library:

react.jump(0);

which makes the video jump back to its first frame. Now that I have all the code working how I want it, I am going to remove the unnecessary parts of the code that I won't need when testing. I will show how I did this in my next post.
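In context, the new line sits in the else branch of the face check, just before the pause. This is only a trimmed fragment, assuming the react and faces variables from my earlier sketch:

if (faces.length >= 1) {
  react.play();
} else {
  react.jump(0);   // rewind the reaction video to its first frame
  react.pause();   // hold it there until a face is seen again
}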

Identity and Performance

The Presentation of Self in Everyday Life by Erving Goffman considers the different situations in which people present themselves. Goffman argues that whenever communication happens, an individual presents a version of themselves which others absorb. Everyone is trying to be someone and to persuade people that they are that person. There are two kinds of impression made when interacting: what someone gives and what they give off. To give is to communicate the information they intend to communicate; to give off is to convey impressions unintentionally, which the audience then reads into them.

An individual puts on a front for their audience. There are two different types of act: private and public. When someone is in public it is thought that they ‘perform’ a different self, whereas in private they are thought to be themselves.

Using this theory I am going to conduct my own social experiment to see whether people in the Weymouth House foyer ‘perform’ once they feel they have been put in the public eye.

https://www.essex.ac.uk/sociology/documents/pdf/ug_journal/vol2/2009SC203_HannahHammond.pdf

Attempt 2

The problem with my previous attempt was that the video just disappeared when a face wasn't seen, so in my previous post I said I was going to try pausing the video instead.


This time I have been trying to make the video pause when a face isn't seen. I have done this by adding an extra piece of code that I found in the video reference library:

if (faces.length >= 1) {
  react.play();
} else {
  react.pause();
}

This means that once one or more faces are seen the video plays, otherwise it pauses. It pauses on the frame it has reached and then resumes from that frame.

Next time I am going to code it so that when a face isn't seen, the video stops and jumps back to its first frame.

Iteration 1

Now that I have my chosen idea I have been able to begin learning and making my Processing piece. First of all I had to find out how I can use face tracking in my project. I researched ‘Processing face tracking’ and found an OpenCV library which has a face tracking sketch. The face tracking in OpenCV uses Haar cascades, which is explained here. After I had a play around with the face detection I found the reference for the Video library so I could add video playback to the face tracking and see what would happen.

[Screenshot: code for my first attempt]

The code above is my first attempt at creating my Processing piece. For my first few attempts I am using a placeholder video of myself spinning on a chair. I have implemented the face tracking from the OpenCV library and added my own code to control video playback when a face is seen. This code allows the video to play when a face is seen; when a face is not seen, the video is not displayed and the screen shows what the camera sees (as seen in the video below). I am leaving the green rectangle displayed around detected faces for my first few attempts so I know the tracking is working. I have also made the video slightly see-through so I can show both the video and what the camera sees, to demonstrate how it works. I have commented the code so that I can come back to it and remember what it all means, which should hopefully speed up the process. For my next attempt I am going to try to pause the video when a face is not seen.
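A stripped-down version of that first attempt looks roughly like this. It is not my exact code, just the same behaviour in a simpler form, with placeholder.mp4 standing in for the chair-spinning clip:

import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Movie react;
Capture video;
OpenCV opencv;

void setup() {
  size(640, 480);
  // camera and OpenCV run at half size so the sketch stays smooth
  video = new Capture(this, 640/2, 480/2);
  opencv = new OpenCV(this, 640/2, 480/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  video.start();

  react = new Movie(this, "placeholder.mp4");  // stand-in clip of me spinning on a chair
  react.loop();
}

void draw() {
  // scale everything back up to fill the canvas
  scale(2);
  opencv.loadImage(video);
  image(video, 0, 0);  // always show what the camera sees

  Rectangle[] faces = opencv.detect();

  // only draw the (semi-transparent) reaction video while a face is detected
  if (faces.length >= 1) {
    pushMatrix();
    scale(0.5);
    tint(255, 185);  // make the clip slightly see-through
    image(react, 0, 0);
    noTint();
    popMatrix();
  }

  // green rectangle around each detected face so I can see the tracking working
  noFill();
  stroke(0, 255, 0);
  strokeWeight(3);
  for (int i = 0; i < faces.length; i++) {
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }
}

void captureEvent(Capture c) {
  c.read();
}

void movieEvent(Movie m) {
  m.read();
}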

Panopticism

The Panopticon is a type of prison designed by English philosopher and social theorist Jeremy Bentham in the late 18th century. It features a watchman in the middle and the inmates in cells around the outside. The watchman would be able to watch over all of the inmates without the inmates being able to tell whether or not they were being watched. It's impossible for the watchman to watch all the inmates at once, but because the inmates never know when they are being watched, they act as though they always are. This would change their behaviour at all times. Below is a diagram of the Panopticon design:

This theory is relevant to my project because it is about how being watched changes you: the prisoners would want to act in the right way because they feel they are being watched, and this would help them avoid punishment. With my project, my audience will hopefully feel they have been ‘seen’, and so they may stop doing something ‘wrong’.

http://www.ucl.ac.uk/Bentham-Project/who

http://www.ucl.ac.uk/Bentham-Project/who/panopticon