When I was first given the brief to create an interactive infographic, I was not sure I would be able to complete the project with a final piece. Looking back through my blog now, I feel a great sense of accomplishment that I have a working final piece, since I started out knowing nothing about the theories I have come to learn about or the Processing language. When I first started thinking about what my idea could be and what I would use, I always knew in the back of my mind that I wanted to use a camera rather than a microphone; to me, this seemed like it would be more effective in the space. Before the trip to London I was adamant that my idea was going to use diversion theory. I thought it was an interesting concept to base my project on, but I failed to find an interesting way to portray the theory interactively. It was after the London trip, when I interacted with the advertisement at the bus shelter, that my ideas really started flowing. With my chosen idea, my aims were to:
- Get face tracking working and implemented
- Use the face tracking to detect a face
- Import a video into Processing
- Use the face tracking to control playback of the video, playing it when a face is seen and pausing it when it is not
- Hopefully make people feel watched or noticed
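The play/pause behaviour in the aims above can be sketched as a small controller that a sketch's draw loop calls once per frame. This is only an illustration of the idea, not my actual Processing code: the class name, the grace period, and the method names are invented here. In the real piece, the face count would come from a face-tracking library (such as OpenCV for Processing) and the playing state would drive play() and pause() on a Movie object.

```java
// A minimal sketch of presence-controlled playback, assuming a short grace
// period so the video does not pause the instant the tracker loses a face
// for a single frame. All names here are hypothetical.
class FacePlaybackController {
    private final long graceMillis;                // how long to keep playing after a face is lost
    private long lastSeenMillis = Long.MIN_VALUE;  // when a face was last detected
    private boolean playing = false;

    FacePlaybackController(long graceMillis) {
        this.graceMillis = graceMillis;
    }

    // Call once per frame with the number of faces the tracker found.
    // Returns true if the video should be playing this frame.
    boolean update(int facesDetected, long nowMillis) {
        if (facesDetected > 0) {
            lastSeenMillis = nowMillis;
        }
        playing = lastSeenMillis != Long.MIN_VALUE
               && (nowMillis - lastSeenMillis) <= graceMillis;
        return playing;
    }

    boolean isPlaying() { return playing; }
}
```

In a Processing draw loop, the sketch would call update() with the length of the detected-faces array each frame, then call movie.play() or movie.pause() depending on the result.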
My finished piece achieves all of the above aims. The coding part of the project went well and all of my aims were met with few problems; however, my theory goals were not quite what I thought they might be. I was hoping to use Erving Goffman's theory from "The Presentation of Self in Everyday Life" to create a piece that would make the audience feel as if they were being interacted with from behind the screen, creating a sense of being watched that I hoped would make people act differently in front of the camera. This failed in the environment, I think partly because of the video I chose to use. The chosen environment lacked forced interaction and advertisement: in a museum, science centre or art gallery, people know a piece is there and that it is to be viewed and interacted with. As there was no forewarning that the projects were being displayed, I feel people were unaware of what was going on on the screens and too shy to find out, so they went on with what they were originally doing, unless a piece was very eye-catching or they could see themselves. Because my piece appeared to be just a still face until a face was detected, it could easily have been dismissed by passers-by as an image of someone's face on the screen. I also came to the realisation that my chosen video was probably quite boring and could have been different.
If I were to do this again, or had longer to work on this project, I would invite people to see the screens and create a bigger build-up to the display, so that people who are interested know when to come along and view it; I think this would attract the right type of audience for my project. I would also create a more interesting video, and perhaps have it display different videos for different faces. I also had a good idea while testing: using more advanced face-tracking code I could detect mouth movement, so that when a smile is detected the video smiles, or when a laugh is detected the video laughs, meaning the video would play back whatever emotion it sees.