This is the results page of the poll app that I designed. When designing this app I took into consideration the lectern and intro panels for the exhibition, using similar colour schemes. I thought this would work well as it would link the app and the exhibition together through design. The results page would display your results alongside the rest of the world's results so you can compare them. I used circular result displays as I thought these would look more appealing than plain bars.
Today was the pitch to Liam; the presentation can be found here. We gathered a lot of feedback on our ideas. The first idea was preferred as it was more interesting than a poll. However, we were asked a lot of questions about it, as people were confused about what the idea was. Most people seemed to understand it once we rephrased it as a sticker book or storybook. We were told that the age range was too low, as it's hard to imagine a 10-year-old telling the app their thoughts on the exhibition. From this we will need to consider the age range more carefully. The poll was also criticised as being too simple.
From the pitch we have decided to change the first timeline idea into a sticker book idea, as it sounds better and makes more sense. We will also be dropping the poll idea and thinking of something more fun and interactive than a poll.
Today we had our first group meeting, in which we discussed initial ideas. We decided that Shaun's idea of a timeline and Laurence's idea of a poll were the strongest.
The timeline would be a series of 'moments' from the exhibition displayed as a timeline, which can then be viewed again and again as a reminder of the experience. The poll would be a series of questions about the exhibition and the user's experience, so the results can be collated and compared with those of other visitors; the user can then share their results on social media.
Now we are going to work individually to create designs ready for the presentations. I have been assigned to design the poll app with Hayley.
This is the mood board for my first idea: a photo app that the user can use to 'become' a character from the Magna Carta. It is similar to an app called 'Face in Hole', which places the user's face onto someone else's body.
I think this idea would be quite fun for children and young teenagers. As well as the silly fun, the app would display information about the character they have chosen, so it is educational too.
Once I was given the brief to create an interactive infographic I was not sure I would be able to complete this project with a final piece, but looking back through my blog now I feel a great sense of accomplishment that I have a working final piece, since I knew nothing about the theories I've come to learn about or the Processing language. When I first started thinking about what my idea could be and what I would use, I always knew in the back of my mind that I wanted to use a camera rather than a microphone, as I thought this would be more effective in the space. Before the trip to London I was adamant that my idea was going to use diversion theory; I thought it was an interesting concept to base my project on, but I failed to find an interesting way to portray the theory interactively. It was after the London trip, when I interacted with the advertisement at the bus shelter, that my ideas really started flowing. My aims with my chosen idea were to:
- Implement working face tracking
- Use the face tracking to detect a face
- Import a video into Processing
- Use the face tracking to control the video's playback, playing when a face is seen and pausing when it is not
- Make people feel watched or noticed
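As a record of how these aims fit together, here is a minimal sketch of the play/pause behaviour in Processing. It assumes the OpenCV for Processing (gab.opencv) and Video libraries are installed, and "clip.mp4" is a placeholder filename for the video in the sketch's data folder:

```processing
import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

Capture cam;    // webcam feed used for face detection
OpenCV opencv;  // OpenCV wrapper running the face cascade
Movie clip;     // the video whose playback the faces control

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  cam.start();
  opencv = new OpenCV(this, 640, 480);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  clip = new Movie(this, "clip.mp4");  // placeholder filename
  clip.loop();
  clip.pause();  // start paused until a face appears
}

void draw() {
  if (cam.available()) {
    cam.read();
    opencv.loadImage(cam);
    Rectangle[] faces = opencv.detect();
    if (faces.length > 0) {
      clip.play();   // a face is watching: play
    } else {
      clip.pause();  // nobody there: freeze
    }
  }
  image(clip, 0, 0, width, height);  // show the video, not the camera
}

void movieEvent(Movie m) {
  m.read();  // pull in new video frames as they arrive
}
```

The audience only ever sees the video, so when nobody is in front of the camera the screen just shows a frozen frame of the face.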
My finished piece achieves all of the above aims. The coding part of the project went well and all of my aims were met with few problems; however, my theory goals were not quite what I thought they would be. I was hoping to use Erving Goffman's theory from "The Presentation of Self in Everyday Life" to create a piece that would make the audience feel as if they were being interacted with from behind the screen, creating a sense of being watched that I hoped would make people act differently in front of the camera. This, however, failed in the environment, partly, I think, due to the video I chose to use. The chosen environment lacked forced interaction and advertisement; in a museum, science centre or art gallery, people know a piece is there and that it's to be viewed and interacted with. As there was no forewarning that the projects were being displayed, I feel people were unaware of what was going on on the screens and too shy to find out, so they went on with what they were originally doing, unless a piece was very eye-catching or they could see themselves. As my piece appeared to be just a still face until a face was detected, it could easily have been dismissed by passers-by as an image of someone's face on the screen. I also came to the realisation that my chosen video was probably quite boring and could have been different.
If I were to do this again, or had longer to work with this project, I would invite people to see the screens and create a bigger build-up, so that when the piece is displayed, people who are interested know when to come along and view it; I think this would create the right type of audience for my project. I would also create a more interesting video, and perhaps have it display different videos for different faces. I also had a good idea while testing: with more advanced face-tracking code I could detect mouth movement, so that when a smile is detected the video smiles, or when a laugh is detected the video laughs; the video would play back whatever emotion it sees.
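As a rough, untested sketch of that emotion-mirroring idea: OpenCV ships a smile cascade (haarcascade_smile.xml), so in principle the same detect loop could swap between clips. This assumes the cascade file has been copied into the sketch's data folder, and "smileClip.mp4" and "neutralClip.mp4" are placeholder filenames; in practice smile cascades are noisy and would likely need detection restricted to the face region and some smoothing:

```processing
import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

Capture cam;
OpenCV opencv;
Movie smiling, neutral;  // two clips: one smiling, one neutral (placeholders)
Movie current;

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  cam.start();
  opencv = new OpenCV(this, 640, 480);
  // assumes haarcascade_smile.xml has been copied into the sketch's data folder
  opencv.loadCascade(dataPath("haarcascade_smile.xml"), true);
  smiling = new Movie(this, "smileClip.mp4");
  neutral = new Movie(this, "neutralClip.mp4");
  smiling.loop();
  neutral.loop();
  current = neutral;
}

void draw() {
  if (cam.available()) {
    cam.read();
    opencv.loadImage(cam);
    // mirror the viewer: show the smiling clip while a smile is detected
    Rectangle[] smiles = opencv.detect();
    current = (smiles.length > 0) ? smiling : neutral;
  }
  image(current, 0, 0, width, height);
}

void movieEvent(Movie m) {
  m.read();
}
```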
After my first test in Weymouth House I found that the speed of the video was an issue, so today I tested my project again after changing back to the original version of the video to see if this would be an improvement. The original video seemed to work better when a face was seen, but because people's faces were not recognised from further away, it was still difficult for people to understand what was happening. Without moving the position of the camera I am unsure how to fix this, and unfortunately moving the camera away from the screen was not possible as it was connected to the screen by a wire.
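One software-side idea I have not tried, sketched here as an assumption rather than a tested fix: upscaling the camera frame before detection, so that distant (small) faces become large enough for the cascade's minimum detection window. This trades frame rate for range:

```processing
import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

Capture cam;
OpenCV opencv;

void setup() {
  size(1280, 960);
  cam = new Capture(this, 640, 480);
  cam.start();
  // run detection at double resolution so distant (small) faces
  // are big enough for the cascade's minimum window size
  opencv = new OpenCV(this, 1280, 960);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
}

void draw() {
  if (cam.available()) {
    cam.read();
    PImage frame = cam.get();
    frame.resize(1280, 960);  // upscale the camera frame before detecting
    opencv.loadImage(frame);
    image(frame, 0, 0);
    noFill();
    stroke(0, 255, 0);
    for (Rectangle f : opencv.detect()) {
      rect(f.x, f.y, f.width, f.height);  // outline each detected face
    }
  }
}
```

Drawing the detection boxes like this would also make it easy to measure from how far away a face is actually picked up.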
My predictions for the testing were that when lots of faces were seen, my piece might not work as well, as the video would constantly be playing because of all the faces. I also didn't think people would feel comfortable walking slowly past a screen and interacting with it when there were lots of people around, whereas during quieter times I hoped people might be more curious and investigate. During testing I compared my predictions to what actually happened. Because faces were not detected well unless they were in close proximity, there was no problem with the video constantly playing when there were lots of faces in the room. When the room was busy I did notice that people walking through the space moved quickly instead of dawdling.
Today I went to the Weymouth House foyer to do some testing. It was during a busy time of day (12:45), which meant there were lots of people around to test with. From testing I noticed that I needed to speed the video up, as I had previously slowed it down thinking that would work better. After testing, comments as well as my own judgement suggested speeding the video back up, and also having the video play continuously, then stop and start again when a face is seen. Because of this I am going to use the original video in my next testing session tomorrow. I also noticed that someone walking past had to be quite close to the camera for it to recognise their face, which made it difficult to get people interested in my piece. One way I could overcome this would be to move the camera to a different location, further away from the screen but close enough that people notice they are triggering the video.