Saturday, September 17, 2011


I really like the idea of using RFID for our project to implement the locations.  It would solve the problem of not knowing where the mouse is if the user picks it up.  Putting the RFID scanner inside the sheep and the tags on the 3D map would add a wire, though.  Alternatively, the tag could be embedded in the mouse and the scanner could be on the map; then, when the user picked up the mouse, he/she could be instructed to put it back on the RFID scanner start space.  This is just a random idea influenced by working with RFID tags in lab yesterday.
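As a sketch of how the tag-in-map variant might work in code (the tag IDs and location names below are made up purely for illustration, not from our actual design):

```python
# Hypothetical mapping from scanned RFID tag IDs to named spots on the
# 3D board.  Real tag IDs would come from whatever reader we end up using.
TAG_LOCATIONS = {
    "04:A2:1B:7C": "Bo Peep's house",
    "04:9F:33:D1": "the old oak tree",
    "04:E0:58:22": "start space",
}


def locate(tag_id):
    """Return the board location for a scanned tag, or None if unknown."""
    return TAG_LOCATIONS.get(tag_id)
```

Whenever the scanner under the sheep reads a tag, one dictionary lookup tells the game where on the map the sheep is standing.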

Where's Bo Peep? Making the TouchMouse a finger puppet

We reevaluated our project design and decided we didn't have a great reason for using a projector.  Then we thought: why use visual output at all?  When people think of computers, they think of a monitor, mouse, and keyboard.  We want to be non-traditional, so we are going to do away with the visual output and instead use auditory and vibrotactile feedback.

Instead of "Where's Wendy?", we have altered the game to be "Where's Bo Peep?".  More people know about Bo Peep and her lost sheep, so we are going to put a twist on the nursery rhyme and have her sheep look for her.  Since we got rid of the visual display, there is going to be a 3D physical map for the mouse to travel over, and the user will interact with the physical objects by gesturing and receiving audio feedback.

The TouchMouse is now going to be dressed up as a sheep.  There will be two different ways of gesturing on the mouse: in the head of the sheep like a finger puppet, and on the bare surface of the mouse.  By letting the user control a finger puppet, the gestures he/she makes are much more intuitive.  If he/she wants to make the sheep look around a tree, he/she gestures to the right by moving the sheep's head forward and to the right.
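A gesture like that could be recognized from raw touch coordinates with something as simple as comparing movement along each axis.  This is only a hypothetical sketch; the coordinate units and threshold are invented:

```python
def classify_swipe(start, end, threshold=20):
    """Classify a touch stroke from its start and end (x, y) points.

    A stroke shorter than `threshold` in both axes counts as a tap;
    otherwise the dominant axis and its sign give the swipe direction.
    """
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    if abs(dx) < threshold and abs(dy) < threshold:
        return "tap"
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"
```

The same classifier could serve both gesture surfaces; only the coordinates fed in would differ.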

The users will help the sheep find Bo Peep by looking in different places represented on the 3D map.  For example, the sheep could knock on Bo Peep's door and receive a response from Bo Peep (if she is there) or her mother (if she is not).  The sheep will also vibrate when she gets excited, such as when she is close to finding Bo Peep.  If the users are lost, they can go get a hint from the wise old sheep.  This approach will encourage the users to collaborate and work with each other to find Bo Peep.  The size of the 3D board is a constraint that will also force them to work together.  The users will probably be elementary school aged, with varying technological experience.
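The game logic itself could be sketched roughly like this (the location names, responses, and proximity rule are placeholders, not our final design):

```python
import random

# Placeholder location names for the 3D board.
LOCATIONS = ["house", "tree", "well", "barn"]


class BoPeepGame:
    """Toy model of the search game: knock at locations, get audio-style
    responses, and vibrate when the sheep is next to the hiding spot."""

    def __init__(self, hiding_spot=None):
        self.hiding_spot = hiding_spot or random.choice(LOCATIONS)

    def knock(self, location):
        # The response that would be played back as audio.
        if location == self.hiding_spot:
            return "Bo Peep: You found me!"
        return "Mother: She isn't here, keep looking."

    def should_vibrate(self, location, neighbors):
        # Vibrate when the sheep is adjacent to Bo Peep's hiding spot.
        return self.hiding_spot in neighbors.get(location, [])
```

In the real system, `knock` would trigger a recorded audio clip and `should_vibrate` would drive the mouse's vibration motor.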

I think this conception is far more original and creative.  It will be exciting to begin implementing it.  Our next challenge will be to use the TouchMouse SDK to interpret touch data.  I have downloaded the software from DreamSpark, so I'm starting down that path!!
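I don't know yet exactly what the SDK exposes, but assuming it reports the touch sensor as a 2D grid of intensity values, a first step might be finding where the finger is by computing a weighted centroid.  The grid size and threshold here are guesses on my part:

```python
def touch_centroid(image, threshold=32):
    """Return the intensity-weighted (x, y) centroid of the touch image,
    or None if no cell exceeds `threshold`.

    `image` is a list of rows of intensity values (a 2D grid).
    """
    total = x_sum = y_sum = 0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            if v >= threshold:
                total += v
                x_sum += v * x
                y_sum += v * y
    if total == 0:
        return None
    return (x_sum / total, y_sum / total)
```

Tracking the centroid frame to frame would give the stroke start and end points that a gesture classifier needs.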

Friday, September 9, 2011

Tangible Video Editor

Tangible user interfaces are related to other areas of research as well.  The two frameworks I focused on both expanded the area of focus and combined several topics.  "Getting a Grip on Tangible Interaction: A Framework on Physical Space and Social Interaction" focuses on combining four major areas: Tangible Manipulation, Spatial Interaction, Embodied Facilitation, and Expressive Representation.  "Reality-Based Interaction: A Framework for Post-WIMP Interfaces" bases its framework on the intuitiveness of natural everyday concepts that the user already knows.  I use both of these frameworks to analyze the Tangible Video Editor, a movie editor that consists of plastic video puzzle pieces that fit together, transition pieces that fit in between them, and a play-controller that plays the video clips of the connected pieces.

Users can reach out and pick up the video clips they would like to view, and can arrange them by physically moving the pieces.  This can be categorized under haptic direct manipulation as well as naive physics: the user knows about touching things in the world and can arrange the objects as desired, such as by placing the clips they want to use closer to them.  The tangible video editor is lightweight because it allows users to experiment with different orders and transitions without any consequences.  The feedback, however, is not as rapid: for the clips to be played, the video pieces must each be physically connected to the play-controller.

The user has a large amount of spatial interaction with the editor.  The video puzzle pieces are physical objects that the user can pick up and reorder.  The materials are configurable: when the user connects them in an order to the play-controller, he/she has created a new movie segment.  There is a clear connection between putting the video pieces together in a specific order and then playing back the resulting movie.  The user perceives the coupling between the digital video media and the physical pieces of plastic.  The puzzle-shaped pieces are tailored to users' experience because how to construct a puzzle is general knowledge.

Users who edit with the tangible video editor collaborate greatly with each other.  The sheer number of video segments means the users have to collaborate to sort them all; there are too many for one user to gather alone.  These embodied constraints force users to work together to create a movie, which draws on their social awareness.  They are able to share the puzzle pieces and discuss their plot plans.  All of the users are also able to see what is going on and be near the play-controller, which controls when to play the clips and which clips to play.  This is a good use of multiple access points because one user cannot take over the whole project.

The tangible video editor is a good example of a new tangible system that encourages collaboration and makes video editing an enjoyable social activity.  It contains many of the concepts that are in both frameworks, many of which overlap.  These concepts help show and define the parts of this system that contribute to making it successful!

Saturday, September 3, 2011

"Where's Wendy" Proposal

I decided to enter the UIST Student Innovation Contest, where teams use a Microsoft mouse to build something new.  My team's entry is going to be a tangible user interface that does not rely on the "normal" computer interaction setup (mouse, monitor, and keyboard).  That setup has encouraged my generation (and/or the one under mine) to be more removed from the world, engrossed in a computer screen.  To encourage collaboration and socializing, my team decided to have a projector cast the display image across the mouse.  We thought a game would be a fun way of encouraging socializing with others and using technology, so "Where's Wendy" was conceived.

The image would be a digital world with a person named Wendy hiding inside of it.  The goal of the user would be to find Wendy by moving around the world and looking behind objects.  When the mouse is moved, the user's location inside the digital world also moves.  This is very intuitive, as people grow up learning how to move objects by touching them.  To look behind objects, the user would make certain gestures on the mouse, such as swiping to the left or right, or stroking up or down.  The ease of using this device will encourage novices to use it.
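Hypothetically, mapping mouse motion to a position in the digital world could be as simple as accumulating the deltas and clamping to the world's edges (the world size and start position below are arbitrary):

```python
class WorldView:
    """Tracks the player's position in the digital world as the mouse moves.

    World bounds are illustrative; real values would depend on the map art.
    """

    def __init__(self, width=800, height=600):
        self.width, self.height = width, height
        self.x, self.y = width // 2, height // 2  # start in the middle

    def move(self, dx, dy):
        # Mouse deltas translate directly into world movement,
        # clamped so the view never leaves the map.
        self.x = max(0, min(self.width, self.x + dx))
        self.y = max(0, min(self.height, self.y + dy))
        return (self.x, self.y)
```

Gestures like swipes would then be handled separately from this positional tracking, as a second input channel on the same device.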

We want users of all levels to be able to use this device, especially novices.  Users of all ages over 4 or 5 will be able to use it because of its simplicity: it relies on intuitive concepts people have known from a young age.  Older users will also have fun with this game because they probably have memories of playing similar games as children, and will have the chance to share the game with their children.

We wanted our approach to be something that has not been seen before and creative in an unexpected way.  Projecting the image onto the mouse is similar to the idea behind a touch screen, but it can provide normal mouse motion input in addition to touch input.  There are related works that also use projectors in different ways.  One, MotionBeam, was our inspiration to use a projector in a non-traditional way: it has a projector that the user can actually move around, and it gave us the idea that a projector does not need to be in a fixed location.  More and more new ideas are appearing with the development of handheld projectors, including Spotlight Navigation, Twinkle, Multi-User Interaction using Handheld Projectors, and Interacting with Dynamically Defined Information Spaces using a Handheld Projector and a Pen.

Overall, I think this is going to be a great project!  It might be a bit challenging (we still have to acquire a mini projector and communicate with it), but our creativity is flowing.  We also have to design a structure that will move the projector with the mouse without wobbling or falling over.  I believe that will be our greatest design challenge.  My team is very excited, and we are all hoping we can attend the conference.  We agreed that the coolest part would be seeing all of the other creations that students imagined.