Friday, December 2, 2011

Evocative Objects

In class we learned about objects and the power they hold.  When you think about it, objects tie us to everything because we live surrounded by them.  I wanted to design an object that captured me and who I am, without being too obvious about it.  I wanted puzzles and mystery.  If you look closely at the outside edge, you will see six abstract "M"s.  I identify with the letter "M" because my name starts with it: I have always been "MTL," and my favorite animal has always been the monkey simply because it starts with "M" (and they are cute).  The main design your eyes are drawn to is the spiral in the middle.  I wanted this to be obvious because many things in life seem to spiral, and this semester my time has been spiraling so that it seems to pass by even more quickly.  The overall shape reminds me of a snowflake, which I love making and have spent countless hours littering the floor with paper bits to create.  Snowflakes (the design kind) also put me in the holiday spirit!!  To me, my object has significant meaning that is not obvious.  The secrets an object holds seem to make it more meaningful.


Saturday, November 5, 2011

Exploring with TUI Systems

Today's class was exciting!  I was able to interact with new TUI systems.  I got to connect different Topobo pieces and create sculptures (although I couldn't animate my pieces because we are still waiting for the charger to ship).  This is a really cool idea; I can't wait to use it once it is functioning.
Topobo
Another system I got to explore was Sifteo.  This is awesome: there are six cubes that can interact with each other.  I'm not sure how multiplayer it was supposed to be, but we ended up each having a cube and interacting with each other to solve puzzles.  More apps can be downloaded for different games.  It was a great experience!  I didn't realize that you could use tilting until I accidentally tilted mine and all of my pieces went sliding to one side.  That turned out to be a great feature, which I ended up using very intuitively to connect the colored pieces in my cube to the same-colored pieces in the other players' cubes.

Sifteo
The Life of George was a fun interactive game!  We were racing against the clock to build the picture the iPod showed us, and then we could take a picture of our build and it would recognize what we made!!  This was best for around three people.  It was a little hard with four adults trying to hover around this little board, but we all helped out by yelling things such as "blue 4" or "black single".  I could definitely see my cousins loving this game, and I am thinking about purchasing it as a Christmas present!
Lego: Life of George


Tangible Stories

Working on Tangible Stories was a very interesting experience.  Computers kept crashing, and Tanner was in the middle of it all.  It was also a valuable experience: I tested my patience with computers and collaborated with others when exceptions occurred (especially XAML parse errors).


I was inspired by the idea of being able to share my photos and videos interactively.   To encourage users to interact with the surface, I made two tangible objects: a storybook and a mystery person. 
  
I made the storybook a tangible object because I like the idea of being able to get information about an image by placing a book on top of it.  This works as a tangible object and not a button because I want the user to be able to move the photo around by holding the book.  The focus should be on the story of the picture.  I made the mystery person a tangible object because, again, I wanted the user to be able to manipulate the image.  In this case, I also wanted the user to be able to move the "identified" image around in case they also wanted to see the original image.

The three main controls correspond to what the user would like to do.  They can view pictures, view videos, or receive hints on what to do.  The words "pictures" and "videos" are buttons, which then show all of the pictures or all of the videos.  The hint button is a toggle button that lets the user show and hide the hint text (while the button is "checked/on," the label changes to "hide hints" to let the user know how to make the hint text disappear).
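For the curious, here is a rough sketch of how a toggle button like this can be wired up in WPF C#; the control names are hypothetical placeholders, not my actual code:

```csharp
using System.Windows;
using System.Windows.Controls.Primitives;

// Minimal sketch of the hint toggle (hypothetical names): while the
// button is checked the hint text is visible and the label reads
// "hide hints"; unchecking hides the text and restores the label.
public static class HintToggle
{
    public static void Wire(ToggleButton hintButton, UIElement hintText)
    {
        hintButton.Content = "hints";
        hintText.Visibility = Visibility.Collapsed;

        hintButton.Checked += (s, e) =>
        {
            hintText.Visibility = Visibility.Visible;
            hintButton.Content = "hide hints";  // tells the user how to dismiss the hints
        };
        hintButton.Unchecked += (s, e) =>
        {
            hintText.Visibility = Visibility.Collapsed;
            hintButton.Content = "hints";
        };
    }
}
```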

For the videos and pictures, the next screen shows many ScatterView items, each with a picture or video inside.  The user can control each picture or video's size and location by moving its boundaries or moving the object itself.  The user can also control what detail they would like to see; this control is provided by a TagVisualizer.  The user can place different tagged items (the storybook or the mystery person) on the surface to decide what information they would like to know.
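Roughly, the setup looks like this in C# (a sketch assuming the Surface 2.0 SDK; the tag values and the visualization XAML file names are placeholders I made up):

```csharp
using System;
using System.Windows.Controls;
using System.Windows.Media.Imaging;
using Microsoft.Surface.Presentation.Controls;

// Sketch of populating the ScatterView and registering the two tagged
// objects. Tag values and XAML file names are hypothetical.
public static class StoriesSetup
{
    public static void Populate(ScatterView scatter, TagVisualizer visualizer,
                                string[] photoPaths)
    {
        // One free-floating ScatterViewItem per photo: the user can drag,
        // rotate, and resize each one on the table.
        foreach (string path in photoPaths)
        {
            scatter.Items.Add(new ScatterViewItem
            {
                Content = new Image
                {
                    Source = new BitmapImage(new Uri(path, UriKind.RelativeOrAbsolute))
                }
            });
        }

        // The storybook tag shows the story behind a photo; the mystery
        // person tag reveals who is in it.
        visualizer.Definitions.Add(new TagVisualizationDefinition
        {
            Value = 0x01,  // byte tag glued to the storybook
            Source = new Uri("StoryVisualization.xaml", UriKind.Relative)
        });
        visualizer.Definitions.Add(new TagVisualizationDefinition
        {
            Value = 0x02,  // byte tag on the mystery person
            Source = new Uri("MysteryPersonVisualization.xaml", UriKind.Relative)
        });
    }
}
```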

While interacting with Tangible Stories, users are able to talk to each other and discuss the pictures or videos.  The volume is soft enough that they can make comments while watching videos and still enjoy them.  The application encourages socialization, which is great because it is just a medium for users to learn about a trip or memories.  Interacting with this application is realistic because the photos are scattered about on a coffee table, but it also has digital aspects: videos are scattered on the coffee table too, and more information about the images and videos can be easily discovered.

Here is a video of my version of Tangible Stories: http://www.youtube.com/watch?v=G6k71_8O9cI

Wednesday, November 2, 2011

UIST Day 2 & 3

I've heard that UIST gets more exciting after the first day, which surprised me because I had a great time on the first day.  I've also met so many more people and it's been great!!  I was intimidated at first, but everyone is so nice, loves telling you about their work, and gives great advice.

One paper that I thought was really cool and could be relevant to CS110 was "Vice Versa."  It takes markup and then animates it into what would actually be displayed.  It is a quick in-place preview technique developed at the University of Paris.  Unfortunately it is still a proof of concept.  I would really like to use this tool!

The tangible session was extremely interesting.  I really liked it!  ZeroN was astounding: a team from MIT used magnets to suspend a ball in midair so that it could be interacted with.

Overall, UIST was a fantastic experience and I loved every minute of it!  It has inspired me to look more into research, which is something I have not done much of.  It has also confirmed my interest in the general area!!

Wednesday, October 19, 2011

UIST 2011 Student Innovation Contest

When it finally came time for the Student Innovation Contest, our poster and game board were ready to go and our demo was working!!  This was very exciting because we had come with expectations that everything would break and things would go wrong, but everything seemed to be going well.  I was nervous before beginning, but when it started I became really excited to share the project and I realized I knew what I was talking about!! (Thank you Consuelo for preparing us!!).

For our project, my team (Wendy, Michelle, and I) made an interactive game board titled "Where's Bo Peep?".  We hid the Microsoft TouchMouse inside of a stuffed sheep that we made into a finger puppet.  By performing normal sheep movements, such as looking left and right around objects or looking inside something, the user was actually performing gestures on the TouchMouse that we interpreted and responded to with audio.  The overall goal of the game is to help the sheep find Bo Peep.  The user navigates around the game board by moving the sheep (mouse) and gesturing at "hotspots" around the board.  In one scenario, the user can make the sheep look inside the cave by sticking its head in, which translates to a forward gesture on the mouse.  Following the gesture there is an audio response reflecting whether or not Bo Peep was inside the cave.  When Bo Peep is at a location, she responds.  When Bo Peep is not at a location, the narrator responds and encourages the user to continue looking for her.  Having a narrator propagates the idea of the user creating their own story!
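The response logic itself is simple.  Here is a sketch of the idea in C# (the hotspot names, gesture labels, and clip paths are hypothetical; our real code handled more hotspots and gestures):

```csharp
using System.Collections.Generic;
using System.Media;

// Core reaction logic for "Where's Bo Peep?" (a sketch with made-up
// names). Given the hotspot the sheep is at and the decoded gesture,
// play the appropriate audio clip.
public class BoPeepGame
{
    private string _boPeepLocation = "cave";  // where Bo Peep hides this round

    private readonly Dictionary<string, string> _foundClips =
        new Dictionary<string, string>
        {
            { "cave", @"Audio\bopeep_cave.wav" },
            { "house", @"Audio\bopeep_house.wav" },
        };

    private readonly Dictionary<string, string> _narratorClips =
        new Dictionary<string, string>
        {
            { "cave", @"Audio\narrator_cave_empty.wav" },
            { "house", @"Audio\narrator_house_empty.wav" },
        };

    public void OnGesture(string hotspot, string gesture)
    {
        // Only a forward gesture counts as "looking inside" a hotspot.
        if (gesture != "forward") return;

        string clip = hotspot == _boPeepLocation
            ? _foundClips[hotspot]        // Bo Peep answers herself
            : _narratorClips[hotspot];    // the narrator encourages more searching

        new SoundPlayer(clip).Play();     // Play() returns immediately (asynchronous)
    }
}
```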

One of the main goals of our project was to be creative in the way we used the mouse and to inspire creativity and social interaction in children.  The target audience of "Where's Bo Peep?" is elementary-school-aged children.  The game encourages kids to interact socially by responding with audio output, which lets them keep eye contact with each other; because the board never changes visually, they are not forced to continuously look at it.

Overall, telling everyone who was interested about our project was a very exciting and exhausting experience.  We also met a lot of people and hopefully have made some lasting connections!

UIST 2011 Day 1

The day before UIST we registered and then mingled.  It was a successful night because a huge group of us bonded over walking around looking for an open restaurant.

Day 1
The talks were very interesting.  Some of the wisdom I took away:

Crowd Control - I learned about this term and it seems to be a really cool idea: poll the crowd and use the collective wisdom of the people.
"Simulate expertise in crowd of amateurs" -PlateMate group
"disguise crowd as single worker" -Legion group
A really cool use of crowd control that was shown is transcription: everyone is given a little segment of the text, and together the whole handwritten document gets typed.

Another really cool idea was the robotic boss.  At a company, a boss was in a different city from his workers, so they made a Skype robot that the boss could drive around the office.  This was a new idea to me, although the group was presenting something more complex: controlling the volume people use.  One of their ideas was to use a sidetone, which landline phones used to let people know how loudly they are speaking.

From the program, one of the presentations we were most looking forward to was the tongue-input device by Disney Research.  Today, the giant costumed characters at Disney are mute and use their arms to be expressive and engage in a body-language conversation.  The research team wanted the character actors to be able to trigger voice clips to make the characters more interactive with the children.  The idea is that while the actor is using his/her arms to be expressive, he/she could use his/her tongue to select the appropriate voice clips.  This would be combined with a decision tree, so each choice in a conversation would lead to follow-up choices related to the one previously chosen.

My favorite paper of the day was Collabode, which happens to have been developed by our neighbors at MIT.  It is a program for writing programs, much like Google Docs is a program for writing documents.  It encourages collaboration and prevents the errors that can arise when multiple users work on the same project at the same time by only sharing code once it compiles.  This would be great!  We were thinking that CS111 (& CS230) could greatly benefit from this program, as there is much pair programming in these classes and it would help prevent the "lopsided" control that occurs.

Later in the afternoon, there was a session on social learning.  This was mostly about connecting the queries people type on the internet to the actual programs they use.  During one of the talks, the group showed how users could take tutorials for Adobe Photoshop and actually perform them in GIMP, a free photo editor.  After that talk, one of the conference attendees, who works at Adobe, asked the group if they had thought about the legal implications of their software.  The software was taking instructions for an expensive piece of software and showing how to accomplish the same task in open-source software, which could be seen as taking money away from Adobe.  The room was completely silent after he made his remarks.  I was not quite sure what to think.  On one hand, Adobe is a software company and its goal is to make money off of its software.  On the other hand, conferences are supposed to promote research and shared learning.  I feel that tutorials for most tasks are probably already out on the internet with both GIMP and Photoshop instructions, so this conversion is not a huge leap.  One could also take the time to browse the help sections in GIMP to figure out the task they would like to perform.

The keynote, Breaking Barriers with Sound, was extremely interesting.  I love music and computers, so it's a mix of both worlds!  Ge Wang from Stanford was engaging and a great speaker.  He demoed his music coding language, ChucK, and he showed us apps that his company, Smule, has developed.  Last Christmas I discovered the Ocarina app and loved it, and today I found out he created it!!  Another awesome app he demoed was an auto-tuner that lets you sing like a pro.  One project he has been working on is a laptop orchestra.  The idea sounds extremely out-of-this-world to me.  It is a very creative concept and they pulled it off very well.  I especially liked that each member had their own speaker to better reflect a natural orchestra.  There are amazing videos of it on YouTube.


Saturday, September 17, 2011

RFID

I really like the idea of using RFID in our project to implement the locations.  It would solve the problem of not knowing where the mouse is once the user picks it up.  On the other hand, putting the RFID scanner inside the sheep, with the tags on the 3D map, would add a wire.  Alternatively, the RFID tag could be embedded in the mouse and the scanner could be on the map; then when the user picked up the mouse, he/she could be instructed to put it back on the RFID scanner start space.  This is just a random idea influenced by working with RFID tags in lab yesterday.
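To make the idea concrete, here is a sketch of what the map-side version could look like in C#; the port name, baud rate, tag IDs, and one-tag-ID-per-line protocol are all assumptions about hypothetical reader hardware, not a specific device's API:

```csharp
using System;
using System.Collections.Generic;
using System.IO.Ports;

// Sketch of the map-side RFID idea: a reader under the board reports
// tag IDs over a serial port, and we map each tag to a board location.
// The port name, baud rate, tag IDs, and line-based protocol are assumed.
public class RfidLocationSensor
{
    private readonly SerialPort _port = new SerialPort("COM3", 9600);

    private readonly Dictionary<string, string> _tagToLocation =
        new Dictionary<string, string>
        {
            { "04A3F2B1", "start" },  // hypothetical tag IDs
            { "04A3F2B2", "cave" },
        };

    public event Action<string> LocationChanged;

    public void Start()
    {
        _port.DataReceived += (s, e) =>
        {
            string tagId = _port.ReadLine().Trim();  // one tag ID per line (assumed)
            string location;
            if (_tagToLocation.TryGetValue(tagId, out location) && LocationChanged != null)
                LocationChanged(location);
        };
        _port.Open();
    }
}
```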

Where's Bo Peep? Making the TouchMouse a finger puppet



We reevaluated our project design and decided we didn't have a great reason for using a projector.  Then we thought, why use visual output?  When people think of computers, they think of a monitor, mouse and keyboard.  We want to be non-traditional, so we are going to do away with the visual output and instead use auditory and vibrotactile feedback.

Instead of "Where's Wendy?", we have altered the game to be "Where's Bo Peep?".  More people know about Bo Peep and her lost sheep, so we are going to put a twist on the nursery rhyme and have her sheep look for her.  Since we got rid of the visual display, there is going to be a 3D physical map the mouse will travel over; the user can then interact with the physical objects by gesturing and receiving audio feedback.

The TouchMouse is now going to be dressed up as a sheep.  There will be two different ways of gesturing on the mouse: in the head of the sheep like a finger puppet, and on the bare surface of the mouse.  By allowing the user to control a finger puppet, the gestures he/she makes are much more intuitive.  If the user wants to make the sheep look around a tree, he/she gestures to the right by moving the sheep's head forward and to the right.

The users will help the sheep find Bo Peep by looking in different places represented on the 3D map.  For example, the sheep could go knock on Bo Peep's door and then receive a response from Bo Peep (if she is there) or her mother (if she is not).  The sheep will also vibrate when she gets excited, such as when she is close to finding Bo Peep.  If the users are lost, they can go get a hint from the old wise sheep.  This approach will encourage the users to collaborate and work with each other to find Bo Peep.  The size of the 3D board is a constraint that will also force them to work together.  The users will probably be elementary-school-aged with varying technological experience.

I think this concept is much more unique and creative.  It will be exciting to begin implementing it.  Our next challenge will be to use the TouchMouse SDK to interpret touch data.  I have downloaded the software from DreamSpark, so I'm starting down that path!!
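Based on the SDK's sample code, reading the sensor looks roughly like the sketch below: the mouse reports a small grayscale touch image, and tracking the centroid of the touch from frame to frame should be enough to tell left, right, and forward apart.  The event manager and status fields follow the sample as I remember it, and the thresholds are pure guesses:

```csharp
using System;
using Microsoft.Research.TouchMouseSensor;

// First stab at interpreting TouchMouse data: track the finger's
// intensity-weighted centroid across sensor frames and classify its
// motion as a left, right, or forward gesture. The event manager and
// status fields follow the SDK sample; the thresholds are guesses.
public class SheepGestureTracker
{
    private double _lastX = -1, _lastY = -1;

    public event Action<string> Gesture;  // "left", "right", or "forward"

    public void Start()
    {
        TouchMouseSensorEventManager.Handler += OnSensorImage;
    }

    private void OnSensorImage(object sender, TouchMouseSensorEventArgs e)
    {
        int width = (int)e.Status.m_dwImageWidth;   // 15 columns on the TouchMouse
        int height = (int)e.Status.m_dwImageHeight; // 13 rows
        double sum = 0, x = 0, y = 0;

        // Intensity-weighted centroid of the raw touch image.
        for (int row = 0; row < height; row++)
            for (int col = 0; col < width; col++)
            {
                byte v = e.Image[row * width + col];
                sum += v;
                x += v * col;
                y += v * row;
            }

        if (sum == 0) { _lastX = _lastY = -1; return; }  // finger lifted
        x /= sum;
        y /= sum;

        if (_lastX >= 0 && Gesture != null)
        {
            if (x - _lastX > 1.5) Gesture("right");
            else if (_lastX - x > 1.5) Gesture("left");
            else if (_lastY - y > 1.5) Gesture("forward");  // toward the front of the mouse
        }
        _lastX = x;
        _lastY = y;
    }
}
```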


Friday, September 9, 2011

Tangible Video Editor

Tangible user interfaces are related to other areas of research as well.  The two frameworks I focused on both expand the area of focus and combine several topics.  Getting a Grip on Tangible Interaction: A Framework on Physical Space and Social Interaction focuses on combining four major themes: Tangible Manipulation, Spatial Interaction, Embodied Facilitation, and Expressive Representation.  Reality-Based Interaction: A Framework for Post-WIMP Interfaces bases its framework on the intuitiveness of using natural, everyday concepts that the user already knows.  I use both of these frameworks to analyze the Tangible Video Editor, a movie editor that consists of plastic video puzzle pieces that fit together, transition pieces that fit in between the video puzzle pieces, and a play-controller that plays the video clips of the connected pieces.

Users can reach out and pick up the video clips that they would like to view, and can arrange them by physically moving the video pieces.  This can be categorized under haptic direct manipulation as well as naive physics.  The user knows about touching things in the world and can arrange the objects as they desire, such as by placing the clips they want to use closer to them.  The Tangible Video Editor is lightweight because it allows users to experiment with different orders and transitions without any consequences.  The feedback, however, is not as rapid: for the clips to be played, the video pieces must each be connected to the play-controller by physically moving them.

The user has a large amount of spatial interaction with the editor.  The video puzzle pieces are physical pieces that the user can interact with and reorder.  The materials are configurable because when the user puts them in an order connected to the play-controller, he/she has created a new movie segment.  There is a clear connection between putting the video pieces together in a specific order and then playing back the resulting movie: the user perceives the coupling between the digital video media and the physical pieces of plastic.  The puzzle-shaped pieces are tailored to users' experience because how to construct a puzzle is general knowledge.

Users who edit with the Tangible Video Editor collaborate closely with each other.  The number of video segments means the users have to collaborate to sort all of them; there are too many for one user to gather alone.  These embodied constraints force users to work together to create a movie, which draws on their social awareness.  They are able to share the puzzle pieces and discuss their plot plans.  All of the users are also able to see what is going on and be near the play-controller, which controls when to play the clips and which clips to play.  This is a good use of multiple access points because no one user can take over the whole project.

The Tangible Video Editor is a good example of a new tangible system that encourages collaboration and makes video editing an enjoyable social activity.  It contains many of the concepts that appear in both frameworks, many of which overlap.  These concepts help show and define the parts of the system that contribute to making it successful!

Saturday, September 3, 2011

"Where's Wendy" Proposal

I decided to enter the UIST Student Innovation Contest, where teams have to use a Microsoft mouse to build something new.  My team's creation is going to be a tangible user interface that does not rely on the "normal" computer interaction setup (such as a mouse, monitor, and keyboard).  This setup has encouraged my generation (and/or the one under mine) to be more removed from the world, engrossed in a computer screen.  To encourage collaboration and socializing, my team decided to have a projector project the display image across the mouse.  We thought a game would be a fun way of encouraging socializing with others while using technology, so "Where's Wendy" was conceived.

The image would be a digital world with a person named Wendy hiding inside of it.  The goal of the user would be to find Wendy by moving around the world and looking behind objects.  When the mouse is moved, the location inside of the digital world also moves.  This is very intuitive, as people grow up learning how to move objects by touching them.  To look behind objects, the user would make certain gestures on the mouse, such as swiping to the left or right, or stroking up or down.  The ease of using this device will encourage novices to use it.

We want users of all levels to be able to use this device, especially novices.  Users of all ages over 4 or 5 are able to use it because of its simplicity: it relies on intuitive skills people have known from a young age.  Older users will also have fun with this game because they probably have memories of playing similar games as children, and will have the chance to share the game with their children.

We wanted our approach to be something that has not been seen before and something that was creative in an unexpected way.  Projecting the image on the mouse is similar to the ideas behind a touch screen, but is actually able to provide normal mouse motion input in addition to touch input.  There are related works that also involve using projectors in different ways.  One, MotionBeam, was our inspiration to use a projector in a non-traditional way.  MotionBeam has a projector that the user can actually move around, and gave us the idea that a projector does not need to be in a fixed location.  There are more and more new ideas with the development of handheld projectors, including Spotlight Navigation, Twinkle, Multi-User Interaction Using Handheld Projectors, and Interacting with Dynamically Defined Information Spaces Using a Handheld Projector and a Pen.

Overall, I think this is going to be a great project!  It might be a bit challenging (we still have to acquire a mini projector and communicate with it), but our creativity is flowing.  We also have to design a structure that will move the projector with the mouse without wobbling or falling over; I believe that will be our greatest design challenge.  My team is very excited and we are hoping we can all attend the conference.  We all agreed that the coolest part would be seeing all of the other creations that students imagined.