Margaret's Adventure in Tangible User Interfaces
Friday, December 2, 2011
In class we learned about objects and the power they hold. When you think about it, objects tie us to everything because we live surrounded by them. I wanted to design an object that captured me and who I am, without being too obvious about it. I wanted puzzles and mystery. If you look closely at the outside edge, you will see six abstract "M"s. I identify with the letter "M" because my name starts with it: I have always been "MTL," and my favorite animal has always been the monkey, simply because it starts with "M" (and they are cute). The main design your eyes are drawn to is the spiral in the middle. I wanted this to be obvious because many things in life seem to spiral, and this semester my time has been spiraling so that it seems to pass by even more quickly. The overall shape reminds me of a snowflake, which I love making; I have spent countless hours littering the floor with paper bits. Snowflakes (the paper kind) also put me in the holiday spirit!! To me, my object has significant meaning that is not obvious. The secrets an object holds seem to make it more meaningful.
Saturday, November 5, 2011
Exploring with TUI Systems
Today's class was exciting! I was able to interact with new TUI systems. I got to connect different Topobo pieces and create sculptures (although I couldn't animate my pieces because we are still waiting for the charger to ship). This is a really cool idea; I can't wait to use it once it is functioning.
Topobo
Sifteo
Lego: Life of George
Tangible Stories
Working on Tangible Stories was a very interesting experience: computers kept crashing, and Tanner was caught in the middle of it. It was also a valuable experience; I tested my patience with computers and collaborated with others when exceptions occurred (especially XAML parse errors).
I was inspired by the idea of being able to share my photos and videos interactively. To encourage users to interact with the surface, I made two tangible objects: a storybook and a mystery person.
I made the storybook a tangible object because I like the idea of being able to get information about an image by placing a book on top of it. This works as a tangible object and not a button because I want the user to be able to move the photo around by holding the book. The focus should be on the story of the picture. I made the mystery person a tangible object because, again, I wanted the user to be able to manipulate the image. In this case, I also wanted the user to be able to move the "identified" image around in case they also wanted to see the original image.
The three main controls involve what the user would like to do: view pictures, view videos, or receive hints on what to do. The words "pictures" and "videos" are buttons, which then show all of the pictures or all of the videos. The hint button is a toggle button that lets the user toggle the hint text (while the button is checked/on, its label changes to "hide hints" so the user knows how to make the hint text disappear).
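Just to sketch the idea, here is roughly how such a hint toggle could be wired up in plain WPF C#. The names, labels, and hint text here are made up for illustration, not from my actual project; a Surface app would use the SDK's SurfaceToggleButton the same way.

```csharp
// Minimal sketch of a hint toggle in WPF (C#). Names and text are
// hypothetical; a Surface app would use SurfaceToggleButton instead.
using System.Windows;
using System.Windows.Controls;
using System.Windows.Controls.Primitives;

public static class HintToggleSketch
{
    public static StackPanel Build()
    {
        var hintText = new TextBlock
        {
            Text = "Place the storybook on a photo to hear its story.",
            Visibility = Visibility.Collapsed // hidden until hints are toggled on
        };
        var hintButton = new ToggleButton { Content = "hints" };

        // While checked, relabel the button so the user knows how to hide the hints.
        hintButton.Checked += (s, e) =>
        {
            hintButton.Content = "hide hints";
            hintText.Visibility = Visibility.Visible;
        };
        hintButton.Unchecked += (s, e) =>
        {
            hintButton.Content = "hints";
            hintText.Visibility = Visibility.Collapsed;
        };

        var panel = new StackPanel();
        panel.Children.Add(hintButton);
        panel.Children.Add(hintText);
        return panel;
    }
}
```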
For the videos and pictures, the next screen shows many ScatterView items, each with a picture or video inside. The user can control each picture or video's size and location by dragging its boundaries or moving the object. The user can also control what detail they would like to see; this control is provided by a TagVisualizer. The user can place different tagged items (the storybook or the mystery person) on the surface to decide what information they would like to know.
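For anyone curious, here is a rough sketch of how the scattered photos and the tagged objects could be set up with the Surface SDK. This is only an illustration under assumptions: the class names follow the Surface 2.0 SDK (the 1.0 SDK uses ByteTagVisualizationDefinition instead), and the tag values, file names, and helper methods are made up, not my actual code.

```csharp
// Sketch: scattered photos plus tagged-object handling, assuming the
// Microsoft Surface 2.0 SDK. Tag values, paths, and names are hypothetical.
using System;
using System.Windows.Controls;
using System.Windows.Media.Imaging;
using Microsoft.Surface.Presentation.Controls;

public static class TangibleStoriesSketch
{
    public static ScatterView BuildPhotoView(string[] photoPaths)
    {
        var view = new ScatterView();
        foreach (var path in photoPaths)
        {
            // Each photo gets its own ScatterViewItem so users can move,
            // rotate, and resize it directly with touch.
            view.Items.Add(new ScatterViewItem
            {
                Content = new Image
                {
                    Source = new BitmapImage(new Uri(path, UriKind.RelativeOrAbsolute))
                }
            });
        }
        return view;
    }

    public static TagVisualizer BuildTagVisualizer()
    {
        var visualizer = new TagVisualizer();
        // One definition per tangible object; placing the tagged storybook
        // on the table shows its visualization near the tag.
        visualizer.Definitions.Add(new TagVisualizationDefinition
        {
            Value = 0x01, // hypothetical byte-tag value glued to the storybook
            Source = new Uri("StorybookVisualization.xaml", UriKind.Relative),
            LostTagTimeout = 2000.0 // keep the visualization 2 s after the tag lifts
        });
        visualizer.Definitions.Add(new TagVisualizationDefinition
        {
            Value = 0x02, // hypothetical tag on the mystery person
            Source = new Uri("MysteryPersonVisualization.xaml", UriKind.Relative)
        });
        return visualizer;
    }
}
```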
While interacting with Tangible Stories, users are able to talk to each other and discuss the pictures or videos. The volume is soft enough that they can make comments while watching videos and still enjoy them. The application encourages socialization, which is great because it is just a medium for users to learn about a trip or about memories. Interacting with the application feels realistic because the photos are scattered about on a coffee table, but it also has digital aspects: videos are scattered on the coffee table too, and more information about the images and videos can easily be discovered.
Here is a video of my version of Tangible Stories: http://www.youtube.com/watch?v=G6k71_8O9cI
Wednesday, November 2, 2011
UIST Day 2 & 3
I had heard that UIST gets more exciting after the first day, which stumped me because I had a great time on the first day. I've also met so many more people, and it's been great!! I was intimidated at first, but everyone is so nice; they love telling you about their work and give great advice.
One paper that I thought was really cool, and could be relevant to CS110, was "Vice Versa." It takes markup and animates it into what would actually be displayed. It is a quick in-place preview technique developed at the University of Paris. Unfortunately, it is still a proof of concept. I would really like to use this tool!
The tangible session was extremely interesting. I really liked it! ZeroN was astounding: a team from MIT used magnets to suspend a ball in midair so that it could be interacted with directly.
Overall, UIST was a fantastic experience, and I loved every minute of it! It has inspired me to look more into research, which is something I have not done much of, and it has confirmed my interest in the general area!!
Wednesday, October 19, 2011
UIST 2011 Student Innovation Contest
When it finally came time for the Student Innovation Contest, our poster and game board were ready to go and our demo was working!! This was very exciting because we had come expecting that everything would break and things would go wrong, but everything seemed to be going well. I was nervous before we began, but once it started I became really excited to share the project, and I realized I knew what I was talking about!! (Thank you, Consuelo, for preparing us!!)
For our project, my teammates, Wendy & Michelle, and I made an interactive game board titled "Where's Bo Peep?". We hid a Microsoft TouchMouse inside a stuffed sheep that we made into a finger puppet. By performing normal sheep movements, such as looking left and right around objects or peeking inside something, the user was actually performing gestures on the TouchMouse, which we interpreted and responded to with audio. The overall goal of the game is to help the sheep find Bo Peep. The user navigates the game board by moving the sheep (mouse) and gesturing at "hotspots" around the board. In one scenario, the user can make the sheep look inside the cave by sticking its head in, which registers as a forward gesture. Following the gesture, an audio response reflects whether or not Bo Peep is inside the cave. When Bo Peep is at a location, she responds; when she is not, the narrator responds and encourages the user to keep looking for her. Having a narrator reinforces the idea of the user creating their own story!
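To give a flavor of how gesture interpretation like this could work, here is a hypothetical sketch. The real TouchMouse SDK delivers a low-resolution capacitive image through its own event API; this sketch simply assumes each frame arrives as a byte grid and classifies a forward gesture by how the touch centroid moves between frames. The threshold is made up, and none of this is our actual contest code.

```csharp
// Hypothetical sketch of detecting a "look inside" (forward) gesture from
// a TouchMouse-style capacitive touch image, assumed here to be a byte[,].
public class ForwardGestureDetector
{
    private double? lastCentroidY;
    private const double ForwardThreshold = 1.5; // rows per frame, hypothetical

    // Returns true when the touch centroid moves up the sensor fast enough,
    // which we interpret as the sheep poking its head forward.
    public bool Update(byte[,] frame)
    {
        double sum = 0, weightedY = 0;
        for (int y = 0; y < frame.GetLength(0); y++)
            for (int x = 0; x < frame.GetLength(1); x++)
            {
                sum += frame[y, x];
                weightedY += frame[y, x] * y;
            }

        if (sum == 0) { lastCentroidY = null; return false; } // no touch this frame

        double centroidY = weightedY / sum;
        bool forward = lastCentroidY.HasValue &&
                       (lastCentroidY.Value - centroidY) > ForwardThreshold;
        lastCentroidY = centroidY;
        return forward;
    }
}
```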
One of the main goals of our project was to be creative in how we used the mouse and to inspire creativity and social interaction in children. The target audience of "Where's Bo Peep?" is elementary-school-aged children. Because the game responds with audio output and the board does not visually change, kids can keep eye contact with each other instead of continuously looking at the game board, which encourages social interaction.
Overall, telling everyone who was interested about our project was a very exciting and exhausting experience. We also met a lot of people and hopefully have made some lasting connections!
UIST 2011 Day 1
The day before UIST we registered and then mingled. It was a successful night because a huge group of us bonded over walking around looking for an open restaurant.
Day 1
The talks were very interesting. Some of the wisdom I took away:
Crowd Control - I learned this term, and it seems like a really cool idea: poll the crowd and use the collective wisdom of the people.
"Simulate expertise in crowd of amateurs" -PlateMate group
"disguise crowd as singer worker" -Legion group
A really cool use of crowd control that was shown is transcription: everyone is given a little segment of the text, and together the whole handwritten document gets typed.
Another really cool idea was the robotic boss. At one company, the boss was in a different city from his workers, so they made a Skype robot that the boss could drive around the office. This was a new idea to me, although the group was presenting a more complex idea about controlling the volume at which people speak. One of their ideas was to use sidetone, which landline phones used to let people know how loudly they are speaking.
From the program, one of the presentations we were most looking forward to was the tongue-input device by Disney Research. Today, the giant costumed characters at Disney are mute; they use their arms to be expressive and engage in body-language conversations. The research team wanted the character actors to be able to trigger voice clips to make the characters more interactive with the children. The idea is that, while the actor is using his or her arms to be expressive, he or she could use his or her tongue to select the appropriate voice clips. This would be combined with a dialogue tree, so each choice in a conversation would offer options related to the previous choice.
My favorite paper of the day is Collabode, which happens to be developed by our neighbors at MIT. It is a program for writing programs, much like Google Docs is a program for writing documents. It encourages collaboration and prevents the errors that can arise when multiple users work on the same project at the same time, by sharing code only once it compiles. This would be great! We were thinking that CS111 (& CS230) could greatly benefit from this program, as there is much pair programming in those classes, and it would help prevent the "lopsided" control that occurs.
Later in the afternoon, there was a session on social learning, mostly about connecting the queries people make on the internet to actual programming. During one of the talks, the group showed how users could take tutorials for Adobe Photoshop and actually perform them in Gimp, a free photo editor. Afterward, a conference attendee who works at Adobe asked the group whether they had thought about the legal implications of their software: it takes instructions for an expensive piece of software and shows how to accomplish the same task in open-source software, which could be seen as taking money away from Adobe. The room was completely silent after he made his remarks. I was not quite sure what to think. On one hand, Adobe is a software company whose goal is to make money off its software. On the other hand, conferences are supposed to promote research and shared learning. I feel that tutorials for most tasks are probably already on the internet in both Gimp and Photoshop form, so this conversion is not a huge leap. One could also take the time to browse Gimp's help sections to figure out the task they would like to perform.
The keynote, Breaking Barriers with Sound, was extremely interesting. I love music and computers, so it was a mix of both worlds! Ge Wang from Stanford was engaging and a great speaker. He demoed his music programming language, ChucK, and showed us apps that his company, Smule, has developed. Last Christmas I discovered the Ocarina app and loved it, and today I found out he created it!! Another awesome app he demoed was an auto-tuner that lets you sing like a pro. One project he has been working on is a laptop orchestra. The idea sounds extremely out-of-this-world to me; it is a very creative concept, and they pulled it off very well. I especially liked that each member had their own speaker, to better reflect a natural orchestra. There are amazing videos on YouTube.
Saturday, September 17, 2011
RFID
I really like the idea of using RFID in our project to implement the locations. It would solve the problem of not knowing where the mouse is when the user picks it up. It would also add a wire if we put the RFID scanner inside the sheep and the tags on the 3D map. On the other hand, the RFID tag could be embedded in the mouse and the scanner could be on the map; then, when the user picked up the mouse, he or she could be instructed to put it back on the RFID scanner's start space. This is just a random idea, influenced by working with RFID tags in lab yesterday.
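To make the idea a bit more concrete, here is a hypothetical sketch of polling a serial RFID reader to figure out which hotspot the sheep is sitting on. The port name, baud rate, one-ID-per-line protocol, and tag IDs are all assumptions for illustration; we have not actually built this.

```csharp
// Hypothetical sketch: poll a serial RFID reader and map tag IDs to
// game-board hotspots. Port settings, protocol, and IDs are assumptions.
using System;
using System.Collections.Generic;
using System.IO.Ports;

class RfidLocationReader
{
    static readonly Dictionary<string, string> TagLocations = new Dictionary<string, string>
    {
        { "04A1B2C3", "cave" },        // hypothetical tag IDs
        { "04D4E5F6", "barn" },
        { "04778899", "start space" }
    };

    static void Main()
    {
        using (var port = new SerialPort("COM3", 9600)) // typical hobby-reader settings
        {
            port.Open();
            while (true)
            {
                // Assume the reader emits one tag ID per line when a tag is scanned.
                string tagId = port.ReadLine().Trim();
                string location;
                if (TagLocations.TryGetValue(tagId, out location))
                    Console.WriteLine("Sheep is at the " + location + ".");
            }
        }
    }
}
```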