Monday, January 25, 2010

A Reconfigurable Ferromagnetic Input Device

Written by: Jonathan Hook, Stuart Taylor, Alex Butler, Nicolas Villar, Shahram Izadi

The researchers put together a set of sensors that measure fluctuations in a magnetic field and return them as input. This means that anything ferrous, like a ball bearing, a ferrofluid bladder, or a magnet, can be used as an input device or as part of one, so input devices can be configured on the spot for unique, multi-input uses. The device works by arranging sensor coils in an array beneath the ferrous material and detecting when anything changes above them. Sensing extends into 3D, somewhat like a Theremin.
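The paper itself doesn't include code, but here is a minimal sketch of the sensing loop as I understand it. Everything in it is an assumption for illustration: the 8x8 grid size, the change threshold, and the read_coil() function standing in for the real hardware interface.

```python
import random

GRID_W, GRID_H = 8, 8        # hypothetical coil array size
THRESHOLD = 0.05             # assumed minimum change to count as input

def read_coil(x, y):
    # Stand-in for the real hardware read: one coil's noisy inductance value.
    return 1.0 + random.gauss(0, 0.01)

def calibrate():
    # Record a baseline with no ferrous objects near the array.
    return [[read_coil(x, y) for x in range(GRID_W)] for y in range(GRID_H)]

def scan(baseline):
    # Report every cell whose reading drifted from the baseline; the sign
    # of the change hints whether material moved closer or farther away,
    # which is what gives the device its Theremin-like 3D quality.
    events = []
    for y in range(GRID_H):
        for x in range(GRID_W):
            delta = read_coil(x, y) - baseline[y][x]
            if abs(delta) > THRESHOLD:
                events.append((x, y, delta))
    return events

baseline = calibrate()
print(scan(baseline))  # usually empty until something ferrous moves above the array
```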


The amazing part of this paper is that it shows how input devices can be created on the spot, without cameras. Only ferrous materials can be used with this device, but adding a little piece of ferrous material to whatever input device you want would allow an unprecedented number of unique inputs for whatever program is running. I don't see a huge amount of future research in this beyond 3D interpretation. Though this is really cool, only so many types of inputs are needed. It fills a gap where cameras can't be used, but otherwise I see camera interpretation with depth perception as a much better avenue than this.


Detecting and Leveraging Finger Orientation for Interaction with Direct-Touch Surfaces

Written by: Feng Wang, Xiang Cao, Xiangshi Ren and Pourang Irani

The authors presented the evolving touch technology they are developing. Finger orientation is used to determine what a person is looking at and what the user intends to do next, and it also lets the user point. Knowing orientation greatly expands the range of possible input: the user can flick, "click" in different directions, or perform gestures. The writers then move on to inferring the user's position by figuring out which finger is where on the hand and the angle at which the hand is held. All of this, combined with multiple fingers and multiple hands, leads to a very complex input device.
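The paper doesn't spell out its implementation here, but a standard way to recover a finger's orientation from a touch contact is to fit an ellipse to the contact region via image moments and then resolve the 180-degree ambiguity from how the contact evolves as the finger rolls down onto the surface. The sketch below assumes a boolean contact mask from the sensor; it may or may not match the authors' exact approach.

```python
import numpy as np

def contact_orientation(mask):
    # Fit an ellipse to the contact pixels via second-order image moments;
    # the major-axis angle is the finger's axis, ambiguous by 180 degrees.
    ys, xs = np.nonzero(mask)
    x, y = xs - xs.mean(), ys - ys.mean()
    mu20, mu02, mu11 = (x * x).mean(), (y * y).mean(), (x * y).mean()
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)

def resolve_direction(theta, centroid_at_touchdown, centroid_now):
    # As the finger rolls onto the surface, the contact centroid drifts
    # from the fingertip toward the palm, so point the orientation vector
    # against that drift (toward the fingertip).
    drift = np.asarray(centroid_now) - np.asarray(centroid_at_touchdown)
    direction = np.array([np.cos(theta), np.sin(theta)])
    return theta + np.pi if np.dot(direction, drift) > 0 else theta
```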


This paper is important because it shows just how complex hand and finger recognition is. Continued research in this area will lead to much more user-friendly inputs and could remove the need for a mouse in the near future. The next step, as the authors said, is to migrate this into a 3D world where tilt and 3D gestures can be used, exponentially expanding the number of possible inputs.

Thursday, January 21, 2010

Bonfire: A Nomadic System for Hybrid Laptop-Tabletop Interaction

Bonfire is a laptop that has been modified with two cameras and two projectors pointing to either side of the laptop, so that the workspace is enlarged and the laptop can "recognize" what is going on in it, whether that is hand gestures or what you're reading. Bonfire is a compilation of technologies that produces a new and exciting form of computer interaction: it extends the user's interaction space, allows the laptop to be "aware", enables physical interaction with the laptop, and provides horizontal workspace.
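As a toy illustration of the "aware" part (not Bonfire's actual pipeline, which the paper describes in far more detail), here is a frame-differencing loop with OpenCV that notices activity in the tabletop camera's view; the camera index and motion threshold are assumptions.

```python
import cv2

cap = cv2.VideoCapture(0)  # assumed: the side-facing tabletop camera is device 0
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Difference against the previous frame to find pixels that changed.
    diff = cv2.absdiff(gray, prev)
    _, moving = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(moving) > 5000:  # assumed threshold for "a hand moved"
        print("activity detected on the tabletop workspace")
    prev = gray
```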


All of these things combine to provide a more enriched experience than a regular laptop. The problem lies in the usefulness. It is extremely geeky and cool, but is it practical to project images onto a table when you have a laptop right there, and even more so, is it economical? I think this would be a great addition to the laptop as it is now and heralds a new era, but I don't think it will be a staple because people want smaller things. I could see something like this being made for a phone. There are projectors small enough now to attach to phones, and soon to integrate into them. Make a phone able to recognize things, project, and take gestures, so all you have to do is point and act instead of carrying around a laptop. I say this because the additions being made seem aimed more at social media and a nomadic life than at work and business.


Wednesday, January 20, 2010

Virtual Shelves: Interactions with Orientation Aware Devices

This paper is about using a motion-sensing device to access menu items on a virtual hemispherical shelving unit. The virtual shelves allow menu items to be accessed with kinesthetic movement, letting muscle memory dictate where things are rather than sight. The user moves the device around in front of them to choose the item or function placed in a preset location. The authors ran two experiments. The first finds the usable bounds of the Virtual Shelves: using a Wiimote, participants scanned the hemisphere in front of them vertically and horizontally, and shelves near the edges proved hard to access because of accuracy problems there. The second is a proof of concept in which the Wiimote is replaced with a Nokia N93 running software with preset items on the shelves. The actions could be accomplished, and in fewer clicks than with the traditional interface.
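The core mapping is easy to picture in code: bin the device's orientation into a grid of shelf cells. The sketch below is my own illustration, with hypothetical bin counts and angle ranges rather than the layout from the paper.

```python
PITCH_BINS, YAW_BINS = 4, 7      # hypothetical 4x7 grid of shelves
PITCH_RANGE = (-30.0, 60.0)      # assumed comfortable vertical reach, in degrees
YAW_RANGE = (-75.0, 75.0)        # assumed horizontal sweep, in degrees

def shelf_index(pitch, yaw):
    # Clamp the device's pitch/yaw into the reachable hemisphere and
    # quantize each axis into a shelf row and column.
    def bin_of(value, lo, hi, bins):
        t = min(max((value - lo) / (hi - lo), 0.0), 0.999999)
        return int(t * bins)
    return bin_of(pitch, *PITCH_RANGE, PITCH_BINS), bin_of(yaw, *YAW_RANGE, YAW_BINS)

print(shelf_index(10.0, -40.0))  # -> (1, 1): a shelf held low and slightly left
```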


This is a great leap forward for interfacing with phones. Smaller is not always better with electronics, simply because of the interfacing issues, and this solution makes the device much easier to use by letting muscle memory dictate use. Virtual Shelves would be awkward to use in public in its current state, but once motion sensing gets more accurate, the amount of movement required to select different shelves could be minimized, and a simple twist of the phone might do the trick.