Mobile off-screen pinching interaction using two mobile phones and OpenCV programming to sense input gestures in off-screen space. This project explores the use of spatial memory to enhance interaction with mobile devices. As mobile devices become smaller while providing ever more features, extending the interaction into off-screen space is an opportunity to let users manipulate applications through new interaction paradigms. As a result, the user is no longer tied to the small screen when interacting, but can involve the environment in their close proximity.
Project Idea & Prototype Setup
The idea is to have a sensing unit (phone 1) hanging around the body with its camera pointing towards the second phone and the user's hands. Phone 1 senses the off-screen interaction by analyzing the camera images: it recognizes the user's hand interacting in close proximity to the second phone. As shown in figure 1, the user holds the second phone in one hand and controls the application on that phone by gesturing with the other hand.
As described in [Wilson, A.D. 2006, ACM], there is huge potential in using spatial memory to explore new interaction paradigms. As the size of mobile devices decreases while their functional capabilities increase, it can be helpful to instrument the spatial awareness of the user during interaction.
To run the prototype setup described above, at least one iPhone running iPhone OS 3.x is required. The second iPhone (the application phone) can be substituted by the iPhone Simulator.
The off-screen interaction is based on the concept of panning and zooming without touch. The pinch gesture (figure 2) indicates the start and the end of an input gesture. The maps application on phone 2 lets the user treat the device as a peephole whose view is modified by dragging different parts of the map from off-screen into the visible area of the application phone.
Pinch and move your hand around to drag and change the visible portion of the map.
Pinch and move your hand towards the sensing phone to zoom in, or away from it to zoom out.
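The two gestures above could be mapped to application commands roughly as sketched below. This is a minimal illustration, not the project's actual implementation: the function name, the gain constants, and the use of the tracked blob's apparent area as a depth cue are all assumptions.

```python
# Hypothetical mapping from two successive hand observations to a pan/zoom
# command. Each observation is (x, y, area): the tracked blob's image position
# and its apparent size. Moving the hand toward the sensing camera makes the
# blob larger, which we interpret here as zooming in.

def gesture_to_command(prev, curr, pan_gain=1.0, zoom_gain=0.01):
    px, py, pa = prev
    cx, cy, ca = curr
    dx, dy = (cx - px) * pan_gain, (cy - py) * pan_gain
    zoom = (ca - pa) * zoom_gain   # positive = hand moved toward camera
    if abs(zoom) > abs(dx) + abs(dy):
        return ("zoom", zoom)
    return ("pan", dx, dy)

# Example: while pinched, the hand drifts to the right -> a pan command.
print(gesture_to_command((100, 120, 500), (110, 121, 505)))  # → ('pan', 10.0, 1.0)
```

In practice the gains would have to be tuned so that small, unintentional hand jitter does not trigger commands.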
The detection process runs on the phone that senses the input gestures. The resulting information (pinch or no pinch, and the movement of the hand) is sent over a wireless connection to the phone running the maps application.
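The write-up only states that the tracking results are sent over a wireless connection; the actual wire format is not documented. A minimal JSON-over-UDP scheme, purely as an assumed sketch, could look like this:

```python
import json
import socket

# Hypothetical wire format: one small JSON datagram per tracking update,
# carrying the pinch state and the hand's movement since the last frame.

def encode_update(pinching, dx, dy):
    """Serialize one tracking update for the application phone."""
    return json.dumps({"pinch": pinching, "dx": dx, "dy": dy}).encode("utf-8")

def decode_update(data):
    """Parse a tracking update on the receiving (application) side."""
    msg = json.loads(data.decode("utf-8"))
    return msg["pinch"], msg["dx"], msg["dy"]

def send_update(sock, addr, pinching, dx, dy):
    """Fire-and-forget UDP send; lost updates are simply superseded."""
    sock.sendto(encode_update(pinching, dx, dy), addr)

# Round-trip check without a network:
print(decode_update(encode_update(True, 4, -2)))  # → (True, 4, -2)
```

UDP fits this use case because a stale tracking update is useless anyway; the next frame replaces it.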
The tracking is based on the following steps:
1. Sample the skin color of the hand: press the 'sample color' button on phone 1. A red rectangle appears; the color within the rectangle is sampled when the button is released.
2. Start the tracking by pressing the 'Start Detection' button on phone 1. The tracking is based on the CAMShift tracker; further details can be found in the presentation slides and the final presentation video.
(Note: this scheme does not include the alternative connected-components approach based on contour segmentation.)
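The prototype uses OpenCV's CAMShift tracker; as a simplified, dependency-free illustration of the mean-shift step it repeats each frame, the sketch below re-centers a search window on the centroid of the skin-colored pixels inside it (the mask format and function name are assumptions for this example):

```python
# Simplified mean-shift step, the core of CAMShift: move the search window to
# the centroid of the matching (skin-colored) pixels it currently contains.

def mean_shift_step(mask, window):
    """mask:   2D list of 0/1 values (1 = pixel matches the sampled skin color)
    window: (x, y, w, h) current search window
    Returns the window re-centered on the centroid of matching pixels."""
    x, y, w, h = window
    sx = sy = n = 0
    for row in range(max(y, 0), min(y + h, len(mask))):
        for col in range(max(x, 0), min(x + w, len(mask[0]))):
            if mask[row][col]:
                sx += col
                sy += row
                n += 1
    if n == 0:
        return window                  # nothing to track; keep the window
    cx, cy = sx // n, sy // n          # centroid of skin pixels
    return (cx - w // 2, cy - h // 2, w, h)

# A 6x6 frame with a skin blob in the lower right; the window shifts toward it.
mask = [[0] * 6 for _ in range(6)]
for r in range(3, 6):
    for c in range(3, 6):
        mask[r][c] = 1
print(mean_shift_step(mask, (0, 0, 4, 4)))  # → (1, 1, 4, 4)
```

CAMShift additionally adapts the window size and orientation to the blob each frame; that part is omitted here for brevity.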
The pinch detection uses a threshold-based flood-fill approach that relies on the following assumption: when the user performs a pinch, a black spot (the pinching hole) remains enclosed inside the tracked skin-colored region. Otherwise the flood fill covers the whole image and the tracked skin-colored object contains no black spot.
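The flood-fill idea above can be sketched as follows (a minimal pure-Python version operating on a binary skin mask; the real prototype works on thresholded camera frames, and the function name is an assumption):

```python
from collections import deque

# Flood the non-skin background starting from the image border. If some
# background pixel is never reached, it is enclosed by the hand: that is the
# "pinching hole" that signals a pinch gesture.

def detect_pinch(mask):
    """mask: 2D list, 1 = skin pixel, 0 = background.
    Returns True if an enclosed background region (pinch hole) exists."""
    h, w = len(mask), len(mask[0])
    flooded = [[False] * w for _ in range(h)]
    queue = deque()
    # Seed the flood from every background pixel on the image border.
    for r in range(h):
        for c in range(w):
            if (r in (0, h - 1) or c in (0, w - 1)) and not mask[r][c]:
                flooded[r][c] = True
                queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr][nc] and not flooded[nr][nc]:
                flooded[nr][nc] = True
                queue.append((nr, nc))
    # Any background pixel the flood never reached lies inside the hand.
    return any(not mask[r][c] and not flooded[r][c]
               for r in range(h) for c in range(w))

ring = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 0, 1, 0],   # enclosed hole at the center -> pinch
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
print(detect_pinch(ring))  # → True
```

An open hand produces no enclosed background region, so the same function returns False once the fingers separate.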
Mobile off-screen pinching interaction is a great opportunity to extend the interaction space and enables exploring the use of spatial memory to aid interaction. However, there are difficulties with changing lighting conditions, e.g. bright sunlight. In addition, the tracking frame rate is too slow for fluid user interaction, due to the workaround required for acquiring camera images on the iPhone.