Browsing in the bathtub: The future of computer interfaces
If Hideki Koike’s dream comes true, people could watch some games at the 2020 Tokyo Olympics from the ball’s point of view. The professor of computer science at Tokyo Institute of Technology has developed what he calls BallCam, a way of integrating cameras into the balls used in football and other sports.
Koike and his collaborators not only managed to embed a camera into a football, they also figured out how to stabilize the video feed from the ball as it flies through the air spinning at more than 600 rotations per minute. Image-processing algorithms correct the footage for distortion, then expand and interpolate it. The result is a stable, unbroken shot from the ball’s perspective as it sails through the air into the arms – or feet – of a waiting player.
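Koike’s actual pipeline isn’t detailed in this article, but the core idea of cancelling a known spin can be sketched in a few lines. Below is a minimal illustration, assuming the camera’s optical axis is aligned with the ball’s spin axis, so the spin shows up as pure in-plane rotation of the image; the names `SPIN_RPM`, `FPS` and `derotate` are hypothetical, not from the research:

```python
import cv2

# Hypothetical parameters -- illustrative, not from Koike's system.
SPIN_RPM = 600.0   # ball spin rate, estimated elsewhere
FPS = 120.0        # capture frame rate of the embedded camera

def derotate(frames, spin_rpm=SPIN_RPM, fps=FPS):
    """Cancel the ball's roll by counter-rotating each frame.

    Assumes the spin appears as pure in-plane image rotation,
    i.e. the optical axis coincides with the spin axis.
    """
    deg_per_frame = spin_rpm * 360.0 / 60.0 / fps  # rotation between frames
    out = []
    for i, frame in enumerate(frames):
        h, w = frame.shape[:2]
        angle = -(i * deg_per_frame) % 360.0       # cumulative counter-rotation
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        out.append(cv2.warpAffine(frame, M, (w, h)))
    return out
```

In practice the spin rate drifts over a throw, so a real system would re-estimate the rotation from the footage itself (for example, by tracking the horizon line) rather than trusting a fixed constant; the distortion correction and interpolation the article mentions would come on top of this step.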
Koike began embedding cameras into balls back in 2011, when he built a prototype transparent ball housing six iPod Touches. Two years later, his footage from a camera inside a football made the rounds on internet news sites. It’s all part of Koike’s fascination with human-computer interaction (HCI), the field that studies the interfaces we use to control computers and explores new ways of using digital devices.
“When I was an engineering graduate student at the University of Tokyo, I didn’t like to use PCs, but I was so impressed when the Macintosh came out because of its HCI,” Koike explained on the sidelines of the 2016 Rakuten Technology Conference in Tokyo, where he gave a presentation on his HCI projects.
Interfaces evolve slowly. QWERTY keyboards, still in use today, date to typewriters developed in the 1870s, while the ubiquitous computer mouse was invented in the 1960s. Touchscreens are now the norm, and gestural control is becoming increasingly popular in gaming and other applications. It takes decades for new interfaces to go from lab bench to store shelf, so it’s no wonder some of Koike’s creations seem a little off the wall.
AquaTop, for instance, is an award-winning approach to displaying and manipulating content. If you’ve ever wanted to browse the web while soaking in the tub, it’s the interface for you: it turns bathwater into an interactive surface for browsing images, video or text – or even playing games.
A Microsoft Kinect and a projector are placed above the bathtub, which is filled with water and enough bath salts to make it milky. The system can recognize gestures such as poking or paddling the water, and reacts by moving projected elements around or making them appear to ripple. In a video demo, Koike showed how a user could even scoop up a projected cartoon fish into a real-life bucket and drag and drop it to another part of the tub. Adding an LED-adorned speaker system to the tub means you can produce a mini sound-and-light show, perfect for video games at bath time.
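AquaTop’s recognition code isn’t reproduced in the article, but the basic trick of a downward-facing depth camera is easy to sketch: the still water sits at a roughly constant distance from the sensor, so anything that breaks the surface – a fingertip poking up through the milky water – reads as noticeably closer. The sketch below is a rough approximation of that idea; the names and thresholds (`SURFACE_MM`, `POKE_MM`, `find_pokes`) are made up for illustration:

```python
import numpy as np

# Hypothetical calibration values -- illustrative only.
SURFACE_MM = 900.0   # baseline depth of the still water surface
POKE_MM = 40.0       # how far above the surface counts as a finger

def find_pokes(depth_frame, surface=SURFACE_MM, margin=POKE_MM):
    """Return pixel coordinates where something breaks the water surface.

    depth_frame: 2-D array of per-pixel distances (mm) from a
    downward-facing depth camera, e.g. a Kinect mounted above the tub.
    """
    # Pixels significantly closer to the camera than the water plane
    # are treated as candidate fingertips poking through the surface.
    mask = depth_frame < (surface - margin)
    ys, xs = np.nonzero(mask)
    return list(zip(xs, ys))
```

The milky bath salts matter here: they make the water opaque enough to act as both a projection screen and a clean depth baseline, so the system only has to distinguish “above the surface” from “below it.”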
More recent research by Koike includes projection mapping on moving objects and methods of subtly blurring the contents of webpages to direct users’ gaze toward different parts of the screen. But despite this kind of progress in the way people interact with computers, Koike says it’s still too soon to throw out your dusty old keyboard.
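The gaze-steering work builds on a well-known property of human vision: the eye is drawn to the sharpest region of an image. A crude sketch of that general blur-to-guide-attention idea follows; it is not Koike’s method, which subtly and dynamically blurs live webpage content, and the `emphasize` helper is hypothetical:

```python
import cv2

def emphasize(img, roi):
    """Blur everything outside `roi` so the sharp region draws the eye.

    roi: (x, y, w, h) rectangle to keep in focus.
    """
    x, y, w, h = roi
    blurred = cv2.GaussianBlur(img, (31, 31), 0)        # heavy blur everywhere
    blurred[y:y + h, x:x + w] = img[y:y + h, x:x + w]   # restore the focal region
    return blurred
```

A production version would feather the boundary and keep the blur subtle enough that users don’t consciously notice it, which is the point of Koike’s research: guiding attention without interrupting it.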
“If you have to work with a document, the keyboard is still necessary,” says Koike. “Speech recognition is promising because many major companies such as Apple and Google are shifting to it and deep-learning technology is very appropriate for that. HCI is always changing.”
If you’d like to hear more about Koike’s work, check out his full presentation from the Rakuten Technology Conference 2016.
Read more posts from the Rakuten Technology Conference 2016 here.