This past year I had the pleasure of working with Seongkook Heo, while he was an intern at Autodesk Research, on quite a cool input and interaction techniques project. The project focused on analyzing and developing an understanding of the situational factors that can constrain our opportunities for input with smartwatches, and then used this knowledge (and the resulting taxonomy) to ideate on ways that we can utilize other body parts or actions to re-enable such input. From 3D printing fake hands to reading through Mechanical Turk participants’ comments about Seongkook kneading dough, the project was a very interesting exploration of input opportunities and ended up being a lot of fun. The paper, No Need to Stop What You’re Doing: Exploring No-Handed Smartwatch Interaction, was also co-authored by Ben Lafreniere, Tovi Grossman, and George Fitzmaurice from Autodesk Research, and will be presented at GI in May.
Smartwatches have the potential to enable quick micro-interactions throughout daily life. However, because they require both hands to operate, their full potential is constrained, particularly in situations where the user is actively performing a task with their hands. We investigate the space of no-handed interaction with smartwatches in scenarios where one or both hands are not free. Specifically, we present a taxonomy of scenarios in which standard touchscreen interaction with smartwatches is not possible, and discuss the key constraints that limit such interaction. We then implement a set of interaction techniques and evaluate them via two user studies: one where participants viewed video clips of the techniques and another where participants used the techniques in simulated hand-constrained scenarios. Our results reveal a preference for foot-based interaction and novel design considerations to be mindful of when designing for no-handed smartwatch interaction scenarios.
This year I was fortunate enough to collaborate with Jeeeun Kim and Tom Yeh (from the University of Colorado) and Haruki Takahashi and Homei Miyashita (from Meiji University) on a rather interesting alt.CHI paper. The work, entitled “Machines as Co-Designers: A Fiction on the Future of Human-Fabrication Machine Interaction”, draws attention to the ways in which current fabrication practices do not facilitate the serendipitous, in-situ creative discoveries that occur during traditional craft practices. For me, this project and the accompanying alt.CHI review process were very illuminating (I highly recommend that anyone who has not submitted an alt.CHI paper do so and experience the nervousness that comes from reading the community’s reviews of their work every day – it’s a great learning experience). The full paper will be presented at CHI 2017 and I will link to it after it has been published. Until then, here is the abstract!
While current fabrication technologies have led to a wealth of techniques to create physical artifacts from virtual designs, they require unidirectional and constraining interaction workflows. Instead of acting as intelligent agents that support humans’ natural tendencies to iteratively refine ideas and experiment, today’s fabrication machines function as output devices. In this work, we argue that fabrication machines and tools should be thought of as live collaborators that aid in-situ creativity, adapting to the physical dynamics that come from unique materiality and/or machine-specific parameters. Through a series of design narratives, we explore Human-FabMachine Interaction (HFI), a novel viewpoint from which to reflect on the importance of (i) interleaved design thinking and refinement during fabrication, (ii) enriched methods of interaction with fabrication machines regardless of skill level, and (iii) concurrent human and machine interaction.
Last year, I was approached by Jim Foley to transform my dissertation on the challenges facing pen computing into an article for the IEEE Computer Graphics and Applications magazine. This was a very interesting experience, especially when it came time to distill an entire thesis down to a few pages! Disseminating my work in a venue that doesn’t commonly focus on HCI or pen computing was a very good exercise, as it made me reflect on why my work was important to the body of research knowledge as a whole, and on the importance of articulation and conciseness when writing.
The ubiquity and mobility of contemporary computing devices have enabled users to consume content anytime, anywhere. Yet, when we need to create content, touch input is far from perfect. When coupled with touch input, the stylus should enable users to simultaneously ink, manipulate the page, and switch between tools with ease, so why has the stylus yet to achieve universal adoption? The author’s thesis sought to understand the usability barriers and tensions that have prevented stylus input from gaining traction and reaching widespread adoption. This article in particular explores the limits of human latency perception and evaluates solutions to unintended touch.
Woo! Next month, I will be going to Brisbane, Australia to present work on Haunted User Interfaces that was done last summer in the DGP Lab by myself, Matthew Lakier, and Mingzhe (Franklin). We were interested in developing new ways that information could be conveyed to users in a household setting, and used ideas from haunted and paranormal phenomena to do so.
Our animatronic moose built from LEGO and servo motors!
Along with a number of prototypes, we also ran a Mechanical Turk study to gather information about the objects people have in their living rooms and how they interact with (or, as it turned out, ignore) these objects. We then synthesized the survey results, prototypes, and construction lessons into a Haunted Design Framework that can be used to develop or re-imagine interfaces for the home.
A quick video illustrating some of the ideas and prototypes:
Abstract: Within this work, a novel metaphor, haunted design, is explored to challenge the definitions of ‘display’ used today. Haunted design draws inspiration and vision from some of the most multi-modal and sensorially diverse experiences that have been reported: the paranormal and hauntings. By synthesizing and deconstructing such phenomena, four novel opportunities to direct display design were uncovered, i.e., intensity, familiarity, tangibility, and shareability. A large-scale design probe, The Living Room, guided the ideation and prototyping of design concepts that exemplify facets of haunted design. By combining the opportunities, design concepts, and survey responses, a framework highlighting the importance of objects, their behavior, and the resulting phenomena to haunted design was developed. Given its emphasis on the odd and unusual, the haunted design metaphor should greatly spur conversation and alternative directions for future display-based user experiences.
I am so super, super excited and honored to have been chosen as the 2015 recipient of the L’Oréal-UNESCO For Women in Science – NSERC Postdoctoral Fellowship Supplement! It was an honor to win an NSERC PDF before, but to be recognized for my contributions to advancing women in science and for my research interests is beyond amazing.
The winners of the L’Oréal Canada and L’Oréal-UNESCO awards (I am the only one in pants!)!
The awards presentation was held at the French Embassy in Ottawa, Ontario, so I got to make my first visit to the nation’s capital (the National Gallery of Canada is fabulous!). Never did I imagine that I would get to go to an embassy, so it was a real treat to meet and learn about the lives of diplomats and those who make funding decisions at NSERC.
Art Deco and some fabulous marble!
Feels just like a Castle in France.
The press release, which also announces a fabulous new program from L’Oréal supporting women in science, can be found here.