Monday, December 17, 2012

New Note. New Note. *sigh* New Note.

I'm interested in what you can do with audio if you're wearing a headset/microphone. As a baseline, I want to see what you can already do with off-the-shelf components with minimal frustration. The test task is making a note without taking the phone out of my pocket.

(tl;dr: this does not work at all. I am surprised at how much this doesn't work. Geez, I figured voice commands were solved.)

Headsets:
I've tried the Plantronics M50 and the LG Tone. The M50 is a standard Bluetooth headset; the Tone is worn around the neck and has detachable magnetized earbuds that you can put in your ears. First impressions:
- I feel like a businessman and a tool wearing the M50. With the Tone I feel like a regular person. This is important.
- Music streaming to the ears works right out of the box on both. This is surprisingly great, especially when biking.
- Sound quality in my ears seems fine on both, although I've only used them a couple days each. The M50 is a little bit quiet even on max volume for listening to music, but okay for calls. Can't say much about voice recording quality.

Apps:
- the Google app that includes Voice Actions and Google Now (on Jelly Bean, I think) is so cool and so flawed and buggy. The cool: saying "note ____" records your voice and sends it to yourself in an email: both the audio file and an attempted transcription. No confirmations or anything. Exactly correct. However, when I associate the "bluetooth button" action with the Google Search app and press the button, nothing happens. And when I open the app on my own, sometimes it jumps right into recording, and sometimes it says "initializing" forever and freezes. This is on my 2-year-old Nexus S, and the Galaxy S3 I'm working with doesn't have Jelly Bean. I'm guessing when they get the bugs worked out, this will be the reasonable way to talk to my phone.

- Voice Control (Full) works pretty well. Pressing the button on my Bluetooth headset opens it and starts listening (IF you disable the Google app and then re-associate the "bluetooth button" action with Voice Control). I can say "Make a note" and it puts it in my Evernote account. The downside is that it's a 5-step process: "Make a note" / What should the content be? / "blah blah" / Do you want to make a note with content 'blah blah'? / "Yes" / What should the title be? / "title title" / Do you want the title to be 'title title'? / "Yes." Five steps is four steps too many (especially because all these steps can, and do, fail).

- Vlingo boasted at least somewhat-competent voice recognition, but you can only do about 7 things with it, none of which I even want to do (call people, text people, update your Facebook status...).

- Utter looks like it's headed in the right direction, but right now it can't be triggered by the button on the Bluetooth headset.

- Samsung's S Voice: not so good. Pressing the button on my Bluetooth device starts it... sometimes... and sometimes it just opens the app so I can start it by pressing a button on the screen (which defeats the whole purpose of the Bluetooth button). Also, when you're making a note, it asks you to confirm by saying "Save note"... and if it doesn't hear "save", it just hears "note", which throws away your old note and starts a new one. What!

- Skyvi ("Siri for Android") had poor voice recognition. Also, it doesn't support just taking notes. Am I the only one who never wants to call people if it's not very reliable that the service will call the right person?

- Iris ("Siri backwards") just didn't work on my Nexus S, and on the Galaxy S3, it looks like it's a press-an-on-screen-button app. (with lots of annoying advertising for some other app too.)

Finally, a couple of Absolutely Correct Ideas about how voice commands should work, based on a day of futzing with them:
- start on Bluetooth button press. If I take my phone out of my pocket, you've lost me. Ideally, we'd be starting on a wake-up word, but I assume the battery life isn't quite there yet.
- don't rely on correct transcription when possible. (Taking notes shouldn't rely on correct transcription.)
- corollary: don't ask me to confirm stuff, unless it relies on correct transcription (like calling a person). I should say things once, maybe twice. (A rough sketch of the flow I have in mind follows this list.)
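To make that concrete: here's a rough sketch (mine, not any of these apps' actual behavior) of the note flow I'm asking for. Every helper below is a hypothetical stub standing in for real recording, speech recognition, and email calls.

```python
# A sketch of the ideal "make a note" flow: one button press, no confirmations,
# and a wrong transcription is tolerable because the raw audio is kept too.
# All three helpers are hypothetical stubs.

def record_audio():
    """Stub: record from the headset mic until silence; return a file path."""
    return "/tmp/note.wav"

def transcribe(audio_path):
    """Stub: best-effort speech-to-text. Errors are fine; the audio is the truth."""
    return "pick up coffee filters"

def email_myself(subject, body, attachment):
    """Stub: send the note (text plus audio) to my own inbox."""
    print(f"emailed: {subject!r} with attachment {attachment}")

def on_headset_button_press():
    audio = record_audio()    # start listening immediately, no prompts
    text = transcribe(audio)  # may be wrong; that's OK
    email_myself(f"Note: {text}", text, audio)
    # Done. No "what should the title be?", no "are you sure?"

on_headset_button_press()
```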

Wednesday, October 24, 2012

Memorizing Names

Say I was talking to you and I told you one fact (like my name) and you wanted to memorize it, but we kept conversing. How would you do it?

You should rehearse it at T+8 seconds, 14 seconds, 32 seconds, 86 seconds, and increasing intervals in a 5+3^n pattern. Here's why:

Spaced Repetition has been around a long time. The idea is that, if you're going to practice a fact N times to remember it, you should practice it over time, not all at once. (this is called the Spacing Effect.) You should be asked to reproduce the item, not simply shown it again. ("Testing Effect") Furthermore, these rehearsals should be in increasing intervals. ("Expanded Retrieval")

About the spacing effect: this has been shown repeatedly (see pretty much any link in this post where spaced practice beats massed practice).
About the testing effect: this has been shown repeatedly too, e.g. by Carpenter and DeLosh (2005).

About expanded retrieval: this is a little less clear. Pimsleur (1967) suggested exponentially increasing intervals. Landauer and Bjork (1978) found that increasing intervals (e.g. rehearse at 1, 5, and 9 seconds) are better than equally-spaced intervals (like 5-5-5) if you're testing yourself, but neither Carpenter and DeLosh (linked above) nor Balota et al (2007) found much support for the "increasing intervals is best" argument. Indeed, Karpicke and Roediger (2007) found that increasing intervals helped short-term recall, but equally-spaced intervals helped long-term recall. However, they also found that this effect may be due to the equally-spaced schedule's lack of an immediate first test (the "1" in a 1-5-9 schedule): they showed that just delaying the first test by 5 seconds makes it harder, which helps long-term recall. So it seems like you should be able to get the best of all worlds by adopting an increasing-intervals schedule but also delaying the first review.

Another consideration is that this is the real world, not a 3-repetition study in the lab, and increasing intervals scales better. If you start practicing every 5 seconds, by the time you're at repetition 10 you'll be fed up, whereas if you go with 3-9-27-81 etc the intervals will quickly become so infrequent that you're not bothered.

But what should the first interval be? Peterson and Peterson (1959) showed that recall at 3 seconds is better than at 6 seconds, 6 better than 9, etc., and we want people to remember the fact at the first rehearsal, so we might as well make the first interval 3 seconds. But, as Karpicke and Roediger (linked above) mentioned, we don't want it to be too easy to remember at that first rehearsal; and indeed, the same Peterson et al (1963) found 8 seconds to be the best first interval.

So, how about 3-9-27-81 intervals plus a 5-second delay, i.e. 8-14-32-86, and so on? Whew! Well, whatever; increasing intervals with a delayed first test sounds like at least a pretty good way to go.
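For concreteness, here's that schedule as a tiny bit of code (mine, just restating the arithmetic above, not from any of the cited papers): the n-th rehearsal lands 5 + 3^n seconds after you hear the fact.

```python
# Rehearsal offsets (seconds after hearing the fact): a 3-9-27-81-... expanding
# series with the first test delayed by 5 seconds, i.e. 5 + 3**n for n = 1, 2, ...
def rehearsal_times(n_rehearsals=5, base=3, delay=5):
    return [delay + base ** n for n in range(1, n_rehearsals + 1)]

print(rehearsal_times())  # [8, 14, 32, 86, 248]
```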

Hmm... but if you're not in a lab, talking to an experimenter, how will you remember to test yourself at all these intervals? Hey, what if we could do that with a system using instant, unconscious, subtle microinteractions...
Stay tuned!

Saturday, October 20, 2012

Microinteractions: the book

Since I came across Daniel Ashbrook's thesis, I've been thinking about "microinteractions": "interactions with a device that take less than four seconds to initiate and complete." I'm interested in expanding the space of possible microinteractions that people use.

Now this fellow Dan Saffer is writing an O'Reilly book with the title Microinteractions. You can currently read a draft of the first chapter. Sounds like he's using a more general definition of the term, to include lots of non-mobile interactions: the "see translation" button on Facebook, the password entry form on Twitter, calendar apps including the duration and end time when you're scheduling a thing. I like it. It leaves me wondering: is this just "everything" now? Is "microinteractions" a synonym for "details", something that of course we should focus on but nobody's going to have big revolutionary ideas about? Or is this part of a big shift in thinking, now that we've got enough computing resources to actually make meaningful and positive microinteractions?

Incidentally, your potential microinteraction of the day: squeeze your phone in a particular way as you pull it out of your pocket, depending on how you want to use it.

Thursday, October 11, 2012

UIST 2012 highlights


... in my humble opinion, based on my particular interests:

Tactile/hands/fingers:

Watches are old news. How about having 6 watch screens in a bracelet shape around your wrist? It's clunky now, but who knows. They can detect pose and interact smoothly as needed.

A ring for input with 4 sensors. Clever: it recognizes IR reflection and skin deformation to tell whether you're clenching, bending, pushing, flipping, or rotating the ring. It detects position/rotation by melanin content (which varies around your finger). Wired currently (and a ring is so small it makes me wonder whether the wire could be removed).

Camera/LED on your finger for always-available input. The camera is 1x1mm, 148x148 pixels ("NanEye"). It reads "fake textures": ASCII characters printed into patterns. Downsides: wired, and ~1s latency on touches. Still, cool!

IR laser line and camera mounted on your wrist to detect your finger positions. Allows finger gesture prediction, 3D manipulation, etc. It's a bulky box now, but you could imagine it shrinking. Between 2 and 9 degrees of error, which is good enough for a lot of tasks.

Wear gloves so when you're looking at a wall of stuff you can find the exact thing. Sounds like it could be useful. (the trick is finding a task where the computer knows where the right thing is, but you still have to find it with your hands.)

An addition to phone calls. You can squeeze the phone, and then it vibrates on the receiver's ear. Four intensity levels, from light to sharp. This might sound a little silly, but:
- they tried it with 3 couples for a month, and they all sent at least one Pressage every single call.
- they wanted to have it available other times too, and as another channel of communication (e.g. light buzz = "I'm coming home")
Assuming they didn't just get 6 quirky people in their study, people will use this. It's super intuitive and quick, and adds a layer of richness to phone conversations. (the channel during non-phone conversation is mostly a nice bonus, and is kind of tricky for a lot of reasons.)

Other Things:

Pan-tilt projector and Kinect so things can project and interact all around the office. Quite a feat of engineering.

Our current displays have around 100ms latency on touch. You can see this yourself: draw a quick line with your finger; it lags an inch behind you. What if instead we had 1ms latency? I tried the demo, and it is much slicker. Feels like you're moving real objects. Remember when Google started targeting latency and all of a sudden Gmail became a viable non-painful application? Latency matters on tablets too.

You know image histograms? What if you could select pixels by brightness or by blueness instead of by location, and edit them all based on that? Looks fun.

Ever made an iOS app with Interface Builder? You specify constraints (like "this text box aligns with the center of this image") and they are automatically maintained through resizes etc. ConstraintJS looks like a way you could do this on the web, and for more than UI layouts. You can make asynchronous calls and display all their states without the pain! http://cjs.from.so

An IDE for developing camera- (i.e. Kinect-) based applications. If I made a camera app, I think I would want this.

Instead of expensive "clickers" which you might have used in university classes, just print everyone a QR-like card and have them hold it up. Cheap webcam takes a picture of the whole class.

What if the default unit on the web were a JSON object instead of a hyperlink? Sounds like web standards stuff, where it's only as awesome as the number of people who adopt it (and good luck with that), but it would be really useful in a ton of ways.

MTurk is great, but people cheat. Some tasks you can design around this, but some you can't. CrowdScape lets you visualize what people are doing as they do your task, easily weed out cheaters, use that as labeled input to bootstrap a machine learning system, and, more importantly, understand what work patterns lead to a good response. (Maybe tooting my school's horn a bit, but: it won a best paper award!)

Some folks made a braille tutor out of the pressure-enabled touchpad that we got for the Student Innovation Contest. Awesome. They won 2nd place; I'd have given them first. Our ambient stress sensor was neat but did not win. Nor did it deserve to; there was a lot of great stuff.

Cool posters:
MMG armband, Shumpei Yamakura (like EMG, but resilient to sweat; not sure how they'd compare)
MISO, David Fleer, Bielefeld; point and snap at your electronics
Tongue-finding with Kinect (i.e. for rehabilitation), Masato Miyauchi
Breathwear (band to detect when you're breathing), Kanit Wongsuphasawat

Cool coffeeshops in Cambridge: Crema in Harvard Square for cappuccino and Voltage by Kendall for a fine Guatemala Buena Esperanza roasted by Barismo.

Yes! Another good conference. People asked me multiple times "So how's UIST going?" Look, of course it's fun and full of cool people doing exciting stuff!


Monday, October 1, 2012

Thinking about unconscious/micro interactions

Trying to define a research plan or story or something that I can both work on and apply for fellowships about. Right now a lot of work that I'm interested in feels related to me, but it's hard to explain to other people, which means it's not well-defined enough. In this post, I'm working on that.

All our interactions with computers/smartphones now are both intentional and slow. By "intentional" I mean that you have to think about getting out your computer (or phone), and by "slow" I mean at least on the order of seconds, if not minutes. Right now, get your phone out and check the weather forecast, and count seconds while you do it. (Just tried it; 23 seconds.) I want to break both of those constraints.

Why?

When you remove "intentional", you move from the slow brain to the fast brain. You get North Paw, Lumoback; systems that train you on a physical level. Ambient systems which help you change behavior: DriveGainBreakawayUbiFit (some slides). The Reminder Bracelet: ambient notifications. I guess it feels like, and I'm not sure how best to put this, when you do things unconsciously/unintentionally, you can learn procedural things or adapt physical movement without increased cognitive load. "Human attention is the scarce resource", and these systems give you something for free.

When you remove "slow", a lot more things become possible.
Thad Starner mentions the "two-second rule" in wearable interaction (IEEE article): people optimize their systems so they can start taking notes within two seconds of realizing the need. Daniel Ashbrook, in his thesis, defined microinteractions as interactions under four seconds, start to finish. At Google, speed was a big emphasis, and they're right: if something takes longer, people will use it less. (wish I had a good citation for this.)

Interactions are also overt; everyone can tell when you're computing. Breaking that constraint lets you interact with your computer without people knowing, which seems useful. Enrico Costanza has worked on "intimate interfaces" (EMG armband, eye-q glasses). ("intimate interfaces" is overloaded to also mean "interfaces that allow intimacy", e.g. among remote couples or family; not talking about that here.) Is this good? Detractors might argue that if there are social cues against something, they're there for a reason. Nobody wants you to be computing when you're trying to talk with them. However, two things: first, it's a tool just like anything else and should be used wisely; second, people already do these things with their phones. They get buzzed, answer texts, silence their phones, etc. But I'm not sure that I want to get rid of "overt", or at least not necessarily.

How?

Watches:
Mounting things on the wrist can cut down the action time by up to 78%. You can use round touchscreen watches. Conveniently, the Pebble watch is now somewhere in the production stages, and the Metawatch is... shipping? InPulse has been around for a couple of years, but is a little clunky and doesn't have the battery life.
PinchWatch might not be what you think of as a watch (besides the display); a lot of the interaction is done by pressing fingers together. 
Nenya is a ring/watch system, getting really minimal. I like it. It reminds me of Abracadabra, due to the magnetic sensing, but now the ring is just a regular-looking ring you would wear anytime.

Pockets:
You can touch your phone in your pocket, even do a Palm-Graffiti-style input (PocketTouch). More simply, for some tasks you could just hit your phone (Whack Gestures). Sami Ronkainen et al investigated this first, though they hit more false positives. (they also found taps to be more natural/accepted than gestures.)

Speech:
This seems obvious, right? But it's not. First, talking while walking around is weird. (Ever play Bluetooth Or Crazy?) Second, it's not easy to get audio input into your phone in your pocket unless you're wearing a Bluetooth headset or something.
Sunil Vemuri et al's "Memory Prosthesis" was one approach, focusing on recording nearly-continuously and then searching; the search is less interesting to me, but the continuous recording is useful. Ben Wong experimented with dual-purpose speech: the user wouldn't give direct commands to a system; rather, the system would harvest information from things the user was saying. The Personal Audio Loop recorded your last 15 minutes and let you go back to search within it.

Monday, September 17, 2012

Quantified Self 2012: some cool things


Quantified Self is a movement of researchers, business folks, and hobbyists who are interested in understanding themselves more deeply through data. Usually we track some data, either numeric (number of steps I took each day, number of hours I sleep) or more abstract (dream journal, photos taken every 5 minutes by a camera around my neck). Usually the meetings are local; a couple dozen people get together and share whatever projects they're working on or questions they're interested in. Then there's an annual conference; this was the second one.

Stuff's less polished than at an academic or business-focused conference, so the things I took from it are a little more abstract than a list of papers. Here's some good stuff:

Instant Feedback Gadgets
Nancy Dougherty demoed an EMG smile sensor attached to a string of blinky LEDs. When she smiled, the lights blinked. She mentioned she'd post instructions on her blog at theengineeress.com soon.

Lumoback is a posture sensor and feedback device. It's a comfortable band you wear around your waist that buzzes you when you slouch. This is the sort of thing I love, because it feels like you'd start to get a visceral sense of when you're slouching and automatically correct it. After a while, you wouldn't have to think about it at all, your posture would just be better. www.lumoback.com

In the same vein, I chatted with Eric Boyd, inventor of some neat biofeedback devices like the North Paw, which buzzes on the side facing north until you eventually get a sense of where north is. What I didn't know before is that he's selling them. sensebridge.net

Butterfleye (butterfleyeproject.com) is a pulse meter for swimmers. I love the inventor's goals: frictionless and glance-able. 

Other Tools That Work

Quantified Mind (quantified-mind.com) is a platform for testing mental functions. Nick Winter talked about his experiments trying 11 different interventions to improve his cognitive skills. (creatine and piracetam+choline worked well. butter actually made him much worse. interesting, given the QS community's interest in butter as a mental enhancement.) Yoni Donner gave a talk about the platform and their goals. They've got tests for processing speed, executive function, attention, inhibition, context switching, working memory, learning, motor skill, and visual processing. I love this; the idea that there's a battery of tests out there that we can take anytime that might actually repeatably measure cognitive skills is exciting. The downside is that it's hard to convince people (even me) to take tests for 10 minutes. They may be working on something about this, but even if not, it's super cool stuff.

Project Life Slice is a short script that takes a screenshot and a photo of you every hour. So simple, but smart: gives you a sense of when you're working and what you're doing. wanderingstan.com/lifeslice
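As a flavor of how little code this kind of logger needs, here's my own minimal sketch of the screenshot half (not the actual Life Slice script; it assumes macOS, where the screencapture command exists, and skips the webcam photo):

```python
# Minimal hourly screenshot logger (illustrative sketch, not the real Life Slice).
import subprocess
import time
from datetime import datetime
from pathlib import Path

out_dir = Path.home() / "lifeslice"
out_dir.mkdir(exist_ok=True)

while True:
    stamp = datetime.now().strftime("%Y-%m-%d_%H-%M")
    # -x suppresses the shutter sound; screencapture is macOS-only
    subprocess.run(["screencapture", "-x", str(out_dir / f"screen_{stamp}.png")])
    time.sleep(60 * 60)
```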

Other Neat Ideas
Matthew Keener talked about different brain areas that help make up our concept of the "self". Sure, it's a simplification (we're dealing with brains, after all), but identifying about 9 areas that really matter (and how they matter) is very interesting to me. So what can we do with this? (Besides fMRI tests?)

Kevin Kelly (www.kk.org) became the first person I've ever met to have reported actually trying the Uberman sleep schedule (20 minute nap every 4 hours, no sleep at night) successfully. He said he did it for two months but eventually gave it up because you really couldn't miss even one nap or you'd crash hard.

Larry Smarr reported on a long series of self-tracking experiments to understand health problems. I've heard of the omega-3 to omega-6 ratio, but he also pointed out C-Reactive Protein (CRP) as an inflammation marker.

Robin Barooah talked about his relationship with coffee; stopping coffee made him more productive (though he felt less productive), but starting it again helped his mood. Indeed, coffee can ward off depression. What struck me about his talk is how this data not only gave him things to act on, but it helped him reflect on portions of his life and meant a lot to him. (oh, he's made a cool meditation tracking iPhone app too.)

Tuesday, September 11, 2012

Ubicomp 2012: some cool things

Ubicomp 2012 just ended, right here in Pittsburgh. I was a student volunteer and had a great time. As I guess is the norm at conferences, I'm a bit overwhelmed by all the information coming at me at once, so here's an attempt to sift through it a little bit by summarizing some talks/papers/posters that I liked. I'm in Week 3 of my PhD program, so this is not going to be super focused.

An Ultra-Low-Power Human Body Motion Sensor Using Static Electric Field Sensing by Gabe Cohn et al. Currently, to track motion, we use accelerometers. Their device lets you wear one sensor on your wrist that can detect when any part of your body moves, and its power usage is about 1-10% of an accelerometer's.

A Spark Of Activity: Exploring Informative Art As Visualization For Physical Activity by Chloe Fan, Jodi Forlizzi, and Anind Dey. So your Fitbit (pedometer) counts steps, right, but it only gives the data back to you in graph form. That's cool, but somehow it's more fun (and indeed, motivating) if it's in a little bit poppier form. She's developed some visualizations and found that people do prefer more abstract visualizations for display/fun purposes. (Graphs are still better if you're looking for concrete numbers.)

Lullaby: A Capture and Access System for Understanding the Sleep Environment by Matt Kay et al. Put this box in your room while you sleep, it'll tell you if there are any disturbances or anything that might be hurting your sleep. You can't tell what's wrong when you're sleeping. Difficult task, and well executed. Also, there are privacy concerns when there's a camera in your bedroom! (They address this.)

RubberBand: Augmenting Teachers' Awareness of Spatially Isolated Children on Kindergarten Field Trips by Hyukjae Jang et al. A system that'll alert a teacher if kids go wandering off. Solves a real problem, and does it in a clever way: clusters the kids based on proximity, then detects if any cluster is getting too far from the other kids, not from the teachers.

Providing eco-driving feedback to corporate car drivers: what impact does a smartphone application have on their fuel efficiency? by Johannes Tulusan, Thorsten Staake, and Elgar Fleisch. They gave drivers an iPhone app to mount on their dashboard that would give real-time feedback so they can learn to drive more efficiently. Cool not so much for the gas-saving effect (3%) as for the idea that they maintain their skills even after they take the phone app away. I'd love to know how they're driving a month or a year later.

SpiroSmart: Using a Microphone to Measure Lung Function on a Mobile Phone by Eric Larson, Mayank Goel, et al. Got lung problems? Need to measure lung function? Toss your $2000 home spirometer; use a smartphone.

MoodMeter: Counting Smiles in the Wild by Mohammed Hoque, Javier Hernandez, et al. They set up cameras and big screens around MIT that would detect who's smiling and who's not. Cool way to measure happiness of different places (in a sense), interesting interactions (everyone would try to make it see them as a smile or not), nice face recognition. What's this good for? As is, just seems neat, but it makes me think of a few other ideas. What if you had reminders to smile in your house? Making a smile causes you to feel happier... how far does this effect go, and is it worth trying to do it repeatedly? Or do we get into annoying cheesy "smile!" dystopias? Also, does the number or percent of smiles actually tell you anything useful about an area?

Enhancing the "Second Hand" Retail Experience with Digital Object Memories, by Martin de Jode et al. They put RFID tags and QR codes on stuff in Oxfam (second hand) shops in the UK, so you can hear the original owner's story behind something you buy. Cool. They had 50% more sales, but they couldn't attribute that to the tags. It makes the world more magical: you could find a secret story in any nook and cranny. Imagine you buy a second-hand fridge, find a QR code in the drawer, and the original owner left a message about a big party they had where they stored beer there. Or a recipe of their favorite thing to make, or a log of when people repaired this dang thing. Also, I'd love to know long-term how this affects sales; I'd imagine it'd make people more likely to buy and sell things. Increased use of second-hand shops is good for the environment, your wallet, etc.

Making Technology Homey: Finding Sources of Satisfaction and Meaning in Home Automation by Leila Takayama et al. They interviewed people who did home automation projects, from the basic to the extreme. People who made their own stuff had more connection to it than people who bought premade solutions. Sometimes other people think they're wasting time, but it brings them some meaning. One guy had a "Canyon Cam" on his vacation home so he could see this view he loved, one guy rigged up a system to take pictures of his cat when it got scared off the counter, and one guy would turn off the lights in the whole house as a subtle signal to his daughter to go to sleep. These are surprisingly cool and surprisingly meaningful, and I guess the crucial insight is that people enjoyed them the most when they were connecting with their home, not controlling it.

Augmenting Gesture Recognition with Erlang-Cox Models To Identify Neurological Disorders in Premature Babies by Mingming Fan et al. Put accelerometers on babies, tell if they're doing Cramped Synchronized General Movements, which correlate with Cerebral Palsy. Instead of watching an hour of a baby moving, doctors can just watch 10% as much video to detect CSGMs for sure. Useful in the medical field (clinical trial going on now) and uses a cool variant of hidden Markov models.

Identifying Emotions Expressed by Mobile Users through 2D Surface and 3D Motion Gestures by Celine Coutrix and Nadine Mandran. This is not about general emotion detection, but rather about intentional actions people might use while expressing certain emotions. Neat study: when triggered once a day or on demand, users would do whatever gesture expresses what they're feeling, then rate how they felt on a PAD model (pleasure, arousal, dominance). This is interesting when creating apps that take emotional state into account (e.g. "shake the phone angrily to restart if it freezes"; I don't know whether this would be good or bad, but you get the idea.)

What Next, Ubicomp? Celebrating an Intellectual Disappearing Act by Gregory Abowd. Okay, everyone was talking about this the whole conference, so I've got to mention it. His point, as I understand it, is: "Ubicomp" used to just mean anything with small computers. Now that field is so big, the Ubicomp conference can't continue to be just anything with small computers. Imagine having a "Personal Computing" conference nowadays; it's too broad. Maybe new conferences need to form for subfields or something. Discuss.

Demos, Posters, etc.

Touché by Ivan Poupyrev, Chris Harrison, Munehiko Sato. I feel like I've heard about this before, but it's still cool. Make anything touch- and gesture-sensitive.

SenSprout: Inkjet-Printed Soil Moisture and Leaf Wetness Sensor, by Yoshihiro Kawahara et al. Print out some conductive ink and it, well, senses soil moisture and leaf wetness.

Design of a Context-Aware Signal Glove for Bicycle and Motorcycle Riders, by Anthony Carton. Of course I want this.

uSmell: A Gas Sensor System to Classify Odors in Natural, Uncontrolled Environments by Sen Hirano, Khai Truong, and Gillian Hayes. I've never seen a smell sensor before. This allows lots of possibilities.

Big talks:

The keynote by Steve Cousins, CEO of Willow Garage, was cool: a "state of personal robotics". We saw how finding a beer is easy but opening the fridge is hard, folding a towel is easy but finding the corners is hard, letting go of things at the right times is hard, etc.

The talk before the conference by Jun Rekimoto and colleagues was cool too: FlyingBuddy2, a drone you can control with your mind; glass that you can turn transparent or opaque; a drawer system that can tell what's in each drawer; an armband that programmatically activates your fingers; a smile sensor you have to satisfy before you can open your fridge; a really cute potted plant on wheels that drives around; a fork that makes different noises based on conductivity when you touch food to your mouth. Some of these things I'm not going to argue are super useful, but they're all cool.

Wednesday, July 4, 2012

About Google Glass

I'm starting to think about research again. Not intensely (still traveling!) but that doesn't stop me from collecting articles.

Doubters raise familiar arguments: "It'll be alienating!" "You'll look goofy!" etc. To this, I agree with this post, which argues that alienation from technology has already happened, that Glass will be less alienating, and that looking goofy hasn't stopped technology before. (Or rather, looking goofy has, but unconventional and new doesn't necessarily imply goofy.)

Another, more interesting argument is: "There aren't any useful applications." Like Microsoft Surface. I guess that remains to be seen, but I for one would be interested in exploring the application space. Here are a few directions off the top of my head:
- add in face recognition and tell you who you see and when. (tell you, not Google! yes, privacy is important.)
- is there a microphone? tell you something about what you say.
- peripheral displays for things you care about peripherally. The time, if you're into that. Progressive levels of alerts; stop you from pulling out your cell phone for each call.
- stuff for improving social interactions! how long you've been making eye contact! hell, an eye contact coach!
- help you take breaks from work.
- tell you about how much your gaze wavers as some kind of proxy for focus.
- let you review a video of your day so you know how long stuff actually takes.

Sunday, April 8, 2012

I'm going to the HCII at CMU.

Yes indeed! I've decided that is the best place for me. Excited about this. See you in Pittsburgh this fall!

Tuesday, March 27, 2012

You mean I have to choose a grad school?

I figured I'd get into one school and it'd be an easy decision. Somehow, they let me in to four! UW CSE, CMU HCII, Toronto, and Georgia Tech. So now it's decision time. I'm thinking primarily about the first two, but I'll let you know in a couple weeks what I come up with.

Overall, I'm excited! Every school I visited, I met the coolest group of researchers doing all sorts of amazing things. Wherever I end up, I'm looking forward to seeing you all at conferences and all the rest.