Analysis of Studies at MobileHCI 2010

Yesterday I started to prepare my MobileHCI tutorial. It is basically about doing studies with a large number of subjects (e.g. >1,000), which made me wonder how many subjects participate in the average mobile HCI study. But first of all, what is MobileHCI anyway?

“MobileHCI provides a forum for academics and practitioners to discuss the challenges, potential solutions and innovations towards effective interaction with mobile systems and services. The conferences cover the analysis, design, evaluation and application of human-computer interaction techniques and approaches for all mobile computing devices, software and services.” [1]

Collected Data

Using DBLP, I fetched all short and long papers presented at MobileHCI 2010. 20 short papers and 23 long papers were accepted, with an acceptance rate of about 20% [2].

For each paper I determined the total number of subjects that took part in the conducted studies. In fact, only one paper comes without a study involving human subjects. In addition, I tried to determine the number of male and female subjects as well as their age. Unfortunately, not all papers report participants’ age and gender. [3], for example, describes a study with 40 participants, but I couldn’t find any information about their age or gender. Other papers report participants’ age but not their gender (e.g. [4]). The way subjects’ age is reported is also very inconsistent across papers: [5,6], for example, give a range (e.g. “18 to 65 years”) while other papers provide more detail (e.g. [7] reports that “Twenty university students (10 female and 10 male) aged between 23 and 34 (M=27.35, SD=3.10) participated in the study.”). I estimated or computed unclear details when I felt a paper provided enough information to do so.

Number of subjects

Overall, the average number of subjects per paper is M=21.49 (SD=19.99). For short papers the average number of subjects is M=23.20 (SD=24.95) and for long papers it is M=20.00 (SD=14.83). The chart below shows the histogram of the distribution.


Subjects’ gender

As described above, it wasn’t always easy (or possible) to determine the subjects’ gender. Based on the available data, 474 males, 328 females, and 106 people of unknown gender participated in the studies. That makes M=13.17 males (SD=11.59) and M=9.11 females (SD=10.63) per paper that reports gender. The chart below shows the subjects’ gender for short and long papers. The error bars show one standard error.

Out of curiosity, I tested whether the number of male and female participants differs significantly. A simple paired t-test (probably not the best tool for such a post-hoc test) shows that significantly more males than females participated in the studies (p<.001, d=0.37). The difference is also significant for long papers (p<.01, d=0.57) but not for short papers (p=0.13, d=0.14).
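For anyone who wants to run this kind of check themselves, the sketch below shows one way to do it in Java with Apache Commons Math. The per-paper counts here are made up (the real numbers sit in my spreadsheet), and Cohen’s d is computed for paired samples as the mean difference divided by the standard deviation of the differences.

import org.apache.commons.math3.stat.StatUtils;
import org.apache.commons.math3.stat.inference.TTest;

public class GenderTTest {
    public static void main(String[] args) {
        // Hypothetical per-paper counts of male and female participants.
        double[] males   = {12, 8, 20, 16, 10, 24, 6, 14};
        double[] females = { 8, 6, 12, 10,  4, 18, 2, 10};

        // Two-sided p-value of a paired t-test.
        double p = new TTest().pairedTTest(males, females);

        // Cohen's d for paired samples: mean of the differences divided by
        // the standard deviation of the differences.
        double[] diff = new double[males.length];
        for (int i = 0; i < diff.length; i++) {
            diff[i] = males[i] - females[i];
        }
        double d = StatUtils.mean(diff) / Math.sqrt(StatUtils.variance(diff));

        System.out.printf("p = %.4f, d = %.2f%n", p, d);
    }
}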

So what?

From the analysis I learned that a number of papers describe their participants only briefly, and not all report participants’ age and gender. Large-scale studies are obviously not common in the community: half the papers conducted studies with 20 or fewer participants, and only three papers involved more than 40 participants. With roughly 45% more male than female participants (474 vs. 328), the sample is clearly biased towards males. I must admit, however, that a large and perfectly representative sample is not always necessary. [8] is a nice example of an ethnographic study, and I guess no one would complain about its small, biased sample. I might talk about the different kinds of studies that are conducted next time.

References

[1] The International Conference Series on Human Computer Interaction with Mobile Devices and Services website
[2] MobileHCI 2010 notification of acceptance email.
[3] Jarmo Kauko and Jonna Häkkilä: Shared-screen social gaming with portable devices. Proc. MobileHCI, 2010.
[4] Ming Ki Chong, Gary Marsden, and Hans Gellersen: GesturePIN: using discrete gestures for associating mobile devices. Proc. MobileHCI, 2010.
[5] Simon Robinson, Matt Jones, Parisa Eslambolchilar, Roderick Murray-Smith, and Mads Lindborg: “I did it my way”: moving away from the tyranny of turn-by-turn pedestrian navigation. Proc. MobileHCI, 2010.
[6] Yolanda Vazquez-Alvarez and Stephen A. Brewster: Designing spatial audio interfaces to support multiple audio streams. Proc. MobileHCI, 2010.
[7] Alessandro Mulloni, Andreas Dünser, and Dieter Schmalstieg: Zooming interfaces for augmented reality browsers. Proc. MobileHCI, 2010.
[8] Marianne Graves Petersen, Aviaja Borup Lynggaard, Peter Gall Krogh, and Ida Wentzel Winther: Tactics for homing in mobile life: a fieldwalk study of extremely mobile people. Proc. MobileHCI, 2010.

When do Android users install games and why should developers care?

When an Android app is published or updated it appears in the “just in” list of most recent apps. Potential users browse this list, and submitting a new app can result in some thousand initial installations – even if only a few users install it afterwards. To maximize the number of initial installations it is therefore important to submit an app when most potential users are active but the fewest apps are being deployed by other developers.

I already looked at the times games are published in the Android Market. To investigate when people install games we analyzed data from the game Hit It!, which we developed to collect information about touch behaviour (see our MobileHCI paper for more details). We first published Hit It! in the Android Market on October 31, 2010. By April 8, 2011 the game had been installed 195,988 times according to the Android Developer Console. The first version that records when the game is started and played was published as an update on December 18, 2010. We received data about starting times from 164,161 installations but only use the data received after December 20, which comes from 157,438 installations.

For each day of the week and for each hour of the day we computed how many installations were started for the first time. Looking at the charts below, the game most often gets started for the first time on Saturdays and Sundays. The most active hours of the day are shortly before midnight GMT. The results are based on a large number of installations and I assume that other casual games have a similar profile. Note that we do not measure when the game is installed but when it is started for the first time; we assume, however, that the first start of the game strongly correlates with the time it is installed.
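Computing these charts is straightforward once the timestamps of the first starts are available. A minimal sketch (the timestamp list is a placeholder; our actual logging format differs):

import java.util.Calendar;
import java.util.Collections;
import java.util.List;
import java.util.TimeZone;

public class FirstStartHistogram {
    public static void main(String[] args) {
        // Placeholder: the timestamps (milliseconds since epoch) come from our logs.
        List<Long> firstStarts = Collections.emptyList();

        int[] byWeekday = new int[7];  // Sunday..Saturday
        int[] byHour = new int[24];    // 0..23 (GMT)
        Calendar cal = Calendar.getInstance(TimeZone.getTimeZone("GMT"));

        for (long t : firstStarts) {
            cal.setTimeInMillis(t);
            byWeekday[cal.get(Calendar.DAY_OF_WEEK) - 1]++;  // DAY_OF_WEEK starts at 1
            byHour[cal.get(Calendar.HOUR_OF_DAY)]++;
        }
        // byWeekday and byHour now contain the histograms plotted in the charts.
    }
}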



The data collected from Hit It! can be combined with the statistics from our observation of the Android Market. We simply divide the number of started games by the number of deployed apps. The average over the day is shown in the diagram below. The peak is between 23:00 and 5:00 GMT, which means that about three times more games per deployed app get started at this time than around 13:00. Taking the day of the week into account as well, one might expect four times more installations from being listed as a most recent app on Sunday evening than on Tuesday noon (all times GMT). As the absolute number of players is higher in the evening than in the morning, we conclude that the best time to deploy a game in the Android Market is Sunday evening GMT.
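For reference, the normalization is just an element-wise division of the two hourly histograms, assuming both cover the same observation period (a sketch):

// startsPerHour[h]: first starts of Hit It! in hour h (GMT)
// deploysPerHour[h]: games deployed in the Market in hour h (GMT)
double[] startsPerDeploy = new double[24];
for (int h = 0; h < 24; h++) {
    startsPerDeploy[h] = startsPerHour[h] / (double) deploysPerHour[h];
}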

We will also publish our results in a poster that has been accepted at MobileHCI 2011.

When do games get published in the Android Market?

The Android Market is a crowded marketplace. To maximize the initial number of installations, the timing of deploying your app can be crucial. When an app is published or updated it appears in the “just in” list of most recent apps. Thus, you probably don’t want to release a game when all the other developers do so as well. To find the best point in time to submit a game to the Android Market it is important to know when other developers submit new games or update existing ones.

Monitoring the Android Market

We implemented a script that monitors new and updated apps in the Android Market using the android-market-api. The script retrieves the 10 newest or most recently updated apps from each of the Market’s eight game categories once every 10 minutes. Starting on March 11, 2011 we monitored the Market for two months. As the script needs to provide a locale and an Android version, we could only record apps that are available to users with the locale en_US and Android 2.1.
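The monitoring loop itself is simple; the sketch below shows the general structure. The actual android-market-api request is only indicated by a placeholder (fetchNewest), and the category names are illustrative:

import java.util.Collections;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class MarketMonitor {
    // Illustrative names for the Market's eight game categories.
    private static final String[] GAME_CATEGORIES = {
        "ARCADE", "BRAIN", "CARDS", "CASUAL" /* ... */
    };

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                long now = System.currentTimeMillis();
                for (String category : GAME_CATEGORIES) {
                    // Placeholder for the android-market-api request (locale en_US,
                    // Android 2.1) that returns the 10 newest or updated apps.
                    for (String app : fetchNewest(category, 10)) {
                        System.out.println(now + "\t" + category + "\t" + app);
                    }
                }
            }
        }, 0, 10, TimeUnit.MINUTES);
    }

    private static List<String> fetchNewest(String category, int count) {
        return Collections.emptyList();  // stands in for the actual Market query
    }
}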

When games get deployed

To determine when games get published we averaged over the eight game categories and the time we monitored the Market. The graph below shows the average number of deployed games per hour for each weekday. On an average Friday, 25% more games get deployed than on a Monday.


The average number of games published in the Android Market per day (relative to GMT). Error bars show standard error.

We looked at the time of day games get deployed in more detail. The graph below shows the distribution over the day of newly deployed apps in the Market. The peak is around 16:00 GMT. At this time more than twice as many games get published as during less busy hours. Less frequented hours are around 6:00 in the morning (GMT) and after 22:00 in the evening (GMT).


The average number of games published in the Android Market per hour of the day (relative to GMT). Error bars show standard error.

Our results suggest that the most popular day to submit a game is Friday and the least popular day is Monday. Furthermore, we learned that most games get deployed between 12:00 and 17:00 (GMT), while the less active hours are after 18:00 and before 11:00. One should probably try to avoid the busy hours.

Knowing when other developers deploy their games is surely important, but knowing when Android users install games is at least equally important. One should look for a time when a lot of users are looking for new games but only few developers are there to satisfy their needs.

Type It! – an Android game that challenges your texting abilities

Type It! is a game for the Android platform that is all about speed and quick fingers. It challenges (and hopefully improves) your texting abilities. You have to touch and type as fast as you can to see if you can beat all levels. The player’s task is to enter the words that appear as fast as possible. The faster you are, the more points you get. Players might improve their dexterity by trying to top the high score.

This game is part of our research on touch performance on mobile devices and also part of my work as a PhD student. While users play the game we measure where they hit the screen and how fast they are. By combining this information with the position of the keyboard we can estimate how easy each key is to touch. Based on this data we hope to be able to predict users’ performance with different keys and character sequences. We plan to derive a corresponding model, which could possibly be used to improve the virtual keyboards of current smartphones.
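To give an idea of what the game records, here is a stripped-down sketch of the touch logging (the names are made up, and the real game logs more context, e.g. the targeted key and the current keyboard layout):

import android.view.MotionEvent;
import android.view.View;

public class TouchLogger implements View.OnTouchListener {

    public boolean onTouch(View keyboardView, MotionEvent event) {
        if (event.getAction() == MotionEvent.ACTION_DOWN) {
            // Where and when the screen was hit; combined with the key positions
            // this gives the offset to the intended key and the typing speed.
            log(event.getX(), event.getY(), event.getEventTime());
        }
        return false;  // let the keyboard handle the touch as usual
    }

    private void log(float x, float y, long eventTime) {
        // Placeholder: the real game buffers the values and uploads them anonymously.
    }
}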

We hope that we can collect data from thousands of players. That would enable us to derive information that is valid not only for a small number of people but for every user. We are, however, not interested in your contact list, browsing history, or phone number. Okay – if you are good looking I might be interested in your phone number, but I don’t want to collect such data automatically ;). In general, we don’t want or need data that enables identifying individuals. Thus, we do not collect these things or any other personal information.

Type It! is available for Android 2.1 and above. You can have a look at users’ comments and the game’s description on AppBrain or install it directly on your Android phone from the Market.

Evaluation of our HCI lecture

We conducted an evaluation of our lecture and lab on Human-Computer Interaction. The aim of the study is to improve the lecture in the future. We collected qualitative feedback from nine students using a questionnaire. Overall, the participants appreciated the practical projects and the lecture itself. They criticized the weekly presentations about the ongoing practical project as well as the room. Participants recommended a larger room and project presentations only every second week.

Motivation and Background

This year we gave the lecture and lab for the third time. Like most lecturers we were never trained in lecturing and base our work only on assumptions and personal experience. While we appreciate the overall results of the lecture and the practical part, until now we did not have tangible data about the students’ opinions.

Our HCI lecture is split into two parts. We give lectures about the usual topics of an HCI course along the user-centred design process; e.g., we teach how to collect requirements, different kinds of prototypes, usability evaluations, and how to design and interpret experiments. The practical part runs in parallel to the lecture. At the beginning of the semester, PhD students from our group present a number of topics. The students pick one topic and form groups of 2-4 students. During the term the students work on these projects along the user-centred design process and present their progress in weekly presentations. At the end of the semester the students have to present their project to our group and interested guests in a final presentation and take an oral exam.

Design

As the aim of the study is to improve the lecture in the future we focussed on qualitative feedback. We compiled a questionnaire with the following four questions (we actually asked the questions in German):

  • What did you like about the course?
  • What did you not like about the course?
  • How would you change the course?
  • Do you have additional comments?

We did not ask for demographic information or similar details in order to keep the results anonymous.

We distributed the questionnaire to all students of the course who were present (about 20) during the last lecture and collected the forms afterwards. While we asked the students to fill in the questionnaire, we also told them that they were free not to.

Results

In total we collected 9 questionnaires, resulting in a return rate of about 50%. Most participants answered the first three questions, but no one gave additional comments. After collecting the questionnaires we sorted the data by question, clustered the statements by topic, and translated them to English. In the following we provide an overview of the results, grouped by the first three questions.

What did they like about the course?

Four participants wrote that they liked the lecture. They stated that it is a “good lecture”, appreciated the “very good content of the lecture”, and said that the “content is well conveyed”. Four participants also liked the hands-on work, explicitly mentioning “the large amount of practical work”, the “practical work”, and the “practical experience”. Two students highlighted the structure of the lecture and two others mentioned “new technologies” and the diversity of the projects. One participant highlighted the support by the supervisors when working on the practical project.

What did they not like about the lecture?

Five participants criticized the weekly presentations of the projects’ progress. They stated that there had been “too many presentations” and that “5 minutes is too short for the presentations”, even though we scheduled 10 minutes for each presentation plus further questions and comments. Three participants commented on the lecture room, criticizing that it is too small. One of them also criticized the low quality of the projector. One participant criticized that the lecture is not always relevant for the practical project, and another one the synchronization between the lecture and the practical work. One participant mentioned that the lecturers did not always upload their slides to the learning management system on time.

How would they change the lecture?

Participants recommended changing four aspects of the course. Four participants recommended fewer presentations of the ongoing work (e.g. “presentations only every second week”) or more interaction between the groups. Three participants recommended a better room; in particular, they requested a room with ventilation or simply a bigger room. One of these participants also recommended a larger projector. For the lecture, one participant requested a short description for each lecture and another one recommended making the lecture “even more interactive”. One participant stated that “the practical part (projects) could eventually be reduced”.

Limitations

We collected feedback from only nine out of about 20 students. Thus, we only got results from self-selected participants, which could have resulted in a bias towards positive feedback. Participants also had only limited time to fill in the questionnaire, so we might have collected only superficial feedback.

Conclusions

Overall, the participants appreciated the lecture and in particular the practical work. Participants did not like the weekly presentations about the ongoing practical work and recommended reducing the number of presentations, for example to one presentation every second week. Participants also did not like the technical resources of the course, in particular the room and the projector, and recommended a larger, ventilated room.

While the return rate is only around 50% and the results might be biased by self-selection, we assume that the results can provide insights for future courses. E.g., we will try to organize a bigger room with a built-in projector. One particular aspect that caught our attention is the critique of the weekly project presentations. We originally structured the course with fewer students in mind, and the current structure might not scale well with an increasing number of students. We will consider reducing the number of project presentations as requested by the students. This might also help to scale the lecture to a slightly larger group of students.

Hit It! – a fast-paced Android game

Hit It! is a game for the Android platform that is all about speed and quick fingers. You have to touch and move as fast as you can to see if you can beat all levels. The player’s task is simply to touch each appearing circle as fast as possible. The faster you are, the more points you get. Players might improve their dexterity by trying to top the high score.

This game is part of our research on touch performance on mobile devices and also part of my work as a PhD student. While users play the game we measure where they hit the screen and how fast they are. By combining this information with the position and size of the circles we can estimate how easy each screen position is to touch. Based on this data we hope to be able to predict users’ performance with different button sizes and positions. We plan to derive a corresponding model, which could possibly be used to improve the user interfaces of current smartphones.

We hope that we can collect data from thousands of players. That would enable us to derive information that is valid not only for a small number of people but for every user. We are, however, not interested in your contact list, browsing history, or phone number. Okay – if you are good looking I might be interested in your phone number, but I don’t want to collect such data automatically ;). In general, we don’t want or need data that enables identifying individuals. Thus, we do not collect these things or any other personal information.

Hit It! is available for Android 1.6 and above. You can have a look at users’ comments and the game’s description on AppBrain or install it directly on your Android phone from the Market.

Sensor-based Augmented Reality made simple

I did some content-based augmented reality for Android, and my former student developed a sensor-based augmented reality app. Thus, I thought I should be able to do the sensor-based stuff as well. I fiddled around a lot to make it work with the Canvas, but finally I realized that I’m just not able to do it that way and switched to OpenGL. I attached an Eclipse project with the source code.

Even though I couldn’t find a good example or tutorial it was pretty easy, and definitely much easier than going the Canvas way. Basically, you have to use the SensorManager to register for accelerometer and magnetometer (that’s the compass) events. You find the code in the class PhoneOrientation. Accelerometer and compass data can be combined into a rotation matrix using the code below. I also had to “remap the coordinate system” because the example uses a portrait mode.

// acceleration and orientation hold the latest accelerometer and
// magnetometer readings copied from the sensor events
SensorManager.getRotationMatrix(newMat, null, acceleration, orientation);
// remap the coordinate system as mentioned above
SensorManager.remapCoordinateSystem(newMat,
		SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X,
		newMat);
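The registration itself is not shown above; it looks roughly like this (a minimal sketch, assuming it runs inside an Activity and that listener is a SensorEventListener which copies event.values into the acceleration and orientation arrays used above; the actual PhoneOrientation class in the project may differ):

SensorManager sensorManager =
		(SensorManager) getSystemService(Context.SENSOR_SERVICE);
Sensor accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
Sensor magnetometer = sensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD);
sensorManager.registerListener(listener, accelerometer, SensorManager.SENSOR_DELAY_GAME);
sensorManager.registerListener(listener, magnetometer, SensorManager.SENSOR_DELAY_GAME);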

newMat is a 4×4 matrix stored as a float array. This matrix must be passed to the OpenGL rendering pipeline (the renderer below receives it as floatMat) and loaded by simply using:

gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadMatrixf(floatMat, 0);
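These calls would typically sit in the renderer’s onDrawFrame(); a simplified sketch (assuming floatMat always holds the latest matrix produced by the sensor listener):

public void onDrawFrame(GL10 gl) {
	gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
	gl.glMatrixMode(GL10.GL_MODELVIEW);
	gl.glLoadMatrixf(floatMat, 0);  // orient the scene according to the sensors
	// ... draw the camera background and the virtual content ...
}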

That’s basically it. As I never learned how to use OpenGL, in particular how to load textures, the project is based on an earlier example that renders the camera image on a cube. The project also uses an Android 2.2 API (accessed via reflection) to grab camera images in a fast way; thanks to the reflection it still runs on Android 2.1. Check out the Eclipse project if you are interested, or install the demo on your Android 2.1 device (on cyrket/in the Market).

Being the off-screen king

Recently Torben and I spammed the “International Conference on Human-Computer Interaction with Mobile Devices and Services” (better known as MobileHCI) with two papers and a poster about off-screen visualizations. Off-screen visualizations try to reduce the impact of the inherent size restrictions of mobile devices’ displays. The idea is that the display is just a window into a larger space, and the off-screen visualization shows where the user should look for objects located in this larger space.

The title of the first paper is Visualization of Off-Screen Objects in Mobile Augmented Reality. It deals with displaying points of interest using sensor-based mobile augmented reality. We compare the common mini-map, which provides a 2D overview of nearby objects, with the less common visualization of nearby objects using arrows that point at the objects. The images below show both visualizations side by side.

Off-screen visualizations for handheld augmented reality.

To compare the mini-map with the arrows we conducted a small user study in the city centre. We randomly asked passersby to participate in our study (big thanks to my student Manuel, who attracted 90% of our female participants). We ended up with 26 people testing both visualizations. Probably because most participants were not tech-savvy, the collected data is heavily affected by noise. From the results (see the paper for more details) we still conclude that our arrows outperform the mini-map. Even though the study has some flaws, I’m quite sure that our results are valid. However, we only tested a very small number of objects and I’m pretty sure that one would get different results for a larger number of objects. I would really like to see a study that analyzes a larger number of objects and additional visualizations.

In the paper Evaluation of an Off-Screen Visualization for Magic Lens and Dynamic Peephole Interfaces I compared a dynamic peephole interface with a Magic Lens, each using an arrow-based off-screen visualization (or no off-screen visualization). The idea of dynamic peephole interfaces is that the mobile phone’s display is a window into a virtual surface (e.g. a digital map) that you explore by physically moving the phone around. The Magic Lens is very similar, with the important difference that you explore a physical surface (e.g. a paper map) that is augmented with additional information. The concept of the Magic Lens is sketched in the figure below.

Conceptual sketch of using a Magic Lens to interact with a paper map.

We could not measure a difference between the Magic Lens and the dynamic peephole interface. However, we did measure a clear difference between using an off-screen visualization and not using one. I assume that the off-screen visualization has a much larger impact on the user experience than the choice between a Magic Lens and a dynamic peephole. As the Magic Lens relies on a physical surface, I doubt that it has a relevant value (for the simple tasks we tested, of course).

As some people asked me why I use arrows and not those fancy Halos or Wedges (actually I wonder if anyone has ever fully implemented Wedge for an interactive application), I thought it might be nice to be able to cite my own paper. Thus, I decided to compare some off-screen visualization techniques for digital maps (e.g. Google Maps) on mobile phones. As it would have been a bit boring to just repeat the same study conducted by Burigat and colleagues, I decided to let users interact with the map (instead of using a static prototype). To make it a bit more interesting (and because I’m lazy) we developed a prototype and published it in the Android Market. We collected data from users who installed the app and completed an interactive tutorial. The results indicate that arrows are just better than Halos. However, our methodology is flawed and I assume that we haven’t measured what we intended to measure. You can test the application on your Android phone or just have a look at the poster.

Screenshots of our application in the Android Market

I’m a bit afraid that the two papers will end up in the same session. It might be annoying for the audience to see two presentations with the same motivation and similar related work.

Statistics from inside the Android Market

Some thousand users have installed the applications I published in the Android Market. I was curious where these users come from and which devices they actually use. Thus, I integrated some logging into two of my apps. Hit the Rabbit is a simple game in which the player should hit as many rabbits as possible with their finger; to find the rabbits one must pan the background around. The other one is the Map Explorer, a simple location-based application (localized to English and German) that allows users to search for POIs and retrieve some basic information about them (using either Qype or Yahoo Local).
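The logging itself is tiny. It boils down to something like the sketch below (the class name is made up, and how the values get uploaded is omitted):

import android.os.Build;
import java.util.Locale;

public class UsageInfo {
    // The few anonymous values behind the statistics discussed below.
    public static String collect() {
        return "device=" + Build.DEVICE             // codename, e.g. "passion"
                + ";model=" + Build.MODEL
                + ";android=" + Build.VERSION.RELEASE
                + ";locale=" + Locale.getDefault(); // e.g. "en_US"
    }
}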

Probably the most interesting thing I learned from the statistics is that the vast majority of users are from America and use an English localisation. Even having a German version doesn’t make a big difference.

Another interesting aspect is the large number of different devices people use. In total I collected data from more than 35 different devices. There are some devices I had never heard of (“zeppelin”???). Surprising to me is that the Nexus One (codename “passion”) seems to be quite unpopular.

The last thing wasn’t really surprising: most users still use Android 1.5. Almost no one uses Android 2.0 (or 2.0.1), and 1.6 will probably die out soon as well.

I uploaded the compiled statistics for both applications. The data was collected over roughly one month, mostly in April 2010. The statistics are limited by the number of installations (only approximately 4,000) and because only one of the applications has been localized (and only to a single additional language – German).

Hit the Rabbit!

Fight the dreadful rabbits and crush them with your holy thumb. The shooting season begins with my first game in the Android Market. Your job is to hit as many rabbits as possible. Pan the background around to find these evil creatures and hit them with a lusty touch. You can show your skills in different levels that force you to hurry up. The time trial mode adds even more variety, letting you fight against the clock.

You can download the latest version from the Android Market, and don’t forget to give it a proper rating if you like it. Please leave a comment if you have criticism or recommendations, in particular ideas to improve the game. It’s my first game (ever), so please be gentle with me. You find the game in the Market, and you can also have a look at the description and screenshots.