
Why Android is so Awesome – for Prototypes and Research

Smartphones are currently becoming the most pervasive computing devices of all time; they are even becoming the best-selling consumer electronics devices overall. Obviously, there is a huge amount of research that investigates how people use their phones and how we can improve their experience. When doing research with smartphones, an important practical question is which platform to choose. Basically, there are three major platforms left alive: iOS on the iPhone, Windows Phone, and Android.

Developing

Developing for Android is nice, but developing for the other platforms isn’t worse either. While Java might not be the most innovative language, it easily beats iOS’s Objective-C (garbage collection, anyone?) and is almost on par with the .NET languages (and you could also use one of the other JVM languages). What makes Java compelling is the huge number of available examples, but what really sticks out (for us) is that all our computer science students have to learn Java in their first semester. This means that every single somewhat capable student knows how to program in Java, which is also used throughout their university courses. It also comes in handy (actually, this alone can be a real show stopper) that unlike developing for iOS you don’t need a Mac, and unlike Windows Phone you don’t need Windows. Linux, Windows, MacOS – they can all be used to develop for Android (and those who like the pain can also use BSD).

Openness

Android is free and open. Sure, it is probably free as in beer rather than free as in speech, but you can still look into the code. Being able to look into your OS’s source code might seem like an academic detail… One of my former students had to look into Android’s sources to understand the memory management while developing commercial apps. Having the source code enabled us to understand the Android keyboard and reuse it in our studies. We even patched Android to develop handheld Augmented Reality prototypes. All this is only possible if you have the source code available. For these examples, it might not have been strictly necessary to look into the code on other platforms. Still, at one point or another you might want to dig down to the hardware level, and you are screwed if it isn’t Android that you have to dig through.

Deploying

While developing prototypes and conducting lab studies is nice, at one point or another you might want to deploy your shiny research prototype. It might be for research, it might be for fun, or just for the money. Deploying your app in the Android Market takes just seconds (if you already have those screenshots and descriptions readily available). There is no approval process. No two weeks of waiting until Apple decides that your buggy prototype is – well, just too buggy a prototype. All you need is $25 and a credit card (and a Google account and a soul to sell).

Market share

Windows Phone will certainly increase its market share by some 100% soon – which isn’t difficult if you start from 0.5%. Android, however, has overtaken all other platforms, including iOS and BlackBerry. The biggest smartphone manufacturer is Samsung with its Android phones: it sells more smartphones than Nokia and more smartphones than Apple. And Samsung is not the only company with an Android phone in its portfolio.

Fragmentation

Fragmentation is horrible! I developed for Windows Mobile and for Java ME; even simple applications had to be tested on a range of devices just to hope they would work. Things aren’t too bad for Android (as long as you don’t use the camera, some sensors, recent APIs, or other unimportant things…). Fragmentation can even be great for the average mobile HCI researcher. Need a device with a big screen or a small display? Fast processor, long battery life, TV out, or NFC? There is a device for that! There are very powerful and expensive devices (the ones you will use to test your awesome interface) but also very cheap ones for less than 80€ (that you can give to your nasty students).

Usability, UX, …

Android offers the best usability of all platforms ever – well, probably not. Would I buy an Android phone for my mother? If money doesn’t matter, I would certainly prefer an iPhone. What would I recommend to my coolish stepbrother? Certainly a Windows Phone to impress the girls. But what would I recommend to my students? There is nothing but Android!

Hit It! – a fast-paced Android game

Hit It! is a game for the Android platform that is all about speed and quick fingers. You have to touch and move as fast as you can to see if you can beat all levels. The player’s task is simply to touch each appearing circle as fast as possible: the faster they are, the more points they get. Players might even improve their dexterity by trying to become the fastest player in the high score.

This game is part of our research on touch performance on mobile devices and also part of my work as a PhD student. While users play the game, we measure where they hit the screen and how fast they are. By combining this information with the position and size of the circles, we can estimate how easy each screen position is to touch. Based on this data, we hope to be able to predict users’ performance with different button sizes and positions. We plan to derive a corresponding model, and this model could possibly be used to improve the user interface of current smartphones.
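
To illustrate the idea, such a measurement could be implemented in a View roughly as follows (a hypothetical sketch, not the actual game code; all field and method names are made up):

import android.content.Context;
import android.view.MotionEvent;
import android.view.View;

// Hypothetical sketch of the measurement idea, not the game's real code.
public class HitItViewSketch extends View {
    private float targetCenterX, targetCenterY, targetRadius; // current circle (hypothetical fields)
    private long targetShownAt;                                // when the circle appeared

    public HitItViewSketch(Context context) {
        super(context);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        if (event.getAction() == MotionEvent.ACTION_DOWN) {
            long selectionTime = System.currentTimeMillis() - targetShownAt;
            float offsetX = event.getX() - targetCenterX; // horizontal offset from the circle's center
            float offsetY = event.getY() - targetCenterY; // vertical offset from the circle's center
            logSample(targetRadius, offsetX, offsetY, selectionTime);
        }
        return true;
    }

    private void logSample(float radius, float dx, float dy, long time) {
        // hypothetical logger: store the sample for later analysis
    }
}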

We hope that we can collect data from thousands of players. That would enable us to derive information that is valid not only for a small number of people but for every user. We are, however, not interested in your contact list, browsing history, or phone number. Okay – if you are good looking I might be interested in your phone number, but I don’t want to collect such data automatically ;). In general, we don’t want or need data that would identify individuals. Thus, we do not collect such things or other personal information.

Hit It! is available for Android 1.6 and above. You can have a look at users’ comments and the game’s description on AppBrain or install it directly on your Android phone from the Market.

Sensor-based Augmented Reality made simple

I did some content-based augmented reality for Android, and a former student of mine developed a sensor-based Augmented Reality app. Thus, I thought I should be able to do the sensor-based stuff as well. I fiddled around a lot to make it work with the Canvas, but finally I realized that I’m just not able to do it that way and switched to OpenGL. I attached an Eclipse project with the source code.

Even though I couldn’t find a good example or tutorial, it was pretty easy and definitely much easier than going the Canvas way. Basically, you have to use the SensorManager to register for accelerometer and magnetometer (that’s the compass) events. You find the code in the class PhoneOrientation. The accelerometer and compass data can be combined into a rotation matrix using the code below. I also had to “remap the coordinate system” because the example does not use the default portrait orientation.

SensorManager.getRotationMatrix(newMat, null, acceleration, orientation);
SensorManager.remapCoordinateSystem(newMat,
		SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X,
		newMat);
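
For completeness, a minimal sketch of the surrounding listener could look like this (an assumed structure; the PhoneOrientation class in the attached project may differ in its details):

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class OrientationSketch implements SensorEventListener {
    private final float[] acceleration = new float[3]; // latest accelerometer values
    private final float[] orientation = new float[3];  // latest magnetometer values
    private final float[] newMat = new float[16];      // 4x4 rotation matrix for OpenGL

    public void start(SensorManager sensorManager) {
        sensorManager.registerListener(this,
                sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
                SensorManager.SENSOR_DELAY_GAME);
        sensorManager.registerListener(this,
                sensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD),
                SensorManager.SENSOR_DELAY_GAME);
    }

    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
            System.arraycopy(event.values, 0, acceleration, 0, 3);
        } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
            System.arraycopy(event.values, 0, orientation, 0, 3);
        }
        // combine both sensors into a rotation matrix and remap it as shown above
        SensorManager.getRotationMatrix(newMat, null, acceleration, orientation);
        SensorManager.remapCoordinateSystem(newMat,
                SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X, newMat);
    }

    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}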

newMat is a 4×4 matrix stored as a float array. This matrix must be passed to the OpenGL rendering pipeline (floatMat in the snippet below) and loaded by simply using:

gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadMatrixf(floatMat, 0);

That’s basically it. As I never learned how to use OpenGL, in particular how to load textures, the project is based on an earlier example that renders the camera image on a cube. The project also uses an Android 2.2 API via reflection to access camera images in a fast way (which is why it still works on Android 2.1). Check out the Eclipse project if you are interested or install the demo on your Android 2.1 device (on cyrket/in the Market).
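
Roughly, the idea is to look up the newer camera methods via reflection instead of calling them directly; a sketch of that trick (assuming the buffered preview callback is the 2.2 API in question, which is not the project’s actual code) could look like this:

import java.lang.reflect.Method;
import android.hardware.Camera;

// Hedged sketch: call the Android 2.2 buffered preview callback via reflection,
// falling back to the regular callback where it is not available.
public class FastPreviewHelper {
    public static void setBufferedCallback(Camera camera,
            Camera.PreviewCallback callback, int bufferSize) {
        try {
            Method addBuffer = Camera.class.getMethod("addCallbackBuffer", byte[].class);
            Method setCallback = Camera.class.getMethod(
                    "setPreviewCallbackWithBuffer", Camera.PreviewCallback.class);
            addBuffer.invoke(camera, (Object) new byte[bufferSize]);
            setCallback.invoke(camera, callback);
        } catch (Exception e) {
            // reflection failed: use the regular (slower) preview callback
            camera.setPreviewCallback(callback);
        }
    }
}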

Hit the Rabbit!

Fight the dreadful rabbits and crush them with your holy thumb. The shooting season begins with my first game in the Android Market. Your job is to hit as many rabbits as possible. Pan the background around to find these evil creatures and hit them with a lusty touch. You can show your skills in different levels that force you to hurry up. The time trial mode adds even more variety, letting you fight against the clock.

You can download the latest version from the Android Market, and don’t forget to give it a proper rating if you like it. Please leave a comment if you have criticism or recommendations, in particular ideas to improve the game. It’s my first game (ever), so please be gentle with me. You find the game in the Market, and you can also have a look at the description and screenshots.


What’s in the off-screen? Different techniques to show POIs on a map

My student Sascha and I implemented some visualization techniques for maps on phones. Don’t know what this is all about? Let’s have a look at the abstract of the paper Halo: a technique for visualizing off-screen objects:

As users pan and zoom, display content can disappear into off-screen space, particularly on small-screen devices. The clipping of locations, such as relevant places on a map, can make spatial cognition tasks harder. Halo is a visualization technique that supports spatial cognition by showing users the location of off-screen objects. Halo accomplishes this by surrounding off-screen objects with rings that are just large enough to reach into the border region of the display window. From the portion of the ring that is visible on-screen, users can infer the off-screen location of the object at the center of the ring. We report the results of a user study comparing Halo with an arrow-based visualization technique with respect to four types of map-based route planning tasks. When using the Halo interface, users completed tasks 16-33% faster, while there were no significant differences in error rate for three out of four tasks in our study.

A couple of other approaches try to support similar tasks. We thought testing is better than believing and implemented three different visualization techniques for digital maps on Android. There is a demo app in the Market (direct link). We tried to make the whole thing portable but only tested it on the G1 and the emulator. I would love to know whether it works on other devices like the Motorola Milestone.
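
To give an idea of how Halo works, a single ring could be drawn on a Canvas roughly like this (an illustration of the technique from the abstract, not the code of our study app; the border width is a free parameter):

import android.graphics.Canvas;
import android.graphics.Paint;

// Illustration of the Halo idea: a ring centered on the off-screen point whose
// radius is just large enough to reach into the border region of the screen.
public class HaloSketch {
    public static void drawHalo(Canvas canvas, float pointX, float pointY,
            int screenWidth, int screenHeight, float borderWidth, Paint paint) {
        // closest on-screen position to the off-screen point
        float clampedX = Math.max(0, Math.min(pointX, screenWidth));
        float clampedY = Math.max(0, Math.min(pointY, screenHeight));
        float dx = pointX - clampedX;
        float dy = pointY - clampedY;
        float distanceToEdge = (float) Math.sqrt(dx * dx + dy * dy);

        paint.setStyle(Paint.Style.STROKE);
        canvas.drawCircle(pointX, pointY, distanceToEdge + borderWidth, paint);
    }
}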

I removed the app from the Market because I lost my keystore and can’t update it anymore. If you are interested in testing it, check out the Map Explorer, an updated version that you can find in the Market.

Camera image->NDK->OpenGL texture

Since we are currently working on some augmented reality stuff for Android, I need to show the camera image using OpenGL ES. It works great in pure Java if one uses only the grayscale image; however, I needed the color image. The G1’s camera delivers the image in a YUV format while OpenGL only understands RGB images. Unfortunately, converting the YUV image to RGB in pure Java is out of the question for images with 480×320 pixels. Thus, I used the NDK to implement the conversion. The code below does the job; it is based on code provided by Tom Gibara.

void toRGB565(unsigned short *yuvs, int widthIn, int heightIn, unsigned int *rgbs, int widthOut, int heightOut) {
  int half_widthIn = widthIn >> 1;

  //the end of the luminance data
  int lumEnd = (widthIn * heightIn) >> 1;
  //points to the next luminance value pair
  int lumPtr = 0;
  //points to the next chrominance value pair
  int chrPtr = lumEnd;
  //the end of the current luminance scanline
  int lineEnd = half_widthIn;

  int x,y;
  for (y=0;y<heightIn;y++) {
    //offset of the current output row (two RGB565 pixels per int)
    int yPosOut = (y*widthOut) >> 1;
    for (x=0;x<half_widthIn;x++) {
      //read two luminance values packed into one short
      int Y1 = yuvs[lumPtr++];
      int Y2 = (Y1 >> 8) & 0xff;
      Y1 = Y1 & 0xff;
      //read the chrominance values packed into one short
      int Cr = yuvs[chrPtr++];
      int Cb = ((Cr >> 8) & 0xff) - 128;
      Cr = (Cr & 0xff) - 128;

      int R, G, B;
      //generate first RGB components
      B = Y1 + ((454 * Cb) >> 8);
      if (B < 0) B = 0; if (B > 255) B = 255;
      G = Y1 - ((88 * Cb + 183 * Cr) >> 8);
      if (G < 0) G = 0; if (G > 255) G = 255;
      R = Y1 + ((359 * Cr) >> 8);
      if (R < 0) R = 0; if (R > 255) R = 255;
      int val = ((R & 0xf8) << 8) | ((G & 0xfc) << 3) | (B >> 3);

      //generate second RGB components (using the second luminance value)
      B = Y2 + ((454 * Cb) >> 8);
      if (B < 0) B = 0; if (B > 255) B = 255;
      G = Y2 - ((88 * Cb + 183 * Cr) >> 8);
      if (G < 0) G = 0; if (G > 255) G = 255;
      R = Y2 + ((359 * Cr) >> 8);
      if (R < 0) R = 0; if (R > 255) R = 255;
      rgbs[yPosOut+x] = val | ((((R & 0xf8) << 8) | ((G & 0xfc) << 3) | (B >> 3)) << 16);
    }
    //skip back to the start of the chrominance values when necessary
    chrPtr = lumEnd + ((lumPtr  >> 1) / half_widthIn) * half_widthIn;
    lineEnd += half_widthIn;
  }
}

The code is not that optimized at the moment, but it can process a 480×320 image in ~25 ms on my G1 (which is somewhat slow according to my student’s comments). In order to call this function from Java, I needed a wrapper with a JNI signature:

/**
 * Converts the input image from YUV to a RGB 5_6_5 image.
 * The size of the output buffer must be at least the size of the input image.
 */
JNIEXPORT void JNICALL Java_de_offis_magic_core_NativeWrapper_image2TextureColor
  (JNIEnv *env, jclass clazz,
  jbyteArray imageIn, jint widthIn, jint heightIn,
  jobject imageOut, jint widthOut, jint heightOut,
  jint filter) {

	jbyte *cImageIn = (*env)->GetByteArrayElements(env, imageIn, NULL);
	jbyte *cImageOut = (jbyte*)(*env)->GetDirectBufferAddress(env, imageOut);


	toRGB565((unsigned short*)cImageIn, widthIn, heightIn, (unsigned int*)cImageOut, widthOut, heightOut);

	(*env)->ReleaseByteArrayElements(env, imageIn, cImageIn, JNI_ABORT);
}
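
On the Java side, the matching declaration follows from the JNI signature above (the native library name below is just an assumption); note that the output has to be a direct ByteBuffer because the wrapper uses GetDirectBufferAddress:

package de.offis.magic.core;

import java.nio.ByteBuffer;

public class NativeWrapper {
    static {
        System.loadLibrary("magic"); // assumed library name, adjust to the actual .so
    }

    /** Converts a YUV camera frame into an RGB 5_6_5 image stored in a direct buffer. */
    public static native void image2TextureColor(
            byte[] imageIn, int widthIn, int heightIn,
            ByteBuffer imageOut, int widthOut, int heightOut,
            int filter);
}

A suitable output buffer is, for example, ByteBuffer.allocateDirect(widthOut * heightOut * 2), since each RGB565 pixel takes two bytes.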

To make it more interesting, I added some filters to the camera image. There is a demo app in the Market (direct link to the Market). I tried to make the whole thing portable but would love to know whether it works on other devices like the Motorola Milestone.
Screenshots: Sepia effect, Black & White effect, Fisheye effect, Invert effect

Instant exit with PowerPoint Viewer 2007

Recently I tried to deploy one of my prototypes on another machine. The functionality of the prototype is quite simple: hold a predefined object in front of your webcam and the system starts a corresponding presentation on the screen. I’m not sure if this is really useful for anything, but I did something similar with printed photos: whenever I held a printed photo in front of my webcam, the system tried to find the digital equivalent on my computer. If a photo was recognized, the corresponding directory was opened in the file explorer and the photo was selected.

However, when I tried to use PowerPoint Viewer 2007 to open the presentations, it didn’t work: the viewer exited instantly after starting. It took me a while to find a solution for this behaviour. It is somehow related to a bug described in the Microsoft Knowledge Base. The described cause is:

“This issue occurs when you have a non-English version of Microsoft Office or of PowerPoint Viewer installed on a computer that has an English version of Microsoft Windows installed.”

(By the way, is that really a cause or rather an excuse?)

I run a German Windows XP and tested the PowerPoint Viewer on at least three other computers with a German XP. Maybe the described “cause” is just wrong, because the following solution worked very well for me anyway:

  • Go to the directory C:\Programme\Microsoft Office\Office12.
  • Copy the locale folder located in that directory and rename the copy to 1033 (the English locale ID).

Camera image as an OpenGL texture on top of the native camera viewfinder

I played a bit with the camera viewfinder on my G1, which is usually displayed directly by the camera driver. I hoped that I could synchronize the driver’s camera frame rendering with my own processing and visualization. After an hour or so, I now assume that this is not possible at the moment. However, while playing around I extended the example below, as you can see in these screenshots.

Screenshots: OpenGL Camera

An OpenGL cube textured with the camera frame is rendered on top of the standard camera viewfinder. Thus, the standard camera image in the background is colored while the cube is only grayscale. I suspect I will have to make the OpenGL texture colored as well soon. I also cleaned up the source code a bit by extending GLSurfaceView instead of doing most of the OpenGL stuff myself with a plain SurfaceView. I uploaded an updated version to the Android Market (direct link to the Android Market). You can find the source code here.
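
As a rough sketch (assumed structure, not the project’s exact code), the GLSurfaceView subclass mainly requests a translucent surface so the native viewfinder stays visible behind it and hands all drawing to a Renderer:

import android.content.Context;
import android.graphics.PixelFormat;
import android.opengl.GLSurfaceView;

public class CubeOverlayView extends GLSurfaceView {
    public CubeOverlayView(Context context, GLSurfaceView.Renderer cubeRenderer) {
        super(context);
        setEGLConfigChooser(8, 8, 8, 8, 16, 0);         // RGBA config with a depth buffer
        getHolder().setFormat(PixelFormat.TRANSLUCENT); // let the camera preview shine through
        setRenderer(cubeRenderer);                      // e.g. the renderer drawing the textured cube
    }
}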

Showing camera images with OpenGL on Android example

I fiddled a small example together that shows how to get images from the camera and render them with OpenGL. The example is for Android phones and consists of three classes:

  • GLCamTest is the application’s main Activity. It does nothing special apart from putting the app in fullscreen mode and creating a GLLayer object as well as a Preview object.
  • The Preview class handles the camera. In particular, the method setPreviewCallback is used to receive camera images. The camera images are not processed in this class but delivered directly to the GLLayer; the class itself does not display them (see the sketch after this list).
  • GLLayer uses OpenGL ES to render the camera’s viewfinder image on the screen. Unfortunately, I don’t know much about OpenGL (ES), so the code is mostly copied from some examples. The only interesting stuff happens in the main loop (the run method) and the onPreviewFrame method.
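
As a rough sketch (assumed structure; the real classes may differ in detail), the Preview class essentially just opens the camera and forwards every frame to the GLLayer, which is assumed to implement Camera.PreviewCallback:

import android.content.Context;
import android.hardware.Camera;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

public class Preview extends SurfaceView implements SurfaceHolder.Callback {
    private final Camera.PreviewCallback glLayer; // the GLLayer receives the raw YUV frames
    private Camera camera;

    public Preview(Context context, Camera.PreviewCallback glLayer) {
        super(context);
        this.glLayer = glLayer;
        getHolder().addCallback(this);
        getHolder().setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS); // needed on these old API levels
    }

    public void surfaceCreated(SurfaceHolder holder) {
        camera = Camera.open();
        camera.setPreviewCallback(glLayer); // frames go straight to the GLLayer
        camera.startPreview();
    }

    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) { }

    public void surfaceDestroyed(SurfaceHolder holder) {
        camera.setPreviewCallback(null);
        camera.stopPreview();
        camera.release();
    }
}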

Furthermore, we have the BooleanLock class, which is completely boring. I uploaded the Eclipse project containing the source code. I have only tested it on the emulator and with my tuned G1, so I am not sure whether it works on normal devices.

I just tested it on a normal G1. Performance is horrible; the garbage collector jumps in a few times per second and stops the video. This is because of the Camera Preview Callback memory issue. Unfortunately, I assume that this can’t be changed without touching the firmware. I also uploaded the example to the Android Market.

Processing camera frames on Android

Recently I wanted to process and display camera frames using my Android G1. I’ve done similar things using Python on S60 and on Windows Mobile 6 and expected it to be quite easy on the G1 as well. As a first step, I extended a SurfaceView that uses the camera and calls setPreviewCallback to register an onPreviewFrame callback and receive images from the camera, as described in several tutorials. The camera frames are then displayed via my SurfaceView, and I receive the corresponding data as well.

However, I wanted to keep the processing of the frames and their display in sync. With the simple approach this is not possible because onPreviewFrame is not synchronized with displaying the frames. My alternative was not to display the frames with the SurfaceView directly but to convert the received image data to an OpenGL texture and render the camera viewfinder with OpenGL ES. This works surprisingly fast on my G1. In the video below, I render the camera frames on an OpenGL rectangle to get some fancy effects.

My viewfinder is grayscale because I only copy the luminance part of the camera frames (which are encoded in a YUV colour space) to the OpenGL texture. Decoding the U and V parts as well would probably be a bit slower. Copying a 160×240 YUV frame to a 256×256 luminance array (which is used to create the texture) is very simple and looks as follows.

public static void yuvToLum160x240(byte[] yuv, byte[] lum) {
    int lumCount = 0;
    int yuvCount = 0;
    for (int y = 0; y < 160; y++) {
        // copy one 240-pixel luminance row into the 256-pixel wide texture array
        System.arraycopy(yuv, yuvCount, lum, lumCount, 240);
        yuvCount = yuvCount + 240;
        lumCount = lumCount + 256;
    }
}
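
For reference, uploading that 256×256 luminance array as an OpenGL ES texture could look roughly like this (a sketch with assumed names; texture creation and the filter parameters are set up elsewhere):

import java.nio.ByteBuffer;
import javax.microedition.khronos.opengles.GL10;

// Sketch: push the grayscale frame into the bound texture once per frame.
public class LuminanceTextureSketch {
    private final ByteBuffer buffer = ByteBuffer.allocateDirect(256 * 256); // reused every frame

    public void upload(GL10 gl, int textureId, byte[] lum) {
        buffer.position(0);
        buffer.put(lum);
        buffer.position(0);

        gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId);
        gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_LUMINANCE, 256, 256, 0,
                GL10.GL_LUMINANCE, GL10.GL_UNSIGNED_BYTE, buffer);
    }
}

In practice one would rather call glTexSubImage2D on an existing texture instead of re-creating it every frame, but the idea is the same.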