Hit It! – a fast-paced Android game

Hit It! is a game for the Android platform that is all about speed and quick fingers. You have to touch and move as fast as you can to see if you can beat all levels. The player’s task is simply to touch each appearing circle as fast as possible. The faster they are, the more points they get. Players might even improve their dexterity while trying to become the fastest player in the high scores.

This game is part of our research on touch performance on mobile devices and also part of my work as a PhD student. While users play the game we measure where they hit the screen and how fast they are. By combining this information with the position and size of the circles we can estimate how easy each screen position is to touch. Based on this data we hope to be able to predict users’ performance for different button sizes and positions. We plan to derive a corresponding model, which could then be used to improve the user interfaces of current smartphones.

We hope that we can collect data from thousands of players. That would enable us to derive information that is valid not only for a small number of people but for every user. We are, however, not interested in your contact list, browsing history, or phone number. Okay – if you are good looking I might be interested in your phone number, but I don’t want to collect such data automatically ;). In general we don’t want or need data that could identify individuals. Thus, we do not collect these things or any other personal information.

Hit It! is available for Android 1.6 and above. You can have a look at users’ comments and the game’s description on AppBrain or install it directly on your Android phone from the Market.

Sensor-based Augmented Reality made simple

I did some content-based augmented reality for Android and my former student developed a sensor-based augmented reality app, so I thought I should be able to do the sensor-based stuff as well. I fiddled around a lot trying to make it work with the Canvas, but I finally realized that I’m just not able to do it that way and switched to OpenGL. I attached an Eclipse project with the source code.

Even though I couldn’t find a good example or tutorial, it was pretty easy and definitely much easier than going the Canvas way. Basically you have to use the SensorManager to register for accelerometer and magnetometer (that’s the compass) events. You find the code in the class PhoneOrientation. The accelerometer and compass data can then be combined into a rotation matrix using the getRotationMatrix() call shown further below. I also had to “remap the coordinate system” because the example does not run in the default portrait orientation.
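For completeness, here is roughly what the registration part looks like (a simplified sketch of what PhoneOrientation does; the class and field names in this snippet are illustrative):

import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class OrientationSketch implements SensorEventListener {

    private final float[] acceleration = new float[3];
    private final float[] orientation = new float[3]; // magnetometer readings

    public void start(Context context) {
        SensorManager sensorManager =
                (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        sensorManager.registerListener(this,
                sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
                SensorManager.SENSOR_DELAY_GAME);
        sensorManager.registerListener(this,
                sensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD),
                SensorManager.SENSOR_DELAY_GAME);
    }

    public void onSensorChanged(SensorEvent event) {
        // keep the latest readings; both arrays are later combined via
        // SensorManager.getRotationMatrix() as shown below
        if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
            System.arraycopy(event.values, 0, acceleration, 0, 3);
        } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
            System.arraycopy(event.values, 0, orientation, 0, 3);
        }
    }

    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // not needed for this example
    }
}

Whenever new values are available, the two arrays are combined and remapped like this: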

SensorManager.getRotationMatrix(newMat, null, acceleration, orientation);
SensorManager.remapCoordinateSystem(newMat,
		SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X,
		newMat);

newMat is a 4×4 rotation matrix stored as a float array of 16 values. This matrix must be passed to the OpenGL rendering pipeline (it shows up as floatMat in the snippet below) and loaded by simply using:

gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadMatrixf(floatMat, 0);

That’s it, basically. As I never learned how to use OpenGL, in particular how to load textures, the project is based on an earlier example that renders the camera image onto a cube. The project also accesses an Android 2.2 API via reflection to grab camera images in a fast way (that’s why it still works on Android 2.1). Check out the Eclipse project if you are interested or install the demo on your Android 2.1 device (on cyrket/in the market).
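The 2.2 API in question is presumably the buffered preview callback (addCallbackBuffer / setPreviewCallbackWithBuffer) that became public in Froyo. A rough sketch of such a reflection trick, which is my guess at the approach and not the project’s actual code, could look like this:

import java.lang.reflect.Method;

import android.hardware.Camera;

// The buffered preview callback became public API in Android 2.2; looking the
// methods up via reflection means the code also runs on devices where they are
// presumably only available as hidden API (like 2.1).
public class FastPreviewHelper {

    public static void setUpBufferedPreview(Camera camera,
            Camera.PreviewCallback callback, int bufferSize) {
        try {
            Method addBuffer = Camera.class.getMethod(
                    "addCallbackBuffer", byte[].class);
            Method setCallback = Camera.class.getMethod(
                    "setPreviewCallbackWithBuffer", Camera.PreviewCallback.class);

            // hand the camera one reusable buffer and register the callback
            addBuffer.invoke(camera, new Object[] { new byte[bufferSize] });
            setCallback.invoke(camera, new Object[] { callback });
        } catch (Exception e) {
            // methods not available: fall back to the normal (slower) callback
            camera.setPreviewCallback(callback);
        }
    }
}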

Being the off-screen king

Recently Torben and I spammed the “International Conference on Human-Computer Interaction with Mobile Devices and Services” (better known as MobileHCI) with two papers and a poster about off-screen visualizations. Off-screen visualizations try to reduce the impact of the inherent size restrictions of mobile devices’ displays. The idea is that the display is just a window into a larger space. Off-screen visualizations show the user where to look for objects located in this larger space.

The title of the first paper is Visualization of Off-Screen Objects in Mobile Augmented Reality. It deals with displaying points of interest using sensor-based mobile augmented reality. We compare the common mini-map, which provides a 2D overview of nearby objects, with the less common visualization of nearby objects using arrows that point at them. The images below show both visualizations side by side.

Off-screen visualizations for handheld augmented reality.

To compare the mini-map with the arrows we conducted a small user study in the city centre. We randomly asked passersby to participate (big thanks to my student Manuel, who attracted 90% of our female participants). We ended up with 26 people testing both visualizations. Probably because most participants were not tech-savvy, the collected data is heavily affected by noise. From the results (see the paper for more details) we still conclude that our arrows outperform the mini-map. Even though the study has some flaws, I’m quite sure that our results are valid. However, we only tested a very small number of objects and I’m pretty sure that one would get different results for a larger number of objects. I would really like to see a study that analyzes a larger number of objects and additional visualizations.

In the paper Evaluation of an Off-Screen Visualization for Magic Lens and Dynamic Peephole Interfaces I compared a dynamic peephole interface with a Magic Lens, each using an arrow-based off-screen visualization (or no off-screen visualization). The idea of dynamic peephole interfaces is that the mobile phone’s display is a window into a virtual surface (e.g. a digital map) which you explore by physically moving the phone around. The Magic Lens is very similar, with the important difference that you explore a physical surface (e.g. a paper map) that is augmented with additional information. The concept of the Magic Lens is sketched in the figures below.

Conceptual sketch of using a Magic Lens to interact with a paper map.

We could not measure a difference between the Magic Lens and the dynamic peephole interface. However, we did measure a clear difference between using an off-screen visualization and not using one. I assume that off-screen visualizations have a much larger impact on the user experience than the choice between a Magic Lens and a dynamic peephole. As the Magic Lens relies on a physical surface, I doubt that it adds relevant value (for the simple tasks we tested, of course).

As some guys asked me why I use arrows and not those fancy Halos or Wedges (actually I wonder if anyone has ever fully implemented Wedge for an interactive application), I thought it might be nice to be able to cite my own paper. Thus, I decided to compare some off-screen visualization techniques for digital maps (e.g. Google Maps) on mobile phones. As it would’ve been a bit boring to just repeat the study conducted by Burigat and colleagues, I decided to let users interact with the map (instead of using a static prototype). To make it a bit more interesting (and because I’m lazy) we developed a prototype and published it to the Android Market. We collected some data from users who installed the app and completed an interactive tutorial. The results indicate that arrows are just better than Halos. However, our methodology is flawed and I assume that we haven’t measured what we intended to measure. You can test the application on your Android phone or just have a look at the poster.

Screenshots of our application in the Android Market

I’m a bit afraid that the papers will end up in the same session. Might be annoying for the audience to see two presentations with the same motivation and similar related work.

Statistics from inside the Android Market

Some thousand users have installed the applications I published in the Android Market. I was curious where these users come from and which devices they actually use, so I integrated some logging into two of my apps. Hit the Rabbit is a simple game where the player has to hit as many rabbits as possible with his finger; in order to find the evil creatures one must pan the background around. The other one is the Map Explorer, a simple location-based application (localized to English and German) that allows you to search for POIs and retrieve some basic information about them (using either Qype or Yahoo Local).

Probably the most interesting thing I learned from the statistics is that the vast majority of users are from America and use the English localisation. Even having a German version doesn’t make a big difference.

Another interesting aspect is the large number of devices people use. In total I collected data from more than 35 different devices. There are some devices I had never heard of (“zeppelin”???). What surprised me is that the Nexus One (codename “passion”) seems to be quite unpopular.

The last thing wasn’t really surprising. Most users still use Android 1.5. Almost no one uses Android 2.0 (or 2.0.1) and 1.6 will probably die out soon as well.

I uploaded the compiled statistics for both applications. The data was collected over roughly one month, mostly in April 2010. The statistics are limited because of the number of installations (only approximately 4,000) and because only one of the applications has been localized (and only to a single language – German).

Hit the Rabbit!

Fight the dreadful rabbits and crush them with your holy thumb. The shooting season begins with my first game in the Android Market. Your job is to hit as many rabbits as possible. Pan the background around to find some of these evil creatures and hit them with a lusty touch. You can show your skills in different levels that force you to hurry up. The time trial mode adds even more variety by letting you fight against the clock.

You can download the latest version from the Android Market and don’t forget to give it a proper rating if you like it. Please leave a comment if you have criticism or recommendations – in particular, if you have ideas to improve the game. It’s my first game (ever) so please be gentle with me. You can find the game in the Market. You can also have a look at the description and screenshots.


What’s in the off-screen? Different techniques to show POIs on a map

My student Sascha and I implemented some visualization techniques for maps on phones. Don’t know what this is all about? Let’s have a look at the abstract of the paper Halo: a technique for visualizing off-screen objects:

As users pan and zoom, display content can disappear into off-screen space, particularly on small-screen devices. The clipping of locations, such as relevant places on a map, can make spatial cognition tasks harder. Halo is a visualization technique that supports spatial cognition by showing users the location of off-screen objects. Halo accomplishes this by surrounding off-screen objects with rings that are just large enough to reach into the border region of the display window. From the portion of the ring that is visible on-screen, users can infer the off-screen location of the object at the center of the ring. We report the results of a user study comparing Halo with an arrow-based visualization technique with respect to four types of map-based route planning tasks. When using the Halo interface, users completed tasks 16-33% faster, while there were no significant differences in error rate for three out of four tasks in our study.
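To make the geometry concrete, here is a toy sketch of the Halo idea (my own simplification, not the code of the paper or of our prototype; the intrusion constant is arbitrary):

import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Rect;

public class HaloSketch {

    // how far the arc should reach into the visible display (arbitrary value)
    private static final float INTRUSION = 20f;

    public static void drawHalo(Canvas canvas, Rect screen, float objX, float objY, Paint paint) {
        // distance from the off-screen object to the closest visible point
        float clampedX = Math.max(screen.left, Math.min(objX, screen.right));
        float clampedY = Math.max(screen.top, Math.min(objY, screen.bottom));
        float distance = (float) Math.hypot(objX - clampedX, objY - clampedY);

        if (distance <= 0f) {
            return; // the object is on-screen, no halo needed
        }

        // ring radius: just large enough for the arc to intrude into the display;
        // the canvas clips the circle, so only the intruding arc remains visible
        paint.setStyle(Paint.Style.STROKE);
        canvas.drawCircle(objX, objY, distance + INTRUSION, paint);
    }
}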

A couple of other approaches try to support similar tasks. We thought testing is better than believing and implemented three different visualization techniques for digital maps on Android. There is a demo app in the market (direct link). We tried to make the whole thing portable but only tested it on the G1 and the emulator. I would love to know if it works on other devices like the Motorola Milestone.

I removed the app from the market because I lost my keystore and can’t update it anymore. If you are interested in testing it, check out the Map Explorer, an updated version that you can find in the market.

Goodbye garbage collector – patching Android to make real-time camera image processing feasible

If you want to process camera images on Android phones for real-time object recognition or content-based augmented reality, you have probably heard about the Camera Preview Callback memory Issue. Each time your Java application gets a preview image from the system, a new chunk of memory is allocated. When this memory chunk gets freed again by the Garbage Collector, the system freezes for 100–200ms. This is especially bad if the system is under heavy load (I do object recognition on a phone – hooray, it eats as much CPU power as possible). If you browse through Android’s 1.6 source code you realize that this happens only because the wrapper (that protects us from the native stuff) allocates a new byte array each time a new frame is available. Built-in native code can, of course, avoid this issue.
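For context, the Java side is just the standard preview callback registration; a minimal sketch (the class name is illustrative, error handling omitted):

import java.io.IOException;

import android.hardware.Camera;
import android.view.SurfaceHolder;

// Every call to onPreviewFrame() hands the application a byte[] that the
// native wrapper has freshly allocated, which is exactly what triggers the
// garbage collection pauses described above.
public class PreviewExample {

    public static Camera startPreview(SurfaceHolder holder) throws IOException {
        Camera camera = Camera.open();
        camera.setPreviewDisplay(holder);
        camera.setPreviewCallback(new Camera.PreviewCallback() {
            public void onPreviewFrame(byte[] data, Camera cam) {
                // 'data' is a brand new array for every frame (YUV format);
                // this is where the image processing would run
            }
        });
        camera.startPreview();
        return camera;
    }
}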

I still hope someone will fix the Camera Preview Callback memory Issue, but meanwhile I fixed it, at least for my phone, to build prototypes by patching Donut’s (Android 1.6) source code. What you find below is just an ugly hack I did for myself! To reproduce it you should know how to compile Android from source.

Avoid memory allocation

Diving into the source code starts with the Java wrapper of the Camera and its native counterpart android_hardware_Camera.cpp. A Java application calls setPreviewCallback, this method calls the native function android_hardware_Camera_setHasPreviewCallback, and the call is passed further down into the system. When the driver delivers a new frame back to the native wrapper, it ends up in the function JNICameraContext::copyAndPost():

void JNICameraContext::copyAndPost(JNIEnv* env, const sp<IMemory>& dataPtr, int msgType)
{
    jbyteArray obj = NULL;

    // allocate Java byte array and copy data
    if (dataPtr != NULL) {
        ssize_t offset;
        size_t size;
        sp<IMemoryHeap> heap = dataPtr->getMemory(&offset, &size);
        LOGV("postData: off=%d, size=%d", offset, size);
        uint8_t *heapBase = (uint8_t*)heap->base();

        if (heapBase != NULL) {
            const jbyte* data = reinterpret_cast<const jbyte*>(heapBase + offset);
            obj = env->NewByteArray(size);
            if (obj == NULL) {
                LOGE("Couldn't allocate byte array for JPEG data");
                env->ExceptionClear();
            } else {
                env->SetByteArrayRegion(obj, 0, size, data);
            }
        } else {
            LOGE("image heap is NULL");
        }
    }

    // post image data to Java
    env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
            mCameraJObjectWeak, msgType, 0, 0, obj);
    if (obj) {
        env->DeleteLocalRef(obj);
    }
}

The culprit is the line obj = env->NewByteArray(size); which allocates a new Java byte array each time. For a frame with 480×320 pixels that means roughly 230KB per call, and that takes some time. Even worse, this buffer must be freed later on by the Garbage Collector, which takes even more time. Thus, the task is to avoid these allocations. I don’t care about compatibility with existing applications and want to keep the changes minimal. What I did is just a dirty hack, but it works quite well for me.

My approach is to allocate a Java byte array once and reuse it for every frame. First I added the following three variables to android_hardware_Camera.cpp:

static Mutex sPostDataLock; // A mutex that synchronizes calls to sCameraPreviewArrayGlobal
static jbyteArray sCameraPreviewArrayGlobal; // Buffer that is reused
static size_t sCameraPreviewArraySize=0; // Size of the buffer (or 0 if the buffer is not yet used)

To actually use the buffer I change the function copyAndPost by replacing it with the following code:

void JNICameraContext::copyAndPost(JNIEnv* env, const sp<IMemory>& dataPtr, int msgType) {
    if (dataPtr != NULL) {
        ssize_t offset;
        size_t size;
        sp<IMemoryHeap> heap = dataPtr->getMemory(&offset, &size);
        LOGV("postData: off=%d, size=%d", offset, size);
        uint8_t *heapBase = (uint8_t*)heap->base();

        if (heapBase != NULL) {
            const jbyte* data = reinterpret_cast<const jbyte*>(heapBase + offset);
            //HACK
            if ((sCameraPreviewArraySize==0) || (sCameraPreviewArraySize!=size)) {
                if (sCameraPreviewArraySize!=0) env->DeleteGlobalRef(sCameraPreviewArrayGlobal);
                sCameraPreviewArraySize=size;
                jbyteArray mCameraPreviewArray = env->NewByteArray(size);
                sCameraPreviewArrayGlobal=(jbyteArray)env->NewGlobalRef(mCameraPreviewArray);
                env->DeleteLocalRef(mCameraPreviewArray);
            }
            if (sCameraPreviewArrayGlobal == NULL) {
                LOGE("Couldn't allocate byte array for JPEG data");
                env->ExceptionClear();
            } else {
                env->SetByteArrayRegion(sCameraPreviewArrayGlobal, 0, size, data);
            }
        } else {
            LOGE("image heap is NULL");
        }
    }
    // post image data to Java
    env->CallStaticVoidMethod(mCameraJClass, fields.post_event, mCameraJObjectWeak, msgType, 0, 0, sCameraPreviewArrayGlobal);
}

If the buffer has the wrong size, a new buffer is allocated; otherwise the buffer is just reused. This hack definitely has some nasty side effects in certain situations. However, to be nice, we should delete the global reference to our buffer when the camera is released. Therefore, I add the following code to the end of android_hardware_Camera_release:

if (sCameraPreviewArraySize!=0) {
    Mutex::Autolock _l(sPostDataLock);
    env->DeleteGlobalRef(sCameraPreviewArrayGlobal);
    sCameraPreviewArraySize=0;
}

Finally, I have to change the mutex used in the function postData. The Java patch below avoids passing the camera image to another thread. Therefore, the thread that calls postData is the same thread that executes my Java code. To be able to call camera functions from that Java code I need a different mutex for postData. Normally the mutex mLock is taken via the line Mutex::Autolock _l(mLock); and I replace this line with Mutex::Autolock _l(sPostDataLock);.

Outsmart Android’s message queue

Unfortunately, this is only the first half of our customization. Somewhere deep inside the system, probably at the driver level (been there once – don’t want to go there again), is a thread which pumps the camera images into the system. This call ends up in the Java code of Camera.java, where the frame is delivered to the postEventFromNative method. However, the frame is not delivered directly to our application but takes a detour via Android’s message queue. This is pretty ugly if we reuse our frame buffer: the detour makes the process asynchronous, and since the buffer is permanently overwritten this leads to corrupted frames. To avoid the detour, the easiest solution (for me) is to take the code snippet that handles this callback out of the method handleMessage:

            case CAMERA_MSG_PREVIEW_FRAME:
                if (mPreviewCallback != null) {
                    mPreviewCallback.onPreviewFrame((byte[])msg.obj, mCamera);
                    if (mOneShot) {
                        mPreviewCallback = null;
                    }
                }
                return;

and move it to the method postEventFromNative.

	if (what==CAMERA_MSG_PREVIEW_FRAME) {
                if (c.mPreviewCallback != null) {
                    c.mPreviewCallback.onPreviewFrame((byte[])obj, c);
                    if (c.mOneShot) {
                        c.mPreviewCallback = null;
                    }
                }
                return;
	}

This might have some nasty side effects in certain situations. If you have done all that, you might want to join the discussion around Issue 2794 and the “Propose an API change in the Camera API: Excessive GC caused by preview callbacks” thread to find a proper solution for the Camera Preview Callback memory Issue (and leave a comment here if you have a better solution).

Baking a Donut for a Dream

As Android 2.0 will probably be available for the G1/HTC Dream soon, I decided to keep up with the times and update to Android 1.6. Testing different Donut releases such as CyanogenMod and the vanilla HTC version frustrated me: support for processing camera images is still rubbish because of the permanent memory allocations and garbage collector runs. Thus, I decided to bake my own Donut. The guide below is heavily based on Johan de Koning’s Building Android 1.5 series and the Building For Dream Or Sapphire documentation.

Collecting the tools

In order to build and deploy your own Android 1.6 you need:

  • A G1 with Android 1.6 and a boot image with fastboot enabled
  • A computer with a recent Ubuntu or Windows with > 14GB free hard disk and > 2GB RAM

Using Windows 7, I downloaded and installed VirtualBox 3.1.2. The virtual machine needs around 14GB of hard disk space and 2GB of RAM. To install Linux I downloaded Ubuntu 9.10, mounted the ISO image, and installed Ubuntu inside VirtualBox. It’s also a good idea to install the VirtualBox Guest Additions and create a shared folder to exchange data with the host machine. My folder’s share name is exchange and it can be mounted by typing the following into a terminal:

sudo mount -t vboxsf exchange /mnt

The next step is to install all kinds of stuff needed to conveniently download things and compile the system:

sudo apt-get install git-core gnupg flex bison gperf libsdl-dev libesd0-dev libwxgtk2.6-dev build-essential zip curl libncurses5-dev zlib1g-dev

We also need Java, but some parts of the source tree are (still) not compatible with Java 6, and Java 5 is not available as a package for Ubuntu 9.10. Following the Enea guys, I used packages from the previous Ubuntu version, Jaunty Jackalope. You first have to add the Jaunty repositories to your sources list by typing:

sudo gedit /etc/apt/sources.list

in a terminal and add the following lines:

deb http://us.archive.ubuntu.com/ubuntu/ jaunty multiverse
deb http://us.archive.ubuntu.com/ubuntu/ jaunty-updates multiverse

Afterwards Java5 can be installed by typing:

sudo apt-get update
sudo apt-get install sun-java5-jdk

We need two additional tools to proceed. We will create a bin folder in our home directory:

cd ~
mkdir bin

The first tool is repo. We download it and make it executable:

curl http://android.git.kernel.org/repo >~/bin/repo
chmod a+x ~/bin/repo

The second tool is unyaffs, a program that extracts files from a YAFFS file system image.

curl http://unyaffs.googlecode.com/files/unyaffs >~/bin/unyaffs
chmod a+x ~/bin/unyaffs

We put the bin folder in our path by adding the following line to the .bashrc:

export PATH=${PATH}:~/bin:~/android-sdk-linux_86/tools

Getting the source and proprietary apps

The next step is to download the source code using repo and git. First we create a folder for our source tree and then check out the Donut sources:

mkdir mydroid
cd mydroid
repo init -u git://android.git.kernel.org/platform/manifest.git -b donut-plus-aosp
repo sync

This will take a while, so it’s time to buy a six-pack (we’ll need it when compiling the system). When the checkout is finished (and you’re not too drunk) we can proceed.

Next we grab some proprietary binaries from the device which can’t be distributed for legal reasons (whatsoever). Download the “HTC Proprietary Binaries for ADP1” from HTC’s developer site, add it to your source tree in vendor/htc/dream-open/ and decompress it. Afterwards you have to connect your Android 1.6 equipped G1 to your computer and make the device available to the virtual machine. Then execute the file from the vendor/htc/dream-open/ directory.

We also need the Android 1.6 recovery image, which used to be available from http://developer.htc.com/adp.html. Unfortunately the links are dead now (but the files are still there…). At the time of writing you can get it by typing the following at the root of your source tree:

wget --referer="http://developer.htc.com" http://member.america.htc.com/download/RomCode/ADP/signed-dream_devphone_userdebug-ota-14721.zip

From the vendor/htc/dream-open/ directory run the “unzip-files.sh” script to unzip some proprietary files for your device.

Since we need some Google Applications which are not open source (e.g. the Market, Google Maps, …) we will extract them from the system image which we can download using:

wget --referer="http://developer.htc.com" http://member.america.htc.com/download/RomCode/ADP/signed-dream_devphone_userdebug-img-14721.zip

Inside your home directory create the folder htc and extract the zip file to this folder. Afterwards we can extract the system.img using unyaffs:

cd ~/htc
unyaffs system.img

To copy the apps to your source tree, execute the attached copy_google_apps.sh script. In addition, you have to edit the build configuration to include these apps by replacing ~/mydroid/vendor/htc/dream-open/htc_dream.mk with an extended device_dream.mk.

Now execute the envsetup.sh script from the root of your source tree and run “lunch aosp_dream_us-eng” to specifically configure the build system for the G1/Dream.

. build/envsetup.sh
lunch aosp_dream_us-eng

The output should look like this:

============================================
PLATFORM_VERSION_CODENAME=REL
PLATFORM_VERSION=1.6
TARGET_PRODUCT=aosp_dream_us
TARGET_BUILD_VARIANT=eng
TARGET_SIMULATOR=false
TARGET_BUILD_TYPE=release
TARGET_ARCH=arm
HOST_ARCH=x86
HOST_OS=linux
HOST_BUILD_TYPE=release
BUILD_ID=Donut
============================================

Compile and deploy

Finally, grab the beer, go to the root of your source tree, and type

make

If everything went well you will find the results in the out/target/product/dream-open directory. If you want to deploy them on your G1 you have to boot into fastboot mode (shut down the device and power it up again while holding the BACK key). The fastboot tool is part of the Android SDK but can also be downloaded from HTC. I copied the files to Windows, but you could probably also flash them directly from Ubuntu:

fastboot flash boot boot.img
fastboot flash system system.img
fastboot flash recovery recovery.img
fastboot flash userdata userdata.img
fastboot reboot

The first boot will take some time, but you can follow the progress with adb:

adb logcat

You should end up with a system that hopefully looks and behaves just like a vanilla Donut release. Time to change the source and do some serious stuff.

cURLing Android Market stats in my website

Last week I thought it would be nice to collect some statistics about my apps in the Android Market. Seeing websites like androlib.com and androidpit.de, I thought it shouldn’t be a problem. However, apart from strazzere.com I didn’t find much useful information. Since I’m only interested in the stats of my own apps, I took a deeper look at Google’s Developer Console for the Android Market.

I played a bit with Firebug and learned that the Developer Console is a GWT application, that JSON is used to get the app descriptions from the server, and that the GWT stuff is horrible to reverse engineer. Luckily I found a post that shows how to get the stats from the Android Marketplace with PHP/cURL. It didn’t work for me at first, but after toying around a bit it now works for me. However, I still have no clue what the JSON stuff I get from the GWT server means and I’m only guessing the most important values. It will likely break as soon as I change anything in my developer account.

Below is the PHP script I use to fetch the data from the developer console. 99% is copied from Craige Thomas (I have absolutely no clue about PHP or cURL!). I only added the part that guesses the position of the values and the caching. The script is used to produce the output of the widget on the right.

<?php
// use the cached statistics unless the cache file is missing or too old
if (!file_exists('market_stats.txt') || ((time() - filemtime('market_stats.txt')) > 60)) {

	//do google authorization

	$data = array('accountType' => 'GOOGLE',
	'Email' => 'YOUR GOOGLE LOGIN',
	'Passwd' => 'YOUR PASSWORD',
	'source'=>'',
	'service'=>'androiddeveloper');

	$ch = curl_init();
	curl_setopt($ch, CURLOPT_URL, "https://www.google.com/accounts/ClientLogin");
	curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
	curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
	curl_setopt($ch, CURLOPT_POST, true);
	curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
	curl_setopt($ch, CURLOPT_POSTFIELDS, $data);

	$output = curl_exec($ch);

	$info = curl_getinfo($ch);
	curl_close($ch);

	//grab the AUTH token for later

	$auth = '';
	if($info['http_code'] == 200) {
		preg_match('/Auth=(.*)/', $output, $matches);

		if(isset($matches[1])) {
			$auth = $matches[1];
		}
	}

	//login to Android Market
	//this results in a 302
	//I think this is necessary for a cookie to be set

	$ch = curl_init ("http://market.android.com/publish?auth=$auth");
	curl_setopt($ch, CURLOPT_COOKIEJAR, 'cookies.txt');
	curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
	$output = curl_exec($ch);

	//go to the Developer Console
	$ch = curl_init ("http://market.android.com/publish/Home");
	curl_setopt($ch, CURLOPT_COOKIEFILE, 'cookies.txt');
	curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
	curl_setopt($ch, CURLOPT_COOKIEJAR, 'cookies.txt');
	$output = curl_exec($ch);

	//grab the JSON data
	$perm = "746E1BE622B08CBF950F619C16FCFF1E";
	$headers = array(
		"Host: market.android.com",
		"User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; de; rv:1.9.2.2) Gecko/20100316 Firefox/3.6.2",
		"Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
		"Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3",
		"Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7",
		"Keep-Alive: 115",
		"Connection: keep-alive",
		"Content-Type: text/x-gwt-rpc; charset=utf-8",
		"X-GWT-Permutation: $perm",
		"X-GWT-Module-Base: http://market.android.com/publish/gwt/",
		"Referer: http://market.android.com/publish/gwt/$perm.cache.html");

	//not sure what x-gwt-permutation means, I think it may have to do with which version of GWT they serve based on your browser

//Change here?
	$postdata = "5|0|4|http://market.android.com/publish/gwt/|14E1D06A04411C8FE46E62317C1AF191|com.google.wireless.android.vending.developer.shared.AppEditorService|getFullAssetInfosForUser|1|2|3|4|0|";

	$ch = curl_init ("http://market.android.com/publish/editapp");
	curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
	curl_setopt($ch, CURLOPT_POST, 1);
	curl_setopt($ch, CURLOPT_POSTFIELDS, $postdata);
	curl_setopt($ch, CURLOPT_COOKIEFILE, 'cookies.txt');
	curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

	$output = curl_exec($ch);
	
	$output = substr($output,4);

	$json = json_decode($output);
	$csv = explode(',',$output);

	$apps = array();

	$index = 0;
	$app_count = 0;
	for($i = 0; $i < sizeof($csv); ++$i) {
		if (is_array($json[$i])) {
			$innerArray=$json[$i];
			break;
		}
		if (strpos($csv[$i], ".") !== false) {
			if ($index==1) $apps[$app_count][comments]=$csv[$i];
			else if ($index==2) $apps[$app_count][rating]=$csv[$i];
			else if ($index==4) $apps[$app_count][installs]=$csv[$i];
			else if ($index==6) $apps[$app_count][total]=$csv[$i];
			$index++;
			if ($index==7) {
				$index=0;
				$app_count++;
			}
		}
	}
	for($i = 4; $i < sizeof($innerArray); ++$i) {
		if ((substr($innerArray[$i],-1)=='k') && (substr($innerArray[$i-2],0,17)=='GetImage?imageId=')){
			$app_count--;
			$apps[$app_count][icon]="http://market.android.com/publish/".$innerArray[$i-2];
			$apps[$app_count][name]=$innerArray[$i-3];
			$apps[$app_count][size]=$innerArray[$i];
			$apps[$app_count][package]=$innerArray[$i-1];
		}
	}

	// write a small HTML snippet (a table with icon, name, and total installs
	// per app) that the widget can include
	$Handle = fopen('market_stats.txt', 'w');
	fwrite($Handle, '<table>');
	for($i = 0; $i < sizeof($apps); $i++) {
		fwrite($Handle, '<tr><td>');
		fwrite($Handle, '<img src="'.$apps[$i][icon].'" alt="'.$apps[$i][name].'"/>');
		fwrite($Handle, '</td><td>');
		fwrite($Handle, $apps[$i][name].'<br/>');
		fwrite($Handle, 'Total installs '.round($apps[$i][total]));
		fwrite($Handle, '</td></tr>');
	}
	fwrite($Handle, '</table>');
	fclose($Handle);

	// append one line per app to a simple history file
	$Handle = fopen('market_stats_history.txt', 'a');
	for($i = 0; $i < sizeof($apps); $i++) {
		fwrite($Handle, $apps[$i][package].', ');
		fwrite($Handle, round($apps[$i][total]).', ');
		fwrite($Handle, time());
		fwrite($Handle, "\n");
	}
	fclose($Handle);
}

$readHandle = fopen('market_stats.txt', 'r');
echo fread($readHandle, filesize('market_stats.txt'));
fclose($readHandle);
?>

Camera image->NDK->OpenGL texture

Since we are currently working on some augmented reality stuff for Android, I need to show the camera image using OpenGL ES. It works great with pure Java if one uses only the grayscale image. However, I needed the color image. The G1’s camera delivers the image in a YUV format while OpenGL only understands RGB images. Unfortunately it is out of the question to convert the YUV image to RGB in pure Java for images with 480×320 pixels. Thus, I used the NDK to implement the conversion. The code below does the job; it is based on code provided by Tom Gibara.

void toRGB565(unsigned short *yuvs, int widthIn, int heightIn, unsigned int *rgbs, int widthOut, int heightOut) {
  int half_widthIn = widthIn >> 1;

  //the end of the luminance data
  int lumEnd = (widthIn * heightIn) >> 1;
  //points to the next luminance value pair
  int lumPtr = 0;
  //points to the next chrominance value pair
  int chrPtr = lumEnd;
  //the end of the current luminance scanline
  int lineEnd = half_widthIn;

  int x,y;
  for (y = 0; y < heightOut; y++) {
    //each entry of rgbs holds two packed RGB565 pixels
    int yPosOut = (y * widthOut) >> 1;
    for (x = 0; x < half_widthIn; x++) {
      //read two luminance values at once
      int Y1 = yuvs[lumPtr++];
      int Y2 = (Y1 >> 8) & 0xff;
      Y1 = Y1 & 0xff;
      //read the chrominance pair shared by both pixels
      int Cr = yuvs[chrPtr++];
      int Cb = ((Cr >> 8) & 0xff) - 128;
      Cr = (Cr & 0xff) - 128;

      int R, G, B;
      //generate first RGB components
      B = Y1 + ((454 * Cb) >> 8);
      if (B < 0) B = 0; if (B > 255) B = 255;
      G = Y1 - ((88 * Cb + 183 * Cr) >> 8);
      if (G < 0) G = 0; if (G > 255) G = 255;
      R = Y1 + ((359 * Cr) >> 8);
      if (R < 0) R = 0; if (R > 255) R = 255;
      int val = ((R & 0xf8) << 8) | ((G & 0xfc) << 3) | (B >> 3);

      //generate second RGB components
      B = Y2 + ((454 * Cb) >> 8);
      if (B < 0) B = 0; if (B > 255) B = 255;
      G = Y2 - ((88 * Cb + 183 * Cr) >> 8);
      if (G < 0) G = 0; if (G > 255) G = 255;
      R = Y2 + ((359 * Cr) >> 8);
      if (R < 0) R = 0; if (R > 255) R = 255;
      rgbs[yPosOut+x] = val | ((((R & 0xf8) << 8) | ((G & 0xfc) << 3) | (B >> 3)) << 16);
    }
    //skip back to the start of the chrominance values when necessary
    chrPtr = lumEnd + ((lumPtr >> 1) / half_widthIn) * half_widthIn;
    lineEnd += half_widthIn;
  }
}

The code is not that optimized at the moment but can process a 480×320 image in ~25ms on my G1 (which is somewhat slow according to my student’s comments). In order to call this function from Java I needed a wrapper with a JNI signature:

/**
 * Converts the input image from YUV to a RGB 5_6_5 image.
 * The size of the output buffer must be at least the size of the input image.
 */
JNIEXPORT void JNICALL Java_de_offis_magic_core_NativeWrapper_image2TextureColor
  (JNIEnv *env, jclass clazz,
  jbyteArray imageIn, jint widthIn, jint heightIn,
  jobject imageOut, jint widthOut, jint heightOut,
  jint filter) {

	jbyte *cImageIn = (*env)->GetByteArrayElements(env, imageIn, NULL);
	jbyte *cImageOut = (jbyte*)(*env)->GetDirectBufferAddress(env, imageOut);


	toRGB565((unsigned short*)cImageIn, widthIn, heightIn, (unsigned int*)cImageOut, widthOut, heightOut);

	(*env)->ReleaseByteArrayElements(env, imageIn, cImageIn, JNI_ABORT);
}
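On the Java side this maps to a static native method in de.offis.magic.core.NativeWrapper (the class and method names follow from the JNI name above; the native library name in the snippet is an assumption):

package de.offis.magic.core;

import java.nio.ByteBuffer;

public class NativeWrapper {

    static {
        // name of the native library is an assumption
        System.loadLibrary("magic");
    }

    // Converts a YUV camera frame into an RGB565 image written into a direct
    // ByteBuffer (required because the native code uses GetDirectBufferAddress).
    public static native void image2TextureColor(
            byte[] imageIn, int widthIn, int heightIn,
            ByteBuffer imageOut, int widthOut, int heightOut,
            int filter);
}

The output buffer has to be created with ByteBuffer.allocateDirect(widthOut * heightOut * 2), since RGB565 uses two bytes per pixel; the filled buffer can then be uploaded as an OpenGL texture.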

To make it more interesting I added some filters to the camera image. There is a demo app in the market (direct link to the market). I tried to make the whole thing portable but would love to know if it works on other devices like the Motorola Milestone.
Sepia, Black & White, Fisheye, and Invert effects.