- Valentin Schwind, Niklas Deierlein, Romina Poguntke, Niels Henze: Understanding the Social Acceptability of Mobile Devices using the Stereotype Content Model, Proceedings of CHI, 2019
- Sven Mayer, Valentin Schwind, Huy Viet Le, Dominik Weber, Jonas Vogelsang, Johannes Wolf, Niels Henze: Effect of Orientation on Unistroke Touch Gestures, Proceedings of CHI, 2019
- Ashley Colley, Sven Mayer, Niels Henze: Investigating the Effect of Orientation and Visual Style on Touchscreen Slider Performance, Proceedings of CHI, 2019
- Valentin Schwind, Pascal Knierim, Nico Haas, Niels Henze: Using Presence Questionnaires in Virtual Reality, Proceedings of CHI, 2019
- Alexandra Voit, Sven Mayer, Valentin Schwind, Niels Henze: Online, VR, AR, Lab, and In-Situ: Comparison of Research Methods to Evaluate Smart Artifacts, Proceedings of CHI, 2019
Tutorial on Intelligent Mobile User Interfaces @ MobileHCI
Together with Sven and Huy, I'll give a tutorial on Machine Learning for Intelligent Mobile User Interfaces using TensorFlow. One key feature of TensorFlow is the ability to compile a trained model so that it runs efficiently on mobile phones, which opens up a wide range of opportunities for researchers and developers. In the tutorial, we teach attendees the two basic steps needed to run neural networks on a mobile phone: first, how to develop neural network architectures and train them in TensorFlow; second, how to run the trained models on a mobile phone.
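Just to illustrate the second step (this is a sketch, not the exact pipeline we use in the tutorial): running a model that has already been converted for mobile use through the TensorFlow Lite C API looks roughly like this. The model file name, tensor sizes, and thread count below are placeholders.

#include <stdio.h>
#include "tensorflow/lite/c/c_api.h"

int main(void) {
    /* Load a model that was trained in TensorFlow and converted to the
     * .tflite format (the file name is a placeholder). */
    TfLiteModel *model = TfLiteModelCreateFromFile("gesture_model.tflite");
    TfLiteInterpreterOptions *options = TfLiteInterpreterOptionsCreate();
    TfLiteInterpreterOptionsSetNumThreads(options, 2);
    TfLiteInterpreter *interpreter = TfLiteInterpreterCreate(model, options);
    TfLiteInterpreterAllocateTensors(interpreter);

    /* Feed one input sample; the size must match the model's input tensor. */
    float input[32] = {0};
    TfLiteTensor *input_tensor = TfLiteInterpreterGetInputTensor(interpreter, 0);
    TfLiteTensorCopyFromBuffer(input_tensor, input, sizeof(input));

    /* Run inference and read the prediction back from the output tensor. */
    TfLiteInterpreterInvoke(interpreter);
    float output[4] = {0};
    const TfLiteTensor *output_tensor = TfLiteInterpreterGetOutputTensor(interpreter, 0);
    TfLiteTensorCopyToBuffer(output_tensor, output, sizeof(output));
    printf("first class score: %f\n", output[0]);

    TfLiteInterpreterDelete(interpreter);
    TfLiteInterpreterOptionsDelete(options);
    TfLiteModelDelete(model);
    return 0;
}

On Android one would typically call the equivalent Java API instead, but the steps are the same: load the converted model, copy the input into the input tensor, invoke the interpreter, and read the output tensor.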
CHI 2016 Videos
The Effect of Focus Cues on Separation of Information Layers
Video for our CHI 2016 paper “The Effect of Focus Cues on Separation of Information Layers”, written by Patrick Bader, Niels Henze, Nora Broy and Katrin Wolf.
Impact of Video Summary Viewing on Episodic Memory Recall
Video for our CHI 2016 paper “Impact of Video Summary Viewing on Episodic Memory Recall”, written by Huy Viet Le, Sarah Clinch, Corina Sas, Tilman Dingler, Niels Henze, and Nigel Davies.
CHI 2014 Videos
Large-Scale Assessment of Mobile Notifications
Our CHI 2014 video for our paper Large-Scale Assessment of Mobile Notifications, written by Alireza Sahami Shirazi, Niels Henze, Tilman Dingler, Martin Pielot, Dominik Weber, and Albrecht Schmidt.
Exploiting Thermal Reflection for Interactive Systems
Our CHI 2014 video for our paper Exploiting Thermal Reflection for Interactive Systems, written by Alireza Sahami Shirazi, Yomna Abdelrahman, Niels Henze, Stefan Schneegass, Mohammadreza Khalilbeigi, and Albrecht Schmidt.
Delay Time for Pre-Moderated User-Generated Content on Public Displays
Our CHI 2014 video for our note I Can Wait a Minute: Uncovering the Optimal Delay Time for Pre-Moderated User-Generated Content on Public Displays, written by Miriam Greis, Florian Alt, Niels Henze, and Nemanja Memarovic.
Markerless Object Recognition on a Mobile Phone
I implemented a markerless object recognition algorithm that processes multiple camera images per second on recent mobile phones. The algorithm combines a stripped-down SIFT with a scalable vocabulary tree and simple feature matching.
Based on this algorithm, we implemented a simple application, which is shown in the video below. The approach is described in more detail in a paper titled "What is That? Object Recognition from Natural Features on a Mobile Phone" that we submitted to MIRW '09.
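To give a rough idea of how the vocabulary tree is used (a simplified sketch, not our actual implementation; the branching factor, descriptor length, and data layout are assumptions): each descriptor is quantized by descending the tree towards the nearest cluster centre, and the database images stored at the resulting leaf receive a vote.

#define VT_BRANCHES 10      /* branching factor (assumption) */
#define VT_DESC_LEN 128     /* descriptor length (assumption) */

struct vt_node {
    unsigned char center[VT_DESC_LEN];     /* cluster centre of this node */
    struct vt_node *children[VT_BRANCHES];
    int child_count;                       /* 0 for a leaf ("visual word") */
    const int *image_ids;                  /* inverted file: images seen at this leaf */
    int image_count;
};

/* Squared distance between a descriptor and a cluster centre. */
static int vt_ssd(const unsigned char *a, const unsigned char *b)
{
    int i, d, sum = 0;
    for (i = 0; i < VT_DESC_LEN; ++i) { d = a[i] - b[i]; sum += d * d; }
    return sum;
}

/* Quantize one descriptor: descend towards the nearest centre and
 * vote for all database images indexed at the leaf we end up in. */
static void vt_vote(const struct vt_node *root, const unsigned char *desc, int *votes)
{
    const struct vt_node *node = root;
    int i;
    while (node->child_count > 0) {
        int best = 0, best_dist = vt_ssd(desc, node->children[0]->center);
        for (i = 1; i < node->child_count; ++i) {
            int dist = vt_ssd(desc, node->children[i]->center);
            if (dist < best_dist) { best_dist = dist; best = i; }
        }
        node = node->children[best];
    }
    for (i = 0; i < node->image_count; ++i)
        votes[node->image_ids[i]]++;
}

After all descriptors of a camera frame have voted, the database image with the most (suitably weighted) votes is taken as the recognized object.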
The beauty of ARM assembler
After realizing that Visual Studio's support for XScale intrinsics is somewhat buggy, I took a look at ARM assembler. The SSD function I need is quite simple, so it was easy to implement (although it actually took me quite some time to find the assembler on my disk). Since my data is only aligned to 32 bits, I had to stick to loading 4 bytes at a time. It looks like this:
squared_distance_asm proc
    wldrw     wR0, [r0]        ; load 4 bytes into wR0
    wzero     wR10             ; wR10 := 0
    wldrw     wR1, [r1]        ; load 4 bytes into wR1
    wunpckilb wR2, wR0, wR10   ; zero-extend the 4 bytes of wR0 to 16-bit halfwords
    wunpckilb wR3, wR1, wR10   ; zero-extend the 4 bytes of wR1 to 16-bit halfwords
    wsubhss   wR2, wR2, wR3    ; halfword differences with signed saturation
    wldrw     wR0, [r0, #4]    ; load the next 4 bytes into wR0
    wmacsz    wR13, wR2, wR2   ; square the differences and sum them in wR13
    wldrw     wR1, [r1, #4]    ; load the next 4 bytes into wR1
    wunpckilb wR2, wR0, wR10
    wunpckilb wR3, wR1, wR10
    wsubhss   wR2, wR2, wR3
    ; repeat the above as often as necessary
    ; return the result
    tmrrc     r0, r1, wR13     ; move the 64-bit sum from wR13 to r0 (low) and r1 (high)
    mov       pc, lr           ; return to C with the return value in R0
    endp
    end
Loads and calculations are interleaved to reduce pipeline stalls. I haven't analyzed it in detail, but the assembler version needs about 25% less time than the intrinsics version, which in turn needs about 25% less time than the naive C version. Still, both the assembler and the intrinsics versions are slower than I expected; probably they are not properly inlined.
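For reference, the naive C version is essentially a plain loop like the following sketch (the exact loop I benchmarked may have differed slightly):

/* Naive scalar sum of squared differences of two byte vectors. */
int squared_distance_c(const unsigned char *a, const unsigned char *b, int len)
{
    int i, d, sum = 0;
    for (i = 0; i < len; ++i) {
        d = (int)a[i] - (int)b[i];
        sum += d * d;
    }
    return sum;
}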
WMMX is buggy in Visual Studio 2008
I implemented an object recognition algorithm for Windows Mobile 6 using Visual Studio 2008. Once it somehow worked, I thought about improving its performance to process more images per second. One part of my implementation computes the sum of squared differences (SSD) of two byte vectors (some million times per second, of course). My device is an ASUS P535 with an XScale processor, so I opted for Wireless MMX. Since inline assembler is not supported for ARM processors, I used the corresponding WMMX intrinsics.
My initial attempt to compute the SSD of two 8-byte vectors looked as follows:
// Computes the sum of squared differences for eight byte values
int squared_distance(unsigned char *a, unsigned char *b) {
    __m64 zero = _mm_setzero_si64();    // all-zero constant used for unpacking
    __m64 result = _mm_setzero_si64();  // accumulator for the sum of squares
    __m64 v1 = *((__m64*)(a));
    __m64 v2 = *((__m64*)(b));
    __m64 v3 = _mm_subs_pi16(_mm_unpacklo_pi8(v2, zero), _mm_unpacklo_pi8(v1, zero));
    result = _mm_mac_pi16(result, v3, v3);
    __m64 v4 = _mm_subs_pi16(_mm_unpackhi_pi8(v2, zero), _mm_unpackhi_pi8(v1, zero));
    result = _mm_mac_pi16(result, v4, v4);
    return result.m64_i32[0];
}
Of course, the function must be adapted to the actual length of the vector to be useful. However, the function returned completely random results. It took me a while to puzzle out why: the values loaded into v1 and v2 are already wrong. __m64 v1=*((__m64*)(a)); should load 8 bytes into v1 but loads only 4 bytes into the lower half of v1; the other 4 bytes seem to be random. I tested a bunch of other ways to load values into a __m64 variable and all failed in the same way.
Looking into the assembly code generated by the compiler reveals that instead of a wldrd instruction (which actually loads 8 bytes), the compiler emits a wldrw instruction (which loads only 4 bytes). It might be a compiler bug, and I assume it is related to the alignment of the arrays. Intel's assembler reference manual says that in order to load 8 bytes into a WMMX register, the data must be aligned to 8 bytes. However, Microsoft's documentation of the WMMX intrinsics tells us that if "data is not appropriately aligned, the program will throw an exception". I got no exception, and I also tried to align the data properly.
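For completeness, forcing 8-byte alignment in Visual C++ would normally look like the sketch below, so that wldrd would at least be legal according to Intel's manual; in my case it made no difference.

#include <stdlib.h>

/* Statically allocated vectors aligned to an 8-byte boundary. */
__declspec(align(8)) unsigned char desc_a[8];
__declspec(align(8)) unsigned char desc_b[8];

/* For heap memory: over-allocate and round the pointer up to the next
 * 8-byte boundary (the original pointer must be kept around for free()). */
unsigned char *alloc_aligned8(size_t n)
{
    unsigned char *raw = (unsigned char *)malloc(n + 7);
    if (raw == NULL)
        return NULL;
    return (unsigned char *)(((size_t)raw + 7) & ~(size_t)7);
}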