Live video footage is fed from the handset to a central server, which rapidly matches on-screen objects to images previously entered into a database. The server then finds relevant information and sends it back to the user (…) The central server uses an algorithm called the Scale-Invariant Feature Transform to match objects. The algorithm uses hundreds or thousands of reference points, corresponding to physical features such as edges, corners or lettering, to find a match. The process works no matter how the object is oriented, but objects must first be carefully imaged and entered into the central database.
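The article doesn't go into detail, but the matching step it describes is easy to sketch: each object is represented by a set of high-dimensional descriptors (128-dimensional for real SIFT), and a query descriptor is accepted as a match when its nearest reference descriptor is much closer than the second nearest (Lowe's ratio test). A minimal toy sketch with made-up 4-dimensional descriptors, not the actual server implementation:

```python
import numpy as np

def match_descriptors(query, reference, ratio=0.75):
    """Match query descriptors against a reference set using Lowe's
    ratio test: accept a match only when the nearest reference
    descriptor is clearly closer than the second nearest."""
    matches = []
    for i, d in enumerate(query):
        dists = np.linalg.norm(reference - d, axis=1)
        nearest, second = np.argsort(dists)[:2]
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, int(nearest)))
    return matches

# Toy data: 50 reference descriptors; two queries that are slightly
# perturbed copies of reference descriptors 3 and 17.
rng = np.random.default_rng(0)
reference = rng.normal(size=(50, 4))
query = reference[[3, 17]] + rng.normal(scale=0.01, size=(2, 4))
print(match_descriptors(query, reference))  # query 0 -> ref 3, query 1 -> ref 17
```

The ratio test is what makes this robust: a descriptor with no good counterpart in the database tends to be roughly equidistant from several reference descriptors, so it is rejected rather than matched to noise.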
This is certainly a step forward compared to RFID and 2D barcodes such as Semacodes or QR codes. It reminded me of Atom tags, which could recognize existing logos and also used server-side shape analysis and pattern recognition.
Unlike these two techniques, existing 2D barcodes are not human-readable.
This is great news. I switched to Gmail for all my mail (work and personal) about half a year ago and have never looked back. I especially like having access to my mail, organized the way I want, from every computer, and I am an avid user of Gmail’s labels to tag my messages (e.g. Waiting, Later, etc.). Being able to use these features from within a desktop client makes the experience even better.
Lode wrote in his last post (among other things) that the hype around Virtual Reality is over. In my opinion, this doesn’t have to be negative. Maybe a fresh (and more realistic?) view on virtual reality and its possible uses can help.
As a comparison, look at the original promise of artificial intelligence (also called strong AI) versus the current, more realistic view (_weak AI_). Just as weak AI revived the field’s fortunes, Yvonne Rogers believes that Ubicomp research that enables people to become smart and proactive, instead of focusing on a smart environment as in Weiser’s original vision, can bring success to ubiquitous computing.
Speaking of ubiquitous computing, I think that research in ubiquitous computing and more natural forms of interaction can benefit in part from previous work in Virtual Reality. Virtual Reality provided a way to interact with a three-dimensional world (albeit a virtual one) instead of using the traditional keyboard and mouse, while one of the goals of ubiquitous computing is to interact in a natural way with the real world (which is of course three-dimensional).
Lode also referred to the Reality-Virtuality (RV) Continuum, which I had not heard of before. It will certainly be interesting to have a look at. I think it all depends on how you define things. Mark Weiser, for example, referred to ubiquitous computing as the opposite of Virtual Reality, namely embodied virtuality.
In my previous post I wrote about this year’s Master’s thesis students who will be working on Uiml.net. However, I hadn’t blogged yet about what last year’s students accomplished.
Ingo Berben chose to improve the standards compliance of our renderer for his Bachelor’s thesis. He eventually concentrated on the behavior section, and more specifically on supporting conditions other than events. The movie below shows how this can be used to support form validation: the renderer checks whether every field is filled in, and displays a message accordingly.
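The condition-based behavior boils down to a simple completeness check over the form’s fields. A rough Python sketch of that logic (the field names are hypothetical, and in the real renderer this lives in the UIML behavior section rather than in code like this):

```python
def missing_fields(fields):
    """Return the names of form fields that are still empty
    (blank or whitespace-only values count as empty)."""
    return [name for name, value in fields.items() if not value.strip()]

# Hypothetical form contents for illustration.
form = {"name": "Ingo", "email": "", "comment": "  "}
missing = missing_fields(form)
if missing:
    print("Please fill in: " + ", ".join(missing))  # email, comment
else:
    print("Form is complete.")
```

The point of Ingo’s work is that such a check can be expressed declaratively in the UIML document itself, so the renderer evaluates the condition without any application code.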
Rob Van Roey worked on support for multimodal user interfaces for his Master’s thesis. He implemented a new X+V backend for Uiml.net, making it the first backend that renders to another XML document. The movie shows a multimodal user interface for controlling a smart home, in which Rob turns off the lights and turns on the alarm after saying the correct password.
Finally, Jan Meskens created a UIML design tool on top of Uiml.net. The workspace of the tool is generated dynamically according to the loaded vocabulary. The movie shows the tool in action. Jan joined our ranks after his studies, and will continue to improve the UIML designer.
Ingo’s code is already available in a separate Uiml.net branch, and will soon be merged into the mainline. Jan is also working on integrating his work. When I find some time, I will probably merge Rob’s code as well.