Wearable Technologies

NOUS supercomputer + Wearables = Unlimited possibilities


What is the problem with wearables?

Wearable devices are a great way to display content, but they lack processing power. With so many on-board sensors generating data, the question is: can we process this data in real time on a wearable device? In most cases the answer is no. COSMONiO has addressed this problem by integrating its NOUS supercomputer with wearable devices to offer real-time data processing.

How does NOUS work with wearables?

  • The wearable device captures data from different sensors.
  • The captured data is wirelessly transmitted to NOUS.
  • NOUS analyses the data in real-time using its ultra-fast parallel architecture.
  • NOUS streams the results back to the wearable device.
  • The wearable device displays the results.
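The round trip described above can be sketched in a few lines of Python. This is only a minimal simulation of the control flow: `capture_frame`, `analyse_on_nous`, and `display_on_wearable` are illustrative stand-ins, not the actual NOUS API, and the "analysis" is a trivial mean rather than a real parallel workload.

```python
# Minimal sketch of the wearable-to-NOUS round trip.
# All names here are illustrative; they are not the real NOUS API.

def capture_frame(sensor):
    """Stand-in for the wearable's sensor-capture step."""
    return {"sensor": sensor, "values": [0.1, 0.4, 0.7]}

def analyse_on_nous(frame):
    """Stand-in for NOUS's analysis: here, a trivial mean."""
    values = frame["values"]
    return {"sensor": frame["sensor"], "result": sum(values) / len(values)}

def display_on_wearable(result):
    """Stand-in for rendering the result back on the device."""
    return f"{result['sensor']}: {result['result']:.2f}"

def round_trip(sensor):
    frame = capture_frame(sensor)        # 1. capture
    result = analyse_on_nous(frame)      # 2-3. transmit and analyse
    return display_on_wearable(result)   # 4-5. stream back and display

print(round_trip("accelerometer"))
```

In a real deployment the two middle steps would cross a wireless link; the point of the sketch is simply that the wearable only captures and displays, while all heavy computation happens off-device.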

What are the applications?

Let’s take Google Glass as an example. With its on-board camera it can capture video and still images. What if we could run a Deep Learning application to recognise patterns in these images? Currently, neither Glass nor any paired mobile device has enough processing power to perform this task. This is where NOUS comes in, offering real-time analysis of the video stream.


Example


A navigation system for visually-impaired people

Problem

Many people assume that the main problem visually-impaired people experience is avoiding obstacles. As a result, a lot of research has focussed on developing 3D mapping systems to help with navigation in unknown areas. However, an expert team from the University of York led by Professor Helen Petrie identified a different problem as the leading one. Visually-impaired people can be trained to use a walking stick or a guide dog very efficiently. But if they need to go to the post office, how do they navigate the last few metres between the bus stop and the post-office door?

Research question

We asked ourselves: could we develop a system that helps visually-impaired people navigate the last few metres of their journey?

Solution

If the visually-impaired person wears a device such as Google Glass, we can use its camera to capture images of the street. Since we know the user’s GPS location, we can also access all the Google Street View data for the area.


Using computer vision, we can then compare the camera images to the Street View images.


Since we know the locations from which the Street View images were captured, we can work out the user’s position and give navigation instructions.
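The localisation idea can be sketched as follows. This is a toy illustration, not our production pipeline: the "descriptors" are made-up feature vectors rather than real image features, the coordinates are invented, and the position estimate is a simple similarity-weighted average of the known Street View capture locations.

```python
import math

# Toy sketch: match a camera image descriptor against Street View
# images whose capture locations (lat, lon) are known, then estimate
# the user's position as a similarity-weighted average of those
# locations. Descriptors, coordinates, and names are all illustrative.

def similarity(a, b):
    """Similarity as inverse Euclidean distance between feature vectors."""
    dist = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return 1.0 / (1.0 + dist)

def estimate_position(camera_desc, streetview):
    """streetview: list of (descriptor, (lat, lon)) pairs."""
    weights = [similarity(camera_desc, desc) for desc, _ in streetview]
    total = sum(weights)
    lat = sum(w * loc[0] for w, (_, loc) in zip(weights, streetview)) / total
    lon = sum(w * loc[1] for w, (_, loc) in zip(weights, streetview)) / total
    return lat, lon

# Made-up Street View database: descriptor plus capture location.
db = [
    ([0.9, 0.1, 0.0], (52.070, -0.630)),  # near the bus stop
    ([0.1, 0.8, 0.1], (52.071, -0.629)),  # mid-street
    ([0.0, 0.2, 0.9], (52.072, -0.628)),  # post-office door
]

lat, lon = estimate_position([0.0, 0.25, 0.85], db)
```

A real system would extract robust image features and match them geometrically; the sketch only shows how known capture locations turn image similarity into a position estimate, which can then drive the navigation instructions.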

Since Google Glass does not yet have enough processing power to perform this visual navigation task, we developed a proof-of-concept prototype using a more powerful iPhone. The following video demonstrates the concept.

Disclaimer: The copyright of this video belongs to the University of York

The future

Wearable devices with on-board cameras offer a wide range of possibilities for computer-vision and machine-learning applications. We have already developed an architecture for real-time video transmission between wearable devices and the NOUS supercomputer, allowing computationally demanding applications to run on wearables with minimal perceptible lag.


Project partners




© 2017 COSMONiO Limited • All rights reserved
Cranfield Innovation Centre, University Way, Cranfield, Bedford, MK43 0BT, United Kingdom • Registered in England, No. 07977865