Old 03-23-2011, 06:32 AM   #2356
alkemical
Quote:
Originally Posted by alkemical View Post
http://hackaday.com/2011/03/19/build...=Google+Reader

Building a home automation mesh network

[Ian Harris] designed a bunch of home automation for his parents using X10 hardware. He was a bit disappointed by the failure rate of the modules and the overall performance of the system, so he set out to replace it with his own hardware. Lucky for us, he's documented the journey in a four-part series about mesh networks.

The hardware seen above is his test rig. He's using a couple of SparkFun breakout boards to develop for the nRF2401A RF transceiver chips. These could be used as slave modules with a central command device, but due to the home's architecture, wireless signals don't propagate well from one end of the house to the other. The solution is to build a mesh network that lets each module act as a network node, receiving and passing on messages until they arrive at the target device. He's trying to do this with cheap hardware, selecting the PIC 16F88, which boasts 7 KB of program memory and 368 bytes of RAM. In the end it doesn't take much code to get this running; it's the concepts that take some time and research before you'll be comfortable working with them.
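
If the mesh part sounds abstract, here's the basic flood-routing idea a node runs, sketched in C# (his actual firmware targets the PIC, and all the names here are made up, not his code):

Code:
// Minimal sketch of flood routing in a mesh node: drop duplicates,
// handle packets addressed to us, relay everything else until the
// hop count (TTL) runs out. Names and packet format are invented.
using System;
using System.Collections.Generic;

class MeshNode
{
    readonly byte myId;
    readonly HashSet<ushort> seen = new HashSet<ushort>(); // message IDs already handled or relayed

    public MeshNode(byte id) { myId = id; }

    // Called whenever the radio hands us a packet.
    public void OnReceive(byte target, ushort msgId, byte ttl, byte[] payload)
    {
        if (!seen.Add(msgId)) return;   // duplicate -- the flood already reached us
        if (target == myId)
        {
            Console.WriteLine($"Node {myId}: executing command");
            return;
        }
        if (ttl == 0) return;           // hop limit reached, stop the flood
        Broadcast(target, msgId, (byte)(ttl - 1), payload);
    }

    void Broadcast(byte target, ushort msgId, byte ttl, byte[] payload)
    {
        // In the real build this would key the nRF2401A transceiver; stubbed here.
        Console.WriteLine($"Node {myId}: relaying msg {msgId} for node {target} (ttl {ttl})");
    }
}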
http://www.orangemane.com/BB/showthread.php?t=97457

There is an Arduino thread I started for anyone else interested.




http://www.gizmag.com/kinect-as-a-set-of-eyes/18179/

NAVI project turns Kinect into a set of eyes for the visually impaired



While we've looked at a couple of efforts to upgrade the humble white cane's capabilities, such as the ultrasonic Ultracane and the laser-scanning cane, the decidedly low-tech white cane is still one of the most commonly used tools to help the visually impaired get around without bumping into things. Now, through their project called NAVI (Navigation Aids for the Visually Impaired), students at Germany's Universität Konstanz have leveraged the 3D imaging capabilities of Microsoft's Kinect camera to detect objects that lie outside a cane's small radius and alert the wearer to the location of obstacles through audio and vibro-tactile feedback.

[Photo captions: the vibrotactile waist belt; debug view of the software used to tune the depth-processing parameters; the Kinect camera mounted on a Sugru socket and fixed with duct tape; the backpack used to hold the laptop]

That's right, I said "wearer" because the system created by Master's students Michael Zöllner and Stephan Huber places the Kinect camera atop the visually impaired person's head thanks to a hard hat, some Sugru and a liberal application of duct tape. The image and depth information captured by the Kinect camera is sent to a Dell laptop mounted in a backpack, which is connected via USB to an Arduino 2009 board glued to a fabric belt worn around the waist.
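
No idea what their actual wire protocol looks like, but the laptop-to-Arduino link could be as simple as one intensity byte per motor over the USB serial port. The 6-byte frame and the port name are my guesses, sketched in C#:

Code:
// Hypothetical laptop -> Arduino link: one vibration-intensity byte
// per motor, written straight to the serial port the Arduino enumerates as.
using System.IO.Ports;

class BeltLink
{
    readonly SerialPort port = new SerialPort("COM3", 9600); // port name is an assumption

    public BeltLink() { port.Open(); }

    // intensities[0..5]: upper/lower motors at left, center, right (0-255 each)
    public void Send(byte[] intensities)
    {
        port.Write(intensities, 0, 6);
    }
}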

The depth information captured by the Kinect camera is processed by software on the laptop and mapped onto three pairs of Arduino LilyPad vibration motors located at the upper and lower left, center and right of the fabric belt. When a potential obstacle is detected, its location is conveyed to the wearer by the vibration of the relevant motor.
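
Here's roughly how that depth-to-motor mapping could work: split the frame into thirds, find the nearest obstacle in each, and vibrate harder the closer it is. The distance thresholds are my own guesses, not the students':

Code:
// Sketch of mapping a Kinect depth frame onto three belt zones.
// Anything nearer than ~0.5 m pegs the motor; anything past ~2 m is ignored.
using System;

static class DepthToBelt
{
    // depthMm: one depth value in millimetres per pixel, row-major.
    // Returns one intensity (0-255) per zone: left, center, right.
    public static byte[] MapDepthToMotors(ushort[] depthMm, int width, int height)
    {
        var intensity = new byte[3];
        for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
        {
            ushort d = depthMm[y * width + x];
            if (d == 0) continue;         // Kinect reports 0 for "no reading"
            int zone = x * 3 / width;     // which third of the image we're in
            byte v = d >= 2000
                ? (byte)0
                : (byte)(255 - (Math.Max(d, 500) - 500) * 255 / 1500);
            if (v > intensity[zone]) intensity[zone] = v; // keep the nearest obstacle per zone
        }
        return intensity;
    }
}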

A Bluetooth headset also provides audio cues and can be used to provide navigation instructions and read signs using ARToolKit markers placed on walls and doors. The Kinect's depth detection capabilities allow navigation instructions to vary based on the distance to a marker. For example, as the person walks towards a door, they will hear "door ahead in 3, 2, 1, pull the door."
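
That countdown could be as simple as speaking the next cue each time the marker's estimated distance crosses a threshold. This uses the .NET SpeechSynthesizer the write-up mentions below; the thresholds and phrasing are invented:

Code:
// Hypothetical distance-triggered door countdown. Update() is fed the
// marker distance from each ARToolKit pose estimate; each threshold
// crossing speaks the next cue exactly once.
using System.Speech.Synthesis;

class DoorAnnouncer
{
    static readonly (double meters, string cue)[] Cues =
    {
        (4.0, "door ahead"), (3.0, "3"), (2.0, "2"), (1.0, "1"), (0.5, "pull the door"),
    };

    readonly SpeechSynthesizer voice = new SpeechSynthesizer();
    int next = 0;

    public void Update(double meters)
    {
        while (next < Cues.Length && meters <= Cues[next].meters)
            voice.SpeakAsync(Cues[next++].cue);
    }
}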

The students see their system as having advantages over other point-to-point navigation approaches using GPS – which don't work indoors – and seeing-eye dogs – which must be trained for certain routes, cost a lot of money and get tired.

For their NAVI project, the Universität Konstanz students wrote the software in C# and .NET and used the ManagedOpenNI wrapper for the Kinect and the managed wrapper of the ARToolKitPlus for marker tracking. The voice instructions are synthesized using Microsoft's Speech API and all input streams are glued together using Reactive Extensions for .NET.
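
For anyone curious what that Rx "glue" looks like, here's a tiny sketch. The stream names and rates are hypothetical; only the Rx operators themselves are real:

Code:
// Treating depth frames and marker distances as observable streams and
// wiring them to the belt and the voice output -- the gluing style the
// write-up describes, with invented stream names.
using System;
using System.Reactive.Disposables;
using System.Reactive.Linq;
using System.Reactive.Subjects;

class Pipeline
{
    public readonly Subject<ushort[]> DepthFrames = new Subject<ushort[]>();
    public readonly Subject<double> MarkerDistance = new Subject<double>();

    public IDisposable Wire(Action<ushort[]> toBelt, Action<double> toVoice)
    {
        // Limit the belt to ~10 updates/sec; speak only on changed distances.
        var belt = DepthFrames.Sample(TimeSpan.FromMilliseconds(100)).Subscribe(toBelt);
        var voice = MarkerDistance.DistinctUntilChanged().Subscribe(toVoice);
        return new CompositeDisposable(belt, voice);
    }
}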