Update: This article and all subsequent articles have moved to:
The goal of this article is to describe the process of getting a Raspberry Pi to successfully interact with a Kinect sensor.
Note: I am using a Macbook Air running Mountain Lion as mission control on this journey, so any specific commands noted here will be Mac-specific.
Let’s jump into it.
Step 1 - Burn Raspbian Wheezy to SD card
Download the Raspbian Wheezy disk image and unzip it. Open Terminal and navigate to the folder that contains the disk image. Find the disk number n of your SD card (About This Mac -> System Report -> Card Reader -> disk2 for me). Unmount the SD card using Disk Utility (but do not eject it), then run:
sudo dd if=path_of_your_image.img of=/dev/diskn bs=1m
This will take a while.
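For reference, the whole Mac-side sequence looks roughly like this (the image filename and disk number are placeholders; substitute your own):

```shell
# Find the SD card's BSD device name (e.g. disk2)
diskutil list

# Unmount (not eject) every volume on the card
diskutil unmountDisk /dev/disk2

# Write the image; the raw device (rdisk2) is usually much faster than disk2
sudo dd if=path_of_your_image.img of=/dev/rdisk2 bs=1m

# Eject before pulling the card
diskutil eject /dev/disk2
```

Using `/dev/rdiskn` instead of `/dev/diskn` bypasses the buffer cache and can cut the write time considerably.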
Now eject the SD card and insert it into the Pi's card reader. Boot up the Pi. On first boot you'll be prompted to configure some things. I chose to expand the filesystem to fill the card, since it's an 8GB card but the disk image is only 2GB. I also chose to boot straight to the desktop, enabled SSH (optional, but handy), and set the timezone.
Reboot the Pi.
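If you skipped anything on first boot, the same configuration tool can be re-run at any time (the menu entry names below are from the wheezy-era tool and may vary slightly by version):

```shell
# Re-open the first-boot configuration menu
sudo raspi-config

# Entries used above:
#   expand_rootfs   - grow the root partition to fill the SD card
#   boot_behaviour  - boot straight to the desktop
#   ssh             - enable the SSH server
#   change_timezone - set the timezone
```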
Now we should have a working Pi.
And for reference, the default Raspbian login credentials are username pi, password raspberry.
Step 2 - Hook up the Kinect
From the research I’ve done, there are two options: 1) libfreenect, and 2) SensorKinect. Option 1 gives you easy access to the raw data streams (RGB and depth) from the Kinect, so it’s better if you want to custom-build your own stuff. Option 2 gives you access to the NITE middleware, which provides skeleton tracking and the like. I’ll go with option 2 this time.
Start with the OpenNI SDK:
Download the tarball, extract it, and run the installer:
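The OpenNI Linux bundles ship with an install script; the sequence looks roughly like this (the exact filename and version are assumptions — use whatever tarball you downloaded):

```shell
# Extract the OpenNI SDK tarball (filename/version is an assumption)
tar xjf OpenNI-Bin-Dev-Linux-Arm-v1.5.4.0.tar.bz2
cd OpenNI-Bin-Dev-Linux-Arm-v1.5.4.0

# The bundle includes an install script that copies libs and registers modules
sudo ./install.sh
```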
Ok, that worked. But I’m having trouble with the next part, namely installing SensorKinect.
Currently installing libusb so I can at least try the OpenNI samples. I’ll need to hook up the Kinect for that.
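On Raspbian the easiest route is usually the Debian package rather than building from source (the package name below follows Debian wheezy's naming and is an assumption):

```shell
# Install the libusb 1.0 development headers from the Raspbian repos
sudo apt-get update
sudo apt-get install libusb-1.0-0-dev
```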
Meta-Step - Hardware Note