The Fluid Wall Chapter


Late last year, the complete development process of the Fluid Wall application was published as an online chapter in the book 'Mastering OpenCV with Practical Computer Vision Projects'. The book contains some absolutely amazing projects by experts in OpenCV and the field of computer vision.

I am in no way an OpenCV expert, and am very grateful to have our Fluid Wall project included in the book as an online add-on. In this humble chapter I attempt to give a detailed explanation of how we integrated Kinect data, through OpenNI and OpenCV, with Jos Stam's fluid simulation to create a fun, interactive program.


Furthermore, I would like to make an acknowledgement here:


    I wanted to especially mention the work of another student from Texas A&M, whose name you will undoubtedly come across in the code included with this chapter. Fluid Wall was developed as part of a student project by Austin Hines and myself. Major credit for the project goes to Austin, as he was the creative mind behind it. He was also responsible for the arduous job of implementing the fluid simulation code into our application. He wasn't able to participate in writing this chapter due to a number of work- and study-related preoccupations, but loads of credit for this project goes to him.

See the following links for more details about the project and the book.

Project FluidWall


This is a project I have been working on with another student from the Texas A&M Visualization Lab, Austin Hines.

Here's a link to the project's initial draft on Austin's blog.
We've also made this project open source and created a whole separate blog for FluidWall.
FluidWall Source: http://code.google.com/p/fluidwall/
FluidWall Blog: http://fluidwall.blogspot.com

The idea was to use the depth information and user tracking from the Kinect to interact with a fluid simulation on the screen. Austin Hines worked on all aspects related to the fluid simulation, while I handled exploring and integrating the SDK that would let us communicate with the Kinect most easily. Using the OpenNI framework along with the NITE middleware, we were able to get some really nice and simple depth data working with our simulation. This allowed silhouettes of people / objects (whatever the depth sensor returned) to interact with fluid on the screen. Any movement would trigger the fluid, which would dissipate in a bluish-white whirl, as depicted in the pictures of the original idea on Austin's blog (mentioned above).

Soon we were able to track users and send appropriate data into the fluid simulation. This allowed us to start emitting different colors for different users.
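For a taste of what that integration looks like, here's a minimal sketch of the kind of OpenNI 1.x calls involved. This is not the actual Fluid Wall source (that's linked below); it assumes the OpenNI 1.x C++ API with NITE installed, and error checking is omitted for brevity:

#include <XnCppWrapper.h>   // OpenNI 1.x C++ wrapper

int main()
{
    xn::Context context;
    context.Init();                     // initialize OpenNI

    xn::DepthGenerator depth;
    depth.Create(context);              // depth stream from the Kinect

    xn::UserGenerator user;
    user.Create(context);               // user tracking (via NITE middleware)

    context.StartGeneratingAll();
    while (true)
    {
        context.WaitAndUpdateAll();     // wait for a new frame

        xn::DepthMetaData depthMD;
        depth.GetMetaData(depthMD);     // per-pixel depth values

        xn::SceneMetaData sceneMD;
        user.GetUserPixels(0, sceneMD); // per-pixel user IDs (0 = background)

        // depthMD.Data() and sceneMD.Data() can be wrapped in cv::Mat
        // and fed to the fluid solver as per-user emitters.
    }
    return 0;
}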

The final result was quite amazing...


The Kinect SDK of Choice [OpenNI + Primesense + CLNUI]


Driving the Kinect Camera
Using the OpenNI SDK with PrimeSense's driver and middleware, life has been much simpler :)
Here's what it can do right after the installation:



The Motor was Another Story Altogether
The only problem with the PrimeSense driver was that it did not support the Kinect motor. So, after several experiments and some time searching online forums for a solution, we discovered that the PrimeSense Sensor's camera driver and the CLNUI motor + audio drivers can be installed to run side by side.

This is what the Device Manager might look like on a Windows XP machine with the two installed together:

On a Win XP Machine
On Windows 7 it somehow mixes up the CL Devices and PrimeSensor names, but the drivers still work fine.

How to make them work together (on Windows):
We had to do the triple installation of OpenNI + Sensor + NITE; the installation of the CLNUI Devices could be done either before or after these (we've tried both orders on different machines).

Once installed, the Kinect will probably have the first driver you installed as its primary driver. So you'll have to go into the Device Manager and manually change the driver for the camera to 'Kinect Camera' and the motor to 'NUI Motor'.

And now, for something entirely different: Taming the Kinect




I had started a separate blog for my Kinect-based projects, but somehow it doesn't seem to be as popular. Therefore I've decided to transfer my Kinect-related posts to this blog. So, here goes:

-------------------------------------

Here I share some general information and helpful links to all the Kinect-related open-source content and websites I found along the way while searching for a way to use the Kinect on my computer:

The OpenKinect Community
OpenKinect is a very informative open community for updates and latest information on the current progress on the Kinect libraries and other software being developed by users all over the internet. There is also a lot of useful hardware information for understanding how the Kinect works and about USB devices and protocols in general for anyone interested in finding out more.

Here are all the open-source and freely available SDKs I came across when I first started searching for ways to use the Kinect with my computer. The pros and cons discussed here are based solely on my own experience using these SDKs.


Attempt #1: Code Lab's NUI Platform

The first result I found when searching for a Kinect SDK was Code Laboratories' NUI Platform. The libraries were developed by AlexP, the first person to hack the Kinect sensor for use with a computer after its release.

Pros:
  • Pristine C# libraries and sample code.
  • Using Visual Studio Express, the provided sample code compiles and runs quite easily.
  • A C++ API is also available from the Code Lab website (no C++ samples though).
  • Installation and running the sample code was a breeze on Windows 7 (I tried it on a quad-core machine with an NVidia GT9800 graphics card).
Cons:
  • No samples for C/C++ (I found sample code by another user on their forum, but I wasn't able to run it). Still, writing a simple C++ program against their API was easy enough once I figured out the flow of control from the C# samples.
  • There are no skeleton-extraction and/or tracking options in this SDK (yet). One could possibly work out how to use some middleware (like the NITE libraries available for use with the OpenNI framework) to accomplish that, but there isn't any support available on the CodeLab website for skeleton detection and tracking (...yet).
  • Doesn't seem to work on my old Dell laptop (XPS M1210 running WinXP: dual core, NVidia Go7400). The RGB video would show, but it refused to display any depth data. (The laptop does have USB 2.0, so it's not a USB port issue.) I don't believe the problem is caused by any requirements of the Kinect; the limitation is more likely on the driver/SDK side.
Still, overall a very neat and nicely put-together SDK without any messy compilations or multiple installs: just install the 'CL NUI Platform Installer' (available in the right-hand-side panel) and you're done. That installs the drivers and the SDK, sets up all the required Windows paths, and even adds the samples to the install directory with Visual Studio project solutions ready to run.

Very neat, indeed!

Attempt #2: Libfreenect
This is the platform actively supported by the OpenKinect community. It is constantly being updated and includes a ton of user support on its forums.

Pros:

  • Cross-platform drivers, SDK, and support.
  • Wrappers for multiple languages.
  • Very detailed documentation.
  • Both high-level and low-level APIs available.
  • Compilation instructions for several programming environments, including Visual Studio.
Cons:
  • No pre-compiled binaries for the SDK. You have to compile it yourself for your specific system and programming environment.
  • Complicated compilation process: several different libraries have to be downloaded from various sources and installed separately from the main code.
  • No people-tracking or gesture/skeleton-tracking options. No integration with middleware (such as NITE) available.



Attempt #3: OpenNI + PrimeSense Sensor + PrimeSense NITE

This was a set of three different components that had to be installed separately, but they worked together quite nicely to get the most out of my Kinect experience! The setup comprises the following three pieces, installed in this order: first OpenNI (the framework), then the PrimeSense Sensor (the camera driver), and finally PrimeSense NITE (the middleware).
There are some compatibility issues between different versions of these; however, the combination of versions on this site (although a little old) worked together perfectly for me.

Pros:
  • Works great on both computers I tried it on: the Win7 quad-core machine as well as the WinXP dual-core laptop (both mentioned above)!
  • Cross-platform (Windows & Mac OS X support; I wasn't able to find info on Linux usage).
OpenNI Installation
  • Works great when all the right versions are installed in the correct order.
  • Several different samples provided, with source and pre-compiled binaries, for doing all sorts of stuff: from getting simple RGB & depth images to detecting people as separate objects and tracking them.
  • Documentation included! (Very detailed too!)
NITE Installation
  • NITE installs separately and provides its own list of amazing samples. It works as the middleware for detecting people when using OpenNI.
  • Includes its own documentation! (Also very detailed.)
  • Several samples provided (separate from the OpenNI ones), again with both source and binaries included. Lets you explore several hand-tracking options.

Cons:
  • Making sure I had all the compatible versions of OpenNI and PrimeSense was a bit of a pain.
  • Installs as three parts, as compared to the single installation for the CL NUI Devices mentioned above.
  • No motor or audio driver included in the PrimeSense Sensor (not a game-breaker though; it plays well with the CLNUI package, so you can control the motor through the CLNUI SDK).

Fritzing.org: Create Awesome PCB layouts to document your projects

From the Fritzing website:

"We are creating a software and website in the spirit of Processing and Arduino, developing a tool that allows users to document their prototypes, sharethem with others, teach electronics in a classroom, and to create a pcb layout for professional manufacturing."



I'll be trying my hand at the Fritzing software next time I work on an Arduino project. Maybe one day I'll even have time to go back and fix my older posts to have better documentation à la Fritzing! :)

Maxbotix XL-EZ2: Ultrasonic Range Finder

The ultrasonic range finder is a sensor that measures the distance between itself and the nearest object. It sends out an ultrasonic sound pulse and measures the time taken for it to bounce back off the closest object in front of it. From the length of that time interval, it determines the range (distance) to the nearest object in inches.
This kind of sensor can be used in robotics to help navigate unmanned vehicles and keep them from running into walls or other objects: the rangefinder detects when there is something in front of the vehicle, and the vehicle or robot can be programmed to either stop or try to go around the obstacle. Ultrasonic rangefinders can also be used in stationary installations, like machines, appliances, or artwork, to activate certain switches based on the distance of a person from the sensor.
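To get a feel for the timescales involved (rough numbers for illustration, not from the datasheet): sound travels through air at roughly 343 m/s, and the pulse makes a round trip, so an object one meter away returns its echo after about

T = 2 x d / v = 2 x 1.0 m / 343 m/s ≈ 5.8 ms

which is the kind of interval the sensor converts into a range reading.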

About the Maxbotix XL-EZ2 Ultrasonic Rangefinder:

The MaxSonar XL-EZ2 ultrasonic rangefinder by MaxBotix used here gives the user the option of reading the range as serial, analog, or PWM output. The sample code below uses the analog method (pin 3 on the rangefinder) to read in the ranges. The readings appear to be precise from around 20 cm out to around five meters; distances below ~20 cm fall into a dead zone when read through the rangefinder's analog interface.





It is also possible to read the data in PWM or serial format on this module; separate pins are used for these (pin 2 for PWM, pin 5 for serial data). The user can also set the serial output pin's mode to emit either asynchronous serial data or simply a pulse. The beam size of the rangefinder also varies with the input voltage: it can be connected to anywhere from 3.3V up to 5V to increase the total area the ultrasonic beam covers. The code below works at either voltage.
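As a quick sketch of the PWM alternative (this isn't part of the sample code below; the Arduino pin number and the ~58 µs/cm scale factor are my assumptions based on the XL-MaxSonar datasheet, so verify against your own sensor's documentation), reading the pulse with pulseIn() would look something like this:

#define PW_PIN  7       // Arduino digital pin wired to rangefinder pin2 (PWM)

void setup()
{
  Serial.begin(9600);
  pinMode(PW_PIN, INPUT);
}

void loop()
{
  // pulse width is proportional to distance (~58 us per cm on the XL series)
  unsigned long pulseWidth = pulseIn(PW_PIN, HIGH);
  Serial.println(pulseWidth / 58);   // print distance in cm
  delay(50);
}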




Circuit:




Schematic:
Sample Arduino Program:
This code simply takes the input from the rangefinder's analog pin and checks it against a threshold to light up an LED if an object is too close. 
/*    
This code takes in the input from the rangefinder's analog pin.
The pin returns distance (in inches) from the sensor to the
closest object in front of it. If an object is closer than the
threshold distance, the LED lights up.

Setup:
Connect the analog pin (3/AX) of the rangefinder to the Arduino
Board's Analog input pin0. Connect the ground and v++ on the
rangefinder to a GND and 5V on the Arduino respectively. Connect
the LED to pin13 and GND.

By Naureen Mahmood

*/

#define THRESHOLD  30      // threshold distance in inches
#define LED_PIN    13      // LED output pin
#define RF_PIN      0      // Range Finder input pin

void setup()
{
  Serial.begin(9600);
  pinMode(RF_PIN, INPUT); 
  pinMode(LED_PIN, OUTPUT); 
}

void loop()
{
  // Read distance on Rangefinder's analog pin
  int distance = analogRead(RF_PIN);
  Serial.println(distance);          // Print measured distance
 
  // if an object is closer than threshold distance, turn on LED
  if (distance < THRESHOLD)
    digitalWrite(LED_PIN, HIGH);
  else
    digitalWrite(LED_PIN, LOW);
}



Sample Arduino Program (with averaged values to avoid jitter):
In this code, to avoid jitter, we take an average of 50 input samples before printing out the measured distance. If the averaged distance is less than the threshold (40 inches here), the LED on pin 13 is turned on.

/*    

This code takes in the input from the rangefinder's analog pin. The pin
returns distance (in inches) from the sensor to the closest object in
front of it. To avoid jitter, we take an average of 50 input samples
before printing out the measured distance. If the measured distance is
less than 40 inches, the LED on pin 13 is turned on.

Setup:
Connect the analog pin (AX/pin3) of the rangefinder to the Arduino
Board's Analog input pin0. Connect the ground and v++ on the
rangefinder to a GND and 5V on the Arduino respectively. Connect
the LED to pin13 and GND.

By Naureen Mahmood

*/

#define THRESHOLD  40      // threshold distance in inches
#define LED_PIN    13      // LED output pin
#define RF_PIN      0      // Range Finder input pin

int sampleSize   = 50;     // n readings 0 .. (n - 1)
int sampleCount  = 0;      // to count how many samples taken so far
int distance     = 0;

void setup()
{
  Serial.begin(9600);
  pinMode(RF_PIN, INPUT); 
  pinMode(LED_PIN, OUTPUT); 
}

void loop()
{
  // sampleCount starts life at 0, then counts up to sampleSize
  // across iterations of this main loop

  // keep the last computed average between loop() calls; starting it
  // at THRESHOLD keeps the LED off until the first average is ready
  static int avgDist = THRESHOLD;

  distance += analogRead(RF_PIN);
  sampleCount++;
  if (sampleCount == sampleSize)
  {
    avgDist = distance / sampleSize;
    Serial.println(avgDist);  // Print average of all measured values
    sampleCount = 0; 
    distance = 0;
  }
 
  // if an object is closer than threshold distance, turn on the LED
  if (avgDist < THRESHOLD)
    digitalWrite(LED_PIN, HIGH);
  else
    digitalWrite(LED_PIN, LOW);
}

DIY Touch Sensor (Capacitive Sensor)

Capacitive sensing is a technology that detects proximity or touch (by a hand/skin, or any conductive object). The sensor measures the capacitance between its input and output nodes to detect a touch. It detects anything conductive, so these sensors can be used to replace normal switches to make them touch-sensitive, or even be used in making touch screens for monitors, touch-pads, and touch-sensitive buttons in phones, laptops, and other devices.

About the Touch Sensor:
The sensor setup in the example below is a simple DIY setup without using a commercial sensor chip.

Setup:
Attach a high-value resistor (1-10 M Ohm) between an input and an output pin. Also connect a short bare copper or aluminum wire/foil to the input pin. If the wire is to be longer, make sure it isn't touching any other wires along the way, or just use an insulated wire with a small exposed area at its tip. This bare tip will be the touch-sensitive part of the capacitive sensor (i.e. it activates on touch).

An LED is also connected to a separate output pin and GND. This LED turns on when someone touches the sensor with a conductive object (capacitive sensors are most commonly used to sense touch by skin/fingers, etc.).

It is also possible to vary the sensitivity of this setup so that it detects a hand even 3 to 4 inches away from the sensor, or activates only on actual touch. Use lower values of R (e.g. 1 M Ohm or less) if you want only direct touch to activate the sensor; with a 10 M Ohm resistor, the sensor should start to respond 1-2 inches away.

Code:
When the value at the output pin changes from LOW to HIGH, the input pin goes LOW (or 0) for a very short time interval. This time interval is defined by:

T  =  R  x  C,

where
T  =  time interval,
R  =  resistance,
C  =  capacitance of the sensor + capacitance of any conductive object in contact with the sensor pin

So, this time interval increases when the sensor on the input pin (the bare copper/aluminum wire) is touched with a conductive object, and it shrinks again when the conductive object is removed. So, we measure the length of the time interval to get a measure of the capacitance at the touch sensor.
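To get a sense of the magnitudes involved (illustrative values, not measurements): the stray capacitance of a short bare wire might be on the order of 10 pF, and a fingertip touching it can add on the order of 100 pF. With R = 10 M Ohm:

untouched:  T = 10 MOhm x 10 pF  = 0.1 ms
touched:    T = 10 MOhm x 110 pF = 1.1 ms

so a touch stretches the interval by roughly an order of magnitude, which is exactly what the counting loop in the code below picks up.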

Threshold:
The value of the threshold here depends on how sensitive the user wants the sensor to be. The lower bound of the threshold is set by R (the resistance) itself, since that remains constant when measuring T = R x C, but the upper bound can be changed depending on the requirements of the system.

Smoothing:
However, jitter as well as environmental conditions might make the capacitance value jump around a lot. This can be overcome with a smoothing function; for example, by reading the capacitance measure a number of times and averaging the values.

Circuit:

Schematic:
Sample Arduino Program:
About this Code:
When the output at pin 4 transitions from LOW to HIGH, the input pin 5 goes LOW (or 0) for a very short time interval. This time interval increases if the sensor on input pin 5 is touched with a conductive object, and vice versa.

At the start of each main loop cycle in this program, we set the value of a variable 'capI' to 0. Then, for as long as input pin 5 reads LOW, we increment 'capI'. As a result, 'capI' is barely incremented if the sensor is not in contact with a conductive object; but as soon as someone holds/touches the sensor, 'capI' quickly grows because of the longer time interval. So, if the 'capI' value is bigger than a given threshold, the sensor has just detected a touch.

The value of the threshold here depends on how sensitive the user wants the sensor to be and/or how much the environment affects the baseline value at the sensor.

/*

This code turns the LED on while the sensor is in contact
with a conductive material (e.g. when someone touches it
with their bare skin/fingers)

Setup:
Attach a high value resistor (1-10M Ohm) between an output
pin 4 and input pin 5. Also connect a short bare copper or
aluminum wire/foil to the input pin5. Connect an LED to
output pin13 and GND.

By: Naureen Mahmood.

*/

#define LED        13
#define THRESHOLD   5

int capI;      // interval when sensor pin 5 returns LOW

void setup()
{
  Serial.begin(9600);
  pinMode(LED, OUTPUT);
  pinMode(4, OUTPUT);     // output pin
  pinMode(5, INPUT);      // input pin
}

void loop()
{
  capI = 0;      // clear out capacitance measure at each loop

  // transition output pin4 LOW-to-HIGH  to 'activate' sensor pin5
  digitalWrite(4, HIGH);     

  // On activation, value of pin 5 stays LOW for a time interval T = R*C.
  // C is big if the sensor is touched with a conductive object.
  // Increment capI for the interval while pin5 is LOW
  int val = digitalRead(5);  // read the input to be checked
  while (val != HIGH){   
    capI++;   
    val = digitalRead(5);    // re-read the input to be checked
  }
  delay(1);
 
  // transition output pin4 HIGH-to-LOW to 'deactivate' sensor pin5
  digitalWrite(4, LOW);     
  Serial.println(capI, DEC);  // print out interval

  if (capI > THRESHOLD)       // Turn LED on if capI is above threshold
    digitalWrite(LED, HIGH);
  else 
    digitalWrite(LED,  LOW);
}


Sample Arduino Code (with smoothing filter):

About this Code:
This code uses the same technique for measuring capacitance as the earlier one, but it also applies a smoothing filter to remove jitter from the measured values by combining 4 consecutive readings from the input pin. In each iteration of that 4-pass loop, after transitioning output pin 4 from LOW to HIGH, we measure the duration for which input pin 5 stays LOW (saved in the variable capLo). Then we transition output pin 4 back to LOW and measure the duration for which input pin 5 stays HIGH (saved in the variable capHi). We won't be using the capHi variable for the filter, but measuring it drains out any noise at the input pin before the next iteration.

After this loop, the smoothing filter is applied to the measured capLo value. We take the current capLo value and the previous filtered value, prev_capI, from the last iteration of the loop function (not the for-loop), and multiply them by f_val and (1 - f_val) respectively. Here f_val is the amount of filtering applied to the measured capacitance values; it can range from 1 (no filter) down to 0.001 (maximum filter). This ensures both the current and previous measurements contribute to the final filtered value, with f_val determining the proportion of each, so even sudden changes in the capacitance are smoothed out based on previous input.

The LED then brightens or dims smoothly based on these filtered values from the touch sensor.
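For instance (illustrative numbers, not measurements): with f_val = 0.07, a previous filtered value of 20 and a sudden raw capLo reading of 60 give

filt_capI = 0.07 x 60 + 0.93 x 20 = 4.2 + 18.6 = 22.8

so even a big jump in the raw reading only nudges the filtered value, which is what makes the LED fade in and out smoothly.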

/*    

 This code makes the LED intensity go from dim to bright
 smoothly when someone touches the sensor with a bare
 finger, and then smoothly dims down to turn off after
 the person lets go of the sensor.

 Setup:
 Attach a high value resistor (1-10M Ohm) between output
 pin 4 and input pin 5. Also connect a short bare copper or
 aluminum wire/foil to the input pin5. Connect an LED to
 output pin 11 (or any PWM pin) and GND.

 [ Smoothing filter based on code by Paul Badger found at:
   http://www.arduino.cc/cgi-bin/yabb2/YaBB.pl?num=1171076259 ]

 By: Naureen Mahmood
 */

// You can change the bounding values for the capacitive/touch
// sensor depending on what values work best for your setup
// + environmental factors
#define LOW_T       10    // lower bound for touch sensor
#define HIGH_T      60    // upper bound for touch sensor
#define LED         11    // LED output pin

// These are variables for the low-pass (smoothing) filter.
float prev_capI;    // previous capacitance interval
float filt_capI;    // filtered capacitance interval
float f_val = .07;  // 1 = no filter, 0.001 = max filter
unsigned int capLo; // duration when sensor reads LOW
unsigned int capHi; // duration when sensor reads HIGH

void setup()
{
  Serial.begin(9600);

  pinMode(LED, OUTPUT);
  pinMode(4, OUTPUT);    // output pin
  pinMode(5, INPUT);     // input pin
}

void loop()
{  
  // clear out the capacitance time interval measures at start
  // of each loop iteration
  capHi = 0;
  capLo = 0;

  // average over 4 times to remove jitter
  for (int i=0; i < 4 ; i++ )
  {      
    // LOW-to-HIGH transition
    digitalWrite(4, HIGH);   

    // measure duration while the sense pin is not high
    while (digitalRead(5) != 1)
      capLo++;
    delay(1);

    //  HIGH-to-LOW transition
    digitalWrite(4, LOW);             

    // measure duration while the sense pin is high
    while(digitalRead(5) != 0 )    
     capHi++; 
    delay(1);
  }

  // Easy smoothing filter "f_val" determines amount of new data
  // in filt_capI
  filt_capI = (f_val * (float)capLo) + ((1-f_val) * prev_capI);   
  prev_capI = filt_capI; 

  Serial.println( filt_capI ); // Smoothed Low to High

  // Map the capacitance value range to LED brightness (0-255)
  int ledVal = map(filt_capI, LOW_T, HIGH_T, 0, 255);
  ledVal = constrain(ledVal, 0, 255);  // map() doesn't clamp out-of-range values

  if (filt_capI > LOW_T)
    analogWrite(LED, ledVal);
  else
    analogWrite(LED, 0);
}