The key component of user interaction in the M.K. system is knowing where the microphone is being held.
By knowing the X/Y position of the microphone, we can assume a very general position for the user: the user's body center is within 1 meter of the microphone. There are better ways to get user position, but this one is the easiest, since we are leveraging the technology the user is ALREADY holding.
The original concept was to build a band of IR LEDs circling the microphone. A problem with this is knowing how close the user is to the camera. The hack solution is to provide a narrow stage that limits where the user can stand. A better solution is for the microphone system to tell the computer its exact position.
For this reason, ARToolkit was considered. An ARToolkit fiducial can be rigged as a backlight. This would give X, Y, Z, and tilt position, which is very powerful for user interaction. Unfortunately, ARToolkit is built for Augmented Reality and seems to assume the user and camera are in the same position, as with heads-up or see-through displays. A 7-inch fiducial works out to roughly a 50-inch distance from fiducial to camera. Assuming a 1-foot fiducial is the largest we can give the user, we can expect no more than roughly 85 to 100 inches from the camera, or 7 to 8 feet.
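As a back-of-envelope check, the scaling above can be sketched in a few lines (TypeScript here just for illustration). The assumption is that usable tracking range grows linearly with marker edge length, using the roughly 7:1 range-to-marker ratio implied by the 7-inch figure; the real constant depends on camera resolution, lens, and lighting.

```typescript
// Back-of-envelope ARToolkit-style range estimate. The linear scaling
// and the 50-inches-per-7-inch-marker ratio are assumptions taken from
// the figures above, not measured values.
const RANGE_PER_MARKER_INCH = 50 / 7; // inches of range per inch of marker

function maxTrackingDistanceInches(markerInches: number): number {
  return markerInches * RANGE_PER_MARKER_INCH;
}

// A 1-foot (12-inch) fiducial:
const range = maxTrackingDistanceInches(12);
console.log(`${range.toFixed(0)} inches, about ${(range / 12).toFixed(1)} feet`);
// prints "86 inches, about 7.1 feet"
```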
In looking at positioning under this system, the ideal is to place the camera and projector as close to one another as possible. This model was suggested by Zach Lieberman, whose work with openFrameworks makes him an expert, for sure. It seems to make sense, since you are minimizing skew between the projection and vision systems. The problem is that projectors typically require 15 feet for a proper projection throw; this of course depends on the size of the projection. Were the camera and projector next to each other, the fiducial would be ... about 2 feet big! This would be cumbersome and a huge obstruction.
So for now, back to drawing board.
Saturday, February 28, 2009
Friday, January 16, 2009
motion tracking in flash
This video shows a pretty cool implementation of motion tracking in Flash.
Tracking Multiple Objects Using a Webcam from chris teso on Vimeo.
Full blog post here.
i give you lobster crotch
Wednesday, January 14, 2009
MR TEEEEEEEEEEEETH
Thursday, January 8, 2009
TITFISH
Tuesday, January 6, 2009
Playing with AS3
One of the first things I ran into was the lack of an onEnterFrame. It seems that event listeners are the way to go.
Here's what I did today. There is no analysis yet, but it tracks the x and y locations of the mouse for a period of thirty seconds (when running at 30 frames/second).
I just wanted to check that I am using the best approximation of an onEnterFrame function.
----------
var xLocations:Array = new Array();
var yLocations:Array = new Array();
var testFreq:int = 3;
var counter:int = 0;
// 300 samples at 10 samples per second is 30 seconds
for (var i:int = 0; i < 300; i++) {
    xLocations.push(0);
    yLocations.push(0);
}
addEventListener(Event.ENTER_FRAME, TrackMouse);
function TrackMouse(event:Event):void {
    counter++;
    if (counter >= testFreq) {
        counter = 0;
        // push the newest sample and drop the oldest, keeping length at 300
        xLocations.push(mouseX);
        xLocations.splice(0, 1);
        yLocations.push(mouseY);
        yLocations.splice(0, 1);
    }
}
-------------
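Since there's no analysis yet, here is a sketch of what a first pass over these buffers might look like, in TypeScript for illustration: the same fixed-length push-and-drop buffer, plus a toy check for a rightward sweep. The 200-pixel threshold and the gesture name are made-up assumptions, not part of the post.

```typescript
// Minimal analysis over the same kind of fixed-length position buffer
// the AS3 snippet maintains. Threshold and gesture name are illustrative.
function pushSample(buffer: number[], value: number): void {
  buffer.push(value);
  buffer.shift(); // drop the oldest sample, keeping the length fixed
}

function netDisplacement(buffer: number[]): number {
  return buffer[buffer.length - 1] - buffer[0];
}

function isRightwardSweep(xBuffer: number[], threshold = 200): boolean {
  return netDisplacement(xBuffer) > threshold;
}

// Simulate 300 samples of the mouse moving steadily right by 1px each:
const xLocations: number[] = new Array(300).fill(0);
for (let x = 0; x < 300; x++) {
  pushSample(xLocations, x);
}
console.log(isRightwardSweep(xLocations)); // true: net movement is 299px
```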
-Andy
Capturing CD+G
I have access to over 3000 karaoke songs in the typical format, CD+G, the G standing for graphics. There is an excellent explanation of this data system here.
It appears that the words are ACTUALLY a video that is compressed and hidden in the available stream on regular CDs. This means that to translate the files into the madlib karaoke format, I will need to take the ripped video file, place it in a video editor, and get the time points for each new line.
It would be great to find some video analysis software that does video-to-text, but I don't know of any that exists :(
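Since the plan is to pull a start time for each lyric line out of the video editor, the result could be stored as a simple per-line timing table. This is a hypothetical sketch in TypeScript; the field names and structure are assumptions, since the actual madlib karaoke format isn't specified here.

```typescript
// Hypothetical per-line timing data for one song. The "madlib karaoke"
// structure is assumed for illustration; the post only says each lyric
// line needs a time point taken from the video.
interface LyricLine {
  startMs: number; // time the line appears, read off the video editor
  text: string;    // lyric text, with blanks to be madlib-filled later
}

const lines: LyricLine[] = [
  { startMs: 12000, text: "you are my ____" },
  { startMs: 16500, text: "my only ____" },
];

// Find which line should be on screen at a given playback time:
function lineAt(timeMs: number, song: LyricLine[]): LyricLine | undefined {
  let current: LyricLine | undefined;
  for (const line of song) {
    if (line.startMs <= timeMs) current = line;
  }
  return current;
}

console.log(lineAt(17000, lines)?.text); // "my only ____"
```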
motion library
Andy is working on a motion library that translates mouse positions over time into specific motions.
This library will be driven by mouse data for now, but will soon take data from the IR coming off the microphone. We will be using XML Sockets in Flash to communicate between the IR recognition system and Flash.
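Flash's XMLSocket speaks a simple protocol: each message is an XML string followed by a single zero byte over plain TCP. A minimal sketch of the tracker side follows, as a Node-style TypeScript server; the `<pos/>` message shape and the use of Node's `net` module are assumptions for illustration, not the project's actual wire format.

```typescript
// Sketch of the IR-tracker side of the link. Flash's XMLSocket frames
// each message as an XML string followed by a zero byte over TCP.
import * as net from "net";

function framePosition(x: number, y: number): Buffer {
  const xml = `<pos x="${x}" y="${y}"/>`;
  return Buffer.concat([Buffer.from(xml, "utf8"), Buffer.from([0])]);
}

// A plain TCP server a Flash XMLSocket could connect() to:
const server = net.createServer((socket) => {
  socket.write(framePosition(320, 240)); // dummy IR position
  socket.end();
});

server.listen(0, () => {
  // A real tracker would keep streaming positions; close immediately
  // so this sketch exits cleanly.
  server.close();
});
```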