Please see Remote Pulse main page if you haven't already!
Wow, there's much more to read and learn on the Wikipedia pages.
Use a Raspberry Pi camera?! $30 for full access to 90 FPS data using their straightforward interface.
This work uses the same averaging algorithm by ——–, but rigorously defends its optimality and combines it with Lucas-Kanade point tracking to handle non-still individuals. Other approaches are not real-time, don't explain the underlying noise characteristics of the video, and require either an assumption about the underlying dataset (bandpass heart-rate frequencies) or ove…
For our Pattern Recognition class, my partner (Billy Keyes) and I implemented an algorithm to track cardiovascular vital signs (like pulse and heart-rate variability) from video of exposed skin. Our current version uses OpenCV for face tracking and follows the variations in an average over a region of interest (in this version, a sub-rectangle of the face). This readout is very similar to the SpO2 sensors you might find in hospitals, except at a fraction of the cost and much more convenient to use! Future plans are to improve robustness with Lucas-Kanade point tracking.
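The region-of-interest averaging idea can be sketched in a few lines. This is a minimal simulation, not our actual class code: the 1.2 Hz pulse, 30 fps frame rate, and 40×40 ROI are made-up illustration values, and a real version would pull the ROI from OpenCV's face detector instead of generating synthetic frames.

```python
import numpy as np

fps = 30.0
t = np.arange(300) / fps            # 10 seconds of "video" (demo values)
pulse_hz = 1.2                      # ~72 bpm, synthetic pulse frequency

# Fake green-channel ROI: a tiny pulse signal buried in per-pixel noise
rng = np.random.default_rng(0)
frames = 0.5 + 0.005 * np.sin(2 * np.pi * pulse_hz * t)[:, None, None] \
         + 0.05 * rng.standard_normal((len(t), 40, 40))

# Spatial mean over the ROI collapses each frame to one sample;
# averaging 1600 pixels knocks the per-pixel noise down by ~40x
signal = frames.mean(axis=(1, 2))
signal -= signal.mean()

# Pick the dominant frequency with an FFT
freqs = np.fft.rfftfreq(len(signal), d=1 / fps)
peak_hz = freqs[np.abs(np.fft.rfft(signal)).argmax()]
print(f"estimated pulse: {peak_hz * 60:.0f} bpm")
```

The spatial average is doing the heavy lifting here, which is why the method tolerates so much sensor noise per pixel.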
Excellent NASA Paper talking about determining pulse from radar, infrared, and visible spectrum. Gives a good overview, but doesn't do much dreaming.
Astronomers do picture stacking to make their final pictures better. Some use the ToUcam (a Philips webcam), whereas others use a 5 MP imager, like this one
The MIT guys used a DSLR camera, but it looks like it applied H.264 compression.
Just do a few tests and be done with it!
I was initially very skeptical of the proposed applications for this technology.
(To put somewhere in here.) From Andrew: Stereotypical startups want to get something flashy out fast, build up their “worth” by hiring people who appear to know stuff, and then sell to a big company and get out as quickly as possible, making as much money as they can. They don't really care whether it actually meets a need in the long run; they're most likely in it for the money. Altruistic projects don't equal money, at least in the short term.
Daytime Star Viewer
Not sure how this is useful, but it's definitely possible. Just take lots of pictures of the sky during the daytime with the aperture and exposure fixed. Keep a solid object in view so that any motion from the shutter is corrected for.
Keep the aperture as wide open as possible and use an exposure long enough that you still capture some pixel-level shot noise when you diff the images. (You want the shot-noise Gaussian to be as small as possible to minimize the number of images you have to take to find the true value, but not so small that you don't get any noise at all.)
The blue sky is ~20,000 lux during midday, vs. the moon's 0.25 lux. However, we can still see the moon during the day. Venus at its brightest is 0.00014 lux. Looks like you've got a lot of pictures to take!
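The stacking idea above can be sanity-checked with a quick simulation; the photon counts and frame counts here are invented, but the underlying principle is that averaging N exposures cuts the Poisson shot noise by a factor of sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(1)
sky = 200.0          # mean sky photons per pixel per exposure (made up)
star = 5.0           # a faint star adds 5 photons to one pixel (made up)

truth = np.full((32, 32), sky)
truth[16, 16] += star

def shoot(n):
    """Average n Poisson-noisy exposures of the same scene."""
    return np.mean([rng.poisson(truth) for _ in range(n)], axis=0)

one = shoot(1)
stack = shoot(400)   # sqrt(400) = 20x noise reduction

# Residual noise after stacking: ~14 photons for one frame, ~0.7 for 400
print("1-frame noise:  ", (one - truth).std())
print("400-frame noise:", (stack - truth).std())
```

In a single frame the 5-photon star is buried under ~14 photons of shot noise; after stacking 400 frames the noise floor drops to ~0.7 photons and the star pixel clearly stands out.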
Set the Nikon D90 to uncompressed 14-bit RAW (NEF) (super-huge files). On the Mac it looked like the NEF was still somewhat compressed at the pixel level, although that could have been the Preview viewer… (here are some other NEF files)
How does this guy show stars while the sun is still up??? Is it because there's less atmosphere up there and this is normal viewing?
Ensure that the faces you're verifying belong to actual living people and not paper printouts of them.
Video replay over a tablet attack? You're clever…you'll have to run the code yourself to find out
My biggest beef is that there are other, seemingly much easier methods for determining whether surveillance video is fake: checking whether the background of the object/person changes, randomly blinking LEDs, checking for diffuse reflectance off of skin, etc.
If you have a high enough frame rate on your camera (or direct control of the CMOS sensor pixels), you can see the way the pulse propagates throughout the skin. The original paper by Verkruysse (Remote Photoplethysmographic Imaging Using Ambient Light) uses it to evaluate the effectiveness of port-wine-stain therapy. (I suspect that you can't actually pull phase out of an undersampled signal, but I could be wrong. Check out the “phase” of the middle pane vs. the right pane of the 2nd YouTube video above; it's from the paper.)
From this, you can determine the pulse wave velocity, which apparently is directly correlated with arterial stiffness, which is a marker of potential heart failure but isn't related to cardiac output (Edwards Lifesciences' big product). Good to know.
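The pulse-wave-velocity measurement boils down to finding the time lag between the pulse waveform at two skin sites and dividing the site separation by that lag. Here's an illustrative sketch with a synthetic waveform; the 250 Hz sample rate, 100 ms transit time, and 0.5 m separation are all invented numbers, not physiological claims.

```python
import numpy as np

fs = 250.0                         # sample rate in Hz (made up)
t = np.arange(0, 10, 1 / fs)
pulse = np.sin(2 * np.pi * 1.0 * t) + 0.3 * np.sin(2 * np.pi * 2.0 * t)

lag_true = 0.100                   # 100 ms transit time (made up)
site_a = pulse
site_b = np.interp(t - lag_true, t, pulse)   # same wave, delayed

# Cross-correlate the two zero-mean signals to recover the lag
a = site_a - site_a.mean()
b = site_b - site_b.mean()
xcorr = np.correlate(b, a, mode="full")
lag = (xcorr.argmax() - (len(a) - 1)) / fs   # peak offset -> seconds

distance = 0.5                     # meters between the two sites (made up)
print(f"transit time: {lag * 1000:.0f} ms, PWV: {distance / lag:.1f} m/s")
```

With real video you'd extract the two waveforms from two ROIs (say, forehead and palm) before cross-correlating, and the lag would be tiny, which is why the frame rate matters so much.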
Apparently you can see the vasculature underneath the skin better using the IR spectrum. F.P. Wieringa's thesis talks a lot about his work toward applying that to skin therapy. He averages over ROIs too, but I'm not sure what for.
OpenCV Compiling Cross-Platform
Google Research on Face Tracking. Uses SVMs and such; the authors are former CMU grad students. Probably very similar to the algorithm used for Google Hangouts, although probably better too. The Hangouts app seems similar to the AAM stuff, although I haven't read their paper closely enough to know.
Utility of the Photoplethysmogram, a frank view of how useful a photoplethysmogram might be. It seems to indicate blood flow, but it's also complicated by other effects in the body, like oxygenation and such.
Getting Rid of Poisson Shot Noise from the CCD
Active Appearance Model Tracker (Getting rid of motion noise)
Reimplementation of Jason Saragih's face tracker in OpenFrameworks here, by Kyle McDonald.
Getting FaceTracker working on Windows
How Behind I Am
I am ashamed to send you this code, but here you go. Didn't make time to clean it up this weekend.
How about this. I'll explain the algorithm to you real quick. Maybe you want to trade knowledge with me on bioinformatics? Oh, wow, bioinformatics is kinda what we're talking about here! Yeah, could you maybe give me the 2-minute explanation of the relation of de Bruijn graphs to de novo assembly?
So, for this algorithm (and any pulse detection algorithm), you need to amplify small changes in color in local areas of exposed skin. The face and hands work well because, I think, there are lots of blood vessels there and they're not covered by hair. Literally, hair will screw this method up, it's that sensitive. :)
Anyways, to do that, assuming the person is perfectly still (which doesn't happen, because their heart is beating, which slightly moves their face too), you subtract the “mean” value from every pixel and then multiply the leftover “variance” (which you hope is only the change in color of the face). At this level, noise from photons hitting the camera CCD actually matters, but it shows up as spotty noise, whereas the blood color is pretty consistent (which is why the paper just smoothed the whole image). A good alternate video is on my website here. Anyways, to find the mean value, people often use a running exponential filter or a boxcar filter (add 'em up and divide by N, aka the average). The running exponential filter is easier to program:
currentFilteredValue = (alpha)*currentRawSample + (1-alpha)*lastFilteredValue
whereas for a boxcar you have to keep an array of numbers. Someday I'll just make a well-documented set of functions like my boss has right now, but that's beside the point.
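Here's the side-by-side of those two running-mean options in NumPy. This is just a sketch on a noisy constant signal; the alpha and the window length N are illustrative choices, not tuned values.

```python
import numpy as np

rng = np.random.default_rng(2)
raw = 10.0 + rng.standard_normal(1000)   # noisy signal around a true value of 10

# Exponential filter: one state variable, no history buffer needed
alpha = 0.05
filtered = np.empty_like(raw)
filtered[0] = raw[0]
for i in range(1, len(raw)):
    filtered[i] = alpha * raw[i] + (1 - alpha) * filtered[i - 1]

# Boxcar: keep the last N samples around and average them
N = 39   # roughly matches alpha = 0.05 (N ~ 2/alpha - 1)
boxcar = np.convolve(raw, np.ones(N) / N, mode="valid")

print("exponential estimate:", filtered[-1])
print("boxcar estimate:     ", boxcar[-1])
```

Both settle near 10; the exponential filter trades the boxcar's hard cutoff for a memory footprint of exactly one number per pixel, which is why it's the easier one to drop into an image-processing loop.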
So, for the program, just do this for each pixel:
-Get mean value
-Subtract mean value from current value
-Multiply result by a magnification factor.
-Hope you have enough light and the person didn't move.
-Redraw to the image buffer and display the result.
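The per-pixel steps above can be sketched in NumPy like this. It's a minimal demo, not the attached code: the alpha, the 50x magnification factor, and the synthetic flicker are made-up values, and real frames would come from a camera (e.g. OpenCV's VideoCapture). Adding the running mean back before display is a presentation choice so the output stays a viewable image.

```python
import numpy as np

alpha = 0.1   # running-mean smoothing (made-up demo value)
gain = 50.0   # magnification factor (made-up demo value)

def amplify(frames):
    """Magnify small temporal changes in a stack of float frames in [0, 1]."""
    mean = frames[0].astype(float)
    out = []
    for f in frames:
        mean = alpha * f + (1 - alpha) * mean   # get the (running) mean value
        diff = f - mean                          # subtract it from the current value
        out.append(np.clip(mean + gain * diff, 0, 1))  # magnify, add mean back, clip for display
    return np.array(out)

# Tiny demo: a 0.1%-amplitude flicker becomes plainly visible
t = np.arange(100)
flicker = 0.5 + 0.001 * np.sin(2 * np.pi * t / 20)
frames = np.repeat(flicker[:, None, None], 4, axis=1).repeat(4, axis=2)
result = amplify(frames)
print("input swing: ", frames.max() - frames.min())
print("output swing:", result.max() - result.min())
```

Note the running mean doubles as the motion-sensitivity problem: if the person moves, the diff picks up edges instead of blood color, which is exactly the flaw mentioned below.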
Fun times! I threw the attached code together in time for our final demo two semesters ago and was waiting for a much better method to come out, which the MIT paper did. I'm still working through their solution, because ours is fundamentally flawed (the person moves!). Whoo, time for work.
Looking forward to still talking! The other 3 interested people never emailed back :/
Kalman filters assume the noise is independent over time… that may not be the case for CCD noise. Still need to research more.
Few groups publish oxy-/deoxyhemoglobin *reflectance* spectra. I've seen glimpses of it in a few patents. Would like to see results for the near-infrared range too. Red doesn't seem to do much at all; green/blue show the most reflectance change. Probably scales with the amount of lighting.
Uses a few techniques, including Wiener filtering. The best approach is probably to develop a noise model for each individual camera.
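For reference, here's a minimal local Wiener filter of the kind scipy.signal.wiener implements: estimate a local mean and variance per pixel, then shrink each pixel toward its local mean in proportion to the estimated noise power. The 3x3 window, the test gradient image, and the noise level are illustrative; this isn't tied to any particular camera's noise model.

```python
import numpy as np

def wiener(img, size=3, noise=None):
    """Local-statistics Wiener filter (mean/variance formulation)."""
    pad = size // 2
    p = np.pad(img, pad, mode="reflect")
    H, W = img.shape
    mean = np.zeros_like(img)
    sq = np.zeros_like(img)
    for dy in range(size):          # accumulate local sums over the window
        for dx in range(size):
            mean += p[dy:dy + H, dx:dx + W]
            sq += p[dy:dy + H, dx:dx + W] ** 2
    mean /= size ** 2
    var = sq / size ** 2 - mean ** 2
    if noise is None:
        noise = var.mean()          # same default noise estimate scipy uses
    # Where local variance ~ noise, output ~ local mean; where it's much
    # larger (real structure), the pixel passes through mostly unchanged.
    return mean + np.maximum(var - noise, 0) / np.maximum(var, 1e-12) * (img - mean)

rng = np.random.default_rng(3)
clean = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = wiener(noisy)
print("noisy error:   ", np.abs(noisy - clean).mean())
print("denoised error:", np.abs(denoised - clean).mean())
```

Swapping var.mean() for a measured per-camera noise floor is exactly where a camera-specific noise model would slot in.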