Removing objects from video in real time (kurzweilai.net)
76 points by chaosmachine on Oct 15, 2010 | 27 comments



AKA: real-time repair tool. Impressive results overall, and a fun use for AR for sure, though it seems they're not using previous frames' covering textures as hints (or not overly well). Especially visible with a heavy-noise background like where the drainage grate is, at around 1:46.

Next up: augmented reality headgear, so you don't have to see billboards while driving / hunting zombies in the real world. Or your annoying coworkers.


Until it mistakes the sticker-filled VW van pulling out in front of you for a billboard... oops :-)


A real world AdBlock if you will. I want one.


Actually this is the real world adblock: http://news.ycombinator.com/item?id=1770950

;)


Except that that's way worse (currently) than the main article's effort. Ow.


Well great. Now I can't even trust live video not to be censored.

I wonder how it works on non-static objects, like people? Seeing the drain removed from the sink made me think of removing faces from people. Advance it enough, and you could probably get a visual Autotune for people to make them generically attractive by removing wrinkles and whatnot. Quite useful for politics, news, etc.


Actually there is already a tool for enhancing anatomical features: "MovieReshape: Tracking and Reshaping of Humans in Videos", http://www.mpi-inf.mpg.de/resources/MovieReshape/


Given that the much simpler problem of autotune can't be solved without giving the voice an artificial sound, I doubt that auto-removal of wrinkles is going to work very well.

Of course you can also just remove wrinkles with a blur. I guess this gets more and more difficult in the age of HDTV.

Also: I'm less concerned about the fact that I can't trust live video to be uncensored, and more concerned about the fact that conspiracy theorists now can't trust live video to be uncensored. Now that they've been given the idea, I look forward to the next round of conspiracy theories, in which pixels are analyzed to prove that [favourite boogeyman] was really behind [next major event].


The OP is a more advanced technology. But as to whether you've already seen this on live video, if you've ever watched a baseball game, live coverage of home plate usually replaces the advertisement behind the batter. You'll see that newer stadiums facilitate this by painting blocks of solid color in the spots where the ads appear on TV.


It removes the box, but its reflection is still present in the mirror, which is spooky.


When I was in college I wrote some software that did something sorta like this. My target was to post-process video content from TV shows with the channel logos in the corners.

My approach was to detect the logos based on rudimentary pattern detection (mostly finding parts of the screen that didn't change over a period of time), then calculate the replacement pixel color based on the other pixels around the edges of the logo, weighting their impact on the replacement pixels by their proximity to the pixel being calculated.

It worked reasonably well ... albeit much slower than the demo in this video ... and then transparent logos were introduced, complicating the calculations. Also, sitcoms worked much better than sports: their cameras were more stationary, making detection and replacement-color calculation easier.
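For the curious, the border-weighted replacement described above can be sketched in a few lines of NumPy. This is my reconstruction of the idea, not the original code; `fill_region` and the 4-connected `dilate` helper are made-up names:

```python
import numpy as np

def dilate(mask):
    """4-connected binary dilation via shifts (avoids a SciPy dependency)."""
    m = mask.copy()
    m[1:, :] |= mask[:-1, :]
    m[:-1, :] |= mask[1:, :]
    m[:, 1:] |= mask[:, :-1]
    m[:, :-1] |= mask[:, 1:]
    return m

def fill_region(img, mask):
    """Replace each masked pixel (the detected logo) with an average of the
    colors just outside the region's border, weighted by inverse distance,
    so that nearer border pixels have more influence."""
    border = dilate(mask) & ~mask          # ring of pixels around the logo
    bys, bxs = np.nonzero(border)
    bvals = img[bys, bxs]                  # (N, 3) border colors
    out = img.astype(float)
    for y, x in zip(*np.nonzero(mask)):
        d = np.hypot(bys - y, bxs - x)     # distance to every border pixel
        w = 1.0 / (d + 1e-6)               # closer border pixels weigh more
        out[y, x] = (w[:, None] * bvals).sum(0) / w.sum()
    return out
```

Quadratic in the hole size, which fits the "much slower than the demo" observation.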


It's resynthesis, just fast enough to run at the video frame rate. There are already resynthesize plugins for Photoshop and GIMP, so this is a logical step up.


There is more information available from the previous frames.

It's likely that more work is being done than simply applying something like Photoshop's resynthesize to each frame.


I'm a little skeptical of the process. How do you "improve" the image and still get the details back? When you downsample/blur, you lose information. Does he keep the original and splice parts of it back in? If so, how does he decide what to splice where? And if it's just using pixel-color statistics when you upsample, that would work on most of the single-color textures in the video, but how would that work for the brick road with the drain? The seams between the bricks are directional, so there'd have to be some way to account for that.

Anyone got a good guess? (or a paper for that matter?)


I'm thinking this won't work as well on a background that's not plain. I want to see the same thing with a checkerboard pattern in the background.


Well, it would likely work on a checkerboard, but not on, say, a marble on a bed of mixed-color paperclips.


Wow! This reminds me of this story...

http://news.ycombinator.com/item?id=1770950


I've often wondered how long it will be before TV companies edit the video of sporting events to replace the adverts in the stadium with different ones they've sold themselves. Given that the adverts are in fixed places, you'd think it would be easy to do.


I thought this already happened, with localised adverts being shown to different audiences?


Yes, it does.

Next time you go to a (newer) baseball stadium, check out behind home plate and you'll sometimes see painted blocks where the TV cameras overlay ads.
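The painted-block trick essentially reduces virtual ad insertion to chroma keying: find pixels close to the painted color, composite the ad there. A toy version (real broadcast systems also track camera motion and handle players occluding the block; names here are made up):

```python
import numpy as np

def replace_keyed_region(frame, ad, key_color, tol=0.1):
    """Wherever the frame is within `tol` of the painted key color (per
    channel), show the ad pixel instead. frame/ad: (H, W, 3) float arrays."""
    frame = frame.astype(float)
    dist = np.abs(frame - np.asarray(key_color, float)).max(axis=-1)
    mask = dist < tol                  # pixels matching the painted block
    out = frame.copy()
    out[mask] = ad[mask]
    return out, mask
```

Painting the block a solid, uncommon color is what keeps `tol` from also matching uniforms or skin — the same reason green screens are green.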


The sporting leagues would disallow this in their rights deals, IMO.


IANA graphics person but could someone help me out in understanding this?

Once the resolution of the image has been reduced, it becomes a low-res image. How do they get back the original resolution?

Frankly, it sounds a lot like "Enhance" to me.


"I thought what I'd do was, I'd pretend I was one of those deaf-mutes."


Hey, I know that bathroom! (I’m a student at TU Ilmenau.)

I really don’t know why I didn’t know that our Institute does something other than social science. Cool!


There is something unnervingly Orwellian about this.


Great for those videos you took on vacation last year -- with the girlfriend that didn't work out. Now she's not there any more.

I would imagine the next logical step (assuming people removal works, it was not demonstrated) would be to _add_ people to videos in realtime. Uncle Joe dead? No problem. We'll just take those videos that we had of him last Christmas and put him in our videos from this year.

While the tech is nowhere near new, the real-time nature of this really makes you think about the possibilities.


I wonder what could be some real world applications for this technology.



