Hacker News

That's a super fair question. Depending on the image, Deep Angel can produce a quite plausible background, but sometimes it just fills in the object with a "gray blob." The gray blob issue can arise when: (1) the object is too large relative to the rest of the image. For example, consider an image in which 67% of the pixels belong to the object you wish to remove; in this case, DeepFill doesn't have enough context to fill in the pixels plausibly. (2) The object is at the edge of the image. The further the object is from the center, the less surrounding information DeepFill has for plausible inpainting. (3) The training data is quite different from the test data.

The gray blob is a collapse of the pixels to the mean of the colors and textures around the removed portion of the photo.
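To see why averaging produces a flat blob, here is a minimal sketch (not Deep Angel's or DeepFill's actual code) of the degenerate "mean fill" behavior: every removed pixel gets the average color of the surrounding pixels, so all texture collapses to a single flat patch. The `mean_fill` helper and the toy image are hypothetical, for illustration only.

```python
import numpy as np

def mean_fill(image, mask):
    """Naive inpainting: fill masked pixels with the mean color of the
    surrounding (unmasked) pixels. This mimics the 'gray blob' failure
    mode described above: all local texture collapses to one flat color."""
    filled = image.astype(float).copy()
    # Boolean-index the HxWx3 image with the HxW mask -> (N, 3) surround pixels.
    surround_mean = image[~mask].mean(axis=0)
    filled[mask] = surround_mean
    return filled.astype(image.dtype)

# Toy example: a 4x4 RGB image with a green "object" in the center 2x2 block.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[..., 0] = 200  # reddish background
img[..., 2] = 50
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
img[mask] = [0, 255, 0]  # the object to remove

result = mean_fill(img, mask)
# The removed region becomes a single flat color: the mean of its surroundings.
```

A learned model like DeepFill avoids this by predicting structured texture from context, but when that context is missing or unfamiliar, its output degrades toward the same mean-colored patch.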

The AI isn't yet perfect, and as you use it, you'll start to see which kinds of photographs work really well and which do not.




