
The fact that it's surprising does not make it a visual cue. A cue to what? I'm not aware of any psychophysics study showing that we perceive droplets or lens transformations (in contrast to shadows, gradients, etc., which are well studied). There also doesn't seem to be an evolutionary reason for it, because the natural world does not have lenses and glass. And UIs are usually based on intuitive features.


Not saying this makes the UI good, but it should go without saying that the natural world has water, which acts as a lens.

Also, of course we have perception of droplets. What we don’t have is an intuitive understanding of how light interacts with droplets.

I suspect that Apple are trying to leverage this lack of intuition to make their UI interesting to look at in an evergreen way: new backgrounds mean new interesting interactions. I'm not confident that they've succeeded, or that that's actually a good goal to have, though. I have it on my iPhone 13, and personally I find it annoying to parse; I feel relief when I go back to traditional apps untouched by the update, like Google Maps.
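To make the "new backgrounds mean new interesting interactions" point concrete, here's a minimal, hypothetical sketch (plain numpy; the function name and parameters are invented for illustration, and this is not Apple's actual renderer) of a droplet-style magnifier. The same fixed warp produces a different-looking result over every background it sits on:

    # Hypothetical illustration, not Apple's implementation: a droplet-style
    # "lens" that remaps background pixels via a radial magnification.
    import numpy as np

    def droplet_lens(img, cx, cy, radius, strength=0.35):
        """img: HxWx3 array; (cx, cy): lens center; radius in pixels."""
        h, w = img.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
        dx, dy = xs - cx, ys - cy
        r = np.sqrt(dx**2 + dy**2)
        inside = r < radius
        # Inside the lens, sample source pixels closer to the center than
        # the destination pixel, which magnifies the background there.
        scale = np.ones_like(r)
        scale[inside] = 1.0 - strength * (1.0 - r[inside] / radius)
        src_x = np.clip(cx + dx * scale, 0, w - 1).astype(int)
        src_y = np.clip(cy + dy * scale, 0, h - 1).astype(int)
        return img[src_y, src_x]

Run it over two different wallpapers and the identical lens reads as two different effects, which is the "evergreen" property being described.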


Droplets of water are not lenses without glass behind them, and we couldn't see substantial refraction effects through them before we had glass windows. There was little evolutionary reason to develop any perception of refraction in water droplets. In contrast, shadows are instant indicators of distance, and gradients instantly distinguish concave from convex surfaces for light coming from above.

(Water doesn't do lensing unless it's a droplet.)
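For the optics behind that parenthetical, the standard thin-lens (lensmaker's) equation is a minimal sketch of why curvature is what matters (textbook formula, not something from this thread):

    % Lensmaker's equation for a thin lens of refractive index n
    % (n is about 1.33 for water) with surface radii R1 and R2:
    \[ \frac{1}{f} = (n - 1)\left(\frac{1}{R_1} - \frac{1}{R_2}\right) \]

A flat sheet of water has R1 = R2 = infinity, so 1/f = 0 and nothing gets focused; a droplet has small finite radii, so f is short and it behaves like a magnifier.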


I get that your point is that we don't have a strong intuition for lenses, and that that's tied to a lack of evolutionary reason to have one. I agree, and suspect that might be exactly why Apple are using the lens effects. We don't need to go so far as to say the natural world is completely devoid of such phenomena; of course they're there, but they've been largely irrelevant to survival throughout human history.


Is there any study saying that user interfaces should use visual effects for which our brains have hardware acceleration? It seems a reasonable premise, but is there data?


Taking advantage of innate perceptual cues is smart, and our interfaces have always taken advantage of them: https://en.wikipedia.org/wiki/Depth_perception

We shouldn't need a manual to interpret a UI.


I don’t entirely disagree, but that is still an intuition, not a proof that our interfaces should always work that way.

We used to ride animals with legs, which worked a lot like our legs do. Does that mean the wheel is wrong? We don’t have wheels, and they don’t occur in nature.

I don’t think Apple has invented the wheel, and I’m inclined to agree that leveraging our hardware acceleration makes sense. But I haven’t seen anything beyond blind assertion that of course it has to work that way.


I think things are more "differential" than that. Since many of us look at these interfaces more than any other visual stimulus, our perception will be optimized around them. The ideal system, in the short term, will involve familiarity more than anything.


I assume he's saying the "disconnect" is easy to see.



