Putting aside the greater variety of physical traits that you describe, dogs generally are more adaptable than cats. They are estimated to have roughly twice the number of cortical neurons and are much more malleable, whereas cats feel more hardwired into a set of cat behaviours.
I’ve assumed that this greater learning capacity and malleability is both the best part of a dog and a vulnerability that can lead them to become highly anxious and dependent animals.
I’ve had both cats and dogs, and loved them both, but my goodness, they are such wildly different animals.
I admit to invoking the phrase “Where we’re going, we won’t need eyes to see” at least once a year when something feels like it’s going horribly wrong.
There are many motivations for shooting JPEG with film sims, from just not wanting to expend the effort editing photos to my motivation as a colour-blind person who simply cannot see colour well enough to manually adjust photos. For me, it’s incredible being able to choose a film simulation and be happy with the result, even if I know that the colours I’m seeing aren’t quite the same as what others will see. It’s the entire reason I bought into the Fujifilm system.
If you want to shoot JPEG and not post-process, you aren't really going to need a camera that was designed to capture far more data than the target format can represent (a raw file keeps 12-14 bits per channel; a JPEG keeps 8). And yet people pay for really expensive cameras with the kind of dynamic range that is only useful for post-processing. It is like paying for a sports car with a big engine -- and then having someone else drive it no faster than 20mph while you sit in the passenger seat. It is a waste of money. And camera companies are taking advantage of consumers who think they need these expensive cameras to get the kinds of shots they want.
They don't.
Of course I understand that it is more complicated than that. How the camera looks and handles is a huge part of the equation. (I am, after all, the kind of moron who has a Leica in their collection of cameras -- which is a nice camera, but it isn't technically as good as my Nikons :-)). But I still feel that the industry is taking advantage of consumers by selling them capabilities they aren't ever going to use.
Some camera manufacturers do something that is somewhat sensible: they make their film emulation profiles available in post-processing. So you can shoot raw, take advantage of the leeway this provides to get the exposure and tonality right, and then apply the film simulations in post.
As for post-processing, I think the biggest problem is that people think it requires a lot of work and that it is complicated. It is easy to get that impression when you see all of the _atrocious_ editing videos on YouTube of people over-editing pictures.
If you do have to spend a lot of time post-processing, the problem is usually that you have no idea how to capture a photo in the first place -- or you have no idea what you want. It pays off to learn how to shoot. And if people aren't interested in learning: mobile phone cameras will usually make more satisfying images with a lot less work. They are _far_ more capable of instant gratification than expensive compact cameras from just 10 years ago.
And I say that as someone who spends a lot of time learning. Even after 30 years. Either you want to up your game, or you don't. If you don't, then there is very little a film preset can do for you.
As for color blindness: you will be no more capable of creating a decent color photo by having the camera slap some color grading on your picture than by actually editing it in Lightroom. Though in post you can probably learn how to correct images that have obvious color defects without actually being able to see those defects. You can't do that in the camera.
That being said, I do most of my (very rapid) post-processing in black and white. The first thing I do is turn off the colors to adjust exposure, contrast, tonality, etc. Once that is in place, I turn the colors back on and do any color grading/corrections I want. This is where you'd apply film simulations and the like. And as I said in the paragraph above: if you are color blind, it makes no difference whether the camera applies the look or you apply a film preset in post.
I spend perhaps 10-30 seconds per image in post. (Usually I spend more time on the first picture in a series and then apply those edits, with minor variations, to all photos of the same scene or with the same settings and lighting).
The big advantage of doing this in post is that you have an entire universe of film simulations to choose from; you are not limited to what comes with your camera. On top of that, you will have a lot more wiggle room to get the exposure and tonality right.
A lot of photographers (myself included) don't actually shoot so the image straight out of the camera looks like what we want to end up with; we shoot with specific processing in mind. Usually that's because you know what the camera sensor is capable of, so you optimize for capturing usable raw data that lets you get the result you want in post. And with practice, post-processing shouldn't be time-consuming.
What my doctor has told me, after attending a urology + prostate cancer conference, is to think of the prostate as a sponge that absorbs testosterone. And once the sponge overflows, prostate cancer can be triggered.
But once cancer has occurred, adding more testosterone doesn't matter because the sponge is already super-saturated.
In fact, doctors who have this perspective will permit men with prostate cancer to continue testosterone therapy.
My father was recently diagnosed with prostate cancer, and the first treatment was eradicating all testosterone from his body, as the affected cells were "feeding" on it.
It's not a cure, as the cells tend to find other ways to grow over time, but testosterone does make it faster.
Of course there are many types of cancer so this may not be true for all prostate cancers.
I thought the accepted interpretation of Annihilation was that it's a metaphor for cancer. The slow, seemingly unstoppable spread. Mutations creating new things that mimic the familiar, but in often grotesque ways, mechanistically expanding to destructively consume everything in its path into a new form of life.
Wow, that's a throwback. I remember when my Ontario high school got a whole room full of ICON computers in '84 or '85. It was a totally miserable nightmare for the teaching staff, as each computer class had at least a couple of students with the skills to elevate themselves to root privileges and cause constant mischief.
The ICONs were way more fun than the aging Commodore PETs that had populated Ontario high schools, and the big trackballs felt futuristic.
Yes! By the time I was in high school in Ontario (early 90s) we had PCs as I recall, but in grade school there were PETs and then later ICONs. At home a lot of kids had C64s, though my family’s first machine was an IBM XT clone.
I vaguely remember a mysterious “other room” where ICON-related things happened - this must have been where the LEXICON server was located.
I agree that, strictly speaking, we're not talking about excluding those users, only preventing them from upgrading to the latest version of the app. But for lots of businesses those two things are viewed as effectively the same, because they believe the new version of the app adds value that will result in greater user stickiness and direct or indirect revenue opportunities.
Honestly, I'm thrilled anytime I'm working for a client that has an iOS(-2) rule versus an iOS(-4) rule.
If you're a business with a consumer-facing app, you don't want to exclude 5% or 10% of your users. Most of the client projects I've worked on had a requirement to support iOS(-4), which is super painful from a development standpoint, and usually the share of users on a device stuck on an iOS version four releases old is in the range of 2%. But I get that it's tough for a financial institution or a streaming media company or a telco to exclude potentially 2% of their users.
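To give a concrete sense of why an iOS(-4) rule hurts: every API that's newer than the oldest OS you support has to be wrapped in an availability check with a hand-rolled fallback, and those branches accumulate all over the codebase. A minimal, hypothetical UIKit sketch (the button styling is just an illustration, not from any real project):

```swift
import UIKit

// Hypothetical example: the app's minimum deployment target is several iOS
// versions behind, so newer APIs must be gated at runtime with a fallback.
func makePrimaryButton(title: String) -> UIButton {
    let button = UIButton(type: .system)

    if #available(iOS 15.0, *) {
        // Modern path: UIButton.Configuration (introduced in iOS 15).
        var config = UIButton.Configuration.filled()
        config.title = title
        config.cornerStyle = .medium
        button.configuration = config
    } else {
        // Fallback path for the older OS versions the app still supports.
        button.setTitle(title, for: .normal)
        button.setTitleColor(.white, for: .normal)
        button.backgroundColor = .systemBlue
        button.layer.cornerRadius = 8
    }
    return button
}
```

Multiply that by every screen and every new API you'd like to adopt, and the cost of supporting those last few percent of users becomes pretty clear.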
100%. I thought the same thing with my Apple Watch, and I resisted spending what I considered to be an unreasonable amount on earbuds because I didn’t care (or so I thought) about having wireless earbuds. But eventually so many colleagues had them and talked to me about how great they were that I caved and bought a pair. Wow, was I wrong about wireless earbuds being something I didn’t need. Then I started evangelizing to people about them, and the cycle continued.
I agree, at first the watch was not really that interesting. Now I can see it eating Garmin and Wahoo for athletes. Why buy a separate device for tracking bike rides or runs when you can just use the Apple Watch? The Vision Pro will be the same thing. Give it 5 years.
You're supporting the idea behind "A lot of times, people don’t know what they want until you show it to them." Except, it sounds like the barrier is even higher for some people.
Very true. And I’m an iOS/macOS/tvOS developer who has all the toys, yet even I had trouble coming to grips with paying a premium for AirPods over a good pair of wired buds.
This makes me realize that the gestures it's being demonstrated with are going to be hilarious if they're misidentified. I'm /assuming/ that the thing will not pick up gestures between you and someone else, such that if I pick up a piece of paper with a hand shape similar to their "pinch and move things" gesture, it will realize it wasn't the same.
Or will this be akin to how Siri does a shit job of understanding anything that is not mechanical in speech? It will be absolutely hilarious if it has a hard time recognizing non-light-skinned hands for the gestures. I really hope they don't make that mistake.
Hopefully the troubles with Siri are understood well enough inside Apple that they won’t make that mistake, because Siri on my HomePods is truly awful about thinking I’m talking to it when I’m not.
Thankfully many of the people in the demo videos were people of colour, so I’m fairly confident Apple has gotten that bit right, and hopefully their gesture-detecting cameras have IR or dot-pattern emitters so they work in the dark as well.
I would cite the growing unease with Siri as evidence that they almost certainly don't have this done well.
IR dot patterns will be their own problem. And I hope you never want to curl up on that couch to watch a movie with a blanket. :D
That said, I am certainly not trying to say they definitely got it wrong. I share high hopes that this will work. Not enough that I will be an early adopter, though.
> IR dot patterns will be their own problem. And I hope you never want to curl up on that couch to watch a movie with a blanket. :D
Speaking of which, notice how nobody who was relying on finger gestures (rather than a keyboard) was using menus, or doing anything mouse-like. Just scrolling and clicking.
You point at things by looking at them, which is very precise according to people who have tried it, like MKBHD. Clicking and dragging is done by pinching your fingers. So it works quite similarly to a touchscreen or a mouse.
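From the developer side, my understanding is that this look-then-pinch input is delivered to apps as ordinary tap and drag gestures, so standard SwiftUI code should mostly just work. A rough, hypothetical sketch of what that looks like (the view itself is made up; only the gesture wiring is the point):

```swift
import SwiftUI

// Hypothetical visionOS view: looking at the tile targets it, and a finger
// pinch arrives as a normal tap; pinch-and-move arrives as a drag.
struct PhotoTile: View {
    @State private var offset: CGSize = .zero

    var body: some View {
        Image(systemName: "photo")
            .font(.system(size: 120))
            .padding(40)
            .glassBackgroundEffect()   // visionOS glass backing for the tile
            .offset(offset)
            .onTapGesture {
                // Triggered by look + pinch.
                print("tile selected")
            }
            .gesture(
                DragGesture()
                    .onChanged { value in offset = value.translation } // pinch and move
                    .onEnded { _ in offset = .zero }
            )
    }
}
```

Which may be part of why the demos are mostly scrolling and clicking: those map cleanly onto taps and drags, whereas hover-heavy, mouse-style UI doesn't.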
Yes, apparently it is surprisingly precise. There might be an uncanny valley when the system is perhaps a little too good at predicting what your next move is. Are the goggles monitoring your facial expressions, looking not just for eye movement but also wrinkled noses, scrunched-up foreheads, etc.? Will it be able to function as (for example) a lie detector?