It's interesting that this $45,000 camera has a high-speed rolling-shutter CMOS sensor (rather than a global shutter). Their spiel claims to minimise the jello effect rather than eliminate it. I would have expected more at that price.
It’s not for saving money. Rolling shutter sensors have better dynamic range and better light sensitivity (e.g. BSI is only possible on rolling shutter sensors).
Rolling shutter sensors can also have stacked RAM which results in very, very fast readout and cancels rolling shutter artifacts.
Even on non-BSI, non-stacked sensors, rolling shutter usually results in better IQ because the pixel and readout logic are "simpler".
Global shutter is now mostly a feature for machine vision sensors, where artifacts are not tolerable because the image is processed assuming absolute, precise temporal locality in each pixel's information. Acceptable image quality for those products has flatlined over the last few years at ~60% QE and around 15 ke- full well.
>> I suspect that this is the tradeoff for pixels of that size and speed on CMOS.
Probably. I find it interesting that people in the film industry prioritize everything having to do with light (bit depth, color gamut, resolution) over spatial/temporal characteristics. Not saying it's the wrong choice, it's just interesting.
First, I don't think that is true at all. High budget film movies are meticulous about every aspect that they can loosely quantify, even if it means someone eventually making decisions by eye.
Second, there is no reason that every shot has to be done with the same camera. Back before digital cameras and fluid manipulation it might have been more difficult to match shots in a sequence, but even so, VistaVision cameras were still used for visual effects plates.
If there was a fast whip pan where the rolling shutter would have a large impact, it could be shot with a different camera.
Exactly. Work within the limitations of the gear you are using. If you can't do swish pans and other fast camera movements as a trade-off to get higher resolution, higher bit depth, larger dynamic range, etc., then so be it.
The one tell I see with rolling shutter is light flashes where only part of the frame catches the flash. In the next frame, which part of the frame caught the flash is usually flipped.
I'd imagine a global shutter would negatively impact the battery life as well. Keeping the full sensor energized 100% of the time vs the sliver of the chip as it scans must have an impact to consider.
Am I reading this correctly? "Anamorphic license", "full-frame license"... This is insane!
The camera already costs 45k and they want to sell firmware feature unlocks?
> You say that as if it costs too much
That's not what I meant. I am fully aware that this is a great value. What I was saying is that, given the price (and IMO regardless of the price), that kind of "feature unlock" model is ridiculous. The profit margins on this level of gear tend to already be very high. They do not need to charge even more, especially for what I assume are fairly trivial firmware differences.
One non-obvious benefit of feature licenses is that it creates a strong incentive for Sony to add firmware features after release.
I wouldn’t be surprised if a majority of buyers would prefer more available features (and a longer hardware lifetime) and having to pay for them instead of fewer features for free.
45k is cheap for a cinema camera. Since it’s generally large budget films that shoot anamorphic, this scheme allows the camera to be accessible to production companies that might never need anamorphic. Anamorphic tech costs money to develop, so it seems that having all users subsidize an advanced feature they might not want would result in the camera costing more for everyone.
Outside of the lenses used, where does this other cost come from? The lens itself squeezes the wider image into the same shape as the sensor. The NLE or other DI processes would then just stretch it back out into whatever frame size is needed. It's the same concept anamorphic 16:9 DVDs used to fit wider content into an SD frame.
A camera sensor doesn't care what shape the image is; it only cares whether photons are hitting it or not. Yes, you can tell the camera you are shooting a certain aspect ratio so that it doesn't energize parts of the sensor that aren't needed. So, some additional dev costs to add that feature.
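The desqueeze step described above is just multiplying the horizontal dimension by the lens's squeeze factor. A minimal sketch with assumed example numbers (a 2x anamorphic lens on a 4:3 capture area):

```python
def desqueeze(width, height, squeeze_factor):
    """Stretch an anamorphic frame back to its intended aspect ratio."""
    return width * squeeze_factor, height

# Assumed example: 2x anamorphic lens over a 4096x3072 (4:3) sensor region
w, h = desqueeze(4096, 3072, 2.0)
print(w / h)  # -> ~2.67, the classic 2x anamorphic wide aspect ratio
```

The sensor records the squeezed 4:3 image; the stretch happens entirely in post, which is why the extra in-camera cost is mostly readout/monitoring features rather than the sensor itself.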
This is standard MO for Sony. They make cameras capable of more features that must be purchased separately. However, Sony is not the only company to do this. It is much easier/cheaper to manufacture one thing that is capable of everything, but limit features based on locking them behind a software license.
Did I completely miss it? I didn't see anything about the recording media type. Did they come out with yet another media format for this? The F55 used AXS cards, which were expensive. It would be right out of Sony's playbook to come up with another media format. I would really love it if cameras could accept an M.2 form factor: they are fast enough while being significantly cheaper. I've never looked into the robustness of the chips in "professional" media formats; I always just assumed they can charge so much because of the special form factor.
M.2 isn't designed for hot swap. It would have to be engineered into a carrier that provides the necessary electrical and mechanical interface to permit that.
Like a cover over the slot that opens and closes and has a switch connected? This is a pretty standard thing for media slots on cameras. For the Canon 5D series, it is even suggested to wait for the access light to flash once the door is opened before removing the media. The Canon C100 won't even acknowledge a card is present while the lid is open.
That is only to permit writes to complete. The removable media still won't be damaged by unexpected removal because it has been designed to accommodate that. M.2 has no such protections.
> RAW: This ultimate 16-bit linear RAW format preserves all the information captured in 4K, with 16 times more tonal value than 12-bit RAW.
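The "16 times" figure is just the ratio of code values per channel, since each extra bit doubles the number of levels:

```python
# Tonal values are code levels: 2**bits per channel.
levels_12 = 2 ** 12  # 4096 levels in 12-bit RAW
levels_16 = 2 ** 16  # 65536 levels in 16-bit RAW
print(levels_16 // levels_12)  # -> 16
```

So the marketing claim is arithmetically true; whether those extra levels carry real information depends on the sensor's noise floor, as discussed below.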
I would be interested in real-world benchmarks by professionals not affiliated with a camera company about this. I myself have shot 12 bpc RAW with Magic Lantern and was very impressed. I am wondering if going higher has advantages, for example less noise in the shadows.
Nikon DSLRs can use either 12- or 14-bit RAW and there is quite a lot of discussion regarding that. The consensus seems to be that for a well-exposed picture you can't tell any difference; it's only in some heavily processed edge cases that there may be an advantage.
Remember, it doesn't give a wider dynamic range, it just gives a greater number of gradations within that range. How the image is displayed also affects how it looks, but having a better source should give you less degradation during processing and display.
Adding more bits just reduces quantization noise, so it only helps if the quantization noise is above the actual sensor/amplifier/ADC circuit's noise floor in the first place.
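That condition can be sketched numerically. For an ideal ADC, RMS quantization noise is one step divided by √12; extra bits only help while that exceeds the analog noise floor. The full-well and read-noise figures below are illustrative assumptions, not any particular camera's specs:

```python
import math

def quantization_noise_e(full_well_e, bits):
    """RMS quantization noise in electrons for an ideal ADC
    spanning the sensor's full-well capacity."""
    step = full_well_e / (2 ** bits)  # one code value, in electrons
    return step / math.sqrt(12)       # RMS noise of uniform quantization

full_well = 60000   # e-, assumed full-well capacity
read_noise = 2.0    # e- RMS, assumed sensor noise floor

for bits in (12, 14, 16):
    q = quantization_noise_e(full_well, bits)
    # Extra bits matter only while quantization noise exceeds read noise.
    print(bits, round(q, 2), q > read_noise)
```

With these assumed numbers, quantization noise dominates at 12 bits but drops below the read noise by 14 bits, which matches the observation that a very quiet sensor is what makes higher bit depths worthwhile.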
The D810 for example can slightly exceed 12 bits when used at very low ISOs, <200. If Sony has managed to make an extremely low noise sensor here then there can be some benefit.
It's more used for "fuck it, fix it in post". It allows the DI people to pull out and mix-and-match bits of the frame that would otherwise be lost to noise.
It's also useful in VFX for things like matchmoving, as you can crank up the exposure so the tracker can grab onto something (again, assuming low noise).
That's the theory; with RED it never delivered, being noisy and not very sensitive (see the DoP's and set designer's mutterings on The Hobbit).