> MacBooks do fairly well considering that they are using a complete processor based around the ARM instruction set and software that is likely not fully optimized yet
If by "likely not fully optimized yet" they mean "running under x86 emulation"
This is just an ad. Compare a $1,700 Mac with an integrated GPU against a $2,400 or $3,500 PC with a discrete GPU on graphics processing tasks. The discrete GPU is gonna win every time.
Honestly, it's not much of an ad, either. The more expensive desktop has less performance per $ than the laptop, and that's before the software is even running natively!
I guess it boils down to what you need. Sometimes performance per $ matters most, and sometimes you need to hit a minimum performance target for a system to even be worth considering.
For After Effects I'd expect that loads of RAM would be the biggest win.
Personally, if I were procuring machines for Adobe/Nuke/other creative apps, I'd go with a second-hand workstation from HP and give it loads of RAM and a new GPU.
It'd be the same price as a new MacBook, with less single-threaded performance but much more GPU RAM and many more threads.
When it comes to portability though, that's a different issue...
I have no expertise whatsoever in this area, but I'm curious: is there not any timesharing / cloud solution for this? The hardware you mention would be used for a third of a day at most, right?
There are solutions, but they are not overly compelling from a cost perspective (yet)
For a g4dn.8xlarge (32 vCPUs, 128 GB of RAM, plus one GPU) it's $1,000 a month (or $355 if you're only doing 12 hours a day, 5 days a week, with no overnight renders; source: https://calculator.aws/#/createCalculator).
That's still over $4k a year.
And seeing as you need a remote viewing solution too (you still need a local machine to host the keyboard, video, and mouse), it's still not great.
Where the cloud is useful is if you are hiring someone for a 3-month contract, or you need to render something fast, as 100 hours on one machine costs the same as 1 hour on 100 machines.
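If it helps, here's a rough back-of-envelope sketch of that arithmetic in Python. The hourly rate is just backed out of the ~$1,000/month figure above, not an official AWS price, so treat all the numbers as illustrative:

    # Back-of-envelope only: the hourly rate is implied by the ~$1,000/month
    # figure above, not taken from an AWS price list.
    HOURS_PER_MONTH = 730                                # average month, running 24/7
    full_time_monthly = 1000.0                           # quoted full-time cost
    hourly_rate = full_time_monthly / HOURS_PER_MONTH    # ~$1.37/hr implied

    # 12 hours a day, 5 days a week, no overnight renders
    part_time_hours = 12 * 5 * 52 / 12                   # ~260 hours/month
    part_time_monthly = hourly_rate * part_time_hours    # ~$356/month

    print(f"implied hourly rate: ${hourly_rate:.2f}")
    print(f"part-time monthly:   ${part_time_monthly:.0f}")
    print(f"part-time yearly:    ${part_time_monthly * 12:.0f}")   # ~$4.3k

The part-time figure lands right next to the $355 quoted above, so the numbers hang together; the bill is really just machine-hours times rate.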
Very interesting! I wouldn't have expected such a large difference considering the economies of scale (you probably need to factor in a small cost for maintenance too, but that should be minor, I guess).
Maybe in the end it also comes down to the ergonomics of working with remote software. With a very good client, I think most of the latency and bandwidth limitations could be abstracted away. Onboarding the data might be a hindrance, but considering it is only done once (and considering services like AWS Snowcone), I think that's also minor. Given the name Creative Cloud, I would expect Adobe at some point to make the move and also provide the compute for their products (or rather services, by now).
It's interesting: there was a VFX company called Atomic Fiction that worked on Looper, Sully, and a few other big-budget films. They were entirely cloud-based.
In about 2014 it did look like everything was going to the cloud. Upgrading a render farm takes mega bucks ($1-4 million for a large company), so converting that to a pay-as-you-go solution looked interesting financially.
A lot of high-end "live" editing/colour correcting is done remotely, because the machines are too noisy to have in the same room. (Obviously 10ms of lag is trivial compared to 45ms+ and packet loss.)
I think the thing that stopped the cloud rollout was how revisions are made. A scene in a film will typically be priced based on the number of person-hours the company thinks it'll take, plus x%. However, a film _producer_ will demand some number of revisions to change things. Those are normally not charged for.
So if you have your own compute, a revision only costs you staff and power, _but_ if you are on pay-as-you-go, it costs you CPU rental as well.
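To put hypothetical numbers on that (every figure below is made up, purely to illustrate the shape of the problem, except the cloud rate which comes from the g4dn arithmetic earlier in the thread):

    # Hypothetical figures, purely to illustrate why un-billed revisions hurt
    # more on pay-as-you-go compute than on an owned render farm.
    artist_hours   = 40      # person-hours the revision eats (made up)
    artist_rate    = 50.0    # $/hr fully loaded staff cost (made up)
    render_hours   = 500     # machine-hours to re-render the shots (made up)
    power_per_hour = 0.05    # $/machine-hour of electricity on an owned farm (made up)
    cloud_per_hour = 1.37    # $/machine-hour rental, implied by the g4dn figure above

    staff_cost = artist_hours * artist_rate
    owned_cost = staff_cost + render_hours * power_per_hour   # ~$2,025
    cloud_cost = staff_cost + render_hours * cloud_per_hour   # ~$2,685

    print(f"revision on owned farm: ${owned_cost:,.0f}")
    print(f"revision on the cloud:  ${cloud_cost:,.0f}")

Same staff cost either way, but on the cloud every un-billed re-render shows up as an extra line item.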
I think you are right. I suspect that at some point either Amazon, or more likely Google, is going to start giving away software with CPU time (something like https://www.zyncrender.com/).
I'd assume the time cost of uploading/downloading assets would kill all productivity in the most interactive workflows. Otherwise there are solutions for batch processing.
I'm curious how much the inclusion of the video cards mattered in this test. A more reliable and meaningful test would be running Adobe product workloads on the M1 and Intel versions of the MacBook Pro and MacBook Air, to see how much having to emulate amd64 shows up in the performance numbers (and it would provide a reference point for later, when the Adobe suite is released with native ARM64 support).
And yes... it looks far more like a native ad than an actual "test".
Flagging it. There is nothing of value here. Adobe products haven't been properly ported to the Arm architecture, and this is comparing a significantly more expensive and less portable machine with Apple's M1 laptops.
It's clear that M1 is a winner from independent authorities like Anandtech.
If by "likely not fully optimized yet" they mean "running under x86 emulation"