I used to do in-house development at a shop that had a great policy on that front: Developer machines were always a generation behind the workstations of people who would be using the software they write.
And a strong culture of belief that, if a developer couldn't get something working well enough in that kind of environment, then that should be taken as a reflection on basically anything but the developer workstation. Inefficient code, excessive memory usage, just plain trying to do too much, bloat, whatever.
But I realize that's a hard thing to sell. A lot of developers don't really appreciate being reminded who works for who.
This is a really poorly thought out policy masquerading as deep understanding. It's, if I can coin a phrase, "yoda talk": basically incoherent ideas aren't improved by being couched as hard-earned wisdom.
The tools being used to create a piece of software are often fundamentally different than those used on the other end.
This means that machines should be provisioned in accordance with the needs of the actual tools running on them.
Developers, in addition to running different tools, may need to iterate quickly in order to test functionality. It may be acceptable for an app that is started once at the beginning of the day to take 3 seconds to start, but if you deliberately handicap developers' machines and it takes 7 seconds each time the developer is trying to fix a bug in the space of a few minutes, instead of 1 second on a faster machine, you may have damaged your own productivity for no reason.
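To put rough numbers on that (purely illustrative, sketched in Python):

    # Back-of-the-envelope cost of a slower debug loop. All numbers are
    # made up for illustration; adjust to your own workflow.
    restarts_per_bug = 30                 # app restarts while chasing one bug
    fast_start_s, slow_start_s = 1, 7     # startup time on fast vs. handicapped machine

    wasted_s = restarts_per_bug * (slow_start_s - fast_start_s)
    print(f"~{wasted_s} extra seconds (~{wasted_s / 60:.0f} min) per bug, "
          "before counting the cost of broken concentration")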
This has nothing to do with the idea of who works for whom. Incidentally, that statement sounds entirely passive-aggressive. Ultimately, I'm pretty sure all of you work for whoever your ultimate customer is and are invested in helping each other do effective work; it sounds like the management team was entirely unclear on how to do this. Is that shop even in business?
Not only is the shop in business, it's one of the most profitable places I've had the pleasure of working at. The policy was actually handed down by the CTO, who started in the company as an entry-level developer. Some money was incidentally saved, but, at least to hear him talk about it, it was more about getting people's incentive structures in line: If you want to discourage developers from doing something, the most straightforward way is to make it painful for them to do it. If you want to make creating performance problems painful, you accomplish that much more effectively by making it apparent on their workstations, where their noses will be rubbed in it constantly until they do something about it. Slow performance on an external test environment is much easier to ignore, because people typically don't bother testing there until they think they're basically done with their work, at which point their incentive structures are nudging them toward ignoring any potential problems.
Contrary to some of the criticism I'm seeing here, it wasn't actually hated by the development team, either. The hypothetical "can't get any work done because compiling takes a bazillionty days" scenarios that people are pulling out of thin air here simply didn't happen. At a shop where developers were expected to think carefully about performance and know how to achieve it, they tended to do exactly that.
Someone who was busy making excuses about how they weren't very productive because they only got a 3.4 GHz CPU while the analysts got a 3.6 GHz CPU probably wouldn't have lasted long there.
"Yoda talk" is a very nice phrase, I hope it catches on.
Dogfooding is sometimes a good idea, and of course testing on a range of setups is important. I suspect there is a problem with people testing on older software but not trying any older hardware (especially for web apps), which using old machines could have partially avoided.
But the idea that development should inherently be able to happen on older hardware than the product will run on is arbitrary and ridiculous. At best, that creates pointless pressure to rely on hardware-friendly tools, which could mean anything from not leaving open lots of relevant Chrome tabs to pushing developers to use vim instead of a Jetbrains IDE. (Nothing wrong with vim, obviously, but "we intentionally hobbled our hardware" is a weird reason to choose an environment.)
At worst, it fundamentally impedes development work. For an extreme case: Xcode isn't really optional for iOS development, and merely opening it seriously taxes brand-new MacBooks; applying this theory to iOS developers might leave them basically unable to work. Even outside that special case, there are still plenty of compute-intensive development tasks that are way outside the user experience. Just from personal experience: emulating a mobile phone, running a local test server for code that will eventually land on AWS, running a fuzzer or even a static analysis tool.
Even if we grant the merit of that trite "remember who you work for" line, sticking to old hardware doesn't seem to follow at all. We wouldn't go around telling graphic designers that if they work for a customer with an old iMac G3, they're not allowed to have a computer that can run Photoshop. Heck, are there any professions where we assume that building a thing should be done exclusively via the same tools the customer will employ once it's finished?
I worked in a place where dev workstations were far beyond the end users', and when devs tested (mostly locally, not across multiple systems over networks), there was no basis in the customers' reality.
Endless performance problems were masked in development, and then very noisily evident when it reached customers. Performance testing was largely reactive and far removed from the creators of some terrible code choices, so they'd tend to shrug ("Works fine here") until you analyzed the hell out of it.
Now, the code was a big C++ environment, and compilation speed was a problem, but maybe a means of testing in a throttled state would have prevented a lot of grief much, much earlier.
This reminds me of Facebook's "2G Tuesdays" (https://www.theverge.com/2015/10/28/9625062/facebook-2g-tues...), where they throttle employees' internet speed to match what users in emerging markets get. If you are frustrated while waiting for a page to load, then your users are going to be as well.
This is a really nice way of working, and you can see the results when browsing https://mbasic.facebook.com: even at 2G speeds, even with photos, pages load fast. No unnecessary background processes trying to do something; all the buttons are there to be clicked on already. A really smooth experience.
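If you want to try something similar yourself, here's a minimal sketch using Selenium's ChromeDriver network-conditions API (assumes chromedriver is installed; the latency and throughput numbers are my own rough stand-ins for "2G", not Facebook's actual settings):

    # Throttle Chrome to roughly 2G-class network conditions and load a page.
    from selenium import webdriver

    driver = webdriver.Chrome()
    driver.set_network_conditions(
        offline=False,
        latency=800,                     # extra round-trip latency, in ms
        download_throughput=50 * 1024,   # ~50 KB/s down
        upload_throughput=20 * 1024,     # ~20 KB/s up
    )
    driver.get("https://mbasic.facebook.com")
    print(driver.title)
    driver.quit()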
Correct, it seems much better to create a test-bed environment emulating this, rather than artificially constraining a person's own development environment due to some poorly thought out policy that is likely really just a way to save money.
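Something like this, for instance (a rough sketch; assumes Docker is available, and "myapp:latest" is just a stand-in for whatever image your app ships in):

    # Run the software under test in a deliberately constrained container,
    # leaving the developer's own machine alone.
    import subprocess

    subprocess.run(
        [
            "docker", "run", "--rm",
            "--cpus=1",          # cap the container to one CPU
            "--memory=512m",     # cap the container to 512 MB of RAM
            "myapp:latest",      # hypothetical image name
        ],
        check=True,
    )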
That's a complete non sequitur. The grandparent comment is not talking about how resource intensive background apps affect the performance of the product, it's talking about how the development tools themselves may not run smoothly and efficiently on shitty hardware if they are resource intensive.
They are resource intensive because they prioritize feature completeness over being sleek and lightweight. That's the correct thing to prioritize, because extra RAM is dirt cheap compared to the benefit of higher developer productivity.
That's the point: they were already feature complete back when they were less bloated. VS2017 offers very little over VS6.0 or VB6; I think their minimum requirements were 128 MB of RAM, and they were a lot more responsive too. Similar for Eclipse.
Developer productivity hasn't improved since; it has even gone backwards in some ways.
I think you're mis-remembering your time period. 20" monitors weren't widely available until the mid-late 1990s; nor were P133 machines (P133 release was in June 1995; so for it to be considered mid-low end, late 90s would be more likely).
Where I worked in the early 1990s (1992-ish), us developers fought over who would have the color terminals (Wyse 370/380) and who would have the monochrome ones (Wyse 50/60). I was low-man on the pole so I always got stuck with the low-spec terminal (though I still have an affinity today to amber VT220 text characters).
Until I "wowed" the owner of the company (small shop) showing how to do windows, dialogs, etc in a terminal using a virtual buffer and some other "tricks". Then I was given one of the color terminals, to jazz our interface (which was mostly numbered menus and basic flat screens).
At one point, I played around with the 16-color Tektronix graphics mode you could pop into with the right escape sequence; I think I made a very slow Mandelbrot set generator (never showed that to him; our app was already pushing our dev hardware, which was an IBM RS/6000 desktop workstation running AIX we all shared via 9600 bps serial connections)...
The '90s and 2000s were a pretty different story. These days a 5-year-old i7 with 16 GB of RAM is pretty damn fast. Maybe it's a different story if you're running Windows with a big IDE.
This may be workable for a certain subset of projects, but programmers often have much more on their system than the end user. End users don't need a bloated IDE, an SQL server, an HTTP server, etc. all running at the same time. Trying to run all of these programs on an old computer is of zero benefit to the process. Better to give programmers a new machine with remote desktop access to a slower computer/virtual machine that they can use to test out their software.
You could easily argue the opposite as well. Developers don't need an IDE, an SQL server, an HTTP server, etc. running on their device at all. Using a bloated IDE, most of whose features people never touch, is a choice. The servers could all run on a dev server, and compile/test cycles can be done on similar servers.
Mind you, I don't necessarily agree with all of this. Well, except the IDE part: Vim and Emacs are tools that more people need to learn.
> The servers could all run on a dev server and compile/test cycles can be done on similar servers.
In every case where I've had a dev DB running on a shared test server, that DB has been woefully underspecced for the purpose, and often in a datacenter with 300 ms latency from the office over the company VPN.
Meanwhile, production instances are in the same datacenter as the production DB, with 5 ms latency.
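The arithmetic is brutal for anything chatty with the database (the query count and latencies below are illustrative):

    # Why a page that makes many small queries feels fine at 5 ms to the DB
    # and unusable at 300 ms over the VPN.
    queries_per_page = 40

    for rtt_ms in (5, 300):
        total_s = queries_per_page * rtt_ms / 1000
        print(f"{rtt_ms:>3} ms round trip -> ~{total_s:.1f} s spent just waiting on the network")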
Unless you are literally building the dev tools you are using, that doesn't make any sense. That shop lost tons of money on wasted dev cycles. You spend much more time building the app than running it.
They should have bought everyone a second low spec machine to test on, and let them use proper dev machines for building the software.
I guess if it's a shop where the management feels they need to remind developers suffering through that for 8+ hours a day "who works for who", that was probably the least terrible part of working there.
The idea you're trying to advance here is just plain silly.
You need your doctor more than your doctor needs you. That doesn't change the fact that your doctor is doing work for you, and not the other way around. Same for lawyers, plumbers, electricians, architects, and anyone else working in any number of other skilled professions.
Do you also engage in silly power plays to put your doctor in his/her place and remind them that they are working for you? Maybe you can insist that they use a 10-year-old stethoscope, or else you'll take your business elsewhere.
Developers can switch jobs fairly easily. It's a seller's market. Companies that don't understand this are going to wonder why they have a hard time retaining talent.