TFA author here - I have worked on low-cost embedded systems. (Tens or hundreds of megs of RAM, not KBs.) The only case where you don't hoard resources is when the system is so small that it's programmed by a small team - functionality is severely limited by resource scarcity, so there's no reason to grow the team to deploy more functionality. Above a certain not-so-large size, people will hoard resources in an embedded system: they can't ask for more capacity like they would with server-side software, but they sure can avoid giving up whatever resources their code is already using.
Researchers actually have a limited and smallish hardware budget, so academia is likely to come up with cost-saving ideas even when hardware performance grows very quickly. In industry you can throw more hardware at the problem even if the software isn't improving (outside embedded/client devices).
> TFA author - I have worked on low-cost embedded systems. (Tens/hundreds of megs of RAM, not KBs.)
I worked with kilobytes, small team, everyone sat together. No one was hoarding resources, because just to ship out the door we had to profile every single function for memory and power usage!
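On that kind of per-function accounting: one way a team might rank functions by flash/RAM footprint is to parse the output of `nm --print-size --size-sort` run on the firmware ELF. A minimal sketch, with the sample `nm`-style output below made up purely for illustration:

```python
# Sketch: rank symbols in a firmware image by size, from nm-style output.
# SAMPLE_NM_OUTPUT is a hypothetical stand-in for real `nm` output;
# columns are address, size (hex), symbol type, name.
SAMPLE_NM_OUTPUT = """\
0000816c 00000024 T uart_init
00008190 000000c8 T sensor_read
00008258 000003f0 T log_flush
20000040 00000200 B rx_buffer
"""

def rank_symbols(nm_output):
    """Return (name, size_in_bytes, type_code) tuples, largest first."""
    rows = []
    for line in nm_output.splitlines():
        addr, size, kind, name = line.split()
        rows.append((name, int(size, 16), kind))
    return sorted(rows, key=lambda r: r[1], reverse=True)

for name, size, kind in rank_symbols(SAMPLE_NM_OUTPUT):
    # B/b/D/d symbols live in RAM sections; T/t symbols are code in flash.
    region = "RAM" if kind in "BbDd" else "flash"
    print(f"{name:12s} {size:5d} bytes  ({region})")
```

Power usage obviously needs real instruments rather than a script, but even this kind of size ranking makes hoarding visible: nobody's buffer hides for long when it tops the list.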
IMHO, hundreds of MB of RAM isn't embedded, it's just "somewhat constrained". :-D Regular tools run just fine, you can use higher-level languages and garbage collectors and such; you just have to be a little bit careful about things.
> Researchers actually have a limited and smallish hardware budget, so academia is likely to come up with cost-saving ideas even when hardware performance grows very quickly.
Agreed, but I also think it's difficult to determine when forward progress is stymied by too many resources vs. too few!
As someone who works in embedded systems - I agree. Teams will deploy whole desktops to an embedded system if permitted to expand enough. Architectures will evolve to require hardware that can support the bloated stack.
Even on power-constrained robots operating in the deep sea, it's a pretty safe bet that some of them are running whole Windows VMs.