24-core CPU and I can’t type an email – part two (randomascii.wordpress.com)
249 points by MBCook on Aug 23, 2018 | 98 comments



It's tangential here, but, seriously, let me ask:

For those of us here with > 15 years in the business, when's the last time you really felt a BIG uptick in performance or responsiveness from a new computer upgrade?

I buy a new laptop every 3-4 years, and I buy a nice one, but generally speaking I feel like we've been kind of flat in terms of usable power for a while. 6 or 8 years ago I switched to an SSD, and THAT was a big deal -- easily the most dramatic uptick in performance since I moved from an AT clone to a 386 in 1991. But since then? Not so much.

My laptop is smaller. It runs longer on the battery, and is generally cooler. The screen is better. But in terms of how long it takes to boot, or open large data sets, or whatever? Not so much different than 5 years ago.


For my personal computing needs I recently upgraded from the combination of an ancient i7 920 quad core desktop and a quad core Haswell laptop to a Ryzen 2700X desktop and a pile of lovely disposable X220 Thinkpads.

On the one hand, for software that's not complete garbage in terms of optimization (scientific software, video/audio software, games, my own code) the difference is staggering; I hadn't felt a change this drastic since I moved from a Pentium 1 to a Pentium 3. On the other hand, I now know that there is no amount of resources that will make a modern web browser or Electron app run well.

The good news is that now that I feel I deserve good performance I've made an effort to get rid of most of the crap. Apart from disabling JS etc. by default and blocking everything I possibly can I have no solution for the browsers, sadly, but everything else is gone. The Thinkpads all run minimalistic Arch Linux setups and mostly work as thin clients, so overall every system I use is snappy and responsive, for the first time in a decade. I shouldn't have needed a hardware upgrade to return to common sense in computing, but it is good to know that with some discipline and a low tolerance for garbage it is still (mostly) possible to have a reasonable computing experience. Now, if only there was a usable browser out there...


This is heavily correlated with your age (or, as you put it, "years in the business".) We old-timers clearly remember the days of Moore's Law, or more precisely, the days when Moore's Law had a direct impact on single-processor performance.

For me, every upgrade was at least a factor of two improvement to pretty much everything. It was a qualitative difference -- things that just didn't make sense to run were now reasonable to do, things that meant leaving it overnight could now be kicked off before going to lunch or whatever, things that required scheduling were now causes for slashdot breaks (age, remember?), and those minutes-long jobs were now something you could run and wait for without context switching.

I could go on for one more step -- except that's kind of where it ended, at least for reliable improvements. Stuff that was almost buttery smooth might become buttery smooth, but stuff that was buttery smooth already might start lagging some here and there.

These days upgrades are more about SSDs, RAM, decaying batteries that tip you over the edge, connectors, or maybe the collection of weird compatibility problems (eg 3 monitor support) that you've sort of worked around but the workarounds are breaking down, and you have this naive notion that the latest greatest laptop will magically be more robust. (Instead, you usually just trade over to a different set of problems.)

I'd agree that the SSD switch was the biggest boost in recent memory, and it's been about a decade since I've experienced one of those glorious upgrades of yore. And I miss them -- the heady excitement of everything being better is now replaced with a nervous inventory of everything I need to work or think might be improved, trying it all out in hopes of feeling some improvement or at least having it continue to work.


It used to be that each generation of CPU was a significant performance jump.

DOOM on a 386 was nearly unplayable. On a 486 it was perfect. Quake on a 486 was awful, was great on a Pentium, and was smooth as butter on a Pentium II.

These days, each generation is less than 20% faster in single-core performance. My desktop at home is an i7-3770k. In a couple months, it'll be 6 generations out of date, and yet it's still fast enough for everything I do, even gaming!

My CPU is 6 years old, yet doesn't feel like it. If you were running a 6 year old computer in the year 2000, you'd be running a 133 MHz Pentium while everyone with a new system at the time would be on a 1 GHz Pentium III. The performance difference was at least an order of magnitude and definitely noticeable.


I'd venture that modern microprocessor improvements "seem" smaller for two reasons.

1) A lot of the "a ha" magic really has been figured out in microprocessor arch (e.g. pipelining, superscalar, multi-level caching).

Previously, we were reaping the benefits of process shrinks AND microarch epiphanies. Now we only get the former more slowly, and less powerful versions of the latter (e.g. branch predictor tweaks).

2) We're discounting the "performance" that has instead been allocated to power. Up until the P4, we had the benefit of just saying "more power!" in pursuit of performance.

Now, not only do we have a power budget to stay within, but we're actively trying to decrease power draw for the same workload in mobile parts.

So instead of getting something twice as fast, we get something whose battery lasts twice as long (or however the math works out).


> So instead of getting something twice as fast, we get something whose battery lasts twice as long (or however the math works out).

We've essentially changed the metric of performance from "calculations per second" to "calculations per watt", which can be useful for mobile (battery life) and data centers (reducing heat and the need for massive air conditioners, plus the electric bill), but less useful for gamers that often need more single-core performance.


UI and user-facing elements are largely dependent on perception and I/O (human and computer), and the band of improvement there is narrow.

Also, I/O is often the bottleneck, which is why most people perceive a speed increase moving from spinning platters to SSDs. If you move stuff to RAM, I bet you'd see another perceptible speed increase. (I've recently started using the RAM-backed /dev/shm on Linux for transient, throwaway data generated by my programs, and boy it is fast.)
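
To make that concrete, here is a minimal C sketch of the idea (my own illustration, assuming a Linux system where /dev/shm is the usual tmpfs mount; the file name is made up):

  /* sketch only: dump throwaway intermediate data to RAM-backed tmpfs */
  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
      const char *path = "/dev/shm/scratch.tmp";  /* hypothetical file name */

      FILE *f = fopen(path, "w");
      if (!f) {
          perror("fopen");
          return EXIT_FAILURE;
      }

      /* throwaway intermediate results, written at RAM speed */
      for (int i = 0; i < 1000000; i++)
          fprintf(f, "%d\n", i * i);

      fclose(f);
      remove(path);  /* transient data: clean up when done */
      return EXIT_SUCCESS;
  }
Anything written there lives in RAM (tmpfs) rather than on disk, so it behaves like a normal file but is gone on reboot, which is exactly what you want for throwaway data.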

I have a Linux machine running on a Core 2 Duo (2005) at home and a Core i7 (2017) at work. An "ls -la" feels about the same on both. But when I try to train an ML model, the difference is stark and perceptible.


When I finally was able to upgrade from 4MB to 16MB of RAM on my Linux box in ~95-96, I went from constant swapping to FAST COMPUTER. Seriously, I had no idea compiling a kernel could be fast. It was taking something like 7-8 hours and I just thought it was a lot of work (though it is, of course).

The next time I compiled a kernel took something like 8 minutes. I couldn't believe my eyes as the steps flew by on the console.

Other than "adequate RAM" I'll go for "SSD" like everyone else. Now adequate RAM doesn't even matter all that much because swapping is so fast (relatively speaking).


I actually feel a big difference when I upgrade, usually on the same 3-4 year cadence.

Admittedly, the most recent cycle felt somewhat the same, but generally I see a boost in quality of life. Not things like how fast a script runs on a large dataset, but like how responsive my machine is _while_ that script is running, or _while_ I'm doing some intensive task.


I got an SSD in 2012, and it was miraculous, like everyone who's done it can say.

Aside from that, 2008 was probably the biggest jump. Going from a mediocre laptop to a custom built ~$2500 desktop with a Core 2 Quad and (especially) 24" monitor was astounding. I'm never going to make a laptop my main machine ever again.


It is rare to notice small improvements when they are delivered incrementally. You would notice if you went back to that machine from 7 years ago and tried to use it today.

There is also the idle nature of most computer usage. That next gen i7 will not load Facebook appreciably faster. CPU limited tasks could be only 5% of normal use. So a 20% improvement for 5% of the time.
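
As a rough sanity check on those (assumed) numbers: speeding up the CPU-bound 5% of the time by 20% takes the total from 1.0 to 0.95 + 0.05/1.2 ≈ 0.99, an overall gain of well under 1%, far below anything you'd notice.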


Yeah, but if you back up a decade, the improvements were anything but incremental. Even a 2 year upgrade was a massively noticeable improvement.


For desktops, i would say it was the 2nd gen of the core i series (2xxx, like the 2500k). For laptops, definitely the original retina macbook pro in 2012.

I'm still using that machine, i try out new stuff every few months. I still don't see the point. I load it down pretty heavily all the time and it still kicks right through it just like it did when i took it out of the box.

If it breaks, i'll just buy another similar one. They're not much over $500 now and it's an amazing amount of computer for that price even in 2018(19?). My only regret is not maxing out the ram and storage really, but that's on me.

I've never felt this way until now, i used to wish i could justify a new machine every year or two and often would. I think i didn't keep a laptop for over a year and a half until i got this one... But now i just don't care. I don't own a single machine with a newer cpu than this one, and my desktop still does great too with a 3770 and a couple SSDs.


> when's the last time you really felt a BIG uptick in performance or responsiveness from a new computer upgrade?

Earlier this year. I upgraded (my personal computer) from a 2010 dual-core i5 (Arrandale) laptop to a 12-core Threadripper. My old laptop was still usable for single-thread tasks - I upped the memory to its limit (8GB) and upgraded to an SSD, and this contributed to my long upgrade cycle.

My workflow has changed since I bought the laptop in 2010 - I now tend to run multiple VMs and docker in parallel, and there I noticed a BIG uptick in performance. I could only run 2 VMs (slowly, with swapping) at most before the upgrade; post-upgrade, I'm able to run 6 VMs with no signs of slow-down. I haven't tried to find the upper limit to the number of VMs, but there's plenty of room for growth with additional RAM and SATA disks.


Given:

- gigabit ethernet LAN and a speedy Internet connection

- doing "desktop" work -- browser, SSH, office docs, media consumption

All my machines feel snappy, ranging from the $250 NUC through the MBP2011 (with 16GB RAM and an SSD) through the midrange Intel desktop (also 16GB RAM and an SSD).

I don't play any significant games and major computation happens on servers, not on anything I'm typing directly on.

It's been snappy for a decade, and I don't expect a noticeable improvement the next time I get something new. I might be pushing more pixels to a screen, but it will respond at about the same speed.


Since everyone mentioned SSDs, I'll second getting a gigabit local network connection. Now I store everything on the server in another room, and my laptop, desktop, and consoles can load and share files almost like the hard drive was connected to each directly.


Great point. I went from 25Mbps DSL to symmetric gigabit fiber last year, and that'll change your life.


When I fitted my laptop with a 118GB Optane drive (Intel 800p), my builds started running twice as fast. I wouldn't have achieved that by swapping the CPU.


What kind of builds are you doing? Some of my colleagues are running tests at work and are looking into what benefits the most.


+1 on the SSD. I've declined my company's managed PC upgrade twice now because the performance increases are negligible and build quality got worse (at least on the models we were offered).

I agree that laptops/batteries have gotten smaller at the same price/performance point, though. Very good for my company.


+1 for experiencing a noticeable improvement with SSDs.

One more I would submit is the transition from DDR2 to DDR3. I do not yet have a DDR4 machine, but I am anticipating some kind of "aha! so this is what I have been missing out on" moment when the time comes.


I was reading about DDR3 vs DDR4 the other week, and apparently it's not so clear cut - DDR4 can be slower in some cases.


2014 I replaced my MBA 11" with a maxed out MBP 15", and the performance difference was crazy.

Buuuut, my next laptop (if I stick with Apple) will probably be a MB 12" with medium to low specs, with most of my demanding dev stuff moved to a Linux server.


For me too it was swapping my HDDs for SSDs. Also, getting 64GB RAM (I do lots of video and stuff).

Everything else, moving from CPU generations to newer generations, _felt_ like it barely made a dent.

GPU rendering via CUDA also meant a massive jump.


>But in terms of how long it takes to boot

I notice a huge improvement in boot times with every (windows) OS upgrade I've gotten.


Recently overclocked an L5639 from 2.13GHz to 3GHz and saw a noticeable difference, but I'm not sure how pronounced the jump would be from an i7-3xxx at 3.5GHz to an i7-8xxx at 5GHz.

There's definitely a difference between the L5639 at 3GHz and an i7-6700HQ (2.6GHz base) in my laptop.


Oh, but booting is faster because systemd, remember? /s


This is literally the one thing "the systemd people" -- which is to say, Red Hat -- fell back on when their backs were to the wall after having their short list of sales pitch lies refuted. If none of the other supposed benefits were benefits or actually true, at least systemd would make our systems boot faster. Red Hat leaned on this hard throughout the process which culminated in the exclusively political decision (i.e. not based on merit) by the Debian TC chairman to completely replace existing init in "the rock-solid stability distro" with such unstable, buggy, immature software.

Here's a fun project idea: Plot the boot times of machines running systemd (and some control) over time, and I'm willing to bet you'll see a substantial increase in boot time immediately subsequent to the systemd decision by the Debian TC.


But what are boot times like now compared to an init-based system after years of development?


Worse. Do you think somehow cramming more wishlist kitchen sinks into systemd made it faster?


It's not the hardware per se, it's the software. On my various sysadmin forums I've never seen so many "I went full gnu/linux and am never looking back!" posts. Windows has been shitting the bed for a long time now and 7 was just the outlier in the trend. Hardware, especially M.2 and data SSDs, does make a difference, but it's mostly bad software people are Stockholm syndromed into using that is the problem.


I have on my desk a machine that would be an equivalent of a large datacenter two decades ago and then sometimes it is barely able to keep up with me pressing keys on the keyboard.

I think the next major step will be for humanity to learn to make software more efficient and effective rather than throw more CPU cycles at the problem.


Instead we get Electron, and people use it to make text editors that take up 500 MB of space.


Electron gets a lot of hate, but I think people generally are looking at the wrong thing when focusing on memory usage; ask yourself why it’s so important to so many organizations.

The number one cost at most companies is humans. Before Electron, what did you have for cross platform development? Java was there, but that never really took off for embedded UI inside the browser, and the browser became the primary target for most businesses in the last ten years.

Electron allows small teams to focus their most costly investment on developing toward one product being developed across many platforms.

I use Electron apps that give a consistent experience on macOS and Linux (and presumably Windows as well, though I don’t use it personally), and then the same web app on Firefox, Chrome and Safari across those same platforms, with the addition of my phone.

Yes, that comes at the cost of running a full browser, but it’s a logical choice when optimizing for the diverse computing landscape of today.


"Looking at the wrong thing" is a matter of perspective. As a user, I couldn't care less that you saved money by writing a bloated web app. I care about the resources I have, which includes my own time, energy, money, computer, battery etc. In those terms, there are clear disadvantages to Electron based software.


The incentives are all messed up due to how we pay (or don't) for software; this is similar to how it's much easier to waste viewer's battery and bandwidth showing adverts than it is to charge them money.


Isn't this mainly because of the OS builders? MS, Apple, and Google? The only cross platform GUI system they all support at this point is browser based UIs. And given that basically any UI dev practically needs to target the web now, I don't see why this is surprising.

We seem to be blaming developers, but developers are just dealing with an ecosystem where this is the best common denominator. There's another comment in this thread, that points out that Electron is only 5 years old, and there is a lot of opportunity for performance and system utilization improvement.

IMHO if we want this to be better, then the OSes should make this easier and more performant for developers, and by extension, their platforms will outperform and outsell the others.


See my response to a similar sibling comment: https://news.ycombinator.com/item?id=17826915

My basic point there being that losing a small number of users due to these concerns is probably worth it.

I don't want to come across as someone who's hyper-defensive of Electron. I hate the fact that such a bloated piece of software has become the standard means for supporting cross-platform development. At the same time, I really don't see other viable options at this point in time.

I'm curious though, is there something you would recommend instead that meets these requirements with a single codebase (~90% shared code): target all major platforms (macOS, Windows, Linux) and target all major browsers (Chrome, Safari, Firefox, Edge, IE), (edit: and how could I forget, all major phone browsers) with nearly identical UI/UX?


There is no such thing and I would argue there shouldn't be. The notion that you could use a "nearly identical UI/UX" across a desktop application, a mobile application and a web page invariably leads to horrible results. The usecases are just too different. The only reason people pretend otherwise is laziness and cost cutting - a context in which Electron seems reasonable as well, to the horror of technically literate users everywhere.

What might be reasonable, at least for applications that aren't performance sensitive, is asking for a cross platform solution between desktop OSes and a different one across mobile platforms. Those exist, as you well know.


> There is no such thing and I would argue there shouldn't be

It's one thing to not want to use Electron app, it's another to say no one should use them. It rubs me the wrong way. There are a lot of things I'd never use, but they don't rile me to that level.

> The only reason people pretend otherwise is laziness and cost cutting - a context in which Electron seems reasonable as well, to the horror of technically literate users everywhere.

"Laziness and cost cutting" is just your label for trade-offs you don't agree with. That would apply to cross-platform Java/Swing apps or Gnome. The "technically literate" users don't have to worry about business considerations and engineering tradeoffs - the authors do. If memory usage is such a big deal, then the market will self-correct, I was told it's a meritocracy.


> It's one thing to not want to use Electron app, it's another to say no one should use them. It rubs me the wrong way. There are a lot of things I'd never use, but they don't rile me to that level.

The reason this trend riles me so much is that we now have companies like Slack, which easily have enough resources to do an efficient desktop app on any platform they choose, releasing utter garbage that, far from merely wasting memory, takes up ridiculous amounts of CPU time (and therefore battery time and energy) to do the simplest things (like render emoji, or even a blinking cursor). The aggregate waste of resources is mind boggling, and we've gone far past the point where Electron was solely used as a quick solution for very resource constrained companies or single devs.

> "Laziness and cost cutting" is just your label for trade-offs you don't agree with. That would apply to cross-platform Java/Swing apps or Gnome. The "technically literate" users don't have to worry about business considerations and engineering tradeoffs - the authors do. If memory usage is such a big deal, then the market will self-correct, I was told it's a meritocracy.

It certainly does apply to Swing apps, Qt, Gnome, etc. I've always considered all three of those frameworks ludicrously bloated, by the way, but Electron has far exceeded my worst nightmares in that regard.

I'm not sure who told you the market is a meritocracy or why you believe it, but I don't see much evidence for that view, personally.


> My basic point there being that losing a small number of users due to these concerns is probably worth it.

Again, this is the developer's perspective. As a user, I don't care at all how many others use my email client or text editor. What's more important to me is that it doesn't suck, especially on my personal computer which isn't quite as tricked out as the computer I use at work.

My point is as previously stated: there are multiple perspectives on this. For me as a user there is absolutely no benefit to the developer using Electron. I don't care if it took a million man years to put together; I care about my own resources. I'm not arguing with your perspective, just arguing that it's just that, one perspective, and that users aren't "looking at the wrong thing" when they complain about the result being crap.

> At the same time, I really don't see other viable options at this point in time.

Electron is five years old. Surely, developing a cross platform application was viable before that? Seems like an ironic statement considering that the larger part of Electron is Chromium, a cross platform GUI application predating Electron. There are plenty of libraries and frameworks that abstract OS specific stuff with regards to GUI, networking, file system handling etc.

> I'm curious though, is there something you would recommend instead that meets these requirements with a single codebase (~90% shared code): target all major platforms (macOS, Windows, Linux) and target all major browsers (Chrome, Safari, Firefox, Edge, IE), (edit: and how could I forget, all major phone browsers) with nearly identical UI/UX?

For one, your web app could be just that: a web app. There's no reason to have multiple copies of chromium running and littering your disk when you already have a browser.

Second, these UIs that look consistent across all platforms probably make the designers happy, but for the user (or at least me, personally) it is better if the interface is consistent with the design conventions of the platform it's running on.

But no, I don't really have an alternative to suggest if those are your goalposts. Targeting multiple platforms is just one of those things you have to suffer through with some consideration if you don't want your application to be a bloated webapp-bundled-with-a-browser.


> I'm curious though, is there something you would recommend instead that meets these requirements with a single codebase (~90% shared code): target all major platforms (macOS, Windows, Linux) and target all major browsers (Chrome, Safari, Firefox, Edge, IE), (edit: and how could I forget, all major phone browsers) with nearly identical UI/UX?

That is a ridiculous comment to start with. You might as well say that something's name has to start with an 'E', end with 'n', be an 8-letter word, and be developed by the Chrome browser team.

If it were not software, I think it would fall right into the category of 'blame the victim'. It seems you really believe it is the users' problem that they do not upgrade their phones and computers every couple of years.


> That is a ridiculous comment to start with.

Is it, really? Let's drop the web as one of the platforms we don't want to target. There are some reasonable options left, but when you add the web in, and just limit that to the major browsers, your requirements for support just got extremely more complex. Why do you want to target the web? because in general, that's the easiest place for user acquisition and on-boarding of a product.

And where did you get this: "It seems you really believe it is the users' problem that they do not upgrade their phones and computers every couple of years." This is a decision people need to make in how they target certain users. If your target user base has limited compute available, limited bandwidth, limited memory, etc., then of course you need to build software that meets your business requirements.

There are no victims here, there are users, potential users, and former pissed off users who've gone elsewhere. These are all business decisions you need to make as a developer. If you target Electron and by doing so piss off all your users and they ditch your software, then you screwed up. But I only see VSCode gaining in usage, and even with Slack being a horrible memory hog, they're still gaining customers.

In a nutshell, there are companies that are being very successful with this, success is hard to argue with, as much as you might dislike the way it's being achieved.


It's not quite that simple though, because plenty of high profile apps have gone through large rewrites from native to Electron. Skype is probably the biggest offender here, taking an app that was actually quite good and turning it into a steaming pile of crap.

Alternatively, we've got closed systems like Slack that are slowly and pervasively getting rid of open systems like IRC.

I'm really not sure that there's an engineering time advantage in taking a reasonably solid existing piece of software and rewriting it in Electron. Is the technical debt really that large?

Really, I think the problems we have are political, and not technical. I include things like NIH syndrome in that group.


I think there's an inevitable, undeniable need for something like Electron, but it needs to be well supported and ubiquitous.

Before Electron there was xulrunner, whose main purpose in life was to be the cross platform application layer of the Firefox web browser.

Even though it was officially an unreleased "technology experiment", xulrunner was successfully used both internally at Mozilla to develop Thunderbird and Sunbird, and externally to develop other cross-platform desktop apps like TomTom Home, Uploadr, Nightingale, Songbird, Miro, Joost, Lotus Notes, etc.

But xulrunner was never as fully fleshed out and widely used as Electron, and Mozilla was never serious enough about supporting xulrunner as a platform for other applications, for it to be viable in the long term.

It never caught on and became a standard, and now it's obsolete.

https://en.wikipedia.org/wiki/XULRunner

>XULRunner is a "technology experiment", not a shipped product, meaning there are no "official" XULRunner releases, only stable builds based on the same code as a corresponding Firefox release.

>Mozilla stopped supporting the development of XULrunner in July 2015.

In order to solve the problem of every application shipping with its own web browser, there needs to be ONE standard Electron shell that can run all those simple apps and even the advanced ones, and apps need to be able safely include their own native code extensions.

Many people developing Electron apps explicitly and legitimately WANT their app to include the whole web browser, just so they don't have to expend any effort supporting other browsers.

Electron needs to mature and become stable enough that there can be one global install that runs all apps.

But that requires long term support, and a big enough well funded team working on it full time.

Just as all the successful corporations who built on top of OpenSSH have a moral obligation to support its developers, I think successful companies making money by shipping big fat Electron apps today have a moral obligation to support the development of a standard Electron shell.

Think of it as buying carbon credits.

https://en.wikipedia.org/wiki/Carbon_credit


I remember XULRunner, but never used it. Do you know if it was any lighter weight, or how much so, than Electron?

> I think successful companies making money by shipping big fat Electron apps today have a moral obligation to support the development of a standard Electron shell.

I agree completely. The issue I mainly see is that, with the possible exception of Linux (which is a drop in the bucket in terms of user numbers, and has its own issues, GDK vs. QT/KDE), the main adversaries have traditionally been the OS builders. They actively seem to be intentionally creating a walled garden model.


Yes, ever since computers became "fast enough" developers have traded user's efficiency for their own. It's faster to write, but slower to run. The difference is that it's written once but run thousands or millions of times each day. It's distributed waste of resources, and it adds up.


And users willingly accept this by upgrading their computers every 3 years just to run the same software with the same performance.

I shouldn’t have to buy 8GB more RAM just because Companies X, Y, and Z only want to hire a few JavaScript programmers.


The conservation of developer's time (and company money) at the expense of the users' time and money is the epitome of an externality. It's clear that we'd all be a bit better off if user's time was valued more. But who's going to pay to make that happen?


The users. If a person wants something, they buy it; if they don't, they don't; and if they didn't, they didn't.

A very simple and straightforward model of action. Yet it has fueled almost every major non-military-funded technological advancement since at least the Industrial Revolution, inclusive.

(And if you consider the military a very special customer who has been graciously granted the disposal of the entire nation's profits ... then they are no exception after all.)


Users could easily pay a few cents more to hire that competent dev to optimize, but there's a massive information problem that keeps the option from showing up.


> The number one cost at most companies, is humans. Before electron what did you have for cross platform development?

Why does development have to be cross platform?

If you are a startup with a small team, and you truly only have two or three developers, then I can understand this desire. I totally understand the "this was my side project" or "we're just four people in a garage" arguments. I get it.

But most products people complain about are from billion dollar corporations that could easily handle having totally different codebases for 4+ platform clients (with way more total lines, but only a little bit more complexity and cost) than one single cross-platform super-client. Humans are not expensive to these companies, and we're only talking about adding a few more.

From my own personal experience, writing a native Java Android version and a native Windows UWP C# version of the same app, the effort + maintenance of those two codebases combined was only a little bit more effort than writing one HTML+JS+Electron-ish project that spits out an Android and Windows version. Native code just isn't that difficult if you spend a little bit of time learning their APIs (I would argue it's actually simpler than modern web frontend).

I understand why companies choose Electron instead -- and I've even had to do so myself at times, I get that. It's great for small projects or low-budget projects or internal-only uses, etc.

But it's not unreasonable for people to complain a little bit when these large hyper-popular billion-dollar services (like Slack or Spotify or Twitter) cheap out a bit on their software in that way.


Electron allows small teams to focus their most costly investment on developing toward one product being developed across many platforms.

At the cost of battery life and memory usage for the users....


Electron started as an experiment that took off with quite a bang, and all told it's still pretty new. I think there's a lot of efficiency work to be had, but it's stunning how easy it makes building GUIs.


I use VSCode (electron based) on a flight cross country regularly. I generally have enough power for the flight (4-5 hours), without much problem.

The memory usage hasn’t been bad enough for me to notice it stealing resources from other apps that need it. I don’t think most users notice.

I’m not talking about you. It’s clear this is a big deal to you, and so I assume you don’t use any electron apps. I think that’s a gamble most companies would make, losing a small number of users, to have the potential to gain many more.


Visual Studio Code = real work with a complicated set of features. I expect it to consume a fair amount of memory and battery life.

Slack - Shouldn’t.


What makes VSCode more “real” in your opinion?



I think with Slack, the computational workload is just network I/O and rendering text and images. With VS Code, in addition to rendering text, there is code linting and compilation, which is more intensive.


VSCode is like the poster child for Electron, Electron done right. I often bring up VSCode when people are hating on Electron here.

Yes, it's easy to make crappy memory hogs with Electron, but it's also possible to make fully-featured, snappy, great applications. Such as VSCode.


I'm pretty sure VSCode is actually React-Native, not Electron. So there isn't the overhead of a full Chrome instance like there is for Slack or Atom.


I'm a React fanboy, but no. VS Code is definitely written on top of Electron, and as far as I know the codebase is entirely custom rather than using any kind of JS SPA framework.


A good native app is still preferable. I understand, but that doesn't mean I have to like it.


I generally agree with this. And maybe WASM will offer some new options here, such that native code can now target the browser. That will give us different options for targeting all platforms, possibly allowing for a world where cross platform development doesn’t come at the cost of memory and performance.

We’ll see.


I don't have unlimited RAM and am not interested in subsidizing the company's cost at the expense of my resources.


Compared to some of the Adobe products I have to use, 500MB would be a dream.

There was a time when I couldn't have two Adobe products like Photoshop and Illustrator open at the same time. They would just continue to hoard resources until my machine locked up and I had to reboot.

Even today I have a fairly robust system (i7 processor, 16GB of RAM and a 3GB Video card) and running multiple programs still makes all my fans kick in and start whining at me.


In the last few years we switched from throwing more CPU cycles at the problem to throwing more CPUs at the (not necessarily parallelizable) problem.


Each of the 40 cores on my workstation runs at 3.5 GHz. I had a quick look and I found reference to Google using pentium 2 CPUs in their data centres in 1999. According to Wikipedia, they max out at 450MHz. So my workstation is 8 times faster per core, and has 40 more cores. It's hard to justify the differences really.


20 years is equivalent to a couple of geological ages in tech, definitely not the "last few years" I was referring to.

This article is from 2014 but hardly anything has changed.

https://www.comsol.com/blogs/havent-cpu-clock-speeds-increas...

https://superuser.com/questions/543702/why-are-newer-generat...

https://i.stack.imgur.com/z94Of.png


> 20 years is equivalent to a couple of geological ages in tech

I disagree with this. I'm not going after you specifically, but I think this attitude is part of why the IT industry seems to reinvent the wheel every few years; there's this perception that we're GOING WHERE NOBODY HAS GONE BEFORE. No, most of us are not. Maybe a very few people in research labs are, or people really pushing at the raw edge of cryptography or mathematics, but the rest of us are basically cycling through the same ideas over and over, in different clothing.

I can't tell you how many times I've run into a problem and done some research, and found out that the optimal practical solution or algo was devised by some dude working at IBM in the 60s. (In fairness, some of those guys were really ahead of their time.) A person could make a very good living just strip-mining old ACM research papers from the 80s and selling the ideas in proof-of-concept form to the government, military, investors, or anyone else with no sense of history.

Sometimes I wonder how much further we'd get if we did a better job building on prior efforts and resisting the urge to clean-slate things quite so often.


What I want is a system where the keyboard is connected by something that raises a hardware interrupt, like the keyboards of old, and not only while booting. That alone would save a few dozen ms of delay. Then the letter goes straight into a framebuffer that displays it in a purely 2D DE/editor. That's how writing text is supposed to be.


It will happen naturally. Throwing hardware at the problem is still the most economical thing to do. Features and speed of development still beat performance. When that's no longer true we'll focus on performance again.


> the lock was being acquired and released ~49,000 times and was held for, on average, less than one ms at a time. But for some reason, even though the lock was released 49,000 times the Chrome process was never able to acquire it.

Well, locking is hard.

> The good news is that even though there is occasional unfairness, there is unlikely to be persistent unfairness. In order for a thread to steal the lock, it needs to hit the tiny window where the lock is available. In practice, a thread is unlikely to be this lucky repeatedly.

https://blogs.msdn.microsoft.com/oldnewthing/20170705-00/?p=...

> The fact is, any time anybody makes up a new locking mechanism, THEY ALWAYS GET IT WRONG. Don't do it. Take heed. You got it wrong. Admit it. Locking is _hard_.

https://yarchive.net/comp/linux/locking.html


Actually this is exactly because someone in Windows used the plain old mutex. I'd call that "your granddad's lock". It's old, crotchety, unfair and shouldn't be used in situations where any concurrency can be expected.

This despite the kernel having a nice RCU mechanism inside as well as wait-free queues.


If it were a spinlock, the problem would stay.

> the rule simply is that you MUST NOT release and immediately re-acquire the same spinlock on the same core, because as far as other cores are concerned, that's basically the same as never releasing it in the first place.

https://yarchive.net/comp/linux/spinlocks.html

Other mechanisms do exist of course.


> I'd call that "your granddad's lock". It's old, crotchety, unfair and shouldn't be used in situations where any concurrency can be expected.

What alternatives do you suggest?


I guess:

s/concurrency/contention/


You should have asked your pointy haired boss. He could have explained the problem in much simpler terms.

Would you expect 24 employees to write ONE email without 4 team leads and one department head?

Obviously NO!!

Your processors obviously need more management. I think Intel has the right offering for you, aka management engine.


New from Intel: the Scrum processor.


You joke, but seriously, this is generally a goal of distributed and/or parallel computing. Reduce interdependencies, stop constant cross chatter and try to do as much computing in isolation as possible.

Scrum’s not necessarily a bad analogy. Here’s a different thought that this conversation has me now thinking about: what if we did think about development teams as CPU cores? We might discover weak points in the architecture of an organization, and recognize more quickly where we need to address bottlenecks. The bandwidth of the bus between the cores might be too limited. The pipeline of work (backlog) might not be deep enough, and has a ton of branch statements (spikes) that may throw out the entire pipeline...


Why not just embrace simplicity?

I have a golden rule for my systems. After booting into X and opening one terminal with htop and hiding kernel threads, everything should fit very comfortably in one screen. This constraint forces yourself to have a very simple setup. A few daemons, a window manager and a terminal. I do the rest of my computing in Emacs and Firefox.

I have several Arch or NixOS setups that could work on a 128 MB RAM setup, excluding Firefox. Plus, if something breaks, I know how to fix it.


Organizations like to grow, though. Wouldn’t that imply that organizational structure would have a maximum size?



My MBP only has 8 cores but often during the day it locks up and blows up a hurricane of fan noise. Why? Idiotic Jamf management software and a virus checker that never finds anything. Some days you only get one core to work with when it gets stuck, and you have to reboot to get the rest back. It's not always the computer, OS or the software you use.


Yep, Jamf and McAfee are both trash-quality software that significantly reduce the overall security of a system. McAfee is a well-known story: their AV unpacks and analyses potential malware via a kernel module. Wtf?

Jamf's code quality and security model are abysmal. It blows my mind that apple recommends the use of Jamf to large Mac shops. Having peeked under the hood at previous employers, I was extremely disappointed. The product is insecure by design - not what one wants for device management that's given root privileges on machines containing corporate crown jewels. I highly doubt their operational paradigm for Jamf cloud has changed in the last year, either.

I apologize for going completely off-topic, but these products make my blood boil. Especially because I now work for a shop that uses both Jamf and McAfee for Mac "security". On a positive note, I removed the corporate mandated malware with ease - no boot to single-user required.


There was a time that running Windows absolutely required you to use an antivirus solution. It was crazy not to.

Those days are past. Now you have to be crazy to run one. Even setting security aside, I think antivirus packages are the number one source of instability and weird performance problems.


Could you build on the always-unfair system along these lines? (C11ish, sorry)

  void 
  do_work(the_work_t *w)
  {
     /* for simplicity here, rather than e.g. w->contenders */
     static _Atomic int contenders = 0;
     
     contenders++;
     for ( ; work_remaining(w); ) {
       take_mutex(w->m);
       do_some_work(w);
       drop_mutex(w->m);
       if (contenders > 1)
         reschedule_this_thread();
     }
     contenders--;
  }
This depends on the OS providing a cheap and fast reschedule_this_thread() mechanism that effectively guarantees that if there is only one other contending thread with work, that thread will end up holding the mutex. (If there are multiple such threads, an arbitrary one of them will end up with the mutex, rather than the thread that just dropped the mutex.)

One could of course only check for other contenders every few times through the for loop if reschedule_this_thread() is expensive or slow, or if contenders is especially hot.

contenders is explicitly not a locking mechanism and should not influence the policy of any code running while the mutex is held. It should also be a per-mutex counter.


The trick is how you implement reschedule_this_thread(). What you want in this specific case is to take it off-core long enough for another thread to wake up and take the lock. That is far too squishy a goal to be something that you can implement. If you sleep for some number of nanoseconds then you are wasting performance and/or not sleeping long enough.

In this particular case the lock was a kernel lock in kernel code so the OS would have to fix this, by making the locks fair (or occasionally fair).


I've been spoiled by a better OS. :D

How about:

  while work:
    if (contenders > 1)
       { reduce_priority; pri_reduced = true }
    take_lock
    if (pri_reduced)
       { unreduce_priority; pri_reduced = false }
    do_work_quantum
    drop_lock
   endwhile
(I mean empirically, although an educated guess would do).

Of course, if reducing priority is too fast, this likely doesn't help; alternatively it could be too slow and what you get back in system latency is taken away in lowered throughput. That's probably not OK if you don't need the system latency to be low.

I wonder if (dramatically, even) reducing the priority of some of the original workload exposing the problem, not just when racing for a lock but when doing the actual work quanta, would help. My thought here is that your email-sending is higher-priority and will at least push some of the workload out of the way in reasonable time, giving you back some responsiveness.

I'm surprised if Windows doesn't offer up a high-throughput/latency-tolerant QOS for threads.


Priorities don't help. They are only relevant if there are more runnable threads than CPUs. In my case I had lots of spare CPUs so both threads could run, regardless of priority.

A QOS does not directly help. The only thing I am aware of that can help is fair locks, or occasionally fair locks, so that the lock is given directly to the waiting thread, instead of being made available to all.

I have yet to hear of any other solutions.


See here for a good ticketing mutex that fairly solves the problem:

- https://stackoverflow.com/a/5386266
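
Roughly, a ticketing mutex works like this (a minimal C11-atomics sketch of my own, not the code from the linked answer):

  #include <stdatomic.h>

  typedef struct {
      atomic_uint next_ticket;  /* ticket dispenser */
      atomic_uint now_serving;  /* whose turn it is */
  } ticket_lock_t;              /* initialize both fields to 0 */

  static void ticket_lock(ticket_lock_t *l)
  {
      unsigned me = atomic_fetch_add_explicit(&l->next_ticket, 1,
                                              memory_order_relaxed);
      /* Spin until our number comes up; a production version would back
         off or park the thread instead of busy-waiting. */
      while (atomic_load_explicit(&l->now_serving, memory_order_acquire) != me)
          ;
  }

  static void ticket_unlock(ticket_lock_t *l)
  {
      /* Only the current holder writes now_serving, so load+store is safe. */
      unsigned next = atomic_load_explicit(&l->now_serving,
                                           memory_order_relaxed) + 1;
      atomic_store_explicit(&l->now_serving, next, memory_order_release);
  }
Each thread takes a ticket and waits for the "now serving" counter to reach it, so the lock is handed off strictly in arrival order and the releasing thread can't immediately re-grab it ahead of a waiter. The flip side is that strict FIFO handoff costs throughput under contention, which is presumably part of why general-purpose OS locks tend to be only "occasionally fair".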


I wonder how much cpu time has been liberated through the CFG scan patch.


I'll have to go through with some of the same steps to see if a recent problem I had is related to this.

In my case it was actually being triggered by a piece of monitor management software that came with the LG ultrawide monitor that I use. No significant CPU load, no memory issues, plenty of cores in an older HP workstation with a Xeon Processor, 48 gig of RAM and a Samsung SSD. When that display management software was running a text editor couldn't even keep up with displaying text as it was typed.

Edit: rereading some of the original 2 articles, yeah, I'm not going to be running the same kinds of tests - even if I did it'd take too long to develop the knowledge base to be able to interpret my results adequately.


What exactly did his IT department do using WMI to query this Win32_PerfFormattedData_PerfProc_ProcessAddressSpace_Costly? I can't find this in the article.


The actual high-level query is in part one and in the first reply, but I think your real question is "why did they want to scan the address space of every process on the system?"

I think that the answer is that they didn't. That was just one of the bazillion counters that came along for the ride. I believe that that counter has been removed in the latest OS, thus squishing this bug in another way.

I don't understand WMI, but it sounds really weird. One peculiarity is that once some program asks for counters "WMI refreshes the list of counters every 2 minutes until the WMI helper process closes due to inactivity."

So that's great. IT asks for some data, they get memory scans that they don't want, and those scans are repeated every two minutes for a while (ten minutes?) even though nobody is looking at the results.


> "By the way, the reason that the hangs kept happening at 10:30 am is just because that’s when our IT team ran their inventory scans. If you want to trigger a scan manually, you can go to Control Panel | Configuration Manager | Actions | Select ‘Hardware Inventory Cycle’ and then Run Now."


I really enjoyed the first iteration of this article and am happy to see a part two.



