Lubuntu is taking a new direction (lubuntu.me)
271 points by niutech on Aug 2, 2018 | 216 comments



> our main focus is shifting from (providing a distribution for old hardware) to a (functional yet modular distribution focused on getting out of the way and letting users use their computer.)

I like Lubuntu and often use it for low-powered devices, and sometimes for devices that aren't so low-powered. This phrasing is a PR blunder in my opinion because the old focus is a lot clearer than the new focus. Here's how you should have said it:

We are focusing our resources on developing a fast, clean, functional distribution for devices manufactured in the last ten years.

Then you can go on to explain the rationale (which is all quite sensible, if you don't want to support hardware that's 15-20 years old, you don't have to, it's a pretty niche market anyway).


I have never used Lubuntu other than trying it out a few times, so this is just going off of the content in the article.

> This phrasing is a PR blunder in my opinion because the old focus is a lot clearer than the new focus

To me, it seems that they ran the numbers (kudos to them for taking an empirical approach) and figured out that their original USP was now no longer unique enough, and they need to do something different to stay in the game.

But this article makes it seem like they haven't figured out what that something different is. They list a number of things they are going to provide, including:

- leverage modern, Qt-based technologies and programs to give users a functional yet modular experience

- Lubuntu will continue to be a transparent and open distribution

- create and maintain complete documentation

And a few others. But none of these seem to be different enough from the other great choices available as Ubuntu flavors.

That's just my read on the article anyway.


That's Linux desktop distributions in a nutshell though, isn't it? None of them are really different enough from Ubuntu to be noteworthy.

Personally, I think the fact that so many minor-variant distros exist is all the evidence you need that Linux is a terrible platform for customization: if it were simple to customize, everyone would just use Ubuntu and make their own changes, but instead you get all these distros.


I respectfully disagree with that. In my opinion, the point of the different *buntus is convenience for beginners rather than solving hard problems. To that end, I have taken Ubuntu machines at work and installed KDE; the whole process took no more than a few minutes and was pretty painless. But that's not necessarily something a Linux n00b might realise they could do (or even want to try).

If you look at any of the more "hands on" distros then you might find a couple of themed variations, but generally users will just recommend you install the base distro and change your personalisations yourself. As an aside I went through a phase of running a different DE/WM each week on ArchLinux. Never had any issues there.


It's proof that Ubuntu is a terrible platform for customization.

Most other distros don't have that split - whether it's ArchLinux, Fedora, CentOS, NixOS, or what have you.


Fedora has spins which are distributions of Fedora with differing desktop environments. The Fedora Spins website currently lists spins for 7 alternative desktop environments.


Well, Fedora and CentOS are both more or less Red Hat splits, surely.


CentOS isn't even a split. CentOS is RHEL redistributed without the trademarks - it's binary-compatible and documentation-compatible, and matches versions patch-for-patch.

Fedora is more like a split - it's the people who like RHEL but want a faster release cycle.


Not exactly. Fedora is more like a non-LTS version of RHEL, if we oversimplify it a bit. CentOS is more like a split, but their goal is to be functionally identical to RHEL.


That might be because a lot fewer people actually use them as a Desktop.


Apart from a few philosophical design decisions (e.g. package manager, update cycles), the main point of distros is how they are preconfigured, which default tools are installed for what job, etc. Sure, I can just uninstall a couple dozen apps and then install new ones doing the same job, but why should I do that when another distro directly ships what I want?

Same goes for being up to date. Always want bleeding-edge stuff that might be unstable at times, or change its GUI ever so slightly all the time? Go for Arch Linux. Don't mind running a somewhat older version of most tools for some time, but having it more stable? Pick Debian. Etc.


Yep, that's a great point: maybe they aren't communicating what the unique benefits of Lubuntu will be in the future, or maybe they don't actually know what those will be.

Right now Lubuntu's niche is really clear to me: if I want a lightweight, general-purpose distro I use Xubuntu; if I want a really lightweight, general-purpose distro I use Lubuntu. (A step beyond that, if I wanted "so damn light there are probably a lot of things I won't be able to do", I would think of Puppy or Damn Small or something, neither of which I've needed to touch in a long time; I'm not sure they're even still maintained.)

So if Lubuntu is no longer my super-light distro, erm, what is it? A competitor to Xubuntu?

(Total side note, I really love the space of light, clean, functional desktop operating systems. The most exciting development in computing for me personally would be if someone figured out how to commercialize one as a competitor to Windows and Mac. If I thought there was a good business model I'd start that company myself.)


In my experience such "lightweight" distros, over time, accumulate expanded functionality/compatibility/surface area and inevitably/gradually become "heavier".

Then the community complains that "X" is too heavy and we need a lightweight system, and the cycle repeats itself.


Which is why ArchLinux is so nice: it doesn't suffer from this type of growth, by nature of its design. You can very easily layer lightweight frontends on a very reliable core.


Putting "Arch" and "reliable" in the vicinity of each other is stretching it a little.


I think it's becoming the Qt (via LXQt) competitor to the GTK3 (via Xfce) Xubuntu.


Re: Puppy and Damn Small: the former's still maintained, while the latter (last I checked) is not.

Another one to add to your list of tiny Linux distros is Tiny Core Linux, which is indeed tiny while being fully graphical.


Great to hear Puppy is still being developed. I'm downloading it now!

I see Puppy's shifted their idea of "lightweight" too; I recall the ISO being 4-5x smaller last time I checked (2007ish).


In my opinion, if you like tiny distros (as I do), you might want to look at Alpine Linux. It has a good number of up-to-date packages and a very tiny footprint. The installation process could use some help to improve adoption, but it is otherwise a fun and tiny distribution of Linux.


> But none of these seem to be different enough from the other great choices available as Ubuntu flavors

Yep. "Gets out of the user's way" is what a lot of Ubuntu's goals are about already.


Was that sarcasm?


Even at its worst (with Unity), I'd say Ubuntu does do a reasonably good job of staying out of users' way. Maybe not compared to other distros with lighter environments, but certainly compared to, say, Windows 10.


Not at all. It wasn't a comment on how well they manage it but both Mint and Ubuntu seek to work "out of the box" to the greatest extent possible. In my opinion, they do it much better than any other distro.


"Modern, Qt-based technologies?" Qt is 27 years old. It came out for Windows 3.1.


Windows is 32 years old and people still develop for it, sometimes even for profit.

And wait until you hear about Unix.


It hasn't stopped improving since then. "Modern" describes the current status, not when it came out initially.


> Qt is 27 years old. It came out for Windows 3.1.

Same as the NTFS file system, then.


NTFS was NT 3.1 -- in 1993.

Win 3.1 (1992) used only FAT.


> functional yet modular distribution focused on getting out of the way and letting users use their computer.

It's been a while since I checked out all possible distros (which used to be a doable thing 2 decades ago, not so much anymore), but to me this sounds just like a whole bunch of other distros out there? So yes, not the best PR.

> Here's how you should have said it: We are focusing our resources on developing a fast, clean, functional distribution for devices manufactured in the last ten years.

They could have, but it also means something different, because it specifically says something about older devices, whereas it looks like they're actively shifting away from that. I'm saying 'looks like' because, honestly, after reading the text I'm still not sure what exactly the plan is. On one hand the core goals don't mention anything about 10 years, and the text says it should still be usable on old systems, which sounds like 'well, maybe, don't care'; on the other hand it also says they will no longer primarily focus on older hardware, which implies there is still some focus left. Again, yes, not the best PR, as this article just creates confusion because of the wording.


> We are focusing our resources on developing a fast, clean, functional distribution for devices manufactured in the last ten years.

Very few distros want to support devices older than 10 years. That leaves the distros that want to be fast, clean, and functional. That really narrows things down. </s>


One of my friends was saying that her old Mac Pro with dual X5670 CPUs was one of the fastest machines for video transcoding she ever saw. That's a little less than ten years old, but still older than Sandy Bridge. Someone did the math and it turns out today a top consumer (not HEDT) CPU barely matches this pair of CPUs in multithread performance: https://hardforum.com/threads/replacing-dual-x5670.1963388/#... Obviously, since we are talking about 8 new cores matching 12 old cores, single-thread performance will be about 50% faster.

Wait, what? 50% faster? Is that all? I mean, many tasks are still single-threaded, and from 2010 to 2018 we have grown 50% in single-thread and 100% in multithread performance? If things were going the "old way", then in 8 years we would've seen a 2^(8/1.5) speedup -- well over thirty times (Moore's law as stated by David House, who predicted that chip performance would double every 18 months). You can see some of that here: http://3dfmaps.com/CPU/cpu.htm
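(Spelling the arithmetic out, under that 18-month-doubling assumption: 2^(8/1.5) = 2^5.33 ≈ 40x predicted over 8 years, versus the roughly 1.5x single-thread and 2x multithread improvement we actually got.)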

No wonder there's no point focusing on old CPUs any more.


«it turns out today a top consumer (not HEDT) CPU barely matches this pair of CPUs»

Well, the X5670 was a high-end 2P server CPU sold for $1400 (https://ark.intel.com/products/47920/Intel-Xeon-Processor-X5...) so to fairly measure how performance has improved, you should compare it to a similarly priced product, like the AMD EPYC 7401 (https://www.amd.com/en/products/cpu/amd-epyc-7401). And, oh, look at this: the SPECint_rate "h264ref" benchmark subcomponent, which measures multithreaded video encoding performance, shows a 4.2× improvement (472 to 2000):

• X5670: https://www.spec.org/cpu2006/results/res2010q2/cpu2006-20100...

• EPYC 7401: https://www.spec.org/cpu2006/results/res2017q4/cpu2006-20171...

Edit: oops, fixed a major screw-up. I had linked to SPECint numbers instead of SPECint_rate. The real improvement is 4.2×, not 45×! Far from what Moore's law predicts (2^(8/1.5)), but still a very notable improvement. I'm sure your friend would be happy to transcode 4× faster...


2x or 4x doesn't matter; you'd need orders of magnitude of difference before you have to think about what an OS can or can't do. Which is what the old ten-year figure was about -- and look, 2^(10/1.5) is a bit above 101. Two orders of magnitude. That matters.

To compare, let's presume (this is a great exaggeration, I know, but it serves the purpose) that each Chrome tab takes equal CPU time. Say, on the current computer, the 100th tab makes it unusably sluggish. At 2x, the previous computer could open 50; even at 4x we are looking at 25. At 100x, we are looking at not being able to run Chrome at all.


Is it possible to buy a Mac Pro with an AMD EPYC 7401 CPU? Surely the fairer comparison would be against the current top-of-the-range in Apple's lineup.


Except the Mac Pro hasn't been updated in a while, partially due to thermal constraints. That isn't related to long-term progress, but an uneven cycle.


I'm not sure I buy the analysis; for many applications the single-threaded performance of an 8700K vs an X5670 is doubled, e.g. ~6000 vs ~2600 in Geekbench single-threaded. You also can't draw conclusions about single-threaded performance from multi-core comparisons, due to turbo boost.

It's also worth noting that an X5670 cost ~$1400, so you're comparing ~$2800 worth of CPUs to ~$350 worth of CPU in something like an 8700K, and that doesn't cover platform cost differences. Another way to look at this is that 8 years after the release of the X5670, you can get better performance for ~10% of the price; that's not bad. If you did plop down ~$2800 today for a CPU or two, you'll have 28+ cores at your disposal.


> If you did plop down ~$2800 today for a CPU or two, you'll have 28+ cores at your disposal.

In just over a week's time you will be able to get a 32-core / 64-thread AMD Ryzen Threadripper 2990X [1] for around $1,500-$1,800, which is pretty amazing.

[1] https://wccftech.com/amd-ryzen-threadripper-2990x-2970x-2950...


I have a desktop with an Intel 3570K bought in 2012, and I wanted to build a new gaming desktop with a Ryzen 2700X.

The Ryzen is only 20% better in single-threaded performance. I could easily overclock my CPU to close the gap further. I just need to upgrade my GPU and I'm good for a few more years. These CPUs have aged quite well.


I recently replaced my main desktop PC after nine years and the new CPU benchmarks at only about twice the performance.

Double the performance isn't too bad obviously, but that's about the same timespan as between the 486 and the Pentium III.


I'm right there with you guys.

I built my desktop in 2014 with a Haswell i5-4670K @ 3.4GHz, overclocked it to 4.3GHz, and I still don't have a reason to upgrade. I really want a Ryzen 2700X too, but I'm staying patient. Newer and better hardware will keep coming out, so I'm in no rush. (But my next CPU will definitely be AMD.)

My current thing right now is compiling the Linux kernel catered exactly to my Haswell machine, using the latest gcc with "-march=native -O2" optimizations. It might seem minor, but man, if she ain't screaming right now. I should mention I also have an AMD RX 480 graphics card, so I'm up to date in that world and enjoying the best of team red and team blue right now. New drivers, software, kernels, and optimizations keep coming out that make my current hardware faster. But then came Spectre and Meltdown, lol.
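For anyone who wants to try it, my rough recipe looks something like this (a sketch from memory, so double-check against your distro's kernel-build docs; source paths and bootloader steps vary):

    cp /boot/config-"$(uname -r)" .config           # start from the running kernel's config
    make olddefconfig                               # fill in defaults for any new options
    make -j"$(nproc)" KCFLAGS="-march=native -O2"   # -O2 is the kernel default anyway
    sudo make modules_install install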



So... always use `-O0 -march=i586`, because using obvious optimizations for your hardware is a total waste of time and will definitely never ever be beneficial? Seriously, I get that it's possible to go off the deep end over optimizing, but that site is way off the deep end in the opposite direction.


I'm sure it was just poking fun. We can't be kernel ricers and get mad when people make fun of it. CFLAGS matter! lol

One thing I will say: compiling your own has many benefits, as all the distribution kernels use generic x86-64 and miss out on all the Intel/AMD optimizations.

For example, just switching from "Generic-x86-64" to "Core 2 or newer" under Processor type and features -> Processor family in the kernel config adds these 5 options:

    CONFIG_X86_INTERNODE_CACHE_SHIFT=6
    CONFIG_X86_L1_CACHE_SHIFT=6
    CONFIG_X86_INTEL_USERCOPY=y
    CONFIG_X86_USE_PPRO_CHECKSUM=y
    CONFIG_X86_P6_NOP=y

You can say it's splitting hairs, but hey, that's what I want. Not many people have the time, the know-how, a Linux install, etc., to be able to make these optimizations.
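If you want to see exactly what flipping an option like that changes, the kernel tree ships a helper for diffing configs (the file names here are just whatever you saved your before/after configs as):

    scripts/diffconfig .config.generic .config.core2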


My 7-year-old son's computer is my old one, and nearly as old as he is! A 2600K overclocked to 4.4GHz, 16GB RAM, 512GB 840 Pro, NVIDIA 980 on a 1920*1200 U2412M.

My much newer one (six-core 6850 / 950 NVMe / 1080 Ti / 32GB) really isn't that much faster, GPU aside.


In a similar boat with my old 4770K and a new RX 580. I'm kicking myself for the K model though, as I would like to pass my old GPU through to a Windows VM, but the unlocked K-model CPUs don't have VT-d, which is necessary for IOMMU.
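If anyone wants to check whether their own setup has a working IOMMU, a rough check (exact messages vary by kernel version, and on Intel you may need intel_iommu=on on the kernel command line) is:

    dmesg | grep -i -e DMAR -e IOMMU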


Buy a 4th-gen CPU second hand? No need to kick yourself; those are cheap now.

For example: https://www.ebay.com/itm/Intel-i7-4770-3-4GHz-Quad-Core-Syst...


This mistake was fixed in Devil's Canyon:

https://ark.intel.com/products/80807/Intel-Core-i7-4790K-Pro...

Intel® Virtualization Technology for Directed I/O (VT-d) ‡ Yes


Can AMD Ryzen and Threadripper run Linux without crashing? What about compiling large source code bases? There were some scary bugs reported on Linux when Ryzen came out.

And that's not even covering motherboard incompatibilities with Linux (e.g. sleep/shutdown not working, WiFi/Bluetooth broken, etc.)

Have those problems been fixed definitively? Or should one stick to Intel for Linux at the moment?


Longtime Linux user who recently switched to a Ryzen 1500X from an old Core i7 (2011 model). As far as the processor goes, I have faced no problems w.r.t. stability. Paired with an ASRock motherboard, there are no problems with other functionality such as suspend/resume either. Occasionally the desktop compositor hits a snag, freezing my machine and requiring a forced reboot, but I would fault the NVIDIA drivers for this.

Also, since I use the Arch distro, installing packages from the AUR requires compiling stuff. So far ffmpeg has been among the packages with a large source code base that I compile regularly, and I've never had issues. I would like to compile the kernel too, but I'm unsure whether the difference would be noticeable in my daily usage. On my old machine, the Zen kernel did improve responsiveness, but on my current one I'm sticking to the LTS kernel.

Word is that the 2nd generation of Ryzen, a.k.a. Raven Ridge, is more stable and worth it if you can afford it (watch out for motherboard compatibility though).


I can't resist crosslinking my post from two days ago: https://news.ycombinator.com/item?id=17656645

I, for one, couldn't suffer a machine where I occasionally need to reset, destroying flow.


I read your post twice and you didn't mention rebooting. Were you rebooting because of docking?

Most of your gripes come from running Linux on a laptop. If you are a person who likes to work on powerful desk workstations, Linux behaves great on those. Granted, some things, such as multifunction printers (which are mostly software-driven), don't have support. You do have to be careful when selecting hardware, unfortunately.


No, I just found your off-the-cuff comment absolutely typical of the desktop Linux situation: it's totally stable except when it freezes so hard you need to find a reset button.


> Can AMD Ryzen and Threadripper run Linux without crashing?

Yes, of course!

> What about compiling large source code bases? There were some scary bugs reported on Linux when Ryzen came out.

Are you referring to this? https://www.extremetech.com/computing/254750-amd-replaces-ry...

The later revisions of Ryzen 1000 CPUs have been fixed, and you could RMA your CPU if it was affected (I did this myself; AMD's support was excellent). Ryzen 2000 CPUs aren't affected.

> Have those problems been fixed definitively? Or should one stick to Intel for Linux at the moment?

I've never heard of any motherboard incompatibilities. The WiFi chips in some of those AMD motherboards are actually from Intel btw.

I've been running an AMD Ryzen workstation with Linux for over a year now and it works perfectly.


From what I remember, those crashes were hardware defects that caused issues under load on any platform. Ryzen 2 hasn't had this issue, and my 1600X worked beautifully out of the box on Ubuntu 18.04, except for the lack of sensors.


So your software has no idea whether the hardware is overheating?


Hello, fellow 3570K user - I've also been using one for 6 years now as my main (and gaming) machine. I had to upgrade my GPU once (HD 6870 originally, now an RX 470), but I can even play Quake Champions without any problems, in addition to all the games that are a few years old. I've decided to evaluate getting a new machine in 2019, not earlier.


The same pair of video codecs won't magically transcode orders-of-magnitude faster on newer CPUs†, because video codecs—like crypto primitives, compression algorithms, and a few other categories—are designed, in the choices of the types of maths they do, to take advantage of the existence of certain accelerated instructions (e.g. SSE) that exist at the time of their creation. You can't make them take advantage of newer accelerated instructions (e.g. AVX-512) without it being a different algorithm.

In other words: new CPU architectures don't make old algorithms faster†. Instead, new CPU architectures enable newer, faster algorithms.

† Yes, this used to happen, in the 90s, when "new CPU" meant "appreciably higher clock speed." Nobody is making computers faster by increasing clock speeds any more, because higher clock speeds = higher power-draw and higher TDP. CPUs are now advanced by making them work smarter, not harder: accomplishing the same abstract tasks faster by providing hardware that enables newer algorithms to be used for those tasks, while using the same or less energy than the previous implementation would have used.
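To make the "newer hardware enables newer code" point concrete, this is roughly how libraries consume those accelerated instructions: they ship several implementations of a hot loop and pick one at runtime. A minimal sketch of that pattern (the function names are made up; it builds with GCC or Clang on x86):

    #include <stdio.h>

    /* Hypothetical transform kernels; real codecs ship hand-tuned
       versions of hot loops like this per instruction set. */
    static void transform_scalar(void) { puts("scalar path"); }
    static void transform_sse2(void)   { puts("SSE2 path"); }
    static void transform_avx2(void)   { puts("AVX2 path"); }

    int main(void) {
        __builtin_cpu_init();  /* GCC/Clang builtin: probe CPU features once */
        if (__builtin_cpu_supports("avx2"))
            transform_avx2();
        else if (__builtin_cpu_supports("sse2"))
            transform_sse2();
        else
            transform_scalar();
        return 0;
    }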


There is also a ~33% decrease in power usage: an i7-8700, which also has a similar score, now has a 65W TDP vs a 95W TDP.

Since Moore's law is leveling off, programming skill and quality will become more important for extracting performance out of computers. I foresee less JavaScript and Python and more Go, Java, Rust, and other 100% statically typed languages in the future.


And a return to hand-optimized assembly! I knew this day would come, when my skills would be in demand once again!

Like client-server vs local, progress is a cycle.


Dynamic/static typing and execution performance are largely orthogonal. Java manages to be slow while being statically typed.

If your language is inherently heap bound and/or targets bytecode with ad-hoc JIT trickery, no amount of static typing or hard inference is going to help.


My performance hierarchy from fastest to slowest goes like this; the numbers are how much slower each is than the baseline of C / ASM.

C (static typed, static dispatch, manual memory, x1) -> C++ / Objective-C (static typed, dynamic dispatch, manual / refcount memory, x2) -> Java (static typed, dynamic dispatch, GC memory, x3) -> JavaScript (dynamic type, dynamic dispatch, GC memory, x7)

Java is actually fairly fast for a language with GC - far faster than equivalent JavaScript, unless you do things like asm.js, which might as well be just a binary runtime.


I feel you are generous with x3 for Java: empirically, a straightforward rewrite to C easily gains you 30x, or more for heavily cache-local datasets.

And it's not setting any speed records among automatic-memory-management languages either. Swift is well ahead in performance without breaking a sweat. Some Common Lisp implementations generate faster code than Java without any JIT overhead or a billion-dollar investment in development.


Swift is a C++/Rust-style language rather than a Java-style language. It just looks really pretty, so you might not realize what it's implicitly doing under the hood.

It doesn't use full automatic GC, so of course it is going to be faster than Java.

Swift also has performance gotchas, like its strings. Its strings are very easy to use in a non-performant way, because they are going for full Unicode correctness, even when you don't want it.

But it's also a young language with a whole bunch of low hanging fruit in the perf realm.

My perf scale is also more oriented towards business logic / app-type things. If you're really thinking about your perf and doing a rewrite in fully static C/C++, then I have no idea how much better you can get :)

Is Common Lisp fully GC'd?


Swift is distinctly not Rust, although it looks a lot like it on the surface. It uses reference counting, just like Perl, Ruby, or Python. RC is the most rudimentary form of garbage collection, and it also has its problems, like dealing with circular references.

And yes, CL is fully GC'd. But my point is that that's not the culprit. If your problem calls for allocation and management of heap memory, it has to be done one way or another, manually or via GC. GC overhead in that light can really be negligible, and it's other concerns (like scheduling and real-time constraints) that typically make its use problematic.


A language's documentation, ease of use, and community are all far more important than CPU performance. Point me to one java project on Github with >1000 stars and I'll give you a hundred Python and JS projects.


High quality TV shows are more popular than oxygen. Unless oxygen runs out.


Yes, finally I will be able to use pen and paper again!


Higher resolution, higher contrast, tactile feedback, cheaper, lighter, thinner

Does not explode, just burns


Dijkstra preferred pen and paper to computers.


Your answer to his/her assertion that popularity will shift is that it hasn't shifted yet?


There are heaps of Android Java (or nowadays Kotlin) projects and libraries with well over 1000 stars. I'd guess way more than Python but quite a bit less than JavaScript.


Swift, Kotlin, Rust, Objective-C, C++ and Java have huge communities and code behind them too :)


Design imperatives change too, though. The old machine probably had 3x the power draw. A 4-5x improvement in instructions per joule is nothing to sneeze at. Frankly there are laptops that compete reasonably well with that old space heater.


What's great is how cheap X5600-series CPUs are. I'm upgrading my off-lease Dell PowerEdge T410 from dual E5640s with 8GB of 1066 MHz RAM to X5670s and 32GB of 1333 MHz RAM for less than $200 CAD. I should also be able to sell my old hardware for a few dollars.

The only annoying issue is having to use a piggyback PSU for my graphics card, an RX 480. Dell just had to use a proprietary ATX-style connector, or else I would have just replaced the stock PSU altogether.


What kinds of activities do you need to do video transcoding for?


Vlogging? I hear it's pretty big with the kids these days.


Plex and other home media servers for example.


As an exercise, I threw OpenBSD and GNUStep on a machine the other day. I wouldn’t say GNUStep provides the best Free/Open desktop, but I want to believe that it could.

The thing is, many of the modern Linux desktops seem to be heavily wedded to a lot of dconf and systemd infrastructure. OpenBSD has managed to build analogs for enough of the heavily Linux-centric infrastructure that Gnome 3 depends on, but man it’s clear that Gnome 3 was written with Linux and only Linux in mind.

I didn’t see KDE in the OpenBSD ports when I looked, and I suspect it too is because KDE has become wedded to Linux-only services.

XFCE remains the most portable full-featured Desktop environment and runs well on OpenBSD.

But GNUStep... GNUStep is my biggest regret. GNUStep isn’t entirely comparable to XFCE, because GNUStep isn’t a desktop. It’s more like all GTK plus dconf plus all the desktop services required for interaction between GUI apps plus a uniform display layer.

In short, GNUStep is more or less like Cocoa from macOS. Add GWorkspace, and you have a solid reimplementation of macOS’s Finder.

I wish GNUStep had caught on or that someone would inject new life into it. There are apps that compile on both GNUstep and macOS. For whatever reason macOS seems to attract better desktop apps—both Free and commercial.

Had GNUStep, and not KDE or Gnome, become the de facto *nix desktop, I imagine people would be writing macOS apps for free on Linux, and macOS users would be recompiling apps to run on Linux... and not just Linux, but all the platforms that GNUStep supports, like OpenBSD and Windows.

EDIT: Oh yeah, and the reason I thought to write this is that GTK and Qt both feel like lower-potential toolkits. GNUStep also feels very lightweight and portable.


> I didn’t see KDE in the OpenBSD ports when I looked, and I suspect it too is because KDE has become wedded to Linux-only services.

I doubt this is the issue. KDE Plasma 5 seemed okay on FreeBSD when I tried it recently, although I don't have any recent KDE+Linux experience to compare it with. More likely there just hasn't been enough developer interest to get it properly ported to OpenBSD.

Edit: To get more at your actual point, I've had similar feelings about GNUStep, but when I dug into it I felt like there was just too much of a mismatch with the rest of the FOSS GUI ecosystem. Maybe I'd be a die-hard devotee if I'd tried it back in the '90s as my first thing after fvwm-configured-to-look-like-mwm.


> I felt like there was just too much of a mismatch with the rest of the FOSS GUI ecosystem.

Definitely. It probably has the largest mismatch outside of exotic options like, I don’t know, a Smalltalk image.

I think that’s why the Free *nix desktop struggles to this day. We never built the core infrastructure that allowed useful inter-application messaging. We never built the application frameworks to enable a really great, consistent UX. When we finally did, we built dconf and systemd.

I can’t say these things are horrible, but neither can I say they offer anything more than macOS’s system services or Cocoa APIs. And the price of being different, is that macOS is far and away the best GUI platform today (I’m not a fan of the Mac these days for other reasons, but the UI consistency is first rate, owing to their frameworks and the NeXT legacy). We could have shared (maybe still could share) so much with the Mac. And importantly, we could have benefited from Apple’s singular vision while also providing an escape from the walled garden.

But yeah, the FOSS GUI ecosystem chased FVWM bling for a long time. Oh my goodness, I spent hours tweaking FVWM configs, later Window Maker, Enlightenment, and who knows what else. But it was all skin-deep looks. We spent so long chasing window themes, but ignoring the hard work of building infrastructure.

KDE and Gnome were ultimately late, divisive, and different from everything else. So much of Linux and the BSDs is great; it bums me out to see us so far behind Apple, when we could have chased tail lights until we overtook them.


I remember that I was a kid when I read the news on Slashdot that the Linux desktop experience was going to be basically a competition between Gnome and KDE.

At that moment I knew that Linux would never win on the desktop. All those wasted developer hours.

I am still heartbroken to this day. In capitalism, competition benefits the user even if the products are virtually identical, because then they compete on price. In the free software world, competition between two indistinguishable products actively harms the user.


The thing is, however, that GNOME and KDE are not indistinguishable - they each have very different goals, ideas on how the desktop should be, how to design applications and UI, what sort of technologies to use and develop, etc., and all these end up making two very different desktop environments. As a result, they (and other DEs and WMs, of course) cater to different users, and while there are some users who can use both (and all, or at least most, of the other DEs/WMs), this sort of "competition" helps the users who align more with what each DE (and WM) provides. In other words, they provide options.

There are people who prefer GNOME over KDE, would you force those people to use KDE? There are people who prefer KDE over GNOME, would you force those people to use GNOME? There are people who prefer Window Maker over any desktop environment, would you force them to use KDE or GNOME? There are people who prefer i3 over any DE or WM that uses an overlapping window UI paradigm, would you force them to use KDE or GNOME or Window Maker?

If you did so, you'd be making their experience with their computers worse - you'd be acting against their choices. If people didn't value having options, they'd flock to a single desktop environment (or WM) and the rest would be a mere curiosity at most. But this doesn't happen.

And of course what Slashdot predicted didn't happen either: in addition to GNOME and KDE we also have MATE, XFCE, Budgie, Pantheon, Enlightenment, Cinnamon, LXDE/LXQt, and a TON of standalone window managers to choose from - we even got the rising popularity of tiling window managers, which were certainly not a thing back when KDE and GNOME were new.


Of course, they have different goals, they're using different frameworks and so on. But ask yourself this. If the average Windows user had a choice of two different desktop environments, would they be more likely to be thankful for having a choice, or is it more likely that they would be confused?

Windows and Mac OS X don't allow much customization. Not because Microsoft and Apple are lazy, but because they want the average user to be able to pick up any Windows PC or Mac and be immediately familiar with how to use it.

If we hadn't had the Gnome and KDE wars, we would likely have ended up with a "standard" Linux desktop.


Considering the popularity of tools like Classic Shell, how much people despise major changes to the Windows UI, and even the lengths some people went to in order to make the classic theme usable on Windows 8 and 10... yes, I'm 100% certain that there would be a lot of users thankful for having a choice.


There is this guy making a DE he calls NEXTSPACE [1] by combining Window Maker, GNUstep, and a bunch of custom applications and modifications to existing stuff so they work better together. I haven't tried it, but I've seen it mentioned a few times over the last couple of years.

[1] https://github.com/trunkmaster/nextspace


Looks like there are active commits too! Thanks for the link; I’d not seen this before.

It looks like he’s done some custom work to the Preferences app. Impressive!


There was an attempt to get GNUStep going again some years back:

http://etoileos.com/

The project looks pretty dead, though.


I really miss GNUStep. Pretty much the only time I remember actually kinda enjoying my Linux Desktop experience was when I had some ancient (found in a dumpster) laptop I set up with debian and WindowMaker+GNUStep. Granted, I didn't use it for much, but those guys at least had a lot of the right ideas going that all the other Linux Desktop Environments continue to ignore and pretend no one cares about.

Imagining an alternative timeline where GNUStep became the dominant Linux Desktop makes me feel like I live in the darkest timeline.


Honestly, I remember trying GNUstep; it was ugly. I can't even emphasize how terrible it looked from a UX/UI perspective compared to literally every other desktop.


Yup, they were doing the hard part of making application frameworks and consistent APIs, and were always deferring window dressing. By contrast, old school FVWM and later Enlightenment were pushing limits of themability without really building anything of substance beyond literal window dressing.

No reason GNUStep couldn’t be as beautiful as anything else. Oddly, I find the NeXT aesthetic oddly fresh looking today. I was struck by how good it looked when I took a trip to the Living Computer Museum and played on their NeXT cube. What’s old is new I guess.

Still, not supporting themes from the get-go was definitely a reason GNUStep didn't get traction early on, I'd say.


I sat through one FOSDEM presentation about 10 years ago where they spent a couple of hours speaking and demoing part of it, but it never seemed to move beyond those presentations.


It looks just like NeXTstep. IMHO, i.e. like the most beautiful, elegant GUI ever designed.


NeXTSTEP, in my experience, crashed constantly and was ugly. It was the basis for macOS, and it permanently turned me away from Apple products in the 90s. Rebooting a frozen computer should not require literally pulling the plug.


GNUstep could be SO much more. But the interface is stuck in 1993, and the maintainers don't want to do anything to modernize it.

I'd love to switch to a GNUstep-powered OS (maybe with Ubuntu underneath), but it's just not good enough yet.


I wouldn't say they don't want to modernise it. They have a theming engine that can use some other frameworks' GUIs to provide theming (like GTK+, Windows, and probably Cocoa) — but most distributions only have a hideously ancient version of GNUstep that lacks the theme engine.

I'm with you on the general idea that the GNUstep developers don't seem to be in a rush to take GNUstep out of the shadows and let it be a glory on Linux. More aggressively imitating Cocoa, adopting Swift and filling in some of its holes on Linux, and abandoning the wholly useless enterprise of source-level compatibility with the OPENSTEP APIs of the 1990s… these would be good.

And distros not having the old versions, too!


FVWM forever ^.^


I don't want to come across as ungrateful, but I'm not very happy about this. I haven't looked into the reasons for this change, but I'm sure there are some compelling ones for making such a big change.

LXDE based Lubuntu struck the perfect balance (for me) between:

- Being minimal and lightweight

- but works out of the box with enough batteries included

- tons of community resources due to being a ~buntu

- very, very customizable (I love my undecorated windows and no bullshit shortcuts/decorations etc)

- sane user experience, without any unnecessary bloat

- LTS support (albeit shorter than other ~buntu releases).

Yes, I can achieve these things some other way, but Lubuntu hit this sweet spot out of the box with minimal tweaking.

Maybe LXQt will be great, maybe not, but at this point it reminds me of Ubuntu 10.10, which, IMHO, was as refined as Ubuntu (or any Linux distro at the time) could be - and then they just decided to throw all that maturity and refinement away and start from scratch.

I truly hope LXQt based Lubuntu succeeds, but it's a sad day to me.


> Maybe LXQt will be great, maybe not, but at this point it reminds me of Ubuntu 10.10, which, IMHO, was as refined as Ubuntu (or any Linux distro at the time) could be - and then they just decided to throw all that maturity and refinement away and start from scratch.

I don't know if you know this, but LXDE is GTK2-based, so the decision to "throw all that maturity away" isn't quite the same here. Ubuntu bet on a convergence that never happened; Lubuntu has to eventually accept that GTK2 is getting long in the tooth. And since the LXDE maintainers moved to the LXQt project, Qt it is.

EDIT: Also, given the choice between GTK3 and Qt, the latter is the less resource-hungry of the two, IIRC, at least when sticking to the basics.


I hoped for a very VERY long time that someone would fork GTK2 - I was mainly looking towards the MATE devs. Sadly it never happened. GTK2 had a ton of warts, but for the longest time it was also the most stable toolkit API, the closest to Windows' Win32 in terms of stability, and something you could target and expect to be there in basically any desktop environment :-(


Ubuntu 10.10, huh.

I'm not old enough to have experienced that, but I can say that we have the same thing happening with 16.04.

They had it almost perfect and decided to throw it all out.

History does repeat itself...


Not old enough? Well this makes me feel old even though I'm not. I'm 22 and my first version of Ubuntu was 8.04.


I'm guessing they got into using Linux later in life. Not all of us messed with Linux when we were 12 ;)


Matter of fact, I was using Linux when I was 12! I actually installed Ubuntu myself.

However, I wasn't into it to the extent that I could follow Linux distro culture. I don't even remember the version now; it must have been 10 or 11.

There was also the fact that I had limited access to the internet in the early days (that's just how it was in India).

I was just sick of trying to make pirated software work (including Windows), so I was looking for something I could just download and run.

I do remember that I liked how easy it was to install games, compared to my windows desktop.

Only now that I am familiar with software development cycles do I understand how stupid it is of Ubuntu to discontinue fully featured, polished software.

For example, the Mac has changed so little over the years (I haven't personally used it much), which is probably the reason why it's so much more mature compared to Linux DEs...

P.S. I am 19


When I was 12, it was all windows 3.1, IBM OS/2 and Solaris for the cool cats.


When I was 12 it was C64.


When I was 12 the VIC-20 was released but I rocked a TRS-80 Model 1.


Acorn System 1



If that makes you feel old, I must be ancient.


Still made me feel a billion years old as I am pretty sure I had quite a few Warty Warthog cds around at one point.


CDs? I had 7 floppies for Doom. And I bought a sound card!


When I was 12, we were messing around with the Amiga at a geek friend's house, ProTracker had just been released, we were playing Defender of the Crown on the school PCW computers, and I was doing Z80 programming on my own Timex 2068. :)


>but I can say that we have the same thing happening with 16.04

Yes, 16.04 (Unity 7) is peak desktop for years to come. Nothing else works quite as well for me. Luckily, you can use Unity 7 with 18.04 without too much friction.


What do you use as a file manager? Ubuntu's built-in one lacks basic functions one might expect from Windows Explorer or the Finder (e.g., the ability to rename multiple files, open a folder as a path in a terminal, mount SMB shares, or see file previews).


With 16.04, I'm using the default bundled Nautilus version. With 18.04, I think Nautilus lost some functionality; I haven't switched to 18.04 yet, though, so I don't know how serious it is. I don't use the file manager too heavily. You can switch to Nemo, Dolphin, or Caja if one of those works better for you. I'm not sure how nicely they all play with Unity - there was a patched Nemo version for Unity 7.


Worth noting that the Dolphin devs have disabled its use as superuser... I can't comprehend why. Krusader is my fallback option.


Lubuntu's 'PCMan File Manager' does all those and some more except for renaming multiple files.

'rename' [1] is a good way to do it anyway.

[1] http://manpages.ubuntu.com/manpages/trusty/man1/prename.1.ht...
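For example, something like this (untested here; see the manpage above - the -n flag does a dry run first):

    rename -n 's/\.jpeg$/.jpg/' *.jpeg   # preview the changes
    rename 's/\.jpeg$/.jpg/' *.jpeg      # actually rename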


I wish I could install Unity on my daily driver, Solus.

Budgie is great, but it's not as mature as Unity 7.


I feel like what they are describing is what Xubuntu already is... Still, I'd love to see some more power given to the DE.


Because the GTK version is no longer maintained upstream and the community was not interested in porting to GTK3, they will instead use Qt, which is better in quality and not controlled by GNOME.

So the "blame" lies not with the Lubuntu community but with LXDE.


> very, very customizable

I don't know where you're getting that. I like LXDE well enough - the only laptop I have running Linux runs Lubuntu - but I would not call it "very" customizable. I can't easily rearrange the menu (which I blame the braindead .desktop file standard for), I can't add my own menus (to my knowledge), I can't change the application menus to be Mac-style top-of-screen menus like I could in KDE 3, etc.


I agree that LXDE isn't the first choice when you're thinking about customizing your desktop (check out reddit.com/r/unixporn if you're into that sort of thing).

I meant it from the point of view of comparing it to others like GTK3, Unity, Cinnamon, etc. for some specific features that I find useful.

Of course, not all distros will lend themselves to all types of customizations easily and customizability, at least to some degree, is relative.

With that in mind, yes, maybe I should tone it down to 'customizable' instead of 'very, very customizable.'


I'm not really into ricing like /r/unixporn is; I just think my tools should be simple and flexible so that I can align them with my workflow.

For instance, why do we need one single global menu with all the apps categorized according to a .desktop file? That's overcomplicated and restrictive garbage.

Here's a simpler alternative: let me place as many panels on the desktop as I want, let me place widgets on the panels, let one of those widgets be a button that presents a menu-ized view of a folder structure. Now I can have as many menus as I want, with whatever layout I want, and organize my application launchers as best suits my workflow.


Without commenting on your suggestions, I'll say this, I am not that big on menu based launchers.

I used to use GNOME Do as my primary launch tool. Now I just use a shortcut (Windows key + space bar) for the 'Run' dialog (usually found at Alt + F2), and I know the first few characters of my favorite tools, after which they autocomplete.

For everyday use, it's hard to beat the speed and simplicity of that.


I always loved Qt as a developer and hated it as a user. Every app/desktop environment written in it seems buggy and expensive on RAM.

On the other side, I always loved GTK apps and their feel/design, but never got to make anything big myself in GTK.

There are other points to this debate than RAM/resources: look and feel, philosophy, feature bloat, etc.


I used to love GTK apps, but GTK3 has become almost unusable. I used XFCE for years, but now that it's switched to GTK3 I've had to abandon it for LXDE (the GTK2 version). Even something as simple as the XFCE text editor (Mousepad) now has pointless animations that you can't turn off, and it constantly flickers unless you turn on compositing, which is useless latency-increasing bloat. Everything about GTK3 feels like it was designed by somebody who'd rather be working on mobile apps. There are frequent animations deliberately wasting my time and distracting me, and excessive whitespace deliberately wasting my pixels. Maybe it's fixable with the complicated and unstable CSS-based theming, but instead of fighting the system it would be easier to switch to Qt-based apps. Qt at least appears to be designed for experienced users on desktops/laptops, not morons on phones.
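The closest thing I've found to a global off switch is the gtk-enable-animations key in GTK3's settings.ini; it doesn't fix everything, but it's worth a try:

    # ~/.config/gtk-3.0/settings.ini
    [Settings]
    gtk-enable-animations = false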


The "Experienced users on desktops --> Morons on phones" trend seems to be inescapable, especially in web design.


Given the endless messages that each GTK+-based application vomits onto stdout/stderr, I wouldn't say they are any better.


The average user does not start applications from a terminal.


Not seeing the messages does not magically fix quality issues.


Not even the average Linux user? I don't have any more data than you do, but I question that.


I mean, we have application menus for that.


This says more about how efficient KDE has become than about how LXQt is performing, if they feel like KDE is eating LXQt's lunch. I know people want choice and all, but I think we have reached the point where it would make sense for KDE to replace all other DEs, purely for its sheer flexibility and performance. With every other DE there is some trade-off, usually in resource usage. KDE at start-up uses under 500 MB with Kubuntu 18.04. It's the one reason I no longer need to upgrade my hardware and can use it till it breaks.


Additionally, there are only three major memory hogs in KDE:

Akonadi, purely associated with the KDE PIM stuff. You can just uninstall it all without a fuss.

Baloo, the file indexer. You can disable this outright.

Plasmashell, the desktop applet engine. This is where I feel LXQt can find its niche: by providing a simpler shell that uses fewer resources than the plasmoid JS-script-container style the Plasma shell uses.

Without any of these running, a KDE desktop - system services etc. included - is under 200 MB, often under 100 MB.
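If you want to try this, the commands are roughly as follows (from memory - double-check on your own setup):

    balooctl disable     # turn off and disable file indexing
    balooctl status      # confirm it's off
    akonadictl stop      # stop the PIM services (or just uninstall the Akonadi packages)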


Honestly, this entire post is a really good call; I hadn't even considered whether I need all of these. Is there a simple wiki or something out there that would help decide what can be disabled?

edit: found this http://www.linux-databook.info/?page_id=3728


Writing this comment on a 4th-gen Core i5 laptop running Lubuntu booted into RAM. The UX is just incredibly responsive. Up since Jun 1, and it would probably run forever ;)

But going into the future I am very interested in restoring older laptops with ChromeOS / Neverware, especially if they meet the requirements for running the Android / Linux container and studio. By the mid-2020s, Chromebooks could account for 5% of global PC market share.


Agreed on Neverware. It's an absolute dream to hand over an old machine with Neverware installed to a relative with modest tech needs. It's very fast, and I don't get tech support calls any more. Win-win.


I hadn't heard of Neverware before, so thanks for that. It looks like it's the Chromebook OS; does it force you to use the Google apps? Basically, is it usable without a Google account?


As far as I know, except for a guest account, there's no local account on Neverware, so you'll need a Google account to use it, which is unfortunate.


I'm doing this with Neverware now; they just landed Crostini in the dev release, but they've natively supported Flatpak and Docker for a while now. It's good stuff!


That doesn't seem particularly old to me? I'm running the latest Fedora with KDE on a "4th gen Core-i5 laptop" and it's more than capable.


I don't understand the rationale behind a separate group/committee/website/idea for the same distro with different defaults. I worked with the Antergos team, a team of 4 people, to write a distro that lets you pick and choose everything, from DE to browsers. While I've mostly faded away, they are still hard at it. In short, if we can support nearly every DE with 4 volunteers, what are these *buntus spending their resources on?


The boring answer is probably "because defaults matter".

But you are of course right that it would be less work for all people involved if these were instead Debian Derivatives or Fedora Spins where the useful infrastructure, developer processes etc. are already in place. I guess it all comes down to branding, and the fact that starting over is easier right at the beginning.


Antergos most emphatically isn't a distro; it's just an installer for Arch Linux, and quite frankly a poorly designed one that does not always actually work.

After the installer runs - if it actually works - you aren't left with something much different from a normal Arch installation.

The actual distro developers have a lot of work to do in assembling a wide range of components, and often in writing their own, like package management systems and recipes for building thousands of packages.

It's unsurprising that writing 0.1% of a distro, badly, doesn't require much work.


Sorry, I don't follow. Your second paragraph is rather accurate. That said, I don't see how that's different from the various flavors of Ubuntu. I would consider Lubuntu an Ubuntu installer...


Well that’s mean. But I have to agree that it doesn’t actually work 99% of the time for me. It certainly is easier to install arch than it is trying to debug cnchi. Arch requires so much typing, but at least the installation is reliable.


It's mean, but it's like they had one job... If it doesn't work, start over and make it work.


I don't take offense to it personally, as I no longer work on the project, but... did you bother to open bug reports and share logs, or did you just throw a fit on public forums? I still run the geo server used in installation, and I can assure you the vast majority (>99%) of folks do not have installation troubles. If they do, they aren't reporting it.

Comments like these, which are common, are what pushed me away from open source. Everyone feels entitled. Fix it, report bugs, ask for features. Frankly, as an open source contributor, I want it to work for people, but, if it doesn't work for your specific setup and you offer nothing of value, I don't care that it doesn't.


It's understandable if you still take it somewhat personally. You worked on it, just like an author who wrote an article in a magazine once would still likely care about that.

In my case (I'm not the person you replied to), the Antergos website and IRC (sorry, I know reporting on IRC is not as useful as a proper bug report) gave me the impression it was all good and worked for everyone. I spent 2 hours hacking on the installer to try to make it work, before eventually giving up and using the Arch ISO to install in 10 minutes.

The entitled behaviour one might see is therefore sometimes not just entitlement, but frustration from "feeling misled". We all know it's free, and I'm grateful for that, but I'm not grateful for being promised something that works and then spending 2 hours too many on that promise.


I believe you brought up Antergos. The exact statement was:

"I worked with the Antergos team, a team of 4 people, to write a distro that let you pick and choose everything, from DE to browsers. While I've mainly faded away they are still hard at it. In short, if we can support nearly every DE with 4 volunteers, what are these *buntus spending their resources on?"

You compared flavors of Ubuntu unfavorably with flavors of Antergos, and it's neither throwing a fit nor entitlement to point out that their resources presumably are spent on making things that work.

Regarding Antergos: I consulted their bug tracker; my issue had been reported already and wasn't fixed within the month I was interested in the matter, and it was impossible to use the older version without the bug, because once online it helpfully updated itself to the broken version. Further, as far as I could tell, the bug was generic enough that it would have affected most users. It simply crashed most of the way through a generic ext4 install to bog-standard desktop hardware.

People throw around "entitled" as if giving away something for free exempts your work from all criticism. Sure, you owe me nothing, but if your work over-promises and under-delivers, I'm apt to say something so others don't waste their time.


Firstly, I never said flavors of Ubuntu were lesser; I only asked why each DE needs its own community.

Secondly, you didn't report your issues; enough said. When you are a volunteer team, you can't buy every piece of hardware. You absolutely depend on logs and, more importantly, on people willing to test. I had an early Ubuntu beta format my partition when I hadn't selected to do so. I wouldn't say Ubuntu sucks and should give up; it was addressed as a bug and fixed. Such is the nature of software.

I don't think any promises were made. Antergos is an easy installer for Arch. I don't know if you know all that goes into an installer; it's way more than I assumed when I signed up. The Python 3 bindings for libparted, for example, didn't exist, so I wrote them and upstreamed them to Red Hat. For a distro, I'd say the installer is 20 to 30 percent of the effort; packaging is the large remaining majority. *buntu flavors reuse packaging, and reuse an installer.

I will never claim Antergos is perfect. It's not, and it does break as Arch changes things. But saying 'it doesn't work' or 'start over' is dismissive and borderline ignorant. Most of the issues over the years were due to changes in Arch packaging, not due to bad code.

I won't reply further, feel free to stand on your soapbox and continue badmouthing something you did nothing to help.


I've never been a Lubuntu user, but the problem I see is that Linux distros supporting x86 seem to be getting rarer.

I have a 2004 nx7010 (an awesome laptop at the time, and also 8 years later) that I sometimes use at home or grab to play around with and try out different distros and OSes. I used to run ArchLinux, but they stopped supporting x86 a while ago. VoidLinux is also out. Right now I'm running OpenBSD 6.2 on it, which is nice - but the short release cycle, for a machine you only start up like 3 times before the OS is EOLed, kinda sucks. (Not OpenBSD's fault, of course.)

So we'll see how long one can get a decent Ubuntu fallback solution for x86 - as in, one big enough that enough people work on it that it will mostly work out of the box for these hobby projects of running old-but-not-ancient hardware.


Last time I was hanging out in the Xubuntu dev IRC channel, I think there was some chat about how they were short on x86 hardware to test on. Emulation is useful but isn't enough; if you need x86, consider running tests or donating hardware.


See, that's the problem - it's not a real "need". I have 2 old laptops I don't want to throw out; if I donated them, my need would vanish ;)

Sorry if it sounded like complaining; that wasn't my intention. I just think x86 is dying a slow death. There aren't many "enthusiasts" the way there are for, e.g., 1980s or 1990s hardware, where you have "this one ancient build everyone uses", and x86 has no discernible advantage over x64 these days (please don't cite the lower RAM requirements of 32-bit builds now :P) - so x64 is superior in 99% of cases, and that's why it's rightfully supported. VAX, Sun, SGI, and C64 stuff is exciting; x86 in the age of x64 is just "older and slower" to most people.


Complaining, no not at all. Equally I didn't mean to direct those suggestions directly at you :-)


Besides running on old hardware, the main benefit of distros like Lubuntu and Xubuntu is that they run well in VirtualBox without reliable GPU support. Ubuntu with Gnome 3 is almost unusable in VirtualBox.


When I see people recommend that new Linux users try Ubuntu in VirtualBox to get familiar with it, I often wonder what kind of impression they walk away with.

A modern Linux distribution in a VM running on your laptop typically isn't all that great an experience.


Lubuntu 18.04 has/had a blank/corrupted display issue with VirtualBox; if you run into it, try enabling 3D acceleration:

https://www.reddit.com/r/Lubuntu/comments/8g7t8u/lubuntu_18_...
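
If you'd rather flip that setting from the command line, something like this should work (the VM must be powered off first; "Lubuntu 18.04" stands in for whatever you named your VM):

    VBoxManage modifyvm "Lubuntu 18.04" --accelerate3d on

Note that 3D acceleration also assumes the VirtualBox Guest Additions are installed in the guest.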


Thanks for that comment. I've tried using Ubuntu inside VirtualBox on my brand-new i7-8700 / 16 GB RAM computer and it's far from smooth. I'll give Xubuntu a try.


Xubuntu in Virtualbox is part of my development stack on my Windows desktop. The real deal is Xubuntu on bare metal (not virtualized). It's worked well for years.


While it's true that you can run plain old Ubuntu on a 10-year-old machine without problems, the growth in popularity of smaller ARM processors (Raspberry Pi, etc.) also creates demand for a lightweight Linux distro. When I read the title, I assumed that Lubuntu's new direction was to move towards offering better support for such devices. Guess they're going in the opposite direction... too bad.


I wonder if that factored into the discussion. I'm also hearing a lot of buzz about major OSes (Windows, macOS) supporting ARM as a consumer platform in the near future.


The main problem of desktop Linux is a lack of focus on audiences; most distros just copy the generalist mindset that Apple and MS adhere to, hoping to get a piece of that cake too.

Where Linux is very successful, it's focused on audiences, like sysadmins and devops people.

But there's no serious, focused distro for web and cross-platform app designers/developers and other creators. That's something companies that make design tools could pull off, where the graphical framework of their tool becomes the framework of the OS - similar to how GTK (the GIMP Toolkit) was born, but in a more professional way.


I am quite concerned that network security is pushing very usable software and hardware to the junkyard. Is it possible that, for a variety of reasons, it is better to have something reliable and stable than to throw out working software because of intractable browser and network security problems?


It's better to throw out insecure software. Computers with security problems get hacked and end up in botnets.


Most everything we do with computers in the modern world uses the network. If software is insecure -- is it really usable? Is it really reliable? I'd pretty strongly argue that the answer to both questions is no :).


Sorry, where was that bit in the article?


I'm not sure what to think about this - the next few releases will be the deciding factor.

One of the main reasons I use Lubuntu is that it gets out of my way (install Lubuntu, remove the handful of stuff that comes pre-installed, install i3, and get to work).
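
For the curious, that setup boils down to a couple of commands (the package names here are illustrative - what you purge depends on what you consider clutter):

    sudo apt update
    sudo apt install i3                # the i3 window manager and its default tools
    sudo apt purge abiword gnumeric    # e.g. preinstalled apps you don't want

Then pick the i3 session from the session menu on the login screen.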

With the above setup, I would idle at just 250 MB of RAM usage and almost no CPU usage. I loved that, especially as we get more and more Electron apps (Slack, Teams, Atom, GitKraken, etc.).

I hope that bit of focus is not lost, as not everyone has the means or the access to buy a 32 GB i9 MacBook Pro to develop on. (My personal machine is a ThinkPad T420 with 8 GB RAM and a 250 GB SSD.)


I don't know if Qt is part of what makes KDE so sluggish, but if that's the case, Lubuntu switching to LXQt might inherit the same problem. If so, I really have no idea why they're so hell-bent on the transition away from GTK. I'd love for them to migrate to JWM, but last time I checked, the L in Lubuntu was for LXDE. Is there really any benefit in adopting Qt, or are they just following what they see as the "forward" direction? LXDE doesn't have to look glossy, skeuomorphic, and outdated; a skin is superficial. I would do anything to avoid QTumor.


As far as I've seen, it's not. I used Razor-qt when it was still an independent project, and I pretty much fell in love with the idea of it. GNOME 3 was being a PITA (I remember throwing things when my artfully crafted CSS broke with an upgrade) and KDE hadn't crossed over into version 5. I'd tried out XFCE at that point; it didn't excite me because I didn't want to get stuck with GTK 2. I just downloaded Razor-qt and realised that, with a bit of spit, shine, and polish, it had the potential to become what LXDE was to GNOME. I'm pretty happy about LXQt emerging from the LXDE/Razor-qt merger.

Anyway, back to your question. Razor-qt was pretty snappy on my machine, leading me to conclude that KDE was slow because it was KDE: Akonadi, slow animations, buggy widgets, you name it. I haven't tried KDE since then (this was back in 2012 or so), but I've heard that the latest incarnations are pretty snappy and light. Even if they aren't, I highly doubt LXQt will have any problems with lag.


> I really have no idea why they're so hell bent on the transition from GTK

Because otherwise they'd have to develop it themselves?

> A skin is superficial, I would do anything to avoid QTumor.

Do me a favour.


Qt works on embedded systems, so it is an optimized toolkit; if a Qt app feels slow, then most of the time the fault lies with the app developer.


How about we use a Raspberry Pi as the definition of an "old" machine? It's the most popular low-powered system. Which distribution would you use?


Linaro and Raspbian are both based on LXDE, if I recall correctly.

I wonder if they will also adopt LXQt (or have they already?).


A PR blurb telling us that Lubuntu is going to die within the next two years or so.


You're downvoted a lot because you didn't substantiate your claim, but I assume many people, myself included, agree with the sentiment.

Up until now Lubuntu had a clear goal: Create a usable Ubuntu-compatible distribution that runs on old hardware.

Now the goals are so vague and subjective that they lead nowhere. Sure, this won't matter in the short term, but sooner or later the project will get completely derailed by differing interpretations of this dumb statement.

They can still make a good lightweight OS that runs on 10 year old hardware. There's no reason to change the goal.

Sure, it's no longer a massive challenge, but it doesn't have to be. If the developers want to try new things, they can fork it.


I am a bit shocked because until now Lubuntu was awesome at serving the older devices.

Lubuntu worked great on one of the first netbooks from 2007, where other distributions did not work out of the box.

If people want something modular or super-configurable, then Arch Linux has that use case pretty well covered, does it not?


I use Lubuntu as my only OS on a Dell Latitude 5590, and I like the new direction they announced on their site a few days ago. I have never seen Lubuntu as a low-end alternative, but as a capable OS that does not get in my way. Lubuntu's panel works without glitches on the side of the screen, letting me make better use of the vertical screen space. The LXDE Lubuntu theme combined with Faenza icons is also very nice looking. I am sure LXQt will keep up the same vision, and the new direction matches my needs exactly. What I need is a smooth upgrade path when the time comes to move to LXQt, and that it support the LXDE features I use now. I wish the LXQt and Lubuntu teams all the best in keeping up their good work, and I am looking forward to the new software.


Are low-end phones now on par with these 10-year-old systems?

It's difficult to find a phone with less than 1 GB of RAM. And although ARM CPUs are much less powerful than x86 at the same clock, 1 GHz also seems to be the minimum.


Never mind ARM vs. Intel; consider that the Intel processor in a notebook is way less powerful than its desktop counterpart.


I used Lubuntu on some old systems. I would say the problem the distro faces is not so much its focus as its package management, which could have used some work. If they had kept their mission statement and developed something like the AUR in Arch, but obviously for older systems, I suspect they would do quite well.


This makes sense for things like x86, but what about ARM stuff? It tends to be low-powered in both wattage and speed - perfect for Lubuntu. I'd love to see even more effort on things like the Pinebook.


I'm a Lubuntu user; I'll shift to https://clearlinux.org


It won't be long until my primary gaming rig is ten years old, and it's more than capable of everything I throw at it. The upgrade cycle is essentially dead.

That being said, I'm not clear on what the real difference between Lubuntu and Kubuntu will be now.


Even good-quality laptop hardware lasts. I'm using a laptop that's about 8 years old as my daily driver, and it can more than handle everything I need it to do - and that includes compiling software regularly. We've reached a great time for computers, because pretty much any device you can buy (even used) can do all the stuff most people need to do. The primary advantage of newer gear is just reduced power requirements.


Hopefully this will be the next big battleground in the world of hardware. I'd love a laptop that could go days between charges.


> It won't be long until my primary gaming rig is ten years old, and it's more than capable of everything I throw at it. The upgrade cycle is essentially dead.

Not really for GPUs, though. AMD has progressed a lot, and all the new features are landing in amdgpu, not in radeon. That's especially important for Vulkan support; older GPUs are locked out of that completely.

CPU progress is less rapid, but something like Ryzen is quite a breakthrough.


KDE and LXQt are radically different, even though they use the same underlying framework. In a way, it'll be the same difference as XFCE vs. GNOME.


I must admit that these days I'm really preferring Qt over GTK; it's come a long way since I started using Linux. I remember looking at LXQt back when it was first released and finding it fairly interesting, but nowhere near developed enough for daily use. I always liked XFCE but found it lacking in customization and kind of buggy at the time I used it. Though it did keep my laptop going after a hard drive failure, running partitioned across 4 USB drives until I got a new one.

It's cool to see LXQt in a state where Lubuntu is adopting it as its primary environment.


> The upgrade cycle is essentially dead.

It's terrible that we can't count on computers getting twice as fast every year any more. I guess it couldn't last forever...


Nah, nah, it's about damn time software engineers started giving a shit about performance again.


...because we've already got all the computing power we'll ever need? You think software developers not caring about performance is why we can't quickly simulate weather systems or protein folding or do large-scale ML?


The GGP comment seemed to be mostly about personal computers, a field in which we do have most of the computing power we will need.

For those kinds of hard problems, the advances seem to be coming from massively parallel computing (GPGPU has transformed ML, for example).


Yes, but the amount of computation that is practical to put in a car or a phone probably isn't going to increase much.


With self-driving cars, it definitely is.


I'd settle for text editors that don't lock up when I type an open paren. Or use entire sticks of RAM at rest.


Some here are old enough to remember "emacs: eight megabytes and constantly swapping", back when a stick of RAM could have been 8 Megabytes...


> text editors that don't lock up when I type an open paren

This miiiiight be just you. Even Sublime Text (never mind Vim or other CLI editors) runs fine on ancient hardware.

What type of files are you editing?

> Or use entire sticks of RAM at rest.

I'd hope my text editor would use as much RAM as it needs in order to give me the smoothest experience. If possible, that means storing the whole file in RAM (up to a set limit).


Stop using editors written in JavaScript by overpaid new grads.


>It won't be long until my primary gaming rig is ten years old, and it's more than capable of everything I throw at it

You must not throw much at it


I cannot think of a single AAA title in recent memory that causes my i7 2600k -- an 8-year-old CPU -- to struggle.

It can play absolutely every single modern game, with the only component upgrade in its 8 year life being the GPU.


I'm not a huge gamer anymore, but... isn't the main bottleneck for running AAA games at high settings the GPU?


Generally yes, although there are certain games that an old CPU would definitely choke on. It’s a bit disingenuous to say the upgrade cycle is dead and then to spend more money on upgrading the GPU than the total value of everything else combined.


It can be the CPU. But since four cores is the most almost everybody has, games don't push beyond that.


Battlefield 4 and Battlefield 1 both max out my CPU (3570K overclocked to 4 GHz) and leave my 1070 twiddling its thumbs, based on Task Manager and GPU-utilization software.

Either my CPU is slower than it should be, your CPU is faster than it should be, or you haven't checked out one of the poster-child AAA game series.


>I cannot think of a single AAA title in recent memory that causes my i7 2600k -- an 8 year old CPU -- to struggle

I cannot think of any modern, graphically demanding game which is bottlenecked by the CPU. I imagine you are harming your GPU performance to some degree with such an old CPU though. Not sure I buy what you're saying here. What settings are you using?


My point, of course, is that the only thing you’re going to measure a gaming rig by is gaming, which has very little to do with the age (or speed) of any component other than the GPU. Do you suspect, though, that the standard home user (or user of a repurposed old machine with Lubuntu on it, maybe) is going to be pushing those other components harder than a modern AAA game?

Yeah, me neither.

As an aside, to answer your question: I can run pretty much any game on its highest settings just fine with this machine, which is a GTX 1070, 2600k, and 16 GB of RAM driving a 120 Hz 1080p display. It was originally built with two HD5990s in CrossFire.


You can stick a new GPU into a 10-year-old machine and it will run modern games admirably.


This is some Ship of Theseus territory we're getting into here, but I think most people would say that if you replace the GPU in your rig, it's not the same rig.


Ship of Theseus? That's quite an exaggeration. The motherboard is often considered the core, and the same motherboard and CPU is definitely the same rig. It's just an add-on card that has changed.


Sometimes that "add-on card" can eclipse the cost of the "rig" itself.

If the purpose of the device is to play games, and it is incapable of doing so without this component, I'd think of it as a bit more than an "add-on card".


Yup, but this is the "problem" with computing nowadays.

For 90% of computers, merely putting in an SSD (no matter how old the machine is, as long as it has, or can fit, at least 2 GB of RAM to deal with piggy "modern" Electron/CEF/etc. apps and similar idiocy in browser web apps) can make it seem a decade newer.

For those who game, more often than not, just upgrading the GPU helps.

I have an i7-4771 @ 3.9 GHz w/ 32 GB of DDR3-2133. This is not a new system, but no one has made anything newer that would really improve actual performance for me... what holds me back is a similarly old GPU that I've finally decided to swap out for a GTX 1180 when they come out next month.

My 7970 served me for many, many years, and games are finally starting to punish it at 1080p. I still won't need to update my CPU, however.


If it's still in the same case with the same CPU and motherboard, you could make a decent case for it being the same computer. Graphics cards are meant to be upgradeable, that's why they're sold separately.

I think for the purposes of OEM software licensing, they use CPU + motherboard as the defining hardware, but I'm not sure.


Obviously, but then can you honestly walk around saying you can play any game on your "10-year-old machine"? 10-year-old case, maybe.

Even then, you're going to have issues with the motherboard and probably the CPU bottlenecking you as well.


I would say adding a PCI card still makes it the same machine.

Also, issues with the CPU bottlenecking you are not too bad, unless you are already pushing the limits of your GPU, which gets more and more difficult each year.


Eh, PCIe 2.0 x16 is more than enough to keep even modern graphics cards fed. Not sure what else about a 10-year-old motherboard would bottleneck you.



