I don't know if this is nostalgia talking, but there is something particularly attractive about the BeOS and Haiku desktop, both in terms of design and aesthetics. The interface actually has a certain depth, and this is consistent across icons, windows, dialogs, menus, buttons, etc. It's a shame that interfaces nowadays are completely flat. They are almost... expressionless, and with the exception of the odd drop shadow they completely lack a third dimension. When I first learned of Haiku (and BeOS), back when Gnome was in the 2.x days, I was so impressed by the interface that I installed a look-alike desktop and icon theme. I used it for quite a long time, until GTK+ 3.x eventually became prevalent.
All 90s GUIs have something that I miss[1]. I think the flat movement was a desire for hyper-genericity... but it turns out that a bit (just a bit) of visual signal and faux skeuomorphism (some widgets emulated actual LED keys found on hardware) is good.
ps: also, flat came after both the Aqua trend, where effects were everywhere, and skeuomorphism being pushed ever higher. Not that surprising in a way.
[1] beos, win311, macos classic, win95 (office97 era) and nt5
- Standard interaction widgets. There was no breaking the scroll; there was no button that you couldn't discover how to press.
- Expert-oriented interfaces. There was no action without a shortcut. The most complex software always had some CLI or an API (optionally with an interpreter).
- Discoverability features. The shortcut descriptions were embedded in the same places you had to click to get the functionality. Buttons were clearly marked. There was almost always some text area telling you what was happening.
I don't miss skeuomorphism, but it was often used to mark real features, and those features disappeared along with the arbitrary skeuomorphism. Current GUI trends were created by people with no interest in making their software useful; instead they only keep an eye on the showroom conversion rate. (Whether or not they are right to do so, it's a problem nonetheless.)
> Current GUI trends were created by people with no interest in making their software useful; instead they only keep an eye on the showroom conversion rate. (Whether or not they are right to do so, it's a problem nonetheless.)
Yeah, I feel A/B testing is really the place where Satan secretly influences the world. I'm losing track of how many times I've seen user-hostile or application-debilitating decisions justified by "data" on user behaviour. Something is very wrong here.
- Result: wastes precious vertical space on low-resolution widescreen displays (i.e. business displays and most notebooks) that should be dedicated to showing the document body.
# Due to being horizontal, GUI items appear/disappear depending on window width.
- Result 1: having two documents on screen can be disadvantageous compared to having one on screen because a lot of the GUI items aren't showing or are hidden in submenus, encouraging wasting screen real estate by only having one document on the entire screen.
- Result 2: due to the Ribbon's horizontality, instead of having the elements stay on screen in a consistent manner and be easily scrollable, the interface constantly surprises the user by unexpectedly hiding even key GUI items.
# No option to disable.
- Result: if you wanted to have certain GUI items visible at the same time… well, tough. If the floating palette that inconsistently appears when the user places the mouse cursor over certain elements doesn't have your favourite item, then you're out of luck if the items you want to use exist under different tab groupings.
iWork '09 and prior had the best design: a main toolbar with general items, a smaller context-sensitive toolbar underneath, and context-sensitive inspector windows. If the revamped iWork had simply docked those context-sensitive inspectors, rather than getting rid of the context-sensitive toolbar, it mightn't have received such a poor response.
The irony is that Office 2003 (and a few releases prior) already had inspectors on the side, and those would have been perfect given the prevalence of widescreen displays, leaving as much vertical space for the display of the document body as possible.
LibreOffice seems to have three GUI modes: one like iWork, one like the old Office, and one like the Ribbon.
I've been using my taskbar on the side of the screen for the past decade, because every screen I use has an excess of width and limited height. I seem to be very much in a minority here - I guess people don't change things from default?
> I guess people don’t change things from default?
Precisely. The average user assumes (often correctly) that the way things are is the way things are. When it comes to what technical people think are basic features (pairing a wireless keyboard to an iPad, or changing default text size on iPhones, or any number of similar tasks on any system), the modern “ease of use” guidelines suggest hiding everything away as much as possible, severely limiting setting discoverability.
Well, I consider myself an expert user, but I rarely change the defaults because I switch between environments so often that it would represent a major time overhead to extensively customize applications to the way I really want them. And besides, the way I really want them is so far away from the way they are shipped that it's normally unrealistic to maintain that level of customization.
So I get used to the defaults. It makes it easier to throw out, reinstall, or switch environments if I need to. In any given day I use 5 or 6 different primary environments.
Probably I switch so often because of current limited and limiting system design: in a Plan9-like world the user's desktop is the centre of the world, and everything starts from it and gets done and used from it. In a commercial world it's common to have tons of crappy devices (so you pay for more things, more often) and no real integration.
Some time ago I had a discussion with a "commercial" guy who said that the only really integrated platforms are cloud & mobile, so they are obviously the future, because we are a society and we need to interoperate. I responded by plugging my laptop's HDMI into the room projector and showing a quick Emacs/EXWM(-X) demo: email? Hit a single key (F6 in my case) and my MUA (notmuch-emacs) pops up instantly. On top of its big search bar I have a few single-key-accessible saved searches, and at the bottom the big series of tags, a far superior "dashboard" than the bloated GMail UI. Of course, composing a new message can happen with a single key at any time, whatever application I have focused. Oh, imagine someone asks me for a demo: a quick M-x skeletor (ivy-completed) pops up, a single key to choose a beamer slide template, some quick typing and the slides are made, tested locally and uploaded. Another imaginary interruption and another task (skeletor again + org-mode), an imaginary patch sent via mail and voilà: magit integration. All the data really is integrated and usable in a consistent environment, anything can be done in a snap, and NO other monster modern GUI or '90s-style one can do the same. That's the past (starting from the LispM/MIT AI Lab glory days) and the future, just as we had the "golden age" of the ancient Greek polis, then the "dark" Middle Ages, and then a modern age again. That's integration and customization. No need to switch between systems (while it can be done easily with NixOS/GuixSD + homeManager/GNU stow + unison). My system is the main one and I can replicate/extend it on any decent hardware as needed. That's "switching systems" IMO :-)
I think you're talking about something very different from what the person you responded to was talking about.
I'm assuming they were talking about systems they don't own, that aren't their own, and over which they don't have the sort of control needed to install their own software and set things up using their personal configuration files.
It's awesome that you've got, or at least dreamt up, a system that works for you, but if you're able to use that exact system on every single machine you use, that isn't quite what was being described. That's an ideal, but only really feasible for personal machines.
Also, I'm going to get downvoted, but please put in a few line breaks.
My point is that we should not normally need to use "other machines". Of course, work has its requirements, but tech users should IMO do their best to avoid working in bad environments and to convince their company to let them use productive software. It may be a dream, but IME it works, at least if you are an admin or a relevant dev, or if you find a good place to work. Of course it doesn't work if you are in an administrative or other role...
> Also, I'm going to get downvoted, but please put in a few line breaks.
I still have to learn the idiosyncratic way HN handles text... I do put in line breaks; I edit in Emacs and paste here, but HN messes it up...
> My point is that we should not normally need to use "other machines"
> Of course, work has its requirements
So, which is it? We do or we don't?
> but tech users should IMO do their best to avoid working in bad environments
More often than not, it's not up to the workers but company policies. Besides, needing to use a machine other than yours isn't a "bad environment", it's life.
Not only that, but the "other machines" could equally be non-networked terminals for heavy machinery. A lot of these run a stripped-down version of Windows, so the basic user interface is usually left at default settings whilst an always-open program takes up most of the display.
I agree with you on the next part:
> [...] to convince their company to let them use productive software
I'm right with you here, but again, company policies. Plus, your example suggests you're just thinking of individuals within a company as individuals.
We mustn't assume that all users here are in technical jobs, particularly development; often we're just moving between standardised Windows workstations, lowest common denominator setups so that (A) non-technical users could log in to any machine and still understand how to use it and (B) the IT department have fewer headaches to sort out.
After all, a company is not just made up of individuals; it's full of teams who have to work together to reduce each others' burdens. Sometimes that means using setups that aren't our favourites; our personal productivity mightn't be as great as if we used our own setups, but the company doesn't grind to a halt when someone's delicate configuration goes haywire and the IT team spends more time on it than anybody has any right to expect.
There's a delicate balance to maintain in most companies. IT departments have no trouble labelling even the very technically competent users as ID10Ts.
> however HN messes it up
Are you making sure to use two carriage returns, not just one? It's not particularly idiosyncratic, reddit is the same. I think it might be inherited from non-WYSIWYG forums or bulletin boards.
At any rate, it's becoming a bit of a standard to use two carriage returns due to this being the way that line breaks are entered in Markdown.
This looks even worse, you've got double line breaks which turn simple line breaks into paragraph breaks.
Your editor shouldn't insert the line breaks at 80 column intervals; separate the content from the presentation, and let HN format your text properly. After all, if you have a small screen, the text will be wrapped according to the browser width anyway.
That's what I did in the first post... Maybe I don't understand what you're saying, then; my English is somewhat poor...
I understand that you're complaining about my comment's long lines because I "format" in F-F style (i.e. no line break except for paragraphs); next I formatted with double line breaks to force HN to "cut" the longer lines.
I do not know how to format any other way; inserting HTML + inline CSS with a maximum text width, or maybe even a media query, is not something I expect HN to accept, nor a thing I'd like to do as an HN user...
Same here. And this is the reason why I use Opera as a browser. It gives me features like mouse gestures and ad blocking without having to download extensions.
Never underestimate the importance of good defaults.
This. So many technically-minded people are so wrapped up in their configurations that they forget that sometimes you're placed in a situation where those configs don't exist. Good defaults ensure a decent, minimal, baseline experience that isn't hair-tearingly bad.
I have always put the taskbar on the left, for as long as I can remember being able to.
At work, I seem to have always been the only one with such habits, for more than a decade and across different client environments.
Most screens are wider than they are tall, but I wouldn’t say there’s an excess of width. I use the width for multiple windows side by side, and I’m fine with having a “global” taskbar running horizontally and using a small portion of the height. MacOS and other operating systems cleverly have a global top bar that changes depending on the focused application, which I like.
There's an excess of width if you aren't a pro user, who knows how to take advantage of it. Linux has tiling WMs, Windows has basic tiling functionality + someone out there made a hackish tiling WM in AutoHotkey of all things[0]. But you're unlikely to discover this as a regular user, unless someone shows it to you.
No, I do the same thing. It also allows me to put text labels on my running applications, because I read English and not hieroglyphics... Especially when MS loves to change their icons for VS and Office with every major release.
I've done that... but some of the orientation experience should actually be flipped... for example, with the taskbar on the left, the Windows Start menu should probably still be in the bottom left, with the taskbar items climbing upward, and notifications flipped to the top left.
In the end, I find that left-side orientation is rarely done well, imho. I did really like the Unity UI in Ubuntu, but I think I'm the only one on this. I used to position my launchpad in macOS on the left as well, but this is awkward, and it's easier to just have it auto-hide.
I put the macOS Dock on the side, but I still keep the Windows taskbar at the bottom; I just prefer how that looks, aesthetically. That said, I change its size to small, I don't need huge icons.
Yeah. I found only one thing Ribbon excels at, and it's not even in Office. It's in Explorer and some other applications, when used in tablet mode. Then, suddenly, Ribbon becomes a brilliant thing, reinforcing my view that Windows is the only sane, non-toy OS for tablets. But Office use is keyboard-heavy, so I don't understand why the Ribbon is there, or why it started there.
Even then, the touch targets are all of different sizes. I find it really bizarre to have small touch targets on a tablet interface.
That said, I'll agree that the Ribbon does work best in a full-screen interface at the top, though I still think one or two context-sensitive toolbars would work better.
I was thinking about this in the shower: why is mapping a drive letter to a path in the Ribbon? Shouldn't that be a button somewhere in or near the navigation tree in the left sidebar, grouping it with the other aspects of disk drives and paths?
Plenty of what's in even Explorer's relatively simple ribbon could be in a context-based location for greater semantic grouping. That could simplify the Ribbon, and turn it back into a simple toolbar, maybe with a secondary, highly context-sensitive toolbar at the bottom of the window.
That's not an unprecedented thought. Windows XP had it with the Quick Tasks sidebar. The problem there was that the use of sentences rather than simple command names made it difficult to separate the signal from the noise; plus, if you didn't change any of the defaults, you had that little dog making the whole thing seem rather unserious when it was actually a rather powerful paradigm, poorly implemented but with much untapped potential.
Collapsing the Ribbon is lipstick on a pig. In the end, using the basic functions of the software becomes a game of "is my mouse pointer in the right place to trigger the Ribbon to temporarily show without getting fed up and just making it stay shown all the time?".
Ever seen a non-technical user move the mouse with the same dexterity as you or I? I haven't. The mouse always roams around for a month of Sundays before it eventually arrives in the right place.
This is not a stable interface; it is an ugly hack to make up for how wasteful the Ribbon was designed to be.
At best, it could be a sort of distraction-free writing mode except that you can still see the rest of the interface, some parts equally as eye-catching as parts of the Ribbon.
> Ever seen a non-technical user move the mouse with the same dexterity as you or I? I haven't. The mouse always roams around for a month of Sundays before it eventually arrives in the right place.
Not just non-technical. I hate having to hunt for the tiny target. A very highly technical friend of mine invented a term for that: “pixel spearing”. I am truly incensed at how much time I waste trying to spear the exact right pixel.
It wouldn't be so bad if the correct zone for bringing up the hidden area obeyed Fitts's Law, but (A) the zone isn't directly at the top of the window, and (B) that wouldn't work with non-maximised windows anyway.
It's why hiding toolbars dynamically works well in macOS' fullscreen mode; flinging your mouse to the top of the screen always shows the menubar and toolbar without fail (unless the mouse is captured by the app, of course, like in a game).
Missing the point. I shouldn't have to hide the primary means of accessing even the most basic functions of the software just to save unnecessarily-wasted vertical space.
At the very least, Office should then show a small context-sensitive toolbar (mini-ribbon?) or a sidebar; instead, the only recourses are either the inconsistent appearance of the floating palette that only appears when the stars are aligned with the user's mouse pointer or temporarily showing the Ribbon again.
Like I said to another reply to my comment, lipstick on a pig. Maybe another analogy is that it's a band-aid on a flesh wound.
I don't blame the Ribbon design for using vertical space. Unfortunately, nearly all programs do so. It goes hand in hand with the poor vertical resolution of monitors: 1920x1080 is not better than 1920x1200, but marketing says otherwise ("Full HD"). The initial design phase of the Ribbon style was probably at a time when 4:3 or 5:4 monitors were more common.
The floating palette is not inconsistent. It appears when you select text. At first it's semi-transparent, because you might not need it. If you want to use it, hover the mouse over it and the palette becomes opaque. If you just wanted to highlight something by selecting it, you can move the cursor away and the transparent palette hides.
I can reproduce this behaviour all the time. It might not be the best idea, but it's not inconsistent in its usage.
> The initial design phase of the ribbon style was probably at a time when 4:3 or 5:4 monitors were more common
Though they were more common, the market had already moved to notebooks outselling everything else. Microsoft should have had foresight. In fact, you might say they did, with Windows Sidebar in Vista; whilst not fantastic, it was a good use of horizontal space.
Besides this, they already had interfaces in prior versions of Office that made more efficient use of vertical space __and__ made efficient use of horizontal space on widescreen displays; they were the sidebar palettes, still used in Visual Studio. They just needed further development; instead, they were completely removed.
> It's not inconsistent
It is if you're a user with special accessibility requirements, especially those with motor skill problems or vision difficulties; ephemeral interfaces are hard to target, and without a means to manually invoke it and a consistent location, might as well not be there for many a user.
A well-designed interface doesn't need to account for edge cases. Office 2003 and prior's interface, whilst not pretty, was already extremely usable in that sense. All that was needed was context-sensitivity; instead, the baby went out with the bath water.
>It is if you're a user with special accessibility requirements
But you didn't specify that when you called it "inconsistent" in the parent comment. I assumed no special accessibility needs, as did you, because you hadn't mentioned them before. So maybe it is inconsistent for those users; for the rest it's still consistent.
>All that was needed was context-sensitivity; instead, the baby went out with the bath water.
The ribbons in Office have context sensitivity. Select an image and the image ribbon is shown; a table, the same; and so on. Since text is the primary context, it is always shown by default (the "Start" ribbon).
That's your mistake, then. When talking about user experience design, it's always inclusive-by-default, accessibility a top priority, not an afterthought.
This comes back to my parent comment right at the top of this thread: using native widgets with full accessibility support gives you this for free. Of course, I'm not saying there shouldn't be innovation, but those outcomes should be on par with the default widgets, not even a tiny bit lesser.
> The ribbons in Office have context sensitivity
But they (A) still show too many features for whatever is selected, demonstrating that the context-sensitivity is limited; (B) don't always automatically change to the appropriate tab; (C) sometimes show two tabs, confusing users (especially when it comes to tables or graphs); (D) hide other tools, by virtue of switching tabs, which would still be useful (namely, everything on the main tab).
Also, context-sensitivity would mean that the Ribbon would change back to the main tab after any operation in the other tabs was done. Since it doesn't, it demonstrates that the user has to constantly switch between contexts manually, meaning that the Ribbon's context-sensitivity is pretty poor and, again, inconsistent.
>That's your mistake, then. When talking about user experience design, it's always inclusive-by-default, accessibility a top priority, not an afterthought.
Not really. I think you pulled this card to win the "consistent" argument. After all, I don't see evidence that this style does hinder accessibility.
>(B) don't always automatically change to the appropriate tab;
They do, in Word 2010. Inserting an image -> image ribbon; the same goes for tables. If you want to force-show a ribbon you can always double-click.
>(C) sometimes show two tabs, confusing users (especially when it comes to tables or graphs)
What's confusing about this? It shows a header, "table tools" (translated), so the purpose is clear. Sometimes stuff is more complex and needs more space.
>Also, context-sensitivity would mean that the Ribbon would change back to the main tab after any operation in the other tabs was done
It does. Again, Word 2010. Select an image, perform an operation. Start writing text again (because you might perform several operations), thus exiting image-manipulation mode, and the Start ribbon is there again.
Note: I'm not saying this is the best interface there is, merely that it's not as inconsistent as you depict it.
> I think you pulled this card to win the "consistent" argument
Actually, if you look at all the comments I've been making in response to my initial comment, the parent of the thread, I've been talking about accessibility the entire time, especially for the visually impaired.
For the visually impaired and those with motor skills impairments, it's fairly easy to overshoot where the floating palette appears. If you shoot too far up, it disappears; it doesn't reappear when the mouse comes back to where the palette was, the text needs reselecting.
That's my main inconsistency.
> (B) don't always automatically change to the appropriate tab;
> They do
I stand corrected in one respect, but we still have inconsistent behaviour. Say you create a table, the Ribbon will change to Table Tools. Great.
Click away from the table and then click back. Still on the Home tab. Well, this makes sense since you're probably editing text — but again, that just reinforces my belief that those tools should be in a separate toolbar (à la Office 2003 and prior).
> What's confusing about this? It shows a header "table tools" so the purpose is clear
The technically minded may figure it out, but ordinary users have to rote learn what the tabs do. As an example, Table Tools shows two subtabs, Design and Layout.
Do you think you could get Sheila from accounting or Bob from packing to tell me the difference between the two tabs, or how the two Design tabs differ, without letting them click around the interface? I doubt it.
When there are two context-sensitive tabs shown, and they're both heavily related, you can almost bet money that an ordinary user, Sheila from accounting or Bob at reception, is going to cycle between the two tabs to find what they're looking for. This isn't intuitive.
> Note: I'm not saying this is the best interface there is, merely that it's not as inconsistent as you depict
That isn't high praise. A user interface shouldn't be inconsistent at all, especially the flagship product of a multi-billion dollar company, and especially the de facto standard in productivity software.
>The technically minded may figure it out, but ordinary users have to rote learn what the tabs do. As an example, Table Tools shows two subtabs, Design and Layout.
Not just the technically minded. Office is primarily used by non-technical people.
>Do you think you could get Sheila from accounting or Bob from packing to tell me the difference between the two tabs, or how the two Design tabs differ, without letting them click around the interface? I doubt it.
Telling? Probably not. But asking them to do X with the table? Yes, at this point (when working with the program) it's probably muscle memory to do the stuff you want.
Who, in my experience, have only a grasp of the absolute basics of what they're using (and pretty much only live in the home tab), and consider most of the other basic functions as too complicated.
> [...] it's probably muscle memory to do the stuff you want
A good user interface design doesn't _require_ muscle memory, though it can reward it by making repetitive actions quick and efficient. Intuitiveness should be the goal.
That and removing "increasingly little used" things because the telemetry told them so. Features that they've spent the last 4 releases downplaying and hiding. Funny that.
Or occasionally a setting you last touched five years ago when you installed the thing, and have been quite happy with ever since.
How strange that they don't need data to install ads in start menus and new tabs, "refresh" the whole GUI (once you've fully committed the old one to muscle memory), or add other pointless bloat.
There's a common anti-pattern that happens with what I'll call "short attention span" development processes (excessive reliance on A/B testing, agile sprints, etc, no long term vision). A poorly implemented, buggy, poorly documented, and/or not well integrated feature doesn't see much use so the developers avoid putting work into it, which just makes all of its problems worse until eventually someone has the bright idea to cut it because "nobody uses it". Sometimes this is warranted, but more often than not it's just laziness and short-sightedness. Nobody ever puts in the effort to fix broken core functionality, all the effort ends up dedicated to superficial turd polishing or gimmicks.
Ever wonder how bland, feature-barren, largely useless, lowest-common-denominator shit that nobody really seems to like or identify with can come to dominate the design of software products?
My theory is that the assumption there is some kind of meaningful "Average user" which a product is then built for ultimately destroys utility in software.
If you have 50 features that on average 1% of people use, you can easily reach the false conclusion that nobody cares about these features, when in aggregate a large share of your users might rely on at least one of them. Thus, in aiming for the average, you haven't designed for anyone at all.
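A rough back-of-the-envelope illustration (my own numbers, purely to show the shape of the argument): if each of those 50 features were used independently by 1% of users, the share of users relying on at least one of them would be

    1 - (1 - 0.01)^50 ≈ 1 - 0.605 ≈ 0.40

so roughly four in ten users would lose something they actually depend on, even though every individual feature looks negligible in the telemetry.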
Combine this with the fact that metrics can't truly tell you why something is, only that things somehow appear to have a relation. Developers love to run wild with a narrative about how their particular reading of why the metrics shook out this way is unchallengeable fact, backed by DATA (when it's really exactly the opposite: you drew a conclusion based on relations you assumed to exist), and you have a recipe for some really horrible misdesigning.
I think modern application design is completely off the rails, driven by statistical phantoms that are assigned significance in an arbitrary and non-systematic way, and it's leading to software that is genuinely a degraded experience for the user, wearing the mask of something that is propped up as "objectively better".
Everything was accessible via the menu! You could tell what was a label and what was a button! There were tooltips! There were no hidden gestures--some magic combination of taps and swipes that has you banging on your phone like a monkey.
What's remarkable about GUIs today is that the software is harder to use despite having less functionality and fewer features than ever. Compare new GMail to Outlook 98; Windows 2000 to Windows 10; the built-in PDF viewers in Edge/Chrome to the Acrobat PDF plugin; etc.
I miss tooltips the most because I can no longer predict what icons do on sight. Not only are tooltips missing, developers who use these unintuitive icons also remove labels. Thus, users must read articles or watch tutorial videos to discover info a 2 second tooltip would have provided.
The other trend that's pissing me off is that actions are increasingly irreversible. I loved computers because they seemed to have reset or undo buttons everywhere! Most actions / mistakes were fixable in one click. Even Windows Recovery, which failed to work correctly most of the time, still provided mental safety, and I explored with ease.
Now, things seem to happen for no reason. Flick the mouse the wrong way and something will change, rendering a piece of software unusable with no way to fix it short of uninstalling the application; some changes even survive reinstalls, and the only way to fix them is an OS change.
I still fire up my trusty old Win2k VM from time to time. Fast, stable, simple, intuitive. Along with Delphi 7 it's incredible how productive we used to be some 15 years ago. Documentation was sooo good back then.
Agreed. 2000 and 2003 were peak-Windows for me. I used my trusty 2003 version from MSDN AA for years, skipping XP entirely, and then jumping straight to Vista with hardware change.
Windows 7 was a-ok too, but my best memories are still with 2000/2003.
What does this mean? It's much more of a "real OS" than DOS at least, no? And it's certainly a platform that can be used to get computer things done, in a much more resource-efficient manner than modern systems.
> Standard interaction widgets. There was no breaking the scroll, there was no button that you couldn't discover how to press... There was no action without a shortcut... The shortcut descriptions were embedded in the same places you had to click to get the functionality. Buttons were clearly marked. There was almost always some text area telling you what was happening.
This still is the right way to design GUIs for applications to which it is applicable; many people (me included) still follow it, and I would warn against thinking of it as obsolete.
Sadly, you are in the minority. Look at this [0]. This is why no one needed a baked in "dark mode" for Windows 98. There is no equivalent dialog in modern Windows. It's ridiculous how much computer interfaces have regressed in recent decades.
> This is why no one needed a baked in "dark mode" for Windows 98.
Nevertheless there was one, and it looked really crazy.
> There is no equivalent dialog in modern Windows
The last version of Windows I used was Windows 7 and I'm pretty sure it was there. I was coding WinForms apps, and WinForms uses native system widgets stylable via these settings. Now I use Qt to develop cross-platform apps (with KDE5 as the primary environment); it follows the principle in almost every aspect (it doesn't let you change the colors easily, however; you need to design a custom theme for that), and it also encourages attaching a keyboard shortcut to every action and displays these right next to the menu items.
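For what it's worth, here is a minimal sketch (my own toy example, not from any particular app) of the pattern I mean in Qt: the shortcut is attached to the QAction, and Qt displays it next to the menu entry automatically.

    #include <QApplication>
    #include <QMainWindow>
    #include <QMenu>
    #include <QMenuBar>
    #include <QAction>
    #include <QKeySequence>

    int main(int argc, char *argv[])
    {
        QApplication app(argc, argv);
        QMainWindow window;

        QMenu *fileMenu = window.menuBar()->addMenu(QObject::tr("&File"));

        // The shortcut is a property of the action itself, so the menu item,
        // any toolbar button, and the key binding stay in sync, and the
        // shortcut text is rendered next to the menu entry for free.
        QAction *openAction = fileMenu->addAction(QObject::tr("&Open..."));
        openAction->setShortcut(QKeySequence::Open);   // Ctrl+O (Cmd+O on macOS)
        QObject::connect(openAction, &QAction::triggered,
                         [] { /* open a document here */ });

        window.show();
        return app.exec();
    }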
> Nevertheless there was and it looked really crazy.
You're talking about the high-contrast theme. It wasn't anything special, it was just a specific set of settings from this dialog. Compare that to now, when having a dark mode took so much work Microsoft made a special announcement and everything. Ridiculous.
> The last version of Windows I used was Windows 7 and I'm pretty sure it was there.
It's been a while since 7, but if this dialog was there it was because you could still disable compositing and use the old fashioned Win32. The closest thing in 10 lets you change your titlebar color and background.
> This still is the right way to design GUIs for applications to which it is applicable, many people (me included) still follow it and I would warn against thinking of it as of obsolete.
I'm grateful for all those people, yourself included. But the problem is that most software is moving to the Web, and the browser is treated as blank canvas. Everyone reimplements their own UI to their aesthetic taste, with no regard for accessibility and interoperability. It's the other reason (besides security) that Flash was bad; just calling it "DOM" doesn't make it suck any less.
In my experience, today's recipe is "follow the trends of market leaders" (see e.g. everyone jumping on Material Design), and after that perhaps "optimize for conversion". Listening to UI/UX people, I often hear about not confusing the user, not burdening the user, removing obstacles. What I never hear is empowering the user, providing value to the user, enabling the user to fulfill their goals (as opposed to leading them through the path to monetization).
Although you call into question Material Design, I’ll argue that it was a force for good since it added consistency among apps and moved a lot of app-specific functionality within the central left-side menu, making features and settings more discoverable.
I'm not really criticizing Material Design per se here, just people jumping to it (especially on the web) regardless of whether it makes sense for a particular product.
Oh, there's a lot of thought there. Back then, people didn't bother thinking about the interface at all, and just pushed the library-provided widgets into the same places they appeared everywhere else (there were plenty of bad metaphors that became prevalent, and there was plenty of stuff that research showed shouldn't be done, but everybody did it anyway).
Today people are always optimizing (for the showroom), always redesigning. That's part of the problem, not of the solution.
Haiku has the most cozy UI skin I can name (I have also seen some even prettier, but that was something completely exotic and I don't even know what OS it was); another one of this kind was the QNX 6.0 Photon desktop. At the same time, when it comes to icons, I enjoyed Windows 3.11 the most (Windows 98 was not bad either).
You've got it exactly backwards: today's "flat" UIs are actually many layers deep. If you look at the style guide for Android or iOS, they have like 6 different UI depths and thicknesses that can be at play at any time.
What’s refreshing about Be is that it only goes like three deep. Button and not button, and... desktop?
It's because the Be UI is so shallow that it is charming. If you allow 6 different types of every UI thing like Google and Apple do, your UI designer has no chance of getting everything correct. There are just too many combinations of widget states.
Only by limiting UI depth can you get that Nintendo-esque complete feeling, like everything is there for a reason. And Be does.
I don't think it can all be attributed to nostalgia. I have read papers on HCI that show exactly this effect, from too much ornamentation to too little. The big thing is consistency, though. You can apparently adapt to pretty much anything if it's the same everywhere; it's tougher if it's only "similar" everywhere. Or worse, the same but different.
I've got an old Celeron system I was going to throw away but because of this I just installed the 32bit version of Haiku on it and have been enjoying its simple style.
Am I the only one who wants a modern OS designed for single-user use as opposed to one that descends from a long line of time-sharing systems? That isn’t a mobile operating system?
No, you are not. I've been yearning for such an OS for some time now, and have been putting a lot of thought into precisely what it is that is missing from modern computing that makes me feel like a "personal computer" doesn't exist anymore. I think John Ohno's article "Big and small computing" [0] managed to articulate it better than I could. So I've been doing some research and planning, trying to find the shortest path from the garbage pile we have today to something reasonably like what I actually want. Unfortunately there's quite a lot to be done and I currently lack the experience and skill set to do it all, but it's something I really want to do. I wish I could find a community already engaged in the same goal.
You are not alone. I miss this all the time. Nevertheless, I can see at least 3 important ways in which multi-user design is useful:
1. Constraining a guest user so they won't access anything they shouldn't - when a guest comes to you and asks to use your PC and you want to be nice and let them.
2. Constraining resident server programs so you won't get pwned.
3. Constraining nonfree apps (their installers especially) so they won't put/remove/modify anything outside user directory (Windows apps love to do this).
I've been thinking about this a lot lately. I'm becoming a big fan of minimalism and simplicity.
I find it particularly funny that Android is based on an OS designed for thousands of simultaneous users (although they did get some nice security / isolation out of it).
As much as I agree with this, are security and single-user truly mutually exclusive, or is it just that all the single-user operating systems we know of (correct me if I'm wrong) have poor security records?
Absolutely. I'm not saying you can't build it, I'm just saying that P(correct security | correct multi user) > P(correct security) because you already have an isolation mechanism in place.
Not necessarily. You can design your isolation system to be better suited to a single user system, rather than hacking it into a system which was meant to support multiple human users.
Again, not disagreeing. Just saying that in aggregate, and from looking at history, the requirement of isolation in multi user systems trends towards better security.
You can absolutely build a better single user system.
Oh, ransomware only succeeds because of improper user permissions? Really?
Placing the security around users is used by server operating systems to protect shared resources from malicious users. It is completely useless at protecting a user's resources from malicious applications run by that user. The mobile OSs got this right in that they put the permissions on the applications instead.
I have been installing Windows 10 inside some VMs for running Windows applications lately, and that's a pretty ridiculous experience when the host OS is the one really doing all the heavy lifting for multi-user and authentication, and one really just needs a thin Windows runtime for applications inside the VM, not a full-blown "operating system" (an awkward description for Windows 10, considering how much shovelware it comes with).
If you have a permission system and multiple levels of privilege, why not password-protect the different levels? I guess we would also want to name them. In fact, another cool feature would be remote desktop like Windows has. So we should have permissions, in named groups that you have to know the password to access, and I should be able to log in to them remotely. Sounds like a time-sharing system to me.
Applications that run under different user-supplied constraints (CPU, memory, syscalls, file access) don't really feel much like different users.
Things that are separate users by this definition:
- Every VM running under my account.
- Every docker container.
- Every SELinux process/file type combination.
- Every cgroup.
- Each CPU ring.
- Each call to pledge (sketched just after this list).
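For anyone unfamiliar with the last one, here's a minimal illustrative sketch of a pledge call on OpenBSD (the promise strings are just an example I picked, not anything canonical):

    // After this call the process may only use the facilities it promised,
    // regardless of which user account it runs under.
    #include <unistd.h>
    #include <cstdio>
    #include <cstdlib>

    int main()
    {
        // Restrict this process to stdio plus read-only filesystem access.
        if (pledge("stdio rpath", nullptr) == -1) {
            std::perror("pledge");
            std::exit(1);
        }
        std::puts("still allowed: reading files and writing to stdout");
        // From here on, trying to open a socket or write a file kills the process.
        return 0;
    }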
And wanting to be able to decide for myself which of these should not run under my user account, as opposed to letting the OS designer make that decision, is why I can't imagine working on a single-user system.
Call them users or roles, it doesn't matter. It is crucial that privilege separation is baked into the core and customizable by the owner of the device. At the moment I only see a multi-user system capable of doing that.
Haiku doesn’t have much of a technological edge at the moment, given that the current goal is to reproduce BeOS 5 PE to its fullest extent before modernizing.
Where it does have an edge, though, is in how it's being designed first and foremost as a desktop OS, with responsiveness as the top priority above all else. The latter especially is depressingly uncommon in modern operating systems.
As someone who likes responsiveness, I've always wanted BeOS to take off. However, at this time, does OS responsiveness matter?
I'm running three apps right now: VSCode, Slack, and a web browser. Given that VSCode and Slack are basically other browsers, how much does OS responsiveness matter? Has the responsiveness battle just moved to the browser?
How much are people really running other than a web browser on a desktop/laptop now-a-days? When I evaluate an OS today, I'm going to care about how good its trackpad drivers are, whether I'm likely to have wifi issues, whether it can run the *nix tools I need to do my work, maybe whether I like the way it switches between windows/apps, and whether there are modern browsers to run on it. Responsiveness was a big deal back when I was running a Pentium II and BeOS had amazing demos (and I presume real-world usage) with uninterrupted video while multi-tasking and just buttery-smooth context switching in an era where you'd have your system lock-up. Heck, it was many years before OS X got past the beach ball of death happening constantly.
However, today, it's my browser that locks up; it's my browser that I want to be able to smoothly scroll through a web page.
I really want Haiku to succeed, but it feels like we might have moved beyond the OS for so much that the OS can't have the same impact it once had. On the one hand, the web has made it so that alternative OSs don't need a boat load of software to be usable. On the other hand, because the browser is now the OS, the usefulness of an alternative OS is lower.
> How much are people really running other than a web browser on a desktop/laptop now-a-days?
Currently open on my machine:
- Safari
- Firefox (streaming audio)
- A program converting a 500 page PDF to 500 300dpi PNGs
- A bash script extracting PDFs from a municipal web site.
- A program downloading my video viewing for the week and converting it to a format better suited for my iPad
- iTunes serving audio and video to TVs in two different rooms
- An FTP program doing weekly backups
- A mail client
- A VPN client
- A spreadsheet
- An IDE (Coda2)
And if I were to look at my wife's laptop, I can guarantee that she's running more than just a web browser, and she's not technically inclined.
People have been pushing browser-only machines since the days of Novell's "thin client" strategy in the 90's. Netbooks were a fad. Consumption-only gained a small amount of traction recently with Chromebooks, but even the non-technical people in my office are dumping their Chromebooks for real laptops these days because they realize they have needs beyond Google's ecosystem.
Maybe we'll get there some day. But for now, if you want a laptop computer, you have to have an actual laptop computer.
This is one of my pet peeves. Netbooks were great as a small disposable internet console. They weren't just for consumption - the keyboard meant you could use them to compose text. The problem was that people bought them when they wanted a super cheap laptop, and so netbook manufacturers kept making the screens bigger and the processors faster until they stopped being netbooks and turned into crappy cheap laptops.
The new wave of tablets with keyboard covers are basically what netbooks were, but for a good decade there we lost the idea of a small cheap internet console which could be used to compose text.
I've got a Samsung NC10[1] running Haiku beautifully. It's a 32-bit machine with just 2GB of RAM and it's never run so smartly. I'm very impressed by Haiku's stability and consistency. The extended attributes on files are an eye-opener. Your mail client is a settings window, a compose window and a file manager. Just add the attributes you need for email to the list view.
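If you're curious what that looks like in code, here's a rough sketch of writing mail metadata as file attributes with the Storage Kit (the MAIL:* attribute names follow the old BeOS mail convention, so treat them as illustrative):

    #include <Node.h>
    #include <String.h>

    // Tag a file with mail metadata; Tracker can then display and query
    // these attributes like columns in a database.
    status_t tag_as_mail(const char* path)
    {
        BNode node(path);
        status_t err = node.InitCheck();
        if (err != B_OK)
            return err;

        BString subject("Meeting notes");
        BString from("alice@example.com");
        node.WriteAttrString("MAIL:subject", &subject);
        node.WriteAttrString("MAIL:from", &from);
        return B_OK;
    }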
The fact that Google has added support for Android and officially exposed the underlying Linux subsystem proves that, outside the US school system, they are hardly getting any major sales.
Otherwise they wouldn't need to pivot it into a general laptop OS.
It's been 3 years since W10 came out. I recently tested W10 in a VM to see how it's matured. It was unusably slow. Things would take seconds to appear, sometimes dozens of seconds.
Then I tried Windows 7, and I could not believe the difference. We're talking 5-10x faster. Everything just responds instantly. Too bad they're ending support in a year.
People keep bringing this up, but I really don't think I, or anybody, should need an SSD in order to have a fast-acting computer. Spinning rust, RAM, and CPU clocks are faster now than twenty years ago, CPUs have vastly more cache, we have more CPUs, and our compilers and prefetch algorithms are smarter than ever. Why, then, are the same basic tasks not any faster, and sometimes slower? Why is the answer to use even faster technology?
I agree, but what I have noticed from doing some troubleshooting is that applications nowadays (or rather, as always) use up more hardware as time goes by. For example, I verified with perfmon that my IDE accessed over 50,000 files when I clicked on 'Open' and the directory tree popped up. It accessed every file in my Documents and other standard folders. Browsers also use up a lot of disk IO. Just by getting an SSD you speed up your web browsing.
You’re missing the point. With concurrency handled in user code and (I’d add) reduced latency from SSD storage, do you even want the OS to be excessively preemptive? Doing so has large drawbacks in terms of cache performance, and the difference is no longer noticeable.
I'm not missing the point. I want lightning fast response from software to my actions. If there is one thing I cannot stand, it's when the input loop loses events, which happens very often on Windows and my Linux powered TV. I want to throw them both out of the window, it drives me insane.
I miss the instant-on, straight-to-REPL, ~1 second reset features of my old Commodore 128.
And I am not kidding in the least when I say that. Regarding more modern operating systems, BeOS was so nice to use. Here was an OS that had UI/single-user needs baked into it from the beginning, and it was a breath of fresh air. I'd liken it to the experience I had switching from a 60Hz monitor to a 164Hz one - I didn't really think about it much at the time, other than that I wanted a gsync-capable display and one was on sale. But the smoothness of the scrolling, mouse use, etc. was so much more pleasant. And yet I never would have complained about the 60Hz screen until shown the alternative.
Venturing in to personal opinion, I'd also say BeOS was developed at the inflection point between engineering running the show, UI be damned, and where we are now, with designers running the show, actual capability be damned. It was a good mix.
"I miss the instant-on, straight-to-REPL, ~1 second reset features of my old Commodore 128."
As a C=128D owner, I hear you loud and clear. It's a slap to the contemporary programmers' faces that a 1 MHz MOS 6510 based system was more responsive than multicore GHz, hardware accelerated systems we have now.
The one thing that has not changed over the 20 years since BeOS was around is waiting for the computer to respond. We still wait, except now we're often waiting on network rather than OS latency for that button to indent, or do something.
I find it very noticeable, not least as in terms of those basics we seem to be stepping backwards for responsiveness.
Loading software should be quicker, but no one cares about space and size any more, especially as more and more becomes a front for a browser. Even native software is faintly disappointing. You load a game in seconds from the lightning fast SSD then it takes 2 min to load the save as that's bigger than some OS's. :)
> Given that VSCode and Slack are basically other browsers, how much does OS responsiveness matter? Has the responsiveness battle just moved to the browser?
You can't build a responsive browser on top of an unresponsive OS.
> How much are people really running other than a web browser on a desktop/laptop now-a-days?
I’m definitely not a normal user, but I’m running emacs, st, tmux, stumpwm, dunst, gocode, cupsd, sbcl, redshift — oh yes, and Firefox too. Honestly, I’d love to be able to get rid of my browser, because then I could run emacs in a terminal and get rid of X entirely.
> I'm running three apps right now: VSCode, Slack, and a web browser. Given that VSCode and Slack are basically other browsers, how much does OS responsiveness matter? Has the responsiveness battle just moved to the browser?
The thing that always blew me away with BeOS was the time it took to cold boot. From powering on a machine to being ready to work. It was seconds. Still faster than a chromebook.
That sure is something that most OS'es today can't do.
You can configure most Debian variants to boot in seconds from a decent SSD. Ubuntu Server edition was really pushing this for a while, the desktop editions don't seem to prioritise boot times quite so highly.
Hell, even Windows 10 boots from an SSD these days in less time than my BIOS takes to start the Windows boot process...
The thing is though, BeOS in 1999-2000 was booting in 5-10 seconds on (in my case) a 300MHz Pentium II with 64MB of RAM and a PATA/IDE spinning disk. Even a stripped down installation of Windows 98 took several times longer on the same machine. I lived in BeOS for over a year and a half on that machine, only booting Windows for a few games. It was pure heaven.
Desktop boot time mostly doesn't matter, it is an infrequent event -- once a day at most, if the system is shut down at night. Server boot time, on the other hand, does matter. You really don't want a server to be down for longer than it has to. Being able to reboot in 10 seconds is huge compared to rebooting in a couple minutes.
Now, given that servers are fundamentally running the same OS, improvements geared towards that market tend to bleed over to desktop as well. Desktop just has a bit more to bring up, with a graphical environment and everything.
I agree on the responsiveness part and would like to add that the fact that BeOS dropped you to a debugger shell when it crashed is something missing from modern desktop OSs. Just like the Lisp Machine (Genera, at least), introspection is something that’s present in Haiku and BeOS that was cut out of many modern desktop operating systems for “security” and “ease of use”.
In the JavaScript world, debugging is almost a bad word. It's almost impossible to maintain a sane debuggable trace through a "modern" JavaScript application. I find I have to ignore basically all of the standard frameworks and transpilation pipelines and just write old-school PHP-like code in Node.
I say all of this because I feel that my community has lost control of the debugging pipeline and I applaud Be and Haiku for making that a priority.
I find the benefits of committing to an always-running, always-traceable-in-production execution thread is an EXTRAORDINARY productivity gain (vs console.logs and connecting to IDEs manually via various witchcraft)
> In the JavaScript world, debugging is almost a bad word. It's almost impossible to maintain a sane debuggable trace through a "modern" JavaScript application.
I felt this last week as I tried Webpack 4 on a new project. The source maps weren't correct, and instead of debugging my app I had to debug multiple Webpack plugins and work out which one was breaking the source maps.
Never did find out, but I fixed my app by debugging it mentally. It sure made me wish for a developer-first language.
> The fact that BeOS dropped you to a debugger shell when it crashed is something missing from modern desktop OSs.
Windows has that - if you have e.g. Visual Studio installed (or any other app that registers itself as a system-wide debugger), then application crash dialog will immediately offer to attach the debugger at the point of the crash.
A GUI feature I read about in 5: menus were going to have a colorized band that followed through all the menus and submenus. It seemed like a feature that would really help with complex software, but of course never saw the light of day. To this day, drilling down to deep submenus quickly becomes user abuse. It wastes a lot of time to have to drill down to something only to have all the menus disappear because the mouse tracked off the submenu accidentally.
It was also just cool to have a GUI mounted on a Unix like os - which wasn’t Linux.
Nowadays though I’d like to see some Plan9-isms adopted instead of cloud stuff. I think people would benefit from being able to make their own user experience that was kept more or less unchanged and the same files and preferences available on all of their devices. I’d also like to be able to “mount” processor/memory hardware from networked machines and delegate processing to them and output results to the local GUI. Like tunneling X but without actually needing to pipe the whole user stack - just the compute input/output.
BeOS offered the same FIFO scheduling mechanic that allows Linux to support soft real-time.
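For reference, a minimal sketch of how a Linux process opts into that FIFO policy (needs root or CAP_SYS_NICE; the priority value here is arbitrary):

    #include <sched.h>
    #include <cstdio>

    int main()
    {
        sched_param param{};
        param.sched_priority = 50;   // arbitrary mid-range real-time priority

        // SCHED_FIFO: run until we block or yield, or until a higher-priority
        // real-time thread preempts us -- the "soft real-time" behaviour
        // mentioned above.
        if (sched_setscheduler(0 /* this process */, SCHED_FIFO, &param) == -1) {
            std::perror("sched_setscheduler");
            return 1;
        }
        std::puts("now running under SCHED_FIFO");
        return 0;
    }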
RTLinux was a hard real-time OS which runs Linux as a lower priority service you can pre-empt with your actual hard real-time stuff. Similar systems still exist today. They work fine, if, in fact, what you needed actually was "hard real-time".
If your idea of the potential consequences when your system misses a deadline is more like "Some users are annoyed and demand refunds" rather than "I might go to jail for manslaughter" or "The $40M space probe turns into space garbage" then "hard real-time" probably wasn't what you needed after all.
My point was that (in my experience as an embedded engineer for high-availability systems) NewOS has much smaller windows between kernel preemption points, particularly in comparison to Linux (even RT) and NT.
Is Haiku only soft real time? Yes, and I never said anything different.
Does it do a markedly better job at making real-time commitments than NT or even RT-Linux? Absolutely, mainly due to how much simpler a kernel it is.
Also, I'm going to throw out there that if you're going to get uppity about hard real-time guarantees and bring up concepts like manslaughter charges out of nowhere, I hope that you're only choosing seL4, as last time I checked it was the only kernel that backed up its guarantees in a way that can be checked.
Other contemporary multi-processor operating systems were also easily capable of displaying a handful of postage stamp (QVGA or smaller IIRC) videos. This was a demo other people weren't doing mostly not because it was hard to achieve but because it was pointless, what use did you have for showing several videos simultaneously?
They chose to show several postage stamp videos because they only had software decoding. At the time the way most competitors handled video on consumer hardware was to decode to YUV and then overlay and scale that with dedicated hardware, and on most systems you could only play one video this way. BeOS would be much worse at that with a software decoder, by showing many videos they could "make a virtue of a necessity" as they say.
The best thing in Haiku is probably their vector icon format, which offers a good mix of basic vector features for small pictures in a compact binary format. It's mentioned briefly in that article.
I don't think that's entirely fair. Back then I used to be quite a whore when it came to operating systems. I had Windows 2000, Windows 95 (for the rare game that wouldn't run on 2k), BeOS, Linux and Mac OS 9. BeOS was by far the snappiest (actually it was 95, but that was 5 years old by that point, only ran DOS games, and didn't take advantage of the SMP box I was running).
BeOS ran circles around everything else - particularly when multitasking - and I did use each platform heavily.
I had a similar experience. Windows 2000, ME, 98 and 95, SuSE Linux, BeOS. BeOS was way better at multitasking, video playback, etc. I really lamented its fall.
In my experience BeOS had much better single, full-resolution video playback or editing experience as well. Editing DV footage on BeOS had a no-stutter experience which took MacOS and Windows another decade to deliver, especially since it was rock-solid under multitasking – not just not needing to close your email client or browser but even things like compiling Mozilla from source didn’t cause a dropped frame.
The "why try" was not meant that other OS can't do it. Just that users don't have the need for that.
Showing that BeOS can do it, does not mean other OS can't do it too.
Sometimes I feel bad for OSes with smaller user bases like these, FreeBSD, etc. There are so many important things in an OS that even if it really excels at a few of them, it doesn't really count. The work that goes into creating these is phenomenal, yet I cannot replace my Mac. I have been paying at least $300 extra for a laptop and making so many compromises on recent hardware.
Eh, I've been running FreeBSD on my primary desktop for about a decade now (and have been running it on every server I can get my grubby hands on). It's equally painful for me to use any other OS because of similar workflow issues. We've all got a wide variety of tasks to complete and a good handful of specialized tools to complete them with -- use whatever works best. If you're doing server work, I recommend giving FreeBSD a trial run!
BeOS was by far my favourite operating system of that era. Fantastic platform that really put the competition to shame. Fast, stable, pretty and powerful.
Same here. I still remember when I first booted it up and ran a demo app (video playing on all sides of a rotating cube), how much it put OS 9 to shame in terms of responsiveness and performance. I’m still pissed that Apple went with NeXTSTEP instead of BeOS for OS X, but I’m glad they went with something.
I realize this may be a corner case and slightly off-topic, but it would be really nice to be able to boot Haiku via PXE boot in our network infrastructure. This would make it so much easier for people to play with it casually. It seems[0] that there has been some effort to support this, but last time I tried (about 3 months ago), I couldn't get it to boot with a couple of hours of effort (I know, may very well be user error ;-) ).
If anyone from Haiku is reading, it would be really cool if you also offered a network-bootable image by default that can successfully boot from a 'standard' PXE boot server setup.
The PXE bootloader was broken following the merge of the package manager (which as this article implies, required changes to the boot process.) Nobody has taken the time to fix it again since then... help wanted, I guess.
I only read about BeOS in exuberant PC World articles in the early '00s, but I got the impression it was really, really good: pre-emptive multi-tasking and all. Now this blog post says that Haiku has device servers. That sounds like the 'fast flyweight delegation' technique described here: https://blog.acolyer.org/2017/12/04/ffwd-delegation-is-much-... ... and also used by the Erlang VM.
Zeta was based on binaries of BeOS Dano (what was going to be the next version of BeOS before Be went under). It's unclear whether the company ever had the source code.
They absolutely did. What came out towards the end was that they didn't really have a license for it; it's likely they had based their work on the BeOS source code leak that occurred in the last few weeks of Be Inc.'s existence.
Haiku is just so cool (although it could probably be made even better, UI included; e.g. I really love the modern Win7/macOS/KDE5/Unity desktops' idea of using one icon for both launching and minimizing/restoring/switching an app). I wish it could get more popular and be developed more actively. I'd also love to have Haiku-like window management (and widget toolkit skin if possible, but that's less important of course) on Linux (in KDE preferably).
Honestly, the sole BeOS concept I found really nice (which also exists to some extent in NeXT) is the "fs database": the ability to "query" storage by file type etc. instead of only accessing a classic files/directories tree (sketched below).
On package management, IMO Nix and its derivative Guix are the real revolution...
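For anyone who hasn't seen it, the query idea above maps to the BQuery API on Haiku/BeOS; here's a minimal sketch under the assumption that you're searching the boot volume, with the MIME-type predicate chosen purely as an example:

    // Minimal sketch of a BFS attribute query: find everything whose MIME
    // type says it is an MP3, regardless of where it lives on disk.
    #include <Query.h>
    #include <Volume.h>
    #include <VolumeRoster.h>
    #include <Entry.h>
    #include <Path.h>
    #include <stdio.h>

    int main()
    {
        BVolumeRoster roster;
        BVolume bootVolume;
        roster.GetBootVolume(&bootVolume);

        BQuery query;
        query.SetVolume(&bootVolume);
        query.SetPredicate("BEOS:TYPE == \"audio/mpeg\"");  // example predicate
        query.Fetch();

        // Walk the results; no directory traversal involved at all.
        BEntry entry;
        while (query.GetNextEntry(&entry) == B_OK) {
            BPath path;
            entry.GetPath(&path);
            printf("%s\n", path.Path());
        }
        return 0;
    }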
Some of this stuff sounds vaguely reminiscent of Nix. Which is unfortunate. I'd rather see a port of Nix(pkgs) to Haiku, so Haiku is able to continue their wonderful unique innovations rather than reinvent the (albeit rarer and higher-quality) wheel.
Linux could kill Windows and Microsoft if it just had an improved UI. Currently it looks like a mashup between Windows and Mac and it's just not working. I've noticed improvements over the years getting closer to Apple's designs, but it's not quite there, and it's unfortunate how much that last mile matters. Once that last mile is covered I think we'll see Linux becoming more and more popular as a standard OS.
If LibreOffice were to update their UI I think it would make a killer combination.
I think more than a "prettier UI" is necessary. I switched from Windows to Linux a few months ago, and while I appreciate being able to use more developer tools and not having annoying ads in my OS, I feel like Linux is just more unstable. My wifi/bluetooth randomly stop working, forcing me to restart my computer. That's unacceptable.
Unfortunately there's not much the Linux community can do about bad drivers. (I assume you have some Broadcom chip.) The only people really equipped to write the drivers are the vendors. The documentation and the firmware running on the chips are not published, and you can't realistically email their engineers. So Linux devs are basically SOL. Hats off to the legends who sit down and reverse-engineer what's needed anyway.
In general, vendors only have financial incentive to get windows drivers working well.
It's the self defeating prophecy of the Linux desktop.
Apple spins their own hardware and I'm sure gets proper documentation from the vendors they select.
Yes there is. Windows can restart its graphics drivers if they crash, for example. And it has a stable driver ABI, so companies can actually release closed-source or out-of-tree drivers in a sane way.
Linux just assumes that all drivers must be perfect with no bugs, and must be open source and distributed with Linux itself. That's just not realistic.
Upgrading Windows from 7 to 10 caused a lot of older devices to stop working for a lot of people, despite the stable ABI; in most cases it was just driver unavailability and not incompatibility, but that's a distinction without a difference for most users.
Linux just keeps on working with old devices across distributions and updates.
Linux takes 6-24 months until vendor-unsupported drivers stabilize - but at that point, it works way better than Windows (and basically forever) in my experience.
I am sure they are, as was the i386. And yet, from the experience of ~20 people, none had hardware obsoleted by Ubuntu, even very old hardware - though some had hardware obsoleted by Win10 (which was in some cases the trigger to try Ubuntu).
Windows 8 was a train wreck, but GNOME still has the full-screen start menu that everyone hated in Windows 8. At least the search functionality works well in the GNOME menu; the MS one hasn't worked well since Windows 7.
I recently heard a friend say that Windows 8 was her favorite version because it was so simple. I thought that was really interesting.
I never really got to experience it the way it was intended, because upon first booting it I became horrified that everything had changed, and installed a bunch of tweaks to make it behave like Windows 7.
Conversely, KDE has always offered what is essentially WinXP++ UX (which is exactly what so many people say they want). And there are distros that use that as the default DE, too.
I used to think this too. But I’ve come to wonder. My more recent hypothesis is that it’s not actually possible to do so. My reasoning is basically economics. Free software developers improve and maintain the UI experience that best fits their needs. If there’s shared ground between what the casual user wants and what the developer wants, they’ll even often err on the side of the casual user. The developer community will even reach into casual user space somewhat because it indirectly responds to the recognition and feedback loop that a wider user base is good for the effort in general, helping to secure other forms of acceptance and therefore employment opportunities.
But that’s all only up to a point. There are concessions one starts making as you woo the business/casual user more and more. Since there is no direct financial compensation to offset the increased sacrifice, an equilibrium arises. An equilibrium of good enough for general consumption, enough of a concession for the developer. Unless something changes the balancing point of that equilibrium, then it’s likely to stay about there.
It’s like a neighborhood handyman/craftsman who first fixed up his own house and then, because it gives him purpose, makes him feel good, and contributes to the general betterment of the neighborhood, begins doing the odd project throughout the community. The work is quality AND it’s free(ish). As the community embraces the handyman’s efforts more and more, he begins having to do more and more things that don’t fit his vision or lifestyle. He has two choices: start charging or stick to the projects he likes to do. If he starts charging, though, he becomes dependent on trading concessions for income. The relationship fundamentally alters.
People are more willing to try new things and make sacrifices for short term savings. When linux is free and windows is $100, something on par is good enough to make them switch. If linux wasn't free or if windows was free then I would totally agree with your statement.
OpenJDK 1.8 is (finally) back, and with Swing support even (though only on 32-bit; it still needs to be bootstrapped on 64-bit).
WebPositive works "acceptably", Otter/QupZilla are significantly better. All can at least play YouTube, etc.
We don't have standby support at all. :/ Most of the device hooks are there, and we use ACPICA for ACPI support, so it's just a matter of wiring all the right things up and adding some buttons... but we are pretty starved for time as-is, so nobody is working on it at present.
Looking at the screenshots, I really still like the look of the windows and icons, but the system font is really holding it back. I didn't realize how bad it looked compared to modern system fonts. I get the feeling a font switch would really improve the look.
Perhaps because there's no subpixel hinting, I guess?
Haiku does have it (it was hard-disabled for a long time due to patent concerns), but there's a bug relating to how the main text control draws strings that causes characters to overlap one another with it enabled, and so things generally look worse when it is. Someone should take the time to fix that...
Those screenshots are DejaVu Sans, not Noto Sans, and with a different font rendering engine than the ones in this article. We should probably update those, though.
> One of the coolest areas of package management is that you can go back in time and boot up into a previous state of the system, all thanks to the new packaging system. To do this, simply open the boot menu, choose the boot volume, and select Latest state or a nicely time-stamped ‘version’. Very cool.
New versions of programs may modify the user's config files in a way that old versions can no longer read, or may even corrupt them. How is this handled, if at all?
What config file format are you using where an older application would corrupt a newer config file? The only one I can think of would be raw memory structs written to disk, which is nasty and ridiculously unportable even across 32-bit vs. 64-bit, let alone other architectures.
On Haiku, config files are generally either plaintext (kernel/drivers/a few core apps, so they are easily editable) or archived-BMessage format. The former are almost always read and not written, and the latter are key/value based so at worst the application would just not know about the new keys, and possibly remove them, which isn't the end of the world.
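For the curious, that archived-BMessage pattern looks roughly like this; a minimal sketch where the settings path, the 'pref' what-code, and the key names are all made up for illustration:

    // Rough sketch of the flattened-BMessage settings pattern described above.
    #include <Message.h>
    #include <File.h>

    static const char* kSettingsPath =
        "/boot/home/config/settings/MyApp_settings";  // hypothetical path

    status_t SaveSettings()
    {
        BMessage settings('pref');
        settings.AddString("last_folder", "/boot/home");  // example keys
        settings.AddBool("show_toolbar", true);

        BFile file(kSettingsPath, B_WRITE_ONLY | B_CREATE_FILE | B_ERASE_FILE);
        if (file.InitCheck() != B_OK)
            return file.InitCheck();
        return settings.Flatten(&file);
    }

    status_t LoadSettings(BMessage& settings)
    {
        BFile file(kSettingsPath, B_READ_ONLY);
        if (file.InitCheck() != B_OK)
            return file.InitCheck();
        // Keys added by a newer version are simply ignored by an older one,
        // which is why downgrades are mostly harmless with this format.
        return settings.Unflatten(&file);
    }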
Configuration changes depend on the developers. For example, I used to use Evolution on Linux / Gnome as my email client. Their attitude to compatibility is terrible. As in, they just don't care at all.
If you allowed it to upgrade your Evolution data store there was no going back.
In the BeOS case it would be as if the new version migrated to different configuration keys, and then the old version detected corruption, threw a fit, and erased ALL the keys.