This library owns. If you want to impress users with native looks, it's the wrong move. If you want to get shit done, send it!
I am using EGUI for visualizing electron wave functions, configuring and viewing status of UAV flight controllers and peripherals, and as an interface for locating nearby RF devices.
I will say the main negatives are that the `egui`/`eframe` etc. split is confusing, and that the API changes rapidly in breaking ways. Part of that comes from companion libs like walkers (for maps) making their own changes while trying to keep up.
I disagree. I kinda hate egui as it's so obvious every time an app is using it (ROG control center on Linux, for instance). I would much prefer a multiplatform native toolkit as a first-class citizen in Rust, versus all the ugly homegrown widget sets they use (something imgui, nuklear and others already do better).
That being said, that's an aesthetic choice; and apps provided in it are better than nothing.
I always find at least one person making this or a similar comment in threads on non-native GUI libraries.
I also respect the fact that this is just a statement of a personal preference.
However I don't think most people care.
Good applications with thoughtful UX and useful functionality will succeed whether they choose native widgets or not.
Literally millions of people use applications like VSCode, Blender, OBS Studio. I bet the vast majority of them don't think what this here application needs is more native widgets.
Heck, no one cares that Figma is a web app. From what I can see it beat out all similar desktop native competitors.
I think everyone cares, whether they know it or not. Which sounds like an oxymoron. But my point is that inconsistent UX might not be glaringly noticeable, but it can still give a feeling of messiness and add to cognitive load.
"Native widgets" is just a way to achieve "consistent widgets". A good tradeoff IMO is to at handle as many things with native UX (e.g. window decorations, menus, etc.), as possible without compromising on function.
To avoid the latter, custom GUI is arguably necessary, as is the case with Blender, while OBS arguably doesn't need its own handling of the menu / window decoration.
These things matter very much to some people, as consistency affects how well certain accessibility tooling can understand the contents of applications. UI customization also matters for similar reasons. Certain things like scaling or theming, might rely on native GUI libraries.
To summarize, if the discussion is what is "best", and you don't have a functional reason to justify custom UX, and you disregard development cost, then the best solution is a consistent UX, i.e. native widgets.
The default Win32 C++ UIs are all form-based (the stuff that requires passing around an HWND handle). The new web-style responsive "flat" forms are all completely new widgets and APIs, or at least they use completely new code for painting.
> I would much prefer a multiplatform native toolkit as a first class citizen in Rust, versus all the ugly homegrown widget sets they use
That’s a design problem, not an egui one. The reason UI went to shit is that there's no margin for creativity anymore: same design, same icons, same charts, same shit. I bet everyone in here can tell a site is using Bootstrap CSS within the first 3 seconds.
I think we have the opposite problem: too much innovation in UI elements that don’t need it or sacrificing usability for aesthetics. I get the desire for uniqueness, but common UI elements have a huge positive impact on usability.
Consequently, I wish more sites used Bootstrap. As an end user I don’t care a whole lot about a brand’s identity or style. I don’t want to have to guess and check to figure out what’s a tab or a menu or how to access settings. The number of custom layouts I’ve had to memorize is staggering and often gets invalidated when someone gets promoted and decides the site or app needs a new design.
Desktop window frameworks may have been bland, but they were based on actual usability studies and I think we gave that up too readily.
> I would much prefer a multiplatform native toolkit as a first class citizen in Rust
It's extremely hard to make a proper cross-platform kit. Because every platform has a gazillion little (and not-so-little) conventions about what's expected, behaviours, and even control placements. Even Qt struggles there and they have been doing it for over 30 years.
> Even Qt struggles there and they have been doing it for over 30 years
Furthermore, even they seem to have thrown in the towel and made QtQuick (GPU-rendered, non-native-looking widgets) an alternative to the traditional QtWidgets.
WxWidgets does just that, but it exposes clunky, MFC-style C++ for its API. Bindings to other languages do exist though, so a Rust native binding should be possible too.
WxWidgets has all the same issues: it's almost but not exactly native-looking and native-behaving on all platforms. Though I'll admit I haven't seen many wxWidgets apps in the wild lately.
I'm the opposite. I prefer apps where the developer has complete control over the design and functionality of the app, rather than trying to conform to the least common denominator of the various native toolkits.
I agree the distinctiveness of the default style is definitely a problem, but on the other hand at least the default style is pretty good. Could be worse: something like Motif, Tk or FLTK.
It would definitely benefit from a more "neutral" style though, or even better a few style choices. GTK2 used to have a ton of actually attractive styles available back in the day.
Sure, but it has two major ones; so target one of those. Don't introduce a new, half-as-functional one.
99% of the users of said platform have both installed on their machine and would prefer either over the janky non-native ones in electron and egui apps.
egui author here! Feel free to shoot me any questions. You can try egui in your browser at https://www.egui.rs/
My company Rerun.io is entirely built on egui so that we get the same high-performance UI on the web as on native. Give it a spin at https://app.rerun.io/
I have an app where some tasks can take a long time (like a minute in some cases), so I want the GUI to continue responding, and also run my long task on a background thread, then update the GUI when it is finished.
Are there any good examples of this? The examples (not unreasonably) mostly seem to do fairly trivial work.
That’s the key; the thread showing the UI mustn’t do any real work. Instead do the work on another thread and then communicate the status back to the UI thread. Exactly how you do that is up to you; there are plenty of different ways to do it that depend on other choices you’ve made, such as whether you’re using async or not, what async runtime you use, etc, etc.
The very simplest way to do it might be for the UI thread to send the work thread an Arc<AtomicBool>. The work thread can set the bool to true when the work is finished. The UI thread can branch on the value of the bool each frame to decide what to draw. Job done.
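For what it's worth, here is a minimal eframe sketch of that pattern, using an mpsc channel instead of an AtomicBool so the result itself can travel back to the UI thread. The struct and function names are made up; `CentralPanel`, `spinner`, and `Context::request_repaint` are real egui/eframe calls.

```rust
use std::sync::mpsc::{channel, Receiver};
use std::thread;

struct MyApp {
    result_rx: Option<Receiver<String>>, // a job is running while this is Some
    result: Option<String>,              // the finished result, once it arrives
}

impl eframe::App for MyApp {
    fn update(&mut self, ctx: &egui::Context, _frame: &mut eframe::Frame) {
        // Non-blocking poll each frame: has the background job finished?
        let finished = self.result_rx.as_ref().and_then(|rx| rx.try_recv().ok());
        if let Some(msg) = finished {
            self.result = Some(msg);
            self.result_rx = None;
        }

        egui::CentralPanel::default().show(ctx, |ui| {
            if ui.button("Start long task").clicked() && self.result_rx.is_none() {
                let (tx, rx) = channel();
                self.result_rx = Some(rx);
                let ctx = ctx.clone(); // egui::Context is cheap to clone and thread-safe
                thread::spawn(move || {
                    let output = do_expensive_work(); // the minute-long job
                    let _ = tx.send(output);
                    ctx.request_repaint(); // wake the UI so it notices the result promptly
                });
            }
            match &self.result {
                Some(r) => { ui.label(r.as_str()); }
                None if self.result_rx.is_some() => { ui.spinner(); }
                None => { ui.label("Idle"); }
            }
        });
    }
}

// Stand-in for the actual long-running work.
fn do_expensive_work() -> String {
    std::thread::sleep(std::time::Duration::from_secs(60));
    "done".to_owned()
}
```

The `request_repaint` call matters because, by default, eframe only repaints on input, so without it the finished result might not show up until you next move the mouse.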
Do you have a plan for improving the font rendering to have sub-pixel aliasing on the web? I saw some thoughts on that in some github issues, but I was wondering if there was a general plan.
I don't have a plan for that right now. I'm not even sure a web page can know what the RGB pattern of the display is so it can do proper sub-pixel anti-aliasing.
I'm sort of hoping high-dpi screens will make this a moot point very soon :)
It's not what I wanted to read, but thanks for the honest answer! I realize creating a new toolkit is a huge endeavor, and I hope you'll reconsider at some point :-)
I've just started using this to build a better shell for an emulator project! Of all the GUI toolkits I've tried to learn in Rust, egui is just... *delightfully* simple to set up and get started with. No weird lifetimes to wrangle, the context management is sane, and the whole thing is cooperating very nicely with my threaded environment and application-specific event pump. 10/10, do recommend highly.
What's super cool for an open-source GUI framework is that egui integrates with AccessKit, an accessibility framework, so you can actually make accessible UIs with egui, which is not that common among non-mainstream UI frameworks.
Hi, lead developer of AccessKit here. I appreciate Emil's openness to integrating AccessKit, and to accessibility in general. Before AccessKit, egui had a partial accessibility solution based on direct text-to-speech output. Also, support for moving keyboard focus with Tab and Shift+Tab was already there. These things made it easier to implement proper platform accessibility support using AccessKit. IMO it's usable today on Windows and macOS, and we're working on Linux (or more generally, free desktop) support.
Edit to add: At the risk of self-promoting a bit more, I gave a talk about AccessKit at RustConf last September, and I used egui in the first demo. https://www.youtube.com/watch?v=LRBKb6McgqA
How does AccessKit actually work? For example, I opened https://www.egui.rs/#demo and no parts of the interface were available via VoiceOver in the browser.
Sorry, it's not implemented in the web version of eframe yet, only native. We started working on a web backend for AccessKit, but there are higher priorities. The ability to make a fully canvas-rendered web app practically accessible is limited anyway.
That's certainly an interesting challenge. But a web app which is not accessible is a show-stopper in general so it will be hard to use this for anything serious.
Is it an ongoing effort? I couldn't find a lot of pointers in the repo on how one could try to contribute to move this along.
egui's immediate mode works great with Rust's ownership and borrowing, unlike traditional stateful UI frameworks that can have arbitrary references between widgets and events.
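To make that concrete, here is a tiny sketch (the struct and its fields are made up; the widget calls are plain egui). The whole UI is rebuilt each frame from ordinary fields, all borrows end within the frame, and events are handled inline instead of through stored callbacks.

```rust
struct AppState {
    name: String,
    volume: f32,
}

fn show(state: &mut AppState, ui: &mut egui::Ui) {
    // Each widget borrows the field it edits only for the duration of the call.
    ui.text_edit_singleline(&mut state.name);
    ui.add(egui::Slider::new(&mut state.volume, 0.0..=1.0));

    // "Events" are just return values checked inline; no callbacks holding references.
    if ui.button("Reset volume").clicked() {
        state.volume = 0.5;
    }
}
```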
It works very well in games, where it allows creation of complex completely custom UIs, with GPU rendering.
However, egui is an odd choice for desktop applications. It's completely non-native. Even "windows" in egui are custom widgets confined to its canvas, not real OS windows.
> However, egui is an odd choice for desktop applications. It's completely non-native.
I happened to immediately (pun intended) stumble on an example of this. Just by chance, I went straight to test the ComboBox (https://www.egui.rs/), and I tend not to just click to make the drop-down appear; instead, I click and hold, and only release after selecting the desired choice.
This doesn't work at all on egui: the ComboBox doesn't show its choices until the mouse button is released, not upon the initial mousedown press.
I guess there must be a thousand cuts like this, which is understandable, given that all functionality is implemented from scratch trying to imitate the native widgets we all know and use since forever.
It's also difficult because what is the native behavior here? On Windows the different official Microsoft UI toolkits behave differently. For example the latest Windows 11 UI stuff works like egui in that it opens the dropdown on release, while older GDI stuff opens the dropdown on press, but won't let you make a choice until you release.
After reading this I tried click-and-hold on a folder in the bookmark bar in Chrome. Lo and behold, even Chrome doesn't do what you want. I completely agree about a thousand cuts, and I'm often the kind of wild-eyed idealist that wishes that everything would conform to some universal set of gui standards.
I’m very sorry, but “native looking ui components” is a thing of the past.
I might put some effort in if developing on Mac (but even Apple just does whatever), but on Windows and Linux, the components are a free for all. If even Microsoft won’t respect their own UI, why should I?
I’m not advocating for breaking usability in your apps, just saying that we are well past the point of ruling out toolkits like this for desktop.
Which is a shame, because for the user, those widgets work infinitely better than the roughshod ones thrown together by these all-in-one libraries.
If I have no choice, I'll use one of these apps. But if there's an alternative with a native UI, I'll switch in a heartbeat. Basically, the same as I feel about most electron apps.
For which user? I prefer non-native apps. It makes me feel more immersed in the tool, and gives the developer complete control over their design, whether by choosing a GUI toolkit they like, or rolling their own from scratch to perfectly fit the domain.
I don't like it when developers break from the conventions and theming of my operating system because they feel entitled to. Sure, very sophisticated applications that have niche needs like Blender, Godot, Photoshop, etc. may have a justification to do so. But most apps are not those and I expect them to be consistent with the rest of the software on my computer. It's very difficult to retrain nontechnical people to use apps when they break from the conventions and styles of the software they are familiar with.
The only justification that's needed is simply, because that's what they wanted to make. You can incentivize them to make something you want with money, but if they don't choose to they're not unjustified.
I’d be inclined to agree, but I would tend to be critiquing what GP actually said, which is that the GUI is totally ‘non-native’. I think you were in part correct to read the ‘native-looking UI’ portion into the comment though; it seems to be a common refrain that GUIs need to conform to the OS’s chosen aesthetic and visual formatting. However, a very real complaint echoed in a sibling comment is the failure of non-native GUIs to respect or implement the OS’s operating behaviors. I have no issue with visual differences between applications, but I want clicking and keyboard behavior to remain consistent across my apps.
As the sibling implies, it’s a thousand small differences that build up, creating friction that fosters the push for native GUIs as the preferred, if not nearly mandatory, solution.
> You can also call the layout code twice (once to get the size, once to do the interaction), but that is not only more expensive, it's also complex to implement, and in some cases twice is not enough. egui never does this.
I've found multi-pass imgui to work totally fine, and I use it for one of my apps [1]. I can support (nested) hstack and vstack layouts which IIRC egui can't. There is added expense of calling the "draw" code again, but it's negligible in my profiles (doing the actual layout calculations is more expensive, so I only invalidate the cached layout when the data model changes). It wasn't particularly complex to implement: each ui function simply does different things if you are doing a layout pass vs a draw pass.
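Roughly what that looks like (a toy sketch, not egui and not necessarily how the app above does it): the same UI function runs twice per invalidation, branching on whether it is measuring or drawing.

```rust
#[derive(Clone, Copy)]
enum Pass {
    Measure, // first pass: record sizes so containers know how big they are
    Draw,    // second pass: actually emit draw commands at final positions
}

struct Ui {
    pass: Pass,
    cursor_y: f32,
    heights: Vec<f32>, // sizes collected during the measure pass
}

impl Ui {
    fn label(&mut self, text: &str) {
        let height = 16.0; // stand-in for real text metrics
        match self.pass {
            Pass::Measure => self.heights.push(height),
            Pass::Draw => {
                println!("draw {text:?} at y = {}", self.cursor_y); // real renderer goes here
                self.cursor_y += height;
            }
        }
    }
}

fn build_ui(ui: &mut Ui) {
    ui.label("Hello");
    ui.label("World");
}

fn main() {
    let mut ui = Ui { pass: Pass::Measure, cursor_y: 0.0, heights: Vec::new() };
    build_ui(&mut ui); // pass 1: sizes only
    let total: f32 = ui.heights.iter().sum();
    println!("total height: {total}");
    ui.pass = Pass::Draw;
    build_ui(&mut ui); // pass 2: draw, now that the total size is known
}
```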
I used egui for a personal project and would gladly recommend it. It's simple to use and really responsive. For me it was the first gui library that was a pleasure to work with.
You don't have the native look, but for Rust developers who don't have that requirement, it's definitely worth a try.
The curious thing about "native" GUIs, nowadays, is that most people spend their time using "desktop" apps that are just browsers in disguise, and don't really bother with looking like any platform's native UI.
I often wonder how much effort it would take to take one of these popular egui-style frameworks and, rather than making it look "like a native app", get away with making it look "like a browser". (Everyone styles buttons/inputs/etc., but they have a general "default" feel that people are probably used to at this point?)
I also wonder if it's not mostly developers who care about that; as long as the software is well made and fast, I think users will use it without thinking twice.
For example, Blender's GUI has nothing native, yet it's massively used; some people complain about the complexity of the UI, but rarely if ever about the non-native look.
Yep, as a former animator-in-training and heavy Blender user, the fact that the UI didn't match the rest of the OS was an afterthought, at best. You might even argue that having the exact same UI look on multiple OSes was a plus.
> The curious thing about "native" GUIs, nowadays, is that most people spend their time using "desktop" apps that are just browsers in disguise,
I really feel like I live in some alternate universe sometimes. Most people around me use very classic desktop apps for their day-to-day work: Blender, KiCad, Adobe Illustrator, Qt Creator, Telegram Desktop, Krita, Ableton Live, LibreOffice... none of these are browsers in disguise.
I remember using both Swing and Tk apps on Windows that attempted to replicate a Windows look-and-feel and did not quite succeed, and the uncanny valley effect was strong with those; similarly with Qt apps attempting to emulate the active Gtk+ 2 theme. So being close to native might even be a bit worse than nothing like native.
(Not that it’s impossible to do a near-perfect emulation—IE 5&6, VB 6, and Office 97 all use completely custom widget toolkits, and apart from the funky menus and common file dialogs in Office people rarely complained about mismatches with the platform.)
Egui is reasonably easy to work with, but the default look and feel... reminds me of Windows 3.11 somehow?
It feels dated, and I'm not sure if that's a reflection on me or the toolkit, but it makes me want to reach for something that has better default aesthetics. I'm not even sure if it's possible to fix in Egui or not; if it is, I didn't figure out how in the time I spent on the site.
Styling is done with `ctx.set_style`, but creating a nice style isn't very easy at the moment (basically you'll have to tweak constants in code, and then recompile). I'm working on making it easier as we speak though!
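For anyone curious, a minimal sketch of that route (the exact field names in `Style`/`Visuals` shift between egui versions, so treat these as illustrative):

```rust
fn apply_custom_style(ctx: &egui::Context) {
    // Start from the current style, tweak a few things, and push it back.
    let mut style = (*ctx.style()).clone();
    style.spacing.item_spacing = egui::vec2(10.0, 6.0);
    style.visuals = egui::Visuals::light();
    style.visuals.override_text_color = Some(egui::Color32::from_rgb(40, 40, 40));
    ctx.set_style(style);
}
```

Call it once at startup (or whenever the user picks a theme); the style persists across frames.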
app.rerun.io definitely looks a great deal better. Knowing that's possible is good motivation.
Making it easier to create styles is definitely a good idea, but I wonder if it wouldn't be enough for many people to have a library of styles they can choose from? In my own case, I think that would be plenty... honestly I just need the one. That one. ;-)
I used egui + wasm to whip together a really quick GUI to help me learn to play chords on the guitar, and would do it again in a heartbeat, it's just so easy to reason about, debug, and use.
I wouldn't use it for more serious apps, as I think the immediate-mode paradigm makes for battery-killer websites on laptops and mobile: unless you are really careful about not making the app re-render, it'll be running your loop at 60fps, which isn't great when you leave the site open on your phone or laptop.
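To be fair, eframe is reactive by default (it only repaints on input or on an explicit request), so the drain mostly comes from code that asks for a repaint every frame. Something like this keeps, say, a clock widget from redrawing continuously (assuming a reasonably recent egui, where `request_repaint_after` exists; the function and its contents are just an illustration):

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

fn show_clock(ctx: &egui::Context, ui: &mut egui::Ui) {
    let secs = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs();
    ui.label(format!("Seconds since the epoch: {secs}"));

    // Bad: ctx.request_repaint();                      // redraws every frame, even when idle
    ctx.request_repaint_after(Duration::from_secs(1));  // wake up once a second instead
}
```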
Edit to plug https://github.com/emilk/eframe_template from the same author: it gets you a rust/wasm webapp hosted in github pages in minutes after cloning + changing a few things, with full CI/CD and everything.
I love words like this that have slang meanings that are basically the complete opposite meaning. Very similar to slang usage of "sick", "gnarly", "crazy", "insane", etc in (American?) English.
I'm choosing to believe that the egui (library) name is the "positive egui" form[0]. ;)
I wanted to make a lightweight alternative to Postman and Insomnia, so I used this. It worked out pretty well; I am about to release it to the world after a bit more polishing.
egui and Rerun look and work fantastic. One aspect of immediate-mode GUIs I ran into before was missing layout containers; does egui provide those in some manner?
I follow the GUI development space out of curiosity. I have made a few GUIs in Java Swing toolkit and I enjoyed it.
When I did GUIs in Android native, I didn't enjoy it at all.
I have loosely played with Qt Quick, but I didn't know C++, so I was clueless about how to encode user behaviour into a program that implemented it. My immediate-mode JavaScript canvas work is far simpler.
LLMs are already generating web GUIs (such as Vercel's v0 and others), and we can encode a mapping from current-state tokens to next-behaviour tokens. Hopefully we can specify all the interactions and what should happen due to them.
React's vdom diffs the whole tree to find changed nodes for recomputation; LLMs could generate the new tree and use an animation library to transition between states, i.e. the difference between the trees.
Immediate mode GUIs lend themselves well to this; you just generate/rerender everything.
What I'm interested in is a novel desktop paradigm alternative to windows, mousing and drag and drop.
IntelliSense and LSP let us see the types involved in an operation. But computer GUIs are inherently synchronous unless you're writing a script or program. I would like to click through operations and transform and map from one kind of thing to another. This would queue operations up, and the computer could schedule them efficiently.
Haskell knows the type of a pipeline at each stage of processing.
I would like to do "algebra with behaviour" and have built-in transformations such as joins, swapping, queries and scheduling. If you can map a problem into something you understand based on position, you can do algebraic transformations between states, such as bin packing or pathfinding.
"Break these files into batches, compress+encrypt them, keep them synchronized between these machines and back them up"
I am interested in immediate mode GUIs but I think we need a different paradigm for GUIs that is less synchronous from a user perspective.
Encoding behaviour in GUI code is complex, largely because of state management. See jQuery.
In most GUIs as a user you do an operation and wait for it to complete (direct manipulation). You can't queue operations unless you code. I haven't seen macro recording done well.