> So at some point, the engine stopped working on macOS. Mostly because Apple doesn’t want people to use OpenGL, so they’re starving their drivers from development
As a graphics developer, all I can say is... sigh. Apple has fought against open standards in graphics. They never overtly move against open standards (that I'm aware of); instead, they use their dominant platform to refuse, or drag their heels on, implementing support for open APIs.
Even though there's a new era of cross-platform graphics APIs in Metal, WebGPU, and Vulkan, the fight has now moved on to formats: everyone except Apple works towards open formats like WebP, glTF, and Basis Universal (fortunately they can't avoid that last one), while Apple announces they'll use USDZ instead, or declines to add support for WebP.
If you want to implement an OpenFX [1] effect properly you have to write its core four times (a rough dispatch sketch follows the list below):
- A CUDA implementation, if the host uses CUDA
- A Metal implementation, for Apple
- An OpenCL implementation
- A fallback CPU-based implementation
- You don't need a DirectCompute implementation, because thankfully no one is idiotic enough to use it.
- You don't need a Vulkan implementation, because no one is doing compute on Vulkan (afaik).
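To make the duplication concrete, here is a hypothetical C++ sketch of the dispatch such a plugin ends up with; the entry points and types are illustrative, not the actual OpenFX C API, and the GPU paths are stubbed:

```cpp
enum class Backend { Cuda, Metal, OpenCL, Cpu };

struct FrameParams {
    const float* src = nullptr;  // input RGBA pixels
    float* dst = nullptr;        // output RGBA pixels
    int width = 0;
    int height = 0;
};

// Portable CPU fallback, always available.
void runCpu(const FrameParams& p) {
    for (int i = 0; i < p.width * p.height * 4; ++i)
        p.dst[i] = p.src[i];     // identity "effect" as a placeholder
}

// In a real plugin each of these wraps a separately written and tested kernel
// (a .cu file, a .metal shader plus Objective-C++ glue, an OpenCL C source).
// Stubbed here so the sketch stays self-contained.
void runCuda(const FrameParams& p)   { runCpu(p); }
void runMetal(const FrameParams& p)  { runCpu(p); }
void runOpenCL(const FrameParams& p) { runCpu(p); }

// The host dictates which backend you must use for a given render call.
void render(Backend host, const FrameParams& p) {
    switch (host) {
        case Backend::Cuda:   runCuda(p);   break;
        case Backend::Metal:  runMetal(p);  break;
        case Backend::OpenCL: runOpenCL(p); break;
        case Backend::Cpu:    runCpu(p);    break;
    }
}
```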
The state of graphics and compute APIs is utter insanity. Especially Apple can go fuck off and die with their extremely moronic move of introducing OpenCL as an open standard in '09 and then presenting Metal as their closed API in '14, moving against OpenCL, OpenGL and Vulkan. nVidia at least has the excuse they were first to market with CUDA (and it's easier to program), and they stuck to it.
[1] THE standard for video effects, think "VST, but better and for video".
OpenCL was also killed off by fractured support among vendors, especially nVidia, who wanted to push CUDA. Now, arguably, CUDA is the better API/platform, but so too is Metal versus OpenCL.
WebGPU has compute I believe (or hopefully will), so it could theoretically act as a good common API that isn't controlled by a single vendor's whims.
> The state of graphics and compute APIs is utter insanity.
It looks like Vulkan is going to be the best-supported common platform for compute, at least for modern hardware. They need compute in the standard to support the core graphics use case anyway, so we might as well use that same support for general-purpose tasks. On Apple platforms, MoltenVK can implement Vulkan support on top of Metal.
Yes, a common GPU compute platform would be great! Even if Apple isn't going to support it being able to handle all of it in Vulkan would be nice. Maybe OpenCL can be implemented on top of that.
Khronos did not want to move OpenCL beyond an anemic C API; it was only the beating it took from CUDA that made them come up with SPIR, and OpenCL 3.0 is effectively the admission of defeat.
By the way, Google also completely ignored OpenCL and came up with its own RenderScript dialect instead.
I have not tested on Apple, but I did on ARM Linux. Their HLSL to GLES GLSL translation worked OK for my purposes. In the documentation they say they support most of them for both graphics and compute, VK and Metal included.
It’s an open source project by Pixar, adopted by large parts of the film industry and seeing adoption by both Unreal and Unity as well.
It’s IMHO a more comprehensive format than glTF, with the big downside being that it’s not supported on the web. However, if you join the community, you can see that many big industry players are very involved with it right now.
WebP is a pretty terrible standard, all things considered, and I have very low technical confidence in JPEG XL from the same team. USDZ is also pretty different from glTF; the former is a scene description that includes things like lights and composition, while the latter is strictly model interchange. Originally designed as a transmission format until people realized it wasn't well suited for that.
Fair point about USDZ and glTF. I'm thinking about this from a web developer's point of view which may skew my perspective. USDZ is unusable on the web and probably will remain that way since you need to use an SDK to read the files unless someone can work some wasm magic.
glTF is a full scene exchange format; however, I deal with glTF files on a daily basis and the overwhelming majority contain only models and animation.
JPEG XL isn't from the same team as WebP 2.0, and arguably AVIF is. It's a combination of the team behind Brotli/Guetzli/Brunsli and Jon Sneyers, inventor of FLIF.
JPEG XL is simple enough to be written cleanly and largely from scratch, instead of being based on a fork of the crusty VP8 codebase.
glTF is a far better format than Collada and works well for every skeletal animation I've thrown at it. I use 3DS Max, export to FBX, then use the FBX2GLTF command line tool. Works seamlessly.
USD (usdz is one of the supported extensions) supports skeletal animation, linear blend skinning and blend shapes in the default UsdSkel implementation. Additionally, it’s an extensible format, so you can plug in other implementations as needed.
Edit: to explain what I mean by supported extensions in case it comes up. USD is the format/library. It can be represented in three forms: usda for ASCII, usdc for binary, and usdz for a zipped bundle of the other two, including any asset dependencies like textures, audio, or other referenced USD files.
There's system-level support for WebP in macOS Big Sur. Also, Safari 14 supports it.
I don't like it because WebP doesn't add anything other than better compression compared to PNG and JPEG. AVIF or JPEG XL bring HDR support, so Apple should have adopted one of them. Even Google is planning WebP 2.0 because they noticed that WebP isn't enough: https://www.youtube.com/watch?v=zogxpP2lm-o
WebP does add something more than better compression (unless you can live with enormous PNGs): efficient compression of lossless photographs, lossy diagrams and photographs with alpha channels.
WebP also has an unfortunately low maximum image resolution. I was trying to convert some webcomic PNGs to lossy WebP for reading offline, and couldn't because the images were simply too tall to save in WebP format without resizing/cropping.
You will laugh, but Apple started to call "cross platform" things that are used or run on more than one of their products. So cross-platform between iOS, macOS, tvOS(?), and watchOS.
But all of those run Darwin, so for something like a graphics API that has nothing to do with GUIs and whatnot, the "platform" for all of those is the same.
In this vein, and previously discussed on HN[1]: there is a weird D3D bridge in Linux that is available on WSL2. Still not "cross-platform", and it seems to mostly exist to support ML workloads for Windows users.
That's not possible, because the situation we have today is that proprietary APIs (D3D, Metal, CUDA) are pretty stable, whereas open APIs (OpenGL, Vulkan, OpenCL) are buggy or unsupported.
Because Khronos just outputs piles of paper and expects partners to do the necessary work, while proprietary APIs provide full-stack solutions with graphical tooling.
Metal is extremely buggy if you're pushing the state of the art. Patrick Walton has run into a number of very serious driver bugs in porting Pathfinder to Metal (he can reliably panic the kernel just by running shader code), and I've seen some as well, though today I'm working in Vulkan.
I'm hopeful for WebGPU in the future, not just for its features and expected cross-platform support, but because I expect a pretty thorough test suite to emerge, holding developers' feet to the fire to make their drivers actually work.
I find the use of "extremely" to be extreme here, or at least too forgiving of the problems that arise while using other APIs. There are all kinds of state-of-the-art algorithms implemented on top of Metal. But anything improper can easily knock out the GPU; you can even do that with CUDA.
That's fair, I've certainly also had problems with Vulkan drivers from other vendors. So I'm not saying it's more buggy than other GPU platforms, but it is buggy, and it is more buggy than we expect of CPU-based language toolchains and runtimes.
> Windows had proprietary standards, Apple had OpenGL.
Repeating the same nonsense over and over again still doesn't make it true.
Windows always had full support for every graphics API - be it proprietary (like Direct3D) or open (like OpenGL).
The difference between Apple and Microsoft was that Apple controlled access to the API, that's why you were stuck with outdated OGL versions on MacOS and can probably say goodbye to OGL entirely on their future Apple Silicon products.
Microsoft, on the other hand, simply stopped providing anything but an OpenGL 1.1 implementation, while leaving API support to the hardware vendors via drivers. This happened all the way back in Windows 95, which is why some vendors provided their own APIs (remember Glide?), while others shipped "MiniGL" drivers just for Quake.
Microsoft never prevented drivers from providing graphics APIs or applications from using them and redirected OpenGL calls to the hardware driver when available.
No, they were into QuickDraw 3D; OpenGL came into the picture via the NeXT acquisition.
Even on mobile (probably unknown in the US), Symbian and J2ME were the big motivation for OpenGL ES on mobiles, with the N95 being the first handset with proper hardware acceleration.
This may be a bit of an extreme view, but I don't like frameworks at all. In general.
They are nice for little "hello world" applications that demonstrate how easy it is to create and deploy a new project, but it's just like the "programming language demo snippet" problem - it tells you nothing about what it's like to use this thing in a big, long-term project.
APIs and libraries on the other hand are usually not a problem. Especially graphics APIs - if you have a big project that makes heavy use of a graphics API, it's probably best to create an abstraction layer. That can be a very tricky task at conception, but it gives you a lot of flexibility later. It's definitely necessary for dependency inversion.
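As a minimal sketch of what such an abstraction layer can look like (the interface, handle type, and backend names are illustrative, and the backend is stubbed rather than calling a real API):

```cpp
#include <cstddef>
#include <cstdint>
#include <memory>

struct BufferHandle { std::uint32_t id = 0; };

// The abstraction the application codes against.
class GpuDevice {
public:
    virtual ~GpuDevice() = default;
    virtual BufferHandle createVertexBuffer(const void* data, std::size_t bytes) = 0;
    virtual void draw(BufferHandle vertices, std::uint32_t vertexCount) = 0;
    virtual void present() = 0;
};

// One backend per graphics API; this one is a stub standing in for Vulkan.
class VulkanDevice final : public GpuDevice {
public:
    BufferHandle createVertexBuffer(const void*, std::size_t) override { return {1}; }
    void draw(BufferHandle, std::uint32_t) override { /* vkCmdDraw etc. would go here */ }
    void present() override { /* vkQueuePresentKHR etc. would go here */ }
};

// Dependency inversion: the renderer never mentions Vulkan, Metal or D3D.
void renderFrame(GpuDevice& gpu, BufferHandle mesh, std::uint32_t count) {
    gpu.draw(mesh, count);
    gpu.present();
}

int main() {
    std::unique_ptr<GpuDevice> gpu = std::make_unique<VulkanDevice>();
    BufferHandle mesh = gpu->createVertexBuffer(nullptr, 0);  // placeholder data
    renderFrame(*gpu, mesh, 0);
}
```

Swapping backends then means adding another GpuDevice subclass, not touching the rendering code.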
With frameworks you don't get that benefit. Making a "framework-independent" application (not library) through abstraction is a bit like trying to kiss your own butt. You'll probably break yourself while attempting it and it's a dubious goal in the first place. (I tried it with ASP.NET Core and I still have back problems)
The worst thing about frameworks is that they force their architectural style on you. If you want to get anything done it's a bit like working in a very large company - you have to spend a lot of time learning all of its little idiosyncrasies, politics and issues so you can adjust to them, your work has to mirror all of their design mistakes, and they can basically just decide to effectively cancel your project at any time for no reason.
>This may be a bit of an extreme view, but I don't like frameworks at all. In general.
I don't think giving an opinion such as "I don't like frameworks" without any concrete source code of non-trivial projects is productive discussion. (My previous comment about "unnamed invisible frameworks" : https://news.ycombinator.com/item?id=14172165)
>They are nice for little "hello world" applications that demonstrate how easy it is to create and deploy a new project,
It's the opposite. A good framework that fits the problem domain does a lot of low-level gruntwork that frees the programmer to focus on higher-level tasks.
It's not libraries OR frameworks. Real-world complex applications can use both.
One of the problems with using a framework for a non-trivial project is that you get locked into the framework. If the framework decides to change something in an update that you depend on, it can mean non-trivial rewrites unless you saw the change coming. Most frameworks try to be close to 100% backwards compatible and grow in complexity because of it. Others change a little at a time. And then some make big breaking changes or rewrites and call it version 4.0. That makes it increasingly hard to find help online.
It really depends how you structure your code. While my Qt apps are very much coded against Qt, it’s actually pretty easy for me to currently switch them to other UI frameworks as needed. I’ve done this with some of my bigger tools going to other frameworks.
If you keep your logic separate from your UI, and interact via method calls, it’s not very hard to swap out the underlying object. Arguably, it’s the pattern Qt encourages anyway so that you can flexibly change your UI design in the future
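A minimal sketch of that pattern (illustrative class names, assuming Qt Widgets): the logic type knows nothing about Qt, and the widget layer only forwards calls and displays results:

```cpp
#include <QApplication>
#include <QLabel>
#include <QPushButton>
#include <QVBoxLayout>
#include <QWidget>
#include <string>

// Framework-independent logic: plain C++, trivially unit-testable,
// reusable if the UI toolkit is ever swapped out.
class Counter {
public:
    void increment() { ++value_; }
    std::string text() const { return "Count: " + std::to_string(value_); }
private:
    int value_ = 0;
};

int main(int argc, char** argv) {
    QApplication app(argc, argv);
    Counter counter;                       // logic lives outside any widget

    QWidget window;
    auto* label = new QLabel(QString::fromStdString(counter.text()));
    auto* button = new QPushButton("Increment");
    auto* layout = new QVBoxLayout(&window);
    layout->addWidget(label);
    layout->addWidget(button);

    // The UI layer only calls methods on the logic object and shows the result.
    QObject::connect(button, &QPushButton::clicked, [&] {
        counter.increment();
        label->setText(QString::fromStdString(counter.text()));
    });

    window.show();
    return app.exec();
}
```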
> While my Qt apps are very much coded against Qt, it’s actually pretty easy for me to currently switch them to other UI frameworks as needed.
Are your apps open source, or do you know of some architected in the same way? I'm curious and interested to dive into a well-designed Qt application with logic and UI separation.
Second this. I love the context-based organization that newer versions of Phoenix default to. I've been using that in apps for years.
Basically all of the application logic is in framework-independent code. When plugging in the UI portion, you never do logic there; you just call functions/methods to get it done.
It's not perfect, but it makes it much easier to port to a different framework/toolkit in the future.
> I don't think giving an opinion such as "I don't like frameworks" without any concrete source code of non-trivial projects is productive discussion.
Fair enough, even though it's hard to point to a specific piece of code when talking about such abstract concepts. And I'm not working on any open source projects.
My go-to example is dependency injection and ASP.NET Core. For the most part, I really like the features and it's mostly okay as a framework, even though I'm not sure why it has to be one. When I started taking clean code and DI more seriously, I learned that ASP.NET Core gets these things wrong in a subtle way that turns the whole idea of DI into a bit of a farce. This post explains it really well:
I wish there was an easy way of removing the "framework" aspect from that thing and just have a bunch of libraries for handling HTTP requests, rendering pages, dealing with security and so on.
> A good framework that fits the problem domain does a lot of low-level gruntwork that frees the programmer to focus on higher-level tasks.
I agree that this is a noble goal, and I'm sure there are plenty of frameworks that are good enough. But sometimes you don't have a lot of choice, and in the worst case all of the useful libraries for your domain depend on a framework that you don't really want to use.
> It's not libraries OR frameworks. Real-world complex applications can use both.
I think they'll be using libraries either way. But they do have a choice whether to be completely dependent on a framework, and that's a pretty serious choice to make if it's a long-term project.
> My previous comment about "unnamed invisible frameworks"
I don't have a lot of experience with JS other than playing around with it every few years and discovering that none of the things I used last time are in use anymore. It scares and confuses me. But yes, if you write a lot of similar software projects, a framework is probably the way to go if you want to avoid writing the same low-level stuff that has been better written and definitely better tested by the framework people.
And yes, writing my own "invisible" framework is a bad idea, in my own experience too!
But that's why I'm trying to move away from the whole concept of having my code in some sort of harness in the first place. A lot of the repeat-y stuff can be handled by libraries just fine!
Framework, singular. Frameworks take over the architecture of your application; I don't think they can be compatible with each other. I mean, you can use a QList from Qt alongside an std::vector from the STL, but good luck making several MOC systems coexist.
You pick one framework, stick with it, and die with it. (Or you're stuck with the latest unmaintained version.)
I write big complicated applications and I write all my frameworks myself. (I'm a one person team) It is very much doable. You can check out my frameworks at: gamepipeline.org
This is probably a reasonable sentiment in the abstract, but in the concrete instance of UI development, what are we to do? Are there any production quality UI libraries? There are certainly many primitive libraries that can be composed to build a UI library; however, these are largely written in C, so even integrating them into your build is a Herculean effort, and further these tend to be very specialized (accessibility, text rendering, etc) such that you need to be an expert in each sub domain simply to parameterize them correctly. Further still, I suspect there is a lot of cross-cutting concerns, for example, the text rendering library must work well with whichever rendering approach you take for rendering other widgets, accessibility is not easily decoupled from your layout engine or widget library, etc. How much of this is reasonable to take on before it’s more reasonable to throw in the towel and use a framework (or worse: a whole browser)?
Note that the issues the Krita team ran into aren’t a consequence of frameworks in general, but of the management of the Qt framework in particular (although the entire graphics industry deserves lots of blame for their insane, spurious, and ever-changing APIs). Web browsers have much more difficult problems than Qt and manage to provide a relatively stable developer experience. As for the issue of “your work has to mirror all of their design mistakes”, I think this is a consequence of depending on someone else to write part of your code, irrespective of whether that code is a framework or a library; however, the benefit of libraries is that they are usually loosely coupled such that you can relatively easily swap one out for another (but as previously discussed, I don’t think this is the case for UI libraries in practice).
> The worst thing about frameworks is that they force their architectural style on you.
Sometimes, and sometimes that's a good thing:
- If one's own skills in coming up with a good app architecture are not so great (and that applies to the vast majority of devs). It can't stop bad devs from shooting themselves in the foot, but it can help less experienced good devs avoid straying off course and taking wrong shortcuts,
- If one's own architecture style is sufficiently similar to the framework's, then it saves time on repetitive, scaffolding work.
IMHO if you build more or less similar types of apps, and you absolutely can't find the right framework for you, then just build your own and save time. Eventually you will end up with something equivalent anyway.
But if you can use a framework then that is going to save you a ton of work, both upfront and in maintenance costs. Since developer resources are almost always limited, that means you end up with more time left over for actual application features.
(And by "use" I mean use directly, not through some framework-abstracting metaframework you've written yourself - I think it's very obvious that's going to be a disaster. The whole point of a framework is to abstract over different APIs, so if you have to abstract on top of them then they serve no purpose. It brings to mind horror stories of hand-rolled database engines written on top of other database engines!)
I think your comment leaps directly from "if you can't use a framework directly then you're best off writing your own" (fair point) to "I don't like frameworks at all", without tackling the key bit in the middle: how often is it possible to use a framework directly? The truth is, most applications can be built just fine with a framework like Qt. There is a gap between "hello world" and Krita that encompasses almost all practical programs.
Even in Krita's case, with all the pain they've encountered, they're still not seriously discussing abandoning Qt. They're even talking about maintaining their own patch set! Given how much experience they obviously have on the subject I think that shows there is value even in a framework that doesn't fully work for you. (I know that's an appeal to authority but I think there's some value in that argument.)
One compromise is to use a framework like Qt for all the boring bits of your program, like the dialog boxes and window decorations, then write your own wrapper for low-level APIs for the main display area if it needs it. I must admit I'm not sure how feasible this actually is, given that something else is ultimately in control of the overall rendering - maybe it isn't at all, given that Krita doesn't do it.
When I started working on Krita in 2003, it was already using Qt, of course, so I didn't have much of a choice -- if I wanted to work on Krita. But then, in a nice piece of circular reasoning, I chose to work on Krita because it was using Qt. Back then, I only knew Java, and Qt made C++ look like Java. One of the big problems I see in Qt is how the Qt developers are somehow ashamed of that heritage and want to make Qt steadily less Java-y and more Boosty or std-y.
Over the years, we've had a lot of pain, of course... But then again, a lot of fun, too, and without Qt we wouldn't have been able to create an application like Krita. Every alternative just is worse, much worse.
2003 was a long time ago: C++ has evolved, Java has stagnated and is falling out of fashion, and many traditional Qt peculiarities like nonstandard strings and collections or the custom preprocessor have gradually transitioned from idiosyncratic but justified to unreasonably antiquated and possibly unacceptable. I think making Qt as "Boosty" as possible is necessary for it to remain relevant.
Might be falling out of fashion, yet Microsoft just bought a Java company and is now an OpenJDK contributor, and Java is one of the tier 1 languages for the Azure and Amazon SDKs.
Meanwhile, C++ lost its full-stack status; the only mainstream OS that still offers C++ GUI tooling is Windows, and none of MFC, ATL or UWP/WinUI APIs are "Boosty".
What keeps Qt relevant as the surviving C++ GUI framework not stuck in 90's tooling idioms are the paying customers, and those have quite clear ideas which compilers and ISO C++ versions they want to have Qt on.
It is not that they are ashamed; rather, Trolltech made a series of decisions that made sense in the context Qt was originally designed in and for the C++ compilers used by the demographic of paying customers, which doesn't sit well with the "ISO C++ above all things" crowd.
Meanwhile std::string still doesn't provide everything that compiler specific frameworks have been doing since mid-90's.
The problem with Qt not using std::string is when you (gasp!) want to integrate with non-Qt libraries that do use std::string. It has nothing to do with what std::string offers or does not offer compared to any other string implementation. std::string was the string implementation for C++ since before Qt began, and by not using it for their strings, they delivered a significant message about their desire to integrate easily with non-Qt libraries.
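A small illustration of what that boundary looks like in practice - every crossing is an explicit, encoding-converting copy (a minimal sketch, assuming the non-Qt library works in UTF-8 std::strings):

```cpp
#include <QString>
#include <string>

// Hand a Qt string to a std::string-based library.
std::string toLibrary(const QString& s) {
    return s.toStdString();              // UTF-16 QString -> UTF-8 std::string
}

// Bring a library result back into Qt land.
QString fromLibrary(const std::string& s) {
    return QString::fromStdString(s);    // assumes the library returned UTF-8
}
```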
Qt not using std::string made a lot of sense when everyone thought UTF-16 was the future. A 16-bit default string type was just what a modern, i18n-friendly language or framework had to have. It’s only in the past several years, with the rise of UTF-8 everywhere, that it looks like a mistake.
But I looked at using Qt in 1999 and easily decided against it, in part for this reason. Those of us coming from a *nix background got the UTF-8 thing much earlier than people affected by the Microsoft worldview.
I probably came across as a bit too rant-y. I'm not blaming anyone for using frameworks and I use them all the time myself. And I'm definitely not criticising Krita for using Qt.
Qt, like most frameworks, simply has the ability to get you stuck between a rock and a hard place. It's all a bit of a gamble, of course. Similar to writing plug-ins for specific programs, or apps targeted at app stores, for a living: someone can just decide to kick you out for no reason. I'm sure Qt doesn't intend to do that; they probably just want to do right by that almost-all-programs group. When you're that popular, ANY decision will hurt a lot of people.
What I'm trying to say is that I wish there was more of an effort in the programming community to avoid these asymmetric relationships. We already make a huge commitment by deciding what programming language to use, which often already comes with a built-in framework or standard library, a package manager and maybe even an IDE. For example, in the .NET world you have to constantly keep up with Microsoft's latest ideas on how to complicate the ecosystem by introducing yet another thing that is supposed to solve all problems at once, including those introduced by the previous attempts. If you're already in one abusive relationship, the last thing you want is two abusive relationships.
My rule of thumb for pulling in an external dependency is whether or not I’d be willing to maintain a private copy myself. For most libraries, that’s a yes— they have only a few entry points that my program uses, and it’s (relatively) easy to step through the execution path from there until finding the problem.
Frameworks, on the other hand, are more troublesome. They tend to have one entry point at the top that deals with everything any user might want to do, so locating a fault involves digging through lots of code unrelated to my particular usecase.
It’s kind of the same advice as for adding tests to a Big Ball of Mud project:
You don’t.
What you do is start trying to rescue code by pulling out as many pure functions as you can, and begin converting the rest. The purity doesn’t even have to be absolute - any code that only manipulates one kind of shared state also works. For instance, the body of an HTTP response: by separating the code that makes the request from the code that parses it, this code only mangles (reorganizes) JSON responses from this REST endpoint. It doesn’t care if we are using Swing or Qt, HTTP 1.1 or HTTP/3. It doesn’t even care if there is a UI or a network call (e.g., in a test harness).
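A minimal sketch of that split, assuming nlohmann/json and a made-up endpoint shape (the struct and field names are purely illustrative):

```cpp
#include <nlohmann/json.hpp>
#include <string>
#include <vector>

struct Item { std::string name; int quantity; };

// Pure: same input string, same output, no shared state touched.
// Testable with a string literal - no HTTP stack, UI toolkit, or network.
std::vector<Item> parseItems(const std::string& body) {
    std::vector<Item> out;
    const auto doc = nlohmann::json::parse(body);
    for (const auto& entry : doc.at("items"))
        out.push_back({entry.at("name").get<std::string>(),
                       entry.at("quantity").get<int>()});
    return out;
}

// The impure part stays thin: fetch bytes, hand them to the pure function.
//   std::string fetchItemsBody();                // HTTP client of your choice
//   auto items = parseItems(fetchItemsBody());
//
// In a test harness, no network needed:
//   auto items = parseItems(R"({"items":[{"name":"ink","quantity":3}]})");
```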
MVC is a continuum. You’ll never get all the way there if only for very real performance reasons, but “keep as much of the logic out of the UI as you can” makes sense as long as the visual workflow and the logic behind it agree about what is being done. You get into bad impedance mismatches when someone wants to build an “engine” that solves a different problem than the business has because they find the business kind of boring and decided to engineer it away instead of just moving on, or getting over it.
Very much on point. If the core of your business, or application, depends on external code, you are taking a huge risk integrating it into your code base.
Sometimes it is better to invent a wheel that does exactly what you need, and nothing more. Software is sometimes about NOT reusing code.
The re-inventing the wheel thing always starts with good intentions and works fine until you have to throw in hundreds of hours to maintain a beast that nobody likes but everybody has to use. You end up being locked in your own solution and instead of blaming your supplier you can blame yourself.
This is not true in most cases. Most situations rely entirely on integrating 3rd party code, and the risk to the business is zero. Most tools and services don't last long enough to justify reinventing the wheel.
Absolutely. Sometimes it feels like some projects want nothing more than stick their tentacles into your business layer. I don't have a lot of OCD but that sort of thing definitely triggers it.
Example: object-database mappers making you put database-specific things into your domain entities, or innocently implementing the unit of work pattern. It's right there, all you have to do is let us in!
I tend to agree with you but I do think it depends on a lot of factors like team size, churn rate of that team, how long the project will exist, etc.
On a very small development team I've seen a framework be a big help to productivity as there were a lot of little wheels we didn't have to recreate and didn't have to search for a library every time we needed another small feature like that. Certainly there were bumps along the road to learn the nuances of the framework (this is compounded by changing versions of the framework) however those almost always ended up being minor headaches in the scope of a larger feature and maybe extended timelines by a day or two.
I do not want to convey here that my experience is true for every team or every framework, rather that I have had one specific set of experiences with one specific framework that has pulled me from condemning frameworks as a whole while still not being a huge fan.
Krita started in 1999, but back then, there was no discussion at all; Krita was a KDE project so it would use Qt. The project was actually created because of a bit of ruckus over a patch that integrated Qt with GIMP.
These days our team is humongous: we have seven sponsored developers!
I agree, frameworks can be beneficial to productivity.
I think my feelings about them mostly have to do with a lack of control over your own creations on multiple levels. Personally I'm a bit slow and I can't hold a lot of things in my head at once, so all of these nice-to-have code quality principles are actually pretty important to me.
I can deal with a library being a bit of a black box, as long as I can make some basic assertions about it such as which parts are thread-safe, what sorts of exceptions to expect and so on. Usually it doesn't take too long to learn how to use it well enough.
With a framework, the learning part is much more convoluted. The tutorials and documentation are usually written for "task-based" learning, i.e. you have a specific goal and the documentation just tells you which buttons to push, which levers to pull in that case. And stack overflow will fill in the blanks. When your (preferably very small) app is covered by that, you're golden.
But when you're doing something more complex or have a specific issue, you need the "model-based" approach. You need to understand the architecture behind the framework, and usually that means you'll have to read most of the source code with zero guidance. And you'll probably have to wade through lots of code that is only there to avoid some boilerplate in "hello world"-level projects. Most people won't do that, and then they're stuck with an incomplete mental model based on a patchwork of experiences. It can get pretty cargo-cultish sometimes when you end up configuring something you don't understand because in your experience it makes things run better.
I admit that I'm oversimplifying/overgeneralizing and frameworks are popular because they tend to help people meet deadlines. And I use them too. But it feels wrong, like I don't really know what I'm doing.
I spent a good chunk of my day today fighting a bug that I imagine is going to bite us sooner or later at work, but so far seems limited to my machine.
We use Qt5 and QGraphicsScene with QGLWidget to let us also do custom background painting in OpenGL (a stripped-down sketch of this kind of setup follows below). We have tried porting to the new QOpenGLWidget before, but we hit a fun issue: if you add a QWebEngineView to a widget in the graphics scene, the entire widget disappears for a frame every time the browser repaints. Which gives you a really great flickering that makes the software unusable.
It used to only happen on my branch with QOpenGLWidget, but I recently recompiled using a newer gcc version and now it happens to me with the old QGLWidget as well.
Qt and OpenGL seem to not get along all that great. There are all sorts of odd rendering artifacts that pop up and never seem to get fixed. The documentation for internal Qt code is also not great, which makes trying to parse through it a real adventure.
Edit: Another fun bug that popped up in the Qt4 to Qt5 transition is that on Linux if you have our app open and then switch to a different virtual workspace and back, the app stops repainting until you resize the window.
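For reference, a stripped-down sketch of roughly this kind of setup - not a verified reproduction of the flicker, just the structure (assumes Qt 5 with the QtOpenGL and QtWebEngineWidgets modules; sizes and positions are arbitrary):

```cpp
#include <QApplication>
#include <QGLWidget>
#include <QGraphicsProxyWidget>
#include <QGraphicsScene>
#include <QGraphicsView>
#include <QWebEngineView>

int main(int argc, char** argv) {
    // Commonly required when mixing QtWebEngine with other GL rendering.
    QApplication::setAttribute(Qt::AA_ShareOpenGLContexts);
    QApplication app(argc, argv);

    QGraphicsScene scene;

    // Embedding an ordinary widget wraps it in a QGraphicsProxyWidget.
    auto* web = new QWebEngineView;
    web->resize(800, 600);
    QGraphicsProxyWidget* proxy = scene.addWidget(web);
    proxy->setPos(50, 50);

    QGraphicsView view(&scene);
    view.setViewport(new QGLWidget);   // GL-backed viewport (legacy QGLWidget)
    view.resize(1024, 768);
    view.show();

    return app.exec();
}
```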
Currently there is the QGLWidget that holds a QGraphicsScene, where all of our windows are QWidgets wrapped by QGraphicsProxyWidget to get them into the scene.
Are you suggesting that we should do another layer of wrapping the QWebEngineView in a QOpenGLWindow wrapped in a QWidget that is then wrapped in a QGraphicsProxyWidget?
I don't understand why that would help, but I guess you never know.
Years ago, I managed to track down one of the core Qt maintainers at a conference to ask about my use of QGraphicsProxyWidget on a QGLWidget. He winced at the mere mention of QGraphicsProxyWidget...
So if I'm reading this right, there is still no such thing as a standard graphics API. There's OpenGL, there's Direct3D, there's other hardware-specific APIs (Metal and Vulkan are mentioned), and now Qt is inventing their own API to abstract over all these APIs.
How is having a standard API to draw 3D graphics not a solved problem at this point?
The revisionist history: AAA game developers are used to console APIs, which do exactly what you tell them to at the hardware level, with no sugar coating or drivers in the way. Having shipped a few games, drivers are indeed a terrible curse. One vendor's D3D11 drivers are pretty notorious for being incredibly invasive to your game in the goal of speed, making it difficult to ship content that was consistent across devices. Another vendor, sick of having to turn their driver into a rocket engine, found common ground with some game developers and made a prototype, Mantle, which showed real-world performance gains on game content. But it was pretty specific to how their hardware and GPUs worked. But given that Xbox wanted performance, Microsoft was intrigued and worked closely with that vendor to design the next-generation Direct3D API.
Meanwhile, the vendor with the super fancy drivers had a pretty major chokehold on OpenGL, which is just an absolutely terrible mess of a bad API. It's an API that is so backwards and difficult that nobody likes it from the driver or the application perspective, but given that this vendor had the best drivers for it, they didn't want to lose it. But the Mantle vendor wanted to shake things up and submitted Mantle to Khronos, the organization that standardizes OpenGL, to form the basis for the "gl-next" initiative. Mobile GPU vendors got involved and turned the design into a mess. The original engineers who designed Mantle left their parent company over design disagreements to go join Apple, who also had a vested interest in getting rid of OpenGL but had little desire to use gl-next. This is Metal.
Ultimately, D3D12 is Microsoft's turf. Nobody likes Vulkan but it's there on Android and Stadia and Linux and is pretty mediocre there. And Metal is pretty well-rounded but is the domain of Apple.
3D rendering has always been a mess, but now it's even worse because tiled GPUs exist, data bandwidth is crazy expensive, and game devs need more fidelity and FPS than ever before. Synchronizing between coprocessors is now the job of the application.
> Microsoft was intrigued and worked closely with that vendor to design the next-generation Direct3D API.
I've heard "accusations" that it was actually the inverse.
Microsoft and that vendor were working together on next-gen consoles, including the new D3D12 API. But Microsoft weren't planning to release it on Windows until long after the Xbox One released.
The vendor wanted something to show off now, so they pulled together Mantle in a reasonably short time-frame and announced/launched it before D3D12 was even in an announceable state.
I have no idea if these "accusations" are true, it's just what I've heard.
That sounds more as if it would be a window system specific issue? As far as I know OpenGL didn't have that kind of API either and you had to use X11 and the glX bindings for that kind of information.
To complicate the mess, the other vendor's DX11 drivers are so bad that some Windows users have started using DXVK (really designed for Linux, but works just fine on Windows) to translate DX11 to Vulkan and get better performance in many games.
Differences between Mantle and Vulkan are not that big, some things are super similar. The biggest difference between Mantle and Vulkan are renderpasses, and that can feel like a mess if one is used to immediate renderers, but as a concept it’s very much core in Metal and even DX12 has it.
I didn't know that Johan Andersson, of DICE (Battlefield) fame, who created Frostbite (the EA engine) and the Mantle spec, joined Microsoft. Is your data correct?
We could have had OpenGL, but Microsoft wanted their own, so they made Direct3D, and ATI and Intel were unable to make both Direct3D and OpenGL work properly (because most people targeted Direct3D anyway, so why bother?). So after SGI bankrupted themselves, Khronos was formed, and they decided to define a new OpenGL from scratch to help ATI and Intel claim they have OpenGL (they pay money after all), so they built the core profile. But ATI-now-AMD and Intel failed to make that work too (since people still targeted Direct3D, so why bother?), so to help them again they decided to ignore OpenGL and make Vulkan, which kinda seems to work for now. But AMD and Intel took their time to produce certified drivers, and Vulkan doesn't look that popular (though Direct3D 12, which is essentially the same thing, also isn't that popular; the alternative tends to be Direct3D 11, so there's still no reason to bother fixing OpenGL). So who knows for how long this will work?
Apple was in the OpenGL train initially but like their Java and X11 support that was so they have some easy ports of important stuff and once they got a sniff of popularity they ditched anything non-Apple because why bother maintaining something others control?
That is my interpretation of the story so far anyway. And I didn't mention OpenGL ES, which is OpenGL in name only but not in anything that really matters - but thanks to iOS and Android it became popular, even though both of these platforms had more than enough power to handle the proper thing instead of the cut-down version that was originally designed for J2ME feature phones that couldn't even do floating-point math.
On the positive side, OpenGL is still the API that has the most wide availability and of all APIs the one that has the most reimplementations on top of other APIs - though in both cases you'll want to stick to older versions, but TBH versions were always a fluid thing with OpenGL anyway, you're not supposed to think in versions but in extensions.
> Microsoft wanted their own so they made Direct3D
Not quite, it was somewhat of a necessity at that point. 3D graphics was in an abysmal state back in the mid to late 90s. Hardware vendors shipped proprietary APIs (Glide), half-assed OGL ports just for Quake (MiniGL), but very few offered full OpenGL implementations.
In order to migrate gaming from DOS to Windows, Microsoft was in dire need for a reliable and widely available API that they could ship with the OS for game developers to use.
OpenGL wasn't exactly great, since it wasn't controlled by MS and the whole extension system was a huge mess and horrible to work with (I don't care what Carmack thought about it!)
Direct3D on the other hand offered a stable API and most importantly a REFERENCE RENDERER! - something that OpenGL was lacking and lead to super annoying bugs, since every other driver behaved differently...
The latter is still relevant today - OpenGL lacks a defined reference implementation, so "OpenGL support" on the box means very little in practice. This is why certain software packages require certified drivers, because CAD vendors would never be able to ship a finished product if they had to support every single quirk of every vendor, hardware or driver revision...
When Direct3D was introduced there was no Glide nor MiniGL and OpenGL provided more than enough of the functionality games would need at the time. Microsoft was in control of their OpenGL implementation which allowed both a full replacement (what drivers do nowadays) but also a partial replacement where a driver would only implement a tiny subset of the API and the rest (e.g. transformation, clipping, lighting, etc) would be handled by Microsoft's code.
> In order to migrate gaming from DOS to Windows, Microsoft was in dire need for a reliable and widely available API that they could ship with the OS for game developers to use.
Yes, the rest of DirectX provided that and OpenGL games used it too.
> OpenGL wasn't exactly great, since it wasn't controlled by MS
Which was the only real problem for Microsoft, not anything else
> and the whole extension system was a huge mess
During the mid to late 90s there were barely any extensions, and OpenGL 1.1 provided more than enough functionality for the games at the time. The main extension that would be needed during the very late 90s was multitexturing, which was just a matter of importing a few function pointers - nothing "messy".
> and horrible to work with
Compared to early Direct3D, OpenGL was much easier to work with - early Direct3D required you to build execute buffers, manage texture loss yourself, and other nonsense, whereas OpenGL allowed you to essentially say "use this texture, draw these triangles". This was such a big issue with Direct3D's usability that Microsoft eventually added similar functionality to Direct3D in versions 5 and 6, and they killed execute buffers pretty much instantly. Even then, OpenGL still provided more functionality that drivers could take advantage of as new features became available in GPUs (e.g. Direct3D 7 introduced hardware transformation and lighting, but OpenGL essentially had this from day one, so all games that used OpenGL got T&L for free when drivers added support, whereas games that used Direct3D had to explicitly enable it).
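To make the contrast concrete, here is roughly what "use this texture, draw these triangles" looked like in fixed-function OpenGL 1.1 - a sketch that assumes a current GL context created elsewhere (wgl/glX/GLUT), with no execute buffers or capability negotiation involved:

```cpp
#include <GL/gl.h>

// Draws one textured triangle using the 1990s fixed-function pipeline.
void drawTexturedTriangle(GLuint texture) {
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, texture);    // "use this texture"

    glBegin(GL_TRIANGLES);                    // "draw these triangles"
    glTexCoord2f(0.0f, 0.0f); glVertex3f(-0.5f, -0.5f, 0.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex3f( 0.5f, -0.5f, 0.0f);
    glTexCoord2f(0.5f, 1.0f); glVertex3f( 0.0f,  0.5f, 0.0f);
    glEnd();
}
```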
> Direct3D on the other hand offered a stable API and most importantly a REFERENCE RENDERER! - something that OpenGL was lacking and lead to super annoying bugs
This is wrong, Microsoft had a software rasterizer for OpenGL 1.1 that behaved very close to the spec and SGI had also released their own software rasterizer.
> since every other driver behaved differently...
This was the case with Direct3D too, and in fact it was a much more painful experience. Direct3D tried to alleviate this by introducing capability flags, but in practice no game made proper use of them, and games had all sorts of bugs and issues (e.g. DF Retro had a video where they tested a bunch of 90s 3D cards on Direct3D games and pretty much all of them had different visual glitches).
> OpenGL lacks a defined reference implementation, so "OpenGL support" on the box means very little in practise
This is an issue indeed, though it is largely a problem with the driver developers not caring about providing consistent behavior rather than a problem with the API. If the driver developers cared, they'd align their behavior with other drivers whenever a difference was spotted between implementations.
Though that is a modern issue, for pretty much the entirety of the 90s and early 2000s there were official software rasterizers from both Microsoft and SGI.
> When Direct3D was introduced there was no Glide nor MiniGL
You must be from a different universe: MiniGL was released in 1996 - the very same year Direct3D 4.0 and Direct3D 5.0 shipped... As for Glide - that started also in 1996 and was commonly used until 3dfx went defunct.
> During the mid to late 90s there were barely any extensions and OpenGL 1.1
Again - in which timeline was that the case? Certainly not in this one: in 1996 (!!!) there were about 90 (!!!) vendor-specific extensions [1]. This is not a question of whether you in particular are aware of them or of their particular usefulness; they did have use cases and were supported across vendors, sometimes with varying levels of support...
> Microsoft had a software rasterizer for OpenGL 1.1 that behaved very close to the spec and SGI had also released their own software rasterizer.
Neither of those were references that you could reliably run pixel-level A/B tests against to verify your drivers.
There never was an official reference implementation and there probably won't be any either.
> The main extension that would be needed during the very late 90s was multitexturing
Unless you were porting software from other systems like SGI workstations, which I did at the time. And believe me - it wasn't fun and having half a dozen code paths to work around that depending on the target hardware wasn't "clean" either.
I won't comment on your "which API is better"-drivel since your arguments didn't age well anyway. We're back to execution buffers and manual (texture-) managing for performance reasons so I could just as well argue that early Direct3D was actually ahead of its time... But that's a matter of opinion and not a technical issue.
> You must be from a different universe: MiniGL was released in 1996 - the very same year Direct3D 4.0 and Direct3D 5.0 shipped... As for Glide - that started also in 1996 and was commonly used until 3dfx went defunct.
Only the year is the same, but not the dates. Direct3D was introduced in DirectX 2.0 on June 2[0]. Voodoo 1, for which Glide and MiniGL were made, was released after Direct3D, on October 7[1].
It would be impossible for Microsoft to make Direct3D as an answer to APIs like MiniGL since MiniGL didn't exist at the time the first release of Direct3D was made!
> Again - in which timeline was that the case? Certainly not in this one: in 1996 (!!!) there were about 90(!!!) vendor-specific extensions [1]
I'm not sure what you refer to in "[1]", there isn't any date information in there. Regardless, from [2] (which is from 2000, when there were many more extensions than in the mid-90s) you can easily see that the vast majority of extensions are for hardware that is irrelevant to desktop PCs running Windows (e.g. the SGI-specific and GLX stuff).
In addition new OpenGL versions are essentially bundles of previous extensions, so a lot of these extensions are functionality you got with 1.1 (e.g. GL_EXT_texture is basically the ability to create texture objects which was introduced as an extension in OpenGL 1.0 and made part of the core API - and available to anyone with OpenGL on Windows - in version 1.1).
Of all the extensions listed even in the 2000 list, only a handful would be relevant to desktop PCs - especially for gaming - and several of them (e.g. Nvidia's extensions) wouldn't have been available in the mid-90s.
> Neither of those were references that you could reliably run pixel-level A/B tests against to verify your drivers.
At the time that was irrelevant as no GPU was even able to produce the exact same output at a hardware level, let alone via APIs.
Also Direct3D didn't have a reference rasterizer until Direct3D 6, released August 1998. The Direct3D 5 (released in 1996) software rasterizers were very limited (one didn't even support color) and meant for performance, not as a reference.
> There never was an official reference implementation and there probably won't be any either.
That doesn't matter, Microsoft's OpenGL software rasterizer was designed to be as close as possible to what the spec described and was much more faithful to it than the software rasterizers available for Direct3D up to and including version 5.
> Unless you were porting software from other systems like SGI workstations, which I did at the time.
Yes, that could have been a problem since 3D GPUs at the time pretty much sucked for anything unrelated to fast paced gaming. But those uses were very rare and didn't affect Direct3D at all - after all Direct3D was at an even worse state with all the caps and stuff you had to take care of that OpenGL didn't require.
> We're back to execution buffers and manual (texture-) managing for performance reasons so I could just as well argue that early Direct3D was actually ahead of its time
Yeah and IMO these modern APIs are a PITA to work with, more than anything ever made before that with any improvement not justifying the additional complexity, especially when OpenGL could have been extended to deal with better performance.
> Microsoft wanted their own so they made Direct3D
Bill Gates "wanted his own" because this would limit software portability, making his near-monopoly in OSes even stronger. Good move for his bottom line, but a dick move for humanity.
The ones that keep mentioning Microsoft alone are the cynical ones, selling their FOSS agenda the best way they see fit, usually with zero experience in the games industry.
As much as games may not be able to be open source, I'm still baffled that the infrastructure still isn't. Open source infrastructure dominates the server space, I don't see why it couldn't dominate the desktop as well.
I see why it doesn't: it's mostly hardware vendors refusing to provide free drivers and refusing to hand over the specs of the hardware they sell (I mean the ISA). There may have been good reason 20 years ago, but it's been some years now that hardware tends to be mostly uniform, and could possibly stabilize its ISA. It has been done for x86 (for better or worse), it could be done for GPUs, printers, web cams, and everything else.
Hardware used to come with a user manual. Then it all stopped, around the time Windows 95 took over. Instead of a manual, they provided opaque software that worked with Windows. That has been the new tradition since, and changing it is hard. For instance it's only very recently that FPGA vendors started to gradually realise that open source toolchains could actually help their bottom line.
My dream, really, would be for hardware vendors to agree on an ISA, so we don't have to put up with drivers any more. https://caseymuratori.com/blog_0031
It dominates the server because many FOSS users who refuse to pay for tooling don't have any other option than paying subscriptions to keep their servers running, or at the very least they need to buy hardware.
FOSS Desktop doesn't scale to keep a company running under such premises, because a large majority refuses to pay, and living from patreons and donations only goes as far.
Which is why everyone that wants to make money with desktop FOSS software has either moved it behind a paywall served via browsers or to mobile OS stores.
From my point of view FOSS friendliness is a marketing action, where underdog companies play nice, use non-copyleft licenses and as soon as they get rescued due to positive vibes, whatever, hop again into dual licenses to keep their business rolling.
> FOSS Desktop doesn't scale to keep a company running under such premises, because a large majority refuses to pay, and living from patreons and donations only goes as far.
Okay, how complex does an OS need to be, really? Let's see, it needs to schedule and run your programs, interface to the hardware, manage permissions… that's about it. Why would it need to scale? What's so impossibly complex about an OS that it couldn't be done by 5 highly competent engineers in 2 years?
Oh, right, the hardware. With the exception of CPUs, hardware vendors don't publish their specs, and don't agree on a set of interfaces. So you end up having to write a gazillion drivers, dozens of millions of lines of code, just so you can talk to the hardware.
Solve that problem, and OSes won't need to scale. Perhaps even to the point that game devs will be able to ship their own custom OS with their games. As was done in the 80s and early 90s.
Having a stable driver ABI and being micro-kernel based helps with scaling, which, fun fact, is what the PlayStation with its heavily customised FreeBSD, or the Switch with its in-house OS, do.
As for portable specs, if the Open Group and Khronos have taught us anything, it is that there is a big difference between paper and real hardware/platforms.
Yep, we shipped with our custom OSes, which also had our custom workarounds for faulty undocumented firmware bugs, those that we occasionally took advantage of for demoscene events.
> As for portable specs, if the Open Group and Khronos have taught us anything, it is that there is a big difference between paper and real hardware/platforms.
But… they don't even specify ISAs, they specify APIs. I'd wager the big difference is only natural. Another way would be for a vendor to design their ISA on their own, then make it public. If a public ISA gives them an advantage (and I think it could), others would be forced to follow suit. No more unrealistic consortium. :-)
> faulty undocumented firmware bugs
I hope that today, any hardware bug would trigger an expensive recall, and firmware bugs would just be embarrassing. CPUs today do have bugs, but not that many. We could generalise that to the rest of the hardware.
> That is my interpretation of the story so far anyway. And I didn't mention OpenGL ES, which is OpenGL in name only but not in anything that really matters
You might be mistaking OpenGL ES 1.0 for anything modern.
ES 2.0 and above is a true subset of desktop OpenGL; some limits are lower and support for things like geometry shaders is optional, but that's pretty much it.
OpenGL and OpenGL ES are two completely different APIs with their own specs and implementations. Some versions do have an overlap in functionality in that you can write code that can work with both with minimal (mainly shader) differences, but that's about it.
But IMO OpenGL ES 2.0 was pointless, the devices that were powerful enough to support it were also powerful enough to support the full OpenGL so Khronos should have pushed that instead of fragmenting the driver and API ecosystem.
No, really: 4.3 made ES a true subset. As in, you cannot write spec-conformant ES 3.0+ software that would not run on a GL 4.3+ implementation.
This was very intentional by Khronos so they could bring the two closer together. In the ES 2.0 days what you said would have been true, as ES 2.0 had some annoying differences, especially on the shader side, but it's been 8 years since 4.3 came out.
ES3 shaders (with the modern in/out qualifiers) compile as-is on GL 4.3+. It is a true subset. As an example, 4.3 brought precision qualifiers to desktop GL. And now that fp16 is in desktop HW, they are actually useful there.
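To illustrate the subset claim, here is a fragment shader written against GLSL ES 3.00 (precision qualifiers, in/out), embedded as a C++ string constant; a GL 4.3+ driver should also accept it, since ARB_ES3_compatibility is core in 4.3. The compile/link boilerplate (context creation, glCreateShader, etc.) is omitted:

```cpp
// GLSL ES 3.00 fragment shader; also consumable by desktop GL 4.3+ drivers.
static const char* kFragmentShader = R"(#version 300 es
precision mediump float;

in vec2 vUv;
uniform sampler2D uTex;
out vec4 fragColor;

void main() {
    fragColor = texture(uTex, vUv);
}
)";
```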
I also recall the early days of D3D (immediate mode), which, although it came after OpenGL, allowed better integration with the cards at the time (notably 3dfx); OpenGL did not have hardware drivers, which meant it was limited to software rendering. So, if you were into game dev, then D3D was your only option early on.
My recollection was that it wasn't until NVidia started up (and broke 3dfx by poaching engineers) that OpenGL started to become 'better'. Intel was left in the dust until mobo support for DMA (?around DX5?), which allowed cards to gain quick access to RAM, which was vital for texturing (you always had to 'upload' textures to the card itself prior to that). It was the final nail in the coffin for 3dfx at that point, who still hadn't released a new card for ages, and OpenGL was finally on par with D3D. D3D had a retained mode which began to be really useful by about that time too.
At the time, many people wanted to use OpenGL because it was loads easier than immediate mode and a lot more intuitive to grok. I recall a certain prominent Doom developer (John Carmack) berating D3D loudly on a private email list about this very fact. Ironically, a few months later some guys released a demo for a game called "Unreal" using D3D and everyone was blown away (circa 1995-6). More ironically, it wasn't for another year that GLQuake came to fruition.
Carmack loved OpenGL because GPU vendors could (and would) release proprietary extensions that exposed all the new functionality of new GPUs.
He would rewrite custom rendering paths for various GPUs and common sets of extensions, allowing him to improve performance and/or improve graphics.
With Direct3D, Microsoft defines a common feature set that all GPUs supporting that version of Direct3D are required to support, and any extra functionality that GPUs might provide on top of that are locked away, completely inaccessible.
Checking the Doom 3 source code, he has the main ARB and ARB2 pixel shader paths (equivalent to DX8 shaders). Then, for the older GPUs that mostly support pixel shaders but not in a standards-compliant way, he has an NV10 code path and an NV20 code path.
Then he has an R200 code path, which I think just improves performance on R200 graphics cards over regular ARB2.
Extensions are a good thing since it allows developers to take advantage of new functionality and provide it to consumers pretty much immediately - this is a win win for everyone involved, programmers use cutting edge functionality and consumers actually get to use the fancy GPUs they paid money for.
Direct3D programmers disliking extensions makes me think of the sour grapes fable.
But OpenGL providing extensions doesn't mean that Direct3D programmers are free of having to implement different code paths. If anything, during the 90s, with the proliferation of 3D cards and different capability flags, programmers had to take into account a lot of different possibilities in their rendering code (and most failed to do that, with games having visual glitches everywhere).
I don't like the term "poaching". It implies that the engineers who took better paying jobs did something wrong, whereas they were just trying to retain a bigger portion of the enormous value that they were creating.
GPU hardware is varied and changes frequently. Many API features are partially or entirely implemented in software (drivers or API runtime) complicating things. Console, mobile and desktop parts are different in terms of functionality. OpenGL is probably the closest to runs on everything. BGFX, sokol, WebGPU and other abstraction layers might be good fits as well. I think options are pretty good if you just want rasterized triangles, vertex and fragment shaders (and maybe compute). As you start to need better performance or other parts of the pipeline options are less clear cut.
> How is having a standard API to draw 3D graphics not a solved problem at this point?
It seems like it mostly is? The problems described in the article appear to be due to a mismatch between how Qt and Krita do things under the hood.
OpenGL has broad support (except Apple). The OpenGL ES subset adds web browsers and lots of embedded devices. ANGLE provides translation of ES to D3D, Vulkan, and soon even Metal.
Vulkan seems to work pretty much everywhere (except Apple of course). MoltenVK provides translation of v1.0 to Metal.
If departing from native hardware APIs is acceptable, gfx-rs appears to work today. WebGPU is well on its way to being fully implemented. Plus (as the article mentions) Qt 6 apparently intends to introduce its own custom abstraction layer?
OpenGL works badly on macOS and even worse on Windows unless translated through ANGLE. On Windows, using OpenGL directly will give performance problems and crashes _all the time_. And weird bugs, like red and blue swapping on some combinations of AMD GPUs and driver versions.
True, but Sony has some sort of completely custom thing going on there so it seems that there's zero chance of standardization in that case.
> Android until version 10
I thought Android got VK support way back in version 7 (Nougat)? And hasn't it always supported GL ES? Not being a closed platform, support for any API is of course dependent on the underlying GPU and associated driver.
> Win32 and UWP sandboxing
TIL. I hadn't realized Microsoft restricted access to Vulkan and GL from within the sandbox. It looks like ANGLE has supported GL ES for Windows Store apps since 2014? Regardless, I was under the impression that UWP wasn't very popular with developers anyway.
Android got optional Vulkan support in version 7, yes. And being optional meant most OEMs cared about it roughly as much as they care about optional updates.
Hence why, on Android 10, Google took a set of actions: it made Vulkan mandatory, started the path towards running OpenGL ES on top of Vulkan, and introduced GPU driver updates via the Play Store, as a means to force OEMs to actually care about Vulkan.
Just like many devices still don't ship with OpenGL ES 3.x, because it is optional as well.
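Because both Vulkan and GL ES 3.x are optional on those devices, apps end up probing at runtime and falling back. A minimal sketch of such a probe with the Vulkan C API (the fallback decision itself is just illustrative):

    #include <vulkan/vulkan.h>

    // Returns true if a Vulkan instance can be created and at least one
    // physical device is exposed; on devices whose OEM never shipped a
    // Vulkan driver this fails and the app should use its GL ES path instead.
    bool vulkanUsable() {
        VkApplicationInfo app{};
        app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
        app.apiVersion = VK_API_VERSION_1_0;

        VkInstanceCreateInfo ci{};
        ci.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
        ci.pApplicationInfo = &app;

        VkInstance instance = VK_NULL_HANDLE;
        if (vkCreateInstance(&ci, nullptr, &instance) != VK_SUCCESS)
            return false;

        uint32_t deviceCount = 0;
        vkEnumeratePhysicalDevices(instance, &deviceCount, nullptr);
        vkDestroyInstance(instance, nullptr);
        return deviceCount > 0;
    }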
There's a reason why I mentioned Win32 and UWP sandboxing and not just UWP. Yes, the pure UWP model, although quite nice to program for (to me it is what .NET 1.0 should have been all along), failed to gather the mass adoption that Microsoft hoped for, which is why, for the past two years, they have been changing course to merge both worlds, now officially known as Project Reunion.
As a matter of fact, here is the application model for the upcoming Windows 10X, where pico processes are used to sandbox Win32, WSL style, https://youtu.be/ztrmrIlgbIc
ANGLE for UWP was contributed by Microsoft themselves, and now they are working together with Collabora to support OpenGL and OpenCL.
Regarding Android, yeah I get that it's optional (I actually didn't know that recent versions had made it mandatory). If you view Android as analogous to Windows though then I think it makes sense. There's lots of different devices running Android, some of which aren't even phones. My point is that, similar to desktop GPUs, you can choose a "lowest common denominator" API based on the maximum age of the physical devices that you want to support.
I don't see a problem with that approach. In fact it seems to be about the best you can hope for when it comes to hardware APIs in general (unless you're Apple and control all the hardware) since things are constantly being revised and redesigned.
Regarding Windows, what are you saying here? That Windows Store apps will be getting Win32 support in the near future because the sandbox will finally be able to accommodate it (but VK and GL will still be blocked)? Or that native (unsandboxed) Win32 apps will become sandboxed (and thus restricted) in the near future? (I suspect the former, which is neat but doesn't change anything regarding graphics APIs.)
I did learn a few new things here but am still left with the general impression that Vulkan has reasonably broad (and increasing) support while OpenGL ES 2.0 can target pretty much everything worth supporting (including most web browsers). (Of course I'd strongly prefer to use a more modern API than ES 2.0 if it's available.)
The difference is that desktops get updates, while on Android targeting the latest versions is more wishful thinking, unless one only targets Pixel and Samsung flagship devices.
Win32 sandboxing is orthogonal to the store.
UWP is not about the store; people keep mixing this up, as it unfortunately refers to multiple things across the Windows stack.
UWP is also known as WinRT, UAP, or just modern COM. And sandboxing UWP applications doesn't necessarily require delivery via the store. Any MSIX package will do.
What Microsoft is now doing (officially as of Reunion) is detaching all this tech from the kernel, shipping it as userspace APIs across multiple Windows versions, and merging the UWP and Win32 stacks into one, hence Project Reunion.
Sandboxed Win32 applications don't need to be store only.
Right. What I read is that WebGPU performance is more consistent than Vulkan's across GPU vendors, and I understood that to mean it's higher-level. But perhaps it's just the promise of a better design.
That means that any project that wants to be multi-platform and support OSX will need some sort of graphics API abstraction anyway. Of course it's still nice to only have to implement Metal and Vulkan. But you're not getting away with depending on a single open standard.
This sentence from the article offers a clue. Of course Apple are not the only bad actors here. But basically, it's company politics.
> So at some point, the engine stopped working on macOS. Mostly because Apple doesn’t want people to use OpenGL, so they’re starving their drivers from development
You need an abstraction over different hardware implementations to have a standard API. Abstraction comes with a cost, and 3D graphics applications want to utilize hardware to its fullest capability. So basically it's not a simple problem, especially considering that hardware is constantly evolving and inventing new approaches. For example, I learned OpenGL about 15 years ago and never used any shaders. Now, AFAIK, shaders are everywhere.
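To make that shift concrete (my own sketch, not from the article): the fixed-function style of 15 years ago no longer even exists in core-profile OpenGL, where nothing is drawn without a compiled shader program.

    #include <GL/gl.h>

    // Legacy fixed-function OpenGL: no shaders anywhere, the driver does the
    // transform and shading for you. Still works in compatibility contexts.
    void drawTriangleLegacy() {
        glBegin(GL_TRIANGLES);
        glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
        glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
        glColor3f(0.0f, 0.0f, 1.0f); glVertex2f( 0.0f,  0.5f);
        glEnd();
    }
    // In a modern core profile this entire API is gone: vertex data must live
    // in buffer objects and a vertex + fragment shader pair must be bound
    // before a single triangle can be drawn.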
Probably the best thing you could do at this point is to use a game engine like Unity or UE as an abstraction.
Meanwhile we have been compiling the same C (and higher-level) code for MIPS, ARM, POWER, x86, and RISC-V for decades in portable ways while still getting performant code.
Now to be fair GPUs have a legacy of slowly becoming generic processors whereas 10+ years ago they were largely fixed function hardware. Writing generic code for a GPU today is totally sensible because it supports most arithmetic, primitive types, branching, etc.
But there is nothing stopping you from optimizing your GLSL compiler, any more than there is anything stopping a vendor from optimizing its platform-specific C compiler.
The problem here is that abstraction comes at a cost, and that cost is so great that we need even crazier abstractions to be able to work around it... and now the suggestion is, as you say, work with a whole engine. Ah, now I can draw a pixel on screen! I see it as a lot of wasted potential (even if maybe not that much wasted commercial potential).
The software's "main editor" uses a very traditional QGraphicsScene with a QGLWidget - no QML / QtQuick (QtQuick is a very good tool for a lot of use cases, but not the right tool for this one particular "traditional big desktop app" job imho).
The RHI-using part is for creating custom visuals (think applying various shaders to video & camera inputs for VJ), so I wrote my own scene / render graph leveraging it which renders in separate windows through QRhi.
It was mostly written at my Ballmer peak during last year's Christmas / New Year's Eve though :-) so it lacks a fair bit in code quality.
There's a graph of nodes. The graph is walked from every output node (screen surfaces) to create a matching "rendered node" (pretty much a render pass). For every node "model", a node "renderer" is created, which contains uniforms, GPU buffer (QRhiBuffer) & texture (QRhiTexture) handles, etc., and a QRhiGraphicsPipeline (the state the GPU must be in to render a node: the VAOs, shader programs, layout, culling, blending, and so on).
Then every time the output node renders, due to vsync or whatever, the associated chain of nodes (render passes) is executed in order.
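A rough sketch of that structure (my own illustration, not the commenter's code; the Node/RenderedNode types are hypothetical, only the QRhi* handle types are real Qt classes):

    #include <QtGui/private/qrhi_p.h>   // Qt RHI header; private before Qt 6.6, public <rhi/qrhi.h> after
    #include <vector>

    struct Node {                         // user-facing "model" node
        std::vector<Node*> inputs;
        // shader source, parameters, ...
    };

    struct RenderedNode {                 // per-node GPU state, the "renderer"
        QRhiBuffer*           ubuf     = nullptr;   // uniforms
        QRhiTexture*          target   = nullptr;   // output of this pass
        QRhiGraphicsPipeline* pipeline = nullptr;   // shaders, blending, culling, layout
    };

    // Walk backwards from an output node (a screen surface) and record one
    // render pass per node, inputs first, so passes execute in dependency order.
    void collectPasses(Node* n, std::vector<Node*>& order) {
        for (Node* in : n->inputs)
            collectPasses(in, order);
        order.push_back(n);               // real code would de-duplicate shared nodes
    }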
I recommend looking at the RHI manual tests in the Qt source; they show the usage in a very straightforward manner:
Interesting to me that they went for "Rendering Hardware Interface" with their own "cross-compile shaders both at compile and at runtime". I wonder about the exact scope of this work, and if they considered Vulkan Portability instead.
Don't take this the wrong way kvark, but is Vulkan Portability really the long-term answer here? Do we want everybody to be writing Vulkan code? It's unfortunate that, due to the mess of tilers, Vulkan got way more complicated than it likely should have; a lot of stuff wasn't properly thought out (i.e. pipeline derivatives, framebuffers, push descriptors), and we're even walking back a lot of the "atomic draw" promises with e.g. VK_EXT_extended_dynamic_state. Even D3D12 had its allocation APIs pretty badly neutered, leaving us in this pretty suboptimal place.
I love the promises that Vulkan in theory provides but in practice the gains haven't been as bold as we thought they would be. Explicit APIs have less lies than the older ones, but still are pretty far from zero.
I have always seen its goal as providing the means to create different user libraries for different specialized purposes on top of it.
Then you also get implementations of other APIs like Direct3D on top of Vulkan, which is amazing because you can reuse all your code and tooling and run everywhere.
Funny to read that they are stuck with an old Qt version, 5.12 to be specific. I am in a similar boat, stuck with Qt 5.11 as this is the last one with support for macOS 10.11.
And I am stuck on macOS 10.11 because it's the last with proper PDF subpixel rendering in Preview.app.
However, many Qt developers seem to upgrade to the newest Qt version as soon as possible. If it's open source software, I can compile it myself using the older Qt 5.11, which usually works. But by default, Qt is sadly not providing platform independence anymore.
I see no reason to support OSX 10.11 anymore, just like I don't see any reason to support Windows 7 anymore. We have stayed on Qt 5.12 because we haven't got the time (or are too lazy) to fix the bugs in later versions of Qt that break Krita, especially on Windows.
ANGLE is OpenGL ES 3.1 conformant with the Vulkan backend, which runs on Windows, Android, and Linux. A Metal backend was also added recently, though it doesn’t get nearly as much attention as D3D or Vulkan. Seems like fixing their ANGLE integration would be their best answer.
(Ask HN:) At this point, if you want to create a cross-platform hardware accelerated application, is it better to use a game engine like Unity rather than an application framework like Qt?
At work we have an upcoming GUI project that has lots of classical dialogue boxes, but also has a main display area that really needs to be hardware accelerated, and can show 2d and 3d stuff. It needs to be cross platform including mobile. I'm at a bit of a loss where to start.
I would bet that the easiest thing would be FLTK. You can use system colors and make an OpenGL window while having everything else you would expect in a UI library. Executables with no dependencies start at a few hundred KB and it is very fast. There is even a single big PDF for documentation and dozens of examples that show off every widget. FLTK is very underrated these days because no one is out there marketing it.
GLFW and IMGUI are great libraries, but once you start wanting a program with various modal and non-modal windows, menus, widget layout, a native look, file dialogs, fonts, drag and drop, copy and paste, unicode, spreadsheet tables, physical printing, custom widgets, functions to hide OS specific system stuff (like stat, directory listing etc.) you might wish you had started with something made for what you are trying to do.
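To give a feel for the FLTK route suggested above, here is a minimal sketch (it assumes FLTK was built with OpenGL support; the class and function names are FLTK's real API, the drawing itself is a placeholder):

    #include <FL/Fl.H>
    #include <FL/Fl_Gl_Window.H>
    #include <FL/gl.h>

    // The hardware-accelerated main display area: subclass Fl_Gl_Window and
    // put the GL drawing in draw(). Regular FLTK widgets handle the dialogs.
    class View3D : public Fl_Gl_Window {
    public:
        View3D(int x, int y, int w, int h) : Fl_Gl_Window(x, y, w, h, "view") {
            mode(FL_RGB | FL_DOUBLE | FL_DEPTH);
        }
        void draw() override {
            if (!valid())                       // first draw or after a resize
                glViewport(0, 0, w(), h());
            glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            // ... 2D/3D rendering goes here
        }
    };

    int main(int argc, char** argv) {
        View3D win(100, 100, 800, 600);
        win.show(argc, argv);
        return Fl::run();
    }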
CyberDildonics has a good point. Tcl/Tk might also be a good choice for the application widgets, as there are lots of high level language bindings. I use it exclusively and love it, though fltk is also very solid - just lacking the bindings.
> At work we have an upcoming GUI project that has lots of classical dialogue boxes, but also has a main display area that really needs to be hardware accelerated, and can show 2d and 3d stuff. It needs to be cross platform including mobile. I'm at a bit of a loss where to start.
I would look into IMGUI + GLFW. A lot of neat programs are being made with IMGUI these days, for example: https://remedybg.handmade.network/ RemedyBG's entire UI is IMGUI, including the menu bar, tabs, etc.
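For orientation, this is roughly the whole skeleton of such a program (a sketch assuming the stock Dear ImGui GLFW/OpenGL3 backends; error handling omitted):

    #include <GLFW/glfw3.h>
    #include "imgui.h"
    #include "backends/imgui_impl_glfw.h"
    #include "backends/imgui_impl_opengl3.h"

    int main() {
        if (!glfwInit()) return 1;
        GLFWwindow* window = glfwCreateWindow(1280, 720, "demo", nullptr, nullptr);
        glfwMakeContextCurrent(window);
        glfwSwapInterval(1);                       // vsync

        IMGUI_CHECKVERSION();
        ImGui::CreateContext();
        ImGui_ImplGlfw_InitForOpenGL(window, true);
        ImGui_ImplOpenGL3_Init("#version 130");

        while (!glfwWindowShouldClose(window)) {
            glfwPollEvents();
            ImGui_ImplOpenGL3_NewFrame();
            ImGui_ImplGlfw_NewFrame();
            ImGui::NewFrame();

            ImGui::Begin("Tools");                 // menus, widgets, etc. go here
            ImGui::Text("Hello from Dear ImGui");
            ImGui::End();

            ImGui::Render();
            glClear(GL_COLOR_BUFFER_BIT);
            ImGui_ImplOpenGL3_RenderDrawData(ImGui::GetDrawData());
            glfwSwapBuffers(window);
        }

        ImGui_ImplOpenGL3_Shutdown();
        ImGui_ImplGlfw_Shutdown();
        ImGui::DestroyContext();
        glfwDestroyWindow(window);
        glfwTerminate();
        return 0;
    }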
Dear ImGui is kinda cool, I played with it some a year or two ago. But it definitely has a certain look and feel to it that I think is only appropriate for certain apps (mainly developer tooling).
Lots of applications like that are done in Qt, e.g. 3D editing tools, game engine tooling, ... For mobile you might prefer doing independent apps (reusing some of the 3D code etc of course), but afaik other desktop frameworks are not better than Qt for that.
Google plans to add a Metal backend to ANGLE. Once that backend is done (and even the Fuchsia support), ANGLE will continue to be an extremely viable option.
The delay in having a Metal backend may have been the reason they're deciding to ditch it, but it is nonetheless sad.
Another future option will be WebGPU; once it is mainstream, what will Digia do with Qt 7? Drop everything again?
Since Digia got its hands on Qt, it has been slowly going downhill.
Once it's done (tm). And there are other projects higher on the priority queue.
However, note that I also use OOP. No inheritance (death to gobject!), but the whole thing is based around interface polymorphism. This actually simplifies the design significantly. And you can implement an imgui on top of it pretty easily, if you really like hidden state.
I would've thought that HN of all places would understand that OSS isn't the right solution for every project, and that downvotes like this, for someone just saying they might not make their work open source, are ridiculous.