Advantages of using this library are that it uses intrinsics (SIMD) to accelerate operations. There is a lot of Microsoft money & time that has been invested into these code piles.
I also see the guys from Intel constantly stabbing at all these low-level types to optimize them too. There are optimizations in .NET 10 for processors that aren't even released yet.
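For a sense of what that buys you, here's a tiny snippet of my own (not from Helion): the System.Numerics operations below get JIT-compiled to SIMD instructions on supported hardware.

    using System;
    using System.Numerics;

    Vector3 a = new(1f, 2f, 3f);
    Vector3 b = new(4f, 5f, 6f);

    float dot = Vector3.Dot(a, b);        // lowered to SIMD by the JIT
    Vector3 n = Vector3.Normalize(a + b); // operators are accelerated too
    Console.WriteLine($"{dot} {n}");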
I suspect it's part of the fun? A way to really learn something?
There's also another hint:
// THIS FILE WAS AUTO-GENERATED.
// CHANGES WILL NOT BE PROPAGATED.
// ----------------------------------------------------------------------------
(Of course this could be the result of something unrelated to the contents of the file, but maybe the author has a meta library that can generate the types in different languages.)
There seem to be fixed-precision variants of the vector types as well, which don't appear to be available in the .NET framework.
Plus, of course, you can't add your specific needs to library types (like the fixed precision). They are closed to modification.
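To illustrate what I mean by fixed precision, here's a rough sketch of a 16.16 fixed-point vector type (my own invention, not the author's actual code; the original Doom used a 16.16 fixed_t as well). You simply can't bolt extra fields like this onto the sealed System.Numerics structs:

    // 16.16 fixed-point 2D vector; purely illustrative.
    public readonly struct Fixed2
    {
        public const int FracBits = 16;
        public readonly int X, Y;

        public Fixed2(int x, int y) { X = x; Y = y; }

        public static Fixed2 FromDouble(double x, double y) =>
            new Fixed2((int)(x * (1 << FracBits)), (int)(y * (1 << FracBits)));

        public static Fixed2 operator +(Fixed2 a, Fixed2 b) =>
            new Fixed2(a.X + b.X, a.Y + b.Y);
    }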
I am just guessing, of course.
That being said, it would also make total sense to use the .NET types.
Historically the .NET and XNA vector types have been seriously lacking for real graphics development, and they still don't even provide swizzling. It's likely that this project predates .NET numerics by many years, and anyone who has had a pet project for long enough will learn to avoid becoming too dependent on libraries and platforms that will die out.
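For anyone unfamiliar, swizzling is the GLSL/HLSL-style component shuffling like v.xzy. The closest you can get in C# today is hand-written extension methods, something like this sketch (names are mine):

    using System.Numerics;

    static class SwizzleExtensions
    {
        public static Vector3 Xzy(this Vector3 v) => new(v.X, v.Z, v.Y);
        public static Vector2 Xy(this Vector3 v)  => new(v.X, v.Y);
    }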
All I'm seeing is they got their hands on the domain, which can be (and has been in the past) just part of whatever settlement they agreed on, and the game press spun that into "Nintendo bought Ryujinx".
This looks interesting and I'm going to take a look later. Just a minor nitpick up front though: I think the performance graph should be a bar graph instead of a line graph, mainly since the in-between states don't have much meaning, as you can't be halfway between two different GPUs.
Those discussions are a bit misleading. Original Doom updates its state only 35 times a second, and ports that need to remain compatible must follow that (though interpolation and prediction tricks are possible for visually smoothing the movement). The rendering engine is also completely orthogonal to polygon-based 3D accelerators, so all their power goes unused (apart from, perhaps, image buffers in fast memory and hardware compositing operations). Performance on giant maps therefore depends on CPU speed. The point of this project is making the accelerator do its job with a new rendering process.
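To make the interpolation point concrete, here's a minimal sketch (names are mine, not from any particular port): the game still ticks at 35 Hz, but the renderer can draw at any rate by blending the last two tics.

    using System.Numerics;

    static class TicInterpolation
    {
        // t is the fraction of the current 1/35 s tic elapsed at render time, in [0, 1].
        public static Vector3 ViewPosition(Vector3 prevTic, Vector3 currTic, float t) =>
            Vector3.Lerp(prevTic, currTic, t);
    }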
Though I wonder how sprites, which are a different problem orthogonal to polygonal rendering, are handled. So, cough cough, Doxylamine Moon benchmarks?
"Rendering engine is also completely orthogonal to polygon-based 3D accelerators"
Software rendering engine, yes (and even then you can parallelize it). But there is really no reason why Doom maps can't be broken down into polygons. Proper sprite rendering is a problem, though.
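To illustrate: a Doom wall segment is just two 2D map vertices plus floor and ceiling heights, which trivially becomes two triangles for the GPU. A sketch (illustrative names, not Helion's actual code):

    using System.Numerics;

    static class WallTriangulation
    {
        // One-sided wall -> two triangles (six vertices).
        public static Vector3[] ToQuad(Vector2 v1, Vector2 v2, float floorZ, float ceilZ) => new[]
        {
            new Vector3(v1, floorZ), new Vector3(v2, floorZ), new Vector3(v2, ceilZ),
            new Vector3(v1, floorZ), new Vector3(v2, ceilZ), new Vector3(v1, ceilZ),
        };
    }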
Sure, that has been done since the late '90s release of the source code, both by converting visible objects to triangles to be drawn by the accelerator (glDoom, DoomGL), or by transplanting game data and mechanics code into an existing 3D engine (Vavoom used the recently open-sourced Quake).
However, proper recreation of the original graphics would require shaders and the much more extensive, programmable modern pipelines, while the relaxed artistic attitude (or just contemporary technical limitations) unfortunately resulted in a trashy Y2K amateur 3D shooter look. Leaving certain parts to software meant that the CPU had to do most of the same things once again. Also, 3D engines were seen as a base for exciting new features (arbitrary 3D models, complex lighting, free camera, post-processing effects, etc.), so the focus shifted in that direction.
In general, CPU performance growth meant that most PCs could run most Doom levels without any help from the video card. (Obviously, map makers rarely wanted to work on something that was too heavy for their own systems, so complexity was also limited for practical reasons.) 3D rendering performance (in non-GZDoom ports) was boosted occasionally to enable some complex geometry or mapping tricks in popular releases, but there was little real pressure to use acceleration. On the other hand, the linear growth of single-core performance stopped long ago, while the urges of map makers haven't, so there might be some need for “real”, complete GPU-based rendering.
As I said, the traditional Doom BSP-walker software renderer is quite parallelizable. You can split the screen vertically into several subscreens and render them separately (does wonders for epic maps). The game logic, or at least most of it, can probably be run in parallel with the rendering.
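Something like this sketch, assuming a column-based renderer where each screen column is independent (my own illustration, not Helion's code):

    using System;
    using System.Threading.Tasks;

    static class StripRenderer
    {
        // renderColumns(x0, x1) walks the BSP and draws columns x0..x1-1.
        public static void RenderFrame(int screenWidth, int stripCount, Action<int, int> renderColumns)
        {
            int stripWidth = (screenWidth + stripCount - 1) / stripCount;
            Parallel.For(0, stripCount, strip =>
            {
                int x0 = strip * stripWidth;
                int x1 = Math.Min(x0 + stripWidth, screenWidth);
                renderColumns(x0, x1);
            });
        }
    }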
And I don't think any of the above is necessary. Even according to their graphs popular doom ports can render huge maps at sufficiently high fps on reasonably modern hardware. The goal of this project, as stated in the doomworld thread, is to be able to run epic maps on a potato.
Even just updating the graphs would be helpful. There appear to have been several releases since 0.9.2.0, including a bump from .NET 7 to .NET 8 (and a bump to .NET 9 in dev).
The more recent .NET versions by themselves are likely to have some impact on performance, let alone any changes in Helion's code between versions.
the Doom in TypeScript types project wouldn't have been possible without Nick and Helion - I owe Nick a huge thanks! He helped with some of the more obscure parts of the engine and also helped make the super small WAD that the game eventually ran.
Microsoft has really been putting a lot of focus on improving it with each release. I love reading through the blog articles for each major release that outline all the performance improvements that were made: https://devblogs.microsoft.com/dotnet/performance-improvemen...
A warning for those not in the know: the performance improvement posts famously give mobile browsers trouble because they are so massive, owing to the sheer extent of the improvements (and the amount of detail the posts go into about them).
And if you look at the PRs for the core, there are Intel people hacking away at the low-level routines too, to make it run better on their latest server CPUs.
The benchmarks look a bit sketchy... is the framerate uncapped for all the other engines, and has vsync been disabled? It's a very odd graph to look at, but great performance regardless.
If the authors wanted to protect engine development while allowing indies to sell games made on it, they would have picked LGPL or a more permissive license.
You are technically correct, and I believe the GPL doesn't cover the assets for the game (levels, art, audio, etc.), but I suspect there aren't many GPL licensed games out there for sale that have sold enough copies to make developing them worthwhile financially.
I'd love to be wrong, so if you have a few examples, I'm all ears.
Probably not much in the AA/AAA space, but plenty of indies. The Doom engine (and GZDoom, which is the most common Doom engine derivative) is GPL and there have been multiple commercially successful games released using it. I know at least Hedon[0] and Hands of Necromancy[1] sold enough copies to warrant a sequel.
GPL vs LGPL definitely isn't a blocker for a commercial game, in any case.
Remember the GPL only applies to the code: you can make a great game with beautiful artwork and distribute the source code to anyone who wants it. Nobody playing the game will have much fun without the artwork.
Carmack has a post from ages ago wondering why no one does that with the id engines they open-sourced, which were pretty current back then. He was talking about the Quake (2?) source code dumps, I think.
The GPL license will allow people to take the Quake 3 engine and even go so far as to release a commercial product with it - provided that the source code is published alongside. Nobody has done this with any of the Quake engine games yet, but he hopes to see it happen someday.
You can sell them on PC, but any dream of console releases is dead in the water, as Sony etc. forbid distribution, or even public sharing, of code that uses their SDKs.
All of the methods defined here:
https://github.com/Helion-Engine/Helion/blob/20300d89ee4091c...
are available in the kitchen sink:
https://learn.microsoft.com/en-us/dotnet/api/system.numerics...
Same idea applies to methods like GetProjection, which could be replaced with methods like:
https://learn.microsoft.com/en-us/dotnet/api/system.numerics...
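For instance, something along these lines (the parameter values are made up, and exact equivalence would depend on Helion's handedness and depth-range conventions):

    using System;
    using System.Numerics;

    Matrix4x4 projection = Matrix4x4.CreatePerspectiveFieldOfView(
        fieldOfView: MathF.PI / 2f, // vertical FOV in radians (90 degrees here)
        aspectRatio: 16f / 9f,
        nearPlaneDistance: 0.1f,
        farPlaneDistance: 1000f);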