And I imagine there will likely be a sudden increase in the number of classified software projects and national security systems in the next 180 days. This may very well be another case of a law trying to make things better, but ultimately having the opposite effect.
I've said it before, but I feel like I need to keep saying it: If the people actually using/consuming a product aren't willing to pay the cost required to make/produce said product, then that's probably a good indicator that the product doesn't have enough value to justify its existence. If the Verge can't convince enough people to pay for their output, I'd argue it's not worth keeping around. Advertising ruins the producer/consumer relationship and incentivizes behaviors that disadvantage the consumer and push the producer away from their original purpose. If the Verge wants to diversify income, then they should provide a product that has value to a wider audience.
That doesn't mean that every single person consuming/using the product needs to pay either. Patreon and similar services have proven that if what you make is of high enough quality and provides enough value, it's possible to convince a large enough subset of the consumers to pay enough to cover the costs of production. Especially with online services where the cost difference between producing for 1 person vs 1000 is nearly negligible. The few can, and often do, subsidize the many. And if that changes over time, then the product ceases to be valuable enough to be worth producing and should simply stop being made.
And I'd say this is even true of news and things that people might argue are for the public good. In my opinion, most "news" today isn't actually informative and doesn't really serve the public. But for the vanishingly little that does: if it informs and educates the citizenry in a way that improves the lives and stability of the country as a whole, and if its natural cost is more than can reasonably be covered by willing consumers, then there's a reasonable argument for covering the cost with taxes. Put sufficient legal barriers between the government funding and the content being produced to prevent government manipulation/propaganda and to make sure the press can operate unimpeded. My best idea for such a government-funded news source is to write into law that its budget depends solely on the country's population and GDP, that all staffing decisions are made internally, and that leaders are chosen by citizen vote, but I'm sure there are other ways to maintain a free but government/tax-funded press agency. Ad funding, though (which I think even public radio/TV has effectively come to rely on), creates perverse incentives that drive the mission away from actually informing and educating.
People can still pay for tech reviews, travel advice, talking-head current-events opinion shows, or whatever form news currently takes, if they want. Privately owned/operated press can still exist if enough people find it useful, valuable, and worth the cost. But advertising just ruins everything eventually.
> If the people actually using/consuming a product aren't willing to pay the cost required to make/produce said product, then that's probably a good indicator that the product doesn't have enough value to justify its existence.
This is not true. In many cases we can see from people's repeat usage that they value a service, but... they often want other people to pay for it.
There are a ton of things that many people would like but aren't willing to pay for. Would I like a personal chef and a personal driver at my beck and call? Sure. But I'm not willing to pay what that would cost for routine use. I do pay for some scheduled and infrequent personal care services.
Also, government funding comes with strings of its own, which may become more obvious than you'd like in the coming years.
“Eventually” is doing a ton of heavy lifting in your argument.
Ads have been a primary revenue driver for news media going back hundreds of years, including to when the US was codifying the civic value of news into its governing principles.
The idea that revenue choice is binary or that if you have to use ads you shouldn’t be in business is a very modern one, and one that’s outside of historical norms.
The idea that it's wrong to use slavery as a cheap labor force is also modern and outside historical norms. Just because it's historically prevalent doesn't make something right or even good/acceptable. Some things based on historical norms are good. Some clearly aren't. And it seems to me a bit disingenuous to ignore the technological changes and the speed, pervasiveness, and invasiveness with which advertising is now done compared to the historical context being appealed to. Even if one grants that advertising wasn't "as bad" back then, that doesn't make what we have now acceptable.
I’m more commenting on the idea that ads ruin things over time. If that’s the case, then it’s on a timescale of hundreds of years (or news media was ruined long ago).
I would argue that in many ways (that vary from project to project) the duplicated efforts are a good thing overall. Now, instead of having one single group or person working on a particular topic, there are many and the knowledge, learning, experience, and expertise that comes from that isn't locked up with only a select few. That can be applied in more areas, used for more things, or carried into other endeavors. It also hedges against all that expertise being lost if the few experts of the original project abandon it for whatever reason.
I don't see all the efforts that go into *BSD, Windows, macOS, etc... as being wasted just because Linux is available, nor would I consider the effort to try something completely new in os/kernel design that starts from a clean slate a waste, even if it doesn't end up becoming wildly popular. Not everyone who has the capacity to work on a duplicate effort would be able to contribute to the original, and the original will never be able to perfectly meet the goals/needs of everyone.
Additionally, having worked with some of their network devices at the driver level, I'd describe them as kludges piled on top of hacks, poured over a soup of workarounds for hardware bugs. Maybe they've gotten better recently, but their drivers didn't paint a great picture.
> it also requires GIGABYTES of crap you need to download on windows
I was surprised by this, since all I need on Linux is LLVM and Clang, but looking at the official getting-started page, it does indeed say the "MSVC compiler and Windows SDK" are required. Is that really the only way to run it on Windows, or is that just the path most familiar to a typical Windows developer?
I personally disagree with saying it requires IDE support. I want to write a ctags parser for it, but that's all I would ever use; as a die-hard vim user, I never liked all the language server stuff people are so reliant on these days. But if that's what people want to use, it's available for Odin as well: https://odin-lang.org/showcase/ols/ -or- https://github.com/DanielGavin/ols
And not depending on the platform linker was a massive undertaking in Zig... I don't think many languages have actually implemented their own cross-platform linker.
But anyway, if you're in the Odin target audience on Windows you most likely have Visual Studio installed anyway. Also IIRC even Clang and Rust depend on the MSVC linker on Windows, Zig is really an "outlier" (in a good way) in that regard.
IIRC it's about 6 or 7 GB that you need to install on Windows. Zig can do it in 75 MB, and Go, I think, is around the same. Rust has the same problem, though.
As for IDEs: people who use plain editors like vim, emacs, etc. exist, but their numbers are merely a statistical error compared to the people who use IDEs (I'm counting VS Code in this category even though it's just Electron). Of course we could debate what is and isn't an IDE, but the point is that we need syntax highlighting, refactoring, jumping to definitions, finding usages, and the other functionality that IDEs provide.
I thought the funding section was a little odd; I was under the impression that Bill was being paid by JangaFX to develop/maintain the language? But I think the corporate sponsorship of language development should be the norm. If existing languages are not sufficient to solve a business need, then they should pay for the development of a new one (or directly support one they rely on). But making a language "popular" and widely used is directly opposed to making it paid. There are plenty of closed languages out there, but they're only used by the corporations that developed them, and are probably kept closed to make sure the language doesn't stray from their business needs, and/or to maintain a competitive advantage.
Otherwise, I'm generally a fan of Odin, but I do find it quite irritating that the only place to ask questions and participate in the "community" is locked behind Discord. I even gritted my teeth and tried to make a Discord account just for this, but Discord wouldn't accept my (apparently mandatory) phone number. Community questions and answers need to be readable and searchable without yet another login. If I'm learning a language and can't find the answer to a question that was almost certainly asked already, that's just another stumbling block that will keep me from using said language.
> Only huge projects can afford to have multiple Discords, Telegrams, IRCs, Wikis
There's one option available to small projects and actually you already named it yourself:
> relay bots.
Take the Nim community, for example. It's not huge by any measure, but we have a fairly active forum[0], an occasionally active Telegram channel, and most of the activity is on Discord, IRC, and Matrix. I've grouped these three because they're almost seamlessly connected into one platform with relay bots. You can join any of several bridged platforms[1] and talk to everyone on Discord, Gitter, Matrix, etc., with quotes, pings, and attachments working as you'd expect.
It is certainly an extra burden to moderate and manage all of this, but at least you end up with an IRC archive[2] that's indexable and searchable[3].
On the other hand, having Discord as your *only* place for discussion is just foolish. I know several people, including myself some years ago, who simply nope out of using a project when they see that the only place to get support is a Discord channel.
And if that's the case, me too. I understand that IRC is plain and boring, but I avoid Discord.
Shame; I'd never seen Odin before and got excited over the showcases. I've been looking to tinker with another language outside the main three (Python, Go, Rust), and this looks nice.
Another incredible agent in WWII was Virginia Hall, aka the limping lady, who managed some pretty incredible feats, escapes, and rescues and did so with only one leg.
The Russian sniper women still don’t have a good Hollywood movie about them. The Russian government would be upset at anyone bringing it up, but the female snipers often worked in pairs, and you can totally find some truthful bits of lesbianism to include…
But instead of looking to real history that actually happened, we get the fake-history nonsense of the BF5 trailer.
Have you seen libAgar (https://libagar.org)? Cross-platform support is certainly there, covering everything from Windows XP (and earlier) to *BSD and SGI IRIX. I'm not sure what supporting accessibility requires, as I've never had to worry about it, but I'm curious 1) whether Agar has what's needed, and 2) what exactly is required of a GUI library for accessibility. Screen reader support? (Are there screen-reader standards for desktop applications?) Dynamic scaling? High contrast?
(For embedded and/or touch-first UIs, LVGL is pretty nice, though it's probably lacking any semblance of accessibility features beyond keyboard navigation, which you could hook up yourself.)
This is accurate. I'm an OS/kernel developer, and a colleague was given the task of porting Rust to our OS. If I remember correctly, it did indeed take months. I don't think mrustc was an option at the time, for reasons I don't recall, so he did indeed have to go all the way back to the very early versions and work his way through nearly all the intermediate versions. I had to do a similar thing porting Java, although that wasn't quite as annoying as porting Rust. I really do wish more language developers would provide a practical way of bootstrapping their compilers, like the article is describing/attempting. I've seen some that do a really good job. Others seem to assume only *nix and Windows exist, which has been pretty frustrating.
I'm curious as to why you need to bootstrap at all? Why not start with adding the OS/kernel as a target for cross-compilation and then cross-compile the compiler?
The article mentions that the Bootstrappable Builds folks don't allow pre-generated code in their processes, they always have to build or bootstrap it from the real source.
That's interesting! What kind of OS did you write? It sounds like you didn't think supporting the Linux system call interface was a good idea, or perhaps even feasible?
It's got a fairly Linux-like ABI, though we don't aim or care to be 1:1 compatible, and it has/requires our own custom interfaces. Porting most software written for Linux is usually pretty easy, but we can't just run binaries compiled for Linux. So for languages whose compiler is written in the language itself, where they don't supply cross compilers or bootstrapping compilers built with the lowest common denominator (usually C or C++), things can get a little trickier.
The practical concern for my colleagues and me is that we're OS/kernel developers for an operating system that isn't currently supported. I had to fight these kinds of problems to get Java ported to our OS, and a coworker had to do it for Rust, which was much, much harder. He did end up having to start from one of the earliest versions and compile nearly every blasted version between then and now to get the latest. It's a royal pain and a major time sink. If there had been a viable rustc written in C or even C++ at the time, we could have been done in a few days. Instead it took months.
As in your other comment there seems to be some confusion between bootstrapping and porting here? If you want to port Rust to a new OS then you ‘just’ need to add a new build target to the compiler and then cross-compile the compiler for your OS (from an OS with existing Rust support). That may indeed be a lot of work, but it doesn’t require bootstrapping the compiler, so this project wouldn’t be of any help in that scenario.
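For what it's worth, here's a rough sketch of what the "add a new build target" step can look like on the Rust side. All the names below are hypothetical (a made-up `myos` target), and the exact fields accepted (especially the `data-layout` string and linker-flavor values) depend on your rustc/LLVM version, so treat this as illustrative rather than copy-pasteable: nightly rustc accepts an out-of-tree target definition as a JSON target-spec file, which is often enough to start cross-compiling `core` for a new OS before touching the compiler source at all.

```json
{
  "llvm-target": "x86_64-unknown-none",
  "arch": "x86_64",
  "os": "myos",
  "data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128",
  "target-pointer-width": "64",
  "target-endian": "little",
  "linker-flavor": "ld.lld",
  "linker": "rust-lld",
  "panic-strategy": "abort",
  "executables": true
}
```

Then something along the lines of `cargo +nightly build -Z build-std=core,alloc --target ./x86_64-myos.json` gets you cross-compiled core libraries. Getting `std` ported (with all the OS-specific syscall shims) and then cross-compiling the compiler itself is of course the much larger job being discussed here, but none of it requires re-bootstrapping from the earliest rustc versions.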