> Rust has macros that make serde very convenient, which Gleam doesn't have.
To be fair, Rust's proc macros are only locally optimal:
While they're great to use, they're only okay to program.
Your proc-macro needs to live in another crate, and writing proc macros is difficult.
Compare this to dependently typed languages or Zig's comptime: there it should be easier to implement derive(Serialize, Deserialize) as a compile-time feature inside the host language.
Since Gleam doesn't have Rust-style derivation, it leaves room for a future where this is solved even better.
While more automation certainly is useful, I find that auto-generating changelogs in this manner has a number of problems:
Auto-generated changelogs lack business-aware context about what is important. You get a big list of new features, but which ones are the most important to stakeholders? You have a few breaking changes, but which ones are likely to have the most widespread impact? Without being judicious about what information is included, you risk overwhelming readers with line noise and burying important notes.
Some things go beyond the scope of a commit message - deployment nuance, interaction with other releases, featureset compatibility matrices. These are best summarised at the top level; they don't fit in individual disparate messages.
One of OP's motivations for starting this thread was to see how people tailor changelogs to different types of stakeholders; technical vs non-technical, for example. This approach doesn't solve that problem. In fact, I think it's worse due to an additional side effect: the commits are now forced to do double duty; they must be useful commits for developers looking at code history, but now they also must be useful messages to be included in a changelog. While there is some overlap, it's hard to do both simultaneously. One must pick between writing good commit messages for the codebase & developers, versus writing a coherent changelog.
As a matter of personal taste, I think it looks lazy. Changelogs are a unique opportunity to communicate something important: they're written once and read by many. With a list of commits, myself and all other readers must now put in the work to find out what's relevant - it's disrespectful of others' time.
> You get a big list of new features, but which ones are the most important to stakeholders?
I worked for one startup with one major customer who was really skeptical about investing further because of stability problems, feature delays, and lack of transparency. Along with a complete list of changes that gave them insight into how we prioritised between stability and feature development, I wrote a human summary of what this meant — experiments, summaries of statistics, a summary of the most important changes to business logic.
Writing personally to your stakeholders does not exclude being systematic, and vice versa.
> As a matter of personal taste, I think it looks lazy.
That’s funny, because I find the lack of automation to be the lazy choice: forgetting to add to the changelog because the requirement is only checked by humans, or single commits that fix things below some bar of noteworthiness that is entirely subjective and driven by lack of structure, or not writing commit messages worth putting in release notes (fix sht, asdasdasd, etc.).
> Changelogs are a unique opportunity to communicate something important: they're written once and read by many. With a list of commits, myself and all other readers must now put in the work to find out what's relevant - it's disrespectful of others' time.
When I migrate software, I’m very interested in the complete picture. I’ll ask my AI agent to go over the links in the changelog and summarise for me what the breaking changes are and what manual steps I need to take. Having them in human-readable form ahead of time would be nice.
Since git-cliff has different sections, I can skip changes to documentation. Because of SemVer, I know if there’s something breaking.
I like your idea of an additional human summary; that definitely helps.
> That’s funny, because I find the lack of automation to be the lazy choice.
Automation is cheap these days. Many automations make things that exceed human ability, but this isn't one of those cases. You'll get something good enough, but not great. Perhaps that's what your organisation has time and budget for, in which case your use of automation makes sense, but if we're trying to make the best audience-tailored summaries of software releases with a specific purpose, the strategy falls short.
> Forgetting to add to the changelog because the requirement is checked by humans,
Individual developers definitely can, which is why you must also have organizational process. If a valid changelog checked by a release manager is a requirement for a software release, it can't be forgotten.
> fix things below some bar of noteworthiness that is entirely subjective and driven by lack of structure. Not writing commit messages worth putting in release notes (fix sht, asdasdasd, etc.)
I'm curious about the implication of insignificant commits coming from a lack of structure. I think it's entirely normal to have single commits fix something innocuous. If there's a typo in one file, you wouldn't fix it as part of creating a feature, because that's work outside the scope of the feature. So it would have to be in its own commit, or alongside other similar refinements. And those examples are indeed unacceptable commit messages that would not make it through code review in any serious shop, but I get what you mean, and it's part of my point: developers are supposed to write commit messages for developers, and the needs of developers are different to the needs of people reading a changelog, so it's only natural that the text should be changed for the different audience.
> I’ll ask my AI agent to go over the links in the changelog and summarise for me what the breaking changes are and what manual steps I need to take.
I really think this is backwards and exactly the thing I was advocating against. The changelog you have is so large and poorly-structured that you need to use an AI to summarise it for you, and gather the information that should have been in the changelog in the first place. If that needs to be done to make the changelog useful, clearly its original state is deficient?
Yes, unions can be protectionist about their work force, but there are international worker unions; maybe this is a European thing.
An econ 101 observation: unions contribute to structural unemployment by keeping wages above market-clearing levels and by preventing wage adjustment.
Through collective bargaining, unions can negotiate wages that are higher than what the market would naturally set. This can lead to the cost of labor being too high for some employers, resulting in fewer jobs. Similarly, unions can prevent wages from adjusting to market conditions.
So for the common good, individuals may go without a job.
For markets to operate well, prices must be easily accessible by both buyers and sellers. Since corporations do their utmost to ensure that workers don't discover the wages and salaries of their peers, corporations suppress wages. So, corporations are bad for the common good.
The econ 101 observation feels like it falls apart under light scrutiny. The market sets a rate, but which rate is more "natural"? When individuals negotiate directly with employers, they tend to be at a disadvantage. An individual has less knowledge and bargaining power than an employer in almost all cases, so can we call the rate set by these negotiations the "natural" rate? Conversely, when bargaining collectively, employees are able to pool knowledge and resources to bargain more effectively, and they have more leverage as a group, which allows them to negotiate on a more even field with the employer. I would consider this outcome to be more "natural," and would argue that it is not that collective bargaining results in higher wages than the market would set, but that individual bargaining results in wages that are artificially lower than the market-clearing rate.
Unions are part of the market like anything else. If wages are higher, they aren't above market-clearing levels; those are the new market-clearing levels. If workers form a union and bargain collectively, that is what the market naturally set.
Do you apply the same argument to employers? Companies contribute to low wages. By collectively bargaining with employees (e.g. hiring at the local grocery store is centralized; you can't go around to all the individual managers and start a bidding war), they can negotiate wages that are lower than what the market would naturally set.
And I bet Costco's membership-based, warehouse-club model is a bad thing too, since they are able to negotiate prices lower than what the market would naturally set?
Econ 101 observations are utterly useless without the specific context in which they're made. This is like talking about spherical cows in a vacuum in the context of aerodynamics.
In the specific case of unions, they always forget to mention that a higher proportion of a company's income going to salaries generally means increased consumer spending by workers, which spurs other kinds of industry and services that may mean a net benefit for the global economy.
Of course second and third-order effects are not really talked about in Econ 101.
Loaded questions are a rhetorical device taught in high school persuasive writing courses as a tool to dominate a conversation. They're indicative of a bad-faith participant in a discussion.
Speculation masked as "econ 101"-level fact as a way to preemptively dismiss counterarguments is also pretty indicative of bad-faith participation; it just looks more polite in a comment section.
For one customer I do maintenance on a piece of software.
Building it produces about two to three dozen deprecation warnings.
The whole software stack relies on a cluster of packages that stopped receiving updates 5 years ago.
The software is not end-user-facing. But it does build using NPM and not via vendored packages.
To avoid those warnings, large parts of the code need to be rewritten using a different set of packages.
That doesn't get prioritised because it works.
The software sucks in many ways, but only from the perspective of an artisan.
The customer is happy to ignore warnings as long as the software does its job.
There isn't money in fixing things that work just because they got old.
The incentives for the people putting the deprecation warnings in those packages don't align with the users of those packages. Their timelines and motives are different.
Not gonna lie, Terranix has been working great for us; all our configuration is in Nix files anyway, so it's easy to just pass stuff in rather than using Tf variables, etc.
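For anyone curious what that looks like in practice, here is a rough sketch of a terranix-style module; the provider, resource names, and values are made up, and the exact option layout may differ from what your terranix version expects:

```nix
# terranix-style module: the attribute set below is rendered to Terraform JSON,
# so resource.<type>.<name> mirrors Terraform's own JSON structure.
{ config, lib, ... }:
let
  # plain Nix values shared with the rest of our configuration take the
  # place of Terraform variables
  region = "eu-west-1";
  sshKey = "ssh-ed25519 AAAA... deploy@example";  # placeholder key
in
{
  provider.aws = { inherit region; };

  resource.aws_key_pair.deploy = {
    key_name   = "deploy";
    public_key = sshKey;
  };

  resource.aws_instance.web = {
    ami           = "ami-0123456789abcdef0";  # placeholder AMI
    instance_type = "t3.micro";
    key_name      = "deploy";
  };
}
```

As far as I understand it, terranix renders this to a config.tf.json that plain Terraform consumes, so nothing downstream needs to know it came from Nix.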
- Maintaining stateful secrets at rest gives me the heebie-jeebies.
- The tools shouldn't let me shoot myself in the foot.
- The tools should ideally not have such a high learning curve that I won't actually use them.
You can put your secrets in a separate repository and not think of them as the same kind of repository you'd publish.
Like... I wouldn't put a git-crypt'ed / sops-nix'ed repository online, simply because I don't like the idea that now all anyone needs is brute force; I know quantum computers aren't there yet wrt. brute-forcing stuff made by random people like me, but even hypothetically having this attack vector open, I don't like it.
So there are only two good solutions:
- You put secrets in a (hashicorp-style) vault that only decrypts temporarily in memory.
- You put secrets in an encrypted database with only safe tool integration.
The things I don't like about git-based secrets management:
1. You might mix your secrets into projects and then later someone else might release that (against your current interest)
2. The solutions I've seen (sops-nix, agenix, secrix, etc.) are hard to set up and even harder to onboard people on
When something's hard to set up, you might make a mistake or skip some concept.
Well-done secrets management that isn't based on a service like AWS Secrets or GitHub Secrets should be much, much easier.
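To make the setup cost concrete, a minimal sops-nix configuration on NixOS looks roughly like this (the paths, the secret name, and the way the module is imported are illustrative; the sops-nix README is the authoritative reference):

```nix
# configuration.nix fragment: assumes sops-nix is available as a module,
# e.g. via a flake input passed in through specialArgs (hypothetical wiring)
{ config, inputs, ... }:
{
  imports = [ inputs.sops-nix.nixosModules.sops ];

  # encrypted file committed to the repo; only ciphertext ever hits git
  sops.defaultSopsFile = ./secrets/secrets.yaml;

  # age key on the host, used to decrypt at activation time
  sops.age.keyFile = "/var/lib/sops-nix/key.txt";

  # declares a secret that ends up decrypted under /run/secrets
  sops.secrets."service-token" = {
    owner = "my-service";  # illustrative user
  };
}
```

Services then reference config.sops.secrets."service-token".path instead of a literal value, so the plaintext never lands in the world-readable Nix store. That's the happy path; the hard part described above is mostly key management and teaching everyone the sops/age workflow.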
I like the idea of how easy this is. Now, if only it were also best practice in every possible way at the same time.
The (admittedly well-known) problem with lockenv is that you can't revoke access once a password is known.
> the cost of parts definitely has risen in the same tiers if you look over a long enough period
This is especially apparent if you’re a hardware manufacturer and have to buy the same components periodically, since the performance increase that consumers see doesn’t appear.
> if you... buy the same components periodically... the performance increase that consumers see doesn’t appear.
Good point, and that should properly be called inflation in the semiconductor sector. We always have general inflation, but the different sectors of the economy exhibit different rates of inflation depending on the driving forces and their strength.
As of today, tariffs are the major driver of inflation, and semiconductors are hit hard because the only high-volume, reasonable quality/price country has been practically excluded from the US market by export bans and prohibitively high tariffs - that's China of course.
The other producers are in a near monopoly situation and are also acting like a cartel without shame or fear of law... which isn't there to begin with.
They let you produce SD-card images with custom NixOS'es.
Very useful when you want an exact software layout, and exact system settings, like what user accounts and SSH keys to include, what systemd services should run, what directories should be tmpfs, and how to interact with the local network using avahi.
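As a sketch of what such an image definition can look like (the user, key, hostname, and service are placeholders, and the sd-image module path has moved around between nixpkgs releases):

```nix
# sd-image.nix: a self-contained aarch64 NixOS SD image (sketch)
{ pkgs, ... }:
{
  imports = [
    # path as of recent nixpkgs; older releases kept it under installer/cd-dvd
    <nixpkgs/nixos/modules/installer/sd-card/sd-image-aarch64.nix>
  ];

  # exact accounts and SSH keys baked into the image
  users.users.deploy = {
    isNormalUser = true;
    extraGroups = [ "wheel" ];
    openssh.authorizedKeys.keys = [ "ssh-ed25519 AAAA... deploy@example" ];
  };
  services.openssh.enable = true;

  # reachable as fieldbox.local on the local network via avahi/mDNS
  networking.hostName = "fieldbox";
  services.avahi.enable = true;
  services.avahi.publish.enable = true;
  services.avahi.publish.addresses = true;

  # keep a scratch directory on tmpfs
  fileSystems."/var/cache/myapp" = {
    device = "tmpfs";
    fsType = "tmpfs";
    options = [ "size=256m" "mode=0755" ];
  };

  # a custom systemd service shipped in the image
  systemd.services.myapp = {
    wantedBy = [ "multi-user.target" ];
    serviceConfig.ExecStart = "${pkgs.hello}/bin/hello";  # stand-in for the real app
  };
}
```

Something like nix-build '<nixpkgs/nixos>' -A config.system.build.sdImage -I nixos-config=./sd-image.nix should then produce an image you can dd straight onto the card.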
> unless fixing a bug requires a significant refactor/rewrite, I can’t imagine spending more than a day on one
Race conditions in 3rd-party services during, or affected by, very long builds, with poor metrics and almost no documentation. They only show up sometimes, and you have to wait for them to reoccur. Add to this a domain you’re not familiar with, and your ability to debug needs to be established first.
Stack two or three of these on top of each other and you have days of figuring out what’s going on, mostly waiting for builds and speculating about how to improve debug output.
After resolving, don’t write any integration tests that might catch regressions, because you already spent enough time fixing it, and this needs to get replaced soon anyway (timeline: unknown).