I think the key is describing the user-facing effects and impact. This means knowing your users (are they developers, IT, or highly technical? Do they update quickly, whether automatically or manually, or do they take time to evaluate before rolling out updates?). It's a way to convey continuing value to users, convince them it's worthwhile to keep paying, and motivate them to install the update.
I'd try to change what you wrote to something like:
"Fixed out-of-memory errors on some 32-bit systems"
"Improved startup time for configurations with lots of items"
This makes it easy to gauge whether an update is important to install ASAP, for example because it fixes something they've experienced or are likely to experience.
Grouping "several UI fixes" or "performance improvements" is usually fine, but I tend to call out something like "fixed UI bug where a network error could result in changes not being saved with no warning" or an issue that a sizable chunk of users had reported.
Doing this well takes time, though, as you have to synthesize the internal ticket summaries, PRs, and/or commit messages and reword almost everything. Skipping it is understandable for open source or free/indie apps, but for subscription/"enterprise" software it's definitely one of the differences between "great" and merely acceptable (or worse).
> but for subscription/"enterprise" software it's definitely one of the differences between "great" and merely acceptable (or worse).
I couldn’t agree more.
I tried writing more informative but easy-to-digest release notes after a couple of vocal users at a trade show mentioned that they wanted to know more about our bug-fixing efforts.
To my surprise, we got a huge volume of messages from users saying they loved the new release note style and to keep it up.
It takes us about an hour per release to write. We ship small changes weekly and large changes monthly to about 1 million MAUs.
> I'd try to change what you wrote to something like:
From well-managed projects, I'm used to seeing a paragraph saying that there's new feature A, plus bug fixes and performance improvements.
Then a detailed changelog describing big feature A, smaller features B & C, and ten bug fixes, plus maybe 2-3 performance improvements that are expected to only -really- help with degenerate cases.
And then there are a dozen things that aren't considered important enough to make the list.
For closed-source systems a detailed changelog is very nice. "Fixed USB reset issues" is close to useless when you try the new firmware and it still exhibits USB reset issues. The more detail the better. And describing the actual changes ("increased XYZ timeout on USB resume") instead of the symptoms they tried to address ("we fixed the USB reset" - not) is much better.
I think focusing on "user-facing impact" is a good idea and something I'll be using in the future. One challenge is that developers don't always know the user-facing impact of their improvements.
For example, if I notice a logic error in a system and fix it, I may not know what software configuration, if any, could trigger the error. If it were obvious, QA would've found it. And given the choice between further investigation and leaving the bug unfixed, my employer would prefer the latter.
Or one time I was working on a new feature that required speeding up the system's implementation of malloc(). That improved the performance of the entire system to varying degrees, but pinpointing exactly where the user's experience improved would have required extensive benchmarking outside the scope of the work.
For us it doesn't take that much time, as everything goes through the bug tracker (even enhancements), so it's mostly a copy-pasta job. When I submit a fix for a bug, I'll often edit the ticket's subject to accurately reflect what the problem actually turned out to be.