Hacker News | dsab's favorites

C++ is not just C++ but also the C preprocessor, the STL, the linker, the C libraries and SDKs you can't help but depend on, the build system, the build scripts, the package manager, the IDEs and IDE add-ons, the various quirks on various platforms, etc. That's on top of knowing the code base of your application.

Being really good at C++ almost demands that you surrender entire lobes of your brain to mastering the language. It is too demanding and too dehumanizing. Developers need a language and a complete tool chain designed as a cohesive whole, with as few implicit behaviors, special cases, and clever tricks as possible. Simple and straightforward. Performance tweaks, memory optimizations, and anything else that is not straightforward should be done exclusively by the compiler; i.e., we should be leveraging computers to do what they do best, freeing our attention so we can focus on the next nifty feature we're adding.

Zig is trying to do much of this, and it is a huge undertaking. I think an even bigger undertaking than what Zig is attempting is needed. The new "language" would also include a sophisticated IDE/compiler/static-analyzer/AI-advisor/Unit-Test-Generator that could detect and block the vast majority of memory safety errors, data races and other difficult bugs, and reveal such issues as the code is being written. The tool chain would be sophisticated enough to handle the cognitive load rather than force the developer to bear that burden.


The linked CVE-2024-2961 article is a pretty fantastic read on its own:

https://www.ambionics.io/blog/iconv-cve-2024-2961-p1

People are so creative, I can't help but feel some hope for our future :)


HathiTrust is a fine example of a repository which is in theory useful but in practice all but useless.

Participation is limited to tertiary academic institutions, and possibly only four-year (rather than two-year) ones. This excludes local (city/county) libraries, as well as primary/secondary (grammar / middle / high school in the US) libraries.

Even public-domain records cannot be downloaded whole; they can only be saved one page at a time as PDFs. I'm pretty sure that those interested in more useful archival will have (or already have) created automated tools to do so, but HathiTrust remains the most notable point of access for such works, and the additional generation of conversion and republication further degrades the quality of original-publication formats. (It's less of a problem for works regenerated from OCR'd or manually converted documents, but those of course lose all the characteristics of the original publication.)

And of course, many materials still under copyright are not accessible to the general public at all, no matter how obscure. I'd run into a case of this some months back trying to get a date attribution of an Alan Watts lecture which had been posted to HN:

<https://news.ycombinator.com/item?id=41231047> (thread).

And my request still stands. Anyone with an academic affiliation who can check <https://catalog.hathitrust.org/Record/000678503> and see how it relates to this post (<https://news.ycombinator.com/item?id=41230841>) would have my gratitude.


I have the opposite experience, working in embedded (C, not Rust...). Building a synchronous API on top of an async one is hell, and making a blocking API asynchronous is easy.

If you want blocking code to run asynchronously, just run it on another task: I can write an API that queues up the action for the other task to take, plus a few functions to check current state. It's easy.

To build a blocking API on top of an async one, I now need a lot of cross-thread synchronization. For example, NimBLE provides an async Bluetooth interface, but I needed a sync one. I ended up having my API calls block waiting for a series of FreeRTOS task notifications from the code executing asynchronously in NimBLE's Bluetooth task. This was a mess of thousands of lines of BLE handling code that involved messaging between the threads. Each error path had to be manually verified to send an error notification. If a later step does not execute, whether through a library bug or a missed error condition on our side, we deadlock. If the main thread continues because we expect no more async work, but one of the async callbacks then fires, it accesses invalid memory, causing who knows what to happen, and maybe corrupting the other task's stack. Miss a single notification-sending point in the code, and we deadlock.


Be veeeery careful: the STM32H7 QSPI peripheral is FULL OF very nasty bugs, especially the second version (the one that supports writes) that you find in STM32H0B chips. You are currently avoiding them by having QSPI mapped as device memory, but the minute you attempt to use it with the cache, run code from it, or (God help you) put your stack, heap, and/or vector table on a QSPI device, you are in for a world of poorly debuggable 1-in-1,000,000 failures. STM knows but refuses to acknowledge it publicly, even while privately admitting that some other customers have "hit similar issues". Issues I've found, demonstrated to them, and written reliable reproductions for:

* non-4-byte-sized writes randomly lost about 1/million writes if QSPI is writeable and not cached

* non-4-byte-sized writes randomly rounded up in size to 2 or 4 bytes with garbage, overwriting nearby data about 1/million writes if QSPI is writeable and cached

* when PC, SP, and VTOR all point to QSPI memory, any interrupt has about a 1/million chance of reading garbage instead of the proper vector from the vector table if it interrupts a LDM/STM instruction targeting the QSPI memory and it is cached and misses the cache
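Bugs like the first two are why some firmware routes every store to a flaky memory-mapped window through an aligned 32-bit read-modify-write helper, so the bus never sees a sub-word access. A generic sketch of that defensive pattern (this is not one of the author's undisclosed workarounds; `qspi_write_bytes` is a made-up name, demonstrated here against a plain RAM buffer, little-endian like the Cortex-M7):

```c
#include <stddef.h>
#include <stdint.h>

/* Defensive store helper: every byte lands via an aligned 32-bit
 * read-modify-write, so the memory bus only ever sees 4-byte-sized,
 * 4-byte-aligned accesses to the mapped window. Sketch only; on real
 * hardware `window` would point at the memory-mapped QSPI region. */
static void qspi_write_bytes(volatile uint32_t *window, size_t offset,
                             const uint8_t *src, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        size_t pos = offset + i;
        volatile uint32_t *word = &window[pos / 4];
        unsigned shift = (unsigned)(pos % 4) * 8;   /* little-endian byte lane */
        uint32_t w = *word;                         /* aligned 32-bit read  */
        w &= ~(UINT32_C(0xFF) << shift);
        w |= (uint32_t)src[i] << shift;
        *word = w;                                  /* aligned 32-bit write */
    }
}
```

On the actual part this could at best help with the write-size bugs, not the vector-fetch one, and anything of the sort would need to be validated against the 1-in-a-million failure rates described above.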

Some of these have workarounds that I found (contact me). I am refusing to disclose them to STM until they acknowledge the bugs publicly.

I recommend NOT using STM32H7 chips in any product where you want QSPI memory to work properly.


You cannot talk about the problems with pharma pricing without talking about enclosures [1]. Consider:

1. Health care providers are largely banned from importing drugs [2];

2. Medicare is largely banned from negotiating drug prices [3];

3. The VA was allowed under Obama to negotiate drug prices, something which was promised but never delivered for Medicare. The GAO shows this has reduced costs [4];

4. Pharma companies will tell you R&D is expensive. It is, but it's largely the government paying for it: basically all novel new drugs have relied on public research funds [5];

5. Pharma companies generally spend more on marketing than R&D [6];

6. What R&D pharma companies actually do is typically patent extension [7].

The true "innovation" of capitalism is simply building layers and layers of enclosures.

[1]: https://en.wikipedia.org/wiki/Enclosure

[2]: https://journalofethics.ama-assn.org/article/what-should-pre...

[3]: https://www.healthaffairs.org/content/forefront/politics-med...

[4]: https://www.gao.gov/products/gao-21-111

[5]: https://www.cbc.ca/news/health/drugs-government-funded-scien...

[6]: https://marylandmatters.org/2024/01/19/report-finds-some-dru...

[7]: https://prospect.org/health/2023-06-06-how-big-pharma-rigged...

