His PhD thesis explains the thinking behind Erlang, especially how it handles failures, message passing, and concurrency. It was last updated in 2003, 22 years ago; time really flies!
> The death of a program happens when the programmer team possessing its theory is dissolved. A dead program may continue to be used for execution in a computer and to produce useful results. The actual state of death becomes visible when demands for modifications of the program cannot be intelligently answered. Revival of a program is the rebuilding of its theory by a new programmer team.
Lamport calls it "programming ≠ coding", where programming is "what you want to achieve and how" and coding is telling the computer how to do it.
I strongly agree with all of this. Even if your dev team skipped any kind of theory-building or modelling phase, they'd still passively absorb some of the model while typing the code into the computer. I think that it's this last resort of incidental model building that the LLM replaces.
I suspect that there is a strong correlation between programmers who don't think that there needs to be a model/theory, and those who are reporting that LLMs are speeding them up.
An example of an information-dense GUI that might be a bit overwhelming is Ghidra: https://en.wikipedia.org/wiki/Ghidra (the page includes a basic screenshot; you can fill your screen(s) with as many sub-windows and information panes as you want)
As a side note, while trying to find examples I realized just how few websites there are (any more?) that show a lot of information at the same time. The worst recent offender is YouTube's redesign, with only 3 video tiles in a row on a 1440p screen; luckily that's easily fixed via a uBlock rule.
Anyone with a love of CSV hasn't been asked to deal with CSV-injection prevention in an enterprise setting, without breaking various customer data formats.
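For anyone who hasn't had the pleasure: the usual mitigation is to quote-prefix any cell that starts with a formula trigger character. A minimal sketch (trigger set per the commonly cited guidance; real requirements vary by spreadsheet) also shows exactly why it breaks legitimate customer data like negative numbers:

```python
# Minimal sketch of the usual CSV-injection mitigation: prefix cells that
# start with a formula trigger so spreadsheet apps treat them as text.
import csv, io

FORMULA_TRIGGERS = ('=', '+', '-', '@', '\t', '\r')

def sanitize_cell(value: str) -> str:
    if value.startswith(FORMULA_TRIGGERS):
        return "'" + value  # neutralizes =SUM(...), @cmd, etc.
    return value

buf = io.StringIO()
csv.writer(buf).writerow([sanitize_cell(v) for v in ['=2+5', 'safe', '-1234']])
print(buf.getvalue())  # '=2+5,safe,'-1234  <- note the mangled negative number
```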
In case the other answers aren't sufficient: the first step is to understand the λ-calculus[0], then De Bruijn indices[1]. Now, observe that the language we have only has three constructs (you need familiarity with the λ-calculus to understand these terms; pun unintended): 1/ applications, 2/ abstractions, 3/ integers representing variables [introduced by abstractions]. For example:
(λ (λ 1 (λ 1)) (λ 2 1))
Binary λ-calculus is then merely about finding a way to encode those three things in binary; here's how the author does it (from the blog post):
00 means abstraction (pops in the Krivine machine)
01 means application (push argument continuations)
1...0 means variable (with varint de Bruijn index)
The last one isn't quite clear, but she gives examples in `compile.sh`.
To check your understanding, you may want to try to manually convert some λ-expressions using those encoding rules, starting with simple ones, and check what you have with what `compile.sh` yields.
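To make those three rules concrete, here's a toy encoder, a sketch that assumes the variable index is written in unary (i ones followed by a zero, matching the 1...0 pattern above) and reads the example term as an outer abstraction over an application:

```python
# Toy encoder for the three constructs above. Terms are nested tuples:
# ('lam', body), ('app', fn, arg), ('var', i) with 1-indexed de Bruijn i.
def encode(term):
    kind = term[0]
    if kind == 'lam':  # 00 means abstraction
        return '00' + encode(term[1])
    if kind == 'app':  # 01 means application
        return '01' + encode(term[1]) + encode(term[2])
    if kind == 'var':  # 1...0 means variable, index in unary
        return '1' * term[1] + '0'
    raise ValueError(f'unknown term: {term!r}')

print(encode(('lam', ('var', 1))))  # identity (λ 1) -> 0010

# (λ (λ 1 (λ 1)) (λ 2 1)), the example term from earlier
example = ('lam', ('app',
                   ('lam', ('app', ('var', 1), ('lam', ('var', 1)))),
                   ('lam', ('app', ('var', 2), ('var', 1)))))
print(encode(example))
```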
This code is very beautifully written, thanks for sharing.
However, you should consult the book "C: Interfaces and Implementations" by Dave Hanson, which has a library called "Arena" that is very similar to your "Pool"s, and it shows a few more tricks to make the code safer and easier to read (Chapters 5+6, 6 in particular).
D. Hanson's book (written in the literate programming style invented by Donald Knuth) also demonstrates having debugging and production implementations of the same C API to catch memory errors, and his book is from 1997.
This event is predicted in Sidney Dekker’s book “Drift into Failure”, which basically postulates that in order to prevent local failures we set up failure-prevention systems that increase the complexity beyond our ability to handle it, and introduce systemic failures that are global. It’s a sobering book to read if you ever thought we could make systems fault tolerant.
I have been using ChatGPT pretty consistently for various summarization-type tasks for a bit.
But this weekend I tried using it to learn a technical concept I've struggled with. Previously, I could have spoken about the topic intelligently, but could not have put it to any practical use.
In a few quick prompts and < 1 hr of synthesis, I was able to get to practical knowledge: I understood it well enough to try building something from scratch. And I built something in another 2-3 hours.
I think this would have taken me 1-2 months of dedicated head-against-the-wall time to learn previously.
In that moment, I had a realization similar to the author.
Basic flow of the prompt went something like:
1. Tell me about this thing that's part of the concept I don't understand.
2. Explain this new term you threw at me. Do it in the context of an industry I understand well.
3. Explain yet another concept you mentioned, in the same industry context.
4. This sounds a lot like this other concept I know. How do these differ in practical use?
5. Write me some sample code, in a language I understand very well, that does the X use case you mentioned.
6. What are these other concepts you used?
7. How can I use this new concept to do something I've implemented before?
Essentially, my knowledge of other concepts, industries and being able to draw parallels allowed me to learn much much faster.
The issue isn’t that MBAs have cost-reduced bulbs for no reason. The issue is that 95% of consumers will only choose the cheap bulbs, period. As a result, that’s what gets produced at scale.
> We know how to mass-produce quality LEDs to the point entire TVs are made of the things.
They’re not the same thing. Displays are optimized for specific R, G, and B color points. White LEDs are optimized for a full, smooth spectrum.
I am burnt out (but recovering!) with web dev and htmx is what I am using for my project.
Django, DRF, Postgres, tailwind and HTMX.
I am so tired of all the front-end frameworks and all the complexity that gets added. At some point I think you need it and you get returns from it, but hearing more people in the industry recognize and talk about how JS-everything isn't always the answer gives me hope.
I like what HTMX has to offer and I am excited to see it continue to get air time.
This is classic “efficiency” vs “antifragility.” There are some things that are okay to occasionally fail, and for those it’s often good to pursue efficiency. But for something critical like electricity, the market collectively decided - with good financial justification - that it would be more profitable to just let the system fail occasionally than to be hardened against long-tail events.
In systems engineering, if we want this not to happen, we usually intentionally add volatility: things like putting a chaos monkey into your infra so that long-tail failures become common and thus the necessary hardening can’t be missed.
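As a toy illustration of what "intentionally add volatility" means in practice (the names and failure rate here are hypothetical, not any particular chaos tool's API):

```python
import random

def chaotic(fn, failure_rate=0.05):
    """Wrap a dependency so it fails at a configurable rate; retry and
    fallback paths get exercised constantly, not just during rare outages."""
    def wrapper(*args, **kwargs):
        if random.random() < failure_rate:
            raise ConnectionError('chaos: injected failure')
        return fn(*args, **kwargs)
    return wrapper

# e.g. fetch_prices = chaotic(fetch_prices) in a staging environment,
# so unhandled failures surface in testing rather than in production.
```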
I’m not quite sure what this would look like for an electricity market. Maybe something like huge fines for having your capacity go offline, which would require insurance, and the insurers would then require winterization or charge way higher rates that make your plant uneconomical. Or a regulation that says all plants must be winterized to a certain degree, so the cost of electricity is simply higher for everyone to pay for mitigation of long-tail risk. The fines approach is more indirect but - assuming it works - it leaves room for edge cases and innovation that the “orderly but dumb” regulation does not.
I agree, and I'm trying to do something about it with two no-build libraries:
https://htmx.org - an 11k library that pushes HTML forward as a hypermedia, making things like AJAX, SSE and CSS transitions available directly in HTML
https://hyperscript.org - a 20k library that replaces jquery/javascript with an inline, HyperTalk inspired syntax
Both can be included on a site without any build infrastructure, and are attempts at bringing the joy of hypermedia and the joy of scripting, respectively, back to the web.
I’ve found Ray Dalio to have some of the most interesting takes on this subject. [1] [2] [3] He’s the (retired) founder of the world’s largest hedge fund and he tries to examine these topics through economic and historical lenses, which seems like as good an approach as any.
He also explains a lot of macroeconomic material in a way that isn’t very jargon heavy (which can sometimes be hard to find, at least for the schools of thought popular among people who do it professionally).
While BeautifulSoup is great, lxml + xpath really is the way to go. XPath is a W3C standard and works cross-language and even in the browser.
If you need a quick way to scrape JavaScript-generated content, you can just open your browser console and use `document.evaluate` with an XPath query.
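On the Python side, the lxml version of that workflow looks something like this (the document and selector are made up for illustration):

```python
from lxml import html

doc = html.fromstring("<ul><li class='item'>foo</li><li class='item'>bar</li></ul>")
# The same XPath query would work in the browser via document.evaluate
for text in doc.xpath("//li[@class='item']/text()"):
    print(text)  # foo, bar
```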
By the way, the technical side of this is very interesting. If you look at the tools mentioned (the wayback machine, but also perma.cc and other archival solutions), almost all of them rely on a single semi-modern tech stack that produces WARCs (web archives, ISO 28500:2017, https://iipc.github.io/warc-specifications/specifications/wa...).
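If you want to poke at WARCs yourself, warcio (from the Webrecorder project) is the usual entry point; a minimal reader looks roughly like this:

```python
from warcio.archiveiterator import ArchiveIterator  # pip install warcio

with open('example.warc.gz', 'rb') as stream:
    for record in ArchiveIterator(stream):
        if record.rec_type == 'response':
            print(record.rec_headers.get_header('WARC-Target-URI'))
```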
Still, I've read through the code of these tools and my feeling is that they're failing in the face of the modern web, with single-page apps, mobile phone apps and walled gardens. Even newer iterations with browser automation are getting increasingly throttled, blocked and excluded from walled gardens.
Perhaps the time has come for a coordinated, decentralized but omnipresent approach to archival.
I've had 2 friends start revenue-generating MVPs on Bubble.io.
One is a Cameo-like platform for LatAm, the other a TikTok-like short-form audio app.
I've been incredibly impressed by the platform and extensibility. If you have basic coding skills I think it's much more powerful than Webflow or other platforms.
Off on a complete tangent here, only remotely related.
Yo, fellow four-eyed geeks.
I noticed that when lying down and looking up in a room with a light source, with my glasses on, if there is a small droplet of water on the lens (say, due to having just cleaned the glasses and not wicked up every last drop), this droplet can bend the light in such a way that when it subsequently hits the cornea and pupil, a perfectly straight, parallel beam of light passes through the vitreous humor right to the retina.
The consequence of this is that all the junk floating in your eye is rendered in "4K clarity" in a circle of light, regardless of whether it is next to the retina, or floating somewhere in the middle.
Imperfections in your cornea or elsewhere are also rendered, making themselves evident.
It's very powerful, lets you display in multiple different formats (not just 1/2/4/8 bytes, but interlaced formats and byte-arrays) and has the most amazing templating / scripting engine I've seen for this type of tool.
The only caveat is that it isn't free, but if this is something you do for a living (as I do) it's an indispensable tool for exploring file formats and other binary data sources.
Also, that blog is made with pelican and hosted on GitHub Pages, so you can see some of pelican's features. I've been using it for more than 7 years and am really happy with it (the fact that I am familiar with python and can debug some problems myself definitely helps)
MediaWiki has a skin system, and judging from other comments here this appears to be treated as a new skin, or an iteration of the already-default "Vector" skin. Earlier redesigned skins all tried to be responsive themselves rather than using the MobileFrontend extension and splitting the site into two separate device-specific skins.
You can append `?useskin=skinname` on Wikipedia, or any MediaWiki site, to switch the current view to the given skin. For instance, Timeless is installed on wikipedia.org, so you can see what it looks like by going to https://en.wikipedia.org/wiki/Ruth_Bader_Ginsburg?useskin=ti...
The list of installed skins is on the wiki's Special:Version page.
It's a slow process (as I've been respectful of the rate limit on the reverse-geocoding API I'm using), but I've finished mapping four states and hope to complete the other 46 by the end of this week.
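For the curious, the rate-limit-respecting loop is nothing fancy, roughly this shape (the endpoint here is hypothetical, not the API I'm actually using):

```python
import time
import requests

def reverse_geocode_all(points, delay_s=1.0):
    results = []
    for lat, lon in points:
        resp = requests.get('https://geocoder.example/reverse',
                            params={'lat': lat, 'lon': lon})
        results.append(resp.json())
        time.sleep(delay_s)  # stay politely under the API's rate limit
    return results
```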
It's not often you see an image and immediately think "yep, that'll be in some history book in short order". Just utterly divorced from reality, and a sign of a _very_ healthy society...
If you can't see it: An image with "Dow's Best Week Since 1938" on a graphic while breaking news declares "More than 16 M Americans have lost jobs in 3 weeks"
Over the past few weeks, I've been setting all my command prompts and terminal emulators to old-school fonts. I've settled on PxPlus IBM VGA8[0] with color #FFB000 (monochrome amber) on black. If font antialiasing blurs things, I use Nouveau IBM[1].
I'd highly recommend reading SQL Antipatterns. It's a very approachable book that illustrates how to design a database schema by taking commonly encountered scenarios and first showing you the naive approach. After explaining why this is bad, it then shows you the recommended way of doing it and why this way works better.
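To give a flavor of that naive-then-fixed structure, here's one classic example from the book (the "comma-separated list in a column" antipattern) as a throwaway sqlite sketch; the specific schema is mine, not the book's:

```python
import sqlite3

db = sqlite3.connect(':memory:')

# Naive: a multi-valued attribute jammed into one column.
# You can't index, join on, or constrain individual tags.
db.execute('CREATE TABLE post_naive (id INTEGER PRIMARY KEY, tags TEXT)')
db.execute("INSERT INTO post_naive VALUES (1, 'sql,design,review')")

# Recommended: a junction table, one row per post/tag pair.
db.execute('CREATE TABLE post (id INTEGER PRIMARY KEY)')
db.execute('CREATE TABLE post_tag (post_id INTEGER REFERENCES post(id), tag TEXT)')
db.execute('INSERT INTO post VALUES (1)')
db.executemany('INSERT INTO post_tag VALUES (1, ?)',
               [('sql',), ('design',), ('review',)])

# "Which posts have this tag?" becomes a plain lookup instead of a LIKE hack.
print(db.execute("SELECT post_id FROM post_tag WHERE tag = 'sql'").fetchall())
```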
I think 'learning from how not to do something' is a really powerful pedagogical technique that should be more widespread.