Quote from Wikipedia: "The Claude E. Shannon Award was established in his honor; he was also its first recipient, in 1973."
That must be a bit awkward to receive a prize named after yourself.
- Turing never won the Turing Award.
- Knuth did, but he never won a Knuth award.
- Dijkstra "kind of" won the Dijkstra Prize: he won the PODC Influential Paper Award, which was renamed after Dijkstra's death to Dijkstra Prize his honour (making the process not awkward).
I was wondering if there were other examples, so I Googled "people who won prizes named after them". The results I got from the "AI Overview" were:
Helen Dunmore
The first winner of the Women's Prize for Fiction, formerly known as the Orange Prize, in 1996 for her novel A Spell of Winter
Dame Jean Iris Murdoch
Won the Booker Prize in 1978 for The Sea, the Sea. The Booker Prize trophy is named "Iris" after her.
Walter Payton
Won the NFL Man of the Year Award in 1977. The award was named after him after his death in 1999.
Taylor Swift
Won the Taylor Swift Award at the 2016 BMI Pop Awards, becoming the second artist after Michael Jackson to have an award named after them.
Stuart Parkin
Won the Draper Prize in 2024 for developing spintronic devices that allow for cloud storage of large amounts of digital data
The first and last ones are true but irrelevant. The others are legitimate but not exactly what we're talking about here (it turns out that the Taylor Swift Award was just given that one time; it's not like they gave it to her in 2016 and then kept giving it to other people in future years). The Walter Payton case is kind of analogous to the Dijkstra one. The Taylor Swift case would be like the Shannon one if they'd kept giving it out.
A difference of 1 IQ point in the 100-101 range might correspond to a difference in absolute problem-solving ability of x units, while the difference between 170 and 171 corresponds to y units.
"hallucination" here is a technical term for language model output that make no sense to the user.
They are a direct result of the way current neural language models work: they are trained under a regime in which random words in sentences are "masked" (hidden), and the masked word, together with its sentence, is presented as the solution to a riddle to the large neural network that is the language model (large language model, LLM). Over time, the LLM learns what the hidden words could be, in the form of a probability distribution. A related, similar training regime exists for predicting the next sentence, given a sentence.
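To make the masked-word setup concrete, here is a minimal sketch using the Hugging Face transformers fill-mask pipeline; the model name and the example sentence are my own illustrative choices, not something from the comment above:

```python
# Minimal fill-mask sketch: ask a masked language model for its probability
# distribution over the hidden word. Assumes the `transformers` package is
# installed; "bert-base-uncased" is just one convenient example model.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# "[MASK]" is BERT's mask token; other models may use a different one.
for candidate in fill("The capital of France is [MASK]."):
    print(f'{candidate["token_str"]:>10}  p={candidate["score"]:.3f}')
```

The point is that the model never returns "the" answer, only a ranked distribution of plausible fillers, which is exactly where confidently wrong completions come from.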
Instead of directly modelling the propositional/logical meaning of a sentence, LLMs learn language only via such statistical properties, which Leech (1981), in his seminal book _Semantics_ (2nd ed.), called "associative meaning".
Occasional wrong, impossible or meaningless responses from LLMs are a result of that architecture. People have dubbed the effect "hallucination", which I find a bit misleading: the term wrongly humanizes a crude mechanism, and, worse, it connotes a person who is not functioning properly. In fact, the effect is a property of the model when it works exactly as expected, just an undesired one.
As a non-US citizen, I find it shocking that such a high number of US
citizens need to live on food stamps, so I checked the numbers.
Indeed, it's 41.2 million out of a total of 334.9 million Americans, or 12.3%, i.e. more than one in ten people. That it is more than one in ten surprised me, because the US is by some counts the "richest" country on the planet.
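For the record, here is the arithmetic behind that percentage, written out (numbers as quoted above):

```latex
\[
  \frac{41.2\ \text{million}}{334.9\ \text{million}} \approx 0.123 = 12.3\,\% > \frac{1}{10}
\]
```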
It's merely the country with the richest few. Perhaps this calculation is just a way to show statistically what many believed all along: the so-called "American dream" is a pipe dream for most, in the sense that the majority of people simply fund a tiny few's success, the way lottery-ticket buyers fund a few select millionaires who don't deserve it.
Many corporations pay so little that people have to be on assistance even though they are gainfully employed. Thus, corporations off-load their costs onto the American taxpayer. This is also true for some people in the US military.
Here are the yearly trends in food stamp participation from 1985 to 2020; I don't know why the 2021, 2022 and 2023 data is not shown.
From 1985 until 2008, the number of people on benefits stayed roughly around 25 million. In the same period, the US population grew from 220M to 300M, roughly 1% every year.
From 2008 to 2013, the number of people on SNAP roughly doubled, peaking at 47.54M people, while the population grew from 300M to 315M.
2019 was the lowest point in recent history, with only 35.29M people on SNAP; by then the population had grown from 315M to 330M.
I averaged the monthly data from 2021 onward and got 2021: 41.6M, 2022: 41.2M, 2023: 42.1M and 2024 through July: 41.6M.
For a long time, poverty in the US was shrinking as a percentage of the population. 2008 reversed that trend, with things starting to get better after 2013 and really accelerating up until 2019. It's been flat since the post-Covid growth.
So to everyone saying "the economy is back to normal, we have recovered": there are 5M people who don't feel it.
“Number of Americans on food stamps” doesn’t mean much. The benefit tapers off pretty quickly, and you can end up getting something like $5.31 a month. Many people who qualify don’t bother.
Food stamps were historically a subsidy for farmers as much as a welfare program. I read something about that changing, but I don’t know the details.
But I qualified for food stamps long after I was making good money.
During the Great Depression, all government benefits had a work requirement, with very few exceptions such as physical disability. Once that work requirement was removed, many took the path of just getting by on minimum benefits and not working.
That's hilarious. Any chance you remember what you said to the Chinese hairdresser versus what you should have said, to protect fellow HNers from such a mishap?
I thought myself smart, so with the help of Google Translate I found the Chinese characters that were supposed to spell "3 cm"[0]. I copied them down, and in the barbershop I proudly showed them to the barber, who nodded and invited me to the chair. The guy was stellar, but halfway through the cutting it dawned on me that something was wrong. Turns out, what I thought was "cm" actually spelled "mm"!
He would've likely double-checked with me if I had tried to spell this out at the shop[1], but apparently I came across as someone who really knew what they wanted, coming in confidently with the order already precisely written in Chinese and all.
Lesson learned. I still think the idea was good, and I'd still go for giving an explicit length (it's a natural fit, as it translates to guard numbers on electric clippers). I'd just triple-check it next time, and not act like I have it all figured out.
--
[0] - Or thereabouts; I'm sure about the unit, but the exact number might've been something else between 2 and 6.
[1] - The barber didn't know English, but knew the metric system and Arabic numerals, so we confirmed the misunderstanding with pen and paper.
I just gesticulated wildly towards my head while making the "scissors" hand sign and loudly and slowly saying "HAIRCUT". I have no idea how that was misconstrued.
When I've needed haircuts in China, I've always had someone take me to a barber. Problem solved.
In one case, I was walking in a shopping area, someone approached me looking to sell souvenir artwork, I explained that while I didn't need that I was looking for a haircut, and she offered to take me to a barber provided I bought a picture. Everybody wins.
Not to mention the ugly/unusable rendering of mathematical formulae in ebooks on my Kindle, which is gathering dust.
Layout is an art and a craft, and the fact that it's automated by people who lack the specialized knowledge, or for whom it is not a priority (quarter-century-old bug reports, really?), suggests that in 2025 you should still avoid ebooks if you care about quality and aesthetics.
This is a shame because e-ink is just becoming usable. Anyhow, long live the paper book!
For anyone who might want to jailbreak their Kindle in the future: you'll want to enable airplane mode, otherwise it will automatically update its firmware (patching the jailbreak), and there's no way to disable that.
It'll keep updating itself as long as it's powered on, even if you haven't used it in months, and there's no telling how long it'll take for current firmware versions to be supported; the latest jailbroken version is 17 months old.
Blame the (metal) compositor unions which back in the day bargained for sinecures where all their members were guaranteed perpetual employment rather than choosing to participate in the digital revolution.
Fortunately, some folks did work to preserve the craft and beauty of books --- Dr. Donald Knuth taking a decade off from writing _The Art of Computer Programming_ to create TeX (though initially he thought he'd do it over a sabbatical) is one shining example.
Robert Bringhurst's _The Elements of Typographic Style_ also made a huge difference (I've lost count of how many copies I've given as gifts to folks).
A further issue is that doing a good page layout over an entire chapter (or book if the pagination is continuous) is an NP-hard problem --- I've had a chapter come out correctly on a first pass exactly once in my career (fastest 40 minutes of my life). The usual work-flow is something like:
- check all characters to ensure that hyphens are properly set, en and em dashes replace them where appropriate, and correct the setting of any instances of what should be special characters such as prime or double primes
- assign all formatting and ensure that all heads and paragraphs have settings which will forbid widows/orphans and verify that the callouts for all figures/photos/tables are correct
- review the entire chapter from beginning to end, page by page, verifying that each ends as it should at the bottom of the page, and that a referenced element shows on that page spread
- for instances where things don't work out, check to see which paragraphs can be adjusted to run longer or shorter by one or more lines, adjusting this until one finds a set of adjustments which results in a proper appearance for the page/spread --- repeat for all future pages --- if a particular spread/figure placement is a problem, back up and see if changing previous pages will fix it --- check the last page to ensure that it is full enough, if not, adjust previous spread, if that doesn't work, see if running the entire chapter long or short by a line will fix it.
- review the entire chapter again to ensure that there are no bad breaks or stacks, add discretionary hyphens or non-breaking spaces or adjust paragraph settings as necessary, ensuring that pages still base-align
If someone wants to write an ePub reader or page formatter which can do that, I'd be glad to see it.
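For the curious, here is a minimal total-fit line breaker in the spirit of Knuth-Plass dynamic programming. It's my own toy sketch (width measured in characters, cubic badness for leftover space); a real pagination engine would have to extend the same idea across pages, figures and spreads:

```python
# Toy total-fit line breaking: choose break points that minimize the sum of
# cubed leftover space over all lines (the last line is free), via dynamic
# programming over suffixes. Widths are counted in characters for simplicity.
def break_lines(words, width):
    n = len(words)
    INF = float("inf")
    cost = [INF] * (n + 1)   # cost[i]: best total badness for words[i:]
    nxt = [n] * (n + 1)      # nxt[i]: index of the word starting the next line
    cost[n] = 0
    for i in range(n - 1, -1, -1):
        length = -1          # running line length, counting inter-word spaces
        for j in range(i + 1, n + 1):
            length += len(words[j - 1]) + 1
            if length > width:
                break
            badness = 0 if j == n else (width - length) ** 3
            if cost[j] + badness < cost[i]:
                cost[i] = cost[j] + badness
                nxt[i] = j
    # reconstruct the chosen lines from the break points
    lines, i = [], 0
    while i < n:
        lines.append(" ".join(words[i:nxt[i]]))
        i = nxt[i]
    return lines

print("\n".join(break_lines("the quick brown fox jumps over the lazy dog".split(), 14)))
```

Even this toy version shows the essential structure: every legal break point is a state, and the formatter has to search globally rather than deciding line by line, which is why good pagination over a whole chapter gets expensive so quickly.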
Fascinating, but as an ebook consumer my standards are quite a bit lower. I’m happy if the relevant figures are on the same page as the text (but that’s important), and the spacing is not absolutely awful.
If this is still a problem thirty years after the invention of the web, then I say: so much the worse for mathematical notation. In the future, mathematical ideas will be expressed in other ways.
Mathematical notation, even when considering all its faults, won't be easy to replace.
Simple math, maybe. But for anything complex, any alternative notation would require completely different ways of expressing things, if your aim is to make it more readable for newcomers, that is.
JPEG is the absolute worst possible solution here. If MathML or similar is not supported, use an SVG or PDF so that it's zoomable and not made of pixels. It's also slightly readable by screen readers (although you probably want some sort of alt-text for those anyway).
If no vector formats are supported, use PNG, or another "lossless" format not JPEG. JPEG's compression is designed for photos where the probability of 2 neighbouring pixels being the same is tiny. Note that PNG doesn't have to be lossless - if you want to shrink the file size you can reduce the resolution or the colour space.
Even GIF is a much better choice than JPEG for a diagram, mathematical formula or logo with hard edges and a small number of colours. SVG is usually the right choice, (but don't do what one designer did for me and embed a JPEG in an SVG instead of giving me an SVG direct from Illustrator or Inkscape).
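As a rough illustration of the vector-versus-raster difference, here is a sketch that renders the same formula once to SVG and once to PNG using matplotlib's built-in mathtext; the formula, figure size and file names are arbitrary examples, and matplotlib is assumed to be available:

```python
# Render a formula once as vector (SVG) and once as raster (PNG).
# The SVG stays sharp at any zoom level; the PNG is fixed to its pixel grid.
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(3, 1))
fig.text(0.5, 0.5, r"$\int_0^\infty e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2}$",
         ha="center", va="center", fontsize=22)
fig.savefig("formula.svg", transparent=True)            # vector: zoomable, crisp edges
fig.savefig("formula.png", dpi=200, transparent=True)   # raster: fine at one size only
```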
I have not once seen a formulae-as-images solution that I would consider acceptable, aesthetically. Common problems are:
- It is almost impossible to align the baseline of the formula with the baseline of the surrounding text.
- Often, images are only used for "complex" formulae, while simple ones are implemented using normal typesetting. This resolves the baseline issue for simple formulae, but now the fonts between simple and complex formulae don't match. (This requires extra concentration for the reader, as in other contexts, different font styles are frequently used meaningfully.)
One advantage of ereaders is that the fonts can be set to any size convenient for the device and the person reading it.
A fixed pixel size image of something you want to read does not go well together with rendered text at all. It's okay for photos, but very definitely not for formulas, which are basically mostly text.
I'm not even talking about the aesthetics: the fonts differ, because they too can be set by the user, and so does the layout, since what's inside the image is fixed and untouchable by the renderer that handles all the rest of the text.
On my tablet I can use two fingers to zoom. But I pretty much never need to do that with a full size tablet. That's why I bought one with the retina display.
But zooming scales the fonts only. For pixel images you have the pixels that are in it and that's it. Scaling those either up or down does not produce good text.
Now I'm just waiting for the inevitable "AI image scaler" that handles text inside images.
> Now I'm just waiting for the inevitable "AI image scaler" that handles text inside images.
I'm surprised this isn't a thing already, as it seems doable with what people called "AI" 20 years ago. I mean, unless some unusual/non-default font was used, upscaling text on an image should be almost trivial. Ligatures notwithstanding, "printed letters" have a fixed shape, so:
1. Identify the typeface, size, weight, etc. by looking at the pixels of the text;
2. OCR the text (which should be 100% reliable);
3. Blank out the original text pixels; re-render the content (from step 2.) at a larger size (using parameters from step 1.).
I'm hedging here; it feels to me that OCR-ing normal text that never left the digital realm should be 100% reliable, but I'm not a specialist in that subfield so I surely must be missing something...
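To make steps 2 and 3 concrete, here is a naive sketch with off-the-shelf pieces (pytesseract for OCR, Pillow for redrawing). The 2x factor, colours and font file are my assumptions, and step 1, identifying the original typeface, is hand-waved by reusing a single known font, which is of course the hard part:

```python
# Naive "re-render instead of upscale" sketch: OCR the words, then redraw them
# at double size on an enlarged canvas. Assumes pytesseract + Tesseract and
# Pillow are installed, and that DejaVuSans.ttf is available on the system.
from PIL import Image, ImageDraw, ImageFont
import pytesseract

SCALE = 2
src = Image.open("text.png").convert("RGB")
data = pytesseract.image_to_data(src, output_type=pytesseract.Output.DICT)

dst = Image.new("RGB", (src.width * SCALE, src.height * SCALE), "white")
draw = ImageDraw.Draw(dst)

for i, word in enumerate(data["text"]):
    if not word.strip():
        continue
    x, y, h = data["left"][i], data["top"][i], data["height"][i]
    # Step 1 (typeface identification) is skipped: reuse one known font at
    # roughly the recognized glyph height, scaled up.
    font = ImageFont.truetype("DejaVuSans.ttf", h * SCALE)
    draw.text((x * SCALE, y * SCALE), word, font=font, fill="black")

dst.save("text_2x.png")
```

In practice the hedging above is justified: anti-aliasing, sub-pixel rendering and layout analysis are exactly where this simple pipeline falls apart.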
> I'm hedging here; it feels to me that OCR-ing normal text that never left the digital realm should be 100% reliable, but I'm not a specialist in that subfield so I surely must be missing something...
A string set in a given font at a given size won't always render as a fixed pattern of pixels. The font describes the curves of the letter forms and how that's rasterized depends on lots of factors such as the zoom level, exactly how the font rendering engine is implemented, whether or not anti-aliasing is turned on which is further complicated by the fact that the text can be set in any color with any other color as a background, etc. And there are a LOT of fonts.
Lastly, OCR is not just about recognizing letter shapes but has to contend with how the text flows. It has to understand line-breaks, multi-column layouts, captions, pulled quotes, page-numbers, hyphenation and all the other weird shit that we make text do.
That attitude leads to the shitty epubs we currently have. You either do a fixed-size PDF layout, or you have a proper dynamic solution. For technical/mathematical content, I am not interested in anything in between, given that PDF just works for me, and is easily achieved with tools today.
I love Pascal's clarity, and I enjoyed writing several years of Pascal and two years' worth of Modula-2 in the 1980s and 1990s, but one has to admit Brian has some good points.
If Pascal were still actively used by more people, a new version of its ISO standard could incorporate many of the things that "happened since", if the community so chose.
That version did in fact exist; it was called ISO Extended Pascal. However, by then Apple's and Borland's Object Pascal was what everyone cared about.
There is nothing excellent about that critique, except to UNIX heads.
It ignores that, for quite some time outside UNIX, C only existed in various dialects of K&R C like Small-C, RatC, and whatever else was around on CP/M, competing mainframes and whatnot.
It also ignores the existence of Modula-2, released three years prior to that rant.
That wasn't a "rant", no matter how much it offends you.
Here's what it actually was: Kernighan and Plauger wrote a book called "Software Tools". The code in it was written in RATFOR, which is a preprocessor for FORTRAN. After that book was written, Kernighan got the idea of re-writing it in Pascal. And it was hard - much harder than he expected. So after he did so, he asked himself, "Why was that so hard? Pascal should have been much better than RATFOR, and it wasn't. Why not?"
That's what the paper is about. (It's not even about C, so what C dialects existed, and where they existed, is completely irrelevant.) And the paper says that's what it's about.
And, having used an ISO standard Pascal myself, I can say every word of it is true. (The only exception was that we had separate compilation.) Almost everything he said, I ran into. The problems were real, and they were painful.
Why didn't he use Modula-2? Because he didn't write the book in Modula-2. And why didn't he do that? He started the book in March 1980. Modula-2 probably had much less traction than Pascal at that time, so Pascal was a more reasonable language to pick for the book.
Not only is it a rant, it shows ignorance of the Pascal ecosystem, or an unwillingness to actually learn it or make a fair point.
Otherwise, getting an augmented Pascal system, like plenty of folks were doing, shouldn't have been a big problem.
The situation with C is fully relevant, because it shows the double standard in his views, in a text that is used as a kind of biblical message by many UNIX monks.
I have also used ISO Pascal, but only because the university professor was religious about it for assignments, as the DG/UX Pascal compiler had enough extensions at our disposal.
The success of the extensions added in UCSD, Apple, Lisa and Turbo Pascal demonstrates that Kernighan's findings, which focused only on Wirth's original Pascal, were not as wrong as you assume. Lisa, Turbo, Object and VAX Pascal essentially supported the same kind of pointer manipulation as C. And if we take a close look at the Oberon System, even Wirth depended on this feature (although it was actually a backdoor of the language and bypassed the type checker), or he directly escaped to assembler.
The same can be said of the success of the extensions added to K&R C, starting with inline assembly, but apparently extensions were good to have in C, bad in Pascal.
Apparently K&R C (which was the version at the time of Kernighan's paper) wasn't good enough itself, and it's fair to say that ANSI C also borrowed features from Pascal (e.g. function prototypes). So the influence was mutual. And I think it's pretty evident that the extensions made by UCSD and later Pascals to Wirth's original version were essential for the success. I don't think that the typical Turbo Pascal user cared much about Wirth's original version.
That was written about "standard Pascal" (the first standard). Turbo fixed most of that. (Maybe all of it, in that Turbo became a de-facto standard, so that even the "all the extensions are non-standard" objection didn't really apply.)
> Brian W. Kernighan's excellent critique of Pascal
Well, most of it was obsolete before the paper appeared (see Apple and Lisa Pascal), and the reason Pascal became very popular in the eighties was not the version Wirth created, but products like Turbo, VAX or Object Pascal, or the ones mentioned, and of course later also Delphi. Even Wirth himself demonstrated in his Oberon System that the system could not be implemented without direct memory reads/writes and pointer arithmetic (via the SYSTEM module, and mostly without the support of a type checker). The Pascal ISO standard came much too late and had little significance.
I hope when Jack is in Oxford he'll also visit Cambridge to give a guest talk in the late Ross Anderson's former group.