If you flip through something claiming to be a "book" and immediately see that a majority of pages just contain nonsensical bulleted lists, and furthermore see that chapter titles are printed overlapping with the book title on each page, you can correctly conclude the entire thing is a zero-effort pile of shit without wasting any further time to read it.
I read through a bit of it and it really wasn't all that bad. The only thing that I found to be really problematic were the made up experiences. Clearly hallucinations are still a big problem for LLMs, but if we manage to get rid of those a book like this can really be quite serviceable (a lot of human-written books are badly written so the bar isn't incredibly high, imho).
The creator should really tweak the prompt/process to include automatic review explicitly intended to remove hallucinations. It clearly is already the intent: "Future iterations of this experiment will include AI-powered fact-checking of the content."
I'm looking forward to what the improved version will look like.
Flip through the middle of the book. Nearly every page has either a bulleted list or a numbered list. In several cases, a single list spans multiple pages.
That’s the format of an outline, not a legitimate book.
Is it? Or are 'legitimate' books just too often not concise and structured enough?
I do a lot of personal knowledge management and I use a shit ton of sections and lists in that. Books evolved from the art of telling stories, not from efficiently conveying knowledge. Perhaps with books etc. we're just way too used to an approach that is suboptimal. I know I personally despise news articles and blogs that start with "setting the scene" and are incredibly and needlessly verbose, using thousands of words to say what could be made clear in a single paragraph.
Viewed from another angle: Reading text is inherently serial in nature even though a lot of things are related to each other in a graph. A document with sections with bulleted lists is actually a way to represent a tree, which is closer to a fully unconstrained graph. I would argue that trees like that are much easier to parse than classically written texts.
There is irony here in that I only used some whitespace to add structure, but never used any bulleted lists in this comment.
[...]
I did generate an alternative with Google Gemini 2.5 Pro, but the formatting doesn't work here on HN. It was decent, though!
> I do a lot of personal knowledge management and I use a shit ton of sections and lists in that.
That's because these are notes, not a book. A list-heavy outline format makes sense for notes, as these are summaries that supplement your own memory and knowledge you've already taken in. They're not a sole/primary source of conveying knowledge to others on their own.
> Perhaps we're just way too used for books etc. to an approach that is suboptimal.
If you truly believe books are "suboptimal", I can only suggest that you consider looking inward and do some reflection:
Is the "problem" really with books and long-form writing, which is the dominant form of knowledge transfer across several thousand years of human civilization?
Or is the problem with people's attention spans in the past decade, due to dopamine-fueling social media doom scrolling and AI usage?
Yeah, and the incident details call it "degraded functionality" when it seems broken for everyone across the board: desktop app, website, and mobile app all non-functional.
Oddly, my mobile app still has me logged in and seems to work, but the desktop app switched to its stupid 'oopsy daisy, something not quite right' screen on its own.
I think the core problem at hand for people trying to use AI in user-facing production systems is "how can we build a reliable system on top of an unreliable (but capable) model?". I don't think that's the same problem that AI researchers are facing, so I'm not sure it's sound to use "bitter lesson" reasoning to dismiss the need for software engineering outright and replace it with "wait for better models".
The article sits on an assumption that if we just wait long enough, the unreliability of deep learning approaches to AI will just fade away and we'll have a full-on "drop-in remote worker". Is that a sound assumption?
Easy way to get a fair result from an unfair coin toss: Flip the coin twice in a row, in this case starting with the same side facing up both times, so it's equally unfair for both tosses. If you get heads-heads or tails-tails, discard and start over until you get either heads-tails or tails-heads, which have equal probabilities (so you can say something like HT = "heads" and TH = "tails").
This works even if the coin lands heads 99% of the time, as long as it's consistent (but you'll probably have to flip a bunch of times in that case).
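The two-flip procedure above (the classic von Neumann extractor) can be sketched in a few lines of Python; the 99%-heads coin and the simulation parameters here are just illustrative:

```python
import random

def biased_flip(p_heads=0.99):
    # A heavily biased coin: heads with probability p_heads.
    return "H" if random.random() < p_heads else "T"

def fair_flip(flip=biased_flip):
    # Von Neumann's trick: flip twice; heads-tails -> "H",
    # tails-heads -> "T", and discard matching pairs.
    # P(HT) = p*(1-p) = P(TH), so the output is unbiased.
    while True:
        first, second = flip(), flip()
        if first != second:
            return first

# Even with a 99%-heads coin, the extracted bits come out ~50/50,
# though each bit costs ~1/(2*p*(1-p)) = ~50 pair-flips on average.
results = [fair_flip() for _ in range(1000)]
print(results.count("H") / len(results))  # hovers near 0.5
```

The cost comment is the "flip a bunch of times" caveat from above made concrete: the more lopsided the coin, the more pairs get discarded before one fair bit comes out.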
If anyone wants to look up why this might work, it's a whitening transform [0]. I can't find the name of the algorithm itself being described in the parent, but there are other algorithms that accomplish the same thing.
It seems like he did everything! I first heard of Von Neumann in international relations & economics classes as the person who established game theory, then later in CS classes as the creator of mergesort, cellular automata, Von Neumann architecture, etc.
The odds are important to know because if someone gave you a trick coin that always lands on heads, you will be flipping coins until the end of the universe. And I'm sure you have more important things to do than that.
Then it's impossible to trust the coin in the general case.
Proof: Imagine the extreme case of the coin containing AI that knows exactly how you use it and how to manipulate each toss result. The coin itself can decide the outcome of your procedure, so it's impossible to trust it to generate randomness.
Reminds me of a cool proof I saw recently that there are two numbers a and b such that a and b are both irrational, but a^b is rational:
Take sqrt(2)^sqrt(2), which is either rational or not. If it's rational, we're done. If not, consider (sqrt(2)^sqrt(2))^sqrt(2). Since (a^b)^c = a^(bc), this equals sqrt(2)^(sqrt(2) * sqrt(2)) = sqrt(2)^2 = 2, which is rational!
It feels like a bit of a sleight of hand, since we don't actually have to know whether sqrt(2)^sqrt(2) is rational for the proof to work.
I wonder what the easiest to prove example of a, b irrational with a^b rational is?
The easiest I can think of offhand would be e^log(2). To prove that, we need to prove that e is irrational and that log(2) is irrational.
To prove log(2) is irrational, one approach is to prove that e^r is irrational for rational r != 0: if log(2) were rational, then e^log(2) = 2 would be irrational, a contradiction. And to prove that e^r is irrational for rational r != 0, it suffices to prove that e^n is irrational for all positive integers n.
We'd also get that e is irrational out of that by taking n = 1, and that would complete our proof that e^log(2) is an example of irrational a, b with a^b rational.
So, all we need now is a proof that e^n is irrational for integers n > 0.
The techniques used in Niven's simple proof that pi is irrational, which was discussed here [1], can be generalized to e^n. You can find that proof in Niven's book "Irrational Numbers" or in Aigner & Ziegler's "Proofs from THE BOOK".
That can also be proved by proving that e is transcendental. Normally proofs that specific numbers are transcendental (other than numbers specifically constructed to be transcendental) are fairly advanced but for e you can do it with first year undergraduate calculus. There's a chapter in Spivak's "Calculus" that does it, and there's a proof in the aforementioned "Irrational Numbers".
I think a = sqrt(2), b = log(9)/log(2) with a^b = 3 is easier. To show that b is irrational, assume b = n/m for integer n, m. Then 9^m = 2^n, which can't be the case since the lhs is odd and the rhs is even.
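For what it's worth, a quick floating-point sanity check (just illustrative, of course, not a proof) agrees:

```python
import math

a = math.sqrt(2)               # irrational
b = math.log(9) / math.log(2)  # irrational, by the parity argument above
print(a ** b)  # approximately 3.0
```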
Well the proof I would use is let a = e and b = i(pi).
e^(i theta) = cos theta + i sin theta (Euler's formula)
thus e^(i pi) = cos pi + i sin pi = -1 + i(0) = -1
We know that e and i pi are irrational (in fact i pi isn't even a real number) and -1 is rational.
Therefore there exist two numbers a and b such that both a and b are irrational but a^b is rational.
In fact log of just about anything is irrational so e^(log x) works as well for just about all rational x, but Euler's identity is cool so I wanted to use that.
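For anyone who wants to see it numerically, Python's cmath agrees up to floating-point noise:

```python
import cmath

# Euler's identity: e^(i*pi) = -1.
z = cmath.exp(1j * cmath.pi)
print(z)  # -1 plus a tiny imaginary rounding error
```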
When my primary care doc referred me to a dermatologist for a suspicious mole, I could not find an actual dermatologist who would see me in less than ~8 months. I ended up seeing a physician assistant, which I'm still uneasy about, since there's been a study suggesting that PAs have a lower diagnostic accuracy than doctors [1], and the educational requirements for PAs are very different.
As a layperson, it seems like we (patients / society) would benefit from having more doctors, i.e. opening up more residency slots and admitting more people to med school, but there's probably a lot I don't understand about the issue. Not sure if it's a lack of political willpower to do this, or if there are other reasons why the number of doctors we train is so restricted.
[1] https://pubmed.ncbi.nlm.nih.gov/29710082/ ("PAs performed more skin biopsies per case of skin cancer diagnosed and diagnosed fewer melanomas in situ, suggesting that the diagnostic accuracy of PAs may be lower than that of dermatologists")
> As a layperson, it seems like we (patients / society) would benefit from having more doctors, i.e. opening up more residency slots and admitting more people to med school, but there's probably a lot I don't understand about the issue. Not sure if it's a lack of political willpower to do this, or if there are other reasons why the number of doctors we train is so restricted.
Like so many of America’s issues, it’s due to lobbying based on entrenched greed.
> In 1997, the AMA lobbied Congress to restrict the number of doctors that could be trained in the United States, claiming that, "The United States is on the verge of a serious oversupply of physicians."
Yep. The requirements (and cost!) to become a physician are absolutely insane, and it's entirely intentional.

As a society we seem to assume that people in certain trades are altruistic and moral, simply because of their job. For some reason, everyone assumes doctors wouldn't act self-interested. Teachers are often thought of the same way. I don't want to swing the pendulum to the other side and start thinking of them as selfish (though certainly some individuals are), but I do wish as a society we would remember that people are still people.

Our systems need to be structured to overcome the natural and innate tendency of people to optimize for themselves or their groups. We don't let the cigarette companies do all the science and make all the laws/rules around tobacco sales; we probably shouldn't do that with medical stuff either. We don't need antagonistic people in charge, but they should be independent.
I don't think there's necessarily much not understood.
Here in Sweden we have almost 2x as many physicians as you do, and we pay them about half of what you do, so we end up paying approximately the same in salaries (the average Swedish physician is paid 131k) and I think it works out fine.
We start training physicians right after high school and push them to get an MSc in Medicine, rather than treating physicians as some kind of pseudo-PhDs (though head physicians are required to have an actual PhD), and this system is fine. I think it's the same way in Denmark, and given the stuff they've come up with, I imagine one can't complain much about their system.
A big driver for the high salaries of medical doctors in the U.S. is the staggering educational debt their degrees leave them with. Is it the same in Sweden? Some degree of wage depression is practically inevitable if we had more doctors, but I wonder how much that could be offset with affordable education?
No. Universities in Sweden are free to citizens (including EU/EEA citizens). That includes highly regarded universities such as Karolinska Institute (considered one of the top medical schools in Europe), Lund University, the University of Gothenburg, and so on.
In Scandinavia, student loans are taken to cover living expenses, not the cost of tuition. Private schools exist, but are not nearly as common as in the U.S.
Biopsy stats might differ because PAs are used in large (cough, private equity) practices to do a lot of checks, esp. in old-folks homes, and Medicare pays. Patients per week can average 120+; no doctor does that. Plus, the PA is supposed to err on the side of caution, meaning more biopsies. Doctors are more willing to dismiss possible risks.
That said, someone (doctor or PA) who was recently trained at a good school is often better than someone with 15+ years of experience.
Also, derm exam skills are not enhanced by the depth of medical education or even much by experience (by contrast to the cardio exam). It's mostly a function of pattern recognition and patient skills.
And then you have the ARNP, and schools who are speedrunning people from the street into ARNPs. Oh, you need an RN? We'll have you in our "Accelerated RN" course, getting your RN in parallel with other studies.
In some places, it is possible to go from high school to ARNP within 6 years.
And while supervision requirements for PAs might vary in terms of actual oversight, ARNPs are ostensibly fully fledged independent providers.
And I'll also say that you see the same pre-hospital too. In the PNW, while there are valid criticisms that can be leveled against two of the pre-eminent paramedic programs (Harborview, and Tacoma Community), there are far, far, too many "strip mall schools" in other states that will take you from "zero to hero" in 4 or 5 months (of 6 days a week, 8 hours a day, of just class time), and dump you out on the world with just enough retained knowledge to pass your NREMT and the barest amount of ride time to meet DOT mandated minimums. It's scary, to be blunt. These people go out with no clinical experience and are now expected not just to work as a team on a 911 call, but to lead it.
When I actually got my appointment within 30 days, due to calling and advocating for myself politely, I started wondering how much ground medical dermatology has ceded to elective and cosmetic dermatology. I am concerned that dermatology is becoming centered around the personal appearance of affluent people rather than medical need.
Try requesting appointments during December or January. A little birdie told me that appointment cancellations go through the roof at some practices during those months.
This article really resonates with me - I've heard people (and vector database companies) describe transformer embeddings + vector databases as primarily a solution for "memory/context for your chatbot, to mitigate hallucinations", which seems like a really specific (and kinda dubious, in my experience) use case for a really general tool.
I've found all of the RAG applications I've tried to be pretty underwhelming, but semantic search itself (especially combined with full-text search) is very cool.
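The "semantic search combined with full-text search" idea can be sketched as a hybrid scorer. This is a toy illustration only: the 3-d "embeddings" are made up by hand, and the keyword score is simple word overlap standing in for what a real system would get from a trained embedding model and BM25:

```python
import math

# Toy corpus: each doc has text plus a hand-made 3-d "embedding".
docs = {
    "doc1": ("how to reset your password", [0.9, 0.1, 0.2]),
    "doc2": ("quarterly sales figures", [0.1, 0.8, 0.3]),
    "doc3": ("password reset email not arriving", [0.85, 0.2, 0.1]),
}

def cosine(u, v):
    # Cosine similarity between two vectors (the "semantic" score).
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def keyword_score(query, text):
    # Fraction of query words found in the doc (the "lexical" score).
    q, t = set(query.split()), set(text.split())
    return len(q & t) / len(q)

def hybrid_search(query, query_vec, alpha=0.5):
    # alpha blends the semantic and lexical scores.
    scored = []
    for doc_id, (text, vec) in docs.items():
        s = alpha * cosine(query_vec, vec) + (1 - alpha) * keyword_score(query, text)
        scored.append((s, doc_id))
    return [doc_id for _, doc_id in sorted(scored, reverse=True)]

ranking = hybrid_search("password reset", [0.88, 0.15, 0.15])
print(ranking)  # the two password docs rank above the sales doc
```

The blend is the interesting part: lexical matching catches exact terms the embedding might blur together, while the embedding catches paraphrases the keyword match misses.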
I dare say RAG with vector DBs is underwhelming because embeddings are not underrated but appropriately rated, and will not give you relevant info in every case. In fact, the way LLMs retrieve info internally [0] already works along the same principle and is a large factor in their unreliability.
I think something with the "wow" factor of the Vision Pro but the form factor of a pair of glasses would be the holy grail of AR/VR. I wonder if there are fundamental tradeoffs which would make that impossible in the near term? I think it would remain very niche indefinitely in that case.
I wonder if in the next 5 years there could be a device where the compute is your smartphone but it streams to a display on your AR/VR glasses. I guess the main issue would be where do you put the battery.