Way back in the 90s, I had a hacked satellite dish. This meant that I could get local channels from across the USA. My roommate used this for a school assignment. He looked at how much time local news spent on each topic, categorized by city. Here is what he found:
- All newscasts featured crime more than anything else ("if it bleeds it leads").
- All newscasts had a local feel-good story.
- All newscasts had weather (although East Coast and Midwest stations spent more time on it).
- All newscasts had a local sports update.
But what was most interesting was what they spent the rest of their time on:
- In New York, it was mostly financial news.
- In Los Angeles it was mostly entertainment news.
- In San Francisco, it was mostly tech-related news.
- In Chicago, it was often manufacturing-related.
That homework was really what drove home for me that the news is heavily cherry-picked, and I basically stopped watching after that.
In safety-critical systems, we distinguish between accidents (actual loss, e.g. lives, equipment, etc.) and hazardous states. The equation is
hazardous state + environmental conditions = accident
Since we can only control the system, and not its environment, we focus on preventing hazardous states, rather than accidents. If we can keep the system out of all hazardous states, we also avoid accidents. (Trying to prevent accidents while not paying attention to hazardous states amounts to relying on the environment always being on our side, and is bound to fail eventually.)
One such hazardous state we have defined in aviation is "less than N minutes of fuel remaining when landing". If an aircraft lands with less than N minutes of fuel on board, it would only have taken bad environmental conditions to make it crash, rather than land. Thus we design commercial aviation so that planes always have N minutes of fuel remaining when landing. If they don't, that's a big deal: they've entered a hazardous state, and we never want to see that. (I don't remember if N is 30 or 45 or 60 but somewhere in that region.)
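To make that concrete in programmer terms, here's a minimal sketch (my own illustration; the threshold, names, and numbers are made up, not actual aviation rules):

    # Illustrative only: flag the hazardous state itself, not the accident.
    MIN_RESERVE_MINUTES = 45  # placeholder for whatever N really is

    def fuel_minutes_remaining(fuel_kg: float, burn_rate_kg_per_min: float) -> float:
        return fuel_kg / burn_rate_kg_per_min

    def entered_hazardous_state(fuel_kg: float, burn_rate_kg_per_min: float) -> bool:
        """True if the aircraft lands with less than the required reserve.

        Note that this triggers even if the landing went fine: the point
        is to catch the hazardous state before the environment gets a vote.
        """
        return fuel_minutes_remaining(fuel_kg, burn_rate_kg_per_min) < MIN_RESERVE_MINUTES

    # A landing with 30 minutes of fuel left is reported even though nothing "happened":
    assert entered_hazardous_state(fuel_kg=1200, burn_rate_kg_per_min=40)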
For another example, one of my children loves playing around cliffs and rocks. Initially he was very keen on promising me that he wouldn't fall down. I explained the difference between accidents and hazardous states to him in children's terms, and he slowly realised that he cannot control whether or not he has an accident, so it's a bad idea to promise me that he won't have one. What he can control is whether or not bad environmental conditions can lead to an accident, and he does that by keeping out of hazardous states. In this case, the hazardous state would be standing within a child-height of a ledge when there is nobody below ready to catch him. He can promise me to avoid that, and that satisfies me a lot more than a promise not to fall.
Tangential, but I practically owe my life to this guy. He wrote the Flask Mega-Tutorial, which I followed religiously to launch my first website. Then, right before launch, he answered my Stack Overflow question about the most critical part of my entire application: piping a fragged file in Flask. I put his fix live, and the site went viral. Here's the link for posterity's sake: https://stackoverflow.com/a/34391304/4180276
I started out my automotive software career with Ford, and as part of the new college hire training program, I actually got to see the process of how "book rate" is determined. They take a brand new car, straight off the assembly line and give a master mechanic a process sheet (head gasket remove and replace, for instance). He has a tool cart with a computer next to it, about 6 feet away from the vehicle. For each step he starts a timer on the computer for that step, picks up the necessary ratchet and socket or whatever, loosens the next bolt, walks the ratchet and socket back to the tool box, puts it away and then finally stops the timer. He probably practices the procedure a few times before the timed run, but basically this prevents the company from setting the time to do a job super crazy low.
He's also not allowed to take any shortcuts from the book procedure, of which there are frequently a few available (use a long wobble extension bar and a universal joint and you can get in without taking off all of the stuff above that bolt, whatever). On the other hand, this is the warranty rate (meaning new cars, largely less rust, etc.). Independent/non-dealer mechanics will typically charge more time than the warranty time estimate from the manufacturer to account for things like rusty vehicles with harder-to-remove bolts and such, though this is usually in the rate book they subscribe to from whatever information source they pay for (warranty + 20% or so).
The issue is that the estimated time for a job is probably a high estimate for a brand new car and probably a low estimate for a several-year-old car, and the risk of that is on the dealership. The dealership then pays mechanics an hourly wage ($20+, fairly high for well-certified master mechanics) and assumes that the hours listed on the job from the manufacturer are accurate, leaving the mechanic to take the risk if it goes over. Generally, the dealership loses on this proposition too, since they lose out on business/bay/electric/heat/etc. for the lost time, so they don't like warranty work. They can upcharge or bill more hours on a job for a retail customer, but not for warranty repair, due to contractual obligations to the OEM. This is particularly bad for Ford, since they currently lead the industry in recalls and warranty spend, meaning that their dealership networks are getting a lot more of that kind of work with limited profit and no ability to turn it down.
Hey guys, I learned electronics from a Nobel laureate!
Throughout my physics career, including my PhD, analog electronics was the most difficult but probably also the most rewarding class for me. I fondly remember staying until 2am in Broida at UCSB trying to get a filter to work, getting a few hours of sleep, then being back in the lab before sunrise. Of course, this was mostly the result of procrastination, but damn were those good times.
One thing that really bothered me then was the idea of a current source. I was perfectly happy with a voltage source, perhaps naively(1). But a current source seemed magical. I asked Martinis about this and he seemed dumbfounded that I didn't understand. Of course, the answer is feedback. And, of course, good voltage sources also require feedback. But he was so familiar with feedback control that he didn't even consider saying that's what's happening, while I had never even heard of control theory.
Long story short, sometime later I asked to join his lab as an undergrad researcher. He said no, and to this day I think it's because I didn't understand current sources. Or maybe I was too late, or maybe it was the A- (see the aforementioned procrastination). That led me to ask a biophysicist, and so I became a biophysicist instead of going into condensed matter/QI/QC. In hindsight, I think this was fortunate. I would never have considered biophysics otherwise, and it has been one of the loves of my life since then. Who knows, maybe I would've been just as happy with quantum stuff. I'm working through Mike and Ike now and find it fascinating.
Funny enough, after my PhD, I co-founded a startup in industrial control & automation. Now I understand feedback quite well, and thus current sources, albeit many years too late.
(1) Of course, good voltage sources vary their resistance just like good current sources vary their voltage. My best guess as to why I was more bothered by current sources is that I was so familiar with voltage sources that confidently claimed constant voltages (batteries). Not a very good reason; I should've questioned it more. In practice, it's much easier to make a near-ideal voltage source (near-zero output resistance) than a near-ideal current source (very high output resistance).
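To make the feedback point concrete, here's a toy simulation (my own sketch, all values arbitrary): a "current source" that holds 10 mA through a drifting load by continuously adjusting its output voltage.

    # Toy feedback current source: vary the voltage to hold the current
    # constant while the load resistance drifts. All values arbitrary.
    target_current = 0.010   # 10 mA setpoint
    voltage = 0.0            # controller output
    gain = 50.0              # integral gain (volts per amp of error)

    for step in range(200):
        # The load drifts from 1000 to 1500 ohms; the source can't control this.
        load_resistance = 1000.0 + 500.0 * (step / 200.0)
        current = voltage / load_resistance        # Ohm's law
        error = target_current - current
        voltage += gain * error                    # the feedback step

    print(f"{current * 1000:.2f} mA into {load_resistance:.0f} ohms at {voltage:.2f} V")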
Hah, this is my time to shine. I worked in anime subtitling and timing for a number of years. I helped write our style guide — things like how to handle signs, overlapping dialogue, colors etc.
It wound up being quite a large document!
But the thing to realize here is that, all of these subs have to be placed by hand. There are AI tools that can help you match in and out times, but they have a difficult time matching English subs to Japanese dialogue. So what you have to do is have a human with some small grasp of Japanese place each of these in/out times by hand.
If you’re really good you can do one 25-minute episode in about 35 minutes. But that’s ONLY if you don’t spend any extra time coloring and moving the subs around the screen (as you would for song and sign captions).
Elite-tier subs can take up to two or even three or four hours per episode. That’s why the best subs are always fansubs! Because a business will never put in 8x more time on an episode’s subtitles than the “bare minimum.”
Crunchyroll looks to have at least gone halfway for a while… but multiply those times across thousands of episodes over X years… and you can see why some manager somewhere finally decided 35 minutes was good enough.
I am in the Product world now, and I do think this was a bad move. Anime fans LOVE anime. The level of customer delight (and hate) in the anime industry is like no other. I really miss the excitement that my customers would get (and happily telegraph!) when I launched a product in those days. Which is all to say, you HAVE to factor delight into your product. Especially with a super fan base like you have in anime.
This is a solved problem. The answer is to add extra relevant information to the context as part of answering the user's prompt.
This is sometimes called RAG, for Retrieval Augmented Generation.
These days the most convincing way to do this is via tool calls.
Provide your LLM harness with a tool for running searches, and tell it to use that tool any time it needs additional information.
A good "reasoning" LLM like GPT-5 or Claude 4 can even handle contradictory pieces of information - they can run additional searches if they get back confusing results and work towards a resolution, or present "both sides" to the user if they were unable to figure it out themselves.
I’ve recently thrown out all my masking tape (crepe paper) in favor of washi tape (rice/mulberry paper with a 3M adhesive). I use Blue Dolphin for house painting and Nichiban for airbrushing. Very nice quality-of-life upgrade.
Masking tape would bleed or lift paint (even Frog Tape). I've seen a 10x reduction in these problems since switching to washi.
Thanks! HN was part of the origin story of the book in question.
In 2018 or 2019 I saw a comment here that said that most people don't appreciate the distinction between domains with low irreducible error that benefit from fancy models with complex decision boundaries (like computer vision) and domains with high irreducible error where such models don't add much value over something simple like logistic regression.
It's an obvious-in-retrospect observation, but it made me realize that this is the source of a lot of confusion and hype about AI (such as the idea that we can use it to predict crime accurately). I gave a talk elaborating on this point, which went viral, and then led to the book with my coauthor Sayash Kapoor. More surprisingly, despite being seemingly obvious it led to a productive research agenda.
While writing the book I spent a lot of time searching for that comment so that I could credit/thank the author, but never found it.
No, you can just book a ticket like with all the others. Always buy a ticket online; otherwise you'll be stuck in a queue. The Sagrada Família stopped doing on-the-spot visits because the queue was getting too long, and Batlló did as well. Only at La Pedrera (or Casa Milà, as it's really called) can you still buy a ticket on site. But I wouldn't. You're just wasting your time waiting while all the prebooks go ahead of you.
Batlló is the best one for me by far. I love the organic shapes and the light well and the soft wood, etc. Wow.
Here's a high level overview for a programmer audience (I'm listed as an author but my contributions were fairly minor):
[See specifics of the pipeline in Table 3 of the linked paper]
* There are ~181 million essentially different Turing machines with 5 states; first, these were enumerated exhaustively.
* Then, each machine was run for about 100 million steps. Of the 181 million, about 25% halt within this many steps, including the Champion, which ran for 47,176,870 steps before halting.
* This leaves 140 million machines which run for a long time.
So the question is: do those TMs run FOREVER, or have we just not run them long enough?
The goal of the BB challenge project was to answer this question. There is no universal algorithm that works on all TMs, so instead a series of (semi-)deciders were built. Each decider takes a TM, and (based on some proven heuristic) classifies it as either "definitely runs forever" or "maybe halts".
Several deciders ended up being used:
* Loops: run the TM for a while, and if it re-enters a previously-seen configuration, it definitely has to loop forever. Around 90% of machines either do this or halt, so this covers most of them (a minimal sketch of this idea appears after the list).
6.01 million TMs remain.
* NGram CPS: abstractly simulates each TM, tracking a set of binary "n-grams" that are allowed to appear on each side. Computes an over-approximation of reachable states. If none of those abstract states enter the halting transition, then the original machine cannot halt either.
Covers 6.005 million TMs. Around 7000 TMs remain.
* RepWL: Attempts to derive counting rules that describe TM configurations. The NGram model can't "count", so this catches many machines whose patterns depend on parity. Covers 6557 TMs.
There are only a few hundred TMs left.
* FAR: attempts to describe each TM's set of reachable configurations as a regex/FSM.
* WFAR: like FAR, but adds weighted edges, which allows some non-regular languages (like matched parentheses) to be described.
* Sporadics: around 13 machines had complicated behavior that none of the previous deciders worked for, so hand-written proofs (later translated into Rocq) were produced for these machines.
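(Here's the sketch promised above: the core idea of the Loops decider in Python. This is my own illustration, not the project's actual Rocq code; notably, the real decider also catches "translated" cycles, where the same pattern recurs shifted along the tape, which the exact-configuration matching below misses.)

    # Sketch of the Loops idea: run the machine, and if it ever re-enters
    # an exact configuration (state, head position, tape contents) that it
    # has been in before, it must repeat that cycle forever.
    def loops_decider(tm, step_limit):
        """tm maps (state, symbol) -> (write, move, next_state); None means halt."""
        tape, head, state = {}, 0, "A"
        seen = set()
        for _ in range(step_limit):
            # Blank cells read as 0, so drop explicit zeros to normalize.
            snapshot = frozenset((p, s) for p, s in tape.items() if s != 0)
            config = (state, head, snapshot)
            if config in seen:
                return "definitely runs forever"
            seen.add(config)
            action = tm.get((state, tape.get(head, 0)))
            if action is None:
                return "halts"
            write, move, state = action
            tape[head] = write
            head += move  # move is +1 (right) or -1 (left)
        return "maybe halts"  # inconclusive; hand off to the next decider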
All of the deciders were eventually written in Rocq, which means they are coupled with a formally verified proof that they actually work as intended ("if this function returns true, then the corresponding mathematical TM really does run forever").
Hence, all 5-state TMs have been formally classified as halting or non-halting. The longest-running halter is therefore the champion: it was already suspected to be the champion, but this proves that there wasn't any longer-running 5-state TM.
Having worked on non-linear simulators/ODE solvers off and on for a decade, I agree and disagree with what you're saying.
I agree with you that that is 100% the point: you don't already know what's going to happen and you're doing modelling and simulation because it's cheaper/safer to do simulation than it is to build and test the real physical system. Finding failure modes, unexpected instability and oscillations, code bugs, etc. is an absolutely fantastic output from simulations.
Where I disagree: you don't already know what's going to happen, but you DO know generally what is going to happen. If you don't have, at a minimum, an intuition for what's going to happen, you are going to have a very hard time distinguishing between "numerical instability with the simulation approach taken", "a bug in the simulation engine", "a model that isn't accurately capturing the physical phenomena", and "an unexpected instability in an otherwise reasonably-accurate model".
For the first really challenging simulation engine that I worked on early on in my career I was fortunate: the simulation itself needed to be accurate to 8-9 sig figs with nanosecond resolution, but I also had access to incredibly precise state snapshots from the real system (which was already built and on orbit) every 15 minutes. As I was developing the simulator, I was getting "reasonable" values out, but when I started comparing the results against the ground-truth snapshots I could quickly see "oh, it's out by 10 meters after 30 minutes of timesteps... there's got to be either a problem with the model or a numerical stability problem". Without that ground truth data, even just identifying that there were missing terms in the model would have been exceptionally challenging. In the end, the final term that needed to get added to the model was Solar Radiation Pressure; I wasn't taking into account the momentum transfer from the photons striking the SV and that was causing just enough error in the simulation that the results weren't quite correct.
Other simulations I've worked on were more focused on closed-loop control. There was a dynamics model and a control loop. Those can be deceptive to work on in a different way: the open-loop model can be surprisingly incorrect, and a tuned closed-loop control system around the incorrect model can produce seemingly correct results. Those kinds of simulations can be quite difficult to debug as well, but if you have a decent intuition for the kinds of control forces that you ought to expect from the controller, you can generally figure out if it's a bad numerical simulation, a bad model, or a good model of a bad system... but without those kinds of gut feelings and maybe some back-of-the-envelope math, it's going to be challenging, and it's going to be easy to trick yourself into thinking it's a good simulation.
I got a walkie talkie set as a Christmas present when I was 8. Which was kind of an evil thing to do given I had no siblings or friends to play with. One day I turned one set on and listened for a while and I thought I heard someone talking behind all the static noise. So I said something and was shocked to hear the voice talking back to me. Fast forward a few decades, next week is my wedding and that voice on the other side of the radio is my best man.
As a former and longtime Smalltalker who learned MVC from the ParcPlace crowd…
I used to say things like this. M and V were always pretty unambiguous, but “controller” was kind of like “Christianity”, everyone talks like it’s a unifying thing, but then ends up having their very own thoughts about what exactly it is, and they’re wildly divergent.
One of the early ParcPlace engineers lamented that while MVC was cool, you always needed this thing at the top, where it all “came together” and the rules/distinctions got squishy. He called it the GluePuppy object. Every UX kit I’ve played with over the years, regardless of the currently-in-vogue let's-use-the-call-tree-to-mirror-the-view-tree style, or yesteryear's MVVM, MVC, MVP, etc., always ends up with GluePuppy entities in it.
While on the subject, it would be remiss not to hat-tip James Dempsey's MVC song at WWDC:
“Mad props to the Smalltalk crew” at 4:18, even though he’d just sung about a controller layer in Cocoa that did what the dependency/events layers did in various Smalltalks.
Way back when, I did a masters in physics. I learned a lot of math: vectors, a ton of linear algebra, thermodynamics (aka entropy), multi-variable and then tensor calculus.
This all turned out to be mostly irrelevant in my subsequent programming career.
Then LLMs came along and I wanted to learn how they work. Suddenly the physics training is directly useful again! Backprop is one big tensor calculus calculation, minimizing… entropy! Everything is matrix multiplications. Things are actually differentiable, unlike most of the rest of computer science.
It’s fun using this stuff again. All but the tensor calculus on curved spacetime, I haven’t had to reach for that yet.
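A tiny sketch of what I mean (JAX, with trivial made-up shapes): the forward pass is a matrix multiplication, the loss is a cross-entropy, and the whole backprop step is one differentiation call.

    import jax
    import jax.numpy as jnp

    def loss(W, x, y):
        logits = W @ x                          # a matrix multiplication
        return -jax.nn.log_softmax(logits)[y]   # cross-entropy for one example

    W = jnp.zeros((3, 4))
    x = jnp.array([1.0, 2.0, 3.0, 4.0])
    grad_W = jax.grad(loss)(W, x, 1)            # backprop = tensor calculus
    print(grad_W.shape)                         # (3, 4)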
Weird! Sorry to hear that commenting (including on HN) didn't make this person any friends. It has made me a bunch of friends, including some very close in-person ones. I don't think I'm an oddball in that regard!
Of particular note: comment culture is how I managed to engage with local politics here in Chicagoland, through which I met a ton of my neighbors and got actively involved with campaigns and the local government itself. Those are all in-person relationships that were (and remain) heavily mediated by comments.
Back in the 90s, I implemented precompiled headers for my C++ compiler (Symantec C++). They were very much like modules. There were two modes of operation:
1. all the .h files were compiled, and emitted as a binary that could be rolled in all at once
2. each .h file created its own precompiled header. Sounds like modules, right?
Anyhow, I learned a lot, mostly that without semantic improvements to C++, precompiled headers made compilation much faster but were too sensitive to breakage.
This experience was rolled into the design of D modules, which work like a champ. They were everything I wanted modules to be. In particular,
The semantic meaning of the module is completely independent of wherever it is imported from.
Anyhow, C++ is welcome to adopt the D design of modules. C++ would get modules that have 25 years of use, and are very satisfying.
Yes, I do understand that the C preprocessor macros are a problem. My recommendation is, find language solutions to replace the preprocessor. C++ is most of the way there, just finish the job and relegate the preprocessor to the dustbin.
I may be the only person who ever understood every detail of C++, starting with the preprocessor. I can make that claim because I'm the only person who ever implemented all of it. (You cannot really know a language until you've implemented it.) I gave up on that in the 2000's. Modern C++ is simply terrifying in its complexity.
(I'm not including the C++ Standard Library, as I didn't implement it.)
Your first prompt is testing Claude as an encyclopedia: has it somehow baked into its model weights the exactly correct skeleton for a "Zephyr project skeleton, for Pi Pico with st7789 spi display drivers configured"?
Frequent LLM users will not be surprised to see it fail that.
The way to solve this particular problem is to make a correct example available to it. Don't expect it to just know extremely specific facts like that - instead, treat it as a tool that can act on facts presented to it.
For your second example: treat interactions with LLMs as an ongoing conversation, don't expect them to give you exactly what you want first time. Here the thing to do next is a follow-up prompt where you say "number eight looked like zero, fix that".
This isn't a failure of PowerPoint. I work for NASA and we still use it all the time, and I'll assure anyone that the communication errors are rife regardless of what medium we're working in. The issue is differences in the way that in-the-weeds engineers and managers interpret technical information, which is alluded to in the article but the author still focuses on the bullets and the PowerPoint, as if rewriting similar facts in a technical paper would change everything.
My own colleagues fall victim to this all the time (luckily I do not work in any capacity where someone's life is directly on the line as a result of my work.) Recently, a colleague won an award for helping managers make a decision about a mission parameter, but he was confused because they chose a parameter value he didn't like. His problem is that, like many engineers, he thought that providing the technical context he discovered that led him to his conclusion was as effective as presenting his conclusion. It never is; if you want to be heard by managers, and really understood even by your colleagues, you have to say things up front that come across as overly simple, controversial, and poorly-founded, and then you can reveal your analyses as people question you.
I've seen this over and over again, and I'm starting to think it's a personality trait. Engineers are gossiping among themselves, saying "X will never work". They get to the meeting with the managers and present "30 different analyses showing X is marginally less effective than Y and Z" instead of just throwing up a slide that says "X IS STUPID AND WE SHOULDN'T DO IT." Luckily for me, I'm not a very good engineer, so when I'm along for the ride I generally translate well into Managerese.
My dad headed up the redesign effort on the Lockheed Martin side to remove the foam PAL ramps (where the chunk of foam that broke off and hit the orbiter came from) from the external tank, as part of return-to-flight after the Columbia disaster. At the time he was the last one left at the company from when they had previously investigated removing those ramps from the design. He told me how he went from basically working on this project off in a corner on his own, to suddenly having millions of dollars in funding and flying all over for wind tunnel tests when it became clear to NASA that return-to-flight couldn't happen without removing the ramps.
I don't think his name has ever come up in all the histories of this—some Lockheed policy about not letting their employees be publicly credited in papers—but he's got an array of internal awards from this time around his desk at home (he's now retired). I've always been proud of him for this.
Funny to see this pop up again (I'm the author). The year is now 2025 and I still use Chase as a personal bank and I'm now discovering new funny banking behaviors. I'll use this as a chance to share. :)
My company had an exit, I did well financially. This is not a secret. I'm extremely privileged and thankful for it. But as a result of this, I've used a private bank (or mix) for a number of years to store the vast majority of my financial assets (over 99.99% of all assets, I just did the math). An unfortunate property of private banks is they make it hard to do retail-like banking behaviors: depositing a quick check, pulling cash from an ATM, but ironically most importantly Zelle.
As such, I've kept my Chase personal accounts and use them as my retail bank: there are Chase branches everywhere, its easy to get to an ATM, and they give me easy access to Zelle! I didn't choose Chase specifically, I've just always used Chase for personal banking since I was in high school so I just kept using them for this.
Anyways, I tend to use my Chase account to pay a bunch of bills, just because it's more convenient (Zelle!). I have 3 active home construction projects, plus my CC, plus pretty much all other typical expenses (utilities, car payments, insurance, etc.). But I float the money in/out of the account as necessary to cover these. We do accounting of all these expenses on the private bank side, so it's all tracked, but it settles through Chase within the final 24-48 hours.
Otherwise, I keep my Chase balance no more than a few thousand dollars.
This really wigs out automated systems at Chase. I get phone calls all the time (like, literally multiple times per week) saying "we noticed a large transfer into your account, we can help!" And I cheekily respond "refresh, it's back to zero!" And they're just confused. To be fair, I've explained the situation in detail to multiple people multiple times but it isn't clicking, so they keep calling me.
I now ignore the phone calls. Hope I don't regret that later lol.
Presenting information theory as a series of independent equations like this does a disservice to the learning process. Cross-entropy and KL-divergence are directly derived from information entropy: InformationEntropy(P) represents the baseline number of bits needed to encode events from the true distribution P, CrossEntropy(P, Q) represents the (average) number of bits needed to encode P with a suboptimal distribution Q, and KL-divergence (better referred to as relative entropy) is the difference between these two values (how many more bits are needed to encode P with Q, i.e. quantifying the inefficiency): KLDivergence(P, Q) = CrossEntropy(P, Q) - InformationEntropy(P).
Information theory is some of the most accessible and approachable math for ML practitioners, and it shows up everywhere. In my experience, it's worthwhile to dig into the foundations as opposed to just memorizing the formulas.
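As a worked example (my own, in Python, using log base 2 so the units are bits):

    import math

    def information_entropy(p):     # baseline bits for the true distribution
        return -sum(pi * math.log2(pi) for pi in p if pi > 0)

    def cross_entropy(p, q):        # bits to encode p with a code built for q
        return -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

    def kl_divergence(p, q):        # the overhead, i.e. the inefficiency
        return cross_entropy(p, q) - information_entropy(p)

    P = [0.5, 0.25, 0.25]           # true distribution
    Q = [1/3, 1/3, 1/3]             # model distribution

    print(information_entropy(P))   # 1.5 bits
    print(cross_entropy(P, Q))      # ~1.585 bits (= log2 3)
    print(kl_divergence(P, Q))      # ~0.085 extra bits of inefficiency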
The Kodak Research Labs (like Bell Labs) let their researchers play. In the 1960s, my father (who later devised the Bayer filter for digital cameras) coded this algorithm for "Jotto", the 5-letter-word version of Mastermind.
Computers were so slow that one couldn't consider every word in the dictionary as a potential guess. He decided empirically on a sample size that played well enough.
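I don't have his code, but the core idea presumably looked something like this sketch (my reconstruction, in Python): score each candidate guess by the entropy of the feedback it would produce, estimated over a random sample of words rather than the whole dictionary.

    import math
    import random
    from collections import Counter

    def feedback(guess, answer):
        # Jotto feedback: the number of letters the two words share.
        return len(set(guess) & set(answer))

    def guess_entropy(guess, candidates, sample_size=100):
        sample = random.sample(candidates, min(sample_size, len(candidates)))
        counts = Counter(feedback(guess, word) for word in sample)
        total = sum(counts.values())
        return -sum((c / total) * math.log2(c / total)
                    for c in counts.values())

    def best_guess(guesses, candidates):
        # The max-entropy guess splits the remaining words most evenly.
        return max(guesses, key=lambda g: guess_entropy(g, candidates))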
I became a mathematician. From this childhood exposure, entropy was the first mathematical "concept" beyond arithmetic that I understood.
Oh hey, I wrote this. Been a long time. I had the lucky break of working in machine translation/parsing when the most important invention of the century happened in my niche field.
I'm pretty interested in the intersection of code and ML. If that's your thing, here is some other writing you might be interested in.
I believe it absolutely should be, and it can even be applied to rare disease diagnosis.
My child was just saved by AI. He suffered from persistent seizures, and after visiting three hospitals, none were able to provide an accurate diagnosis. Only when I uploaded all of his medical records to an AI system did it immediately suggest a high suspicion of MOGAD-FLAMES, a condition with an incidence of roughly one in ten million.
Subsequent testing confirmed the diagnosis, and with the right treatment, my child recovered rapidly.
For rare diseases, it is impossible to expect every physician to master all the details. But AI excels at this. I believe this may even be the first domain where both doctors and AI can jointly agree that deployment is ready to begin.
> "When you consider that classical engineers are responsible for the correctness of their work"
Woah hang on, I think this betrays a severe misunderstanding of what engineers do.
FWIW I was trained as a classical engineer (mechanical), but pretty much just write code these days. But I did have a past life as a not-SWE.
Most classical engineering fields deal with probabilistic system components all of the time. In fact I'd go as far as to say that inability to deal with probabilistic components is disqualifying from many engineering endeavors.
Process engineers for example have to account for human error rates. On a given production line with humans in a loop, the operators will sometimes screw up. Designing systems to detect these errors (which are highly probabilistic!), mitigate them, and reduce the occurrence rates of such errors is a huge part of the job.
Likewise, even for regular mechanical engineers, there are probabilistic variances in manufacturing tolerances. Your specs are always given with confidence intervals (this metal sheet is 1 mm thick ± 0.05 mm) because of this. All of the designs you work on specifically account for this (hence safety margins!). How these probabilities combine and interact is a serious field of study.
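To illustrate with that sheet-metal spec (standard stack-up math, made-up numbers): stack three of those 1 mm ± 0.05 mm sheets.

    import math

    tolerances = [0.05, 0.05, 0.05]   # mm, one per sheet

    worst_case = sum(tolerances)                        # 0.15 mm
    rss = math.sqrt(sum(t**2 for t in tolerances))      # ~0.087 mm

    # Worst case assumes every sheet is off in the same direction;
    # root-sum-square (RSS) assumes independent variation.
    print(f"stack: 3.0 mm +/- {worst_case:.2f} mm worst case, "
          f"+/- {rss:.3f} mm RSS")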
Software engineering is unlike traditional engineering disciplines in that, for most of its lifetime, it has had the luxury of purely deterministic expectations. Nearly every other type of engineering has no such luxury.
If anything the advent of ML has introduced this element to software, and the ability to actually work with probabilistic outcomes is what separates those who are serious about this stuff vs. demoware hot air blowers.
A few years ago, on my birthday, I quickly checked the visitor stats for a little side project I had started (r-exercises.com). Instead of the usual 2 or 3 live visitors, there were hundreds. It looked odd to me—more like a glitch—so I quickly returned to the party, serving food and drinks to my guests.
Later, while cleaning up after the party, I remembered the unusual spike in visitors and decided to check again. To my surprise, there were still hundreds of live visitors. The total visitor count for the day was around 10,000. After tracking down the source, I discovered that a really kind person had shared the directory/landing page I had created just a few days earlier—right here on Hacker News. It had made it to the front page, with around 200 upvotes and 40 comments: https://news.ycombinator.com/item?id=12153811
For me, the value of hitting the HN front page was twofold. First, it felt like validation for my little side project, and it encouraged me to take it more seriously (despite having a busy daily schedule as a freelance data scientist). But perhaps more importantly, it broadened my horizons and introduced me to a whole new world of information, ideas, and discussions here on HN.