I don't know but it seems like LLMs are just adding small incremental improvements in the polish/iterating phase.
Some new revolutionary thing will probably happen in the future, but it might be decades away. Heck, AGI might be even harder to solve than fusion, which has been 30 years away since before I was born.
We're probably close to an AI winter. There's been a huge investment (over $1T iirc) and not that much to show for it. The euphoria of the past couple of years is definitely over.
I do think there are some very useful applications for LLMs... just not the ground breaking cancer solving thing we've been told again and again that would justify that much money thrown at it.
> I don't know but it seems like LLMs are just adding small incremental improvements in the polish/iterating phase.
A couple days ago I was searching my hard drive for <something> and found this java file I downloaded a long time ago. A long-lost project idea from days gone by that I never got around to doing anything with, though I do remember making an attempt.
So, I got the robots to work. They analyzed the code and answered a whole slew of questions about how it compares to 'modern' implementations. They converted it to C, made improvements (which they also suggested) and added functionality (which I suggested) to the original code then wrapped it all up into a gimp plugin. If I ever get around to installing gimp-devel (or whatever the Fedora package is) I can get the bugs shaken out and upload it to the gimp plugin registry -- as was also suggested by the robots.
And this was all over the course of an hour or two until, quite honestly, I got hungry and went foraging for food. I would never have spent the days to learn how gimp plugins work and, most likely, would just have let the file sit on my hard drive for another decade before even looking at it again.
I have to disagree that they are only good at small, incremental improvements. There's a bunch of papers I've collected over the years which only have the 'algorithmic code', and I plan on letting the robots loose on them; these are papers I either tried (and failed) to convert into actual code or didn't even try at all.
I love HDR for movies/shows on OLED, but other than that I agree. It really sucks that you can't disable HDR in apps like Netflix etc. It does look terrible on non-OLED TVs. In Chrome you can force a specific color profile in the settings. I believe sRGB shouldn't allow HDR content.
Personally I think the biggest benefit of HDR is not even those super bright annoying colors but 10-12 bit colors and the fact that we can finally have dark content. If you look at movies from 10-20 years ago everything is so damn bright.
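The bit-depth point is easy to quantify: each extra bit doubles the number of tonal levels per channel, which is what keeps dark gradients from banding. A quick sketch of the arithmetic:

```javascript
// Levels per color channel at a given bit depth. Shadow banding in
// 8-bit content comes from having only 256 steps to cover the whole
// brightness range; 10- and 12-bit give 4x and 16x finer gradation.
const levels = bits => 2 ** bits;

console.log(levels(8));   // 256
console.log(levels(10));  // 1024
console.log(levels(12));  // 4096
```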
Just my personal experience, but I've recently upgraded from a MBP with the M1 Max to a new MBP with the M4 Max and it does get hotter when doing heavy tasks (eg: video transcoding). It gets to 95-100ºC faster, uses more power, and the default fan curve is also more aggressive, something that Apple usually avoids doing.
It's still very efficient and doesn't run hot under normal load (right now my CPU average is 38ºC with Firefox and ~15 tabs open, fans not spinning), but it definitely generates more heat than the M1 Max under load. Apple still seems to limit them to ~100ºC though.
If it is new to the DOM it will get added. If it is present in the DOM (based on id and other attributes when the id is not present) it will not get recreated. It may be left alone or it may have its attributes merged. There are a ton of edge cases though, which is why there is no native DOM diffing yet.
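The matching logic described above can be sketched roughly as follows. This is a hypothetical toy model, not any library's actual implementation: nodes are plain objects (`{ tag, id, attrs }`) rather than live DOM nodes, and real diffing libraries handle far more edge cases (ordering, text nodes, event listeners, etc.):

```javascript
// Toy id-based reconciliation: match on id when present, otherwise on
// tag + attributes; new nodes are added, matched nodes are kept or have
// their attributes merged.
function keyOf(node) {
  return node.id != null
    ? `id:${node.id}`
    : `${node.tag}:${JSON.stringify(node.attrs)}`;
}

function reconcile(oldChildren, newChildren) {
  const existing = new Map(oldChildren.map(n => [keyOf(n), n]));
  return newChildren.map(n => {
    const match = existing.get(keyOf(n));
    if (!match) {
      return { action: 'add', node: n };            // new to the DOM
    }
    if (JSON.stringify(match.attrs) === JSON.stringify(n.attrs)) {
      return { action: 'keep', node: match };       // left alone
    }
    return {                                        // attributes merged
      action: 'merge',
      node: { ...match, attrs: { ...match.attrs, ...n.attrs } },
    };
  });
}
```

For example, an existing `div#a` with changed attributes comes back as a `merge`, while a brand-new `span` comes back as an `add`.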
I don't think it would do justice to the article. If I could write a good tldr, I wouldn't need to write a long article in the first place. I don't think it's important to optimize the article for a Hacker News discussion.
That said, I did include recaps of the three major sections at their end:
Look, it's your article, Dan, but it would be in your best interest to provide a tldr with the general points. It would help keep people from misjudging your article (this has already happened). It could also make the article more appealing to people who initially dismissed reading something so long. And providing some kind of initial framework would help those who are actually reading it follow along.
That is not a good reason to make the content unnecessarily difficult for its target audience. Being smart also means being able to communicate with those who aren't as brilliant (or just don't have the time).
Yes, it looks like a flash was used. A pyrotechnic "big chemical flash" was the standard kind in 1921, so likely that too.
I am not sure whether it was "bounced against a wall to soften" or not. I don't think our experience of what an electric flash looks like with and without bounce will apply; a pyrotechnic flash won't look exactly the same, and it won't be such a point light source for a start. So I wouldn't leap to the conclusion that there has to be a deliberate bounce.