
See previous discussion: https://news.ycombinator.com/item?id=42014650

81 comments, 241 points.


This is such a cool experiment, and those comments basically boil down to "Lol, it's just Minecraft."

Welcome to Hacker News. Home of the shallow dismissal.

I could knock up a clone of these shallow dismissals in a weekend.

See previous discussion

https://news.ycombinator.com/item?id=39694045

163 points | 252 comments


As long as there is liability, there must be a human to blame, no matter how irrational. Every system has a failure mode, and ML models, especially the larger ones, often have the oddest and most unique ones.

For example, we can mostly agree CLIP does a fine job classifying images, except that if you glue a sticky note saying "iPod" onto an apple, it will classify it as such.

No matter the performance, these are categorically statistical machines reaching for the most immediately useful representations, yielding an incoherent world model. These systems will be proposed as replacements for humans; they will do their best to pretend to work; they will inevitably fail over a long enough time horizon; and a human accustomed to rubber-stamping their decisions, or perhaps fooled by the shape of a correct answer, or simply tired enough to let it slip by, will take the blame.


Lenses seem to be a functional pattern that groups getters and setters into composable functions.

Consider pair(a, b) with first(pair) and rest(pair). I can define first-lens and rest-lens as the pairs [get-first, set-first] and [get-rest, set-rest] respectively. I could use them as simple getter/setter groupings with lens-view(rest-lens, pair) or lens-set(first-lens, 4, pair), which isn't so useful by itself, or I can compose them: lens-compose(rest-lens, first-lens) yields a second-lens (the first of the rest), letting me lens-set(second-lens, 7, pair) without a nested call.
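To make that concrete, here's a minimal toy sketch in Scheme. The names mirror the Racket library's [1], but the encoding (a lens as a bare getter/setter pair) is my own assumption, not that library's actual implementation:

  ;; A lens is a getter/setter pair; the setter returns an
  ;; updated copy of the target instead of mutating it.
  (define (make-lens getter setter) (cons getter setter))
  (define (lens-view lens target) ((car lens) target))
  (define (lens-set lens value target) ((cdr lens) target value))

  ;; Lenses over the two halves of a cons pair.
  (define first-lens
    (make-lens car (lambda (p v) (cons v (cdr p)))))
  (define rest-lens
    (make-lens cdr (lambda (p v) (cons (car p) v))))

  ;; Compose an outer lens with an inner one: viewing drills
  ;; inward; setting rebuilds the structure back outward.
  (define (lens-compose outer inner)
    (make-lens
      (lambda (t) (lens-view inner (lens-view outer t)))
      (lambda (t v)
        (lens-set outer (lens-set inner v (lens-view outer t)) t))))

  (define second-lens (lens-compose rest-lens first-lens))
  (lens-view second-lens '(1 2 3))   ;; => 2
  (lens-set second-lens 7 '(1 2 3))  ;; => (1 7 3)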

I just tried to summarize what I could read [1] [2] [3] and I don't claim to entirely understand it, but it's still better than the sibling comment dismissing it as abstract nonsense.

[1] https://docs.racket-lang.org/lens/lens-intro.html

[2] https://hackage.haskell.org/package/lenses-0.1.4/docs/Data-L...

[3] https://theswiftdev.com/lenses-and-prisms-in-swift/


Clojure has a really nice `juxt` function for that, which I have an implementation of in my `.guile` file.

  ;; Return a procedure that applies each procedure in FNS to the
  ;; same arguments and collects the results in a list.
  (define (juxt . fns)
    (lambda args
      (map (lambda (fn) (apply fn args)) fns)))
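For example, the returned procedure fans one argument list out to every function:

  ((juxt + - *) 5 3)  ;; => (8 2 15)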


But were said people well-informed to begin with? Voting can only work if everyone's on the same page regarding both the policy being implemented and its consequences, thus being able to defend their best interests. If the people have been consuming provably false messaging pushed by a fossil fuel industry lobby and sensationalist media, can they be trusted to make a choice that affects not just themselves but their entire city, nation, and planet?

Representative democracy does attempt to solve this problem: you pick a representative who is aligned with your interests on the issues you do understand, and trust that they and their experts will do well on the rest.

It's not a perfect system, but unless the populace can be given free time and education to consider individual policies based on reliable evidence and not hearsay from biased sources, it's the best we can do.


What about negative externalities? If we were to force all actors to pay for their emissions and toxic by-products, factoring the entire lifecycle into the cost, what would it look like then? If, even with that in mind, solar loses to nuclear, that would be a fairly interesting result.


The total lifecycle cost of nuclear is massive; rehabbing a nuclear plant runs well into the tens of billions.


> Your peak score: 8.01 BPS

Well, I'd say permanent brain damage from scarring alone wouldn't quite be worth an extra 1 click per second...

What it's really showing is a very different problem: the mouse is a fairly slow and inefficient input method. While some use-cases, like 2D/3D panning, drawing, etc., benefit from a pointing device, most of a typical UI is best driven by a keyboard. I doubt any brain-computer interface can achieve 100+ WPM on arbitrary inputs with a low error rate.


And what we do see is terrible: bland art, more spam and political astroturfing than ever in human history, bad code, and ignorant lessons, all to the tune of a PR campaign to shift the Overton window towards praising incompetence and denigrating hard work.

The only real accomplishments of LLMs are how good the proposed use-cases sound on paper, assuming competent implementation, and a theoretical solution to unstructured-data parsing that is still too heavy to be worth a tiny bump in performance.

Do you want to live in a future where all human thought has been replaced by its surface-level reproductions, made by big tech stuffing copyrighted works into a GPU farm with near-zero human labor? We both know it won't benefit you or me; our role is merely transitory, bootstrapping their self-improvement under the guise of a paid product, and the relationship between us and these tools was never in any way collaborative in the first place.

This fantasy targets the owner class, which can finally dream of labor decoupled from the laborer, the work simply costing no more than the price of electricity, all without demands for livable compensation or adherence to best practices. Even if LLMs gained above-human performance in all domains of knowledge, shortly followed by the institution of a universal basic income, their invention will still have been a force of stagnation, learned intellectual helplessness, and overconsumption.


    > terrible; bland art...bad code
Ironically, all of this means that we're not at the apex and there's still a long ways to go both in terms of algorithms and the hardware to run them.

    > Do you want to live in a future where all human thought has been replaced by its surface level reproductions, made by big tech stuffing copyrighted works into a GPU farm with near-zero human labor?
Whether we want to or not, it's the apparent path that will unfold; there's no putting AI back into the box. The race is already on.


> there's still a long ways to go

Sure, in the sense that this is technically a computable problem, but maybe not a simple one. Any exponential in the real world is a sigmoid, and the fact that all major AI labs, having spent years and incomprehensible sums of cash, have arrived at about GPT-4 performance, including OpenAI's latest release being a smaller model, should tell us something. Be the limiting factor corpus size, model parameters, or an architectural defect, we're clearly losing momentum, at least until the cause is diagnosed and solved. Deferring to hypothetical futures without meeting the burden of evidence seems ill-advised, especially when it comes to incompetent use today.

> there's no putting AI back into the box

There's also no complete undoing of an oil spill, nor any practical possibility of unilateral nuclear disarmament. The weights are public, the corpus is as well (however much gathering it broke the open web), and the architecture is known; ergo, today's open-weight models are the baseline of capability for all future models.

Still, seeing that LLMs form a natural monopoly given the compute required to enter the field, we can enforce policies like mandatory statistical fingerprinting on all outputs of proprietary models beyond a certain size. LLM detection is also getting quite good; my hope is that adversarial fine-tuning will work out like Bayesian poisoning did for email spammers, only giving the discriminator new stable patterns to look for. We as consumers do have power, however small and unconcentrated, to vote with our wallets, which a study posted here showed many are beginning to do [0]. The copyright question also offers quite a nice kill switch if our society decides this whole industry isn't that beneficial, removing the commercial incentive for training new models while providing some incentive for detection.

There's great value in transformer networks, such as state-of-the-art speech recognition, that will make its way into consumer products and be here to stay. As for the fads, like useless chatbots in every product, those may come under question.

[0] https://news.ycombinator.com/item?id=40390375


KDE? Cinnamon? MATE? LXQt?

When I was in the GNOME bubble, I too thought GNOME was the be-all and end-all of Linux DE usability, with everyone else being savages slapping together UIs without so much as a style guide. Perhaps at some point this may have been partially true.

Today, all major DEs are fine. Plasma has not crashed once for me since 2019, and I think its UX is quite nice; in Dolphin's case in particular, visibly better than GNOME's. At the same time, GNOME has had routine issues with extensions, semi-frequent crashes, and odd non-compliant bits like refusing to support tray icons, breaking apps that depend on them; and the last time I checked, scaling was a mess.

I do think the GNOME/libadwaita ecosystem is a fantastic achievement and agree with many of their ideas, but it would be dishonest to say all other DEs are inferior and don't deserve any consideration as a default.


Only KDE and Gnome offer proper Wayland support, making them, in my eyes, the only truly modern DEs on your list. Of these two, Gnome is the only one that doesn't glitch constantly. It's true that Plasma finally stopped krashing every time you look at it funny, but it's still a bloated, glitchy mess of a DE. You really can't count on KDE to provide updates without introducing major bugs and stability issues, as proven by the recent Plasma 6 release. Believe me, I hate Gnome for how poorly managed and anti-user it is, but at the same time I see no real alternatives.


I don't think this is true; I haven't had Plasma crashes for a long time now. Even early into Plasma 5 it was stable.

Also, "bloat" is so very debatable. You can't on the one hand complain about Gnome being unusable and then turn around and say Plasma is bloated. Uh, that "bloat" is the difference between the two!


The difference between the two is that Gnome at least tries to consistently deliver a stable, high-quality desktop experience. I don't like the way Gnome works, I don't like the way their organization is run, but I like it when my computer just works. I don't have time to fuck around with buggy Plasma widgets. I was a Plasma 5 user for two years until 2022, and I still use it on my Steam Deck occasionally. My opinion is based on experience.


I've never had a Gnome extension not break between releases. They're basically Plasma widgets, except not even made by the Gnome devs. Terrific...

