
These "what ifs" are kinda funny because the origins of JSX can be traced back to Facebook's XHP[1], which took explicit inspiration from E4X[2], an early JS standard that looked and behaved similar to the library described here.

[1] https://engineering.fb.com/2010/02/09/developer-tools/xhp-a-...

[2] https://en.m.wikipedia.org/wiki/ECMAScript_for_XML


E4X had the unfortunate downside of returning actual DOM instances, which needed to be updated imperatively. That's why JSX eclipsed it, and there hasn't been a serious proposal for HTML templating in JS since then.
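For anyone who never saw it, here's a rough sketch of what E4X looked like (Firefox-only and long gone, so treat this as from-memory pseudocode rather than runnable JS):

    var name = "world";
    var node = <p title="greeting">Hello, {name}!</p>; // a live XML object, not a string
    node.@title = "updated";            // attributes are reassigned imperatively
    node.appendChild(<em>like so</em>); // and children are appended in place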

But maybe we can revive the general idea with a modern take: https://github.com/WICG/webcomponents/issues/1069


> had the unfortunate downside of returning actual DOM instances, which needed to be updated imperatively.

Isn't this what we have in TFA?


Yes, for elements. The project here also supports a notion of components, though, which E4X didn't contemplate.


There are separate web components proposals that get rid of imperative updates. https://eisenbergeffect.medium.com/the-future-of-native-html...


Also, E4X was only ever implemented in Firefox, and it never really got traction even there.

But even setting aside the single-implementation problem, it just wasn't a good language model, nor was it well specified or defined, and it brought with it a pile of weird baggage and complexity.

Then, because it was The Future, there was no real thought given to proper interop with JS (it was essentially a completely independent spec, so it adopted the general syntax but specified it in a way that meant JS could not simply adopt that syntax).


> E4X had the unfortunate downside of returning actual DOM instances, which needed to be updated imperatively

Firefox never shipped the optional E4X DOM APIs. I wrote a polyfill for them at the time.[1]

1. https://github.com/eligrey/e4x.js/blob/master/e4x.js


With "imperatively" you mean that the user of the templating system has to do it imperatively, and that is bad? Asking because imperative updates seem to be the way to go within the implementation, instead of creating new instances of elements every time.


> which needed to be updated imperatively

VanillaJSX seems to suffer from the same problem though.


Fun fact: E4X is the reason JavaScript has ‘for(of)’ instead of ‘for each’ (the reason we didn’t get ‘for (:)’ is even dumber - it would conflict with the ‘:type’ annotations a few TC39 members were convinced would magically be in the language).
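For context, a sketch of the E4X-era loop next to what eventually shipped (the ‘for each’ form is long gone, so it's shown as a comment):

    // E4X / JavaScript 1.6, Firefox only - iterated over values rather than keys:
    // for each (var v in [1, 2, 3]) { ... }

    // What we got instead, standardized in ES2015:
    for (const v of [1, 2, 3]) {
      console.log(v); // 1, 2, 3
    }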


Like the type annotations that are now in TypeScript?


Yup, the ones that were in TypeScript and Pascal (and Rust, etc., when they came out).

But there was no real progress after years of them pushing this syntax while failing to actually define a type system that was coherent, or a model that would allow one.

As a result I proposed `for (of)` largely to prevent sane enumeration from being blocked on the intransigence of two people.

It's also worth noting that for(:) enumeration would not even preclude their syntax - it's certainly not grammatically ambiguous - and most real-world code in languages that support direct enumeration and type inference doesn't explicitly specify the types, so the ugliness of `for(let a:type:expression)` would have been rare anyway.

shrug

Given that ECMA literally killed E4X a few years later, the blanket ban on "for each" or "foreach" (because it would be "confusing" in E4X) is arguably worse than for(:), but again, shrug.


There is a proposal to add them, though it does seem to be stalled.


There were proposals almost 2 decades ago. They've never gone anywhere because proponents of type specifiers don't want to do the necessary corollary: specifying the type system.

TypeScript and similar can do it because they don't have to specify the type system, and can change it in meaningful ways over time. Things in the language standard cannot be easily changed, if they can be changed at all.


> the necessary corollary: specifying the type system.

It's clearly not strictly necessary though. Python has shown that.

I mean, I agree it is pretty mad to just say "you can write types but they mean whatever", but surprisingly, in practice it seems to work OK.


It is necessary if you’re creating a standard.

The Python implementation can do whatever it wants because “Python” does not mean “The Python Language Specification”. It means the one specific implementation, and whatever that impl does is definitionally correct.

The ability of a language specification to hand-wave behaviour is very limited, and for JS it is non-existent (the only places where there is divergence between implementations are some squirrely edge cases of property and prototype-chain mutation during for(in) enumeration).

So you can’t say “types mean whatever”; you have to specify what the implementation is required to do when it encounters those annotations. Even if they are not meant to have any semantic impact, the lack of semantic impact must be specified: e.g. the language specification would be required to state “here is the valid grammar for these annotations”, and to specify that they are explicitly ignored and must not be evaluated or examined in any way.


> The Python implementation can do whatever it wants

No, you're misunderstanding how it works in Python. This isn't like Rust, where "the implementation is the specification".

The Python type checking standards explicitly don't define semantics (though I think they give guidelines). The standard Python implementation - CPython - does not include a static type checker. There is no "official" implementation.

In fact, there are at least 4 Python static type checkers; they are all third-party projects, and they do sometimes differ in their interpretation of types. The most popular ones by far are Mypy and Pyright (and Pyright is the far superior option).

So it is exactly the same as what is proposed for JavaScript. It definitely sounds mad and I do agree that it would be better if they just actually specified semantics, but not bothering isn't the complete disaster you might imagine.


No, I think you’re misunderstanding my point (in your defense, I was unclear): in an environment like JS you cannot leave anything as “it’s up to the environment” - the JS engines must be 100% consistent, which means the exact semantics of the syntax must be specified. E.g. if you were to, say, add an optional type suffix to the language:

    OptionalType := (':' <Expression>)?
You have to specify what that means.

Does the expression get evaluated? E.g.

    let x : Foo.Bar = …;
Does this resolve Foo, or the subsequent property access of Bar?

Is the expression unrestricted? E.g. could it be (a=>a)()?

If you want something to be invoked at runtime you have to specify how and when that occurs (and you’re now going to have to specify what is being passed).

You have to specify when evaluation or calls happen, etc.

The problem for an environment like JS is you cannot add a language feature and not specify the exact behaviour.

E.g. it’s not “you must define a type system” (though for the parties involved in pushing this when I was involved, it would have been); it’s that even if you aren’t actually interested in defining a type system, you have to do a lot of design and specification work, because there cannot be ambiguity or gaps where different engines will disagree on what is valid, on what portions result in any evaluation, or on what semantic effects occur. The specification also needs to handle other things that don’t matter in the Python use cases: what happens if I have a library that does type checking, but then my code is included in an environment that also does type checking, but does it differently?

In Python it’s acceptable to say “don’t do that”, but in JS that’s not sufficient, the implementations need to agree on the result, so the result needs to be specified, and ideally the specification would need to provide semantics that simply support that.

Note that none of this is unsolvable, it’s just a lot of work, and not defining the type system doesn’t remove that specification work.


> the scene is moving towards Leipzig

Speaking as someone who's split ~half my time between Leipzig and Berlin for the last 5 years, this is not true.

Leipzig's club scene is an extension of its university population. It's younger, straighter, whiter, and about two orders of magnitude smaller. People visit the clubs while they're going to school there, then they graduate and move elsewhere. Often to Berlin.

Why does this illusion exist? Because Leipzig is about an hour away from Berlin by train. Berliners visit for a weekend and think "wow, it's like Berlin in the 90s! Still cheap! And look at all these cool young kids at these scrappy clubs -- so that's where the underground has gone!". Then they go back to Berlin and spread the word to credulous out-of-towners, who go on to repeat this truism to people who have never visited either city.

In reality, Berlin's club scene -- both "mainstream" and "underground" -- dwarfs that of any other city. Nothing short of an asteroid hit is likely to change that.


The main reason why Leipzig won't become the "next Berlin" is that rents are increasing there as fast as in Berlin. It is still cheaper, but not "let's try and find out" cheap.

I think, for a moment around 15 years ago, Leipzig was dirt cheap and had quite a bit of momentum, because you didn't need a business plan to try and make things happen. People - students from west Germany - paid 100€ for rent and 30€ for an atelier. Lots of raw excitement and empty buildings.

But the momentum died maybe 8 years ago. What's left is nothing like Berlin. Cheaper, but still expensive; liberal-ish, but all kartoffel. Leipzig doesn't feel exciting anymore, just small. And it's deep in enemy territory, a very depressing region; the mere thought doesn't spark joy at all.


Techno has always been about that in Europe - young and white. And it is not in Berlin anymore.

Berlin can have the heritage, but NOW it's not there anymore, do you get it?


If we limit this claim to _just_ German manufacturing competitiveness (which is often synonymous with "European manufacturing" in the context of these discussions -- and I will assume this is true of your comment as well, given the nuclear energy remark), this really isn't true.

What changed my view on this was one of Adam Tooze's newsletters from last year.[1] The relevant points here are:

- German manufacturing is less gas-intensive than the global average.

- Energy costs only constitute a small and decreasing share of total industrial costs -- about 5.8% for the German manufacturing industry as a whole, and 3% for leading export sectors (namely the auto industry).

- Most German manufacturing -- and this is true of European manufacturing more broadly -- is primarily in high-margin, value-added sectors where quality, rather than cost, is the competitive factor.

Another important detail lost in the invocation of "cheap gas" is that Europe as a whole, and Germany in particular, has never had particularly "cheap gas." European natural gas prices have long been well above those in the US, and Germany's have been above the European average for the last 15 years or so.

I agree with both Tooze and the majority view that, notwithstanding the above points, Germany's post-war energy policy has been a catastrophe. It has certainly limited its manufacturing potential. But the effect of the Ukrainian war on its _already_ limited manufacturing sector, due to _already_ not-so-cheap energy, tends to be exaggerated.

[1] https://adamtooze.substack.com/p/chartbook-150-why-cheap-rus...


Air travel accounted for 2.5% of CO2 emissions in 2020, and its total contribution to global warming was probably closer to around 3.5%.[1]

Only 11% of the world's population travelled by air in 2018, with at most 4% taking international flights, and 1% of the world's population accounting for more than half of total emissions.[2]

Passenger air travel is projected to grow by about 44% by 2050,[3] and will probably take up an even more substantial slice of overall emissions by then because technologies to decarbonize air travel (other than direct carbon capture) do not yet exist.

The argument you are making is probably least compelling when applied to air travel compared to any other form of consumerism, and HN's readership (generally speaking) is uniquely culpable here.

[1] https://ourworldindata.org/co2-emissions-from-aviation

[2] https://www.sciencedirect.com/science/article/pii/S095937802...

[3] https://www.eurocontrol.int/article/aviation-outlook-2050-ai...


People aren’t going to give up travel and it’s unrealistic to expect them to do so. If we rule that out, what alternatives remain?

I assume the answer is going to come in the form of reducing emissions from a typical flight and extracting CO2 from the air.


“Curtis’s stitched-together compositions are less collages than they are Rorschach blots: look into their murk, and you can find your own worldview confirmed.”

This passage, which I see as being core to the author’s critique, doesn’t really jibe with me. Basically he’s saying these movies are enigmatic, they offer space for reflection, and they resonate with a lot of people across the ideological spectrum.

Well, you can look at that cynically, or you could say that’s precisely — almost definitionally — what makes them effective art.

To go a little further than that: I don’t think it’s a fair claim. Each of his movies since Bitter Lake has had the same general arc of “emerging ideological apparatus promises to resolve social contradictions and empower the common person, fails to do so.” In this sense, his narrative angle is broadly _anti-confirmatory_.

Then there’s the aesthetic critique, which, whatever. I really can’t fault someone for finding fault with Adam Curtis’s style. And it’s least overbearing in TraumaZone out of all his movies I’ve seen, so I get why this author favors it.

As it happens, I’m about halfway through TraumaZone right now. It’s great. Poses some interesting questions, doesn’t offer any easy answers.


I was going to comment on the same passage. There's definitely something Rorschach about Curtis's work.

But the author's focus on ambience, Eno and what he calls "texture" I think misses the point. I appreciate Curtis's eschewing of an explicit "voice of God" narration, like you get in most PBS documentaries. The Rorschach Test forces you to peer into the murk; you can't consume these documentaries passively.

That is, they make you think. That different viewers can come away with different takes on a film is a good thing, not a criticism. I think the author is unduly dismissive.


TraumaZone is in my humble opinion a masterpiece. But to me it doesn't show that his previous work is any less incredible. In his previous work he explicitly sets out a (usually unconventional) thesis and then attempts to explain it in detail.

In TraumaZone, the full title explains what he wants to do this time: "Russia 1985–1999: TraumaZone (What It Felt Like to Live Through the Collapse of Communism and Democracy)". He wants only that you get a feeling of what it was like to experience what these people experienced, and he does that incredibly well, as far as I can judge.

From TFA:

> And yet this is hardly a sufficient explanation.

I find it incredible that this isn't sufficient for the author. It fits with everything!


I would like someone with right-wing politics to explain how their worldview is confirmed by his works. To me the messaging seems undoubtedly Marxist, and his Wikipedia page mentions that Curtis was influenced by Max Weber. He is also quoted as implying that he is a left libertarian, though it's interesting to note that he also implies he doesn't know there is already an existing tradition of left libertarianism, since he predicts one will emerge.


I think the point where the message of Curtis' films most nearly intersects with the conservative mindset is in viewing many (most?) efforts to reform society as doomed or counterproductive.


How does it seem Marxist? I think Marxist type of materialist critique is absent from his works and he tends toward airy notions of "power" and a "great man" view of history.


Marxist, as in his contrasting of power relations (usually shadowy owners/bureaucrats vs consumers, if not proles), and explicit in that all operate in a global market. I.e. the dynamics are always worldwide, connected, interchangeable.

Also Marxist by extension, because Curtis is taking after Debord and Baudrillard: wafting underneath it all is a vapor - the commoditization of everything, the flattening of reality, mass media as a factory for dispositions.

Fwiw - I don't necessarily see the great-man view; it's more a TED-talky way of anchoring his stories. E.g. sure, Clinton xyz, but it could just as well be interchangeable-US-president xyz.


Thanks for your perspective. I have a more orthodox understanding of Marxism that leads to different conclusions, but I can see the connections you make as well.


+1 daydream's comment on LLMs.

But I think the FP hype was already starting to run out of steam for unrelated reasons.

The briefest account I can offer on "peak FP" from my (web developer) perspective is that

1) functional programming was already enjoying a moment of renewed interest and vitality due to the increasing ubiquity of multi-core,

2) React/Redux -- which ~solved[1] many problems with increasingly complex frontend web/mobile state management -- really started to become mainstream around 2016-2017,

3) Node/TypeScript was in the midst of a popularity / enterprise-adoption explosion (in part due to #1) and only served to amplify the general enthusiasm around FP among JS-literate engineers (in part due to #2).

In the intervening years, the React paradigm more-or-less "won" and multi-core is taken for granted. A huge chunk of our industry has probably never known a time where FP wasn't celebrated. For that reason, it no longer seems to be answering any pressing problems, and naturally there are fewer articles being written about it.
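To make the React/Redux point concrete, here's a minimal sketch of the pattern that went mainstream - a pure reducer over immutable state (illustrative only, not any particular app's code):

    const initialState = { count: 0 };

    // A Redux-style reducer: pure, never mutates, maps (state, action) -> new state.
    function counter(state = initialState, action) {
      switch (action.type) {
        case "increment":
          return { ...state, count: state.count + 1 };
        default:
          return state;
      }
    }

    let s = counter(undefined, { type: "increment" }); // { count: 1 }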

[1] I expect this to be a point of consternation, since this is HN, but the point is that React was at least _perceived_ to have been an antidote to a variety of issues people faced with Angular, Backbone, plain old jQuery apps, etc. and the unifying theme of those issues was (rightly or wrongly) perceived as "OO / mutable state bad."


I think the reluctance to blame phones is not just that people have a personal, dopaminergic fixation on them.

There are a couple of other factors at play:

1. Thinking people, especially of the sort that frequent Hacker News, want there to be a more complex and multivariate reason behind the "big problems" like loneliness and alienation. It feels wrong to just say "phones bad."

2. Many of us grew up with emerging technologies, like video games and the internet itself, that were reflexively rejected by generations older than us for reasons that seem poorly thought out in retrospect. We pattern match on the type of blithe Luddism we grew up with, and are instinctively reluctant to say "phones bad" for fear of falling victim to it.

But I agree with you, and the thesis of this article. The "phones" argument has such obvious explanatory power for a wide range of regressive social phenomena that there needs to be overwhelmingly strong evidence for some other catalyst to drop it from consideration.


I would greatly appreciate a moratorium on this genre of article until there is compelling accompanying evidence that a meaningful portion of ChatGPT's users are unaware of these shortcomings. I have yet to encounter or even hear of a non-technical person playing around with ChatGPT without stumbling into the type of confidently-stated absurdities and half-truths displayed in this article, and embracing that as a limitation of the tool.

It seems to me that the overwhelming majority of people working with ChatGPT are aware of the "con" described in this article -- even if they view it as a black box, like Google, and lack a top-level understanding of how an LLM works. Far greater misperceptions around ChatGPT prevail than the idea that it is an infallible source of knowledge.

I'm in my 30s, so I remember the very early days of Wikipedia and the crisis of epistemology it seemed to present. Can you really trust an encyclopedia anyone can edit? Well, yes and no -- it's a bit like a traditional encyclopedia in that way. The key point to observe is that two decades on, we're still using it, a lot, and the trite observation that it "could be wrong" has had next to no bearing on its social utility. Nor have repeated observations to that effect tended to generate much intellectually stimulating conversation.

So yeah, ChatGPT gets stuff wrong. That's the least interesting part of the story.


>I would greatly appreciate a moratorium on this genre of article until there is compelling accompanying evidence that a meaningful portion of ChatGPT's users are unaware of these shortcomings. I have yet to encounter or even hear of a non-technical person playing around with ChatGPT without stumbling into the type of confidently-stated absurdities and half-truths displayed in this article, and embracing that as a limitation of the tool.

There was the ChatGPT program for reviewing legal documents that the creator posted here weeks ago. Several people pointed out the dangerous shortcomings in the application, which the creator completely ignored (it got the entire directionality of the ycombinator SAFE wrong, among other things), and numerous posters exclaimed things like "going to use this on my lease!". So I think you are being a bit disingenuous with this whole "it's just Wikipedia" thing and pretending that no one would use it ignorantly. It's just obviously not true, and that's just from perusing the comments here.


I used ChatGPT to write cover letters and to create job-specific resumes (with an additional tool).

Then those documents resulted in employment.

I had to edit some, and I went over all of them.

I have to assume people look at the thing they understand may be inaccurate (because you can't possibly miss THAT fact) and give it at least a quick once-over. Failing that, it's a failure of the person, not the tool.


How are you going to tell if it accurately analyzed a legal document if you don't know how to accurately analyze a legal document? It's a tool that's being sold for jobs it shouldn't be doing - if that's the characterization that helps you understand the issue rather than turning this into "blaming the tool for something it shouldn't be doing".


Ask and verify, or integrate it with a tool that cuts the inaccuracies out. Sometimes that is not possible.

There are plenty of pieces of the legal system that would benefit, today, from adding a well-made ChatGPT process. Perhaps not perfectly, in such a flawed system.

As an example, ChatGPT could assess the actions leading to a charge and compare the law to the actions of an individual.

Before you bash the idea, I happen to know of a case where ChatGPT outperformed the US Federal government in this analysis.

One success is worth the cost.


Wow what an amazing and impossible to argue against anecdote that defies any examples I've seen.


[flagged]


Perhaps you have issues with reading comprehension? This is a thread about how ChatGPT is being sold as a service to analyze legal documents, and it quite obviously fails at that. If your solution is to see a lawyer, you are making my point that ChatGPT is not helpful for this thing that people are saying it is helpful for.


> Perhaps you have issues with reading comprehension? This is a thread about how chatGPT is being sold as a service to analyze legal documents

No it is not


Certainly my posts were, and it's a mystery what point you think you are making by trying to debate something with me that I was never discussing.


Hire a lawyer

Directly addresses your concerns

This thread started out on cover letters


Okay, but I posted about the examples of ChatGPT giving legal advice, so there's something you fundamentally don't seem to be grasping about the pointlessness of talking to me about resumes.


Here's an ACM blogger who was taken in by ChatGPT:

https://news.ycombinator.com/item?id=34473783

If you know math, you immediately recognize that the smallest-degree polynomial that has values 0, 1, 4, 9, 16, 25, 35 at 0, 1, 2, 3, 4, 5, 6 respectively is f(x) = x^2 - x(x-1)(x-2)(x-3)(x-4)(x-5)/720.

So you know that f(n) = n(n+1)(2n+1)/6 won't work, and that ChatGPT is bullshitting you.
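A quick sketch to check this yourself (assuming the sequence above is the one from the linked thread):

    // The unique degree-<=6 interpolant through (0,0)...(5,25),(6,35):
    // x^2 everywhere, minus a correction that only kicks in at x = 6.
    const f = x => x**2 - x*(x-1)*(x-2)*(x-3)*(x-4)*(x-5)/720;
    console.log([0, 1, 2, 3, 4, 5, 6].map(f)); // [0, 1, 4, 9, 16, 25, 35]

    // ChatGPT's closed form n(n+1)(2n+1)/6 diverges as early as n = 2:
    const g = n => n*(n+1)*(2*n+1)/6;
    console.log([0, 1, 2, 3, 4, 5, 6].map(g)); // [0, 1, 5, 14, 30, 55, 91]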


The issue is in the prompt, not the output.

There's nothing preventing it from involving RLHF


I showed ChatGPT to some non-technical people, and they immediately asked it politics-related questions, such as about carbon emissions. (I assume hoping it would affirm their beliefs.) These things are very nuanced -- even if the response is technically accurate, it can still leave out important items or falsely suggest importance via the specific wording.


> Is what ChatGPT tells me accurate?

> ChatGPT is trained on a large corpus of text, but like any AI model, it is not perfect and can make mistakes. The information provided by ChatGPT should be used as a reference and not as a substitute for professional advice. Additionally, the accuracy of the information provided by ChatGPT is limited by the knowledge cut-off date, which is 2021.


The worst (real) criticism I've seen of ChatGPT: "yeah I played with it but I don't really know what to do with it"


We still use Wikipedia because of convenience, not reliability, so I'm not sure what your point is. Humans will choose convenience over basically any other quality. See: K-Cups. That doesn't mean K-Cups are a net win for the world.


Thanks for the Wikipedia analogy. Given another five years of refinement, ChatGPT will be viewed and used much like Wikipedia.

it "could be wrong" has had next to no bearing on its social utility .

> Can you really trust an encyclopedia anyone can edit? Well, yes and no -- it's a bit like a traditional encyclopedia in that way.

> The key point to observe is that two decades on, we're still using it, a lot, and the trite observation that it "could be wrong" has had next to no bearing on its social utility. Nor have repeated observations to that effect tended to generate much intellectually stimulating conversation.


+1. Not even sure this is "eligible" for an HN post. It actually makes less sense than the CNN ones I saw earlier, and boy, those were terrible takes.


Let me start by saying that I really like Elixir as a language and ecosystem. That said, I don’t think it’s a good choice for a first language.

Elixir excels at building highly available networked backend applications. Not that it can’t be good for other things, but I personally tried my hand at using it as a scripting language (as a complete newcomer to programming might). For this purpose I had to learn things like Erlang interop, module composition, setting up a mix project, etc. — just to get a small project off the ground. Even as a seasoned developer familiar with syntactically and conceptually comparable languages (Ruby, OCaml…), I found the self-instructional overhead to be atypically high for a programming environment.

This is a bit of a tangent, but I’ve noticed that whenever “best first language” discussions come up, responses tend to be biased toward whichever language people first threw something together in for fun. For me that was TI-BASIC. By no means do I think that was a “good” first language - it’s just what was around. But it passed the litmus test of letting me quickly iterate from writing calculator-crashing output loops to making little games for my friends.

For all its virtues, Elixir was originally developed to serve a community of experienced web devs trying to solve problems most newcomers don’t know about and won’t encounter — and it shows. I think most novices would be better served by a more “mainstream” scripting language like Python, Ruby, JS, etc.


As the original submitter, I will one-up your ad hominem by pointing out that the blogger was an enthusiastic booster of peak oil theory in the mid-00s (as one might infer from the domain name). I don’t endorse her analysis, but I nevertheless found the data compelling and was interested to hear others’ perspectives.


Isn't that an obvious red flag? Given the hundreds of years of oil reserves available in the oil sands, peak oil theory is complete bunk now, and it was only slightly less obviously so 20 years ago.


The theory didn't say we'd run out of oil. It said we'd run out of "cheap" oil that's easy to get to. Sands are harder to extract oil from at an affordable cost. They only became financially viable as the cost of oil rose.

Drilling and exploration technology improved, so the theory is less relevant today. Still, the current cost of oil (mostly due to politics) is pretty much that.


That's not at all what the theory said; that's just pure goalpost-shifting to the point of uselessness. Tar sands are profitable with $50 oil.


I guess you're younger than I am. I recall reading articles about how the world would collapse with $40 oil...

