
Node is the only major outlier. Bun supports this convention as well: https://bun.sh/docs/api/http#export-default-syntax
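Roughly, the convention is a default export with a fetch handler (the handler body here is just illustrative):

  // Run with e.g. `bun run server.js` (filename hypothetical); Deno supports
  // the same shape via `deno serve`.
  export default {
    fetch(req) {
      return new Response('Hello from the default export')
    },
  }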

> The front end bundler ecosystem has been a hot mess forever though.

The bundler situation wasn't ideal for a few years, but I've been really happy with esbuild lately. It's incredibly fast, has zero dependencies (besides the Golang sys module which is a de facto part of Go's extended standard library), and is much easier to configure than Webpack. I even heard DHH praise esbuild on The Changelog podcast recently, and he's a notable anti-build evangelist.
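For comparison, a complete esbuild setup can be this small (a sketch; the entry point and output paths are made up):

  import * as esbuild from 'esbuild'

  await esbuild.build({
    entryPoints: ['src/app.ts'], // hypothetical entry point
    bundle: true,
    minify: true,
    outfile: 'dist/app.js',
  })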


Yeah Vue 2 used object getters and setters, and there were a few tricky caveats. For example, setting an array item using an index wouldn't trigger the setters (e.g. `myArray[0] = ''`). You had to remember these edge cases and use `Vue.set` [1]. Proxies made everything so much simpler, but they can't really be polyfilled at all, because they require deep JS engine integration (e.g. to register a callback that gets notified whenever a property is added to an object).

[1] https://v2.vuejs.org/v2/guide/reactivity.html
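To illustrate the difference (a rough sketch, not Vue's actual implementation):

  // Vue 2: index assignment bypassed the per-property getters/setters.
  // this.items[0] = 'updated'          // not reactive
  // Vue.set(this.items, 0, 'updated')  // the workaround

  // A Proxy traps the assignment directly, indices and new keys included:
  const observed = new Proxy([], {
    set(target, key, value) {
      console.log('set', key, value) // a real system would notify watchers here
      return Reflect.set(target, key, value)
    },
  })

  observed[0] = 'updated' // logs: set 0 updated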


There are flags you can set to tune memory usage (notably V8's --max-old-space-size for Node and the --smol flag for Bun). And of course in advanced scenarios you can avoid holding strong references to objects with weak maps, weak sets, and weak refs.
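For example (a minimal sketch; the helper names are hypothetical):

  // Run with e.g. `node --max-old-space-size=512 app.js` or `bun --smol app.ts`.
  // A WeakMap holds its keys weakly (keys must be objects), so cache entries
  // become collectable once nothing else references the key:
  const cache = new WeakMap()

  function getParsed(config, parse) {
    if (!cache.has(config)) cache.set(config, parse(config))
    return cache.get(config)
  }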

I agree with the general sentiment of your comment and I think there are several factors at play here.

* The human brain is not capable of evaluating and re-evaluating every possible option amongst the plethora of technical choices developers are faced with. This forces us to develop certain coarse grained mental heuristics (prejudices and biases) to navigate technology, and even if these broad generalizations are roughly true initially, we tend not to re-evaluate them over time. This leads to stale biases (e.g. some library/language was missing an API 10 years ago, and someone formed an immutable opinion on it).

* These broad generalizations lack nuance. I watched a talk recently by Dan Abramov where he calls these heuristics (I'm paraphrasing) a form of information compression [1]. That compression is lossy — it doesn't preserve the original context in which the heuristic was formed.

* There's some insecurity at play here too. Developers want to believe that they've chosen The One True Solution, and harshly invalidating the alternatives is one way to reinforce that fantasy.

* And of course, social media has exacerbated this problem by rewarding inflammatory hot takes. You won't get nearly as many views/upvotes/likes for a sober take that says "technology X is well suited for this narrow use case" as you will for a hot take that says "why technology X failed", or "why everyone hates technology X".

You might enjoy this link: https://blog.aurynn.com/2015/12/16-contempt-culture

[1] https://www.youtube.com/watch?v=17KCHwOwgms


I think a general lack of criticism also plays a part. When someone has decided that X is the best approach, they tend to point to blog articles that favour their point of view as "proof", while dismissing other points of view as "uninformed".

A typical example is all those "We rewrote our service from X to Y and got huge benefits" articles.

- They are ignoring the fact that the new version has the benefit of years of experience with the actual problem domain and can be optimized accordingly.

- They also tend to use a different stack, such as a more specialized database or async processing with message queues, that provides huge benefits on its own.

Someone will always cherry pick some aspect of that article (language or choice of database) as proof that their point of view is correct, while ignoring the fact that they are not comparing apples to apples.

To get a real comparison they should have written a third system using their new architecture and the old language, but that would of course be hard to justify outside of academic research. The developers probably wouldn't do it anyway, because if the old language proved just as effective it would be harder to justify why they chose a new language. Résumé Driven Development is unfortunately a real thing.


> To get a real comparison they should have written a third system using their new architecture and the old language, but that would of course be hard to justify outside of academic research.

Good point. You need a control group to make sure you're measuring the thing you think you're measuring.


This anecdote about the double equality operator might have originated from Eich's chat with Lex Fridman where he states (at about 5 minutes and 26 seconds) that during the original 10-day sprint JavaScript didn't support loose equality between numbers and strings: https://www.youtube.com/watch?v=S0ZWtsYyX8E&t=326s

The type system was weakened after the 10-day prototyping phase, when he was pressured by user feedback to allow implicit conversions for comparisons between numbers and serialized values from a database. So it wasn't because he was rushing; it was because he caved to some early user feedback.
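The coercion he's describing is what == does to this day:

  1 == '1'   // true: the string is coerced to a number before comparing
  1 === '1'  // false: strict equality never coerces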


You can perform file I/O in JavaScript synchronously. The localStorage API in the browser is synchronous (and in Node via the --experimental-webstorage option), and of course requiring a CommonJS module is also synchronous (and there are many other sync filesystem APIs in Node as a sibling comment pointed out).

You just can't perform network I/O synchronously. (Technically a network attached file system allows for both network and file I/O, but that's a really pedantic point.)
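A quick sketch of the synchronous options (the file path is hypothetical):

  import { readFileSync } from 'node:fs'

  // Blocks the event loop until the read completes:
  const text = readFileSync('./config.json', 'utf8')

  // localStorage is synchronous too (browser, or Node with --experimental-webstorage):
  localStorage.setItem('key', 'value')
  console.log(localStorage.getItem('key'))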


> You just can't perform network I/O synchronously.

Sure you can, you just shouldn't ever do it because it blocks the UI: https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequ...
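For the curious (the URL is made up), the third argument to open() is the async flag:

  const xhr = new XMLHttpRequest()
  xhr.open('GET', '/api/data', false) // false = synchronous, blocks the main thread
  xhr.send()
  console.log(xhr.status, xhr.responseText) // runs only after the response arrives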


Yeah, I should've said there's no Node API for making synchronous HTTP requests (unless you count executing a child process synchronously). Even the older http.request API, used in Node prior to the introduction of fetch, is async and accepts a callback. Browsers have all sorts of deprecated footguns though (like the synchronous mode of XMLHttpRequest).
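The child process escape hatch looks something like this (a sketch, not a recommendation):

  import { execSync } from 'node:child_process'

  // Blocks until curl exits, so the HTTP request is effectively synchronous:
  const body = execSync('curl -s https://example.com').toString()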


> See my comment here: https://news.ycombinator.com/item?id=41829905

Your comment isn't equivalent to the original code.

Your one liner is doing this: `process.stdout.write('')`

Jitl's example is doing this: `new Promise(resolve => stream.write("", resolve))`

He's passing in a promise resolver as the callback to stream.write (this is basically a `util.promisify` version of the writable.write chunk callback). If the data is written in order (which it should be), then I don't see how the `stream.write` promise could resolve before prior data is flushed. The documentation says: "The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled". [1]

[1] https://nodejs.org/api/stream.html#writablewritechunk-encodi...
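In other words, Jitl's example is roughly equivalent to this promisified form (a sketch using util.promisify):

  import { promisify } from 'node:util'

  // The promise settles when the write callback fires, i.e. once the chunk
  // has been fully handled:
  const write = promisify(process.stdout.write.bind(process.stdout))
  await write('flushed\n')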


It's so easy to miss a tiny but important detail when working with or discussing Node streams and come out the other side bamboozled by bugs. This whole thread exemplifies why I tell people to stay as far away from streams as possible. Though in the case of stdout/stderr we can't avoid it.


> It's so easy to miss a tiny but important detail when working with or discussing Node streams and come out the other side bamboozled by bugs. This whole thread exemplifies why I tell people to stay as far away from streams as possible

100%.

Case in point: after coming back I realize that I did indeed overlook the callback/promise making the difference for the write call. Sorry for spreading lies!


It's a little bit faster (on my machine at least) if you combine the filter and map into a flatMap (it's still not as performant as the imperative solution though).

  function process3(input) {
    return input
      .flatMap((n) => (n % 2 === 0 ? n * 2 : []))
      .reduce((a, b) => a + b, 0)
  }


If it needs to be fast, use oldskool for() loops.

  function processfor(input) {
    let sum = 0
    for (let i = 0; i < input.length; i++) {
      if (input[i] % 2 !== 0) continue // skip odd numbers
      sum += input[i] * 2 // double the evens and accumulate
    }
    return sum
  }
https://jsfiddle.net/gaby_de_wilde/y7a39r15/5/


Yeah the comment I was originally responding to included a for loop (in the Pastebin link). My point is that if you're set on going down the functional route you don't need separate map and filter steps; you can just use flatMap, which is effectively a filter and map combined (returning an empty array filters out the current value, since an empty array gets flattened into nothing).

Of course, if you want the most performant solution an imperative for loop is faster (which is what I said in my last comment).


  input.reduce((a, b) => b % 2 !== 0 ? a : a + (b * 2), 0)

or if you want it really silly.

  input.reduce((a, b) => a + (!(b % 2) && b * 2), 0)

Don't ask me why, but `for (a of b)` is slower than `for (i = 0; i < b.length; i++)`.


Array.prototype.reduce can do almost anything a for loop can, since it gives you access to state from one iteration to the next. The only reason I didn't remove the flatMap in my original example and convert it all to reduce is that there's no longer any method chaining, which was the point of the original comparison between the Go and JS examples.

> dont ask me why but for(a of b) is slower than for(i=0;i<b.length;i++)

Probably because for of loops use the iterator protocol. So I'm assuming under the hood the JS engine is actually invoking the Symbol.iterator method (which is slower).

  #! /usr/bin/env node --experimental-strip-types
  
  function processIterator(input: number[]) {
    let sum = 0
    for (let i = input[Symbol.iterator](), r; (r = i.next()); ) {
      if (r.done) return sum
      if (r.value % 2 === 0) sum += r.value * 2
    }
  }
  
  function processfor(input: number[]) {
    let sum = 0
    for (let i = 0; i < input.length; i++) {
      const value = input[i]
      if (value % 2 === 0) sum += value * 2
    }
    return sum
  }
  
  
  const input = Array.from({ length: 1_000_000 }, (_, i) => i)
  
  console.time('normal for loop')
  console.log(processfor(input))
  console.timeEnd('normal for loop')
  
  console.time('iterator for loop')
  console.log(processIterator(input))
  console.timeEnd('iterator for loop')


Apps weren't what drove people to the original iPhone (the original iPhone didn't have an app store). Apple was essentially the first mainstream company to commit fully to the current smartphone design — a flat, rectangular, portrait aspect ratio brick, with a single slab of capacitive multi-touch glass. There were many other competing form factors at the time. Apple correctly deduced that touching your screen is the most intuitive way to interact with smaller devices, and they had a huge first mover advantage by committing to that paradigm early.


They also included WiFi in every model and iOS had transparent prioritization of WiFi over cellular. Apple's deal with Cingular (AT&T) also gave the iPhone plan unlimited data.

That meant the iPhone had a full fledged browser that you could actually use. The browsers on PalmOS and Windows Mobile were jokes compared to Safari, and most devices didn't have WiFi, so users were always stuck on relatively slow cellular. A lot of smartphone plans also didn't include unlimited data. The BlackBerry plans were equally terrible, tied to BBM accounts, and the browsers were even worse.

The iPhone also had a real email client that could connect directly to a POP/IMAP server. A lot of competing smartphones only supported email through carrier-run gateways or an enterprise connection. Even though it lacked features like BCC early on, iOS Mail was a lot better than the competition for normal users.

I think these all come down to Apple asking what normal people might want to do with their phones instead of what "corporate" wanted people to do with their smartphones. This was 180° from the design approach of RIM, Microsoft, Palm, and even Nokia.


Nokia did pioneer using the smartphone as a decent digital camera (among other things).

For instance, decent enough to take a picture of an A4 page and be able to read it afterwards.

And IMAP support.

And Opera Mini was a good enough browser, though mostly for text, as 3G cellular (which the first iPhone didn't have) then cost €1,000/GB (funnily enough, that felt cheap and fast at the time, because it was, compared to what came before).

(Also video calls, though those are still niche for phones.)

And I hear Nokias were themselves quite primitive compared to what Japan had?


> The browsers on PalmOS and Windows Mobile were jokes compared to Safari, and most devices didn't have WiFi, so users were always stuck on relatively slow cellular.

Again, everything in your comment seems like Apple made an arguably better offering in an existing market. That's not a first mover advantage.

If anything, you could say Apple was late to the party and learned from the mistakes of others?


> Again, everything in your comment this seems like Apple made an arguably better offering in an existing market. That's not a first mover advantage.

I never claimed Apple had a first mover advantage. They made a smartphone much more aligned with consumer desires than any of the competition. Palm, Microsoft, RIM, and Nokia all approached smartphones from the angle of business/enterprise users.

You can call Apple's approach being late to the party, but that presumes that them entering some market is a foregone conclusion. Apple has rarely if ever been truly first to market with a product.

