
This is basically where I am at. Most libraries and applications are trivially typed even in languages with relatively poor type systems like Go and Java. The necessity, or even benefit, of weak and/or dynamic typing is extremely rare and most languages have workarounds or escape hatches for those exceedingly rare cases.

Dynamic typing also makes a ton of optimizations basically impossible and even after monumental efforts languages like Javascript are still quite slow, inconsistent and memory inefficient outside of trivial benchmarks.


The take I have is that with a dynamically typed language you still have a static analysis step. It's just that the analysis happens in your and other developers' brains, and it is objectively worse. I honestly can't think of a single thing in favor of dynamic typing.

I write Clojure in my day job and it’s insane how often we have issues where it would have been immediately caught by a static type check.


May I interest you in some static analyzers for Clojure that haven't been mentioned yet?

https://github.com/clj-kondo/clj-kondo

https://github.com/jonase/eastwood

You seem dissatisfied with Clojure in your day job -- I am sure there are others who would be happier in your situation.


I'm aware of these, but thank you nevertheless. Clojure has plenty of nice things balancing the scales, so it's not all pain and misery!


I'd love to learn more. Are you part of a large team (enterprise or SMB, whatever you can share)?

How have you experienced using Typed Clojure, spec, Malli, etc. to determine correctness?

I've only worked on solo projects with Clojure, with most of it fitting into my head. I imagine with teams of size N > 1 things can change quite a bit.


> Javascript are still quite slow, inconsistent and memory inefficient outside of trivial benchmarks.

I thought this meme was dead already. Of course, you might not be able to squeeze out the same amount of performance compared to a brilliantly written C or Rust program, but for what it is, JavaScript is pretty damn fast already.

DOM manipulation, on the other hand, is still a very common bottleneck people come across when writing typical JavaScript code.


> but for what it is, JavaScript is pretty damn fast already.

It's fast compared to other dynamically typed language implementations but it's still very slow compared to basically all of the popular statically typed languages.


I think what cracks me up about this conversation is that it's an almost verbatim repeat of a conversation I had on reddit a few weeks ago.

There's a certain segment of the developer population that I don't think realizes just how fast C and C++ are. Javascript is _relatively_ fast when compared to other dynamic languages, but not when compared to C, C++, FORTRAN, etc.


> It's fast compared to other dynamically typed language implementations

true

> it's still very slow compared to basically all of the popular statically typed languages.

Not true.

The main slowdown for javascript (AFAIK) is the checks the optimizer has to put into place to ensure the assumptions it's made about the type are still valid. If, however, those assumptions are valid then javascript ends up emitting pretty much the same assembly that you'd see from a highly optimized statically typed language. In fact, there are some circumstances where it can beat a language like C++ or Rust due to the fact that it has to incorporate runtime information into optimizations.

With C++ or rust, if you add dynamic dispatch, unless you are doing PGO and whole program optimization, you are pretty much sunk with 2 memory lookups on every function call. This is the case where javascript can end up beating C++/Rust.
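
To make the dispatch difference concrete, here is a minimal Rust sketch (the trait and types are made up purely for illustration, not taken from any real codebase). The generic version is monomorphized and the call can be inlined; the `dyn` version loads the vtable pointer from the fat pointer and then the function pointer from the vtable -- the two memory lookups mentioned above -- unless the compiler manages to devirtualize the call:

    trait Shape {
        fn area(&self) -> f64;
    }

    struct Circle { r: f64 }

    impl Shape for Circle {
        fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
    }

    // Static dispatch: monomorphized per concrete type, the call can be inlined.
    fn total_area_static<S: Shape>(shapes: &[S]) -> f64 {
        shapes.iter().map(|s| s.area()).sum()
    }

    // Dynamic dispatch: each call loads the vtable pointer from the fat pointer,
    // then the function pointer from the vtable, unless it gets devirtualized.
    fn total_area_dynamic(shapes: &[Box<dyn Shape>]) -> f64 {
        shapes.iter().map(|s| s.area()).sum()
    }

    fn main() {
        let concrete = vec![Circle { r: 1.0 }, Circle { r: 2.0 }];
        let boxed: Vec<Box<dyn Shape>> =
            vec![Box::new(Circle { r: 1.0 }), Box::new(Circle { r: 2.0 })];
        println!("{} {}", total_area_static(&concrete), total_area_dynamic(&boxed));
    }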

(All of this is talking about hot code after warmup. During the initial execution javascript will almost certainly always be slower).

Part of the proof of this was asm.js, the precursor to wasm. V8 at the time it was introduced could execute asm.js nearly as fast as what Firefox could do with its optimized asm.js compiler. That is, when you strip out all the actions that make javascript slow, it very often ends up being just as fast as a compiled language.

What stuff ends up making it slow? Generally speaking, stuff that makes the types unpredictable (adding fields, removing fields, sending in a number and a string and expecting the VM to be able to handle both).

You can see a lot of this written up in the discussions about why Dart was originally "optionally typed". Basically, the entire selling point to make Dart fast was simply to remove the ability to dynamically change types like you have in javascript. With that, the VM authors at the time were capable of making a VM that's every bit as fast as what Java has.


> That is, when you strip out all the actions that make javascript slow, it very often ends up being just as fast as a compiled language.

ok...

> In fact, there are some circumstances where it can beat a language like C++ or Rust due to the fact that it has to incorporate runtime information into optimizations.

People used to make the same claim about Java, and in every single example I've ever seen, the Java/Javascript is extremely optimized, performance isn't consistent across VM versions, and the C++/Rust is extremely naive (usually allocating unnecessarily and not using arenas in hot paths that are allocation heavy).


None of these tests really show JS winning, but they do show that it's within spitting distance of C++ in multiple tests. [1]

And you can look at the code yourself; most of the examples read pretty much exactly the same as their C++ counterparts.

Mind you, this is also a test that looks at execution start to finish and doesn't give warmup time (which will always favor statically compiled languages).

> performance isn't consistent across VM versions

That's true for C++ compilers, so why would you expect performance to remain constant with a JIT compiler?

[1] https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


Absolutely spot on; there was a time when HotSpot was going to bring Java to the promised land.

It never happened.


> With C++ or rust, if you add dynamic dispatch, unless you are doing PGO and whole program optimization, you are pretty much sunk with 2 memory lookups on every function call. This is the case where javascript can end up beating C++/Rust.

GCC at least is capable of speculative devirtualization by using local heuristics, without PGO. And of course it is capable of devirtualizing in many cases when knowledge of the actual type can be constant-propagated.

Also note that the vast majority of calls are not dynamic in C++ (as opposed to most dynamic languages), so devirtualization is significantly less impactful.


Fair point. AFAIK C++ templates are pretty heavily used to avoid dynamic dispatch. Couple that with the fact that polymorphic OO has fallen out of favor generally, and I could see this mattering a lot less nowadays.



Trivial benchmarks where all of the significant data structures and networking are done in C++ and the tiny amount of Javascript is just passing some strings around. I guess it's "fast" if you can restrict yourself to little more than hello world where you do nothing more than pass a few strings to functions written in faster languages.

Notice that in the Java, C#, Go, Rust, Swift or Ocaml benchmarks almost all of the underlying data structures and much of the networking stack are built in the respective language. This is not possible with Javascript, Python, Ruby etc. because it would be ludicrously slow and extremely memory inefficient.


> Dynamic typing also makes a ton of optimizations basically impossible and even after monumental efforts languages like Javascript are still quite slow, inconsistent and memory inefficient outside of trivial benchmarks.

Please provide evidence for this extreme claim. I can point to benchmarks[1] where JavaScript is competitive with or even better than compiled static-typed languages like Java.

(any argument that those are "trivial benchmarks" is automatically invalid without actual empirical evidence)

[1] https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


> Javascript are still quite slow

Can you describe your use case for which JavaScript is slow? There are many languages slower than JS, like Python or Elixir, and they are doing just fine, which is why I won't agree that JS is slow. Sure, there are cases JS just wasn't designed for, and any CPU-intensive task will be slow, but there are ways to get around that as well.


I hope Go will eventually get sum types but I hope they're better than Rust's where each variant is its own type.


How is this mess better than Rust's enums? A Rust enum + match statement results in really easy to read and clean code.


For a trivial example, let's say I have some message type that has multiple variants with differing fields:

    enum Message {
        Action1(String, Option<Mode>),
        Action2(String),
    }
Action1 and Action2 are variants, not types, and I cannot make functions that take an Action1 or an Action2 as a parameter. To get around this, people will often make each enum variant just a container for a struct with the same name, but that's more verbose and matching becomes slightly uglier. Code like this is fairly common:

    struct Action1(String, Option<Mode>);

    struct Action2(String);

    enum Message {
        Action1(Action1),
        Action2(Action2),
    }
Ideally I'd like to be able to succinctly say something like:

    type A = B | C
and have A be dependent on B and C but have both B and C be completely independent types.
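
For what it's worth, a common way to soften the wrapper-pattern boilerplate is a `From` impl per variant, so construction stays terse and helpers can take the payload types directly. A rough sketch building on the structs above (the helper names are made up for illustration, and `Mode` is assumed to be defined elsewhere):

    impl From<Action1> for Message {
        fn from(a: Action1) -> Self { Message::Action1(a) }
    }

    impl From<Action2> for Message {
        fn from(a: Action2) -> Self { Message::Action2(a) }
    }

    // A helper can now take a single variant's payload as a real type...
    fn handle_action1(a: &Action1) -> &str {
        &a.0
    }

    // ...but matching still has to go through the wrapper layer.
    fn dispatch(msg: &Message) {
        match msg {
            Message::Action1(a) => { handle_action1(a); }
            Message::Action2(_) => {}
        }
    }

    fn example() {
        // `.into()` picks the right variant via the From impls above.
        dispatch(&Action2("hello".to_string()).into());
    }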


That is a pretty compelling point... I ran into this less than an hour ago and was slightly frustrated.

Though, I still like the flexibility of adding additional fields to `Action1`. In golang I end up with fields that are conditionally populated based on the state of an enum, which is less than ideal and leads to lots of error checking (though this is relatively rare).

It doesn't have to be one or the other though. A bit more polishing on the Rust-style enums (perhaps in a different language) could lead to pretty ergonomic code.


> Rust's where each variant is its own type.

Is this... right? You can't use enum variants as types in function signatures, variable declarations, etc.


> If you're more of a generalist though? I can see productivity gains of 25% maybe.

I really doubt this. Copilot/ChatGPT are not useful at all for the hard problems and they're not trustworthy enough for the small problems. Even for the simplest things I have to comb over the generated code very carefully because I've seen them be subtly wrong dozens of times in my relatively short use. If I have to go over everything they generate with a fine tooth comb it's just easier and less error prone to write it entirely myself.


Mirrors my experience exactly, and in Copilot's case with VS Code, until very recently it would suppress normal IntelliSense, breaking that whole muscle-memory flow.

I'm convinced I've spent more time scrutinizing, undoing, and debugging the CoPilot code than it saved me on typing.


if you're developing code for a large codebase with stable requirements in a company with good engineering practices, and your work output is the working code itself, it's hard to see that it would be very useful.

if you have to glue together multiple unfamiliar APIs and packages, maybe even seeing a couple new ones per day, you often throw away your code, and your work output is something else with writing code only a means to that end, it hugely speeds things up even accounting for debugging the frequent cases where it's subtly wrong.

this latter situation is more common than many software engineers realize.


> if you have to glue together multiple unfamiliar APIs and packages, maybe even seeing a couple new ones per day, you often throw away your code,...

Maybe take the time to become familiar with them? I think that is a description of an unhealthy job. It's like asking an average mechanic to handle optimizing an F1 car on day 1, then asking him to improve the endurance of a rally car the next day! Then telling them the main point is to market the sponsors, so there's no need to be perfect.


i don't think the car analogy is good because cars are expensive to build and maintain, usually are used more than once, and it's very bad if they crash. these properties are true of some software but not the kind of software i'm talking about.

maybe a better analogy would be if you travel to new cities a lot, and have to navigate, you don't want to memorize a map of all the city streets and learn in detail the transit schedules on each rail and bus line, you just care about a few routes a few times. and Google Maps can make your life a lot easier. on the other hand if you lived in that city, it would quickly become pointless to take out your phone all the time to plan a commute to work.


So the maintainer now has to tutor this guy for god knows how long until he gets it right? All three issues brought up by kazinator are reasonable grounds for ignoring the patch. Do you think that the already overworked maintainers should have to spend significant amounts of their time tutoring every potential contributor? Are you going to pay them for that?


Miculas is not dumb; he's obviously a self-starter who pulled the investigation together using gdb and the kernel interface, plus research into the history of the problem. He got excited to close the array access problem and lost sight of the necessity to fix only that and not change anything else, like the bounds check on the index. All it needed was a nudge to put his attention on that detail.


It doesn't sound like you've ever maintained an active free software project.

It's not "tutoring" to perform a code review and request additional changes to the patch. That's a standard feedback mechanism in free software development.

kazinator's analysis may be right, but it doesn't sound like it would have taken the maintainer more than a few minutes to explain his objections and given the contributor a chance to shore up his patch.

There are plenty of examples in the history of the kernel project of much larger patches going round and round until they land. From kazinator's description, it's possible the patch could have been made ready in a single review round.

> All three issues brought up by kazinator are reasonable grounds for ignoring the patch.

The patch (and bug report) wasn't ignored. That's kind of the point.


A lot of these maintainers are already overworked and don't have time to tutor every single person that wants to put kernel contributor on their resume. A lot of people have this extremely idealistic view of open source but the reality is that the vast majority of patches you receive will be atrocious and you shouldn't have to tutor every single "potential" contributor. This case is even worse since this issue was posted to a security mailing list which means time to resolution is important.


> A lot of these maintainers are already overworked and don't have time to tutor every single person that wants to put kernel contributor on their resume.

This seems like a very cynical view of what happened here, and completely unfounded.

Obviously these big open-source projects need gatekeepers, but if the gatekeepers often make aspiring contributors feel violated and humiliated rather than offering feedback to get their initial contributions across the finish line, general interest and enthusiasm for contributing are going to dry up. Maybe that's by design, but it doesn't feel sustainable in the long term.


It would be quite sad if we've gotten to a point where saying "I think my patch is better than yours" from a domain expert to a newbie is considered humiliating and violating.

Talking about utilitarian consequences, Linus has been way worse, and while I fully agree from a moral perspective that his most extreme behaviors were bad, from a purely utilitarian perspective they didn't prevent Linux from being one of the most successful OSS projects out there.


> It would be quite sad if we've gotten to a point where saying "I think my patch is better than yours" from a domain expert to a newbie is considered humiliating and violating.

Isn't this a little bit disingenuous? Do you really think the author of OP's blog post is upset simply because a Linux kernel maintainer has far more domain expertise than him and is in principle capable of writing a "better" patch?

That would be pretty irrational, I agree!


The real disingenuous part is where OP lied about the maintainer saying his patch was better.

It would be pretty bad if he said that, but he never did.


I agree that Miculas (the submitter of the original patch) should not have chosen formatting typically associated with direct quotations, but now you're the one putting words in his mouth: he didn't say Ellerman said his (Ellerman's) version was better, he (Miculas) said Ellerman said he liked his (Ellerman's) version better.

Ellerman's actual words were "I wanted to fix it differently", so while I don't think Miculas' paraphrasing was entirely appropriate (why paraphrase at all?), I also don't think it was a misrepresentation of what Ellerman actually said.


Besides the fact that he didn't actually say that at all.


Because it could be seen as communicating: "my ten minutes of coding around the array access problem is more perfect than your hours of investigation to root cause the issue plus your ten minutes of coding".


> feel violated and humiliated

A lot of contributors are going to feel like this if their patch doesn't get accepted immediately and the maintainer in this case was extremely respectful even though the offered patch had multiple major issues. Contributors shouldn't feel entitled to free tutoring.


> multiple major issues

Seems debatable and debated, reading through the rest of this thread.

Regardless, I'll reiterate my point more straightforwardly: it's the maintainers' prerogative to treat aspiring contributors however they like, but actions have consequences, and some of those consequences may be deleterious to a project's health.


> Seems debatable and debated, reading through the rest of this thread.

Most of the people commenting in this thread haven't looked at the code and anyone saying the two patches are equivalent can be safely ignored. Out of the issues kazinator pointed out, 1 and 2 are definitely major issues and 3 is debatable although it is definitely suboptimal.

> some of those consequences may be deleterious to a project's health

Of course, but I think it might not be in the way the "everyone can be a valued contributor" crowd thinks.



> the "everyone can be a valued contributor" crowd

Of course! Who will keep the hordes of dimwits and dilettantes and LinkedIn jockeys at bay, if not those whose merit has already been proven?


> Who will keep the hordes of dimwits and dilettantes and LinkedIn jockeys at bay

This is unironically a very serious problem and why most of the best open source projects are basically corporate FAANG projects that are really only open source for optics.


On its face, you won't catch me arguing with that. It's a hard problem, for sure.


> Maybe that's by design, but it doesn't feel sustainable in the long term.

That could be, but on the other hand, the Linux kernel is more than 30 years old and is on its 6th version...

One might also consider the rarity of maintainers who can write high quality code and donate large amounts of time. To find and retain someone like that one might also be required to allow them to be curt with random people on the internet from time to time. I can see how drive by contributions could get old fast from their point of view.


> That could be, but on the other hand, the Linux kernel is more than 30 years old and is on its 6th version...

Indeed, and while we're speaking in broad generalities, surely no strategy that worked to launch a project from tiny seed to maturity and market saturation has ever then failed to keep it relevant in the long term...

> I can see how drive by contributions could get old fast from their point of view.

Indeed. But (again addressing generalities) my issue here is that the contribution in question doesn't read as "drive by" to me.


> surely no strategy that worked to launch a project from tiny seed to maturity and market saturation has ever then failed to keep it relevant in the long term...

If the rate of submissions falls then it certainly makes sense to adjust. Until then, don't fix what isn't broken.


Frankly, if you treat effortful but green contributors the same way you treat grindset morons and LinkedIn jockeys, I would put money on the submission rate from the latter continuing to increase long after most of the former have decided your project isn't worth their time.

This is a bit of a moot point, though, since I don't think it's very common for Linux kernel maintainers to behave this way. Correct me if I'm wrong.


> This seems like a very cynical view of what happened here, and completely unfounded.

It is not unfounded at all. The author clearly stated that the reason they wanted their commit to be accepted was so they could be a contributor.


Do you think there is any reason somebody might want to be credited for work they did other than having the ability to put it on their resume?


He had the credit amongst those who were paying attention. It wasn't like this was done in secret. Others chimed in to agree that the maintainer's commit was better. So the only credit he didn't get was a commit credit, which is important to being a "Linux Contributor", which he clearly and explicitly wrote was the reason they asked for their commit to be used or to work on another patch.

I think it would be odd to disbelieve the author.


[flagged]


[flagged]


Where did they declare that they want this for their CV?


At least your attempt at continuing the strawman is better. But still, come on. You know better than this.


What are you even talking about? This is my first reply on this thread, but keep projecting?


Ok. Here is a lesson on forums. The comment thread you replied to is full of a guy trying to use a strawman argument. Except he stuck with the same one. You came along with a new strawman argument. So because you replied to a series of strawman arguments with a new attempt at a strawman argument, I wrote my comment in the context of where it sits in the thread of replies. So I congratulated you on being different. However, I then went on to shame you for using a strawman argument in the first place.

The lesson here is: always remember context. The point at which you start your input is surrounded by the context of what came before it. So if you had been closer to the start, the context of the previous strawman attempts wouldn't have been in play.


You need to work on your reading comprehension. This will be my last reply to you.


Personal attacks will get you banned here, regardless of how wrong someone else is or you feel they are. Please don't post like this to HN.

If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.


This was not a personal attack; it was a statement of fact. The user in question was obviously either engaging in bad faith or could not be bothered to read back for ~15 seconds to understand the context of my post. When I pointed this out to them, they doubled down. They were not just wrong, in my personal opinion or objectively in whatever sense---they wasted my time and insulted me.

Speaking of pointing the above out, why is this comment flagged? Is this considered a personal attack too? https://news.ycombinator.com/item?id=37677835

---

You can see from my other comments in this thread that I prefer to engage respectfully with other commenters, even ones I disagree with, and I believe my total commenting history going back to 2016 will reflect that I do in fact take the intended spirit of the site to heart. But if speaking forthrightly to a bad faith and/or willfully ignorant interlocutor like this user drags me down to their level in your view, you ought to ban me right now. Thanks.


Statements of fact can be personal attacks. Here's an example I've mentioned a few times: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que....

If you find yourself inspecting someone else's comments or behavior for how much bad faith you perceive, or what they couldn't be bothered to do, or anything of the "who said what" variety, you've already fallen off the path of curiosity* and this is a clear sign that you should refrain from posting. That's not a criticism—it happens to all of us and it takes work and practice to avoid it. But it's work we all have to do as a community, if HN is to last for its intended purpose. (Btw that's also the answer to why 37677835 was flagged - it wasn't a personal attack but it was a step off the garden path and into the weeds.)

When a commenter uses phrases like "bad faith" or "willfully ignorant" or "down to their level' about another commenter, it's usually a sign that they're too activated by a conversation to really practice the principle "Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith." (https://news.ycombinator.com/newsguidelines.html) In such a case it is better to refrain from posting until you no longer feel that way. Why? because you (I don't mean you personally, but all of us) are not functioning in the range of curiosity when you feel that way—it's impossible; and second, if you reply with what feels to you like "speaking forthrightly" in that moment, you're likely to post things that land with the other commenter, and also the general reader, as attacks, not to mention get replied to by moderators asking you not to post that way.

* https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...


A statement becomes a personal attack when it's intended as an attack, not simply because it's hurtful to the recipient. For instance, (addressing the pg example you mentioned elsewhere) it's not generally a personal attack when an old and/or sick person's doctor says they have a few months left at best, though hearing it will probably upset them. It's likely (though absolutely not certain!) that one middle schooler pointing out another's acne intends for it to be hurtful, but I don't think you could reasonably blame a very small child for pointing out the same—though it would certainly be a teachable moment.

Semantics aside, I understand from experience how difficult it is to moderate situations like this, but as a user-privileged participant in any discussion I reserve the right to defend myself (e.g.) when somebody rolls out some half-baked strawman and then accuses me of going off topic. In this case I find it particularly ironic that in a thread full of completely baseless character attacks against Miculas, the author of the linked article (e.g. he just wants credit so he can put it on his resume), I'm the one who gets threatened with a ban for questioning that narrative and not then meekly rolling over for the ensuing tide of bullshit. If you want to see a real, unequivocal, totally one-sided personal attack, you need look no further than the original comment I replied to^[1], and you can't throw a stone from there without hitting another one. I note that those comments remain largely unflagged, which is not a surprise given how acutely vulnerable users of platforms like HN are to the bandwagon effect and similar biases.

My behavior in this regard is not going to change, but my experience in this thread is certainly raising some questions.

^[1] https://news.ycombinator.com/item?id=37676623


If you don't like the phrase "personal attack", I can put it this way instead: please don't post pejoratives using personal language.

In the context of moderating an internet forum, we have to go by effects, not intent: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que..., for a number of reasons: (1) intent isn't observable or predictable; (2) effects are both observable and predictable; and (3) it's effects which ruin threads.

Everyone always feels like they're the one being singled out and treated unfairly—this isn't a reliable feeling, it's a universal bias. If other comments broke the site guidelines as badly as you did, and the mods didn't say anything, that's because we didn't see them, not because we're against you.

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...


> You need to work on your reading comprehension.

To which you state

> This was not a personal attack, it was a statement of fact.

It's a matter of opinion, not fact.


On the other hand, to dig yourself out of the hole of “I’m too busy and there’s nobody around to help”, you have to slow down and do this, sometimes.


> to dig yourself out of the hole of “I’m too busy and there’s nobody around to help”, you have to slow down and do this

Exactly! If you treat new contributors like crap because you are overworked, you will push away potential long-term collaborators who could share the load, thus ensuring you will be working alone far into the future. This is a failure state, and if I had a manager doing that to those he was managing, he wouldn't be a manager for long.


Who knows how long this would have gone unaddressed without Ariel Miculas' initial input.

Part of the job of a maintainer is to curate and foster input so the project has a robust community in the future.

The maintainer in this case did a miserable job of that.


You can't do the exact same thing in Go because Go doesn't have a way to define a sum type yet outside of generic constraints. The closest you can get is using an interface and a type switch but that won't give you exhaustive matching.


These presentations look like something straight out of some dystopian corporate hellscape fiction.


I could feel my social credit score rising with each nod in agreement, as the UN SDG AAA+ rated company, with WEF endorsement, told me how my consumerist lifestyle will not affect the global boiling.


I wish Apple would return to pre-COVID live keynotes. But that would mean demos would fail, wifi networks would be crowded, and there would be audible boos and gasps at price announcements. Yeah, Apple isn't going back anytime soon.


Yeah they're so type-A and "we are the best at this" that they end up being overproduced hostage videos with the presenters standing wide-legged and bug-eyed reciting very human words at the camera.

The pandemic is over, bring back the theatre stage with a live audience.


These were peaceful. The Vision Pro announcement earlier this year was basically a black mirror episode.


The dad that was watching videos (or something like that) in his Vision Pro instead of spending time at his kid's birthday.


That and the lonesome father looking at photos of his (dead?) family in a dark apartment left a bad impression of the headset on me that I can't shake. I will never "unsee" that. Sure, use it for work...but at home, no way. Baffled that Apple thought that would go over well considering how much attention they put into those presos, and it makes me wonder how much of the headset marketing team is single & childless.


That's accurate dad behavior. Having cellphone cameras was a temporary decrease compared to holding a camcorder or viewfinder camera.


The "mother nature" sketch was... interesting. And throughout the presentation I thought well it seems this new generation iPhone doesn't bring anything new, it seems like a finished product. So not creating a new one every year would probably be better for the environment than whatever efforts and greenwashing they do engage in.


It's a bit bizarre, the presentation felt claustrophobic in a way I can't explain, and haven't experienced with their prior work.


The excessive use of what felt like drone cameras, or steady cam, and continual camera movement played a part. It almost feels like they were trying to pretend you were watching it in AppleVision™.


Thank you for the summary. I wasn't planning on watching it and certainly won't now.


You might love the Devolver Digital E3 Press Conferences. They've done a masterful job dismantling the E3 presentation format, but I find it's generally true for all these kinds of longform corporate product ads.

The first one is here from 2017: https://www.youtube.com/watch?v=JKgEsuEBhqI


I was expecting it to end with "all these presentations were rendered live on the A17 pro, on this iPhone". None of it really seems "real".


Hey, It's me. Did I wake you from your depression nap?


r/nba and r/hockey are particularly interesting examples because the threads before the shutdown were overwhelmingly in favor of keeping the subs open and all the upvoted posts were openly trashing the mods over their decision. The poll was also posted in the ModCoord discord servers, so it was almost certainly not representative of the average r/nba or r/hockey user.


In Spring apps the same reflection caches are also often duplicated many times.


The best parts of the show were the original parts that focused on the Empire and Lee Pace did a great job as Brother Day. The show dropped off hard whenever it focused on the actual settlement on Terminus and it remained extremely unfaithful to the books all the way through.

