drewhk's comments

You clearly did not read the article...

One way to think about it is to ask yourself: is your personal project actually _playtime_? Playing is not goal-oriented and is therefore very relaxing. There is nothing wrong with that! I am happy to "play" at programming, and I learned a lot of techniques that way that I only put to use years later, when I actually finished things. Do not deny yourself playtime!


Agree completely. Reframe recreational programming as your favorite video game, and you’ll feel much more satisfied after a session that produces nothing, because that was never the point.


> But honestly, why ruin your fun projects by turning them into work.

Agreed! For fun, it is completely fine not to finish things if you enjoy the process more than the actual result. You achieved your goal of having fun and likely still learned a lot from it. Of course, if it bothers you, then sure, improve your ability to close things out, but first evaluate whether this is truly an issue coming from within yourself, or some imagined external pressure that your work is worthless if you don't finish it (in the hobby context). Is it important to show it to others, for example, or is this a solitary activity purely for yourself? In some hobbies I strive to finish because I want to show the result to others; in others, I don't care at all.


The email verification email itself does not show up properly in Fastmail for some reason. I had to switch to the text-only view to get the actual link...


Thanks for the feedback!


The problem with generic type erasure is less of an issue in practice, though, because the ecosystem is generally compiled from typed code, and hence the compile-time guarantees reduce the dangers of erasure. This is unfortunately not true in TypeScript, where you encounter plain JS all the time (sometimes with TypeScript wrappers of dubious quality), causing more havoc. So while _theoretically_ type erasure could be considered to have similar problems, in _practice_ it is much more manageable in Java. I guess if the whole JS ecosystem were TypeScript only it would be less of an issue there as well, but right now it can be messy.


One more addition, there is a subtle but very important difference between how TypeScript's "erasure" works compared to Java's.

In the case of Java, an explicit cast is emitted in the bytecode, which, upon execution, checks the compatibility of the actual runtime type with the target type. Yes, this makes it a runtime error rather than a static bytecode-verifier/compiler error, but the behavior is well defined and the error does not propagate beyond the original mistake.

In comparison, TypeScript does not emit "verification code" at the casting site, ensuring, for example, that all asserted fields exist on the runtime object at that point. The result is that the type mismatch only becomes evident at the point where, for example, a missing field is accessed - which can be very far from the original mistake.

If you wish, you can consider type issues caused by Java's erasure as runtime, but _defined behavior_, while in TypeScript it is a kind of undefined behavior that can lead to various error symptoms not known in advance.
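
To make the contrast concrete, here is a minimal Java sketch (class name and values are made up for illustration) of the "defined behavior" side - the compiler-inserted checkcast fails exactly where the polluted value is first used as a String:

    import java.util.ArrayList;
    import java.util.List;

    public class ErasureDemo {
        public static void main(String[] args) {
            List<String> strings = new ArrayList<>();
            List raw = strings;   // raw type: the compiler warns, erasure lets it through
            raw.add(42);          // heap pollution: an Integer ends up in a List<String>

            // javac inserted a checkcast to String here, so this line fails
            // immediately with a ClassCastException - not somewhere downstream.
            String s = strings.get(0);
            System.out.println(s);
        }
    }

The equivalent TypeScript cast compiles away entirely, so the same kind of mistake only surfaces wherever the missing or mistyped field is eventually touched.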


> the JVM has no type safety whatsoever.

This is only partially true (or completely untrue in the mathematical sense, since your statement is "_no_ type safety _whatsoever_" :P ). The whole purpose of the bytecode verifier is to ensure that the bytecode accesses objects according to their known type, and this is enforced at classloading time. I think you meant type erasure, which is related - generic types do not exist at the bytecode level; their type parameters are treated as Object. This does not violate the bytecode verifier's static guarantees of memory safety (since those Objects will be cast and the cast will be verified at runtime), but indeed, it is not a 100% mapping of the Java types - nevertheless, it is still a mapping and it is typed, just on a reduced subset.


It is not so clear-cut though. There is a hierarchy here that I think the article misses a bit. There will be participants at various levels of awareness:

1. Members of the audience who do not notice anything at all

2. Members of the audience who only notice it subconsciously, affecting some overall feeling of quality (an analogy would be typography, which operates mostly in this realm)

3. Members of the audience who consciously notice that something is off, somewhere

4. The conductor, who knows exactly that the piano is off

5. The tuner, who knows exactly what and where is wrong with the piano


Well, instead of complaining here about 1 and l, I just filed a ticket on their GH repo and it already got resolved... https://github.com/internet-development/www-server-mono/issu...


I just saw it was updated. Great news!

> Added missing symbols + - =

> Changed the top of 1 to distinguish from letters.

https://github.com/internet-development/www-server-mono/rele...


I liked the 1=l because I thought it was intentional. This quick “fix” unsettles me.


I am a bit worried about the overuse of Little's formula, especially in these catchy educational presentations. In reality, queue sizes will dictate your consumer-observable latency profile, which in turn is dictated by the actual distribution of the service time - it is not a constant.

If you think about it, if you have an ideal system that serves users like clockwork, every X ms with no jitter, while your arrivals are also completely regular, every Y ms (Y ≥ X), then basically a queue length of 1 is sufficient. In reality, just as we all observe in real-life queues, service time is far from constant, and outliers result in queue buildup. This is why cutting the tail of service-time latency often results in better overall latency than simply reducing the average service time.

Little's formula of course holds in the above scenario as well, but it deals with long-term averages and gives you no indication of what extreme behavior is lurking under the mask of those averages.
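
As a rough sketch of that point (single server, perfectly regular arrivals, made-up numbers - not a benchmark of anything real), compare two workloads with the exact same 9 ms average service time:

    import java.util.Random;
    import java.util.function.DoubleSupplier;

    public class QueueTailDemo {
        // Worst waiting time seen by any request in a toy single-server queue
        // with arrivals every arrivalGapMs and service times drawn from serviceMs.
        static double maxWaitMs(double arrivalGapMs, DoubleSupplier serviceMs, int requests) {
            double serverFreeAt = 0.0;
            double maxWait = 0.0;
            for (int i = 0; i < requests; i++) {
                double arrival = i * arrivalGapMs;
                double start = Math.max(arrival, serverFreeAt); // queue up if the server is busy
                maxWait = Math.max(maxWait, start - arrival);
                serverFreeAt = start + serviceMs.getAsDouble();
            }
            return maxWait;
        }

        public static void main(String[] args) {
            Random rnd = new Random(42);
            int n = 100_000;
            // Clockwork: every request takes exactly 9 ms, arrivals every 10 ms.
            double steady = maxWaitMs(10.0, () -> 9.0, n);
            // Bursty: same 9 ms average, but 1 request in 100 takes 500 ms.
            double bursty = maxWaitMs(10.0,
                    () -> rnd.nextDouble() < 0.01 ? 500.0 : (9.0 - 0.01 * 500.0) / 0.99, n);
            System.out.printf("worst wait, constant service: %.1f ms%n", steady);
            System.out.printf("worst wait, bursty service:   %.1f ms%n", bursty);
        }
    }

Little's formula holds for both runs, but it is the tail of the service-time distribution, not its average, that decides how deep the queue gets and what timeouts you actually need.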


> ...the actual distribution of the service time - it is not a constant.

I'm concerned by the number of misunderstandings expressed in such a short time here.

1. Nobody claims service time is constant.

2. Little's law is one of the few parts of queueing theory that remarkably does not depend on service time distribution.

3. Many results for typical simplified M/M/c systems apply well also to any other service time distribution provided (a) arrivals are Poisson, and (b) the server uses time slicing multiprocessing. These are not very severe requirements, fortunately!

Long-term average sounds restrictive but it really just means a period long enough to exhibit some statistical stability. Most systems I see sit mainly in those regimes.
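
As an illustrative worked example (numbers made up): if requests arrive at λ = 200 per second and the average time a request spends in the system is W = 25 ms, Little's law gives L = λW = 200 × 0.025 = 5 requests in the system on average - regardless of what the service-time distribution looks like.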


> I'm concerned by the number of misunderstandings expressed in short time here.

I have a feeling you misread my comment completely, and the misunderstandings are on your part?

> Nobody claims service time is constant.

Neither did I. Neither did I claim that Little's formula requires a constant service time.

> Little's law is one of the few parts of queueing theory that remarkably does not depend on service time distribution.

I did not say otherwise either. My point is that it is far less useful and enlightening than these edutainment posts make it out to be. Two systems with the exact same parameters and the same result from Little's formula might behave completely differently, and in many cases counterintuitively.

> Many results for typical simplified M/M/c systems apply well also to any other service time distribution provided (a) arrivals are Poisson, and (b) the server uses time slicing multiprocessing. These are not very severe requirements, fortunately!

This was not my point. Or do you claim that queue size distributions DO NOT depend on the service time distribution? Because that WAS my point. Averages do not tell the story you are most interested in. The whole point of queues is that service and arrival times have distributions with deviation. I personally think queues and buffers are very, very important, and I am a huge proponent of predictable user-observable latencies, as they improve general system health and resiliency under load.

> Long-term average sounds restrictive but it really just means a period long enough to exhibit some statistical stability. Most systems I see sit mainly in those regimes.

Long-term averages do not talk about pathological transient behavior, and do not help you with queue sizing - or with setting ideal timeouts. Also, statistical stability is misleading: the convergence time to the given statistic might be arbitrarily slow. And if we talk about real-world systems (which you do), they exhibit feedback due to clients retrying, throwing off the formula and potentially ergodicity.


With these clarifications I realise we are in violent agreement and indeed the misunderstandings were on my part. I apologise, and am grateful you took the time to expand!


> But any child can look at the sun (with eye protection :) and see that it is a disc, not a point of light. The disc is about 0.5 degrees, which is not so small.

Pages 11-12 explicitly discuss "sun-as-a-disk", the resulting shadow penumbra, and other sources of inaccuracy.

