
There's a fantastic comment on the article written by Jessamyn West (Tom's daughter):

I saw this zipping by on Twitter. Heya I’m Tom West’s daughter and I always pop onto threads about SOANM to bang the drum of good work-life balance as well. My dad was an amazing man and we got along well, but he was a pretty distant dad and basically left the raising of my sister and me to my mom who might have wanted to do something professionally as well.

The things he did at work, and his and Kidder’s ability to talk about them, were quite interesting and have lessons that can be relevant in many different timelines and lifestyles. But you know, I wound up moving to Vermont and helping people use technology to solve problems in their lives. You need all the things–technology, an understanding of people, and a life–and my dad possibly only had two of those at any one time. Thanks for writing this.


Not sure why people are opposed to unions in software dev - the basic function of a union is to equalise the power imbalance between employer and employee.

Now, people who question the need for unions in dev, I understand where they're coming from.

Software development is rather unusual in its lack of unionization, and IMO that's because no one has properly figured out how to commoditise our labour yet. We're the modern equivalent of medieval guilds of craftsmen: we get paid far above the median precisely because we can't yet be commoditised or automated.

Yet.

IOW, the lack of any need for unionization among developers shows how lucky we are.


Incompetent people will create incompetent things regardless of the tool. Simpler tools lead to simpler messes while complicated tools lead to complicated messes.

I've seen an attitude where people think they can inoculate themselves against inept programming by using obtuse frameworks, as if martin-fowler-speak acts as a drill sergeant making disciplined coders out of the herd.

But after 20 years of bouncing around startups I've never seen the intended results actually happen a single time. Not even close. Not once. Never.

Instead it leads to larger, less maintainable, more convoluted messes that have to be trashed quicker. Giant ceremonial cargo cult style monstrosities with huge circuitous logic - 4, 5, maybe 6 layers, a router calling a controller, calling a service, calling a provider, calling an event model, which runs a single if statement ... as if that's how we protect ourselves against incompetence.

These approaches just lead to wasteful projects where they end up rewriting the whole thing in whatever the framework/language du jour is, instead of writing easily maintainable, quickly understandable code that's designed to work for the next 10 years. I've talked to many programmers who are embarrassed by the language they are using ... wtf is that?! They've turned programming into fast fashion.

Then people like to ask what someone's favorite language is, usually when they first meet them, as a social cue, as if we are a bunch of highschool kids following pop music. I mean what on earth... we're supposed to be building the future here, not running around like a bunch of spastic fanboys from platform to platform, just to mess everything up all over again in bold new ways using slightly different syntax.

The best thing to do is give people the least abstract thing with the fewest conformity requirements ... essentially make it open ended and then the messes are easier to spot and easier to fix. You won't get 4 folders with 26 files handling simple tasks like uploading images to an S3 bucket (saw this huge mess just last week and guess what?! It's broken. I know, surprising right?)
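For contrast, the whole task is a handful of lines. A hedged sketch, assuming boto3 and hypothetical bucket/key names:

    # Upload a local image to S3: one client, one call, zero layers.
    import boto3

    s3 = boto3.client("s3")
    s3.upload_file("photo.jpg", "my-bucket", "images/photo.jpg")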

Anyway, new shiny fancy tools with GoF buzzwords won't ever fix incompetence, it'll only make it worse.


Philosophy major here. Didn't read the article, but will point out:

The significance of Gettier problems, as we investigated them, is that they expose an entirely _wrong_ mode of philosophy: philosophizing by intuition. Ultimately, Gettier problems are significant because the textbook Gettier problem works _for philosophers_: it captures their intuitions about knowledge, and then shows that the case fails to count as knowledge.

Most normal people (i.e., not philosophers) do not have the same intuitions.

After Gettier, analytic philosophers spent decades trying to construct a definition of knowledge that revolved around capturing their intuitions about it. Two examples are [The Coherence Theory of Knowledge][0] and [The Causal Theory of Knowledge][1]. Ultimately nearly all of them were susceptible to Gettier-like problems. The process could be likened (loosely) to Gödel's incompleteness proof: they could not construct a complete definition of knowledge for which there did not exist a Gettier-like problem.

Eventually, more [Pragmatic][2] and [Experimental][3] philosophers decided to call the analytic philosophers' bluff: [they investigated whether the typical philosopher's intuition about knowledge holds across cultures][4]. The answer turned out to be: most certainly not.

More pragmatic epistemology cashes out the implicit intuition and just asks: what is knowledge to us, how useful is the idea, etc. etc. There's also a whole field studying folk epistemology now.

[0]: http://www.oxfordscholarship.com/view/10.1093/acprof:oso/978...

[1]: https://en.wikipedia.org/wiki/A_Causal_Theory_of_Knowing

[2]: https://en.wikipedia.org/wiki/Pragmatism

[3]: https://en.wikipedia.org/wiki/Experimental_philosophy

[4]: https://en.wikipedia.org/wiki/Experimental_philosophy#Episte...


> Or if the answer is technological: How come we accept the tyranny of advertisers on our most personal devices?

This alone deserves an entire novel to be answered properly, but in short: it comes down to social, political, and economic reasons. As a programmer I wish the problems were only technological, but alas, they are not.

1. Most people view anything they don't understand and/or are not good at as magic that "is simply there and works this way". Most non-technical people I've known in my life -- not a small number, it has to be north of 300 -- simply had no idea you could block ads. And something like the Pi-hole they would never imagine even being possible. It'd be spy-movie tech to them.

2. A possibly controversial anecdotal observation from my whole life: most normal folk are very malleable and accepting of the realities around them. Many of my peers call them "sheep" or other derogatory terms, and even though on rare occasions I am pissed enough to agree with them, I still understand that people have jobs and lives to deal with and don't want to go out of their way to try to make a difference in the world. I used to be mad at that, but nowadays, having suffered years of depression and burnout, I understand regular folk all too well...

3. Related to the above: not many want to change society at large. As much as it boggled my mind, I actually heard a chunk of the people I met agreeing to have ads served to them. At a certain point you have to wonder: would you be the hero of the story that aims to bring down the tech giants that harvest personal data, or the villain? I know I would view myself as the hero for sure, but you gotta wonder sometimes.

---

As an even more controversial aside, IMO the current breed of capitalism is ruthless and tries to fill every minute of people's leisure time with a ton of activities -- like doing taxes in huge convoluted procedures, poking you with notifications on your phone to hatch virtual eggs quicker with the diamond currency of your mobile game of choice or whatever, Facebook et al never leaving you alone about somebody posting something, and lots and lots of others.

What I am trying to say is that most people I see around me are way too tired and broken to NOT accept tyranny.


My top 3, in order of how I try to apply them (i.e., if 1 doesn't help, move on to 2, etc.). I learned these all from reading various philosophy works, by the way, so perhaps cognitive hack #1 should be "read books".

1) Suspension of judgement (from Sextus Empiricus, Zhuang Zi, Ecclesiastes): avoid forming an opinion at all about things that are not evident. The way I do this is by thinking through an opposing argument or two, and using language like "it seems" or "it appears" rather than "I know", "I think", etc. This technique saves time and energy by helping me avoid getting wrapped up in opinion-based thinking and helps me develop equanimity.

2) Suspension of value-judgements (from Epictetus, Marcus Aurelius, Seneca, Zhuang Zi, Ecclesiastes): being aware and in control of the value-judgement loop (this thing is good or bad). I do this by shifting the language in my mind from "that is bad" to "I feel this way because..." Again, like #1, this is about inverting the locus of control in my cognitive discourse such that my mind can easily go its own way from there, only on a more productive path.

3) Awareness of the mode of thinking I'm in, and the kind of learning that's appropriate to the task or objective at hand (from Plato). There are several modes of thinking or learning (eikasia, pistis, dianoia, episteme, techne, phronesis, and noesis, for example). Simply being aware of which mode you should be in for a task is much more valuable than it might appear at first glance. I see these less as bins to put various kinds of thought in and more as tools to apply to a problem.

Reviewing this, a common thread is self-awareness developed to a point of disciplined introspection and intentional change by adopting these kinds of cognitive tricks. Also, reading is good for you. :)


This seems like a pretty shallow analysis. Plenty of thinkers who preceded Harari have offered views on the political qualities of technology that strike me as more nuanced, potent, and true.

Lewis Mumford, writing in the 30s and 60s, conceived of two broad politico-technic tendencies throughout history, authoritarian and democratic; both could exist, and certain forms of technology fell in one bucket or the other. Langdon Winner, working off the ideas of Mumford and many others (stretching as far back as Plato), developed a very sophisticated look at technics: the ways in which political structures favor or disfavor certain technologies, and the ways in which technological systems collide with political, economic, and social systems to produce a deep integration between technology and politics.

Mr. Harari's article, by contrast, seems to be little more than an untempered reaction to current events, without the requisite incorporation of prior analysis from the history and philosophy of technology. I more or less think Harari's conclusion is sound: our current technical configurations and over-reliance on centralized information technologies do trend toward a more authoritarian, rather than democratic, political state of affairs. However, I think his argumentation is incredibly weak and speculative. It could have been rendered a lot stronger, and more useful, by engaging with prior developments in the field. Perhaps this judgement is a little unfair, since this is a book excerpt converted into a digestible Atlantic article. But I can't help wondering whether part of our lack of control over technological growth, and of our failure to fully consider how technological developments intersect with politics, comes from too many people taking a stance like Harari's: one that is unduly speculative and forgets the many warning flags and conceptual tools the tradition has equipped us with, which, if properly employed, might help us correct our unfortunate trajectories.


Most of my experience with IBM has followed a pretty reliable pattern:

1. They bring the A players to the sales process.

2. They bring the B players to manage the account (technically and commercially).

3. They bring the C/D players to do the actual work.

Wondering if I'm an outlier or if this is a standard mode of operation for them? I wonder how this plays into their OSS strategy? Who works on things like OpenShift?


Maybe that suggests we aren't engaging in dialogue? I think ubiquitous social media access is destroying the cultural "melting pot".

The cost of communication, price and latency, has always defined the structure of our communities. Letters, cars, phones, internet, every invention expanded the reach of individual "community" while cost acted as friction to prevent bad ideas from gaining too much momentum - individuals still had to engage with their local community too.

Social media finally brought the cost to essentially zero. We can talk to anyone or about anything at any time. Great for early adopters and their critical/creative tendencies, but mass adoption by consumers has enabled mass tribalist tendencies to fully decouple from proximity constraints. This might be the "boiling point" for historical conflicts, but "supercritical" seems more appropriate here.

Media consolidation has gone hand in hand with the consolidation of broadcast viewer opinion and of the political parties - many are unhappy with both choices but face the false dichotomy of "their guy" and "the enemy". Meanwhile grass-roots movements are numerous but now diffuse and ineffective, cut off from broadcasting influence and digitally disconnected.

Hopefully society starts correcting back to offline engagement. Anecdotes suggest we are, but undoing the damage will still take a while.


[Comment number 2, describing what this theorem says for people with patience but not necessarily much mathematical expertise. Read comment number 1 first. There isn't a number 3.]

OK, time for ingredient number 2, which is the idea of a manifold. Roughly, an n-dimensional manifold is a geometrical object which, whenever you look at it closely enough, looks like "ordinary" n-dimensional "Euclidean" space. Once again, let's begin in dimension 2. "Ordinary" 2-dimensional space is something mathematicians write as R^2: just as Z^2 consisted of pairs of integers, R^2 consists of pairs of real numbers. (Real numbers are what you probably think of just as "numbers".) We can think of this as an infinite plane; the two numbers are coordinates. Now, for a fairly typical example of a 2-dimensional manifold, consider a sphere -- the surface of the earth, say, if you smooth that out a bit. Any small portion of it looks just like a small portion of the plane, which is why there is a Flat Earth Society. But the whole thing has a different structure -- e.g., the sphere is finite in extent in a way the plane isn't. Another example is the surface of a ring do(ugh)nut. This is also finite in extent, but it turns out to be genuinely different from the sphere, and there's a whole lot of interesting mathematics around that, which I am going to ignore here.

Now, just as we considered those very special transformations acting on Z^2 before, we are going to consider transformations acting on a manifold: things that pair up each point on the manifold "before" the transformation with some point "after" it. For instance, suppose we take the (idealized) surface of the earth; one example of a transformation would be what you get by rotating the earth about its axis through 10 degrees so that every point moves west (and e.g. Kansas City lands more or less on top of Denver). But our transformations can be more complicated: imagine our sphere to be a thin sheet of very flexible rubber forced somehow to remain on the surface of a globe; we can then push things around however we like provided the sheet never tears or overlaps itself. These things (with some technical restrictions I won't go into) are called diffeomorphisms. And, though it's harder to visualize, we can do much the same thing with any manifold of any dimension. If M is our manifold then we call the set of all its diffeomorphisms Diff(M). And, just like SL(n,Z), this thing is a group: we can compose two diffeomorphisms to get another diffeomorphism, and because we insisted on no tearing or double-covering, every diffeomorphism can be undone by another diffeomorphism.

All right, nearly there. Ingredient number 3 is the idea of a group homomorphism. Suppose we have two groups; call them G and H. Suppose that for each element of G we somehow pick out an element of H. We'll write f(g)=h to mean that element g in G yields element h in H; "f" is the name we're giving to our correspondence between G and H. Unlike the transformations considered above, we aren't going to insist on any sort of invertibility; f might map lots of different g to the same h, and there might be some h that aren't the "image" of any g. Here's a simple example: consider SL(2,Z) again, and consider a manifold M that's just ordinary 2-dimensional space, what we called R^2. Everything in SL(2,Z), remember, maps (x,y) to (ax+by,cx+dy) -- and we can do that just as well when x,y are arbitrary real numbers as when they are integers, and the resulting thing is in fact a diffeomorphism. So everything in SL(2,Z) gives rise to a thing in Diff(R^2).

In this case, because these are "the same" transformation in some sense, composition of things in SL(2,Z) matches up nicely with composition of things in Diff(R^2). This sort of matching-up-nicely can happen even in less straightforward cases. If f(g1 compose g2) always equals f(g1) compose f(g2) then we call f a group homomorphism between G and H (here, between SL(n,Z) and Diff(M)). The specific transformations in SL(n,Z) and in Diff(M) needn't have anything much to do with one another -- but the relationships between them need to have compatible structures, in some sense.
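In symbols (my notation, same content as above), the example sends each matrix to the linear map it defines,

    f : SL(2,\mathbb{Z}) \to \mathrm{Diff}(\mathbb{R}^2), \qquad
    f\begin{pmatrix} a & b \\ c & d \end{pmatrix}(x,y) = (ax+by,\ cx+dy),

and the homomorphism condition is

    f(g_1 \circ g_2) = f(g_1) \circ f(g_2),

which holds here because multiplying two matrices is the same as composing the linear maps they define.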

So, we saw above that you can embed a copy of SL(2,Z) inside Diff(R^2). More generally, there's a copy of SL(n,Z) inside Diff(R^n). What these guys have done is to put some limits on correspondences of this kind between SL(n,Z) and Diff(R^m) where m is smaller than n: the idea is that SL(n,Z) is an n-dimensional thing and that you can't squash it into something of much lower dimension without destroying its structure.

Unfortunately there's one more bit of technical detail needed before stating their result. Ingredient number 4 is the idea of a finite-index subgroup. Zimmer's conjecture restricts not only group homomorphisms from SL(n,Z) itself, but also from certain smaller things that in some sense contain most of the structure of SL(n,Z). So, suppose you have some subset S of SL(n,Z) which is also a group: composites and inverses of things in S are always in S themselves. Then it turns out that SL(n,Z) can be partitioned into "copies" of S. One is S itself; if x is anything that isn't in S, then the compositions "x compose s", as s runs over everything in S, form another of these copies, and x itself is in that copy. Any two of these copies are either identical or disjoint, it turns out; so the whole of SL(n,Z) is made up of a bunch of these things. If there are only finitely many of these disjoint copies, we say that S has "finite index"; so e.g., maybe there are 12 of them, meaning that S is in some sense 1/12 as big as all of SL(n,Z). (Just to be clear, SL(n,Z) is infinite.)
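A standard concrete example of a finite-index subgroup (mine, not from the comment above): inside SL(2,Z), take

    S = \{ A \in SL(2,\mathbb{Z}) : A \equiv I \pmod{2} \}.

Reducing matrix entries mod 2 is a homomorphism onto SL(2,Z/2), which has just 6 elements, and S is exactly the set of matrices reducing to the identity; so SL(2,Z) splits into 6 disjoint copies of S, i.e. S has index 6.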

We can finally state the theorem properly: If S is a finite-index subgroup of SL(n,Z), and f : S -> Diff(M) is a group homomorphism, and the dimension of M is m where m < n-1, then the image of f -- the set of things in Diff(M) corresponding to things in S -- is finite.
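Or, compactly (just restating the sentence above in symbols):

    S \le SL(n,\mathbb{Z}) \text{ of finite index}, \quad
    f : S \to \mathrm{Diff}(M), \quad
    \dim M < n-1 \;\Longrightarrow\; f(S) \text{ is finite.}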

That is: you can't fit something with the same structure as "most of" SL(n,Z) into the diffeomorphisms of something with dimension < n-1, unless you collapse that structure "almost completely".


The DS "GPU" is indeed very bizarre and shares more in common with the GBA 2D rasterizer than a modern 3D GPU architecture. That it's a scanline renderer that can handle quads directly should be a pretty big tell :)

I implemented a cheap subset of it used in Super Mario 64 DS for my online model viewer ( https://noclip.website/#sm64ds/44;-517.89,899.85,1300.08,0.3... ), but implementing all of the quirks and the weird featuresets might be nearly impossible to do in a modern graphics API. 2D rasterizers don't have to be slow (as SwiftShader and ryg show), and you can get the bizarre conditions exactly correct. I'm not sure what a GPU-based implementation would even add.

EDIT: The math to be able to handle the bilinear quad interpolation on a GPU was worked out by reedbeta last year: http://reedbeta.com/blog/quadrilateral-interpolation-part-2/ . That's a big roadblock gone, but there's still a lot of other questionable things.
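For reference, the forward direction of that quad interpolation is the easy half; reedbeta's post works out the harder inverse (recovering (u,v) from a point in the quad). A minimal sketch of the forward map in Python (names are mine):

    # Bilinearly blend a quad's four corners by parameters u, v in [0, 1].
    # Corners are (x, y) tuples: p00 at (u=0,v=0), p10 at (u=1,v=0), etc.
    def bilerp(p00, p10, p01, p11, u, v):
        w00 = (1 - u) * (1 - v)
        w10 = u * (1 - v)
        w01 = (1 - u) * v
        w11 = u * v
        return tuple(w00 * a + w10 * b + w01 * c + w11 * d
                     for a, b, c, d in zip(p00, p10, p01, p11))

Evaluating at a corner returns that corner, and the center (u=v=0.5) gives the average of all four.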


> Why did you decide to leave Intel?

> Overall, the job wasn't exciting to me. I didn't have to work that hard, and one day I had this realization while sitting in my gray cubicle (I was in a sea of gray cubicles surrounded by gray walls, listening to white noise and all alone): I'm like, “Man I am so tired. I need to go home and take a nap.” I went home, but as soon as I got there I realized, “I'm not tired anymore.” Working at Intel was a draining environment, and I knew I wanted to leave.

I had a very similar experience at Intel. The best analogy would be a sexless marriage.


Having implemented security protocols and used state machines to track state, this is a surprisingly easy (if embarrassing) mistake to make. It’s very easy to spend a lot of energy validating the documented state transitions and essentially forget to ban all other transitions.

For me this is much less “OMG how could they be so careless” and much more “There but for the grace of a diligent set of testers go I”
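A toy sketch of the default-deny alternative (my own, not from any real protocol): enumerate the documented transitions in one table, so every undocumented transition is rejected without anyone having to remember to ban it.

    # Whitelist of documented (state, message) -> next-state transitions.
    # Anything not listed raises, instead of being silently permitted.
    class ProtocolError(Exception):
        pass

    ALLOWED = {
        ("CLOSED", "client_hello"): "HANDSHAKE",
        ("HANDSHAKE", "key_exchange"): "SECURE",
        ("SECURE", "close_notify"): "CLOSED",
    }

    def transition(state, message):
        next_state = ALLOWED.get((state, message))
        if next_state is None:
            raise ProtocolError(f"illegal transition: {message!r} in {state!r}")
        return next_state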


Another excellent resource is "A Philosophy of Software Design" [0], from John Ousterhout (known for Tcl/Tk, Raft, RAMCloud, Log-Structured File System). Like Niklaus Wirth, he strives for designs that fight complexity and yield simple and powerful abstractions. This book is so much better than overrated books like "Code Complete" and "Clean Code", at a fraction of the page count.

[0] https://www.amazon.com/Philosophy-Software-Design-John-Ouste...


> If we could ever produce such matter then we could start some spacetime engineering.

Imagine the exotic "pollution" caused by civilizations which are advanced enough to tinker with the fabric of spacetime, but not advanced enough to understand (or care about) the long-term consequences of that tinkering.

Destabilizing solar systems or even entire galaxies, maybe even violating causality..

Imagine the home (and only) planets of younger, insignificant civilizations being caught up in those cataclysms, all traces of their existence wiped from cosmic history in what might amount to the equivalent of a traffic accident or "oil spill" of a greater civilization.

..maybe that has already happened, and some of the leviathanic voids [0] we've observed were a result of that? :)

[0] https://en.wikipedia.org/wiki/Void_(astronomy)


I think you're oversimplifying and ignoring a few key factors here.

1) As has been shown time and time again, the rewards of productivity gains accrue to the owners of capital far more than to the workers who are more productive. The result is widening inequality on many levels, which extrapolated a few decades into the future ends up looking... not great. I hope we can agree that de facto serfdom is a bad thing.

2) You can't just look at the number of new jobs when making employment generalizations - you have to look at the quality and type of jobs that were created to replace the ones lost. It's like if you took away someone's house and then gave them a tiny apartment a year later, then claimed "Look, they got housing back! Everything's great again!". In the case of employment, the vast majority of jobs created in the last decade have been contract positions and low-end temp work [1]. If you look at the sectors where employment has grown when new jobs data comes out, low-end service jobs continuously top the list (ex. [2]). We can debate whether or not people actually want these types of new jobs as opposed to traditional real estate/financial services/salaried construction/etc. jobs that were lost in 2009-2010.

3) We haven't hit the automation crisis yet in any meaningful way. When autonomous trucks and warehouse robots become prevalent, then we will be in an actual crisis.

The automation crisis right now is not that everyone is going to lose their jobs overnight, but that automation is exacerbating big problems in the jobs market that lead to massive inequality, the prevalence of underemployment, and the trend towards bad jobs with few or no benefits and workers needing to work gigs on the side to make ends meet or afford the things that their normal job would have provided in decades past. When large swathes of workers lose their jobs to automation in the future, these problems are going to continue to compound unless we start making big policy changes. Yes, people will likely find new jobs to replace the ones lost, but will they actually be better or worse off than before?

[1] https://krueger.princeton.edu/sites/default/files/akrueger/f...

[2] https://www.nytimes.com/2013/01/05/business/economy/services...


"From the earliest days of information theory it has been appreciated that information per se is not a good measure of message value. For example, a typical sequence of coin tosses has high information content but little value; an ephemeris, giving the positions of the moon and planets every day for a hundred years, has no more information than the equations of motion and initial conditions from which it was calculated, but saves its owner the effort of recalculating these positions. The value of a message thus appears to reside not in its information (its absolutely unpredictable parts), nor in its obvious redundancy (verbatim repetitions, unequal digit frequencies), but rather in what might be called its buried redundancy--parts predictable only with difficulty, things the receiver could in principle have figured out without being told, but only at considerable cost in money, time, or computation. In other words, the value of a message is the amount of mathematical or other work plausibly done by its originator, which its receiver is saved from having to repeat."

—Bennett, Charles H. "Logical depth and physical complexity." The Universal Turing Machine: A Half-Century Survey.


The original article [1] is, in many ways, clearer than The Economist's take on it and avoids political diversion.

They summarize Baumol's theory for entrepreneurship as assuming that the total amount of entrepreneurial spirit remains fixed, but, for structural reasons, some of it can get channeled into unproductive rent-seeking behavior (like getting favorable regulation).

They then say that their empirical work (three decades of data so more than one recession in there) shows that there is a decline in new firm formation, "in each state and nearly all metropolitan areas, and in each broad industrial sector, including high tech."

According to Baumol, this should be offset by an increase in "unproductive" behavior. This appears to be harder to measure, but they point to [2], which argues that both the labor share and the capital share of income (i.e., returns to workers and to shareholders) have been in decline.

This is all very complicated from the perspective of innovation and entrepreneurial endeavors in tech, where money is often lost despite the fact that the product is useful.

[1] https://hbr.org/2017/06/is-america-encouraging-the-wrong-kin...

[2] https://home.uchicago.edu/~barkai/doc/BarkaiDecliningLaborCa...


They don't preclude it, but they didn't happen to include it in our particular history. In particular, in the evolutionary history of the brain as an energy-optimizing controller of the body, a "System 1" would have been selected against extremely early on, when it directed the internal organs to act according to "heuristics" that wasted calories.

Another way of putting it: AI research of the 60s-80s was extremely impressive and is ridiculously underappreciated right now. Many ideas were simply way ahead of their time. It's sad that most software engineers today do not bother learning anything about the past of computing, assuming everything made in those decades is automatically "outdated". (This is not restricted to AI. For example, most OO programmers don't know what Smalltalk is.)

The guy the article is about says as much.

“I’ve stopped giving general advice to other researchers,” said Dickmanns, now 82 years old. “Only this much: One should never completely lose sight of approaches that were once very successful.”

I really feel for those researchers. Seeing your life's work ignored by people who brute-force their way through the problems you've already solved must be rather depressing.


I reject the premise. Why do we need to accept that there is a difference between "me" and an "automaton"? The whole idea supposes, unnecessarily, that consciousness is real, then demands we explore its nature. It's angels on pinheads!

I found the following comment very insightful in a past discussion:

https://news.ycombinator.com/item?id=11042400

I reproduce the relevant part:

Dependencies (coupling) is an important concern to address, but it's only 1 of 4 criteria that I consider and it's not the most important one. I try to optimize my code around reducing state, coupling, complexity and code, in that order. I'm willing to add increased coupling if it makes my code more stateless. I'm willing to make it more complex if it reduces coupling. And I'm willing to duplicate code if it makes the code less complex. Only if it doesn't increase state, coupling or complexity do I dedup code.

State > Coupling > Complexity > Duplication. I find that to be a very sensible ordering of concerns to keep in mind when addressing any of those.
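A toy Python illustration of that ordering (mine, not the quoted commenter's): the "DRY" version below removes duplication but adds state, so by the State > Coupling > Complexity > Duplication ranking the duplicated version wins.

    # Slightly duplicated, but stateless: each call is self-contained.
    def format_usd(amount):
        return f"{amount:.2f} USD"

    def format_eur(amount):
        return f"{amount:.2f} EUR"

    # Deduplicated, but now there's a currency mode that every caller
    # must track -- state bought at the price of two tiny functions.
    class Formatter:
        def __init__(self, currency):
            self.currency = currency

        def format(self, amount):
            return f"{amount:.2f} {self.currency}"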


This isn't about the state monad, it's about seq. "But now seq must distinguish ⊥ from λx → ⊥, so they cannot be equal. But they are eta-equivalent. We have lost eta equality!" - once you lose eta equality you lose everything. A whole raft of monads become impossible, not just state.

Equivalence in Haskell is defined only up to ⊥; to work in Haskell we rely on the "Fast and Loose Reasoning is Morally Correct" result: if two expressions are equivalent-up-to-⊥ and both non-⊥, then they are equivalent.

(Or, y'know, work in Idris with --total and avoid this problem entirely).


Reminds me of the opening lines of the SICP lectures[0]:

"I'd like to welcome you to this course on Computer Science. Actually that's a terrible way to start. Computer science is a terrible name for this business. First of all, it's not a science. It might be engineering or it might be art. We'll actually see that computer so-called science actually has a lot in common with magic. We will see that in this course. So it's not a science. It's also not really very much about computers. And it's not about computers in the same sense that physics is not really about particle accelerators. And biology is not really about microscopes and petri dishes. And it's not about computers in the same sense that geometry is not really about using a surveying instruments."

[0]: https://www.youtube.com/watch?v=2Op3QLzMgSY


I get intensely vivid 'stygian blue' hypnagogic hallucinations right before I fall asleep. It's an intensely dark/bright blue. Very hard to describe. It usually manifests as a blob of color that pulses from the outer-edges inwards.

Humorously, Negativland, the amazing decades-old plunderphonics group, had a whole prank site[1] and an Over the Edge[2] radio show[3] about the 'fourth primary color', Squant. This was also the only color with its own smell. Negativland also had a web-browser plugin that allowed you to 'see' this magical fourth primary color.

[1]https://www.negativland.com/archives/015squant/story.html

[2]https://en.wikipedia.org/wiki/Over_the_Edge_(radio)

[3]https://www.youtube.com/watch?v=lt8biSTQ7wk


Not a single word about Renderman?!!

Before the GeForce 3 was a thing, those of us with access to NeXT machines already had a glimpse of what it would mean to write shaders.

https://en.wikipedia.org/wiki/Pixar_RenderMan

https://en.wikipedia.org/wiki/RenderMan_Shading_Language

"The RenderMan companion : a programmer's guide to realistic computer graphics.", 1990

I am also missing 3dfx's work on shading languages before they got acquired by NVIDIA.


Pretty much off topic, but Řrřola, the author of this blog post, also makes mind blowing 256 byte demos.

E.g. Puls from 2009: https://www.pouet.net/prod.php?which=53816 (check the YouTube link if you don't have an MS-DOS machine handy)

I understand little about extreme sizecoding, but I suspect it's a similarly obsessive, mathy story to this blog post: reusing the same bytes as both code and content in a way that actually works and looks great.


All of these come after the question: “What is life worth?”

In both the US and most Asian cultures, work is worth more than life for the vast majority of the average lifespan. Working long hours is considered a badge of honor. In the US you work towards retirement—that's the goal. That's the endgame.

The fundamental question is: is that right? Is it worth it? Should we spend 40 years trying to make as much money as we can so that we can then do nothing? The entire system, from top to bottom, is designed for this. And we give little breaks in between those 40 years so people don't become depressed and kill themselves.

Whenever I hear people justify why they work so hard, it's almost always because they want to become rich so they won't have to work. It makes me think of that line from Office Space, "You don't need a million dollars to do nothing, man. Take a look at my cousin. He's broke and he don't do shit."

We do what the media tells us because we never stop to think about what we actually want.


It's also easier to give forgiveness than permission, especially in an institutional context. Forgiveness after the fact doesn't imply approval of the act the way permission beforehand does.

> Which would then require that a candidate only pander to an even smaller number of densely populated cities.

NYC/LA/Chicago combined is only 15M people. That's clearly not enough people to win a majority of a 300M+ population. Perhaps you mean a wider class of cities. But 80% of Americans live in cities! If you can't get elected by "pandering" to a whopping 80% of your electorate then something is terribly wrong. We have to choose a very "just right" definition of "city" for your assertion to be true.
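The arithmetic, using the comment's own round numbers:

    \frac{15\,\mathrm{M}}{300\,\mathrm{M}} = 5\% \ll 50\%

so even unanimous support in those three cities falls an order of magnitude short of a popular majority.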

But there's an even more important misunderstanding here.

When discussing this topic, many people get distracted by the fact that removing the electoral college would increase the voting power of Massachusetts residents and decrease the voting power of Montana residents in presidential elections.

This is a distraction because neither Massachusetts nor Montana is the winner in the current system! It's states like Pennsylvania and Wisconsin that have a truly distortionary amount of power in the current system. Those states are not particularly rural or particularly urban.

Here's the key observation that could make changes to the electoral college politically feasible: if you don't live in a presidential swing state, getting rid of the electoral college increases the power of your region in selecting the president regardless of the size of your state/city.

So, this is not a case of favoring one small set of states over another small set. Removing the electoral college would substantially weaken the power of 5-6 states over the Executive branch and increase the power of the other 45 states (some more than others, sure, but nearly everyone wins compared to the status quo).

> Because, in the end, minority interests will always be cast aside once the election is over anyway.

This is exactly why I favor keeping Congress as it is but reforming the electoral process for the Executive branch. Montana has a very loud voice in the Senate and House relative to its population. I do not suggest changing that.

> It would be far better to...

1. An omniscient and beneficent dictator would be even better; the point of my post was to suggest plausible alternatives :-)

2. There are many merits to federalism, but this is quite a bag of worms and we're already straying OT.

