I sometimes chuckle that most of the cool Lexuses are now Priuses under the hood!
But that’s a good thing. I own a Toyota Corolla hybrid myself, and hybrids are one of those things I can say I’ve completely changed my mind about. I used to dislike the idea of hybrids, because I assumed the overall system complexity must be higher than either an ICE or an EV. And I’m a sucker for simple systems, so I thought it was either EV or nothing.
Then I looked into how hybrids work. Specifically Toyota hybrids. And came away totally amazed. It’s an amazing system, and much simpler than a traditional ICE car. It doesn’t really have a gearbox, a clutch, or a starter. The engine is a normal naturally aspirated engine, so no turbochargers, no extreme compression, and none of the issues that come with them. Furthermore, the engine is typically configured to run the Atkinson cycle, which puts less stress on it. And the engine uses a timing chain, so no belts to change, and by design it gets stressed a lot less than in a traditional ICE, because the stop-start load is carried by the electric motor. Also, it can’t really have trouble starting in cold weather, much like an EV. All of that adds up to crazy reliability.
I’m at a point where, if I’m looking for a car that uses fuel, I only look at cars that use the Toyota hybrid tech (or similar). At the moment that’s just Toyota, Lexus, and most (but not all) Ford hybrids.
Other companies have hybrids that are subject to exactly my original concern about massively increased complexity. As an example, VW hybrids have an electric motor inside their DSG gearbox. So you get all the complexity of their modern ICEs (turbochargers, DSG, whatnot) plus additional hybrid-related complexity.
The older I get, the more I realize that so much of the divide in the tech field is simply between the two camps of "the tools are the interesting part" vs "getting things done with the tools is the interesting part".
First, there is a huge difference between art and engineering that the author completely misses.
The fact that most people are not competent to judge the quality of two similar products does not mean they don't care about quality. They just usually can't tell, so they go for the cheaper one (which has a higher probability of being worse). And that drives prices down, and quality with them. Or it reinforces monopolies, because only those who already produce at scale can offer better quality at a lower price.
But if there was a way to correctly tell people: "look, this smartphone is 20% more expensive, but it will last twice as long and it will be more convenient for you in ways you can't understand right now", nobody would go for the worse quality, right? The problem is not that people don't care about quality, it's that they are not competent to judge it and marketing does the rest.
Then the article talks a lot about art. Interestingly, the author says "I'm proud to not understand art, but let me still explain to you how it works". And then proves it by giving contradictory examples like "people don't care about quality, they will just listen to ABBA or go to the Louvre". You have to not understand ABBA or what's in the Louvre to think like that.
So here is my rant: it's okay to be proud to not be knowledgeable about stuff. But then don't be surprised if people notice that you have no clue if you write about it.
(Yes, I noticed the irony of writing a pedantic comment about a mediocre article that prides itself on being mediocre and criticises pedantry :-) ).
There is this really useful word, immanence, that I wish were more widely known. Basically it means “focused on this world and human experience.”
I learned this from A Secular Age, a fantastic book by the philosopher Charles Taylor. Basically the book is about how modern western society is increasingly immanent, focused on this reality and not on what comes after life: heaven and hell, punishments and rewards after death, personal experience after death, and so on.
I bring this up because I think a decent working definition of spirituality is how one navigates this immanent situation. It seems to me that there quite likely is something beyond the bounds of human experience, but we aren’t quite sure what it is. A successful version of spirituality, to me, would be a way of navigating this situation in such a way that is psychologically healthy and philosophically justifiable. Simply denying existence outside of human frames seems like a denial of reality.
In our relentless pursuit of the perfect interface between mind and machine, we build monuments to our own discontent. Each click, each tactile response, each millimeter of travel becomes a meditation on what we seek but cannot name. We spend fortunes to recreate the feeling of something we've never felt, chase echoes of satisfaction that fade with each new acquisition, and find ourselves surrounded by shrines to our own restlessness. In this symphony of springs and switches, we are all apprentices to an art that has no master.
The best way to achieve a good abstraction is to recall what the word meant before computer science: namely, something closer to generalization.
In computing, we emphasize the communicational (i.e. interface) aspects of our code, and, in this respect, tend to focus on an "abstraction"'s role in hiding information. But a good abstraction does more than simply hide detail, it generalizes particulars into a new kind of "object" that is easier to reason about.
If you keep this in mind, you'll realize that having a lot of particulars to identify shared properties that you can abstract away is a prerequisite. The best abstractions I've seen have always come into being only after a significant amount of particularized code had already been written. It is only then that you can identify the actual common properties and patterns of use. Contrarily, abstractions that are built upfront to try and do little more than hide details or to account for potential similarities or complexity, instead of actual already existent complexity are typically far more confusing and poorly designed.
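As a toy sketch of this idea (the report functions and field names here are invented for illustration, not taken from any real codebase): only after writing the particulars does the shared shape become visible and safe to generalize.

```python
# Two "particulars", written independently before any abstraction exists.
def sales_report(rows):
    total = sum(r["amount"] for r in rows)
    return f"Sales: {total}"

def refund_report(rows):
    total = sum(r["amount"] for r in rows)
    return f"Refunds: {total}"

# The abstraction generalizes the shared shape into one object that is
# easier to reason about -- it does more than hide the summing detail.
def totals_report(label, rows):
    total = sum(r["amount"] for r in rows)
    return f"{label}: {total}"

print(totals_report("Sales", [{"amount": 3}, {"amount": 4}]))  # Sales: 7
```

The point is that `totals_report` could only be designed confidently once the two particulars existed to reveal what was actually common between them.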
There is a tendency to call something an evolutionary puzzle if some behavior is not optimal, and so elaborate, and often quite clever, possible explanations for why it's secretly optimal are devised.
But it's a mistake to think that non-optimal phenotypes require special explanations when the obvious one is at hand: the optimal solution hasn't been found yet. In this case, the mechanisms that ensure that young are cared for are simply being triggered by situations they weren't "designed for", and nothing has evolved to prevent that from happening because it's an infrequent occurrence in the first place (and perhaps because any solution would be more expensive than it's worth?).
Life works with what it's got, and natural selection (and other forms of selection) are not explanations; they're frameworks, models of a type of process. Actual explanations involve actual causal chains with actual stuff in the world.
What is often hidden in these kinds of posts is a relationship with money that's not fully understood. There's thousands upon thousands of different jobs out there, but your post implies that there's none outside of tech. I'm assuming this is because of the lifestyle it afforded you which you are used to, addiction to cash flow, dependents to support, or perhaps an identity fused with high earnings.
Do a thorough think of how much money you really need. And convince yourself that a dollar more isn't worth any amount of extra effort.
For most people it took 3-5 years in school or lower-level jobs to get good enough to crack into the fruits of a high-earning software engineering career. If you were young when this happened, you didn't even notice those years go by. Now you're older, but the same rules apply. They just feel different and usually unmotivating. You may need to spend a few years at the bottom again to make progress down a different skill tree.
When you're winning for so long, it's hard to imagine eating shit for years just to make bread again elsewhere. Harness some excitement around that and commit fully, or realize that you have a pretty great life and find a way to stay cozy in tech (like divorcing your identity from your job).
edit: also if you've only been in big tech, then get out. it's so much more fun elsewhere.
Tailwind threads usually include comments and questions that are answered in the documentation, so here are some useful links for people who haven't used Tailwind before.
A core part of Tailwind is that you reuse styles through a templating system rather than custom CSS classes; e.g., you have a button.html file that contains the styling for your buttons for reuse, so you don't have to keep repeating the same utility classes everywhere.
> It’s very rare that all of the information needed to define a UI component can live entirely in CSS — there’s almost always some important corresponding HTML structure you need to use as well.
> For this reason, it’s often better to extract reusable pieces of your UI into template partials or JavaScript components instead of writing custom CSS classes.
@apply (e.g. .btn { @apply py-2 px-4 bg-indigo-500 text-white }) is only meant for reusing styles for simple things like a single tag and is generally recommended against because you should use templates instead:
> For small components like buttons and form elements, creating a template partial or JavaScript component can often feel too heavy compared to a simple CSS class.
> In these situations, you can use Tailwind’s @apply directive to easily extract common utility patterns to CSS component classes.
Inline styles can't be responsive and can't target hover/focus states e.g. there's no inline way to write "text-black hover:text-blue py-2 md:py-4 lg:py-5 lg:flex lg:items-center" and the CSS equivalent is very verbose.
> But using utility classes has a few important advantages over inline styles:
> Designing with constraints. Using inline styles, every value is a magic number. With utilities, you’re choosing styles from a predefined design system, which makes it much easier to build visually consistent UIs.
> Responsive design. You can’t use media queries in inline styles, but you can use Tailwind’s responsive utilities to build fully responsive interfaces easily.
> Hover, focus, and other states. Inline styles can’t target states like hover or focus, but Tailwind’s state variants make it easy to style those states with utility classes.
Opinion: As utility classes are quick to type and change, and it's easy to find where you need to edit to change the styles of a component, it's an order of magnitude quicker to iterate on designs compared to CSS that's scattered across files and shared between pages in hard to track ways. CSS specificity and cascading aren't often used, and mostly just cause complexity and headaches so are best avoided. Tailwind-style vs classic CSS is similar to composition vs inheritance in OOP with similar pros/cons, where complex inheritance is generally discouraged now in OOP languages. Yes, the Tailwind approach is going against the standard CSS approach, but CSS wasn't initially designed for highly custom responsive designs so it's not surprising if its "best practices" don't fit everywhere.
Also, Tailwind is really for custom responsive UI and website designs. If your site is mostly Markdown documents and you don't need a custom design or complex mobile/desktop styling, the above isn't going to make any sense and plain CSS or something like Bootstrap is likely a better choice.
Unfortunately, the answer to your question is very likely "No." There are a few subtle reasons why this is the case, and I'm going to attempt to explain them. I've seen this situation many times, and the outcome is almost always "HN doesn't change." This isn't due to laziness; adding the feature is two lines of Arc. The reason is social.
Social software is hard. In fact, it's one of the hardest types of software ever to be built. Things that might seem like small conveniences or improvements often have counterintuitive effects. These effects are not readily understood by people who aren't running the site, because only the people running the site can see them in detail.
For example, suppose we were to implement the black bar link. Firstly, this means that the black bar now becomes a "superslot", pinned at the very top of HN. It turns what was otherwise a subtle gesture into a feature. It will inevitably raise questions about whether the black bar is really warranted for so-and-so, or whether it's fair that they get the superslot. But criticisms like that can be ignored.
The bigger problem is one that Dan has pointed out many times: it's good to have readers dig a little for information. Only people who are motivated will end up showing up in the thread. And those are exactly the kinds of people you want showing up in that thread, because the point is to honor whoever died, not to catapult the entire community (and then some) at the thread. After all, every single person who visits HN would immediately click the black bar if it were clickable. Are you sure that's the kind of effect that would be a Good Thing?
Then there's the fact that doing nothing is often the optimal choice. Suppose the black bar was changed, and it was a mistake. That mistake costs time, because now the moderator has to deal with the consequences. It's not just a matter of reverting the change; when stuff like that gets reverted, people get curious why. So you'd naturally have to write an explanation, which ends up sparking discussion about very tricky subjects. Again, community software is hard, and explaining subtle reasons for doing X is a delicate process. All of this translates to the potential of wasting some unknown quantity of time; time you won't get back, and time spent not doing your actual duties of running HN.
Then there's the most subtle point: it would break tradition. pg was the originator of the black bar, along with the Christmas colors. It might seem cheesy to people who haven't been here since 2006, but there's something magical about seeing HN behave exactly as it was originally written, even when that behavior is sometimes arguably less optimal than it otherwise could be. Because, again, every change has social effects, and these are very hard to predict.
Lastly, the person who died might not want all of the attention. Are you sure you really want to be spotlighted by the entire (tech) world when you pass? It wasn't till I had some uncomfortable moments in the spotlight that I realized that fame is sometimes something that people choose to avoid.
For all of those reasons and then some, the black bar is likely going to stay as it is. If only as a hat tip to the person who originally created the tradition.
People always like to say "thinking, not typing, is the bottleneck". This is a totally wrong way to think about it, because for the most part you don't think while you type. You think, then you type. They happen at different times! The more time you spend typing, the longer it is before you can start thinking again.
This happened to me and I found this tool super helpful to get my site unblocked: https://dnsblacklist.org/
I purchased a valuable premium domain to host a personal art collection (of anime cels). For some bizarre reason, the site was inaccessible from my work computer and it was de-listed from Google even when I typed the URL itself into search.
I hired a Squarespace specialist to figure out why, to no avail. I then begged our company’s CISO to investigate, and it turns out we had a firewall setting on UniFi that blocked the domain because it appeared on a list. Once I dug into the domain’s history, it turns out it had been an anime porn aggregator years back. I personally reached out to all the web filters out there (Google, Symantec, Bing) and one by one filed tickets for them to mark it as art instead of pornography, and it worked. I am now properly crawled on Google but still MIA on Bing; Search Console is giving me some BS error that’s incomprehensible, typical of MSFT.
Someone gave me an analogy some time ago that made a lot of sense.
If you shine a flashlight through a tree blowing in the wind and vary the brightness to convey information, the signal can get distorted pretty easily.
However, if you have a constant brightness source and vary the color, it’s a lot easier to figure out what the source is trying to convey.
The fundamental reason for this is simple. Humans are prone to cognitive dissonance. Meaning, we do absurd things to avoid painful thoughts. And anything that questions our sense of identity is a painful thought.
So if my self-image is, "I've advanced our understanding of the fundamental nature of reality," then the idea that my contributions weren't useful becomes painful. So we avoid thinking it, challenge people who question our past contributions, and so on.
The natural result of this cognitive dissonance is a feeling of undue certainty in our speculations. After all, certainty is merely a state in which one idea is easy to believe and its opposites are hard to believe. We imagine that our certitudes are based on fact. But they more often arise from cognitive biases.
And this is how a group of intelligent and usually rational people descend into theology whose internal contradictions can't be acknowledged.
One of the most valuable life lessons is that you can't get anyone else to care about what you want them to care about, basically ever. You need to focus on the things you can control, and one of the things you can't control is what someone else is going to care about.
So if you want something done and someone else has to agree, you have to figure out how the thing you want coincides somehow with their interests and concerns.
Then you explain the thing you want to them in terms of how it advances/affects the interests and concerns of the other person. So in the framing of TFA, product is never ever ever under any circumstances going to give a shit about your architecture proposal (because that is entirely in the domain of your concerns). But they may care about how the architecture is going to prevent them from delivering features on the upcoming roadmap, and how you have a solution that can fix that (because now you are in the domain of their concerns). Notice this is not just "your architecture proposal"; it's how your architecture proposal is going to get them what they want. And to do that you need to think deeply and make sure you really understand what they want, not just what you want.
You're not trying to change their mind. You're trying to get what you want by showing them how it will also get them something they want.
I'm putting this here because I really wish someone had told me this 25 years ago near the start of my career.
There was an article posted on here[1] a while back that I only just found again, introducing the term "expedience." The idea was that we think we live in a world where people have to have "the best" sweater, be on "the best" social network, drive "the best" car, etc. But when you look at what really WINS, it's not the best, it's the most "expedient" - i.e. sufficiently good, with built-in social proof, inoculated against buyer's remorse, etc.
Is Amazon "the best" place to go shopping? No, you might find better prices on individual items if you put a little more work into it, but it's the most expedient. Is Facebook/Instagram/Tiktok/insert here "the best" social network? No, but it is the most accessible, easy-to-use, useful one. Is a Tesla (perhaps outdated example since X) "the best" car - no, but it is the most expedient.
There is a tangent here that intersects with refinement culture as well. Among the part of society that (subconsciously) cares about these "expedient" choices, everyone and everything starts to look the same.
Housing decisions seem to be this weird place where people have a really hard time disentangling their thinking about various things. As someone who’s rented most of their life, I’m constantly reminded by others that the money I give to my landlord will never come back. But these same people tend to be a little surprised when I point out that the money spent on home loan interest, property taxes, maintenance and upkeep, etc., is also money you’ll never get back. Or that a house you live in has some peculiarly unfortunate characteristics when viewed as an investment: extreme illiquidity, not being able to sell only a part of it, generally not being able to sell it without simultaneously buying (or renting) another asset in the same class so that you can keep a roof over your head, property taxes functioning like an expense ratio that would be considered highway robbery for any other asset class, etc.
But if you point all that out, people also go all weird and assume this must mean that you're trying to argue that owning a home must be categorically a bad idea. Possibly because we're so caught up in thinking of one's own domicile as an investment that we've lost the ability to think about it as just being another useful thing one might own. I've worked out the math and confirmed that, for the amount I drive, renting a car when I need one is much less expensive than owning one. But I own one, anyway, because I value the convenience, and that doesn't seem to be a difficult concept for anyone to grasp. Similarly, I currently own a home and it's much more expensive than my previous living arrangement, but it's worth it to me because I get to choose the appliances, decide whether or not I get to have decent insulation, etc. And that's valid, too. I just don't harbor any illusions that I'm saving any money by paying for these luxuries.
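The car arithmetic mentioned above can be sketched like this; every number is an assumption made up for illustration, not a figure from the original comment:

```python
# Toy rent-vs-own comparison for a car (all figures are assumptions).
ANNUAL_OWNERSHIP = 2400 + 1200 + 800   # insurance + maintenance + depreciation
RENTAL_DAY_RATE = 60                   # cost per rental day
days_needed = 30                       # days per year a car is actually needed

annual_renting = RENTAL_DAY_RATE * days_needed
print(annual_renting)                     # 1800
print(annual_renting < ANNUAL_OWNERSHIP)  # True: renting wins at this usage level
```

The comparison flips as `days_needed` grows, which is exactly why the choice is about usage and convenience rather than a categorical answer.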
This is pretty cool, I drop the following shell script on all my servers:
#!/bin/bash
# Usage: push "message"  or  push "title" "message"
if [[ -z "$1" ]]; then
    echo "No message passed"
elif [[ -z "$2" ]]; then
    # One argument: send it as the message body
    curl -s --form-string "token=MYAPPTOKEN" --form-string "user=MYUSERTOKEN" --form-string "message=$1" https://api.pushover.net/1/messages.json
else
    # Two arguments: first is the title, second the message
    curl -s --form-string "token=MYAPPTOKEN" --form-string "user=MYUSERTOKEN" --form-string "title=$1" --form-string "message=$2" https://api.pushover.net/1/messages.json
fi
It's SUPER basic and probably shitty, but for me it's perfect. I can add " && push 'Command is Done'" to the end of any command and get a notification on my watch/phone (and desktop, though I don't have Pushover installed there). Great for things you threw into a screen/tmux session and want to know when they finish.
If you guys want to go down an unusually interesting rabbit hole, the book Open-Focus Brain by Les Fehmi and (especially) the associated audio exercises, are all about this.
The exercises are something like guided meditations, but unique in my experience, and I never do exercises like that. It’s a pity that his work isn’t better known. He died a couple of years ago.
The past light cone of an object constrains which past events can affect the object, regardless of whether the object has awareness or not. So you don’t need memory to experience time, even if you need memory to understand it.
How do you ensure the data infrastructure you’re building doesn’t get replaced as soon as you leave in the future?
If this is a core conceit of the thinking then my answer is who cares?
Why do you want to try and influence a situation you're not even involved in?
Taking it back to the best lesson I was ever given in software engineering: "don't code for every future".
Do what you're asked to, and don't get caught up in projecting your own biases into trying to build a "solid base" for a future whose concerns you can't know.
What a strange post. This guy namedrops Scott Adams and writes a few lines referencing Adams' 2013 book How to Fail At Almost Everything And Still Win Big, but it's pretty unclear to me if he understood the point that Adams was trying to make. In Ch6 Goals vs Systems, Adams writes:
> To put it bluntly, goals are for losers. That's literally true most of the time... The systems-versus-goals point of view is burdened by semantics, of course. You might say every system has a goal, however vague. And that would be true to some extent. And you could say that everyone who pursues a goal has some sort of system to get there, whether it is expressed or not. You could word-glue goals and systems together if you chose. All I'm suggesting is that thinking of goals and systems as very different concepts has power. Goal-oriented people exist in a state of continuous presuccess failure at best, and permanent failure at worst if things never work out. Systems people succeed every time they apply their systems, in the sense that they did what they intended to do. The goals people are fighting the feeling of discouragement at each turn. The systems people are feeling good every time they apply their system. That's a big difference in terms of maintaining your personal energy in the right direction.
Looking at the blog post author's bolded claim, "Systems don't work without goals", and his implication that every Olympic athlete who does not claim his or her medal emoji is a failure, it seems clear to me that he either missed Adams' point entirely or has his own agenda with respect to "goals" as a buzzword. I will also point out that the project his blog links to is a $10/month subscription service that itself functions as a system to help people achieve their goals -- so perhaps this is someone with a vested interest in this semantic battle.
The analogy I like best is that the economy functions like a slime mold. When it senses a chemical gradient that indicates food in some direction, it grows out in a very wide search pattern, expending a lot of resources to move that way. When it finds the actual food, most of the growth withers away and you're left with just a path to the food.
As a primarily Windows developer of video games I find developing on Linux far more frustrating, fragile, and error prone than Windows.
However it's less about which is better or worse, and more about the devil you know. Setting up a brand new macOS for development is excruciating. Learning Linux is an extremely long, time consuming, and painful experience. But once you've built up some scar tissue it doesn't seem so bad!
> The cleanest way is to just use Linux on Windows
Hard hard hard disagree. I could not possibly disagree more strongly. I violently and disrespectfully disagree.
> The other option is to use Windows proper
This is the way. Don't use Cygwin or MinGW-64. That is bad advice.
I write a lot of C++ code. It typically needs to run on Windows, macOS, Linux, and Android. My experience is that Windows developers are generally content to step out of their comfort zone and work on macOS and Linux "natively". My personal lived experience is that Linux-first developers are far and away the least likely to even think of Windows or other platforms. They frequently refuse to request a Windows machine so they can even test another platform. Which means I have to fix all their broken code, grumble grumble. If you hard coded /usr/lib or similar you're a bad person.
My recommendation is to simply learn Windows and do things the native Windows way. Stop assuming all platforms are Linux. Cross-platform is a solved and relatively simple problem. Trying to force everything to behave The Linux Way is pain and suffering.
Personally I think "The Linux Way" is generally quite bad. Try something different! Be open to new experiences! Don't dip your toes in the water, jump in the deep end! You might learn a thing or two. And who knows, you may even discover it's quite pleasant.
I've been in the cloud / IT space for about 15 years now, and at some point in the last few years I became quite jaded with all the new shiny tech things. I had a lot of trouble buying into the K8s ecosystem when I could do magical wondrous things with SSH and Ansible, and most conversations with my colleagues at the time were unconvincing at best - they would say K8s can do blah blah, and I would point to our Ansible playbooks that were already doing the same thing. The problem for me was less about the differences in technology and more about finding the willingness to learn something just because I have to.
I've since learned to recontextualize these things in terms of people. CI/CD isn't good because it's better than SSH, it's good because people who speak English can understand a simple devops pipeline but not my custom SSH wizardry. It's a way of inviting developers and even non-tech mgmt folks into my arcane world of development / production and allowing them to see what's going on under the hood. My incentive now isn't about what CI/CD tech is better or worse, rather does it allow my team / peers to understand what I'm trying to achieve and join in. And ultimately that's what I get paid to do - I don't get paid to do cool tech stuff, I get paid to make other people's work easier, or at least that's how I see devops / CI/CD. I know I can always find easier ways to do things, but will they necessarily understand them?
In Latin, "null" was not a noun (i.e. substantive), but an adjective applied to nouns.
It was used in precisely the same contexts as the words for "one", "two", "three", etc., and in those contexts you could substitute any cardinal numeral. Like any cardinal numeral, it could be used to answer questions about how many things are in a certain place.
So it was really the number "zero".
For the empty set, the most appropriate Latin word was the noun "nihil" ("nothing"), sometimes contracted to "nil" (with long "i") hence the NIL of LISP for the empty list.
So the concepts of "zero" and "empty set" were distinguished in Latin and also in the other known ancient languages.
Some modern programming languages use "null" in a wrong way, when they should have stuck to the NIL of LISP. A null integer or floating-point number denotes the quantity "zero", but a null pointer is not a quantity. A null pointer points to nothing, so it denotes the object NIL.
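The zero-vs-nothing distinction drawn above can be illustrated in Python terms (with None playing the role of LISP's NIL; this is my sketch, not from the original comment):

```python
# The quantity zero vs. the object denoting "nothing to point at".
zero = 0        # a number: answers "how many?"
nothing = None  # an object: the absence of a referent

print(zero == nothing)  # False: the quantity zero is not the object "nothing"
print(len([]) == zero)  # True: the empty list *contains* zero elements...
print([] is nothing)    # False: ...but the empty list itself is not "nothing"
```

That is, a language that conflates `0` and `None` would be making exactly the category error the Latin vocabulary avoided.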
(For the purposes of this post, I'm including HTML in the XML family.)
XML/HTML is good when:
1. You have two dimensions of markup you want to do. That is, you have a clear distinction between what is a new "tag" and what is an attribute on that tag. If you can't almost instantly decide whether some feature you want to add works as an attribute or a tag, you probably shouldn't be in XML.
2. Almost every tag one way or another contains some text, the third dimension that XML supports. A proliferation of tags that never contain any text is a bad sign. A handful may not be a problem, e.g. "hr" in HTML, but they should be the exception.
3. You have a really good use case for XML namespacing, the fourth dimension of information that XML supports, in which case there's almost no competition for a well-standardized format, as long as you're also using the previous three dimensions.
There's sort of this popular myth that XML is useless, which I think isn't because XML is actually bad; I think it's because, in general, most times you want to dump out a data structure #1 isn't true, let alone #2 or #3. In a lot of data sets, you've only got the two dimensions of "simple structure" and "text", not annotations on the structure itself. (Or, perhaps even more accurately, the annotations end up implicit in the format itself, and the format is constant enough for that to be just fine.) A lot of stuff in the 1990s and 2000s used XML "because XML" even though it clearly failed #1. XML is really klunky when you don't want that second dimension, because XML APIs generally can't let you ignore it, or they wouldn't actually be XML APIs.
On the other hand, when you learn this distinction, you do come across the occasional JSON-based format that clearly really ought to be XML instead. You can embed anything you want into JSON, but when you're manually embedding a second structure dimension into your JSON document, it loses its advantages over XML fast. If you've ever seen any of the various attempts to fully embed HTML into JSON, without leaving any features behind, you can begin to see why XML or XML-esque standards like HTML aren't a bad idea. HTML is much easier to read for humans than HTML-in-JSON-with-no-compromises.
And if you've truly got the four-dimensional use case, XML is really quite nice. When you need all the features, suddenly the libraries, completely standardized serialization, and XPath support and such are all actually convenient and surprisingly easy to use, for what you're getting.
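For instance, even Python's batteries-included `xml.etree.ElementTree` ships a useful subset of XPath, which is what makes querying a genuinely four-dimensional document pleasant (the catalog document here is invented):

```python
import xml.etree.ElementTree as ET

doc = """
<catalog>
  <item id="1"><name>Widget</name></item>
  <item id="2"><name>Gadget</name></item>
</catalog>
"""

root = ET.fromstring(doc)
# XPath-style paths: descendant search, attribute predicates, child steps.
names = [item.findtext("name") for item in root.findall(".//item")]
gadget = root.find(".//item[@id='2']/name")
print(names)        # ['Widget', 'Gadget']
print(gadget.text)  # Gadget
```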
Some examples: HTML is a generally good idea. SVG is a middling idea; it passes #1 and #3 but fails #2. SOAP and XML-RPC are generally bad ideas; SOAP fails #1 and #2 but sort of uses #3, and XML-RPC fails all three. XMPP I actually think is pretty solid as an XML format (mere network verbosity problems can be solved with an alternate encoding, though admittedly that becomes non-standard), and in a lot of ways the real problem with XMPP isn't so much the format itself as that people are not used to dealing with the four-dimensional data structures that result. People expecting IRC-esque flat text are not expecting such detail. Using the fourth dimension, namespaces, for extensibility is neat, but few developers understand it, or want to.
This is missing a few things from the trainings I’ve had on giving feedback, though it has a lot of the good stuff.
For one, it mentions giving positive feedback, but it fails to mention that you should not give that positive feedback at the same time that you’re giving constructive feedback. This gets called the “feedback sandwich”: to ease the awkwardness of giving constructive feedback, we tend to sandwich it between complimentary feedback. The problem is that people often focus on the stuff that feels good and fail to really hear the constructive part.
Secondly, while it mentions including the impact, it doesn’t mention the first two parts of good feedback. The model I learned goes by the initialism SBI, for situation, behavior, and impact: in X situation, you did Y, and it caused Z. You don’t have to format it exactly this way, but having all three components is key to making the feedback actionable.
The other thing that’s necessary for great feedback culture is to really understand the concept that “feedback is a gift.” It’s really easy to be defensive or disagree with feedback you hear. But you need to understand that feedback doesn’t represent objective truth, it represents a perspective that you didn’t have before hearing it. As such, it is always a positive to hear, even when it’s critical. You may not agree with the perspective you’re hearing, but simply knowing that the other person feels that way is more information than you had prior to getting the feedback. And having more information is almost always better.