"Interesting" is subjective, but I timed out my brain trying to think of a title. I've been trying to get into blogging more about technical stuff, and I've always enjoyed other people's writeups of troubleshooting sessions. So here's my attempt at one.
It does come with a bit of a puzzle in the form of a sample project, as I started off challenging my friends to try to figure it out before I wrote the explanation.
I encountered this a while back, and it led me to dig into the ESP-IDF documentation to try to understand the behavior of a device I did not write the code for. Yes, it's software, but it's a footgun from the manufacturer. I find this one in particular to violate the principle of least astonishment:
> It is a possible situation that there are multiple APs that match the target AP info, e.g., two APs with the SSID of "ap" are scanned. In this case, if the scan is WIFI_FAST_SCAN, then only the first scanned "ap" will be found.
The default, if esp_wifi_set_config() is not called or, as I see in almost all the sample code that comes up with a quick web search, the scan method is left untouched in the wifi_config_t struct, appears to be WIFI_FAST_SCAN. When you're looking directly at the relevant manual section during a HN discussion of ESP WiFi behavior, it may be obvious, but for a developer focused on the main product functionality who just copies and pastes an example and moves on when it appears to work, I'm not surprised if this always-incorrect-by-default behavior makes it into the vast majority of shipped ESP-based products.
I could have sworn there used to be a "sort by SSID" default, as in it would do the full scan and then connect to the AP with the alphabetically earlier hardware address. In any case, the symptom I was plagued with was that this particular device would consistently connect to the furthest-away access point rather than the one in the same room, resulting in unusable dropouts.
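For reference, here's a minimal sketch of the fix I'd reach for today: explicitly opt into a full scan and sort by signal strength instead of relying on the fast-scan default. The SSID and password are placeholders, and the usual NVS/netif/event-loop/esp_wifi_init boilerplate is omitted; field names are from recent ESP-IDF releases, so check your version's headers.

```c
#include <string.h>
#include "esp_wifi.h"

static void wifi_sta_prefer_strongest_ap(void)
{
    wifi_config_t wifi_config = { 0 };

    // Placeholder credentials -- not from the device in question.
    strncpy((char *)wifi_config.sta.ssid, "ap", sizeof(wifi_config.sta.ssid));
    strncpy((char *)wifi_config.sta.password, "password", sizeof(wifi_config.sta.password));

    // Left untouched, this defaults to WIFI_FAST_SCAN: stop at the first
    // AP matching the SSID, even if it's the weakest one in range.
    wifi_config.sta.scan_method = WIFI_ALL_CHANNEL_SCAN;

    // Only meaningful with the full scan: pick the matching AP with the
    // best RSSI rather than whichever happened to be scanned first.
    wifi_config.sta.sort_method = WIFI_CONNECT_AP_BY_SIGNAL;

    ESP_ERROR_CHECK(esp_wifi_set_mode(WIFI_MODE_STA));
    ESP_ERROR_CHECK(esp_wifi_set_config(WIFI_IF_STA, &wifi_config));
    ESP_ERROR_CHECK(esp_wifi_start());
}
```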
I realize it’s hard to face, but it’s ok to admit that (cool as the tech is) some things just have a net negative effect on the world. It’s just engineers and data scientists using their enormous talents to make the world a worse place, instead of a better one.
It’s not going to happen, but the only solution is to just stop developing it.
The cat's out of the bag; if someone stops, someone else will start.
I would entreat people to consider the net effect of anything they create. Let it at least sway your decisions somewhat. It probably won't be enough to not do it, but I think of it more as the ratio between net positive :: net negative, and paying attention to that ratio should help swing it at least a little -- certainly more than giving up and ignoring the benefits :: harms.
The whole idea of developing AGI (even if LLMs are probably the wrong approach) is so strange when you think about it.
The smartest people in the world are working very hard in order to make themselves completely redundant and cheaply replaceable. If they succeed, they will turn their main skill and defining characteristic into a meaningless curiosity.
And life will not be better: the manual work of today will still need to be done, and robots are not up to the task. Even with better programming available cheaply, the hardware (and I mean even simply the metal) is too expensive compared to human hands.
The trend is that once ML models are able to compete with humans on a task, they perform it much faster and more cheaply (although very often with a much higher error rate, for now).
IMO thinking will be useful to some extent, because the bottleneck will be getting instructions from the models into the people who perform them. So if the model has to spend less time communicating with you (limited by human cognition, not the model), you will be more productive.
All work that can be done remotely can also be done by AI. Currently there are limitations like you having more context from communicating with your colleagues.
I suspect at some point, companies will try to mandate that all work-related communication (even face to face) be made accessible to AI. They will no doubt try to go two steps forward and then back off one step as a compromise, so it will be a local model in the name of privacy or something like that.
At that point, AI will have the same work-related knowledge you have. In fact it will be able to cross reference knowledge from all workers in the company. So why would you work from home if it can do everything you could but faster and cheaper?
What is your conclusion? Will it be a benign benefit, bringing us closer to post-scarcity, or will we be doomed to a dystopian existence? Can you elaborate on how you see it?
Post-scarcity can work if everyone can say "here are my needs, they are now fulfilled, I have a happy and satisfying life". The issue is that some people have a deep need to have more than others, so they will always strive to have more, including power. AI, if it's ever created, will be just another tool they'll use to get it.
I tried this with a random commit from one of my projects: https://github.com/masto/LED-Marquee/commit/775d48fc0dd969de.... You can read my human-crafted message there. By comparison, what follows is the one that Gemini came up with, which is A: useless, and B: wrong. I don't want to be mean, but I would put this up there as almost the perfect example of what generative AI should never be used for. It cannot read your mind, and the commit description is where you say what the intention of the change is, not where you summarize what it contains.
> Fix: Remove platformio.ini from .gitignore and rename platformio.ini.dist
>
> This commit removes platformio.ini from the .gitignore file and renames
> platformio.ini.dist to platformio.ini. This allows the project's PlatformIO
> configuration to be tracked by Git and ensures consistent build settings across
> development environments. It also adds comments explaining how to configure OTA
> updates and override settings with a separate marquee.ini file.
This pops up at an interesting time. I'm thinking about starting a business that will require me to sell services to enterprise customers, and I feel much the same way about phone calls. I thought I would just have to get good at it, but maybe there's an opportunity to rethink the base assumptions. If my potential customers would rather have an e-mail exchange, I'd be all for it, so at the very least I can present that option up front.
I was working at a dead-end IT job that was eating away at me because the company was a terrible fit, and I couldn't help but wonder if I could put my skills to better use than clearing paper jams and running mail merge reports. But the easy path was to just keep doing the same thing every day.
> Don't live another day unless you make it count
> There's someone else that you're supposed to be
> There's something deep inside of you that still wants out
> And shame on you if you don't set it free
And that was the day I quit.
That being said, I had the same reaction to the link at the bottom of that post; I recognize anything can be transformative to the right person at the right time, but I struggled to identify the message in an instrumental DJ set.
I'm fortunate enough to have worked for Google during a period of time when Big Tech started to gain at least a modicum of self-awareness of its toxic culture and history of excesses and indiscretions. I arrived on the scene slightly too late to witness the worst of it, but the stories were actively circulating, and the structure was very much still present. SRE teams had bars next to their desks, and office parties ended with ambulances. One of the first things I had to deal with as a new manager was a sexual harassment concern (which I was terribly unprepared to handle, and it showed). And if you looked around the office, you saw a lot of people who looked a hell of a lot like me.
But as I said, there was some awareness creeping in. Along with that, the folks in charge had the courage and empowerment to do something about it. And when I say the folks in charge, I don't mean the CEO. This was a company that was still running on a sort of quasi-anarchy of conscientious under-management: my first impression in 2013 was that there was no clear power structure, but everyone was trying to do the right thing and it somehow worked out. And most importantly, people could speak up if something didn't seem right.
There are many examples, but to pick one, I remember my first trip to Dublin and being invited to join their local SRE managers' meeting. I watched someone bring up the topic of alcohol being omnipresently displayed around the office and how it was, at a bare minimum, not a good look. There followed a thoughtful and reasoned discussion that concluded with the decision to put it away. Not a ban on fun, but a firm policy that, among others to follow, helped SRE culture mature into something more appropriate for a workplace, while maintaining the essential feeling of camaraderie and mutual support.
There were also top-down initiatives with varying degrees of success. When an executive puts something into OKRs, there's a good chance that by the time it reaches 13 levels down the org chart, it has turned into your manager demanding that you cut the ends off of 4.5% more roasts by the end of Q3 so they can show leadership on their promo packet. Nevertheless, there were a lot of good ideas, and a lot of good things were implemented. Through my job, I had access to training on topics like privilege and implicit bias that I believe have had a lasting positive impact on me as a person and as a leader. I also had access to people who thought about and fought about these things on a far deeper level than I will ever be able to, and I am grateful if even a sliver of their courage rubbed off on me.
It wasn't just a song and dance. At least down near the bottom, we cared, and we tried very hard to make things better. We failed a lot of the time as well, in the sense that those top-down targets that were set were rarely achieved, which I suspect is at least part of the reason for dropping them. They've tried nothing and they're out of ideas.
What we're seeing now is just more of the slide in the wrong direction that, unfortunately, started a while ago. Google in the mid-2010s was a place where people spoke up, to a fault. Yes, they complained about the candy dispensers running low or not having a puppy room, but they also told a senior vice president that he had been saying "you guys" a lot and do you know what happened? He thanked them, apologized, and corrected himself. Google in the 2020s is a place where you keep your mouth shut, sit down, and do what you're told. I don't know what it's like inside Meta, but I'm not surprised at this turn, because they're basically all following the same playbook, handed to them by Elon.
I'm embarrassed that I've hesitated to speak my mind because I am looking for a job, and what if someone reads this on my profile and decides I'm not a team player? Well, I'll say it clearly: I am on team "try to be a good person and do the right thing," and I am very much a team player. I believe that encouraging hate and dropping DEI goals is wrong. And if that makes me not a good fit for your organization, I think we're on the same page.
> they also told a senior vice president that he had been saying "you guys" a lot and do you know what happened? He thanked them, apologized, and corrected himself.
And you look back on this as a nostalgic memory? Something useful and productive?
More than 10 years, and the only major things are a nebulous sexual harassment concern (without any details, of course), booze, and “guys”. Remarkable achievement for the DEI crowd.
What a sad story: you can’t say “courage” or “allyship” anymore and get a promotion!