I do understand why it makes it very hard to compare but it's certainly not meaningless. Google's AI overviews are pretty much the only way that I use AI.
I mean, we're all talking about how Google is 'catching up' to and 'taking over' OpenAI, right? In that case, it genuinely is meaningless. AI Overviews, even if it had the usage OP assumes, is not a threat to OpenAI or ChatGPT. People use ChatGPT for a lot of different things, and AI Overviews only handles (rather poorly, in my opinion) a small, limited part of the kinds of things it gets used for. I use AI Mode a lot. It's better than Overviews in every conceivable way, and it's still not a ChatGPT replacement.
I guess I use Google's AI that's built into their search engine the same way I've used ChatGPT or Claude. I ask it a question and it provides me an answer. I can then interact with it further if I want or look through the rest of the search results.
I still miss it. Worse, they didn't just remove the hardware on newer models; older models that did have the hardware had the functionality removed overnight by an iOS update. If I recall, it was over some licensing/patent dispute. (Plus, the feature itself was somewhat polarizing; not everyone found it intuitive.)
What water are they testing? The drinking water on Alaska, for example, is Boxed Water. I'm not sure if that's what they use for coffee and tea, but they didn't actually mention testing the coffee or tea (that I could find).
They do not use bottled (or boxed) water for coffee.
That comes from the coffee machine built into the galley, which uses the aircraft’s onboard potable water tanks.
Those tanks are filled from a hose by the ground crew during refueling.
(At least for major US airlines. I understand some other carriers serve instant coffee packets. Even then, the hot water still comes from the aircraft tanks.)
I wonder how Air Canada reconciles this. There was a popular Globe and Mail article a while ago that gave awful rankings to Air Canada's water tanks -- so the company put up signs in the bathroom saying the water is non-potable and called it a day.
Not super comforting if they're then using the same 'non-potable' water to make coffee...
Is there any reason to expect there would be "toxins", given that it's just water? I can imagine how there might be accumulated toxins if it's a pack of chicken breasts left in a hot car for 8 hours, but if it's water, it should be fine? After all, boiling water is a tried-and-true way of making water safe to drink.
Yes, there are substances that slip through, but it works well enough for most cases that it's probably fine. Otherwise you get into weird edge cases like "what if there are prions in the water?!?" or whatever.
Completely orthogonal -- I absolutely can't stand the taste of the "Boxed Water" Alaska uses. I swear I can taste the cardboard or whatever they use to package it. I always bring my own water instead.
About 10 years ago I was working in Manhattan and I was walking down 42nd towards the train station after work. I looked off towards the sunset and thought "Wow, it's setting right between all the buildings." Then I looked the other way and saw dozens of people taking photos. I had accidentally seen Manhattanhenge and probably ruined a few photos.
> The bottleneck isn’t code production, it is judgment.
It always surprises me that this isn't obvious to everyone. If AI wrote 100% of the code that I do at work, I wouldn't get any more work done because writing the code is usually the easy part.
I'm retired now, but I spent many hours writing and debugging code during my career. I believed that implementing features was what I was being paid to do. I was proud of fixing difficult bugs.
A shift to not writing code (which is apparently sometimes possible now) and managing AI agents instead is a pretty major industry change.
Anything you do with AI is improved if you're able to traverse the stack. There's no situation where knowing how to code won't put you above peers who don't.
It's like how every job requires math if you make it far enough.
You'd be surprised by the number of people who do not know this. Klarna is probably the most popular example: the CEO was all about creating more code, then fired everyone, only to regret it later.
Klarna, now there's a company that seems to have no idea what direction it's going in. In the past month, they've announced they're going to be at the forefront of Agentic AI for merchants so... agents can figure out what merchants are selling? They're somehow offering stablecoins to institutional investors to use USDC to extend loans to Klarna? And then they're starting some kind of credit-card rewards program with access to airline lounges?
I think it depends on the sort of work you do. We had a HubSpot integration that hadn't been touched for three years break. Probably because someone at HubSpot sunset their v1 API a few weeks too early... Our internal AI tool, which I've built my own agents on, updated our data transfer service to use the v3 API. It also added typing, but kept the rather insane way of delivering the data since... well... since it's worked fine for 3 years. It's still not a great piece of software that runs for us. It's better now than it was yesterday, though, and it'll now go back to just delivering business value in its extremely imperfect form.
All I had to do was write a two-line prompt and accept the pull request. It probably took 10 minutes out of my day, most of which was the people I was helping explaining what they thought was wrong. It might've taken me all day if I'd had to go through all the code and the documentation and fix it myself. It might even have taken a couple of days, because I probably would've made it less insane.
For other tasks, like when I'm working on embedded software using AI would slow me down significantly. Except when the specifications are in German.
I'll stare at a blank editor for an hour with three different solutions in my head that I could implement, and type nothing until a good enough one comes to mind that will save/avoid time and trouble down the road. That last solution is not best for any simple reason like algorithmic complexity or anything that can be scraped from web sites.
No shade on your skills, but for most problems, this is already false; the solutions have already been scraped.
All OSS has been ingested, and all the discussion in forums like this about it, and the personal blog posts and newsletters about it; and the bug tracking; and the pull requests, and...
and training etc. is only going to get better at filtering out what is "best."
A vast majority of the problems I'm asked to solve at work do not have open-source code I can simply copy or discussion forums that have already decided the best answer. Enterprise customers rarely put that stuff out there. Even if they did, it doesn't account for the environment the solution sits in, possible future integrations, off-the-wall requests from the boss, or knowing that internal customer X is going to want some other wacky thing, so we need to make life easy on our future selves.
At best, what I find online are basic day-1 tutorials and proof-of-concept stuff. None of it could be used in production, where we actually need to handle errors and possible failure situations.
Obviously novel problems require novel solutions, but the vast majority of software solutions are remixes of existing methods. I don’t know your work so I may be wrong in this specific case, but there are a vanishingly small number of people pushing forward the envelope of human knowledge on a day-to-day basis.
My company (and others in the same sector) depends on certain proprietary enterprise software that has literally no publicly available API documentation online, anywhere.
There is barely anything that qualifies as documentation that they are willing to provide under NDA for lock-in reasons/laziness (ERPish sort of thing narrowly designed for the specific sector, and more or less in a duopoly).
The difficulty in developing solutions is 95% understanding business processes/requirements. I suspect this kind of thing becomes more common the further you get from a "software company" into specific industry niches.
The point is that the best solution is based on specific context of my situation and the right judgment couldn't be known by anyone outside of my team/org.
Sometimes people who don't work in software seem surprised that I don't type faster than I do given my line of work, and I explain to them that typing speed is never the bottleneck in the work that I do. I don't pretend to know for sure if this holds true for every possible software job but it's not a concept I've seen surprise many software engineers. This almost seems like the next level of that; they certainly do more than just write code I want faster, but except for problems where I have trouble figuring out how to express what I want in code, they're not necessarily the solution to any problem I have.
If they could write exactly what I wanted, but faster, I'd probably stop writing code any other way at all, because that would be a free win with no downside, even if the win were small! They don't write exactly what I want, though, so the tradeoff is whether the time they save me writing is lost to the extra time spent debugging code they wrote rather than my own. It's not clear to me that the code produced by an LLM right now is close enough to correct, often enough, for this to be a net increase in efficiency for me. Most of the arguments I've seen for why I might want to invest more of my own time into learning these tools seem to be based on extrapolating trends up to this point, and it's still not clear to me that they'll become good enough to reach a positive ROI for me any time soon. Maybe if the effort to start using them more heavily were lower, I'd be willing to try, but from what I can tell, it would take a decent amount of work just to get to the point where I'm producing anything close to what I'm currently producing, and I don't really see the point of doing that while it's still an open question whether they will ever close the remaining gap.
> I explain to them that typing speed is never the bottleneck in the work that I do.
Never is a very strong word. I'm not a terribly fast typist but I intentionally trained to be faster because at times I wanted to whip out some stuff and the thought of typing it all out just annoyed me since it took too long. I think typing speed matters and saying it doesn't is a lie. At the very least if you have a faster baseline then typing stuff is more relaxing instead of just a chore.
An often-repeated point on this forum: a lot of our comms are text. You don't want to stall and lose attention during comms. It makes sense to train until the flow is on autopilot.
I've never had an issue where communication with my team has been hindered due to my typing speed not being high enough. If anything, I've been in plenty of text-based communications where it might have been beneficial for everyone to slow down a bit in how quickly they were sending messages in favor of more thoughtfully reading through everything before responding.
How many hours per week did you spend coding on your most recent project? If you could do something else during that time, and the code still got written, what would you do?
Or are you saying that you believe you can't get that code written without spending an equivalent amount of time describing your judgments?
"Writing code" is not the goal. The goal is to design a coherent logical system that achieves some goal. So the practice of programming is in thinking hard about what goal I want to achieve, then thinking about the sort of logical system that I could design that would allow me to verifiably achieve that goal, then actually banging out the code that implements the abstract logical system that I have in my head, then iterating to refine both the abstract system and its implementation. And as a result of being the one who produced the code, I have certainty that the code implements the system I have in mind, and that the system it represents is fit for the purpose of achieving the original goals.
So reducing the part where I go from abstract system to concrete implementation only saves me time spent typing, while at the same time decoupling me from understanding whether the code actually implements the system I have in mind. To recover that coupling, I need to read the code and understand what it does, which is often slower than just typing it myself.
And to even express the system to the code generator in the first place still requires me to mentally bridge the gap between the goal and the system that will achieve that goal, so it doesn't save me any time there.
The exceptions are things where I literally don't care whether the outputs are actually correct, or they're things that I can rely on external tools to verify (e.g. generating conformance tests), or they're tiny boilerplate autocomplete snippets that aren't trying to do anything subtle or interesting.
The actual act of typing code into a text editor and building it could be the least interesting and least valuable part of software development. A developer who sees their job as "writing code" or a company leader who sees engineers' jobs as "writing code" is totally missing where the value is created.
Yes, there is artistry, craftsmanship, and "beautiful code" which shouldn't be overlooked. But I believe that beautiful code comes from solid ideas, and that ugly code comes from flawed ideas. So, as long as the (human-constructed) idea is good, the code (whether it is human-typed or AI-generated) should end up beautiful.
Where's the beautiful human-generated code? There's the IOCCC, but that's the only competition I know of that judges the code itself, and it's not even a beauty pageant. There's some demoscene stuff, which is more of a golf thing. There are random one-offs, like the fast inverse square root (famously not actually Carmack's) or Duff's device, but other than that, where are the good code beauty pageants?
In my experience (and especially at my current job) bottlenecks are more often organizational than technical. I spend a lot of time waiting for others to make decisions before I can actually proceed with any work.
My judgement is built into the time it takes me to code. I think I would be spending the same amount of time on that judgement while reviewing the AI's code to make sure it isn't doing something silly (even if it does technically work).
A friend of mine recently switched jobs from Amazon to a small AI startup where he uses AI heavily to write code. He says it's improved his productivity 5x, but I don't really think that's the AI. I think it's (mostly) the lack of bureaucracy in his small 2 or 3 person company.
I'm very dubious about claims that AI can improve productivity so much because that just hasn't been my experience. Maybe I'm just bad at using it.
Does voice transcription count as AI? I'm an okay typer, but being able to talk to my computer, in English, is definitely part of the productivity speed up for me. Even though it struggles to do css because css is the devil, being able to yell at my computer and have it actually do things is cathartic in ways I never thought possible.
All you did was change the programming language from (say) Python to English. One is designed to be a programming language, with few ambiguities, etc. The other is, well, English.
Speed of typing code is not all that different than the speed of typing English, even accounting for the volume expansion of English -> <favorite programming language>. And then, of course, there is the new extra cost of then reading and understanding whatever code the AI wrote.
The thing about this metaphor that people don't ever seem to complete is:
Okay, you've switched to English. The speed of typing the actual tokens is just about the same but...
The standard library is FUCKING HUGE!
Every concept that you have ever read about? Every professional term, every weird thing that gestures at a whole chunk of complexity/functionality ...
Now, if I say something to my LLM like:
> Consider the dimensional twins problem -- how're we gonna differentiate torque from energy here?
I'm able to ... "from physics import Torque, Energy, dimensional_analysis"
And that part of the stdlib was written in 1922 by Bridgman!
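To make the metaphor concrete, here's a toy sketch (a hypothetical illustration, not a real `physics` module): dimensional analysis tracks base-unit exponents, and it shows exactly why torque and energy are "twins" -- both reduce to kg·m²/s², so dimensions alone can't tell them apart.

```python
# Toy dimensional analysis: dimensions as dicts of base-unit exponents
# {M: mass, L: length, T: time}. Hypothetical code, not a real library.

def mul(a, b):
    """Dimensions of a product: add the exponents, drop zeros."""
    out = dict(a)
    for base, exp in b.items():
        out[base] = out.get(base, 0) + exp
    return {base: exp for base, exp in out.items() if exp != 0}

FORCE = {"M": 1, "L": 1, "T": -2}   # newton = kg*m/s^2
LENGTH = {"L": 1}                   # metre

TORQUE = mul(FORCE, LENGTH)         # force x lever arm
ENERGY = mul(FORCE, LENGTH)         # force x displacement

# Both come out as kg*m^2/s^2 -- the "dimensional twins" problem:
print(TORQUE == ENERGY)  # True
```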
And extremely buggy, and impossible to debug, and does not accept or fix bug reports.
AI is like an extremely enthusiastic junior engineer that never learns or improves in any way based on your feedback.
I love working with junior engineers. One of the best parts about working with junior engineers is that they learn and become progressively more experienced as time goes on. AI doesn't.
People need to decide whether their counter to AI making programmers obsolete is "current-generation AI is buggy, and this will not improve before I retire" or "I only spend 5% of my time coding, so it doesn't matter if AI can instantly replace my coding".
And come on: AI definitely will become better as time goes on.
Exactly. LLMs are faster for me when I don't care too much about the exact form the functionality takes. If I want precise results, I end up using more natural language to direct the LLM than it takes if I just write that part of the code myself.
I guess we find out which software products just need to be 'good enough' and which need to match the vision precisely.
> Or are you saying that you believe you can't get that code written without spending an equivalent amount of time describing your judgments?
It’s sort of the opposite: You don’t get to the proper judgement without playing through the possibilities in your mind, part of which is accomplished by spending time coding.
I think OP is closer to the latter. How I typically have been using Copilot is as a faster autocomplete that I read and tweak before moving on. Too many years of struggling to describe a task to Siri left me deciding “I’ll just show it what I want” rather than tell.
Even without location they can get a pretty good idea of your location from your IP address or any other signals. Their neighbor also might have allowed access to their phonebook or something like that to make the connection obvious.
Cell tower data is readily available for a modest price. It's not hard to triangulate someone with "good enough" accuracy for marketing purposes.
Also, the world is filled with millions of Bluetooth-logging devices. They're everywhere from department stores (to monitor foot traffic) to the side of the road (to monitor traffic speed).
Reading cell towers is also supposed to be behind the location-tracking permission. So is Bluetooth, by the way, which is why so many apps need this permission these days just to link a BLE device.
And tracking Bluetooth emissions shouldn't matter, as MAC addresses are randomised while not in an active connection.
I use voice typing for almost the same thing every day.
I run to/from daycare to drop off my son and I title the run "Daycare drop-off". It constantly types "Take care drop-off" which drives me nuts. Those words don't even make sense together. A simple Markov chain should do better.
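For what it's worth, even a toy bigram model over a user's own past run titles makes the point (the training history below is hypothetical, just a sketch):

```python
# Toy bigram model: score a transcription candidate by how many of its
# word pairs have appeared in this user's own past run titles.
from collections import Counter

history = ["daycare drop-off", "morning run", "daycare drop-off",
           "evening run", "daycare drop-off"]

bigrams = Counter()
for title in history:
    words = ["<s>"] + title.split()
    bigrams.update(zip(words, words[1:]))

def score(title):
    """Sum of counts of the candidate's bigrams in the user's history."""
    words = ["<s>"] + title.split()
    return sum(bigrams[pair] for pair in zip(words, words[1:]))

print(score("daycare drop-off"))    # 6: both bigrams seen three times each
print(score("take care drop-off"))  # 0: never appears in this history
```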