On the one hand: I'm a software engineer, I know none of the engineers involved wanted to put out a bad product or are happy about the situation. On the other hand, my Sonos system became almost unusable with the latest update. One would think that the ability to control the _volume_ is part of a speaker's MVP. It's now unbelievably laggy, when it works at all.
Agreed. And a point that keeps getting missed whenever this discussion comes up is that there's a big difference between junior and senior programmers. What should a junior programmer do? Write as much code as possible! KLOC is a pretty good measure of progress for junior programmers; how else do you get better at programming? To know when not to write a line of code, you have to know how to write the line of code in the first place.
That's one ideal, sure. I don't see it though -- who's going to keep all these individual driving pods clean? How is access going to be gated to them? SF is a dense city; only New York has higher population density. Its BART, Muni, and bus service isn't bad either -- and partly because it's such a pain to drive downtown, transit gets used, and being used keeps it safe.
Oh my goodness the lights! Am I misremembering or is it really only the past few years that it's felt like half the cars on the road have their highbeams on constantly? Not even a big car thing, though big cars make it worse; it's any car with LED headlights.
My car, and several of my recent rentals, have had automatic high beams. They're usually not great at avoiding blinding other drivers. Turning the feature off is one of the first things I figure out.
That said, yes, high hoods and higher, brighter lights make this a distinction without a difference.
I've honestly been asking myself if this is because of so many aging drivers who are trying to compensate for vision loss, when they should probably be staying off the roads at night? There can be a lot of denial in accepting one's reduced capacity.
Or, is it people who have become so accustomed to automatic-everything that they don't really know or think about the light controls? This could also explain the reverse problem of people driving around at night with only daytime running lights or parking lights.
Or, is it really just selfish jerks like those who want massive tanks to crush their opponents? They would like to give others sunburn, if possible. This could also explain those turning on fog lights in all conditions or those who attach off-road auxiliary lights to their trucks but use them on the streets.
I think it's automatic highbeams mostly. They either make the driver lazy or unaware of who they're blinding. Sorta like how it's a bit jarring to go back to a car with completely manual lighting after driving one that automatically flips on low beams when it gets dark.
I have the automatic high/low beam function and it works pretty well: it sees other cars from far away and switches to low beams earlier than a human would. The problem is I very rarely use high beams, and never on highways, so it was even stranger for me to see so many Americans drive with high beams.
In Europe the annoying habit is driving with front anti-fog lights on all the time; some people do it in the US too.
I don't get what's wrong with foglights. They don't blind nearly as much as a low beam in a corner (or a slightly misaligned one), but make it easier for others to see you.
Yes. Most implementations of matrix high beams are bad.
The VW Passat for example (and most derivatives of it) is straight up dangerous because the camera system doesn't work properly. The only way to make it detect my car is to blast it with my high beams (which are 50W incandescent bulbs, so the other person isn't totally blind).
Having to do that is absolutely ridiculous but unfortunately there won't be a recall if the customers aren't the ones who have to live with the problem.
How much of it was the advent of better technology that didn't require those supply chains? Iron, for instance. In which case the lesson is: have more and more complex supply chains, tie us all together, don't allow us to be independent, because civilisation _is_ that interdependence.
I tend to roll my eyes at a lot of the hype a fair bit, too, but the tricks that are getting media play aren't the only tricks this pony can do. They're just the ones generating most of the hype, because that's what captures people's imaginations. And understandably. The basic formula of futurism has always been and will always be, "Take X everyday thing and come up with a version of it that's fundamentally the same as how it works now, only with robots."
But it's also pretty darned good at doing things that might not be as popularly relatable. There are some teams reporting that GPT4 can outperform Mechanical Turk labelers for many corpus labeling tasks, for example. That might then significantly reduce the cost of developing NLP products that may or may not use a GPT-series network as part of the production implementation.
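To make the labeling idea concrete, here's a minimal sketch of using a chat model as a corpus labeler. The model name, prompt wording, and label set are my own illustrative assumptions, not anything the teams above reported; the key practical points are constraining the model to a fixed label vocabulary and normalizing whatever it replies with.

```python
# Illustrative only: the ALLOWED label set and prompt text are assumptions.
ALLOWED = {"positive", "negative", "neutral"}

def build_prompt(text: str) -> str:
    """Constrain the model to answer with exactly one allowed label."""
    return (
        "Label the sentiment of the following text. "
        f"Answer with exactly one word from {sorted(ALLOWED)}.\n\n"
        f"Text: {text}"
    )

def parse_label(reply: str) -> str:
    """Normalize a model reply; fall back to 'neutral' if it rambles."""
    word = reply.strip().lower().rstrip(".")
    return word if word in ALLOWED else "neutral"

def label_with_llm(text: str, client) -> str:
    # `client` is an openai.OpenAI() instance; this call needs an API key
    # and network access, so it's kept out of the pure helpers above.
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": build_prompt(text)}],
        temperature=0,  # reduce run-to-run variation in labels
    )
    return parse_label(resp.choices[0].message.content)
```

The normalization step matters in practice: even with a tightly worded prompt, models occasionally answer with punctuation, casing changes, or a full sentence, and silently dropping those rows skews the comparison against human labelers.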
ChatGPT and GPT4 can also do a pretty darned good job at producing synthetic data. Other methods may be able to produce higher-quality data, but typically at the cost of a much larger development effort. That could, for example, be useful for privacy-preserving machine learning where you want to develop a model that's tailored to a specific task, but you don't want to (or, if you're lucky enough to live and work in a sensible regulatory regime, maybe even can't) use actual sensitive data to train the model.
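A rough sketch of that synthetic-data workflow, with a made-up record schema (the field names here are my invention, purely for illustration): ask the model for JSON-lines output, then validate each line against the schema before it ever touches a training pipeline, since LLM output reliably includes some malformed or mistyped rows.

```python
import json

# Illustrative schema for hypothetical synthetic records; not from the comment.
SCHEMA = {"age": int, "diagnosis_code": str, "readmitted": bool}

def generation_prompt(n: int) -> str:
    """Prompt the model for n records, one JSON object per line."""
    return (
        f"Generate {n} synthetic patient records, one JSON object per line, "
        f"with exactly the keys {sorted(SCHEMA)}. Invent plausible values; "
        "do not reproduce any real individual's data."
    )

def parse_records(reply: str) -> list:
    """Keep only lines that parse as JSON and match the expected schema."""
    records = []
    for line in reply.splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            rec = json.loads(line)
        except json.JSONDecodeError:
            continue  # model chatter, markdown fences, etc.
        if set(rec) == set(SCHEMA) and all(
            isinstance(rec[k], t) for k, t in SCHEMA.items()
        ):
            records.append(rec)
    return records
```

Worth noting that "synthetic" here is not automatically "private": a model prompted carelessly can regurgitate memorized training text, so the validation step is a filter for shape, not a privacy guarantee.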
The underlying embedding model may or may not outperform BERT for specific tasks. I haven't poked at that issue much, but it wouldn't surprise me.
One of the best uses for GPT right now, before the door is slammed shut, is using it for bootstrapping services that require a lot of text content (using GPT to solve some of the chicken or the egg / two-sided momentum problem early on).
If you wanted to jumpstart a competitor to Goodreads for example, GPT can do elaborate book summaries and character discussion (etc). Use that to get a foundation of content (so the first users don't arrive to a nearly empty shell of a site).
This won't be possible for much longer: bots like GPT will become drastically more difficult to build (regulation, content restrictions, and so on), and also likely more expensive to use for valuable purposes (they'll maximize its commercial value, once they fully understand how to bracket all of that and extract properly). GPT got out of the barn before the door was closed (before most realized the door was even open), and a few more might make it out before the vast web of restrictions sets in (whether content services like Reddit & Stack trying to block use for building services like GPT, or big media empires doing the same, or image/video owners trying to preserve the value of their works, or governments regulating, and so on).
GPT is in the early wild like Uber was, before governments around the world locked down on that premise, which created a captured market in many locations (whether by Uber or a local competitor that beat Uber there). That open situation never lasts. And when other (financially interested) entities see something like GPT's commercial exploitation potential, the barriers go up, they all want a part of what's possible. There won't be many GPTs in fact, for the exact same reason there aren't many Ubers. There will be a few prominent generalized gigantic bots/services that were early (were able to be built up before the lockdown of content access & consequences changed); and then there will be a lot of highly focused, niche bots that splinter the market over time and do more narrow things extraordinarily well. It'll also sort of follow the search market in this regard.
> If you wanted to jumpstart a competitor to Goodreads for example, GPT can do elaborate book summaries and character discussion (etc)
This sort of plan seems just slightly above the level of SEO spam. I've played with ChatGPT book summaries and they're terrible in the sense of being very banal as well as being full of errors.
Plus a site that is a compendium of answers ChatGPT/etc gives to questions would have to be a poor-quality site, given that people can just ask the questions directly to ChatGPT.
If you mean an open source alternative to GPT (and similar) - a few of these early bots will have quasi unfair first mover advantages, in being built up before the restrictions are all put into place by all the various interested entities (which is a big list, from celebrities to copyright owners to big media to corporations to governments and everything nearby or in between). The GPTs of the world will have the money to pay said interested parties after the fact, to preserve their creation (ala YouTube/Google).
And after the restrictions are put into place, the highly focused (narrow) GPTs will be backed by large piles of venture capital money (which will be necessary, especially in medicine, law, and other similar high value, high regulation, high risk fields). It'll end up costing a lot of money to build the highly focused GPT type services (and they'll outperform the generalized services at what they do; just as Pinterest / Linkedin / Twitter / etc have been better at what they specifically do than Facebook can ever be as a generalized service; and for the same reason Google can never kill Wolfram).
> High risk would be reading legal documents and recommending what the lawyer should do.
Explored this myself, and it's a definite no. GPTs cannot do this reliably (we've tested multiple LLMs). Simply put, it's because it's a rule-extraction problem, and not a generative problem, and rule-extraction situations are typically too sensitive for a "99%" solution like LLMs. Basically, that 1% failure rate is enough to screw any gains you get from the 99% successes. For years, all I've ever heard from the DS space was to always be mindful that the model you apply to a problem fits the nature of the problem. The past few months, this has somehow become a lost wisdom. "LLMs for everything"
For this reason, HN comments lately have had my pupils permanently fixated to the top of my eye sockets.
Ironically I don't think it's a particularly good copywriter unless you're specifically aiming for cheesy ad style or the utterly generic; it's just that standards are low, and lots of copy isn't really written for humans anyway. We just care more about, and have fewer pre-written examples of, other kinds of writing...
(its real strength as a writer over humans, other than being low effort, is niche stuff like composing entire sentences with alliteration)
I have too; the three I picked randomly didn't work. Because, as with every other query, there is no repeatability: you can ask it the same thing every day and get different answers for the exact same prompt.
sure, when I was growing up my mom drove me and my four siblings around in a Suburban. I understand the baseline desire to signal virtue with regard to large motor vehicles, but even a brief glance at the photo in the article should be enough for any sane person to see that driving more than a single child around in such a vehicle is blatantly untenable.
downvoters might also be surprised at how safe large motor vehicles are for transporting children, especially in areas with spontaneous, often-unavoidable wildlife crossing. try hitting a mule deer crossing the road in the dark in one of those little trucks and see how many kids riding (on the flatbed??) survive.
Who knows! Maybe? The email that's written by ChatGPT, but no human actually reads it because at the other end it's summarised by ChatGPT too: maybe that'll be it. LLMs almost but not quite cancelling out the meaningless guff they add to the world.
It can both be true that a) LLMs and their successors will be the sort of huge multiplier for intellectual pursuits that _machinery_ is for physical work; and also b) they are AWFUL for education, and will result in the general population losing skills.
Right? I'd have been entirely unsurprised if Bing saw a 100% increase, given how buzzy all this has been. (Finally, an actual reason to use Bing!) 15% seems like a failure.