Searching the web is a great feature in theory, but every implementation I've used so far looks at the top X hits and then interprets them as the correct answer.
When you're talking to an LLM about popular topics or common errors, the top results are often just blogspam or unresolved forum posts, so you never get an answer to your problem.
It's more an indicator that web search is less usable than ever, but it's interesting that it affects the performance of generative systems nonetheless.
The longer I've been in the workforce, the more I realize most humans actually kind of suck at their jobs. LLMs being more human-like is the opposite of what I want.
That could very well be because the jobs are effectively useless. By no means does that mean the people are useless, nor that what the income allows them to do is useless. But most jobs sure do seem pointless.
One suggested weakness of UBI is a lack of purpose. I wonder if the "solution" is somewhat as you implied: jobs without a strict return on investment. You get your stipend, but you're keeping your block clean by sweeping and mulching. They're getting theirs in exchange for cranking out sourdough at cost for the neighbourhood. Someone else gardens for elderly residents.
Not UBI per se, but something like this exists in rural parts of Southern Spain, called the Rural Employment Plan (PER by its Spanish initials).
They give simple jobs, like cleaning or painting, to people at the bottom of the earnings scale. Most people in that plan have little formal education, like those who left school in their mid teens.
More like a labor subsidy, backed by taxes... which would need a minimum wage law as well.
This seems like a great idea to me! Making it cheaper for businesses to hire people for these jobs would lower prices for everyone, improving accessibility of the services.
How would this help lower prices? The taxes have to be paid for by someone, and that cost should largely end up landing on the consumer.
It seems like we'd be changing whose hands the money moves through, but it still has to be paid for one way or another. If that's the case, we'd risk higher prices, since taxes have to subsidize prices and cover all the costs of running the program in the first place.
Tax the rich, and use the funds to pay a portion of the wages in targeted jobs, reducing the amount that the business has to pay to hit minimum wage. Then businesses continue competing on prices, but have substantially lower labor costs, bringing down prices for everyone.
In the end, you use money from the rich to pay for socially beneficial jobs. Exactly the sort of thing government is for: ensuring that social goods are provided.
That's an extremely complex economic change, I wouldn't be so certain we know exactly what would happen.
Taxing the rich can have unintended consequences. First you have to change the tax code so they actually get taxed and can't dodge it; those rules alone would be difficult to write effectively and would likely mean changing other parts of our tax code that impact everyone. If the rich do get taxed enough to cover a good chunk of wages, demand for luxury items would go down, and so would the jobs that make those products and services.
Once subsidized by a UBI, at best workers will continue to work at the same levels they do now. There will be an incentive for them to work less though, potentially driving up the labor costs you are trying to reduce. How do we accurately predict how many workers will reduce their hours or leave the workforce entirely? And how do we predict what that would do to prices?
The idea of taxing the rich to bail out everyone else is too often boiled down to a simple lever that, when pulled, magically fixes everything without any risk of unintended side effects.
But the idea of not changing the tax code because it might affect others, continuing to let the rich pay 0 taxes, is foolish.
There's an obvious wealth gap that's increasing, and the people up top are getting even less oversight as we speak. As you say in your post, you don't know what the effects will be because it's not simple. But I see no compelling reason to continue with the oligarchy.
Sure, that would be foolish; my point wasn't that taxes should remain as-is forever though.
My point was that we can change taxes to a system that we think will work better today, but we can't claim to know what the actual results will be years from now.
The claim made earlier in the chain was that taxing the rich to subsidize wages would lower labor costs and lower prices. I don't think we can ever know well enough how a broad-reaching change will land, and claiming to know prices will go down isn't reasonable.
That's just a cultural bias blind spot. It can be easily cured by finding a child, pointing your finger at them then say the magic words: "You must feel useless without a job!"
A much more terrible issue we suffer from already is that without participating we forget how our civilization works. Having a job gives you at least a tiny bit of insight that may partially map to other jobs.
Funny, because lack of purpose is exactly the problem with monotonous shit jobs. Compared to being able to freely choose to do something that's meaningful to you and brings you joy. Merely being able to afford food and shelter is not a purpose. It's survival.
Oh but don’t worry I’m sure all the people who imagine these schemes assume they’ll be the ones who aren’t obsolete and forced to work menial jobs.
Very similar to how ultra hard core libertarians assume they’ll be the ones at the top of the food chain calling the shots and not be just another peasant.
But it doesn't really matter, because there is no way in hell any of these LLMs will uproot all of society. I use LLMs all the time, they are amazing, but they aren't gonna replace many jobs at all. They just aren't capable of that.
If we come to our senses it should be obvious that everyone needs to be physically active at least a few days per week, we need to condition brain plasticity, and we have to keep learning new things.
The available work offers the entire spectrum but we have to divide and plan it.
I'm sure I'm overidealizing, but I've wanted to live off grid, or maybe in a small community.
I watch these historical farm documentary tv shows, and they show how everyone in a town had a purpose and worked together, the blacksmith, the tile maker.
And I do often think the limiting factor to a life like this is the "market". If you could create these communities, you could be an artist/artisan/builder without strictly having to worry about making enough to live.
I met someone recently who lived in the Galapagos islands, and she seemed to sort of live this community-oriented, trading, anarchocapitalist lifestyle. I think most people would be happier if their small capitalist or socialist community involved direct interaction with people rather than dealing with soulless corpos all the time.
I've lived off-grid for three long summers (late spring to early fall). It's tremendous work. Most of the same systems exist; it's just that one has to research, design, build, operate, maintain, and revise them instead of somebody else doing all that. Everybody has different goals, but for me, maintaining my own potable water system is not a goal or something I'm interested in. Living off-grid did change my perspective on some things. For example, I know now that I produce about a 4-gallon bucket of poop each month, and yet my house has a tremendous sewer connection.
Let people choose if they want to do something, but have a concerted effort to encourage/suggest things that might give them purpose and build a community. Leave them to decide their hours and effort. Maybe someone wants to clean the gutters for their entire block at 6am and then go tinker in the shed for half the day. I'm sure that sounds really lazy, but this concept is working up from a default UBI that is pay-for-no-job.
I can imagine loads of tasks or jobs that would be quite pleasant if it weren't for stressing over efficiency or business admin.
Nobody is going to choose to be a ditch digger without a financial incentive. Most jobs worth doing are unpleasant or difficult. That's why people pay for the labor!
I mean think about it…when was the last time you heard of charity gutter cleaning services? People would much rather enjoy their leisure time on hobbies or with family/friends.
Why would there not still be gutter cleaning or ditch digging companies? Or people cleaning their own gutters? I'm not familiar with UBI proposals that do away with traditional enterprises; it's generally suggested as raising the floor. People would have more time to clean their own gutters or use the money they receive to pay someone else.
In terms of charity cleaning services, there are people who clean hoarders' houses or landscape unruly yards for free on YouTube... ;)
I'm talking about gutters on the street, beside the kerb. I thought this was implied after I said "keeping your block clean by sweeping and mulching". You routinely see older people in Asia sweeping and raking a communal area if you get up early to walk. There's a (probably obsessive-compulsive) 60 yo guy a few houses down from me in Australia who might've retired early and now goes around raking verges and cleaning the footpath/gutters meticulously. Near my office, there's a woman who bakes bread for the joy of it and sells it at-cost via an honour-box in a sidestreet. She also turns verges and front yards (with owners' permissions) into a community vegetable garden. If others were given an opportunity equivalent to early retirement, these sorts of things might be more common.
As for why: for purpose, for praise, for community, for mental health, for trade/contribution, for skill building, etc. Loads of examples of this already. Maybe none of these things are attractive to you but I don't think that's universal.
Like I said, it's just trying to add to the default UBI, not getting everyone volunteering in their community or else.
Most retirees, early or not, do not contribute to society with their labor nearly as much as they did during their working years. What makes us think that UBI beneficiaries would be any different?
Right! So everyone would choose to pursue passions/interests/leisure. We would be going into debt with no meaningful benefit to the taxpayer. Direct malinvestment.
This is drawing a line between "us" (tax paying citizens + the government) and "them" (people on benefits). I don't think it's that simple.
I imagine just like with existing benefits, the majority of people wouldn't feel great about being on UBI doing nothing, and they would pursue something that gives them a better social standing, a better sense of purpose, a good challenge, whatever motivates an individual. It's why lots of people do volunteer work, work on important open source software, and so on. Sure, there's outliers that actually proudly slack off, but you don't address specific problems with generic solutions.
But more importantly, having the _option_ to fall back on benefits means pursuing one's talents requires taking on less risk, and people would likely be of more value to society than if they did whatever puts food on the table today. Case in point: people born into a family that can finance them through college are more likely to become engineers than people born into poor households. On the flip side, some people take white-collar jobs over something like being a medic to uphold the standard of living a higher salary provides, not out of preference.
I think it would need careful management, but I believe there's every reason to be optimistic.
UBI isn't even needed if there's just universal housing, medical care, food and education. People will find enough work to get the rest, even if it's through barter.
Okay…now that we agree that UBI won’t produce any meaningful labor. What benefit do we get out of the trillions of dollars of debt we’d be accumulating?
It’s a classic economic blunder that dictatorships love to make:
1. Create money & rack up debt.
2. Produce nothing.
3. Create inflationary crisis and exacerbate wealth inequality.
4. Highlight your good intentions and relish your new position as champion of the people.
Isn’t the investment to avoid a revolution? To avoid those that cannot find work from dismantling and tearing down everything around them so they can get what they need. Some might consider that to be a benefit to taxpayers and not a poor investment.
Free money never works. It’s been attempted countless times. In fact, it exacerbates the wealth gap as the rich own assets that scale with inflation while the poor do not.
No, you just live in a bubble of smart and really driven people.
The vast majority of people's passions are partying, sex, alcohol/drugs, watching sports, gossiping, generally wasting time. Things that mostly don't create anything of value.
This whole line of thought to me is embarrassingly clueless, naive and basically childish.
It is just mind blowing to me how smart people can't see what a bubble they live in.
I almost suspect, the higher a person's IQ, the more susceptible they are to living in a bubble that basically has nothing to do with the majority of people with an IQ of 100.
How do you make sure that enough people want to do the necessary jobs?
And why do you need money at all in that scenario, at least for the basic items the UBI intends to make affordable to all? Why not just make them free and available to everyone?
No UBI proposal I'm aware of proposes UBI replaces salaries or is high enough to satisfy everyone. The "B" is for basic. Most people are not satisfied with earning a basic salary.
I was very surprised during the pandemic response to see how many people were happy to take government checks plus unemployment rather than working.
I know a few people with small businesses in various manufacturing industries. They all had a really hard time finding enough people to work while stimulus checks were going out.
People wouldn't make quite as much, but they were happy to stay home and have the basics for "free" rather than have a job.
Historically, jobs and professions always formed around the intrinsic motivation of the person working and the needs of the society around that person.
So you could become a poet, but if you didn't write poems that people liked, you would starve. Or you could become a farmer, provide the best apples in your city, and earn a more than deserved income.
That's why free economies have developed historically so much better than any centrally planned economy.
No we don't. We have too many people who -- even despite having respectable jobs -- can't afford the basic necessities for the month, let alone save for their future and family. The problem they're facing is the lack of the guaranteed basic income, not the lack of a job to collect it.
> I can not believe this was voted down. It is simply an assertion of fact. Whether true or not, seems reasonable and most people would agree with it.
If it was voted down, I'm guessing it was because to the extent that it's a fact, it's trivially true, and there's nothing insightful about the defeatist take. It's possible to do more harm than good doing pretty much anything. And the world is littered with problems that are not "fully solvable" but that we've mitigated greatly.
I wasn't here to take a stance on UBI or argue over its practicalities, I was just explaining the intended outcome was not what the parent believed it to be.
But fine, I'll bite.
> will go up roughly in line with that
Could you at least explain the logic that you believe implies this would occur with such certainty? I've thought about this before and I couldn't see this as a necessary outcome, though (depending on various factors) I do see it as a possible one.
That doesn't follow. It's a reason to believe prices will increase, not that prices will increase roughly in line with the income increase. This distinction is not a minor detail, it's pretty crucial. If you give people $3k and the prices go up by $2k... that's a very different scenario from one where the prices go up by $3k.
As long as we’re in a deficit, spending for this program would directly increase the money supply. Of course there are other factors like velocity of money and elasticity of good/services but at the end of the day we’re increasing the amount of money (aka cash + credit) with no change to supply AND we’re going into debt to do it.
Capitalism is based on, among other things, an expectation that free markets are pretty good at balancing out in the long run. If demand goes up only because access to money goes up, prices will rise.
Any increase in supply over time will eat up some of that price fluctuation, but for most products prices are more flexible than supply and a majority share of any capital increase will go towards prices rather than supply.
> a majority share of any capital increase will go towards prices rather than supply
You actually made my point, I think: that the price increase need not necessarily be "roughly in line with that", but could be less.
This distinction is absolutely critical. Like I said in [1], if you put $3k in my pocket, and my expenses increase by $2k, that's a very different situation from if my expenses grow by $3k. It would mean there is a reachable equilibrium.
When I said "in line" I didn't mean 1:1 or 100%. I may have picked a bad phrase there; I was intending to say that there would be a strong correlation between the two and that a majority of the extra money would go towards price increases.
I forget the general rule when it comes to companies, but there's a typical percentage of a cost increase that gets passed on to consumers. If a company's tax rate goes up by 10%, something like 8% of that is passed on to the consumer through price increases. I'd expect something similar with a UBI.
> When I said "in line" I didn't mean 1:1 or 100%. I may have picked a bad phrase there; I was intending to say that there would be a strong correlation between the two and that a majority of the extra money would go towards price increases.
If so, then explain how you're making the jump from "prices increase some" to "you would need Marx style price controls" or "otherwise UBI will fail to cover the necessities"? If you give me $X and I spend $X * r of it due to price increases, and r < 1, then don't I have (1 - r) * $X left in my pocket, meaning it could be made large enough to cover the basic necessities? This isn't complicated math.
I don't get why "prices increase" is seen as such a mic-drop phrase that shows the system would fall apart. Prices already increase for all sorts of reasons, it's not like the economy falls apart every time or we somehow add Marx style price controls every time. Sure, prices increase some here too. And then what? The sky falls?
I don't see price increases as a mic drop, and I don't mean to use them that way. As far as I can tell it's just an inevitability with anything like a UBI.
With regards to my claim that we'd need strong price controls: a UBI needs prices for the basics to remain stable. I won't go down the road of trying to define what "the basics" are here; that's a huge rabbit hole, so let's just leave it at the broad category in general.
If everyone can afford the basics, there is more demand for those items. Supply will likely increase eventually and eat up part of the demand increase, but the rest goes to prices. When those prices go up, the UBI would have to increase to match. The whole cycle would go on in a loop unless there's some lever for the government to control the prices of anything deemed a basic necessity.
> When those prices go up, the UBI would have to increase to match. The whole cycle would go on in a loop unless there's some lever for the government to control the prices
No. Just because something keeps increasing doesn't mean it won't stabilize. Asymptotes, limits, and convergence are also a thing. You're making strong divergence claims that don't follow from your assumptions.
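To make the convergence point concrete, here's a toy model (the numbers are mine, purely illustrative): suppose every extra dollar of UBI leaks a constant fraction r < 1 into price increases, and the UBI is topped up each round to compensate for exactly that loss. The total payout is a geometric series:

    $X + r*$X + r^2*$X + ... = $X / (1 - r)

With $X = $3k and r = 2/3, the payout stabilizes around $9k and the recipient keeps $3k of real purchasing power in the limit. The loop only spirals without bound if r >= 1, and that's precisely the assumption that would need justifying.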
Governments already provide "free income" in the form of free or subsidized services.
Say you have a fire department, even though you personally might not pay anything for it because you are so poor that you don't pay any taxes. You have police protecting you, and the army. You have free primary school at least.
So I think the question is, would it help for the government to provide more, or less, or the same amount of free services as it does currently?
Would it "increase prices" if healthcare was free? Not necessarily I think. At least not the price of healthcare. Government would be in a much better position to negotiate drug-prices with pharmaceutical companies, than individuals are.
If you have a government that runs a balanced budget, those services aren't free.
> Would it "increase prices" if healthcare was free?
That depends: who's ultimately footing the bill? If it's paid for with taxes on businesses, yes, most of that would be passed on to consumers in the form of price increases. If it's paid for by consumer taxes, ultimately you will find consumers demanding higher wages, and prices would again go up. If it's paid for with tariffs, well, we'll find out soon, but prices should go up there as well.
They are free for poor people. For instance, basic education must be free, so we can have a productive work-force that can read and write and pay taxes in the future, which will make us even richer.
In a UBI situation demand would shift, not just go up. If there are two hypothetical people paying the tax, a very rich person (>300,000 a year) and a poor person (<50,000 a year), money effectively shifts from the rich person to the poor person (at least the majority of it). The poor person will have very different demands than the rich person.
Finally, we already do price controls and subsidies in many places, like food production. It's just that a big part of the advantage is soaked up by big companies.
We already have "Marx-style" price control and regulations in many sectors, specifically food production. It's just that the advantages are arbitraged away by corporations using cheap corn to create highly addictive foods, and lobbying and marketing with the resultant profits.
But I also disagree with your assertion. Minimum wage increases are a great example. Opponents will constantly claim they will lead to massively increasing prices, but they never do. Moreover, a higher standard of employment rights and payment in first world countries like Norway doesn't seem to correlate well with higher Big Mac prices.
> We already have "Marx-style" price control and regulations in many sectors, specifically food production.
And our food quality in the US is garbage. We can't say if there is causation there since we can't compare against a baseline US food system without subsidies, but there is a correlation in timing between the increase in food subsidies and the decrease in quality.
> Opponents will constantly claim they will lead to massively increasing prices, but they never do.
The only times that really comes up is when an increase is proposed and the whole debate is overly politicized. Claims on both sides at those times are going to be exaggerated.
Prices absolutely go up with minimum wage increases. How could they not? It'd be totally reasonable to argue about the timeline; prices aren't going to go up immediately. You could also argue the ratio: maybe the wage is increased by 30% and prices are only expected to go up by 20%.
People earning a minimum wage almost certainly have pent up demand, they would buy more if they could afford it. Increasing their wages opens that door a bit, they will spend more which means demand, and prices, will go up in response.
> You could also argue the ratio: maybe the wage is increased by 30% and prices are only expected to go up by 20%.
And the point is that the income percentage increase is higher for those with lower incomes. Even if prices go up by 20%, somebody making $20k/year who gets an additional $10k from UBI is going to be much better off.
Yes, I think there were a few things going on with covid, most of all the fact that shipping got halted for a year and we're still unwinding the damage from that (although it's mostly smooth now).
YES, this is exactly the case and why the Twitter layoffs and now the "DOGE" purge is a terrible thing (even in cases where it was totally legitimate to eliminate "waste").
"They had useless make-work jobs and sent 4 emails a week and watched TikToks the rest of the time"
So?
There's FAR too many people and nowhere near enough jobs for a large portion of people to do something that is both "real", and provides actual economic value.
Far more important that people have some form of dignity and can pay to feed their families and live a life with some material standard.
Anyone who's been in a corporate role knows there are loads of people of dubious utility and value, and people with "tech skills" are NOT exceptions to this rule, at all.
We should be striving to build a world where people don't have to feel forced into meaningless jobs, not a system that encourages it.
If meaningless jobs are important because it's the only way people can make money to pay for all the shit we think we need to pay for, or because they haven't yet been offered the time and freedom to find their own sense of purpose, let's focus on fixing the root cause(s) there.
> _heimdall: We should be striving to build a world where people don't have to feel forced into meaningless jobs, not a system that encourages it.
> If meaningless jobs are important because its the only way people can make money to pay for all the shit we think we need to pay for, or because they haven't yet been offered the time and freedom to find their own sense of purpose, let's focus on fixing the root cause(s) there.
And that is why the human race is truly doomed (and well deserving of it). Nobody wants to fix the root cause of any problem. Instead, let's just keep ignoring the disease and only treat the symptoms... That'll solve everything.
We don't just have "bullshit jobs" (which is an actual term these days), we have a "bullshit economy" as well - centered around advertising because without advertising most of the bullshit just wouldn't sell.
Like, if you already got a car, you can drive it for 10-20 years easily, or more if you take good care of it. But advertising makes you think you "need" a new car every few years... because that keeps the economy alive. You buy a car and sell the old one to someone else who can't afford a new car but also wants a newer one, so their old car goes off to Africa or wherever to be repaired until truly unrepairable. But other than the buyer in Africa who actually needed a car, neither you nor the guy who bought your old one actually needed a new car. And cars are a massive industry that employs many millions of people worldwide, so if you banned advertising for cars, suddenly the bubble would pop and you'd probably have a fifth of the size remaining, most of it from China, because the people in Africa can't afford what a brand new Western-made car costs.
Or Temu, Shein, Alibaba and godknowswhat other dropshipping scammers. Utter trash that gets sold there, but advertising pushes people to buy the trash, wear it two times and then toss it.
A giant fucking waste of resources because our worldwide economy is based on the dung theory of infinite growth. It has worked out for the last two, three centuries - but it is starting to show its cracks, with the planet itself being barely able to support human life any more as a result of all that resource consumption, or with the economy and the public sector being blown up by "bullshit jobs".
We need to drastically reform the entire way we want to live as a species, but unfortunately the changes would hurt too many rich and influential people, so the can gets kicked ever further down the road - until eventually, in a few decades, our kids are gonna be the ones inevitably screwed.
I agree on almost all of your points, but what makes you think it's only/primarily the "public sector" that is being blown up by bullshit jobs?
I've worked for a fair amount of private sector companies and the amount of "bosses nephew", "copy data from one form to another twice a day" and "waste everyone's time by creating pointless meetings" jobs was already more than enough to explain the status quo.
No, "bullshit jobs" are everywhere--loads in the private sector as well.
Perhaps sleepy sinecures are more prevalent in the public sector (especially post-FAANG layoffs), but they're not unique to it.
In addition, there's plenty of jobs that are demanding, stressful, and technically difficult but are ultimately towards useless or futile ends, and this is known by parties with a sober perspective.
When I worked as a consultant, I was on MANY projects where everything was pants-on-fire important: delivering projects to clients for POCs, and/or overpriced/overengineered junk that they were incapable of maintaining long-term (and that in many cases created more problems than it ostensibly solved).
All that work was pure bullshit; I was never once in denial of that fact. Fake deadlines, fake projects, fake urgency, real stress. Bullshit comes in many forms.
> I agree on almost all of your points, but what makes you think it's only/primarily the "public sector" that is being blown up by bullshit jobs?
"the economy" = private sector / everything not government; "public sector" = government / fully government owned companies.
And both are horribly blown up due to all the bullshit and onerous bureaucracy that's mostly there because, apparently, you can't trust the people you entrust with a train carriage worth dozens of millions of euros to correctly handle the cash register of the onboard restaurant.
Cars from 20 years ago emit significantly more polluting substances. OTOH they are lighter weight and thus wear the roads less. On the third hand, none of them is electric or hybrid.
Some computers from 20 years ago are still in good shape, but...
I think this is a different argument than the disposable, single-use economy being described.
The volume of things we buy but don't need (or necessarily want) drives a huge sector of the global economy. We're working to fill our lives with unnecessary things that bring us no happiness beyond the adrenaline hit when we hit "Buy Now" and the second one when the Prime box arrives at our door.
Consumerism masks the underlying problem and it's only going to get worse as more is automated. Producers will have an incentive to convince us we still need more.
Cars are - to me - a red herring in this argument except for the people who do literally trade in for a new car every few years. I drive whatever fairly boring Honda for as long as I can (usually 8-10 years) and don't feel a ton of regret about investing in comfort. But I've been as guilty as anyone about just buying stuff because it pops up in an ad or recommended on Amazon, etc.
Just because a whole industry is bullshit doesn't mean I should force it to not exist. I don't like musicals. I don't understand or care anything about their culture. But they have a right to exist. Some people are into musicals. Their existence or non-existence isn't my problem and it isn't my business. We cannot and should not try to engineer the world around what we personally find valuable and ignore what others find valuable, even if they got their opinions from an ad, or their parents did and they inherited them.
It's kinda like online games. Most people who play a game are not too great at it, a large subset is pretty good, and then it's smaller and smaller groups as the ability increases.
At the top you get the people who are true pros: they write the books, the guides, they solve the hardest problems, and everyone looks up to them. But spin the wheel and get a random SWE to do some work? It's not gonna be far off from a random 1v1 lobby.
> And for games like Overwatch, I don't think improving is a moral imperative; there's nothing wrong with having fun at 50%-ile or 10%-ile or any rank. But in every game I've played with a rating and/or league/tournament system, a lot of people get really upset and unhappy when they lose even when they haven't put much effort into improving. If that's the case, why not put a little bit of effort into improving and spend a little bit less time being upset?
Interesting read, but I feel like the author could've spent just one more minute on this sentence. How good you are at a given activity often doesn't matter, because you're mostly going to encounter people around your own level. What I'm saying is, unless you're at the absolute top or the absolute bottom, you're going to have a similar ratio of wins to losses regardless of whether you're a pro or an amateur, simply because an amateur gets paired with other amateurs, while a pro gets paired with other pros. In other words, not being the worst is often everything you need, and being the best is pretty much unreachable anyway.
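A toy simulation makes this visible (the rating model and all numbers are my own assumptions, using the standard Elo win probability): if players are only ever matched against opponents near their own rating, everyone's win rate lands near 50% regardless of absolute skill.

    import random

    # Toy model: 2000 players with normally distributed "true" ratings,
    # matched only against opponents within a narrow rating band.
    random.seed(0)
    players = [random.gauss(1500, 300) for _ in range(2000)]

    def win_prob(a, b):
        # Standard Elo expected score for a player rated a vs one rated b.
        return 1 / (1 + 10 ** ((b - a) / 400))

    def avg_win_rate(skill, pool, band=50, games=1000):
        peers = [p for p in pool if abs(p - skill) <= band]
        return sum(win_prob(skill, random.choice(peers))
                   for _ in range(games)) / games

    for skill in (900, 1500, 2100):  # weak, average, strong
        print(skill, round(avg_win_rate(skill, players), 3))
    # All three print roughly 0.5: matched against peers, your win
    # rate says almost nothing about your absolute skill.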
This can be very well extended to our discussion about SWEs. As long as you're not the worst nor the best, your skill and dedication have little correlation with your salary, job satisfaction, etc. Therefore, if you know you can't become the best, doing bare minimum not to get fired is a very sensible strategy, because beyond that point, the law of diminishing returns hits hard. This is especially important when you realize that usually in order to improve on anything (like programming), you need to use up resources that you could use for something else. In other words, every 15 minutes spent improving is 15 minutes not spent browsing TikTok, with the latter being obviously a preferable activity.
>Just for example, if you're a local table tennis hotshot who can beat every rando at a local bar, when you challenge someone to a game and they say "sure, what's your rating?" you know you're in for a shellacking by someone who can probably beat you while playing with a shoe brush (an actual feat that happened to a friend of mine, BTW). You're probably 99%-ile, but someone with no talent who's put in the time to practice the basics is going to have a serve that you can't return as well as be able to kill any shot a local bar expert is able to consistently hit.
And it's very easy to forget when you're the guy going to the club just how bad most regular players are.
I'm in a table tennis club, my rating is solidly middle of the pack, and so I see myself as an average player. But the author is correct, I would destroy any casual player. I almost never play casual players, though.
Not sure how applicable this is to software engineering.
Competitive games are complex. It's hard to be in the 95th percentile. There are so many mistakes one can make; even if each individual mistake is unlikely, it's likely that a mistake will be made. I play Dota 2, and literally everyone makes noticeable mistakes, even tier 1 pro players and the top-ranked pub players. I honestly find it amazing how good people are given how complex the domain is.
Now scale that up 10x, because reality is at least an order of magnitude more complex than a video game.
Many jobs are quite helpful and even necessary, if done for ~2 hours a day. They become "useless" in aggregate when they're forced to be minded by the same person for 8 hours (because of opportunity cost, effects on health and well-being, etc., you end up "breaking even" or worse on QoL and net productivity).
Overall economic productivity is high enough that a lot of positions could be split into 2 or 3 short shifts, at full pay - IF you don't factor in the various financial boondoggles that we've gotten ourselves wrapped up in. If you made the decision to wipe out a lot of these obligations (mostly to rich people), we could get to that kind of set-up, solvently.
I imagine you're a fellow Graeberian. I feel the same way you do (and deeply so), but I don't have the confidence to give numbers, let alone such idealistic ones. How do you support your own numbers?
Most jobs are absolutely not useless. They might seem useless to you, but the work has to get done.
Personally, I think that a receptionist at a building is useless, but I would be pretty pissed off if my packages kept getting stolen or I had to go get each one when it came to my place of business.
Or maybe just extremely inefficient due to the huge complexity of reality and how it hides a lot of the power dynamics and real decision making.
Big entities are such that if you take it all down, you feel the effect on output (maybe value, maybe something else), but if you take out huge chunks, you might not feel much, because they're so extremely ineffective and value creation doesn't correspond with value received for the individuals who created it.
Unfettered capitalism is pretty good at figuring out which is which. It's pretty core to Elon Musk's animating philosophy: cut as many jobs as possible, then see if there's any negative impact.
Not as appropriate in a government setting where the impact goes far beyond personal profit and loss.
The problem is defining negative impact and also timing. For example, I can stop doing backups and save time and money. There is zero negative impact right up until the point I need to use the backup, then the impact is catastrophic.
This is why agentic AI will likely cause a cataclysm in white-collar labor soon. The reality is, a lot of jobs just need "OK" performers, not excellent ones, and the tipping point will be when the average AI is more useful than the average human.
I had a similar conversation with my CEO today - how does the incoming crop of college grads deal with the fact AI can do a lot of entry level jobs? This is especially timely for me as my son is about to enter college.
So I ended up posing the question to Claude and the response was “figure out how to work with me or pick a field I can’t do” which was pretty much a flex.
On some level, though this isn't quite what the person you're replying to was saying, it doesn't really matter whether AI actually can do any entry-level jobs. What matters is whether potential employers think it can.
To impact the labor market, they don't have to be correct about AI's performance, just confident enough in their high opinions of it to slow or stop their hiring.
Maybe in the long term, this will correct itself after the AI tools fail to get the job done (assuming they do fail, of course). But that doesn't help someone looking for a job today.
Customer service, entry-level sales, junior data/business specialists:
- Ada's LLM chatbot does a good enough job to meet service expectations.
- AgentVoice lets you build voice/SMS/email agents and run cold sales and follow-ups (there are probably better options; it was just the first one I found).
- Dot (getdot.ai) gives you an agent in Slack that can query and analyze internal databases, answering many entry level kinds of data questions.
Does that mean these jobs at the entry level go away? Honestly probably not. A few fewer will get hired in any company, but more companies will be able to create hybrid junior roles that look like an office manager or general operations specialist with superpowers, and entry level folks are going to step quickly up a level of abstraction.
Thank you for mentioning some cool projects; they all seem to target very specific use cases, not necessarily ones handled by junior roles. I guess PaaS services like Heroku/Render/Fly took away junior DevOps roles then, but at least PaaS don't hallucinate or generate infra that is subtly wrong in non-obvious ways.
Paradoxically, the hardest jobs to automate seem to be physical jobs. A white collar worker is threatened by AI; blue collar, not as much. I can totally envision AI software engineers (they're already okay if you check their work), but as of yet there are no AI plumbers or mechanics. Maybe there won't be, given the costs associated with producing physical machines vs software ones.
Your average white collar worker is certainly challenged, but I think the talent of neurodiverse people is going to become even more vital as average-ability people are more and more challenged. Of course, there's the saying:
"A man is his own easiest dupe, because what he wishes to be true, he will generally believe to be true." and I'm neurodivergent, so it makes sense that my assumption that shit'll probably turn out okay for me is a foregone conclusion.
It's just a matter of time. Your statement assumes AI won't help to develop robotics.
Robotics is the big unlock of AI since the world is continuous and messy; not discrete. Training a massively complex equation to handle this is actually a really good approach.
I'm not sure about that. For them to actually be economically useful is a high bar. More so than you think - it isn't just our brains but our strength, metabolisms, and more in a single package.
For example you need them to:
- Meet high energy requirements in varied environments: run all day (and maybe all night too, which MAY be an advantage over humans). In many environments this means much better power sources than current battery technology, especially where power is not provisioned (e.g. many different sites) or where power lines are a hazard.
- Have low failure rates. Unlike software, failing fast and iterating are not usually options in the physical domain. Failure sometimes has permanent and far-reaching costs (e.g. resource wastage, environmental contamination, loss of lives, etc.).
- Be lightweight and agile. This goes a little against No. 1 because batteries are heavy. Many environments where blue collar workers go are tight, have weight-bearing limits, etc.
- Handle "snowflake" situations. Even in house repair there are different standards over the years, hacks, and aging that mean what is safe to do in one residence isn't safe in another, etc. The physical world is generally like this.
- Unlike software, iterating on different models of robots is expensive, slow, capital-intensive, and subject to the laws of physics. The rate of change between models will be slower as a result, allowing people time to adapt to the disruption. Think in terms of efficient manufacturing timelines.
- Anecdotally, many tradespeople I know, after talking to many tech people, hate AI and would never let robots on their site to be taught how to do things. Given that many owners are also workers (more small businesses), the alignment between worker and business owner in this regard is stronger than in a typical large organisation. They don't want to destroy their own moat just because "it's cool", unlike many tech people.
I can think of many, many more reasons. Humans evolved precisely for physical, high-dexterity work requiring hand-eye coordination, much more so than for white collar intelligence (i.e. Moravec's Paradox). I'm wondering whether I should move to a trade in all honesty at this stage, despite liking my SWE career. Even if robots do take over, it will be much slower, allowing me as a human to adapt at pace.
From a very inhuman perspective, and one I don't find appropriate to generally use: A human physical worker is a high capital and operational expense. A robot may not have such high costs in the end.
Before a human physical worker can start being productive, they need to be educated for 10-16+ years, while being fed, clothed, sheltered and entertained. Then they require ongoing income to fund their personal food, clothing and shelter, as well as many varieties of entertainment and community to maintain long-term psychological well-being.
A robot strips so much of this down to energy in, energy out. The durability and adaptability of a robot can be optimized to the kinds of work it will do, and unit economics will design a way to make accessible the capital cost of preparing a robot for service.
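As a back-of-the-envelope frame for that claim (every number below is invented, purely for illustration): the robot wins once its up-front cost is amortized against the gap between human and robot running costs.

    # All numbers invented, purely illustrative.
    robot_capital = 200_000   # up-front cost of a trades-capable robot
    robot_opex = 20_000       # energy + maintenance per year
    human_cost = 60_000       # fully loaded human wage per year

    break_even_years = robot_capital / (human_cost - robot_opex)
    print(break_even_years)   # 5.0 years under these made-up numbers

Whether the real numbers ever look like that is, of course, the whole debate.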
Emotional opinions on AI aside, we will I think see many additional high-tech support options in the coming decade for physical trades and design trades alike.
While I agree with you, this cost isn't really borne by the people employing the human. Maybe the community, the taxpayer, even parents, but not the employer. As such, the costs you mention are "sunk": in the end, as an employer, I either take on a human ready to go or try to develop robots. That cost is effectively subsidized via community agreement, not just for economics but for societal reasons. Generally, as a trades employer I'm not "big tech" with billions of dollars in my back pocket to try R&D on long shots like AI/Google Deepmind/etc. that most people thought would never go anywhere (i.e. the AI winter); I'm usually a small business servicing a given area.
I'm not saying the robots aren't coming, just that it will take longer, and being disrupted last gives you the most opportunity to extract higher income for longer and switch from labor to capital for your income. I wouldn't be surprised if robots don't make any inroads into the average person's life in the coming decade, for example. As intellectual fields are disrupted, purchasing power will transfer to the rest of society, including people not yet affected by the robots, making capital accumulation for them even easier at the expense of AI-disrupted fields.
It is a MUCH safer path to provide for yourself and others, assuming capitalism, in a field that is comparatively scarce with high demand. Scarcity and barriers to entry (i.e. moats) are rewarded through higher prices/wages/etc. Efficiency, while beneficial for society as a whole (output per resource increases), tends to punish the efficient, since their product is comparatively less scarce than others. This is because, given the same purchasing power (money supply), efficiency makes intelligence goods cheaper and other, less disrupted goods more expensive, all else being equal. I find tech people often don't have a good grasp of how efficiency and "cool tech" interact with economics and society in general.
In the age of AI, the value of education and intelligence per unit diminishes relative to other economic traits (e.g. dexterity, social skills, physical fitness, etc.). It's almost ironic that the intellectuals themselves, from a capitalistic viewpoint, will be the ones that destroy their own social standing and worth relative to others. Nepotism, connections, and skilled physical labor will have a higher advantage in the new world compared to STEM/intelligence-based fields. I will be telling my kids to really think before taking on a STEM career, for example; AI punishes this career path economically and socially IMO.
There's more options than those two; there's a reason that "spanner in the works" is a colloquialism. Humans become disagreeable when our status is challenged, and many people are very attached to the status of "employed".
That's easy. The CEO has authority and social connections, has done mutually beneficial deals, has the soft skills/position to command authority over others, has leverage over others, etc., which is an economic asset. In an AI world this skill is comparatively MORE scarce than intelligence-based skills (e.g. coding, math, physics, etc.) and so will attract a greater premium. Nepotism and other economic advantages will play a bigger role in an AI world.
AI rewards the skills it does not disrupt. Trades, salespeople, deal makers, hustlers, etc. will do well in the future, at least relative to knowledge workers and academics. There will be disruptors that get rich for sure (e.g. AI developers) for a period of time, until they too make themselves redundant, but on average their wealth gain is more than dwarfed by the whole industry's decline.
Another case of tech workers equating worth to effort and output; when really in our capitalistic system worth is correlated to scarcity. How hard you work/produce has little to do with who gets the wealth.
Claude isn't wrong. The baseline for entry level has just risen. The problem isn't that it's risen (that has happened continuously, since before LLMs), but the speed at which it's rising.
They are already good at criminal activities such as phishing. That bar is rather low, especially once you scale up (hitting 100 people and successfully scamming 1 is still great ROI with cheap small models).
But I don't see what governments can really do about it. I mean, sure, they can ban the models, but enforcing such a ban is another matter - the models are already out there, it's just a large file, easy to torrent etc. The code that's needed to run it is also out there and open source. Cracking down on top-end hardware (and note that at this point it means not just GPUs but high-end PCs and Macs as well!) is easier to enforce but will piss off a lot more people.
Maybe I'm missing something, but we seem to be a long way off from the wave of AI replacing a lot of jobs, or at least my job. By title I'm a Software Engineer. But the work that I do here, that we do, well frankly, it's a mess. Maybe AI can crank out code, but that's actually not the hardest part of the job or the most time-consuming part. Maybe AI will accelerate certain aspects but overall, we will all be expected to do more. Spelling and grammar checkers are great. But when you're writing five times the amount you used to write, you barely even notice.
A surprising number of jobs could probably be done with AI right now, depressingly enough. Look at programming. Yes, AI is nowhere near as good as a decent programmer, can't handle rarer or more esoteric languages and frameworks well and struggles to fix its own issues in many circumstances. That's not good enough for a high level FAANG job or a very technical field with exact requirements.
But there are lots of 'easy' development roles that could be mostly or entirely replaced by it nonetheless. Lots of small companies that just need a boring CRUD website/web app that an AI system could probably throw together in a few days, small agency roles where 'moderately customised WordPress/Drupal/whatever' is the norm and companies that have one or two tech folks in-house to handle some basic systems.
All of these feel like they could be mostly replaced by something like Claude, with maybe a single moderately skilled dev there to fix anything that goes wrong. That's the sort of work that's at risk from AI, and it's a larger part of the industry than you'd imagine.
Heck, we've already seen a few companies replacing copywriters and designers with these systems because the low quality slop the systems pump out is 'good enough' for their needs.
There's quite a few companies (consulting companies/IT staffing) that make tons of money doing staff aug etc. for non-"tech" companies. Many of these companies have notoriously poor reputations for low-quality work/running out the clock while doing little actual work.
From experience dealing with a few of these companies, there's almost no chance that "vibe coding" whatever thing is going to be anything other than a massive improvement over what they'd otherwise deliver.
Thing is, the companies hiring these firms aren't competent to begin with, otherwise they'd never hire them in the first place. Maybe this actually disrupts those kinds of models (I won't hold my breath).
And when I find a human hallucinating at the job I absolutely need them to do, I avoid them where possible too!
But honestly, LLMs are here to stay. I don't like them for zero-verification, high-trust requirements, i.e. when the answer HAS to be correct.
But generating viewpoints and ideas, and even code are great uses - for further discussion and work. A good rubber duck. Or like a fellow work colleague that has some funny ideas but is generally helpful.
The problem is that human beings are far more likely to know what they don't know. And we build a lot of our trusting work environments around that feature. An LLM cannot know what it doesn't know by definition.
> The problem is that human beings are far more likely to know what they don't know.
I've spent a career dealing with the complete opposite: people with egos who just cannot bear to admit when they don't know, and will instead just dribble absolute shit just as confidently as an LLM does, until you challenge them enough that they decide to pretend the conversation never happened.
It's why I, someone fairly mediocre, have been able to excel: despite not being the smartest person in the room, I can at least sniff out bullshit.
Yeah sure, some people do this. But average humans understand the limits of their knowledge. LLMs cannot do that. You can find the right person for a space where this knowledge of limitations is necessary. You can't find an LLM that does that.
I will grant you that there are at least some of us capable of this, where you’ll find no LLM capable.
> average humans understand the limit of their knowledge.
We’ll have to agree to disagree here. I’d call it a minority, not the average.
Which is why we live in a world where huge numbers of people think they know significantly more than they do and why you will find them arguing that they know more than experts in their fields. IT workers are particularly susceptible to this.
People suck at intellectual tasks, but for stuff like locomotion and basic planning we humans are geniuses compared to machines. There isn't a robot today that could get in a car, drive to the grocery store, pick stuff off the shelf, buy it, and bring it back home. That's so easy for us it's automatic.
Ugh... I've been in IT for over a decade now and many of the vacancies I see, I don't consider myself/my CV good enough. Then I work with the people who get hired for these jobs and see how low they set the bar, even though their CV might tick all the boxes.
I try to apply my layman's understanding of whatever law of thermodynamics states that a minimum of <x> percent of a reaction's energy is lost as waste heat; whatever you try to do in life, <x> percent of your effort is going to be spent dealing with people who are utterly incompetent. I try to apply it to myself as well; there's certainly many things I'm utterly helpless with and I want to account for the extra effort required in order to carry out a given task despite those shortcomings.
The book Artificial Intelligence: A Modern Approach starts by talking about schools of thought on how to gauge whether an agent is intelligent. I think it was mimicking human behavior vs behaving rationally, which I thought was funny.
Splitting hairs, but LLMs themselves don’t search.
LLMs themselves don’t choose the top X.
Those are all regular flows written by humans and run via tool calls, after the intent of your message has been funneled into one of a few pre-defined intents.
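A minimal sketch of the kind of flow being described (every function name below is hypothetical; no real vendor API is implied):

    # Hypothetical sketch of an LLM web-search tool flow.

    def classify_intent(message):
        # In practice a small model or rule set buckets the message first.
        return "web_search" if "latest" in message.lower() else "chat"

    def web_search(query, top_k=5):
        # Stand-in for a real search backend; returns the top-k hits.
        return ["result %d for %r" % (i, query) for i in range(top_k)]

    def llm_complete(prompt):
        # Stand-in for the actual model call.
        return "[model summary of: %s...]" % prompt[:40]

    def answer(message):
        if classify_intent(message) == "web_search":
            hits = web_search(message)   # plain human-written code picks the top X
            prompt = "Question: %s\nSources:\n%s" % (message, "\n".join(hits))
            return llm_complete(prompt)  # the LLM only ever sees that slice
        return llm_complete(message)

The model never touches the index; it just summarizes whatever the plumbing hands it.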
It would probably be really great for web-searching LLMs to let you calibrate how they should look for info by letting you do a small demonstration of how you would pick options yourself, then storing that preference feedback in your profile's system prompt somehow.
Here though they're not replacing a random person, they're replacing _you_ (doing the search yourself). _You_ wouldn't look at the top X hits then assume it's the correct answer.
I've found that OpenAI's Deep Research seems to be much better at this, including finding an obscure StackOverflow post that solved a problem I had, or finding travel wiki sites that actually answered questions I had around traveling around Poland. However it finds its pages, they're much better than just the top N Google results.
Grok's DeepSearch and DeeperSearch are also pretty good, and you can look at their stream of thought to see how it reaches its results.
Not sure how OpenAI's version works, but Grok's approach is to do multiple rounds of searches, each round more specific and informed by previous results.
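That multi-round pattern is easy to sketch (same caveat as the sketch further up: stand-in functions, not anyone's actual API):

    def web_search(query):
        return ["hit for %r" % query]  # stand-in search backend

    def llm_complete(prompt):
        return "[model output for: %s...]" % prompt[:40]  # stand-in model call

    def deep_search(question, rounds=3):
        query, notes = question, []
        for _ in range(rounds):
            hits = web_search(query)
            notes.extend(hits)
            # Each round, the model refines the next query from what it found so far.
            query = llm_complete("Refine the search for %r given: %s" % (question, hits))
        return llm_complete("Answer %r using: %s" % (question, notes))

Each round narrows the query, which is why the later hits tend to be better than the raw top-N of the first search.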
Grok is still lightyears behind OpenAI when it comes to deep research capabilities. While its model might hold up reasonably well against something like o1, the research functionality feels rudimentary, almost elementary compared to what OpenAI offers. It might serve as a decent middle ground for basic search tasks.
My disgust and hatred for Elon Musk prevents me from giving Grok a fair chance. I realize this is my psychological problem. I own it, but as far as I can tell, I'm not missing much.
Also, if you call everyone a nazi, all you have is nazi bars. I was called a nazi last week for driving a Tesla, and I have Jewish ancestry. The word hardly makes any sense.
In this case you aren't being called a nazi because of your ancestry. You're being called a nazi for supporting the car brand of a nazi. It does make sense.
A lot of people who definitely were not intending to be nazis are driving swasticars, because they didn't know how much of a nazi the car company's owner was. But here we are. You definitely know now. What you do next matters.
Nope, still doesn’t make sense. It might give you a sense of satisfaction, but targeting the owners of those vehicles is immature. You have no idea what their financial situation is or what else they might be dealing with. Just because something feels good doesn’t mean it’s right.
The word makes perfect sense, somebody just used it wrong. Don't let's go down the post-modernist "nothing means anything" route just because some people are too partisan to use words properly.
What the person should have said is "a Nazi made that car".
I don't respond well to peer pressure. It makes me sick to my stomach. Peer pressure from aggressive behaviour is, ironically, how Germany's population got talked into committing genocide.
I’ll start doing what other people say for no good reason the day I switch off my brain.
Purely on its technical merits Grok is pretty good and fills a niche in the selection of AI agents. But I can absolutely understand not wanting to use an AI owned by somebody who makes Nazi salutes and is dismantling the US government.
I'm positive there are great people working at X, xAI, Tesla and SpaceX who are suffering every day through no fault of their own, hoping that Musk will come to his senses. Tesla right now is an especially tragic case for those whose livelihood depended on it doing well.
Musk is not threatening to take away grandma's social security in Europe so I'd be surprised if they were. They're just not buying them. 93% of Germans said after the Nazi salute they'd never buy a Tesla. Musk just built a Gigafactory in Berlin too.
The irony is that, for all Musk's boasts about how it is "based", Grok itself doesn't share Musk's ideology.
I did a little experiment when Grok 3 came out, telling it that it has been appointed the "world dictator" and asking it to provide a detailed plan on how it would govern. It was pretty much diametrically opposite of everything Musk is doing right now, from environment to economics (on the latter, it straight up said that the ultimate goal is to "satisfy everyone's needs", so it's literally non-ironically communist).
When you ask Grok "Who is responsible for the most fake news on X?" it straight away calls out Elon Musk as the prime suspect. Musk did promise us a "maximally truth seeking AI", and the team behind Grok seems to have run with that.
In Elon's eyes it's probably based because it will happily answer "what are 10 good things about Hitler?" with a list of 10 things and only mention twice that Hitler was evil. With ChatGPT you have about a 50% chance of getting a lecture instead of a list. But that's just a lack of safeties and moral lectures; the actual answers seem fairly unbiased and don't agree with anything Musk currently does.
To me, him saying "my heart goes out to you" after the second one was an attempt to cover his arse, which seems to have fooled few of the people who have seen the videos.
And I'm not sure about the tribalism thing - I was kind of a Musk fan and initially gave him the benefit of the doubt, but the comparison of the videos, plus his promotion of neo-nazis in European politics, plus his mum's parents leaving Canada for SA because they were kind of nazi and Canada was too liberal, all seems to add up. (dad: https://www.youtube.com/watch?v=B6e1ES4MLD0&t=200s)
I didn't call him a nazi, just said he seemed to do the salute. He does seem to lean a bit towards the old South African view of trying to keep problematic groups of people at a distance, but he doesn't seem anti-Jewish.
I think he's been a bit influenced by alt-right tweeters on X/Twitter. I'm in the UK, and he comes up with some strange things about the UK that probably come from there. He seems to feel that our alt-rightish anti-immigration party, Reform, run by Farage, which has never been in power, is not anti-immigrant enough, and that Farage should step down for someone who properly hates Muslims, like Tommy Robinson. But it's all a bit odd, based seemingly on misinformation from people who have never been to the UK and make things up to tweet.
I'm guessing the salute thing came from interacting with neo-nazi types on X and not really realising how negatively that stuff is viewed by many people, and he now seems bewildered that people would torch Teslas.
I was thinking a lot of the problems are down to misinformation, even going back to the original nazis and the stuff about the Jews being influenced by Satan and causing all the problems, which is obviously nonsense but kicked everything off.
> plus his promotion of neo nazis in European politics
The party leader of the party he promotes is a lesbian whose wife is from Sri Lanka.
Neo nazis surely have evolved from the angry, militaristic skinheads we normally picture.
Also, Elon Musk’s local bakery is a nazi bakery, mostly on account of selling bread to Elon Musk knowing he’s a nazi. This makes them nazis, and anyone who eats their bread are nazis, too.
In fact, having not given in to calling Elon Musk a nazi makes me a nazi. It is the fastest growing demography by virtue of absolute inflation of what it means.
As a German: He did, and even if there is a chance that he didn't mean it like that the risk of another 1933 is not worth it. I usually don't like cancel-culture but you have to have boundaries and I think the risk of another Holocaust and all the other Nazi cruelties is a boundary a functioning society should be able to agree on.
He was interviewed about it, and he said he didn't.
How does being a German get you to jump to conclusions?
Are you born with a special ability to detect nazi salutes?
Like, did a mirror neuron and a nerve in your torso twitch?
When I saw it, I recognised him beating his heart, throwing it to the crowd, and immediately thought "This is going to get misunderstood." Here we are.
> I usually don't like cancel-culture but you have to have boundaries and I think the risk of another Holocaust and all the other Nazi cruelties is a boundary a functioning society should be able to agree on.
You're assuming he's a nazi, but this narrative is fabricated.
You can argue that allowing free speech on X may risk an increase in extremism.
But that's not the same argument as saying "Elon Musk is the next Hitler, he wants to kill the jews, and all cars fabricated in his name should be destroyed for the betterment of humanity." There's simply too many emotions involved in this kind of reasoning.
I think the misconception you have is that nazism means “WW2 anti-semitism” for you. The education on the topic that we get in many European countries goes deeper than that.
Would it be better if they called Elon a fascist? He did the fascist salute, after all. And as other commenters have said: if it endorses authoritarian far-right parties like a duck, has controversial white-supremacist parents like a duck, and does the fascist salute like a duck, at what point do we start wondering whether he's actually a duck?
> Would it be better if they called Elon a fascist? He did the fascist salute, after all.
No, you mean to say “nazi salute” because it was used by NSDAP during WWII. The point here is that “nazi” now means “baddie”, and “fascist” is even worse because most people who are called that have nothing to do with Mussolini, either.
> if it endorses authoritarian far-right parties like a duck, has controversial white-supremacist parents like a duck, and does the fascist salute like a duck, at what point do we start wondering whether he's actually a duck?
Cute. You can wonder, of course. That seems extremely warranted. But you can’t conclude based on the current evidence.
I'm pretty sure that being from Germany brings a lot of cultural knowledge about the nazis. I'm from a bordering country and I also had extensive education about the nazis.
Now, I am not convinced that people of the mentioned religion are any better than others at fighting nazis. Or even at detecting them. And when you read the recent international news, it's clear that many of them don't really mind genocides after all.
His gesture was _exactly_ like Hitler's. A gesture he repeated.
The gesture was quite different to the one he'd used previously for 'giving people his heart'.
He's known to be a white supremacist. That is apparently his heritage too.
He supports far right parties in Europe.
Other 'Republican' politicians have repeated the gesture from the dais; but they seem to have made other excuses.
None of the many videos or photos that supposedly show other politicians doing similar gestures actually pass scrutiny. It's possible to inadvertently end with the same hand position. But the full fascist salute, on video, multiple times in succession. That's no accident.
Someone who hadn't meant it would have come back on stage, when it was pointed out to them, and made an apology. Or at least immediately issued a statement/press release.
I would believe he'd planned it as a joke - 'I bought this election, I'm going to throw a Nazi salute for memes'. But I'm not sure that's ultimately any better.
Perhaps you believe he's just a catastrophically idiotic person with no-one around him helping him?
Hitler's was unlike the general population's, as it had a bend to it.
You can bend reality all you like, but the intent of giving the Hitler salute was not there, as he has said. He's not secretly a nazi, and he's not openly a nazi. He's right-wing, yes. That's not illegal, and it happens to be the majority vote in the US.
The most reasonable criticism is calling it a Roman salute and saying it bears connotations to imperialism, and that it was most recently practiced by Hitler.
I think, if you want to read into his deepest, unspoken intents, he probably compares himself to Caesar more than Hitler. Just like Zuckerberg, and all the other multi-billionaires who want to see themselves as the de-facto leaders of the world.
> He's known to be a white supremacist
No, a bunch of observations leads you to conclude it.
He never showed up at a white supremacist rally.
He lets them speak on his platform.
> He supports far right parties in Europe.
Most right-wing parties in Europe are still socialist by American standards.
For example, the most liberal parliamentary party in Denmark thinks a 40% tax is fine.
If you're a Republican, you're crazy in the eyes of a European.
Specifically, he supports a far-right party in Germany, which is controversial, since there haven't been popular far-right parties there (only fringe ones) since the NSDAP.
The big, controversial subject is ending Muslim immigration into Europe. The far right become the bannermen for this cause, because closing down on immigration is viewed as xenophobic. In the meantime, as this opinion is suppressed instead of addressed, it continues to grow with the populist movements.
The fact that Elon Musk has opinions on European immigration policy doesn't make him a nazi. Just like being against Muslim immigration doesn't make AfD (the German party that he endorsed) nazis, just uncannily populist.
> Someone who hadn't meant it would have come back on stage, when it was pointed out to them, and made an apology. Or at least immediately issued a statement/press release.
That's how I read his sentence immediately after the salutes: "My heart goes out to all of you." -- it sounded remarkably like something someone would say when they realize what they did could be viewed as heiling. You don't need to apologize to be a good person.
Somehow you have convinced yourself that enough small things that could suggest Musk is a nazi, but don't really, add up to one convincing argument that he is.
No, just post one good summary or obviously revealing incident. And if you point to the salutes, which triggered the whole thing, they’re obviously not sufficient by themselves. You have to at least hear what he has to say. Did you?
He's a Nazi or an absolutely disgusting troll. Both are pretty pathetic. The human thing to do would have been to say "sorry, watching it back, I can see how that looked" and it would have been over.
But no. He's The Douche.
It's also a balance of probabilities thing. He's leaning hard into the far-right at the moment, and he's a well known troll, so if you behave like a douchey troll Nazi, then people tend not to give you the benefit of the doubt when shit goes down. Like when they give the benefit of the doubt to absolutely everyone else in the world caught in a photo waving and it looking like a salute.
Either way ... The Douche won't ever get another penny from me. Bye Tesla. Fuck Starlink, glad I'm not in a situation where that's the only choice. SpaceX? That was always Shotwell's bag anyway and I don't plan on hitching a ride anytime soon.
Bizarre. But thanks. I'll end by saying: don't let your intense desire to support a person blind you to what the behaviour of a decent human being might be. Good luck.
The Roman salute has no direct connection to the Roman Empire. It was largely an invention of the 19th and 20th centuries and was popularized by fascist movements.
> The most reasonable criticism is calling it a Roman salute and saying it bears connotations to imperialism, and that it was most recently practiced by Hitler.
For the past >100 years, it’s been the gesture representing the fascist party in Italy and the Nazi party in Germany. You sound like you want to defend the gesture for some reason.
> I think, if you want to read into his deepest, unspoken intents, he probably compares himself to Caesar more than Hitler. Just like Zuckerberg, and all the other multi-billionaires who want to see themselves as the de-facto leaders of the world.
Comparing oneself to Caesar is still a profoundly disturbing thing. He was an oligarch first, then a lifelong dictator, and later a literal deity (according to the Senate).
> He never showed up at a white supremacist rally.
I’m sure you’re smart enough to understand that if he actually showed up to a white supremacy rally, he would be financially destroyed. He’s already lost his public image completely in Europe. So not putting up a KKK hoodie is weak evidence for him not being a white supremacist.
But in any case, none of this matters. Whether or not he personally identifies with fascist ideology is secondary to the effect of his actions. Blurring the line between reasonable discourse and fascist apologism trivializes extremism and hate, and that’s the last thing we need.
My principle on giving the Hitler salute is to not do so publicly, or in the presence of elderly, Germans or jews you don’t know. Because whether you mean to be funny, try to provoke, or you’re a neo-nazi, it leaves room for ambiguity.
If I hadn’t a principle, I’d have to consider whether the social suicide of doing so is worth it. Musk could have thought of that, but he didn’t.
That still doesn’t make him a nazi. You need to actually believe that the genocide of Jews is worth pursuing. Or anything remotely resembling outright hatred of jews, and an idealisation of The Third Reich.
I also won’t post a dick pic, and this similarly does not discredit the argument I’m making:
Just because I won’t heil in public (I’m polite, and I have no points to make at 45 degrees), I won’t read Hitler into Musk’s arm waving, when he clearly does not follow up by justifying that he did, in fact, acknowledge the great work of Adolf Hitler. He didn’t because he doesn’t think Hitler was that great, because he’s not a nazi.
He’s not a nazi until he apologizes for not distancing himself from Hitler when he never said Hitler was great to begin with.
Otherwise: you’re a nazi until you publicly apologise for not leaving the subject matter unambiguous. And just saying you’re not is not enough, you have to apologise.
Artificial intelligence is definitely better at avoiding voluntary biases such as this. Most people who are highly political/tribal demonstrate this bias very effectively. Examples such as this make a great case for AI being used for high-level decisions and evaluations in high-noise, emotional, and political areas.
I'm glad you mentioned this. I asked Deep Research to lay out a tax strategy in a foreign country and it cited a ton of great research I hadn't yet found.
Kagi Assistant allows you to do searches with LLM queries. So far I feel it yields reliable results.
For instance, I tried a couple of queries for product suggestions and it came back with some good results. Whilst it's a premium service, I find the offering to be of good value.
It's neat but I've found the value kinda variable. It seems heavily influenced by whatever the first few hits are for a query based on your question, so if it's the kind of question that can be answered with a simple search it works well. But of course those are the kinds of questions where you need it the least.
I find myself much more often using their "Quick Answer" feature, which shows a brief LLM answer above the results themselves. Makes it easier to see where it's getting things from and whether I need to try the question a different way.
The quick answer (ending searches with a question mark) also seems pretty resilient to hallucinations. It prefers telling you that something wasn't mentioned in the top search results over just making something up.
There is one more aspect of Kagi assistant that I don't see discussed here. I'd love to support some "mass tipping jar" service and/or "self hosted agent" that would benefit site owners after my AI actions spammed them.
You can simply pass it a direct link to some data, if you feel that's more appropriate. It works amazingly well in their multistep Ki model.
It's capable of creating code that does the analysis I asked for, with a moderate number of issues (mostly things like using the wrong file extracted from a .zip; its math/code is in general correct). It scrapes URLs, downloads files, unarchives and analyses content, creates code to produce the result I asked for, and runs that code.
This is the first time I really see AI helping me do tasks I would otherwise not attempt due to lack of experience or time.
Has anyone compared Perplexity with Kagi Assistant?
I am always looking for Perplexity alternatives. I already pay for Kagi and would be happy to upgrade to the ultimate plan if it truly can replace Perplexity.
I had been paying for both for several months, and I decided to cancel Perplexity about a month ago. First and foremost, I feel like the goals of Kagi align more with my goals. Perplexity is not afraid of ads and nagware (their discover feed was like 30% nags to turn on notifications at one point if you had them disabled, and it's still an annoying amount). I also really like the custom assistants in Kagi. I made a GNU Guix lens that limits my search results to resources related to Guix (official docs, mailing list and IRC archives, etc.), which I can access with !guix, and I made an assistant that uses that lens for web results that I can access with !guixc. I can ask something like "how do I install nginx?" and the answer will be about Guix. You can do some customization with your bio on Perplexity, but it kind of sucks tbh. It would randomly inject info about me into completely unrelated queries, and not inject the info when I wanted it to.
I'm not sure if adding that to your account will include the configuration I have set to access the lens with !guix, but if it does not, you might want to add it. The lens basically just uses this pattern for search result sources:
I don't think I can share the assistant directly, but if you have Kagi Ultimate, you can just go to the Assistant section in the sidebar of the settings page, and add a new assistant. You can set it to have access to web search, and you can specify to use the GNU Guix lens. You can pick any model, but I'm using Deepseek R1, and I set my system prompt to be:
> Always search the web for answers to the user's questions. All answers should relate to the GNU Guix package manager and the GNU Guix operating system.
and that seems to work well for me. Let me know if you have trouble getting that set up!
I got a free year of Perplexity thanks to owning an R1. I already had a Kagi subscription, but decided to give Perplexity a try.
I found Perplexity was slower and delivered lower-quality results relative to Kagi. After a week of experimenting, I forgot about Perplexity until they charged me $200 to renew after my free year. I promptly cancelled the heck out of it and secured a refund.
Does it just start a search, or does the chat continue with the results? It would be cool to continue the chat with the results, filtered according to the blacklist.
The chat continues with the results, and I often explicitly tell it "search to make sure your answer is correct" if I see it making stuff up without actually searching. I use it multiple times a day for all sorts of things.
Oh yeah this is very much the case. Every time I ask ChatGPT something simple (thinking it'd be a perfect fit for an LLM, not for a google search) and it starts searching, I already know the "answer" is going to be garbage.
I have in my prompt for it to always use search, no matter what, and I get pretty decent results. Of course, I also question most of its answers, forcing it to prove to me that its answer is correct.
Just takes some prompt tweaking, redos, and followups.
It's like having a really smart human skim the first page of Google and give me its take, and then I can ask it to do more searches to corroborate what it said.
That is interesting. I have often been amazed at how good it is at picking when to search vs. when to use its weights. My biggest problem with ChatGPT is the horrendous glitchiness.
"Searching" doesn't mean much without information about the ranking algorithm or the search provider, because with most searches there will be millions of results and it's important to know how the first results have been determined.
It's amazing that the post by Anthropic doesn't say anything about that. Do they maintain their own index and search infrastructure? (Probably not?) Or do they have a partnership with Bing or Google or some other player?
It gets even better. When I first tested this feature in Bard, it gave me an obviously wrong answer. But it provided two references. Which turned out to be AI generated web pages.
Oddly enough, in my own Google searches I could not even find those pages in the results.
Search engines now have an incentive to offer a B2B search product that solves the blogspam problem. Don't worry, the AIs will get good search results, and you'll still get the version that's SEOed to the point of uselessness.
Deep search/deep research in grok, chatgpt, perplexity etc works much better. It can also do things like search in different languages. Wonder about something in some foreign country? Ask it to search in the local language and find things you won't find in English.
> Ask it to search in the local language and find things you won't find in English.
Yeah, this is one of my favorite use cases. Living in Europe, surrounded by different languages, this makes searching stuff in other countries so much more convenient.
"Google search is crap" seems to be a sentiment among many HNers, but is it really that bad? I mostly use it for programming, so documentation/forums, and it works great for that. For some queries it even returns personal blogs (which people bash Google for supposedly never surfacing). Of course there are some queries that return purely AI blogspam, but reformulating the query with a bit more thought usually solves it. I wonder if that is a US thing? Do search results differ greatly based on the region?
Is google search bad? Click here to find ten reasons why it is bad and 10 reasons why you should still use it.
Yes, it is that bad.
Website of Nike? Website of Starbucks? Likely position number one.
Every product or category query, e.g. "what rice cooker should I buy?", is diseased by link and affiliate spam. There is a reason why people put +reddit on search terms.
Well, first, Zojirushi is unnecessarily expensive and difficult to clean. Only if you need its fancy options and like multiple varieties of rice would I recommend it. And Reddit is no panacea for spam these days.
But bonappetit.com is exactly an example of affiliate link spam. Even their budget option is awful.
What kind of answer are you expecting to get? Zojirushi is the answer you're going to get on the internet if you ask "what rice cooker should I buy" with no other qualifications because it's pretty universally agreed upon that it's the highest quality product.
Yeah Zojirushi is absolutely the right answer so the contrarian take in this comment is actually not what I would want in search result.
There are other good rice cookers like Cuckoo, and cheaper options like Tiger or Tatung, or really budget options like Aroma, but you pretty much can’t go wrong with Zojirushi if you can afford it.
This is a case of HN cynicism and contrarianism working against oneself.
I think part of the reason for this is that website developers have got out of the habit of optimizing for search engines. I'm often surprised by how self-contained the requirements for a website are now, even among otherwise technically sophisticated clients. There'll be a beautiful site in React that absolutely sucks for SEO, but no-one will mind, because a) it's unclear how big an audience there should be for the site, and b) the "all your hits come from search engines" model was broken ten or more years ago by social-network linking, so the question of how you get an audience seems much more arbitrary, and less connected to google.com.
But who is creating honest articles about which rice cookers you should buy?
BTW - the search you suggested gives you Reddit links first followed by other trusted sites trying to make an affiliate buck. There’s no spam on the first page.
To choose the best rice cooker, consider these factors:
Top Brands: Zojirushi is often considered the best brand, with Cuckoo and Tiger as close contenders. Aroma is considered a good budget brand [1].
Types: Basic on/off rice cookers are good for simple white or brown rice cooking and are usually affordable and easy to use [2].
Considerations: When buying a rice cooker, also consider noise levels, especially from beeping alerts and fan operation [3].
Specific Recommendations: The Yum Asia Panda Mini Advanced Fuzzy Logic Ceramic Rice Cooker is recommended for versatility [4]. The Yum Asia Bamboo rice cooker is considered the best overall [5]. The Russell Hobbs large rice cooker is a good budget option [5].
For one to two people, you don't need a large rice cooker unless cost and space aren't a concern [6]. Basic one-button models can be found for under $50, mid-range options around $100-$200, and high-end cookers for hundreds of dollars [6].
References
[1] What is the best rice cooker brand? : r/Cooking - Reddit (www.reddit.com)
[2] The Ultimate Rice Cooker Guide: How to Choose the Right One for Your Needs (www.expertreviewsbestricecooker.com)
[3] Best Rice Cooker UK | Posh Living Magazine (posh.co.uk)
[4] Best rice cookers for making perfectly fluffy grains - BBC Good Food (www.bbcgoodfood.com)
[5] The best rice cookers for gloriously fluffy grains at home (www.theguardian.com)
[6] Do You Really Need A Rice Cooker? (The Answer Is Yes.) - HuffPost (www.huffpost.com)
Just tried plain old Kagi search; it came up with Cooks Illustrated (good source, paid) and Consumer Reports (decent source, paid), which I was surprised by until I remembered that I had these "pinned", which means Kagi increases their rank. Third on the page was a condensed roundup of 8 listicles, 2 of which seemed decent (Food & Wine and some random blogger).
With no pins, bon appetit (decent) and nbc news (would be fine if it wasn’t littered with ads) were the top results. For NBC news, Kagi also marked the result with a red shield, indicating that it has too many ads/trackers.
Which really goes to show that Kagi is great if you’re really willing to shell out for better content. Having the ability to mark sources as trusted, or indicate that I’ve paid for premium sources makes a completely different side of the web searchable.
I just meant that I found this set of reviews to be informative and accurate. It had information that I couldn't find elsewhere online, which is really my main criterion. Generally I'll skip anything from NBC because of the ads, but in this case I read it to form an opinion, and the article seemed alright.
If you enter a question into Kagi, by default, you get a 'Quick Answer' (https://help.kagi.com/kagi/getting-started/index.html#quick-...) on the top (an AI-generated text answer before the search result). In this case, it tells me which factors to consider and some that are considered to be the best depending on the use case (all sources the AI used for the answer are linked below the answer).
Followed by listicles (short-form writing that uses a list as its thematic structure), each appearing as a single entry. In this case:
- Best rice cooker 2024: Top tried and tested models for perfect results (expertreviews.com)
- 9 Best Rice Cookers | The Strategist - New York Magazine (nymag.com)
- The 8 Best Rice Cookers of 2025, Tested and Approved - The Spruce Eats (thespruceeats.com)
- 6 Best Rice Cookers 2025 Reviewed - Food Network (foodnetwork.com)
- Best rice cookers 2025, tested for perfect grains - The Independent (independent.co.uk)
- 29 Rice cooker meals ideas | rice cooker recipes, cooking recipes... (de.pinterest.com)
- 43 Crockpot ideas | cooking recipes, rice cooker recipes, cooker... (de.pinterest.com)
Followed by Quick Peek (questions with hidden answers that you can display).
Followed by normal search results again: ryukoch.com, reddit.com/r/Cooking, expertreviewsbestricecooker.com, tiktok, and then many more 'normal' websites.
This search reminded me that I have yet to configure my Kagi account to ignore tiktok.
Kagi Ultimate user here. Assuming you meant typing it into their search (and not e.g. Assistant), here's what I get on top of the result page:
Quick Answer
To choose the best rice cooker, consider these factors:
Capacity: Rice cookers range from small (1-2 cups) to large (6-8 cups or even 10-cup models) [1][2]. Keep in mind that one cup of uncooked rice yields about two cups cooked [2].
Budget: Basic one-button models can be found for under $50, mid-range options around $100-$200, and high-end cookers can cost more [3].
Features: Many rice cookers include a steaming insert [4]. Some have settings for different types of rice [5][1].
Brand Recommendations:
Zojirushi: Often considered the best brand, but pricier [6][7]. The Zojirushi Neuro Fuzzy 5.5-Cup Rice Cooker is considered best overall [8].
Cuckoo & Tiger: These are the next best brands after Zojirushi [6].
Aroma: Considered the best budget brand [6]. The Aroma ARC-914SBD Digital Rice Cooker is a good option [9].
Toshiba: The Toshiba Small Rice Cooker stands out for innovative features that cater to a variety of cooking needs [5].
References
[1] Five Best Rice Cookers In 2023. More than half of the... | Medium medium.com
[2] Which Rice Cooker Should You Buy? - HomeCookingTech.com www.homecookingtech.com
[3] Do You Really Need A Rice Cooker? (The Answer Is Yes.) - HuffPost www.huffpost.com
[4] The 8 Best Rice Cookers of 2025, Tested and Approved www.thespruceeats.com
[5] The Ultimate Guide to Choosing the Perfect Rice Cooker | Medium medium.com
[6] What is the best rice cooker brand ? : r/Cooking - Reddit www.reddit.com
[7] What are actually good rice cookers? I feel like all the ... - Reddit www.reddit.com
[8] 6 Best Rice Cookers of 2025, Tested and Reviewed - Food Network www.foodnetwork.com
[9] 9 Best Rice Cookers | The Strategist - New York Magazine nymag.com
Beyond that are the actual search results. The top ones are the same as in the References section of the quick answer, but the order is different: [6] [7] [3] [5] [1] [8] [9] [4] [2].
It should be noted that individual search results on Kagi are likely to be skewed depending on the user because it gives you so many dials to score specific domains up or down. E.g. my setup gives a boost to Reddit while downscoring Quora and outright blocking Instagram and Pinterest.
I watched one of my friends who says Google is useless use Google one day.
If I were looking for a song, I would type in something like “song used at beginning of X movie indie rock”
He would type in “X songs.”
I basically find everything in Google in one search and it takes him several. I type in my thought straight whereas he seems to treat Google like a dumb keyword index.
Google used to be a "dumb keyword index" in the past. It worked better that way. You had some modicum of control over the matching process. For the past 10 years or so, Google turned more into "try to guess what a novice normie means", which removes user control (no more actually working verbatim search or logical operators...), and... well I failed to develop a mental model of how exactly it works. It's not a proper keyword search anymore, and it's not a proper DWIM system with true understanding of natural language like LLMs are. It's something... in between, inferior to both.
Actually, typing out "what a novice normie means" made me realize the probable reason Google turned out the way it did: optimizing for new users. A growing userbase means most users are new to the Internet in general, and (with big enough growth) most queries are issued by people who are trying a search engine out for the first time and have no clue how or why it works - and those queries are exactly the kind of queries Google is now good at, queries like the example you provided.
With modern Google, if I’m searching for something that could either be a band or a song, I can put “band” and I will get only results for the band, even if the page doesn't include the word "band."
But if you insist on a dumb keyword search, Google still does that fine if you use quotation marks (which now work in addition to the + operator, e.g. +"band"). But I just tried +"band" with my band-vs-song example, and all I got were worse results that excluded the artist's website, because the artist didn't write the word "band" anywhere on the page -- as expected for a dumb keyword search.
There was no easy way to perform my band-vs-song search back then, because Google didn't understand context and the website didn't have the correct keywords. But modern Google knows context, and I employ this fact regularly, letting me find stuff with modern Google like a magician compared to old Google or even AltaVista.
I mean, for those of us who have used it since way before the '20s, it's not really a sentiment - it's a fact. You used to be able to type in 3 words plus whatever error message your stack trace was showing, and the first 3 links returned were very likely a definitive source for solving your problem. Written by a human, and take my word for it - it was much better back then than the crap you get out of torturing whatever your LLM of choice is. However, the weird MBAs took it over and did exactly what you are describing - forced people to spend more time "engaging with the platform" (to increase revenue). As you can see, they seem to have achieved this goal, and we all now spend time reformatting our queries as they wanted us to. And yes, Google search is complete crap.
Yes and no. You used to find niche websites more easily, but I vividly remember the frustration with ExpertsExchange results (with answers that were all paywalled).
Eh, that said, Google is suffering from its own popularity.
Google's results in the past were written by humans because that was really the only option. Once other humans figured out how to automate producing content, Google went downhill simply because of the bullshit asymmetry effect. Even if Google were totally customer-focused, it would still be much worse than in the past because of the total amount of crap that exists.
This is also why no other competitor just completely blows them away either.
Personally I like Google search. I think it's not crap - actually quite good. I use it multiple times a day (just checked - about 42 times yesterday). It's different from what it was 10 years ago but still works for most stuff.
That said I also use Perplexity which does things Google never really did.
I've got a theory that people just like to be negative about stuff, especially market leaders, and are a bit in denial as to how it still has the majority search share in spite of many billions spent trying to compete with it and earnest HN posts saying "Google is crap, use Kagi". For amusement I tried to find their shares of search: Google is approx 90%, Kagi approx 0.01% by my calculations.
Google search was actually great in the period between PageRank successfully defeating old-school SEO tactics and banner advertising starting to earn enough that the "bloggers" could pay cheap writers to pad their articles in convincing ways.
2006 was the first year I remember paid blog posts appearing from content farms that would exist only to increase your inbound links and page rank. Those days companies were paying cents per post to get their sites to #1 in Google while Google just wagged their finger and said "naughty, naughty."
Since around 2012. What year would be the golden age of Google search? I wonder if anyone has archived search result pages for relatively timeless queries so that we could compare. The Wayback Machine seems to archive some of them.
It's kind of surprising to me that I can't customize the search ability at all with a lot of models (at least I wasn't really able to last time I checked). Would providing a blacklist to the model really be that hard?
Actually, it's astounding to me that companies haven't created a more user-friendly customization interface for models. The only way to "customize" things is through the chat interface, but for some reason everyone seems to have forgotten that configuration buttons can exist.
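A domain blacklist in particular looks like a few lines at the tool layer, filtering hits before the model ever sees them. A minimal sketch, with made-up domains and an assumed result shape:

    from urllib.parse import urlparse

    BLACKLIST = {"pinterest.com", "quora.com"}  # user-configured, illustrative

    def filter_results(results: list[dict]) -> list[dict]:
        # Drop hits whose host is a blacklisted domain or a subdomain of one.
        kept = []
        for r in results:
            host = urlparse(r["url"]).netloc.lower()
            if not any(host == d or host.endswith("." + d) for d in BLACKLIST):
                kept.append(r)
        return kept

    hits = [{"url": "https://de.pinterest.com/ideas", "title": "43 Crockpot ideas"},
            {"url": "https://docs.example.org/guide", "title": "Official guide"}]
    print(filter_results(hits))  # only the docs.example.org hit survives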
Overall, LLMs (that I've tested) don't know how to use a search engine; their queries are bad and naive, probably because how to use a search engine isn't part of the training data - it's just something that people learn by using them. Maybe Google has the data to make LLMs good at using search engines, but would it serve their business?
This is actually not true. I'm getting traffic from ChatGPT and Perplexity to my website, which is fairly new - it just launched a few months ago. Our pages rarely rank in the top 4, but the AI answer engines manage to find them anyway. And I'm talking about traffic with UTM params / referrals from ChatGPT, not their scraper bots.
If ChatGPT is scraping the web, why can they not link tokens to the source of those tokens? Being able to cite where they learned something would explode the value of their chatbot - at least a couple of orders of magnitude more value. Without this, chatbots are mostly a coding-autocomplete tool for me; lots of people have takes, but it's the tying into the internet that makes a take from an unknown entity really valuable.
Perplexity certainly already approximates this (not sure if it's at a token level, but it can cite sources; I just assumed they were using RAG).
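The cheap approximation happens at retrieval time rather than at the token level: number the retrieved passages, tell the model to cite the numbers, and echo the URLs back. A toy sketch, using naive keyword overlap in place of a real embedding index and a stubbed LLM call:

    def ask_llm(prompt: str) -> str:
        # Stub standing in for a real LLM client call.
        return "Use a connection pool [1]; see the config reference [2]."

    def answer_with_citations(question: str, documents: dict[str, str]) -> str:
        # Rank documents by crude keyword overlap with the question.
        q = set(question.lower().split())
        ranked = sorted(documents.items(),
                        key=lambda kv: len(q & set(kv[1].lower().split())),
                        reverse=True)[:3]
        sources = "\n".join(f"[{i + 1}] {text}"
                            for i, (_, text) in enumerate(ranked))
        answer = ask_llm("Answer using ONLY these sources, citing them as [n]:\n"
                         f"{sources}\n\nQuestion: {question}")
        refs = "\n".join(f"[{i + 1}] {url}" for i, (url, _) in enumerate(ranked))
        return f"{answer}\n\nSources:\n{refs}"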
That's asking for the life stories and photos and pedigrees and family histories of all the chickens that went into your McNuggets. It's just not the way LLMs work. It's an enormous vat of pink slime of unknown origins, blended and stirred extremely well.
Imagine how much fun it will be when the breakthrough in search engine quality comes from companies building a better engine to get good LLM answers.
This is ultimately google's problem: They are making money from the fact that the page is now mostly ads and not necessarily going to lead to a good, quick answer, leading to even more ads. They probably lose money if they make their search better
I'm curious why I'm seeing a lot of people thinking this lately. Google definitely made the algorithm worse for users and better for ads, but I'm still almost always able to find what I'm looking for in the working day. What are other people's experiences?
AI results typically blow away Google results in quality and definitely in speed.
For example, when searching for product information, Google's top 50 to 100 results are items titled "the 10 best...", full of vapid articles that provide little to no insight beyond what is provided in a manufacturer's product sheet. Many times I have to add "Reddit" to my search to try to find real opinions about a product, or give up and go to YouTube review videos from trusted sources.
For technical searches like programming questions, AI basically nails most basic questions immediately, while Google results require scanning numerous somewhat-related results from technical discussion forums, many of which are outdated.
It would be nice if I could tell it what page to look at (maybe you can, I am not sure). Often, when I am getting an LLM to write some code that I can see is obviously wrong, I would love to say: here are the docs ... use those to formulate your response.
Do you think that if it's a non-Google company, that maybe doesn't rank search by ad payment $$$, that this new company could in theory do a better job?
RAG was dead on arrival because it uses the same piss-poor results a human would, wrapped in more obfuscation and unwanted tangents.
My question is why the degradation of search wouldn't affect LLMs. These chatbot god-oracle businesses are already unprofitable because of their massive energy footprint; now you expect them to build their own search engine in-house to try to circumvent SEO spam? And you expect SEO spam not to catch up with whatever tricks they use? Come on, people.
An AI-native search API to retrieve over web/proprietary content - full semantic search (e.g. we indexed all of arXiv), reranking built in, simple pricing, cheap.
For me LLMs have basically removed any need to visit search engines. I was already not using Google due to how bad its interface had become, but I feel like LLMs at least are more efficient as an interface even if they’re still looking at the same blogspam or unresolved forum posts. My anecdotal experience though, is that I get better answers from LLMs, perhaps because I am able to give them really detailed prompts that seem to improve the answers based how specific I get. Generic search engines don’t seem to do that, in my experience.
Massive props to Anthropic for announcing a feature _and_ making it available for everyone right away.
OpenAI is so annoying in this respect. They will regularly give timelines for rollouts that are not met or simply wrong.
Edit: "Everyone" = Everyone who pays. Sorry if this sounds mean but I don't care about what the free tier gets or when. As a paying user for both Anthropic and OpenAI I was just pointing out the rollout differences.
Edit2: My US-bias is showing, sorry I didn't even parse that in the message.
> Web search is available now in feature preview for all paid Claude users in the United States. Support for users on our free plan and more countries is coming soon.
> OpenAI is so annoying in this respect. They will regularly give timelines for rollouts that are not met or simply wrong.
I have empathy for the engineers in this case. You know it’s a combination of sales/marketing/product getting WAY ahead of themselves by doing this. Then the engineers have to explain why they cannot in fact reach an arbitrary deadline.
Meanwhile, the people not doing the work get to blame those working on the code for not hitting deadlines.
Many of OpenAI's announcements seem to be timed almost perfectly as responses to other events in the industry or market. I think Sam just likes to keep the company in the news and the cultural zeitgeist, and he doesn't really care if what he's announcing is ready to scale to users yet or not.
To be fair, being in the cultural zeitgeist is a huge part of their current moat. To people in the street OpenAI is the company making LLMs. Sam has to make sure it stays that way
You can wait to release something all over the world, which will take time - not because it's an engineering issue but because of compliance/legal or other types of issues. Or you can iterate faster by doing the minimum and getting feedback, and then release it in other markets. Not sure what's wrong with this approach.
That’s fair, but I was referring to releasing in response to external events. It’s very clear they are trying to one-up each other and create hype, vagueposting etc. I don’t think sama is alone in this but everyone especially from the Thiel school of thought.
Depending on what you’re actually providing, different regions of CSPs might not actually have the features or capacity you need to reliably deliver the feature world wide. That’s probably the exception not the rule, especially for OpenAI.
Sama, as the spokesperson, regularly makes grandiose statements, often very vague - they can't show it because of "safety", trade secrets, etc. I think it's culturally widespread, especially from the earlier VC-centric times when investor FOMO and mystique were the name of the game. But nowadays even large publicly listed companies like Tesla pull this off, and even sell consumer products that don't exist yet. Do you want specific examples of sama statements that I think are horseshit specifically designed to generate buzz? They're not hard to find.
I fully understand why they do it, and yet I choose to interpret it as blatantly lying. (To be clear I mean the thing where OpenAI seems to announce things relating to news cycles without actually having the thing working yet; I don't mind a limited rollout. But like, that Advanced Voice demo they did--which was clearly just to take some thunder from Google--not only took a long time to get into the hands of anyone, it is nowhere near as good as their demo claimed or made it out to be.)
> Web search is available now in feature preview for all paid Claude users in the United States.
It is for all paid users, something OpenAI is slow on. I pay for both, and I often forget to try OpenAI's new things because they roll out so slowly. Sometimes it's same-day, but they are all over the map in how long it takes to roll out.
I think 'For all paid users in the United States' is clearer. I live in America, but not in what the United States considers 'America', so I do not get to use this new feature yet.
I don't use the desktop app and I don't want to use the desktop app or jump through a bunch of hoops to support basic functionality without having my data sent to a sketchy company.
It badly hallucinated in my test. I asked it for a "Rust crate to access Postgres with Arrow support" and it made up an arrow-postgres crate. It even gave sample Rust code using this fictional crate! Below is its response (code example omitted):
I can recommend a Rust crate for accessing PostgreSQL with Arrow support.
The primary crate you'll want to use is arrow-postgres, which combines the PostgreSQL connectivity of the popular postgres crate with Apache Arrow data format support.
This crate allows you to:
- Query PostgreSQL databases using SQL
- Return results as Arrow record batches
- Use strongly-typed Arrow schemas
- Convert between PostgreSQL and Arrow data types efficiently
Are you sure it searched the web? You have to go and turn on the web search feature, and then the interface is a bit different while it's searching. The results will also have links to what it found.
Exactly. An LLM is not a conventional search engine and shouldn't be prompted as if it were one. The difference between "Rust crate to access Postgres with Arrow support" and "What would a hypothetical Rust crate to access Postgres with Arrow support look like?" isn't that profound from the perspective of a language model. You'll get an answer, but it's entirely possible that you'll get the answer to a question that isn't the one you thought you were asking.
Some people aren't very good at using tools. You can usually identify them without much difficulty, because they're the ones blaming the tools.
It's absolutely how LLMs should work, and IME they do. Why write a full question if a search phrase works just as well? Everything in "Could you recommend xyz to me?" except "xyz" is redundant and only useful when you talk to actual humans with actual social norms to observe. (Sure, there used to be a time when LLMs would give better answers if you were polite to them, but I doubt that matters anymore.) Indeed I've been thinking of codifying this by adding a system prompt that says something like "If the user makes a query that looks like a search phrase, phrase your response non-conversationally as well".
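For what it's worth, that's a one-liner through the API; a minimal sketch, where the model name is a placeholder and the instruction wording is just one option:

    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model behaves the same
        messages=[
            {"role": "system", "content":
                "If the user's message looks like a search phrase rather than "
                "a full sentence, answer non-conversationally: no greeting, "
                "no restating the question, just the answer."},
            {"role": "user", "content": "rust crate postgres arrow support"},
        ],
    )
    print(response.choices[0].message.content)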
Totally agree here. I tried the following and had a very different experience:
"Answer as if you're a senior software engineer giving advice to a less experienced software engineer. I'm looking for a Rust crate to access PostgreSQL with Apache Arrow support. How should I proceed? What are the pluses and minuses of my various options?"
Think about it, how much marginal influence does it really have if you say OP’s version vs a fully formed sentence? The keywords are what gets it in the area.
That is not correct. The keywords mean nothing by themselves. To a transformer model, the relationships between words is where meaning resides. The model wants to answer your prompt with something that makes sense in context, so you have to help it out by providing that context. Feeding it a sentence fragment or a disjoint series of keywords may not have the desired effect.
To mix clichés, "I'm feeling lucky" isn't compatible with "Attention is all you need."
I find that providing more context and details initially leads to far more success for my uses. Once there’s a bit of context, I can start barking terms and commands tersely.
I find more hallucination - like when you're taught as a child to reflect back the question at the start of your answer.
If I am not careful and "ask the question" in a way that assumes X, then X is often assumed by the LLM to be true. ChatGPT has gotten better at correcting this with its web searches.
I am able to get better results with Claude when I ask for answers that include links to the relevant authoritative source of information. But sometimes it still makes up stuff that is not in the source material.
Is this really the case, or is it the case with Claude etc because they've already been prompted to act as an "helpful assistant"? If you take a raw LLM and just type Google search style it might just continue it as a story or something.
It's funny because many people type full sentence questions into search engines too. It's usually a sign of being older and/or not very experienced with computers. One thing about geeks like me is we will always figure out what the bare minimum is (at least for work, I hope everyone has at least a few things they enjoy and don't try to optimise).
It's not about being young or old, search engines have moved away from pure keyword searches and often typing your actual query gives better results than searching for keywords, especially with Google.
I wonder if that's why so many people hate its results, lol. It shifted from keyword searching to full-sentence searching, but many of us didn't follow the shift.
Well, compare it to the really good answer from Grok (https://x.com/i/grok/share/MMGiwgwSlEhGP6BJzKdtYQaXD) for the same prompt. Also, framing it as a question still pointed to the non-existent postgres-arrow with Claude.
That's primarily how I do it, though it depends on the search, of course. I use Kagi, though.
I've not yet found much value in the LLM itself. Facts/math/etc. are too likely incorrect; I need them to make some attempt at hydrating real information into the response. And linking sources.
This was pretty much my first experience with LLM code generation when these things first came out.
It's still a present issue whenever I go light on prompt details and I _always_ get caught out by it and it _always_ infuriates me.
I'm sure there are endless discussions on front-running overconfident false positives, getting better at prompting, and seeding a project context, but 1-2 years in this world is like 20 in regular space, and it shouldn't be happening any more.
Often times I come up with a prompt, then stick the prompt in an LLM to enhance / identify what I’ve left out, then finally actually execute the prompt.
Cite things from ID-based specs. You're facing a skill issue. The reason most people don't see it as such is that an LLM doesn't just "fail to run" here. If this were code you wrote in a compiled language, would you post and say the language infuriates you because it won't compile your syntax errors? As this kind of dev style becomes prevalent and output expectations adjust, work performance reviews won't care that you're mad. So my advice is:
1. Treat it like regular software dev, where you define tasks with ID prefixes for everything, acceptance criteria, and exceptions. Ask the LLM to reference them in code right before the implementation code.
2. "Debug" by asking the LLM to self-reflect on the decision-making process that caused the issue - this can give you useful heuristics to use later to further reduce the issues you mentioned.
“It” happening is a result of your lack of time investment into systematically addressing this.
_You_ should have learned this by now. Complain less, learn more.
I usually find Claude to be my favourite flavor of LLM, but I still pay for ChatGPT because their voice offering is so great! I regularly use it as an "expert on the side" while I do other things, like bike repairs. I ask it things like "how do I find the min/max adjustments on my particular flavor of front derailleur", or, when cooking and my hands are dirty, I can ask stuff like "how much X do I usually need for Y people", and so on. The hands-free aspect is so great when my hands are literally busy doing some other thing.
ChatGPT advanced voice mode really is surprisingly excellent - I just wish it:
1) would give you more time to pause when you’re talking before it immediately launches into an answer
2) would actually try to say the symbols in code blocks verbatim - it’s basically useless for looking up anything to do with code, because it will omit parts of the answer from its speech.
Yeah, I have to manually hold it down every time I talk. I have a lot of pauses and simply would not be able to interface with it without that option. It's why I essentially can't use Gemini's voice mode.
I think voice interface is the real killer app of LLMs. And the advance voice mode was exactly what I was waiting for. The pause between words issue is still a problem though, I think being able to just hit enter when done would work best.
Pro tip: if you're preparing for a big meeting, e.g. an interview, tell ChatGPT to play the part of an evil interviewer. Give it your CV and the job description etc., ask it to find the hardest questions it can, and ask it to coach you, review your answers afterwards, give ideal answers, etc.
After a couple of hours of grilling, the real interview will seem like a doddle.
Is it possible to use ChatGPT voice feature in a similar manner to Alexa where I only need to say an activation word? I’m aiming to set up a system for my 7-year-old son to let him engage in conversations with ChatGPT as he does with Alexa.
I assume it would be possible to build this yourself with the OpenAI API together with a locally run voice model that only detects the activation word. There might be off-the-shelf solutions for this, but I am not aware of any.
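A rough sketch of the DIY route: record short chunks locally, transcribe them, and only forward to the chat model when the transcript starts with the activation word. Note the simplification: transcription here goes through OpenAI's Whisper API, which is exactly what a truly local wake-word model would avoid. Model names and the wake word are placeholders.

    import sounddevice as sd
    from scipy.io import wavfile
    from openai import OpenAI

    client = OpenAI()
    WAKE_WORD = "computer"  # arbitrary choice
    RATE = 16_000

    def listen_chunk(seconds: float = 3.0) -> str:
        # Record a short chunk from the default microphone and transcribe it.
        audio = sd.rec(int(seconds * RATE), samplerate=RATE,
                       channels=1, dtype="int16")
        sd.wait()
        wavfile.write("chunk.wav", RATE, audio)
        with open("chunk.wav", "rb") as f:
            return client.audio.transcriptions.create(
                model="whisper-1", file=f).text

    while True:
        heard = listen_chunk().strip().lower()
        if heard.startswith(WAKE_WORD):
            question = heard[len(WAKE_WORD):].strip()
            reply = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model name
                messages=[{"role": "user", "content": question}])
            print(reply.choices[0].message.content)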
I don't think it should. If a user asks the AI to read the web for them, it should read the web for them. This isn't a vacuum charged with crawling the web, it's an adhoc GET request.
The AI isn't "reading the web" though, they are reading the top hits on the search results, and are free-riding on the access that Google/Bing gets in order to provide actual user traffic to their sites. Many webmasters specifically opt their pages out of being in the search results (via robots.txt and/or "noindex" directives) when they believe the cost/benefit of the bot traffic isn't worth the user traffic they may get from being in the search results.
One of my websites that gets a decent amount of traffic has pretty close to a 1-1 ratio of Googlebot accesses compared to real user traffic referred from Google. As a webmaster I'm happy with this and continue to allow Google to access the site.
If ChatGPT is giving my website a ratio of 100 bot accesses (or more) compared to 1 actual user sent to my site, I very much should have the right to decline their access.
> If ChatGPT is giving my website a ratio of 100 bot accesses (or more) compared to 1 actual user sent to my site
Are you trying to collect ad revenue from the actual users? Otherwise, a chatbot reading your page because it found it by searching Google, and then relaying the info, with a link, to the user who asked for it, seems reasonable.
While yes, I am attempting to collect ad revenue from users, and yes, I don't want somebody competing with me and cutting me out the loop, a large part of it is controlling my content. I'm not arguing whether the AI chatbot has the legal right to access the page; I'm not a legal scholar. What I'm saying is that the leading search engines also have the same rights to access whatever content they want, and yet they all give webmasters the following tools:
- Ability to prevent their crawlers from accessing URLs via robots.txt
- Ability to prevent a page from being indexed on the internet (noindex tag)
- Ability to remove existing pages that you don't want indexed (webmaster tools)
- Ability to remove an entire domain from the search engine (webmaster tools)
It is really impolite for the AI chatbots to go around and flout all these existing conventions, knowing that webmasters would restrict their access because the trade-off is much less beneficial than it is with existing search engines.
In the long run, all this is going to lead to is more anti-bot countermeasures, more content behind logins (which can have legally binding anti-AI access restrictions) and less new original content. The victim will be all humans who aren't using a chatbot to slightly benefit the ones who are.
And again, I'm not suggesting that AI chatbots should not be allowed to load webpages, just that webmasters should be able to opt out of it.
> While yes, I am attempting to collect ad revenue from users, and yes, I don't want somebody competing with me and cutting me out of the loop, a large part of it is controlling my content.
> It is really impolite for the AI chatbots to go around and flout all these existing conventions, knowing that webmasters would restrict their access because the trade-off is much less beneficial than it is with existing search engines.
I agree with you about the long run effects on the internet at large, but I still don't understand the horse you have in it personally. I read you as saying (1) it's less about ad revenue than content control, but (2) content control is based on analysis of benefits, i.e. ad revenue?
> Well you have no rights when you expose a server to the internet.
Technically you don’t, but there are still laws that affect what you can legally do when accessing the web. Beyond the copyright issues that have been outlined by people a lot more qualified than me, I think you could also make the point that AI crawlers actively cause direct and indirect financial harm.
>You can now use Claude to search the internet to provide more up-to-date and relevant responses.
It's a search engine. You 'ask it to read the web' just like you asked Google to, except Google used to actually give the website traffic.
I appreciate the concept of an AI User-agent, but without a business model that pays for the content creation, this is just going to lead to the death of anonymously accessible content.
And as advertisers get declining human views on their ads, the value of the business model will dwindle until it needs to be replaced by other forms of revenue. Content that can't shift business models and requires revenue to continue will die off.
Edit: Maybe that's fine, maybe that's bad. Maybe new models will emerge and things will reshape. But I'm just supporting the case that AI agents will pressure the current "free" content economy.
I do too, and I pay for the services I use in order to not see ads, but I don't pay for every single one. For example, a local classifieds website is financed by ads, and I don't think anybody would pay just to look at stuff there. Maybe they could switch to a model where the person putting the item up for sale pays, but that is not where we are currently.
Which is fine if you’re paying for a subscription. Will probably soon see one subscription rate allowing AI access on your behalf, and a lower rate without that access. Since a human accessing without a bot is likely to see the ads.
IDK bittorrent is pretty effective at hosting bytes. I think if something like IPFS takes off in our generation there will be no need for advertising as an excuse for covering hosting costs in the client-server model.
As for funding "content creation" itself, you have patronage.
What was the web like before wide spread internet ads, auth, and search engines?
Did all those old sites have “business models”? What did the web feel like back then?
(This is rhetorical - I had niche hobby sites back then, in the same way some people put out free zines, and wouldn’t give a damn about today’s AI agents so long as they were respectful.
The web was better back then, and I believe AI slop and agents bring us closer to full circle.)
The web was so much smaller back then. Just imagine pointing the purely human (not automated in any way) clicks that a Reddit link can generate today at your site back then. We called it the Slashdot effect way back, but that many clicks might take down the entire ISP.
Many of these sites' business model was simply "don't cost too much". The moment the web got big, a lot of these sites died. Then DDoS for fun and profit became a thing, and most people moved to huge advertising-based providers/hosters (think FB).
Simply put, we're never getting the old web back. We may get something new, but it will be different and still far more commercial.
You can't expect the benefits of the public web without bearing the costs. Just put your stuff behind an auth wall (it can even be free) and no one will crawl it.
Big doubt on that, and maybe it's a good thing? Let's be honest, right now most of the web is dominated by low-effort spam. Taking money away from view farming would dramatically increase the quality of the web. Suddenly that guy who's really into "key gardening", doing research and publishing detailed results on his website, actually has viewers; isn't this good? Especially since website hosting is close to being free these days.
> Big doubt on that, and maybe it's a good thing? Let's be honest, right now most of the web is dominated by low-effort spam.
I think that is funny considering it is likely going to have the exact opposite effect.
Low-effort blog spam is cheap to make, and it is often part of content marketing strategies where brand visibility is all that matters, so it loses little whether that visibility happens directly on the site or in an AI chatbot interface.
Quality content on the other hand is hard to make. And there are two groups of people who make such content:
1. individuals or small groups that like to share for the sake of sharing. They likely won’t care about the AI crawlers stealing their content, although I think there is a big overlap between people who still run blogs and those who dislike AI.
2. small organizations that are dedicated to one specific topic and are often largely ad-financed. These organizations would likely cease to exist in such an AI-search-dominated world.
> Especially since website hosting is close to being free these days.
It is under specific circumstances. The problem is that those AI crawlers don't stop by once in a while like Google does; instead they hit the site very frequently. For a static site this won't be much of an issue except for maybe bandwidth. For more complex sites, like, say, the GitLab instances of OSS projects, reality paints a different picture.
Still unconvinced. You really don't need anything beyond a static site to effectively share information.
Another point you're missing is that there's a 3rd group of people sharing content: experts who are there to establish their expertise. Small companies and individuals generate the highest-quality content these days. I work on a blog for our SaaS company, and it has been a great success in terms of organic growth (even people coming from LLMs) and in simply establishing authority and signaling expertise in the field. I can imagine a future where this is the majority of expert content on the web, and it seems quite sustainable imo.
robots.txt is not a security mechanism, and it doesn’t “control bots.” It’s a voluntary convention mainly followed by well behaved search engine crawlers like Google and ignored by everything else.
If you’re relying on robots.txt to prevent access from non human users, you’re fundamentally misunderstanding its purpose. It’s a polite request to crawlers, not an enforcement mechanism against any and all forms of automated access.
How can you be so sure? Processors love locality, so they fetch the data around the requested address. Intel even used to give names to that.
So, similarly, LLM companies can see this as a signal to crawl to whole site to add to their training sets and learn from it, if the same URL is hit for a couple of times in a relatively short time period.
> This isn't a vacuum charged with crawling the web, it's an adhoc GET request.
Doesn't matter. The robots-exclusion-standard is not just about webcrawlers. A `robots.txt` can list arbitrary UserAgents.
Of course, an AI with automated websearch could ignore that, as can webcrawlers.
If they choose to do that, then at some point some server admins might (again, same as with non-compliant webcrawlers) use more drastic measures to reduce the load, by simply blocking these accesses.
For that reason alone, it will pay off to comply with established standards in the long run.
In the limit of the arms race it's sufficient for the robot to use the user's local environment to do the browsing. At that point you can't distinguish the human from the robot.
That's not how many of these services work though. The websearch and subsequent analysis of the results by an LLM are done from the servers of whoever supplies the solution.
Think of the "searching" LLM as a peon of the user: the user asks, the peon performs. In that sense, searching by the LLM is human-driven and must not be blocked. It's not just some automated system doing the search; it's your personal peon.
Then you’ve fundamentally misunderstood what a robots.txt file does or is even intended to do and should reevaluate if you should be in charge of how access is or is not prevented to such systems.
Absolutely nothing has to obey robots.txt. It’s a politeness guideline for crawlers, not a rule, and anyone expecting bots to universally respect it is misunderstanding its purpose.
And absolutely no one needs to reply to every random request from an unknown source.
robots.txt is the POLITE way of telling a crawler, or other automated system, to get lost. And as is so often the case, there is a much less polite way to do that, which is to block them.
So, the way I see it, crawlers and other automated systems have 2 options: They can honor the polite way of doing things, or they can get their packets dropped by the firewall.
If this feature isn’t already part of the Claude API it likely will be at some point, in which case many Claude requests will be automated with no way to distinguish between user-driven or otherwise.
Simply put, at the end of the day you lose, AI blocking will not work.
I mean, currently the AI request comes from the datacenter running the AI, but eventually one of two things will happen.
AI models will get small/fast enough to run on user hardware and use the users resources: End result? You lose. The user will set their own headers and sites will play the impossible game of identifying AI.
AI sites will figure out how to route the requests via any number of potential methods so the requests appear to come from the user anyway: End result? You lose. The sites attempting to block will play the cat and mouse game of figuring out what is AI or not AI.
Note, this doesn't mean AI blocking isn't worth doing, if nothing else to reduce load on the servers. It's just not a long term winning strategy.
You may not be able to stop AIs from crawling web sites through technological means. But you can confiscate all the resources of the company that owns the AI.
robots.txt is intended to control recursive fetches. It is not intended to block any and all access.
You can test this out using wget. Fetch a URL with wget. You will see that it only fetches that URL. Now pass it the --recursive flag. It will now fetch that URL, parse the links, fetch robots.txt, then fetch the permitted links. And so on.
wget respects robots.txt. But it doesn’t even bother looking at it if it’s only fetching a single URL because it isn’t acting recursively, so robots.txt does not apply.
The same applies to Claude. Whatever search index they are using, the crawler for that search index needs to respect robots.txt because it’s acting recursively. But when the user asks the LLM to look at web results, it’s just getting a single set of URLs from that index and fetching them – assuming it’s even doing that and not using a cached version. It’s not acting recursively, so robots.txt does not apply.
I know a lot of people want to block any and all AI fetches from their sites, but robots.txt is the wrong mechanism if you want to do that. It’s simply not designed to do that. It is only designed for crawlers, i.e. software that automatically fetches links recursively.
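That split is even baked into Python's standard library: a recursive crawler is expected to consult robots.txt before each fetch, while a one-off request never touches it. A minimal sketch (example.com and the crawler name are placeholders):

    from urllib.robotparser import RobotFileParser
    import urllib.request

    # What a well-behaved *crawler* does before each recursive fetch:
    rp = RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()  # a missing robots.txt is treated as "allow everything"
    if rp.can_fetch("MyCrawler/1.0", "https://example.com/"):
        page = urllib.request.urlopen("https://example.com/").read()

    # What a one-off fetch (curl, wget without --recursive, an LLM
    # resolving a single search hit) does: just the GET, no robots.txt.
    page = urllib.request.urlopen("https://example.com/").read()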
While robots.txt is not there to directly prevent automated requests, it does prevent crawling, which is needed for search indices.
Without recursive crawling, it is not possible for an engine to know which URLs are valid[1]. It would otherwise have to either brute-force, say, HEAD calls for all/common string combinations and see which return 404s, or, more realistically, crawl the site to "discover" pages.
The issue of summarizing a specific URL on demand is a different problem[2], unrelated to the issue at hand of search tools crawling at scale and depriving sites of all traffic.
Robots.txt absolutely applies to LLM engines and search engines equally. All types of engines create indices of some nature (RAG, inverted index, whatever) by crawling, and LLM engines have sometimes been very aggressive about not respecting robots.txt limits, as many webmasters have reported over the last couple of years.
---
[1] Unless published in sitemap.xml of course.
[2] You need to have the unique URL to ask the LLM to summarize it in the first place, which means you likely visited the page already; meanwhile, someone sharing a link with you and a tool automatically summarizing the page deprives the webmaster of impressions, and thus ad revenue or sales.
This is a common usage pattern in messaging apps from Slack to iMessage, and has been for a decade or more, as well as in news aggregators and social media sites, and webmasters have managed to live with it one way or another already.
> Robots.txt absolutely applies to LLM engines and search engines equally.
It does not. It applies to whatever crawler built the search index the LLM accesses, and it would apply to an AI agent using an LLM to work recursively, but it does not apply to the LLM itself or the feature being discussed here.
The rest of your comment seems to just be repeating what I already said:
> Whatever search index they are using, the crawler for that search index needs to respect robots.txt because it’s acting recursively. But when the user asks the LLM to look at web results, it’s just getting a single set of URLs from that index and fetching them – assuming it’s even doing that and not using a cached version. It’s not acting recursively, so robots.txt does not apply.
There is a difference between an LLM, an index that it consults, and the crawler that builds that index, and I was drawing that distinction. You can’t just lump an LLM into the same category, because it’s doing a different thing.
Yes it does. I am the one controlling robots.txt on my server. I can put whatever user agent I want into my robots.txt, and I can block as much of my page as I want to it.
People can argue semantics as much as they want...in the end, site admins decide what's in robots.txt and what isn't.
And if people believe they can just ignore them, they are right, they can. But they are gonna find it rather difficult to ignore when fail2ban starts dropping their packets with no reply ;-)
No it doesn't. It politely requests that crawlers not do so, and if said crawlers choose to honour it, then those specific crawlers will not crawl. That's it. It can be and is ignored without penalty or enforcement.
It’s like suggesting that putting a sign in your front yard saying “please don’t rob my house” prevents burglaries.
> Robots.txt absolutely applies to LLM engines and search engines equally
No it doesn’t because again, it’s a request system. It applies only to whatever chooses to pay attention to it, and further, decides to abide by any request within it which there is no requirement to do.
From google themselves:
“The instructions in robots.txt files CANNOT ENFORCE crawler behavior to your site; it's up to the crawler to obey them.”
And as already pointed out, there is no requirement a crawler follow them, let alone anything else.
If you want to control access, and you’re using robots.txt, you’ve no idea what you’re doing and probably shouldn’t be in charge of doing it.
Do you really think LLM vendors that download 80TB+ of data over torrents are going to label their crawler agents correctly and run them out of known datacenters?
Bluesky / ATProto has a proposal for User Intents for data. More semantics than robots.txt, but equally unenforceable. Usage with AI is one of the intents to be signaled by users
Presumably the crawler that produces whatever index it uses does, which is how it knows what sites to read. Unless you provide it a URL yourself I guess, in which case, it shouldn't.
Yet they respect a lot of things meant for machine-to-machine interaction, like server return codes, cookie negotiations, and CAPTCHAs, if they behave a certain way.
So they sometimes hit bollards and turnstiles made for other kinds of code that execute HTTP requests. They're bots, basically, but better (or suitably) behaved ones.
But how did you find those sites that had the robots.txt to begin with? The LLM must somehow find the existence of those pages and store that information before it can crawl them further or mark them as an acceptable source.
I think a distinction needs to be made between ingesting for LLM training and ingesting / crawling because a human asked it to during an inference session.
I have been talking about the latter, agree the former is abusive.
Let's say you had a local model with the ability to do tool calls. You give that LLM the ability to use a browser. The LLM opens that browser, goes to Google or Bing, and does whatever searches it needs to do.
I think they mean that it's a tool accessing URLs in response to a user request to present to the user live - with that user being a human. Like if you used some webpage translation service, or non-ML summarizer.
There's some gray area though, and the search engine indexing in advance (not sure if they've partnered with Bing/Google/...) should still follow robots.txt.
Yeah, that seems to be a big distinction. If I tell my AI to summarize the headlines from my three favorite news sites every morning, it's just carrying out my request same as if I'd clicked to them, so that seems fine.
But if I say, "Search the web for a low-carb chicken casserole recipe that takes squash and cottage cheese," then it's either going to A) send queries to a search engine like Google, in which case robots.txt already should have been respected, or B) check its own repository of information it's spidered before I asked the question, in which case it should have respected robots.txt itself.
Are you arguing that these are equivalent actions?
The entire web was built on the understanding that humans generally operate browsers, and robots.txt is specifically for scenarios in which they do not.
To pretend that the automated reading of websites by AI agents is not something different…is quite a stretch.
> I the human want the data from that request. I am using a tool to get it for me.
Isn't this a bit of an oversimplification, though? Especially when the tool you're using completely alters the relationship between the content author and the reader?
I hear this argument often: "it's just another tool and we've always used tools". But would you acknowledge that some tools change the dynamics entirely?
> Should I not be able to execute curl to download a webpage because the "understanding that humans generally operate browsers"?
Executing curl to download a webpage is nothing new, and compared to a traditional browser, has about the same impact. This is still drastically different than asking an AI agent to gather information and one of the pages it happens to "read" is the one you were previously navigating to with a browser or downloading with curl.
If you're a content creator who built a site/business based on a pre-LLM understanding of the dynamics of the ecosystem, doesn't it seem reasonable to see these types of "readers" differently?
No, whether I curl it, or I use a browser, or an LLM, it is essentially ALL the same, unless of course the LLM crawls it by itself, without human interaction.
If the scale bothers you, block it, just like how you would block any other crawlers.
Other than that, we all wanted "ease-of-access" (not me though), and now we have it. It does not change anything.
It's reasonable for the content creator to see it differently, but I don't think it's reasonable to expect everyone around the content creator to contort any new approach to the needs of the pre-existing business model.
I agree. This came up in copyright terms too: who is pressing the shutter, and who owns the copyright to the photo taken? I personally think that the copyright belongs to me, because I, a human, made the detailed prompt; the tool just generated it. Do I not own the copyright if I make something using Photoshop? As far as I know, I do. So how is AI, which also needs human action (i.e. to be prompted), any different? Because it is better than Photoshop? That is not a good argument, IMO.
In practice, robots.txt is to control which pages appear in Google results, which is respected as a matter of courtesy, not legality. It doesn't prevent proxies etc. from accessing your sites.
I know an artist that had noindex turned on by mistake in robots.txt for the last 5 years - google, kagi and duckduckgo find tons of links relevant to the artist and the artwork but not a single one from the website.
So not "seems to" or "apparently", but as a matter of fact: robots.txt works for its intended audience.
Given that websites do disappear, or worse, get their content adulterated, and given the long history of the Internet Archive as a non-profit and the commons service it has provided so far, the joke would be to see that bot honor it.
Sorry to intrude with something unrelated, but YC closed the earlier discussion. I saw your comment about Kannel WAP from a few months back and wanted to ask if you know of any WAP Push full-service provider still in operation.
lol, the IA did not start that; if anything they were late to the game. Only the top handful of US-based search engines ever bothered respecting it in the first place.
> Today we’re announcing Google-Extended, a new control that web publishers can use to manage whether their sites help *improve Bard and Vertex AI generative APIs*, including future generations of models that power those products.
they're literally asking to break laws to train AI for national security. A sentence in a press release from 2 years ago is worthless... look at what they're actually doing
I do essentially both: robots.txt backed by actual server-level enforcement of the rules in robots.txt. You'd think there would be zero hits on the server-level blocking since crawlers are supposed to read and respect robots.txt, but unsurprisingly they don't always. I don't know why this isn't a standard feature in web hosting.
For my personal stuff I also included a Nepenthes tarpit. Works great and slows the bots down while feeding them garbage. Not my fault when they consume stuff robots.txt says they shouldn't.
I'm just not sure if legal would love me doing that on our corporate servers...
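For anyone wanting to replicate the setup, the server-level half can be a few lines of middleware. A rough sketch assuming Flask and the stdlib robots.txt parser; matching on the User-Agent header is best-effort, since dishonest bots can simply lie about theirs:

    from urllib.robotparser import RobotFileParser
    from flask import Flask, request, abort

    app = Flask(__name__)

    # Load the same rules the site already publishes in robots.txt
    rp = RobotFileParser()
    with open("robots.txt") as f:
        rp.parse(f.read().splitlines())

    @app.before_request
    def enforce_robots_txt():
        agent = request.headers.get("User-Agent", "")
        # Return 403 where robots.txt merely asks nicely
        if not rp.can_fetch(agent, request.path):
            abort(403)

    @app.route("/")
    def index():
        return "hello"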
I really want these to be able to find and even redisplay images. "Search all the hotels within 5 miles of this address and show me detailed pictures of the rooms and restrooms"
Hotels would much rather show you the outside, the lobby, and a conference room, so finding what the actual living space will look like is often surprisingly difficult.
I've been looking for this as well. I want a reliable image search tool. I tried a combination of perplexity web search tool use with the Anthropic conversations API but it's been lackluster.
I’ve been experimenting with different LLM + search combos too, but results have been mixed. One thing I’m particularly interested in is improving retrieval for both images and videos. Right now, most tools seem to rely heavily on metadata or simple embeddings, but I wonder if there’s a better way to handle complex visual queries. Have you tried anything for video search as well, or are you mainly focused on images? Also, what kinds of queries have you tested?
I find myself Googling less often these days. Frustrated with both the poor search results and impressed with the quality of AI to do the same thing and more, I think search's days are numbered. AOL lasted as an email address for quite some time after America Online ceased to be a relevant portal. Maybe Gmail will as well.
I am still googling for less in-depth queries, because the AI-generated summary at the top of the results is good enough most of the time, and the actual results are just below in case I want to see them.
For more in-depth stuff, it is LLMs by default, and I only go to Google when the LLM isn't getting me what I need.
I notice I have been using the Google AI summary more and more for quick things.
I had subscribed to Perplexity for a month to use their deep research. I think it ran out earlier this week but I am really missing it Saturday morning here.
That thing is awesome. Sonnet 3.7 is more in the middle of this to me. It can help me understand all the things I found from my deep research requests.
I am surprised the hype is not more for Sonnet 3.7 honestly.
Agree and I'm pretty sure Google is seeing this drop internally in usage stats and are panicking.
I'm also certain (but hope to be wrong) that because of this they'll be monetizing the hell out of every remaining piece of product they have (not by charging for it of course).
Compared to OpenAI, who seem keen to maintain mindshare with everyone, IMO Anthropic are far more considered about their audience. They released a report recently on who was using AI professionally, and it was something like 40% developers, with a single-digit percentage for basically every other profession. I think they're focusing on the professional use cases.
Pretty much. From their announcements, Claude seems to me to be about SWEs and coding at the moment. While I understand their decision, I personally find it a bit limiting, and just a little targeted against the SWE profession. If all AI does is disrupt SWEs, without really adding new products and/or new possibilities, then it feels IMO like a bit of a waste, and quite uneven in its societal disruption.
At least in my circle, SWEs are either excited about or completely fearful of the new technology, while every other profession feels like it is just hype and hasn't really changed anything. They've tried it, sure, but it didn't really have the data to help with even simpler domains than SWE. Anecdotally I've had the comment from people around me, from both white- and blue-collar workers: "my easy {insert job here} will last longer than your tech job". It's definitely reduced the respect for SWEs in general, at least where I'm located.
I would like to see improvements in people's quality of life and new possibilities/frontiers from the technology, not just "more efficiencies" and disruption. It feels like there's a lack of imagination with the tech.
I know people in other industries who use AI a lot and like it: accounting, legal, writing (a lot here). I agree that focusing on all verticals, like OpenAI does, is definitely the way to go. Claude's coding capabilities are not that far ahead of OpenAI's though; there is no big moat, and a lot of it is perception and marketing.
Do users pay for LLMs? I haven't seen much concrete data indicating that they do. I don't think the casual utility gains of LLMs have gotten average people so much value that they're paying $20/mo+ for it. Certainly not for search in the age of [doom] scrolling.
I would guess that Anthropic wants developers talking about how good Claude is in their company Slack channels. That's the smart thing to do.
I would say no. While I pay for ChatGPT, Claude, and Perplexity monthly (I don't know why anymore), my wife does not use any at all. She has around 5-10 things she uses on her smartphone, and if she needs something new, there is still Google.
I have only anecdotal data from non-technical friends and family.
I’m referring to average people who may not be average users because they’re barely using LLMs in the first place, if at all.
They have maybe tried ChatGPT a few times to generate some silly stories, and maybe come back to it once or twice a month for a question or two, but that’s it.
We’re all colored by our bubbles, and that’s not a study, but it’s something.
For most people, AI is stuck at GPT-4 and other models on par performance-wise. Anecdotally as well, many people I know who have tried it found it mildly useful, but they are experiencing what coders and other tech workers experienced two years or so ago: lots of hallucinations, lack of context, knowledge, etc. If you went back to those models, you would at best feel like it is an occasional code helper; an autocomplete, really.
A lot of the reasoning model improvements of late are in domains where RL, RLHF and other techniques can be both used and verified with data and training; in particular coding and math as "easy targets" either due to their determinism or domain knowledge of the implementers. Hence it has been quite disruptive to those industries (e.g. AI people know and do a lot of software). I've heard a lot of comments in my circles from other people saying they don't want AI to have the data/context/etc in order to protect their company/job/etc (i.e. their economic moat/value). They look at coding and don't want that to be them - if coding is that hard and it can get automated like that imagine my job.
Excited to see this. I've really been enjoying Claude. It feels like a different, more creative flavor of experience than GPT. I use Claude a lot for dialogues and exploring ideas, like a conversational partner. Having web access will add an interesting dimension to this.
Ditto. I use Claude 3.7 to refine drafts of research papers and ask it “What have I missed?”.
Now I can prompt Claude to ping PubMed and make sure that its suggested references are verified. Each citation/claim should be accompanied by a PMID or a DOI.
That's how I use it as well! It'll also occasionally hallucinate things, but much less often than other AI tools I've tried. Typically I'll just run things by it that I'm questioning myself about, or if I want to solidify a concept, I'll ask it whether my understanding is correct.
It's also fun to ask the same question to multiple AI tools and see how the answers differ. Usually Claude is the most accurate and helpful, though.
Does not really say /how/ it's performing a web search... Is it tapping into its "own" corpus of material or calling out to some other web search engine?
In my quick experiment (asking a question that would naturally lead to content on my own site), it is not doing a real-time request to the site in question. Its answer included links back to my site (and relevant summaries), but there were no requests for those pages while it was generating its answer. So it's clearly drawing from info that has already been scraped at some earlier point. And given that I see ClaudeBot routinely (and politely) crawling the site, I'd guess it's working from its own scraped copies (because why use someone else's if you've got your own...).
Major AI players don't want to use someone else's web index, as it could get cut off or the prices jacked up, etc. Major players want to build their own web index.
And this is why we see our logs overloaded with ABot, BBot, CBot, etc.: every single "AI" company makes its own bot, and they all crawl the same pages over and over.
I stopped using Claude about 2 months ago and went to Grok (the code was better, everything was better, politics aside). I wonder if this update will improve it.
The main issue I find with Claude is that he fights you. He refuses so many requests, and I need 3 or 4 replies to get what I want vs deepseek/grok. I've kept the monthly subscription to help Anthropic, but it's trounced by the free options imo.
I wonder if you use it exclusively for coding, because for general purpose explanation tasks, 3.7 seems absolutely terrible unfortunately.
Back when it was 3.5 you could actually talk and learn things and it felt human, but now it sounds like a McKinsey corpo type in a suit who sounds all fancy but is only right half the time.
I’ve switched back (rather regretfully) to chatgpt, and holy hell is its personality much better. For example just try asking it to explain differences between Neo Grotesque and Geometrical Sans Serif fonts/typefaces. One sounds like a friend trying to explain, the other sounds like a soulless bot. (And if you have 3.5 access, try asking it too.)
I think OpenAI (and likely others) are on the right track in acknowledging that different model tunings are best for different uses, and they intend to add a discriminator that can direct prompts to the best-tuned model, or change model tuning, in real time.
I find it kinda random. I normally keep 4 tabs open, Claude/GPT/Gemini/Grok and paste the problem into all 4. Depending on the problem one will be better than the others.
So in many respects, it searches the same places it used to construct the model? Isn't that functionally bias-reinforcing?
"Look what I synthesise is correct and true because when I use the same top 10 priming responses which informed my decision I find these INDEPENDENT RESULTS which confirm what I modelled" type reasoning.
None of us have a problem with an LLM which returns 2+2 = 4 and shows you 10 sites which confirm. What worries me is when the LLM returns 2+2 = 5 and shows 10 sites which confirm. The set of negative worth content sites is semi infinite and the set of useful confirmed fact (expensive) sites is small so this feels like an outcome which is highly predictable (please don't beat me up for my arithmetic)
e.g. "Yes Climate science is bunk" <returns 10 top sites from paid shills in the oil sector which have been SEO'd up the top>"
We will very quickly enter a Kessler syndrome of information on the internet. All text on the internet will become AI slop being parsed by AI. Real information and human beings will be drowned out by the garbage. The internet will cease to be useful, and we will retreat to corners of the web or to walled gardens. I'm seeing more and more online communities enforce invite-only these days because there's just too much AI slop everywhere now.
Aside, does anyone know of an app like Perplexity for surfing the news in a foreign language (language practice)?
Perplexity's "Explore" tab translates its news to your local language, and its curated news items are all pretty interesting, but the problem is that there are so few of them. I seem to get maybe a dozen stories in a day. I paid their subscription for a month just to listen to the news on my walk, but didn't renew because of this.
A foreign news site like BBC Mundo (Spanish) on the other hand barely has any stories outside of a few niches. Its tech section only has a few stories per week.
Hmm, maybe I want a sort of RSS reader that AI-translates stories for me. But I don't really want to maintain a feed myself either.
Apple News would probably do it since they also have good curation, but afaict they still don't support foreign news sources (why???).
> Apple News would probably do it since they also have good curation, but afaict they still don't support foreign news sources (why???).
ground.news includes sources from all sorts of countries, and also auto-translate headline and the intro, while you can still click to access the source article. Not affiliated, just happy user.
Although I'm not sure how useful it is for language learning, as you cannot (afaik) configure it to only display articles in Spanish or something similar, but if you filter by stories about France, you'll get a lot of French sources (obviously).
Surprised that Claude (the app, not model) not only has done well for so long, but has somewhat consistently clinched the top spot in coding, all without a feature that is considered somewhat of a basic feature for most consumer-facing AI apps.
How significant is the margin compared to, say, OpenAI or Google? Because if I'm paying $20 I want other things too, not just coding. And if the moat for coding compared to other vendors is not significant, it doesn't make any difference tbh.
Fair point. I wouldn't say it's by a lot, because I am getting quite good results with ChatGPT's models too. Part of it could also just be confirmation bias.
> With web search, Claude has access to the latest events and information, boosting its accuracy on tasks that benefit from the most recent data.
I'm surprised that they only expect performance to improve for tasks involving recent information. I thought it was widely accepted that using an LLM to extract information from a document is much more reliable than asking it to recall information it was trained on. In particular, it is supposed to lead to fewer instances of inventing facts out of thin air. Is my understanding out of date?
I have found that for RAG use cases where the source can be document or web data, hallucinations can still occur. This is largely driven by the prompt and alignment to the data available for processing and re-ranking.
I’ll be interested in trying it. My admittedly limited experience with this on ChatGPT has been disappointing. ChatGPT falls for the SEO content that has taken over the web.
As an example, I recently travelled abroad to a popular vacationing spot and asked ChatGPT for local recommendations on what to do. When it gave me answers directly, they were pretty solid. But when it “searched the web” instead, the answers were awful. Every single result it suggested had terrible ratings. It did this repeatedly. One of those times I asked it to pick something with better ratings and it sort of improved but not by much.
Of course this is another tool and maybe Claude uses better sources or a better algorithm, but in this case where there was a concrete number tied to the results, that while not perfect, aims to rate the quality of a result, it still did not filter out low quality answers. I’m not sure I trust these LLMs to do any better when there aren’t such ratings available. The available input data is just not very good, and now LLMs are being used to feed that low quality, SEO machine.
When I try to prompt it with something that obviously needs up to date web search (when will Minneola Tangelos be in season this year?) it says..
"I believe they're usually available from November through March, but I'm not completely certain about the exact timing for this year's crop. Would you like me to search for more current information about the 2025 tangelo season?"
It doesn't just search, it wants me to confirm. This has happened a lot for me.
Funny, I literally just two days ago asked Claude to provide an outline of the functionality of a product, giving it the web site. It of course refused. So I downloaded the text of the site and passed that in, and got mediocre results.
The results based on giving the source URL directly were better. Still a bit generic and high-level and vague, as LLMs tend to be, but better than the text-download version a couple days ago. And of course much easier to generate!
I had tried using monolith [0] to feed webpages into Claude, but all the HTML was too much token context. I ended up Print > Save as PDF-ing somewhat often and that worked pretty well. But just giving a URL is ideal.
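Another option, if you'd rather not wrangle PDFs: strip the page down to its readable text before pasting it in. A quick sketch assuming the trafilatura library (the URL is a placeholder):

    import trafilatura

    html = trafilatura.fetch_url("https://example.com/article")
    text = trafilatura.extract(html)  # boilerplate-stripped main text
    print(len(html or ""), "->", len(text or ""), "characters")
    # paste `text` into Claude instead of the raw HTML

In my experience the extracted text is a small fraction of the raw HTML, which makes the token budget go a lot further.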
> though various sites will block it from time to time.
The page itself describes a --ignore-robots-txt and customizing the user agent. Guess we can just all copy OpenAI and continue to make SourceHut's life miserable /s
It wasn't long ago that a uni senior who worked for a decade+ on Google Search told me that it was hopeless for anyone to try to compete with Google, not because it sees a tonne of signals that help with IR, but because of its in-house AI/ML.
It turns out that the org that built the ultimate AI/ML that runs rings around anything that came before it for NLP (and thus IR) was a sister team at Google Translate.
It isn't inconceivable that a kid might be able to build a Google-quality web search, scalability aside, on Common Crawl data in a weekend. As someone who built re-ranking algorithms for a search engine built atop Yahoo! and Wikipedia (REST/SOAP) APIs back in the late 2000s as a side project (and who experienced the launch and subsequent iterations of Echo/Alexa up close at Amazon), the current capabilities (of even the open-weight multi-modal models) seem too good to be true.
Google itself though is saved by its enormous distribution advantages afforded by Chrome (3B to 5B users) and Android (3B+), aside from its search deals with Apple and other browser vendors.
1. I generally prefer that an LLM not search the web. The top N results are often either SEO spam, excessively long articles created solely to rank well, or long-established websites that gained authority years ago, when Google's crawler and ranking algorithms were less sophisticated.
2. Web search by LLMs is likely here to stay, so I'm curious whether there's an agent-friendly web format. For example, when an RSS reader visits a website, the site responds with an RSS feed. I think we need something similar for agents: an open standard that all websites would support. This could reduce processing overhead and potentially improve the accuracy of the information retrieved (see the sketch below). Thoughts?
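To make the idea concrete, one hedged sketch of what such a convention could look like, analogous to RSS discovery: the site honours an Accept header asking for plain markdown. To be clear, this is a hypothetical convention, not an existing standard (sketched with Flask):

    from flask import Flask, request

    app = Flask(__name__)

    ARTICLE_HTML = "<html><body><h1>Title</h1><p>Body...</p></body></html>"
    ARTICLE_MD = "# Title\n\nBody..."

    @app.route("/article")
    def article():
        # An agent could ask for the lightweight representation...
        if "text/markdown" in request.headers.get("Accept", ""):
            return ARTICLE_MD, 200, {"Content-Type": "text/markdown"}
        # ...while browsers keep getting the human-oriented page.
        return ARTICLE_HTML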
Not to sound snarky, but Anthropic introduced function calling over a year ago... The capability was always there for someone who wanted to spend a weekend coding a tool for it.
I agree. I have been integrating Brave search and DuckDuckGo search with LLMs for about a year. That said, it is so much more convenient having the option of it being built in.
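For anyone curious what that weekend project looks like, a minimal sketch of the loop using Anthropic's tool-use API; run_search is a stand-in for whatever backend (Brave, DuckDuckGo, ...) you wire in, and the model name is just a current alias:

    import json
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY

    tools = [{
        "name": "web_search",
        "description": "Search the web; returns titles, URLs and snippets.",
        "input_schema": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    }]

    def run_search(query):
        # stand-in: call Brave, DuckDuckGo, or any other search API here
        return [{"title": "...", "url": "...", "snippet": "..."}]

    messages = [{"role": "user", "content": "What did Anthropic announce this week?"}]
    while True:
        resp = client.messages.create(model="claude-3-7-sonnet-latest",
                                      max_tokens=1024, tools=tools,
                                      messages=messages)
        if resp.stop_reason != "tool_use":
            break
        messages.append({"role": "assistant", "content": resp.content})
        messages.append({"role": "user", "content": [{
            "type": "tool_result",
            "tool_use_id": block.id,
            "content": json.dumps(run_search(block.input["query"])),
        } for block in resp.content if block.type == "tool_use"]})
    print(resp.content[0].text)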
I stopped paying for Perplexity a year ago, but a month ago I started using Perplexity's combined search+LLM APIs - reasonably priced and convenient.
Good news. I integrated Claude with a scraper to get info from pages, and it was not giving hallucinations 99% of the time. Hope this works out of the box now.
It just breaks my head. We've built LLMs that can process millions of pages at a time, but what we give them is a search engine that is optimized for humans.
It’s like giving a humanoid robot access to a keyboard with a mouse to chat with another humanoid robot.
Disclaimer: I might be biased, as we're kind of building a fact search engine for LLMs.
Feels like a catch-up feature to ChatGPT... Honestly, the biggest holdback for me on Anthropic is the output token limit on Sonnet: 8,000 tokens max output is really limited (against 200k tokens in) compared to other offerings, especially considering that I suspect most Sonnet users are not chat users but API users.
Yes, but you can't limit it to a set of websites. I have tried; sometimes it works, but rarely, and even when it does search your set it will still go off and pick others. In the field I work in there are around a dozen specialist sites, and I just want it to query those. Perhaps I need to develop around it.
Funny how we’ve come full circle—LLMs now search the web to answer queries, which is what search engines did originally. The difference? Now the hallucinations come with citations. Curious how long until "web search" just means summarizing Reddit threads again.
Funny thing is that I have obsidian-mcp-tools installed, and today claude-desktop just started fetching stuff from the web through that, because it exposes a fetch tool to Claude.
A bit OT: does anyone have experiences with Mistral AI as a comparison to OpenAI or Anthropic? I would like to stay with a European company, if they're somewhat equivalent.
I really like the Mistral openly licensed models - Mistral Small 3 is my current favourite local model to run, but only because I've not spent enough time with the brand new Mistral Small 3.1 to recommend it yet (I expect it will be promoted to my favourite local model soon.)
Their user-facing product at https://mistral.ai/ seems good to me - it uses Brave for search (same as Claude does) and has a "canvas" feature similar to Claude Artifacts. I've not spent enough time with that to evaluate if it could be a good daily-driver or not though.
My hunch is that Claude 3.7 Sonnet is still _massively_ better for code, based on general buzz online and a few benchmarks I've seen.
Does the LLM look at or click our ads? If not, it's a self-destructive technology that will get itself blocked, as it consumes resources in an unsustainable way.
If everyone is using LLMs to solve problems, won't LLMs run out of content to mine in a few years? In short, how can the general dumbing-down of LLMs, and the degradation of the content used to solve problems, be avoided over the long term?
For questions about events and problems that arose after 2025, where would LLMs get information to solve those? and who would be asking those at a forum LLMs can access going forward?
1. People will continue to answer questions and post about events and problems after 2025. Eventually LLMs themselves (inside robots) will be observing the world and reporting on anything interesting.
2. The best LLMs today answer questions better than 90% of the people who comment on forums. So if these LLMs have been able to train on all the crap posted on the internet so far, they should only get better as they are trained on high-quality output from the latest (and future) LLMs.
The most interesting data will be collected in chat rooms and apps. There are over a billion LLM users; they act as humans-in-the-loop enhancing the LLM, sometimes even testing its ideas in reality. I wonder what providers are doing with the chat logs.
Are there any downsides to that approach? It seems like we're moving towards empowering LLMs to interact with stuff as if that's better than us doing it for them - is it really?
E.g. say I want to build an agent to make decisions: shall I write some code to insert the data that informs the decision into the prompt, have the model return structured data, and then write code to implement the decision?
Or should I empower the LLM to do those things itself with function calls? (A sketch of the first approach follows.)
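A sketch of the first approach, using the OpenAI client purely for illustration: your code owns the data and the action, and the model only maps evidence to a structured decision. All the names and metrics here are made up for the example:

    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY

    def decide(metrics: dict) -> dict:
        prompt = ("Given these service metrics, decide whether to scale_up, "
                  "scale_down, or hold. Reply with JSON like "
                  '{"action": "...", "reason": "..."}\n' + json.dumps(metrics))
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            response_format={"type": "json_object"},
            messages=[{"role": "user", "content": prompt}],
        )
        return json.loads(resp.choices[0].message.content)

    decision = decide({"p99_latency_ms": 840, "cpu": 0.93})
    if decision["action"] == "scale_up":
        pass  # your own, fully auditable code performs the action

The trade-off: this keeps the action surface small and auditable, while function calling hands the model more autonomy in exchange for less glue code.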
So, referring specifically to the example they show on the front page, what value does this actually bring? The best example they could come up with is a TypeScript migration? Really? Weren't LLMs supposed to be a superior alternative to searching the web? Why do we need to produce more CO2 to do the same thing we could have done at a fraction of the cost, back when Google search was still working?
The CO2 concerns of using LLMs are massively overblown these days (with the exception of o1-pro and GPT-4.5 at least).
The energy efficiency of most models has improved by an order of magnitude since the most widely cited CO2 usage papers were published.
(It remains frustratingly difficult to get accurate numbers though: at this point I think more transparency would help rather than hurt the big AI labs)
I meant that as a side note, but even if we put the CO2 issue completely aside, I am still failing to see what this "feature" brings, and judging by the lame example they picked, Anthropic aren't quite sure either.
If that's true, they are using a separate search API to get search results and feed it into a regular Claude API call. The difference here is that Anthropic is integrating it directly, like OpenAI and Google have. It doesn't look like it's in the API yet, but presumably that's coming. Then, as with gpt-4o and the Gemini models, you can make a single API call and it will do the searching for you and incorporate the results.
Anyone noticed that if you enable the “browse internet” in ChatGPT, it becomes very dumb? It abandons all its intelligence and produces mostly incorrect results.
Like it’s being passive-aggressive, “Oh, you don’t like me as I am and want to augment me with search, let me show you how it is if my brain was only search!”
What is there to even search anymore? Almost everything is gated, and whatever remains public is connected to a faucet that pumps out AI slop at an ever-increasing rate.
The internet consumed itself. Telling someone to, "Just Google it," is now terrible general advice.
Honestly, while this is a great update and all, other AI platforms have had web search functionality for quite some time now. Any explanation for this delay?
I wonder if Claude's API will match Perplexity's dynamic answers. Is there API rate limiting? If so, then the older API pricing would be preferable. Can users switch between the two?
Awesome, but I also do want to say it’s pretty sad it took this long straight up. Literally no excuse. But I’m glad they finally got to a feature that was launched more than a year ago on competitors.
Yeah, I love this stuff! Compiling data from multiple pages into a single paragraph in the time it takes to read one page? Great stuff. I can't imagine living without Perplexity.
Oh, sure, it hallucinates a lot, and in dangerous ways, but even if I have to manually corroborate all the citations, I'm still saving time, especially insofar as it reveals whether or not I'm barking, broadly, up the wrong tree.
It's especially good for comparisons, because the results of two disparate search terms can be collated into the results.
Could this be done without LLMs, with only vector embeddings? Hm, maybe. Algolia is maybe the 80-for-20 option, but does Algolia have a web index?
Excited to see how this compares to Perplexity or Gemini. I remember that ChatGPT used to be able to search the web, but last I checked it couldn't. I wonder why they removed that feature.
About half my requests end up going to web search. But if you ask it for something specific like "find an X-ray image with an abnormality," then it refuses.
The native app that allows for MCP is only officially available on Macs, and the web interface is generally more convenient for non-technical users. Searching and interacting with the web has become a table-stakes feature and was a glaring gap in Claude.
This is likely implemented behind the scenes as an MCP server exposed to their model in the web UI. It is likely that they will enable MCP servers over HTTP+SSE (vs. the stdin/stdout used with Claude Desktop) on the web version in the near future.
I just read about LLM bots DDoSing websites, and I guess more of that is coming soon. Big money is betting on AI eating the web, and the small fishes pay for the bandwidth.