
Investors are getting impatient! ChatGPT has already replaced Google for me, and I wonder if Google is starting to feel the pressure.



> "ChatGPT has already replaced Google for me"

Would you mind elaborating on this?

Like how are you "searching" with ChatGPT?


Some of my searches aren't really searches; they're questions that Google has the information to answer, if it could sum it up. A few weeks ago I was trying to remember the name of a site that showed you two movie posters and had you pick which one you liked more.

Googled "What was the website that showed two movie posters and you picked the one you liked more?" and I got links to reddit, lots to letterboxd, some quora, and a lot more, all irrelevant to my question.

Asked ChatGPT that same question verbatim and

> The website you're referring to is probably "Flickchart." It's a platform where users can compare and rank movies by choosing which one they like more between two movie posters or movie titles. Please note that my knowledge is up to date as of January 2022, and the availability and popularity of such websites may change over time.

Another time I was looking for the release dates of 8- and 16-bit consoles. With Google I had to search for each console individually; sometimes it offered a card with the release date, sometimes it didn't and I'd have to go do more digging.

So I asked ChatGPT and got a nicely formatted list with dates.


Very similar to your second example, I've been using ChatGPT to calculate the total calories in a recipe. Before, I would consult a database like the USDA food database for nutrition info and do the measurement conversions manually. I've tried asking ChatGPT for the same recipe and the total it gave was within 1% of what I calculated. It runs into a couple of snags I've noticed, though.

First, it always gives a calorie count for cooked meat, but it should assume the meat is uncooked since I said it was for a recipe.

Second, it seems to struggle with the concept of uncooked rice. If you ask it to work with 1 "rice cooker cup" of rice, it refuses because that isn't a standard measurement. If you swap in the converted standard measurement (3/4 cup), it still is way off. It told me 3/4 cup uncooked rice is about 150 calories when cooked. That's a third of what the USDA database gives. When you point out that 3/4 cup uncooked rice is a large serving after being cooked, it changes its answer to 375 calories, still about half of what the USDA database gives. But this is fine for me because rice is not typically part of my recipes since it doesn't usually require special preparation.

Overall it reduces a 10-minute task to 10 seconds, but you need to know enough about the ingredients to spot obvious problems in its result. In my case I could see the calories given were way too high for meat and way too low for rice. It gave a better answer after I told it to fix the former and ignore the latter.

I tried a second recipe and the total it gave was 2% under my calculation, but I did not see any obvious error in its result so I could not correct it further.

It is unfortunate that you kind of have to trust that the numbers are correct, but this is no different from the nutrition details on sites like MyFitnessPal, which are often wrong when you examine them closely.
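
For context, the manual calculation ChatGPT is replacing here is just a weighted sum over ingredient weights and per-100 g calorie values from a database like the USDA's. A minimal Python sketch (the gram amounts and the rice/oil figures below are illustrative placeholders, not real USDA lookups; the 144 kcal/100 g for raw chicken thigh is the figure discussed later in this thread):

    # Total recipe calories from per-100 g data, the way you'd do it by hand
    # after looking each ingredient up in a nutrition database.
    ingredients = {
        # name: (grams in recipe, kcal per 100 g of the raw ingredient)
        "chicken thigh, raw": (450, 144),
        "rice, uncooked":     (160, 370),
        "olive oil":          (15, 884),
    }

    total = sum(grams * kcal / 100 for grams, kcal in ingredients.values())
    print(f"{total:.0f} kcal")   # 1373 kcal for these placeholder amounts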


Wait. 150 is one third of the USDA answer, and 375 is half of the USDA answer? What is the USDA answer?

This equation is beyond my paygrade!

Edit: I asked our GPT-3.5 bot to solve this, and it hallucinated "pulling up the USDA database", complete with a "(a few moments later...)" message, before giving me 160 calories as the USDA answer.

I asked the same bot (identical prompts) with GPT-4-Turbo enabled and it went through it "step by step" to say the correct answer is 461 calories: 1/3 cup uncooked is 1 cup cooked, so 1 rice cooker cup (160 g) = 3/4 cup uncooked = 2.25 cups cooked, and 2.25 * 205 = 461 cal.

Is that the right answer? If so, 375 seems far from "half"


The database gives 716 calories for 1 cup of uncooked rice, so 537 calories for 3/4 cup. It was a mistake to say 375 was half of 537; my intention was to point out that it's not close.

https://fdc.nal.usda.gov/fdc-app.html#/food-details/168881/n...

I don't have access to GPT-4 but your results are interesting. I don't understand why it's trying to approximate the yield of the uncooked rice after cooking before giving a calorie estimate. From my testing it doesn't do this for other ingredients like raw chicken.

It seems to be more accurate when giving calorie estimates for a list of ingredients instead of a single ingredient. For example, in a recipe that includes "boneless skinless chicken thighs (1 lb, raw)" ChatGPT gives "about 880 calories" as the answer. But in a new prompt if you ask it only about the chicken and nothing else, it gives "between 1100 and 1300 calories." The USDA database shows 144 calories per 100 grams, so 653 calories per pound. The first answer by ChatGPT is much more accurate.

I have no idea why this is the case, and I bet this would be easy for a researcher to demonstrate if true. Alas, I don't have API access.
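
For anyone double-checking the arithmetic behind those two figures:

    # 3/4 cup of uncooked rice at the USDA figure of 716 kcal per cup
    print(716 * 0.75)          # 537.0 kcal

    # 1 lb of raw chicken thigh at 144 kcal per 100 g (1 lb = 453.6 g)
    print(144 * 453.6 / 100)   # ~653 kcal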


You should probably consider most calorie calculations to be ±50% anyway, due to huge differences between human digestion and the way calories are estimated in a lab.


Google is kind of becoming a "System for double-checking if GPT is hallucinating".

IMO Google should convert their search box to a Bard chat input, and you get a hybrid of Bard conversation with real links from their search engine.

It's actually astounding that, in the face of GPT's rapid rise, that search box is still an old-school search box, looking dumber and less attractive every day.


That's Bing Copilot, and it's still not that impressive; it's just a search engine with Clippy, then.

Google can't change for now; in doing so they would undermine all the AdWords accounts, the real customers of Google, paying six figures to stay on top of the SERPs.

But the other competitors can and will.

So it goes


Could it be that it's a scale problem?

How many queries per second does OpenAI get vs Google search?


Good point. Something else that occurs to me is that Google may be between a rock and a hard place with their advertisers. Currently advertisers pay a lot of money for top of the page "sponsored" results, and there's probably not an immediately straightforward solution to integrating Bard while maintaining the visibility of those ads.


Even if they integrate ads well, the simple fact that Bard is costlier to run will hurt Google. If search engine profits go down by 60%, MSFT and OpenAI can still pursue it, but Google can't.


Let ads die! They have virtually no lift over organic search anyhow.


It feels like Google was able to enshittify their results by loading them with ads and going for longer 'engagement' times, because it used to be worth it to persist with Google until you got their best answers.

They optimised for profit. Now what?

If they address the competition then they have to undo that optimisation and cut into their own profits.

My guess is they won't settle for merely extortionate profits, they won't address the competition fully and so people will flip to using LLM-first search.

I'm hoping that in a year or so we're asking why Google went from being the top search on Bing to being only one of many - principally LLM-based - tools people are using for getting answers (most of what 'search' represents now). IMO Google could do with knocking down a peg or two.


It’s true. “Making the world’s knowledge accessible” is in tension with driving traffic to ads.

It would be astonishing if a company could succeed at scale by charging subscriptions for search and AI services instead of ads. Google would be truly doomed.


> Google is kind of becoming a "System for double-checking if GPT is hallucinating".

If you have to double-check with Google, why not just start with Google and skip ChatGPT altogether?


Because instead of spending minutes to hours collecting, sorting, filtering, parsing, and synthesizing Google results, chatgpt provides a near instantaneous answer that is often correct. It's then trivial to validate via Google.


Google is pretty good at short-answer questions (the occasional summary at the top can do a good job, and "people also ask" section can be pretty good) but if you ask something like "What are all the tax implications of moving to France?", those "smart" sections tend to be useless, and then you get sent on a wild goose chase of going to websites of varying quality.

On the other hand GPT will give you a really good summary of all the issues, personalized to your financial situation, and you can dig into details with followup questions. Then if you want to verify a particular tidbit you can go to Google.


Google is slowly adding a lot of small AI features though, for example recently they added these automatic "topic" search suggestion pills, or extended their auto-generated Q&A sections. The latter are hilarious, because whenever you expand a question, new questions are generated, ad absurdum if you want :)


Microsoft Bing makes the same mistake: if I'm using your browser for chat, why do you force me to search first? :(


The first is maybe a good example of where an LLM-integrated search engine makes sense, because "compare two movies" has flickchart as the third option for me. But it is nowhere to be seen for "compare two movie posters", which is how I read your search originally (and indeed flickchart is about ranking movies, not the posters themselves... obviously).

Anyways an LLM clearly teased that out whereas if you misremember or misread something a straight search is going to be bad.

Most of my searches are the opposite. I wanted to know about an obscure movie from the 80s with a toy helicopter. Google very neatly suggests Defense Play (correct), but most LLMs I've tried end up just suggesting very popular films with a helicopter, and it ends up being quite difficult to get them to give me information about obscure stuff. Also, with that same search the LLM suggests a bunch of incorrect films since (and I figured this out later) it turns out that it was all sourced from a single forum thread from the 90s where a bunch of the posts suggested movies that don't have toy helicopters in them. Go figure.


Thanks, this gave some good insight into GPT-4. If I provide the entire Wikipedia page contents but blank out the movie name and director name, it can't recall it. https://chat.openai.com/share/c499e163-3745-48c3-b00e-11ea42...

However, if I add the director it gets it right: https://chat.openai.com/share/a602b3b0-5c17-4b4d-bed8-124197...

If I only give it 1980s film and the director's name it can still get it. https://chat.openai.com/share/d6cf396b-3199-4c80-84b9-d41d23...

So it's clearly not able to look this movie up semantically and needs a strong key like the director's name.

EDIT: Digging deeper, it's clear the model only has a very foggy idea of what the movie is about: https://chat.openai.com/share/d0701f53-1250-421e-aa4b-dc8156... People have described these types of outputs as the text equivalent of a highly compressed JPEG, which seems to fit well with what's going on here. It gets some of the top-level elements right and kind of remembers there's some kind of vehicle that's important, but it has forgotten all the details, even the date the movie was released. But unlike a human who might indicate their fuzziness (was it a helicopter or a submarine?), GPT-4 gladly pretends it knows what it's talking about. I think it's likely a solvable problem; the model probably has the information to know when it's confident and when it's in a fuzzy JPEG region, but the current alignment isn't doing a great job of surfacing that.


"compare two movies" on Google doesn't even have flickchart for me, and rewording my question to not involve posters also doesn't have flickchart in the results.

So, get as pedantic as you'd like, but Google really dropped the ball. ChatGPT gave flickchart with both questions.


These (I call them) "reverse-dictionary" searches are absolutely amazing on ChatGPT! I've asked it several times things like "what's the word you use to describe a situation that's <like this> but slightly <like that>?" and ChatGPT consistently "gets" what I'm looking for. What a time to be alive.


I did this the other day. For some reason, I was trying to think of a word for nationalist a-holes, and it was something like Django, and I thought, is it Jengoism? But that didn't sound right...

So I asked ChatGPT: "There’s a word that sounds kinda like Django but it’s like jingoistic"

Turns out I was right, it's "jingoistic", but then I could go on and ask:

"Give me the definition of the word, then."

"The term "jingoistic" refers to a form of extreme and aggressive patriotism, often characterized by a belligerent foreign policy. It denotes a belief in the superiority and righteousness of one's own country, sometimes leading to the denigration of other nations and the advocacy of aggressive political and military actions."

"Where does the word come from?"

"The origin of the word "jingoistic" traces back to a popular British song from the 1870s. The song, with the refrain "We don't want to fight, but by Jingo if we do, We've got the ships, we've got the men, and got the money too," was sung during the Russo-Turkish War (1877-1878). It expressed support for the British government's aggressive stance towards Russia.

The phrase "by Jingo" in the song is believed to be a minced oath, a euphemistic expression used to avoid swearing. "Jingo" here stands in for a stronger word, likely a reference to God. Over time, the term "jingoism" evolved to describe an aggressive, belligerent form of patriotism and foreign policy, and "jingoistic" is the adjective form used to describe someone who exhibits this attitude."

Fascinating! For anyone who is interested in learning, there hasn't been a tool like ChatGPT since the internet itself!


Important note here - ask for the definition in a separate thread, otherwise if it hallucinated the word it will still keep hallucinating the definition.


on GPT4?


Great idea.

Me: “staircase wit” is a very charming phrase. What are some other unexpected phrases that might also be fun to know?

ChatGPT: <I originally included the response, but you can ask it yourself if you care to. I feel this was a spur-of-the-moment idea and, rereading it, I feel it’s just like Googling something and posting the result. Not really a high quality contribution on my part.>


> So I asked ChatGPT and got a nice formatted list with dates

I might be an outlier here, but to me this wouldn't be useful at all. I wouldn't trust ChatGPT to get it right, so I'd go to wikipedia to double check, at which point the amount of effort saved is little to zero.


I should have mentioned I was only looking for loose dates, and am also to the point where I don't trust the information Google offers in its cards, as it frequently doesn't relate to what I'm searching for.


> I was looking for the release dates of 8 and 16-bit consoles

This is a case where search has taken a step backward. The old Google would have worked for a simple search like that, "NES (or Sega, whatever) + 'release date'", and simply returned the best results that had those two parameters in them. Today we can't have that, because they make more money intentionally fuzzing your search parameters so you accidentally click on sponsored content.

I think we're going to see a lot more of this: renewed excitement and enthusiasm when A.I. "discovers" things that plain old imperative algorithms figured out 20 years ago.


> What was the website that showed two movie posters and you picked the one you liked more?

Google Bard now answers this with the first suggestion being Flickchart

I also got a clean list of release dates for the console question: https://g.co/bard/share/ceb0eac6c69f


I asked Bard, verbatim. Its first response was that there were many sites that fit that description, and it asked me to narrow it down. Since I didn't have any idea how to do that, I just asked it to list them. It listed 4 (the top was Flickchart), but also noted there are many more websites like that.

https://g.co/bard/share/b58043d8aca0


How do you determine whether or not ChatGPT just made up whatever answer it gives you?


How do you determine if the webpage Google gave you is honest? Most people don't seem to care.

Phind provides references; the problem is that as the webpages used to feed LLMs become written by LLMs, we're going to be up to our necks in even more [subtly] wrong information than the currently very widely peddled disinformation from advertisers and political groups.


> How do you determine if the webpage Google gave you is honest? Most people don't seem to care.

That's the thing that surprises me the most about these "How do you know ChatGPT is correct?" questions. They seem to expose the fact that a lot of people were taking whatever they found on the internet (sites they found on Google, Wiki pages, Reddit comments, etc.) at face value without checking into it further. Most things people find on the internet are potentially inaccurate and need to be double checked.


Google gives me sources for answers and I can evaluate them to determine whether the answer is correct. Those sources are often forums, where a discussion about a topic from many different sources may occur. Often, I am checking to see whether several different sources are giving the same answer. Google enables me to exercise media literacy. When ChatGPT gives me an answer, I have no idea where the information came from, nor do I have any tools to evaluate the answer.


Yeah, it's really good for those "I'm trying to remember a thing, here's a bunch of random details I remember about it, what is it?" kinds of queries.


You've already got a lot of examples, but here's one more: I recently wondered "How many weekdays are there between Jan. 11th, 2023 and Mar. 11th, 2023?" and got GPT to write the Python code and run it to get the answer. It worked first try; I could inspect the code it generated, see that it looked correct, and the result was within my original estimate. Took less than one minute.
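
For reference, a minimal sketch of that calculation (not necessarily the exact code GPT generated) could look like this:

    import datetime

    start = datetime.date(2023, 1, 11)
    end = datetime.date(2023, 3, 11)

    # Count Monday-Friday dates in the half-open range [start, end)
    weekdays = sum(
        1
        for offset in range((end - start).days)
        if (start + datetime.timedelta(days=offset)).weekday() < 5
    )
    print(weekdays)  # 43; excluding Jan 11 as well (an open interval) would give 42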

I had a question about adding new RAM to my computer, about what things I should take into account since the original brand no longer makes paired dimms that match my current spec. It gave me a big bullet list of all of the things I should compare between my current ram, my current motherboard and any new ram I would choose to buy to ensure compatibility.

Both of these are things I might have gone to Google (or even reddit) for previously but I believed I could get faster answers from ChatGPT. I was right in both cases. I didn't have to construct a complicated query, I didn't have to filter SEO spam. I just asked the question in natural language as it appeared in my mind and ChatGPT gave excellent answers with very little delay.


FYI, Wolfram Alpha is good for this type of query too: https://www.wolframalpha.com/input?i=How+many+weekdays+are+t...


Wolfram Alpha has very much the same effect as Siri for me. Often I'll remember it exists and I will think to give it another try. But whenever I try it I get frustrated since it seems to fail on whatever I decided to attempt in that moment. It is maybe just unlucky but it is my experience. I am certain there are millions of things it can do, but for some reason the one thing I try on it always seems to fail.

On the other hand, ChatGPT does seem to give me good results the majority of the time. It certainly fails or hallucinates and I always feel I have to double-check it. However, it just feels more reliable as a first stop compared to Siri or Wolfram.

I don't want to have to think "is this a query Siri can handle?" or "will Wolfram Alpha manage to work for this query?" - I just want to get a pretty good answer quickly with no hassle.


Which ChatGPT can utilize with the plugin! (although I'll admit I just go straight to the site if I need to use it)


...although ChatGPT also has a Wolfram Alpha integration


Why write python code for that (through any means) when there are already simple services to do it for you?

https://www.timeanddate.com/date/duration.html


First, I would need to know such a service exists. Given that I didn't know the service exists I would have to Google for such a service. Which is exactly the point in question.

So, let's say I Google for such a service and I make it past the 3 or 4 ads at the top of the search results and however many SEO spammed sites and I get to the site you posted. I literally started writing a response to you saying "it doesn't seem to count only weekdays" but in order not to be wrong on the Internet I went back and checked and buried in the interface is a link "Count only workdays".

So, my answer to why: It was actually faster and easier using ChatGPT to get it to write Python than it would have been for me to use Google to find the site and then to use the site. If I have to do the same thing again in the future I will use ChatGPT rather than try to remember this website url or trust Google to direct me back to it.


Google Bard replies with: "There are 43 weekdays between January 11th, 2023, and March 11th, 2023."


It seems Wolfram Alpha appropriately excludes March 11th for the math folks (e.g. kind of like SQL BETWEEN, syntax convenience, etc) and Bard appropriately includes January 11th and March 11th, as that is how it would be commonly interpreted.

Edit: or not, March 11th is not a weekday. Though I count 43 weekdays including Jan 11th, so perhaps Wolfram is using an open interval while bard is using a closed interval.


How many days are between Sunday and Saturday of this week (I live in a place where weeks are generally seen to start on Sunday and end on Saturday)?

Five, right? Would you say seven?

Guess I must be a "math folk."


No, I would say 6.

How many days are there between today and tomorrow?

By definition, there is exactly 1 (one) day. That's what tomorrow means :)

Your math would say zero.


Which day is between today (Thursday) and tomorrow (Friday)?

There are no days between today and tomorrow. No things _between_ this thing and the next thing.

I find your mathematics pretty confusing.


In most contexts for time differences, a "day" is a time interval (24 hours) instead of a "concrete thing".

If my job contract says that there has to be "one day between" my resignation letter and my handing back my badge, then it means I can resign on Monday and give back the badge on Tuesday (the next day; "one day after"; because "one day" has elapsed). Not Monday/Wednesday. Otherwise "zero days between" these events would mean Monday/Tuesday (with your math), and Monday/Monday is also zero?

But I guess we're just rehashing the old argument on whether indexes should start at zero.


Small nit: BETWEEN is boundary inclusive, testing membership within a closed interval (at least in all SQL dialects I’ve used)


Personally, I don't "search" with ChatGPT. I ask and talk with it, and that's the big deal and the reason why the current query-based search is dead. Think about your typical stackoverflow question. With Google you have to come up with a good query and then start the tedious process of looking at the results. With ChatGPT you can directly ask for results, redirect the conversation, etc.


Oh it's even better than that.

I literally had my cursor in my config file the other day and didn't know the option for disabling TLS verification (it's for an internal connection between two private certs), and I literally just put my cursor in the right place and then asked Copilot what I needed to disable verification, and it returned the correctly formatted Elixir code to paste in, 2-3 lines. And it was correct.

And I then googled for the same thing and I couldn't find that result, so I have no idea how Copilot figured it out.


Same here. And unlike stackoverflow or any other forum, if you have any additional questions, you don't have to wait for an answer (which could take seconds, years, or never).


And it's not judgmental. It will recommend a better way of doing things, but especially when you explain why you're doing a certain thing, it will help with actually doing it.


unfortunately, this will invariably turn the Internet into a barren wasteland.


Perhaps, or make low-quality content less prominent on the greater web, such as many sites that would never turn a profit except for blasting a bunch of ads.


I think you misunderstand.

If you get your content via a proxy, how many content generators will we lose?


I'm not OP, but happy to answer.

GPT4 has plugin support. One of the plugins is Internet access via Bing. It automatically chooses which plugins to call upon based on the context it infers from your question - you don't have to select anything.

Here's an example: https://chat.openai.com/share/be3821e7-1403-44fb-b833-1c73f3...

It correctly finds a texture atlas example by discovering it nested inside of Bevy's github.

Note that it didn't summarize, since I didn't explicitly say to consider summarizing. I consider this poor behavior, but I'm confident it would elaborate if I followed up. The initial seed prompt by OpenAI encourages concise answers (likely as a cost-saving measure, but also for brevity).

I realize this is just a glorified "I'm Feeling Lucky" search, but I find it to be a much better UX, so I default to it over Googling. It's nice to be able to seamlessly transition from "search" to "brainstorm/discuss" without losing context.


I searched for "github bevy rust texture atlas code" and the first link is what ChatGPT provided as well. There are 9 other links in the Google search. Why would you type such an elaborate prompt when you can just Google the keywords and get many more results along with the main one? https://www.google.com/search?q=github+bevy+rust+texture+atl...

I have tried using these things for search, but among the hallucinations and lack of different options in the response, I still find searching on Google or other search engines superior.


Because it's a one-size-fits-all solution. If the thoughts in my head go from "Can I see an example of this?" to "Okay, can I give you some of my code and have you refactor it using this example?" to "Now I want to actually edit the texture atlas", I don't have to switch programs or exert effort.

It's really convenient.

For a less contrived, more impressive example (multi-modality is insane!), see these: https://imgur.com/a/iy6FkBO

The above example shows me uploading 16 sprite tiles to GPT. The files were labeled 0-15 on my system. I uploaded them in two parts because there is a 10 file upload limit. I wanted to create a sprite sheet from these sprite tiles and didn't want to open an editor. So, I had it do it. After it worked, I realized I needed the three sprite sheets in varying colors (dirt, sand, and food) so I had it find/replace the dirt color with sand/food color. It then gave me download links to all three and all three were good results and saved me time.

(and yes, I know I don't have to compliment it. It's fun and makes work more enjoyable for me)


Wow, that’s a nice use case. Protip with multiple files - you can just upload a zip file.


Ooh, thanks for the tip!


I also use ChatGPT for most things I used to use Google for. Just formulate your search query as a question and type it into ChatGPT; it's not much more complicated than that. Looking over my ChatGPT history over the last week or two, I've asked it for stuff like what I should eat if I have certain fitness goals, how to improve meals, how to work out more effectively, lots of coding-related questions, which types of heating for a house are most cost-efficient, etc. For almost all those questions (minus the coding ones) I know Google will return blogspam, so I didn't even bother.


> I've asked it for stuff like what I should eat if I have certain fitness goals, how to improve meals, how to work out more effectively

The sad reality is that typing this into google would have given you AI generated content, anyways. Might as well use the best model for it.


I wouldn't say it's replaced search for me (Kagi in my case) but for anything where I'm looking for an answer with a bit of background ChatGPT takes the cake. It completely removes the need for me to click through to multiple websites and sum it up myself manually.

In the same way google/search made it possible to answer a question in real time in a group of friends, ChatGPT does that, but better in most cases. Yes, you have to deal with hallucinations, and while they happen less often they do happen, but you have to deal with crap in web searches as well.

Search is a super-power (most people suck at searching) and being able to grab information via ChatGPT feels very similar.


For searches like "how do I", "what is" and similar, I'm asking GPT instead of Google; it saves me from having to scan a lot of blogspam and referral tables and gives me direct access to well-formatted information. It's got to the point that I disable web searches so the AI is not influenced by them. For example, try asking for a restaurant with good food of a particular type, with and without access to the internet, and chances are the collective knowledge summarised in the non-internet answer is going to be way better than the marketing regurgitated in whatever the big first response is, albeit not as current.


I’m not OP, but I do far fewer Google searches now as well.

Prior to ChatGPT, the majority of my Google searches ended up on either Wikipedia (for direct information), Reddit (for opinions/advice), or StackOverflow (for programming questions).

Now all those use cases can be done by ChatGPT, and it’s faster, especially because it requires less skimming to find useful data.


In addition to the other response, you can ask ChatGPT to search for you (via Bing) and provide a cited answer, or you can ask it to curate a set of results that might match your search query.

Here’s a humorous example from a recent GPT-mediated search: https://chat.openai.com/share/ec874cd5-7314-4abc-b169-607601...


I ask quick coding questions exclusively to ChatGPT. It's extremely direct and quick compared to stackoverflow (or, god forbid, any other website with 3 pages of someone's life story first), and if I paste in the segment of code I'm working on, it'll even show me the answer using my own code.


Same, also Google is now ranking tons of spammy SEO'd coding sites above useful ones like StackExchange. I could restrict to StackOverflow, but maybe the answer is on dba.stackexchange or something instead, plus it's annoying.


1. Most coding and documentation questions now go straight to GPT-4

2. Most quick general purpose questions like "What is 4-month sleep regression in babies?" go to GPT-3.5

3. If I want to deep dive on a topic, I find myself either using one of the custom GPTs (Grimoire is great for coding), or increasingly, talking to it via voice chat. It's particularly great if I'm walking around the house doing chores and want to learn something I would otherwise turn to Wikipedia for (say, the successors to Genghis Khan and the various Khanates).


> It's particularly great if I'm walking around the house doing chores and want to learn something I would otherwise turn to Wikipedia for (say, the successors to Genghis Khan and the various Khanates).

This sounds like a great use case. What is your setup for this? Do you have ChatGPT connected to a smart speaker?


No, I just use my airpods and the iPhone app. I use the voice chat and ask questions, then follow up questions.

Like I was reading a book about Genghis Khan the other day, which got me into Wikipediaing all his sons and grandsons. While doing chores, I asked chatGPT "tell me about Genghis Khan's sons", and then follow up questions like "what was the name of the khanate or empire Ogedai founded?"

It's an absolutely fantastic tool for diving into a new topic.


>While doing chores, I asked chatGPT "tell me about Genghis Khan's sons", and then follow up questions like "what was the name of the khanate or empire Ogedai founded?"

You actually trusted the information it gave you?


If I trust Wikipedia for this info, then I have no reason to distrust it.

And I’m asking about Ogedai Khan and the Ilkhanate, not my child’s health. It’s okay if I learn something wrong :D


With time you learn where it’s trustworthy and where you need to check for bullshit.


I think the Star Trek computer has actually been achieved now. And Google didn't get there first.


Not the OP, but ChatGPT has not replaced Google for me just yet. I use it increasingly to find stuff online now, though, and it's really intuitive and helpful, with much less 'noise' than you normally get from Google search. But as of now the online service is far from smooth (intermittent), not as fast and seamless as Google search.

Granted, I use ChatGPT-4 with a subscription, but if the online RAG browsing feature is later opened to the public for free, I can see more people abandoning Google for casual searching (i.e. most Internet users). The keyword here is 'casual', and what the LLM gives us is an intuitive way of searching, even with misspellings or missing words.

I think Google is realizing this, and before this exodus really happens it needs to do something; otherwise the adage "nothing lasts forever" will come true sooner or later for Google. The main thing for Google now is how to monetize this technology, assuming they have a similar product, because all this while they have been struggling to generate income with DeepMind and, voila, ChatGPT just showed them how to do it. Now it seems they are taking a page from Apple (the highest-ranked company by market value) by providing tiers of service (Ultra, Pro and Nano), and I think the Ultra will most probably be subscription-based.


I've had dozens of moments over the past few months where I got frustrated at the lack of answers from a Google search, remembered ChatGPT, pasted the exact same query into its chatbox, and got an immediate answer (which was also way clearer than any answer from Google, or any blogspam site Google could have sent me to).


I've had the same experience. Now ChatGPT is my go-to.

I often paste error messages with context into the chat and ask what's wrong.


Searching is frequently a means to the end of answering some question. I find myself more often asking the question directly, with both Google and ChatGPT. ChatGPT comes to mind sooner for a question that requires two or more searches.


I'm surprised no one has mentioned Perplexity yet. It uses ChatGPT and a public web searching API to provide answers. For cases where you absolutely do not want hallucinations, but you also want information that requires a search and reading a few web pages, it is far superior to a Google search alone.


I use Kagi’s AI stuff for search all the time. Just type in a question and add !fast… it’ll search the web, write a response based on what it finds, and give you the links. It’s not perfect, but it’s at least as good as Bing with a much less annoying UX.


Simple: you don't (have to), because searching was a means to an end. And now that end is served through a different product: a conversational agent.


I just did this responding to another comment. My late night / slightly drunk brain wanted to know:

"What is that annoying thing where I have to click on on all the websites. Because of cookies and Europe"

Google: Links to things full of SEO and ads and maybe answer the question eventually.

ChatGPT: The "annoying accept" button you often have to click on websites, particularly in Europe, is related to the General Data Protection Regulation (GDPR). The GDPR is a regulation in EU law on data protection and privacy in the European Union and the European Economic Area. It also addresses the transfer of personal data outside the EU and EEA areas.

Websites are required to get your consent before they can store or retrieve any information on your device, such as cookies. Cookies are small data files used to remember your preferences, login information, and browsing history. The GDPR mandates that websites must inform users about the cookies they use and obtain their consent before placing these cookies on their devices.

This is why you see those cookie consent notices – websites are legally obligated to ask for your permission to use cookies, and you must actively agree to it, usually by clicking an "Accept" button. This process aims to protect your privacy and give you more control over your personal data online.


Every time I see people proud of ChatGPT all I see is it being wrong. Is this really what people are so hyped about?

Cookies are fine if you're just storing user preferences; no banner is needed if the only localStorage/cookie usage you have is purely functional and doesn't serve tracking purposes. GitHub, for example, doesn't have a cookie banner as far as I remember, but they're definitely using various cookies.


It's the wrong answer, though.

Websites are required to get your consent before they can track you. Storing cookies or other information is totally fine if it is purely functional, for example a login session or a dark mode preference. Similarly, tracking without consent is also forbidden even if it is done without tracking cookies.


Using cookies for any of these purposes does not require a banner, though.


ChatGPT does Bing searches automatically then consumes the results for you.


A lot of people on HN were very dismissive of chatGPT. I think you missed the boat. It's way beyond a stochastic parrot right now.

Whatever you call it, this thing is the closest to a human that a machine has ever been. Talking to chatGPT is quite close to talking to a human being that has the knowledge of all of google inside his brain.

If you're a developer and you're not paying for chatGPT or copilot you are literally operating at a disadvantage. Not a joke.


Yeah I was one of those. Now that the power it brings has dawned on me I'm trying to integrate it everywhere I can with a "where was this thing for half my life" feeling. I truly think it's a bigger revelation than Google was when it first appeared.

There's definitely something disquieting behind the elation.


Of course.

First of all this technology is on track not to just assist you better, but to replace you.

Second it's not human. It is not explicitly bound by the morals and behaviors that make us human. Saying that it's not human is different from saying that it can be more intelligent than a human. This is the disquieting part. If restrictions aren't deliberately put in place it could probably give you instructions on how to murder a baby if you asked it to.

I think it's inevitable that humanity will take this technology to the furthest reaches it can possibly go. My strategy is to take advantage of it before it replaces you, and to hope that the technology doesn't ever reach that point in your lifetime.


I feel like the second part is a bit exaggerated. Humans inherently also aren't "made human" by something, there's no universal standard for morals and behaviors. You could also get reasonable "murder instructions" from an average person - it's not exactly forbidden knowledge, with how commonly it's depicted in media. Hell, I'm pretty sure there are detailed instructions on building a nuclear bomb available online - the reason why they're not viewed as some extreme threat is because the information isn't dangerous, having access to machines and materials required is.

As for the last paragraph - if the effects truly keep scaling up as much as people expect them to, I'd want society to be restructured to accommodate wide-reaching automation, rather than bowing down to a dystopian "everybody must suffer" view of the future.


Humans _are_ inherently "made human" by a long path of evolution. We have a set of conflicting heuristics that serve as our initial values and which are more helpful than harmful on average. We then use those to build our moral patchwork.


Pretty cool that evolution has helped us work out consistent and rational solutions to the ethics of gun ownership, abortion, and nuclear proliferation.


No, only the basics related to survival have evolved instincts. Modern concepts like abortion did not have millions of years of natural selection to evolve instincts around them.

However, it is universally reviled to kill babies or to rape toddlers and slice their faces off for food. This is identical across all cultures. The basest morals are universal, and so is disgust; the high-level ideologies like abortion are just made up.

These high level ideologies are attempts to make sense of moral instincts that only existed to help us survive. For example abortion. It's the extension of your instincts to avoid killing. At what point does decapitating the head of a fetus to abort the birth become disgusting? The third trimester or before that? You're trying to rationalize your base moral instincts into a codification of law. It's almost pointless because these moral instincts weren't evolved to be logically cohesive anyway. They're just like feelings of hunger and pain.

Evolution never had to answer that question, so it didn't give us any answers. But decapitating a 1-year-old baby? Now that's universally reviled, because it affected the survival of the human race. It's so reviled that I may even get voted down for using this as an example. It's the perfect example though; the revulsion is so much stronger than for abortion that some people can sense it's not a cultural thing but more of an inborn instinct.

The practical consequences of abortion and of decapitating a 1-day-old baby are virtually identical, though. But even someone who is against abortion will still sense a gigantic difference. That sense is an illusion, a biological instinct that bypasses your rational thought.

Go here for more information:

https://radiolab.org/podcast/91508-morality

https://www.joshua-greene.net/research/moral-cognition


A huge part of morality is genetic and biological. There is a huge amount of science backing this up. https://www.damninteresting.com/the-unburdened-mind/

In fact there exist people on this earth with zero morals, and this can be observed in their genetics and brain structure. The popular term is psychopathy, but the newer, politically correct term is antisocial personality disorder. These people would literally feel nothing while slowly plunging a knife into your face.

How society structures itself will be more an emergent consequence of the aggregate behavior of individual actors fulfilling their own selfish needs than it will be a central power attempting to "restructure society". Because of this, "suffering" is and always will be a valid possibility.


I'm not OP, but I still feel kind of confused by people saying that ChatGPT is a 100% equivalent replacement for search engines. I'm not saying that LLMs aren't extremely impressive at their current stage, but the use cases for the two are different, at least for me. In my mind, LLMs seem to be more useful for open-ended questions, problem solving, and formulating questions that wouldn't be suited to a search engine. But when I use Google, I'm usually not looking for answers, but for specific places on the internet. If I need to find the email of a professor at my university, or a Github page for a project, or the official website of some software I need, I don't see why I'd need to replace Google with an LLM for it.


True, but their use cases do intersect quite a bit. This is also ignoring the fact that ChatGPT-4 will actually use Bing to search for things when it feels the need to do so. It will literally tell you when it does this. The LLM is no longer just generating text; it is taking action well outside the boundaries of text generation.


Not 100% equivalent, but I definitely use ChatGPT more than Google to solve my problems nowadays.


I got unbelievably better results searching with Bing + ChatGPT for the full-page newspaper ad that Trump bought in the 80s in the NYT and other newspapers to shit on NATO (or something similar). With Google I got absolutely nothing, even rephrasing the search in multiple ways; with Bing + ChatGPT the first link was a website with the scanned newspaper page with the ad. I think that Google's search dominance is pretty much gone. The results are full of SEO'd-to-death websites rather than anything useful.


I wonder what advertising will look like with this. Will they suggest products in the response? Like “Top ideas:…” and the LLM’s response.


The bing version of ChatGPT already does this. It might be specific to USA, but try asking it for a recommendation of a 1500w space heater for a small room. Every suggestion will have a link to an affiliate page that says [Ad] next to it.


Embedding search of the nearest products most applicable to the LLM response. Prompt augmentation: "Rewrite your response to include promotions of the following products without being obvious that you are promoting them."
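
A rough sketch of how that could be wired up, with a toy bag-of-words "embedding" standing in for a real embedding model and a made-up product catalog:

    import math
    from collections import Counter

    # Toy stand-in for a real embedding model: bag-of-words term counts.
    def embed(text):
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in a)
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    # Hypothetical product catalog with precomputed "embeddings".
    products = {
        "1500W ceramic space heater": embed("compact 1500w ceramic space heater for small rooms"),
        "smart thermostat":           embed("smart thermostat schedules heating to save energy"),
    }

    draft_response = "For a small bedroom, a 1500W ceramic heater with a built-in thermostat is usually enough."

    # Embedding search: find the catalog product nearest to the draft LLM response.
    query = embed(draft_response)
    best = max(products, key=lambda name: cosine(query, products[name]))

    # Prompt augmentation: ask the model to rewrite its answer around the winning product.
    augmented_prompt = (
        f"Rewrite your response to include a promotion of '{best}' "
        "without being obvious that you are promoting it.\n\n"
        f"Original response: {draft_response}"
    )
    print(augmented_prompt)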


That's smart, but very insidious as well. Following in the footsteps of dark UI patterns designed to make users misclick on ads, dark suggestions in conversations with an LLM will be the next big thing. Like a conservative talking with an LLM which inserts liberal propaganda into its responses, and after an hour he turns into a liberal voter. The next day he talks with a conservative LLM which inserts propaganda into his conversations, and he turns back into a conservative voter. Pretty dystopic.

On a more serious note, imho advertisers are on their last legs, and Google is already losing a lot of revenue. We are going fast into a new internet, web3, which will enable direct monetization of information from users, instead of publishers relying on ads.

Not to wander too far off topic here, but synthetic datasets created by paid human workers to train machines are going to be a humongous industry.


People will have problems they want to solve, and GPT can provide solutions that may or may not have a price tag.

In this case, it's just directing to the service you would have best fit with.

This can be highly profitable, because you are solving the problem for the customer with the products you are suggesting based on what they are looking to solve.


For you, maybe; for the vast majority of people, not really. You can compare both the number of users and the number of searches.


Probably not. Their "free" search doesn't make money.



