
This week I created a Microsoft account just to check how they are integrating GPT into Bing, and while they led me to believe I was using a chat-enabled Bing, it was useless to me.

I basically wanted to know how to do raycasting in Three.js, but after spending a lot of time with ChatGPT trying to solve my issue, I learned that I can't do what I want with the normal raycaster built into Three.js: intersect a geometry that has a displacement map on it.

ChatGPT failed to understand that the raycaster works on the CPU, while the material's displacement map is applied on the GPU side, so the raycaster only ever sees the original geometry, never the displaced one. It even managed to explain to me that this was not possible, yet every code sample it produced pretended it was, until I gave up.
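To make the mismatch concrete, here is a toy sketch (plain JS, not the actual three.js API) of why the stock raycaster reports the wrong point: it tests the base mesh, while the vertex shader offsets each vertex by displacementMap * displacementScale at render time.

```javascript
// Toy illustration: the surface the CPU raycaster "sees" vs. the
// surface the GPU actually renders after displacement.

// Height of the base mesh the raycaster tests: a flat plane at y = 0.
function baseHeight(x, z) {
  return 0;
}

// Height of the rendered surface: base + displacement applied in the
// vertex shader (sampled height times displacementScale).
function displacedHeight(x, z, heightSample, displacementScale) {
  return baseHeight(x, z) + heightSample * displacementScale;
}

// A vertical ray from above hits the base plane at y = 0...
const hitOnBase = baseHeight(0.5, 0.5);
// ...but the rendered surface at that point is actually here:
const hitOnDisplaced = displacedHeight(0.5, 0.5, 0.8, 2.0);

console.log(hitOnBase, hitOnDisplaced); // the raycaster reports the base hit
```

The raycaster never sees the second value, because the displacement happens after the geometry has already left the CPU.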

Then I created the Microsoft account and started asking for solutions, and it was the most useless garbage, despite the site claiming that it uses GPT-4.

In my eyes MS has failed at integrating a chatbot; maybe it's OK for cooking recipes or having fun, but I haven't tried that. And OpenAI has nothing but a chatbot and some other nice AI. Let someone better come along and OpenAI will be a remarkable entry in the history books (first popular AI application), with a final entry noting that OpenAI got acquired by Microsoft.

Let's see what Google makes out of it.




>ChatGPT failed to understand that the raycaster works on the CPU, while the material's displacement map is applied on the GPU side, so the raycaster only ever sees the original geometry, never the displaced one. It even managed to explain to me that this was not possible, yet every code sample it produced pretended it was, until I gave up.

Reflecting on it, what's crazy is that this comment will likely wind up in the training data for GPT-N+1, and then GPT-N+1 will get it right.


What's also crazy is that GP's comment exists in the first place. Five years ago, complaining that a chatbot failed to understand the complexities of realtime 3D rendering when explained to it in plain English would be seen as confusing science fiction with reality. Hell, five years ago, we'd be reminded to avoid anthropomorphizing language (such as "the software failed to understand"). Today, that same language is the most apt description of what we observe.


I can tell you what Bard's answer will be:

"I am just an AI language model, I can't help you"


...instead of pretending to know the answer and spitting out nonsense


The problem is that it still spews tons of nonsense. It literally makes up imports when told to use a library/package, even for a package it "knows", considering it itself suggested using it in the first place!


Partial information at least gives me something to correct, or some hint of the problem space to explore further.


ChatGPT's partial answers were helpful, yet the whole process was time-consuming. I guess that googling for it would have taken at most 5-10 minutes, but I wanted to see how ChatGPT behaves.

Bing, on the other hand, didn't even bother to produce ChatGPT-like sentences; it only pointed me to some unhelpful Stack Overflow entries.

I just logged back into Bing to search for my query, found it in the history and revisited it; now it gives me an answer which looks like it was generated with ChatGPT, but it's still the same buggy code. Chatting with it shows the same problems ChatGPT has.

I wonder if GPT-4 could give me the correct answer (manually displacing the vertices, not in a shader).

Wow, after chatting, then performing a normal query and going back to chat mode, the entire chat history was gone...


This is what ChatGPT-4 says regarding this specific case (you can see the prompt used as well, it's just a copy-paste of your comment): https://pastebin.com/PPy4vMrU

It seems to write code that does the displacement manually "// Iterate over the geometry's vertices and apply the displacement, geometry.vertices.forEach{...}"
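For reference, geometry.vertices belongs to the legacy THREE.Geometry API, which has been removed from recent three.js releases; modern code works with BufferGeometry's flat position/normal attribute arrays. Below is a hedged sketch of the same CPU-side displacement over plain arrays (sampling the actual displacement texture, e.g. via an offscreen canvas, is omitted; writing the result back through geometry.attributes.position is also left out):

```javascript
// Displace vertices on the CPU so the stock Raycaster tests the same
// surface the GPU renders. Plain flat arrays stand in for
// BufferGeometry's position/normal attributes.
function displacePositions(positions, normals, heights, scale) {
  // positions/normals: flat [x, y, z, ...] arrays;
  // heights: one sampled displacement value per vertex, in [0, 1].
  const out = new Float32Array(positions.length);
  for (let i = 0; i < heights.length; i++) {
    const h = heights[i] * scale; // same math the vertex shader applies
    out[3 * i]     = positions[3 * i]     + normals[3 * i]     * h;
    out[3 * i + 1] = positions[3 * i + 1] + normals[3 * i + 1] * h;
    out[3 * i + 2] = positions[3 * i + 2] + normals[3 * i + 2] * h;
  }
  return out;
}

// One vertex at the origin with an up-facing normal, height 0.5, scale 2:
const displaced = displacePositions([0, 0, 0], [0, 1, 0], [0.5], 2);
// the vertex moves one unit along its normal
```

In real three.js you would then write the result back into the position attribute, flag it with needsUpdate, and recompute normals, so the raycaster intersects the displaced mesh.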


Thank you for the feedback! Looks like I'm going to subscribe to plus then.


It constantly tells me it can't do things it just did, and makes stuff up all the time.


I agree that the Bing integration is worse than the ChatGPT site itself. I also notice that ordinary people around me use the ChatGPT site, not Bing.


Bing is super lovely in terms of eye candy, but apparently they don't even offer a real history like the ChatGPT sidebar has.


Microsoft GitHub Copilot X might do better than GPT-4 Bing Chat regarding coding. I'm still on their waiting list for that but it looks promising.


Chat in the IDE is going to be game changing for me. I find myself switching between IDE and ChatGPT when co-pilot just isn't giving me a sensible suggestion, and I've found the combination of the two to be pretty epic. Can't wait to have it all tightly integrated. Also very interested in voice.


Is using embeddings + a retrieval plugin a potential solution in this case? (Honest question; I don't have expertise in this area.) There are many helpful explanations on the Three.js Discourse forum, the Three.js/Pmndrs Discord, official docs, blogs, and private resources like books and course materials. If someone could create embeddings for all of these resources and make them accessible via a plugin, we could get more accurate and up-to-date answers (I noticed that GPT-3.5 occasionally produces deprecated code like THREE.Geometry, while GPT-4 seems to handle it better). In practice, private resources would have to be excluded.
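The retrieval side of that idea can be sketched in a few lines. This is a toy, with hand-made 2-d vectors standing in for real embeddings; in practice each doc chunk would be embedded once through an embedding API, and the top-scoring chunks would be prepended to the prompt.

```javascript
// Cosine similarity between two equal-length vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k chunks most similar to the query vector.
function topK(queryVec, chunks, k) {
  // chunks: [{ text, vec }], e.g. paragraphs from the three.js docs
  return chunks
    .map(c => ({ text: c.text, score: cosine(queryVec, c.vec) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}

// Toy 2-d "embeddings" for illustration only:
const chunks = [
  { text: "Raycaster intersects BufferGeometry on the CPU", vec: [1, 0] },
  { text: "displacementMap offsets vertices in the vertex shader", vec: [0.9, 0.4] },
  { text: "OrbitControls usage", vec: [0, 1] },
];
const best = topK([1, 0.2], chunks, 2);
// best holds the two raycasting/displacement chunks, most similar first
```

The point of the plugin would be that the model answers from retrieved, current documentation rather than from whatever (possibly deprecated) API shapes it memorized during training.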


I've also not seen great results on three.js or A-Frame questions from any of these models. I'm guessing it's simply because there's a limited corpus of text from which to learn, but I wonder whether LLMs' lack of inherent spatial awareness contributes. Admittedly, three.js and related concepts can be confusing for humans too.


I suspect it could soon be feasible to fine-tune on code with limited open data through a bootstrapping approach.

Give it the source code, a test library, and access to a dev environment, and with some prompting it could start to experiment with increasingly complex use cases, learning from successful attempts. This would depend on the model's ability to understand what a successful outcome is so it can define test cases, which might be harder if the output isn't text, but not impossible.

Being able to give a model expert knowledge on an undocumented library or language seems like it could help accelerate adoption of new technologies that might otherwise suffer from network effects as users get used to AI-assisted development. Not to mention automated testing, finding edge cases and making pull requests to fix them, security, etc.
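The core loop being proposed reduces to: generate an attempt, verify it against tests, and keep only verified attempts as training examples. Here is a minimal sketch of that filtering step; generateCandidate() (the model) and runTests() (the dev-environment harness) are hypothetical and stubbed so the logic itself runs.

```javascript
// Harvest fine-tuning examples from self-verified attempts: only
// candidates that pass their tests become training data.
function harvestTrainingExamples(prompts, generateCandidate, runTests) {
  const examples = [];
  for (const prompt of prompts) {
    const code = generateCandidate(prompt); // model writes an attempt
    if (runTests(code)) {                   // keep only verified successes
      examples.push({ prompt, completion: code });
    }
  }
  return examples;
}

// Stubbed demo: the "test harness" passes when the attempt uses the
// (made-up) API we expect.
const prompts = ["intersect a displaced mesh", "orbit the camera"];
const demoGenerate = p =>
  p.includes("displaced") ? "raycast(displacedGeometry)" : "???";
const demoTests = code => code.startsWith("raycast");
const data = harvestTrainingExamples(prompts, demoGenerate, demoTests);
// only the verified attempt survives into the training set
```

The hard part, as noted above, is the verification signal: defining runTests() well enough that passing actually means the attempt was correct.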

A human taking time to experiment with their assumptions and gain experience with an unfamiliar subject is in a sense creating their own training data as well.





