
That really is like how Stallman is described as using the internet, just with AI.

Regarding deletion, did you know about this service? https://joindeleteme.com/




Yeah, but no. I want to know how to instruct GPT bots to spit out actionable code snippets that I can run to delete stuff myself. I don't want to pay an Opterly or a DeleteMe a monthly subscription to delete my footprint.
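
Something like this is what I mean; a rough sketch only, where the broker names and contact addresses are placeholders (not real opt-out endpoints) and the request wording isn't lawyer-reviewed:

    # Sketch: generate CCPA/GDPR deletion-request drafts for a list of data brokers.
    # Broker names and addresses below are placeholders; swap in your own list.
    from datetime import date

    BROKERS = [
        ("ExampleBroker", "privacy@examplebroker.example"),
        ("OtherBroker", "optout@otherbroker.example"),
    ]

    TEMPLATE = (
        "To: {contact}\n"
        "Subject: Personal data deletion request\n\n"
        "Hello {broker},\n\n"
        "Under the CCPA/GDPR I request deletion of all personal data you hold about me:\n"
        "  Name: {name}\n"
        "  Email: {email}\n\n"
        "Please confirm deletion in writing within the statutory period.\n\n"
        "Sent {today}\n"
    )

    def drafts(name, email):
        # Yield one filled-in draft per broker; review and send them yourself.
        for broker, contact in BROKERS:
            yield TEMPLATE.format(broker=broker, contact=contact,
                                  name=name, email=email, today=date.today())

    if __name__ == "__main__":
        for d in drafts("Jane Doe", "jane@example.com"):
            print(d)
            print("-" * 60)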

I know their services are valuable, and I'm not against them. I am saying that in the age of GPT-code-slave-bots I'd rather learn the process of figuring out how to tell the AI what I want, and iteratively learn through the process.

It's wonderful being able to explore ideas so fluidly with the GPTs, even as we know/discover their limitations, mal-intent, and other filters/guardrails/alignments and allegiances.


> I am saying that in the age of GPT-code-slave-bots I'd rather learn the process of figuring out how to tell the AI what I want, and iteratively learn through the process.

Why?

It's your choice, but everyone knows AI is like poison for deterministic problem-solving. Learning how to better rely on an unreliable machine only guarantees that you're feeble when you have to do something without it. Like relying on autopilot when you don't know how to fly a plane, or trying to get HAL-9000 to open the airlock when you weren't trained on the manual process.

Using AI to automate takedown requests is just pointless. The only reason automated takedowns work is that their automated messages are canned and written by lawyers with a user as the signatory. If you have AI agents write custom and non-binding requests to people that hold your data, nobody will care. At that point you may as well copy-and-paste the messages yourself and save the hassle of correcting a brainless subordinate.

> It's wonderful being able to explore ideas so fluidly with the GPTs, even as we know/discover their limitations, mal-intent, and other filters/guardrails/alignments and allegiances.

It's as if the first world has rediscovered the joy of reading, after a brief affair with smartphones, media paranoia, and a couple of election cycles dominated by misinformation bubbles. Finally, an unopinionated author with no lived experience to frame their perspective! It's just what we've all been waiting for (besides the bias).


You misunderstand what I am saying. I LOVE reading and learning.

What I said is that I like to tell the bots to give me a Python snippet to do a chore, explain how they're doing it, teach me along the way, and document the functions so I can read them and know what they do.

For example, an HNer posted their VanillaJSX Show HN today, and it had some interesting UI demos. So right now I am building a Flask app that uses VanillaJSX elements for a player.js tied to yt-dlp: a locally hosted YouTube front-page player that downloads the video locally and displays my downloads on a page with the VanillaJSX player elements, just to see if I can.
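
A rough sketch of the download-and-serve half of that, assuming pip install flask yt-dlp; the route names and the "downloads" folder are my own choices, and a bare <video> tag stands in for the VanillaJSX player elements:

    # Minimal Flask + yt-dlp sketch for the locally hosted downloader/player idea.
    import os
    from flask import Flask, redirect, render_template_string, request, send_from_directory
    import yt_dlp

    app = Flask(__name__)
    DOWNLOAD_DIR = os.path.join(os.path.dirname(__file__), "downloads")
    os.makedirs(DOWNLOAD_DIR, exist_ok=True)

    PAGE = """
    <h1>Local player</h1>
    <form method="post" action="/download">
      <input name="url" placeholder="YouTube URL" size="60"><button>Download</button>
    </form>
    {% for f in files %}
      <p>{{ f }}</p><video src="/media/{{ f }}" controls width="480"></video>
    {% endfor %}
    """

    @app.route("/")
    def index():
        # List everything already downloaded and hand it to the template.
        return render_template_string(PAGE, files=sorted(os.listdir(DOWNLOAD_DIR)))

    @app.route("/download", methods=["POST"])
    def download():
        # Grab an mp4 so the <video> tag can play it without transcoding.
        opts = {"outtmpl": os.path.join(DOWNLOAD_DIR, "%(title)s.%(ext)s"), "format": "mp4"}
        with yt_dlp.YoutubeDL(opts) as ydl:
            ydl.download([request.form["url"]])
        return redirect("/")

    @app.route("/media/<path:name>")
    def media(name):
        return send_from_directory(DOWNLOAD_DIR, name)

    if __name__ == "__main__":
        app.run(debug=True)

Run it, paste a URL into the form, and the file shows up on the front page with a player once yt-dlp finishes.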


But why would you use AI for that when it's inferior to the Flask docs and the VanillaJSX example code? I've written projects with ChatGPT in the driver's seat before; you end up spending more time debugging the AI than your software. For topics like history or math I would expressly argue that relying on AI as a study partner only makes you dumber. The way it represents code isn't far off from that either.

Again, it's your choice how to spend your time. I just cannot fathom the idea of learning from an LLM, a tool designed to expand on wrong answers with no regard for the distinction between fantasy and reality. It feels to me like this stems from a distrust of human-written literature, or at least a struggle to access it. Maybe this is what getting old feels like.


When one doesn't know how to frame a question succinctly, a GPT that can structure a vague request into a digestible response is valuable. Plus, you act like anyone using an LLM is completely devoid of their own discernment or intelligence.

When I pick up a hammer, I don't expect it to build the actual house. But when I intentionally and willfully use it on its designated task, it's the same as wielding a GPT/AI: you have to be really specific.

I admit I totally agree about having to debug the code it generates. But since I know how to goad it toward my intent of learning a thing, that's a trade-off I accept.

Also, I wrote up extensively about using a discernment lattice to attempt to corral the AI's "expertness" as much as possible and keep it on a subject.

I also force it to use, cite, and describe the sources I tell it to draw from when I ask it to act as an expert.

https://i.imgur.com/Fi5GYRl.png

https://i.imgur.com/wlne9pT.png

https://i.imgur.com/Ij8qgsQ.png



