Install a cookie autodelete extension. That will let you whitelist cookies you want for persistent logins and discard the rest. They can usually be configured to purge on tab closing.
Autodelete is old tech, if you ask me: if you open the sites before the autodelete fires, the tracking still happens. Temporary Containers is an addon that solves this elegantly.
What about a container that runs an AI agent that does your browsing "for you"? It handles the connections, cookie management, and Tor wrapping as required, so you have an abstraction layer between you and all your browsing, and the browser can /dev/null the ads, never let them render, and poison the reply with synthetic data crafted into every packet that goes back to the cookie providers.
I also want it to auto-crawl and delete my PII from all the ad networks, identity brokers, white-pages, and scam/spam sites. A "delete me from the internet" bot.
Yeah, but no. I want to know how to instruct GPT bots to spit out actionable code snippets that I can run to delete the stuff myself. I don't want to pay Optery, or sign up for a DeleteMe monthly subscription, to erase my footprint.
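As a very rough sketch of what such a snippet could look like: a letter generator that renders one deletion request per broker. The broker names and contact addresses below are placeholders, not real opt-out endpoints; you'd fill in each broker's actual privacy contact from their own policy page before sending anything.

```python
# Hedged sketch of a "delete me" request generator.
# BROKERS entries are PLACEHOLDERS -- replace with real privacy
# contacts taken from each broker's privacy policy.
from string import Template

LETTER = Template("""\
To: $contact
Subject: Personal data deletion request (CCPA/GDPR)

To whom it may concern,

I request that $broker delete all personal information you hold
about me, including names, addresses, phone numbers, and any derived
profiles, and that you confirm the deletion in writing.

Name: $name
Email: $email
""")

# Hypothetical broker list; the .invalid TLD guarantees nothing gets sent.
BROKERS = [
    {"broker": "ExampleBroker Inc.", "contact": "privacy@example-broker.invalid"},
    {"broker": "SamplePages LLC", "contact": "optout@sample-pages.invalid"},
]

def build_requests(name: str, email: str) -> list[str]:
    """Render one deletion letter per broker, ready to paste into email."""
    return [LETTER.substitute(b, name=name, email=email) for b in BROKERS]

if __name__ == "__main__":
    for letter in build_requests("Jane Doe", "jane@example.com"):
        print(letter)
        print("-" * 60)
```

From there you could ask the bot to bolt on actual delivery (SMTP, or a broker's web opt-out form), which is exactly the iterative-learning loop being described.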
I know their services are valuable, and I'm not against them. I'm saying that in the age of GPT code-slave bots I'd rather learn the process of figuring out how to tell the AI what I want, and iteratively learn along the way.
It's wonderful being able to explore ideas so fluidly with the GPTs, even as we know/discover their limitations, mal-intent, and other filters/guardrails/alignments and allegiances.
> I'm saying that in the age of GPT code-slave bots I'd rather learn the process of figuring out how to tell the AI what I want, and iteratively learn along the way.
Why?
It's your choice, but everyone knows AI is like poison for deterministic problem-solving. Learning how to better rely on an unreliable machine only guarantees that you're feeble when you have to do something without it. Like relying on autopilot when you don't know how to fly a plane, or trying to get HAL-9000 to open the airlock when you weren't trained on the manual process.
Using AI to automate takedown requests is just pointless. The only reason automated takedowns work is that their automated messages are canned and written by lawyers with a user as the signatory. If you have AI agents write custom and non-binding requests to people that hold your data, nobody will care. At that point you may as well copy-and-paste the messages yourself and save the hassle of correcting a brainless subordinate.
> It's wonderful being able to explore ideas so fluidly with the GPTs, even as we know/discover their limitations, mal-intent, and other filters/guardrails/alignments and allegiances.
It's as if the first-world has rediscovered the joy of reading, after a brief affair with smartphones, media paranoia and a couple election cycles dominated by misinformation bubbles. Finally, an unopinionated author with no lived-experience to frame their perspective! It's just what we've all been waiting for (besides the bias).
You misunderstand what I am saying. I LOVE reading and learning.
What I said is that I like to tell the bots to give me a Python snippet to do a chore, explain how they're doing it, teach me along the way, and document the functions so I can read them and know what they do.
For example, an HNer posted their VanillaJSX Show HN today, and it had some interesting UI demos. So right now I'm building a Flask app that uses VanillaJSX elements for a player.js tied to yt-dlp: a locally hosted YouTube front-page player that downloads the video locally and displays my downloads on a page with the VanillaJSX player elements, just to see if I can.
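The backend half of that project can be sketched in a few helpers. This is a minimal, illustrative version: it assumes the `yt-dlp` CLI is on your PATH, the directory name and function names are my own invention, and the Flask routes and VanillaJSX front end that would sit on top are omitted.

```python
# Hedged sketch of the local-YouTube-player backend described above.
# Assumes the yt-dlp CLI is installed; paths and names are illustrative.
import subprocess
from pathlib import Path

DOWNLOAD_DIR = Path("downloads")  # where the player page reads its library

def build_download_cmd(url: str, out_dir: Path = DOWNLOAD_DIR) -> list[str]:
    """Argv for a yt-dlp invocation that saves an mp4 into out_dir."""
    return [
        "yt-dlp",
        "--format", "mp4",
        "--output", str(out_dir / "%(title)s.%(ext)s"),
        url,
    ]

def download(url: str) -> None:
    """Run yt-dlp; a Flask POST route could call this, then redirect home."""
    DOWNLOAD_DIR.mkdir(exist_ok=True)
    subprocess.run(build_download_cmd(url), check=True)

def list_downloads(out_dir: Path = DOWNLOAD_DIR) -> list[str]:
    """Filenames the front page would hand to the VanillaJSX player."""
    return sorted(p.name for p in out_dir.glob("*.mp4"))
```

A Flask `GET /` route would render `list_downloads()` into the player page, and a `POST /download` route would accept a URL and call `download()`.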
But why would you use AI for that when it's inferior to the Flask docs and the VanillaJSX example code? I've written projects with ChatGPT in the driver's seat before, you end up spending more time debugging the AI than your software. For topics like history or math I would expressly argue that relying on AI as a study partner only makes you dumber. The way it represents code really isn't too far off either.
Again, it's your choice how to spend your time. I just cannot fathom the idea of learning from an LLM, a tool designed to expand on wrong answers and ignore the discrimination between fantasy and reality. It feels to me like this stems from a distrust in human-written literature, or at least a struggle to access it. Maybe this is what it feels like getting old.
When one doesn't know how to frame a question succinctly, a GPT that can structure a vague request into a digestible response is valuable. Plus, you act like anyone using an LLM is completely devoid of their own discernment or intelligence.
When I pick up a hammer, I don't expect it to build the actual house. But when I intentionally and willfully use it on its designated task, it's the same as wielding a GPT/AI: you have to be really specific.
I admit I totally agree about having to debug the code it generates. But I know how to goad it toward my intent when I'm trying to learn a thing.
Also, I wrote up extensively about using a discernment lattice to corral the AI's "expertness" as much as possible and keep it on a subject.
I also force it to use, cite, and describe the sources I tell it to draw on when I'm telling it to be an expert.
I do that as well. The full setup is Firefox with Strict protection, Multi-Account containers, Temporary Containers, Privacy Settings, Decentraleyes, and uBO. What's unfortunate is I start getting sites that don't like this treatment at all. IKEA straight-up tells me that I'm probably a bot so it won't let me log in, but other sites have random broken functionality as well.
If you’d like to, feel free to reach out to me on robin.whittleton@ingka.ikea.com so that we can try to fix that IKEA behaviour. Or file an issue on webcompat.com and I’ll track it from there.