This is amusing to read because I just started learning org-mode the other day and ran into these issues immediately while trying to figure out how to link to a bullet point in a list later in the document. I couldn't figure it out, but it ended up being unnecessary in the end, so... progress.
To quickly link to bullet points, just use [[*My bullet point]] (if you omit the *, it may still work but Org also finds non-heading elements like table names with the same text).
Personally, I like to create custom ids for bullet points so that I can easily change the text in the bullet point later without breaking my links:
* My bullet point
:PROPERTIES:
:CUSTOM_ID: foo
:END:
This is easier with C-c C-x p (org-set-property).
Elsewhere, I can just write [[#foo]] to create a link to that bullet.
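Putting the pieces together, a minimal org file might look like this (the heading text and id are just placeholders):

```org
* My bullet point
:PROPERTIES:
:CUSTOM_ID: foo
:END:

Elsewhere in the file: plain [[#foo]] works, or [[#foo][a described link]]
if you want different link text.
```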
I built a Proxy-based microlib for making fluent REST calls just on a lark a couple years ago. It was never anything production-ready, but it was so handy that I used it in all of my JS projects until I moved away from webdev. The API was basically:
const api = new Proxy({route: baseRoute}, handler); // handler is the microlib's export
const result = await api.get.some.route.invoke(); // GET {baseRoute}/some/route
invoke() is just how it finally fires the call. I didn't feel like spending the time to make it fire automatically; the benefit wasn't large enough to justify it compared to just calling invoke().
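The idea can be sketched roughly like this (not the original code; makeApi and the injectable fetch function are illustrative assumptions):

```javascript
// A Proxy that records property accesses as path segments and only
// fires the HTTP request when invoke() is called. fetchFn is
// injectable here purely for illustration and testing.
function makeApi(baseRoute, fetchFn = globalThis.fetch) {
  const build = (segments) =>
    new Proxy({}, {
      get(_target, prop) {
        if (prop === "invoke") {
          // Treat the first segment as the HTTP verb, the rest as the path.
          const [method, ...path] = segments;
          return () =>
            fetchFn(`${baseRoute}/${path.join("/")}`, {
              method: method.toUpperCase(),
            });
        }
        // Any other property access just extends the path.
        return build([...segments, String(prop)]);
      },
    });
  return build([]);
}

// Usage: await makeApi("https://example.com/api").get.some.route.invoke();
```

One nice property of this shape is that chaining allocates only tiny throwaway proxies; nothing touches the network until invoke().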
This is just copyright infringement reworded to pretend it's not. I own the things I write, and publishing it on the internet doesn't negate that. OpenAI doesn't have the right to claim it, no matter what they think, and neither does anyone else.
Firstly, publishing something on Facebook explicitly gives them the right to "copy" it. It certainly gives them the right to exploit it (it's literally their business model).
Secondly, Facebook is behind a login, so it's not "public" in the way HN comments are public. You'd have gained more kudos had you argued that point.
Thirdly, this article is about Meta AI, not OpenAI. So, no, OpenAI isn't claiming anything about your Facebook post.
I'll assume however that you digressed from the main topic, and were complaining about OpenAI scraping the web.
Here's the thing. When you publish something publicly (on the internet or on paper) you can't control who reads it. You can't control what they learn from it, or how they'll use that knowledge in their own life or work.
You can of course control republishing of the original work, but that's a very narrow use case.
In school we read setwork books. We wrote essays, summaries, objections, theme analysis and so on. Some of my class went on to be writers, influenced by those works and that study.
In the same way OpenAI is reading voraciously. It is using that to assign mathematical probabilities to certain word pairings. It is studying published material in the same way I did at school, albeit with more enthusiasm, diligence and success.
In truth, you don't "own the things you write," at least not in the conceptual sense. You cannot own a concept, argument, or position. Ultimately there is nothing new under the sun (see what I did there?) and your blog post is already a rehash of that which came before.
Yes, you "own" the text, to the degree to which any text can be "owned" (which is not much).
>Firstly publishing something on Facebook explicitly gives them the right to "copy" it. It certainly gives them the right to exploit it (it's literally their business model.)
This isn't necessarily true for a user-content host. I haven't read Facebook's TOS, but some agreements restrict what the host can do with users' content, usually to things like saving it on servers, distributing it over the web in HTML pages to other users, and making copies for backups. This can encourage users to post poetry, comics, or stories without worrying about Facebook or Twitter selling their work in anthologies and keeping all the money.
>In school we read setwork books. We wrote essays, summaries, objections, theme analysis and so on. Some of my class went on to be writers, influenced by those works and that study.
Scholarly reports are explicitly covered under a Fair Use exception.
But also be careful not to anthropomorphize LLMs. Just because something produces content similar to what a human would make doesn't mean it should be treated as human in the law. Or any other way.
OpenAI is not reading voraciously, it is not a human being. It makes copies of the data for training.
If there were an actual AI system that was trained by continuously processing direct fetches from the web, without storing them, using them directly for internal state transitions, then the reading analogy might work. But then AI engineers couldn't do all the analysis and annotation steps that are vital to the training process.
If I were implementing it and wanted to obscure, I'd blur the whole screen momentarily, probably with a small message. I really doubt that's ideal for a commercial offering, though. I'm not really worried about unnerving people if I'm using an avatar, that comes with the territory as it is.
There are also seeeeeveral LOVE2D libraries with overtly sexual names. The most egregious example that comes to mind is the (now defunct compat library) "AnAL." There's also HUMP, Pölygamy, Swingers, Adult Lib (debatable but close enough), Gspöt, Möan.lua, fLUIds (also debatable, but there's a clear theme here), and yaoui.
If you're referencing the posted article, that is absolutely not what we're "observing" right now, that claim is political propaganda from the American right-wing. Khelif is not trans.
I think the parent wasn’t suggesting this. I think that they suggested that in many strength-oriented disciplines biological women with the most masculine features (e.g. high testosterone level) will win.
I have fond memories of discovering Phoenix ~v0.1 during high school and mostly never looking back. Had a short stint with Chrome until it intermittently stopped even attempting to load pages. Switched back and couldn't imagine daily driving Chrome.
The name "Autopilot" for their lane-keeping feature clearly relies on most people not knowing how autopilot works in planes, because it's heavily implied to be autonomous driving and it's not. It's just a Tesla scam.
THIS. If you asked a bunch of pilots about the capability level they'd expect from something called "autopilot", then compared with the answers you'd get from a bunch of Joe Averages...yeah. When people don't know squat, and they hear some Marketing-speak, and they'd like to believe - the next thing you know, they've convinced themselves that Santa's magic elves are making it all work, and nothing bad could ever possibly happen.
I don't think it's clear they are relying on people not knowing how autopilot works on planes, because I don't think most people even know autopilot on planes exists. 10-15% of the US population has never been on a plane.