This is a simple application of AP but it really demonstrates the power of the technology.
For example, if HN implemented ActivityPub this comment could be directly replied to by users on Mastodon or PeerTube or any other AP service, directly from that interface. Like email, it just works.
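Concretely, such a reply would just be a Create activity whose object points back at the HN comment via inReplyTo. A rough sketch, with every URL and actor made up for illustration:

  // Hypothetical: what a Mastodon user's reply to an HN comment might look like,
  // if HN spoke ActivityPub. All IDs below are invented.
  const reply = {
    "@context": "https://www.w3.org/ns/activitystreams",
    id: "https://mastodon.example/activities/1",
    type: "Create",
    actor: "https://mastodon.example/users/alice",
    to: ["https://www.w3.org/ns/activitystreams#Public"],
    object: {
      id: "https://mastodon.example/notes/1",
      type: "Note",
      attributedTo: "https://mastodon.example/users/alice",
      inReplyTo: "https://news.ycombinator.com/item?id=123456", // placeholder HN comment URL
      content: "<p>Replying from Mastodon!</p>",
    },
  };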
IMO it genuinely represents the way we should be thinking about web applications and services.
Yes, but it unfortunately completely ignores the topic of spam and moderation. So we will have another 10 years of experimenting at the expense of the users and the standard. And we may end up with some centralisation again, like with email.
Still a great tech though. I just wish the authors would think about IRL and not just assume a perfect sphere in a frictionless vacuum.
Not really. Moderation is a local matter, something you sort out with the moderators of the instance you're on. Instances that don't moderate well are blacklisted. It's not like there haven't been trolls before. In practice the system works fine.
If HN did implement this, then would they have to blacklist bad instances? What is to stop people from spinning up bad instances faster than HN could blacklist them? So we'd surely end up with a whitelist, which isn't very social.
I think it's coming at the problem from the wrong direction to take an existing community like HN and make it federate. It's already too big. ActivityPub services work best with a lot of small actors, not a few big ones.
And for the record, Mastodon's scale is pretty damn big, considerably bigger than HN. There are millions of users.
How would I be able to see the whole tree of replies, though? Would I need to rely on an exponentially large number of services being available at request time?
Messages are also sent between servers directly, so HN would store the full tree. If you replied to a comment, your AP server would forward the message to the HN one and it would appear there.
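Roughly, that forwarding step is just an HTTP POST of the activity to the other server's inbox. A sketch, assuming a Node 18+/TypeScript environment, with the inbox URL invented and HTTP signature handling left out:

  // Server-to-server delivery: POST the activity JSON to the recipient's inbox.
  // Real implementations also sign the request (HTTP Signatures).
  async function deliver(activity: object, inboxUrl: string): Promise<void> {
    const res = await fetch(inboxUrl, {
      method: "POST",
      headers: { "Content-Type": "application/activity+json" },
      body: JSON.stringify(activity),
    });
    if (!res.ok) throw new Error(`Delivery failed: ${res.status}`);
  }

  // e.g. deliver(createActivity, "https://news.ycombinator.com/inbox"), if HN exposed an inbox.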
I think all that is the responsibility of the viewer. When you view a single message on Mastodon, then Mastodon is responsible for fetching the rest of the thread for your display.
In practice, it looks like Mastodon also stores / caches local copies of messages and profiles, pretty much as they are received. (Including thumbnails!)
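Conceptually something like this, I imagine (just a sketch; a real client would also page through each object's replies collection to go downward in the thread):

  // Walk up a thread by following inReplyTo links from a starting message URL.
  async function fetchAncestors(noteUrl: string): Promise<any[]> {
    const thread: any[] = [];
    let url: string | undefined = noteUrl;
    while (url) {
      const res = await fetch(url, {
        headers: { Accept: "application/activity+json" },
      });
      const note: any = await res.json();
      thread.unshift(note); // oldest ancestor ends up first
      // inReplyTo may be a string URL, an embedded object, or absent.
      url = typeof note.inReplyTo === "string" ? note.inReplyTo : undefined;
    }
    return thread;
  }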
Author here! Let's hope my little hobby server holds up against HN. :)
This was very much an experiment in building something that talks ActivityPub Server-to-Server. Chess happened to be a fun, not too difficult idea to get the experiment going. (The chess parts are largely built on existing work, such as chess.js and icons from the same set Wikipedia uses.)
I did not expect to test your little server! I figured it would fall away, then occasionally be discovered by people searching for chess and/or ActivityPub.
Someone will probably come along and build a GUI that works with the toots. It would be a neat browser plugin project. Are you planning to open source this?
I find this type of peer pressure to be counterproductive in many instances. It reminds me too much of the insistent bug fix "requests" open source maintainers have to deal with.
Take the less-than-production-grade code you find in many hobby projects, add in the tendency of HN to tear to shreds (however well-meaning) any code which gets presented, and open sourcing hobby projects can be a recipe for disaster.
Edit: I'm not saying open source is bad (obviously) or that it's always toxic. But it definitely can be.
I don't think I was particularly rude or pushy, I just stated my opinion and asked them to expand on theirs.
>add in the tendency of HN to tear to shreds (however well-meaning) any code which gets presented
I don't think HN is this bad unless the code in question is actively doing harm (e.g. bad crypto). I'll always be more critical of good closed source software than bad open source software.
For the record, I’m leaning towards yes. And I don’t much fear response, the code is actually quite decent in my own opinion. I just want to experiment a bit more, see if there’s anything else I want to accomplish, before throwing it out there.
By a GUI client, I mean a frontend that you can log in with your Mastodon/Pleroma/AP account in order to play in real time and/or view other players' games. Possibly integratable into AP-compatible clients.
That sounds like quite a bit of work, haha. I don't plan on going there myself. :)
I'm not sure if clients even have functionality to integrate? Most have an 'open in browser' option that I guess could work, but you'd still need a session with the AP provider. (And those APIs probably all differ too!)
This is the exact type of experimentation and hacking I would expect from a technology that has a lot of potential and interest from devs. I hope ActivityPub and further decentralization efforts continue to gain mindshare.
I have no idea how this technology works, and I wonder what problem it solves over, say, email? Will chess-moves be public for everyone to see? Or will privacy be protected? Can the user(s) perhaps decide about the level of confidentiality?
Technically, all the messages castling.club itself generates are public, but the rest of the thread can be private depending on user settings.
One thing that's kind of cool: Each message and game detail page can also be requested as JSON using the Accept header. For messages, ActivityPub actually requires this. But for castling.club, these documents also include SAN and FEN for the chess moves and board states. And there's a JSON-LD vocabulary to describe it: https://castling.club/ns/chess/v0
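For instance, something along these lines (the san/fen property names here are placeholders; the real ones are defined in that vocabulary):

  // Fetch a message's detail page as ActivityPub JSON instead of HTML.
  async function fetchMove(url: string): Promise<void> {
    const res = await fetch(url, {
      headers: { Accept: "application/activity+json" },
    });
    const move: any = await res.json();
    // Placeholder property names: the move in SAN and the board state as FEN.
    console.log(move.san, move.fen);
  }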
That's a very technical advantage, I suppose. And how beneficial that is in practice (outside chess) remains to be seen. :)
I think the benefit here is that the moves are public, so people can easily spectate ongoing games and review old ones. That is difficult to solve by email alone.
Yeah, true. But after the whole Facebook debate, I think future social technology needs stronger privacy protection built-in. If the only option is to share with everybody, then I suppose that's a bit simplistic and probably not what people need/want.
> after the whole Facebook debate, I think future social technology needs stronger privacy protection built-in. If the only option is to share with everybody, then I suppose that's a bit simplistic and probably not what people need/want.
I think one of the big problems with Facebook and the like is the idea that things can be uploaded to the Web but remain "private". It's an insidious lie IMO, and is one reason I refused to go near it.
Any "revelations" about data brokers, (mis)use, breaches, etc. are just confirmations that the premise itself is flawed (from a user privacy perspective; I know it's a lucrative business proposition).
Attempting to build "privacy" into a decentralised publishing protocol seems to me like a bottomless rabbit hole without any real solution (a bit like DRM). It's perhaps an interesting question in terms of fundamental CS research, but AFAIK no practical implementations exist even in centralised systems, so it seems counterproductive to burden protocols with constraints that aren't actually possible to satisfy.
Even encrypted email only remains private if both parties keep it that way. Privacy can't be "imposed" by an author/"owner". Consider that even proprietary silos like Snapchat have spawned tools to automatically strip their "privacy" features ( e.g. https://drfone.wondershare.com/snapchat/snapchat-screenshot-... ). An open protocol which encourages third-party clients (both human-operated and bots) would be in a much worse situation.
So you are saying that since the recipient of an email can forward it to a third party, we should abolish the privacy aspect of email altogether, and make all emails public?
> So you are saying that since the recipient of an email can forward it to a third party, we should abolish the privacy aspect of email altogether, and make all emails public?
That's absolutely not what I said, and I can't see anything in what I wrote that could be sincerely interpreted into such a weak straw man:
- My only mention of email was descriptive ("Even encrypted email only remains private if both parties keep it that way") not prescriptive ("We should do XYZ")
- The only prescriptive remark I made was to avoid delaying/constraining protocols with requirements that are difficult/impossible to actually implement. I believe that 'private sharing', as found on social media sites, is an example of such an impossible requirement.
- At no point did I say that any existing technology should be "abolished"
Based on this, I'm going to assume that your comment was not made in good faith. Even then, what you say doesn't seem to make much sense. In particular:
- Emails are public. That's why sensitive information like passwords and financial credentials should never be sent via email, unless the email body is encrypted before sending. Email transports are only encrypted opportunistically (STARTTLS), and even if a client/server enforce their connections to be secured, the message may hop between subsequent relays through unencrypted channels before arriving at the recipient. These days there are alternative mechanisms which might provide more security, e.g. composing a message in a browser connected to gmail.com over HTTPS and sending it to another Gmail address, but (a) this isn't "private" since our plaintext is being shared with a third party (Google, who is mining it to profile us; this is also why Facebook's claims of "privacy" are a lie) and (b) it's unlikely that any email protocols or formats would actually be used in such a setting; Gmail/Exchange/etc. are more like self-contained messaging platforms, which interoperate with email.
- I don't understand what "abolish" would even mean, in the context of email. Encrypting emails, whether it's with GPG or pen + paper, is not something that any centralised authority can 'turn off'; it's purely at the whim of the users. If we include steganography as a "privacy aspect" then it's not even possible to know if it's being used or not.
In 2011 at Twitter, we wrote a prototype engine that would let you run tiny JavaScript programs in iframe "cards", which show up below the tweet where images do. The idea was that if you tweeted with #cardname, then Twitter would include a card-program from the public card registry.
One of the first programs was a chess board which would stay up-to-date with all of the previous "moves" in a reply chain.
I guess as a (former) shareholder I probably benefited from twitter's switch from trying to knit together amazing primitives like this in favor of becoming an ads-driven behemoth, but the engineer in me is sad about all of the amazing projects that never released.
Sometimes you have to do the wrong thing for a while to find the right thing. The nice thing about ActivityPub (or any federated protocol) is no one can just unilaterally shut down a project like this through policy or API changes that, whoops, prevent this use case.
The wonderful modular world you saw was only delayed, and now it's going to happen with much cheaper CPU, transfer, and storage.
I tried a similar idea, but for security reasons I tried to develop a DSL that would "compile" to CSS/HTML/JS instead of allowing authors/developers to write JS. I gave up, not only because it was too difficult for me to maintain the codebase, but also because I realized most "cards" would just be ad-driven junk with poor UX and UI that would try to get personal info from users.
Didn't Google try something similar with their homepage once? It was unsafe though, because you had people submitting JS and having it execute in other people's browsers.
I was wondering the same thing! Especially considering modern messengers all talk HTTP anyway.
I think it should be possible. For example, federation in Pleroma seems lightning fast on a local instance. (Mastodon is a bit heavier and slower, it seems. Then again, I’m currently fighting interop issues with Pleroma instances.)
So it seems it should be possible to build a decently responsive messenger with ActivityPub.
Glad you like it! The goal was definitely to be minimal, even more so than previous projects and sites of my own.
That may have really helped with HN load too? That and caching. Pretty much everything is served from nginx cache.
But I was tailing the access log at several points, and HN hugs actually didn’t look that bad in terms of request rate. I can imagine a heavier dynamic site having trouble, though.
I love these low-key ways to play games over lightweight protocols. I know this is a bad example, but Facebook Messenger used to have this before they took it away. Not sure why. It was one of the few things I actually enjoyed about it.
If the creator of this is reading - can you please stop flipping the board every time we click 'Next move'? It makes it impossible to follow the game. Thanks!
Definitely on the todo! Also want to show an arrow for piece moved.
The board flipping is because they are technically toot detail pages, and in the toots I thought it’d be better to show the board from the side of the player whose turn it is?
Though I’ve also found, while waiting for the other player, planning ahead is also difficult because of this.
Maybe it just needs to show both, side by side, always?
ActivityPub is still on my (ever-expanding) to-research list, but maybe I'll ask here: is this the protocol that will finally let me move my social networking into Emacs?