Teddit – An alternative Reddit front-end focused on privacy (codeberg.org)
203 points by ecliptik on May 31, 2023 | hide | past | favorite | 93 comments



I’ve used teddit but prefer libreddit https://github.com/libreddit/libreddit

> It is a private front-end like Invidious but for Reddit.

- Fast: written in Rust for blazing-fast speeds and memory safety

- Light: no JavaScript, no ads, no tracking, no bloat

- Private: all requests are proxied through the server, including media

- Secure: strong Content Security Policy prevents browser requests to Reddit

Of course, you can't post, comment, or upvote from it.


>Libreddit is themed around Reddit's redesign whereas Teddit appears to stick much closer to Reddit's old design. This may suit some users better as design is always subjective.


I really like libreddit. It was very easy for me to set up and host my own instance behind a VPN. I registered a very simple .com domain of repeating letters so I can quickly just double-tap the word "reddit" on my phone's address bar and replace the word "reddit" with my libreddit's domain name and tap "go".

No more trying to type "old" over "www" or fighting the "you must use the app" prompt.


Anyone who uses Kagi can add a rule to automatically redirect to their teddit instance or old.reddit [1].

There are also plenty of extensions for almost every browser that redirects reddit, which are easy to fork and update for a custom instance.

I added a custom !r bang that uses kagi to search reddit via a lens with that redirect. Works great

[1] https://help.kagi.com/kagi/features/redirects.html


For users interested in this:

Go to https://kagi.com/settings?p=redirects

Add these rules:

    ^https://reddit.com|https://old.reddit.com  
    ^https://www.reddit.com|https://old.reddit.com 
Theoretically you should be able to turn it into one rule, but this worked for me, so I stopped ;)
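
If you do want a single rule, something like this should work (untested; it assumes the pattern side takes a normal regex group):

    ^https://(www\.)?reddit\.com|https://old.reddit.com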


I use https://addons.mozilla.org/en-US/firefox/addon/redirector/ for nitter, teddit and invidious.


The problem with redirectors is that the browser first has to go to the site you want to avoid and only then gets redirected. I always wanted something that would rewrite pages as they are displayed, and since I couldn't find anything, I wrote my own Greasemonkey script that handles all of my searxng/teddit/piped/rimgo/wikiless/quetre/libremdb/proxitok/neuters/scribe instances.

It also removes most UTM trackers and referral codes from URLs, unwraps tracking links that stash the actual destination in the query string, and pre-resolves most common URL shorteners, so when you hover over a link you are more likely to see where it's actually going.
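
A minimal sketch of the link-rewriting idea, with placeholder instance hostnames (my actual script does quite a bit more):

    // ==UserScript==
    // @name     Privacy front-end link rewriter (sketch)
    // @match    *://*/*
    // @grant    none
    // ==/UserScript==
    (function () {
      // Placeholder instances; swap in whichever ones you actually use.
      const hostMap = {
        'www.reddit.com': 'teddit.net',
        'old.reddit.com': 'teddit.net',
        'twitter.com': 'nitter.net',
        'www.youtube.com': 'yewtu.be',
      };
      const trackingParams = ['utm_source', 'utm_medium', 'utm_campaign', 'utm_term', 'utm_content'];
      for (const a of document.querySelectorAll('a[href]')) {
        let url;
        try { url = new URL(a.href); } catch (e) { continue; }
        if (hostMap[url.hostname]) url.hostname = hostMap[url.hostname];
        for (const p of trackingParams) url.searchParams.delete(p);
        a.href = url.toString();
      }
    })();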

I've thought about packaging it up for more general use, but it's one of those things on the todo list.


May I see it?


It's pretty rough, but it should work out of the box from this https://gist.github.com/bradmurray/61c824c0c3d5eb5e30b7bdfda...


There's also troddit.com, which has logins.

https://github.com/burhan-syed/troddit


It sucks that all this talk about scraping HTML will only push Reddit to deprecate Old Reddit even faster so users are forced to use a Javascript-heavy experience, complete with random HTML IDs. What then?


They've already gotten rid of the .compact frontend and actively removed workarounds/aliases that users discovered when it was first removed. Old reddit is definitely next up on the chopping block.


Lightweight (teddit frontpage: ~30 HTTP requests with ~270 KB of data downloaded vs. Reddit frontpage: ~190 requests with ~24 MB)

Was new reddit designed by actual morons? 24 MB and 190 requests! How did that pass any sort of QA?


Ad tracking, obfuscation to make scraping harder, plus it's trendy to use lots of JS libraries.


I just saw this comment on the Apollo app subreddit:

> They’re trying to overvalue their services before going public. Execs want to cash out and move to a tropical island. I can’t wait for this cesspool to fail.


Appending .i to the end of the URL will still get you the compact frontend.


I was quite sure you meant to say prefixing i. (which no longer works) but tried going to reddit.com/.i anyways and it actually works! I hope it stays operational, but surely they will also nuke it eventually :/


Maybe then, Reddit can go the way of Digg.


Old dot Reddit is already so deprecated I'm astounded anybody even knows about it anymore, so what are you talking about? It's pretty much gone; what's your go-to now that it's been a pretty garbage site to browse for more than a year? If it's just inertia, then jeez. And now it's automatically submitting people's comments: I never hit enter before this one just sent itself. What the heck is going on with this tech?


Good luck. The news is that the API fees are going to destroy Apollo.


I believe Teddit uses scraping; I run a Teddit instance for myself and haven't needed to set up an API key.


From a very brief skim it doesn't appear to scrape: https://codeberg.org/teddit/teddit/src/branch/main/routes/ho...

But I'm not really a web dev so I might be misreading things.

Also: https://codeberg.org/teddit/teddit/issues/400



That's another feature that seems to be on its way out, like locking up the API.


Do you mean that they've said it, or that it just makes sense?

Curious if they've said anything about it.


The JSON thing is an API request, so it will come under the new API rate limits.


It would be great for someone to scrape Reddit and expose that information in a format compatible with the official API.

So if you call /get-comments/1234 it scrapes post 1234 and returns the JSON object exactly as the official API does.

Then third party clients can just point to this endpoint.
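
A rough sketch of what such a shim could look like in Node (the /get-comments path is just the hypothetical route above; this version leans on Reddit's public .json output rather than true HTML scraping, and assumes Node 18+ for the global fetch):

    // Sketch only: maps a hypothetical /get-comments/<id> route onto Reddit's
    // public .json output, which mirrors the official API's listing format.
    // A scraping version would replace fetch() with an HTML parser.
    const http = require('http');

    http.createServer(async (req, res) => {
      const match = req.url.match(/^\/get-comments\/([a-z0-9]+)$/i);
      if (!match) {
        res.writeHead(404);
        res.end('not found');
        return;
      }
      try {
        const upstream = await fetch(`https://www.reddit.com/comments/${match[1]}.json`, {
          headers: { 'User-Agent': 'api-shim-sketch/0.1' },
        });
        res.writeHead(upstream.status, { 'Content-Type': 'application/json' });
        res.end(await upstream.text());
      } catch (err) {
        res.writeHead(502);
        res.end('upstream error');
      }
    }).listen(8080);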


Currently, just add .json to the end of the url.
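
For example, https://www.reddit.com/r/programming/top.json returns that listing as JSON, in roughly the same shape the official API hands back.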

I have no idea what will happen to this with the changes.

If what you suggest is done, even as a package, we could probably build a distributed pushshift[1] alternative to aggregate the data, like ArchiveTeam's Warrior[2] does, and keep publishing the monthly data.

[1] - reddit.com/r/pushshift

[2] - https://wiki.archiveteam.org/index.php?title=ArchiveTeam_War...


Won’t they just get rate-limited or hit with captchas?


Right now it seems legal to scrape Reddit. But given their trajectory of making the API fairly expensive to use, do you think it's likely that they would also limit/prohibit scraping (assuming apps like Apollo start scraping as an alternative)?


My understanding is that scraping of public websites is generally legal, isn't it?


Legal, but probably against ToS


That ToS is meaningless if you scrape logged out.


This is the first classic example I've encountered of a company using its power and ownership to completely render smaller, independent products unsustainable.


$12,000 per 50 million requests according to a post the dev made on Reddit, which he claims translates to $20 million a year.


And per user it's about $25 a year
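
Running the numbers from those figures: $12,000 / 50M is $0.24 per 1,000 requests, so $20 million a year corresponds to roughly 83 billion requests, and $25 per user per year works out to about 104,000 requests per user, or ~285 a day.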


HTML is a perfectly good API.


Most HTML is undocumented and unstable, so I’d say it’s far from perfect.


It's perfect in that there's no way for them to differentiate from normal browser traffic. This is adversarial interoperability which is exactly how we're supposed to deal with corporations and their idiotic "terms". Nobody is forced to accept their "take it or leave it" BS.

https://www.eff.org/deeplinks/2019/10/adversarial-interopera...

If it's important enough, someone somewhere will care enough to fix it when it inevitably breaks. Look at youtube-dl and its variations, there's even a youtube.js now.

https://news.ycombinator.com/item?id=31021611


I would be very curious to know how many engineers inside Google have fighting programs like youtube-dl and so on as their sole responsibility.

I am willing to bet it’s a whole division.


They can fight all they want. In the end it doesn't matter how many engineers they throw at the problem. The only way they can win is for the world to descend ever deeper into DRM tyranny. Google must literally own your computer in order to prevent you from copying YouTube videos. It has to come pwned straight from the factory so that Google can literally dictate that you, the user, are prohibited from running certain bad programs that are known to hurt their bottom line.

I realize we're not too far off from that dystopia but it's still a battle that deserves to be fought for the principle of it.


Can’t they just fingerprint the incoming requests based on a litany of variables, such as headers, IP, etc., to prevent this “scraping”?

Sorry I don’t follow


Sure. We can also control those variables. youtube-dl has a small JavaScript interpreter to figure out the audio and video URLs that YouTube generates for every client. In this thread it was also pointed out that people can use headless Chrome.

It's simple. If they allow public access to their site at all, there's pretty much nothing they can do to stop any of this.


Reddit is an SPA though, right?


Yes and no. The terrible attempts to build a new front-end are, but the old front-end, which runs on Python with Pylons (I believe) as its “front”, isn’t.

I like React, and I love TypeScript, but sites like Wikipedia, old.reddit.com, Stack Overflow, Hacker News and so on are nice showcases of how you should never be afraid of the page reload, because your users won’t be, unless you’re building something where you need to update screen state without user input. Like when a user receives a mail and can’t just reload the page, because their input would be lost if they did. I think this last part is the primary reason (along with mobile) that Reddit has been attempting to move to React: the “social” parts like chats and private messages don’t instantly show up for users in the old front-end. Unfortunately they haven’t been very successful so far.

You probably can scrape their current or their new.reddit front-ends, since you can scrape SPAs, but it’s much, much easier to scrape the old.reddit front-end.


Don't know, to be honest. I assume new Reddit is while old Reddit isn't. Truth is, the job is even easier in the case of an SPA: you just need to use whatever internal APIs they built for themselves. They can't add idiotic restrictions to those without affecting the normal web service.


SPA being a problem was last decade. Headless chromium is pretty standard for scraping nowadays.
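
For example, a minimal Puppeteer sketch (the target URL is just illustrative, and the selector is kept generic so it doesn't depend on Reddit's obfuscated class names):

    // Minimal headless-scraping sketch (npm install puppeteer). It renders the
    // SPA in a real browser engine, then pulls comment-thread links out of the
    // rendered DOM, so no knowledge of internal APIs is needed.
    const puppeteer = require('puppeteer');

    (async () => {
      const browser = await puppeteer.launch({ headless: true });
      const page = await browser.newPage();
      await page.goto('https://www.reddit.com/r/programming/', { waitUntil: 'networkidle2' });
      const threads = await page.$$eval('a[href*="/comments/"]', links =>
        [...new Set(links.map(a => a.href))].slice(0, 10)
      );
      console.log(threads);
      await browser.close();
    })();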


How does that work for clients, though? Should Apollo ship a headless Chromium in their mobile app?


Can they launch a browser view hidden and scrape it? I have no idea if they can read from it.


Isn’t that very expensive to run at scale, especially mixing in residential IPs to avoid blocking?


Each person would spider for their own needs, and most would use residential IPs.


Good to know. Good time to donate if I can. I love Apollo.


What poor timing for something like this.


Actually, perfect timing. It doesn't use the official API.


Well, teddit is not new, if that's what you mean. I've been using it for a while; the only downside is that it's a bit slow.


I've found that the instance hosted by privacytools.io is significantly faster than the official instance.

https://teddit.privacytools.io/


I personally like the Adminforge instance https://teddit.adminforge.de. It's much quicker than the original teddit.net.


Slightly off the main topic - but this is the first time I've seen codeberg.org (where Teddit is hosted). Looks like a serious competitor to GitHub, curious if anyone has worked with Codeberg and can list its pros and cons compared to GitHub / GitLab.


They have an FAQ: https://docs.codeberg.org/getting-started/faq/

One of the main pros seems to be that they're all-in on Free software.


For the Redirector browser plugin:

        {
            "description": "Reddit to Teddit",
            "exampleUrl": "https://www.reddit.com/u/rmhack",
            "exampleResult": "https://teddit.net/u/rmhack",
            "error": null,
            "includePattern": "^(https?://)([a-z0-9-]*\\.)reddit.com/(.*)",
            "excludePattern": "",
            "patternDesc": "Convert all Reddit http(s) subdomains to teddit.net",
            "redirectUrl": "https://teddit.net/$3",
            "patternType": "R",
            "processMatches": "noProcessing",
            "disabled": false,
            "grouped": false,
            "appliesTo": [
                "main_frame"
            ]
        },


There is also LibRedirect (https://libredirect.github.io/).


https://addons.mozilla.org/en-US/firefox/addon/redirector/ that is. I use the same add-on for nitter, teddit and invidious.


You and I are the same. :) Except that I have one more, for Wikipedia, to redirect to ?useskin=vector.
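
In Redirector's JSON rule format (same as the block earlier in the thread), that rule could look roughly like this (an untested sketch; the pattern assumes article URLs with no existing query string):

        {
            "description": "Wikipedia to old Vector skin",
            "exampleUrl": "https://en.wikipedia.org/wiki/Lynx_(web_browser)",
            "exampleResult": "https://en.wikipedia.org/wiki/Lynx_(web_browser)?useskin=vector",
            "error": null,
            "includePattern": "^(https://[a-z]+\\.wikipedia\\.org/wiki/[^?]*)$",
            "excludePattern": "",
            "patternDesc": "Append ?useskin=vector to Wikipedia article URLs",
            "redirectUrl": "$1?useskin=vector",
            "patternType": "R",
            "processMatches": "noProcessing",
            "disabled": false,
            "grouped": false,
            "appliesTo": [
                "main_frame"
            ]
        }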


Nifty!


On my personal Teddit instance, pages load approximately eight times faster than on reddit.com! There are also some nice UX features, such as being able to see the entire nested threads (like here on Hacker News) without expanding them individually.


Did you do anything to speed up Teddit page loads? I self-host too and Teddit is a bit sluggish since it seems to load everything on the page before rendering[1].

I tried Libreddit as an alternative, which is much faster, but I prefer the look/feel of Teddit.

1. https://codeberg.org/teddit/teddit/issues/248


No, I'm sorry to say I haven't made any patches for that. My broadband speed is about 5 MiB per second and it loads quickly on that, but Teddit does indeed time out quite frequently on GSM.


> eight times faster than on reddit.com!

vs new or vs old? New is dog-slow IME, old is faster.


If you don’t mind that Reddit knows your IP, and if you want to explore the NSFW side of Reddit, try https://reddtastic.com

You can view multiple subreddits by joining their name with a plus sign, e.g. https://reddtastic.com/r/nsfw+gonewild

Edit: Yes, the home page is NSFW! You can also browse

- Reddit’s front page: https://reddtastic.com/r/

- r/popular: https://reddtastic.com/r/popular

- r/all: https://reddtastic.com/r/all

- Or any other subreddit like https://reddtastic.com/r/aww


Whoa, MAJOR NSFW warning here!! Don't click this if you're not alone. I know the commenter mentioned it, but it sounded like it was optional. Nope!


While we're on the topic of privacy focused frontends, anyone have recommendations for similar YouTube frontends?


The LibRedirect website has a great list of alternatives: https://libredirect.github.io/



Invidious is another[1].

I use Piped with Yattee[2] over Tailscale and it works great.

1. https://github.com/iv-org/invidious

2. https://github.com/yattee/yattee


Piped and Invidious.

I prefer Invidious for vague reasons, though either it or the instance I use most often seems to fail about 25--50% of the time on videos, with more popular content (e.g., music) failing most often, so I'll fall back to Piped.

Otherwise I use mpv / ytdl, and in fact greatly prefer that approach.

For those preferring a standalone GUI, there's VLC.


I also quite enjoy the mpv/yt-dl(p) setup, and I often pair it with ytfzf[1] to ease the search part. FreeTube[2] is also a nicely done desktop frontend, capable of proxying requests through invidious.

[1]: https://github.com/pystardust/ytfzf

[2]: https://github.com/FreeTubeApp/FreeTube


Search is the one thing that mpv & yt-dlp don't handle. The now-moribund mps-youtube did offer search and curation and remains hands down the best interface to that service I've ever used. (It was throttled by API limits, I strongly suspect intentionally by Google.)

I've started seeing some CLI youtube search front-ends, though I've yet to try them. Thanks for mentioning ytfzf, as I'm not sure that's popped up for me before.


Invidious is the closest analogue, if it's still alive.


Post about Reddit API pricing and the Apollo app https://news.ycombinator.com/item?id=36141083


For those hosting their own Teddit or Libreddit instance: doesn't that reduce anonymity to Reddit by having only a single user? I have been using public ones to mix my traffic in with others.


https://rdddeck.com is also a good Reddit front-end. It's like TweetDeck but for Reddit. Supports mobile as well.


Does it do the scraping before you visit the subreddit? Is there any local caching involved?

Or are all the improvements from skipping the client-side garbage calls?


What are the chances teddit is stealing our data?


If it's anything like Nitter, it doesn't use the public API and so is not as vulnerable to Reddit's changes.


If you’re on iOS, Hyperweb has support for Teddit redirects; very convenient.


Hyperweb looks great!

Is there any way to have it rewrite Google Flights URLs so the currency defaults to my home region?

Google Flights always shows results in the local currency.

Even when I’m logged into Google Flights, even when it has a cookie from my previous search, even when I’m on a VPN for my home currency region, it still defaults search results to the currency of wherever I’m sitting.

Can Hyperweb override this?


Very slow, but I think that’s a feature. It makes Reddit less addictive.


It's plenty fast for me because I self-host it and don't use a public proxy. You can self-host it in a Docker container locally if you want. The real plus is that there is no way to comment or even vote, which makes Reddit less of a time waster.


I ended up deleting my Reddit accounts. I spend less time manually typing subreddits in, or even using Reddit at all. I'm sure something else will take over, but for now, between this and Tildes, there isn't much left. There is more time for books or other things.

I wonder whether, if a huge swath of people started batch editing/deleting (poisoning) old comments, filing GDPR delete requests, and deleting accounts, Reddit would respond. It would greatly reduce their search clicks, and we'll be on Stack Overflow, Discord, or something else soon.


In my case I just run

      lynx gopher://gopherddit.com 
with a tweaked lynx.cfg to display images with sxiv and videos with mpv.
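
The lynx.cfg tweaks are along these lines (a sketch using the VIEWER directive syntax from the stock lynx.cfg; adjust the MIME types to taste):

      VIEWER:image/jpeg:sxiv %s &:XWINDOWS
      VIEWER:image/png:sxiv %s &:XWINDOWS
      VIEWER:image/gif:sxiv %s &:XWINDOWS
      VIEWER:video/mp4:mpv %s &:XWINDOWS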

Only for /r/retrobattlestations, maybe /r/worldnews and /r/vintageunix.


I'd rather have a reddit replacement.


.net?

Oof. I mean I know it doesn’t really matter, but anything with a .net address just seems dead on arrival.

Are you married to the name / domain?


Also:

lynx gopher://hngopher.com



