Raspberry Pi as a local server for self hosting applications (cri.dev)
356 points by christian_fei on Sept 14, 2020 | 314 comments



The hardware is there (RPi + USB storage). The server software is there (NextCloud, Plex, n8n, etc). What isn't there is the plumbing. The next logical step after this blog post is making your services accessible to your phone over the public net. You'll immediately find yourself mired in domain name registration, VPS management, TLS cert management, dyndns, port forwarding, hole punching, etc etc.

There are lots of great tools that solve some of these problems. I have yet to find one that solves all of them.

I think we need something like Namecheap + CloudFlare + ngrok, designed and marketed for self-hosters and federators. You simply register a domain and run a client tool on each of your machines that talks to a central server which tunnels HTTPS connections securely to the clients.

Mapping X subdomain to Y port on Z machine should take a couple clicks from a web interface.


> The next logical step after this blog post is making your services accessible to your phone over the public net. You'll immediately find yourself mired in domain name registration, VPS management, TLS cert management, dyndns, port forwarding, hole punching, etc etc.

You don't need any of that with Onion Services. Tor doesn't just anonymize; it also offers easily configurable services with NAT punching, a .onion domain, and e2e crypto for free. And setting them up is easy enough: https://community.torproject.org/onion-services/setup/

You'll just need tor or a tor browser to access those services, but that shouldn't be a problem for many self-hosting setups
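For reference, the whole thing on a Pi is a couple of lines in torrc; a minimal sketch (the directory path and the local port 8080 are just examples):

  sudo apt install tor
  echo 'HiddenServiceDir /var/lib/tor/my_site/' | sudo tee -a /etc/tor/torrc
  echo 'HiddenServicePort 80 127.0.0.1:8080'   | sudo tee -a /etc/tor/torrc
  sudo systemctl restart tor
  sudo cat /var/lib/tor/my_site/hostname   # your new .onion address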


Isn't tor exceptionally slow though? I haven't used it in a few years - has anything changed?


Most slowness comes from exit nodes, which you don't need to access onion services. There are also single-hop onion services, which should be a bit faster while sacrificing server anonymity, if I understand correctly.


Is there a safe service I can try out to see how fast it can potentially be?


Depends on your definition of "safe", but there's an official Onion service for Facebook for example: http://facebookcorewwwi.onion


You can watch videos on it fine these days - though I wouldn't recommend, for social ethical reasons, using it for streaming movies/music off your personal server.

Unless of course, you are running a node.


It is. It's not practical for this use-case.


It’s slow, with the NSA hunting at exit nodes.


Exit nodes aren't used if one connects to an onion service.

And for all we know, the NSA snoops traffic at all major internet exchanges, so tor exit nodes might get extra attention, but so do, e.g., people whose search history suggests they might be sysadmins (if I remember the reports on XKeyscore selectors correctly).


If this idea sounds great but the Tor part is a bit much for you, have a look at tailscale.com

It sets up WireGuard (also feat. NAT hole punching) in a mesh between your devices. You can static route things to it using standard firewalling/iptables/etc if you feel the need to.

It's basically having a LAN but you're on the LAN even when you're not at home.
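Getting a device onto the mesh is roughly this (a sketch; the install script is the one from their docs):

  curl -fsSL https://tailscale.com/install.sh | sh
  sudo tailscale up        # prints a login URL the first time
  tailscale ip -4          # the device's stable 100.x.y.z address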

Edit: Hahaha. I discovered Tailscale myself through this thread, left the tab open..


Wow that is interesting, never thought about this usecase for Tor before, that looks like a fun project to figure out


> I think we need something like Namecheap + CloudFlare + ngrok, designed and marketed for self-hosters and federators. You simply register a domain and run a client tool on each of your machines that talks to a central server which tunnels HTTPS connections securely to the clients.

PageKite (https://www.pagekite.net) is what it sounds like you're looking for. It'll set you up with a url, SSL, and a tunnel in about 30 seconds. Highly configurable for your own domain if you'd like, multiple ports etc.
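For anyone curious, the quick start really is about that short; a sketch (yourname.pagekite.me is a placeholder kite name, 8080 whatever local port you're exposing):

  curl -O https://pagekite.net/pk/pagekite.py
  python pagekite.py 8080 yourname.pagekite.me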


That PageKite pricing slider is really great. https://pagekite.net/signup/?more=bw


It looks like it could be good, but doesn’t slide on Safari on my iPhone.


and there is inlets too https://github.com/inlets/inletsctl


With a couple of fiber providers I've been lucky enough (it wasn't luck, I chose housing based on ISP availability) to get business-class gigabit with a static IPv4 address and IPv6 for ~$100/mo, which solves lots of problems.

Plex does the plumbing for you, I think NextCloud might too.

Doing DNS just seems like another thing to set up, which is fine. Pay Namecheap, then pay a big-boy DNS provider (I like dnsmadeeasy), then register some domains.

I don't really want a solves-everything tool, because I don't see a way for it not to be really opinionated and hide everything behind its own abstraction, which isn't really any better than the interface it is hiding. Maybe a series of how-to whitepapers would be better, to build up the requisite knowledge to figure these things out.

I'm not a fan of the old school configuration hell where you have to spend hours/days/weeks trying to figure out the correct set of software and config options to do something right, but I'm equally not a fan of completely canned solutions that hide everything in favor of a single button to push. I'm not a technician; I don't need to have everything done for me, but I do appreciate tools where the right configuration interface is provided. That is, sane defaults, well-documented options, meaningful errors and sanity checking, and options given in the right way.


This is exactly why containers took off for self-hosters. Installing Plex went from a blog post to `docker run plex` (ok there is a little more nuance than that but you get my point).

Docker allowed me to be far more competent on a Linux box than my skill set should have permitted at the time. I no longer needed to know how an application ran, just that it did. Provide some persistent storage and you’ll probably never have to configure that app again. Amazing.
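For the curious, the "little more nuance" is mostly just mapping the config and media directories; a sketch (the host paths are whatever you use, and plexinc/pms-docker is the official image as far as I know):

  docker run -d --name plex \
    --restart unless-stopped \
    -p 32400:32400 \
    -v /srv/plex/config:/config \
    -v /srv/media:/data \
    plexinc/pms-docker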


But if you have a package manager, 'yum install plex', 'apt install plex', etc. should all give the same experience. (Actually, figuring out the correct incantation of docker to get Plex running has taken hours of my life away.) There are indeed many blog posts about getting Plex to run in docker.

The problem is that package managers are bad (well, apt is bad, rpm is pretty ok, freebsd ports are pretty good, there are many others) and package maintainers are bad; packaging always seems like the job they give the intern to figure out instead of making it a cornerstone of usability.

Seriously, spend a day setting up things running on freebsd with the packaging you have and it will be a breath of fresh air. Nearly everything you can think of all put together in packages in one place, and most of them start working in an expected way with zero configuration fiddling, install and start.

If package management were better, docker might not have existed at all, people keep confusing it with decent package management.


I've got IPv6 from my ISP - I gave my pi a static v6 IP, set up cloudflare and told it that v6 IP, and cloudflare handles making it available to the v4 internet. Has worked pretty well so far, just for tinkering stuff, nothing 'production'.


Most home routers have functionality to do port forwarding and dyndns. Certbot for TLS/certs. Just get a free domain from the dyndns service. No VPS needed if you've got your own hardware/Raspberry. Make sure you have backups. Just don't expect a $30 PC to be without issues.
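As a concrete sketch (assuming nginx in front and ports 80/443 forwarded on the router; myhome.duckdns.org stands in for your free dyndns name, and the package names are the Debian/Raspbian ones):

  sudo apt install certbot python3-certbot-nginx
  sudo certbot --nginx -d myhome.duckdns.org
  # certbot installs a renewal timer, so the cert renews itself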


Of course they do, but a lot of people are behind CGNAT and do not have a public IP address assigned to their CPE, and as such reaching a device in the LAN from the outside - without the device reaching out to you first - is impossible.

And, of course, it can't reach out to your phone because it's also behind CGNAT. So, you need a VPS to act as a bridge between your phone and the device, which would connect to the VPS on boot and route traffic through a tunnel.


CGNAT is a pain but if you have that type of internet connection you should just forget about hosting anything from home. Maybe ipv6 works if you're lucky, but your ISP clearly won't make it easy for you.

A VPS with a VPN works, but there are alternatives. Some online services provide port forwards for free or for a price (ngrok style), or, if push comes to shove and you only need access yourself, you can probably host your services as a Tor hidden service and bookmark it.

I don't think I've seen that many ISPs do CGNAT though. Even mobile carriers often expose some ports to the outside world for IoT crap with a SIM card. Maybe CGNAT is more prevalent in other countries but that doesn't mean other people can make use of these guides.


In general in the Balkans there are cable ISPs (biggest one owned by United Group for example) that give out CGNAT IP addresses to all residential cable users. You can buy a static IP address for 5€/mo, but it's a painful procedure with bureaucracy for some reason.

On mobile networks, you are assigned a CGNAT IP address per cell base station per device, and they are then all mixed into a few public IPs. There are no ports open, and they cannot be opened, because it is not possible to assign ports to a mobile device and have the user know which ports to use.

Because hundreds or more users share a single IP address, you'd have to randomly assign them ports and keep track of devices entering and leaving the service area to delete the port mappings, which is not economical.

Ironically, one mobile carrier, Telenor, has a "feature" where, on 3G only and with a certain APN, they assign a public IPv4 address to your mobile broadband interface. The only catch is that it is reachable only from Telenor's network, except on ports >10000.


Do those ISPs with CGNAT also provide IPv6?


No, and they have no excuse.

The cable ISP recently switched to a DOCSIS 3.1 network in certain areas, and almost all customers have a modem/gateway that has IPv6 support perfected (IPv6 is quite an old thing, would be weird for it not to work properly now - on the CPE side), but nope, they don't want to.

They don't like their users being able to host content, given the fact they make it extremely difficult to pay for a static IP and get a modem or have your gateway switched into bridge mode (the newest models have had that removed from firmware, and the ISP downgrades you to 3.0 speeds if you want bridge and pay for a static IP). I am not sure why, but the whole company is extremely antagonistic to the idea of a user having a public IP address of any kind.

Mobile carriers do not support it either, and have no excuse at all, given the modern LTE Advanced networks they have deployed, with VoLTE, and the modernized core infrastructure by Huawei to the highest standards of the 4th generation of networks. Except IPv6 of course.

This is the same in most Balkan countries and in other parts of the world.


That is odd. I'm not involved in networking or ISPs, but supposedly a big motivation for IPv6 is to reduce load on CGNAT systems -- they're expensive, and problems with them generate support calls.

e.g. https://www.retevia.net/prisoner/


It is absolutely not logical, I know. That's the weirdest thing. I just don't know WHY, but the whole thing is oriented against letting the user host anything AT ALL COST.


> behind CGNAT and do not have a public IP address assigned to their CPE, and as such reaching a device in the LAN from the outside - without the device reaching out to you first - is impossible.

It’s pretty easy and free to get IPv6 from HE’s TunnelBroker.


I believe TunnelBroker can't work with CGNAT.


Huh? Why can’t you assign a port/s to the CPE? You can even implement a port knocking scheme if you’re worried about some service/s on your home network being wide open to the world.


The point is that outside traffic isn't even reaching anything you even have control over, because you don't have a public IP (i.e. the ISP won't set up port forwarding for you). Let's say you wanted to directly send a packet to my phone. There's no way we could make that happen even with both of our cooperation because my phone doesn't get a publicly addressable IP.


NATed IPs are a PITA. But any decent ISP will give you a public IP if you ask. We are running out of IPv4 addresses.


> if you ask

And pay ;)

Getting a static IP on an internet plan here in Australia will typically cost around $5 a month, and not all ISPs offer it on residential plans.


I hope that goes away with ipv6


For me it's not the $30 PC that has issues but the $10 microSD card the OS is on. I've had to replace it for the third time in about 6 years now.

But this is nothing new of course; everybody will tell you to either use good uSD cards or put the OS somewhere else, like an external HDD or so.


Many Raspberry Pi distros spend all the card’s TBW writing /var/log stuff for no reason. It’s a known issue, often attributed to “cheap microSD cards”.


Yes, just to explain why:

microSD cards (and all flash storage) have a limited number of writes. If you let your RPi write logs, you will soon run out of writable space, and if you are lucky you end up with a card that is read-only. If unlucky, it just stops working altogether.


This is purely an anecdote, but I had to replace a few SD cards in my Pi until I changed the power supply. It's not even the official one, just one that's rated for 5.0 V, 2.5 A. I've used the same card for a few years now.


It's possible to boot from USB with the Pi4, I wouldn't run anything important from the MicroSD.


Even if direct USB boot does not work out you can just use the SD card to boot the kernel and have the root fs on USB.

With the Pi 4 you can probably even get better performance from using a USB storage device, as it has USB 3.0 ports.
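A rough sketch of the SD-boots-the-kernel, root-on-USB setup on Raspberry Pi OS (assuming the USB drive shows up as /dev/sda and the root filesystem has already been copied to /dev/sda2):

  # kernel still loads from the SD card; point its root= at the USB partition
  sudo sed -i 's|root=[^ ]*|root=/dev/sda2|' /boot/cmdline.txt
  # also edit /etc/fstab on the USB root so the "/" entry points at /dev/sda2,
  # then reboot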


A common setup is to pull the OS over the LAN and just use usb storage for all your actual stuff. Skip the sd card entirely except maybe as an OS fallback.


Go with the high endurance microSD cards. They are only a tiny bit more expensive and last a lot longer. If you’ve got a Pi with USB3 ports, I’d also recommend using an USB3 SSD flash drive for the majority of your writes.


Like the Samsung Edge series for Dashcams/IoT devices? I wish I could get some.


May I ask what filesystem you used? Maybe F2FS would extend the life of the flash memory.


Use IPv6.

It doesn't have the address exhaustion that caused providers to implement CGNAT and dynamic IPv4 addresses.

No need for VPS management, DynDNS, port forwarding, hole punching. You still need public DNS, but you can use public DNS as your internal zone as well (no need for split DNS). You also still need PKI, so maybe set up a reverse proxy for SSL termination with a wildcard certificate.
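For the wildcard cert you need the DNS-01 challenge; with certbot that's roughly (a sketch; home.example.com is a placeholder, and --manual means pasting a TXT record by hand unless you use a DNS-provider plugin):

  sudo certbot certonly --manual --preferred-challenges dns \
    -d 'home.example.com' -d '*.home.example.com'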


I look forward to the day everyone can have IPV6. For now many of us still have to deal with NAT sadly - especially if we're open sourcing our software for users to deploy on any network.


Everyone can have IPv6 today by using a tunnelbroker. I used the free tunnel from https://www.he.net/ in the past, when I didn't have native v6. Today I don't need it anymore.


There's a comment above that indicates tunnel brokering can't handle NAT situations (at least CGNAT).

RFC3053[0] seems to indicate this can be a problem as well:

> 3. Known limitations

   This mechanism may not work if the user is using private IPv4
   addresses behind a NAT box.

Are you saying it works even behind a NAT?

EDIT: According to HE's own FAQ[1]:

> If you are using a NAT (Network Address Translation) appliance, please make sure it allows and forwards IP protocol 41.

That doesn't sound like something most ISPs are likely to support. Not sure about home routers but if it has to be configured manually we're back to square one.

[0]: https://tools.ietf.org/html/rfc3053

[1]: https://ipv6.he.net/certification/faq.php


I don't know exactly anymore, because I'm now with a different ISP which natively supports v6. So can't reproduce.

I mean I (probably) could, but don't want to, because now I have IPv4 via CGNAT, not with a private IP but with a public dynamic one, probably shared with who knows how many others.

But I can use IPSEC/OpenVPN/Wireguard to somewhere else with that. Though my CPE supports GRE.

Anyways, there are large implementation differences in CGNAT from ISP to ISP and even different access technologies within the same.


Wow, am I getting this right? It handles NAT traversal for you behind the IPV6 address for free??


What do you mean by that exactly? Initially it's just an outgoing tunnel to one of their many exits, to reach any site which is reachable via v6. How you integrate that into your setup is up to you. Since they are (one of?) the pioneers you have many scripts available on many platforms which support that.

If you mean an incoming tunnel, it's no different from the many dynamic DNS solutions, where it's again up to you to integrate it. But even for this they have something:

https://dns.he.net/


Yeah, dynamic DNS but for an IPV6 address was what I was meaning. Very interesting.


Have fun. It's cool to have. If only to get acquainted with that v6y stuff.


Or just set up Tailscale, which takes about two minutes.


Wow, yeah Tailscale looks like it basically does everything you'd want for this: https://tailscale.com/blog/how-tailscale-works/

I didn't even realize this was possible: https://tailscale.com/blog/how-nat-traversal-works/

I had seen some of the people working there comment on twitter, but I don't think those blog posts were written when I last looked them up and I didn't understand what they were actually doing.

This looks like the answer for most people if you don't need to give public access to the stuff you're hosting.

If you do though, I'm still not sure what the thing to do is. If I wanted to host my blog from home instead of via github pages or digital ocean, what's the right way to do that? Is there a reason nobody does this?


When I was young I served my websites off my home network. Dynamic DNS would update my A record if my IP changed, but I managed to trick my ISP into effectively giving me a static IP. DMZ'd a host on my network and set up a firewall, and you're off to the races.

Nowadays I just pay for a $5 VPS somewhere -- my uptime is significantly better this way!


Do you use the $5 VPS as like a reverse proxy and you're still self-hosting at home? Or did you move your self-hosted applications to the VPS?

I am setting up a self-hosted lab and looking at (securely) setting up remote access. Was leaning towards OpenVPN as pfSense supported it, but have been considering a locked down VPS remote proxy too (at least for some services) and happy to hear thoughts.


[Tailscale founder] One thing you can do here is use tailscale to connect all your devices together, including that VPS, and then set up a reverse proxy on the VPS that forwards queries to your various devices over tailscale.
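A minimal sketch of what that proxy config might look like on the VPS (assuming nginx; 100.101.102.103 stands in for the home device's Tailscale IP, blog.example.com for your domain):

  # /etc/nginx/conf.d/blog.conf on the VPS (sketch)
  server {
      listen 80;
      server_name blog.example.com;
      location / {
          proxy_pass http://100.101.102.103:8080;  # Tailscale IP of the home box
          proxy_set_header Host $host;
      }
  }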


Tailscale runs on WireGuard and therefore requires elevated permissions on each client device. That shouldn't be required for simply proxying a local port.

Does Tailscale offer domain registration and TLS certs?

Also, is there any way to allow public access to certain ports on certain machines, ie if you wanted to run your personal blog on your RPi?


I mean I suppose it requires elevated permissions but frankly it doesn't require any more permissions than most software, so this feels like a weird point to pick on. You need elevated permissions to bind 80 and 443, etc., right?

You mentioned accessing your own devices from anywhere, and that's what I use Tailscale for. It was a dream to set up, and for my own services, I don't need TLS or custom domains, really. I have a few shortcuts on my phone that work everywhere, Tailscale IPs are static.

> Also, is there any way to allow public access to certain ports on certain machines, ie if you wanted to run your personal blog on your RPi?

This is sorta outside the scope of what Tailscale aims to solve, but one of the cool things you could do is just run a proxy somewhere publicly accessible and route requests to your RPi.


> I mean I suppose it requires elevated permissions but frankly it doesn't require any more permissions than most software, so this feels like a weird point to pick on. You need elevated permissions to bind 80 and 443, etc., right?

I think maybe you're misunderstanding what my goal is. If I have a local webserver running on my laptop on port 8080, I want to expose that via HTTPS on a public domain. The server that terminates the HTTPS connection needs root to run on port 443, but my laptop doesn't need root to start the upstream webserver on 8080, and it shouldn't need root to tunnel it to the public server either.


[Tailscale founder here] If you're using a mac, you can just install Tailscale from the app store, which does not require root (thanks to the "magic" of Apple's extension signing).

Another experiment we're doing is integrating a completely userspace network stack, which could someday be good for this: https://twitter.com/bradfitz/status/1301937179636068352


I don't use mac.

I haven't dug into the WireGuard spec yet, so this might be an ignorant question: Do you think it would be possible to create a client that can talk with WG servers normally, but on the local side it forwards to a specific port, rather than a network interface? That would avoid the root requirement. I'm guessing the answer is no since it sounds like you guys are working on integrating a custom non-WG solution.


I think there is a userspace version written in Go that shouldn't need root access.


Unless I'm mistaken, wireguard-go[0] only runs the WireGuard protocol code in userspace rather than the kernel. It still requires configuring network interfaces which requires root.

[0]: https://github.com/WireGuard/wireguard-go


My RPi 4 has been running Tailscale at home for some time, forwarding to my home network. Works great and very stable.

I think somebody even compiled Tailscale to run natively on my Synology NAS.


I tried running nextcloud on an rpi. It just doesn't cut it. I had the 4gb model and nextcloud runs but its a horrible experience. You go on the web UI and click a photo and it takes 10 seconds to load. Moved my server to a ryzen 5 based setup and now everything is instant. I'm not sure what the limiting factor on the rpi was because the ram and cpu usage was low. Perhaps it was memory or storage speed.


I have a Pi 4, 4GB model, running my NextCloud instance in a Docker container, along with Pi-hole and Home Assistant in another folder.

It’s always run perfectly fine for me and my needs, and I even tested having shared video calling in NextCloud and it continued to work great.

I’m not sure your configuration, but it might be worth trying on a Pi4?


I was using the Pi 4 with 4GB RAM. Were you booting off an SD card? That might have been my issue.


Booting off an SD card. I need to change that, but I’m lazy.

I do treat it like I would a Dropbox. I store photos I want to save, documents I want to save, etc. I was using it for recording trips for a brief bit, as well.

I’ve used it to share pictures with friends from our hikes, and I’m on a very fast internet connection.

My usage loads might be sufficiently low to not be a problem. I’m not constantly streaming from it like LTT does their NAS. For major software projects, I might use it as a remote git repo.

I probably have a high speed SD card.

There are times when it’s slow, but not too often. I forget what I’ve done to resolve that.

Also, running the NextCloud app on my computer has never been slow and that’s my normal use case for file management on it.


On an SD card, or an SSD in a USB enclosure with UASP?


And if USB, make sure to test the speed. Some controllers need quirks enabled[1] to get speed, including a lot of popular JMicron ones. Mine went from ~20MB/s to 300MB/s for a Samsung 850 SSD.

[1]: https://www.raspberrypi.org/forums/viewtopic.php?t=245931
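For anyone hitting this, the fix from that thread boils down to a usb-storage quirk on the kernel command line (a sketch; the VID:PID shown is just an example, use whatever lsusb reports for your bridge):

  lsusb    # note the ID of the USB-SATA bridge, e.g. 152d:0578
  # then prepend this to the single line in /boot/cmdline.txt and reboot:
  #   usb-storage.quirks=152d:0578:u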


This was using an sd card.


I moved all my home cloud stuff to an old dell small form factor computer that I bought for 40 bucks from a company selling all their old inventory. It is an i5 4500-something that beats the pi4 in everything except 4k output. It also has an SSD and 8gb ram.


Agree, the pi is way too low power to handle nextcloud comfortably. In the same price range, it's much better to buy a used NUC (even with a celeron!) compared to the beefiest Raspberry Pi 4 out there.


TBF I think this says more about Nextcloud than the Raspberry Pi. I also suspect it can be helped with some setup - I imagine the difference in disk I/O performance is going to be greater than the CPU difference if you're comparing a Pi 4 to a recent-ish PC.


I'm doing the same, my disks are much faster than what should be necessary. Exactly the same setup and observations.


I had the same experience with a pi 3 and didn't even try a pi 4. I set up a modest intel box and it was interactive.

The 4 might have a better chance with gigabit ethernet and faster USB, though I was still using a fast Samsung Fit drive, which wasn't fast enough.

To be honest though - the intel box was modest, but still 4x the price. Additionally it's hard to beat a pi for installation - just insert the sd card. (there is always configuration)


There's a lot of solutions sibling comments have already brought up, but I don't know if it should be this automagical. Keeping services up to date requires effort, money, or a big reduction in freedom of what you can do with your server.

There's a fully automatic mail server program, Mail-in-a-Box, that tries to be this instant "just make it work" system. The result is that the host OS was severely outdated for years, because upgrading configuration automatically is difficult, and because the system manages DNS for you, adding a new subdomain to your server is more of a challenge than it should be.

Similarly, automatic service install and management tools like Plesk, cPanel, and ISPConfig have been around forever, but they always come with some limitation. I think Sandstorm.io is a quite recent tool of this sort that runs Docker, so you have a bit more control.

All of these still require occasional maintenance though. If you can't figure out how to point a DNS name and a wildcard to your IP, then I'm not sure if you should be exposing services on the internet like that. If you don't update for a while your nice, powerful server Raspberry Pi might suddenly be DDOS'ing random websites without you even knowing about it, and all you can do to prevent that is to keep your (limited) software stack updated.

All attempts to make this easy for the general public have so far shown that people don't like to press the update button; even rebooting Windows is a risk some people just aren't willing to take, which is why Microsoft had to force reboots in Windows 10. With that kind of risk out there, freely connecting whatever to the web and forgetting about it, I'm glad there's some technical requirements before you can host something.


Sandstorm(.io) is very cool, and it does make managing your self-hosted web apps very easy. But it does not run Docker containers and it only runs on Linux x86-64. (There have been some attempts at running Docker containers with Sandstorm, but they are not easy to use.) Instead, the web applications must be specifically packaged for Sandstorm.


Oh, I suppose I was mistaken. Perhaps I confused it with one of its competitors I can't remember the name of right now.


I've been going down the rabbit hole looking through different software in this space. I started an awesome-list to track what I've learned:

https://github.com/anderspitman/awesome-tunneling


https://cloudron.io can do most of this, minus the hole punching. The port forwarding is very router specific. I think maybe there is some UPnP interface for this, but I'm not sure how widely it is supported.



Caddy is great, and it'll take care of managing the TLS certs. There's a lot left on my wishlist above...
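For reference, the whole Caddy config for one service can be this small (a sketch; the domain and local port are placeholders, and it assumes ports 80/443 reach the box so Caddy can fetch and renew the cert):

  # Caddyfile
  blog.example.com {
      reverse_proxy localhost:8080
  }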


Cloudflare argo tunnels are exactly what you're looking for https://developers.cloudflare.com/argo-tunnel/quickstart


I'm aware of argo tunnels. Unfortunately:

* Argo smart routing is 5 USD/mo + 0.1 USD/GB. The 5/mo is fine, but the data charges could add up quickly for something like Plex.

* CloudFlare doesn't sell domains.


> CloudFlare doesn't sell domains

They do have a domain registrar intermediary [1], announced two years ago [2]. It's in cooperation with dount.domains and has competitive pricing, so it could be argued that they do sell domains.

Not affiliated with them in any way.

[1]: https://www.cloudflare.com/products/registrar/ [2]: https://blog.cloudflare.com/cloudflare-registrar/


Looks like you can only port domains you already have. Announced 2 years ago and still no general availability isn't a great sign.


Check out KubeSail! Not affiliated in any way. They make it super easy to do the plumbing and networking, and have a Kubernetes cluster on a Raspberry Pi.

If you ever wanted to learn k8s without spending $80/month on a cluster, it's the best way to learn it!


Thanks for the shout-out! I wanted to post "we do exactly this!" but didn't want to be an advertisement, so I appreciate it :P (co-founder of KubeSail, if anyone has any questions!)


Cool tech. Does KubeSail integrate domain purchasing? Why should I be required to learn kubernetes just to tunnel a local webserver to a public domain name?


We offer free built-in (kubesail managed) domains, but don't offer domains for purchase - that would be nice to add eventually but we're a small company, so holding off on that for now!

Ideally, you don't need to learn Kubernetes any more than you need to learn Linux in that example - our Repo Builder will do its best to guess how to host your app on Kubernetes - and ideally the UI makes the rest feel like any other cloud platform. The benefit of not being locked in, and of learning open source tech instead of walled gardens, is hard to express!


> Mapping X subdomain to Y port on Z machine should take a couple clicks from a web interface.

Route 53 can work like that; it also has a CLI. (But you can't get the domain there.)


> (But you can't get the domain there)

There's Amazon Domains now.

Additionally, https://github.com/crazy-max/ddns-route53 works well as a dynamic DNS configurator for Route 53.

For most home users, a Docker-supporting server is the best option.

Traefik has ACME and labels-based configuration for Docker hosts. It is a good choice for multiplexing HTTPS services by subdomain names.
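A sketch of what those labels look like for a single container (the router name, domain, 'le' resolver, image name, and the shared 'web' network are all placeholders, and it assumes a Traefik container is already running with an ACME resolver configured):

  docker run -d --name blog --network web \
    --label 'traefik.http.routers.blog.rule=Host(`blog.example.com`)' \
    --label 'traefik.http.routers.blog.tls.certresolver=le' \
    --label 'traefik.http.services.blog.loadbalancer.server.port=8080' \
    my-blog-image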

In my opinion the biggest limitation is that there is no universal API for network routing appliances, whether it is your $30 home combo WiFi/router or your $20,000 Cisco device.

An access-key-authorized version of UPnP would be sufficient for the vast majority of users. Or even iptables commands over public key authenticated SSH.

But giant corporations - Google, Microsoft, Apple, Amazon, Facebook - are in the cloud business; Microsoft doesn't really ship a home server technology anymore.

The most popular home server software, like Plex, is really purposefully disruptive to giant software and media companies. By contrast you're going to have a bad time running your own Dropbox competitor from home, because that sort of technology is engineered around cloud computing.


Can it tunnel to local devices like a RPi or just AWS VMs?


You have to own the IP and map the RPi to the standard ports (80/443; you likely have to set that up from the router). Alternatively just do x.com:yyyy if you don't mind the port in the URL (though you probably do for an external-facing website).


No, it's just DNS. It doesn't provide any additional routing.


I'm currently thinking of using a reverse proxy through a WireGuard tunnel. That should also work for non-static home IP addresses. (I already have the domain and VPS.)


> Mapping X subdomain to Y port on Z machine should take a couple clicks from a web interface.

This is already the case. Routers mostly have easy web interfaces nowadays, and the same goes for DNS options at any domain name provider. What people need is a bit of knowledge. It takes me a few mouse clicks in a web interface to do this because I know what I am doing. Yeah, you could dumb down anything to a single button, but I don't think we should want that.


I tunnel everything through webRTC. It's a bit exotic but it gets you a direct bidirectional data connection to the self hosted device. You can put all users' self hosted content through a single domain name & SSL cert or you could have subdomains automatically provisioned for each device.

I'm using this WebRTC method for 3D printers at https://tegapp.io


Can you provide more details on what software you're using for WebRTC tunneling?


Sure, right now I'm using a Node.js WebRTC DataChannels implementation, but there's an up-and-coming Rust implementation which I'm quite excited to try:

- NodeJS DataChannels: https://github.com/node-webrtc/node-webrtc

- Rust DataChannels: https://github.com/lerouxrgd/datachannel-rs


HomeDrive ( https://www.homedrive.io ) is plumbing exactly this! We are currently only hosting Nextcloud, but we plan to support more apps and custom dockers. It is as easy as plugging the box into the home router.

There are still many features to implement, but we are working towards "easy self-hosting at home", and looking for early adopters.


How would you compare yourselves to KubeSail, mentioned above?

Why limit it to specific software, rather than simply port mapping?


HomeDrive features plug-and-use, with no system maintenance required. We target not only developers/hackers, but also end-users who would like to have a small server at home, to host their own data and services for their digital lives. As a result, HomeDrive also maintains the OS to be reliable and secure, which is why we picked the hardware. We are investigating supporting more hardware, such as raspberry pi's.

KubeSail feels like a more accessible k8s+Docker app platform to me. I am not sure what security model KubeSail is assuming for the operating systems it is running on (or does it assume the OS is out of scope?). Also, KubeSail seems to target mostly developers/hackers.


Cloudflare -> Router (only allows 80/443 traffic from Cloudflare IPs) -> nginx -> all self-hosted services (wiki, hass...)

Problems with the "easy" one click ones is that they tend to not be very secure. If they are supposed to be public access. Plex uses their cloud to secure the access and Synology to


I need this.

Bought a Namecheap domain registration for a small NFP.

Been swamped with how-to’s and learning things just to learn what to search for...

I could use something else that does it all...

But I want a level of authority none of those offer.. without the technical insight of “is this everything/enough”.


I can relate. I've thought about setting up a Caddy server to route through to the different services (nginx would also be fine). I have to try it out, and probably make a list of services in an HTML document returned on port 80/443.


This is a good option if your ISP doesn't block ports 80/443 and you don't mind setting up port forwarding.

EDIT: Oh and if you don't mind managing your DNS records manually, including dynamic DNS.


It's against Cloudflare's TOS to be serving mostly video/images for free. It would be a great idea, as it would solve the problem of IP exposure, but if it gained any traction it'd have to be shut down.


In my experience, you have to be way above 1 TB per day to get banned - I know because I pushed around 5000 simultaneous HD streams on my account back when I ran a pirate streaming service (Google my name if interested, it's on the Verge), and it lasted for a few days before I shut it down due to the MPAA, but I didn't get banned by CloudFlare. I still have the account that I did that on; it's from 2011 and still used for a few of my sites.


Start by having a look at subspace https://github.com/subspacecloud/subspace


The original repo is receiving very few updates and little maintenance, if any. I'd recommend the community fork at https://github.com/subspacecommunity/subspace instead.


I use a VPN + static IP or DDNS to get access to my home cloud/server (both the VPN and DDNS can be setup on my router). Also, there are free DDNS providers.


You can run zerotier and have your services on your own private network accessible from anywhere.


ZeroTier might solve some of this.


For internal only things you can use Wireguard VPN and any dynamic DNS provider.


Doesn't ngrok handle all of this?


Everything except domain registration AFAIK. Unfortunately it's closed-source and the pricing is confusing. It's not clear to me whether it would be a good choice for exposing a Plex server to your family and friends, for example.


One thing to watch out for when doing something like this is that the Raspberry Pi will by default put your file system on the SD card it boots from. SD cards aren't meant to support a lot of write/erase cycles, so it's easy to end up with a corrupt SD card after a few months to a year depending on what you're doing on your Pi.

A workaround that can save you some headaches here is to only boot from the SD card (which means you're effectively only ever reading from the card), and then mount a filesystem on an external SSD drive. There are a couple of good guides here [1] [2].

[1]: https://www.stewright.me/2019/10/run-raspbian-from-a-usb-or-...

[2]: https://www.pragmaticlinux.com/2020/08/move-the-raspberry-pi...


Most writes are from logs; I use log2ram [1], which reduces SD writes substantially.

[1] https://github.com/azlux/log2ram


Or for those using journald, use Storage=volatile in /etc/systemd/journald.conf and then `systemctl force-reload systemd-journald`. Remove /var/log/journal to get rid of the old persisted logs.
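Rolled into commands, that's roughly (a sketch; the sed assumes the stock commented-out Storage line):

  sudo sed -i 's/^#\?Storage=.*/Storage=volatile/' /etc/systemd/journald.conf
  sudo rm -rf /var/log/journal
  sudo systemctl force-reload systemd-journald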


Busybox's syslog logs to RAM by default. And it can be built with runit requiring no systemd if the distribution was built for this. Also Alpine Linux runs out of RAM entirely by default. Too bad RAM is constrained on these devices and I haven't been able to make it load g_serial in Alpine for the USB gadget console on the OTG port. I used a Pi Zero W.


Thx! Looks like just what I need.


Very true for a given value of true 8) You need to evaluate all the components. The RPi itself is a decent piece of kit, well tested and safe to use. Get a decent power supply for it - either an RPi-branded one or at least a decent mobile charger from a brand that you trust.

I generally use a decent USB stick nowadays. RPi 4 from about a month or so ago onwards will do this out of the box. You can also put a second USB stick in and clone the thing every now and then.

You can PXE boot them as well (citation needed) and that brings nfs and iSCSI to bear. That's my long term plan for fleets of them.

For the semi casual user, I recommend the dual USB stick combo. Quite easy to set up and you can always whip out the backup and test it on another device.


> RPi 4 from about a month or so ago onwards will do this out of the box

Are you saying you don't need an SD card at all and it will just boot off USB?


Yes, it can do that now.


I buy quite a few of them as a tinkerer and recent (~two months) ones don't need firmware/BIOS fiddling to boot off USB. Make sure you get reasonably recent stock.


Is the documentation outdated? https://www.raspberrypi.org/documentation/hardware/raspberry... says “USB booting is still under development“ and https://www.raspberrypi.org/documentation/hardware/raspberry... says “The Pi 4B bootloader currently only supports booting from an SD card.”


Two minutes ago I plugged in a RPi 4 with no SD Card in it. It does have a USB stick plugged in.


I really think the Pi doesn't make sense for this use case. I strongly recommend getting any random Intel Atom based mini PC off eBay; they cost a similar amount and have proper storage and a heatsink. I have never come across one where Ubuntu doesn't work.

Pi really shines when you are interacting with hardware sensors, or need ARM.


I am curious why you would use a fleet of them instead of a larger server.


They're great for tinkering with distributed systems. I.e. if you want to play with Kubernetes, you can build a master and 2 workers out of RPis for like $90. They won't have enough power to run anything substantial, but it's enough to play around with upgrading the control plane or mess with the network plane. Same thing if you want to play with failure scenarios on distributed databases, or whatever. It's doable with VMs, but every time I try to do anything fancy with VM or container networking, I end up spending hours reading documentation because my VMs refuse to communicate at all.
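If you want to try this, k3s (just one lightweight option, not something mentioned above) keeps the bring-up on Pis pretty painless; a rough sketch:

  # on the master Pi
  curl -sfL https://get.k3s.io | sh -
  sudo cat /var/lib/rancher/k3s/server/node-token   # join token for the workers
  # on each worker Pi
  curl -sfL https://get.k3s.io | K3S_URL=https://<master-ip>:6443 K3S_TOKEN=<token> sh -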

I also like it because they force me to use Ansible and actually make my installs repeatable. On my actual server, practically everything is some snowflake crap I did at 3am that I couldn't repeat if I had to.


I am an IT consultant and own the business. Home and work blur somewhat! I have a Dell tower (ESXi), a pfSense appliance, a micro mainboard based box, two ESP32s and a 24 port PoE+ switch in my attic. With a UPS and smoke detector. There's rather more stuff hidden away in cupboards and under the stairs. There's a 50m run of ethernet and SWA 240V (all ducted and marked etc) down to our summer house in the garden, which sports an eight port PoE+ switch, 2.4 and 5GHz wifi, and four double IP66 power outlets.

"Fleet" is a bit of an excessive term but I have four RPis at home, mainly for TV frontends for MythTV. There are rather more in the office.

The above is just a sample and not the whole story 8)


They only consume a few watts, have a footprint of a few square inches, and cost a few twenties of dollars. It's a sweet spot for doing lots of things. There are a few ways they aren't ideal, but still a great compromise.


But if you are running a fleet of them, wouldn’t a server be more efficient?


For whatever it's worth, I use Samsung Endurance SD cards [1] in all of my Raspberry Pi 4s. While I wouldn't say any of them are subjected to heavy load, I've never had an SD failure in the ~1 year of usage they've each seen.

[1] https://www.samsung.com/us/computing/memory-storage/memory-c...


Latest Pi’s can boot from external media. It’s possible to boot from an SSD


You can't just say that without including a link :P

Anyways, I assume that the following is what you are referring to: https://www.raspberrypi.org/documentation/hardware/raspberry...

In that case, the following from the linked page might be worth making note of:

> To enable USB host boot mode, the Raspberry Pi needs to be booted from an SD card with a special option to set the USB host boot mode bit in the one-time programmable (OTP) memory. Once this bit has been set, the SD card is no longer required. Note that any change you make to the OTP is permanent and cannot be undone.

Not that it matters much to me but still something worth being aware of if you later try to repurpose your RPi for something else I think.


That's specific to the

> Raspberry Pi 2B v1.2, 3A+, 3B, Compute Module 3

Further down,

>Raspberry Pi 3B+, Compute Module 3+

> The Raspberry Pi 3B+ and Compute Module 3+ support USB mass storage boot out of the box. The steps specific to previous versions of Raspberry Pi do not have to be executed.

Then the last one,

> Raspberry Pi 4

>The Raspberry Pi 4 currently requires non-default firmware to enable USB mass storage boot: see the USB mass storage boot section of the Pi 4 Bootloader Configuration page for more information.

But overall, it's possible in some way with all these versions,

>Available on Raspberry Pi 2B v1.2, 3A+, 3B, 3B+, and 4B only.


The OTP programming doesn't prevent the Pi from booting from an SD card, so it should be fine.


I've been using a RasPi for a Pi-hole for years and this is a constant peeve. No matter how many precautions I take, it eventually dies. The RasPi is neat for a lot of things, but I'm not convinced it's an ideal self-hosting platform. By the time you invest in the necessary add-ons, you might as well have gotten an actual used server.


This is where I am at right now.

I have wasted a lot of time and lost a not-insignificant amount of external data.

I am about to buy a FreeNAS mini tower.


> I am about to buy a FreeNAS mini tower.

FreeNAS is a good choice. Should last you for years, barring any kind of weird/unusual hardware problems (which aren't expected anyway). :)


Newer Pis do support booting from USB drives (flash or SATA adapter), and I think the firmware update enabling that on the 3/3b/4/4b allows SATA HATs and others to work, but I haven't read into or tested any of them.

Booting from USB protects the install better, allows easier access or setup on a PC, and I have more of them around to set up random OSes or systems to test. I have several Pis around, and each one that can boots from USB, including the Pi-hole systems at my parents' place and grandma's apartment.

Plus, the SD copier tool in Raspbian works with USB drives (and VHDs) so I set up the pihole for my grandma, cloned the usb drive, then sent both drives to her with the pi and called to walk her through plugging things in then used my existing remote tool on her laptop to finish set up. Now she has a backup USB drive that may need updates but is ready to go if the existing USB fails with pihole ready and everything. Plus once I update the backup drive I can clone it to the corrupt USB and she can store that as backup.

I know many people have hard lines for supporting friends/family. I've taken it in stride with super useful setups for relatives (pihole and a remoting tool if I set up their laptop/desktop) and beer or similar cost for friends. One recently spilled warm garlic sauce in their laptop so I pulled it apart while they scrubbed and wiped and some local beer is cheaper than a new laptop any day.


You can also netboot them, which adds a little latency but in terms of speed is likely even better than the SD card, now that the Pis have real Gbit.


Good points, thanks!

Just stumbled upon this today by coincidence, will definitely follow the suggestion, cheers


How do people access these servers off of their home network (or do they not?).

That seems like most of the value to me, hosting some service you can access from anywhere without having to use Digital Ocean.

It seems like most residential ISPs don't provide a static IP and some block port 80? I think forcing ISPs to allow home users to serve traffic via some standard method would go a long way to enabling a more decentralized web.

I know Zero Tier, and Tailscale exist - but I don't really understand how they work (and I think they require intermediate server access anyway so might as well use Digital Ocean?).

I'd like a future where you could sell users a raspberry pi running a service they can just plug into their home switch and access it securely from anywhere.


My ISP provides me with a CGNAT IPv4 address, so I can't hope to access anything externally over v4. They delegated a /56 IPv6 subnet to me though. So I just set up Prefix Delegation on my router and allowed in TCP port 443 in the IPv6 firewall. I set up an NGINX reverse proxy. I set a static IPv6 address in my subnet on my servers. My mobile phone provider has IPv6 dual stack. In my public DNS I set up an AAAA record. So I can access all of my services over IPv6 natively on my phone, which meets my needs (Syncthing, Airsonic, Bitwarden). I like that I don't have to have any split DNS, only one set of records. I don't have to hijack the zone for my internal network. It's like I automatically 'roam' when connected to WiFi: it gives me a higher-priority route to my server through WiFi rather than over the mobile network. It works really, really well.
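For anyone copying this, the two non-obvious bits are just the public AAAA record and the router firewall rule; a sketch (2001:db8::... is the documentation prefix standing in for your delegated prefix, and the ip6tables line assumes a Linux-based router):

  # public DNS: an AAAA record pointing straight at the server's static v6 address
  #   cloud.example.com.  300  IN  AAAA  2001:db8:1234::10
  # on the router: allow inbound HTTPS to that host through the v6 firewall
  ip6tables -A FORWARD -d 2001:db8:1234::10 -p tcp --dport 443 -j ACCEPT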


This is a good example of the benefits of IPv6.

Interesting.


Same here. Static IPv6 block of 256 subnets, no IPv4.


Dyndns to solve the static IP issue, and if not all ports are blocked, set up WireGuard on an open port and connect via that. To be honest I prefer not to expose a lot of these home server type projects directly on the web, as a lot aren't that secure. You're better off going via WireGuard.

The only place you get stuck and need an intermediary vps is if you are behind CGNAT. I came across this recently that helps set all that up. https://github.com/erikespinoza/v4raider


> How do people access these servers off of their home network (or do they not?).

Wireguard, listening on the public IP with port forwarding, and using a dynamic dns client to ensure I can always connect even if the public IP changes.
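The server side of that is tiny; a sketch (keys and addresses are placeholders, 51820 is WireGuard's usual port):

  # /etc/wireguard/wg0.conf on the home server (bring up with: sudo wg-quick up wg0)
  [Interface]
  Address = 10.8.0.1/24
  ListenPort = 51820
  PrivateKey = <server-private-key>

  [Peer]
  # the phone/laptop
  PublicKey = <client-public-key>
  AllowedIPs = 10.8.0.2/32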

> It seems like most residential ISPs don't provide a static IP and some block port 80?

Not the case here in my experience (Spain), but if you're fine being the only one with access you only need to forward the VPN port.

> I know Zero Tier, and Tailscale exist - but I don't really understand how they work

I only used ZeroTier a bit, but IIRC it was something like:

1) Create a new network in the ZeroTier One website
2) Download the ZeroTier client on your machine(s)
3) Enter the network ID
4) (optionally) authorize the device on the web UI
5) Now the device can connect to other ZeroTier peers on the network you created!

(So yeah, at least the "easy" way involves using their server, no need to selfhost it). Also this option should work without port forwarding.


I use WireGuard via Tailscale. It's been a breeze since switching from a self-managed solution on my Pis. Generating the keys, syncing them across the Pis, syncing the Pis' keys to the clients: all too much work. Tailscale has automated this.


Yup, Wireguard is what I use. I toyed around with both Traefik and Caddy as reverse proxies (not simultaneously, of course), but found it to be much more complicated to set up than a VPN. I wouldn't touch a reverse proxy for personal use again.


Can the ZeroTier client create a tunnel without root access? That's the biggest weakness of WireGuard IMO. One of the things I like about ngrok is it doesn't require root.


> Can the ZeroTier client create a tunnel without root access? That's the biggest weakness of WireGuard IMO.

No idea about ZeroTier, but you should be able to use WireGuard without root access using the userspace implementation in Go[0] (that's the one used in non-rooted Android phones, Windows, and maybe the BSDs)

[0]: https://git.zx2c4.com/wireguard-go/about/


I tried wireguard-go and it required root to create a tunnel. I wonder if it would be possible to adapt it to forward to a local port rather than mapping directly to a network interface.


ZeroTier uses central servers to assist machines behind NATs in finding each other.

These central servers basically exchange the external IPs of each machine on the virtual network. The nodes on the virtual network then try their best to establish peer-to-peer connections using those external IPs.

I use it all the time with a number of colleagues working from home and it works great! We can all join a virtual LAN and see each others machines behind our home broadband routers.

ZeroTier runs fine on Raspberry Pi. I use it to link machines at home with machines at work, on AWS, Azure, etc.


I've got a static-ish address, meaning that my ISP hasn't changed my IP in many years, even with modem or router reboots. I've been meaning to get a dynamic DNS provider, but it hasn't been a priority.

In terms of accessing local services, I'm using StrongSwan on a VM with the relevant ports forwarded from my router. Ideally, the router would run StrongSwan, but until I switch to pfSense I'm living with this setup.

iOS and MacOS devices get a .mobileconfig profile which automatically connects when needed and disconnects when the device returns to my home WiFi network. My Linux travel laptop can also connect, but I haven't figured out how to make this happen automatically yet.


I've been using DuckDNS [1]. If your IP address changes it could take up to 5 minutes to update, but in practice I haven't had any problems with it.

[1]: http://www.duckdns.org/


Why does the duck not have a beak in Firefox?

Why can I not sign up with email?

Why is there no way to contact the creators to ask these questions?


Same here!


I’m using a $3 VPS (Hetzner) as a VPN server and access my local servers that way. You also get a regular VPN for free that way, and setup is trivial if you use WireGuard.


I use a $15/year vps for this. Acts as a bastion to all the servers I connect to.


Could you say where you got that? I haven't seen many servers in the "dirt cheap and really tiny" space.


I use ramnode.com. It’s one of the openvz servers they have


Digitalocean, Linode, hetzner, etc, etc.


Where did you get $15/yr VPS?


GCP also offers a "always free" micro-instance that's more than enough for the task at hand. https://cloud.google.com/free/docs/gcp-free-tier#always-free


It's free outside of the IPv4 address, right? Pretty sure they started charging for that, or perhaps it is still delayed due to COVID.


You can find many low priced (and low spec) VPS deals on https://lowendbox.com. Just keep in mind you generally get what you pay for.


You can get 2 VMs from Oracle Cloud for free, forever-ish.


I use ramnode.com for it. It’s under openvz


+1. Allows me to access my Jellyfin and file server from my laptop no matter where I am, all for a few bucks and a good learning experience with Wireguard.


My home IP is technically not static but doesn't change, even with router reboots. I still have dynamic dns set up, however, because I don't trust that to not change. My ISP threw a warning when I forwarded port 80 (something about the TV service) but I haven't had any issues (though I serve stuff mostly off 443). It's actually really convenient, especially since I have a few ten-year-old laptops I can use to host stuff. Since I got symmetric gigabit FTTH, I can do basically anything with it, even hosting big files.


Tor hidden service. Simple to configure, just works.


I've considered that before. Also means it's possible to share with other trusted people unlike going the VPN route.


"Simple" and "just" are the two words that trigger my BS radar the most.


Done! Super simple install and configuration too!


If you have a Linux box that's always on on your network, you can throw in a simple cron entry to curl a dynamic DNS provider (entrydns.org works pretty well in my experience), which updates their DNS entry for a URL you set. Set up OpenVPN on your router, VPN to the URL, and voila: access to your self-hosted services.
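Something like this in the crontab is all it takes (a sketch; the update URL is a placeholder for whatever your provider gives you):

  # crontab -e
  */5 * * * * curl -fsS 'https://dyndns.example.net/update?token=YOURTOKEN' >/dev/null 2>&1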

You definitely shouldn't expose most of these things directly to the net, they're not always bulletproofed as much as one would like.


I just enabled OpenVPN server in my router (gl-inet, openwrt-based), transferred the client.ovpn file to my phone, and bam. Whenever I'm away from home, I light up the VPN on the phone, and bam. There's my TheLounge IRC instance and other stuff just as if I was local.

Oh, I also set up a dynamic DNS service on said router, even though my IP address seldom changes, Murphy's law says the most important time for me to be able to tunnel home would be after an outage or something that reassigns the IP.


Afraid.org's dynamic DNS. I have a single port forwarded for WireGuard, since it's just for "personal cloud" purposes.


Up until recently I used dynamic DNS, and it worked well for a small website and calendar server (Radicale).

For hosting an email server a static IP is all but required, so I got the free tier VM.Standard.E2.1.Micro VPS at Oracle Cloud. It has a static IP and I forward stuff to my rpi3 with dyndns. All you need for this is a credit card.


You can make public DNS records for your zerotier/wireguard/tailscale ip addresses (and make lets encrypt certs for them).

Some routers block RFC 1918 addresses in DNS responses, but you can turn that off, or put your virtual LAN in a different range.

Then you can have sql01.example.network point to 100.13.14.15


I just have a domain with DNS at dreamhost and a dyndns cron-job (in my case running off my openwrt router, which forwards some ports to the pi).

My ISP doesn't seem to block any ports or anything like that, nor have I ever had any such trouble in Norway (been through a few ISP's here)


One way to get a static IP is to rent a cheap VPS, put WireGuard on it, and use DNAT to forward traffic to the Pi as a wg client. Works well with an NGINX reverse proxy on the Pi redirecting traffic to anything on your LAN.
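The forwarding part on the VPS is then a couple of iptables rules once the tunnel is up (a sketch; 10.0.0.2 stands in for the Pi's WireGuard address, eth0 for the VPS's public interface; add FORWARD accept rules too if your policy is restrictive):

  sysctl -w net.ipv4.ip_forward=1
  iptables -t nat -A PREROUTING -i eth0 -p tcp -m multiport --dports 80,443 \
    -j DNAT --to-destination 10.0.0.2
  iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE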


My domain is hosted on namecheap and it has complimentary dyndns service. On my pi I have a cron dealio that occasionally shoots off to namecheap's API to update my home subdomain.


I use a little Raspberry Pi to host a little website as well. I use Cloudflare and run ddclient on the Raspberry Pi to notify Cloudflare when my dynamic IP address changes.
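
From memory the ddclient.conf looks something like this (zone, hostname and credentials are placeholders; check the ddclient docs for the exact key names your version expects):

    # /etc/ddclient.conf
    use=web, web=checkip.dyndns.org
    protocol=cloudflare
    zone=example.com
    login=you@example.com
    password=your-cloudflare-api-key
    home.example.com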


Switch to a better ISP.


I don't see how this is seriously a suggestion. Not everyone lives in Silicon Valley and earns a >20k/yr paycheck. There are other things to consider, like reliability and cost, that force your hand.


Anyway, an ISP that blocks port 80 or uses CGNAT is obviously not a good choice for such a user. So it's a serious suggestion if you can choose your ISP. There are many places where you can choose an ISP even outside of SV (maybe outside the US).


What else do you want? If your ISP blocks incoming connections then that's the end of the story. The answer is either complex things like Tor or switching ISP.


My dynamic IP address with Comcast is pretty much static but I guess it is not a guarantee... I can get it to change by spoofing my router MAC address though.


Same. Mine has been the same for at least 5 years.


Dyndns or similar.


I use noip.com


Dynamic DNS and WireGuard


+1


I recently thought about getting an RPi4 but ultimately spent about a hundred dollars more to get a cheap NUC. It was a bit more expensive but it seems like a more robust platform. A real AC adapter instead of USB (apparently the RPi4 kind of botched it in some revisions? What I read wasn't confidence inspiring), takes normal SO-DIMM RAM and a typical SSD, doesn't have a reputation for overheating... it seems generally more straightforward with fewer 'gotchas.'


The USB thing, while not great, was a bit overblown. The official power supply never gave me a problem.

As far as overheating, there are several passive cooling cases that handle the heat just fine. ETA Prime is one place to look for videos with tests.

No doubt you get more power and flexibility from a NUC, but Pis are pretty great for what they are.


If you have an i5 or i7, you may also have the advantage of remote KVM (via Intel's vPro) so that you can revive it remotely if needed.


Not to mention the built-in advantage of Intel's much better supported QuickSync video encode/decode if you're doing any sort of media streaming.


Won't matter if your main computer does the decoding. And given the price of one good computer that can stream to all devices (including crappy ones), versus all good devices plus a cheap NAS to host, which would you choose?


Guess the question then becomes, do you need transcoding on the device streaming?


If it only costs you $100 to upgrade all your non-server devices from crappy to good, you have to tell me where you shop.


I'm torn; I don't think of Pis as traditional servers and have two dedicated servers on my network. AND I have a half dozen Pis that I use for hosting OctoPrint, DNS, SSH, IoT, Pi-Hole, etc., which are traditionally server functions.

I tend to think of Pis in terms of single function appliances. They're obviously capable of more but they're so cheap you can just throw one at a single problem and forget about it.


Yes! They're the computer equivalent of buying a water filter for your kitchen sink or a dimmer switch for your bedroom. No point in redoing the plumbing or rewiring the entire house but you can throw one tiny cheap appliance in there and solve a single discrete problem pretty quickly.


I have a couple at hand near my desk at all times and can just flash a MicroSD card and boot one up in a couple minutes. They're incredibly handy for prototyping things.

I needed to test out an ONVIF setup, and I was able to flash a Pi Zero W and have a functional ONVIF camera in 20 minutes.

We got some window blinds and I noticed they used Z-Wave remotes, so I pulled out a Pi and set up an HA server running them on schedules within half an hour.


What advantage does a NUC have over, say, your old laptop? They seem to be the same price as laptops with laptop specs, but are missing most critical components and are a bit smaller.


My last 'old laptop as a server' dying is actually what prompted me to buy the NUC. It was a chromebook pixel long out of warranty. Until it bit the dust it suited my needs. Being a bit smaller is a nice bonus, it might be silly but I think the NUC is kind of cute.


Seems rather expensive; didn't you have another laptop laying around? I have so many that I don't think I'll ever see the point of them, especially with ThinkPads available for cheap or free.


The laptop I had before the last was an Asus Eeepc that developed a dodgy AC adaptor port. I still have it and it could be made to work, but it was never a very good computer in the first place. It runs loud and hot for what it is. Before that, and the oldest computer I still have in my closet, is my T60p thinkpad. I'd need to buy a replacement fan before using that one again, and I'm not sure it would be worth the hassle to resurrect (again.) If I were inclined to fix up an old thinkpad I think I'd rather get a second-hand X-series since they're not nearly so huge.


+ real time clock, definitely worth a few dollars.


Which NUC did you end up with?


NUC7CJYH1. With RAM and an SSD it was about $200, while the RPi4 kit I was considering was about $100.


Nice! I recently wrote a blog article about home-hosting on a RPI4 using kubernetes (https://kubesail.com/blog/k3s-raspberry-pi).

Such a bright future in home-hosting - really looking forward to seeing the movement grow! The https://www.linuxserver.io/ community is pretty great re: home-hosting apps as well.


I've set up a small cluster at home via Docker Swarm. I use it not only as an environment to prototype more complex apps, but also to keep my devops knowledge up.

I originally had a Cluster Hat (https://clusterhat.com/) which piggybacked four Pi Zeros on a Pi 3. Even cut out a nice acrylic case, too (https://climbers.net/sbc/clusterhat-review-raspberry-pi-zero...). It worked fine, and I even had Hadoop running on it for a while. The biggest issue was the ARMv7 of the host (Pi 3) vs the ARMv6 of the Zeros. This caused a lot of problems with deployments, essentially eliminating the redundancy you get from orchestration systems.


That's awesome! didn't know about https://fleet.linuxserver.io/ !

Gotta definitely try this, thanks


Yeah! They have a really great community in their chatroom as well - A lot of our Kubernetes templates are based on their excellent Docker images :)


Wow, linuxserver.io looks amazing! But the sheer number of images available makes me wish the table had a column with short descriptions, to know at a glance what each thing is. Most of the items (at least those I clicked) don't even have a description or a link to a home page, so it's difficult to get a quick overview.


How do I home host KubeSail?


If you run anything semi-serious please consult this blog post by Jeff Geerling about SD card performance (buy A1 ones).

https://www.jeffgeerling.com/blog/2019/raspberry-pi-microsd-...


It’s an rpi connected to a dual core laptop on a home connection, presumably without a battery backup. Nothing about this is remotely semi-serious.


I haven't been into computer hardware lately, but I decided to pick up a NAS, and was pleased to learn that they're now just a complete computer - I've started using VMs living on my NAS to do this kind of thing, which is quite nice. Synology's interface is not bad either, but I imagine others have come up with even better ways to use these systems.

Obviously a RPi is a way cheaper way to get a lot of the same work done though :)


Friendly reminder that you can use Piku (https://github.com/piku) for Heroku-like deployments.
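
If I remember right, deploying feels just like Heroku: add a git remote pointing at the piku user on your box and push (server and app names below are placeholders):

    git remote add piku piku@your-pi:myapp
    git push piku master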


I see it's inspired by Dokku (because Dokku doesn't support ARM). I use that for a few of my apps on a VPS, nice tool.

http://dokku.viewdocs.io/dokku/


I prefer CapRover; it supports inbound webhooks so you can hook it up to Gitea/GitLab/GitHub with its own ssh deploy key and it will autodeploy from a specific branch, so you don’t need to push to it.


awwwww yeaaah! ty


I'm using my old Thinkpad T510 as a home server. It's been running for 7+ years already. I only need to dust the vents once in a while.

It has PiHole, Nextcloud, my humble little Netflix clone, and a few other things. If you use ffmpeg a lot, you ought to have more power than the RPi offers. I often SSH into it to use it as a SOCKS proxy in other countries.
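
For reference, the SOCKS proxy is one ssh flag; point the browser at localhost:1080 afterwards (the host name here is a placeholder):

    # open a local SOCKS5 proxy on port 1080, tunnelled through the home server
    ssh -D 1080 -N -q user@home.example.com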


Yep, older laptops make perfect home servers/routers. Built in "KVM switch", built-in UPS, low power/noise, etc.

Before I switched to Ubiquiti I ran pfsense on a VM on my old 410s for many years, among other virtual machines for home lab use.


>low power/noise

how does that compare to RPI?


It's a regular laptop fan, so it's inaudible unless ffmpeg is converting a movie. Then there's an audible hum, but it's no louder than the fridge's compressor.


Very cool, 7 years is a damn long time. It'll be interesting to see how the experiment with the RPi goes.


On my Raspberry Pi 3 I'm running Hypriot OS which installs a minimal host OS and then just runs Docker.

Thanks to cloud-init (an old version, though) you can even pre-configure the boot image with your SSH key etc., which allows you to automate your initial install.

https://blog.hypriot.com/downloads/

https://cloudinit.readthedocs.io/en/0.7.9/topics/capabilitie...
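
A minimal user-data sketch (the user name and key are placeholders, and the old cloud-init version Hypriot ships may not support every directive):

    #cloud-config
    hostname: docker-pi
    users:
      - name: myuser
        shell: /bin/bash
        sudo: ALL=(ALL) NOPASSWD:ALL
        ssh_authorized_keys:
          - ssh-ed25519 AAAA... me@laptop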


BalenaOS is also really good for this: https://www.balena.io/os/


Ubuntu for the RPi also uses cloud-init. I use it to create a default user with my username instead of the ubuntu user, deploy SSH keys, install packages and configure the network on newly deployed Pis.


that is interesting, thanks!

sounds like a super smooth dev and deploy experience


Docker on ARM is not a smooth experience. Hardly anyone builds containers for ARM so you end up either building everything yourself or finding weird -arm versions of popular containers.
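
These days docker buildx plus QEMU makes cross-building the missing images fairly painless; a rough sketch (the image name is a placeholder):

    # one-time: register QEMU emulators so an x86 box can build ARM images
    docker run --privileged --rm tonistiigi/binfmt --install all

    # build and push a multi-arch image
    docker buildx create --use
    docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 \
      -t youruser/yourapp:latest --push .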



Setting up a pi-hole DNS server for my wifi network was one of the best decisions I've ever made. Horrifying to see what percentage of traffic is on the ad server blacklist though...


I wasn't aware that my Samsung Smart TV had been logging almost my every action on the TV until I set up a PiHole server. Also, my respect for Apple grew by the fact that only device that wasn't doing loads of telemetry turned out to be my Macbook in the whole household.


Yes!! I was so grossed out by all the logs from my Smart TV. I'm embarrassed to say that I worked in ad tech (as an engineer) for years but I still didn't fully comprehend how pervasive that kind of tracking is in literally every environment.


> Also, my respect for Apple grew by the fact that only device that wasn't doing loads of telemetry turned out to be my Macbook in the whole household

Turns out that modern electronic devices are expensive. If you are not charged up-front, there's a good chance that you are being charged in some other way.


Let’s be fair, even if you did pay more up front, they’d still get ads from you. The whole thing is pervasive and we need legislation for it.


We need legislation against this as the "discount" isn't at all obvious.


Apple devices still contact the mothership nonstop even with telemetry disabled, for a bunch of different reasons, even if you don’t use any iCloud or Apple services. Don’t be fooled by the DNS logs.


I wanted to do that, but I had a look at Pi Hole and ran away screaming. Instead of proper packaging, they have a 3000 line install script they want you to pipe into Bash.

I went a saner route, and used dnsmasq and a blocklist[1] updated nightly via cron. Dnsmasq in turn queries Stubby that talks to uncensoreddns.org via DNS-over-TLS. Boom, DoT on my entire LAN.

[1]https://github.com/notracking/hosts-blocklists
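
Roughly, the moving parts are a dnsmasq drop-in plus a nightly cron to refresh the list (the port, paths and blocklist file name below are from memory; check the blocklist repo's README and your stubby listen address):

    # /etc/dnsmasq.d/blocking.conf
    # forward everything to stubby (listening on 127.0.0.1:8053, speaking DNS-over-TLS upstream)
    no-resolv
    server=127.0.0.1#8053
    addn-hosts=/etc/dnsmasq.d/blocklist.hosts

    # /etc/cron.d/blocklist -- nightly refresh, then reload dnsmasq
    0 4 * * * root curl -fsSL -o /etc/dnsmasq.d/blocklist.hosts https://raw.githubusercontent.com/notracking/hosts-blocklists/master/hostnames.txt && systemctl restart dnsmasq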


They acknowledge that piping to bash is controversial in their install guide and they provide other options for installation. I think they were intending for it to be as accessible as possible to non-technical users and piping to bash was the easiest way to make installation a one-line command that requires zero additional knowledge and still works on the tiny raspberry pi zero w. I can't say I agree with it as a general practice but it wasn't enough to turn me off since their software takes like 15 minutes to set up, provides a nice monitoring dashboard, and runs on the raspberry pi I'd relegated to my junk drawer. Your route may be saner to you but it certainly isn't for a lot of people who tinker with raspberry pi and want something like pi-hole but don't have extensive technical knowledge (I am not one of those people I am just a lazy engineer so it works for me too).

https://docs.pi-hole.net/main/basic-install/


Would you feel better with a 3000 line install script inside a package? Or maybe you would prefer the same 3000 lines of code nicely compiled in a single binary?


I'd feel better if the install process didn't rely on manipulating the system package manager using janky scripts. That's a very poor way of handling dependencies, not to mention it's difficult to port.


Run it in a docker container


That solves nothing.


My assumption here was that you didn’t like some rando script running on your machine with escalated permissions.

I figured running it in a sandbox in a Docker container would be safer for you. Also, it's easier to get up and running, though more difficult to update.


My favorite is still seeing the 1000+ DNS requests that my Philips Hue lights send after disabling diagnostics on them. It was the same beforehand :)


Hah, what a strange dystopia we live in where it's impossible to stop your LIGHTBULBS from tracking you via the internet!! I'm a total curmudgeon about smart home stuff, I don't want any of it beyond a TV in my house if I can help it. It freaks me out seeing people with no technical knowledge outfit their entire home with Nest/Ring/Google Home/Echo/Phillips Hue and even smart refrigerators, while they know virtually nothing about how much privacy they've just relinquished to these companies and they don't have the technical skills to even attempt to mitigate it. I feel like a paranoid doctor who's starting to notice the damage done by cigarettes while the general public is still blissfully puffing away...


You are not alone. It's pretty heartbreaking to see how so many promising products are really surveillance nightmares. I've noped right out of using some nice-but-not-necessary features on some things, because their app wants location access, contact lists, or other things they have no business accessing. WTF?!


I was going to do this, but you can usually just change the DNS and add a hosts file to your router, assuming it can run firmware that allows it (like Tomato or DD-WRT). It seemed pointless to try since the charts work for http and everything is https now. I didn't set up stats for traffic, but the setup I use is much lighter. Just wanted to suggest this for anyone who might want to block on their home network. I also use it as a NAS with a USB 3.0-to-SATA adapter and an SSD.


You can set up DNS over HTTPS with Pi-hole btw, I did that for mine. It's definitely not the only way to achieve this kind of ad blocking, but if you're like me and have several old Raspberry Pis laying around from abandoned projects then it's a nice way to put one to good use.
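
For anyone wanting to replicate it: the usual approach is to run a small local DoH forwarder and point Pi-hole's upstream at it. A sketch with cloudflared (the port and upstream here are just the conventional choices):

    # run cloudflared as a local DNS-over-HTTPS forwarder on port 5053
    cloudflared proxy-dns --port 5053 --upstream https://1.1.1.1/dns-query

    # then set Pi-hole's custom upstream DNS server to 127.0.0.1#5053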


Definitely!


I like my RPi, but my life got better when I bought a mini PC instead. It's pretty common to get a 16GB-RAM micro for not much more than an RPi with power supply, SD card, and case.

https://computers.woot.com/offers/lenovo-thinkcentre-m73-240...


Yup, similar. I have a ThinkCentre M600 series. Quite old but works really well.


I looked at these but got an old ThinkPad laptop instead. Integrated UPS, keyboard and mouse...


good point..


It wasn't until I got myself a Pi 4 that I really appreciated what an improvement it was over the earlier generations. I have the Pi 4 and a 3B+ running BOINC, crunching away on World Community Grid[1], and the 4 is at least twice as fast at completing work units (it's too early for RAC numbers to settle down, yet). The Cortex-A72 is a huge step up from the A53. [ETA: both are actively cooled.]

Also, the Pi 4 eliminates the USB2 bottleneck the old Pis have, and has a couple of USB3 ports.

[1] Be sure to boot with arm_64bit=1 in config.txt or you will get no work units.


Thanks for the hint! Will include this in my config.txt


For anyone with a home server and has the need to remotely access your self-hosted websites, https://pomerium.io has been a wonderful piece of software in my stack.

You can safely expose your self-hosted websites to the internet and without the hassle of needing to have a VPN connection first.


that's cool!


A DigitalOcean vps can be a pretty inexpensive and easy option.

I've done this with a Pi and Dyndns. It's pretty easy to set up but not as good (for me) as a DO VPS because my home ISP limits data heading out. I would have to purchase a business plan to fix that and it still wouldn't be better or cheaper than what DO and others can provide.

A Pi can be used for development on your home network and it excels at that. And the older RPis can run CouchDB[1] and be configured to "Live Sync" with a CouchDB running on a commercial VPS. That too is pretty easy to set up and it provides some pretty nice options. For example, you can make your app use the cloud-based CouchDB while you're out and about and it will sync your data with your local CouchDB. Then when you get home you can turn off the cloud access and even delete your data on the cloud DB.

1. The latest version of CouchDB (v3.0+) doesn't run on the new ARM-based Pi 4.
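
Kicking off that live sync is basically one call to the _replicate endpoint (hostnames and credentials below are placeholders):

    # continuous push replication from the Pi's CouchDB to the cloud instance
    curl -X POST http://admin:pass@localhost:5984/_replicate \
      -H 'Content-Type: application/json' \
      -d '{"source": "mydb", "target": "https://admin:pass@couch.example.com/mydb", "continuous": true}'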


Yep, noticed the same. A lot of containers are not available for ARM, so you'll need to build them yourself.


The Pi 4, even overclocked, isn't a great number cruncher, and I don't think GPU acceleration has landed yet.

So I might expect it to be on par with a MacBook that old (despite having 2x the core count), but not to beat it by nearly 2x, particularly if the MacBook is hardware-accelerated. Which makes me think the MBP may be suffering from some serious thermal throttling, which wouldn't be uncommon on machines of that vintage.

I also assume the call line is:

https://github.com/christian-fei/raspberry-pi-time-lapse/blo...

which is noticeably missing the -hwaccel switch, which means it's probably not using the GPU on the Mac.
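
Something along these lines should exercise VideoToolbox on the Mac for both decode and encode (file names are placeholders; note that h264_videotoolbox wants a target bitrate rather than a CRF):

    # hardware-accelerated H.264 on macOS instead of the software x264 path
    ffmpeg -hwaccel videotoolbox -i input.mov -c:v h264_videotoolbox -b:v 8M output.mp4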


Definitely, good points. I'll try your suggestion regarding the missing flag on the ffmpeg side and report back, out of interest. I've been on "Service Battery" for a long time; in fact I consider this setup only temporary, and will soon upgrade to a modern machine.


Set the encoder to h264_omx on the Pi as well to take advantage of accelerated H.264 encoding.


will try!


> The pi4 even overclocked isn't a great number cruncher, and I don't think the gpu acceleration has landed yet..

Well, you do get about 96 Gflops. That was pretty respectable hmm... 12+ years ago.


This is related because I'm exclusively interested in building a local-network-only media server to serve my firestick and a laptop, but are plex (nice interface, for some reason requires a sign-in to their service, some sort of paid features) or emby (similar) the only options?

If I for example wanted to just access my media library through firestick and windows with a kinda neat interface without paying anybody or making an account on a third party website, is there a solution?

I suppose I could just use VLC, which is fantastic software that isn't particularly beautiful and get used to it, but I'd like a somewhat more "netflix-style" interface for navigating my content within a single rpi server on my network.


It seems that Kodi[1] would be your perfect choice, and that is indeed what I'm using in a very similar setup.

Set up your media (local or remote) library, and the scraper will enrich it with posters, synopses, actors, etc.

[1] https://kodi.tv/


Thanks for the suggestion, I'll have to give that another try!

I've had success with Kodi in the past with my laptop but I've also had big performance issues on the firestick. To be fair, it's been a couple years since I tried that so I'll see if the newer versions perform better.


+1 to this


From the sound of it, making a pure FTP file server and then using Kodi client software on windows and firetv (or the firetv specific versions) is exactly what you’re after.


I have tinkered with the RPi a lot in my previous life; I used to maintain the Qt eglfs QPA plugin. Back then, they were quite underpowered CPU-wise. Are the recent versions powerful enough to host websites and data for everyday use? Like, say, is it powerful enough to host a website, a couple of blogs, a Nextcloud/Syncthing instance, and Emby/Jellyfin/Plex? Most importantly, I want to hear about setups that people are using for everyday use and not just learning.

(For context, we get a lot of requests to port Cloudron to ARM/RPI but I am still not sure if these are just hobbyists/tinkerers or something people use everyday.)


Using Emby actively right now and it’s working like a charm! Nextcloud will probably be my next experiment


Does hardware transcoding work?


Wouldn't know. I installed it through a .deb file, so it's probably better placed to use the hardware than it would be through Docker.


Note that if you got all excited about n8n when learning about it from this webpage, as a potential open source Zapier:

n8n is not open source, despite being source-available. The author goes to pretty great lengths to avoid confronting this fact.

https://github.com/n8n-io/n8n/issues/40


Wow. I'm instantly sympathetic with the author given the level of vicious fanaticism he was confronting there.


Drew being mean doesn’t change the fact that the author was (and to some extent still is) being shady.


Thanks for the heads up! Need to read about this in depth


For simple, web-based speedtests on a local LAN/WLAN, I like librespeed[1]. Really helps identify subpar WiFi coverage (for example), better than just signal strength. Runs nicely on a Pi, up to line rate (1 Gbps) on a Pi 4.

[1] https://github.com/librespeed/speedtest


awesome! thank you!


There are FreedomBox versions for Raspberry Pi 2, 3B, 3B+, and 4B.

https://wiki.debian.org/FreedomBox/Hardware#Also_Working_Har...


IME, devices with a proper CPU and SSD, such as an Intel NUC or Beelink, are the minimum viable solution for running server software without constant headaches due to slowness, limited memory and flash wear-out. YMMV of course. The Pi 4 with 8GB is getting close.


I have an RPi 4 on my desk collecting dust; I need to get it up and running for something useful. This post has got me motivated to find a use for it.


My favorite use of a Raspberry Pi has been to run a Jenkins instance.

I use it for CI/CD on projects, but also for automating other tasks -- you can use Jenkins to wrap any arbitrary script with higher-level logic and more extensibility than a cron job offers.

For example, I use Jenkins to automate multiplatform builds for some side projects, to periodically ingest data into a database, perform cleanup jobs, etc.


That’s nice! Trying out n8n.io right now and it’s pretty sweet


They're great as dust collectors but not as good as arduinos. You can always use it to run nextcloud if you want your own services.


Glad to hear, enjoy!

Just set up a Tor proxy that I can connect to with one command from my PC, by connecting to the Pi via a SOCKS proxy. Good times.


My only gripe about the Raspberry Pi, and the only thing that prevents me from using it as a home server, is that the USB ports cannot power an external HD.


I've wanted to make a headless audio player for my car, and this power issue has been the bottleneck for me. Additionally, spinning disks and moving cars are a bad combination.

Just last week I bought a 500GB SSD and USB adapter for about $75. My Pi 2 powers it without issue. The only problem I see is that the SSD can get fairly warm, so keeping it in my closed center console probably isn't a good idea.


Not sure if it would make a difference to your use case, but I've had success powering an SSD from a 4GB Pi 4's USB 3 port.


I power a laptop drive with mine. One of those WD passport type things. Works fine for video playback.


For ffmpeg H.264 encoding on macOS, it is not very wise to use x264 instead of h264_videotoolbox (the hardware encoder).


How does CGNAT fit into this? Or do we have to forego any hope of it as long as CGNAT exists?


Does anyone have any experience with running and upgrading FreedomBox for self-hosting?


I use the Raspberry Pi 3B+ 1 GB model to sense the oil level in my oil tank. I now know the oil tank level anywhere, any time. Take a look at http://myoilguage.com/


that's interesting!


I use a Pi as a web server. Works like a charm.


Nice!


This is probably way better achieved by a $5/mo VPS.

Benefits:

- way faster

- always on (presumably battery backed)

- you're not responsible for disk or hw failures

- better internet (faster, redundant connections)

- cheaper ($60/year vs $100 upfront)

- not fighting architecture differences

- comes with a public v4 address

- usually comes with a v6 address too

Downsides:

- data isn't under your control and is subject to military espionage

- your internet connection to move larger files to/from the remote device may not be great


$60/year is not cheaper than $100 one time, unless you plan to run the RPi for one year and then throw it away.


You do pay for the electricity cost. Assuming the RPi consumes 10W, is on 24/7, and your electricity costs $0.15/kWh: 365 d * 24 h * 0.01 kW * $0.15/kWh = $13.14/yr in electric costs.


True, although this is an experiment and I wanted to see the compatibility and benefits of having self-hosted software running in my home without exposing it on the internet.



