I was impressed by the performance, especially with ExpressJS, and started digging deeper until I noticed "cf-cache-status: HIT".
What I'm seeing is not really a "self-hosted website", but a Cloudflare-hosted cache of a self-hosted website.
Edit: I wanted to get an idea of the "real" performance, so I went to a non-existent page so the cache would miss. I waited 930ms for a 301, then another 590ms for the 404 page.
Of course it's using a CDN/Cloudflare. I live in the Caribbean, so good thing it was! A topic like home labs will see traffic from all over the globe; no CDN would be a terrible idea.
And of course 404 pages will be slower, because Cloudflare still has to receive the request and then make a round trip to the origin server before responding.
The site is just a hobby page and the only public-facing thing that I self-host. Even my main blog, which is hosted in North America, uses a full-page-cache CDN courtesy of BunnyCDN.
I understand the need for Cloudflare in this case; I just didn't realize it was being used at first and got rather excited about the prospects of self-hosting. I don't mean to insult, I just got a false impression and wanted to make sure people here knew that the resources aren't really being served by your server, at least in most cases.
I just ran that Pingdom test again, 938ms for the 301 and 3861ms on the 404 on a missed cache. Now that it's in the cache, everything loads in less than 247ms.
No worries, I understand what you mean. I have not seen high response times (the 3 seconds you report on the first request) for 404s yet via Pingdom, but then again the site only has 3 or 4 pages, so I'm not worried about 404 performance.
- 8-port PoE EdgeSwitch, which powers the nanoHD AP in my hallway and acts as the backbone of the network.
- Raspberry Pi 3 as a dedicated Pi-hole DNS server.
- TP-Link 16-port switch serves all the 'lab' gear.
- 2x R720s. One is off; the other is a VMware ESXi host.
- ESXi is pretty light at the moment, running the UniFi controller and a dedicated VM that just runs Elasticsearch for a side project. For a while I was heavily using it to simulate an AD deployment for a small business.
- Synology DS918 is a network drive and network time machine backup destination. It backs itself up to Backblaze nightly.
- Everything is on a simple Cyberpower 1500VA UPS. I can run everything for about 80 minutes without power, but if I shut down the R720 the runway gets much longer.
I got my start in about 2003-2004 by trying to run my own mini ISP out of my home during high school. I was doing everything over a residential DSL connection that only got 1.5Mbit down and a fraction of that for upload.
I still remember asking my bud to help me 'test my mail server', which was a qmail installation done via the Qmail Rocks guide. The machine was a Pentium 3 with 128MB of RAM. My bud Anand (founder of Gyrosco.pe) sent a handful of test emails to it and brought it to its knees! Absolutely hilarious incident at the time.
I wish that I had more photos of that time in my life.
It's a single chokepoint for all DNS traffic on my network, so I do not need to configure any individual device or application to use it. For instance, my piece-of-crap Roku stick benefits from all the same ad blocking that my Mac, iPhone, etc. do.
There is also a lot that goes on outside of your browser. It blocks tracking logic embedded inside other things, too. For example, my TV cannot phone home to Samsung anymore to tell it what I am doing.
You can get really wild with firewall rules to truly prevent any DNS traffic from escaping your physical network.
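As a hypothetical sketch of what that can look like, assuming an iptables-based router and a Pi-hole at 192.168.1.2 (interface name and addresses are placeholders for your own network), you can DNAT any stray port-53 traffic back to the Pi-hole:

    # redirect any LAN DNS query that isn't already going to the Pi-hole (192.168.1.2)
    iptables -t nat -A PREROUTING -i br0 -p udp --dport 53 ! -d 192.168.1.2 \
        -j DNAT --to-destination 192.168.1.2:53
    iptables -t nat -A PREROUTING -i br0 -p tcp --dport 53 ! -d 192.168.1.2 \
        -j DNAT --to-destination 192.168.1.2:53

Devices that hard-code a public resolver still end up answered by the Pi-hole that way.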
This might be the best machine I’ve ever owned. It’s rock solid, a great Docker introduction and runs a load of small tasks without issue or intervention for a year+ so far.
I’ve been really happy with the DS1618+ we have at work as well. I especially like that you can use the full-size drive bays with SATA SSDs as cache drives; some other products (Drobo, for example) only let you mount cache drives on connectors dedicated to that function.
Anyone know offhand if the 1618+ is getting a refresh this year? I think Synology has a tick-tock upgrade cadence, with consumer product lines getting upgrades in year x and enterprise products getting upgrades in year x+1.
I've been putting off researching home network/backup devices, because I feel like it is a big decision that I don't know enough about. And they all tend to have bad reviews.
Is there a good guide to start researching that isn't a top-ten list with just affiliate links?
Definitely have a look through the relevant Reddit subs; by and large they are positive and helpful, e.g. /r/DataHoarder, /r/synology, /r/homelab. It's good to scan them pre-purchase too; for example, it's been interesting watching the Western Digital drive issues play out across those subs.
There are some off-the-shelf solutions that are well regarded but expensive (e.g. the Synology DS918 or DS1019). There are also a lot of pretty nice systems you can hack together yourself; FreeNAS is excellent in this regard, but it gets into personal favorites very fast.
Don't you find the R720 to be astonishingly loud for something kept at home? 1U/2U servers are not really designed to be quiet. Maybe if it's way out in a far corner of your garage or basement somewhere...
My home test Xen hypervisor is an older Dell mid-tower desktop workstation with a dual-socket/8-core Xeon, because it's reasonably quiet.
You can either manually control the fans with some form of IPMI tooling or disable the fan offset (which needs to be done after every reboot, though). Then it is actually very bearable, assuming you have good ventilation around it.
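For what it's worth, the raw IPMI commands people usually pass around for 12th-gen Dells like the R720 look roughly like the sketch below. They are widely shared but not officially documented, so verify against your own iDRAC and firmware before relying on them (IP and password are placeholders):

    # switch fan control from automatic to manual (unofficial raw command)
    ipmitool -I lanplus -H <idrac-ip> -U root -P <password> raw 0x30 0x30 0x01 0x00
    # set all fans to roughly 20% duty cycle (0x14 = 20)
    ipmitool -I lanplus -H <idrac-ip> -U root -P <password> raw 0x30 0x30 0x02 0xff 0x14
    # hand control back to the BMC
    ipmitool -I lanplus -H <idrac-ip> -U root -P <password> raw 0x30 0x30 0x01 0x01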
Just curious: why do people rely on VMs and Docker so much these days? Is it because process isolation within the OS has failed, and installation and DLL-hell issues have not been solved after 50 years of OS design?
For VMs, mostly because of what you said, and more generally the ability to run Windows-only software on a Linux host.
For Docker, I think it's just the ease of use. Imagine a standard web service you want to host, consisting of Apache, some PHP, a MySQL database and maybe a Redis cache as well. Now imagine how the software's author would have to write the setup instructions: 'Grab this and that version of these things, install them, then edit these 10 config files and make sure these lines are in there.'
With Docker it's literally 'Here, copy this docker-compose.yml to your server and run `docker-compose up -d`'.
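Something along these lines, purely as an illustrative sketch of that Apache/PHP + MySQL + Redis stack (image tags, ports and credentials are placeholders, not anyone's actual instructions):

    # hypothetical docker-compose.yml for the stack described above
    version: "3"
    services:
      web:
        image: php:8.2-apache       # assumed image/tag
        ports:
          - "8080:80"
        volumes:
          - ./src:/var/www/html
        depends_on:
          - db
          - cache
      db:
        image: mysql:8.0
        environment:
          MYSQL_ROOT_PASSWORD: example   # placeholder credential
          MYSQL_DATABASE: app
        volumes:
          - db-data:/var/lib/mysql
      cache:
        image: redis:7
    volumes:
      db-data:

One `docker-compose up -d` and the whole thing is running, isolated from whatever else is on the box.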
I bought it a long time ago, back in 2014(?), to prototype an idea ... then it went into a bin and has been there forever. I rediscovered it the other day and thought I might be able to use it to find what is causing interference for my Bluetooth and other wireless devices (Logitech Unifying receiver). I keep getting these bursts of what I'm calling interference throughout the day and would love to figure out what is causing them.
Since people are showing off their labs: my lab always sparks a conversation. I wanted to prove I could beat Twilio's texting rates. https://i.imgur.com/gKknrJn.jpg
Things like [0] are really commonly used for grey-market VoIP in developing nations... There are devices which are specifically a GSM-based 2G voice interface on one side, with a SIM card, and a SIP phone you can register to Asterisk on the other side, with 8 or 16 of them integrated into one enclosure.
It was a fun project, but I wouldn't use old cellphones. Having these old cellphones plugged in all day basically just made the batteries swell and break the casing. But you can get old Android phones for nothing, even cheaper than a Raspberry Pi.
It's pretty easy (at least for single devices) to use a timer switch on the power point and guess/tweak an appropriate duty cycle to keep the battery topped up often enough without leaving it charging 24x7. I have an old iPad that's been running for 5+ years with 1hr/day of charge time. (I cooked its predecessor's battery to a screen-cracking bulge in about 9 months.)
I've done something similar. I used to have a few Raspberry Pis with 3G modems attached to them, with prepaid SIMs that had "unlimited" SMS plans.
It actually did work for a while. The problem that you run into is that carriers rate limit the number of texts per minute that a particular phone number can send (this is a problem with Twilio too, but in that case it's much easier to spin up a new number).
The problem with texting rates isn't technical, it's that carriers essentially have a cartel and can set prices however they want, and have tools to detect and prevent such "abuse" of their cheap consumer plans for commercial usage.
"Do you use this commercially?" <- No, just as a test.
"and have tools to detect and prevent such abuse" <- This was my assumption too. I tested all the large carriers. Verizon, however, doesn't seem to have these limits in place; with a Verizon SIM I can push out about 200 texts per hour per device. Whether you get rejected depends, in my tests, on the receiving carrier.
"it's that carriers essentially have a cartel and can set prices however they want" <- Yes. Carrier-to-carrier is where things get interesting; Sprint gets quickly blocked by AT&T, for example. But, wow, an UNLIMITED TEXT plan is pretty cheap...
Not sure I agree. It doesn't cost you anything extra to use those links when you're shopping for gear (last I checked), it's a creative and unusual setup to see in someone's house these days, the photo makes a strong impression, and the resulting website is just nice to look at. On top of that, he did go to the trouble of listing all of the gear, which there's a good chance won't even remotely be paid for by the affiliation.
Yeah, the more I think about this, to me I think this counts as supporting the community.
Looks awesome! I went down the homelab-ish rabbit hole recently myself and it was probably the most fun I've had in a long time on a project. My goals were more about hosting a bunch of IoT stuff internally (sensors, cameras, etc), but I wrote up a bit about at least the rack/equipment here: http://cra.mr/my-journey-into-home-automation/
At some point, I'm hoping to dive a bit deeper on the software side, which was also super fun.
I have a Raspberry Pi 2B+ with several services: LDAP, Syncthing, Gitea, a CUPS server, MiniDLNA, a torrent server, NFS, and nginx for PHP stuff like phpLDAPadmin, phpMyAdmin, Nextcloud, etc.
I realized that I don't need a public IP. I set up zerotier on all my machines and pointed a subdomain that I own to that IP. OK, other people can't visit it, but I rarely need them to.
It's perfect, but nextcloud is slow :( Maybe an upgrade to rpi 4 will help it be faster.
If you want it to be publicly accessible in the future, I think you can rent a cheap VPS (a DO droplet), set up a VPN, and connect your Pi to that VPN. Then you can run nginx on the VPS and reverse-proxy to any services you'd like.
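The VPS side is then just a plain reverse-proxy block. A minimal sketch, assuming the Pi is reachable over the VPN as 10.0.0.2 and the service listens on port 8080 (hostname and addresses are placeholders):

    # hypothetical nginx config on the VPS
    server {
        listen 80;
        server_name notes.example.com;          # placeholder hostname

        location / {
            proxy_pass http://10.0.0.2:8080;    # Pi's VPN address and service port (assumed)
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }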
I haven't self-hosted a website in a long time. Do you still need a static IP in order to do so, or do you have some kind of service that updates the IP? How does it interact with DNS?
There are "Dynamic DNS" services with a small daemon you install on your server that continuously update the DNS address so it remains online even if your ISP changes your IP on you.
I use Cloudflare for my domains. It has a REST API, and I wrote a small Python script which updates the A record for all my domains. My IP address changes at least monthly, but I've never noticed any outage.
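The script is nothing special; a minimal sketch of the idea looks roughly like this (token, zone ID and record ID are placeholders, and it assumes Cloudflare's v4 API plus any what's-my-IP service):

    #!/usr/bin/env python3
    """Minimal sketch of a dynamic-DNS updater against Cloudflare's v4 REST API.
    The token, zone ID and record ID below are placeholders."""
    import requests

    API_TOKEN = "cf-api-token"          # placeholder, scoped to DNS edit
    ZONE_ID = "your-zone-id"            # placeholder
    RECORD_ID = "your-a-record-id"      # placeholder
    RECORD_NAME = "home.example.com"    # placeholder hostname

    HEADERS = {"Authorization": f"Bearer {API_TOKEN}", "Content-Type": "application/json"}

    def current_ip() -> str:
        # any what's-my-IP service works; ipify is just one example
        return requests.get("https://api.ipify.org").text.strip()

    def update_record(ip: str) -> None:
        url = f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/dns_records/{RECORD_ID}"
        payload = {"type": "A", "name": RECORD_NAME, "content": ip, "ttl": 300, "proxied": False}
        requests.put(url, headers=HEADERS, json=payload).raise_for_status()

    if __name__ == "__main__":
        update_record(current_ip())

Run something like that from cron every few minutes and the record tracks whatever IP the ISP hands out.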
In my experience the only time AT&T or Comcast change your IP is when there is a power outage. I have had to manually update the DNS record for my home computer only once in the last 3 years.
Something I want to experiment with is IPv6 DNS. IPv6 has enough address space that individual devices are universally uniquely addressable. With an AAAA record you can point DNS at an IPv6 address. So I want to try setting DNS to point to a specific device... though I haven't gotten around to it yet.
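In zone-file terms it's a one-liner; a hypothetical example with a documentation-range address:

    ; point a name directly at one device's global IPv6 address
    nas.example.com.    300    IN    AAAA    2001:db8::1a2b:3c4d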
IPv6 DNS certainly works, as long as you're not using ISP-provided home equipment that's too stupid to let you allow incoming connections on IPv6; but it only helps if you always have IPv6 access wherever you want to reach your things from.
IPv6 on public WiFi or corporate networks isn't a given, and where I live, neither is cell coverage, so I wouldn't want IPv6 only for important services.
I've been told that, and my router is pfSense, which claims that it's possible, but the documentation says something along the lines of "You can use NPt!" and then doesn't explain HOW.
I do mine with a dynamic IP. I use DynDNS (with the ddclient daemon running on my Linux router to update records when my IP changes). It hasn't failed me yet.
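For anyone curious, the ddclient side is only a few lines of config; a rough sketch with placeholder credentials (double-check the exact directives against the ddclient docs for your provider):

    # hypothetical /etc/ddclient.conf for a DynDNS-style provider
    # check every 5 minutes, discover the public IP via a web service
    daemon=300
    use=web, web=checkip.dyndns.org
    protocol=dyndns2
    server=members.dyndns.org
    login=your-username
    password='your-password'
    yourhost.dyndns.org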
Most tech companies or larger IT departments have a space dedicated to testing, evaluating, or developing gear; the industry term for this is a "lab". People who collect gear and do IT at home as a hobby refer to their lab as a "home lab".
I don't know where it comes from, but the distinction's basically 'when a home server becomes home servers', or at least becomes a hobby, as opposed to just setting up and forgetting a Pi-hole once or something.
Originally, a "home lab" is specifically for learning and testing, primarily for professional reasons: i.e. someone going for a Cisco networking certification might get a stack of old Cisco routers and build test environments with them, someone learning windows networks might setup a windows domain on multiple PCs, ...
It's kind of morphed and people fold their home "production" environments into the term too.
Well, sure, but IME 'production' at home is essentially the same as lab tinkering. How many home labbers have even one home environment besides 'production'?
I meant "production" in the sense of "things you run to use, not just to learn". I.e. on /r/homelab you get many people with "I have a bunch of Ubiquiti gear, a NAS and a server running Plex", which isn't that "experimental", even if they did learn something while setting it up. Whereas the mentioned pile of Cisco routers might never see traffic that's not part of testing something. And of course the two can mix.
Well, I have a small living-room server/home lab as well, but it's a 12-year-old MacBook Pro running Debian. I love the idea of having my own server to run small personal projects on, keep some daemons running in the background and use as a NAS, but I cannot justify any investment in it. Scrap parts or nothing.
I run everything on an old Dell desktop with an i7-2600 CPU and 8GB of RAM. I just run openSUSE and use it as a NAS (Samba), remote desktop (X2Go), Pi-hole server, torrent host, etc.
I went down the EdgeRouter path (EdgeRouter X SFP) and have regrets, in case others get ideas (or if someone has fixes, I'm all ears). Don't get me wrong, the device works great; I was entertained by it running Vyatta, which I had experience with at the time, and I just got https://github.com/nextdns/nextdns running on it, which is super awesome. But I have a bunch of devices connecting to WiFi, which seems to work way better integrated with the UniFi line of products. I'm probably going to swap it out one of these days, and I can make do with running the controller on a 24/7 box to smooth things out a bit, but I'd go all UniFi if I had it to do over.
I tried to start with an EdgeRouter and a UniFi AP and quickly switched to an all-UniFi setup. I recently switched from a Cloud Key to running the controller on a $5/mo DigitalOcean VM when the UCK died for whatever reason. I'm not sure how reliable it will ultimately be, but I like it fine. The remote-adopt function in the iOS app is extremely slick.
I have a simple note server [1] running on a Raspberry Pi. I want it to be accessible from the Internet, but my ISP uses double NAT and there's no other way around it. :/
Like the other poster says, I have a cheap VPS which I connect to from my home machine using WireGuard. The VPS instance forwards Internet requests down the WireGuard tunnel to my machine.
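For anyone in the same double-NAT boat, the shape of it is roughly this; a sketch with placeholder keys and addresses, assuming WireGuard on both ends and the home machine sitting at 10.0.0.2 inside the tunnel:

    # hypothetical /etc/wireguard/wg0.conf on the home machine
    [Interface]
    PrivateKey = <home-private-key>
    Address = 10.0.0.2/24

    [Peer]
    PublicKey = <vps-public-key>
    # the VPS's public address (placeholder) and the keepalive that holds
    # the tunnel open through the double NAT
    Endpoint = vps.example.com:51820
    AllowedIPs = 10.0.0.1/32
    PersistentKeepalive = 25

The VPS then just reverse-proxies or DNATs incoming traffic to 10.0.0.2 over the tunnel.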
I have my own (not nearly as sophisticated) home lab, with an old self-built desktop acting as a KVM host.
While it's not as pretty, the biggest win for me was moving all of the VM disks onto a Synology that I mount via iSCSI. It tends to work really well.