Free DDNS with Cloudflare and a cronjob (github.com/devrim)
337 points by aesopsfable 68 days ago | 195 comments



Very cool! For anyone interested in a bash script instead of installing a Python runtime, I made this tool some time ago for the same purpose: https://github.com/ddries/d2c.sh


I built the exact same thing 5 years ago and I've been using it daily since then. I've never had any problems with it. You don't need a config file for it, just a couple of CLI options and you are good to go. You can install it with pip, Docker, or by downloading a binary:

https://github.com/kissgyorgy/cloudflare-dyndns


I guess this is something people have to make? I wrote one 6 years ago in Golang and rewrote it in Rust last year. I have stopped using it, but I had them running for 6 years without issues.

https://github.com/AndreasBackx/update-dns


You’re the redbean-docker guy!


Instead of using DDNS, I have been using Cloudflare tunnels to expose my home services to the internet. The setup is much simpler and it seems like it's more secure too

You specify a port and point it to a subdomain and it just immediately works, no maintenance necessary. The daemon only needs to be installed once with a simple terminal command
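For anyone curious, the flow is roughly this (a sketch from memory of the named-tunnel setup; the tunnel name, hostname, and port are just examples):

  # authenticate, create a named tunnel, publish a hostname, and run it
  cloudflared tunnel login
  cloudflared tunnel create home
  cloudflared tunnel route dns home service.example.com
  cloudflared tunnel run --url http://localhost:8080 home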


There are some limitations such as:

– TLS termination mandatorily happens at Cloudflare (i.e. your traffic is mitm'ed). That's because this free product is meant as a gateway drug (aka a loss leader) to Cloudflare's WAF/Anti-DDOS products (which require TLS termination to happen on their side for technical reasons).

– Other TCP protocols (including SSH) require every client to run the software too. So if you were thinking about bypassing the TLS termination restriction by creating a TCP tunnel instead of an HTTP(S) tunnel you can't.

– Max 100 MB uploads for HTTP(S).

– No media servers allowed.

Otherwise it's a really good service!


> – TLS termination mandatorily happens at Cloudflare (i.e. your traffic is mitm'ed). That's because this free product is meant as a gateway drug (aka a loss leader) to Cloudflare's WAF/Anti-DDOS products (which require TLS termination to happen on their side for technical reasons).

But on the flip side, this allows you to have a nice certificate on your outside connection without having to fiddle with letsencrypt or whathaveyou.


If someone finds Let's Encrypt challenging, they don't have sufficient network and system administration skills to be running a private, public-facing web server. They should be running Tailscale.


Well, one of the "challenges" is the one in a different comment: most registrars don't allow fine-grained control over who can update what DNS records.

Can it be done? Sure. But do I want to spend money on this for my home lab if I can work around it? Not a chance.

I'm kinda sensitive to the "MITM as a service" argument, but for my use case, it's not a problem.


> Well, one of the "challenges" is the one in a different comment: most registrars don't allow fine-grained control over who can update what DNS records.

Afaik, every major registrar allows you to add an NS record for the _acme-challenge subdomain, allowing you to put the _acme-challenge subdomain on a custom, self-hosted DNS server.

That in turn allows you to make the permissions as specific as you'd like. Personally I just run powerdns in docker for this.
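To make the mechanism concrete, the delegation is just an NS record in the zone at your registrar pointing at the box running that small DNS server (the names and IP below are made up):

  ; illustrative only - one NS record in the parent zone delegates the challenge name
  _acme-challenge.example.com.  IN NS  acme-ns.example.com.
  acme-ns.example.com.          IN A   203.0.113.10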


Using CNAME delegation for the ACME challenge domain and directing that to a Route 53 zone is my preferred approach. Then (as long as you have CloudWatch configured) you get inherent auditing and very flexible privilege management.


That wouldn't work for this use case though would it? AWS doesn't allow downloading the certificate (I could be wrong)? Typically certificates can only be used with other AWS services. E.g. you can't download the certificate and serve it from a home server.


You're not wrong, but the idea here is not to use AWS' certificate manager but their DNS service on which you would only handle the acme-challenge subdomain. This would allow you to limit who can update which subdomain. The LE cert is obtained normally.


There is also https://github.com/joohoi/acme-dns, and LE clients like lego support it.


You don’t need automated DNS fiddling for Let's Encrypt. Certbot can either hook into Apache or NGINX, or run its own standalone server for verification.


Aside from the sibling comment, you also need automated DNS fiddling if you want Cloudflare Strict TLS support, because if LE can only connect to the CF proxy, it will never issue via HTTPS.


You don't necessarily need to do that, Cloudflare can generate you a long-lived certificate to install on your origin server which isn't publicly trusted but is trusted by their proxies, so it works with Strict TLS. YMMV with other CDNs though, you might need to fall back to using LE with a DNS challenge in some cases.

https://developers.cloudflare.com/ssl/origin-configuration/o...


Not true. I have a CF rule that matches .well-known/acme-challenge and sets SSL off. The main setting is on Full (Strict), but the rule disables the auto redirect to HTTPS and the strict checking, so an ACME client behind a CF tunnel can bootstrap a cert with the HTTP-01 method.


You do need DNS fiddling if you want a wildcard cert; LE only accepts DNS challenges for those.


Certbot has a Cloudflare plugin, so all you need to do is provide a credentials file and it will handle everything automatically. I have a monthly cronjob that runs the Cloudflare Certbot in Docker.
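For reference, the whole setup is roughly this (sketch only, non-Docker for brevity; the paths and domain are placeholders):

  # credentials file (~/.secrets/cloudflare.ini, chmod 600) contains a scoped token:
  #   dns_cloudflare_api_token = <token with DNS edit permission on the zone>
  certbot certonly --dns-cloudflare \
    --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
    -d example.com -d '*.example.com'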


I've had excellent controls using NearlyFreeSpeech.net for DNS (minor cost) and time4vps.com (free). Maybe very old registrars restrict DNS records..?


Parent did not say it was challenging.

I find fiddling with LE tedious because it has to be repeated too often.


Certbot and a crontab entry need to be set up just once to solve the cert problem.


HTTPS, when used in the ubiquitous manner it is now, always strikes me as unnecessary complexity and tedium, and reasoning like yours addresses it with even more complexity and tedium.

Whatever happened to KISS?


It happened that the last S changed from "stupid" to "secure". If I use HTTPS I can safely enough connect to my home services through an open cafe Wifi, for example


I’ve been using Caddy for a year, which does everything for you. It's basically nginx/HAProxy but with HTTPS built in via LE: no fiddling about with cert files and brittle LE scripts, and it supports subdomains equally easily.


so public server via http only then?


The point of TLS is to prevent your traffic getting MITMed. This benefit disappears if you have to let someone MITM your traffic to get TLS.


This depends. The point of TLS is to protect your application from hostile networks. Cloudflare hasn't proven hostile yet.


They do allow ssh via a web browser. It may be a "beta" feature, but it doesn't require the client to run anything.


Requires your client to run a web browser though? That's a lot different from just an ssh client?


Not a web browser, the client has to install cloudflared to connect. It's pretty much exactly the Tailscale feature, but clunkier.

https://developers.cloudflare.com/cloudflare-one/connections...


GP was saying the client would require a web browser. The server of course needs cloudflared.

That said, personally I don't really have any devices that can use SSH but not a browser.


My past comment is wrong but you do need to install cloudflared on the client if you want to SSH without a browser.

Read the docs, you can do it either way.


I found it easier to use SSH over websocat over cloudflared. Then you just need websocat again on the client side and can use a regular SSH client with it.


> I don't really have any devices that can use SSH but not a browser.

No headless boxes?


Just seeing this. Yeah, you're technically right. But I never sit at my headless boxes. I SSH to them and then from there could SSH Jump if I really needed to use SSH out of those boxes.

Also I wouldn't use Cloudflare Tunnels so this is a moot point.


Some good points, thanks.

FWIW, I have been using it with Plex (just two users, me and my parents) and haven't gotten banned. The ToS are kind of unclear on whether this is allowed, to be honest.


Video streaming in general is one of their red lines, you're not supposed to shove any kind of video through their CDN unless the origin is another Cloudflare product (e.g. CF Stream or R2).


From the discussions I've read, it's not as clear cut, e.g.:

https://old.reddit.com/r/PleX/comments/152wfdh/can_i_use_a_c...


It rarely is clear cut with Cloudflare; many of their policies are ambiguous, so you never really know if you're stepping over the line until you get an email from sales asking you to either cut it out, start paying, or pay more. Others' experience might give you a rough idea of what they'll tolerate, but since none of it is in writing they can change their minds on a whim.


As I've painfully learned, Cloudflare's "free bandwidth" is only free until a point.

Cloudflare threatened to terminate our $15k enterprise license last week for serving 76 TB of API JSON files last month (90% cache hits).

I moved half of the traffic to a new domain with a Business license to see what they say...


My email is jgc@cloudflare.com. You can email me about this.


Just don't show up on some motivated salesperson's prospects report.

https://news.ycombinator.com/item?id=40481808#40482405


FWIW that thread looks to be about a casino being blacklisted after trying to negotiate down Cloudflare's enterprise plan.


Agreed with sibling, but TBH if you're just using it for personal streaming, it's not likely to trip any bandwidth alerts on a free account, and CF will probably be happy that you're using it for personal stuff (because you'll probably take it with you to your day job too)


At which point is the MITM happening? What I mean is: browser → Cloudflare server → cloudflared on my server → web service. Is TLS only from browser to Cloudflare server, or is it browser to cloudflared?


>Is TLS only from browser to Cloudflare server, or is it browser to cloudflared?

It's encrypted between the browser and Cloudflare, but you can also create a cert and encrypt between Cloudflare and your origin server. (but that isn't mandatory)


I do the same with tailscale, which has a nice friendly UI for setting everything up.

I set up some Cloudflare DNS records pointing to the Tailscale 100.x IPs to make them easy to remember.


I use tailscale's DNS feature and run my own DNS server. That way I can have a subset of my services available on the internet via CF tunnels and when I connect to tailscale I get all of them directly, and I can use the same domain names


Some ISP DNS servers will not return internal IPs, Verizon FiOS and 172.x specifically.


How can you claim it's simpler in the light of the revelations in noname120's comment?

Dynamic DNS is literally one little service you run to "phone home" to the dynamic DNS provider. This service is bundled in consumer routers; just find it in the WebUI, put in the credentials and turn it on.

You know what could be simple: a periodic job that figures out your public IP address, and if it has changed, generates a hosts file entry for it, and e-mails it to you. If all you care about is just you having access to home while you are roaming about, that could do it. It also occurs to me that it makes a good backup strategy in case something goes wrong with DDNS while you are traveling.
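Something like this would be all of it (a sketch only; the lookup service, hostname, and mail setup are assumptions):

  #!/bin/sh
  # cron this; it emails a ready-to-paste hosts line whenever the public IP changes
  ip=$(curl -fsS https://api.ipify.org)
  last=$(cat /var/tmp/last_ip 2>/dev/null)
  if [ -n "$ip" ] && [ "$ip" != "$last" ]; then
    echo "$ip" > /var/tmp/last_ip
    printf '%s  home.example\n' "$ip" | mail -s "New home IP: $ip" me@example.com
  fi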


Consumer firewalls, the largest names in open-source firewalls, and at least one webserver/reverse proxy that I know of also bundle this.

There are also dozens of existing DDNS daemons out there already, with far more developer, tester, and user eyeballs on them.

The firewall solution is preferred because the firewall knows when the external interface changes IP addresses, so there's no system or network overhead from having an agent repeatedly testing if the IP has changed, nor any downtime between when the IP changes and when the next check happens.


Assuming you can add a custom URL, you can still do this through the firewall instead of a job that checks the public IP. I like using my own custom domain for this use case. I've also put a couple of domains up on freedns.afraid.org for others to be able to use.

That said, the only hole in my firewall/router is a port for Wireguard.


"Instead of DDNS, I have been using Cloudflare tunnels to expose my home services to the internet."

Will this work if the "home services" include authoritative DNS?


only HTTPS

(last time I checked was last year though)


Thank you.

Perhaps DoH proxy is possible.


Do you get a free Cloudflare subdomain or do you need to supply your own?


You need to have a domain that you manage DNS for in Cloudflare. Look up what a "registrar" is, a common one people go through would be Namecheap. Get a domain, and then look up how to set up a DNS zone in Cloudflare from an external registrar. If you plan on working in tech, this is one of those things you'll absolutely need experience with doing. Good luck!

Though it occurs to me there may just be a language barrier and you may already have a domain whose DNS you manage in Cloudflare. If that's the case, a subdomain is just an A record under your domain's DNS settings for anything other than the root domain. So, if your domain is "example.com", the A record could be like "service" with an IP of "192.168.1.10", and your subdomain would then be served on "service.example.com" for example. Subdomains are free, if you have a domain in the first place.

If you're asking if you would already need the subdomain configured in your DNS settings in Cloudflare, then yes, most likely. Though there are tools that create those for you, like external-dns in kubernetes.


You can buy domains directly through cloudflare


This is a small thing, but I think you should decouple providers in case shit hits the fan with one of them.

Let Cloudflare do DNS, let your registrar be a registrar.


Is this a real risk these days? I am interested, given I do use Cloudflare's registrar and DNS (it integrates nicely with IaC).


A mini horror story related to this just last year.

When I deleted the cloudflare DNS for one of my domains I also completely lost access to the cloudflare registrar for that domain.

Even though they should not be coupled at all and the UI makes it very much appear like they are decoupled.

I imagine this is a bug that's been fixed by now but it was still super panic inducing in the moment.

So ya keep those DNS and domains separate if possible.


Not sure, honestly.


Interesting! I also asked Claude Opus:

Using the same provider for both domain registration and DNS hosting can introduce several risks. Here are some of the main risks and ways to mitigate them:

1. Single point of failure: If the provider experiences an outage or security breach, both your domain registration and DNS hosting could be affected simultaneously. This can cause your website or services to become unavailable. Mitigation: Consider using separate providers for domain registration and DNS hosting to reduce the impact of a single provider's issues.

2. Provider lock-in: Some providers make it difficult to transfer your domain or DNS management to another provider, leaving you dependent on their services. Mitigation: Choose a provider that allows easy domain transfers and supports standard DNS management protocols like EPP (Extensible Provisioning Protocol). Familiarize yourself with the transfer process before committing to a provider.

3. Security vulnerabilities: If the provider's security measures are inadequate, attackers may be able to gain unauthorized access to your domain and DNS settings, potentially leading to domain hijacking or DNS tampering. Mitigation: Select a reputable provider with strong security practices, such as two-factor authentication, IP restrictions, and regular security audits. Enable additional security features like DNSSEC (Domain Name System Security Extensions) to protect against DNS spoofing.

4. Lack of redundancy: Relying on a single provider for both domain registration and DNS hosting means you don't have a backup if that provider experiences issues. Mitigation: Consider using secondary DNS services from a different provider to ensure redundancy and failover capabilities.

5. Limited control and flexibility: Some providers may offer limited control over your DNS settings or have restrictions on the types of records you can configure. Mitigation: Opt for a provider that offers a comprehensive and user-friendly DNS management interface with support for various record types and advanced features like GeoDNS or failover.

To further mitigate risks, consider the following best practices:

1. Keep your domain registration and DNS hosting accounts secure with strong, unique passwords and enable two-factor authentication.

2. Regularly monitor your domain and DNS settings for any unauthorized changes.

3. Keep your contact information up to date with your domain registrar to ensure you receive important notifications and can respond promptly to any issues.

4. Familiarize yourself with the domain transfer process and keep backups of your DNS configuration to ease migration to another provider if needed.

5. Choose reputable providers with a track record of reliability, security, and customer support.

By being aware of these risks and implementing appropriate mitigation measures, you can minimize the potential impact of using the same provider for domain registration and DNS hosting.


Those are valid. I would personally be worried about getting kicked off Cloudflare and the procedure for transferring domains being messy and time consuming.

(Although CF seems perfectly happy hosting really shady sites that host primarily illegal content)


I asked Perplexity: Using the same provider for both domain registration and DNS hosting can present several risks. Here are the primary risks and ways to mitigate or prevent them:

## Risks

1. *Single Point of Failure*: - If the provider experiences an outage, both your domain registration and DNS services could be affected, leading to downtime for your website and email services[2].

2. *Security Vulnerabilities*: - Using a single provider increases the risk of DNS hijacking, DNS cache poisoning, and other DNS attacks if the provider's security is compromised[3][4].

3. *Limited DNS Features*: - Some domain registrars that offer DNS services may not provide advanced DNS features like DNSSEC, Anycast, or DDoS protection, which are crucial for security and performance[2][5].

4. *Vendor Lock-in*: - It may be more challenging to transfer your domain or DNS services to another provider if both are managed by the same company, potentially leading to higher costs or service disruptions[1].

## Mitigation Strategies

1. *Use DNSSEC*: - Implement DNSSEC (Domain Name System Security Extensions) to protect against DNS hijacking and cache poisoning. DNSSEC adds a layer of security by enabling DNS responses to be verified using digital signatures[3][4].

2. *Redundancy and Backup*: - Use secondary DNS providers to ensure redundancy. This way, if your primary DNS provider experiences an outage, the secondary provider can handle DNS queries, minimizing downtime[2][5].

3. *Choose a Reputable Provider*: - Select a provider that specializes in DNS services and offers robust security features, including DDoS mitigation, Anycast networks, and DNSSEC. This ensures that you are using the latest DNS technologies and security measures[2][7].

4. *Regular Security Audits*: - Conduct regular security audits of your DNS configurations and keep your DNS software updated to protect against vulnerabilities and exploits[7].

5. *Enable Two-Factor Authentication (2FA)*: - Use 2FA for accessing your domain and DNS management interfaces to prevent unauthorized access. Additionally, consider IP whitelisting to restrict access to trusted IP addresses only[3][5].

6. *Client Lock*: - Utilize client lock features provided by your registrar to prevent unauthorized changes to your DNS records without approval from a specific individual within your organization[3][5].

By implementing these strategies, you can significantly reduce the risks associated with using the same provider for domain registration and DNS hosting, ensuring better security, reliability, and performance for your online services.

Sources [1] Everything About Website Domain Registration : Best Practices And ... https://monsterhost.com/everything-about-website-domain-regi... [2] Should you keep your DNS management and domain registration ... https://blog.dnsimple.com/2015/03/benefits-and-drawbacks-of-... [3] What is DNS Hijacking and Mitigation Methods - GlobalDots https://www.globaldots.com/resources/blog/what-is-dns-hijack... [4] DNS Attacks: Tutorial & Prevention Best Practices - Catchpoint https://www.catchpoint.com/dns-monitoring/dns-attack [5] How to Prevent DNS Attacks: DNS Security Best Practices https://www.esecurityplanet.com/networks/how-to-prevent-dns-... [6] Unraveling the roles of domain registrars and web hosting providers https://www.godaddy.com/resources/skills/roles-of-domain-reg... [7] Top Five DNS Security Attack Risks and How to Avoid Them | Blog https://www.humanize.security/blog/cyber-awareness/top-five-...


I would have thought that most people who need this today (e.g. those who were using Google Domains DDNS) already have ddclient installed.

ddclient already works with Cloudflare: https://developers.cloudflare.com/dns/manage-dns-records/how...


There's one gotcha tho.

For dynamic DNS you want a minimal TTL, ideally less than 60 seconds; otherwise the DNS records will be cached and will not reflect the correct address during the short window in which it changes.

Dedicated DDNS services usually have very short TTLs (some offering as low as 5 seconds IIRC), but free Cloudflare accounts have a minimum TTL of 300 seconds (5 minutes). Coupled with the crontab running every 5 minutes, your endpoint could be out of contact for 10 minutes if the timing lines up badly.


Then run the cron every minute. As you can see with his example, it doesn't even run every 5 mins.

For unproxied records you can set the TTL to 1 minute as per their documentation.

And normally your IP would change only when reconnecting, so it's not a big deal...
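i.e. something like this in the crontab (the script path is just an example):

  # run the updater every minute instead of every 5
  * * * * * /usr/local/bin/cloudflare-ddns-update.sh >/dev/null 2>&1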


Another issue is a lot of ISPs will ignore your TTL and cache it for hours or more on their internal resolvers


I used ddclient with Cloudflare for years with no issues.

Recently I upgraded my home router, and the manufacturer operates a free dynamic DNS service enabled with a toggle button. I have a CNAME record in my domain's DNS records pointing to the dynamic DNS entry. I actually don't even need that anymore. All the services I run at home are only for immediate family, so they're only available remotely via a Wireguard VPN connection. I migrated that to the router also, because it can do 900 Mbps of Wireguard traffic and has a great VPN server management implementation. By default the client configs it generates point to the dynamic DNS name. No real need for the CNAME but I have it out of habit.


Mikrotik at least has that DDNS functionality. It is a really nice feature.


I didn’t need all the features or complexity of a Mikrotik router, so I went simpler. I have a GL.iNet MT-6000. Underneath it runs OpenWrt, and you can access the OpenWrt LuCI web interface or SSH to it if you want to do anything more complex than their web UI allows. So far, besides enabling SFTP so certbot can deploy an SSL cert to replace the default self-signed cert, I haven't needed to.

It also runs AdGuard Home so that is another thing I have been able to remove from my home server.


So does OPNsense. It’s such a joy to use that whole OS.


Wow! May I know the router model that does 900+ Mbps of Wireguard? The dedicated GL.iNet box I got for Tailscale does only about 60-90 Mbps. Apple TV 4K does a paltry 20-40...


Is that being limited by your connection's uplink speed? My uplink is only 100 Mbit (I can sometimes get as much as 120 Mbps though). When I connect remotely through Wireguard, I have no issue reaching that. I can get to my home connection at whatever speed the location I'm at allows.

I've got my additional services on a Ryzen R9 5900HX mini pc. My router is an N300 mini-pc with 4 network ports. I had trouble configuring wireguard on the router, so it's in a VM on the mini-pc and runs as well as can be expected.


The GL.iNet MT-6000 Flint2. I have a site to site Wireguard VPN with my brother which I use for offsite backups. It saturates my 500Mbs internet connection when performing backups. I have seen YouTube videos of others successfully testing the 900Mbs throughput claim. I stopped using tailscale myself about a year ago so can’t comment on the performance with it. It uses a user space Wireguard implementation instead of the kernel one so that may impact performance.


Nothing that afraid.org hasn't been doing for years at this point....

Which got me into a 4-year exploration of FreeBSD! I'm still a bit sad I had to replace it with Proxmox on Debian to get what I wanted.


Seems to rely on https://api.ipify.org/ to determine public IP.

Is there any Cloudflare service one can use to determine the IP instead? That way there isn't an extra company, in addition to Cloudflare itself, that you need to keep existing.


I feel like it's worth mentioning icanhazip.com [0] as well, since it's now run by Cloudflare [1]. Until I recently switched to a custom CF worker, that had been my go-to for ages.

[0]: https://www.icanhazip.com/ [1]: https://major.io/p/a-new-future-for-icanhazip/


Does Cloudflare have a history of sunsetting products they've bought? Acquisitions by Google, Apple, Meta, etc. are yellow flags that the product may cease to exist soon. I wonder if Cloudflare has a better track record in that regard.


No one sunsets products like Google.

I’m in the middle of transferring all my domains from Squarespace thanks to Google's sale of that business to that incredibly lousy vendor.


Ugh, same. You’re right. Nothing is safe at Google or even a safe bet with Google. Look at third-party cookies. I can’t believe there isn’t outrage in the streets over the fact that they beat that drum for four straight years and now they suddenly have a change of heart.

At some point their rationale has to become irrelevant. It’s simply unprofessional behavior.


> Nothing is safe at Google

Google Ads


This narrative might be shifting in realtime with the LLM race and privacy wars. How do I advertise in Gemini? Do Google users want me to? I’ve worked in and around digital advertising and marketing for the better part of a decade. We look at Google with all the admiration one would have for a pet crocodile.


May I inquire who you're moving to, and where I might browse to in order to follow you away from Squarespace / Google Domains? :)


NameCheap for the ones they support. I don't like how tightly wound Cloudflare domains are with the account; I'm nervous about putting too many eggs in one basket with them. I sometimes need to move a domain hosted in one Cloudflare account to another Cloudflare account. They don't let you do that without moving the domain to a third-party registrar first. I just shortened that process.


The correct answer, I think, is Cloudflare? I'm a little wary of internet homogenization like this, but I haven't the time to worry about this sort of thing for my spare one-off domains.


I have moved all of mine to Cloudflare.


This is how I ended up on Cloudflare. Burned by Google yet again.


Now we wait until you get burned by Cloudflare. Have we already forgotten the "We've discovered a technical problem with your domain: pay us $150,000 or fuck off"


Not that I'm aware of, and this is likely now just a Cloudflare Worker that returns the IP they already have. I would imagine maintenance is basically zero as it's feature complete.


True but there is no such thing as zero maintenance



The URL shared above, leveraging the cloudflare.com domain name, seems to show an IPv6 address, while I've noticed that the following defaults to showing an IPv4 address: https://1.1.1.1/cdn-cgi/trace

Pick your poison as you wish - either is great! :-)


I’d pick the one that supports IPv6 and then make two separate requests to it. One request over IPv4 and one over IPv6.

  curl -4 https://www.cloudflare.com/cdn-cgi/trace

  curl -6 https://www.cloudflare.com/cdn-cgi/trace
Also the reason that the 1.1.1.1 one shows only IPv4 address is because 1.1.1.1 is itself an IPv4 address. So any connection to it will have to be using IPv4.


could you not retrieve your ipv6 directly from the system?


Yes, but getting it in a response from an external server means I don’t have to be specific about which interface to get the IPv6 address of and so on.


the same specific interface used to contact the external server


Which is neatly abstracted away so you don’t have to think about it unless you want to. And therefore reaching out to an external server and having it say where the request came from is the path of least resistance for a script that can work across different hosts with minimal machine specific configuration.

Listen, if you want to check the IPv6 address from the interface list go ahead I’m not trying to stop you.

But because I anyway need to reach a third party to know my own IPv4 address then yeah when that third party can also tell me IPv6 address I’m gonna do it that way.


That requires running it on the router/device which gets the public IP address. By using the service you can update your DNS IP address on a system that is behind the router.


What do you mean?

Every device gets a public ipv6 (usually).


I have a router connected to the internet, it gets the public IP address.

The router is connected to the internal network in my home and has the IP address of 192.168.1.1.

Behind the router is my computer which has a non-public IP address, for example 192.168.1.2. My computer is the one I want to run the program to update the DNS entry. My computer does not know what the public IP address is by looking at its interfaces.


We're talking ipv6 not ipv4


I think the same situation applies for IPv6 if you are behind a router.


No, it doesn't


I'll agree to disagree :)



As I've said before. The server behind the router does not have a public IPv6 address. It is NATed to the Internet. Getting the internal IPv6 address is useless for this case.

I do not like to have the servers/computers that are on the internal home network directly connected to the Internet.


NATing in the context of IPv6 is not a common thing. It is the exception, while it's the rule for IPv4.


i.e., an arbitrarily-selected interface capable of reaching Cloudflare.


This is how I use it in my bash script:

  current_ip=$(curl -s -X GET https://1.1.1.1/cdn-cgi/trace | grep -Po "(?<=ip=)(.*)")


I find awk more clear for this kind of job. You can replace

  grep -Po "(?<=ip=)(.*)"
with

  awk -F= '$1 == "ip" { print $2 }'


Thanks for that, I agree.


Works on every cloudflare-MITMed domain btw.


Kind of, you can use the CF CDN to grab an external IP by using any CF site.

Example: https://www.fullspectrum.dev/a-less-suspect-way-to-get-exter...


You could host your own VPS for a few dollars specifically for the purpose of responding back to you with your own residential IP. But that wouldn’t be free.

In my experience, you have to be careful if relying on one IP source because if they give you the wrong one, then your servers could be MITM’d. I say this because I have a script which does this exact thing, and found a couple of these ‘what’s my ip’ services giving me someone else’s IP. Because of that, I randomly query a few IP services and make sure the answers are identical before I trust any of them.
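A rough sketch of that check (the service list and approach are just what I'd pick, nothing authoritative):

  #!/bin/sh
  # ask several lookup services and refuse to act unless they all agree
  answers=$(for url in https://api.ipify.org https://ifconfig.me/ip https://icanhazip.com; do
              curl -fsS --max-time 5 "$url"; echo
            done | sed 's/[[:space:]]*$//' | grep -v '^$' | sort -u)
  if [ -n "$answers" ] && [ "$(printf '%s\n' "$answers" | wc -l)" -eq 1 ]; then
    echo "agreed public IP: $answers"
  else
    echo "IP sources disagree or failed, not updating DNS" >&2
  fi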


There's a way to tell the Caddy server to host its own access log.

So you have some junk VPS or whatever that just has Caddy hosting its log with an easy-to-remember domain (they're cheap enough), and you go like "curl http://easydomain.com/idreallylikemyip" and then once more: curl http://easydomain.com/N | grep "idreallylikemyip"

The code that used to work is on my GitHub; I uploaded it there a week or two ago. Someone who needs a way to find out the public IPv4 of any device, not just their own, can probably figure out how to get it to work again!


You can have Caddy return the user's IP address directly: https://caddy.community/t/caddy-server-that-returns-only-ip-...

Similar things are also possible with nginx and Apache.
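If it's useful to anyone, the Caddyfile for that idea is tiny (hostname is a placeholder; this is just a sketch of the approach in the linked thread):

  ip.example.com {
      # respond with the connecting client's IP and nothing else
      respond "{remote_host}"
  }

Behind a proxy/CDN you'd want to read the forwarded header instead, of course.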


This was implemented in 2018, but it is good to know. It also doubled as a dumb way to pass messages between hosts.


You could do it with a Cloudflare Worker that just returns the IP address of the request.


Nevermind. The other answer is better


I’ve used this: dig @1.1.1.1 ch txt whoami.cloudflare +short


That's pretty neat!

And they offer a similar service on their DNS resolver over IPv6.

This page lists the IPv6 addresses to use when connecting to their resolver over IPv6

https://developers.cloudflare.com/1.1.1.1/ip-addresses/

and with that I just tried

  dig @2606:4700:4700::1111 ch txt whoami.cloudflare +short
And it works, returning the IPv6 address that the request came from :)


If you too are tired of relying on outdated software from paid services like NoIP and DynDNS, and are in need of a reliable way to manage your home server with your own domain name, try this simple script with a free Cloudflare account. It just gets the job done...


> restart cronjobs

>

> sudo systemctl restart cron

Hello author, there's no need to restart cron, crontab -e applies changes automatically on exit. And the daemon is called "cron", not "cronjobs".


Hello back! Thanks, I did not know crontab -e applied changes automatically. Fixing it now; also 'cronjobs'.


If for some reason your DDNS client supports dyndns but not Cloudflare (e.g. UniFi OS), you can use this Cloudflare Worker as an adapter:

https://github.com/willswire/unifi-ddns


I also wrote my own pseudo-DDNS recently! With inspiration from a couple of similar projects on GitHub.

Mine is a Go executable that runs directly on my OpenWrt-based router on a 30-minute cron job. The beauty of running it on my router directly is that I can simply query the `eth0` interface for my public IP address - no need for a `curl` to determine my public IP.

https://github.com/jackphilippi/r53-ddns
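The shell equivalent of that interface query looks roughly like this (the interface name is whatever your WAN uplink is called):

  # read the IPv4 address currently assigned to the WAN interface
  ip -4 addr show dev eth0 | awk '/inet /{print $2}' | cut -d/ -f1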


Some time ago, I built a similar project: https://github.com/dethos/worker-ddns

The main difference is that, for security reasons, it uses a "Cloudflare worker" to change the DNS record.

> Since Cloudflare API Token permissions aren't granular enough to limit the token access to a single DNS record, we place a worker in front of it (this way the token with extra priviledges never leaves cloudflare's servers).

It works very well, no complaints until now.


You can achieve the same on virtually any DNS hosting with RGAP[1]. The trick is to delegate the name of interest to a server running the RGAP DNS server and let it respond to queries for that domain name. Bonus: you can have more than one host running RGAP-agent and exporting its address to DNS.

[1]: https://github.com/SenseUnit/rgap


This kind of script should ideally run on your main router, and openwrt already has support for Cloudflare DDNS [1]

[1] - https://openwrt.org/packages/pkgdata/ddns-scripts-cloudflare


I did a very similar script with Deno/TS and DigitalOcean's DNS. I also set up a couple of domains on a small Linux instance to deliver IP address responses, in order to facilitate IP lookups.

    ipv4.bbs.land
    ipv6.bbs.land


A bit of a tangent, but something like PowerDNS authoritative server comes with an API[0] that can be leveraged for similar functionality to what Cloudflare provides.

Decentralization of the internet has to start with Authoritative DNS. I know it's not free to host an authoritative server like this on a VPS, and there are DDoS considerations. But the flip side is that DNS is a metadata protocol and contains a wealth of information that anybody privacy focused should think twice about. It's also an incredibly powerful and important protocol to understand.

[0] https://doc.powerdns.com/authoritative/http-api/index.html
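To give a flavor of it, replacing a record is one authenticated PATCH against the built-in webserver (a sketch based on the docs; the zone, name, address, and key are placeholders):

  curl -sS -X PATCH \
    -H "X-API-Key: $PDNS_API_KEY" \
    --data '{"rrsets":[{"name":"home.example.com.","type":"A","ttl":60,"changetype":"REPLACE",
             "records":[{"content":"203.0.113.10","disabled":false}]}]}' \
    http://127.0.0.1:8081/api/v1/servers/localhost/zones/example.com.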


If you're privacy-focused, you should run your own recursive resolver. Running your own authoritative server doesn't help much with privacy if clients still go through centralized recursive resolvers to query your domain.


You should run both.

Consider Cloudflare's (and other large-scale infrastructure providers' like TLD operators') point of view on the traffic: if your private resolver is using root hints, its IP is now correlated with the lookup of that domain even if they don't proxy the website. That's you and your users, and they can do that at scale. So it's important to point queries for your assets directly to your authoritative servers, or rewrite inline, without ever querying an internet source.

dnsdist[0] (also PowerDNS) allows you to load balance and apply rules across upstream resolvers, which opens up a lot of possibilities on the recursive side.

Trusted resolvers with a healthy number of users originating iterative queries from nondescript and changing IPs are probably the best way to anonymize your recursive traffic.

[0] https://dnsdist.org/


I'm calling out the elephant in the room: you're putting way too much faith in these IP lookup services without questioning their obvious ability to screw you over by giving you the wrong IP. Is no one in here able to see this is terrible security?


If you can't trust cloudflare to give you a correct IP how could you trust it as a name server?


The script was not using Cloudflare to grab the IP; it was using the iptools website. Plus, it doesn't even need to be malicious - it can simply give you the wrong IP. I know this because I ran a script querying 40 different sites which provided the IP address, and I found at least 4 of them giving incorrect IP addresses over the course of 24 hours. This is why you shouldn't trust any single source, but compare multiple different sources and THEN update Cloudflare with the IP. You see?


This is a pretty nice option for Cloudflare domains. An alternative I use is DomainConnect, which provides free DDNS but the main backer of it is GoDaddy so I had to leave the domain I use it with registered there.


I wrote one of these in C# years ago after seeing my friend write one in GoLang even more years ago.

GoLang: https://github.com/wyattjoh/cloudflare-ddns

C#: https://github.com/nick-funk/dyn-dns

Mine is more barebones since I threw it together quickly in an afternoon. I feel like many a HomeLab person fighting their ISP is taking advantage of this Cloudflare API trick


I feel ripped off! (kidding, this is commonly "reinvented" by many). Here's my Cloudflare and Google DNS ones: https://github.com/zackoch/easy_cloudflare_dns_updater/tree/...

https://github.com/zackoch/easy-dynamic-google-cloud-dns


I use Cloudflare with ddclient for a Raspberry Pi weather station on T-Mobile (a regular line, not TMHI). This allows me to view it anywhere.

It just sets the AAAA every 5 minutes via Cloudflare's API, and their CDN proxies it automatically for the IPv4-only clients. I leave the A record blank.

EDIT: Has to be this way because IPv4 is behind CGNAT on their network, whereas IPv6 is fully routed public addresses. The home internet product is set up differently and you can't host stuff on it.


Since my IP hardly changes, I went from DDNS to an email notifying me when the IP changes with this cron:

  old_ip=`cat ~/.prev_ip`
  my_ip=`ifconfig em0 | awk '/inet/ {print $2}' 2>&1`
  my_email=me@example.com

  if [ "$my_ip" != "$old_ip" ]; then
    echo $my_ip > ~/.prev_ip
    echo $my_ip | mail -r $my_email -s "New IP: $my_ip" $my_email
  fi


> Since my IP hardly changes...

Same. Our wireline ISPs used to issue new public IPs every 1-12 weeks. Now it's more like 6 mos to never.

I'm thinking this is due to pressure from IPv4 exhaustion and the rise of easy DDNS. There's also an overall shift - from using tech to protect profit-generating services to using lobbyists.

To share an anecdote from the before times: I was once trying to setup a VPN endpoint on a client's DSL connection. Every time I initiated the connection, their public IP would change. The lease renewal was fairly quick and I could trigger 5 changes a minute.


For me it changes reliably on every reconnect, but there are no forced reconnects, and my router basically never restarts anymore since I am on OpenWrt and am done setting everything up.


> For me it changes reliably on every reconnect,

What kind of reconnect?


router to provider network


I did something similar, but scripted a curl command to update the DNS A/AAAA records (which have a short TTL) for the hostname.

It’s also trivial if you run your own nsd/bind instance.


I’ve been favoring Tailscale lately for establishing magical access to machines at home. Because it permits two-factor authentication based on Google and other systems, it seems more secure than just having things exposed via public IP. That being said I definitely appreciate that being really on the internet has its uses!


Did anyone here set up a good rsyslogd configuration where the receiving syslog collector limits incoming logs to only known DDNS machines?

I think I may be able to stitch something together with periodically reconfigured packet filters, but I'd appreciate an existing solution.

Bonus points if running on freebsd.


Put a filter on syslog-ng, IIRC that runs on the BSDs


That looks good.

I'd probably prefer doing this at lower layers like pf, since I know how to reload those configs via cron, and since I want to prevent unwanted or malicious packets from even making it to the syslog code.

I was just surprised to find no recipe online, it's apparently more of a niche case than I thought. Worth documenting, probably.


I've used this project which provides a Docker container: https://github.com/timothymiller/cloudflare-ddns


There are a lot of these on github. This one seems to be maintained well:

https://github.com/zebradil/cloudflare-dynamic-dns


This application would benefit from checking the external IP against multiple external sources before updating via the Cloudflare API.

Also, if running a home server you’d want that 5min wait time brought down to something like 1 minute.


It's better to do a script on your router, which knows exactly when the ISP's DHCP changes. Mikrotik has an event to capture this, and *sense has built in scripts for various DDNS providers.


Similar project which runs in Docker:

https://github.com/favonia/cloudflare-ddns

It's cache friendly and respectful of rate limits


Nice, there seem to be a lot of these! I personally use https://github.com/qdm12/ddns-updater, it’s from the creator of gluetun


I have been using this for a couple of years, ticking away on an RPi. Works perfectly.


This is the script I use for this: https://github.com/g-rocket/cloudflare-ddns-updater


I wanted to do this a long time ago but I wouldn't trust my router with a Cloudflare API key. Paranoid or is there a way to limit that key to one domain or, even better, one DNS entry?


As the other commenter says, you can get pretty granular with the permissions. If you want to go even further, you can build a Cloudflare Worker that performs exactly the request that you want to do, and nothing else. Then you can configure your router to hit that instead of the API directly.


Yes you can generate a key which, for example, only allows you to edit DNS of a specific domain


You can set up the job on a trusted machine behind the router; it could be a Raspberry Pi or your desktop.


For those who depend on Cloudflare extensively and have some traffic, I have a question:

I was researching whether it's worth it to switch my pet project to Cloudflare's various offerings (D2, Workers) instead of AWS/GCP, since Cloudflare has a very generous free tier.

But from quick googling (I think it's Reddit), some people said Cloudflare uses bait-and-switch where at some point you will need certain features that are only available in enterprise plan or something, basically significant cost increase.

Should I be concerned?

EDIT: I want to make it clear that I'm talking about significant cost increase, something that will catch many people by surprise.


For standard, legal web traffic Cloudflare will always be free. If you’re using us for just that and anyone on our sales team ever pressures you to upgrade, email me because it’s an explicit violation of our policies. Sales people are humans, so sometimes they make mistakes, but I can set it straight. Here’s my email:

matthewatcloudflaredotcom

So what are the cases you may have read about? They fall into two big buckets:

1. Streaming Video

A video stream is just a series of image files strung together. So some people have tried to use our free service to serve video. This causes two problems. First, a second of video is often as much as 10x the bandwidth as a typical web page load. We’ve done a lot to make bandwidth costs low, but it can add up fast.

Second, the people who tend to do this sort of janky video streaming are often streaming pirated video content. When that happens and we don’t shut it down we get sued. That’s costly.

We do offer a service to stream video. It’s creatively named Stream. It’s elegant and not janky and designed to be the least costly way to stream video content. It’s cheap but it’s not free.

2. Illegal Content

The site that is in the link you referenced was serving a gambling site to a jurisdiction where gambling is illegal. The problem was, the jurisdiction retaliated by blocking their IPs. If that only blocked the one gambling site, that’s their problem. But we share IPs between customers on our low end plans. So if a customer does something illegal somewhere and it causes an IP to get blocked then it causes harm to a bunch of other customers.

The solution is dedicated IP addresses. In a case like this we have a product called BYOIP (which is exactly what you think it is). It’s bespoke and expensive for us to maintain and customers who care about it tend to be customers who have budgets to pay for it, so it’s expensive. We could probably invest engineering resources to make it less bespoke, but there’s really not a ton of demand.

This customer was doing something illegal somewhere according to some government. We said — no judgment — but you’re getting our IPs banned and causing harm to other customers and we can’t let that happen. We presented a solution (albeit an expensive one). They balked and wrote a blog post. And now people assume there’s a bait-and-switch sales strategy. There’s not. Turns out people who use our Free plan rarely turn into million dollar customers. And people who are million dollar customers don’t really even consider our Free plan. So the world generally sorts itself correctly.

We get stymied by our policy of not talking about the details of customers without their permission, so it makes it hard to respond to blog posts like that one. But enough people have asked me about it and I’m tired enough about it that I’m going to make the decision to revise the policy: we won’t publicly disclose any details about a customer without their permission; but if you write a blog post complaining about us and leave out the salient details, then we’ll reserve the right to fill those details in.

Anyway, in 99.99% of cases, and especially if you’re not janky streaming or doing something illegal, our Free plan will work great for you and you’ll never hear from anyone on our Sales team.


> But from quick googling (I think it's Reddit), some people said Cloudflare uses bait-and-switch where at some point you will need certain features that are only available in enterprise plan or something, basically significant cost increase.

Cloudflare is only "free" for hosting websites; doing something like hosting just images or binary data and pushing hundreds of gigabytes or terabytes a month is likely to get your domain dropped from Cloudflare [0]. However, they do allow these non-website use cases (like hosting binary files, tons of images, etc.) when using their other products like R2 and/or Workers.

But, even with those stipulations, they do have a somewhat dubious sales tactic where, if you're pushing a lot of data, they:

- send you an email saying "you're using a lot of data"

- Have a line threatening you to "pay us to safeguard your website from potential suspension or restricted access"

- If you don't pay, you're in limbo on whether or not you're actually violating T&S and should make plans for being dropped by CF

Going over X0 TB/mo seems to be the threshold for getting put in this sales funnel, based on the few instances i've seen, but I can't confirm it. In some of these cases, the accounts survived, and in others they were dropped, so this isn't always a death sentence.

I would be incredibly grateful if Matthew Prince / eastdakota commented on this sales tactic, because it's obvious that some sales EVP at some point in time said "When Trust & Safety flags a customer for bandwidth reasons, we need to try to upsell them before T&S can review and make a determination for the account", which seems incredibly bad manners with how often CF speaks about their anti-"bandwidth rent seeking" philosophy[1].

0: https://community.cloudflare.com/t/the-way-you-handle-bandwi...

1: https://blog.cloudflare.com/aws-egregious-egress


It's only a bait and switch if you pay for something that they then pull out from under you. This is just called a free trial.


If only this didn't require an API token with write access to the entire domain. Please Cloudflare, let us grant access to specific (or regexp!) records


Last I checked, AWS has the same limitation. One workaround is creating a separate sub-zone and granting access only to that zone for whatever needs it. But for a "cheap homelab" solution, that's gonna cost you a bit more per month.


66 comments and no mention of inadyn?

https://github.com/troglobit/inadyn


I did something similar with `curl` and `sh` about a year ago, when the version of `ddclient` on OpenBSD didn't properly support Cloudflare.


My internet router (Fritzbox) has DDNS built-in, so I just use the domain provided by the Fritzbox / AVM combined with DNAME records.


If you’re behind a CGNAT then this won’t help you much. For many residential installs that is the case unfortunately


Nice. Consider adding fallback services to api.ipify.org, such as ifconfig.me or icanhazip.com
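e.g. a simple fallback chain (just a sketch; any similar services would do):

  ip=$(curl -fsS https://api.ipify.org \
    || curl -fsS https://ifconfig.me/ip \
    || curl -fsS https://icanhazip.com)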


I used duckdns and have for years


Psa: https://freedns.afraid.org/ still works and it’s free.


A dhcp lease hook is also useful to keep up with changes instantly.
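For example, with ISC dhclient an exit hook gets the new lease handed to it (Debian-style path; the update script name is made up):

  # /etc/dhcp/dhclient-exit-hooks.d/ddns-update
  case "$reason" in
    BOUND|RENEW|REBIND|REBOOT)
      if [ "$new_ip_address" != "$old_ip_address" ]; then
        /usr/local/bin/update-ddns.sh "$new_ip_address"
      fi
      ;;
  esac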


Huh, I ignored this article because it sounded like such a solved problem, but it stayed on the home page long enough that I thought I might be missing something.

Not only was it exactly what I expected from the title, there were 3 obvious but unimportant flaws in the "Ubuntu/Debian" setup section:

- a cron line that runs every 60 minutes is commented as running every 5

- unnecessary crond restart. Not just reload, which would already be redundant, but a full restart

- unnecessarily restrictive heading. There's nothing specific to Ubuntu/Debian in those instructions

I mean, it's a fine solution, like the 100s of others out there. I'm not trying to throw shade on the author; they've made something a little more flexible than most one-offs, without going overboard like the ones that handle dozens of different services. But... why the front page? Why the upvotes? Can't you kids just stay off of the damn lawn?!


I created an account just to comment on this: I'll tell you something, you are "throwing shade" on the author. Even if this is a "kid", were you born having immediately started to invent (insert complex tech) from scratch? This guy did a nice job and wanted to share his work with us, and apparently many others appreciate it, and thus it ended up on the front page. Comments whose only intention is to make someone else's work seem smaller and "unworthy" are just sad and unnecessary.


Hey Tobi, thank you! This is HN, so that is fully expected, but it’s all fun and good. I made this super quickly and just wanted to share in case anyone needed it. I only saw it today—reached the front page and got 444 stars! I’m as surprised as the shade guy :)


I've been using afraid.org forever now. Works great.


Do they rate-limit DNS resolution and force you to enable JavaScript and click a captcha for the full Cloudflare experience?


Nice idea. Of note, Cloudflare also supports tunneling.


I did the same, with Linode dns and their api


It feels like this trick would violate the terms of service...? Caveat: I don't use Cloudflare.


Noip isn't free?

I have had 3 domains there for years and haven't paid them once.

Some time ago they started requiring that I mark the domains active each month. I wrote a script that intercepts that email and logs into their site to reup the domains. Recently that script broke and I haven't bothered to fix it because logging in once a month is a nothing burger.


"Yeah, but"; do I want to be putting up impossible-to-solve captcha loops in people's faces? Can you do this in a way that people who know your domain can go directly to your actual IP address, rather than a Cloudfare proxy?


If you set {proxied: false} - it'll resolve to your IP directly.
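For context, that flag is just a field on the DNS record update call (a sketch of the kind of standard v4 API request involved; the IDs, names, and token are placeholders):

  curl -sS -X PUT \
    "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
    -H "Authorization: Bearer $CF_API_TOKEN" \
    -H "Content-Type: application/json" \
    --data '{"type":"A","name":"home.example.com","content":"203.0.113.10","ttl":60,"proxied":false}'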


curl dns.he.net -H someauthheader

Second recommendation: instead of https://api.ipify.org/, https://cloudflare.com/cdn-cgi/trace is always my preference.

Thirdly, why does such a basic thing get that many upvotes? HN turns into TikTok for SDEs.


Well, these days SDE means "don't bother properly engineering your software, just throw away the entire system environment and re-make it!" aka containers, so...


Weird, this project is very similar to this one https://github.com/zackoch/easy_cloudflare_dns_updater/tree/... Did OP kang my project?


Haha, no, I didn’t! Honestly, I didn't put much thought into this project either. I needed it for myself to host a game we're developing (it’s 11GB per download) and was frustrated with the Noip client. I made this in about an hour and thought I’d share it too! Just seeing all the upvotes today :)



