I’ve seen packages that do "internet detection" by calling out to icanhazip.com, and I just thought that was so irresponsible. What if your package got popular? How much money would you be costing the hoster? For services like this, people just don’t consider the fact that there’s someone on the other side.



If you want, you can set up a similar service yourself by adding the following lines to an NGINX config:

    location = /ip {
            default_type text/plain;
            return 200 '$remote_addr';
    }

Requesting "yoursite.tld/ip" will then return your IP address. I set up something like this on all my servers and recommend that others do the same. It's easy to do the same for Apache and Caddy configs. That should help spread the load.

I'm curious what other overused utilities can be trivially done with pure server configs.


If you want JSON instead:

        location /ip {
                add_header Content-Type "application/json";
                return 200 '{"host":"$server_name","ip":"$remote_addr","port":"$remote_port","server_ip":"$server_addr","server_port":"$server_port"}\n';
        }


Is it easy to do the same for Apache? The best solution I found was some hacky way with an ErrorDocument directive, which seems pretty gross.


You can use SSI, and echo the remote IP: https://httpd.apache.org/docs/2.4/howto/ssi.html

    <!--#echo var="REMOTE_ADDR" -->
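
For that directive to be processed, mod_include has to be loaded and includes allowed for the directory; a minimal sketch (directory path and extension are placeholders):

    # httpd.conf: enable SSI processing for .shtml files
    <Directory "/var/www/html">
            Options +Includes
            AddOutputFilter INCLUDES .shtml
    </Directory>

An ip.shtml file containing just that echo line then returns the client's address.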


Requesting /ip on HN itself asks you to log in as an administrator.


I feel the same about dependency steps in CI that run without a cache or anything similar. Package repos like RubyGems, npm, and PyPI get utterly rinsed by the continual downloading and redownloading of stuff the client should already have stored.


This. And with both GitHub and GitLab it takes quite a bit of extra effort to set up caching. It hurts to see 'npm ci' download half the internet every time a developer pushes to the dev server.
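
For what it's worth, on GitHub Actions a minimal sketch of caching npm's downloads via setup-node's built-in cache option looks roughly like this (versions are illustrative):

    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          # caches ~/.npm between runs, keyed on the lockfile
          cache: 'npm'
      - run: npm ci

GitLab CI has an equivalent cache: block in .gitlab-ci.yml; in both cases it's opt-in rather than the default.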


It would be interesting to speculate about the greenhouse-gas impact of all these repeated downloads.


If nothing else, it is patently wasteful, and as a user you don't really see CI billed in terms of network bandwidth, only indirectly through the equivalent of mainframe minutes. Even then, that's not enough to discourage anyone from building a suboptimal pipeline.


It used to be possible to have a Squid proxy, or even a Varnish cache run backwards as a forward proxy, handle a lot of this, but HTTPS everywhere has made that much harder to do. Still possible, however.


That's why the first step of CI for me, when possible, is to rsync a .tar.gz file from the server I'm deploying to. The tarball contains statically-linked binaries and other stuff I'll need for the build.

It's also a good reason for CI providers to mirror package repositories.
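
Roughly, that first rsync step looks something like this (host and paths are invented for illustration):

    # pull the cached, statically-linked build dependencies from the deploy target
    rsync -az deploy@target.example.com:/srv/ci-cache/deps.tar.gz .
    tar -xzf deps.tar.gz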


Nothing is going to change until the hosters make it a pain to abuse. RubyGems could require an API key for downloads and rate-limit that key.

Sure, you could attempt to generate a bunch of keys and cycle them, but it would be easier to just cache your gems.
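
Purely as an illustration of the mechanism (not how RubyGems actually works), the rate-limiting side can be done at the edge, e.g. in NGINX, keyed on an API-key header; names and paths below are placeholders:

    # one shared-memory zone tracking request rate per API key
    limit_req_zone $http_x_api_key zone=per_key:10m rate=10r/m;

    location /downloads/ {
            limit_req zone=per_key burst=20 nodelay;
            root /srv/packages;
    }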


The article was about abusive floods accounting for 90% of the traffic. Contrary to your comment, the author was happy with legitimate use cases like packages doing IP detection.



