
Epic trolling, you're sane.

I’m sure you’ll be making your own scenery files very soon!!!

Wild that as they grow and scale, sustainable or climate-sensitive power can't be sourced or scaled (directly) in proportion.

It will be interesting to see over the next few decades whether sustainable power sources (arguably nuclear, especially under the recent ADVANCE Act) just keep lagging behind demand, or catch up.


75% of the emissions in the report are from "scope 3" which is mostly embodied emissions of equipment. So the necessary sustainable power sources would be in China mainly, not ones serving Google's datacenters.


In the press release and the report, though, they are focusing on DC energy needs. I see that as foreshadowing that things will be worse going forward.


Maybe because they are trying to influence policy goals? Google is a large customer but it is still a teensy tiny fraction of American electricity consumption. I don't understand why the press has decided that it falls on the information industry alone to decarbonize the US (and global) power grid.


Filtering (out) the meta categories would be nice (so you can look at just the meta categories or just the countries).

Something that I don't think is available at the source, and that would be a nice addition, is a per capita figure (using population as a basic sortable column for quick reference).

It's a very cool tool and resource, though.


awesome idea. will add


Great example of ground effect. Between that and the amount of relative wind against its 'wings' providing a great angle of attack... the really impressive part of the whole situation is: how much thrust did that little guy start off with that it was able to maintain lift for so long?


The Jonestown massacre was actually grape Flavor Aid:

https://www.vox.com/2015/5/23/8647095/kool-aid-jonestown-fla...

They really do appear to be all in on eliminating memory safety bugs from C/C++:

> Over the next few years we plan to continue replacing C or C++ software with memory safe alternatives in the Let’s Encrypt infrastructure: OpenSSL and its derivatives with Rustls, our DNS software with Hickory, Nginx with River, and sudo with sudo-rs. Memory safety is just part of the overall security equation, but it’s an important part and we’re glad to be able to make these improvements.

It seems like a really challenging endeavor, but I appreciate their desire to maintain uptime and a public service like they do.


Big "lifelock" vibes...and if I understand it correctly, that lifelock character has had his identity successfully stolen and impacted multiple times.

Sometimes the person who says "I'm really smart, like Mensa level" does some really ignorant and stupid things.


Sometimes you're also just hoping to find someone or something out there that interests you.


Could easily be considered the precursor to and parallel of LPTHW; it feels like it's been around for a lot longer than 10 years...


That's a high bar for most organizations to clear. Leveraging certificates is excellent if the supporting and engineering actors all agree on how to manage them, and on how to train the users and workforce to use them (think root authorities, and revoking certificates issued by an authority).

I've seen a few attempts to leverage certificates or GPG, and plain keys are nearly always the 'easier' process, with less of a teaching burden (which smart(er) people at times hate to shoulder).


You can store your regular keys in GPG; it's a nice middle ground, especially if you store them on a YubiKey with OpenPGP.

Of course OpenSSH also supports FIDO2 now, but it's pretty new and many embedded servers don't support it, so I'm ignoring it for now. I need an OpenPGP setup for my password manager anyway.
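(For reference, generating a FIDO2-backed key is a one-liner these days, assuming OpenSSH 8.2+ on both ends and a compatible security key:

    ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk

but as said, older/embedded sshd builds won't accept the -sk key types.)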


I use both PKCS#11 and OpenPGP SSH keys, and in my opinion PKCS#11 is a better user experience if you don't also require PGP functionality, especially if you're supporting macOS clients, as you can just use Secretive[0]. As you say, FIDO is even better but comes with limitations on both client and server, which makes life tough.

[0] https://github.com/maxgoedjen/secretive
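For anyone curious about the non-Secretive route, stock OpenSSH can talk to a PKCS#11 provider library directly; roughly (the library path varies by OS and token, this one is only illustrative):

    # load the token's keys into the agent
    ssh-add -s /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so

    # or configure it per-host in ~/.ssh/config
    Host bastion.example.com
        PKCS11Provider /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so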


Oh yeah I don't really use macOS anymore. And I do really need PGP functionality for my password manager.

I used PKCS#11 before with OpenCT and OpenSC (on OpenHSM PIV cards), and the problem I had with it was that I always needed to runtime-link a library into the SSH binary to make it work, which often caused problems on different platforms.

The nice thing about using PGP/GPG is that it can simulate an SSH agent, so none of this is necessary; SSH just communicates with the agent over a local socket.
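For reference, the glue is roughly this (paths assume a typical Linux setup):

    # ~/.gnupg/gpg-agent.conf
    enable-ssh-support

    # in your shell profile: point SSH at gpg-agent's socket
    export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
    gpgconf --launch gpg-agent

After that, ssh sees the card's authentication key like any other agent key.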


> And I do really need PGP functionality for my password manager.

Just curious: is it https://www.passwordstore.org/?


Yes it is! It's great!


By the way, to elaborate: I love it because it's really secure when used with YubiKeys, it's fully self-hosted, it works on all the platforms I use (including Android), and it's very flexible. There's no master password to guess, which is always a bit of an Achilles' heel with traditional password managers: because you have to use it so much, you don't really want it to be too long or complex. This solves that while keeping it very secure.
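For anyone unfamiliar, the day-to-day workflow is roughly this (the key ID and entry names are placeholders):

    # point the store at your (YubiKey-backed) GPG key
    pass init 0xDEADBEEFCAFE1234

    # each entry is just a GPG-encrypted file, optionally tracked in git
    pass generate web/example.com 24
    pass show web/example.com

    # optional sync across devices
    pass git init
    pass git remote add origin git@myhost:password-store.git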

The one thing I miss a bit is that it doesn't do passkeys. But well.


I use it as well (with a YubiKey) and I love it! On Android I use Android-Password-Store [1], which is nice too. There is just this issue with OpenKeychain that concerns me a bit; I am not sure if Android-Password-Store will still support hardware keys when moving to v2... but other than that it's great!

[1]: https://github.com/android-password-store/Android-Password-S...


SSH certificates are vastly different from the certificates you are referencing.

SSH certificates are actually just an SSH key attested by another SSH key. There's no revocation system in place, nor anything more advanced than "I trust key X, and so any keys signed by X I will trust."
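That attestation is just ssh-keygen; a minimal sketch (names and paths are made up):

    # the "CA" is itself only an SSH keypair
    ssh-keygen -t ed25519 -f user_ca -C "user CA"

    # sign a user's public key; this drops an id_ed25519-cert.pub next to it
    ssh-keygen -s user_ca -I alice@example.com -n alice ~/.ssh/id_ed25519.pub

    # inspect what was signed
    ssh-keygen -L -f ~/.ssh/id_ed25519-cert.pub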


There is a revocation system in place: the RevokedKeys directive in the sshd configuration file. It seems to be system-wide rather than configured at the user level; at least, that's the only way I've used it.

I agree with the sentiment though; it is far less extensive than traditional X.509 certificate infrastructure.
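Roughly, for anyone who hasn't used it (paths are illustrative):

    # build, then later update, a key revocation list (KRL)
    ssh-keygen -k -f /etc/ssh/revoked_keys stolen_key.pub
    ssh-keygen -k -u -f /etc/ssh/revoked_keys another_key.pub

    # /etc/ssh/sshd_config
    RevokedKeys /etc/ssh/revoked_keys

And yes, that file then has to be shipped to every host.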


When I said revocation system, I intended to convey something similar to the Online Certificate Status Protocol, rather than a hardcoded list that needs to be synchronized between all the physical servers.

You are correct, though: you can keep a list and deploy it to all the nodes for revocation purposes.

It's unfortunate that there's no RevokedKeysCommand to support building something like OCSP.


I am not familiar with SSH certificates either. But if there is no revocation system in place, how can I be sure a person's access can be revoked?

At our org we simply distribute SSH public keys via Puppet. So if someone leaves, switches to a team without access to our servers, or their key must be renewed, we simply update a line in a config file and call it a day.

That way we also have full control over what types of keys are supported and older, broken kex and signature algorithms are disabled.


The certificates have a validity window that sshd also checks, so the CA can sign a certificate for a short window (hours), after which the user has to request a new one.
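For example, something like (identity/principal are placeholders):

    # certificate only valid for the next 8 hours
    ssh-keygen -s user_ca -I alice@example.com -n alice -V +8h ~/.ssh/id_ed25519.pub

    # sshd rejects it outside the Valid: window shown here
    ssh-keygen -L -f ~/.ssh/id_ed25519-cert.pub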


One department in my company does this - you authenticate once with your standard company-wide OIDC integration (which has instant JML), and you get a key for 20 hours (enough for even the longest shift, but not enough that you don't need to reauth the next day).


> SSH certificates are vastly different from the certificates you are referencing.

And the SSH maintainers will refuse offers of X.509 support, with a justification.


I like SSH certificates, and I use them on my own servers, but for organizations there's a nasty downside: SSH certificates lack good revocation logic. OCSP/CRL checks and certificate transparency protect browsers from this, but SSH doesn't have a comparably good solution.

Unless you regenerate them every day or have some kind of elaborate synchronisation process set up on the server side, a malicious ex-employee could abuse the old credentials post-termination.

This could be worked around by leveraging TPMs, which would allow storing the keys themselves on hardware that can be confiscated, but standard user-based auth has a lot more (user-friendly) tooling and integration options.


It seems to me like short-lived certificates are the way to go, which would require tooling. I am actually a little surprised to hear that you're using long-lived certificates on your own servers (I'm imagining a homelab setup). What benefit does that provide you over distributing keys? Who's the CA?


I'm my own CA; SSH certificates don't usually use X.509 certificate chains. I dump a public key and a config file in /etc/ssh/sshd_config.d/ to trust the CA, which I find easier to automate than installing a list of keys in /home/user/.ssh/authorized_keys.
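Concretely, the drop-in is just something like this (filenames are mine; assumes your main sshd_config includes sshd_config.d/*, which most current distros do by default):

    # /etc/ssh/sshd_config.d/50-user-ca.conf
    TrustedUserCAKeys /etc/ssh/user_ca.pub

where user_ca.pub is the CA's public key, one line.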

I started using this when I got a new laptop and kept running into VMs and containers that I couldn't log into (I have password auth disabled). Same for some quick SSH sessions from my phone. Now, every time I need to log in from a new key/profile/device, I enroll one certificate (which is really just an id_ecdsa-cert.pub file next to id_ecdsa.pub) and instantly get access to all of my servers.

I also have a small VM with a long-lasting certificate that's configured to require username+password+TOTP, in case I ever lose access to all of my key files for some reason.


Not always, as quite often the donation is on hand, but not necessarily used.

Also: better to save a life and either average down whatever PFAS count they have, or bolster their blood volume so they can start producing their own, less PFAS-infested hemoglobin.

Lesser of evils, and kind of a win-win: if everyone were bloodletting for the safety and humanity of others.

