
"support or engage in terrorist or violent extremist offences"

What constitutes "support"? Hopefully your next government is OK with you back then liking the post of that one organization previously not classified as terroristic.


I wonder if arguing against someone who says "Group X eats babies with kitten sauce!" would count as supporting Group X...

At the very least you're connected via metadata, so Project Insight will still be interested in you.

The Holy Land Foundation case comes to mind.

> 9 Verifiers SHALL verify the entire submitted password...

Is this "don't microwave your hamster"-requirement a result of the bcrypt trouble [1] or how comes?

[1] https://security.stackexchange.com/questions/39849/does-bcry...


My suspicion is this is to rule out a specific hash. One well known to everyone interested in computer security back in the 1990s. One that haunts our nightmares to this day.

Back in my day, you see, there was this scheme known as NTLM, which took your password and stored and matched it in two ways: the NT hash (MD4 of your password in UTF-16) and the LM hash (take the first 14 bytes of your password in ASCII, uppercase them, split them into two 7-byte halves, add parity bits to each half, and use the results as DES keys to encrypt a well-known string). That LM hash existed because they wanted backwards compatibility with Microsoft LAN Manager, introduced for OS/2 back in 1987. Even in the 1990s it was a well-known weak link, and given that everything after the first 14 characters is simply discarded (and the two halves can be cracked independently), you can see that it is trivial to brute force with modern computing power. Microsoft has recommended against NTLM since 2010, but it's still in Windows for backwards compatibility reasons, and there are almost certainly still servers running today that an NTLM hash could get you access to. So that's my guess as to what they are targeting.
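
For the curious, the parity-bit expansion is simple to sketch in Python (my own illustration, not any particular implementation; the DES encryption of the well-known constant is omitted):

    def lm_des_key(half: bytes) -> bytes:
        # Expand 7 key bytes (one half of the 14-byte password) into
        # 8 DES key bytes: 7 data bits per byte plus an odd-parity bit.
        assert len(half) == 7
        bits = int.from_bytes(half, "big")            # 56 bits total
        key = bytearray()
        for i in range(8):
            chunk = (bits >> (49 - 7 * i)) & 0x7F     # next 7-bit slice
            parity = (bin(chunk).count("1") & 1) ^ 1  # make total bit count odd
            key.append((chunk << 1) | parity)
        return bytes(key)

    print(lm_des_key(b"PASSWOR").hex())  # 8 parity-adjusted DES key bytes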


It predates bcrypt by quite a bit - descrypt would truncate passwords at 8 characters (or technically 8 bytes) and was used on a lot of older Linux and *NIX systems.


Ran into this exact issue on a project. The developer who implemented the solution wanted to make sure we handled those 6-digit PINs in the most secure way, with salt and pepper and bcrypt, and at the end of the day the system only actually checked the first two digits of the PIN, because the long pepper prepended to it pushed everything else past bcrypt's 72-byte limit.
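
A minimal sketch of how that can happen, assuming a long pepper was prepended (the pyca/bcrypt package is used here, and some library versions reject over-long input instead of silently truncating):

    import bcrypt  # pip install bcrypt

    pepper = b"P" * 70   # hypothetical long server-side pepper
    pin = b"123456"

    hashed = bcrypt.hashpw(pepper + pin, bcrypt.gensalt())

    # pepper (70 bytes) + PIN hits bcrypt's 72-byte cutoff after two
    # digits, so any guess sharing the first two digits verifies:
    print(bcrypt.checkpw(pepper + b"129999", hashed))  # True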


Still a problem irrespective of algorithms used. I recently set up an account on a website, letting my password manager do its thing. Couldn’t log in. Turns out the password was too long (20 chars when 16 were allowed) and was silently truncated during signup.

The login form of course used the entirety of the password, not truncating it. Fun stuff.


Similar problem with Microsoft Dynamics Great Plains. I think the save-password window accepts more characters than get stored, so trying to log in with the seemingly correct password a few times gets you locked out.

It also doesn't sanitise or warn about the password containing impermissible characters that will mess up the user's account in the SQL Server that backs GP. Then, after an admin tries to reset the user's password (typically to something like "Password1!"), the user can log in with the insecure 'temporary' password as many times as they want, but cannot change to a new password. When the user tries, GP claims success and says to use the new password at next login… but when logging out it announces that the password failed to change.


I ran into that with Paypal. Login limited my password length to something small (I think 20 characters?) but the signup page accepted my random 32 characters just fine.

I found out I could just enter the first 20 characters and log in. I've had other websites that simply broke. The worst one had a password-reset page that also didn't enforce its own password length limits, sending me into a frustrating password-change loop.


Does this permit taking a SHA-512 digest of the user input and sending that digest to the backend for proper password hashing?

My interpretation is that the entire password is being verified, even though the backend is only ever verifying a SHA-512 digest of it.

(Oh, and why would you do this? To support arbitrary-length passwords without opening yourself up to denial-of-service attacks. Support passwords as long as the user wants; only the fixed-size digest is sent.)
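
A minimal sketch of that idea (collapsed into one process for brevity; in the scheme described above the SHA-512 step would run client-side, and all function names are mine):

    import base64, hashlib
    import bcrypt  # pip install bcrypt

    def prehash(password: str) -> bytes:
        # Fixed-length digest; base64 avoids NUL bytes, which C bcrypt
        # implementations treat as string terminators.
        return base64.b64encode(hashlib.sha512(password.encode()).digest())

    def store(password: str) -> bytes:
        return bcrypt.hashpw(prehash(password), bcrypt.gensalt())

    def verify(password: str, stored: bytes) -> bool:
        return bcrypt.checkpw(prehash(password), stored)

The 88-character base64 string still gets truncated to 72 bytes by bcrypt, but what survives carries far more entropy than any human-chosen password.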



> The response has been what's called "safety certification":

This is the scariest part for me. Certifications are mostly bureaucratic sugar, yet very expensive. This seems like a sure way to strangle your startup culture.

If customers require certifications worth millions, nobody can bootstrap a small business without outside capital.


Assuming the level of certification will be proportionate to the potential risk/harm, this is actually totally OK. Like, would you want to fly in a plane built by a bootstrap startup that had no certifications? Or go in a submarine on extreme-depth ocean tours of the Titanic? Or have a heart device put in? Or transfer all of your savings to a startup's financial software that had no proof of being resilient to even the most basic attacks?

For me, it's a hard no. The idea of risk/harm-based certification and liability is overdue.


Problem is that it's rarely proportional.

There's a different thread on HN about the UK Foundations essay. It gives the example of the builders of a nuclear reactor being required to install hundreds of underwater megaphones to scare away fish that might otherwise be sucked into the reactor and, um, cooked. Yet cooking fish is clearly normal behavior that the government doesn't try to restrict otherwise.

This type of thing crops up all over the place where government certification gets involved. Not at first, but the ratchet only moves in one direction. After enough decades have passed you end up with silliness like that, nobody empowered to stop it and a stagnating or sliding economy.

> Like would you want to fly in a plane built but a bootstrap startup that had no certifications?

If plenty of other people had flown in it without problems, sure? How do you think commercial aviation got started? Plane makers were startups once. But comparing software and planes (or buildings) is a false equivalence. The vast majority of all software doesn't hurt anyone if it has bugs or even if it gets hacked. It's annoying, and potentially you lose money, but nobody dies.


Commercial aviation was regulated because planes were killing people, and when it came in, air travel became the safest form of transportation. That isn't a coincidence. If the vast majority of software doesn't hurt anyone if it has bugs, then it won't require any certifications. If you heard me arguing for that, then you heard wrong. I am advocating for risk/harm based certification/liability.


Aren't you arguing for the status quo then? Very little software can cause physical harm, and what can is already regulated (medical devices etc.).


Financial harm and harm via personal records being hacked should also be included. The Equifax leak for example should have resulted in much worse consequences for the executives and also new software compliance regulations to better safeguard that sort of record-keeping.


Why aren't they installing grates on the intakes?


There will be grates, but fish are small and grates obviously must have holes in them.


Is there an existing comparison between pgManage and pgAdmin somewhere?

At first glance, it seems they serve the same purpose. Am I missing something? (Besides the support for some other DBs; pgManage states that it targets Postgres primarily.)


The main difference is that PgManage supports other databases too (although it is mostly focused on Postgres support). Some new features originally introduced for Postgres get ported to the other databases as well (the table structure editor and ER diagram tool, for example).


I am not very good at telling jokes.

But even I can tell that this is low effort. Do you really get a laugh out of it? Like, "hahahaha, they said PHP 5"?


This site is very, very old.


And PHP 6 doesn't exist.


That's a little thing we call a joke.


Wonder if they support Perl 6 though.


Yes.


Yeah, agreed.


half-serious question:

What is the omnipresent "defined by a domain name" in the claims section of the patent (see [1]) all about? To me it seems unfit as a defining criterion for a network.

[1] https://portal.unifiedpatents.com/patents/patent/US-7949785-...


"Was Hans nicht lernt, lernt Hänschen nimmermehr."

german, roughly - "What Hans did not learn, his son will never learn."

It is so obviously wrong as a generalized truth but painfully accurate on occasion. Cool about it: The reaction of others towards this saying is a great signal on their views regarding life, society and education.


> They will get all the data collected by the service, which is none.

(disclaimer: not rooted in knowledge, but in pessimism)

If the web service / whatever cannot provide the requested data, it would be in violation of the order.

So all you really need are harsh fines for not complying with the order and the problem morphs from a technical one to a business decision.


Under current law, you cannot be compelled to share more data than you're already collecting. However, the term "collecting" is a bit broader than its most common use: if you receive some piece of data, you are collecting it, even if you don't keep logs or don't do anything with it.

It's not much different from how detective work is done in the physical domain. You cannot be compelled to ask a customer for more info, but you can be compelled to share info the customer gave you, even if you didn't ask for it or never wrote it down.


Devmem TCP sounds a lot like direct memory access. Am I mistaken if I think of it as a security nightmare? Do you by chance have any links to security considerations?


DMA is only a problem if transfers can be initiated by an untrusted party. This patch basically just makes transfers that would normally go device->RAM->device, with no actual processing of the data in between, go directly device->device instead (but still initiated by the OS, not by the device or the network!). This doesn't affect access control at all, as the trip through RAM doesn't impact that (though it does mean it only works if the machine isn't looking at or touching the data at all, which probably means it's only useful in situations where there's no encryption).


I think it's intended for use in private networks (not the internet), like SANs.


I created a user-mapped array TCP-devmem variant for 2.14 in which packets can be written from userland in zerobuf-style.

Security is/was pretty good once an LSM permits creating the connection.


Please excuse the silly question: Would proper directory and file ownerships not prevent this traversal?

If nginx does not run as root, how can it read files other than the ones explicitly assigned to the nginx user?


It would absolutely prevent it. Run the app as one user and nginx as another, go-rwx on all app files, set the group of the "static" files to www-data with g+r on them, and now the web server can't access app files.

It's LITERALLY app hosting 101 and people did it that way 20+ years ago.
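
A sketch of that recipe, assuming an illustrative /srv/app layout (directories additionally get g+x so they stay traversable):

    import grp, os, stat
    from pathlib import Path

    APP = Path("/srv/app")                  # illustrative layout
    STATIC = APP / "static"
    WWW_GID = grp.getgrnam("www-data").gr_gid

    for p in APP.rglob("*"):                # go-rwx on all app files
        os.chmod(p, stat.S_IMODE(os.stat(p).st_mode) & ~0o077)

    for p in [STATIC, *STATIC.rglob("*")]:  # chgrp www-data, then g+r
        os.chown(p, -1, WWW_GID)
        extra = 0o050 if p.is_dir() else 0o040  # g+rx on dirs, g+r on files
        os.chmod(p, stat.S_IMODE(os.stat(p).st_mode) | extra)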


Ah, the wonders of the 022 umask. Personally I would always recommend making files unreadable to other users; if not all files, then at least significant directories, like everything under /home, etc.

It may require more fiddling with group memberships, but it's well worth it.


I don't know about everyone else, but at this point I'm no longer doing a proper installation of nginx for personal stuff. I always just spin up a docker image... and I'm not checking if it runs as root or not, really.

Probably really screwing things up. Ouch.


Typical umask is 022 so most things are readable by nginx workers but not writable, they don’t need to be explicitly assigned (e.g. to www-data). If your application generates sensitive data of course you should probably use a 077 umask.
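
The difference is easy to demonstrate (a small POSIX sketch; filenames are arbitrary):

    import os, stat

    def create_and_show(name: str) -> None:
        # Request 0o666; the process umask strips bits at creation time.
        fd = os.open(name, os.O_CREAT | os.O_WRONLY, 0o666)
        os.close(fd)
        print(name, oct(stat.S_IMODE(os.stat(name).st_mode)))

    os.umask(0o022)
    create_and_show("demo022")  # 0o644: group/other can read
    os.umask(0o077)
    create_and_show("demo077")  # 0o600: owner only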


You could make an argument that Bitwarden vaults constitute sensitive information.


You are correct.

Unfortunately, nginx (and other web servers) generally need to run as root in normal web applications because they are listening on port 80 or 443. Ports below 1024 can be opened only by root.

A more detailed explanation can be found here: https://unix.stackexchange.com/questions/134301/why-does-ngi...
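
Easy to demonstrate (a sketch; run as a normal user on a default Linux configuration):

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(("0.0.0.0", 80))  # < 1024: needs root or CAP_NET_BIND_SERVICE
    except PermissionError as e:
        print("bind failed:", e)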


> Ports below 1024 can be opened only by root.

Or processes running with the CAP_NET_BIND_SERVICE capability! [1]

Capabilities are a Linux kernel feature. Granting CAP_NET_BIND_SERVICE to nginx means you do not need to start it with full root privileges; this capability alone lets it open ports below 1024.

Using systemd, you can use this feature like this:

    [Service]
    ExecStart=/usr/bin/nginx -c /etc/my_nginx.conf
    AmbientCapabilities=CAP_NET_BIND_SERVICE
    CapabilityBoundingSet=CAP_NET_BIND_SERVICE
    User=nginx
    Group=nginx
(You probably also want to enable a ton of other sandboxing options, see `systemd-analyze security` for tips)

[1]: https://man7.org/linux/man-pages/man7/capabilities.7.html


Nginx is started as root, but it does not run as root: it changes its user after opening log files and sockets (unless you use a lazy Docker container and just run everything as root inside it).
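
The pattern looks roughly like this; an illustrative Python sketch of bind-then-drop, not nginx's actual code, and the www-data user is assumed to exist:

    import os, pwd, socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("0.0.0.0", 80))       # needs root (or CAP_NET_BIND_SERVICE)
    sock.listen()

    user = pwd.getpwnam("www-data")  # unprivileged worker identity
    os.setgid(user.pw_gid)           # drop group first, while still root
    os.setuid(user.pw_uid)           # then drop user; no way back
    # ... accept() and serve requests as www-data ...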


Even in the official Docker image, an nginx user is created (latest, layer 6):

/bin/sh -c set -x && groupadd --system --gid 101 nginx && useradd --system --gid nginx --no-create-home --home /nonexistent --comment "nginx user" --shell /bin/false --uid 101 nginx .....

[1] https://hub.docker.com/layers/library/nginx/latest/images/sh...


Nginx workers shouldn’t run as root and certainly don’t on any distro I know. Typically you have a www-data user/group or equivalent. Dropping privilege is very basic.

