
In at least some of those cases, I would expect curl to exit with an error and the pipeline to abort.



All bash sees is bytes coming in on stdin, and eventually an EOF. It neither knows nor cares what caused the EOF.
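
A quick way to watch this happen; the echo/sleep producer here is just a stand-in for curl:

    # bash runs each complete command as soon as it arrives on stdin.
    # "first chunk ran" prints immediately, two seconds before the rest;
    # if the stream were cut off during the sleep, the first command
    # would already have executed.
    { echo 'echo first chunk ran'
      sleep 2
      echo 'echo second chunk ran'; } | bash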


Sure, but if the server response is not pipelined (as is probably often the case), then bash should never see anything.


HTTP pipelining is about reusing a TCP connection for multiple requests. It doesn't influence when curl outputs data and wouldn't apply here anyway. I don't think there's any mode which would cause curl to buffer the entire response before writing any of it.


Yes, I'm aware of how HTTP pipelining works. It was a poor choice of terminology. My point is that by default curl does buffer some of the response. And if the connection was terminated before the first buffer was output, then I would expect this to result in an error which would abort the shell pipeline.


Yes, in some cases curl won't produce any output, like if the web server is down or the connection fails before anything is returned. And yes, it would also happen if curl buffers some of the response and then dies. I don't really see why that's interesting.


The default buffer size is typically the page size, usually 4096 bytes. I would expect a large number of these scripts to be under 4096 bytes, meaning curl would output nothing before producing an error, and the partial script would never be evaluated.


That's the default buffer size for pipes, which won't matter here. When curl terminates, whatever's buffered in the pipe will be flushed. The only thing that could prevent downloaded data from being received by the shell would be internal buffering in curl, if it does any.
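
You can even kill the writer outright, and data already sitting in the pipe is still delivered. A sketch (assumes the outer shell is bash, for $BASHPID):

    # The left side writes a line and then dies with SIGKILL, but the
    # reading bash still receives and executes what was in the pipe.
    { echo 'echo still delivered'; kill -9 $BASHPID; } | bash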


Good point. curl doesn't do any internal buffering. I was thinking that the pipeline should be aborted if curl exits with a non-zero status, but of course this is not the case.
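
Easy to confirm; even pipefail only changes the reported exit status after the fact:

    false | bash    # bash runs, sees an immediate EOF, and exits 0
    echo $?         # prints 0: the producer's failure is invisible
    set -o pipefail
    false | bash    # bash still runs exactly as before
    echo $?         # prints 1, but only after the fact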


Yeah, it would be nice if there were a way for a part of the pipeline to signal that something bad happened and everything should stop. Ideally, some sort of transaction system so the script is guaranteed to run fully or not at all. But instead we have this crazy thing.
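
Short of a real transaction system, the usual workaround is to separate the download from the execution so that curl's exit status can gate the run. A sketch (the URL is a placeholder):

    # Download completely first; execute only if curl reported success.
    # -f makes curl fail on HTTP errors instead of passing the body through.
    tmp=$(mktemp) &&
      curl -fsSL https://example.com/install.sh -o "$tmp" &&
      bash "$tmp"
    rm -f "$tmp"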


Some scripts also guard against this and are written so that no code is executed until the file has arrived in full. Of course, that's a minority.


Including a checksum and validating it before executing would definitely be ideal.
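
A sketch of that, assuming the publisher provides a SHA-256 hash out of band (the URL and hash are placeholders):

    expected='<published sha256 hash>'   # obtained from a trusted source
    curl -fsSL https://example.com/install.sh -o install.sh
    # sha256sum -c expects "HASH  FILENAME" (two spaces) on stdin
    echo "$expected  install.sh" | sha256sum -c - && bash install.sh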


To elaborate, this is quite easy: you wrap the entire contents of the script into a function definition, then call the function as the last line.
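
A minimal sketch of that layout:

    #!/bin/bash
    # While the body streams in, bash only parses the function definition;
    # nothing executes. If the download is cut off mid-file, the closing
    # brace and the final call never arrive, so bash reports a syntax
    # error and runs nothing.
    main() {
        echo "step 1"
        echo "step 2"
        # ... rest of the installer ...
    }
    main "$@"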



