The only reason I remember preferring FTP over HTTP in the 1990s was that FTP usually meant I could resume large downloads if my connection dropped mid-download. That was a big deal when it took several hours to download something. That benefit largely disappeared for me as broadband got faster and connections became more reliable.
Psst... Someone tell cPanel Inc and all the millions of horrid cPanel shared hosting providers who still promote this nonsense as the default way to manage files. It is somewhat sad when you meet a fresh young developer for whom deployment = FileZilla.
But you could always get a normal installer even then, and I think (don't quote me on that, I never really used SF) Sourceforge stopped doing that after their acquisition?
Sourceforge stopped it; FileZilla did not.
And Filezilla explicitly defended the sourceforge practice when it was still in place. They're a very shady project.
But the default is ftp. If you don't explicitly choose sftp and an attacker blocks the ssh port and spoofs an ftp server, FileZilla will happily send the password for the attacker to intercept.
Public FTP servers were where I downloaded most of the software for my computers, back in the 90s. There's nothing really like it anymore - you can't have anonymous sftp.
But perhaps we don't care anymore. The web is gradually consuming all that came before it.
> But perhaps we don't care anymore. The web is gradually consuming all that came before it.
It is partly the web/HTTP eating everything but also that FTP is legitimately a bad protocol and is less tolerant of horrid shit going on in layers below it (like NAT) than HTTP is.
I think my favorite "feature" of the FTP protocol has to be ASCII mangling, wherein the FTP server tries to mess around with line endings and text encoding mid-transfer. It's so bad that vsftpd, one of the better FTP servers for Linux systems, pretends to support it but silently refuses to perform the translation.
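For anyone who hasn't been bitten by it, a rough sketch of a classic ftp(1) session (file names are invented, and the comments are annotations, not part of the protocol):

    ftp> ascii             # TYPE A: the server may rewrite line endings in transit
    ftp> get CHANGELOG.txt
    ftp> binary            # TYPE I: bytes arrive exactly as stored
    ftp> get release.tar.gz

Forget to switch back to binary before grabbing an archive and the "helpful" translation quietly corrupts it.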
I wrote a custom FTP server once (it was database-backed instead of filesystem-backed - e.g. you could do searches by creating a directory in the Search directory) and I added in insulting error messages if a client tried to exercise one of the more antiquated features of the spec (e.g. EBCDIC mode)
>There's nothing really like it anymore - you can't have anonymous sftp.
Strictly speaking there's nothing stopping someone from writing an anonymous sftp server that lets anyone log in as a 'guest' user or similar - it's just that nobody has (as far as I'm aware).
"Unauthenticated SSH" is basically what the git:// protocol is. I wonder if you could use git-daemon(1) to serve things other than git repos? Or you could just convert whatever you want to serve into a git repo, I guess.
You could, but since git isn't designed for handling large binary files the performance will be poor. That's why there are large file support plugins like (the aptly named) Git LFS[0] and git-annex[1].
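For reference, the LFS workflow is roughly this (the file pattern and name are just examples):

    git lfs install                   # set up the smudge/clean filters once per machine
    git lfs track "*.iso"             # matching files become small pointer files in git
    git add .gitattributes debian.iso
    git commit -m "ship the ISO via LFS"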
IPFS requires a stateful thick client with a bunch of index data, no? Would it be efficient to, say, build a Debian installer CD that goes out and downloads packages from an IPFS mirror? Because that's the kind of use-case anonymous FTP is for.
Many many years ago I was on the team that managed the compute cluster for the CMS detector at the LHC (Fermilab Tier-1).
When we would perform a rolling reinstall of the entire worker cluster (~5500 1U pizza box servers), we would use a custom installer that would utilize Bittorrent to retrieve the necessary RPMs (Scientific Linux) instead of HTTP; the more workers reinstalling at once, the faster each worker would reinstall (I hand wave away the complexities of job management for this discussion).
I'm not super familiar with IPFS (I've only played with it a bit to see if I could use it to back up the Internet Archive in a distributed manner), but I'm fairly confident based on my limited trials that yes, you could build a Debian installer CD that fetches the required packages from an IPFS mirror. No need to even have the file index locally; you simply need a known source of the file index to retrieve, and the ability to retrieve it securely.
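I haven't actually built such an installer, but the retrieval side would look roughly like this (the CID is a placeholder for whatever the mirror publishes):

    ipfs daemon &                            # join the network
    ipfs get QmYourMirrorRootCID -o mirror/  # fetch the published package tree by content hash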
You have to be really careful, though, because the default is to give users shell access. If you think you can limit that by forcing users to run some command, you'll run into trouble because the user can specify environment variables.
The user is also allowed to set up tunneling by default, which would let anonymous users use your network address.
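If you do go down that road, the lockdown looks roughly like this in sshd_config (the user name and chroot path are placeholders, and the chroot directory must be root-owned and not writable by others):

    Match User guest
        ForceCommand internal-sftp
        ChrootDirectory /srv/anonftp
        AllowTcpForwarding no
        PermitTunnel no
        X11Forwarding no
        PermitEmptyPasswords yes   # this is the "anonymous" part; think hard before enabling it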
> There's nothing really like it anymore - you can't have anonymous sftp
Nonsense. http is exactly like anonymous ftp and it does a much better job of it. Pretty much every anonymous ftp site started also serving their files via http decades ago -- which is why ftp is no longer needed.
Case in point: Debian makes all these files available over http. This isn't going away.
It really isn't as convenient if you have to download lots of files at one time though. FTP has mget. That's probably why FTP lives on for scientific data (NCBI, ENSEMBL, etc). Yes, you could use some tool like wget or curl to spider through a bunch of http links, but that's more work.
Not quite: FTP CLIENTS have mget. The FTP protocol has absolutely no awareness of mget. In fact, FTP is terrible at downloading more than one file at a time because it has no concept of pipelining or keepalive, both things that HTTP supports.
With a nice multi-protocol client like lftp, HTTP directory indexes work just like an FTP server:
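Presumably the command was something like this (the host and glob are my own placeholders; the original example didn't survive the formatting here):

    lftp -e 'mget *.tar.gz; exit' http://example.org/pub/releases/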
lftp has a ton of features: background jobs, tab completion, caching of directory contents, multiple connections, parallel fetching of a SINGLE file using multiple connections.
Yes, it looks like '/usr/bin/ftp' from 1970, but it's far far far more advanced than that.
More work, in the sense that it's more command line options to remember, I agree, but otherwise it's easier to integrate in scripts and much more flexible than mget.
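Concretely, something in this neighbourhood (URL, depth, and pattern are invented):

    # mirror one directory tree, keeping only the matching files
    wget -r -np -nH --cut-dirs=2 -A '*.fa.gz' https://example.org/pub/release-90/fasta/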
(I don't miss FTP for the sysadmin side of maintaining those servers.)
A public-facing httpd that uses the default apache2 directory index can, of course, be configured to allow anonymous access, with a log level that is neither more nor less detailed than an anonymous ftpd circa 1999.
> Public FTP servers were where I downloaded most of the software for my computers, back in the 90s. There's nothing really like it anymore
Modulo UI details, the common download-only public side of public FTP servers is a pretty similar experience to a pretty barebones file download web site. Anonymous file download web sites are, to put it mildly, not rare.
In the "transition" from FTP to HTTP, the level of abstraction in popular use has shifted out of the protocol and into resources (mime-types) [1], rel types [2], server logic [3][4], and client logic [5].
In the past, I've said that this extensible nature of HTTP+HTML is what made them so successful [6], but once specialized protocols began to falter, tunneling other semantics over HTTP became not just a nicety, but also a necessity (for a diverse set of reasons, like being blocked at a middlebox, being accessible from the browser where most people spend their time, etc).
Apache works better for this than FTP. I use it all the time: just configure it to serve indexes. Apache lets you configure the index to include CSS, fancy icons, custom sorting, and other stuff. All over HTTPS.
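A minimal sketch of that kind of config; the directory, stylesheet, and sort order are just examples:

    <Directory "/srv/pub">
        Options +Indexes
        IndexOptions FancyIndexing HTMLTable NameWidth=*
        IndexStyleSheet "/autoindex.css"
        IndexOrderDefault Descending Date
        Require all granted
    </Directory>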
> The web is gradually consuming all that came before it.
It's about cost, too. HTTP can be cached very efficiently, but FTP not at all. If I were the operator in charge and I had the choice between next-to-free caching by nearly anything, be it a Squid proxy, apt-cache or nexus, or no caching and having to maintain expensive servers, I'd choose HTTP.
> FTP is not nearly as trivial, plus it's a stupid, broken protocol that deserves to die.
I agree with you, but FTP has one very valid use case left: easy file sharing, especially for shared hosting. FTP clients are native to every popular OS, from Android to Windows (the only exceptions I know of are Win Mobile and iOS), and there's a lot of ecosystem built around FTP.
There are SCP and SFTP, but they don't really have any kind of widespread usage in the non-professional world.
> [Has] one very valid use case left: easy file sharing, especially for shared hosting.
Nope. Nope. Nope. Not easy. Not secure. Not user friendly. Not anything good. Have an iPhone and need to FTP something? Don't have installation rights on your Windows workstation and need to FTP something? Unpleasant if not confusing as all hell.
Dropbox or a Dropbox-like program is significantly easier to get people on board with.
Any "ecosystem" built around FTP is rotten to the core. Blow it up and get rid of it as soon as you can.
Some vendors insist on using FTP because reasons, but those reasons are always laziness. I can't be the only one that would prefer they use ssh/scp/rsync with actual keys so I can be certain the entity uploading a file is actually them and not some random dude who sniffed the plain-text password off the wire.
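That is, hand the vendor a public key and a destination, something like (host, key name, and paths are made up):

    rsync -av -e "ssh -i ~/.ssh/vendor_upload_key" \
        outgoing/report.csv upload@drop.example.com:/incoming/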
Windows has first-class SMB support (obviously), and Samba gives Linux and BSD support that, in modern desktop environments, is exactly as good. Mobile devices don't tend to have OS-level support for it, but there are very good libraries to enable individual apps to speak the protocols (look at VLC's mobile apps).
Even Apple has given up on their own file-sharing protocol (AFP) in favor of macOS machines just speaking SMB to one-another.
Yes, it's not workable over the public Internet. Neither is FTP, any more. If you've got a server somewhere far away, and want all your devices to put files on it, you're presumably versed with configuring servers, so go ahead and set up a WebDAV server on that box. Everything speaks that.
Uh, hell no. Never ever would I expose an SMB server to the Internet. SMB is really picky when the link has packet loss or latency issues, plus there are the countless SMB-based security issues.
> Even Apple has given up on their own file-sharing protocol (AFP) in favor of macOS machines just speaking SMB to one-another.
Is there a way to tune SMB to work better over low bandwidth / high latency links? The last time I tried it through a VPN it was working at less than 10kb/s
But we're talking about picking a thing to replace FTP for the use-cases people were already using FTP for. It doesn't matter if it doesn't do something FTP already doesn't do, because presumably you were already not relying on that thing getting done.
FTP is used to exchange files, a task that HTTP/HTTPS and/or email and/or IM and/or XMPP and/or Skype and/or Slack and/or a hundred other services can do just as well if not better.
...But it does work on iOS. It's just not built in. For example, Transmit for iOS supports FTP, and includes a document provider extension so you can directly access files on FTP servers from any app that uses the standard document picker.
The post I replied to implies that iOS is (somehow) "artificially limited" to be unable to access FTP - or at least I interpreted it that way.
FWIW, I'm not convinced that "web-based" is a better alternative for read/write file access, assuming you mean file manager webapps. No OS can integrate those into the native file picker, so you can't avoid the inefficiency of manually uploading files after changing them. WebDAV works pretty well though, if that counts...
It's just needlessly exclusionary. One of the greatest things about "the web" is it's pretty accessible by anyone with a browser that's at least semi-mostly-standards-compliant.
Have you looked at the spec? If you do, then you'll understand.
Imagine a file transfer protocol that defines the command to list files in a folder, but does not specify the format of the response other than that it should be human-readable.
See https://www.ietf.org/rfc/rfc959.txt (the LIST and NLST commands, for example). There's no way to get a standard list of files with sizes and modification dates. Yay!
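To illustrate, here are two invented but representative LIST responses for the same file; both servers are perfectly compliant, and the client is left guessing how to parse either:

    -rw-r--r--   1 ftp      ftp       1048576 Mar 03  2016 image.iso
    03-03-16  09:15AM              1048576 image.iso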
Oh, and the data connection is made from the server to the client. That works wonders with today's firewalls.
It was an ok spec when it was invented, but today it's very painful to operate.
> It's about cost, too. HTTP can be cached very efficiently, but FTP not at all.
It's ironic that you mention cost and caching, but a lot of services used for software distribution of one kind or another (e.g. GitHub releases) are following the "HTTPS everywhere" mantra, and HTTPS can't be cached anywhere other than at the client.
> and HTTPS can't be cached anywhere other than at the client.
No. Nexus, for example, can certainly cache apt, as can Squid if you provision it with a certificate that's trusted by the client.
Also, Cloudflare supports HTTPS caching if you supply them with the certificate, and if you pay them enough and host a special server that handles the initial crypto handshake, you don't even have to hand over your cert/privkey to them (which is required by law for banks, healthcare, etc).
To clarify; what I meant is that HTTPS can't be cached by third parties. If I want to run a local cache of anything served over HTTP it's as easy as spinning up a Squid instance. With resources served over HTTPS I can't do that.
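For plain HTTP, the "easy" really is this small; a rough sketch (the network range is a placeholder, and you shouldn't expose it beyond your LAN):

    # /etc/squid/squid.conf, more or less the minimum for a LAN cache
    http_port 3128
    cache_dir ufs /var/spool/squid 10000 16 256
    acl lan src 192.168.0.0/16
    http_access allow lan
    http_access deny all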
Well, there is WebDAV. At least Windows and OS X support it (Windows from Explorer, OS X from Finder); no idea about mainstream Linux/Android/iOS support, though. Also, no idea if WebDAV can deal with Unix or Windows permissions, but I did not have that problem when I set up a WebDAV server a year ago.
IIRC WebDAV uses GET for retrieval, so the read parts can be cached by an intermediate proxy and the write part be relayed to the server.
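For what it's worth, the Apache side of a basic authenticated share is pretty small too; a sketch, with paths, realm, and password file as placeholders:

    DavLockDB /var/lib/dav/lockdb
    <Directory "/srv/dav">
        Dav On
        AuthType Basic
        AuthName "files"
        AuthUserFile /etc/apache2/dav.passwd
        Require valid-user
    </Directory>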
As someone who once tried to write a WebDAV server, I cannot in good conscience recommend it. It's a bizarre extended-HTTP protocol that should not exist.
Out of curiosity: why did you try to write your own WebDAV server? Apache ships a pretty much works-OOTB implementation - the only thing I never managed to get working was to assign uploaded files the UID/GID of the user who authenticated via HTTP auth to an LDAP server.
More specifically, a CalDAV server, which is a bizarre extension of WebDAV that shouldn't exist. We wanted one to connect to our internal identity server. That project was abandoned.
Actually I thought about Gopher (I even have my own client - http://runtimeterror.com/tools/gopher/ - although it only does text) since it basically behaves as FTP++ with abstract names (sadly most modern gopherholes treat it as hypertext lite by abusing the information nodes).
Gopher generally avoids most of FTP's pitfalls and it is dead easy to implement.
Edit: thinking about it, I'm not sure I agree with the anonymous part, considering the swarm can be monitored. The access log is essentially publicly distributed.
Neither is FTP, really: the user's IP is still logged somewhere; you just use a common user (anonymous) along with everyone else. The modern name for such a feature would probably be something like "no registration required".
It goes to show how much the meaning of the word 'anonymous' changed over the last 30 years.
Not quite; the "currently accessing" list is public. While it is of course possible to make an access log from this with continuous monitoring, it's not possible to arbitrarily query historical data.
FTP was older and had more widespread client support in the 90s, so you were more likely to have at least a basic client preinstalled. That's obviously moot by now since just about everything ships with a web browser and/or something like curl.
FTP has a standard way to list directories. Using HTTP that way would require you to either have a well-known index file or parse HTML looking for links which don't return text/html responses.
The downside is that FTP still has issues with firewalls – I had to troubleshoot that earlier this month, actually – and is another service to maintain if you are already running an HTTP server.
In the case of either single-file downloads or something like a Linux package manager, the URLs are well known so directory listings are irrelevant. HTTP has a number of good options for CDNs and caching, so if you care about performance or reliability that's a turn-key service.
In the cases where directory listings were more valuable, people usually wanted a richer UI than just a file listing, too, and there are tons of options for that in the web world.
In the old days, we had decent clients for HTTP, but they were all manually operated GUIs, and the decent clients for FTP were all CLI and/or extremely automatable. Also, maybe 30 years ago, it was unusual to have a machine with a GUI at all.
Also, there was a day before dependency-resolving, automatically-downloading, GPG-verifying distribution clients. I was there... Say you needed to roll back (or security-upgrade) some software: you'd quite possibly go to an FTP site that was trusted, download some tar.gz or tar.Z or shar archive that was trusted, make, make install, plus or minus some configuration of course. You couldn't do that via HTTP in 1990 while telnet'd into a server; the HTTP infrastructure and commands and servers hadn't been invented yet.
It's also important to point out that at least some of us were fooling around with Unix and FTP (and FTP to non-Unix OSes; I vaguely remember some TOPS20 server in the late 80s / early 90s... simtel20?) before HTTP ever existed. So naturally there were a lot of tools and processes and experience for getting things done with FTP when HTTP arrived.
Decent FTP client CLIs were extremely advanced. Not exactly "wget someurl" and hope for the best, LOL. Automatic login with differing accounts per site, text-mode graphs and stats as downloads progress, tab completion, local help command functionality, multi-connection support... They mirrored and often led the developments in modem/BBS download functionality (think Zmodem on Telix, not Kermit).
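The per-site automatic login, for instance, was just a few lines in ~/.netrc (hostname and address made up):

    machine ftp.example.org
      login anonymous
      password me@example.org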
It's sort of like those "how could you have thought using telnet was a good idea compared to ssh?" questions. Well, I had 15 years of Unix experience before OpenSSH was deployable, so we had a lot of telnet experience...
It's hardly sudden! The last time I encountered anyone using FTP by choice was around the turn of the millennium. Since then, it's always been supported reluctantly and under protest, because some integration partner critically important to a profit center couldn't or wouldn't support anything less terrible.
All the reasons listed in the article. It's also insecure. It's Hell with firewalls because of the way it negotiates what ports to use, instead of using a standard port for data and control.
> but if you have an anonymous FTP server accepting glob patterns, there are two more fundamental questions to ask: Do you really need to run an anonymous FTP server anymore?
In the old days, when we had Windows and wanted to install Netscape or download Linux, we used anonymous FTP servers. Now Windows has a web browser installed that we can use to download anything we want.
Anonymous FTP servers are outdated now, like Gopher was when web browsers replaced it.
ftpmail. Now that is outdated. I vaguely remember around 1990 you'd send an email to some peculiar address and in more than an hour but less than a day, the service would execute the commands you gave it and return ... perhaps uuencoded files via email? Some encoding? Text just appeared as text.
I think I got new phrack issues that way. Or some other zine. I also obtained some software this way. Some of it was even FOSS/legal.
It was a way of working around various quota or security limitations. You'd get your file just as well as FTP, merely slowly. There were of course limitations on the ftp-mail service, you couldn't fetch an entire OS distro this way.
The docs on that page do bring back some memories. Obviously the real ftpmail service didn't have an "open" line like that; it was more like "open anonymous@simtel20.something.something.something.mil".
That's what we did for a good while during the first Bush administration with our 2400 baud modems, LOL.
There's really nothing stopping FTP from working with CDNs and proxies. For public distribution, BitTorrent is far superior, though. I still think SFTP/FTP/FTPS is the best way to upload files; are there any better free alternatives?!
There are a fair number of businesses in old industries that "went digital" 10-20 years ago and still have the same tech running, including FTP. I work with a company that requires uploading files via FTP into a co-mingled directory (everyone uses the same password). The company is a major player in my industry, too.
I would be curious whether some government services are still running FTP, judging by the number of Y2K-era government websites I run into.
Yesterday I received a link to download some tutorial from China: http://pan.baidu.com/s/1sl993lV. I tried the download with Firefox, but the connection is so bad that there were many failures. I tried with curl, which also failed. A real nightmare. None of these problems would exist if it were a simple FTP link: FileZilla is perfectly optimized to work around shitty connections.
And today I learn that FTP is falling into oblivion. What a sad time.
Curl is perfectly capable of resuming a download over HTTP unless the server is doing something stupid. Blame Baidu if you like, but HTTP is not the problem here and there is no benefit to FTP.
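For the record, resuming with curl is just this (the URL is a placeholder; the Baidu link above is a landing page, not a direct file):

    # -C -  resumes from wherever the previous attempt stopped (needs Range support server-side)
    # -L    follows redirects, -O keeps the remote file name
    curl -C - -L -O https://example.com/big-tutorial.zip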