NFS has been in the kernel for ages and works roughly the same way: a kernel driver with a userland component.
NFS has the advantage of having been in the kernel longer, with most of the low-hanging-fruit security bugs fixed. ksmbd is brand new. Quite the expectation to have it be completely bug-free.
edit: wait a minute, this was fixed with kernel 5.15.61
That's at least 20 releases ago.
The criticism isn't that anyone expects bug free code, rather that introducing new remotely accessible attack surface to the kernel in 2022 when we know it's likely unsafe is silly. Building an SMB server in the kernel because "well, NFS was secure eventually" overlooks the fact that NFS shouldn't be in the kernel either.
Right, but that argument can be used to stop the inclusion of _any_ new large functionality in the kernel. Neither the Linux users nor the maintainers are currently interested in a feature-freeze of the kernel. If you want a micro-kernel, Linux was never the solution.
Yes, new features will have problems. ksmbd remains disabled in the build configs of most distros at this point (see the snippet below for how to check). Over time, more stability will encourage more consumers to enable it.
This is not new for the Linux community. They will deal with this one like they have with other new features in the past.
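If you want to check whether your own distro kernel has it on, the Kconfig symbol to grep for should be CONFIG_SMB_SERVER (assuming current trees still call it that); the config file path below varies by distro:

    # sketch: check whether the running kernel was built with ksmbd
    $ grep SMB_SERVER /boot/config-$(uname -r)
    # CONFIG_SMB_SERVER is not set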
> that argument can be used to stop the inclusion of any new large functionality in the kernel.
Sure, it could be, but there are shades of gray. Arguing against adding a full-fat SMB server to the kernel is not the same thing as suggesting that file systems should be 100% in user space. You and I both know that the threat posed by introducing a large amount of remotely reachable code is not at all the same as that posed by a new kernel filesystem.
The last few decades of arguing about micro vs monolithic have surely convinced everyone that there is a time and a place for both. Yes, embedding the server in the kernel gives us lower latency by eliding a context switch (a few hundred cycles) but this comes at a pretty high cost that we're going to be paying for years to come.
Call me overly dramatic, but ever since the NSO Group stuff went public I've been a lot more risk averse when it comes to introducing remote attack surface, because we know that these bugs are being used to kill people. Making kernel compromise harder means possibly saving someone's life. Do I think someone is going to die over a ksmbd bug? Probably not. Would I want to be the one that checked in a huge remotely accessible blob of code? No, so I think carefully about what code I put where to minimize the risk.
Look, I agree with you. Increased attack surface scares me too. I'm just saying that line of argument doesn't persuade the maintainers or the users. We need to find a better way to protect ourselves. And turning off build flags is one way to do that, one that the community has adopted already.
The other point I'll make is that other kernel features scare me far more than this SMB server. Think io_uring, eBPF, or similar systems. Their attack surfaces are far larger, and yet they have become mainstream. Unfortunately, the horse has already left the barn. We need to find better ways to secure our systems. Arguing for fewer features has been tried for decades, and hasn't helped. Not here in the kernel, not in the browser, not anywhere.
I wish the world was easier to secure, but it's not.
Right, but servers built into the kernel are the worst of all cases.
Not only do you have a service where (if not firewalled) anything can connect and try its luck throwing shit at it; it is often integrated with many other systems inside the kernel, which increases the effort needed to rewrite any of that. Protocol clients in the kernel have far fewer problems: for one, you only connect to a defined endpoint, so to even get started an attacker would need to MITM you, and the client is usually a smaller codebase than the server.
It is also usually the kind of software where you want to add new features relatively often, and "upgrade your kernel to use this new server feature" is not a thing people like very much.
Providing interfaces to make userspace implementations faster has a far better payoff: a generic "make disk access and shoving stuff between disk and network fast" facility helps any file-serving daemon, not just SMB (a point the original Samba proves, as with recent improvements it is currently faster than ksmbd).
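As a concrete sketch of what that userspace data path looks like: with sendfile(2), a plain userspace file server can ask the kernel to shovel bytes from the page cache straight to a socket, so the copy loop never passes through userspace buffers. (This is illustrative C; serve_file() and its arguments are made up, and error handling is abbreviated.)

    /* Sketch: serve one file over an already-connected TCP socket using
     * sendfile(2), so the kernel moves the data directly from the page
     * cache to the socket without copying it through userspace. */
    #include <sys/sendfile.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>

    static int serve_file(int sock, const char *path)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return -1;

        struct stat st;
        if (fstat(fd, &st) < 0) {
            close(fd);
            return -1;
        }

        off_t off = 0;
        while (off < st.st_size) {
            /* sendfile advances 'off' by the number of bytes it moved */
            ssize_t n = sendfile(sock, fd, &off, st.st_size - off);
            if (n <= 0)
                break; /* real code would handle EINTR/EAGAIN here */
        }

        close(fd);
        return off == st.st_size ? 0 : -1;
    }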
How do people running Ceph and other exotic filesystems deal with performance? And what performance is considered reasonable, in your opinion? It might not align with others'; most people don't push crazy amounts of data. I know IBM recently went from in-kernel NFS to Ganesha for their Spectrum Scale product.
"Crazy amounts of data" isn't the main concern, it's latency. It's the people storing giant amounts of data who generally don't worry about that so much.
Ceph isn't a filesystem; it's a service layer (self-described "storage platform") that runs on top of some other unspecified filesystem. Think git-annex or Hadoop, not ext4.
Anyway, the way Ceph does that is replication, just like those other solutions. There may be 4 nodes with filesystems that contain that data, and Ceph is the veneer that lets you not worry about the implementation detail of where it lives.
That's a valid observation. All of the old stuff has been battle-tested and reviewed many times. Newer stuff is bound to have bugs that have not yet been found. And even old stuff turns up surprises every now and then. For instance:
I found a buffer overflow in the OpenSolaris code a few hours ago that originated in a commit made in 2007. It predates that Linux bug by at least a year.
It is amazing how many old bugs have survived to the present day. :/
SMB1 was slow - very slow. Novell IPX/SPX was far faster.
SMB2 changed the protocol to include multiple operations in a single packet, but did not introduce encryption (and Microsoft ignored other SMB encryption schemes). It is a LOT faster.
SMB3 finally adds encryption, but only runs in Windows 8 and above.
NFS is a bit messy on the question of encryption, but is a much more open and free set of tools.
Just looked it up. It looks like the NFS server inside NetWare was twice as fast as SCO's on the same hardware.
I wonder if it would maintain a speed advantage today.
"NetWare dominated the network operating system (NOS) market from the mid-1980s through the mid- to late-1990s due to its extremely high performance relative to other NOS technologies. Most benchmarks during this period demonstrated a 5:1 to 10:1 performance advantage over products from Microsoft, Banyan, and others. One noteworthy benchmark pitted NetWare 3.x running NFS services over TCP/IP (not NetWare's native IPX protocol) against a dedicated Auspex NFS server and an SCO Unix server running NFS service. NetWare NFS outperformed both 'native' NFS systems and claimed a 2:1 performance advantage over SCO Unix NFS on the same hardware."
Novell NCP was faster in all contexts, as far as I know.
"SMB1 is an extremely chatty protocol, which is not such an issue on a local area network (LAN) with low latency. It becomes very slow on wide area networks (WAN) as the back and forth handshake of the protocol magnifies the inherent high latency of such a network. Later versions of the protocol reduced the high number of handshake exchanges."
It at least doesn't lock anything up that has a file open when the network goes down. NFS is a nightmare with that. NFS is more idiomatic on *nix but still a huge pain when dealing with matching file perms across systems.
> It at least doesn't lock anything up that has a file open when the network goes down.
I must admit I feel quite a bit of irrational fury when this happens (similarly, when DNS lookups hang). That some other computer is down should never prevent me from doing, closing, or killing anything on my computer. Make the system call return an error immediately! Remove the process from the process table! Do anything! I can power cycle the computer to get out of it, so clearly a hanging NFS server is not some kind of black hole in our universe from which no escape is possible.
> I must admit I feel quite a bit of irrational fury when this happens (similarly, when DNS lookups hang).
Neither of those reactions is in any way irrational. In fact, they're not only perfectly reasonable and understandable but felt by a great many of us here on HN.
This is not the fault of NFS. The same thing would happen if a local filesystem suddenly went missing. The kernel treats NFS mounts as just another filesystem. You can in fact mount shares as soft or interruptible if you want.
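For example, with hypothetical /etc/fstab entries like these ("fileserver:/export" and the mount point are placeholders): "soft" makes I/O fail with an error once the retries run out instead of hanging forever, while "hard" (the default) retries indefinitely:

    # placeholders; timeo is in tenths of a second, retrans is the retry count
    fileserver:/export  /mnt/data  nfs  soft,timeo=100,retrans=3  0 0
    fileserver:/export  /mnt/data  nfs  hard                      0 0

The usual caveat applies: soft mounts trade the hang for possible silent data loss in applications that don't check for I/O errors.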
> It at least doesn't lock anything up that has a file open when the network goes down. NFS is a nightmare with that.
Yeah, we've been bitten by this too, about once a year, even with our fairly reliable and redundant network. It's a PITA: your process just hangs and there's no way to even kill it except restarting the server.
This is too bad. The sweet spot was "hard,intr", at least when I was last using NFS on a daily basis (mid-1990s). Hard mounts make sense for programs, which will happily wait indefinitely while blocked in I/O. This worked well for things like doing a build over NFS, which would hang if the server crashed and then pick up right where it left off when the server rebooted.
Of course this is irritating if you're blocked waiting for something incidental, like your shell doing a search of PATH. In those cases you could just control-C and continue doing what you wanted to do (as long as it didn't actually need that NFS server).
However I can see that it would be difficult to implement interruptibility in various layers of the kernel.
I think the current implementation comes reasonably close to the old "intr" behavior.
AFAICT the problem with "intr" wasn't that it was impossible to implement in the kernel, but rather an application-correctness issue, as few applications are prepared to handle EINTR from any I/O syscall. However, with "nointr" the process would be blocked in uninterruptible sleep and would be impossible to kill.
However, if the process is about to be killed by the signal anyway, then not handling EINTR is irrelevant. Thus in 2.6.25 a new process state, TASK_KILLABLE, was introduced (https://lwn.net/Articles/288056/), which is a bit like TASK_UNINTERRUPTIBLE except that the task can be interrupted by a fatal signal, and the NFS client code was converted to use it in https://lkml.org/lkml/2007/12/6/329. So the end result is that the process can be killed with Ctrl-C (as long as it hasn't installed a non-default SIGINT handler), but doesn't need to handle EINTR for all I/O syscalls.
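To illustrate why "intr" was an application-correctness problem: with interruptible I/O, every blocking syscall in every program needs a retry loop like this sketch, and very few programs have one.

    /* Sketch: the EINTR retry loop that interruptible NFS I/O would
     * require around every blocking read(). Code lacking this loop
     * misreports an interrupted read as an I/O failure, which is why
     * TASK_KILLABLE (kill the process, don't return EINTR) was the
     * more practical fix. */
    #include <errno.h>
    #include <unistd.h>

    ssize_t read_retry(int fd, void *buf, size_t len)
    {
        ssize_t n;
        do {
            n = read(fd, buf, len);
        } while (n < 0 && errno == EINTR);
        return n;
    }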
Depends on the use case. SMB auth is more robust and easier to integrate with AD, but NFS is simpler and typically faster for file access and transfer speeds. SMB is good for shares used by end users, NFS is good for shares used by services.
I've found NFSv4 to be more stable and performant than SMB when using it between Linux machines. Seems to handle multiple concurrent clients well, too.
And for the people saying "But you can't get security updates":
I would rather have a dynamically linked binary that bundles all its dependencies, where you can upgrade the dependencies with a tool that operates on the binary, than the madness of shared libraries in system paths. (Well, you kind of get that with AppImage and similar.)
Those countries generally still offer Pfizer for that age group, which has lower myocarditis risk than Moderna (presumably mostly because of the dose). I think a lot of European countries also still use AstraZeneca; not sure about J&J, but they are similar platforms.
I will note that this ffmpeg work by Google is still ongoing. I remember an ffmpeg (then libav) developer mentioning that it will never end, since this is C and ffmpeg's job is parsing data.
> several people are starting to write parsers in rust
ffmpeg is what it is because it's way more than "parsers". It's all the parsers, and the filters, and everything you need to feed it any audio/video file you have, and get any audio/video file you want in return. That's why out of all the tools out there that's the one everyone and everything uses, from VLC to Chrome to your phone to your TV to your car to whatever.
This is like saying "sqlite is still the king of embedded sql but several people are starting to write databases in rust".
PS: another reason for ffmpeg's massive success is simply that Michael is an insanely productive and dedicated developer, as seen in this story and also in the "libav splits off because they don't like him, but then the project fails because they don't have him" saga (yes, this is a vast oversimplification).