Comments here are cute.

NFS has been in the kernel for ages and works roughly the same way: a kernel driver with a userland component.

NFS has the advantage of having been in the kernel longer, with most low-hanging security bugs fixed. ksmbd is brand new. Quite the expectation to have it be completely bug-free.

edit: wait a minute, this was fixed with kernel 5.15.61

That's at least 20 releases ago.


The criticism isn't that anyone expects bug-free code, rather that introducing new remotely accessible attack surface to the kernel in 2022 when we know it's likely unsafe is silly. Building an SMB server in the kernel because "well, NFS was secure eventually" overlooks the fact that NFS shouldn't be in the kernel either.


Right, but that argument can be used to stop the inclusion of _any_ new large functionality in the kernel. Neither the Linux users nor the maintainers are currently interested in a feature-freeze of the kernel. If you want a micro-kernel, Linux was never the solution.

Yes, new features will have problems. ksmbd remains disabled in the build configs of most distros at this point. Over time, more stability will encourage more consumers to enable it, too.

This is not new for the Linux community. They will deal with this one like they have with other new features in the past.


> that argument can be used to stop the inclusion of any new large functionality in the kernel.

Sure, it could be, but there are shades of gray. Arguing against adding a full-fat SMB server to the kernel is not the same thing as suggesting that file systems should be 100% in user space. You and I both know that the threat posed by introducing a large amount of remotely reachable code is not at all the same as that posed by a new kernel filesystem.

The last few decades of arguing about micro vs monolithic have surely convinced everyone that there is a time and a place for both. Yes, embedding the server in the kernel gives us lower latency by eliding a context switch (a few hundred cycles) but this comes at a pretty high cost that we're going to be paying for years to come.

Call me overly dramatic, but ever since the NSO group stuff went public I've been a lot more risk averse when it comes to introducing remote attack surface because we know that these bugs are being used to kill people. Making kernel compromise harder means possibly saving someone's life. Do I think someone is going to die over a ksmbd bug? Probably not. Would I want to be the one that checked in a huge remotely accessible blob of code? No, so I think carefully about what code I put where to minimize the risk.


Look, I agree with you. Increased attack surface scares me too. I'm just saying that line of argument doesn't persuade the maintainers or the users. We need to find a better way to protect ourselves. And turning off build flags is one way to do that, one that the community has adopted already.

The other point I'll make is that other kernel features scare me far more than this SMB server. Think io_uring, eBPF, or similar systems. Their attack surfaces are far larger, and yet they have become mainstream. Unfortunately, the horse has already left the barn. We need to find better ways to secure our systems. Arguing for fewer features has been tried for decades, and hasn't helped. Not here in the kernel, not in the browser, not anywhere.

I wish the world was easier to secure, but it's not.


Right, but servers built into the kernel are the worst of all cases.

Not only do you have a service where (if not firewalled) anything can connect and try its luck sending shit at it, it is often integrated with many other systems inside the kernel, which increases the effort to rewrite any of them. Protocol clients in the kernel have far fewer problems: for one, you only connect to a defined endpoint, so an attacker would need to MITM you just to get started, and the client is usually a smaller codebase than the server.

It is also usually the kind of software where you want to add new features relatively often, and "upgrade your kernel to use this new server feature" is not a thing people like very much.

Providing interfaces to make userspace implementations faster has a far better payoff: a generic "make disk access and shoveling data between disk and network fast" facility helps any file-serving daemon, not just SMB (a point the original Samba proves, as with its recent improvements it is currently faster than ksmbd).
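
(To make that concrete: here's a minimal C sketch, not actual Samba code, of the kind of generic fast path meant here. sendfile(2) moves file data to a socket entirely inside the kernel, so any userspace file server gets the zero-copy benefit without itself living in the kernel. The helper and its descriptors are hypothetical.)

    #include <sys/sendfile.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Hypothetical helper: stream one open file to one connected socket.
     * The kernel shovels the data disk -> socket directly; userspace
     * never touches the bytes. */
    static int serve_file(int sockfd, int filefd)
    {
        struct stat st;
        off_t off = 0;

        if (fstat(filefd, &st) < 0)
            return -1;

        while (off < st.st_size) {
            ssize_t n = sendfile(sockfd, filefd, &off, st.st_size - off);
            if (n <= 0)
                return -1; /* real code would retry on EINTR/EAGAIN */
        }
        return 0;
    }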


> rather that introducing new remotely accessible attack surface to the kernel in 2022 when we know it's likely unsafe is silly.

This is the worst possible take on this.

> Building an SMB server in the kernel because "well, NFS was secure eventually" overlooks the fact that NFS shouldn't be in the kernel either.

The way Linux works, NFS unfortunately has to be in the kernel to achieve reasonable performance.


How do people running Ceph and other exotic filesystems deal with performance? What performance do you consider reasonable? It might not align with others'; most people don't push crazy amounts of data. I know IBM recently went from in-kernel NFS to Ganesha for their Spectrum Scale product.


Ceph/CephFS has kernel clients (and FUSE ones too), but no in-kernel server; the server is userspace.

It's easier to limit the client's attack surface because, just to start attacking the client, you'd need to MITM the client-server traffic.


"Crazy amounts of data" isn't the main concern, it's latency. It's the people storing giant amounts of data who generally don't worry about that so much.


We usually run those services with local nvme disks, they're not as portable but we get great performance.


Ceph isn't a filesystem, it's a service layer (self-described "storage platform") that runs on top of some other unspecified filesystem. Think git-annex or hadoop, not ext4.

Anyway the way Ceph does that is replication, just like those other solutions. There may be 4 nodes with filesystems that contain that data, and Ceph is the veneer that lets you not have to worry about the implementation-detail of where it lives.


Ceph actually does manage its own backing filesystem too these days, after the BlueStore migration a few years ago.


That's a valid observation. All of the old stuff has been battle tested and reviewed many times. Newer stuff is bound to have bugs that still have not been found. And even old stuff turns up surprises every now and then. For instance

https://nvd.nist.gov/vuln/detail/CVE-2021-27363

4300 affected kernel versions has to be a record of sorts.


I found a buffer overflow in the OpenSolaris code a few hours ago that originated in a commit made in 2007. It predates that Linux bug by at least a year.

It is amazing how many old bugs have survived to the present day. :/


>I found a buffer overflow in the OpenSolaris code a few hours ago that originated in a commit made in 2007.

That's because there is no OpenSolaris anymore....


AirPrint is also great.


Funnily enough, my old printer can already do AirPrint. I think it came in an early firmware update.


Isn't SMB better than NFS?


NFS is free, modular, and feature-rich.

SMB1 was slow - very slow. Novell IPX/SPX was far faster.

SMB2 changed the protocol to include multiple operations in a single packet, but did not introduce encryption (and Microsoft ignored other SMB encryption schemes). It is a LOT faster.

SMB3 finally adds encryption, but only runs in Windows 8 and above.

NFS is a bit messy on the question of encryption, but is a much more open and free set of tools.


In the SMB1 section, are you trying to say that SMB1 was faster over IPX/SPX, or did you mean to say that Novell NCP was faster than SMB1?


Just looked it up. It looks like the NFS server inside NetWare was twice as fast as SCO's on the same hardware.

I wonder if it would maintain a speed advantage today.

"NetWare dominated the network operating system (NOS) market from the mid-1980s through the mid- to late-1990s due to its extremely high performance relative to other NOS technologies. Most benchmarks during this period demonstrated a 5:1 to 10:1 performance advantage over products from Microsoft, Banyan, and others. One noteworthy benchmark pitted NetWare 3.x running NFS services over TCP/IP (not NetWare's native IPX protocol) against a dedicated Auspex NFS server and an SCO Unix server running NFS service. NetWare NFS outperformed both 'native' NFS systems and claimed a 2:1 performance advantage over SCO Unix NFS on the same hardware."

https://en.wikipedia.org/wiki/NetWare#Performance


Novell NCP was faster in all contexts, as far as I know.

"SMB1 is an extremely chatty protocol, which is not such an issue on a local area network (LAN) with low latency. It becomes very slow on wide area networks (WAN) as the back and forth handshake of the protocol magnifies the inherent high latency of such a network. Later versions of the protocol reduced the high number of handshake exchanges."

https://en.m.wikipedia.org/wiki/Server_Message_Block


It at least doesn't lock anything up that has a file open when the network goes down. NFS is a nightmare with that. NFS is more idiomatic on *nix but still a huge pain when dealing with matching file perms across systems.


> It at least doesn't lock anything up that has a file open when the network goes down.

I must admit I feel quite a bit of irrational fury when this happens (similarly, when DNS lookups hang). That some other computer is down should never prevent me from doing, closing, or killing anything on my computer. Make the system call return an error immediately! Remove the process from the process table! Do anything! I can power cycle the computer to get out of it, so clearly a hanging NFS server is not some kind of black hole in our universe from which no escape is possible.


> I must admit I feel quite a bit of irrational fury when this happens (similarly, when DNS lookups hang).

Neither of those reactions is in any way irrational. In fact, they're not only perfectly reasonable and understandable but felt by a great many of us here on HN.


This is not the fault of NFS. The same thing would happen if a local filesystem suddenly went missing. The kernel treats NFS mounts as just another filesystem. You can in fact mount shares as soft or interruptible if you want.

https://kb.netapp.com/Advice_and_Troubleshooting/Data_Storag...
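
(For illustration, a minimal C sketch of a soft mount via mount(2); the address, paths, and option values are placeholders, not a recommendation. "soft" makes a stalled server eventually return an error to the caller instead of blocking forever, and addr= is needed because the kernel does no hostname resolution itself.)

    #include <sys/mount.h>

    /* Hypothetical example, roughly equivalent to:
     *   mount -t nfs -o soft,timeo=100,retrans=2,vers=4.2 \
     *         server:/export /mnt/data */
    int mount_soft(void)
    {
        return mount("198.51.100.7:/export", "/mnt/data", "nfs", 0,
                     "soft,timeo=100,retrans=2,vers=4.2,addr=198.51.100.7");
    }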


Soft mount can lead to data inconsistency, so it's not always a good choice.


> It at least doesn't lock anything up that has a file open when the network goes down. NFS is a nightmare with that.

Yeah, we've been bitten by this too, around once a year, even with our fairly reliable and redundant network. It's a PITA: your processes just hang and there's no way to even kill them except restarting the server.


> It's a PITA: your processes just hang and there's no way to even kill them except restarting the server.

If you can bring the missing server back online, the NFS mount should recover.


This sounds like a Linux client bug (failure to properly implement the “intr” mount option), not the fault of NFS itself.


It’s a failure to use the intr mount option. I’ve never had a problem using soft mounts either, which make the described problem nonexistent.


intr/nointr are no-ops in Linux. From the nfs(5) manpage (https://www.man7.org/linux/man-pages/man5/nfs.5.html ):

> intr / nointr This option is provided for backward compatibility. It is ignored after kernel 2.6.25.

(IIRC when that change went in there were also some related changes to more reliably make processes blocked on a hung mount SIGKILL'able)


This is too bad. The sweet spot was "hard,intr", at least when I was last using NFS on a daily basis (mid-1990s). Hard mounts make sense for programs, which will happily wait indefinitely while blocked on I/O. This worked well for things like doing a build over NFS, which would hang if the server crashed and then pick up right where it left off when the server rebooted.

Of course this is irritating if you're blocked waiting for something incidental, like your shell doing a search of PATH. In those cases you could just control-C and continue doing what you wanted to do (as long as it didn't actually need that NFS server).

However I can see that it would be difficult to implement interruptibility in various layers of the kernel.


I think the current implementation comes reasonably close to the old "intr" behavior.

AFAICT the problem with "intr" wasn't that it was impossible to implement in the kernel, but rather an application correctness issue, as few applications are prepared to handle EINTR from any I/O syscall. With "nointr", on the other hand, the process would be blocked in uninterruptible sleep and would be impossible to kill.

However, if the process is about to be killed by the signal, then not handling EINTR is irrelevant. Thus in 2.6.25 a new process state, TASK_KILLABLE, was introduced (https://lwn.net/Articles/288056/ ), which is a bit like TASK_UNINTERRUPTIBLE except that the task can be interrupted by a fatal signal, and the NFS client code was converted to use it in https://lkml.org/lkml/2007/12/6/329 . So the end result is that the process can be killed with Ctrl-C (as long as it hasn't installed a non-default SIGINT handler), but doesn't need to handle EINTR for all I/O syscalls.
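
(To make the EINTR point concrete, a minimal userspace sketch; the helper name is made up. Under "intr", any signal could pop a blocked NFS read out with EINTR, so every blocking syscall in every application needed a retry loop like this, and few had one:)

    #include <errno.h>
    #include <unistd.h>

    /* Hypothetical helper: the retry loop correct EINTR handling
     * demands. Code without it would misreport an interrupted NFS
     * read as a real I/O error. TASK_KILLABLE sidesteps this: only
     * a fatal signal interrupts the sleep, and the process then dies
     * before it could mishandle the EINTR. */
    ssize_t read_retry(int fd, void *buf, size_t len)
    {
        ssize_t n;
        do {
            n = read(fd, buf, len);
        } while (n < 0 && errno == EINTR);
        return n;
    }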


Depends on the use case. SMB auth is more robust and easier to integrate with AD, but NFS is simpler and typically faster for file access and transfer speeds. SMB is good for shares used by end users, NFS is good for shares used by services.


I've found NFSv4 to be more stable and performant than SMB when using it between Linux machines. Seems to handle multiple concurrent clients well, too.


NFS works great until you lose the network and your client locks up.


Mount your NFS shares with the `intr` and `vers=4.2` options.


Are you really talking about NFSv4?


The responses vary, but what did you have in mind when you asked?


Not my experience. With this phone, even doing basic stuff is painful. I still haven't figured out a way to listen to music.


Or avoid all of this with static libraries :).


This is the only sane way...

And for the people saying "But you can't get security updates":

I would rather have a dynamically-linked binary that bundles all of its dependencies, where you can upgrade those dependencies with a tool operating on the binary, than the madness of shared libraries in system paths. (Well, you kinda get that with AppImage and similar.)


I assume this includes the Johnson and Johnson vaccine.

Moderna is banned in certain European countries for people under 30 because of myocarditis risk.


Those countries generally still offer Pfizer for that age group, which has a lower myocarditis risk than Moderna (presumably mostly because of the lower dose). I think a lot of European countries also still use AstraZeneca; not sure about J&J, but they are similar platforms.


Vulkan has compute shaders.


I will note that this ffmpeg work by Google is still ongoing. I remember an ffmpeg (then libav) developer mentioning that this will never end, since this is C and ffmpeg is parsing untrusted data.

edit: I found the video: https://youtu.be/ydqNot4csmE?t=637


Probably.

ffmpeg is still the king of multimedia, although several people are starting to write parsers in Rust.


> several people are starting to write parsers in Rust

ffmpeg is what it is because it's way more than "parsers". It's all the parsers, and the filters, and everything you need to feed it any audio/video file you have, and get any audio/video file you want in return. That's why out of all the tools out there that's the one everyone and everything uses, from VLC to Chrome to your phone to your TV to your car to whatever.

This is like saying "sqlite is still the king of embedded sql but several people are starting to write databases in rust".

PS: another reason for ffmpeg's massive success is simply that Michael is an insanely productive and dedicated developer, as seen in this story and also in the "libav splits because they don't like him, but then the project fails because they don't have him" saga (yes, this is a vast oversimplification).


I just saw an HP prebuilt with Ryzen and dual channel memory. Quite impressive for a prebuilt.

Time to throw Alienware in the dumpster.

