> Add a borrow checker to C++ and put rust to bed once and for all
Ah yes, C++ is just one safety feature away from replacing Rust, surely, any moment now. The bizarre world C++ fanboys live in.
Every single person who has been writing C++ for a while and isn't a victim of Stockholm syndrome will be happy when C++ is put to bed once and for all. It's a horrible language only genuinely enjoyed by bad programmers.
C++ surpassed C performance decades ago. While C still has some lingering cachet from its history of being “fast”, most software engineers have not worked at a time when that was actually true. C has never been that amenable to scalable optimization, mostly due to its very limited abstractions and compile-time code generation.
Doesn’t it say something if Rust programmers routinely feel more comfortable making aggressive optimizations and have more time to do so? We maintain code for longer than the time it takes to write the first version, and not having to pay as much ongoing overhead is worth something.
How can it not? Experts in C taking longer to make a slower and less safe implementation than experts in Rust? It's not conclusive but it most certainly says something about the language.
"Barely" or not is completely irrelevant. The fact is that it's measurably faster than the C implementation with the more common parameters. So the point that you're trying to make isn't clear tbh.
Also I'm pretty sure that the C implementation had more man hours put into it than the Rust one.
I think that would be really hard to measure. In particular, for this sort of very optimized code, we’d want to separate out the time spent designing the algorithms (which the Rust version benefits from as well). Actually I don’t think that is possible at all (how would we separate out the time spent coding experiments in C, then learning from them?).
Fortunately these “which language is best” SLOC measuring contests are just frivolous little things that only silly people take seriously.
Yes, I know that both Axum and Actix Web exist. However, these are web application frameworks designed for building web applications in Rust. On the other hand, my project is a standalone web server, similar to Apache httpd, NGINX, or Caddy.
While there are standalone web servers in Rust, such as Static Web Server and binserve, they primarily focus on serving static files rather than providing general-purpose web hosting capabilities. My goal is to create a more versatile web server that can handle a wider range of web hosting applications.
Pingora is not a standalone web server (like NGINX or Apache httpd); it's a Rust-based framework developed by Cloudflare for building network services, particularly HTTP proxies.
River (built on Pingora) is designed to be a reverse proxy. My web server on the other hand is designed to be a general-purpose web server (it also supports reverse proxying).
It sounds like this is a general purpose web server rather than a web framework like axum and actix-web. It's in the category of nginx, apache or caddy.
Are there actual, living people who think that donating to Ukraine will influence anything? These donations might as well be a rounding error compared to government funding, aid packages and whatnot.
Ukraine is simply too small a country to actually win a war against Russia; all these aid packages are doing is prolonging a war that Ukraine cannot win.
It's like saying "if only Germany had <insert Wunderwaffe> it would've won WWII". At some point they were going to run out of men anyway (they did). Sort of like how Ukraine will eventually run out of men if they continue.
Ukraine just recently reached parity in artillery with Russia, which is why they are pushing this ceasefire, and why Zelenskyy is walking away from it just like that.
The downside with kernel-level rootkits is you essentially have to compile it for many different kernel versions if you want it to work everywhere. I think I've read about some malware that literally contacted a server, sent the kernel version, and the server would compile the rootkit on demand.
And that's service right there - they tried to distribute the most compatible malicious kernel exploits for you, but sometimes you need a bespoke compilation for your system -- and these guys step up and make sure you get one!
That's customer service; I think a lot of (all of) our trillion-dollar overlords could learn a thing or two about providing reliable service.
In case you're interested, it's usually done via ptrace by attaching to every process and modifying the syscall arguments every time it makes one. There are performance issues associated with that (ptracing is rather expensive) and IMO it's more complex than LD_PRELOAD. Furthermore, ptrace may be disabled altogether on some installations, even for root. See kernel.yama.ptrace_scope.
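For the curious, the core of it is just a PTRACE_SYSCALL loop. Here's a toy sketch for x86-64 (mine, not from any actual rootkit) that only prints the syscall number where a real rootkit would rewrite registers and memory:

    /* Toy sketch of ptrace-based syscall interception on x86-64 Linux.
     * Usage: ./tracer <pid>. A rootkit-style hook would attach to every
     * process and rewrite arguments with PTRACE_SETREGS / PTRACE_POKEDATA
     * instead of just printing the syscall number. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/user.h>
    #include <sys/wait.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <pid>\n", argv[0]);
            return 1;
        }
        pid_t pid = (pid_t)atoi(argv[1]);

        if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) == -1) {
            perror("PTRACE_ATTACH");
            return 1;
        }
        waitpid(pid, NULL, 0);                /* wait for the attach-stop */

        for (;;) {
            /* Resume the tracee until the next syscall entry or exit stop. */
            if (ptrace(PTRACE_SYSCALL, pid, NULL, NULL) == -1)
                break;
            if (waitpid(pid, NULL, 0) == -1)
                break;

            struct user_regs_struct regs;
            if (ptrace(PTRACE_GETREGS, pid, NULL, &regs) == -1)
                break;

            /* orig_rax holds the syscall number, rdi/rsi/rdx/r10/r8/r9 the
             * arguments; this fires at both syscall entry and exit. */
            fprintf(stderr, "syscall %llu\n", (unsigned long long)regs.orig_rax);
        }

        ptrace(PTRACE_DETACH, pid, NULL, NULL);
        return 0;
    }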
I wouldn't say it's a practical approach. Works for a cool demo, sure, but as an adversary I would be hesitant to use this widely.
And you can detect when you are being ptrace()d, because a process cannot be ptrace()d by more than one tracer at a time. Unless they changed Linux again.
There are also timing issues that show up, and you can do any number of anti-debugging tricks which would reveal that the environment is being manipulated. Which is an instant red flag.
In general if the attacker is running at the same privilege level, you can probably evade it or at least detect it. I’m somewhat surprised there isn’t a basic forensics tool that automates all of these tests already.
“sus: [-h] [-v] [-V] [-o file] [-O format]
Sus tests for common indicators of compromise using generic tests for common methods of implementing userland rootkits. It will check for LD_PRELOAD, ptrace(), inotify() and verify that the system binaries match the upstream distribution hashsums. It can be used to dump the file system directly (warning, slow!) for comparison against the output of `find`. See EXAMPLES for more.”
Implementation is left as an exercise for the reader.
There's also the TracerPid field in /proc/PID/status, which is non-zero when a process is being ptraced.
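A minimal sketch of reading it (toy code; takes a PID as an argument, defaults to self):

    /* Print the TracerPid of a given PID (defaults to self).
     * A non-zero value means something is ptrace()ing that process. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        char path[64], line[256];
        snprintf(path, sizeof path, "/proc/%s/status", argc > 1 ? argv[1] : "self");

        FILE *f = fopen(path, "r");
        if (!f) {
            perror(path);
            return 1;
        }
        while (fgets(line, sizeof line, f)) {
            if (strncmp(line, "TracerPid:", 10) == 0) {
                printf("tracer of %s: %d\n", path, atoi(line + 10)); /* 0 = not traced */
                break;
            }
        }
        fclose(f);
        return 0;
    }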
> Sus tests for common indicators of compromise
There's a lot of stuff that Linux malware tends to do that almost no legitimate program does, this can be incorporated into the tool. Just off the top of my head, some botnet clients delete their executable after launch, in addition to being statically linked, which is an almost 100% guarantee that it's malware.
Check for deleted executables: ls -l /proc/*/task/*/exe 2>/dev/null | grep ' (deleted)$'
Although, if you detect static linking by just looking for a mapped libc in /proc/PID/maps, a malicious process can mmap libc for giggles, and theoretically libc can also be named in a way that doesn't contain "libc". A more reliable method is parsing the ELF program headers in /proc/PID/exe to determine whether an ELF interpreter is defined.
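A rough sketch of that check (my toy code; handles 64-bit little-endian ELF only, a real tool would also cover ELFCLASS32):

    /* Rough check for an ELF interpreter (PT_INTERP) in /proc/PID/exe.
     * Handles 64-bit ELF only; no PT_INTERP usually means a statically
     * linked binary. */
    #include <elf.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <pid>\n", argv[0]);
            return 1;
        }

        char path[64];
        snprintf(path, sizeof path, "/proc/%s/exe", argv[1]);

        FILE *f = fopen(path, "rb");
        if (!f) {
            perror(path);
            return 1;
        }

        Elf64_Ehdr ehdr;
        if (fread(&ehdr, sizeof ehdr, 1, f) != 1 ||
            memcmp(ehdr.e_ident, ELFMAG, SELFMAG) != 0 ||
            ehdr.e_ident[EI_CLASS] != ELFCLASS64) {
            fprintf(stderr, "%s: not a 64-bit ELF\n", path);
            fclose(f);
            return 1;
        }

        int has_interp = 0;
        for (int i = 0; i < ehdr.e_phnum && !has_interp; i++) {
            Elf64_Phdr phdr;
            if (fseek(f, ehdr.e_phoff + (long)i * ehdr.e_phentsize, SEEK_SET) != 0 ||
                fread(&phdr, sizeof phdr, 1, f) != 1)
                break;
            if (phdr.p_type == PT_INTERP)
                has_interp = 1;     /* dynamically linked */
        }
        fclose(f);

        printf("%s: %s\n", path,
               has_interp ? "dynamic (PT_INTERP present)" : "static (no PT_INTERP)");
        return 0;
    }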
You can also check for processes that trace themselves (in practice, the TracerPid in status pointing back at the same program, typically a forked copy of itself); this is a common anti-debug tactic.
You can also hide a process by mounting a tmpfs on top of its proc directory; tools like ps ignore empty proc directories due to the possibility that the process has terminated but its proc directory is still around. This is obviously easily detectable by checking /proc/mounts or just listing empty directories with numeric names in /proc.
Another heuristic can be checking /proc/PID/cmdline for two NUL bytes in a row: some malware tries to change its process name and arguments by modifying the argv array, but it is unable to change the size of cmdline, so multiple consecutive NUL bytes are a viable detection mechanism. Legitimate programs do this too, but it's rather uncommon.
You can obviously combine these heuristics to decide whether a process is malicious, as by themselves they aren't very reliable.
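For instance, a toy sweep combining a few of the cheap checks might look something like this (my own sketch; it just prints hits rather than scoring anything):

    /* Toy sweep of /proc combining a few of the heuristics above:
     *   - exe symlink pointing at a deleted file
     *   - consecutive NUL bytes in /proc/PID/cmdline (argv shrunk in place)
     *   - an empty /proc/PID directory (possibly hidden behind a tmpfs mount)
     * Each hit is just printed; a real tool would combine and weigh them. */
    #include <ctype.h>
    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static int is_numeric(const char *s)
    {
        if (!*s) return 0;
        for (; *s; s++)
            if (!isdigit((unsigned char)*s)) return 0;
        return 1;
    }

    int main(void)
    {
        DIR *proc = opendir("/proc");
        if (!proc) { perror("/proc"); return 1; }

        struct dirent *de;
        while ((de = readdir(proc)) != NULL) {
            if (!is_numeric(de->d_name))
                continue;

            char path[300], buf[4096];

            /* 1. deleted executable */
            snprintf(path, sizeof path, "/proc/%s/exe", de->d_name);
            ssize_t n = readlink(path, buf, sizeof buf - 1);
            if (n > 0) {
                buf[n] = '\0';
                if (n >= 10 && strcmp(buf + n - 10, " (deleted)") == 0)
                    printf("%s: exe is deleted (%s)\n", de->d_name, buf);
            }

            /* 2. consecutive NULs in cmdline */
            snprintf(path, sizeof path, "/proc/%s/cmdline", de->d_name);
            FILE *f = fopen(path, "rb");
            if (f) {
                size_t len = fread(buf, 1, sizeof buf, f);
                fclose(f);
                for (size_t i = 0; i + 1 < len; i++)
                    if (buf[i] == '\0' && buf[i + 1] == '\0') {
                        printf("%s: double NUL in cmdline\n", de->d_name);
                        break;
                    }
            }

            /* 3. empty /proc/PID directory (tmpfs mounted over it?) */
            snprintf(path, sizeof path, "/proc/%s", de->d_name);
            DIR *d = opendir(path);
            if (d) {
                int entries = 0;
                struct dirent *e;
                while ((e = readdir(d)) != NULL)
                    if (strcmp(e->d_name, ".") && strcmp(e->d_name, ".."))
                        entries++;
                closedir(d);
                if (entries == 0)
                    printf("%s: empty proc directory\n", de->d_name);
            }
        }
        closedir(proc);
        return 0;
    }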
I am reasonably sure that the intended behaviour of Linux is that a process can only be ptraced by one other process at a time.
However, a few years ago, I discovered that a process inside a container being ptraced could be ptraced by a second process running as root at the host level.[1][2] I don't know if that's been patched away since then, but my assumption at the time was that it meant that the "there can be only one" aspect of ptrace was more of an arbitrary decision, not a hard limit.
[2] I'm not sure if the "double ptrace" scenario made it into the final document, but it uses the same techniques discussed there: just attach a tracer to the containerized process from inside the container before you attach gdb or asminject.py from outside of the container.
It is not real; OP is suggesting that it be written. I was so very tempted to write it. The only downside to writing it is that once you have it in circulation, it moves the goalposts, and rootkit authors with some skill will use less obvious techniques to hide their software.
That specific technique only works if root can still load kernel modules, but if I could throw that together with minimal knowledge of the Linux kernel's inner workings, there's probably a sneakier way.
I'm of the opinion that all systems should at least have a static busybox. It's useful for more than just rootkit hunting, for example if you somehow break your installation because of glibc shenanigans (rare, but it happens).
Sorry, but it is largely all-or-nothing in this case: if someone has access to the user the app runs as, you are screwed. It doesn't matter whether you use env vars or files.
I'm assuming the parent intended to say "if someone gained access to your user you are pwned anyways", which is true, unless you actually go to the effort of storing the secrets securely using OS-provided mechanisms. Env vars are not that.
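For what it's worth, one example of such an OS-provided mechanism on Linux is the kernel keyring (keyutils). A rough sketch of stashing a secret there instead of in an env var (illustrative names and values are mine, not anything the parent specifically had in mind):

    /* Rough sketch: store a secret in the session keyring instead of an
     * env var. Build with -lkeyutils. The key never appears in
     * /proc/PID/environ and is subject to kernel keyring permission checks
     * rather than plain process inheritance. */
    #include <keyutils.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *secret = "hunter2";   /* illustrative value only */

        key_serial_t key = add_key("user", "myapp:db_password",
                                   secret, strlen(secret),
                                   KEY_SPEC_SESSION_KEYRING);
        if (key == -1) {
            perror("add_key");
            return 1;
        }

        char buf[256];
        long n = keyctl_read(key, buf, sizeof buf - 1);
        if (n < 0) {
            perror("keyctl_read");
            return 1;
        }
        buf[n] = '\0';
        printf("read back %ld bytes\n", n);   /* a real app would wipe buf */
        return 0;
    }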
> which isn't feasible in the real world
Well of course it isn't; how would you justify those sweet cybersecurity experts' paychecks otherwise? Not saying cybersecurity isn't important, but there's way too much snake oil in the industry nowadays (always has been?).