
I think that async in Rust has a significant devex/velocity cost. Unfortunately, nearly all of the effort in Rust libraries has gone into async code, so the async libraries have outpaced the threaded libraries.

There was only one threaded web server, https://lib.rs/crates/rouille . It has 1.1M lines of code (including deps). Its hello-world example reaches only 26Krps on my machine (Apple M4 Pro). It also has a bug that makes it problematic to use in production: https://github.com/tiny-http/tiny-http/issues/221 .

I wrote the threaded web server https://lib.rs/crates/servlin . It uses async internally. It has 221K lines of code. Its hello-world example reaches 102Krps on my machine.

https://lib.rs/crates/ehttpd is another one but it has no tests and it seems abandoned. It does an impressive 113Krps without async, using only 8K lines of code.
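
For reference, here is the bare thread-per-connection pattern these threaded servers build on, sketched with just std. It's an illustration only (no real HTTP parsing, no thread pool, no keep-alive), not production code:

    // Thread-per-connection "hello world" using only std, for illustration.
    // It ignores the request bytes and always returns one fixed response.
    use std::io::{Read, Write};
    use std::net::{TcpListener, TcpStream};
    fn handle(mut stream: TcpStream) -> std::io::Result<()> {
        let mut buf = [0u8; 4096];
        let _ = stream.read(&mut buf)?; // read (some of) the request and discard it
        stream.write_all(
            b"HTTP/1.1 200 OK\r\nContent-Length: 13\r\nConnection: close\r\n\r\nHello, World!",
        )
    }
    fn main() -> std::io::Result<()> {
        let listener = TcpListener::bind("127.0.0.1:3000")?;
        for stream in listener.incoming() {
            let stream = stream?;
            std::thread::spawn(move || {
                let _ = handle(stream);
            });
        }
        Ok(())
    }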

For comparison, the popular Axum async web server has 4.3M lines of code and its hello-world example reaches 190Krps on my machine.

The popular threaded Postgres client uses Tokio internally and has 1M lines of code: http://lib.rs/postgres .

Recently a threaded Postgres client was released. It has 500K lines of code: https://lib.rs/crates/postgres_sync .
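
For anyone who hasn't used the blocking style, typical usage of the `postgres` crate mentioned above looks roughly like this (a sketch from memory; the connection string and table are invented for the example):

    // Blocking Postgres access with the `postgres` crate (Tokio runs internally).
    // Connection parameters and schema here are invented for the example.
    use postgres::{Client, NoTls};
    fn main() -> Result<(), postgres::Error> {
        let mut client = Client::connect("host=localhost user=postgres dbname=example", NoTls)?;
        client.execute(
            "CREATE TABLE IF NOT EXISTS person (id SERIAL PRIMARY KEY, name TEXT NOT NULL)",
            &[],
        )?;
        client.execute("INSERT INTO person (name) VALUES ($1)", &[&"Alice"])?;
        for row in client.query("SELECT id, name FROM person", &[])? {
            let id: i32 = row.get(0);
            let name: &str = row.get(1);
            println!("{id}: {name}");
        }
        Ok(())
    }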

There was no ergonomic way to signal cancellation to threads, so I wrote one: https://crates.io/crates/permit .
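
For comparison, the bare-bones std approach is an `Arc<AtomicBool>` that the worker polls. This sketch illustrates the problem rather than the permit API itself, which (if I remember its docs right) adds sub-permits and waiting on top of the same idea:

    // Hand-rolled thread cancellation with std; shown only to illustrate the
    // problem the `permit` crate addresses. This is not the permit API.
    use std::sync::atomic::{AtomicBool, Ordering};
    use std::sync::Arc;
    use std::thread;
    use std::time::Duration;
    fn main() {
        let stop = Arc::new(AtomicBool::new(false));
        let worker = {
            let stop = Arc::clone(&stop);
            thread::spawn(move || {
                while !stop.load(Ordering::Relaxed) {
                    // ... do one unit of work, then re-check the flag ...
                    thread::sleep(Duration::from_millis(100));
                }
            })
        };
        thread::sleep(Duration::from_millis(500)); // let it run briefly
        stop.store(true, Ordering::Relaxed); // signal cancellation
        worker.join().unwrap(); // wait for the worker to exit
    }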

Rust's threaded libraries are starting to catch up to the async libraries!

---

I measured lines of code with `rm -rf deps.filtered && cargo vendor-filterer --platform=aarch64-apple-darwin --exclude-crate-path='*#tests' deps.filtered && tokei deps.filtered`.

I ran web servers with `cargo run --release --example hello-world` and measured throughput with `rewrk -c 1000 -d 10s -h http://127.0.0.1:3000/`.


I think Cisco SNMP vulnerabilities have been appearing for 20 years or more. I wish someone would add a fuzzer to their release testing script.


I took an "Architecting on AWS" class and half of the content was how to replicate complicated physical networking architectures on AWS's software-defined network: layers of VPCs, VPC peering, gateways, NATs, and impossible-to-debug firewall rules. AWS knows their customers tho. Without this, a lot of network engineers would block migrations from on-prem to AWS.


Ages ago I deployed a Sophos virtual appliance in AWS so I could centrally enforce some basic firewall rules in a way that my management could understand. There was only one server behind it; the same thing could have been achieved simply by using the standard built-in security rules. I think about it often.

I do find Azure's implementation of this stuff pretty baffling. As in, networking concepts digested by software engineers and then regurgitated into a hierarchy that makes sense to them. Not impenetrable, just weird.


I had a very interesting conversation with an AWS guy about how hard they tried to make sure things like Wireshark worked the same inside AWS, because they had so much pushback from network engineers who expected their jobs to be exactly the same inside AWS as on-prem.


The main source of issues leading to overcomplex networking that I've seen was the "every VPC gets a 10.0.0.0/8"-like approach being replicated everywhere, so you suddenly have a hard time trying to interconnect the networks later.


IPv6 solves this but people are still afraid of it for stupid reasons.

It's not hard, but it is a little bit different and there is a small learning curve to deploying it in non-trivial environments.


Another issue (also driving some of the opposition to v6) is the pervasive use of numerical IPs everywhere instead of setting up DNS properly.


I think this part is somewhat legitimate. Every network engineer knows "it's always DNS," to the point that there are jokes about it. DNS is a brittle and inflexible protocol that works well when it's working, but unfortunately network engineers are the ones who get called when it's not.

A superior alternative to DNS would help a lot, but getting adoption for something at that level of the stack would be very hard.


I find that a lot of "it's always DNS" comes down to "I don't know routing beyond the default gateway" and "I never learned how to run DNS". Might be a tad elitist of me, I guess, but a solid DHCP, routing, and DNS setup makes for a way more reliable network than anything else.

DNS just tends to be the part that is visible to the random desktop user when things fail.


>Might be a tad elitist of me, I guess, but solid DHCP, routing, and DNS setup makes for way more reliable network than anything else.

Depends on the network. If you are talking about a branch office, for sure.

>I find that a lot of "it's always DNS" falls down to "I don't know routing beyond default gateway"

I see it mostly with assumptions. Like DNS Server B MUST SURELY be configured the same as DNS Server A, thus my change will have no unexpected consequences.


Solid management of the services is important, yes. Also being prepared for when requirements change. I remember to this day when a bunch of small (rack-scale) deployments suddenly needed heavy-grade DNS because one of the deployed projects would generate a ton of DNS traffic. My predecessor had set up dnsmasq, and I didn't have a reason to change it before that; afterwards we had to set up a total of 6 DNS servers per rack (1 primary authoritative, 2 secondaries updating themselves from the authoritative, 3 recursive).

I would say the situation also changes a lot if you know how to (and are allowed to) deploy anycast routes for core network services - for example, fc00::10-12 will always be the recursive nameservers, and you configure routing so that each host picks up the closest one, etc.


How does one handle errors with MESH?

To handle errors in HTMX, I like to use the config from [0] to swap responses into error dialogs, plus `hx-on-htmx-send-error` [1] and `hx-on-htmx-response-error` [2] to show the dialogs. For some components, I also use an `on-htmx-error` attribute handler:

    // https://htmx.org/events/
    document.body.addEventListener('htmx:error', function (event: any) {
        const elt = event.detail.elt as HTMLElement
        const handlerString = elt.getAttribute('on-htmx-error')
        console.log('htmx:error evt.detail.elt.id=' + elt.getAttribute('id') + ' handler=' + handlerString)
        if (handlerString) {
            eval(handlerString)
        }
    });
This gives very good UX on network and server errors.

[0]: https://htmx.org/quirks/#by-default-4xx-5xx-responses-do-not...

[1]: https://htmx.org/events/#htmx:sendError

[2]: https://htmx.org/events/#htmx:responseError


Yes. With HTMX, one can put a page definition and its endpoints in one file. It has high cohesion.

There's no integration with routers, state stores, or RPC handlers. There are no DTOs shared between the frontend and backend. It has low coupling.

High cohesion and low coupling bring benefits in engineering productivity.
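
Here's a sketch of that single-file pattern in Rust; axum here is just an example framework, not a requirement, and the same layout works with any server that can return HTML fragments:

    // One file holds the page and the HTMX endpoint it calls (axum 0.7-style
    // API assumed; the routes and markup are invented for the example).
    use axum::{response::Html, routing::{get, post}, Router};
    pub fn routes() -> Router {
        Router::new()
            .route("/greeting", get(page)) // full page
            .route("/greeting/refresh", post(fragment)) // HTMX fragment endpoint
    }
    async fn page() -> Html<&'static str> {
        Html(concat!(
            "<html><head><script src=\"https://unpkg.com/htmx.org\"></script></head><body>",
            "<div id=\"msg\">Hello</div>",
            "<button hx-post=\"/greeting/refresh\" hx-target=\"#msg\">Refresh</button>",
            "</body></html>",
        ))
    }
    async fn fragment() -> Html<String> {
        Html(format!("Hello again at {:?}", std::time::SystemTime::now()))
    }
    #[tokio::main]
    async fn main() {
        let listener = tokio::net::TcpListener::bind("127.0.0.1:3000").await.unwrap();
        axum::serve(listener, routes()).await.unwrap();
    }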


VPN providers do not have reputations for making secure or reliable software.

Here's a good privacy proxy (VPN) setup: Set up a second wifi router, enable the "Internet kill switch", and connect it with Wireguard to a reputable VPN service. I recommend GL.iNet routers and Mullvad.

With this setup, one can move individual devices between the privacy wifi and identity-broadcasting wifi.


According to IIHS insurance loss data, https://www.iihs.org/research-areas/auto-insurance/insurance... , here's the chance of being injured while riding in one of these cars (and filing an insurance claim) relative to the average US vehicle:

    - Nissan Leaf -15% (Select "Small" & "4-door Cars" on the page)
    - Chevy Bolt -34%
    - Subaru Crosstrek -28% ("Station wagons" & "Small")
    - Tesla Model 3 +26% ("Luxury cars" & "Midsize")
So the choice of Nissan Leaf was OK from a safety perspective, but the Chevy Bolt is better. The Tesla is much worse.


BART is a government organization and all California government employee pay is public. You can see that BART has about 40 software engineers and they earn about 70% of the market rate:

https://transparentcalifornia.com/salaries/search/?q=compute...

https://transparentcalifornia.com/salaries/search/?q=compute...

It seems to me that they are over-worked & under-paid and are doing a good job given the circumstances.

NIMBYs have blocked BART in Silicon Valley. BART doesn't reach Menlo Park, Palo Alto, Stanford, Mountain View, Sunnyvale, Los Altos, Santa Clara, or Cupertino. A few years ago, it finally reached San Jose.

A separate train (Caltrain) goes from SF through Silicon Valley. Last year they switched to electric trains, which are faster and run more frequently. The SF Caltrain station is inconvenient (a 20-minute walk from downtown, under a highway), but they are working to extend Caltrain to the central SF station: https://en.wikipedia.org/wiki/Salesforce_Transit_Center#Futu... .

So Silicon Valley transit is getting better, slowly.


    Location: San Francisco, CA
    Remote: Yes
    Willing to relocate: Yes
    Technologies: Rust, TypeScript, Golang, Java, Ruby, Python, Postgres, htmx, GitHub Actions, CircleCI, AWS, ECS, Kubernetes, Datadog, Sentry, React, HTML/CSS, Linux, Snowflake, and many others.
    Résumé/CV: https://joblin.app/profile/409373959
    Email: michael206@gmail.com
I'm a generalist software engineer with strong opinions on how to build and maintain software while balancing quality, speed, and cost. I've worked at FAANG and startups. I'm a fan of design docs, lunch&learns, clarity, and deleting code. I care a lot about doing a good job and making things meet user needs. I also love unblocking teammates & other teams.

I've published a lot of code online, mostly Rust libraries, with good tests. I like working in the office, feeling camaraderie with my teammates.


> There is no “simplifying force” acting on the code base other than deliberate choices that you make. Simplifying takes effort, and people are too often in a hurry.

There is a simplifying force: the engineers on the project who care about long-term productivity. Work to simplify the code is rarely tracked or rewarded, which is a problem across our industry. Most codebases I've worked in had some large low-hanging fruit for increasing team productivity, but it's hard to show the impact of that work, so it never gets done.

We need an objective metric of codebase cognitive complexity. Then folks can take credit for making the number go down.
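
As a toy illustration of the kind of number I mean (arbitrary weights and keyword list, not a real metric; clippy's cognitive_complexity lint is a per-function take on the same idea):

    // Toy codebase-complexity score: branching keywords weighted by brace depth.
    // Purely illustrative; the weights and keyword list are arbitrary.
    use std::fs;
    use std::path::Path;
    fn score(source: &str) -> u64 {
        let mut depth: u64 = 0;
        let mut total: u64 = 0;
        for line in source.lines() {
            let trimmed = line.trim();
            for kw in ["if ", "else", "match ", "for ", "while ", "loop"] {
                if trimmed.contains(kw) {
                    total += 1 + depth; // deeper branches cost more
                }
            }
            depth += line.matches('{').count() as u64;
            depth = depth.saturating_sub(line.matches('}').count() as u64);
        }
        total
    }
    fn walk(dir: &Path, total: &mut u64) -> std::io::Result<()> {
        for entry in fs::read_dir(dir)? {
            let path = entry?.path();
            if path.is_dir() {
                walk(&path, total)?;
            } else if path.extension().is_some_and(|e| e == "rs") {
                *total += score(&fs::read_to_string(&path)?);
            }
        }
        Ok(())
    }
    fn main() -> std::io::Result<()> {
        let mut total = 0;
        walk(Path::new("src"), &mut total)?;
        println!("complexity score: {total}");
        Ok(())
    }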

