
$WORK project makes heavy use of the test op to enable atomic updates to objects across multiple competing clients. It winds up working really well for that purpose.
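Roughly this pattern (a sketch using the evanphx/json-patch library; the /version field is illustrative, not our actual schema). RFC 6902 requires the whole patch to apply atomically, so a failed test aborts everything:

    package main

    import (
        "fmt"

        jsonpatch "github.com/evanphx/json-patch"
    )

    func main() {
        doc := []byte(`{"name":"widget","qty":4,"version":7}`)
        // The test op guards the update: if another client already bumped
        // /version, Apply returns an error and none of the ops take effect.
        patch, err := jsonpatch.DecodePatch([]byte(`[
            {"op": "test", "path": "/version", "value": 7},
            {"op": "replace", "path": "/qty", "value": 5},
            {"op": "replace", "path": "/version", "value": 8}
        ]`))
        if err != nil {
            panic(err)
        }
        updated, err := patch.Apply(doc)
        if err != nil {
            panic(err) // lost the race: re-read the object and retry
        }
        fmt.Println(string(updated))
    }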


Who generates the test op? Client? Or the server?


These days I reach for adaptive radix trees over B-trees when I have keys and don't need arbitrary sort orderings. They are just that much more CPU- and memory-efficient.


What is the hassle with maintaining hydraulic disc brakes on a bicycle? On my motorcycle, you replace the brake fluid and inspect the rotors for tolerance every couple of years, check the pads for wear and the lines for damage (replacing as needed) every oil change, and that is pretty much it. I would imagine that bicycle hydraulics are even easier to maintain, if only because they don't have nearly as much energy to dissipate as motorcycle brakes do.


The hassle is a bunch of things:

1. The knowledge of how to deal with hydraulics is still slowly accumulating across the industry. We've been fixing bikes with Bowden cables for a hundred years, give or take a bit. We've been dealing with hydraulics for 25 years, give or take a few, and it's a really recent development that they've trickled down to the low end of the market and penetrated the road market at all. The knowledge of how to fix them is not as widespread as you'd think.

2. Compounding the above, we have two competing systems (naturally). SRAM runs DOT fluid and Shimano runs mineral oil. The bleeding procedures are different (naturally). Surprisingly, the hygroscopic nature of DOT fluid is a non-issue: both systems, run hard for a year, are basically due for a fluid change.

2.5. Some of the bleed procedures are consistent within a manufacturer's line over time. Oftentimes they are not. Step one of a bleed is usually RTFM, because the brakes you're looking at are probably different from the last three sets you've done.

3. Everything is small. The entire master cylinder assembly has to fit inside the brake lever. On a road bike, your hand wraps around all of that, plus the shift mechanism. Access to the reservoir cap, which is also the bleed port, is about as good as they can make it, but peeling back the rubber hood is still one more damn thing.

4. The calipers are similarly small, leaving less room for sealing, etc.

5. There's little room for manufacturing variation in the mounts as well. In theory this would affect mechanical disc brakes too; in practice, they hold up better when the caliper is mounted cockeyed. Park Tool makes an extremely elaborate facing kit to rectify problems with mounts. The shop I work at part-time has one, and I've had to use it. The fact that the rotor is non-rigid does buy you some tolerance back, but sometimes it isn't enough.

All of the above adds up to a lot more hassle than mechanical disc brakes, not because it's insurmountably hard to do the work; the major factor is that a good set of mechanicals is so damn simple and reliable. The set on my mountain bike has 7000 miles of touring plus mountain biking on it, and I doubt I've spent 3 hours cumulatively working on them.


I think this is overstated.

1. Despite being relatively new, bicycle hydraulic brakes are dead simple. There isn't much to learn, unlike the switch to discs, which was a much bigger change. You also don't really need to fix them when a pre-bled set of Shimano MT-200 brakes is $25, in case you find yourself with a hairy problem a bleed won't fix.

2.1 The industry is converging on mineral oil, as SRAM now has mineral oil brakes. Also, DOT fluid's hygroscopic nature actually is a problem: good quality mineral oil brakes last years between bleeds, which is rare for DOT fluid.

2.2 Mineral oil bicycle hydraulic brakes are very different from the hydraulics on cars or motorcycles - they're a sealed, hydrophobic system. You don't need to bleed them unless there is a leak or dirt gets past the seals, and if you don't, the worst that happens is that they gradually get mushy.

Meanwhile, if you don't bleed motorcycle or car brakes you can very suddenly lose braking power when you need it, and in an unwisely designed sealed DOT system your brake line can literally burst open. But that's not a problem with modern, good quality mineral oil hydraulic brakes.

2.3 This is a SRAM problem. Shimano hydraulic brakes are bled almost identically across the line, and though the full-flush procedure changes a bit every now and then, the change is always very minor (and you rarely need to do a full flush).

3. Things don't need to be big. Realistically speaking, you're not going to be messing with the master cylinder. Also, on the most widespread hydraulic brakes there is no rubber hood around the bleed port, just screws with o-rings around them.

4. The calipers aren't small at all. Look at top-of-the-line weight-weenie road bike calipers to see what an actually small caliper is. Common mountain bike calipers are not especially limited by size; they just don't need to be bigger than they are. On mountain bikes they get bigger if you pay more.

5. This is completely false. 2-piston or, better yet, 4-piston hydraulic calipers, along with convex-concave washers (this is very, very important), offer by far the most flexibility as to mount facing. You can easily trick yourself into thinking that mechanical disc brakes are better because one of the pads is rigid and easy to offset to accommodate a skewed rotor; this almost always backfires hard down the road once the pads start to wear askew. Also, for extreme cases we have floating rotors. The big thing, though, is to use concave/convex washers: after switching to them I've often had bikes that just wouldn't work right with mechanical calipers work perfectly with cheap hydraulic calipers, and I'm talking about a good 5 degrees out of alignment on the mounts in the pitch/roll axes.

I've put 9000 miles on my bike and I've only bled it once in 4 years. I haven't had to do any hydraulic-specific maintenance at all; the most time I've spent was changing pads, changing/truing rotors, and aligning my calipers (often while waiting for my concave/convex washers to arrive, after which it took all of 2 minutes and a business card).


The bicycle parts are lighter/smaller/cheaper. They are more fiddly. But energy-wise, because they are so much smaller/lighter/cheaper, they are dissipating a smaller amount of energy across a vastly smaller set of parts. Heat is still an issue. A 500lb motorcycle+rider dissipates heat across a pair of front discs, both of which can weigh more than an entire bicycle front wheel. A 200lb bicycle+rider focuses everything on a single disc that weighs in at a couple ounces.
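Back-of-the-envelope, with speeds picked purely for illustration (KE = 1/2 * m * v^2):

    500lb (~227kg) motorcycle+rider at 60mph (26.8m/s): ~81kJ
    200lb (~91kg)  bicycle+rider    at 20mph ( 8.9m/s): ~3.6kJ

Call it 20x the energy per stop, going into hardware that is allowed to weigh an order of magnitude more.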


They're not such a hassle. There are millions of people riding around on MT-200 brakes that haven't seen an Allen key since they left the factory. They used to be a lot more finicky, and some niche ones still are, but mass market hydraulic brakes are extremely easy to maintain.


Having written one of these, a few optimizations will go a long way:

1. syscall.Iovec allows you to build up multiple batches semi-independently and then write them all in a single syscall, syncing the file with the next one. It is a good basis for letting multiple pending writes proceed in independent goroutines while another one has sole responsibility for flushing data.

2. It is better to use larger preallocated files than a bunch of smaller ones, along with batching, fixed-size headers, and padding write blocks to a known size. 16 megabytes per WAL and 128-byte padding worked well for me.

3. Batching writes until they reach a max buffer size and/or a max buffer age can also massively increase throughput. A 1 megabyte max pending write or 50 ms elapsed worked pretty well for me as a starting point; then dynamically tuning the age bound to the rolling average of the time the last 16 write+sync operations took (with a hard upper bound to deal with 99th percentile latency badness) worked better. Bounded channels and a little clever math make parallelizing all of this pretty seamless.

4. Mmap'ing the WALs makes consistency checking and byte-level fiddling much easier on replay. No need to seek or use a buffered reader; just use slice math and copy() or append() to pull out what you need.


Thanks! Could you please point me to a reference for (1)?

etcd/wal actually does do preallocations (https://github.com/etcd-io/etcd/blob/24e05998c68f481af2bd567...)

Yet to implement max buffer age! Any references for this would be the bomb!

Is mmap() really needed here? I came across a similar project that does this. Really gotta dig deep here! https://github.com/jhunters/bigqueue


I can't share my references with you directly; the implementation I wrote is closed-source and heavily intermingled with other internal bits. But I can provide examples:

1. syscall.Iovec is a struct that the writev() system call uses. You build it up something like this:

    func b2iov(bs [][]byte) []syscall.Iovec {
        res := make([]syscall.Iovec, 0, len(bs))
        for i := range bs {
            // Each Iovec points at the start of one buffer; writev()
            // writes them back-to-back in order as a single unit.
            res = append(res, syscall.Iovec{Base: &bs[i][0], Len: uint64(len(bs[i]))})
        }
        return res
    }
Then, once you are ready to write:

    func write(fi *os.File, iov []syscall.Iovec, at int64) (written int64, err error) {
        // Position the file at the target offset; writev() writes at
        // the current offset.
        if _, err = fi.Seek(at, io.SeekStart); err != nil {
            return
        }
        wr, _, errno := syscall.Syscall(syscall.SYS_WRITEV, fi.Fd(), uintptr(unsafe.Pointer(&iov[0])), uintptr(len(iov)))
        if errno != 0 {
            err = errno
            return
        }
        written = int64(wr)
        // Make the batch durable before acknowledging it.
        err = fi.Sync()
        return
    }
These are not tested and omit some more advanced error checking, but the basic idea is that you use the writev() system call (POSIX standard, so if you want to target Windows you will need to find its equivalent) to do the heavy lifting of writing a bunch of byte buffers as a single unit to the backing file at a known location.

2. Yeah, I just zero-filled a new file using fallocate as well.
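Something like this sketch (not my actual code; Linux and golang.org/x/sys/unix assumed, and newSegment is just an illustrative name):

    package wal

    import (
        "os"

        "golang.org/x/sys/unix"
    )

    const walSize int64 = 16 << 20 // 16 megabytes per WAL, as above

    // newSegment creates a fully preallocated WAL segment.
    func newSegment(path string) (*os.File, error) {
        fi, err := os.OpenFile(path, os.O_RDWR|os.O_CREATE, 0o644)
        if err != nil {
            return nil, err
        }
        // Reserving the blocks up front means later appends never extend
        // the file, avoiding extra metadata flushes on every Sync.
        if err := unix.Fallocate(int(fi.Fd()), 0, 0, walSize); err != nil {
            fi.Close()
            return nil, err
        }
        return fi, nil
    }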

3. I handled max buffer age by feeding writes to the WAL through a channel, then having the main reader loop select on both that channel and a time.Timer's C channel. Get clever with the Reset() method on that timer and you can implement whatever timeout scheme you like.
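A rough sketch of that loop, using the 1MB/50ms numbers from earlier (names illustrative, not my production code):

    package wal

    import "time"

    func walWriter(in <-chan []byte, flush func([][]byte)) {
        const maxBytes = 1 << 20             // 1 megabyte max pending write
        const maxAge = 50 * time.Millisecond // max buffer age
        var pending [][]byte
        var size int
        t := time.NewTimer(maxAge)
        t.Stop() // no deadline until something is buffered
        for {
            select {
            case buf, ok := <-in:
                if !ok { // producer closed the channel: flush and exit
                    if len(pending) > 0 {
                        flush(pending)
                    }
                    return
                }
                if len(pending) == 0 {
                    t.Reset(maxAge) // age clock starts at the first entry
                }
                pending = append(pending, buf)
                size += len(buf)
                if size >= maxBytes { // size bound hit: flush now
                    flush(pending)
                    pending, size = nil, 0
                    t.Stop()
                }
            case <-t.C: // age bound hit: flush whatever we have
                if len(pending) > 0 {
                    flush(pending)
                    pending, size = nil, 0
                }
            }
        }
    }

The len(pending) checks also cover the case where a stale timer fire is still sitting in t.C after a size-triggered flush.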

4. No, it is not needed, but my WAL implementation boiled down to a bunch of byte buffers protected by a rolling CRC64, and for me just mmap'ing the whole file into a big slice and sanity-checking the rolling CRCs along with other metadata was easier and faster.
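A rough sketch of the replay side (the record layout here is made up for illustration; mine differed):

    package wal

    import (
        "encoding/binary"
        "hash/crc64"
        "os"
        "syscall"
    )

    // replay mmaps a WAL segment and walks its records, handing each
    // valid payload to apply. Illustrative header: 4-byte length plus
    // 8-byte rolling CRC64.
    func replay(fi *os.File, size int, apply func([]byte)) error {
        data, err := syscall.Mmap(int(fi.Fd()), 0, size, syscall.PROT_READ, syscall.MAP_SHARED)
        if err != nil {
            return err
        }
        defer syscall.Munmap(data)

        tab := crc64.MakeTable(crc64.ECMA)
        var rolling uint64
        for off := 0; off+12 <= len(data); {
            n := int(binary.LittleEndian.Uint32(data[off:]))
            want := binary.LittleEndian.Uint64(data[off+4:])
            if n == 0 || off+12+n > len(data) {
                break // hit the zero-filled tail or a torn write
            }
            rolling = crc64.Update(rolling, tab, data[off+12:off+12+n])
            if rolling != want {
                break // rolling CRC mismatch: stop replay here
            }
            apply(data[off+12 : off+12+n])
            off += 12 + n
        }
        return nil
    }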


It is not hard research; it is "just" a lot of plain old boring engineering.


> Does iPXE have a ca-certificates bundle built-in, is there PKI with which to validate kernels and initrds retrieved over the network at boot time

For HTTPS booting, yes.

> how does SecureBoot work with iPXE?

It doesn't, unless you manage to get your iPXE (along with everything else in the chain of control) signed.


Oooh. 6809 resurrection project. I studied that arch obsessively when I was a kid with a Tandy Color Computer.


Funny, I noticed this one when I was reading a paper about people using an OS-9 variant together with the Turbo9. See https://www.mdpi.com/2079-9292/13/10/1978

I had a CoCo 1 and a CoCo 3; with the OS-9 operating system you could run C and Pascal compilers as well as use BASIC09, which was like a modern bytecode-interpreted language (think Python). The 6809 stands out because it was designed to support compiled languages, shared libraries, and such. It not only had the right addressing modes but also just enough registers that code generation was straightforward.

When I replaced my CoCo 3 with a 286 machine I switched to hardware that was more than an order of magnitude faster, but MS-DOS was a big step back from OS-9.

Lately I have been obsessed with the lost 24-bit generation of micros and was quite delighted to discover the eZ80. It would be nice to see the 6809 follow the same route, but Motorola was too committed to the ultimately doomed 68k.


Fun fact: Motorola didn't ship 68040 CPUs over 40MHz because they didn't think customers would tolerate even a small heat sink and fan combination being required for the chip.

*looks at massive heat sink and fan on my desktop system...


I mean, was the 68k really "doomed"? Millions of machines shipped with it, it went through like 6 generations (and then ColdFire), it's still used in military hardware with new chips made for that industry, and its existence heavily influenced everything about the Unix environments & protocols we use today.

It was a great ISA (though clearly completely different from the 6809), and these days big endian looks like a serious anachronism.


Wasn't it the VAX that set the precedent for all the "32-bit" machines ever since?

Every company that invested in it, however, had to either go out of business or switch to another architecture - usually the former. Apple and Sun Microsystems survived. Fragmentation, with the Apple Macintosh, Amiga, Atari ST, Sinclair QL, etc. (all similar machines that struggled with cost-performance tradeoffs in different ways), didn't help.

The BBC Micro developers probably had more space to really think about how to make a home computer; they looked at many architectures and came to the conclusion that the 68k line punched below its weight. People in the early 1980s didn't see that one of the greatest rugpulls in computing was coming, and were blindsided by IBM "doing it again" by maintaining long-term software compatibility with the 8088 and its derivatives the way they did for the old 360... Companies that invested in the x86 didn't regret it, though many of them were crushed by an increasingly competitive market.


I absolutely agree that Motorola fucked up bigtime with the 68k->88k->PPC transition. They listened to the pundits who said a CISC ISA couldn't scale, and then Intel proved that (mostly) wrong, but by then it was too late for 68k/ColdFire. The rug-pulling is probably in large part responsible for both Atari and Commodore just packing it in, and it led to Apple being in the wilderness for a decade as it went through the first of 3 ISA transitions for the Mac.

Also yes the 68k was clearly a dog when it came to interrupt responsiveness and made no sense in home computers or video games, at least not until the '020.

It was meant to be a bargain VAX. And that's why I say it was influential. A whole generation (X) of us grew up who could never hope (or care) to touch a VAX, but we had Atari STs, Amigas, etc at home, and were shelling into SunOS 68k boxes for fun or school etc. It lasted quite a while after VAX was a done deal, too.

To me the 68k was the Unix workstation and hobbyist market, and it defined 32-bit ISAs as well. I never touched an x86 until I could run Linux on it. I've never learned or written x86 assembly seriously in my entire career, despite writing (and enjoying) 6502, 68k, ARM, MIPS, RISC-V, and others.

Outside of the UK, ARM didn't play a role except for in some embedded stuff. Obviously that changed :-)


>Also yes the 68k was clearly a dog when it came to interrupt responsiveness and made no sense in home computers or video games, at least not until the '020.

In what world? The interrupts were very fast, taking just a few cycles, and the behavior was deterministic.

This is why they were popular in realtime applications, and remained so for decades.

Back in the day, they were famously used in e.g. the Eurofighter Typhoon.


The worst case interrupt latency on the original 68000 is pretty terrible? The instructions take ages and the interrupt handling mechanism adds additional delays.

I'm sure the 68020 was better.


>The worst case interrupt latency on the original 68000 is pretty terrible?

It has a fixed cost of 44 cycles. Is this somehow bad?

I would call it extremely tight.

edit: 44 cycles... after the previous instruction finishes. Some instructions (infamously slow division, and many-reg movem) can be slow. Worst case is what matters for realtime, but it is still under 400 cycles.


In home computers and video games? Holy crap ... compare that to the 6502. Maximum 13 cycles, including the execution of the instruction that initiated it, and the return from the interrupt...

And yes, obviously that's easy when you only have 3 registers ... But this has a serious real world effect for things like video games.


>But this has a serious real world effect for things like video games.

At 1MHz, sure.


Well, sure ... and it also played a part in making an 8MHz Atari ST feel more sluggish than a 1MHz C64 in some things.

Obviously in throughput it's a different story.


The Atari ST (68k at 8MHz) had other issues that made it sluggish. It is no Amiga.

In most scenarios, most interrupts aren't going to interrupt divisions, as most instructions aren't over 16 cycles[0], thus most interrupts will not be much longer than 44 cycles. In microseconds, a non-issue for a microcomputer.
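For scale (my arithmetic, at the clock speeds in question): 44 cycles at the ST's 8MHz is ~5.5us, and even the ~400-cycle worst case is ~50us, while the 6502's 13 cycles at 1MHz is 13us. In wall-clock terms the 68000's typical interrupt entry is actually quicker.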

Realtime programmers of the time had predictable behavior, and could simply avoid using the specially long instructions (movem and div) altogether if they needed to bound latency below the upper limit. Performance isn't as important as meeting hard deadlines.

0. https://wiki.neogeodev.org/index.php?title=68k_instructions_...


We need to remember that everyone -- not just Motorola -- thought CISC was a dead end by the late 1980s. Even Intel!

And we also need to acknowledge that everyone was RIGHT! Intel ultimately threw billions into keeping x86 competitive, and they seem to have finally hit the wall now with performance per watt measures, which is what cost them Apple's business and is now driving Microsoft to ARM.


>And we also need to acknowledge that everyone was RIGHT!

And x86, too, will be replaced.


It was simply phased out at some point, like so many other processors; I think it's wrong to say it was "doomed". On one of the minicomputer systems I worked with until around 1995, all the interface boards used m68k CPUs and ran their own little operating systems. This made the whole mini quite efficient - all the intelligence in the peripherals meant that a lot of data transfers could be performed independently of the main CPU.


About 35 years ago, I did some embedded development on a 68k platform running OS-9 (ported to 68k from 6809). I was really impressed with the POSIX-like environment, but thoroughly disgusted with the very poor documentation. I found that I had to write a lot of stub code to fully determine and understand the behavior of the system calls.

This was after I had purchased and fully digested the only documentation I could find: "OS-9 Insights"

https://www.amazon.com/OS-9-insights-advanced-programmers-gu...


Me too! I remember laboriously typing in the demo program from "TRS-80 Color Computer Assembly Language Programming" and watching it bubble sort the text on screen by directly accessing video memory.


One of the great thinkers of the modern era. He will be missed.


Terrorists willing to blow themselves up at a time of their choosing are the key there. With nuclear waste hot enough to be a menace, the window between getting the material and being painfully incapacitated and/or killed by it is pretty damn small. Especially since nuclear plants tend to have things like lots of checkpoints and security.


More than that: if you have ever had to fix a bug in code common to multiple maintained releases of a project, being able to apply the same patch to them all as its own thing, instead of having multiple cherry-picked commits with identical content, would be nice.


I think giving a patch its own identity is a pretty neat concept and clearly different from the git approach, so thanks for this example!

