
I might be wrong, but I think I've read somewhere (here on HN) that the kernel has no idea about the disk head position. It's the job of the HDD's firmware to reorder the read/write commands it receives from the kernel for optimal performance.
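
To illustrate the reordering point, here is a rough Python sketch that just reads the drive's command queue depth from sysfs; a depth above 1 is what lets the firmware queue and reorder commands (NCQ). The device name is an assumption, and the NCQ framing is mine, not the parent's.

    # Rough sketch: peek at the drive's command queue depth (NCQ), the
    # mechanism that lets the firmware reorder queued read/write commands.
    # /dev/sda is an assumed device; the sysfs path applies to SATA/SCSI disks.
    with open("/sys/block/sda/device/queue_depth") as f:
        print("queue depth:", f.read().strip())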

Also, the firmware can "remap" some (bad) sectors into a reserve area without the kernel knowing.




Modern disks use Logical Block Addressing, so block numbers do correlate with head position, but there's no detailed info at the level of cylinders/heads/sectors. Block remapping is a theoretical possibility, but if you see even a single block being remapped in the SMART info, it means the disk is dying and you should replace it ASAP.
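
As a rough way to check for that, a Python sketch that shells out to smartmontools' smartctl and looks at attribute 5 (Reallocated_Sector_Ct); the device path and the "any nonzero value is bad" threshold are assumptions for the example.

    # Sketch: warn if SMART shows any reallocated (remapped) sectors.
    # Assumes smartctl (smartmontools) is installed and /dev/sda is the disk.
    import subprocess

    out = subprocess.run(["smartctl", "-A", "/dev/sda"],
                         capture_output=True, text=True).stdout

    for line in out.splitlines():
        fields = line.split()
        # Attribute rows start with the attribute ID; 5 is
        # Reallocated_Sector_Ct, and the raw value is the last column.
        if fields and fields[0] == "5" and fields[-1].isdigit():
            if int(fields[-1]) > 0:
                print(fields[-1], "reallocated sectors -- replace the disk")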


Some modern disks, depending on firmware and applications, do in fact do a lot of remapping; they have wear leveling enabled, generally aimed at shoveling data around such that the head tends to move less and gives you better latencies. It wouldn't surprise me if normal disks are starting to do that regardless of usage, as reducing tail latencies never hurts much.

There is also a difference between remapping a sector and reallocating a sector. Remapping simply means the sector was moved for operational reasons; reallocating means the sector produced some read errors but could still be read, so its data was moved to the spare area.

A disk can operate fine even with tens of thousands of reallocated sectors (in my experience). The dangerous part is SMART reporting pending and offline sectors, doubly so if the pending count does not drop below the offline count. That is data loss.
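
In the same spirit, a sketch that applies that rule of thumb to attributes 197 (Current_Pending_Sector) and 198 (Offline_Uncorrectable); the device path is assumed and the comparison is just the heuristic described above.

    # Sketch: the pending-vs-offline heuristic from the comment above.
    # Assumes smartctl is installed and /dev/sda is the disk being checked.
    import subprocess

    out = subprocess.run(["smartctl", "-A", "/dev/sda"],
                         capture_output=True, text=True).stdout

    counts = {"197": 0, "198": 0}          # pending / offline-uncorrectable
    for line in out.splitlines():
        fields = line.split()
        if fields and fields[0] in counts and fields[-1].isdigit():
            counts[fields[0]] = int(fields[-1])

    pending, offline = counts["197"], counts["198"]
    if pending or offline:
        print("pending:", pending, "offline:", offline)
        if pending >= offline:
            print("pending has not dropped below offline -- treat as data loss")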

But simply put: on modern disks the logical block address has no relation to the position of the head on the platter.


> But simply put: on modern disks the logical block address has no relation to the position of the head on the platter.

WD kind of tried that with device-managed SMR drives, and those show absolutely horrible resilvering performance.

Without a relatively strong relation between linear write/read commands and mostly linear physical locations, spinning-rust performance is not at a usable level.
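
To put a rough number on that, a Python sketch that times sequential versus scattered reads on a large file sitting on the disk; the path, block size, and read count are placeholders, and the page cache can hide the gap unless the file is much larger than RAM.

    # Rough benchmark sketch: sequential vs random 4 KiB reads.
    # PATH should point at a multi-GB file on the spinning disk under test;
    # results are only indicative because the OS page cache gets in the way.
    import os, random, time

    PATH = "/data/largefile.bin"   # placeholder
    BLOCK = 4096
    READS = 2000

    size = os.path.getsize(PATH)

    def timed(offsets):
        fd = os.open(PATH, os.O_RDONLY)
        start = time.monotonic()
        for off in offsets:
            os.lseek(fd, off, os.SEEK_SET)
            os.read(fd, BLOCK)
        os.close(fd)
        return time.monotonic() - start

    sequential = [i * BLOCK for i in range(READS)]
    scattered = [random.randrange(0, size - BLOCK) for _ in range(READS)]

    print("sequential:", round(timed(sequential), 3), "s")
    print("scattered: ", round(timed(scattered), 3), "s")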


DM-SMR is an issue, yes, but CMR disks already do this kind of remapping, and it's not as much of an issue as you think. On a CMR disk this is entirely fine.

The issue with SMR is that, because a write can have insane latencies, normal access suffers.

CMR doesn't have those write latencies, so you won't face resilvering taking forever.

It also helps if you run a newer ZFS, which has sequential resilvers that do in fact run fine on an SMR disk.
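
For what it's worth, a minimal sketch of kicking that off on OpenZFS 2.0+ and polling it; the pool and device names are placeholders, and the -s flag is the sequential-rebuild option as I understand it.

    # Sketch: request a sequential resilver (OpenZFS 2.0+ rebuild) and poll it.
    # "tank", "sda", "sdb" are placeholder pool/device names.
    import subprocess, time

    subprocess.run(["zpool", "replace", "-s", "tank", "sda", "sdb"], check=True)

    while True:
        status = subprocess.run(["zpool", "status", "tank"],
                                capture_output=True, text=True).stdout
        print(status)
        if "in progress" not in status:   # rough check for a finished resilver
            break
        time.sleep(60)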

I will also point out that wear leveling on a DM-SMR disk tries to achieve maximum linear write/read performance by organizing commonly read sectors closer to each other.



