
No way, man.

If you solved that problem you would be rich.



And the motivation: it's used as test input. I'm not sure I agree with completely destroying the readability of a perfectly readable file, but OK.


The motivation, as far as I can see, was only that it "may be used."

But see belorn's post, where he argues that the error handling seems good.


Is the date on that accurate? Did that happen in 2016? That kinda hurts the theory that they were trying to avoid similarity to Unix.


It says: Mar 9, 2015


Cool, but it requires shit (JS, at least).



Are you American or what?

That's really insane, man. Take care of yourself. :)


I don't see any problem here.

I've been using Startpage for the last 5 years and I'm not looking back. I wouldn't have any problem using any other search engine; nowadays any search engine works. The three or four times I googled in those years, it felt pretty weird.

tl;dr: There's plenty of room, just not enough gray matter. :)


But...Startpage uses other search engines, primarily Google.


Truuuue. :)

But my point is that my searches usually end up at:

Wikipedia, blog posts, *overflow, Twitter, Reddit, papers...

So a search engine covering only those sites could indeed be usable, and I would use it if I had to. :)


I spent the last week testing ext4/btrfs/ZFS on Linux, and I found that ZFS is rather slow and that btrfs has improved its performance a lot over the last few years (I should refine the script a bit, upload some graphs, and write a post).

https://gist.github.com/liloman/d525131fab9b9a440140905921e9...

I'll also run it on bcachefs. :)

The script needs a spare 512 MB disk partition and some basic changes, but the fundamental work is there.
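
A rough sketch of the kind of loop it runs (the device path and fs list here are placeholders, not the actual gist contents):

    #!/bin/bash
    # Benchmark sketch: format, mount, time a write, unmount.
    # /dev/sdX1 is a placeholder for the spare 512 MB partition.
    # (ZFS would need zpool create instead of mkfs.)
    DEV=/dev/sdX1
    MNT=/mnt/bench
    mkdir -p "$MNT"
    for fs in ext4 btrfs; do
        wipefs -a "$DEV"            # clear any previous fs signature
        mkfs."$fs" "$DEV" >/dev/null
        mount "$DEV" "$MNT"
        sync
        # one crude sample: 256 MB sequential write, fsync'd
        time dd if=/dev/zero of="$MNT/test" bs=1M count=256 conv=fsync
        umount "$MNT"
    done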


Awesome video.

I've watched the video at least twice; it's great and interesting. :)

I cannot recommend all the CCC media highly enough. ;)



OK, so ZFS for single-drive users doesn't fix data corruption with a single copy.

So I'm definitely going to keep using my solution. All the next-generation FS stuff is cool (btrfs too, indeed), but for the simplest use case people just need their data kept safe and repaired if the disk goes bad.


Why don't you use copies=2 with a single disk?


Last month I started a really simple and effective project to recover from bitrot on Linux (macOS/Unix?). It's "almost done"; it just needs more real-world testing and the systemd service. I've been pretty busy the last few weeks, so I've only been able to improve the bitrot-checking performance.

https://github.com/liloman/heal-bitrots

Unfortunately, btrfs is not stable, and ZFS needs a "supercomputer", or at least as many GBs of ECC RAM as you can buy. This solution is designed for any machine and any FS.
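
The core idea is just checksum manifests plus redundancy; a minimal sketch of the detection half (paths are placeholders, not the actual project code):

    # Build a checksum manifest of everything under /data.
    find /data -type f -print0 | xargs -0 sha256sum > /var/lib/bitrot.sha256
    # Later (cron/systemd timer): re-verify and report mismatches.
    sha256sum --quiet -c /var/lib/bitrot.sha256 || echo "bitrot detected"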


Please stop spreading this misinformed statement. I assume you are referring to the ZFS ARC (Adaptive Replacement Cache). It works in much the same way as a regular Linux page cache. It does not take much more memory (if you disable prefetch) and will only use what is available/idle. We use Linux with ZFS on production systems with as low as 1GB memory. We stopped counting the times it has saved the day. :-)
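
For what it's worth, on tiny boxes you can also cap the ARC explicitly; a minimal sketch, assuming ZFS on Linux (the 512 MiB figure is just an example):

    # Cap the ARC at 512 MiB (value in bytes); persists across reboots.
    echo "options zfs zfs_arc_max=536870912" >> /etc/modprobe.d/zfs.conf
    # Or apply at runtime without reloading the module:
    echo 536870912 > /sys/module/zfs/parameters/zfs_arc_max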

ECC is nice to have, but ZFS does not have special requirements over, say, a regular page cache. The only difference is that ZFS will discover bit-flips instead of just ignoring them as ext4 or XFS would.


> ECC is nice to have.

Actually, it seems ECC is important for ZFS filesystems; see:

http://louwrentius.com/please-use-zfs-with-ecc-memory.html


To be clear, it is not ZFS that requires or even mandates ECC. Since ZFS trusts data as it appears in memory and has checks for everything after that point, it is prudent to have memory checks at the hardware level too.

Thus, if one is using ZFS for data reliability, one ought to use ECC memory as well.


> Actually, it seems ECC is important for ZFS filesystems; see:

The implication of the previous comment tends to lead people to think ECC RAM is needed for ZFS specifically. As the blog post you link to points out, it's equally applicable to all filesystems.


It's not required, but it doesn't make sense to use ZFS without ECC memory. That's the point. It's like locking the back door but leaving the front door wide open.


Interesting.

That's right, the kind of hardware I was referring to: 1 GB of plain RAM. Honestly, I haven't tested ZFS yet for that very reason; I've always read that ZFS has big requirements, so I refrained from trying it. It seems I should give it a try. ;)

Btrfs is another story; I've used it for years, and I'd prefer not to have to use it again until it becomes "stable" and "performant". :)


FreeNAS != ZFS. The former is a specialised storage system that has to meet a very different set of criteria than a lightweight server with 1 GB of RAM.


Is ZFS able to repair corruption when there's only a single copy of the data?

My main issue is being able to repair "silent" data corruption on a single-drive machine. Can I use x% of my "partition" for data repair, or do I need to use another partition/drive to mirror/RAID it?

If I understand right, ZFS can detect bitrot ("not really" a big deal), but without any local copy it can't self-heal.

My use case is an ARM A20 SoC (Lime2) to store local backups, among other things, so I need something that detects and repairs silent data corruption at rest by itself (using a single drive).

A poor man's NAS/server. ;)


Not sure if it will fit your needs or not, but for long-term storage on single HDs (and back in the day, on DVDs), I would create par files with about 5-10% redundancy to guard against data loss due to bad sectors: http://parchive.sourceforge.net/ Total drive failure of course means loss of data, but the odd bad sector or corrupted bit would be correctable on a single disk. This was very popular back in the binary Usenet days...
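
With par2cmdline, for example (file names here are placeholders):

    # Create recovery data with ~10% redundancy for a backup archive.
    par2 create -r10 backup.tar.par2 backup.tar
    # Later: check integrity, and repair from the parity blocks if needed.
    par2 verify backup.tar.par2
    par2 repair backup.tar.par2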


You can create a nested ZFS file system and set the number of copies of the various blocks to be two or more. This will take more space, but there'll be multiple copies of the same block of data.

Ideally, though, please add an additional disk and set it up as a mirror.

ZFS can detect silent data corruption during data access or during a zpool scrub (which can be run on a live production server). If there happen to be multiple copies, then ZFS can use one of the working copies to repair the corrupted one.
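
For example (assuming a pool named "tank"; the dataset name is made up):

    # Keep two copies of every block in this dataset only.
    zfs create -o copies=2 tank/important
    # Detect (and, with copies=2, repair) silent corruption:
    zpool scrub tank
    zpool status -v tank    # shows scrub progress and repaired errors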


Got it, but it's not for my use case then, because I don't want to halve my storage capacity.

Anyway, I will try it on my main PC, which has several disks, and continue to use my solution for single-disk machines (laptop, VPS, SoC...). :)


Note it won't necessarily halve the capacity. Selectively enable it for the datasets requiring it, and avoid the overhead for the rest.


No, but parity archives solve a different problem: with only a few percent of storage overhead, you can survive bit errors in your dataset. It's like Reed-Solomon for files.

To achieve the same with ZFS, you would have to run RAID-Z2 on sparse files.
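
Something like this, just to illustrate (paths and sizes are invented):

    # Back a RAID-Z2 vdev with four sparse files on an existing fs;
    # parity then costs 2 of every 4 "disks" instead of whole drives.
    for i in 1 2 3 4; do truncate -s 1G /store/zdisk$i; done
    zpool create filepool raidz2 /store/zdisk1 /store/zdisk2 \
        /store/zdisk3 /store/zdisk4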

