A deep dive into Bitcoin Core v0.15 (diyhpl.us)
145 points by okket on Sept 2, 2017 | 16 comments



To me the most interesting thing here is "Rolling UTXO set hashes". Bitcoin has a (theoretical, future) scaling problem because the blockchain keeps growing without bound. No matter what you think about Moore's law, everyone knows it can't go on forever, so an infinitely large blockchain is not sustainable. With UTXO set hashes committed to in blocks, ancient blockchain history can be discarded on a rolling basis, turning the ever-growing blockchain into a fixed-size rolling chain.
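For intuition, here's a toy sketch in Python of the incremental-update idea (not the actual ECMH/MuHash construction discussed on the mailing list): each UTXO hashes to a number, the set hash is the product of those numbers modulo a large prime, and adding or removing an output is a single modular multiplication, independent of order.

    import hashlib

    # Toy multiset hash: illustrative only; the real proposal uses a
    # carefully analyzed scheme (ECMH / MuHash), not this construction.
    P = 2**127 - 1  # a Mersenne prime, chosen here just for the sketch

    def elem_hash(utxo: bytes) -> int:
        return int.from_bytes(hashlib.sha256(utxo).digest(), "big") % P

    class RollingSetHash:
        def __init__(self):
            self.acc = 1
        def add(self, utxo: bytes):        # new output created
            self.acc = (self.acc * elem_hash(utxo)) % P
        def remove(self, utxo: bytes):     # output spent
            self.acc = (self.acc * pow(elem_hash(utxo), -1, P)) % P

    h = RollingSetHash()
    h.add(b"txid1:0")
    h.add(b"txid2:1")
    h.remove(b"txid1:0")   # O(1) update; final value is order-independent
    print(hex(h.acc))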

This doesn't solve the whole problem as the UTXO set itself also grows, but I suspect appropriate incentives could manage this problem (fees for increasing UTXO set size and/or bounties for reducing it).

With these problems solved, Bitcoin would theoretically be able to operate indefinitely without issue. Of course many practical scaling problems remain, the most important being the transaction rate cap.


> so an infinitely large blockchain is not sustainable.

This is a bit misleading. With 1MB blocks, a 2TB drive holds around 40 years of history. Project that out a century and we're still talking about nearly nothing.

Even with 32MB blocks it's not exactly a gigantic commitment, needing a drive every few years.
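For reference, the arithmetic behind those figures, assuming one block roughly every ten minutes:

    # Back-of-the-envelope chain growth at different block sizes.
    BLOCKS_PER_YEAR = 6 * 24 * 365   # one block per ~10 minutes -> ~52,560/year

    for block_mb in (1, 32):
        gb_per_year = BLOCKS_PER_YEAR * block_mb / 1000
        years_per_2tb = 2000 / gb_per_year
        print(f"{block_mb} MB blocks: ~{gb_per_year:.0f} GB/year, "
              f"2 TB lasts ~{years_per_2tb:.1f} years")

    # 1 MB blocks:  ~53 GB/year   -> a 2 TB drive lasts ~38 years
    # 32 MB blocks: ~1680 GB/year -> a 2 TB drive lasts ~1.2 years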

Having a nice way to checkpoint the UTXO set is good for getting people bootstrapped, but it's not like blockchain size is a serious issue.


People who think the problem with large blocks is disk space or internet speed are the worst tbh


Without getting into the 'vim vs. emacs'-style debate rolling through the Bitcoin community: why?

Why is storage not 'the problem'?

I am not trying to kick a hornet's nest here; I am genuinely trying to understand.


Because a 2TB drive is so cheap? People who want to run a full node can easily afford it. But if BTC stays with small blocks and becomes a settlement layer, nobody will even care about running a node in the first place. Why would I want to run a node to validate payments between corporations?

Increasing blocks increases the load on nodes somewhat, but it keeps the system much more peer-to-peer, as transaction fees won't end up as high.

There's no obvious easy solution, but making the network expensive and congested is certainly not a real solution.


The problems with large blocks are not disk space.


Rolling UTXO set hashes: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017...

High-performance Merkle set implementation: http://diyhpl.us/wiki/transcripts/sf-bitcoin-meetup/2017-07-... and https://github.com/bramcohen/MerkleSet

"Making UTXO set growth irrelevant with low-latency delayed TXO commitments" https://petertodd.org/2016/delayed-txo-commitments

"TXO commitments do not need a soft-fork to be useful" https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017...


The problem would still be maintaining the UTXO set itself. Fortunately, by using a two-party dynamic authenticated data structure, it is possible to have only miners (and powerful full nodes willing to hold the full state, like block explorers) store the whole set, while everyone else checks proofs of set transformations included in blocks and stores just 32 bytes. For details, see the introduction to the paper https://eprint.iacr.org/2016/994 (and the blockchain-processing benchmark graph in the second half of p. 14).
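The gist, sketched in Python (this only checks a plain Merkle inclusion proof against a 32-byte digest; the paper's actual construction is an authenticated AVL+ tree whose proofs also cover insertions and deletions):

    import hashlib

    def sha256(b: bytes) -> bytes:
        return hashlib.sha256(b).digest()

    # A light verifier stores only the 32-byte root. A prover (miner or
    # full node holding the whole set) supplies sibling hashes along the
    # path to a leaf; the verifier recomputes the root and compares.
    def verify_inclusion(root, leaf, path):
        node = sha256(leaf)
        for sibling, side in path:   # side = which side the sibling sits on
            node = sha256(sibling + node) if side == "left" else sha256(node + sibling)
        return node == root

    # Tiny 4-leaf tree to exercise the verifier.
    leaves = [b"utxo-a", b"utxo-b", b"utxo-c", b"utxo-d"]
    h = [sha256(x) for x in leaves]
    n01, n23 = sha256(h[0] + h[1]), sha256(h[2] + h[3])
    root = sha256(n01 + n23)

    proof_for_c = [(h[3], "right"), (n01, "left")]
    assert verify_inclusion(root, b"utxo-c", proof_for_c)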


> We also implemented support for the sha-native instruction support. […] The new AMD rizon [sic] stuff contains this instruction, though, and the use of it gives another 10% speedup in initial block download for those AMD systems.

Wow, using native SHA256 hashing improved performance that much? I suppose Bitcoin does use an awful lot of hashes.


Every single signature hashes a few kilobytes of data on average, and there are ~10k signatures per block (more with segwit). That's tens of megabytes of hashing per block, even without considering adversarial transactions that purposely inflate those numbers, so in the range of 10-100 MB. On my CPU, openssl benchmarks at about 180 MB/s for sha256, so 50-500 ms of hash time per block. That's not insignificant.
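A rough sanity check of that estimate with Python's hashlib (throughput will vary a lot depending on the CPU and whether the underlying OpenSSL build uses SHA-NI):

    import hashlib, time

    # ~10k signatures per block, each hashing a few KB of transaction data.
    sigs_per_block = 10_000
    kb_per_sig = 3
    data = b"\x00" * (kb_per_sig * 1024)

    start = time.perf_counter()
    for _ in range(sigs_per_block):
        hashlib.sha256(data).digest()
    elapsed = time.perf_counter() - start

    mb_hashed = sigs_per_block * kb_per_sig / 1024
    print(f"~{mb_hashed:.0f} MB hashed in {elapsed*1000:.0f} ms "
          f"({mb_hashed/elapsed:.0f} MB/s)")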


SHA-256 is the core hashing algorithm the Bitcoin blockchain is built on. I'm actually surprised it didn't have more impact.


SHA-NI on Ryzen is pretty fast... about 2 cycles per byte on large inputs.

It didn't use to be such a large difference but we optimized everything else... :)

Now if only Intel would ship SHA-NI in a CPU worth using.


That's an understatement, like saying "I suppose Snoop does use an awful lot of weed." ;)


The biggest surprise here is that after these guys have been pushing segwit for well over a year, the wallet interface still doesn't have support for using it!?


This was on purpose, and an example of good engineering. Novice users trying to send funds to SegWit addresses before the network activated could have lost funds. The plumbing was all there to activate (and to use via the command line); users will get access via the UI in the next update.


> however users will get access via UI in the next update.

The update after next, actually: we'll be doing a short release for segwit support right after 0.15.

Feature freeze for 0.15 (the next release) was scheduled for July 16th ( https://github.com/bitcoin/bitcoin/issues/9961 ), but segwit lock-in wasn't until Aug 9th. We'd shortcut the process for a serious emergency, but not for something short of one.



