Blocksize – why is there still a block size limit?


Because throughout 2012–2015, after Satoshi Nakamoto disappeared, “thou shalt not hard-fork” became a de facto social rule of the Bitcoin network (BTC), so that old, non-upgraded nodes would not be stranded by upgrades.
This is why SegWit was the only viable way to increase TPS capacity without breaking the UTXO view for old nodes (any further soft-fork increase would only be possible through extension blocks, which would break the UTXO view for old nodes).
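To make the SegWit capacity increase concrete, here is a minimal sketch of the block-weight rule SegWit (BIP141) introduced: a block's weight is its base size (serialization without witness data) times three, plus its total size, capped at 4,000,000 weight units. Old nodes only see the base serialization, which stays within the original 1 MB limit.

```python
# Minimal sketch of the BIP141 block-weight rule introduced by SegWit.
# weight = base_size * 3 + total_size, capped at 4,000,000 weight units.
# Pre-SegWit nodes only see the base serialization (<= 1,000,000 bytes),
# so their 1 MB view of each block is preserved.

MAX_BLOCK_WEIGHT = 4_000_000  # consensus limit in weight units (BIP141)

def block_weight(base_size: int, total_size: int) -> int:
    """base_size: serialized size in bytes without witness data;
    total_size: serialized size in bytes including witness data."""
    return base_size * 3 + total_size

def is_within_limit(base_size: int, total_size: int) -> bool:
    return block_weight(base_size, total_size) <= MAX_BLOCK_WEIGHT

# A purely legacy block has no witness data, so base_size == total_size:
# weight = 4 * size, and the effective cap remains 1,000,000 bytes.
print(block_weight(1_000_000, 1_000_000))   # 4000000

# A SegWit-heavy block can carry more total bytes under the same cap,
# because witness bytes are counted only once instead of four times:
print(is_within_limit(700_000, 1_600_000))  # True (3,700,000 weight units)
```

The sizes in the usage lines are illustrative, not taken from real blocks; the point is that discounting witness data lets total block size exceed 1 MB while legacy validation rules are still satisfied.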

The other de facto social rule is that running a full node should remain as widely accessible as possible.
The Bitcoin network kept the limit low for this reason as well, and the project's current stewardship maintains that decision.

Bitcoin's code is free and open source (MIT license), and anyone can fork it and change the limit for themselves.
However, they can't force others to accept the changed rules by running such software.
What happens if they manage to convince only a subset of the network to run the patched node software?
A permanent hard fork: the forked network then has to convince the social / markets layer to give its coin value, and use that value to bid for mining hashrate.

We have seen this play out with the major forks of Bitcoin.

See also here for a brief history of the blocksize limit, and here and here for a timeline of events that led to the first permanent fork of Bitcoin.

Note that “Nakamoto Consensus” doesn’t decide which blockchain wins the branding / ticker. It is market and social forces that map blockchains to recognized currencies.

