Block Header - How Does Bitcoin Work?

A roadmap to a better header format and bigger block size | jl2012 at xbt.hk | Feb 09 2016 /r/bitcoin_devlist

submitted by BitcoinAllBot to BitcoinAll [link] [comments]

Finally running a Bitcoin node!

Today, I have finally downloaded the full blockchain and am running a full Bitcoin node (took ~3 days).
I have a blockchain explorer, but I still need to work on installing and running lightning network, and exploring other tools. It's a good feeling. I will write up what I bought, what I returned, and how I got on.
Just wanted to tell someone... pretty excited to be a part of the Bitcoin Full Node community.
[email protected]:~ $ bitcoin-cli getblockchaininfo
{
  "chain": "main",
  "blocks": 651540,
  "headers": 651540,
  "bestblockhash": "0000000000000000000bc917f7f0f326bb3d63c3399b7b1f503f7126d8168470",
  "difficulty": 19298087186262.61,
  "mediantime": 1601998567,
  "verificationprogress": 0.9999914360960969,
  "initialblockdownload": false,
  "chainwork": "00000000000000000000000000000000000000001458b8a8029ab60b7e7a2908",
  "size_on_disk": 344436513497,
  "pruned": false,
  "softforks": {
    "bip34": { "type": "buried", "active": true, "height": 227931 },
    "bip66": { "type": "buried", "active": true, "height": 363725 },
    "bip65": { "type": "buried", "active": true, "height": 388381 },
    "csv": { "type": "buried", "active": true, "height": 419328 },
    "segwit": { "type": "buried", "active": true, "height": 481824 }
  },
  "warnings": ""
}
submitted by tookthisusersoucant to Bitcoin [link] [comments]

Gridcoin 5.0.0.0-Mandatory "Fern" Release

https://github.com/gridcoin-community/Gridcoin-Research/releases/tag/5.0.0.0
Finally! After over ten months of development and testing, "Fern" has arrived! This is a whopper. 240 pull requests merged. Essentially a complete rewrite that was started with the scraper (the "neural net" rewrite) in "Denise" has now been completed. Practically the ENTIRE Gridcoin-specific codebase resting on top of the vanilla Bitcoin/Peercoin/Blackcoin PoS code has been rewritten. This removes the team requirement at last (see below), although there are many other important improvements besides that.
Fern was a monumental undertaking. We had to encode all of the old rules active for the v10 block protocol in new code and ensure that the new code was 100% compatible. This had to be done in such a way as to clear out all of the old spaghetti and ring-fence it with tightly controlled class implementations. We then wrote an entirely new, simplified ruleset for research rewards and reengineered contracts (which includes beacon management, polls, and voting) using properly classed code. The fundamentals of Gridcoin with this release are now on a very sound and maintainable footing, and the developers believe the codebase as updated here will serve as the fundamental basis for Gridcoin's future roadmap.
We have been testing this for MONTHS on testnet in various stages. The v10 (legacy) compatibility code has been running on testnet continuously as it was developed to ensure compatibility with existing nodes. During the last few months, we have done two private testnet forks and then the full public testnet testing for v11 code (the new protocol which is what Fern implements). The developers have also been running non-staking "sentinel" nodes on mainnet with this code to verify that the consensus rules are problem-free for the legacy compatibility code on the broader mainnet. We believe this amount of testing is going to result in a smooth rollout.
Given the amount of changes in Fern, I am presenting TWO changelogs below. One is high level, which summarizes the most significant changes in the protocol. The second changelog is the detailed one in the usual format, and gives you an inkling of the size of this release.

Highlights

Protocol

Note that the protocol changes will not become active until we cross the hard-fork transition height to v11, which has been set at 2053000. Given current average block spacing, this should happen around October 4, about one month from now.
Note that to get all of the beacons in the network on the new protocol, we are requiring ALL beacons to be validated. A two-week (14-day) grace period is provided by the code, starting at the time of the transition height, for people currently holding a beacon to validate the beacon and prevent it from expiring. That means that EVERY CRUNCHER must advertise and validate their beacon AFTER the v11 transition (around Oct 4th) and BEFORE October 18th (or more precisely, 14 days from the actual date of the v11 transition). If you do not advertise and validate your beacon by this time, your beacon will expire and you will stop earning research rewards until you advertise and validate a new beacon. This process has been made much easier by a brand new beacon "wizard" that helps manage beacon advertisements and renewals. Once a beacon has been validated and is a v11 protocol beacon, the normal 180-day expiration rules apply. Note, however, that the 180-day expiration on research rewards has been removed with the Fern update. This means that while your beacon might expire after 180 days, your earned research rewards will be retained and can be claimed by advertising a beacon with the same CPID and going through the validation process again. In other words, you do not lose any earned research rewards even if you fail to stake a block within 180 days and let your beacon lapse.
The transition height is also when the team requirement will be relaxed for the network.

GUI

Besides the beacon wizard, there are a number of improvements to the GUI, including new UI transaction types (and icons) for staking the superblock, sidestake sends, beacon advertisement, voting, poll creation, and transactions with a message. The main screen has been revamped with a better summary section, and better status icons. Several changes under the hood have improved GUI performance. And finally, the diagnostics have been revamped.

Blockchain

The wallet sync speed has been DRASTICALLY improved. A decent machine with a good network connection should be able to sync the entire mainnet blockchain in less than 4 hours. A fast machine with a really fast network connection and a good SSD can do it in about 2.5 hours. One of our goals was to reduce or eliminate the reliance on snapshots for mainnet, and I think we have accomplished that goal with the new sync speed. We have also streamlined the in-memory structures for the blockchain which shaves some memory use.
There are so many goodies here it is hard to summarize them all.
I would like to thank all of the contributors to this release, but especially thank @cyrossignol, whose incredible contributions formed the backbone of this release. I would also like to pay special thanks to @barton2526, @caraka, and @Quezacoatl1, who tirelessly helped during the testing and polishing phase on testnet with testing and repeated builds for all architectures.
The developers are proud to present this release to the community and we believe this represents the starting point for a true renaissance for Gridcoin!

Summary Changelog

Accrual

Changed

Most significantly, nodes calculate research rewards directly from the magnitudes in EACH superblock between stakes instead of using a two- or three-point average based on a CPID's current magnitude and the magnitude for the CPID when it last staked. For those long-timers in the community, this has been referred to as "Superblock Windows," and was first done in proof-of-concept form by @denravonska.

Removed

Beacons

Added

Changed

Removed

Unaltered

As a reminder:

Superblocks

Added

Changed

Removed

Voting

Added

Changed

Removed

Detailed Changelog

[5.0.0.0] 2020-09-03, mandatory, "Fern"

Added

Changed

Removed

Fixed

submitted by jamescowens to gridcoin [link] [comments]

Technical: The Path to Taproot Activation

Taproot! Everybody wants to have it, somebody wants to make it, nobody knows how to get it!
(If you are asking why everybody wants it, see: Technical: Taproot: Why Activate?)
(Pedants: I mostly elide over lockin times)
Briefly, Taproot is that neat new thing that gets us:
So yes, let's activate taproot!

The SegWit Wars

The biggest problem with activating Taproot is PTSD from the previous softfork, SegWit. Pieter Wuille, one of the authors of the current Taproot proposal, has consistently held the position that he will not discuss activation, and will accept whatever activation process is imposed on Taproot. Other developers have expressed similar opinions.
So what happened with SegWit activation that was so traumatic? SegWit used the BIP9 activation method. Let's dive into BIP9!

BIP9 Miner-Activated Soft Fork

Basically, BIP9 has a bunch of parameters; the two that matter most are the nVersion bit that miners set to signal readiness, and the timeout after which the deployment is abandoned if it has not activated.
Now there are other parameters (name, starttime) but they are not anywhere near as important as the above two.
A number that is not a parameter, is 95%. Basically, activation of a BIP9 softfork is considered as actually succeeding if at least 95% of blocks in the last 2 weeks had the specified bit in the nVersion set. If less than 95% had this bit set before the timeout, then the upgrade fails and never goes into the network. This is not a parameter: it is a constant defined by BIP9, and developers using BIP9 activation cannot change this.
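To make the mechanics concrete, here is a minimal sketch (my own illustration, not Bitcoin Core code) of the per-period check BIP9 describes: count signalling blocks over one 2016-block retarget period and compare against the 95% threshold (1916 blocks on mainnet); the example version numbers are made up.

    # Minimal sketch of a BIP9-style signalling check over one retarget period.
    PERIOD = 2016       # blocks per retarget period on mainnet
    THRESHOLD = 1916    # 95% of 2016, the BIP9 lock-in threshold

    def signals(n_version: int, bit: int) -> bool:
        # A block signals for `bit` if the top 3 version bits are '001'
        # (the BIP9 versioning scheme) and the deployment's bit is set.
        return (n_version & 0xE0000000) == 0x20000000 and bool((n_version >> bit) & 1)

    def period_locks_in(versions: list, bit: int) -> bool:
        # True if enough blocks in one full period signalled the deployment.
        return sum(signals(v, bit) for v in versions) >= THRESHOLD

    # Example: ~96% of a period signalling bit 1 (the bit SegWit used) locks in.
    example = [0x20000002] * 1940 + [0x20000000] * 76
    print(period_locks_in(example, 1))  # True
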
So, first some simple questions and their answers:

The Great Battles of the SegWit Wars

SegWit not only fixed transaction malleability, it also created a practical softforkable blocksize increase that also rebalanced weights so that the cost of spending a UTXO is about the same as the cost of creating UTXOs (and spending UTXOs is "better" since it limits the size of the UTXO set that every fullnode has to maintain).
So SegWit was written, the activation was decided to be BIP9, and then.... miner signalling stalled at below 75%.
Thus were the Great SegWit Wars started.

BIP9 Feature Hostage

If you are a miner with at least 5% global hashpower, you can hold a BIP9-activated softfork hostage.
You might even secretly want the softfork to actually push through. But you might want to extract concessions from the users and the developers. Like removing the halvening. Or raising or even removing the block size caps (which helps larger miners more than smaller miners, making it easier to become a bigger fish that eats all the smaller fishes). Or whatever.
With BIP9, you can hold the softfork hostage. You just hold out and refuse to signal. You tell everyone you will signal, if and only if certain concessions are given to you.
This ability by miners to hold a feature hostage was enabled because of the miner-exit allowed by the timeout on BIP9. Prior to that, miners were considered little more than expendable security guards, paid for the risk they take to secure the network, but not special in the grand scheme of Bitcoin.

Covert ASICBoost

ASICBoost was a novel way of optimizing SHA256 mining, by taking advantage of the structure of the 80-byte header that is hashed in order to perform proof-of-work. The details of ASICBoost are out of scope here, but you can read about it elsewhere.
Here is a short summary of the two types of ASICBoost relevant to the activation discussion: Overt ASICBoost grinds spare nVersion bits in the block header, while Covert ASICBoost instead grinds the ordering of transactions (and therefore the merkle root) in the block.
Now, "overt" means "obvious", while "covert" means hidden. Overt ASICBoost is obvious because nVersion bits that are not currently in use for BIP9 activations are usually 0 by default, so setting those bits to 1 makes it obvious that you are doing something weird (namely, Overt ASICBoost). Covert ASICBoost is non-obvious because the order of transactions in a block are up to the miner anyway, so the miner rearranging the transactions in order to get lower power consumption is not going to be detected.
Unfortunately, while Overt ASICBoost was compatible with SegWit, Covert ASICBoost was not. This is because, pre-SegWit, only the block header Merkle tree committed to the transaction ordering. However, with SegWit, another Merkle tree exists, which commits to transaction ordering as well. Covert ASICBoost would require more computation to manipulate two Merkle trees, obviating the power benefits of Covert ASICBoost anyway.
Now, miners want to use ASICBoost (indeed, about 60-70% of current miners probably use Overt ASICBoost nowadays; if you have a Bitcoin fullnode running you will see logs with lots of "60 of last 100 blocks had unexpected versions", which is exactly what you would see with the nVersion manipulation that Overt ASICBoost does). But remember: ASICBoost was, at the time, a novel improvement. Not all miners had ASICBoost hardware. Those who did, did not want it known that they had ASICBoost hardware, and wanted to do Covert ASICBoost!
But Covert ASICBoost is incompatible with SegWit, because SegWit actually has two Merkle trees of transaction data, and Covert ASICBoost works by fudging around with transaction ordering in a block, and recomputing two Merkle Trees is more expensive than recomputing just one (and loses the ASICBoost advantage).
Of course, those miners that wanted Covert ASICBoost did not want to openly admit that they had ASICBoost hardware; they wanted to keep their advantage secret because miners are strongly competitive in a very tight market. And doing ASICBoost covertly was just the ticket, but it could not work post-SegWit.
Fortunately, due to the BIP9 activation process, they could hold SegWit hostage while covertly taking advantage of Covert ASICBoost!

UASF: BIP148 and BIP8

When the incompatibility between Covert ASICBoost and SegWit was realized, still, activation of SegWit stalled, and miners were still not openly claiming that ASICBoost was related to non-activation of SegWit.
Eventually, a new proposal was created: BIP148. With this rule, 3 months before the end of the SegWit timeout, nodes would reject blocks that did not signal SegWit. Thus, 3 months before SegWit timeout, BIP148 would force activation of SegWit.
This proposal was not accepted by Bitcoin Core, due to the shortening of the timeout (it effectively times out 3 months before the initial SegWit timeout). Instead, a fork of Bitcoin Core was created which added the patch to comply with BIP148. This was claimed as a User Activated Soft Fork, UASF, since users could freely download the alternate fork rather than sticking with the developers of Bitcoin Core.
Now, BIP148 effectively is just a BIP9 activation, except at its (earlier) timeout, the new rules would be activated anyway (instead of the BIP9-mandated behavior that the upgrade is cancelled at the end of the timeout).
BIP148 was actually inspired by the BIP8 proposal (the link here is a historical version; BIP8 has been updated recently, precisely in preparation for Taproot activation). BIP8 is basically BIP9, but at the end of timeout, the softfork is activated anyway rather than cancelled.
This removed the ability of miners to hold the softfork hostage. At best, they can delay the activation, but not stop it entirely by holding out as in BIP9.
Of course, this implies risk that not all miners have upgraded before activation, leading to possible losses for SPV users, as well as again re-pressuring miners to signal activation, possibly without the miners actually upgrading their software to properly impose the new softfork rules.

BIP91, SegWit2X, and The Aftermath

BIP148 inspired countermeasures, possibly from the Covert ASICBoost miners, possibly from concerned users who wanted to offer concessions to miners. To this day, the common name for BIP148 - UASF - remains an emotionally-charged rallying cry for parts of the Bitcoin community.
One of these was SegWit2X. This was brokered in a deal between some Bitcoin personalities at a conference in New York, and thus part of the so-called "New York Agreement" or NYA, another emotionally-charged acronym.
The text of the NYA was basically:
  1. Set up a new activation threshold at 80% signalled at bit 4 (vs bit 1 for SegWit).
    • When this 80% signalling was reached, miners would require that bit 1 for SegWit be signalled to achieve the 95% activation needed for SegWit.
  2. If the bit 4 signalling reached 80%, increase the block weight limit from the SegWit 4000000 to the SegWit2X 8000000, 6 months after bit 1 activation.
The first item above was coded in BIP91.
Unfortunately, if you read BIP91 independently of the NYA, you might come to the conclusion that BIP91 was only about lowering the threshold to 80%. In particular, BIP91 never mentions anything about the second point above; it never mentions that the bit 4 80% threshold would also signal for a later hardfork increase in the weight limit.
Because of this, even though there are claims that NYA (SegWit2X) reached 80% dominance, a close reading of BIP91 shows that the 80% dominance was only for SegWit activation, without necessarily a later 2x capacity hardfork (SegWit2X).
This ambiguity of bit 4 (NYA says it includes a 2x capacity hardfork, BIP91 says it does not) has continued to be a thorn in blocksize debates later. Economically speaking, Bitcoin futures between SegWit and SegWit2X showed strong economic dominance in favor of SegWit (SegWit2X futures were traded at a fraction in value of SegWit futures: I personally made a tidy but small amount of money betting against SegWit2X in the futures market), so suggesting that NYA achieved 80% dominance even in mining is laughable, but the NYA text that ties bit 4 to SegWit2X still exists.
Historically, BIP91 triggered, which caused SegWit to activate before the shorter BIP148 timeout. BIP148 proponents continue to hold to this day that it was the BIP148 shorter timeout and the no-compromises-activate-on-August-1 stance that made miners flock to BIP91 as a face-saving tactic that actually removed the second clause of NYA. NYA supporters keep pointing to the bit 4 text in the NYA and the historical activation of BIP91 as a failed promise by Bitcoin developers.

Taproot Activation Proposals

There are two primary proposals I can see for Taproot activation:
  1. BIP8.
  2. Modern Softfork Activation.
We have discussed BIP8: roughly, it has a bit and a timeout; if 95% of miners signal the bit it activates, and at the end of the timeout it activates anyway. (EDIT: BIP8 has had recent updates: at the end of the timeout it can now activate or fail. For the most part, in the below text "BIP8" means BIP8-and-activate-at-timeout, and "BIP9" means BIP8-and-fail-at-timeout.)
So let's take a look at Modern Softfork Activation!

Modern Softfork Activation

This is a more complex activation method, composed of BIP9 and BIP8 as subcomponents.
  1. First have a 12-month BIP9 (fail at timeout).
  2. If the above fails to activate, have a 6-month discussion period during which users and developers and miners discuss whether to continue to step 3.
  3. Have a 24-month BIP8 (activate at timeout).
The total above is 42 months, if you are counting: 3.5 years worst-case activation.
The logic here is that if there are no problems, BIP9 will work just fine anyway. And if there are problems, the 6-month period should weed it out. Finally, miners cannot hold the feature hostage since the 24-month BIP8 period will exist anyway.
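As a rough sketch of how those three phases compose (my own pseudocode; the function names and monthly-granularity polling are purely illustrative, while the durations and the 95% figure come from the description above):

    # Rough sketch of the "Modern Softfork Activation" flow described above.
    def modern_softfork_activation(miner_support, community_still_wants_it):
        # Phase 1: 12-month BIP9-style period -- fail at timeout.
        for month in range(12):
            if miner_support(month) >= 0.95:
                return "activated by miner signalling (BIP9 phase)"

        # Phase 2: 6-month discussion period for users, developers and miners.
        if not community_still_wants_it():
            return "abandoned after the discussion period"

        # Phase 3: 24-month BIP8-style period -- activates at timeout regardless.
        for month in range(18, 42):
            if miner_support(month) >= 0.95:
                return "activated early by miner signalling (BIP8 phase)"
        return "activated at the BIP8 timeout"
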

PSA: Being Resilient to Upgrades

Software is very brittle.
Anyone who has been using software for a long time has experienced something like this:
  1. You hear a new version of your favorite software has a nice new feature.
  2. Excited, you install the new version.
  3. You find that the new version has subtle incompatibilities with your current workflow.
  4. You are sad and downgrade to the older version.
  5. You find out that the new version has changed your files in incompatible ways that the old version cannot work with anymore.
  6. You tearfully reinstall the newer version and figure out how to recover your lost productivity now that you have to adapt to a new workflow.
If you are a technically-competent user, you might codify your workflow into a bunch of programs. And then you upgrade one of the external pieces of software you are using, and find that it has a subtle incompatibility with your current workflow which is based on a bunch of simple programs you wrote yourself. And if those simple programs are used as the basis of some important production system, you have just screwed up because you upgraded software on an important production system.
And well, one of the issues with new softfork activation is that if not enough people (users and miners) upgrade to the newest Bitcoin software, the security of the new softfork rules is at risk.
Upgrading software of any kind is always a risk, and the more software you build on top of the software-being-upgraded, the greater you risk your tower of software collapsing while you change its foundations.
So if you have some complex Bitcoin-manipulating system with Bitcoin somewhere at the foundations, consider running two Bitcoin nodes:
  1. One is a "stable-version" Bitcoin node. Once it has synced, set it up to connect=x.x.x.x to the second node below (so that your ISP bandwidth is only spent on the second node). Use this node to run all your software: it's a stable version that you don't change for long periods of time. Enable txindex, disable pruning, whatever your software needs.
  2. The other is an "always-up-to-date" Bitcoin Node. Keep its storage down with pruning (initially sync it off the "stable-version" node). You can't use blocksonly if your "stable-version" node needs to send transactions, but otherwise this "always-up-to-date" Bitcoin node can be kept as a low-resource node, so you can run both nodes in the same machine.
When a new Bitcoin version comes up, you just upgrade the "always-up-to-date" Bitcoin node. This protects you if a future softfork activates, you will only receive valid Bitcoin blocks and transactions. Since this node has nothing running on top of it, it is just a special peer of the "stable-version" node, any software incompatibilities with your system software do not exist.
Your "stable-version" Bitcoin node remains the same version until you are ready to actually upgrade this node and are prepared to rewrite most of the software you have running on top of it due to version compatibility problems.
When upgrading the "always-up-to-date", you can bring it down safely and then start it later. Your "stable-version" will keep running, disconnected from the network, but otherwise still available for whatever queries. You do need some system to stop the "always-up-to-date" node if for any reason the "stable-version" goes down (otherwise, if the "always-up-to-date" advances its pruning window past what your "stable-version" has, the "stable-version" cannot sync afterwards), but if you are technically competent enough that you need to do this, you are technically competent enough to write such a trivial monitor program (EDIT: gmax notes you can adjust the pruning window by RPC commands to help with this as well).
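A minimal sketch of such a monitor might look like the following (assumptions on my part: both nodes are reachable with bitcoin-cli via their own datadirs, and the paths and 60-second poll interval are made up):

    #!/usr/bin/env python3
    # Sketch of the "trivial monitor program" mentioned above: if the
    # stable-version node stops responding, shut down the always-up-to-date
    # (pruned) node so its pruning window cannot advance past what the
    # stable node has seen.
    import subprocess, time

    STABLE_DATADIR = "/data/bitcoin-stable"   # assumption: your stable node
    PRUNED_DATADIR = "/data/bitcoin-latest"   # assumption: your pruned node

    def cli(datadir, *args):
        return subprocess.run(["bitcoin-cli", f"-datadir={datadir}", *args],
                              capture_output=True, text=True, timeout=30)

    while True:
        if cli(STABLE_DATADIR, "getblockcount").returncode != 0:
            # Stable node is down: stop the pruned node until it is back.
            cli(PRUNED_DATADIR, "stop")
        time.sleep(60)
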
This recommendation is from gmaxwell on IRC, by the way.
submitted by almkglor to Bitcoin [link] [comments]

Mainnet project: an important change. If you are a donor, please read.

Hi everybody.
It has been one week since the mainnet project got the funding and I have an important update to make.
A little bit about the progress: I've found a wonderful developer, who is helping with the library, so it is starting to take some shape. I'm ironing out our REST API, got some useful feedback, continuing to do so. About 0.17% of the total funding spent so far.
The important update though is that I have decided to take the development and spending private, instead of public. Before I explain what that means and why, I understand that it might upset some donors. So, if you have pledged any amount and disagree with my change for any reason - please contact me (DM, or [email protected]) and I'll refund your pledge completely, no questions asked.
(Please sign any message using the address that you used to prove that you sent the funds, see the list of donors here to find your pledge and the link to the funding donation to find which address you sent from).
If more than 50% of pledges ask for money back, I'll just return everything to everybody in full and we'll consider the project cancelled. At that point, if anyone is willing to take on the project (via a new Flipstarter or something), I'll donate the domain to them. Everything that is done so far is MIT licensed, so anyone is free to take it at any moment.
Let the market decide!
I've got to tell you that I'm a bit disappointed with our progress so far. I expected a lot of people willing to earn some money, but I've got only 4 relevant developers; 3 of them passed a very simple test, and only one is actually doing anything.
This was not what I expected when I promised to work publicly and with BCH developers.
Another problem is that I have a certain vision that I described in the project description. In addition to that vision there is also a lot of experience talking to read.cash users. A lot of them are in countries with very bad Internet (2G, few kilobytes per second), using very old Android phones (10+ years old, the size of an iPhone 4 and half its speed). And I also really hope that someday we will have 100MB blocks, 1GB, 1TB blocks. But now I'm tied up in arguments with BCH developers who argue that many current solutions are good enough already and we don't need to change them - just build on top of a few convoluted and complex protocols, just download a block when needed (again, Africa, 2G, 100MB blocks), just download 640,000 block headers, listen to the whole mempool (with 1TB blocks we'll have a 1TB mempool) - it's fine, blocks are tiny... Just send a few queries (now)... Just download the mempool fully.
(To those of you that know what this is about, please don't name names, I'm not here to play the blame game, everybody is entitled to their own opinions. It's fine.)
If your wallet becomes too big - create a new one. It's fine.
Sidenote: my read.cash wallet that gets the fees takes a few hours to open now, and it's barely 9 months old! I find current solutions unacceptable, I want my wallet to open up immediately and handle 100MB blocks as well as 60KB blocks.
I don't want to develop for tiny blocks or tiny wallets that need to be changed every few months.. I want huge blocks! I don't want mainnet to be as brittle as to break at the first sight of success.
A few of these discussions got me really tired and I have no leverage on these guys. They have money now, they have their vision, I have mine, described on the site, they don't want to do it my way. I didn't collect the funds to do it their way.
Yet I have made a commitment to work with them.
This is very tiresome. I feel like I've got myself into a trap - I have to work with these people, they don't want to work on my stuff.
This is just stupid.
One more thing is that now that I have Slack - I'm caught in endless private discussions of people trying to sell me their vision of how stuff should be done or questions about me or read.cash... I didn't sign up for that, I barely have any time to do the work, I don't have time for this, sorry.
Change #1: Private development
Having said that, I'm moving the project to private development.
Frankly, all I care about is to get this project done. I added an additional burden on myself to do the public development. And it's tiresome.
The plan would be to hire some outside developers, using regular contracts, so that they don't have THEIR ideas on how to do the project and they'll just do what I described.
I think everybody cares about the end result - library working, document being written, etc...
Change #2: Private spending
Hired developers also means salaries. When people (in the real world) know salaries of other people, it leads to conflicts. I went through this experiment (public salaries) once in my life, I won't go through that again. Even people knowing your budget become a problem, since they start to bargain with you. (Again, we're talking about outside developers, they are not interested in BCH success, they are interested in getting as much money as possible)
By private spending I mean that I'll post periodically how much is done and approximately how much of the funds is left, but no details on who got what for what. Right now there's 99.83% of the funds left.
Some of you might see it as a money grab or something else - I can't blame you, but I'd rather see this project cancelled by market forces than drown in endless fights about why we should do exactly nothing or their idea, hope for small blocks and use what we have no matter how convoluted or hard it is, or why somebody's hourly rate should be bigger than that guy's.
Will this lead to everyone cancelling their donations? It sure could! It's voluntary funding after all, I can't force anyone to love what I do or how I do it.
If you donated and want a refund to your original address - just ping me.
When this post is 48 hours old, if more than 50% pledges remain, the project will move on as described above. If 50%+ cancels - everybody gets refunds to their original addresses.
submitted by readcash to btc [link] [comments]

6 Reasons Why Serum Won't Succeed


The world of DeFi is exploding but is it all it’s made out to be?

DeFi (decentralised finance) is most certainly the buzz in the crypto world this minute. It's bringing back feelings similar to the 2017/18 ICO phase, when a mammoth wave of new projects began to explode onto the scene, each with their own promise of new innovation and use case.
Hindsight has shown us that most of those projects have ultimately failed, or worse, were outright scams that took advantage of not so wise investors looking to make a buck. Obviously, not all projects fit that description, with many teams still around today working on and delivering their individual visions. Crypto is, after all, still a big experiment of new technology.

Enter DeFi: Serum

DeFi has exploded into the limelight over the last few months, with some tokens appreciating hundreds of percent in price. It appears to be the catalyst that has driven a huge market shift in the crypto world, and for those who’ve been around a number of years, this is a welcome change.
In this piece, I’m going to examine a particular project called Serum.
Serum is the world’s first completely decentralized derivatives exchange with trustless cross-chain trading brought to you by Project Serum.
The Serum Project is aiming to create both a decentralised exchange and a cross-chain swapping mechanism. In this article, I’m going to focus solely on the cross-chain swapping aspect of Serum.
Although the Serum whitepaper is quite short and lacking in detail, it is useful to derive some understanding of how the cross-chain swapping protocol should work. Throughout this review, I will use it to describe how the imagined protocol works.

Overview

Let's assume Alice wants to trade some BTC for ETH and Bob wants to trade some ETH for BTC using Serum. These two users are matched and agree on a price using an on-chain order book on the Solana blockchain (the whitepaper provides no practical details on how this is done).
Once these users are matched, Bob must send the ETH he wants to trade to an Ethereum smart contract, plus some amount of ETH (~200 USD worth; see section 4 below) to the smart contract as collateral. Alice will also need to send some collateral to the smart contract. Once this initial setup process is complete, Alice then has to send her BTC to Bob's BTC address, and if Bob receives the BTC from Alice he can then release his ETH from the smart contract, sending it to Alice's ETH address. Upon completion of this, both Alice and Bob are refunded their ETH collateral.
So what happens if something goes wrong? For example, say Alice never sends BTC to Bob; after some period of time Bob can initiate a dispute. When the dispute begins, both Alice and Bob present a portion of the Bitcoin blockchain information to the smart contract (see section 3). The smart contract then decides whether or not Alice did send BTC to Bob. If she hasn't, then the smart contract returns Bob's ETH and collateral to Bob and also takes Alice's ETH collateral and gives that to Bob. The same occurs in reverse if Alice sends BTC but Bob never approves the transfer of ETH from the smart contract.
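The whitepaper gives no contract code, so the following is only a toy model of the payout logic described above (nothing to do with Serum's actual contracts; the names and structure are invented for illustration):

    # Toy model of the swap flow described above; not Serum's actual contract.
    class SwapEscrow:
        def __init__(self, eth_for_sale, alice_collateral, bob_collateral):
            self.eth = eth_for_sale            # Bob's ETH locked in the contract
            self.coll = {"alice": alice_collateral, "bob": bob_collateral}

        def bob_releases(self):
            # Happy path: Bob saw Alice's BTC arrive and releases the ETH.
            return {"alice": self.eth + self.coll["alice"],
                    "bob": self.coll["bob"]}

        def resolve_dispute(self, alice_sent_btc):
            # Dispute path: both sides present Bitcoin chain data and the
            # contract awards the misbehaving party's collateral to the victim.
            if alice_sent_btc:   # Bob refused to release the ETH
                return {"alice": self.eth + self.coll["alice"] + self.coll["bob"]}
            else:                # Alice never sent the BTC
                return {"bob": self.eth + self.coll["bob"] + self.coll["alice"]}
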
This scheme seems pretty simple, there’s no oracles and no centralised parties, however, it has a number of disadvantages.

1. User-Provided Collateral Is Bad for User Experience

Each time a user conducts a swap they must reserve some percentage or fixed amount to cover the collateral for the swap. This collateral amount needs to be present to prevent griefing attacks where users initiate swaps with no intention of ever following through and sending funds to the alternate participant.
However, this creates a poor user experience, as both Alice and Bob need to have at least the value of the dispute fee committed to the contract in collateral before they conduct a swap. This is totally foreign to the normal exchange experience, in which you only require a single coin and a single transaction to begin trading. For example, if using Serum to trade Bitcoin you would need to hold Bitcoin and ~200 USD of Ethereum and also interact with the Ethereum chain before any swap occurs. This adds unnecessary complexity and confusion, especially for newcomers to the crypto space.

2. ETH Must Always Be on One Side of the Swap

Although the Serum method of cross-chain swapping could occur on any blockchain with smart contracts, the Serum whitepaper makes it clear the Serum arbitration contract is going to be deployed on the Ethereum blockchain. This means one party must always be locking the full value of the trade in ETH using an Ethereum smart contract.
This makes it impossible, for example, to do a single-step trade between Bitcoin and Monero, since the swap would need to be from Bitcoin to ETH first and then from ETH to Monero. This is comparable to other proposed cross-chain swap systems like Thorchain and Blockswap; however, since those networks use AMMs (automated market makers) and decentralized vaults to take custody of funds, the user need not interact with the intermediary chain at all.
Instead in Serum, the user wanting to swap Bitcoin to Monero will need to do the following steps:
  1. Send Ethereum collateral to the Serum arbitration contract
  2. Send Bitcoin to the user they are swapping with.
  3. Receive Ethereum
  4. Send Ethereum back to Serum arbitration contract
  5. Receive Monero
  6. Send Ethereum out of Serum arbitration contract
  7. Receive back Ethereum collateral
It might be possible to remove or simplify step 4, depending on how the smart contract is built; however, this means a swap from BTC to Monero would require two Ethereum transactions and one Bitcoin transaction in the best-case scenario. Compared with the experience of other cross-chain swapping mechanisms, which only require the user to send a single transaction to swap between two assets, this is very poor user experience.

3. Proving Transactions on Arbitrary Chains to a Smart Contract Is Not Trivial

Perhaps the most central part of the Serum cross-chain swapping mechanism is left completely unexplored in the Serum whitepaper with only a brief explanation given.
“[The] Smart Contract is programmed to parse whether a proposed BTC blockchain is valid; it can then check which of Alice and Bob send the longer valid blockchain, and settle in their favor”
This is not a trivial problem, and it is unclear how this actually works from the explanation given in the Serum whitepaper. What actually needs to be presented to the smart contract to prove a Bitcoin transaction? Typically, when talking about SPV, the smart contract would need the block headers of all previous blocks and a merkle inclusion proof. This is far too heavy to submit in a dispute. Instead, Serum could use NIPoPoWs (non-interactive proofs of proof-of-work); however, these proofs only work on chains with fixed difficulty and are still probably prohibitively large (~100KB) to be submitted as a proof to a contract. Other solutions like Flyclient are more versatile, but proof sizes are much larger and have failed to see much real-world adoption.
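For context on what a merkle inclusion proof even is, here is a minimal sketch of Bitcoin-style verification (double SHA256 up the branch). The branch itself is small; the hard part the whitepaper leaves unexplained is how the contract decides which set of headers represents the valid, longest chain.

    import hashlib

    def dsha256(b):
        # Bitcoin hashes everything with double SHA256.
        return hashlib.sha256(hashlib.sha256(b).digest()).digest()

    def merkle_root_from_proof(txid, index, branch):
        # Recompute the merkle root from a txid, its position in the block,
        # and the list of sibling hashes up the tree (internal byte order).
        h = txid
        for sibling in branch:
            if index & 1:               # we are the right child at this level
                h = dsha256(sibling + h)
            else:                       # we are the left child at this level
                h = dsha256(h + sibling)
            index >>= 1
        return h

    # An SPV-style check then compares the result against the merkle root field
    # of a block header the verifier independently believes is in the best chain.
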
Without explaining how they actually plan to do this validation of Bitcoin transactions, users are left in the dark about how secure their solution actually is.

4. High Dispute Fees Force Large Collateral on Small Trades

Although disputes should almost never happen because of the incentives and punishments designed into the Serum protocol, the way they are designed has negative impacts on the use of the network.
Although the Serum whitepaper does not say how the dispute mechanism works, they do say that it will cost about ~100 USD in GAS to dispute a swap.
Note: keep in mind that the Serum paper was published in July 2020, when the gas price was about 50 Gwei; as Ethereum use has picked up over the past month, we have seen average GAS prices as high as 250 Gwei, with the average price right now about 120 Gwei.
This means that at the height of GAS prices it could have cost a user ~500 USD to dispute a swap.
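Since a dispute costs a roughly fixed amount of gas, the fiat cost scales roughly linearly with the gas price. A back-of-the-envelope (my own, assuming gas usage and the ETH/USD price stay constant):

    # The ~100 USD dispute cost quoted at ~50 Gwei, rescaled to other gas prices.
    base_cost_usd, base_gas_gwei = 100, 50
    for gwei in (50, 120, 250):
        print(f"{gwei:>3} Gwei -> ~{base_cost_usd * gwei / base_gas_gwei:.0f} USD")
    # 50 Gwei -> ~100 USD, 120 Gwei -> ~240 USD, 250 Gwei -> ~500 USD
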
This means that, for the network to ensure loss-making cross-chain swaps aren't made, each user must deploy at least $200 in collateral on each side. It may be possible to lower this collateral if we assume the attacker is not financially motivated; however, there is a lower bound below which ransom attacks become possible on low-value trades.
Further, and perhaps more damagingly, this means in a trade of any size the user needs to have at least 300 USD in ETH lying around: 100 USD in ETH for the required collateral and 200 USD if they need to challenge the transaction.
This further adds to the poor user experience when using Serum for cross-chain swapping.

5. Swaps Are Not Set and Forget

Instead of being able to send a transaction and receive funds on the blockchain you are swapping to, the process is highly interactive. In the case where I am swapping ETH for Bitcoin, the following occurs:
If the Bitcoin transaction is never received then I need to wait for a timeout to occur before I can participate in the dispute process.
And on the Bitcoin side (assuming the seller is ready), the following must take place:
If the Seller never accepts the Bitcoin I sent to him then I need to wait on line for the dispute process.
This presents a strange user experience where the seller or seller’s wallet must be left online during this whole process and be ready to sign a new transaction if they need to dispute transactions or unlock funds from a smart contract.
This is different from the typical exchange or swapping scenario in which, once your funds are sent you can be assured you will receive the amount you expected in your swap back to you, without any of your wallets needing to remain online.

6. The Serum Token Seems to Lack a Use Case

The cross-chain swapping protocol Serum describes in its whitepaper could easily be forked and launched on the Ethereum blockchain without having any need for the Serum token. It seems that the Serum token will be used in some capacity when placing orders on the Solana-based blockchain; however, spam prevention for the order book could just as easily be handled with traditional rate-limiting schemes.
There is some brief mention of future governance abilities for token holders, however, as a common theme in their whitepaper, details are scarce:
Serum is anticipated to include a limited governance model based on the SRM token. While most of the Serum ecosystem will be immutable, some parameters without large security risks (e.g. future fees) may be modified via a governance vote of SRM tokens.

Conclusion

Until satisfactory answers are given to these questions, I would be looking at other projects that are attempting to build platforms for cross-chain swaps. As previously mentioned, Thorchain & Blockswap show some promise in design, whilst there are some others competing in this space too, such as Incognito and RenVM. However, this area is still extremely immature, so plenty of testing and time is required before we can call any of these projects a success.
If you’ve got any feedback or thoughts about Serum, cross-chain swapping or DeFi in general, please don’t be shy in leaving a comment.
submitted by Loooong_Loooong_Man to CryptoCurrency [link] [comments]

[ Bitcoin ] Technical: Taproot: Why Activate?

Topic originally posted in Bitcoin by almkglor [link]
This is a follow-up on https://old.reddit.com/Bitcoin/comments/hqzp14/technical_the_path_to_taproot_activation/
Taproot! Everybody wants it!! But... you might ask yourself: sure, everybody else wants it, but why would I, sovereign Bitcoin HODLer, want it? Surely I can be better than everybody else because I swapped XXX fiat for Bitcoin unlike all those nocoiners?
And it is important for you to know the reasons why you, o sovereign Bitcoiner, would want Taproot activated. After all, your nodes (or the nodes your wallets use, which, if you are SPV, you hopefully can pester your wallet vendor/implementor about) need to be upgraded in order for Taproot activation to actually succeed instead of becoming a hot sticky mess.
First, let's consider some principles of Bitcoin.
I'm sure most of us here would agree that the above are very important principles of Bitcoin and that these are principles we would not be willing to remove. If anything, we would want those principles strengthened (especially the last one, financial privacy, which current Bitcoin is only sporadically strong with: you can get privacy, it just requires effort to do so).
So, how does Taproot affect those principles?

Taproot and Your /Coins

Most HODLers probably HODL their coins in singlesig addresses. Sadly, switching to Taproot would do very little for you (it gives a mild discount at spend time, at the cost of a mild increase in fee at receive time (paid by whoever sends to you, so if it's a self-send from a P2PKH or bech32 address, you pay for this); mostly a wash).
(technical details: a Taproot output is 1 version byte + 32 byte public key, while a P2WPKH (bech32 singlesig) output is 1 version byte + 20 byte public key hash, so the Taproot output spends 12 bytes more; spending from a P2WPKH requires revealing a 33-byte public key later, which is not needed with Taproot, and Taproot signatures are about 9 bytes smaller than P2WPKH signatures, but the 33 bytes plus 9 bytes is divided by 4 because of the witness discount, so it saves about 11 bytes; mostly a wash, it increases blockweight by about 1 virtual byte, 4 weight for each Taproot-output-input, compared to P2WPKH-output-input).
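Putting rough numbers on that parenthetical (these are the post's own approximations, not consensus-exact sizes):

    # Rough arithmetic behind "mostly a wash" (approximate figures from the text).
    output_extra_bytes = 12                 # Taproot output is ~12 bytes bigger
    witness_savings_vbytes = (33 + 9) / 4   # pubkey no longer revealed + smaller
                                            # signature, discounted 4x as witness
    net_vbytes = output_extra_bytes - witness_savings_vbytes
    print(net_vbytes)                       # ~1.5 vbytes extra per output+input
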
However, as your HODLings grow in value, you might start wondering if multisignature k-of-n setups might be better for the security of your savings. And it is in multisignature that Taproot starts to give benefits!
Taproot switches to using Schnorr signing scheme. Schnorr makes key aggregation -- constructing a single public key from multiple public keys -- almost as trivial as adding numbers together. "Almost" because it involves some fairly advanced math instead of simple boring number adding, but hey when was the last time you added up your grocery list prices by hand huh?
With current P2SH and P2WSH multisignature schemes, if you have a 2-of-3 setup, then to spend, you need to provide two different signatures from two different public keys. With Taproot, you can create, using special moon math, a single public key that represents your 2-of-3 setup. Then you just put two of your devices together, have them communicate to each other (this can be done airgapped, in theory, by sending QR codes: the software to do this is not even being built yet, but that's because Taproot hasn't activated yet!), and they will make a single signature to authorize any spend from your 2-of-3 address. That's 73 witness bytes -- 18.25 virtual bytes -- of signatures you save!
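The "special moon math" is, at its core, the linearity of elliptic-curve points. Very loosely (this glosses over the per-key tweaks that real schemes such as MuSig add to prevent rogue-key attacks):

    % Naive key aggregation from linearity (sketch only):
    P_{\mathrm{agg}} = P_1 + P_2 = x_1 G + x_2 G = (x_1 + x_2) G
    % and a jointly produced Schnorr signature verifies against P_agg
    % exactly like any single-signer signature:
    s G = R + H(R \,\|\, P_{\mathrm{agg}} \,\|\, m)\, P_{\mathrm{agg}}

A signature that verifies against the aggregate key looks identical onchain to any single-signer signature, which is where both the size savings and the privacy benefit come from.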
And if you decide that your current setup with 1-of-1 P2PKH / P2WPKH addresses is just fine as-is: well, that's the whole point of a softfork: backwards-compatibility; you can receive from Taproot users just fine, and once your wallet is updated for Taproot-sending support, you can send to Taproot users just fine as well!
(P2WPKH and P2WSH -- SegWit v0 -- addresses start with bc1q; Taproot -- SegWit v1 --- addresses start with bc1p, in case you wanted to know the difference; in bech32 q is 0, p is 1)
Now how about HODLers who keep all, or some, of their coins on custodial services? Well, any custodial service worth its salt would be doing at least 2-of-3, or probably something even bigger, like 11-of-15. So your custodial service, if it switched to using Taproot internally, could save a lot more (imagine an 11-of-15 getting reduced from 11 signatures to just 1!), which --- we can only hope! --- should translate to lower fees and better customer service from your custodial service!
So I think we can say, very accurately, that the Bitcoin principle --- that YOU are in control of your money --- can only be helped by Taproot (if you are doing multisignature), and, because P2PKH and P2WPKH remain validly-usable addresses in a Taproot future, will not be harmed by Taproot. Its benefit to this principle might be small (it mostly only benefits multisignature users) but since it has no drawbacks with this (i.e. singlesig users can continue to use P2WPKH and P2PKH still) this is still a nice, tidy win!
(even singlesig users get a minor benefit, in that multisig users will now reduce their blockchain space footprint, so that fees can be kept low for everybody; so for example even if you have your single set of private keys engraved on titanium plates sealed in an airtight box stored in a safe buried in a desert protected by angry nomads riding giant sandworms because you're the frickin' Kwisatz Haderach, you still gain some benefit from Taproot)
And here's the important part: if P2PKH/P2WPKH is working perfectly fine with you and you decide to never use Taproot yourself, Taproot will not affect you detrimentally. First do no harm!

Taproot and Your Contracts

No one is an island, no one lives alone. Give and you shall receive. You know: by trading with other people, you can gain expertise in some obscure little necessity of the world (and greatly increase your productivity in that little field), and then trade the products of your expertise for necessities other people have created, all of you thereby gaining gains from trade.
So, contracts, which are basically enforceable agreements that facilitate trading with people who you do not personally know and therefore might not trust.
Let's start with a simple example. You want to buy some gewgaws from somebody. But you don't know them personally. The seller wants the money, you want their gewgaws, but because of the lack of trust (you don't know them!! what if they're scammers??) neither of you can benefit from gains from trade.
However, suppose both of you know of some entity that both of you trust. That entity can act as a trusted escrow. The entity provides you security: this enables the trade, allowing both of you to get gains from trade.
In Bitcoin-land, this can be implemented as a 2-of-3 multisignature. The three signatories in the multisignature would be you, the gewgaw seller, and the escrow. You put the payment for the gewgaws into this 2-of-3 multisignature address.
Now, suppose it turns out neither of you are scammers (whaaaat!). You receive the gewgaws just fine and you're willing to pay up for them. Then you and the gewgaw seller just sign a transaction --- you and the gewgaw seller are 2, sufficient to trigger the 2-of-3 --- that spends from the 2-of-3 address to a singlesig the gewgaw seller wants (or whatever address the gewgaw seller wants).
But suppose some problem arises. The seller gave you gawgews instead of gewgaws. Or you decided to keep the gewgaws but not sign the transaction to release the funds to the seller. In either case, the escrow is notified, and it can sign with you to refund the funds back to you (if the seller was a scammer) or it can sign with the seller to forward the funds to the seller (if you were a scammer).
Taproot helps with this: as mentioned above, it allows multisignature setups to produce only one signature, reducing blockchain space usage, and thus making contracts --- which require multiple people by definition; you don't make contracts with yourself --- cheaper (which we hope enables more of these setups to happen for more gains from trade for everyone; also, moon and lambos).
(technology-wise, it's easier to make an n-of-n than a k-of-n, making a k-of-n would require a complex setup involving a long ritual with many communication rounds between the n participants, but an n-of-n can be done trivially with some moon math. You can, however, make what is effectively a 2-of-3 by using a three-branch SCRIPT: either 2-of-2 of you and seller, OR 2-of-2 of you and escrow, OR 2-of-2 of escrow and seller. Fortunately, Taproot adds a facility to embed a SCRIPT inside a public key, so you can have a 2-of-2 Taprooted address (between you and seller) with a SCRIPT branch that can instead be spent with 2-of-2 (you + escrow) OR 2-of-2 (seller + escrow), which implements the three-branched SCRIPT above. If neither of you are scammers (hopefully the common case) then you both sign using your keys and never have to contact the escrow, since you are just using the escrow public key without coordinating with them (because n-of-n is trivial but k-of-n requires setup with communication rounds), so in the "best case" where both of you are honest traders, you also get a privacy boost, in that the escrow never learns you have been trading on gewgaws, I mean ewww, gawgews are much better than gewgaws and therefore I now judge you for being a gewgaw enthusiast, you filthy gewgawer).

Taproot and Your Contracts, Part 2: Cryptographic Boogaloo

Now suppose you want to buy some data instead of things. For example, maybe you have some closed-source software in trial mode installed, and want to pay the developer for the full version. You want to pay for an activation code.
This can be done, today, by using an HTLC. The developer tells you the hash of the activation code. You pay to an HTLC, paying out to the developer if it reveals the preimage (the activation code), or refunding the money back to you after a pre-agreed timeout. If the developer claims the funds, it has to reveal the preimage, which is the activation code, and you can now activate your software. If the developer does not claim the funds by the timeout, you get refunded.
And you can do that, with HTLCs, today.
Of course, HTLCs do have problems:
Fortunately, with Schnorr (which is enabled by Taproot), we can now use the Scriptless Script construction by Andrew Poelstra. This Scriptless Script allows a new construction, the PTLC or Pointlocked Timelocked Contract. Instead of hashes and preimages, just replace "hash" with "point" and "preimage" with "scalar".
Or as you might know them: a "point" is really a "public key" and a "scalar" is really a "private key". What a PTLC does is that, given a particular public key, the pointlocked branch can be spent only if the spender reveals the private key of that public key to you.
Another nice thing with PTLCs is that they are deniable. What appears onchain is just a single 2-of-2 signature between you and the developer/manufacturer. It's like a magic trick. This signature has no special watermarks, it's a perfectly normal signature (the pledge). However, from this signature, plus some data given to you by the developer/manufacturer (known as the adaptor signature), you can derive the private key of a particular public key you both agree on (the turn). Anyone scraping the blockchain will just see signatures that look just like every other signature, and as long as nobody manages to hack you and get a copy of the adaptor signature or the private key, they cannot get the private key behind the public key (point) that the pointlocked branch needs (the prestige).
(Just to be clear, the public key you are getting the private key from is distinct from the public key that the developer/manufacturer will use for its funds. The activation key is different from the developer's onchain Bitcoin key, and it is the activation key whose private key you will be learning, not the developer's/manufacturer's onchain Bitcoin key.)
So: Taproot lets PTLCs exist onchain because it enables Schnorr, which is a requirement of PTLCs / Scriptless Script.
(technology-wise, take note that Scriptless Script works only for the "pointlocked" branch of the contract; you need normal Script, or a pre-signed nLockTimed transaction, for the "timelocked" branch. Since Taproot can embed a script, you can have the Taproot pubkey be a 2-of-2 to implement the Scriptless Script "pointlocked" branch, then have a hidden script that lets you recover the funds with an OP_CHECKLOCKTIMEVERIFY after the timeout if the seller does not claim the funds.)
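Stripped of nonce handling and exact BIP340 conventions, the pledge/turn/prestige trick above boils down to one scalar addition (a hedged sketch of the standard adaptor-signature relation, not a full derivation):

    T = tG          % the "point"; its secret scalar t is the activation key
    s = s' + t      % full signature = adaptor signature s' + hidden scalar t
    t = s - s'      % the buyer recovers t once the full signature s is onchain
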

Quantum Quibbles!

Now if you were really paying attention, you might have noticed this parenthetical:
(technical details: a Taproot output is 1 version byte + 32 byte public key, while a P2WPKH (bech32 singlesig) output is 1 version byte + 20 byte public key hash...)
So wait, Taproot uses raw 32-byte public keys, and not public key hashes? Isn't that more quantum-vulnerable??
Well, in theory yes. In practice, they probably are not.
It's not that hashes can be broken by quantum computers --- they're still not. Instead, you have to look at how you spend from a P2WPKH/P2PKH pay-to-public-key-hash.
When you spend from a P2PKH / P2WPKH, you have to reveal the public key. Then Bitcoin hashes it and checks if this matches with the public-key-hash, and only then actually validates the signature for that public key.
So an unconfirmed transaction, floating in the mempools of nodes globally, will show, in plain sight for everyone to see, your public key.
(public keys should be public, that's why they're called public keys, LOL)
And if quantum computers are fast enough to be of concern, then they are probably fast enough that, in the several minutes to several hours from broadcast to confirmation, they have already cracked the public key that is openly broadcast with your transaction. The owner of the quantum computer can now replace your unconfirmed transaction with one that pays the funds to itself. Even if you did not opt in to RBF, miners are still incentivized to support RBF on RBF-disabled transactions.
So the extra hash is not as significant a protection against quantum computers as you might think. Instead, the extra hash-and-compare needed is just extra validation effort.
Further, if you have ever, in the past, spent from the address, then there exists already a transaction indelibly stored on the blockchain, openly displaying the public key from which quantum computers can derive the private key. So those are still vulnerable to quantum computers.
For the most part, the cryptographers behind Taproot (and Bitcoin Core) are of the opinion that quantum computers capable of cracking Bitcoin pubkeys are unlikely to appear within a decade or two.
So, for now, the homomorphic and linear properties of elliptic curve cryptography provide a lot of benefits --- particularly the linearity property is what enables Scriptless Script and simple multisignature (i.e. multisignatures that are just 1 signature onchain). So it might be a good idea to take advantage of them now while we are still fairly safe against quantum computers. It seems likely that quantum-safe signature schemes are nonlinear (thus losing these advantages).

Summary

I Wanna Be The Taprooter!

So, do you want to help activate Taproot? Here's what you, mister sovereign Bitcoin HODLer, can do!

But I Hate Taproot!!

That's fine!

Discussions About Taproot Activation

submitted by anticensor_bot to u/anticensor_bot [link] [comments]

A new whitepaper analysing the performance and scalability of the Streamr pub/sub messaging Network is now available. Take a look at some of the fascinating key results in this introductory blog


Streamr Network: Performance and Scalability Whitepaper


The Corea milestone of the Streamr Network went live in late 2019. Since then a few people in the team have been working on an academic whitepaper to describe its design principles, position it with respect to prior art, and prove certain properties it has. The paper is now ready, and it has been submitted to the IEEE Access journal for peer review. It is also now published on the new Papers section on the project website. In this blog, I’ll introduce the paper and explain its key results. All the figures presented in this post are from the paper.
The reasons for doing this research and writing this paper were simple: many prospective users of the Network, especially more serious ones such as enterprises, ask questions like ‘how does it scale?’, ‘why does it scale?’, ‘what is the latency in the network?’, and ‘how much bandwidth is consumed?’. While some answers could be provided before, the Network in its currently deployed form is still small-scale and can’t really show a track record of scalability for example, so there was clearly a need to produce some in-depth material about the structure of the Network and its performance at large, global scale. The paper answers these questions.
Another reason is that decentralized peer-to-peer networks have experienced a new renaissance due to the rise in blockchain networks. Peer-to-peer pub/sub networks were a hot research topic in the early 2000s, but not many real-world implementations were ever created. Today, most blockchain networks use methods from that era under the hood to disseminate block headers, transactions, and other events important for them to function. Other megatrends like IoT and social media are also creating demand for new kinds of scalable message transport layers.

The latency vs. bandwidth tradeoff

The current Streamr Network uses regular random graphs as stream topologies. ‘Regular’ here means that nodes connect to a fixed number of other nodes that publish or subscribe to the same stream, and ‘random’ means that those nodes are selected randomly.
Random connections can of course mean that absurd routes get formed occasionally, for example a data point might travel from Germany to France via the US. But random graphs have been studied extensively in the academic literature, and their properties are not nearly as bad as the above example sounds — such graphs are actually quite good! Data always takes multiple routes in the network, and only the fastest route counts. The less-than-optimal routes are there for redundancy, and redundancy is good, because it improves security and churn tolerance.
There is an important parameter called node degree, which is the fixed number of nodes to which each node in a topology connects. A higher node degree means more duplication and thus more bandwidth consumption for each node, but it also means that fast routes are more likely to form. It’s a tradeoff; better latency can be traded for worse bandwidth consumption. In the following section, we’ll go deeper into analyzing this relationship.

Network diameter scales logarithmically

One useful metric to estimate the behavior of latency is the network diameter, which is the number of hops on the shortest path between the most distant pair of nodes in the network (i.e. the "longest shortest path"). The below plot shows how the network diameter behaves depending on node degree and number of nodes.

Network diameter
We can see that the network diameter increases logarithmically (very slowly), and a higher node degree ‘flattens the curve’. This is a property of random regular graphs, and this is very good — growing from 10,000 nodes to 100,000 nodes only increases the diameter by a few hops! To analyse the effect of the node degree further, we can plot the maximum network diameter using various node degrees:
Network diameter in network of 100 000 nodes
We can see that there are diminishing returns for increasing the node degree. On the other hand, the penalty (number of duplicates, i.e. bandwidth consumption) increases linearly with node degree:

Number of duplicates received by the non-publisher nodes
In the Streamr Network, each stream forms its own separate overlay network and can even have a custom node degree. This allows the owner of the stream to configure their preferred latency/bandwidth balance (imagine such a slider control in the Streamr Core UI). However, finding a good default value is important. From this analysis, we can conclude that:
  • The logarithmic behavior of network diameter leads us to hope that latency might behave logarithmically too, but since the number of hops is not the same as latency (in milliseconds), the scalability needs to be confirmed in the real world (see next section).
  • A node degree of 4 yields good latency/bandwidth balance, and we have selected this as the default value in the Streamr Network. This value is also used in all the real-world experiments described in the next section.
It’s worth noting that in such a network, the bandwidth requirement for publishers is determined by the node degree and not the number of subscribers. With a node degree 4 and a million subscribers, the publisher only uploads 4 copies of a data point, and the million subscribing nodes share the work of distributing the message among themselves. In contrast, a centralized data broker would need to push out a million copies.

Latency scales logarithmically

To see if actual latency scales logarithmically in real-world conditions, we ran large numbers of nodes in 16 different Amazon AWS data centers around the world. We ran experiments with network sizes ranging from 32 to 2048 nodes. Each node published messages to the network, and we measured how long it took for the other nodes to get the message. The experiment was repeated 10 times for each network size.
The below image displays one of the key results of the paper. It shows a CDF (cumulative distribution function) of the measured latencies across all experiments. The y-axis runs from 0 to 1, i.e. 0% to 100%.
CDF of message propagation delay
From this graph we can easily read things like: in a 32-node network (blue line), 50% of message deliveries happened within 150 ms globally, and all messages were delivered in around 250 ms. In the largest network of 2048 nodes (pink line), 99% of deliveries happened within 362 ms globally.
To put these results in context, PubNub, a centralized message brokering service, promises to deliver messages within 250 ms — and that’s a centralized service! Decentralization comes with unquestionable benefits (no vendor lock-in, no trust required, network effects, etc.), but if such protocols are inferior in terms of performance or cost, they won’t get adopted. It’s pretty safe to say that the Streamr Network is on par with centralized services even when it comes to latency, which is usually the Achilles’ heel of P2P networks (think of how slow blockchains are!). And the Network will only get better with time.
Then we tackled the big question: does the latency behave logarithmically?
Mean message propagation delay in Amazon experiments
Above, the thick line is the average latency for each network size. From the graph, we can see that the latency grows logarithmically as the network size increases, which means excellent scalability.
The shaded area shows the difference between the best and worst average latencies in each repeat. Here we can see the element of chance at play; due to the randomness in which nodes become neighbours, some topologies are faster than others. Given enough repeats, some near-optimal topologies can be found. The difference between average topologies and the best topologies gives us a glimpse of how much room for optimisation there is, i.e. with a smarter-than-random topology construction, how much improvement is possible (while still staying in the realm of regular graphs)? Out of the observed topologies, the difference between the average and the best observed topology is between 5–13%, so not that much. Other subclasses of graphs, such as irregular graphs, trees, and so on, can of course unlock more room for improvement, but they are different beasts and come with their own disadvantages too.
It’s also worth asking: how much worse is the measured latency compared to the fastest possible latency, i.e. that of a direct connection? While having direct connections between a publisher and subscribers is definitely not scalable, secure, or often even feasible due to firewalls, NATs and such, it’s still worth asking what the latency penalty of peer-to-peer is.

Relative delay penalty in Amazon experiments
As you can see, this plot has the same shape as the previous one, but the y-axis is different. Here, we are showing the relative delay penalty (RDP). It’s the latency in the peer-to-peer network (shown in the previous plot), divided by the latency of a direct connection measured with the ping tool. So a direct connection equals an RDP value of 1, and the measured RDP in the peer-to-peer network is roughly between 2 and 3 in the observed topologies. It increases logarithmically with network size, just like absolute latency.
Again, given that latency is the Achilles’ heel of decentralized systems, that’s not bad at all. It shows that such a network delivers acceptable performance for the vast majority of use cases, only excluding the most latency-sensitive ones, such as online gaming or arbitrage trading. For most other use cases, it doesn’t matter whether it takes 25 or 75 milliseconds to deliver a data point.

Latency is predictable

It’s useful for a messaging system to have consistent and predictable latency. Imagine for example a smart traffic system, where cars can alert each other about dangers on the road. It would be pretty bad if, even minutes after publishing it, some cars still haven’t received the warning. However, such delays easily occur in peer-to-peer networks. Everyone in the crypto space has seen first-hand how plenty of Bitcoin or Ethereum nodes lag even minutes behind the latest chain state.
So we wanted to see whether it would be possible to estimate the latencies in the peer-to-peer network if the topology and the latencies between connected pairs of nodes are known. We applied Dijkstra’s algorithm to compute estimates for average latencies from the input topology data, and compared the estimates to the actual measured average latencies:
Mean message propagation delay in Amazon experiments
We can see that, at least in these experiments, the estimates seemed to provide a lower bound for the actual values, and the average estimation error was 3.5%. The measured value is higher than the estimated one because the estimation only considers network delays, while in reality there is also a little bit of a processing delay at each node.
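For readers who want to play with the idea, here is a hedged sketch of that estimation approach: run Dijkstra's algorithm over the overlay topology using per-link latencies, and read off a delay estimate for every node. The topology and latency numbers below are invented, not taken from the experiments:

```python
# Given the overlay topology and per-link latencies, Dijkstra's algorithm yields
# a lower-bound estimate of the propagation delay from the publisher to each node.
import heapq

def dijkstra(graph: dict, source: str) -> dict:
    """graph maps node -> {neighbour: link latency in ms}; returns delay estimates."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue
        for nbr, w in graph[node].items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

topology = {
    "publisher": {"a": 40, "b": 55},
    "a": {"publisher": 40, "b": 20, "c": 60},
    "b": {"publisher": 55, "a": 20, "c": 35},
    "c": {"a": 60, "b": 35},
}
# Real measurements sit slightly above these estimates because of per-hop processing delay.
print(dijkstra(topology, "publisher"))
```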

Conclusion

The research has shown that the Streamr Network can be expected to deliver messages in roughly 150–350 milliseconds worldwide, even at a large scale with thousands of nodes subscribing to a stream. This is on par with centralized message brokers today, showing that the decentralized and peer-to-peer approach is a viable alternative for all but the most latency-sensitive applications.
It’s thrilling to think that by accepting a latency only 2–3 times longer than the latency of an unscalable and insecure direct connecion, applications can interconnect over an open fabric with global scalability, no single point of failure, no vendor lock-in, and no need to trust anyone — all that becomes available out of the box.
In the real-time data space, there are plenty of other aspects to explore, which we didn’t cover in this paper. For example, we did not measure throughput characteristics of network topologies. Different streams are independent, so clearly there’s scalability in the number of streams, and heavy streams can be partitioned, allowing each stream to scale too. Throughput is mainly limited, therefore, by the hardware and network connection used by the network nodes involved in a topology. Measuring the maximum throughput would basically be measuring the hardware as well as the performance of our implemented code. While interesting, this is not a high priority research target at this point in time. And thanks to the redundancy in the network, individual slow nodes do not slow down the whole topology; the data will arrive via faster nodes instead.
Also out of scope for this paper is analysing the costs of running such a network, including the OPEX for publishers and node operators. This is a topic of ongoing research, which we’re currently doing as part of designing the token incentive mechanisms of the Streamr Network, due to be implemented in a later milestone.
I hope that this blog has provided some insight into the fascinating results the team uncovered during this research. For a more in-depth look at the context of this work, and more detail about the research, we invite you to read the full paper.
If you have an interest in network performance and scalability from a developer or enterprise perspective, we will be hosting a talk about this research in the coming weeks, so keep an eye out for more details on the Streamr social media channels. In the meantime, feedback and comments are welcome. Please add a comment to this Reddit thread or email [[email protected]](mailto:[email protected]).
Originally published by Henri at blog.streamr.network on August 24, 2020.
submitted by thamilton5 to streamr [link] [comments]

POW and POS Algorithms

POW and POS Algorithms
The main process in the network is transaction processing, which technically comes down to adding a new block to the ledger. Blocks are added according to an agreed set of rules - the consensus algorithm.
Consensus rests on consistency: every network node reaches the same state after processing each transaction and each block. In other words, the consensus algorithm ensures that all nodes of the network always hold the same version of the blockchain, and it resolves conflicts between nodes.
The two most popular varieties are Proof of Work and Proof of Stake.
The Proof of Work (PoW) algorithm requires the author of a new block to solve a mathematical puzzle, and that puzzle can only be solved by brute-force search. The classic variant, implemented in Bitcoin, is to find a special value for the hash of the block header, which contains a link to the previous block.
All candidates for adding the block work on the same puzzle, and the chance of solving it before everyone else - and thus receiving the reward - depends on the amount of available computing power: the greater a node's processing power, the higher its chance. The desired block rate is maintained by adjusting the difficulty of the puzzle to the aggregate power of the network's nodes.
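A toy Python sketch of this brute-force search is shown below. It is only illustrative: real Bitcoin hashes an 80-byte header and compares the result against a 256-bit target rather than counting leading zero digits:

```python
# Toy proof-of-work: brute-force a nonce until the double-SHA256 of
# "header + nonce" starts with enough zero hex digits.
import hashlib

def mine(header: bytes, difficulty_zeros: int) -> int:
    prefix = "0" * difficulty_zeros
    nonce = 0
    while True:
        digest = hashlib.sha256(
            hashlib.sha256(header + nonce.to_bytes(8, "little")).digest()
        ).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

# Each extra zero makes the search roughly 16x harder: the knob that tunes block time.
print("found nonce:", mine(b"example block header", difficulty_zeros=4))
```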
Proof of Stake (PoS) is an alternative to PoW and addresses the problem of the high electricity cost of Bitcoin mining; the idea of PoS was first raised back in 2011 on the Bitcointalk forum. Instead of mining, in PoS network participants freeze a certain number of tokens in their wallets, and the algorithm then selects the next block producer from among them with probability depending on the size of the stake. In this way, participants back their good faith not with the cost of computation but directly with assets inside the network.
In PZM Cash, we abandoned delegated Proof of Stake and applied a significantly improved “classic” PoS algorithm. As a result, a very wide circle of users can take part in running the network (and receive rewards) - there are no significant restrictions or barriers to entry. PoS is also subject to centralization; however, buying coins is much easier than buying equipment and setting up mining (as in PoW), and the large number of coin purchasers and holders acts as an internal check on monopolization.
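The core selection step can be sketched in a few lines of Python. This is a generic stake-weighted lottery for illustration only, not the actual PZM Cash algorithm, which also has to source randomness in a verifiable, non-grindable way:

```python
# Generic stake-weighted block-producer selection (illustrative; stake values are made up).
import random

stakes = {"alice": 5_000, "bob": 2_500, "carol": 500}  # frozen token balances

def pick_producer(stakes: dict, rng: random.Random) -> str:
    """Choose the next block producer with probability proportional to stake."""
    holders = list(stakes)
    return rng.choices(holders, weights=[stakes[h] for h in holders], k=1)[0]

rng = random.Random(7)
picks = [pick_producer(stakes, rng) for _ in range(10_000)]
for holder in stakes:
    print(holder, round(picks.count(holder) / len(picks), 3))  # ~0.625 / 0.3125 / 0.0625
```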
https://preview.redd.it/ydkjylyf43q41.png?width=1200&format=png&auto=webp&s=f485f43b210318f8b2ae7580c75cf884a1271311
submitted by PZMCash to PZMCash [link] [comments]

1. 90+% of businesses that accept BTC also accept BCH 2. BTC miners can mine BCH just by flipping a switch [...]

1. 90+% of businesses that accept BTC also accept BCH 2. BTC miners can mine BCH just by flipping a switch [...] submitted by Egon_1 to btc [link] [comments]

Building Ergo: SPV security

There’s often a tension in the crypto world between security and convenience. That trade-off is unacceptable if we want these technologies to be widely used. Here’s how Ergo addresses one common and very important issue.
We all know that the most secure way to use Bitcoin, or any crypto, is to download a copy of the blockchain and run a full node yourself. That way, every time you or anyone else makes a transaction, your client checks the blockchain to ensure it’s valid. You don’t have to trust anyone else.
A full Bitcoin node checks all the blocks in the blockchain (using headers) and makes sure there are no fraudulent transactions. It’s a very secure way of using crypto – but there’s a problem. It requires significant bandwidth, storage and processing power. That kind of commodity hardware is expensive, and using a full node to validate and make transactions is in any case unsuitable for mobile devices. This is particularly true for Bitcoin, where the blockchain is over 270 GB and counting.
SPV
Simplified Payment Verification (SPV) is designed to address this problem, as described in the Bitcoin white paper:
Satoshi notes that this is not a perfect solution, and is vulnerable to an attacker overpowering the network and fooling SPV users.
Moreover, while SPV mode is intended for resource-limited devices, even this ‘lite’ approach is not always feasible. Ethereum’s headers alone total around 5 GB to download. Thus Ethereum mobile clients do not validate chain validity and so blindly have to trust third parties.
There are proposals to reduce the requirements for SPV mode by checking just a few random headers, instead of all of them. But it’s hard to do that securely.
Efficient SPV
Several years have been spent researching and developing secure protocols that allow for efficient SPV clients. The two best-known and most reliable protocols are NiPoPoWs and FlyClient.
Ergo implements NiPoPoWs, or Non-interactive Proof-of-Proof-of-Work. This technology can be explored in full on this dedicated website: https://nipopows.com:
This enables us to build a mobile SPV client that requires around just 100KB of block headers to be downloaded.
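As a rough picture of what any header-only client checks, here is a generic Python sketch: each header must commit to the hash of its predecessor and satisfy its stated proof-of-work target. The header format and fields below are invented for illustration; they are not Ergo's real header layout or its NiPoPoW verification logic:

```python
# Generic header-chain verification as an SPV-style client might perform it.
import hashlib
from dataclasses import dataclass

@dataclass
class Header:
    prev_hash: bytes
    merkle_root: bytes
    nonce: int
    target: int  # the header hash, read as an integer, must be below this

    def hash(self) -> bytes:
        data = self.prev_hash + self.merkle_root + self.nonce.to_bytes(8, "big")
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_header_chain(headers: list) -> bool:
    for prev, cur in zip(headers, headers[1:]):
        if cur.prev_hash != prev.hash():
            return False  # broken chain link
        if int.from_bytes(cur.hash(), "big") >= cur.target:
            return False  # insufficient proof of work
    return True
```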
A super-efficient Ergo wallet with SPV security is in development, so stay tuned for more updates!
submitted by eleanorcwhite to btc [link] [comments]

Building Ergo: Lite full nodes

Ergo allows any user to run a full node with low resources – meaning you can help maintain the network with a device as simple as a Raspberry Pi.
In a previous post, we looked at Ergo’s SPV mode, which allows for secure, efficient mobile clients. This enables users to make transactions using almost any device.
At the other end of the scale, you might want to run a full node. If you’re a miner, this will require that you download the full blockchain, because you’ll need the whole UTXO (unspent outputs) set to mine new blocks. But you can still run a full node without that UTXO set – vastly reducing the specification and expense of the hardware needed.
Ergo blocks
In Ergo, just like Bitcoin, Ethereum and other blockchains, blocks are broken into sections. In Bitcoin, there’s simply a block header and the transactions themselves. But in Ergo, we have some extra sections that enable new functionality:
The ‘extension’ section contains certain mandatory fields (including links for NiPoPoW, once per 1,024 block epoch) and parameters for miner voting, such as current block size. It can also contain arbitrary fields.
What this means in practice is that different types of node and client can download only those sections of the blocks they need – reducing the demands for storage, bandwidth and CPU cycles.
Lite full nodes
While miners need to download everything, lite full nodes only need the transactions and proofs. This means they have a cryptographic guarantee of transactions, without holding the full UTXO set itself.
Lite full nodes check the proofs generated by full nodes (including miners) who do hold the full blockchain, providing a guarantee of ledger validity. In Ethereum, these nodes are called Stateless Clients.
For Ergo, it means you can run a full node and maintain the network with a device as simple as a Raspberry Pi with 512 MB RAM. This provides the ideal balance between ensuring the security of the network and placing an unnecessary burden on users who wish to do so – improving decentralisation and democratising participation in the Ergo network and community.
submitted by kushti to ergoplatformorg [link] [comments]

THE ABILITY TO VENTURE DOWN A WHOLE BUNCH OF RABBIT HOLES IN PARALLEL IS THE ETHEREUM COMMUNITY'S STRENGTH!!!1!

"Where else do you see as many parallel tracks like sharding, PoS, Plasma, generalized state channels, optimistic rollup, ZK rollup, stablecoins, DAOs all happening at the same time?"
Had to share this as the original post was downvoted and many will miss it. Here is Vitalik's response to 12 pages of criticism by "Checkmatey " As posted here Why ETH Won't Sustain a Monetary Premium
vbuterin:
I hear quite often this opinion that having a pre-set 21M limit is really really important, and that Ethereum should adopt it if it wants to stand a chance at getting any SoV status. And yet, when I've asked people in the ethereum community how important it is (I asked this most publicly at the "Controversial Questions" panel at EDCON 2019 in Sydney), the response is typically "eh, not that big a deal". Ethereum people seem to by and large value pragmatism and assign less importance to trying to have commitments that we publicly pretend are infinitely strong (but in reality are quite malleable, as we discovered in the Binance rollback crisis when I was surprised to learn that maximalist ideology now does NOT consider even multi-day reversions of the chain to be violations of "immutability").
And there's a good reason for not publicly committing to no possibility of retreat from a fixed issuance formula, and that reason is this. There is an unavoidable tradeoff between stability of issuance level and stability of security level. This is simple to see. You need to pay miners (or in PoS validators) to secure the chain, and the security level is roughly proportional to how many of those you attract, which is roughly proportional to how well you pay them. Payment to miners/validators equals issuance + transaction fees. Hence, if issuance is zero, the security level depends on the level of transaction fees, which is quite volatile. So if you want a guarantee of security, then you have to admit the possibility that if transaction fees are low during some period of time then you will have issuance. Ethereum does not have less stability than bitcoin; rather, it chooses stability of level of security over stability of issuance, and given how tiny an impact a 0.5% change in issuance will actually have on anyone's fortunes it should be clear that this is the correct choice.
Ultimately, this burning mechanism is of greatest benefit to current ETH holders and is to the detriment of holders and users in the future.
Huh? What is the evidence for this? This was just asserted without any argument backing up the idea that there is a detriment to anyone.
One can only conclude that the monetary policy of Ethereum is relatively fluid and influenced by people rather than code. This uncertainty reflects an un-sound monetary policy (subject to human tampering) and instils a defendable perception of centralised governance.
Given how central fees are to bitcoin's long-term security narrative, and how central (i) block size changes like segwit, and in the future sig compression via schnorr and (ii) layer 2 protocols like LN, are to fee levels, can't you argue that the security policy of Bitcoin is relatively fluid and influenced by people rather than code?
Narratives have shifted from world computer, to unstoppable dAPPS, to token issuance and now to open finance applications.
Shifted? As far as I can tell, narratives were rarely subtracted, mostly new ones added. And that's what you should expect for a general purpose technology.
Furthermore, the ETH 2.0 beacon chain very much resembles Bitcoin by design, handling consensus and global state only with applications and bloat pushed to shards (sidechains or L2+ in Bitcoin’s case).
This author needs to understand the concept of tight coupling to see why shards are not like sidechains. That claim is as incorrect as claiming that the bitcoin block is a sidechain to bitcoin headers.
Whilst the Open finance ecosystem presents impressive technological and engineering successes, there remains a lingering risk of over reliance on third party protocols for value accrual to the ETH token.
Yes, general purpose technology requires at least one application to succeed. We know that. BTW ETH itself being used for payments is also a totally reasonable application, and has not been denounced.
A relatively centralised governance and an unsound monetary policy with signs this will only deteriorate in time.
Once again bare assertion with no evidence. How do we know that the monetary policy and governance will only deteriorate over time when all evidence suggests (i) issuance only going down, not up, and (ii) DAO-like forks becoming more difficult, not less?
Ethereum has historically required more specialised, high performance hardware for the operation of nodes. This is generally a result of a larger scope of transactions and heavier demand on block-space from Turing Completeness.
Actually it's largely because of IO issues, which will be solved by stateless clients.
The author challenges readers to consider how far advanced Bitcoin is in achieving the goals of digital, sound, immutable money whilst Ethereum has ventured down numerous dead end rabbit holes.
THE ABILITY TO VENTURE DOWN A WHOLE BUNCH OF RABBIT HOLES IN PARALLEL IS THE ETHEREUM COMMUNITY'S STRENGTH!!!1! Where else do you see as many parallel tracks like sharding, PoS, Plasma, generalized state channels, optimistic rollup, ZK rollup, stablecoins, DAOs all happening at the same time?
I would even argue that the frame that there must be a single dominant application narrative is one that we should reject; instead, the Ethereum community should be proud of its own great internal diversity.
submitted by c-i-s-c-o to ethereum [link] [comments]

Bitcoin (BTC)A Peer-to-Peer Electronic Cash System.

Bitcoin (BTC)A Peer-to-Peer Electronic Cash System.
  • Bitcoin (BTC) is a peer-to-peer cryptocurrency that aims to function as a means of exchange that is independent of any central authority. BTC can be transferred electronically in a secure, verifiable, and immutable way.
  • Launched in 2009, BTC is the first virtual currency to solve the double-spending issue by timestamping transactions before broadcasting them to all of the nodes in the Bitcoin network. The Bitcoin Protocol offered a solution to the Byzantine Generals’ Problem with a blockchain network structure, a notion first created by Stuart Haber and W. Scott Stornetta in 1991.
  • Bitcoin’s whitepaper was published pseudonymously in 2008 by an individual, or a group, with the pseudonym “Satoshi Nakamoto”, whose underlying identity has still not been verified.
  • The Bitcoin protocol uses an SHA-256d-based Proof-of-Work (PoW) algorithm to reach network consensus. Its network has a target block time of 10 minutes and a maximum supply of 21 million tokens, with a decaying token emission rate. To prevent fluctuation of the block time, the network’s block difficulty is re-adjusted through an algorithm based on the past 2016 block times.
  • With a block size limit capped at 1 megabyte, the Bitcoin Protocol has supported both the Lightning Network, a second-layer infrastructure for payment channels, and Segregated Witness, a soft-fork to increase the number of transactions on a block, as solutions to network scalability.

https://preview.redd.it/s2gmpmeze3151.png?width=256&format=png&auto=webp&s=9759910dd3c4a15b83f55b827d1899fb2fdd3de1

1. What is Bitcoin (BTC)?

  • Bitcoin is a peer-to-peer cryptocurrency that aims to function as a means of exchange and is independent of any central authority. Bitcoins are transferred electronically in a secure, verifiable, and immutable way.
  • Network validators, who are often referred to as miners, participate in the SHA-256d-based Proof-of-Work consensus mechanism to determine the next global state of the blockchain.
  • The Bitcoin protocol has a target block time of 10 minutes, and a maximum supply of 21 million tokens. The only way new bitcoins can be produced is when a block producer generates a new valid block.
  • The protocol has a token emission rate that halves every 210,000 blocks, or approximately every 4 years.
  • Unlike public blockchain infrastructures supporting the development of decentralized applications (Ethereum), the Bitcoin protocol is primarily used only for payments, and has only very limited support for smart contract-like functionalities (Bitcoin “Script” is mostly used to set conditions that must be met before bitcoins can be spent).

2. Bitcoin’s core features

For a more beginner’s introduction to Bitcoin, please visit Binance Academy’s guide to Bitcoin.

Unspent Transaction Output (UTXO) model

A UTXO transaction works like a cash payment between two parties: Alice gives money to Bob and receives change (i.e., the unspent amount). In comparison, blockchains like Ethereum rely on the account model.
https://preview.redd.it/t1j6anf8f3151.png?width=1601&format=png&auto=webp&s=33bd141d8f2136a6f32739c8cdc7aae2e04cbc47

Nakamoto consensus

In the Bitcoin network, anyone can join the network and become a bookkeeping service provider i.e., a validator. All validators are allowed in the race to become the block producer for the next block, yet only the first to complete a computationally heavy task will win. This feature is called Proof of Work (PoW).
The probability that any single validator finishes the task first is equal to the share of the total network computation power, or hash power, that the validator controls. For instance, a validator with 5% of the total network computation power will have a 5% chance of completing the task first, and therefore becoming the next block producer.
Since anyone can join the race, competition is prone to increase. In the early days, Bitcoin mining was mostly done by personal computer CPUs.
As of today, Bitcoin validators, or miners, have opted for dedicated and more powerful devices such as machines based on Application-Specific Integrated Circuit (“ASIC”).
Proof of Work secures the network as block producers must have spent resources external to the network (i.e., money to pay electricity), and can provide proof to other participants that they did so.
With various miners competing for block rewards, it becomes difficult for one single malicious party to gain network majority (defined as more than 51% of the network’s hash power in the Nakamoto consensus mechanism). The ability to rearrange transactions via 51% attacks indicates another feature of the Nakamoto consensus: the finality of transactions is only probabilistic.
Once a block is produced, it is then propagated by the block producer to all other validators to check on the validity of all transactions in that block. The block producer will receive rewards in the network’s native currency (i.e., bitcoin) as all validators approve the block and update their ledgers.

The blockchain

Block production

The Bitcoin protocol utilizes the Merkle tree data structure in order to organize hashes of numerous individual transactions into each block. This concept is named after Ralph Merkle, who patented it in 1979.
With the use of a Merkle tree, though each block might contain thousands of transactions, it will have the ability to combine all of their hashes and condense them into one, allowing efficient and secure verification of this group of transactions. This single hash is called the Merkle root, which is stored in the Block Header of a block. The Block Header also stores other meta information of a block, such as a hash of the previous Block Header, which enables blocks to be associated in a chain-like structure (hence the name “blockchain”).
An illustration of block production in the Bitcoin Protocol is demonstrated below.

https://preview.redd.it/m6texxicf3151.png?width=1591&format=png&auto=webp&s=f4253304912ed8370948b9c524e08fef28f1c78d
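A short sketch of the pairwise hashing that produces the Merkle root is shown below (Bitcoin-style double SHA-256, with the last hash duplicated when a level has an odd number of entries); the transaction hashes are made up for the example:

```python
# Combine transaction hashes pairwise until a single Merkle root remains.
import hashlib

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(tx_hashes: list) -> bytes:
    level = list(tx_hashes)
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the last hash on odd-sized levels
        level = [dsha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

txs = [dsha256(f"tx-{i}".encode()) for i in range(5)]
print("merkle root:", merkle_root(txs).hex())  # one 32-byte hash commits to all five transactions
```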

Block time and mining difficulty

Block time is the period required to create the next block in a network. As mentioned above, the node that solves the computationally intensive task will be allowed to produce the next block. Therefore, block time is directly correlated to the amount of time it takes for a node to find a solution to the task. The Bitcoin protocol sets a target block time of 10 minutes, and attempts to achieve this by introducing a variable named mining difficulty.
Mining difficulty refers to how difficult it is for the node to solve the computationally intensive task. If the network sets a high difficulty for the task, while miners have low computational power, which is often referred to as “hashrate”, it would statistically take longer for the nodes to get an answer for the task. If the difficulty is low, but miners have rather strong computational power, statistically, some nodes will be able to solve the task quickly.
Therefore, the 10 minute target block time is achieved by constantly and automatically adjusting the mining difficulty according to how much computational power there is amongst the nodes. The average block time of the network is evaluated after a certain number of blocks, and if it is greater than the expected block time, the difficulty level will decrease; if it is less than the expected block time, the difficulty level will increase.
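A simplified sketch of this retargeting rule, assuming the 2016-block interval and 10-minute target described above (Bitcoin additionally clamps the adjustment to a factor of four per period, which the sketch reflects):

```python
# Every RETARGET_INTERVAL blocks, scale difficulty by (expected timespan / actual timespan).
TARGET_BLOCK_TIME = 10 * 60   # seconds
RETARGET_INTERVAL = 2016      # blocks

def retarget(old_difficulty: float, actual_timespan: float) -> float:
    expected = TARGET_BLOCK_TIME * RETARGET_INTERVAL
    # Clamp the measured timespan so difficulty never moves more than 4x per period.
    clamped = min(max(actual_timespan, expected / 4), expected * 4)
    return old_difficulty * expected / clamped

# Blocks arrived twice as fast as intended -> difficulty roughly doubles.
print(retarget(1.0, actual_timespan=(TARGET_BLOCK_TIME * RETARGET_INTERVAL) / 2))  # 2.0
```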

What are orphan blocks?

In a PoW blockchain network, if the block time is too low, it would increase the likelihood of nodes producing orphan blocks, for which they would receive no reward. Orphan blocks are produced by nodes who solved the task but did not broadcast their results to the whole network the quickest due to network latency.
It takes time for a message to travel through a network, and it is entirely possible for 2 nodes to complete the task and start to broadcast their results to the network at roughly the same time, while one’s messages are received by all other nodes earlier as the node has low latency.
Imagine there is a network latency of 1 minute and a target block time of 2 minutes. A node could solve the task in around 1 minute but his message would take 1 minute to reach the rest of the nodes that are still working on the solution. While his message travels through the network, all the work done by all other nodes during that 1 minute, even if these nodes also complete the task, would go to waste. In this case, 50% of the computational power contributed to the network is wasted.
The percentage of wasted computational power would proportionally decrease if the mining difficulty were higher, as it would statistically take longer for miners to complete the task. In other words, if the mining difficulty, and therefore targeted block time is low, miners with powerful and often centralized mining facilities would get a higher chance of becoming the block producer, while the participation of weaker miners would become in vain. This introduces possible centralization and weakens the overall security of the network.
However, given a limited amount of transactions that can be stored in a block, making the block time too long would decrease the number of transactions the network can process per second, negatively affecting network scalability.

3. Bitcoin’s additional features

Segregated Witness (SegWit)

Segregated Witness, often abbreviated as SegWit, is a protocol upgrade proposal that went live in August 2017.
SegWit separates witness signatures from transaction-related data. Witness signatures in legacy Bitcoin blocks often take more than 50% of the block size. By removing witness signatures from the transaction block, this protocol upgrade effectively increases the number of transactions that can be stored in a single block, enabling the network to handle more transactions per second. As a result, SegWit increases the scalability of Nakamoto consensus-based blockchain networks like Bitcoin and Litecoin.
SegWit also makes transactions cheaper. Since transaction fees are derived from how much data is being processed by the block producer, the more transactions that can be stored in a 1MB block, the cheaper individual transactions become.
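The discount works through the weight formula introduced with SegWit: weight = 3 × non-witness bytes + total bytes, and fees are charged per virtual byte (weight / 4). The sketch below uses invented round byte counts to show why moving signature data into the witness lowers the fee:

```python
# Fee comparison using the SegWit weight formula.
def vsize(non_witness_bytes: int, witness_bytes: int) -> float:
    total = non_witness_bytes + witness_bytes
    return (3 * non_witness_bytes + total) / 4   # weight / 4 = virtual size

FEE_RATE = 20  # sat per virtual byte, arbitrary example

legacy = vsize(non_witness_bytes=250, witness_bytes=0)    # signatures counted in full
segwit = vsize(non_witness_bytes=140, witness_bytes=110)  # same 250 bytes, witness discounted
print(f"legacy: {legacy:.0f} vB -> {legacy * FEE_RATE:.0f} sat")
print(f"segwit: {segwit:.1f} vB -> {segwit * FEE_RATE:.0f} sat")
```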
https://preview.redd.it/depya70mf3151.png?width=1601&format=png&auto=webp&s=a6499aa2131fbf347f8ffd812930b2f7d66be48e
The legacy Bitcoin block has a block size limit of 1 megabyte, and any change on the block size would require a network hard-fork. On August 1st 2017, the first hard-fork occurred, leading to the creation of Bitcoin Cash (“BCH”), which introduced an 8 megabyte block size limit.
Conversely, Segregated Witness was a soft-fork: it never changed the transaction block size limit of the network. Instead, it added an extended block with an upper limit of 3 megabytes, which contains solely witness signatures, to the 1 megabyte block that contains only transaction data. This new block type can be processed even by nodes that have not completed the SegWit protocol upgrade.
Furthermore, the separation of witness signatures from transaction data solves the malleability issue with the original Bitcoin protocol. Without Segregated Witness, these signatures could be altered before the block is validated by miners. Indeed, alterations can be done in such a way that if the system does a mathematical check, the signature would still be valid. However, since the values in the signature are changed, the two signatures would create vastly different hash values.
For instance, if a witness signature states “6,” it has a mathematical value of 6, and would create a hash value of 12345. However, if the witness signature were changed to “06”, it would maintain a mathematical value of 6 while creating a completely different hash value of 67890.
Since the mathematical values are the same, the altered signature remains a valid signature. This would create a bookkeeping issue, as transactions in Nakamoto consensus-based blockchain networks are documented with these hash values, or transaction IDs. Effectively, one can alter a transaction ID to a new one, and the new ID can still be valid.
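A tiny illustration of that bookkeeping problem: two encodings representing the same mathematical value still hash to entirely different identifiers. The strings "6" and "06" and the "...tx data..." payload below are placeholders, not real transaction data:

```python
# Two encodings of the "same" signature value produce completely different transaction ids.
import hashlib

for sig in (b"6", b"06"):
    txid = hashlib.sha256(hashlib.sha256(b"...tx data..." + sig).digest()).hexdigest()
    print(sig, "->", txid[:16], "...")
```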
This can create many issues, as illustrated in the below example:
  1. Alice sends Bob 1 BTC, and Bob sends Merchant Carol this 1 BTC for some goods.
  2. Bob sends Carol this 1 BTC while the transaction from Alice to Bob is not yet validated. Carol sees this incoming transaction of 1 BTC to her, and immediately ships goods to Bob.
  3. At the moment, the transaction from Alice to Bob is still not confirmed by the network, and Bob can change the witness signature, therefore changing this transaction ID from 12345 to 67890.
  4. Now Carol will not receive her 1 BTC, as the network looks for transaction 12345 to ensure that Bob’s wallet balance is valid.
  5. As this particular transaction ID changed from 12345 to 67890, the transaction from Bob to Carol will fail, and Bob will get his goods while still holding his BTC.
With the Segregated Witness upgrade, such instances can not happen again. This is because the witness signatures are moved outside of the transaction block into an extended block, and altering the witness signature won’t affect the transaction ID.
Since the transaction malleability issue is fixed, Segregated Witness also enables the proper functioning of second-layer scalability solutions on the Bitcoin protocol, such as the Lightning Network.

Lightning Network

Lightning Network is a second-layer micropayment solution for scalability.
Specifically, Lightning Network aims to enable near-instant and low-cost payments between merchants and customers that wish to use bitcoins.
Lightning Network was conceptualized in a whitepaper by Joseph Poon and Thaddeus Dryja in 2015. Since then, it has been implemented by multiple companies. The most prominent of them include Blockstream, Lightning Labs, and ACINQ.
A list of curated resources relevant to Lightning Network can be found here.
In the Lightning Network, if a customer wishes to transact with a merchant, both of them need to open a payment channel, which operates off the Bitcoin blockchain (i.e., off-chain vs. on-chain). None of the transaction details from this payment channel are recorded on the blockchain, and only when the channel is closed will the end result of both party’s wallet balances be updated to the blockchain. The blockchain only serves as a settlement layer for Lightning transactions.
Since all transactions done via the payment channel are conducted independently of the Nakamoto consensus, both parties involved in transactions do not need to wait for network confirmation on transactions. Instead, transacting parties would pay transaction fees to Bitcoin miners only when they decide to close the channel.
https://preview.redd.it/cy56icarf3151.png?width=1601&format=png&auto=webp&s=b239a63c6a87ec6cc1b18ce2cbd0355f8831c3a8
One limitation of the Lightning Network is that it requires a person to be online to receive transactions addressed to them. Another limitation in user experience could be that one needs to lock up some funds every time a payment channel is opened, and those funds are only usable within that channel.
However, this does not mean he needs to create new channels every time he wishes to transact with a different person on the Lightning Network. If Alice wants to send money to Carol, but they do not have a payment channel open, they can ask Bob, who has payment channels open to both Alice and Carol, to help make that transaction. Alice will be able to send funds to Bob, and Bob to Carol. Hence, the number of “payment hubs” (i.e., Bob in the previous example) correlates with both the convenience and the usability of the Lightning Network for real-world applications.
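The essential idea, many off-chain balance updates with only the final state settled on-chain, can be sketched in a few lines. This toy class ignores everything that makes real Lightning channels trust-minimised (commitment transactions, revocation keys, HTLCs for routed payments); it only shows the accounting:

```python
# Toy payment-channel accounting: many off-chain updates, one on-chain settlement.
class PaymentChannel:
    def __init__(self, alice_deposit: int, bob_deposit: int):
        self.balances = {"alice": alice_deposit, "bob": bob_deposit}  # funded on-chain
        self.updates = 0

    def pay(self, sender: str, receiver: str, amount: int) -> None:
        if self.balances[sender] < amount:
            raise ValueError("insufficient channel balance")
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.updates += 1  # off-chain: no miner fee, no confirmation wait

    def close(self) -> dict:
        return dict(self.balances)  # only this final state is settled on the blockchain

channel = PaymentChannel(alice_deposit=100_000, bob_deposit=0)
for _ in range(500):
    channel.pay("alice", "bob", 100)  # 500 micropayments, zero on-chain transactions
print(channel.close())  # {'alice': 50000, 'bob': 50000}
```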

Schnorr Signature upgrade proposal

Elliptic Curve Digital Signature Algorithm (“ECDSA”) signatures are used to sign transactions on the Bitcoin blockchain.
https://preview.redd.it/hjeqe4l7g3151.png?width=1601&format=png&auto=webp&s=8014fb08fe62ac4d91645499bc0c7e1c04c5d7c4
However, many developers now advocate for replacing ECDSA with Schnorr Signature. Once Schnorr Signatures are implemented, multiple parties can collaborate in producing a signature that is valid for the sum of their public keys.
This would primarily be beneficial for network scalability. If multiple addresses were to send transactions to a single address, each transaction would require its own signature. With Schnorr Signatures, all these signatures would be combined into one. As a result, the network would be able to store more transactions in a single block.
https://preview.redd.it/axg3wayag3151.png?width=1601&format=png&auto=webp&s=93d958fa6b0e623caa82ca71fe457b4daa88c71e
The reduced size in signatures implies a reduced cost on transaction fees. The group of senders can split the transaction fees for that one group signature, instead of paying for one personal signature individually.
Schnorr Signature also improves network privacy and token fungibility. A third-party observer will not be able to detect if a user is sending a multi-signature transaction, since the signature will be in the same format as a single-signature transaction.
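The property being relied on is linearity: the sum of partial signatures verifies against the sum of the public keys. The toy sketch below shows only that algebra, using plain integers modulo a prime, a group in which discrete logarithms are trivial; it is emphatically not secure cryptography and omits the protections (for example MuSig-style key tweaking) that a real aggregated-signature scheme needs:

```python
# Toy demonstration of the linearity behind signature aggregation:
# (s1 + s2) * G  ==  (R1 + R2) + e * (P1 + P2)
import hashlib, random

Q = 2**127 - 1   # a prime modulus; real Schnorr uses an elliptic-curve group
G = 5            # stand-in for the curve's generator point

def mul(x: int) -> int:          # "scalar times generator" in the toy group
    return (x * G) % Q

def challenge(R: int, P: int, msg: str) -> int:
    return int.from_bytes(hashlib.sha256(f"{R}|{P}|{msg}".encode()).digest(), "big") % Q

rng = random.Random(1)
p1, p2 = rng.randrange(1, Q), rng.randrange(1, Q)    # private keys
P1, P2 = mul(p1), mul(p2)                            # public keys
k1, k2 = rng.randrange(1, Q), rng.randrange(1, Q)    # nonces
R1, R2 = mul(k1), mul(k2)

P, R = (P1 + P2) % Q, (R1 + R2) % Q                  # aggregated key and nonce
e = challenge(R, P, "message")
s = (k1 + e * p1 + k2 + e * p2) % Q                  # sum of both partial signatures

assert mul(s) == (R + e * P) % Q                     # one signature verifies for the key sum
print("aggregated signature verifies against the sum of the public keys")
```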

4. Economics and supply distribution

The Bitcoin protocol utilizes the Nakamoto consensus, and nodes validate blocks via Proof-of-Work mining. The bitcoin token was not pre-mined, and has a maximum supply of 21 million. The initial reward for a block was 50 BTC per block. Block mining rewards halve every 210,000 blocks. Since the average time for block production on the blockchain is 10 minutes, it implies that the block reward halving events will approximately take place every 4 years.
As of May 12th 2020, the block mining rewards are 6.25 BTC per block. Transaction fees also represent a minor revenue stream for miners.
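A quick arithmetic check of this emission schedule (50 BTC initial subsidy, halving every 210,000 blocks) reproduces the per-block reward after the 2020 halving and the roughly 21 million BTC cap:

```python
# Emission schedule arithmetic: subsidy per block and cumulative maximum supply.
HALVING_INTERVAL = 210_000
INITIAL_SUBSIDY_SATS = 50 * 100_000_000  # amounts tracked in satoshis

def subsidy_at_height(height: int) -> int:
    halvings = height // HALVING_INTERVAL
    return INITIAL_SUBSIDY_SATS >> halvings if halvings < 64 else 0

print(subsidy_at_height(650_000) / 1e8)  # 6.25 BTC per block after the May 2020 halving
total = sum((INITIAL_SUBSIDY_SATS >> era) * HALVING_INTERVAL for era in range(64))
print(total / 1e8)                       # ~20,999,999.97 BTC maximum supply
```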
submitted by D-platform to u/D-platform [link] [comments]


The current Bitcoin block size limit is about 1,000,000 bytes (1 megabyte) of data, and that limit covers both the block header and the transaction data. The block header itself is typically 80 bytes, or 128 bytes if padding is included.
The block header is the first piece of information propagated by a node when it finds a valid block solution. Other nodes on the network can validate the node's hash solution and determine whether the proposed block warrants the further checking required to secure its place as the top-most link in the longest chain of valid proof of work.
A block consists of four parts: the block size field (the size of the block; as mentioned before, the average is about 1 MB), the block header (which itself contains six fields), the transaction counter (the number of transactions stored in that block), and the transactions themselves.
Block headers are serialized in the 80-byte format and then hashed as part of Bitcoin's proof-of-work algorithm, making the serialized header format part of the consensus rules. The main way of identifying a block in the blockchain is via its block header hash, which is calculated by running the block header through the SHA-256 algorithm twice. A block header hash is not sent through the network; instead, it is calculated by each node as part of the verification process for each block.
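A small Python sketch of that 80-byte serialization and the double-SHA256 block header hash is below; the field values are placeholders, not a real mainnet header:

```python
# Serialize an 80-byte block header and compute its double-SHA256 hash
# (displayed byte-reversed, as block explorers do).
import hashlib, struct

def serialize_header(version, prev_hash_hex, merkle_root_hex, timestamp, bits, nonce):
    return (
        struct.pack("<I", version)
        + bytes.fromhex(prev_hash_hex)[::-1]    # 32-byte previous block hash, little-endian
        + bytes.fromhex(merkle_root_hex)[::-1]  # 32-byte Merkle root
        + struct.pack("<I", timestamp)
        + struct.pack("<I", bits)               # compact-encoded difficulty target
        + struct.pack("<I", nonce)
    )

header = serialize_header(
    version=0x20000000,
    prev_hash_hex="00" * 32,
    merkle_root_hex="11" * 32,
    timestamp=1_600_000_000,
    bits=0x170E2632,
    nonce=123_456,
)
assert len(header) == 80
block_hash = hashlib.sha256(hashlib.sha256(header).digest()).digest()[::-1]
print(block_hash.hex())
```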


