Taproot! Everybody wants to have it, somebody wants to make it, nobody knows how to get it! (If you are asking why everybody wants it, see: Technical: Taproot: Why Activate?) (Pedants: I mostly elide over lockin times) Briefly, Taproot is that neat new thing that gets us:
Multisignatures (n-of-n, k-of-n) that are just 1 signature (1-of-1) in length!! (MuSig/Schnorr)
Better privacy!! If all contract participants can agree, just use a multisignature. If there is a dispute, show the contract publicly and have the Bitcoin network resolve it (Taproot/MAST).
Activation lets devs get back to work on the even newer stuff, like:
Cross-input signature aggregation!! (transaction with multiple inputs can have a single signature for all inputs) --- needs Schnorr, but some more work needed to ensure that the interactions with SCRIPT are okay.
Block validation - Schnorr signatures for all taproot spends in a block can be validated in a single operation instead of for each transaction!! Speed up validation and maybe we can actually afford to increase block sizes (maybe)!!
SIGHASH_ANYPREVOUT - you know, for Decker-Russell-Osuntokun ("eltoo") magic!!!
OP_CHECKTEMPLATEVERIFY - vaulty vaults without requiring storing signatures, just transaction details!!
So yes, let's activate taproot!
The SegWit Wars
The biggest problem with activating Taproot is PTSD from the previous softfork, SegWit. Pieter Wuille, one of the authors of the current Taproot proposal, has consistently held the position that he will not discuss activation, and will accept whatever activation process is imposed on Taproot. Other developers have expressed similar opinions. So what happened with SegWit activation that was so traumatic? SegWit used the BIP9 activation method. Let's dive into BIP9!
bit - A field in the block header, the nVersion, has a number of bits. By setting a particular bit, the miner making the block indicates that it has upgraded its software to support a particular soft fork. The bit parameter for a BIP9 activation is which bit in this nVersion is used to indicate that the miner has upgraded software for a particular soft fork.
timeout - a time limit, expressed as an end date. If this timeout is reached without sufficient number of miners signaling that they upgraded, then the activation fails and Bitcoin Core goes back to the drawing board.
Now there are other parameters (name, starttime), but they are not anywhere near as important as the above two. One number that is not a parameter is 95%. Basically, activation of a BIP9 softfork is considered to have succeeded if at least 95% of blocks in the last 2 weeks had the specified bit in the nVersion set. If fewer than 95% had this bit set before the timeout, then the upgrade fails and never goes into the network. This is not a parameter: it is a constant defined by BIP9, and developers using BIP9 activation cannot change it. So, first, some simple questions and their answers:
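The bit, the timeout, and the fixed 95% constant can be sketched in a few lines. This is my own illustration, not Bitcoin Core's actual code; the 1916-of-2016 figure is the mainnet constant specified by BIP9:

```python
# Minimal sketch of the BIP9 signalling check (my own illustration, not
# Bitcoin Core's actual code). A deployment is assigned a bit in the block
# header's nVersion; a block "signals" if that bit is set. Activation locks
# in when enough blocks in one 2016-block retarget period signal before the
# timeout; the threshold is the BIP9 constant (1916 of 2016 blocks, i.e. 95%).

PERIOD = 2016        # blocks per difficulty-retarget period (~2 weeks)
THRESHOLD = 1916     # 95% of 2016; fixed by BIP9, not a per-softfork parameter

def signals(nversion: int, bit: int) -> bool:
    """True if the given softfork's bit is set in a block's nVersion."""
    return (nversion >> bit) & 1 == 1

def period_locks_in(nversions: list[int], bit: int) -> bool:
    """True if a full retarget period meets the 95% signalling threshold."""
    assert len(nversions) == PERIOD
    return sum(signals(v, bit) for v in nversions) >= THRESHOLD

# Example: 1920 of 2016 blocks set bit 1 -> locked in; 1900 of 2016 -> not.
locked = period_locks_in([0b10 if i < 1920 else 0 for i in range(PERIOD)], bit=1)
failed = period_locks_in([0b10 if i < 1900 else 0 for i in range(PERIOD)], bit=1)
print(locked, failed)  # True False
```

(In real BIP9 the check is evaluated only at retarget-period boundaries, and a further 2016-block period passes between lock-in and actual activation; those details are elided here.)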
Why not just set a day when everyone starts imposing the new rules of the softfork?
This was done classically (in the days when Satoshi was still among us). But this might be argued to put too much power in the hands of developers, since there would be no way to reject an upgrade without possible bad consequences. For example, developers might package an upgrade that the users do not want together with vital security bugfixes. Either you live without the vital security bugfixes and hire some other developers to fix them for you (which can be difficult: presumably the best developers are already the ones working on the codebase), or you get the vital security bugfixes and implicitly support the upgrade you might not want.
Sure, you could fork the code yourself (the ultimate threat in the FOSS world) and hire another set of developers who aren't assholes to do the dreary maintenance work of fixing security bugs, but Bitcoin needs strong bug-for-bug compatibility so everyone should really congregate around a single codebase.
Basically: even the devs do not want this power, because they fear being coerced into putting "upgrades" that are detrimental to users. Satoshi got a pass because nobody knew who he was and how to coerce him.
Suppose the threshold were lower, like 51%. If so, after activation, somebody could disrupt the Bitcoin network by creating a transaction that is valid under the pre-softfork rules, but invalid under the post-softfork rules. Upgraded nodes would reject it, but 49% of miners would accept it and include it in a block (which makes the block invalid). The same 49% would then accept the invalid block and build on top of it, possibly creating a short chain of doomed invalid blocks that confirm an invalid spend. This can confuse SPV wallets, which might see multiple confirmations of a transaction and accept the funds, but later find that it is in fact invalid under the now-activated softfork rules.
Thus, a very high threshold was imposed. 95% is considered safe. 50% is definitely not safe. Due to variance in the mining process, 80% could also be potentially unsafe (i.e. 80% of blocks signaling might have a good chance of coming from only 60% of miners), so a threshold of 95% was considered "safe enough for Bitcoin work".
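To see why observed signalling can overstate or understate true hashpower, here is a back-of-envelope sketch (my own, not from the post): over a window of n blocks, the number of signalling blocks from a coalition with hashpower share p is roughly Binomial(n, p), so short windows are noisy.

```python
# Back-of-envelope sketch (my own illustration): observed signalling over a
# window of n blocks is roughly Binomial(n, p), where p is the signalling
# coalition's true hashpower share. Short windows are noisy, so an observed
# supermajority can misrepresent the true share.
import math

def signalling_spread(n_blocks: int, hashpower: float, sigmas: float = 3.0):
    """Return (low, high) plausible observed signalling fractions."""
    mean = n_blocks * hashpower
    sd = math.sqrt(n_blocks * hashpower * (1 - hashpower))
    lo = max(0.0, (mean - sigmas * sd) / n_blocks)
    hi = min(1.0, (mean + sigmas * sd) / n_blocks)
    return lo, hi

# 60% hashpower observed over a short 100-block window vs a full 2016-block
# retarget period:
print(signalling_spread(100, 0.6))    # wide: roughly 0.45 .. 0.75
print(signalling_spread(2016, 0.6))   # much tighter around 0.60
```

The longer the evaluation window, the tighter the spread, which is part of why a very high threshold over a full two-week period is considered safer than a bare majority over a short one.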
Why have a timeout that disables the upgrade?
Before BIP9, what was used was either a flag day or BIP34. BIP34 had no flag day of activation and no signalling bit; instead, it used a 95% threshold on blocks signalling an nVersion value greater than a specific value. Actually, it was two thresholds: at 75%, blocks with the new nVersion would have the new softfork rules imposed, but at 95%, blocks with the old nVersion would be rejected (and only the new blocks, with the new softfork rules, were accepted). Thus, between 75% and 95%, the softfork was only "partially imposed": only blocks signalling the new rules actually had those rules enforced, while blocks with the old rules were still valid. This was fine for BIP34, which only added rules for miners, with negligible use for non-miners.
The same mechanism was later used for BIP66 (strict DER signatures). The reason miners signalled support for it was that they felt pressured to do so. So they signalled support, with plans to actually upgrade later; but because of the widespread signalling, BIP66 locked in before their upgrade plans were finished. Thus, the timeout that disables the upgrade was added in BIP9, to give miners an escape hatch.
The Great Battles of the SegWit Wars
SegWit not only fixed transaction malleability, it also created a practical softforkable blocksize increase, rebalancing weights so that the cost of spending a UTXO is about the same as the cost of creating one (and spending UTXOs is "better", since it limits the size of the UTXO set that every fullnode has to maintain). So SegWit was written, the activation method was decided to be BIP9, and then... miner signalling stalled below 75%. Thus began the Great SegWit Wars.
BIP9 Feature Hostage
If you are a miner with at least 5% global hashpower, you can hold a BIP9-activated softfork hostage. You might even secretly want the softfork to actually push through. But you might want to extract concessions from the users and the developers. Like removing the halvening. Or raising or even removing the block size caps (which helps larger miners more than smaller miners, making it easier to become a bigger fish that eats all the smaller fishes). Or whatever. With BIP9, you can hold the softfork hostage. You just hold out and refuse to signal. You tell everyone you will signal, if and only if certain concessions are given to you. This ability of miners to hold a feature hostage was enabled by the miner-exit allowed by the timeout in BIP9. Prior to that, miners were considered little more than expendable security guards, paid for the risk they take to secure the network, but not special in the grand scheme of Bitcoin.
ASICBoost was a novel way of optimizing SHA256 mining, by taking advantage of the structure of the 80-byte header that is hashed in order to perform proof-of-work. The details of ASICBoost are out of scope here, but you can read about it elsewhere. Here is a short summary of the two types of ASICBoost, relevant to the activation discussion.
Overt ASICBoost - Manipulates the unused bits in nVersion to reduce power consumption in mining.
Covert ASICBoost - Manipulates the order of transactions in the block to reduce power consumption in mining.
Now, "overt" means "obvious", while "covert" means "hidden". Overt ASICBoost is obvious because nVersion bits that are not currently in use for BIP9 activations are 0 by default, so setting those bits to 1 makes it obvious that you are doing something weird (namely, Overt ASICBoost). Covert ASICBoost is non-obvious because the order of transactions in a block is up to the miner anyway, so a miner rearranging transactions to get lower power consumption is not going to be detected.

Unfortunately, while Overt ASICBoost was compatible with SegWit, Covert ASICBoost was not. Pre-SegWit, only the block header Merkle tree committed to the transaction ordering. With SegWit, another Merkle tree exists, which commits to the transaction ordering as well. Covert ASICBoost would thus require manipulating two Merkle trees, and the extra computation obviates its power benefits.

Now, miners want to use ASICBoost (indeed, about 60% to 70% of current miners probably use Overt ASICBoost nowadays; if you have a Bitcoin fullnode running, you will see logs with lots of "60 of last 100 blocks had unexpected versions", which is exactly what you would see with the nVersion manipulation that Overt ASICBoost does). But remember: at the time, ASICBoost was a novel improvement. Not all miners had ASICBoost hardware. Those who did, did not want it known that they had ASICBoost hardware, and so wanted to do Covert ASICBoost, which, as noted above, is incompatible with SegWit.
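As an aside, the "unexpected versions" log message comes from nodes noticing nVersion bits that no known deployment is using. A rough sketch of that idea (my own simplification; the TOP_BITS/TOP_MASK constants match BIP9, but Bitcoin Core's actual warning logic differs in detail):

```python
# Sketch (my own simplification): BIP9-style blocks use an nVersion whose top
# three bits are 001 (i.e. base 0x20000000) plus per-deployment signalling
# bits. A block whose version sets bits not assigned to any active deployment
# looks "unexpected" -- which is what Overt ASICBoost's nVersion grinding
# produces in node logs.

VERSIONBITS_TOP_MASK = 0xE0000000  # the top three bits of nVersion
VERSIONBITS_TOP_BITS = 0x20000000  # the BIP9 pattern: 001

def is_unexpected_version(nversion: int, active_deployment_bits: set[int]) -> bool:
    """True if nVersion sets signalling bits not assigned to any deployment."""
    if (nversion & VERSIONBITS_TOP_MASK) != VERSIONBITS_TOP_BITS:
        return False  # not a BIP9-style version at all
    extra = nversion & ~VERSIONBITS_TOP_MASK
    allowed = 0
    for b in active_deployment_bits:
        allowed |= (1 << b)
    return (extra & ~allowed) != 0

# With no active deployments, a version-grinding miner's block stands out:
print(is_unexpected_version(0x20400000, set()))   # True  (bit 22 set)
print(is_unexpected_version(0x20000000, set()))   # False (plain BIP9 base)
```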
Of course, those miners that wanted Covert ASICBoost did not want to openly admit that they had ASICBoost hardware; they wanted to keep their advantage secret, because miners compete strongly in a very tight market. Doing ASICBoost covertly was just the ticket, but it could not work post-SegWit. Fortunately for them, due to the BIP9 activation process, they could hold SegWit hostage while covertly taking advantage of Covert ASICBoost!
UASF: BIP148 and BIP8
When the incompatibility between Covert ASICBoost and SegWit was realized, activation of SegWit remained stalled, and miners were still not openly admitting that ASICBoost was related to the non-activation of SegWit. Eventually, a new proposal was created: BIP148. Under this rule, 3 months before the end of the SegWit timeout, nodes would reject blocks that did not signal SegWit. Thus, 3 months before the SegWit timeout, BIP148 would force activation of SegWit.

This proposal was not accepted by Bitcoin Core, due to the shortening of the timeout (it effectively times out 3 months before the initial SegWit timeout). Instead, a fork of Bitcoin Core was created which added the patch to comply with BIP148. This was dubbed a User Activated Soft Fork, UASF, since users could freely download the alternate fork rather than sticking with the developers of Bitcoin Core.

Now, BIP148 is effectively just a BIP9 activation, except that at its (earlier) timeout, the new rules are activated anyway (instead of the BIP9-mandated behavior that the upgrade is cancelled at the end of the timeout). BIP148 was actually inspired by the BIP8 proposal (the link here is a historical version; BIP8 has been updated recently, precisely in preparation for Taproot activation). BIP8 is basically BIP9, but at the end of the timeout, the softfork is activated anyway rather than cancelled. This removes the ability of miners to hold the softfork hostage: at best, they can delay the activation, but not stop it entirely by holding out as in BIP9. Of course, this implies the risk that not all miners have upgraded before activation, leading to possible losses for SPV users, as well as again pressuring miners to signal activation, possibly without actually upgrading their software to properly impose the new softfork rules.
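The difference between BIP9 and BIP8 (with lock-in on timeout) boils down to a single branch in the deployment state machine. A simplified sketch (my own; real BIP8 also has a MUST_SIGNAL state and other details elided here):

```python
# Simplified sketch (my own) of the deployment state machine, showing the one
# crucial difference between BIP9 and BIP8-with-lockinontimeout: what happens
# when the timeout is reached without the signalling threshold being met.

def next_state(state: str, period_met_threshold: bool, past_timeout: bool,
               activate_on_timeout: bool) -> str:
    """Advance a softfork deployment by one evaluation period."""
    if state == "STARTED":
        if period_met_threshold:
            return "LOCKED_IN"   # both BIP9 and BIP8 lock in on 95% signalling
        if past_timeout:
            # The crucial difference: BIP8 activates anyway, BIP9 gives up.
            return "LOCKED_IN" if activate_on_timeout else "FAILED"
    return state

# Miners never signal and the timeout passes:
print(next_state("STARTED", False, True, activate_on_timeout=False))  # FAILED (BIP9)
print(next_state("STARTED", False, True, activate_on_timeout=True))   # LOCKED_IN (BIP8)
```

With `activate_on_timeout=True`, holding out only delays activation; it can no longer cancel it, which is exactly why BIP8 removes the hostage-taking ability described above.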
BIP91, SegWit2X, and The Aftermath
BIP148 inspired countermeasures, possibly from the Covert ASICBoost miners, possibly from concerned users who wanted to offer concessions to miners. (To this day, the common name for BIP148 - UASF - remains an emotionally-charged rallying cry for parts of the Bitcoin community.) One of these countermeasures was SegWit2X. This was brokered in a deal between some Bitcoin personalities at a conference in New York, and is thus part of the so-called "New York Agreement" or NYA, another emotionally-charged acronym. The text of the NYA was basically:
Set up a new activation threshold at 80% signalled at bit 4 (vs bit 1 for SegWit).
When this 80% signalling was reached, miners would require that bit 1 for SegWit be signalled, to achieve the 95% activation needed for SegWit.
If the bit 4 signalling reached 80%, increase the block weight limit from the SegWit 4000000 to the SegWit2X 8000000, 6 months after bit 1 activation.
The first item above was coded in BIP91. Unfortunately, if you read BIP91 independently of the NYA, you might come to the conclusion that BIP91 was only about lowering the threshold to 80%. In particular, BIP91 never mentions anything about the second point above: it never mentions that the bit 4 80% threshold would also signal for a later hardfork increase in the weight limit. Because of this, even though there are claims that NYA (SegWit2X) reached 80% dominance, a close reading of BIP91 shows that the 80% dominance was only for SegWit activation, without necessarily a later 2x capacity hardfork (SegWit2X). This ambiguity of bit 4 (NYA says it includes a 2x capacity hardfork, BIP91 says it does not) has continued to be a thorn in later blocksize debates. Economically speaking, Bitcoin futures between SegWit and SegWit2X showed strong economic dominance in favor of SegWit (SegWit2X futures traded at a fraction of the value of SegWit futures; I personally made a tidy but small amount of money betting against SegWit2X in the futures market), so suggesting that NYA achieved 80% dominance even in mining is laughable, but the NYA text that ties bit 4 to SegWit2X still exists. Historically, BIP91 triggered, which caused SegWit to activate before the shorter BIP148 timeout. BIP148 proponents hold to this day that it was the BIP148 shorter timeout and no-compromises-activate-on-August-1 stance that made miners flock to BIP91 as a face-saving tactic, one that effectively dropped the second clause of the NYA. NYA supporters keep pointing to the bit 4 text in the NYA and the historical activation of BIP91 as a failed promise by Bitcoin developers.
We have discussed BIP8: roughly, it has a bit and a timeout; if 95% of miners signal the bit, it activates, and at the end of the timeout it activates anyway. (EDIT: BIP8 has had recent updates: at the end of the timeout it can now either activate or fail. For the most part, in the text below, "BIP8" means BIP8-and-activate-at-timeout, and "BIP9" means BIP8-and-fail-at-timeout.) So let's take a look at Modern Softfork Activation!
Modern Softfork Activation
This is a more complex activation method, composed of BIP9 and BIP8 as subcomponents:
1. First, have a 12-month BIP9 (fail at timeout).
2. If the above fails to activate, have a 6-month discussion period during which users, developers, and miners discuss whether to continue to step 3.
3. Have a 24-month BIP8 (activate at timeout).
The total above is 42 months, if you are counting: a 3.5-year worst-case activation. The logic here is that if there are no problems, BIP9 will work just fine anyway. And if there are problems, the 6-month period should weed them out. Finally, miners cannot hold the feature hostage, since the 24-month BIP8 period will happen regardless.
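Spelling out the worst-case arithmetic of the steps above:

```python
# The proposed timeline, spelled out. The month figures are from the proposal
# text above; only the arithmetic is added here.
bip9_phase = 12    # BIP9, fail at timeout
discussion = 6     # only entered if the BIP9 phase fails
bip8_phase = 24    # BIP8, activate at timeout

worst_case = bip9_phase + discussion + bip8_phase
print(worst_case, worst_case / 12)  # 42 months, i.e. 3.5 years
```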
PSA: Being Resilient to Upgrades
Software is very brittle. Anyone who has been using software for a long time has experienced something like this:
You hear a new version of your favorite software has a nice new feature.
Excited, you install the new version.
You find that the new version has subtle incompatibilities with your current workflow.
You are sad and downgrade to the older version.
You find out that the new version has changed your files in incompatible ways that the old version cannot work with anymore.
You tearfully reinstall the newer version and figure out how to recover your lost productivity, now that you have to adapt to a new workflow.
If you are a technically-competent user, you might codify your workflow into a bunch of programs. And then you upgrade one of the external pieces of software you are using, and find that it has a subtle incompatibility with your current workflow, which is based on a bunch of simple programs you wrote yourself. And if those simple programs are used as the basis of some important production system, you have just screwed up, because you upgraded software on an important production system.

One of the issues with new softfork activation is that if not enough people (users and miners) upgrade to the newest Bitcoin software, the security of the new softfork rules is at risk. Upgrading software of any kind is always a risk, and the more software you build on top of the software-being-upgraded, the greater the risk that your tower of software collapses while you change its foundations. So if you have some complex Bitcoin-manipulating system with Bitcoin somewhere at the foundations, consider running two Bitcoin nodes:
One is a "stable-version" Bitcoin node. Once it has synced, set it up to connect=x.x.x.x to the second node below (so that your ISP bandwidth is only spent on the second node). Use this node to run all your software: it's a stable version that you don't change for long periods of time. Enable txindex, disable pruning, whatever your software needs.
The other is an "always-up-to-date" Bitcoin node. Keep its storage down with pruning (initially sync it off the "stable-version" node). You can't use blocksonly if your "stable-version" node needs to send transactions, but otherwise this "always-up-to-date" Bitcoin node can be kept as a low-resource node, so you can run both nodes on the same machine.
When a new Bitcoin version comes out, you just upgrade the "always-up-to-date" Bitcoin node. This protects you if a future softfork activates: you will only receive valid Bitcoin blocks and transactions. Since this node has nothing running on top of it and is just a special peer of the "stable-version" node, any software incompatibilities with your system software do not matter. Your "stable-version" Bitcoin node remains the same version until you are ready to actually upgrade it, and are prepared to rewrite most of the software you have running on top of it due to version compatibility problems.

When upgrading the "always-up-to-date" node, you can bring it down safely and then start it later; your "stable-version" will keep running, disconnected from the network, but otherwise still available for whatever queries. You do need some system to stop the "always-up-to-date" node if for any reason the "stable-version" goes down (otherwise, if the "always-up-to-date" node advances its pruning window past what your "stable-version" has, the "stable-version" cannot sync afterwards), but if you are technically competent enough to need this setup, you are technically competent enough to write such a trivial monitor program. (EDIT: gmax notes you can adjust the pruning window by RPC commands to help with this as well.) This recommendation is from gmaxwell on IRC, by the way.
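A hypothetical sketch of that "trivial monitor program" (the paths and datadirs are placeholders, not a recommendation; this assumes `bitcoin-cli` is on your PATH and each node runs with its own datadir):

```python
# Hypothetical sketch of the "trivial monitor" mentioned above: if the
# "stable-version" node stops responding, stop the "always-up-to-date" node
# so its pruning window cannot advance past what the stable node has seen.
# Datadir paths here are placeholders -- adjust to your own setup.
import subprocess
import time

STABLE_CLI = ["bitcoin-cli", "-datadir=/home/user/stable"]
LATEST_CLI = ["bitcoin-cli", "-datadir=/home/user/latest"]

def node_alive(cli_prefix) -> bool:
    """True if the node answers a getblockcount RPC within 10 seconds."""
    try:
        subprocess.run(cli_prefix + ["getblockcount"],
                       check=True, capture_output=True, timeout=10)
        return True
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired,
            FileNotFoundError):
        return False

def watch(poll_seconds: int = 60):
    """Poll both nodes; halt the pruned node if the stable node goes down."""
    while True:
        if not node_alive(STABLE_CLI) and node_alive(LATEST_CLI):
            # Stable node is down: stop the pruned node before it prunes
            # blocks the stable node still needs in order to catch up.
            subprocess.run(LATEST_CLI + ["stop"], capture_output=True)
        time.sleep(poll_seconds)

# watch()  # run forever; left commented out in this sketch
```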
An Overview of Arizona Primary Races - Part 4: Legislative Districts 11-20
Welcome back to my omnibus compendium of Arizona’s upcoming primary races in the style of my 2018 summaries. The primary is set to take place August 4th – early voting ballots should have been mailed out on or around July 8th. Arizona’s a really interesting state (I may be a hair biased), since it is not only home to 2-3 swing House seats and a high-profile Senate race, but also tenuous majorities in both state houses that could – theoretically – neuter Ducey’s trifecta this fall. And counties have their races this year as well, so I’ve highlighted some of the fireworks ongoing in Maricopa. And this is before factoring in the fact that our state is a COVID-19 hotspot, with an unpopular Republican Governor doing almost nothing to stop it.

If you’re interested in which district you live in, check https://azredistricting.org/districtlocato. If you want to get involved with your local Democratic party, find your legislative district on the previous link (NOT CD), and then search for your LD’s name at this link. Feel free to attend meetings, they’re a great way to get involved with candidates and like-minded individuals. If you wish to donate to a “clean elections” candidate (mentioned in the post as “clean”), you will have to live in that candidate’s legislative district to give qualifying $5 contributions (check here if anyone needs it in your area), but they are allowed to accept a limited amount of “seed money” from people outside of the district. The three CorpComm candidates can take $5’s statewide.

If you do not want to vote at the polls, you will need to request an early ballot using the website of your county’s recorder prior to July 4th. Example links for Maricopa, Pima, and Pinal. Others available if needed. Race ratings for listed primaries will be listed as Safe/Likely/Leans/Tilt/Tossup (alternatively Solid instead of Safe if my mind blanks) and are not indicative of my own preference for that seat.
I’ll denote my personal primary preferences at the end of this series, as well as the best Republican ticket for the Dems if someone here really really wants to pull a GOP ballot in the primary. I do not advise it, but since I can't stop ya, you'll get my best suggestions. Write-in candidates have yet to file, which could give us an outside chance at getting some Libertarians on the ballot (the Greens have lost their ballot access). If you have any questions about voting in the primary, which races are the most contested, and how to get involved with other Democrats in Arizona, feel free to PM me. All fundraising numbers here are as of 7/18/2020 (“Q2”). District stats are listed for the race that involved the top Democratic vote-getter in the past two midterm cycles plus the last two presidential races, taken from Daily Kos’s legislative sheet – Clinton’16, Obama’12, Sinema’18, and Garcia’14 (not his 2018 run).

Part 1: Statewide and Congressional Races
Part 2: Maricopa County Races
Update 1: Congressional and County Rating Updates
Part 3: Legislative Districts 1-10

ALL OPINIONS ARE MY OWN SOLELY IN MY CAPACITY AS A VOTER IN ARIZONA, AND NOT REPRESENTATIVE OF ANY ORGANIZATIONS I WORK/ED FOR OR AM/WAS A MEMBER OF. THIS POST IS IN NO WAY ENDORSED BY THE ARIZONA DEMOCRATIC PARTY OR ANY SUB-ORGANIZATION THEREOF, OR ANY FILED CANDIDATE.

Alright, let’s get cracking, y’all. I’m going to try to save time and characters on the safer seats when I can, although of course I’ll expound on any fun stuff that comes up.

Legislative District 11 (McSally+9.93, Trump+13.9, Douglas+16.7, Romney+19.3)

The first district in this writeup installment is LD11, a district very close geographically and politically to LD8. Unlike LD8, however, LD11 has slowly been trending towards Democrats, instead of away from them.
Encompassing the southern half of Pinal (including a large chunk of Casa Grande) and bits of Pima, LD11 could swing under the right conditions, but is probably a safe seat this year. That’s disappointing, since the incumbents in the district are pretty darn nasty. Incumbent Senator Venden “Vince” Leach ($98K COH), a sort-of Great Value Mitch McConnell, loves to spend his time filing SB1487 complaints against various liberal towns in Arizona – basically, suing cities over their attempts to go above and beyond state law when it comes to certain issues. Leach leads the SB1487 leaderboard with 4 SB1487 suits, most recently targeting Pima County over COVID-19 safety regulations that were slightly stricter than state law. Joining the suit were his House counterparts, COVID-19 conspiracy-monger Bret Roberts ($22.4K COH) and actual goddamn Oathkeeper and Charlottesville truther Mark Finchem ($27K COH). Facing Finchem and Roberts is the Democratic House nominee for LD11, Dr. Felipe Perez ($24.2K COH). Perez has made few waves online and I haven’t seen him even in the same tier of candidates as Girard in LD8, so he’s probably not going to supercharge this district into Dem. territory. But given the spike in public approval for the healthcare industry due to COVID, he may get lucky. On the Senate side, Leach’s opponent will be one of retired public administrator Linda Patterson ($4.7K COH, Clean) and Marine drill instructor Joanna Mendoza ($14.5K COH). Anything could happen between now and August, but Mendoza currently has a significant organizational, political (endorsements), and fiscal advantage over Patterson, and will probably be the nominee come August. A well-run race could feasibly knock out Finchem or Roberts, but I’ve yet to see that happen. Still, it’s far out enough that I’m not going to slam the door shut on a Perez win just yet.
hunter15991 Rating: GOP primary unopposed, Safe Mendoza, Perez unopposed, Safe Leach, Safe Roberts, Likely Finchem general

Legislative District 12 (McSally+17.19, Trump+24.5, Douglas+17.84, Romney+33.35)

Really not going to focus much on this district to save space, as it’s a snoozefest. House Majority Leader Warren Petersen ($84.8K COH) is running for Senate to replace outgoing Sen. Eddie Farnsworth. Petersen faces Haitian DREAMer, former teacher, and 2018 LD-12 House nominee Lynsey Robinson ($1.4K COH). Robinson’s a great person, but lost her House race against Petersen by the 1v1 equivalent of 20 points, and shows no sign of knocking him off this time around. Petersen’s running mates, Rep. Travis Grantham ($39K COH) and Queen Creek Councilman Jake Hoffman ($107.7K COH), are unopposed in both the primary and general.

hunter15991 Rating: Primaries all unopposed, Safe Petersen general, GOP House unopposed

Legislative District 13 (McSally+21.59, Trump+26.96, Douglas+26.22, Romney+31.62)

Moving on to another Safe GOP district with not much activity – LD13! Stretching from the whiter Yuma neighborhoods all the way to Phoenix exurbs in Maricopa County (and the mirror image of LD4 to its south), LD13 routinely sends Republican slates to the legislature. This year, incumbents Sen. Sine Kerr ($58.5K COH), Rep. Tim Dunn ($60.4K COH), and Rep. Joanne Osborne ($15K COH) are all fighting to hold their seats. Kerr is unopposed in both the primary and general, while Dunn and Osborne are in the opposite situation – they’ve got two elections between now and inauguration day. Democratic paralegal Mariana Sandoval ($3.1K COH, Clean) will put up little resistance for the GOP in the general, but the entrance of former Senator and former Speaker Pro Tem Steve Montenegro ($27.8K COH) could really shake up the LD13 House primary.
Montenegro, a Salvadoran-American legislator who resigned his Senate seat to run in the CD-8 special election primary (he placed 3rd, ultimately losing to then-Sen. Debbie Lesko), was a rising star in the AZ-GOP before his resignation and contemporaneous sexting scandal. This run could be a good way for him to get his foot back in the door, and since his election would single-handedly double the number of non-white Republicans in the legislature, I would figure that some Arizona Republicans are excited that Montenegro is throwing his hat back into the ring. I haven’t seen much about this primary online, but there’s vague chatter on GOP pages dinging Montenegro for his ties to a 2016 National Popular Vote bill in the legislature, which is a big purity sticking point for the further-right members of the Arizona GOP. That being said, the chatter is vague at best, and Montenegro has enough conservative cred (with endorsements from people like Joe Arpaio and former Rep. Trent Franks back during his special election run) that he will primarily face issues over the sexting scandal. I’ll give Osborne and Dunn a slight advantage owing to their incumbency, financial well-being, and the issues in Montenegro’s closet, but this is a really tight race and Montenegro could very well end up back in the legislature this time next year.

hunter15991 Rating: Dem. unopposed, Kerr unopposed, Tilt Osborne, Tilt Dunn, All Safe GOP general

Legislative District 14 (McSally+23.83, Trump+26.24, Douglas+22.88, Romney+26.84)

This is yet another district where Democrats stand no real chance of competing this year, and haven’t in quite some time. Situated in SE Arizona, LD14 once incorporated some ancestrally Democratic mining towns in Greenlee and Graham County, but they’ve grown red enough in the past couple of decades that this district is now held by three GOP legislators. Former House Speaker and current Sen.
David Gowan ($60.9K COH) (who was previously in the news for trying to use a state vehicle to assist in a failed Congressional campaign) faces realtor Bob Karp ($12.9K COH, Clean) in the general, while House incumbents Rep. Gail “Tax porn to build the wall” Griffin ($50.5K COH) and Rep. Becky Nutt ($47.4K COH) face retired union activist Ronnie Maestas-Condos ($686 COH, Clean) and teacher Kim Beach-Moschetti ($13K COH, Clean). All 3 races will probably be easy GOP wins.

hunter15991 Rating: Candidates unopposed in primaries, All Safe GOP general

Legislative District 15 (McSally+8.01, Trump+16.61, Douglas+11.06, Romney+25.44)

LD15, up in Northern Scottsdale and Phoenix, is one of the final frontiers of suburban expansion for Arizona Democrats, along with the Mormon suburbs of the far East Valley (LD12, 16, and 25). A very wealthy area, LD15 has routinely been a GOP stronghold – but their hold on the area has been dissipating rapidly in the Trump era. In 2018, two Dem. House candidates both managed to outperform the “single-shot” performance of a 2016 candidate, and Kristin Dybvig-Pawelko ($48.6K COH, hereafter “KDP”) improved on the district’s 2016 State Senate margin by several points despite facing a significantly more difficult opponent than the 2016 Democrat. KDP is running again this year, as a single-shot candidate for the State House. Her opponents have yet to be set in stone, as both GOP Representatives are vacating their seats to run for higher office, and there are three GOP candidates in the August primary vying for two nominations. Veteran Steve Kaiser ($13.6K COH) and State House policy adviser Justin Wilmeth ($16K COH, $5.2K self-funded) are the nominal establishment picks for both seats, and have been endorsed by a whole host of GOP legislators.
However, they face stiff competition from businessman Jarret Hamstreet ($23.2K COH, $10K self-funded), who boasts endorsements from GOP power-players like the local Chamber of Commerce and the NRA, as well as tacit support from the incumbent Senator in the district, Heather Carter ($101.2K COH) (somewhat of an Arizona Lisa Murkowski). I've been able to find very little chatter on the race, but with Hamstreet's significant fundraising advantage I definitely think he secures one of the two nominations in August. While the district is still quite red, KDP is no political novice, and facing Kaiser, Hamstreet, or Wilmeth will be a lot easier than her run against Carter in 2018. To be honest, the GOP Senate primary is almost as important as the House general election. Heather Carter has gotten on the bad side of quite a few conservative legislators during her tenure in the Senate, holding up GOP budgets with her partner in crime Paul Boyer in 2019 over a stalled child sexual assault statute of limitations bill and this year over an amendment to give additional funding to firefighters for PPE and to students for tuition support. That amendment failed 15-15 thanks to one Kate Brophy McGee - more on her later. Carter's actual attempts at moderation (as opposed to McGee's performative bullshit) have inspired current State Rep. Nancy Barto ($9.9K COH) to challenge her for the Senate. Barto has the support of both Kaiser and Wilmeth (as well as most of the GOP establishment) but has been routinely lagging behind Carter in fundraising (both in terms of current COH and overall amount raised). Carter has been bringing in more "moderate" and pro-public education GOP volunteers from all over Phoenix and is sure to put up a fight in August. As it stands, I think she narrowly pulls it off. There is no Democratic Senate opponent in the general, so winning the primary automatically wins the seat.
If you've got GOP friends in AZ who just can't bear phonebanking for Democratic candidates but complain about the state of the Republican party, send them her way. Carter has beliefs. Barto has none. Slate totals:
Barto coalition (Kaiser/Wilmeth/Barto): $40.5K
hunter15991 Rating: Dem. unopposed, Tilt Carter, Lean Hamstreet, Tilt Kaiser, GOP Sen. unopposed in general, Likely Hamstreet, 2nd GOP unopposed

Legislative District 16 (McSally+17.58, Trump+28.37, Douglas+17, Romney+28.11)

LD16, out on the border between Pinal and Maricopa County, is probably the reddest district in Arizona that could still reasonably be considered "suburban". The only Democratic candidate is write-in House candidate Rev. Helen Hunter ($783 COH), and while her background is stellar (incl. past work with the NAACP, Mesa PD's Use of Force Committee, and other community involvement), there isn't even a guarantee she'll make it onto the November ballot. Meanwhile, Rep. Kelly Townsend ($15.5K COH) has tired of the State House (just like she tired of her furry fursona), and is running unopposed for State Senate. The real drama, therefore, is in the GOP State House primary to win Townsend's old seat. Incumbent Rep. John Fillmore ($12.9K COH) is running for another term, and seems set to win one of the two nominations. Townsend's former seat is contested by respiratory therapist Liza Godzich ($14.6K COH) (who wins the "most moderate" title by default by virtue of taking COVID kinda seriously), CorpComm policy advisor Jacqueline Parker ($16.4K COH), and school choice activist/general lunatic Forest Moriarty ($17.7K COH). Moriarty has the valuable Townsend endorsement, but has not been able to consolidate support easily elsewhere – Parker's CorpComm ties let her bring quite a few assets of her own to bear, as well as endorsements from Congressman Andy Biggs and the NRA. This election will be a test of Townsend's downballot coattails, as well as of the school choice movement in AZ parlaying any support it may have into legislative results. Success for Moriarty here could go as far as inspiring Townsend to run for Governor. We'll see if it comes to that.

hunter15991 Rating: No Dem.
filed (pending write-in), Townsend unopposed, Lean Fillmore, Tossup Parker/Moriarty, GOP unopposed in general

Legislative District 17 (Sinema+3.53, Trump+4.09, Douglas+3.12, Romney+14.16)

One of the reasons I significantly delayed writing these writeups was that I was dreading writing about LD17. Not to doxx myself completely, but in 2018 I had far too many negative encounters with the incumbent Democratic Representative, Jennifer Pawlik ($101.3K COH), that made me routinely question my support of her. I'm still trying to heal the wounds in multiple relationships I have with friends that were caused by Pawlik's actions. I deeply regret ever lifting a finger to help her when I had opportunities in other districts. But because her actions never got physical, because the stakes are so high this year, and because too much unsubstantiated negative talk about a candidate can get a post deleted - I don't wish to publicly expound on her actions (nor put words in the mouth of other people who interacted with her). Feel free to PM if interested. Pawlik as a candidate is a grab-bag. On paper she'd be a strong option for a suburban district – a teacher and education funding activist with a prior win during the 2018 wave. However, behind the scenes she is quite a poor campaigner in ways that directly impact Democratic candidates' odds and presence in the district, including her own - which makes me more apprehensive of her odds of re-election than her fellow Jennifer in HD18 (Rep. Jennifer Jermaine), who's quite similar to Pawlik on the whole. Pawlik's Senate running mate this year is local businessman and first-generation American Ajlan "AJ" Kurdoglu ($51.5K COH). AJ's a good guy and a more serious campaigner than Pawlik, and is on well enough terms with her that no inter-candidate drama will probably happen this fall (which would be a welcome change for the district). He's been slightly outpacing her in fundraising and seems to be hitting the ground running.
The Republican incumbents in this district are Sen. JD Mesnard ($102.6K COH), who moonlights as legal counsel for an organization categorized as a hate group by the SPLC, and Jeff Wenninger ($117.8K COH), a backbench Bitcoin bro. Wenninger and Mesnard have both been in their seats for a while, and this cycle were backing Chandler Vice Mayor (and JD Mesnard's mom) Nora Ellen for the other State House seat – Ellen lost to Pawlik in 2018. But in a stroke of luck for Pawlik, Ellen failed to qualify for the ballot this year. However, in a similar stroke of luck for the GOP, Liz Harris ($27.3K COH, $21.3K self-funded) - a local realtor (like Ellen) - did qualify. I've yet to discern just how close she is with Mesnard and Wenninger, and how much cash she is willing to dump into this race, but as far as random non-GOP-establishment candidates go, the LD17 Republicans could have done far worse than Harris. All the pieces in this district would point to a shift even further left than in 2018, and had I not known what I know about Pawlik this would be a Tilt D House/Tossup Senate. But I don't know if she's changed since 2018 - and if she hasn't, there is no guarantee that she won't snatch defeat from the jaws of victory.

hunter15991 Rating: Primaries uncontested, Tilt Mesnard, Tossup House (Pawlik/Harris), Safe Wenninger

Legislative District 18 (Sinema+18.58, Clinton+10.39, Garcia+12.5, Romney+1.93)

Like LD10 in the previous part of my writeup, the situation in LD18 is another blast of the proverbial Gjallarhorn for the AZ-GOP's suburban chances. Once a very competitive district (fully red as recently as 2016), LD18 is now held by 3 Democrats – Sen. Sean Bowie ($106.3K COH), Rep. Jennifer Jermaine ($65.7K COH), and Rep. Mitzi Epstein ($60.8K COH).
Bowie and Epstein have carved rather moderate paths in their respective houses, having been elected back when this district was more competitive, while Jermaine's tacked a bit more to the left, and has been a prominent voice for increasing education funding (prior to running for the State House she was a public school funding activist and IIRC a Moms Demand Action member) and for missing indigenous women (Jermaine is part indigenous herself). The GOP's troubles in this district started around the filing deadline, when one of their candidates, Alyssa Shearer, withdrew from the primary. Super anti-abortion nut Don Hawker ($619 COH) filed as a write-in candidate to replace her, but it's uncertain if he'll qualify for the general election. Their other House candidate, Bob Robson ($11K COH), is on paper a solid candidate (being a former Speaker Pro Tem of the state house), but lost by the equivalent of 6% to Epstein in 2016 and by 19% when he ran for Kyrene Justice of the Peace (a district that roughly matches the boundaries of LD18; Robson's an old warhorse) - going 0 for 2 since 2014. It's a sign of the times that he and discount Scott Roeder are the two potential House candidates for the GOP in this district. In the Senate, the GOP doesn't fare much better. Real estate agent Suzanne Sharer ($4.2K COH) is trying to run a semblance of a decent race against Sen. Bowie, but keeps using her campaign Twitter (@blondeandsmart – I promise you that's a real handle) to retweet QAnon shit. Sharer is going nowhere in November. That's if she makes it to November, given her past retweets advocating for people to drink bleach to cure COVID.

hunter15991 Rating: Primaries uncontested, All Safe Dem. general

Legislative District 19 (Sinema+44.97, Clinton+40.25, Garcia+32.38, Obama+34.3)

LD19 is a safe Democratic district in the West Valley, where all the drama is happening in the primary. Rep. Lorenzo Sierra ($9.3K COH) and Rep.
Diego Espinoza ($25.2K COH) are both running for re-election, defending their seats against challenger Leezah Sun ($5.1K COH), a local activist. Sierra and Espinoza haven't been particularly conservative in their voting records in the legislature, but have taken some flak from the more progressive wing of the party lately for outside corporate expenditures in this primary. I'm honestly unsure why these PACs are weighing in given that Sun isn't running all that good of a campaign, but I guess better to spend it here than in tighter primaries. Assistant State Minority Leader Lupe Contreras ($7.2K COH) is unopposed in his primary. In the general, there's one GOP candidate each for House and Senate, but both are write-ins and could possibly not qualify for the ballot. For now, Democrats are unopposed in this district in the general.

hunter15991 Rating: Contreras uncontested, Safe Sierra, Safe Espinoza, Uncontested Dem. general

Legislative District 20 (Sinema+3.7, Trump+8.01, Douglas+0.04, Romney+12.87)

LD20 is another suburban district where Democrats could see sizable gains this fall. Won by Sinema and Maricopa County Recorder Adrian Fontes, and almost snagged by David Garcia during the 2014 Superintendent race, LD20 has been on the Arizona Democratic Party's mind for a few cycles now. Their candidates this year are strong – 2018 Senate nominee Doug Ervin ($94.6K COH) has filed for a rematch after losing by 4 in 2018 (where an independent ex-GOP candidate took 7% - Ervin claims Quelland actually hurt him more than district Republicans), and retired teacher Judy Schweibert ($158.2K COH) is running for House. Both are running bang-up campaigns and seem set to make November a problem for local Republicans, and Ervin has eschewed the public funding he took last time in order to be able to fundraise better for the slugfest ahead. The local GOP, however, isn't taking this lying down.
Representatives Shawnna Bolick ($161.8K COH) - who was almost bumped off the ballot for using a PO Box as her filing address - and Anthony Kern ($73.4K COH) - an ex-cop on the Brady “untrustworthy cop” list - have been building their warchests in preparation for this cycle after narrowly hanging on in 2018 (despite both Democrats in that race running with public funding). While Bolick has typically stayed out of especially heinous controversy on social media (despite once posting that all masks come from Wuhan and are thus contaminated with COVID), Kern’s time on the force seems to have stuck with him, and his Twitter feed is full of a lot of pro-cop posts and whatnot. With Schweibert running as a single-shot candidate this year I can see Kern’s tendency of accidentally discharging his foot into his mouth finally coming back to bite him. On the Senate side the past election results are slightly more promising than the House, but the opponent is tougher as well. Sen. Paul Boyer ($50.5K COH) is probably the closest there is to a living John McCain in the Arizona Legislature (not to deify him too much – he’s still conservative), having blocked two GOP budgets in the past two years along with Sen. Heather Carter (see LD15). In 2019 this was over a child sexual assault reform bill (extending the statute of limitations), and in 2020 this was over a lack of funding to firefighters and university students in the emergency “skinny” COVID budget the legislature passed in the spring. His attempts at moderation are visible outside of that: Boyer’s abysmal Q2 fundraising – per his own words – came from not fundraising at all during the 5 month long legislative session despite campaign finance rules only banning lobbyist contributions during the session (and I guess that’s commendable self-policing), and on his website he stops just short of calling for abortion to be banned, which makes him Margaret fucking Sanger among the current AZ-GOP. 
That’s not to say that people shouldn’t support Ervin with all it takes – hell, if anything he’ll need more help to oust Boyer. Ultimately I think Ervin holds a narrow lead in this race with the absence of Quelland and with far better fundraising than what the LD20 slate had last year, but the election is still quite far away. If I had to pick one Democrat to win in this district, it’d be Schweibert. hunter15991 Rating: Primaries uncontested, Tilt Ervin, Tilt Schweibert, 2nd House uncontested
Don't blindly follow a narrative, it's bad for you and it's bad for crypto in general
I mostly lurk around here, but I see a pattern repeating over and over again here and in multiple communities, so I have to post. I'm just posting this here because I appreciate the fact that this sub is a place of free speech and maybe something productive can come out of this post, while r/bitcoin is just fucking censorship, memes and moon/lambo posts. If you don't agree, write in the comments why, instead of downvoting. You don't have to upvote either, but when you downvote you are killing the opportunity to have a discussion. If you downvote or comment that I'm wrong without providing any counterpoints you are no better than the BTC maxis you despise.

In various communities I see a narrative being used to bring people in and make them follow something without thinking for themselves. In crypto I see this mostly in BTC vs BCH tribalistic arguments:

- BTC community: "Everything that is not BTC is a shitcoin," or more recently, as stated by Adam on Twitter, "Everything that is not BTC is a ponzi scheme, even ETH," "what is ETH supply?", and even that they are doing this for "altruistic" reasons, to "protect" the newcomers. Very convenient for them that they are protecting the newcomers by having them buy their bags.
- BCH community: "BTC maxis are dumb", "just increase the block size and you will have truly p2p electronic cash", "It is just that simple, there are no trade-offs", "if you don't agree with me you are a BTC maxi", "BCH is satoshi's vision for p2p electronic cash".

It is not exclusive to crypto but happens in politics too, and you see it over and over again on Twitter and on Reddit. My point is that narratives are created so people don't have to think; they just choose a narrative that is easy to follow and makes sense to them, and stick with it. And people keep repeating these narratives to bring other people in, maybe out of ignorance, because they truly believe it without questioning, or maybe out of self-interest, because they want to shill you their bags.
Because this is the BCH community, and because r/bitcoin is censored (so I can't post there about the problems in the BTC narrative, some of which are IMO correctly identified by the BCH community), I will stick with the narrative I see in the BCH community. The trigger for this post was firstly this post by user u/scotty321: "The BTC Paradox: 'A 1 MB blocksize enables poor people to run their own node!' 'Okay, then what?' 'Poor people won't be able to use the network!'". You will see many posts of this kind being made by u/Egon_1 as well. Then you have also this comment in that thread by u/fuck_____________1 saying that people who want to run their own nodes are retarded and that there is no reason to want to do that. "Just trust block explorer websites". And the post and comment were highly upvoted. Really? You really think that there is no problem in having just a few nodes on the network? And that the only thing that secures the network is miners? As stated by user u/co1nsurf3r in that thread:
While I don't think that everybody needs to run a node, a full node does publish blocks it considers valid to other nodes. This does not amount to much if you only consider a single node in the network, but many "honest" full nodes in the network will reduce the probability of a valid block being withheld from the network by a collusion of "hostile" node operators.
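To make the quoted point concrete, here is a toy probability model (my own illustration, with made-up numbers, not anything from the comment above): if a node picks its peers at random, it only misses a valid block when every one of its peers is hostile and withholds it, so adding honest full nodes shrinks that chance geometrically.

```python
# Toy model: a node with `peers` randomly chosen peers misses a valid
# block only if ALL of those peers are hostile and withhold it.
# Peer selection is modeled as sampling with replacement for simplicity.
def withhold_probability(honest_nodes: int, hostile_nodes: int, peers: int) -> float:
    total = honest_nodes + hostile_nodes
    hostile_fraction = hostile_nodes / total
    return hostile_fraction ** peers

# With a fixed 100 hostile nodes, adding honest full nodes drives the
# withholding probability down fast:
for honest in (100, 1_000, 10_000):
    p = withhold_probability(honest, hostile_nodes=100, peers=8)
    print(f"{honest:>6} honest nodes -> withhold probability {p:.2e}")
```

This is deliberately simplistic (real peer selection and relay are more complex), but it shows why "many honest full nodes" matters even though no single one does much on its own.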
But surely this will not get attention here, and will be downvoted by the people who promote the narrative that there is no trade-off in increasing the blocksize and that anyone who doesn't see it is retarded or a BTC maxi. The only narrative I stick to, and have for many years now, is that cryptocurrency takes power from the government and gives power to the individual, so you are not restricted to your own economy and can participate in the global economy. There is also the narrative of banking the unbanked, which I hope will come true, but it is not a use case we are seeing right now. Some people would argue that removing power from governments is a bad thing, but you can't deny the fact that governments can't control crypto (at least we would want them not to). But if you really want individuals to remain in control of their money and to transact with anyone in the world, the network needs to be very resistant to any kind of attack. How can you have p2p electronic cash if your network has just a handful of nodes and the Chinese government can locate them and just block communication to them? I'm not saying that this is the BCH case; I'm just refuting the claim that there is no value in running your own node. If you are relying on block explorers, the government can just block communication to the block explorer websites. Then what? Who will you trust to get chain information? The nodes need to be decentralized, so that if you take one node down, many more can appear; that way it is hard to censor and there are no few points of failure. Right now BTC is focusing on that use case of being difficult to censor. But with that comes the problem that it is very expensive to transact on the network, which defeats the purpose of anyone being able to participate. Obviously I do think that is also a major problem, and the Lightning Network is awful right now and probably still years away from being usable, if it ever will be.
The best solution is up for debate, but thinking that you just have to increase the blocksize and there is no trade-off is just naive or misleading. BCH is doing a good thing in trying to come up with a solution that is inclusive and promotes cheap and fast transactions, but don't forget that centralization is a major concern and nothing to just shrug off. Saying that "a 1 MB blocksize enables poor people to run their own node" and that because of that "poor people won't be able to use the network" is a misrepresentation designed to promote a narrative. Because 1 MB is not there to allow "poor" people to run their node; it is there to make it feasible for as many people as possible to run a node, to promote decentralization and avoid censorship. Also, an elephant in the room that you will not see discussed in either the BTC or BCH communities is that mining pools are heavily centralized. And I'm not just talking about miners being mostly in China, but also about big pools controlling a lot of hashing power in both BTC and BCH, which is terrible for the purpose of crypto. Other projects are trying to solve that. Will they be successful? I don't know; I hope so, because I don't buy into any narrative. There are many challenges and I want to see crypto succeed as a whole. As always guys, DYOR and always question whether you are blindly following a narrative. I'm sure I will be called a BTC maxi, but maybe some people will find value in this. Don't trust guys that are always posting silly "gotchas" against the other "tribe".

EDIT: User u/ShadowOfHarbringer has pointed me to some threads where this has been discussed in the past, and I will just put my take on them here for visibility, as I will be using this thread as a reference in future discussions I engage in:
When there were only 2 nodes in the network, adding a third node increased the redundancy and resiliency of the network as a whole in a significant way. When there are thousands of nodes in the network, adding yet another node only marginally increases the redundancy and resiliency of the network. So the question then becomes a matter of personal judgement of how much that added redundancy and resiliency is worth. For the absolutist, it is absolutely worth it and everyone on this planet should do their part.
What is the magical number of nodes that makes it counterproductive to add new nodes? Did he do any math? Does BCH achieve this holy-grail safe number of nodes? Guess what: nobody knows at what number of nodes it starts to be marginally irrelevant to add new nodes. Even BTC today could still not have enough nodes to be safe. If you can't know for sure that you are safe, it is better to err on the side of safety. Thousands of nodes is still not enough; as I said, it is much cheaper to run a full node than it is to mine. If it costs millions in hash power to do a 51% attack on block generation, that means nothing if it costs less than $10k to run more nodes than there are in total in the network and cause havoc, slowing people from using the network. Or to use bot farms to DDoS the thousands of nodes in the network. Not all attacks are monetarily motivated. When you have governments with billions of dollars at their disposal and something that could threaten their power, they would do anything they could to stop people from using it, and the cheaper it is to do so, the better.
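The cost asymmetry above can be put in rough numbers. All the figures here are assumptions of mine for illustration (the post itself only says "millions" vs "less than $10k"), not real network data:

```python
# Back-of-the-envelope comparison of two attack budgets.
# Every constant below is an ASSUMED figure, chosen only to illustrate
# the argument in the post; real costs vary widely.
HASH_ATTACK_COST_USD = 1_000_000   # assumed cost of renting >50% hash power
NODE_COST_PER_MONTH_USD = 10       # assumed VPS cost to run one full node
EXISTING_NODES = 1_000             # assumed count of honest full nodes

# Cost to field more Sybil nodes than every honest node, for a month:
sybil_cost = (EXISTING_NODES + 1) * NODE_COST_PER_MONTH_USD

print(f"Sybil node attack: ${sybil_cost:,}/month")
print(f"51% hash attack:   ${HASH_ATTACK_COST_USD:,}")
```

Under these assumptions the Sybil side comes out roughly two orders of magnitude cheaper, which is the post's point: hash power is not the only attack surface worth pricing.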
You should run a full node if you're a big business with e.g. >$100k/month in volume, or if you run a service that requires high fraud resistance and validation certainty for payments sent your way (e.g. an exchange). For most other users of Bitcoin, there's no good reason to run a full node unless you feel like it.
Shouldn't individuals benefit from fraud resistance too? Why just businesses?
Personally, I think it's a good idea to make sure that people can easily run a full node because they feel like it, and that it's desirable to keep full node resource requirements reasonable for an enthusiast/hobbyist whenever possible. This might seem to be at odds with the concept of making a worldwide digital cash system in which all transactions are validated by everybody, but after having done the math and some of the code myself, I believe that we should be able to have our cake and eat it too.
This is a recurrent argument, but again no math is provided: "just trust me, I did the math".
The biggest reason individuals may want to run their own node is to increase their privacy. SPV wallets rely on others (nodes or ElectrumX servers) who may learn their addresses.
It is a reason, and a valid one, but not the biggest reason.
If you do it for fun and experimentation, that's good. If you do it for extra privacy, it's OK. If you do it to help the network, don't. You are just slowing down miners and exchanges.
Yes, it will slow down the network, but that just shows how people don't get the trade-off they are making.
I will just copy/paste what Satoshi Nakamoto said in his own words. "The current system where every user is a network node is not the intended configuration for large scale. That would be like every Usenet user runs their own NNTP server."
Another all-or-nothing argument, quoting Satoshi to try to prove their point. Just because every user doesn't need to also be a full node doesn't mean that there aren't serious risks in having few nodes.
For this to have any importance in practice, all of the miners, all of the exchanges, all of the explorers and all of the economic nodes should go rogue all at once. Collude to change consensus. If you have a node you can detect this. It doesn't do much, because such a scenario is impossible in practice.
Not true, because as I said, you can DDoS the current nodes or run more malicious nodes than there currently are, because it is cheap to do so.
Non-mining nodes don't contribute to adding data to the blockchain ledger, but they do play a part in propagating transactions that aren't yet in blocks (the mempool). Bitcoin client implementations can apply different validation to transactions they see outside of blocks versus transactions they see inside of blocks; this allows "soft forks" to add new types of transactions without completely breaking older clients. While a transaction is in the mempool, a node receiving a transaction of a new/unknown type could drop it as not a valid transaction (not propagate it to its peers), but if that same transaction ends up in a block and that node receives the block, it accepts the block (and the transaction in it) as valid, and therefore doesn't get left behind on the blockchain and become a fork. This participation in the mempool is a sort of "herd immunity" protection for the network, and it was a key talking point for the "User Activated Soft Fork" (UASF) around the time the Segregated Witness feature was trying to be added in. If a certain percentage of nodes updated their software to not propagate certain types of transactions (or to not communicate with certain types of nodes), then they can control what gets into a block, provided a certain threshold of nodes adheres to those same validation rules (someone wanting to get that sort of transaction into a block would need to communicate directly with a mining node, or only through nodes that weren't blocking that sort of transaction). It's less specific than the influence on the blockchain data that mining nodes have, but it's definitely not nothing.
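The asymmetry the comment describes, stricter rules for loose mempool transactions than for transactions already mined into a block, can be sketched in a few lines. This is my own minimal illustration; the type names are made up and real clients are far more involved:

```python
# Sketch of soft-fork-friendly validation: an OLD client drops unknown
# transaction types from its mempool (doesn't relay them), but still
# accepts blocks that contain them, so it never forks itself off the chain.
KNOWN_TX_TYPES = {"p2pkh", "p2sh"}  # types this (old) client understands

def accept_into_mempool(tx_type: str) -> bool:
    # Strict rule for loose transactions: unknown types are not relayed.
    return tx_type in KNOWN_TX_TYPES

def accept_in_block(tx_type: str) -> bool:
    # Permissive rule for mined transactions: the old client accepts the
    # block regardless of the type (only a sketch; real clients still
    # enforce the consensus rules they do know about).
    return True

# A post-soft-fork transaction type as seen by this old client:
print(accept_into_mempool("segwit_v0"))  # dropped from mempool, not relayed
print(accept_in_block("segwit_v0"))      # block containing it still accepted
```

The UASF idea rides on exactly this lever: if enough relaying nodes refuse to propagate something, it becomes hard to get it in front of a miner at all.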
The first reasonable comment in that thread, but it is buried deep down with only 1 upvote.
The addition of non-mining nodes does not add to the efficiency of the network, but actually takes away from it because of the latency issue.
That is true, and it is actually a trade-off you are making: sacrificing security to gain scalability.
The addition of non-mining nodes has little to no effect on security, since you only need to destroy mining ones to take down the network
It is true that if you destroy the mining nodes you stop the network from producing new blocks (temporarily), even if you have a lot of non-mining nodes. But it is still better than taking down mining nodes that are also the only full nodes. If the miners are not the only full nodes, at least you still have full nodes with the blockchain data, so new miners can download it and join. If all the miners are also the only full nodes and you take them down, where will you get all the past blockchain data to start mining again? Just pray that the miners that were taken down come back online at some point in the future?
The real limiting factor is ISP's: Imagine a situation where one service provider defrauds 4000 different nodes. Did the excessive amount of nodes help at all, when they have all been defrauded by the same service provider? If there are only 30 ISP's in the world, how many nodes do we REALLY need?
You can't defraud if the connection is encrypted. Use Tor, for example; it is hard for ISPs to know what you are doing.
Satoshi specifically said in the white paper that after a certain point, the number of nodes needed plateaus, meaning that after a certain point, adding more nodes is actually counterproductive, which we also demonstrated (the latency issue). So, we have adequately demonstrated why running non-mining nodes does not add additional value or security to the network.
Again, what is the number of nodes that makes it counterproductive? Did he do any math?
There's also the matter of economically significant nodes and the role they play in consensus. Sure, nobody cares about your average joe's "full node" where he is "keeping his own ledger to keep the miners honest", as it has no significance to the economy and the miners couldn't give a damn about it. However, if say some major exchanges got together to protest a miner activated fork, they would have some protest power against that fork because many people use their service. Of course, there still needs to be miners running on said "protest fork" to keep the chain running, but miners do follow the money and if they got caught mining a fork that none of the major exchanges were trading, they could be coaxed over to said "protest fork".
In consensus, what matters about nodes is only their number; the economic power of a node means nothing, since the protocol doesn't see the net worth of the individual or organization running it.
Running a full node that is not mining and not involved in spending or receiving payments is of very little use. It helps to make sure network traffic is broadcast, and is another copy of the blockchain, but that is all (and is probably not needed in a healthy coin with many other nodes).
He gets it right (broadcasting transactions and keeping a copy of the blockchain), but he dismisses the importance of it.
The miner tax is nothing but a dressed-up 51% attack
With Bitcoin Cash being a minority SHA256 hash power coin and the miner centralization in China this has been a long time coming. I hope we can weather the attack, otherwise I think it would spell the end for mining as a way to secure crypto currencies in favour of other approaches (such as trust webs or POS). As to the idea itself: If you believe in centralized monetary systems, just use fiat issued by central banks. They have big "dev funds" ;) I hope Roger and Amaury realize how much damage this is doing and retract their support for this 51% attack. I also hope "selfish" and ideologically pure miners join forces to use the 12.5% decreased earnings of the attackers to defeat them and earn more. (Attackers could use the 12.5% to mine of course, but that is also the case in a normal 51% attack)
Alternative ways to fund developers:
Companies using the system and therefore having developers work on it. (See Linux environment)
Students and junior programmers earning experience. (See Linux environment) I understand Amaury wants to eat; all he has to do is "retire" to a crypto related job and earn good money. Then repeat this process with newly educated programmers over and over.
Programmatic protection from 51% attacks:
Allow nodes to configure certain blocks to force-orphan. This would allow users to easily orphan long forks caused by 51% attacks. I hear Bitcoin Unlimited are big on configurability; I think they should add this option in May. Node operators (i.e. users and hodlers) are and must remain the ones in control, as Satoshi wanted it.
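The force-orphan idea could look something like the sketch below. To be clear, this is entirely hypothetical: it is not an existing Bitcoin Unlimited feature, and every name in it is made up to illustrate the proposal.

```python
# Hypothetical "force-orphan" option: the operator lists block hashes to
# reject, and the node treats any chain containing one of them as invalid,
# orphaning the attacker's fork no matter how long it is.
FORCE_ORPHANED = {"000000...attackblock"}  # operator-configured hashes

def chain_is_acceptable(block_hashes: list[str]) -> bool:
    """Reject any chain that contains a force-orphaned block."""
    return not FORCE_ORPHANED.intersection(block_hashes)

honest_chain = ["000000...a1", "000000...a2"]
attack_chain = ["000000...a1", "000000...attackblock", "000000...b3"]

print(chain_is_acceptable(honest_chain))  # accepted
print(chain_is_acceptable(attack_chain))  # rejected, fork is orphaned
```

The obvious trade-off (which the post doesn't dwell on) is that operators who configure different orphan lists can permanently split consensus among themselves.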
Quite recently, the world has seen a remarkable event: an anonymous bitcoin whale sent a total of $2.24 billion in a series of transactions. Large transactions are certainly not uncommon for the bitcoin network. Whale Alert, a blockchain tracker and analytics system, regularly reports large and interesting transactions. Although $2.24 billion is the largest ever cryptocurrency transfer, the most remarkable thing about this event is not the transfer amount but the incredibly small fee the sender paid for the transaction: less than $1. With no checkups and intermediaries. If such a transfer had happened a few years ago, it would have been considered abnormal and unreal. Today it is a matter of fact. With each passing day, using cryptocurrency for making transfers is getting more attractive than bank transfers. The UMI network, which enables instant payments with no fees, fits in best with new realities. Let's explore this issue.

Freedom from Bank Charges

The BTC whale sent the above-mentioned amount in seven successive transactions within one hour. The total amount was 241,500 BTC, which was equivalent to $2.24 bln. Each transfer cost around 0.0001 BTC or just $0.93, giving a total of about $6.51 for the seven transfers. Let's compare now how much you would be charged for an identical transfer in a bank. In big banks, the international transfer fee is at least 1% of the amount, but it is often higher than that depending on the conditions. For instance, the VISA system charges 1 to 10% of the transaction amount for an international transfer (minimum of $10). Therefore, the more you transfer, the more you pay. To transfer the above-mentioned amount via a bank, a customer would have to part with as much as $22.4 mln (!!!) in the best-case scenario, that is, if the fee is the minimum 1%.
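The comparison works out as follows; this just re-runs the arithmetic already given in the post (7 transfers at roughly $0.93 each versus a 1% bank fee on $2.24 bln):

```python
# Reproducing the fee arithmetic from the post.
transfers = 7
fee_per_transfer_usd = 0.93        # ~0.0001 BTC per transfer, per the post
total_transferred_usd = 2.24e9     # 241,500 BTC at the implied price

crypto_fees = transfers * fee_per_transfer_usd   # total on-chain fees, ~$6.51
bank_fee_min = total_transferred_usd * 0.01      # best-case 1% bank fee

print(f"On-chain fees:  ${crypto_fees:.2f}")
print(f"Bank fee at 1%: ${bank_fee_min:,.0f}")
```

The flat per-transaction fee versus a percentage-of-amount fee is the entire difference: the on-chain cost stays in single dollars no matter how large the transfer gets.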
In other words, a bank would make a fortune, virtually at the drop of a hat and with no effort whatsoever, simply by taking someone's money. What makes it more absurd is that banks intend to raise their fees further amid the coronavirus pandemic. Cryptocurrencies are a completely different story. In most cryptocurrency networks, fees do not depend on the transaction amount: the same fee could be charged for transferring $1 or $1 bln. In the cryptocurrency world, it doesn't matter how much money you transfer. If you make a big transfer, no one makes you pay the “tithe”. With UMI, you don't have to pay anything to anyone, not a dime. But we'll get back to this a little later. Freedom from Excessive Limits and Unneeded Checkups First, let's consider another important factor: the very possibility of making unhindered large transfers through a bank, especially foreign ones. The irony is that even if someone chose to pay this multi-million dollar fee, the transfer would be far from guaranteed to succeed. In most countries, including the Russian Federation, a $2.24 bln transaction would be virtually impossible to run through a state-owned bank, let alone private banks. Even going through a raft of mandatory procedures, and wasting lots of nerves and time, wouldn't save the day. This is why a payment of this size is virtually impossible:
The overwhelming majority of the world's banks simply don't have such large amounts on their correspondent accounts. Even if we assume they do have sufficient funds on the books, this money doesn't just sit idle: banks put it to work for their own benefit, for instance by granting loans or paying interest on deposits. No bank would agree to send all its reserve funds to another bank on your orders. Moreover, banks have no right to violate the law on reserve requirements, including currency norms, and processing such a large amount contradicts the established rules and regulations. So, even if the money is technically recorded on the customer's account, transferring it to another bank, especially in a foreign country, is still a virtually impossible task.
In almost all states, transactions of this scale are only allowed at the level of governments, the International Monetary Fund, the World Bank, or mega-tycoons with a declared multi-billion dollar income, such as Bill Gates, Warren Buffett, and the like. In other words, only customers with a special status can make especially large transfers without restrictions. Any “abnormal” transaction falls under suspicion and is automatically frozen. If you have always run $500 transactions on a monthly basis, any incoming or outgoing $10,000 transfer would most probably be frozen, let alone billions of dollars. An average owner of a large business will only be allowed to transfer billions of dollars after getting approval from FATF on an individual basis. Obviously, they must also be verified through KYC (Know Your Customer) and AML (Anti-Money Laundering) procedures and must establish the provenance of every dime they transfer. They have to do all this to transfer THEIR OWN money, and pay a huge fee on billions of dollars for the privilege.
The situation is even worse because the same is true for receiving transfers. In other words, if a large amount is successfully transferred to you, there is no guarantee that you can use the money. Sadly, even if the money leaves the sender's bank, the recipient's bank can instantly freeze it. On the very same day, you could get a visit from bank or government officials, along with the state security service, and face a special interrogation. If you cannot provide provenance data for the funds, the transfer could easily remain frozen for good. Naturally, this system opens the door to various kinds of abuse of power and manipulation by bankers, governments, and state services.
For the existing banking system, any big transaction makes you a suspect, which can result in a frozen transfer. More importantly, this isn't only true for multi-billion or multi-million transfers. Any transaction involving hundreds of thousands, tens of thousands, or simply thousands of dollars may be deemed suspicious and blocked. It means that anyone who runs relatively big transactions risks encountering problems at any time. Cryptocurrencies are a step toward free transfers The above-described situation shows that digital money frees people from many problems related to bank transfers: high fees, payment limits, disclosure of personal data, verification procedures. With digital money, you don't have to prove or explain anything. This is a real revolution that frees people from fees and manipulation. Cryptocurrencies allow people to be the masters of their funds, and no one has the power to change this. No one charges you crazy fees and no one can steal your coins. With each passing minute, cryptocurrencies are becoming part of our life, and beyond profit from trading, investment, mining, or staking, they are increasingly regarded as a convenient way of sending funds. Only cryptocurrencies make people feel completely safe and allow them to transfer whatever amount wherever they want. This is a huge step toward changing the existing financial system, and it has already been made. But UMI Goes Even Further It may appear that the problem of bank fees concerns only big business. In fact, ordinary people living paycheck to paycheck are even more sensitive to this issue. Being on a tight budget, most people have to pay for every bank transaction. You always pay fees charged by banks: when you pay utility bills, buy online, deposit money to your bank card, receive money, transfer money between your accounts, or withdraw cash from an ATM. Overall, bank fees cost people a lot of money.
It's worth calculating how much you pay banks every year for their mediation. Now let's see how you can make transactions using UMI. In terms of fees, UMI is more profitable than banks, and even more profitable than most other cryptocurrencies, including bitcoin. There are no fees in the UMI network at all, not even hidden ones, and all transactions are instant. That is, if you sent $2.24 billion through the UMI network, the transfer would be instant and completely free. There are no limits, verifications, or other nerve-racking procedures. Instant, free, and secure: here and now. This is the key advantage of UMI as a payment instrument. Our cryptocurrency empowers all people, from large businessmen to factory workers, with profitable and absolutely safe funds transfers. UMI gives all people around the globe equal opportunities. This is the next step toward a free financial world, and we are the first to take it. Sincerely yours, UMI team
I'm just getting into Bitcoin and am mildly obsessed. I'm finding it hard to understand why the 21 million bitcoin limit can't or won't be adjusted. Self interest has steered the market's direction for so long, why would it not with Bitcoin?
I think everyone knows what a 51% attack is: one single entity, like a person or a mining pool, owns 51% of the hashrate. Some questions came to mind regarding this situation. I read in a lot of online blog posts and websites that it would be possible to double spend if you own the majority of hashing power in Bitcoin. I don't understand that point, because if your transaction is confirmed and included in a block, it is not possible to get it out again. So you are still unable to double spend just because you find 51% of all blocks, because your consensus rules are still the same? The next point I don't understand is the argument that you can produce a longer chain than the "original" (provided that you own 51% for a very long time), because in your chain you will still just find a new block every 10 minutes on average, because of the difficulty adjustment. So your chain won't get longer and will be unable to become the "original" one. The only threat that comes to mind is that you could censor transactions, because you can select which transactions get confirmed in most blocks. I hope you can help me clarify this. Thanks
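On the difficulty point in the question above: difficulty is set for the combined hashrate of the whole network, so each side's expected block interval is the 10-minute target divided by its hashrate share. A quick check of expected values (ignoring variance and retargeting):

```python
# Expected block interval per side when difficulty targets the combined
# network hashrate: interval = 10 min / hashrate_share.
def expected_interval(share, target_min=10.0):
    return target_min / share

attacker = expected_interval(0.51)   # ~19.6 min per block on average
honest   = expected_interval(0.49)   # ~20.4 min per block on average

# The attacker's private chain is expected to grow slightly faster, so
# given enough time it can overtake and replace the public chain.
assert attacker < honest
```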
AMA Recap: Telos Foundation with Crypto Hunters On August 2, 2020 at 12:00 WIB Indonesia Time (August 1, 2020 at 10:00 PM PST) in the Crypto Hunter Telegram Group, the Telos AMA started with Mr. Douglas as guest speaker and Gus Fahlev from Crypto Hunters as moderator. During the campaign, 10 lucky AMA participants asking questions on Google Forms and in the AMA session shared a total TELOS (TLOS) prize of $100. The following is a summary of the AMA questions and answers announced by the moderator. Segment 1 Question 1: Can you explain to us, what is Telos? Answer: Telos is a blockchain platform for smart contracts. It is a low-latency (a new block every half second), high-capacity (currently in the top 2 blockchains in transactions per day, according to Blocktivity.info), no-transaction-fee blockchain. Telos also has many unique features that allow developers to make better dapps, such as our Telos Decide governance engine. Question 2: What ecosystem is used by Telos? Answer: Telos is its own Layer-1 blockchain, not a token on another blockchain. The technology behind Telos is EOSIO, the same technology used by EOS and WAX, for example. Question 3: I see that Telos uses the EOSIO platform; what are the very significant advantages that distinguish Telos from other projects? Answer: Telos uses the EOSIO platform but we have built several additional tools. Some of these add more security and resiliency to the blockchain, such as testing block producers and removing non-performant ones, but most are related to development. Telos provides attractive development tools that aren't available elsewhere. Telos Decide is a governance platform that lets any group create self-governance tools easily. These run on Telos at very little cost and can provide all kinds of voting, elections, initiative ballots, committee management, and funds allocation.
Telos also has Telos EVM, an Ethereum virtual machine that can run Ethereum Solidity contracts at hundreds of times the speed of Ethereum and with no costs. Another Telos technology that is deploying soon is dStor, a decentralized cloud storage system associated with Telos so that dapps can store files controlled by blockchain contracts. Question 4: At what stage is the Telos roadmap now? What are the latest updates currently being realized? Answer: Telos launched its mainnet in December 2018 and has so far produced over 100,000,000 blocks without ever stopping or rolling back the chain. This is likely a record for a public blockchain. We have an ongoing group, the Telos Core Developers, who build and maintain the code and are paid by our Telos Works funding system, which is voted on by the Telos token holders. Telos is a leader in blockchain governance and regularly amends its governance rules through smart-contract-powered voting called Telos Amend. You can see the current Telos governance rules stored live on the blockchain at tbnoa.org. The most recent updates were adding new features to Telos Decide to make it more powerful, implementing EOSIO v2.0, which increased the capacity of Telos to about 8-10 times what it previously was, and implementing Telos EVM on our testnet. We are currently working on better interfaces for Telos Decide voting, and building more infrastructure around Telos EVM so that it is ready to deploy on our mainnet. Question 5: Is Telos currently available on an exchange, and is it ready to be traded? Answer: Telos has been trading on exchanges for over a year. The largest exchanges are Probit, CoinTiger, CoinLim, and P2PB2B. Other exchanges include Newdex and Alcor. We expect to be listed on larger exchanges in the near future. Question 6: Now is the time when DeFi tokens are beginning to develop; can Telos be categorized as a DeFi project? And what strategies has Telos prepared for this year and the years to come?
Answer: Telos is a smart contract platform, but it already has many DeFi tools built for it, including REX staking rewards with a current yield of ~19% APR; smart-contract-controlled token swaps with no counterparty (like Bancor) called Telos Swaps; and a common liquidity pool/order book shared by multiple DEXs to improve liquidity, called EvolutionDEX. Wrapped BTC, ETH, XRP, EOS, and other tokens can be brought to Telos and exchanged or used via smart contracts through Transledger. We have more DeFi tools coming all the time, including two new offerings in the next few weeks that will be the first of their kind. Question 7: Governance is an important topic in blockchain and Telos is considered a leader in this area. Why is that? Answer: Telos is among the top blockchain projects in terms of how it empowers its users to guide the growth of the chain, along the lines of Tezos or the new DeFi tokens that offer governance coins. Telos users continuously elect the validating nodes, called block producers, that operate the network based on a set of governance documents such as the Telos Blockchain Network Operating Agreement (TBNOA). These are all stored entirely on-chain (viewable at tbnoa.org) and can be modified by smart contract through blockchain voting using Telos Amend. You can see examples of this at https://chainspector.io/governance/ratify-proposals Telos also has a robust user-voted funding mechanism called Telos Works that has funded many projects and is one of the more successful blockchain incubators around. Voting for all of these can be done in a number of ways, including block explorers, wallets like Sqrl (desktop) and Telos Wallet (mobile), telos.net, and Chainspector (https://chainspector.io/governance/telos-works). But Telos goes beyond any other chain-level governance by making all of these features and more available to any dapp on Telos through the Telos Decide governance engine, making it easy for any dapp or DAO to add robust, highly customized voting.
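The shared-order-book idea mentioned above (EvolutionDEX) can be sketched in a few lines: orders submitted through different DEX front-ends land in one common book, so a buy on one venue can match a sell on another. A toy match with no partial fills, and all names hypothetical:

```python
# Toy sketch of a common order book shared by multiple DEX front-ends.
# Orders are (dex_name, side, price, qty); names are illustrative only.

book = []

def submit(dex, side, price, qty):
    """Add an order to the shared book, crossing it if possible."""
    for order in list(book):
        odex, oside, oprice, oqty = order
        crosses = (side == "buy" and oside == "sell" and price >= oprice) or \
                  (side == "sell" and oside == "buy" and price <= oprice)
        if crosses:
            book.remove(order)
            return (dex, odex, min(qty, oqty))  # trade across front-ends
    book.append((dex, side, price, qty))
    return None

submit("dex_a", "sell", 100.0, 5)          # resting sell placed via DEX A
trade = submit("dex_b", "buy", 101.0, 5)   # buy via DEX B matches it
assert trade == ("dex_b", "dex_a", 5)
```

The point of the shared book is exactly this cross-venue match: neither front-end needs its own liquidity, because both read and write the same order set.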
Segment 2, from Google Forms Question 1: DeFi projects are now trending; will Telos also go into DeFi projects, to attract investors or the community? Answer: Yes, we have several DeFi tools on Telos that can work together: Telos Swaps is an automated, zero-counterparty token-swapping smart contract where you can exchange any Telos tokens you may want at any time. Telos has DEXs and uses a common order book called EvolutionDEX that's available to any DEX, so that a buy order on one can be matched against a sell order on another. This greatly increases liquidity for traders. We have staking rewards through the Resource EXchange (REX), with rewards currently at about 19% APR. We also have "wrapped" BTC, ETH, and other tokens that can be traded on Telos or used by its smart contracts at half-second transaction times with no transaction fees. This makes Telos a Bitcoin or Ethereum second layer, or state channel, that's much faster even than Lightning Network and has no fees once the BTC has been brought to Telos. Question 2: Telos' aim is to build a new global economy; could you explain how the whole ecosystem works? There are already many centralized competitors, so what is the decentralization aspect of Telos? Answer: Telos is one of the most decentralized blockchains in the world. It is operated by 51 validators (block producers) who validate blocks in any month. These are voted for on an ongoing basis by Telos account holders. Telos is also economically decentralized, with no large whales as in Bitcoin, Ethereum, XRP, or EOS, because Telos never performed an ICO and limited the size of genesis accounts to 40,000 TLOS max. Telos is also geographically decentralized, with users and block producers on every continent but Antarctica and in numerous countries. There is a large concentration in North America and Western Europe, but also in Asia and Australia, and large contingents in Latin America and Africa.
Telos has had a block producer in Indonesia since the beginning, and some dapps on Telos are based in Indonesia as well, like SEEDS, for example. Question 3: Most investors focus only on the token price in the short term instead of the real value of the project. Can you tell me the benefits for investors of holding #TELOS for the long term? Answer: That's true about crypto speculators and traders, certainly. Traders are usually looking for coins with good positive momentum that they hope will continue. But these are often pump and dumps, where a few people get in early, pump the price, and then get out at the expense of new investors. That's very unfortunate. Telos isn't like this. One reason is that there aren't large whales who can easily manipulate the price. Telos seems to be greatly undervalued compared to its peers. Telos has capacity like EOS and well above XRP, XLM, Tron, or Ethereum, but its value is minuscule relative to these. Telos is a leader in blockchain governance like Tezos, but its market cap is tiny in comparison. Telos onboarded 100,000 new accounts last month and is appearing in the leading crypto press every week with new dapps or developments. So there's some disconnect between the value of Telos and the price. In my experience, these tend to equalize once more people learn about a project. Question 4: What are the problems with EOS, and how will Telos solve them? Answer: Telos originally set out to solve problems with EOS. It was successful in this, and now Telos stands on its own and our roadmap is more about empowering users. In short, these are some of the EOS problems we have already solved: RAM speculation - Telos had a plan to reduce RAM speculation through a published guidance price that has been extremely successful. The RAM price is guided by market forces but has remained within 10% of the guidance price since launch.
CPU resources - Telos implemented the Telos Resource Improved Management Plan many months ago, a 7-point approach to making EIDOS-type resource mining unprofitable on Telos. It has largely been successful, and Telos has not experienced any resource shortages. Exchange collusion/voting - Telos governance does not permit exchanges to vote with user tokens. This prevents the voting situations seen on EOS or STEEM. Block producer collusion - Telos has minimum requirements for block producers and does not allow anyone to own more than one block producer. Those who are found doing so (there have been about 3 cases so far) have been removed and sanctioned in accordance with the rules of the TBNOA. Question 5: What ecosystems does Telos use? And why does Telos prefer the EOSIO network over BEP2 or ERC20? What layer does Telos use? Can you please explain? Answer: Telos uses the EOSIO protocol because it is the fastest and most powerful in the world, and it also receives the fastest upgrades and ongoing development compared to other blockchain technologies. EOS and WAX also use the EOSIO protocol, but they are completely different chains. Telos is a Layer-1 protocol, meaning that it is its own blockchain that other dapps and smart contracts deploy upon. One thing that happens when a blockchain like Telos has much, much higher speed and capacity than others like Bitcoin or Ethereum is that Telos can actually run those other blockchains better on its own platform than they run natively. For example, a number of tokens can come in to Telos as wrapped tokens. BTC, ETH, and XRP are all current examples of tokens that can exist on Telos as wrapped tokens. Once there, they can all be moved around with half-second transaction times and no transaction fees, so Telos is a better second layer for Bitcoin or Ethereum than Lightning Network or Loom.
Telos can also emulate other chains, which we are doing with Telos EVM, an emulation of the Ethereum Virtual Machine that runs about 300 times faster and with no gas fees or congestion compared to native Ethereum deployment. Telos can run Ethereum (Solidity) smart contracts without any changes required. Telos EVM is already deployed on the Telos testnet and will move to our mainnet soon. So anyone who wants to run ERC-20 tokens on Telos can do so easily, and they will be faster and much cheaper than the same contract running on Ethereum. Segment 3: free asking Question: I am happy to see the new things created by the Telos team. What concept did you build in 2020 to make Telos superior? Answer: Currently, I think Telos Decide is the most unique and powerful feature we have built. There are all kinds of organizations that need to vote: apartment buildings, school boards, unions, tribes, youth sports leagues, city councils. Voting is hard, time-consuming, and expensive for many. Telos Decide makes voting easy, convenient, and transparent. That will be a major improvement and will disrupt old-style voting. The same goes for businesses and corporate governance. Even before COVID this was important, but now people can't really gather in one place, so fraud-proof remote voting is very important. No one has the tools that Telos has. And if they try to copy us, well, we are already way out ahead, working on the next features. Question: If we look at partnerships, Telos has many partnerships! So what is the importance of those partnerships for Telos? And how will you protect the value of Telos for your partners and investors? Answer: Many of the partnerships are dapps that have decided to deploy on Telos and receive some level of help from the TCD or Telos Foundation to do so. Once a dapp deploys on a chain, it really is like a long-term partnership. Many dapps will become block producers as well and join in the governance of Telos.
I suspect that in a few years, most block producers will be the large dapps on the platform, with just a few remaining like my company, GoodBlock. Of course, we will have our own apps out as well, so I guess we'll be developers too. Telos is very fiscally responsible for investors. We spend little. There has not been any actual inflation on the chain in almost a year (the token supply has remained unchanged at about 355M TLOS). We are actively working with dapps to bring more to Telos, and with exchanges and other services like fiat on- and off-ramps, to increase value for users. Question: In challenging crypto market conditions it is really difficult for any project to survive, and we are witnessing that there are many platforms. What is the Telos project's plan for surviving this long blockchain marathon? In this plan, what motivates long-term investors and believers? Answer: True. While we currently have a low token price, Telos as a DPOS chain can be maintained and grow without a massive army of miners and still maintain BFT. But the risk is really not whether Telos can continue. Already there are enough dapps that if the block producers went away somehow (not gonna happen), the dapps would just run the chain themselves. But with 100,000 new users last month and new dapps all the time, we are looking to join the top 5 dapp platforms on DappRadar soon. Survival as a project is not in question. One of the big reasons is that we never did an ICO and Telos is not a company. So the regulatory risks aren't there, and there's no company to go bankrupt or fail. We have already developed a bootstrapped system to pay block producers and core developers. So we aren't like a company that will run out of runway at some point. Question: Could you explain what dStor is? What will it contribute to your ecosystem? Answer: dStor is a decentralized cloud storage system that will have the performance of AWS or Azure with much lower costs and true decentralization.
It's based on a highly modified version of IPFS, and we have applied for patents for our implementation. It means that dapps will be able to store data like files, images, sound, etc. in a decentralized way. Question: Trust and security are very important in any business. What makes investors, customers, and users feel safe and secure when working with Telos? Answer: Telos is decentralized in a way that's more like Bitcoin than other blockchains (but without the whales who can manipulate the price). There was never any single company that started Telos, so there's no company whose CEO could make decisions for the network. There are numerous block producers who decide on any operational issue that isn't clearly described in the TBNOA governance documents. And to take an action, 15 of the 21 currently active BPs need to sign a multisig transaction, so that's a high threshold. But also, the TBNOA speaks to a large number of issues, so the BPs can't just make up their own rules. Since there are really no whales, no one can vote in any kind of change or bring in their own BPs with their votes. This is also very different from other chains where there are whales. Telos is not located in any one country, so our rules can't be driven by one nation's politics. All in all, this level of decentralization sets Telos apart from almost any blockchain project in existence. People don't have to trust Telos, because the system is designed to make trust unnecessary.
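The 15-of-21 threshold mentioned in the answer above is an ordinary m-of-n multisig rule. A minimal sketch of the counting logic (signers modelled as producer names; a real chain verifies cryptographic signatures instead):

```python
# Minimal m-of-n multisig approval check, as in the 15-of-21 block
# producer threshold described above. Signers are modelled as the set
# of producer names that signed; real chains verify signatures.
def approved(signers, active_producers, threshold=15):
    valid = set(signers) & set(active_producers)
    return len(valid) >= threshold

producers = [f"bp{i}" for i in range(21)]
assert approved(producers[:15], producers)        # exactly 15 of 21 passes
assert not approved(producers[:14], producers)    # 14 is not enough
assert not approved(["rogue"] * 20, producers)    # non-producers don't count
```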
A Glance at the Heart: Proof-of-Authority Technology in the UMI Network
Greetings from the UMI Team! Our Whitepaper describes in detail the key pros and cons of the two mechanisms on which the great majority of other cryptocurrencies are based: ● Proof-of-Work (PoW) — mining technology. Used in Bitcoin, Ethereum, Litecoin, Monero, etc. ● Proof-of-Stake (PoS) and its derivatives — forging technology. Used in Nxt, PeerCoin, NEO, PRIZM, etc. After a careful analysis of PoW and PoS, both of which are designed to fight centralization, we concluded that they both fail in this mission and, in the long run, lead to network centralization and poor performance. For this reason, we took a different approach. We use the Proof-of-Authority (PoA) algorithm coupled with master nodes, which together give the UMI network decentralization and maximum speed. The Whitepaper covers the essentials; this article will give you a clear and detailed explanation of the technology implemented in the UMI network. Let's glance at the heart of the network right now. Proof-of-Authority: How and Why It Emerged It's been over a decade since the first transaction in the Bitcoin network. Over this time, blockchain technology has undergone qualitative changes, because the cryptocurrency world, seeing the Proof-of-Work defects emerge in the Bitcoin network year after year, has actively searched for ways to eliminate them. The decentralization and reliability of PoW come at the price of low capacity, and the scalability problem prevents the network from rectifying this shortcoming. Moreover, with the growing popularity of Bitcoin, the greed of miners, who benefit from the high fees that result from the low network throughput, has become a serious problem. Miners have also started to create pools, making the network more and more centralized.
The “human factor” that purposefully slowed down the network and undermined its security could never be eliminated. All this essentially limits the potential for using PoW-based cryptocurrencies on a bigger scale. Since PoW upgrade ideas came to nothing, crypto community activists suggested radically new solutions and started to develop other protocols. This is how the Proof-of-Stake technology emerged. However, it proved to be excellent in theory rather than in practice. Overall, PoS-based cryptocurrencies do demonstrate a higher capacity, but the difference is not as striking as hoped. Moreover, PoS could not fully solve the scalability issue. In the hope of coping with the disaster plaguing all cryptocurrencies, the community came up with brand-new algorithms based on alternative operating principles. One of them is the Proof-of-Authority technology. It was meant to be an effective alternative with high capacity and a solution to the scalability problem. The idea of using PoA in cryptocurrencies was offered by Gavin Wood, a high-profile blockchain programmer and Ethereum co-founder. Proof-of-Authority Major Features PoA's major difference from PoW and PoS lies in the elimination of miner and forger races. Network users do not fight for the right to be the first to create a block and receive an award, as happens with cryptocurrencies based on other technologies. Here the blockchain's operating principle is substantially different: Proof-of-Authority uses a “reputation system” and only allows trusted nodes to create blocks. It solves the scalability problem, making it possible to considerably increase capacity and handle transactions almost instantly, without wasting time on the unnecessary calculations made by miners and forgers. Moreover, trusted nodes must meet strict capacity requirements. This is one of the main reasons why we selected PoA: it is the only technology that allows super-fast nodes to be used to the full.
Due to these features, the Proof-of-Authority algorithm is seen as one of the most effective and promising options for bringing blockchain to various business sectors. For instance, its model perfectly fits the logistics and supply chain management sectors. As an outstanding example, PoA is effectively used by the Microsoft Azure cloud platform to offer various tools for bringing blockchain solutions to businesses. How the UMI Network Gets Rid of the Defects and Incorporates the Benefits of the Proof-of-Authority Method Any system has both drawbacks and advantages, and so does PoA. According to the original PoA model, each trusted node can create a block, while it is technically impossible for ordinary users to interfere with the system's operation. This makes PoA-based cryptocurrencies a lot more centralized than those based on PoW or PoS, and this has always been the main reason for criticizing the PoA technology. We understood that only a completely decentralized product could translate our vision of a "hard-to-hit", secure, and transparent monetary instrument into reality. Therefore, we started by upgrading its basic operating principle in order to create a product that incorporates all the best features while eliminating the defects. What we've got is a decentralized PoA method. We will try to explain it at an elementary level: - We've divided the nodes in the UMI network into two types: master nodes and validator nodes. - Only master nodes have the right to create blocks and confirm transactions. Master node holders include the UMI team and their trusted partners from across the world. Moreover, we deliberately keep some of the partners who hold master nodes secret in order to protect ourselves against potential negative influence, manipulation, and threats from third parties. This way we ensure maximally coherent and reliable system operation.
- However, since the core idea behind a decentralized cryptocurrency rules out any kind of trust, the blockchain is secured to prevent master nodes from harming the network in the event of sabotage or collusion. This might happen to Bitcoin or other PoW- or PoS-based cryptocurrencies if, for example, several large mining pools were to unite and perform a 51% attack, but it can't happen to UMI. First, the worst that bad-faith master node holders can do is negligibly slow down the network, and the UMI network will automatically respond by banning such nodes. Thus, the master nodes will prevent any partner from doing intentional harm to the network, and no partner would be able to do so even with the support of most of the other partners. Nothing — not even quantum computers — will help hackers. Read our post "UMI Blockchain Six-Level Security" for more details. - A validator node can be launched by any participant. Validator nodes maintain the network by verifying the correctness of blocks and excluding the possibility of fakes. In doing so, they increase overall network security and help master nodes carry out their functions. More importantly, validator node holders keep watch over master node holders and confirm that the latter comply with the rules. You can find more details about validator nodes in the article mentioned above. - Finally, the network allows all interested users to launch light nodes (SPV), which enable viewing and sending transactions without downloading the blockchain or maintaining the network. With a light node, any network user can verify that the system is operating properly without having to download the blockchain. - In addition, we are developing protection for the network in case 100% of the master nodes (10,000 master nodes in total) are "disabled" for some reason.
Even though this scenario is virtually impossible, we've thought ahead: in the worst case the system will automatically switch to PoS, allowing it to continue processing transactions. We're going to tell you about this in upcoming publications. The UMI network thus uses an upgraded version of this technology that keeps all of its advantages while eliminating its drawbacks. This model is truly decentralized and maximally secure. Another major drawback of PoA-based cryptos is the lack of incentives for users: PoA involves neither forging nor mining, which let users earn cryptocurrency while generating new coins. The absence of rewards for maintaining the network is the main reason the crypto community has shown little interest in PoA, which is, of course, unfair. With this in mind, the UMI team devised a solution: a unique staking smart contract. It allows you to increase the number of your coins by up to 40% per month with no mining or forging, so the human factor cannot negatively affect decentralization or network performance.

New-Generation Proof-of-Authority

The UMI network uses an upgraded version of PoA technology that possesses all of its advantages with the drawbacks virtually eliminated. This makes UMI a decentralized, easily scalable, and yet highly secure, productive, profitable and fair cryptocurrency working for the benefit of all people. The widespread use of UMI can change many aspects of society in different areas, including production, commerce, logistics, and financial arrangements generally. We are just beginning this journey and are thrilled to have you with us. Let's change the world together! Best regards, UMI Team!
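As a purely illustrative piece of arithmetic (the actual rate and payout mechanics are defined by UMI's staking smart contract, not shown here), compounding a staked balance at a fixed monthly rate looks like this:

```python
# Illustrative arithmetic only: compound growth of a staked balance at a
# fixed monthly rate. UMI advertises "up to 40% per month"; the real
# rate and rules live in the staking smart contract, not in this sketch.
def staked_balance(initial: float, monthly_rate: float, months: int) -> float:
    """Balance after `months` months of compounding at `monthly_rate`."""
    return initial * (1 + monthly_rate) ** months

# 1,000 coins compounded at 40% per month for a year:
print(round(staked_balance(1000, 0.40, 12)))
```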
Author: Gamals Ahmed, CoinEx Business Ambassador

ABSTRACT

The DFINITY blockchain computer provides a secure, performant and flexible consensus mechanism. At its core, DFINITY contains a decentralized randomness beacon, which acts as a verifiable random function (VRF) that produces a stream of outputs over time. The novel technique behind the beacon relies on the existence of a unique-deterministic, non-interactive, DKG-friendly threshold signature scheme. The only known examples of such a scheme are pairing-based and derived from BLS. The DFINITY blockchain is layered on top of the DFINITY beacon and uses the beacon as its source of randomness for leader selection and leader ranking. A "weight" is attributed to a chain based on the ranks of the leaders who propose the blocks in the chain, and that weight is used to select between competing chains. The blockchain is further hardened by a notarization process which dramatically improves the time to finality and eliminates nothing-at-stake and selfish mining attacks. The DFINITY consensus algorithm is made to scale through continuous quorum selections driven by the random beacon. In practice, DFINITY achieves block times of a few seconds and transaction finality after only two confirmations. The system gracefully handles temporary losses of network synchrony, including network splits, and is provably secure under synchrony.
DFINITY is building a new kind of public decentralized cloud computing resource. The company's platform uses blockchain technology to deliver unlimited capacity, performance and algorithmic governance shared by the world, with the capability to power autonomous self-updating software systems, enabling organizations to design and deploy custom-tailored cloud computing projects and thereby reduce enterprise IT system costs by 90%. DFINITY aims to explore new territory and prove that the blockchain opportunity is far broader and deeper than anyone has hitherto realized, unlocking that opportunity with powerful new cryptography. Although a standalone project, DFINITY is not maximalist minded and is a great supporter of Ethereum. DFINITY's consensus mechanism has four layers: notary (provides fast finality guarantees to clients and external observers), blockchain (builds a blockchain from validated transactions via the Probabilistic Slot Protocol driven by the random beacon), random beacon (provides the source of randomness for all higher layers, such as smart contract applications), and identity (provides a registry of all clients).

Figure 1: DFINITY's consensus mechanism layers

1. Identity layer: Active participants in the DFINITY Network are called clients. Clients are registered with permanent identities under a pseudonym.
Moreover, DFINITY supports open membership by providing a protocol for registering new clients by depositing a stake with an insurance period. This is the responsibility of the first layer.

2. Random Beacon layer: Provides the source of randomness (VRF) for all higher layers, including applications (smart contracts). The random beacon in the second layer is an unbiasable, verifiable random function (VRF) produced jointly by registered clients. Each random output of the VRF is unpredictable by anyone until just before it becomes available to everyone. This is a key technology of the DFINITY system, which relies on a threshold signature scheme with the properties of uniqueness and non-interactivity.

3. Blockchain layer: The third layer deploys the "probabilistic slot protocol" (PSP). This protocol ranks the clients for each height of the chain, in an order derived deterministically from the unbiased output of the random beacon for that height. A weight is then assigned to block proposals based on the proposer's rank, such that blocks from clients at the top of the list receive a higher weight. Forks are resolved by favoring the "heaviest" chain in terms of accumulated block weight, quite similar to how traditional proof-of-work consensus favors the chain with the highest accumulated amount of work. The first advantage of the PSP protocol is that the ranking is available instantaneously, which allows for a predictable, constant block time. The second advantage is that there is always a single highest-ranked client, which allows for homogeneous network bandwidth utilization; a race between clients, by contrast, would favor usage in bursts.

4. Notarization layer: Provides fast finality guarantees to clients and external observers. DFINITY deploys the novel technique of block notarization in its fourth layer to speed up finality.
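The rank-based weighting and heaviest-chain fork resolution described for the blockchain layer can be sketched as follows. Note that the weight function here is an invented stand-in (the actual weighting function is a protocol parameter); only the structure, where higher-ranked proposers yield heavier blocks and the heaviest chain wins, follows the text:

```python
# Sketch (not DFINITY's actual weight function): each block gets a weight
# from its proposer's rank (rank 0 = highest-ranked client), and forks are
# resolved in favor of the chain with the greatest accumulated weight.
def block_weight(rank: int) -> float:
    # A simple decreasing function of rank; illustrative only.
    return 2.0 ** (-rank)

def chain_weight(proposer_ranks: list[int]) -> float:
    return sum(block_weight(r) for r in proposer_ranks)

def heaviest_chain(chains: list[list[int]]) -> list[int]:
    # Each chain is represented by the ranks of its block proposers.
    return max(chains, key=chain_weight)

# A short chain of top-ranked proposers beats a longer chain of
# low-ranked ones, unlike a simple longest-chain rule.
print(heaviest_chain([[0, 0, 1], [3, 4, 5, 6]]))
```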
A notarization is a threshold signature under a block, created jointly by registered clients. Only notarized blocks can be included in a chain. RSA-based alternatives exist but suffer from the impracticality of setting up the threshold keys without a trusted dealer. DFINITY achieves its high speed and short block times exactly because notarization is not full consensus. DFINITY does not suffer from selfish mining attacks or the nothing-at-stake problem because the notarization step makes it impossible for an adversary to build and maintain a chain of linked, notarized blocks in secret. DFINITY's consensus is designed to operate on a network of millions of clients. To enable scalability to this extent, the random beacon and notarization protocols are designed so that they can be safely and efficiently delegated to a committee.
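Real BLS threshold signatures require pairing-friendly curves and are beyond a short sketch, but the t-of-n threshold principle these paragraphs rely on (any t participants can jointly produce the result, fewer cannot) can be illustrated with Shamir secret sharing over a prime field. This is a stand-in for illustration, not DFINITY's actual scheme:

```python
# Stand-in illustration of t-of-n thresholding via Shamir secret sharing:
# any t of n shares reconstruct the secret; fewer reveal nothing.
import random

P = 2**127 - 1  # a Mersenne prime, used as the field modulus

def make_shares(secret: int, t: int, n: int):
    # A random degree-(t-1) polynomial with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation of the polynomial at x = 0.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(secret=42, t=3, n=5)
print(reconstruct(shares[:3]) == 42)  # any 3 of the 5 shares suffice
```

Unlike this toy, BLS threshold signatures have the uniqueness property the article stresses: the combined signature is the same regardless of which subset of shares is used.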
1.1 OVERVIEW ABOUT DFINITY
DFINITY is a blockchain-based cloud-computing project that aims to develop an open, public network, referred to as the "internet computer," to host the next generation of software and data: a decentralized, non-proprietary network to run the next generation of mega-applications. The project has dubbed this public network "Cloud 3.0". DFINITY is a third-generation virtual blockchain network that sets out to function as an "intelligent decentralised cloud," strongly focused on delivering a viable corporate cloud solution. The DFINITY project is overseen, supported and promoted by DFINITY Stiftung, a not-for-profit foundation based in Zug, Switzerland. DFINITY is a decentralized network design whose protocols generate a reliable "virtual blockchain computer" running on top of a peer-to-peer network, upon which software can be installed and can operate in the tamperproof mode of smart contracts. DFINITY introduces algorithmic governance in the form of a "Blockchain Nervous System" that can protect users from attacks and help restart broken systems, dynamically optimize network security and efficiency, upgrade the protocol and mitigate misuse of the platform, for example by those wishing to run illegal or immoral systems. DFINITY is an Ethereum-compatible smart contract platform that is implementing some revolutionary ideas to address blockchain performance, scaling, and governance. Although DFINITY could pose a credible existential threat to Ethereum, the project is pursuing a coevolutionary strategy by contributing funding and effort to Ethereum projects and freely offering its technology to Ethereum for adoption. DFINITY has labeled itself Ethereum's "crazy sister" to express its close genetic resemblance to Ethereum, differentiated by its obsession with performance and its neuron-inspired governance model. Dfinity raised $61 million from Andreessen Horowitz and Polychain Capital in a February 2018 funding round.
At the time, Dfinity said it wanted to create an "internet computer" to cut the costs of running cloud-based business applications. A further $102 million funding round in August 2018 brought the project's total funding to $195 million. In May 2018, Dfinity announced plans to distribute around $35 million worth of Dfinity tokens in an airdrop as part of the company's plan to create a "Cloud 3.0." Because of regulatory concerns, none of the tokens went to US residents. DFINITY would broaden and strengthen the EVM ecosystem by giving applications a choice of platforms with different characteristics. If DFINITY succeeds in delivering a fully EVM-compatible smart contract platform with higher transaction throughput, faster confirmation times, and governance mechanisms that can resolve public disputes without causing community splits, it will represent a clearly superior choice for deploying new applications and, as its network effects grow, an attractive place to bring existing ones. The challenge for DFINITY, of course, will be to deliver on these promises while meeting the security demands of a public chain with significant value at risk.
1.1.1 DFINITY FUTURE
DFINITY aims to explore new blockchain territory related to the original goals of the Ethereum project and is sometimes considered “Ethereum’s crazy sister.”
DFINITY is developing blockchain-based infrastructure to support a new style of the internet (akin to Ethereum’s “World Computer”), one in which the internet itself will support software applications and data rather than various cloud hosting providers.
The project suggests this reinvented software platform can simplify the development of new software systems, reduce the human capital needed to maintain and secure data, and preserve user data privacy.
Dfinity aims to reduce the costs of cloud services by creating a decentralized "internet computer", which it says may launch in 2020.
Dfinity claims transactions on its network are finalized in 3–5 seconds, compared to 1 hour for Bitcoin and 10 minutes for Ethereum.
1.1.2 DFINITY’S VISION
DFINITY's vision is that its new internet infrastructure can support a wide variety of end-user and enterprise applications. Social media, messaging, search, storage, and peer-to-peer Internet interactions are all examples of functionalities that DFINITY plans to host atop its public Web 3.0 cloud-like computing resource. In order to provide the transaction and data capacity necessary to support this ambitious vision, DFINITY features a unique consensus model (dubbed Threshold Relay) and algorithmic governance via its Blockchain Nervous System (BNS), sometimes also referred to as the Network Nervous System or NNS.
February 15, 2017: An Ethereum-based community seed round raises 4M Swiss francs (CHF). The DFINITY Stiftung, a not-for-profit foundation based in Zug, Switzerland, raised the round; the foundation held $10M of assets as of April 2017.
February 8, 2018: Dfinity announces a $61M fundraising round led by Polychain Capital and Andreessen Horowitz. Together with a DFINITY Ecosystem Venture Fund, which will be used to support projects developing on the DFINITY platform, and the Ethereum-based raise in 2017, this brings the project's total funding to over $100 million. This is the first cryptocurrency token that Andreessen Horowitz has invested in, in a deal led by Chris Dixon.
August 2018: Dfinity raises a $102,000,000 venture round from Multicoin Capital, Village Global, Aspect Ventures, Andreessen Horowitz, Polychain Capital, Scalar Capital, Amino Capital and SV Angel.
January 23, 2020: Dfinity launches an open source platform aimed at the social networking giants.
Dfinity is building what it calls the internet computer, a decentralized technology spread across a network of independent data centers that allows software to run anywhere on the internet rather than in server farms increasingly controlled by large firms, such as Amazon Web Services or Google Cloud. This week Dfinity is releasing its software to third-party developers, who it hopes will start making the internet computer's killer apps. It is planning a public release later this year. At its core, the DFINITY consensus mechanism is a variation of the Proof of Stake (PoS) model, but offers an alternative to traditional Proof of Work (PoW) and delegated PoS (dPoS) networks. Threshold Relay intends to strike a balance between the inefficiencies of decentralized PoW blockchains (generally characterized by slow block times) and the less robust game theory involved in vote delegation (as seen in dPoS blockchains). In DFINITY, a committee of "miners" is randomly selected to add a new block to the chain. An individual miner's probability of being elected to the committee proposing and computing the next block (or blocks) is proportional to the number of dfinities the miner has staked on the network. Further, a "weight" is attributed to a DFINITY chain based on the ranks of the miners who propose blocks in the chain, and that weight is used to choose between competing chains (i.e. resolve chain forks). A decentralized random beacon manages the random selection process of temporary block producers. This beacon is a Verifiable Random Function (VRF), a pseudo-random function that provides publicly verifiable proofs of its outputs' correctness. A core component of the random beacon is the use of Boneh-Lynn-Shacham (BLS) signatures. By leveraging the BLS signature scheme, the DFINITY protocol ensures that no actor in the network can determine the outcome of the next random assignment.
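The stake-weighted committee selection described above can be sketched as follows. This is a toy illustration: a PRNG seeded with the beacon output stands in for the beacon-driven sampling, the names and parameters are invented, and sampling is done with replacement for simplicity:

```python
# Sketch: selecting a block-making committee with probability proportional
# to stake, seeded deterministically by the random beacon output so every
# node derives the same committee. Illustrative, not DFINITY's protocol.
import hashlib
import random

def select_committee(stakes: dict[str, int], beacon: bytes, size: int):
    # Seed a PRNG with the beacon so the selection is reproducible.
    rng = random.Random(hashlib.sha256(beacon).digest())
    miners = list(stakes)
    weights = [stakes[m] for m in miners]
    # Probability proportional to staked dfinities (with replacement,
    # for simplicity; a real protocol would sample without replacement).
    return rng.choices(miners, weights=weights, k=size)

stakes = {"alice": 500, "bob": 300, "carol": 200}
committee = select_committee(stakes, beacon=b"round-42", size=4)
print(committee)  # the same beacon yields the same committee on every node
```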
Dfinity is introducing a new standard, which it calls the internet computer protocol (ICP). These new rules let developers move software around the internet as well as data. All software needs computers to run on, but with ICP the computers could be anywhere. Instead of running on a dedicated server in Google Cloud, for example, the software would have no fixed physical address, moving between servers owned by independent data centers around the world. “Conceptually, it’s kind of running everywhere,” says Dfinity engineering manager Stanley Jones. DFINITY also features a native programming language, called ActorScript (name may be subject to change), and a virtual machine for smart contract creation and execution. The new smart contract language is intended to simplify the management of application state for programmers via an orthogonal persistence environment (which means active programs are not required to retrieve or save their state). All ActorScript contracts are eventually compiled down to WebAssembly instructions so the DFINITY virtual machine layer can execute the logic of applications running on the network. The advantage of using the WebAssembly standard is that all major browsers support it and a variety of programming languages can compile down to Wasm (not just ActorScript). Dfinity is moving fast. Recently, Dfinity showed off a TikTok clone called CanCan. In January it demoed a LinkedIn-alike called LinkedUp. Neither app is being made public, but they make a convincing case that apps made for the internet computer can rival the real things.
2.1 DFINITY CORE APPLICATIONS
The DFINITY cloud has two core applications:
Enabling the re-engineering of business: DFINITY ambitiously aims to facilitate the re-engineering of mass-market services (such as Web Search, Ridesharing Services, Messaging Services, Social Media, Supply Chain, etc) into open source businesses that leverage autonomous software and decentralised governance systems to operate and update themselves more efficiently.
Enable the re-engineering of enterprise IT systems to reduce costs: DFINITY seeks to re-engineer enterprise IT systems to take advantage of the unique properties that blockchain computer networks provide.
At present, computation on blockchain-based computer networks is far more expensive than on traditional, centralised solutions (Amazon Web Services, Microsoft Azure, Google Cloud Platform, etc.). Despite the higher computational cost, DFINITY intends to lower net costs "by 90% or more" by reducing the human capital cost associated with sustaining and supporting these services. While conceptually similar to Ethereum, DFINITY employs original and new cryptography methods and protocols (crypto:3) at the network level, in concert with AI and network-fuelled systemic governance (the Blockchain Nervous System, or BNS) to facilitate corporate adoption. DFINITY recognises that different users value different properties and sees itself as a fully compatible extension of the Ethereum ecosystem rather than a competitor of the Ethereum network. In the future, DFINITY hopes that much of its "new crypto might be used within the Ethereum network", and the teams are also working hard on shared technology components. As the DFINITY project develops, the DFINITY Stiftung foundation intends to steadily increase the BNS's decision-making responsibilities, eventually dissolving its own involvement entirely once the BNS is sufficiently sophisticated. The DFINITY consensus mechanism is a heavily optimized proof-of-stake (PoS) model. It places a strong emphasis on transaction finality by implementing a Threshold Relay technique in conjunction with the BLS signature scheme and a notarization method to address many of the problems associated with PoS consensus.
2.2 THRESHOLD RELAY
As a public cloud computing resource, DFINITY targets business applications by substantially reducing cloud computing costs for IT systems. It aims to achieve this with a highly scalable and powerful network with potentially unlimited capacity. The DFINITY platform is chock-full of innovative designs and features, like its Blockchain Nervous System (BNS) for algorithmic governance. One of the primary components of the platform is its novel Threshold Relay consensus model, from which randomness is produced, driving the other systems the network depends on to operate effectively. The consensus system was first designed for a permissioned participation model but can be paired with any method of Sybil resistance for an open participation model. "The mechanism by which Dfinity randomly samples replicas into groups, sets the groups (committees) up for threshold operation, chooses the current committee, and relays from one committee to the next is called the threshold relay." Threshold Relay consists of four layers (as mentioned previously):
Notary layer, which provides fast finality guarantees to clients and external observers and eliminates nothing-at-stake and selfish mining attacks, providing Sybil attack resistance.
Blockchain layer that builds a blockchain from validated transactions via the Probabilistic Slot Protocol driven by the random beacon.
Random beacon layer, which, as previously covered, provides the source of randomness for all higher layers, such as the blockchain layer and smart contract applications.
Identity layer that provides a registry of all clients.
2.2.1 HOW DOES THRESHOLD RELAY WORK?
Threshold Relay produces an endogenous random beacon, and each new value defines random group(s) of clients that may independently try to form a "threshold group". The composition of each group is entirely random, so groups can intersect and clients can be present in multiple groups. In DFINITY, each group comprises 400 members. When a group is defined, the members attempt to set up a BLS threshold signature system using a distributed key generation protocol. If they are successful within some fixed number of blocks, they then register the public key ("identity") created for their group on the global blockchain using a special transaction, such that it becomes part of the set of active groups in a following "epoch". The network begins at "genesis" with some number of predefined groups, one of which is nominated to create a signature on some default value. Such signatures are random values (if they were not, the group's signatures on messages would be predictable and the threshold signature system insecure), and each random value so produced is used to select a random successor group. This next group then signs the previous random value to produce a new random value and select another group, relaying between groups ad infinitum and producing a sequence of random values. In a cryptographic threshold signature system, a group can produce a signature on a message upon the cooperation of some minimum threshold of its members, which is set to 51% in the DFINITY network. To produce the threshold signature, group members sign the message individually (here, the preceding group's threshold signature), creating individual "signature shares" that are then broadcast to other group members. The group threshold signature can be constructed from a sufficient threshold of signature shares.
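The relay loop described above, where each value selects the next group and that group signs the value to produce the next one, can be sketched as follows, with keyed SHA-256 standing in for the group's unique, deterministic BLS threshold signature:

```python
# Sketch of the threshold relay: each group "signs" the previous random
# value to produce the next. SHA-256 keyed by a group identifier stands
# in for the group's unique, deterministic BLS threshold signature.
import hashlib

def group_sign(group_id: int, message: bytes) -> bytes:
    # Stand-in for a BLS threshold signature: deterministic per group.
    return hashlib.sha256(group_id.to_bytes(4, "big") + message).digest()

def relay(genesis: bytes, num_groups: int, rounds: int):
    value = genesis
    beacon = []
    for _ in range(rounds):
        # The current random value selects the next group...
        group = int.from_bytes(value, "big") % num_groups
        # ...which signs it to produce the next random value.
        value = group_sign(group, value)
        beacon.append(value)
    return beacon

out = relay(b"\x00" * 32, num_groups=8, rounds=3)
# Deterministic: every honest node derives the same beacon sequence.
assert out == relay(b"\x00" * 32, num_groups=8, rounds=3)
```

Because the stand-in signature is deterministic, the whole sequence is fixed once the genesis value and group set are fixed, which mirrors the "deterministic and unmanipulable" property the text attributes to unique BLS threshold signatures.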
So, for example, if the group size is 400 and the threshold is set at 201, any client that collects that many shares will be able to construct the group's signature on the message. Other group members can validate each signature share, and any client using the group's public key can validate the single group threshold signature produced by combining them. The magic of the BLS scheme is that it is "unique and deterministic", meaning that whichever subset of group members contributes the required number of signature shares, the single threshold signature created is always the same and only a single correct value is possible. Consequently, the sequence of random values produced is entirely deterministic and unmanipulable, and the signatures generated by relaying between groups produce a Verifiable Random Function, or VRF. Although the sequence of random values is predetermined given some set of participating groups, each new random value can only be produced upon the minimal agreement of a threshold of the current group. Conversely, for relaying to stall because a random number was not produced, the number of correct processes must be below the threshold. Thresholds are configured so that this is extremely unlikely. For example, if the group size is set to 400 and the threshold is 201, then 200 or more of the processes must become faulty to prevent production. If there are 10,000 processes in the network, of which 3,000 are faulty, the probability of this occurring is less than 10^-17.
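Since a committee is a uniform random sample of the network, the stall probability quoted above is a hypergeometric tail, which can be computed directly:

```python
# Checking the paragraph's tail-probability claim: with 10,000 processes,
# of which 3,000 are faulty, and committees of 400 with threshold 201,
# relaying stalls only if at least 200 committee members are faulty.
# A committee is a uniform sample, so this is a hypergeometric tail.
from math import comb

def stall_probability(total=10_000, faulty=3_000, group=400, threshold=201):
    # At least group - threshold + 1 = 200 faulty members are needed to
    # deny the 201 honest shares required for the threshold signature.
    need = group - threshold + 1
    denom = comb(total, group)
    return sum(
        comb(faulty, k) * comb(total - faulty, group - k)
        for k in range(need, group + 1)
    ) / denom

# A vanishingly small probability; the text claims it is below 10^-17.
print(stall_probability())
```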
2.3 DFINITY TOKEN
The DFINITY blockchain also supports a native token, called dfinities (DFN), which performs multiple roles within the network, including:
Fuel for deploying and running smart contracts.
Security deposits (i.e. staking) that enable participation in the BNS governance system.
Security deposits that allow client software or private DFINITY cloud networks to connect to the public network.
Although dfinities will end up being assigned a value by the market, the DFINITY team does not intend for DFN to act as a currency. Instead, the project has envisioned PHI, a “next-generation” crypto-fiat scheme, to act as a stable medium of exchange within the DFINITY ecosystem. Neuron operators can earn Dfinities by participating in network-wide votes, which could be concerning protocol upgrades, a new economic policy, etc. DFN rewards for participating in the governance system are proportional to the number of tokens staked inside a neuron.
DFINITY is continually developing a structure that separates consensus, validation, and storage into distinct layers. The storage layer is divided into multiple shards, each of which is responsible for processing the transactions that occur in that shard's state. The validation layer is responsible for combining the hashes of all shards in a Merkle-like structure, yielding a global state hash that is stored in blocks in the top-level chain.
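Combining per-shard state hashes into a single top-level root, as the validation layer described above does, can be sketched with a standard Merkle tree (the hash function and odd-level padding rule here are illustrative choices, not DFINITY's specification):

```python
# Sketch: folding per-shard state hashes into one Merkle root, which a
# top-level chain could commit to. Hashing and padding are illustrative.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    if not leaves:
        return h(b"")
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

shard_states = [b"shard-0-state", b"shard-1-state", b"shard-2-state"]
print(merkle_root(shard_states).hex())
```

Any change to a single shard's state changes the root, so validating the one top-level hash validates the combined state of all shards.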
2.5 DFINITY CONSENSUS ALGORITHM
The single most important aspect of the user experience is certainly the time required before a transaction becomes final. This is not solved by a short block time alone; Dfinity's team also had to reduce the number of confirmations required to a small constant. DFINITY moreover had to provide a provably secure proof-of-stake algorithm that scales to millions of active participants without compromising on decentralization in the slightest. Dfinity soon realized that the key to scalability lay in having an unmanipulable source of randomness available, so they built a scalable decentralized random beacon, based on what they call the Threshold Relay technique, right into the foundation of the protocol. This strong foundation drives a scalable and fast consensus layer: on top of the beacon runs a blockchain which utilizes notarization by threshold groups to achieve near-instant finality. Details can be found in the overview paper the team has released. The roots of the DFINITY consensus mechanism date back to 2014, when their Chief Scientist, Dominic Williams, started to look for more efficient ways to drive large consensus networks. Since then, much research has gone into the protocol and it took several iterations to reach its current design. For any practical consensus system the difficulty lies in navigating the tight terrain between the boundaries imposed by theoretical impossibility results and practical performance limitations. The first key milestone was the novel Threshold Relay technique for decentralized, deterministic randomness, which is made possible by certain unique characteristics of the BLS signature system. The next breakthrough was the notarization technique, which allows DFINITY consensus to solve the traditional problems that come with proof-of-stake systems. Getting the security proofs sound was the final step before publication.
DFINITY consensus has made the proper trade-offs between the practical side (realistic threat models and security assumptions) and the theoretical side (provable security). Out came a flexible, tunable algorithm, which we expect will establish itself as the best performing proof-of-stake algorithm. In particular, having the built-in random beacon will prove to be indispensable when building out sharding and scalable validation techniques.
The startup has rather cheekily called this "an open version of LinkedIn," the Microsoft-owned social network for professionals. Unlike LinkedIn, LinkedUp, which runs in any browser, is not owned or controlled by a corporate entity. LinkedUp is built on Dfinity's so-called Internet Computer, its name for the platform it is building to distribute the next generation of software and open internet services. The software is currently hosted in a Switzerland-based independent data center, but under the Internet Computer concept it could be hosted at your house or mine. The compute power to run the application (LinkedUp, in this case) comes not from Amazon AWS, Google Cloud or Microsoft Azure, but from the distributed architecture Dfinity is building. Specifically, Dfinity notes that when enterprises and developers run their web apps and enterprise systems on the Internet Computer, the content is decentralized across a minimum of four, and up to an unlimited number of, nodes in Dfinity's global network of independent data centers. Dfinity is open-sourcing LinkedUp to developers for creating other types of open internet services on the architecture it has built. "Open Social Network for Professional Profiles" suggests that on the Dfinity model one could create an "Open WhatsApp", "Open eBay", "Open Salesforce" or "Open Facebook". The tools include a Canister Software Developer Kit and a simple programming language called Motoko that is optimized for Dfinity's Internet Computer. "The Internet Computer is conceived as an alternative to the $3.8 trillion legacy IT stack, and empowers the next generation of developers to build a new breed of tamper-proof enterprise software systems and open internet services. We are democratizing software development," Williams said.
"The Bronze release of the Internet Computer provides developers and enterprises a glimpse into the infinite possibilities of building on the Internet Computer — which also reflects the strength of the Dfinity team we have built so far." Dfinity says its "Internet Computer Protocol" allows for a new type of software called autonomous software, which can guarantee permanent APIs that cannot be revoked. When all these open internet services (e.g. open versions of WhatsApp, Facebook, eBay, Salesforce, etc.) are combined with other open software and services, it creates "mutual network effects" where everyone benefits. On 1 November, DFINITY released 13 new public versions of the SDK, leading to its second major milestone [at WEF Davos] of demoing a decentralized web app called LinkedUp on the Internet Computer. Subsequent milestones toward the public launch of the Internet Computer will involve:
Onboarding a global network of independent data centers.
A fully tested economic system.
A fully tested Network Nervous System for configuration and upgrades.
2.7 WHAT IS MOTOKO?
Motoko is a new software language being developed by the DFINITY Foundation, with an accompanying SDK, designed to help the broadest possible audience of developers create reliable and maintainable websites, enterprise systems and internet services on the Internet Computer with ease. By developing the Motoko language, the DFINITY Foundation will ensure that a language highly optimized for the new environment is available. However, the Internet Computer can support any number of different software frameworks, and the DFINITY Foundation is also working on SDKs that support the Rust and C languages. Eventually, many different SDKs are expected to target the Internet Computer.
Review and Prospect of Crypto Economy-Development and Evolution of Consensus Mechanism (2)
Foreword

The consensus mechanism is one of the essential elements of a blockchain and the core rule by which a distributed ledger operates. It is mainly used to solve the trust problem between participants and to determine who is responsible for generating new blocks and maintaining the effective unity of the blockchain system. It has thus become a perennial research topic in blockchain.

This article starts with the concept and role of the consensus mechanism, giving the reader a preliminary overall understanding. Then, beginning with the two armies problem and the Byzantine generals problem, it introduces the evolution of consensus mechanisms in the order in which they were proposed. Next, it briefly introduces the current mainstream consensus mechanisms in terms of concept, working principle and representative projects, and compares their advantages and disadvantages. Finally, it offers suggestions on how to choose a consensus mechanism for a blockchain project and points out possible directions for the future development of consensus mechanisms.
Contents

First, concept and function of the consensus mechanism
1.1 Concept: The core rules for the normal operation of distributed ledgers
1.2 Role: Solve the trust problem and decide the generation and maintenance of new blocks
1.2.1 Used to solve the trust problem between people
1.2.2 Used to decide who is responsible for generating new blocks and maintaining effective unity in the blockchain system
1.3 Mainstream model of consensus algorithm

Second, the origin of the consensus mechanism
2.1 The two armies and the Byzantine generals
2.1.1 The two armies problem
2.1.2 The Byzantine generals problem
2.2 Development history of consensus mechanism
2.2.1 Classification of consensus mechanism
2.2.2 Development frontier of consensus mechanism

Third, common consensus mechanisms

Fourth, selection of consensus mechanism and summary of current situation
4.1 How to choose a consensus mechanism that suits you
4.1.1 Determine whether the final result is important
4.1.2 Determine how fast the application process needs to be
4.1.3 Determine the degree of decentralization the application requires
4.1.4 Determine whether the system can be terminated
4.1.5 Select a suitable consensus algorithm after weighing the advantages and disadvantages
4.2 Future development of consensus mechanism

Last lecture review: Chapter 1 Concept and Function of Consensus Mechanism plus Chapter 2 Origin of Consensus Mechanism

Chapter 3 Common Consensus Mechanisms (Part 1)

Figure 6 Summary of relatively mainstream consensus mechanisms
https://preview.redd.it/9r7q3xra4db51.png?width=567&format=png&auto=webp&s=bae5554a596feaac948fae22dffafee98c4318a7
Source: Hasib Anwar, "Consensus Algorithms: The Root Of The Blockchain Technology"

The figure above shows 14 relatively mainstream consensus mechanisms summarized by the geek Hasib Anwar: PoW (Proof of Work), PoS (Proof of Stake), DPoS (Delegated Proof of Stake), LPoS (Leased Proof of Stake), PoET (Proof of Elapsed Time), PBFT (Practical Byzantine Fault Tolerance), SBFT (Simple Byzantine Fault Tolerance), DBFT (Delegated Byzantine Fault Tolerance), DAG (Directed Acyclic Graph), Proof-of-Activity, Proof-of-Importance, Proof-of-Capacity, Proof-of-Burn and Proof-of-Weight. Next, we will mainly introduce and analyze the top ten consensus mechanisms in blockchain today.

》PoW
-Concept: Proof of work. Confirming work requires expending a measurable amount of computation, so a valid block proves that its producer did the work.
-Principle:
Figure 7 PoW work proof principle
https://preview.redd.it/xupacdfc4db51.png?width=554&format=png&auto=webp&s=3b6994641f5890804d93dfed9ecfd29308c8e0cc
The PoW scheme represented by Bitcoin uses SHA-256, a 256-bit algorithm from the cryptographic hash function family:
Proof-of-work output = SHA256(SHA256(block header))
If the output is greater than or equal to the target value, change the nonce and hash again, repeating the comparison against the target until a qualifying output is found.
Every 2016 blocks the target is recalculated:
New target = old target × (time spent on the last 2016 blocks / 20160 minutes)
Difficulty = maximum target / target, where the maximum target is a fixed constant.
If the last 2016 blocks took less than 20160 minutes, the coefficient is small and the target is adjusted downward (difficulty rises); if they took longer, the target is adjusted upward (difficulty falls). Bitcoin's difficulty therefore moves inversely with block-generation speed, keeping the block interval near its ten-minute average.
-Representative applications: BTC, etc.

》PoS
-Concept: Proof of stake. A mechanism for reaching consensus based on coin holdings: the more coins a participant holds, and the longer they are held, the greater the probability of earning the block reward.
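Before moving on: the PoW hashing loop and retargeting rule described above can be sketched in a few lines of Python. This is a toy illustration under simplifying assumptions (a bare byte string stands in for the real 80-byte block header, and the actual protocol encodes the target in compact "bits" form and clamps each adjustment), not Bitcoin's implementation:

```python
import hashlib
import struct

def double_sha256(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256 of the serialized block header."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header_prefix: bytes, target: int, max_nonce: int = 2**32) -> int:
    """Try nonces until double-SHA256(header_prefix + nonce), read as an
    integer, falls below the target. Returns the winning nonce, or -1."""
    for nonce in range(max_nonce):
        digest = double_sha256(header_prefix + struct.pack("<I", nonce))
        if int.from_bytes(digest, "big") < target:
            return nonce
    return -1

def retarget(old_target: int, minutes_for_last_2016_blocks: float) -> int:
    """Every 2016 blocks: new target = old target * (actual time / 20160 min).
    Faster-than-expected blocks shrink the target, i.e. raise the difficulty."""
    return int(old_target * minutes_for_last_2016_blocks / 20160)
```

With a deliberately easy target such as 2**252 (one in sixteen hashes qualifies) the loop terminates almost immediately; at Bitcoin's real target it would take an astronomical number of attempts, which is exactly the "work" being proven.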
-Principle:
PoS implementation formula: hash(block_header) ≤ target × coinage
Coin-age formula: coinage = number of coins × holding time of the coins
Here coinage means coin age: the older the coins, the easier it is to find an answer. Coin age is obtained by multiplying the coins a miner owns by how long each coin has been held, which also means that the more coins you have, the easier it is to find an answer. In this way PoS avoids the resource waste of PoW, and because acquiring 51% of all coins on the network is prohibitively difficult, it also mitigates 51% attacks.
-Representative applications: ETH, etc.

》DPoS
-Concept: Delegated proof of stake. Coin-holding investors elect super nodes by voting to operate the entire network, similar to a people's congress system.
-Principle: The DPoS algorithm has two parts: electing a group of block producers, and scheduling production.
Election: Only permanent nodes with the right to stand can be elected, and ultimately only the top N witnesses are chosen. These N nodes must each obtain more than 50% of the votes to be successfully elected, and the list is re-elected at regular intervals.
Scheduled production: Under normal circumstances, block producers take turns generating one block every 3 seconds. Assuming no producer misses its turn, the chain they produce is necessarily the longest chain. If a witness fails to produce a block within its allotted slot, the production right passes to the next witness; a witness that misses its slot not only forfeits the reward but may also lose its witness status.
-Representative applications: EOS, etc.

》DPoW
-Concept: Delayed proof of work. A newer-generation consensus mechanism based on PoB and DPoS.
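Returning briefly to the PoS coin-age rule above: the "more coins, held longer, means better odds" idea can be pictured as a weighted lottery. The sketch below is a toy illustration only (the function names and data layout are invented here; real PoS chains add stake kernels, modifiers and many safeguards):

```python
import random

def coin_age(coins: float, days_held: float) -> float:
    """Coin age = number of coins multiplied by how long they have been held."""
    return coins * days_held

def pick_block_producer(stakers: dict, rng: random.Random) -> str:
    """Select a staker with probability proportional to coin age.
    `stakers` maps a name to a (coins, days_held) tuple."""
    names = list(stakers)
    weights = [coin_age(coins, days) for coins, days in stakers.values()]
    return rng.choices(names, weights=weights, k=1)[0]
```

A staker whose coins were all just moved (zero holding time) has zero coin age and is never selected, which is the property the text above describes.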
-Principle: In a DPoW-based blockchain, miners are no longer rewarded with tokens but with "wood" that can be burned. Miners apply their computing power through the hash algorithm to prove their work and receive the corresponding wood; wood is not tradable. Once enough wood has accumulated, it can be taken to a burning site and burned. Through a set of algorithms, the person (or BP, or group of BPs) who burns more wood obtains the right to generate blocks in the next time segment, and receives the reward (tokens) after a successful block. Since more than one participant may burn wood in a given period, the probability of producing a block in the next period is determined by the amount of wood one burned: the more wood burned, the higher the probability of obtaining the block right in the next period. This balances computing power against mining rights.
There are two node types: notary nodes and normal nodes. The 64 notary nodes are elected by the stakeholders of the DPoW blockchain, and notarized confirmed blocks can be added from the DPoW blockchain to the attached PoW blockchain. Once a block is added, its hash is placed into a Bitcoin transaction signed by 33 notary nodes, creating a record of the DPoW block on the Bitcoin blockchain that has been notarized by most notary nodes in the network. To avoid mining wars between notary nodes, which would reduce the efficiency of the network, Komodo designed a mining method that uses a polling mechanism. This method has two operating modes.
In "No Notary" mode, all network nodes can participate in mining, similar to the traditional PoW consensus mechanism. In "Notaries Active" mode, the network's notaries mine at a significantly reduced difficulty: each notary is allowed to mine a block at its current difficulty, other notary nodes must mine at 10 times that difficulty, and all normal nodes mine at 100 times the notary difficulty.
Figure 8 DPoW operation process without a notary node
https://preview.redd.it/3yuzpemd4db51.png?width=500&format=png&auto=webp&s=f3bc2a1c97b13cb861414d3eb23a312b42ea6547
-Representative applications: CelesOS, Komodo, etc.
CelesOS Research Institute丨DPoW consensus mechanism - combustible mining and voting

》PBFT
-Concept: Practical Byzantine fault tolerance. The algorithm reduces the complexity of Byzantine fault tolerance from exponential to polynomial, making it feasible in practical system applications.
-Principle:
Figure 9 PBFT algorithm principle
https://preview.redd.it/8as7rgre4db51.png?width=567&format=png&auto=webp&s=372be730af428f991375146efedd5315926af1ca
First, the client sends a request invoking a service operation to the primary node, and the primary broadcasts the request to the other replicas. All replicas execute the request and send their results back to the client. The client waits until f+1 different replica nodes return the same result, and takes that as the final result of the operation.
Two preconditions apply: 1. All nodes must be deterministic; that is, given the same conditions and parameters, an operation must produce the same result. 2. All nodes must start from the same state. Under these two preconditions, even if some replica nodes fail, the PBFT algorithm agrees on a total order of execution across all non-failed replicas, thereby ensuring safety.
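The client-side rule just described (wait for f+1 matching replies) can be sketched as follows. This covers only the reply-collection step, not the full pre-prepare/prepare/commit phases of PBFT, and the function name is our own:

```python
from collections import Counter

def pbft_client_decision(replies, f):
    """Return the operation result once f+1 distinct replicas report the
    same value; return None while the client must keep waiting.
    `replies` is a list of (replica_id, result) pairs."""
    counts = Counter()
    seen = set()
    for replica_id, result in replies:
        if replica_id in seen:      # count each replica at most once
            continue
        seen.add(replica_id)
        counts[result] += 1
        if counts[result] >= f + 1:
            return result
    return None
```

f+1 matching replies suffice because at most f replicas are faulty, so at least one of the f+1 agreeing replicas is honest and is reporting the correct result.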
-Representative applications: Tendermint Consensus, etc.
Next lecture: Chapter 3 Common Consensus Mechanisms (Part 2) + Chapter 4 Consensus Mechanism Selection and Status Summary

CelesOS
As the first DPoW financial blockchain operating system, CelesOS adopts consensus mechanism 3.0 to break through the "impossible triangle", providing high TPS while also allowing for decentralization. It is committed to creating a financial blockchain operating system that embraces supervision, providing services for financial institutions and for the development of applications on the supervision chain, and formulating a role-and-consensus ecological supervision-layer agreement. The CelesOS team is dedicated to building a bridge between blockchain and regulatory agencies and the financial industry. We believe that only blockchain technology that cooperates with regulators has a real future, and we are contributing to that goal.
Website https://www.celesos.com/
Telegram https://t.me/celeschain
Twitter https://twitter.com/CelesChain
Reddit https://www.reddit.com/useCelesOS
Medium https://medium.com/@celesos
Facebook https://www.facebook.com/CelesOS1
Youtube https://www.youtube.com/channel/UC1Xsd8wU957D-R8RQVZPfGA
51% Attack Explained (last updated 8 January 2019). A 51% attack refers to an individual miner, or group of miners, controlling more than 50% of a network's mining power, also known as its hash rate or hash power. A network's hash rate is a measure of the rate at which hashes are being computed on the network, a process known as hashing. Monacoin, Bitcoin Gold, ZenCash, Verge and now Litecoin Cash: at least five cryptocurrencies have recently been hit with an attack that used to be more theoretical than actual, all in the span of a month. The topic of the 51% attack is closely related to concepts such as mining, the consensus protocol, orphan blocks and the double-spending problem, and cannot be fully understood without them.
Historical 51% attack cases: Bitcoin Cash (May 2019). Two Bitcoin Cash mining pools, BTC.com and BTC.top, carried out a 51% attack on the Bitcoin Cash blockchain in order to stop an unknown miner from taking coins he wasn't supposed to have access to while the network forked. Even though some would argue this 51% attack was done to help the network, it still demonstrates the power these pools hold.
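The "more than 50%" threshold comes from a gambler's-ruin argument: an attacker mining a private chain gains a block with probability q and falls further behind with probability p = 1 - q, so the chance of ever erasing a transaction buried z blocks deep is (q/p)^z, which collapses to certainty once q >= p. A minimal sketch of that formula, following section 11 of the Bitcoin whitepaper:

```python
def attacker_success_probability(q: float, z: int) -> float:
    """Chance that an attacker controlling fraction q of the total hash power
    ever erases a transaction buried z blocks deep (gambler's-ruin argument
    from section 11 of the Bitcoin whitepaper)."""
    p = 1.0 - q                 # honest network's share of hash power
    if q >= p:                  # the 51% case: catching up is certain
        return 1.0
    return (q / p) ** z
```

This is why a handful of confirmations is enough protection against a small attacker, while no number of confirmations is safe against a true hash-power majority.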
New IRS Rules For Bitcoin And Crypto Holders. + SEC Rejects Bitwise ETF Proposal + Credits Shout-Out
A Bitcoin shadow attack can result in a chain reorg, which could lead to double spending. Note also that the amount of Bitcoin rewarded for mining is halved roughly every four years, which means mining becomes progressively less profitable over time.