Is Hashgraph the next step in distributed trust?
Introducing the fourth generation of distributed trust: from cryptocurrencies, to ledgers, to smart contracts, to markets. Hashgraph meets all third-generation requirements while having the ability to scale to fourth-generation market demands.
But what is Hashgraph? Hashgraph is a DAG (directed acyclic graph) based data structure. Think of it as multiple blockchains working together. The key difference is that a DAG retains every event created (the equivalent of a block in a blockchain), whereas a blockchain discards competing blocks.
In a blockchain, using the PoW (proof of work) consensus algorithm, the block ‘mined’ first is accepted, while all other partially ‘mined’ blocks are not used, leading to inefficiency. Think of a single branch (blockchain) that has notches (disused partial blocks) running all the way along it. In contrast, Hashgraph is like an intertwined set of branches running in the same direction without any inefficiencies (notches).
Why Hashgraph is superior to blockchain
The main benefit Hashgraph has over blockchain consensus mechanisms is fairness in transaction order. Use cases include high-frequency trading (HFT) on a stock exchange, where the millisecond transaction ordering offered by Hashgraph creates a ‘fair’ market. This fairness is achieved through a combination of mathematical proof and accurate time-stamping.
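As a rough illustration of how fairness can fall out of time-stamping: the Hashgraph whitepaper takes an event's consensus timestamp to be the median of the times at which individual nodes first received it, so no single node's clock can move a transaction far in the order. The receive times below are hypothetical.

```python
import statistics

# Hypothetical receive times (seconds) at four nodes for two transactions.
received = {
    "tx1": [10.02, 10.03, 10.01, 10.05],
    "tx2": [10.00, 10.04, 10.06, 10.02],
}
# Consensus timestamp = median of when each node first saw the event;
# one node fudging its clock shifts the median very little.
consensus_ts = {tx: statistics.median(times) for tx, times in received.items()}
order = sorted(consensus_ts, key=consensus_ts.get)  # fair transaction order
```

Here tx1 gets the earlier median (10.025 vs 10.03) and so is ordered first, even though one node saw tx2 earliest.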
What about transaction speeds? A common point of debate among Bitcoin Core developers – as seen with the hard fork* of Bitcoin Cash – is whether to increase the block size within the blockchain to raise transactions per second. Events in Hashgraph, by contrast, can be any size.
When a new event is created, its size is simply the new transaction(s) plus a few bytes of overhead. Events can therefore range from a few bytes (no transactions) to whatever size is required.
Combine this with Hashgraph’s consensus algorithm, ‘PoG’ (proof of gossip), in which nodes gossip to each other about all previous events, spreading gossip about gossip. Transaction speeds can then reach 250,000 per second, even before a ‘lightning network’ equivalent or ‘sharding’ is applied.
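A toy sketch of the gossip pattern, assuming each node simply tracks a set of known event ids. Real Hashgraph syncs signed events and creates a new event recording each sync; this only shows how 'tell a random peer everything you know' spreads information exponentially fast.

```python
import random

def gossip_round(nodes: dict) -> None:
    # Each node picks a random peer and shares every event id it knows.
    # In Hashgraph the receiver would also create a new event whose two
    # parents record who gossiped to whom: gossip about gossip.
    for name in list(nodes):
        peer = random.choice([n for n in nodes if n != name])
        nodes[peer] |= nodes[name]

nodes = {"A": {"a0"}, "B": {"b0"}, "C": {"c0"}}
for _ in range(4):
    gossip_round(nodes)
# on average, each event reaches all n nodes in O(log n) rounds
```

The logarithmic spread is what lets throughput stay high as the network grows: each sync carries many events at once instead of one broadcast per transaction.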
Hashgraph is completely secure. Bitcoin is not.
Hashgraph is completely secure in the sense of aBFT (asynchronous Byzantine fault tolerance), which is, in theory, the strongest form of BFT. Bitcoin is not.
While each Bitcoin confirmation further reduces the probability of accepting a ‘bad’ (insecure) transaction – to roughly one in 150 billion after six confirmations – the protocol is still not aBFT.
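For context on where such confirmation probabilities come from: the Bitcoin whitepaper models the chance that an attacker controlling a fraction q of hashpower overtakes the honest chain after z confirmations. The one-in-150-billion figure above is the article's own; the formula below is Nakamoto's, sketched in Python.

```python
import math

def attacker_success(q: float, z: int) -> float:
    """Nakamoto's catch-up probability: chance an attacker with hashpower
    fraction q overtakes the honest chain after z confirmations."""
    p = 1.0 - q                        # honest hashpower fraction
    lam = z * (q / p)                  # expected attacker progress
    prob = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        prob -= poisson * (1.0 - (q / p) ** (z - k))
    return prob

# With 10% attacker hashpower and six confirmations the risk is tiny,
# but with 50% or more it is certain: probabilistic, not Byzantine, finality.
```

This is exactly the contrast the author is drawing: Bitcoin's security guarantee is a probability that shrinks with each confirmation, whereas aBFT gives finality once consensus is reached.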
Now consider Bitcoin’s energy consumption. It’s a similar story with Ethereum, which is one of the reasons Ethereum is moving to a PoS (proof of stake) consensus. Even in comparison to VISA, the electricity spent per transaction doesn’t currently seem logical. Long term, lightning and/or SegWit integration will hopefully reduce energy consumption. Hashgraph doesn’t have this problem.
What are some of the potential downsides of Hashgraph?
In comparison to completely open-source Bitcoin, Hashgraph is covered by a patent owned by Swirlds, the company behind Hashgraph. One of Swirlds’ stated aims for the patent is to help stabilise the platform by never allowing Hashgraph to fork – a prevalent issue with some ledger platforms, where forks effectively inflate the supply.
Does this mean the code is closed source? No, the code will be open for review, aiming to provide trust and transparency.
Adding to this, the Hedera Hashgraph council will act as the governing body, providing distributed governance. The council consists of 39 members drawn from a diverse range of leading organisations. The goal is a decentralised market solution, both in organisational decision-making and in consensus on transaction ordering.
Its terms dictate that no single member or small group of members can exert undue influence over the body as a whole. For consensus, the network will expand to millions of nodes, all of which will vote on transaction ordering.
Finally, Hashgraph has some significant advantages over traditional distributed trust technologies, but at this stage it has yet to prove itself in the real world. Unforeseen scalability issues could arise, or other unanticipated problems might present themselves.
Only time will tell.