Bridge exploits account for ~50% of all decentralized finance exploits since September 2020, totaling ~$2.5B in lost assets, according to Token Terminal.

Ever since chain interoperability became a hot topic, blockchain bridges have been a popular target for hackers. This is due to the open-source nature of Web3 projects and the huge amounts of money locked and managed by bridges.

In this article, I summarize the most common bugs and mistakes that bridge creators and maintainers make, in order to raise security awareness and prevent their recurrence.

I have categorized them into 6 sins, backed by real stories.

Let's go!

Bridges 101

In the beginning, blockchain bridges were used simply to transfer assets (e.g., native crypto and tokens) from one chain to another, in order to interact with applications on the destination chain. As crypto bridges evolved, another type of bridged asset appeared: a function call. This is how a very wide range of cross-chain Web3 applications emerged (e.g., cross-chain DeFi applications).

From the high-level perspective, the architecture of a bridge can be presented as follows.

Web3 bridge architecture

End users and Web3 protocols can use bridges to transfer assets or make cross-chain function calls.

That would not be possible* without an infrastructure that monitors the source chains and relays the transfers and calls on the destination chain. This relaying infrastructure is operated by trusted third parties with significant privileges, including minting or unlocking assets and making authorized function calls.

Depending on how the bridge handles assets, it may require liquidity providers. If the bridge allows withdrawal of a token on the destination chain that is not minted by the bridge operators, liquidity providers must deposit it first. Conversely, if the bridge mints its own wrapped version of a token, liquidity is not required, but the minted token is in fact a different asset from the originally bridged token (see the official USD Coin (PoS) and its wrapped version USD Coin (Wormhole)) and might not be as widely accepted in applications as the original one.
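
To make that distinction concrete, here is a minimal sketch of the two asset-handling models. These interfaces are purely illustrative; the names do not correspond to any specific bridge.

```solidity
pragma solidity ^0.8.19;

// Lock-and-release model: the original token is held by the bridge on the
// source chain, so liquidity providers must supply the same token on the
// destination chain before it can be released there.
interface ILockReleaseBridge {
    function lock(address token, uint256 amount, uint256 dstChainId, address recipient) external;
    function release(address token, uint256 amount, address recipient) external; // callable by operators only
}

// Burn-and-mint model: the bridge mints its own wrapped representation on the
// destination chain, so no third-party liquidity is needed, but the wrapped
// token is a different asset than the original one.
interface IBurnMintBridge {
    function deposit(address token, uint256 amount, uint256 dstChainId, address recipient) external;
    function mintWrapped(address wrappedToken, uint256 amount, address recipient) external; // callable by operators only
}
```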

To sum up the diagram from a security perspective, the attack surface that a potential attacker could focus on consists of 3 parts:

  • WEB3 - security bugs in smart contracts on both chains (source and destination),
  • WEB2 - attacks on relaying and verifying infrastructures (e.g., intercepting and blocking data flow),
  • PEOPLE - customized social engineering attacks on operators or monitoring their activity to discover mistakes.

The following sins show that all of these potential threats have already materialized and ended up with millions stolen.

* In fact, there is also a P2P bridge architecture in which the relaying infrastructure is not needed, as the two parties interact directly with contracts on the source and destination chains (e.g., Connext V1). However, this approach did not gain popularity due to practical difficulties, including the need to constantly monitor the contracts and call functions on them, which resulted in each liquidity provider running its own small infrastructure.

Sin #1: Improper key management

One of the most important issues, in terms of the value stolen, is improper key management, because operator keys usually protect the critical functions of minting or unlocking assets. Thus, if the operators leak their keys, the lucky ones who "find them" are able to mint an unlimited amount of tokens or withdraw all locked tokens.

Fortunately, multisigs are commonly used, which means that funds are protected not by one key but by multiple keys (let's assume there are N of them), and any subset of M keys is necessary and sufficient to produce a valid signature representing the operators' will (e.g., unlocking funds). This is called an M-of-N multisignature.
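
As a rough illustration of the concept (a minimal sketch, not a production multisig: it has no replay protection and no EIP-712 domain separation, and it requires signatures sorted by signer address), an on-chain M-of-N check could look like this:

```solidity
pragma solidity ^0.8.19;

contract MofNVerifierSketch {
    uint256 public immutable threshold;        // M
    mapping(address => bool) public isSigner;  // the N operator keys

    constructor(address[] memory signers, uint256 m) {
        require(m > 0 && m <= signers.length, "bad threshold");
        threshold = m;
        for (uint256 i = 0; i < signers.length; i++) {
            isSigner[signers[i]] = true;
        }
    }

    // Returns true if at least M distinct operator keys signed the digest.
    function verify(bytes32 digest, bytes[] calldata signatures) external view returns (bool) {
        address last = address(0);
        uint256 valid = 0;
        for (uint256 i = 0; i < signatures.length; i++) {
            address signer = recover(digest, signatures[i]);
            // strictly increasing addresses prevent counting the same key twice
            require(signer > last, "unsorted or duplicate signer");
            last = signer;
            if (isSigner[signer]) {
                valid++;
            }
        }
        return valid >= threshold;
    }

    function recover(bytes32 digest, bytes calldata sig) internal pure returns (address) {
        require(sig.length == 65, "bad signature length");
        bytes32 r = bytes32(sig[0:32]);
        bytes32 s = bytes32(sig[32:64]);
        uint8 v = uint8(sig[64]);
        return ecrecover(digest, v, r, s);
    }
}
```

The security of such a scheme obviously rests on the assumption that the M keys are really independent, which is exactly what broke down in the case below.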

On the other hand, sometimes the operators themselves degrade the multisig to 1-of-ANY, which is equivalent to protecting the bridge with a single key. That was the case with the Ronin bridge, used by the Axie Infinity game.

Initially, the Ronin bridge was protected with a 5-of-9 multisig, so any 5 out of 9 key holders were needed to sign a transaction withdrawing assets from the bridge. However, 4 of the keys were controlled by Sky Mavis, the company that developed Axie Infinity, so in fact it became 2-of-6, unless Sky Mavis kept the 4 keys on different servers, protected by different security means and controlled by different departments, which I doubt.

Additionally, the Axie DAO, which controlled another key, gave Sky Mavis access to it in December 2021, effectively making it 1-of-5 and leaving the bridge controlled by Sky Mavis alone. That means a leak from a single place would give access to the whole bridge.

Sky Mavis denied that there was an attack on the company's infrastructure using technical vulnerabilities and pointed to social engineering as an attack vector.


Another huge case of leaked private keys was the Harmony bridge. We do not know much about how it happened, apart from the fact that 2 keys of a 2-of-5 multisig were leaked and that, according to US officials, the Lazarus Group was behind it.

As Harmony's team stated, "These keys were doubly encrypted using a passphrase and a key management service".

We do not have any proof of how the keys leaked. However, we can hypothesize how this might happen. Based on experience, such leaks are most likely caused in one of the following ways:

  • Intentional transfer (in fact, confirmed by Sky Mavis),
  • Unintentional disclosure (e.g., in a git repository),
  • Unauthorized access to a signing endpoint in a Key Management Service (no plaintext key leakage),
  • A hack of the servers controlling and storing the key.

Sin #2: Insufficient validation

The second sin may sound a bit general, but I mean the validation of passed parameters and of other parts of the transaction sent by a threat actor (e.g., the transaction's value).

The first case is Multichain (formerly known as Anyswap) and its bug in the function that makes a swap with a permit. Such a construction is completely fine in principle: it allows users to make a swap in one transaction, without a separate call to the approve function.

However, it is important to make sure that the supported token either implements the permit function or reverts when it is called. You might think that a token without a permit function will revert if someone tries to call it. But is that true?

Not really. That would be true for most tokens, but there is one very special and popular token: Wrapped ETH (WETH). It has a fallback function that is used to receive Ether and mint WETH, and it accepts zero-value transfers.

So what happens when you try to call the permit function? The fallback function is executed, no WETH is minted, and the call does not revert.
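
Here is a minimal snippet demonstrating that quirk; the interface and contract name are only for illustration, with the WETH address passed in as a parameter:

```solidity
pragma solidity ^0.8.19;

interface IERC20Permit {
    function permit(
        address owner,
        address spender,
        uint256 value,
        uint256 deadline,
        uint8 v,
        bytes32 r,
        bytes32 s
    ) external;
}

contract PermitQuirkDemo {
    // Pass the canonical WETH9 address here. WETH9 has no permit(), but its
    // payable fallback accepts any call (including zero-value ones), so this
    // call succeeds silently: no signature is checked, no approval is set,
    // and yet nothing reverts.
    function callPermitOnWeth(address weth) external {
        IERC20Permit(weth).permit(
            msg.sender,
            address(this),
            type(uint256).max,
            block.timestamp,
            0,
            bytes32(0),
            bytes32(0)
        );
    }
}
```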

Multichain's vulnerable code.

This was used in the Multichain hack. As you can see in the source code above, the function does not validate the token parameter, and the attacker passed the address of a previously deployed malicious contract whose underlying function returned the WETH address.

The Multichain bridge then called permit on the WETH token, which did not revert, and allowed the attacker to reach the safeTransferFrom call with an arbitrary from parameter, thus stealing from anyone who had approved the bridge contract.
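
A simplified sketch of the vulnerable pattern (a reconstruction for illustration, not Multichain's exact code; the contract and function names are mine):

```solidity
pragma solidity ^0.8.19;

interface IWrappedToken {
    function underlying() external view returns (address);
}

interface IERC20Like {
    function permit(address owner, address spender, uint256 value, uint256 deadline, uint8 v, bytes32 r, bytes32 s) external;
    function transferFrom(address from, address to, uint256 amount) external returns (bool);
}

contract VulnerableRouterSketch {
    function swapOutUnderlyingWithPermit(
        address from,
        address token, // attacker-controlled and never validated against a whitelist
        uint256 amount,
        uint256 deadline,
        uint8 v,
        bytes32 r,
        bytes32 s
    ) external {
        // The malicious `token` simply returns the WETH address here.
        address underlying = IWrappedToken(token).underlying();

        // On WETH this hits the fallback: no signature check, no revert.
        IERC20Like(underlying).permit(from, address(this), amount, deadline, v, r, s);

        // The "swap" then pulls WETH from any victim who has approved the router.
        IERC20Like(underlying).transferFrom(from, address(this), amount);
    }
}
```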


A similar case of a non-validated token parameter occurred in Meter.io, and it also involved the WETH token. Meter.io has two functions to deposit assets: depositETH, used to deposit Ether, and deposit, used to deposit other tokens. Do you see where this is heading? 🙂

Meter.io's vulnerable code.

Both of these functions call the deposit function of another contract, called the handler. As you can see in the code above, the depositETH function first wraps the Ether into WETH and then makes the call.

As a side note, Meter.io is a fork of another project with a small modification in the handler's deposit function: it simply does not lock the WETH token, as presented below. That seems reasonable, because when a user deposits Ether, it has already been sent in the depositETH function, and when using the other deposit function the token is not WETH. Right? ;)

Meter.io's vulnerable change.

I am sure you have already got it. Users can call the deposit function to deposit WETH directly as a token, not through depositETH. Then, when the handler's deposit function is called, it will not lock (transfer) the user's tokens, because tokenAddress is the WETH address. And you end up with unlimited free deposits of WETH.
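
A simplified sketch of the flawed handler logic (again a reconstruction, not Meter.io's exact code): the handler skips pulling tokens when the token is WETH, assuming the Ether was already wrapped and sent by depositETH, and the plain deposit entry point breaks that assumption.

```solidity
pragma solidity ^0.8.19;

interface IERC20Minimal {
    function transferFrom(address from, address to, uint256 amount) external returns (bool);
}

contract HandlerSketch {
    address public immutable wethAddress;

    event Deposit(address indexed depositor, address indexed token, uint256 amount);

    constructor(address weth) {
        wethAddress = weth;
    }

    function deposit(address tokenAddress, address depositor, uint256 amount) external {
        if (tokenAddress != wethAddress) {
            // Pull the ERC-20 from the depositor.
            IERC20Minimal(tokenAddress).transferFrom(depositor, address(this), amount);
        }
        // BUG: when tokenAddress == wethAddress nothing is transferred,
        // yet the deposit is still recorded and later bridged.
        emit Deposit(depositor, tokenAddress, amount);
    }
}
```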


The case of the THORChain bridge (Bifrost) hack is interesting because it shows how hard it is to build multi-stack applications. You must know all the tricks of every stack involved.

The vulnerable part of the bridge was built on the Cosmos SDK. Its goal was to parse transactions sent to a particular contract on the Ethereum network and generate the corresponding transactions on THORChain. It simply read the value of a transaction, as presented below.

Bifrost's vulnerable code.

The problem with this code is that it assumes all transactions are direct transactions to the bridge contract. The attacker took advantage of this and prepared a contract that sends back the received value and makes an internal transaction to the Bifrost contract, as presented below.

Bifrost's attack flow.

Bifrost thinks it received 200 ETH, because that is the value of the main transaction, but it never received any ETH, because the internal transaction carried no value. This bug allowed the attacker to simply fake deposits on Ethereum and withdraw free tokens on THORChain.
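
A minimal sketch of the attacker-side trick, as I understand it from the description above (illustrative only, not the actual exploit contract): the outer transaction carries the Ether, but the contract refunds it immediately and forwards a zero-value internal call to the bridge.

```solidity
pragma solidity ^0.8.19;

contract FakeDepositSketch {
    // `bridge` and `depositCalldata` are placeholders for whatever the real
    // deposit entry point expects.
    function fakeDeposit(address bridge, bytes calldata depositCalldata) external payable {
        // Refund the attached Ether to the caller right away.
        (bool refunded, ) = payable(msg.sender).call{value: msg.value}("");
        require(refunded, "refund failed");

        // Zero-value internal transaction that looks like a deposit to an
        // observer that only reads the outer transaction's value.
        (bool ok, ) = bridge.call{value: 0}(depositCalldata);
        require(ok, "deposit call failed");
    }
}
```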


Another case applies not to a bridge itself, but to the Li.Fi project, which is a layer built on top of bridges that integrates with them.

The Li.Fi project allowed users to swap assets before bridging them. As presented below, the project makes a call to the _swapData.callTo contract with the _swapData.callData calldata to fulfill the swap.

Li.Fi's vulnerable code.

The problem, in this case, is that users can specify all parameters in the _swapData struct, including the callTo and callData fields, which means that users can make the project call any function on any contract.

Similarly to the Multichain case, the attacker exploited the bug by executing transferFrom on behalf of the Li.Fi contract. As a result, tokens were stolen from victims who had given an unlimited approval to Li.Fi's contract.
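
A simplified sketch of the vulnerable pattern (not Li.Fi's exact code): both the target and the calldata come straight from user input.

```solidity
pragma solidity ^0.8.19;

contract SwapperSketch {
    struct SwapData {
        address callTo;  // user-controlled target contract
        bytes callData;  // user-controlled calldata
    }

    function swapAndBridge(SwapData calldata _swapData) external payable {
        _executeSwap(_swapData);
        // ... bridging logic would follow here ...
    }

    function _executeSwap(SwapData calldata _swapData) internal {
        // BUG: no whitelist on callTo and no restriction on the function
        // selector, so this can be made to call token.transferFrom(victim, ...).
        (bool success, ) = _swapData.callTo.call(_swapData.callData);
        require(success, "swap failed");
    }
}
```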

The main threats included in this category are:

  • Unusual behavior of an integrated contract (like permit on WETH),
  • Dual nature of an integrated asset (like WETH acting as both Ether and an ERC-20 token),
  • Relying on an invalid source of truth (like the value of the outer transaction),
  • External calls to user-provided contracts.

Sin #3: Insecure upgrades

There are people who claim that upgradeability is itself a vulnerability because it prevents Web3 projects from being completely trustless. I will not discuss this opinion here, but the fact is that an upgrade creates a new attack surface and can lead to a complete collapse of the project if done without the necessary security mechanisms.

The first case I describe in this category is the Nomad bridge. The bridge used a Merkle tree structure to keep track of all valid cross-chain transfers.

Users first had to prove that a specific transfer message is included in the tree, using a Merkle path that results in some root (actually any root, because it is not validated at this point), which is then assigned to the message. In the next step, the process function is called; it verifies that the root assigned to the message is an acceptableRoot (i.e., previously confirmed).

Nomad's process function.

Nomad's acceptableRoot function.

So far, everything works as expected: the algorithm that calculates the root is correct, the process function is correct and will not accept a root that was not previously confirmed.

Then, in June 2022, the Nomad Replica contract was upgraded. If you look at the transaction's details and the initialize function's source code, you will see that it adds a new confirmed root. Can you see what just happened?

Nomad's initialization call.

Nomad's initialization function.

The team confirmed a zero-value root during the upgrade process. What are the consequences? Well, if you think about the default value in the messages mapping, you will understand: it is zero.

To put it simply, after the upgrade anyone could process unproven messages, because their default root is zero, which was now accepted. That led to a race in which people who did not even know exactly what they were doing were copying and processing malicious messages. For sure, some of those messages also came from white hats trying to save tokens.
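
A minimal sketch of why the upgrade was fatal (heavily simplified from Nomad's Replica contract): messages[hash] defaults to bytes32(0) for any message that was never proven, so as soon as the zero root became "confirmed", process accepted arbitrary messages.

```solidity
pragma solidity ^0.8.19;

contract ReplicaSketch {
    mapping(bytes32 => uint256) public confirmAt;  // root => confirmation timestamp
    mapping(bytes32 => bytes32) public messages;   // message hash => root it was proven against (0 if never proven)

    function initialize(bytes32 committedRoot) external {
        // As called in the upgrade: committedRoot == bytes32(0),
        // which marks the zero root as confirmed.
        confirmAt[committedRoot] = 1;
    }

    function acceptableRoot(bytes32 root) public view returns (bool) {
        uint256 time = confirmAt[root];
        if (time == 0) return false;
        return block.timestamp >= time;
    }

    function process(bytes memory message) external {
        // For an unproven message, messages[hash] is bytes32(0),
        // and the zero root is now "acceptable".
        require(acceptableRoot(messages[keccak256(message)]), "!proven");
        // ... the message is executed / funds are released here ...
    }
}
```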

Nomad's race.


The second case of an insecure upgrade is QBridge from Qubit Finance. It had contracts similar to the previously mentioned Meter.io bridge: there is a QBridge contract with a deposit function that hands over the execution to the QBridgeHandler contract's deposit function.

QBridge's vulnerable code.

Then, the QBridgeHandler contract verifies whether the token associated with the passed resourceID is whitelisted and transfers tokens from the depositor using a low-level call to the transferFrom function (hoping it will revert on an invalid transfer).

QBridgeHandler's vulnerable code.

As in the previous case, everything worked correctly until an upgrade came. The Qubit team added a depositETH function to handle native ETH deposits and changed the token address of the ETH resource to the zero address, which now represented ETH.

In other words, before depositETH was added, the resource's tokenAddress was the WETH address; once depositETH was added, it was replaced with the zero address as the tokenAddress of ETH.

The new depositETH function also worked correctly, but one unintended flow was left unblocked: depositing WETH via the deposit function. It was possible because:

  • the zero token address was whitelisted (it now represented ETH),
  • the deposit function was not removed from the handler,
  • a low-level call to the zero address (an address with no code) does not revert.

That flow allowed the attacker to pretend to deposit WETH via the deposit function without actually transferring any WETH, while still emitting the Deposit event that serves as the confirmation of a valid deposit.
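
The third point deserves a short demonstration, because it surprises many developers. A minimal, self-contained example (not Qubit's code):

```solidity
pragma solidity ^0.8.19;

contract LowLevelCallDemo {
    // Returns true even when `token` is address(0) or any other address with
    // no code: a low-level call to a codeless address succeeds and returns
    // empty data instead of reverting, so "nothing happened" looks like success.
    function pretendTransferFrom(address token, address from, uint256 amount) external returns (bool) {
        (bool success, ) = token.call(
            abi.encodeWithSignature("transferFrom(address,address,uint256)", from, address(this), amount)
        );
        return success;
    }
}
```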


The main threats included in this category are:

  • Storage collisions,
  • Insecure execution flows for assets that were previously supported but are no longer,
  • Setting unvalidated edge-case values.

Sin #4: Insecure cryptography

There is a rule that says "don't roll your own crypto" (cryptography, of course ;)), which is very true, but sometimes it is hard even to use existing, proven cryptographic algorithms and protocols securely.

I will start this category with a bug in the Wormhole bridge on Solana. The attacker managed to bypass the checks on Verified Action Approvals (VAAs), which represent actions in the bridge, such as deposits, together with the signatures that confirm them.

Here is the code snippet from Wormhole (on Solana) that verifies VAAs and their signatures.

Wormhole's vulnerable code.

As you can see, it loads a program used to verify ECDSA signatures (on the secp256k1 curve), which means that it delegates signature verification to a built-in program. This is completely fine, as most blockchains have built-in programs that perform specific operations.

What is important here is that it uses the unsafe function load_instruction_at, which does not verify whether the loaded program is actually the built-in one.

Wormhole's unsafe function.

Basically, the attacker created a malicious account with a spoofed secp256k1 program and passed its address to Wormhole, so that it was called instead of the real secp256k1 program. That made Wormhole use a verification algorithm defined by the attacker.

As you can probably guess, the malicious verification passed and Wormhole emitted an event that was handled on the Ethereum side.


Another, more sophisticated attack, which bypassed the verification of a bundle of transfers, was carried out against the BNB Bridge.

This is another bridge that uses the well-known Merkle tree structure to compress many transfers into one root hash that is confirmed and allows multiple transfers to be verified in a single contract call. In fact, it uses the cosmos/iavl library to handle that process.

Below is a visualization of a Merkle tree with one transfer, which constitutes the input for the iavl library.

Merkle proof

The yellow node (a leaf) is a transfer, and the blue ones represent the path that allows the gray node, the root, to be calculated. The calculated root is then compared to the valid root, and if they match, the transfer is valid.

The next example shows a Merkle tree that represents two transfers to be verified. There are two transfers and two paths (blue and pink).

Merkle proof with 2 transfers

What is important is that the leaves are ordered V1, V2, which means that the bridge will first check the root hash for the V1 node; this check passes, as V1 and the path values P1.1, P1.2 and P1.3 are legitimate.

Now, when it moves to the verification of the next leaf, V2, the root hash is already verified and the function only checks whether the hash of V2's sub-path is on the right side of V1's path. Therefore, it will calculate the hash H2.2 and simply compare it with the P1.3 value.

What is important here, and you have probably already noticed it in the diagram, is that each node on the path has two values: Left and Right. What is even more important is that a node must not have both values defined, and the iavl library assumes that this has already been verified (it simply does not check this assumption itself).

Merkle proof

The diagram above presents a Merkle tree with a malicious transfer (the red one, of course ;)), which is the second one on the list of transfers. Why second? Because we want the bridge to verify the root hash for the legitimate transfer V1 first. The verification of V1 is business as usual, because V1 is a legitimate transfer.

Now, let's move on to the verification of V2. Notice that its sub-path is empty, which means that the direct hash of V2 will be compared with V1's path nodes. The path node of V1 that V2 is attached to is P2.

Now notice that node P2 has both Left and Right values defined. The Left value of P2 (the legitimate one) was used to validate the root for V1, but the Right value will be compared with the hash of V2. Guess what? It is equal to V2's hash.

That's all: you have just smuggled in a malicious transfer V2 and it got confirmed!

If you want to know all the details about this attack, check out our ELI5 article on it.


There was also an interesting case of a bug in a bridge between Gnosis Chain and Ethereum (built by Succinct) that uses ZK proofs, but that's a story for another article. Sign up for our newsletter so you don't miss it!


The main threats included in this category are:

  • Rebuilding existing cryptographic primitives,
  • Using unsafe-by-default functions,
  • Insecure use of cryptographic libraries.

Sin #5: Improper authorization

Authorization is the process of specifying access to data or functions. From the smart contract perspective, it usually means specifying who can call a particular function, and it is typically implemented with modifiers (e.g., onlyOwner).

However, some cases involve insecure transitivity of access to specific functions, which is not as easy to detect and cannot be fixed with a simple modifier.

One of the quite common cases related to transitivity of access occurred in Chainswap. It used the infamous approve & transferFrom pattern: users first approve the bridge to transfer assets on their behalf, and the bridge then transfers them within other, more complex calls.

Why infamous? Because of the multiple security issues that arise from this approach (including abuses of unlimited approvals like the ones above, front-running of approval updates, the lack of an easy overview of granted approvals, etc.).

Here is the sendFrom function’s code.

Chainswap's vulnerable code.

Chainswap's _sendFrom function.

As you can see, anyone can call this function and specify the from parameter. Classic abuse of approve & transferFrom!

If you have approved this bridge, anyone can steal your tokens and cross-transfer them to an address on a different chain.
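
A simplified sketch of the flawed entry point (a reconstruction, not Chainswap's exact code): nothing ties the from parameter to msg.sender.

```solidity
pragma solidity ^0.8.19;

interface IERC20Minimal {
    function transferFrom(address from, address to, uint256 amount) external returns (bool);
}

contract BridgeSendSketch {
    IERC20Minimal public immutable token;

    event Send(address indexed from, uint256 indexed toChainId, address indexed to, uint256 volume);

    constructor(IERC20Minimal _token) {
        token = _token;
    }

    function sendFrom(address from, uint256 toChainId, address to, uint256 volume) external {
        // BUG: no check that msg.sender == from (or is otherwise authorized by `from`),
        // so anyone can move tokens of any address that approved this contract.
        token.transferFrom(from, address(this), volume);
        emit Send(from, toChainId, to, volume);
    }
}
```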


Another example also concerns the same bridge, Chainswap. However, this one is more complicated and therefore more interesting.

The general flow of funds in the bridge is as follows:

  1. The user calls the send (or sendFrom, as described above) function, which locks their tokens on the source chain.
  2. The Authorty (yep, that's its name) increases the user's quota on the destination chain.
  3. The user calls the receive function on the destination chain, which updates (decreases) the user's current quota and transfers the funds.

Chainswap's receive function.

Chainswap's _decreaseAuthQuota function.

Why does it need to update the user's quota? Users are rewarded for not withdrawing right away, because their tokens are used as liquidity. If you look at the modifier, you will see the accounting.

Chainswap's modifier.

Now let's discuss one of the abuser stories... What happens if we try to call the receive function without a prior call to the send function on a different chain (an unauthorized call)?

Of course, we would have to prepare signatures that pass the signatory checks, so let's not focus on that part. Let's move on to decreasing the quota, which is preceded by the automatic quota update and ends up in the authQuotaOf function that calculates the current quota.

In the function we have 3 main variables:

  1. quota, which is 0, as it is read from a not-yet-initialized key of a mapping,
  2. quotaCap, which must be huge, as it is the cap,
  3. delta, which is the cap divided by the quota period and multiplied by the number of seconds since the last update.

If you remember the abuser story, you will notice that we have never used this bridge before, so our last update was never, which is represented as a zero value. The current timestamp in seconds minus 0 is huge, and so is the delta variable. In the last line (#2394) the huge delta value is capped by quotaCap, but it is still a nice number for sending nothing :)

So the root cause was that whenever a brand-new address called the receive function, it got a quota of quotaCap tokens for free.
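
A simplified reconstruction of the quota math (not Chainswap's exact code; the variable names and values are illustrative): for a brand-new address both mappings default to zero, so elapsed equals the full current timestamp and the computed quota jumps straight to quotaCap.

```solidity
pragma solidity ^0.8.19;

contract QuotaSketch {
    uint256 public quotaCap = 1_000_000e18;  // illustrative cap
    uint256 public quotaPeriod = 1 days;     // illustrative refill period

    mapping(address => uint256) public quotas;            // defaults to 0 for new addresses
    mapping(address => uint256) public lastUpdateQuotaOf; // defaults to 0 ("never")

    function authQuotaOf(address signatory) public view returns (uint256 quota) {
        quota = quotas[signatory];
        // For a new address this is the whole Unix timestamp: a huge number.
        uint256 elapsed = block.timestamp - lastUpdateQuotaOf[signatory];
        uint256 delta = (quotaCap / quotaPeriod) * elapsed;
        quota += delta;
        if (quota > quotaCap) {
            quota = quotaCap; // still "a nice number for sending nothing"
        }
    }
}
```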


The last example of improper authorization is an interesting bridge integration bug that we found during one of the security reviews for our client. I will cover it in an upcoming article. Subscribe to our newsletter so you don't miss it.


The main threats included in this category are:

  • Undetected unauthorized calls,
  • Access to calls on behalf of other users,
  • Invalid implementation of the approve & transferFrom pattern.

Sin #6: General lack of security

The last sin covered by this article is very general and obvious, but history shows that it is important to mention it.

The main worst practices that are included in this category are:

  • Lack of professional security review. Most hacks happen to code that has not been security reviewed. You cannot rush to production if you have not yet finished your security journey (the THORChain case).
  • Security review as a certificate. Do not consider a security review a certification of the protocol's security. No security audit can cover all possible attack scenarios; in fact, some of them might not even be known prior to deployment.
    • The best guarantee of security is the continuous development of the team's knowledge. To use the security review as effectively as possible, make sure that everyone on the team understands the mistakes that were made.
    • Consider whether the detected vulnerabilities may exist in other places; security reviews always have limited time, and the developers know the code best.
  • Poor documentation. Many of the vulnerabilities found in protocols are business logic vulnerabilities that are difficult to identify without a thorough understanding of the protocol. This is why documentation is so important.
  • Poor tests, a.k.a. low coverage. Developers know their protocol best and are able to identify many security threats. It is good to note them down, but it is necessary to write tests that cover them, even before the functionality is implemented. This way developers can easily go back and verify them. In addition, tests allow developers to check that subsequently added functionality does not violate the protocol's business logic assumptions and security requirements.
  • Other, less-known challenges. There are threats that are not widely discussed but can have significant consequences.
    • Chain reorgs (lack of finality),
    • Liquidity on bridges,
    • Non-standard tokens (rebasing tokens).

Learn from the mistakes of others

If you have gone through all the sins described in this article, I hope you promise not to commit them - it's hard to get absolution from users after losing their funds.

  • Did you like this article? Be sure to share it on social media!

Composable Security 🇵🇱⛓️ is a Polish company specializing in increasing the security of projects based on smart contracts written in Solidity. Examples of projects that have trusted us are market leaders such as FujiDAO, Enjin, and Tellor. We are the creators of the Smart Contract Security Verification Standard, speakers at conferences such as EthCC, ETHWarsaw, and OWASP AppSec EU, authors of numerous publications on DeFi security, and experienced auditors operating in the IT security space since 2016.

If you need support in the field of smart contract security or audits of smart contracts, do not hesitate to contact us.

Damian Rusinek

Managing Partner & Smart Contract Security Auditor

About the author

PhD, Speaker, Co-Author of SCSVS and White Hat. Professionally dealing with security since 2009, contributing to the crypto space since 2017. Smart contract security research lead.
