
Top 7 Findings in Off-Chain Components

Damian Rusinek

Managing Partner & Smart Contract Security Auditor

This article highlights seven interesting vulnerabilities discovered during security audits of off-chain components. These findings are presented with detailed descriptions, potential attack vectors, recommended solutions, and a summary of how each vulnerability was ultimately resolved.

Off-chain audits

As a company, we take a comprehensive approach to our clients’ security and do much more than smart contract audits. A project is only as secure as the weakest link in its system, and despite growing awareness of the need to test smart contracts, off-chain components are often overlooked by Web3 projects. The examples below show why they should not be neglected, drawing on both large projects such as Lido and smaller ones such as Gasbot and Research Portfolio.

Finding 1: Gasbot V2

Description

Gasbot V2 is a fast bridge facilitating asset transfers across multiple blockchains. It is highly centralized by design, in order to minimize the cost of using the system and increase its gas efficiency.

While the main focus was on the on-chain part, that is, the smart contracts deployed on multiple chains, the project also included a web application used to manage bridged tokens and bridging transactions.

The frontend application first makes a request to an endpoint that returns the details of the token transfer on the destination chain (e.g. the minimum amount received). The user then signs the transaction and sends it to the source chain. To speed up execution on the destination chain, the user can call the verify-txn endpoint, passing the transaction hash to be picked up by the backend application, verified, and executed on the destination chain.

During the review, a penetration test of the web application was conducted to determine whether an arbitrary transaction with fake transfers to the bridge on the source chain could be submitted. The system could not be induced to execute a transaction on the destination chain for an invalid transaction on the source chain; however, the test revealed another issue.

When an invalid transaction was submitted to the verification endpoint, the web application attempted to query and handle it but failed. The transaction was not flagged as invalid, causing the application to retry every second. Each attempt consumes Node RPC API points; eventually these points are exhausted and the web application stops functioning, blocking legitimate bridged transfers.

Attack Vector and Scenario

  1. The attacker submits a non-existent transaction hash to the verification API endpoint.
  2. The application tries to query it but fails.
  3. The transaction’s status remains unchanged, but the operation from point 2 costs Node RPC API points.
  4. Every second, the application retries the operation from point 2 for as long as the Node RPC API points balance allows (a sketch of this retry loop follows the list).
  5. The Node RPC API points balance is drained.
  6. The application cannot query legitimate transactions and the bridge stops working.
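
For illustration, the retry behaviour can be sketched as follows. This is a hypothetical reconstruction rather than the project’s actual code; the pending set, the receipt lookup, and the transfer handler are assumed names passed in as parameters.

import time
from typing import Callable, Set

def process_pending_transactions(
    pending: Set[str],
    get_receipt: Callable[[str], dict],
    handle_transfer: Callable[[dict], None],
) -> None:
    """Hypothetical reconstruction of the vulnerable polling loop."""
    while True:
        for tx_hash in list(pending):
            try:
                # Every lookup consumes Node RPC API points, even when the
                # submitted hash does not exist on-chain.
                receipt = get_receipt(tx_hash)
            except Exception:
                # The failure is swallowed and the hash is never marked as
                # invalid, so it will be retried on the next pass.
                continue
            handle_transfer(receipt)
            pending.discard(tx_hash)
        time.sleep(1)  # the loop runs every second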

How it was Fixed

The fix for the issue was straightforward: as soon as the application detects that the submitted transaction is invalid, it should be marked as invalid and never processed again.

The fix, however, is not free from possible threats. It introduces another risk: an attacker could monitor transactions incoming to the bridge and submit their hashes to the verification endpoint before they are processed by the network, so that they are marked as invalid and their later execution is blocked. To overcome this, the application should still monitor transactions incoming to the bridge contract and handle them even if they were previously marked as non-existent; otherwise, the attack would block the processing of those legitimate transfers.
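
A minimal sketch of the fixed handling under these assumptions (the helper names are hypothetical; the actual backend code was not disclosed): invalid hashes are remembered and skipped, while transfers observed directly on the bridge contract are still processed even if their hashes were previously marked as invalid.

from typing import Callable, Iterable, Set

def verify_txn(
    tx_hash: str,
    pending: Set[str],
    invalid: Set[str],
    get_receipt: Callable[[str], dict],
) -> None:
    """Mark unknown hashes as invalid instead of retrying them forever."""
    if tx_hash in invalid or tx_hash in pending:
        return
    try:
        get_receipt(tx_hash)
    except Exception:
        invalid.add(tx_hash)  # never queried again through this endpoint
        return
    pending.add(tx_hash)

def reconcile_with_bridge_events(
    observed_tx_hashes: Iterable[str],
    pending: Set[str],
    invalid: Set[str],
) -> None:
    """Transfers seen on the bridge contract are handled even if an attacker
    front-ran them and got their hashes marked as invalid."""
    for tx_hash in observed_tx_hashes:
        invalid.discard(tx_hash)
        pending.add(tx_hash)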

Finding 2: Research Portfolio

Description

The Research Portfolio project focuses on tokenizing research papers, effectively creating a digital asset for each publication. The system allows a defined distribution model to be established, accurately reflecting the contributions of all involved authors. Crucially, it enables users who benefit from or use the research to contribute tokens back to the paper, with these donations automatically allocated among the authors according to the pre-set distribution parameters.

The security review of the Research Portfolio project commenced with an in-depth examination of its smart contracts. During this initial phase, a high-severity vulnerability was identified: it was possible to bypass the intended validation mechanisms by calling the project’s factories directly. This circumvention allowed distributions to be created and set up without the checks enforced by the designated Entrypoint contract, which was designed to validate all such operations.

The result of that issue was the potential theft of donations. An attacker might take the following steps in turn:

  1. Bypass the Entrypoint contract and call createResearchToken in ResearchTokenFactory directly, giving their own address as the minter.
  2. Similarly, call createDistributorV2 in DistributorV2Factory directly and set up an arbitrary distribution (e.g. 50% for themselves and 25% for each of two researchers).
  3. The web application stores their ResearchToken together with the distribution.
  4. The attacker hopes the research paper will pass verification, because the research might actually be theirs.
  5. People willingly donate to the research because they assume the author will share the donation with the two other researchers.
  6. No ResearchTokens have been minted so far because the Entrypoint contract was not used. The attacker mints all the tokens for themselves and takes the whole prize.

In the scenario above, point 4 was not guaranteed, because the attacker had to rely on the administrator overlooking any suspicious activity and verifying their research. Consequently, further investigation checked whether this step could be achieved in a different way.

After initial reconnaissance of the web application, it was noticed that there is an API endpoint used to toggle the verification status: /api/toggleContractVerifyStatus. It allows the administrator to change the verification status of a selected paper and requires three parameters: myAddress, contractAddress, and researchIdentifier. All of these parameters can be retrieved using a GraphQL query.

[Screenshot: GraphQL response from the off-chain component]

After finding the appropriate values, it turned out that access control was most likely based solely on checking whether the myAddress parameter is a privileged (admin) address.

[Screenshot: HTTP response from the off-chain component]

Knowing the value of the myAddress parameter, which is publicly available, it was possible to bypass access control, verify any research paper, and present it on the website as a legitimate, verified one. The attacker could then simply wait for donations and steal them.
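
Based on the observed behaviour, the flawed check most likely looked something like the sketch below. This is an assumption for illustration only; the handler and helper names are hypothetical.

ADMIN_ADDRESS = "0x0000000000000000000000000000000000000001"  # publicly known

def update_verification_status(contract_address: str, research_identifier: str) -> None:
    """Stub standing in for the database update of the verification flag."""
    ...

def toggle_contract_verify_status(my_address: str, contract_address: str,
                                  research_identifier: str) -> bool:
    """Hypothetical reconstruction: the caller-supplied myAddress parameter is
    trusted instead of an authenticated identity, so anyone who knows the
    public admin address passes the check."""
    if my_address.lower() != ADMIN_ADDRESS.lower():
        return False
    update_verification_status(contract_address, research_identifier)
    return True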

Attack Vector and Scenario

Combining those two issues, the attacker can slightly update the attack scenario and do the following:

  1. Bypass the Entrypoint contract and call createResearchToken in ResearchTokenFactory directly, giving their own address as the minter.
  2. Similarly, call createDistributorV2 in DistributorV2Factory directly and set up an arbitrary distribution (e.g. 50% for themselves and 25% for each of two researchers).
  3. The web application stores their ResearchToken together with the distribution.
  4. The attacker verifies the token through the described off-chain vulnerability.
  5. People willingly donate to the research because it is verified, and they assume the author will share the donation with the two other researchers.
  6. No ResearchTokens have been minted so far because the Entrypoint contract was not used. The attacker mints all the tokens for themselves and takes the whole prize.

Our Recommendation

For the on-chain vulnerability, it was recommended to allow the factories to be called only by the Entrypoint contract and to have the web application use the Entrypoint.

For the off-chain issue, it was recommended that access control should unambiguously bind the user to their role in the system. The user, as the owner of the research, must sign the details of the performed operation (including a timestamp, the changed value, and the paper details), and the signature must be verified off-chain to make sure they are authorized to perform the operation.

Alternatively, an off-chain authentication and authorization mechanism can be introduced that assigns a session to logged-in administrators.
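
A minimal sketch of the signature-based variant, assuming the eth_account library and hypothetical payload fields; the production implementation may differ.

import json
import time

from eth_account import Account
from eth_account.messages import encode_defunct

AUTHORIZED_ADDRESSES = {"0x0000000000000000000000000000000000000001"}  # hypothetical
MAX_AGE_SECONDS = 300

def is_authorized(payload: dict, signature: str) -> bool:
    """Recover the signer of the operation details and check their role.

    The payload is expected to contain the operation name, the paper details,
    the changed value and a timestamp, so a captured signature cannot be
    replayed later or reused for a different operation.
    """
    if time.time() - payload["timestamp"] > MAX_AGE_SECONDS:
        return False
    message = encode_defunct(text=json.dumps(payload, sort_keys=True))
    signer = Account.recover_message(message, signature=signature)
    return signer.lower() in {a.lower() for a in AUTHORIZED_ADDRESSES}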

How it was Fixed

The recommended fixes have been implemented. The factories can now only be called by the Entrypoint contract, and an appropriate access control mechanism has been introduced on the off-chain side.

Finding 3: Centralized Exchange

Description

This example is a combination of two different issues in two different projects – an on-chain token and a centralized exchange. Due to the non-disclosure policy their names cannot be mentioned.

The token had a quite straightforward security issue: it used tx.origin as the sender in one of its transfer functions to approve another contract to transfer tokens on the sender’s behalf. The use of tx.origin for authorization is strongly discouraged and treated as a vulnerability, because if you persuade, trick, or force a victim into calling your contract, you are able to steal all their tokens.

Exploiting the aforementioned vulnerability is challenging, as it requires either a phishing campaign to deceive victims or another vulnerability to inject your contract as the recipient. Therefore, overcoming this limitation was a necessary step.

Knowing that, a concept for attacking a centralized exchange was developed. The premise is straightforward: when someone withdraws cryptocurrency (e.g. ETH), the exchange initiates a call to the withdrawal address, which is under the withdrawer’s control.

Attack Vector and Scenario

The attack on the exchange needs some setup. First of all, the exchange must list the vulnerable token. Next, and most importantly, the exchange must use the same hot wallet to handle withdrawals of both the native cryptocurrency and the vulnerable token. If these requirements are met, the rest is quite simple.

  1. Deploy the following contract:
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Minimal interface of the vulnerable token; vulnerableFunction stands in
// for the tx.origin-based approval/transfer function.
interface Token {
    function balanceOf(address account) external view returns (uint256);
    function vulnerableFunction(address recipient, uint256 amount) external;
}

contract Exploit {

    address public token;

    /**
     * Use the vulnerable token address in the constructor.
     */
    constructor(address _token) {
        token = _token;
    }

    // Triggered by the exchange's ETH withdrawal. msg.sender (and tx.origin)
    // is the exchange hot wallet, so its whole token balance is drained.
    fallback() external payable {
        uint256 balance = Token(token).balanceOf(msg.sender);
        Token(token).vulnerableFunction(address(this), balance);
    }
}
  2. Set the deployed contract’s address as your withdrawal address on the exchange.
  3. Withdraw a small amount of ETH to your withdrawal address.

The rest happens automatically. As soon as the withdrawal transaction is executed, the fallback function is called, which calls vulnerableFunction on the token. Since tx.origin is the exchange hot wallet’s address, the call transfers all tokens from the hot wallet to the withdrawal address.

Our Recommendation

The exchange was advised to move the vulnerable token to a separate hot wallet as soon as possible, used exclusively for managing withdrawals of this token.

The next step was on the token owner’s side. They should update the token contract and remove the vulnerable function.

How it was Fixed

Upon submission of the issue, the exchange promptly moved the vulnerable token to a separate hot wallet. Subsequently, the token was updated to V2, allowing the exchange to securely manage it in the same manner as other tokens.

Finding 4: Lido Oracle – Uncaught exception leading to DoS

Description

This vulnerability was present in the Lido Oracle’s Ejector module, which is responsible for reporting which validators should be exited when the next report is submitted.

Some Node Operators, who manage validators, can be forced to exit a number of validators. The get_remaining_forced_validators function is designed to remove validators that are required to exit, even if the withdrawal queue is fully claimable. This function iterates through all node operators, sorting them in descending order based on the number of validators awaiting forced exit.

The issue arises when a Node Operator has transient validators, which are counted in predictable_validators but not included in exitable_validators. A transient validator is a validator that has already deposited ETH but is not yet registered on the Consensus Layer, so it cannot be exited yet. Consequently, if the function attempts to remove a validator included in predictable_validators when exitable_validators is empty, an IndexOutOfBounds exception occurs, halting report generation.

Vulnerable Scenario

The following steps lead to the described issue:

  1. A Node Operator has one exitable validator and two transient validators. The force_exit_to limit is initially not set.
  2. The force_exit_to limit is then adjusted to 1. 
  3. The Ejector module calculates the number of predictable validators as 3. The new limit necessitates the exit of two validators while retaining one.
  4. Based on this miscalculation, the Ejector determines that 2 validators need to be exited (3 predictable – 1 limit).
  5. The module proceeds to eject the single available validator from the exitable_validators list.
  6. In the subsequent iteration of the processing loop, the difference is recalculated as 1 (2 remaining predictable validators – the limit of 1).
  7. The Ejector module attempts to eject another validator but tries to remove an element from the now-empty exitable_validators list. This action raises an uncaught exception.

The uncaught exception, not handled by the Oracle, disrupts the creation of the entire current report, leading to incomplete or failed reporting.

Our Recommendation

It was recommended that the while loop should confirm the existence of exitable validators for Node Operators that have not yet reached the force_exit_to limit.

Furthermore, it was crucial to ensure that the fix does not introduce an infinite loop. This scenario could occur if a Node Operator, despite not having reached the force_exit_to limit, has no exitable validators. Detecting this condition is vital to avert a Denial of Service.
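
A simplified sketch of the recommended loop guard is shown below. The data structures are hypothetical stand-ins, not the actual Lido Oracle types.

from dataclasses import dataclass, field
from typing import List

@dataclass
class OperatorState:
    """Hypothetical, simplified view of a Node Operator in the Ejector."""
    exitable_validators: List[str] = field(default_factory=list)
    predictable_validators: int = 0
    force_exit_to: int = 0

def select_forced_exits(operator: OperatorState) -> List[str]:
    """Pick validators to force-exit without popping from an empty list and
    without looping forever when only transient validators remain."""
    to_exit: List[str] = []
    while operator.predictable_validators > operator.force_exit_to:
        if not operator.exitable_validators:
            # Only transient (non-exitable) validators are left: stop instead
            # of raising an exception or spinning forever.
            break
        to_exit.append(operator.exitable_validators.pop())
        operator.predictable_validators -= 1
    return to_exit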

How it was Fixed

The vulnerability has been fixed as recommended. Instead of checking the number of predictable validators, the Oracle now checks whether there are any exitable validators left. Additionally, when the limit has not been reached but no exitable validators remain, the loop is stopped to avoid infinite iterations.

Finding 5: Lido Oracle – Bypassing withdrawals lock

Description

The Lido Oracle has a security mechanism which is activated during a mass-slashing event and delays finalization of withdrawal requests in the queue.

The get_safe_border_epoch function determines the border epoch for unlocking withdrawal requests. All requests submitted before this epoch can be finalized. It relies on another function called _get_associated_slashings_border_epoch to compute this epoch based on the ongoing mass slashing activity.

In a mass slashing scenario, this function returns the most recent epoch before the slashing commenced. This epoch is determined as the earliest slashed_epoch among all validators with incomplete slashings at the time of the reference_epoch, rounded to the beginning of the most recent oracle report frame.

A challenge arises because the slashed_epoch field is not present in the validator data; it must be inferred. If the difference between a validator’s withdrawable_epoch and exit_epoch equals MIN_VALIDATOR_WITHDRAWABILITY_DELAY, it suggests a large exit queue, making it difficult to ascertain the precise slashed_epoch. Conversely, if this condition does not hold, the slashed_epoch can be estimated as withdrawable_epoch – EPOCHS_PER_SLASHINGS_VECTOR.
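
In code, the inference described above boils down to roughly the following. This is a simplified sketch using the Ethereum mainnet spec constants; the actual Oracle implementation may differ.

from typing import Optional

MIN_VALIDATOR_WITHDRAWABILITY_DELAY = 256  # epochs (mainnet spec value)
EPOCHS_PER_SLASHINGS_VECTOR = 8192         # epochs (mainnet spec value)

def predict_slashed_epoch(exit_epoch: int, withdrawable_epoch: int) -> Optional[int]:
    """Infer a slashed validator's slashed epoch from its exit data.

    Returns None when the exit queue was large at slashing time, in which case
    the precise slashed epoch cannot be derived from these two fields.
    """
    if withdrawable_epoch - exit_epoch == MIN_VALIDATOR_WITHDRAWABILITY_DELAY:
        return None
    return withdrawable_epoch - EPOCHS_PER_SLASHINGS_VECTOR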

The core problem lies in the Oracle’s assumption that the earliest exit_epoch corresponds to one of the validators involved in the mass slashing. A dishonest operator could anticipate a mass slashing event and proactively exit one of their validators. 

Consequently, when the mass slashing occurs, the border epoch is calculated using the exit epochs of the validators being slashed. If the operator slashes their validator before it officially exits, the earliest exit_epoch will belong to that validator, resulting in a situation where the safe border is determined by its intentional slashing.

Attack Vector and Scenario

For simplicity, it is assumed that the minimum exit delay is 10 epochs and the minimum withdrawability delay is 20 epochs. The attacker might take the following steps in sequence:

  1. A sophisticated operator managing multiple validators detects a potential mass slashing event.
  2. The operator exits one of their validators at time t0. This sets the exit epoch to t10 = t0 + 10 and the withdrawable epoch to t20 = t0 + 20 (reflecting the minimum withdrawability delay).
  3. A mass slashing event begins at time t5. The first slashed validators have an exit epoch set at t15 = t5 + 10 and a withdrawable epoch at t105 = t5 + 100 (based on the epochs-per-slashings-vector value, assumed here to be 100).
  4. The operator confirms the mass slashing and submits a withdrawal request at time t8, but it is blocked due to the active mass slashing, which establishes the safe border at t5 (the slashed epoch of a validator with the earliest exit epoch).
  5. The operator proceeds to slash the validator from step 2 at time t9. Consequently, the earliest exit_epoch will now be associated with the operator’s validator, and the slashed epoch becomes t9. 
  6. This action unlocks all withdrawal requests made between t5 and t9, which should have been blocked.

Our Recommendation

It was recommended that during a mass slashing event, the Oracle should consider the maximum slashed_epoch for all validators that are subject to slashing and have not yet reached the withdrawable state, rather than limiting the calculation to those with the earliest exit epoch.

How it was Fixed

All slashed validators that are not yet withdrawable are now considered, rather than only those with the absolute earliest exit epoch. This means the computed “earliest predicted slashed epoch” now reflects a broader view of the validators’ exit characteristics instead of being determined solely by the earliest exit epoch.

Finding 6: Lido Oracle – Negative Lido Fees

Description

Lido Protocol’s V3 introduces external vaults, each with independent validators managed by a single Node Operator for ETH deposits. While all depositors earn rewards and the Node Operator receives a share, the vault also incurs fees, which are paid out of those earnings.

Fees are determined by the simulated APR of stETH. This is calculated by taking the difference in stETH rates between consecutive reporting frames, dividing it by the previous rate, and then multiplying the result by the total number of frames in a year.

(post_rate - pre_rate) / pre_rate * frames_per_year

Slashing can cause the fees to become negative when post_rate is less than pre_rate. This would imply that the Lido Protocol pays the vault, which is an unintended outcome. In fact, such a report can never be applied to the vault (to update its state), because the smaller fee would cause an underflow when anyone tries to apply the report to the vault.

Vulnerable Scenario

The following steps lead to the described result:

  1. The vault is connected and some reports are applied to it.
  2. The vault has X Lido fees assigned.
  3. The rate of stETH decreases, e.g. due to slashing.
  4. The vault has Y Lido fees assigned, where Y is lower than X due to the negative rate difference.
  5. The report is submitted to Lido.
  6. When anyone tries to apply the vault’s report, it reverts due to the underflow.

Our Recommendation

The straightforward recommendation was to set the rate difference to 0 when it becomes negative, which ultimately sets the fees to zero when the rate decreases.
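
A minimal sketch of the clamped calculation, with illustrative names rather than the protocol’s actual ones:

def simulated_apr(pre_rate: float, post_rate: float, frames_per_year: int) -> float:
    """Simplified fee APR with the recommended clamp: a negative rate change
    is treated as zero, so the resulting fees can never go negative."""
    rate_diff = max(post_rate - pre_rate, 0.0)
    return rate_diff / pre_rate * frames_per_year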

How it was Fixed

The vulnerability has been fixed as recommended.

Finding 7: Lido Oracle – Undercollateralized stETH minting

Description

The Lido Oracle calculates the total value of each vault in V3 of the protocol. The value includes the vault’s balance, all ETH kept on validators associated with the vault and the amount of predeposited ETH. The Oracle adds any pending deposits if the validator meets the criteria for activation.

For a validator to be ready for activation, it must possess at least 32 ETH (from both balance and pending deposits) and have at least one pending deposit of at least 31 ETH. A malicious Node Operator can exploit this to have the Oracle count the predeposited ETH twice, allowing them to mint stETH based on this inflated amount.

As the same ETH is counted twice, the vault’s total value is inflated, allowing more stETH to be minted than should be allowed.

Attack Vector and Scenario

The malicious Node Operator could execute the following:

  1. The Node Operator predeposits a validator just before refBlock and immediately deposits 31 ETH via the DEPOSIT_CONTRACT.
  2. Another 31 ETH is staged (and locked) in StakingVault, resulting in a total expenditure of 63 ETH (1 ETH predeposit + 31 ETH deposit + 31 ETH staged).
  3. Assume no additional ETH is in the StakingVault and the Node Operator has only this one validator.
  4. A new report is generated before the validator is confirmed. The total value calculated is: 0 (available) + 31 (staged) + 1 (predeposit) + 32 (validator ready) = 64 ETH.
  5. Consequently, the Node Operator can mint stETH against 64 ETH, despite having spent only 63 ETH.

Our Recommendation

It was recommended to exclude the predeposited ETH for validators that are ready for activation by tracking which validators have active predeposits and deducting that amount from their total balance.
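
A simplified sketch of the recommended deduction, using hypothetical field names rather than the actual Oracle data structures:

from dataclasses import dataclass
from typing import List

PREDEPOSIT_AMOUNT_ETH = 1  # predeposit size assumed in the scenario above

@dataclass
class ValidatorInfo:
    balance_eth: int            # balance plus pending deposits seen by the Oracle
    ready_for_activation: bool  # at least 32 ETH and a pending deposit of >= 31 ETH
    has_active_predeposit: bool

def vault_total_value(available_eth: int, staged_eth: int, predeposited_eth: int,
                      validators: List[ValidatorInfo]) -> int:
    """Total value with the recommended fix: predeposited ETH that is already
    reflected in a ready validator's balance is deducted, so the same ETH is
    not counted twice."""
    total = available_eth + staged_eth + predeposited_eth
    for validator in validators:
        if validator.ready_for_activation:
            value = validator.balance_eth
            if validator.has_active_predeposit:
                value -= PREDEPOSIT_AMOUNT_ETH  # avoid double counting
            total += value
    return total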

How it was Fixed

The vulnerability has been fixed in a slightly different way. The total value no longer includes the whole pre-deposited amount up front. Instead, all new validators are checked for their pre-deposit stage and, if it is PREDEPOSITED, the pre-deposit amount is added to the total value.

Therefore, new validators that were deposited externally (not using the pre-deposit flow) are not included until they are eligible for activation.

Conclusion

This deep dive into seven distinct off-chain vulnerabilities underscores the critical importance of a comprehensive security posture that extends beyond smart contracts to all interconnected components. 

From the denial-of-service risk in Gasbot V2 and the donation theft potential in Research Portfolio to the tx.origin-based token theft involving a centralized exchange and the four Lido Oracle findings (DoS, withdrawal-lock bypass, negative fees, and undercollateralized minting), each finding highlights how seemingly minor oversights in off-chain logic, access control, or data handling can lead to significant financial or operational damage.

The swift and effective implementation of the recommended fixes across these projects demonstrates a commitment to robust security, providing valuable lessons for developers and auditors alike. 

Ultimately, these cases serve as a testament to the continuous and evolving nature of blockchain security, where every layer of the system, including off-chain components, is paramount.
