After much anticipation from the blockchain community, zkSync Era, a zero-knowledge rollup built to scale Ethereum transactions, launched to considerable fanfare. However, the platform hit a major setback just over a week after its release, when it was unable to produce blocks for several hours.
This hiccup has left many wondering about the platform’s reliability and ability to deliver on its promises, raising questions about the future of ZkSync and its impact on the broader blockchain ecosystem.
zkSync’s success is short-lived as the platform encounters a setback
zkSync Era, a Layer 2 scaling solution, ran into a significant problem early in its second week. On April 1, just eight days after launch, the platform stopped producing blocks, leaving users unable to process transactions on the zkSync Era mainnet.
According to block explorer transaction data, the network produced no blocks between 1:52 AM and 6:02 AM CET on April 1. While the block failure on the zkSync Era network was undoubtedly a cause for concern, it was not entirely unexpected.
The zkSync Era Debacle
Blockchain networks have encountered similar challenges before, such as the recent block production halt on the Avalanche network. What set the zkSync Era incident apart was how soon after launch it occurred, which drew the attention of users and professionals alike.
It is worth noting that in its response, the team pointed out that the network is still in its alpha stage. In other words, the system is still being tested and may face similar issues as it progresses toward a more dependable and consistent version.
Reason Behind zkSync Era’s block malfunctions
According to the zkSync Era team's announcement, the outage was caused by a fault in the block backlog database, which halted block production. Unfortunately, the database's monitoring system did not trigger an alert because it could not establish a connection to collect metrics.
This means the issue went undetected until users began reporting problems with the platform. Interestingly, the API remained operational throughout the malfunction, which explains why no alerts were triggered.
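The incident illustrates why an "API is responding" check alone cannot catch a block production stall: the chain head must also be advancing. A minimal liveness probe can fetch the latest block over standard Ethereum JSON-RPC (which zkSync Era supports) and flag the chain as stalled when the newest block is too old. This is a sketch, not zkSync's actual monitoring; the endpoint URL and the five-minute staleness threshold are illustrative assumptions.

```python
import json
from urllib import request

# Illustrative endpoint -- substitute any zkSync Era JSON-RPC node.
RPC_URL = "https://mainnet.era.zksync.io"

def latest_block_timestamp(rpc_url: str) -> int:
    """Return the Unix timestamp of the newest sealed block via eth_getBlockByNumber."""
    payload = json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "eth_getBlockByNumber",
        "params": ["latest", False],
    }).encode()
    req = request.Request(rpc_url, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req, timeout=10) as resp:
        block = json.load(resp)["result"]
    return int(block["timestamp"], 16)  # timestamp is hex-encoded seconds

def chain_is_stalled(block_ts: int, now: int, max_age_s: int = 300) -> bool:
    """True when no block has been sealed within max_age_s seconds."""
    return now - block_ts > max_age_s
```

A probe like this would have reported a stall during the 1:52 AM to 6:02 AM window even while the API kept answering requests, because it tests what the API serves rather than whether the API is up.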
Based on their announcement, the team fixed the problem in just five minutes, a testament to their dedication and expertise.
They also introduced alerts that trigger if the monitoring agent malfunctions or cannot establish a connection to collect metrics, which should help prevent similar blind spots in the future.
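The fix described above is essentially a watchdog (or "dead man's switch"): instead of only alerting on bad metrics, the system also alerts when metrics stop arriving at all. A minimal sketch of that pattern, with hypothetical names and a made-up two-minute silence window, might look like this:

```python
from dataclasses import dataclass

@dataclass
class AgentWatchdog:
    """Alerts when the monitoring agent itself goes silent, so a dead
    agent can no longer mask an outage (illustrative sketch only)."""
    max_silence_s: float = 120.0  # assumed threshold, not zkSync's value
    last_report: float = 0.0

    def record_heartbeat(self, now: float) -> None:
        # Called each time the agent successfully delivers metrics.
        self.last_report = now

    def agent_is_silent(self, now: float) -> bool:
        # True when no metrics have arrived within the allowed window --
        # the condition the original monitoring setup never checked.
        return now - self.last_report > self.max_silence_s
```

The design choice here is to treat the *absence* of data as a first-class alert condition, which is exactly the gap that let the April 1 outage go unnoticed until users reported it.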