Why Smart Contracts Fail: Undiscovered bugs and what we can do about them

Hrishi Olickel
Jun 21, 2016 · 7 min read


While I enjoy Medium, most of my writing has moved to my personal blog. I’d love to see you there.

My first foray into the world of smart contracts was through a venture called Gruber, a resource-sharing platform that would maintain itself on the blockchain and be open to rewrites through consensus from the community. While it was a hackathon project that did not last much longer than the day it was built, it prompted some discussion online that made me wonder about the possibilities of a truly peer-to-peer organisation. Smart contracts would allow such an entity to live on the blockchain, mediating exchanges of money and resources with the code as the final, neutral arbiter.

Today it sounds a little too familiar and provokes a very different reaction than the one I initially had.

A few months ago, we started working on an analysis of the smart contracts launched on Ethereum, along with a tool that could improve their security. The issues we found stemmed from systemic gaps in understanding how smart contracts function, and how they are fundamentally different from traditional programs that are not executed on a decentralised network. We analysed 19,366 contracts and found that a little less than 44% of them, representing a collective worth of over 62 million USD at the time of writing, were flagged as buggy by our tool. Presented here is a look into what these errors are and why they tend to happen. Our recent paper contains detailed information on the bugs we discovered, as well as potential solutions. The paper has been in submission for over a month, but we believe it prudent to release the findings now in light of recent events.

The Bugs

On the Bitcoin blockchain, the most common transaction is a payment from A to B. On Ethereum, a transaction similarly carries a payload that can invoke a smart contract, the results of which affect the state of the blockchain. It is easy to think of contracts as ordinary programs and to write code for them the way one would for a web server or a mobile application. However, a centralised system such as your computer can provide certain guarantees that a public blockchain cannot. When executing a program that resides on your system, it is natural to assume that the state of the program, or even the code that will be run, does not change between invocation and execution. Even in the case of web applications, where the code is not owned by the user, it is not uncommon to assume the same. This is fatally untrue for smart contracts, and it leads to a bug we call Transaction-Ordering Dependence (TOD), found in 15.8% of all contracts on the blockchain.

In the TOD bug, the user of a smart contract assumes a particular state of the contract, a state that may no longer exist when his transaction is processed. This can lead to unexpected, sometimes malicious behaviour. For example, consider a contract that offers a bounty for the solution to a puzzle. The owner can update the bounty as long as it hasn't been claimed, and users submit solutions to the contract to claim the prize. If the owner so wished, he could monitor the network for submissions. On detecting an unprocessed transaction containing a valid submission, he could submit one of his own that lowers the bounty to zero. There is a good chance his transaction is processed first, at which point the user watches helplessly as the bounty turns to zero and his answer is accepted at no cost to the owner. This is somewhat similar to the attack on TheDAO, which relies on the assumption that the state of the contract does not change after a withdrawal.
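To make this concrete, here is a minimal sketch of such a bounty contract in the Solidity of the day. The names, the funding logic and the puzzle check are illustrative assumptions, not code from any deployed contract.

    contract PuzzleBounty {
        address public owner;
        bytes32 public answerHash;  // hash of the accepted solution
        uint public reward;         // current bounty, adjustable by the owner
        bool public claimed;

        function PuzzleBounty(bytes32 hash) {  // old-style constructor
            owner = msg.sender;
            answerHash = hash;
        }

        // The owner may raise or lower the bounty while it is unclaimed
        // (funding the contract with ether is elided for brevity).
        function setReward(uint newReward) {
            if (msg.sender != owner || claimed) throw;
            reward = newReward;
        }

        // A solver is paid whatever `reward` holds in the state at which
        // this transaction is eventually mined, not the value they saw
        // when they decided to submit. An owner watching the pool of
        // pending transactions can front-run a valid submission with
        // setReward(0) and collect the answer for free.
        function submit(string solution) {
            if (claimed || sha3(solution) != answerHash) throw;
            claimed = true;
            if (!msg.sender.send(reward)) throw;
        }
    }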

The second is the timestamp dependence bug, which arises from an imperfect understanding of timekeeping in smart contracts. An asynchronous network such as Ethereum is disconnected from a synchronised global clock. While there may be some assurances that can be made from within the contract regarding the time of its execution, these are rarely as precise as the ones programmers are used to. For example, the contract TheRun uses the current timestamp in order to generate random numbers and award a jackpot based on the result. Similarly, a betting contract may run for a predetermined betting period, after which it considers the bets that have been submitted. The smart contract framework in Ethereum provides the now variable for accessing the time, which is set to the local timestamp of the miner. Unfortunately, this timestamp can be manipulated by a colluding miner. He may adjust the timestamp provided by a few seconds, changing the output of the contract to his benefit. In TheRun, the output of the random number generator can be manipulated, and for the betting contract, the period can be extended or shrunk to affect future transactions.
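The shape of the problem is easy to see in code. Below is an illustrative sketch of a betting contract of this kind; it is not TheRun's actual source, and the names are invented.

    contract TimedLottery {
        uint public bettingEnd;
        address[] public players;

        function TimedLottery(uint duration) {  // old-style constructor
            // `now` is an alias for block.timestamp, supplied by whichever
            // miner seals the block that includes a given transaction.
            bettingEnd = now + duration;
        }

        function bet() {
            // A colluding miner can nudge its timestamp by a few seconds to
            // squeeze a late bet in, or to shut honest bettors out early.
            if (now > bettingEnd) throw;
            players.push(msg.sender);  // stake handling elided
        }

        // "Randomness" seeded by the miner-chosen timestamp, in the spirit
        // of TheRun's jackpot logic: the winner is partly miner-decided.
        function draw() returns (address) {
            if (now <= bettingEnd || players.length == 0) throw;
            return players[uint(sha3(now)) % players.length];
        }
    }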

The third and most common exploit is mishandled exceptions. Mentioned in the Ethereum Security Audit, the attack is also known as the Unchecked Send. In contracts that suffer from this bug, an attacker can elicit unexpected behaviour by calling the contract from a carefully constructed call stack. An example is the GovernMental Ponzi contract, where a potential attack could benefit the owners and drain participating users' funds. The attack has been well documented, and the paper explains how it leads to real attacks on contracts on the blockchain, whether for sabotage or personal gain.
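The core of the bug fits in a few lines. The payout routine below is a hypothetical illustration of the pattern, not GovernMental's actual code.

    contract Payout {
        mapping (address => uint) public owed;

        // Vulnerable: send() does not throw on failure, it merely returns
        // false, for instance when an attacker has pushed the call stack to
        // its 1024-frame limit before triggering the payout, or when the
        // recipient's fallback function runs out of gas. The failure is
        // silently ignored here, so the book-keeping records the winner as
        // paid even though no ether ever moved.
        function payWinner(address winner) internal {
            uint amount = owed[winner];
            owed[winner] = 0;
            winner.send(amount);  // return value ignored
        }
    }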

Solutions

We have attempted to improve the state of affairs with Oyente, a soon-to-be-released tool for automated detection of these exploits, and with EtherLite, an improved formal semantics for Ethereum proposed in the paper. However, the issue runs deeper than identifying and patching individual bugs. What we have found is an underlying failure to understand how smart contracts need to be engineered and executed. Even though this is no longer news to the community in light of recent events, it needs to be emphasised as often as possible, and there needs to be a perpetual search for solutions.

Less abstraction

It is worth noting that the majority of bugs occur due to flaws in contract design, which stem from an obfuscation of the underlying mechanics of the blockchain. The most common language used to write smart contracts on Ethereum is Solidity, which has a syntax similar to JavaScript. Another is Serpent, which resembles Python. These are extremely high-level languages that conceal the underlying operations for ease of use. Making a transaction in Solidity is as simple as a send() call, and a time-relative computation is as simple as if (now > a + 1 days). The programmer is absolved of responsibility for, and often blissfully unaware of, the complications of failed transactions and the implications of timestamp manipulation. In fact, it is far too easy to write Solidity as you would a server-side API, a comparison that by itself should expose some of the pitfalls.

Not so soon.

It’s similar to operating a modern airliner (or a Tesla, now that we can) on autopilot, where a detailed understanding of the aerodynamics involved or the failure points of the control system is unnecessary. Unfortunately, when Code Is Law, autopilots are no longer an option. There needs to be a new language for smart contracts which requires at least a cursory understanding of the underlying system.

Moreover, the intermediaries between source and bytecode need to take on stronger roles. In addition to converting between the two, they should run pre-deployment security checks that build on prior work, ensuring that the first deployment is as secure as it can be. The case for pre-deployment checks may seem obvious, but it bears emphasis because of how hard it is to maintain contract code democratically once it is live. Smart contracts can be written to accept software patches, but such patches are rarely easy or quick, and never both. A patch can be executed by a set of trusted parties, or by collective consensus among the contract's users. The first allows a quick response to an attack, but remains vulnerable to malicious takeover and subverts the decentralisation smart contracts embody. The second is the most democratic, but is far too slow to be effective in countering attacks. Securing contracts before they are deployed seems to be the only option, even at the cost of a slower and more expensive deployment.
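As a purely illustrative sketch of that trade-off, assuming a simple registry-style design in which callers are routed through a stored implementation address, the two patching routes might look like this:

    contract Patchable {
        address public logic;  // implementation that callers are routed to
        address public owner;  // trusted party, if the contract has one

        function Patchable(address initialLogic) {  // old-style constructor
            owner = msg.sender;
            logic = initialLogic;
        }

        // Route 1: a trusted party patches immediately. Fast enough to
        // answer a live attack, but a single point of compromise that
        // undercuts the decentralisation the contract is meant to embody.
        function ownerPatch(address newLogic) {
            if (msg.sender != owner) throw;
            logic = newLogic;
        }

        // Route 2: users vote a patch in. Democratic, but the voting period
        // is far longer than an attacker needs.
        function consensusPatch(address newLogic) {
            if (!majorityApproved(newLogic)) throw;  // tallying elided
            logic = newLogic;
        }

        // Sketch only: left unimplemented, which makes this contract
        // abstract; a real contract would count votes here.
        function majorityApproved(address proposal) internal returns (bool);
    }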

Smart contracts have the potential to build truly peer-to-peer organisations. We hope that the paper is a step towards a sound science of engineering smart contracts, one in which they are secure and carefully considered before they go live.

The full paper is here, by Loi Luu, Chu Duc Hiep, Prateek Saxena and Aquinas Hobor. Thanks to Brian Demsky, Vitalik Buterin, Yaron Welner, Gregory J. Duck, Christian Reitwiessner, Dawn Song, Andrew Miller, Jason Teutsch and Alex Zikai Wen for useful discussions and feedback on an early version of the paper, as well as to the National University of Singapore and Yale-NUS College for supporting this research. This work is supported by the Ministry of Education, Singapore under Grant No. R-252-000-560-112.
