Table of contents
- Introduction
- Always check the return type of your custom borrow function and access modifiers
- Cover Important State Updates with Events
- Standardise Event Names and Params Where Possible
- Keep Params and Properties in the Same Order
- Avoid Panicking, return nil if possible
- If you must panic, be descriptive
- Prefer Dictionaries Over Arrays
- Do Not Iterate Over Dictionaries or Arrays That Can Be Artificially Inflated by Users
- Avoid relying on .length in your contracts
- Avoid iterating over data sets
- Check Capabilities Before Trying to Access Them!
- Calculate Computation Costs Before Going Live
- Link Resources with All the Correct Restrictions
- Avoid Emitting Events in Structs
- Run Tests Immediately After Installing a New CLI Version
- Conclusion
Hi! My name is Phill and I am Blockchain Tech Lead at The Fabricant and a Community Rep for Flow. You can find me on:
Discord: Phill#1854
Email: fullstackpho@protonmail.com
Github: github.com/ph0ph0
Twitter: twitter.com/fullstackpho
Introduction
In Part 1 of this 2-part series, I discussed some general blockchain tips and tricks for Flow. In this article, the pointers will be mainly focussed on Cadence, the programming language of the Flow blockchain, but will also include some advice that extends to other languages.
We have a lot to get through, so without further ado, let's get to it!
Always check the return type of your custom borrow function and access modifiers
If you have written your own NFT smart contract on Flow, you will be familiar with the NFT standard. The standard contains a CollectionPublic interface which exposes pub fun borrowNFT(id: UInt64): &NFT.
Many developers create their own public interface that exposes their custom borrow function. Oftentimes, their custom borrow function will return a reference to the NFT that is not restricted by an interface. For example, if their NFT contract was called CatNFT, then the borrow function may look like this:
pub fun borrowCatNFT(id: UInt64): &CatNFT.NFT?
Developers do this to provide access to additional fields and functions that are not available in the NFT standard.
This may present an unintended vulnerability... Imagine our CatNFT looks like this (note that the example does not implement the NFT standard for simplicity):
pub contract CatNFT {
// ...contract code...
pub resource NFT {
pub let id: UInt64
pub let name: String
pub var age: UInt64
init() {
self.id = 1
self.name = "name"
self.age = 1
}
pub fun getCatNameAndAge(): String {
return "Hello my name is "
.concat(self.name)
.concat(" and I am ")
.concat(self.age.toString())
.concat(" years old")
}
pub fun incrementAge() {
self.age = self.age + 1
}
}
// ...contract code...
}
Elsewhere in the contract file, there is also a Collection resource implementation and a custom CatCollectionPublic interface that is used when exposing the Collection resource publicly by linking the resource to a public path:
pub resource interface CatCollectionPublic {
pub fun borrowCatNFT(id: UInt64): &CatNFT.NFT?
}
pub resource Collection: CatCollectionPublic {
//...Collection code...
}
The vulnerability here lies in the fact that the author of the contract probably did not intend for everyone to be able to increment the CatNFT's age - only the owner of the CatNFT when it was the NFT's birthday!
With the configuration above, a malicious actor can borrow a reference to the Collection resource (&CatNFT.Collection{CatNFT.CatCollectionPublic}), and then use this to borrow a reference to the CatNFT and call the incrementAge() function. Here is a playground demonstrating this: https://play.flow.com/2e664a79-f3e9-451b-a1d8-2f510c6af812. So how do we protect against this? There are a few options.
The first solution is to return a scoped reference to the CatNFT resource. We can create an interface for the NFT in the CatNFT contract like so:
pub resource interface CatNFTPublic {
pub fun getCatNameAndAge(): String
}
And then update our CatNFT resource like so:
pub resource NFT: CatNFTPublic {
...same as before...
}
And then update our custom borrow function in both the Collection resource and the CatCollectionPublic interface to be this:
pub fun borrowCatNFT(id: UInt64): &CatNFT.NFT{CatNFTPublic}?
Now when a user calls our borrowCatNFT function, they will only be able to access the getCatNameAndAge function, and will no longer be able to increment the age of our cat maliciously.
The other option is to change the access modifier on the incrementAge() function to something more restrictive.
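For example, assuming it is acceptable that only code inside the CatNFT contract can change the age, the function could be declared like this:
// inside the NFT resource - only code within the CatNFT contract can call this now
access(contract) fun incrementAge() {
    self.age = self.age + 1
}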
Either way, the message here is clear: Always check that returned references are suitably restricted, and appropriate access modifiers are chosen for fields and functions.
We must always be careful when we expose access to our resources, as these areas in our code are always the first place a hacker will look for vulnerabilities. Remember that as Cadence developers, we must also write scripts and transactions, which adds additional attack surface. Always review your transactions with the same level of scrutiny as your contracts. Here's an article that I wrote on a transaction that exposed a security vulnerability:
https://fullstackpho.com/matrix-market-vulnerability-on-flow-blockchain
Cover Important State Updates with Events
Events are used to communicate state change on the blockchain with off-chain applications. Events are an important tool for dApps, as they allow applications to respond to actions taken by users on the blockchain.
For example, events are employed on Flow to populate backend tables that are used to represent marketplaces. These tables contain records of listings that are displayed to the user on the frontend, facilitating the buying and selling of NFTs. In other words, events are a key information source for dApps and are useful as they allow developers to build applications on top of your smart contracts - possibly even applications that you hadn't predicted that your contracts would be used for!
Events are available as part of the block and are therefore stored perpetually on-chain, so historical events can be accessed given the right tooling. Flowgraph is one such tool that allows users to query the blockchain for archival data.
When deciding where to emit events in your smart contracts, a good rule of thumb is to emit an event whenever a state change occurs. You should emit your events immediately after the state change takes place, and they should be named using the past tense. For example:
pub event TokensBurned(address: Address, value: UInt64)
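To make this concrete, here is a minimal sketch of emitting such an event immediately after the state change it describes (the ExampleToken contract, its balances ledger, and its burn function are all hypothetical):
pub contract ExampleToken {

    pub event TokensBurned(address: Address, value: UInt64)

    // hypothetical ledger of balances
    pub var balances: {Address: UInt64}

    access(account) fun burn(from: Address, value: UInt64) {
        let current = self.balances[from] ?? panic("No balance recorded for address: ".concat(from.toString()))
        // the state change...
        self.balances[from] = current - value
        // ...followed immediately by a past-tense event
        emit TokensBurned(address: from, value: value)
    }

    init() {
        self.balances = {}
    }
}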
Standardise Event Names and Params Where Possible
As mentioned above, events are a crucial component in the architecture of dApps. A single dApp will likely consist of multiple smart contracts, which means that there could be many possible events that could be emitted across the contracts.
Event indexers 'listen' for events emitted from the smart contract. An event indexing service, such as Graffle, is set to listen to specific events from smart contracts using the event identifier. Any time an event with this identifier is emitted, the event indexing service collects it and passes it on to the connected backend infrastructure. This part of the dApp then decides what to do with this information - it may update a table, or it could invoke an admin transaction in the backend to run some more code in the smart contract.
It's not uncommon for dApps to consist of multiple, similar smart contracts. For example, an NFT company may offer multiple types of NFTs - CatNFT, DogNFT, BirdNFT. Each of these NFTs is managed by its own dedicated smart contract. It is also incredibly likely that similar events from each smart contract will be handled by the same infrastructure components in the backend. For example, imagine a backend table needed to be updated each time a CatNFT, DogNFT or BirdNFT was minted.
Given this information, it's good practice to use the same name for events that are emitted during comparable state updates in similar smart contracts.
Using our previous example of minting an NFT, the backend infrastructure is much more reusable if the event associated with minting an NFT has the same signature (name and parameters).
This allows all events with this signature to be handled similarly, and so each time a new NFT smart contract is added, the backend infrastructure doesn't need to be updated to handle this new event.
To expand on this further, we must understand how event IDs are structured on Flow:
A.ContractAddress.ContractName.EventName
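For example, a hypothetical NftMinted event emitted by a CatNFT contract deployed to the default emulator service account would have the ID:
A.f8d6e0586b0a20c7.CatNFT.NftMinted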
Since the event that represents a particular state change has the same EventName across contracts and will contain the same parameters, the backend infra will simply need to detect when an event with this EventName is passed to it and handle it accordingly!
For example, the "NFT minted" events emitted by three different contracts below are not ideal:
CatNFT contract: CatNftMinted(id: UInt64, name: String, age: UInt64)
DogNFT contract: DogNFTCreated(id: UInt64, price: UInt64)
BirdNFT contract: BirdNftPurchased(id: UInt64, location: String)
They are cumbersome to work with and inconsistent. This would make handling them in the backend less elegant. Instead, consider the events below:
CatNFT contract: NftMinted(id: UInt64, uuid: UInt64, type: Type, name: String)
DogNFT contract: NftMinted(id: UInt64, uuid: UInt64, type: Type, name: String)
BirdNFT contract: NftMinted(id: UInt64, uuid: UInt64, type: Type, name: String)
You can see that in the second set of examples, the signature is the same, and the UUID and Type are included in the event. This additional information provides a lever to invoke other functionality in the backend that might be required for specific contracts.
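As a rough sketch of how this might look in each contract (the minter below is hypothetical and heavily simplified; a real contract would follow the NFT standard):
pub contract CatNFT {

    // identical signature in DogNFT and BirdNFT
    pub event NftMinted(id: UInt64, uuid: UInt64, type: Type, name: String)

    pub resource NFT {
        pub let id: UInt64
        pub let name: String
        init(id: UInt64, name: String) {
            self.id = id
            self.name = name
        }
    }

    // in a real contract, minting would be further restricted
    access(account) fun mint(id: UInt64, name: String): @NFT {
        let nft <- create NFT(id: id, name: name)
        // the built-in uuid and the runtime type identify the specific NFT and contract
        emit NftMinted(id: nft.id, uuid: nft.uuid, type: nft.getType(), name: name)
        return <- nft
    }
}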
Keep Params and Properties in the Same Order
One key trait of a great smart contract is that it should be easy to read. It should never be difficult to work out the intent of the logic.
There are many ways that you can make your contracts easier to read, but one really simple approach that you can use is to ensure that properties and parameters occur in the same order in resources and functions.
For example, consider the following resource:
pub contract CatNFT {
pub resource NFT {
pub let id: UInt64
pub let name: String
pub let age: UInt64
pub let location: String
init(
location: String,
name: String,
id: UInt64,
age: UInt64
) {
self.age = age
self.location = location
self.id = id
self.name = name
}
}
pub fun createCatNFT(
name: String,
id: UInt64,
location: String,
age: UInt64
): @NFT {
return <- create NFT(
location: location,
name: name,
id: id,
age: age
)
}
}
You can see in the example above that the order of the properties in the NFT resource does not match the order of the parameters in the init function or the createCatNFT function. This is a simple example, but reordering your props/params in resources and functions can really help with the readability of your code!
Check out the ordered example below to see how much neater it is:
pub contract CatNFT {
pub resource NFT {
pub let id: UInt64
pub let name: String
pub let age: UInt64
pub let location: String
init(
id: UInt64,
name: String,
age: UInt64,
location: String
) {
self.id = id
self.name = name
self.age = age
self.location = location
}
}
pub fun createCatNFT(
id: UInt64,
name: String,
age: UInt64,
location: String
): @NFT {
return <- create NFT(
id: id,
name: name,
age: age,
location: location
)
}
}
If you have a parameter that is used throughout your contract (eg id), it also helps to keep that in the same position in all functions in the contract (ie id is always the first param in a function), irrespective of what that id is associated with.
In addition, you may have many scripts and transactions that take an address as an argument. Again, always be consistent with the order of parameters across scripts and transactions (eg if Address is a param, it is always the first param). This will make running transactions in the CLI and on the frontend much simpler!
The message here is that consistency is important.
Avoid Panicking, return nil if possible
There are particular operations where panicking makes sense, but most of the time returning nil is a better choice. For example, if you have a getter function that returns data that might be used on a frontend, it often makes sense to return nil rather than panic.
For example:
pub let balances: {Address: UInt64}
pub fun getBalance(address: Address): UInt64 {
let balance = self.balances[address] ?? panic("Address does not exist in dictionary")
return balance
}
Can be rewritten as:
pub let balances: {Address: UInt64}
pub fun getBalance(address: Address): UInt64? {
return self.balances[address]
}
The second example returns an optional UInt64, which is preferable to panicking because the caller can simply interpret a nil response as "the address has not been assigned a balance yet".
It is important to understand when panicking should be used. Panicking terminates the transaction and prevents any further code from being executed, so is an important protective measure in some cases, such as during minting.
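For example, panicking is a reasonable guard in a mint function that must never exceed a maximum supply. A hypothetical sketch:
pub contract ScarceNFT {

    pub let maxSupply: UInt64
    pub var totalSupply: UInt64

    pub resource NFT {
        pub let id: UInt64
        init(id: UInt64) {
            self.id = id
        }
    }

    access(account) fun mint(): @NFT {
        if self.totalSupply >= self.maxSupply {
            // aborting here protects the supply invariant; continuing would over-mint
            panic("Minting failed: max supply of ".concat(self.maxSupply.toString()).concat(" has been reached"))
        }
        self.totalSupply = self.totalSupply + 1
        return <- create NFT(id: self.totalSupply)
    }

    init() {
        self.maxSupply = 1000
        self.totalSupply = 0
    }
}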
If you must panic, be descriptive
I prefer descriptive panic messages over vague statements. When you have built a platform with many composable contracts, it is much quicker and easier to debug if your panic message explains why the error might have occurred. Let's say that panicking in the previous example was essential. For reference, here is the function again:
pub let balances: {Address: UInt64}
pub fun getBalance(address: Address): UInt64 {
let balance = self.balances[address] ?? panic("Address does not exist in dictionary")
return balance
}
An improved panic message might be:
pub let balances: {Address: UInt64}
pub fun getBalance(address: Address): UInt64 {
let balance = self.balances[address] ?? panic("Failed to get balance of address: ".concat(address.toString()))
return balance
}
A little bit of extra information can really help when debugging! Again, it also helps to be consistent - if this getter method is used in multiple contracts, then it should produce the same panic message!
You could even take this a step further and work with your frontend and backend teams to come up with an error system, where types of errors are grouped under error codes, allowing the recipient of the error message to respond accordingly. For example, you may replace the previous error message with:
....panic("[Error Code: 1234] Balance Check Error: Failed to get balance of address: ".concat(address.toString()))
Prefer Dictionaries Over Arrays
Arrays certainly have their place: they are useful for small sets that you need to iterate over. However, dictionaries are often the better choice. Accessing a value in a large dictionary is far cheaper than iterating over a large array to find a value.
For example, consider the case where you are keeping a record of addresses that have minted your NFTs. You could keep an array, like so:
pub var addressesMinted: [Address]
The problem here is that if you wanted to determine if a particular address had minted yet, you would have to use the contains(_ element: T): Bool method:
pub fun addressHasMinted(address: Address): Bool {
return self.addressesMinted.contains(address)
}
If the addressesMinted array is large, you may hit the gas limit as contains iterates over the array until it finds a match. Thus, using an array here is very risky and could break your code... A much better approach would be to use a dictionary:
// {mintAddress: nft.uuid}
pub var addressesMinted: {Address: UInt64}
pub fun addressHasMinted(address: Address): Bool {
return self.addressesMinted.containsKey(address)
}
NOTE: The above example would be useful if minting was limited to 1 NFT per address. If users could mint more, a different dictionary type would be better suited.
The containsKey(key: K): Bool method on the dictionary type doesn't iterate over the keys, and so is a lot cheaper to use than the contains(_ element: T): Bool on the array type, reducing the gas of your transactions!
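For instance, if each address could mint several NFTs, a dictionary mapping addresses to arrays of NFT uuids might be a better fit. A hypothetical sketch (the MintTracker contract below is illustrative only):
pub contract MintTracker {

    // {mintAddress: [nft.uuid]}
    pub var addressesMinted: {Address: [UInt64]}

    pub fun addressHasMinted(address: Address): Bool {
        // still a cheap key lookup, no iteration required
        return self.addressesMinted.containsKey(address)
    }

    access(account) fun recordMint(address: Address, uuid: UInt64) {
        var minted = self.addressesMinted[address] ?? []
        minted.append(uuid)
        self.addressesMinted[address] = minted
    }

    init() {
        self.addressesMinted = {}
    }
}
Note that recordMint never iterates over the existing data; it only touches the entry for the address in question.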
Do Not Iterate Over Dictionaries or Arrays That Can Be Artificially Inflated by Users
Following on from the previous tip, avoid iterating over dictionaries or arrays that users can artificially inflate.
If users can add values to your dictionaries or arrays at little or no expense, then a malicious actor may add a huge number of entries to an array or dictionary in your contract. If your contract relies on iterating over this data, this could be very difficult to recover from!
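To illustrate the risk, consider the hypothetical contract below: anyone can grow the visitors array at almost no cost, so the lookup that iterates over it becomes more expensive with every entry and will eventually exceed the computation limit:
pub contract Guestbook {

    // anyone can append to this array at almost no cost
    pub var visitors: [Address]

    pub fun signGuestbook(visitor: Address) {
        self.visitors.append(visitor)
    }

    // risky: cost grows with the user-controlled array
    pub fun hasVisited(visitor: Address): Bool {
        for v in self.visitors {
            if v == visitor {
                return true
            }
        }
        return false
    }

    init() {
        self.visitors = []
    }
}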
Avoid relying on .length in your contracts
.length is a field on the array and dictionary types that returns the number of elements in the set. Whilst the set would indeed have to be large for .length to be prohibitively expensive, it is best to avoid relying on it where possible, particularly for important business logic.
Avoid iterating over data sets
You should only iterate over a data set if you know that the set will always be small.
Iterating can be expensive if many operations are executed, and variable-length sets amplify this risk!
Check Capabilities Before Trying to Access Them!
Scripts
There is a very common pattern in Flow that almost every NFT smart contract employs as it is suggested by the NFT standard. That is, NFTs should be stored in Collection resources.
To access an NFT in a collection, you will need to borrow a reference to it. You may, for example, need to access it to resolve a view (see: github.com/onflow/flow-nft/blob/master/cont..).
In the case of scripts, it's always good practice to check that the capability exists before trying to access it. This is particularly important if you are querying the blockchain in your application to determine the NFTs that the current user owns, for example. If you are accessing an NFT reference via the Collection resource to obtain data, then you should return nil if the Collection that you were attempting to access does not exist. Then your application can decide on the next course of action.
To do this, you can use the Capability method, .check(), which returns a boolean; true if the specified type exists at that path, and false if not.
import ExampleNFT from "../contracts/ExampleNFT.cdc"
import MetadataViews from "../contracts/MetadataViews.cdc"

pub fun main(address: Address, nftId: UInt64): AnyStruct? {
    if getAccount(address).getCapability<&{MetadataViews.ResolverCollection, ExampleNFT.ExampleNFTCollectionPublic}>(ExampleNFT.NFTCollectionPublicPath).check() {
        // safe to force-unwrap here because we have just checked the capability
        let collectionRef = getAccount(address).getCapability<&{MetadataViews.ResolverCollection, ExampleNFT.ExampleNFTCollectionPublic}>(ExampleNFT.NFTCollectionPublicPath).borrow()!
        let ids = collectionRef.getIDs()
        if ids.contains(nftId) {
            let nftViewResolver = collectionRef.borrowViewResolver(id: nftId)
            return nftViewResolver.resolveView(Type<MetadataViews.Display>())
        }
    }
    return nil
}
However, I prefer to use if-let in many cases, as it doesn't require you to force unwrap the collectionRef (below) to access .getIDs(), and is also a bit neater:
import ExampleNFT from "../../../contracts/standard/examples/ExampleNFT.cdc"
import MetadataViews from "../../../contracts/standard/MetadataViews.cdc"
pub fun main(address: Address, nftId: UInt64): AnyStruct? {
let collectionRef = getAccount(address).getCapability<&{MetadataViews.ResolverCollection, ExampleNFT.ExampleNFTCollectionPublic}>(ExampleNFT.NFTCollectionPublicPath).borrow()
if let _collectionRef = collectionRef {
let ids = _collectionRef.getIDs()
if ids.contains(nftId) {
let nftViewResolver = _collectionRef.borrowViewResolver(id: nftId)
return nftViewResolver.resolveView(Type<MetadataViews.Display>())
}
}
return nil
}
Notice in both of the examples above that we have to call .getIDs(). At the time of writing, the .borrowViewResolver() function from MetadataViews doesn't return an optional. If we tried to call it with an nftId that the user didn't own, the script would terminate with an error. By checking that the nftId exists in the user's Collection before calling the function, we avoid this error. This is not ideal, as we have to use .contains(), but it is our only option until MetadataViews v2 is released, which will likely include an optional return type for this function.
Transactions
Whenever interacting with a capability in a transaction, it is useful to check that the capability exists. For example, when a user purchases an NFT from the marketplace, whether using the NFTStorefront contract or your custom marketplace contract, the NFT that is purchased will often be deposited into a collection during the transaction via a capability.
If the user has never owned one of your NFTs, it is likely that they do not have this capability in place. As such, if you don't check that the capability exists and create it for them, then the purchase transaction will fail. This is not what you want when users are buying NFTs on your marketplace.
To get around this, you can create the Collection resource for them and link it publicly in their account before proceeding with the rest of the transaction:
transaction(sellerAddress: Address, listingID: String, amount: UFix64) {
...reference constants...
prepare(acct: AuthAccount) {
if !acct.getCapability<&{ExampleNFT.ExampleNFTCollectionPublic}>(ExampleNFT.CollectionPublicPath).check() {
if acct.type(at: ExampleNFT.CollectionStoragePath) == nil {
let collection <- ExampleNFT.createEmptyCollection() as! @ExampleNFT.Collection
acct.save(<-collection, to: ExampleNFT.CollectionStoragePath)
}
acct.unlink(ExampleNFT.CollectionPublicPath)
acct.link<&ExampleNFT.Collection{NonFungibleToken.CollectionPublic, NonFungibleToken.Receiver, ExampleNFT.ExampleNFTCollectionPublic, MetadataViews.ResolverCollection}>(ExampleNFT.CollectionPublicPath, target: ExampleNFT.CollectionStoragePath)
}
...rest of prepare block...
}
...rest of transaction...
}
In the code above, you can see that we first check that the capability exists:
if !acct.getCapability<&{ExampleNFT.ExampleNFTCollectionPublic}>(ExampleNFT.CollectionPublicPath).check()
If it doesn't, we then check whether a Collection resource exists at the collection storage path:
if acct.type(at: ExampleNFT.CollectionStoragePath) == nil
And if nothing exists at the storage path, we create a new Collection and save it to the user's account.
Finally, we unlink whatever is at the public path and link our newly created Collection resource using all of the relevant interfaces.
Calculate Computation Costs Before Going Live
The gas limit for transactions is 9999 units and for scripts it is 100,000 units.
When starting the emulator, you can run it with transaction fees on:
flow emulator -v --transaction-fees true
You can then use the computationUsed output in the emulator to get an understanding of how many computational units your transactions and scripts are consuming. Run transactions/scripts at the upper and lower bounds of the state that you expect during the lifespan of your contract, particularly the important transactions/scripts, and make a note of the output. This will give you added confidence during launch. Flow does not currently employ a surge factor when calculating the computational cost, so the values on the emulator should be comparable to those on mainnet and testnet.
In the future, I will write an article on how to simulate thousands of unique accounts minting against your contract in the emulator, so keep an eye out for that article!
Link Resources with All the Correct Restrictions
Linking resources on a user's account allows you to expose functionality in a controlled manner. When you are linking resources, it is important to think about why you are exposing this functionality at this path, and how it may be used.
A good example of this is the linking of Collection resources from the NFT Standard contract. Many developers new to Cadence would only expose their custom CollectionPublic interface at the Collection public path, instead of all the appropriate interfaces. For example:
acct.link<&ExampleNFT.Collection{ExampleNFT.ExampleNFTCollectionPublic}>(ExampleNFT.ExampleNFTCollectionPublicPath, target: ExampleNFT.ExampleNFTCollectionStoragePath)
The above code is missing some other restrictions which would be expected at this public path, namely the NonFungibleToken.CollectionPublic, NonFungibleToken.Receiver, and MetadataViews.ResolverCollection interfaces. These are included in the code below:
acct.link<&ExampleNFT.Collection{NonFungibleToken.CollectionPublic, NonFungibleToken.Receiver, ExampleNFT.ExampleNFTCollectionPublic, MetadataViews.ResolverCollection}>(ExampleNFT.ExampleNFTCollectionPublicPath, target: ExampleNFT.ExampleNFTCollectionStoragePath)
By including all of these restrictions on the Collection resource at the CollectionPublicPath, you ensure that the functionalities of the Collection resource that users expect are present at this public path.
Avoid Emitting Events in Structs
As mentioned previously, events are used to inform off-chain applications about the state of your smart contract. Therefore we should carefully consider not only the information that our events contain, but also when they are emitted.
When you mint an NFT in your smart contract, you may emit an event in the init block of the NFT resource. This is perfectly fine because each event will be associated with one NFT and a resource cannot be created outside of its containing contract (eg in a transaction). Resources also cannot be copied.
On the other hand, a structure can be created outside of its containing contract and it can be copied. Therefore, we should avoid emitting events inside structures, because these properties make them easy tools for spamming the blockchain with events. For example:
pub struct S {
...properties of the struct...
pub fun updateStruct() {
...update struct properties...
emit StructPropertiesUpdated(...event data...)
}
init() {
emit StructureCreated(...event data...)
}
}
Using the struct S defined above, a malicious actor could create many instances of your struct and therefore emit StructureCreated() repeatedly. They could also call updateStruct() over and over again, which would emit StructPropertiesUpdated() many times. This would allow them to spam the blockchain and any off-chain applications that are listening to these events. This could create an issue for your off-chain infrastructure if you have not calibrated it to handle such occurrences; it could pollute your databases with meaningless data, and possibly even cause you to rack up a huge bill if your infrastructure does not have properly configured limits. This is known as a "griefing attack". To prevent this, it is often best to only emit events in resources or in global functions with restricted access.
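To make the risk concrete, here is a hypothetical transaction (assuming S is defined in a deployed contract, called SomeContract here, and that its initializer and updateStruct take no arguments) that anyone could run to flood your event indexers:
import SomeContract from 0x01

transaction {
    execute {
        var i = 0
        while i < 1000 {
            // each construction emits StructureCreated
            let s = SomeContract.S()
            // each call emits StructPropertiesUpdated
            s.updateStruct()
            i = i + 1
        }
    }
}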
Run Tests Immediately After Installing a New CLI Version
If you are using flow-js-testing and you install a new Flow CLI version, always run your tests. At the time of writing, unfortunately, new versions of Flow CLI rarely work with flow-js-testing (however, Max is taking ownership of the project so hopefully this will change! https://github.com/onflow/developer-grants/issues/138#issuecomment-1441019439).
Conclusion
Writing smart contracts requires a meticulous nature as there is often a lot at stake, and it is very difficult to recover from a disaster.
It helps to be very familiar with best practices and the documentation associated with Cadence, and we should apply this knowledge alongside any software development best practices that we have picked up along the way.
Therefore, we must take certain precautions to avoid such scenarios. As demonstrated above, we should be aware of the data flow in our dApps, and consider any "griefing attacks" that poor code may give rise to.
If you made it this far - well done and thank you for reading my article, I hope you enjoyed it and were able to take away something useful!
Happy coding!