These benchmarks aim to test the Incubed version for stability and performance on the server. As a result, we can gauge the resources needed to serve many clients.

Setup and Tools

  • JMeter is used to send parallel requests to the server
  • Custom Python scripts are used to generate lists of transactions and randomize them (used to create the test plan)
  • Link for making JMeter tests online without setting up the server:

JMeter can be downloaded from:

Install JMeter on Mac OS With HomeBrew

  1. Open a Mac Terminal where we will be running all the commands

  2. First, check whether HomeBrew is already installed on your Mac by running either brew help or brew -v

  3. If HomeBrew is not installed, run the following command to install HomeBrew on Mac:

    ruby -e "$(curl -fsSL"

    Once HomeBrew is installed, we can continue to install JMeter.
  4. To install JMeter without the extra plugins, run the following command:

    brew install jmeter
  5. To install JMeter with all the extra plugins, run the following command:

    brew install jmeter --with-plugins
  6. Finally, verify the installation by executing jmeter -v

  7. Run JMeter using the jmeter command, which should load the JMeter GUI

JMeter on an EC2 instance, CLI only (testing pending):

  1. Login to AWS and navigate to the EC2 instance page

  2. Create a new instance and choose an Ubuntu AMI

  3. Provision the AWS instance with the needed information, enable CloudWatch monitoring

  4. Configure the instance to allow all outgoing traffic, and fine-tune the Security Group rules to suit your needs

  5. Save the SSH key and use it to log in to the EC2 instance

  6. Install Java:

    sudo add-apt-repository ppa:linuxuprising/java
    sudo apt-get update
    sudo apt-get install oracle-java11-installer
  7. Install JMeter using:

    sudo apt-get install jmeter
  8. Get the JMeter Plugins:

  9. Move the unzipped jar files to the install location (repeat for each downloaded plugin archive; <plugin_zip> is a placeholder for the archive name):

    sudo unzip <plugin_zip> -d /usr/share/jmeter/
  10. Copy the JMX file to the EC2 instance using:

(On host computer)

    scp -i <path_to_key> <path_to_local_file> <user>@<server_url>:<path_on_server>
  11. Run JMeter without the GUI:

    jmeter -n -t <path_to_jmx> -l <path_to_output_jtl>
  12. Copy the JTL file back to the host computer and view it using the JMeter GUI
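Once the JTL file is back on the host, it can also be inspected without the GUI. The sketch below is an illustrative helper (not part of the in3-tests tooling) that assumes the default CSV JTL format, whose header includes elapsed and label columns, and computes the average response time per sampler label:

```python
# Sketch: average response time per sampler label from a JMeter CSV .jtl file.
# Assumes the default CSV output format (header row with "elapsed" and "label").
import csv
from collections import defaultdict

def average_elapsed_by_label(jtl_path):
    totals = defaultdict(lambda: [0, 0])  # label -> [sum of elapsed ms, count]
    with open(jtl_path, newline="") as f:
        for row in csv.DictReader(f):
            acc = totals[row["label"]]
            acc[0] += int(row["elapsed"])
            acc[1] += 1
    return {label: total / count for label, (total, count) in totals.items()}
```

Calling average_elapsed_by_label("results.jtl") (file name illustrative) yields a dict such as {"eth_call": 449.0, ...}, which can be compared directly against the baseline table further below.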

Python script to create test plan:

  1. Navigate to the txGenerator folder in the in3-tests repo.
  2. Run the file, specifying the start block (-s), the end block (-e), and the number of blocks to choose from this range (-n). The script will randomly choose three transactions per block.
  3. The transactions chosen are sent through a tumble function, resulting in a randomized list of transactions from random blocks. This should be a realistic scenario to test with, and prevents too many concurrent cache hits.
  4. Import the generated CSV file into the loaded test plan on JMeter.
  5. Refer to existing test plans for information on how to read transactions from CSV files and to see how it can be integrated into the requests.
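The selection and tumble steps described above can be sketched as follows. This is an illustrative reimplementation, not the actual txGenerator code: the transaction ids here are synthetic placeholders, whereas the real script fetches them from a node.

```python
# Illustrative sketch of the test-plan generation described above:
# pick n blocks from [start, end], take 3 transactions per block,
# shuffle ("tumble") the combined list, and write it out as CSV rows.
import csv
import random

def build_test_plan(start_block, end_block, n_blocks, txs_per_block=3, seed=None):
    rng = random.Random(seed)
    blocks = rng.sample(range(start_block, end_block + 1), n_blocks)
    txs = []
    for block in blocks:
        # Placeholder transaction ids; the real script queries them from a node.
        txs.extend((block, f"0xtx{block}_{i}") for i in range(txs_per_block))
    rng.shuffle(txs)  # the "tumble" step: randomize across blocks
    return txs

def write_csv(rows, path):
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(rows)
```

Shuffling across blocks is what prevents consecutive requests from hitting the same block, so the server cache is exercised realistically rather than serving repeated hits.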


Running the benchmark:

  • When the Incubed benchmark is run on a new server, create a baseline before applying any changes.
  • Run the same benchmark test with the new codebase and compare against the baseline to check for performance gains.
  • The tests can be modified to change the number of users and the duration of the test. For a stress test, choose 200 users and a test duration of 500 seconds or more.
  • When running on an EC2 instance, up to 500 users can be simulated without issues. Running in GUI mode reduces this number.
  • A good approach is to ramp up the user count slowly: start with 10 users for 120 seconds to verify basic stability, then work your way up to 200 users and longer durations.
  • Parity is often the bottleneck; you can confirm this with the script in the scripts directory of the in3-tests repo, which helps show what optimizations are needed.
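The ramp-up strategy above can be expressed as a simple staged schedule. The stage values below are illustrative, not prescribed by the benchmark:

```python
# Illustrative ramp-up schedule for the strategy described above:
# start small to confirm stability, then grow users toward a sustained stress test.
def ramp_schedule(start_users=10, max_users=200, factor=2,
                  base_duration=120, max_duration=500):
    """Double the user count each stage until max_users, then run the long test."""
    stages = []
    users = start_users
    while users < max_users:
        stages.append((users, base_duration))
        users *= factor
    stages.append((max_users, max_duration))  # final sustained stress test
    return stages
```

With the defaults this yields (10, 120), (20, 120), (40, 120), (80, 120), (160, 120) and finally (200, 500); each pair can be plugged into the JMeter test plan's user count and duration before the next run.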


Baseline results:

  • The baseline test was done with our existing server running multiple Docker containers. It is not indicative of a perfect server setup, but it can be used to benchmark upgrades to our codebase.
  • The baseline for our current system is given below. This system has multithreading enabled and has been tested with eth_call requests included in the test plan.
Users/duration | Requests | tps | getBlockByHash (ms) | getBlockByNumber (ms) | getTransactionByHash (ms) | getTransactionReceipt (ms) | eth_call (ms) | eth_getStorageAt (ms) | Notes
20/120s        | 4800     | 40  | 580  | 419  | 521   | 923   | 449  | 206  |
40/120s        | 5705     | 47  | 1020 | 708  | 902   | 1508  | 816  | 442  |
80/120s        | 7970     | 66  | 1105 | 790  | 2451  | 3197  | 984  | 452  |
100/120s       | 6911     | 57  | 1505 | 1379 | 2501  | 4310  | 1486 | 866  |
110/120s       | 6000     | 50  | 1789 | 1646 | 4204  | 5662  | 1811 | 1007 |
120/500s       | 32000    | 65  | 1331 | 1184 | 4600  | 5314  | 1815 | 1607 |
140/500s       | 31000    | 62  | 1666 | 1425 | 5207  | 6722  | 1760 | 941  |
160/500s       | 33000    | 65  | 1949 | 1615 | 6269  | 7604  | 1900 | 930  | In3 -> 400 ms, RPC -> 2081 ms
200/500s       | 34000    | 70  | 1270 | 1031 | 12500 | 14349 | 1251 | 716  | At higher loads the RPC delay adds up and becomes the bottleneck. Able to handle 200 users under sustained load.
  • More benchmarks and their results can be found in the in3-tests repo.
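As a quick consistency check on the table, the tps column should be close to the number of requests divided by the test duration in seconds. The snippet below verifies this for a few rows taken from the baseline table:

```python
# Sanity check: tps should roughly equal requests / duration (seconds),
# using rows taken from the baseline table above.
rows = [
    # (users, duration_s, requests, reported_tps)
    (20, 120, 4800, 40),
    (120, 500, 32000, 65),
    (200, 500, 34000, 70),
]
for users, duration, requests, tps in rows:
    computed = requests / duration
    # Allow some slack: the reported tps is rounded and averaged by JMeter.
    assert abs(computed - tps) <= 5, (users, computed, tps)
```

For example, 4800 requests over 120 s gives exactly 40 tps, while 32000 over 500 s gives 64, close to the reported 65.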