Performance & Scale Testing - 2019

Data Set

We used the data set provided by Atlassian in their "DC App Performance Test Kit" and extended it for our own tests. The standard data set contains 500 projects; we selected 100 of them at random and created extra data in these projects as shown below. Our tests only use issues from these 100 projects. Note that this is close to a worst-case scenario: in practice only some larger projects will use the features of this add-on, and for actions like view/edit issue or view board the add-on is not exercised at all, whereas in our tests the add-on is exercised on every request.

  • 2,000 additional components (20 per project)

  • 10,000 additional versions (100 per project)

  • 40,000 component versions (400 per project)

  • 80,000 issues with a component version set on them (800 per project)

  • 2,000 bundles (20 per project)

  • 3 custom fields: 2 calculated ("fixed in bundle" and "affects bundle") and 1 single select ("Manual Bundle")

  • 10,000 issues with the "Manual Bundle" custom field set. The 2 calculated fields potentially have a value for every issue with component versions (80,000).

  • ~8,000 version hierarchy nodes (60+ per project)

  • We created 15 subcomponents in each selected project and added 4 real components to each (a 60-item subcomponent tree per project)

  • We also created a subproject hierarchy using 15 virtual projects (folders) with 30 real projects inside each of them, 450 items in the tree.
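The per-project components and versions above can be generated against Jira's standard REST resources (`/rest/api/2/component` and `/rest/api/2/version`). The sketch below only builds the POST payloads and totals; the helper names and the `perf-*` naming scheme are our own placeholders, not part of the kit:

```python
# Sketch of the extra data-set generation, assuming Jira's standard REST API.
# The payload dicts below are POST bodies for /rest/api/2/component and
# /rest/api/2/version; names are illustrative placeholders.
COMPONENTS_PER_PROJECT = 20
VERSIONS_PER_PROJECT = 100

def component_payloads(project_key, n=COMPONENTS_PER_PROJECT):
    """POST bodies for /rest/api/2/component for one project."""
    return [{"name": f"perf-component-{i}", "project": project_key} for i in range(n)]

def version_payloads(project_key, n=VERSIONS_PER_PROJECT):
    """POST bodies for /rest/api/2/version for one project."""
    return [{"name": f"perf-version-{i}", "project": project_key} for i in range(n)]

def total_created(num_projects=100):
    """Totals across the 100 selected projects (matches the bullets above)."""
    return (num_projects * COMPONENTS_PER_PROJECT, num_projects * VERSIONS_PER_PROJECT)
```

With 100 selected projects this reproduces the totals listed above: 2,000 components and 10,000 versions.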

Test Procedures

We used the "DC App Performance Test Kit" on Amazon AWS for the final execution. We forked the project and made some modifications to the tests and data sets; the repository is available on GitHub and has two branches, part1 and part2. We followed the procedures provided by Atlassian.

Test Procedure

We extended the provided JMeter tests. For some of the built-in actions our app performs additional REST API calls to fetch the app configuration, and we added these calls to the corresponding JMeter extension points. We did not use Selenium because nearly all of our UI is implemented in JavaScript making REST API requests; the UI overhead falls on the client browsers and does not cause load on the server.
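Conceptually, each affected extension point just gains one or two extra HTTP samplers. A sketch of the mapping (in Python rather than JMX, and with placeholder endpoint paths; the real resource names live in our extensions.jmx):

```python
# Extra app requests attached to built-in actions, mirroring our extensions.jmx.
# Endpoint paths are illustrative placeholders, not the real resource names.
EXTRA_CALLS = {
    "view_project_summary": ["/rest/app/1.0/settings"],
    "view_issue":           ["/rest/app/1.0/settings", "/rest/app/1.0/component-versions"],
    "edit_issue":           ["/rest/app/1.0/settings", "/rest/app/1.0/component-versions"],
    "view_board_backlog":   ["/rest/app/1.0/settings"],
}

def requests_for(action):
    """Endpoints the app hits when the given built-in action runs."""
    return EXTRA_CALLS.get(action, [])
```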

Part 1

We performed 4 runs, 2 without the app and 2 with it. The results are consistent and show no significant overhead introduced by our app when it is not exercised; the differences between executions are within normal variation. The Selenium tests should be hitting our API to fetch app-specific settings, because they execute REST API calls through a headless Chromium browser, but as the results show this does not introduce a measurable delay. This is also confirmed in Part 2, Section 1.

Workload:

We used the default values of the DC performance test framework: 200 for concurrency and 54,500 for "total actions per hour" as the JMeter parameters.
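For reference, these two default parameters imply the following pacing:

```python
# Implied request rates from the default toolkit workload parameters.
concurrency = 200              # JMeter threads
actions_per_hour = 54_500      # "total actions per hour"

overall_rate = actions_per_hour / 3600       # ~15.1 actions/second across all threads
per_thread = actions_per_hour / concurrency  # 272.5 actions/hour per thread
pause_s = 3600 / per_thread                  # ~13.2 s between actions on one thread
```

So each simulated user issues an action roughly every 13 seconds, which is why the cluster stays lightly loaded.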

We used the following cluster configuration to run our tests:

  • Jira nodes: c5.4xlarge instances. CPU usage was at most 18%.

  • DB: db.m4.large instance running PostgreSQL 9.6. During our tests database usage stayed around 10%.

  • Test runner: t2.xlarge instance. By using an AWS instance to simulate the clients we minimize nondeterministic behavior caused by our own network. During the tests CPU usage was between 38% and 48%, which shows that the test runner is not a bottleneck.

| Action | with app run 1 | without app run 1 | with app run 2 | without app run 2 |
| --- | --- | --- | --- | --- |
| jmeter_login_and_view_dashboard | 351 | 442 | 374 | 351 |
| jmeter_view_issue | 160 | 163 | 150 | 144 |
| jmeter_search_jql | 529 | 579 | 538 | 484 |
| jmeter_view_dashboard | 308 | 309 | 293 | 278 |
| jmeter_view_backlog | 400 | 386 | 359 | 354 |
| jmeter_open_quick_create | 73 | 82 | 76 | 67 |
| jmeter_create_issue | 191 | 191 | 161 | 166 |
| jmeter_view_project_summary | 256 | 263 | 247 | 234 |
| jmeter_open_editor | 93 | 93 | 73 | 80 |
| jmeter_save_edit | 249 | 246 | 228 | 224 |
| jmeter_browse_projects | 49 | 47 | 47 | 53 |
| jmeter_view_kanban_board | 893 | 1007 | 939 | 873 |
| jmeter_view_scrum_board | 166 | 144 | 142 | 152 |
| jmeter_open_comment | 86 | 88 | 87 | 76 |
| jmeter_save_comment | 145 | 197 | 136 | 167 |
| jmeter_browse_boards | 88 | 91 | 61 | 83 |
| selenium_login:open_login_page | 238 | 209 | 207 | 196 |
| selenium_login:login_and_view_dashboard | 2122 | 2120 | 953 | 2109 |
| selenium_login | 2492 | 2434 | 1288 | 2426 |
| selenium_browse_project | 452 | 462 | 473 | 451 |
| selenium_browse_board | 813 | 699 | 782 | 784 |
| selenium_create_issue:open_quick_create | 1432 | 1390 | 1466 | 1395 |
| selenium_create_issue:submit_issue_form | 880 | 832 | 885 | 835 |
| selenium_create_issue:fill_and_submit_issue_form | 6259 | 5881 | 6179 | 6071 |
| selenium_create_issue | 7906 | 7488 | 7859 | 7718 |
| selenium_edit_issue:open_edit_issue_form | 734 | 838 | 874 | 861 |
| selenium_edit_issue:save_edit_issue_form | 1649 | 1547 | 1529 | 1593 |
| selenium_edit_issue | 3636 | 3512 | 3550 | 3505 |
| selenium_save_comment:open_comment_form | 518 | 555 | 543 | 517 |
| selenium_save_comment:submit_form | 1449 | 1530 | 1563 | 1519 |
| selenium_save_comment | 2247 | 2398 | 2424 | 2309 |
| selenium_search_jql | 2153 | 2163 | 2019 | 2062 |
| selenium_view_scrum_board_backlog | 993 | 1144 | 1209 | 2044 |
| selenium_view_scrum_board | 949 | 989 | 2035 | 1988 |
| selenium_view_kanban_board | 830 | 895 | 911 | 896 |
| selenium_view_dashboard | 874 | 967 | 667 | 765 |
| selenium_view_issue | 1148 | 1191 | 1152 | 1169 |
| selenium_project_summary | 975 | 953 | 947 | 1070 |
| selenium_log_out | 812 | 957 | 803 | 853 |
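As a quick sanity check on the table above (values are response times as reported by the toolkit; we assume milliseconds), averaging the two runs per configuration for the jmeter_view_issue row shows how small the with/without difference is:

```python
# jmeter_view_issue means across the two runs, from the table above
with_app = (160 + 150) / 2       # 155.0 ms
without_app = (163 + 144) / 2    # 153.5 ms

delta_ms = with_app - without_app            # 1.5 ms slower with the app
overhead_pct = 100 * delta_ms / without_app  # ~1.0 %, within run-to-run noise
```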

Part 2

Section 1:

The app loads its configuration on the following pages, so we added a REST API request for loading the app configuration to the extensions.jmx file for the following entry points:

  • View Project Summary

  • View Issue

  • Edit Issue

  • View Boards & Backlogs

In addition to the configuration, the app also loads "component versions" on the following entry points, so we added extra REST API calls for them. "Component versions" is always loaded. When creating or editing an issue, the user may use the "subcomponent picker" to update the "Component/s" field; we assume the user edits the "Component/s" field 10% of the time, so we also load the subcomponent tree of the project with 10% probability. When creating an issue, the user may additionally use the "subproject picker" to select the "Project"; we assume the user changes the "Project" in the create issue dialog 20% of the time.

  • View Issue

  • Create Issue

  • Edit Issue
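The 10% and 20% picker probabilities above are modeled as independent random draws per action. A minimal sketch (the request names are illustrative; the real samplers live in extensions.jmx):

```python
import random

def create_issue_extra_loads(rng):
    """Optional tree loads for one Create Issue action, per the assumptions above."""
    loads = []
    if rng.random() < 0.10:   # user opens the subcomponent picker (10% of the time)
        loads.append("subcomponent_tree")
    if rng.random() < 0.20:   # user switches Project via the subproject picker (20%)
        loads.append("subproject_tree")
    return loads
```

Over a long run this converges to the stated rates, so roughly 1 in 10 create/edit actions loads the subcomponent tree and 1 in 5 create actions loads the subproject tree.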

Workload:

We used the default values of the DC performance test framework: 200 for concurrency and 54,500 for "total actions per hour" as the JMeter parameters.

We used the following cluster configuration to run our tests:

  • Jira nodes: c5.2xlarge instances (34 ECU). With the smaller m4.large instance (6.5 ECU), a lot of requests fail due to timeouts.

  • DB: db.m4.large instance running PostgreSQL 9.6. During our tests database usage was below 15%.

  • Test runner: t2.xlarge instance. By using an AWS instance to simulate the clients we minimize nondeterministic behavior caused by our own network. During the tests CPU usage was around 50%, mostly caused by Selenium; in Section 2, where we only use JMeter, we do not observe any significant CPU usage.

 

Below you can find results for the 1-node, 2-node, 3-node and 4-node executions of the scalability tests. As can be seen, there is not much difference between execution times across runs, because even in the 1-node configuration the system is not overloaded. Our add-on specific requests start with the "conf_man_" prefix and have very low numbers.

| Action | 1 Node | 2 Nodes | 3 Nodes | 4 Nodes |
| --- | --- | --- | --- | --- |
| jmeter_login_and_view_dashboard | 600 | 479 | 538 | 491 |
| jmeter_view_issue | 282 | 214 | 193 | 176 |
| conf_man_general_settings | 4 | 4 | 4 | 4 |
| conf_man_subcomponent_settings | 3 | 3 | 3 | 3 |
| conf_man_subproject_settings | 3 | 3 | 3 | 3 |
| conf_man_settings | 12 | 10 | 11 | 10 |
| conf_man_component_versions | 7 | 6 | 6 | 6 |
| conf_man_view_issue | 8 | 7 | 6 | 6 |
| jmeter_search_jql | 883 | 701 | 649 | 554 |
| jmeter_view_dashboard | 528 | 378 | 352 | 306 |
| jmeter_view_backlog | 600 | 504 | 483 | 423 |
| conf_man_view_board_backlog | 5 | 5 | 5 | 5 |
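To put the conf_man_* numbers in perspective, compare them with the page action they attach to. For the 1-node run of view issue (values from the table above, assumed to be milliseconds):

```python
# 1-node figures from the table above
view_issue_ms = 282        # jmeter_view_issue
conf_view_issue_ms = 8     # conf_man_view_issue

share_pct = 100 * conf_view_issue_ms / view_issue_ms  # ~2.8% of the page action's time
```

Even on a single node the app's own request adds only a few percent on top of the built-in action.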