Performance In Development Is The Chief Cornerstone

In Roman architecture, the first stone set in a construction project was called the cornerstone, or foundation stone.  It was the most important stone because every other stone would be set in reference to it.  Great care was taken to make sure its angles were correct; a slight deviation, even one degree, could cause structural issues for the entire foundation.  When talking about performance across the lifecycle we must start at the foundation.  In my previous post we talked about the need to look at performance not just in a single phase but in all aspects of the lifecycle.  In this article we will look at the need for performance to be the cornerstone of the development process.

What is the cornerstone for most development teams?

When researching the top priorities for development teams, I was surprised to find that performance was rarely mentioned.  The focus was typically on other issues: the scope of an application, limiting features for each build, or best coding practices.  A recent Gartner survey showed that the top three priorities for development teams were:

  1. Develop applications faster
  2. Expand the use of Agile
  3. Reduce application development costs

While performance was on the list, it was down near the bottom.  I find this ironic because, in order to address the top three priorities, performance should be the foundational priority.  Let’s look at each of these in the context of having performance as the cornerstone.

Develop Applications Faster

With the advancement of web-delivered applications, companies are demanding increased interaction with customers and partners, and the mobile revolution only compounds this.  New features brought to market quickly can be the difference for a company.  The agile process promotes quick iterations with small changes, which allows teams to churn out features rapidly.  The drawback is that, in order to deliver applications on time, hard choices are often made: features are pushed and testing is seen as a luxury.  As changes stack up, problems introduced in earlier sprints become harder to unwind.  This is the one degree of deviation that can come back to inhibit the scale of an application. A problem can be so embedded that addressing it means tearing down several layers of functionality and rebuilding.  Velocity stalls as teams must stop innovating to deal with bottlenecks that were introduced earlier in the development phase. The following graph visualizes what happens when performance is not treated as a high priority:

Performance must be a focus throughout development to avoid missed goals and missed expectations

Let’s look at how having performance as the foundation can impact faster releases.  Using a performance platform earlier in the process gives developers insight into the performance of new application features.  In addition to looking at the results of Unit and Integration tests, it is important to look at performance metrics that can be extracted from these test runs. This allows verification of functionality as well as detection of architectural and performance problems by leveraging existing test assets. A best practice is to look at metrics such as # of Executed Database Statements, # of Thrown Exceptions, Total Execution Time, Created Objects, Generated Log Statements, … The following table highlights some of these metrics:

Analyzing Performance Indicators for every test run identifies problems as they get introduced in the code
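As a rough illustration of the idea, a per-test-run indicator set can be modeled as a simple named-metric holder that is filled in alongside the pass/fail status. All class and metric names below are hypothetical, chosen for this sketch; they are not dynaTrace's API — a real platform would capture these values via instrumentation rather than by hand.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical holder for the per-test-run performance indicators
// discussed above. In practice these values would be captured by
// instrumentation; here they are recorded by hand to show the shape.
class TestRunIndicators {
    private final Map<String, Long> metrics = new LinkedHashMap<>();

    void record(String name, long value) {
        metrics.put(name, value);
    }

    long get(String name) {
        return metrics.getOrDefault(name, 0L);
    }

    @Override
    public String toString() {
        return metrics.toString();
    }
}

public class IndicatorDemo {
    public static void main(String[] args) {
        TestRunIndicators run = new TestRunIndicators();
        // Indicators a test run might report alongside its status.
        run.record("dbStatements", 12);
        run.record("thrownExceptions", 0);
        run.record("executionTimeMs", 85);
        run.record("createdObjects", 4300);
        System.out.println(run);
    }
}
```

The value of this shape is that every test run produces the same set of named indicators, so runs can be compared build over build.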

Each code change is now tested for functional viability (Unit Test Status), architectural correctness (# of SQL Statements, # of Exceptions, …) and performance (Execution Time, …). This indicates how the changes are impacting the overall quality and performance of an application.  The following figure shows a different representation of this data with built-in regression detection. The performance management platform learns the expected value range for each performance indicator. If a metric is out of range, the platform automatically alerts on the regression:

Performance Indicators that are out of the expected value range will automatically trigger alerts
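A minimal sketch of such a range check, assuming the "expected value range" is learned as the mean plus or minus three standard deviations of the indicator across earlier builds (the actual learning logic of the platform is not documented here, so this threshold is an illustrative assumption):

```java
import java.util.List;

// Sketch of a range-based regression check: a value is flagged when it
// falls outside mean +/- 3 standard deviations of the build history.
// The 3-sigma threshold is an assumption for illustration only.
public class RegressionCheck {

    static boolean isOutOfRange(List<Double> history, double latest) {
        double mean = history.stream()
                .mapToDouble(Double::doubleValue).average().orElse(0);
        double variance = history.stream()
                .mapToDouble(v -> (v - mean) * (v - mean))
                .average().orElse(0);
        double stddev = Math.sqrt(variance);
        return Math.abs(latest - mean) > 3 * stddev;
    }

    public static void main(String[] args) {
        // Execution time (ms) of the same test across earlier builds.
        List<Double> history = List.of(80.0, 82.0, 79.0, 81.0, 80.0);
        System.out.println(isOutOfRange(history, 81.0));  // within range
        System.out.println(isOutOfRange(history, 140.0)); // would raise an alert
    }
}
```

The same check applies to any of the indicators above: database statement counts, exception counts, or object allocations, not just execution time.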

Developers see how the latest changes impact the performance of the particular component that was tested.  Each test can now generate a new task to address these automatically identified problems.  The additional information captured to calculate these indicators helps when analyzing the actual regression. The following screenshot shows the comparison between the “last known good” run and the problematic transaction that raised the alert.

Visual comparison makes it easy to identify the actual regression

There is no need to write performance-specific user stories, as the platform automatically detects the test framework, in this case JUnit, and gives performance feedback for that build.  Performance testing becomes part of the automation process.  With each build in the sprint, performance is measured and compared to all previous builds.  It simply becomes part of the sprint’s overall “done-ness”.
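The build-over-build comparison described above can be sketched as a history of per-build measurements checked against the best previous build. The class and method names here are illustrative, not part of any real CI or dynaTrace API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical per-build performance history: each build's measured
// execution time is appended, and the latest build is compared to the
// fastest previous build to quantify any slowdown.
public class BuildHistory {
    private final List<Long> executionTimesMs = new ArrayList<>();

    void recordBuild(long executionTimeMs) {
        executionTimesMs.add(executionTimeMs);
    }

    // Fractional slowdown of the latest build vs. the fastest earlier one;
    // 0.0 when there is nothing to compare against yet.
    double slowdownVsBest() {
        if (executionTimesMs.size() < 2) return 0.0;
        long latest = executionTimesMs.get(executionTimesMs.size() - 1);
        long best = executionTimesMs.subList(0, executionTimesMs.size() - 1)
                .stream().mapToLong(Long::longValue).min().getAsLong();
        return (latest - best) / (double) best;
    }

    public static void main(String[] args) {
        BuildHistory history = new BuildHistory();
        history.recordBuild(100);
        history.recordBuild(98);
        history.recordBuild(97);
        history.recordBuild(130); // a regression slips into the sprint
        System.out.printf("slowdown vs best build: %.0f%%%n",
                history.slowdownVsBest() * 100); // roughly 34%
    }
}
```

A sprint's "done-ness" gate could then require both green tests and a slowdown below some agreed threshold.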

An agile team can eliminate some of the stabilization sprints that would otherwise be needed, as performance defects have been vetted out inline.  This keeps innovation moving and leaves fewer chances for velocity to stall due to stability issues.

Expand the Use of Agile

It is not a surprise that this is a top concern for development teams.  More companies are moving from a waterfall structure to an agile process.  Test automation must be implemented for this transition to be successful.  This is a time-consuming process, as most agile teams are not starting from scratch but transitioning an existing application.  Without these tests in place, a core feature of the agile practice is lost.  A main factor in the adoption of agile practices and test automation is visibility.  If no one is looking at the data from build to build, then there is no reason to have the tests in the first place.  Developers will focus on what is important to the completion of the project, and writing tests takes precious time and resources.  This is a problem if you want to expand the use of an agile practice.

With a solution that is integrated into the functional testing process, you now have instant feedback about each test.  The data provided is much richer than a simple pass/fail: there is now an initial performance indication for each test. Teams look at this data for every build, and visibility around it is high.  There is a need for coverage across the code base, as performance is now a critical component of the development stage.  This need to understand performance drives the adoption of test automation for agile teams, ingraining it as standard practice.

Reduce Application Development Costs

This is just a reality of the world we live in now.  For a development team, time truly equals money: one developer costs X dollars per hour, so to reduce the cost of development you must reduce the amount of time needed to deliver new features.  This can be achieved by reducing the number of iterations needed to complete a new feature.  Currently, most performance assessment happens at the testing level.  As stated earlier, performance issues can be layered into several sprints and only appear in large-scale testing.  This can set a team back several sprints, only adding to the cost.

Understanding performance earlier in the development cycle cuts down on the number of iterations needed to stabilize the code base for a new release.  This allows teams to keep development velocity efficient and clean.  The following charts illustrate this by comparing Traditional Performance Management (performance as an afterthought) and Continuous Performance Management (performance as a top priority).

Continuously focusing on performance results in better quality and on-time delivery

Development is where ideas start to become reality.  This is where application specs begin to take shape and logical processes begin to morph into user interfaces and interactions.  As we can see, performance in the development part of the lifecycle really is the cornerstone for an application.  The slightest deviation can cause a ripple that may not appear until later in the lifecycle.  Given that, it is surprising how little is done about performance in this part of the lifecycle.  In order to become truly proactive, performance must be addressed from the start.  The only way to do that is to have a platform in place that spans this lifecycle.  In my next article I will move along the process and address how the acceptance team is crucial in managing performance and is a key component in becoming truly proactive.  If you are a dynaTrace user, check out the following article on the Community Portal: “dynaTrace in CI – The Big Picture”.

Comments

  1. What gets measured, gets done. But it is the job of the owners of company stock to hold the C-Suite accountable for remunerating those who get done what gets done — otherwise what gets done will never be considered valuable by clients, employees or employers.