About the Author

Andreas Grabner has been helping companies improve their application performance for 15+ years. He is a regular contributor to the Web Performance and DevOps communities and a prolific speaker at user groups and conferences around the world. Reach him at @grabnerandi.

Developers Think Functionality – But Less About Scalability

Two weeks ago I co-hosted a webinar with one of our users: Bill Mar, Director of Engineering Services at SmithMicro Software. SmithMicro provides the backbone of our digital life by connecting different digital devices with each other. Bill works in the Wireless Business unit on voice-related services such as VoiceSMS or Visual Voicemail, services we have all become used to since we started running around with smartphones such as the iPhone or BlackBerry.

Bill talked about how SmithMicro had to move towards Proactive Performance Management as the company and the user base started to grow. In his presentation he made an interesting but bold statement: Developers Think Functionality – But Less About Scalability.

Having been a developer for many years (and still being one today, as dynaTrace still allows me to do a little coding on certain features), I had to think about this statement. At first I didn’t know whether I should agree with him from the perspective of my current role at dynaTrace, or be offended from the perspective of a developer who just likes to code new features. In the end I agreed with him, especially after listening to everything he had to say about his day-to-day challenges as Director of Engineering Services.

In this blog I summarize what was said on the webinar. Bill gave great insight into what his team did in order to become more proactive with performance management, shared the recommendations and Best Practices that have worked for them, and told some great stories with fitting analogies. The bold statement I mentioned in the beginning is just a teaser :-)

Problems came with growing business success

Business success is a great thing, and it is what every company is designed to achieve. More active users mean more money spent on the products or services you sell. If you provide Software as a Service, as SmithMicro does, and start with a rather small user base, you don’t necessarily run into software-related issues right away. SmithMicro noticed certain usage peaks during the year, for example during the holiday season or around New Year’s, when people send their best wishes to friends and family using digital services. With growing success, however, more volume-related issues bubbled up to the surface. It was rather easy to find the initial load-related problems by digesting log files and looking at exception stack traces. Even though this process took time, it was still fast enough to react to problems coming in from a rather small user base.

Problems happen faster if you drive faster

When driving 100 miles an hour you have much less time to react in order to avoid a fatal crash than when driving at 10 miles an hour. The same is true for online business. If you handle 100 transactions an hour, you may lose the business of a hundred users if it takes you an hour to fix a problem. At 100 transactions per second (TPS) you will lose a whole lot of money in that same hour. Bill faced exactly this problem as they reached 100 TPS: looking at log files and analyzing exception stack traces was no longer fast enough to react to problems and avoid losing business. There is a two-pronged approach to this problem:
a) don’t allow code with potential scalability issues to end up in production, and
b) bring tools into production that allow Operations to react more proactively (an early alerting system; see the sketch below) and that equip Devs with all the information they need without having to analyze log files.
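
To make (b) a bit more concrete, here is a minimal sketch of such an early alerting check: it keeps a rolling average of transaction response times and raises an alert once that average degrades beyond a threshold. The window size and threshold are values I picked for illustration, not SmithMicro’s settings, and a monitoring product would of course do this out of the box.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/**
 * Minimal sketch of an early-warning check: compare a rolling average of
 * transaction response times against a threshold and alert before users
 * start feeling the problem. Window size and threshold are illustrative.
 */
public class ResponseTimeAlert {

    private static final int WINDOW_SIZE = 100;        // last 100 transactions
    private static final double THRESHOLD_MS = 500.0;  // alert above 500 ms average

    private final Deque<Long> recentDurations = new ArrayDeque<>();

    /** Record one transaction duration and alert if the rolling average degrades. */
    public synchronized void record(long durationMs) {
        recentDurations.addLast(durationMs);
        if (recentDurations.size() > WINDOW_SIZE) {
            recentDurations.removeFirst();
        }
        double average = recentDurations.stream()
                .mapToLong(Long::longValue).average().orElse(0);
        if (average > THRESHOLD_MS) {
            // In a real system this would page Operations or feed a dashboard.
            System.err.printf("ALERT: avg response time %.0f ms over last %d transactions%n",
                    average, recentDurations.size());
        }
    }
}
```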

Developers need to understand their code and the real use case scenarios

Bill mentioned several interesting things on that topic and started with another great analogy: the plan used to build a house is not the same as the plan of the house as it was actually built. In order to have a clear understanding of what is actually going on in the application, it is important to have plans of the real architecture. It is hard, and not always practical, to maintain blueprints or class diagrams because software is very dynamic; changes often happen because they have to happen, and nobody thinks about updating the documentation. A Best Practice therefore is that developers and architects need to understand the current architecture as it is, not as they think it should be.

SmithMicro uses dynaTrace Sequence Diagrams from Real-Life Transactions instead of manually maintained UML Diagrams


On the topic of scalability, Bill talked about having an early focus on things like memory allocation, performance and scalability of critical components. Coming back to his initial bold statement about developers focusing only on functionality, he made it clear that functional readiness doesn’t necessarily mean Production Ready. With longer-running local tests that exercise real use-case scenarios, developers can easily identify problems like excessive memory consumption or non-performing code using simple load generators and profiling-like tools. Scalability is a key requirement, and understanding the real use cases that are used to verify scalability is another Best Practice for proactive performance management.
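
As an illustration of what such a longer-running local test could look like, here is a simple load generator sketch: a fixed thread pool exercises one business operation for half an hour so that memory growth or slow code surfaces before the code ever reaches production. The callBusinessOperation() placeholder, thread count and duration are assumptions for this example and not taken from SmithMicro.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/**
 * Sketch of a "simple load generator" for a longer-running local test.
 * callBusinessOperation() stands in for one of the real use-case scenarios;
 * thread count and duration are illustrative.
 */
public class SimpleLoadTest {

    public static void main(String[] args) throws InterruptedException {
        int threads = 20;
        long endTime = System.currentTimeMillis() + TimeUnit.MINUTES.toMillis(30);

        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < threads; i++) {
            pool.submit(() -> {
                while (System.currentTimeMillis() < endTime) {
                    callBusinessOperation();   // placeholder for a real use-case scenario
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(35, TimeUnit.MINUTES);

        // After the run, check heap usage and GC activity with a profiler
        // or the memory dashboards shown further down.
        Runtime rt = Runtime.getRuntime();
        System.out.printf("Heap in use after test: %d MB%n",
                (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024));
    }

    private static void callBusinessOperation() {
        // Replace with the actual transaction under test, e.g. sending a VoiceSMS.
    }
}
```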

SmithMicro looking at individual PurePaths captured under load to identify scalability issues and performance bottlenecks


Operations needs early indicators and an understanding of how the applications work

Not all problems can be avoided by being proactive in development. Another Best Practice from SmithMicro is therefore to give Operations everything they need to identify problems early on, and to help them understand what to do when problems appear on the horizon without having to call in the engineering side every time a dashboard indicates an issue.

Operations therefore needs early indicators such as trend changes in transaction response times, memory consumption, garbage collection activity, and the number and execution time of database queries. In order to capture this information, the right set of tools needs to be brought in: tools that must be lightweight enough to avoid unnecessary overhead, but that provide enough information for both Operations and developers to analyze problems that occur. Traditional monitoring tools that only watch certain silos of the application stack (web server, app server, network, database) help to identify problematic regions at best. In order for Operations to understand a problem, and for developers to identify its root cause, it is important to get end-to-end transactional tracing with the ability to view this data at a high level as well as in depth.
A high-level view provides Operations with enough data to identify performance trends and hotspots in their application infrastructure.
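
For readers who wonder where numbers like these come from, here is a small sketch that reads heap usage and garbage collection activity through the standard JMX MXBeans; a monitoring tool samples the same values continuously and turns them into the trends shown on the dashboard below. This is only an illustration, not how dynaTrace collects its data internally.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

/**
 * Sketch of collecting early indicators (heap usage and garbage collection
 * activity) via the standard JMX MXBeans. Run periodically, these values
 * become the trend lines on a memory dashboard.
 */
public class EarlyIndicators {

    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        System.out.printf("Heap: %d of %d MB used%n",
                heap.getUsed() / (1024 * 1024), heap.getMax() / (1024 * 1024));

        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("GC %s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```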

High-Level Operations Memory Dashboard used to identify trends in Memory Allocations, Usage and Garbage Collection Activity


The In-Depth view on the same collected data provides developers with enough method and component-level data for problem analysis without having to digest log files and stack traces:

Low Level Database Dashboard shows Database Activity as well as individual SQL Statements and their Bind Variables
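
As a rough illustration of the kind of data this low-level view contains, the following sketch times one JDBC query and logs the statement, its bind variable and its execution time. The voicemail table and the logStatement() helper are made up for this example; the point of transactional tracing is that you get this information on the PurePath without touching the application code.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

/**
 * Sketch of capturing what the low-level database view shows: the SQL
 * statement, its bind variable and its execution time. Table name and
 * logStatement() are hypothetical.
 */
public class TimedQuery {

    public static ResultSet findVoicemails(Connection con, String userId) throws SQLException {
        String sql = "SELECT id, received_at FROM voicemail WHERE user_id = ?";
        PreparedStatement stmt = con.prepareStatement(sql);
        stmt.setString(1, userId);

        long start = System.nanoTime();
        ResultSet rs = stmt.executeQuery();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        logStatement(sql, userId, elapsedMs);
        return rs;
    }

    private static void logStatement(String sql, String bindVariable, long elapsedMs) {
        System.out.printf("SQL [%s] bind=[%s] took %d ms%n", sql, bindVariable, elapsedMs);
    }
}
```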


Developers tend to be curious and often try things that they shouldn’t. The goal for Bill is that Operations can do a better job of being proactive without needing to call in developers every time a dashboard shows RED. With such early indicators and a better understanding of the application and its dependencies on all the involved components, Operations can solve many production problems on their own. The problem they often ran into was that developers were rather “relaxed” when troubleshooting problems in production, often causing more problems than the ones they were working on.
As Bill said: “If you don’t know it’s gonna work – you shouldn’t try it.” To prevent this situation, it is important for SmithMicro to extract all the information developers require from the production system, so that developers can understand what is going on without needing to “mess with the real world” (I am still not offended by those comments :-) )

Where is SmithMicro heading?

The overall goal for Bill and his team is to become more proactive when it comes to performance management. They want to enable Operations to become more self-sufficient by extending their knowledge about application internals and giving them early indicators of problems they can react to. They also want to make it easier for developers to understand what is really going on in their application, especially by spreading that knowledge across cross-functional teams.

Bill’s recommendations

At the end Bill gave his recommendations to all the rest of us out there.

  • Understand your use-case scenarios
    • What are your 5-15 main use case scenarios?
    • Model these use case scenarios and monitor them (see the sketch after this list)
    • By doing this you become proactive
  • Developers
    • Understand how the application works and
    • Understand the real life requirements that come from operations
  • Operations
    • Understand the run-time behaviour of the application
    • Look at trending and early indicators
    • Have actionable data for developers
  • By following such a process you become more proactive, and ensure your Application is Ready for Production
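
To make the first recommendation a bit more tangible, here is a sketch of monitoring one modeled use-case scenario as a scheduled synthetic transaction that is timed on every run. The scenario name, the interval and the runVisualVoicemailScenario() placeholder are my own illustrative assumptions, not part of Bill’s setup.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/**
 * Sketch of monitoring one modeled use-case scenario: run it periodically
 * as a synthetic transaction and record how long it takes, so trend changes
 * show up before real users complain.
 */
public class UseCaseMonitor {

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            long start = System.currentTimeMillis();
            runVisualVoicemailScenario();                  // one of the 5-15 main scenarios
            long duration = System.currentTimeMillis() - start;
            System.out.printf("visual-voicemail scenario took %d ms%n", duration);
        }, 0, 5, TimeUnit.MINUTES);
    }

    private static void runVisualVoicemailScenario() {
        // Placeholder: exercise the real end-to-end transaction here.
    }
}
```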

Further Information

I really hope this summary blog of the webinar made you want to hear more about it and actually listen to the recorded webinar. Follow this link and listen to what Bill and I had to say about Proactive Performance Management. There is also some other stuff that you might be interested in, like The Practical Guide to Performance Management in Development (How we at dynaTrace do it internally), Best Practices from Zappos on Performance Management and Alois’s Blogs in his Performance Almanac.


Comments

  1. Michael Buzz says:

    Thank you very much for the article. I really appreciate it when savvy people like you blog like you do :) Speaking for myself, I prefer to design with the database as a starting point. That way, I better understand the information required. It’s too easy to see what a page might look like, and even produce a mockup of such a page, before the implications of that design are well known. By designing the storage specifications first, the subsequent implementations are better understood. The entire design is never complete; rather, it’s an ongoing process that’s constantly changing at every level to meet new requirements. In other words, don’t get too comfortable with the first design of anything.

  2. I have had the pleasure of listening to one of Bill Mar’s webinars. The company he works for does a great job with network virtualization; it isn’t always easy connecting different devices together. I think he gives developers a lot to think about when managing their projects. The key to being a successful developer is being proactive.

  3. Anonymous says:

    I’m following Bill Mar’s work too. In my opinion he is sharing great ideas in his webinars.
