Tips on creating stable Functional Web Tests to compare across Test Runs and Browsers
Test Framework: Selenium WebDriver
As my test framework I picked Selenium WebDriver and downloaded the latest version. I thought it would be easier than it actually was to write tests that work the same way in both browsers. Here are several lessons learned:
- When you write a script, always test it immediately in both browsers
- Use a Page Object approach when developing your scripts. That way you keep the actual implementation separated from the test cases (you will see my test scripts later in this blog – it will make more sense when you see them)
- Be aware of the behavioral differences between Internet Explorer (IE) and Firefox (FF)
- Make sure your test code can deal with unexpected timings and error situations
What a test script should look like (slick and easy to understand)
Here is a screenshot of one of my test cases.
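In case the screenshot is hard to read, here is a minimal sketch of what such a test case can look like when following the Page Object approach. All names in it (SearchTest, SearchPage, searchFor, hasResults) are hypothetical placeholders, not my actual code:

```java
import static org.junit.Assert.assertTrue;

import org.junit.Test;

// Hypothetical sketch: the page object class and its methods are
// placeholders, not the actual ones from my scripts.
public class SearchTest extends TestBase {

    @Test
    public void searchReturnsResults() {
        // createDriver() comes from the base class shown further below;
        // the page object hides all WebDriver lookups from the test case
        SearchPage search = new SearchPage(createDriver());
        search.searchFor("dynaTrace");
        assertTrue("Expected at least one search result", search.hasResults());
    }
}
```

Note that the test case contains no By locators or WebDriver calls at all – that is exactly what the Page Object approach buys you.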
Common functionality in PageObjectBase
I put lots of helper methods into a base class that I called PageObjectBase. As WebDriver doesn't provide functionality to wait for certain objects or for a page to finish loading (at least I haven't found anything on that), I created my own waitFor methods that wait until certain objects show up on the page. This lets me verify whether my app made it to the next stage or not. Here is another screenshot of one of my helper methods.

You can see that I had to work around a limitation I came across in IE – By.linkText doesn't seem to work, and the same is true for most of the other lookup methods in By. What worked well for me is By.xpath, with the one limitation that certain XPath functions such as contains() didn't work for me on Firefox. As you can see – lots of things to consider; not everything works the same way in every browser.
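To give you an idea, here is a simplified sketch of such a waitFor helper – not my exact code, and the polling interval and timeout handling are arbitrary choices:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public abstract class PageObjectBase {

    protected final WebDriver driver;

    protected PageObjectBase(WebDriver driver) {
        this.driver = driver;
    }

    // Polls the page until an element matching the locator shows up or
    // the timeout expires; returns true if the element appeared in time.
    protected boolean waitForElement(By locator, long timeoutMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (!driver.findElements(locator).isEmpty()) {
                return true;
            }
            try {
                Thread.sleep(250); // polling interval, arbitrary choice
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;
    }
}
```

A page object would call it with an XPath locator, e.g. waitForElement(By.xpath("//a[text()='Next']"), 5000) – using text()= rather than contains() because of the Firefox limitation mentioned above.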
Easy to switch Browsers
My test classes create the WebDriver instance. Here I also created a base class that – depending on a system property I can set from my Ant script – instantiates the correct WebDriver implementation (IE or FF). This base class also checks whether dynaTrace will be used to collect performance data. If that's the case, it creates a dynaTrace object that I can use to pass test and test step names to dynaTrace. This makes it easier to analyze the performance data later on – more on this later in this blog.
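Roughly, that base class looks like the following sketch. The system property name ("browser") is a placeholder for whatever you pass in from your build script, and I left the dynaTrace-specific API calls out:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.ie.InternetExplorerDriver;

public abstract class TestBase {

    // Instantiates the WebDriver implementation selected via a system
    // property, e.g. set from Ant with -Dbrowser=ie. This is also the
    // place where the base class would create the dynaTrace helper
    // object if performance data should be collected (calls omitted).
    protected WebDriver createDriver() {
        String browser = System.getProperty("browser", "ff");
        if ("ie".equalsIgnoreCase(browser)) {
            return new InternetExplorerDriver();
        }
        return new FirefoxDriver();
    }
}
```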
Analyzing Tests across Test Runs
Identify Client-Side Regressions across Builds
I have access to different builds. Against every build I run my Selenium tests and then verify the Selenium results (Succeeded, Failed, Errors) and the numbers I get from dynaTrace (#Roundtrips, Time in JS, #Database Statements, #Exceptions, …). With one particular build all Selenium test executions were still successful, but I got a notification from dynaTrace that some values were outside the expected value range. The following screenshot shows some of the metrics that triggered an alert:
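dynaTrace raises these alerts automatically; purely to illustrate the principle, this is roughly what such a value-range check boils down to. The metric names, expected ranges and measured values below are invented for the example:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: dynaTrace performs this check itself; the metric
// names, ranges and values here are made up for the example.
public class MetricRangeCheck {

    public static void main(String[] args) {
        Map<String, double[]> expectedRanges = new HashMap<String, double[]>();
        expectedRanges.put("#Roundtrips", new double[] { 1, 10 });
        expectedRanges.put("Time in JS [ms]", new double[] { 0, 500 });

        Map<String, Double> measured = new HashMap<String, Double>();
        measured.put("#Roundtrips", 24.0);   // e.g. newly added resources
        measured.put("Time in JS [ms]", 120.0);

        for (Map.Entry<String, Double> metric : measured.entrySet()) {
            double[] range = expectedRanges.get(metric.getKey());
            if (metric.getValue() < range[0] || metric.getValue() > range[1]) {
                System.out.println("ALERT: " + metric.getKey() + " = "
                        + metric.getValue() + " is outside the expected range ["
                        + range[0] + " .. " + range[1] + "]");
            }
        }
    }
}
```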
A double-click on one of these metrics for the build with the changed behavior opens a Comparison View for this particular test case, comparing it against the previous test run where the numbers were still ok:
A Side-by-Side Comparison of the network requests also opens automatically, showing me the differences in downloaded network resources. It seems a developer added a new version of jQuery, including a long list of jQuery plugins.
Identify Server-Side Regressions across Builds
Even though more and more logic gets executed in the browser, we still need to look at the application running on the application server. The following screenshot shows another test case, this one with a dramatic growth in the number of database statements (from 1 to more than 9000). Looks like another regression.
The drill-down to compare the results of the problematic build with the previous one works the same way: double-click the measure and we get to a comparison dashboard. This time we are interested in the database statements. It seems one statement got called several thousand times.
When we want to know who executed these statements and why they weren't executed in the previous build, we can open the PurePath Comparison Dashlet. A PurePath represents the actual transactional trace that dynaTrace captured for every request of every test run. As we want to focus on this particular database statement, we can drill from here into the comparison view and see where it's been called.
Analyzing Tests across Browsers
In the same way as comparing results across test runs or builds, it is possible to compare tests executed against different browsers. It is interesting to see how an application behaves differently in different browsers. But it is also interesting to identify regressions in an individual browser and compare those results with a browser that doesn't show the regression. The following screenshot shows the comparison of browser metrics taken from the same test executed against Internet Explorer and Firefox. It seems that in IE we have 4 more resources that get downloaded:
Whether you use Selenium, WebDriver, QTP, Silk, dynaTrace, YSlow, PageSpeed or ShowSlow – I imagine you are interested in testing and want to automate things. Check out my recent blogs, such as those on Testing Web 2.0 Applications, Why you can't compare execution times across browsers, or dynaTrace Ajax Premium.