
Power Learning Group Session - Discussing Code Coverage
- 03 Oct 2018
- 3 min read
You know what the cool thing is when you're passionate about something? You LOVE to learn all day long! The day even seems to have too few hours for all the awesome stuff happening every single minute out there!
So, there is this learning group thing happening out there, initiated by Lisi Hocke and Toyer Mamojee. They started it as a power duo and over time grew it into a group of extraordinary ladies and gentlemen by extending it more and more and taking other people into their online learning group - what a nice, inspiring idea, I thought.
Some time has passed since then and today I joined in for the very first time.
Joao Proenca set up this session very nicely with a lean coffee board where we had all put in our topics beforehand. The majority of us chose the topic 'Code Coverage', which was posted by Mirjana.
A word on code coverage
Code coverage on its own doesn't mean a lot to us in most cases. Knowing that code is covered by automated tests gives a bit more context, but still not enough for testers (and probably developers too) to truly make sense of it. What one first has to ask is WHY it might be a good thing to measure how much of the code base is covered by tests.
Code coverage is the percentage of code which is covered by automated tests. Code coverage measurement simply determines which statements in a body of code have been executed through a test run, and which statements have not.
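To make that a little more concrete, here is a minimal sketch of the idea. The function and test names are made up for illustration, and Python with the coverage.py tool is just one possible setup: a test that only exercises one branch leaves the other branch's statements unexecuted, and that is exactly what a coverage report would flag.

```python
# discount.py - a tiny function with two branches (hypothetical example)
def apply_discount(price, is_member):
    if is_member:
        return price * 0.9   # 10% member discount
    return price             # no discount for non-members


# test_discount.py - a test that only exercises the member branch
def test_member_gets_discount():
    assert apply_discount(100, is_member=True) == 90

# Running this with a coverage tool (e.g. `coverage run -m pytest`
# followed by `coverage report`) would show that the plain
# `return price` statement was never executed, i.e. statement
# coverage of this file is below 100%.
```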
For developers who work daily on the production codebase, this might be an interesting and helpful guide towards creating checks for the code they write and being aware of where they are lacking. For testers, it might be good to have an overall view, to know which areas are already covered by checks and which are not. All major IDEs these days provide useful features to create visual code coverage reports showing which code is touched by checks and which is not.
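If you don't want to rely on the IDE view alone, most coverage tools can also produce a standalone report. As a rough sketch (assuming Python and the coverage.py library; other ecosystems have equivalents such as JaCoCo for Java or Istanbul for JavaScript), a run could look roughly like this:

```python
# Sketch: generating a coverage report programmatically with coverage.py;
# in practice the command line (`coverage run` / `coverage html`) is more common.
import coverage
import pytest

cov = coverage.Coverage()
cov.start()

# Run the test suite while coverage measurement is active
pytest.main(["tests/"])

cov.stop()
cov.save()

# Print a summary to the console and write a browsable HTML report
cov.report()
cov.html_report(directory="htmlcov")
```

The HTML report highlights executed and non-executed lines per file, which is essentially the same visual view the IDEs give you, just in a form you can share with the whole team.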
Our session
Now back to our session. Joao started off the topic by sharing his experiences and the things he is doing around code coverage in his team. They use code coverage to see in which areas of the code they are lagging behind in having automated tests in place. But instead of having discussions around "How good is the test coverage of your code?", he prefers to have discussions around risks and how to mitigate them. Toyer mentioned that there should be a clear distinction between code coverage on the code level and functional test coverage on the feature level.

Pooja then stated her point of view, saying that one should be able to make a clear statement about automated tests that cover lines of code, exploratory or automated tests that cover the functional aspects of a system, and so on. In her current context, the developers are responsible for writing unit tests, but the QA department is accountable for gathering the information and showing what the coverage metrics for each area look like. They are also trying to get some test coverage from monitoring and logging with tools like New Relic, by recording and tracking the events which occur. Here Toyer hooked in and mentioned the tool Instana, which he uses for monitoring the system while executing performance tests.

I then talked a bit about my own experiences and mentioned that whatever you are doing on whichever (let's call it) test level, it is important to visualise it and show your team and stakeholders why, how and what you are doing while testing. The conversation went on till the end, and it was interesting to listen and ask further questions.
Retrospective
It is very energising to be part of such an awesome bunch of testers. The cool thing is that we all come from different nations to learn and grow by exchanging experiences and opinions.
Thanks to all for taking me on board! Looking forward to the next session! Let's ride!