What have I been up to?

Little bit of this, little of that. Little of the other.

Well, quite a lot of this and that actually. The bullet points are probably easier:

  • Testing REST APIs written in Java using Spring Boot (with Apache Camel and some other stuff)
  • Using the RestAssured library
  • With a builder pattern that utilises POJOs
  • To allow requests and responses to be built quickly (there’s a rough sketch of what this looks like after the list)
  • And run first in a Docker environment
  • To prove that the deployed WebSphere Liberty artifacts work
  • Before being pulled from GitHub by Jenkins
  • Where the code then enters the CI pipeline
  • If everything is successful on that Git branch the code is merged to the develop branch
  • Which is then merged to release every week or so
  • Builds of both the develop and release branches are pushed to Artifactory
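
To give a feel for it, here’s a minimal sketch of what one of those builder-style RestAssured tests might look like – the endpoint, POJO and field names are all invented for illustration:

    import static io.restassured.RestAssured.given;
    import static org.hamcrest.Matchers.equalTo;

    import io.restassured.http.ContentType;
    import org.junit.Test;

    public class CustomerApiTest {

        // Hypothetical request POJO with a fluent builder; RestAssured
        // serialises it to JSON for us via its object mapping support
        public static class CustomerRequest {
            private String name;
            private String email;

            public static CustomerRequest aCustomer() { return new CustomerRequest(); }
            public CustomerRequest withName(String name) { this.name = name; return this; }
            public CustomerRequest withEmail(String email) { this.email = email; return this; }

            public String getName() { return name; }
            public String getEmail() { return email; }
        }

        @Test
        public void createsACustomer() {
            CustomerRequest request = CustomerRequest.aCustomer()
                    .withName("Jane Citizen")
                    .withEmail("jane@example.com");

            given()
                .contentType(ContentType.JSON)
                .body(request)                            // POJO -> JSON request body
            .when()
                .post("http://localhost:9080/customers")  // invented Liberty endpoint
            .then()
                .statusCode(201)
                .body("name", equalTo("Jane Citizen"));
        }
    }

The win is in the builder: once the POJOs exist, a new request variation is a couple of chained calls rather than a hand-rolled JSON string.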

That’s kinda it, though it doesn’t really show the complexity of the development approach. We have a shitload of APIs in production, more in development, and they integrate with one another to greater or lesser extents.

The vast majority of our testing is automated, both by developers and testers. We write three layers of tests:

  • Unit – self-explanatory, tests a unit of code (there’s a minimal sketch after this list)
  • Component – tests that exercise different components of the code in conjunction with one another
  • Black box – tests that prove contracts between different functional layers, e.g. between the service layer and a database
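
The unit layer really is as simple as it sounds – a self-contained sketch, with the class under test invented for illustration:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class InterestCalculatorTest {

        // Trivial made-up class under test, inlined to keep the sketch self-contained
        static class InterestCalculator {
            double simpleInterest(double principal, double rate, int years) {
                return principal * rate * years;
            }
        }

        @Test
        public void calculatesSimpleInterestForOneYear() {
            InterestCalculator calculator = new InterestCalculator();
            // 1000.00 at 5% for one year should earn 50.00
            assertEquals(50.0, calculator.simpleInterest(1000.0, 0.05, 1), 0.001);
        }
    }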

If the API integrates with other APIs or with backend legacy systems (and there are a lot of those) then a tester or developer will also write integration tests that exercise the contract between the two (or more) systems. These can range from pretty simple to quite complex depending on the functionality we want to check. Remember that these tests are primarily change indicators. The aim isn’t to exercise every feature and hit every permutation – this should have already been done at a lower level. The aim is to ensure that all of our assumptions about the interfacing system were correct.
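
As a rough illustration of the change-indicator idea, one way to express a contract check is to assert that the interfacing system’s response still matches an agreed JSON schema – a sketch using RestAssured’s json-schema-validator module, with the endpoint and schema file invented:

    import static io.restassured.RestAssured.given;
    import static io.restassured.module.jsv.JsonSchemaValidator.matchesJsonSchemaInClasspath;

    import org.junit.Test;

    public class AccountsContractTest {

        @Test
        public void accountResponseStillMatchesTheAgreedContract() {
            given()
                .baseUri("http://legacy-backend:9080")   // hypothetical interfacing system
            .when()
                .get("/accounts/12345")                  // invented resource
            .then()
                .statusCode(200)
                // fails the moment the backend changes shape under us
                .body(matchesJsonSchemaInClasspath("schemas/account-response.json"));
        }
    }

It doesn’t exercise every permutation – it just tells us, quickly and cheaply, when an assumption about the other system stops being true.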

The final element of testing is exploratory. Testers are encouraged to play around with the feature in an attempt to find unintended consequences. This step is a lot of fun, and doing it in an environment where you’re encouraged to find automated means of improving your testing only adds to that. Utilising an existing automated framework to further explore a product helps speed up the process.

On top of this are performance testing, security testing, TVT and BVT – all carried out by specialist team members. We carry out some basic SIT testing after the WAR is deployed to ensure everything plays nice, but the aim is to test as much as we can at lower levels where it’s far less expensive (time, code, execution, money, whatever).

The approach we’re using focuses heavily on agility. We have a few things that we do for every epic:

  • Product inception – what we’re building, the value of it and how we’re going to build it
  • Two week sprints
  • Sprint planning sessions before each sprint
  • A kickoff for each card and a subsequent walkthrough at the completion of development
  • Fortnightly showcases – “We built this cool stuff” sessions for anyone interested

Does that sound like a lot of meetings? It’s really not. The team I work in is made up of three developers, a BA and a delivery lead.

  • We practice rapid inceptions; they combine architectural discussion, acceptance criteria determination, complexity evaluation (from which we derive points) and feature ordering, where we decide which features need to be built first
  • Two weeks appears to be optimum for the moment, though we could probably move to a week if there was a need to. But there isn’t
  • Sprint planning essentially boils down to “Who wants to work on what first?”
  • Kickoffs are quick; they usually involve a discussion on what’s going to be built and what kind of testing – and at which layer – we’ll need in order to prove that the code meets the acceptance criteria of the card. If there’s too little agreement on what’s going to be built, or the acceptance criteria aren’t clear, we can bounce the card back into the backlog or have a more robust chat with our BA
  • Showcases are pretty good fun – we’re finding new ways to make them more entertaining. Plus, they involve beer and snacks so, winning

There’re more than 70 people working on the service engine platform (as NAB likes to call it) and that doesn’t include the teams that make up development for our biggest consumer, the mobile development team. Most of the delivery teams are of a similar size and ratio and there are eight (nine?) teams carrying out work on the service engines. We’re growing to the point where we’re starting to adopt a nomenclature and structure similar to the one Spotify use.

In short, I’m having a LOT of fun. I get to work with great people every day, helping to solve complex problems in a supportive, highly technical and above all interesting environment.


BDD with JS

About six months ago, when I started this job and was trying to find my feet with a lot of unfamiliar tech, I came across this in my quest to get my head around BDD with CucumberJS. At the time I had no idea about JavaScript, or much of an idea about how to do BDD well, so it was all a bit much.

Now, however, it actually explains a lot of what I’m involved in. I’m using Chai instead of Mocha, but the ideas are the same, just different syntax. Sometimes I wish we weren’t using Chai and had instead implemented something like Mocha or Jasmine, but we didn’t because reasons. And those reasons were arrived at well before I got here.

Have a read if you’re stuck. I’m gradually coming around to the idea that JavaScript works really well when you’re testing JavaScript, in particular Angular with its asynchronous behaviour.


The New Place

Names? Who needs names?

Three days here so far and I’m having a blast. Not honeymooning (too cynical now) but there’s a lot to like:

  • Practical use of git
  • New tools like Gradle (whatever happened to Maven?)
  • New concepts e.g. contracts testing
  • More technical testing than I’ve ever been able to do: automation, BDD, security, APIs
  • Many things being implemented and trialled
  • Angular and Protractor
  • Little “a” agility
  • Walls with cards, stories being progressed, get-togethers around whiteboards to clarify things, developers, testers, designers and BAs excited about what they’re doing and enthusiastic about overcoming challenges

In short, much more to learn in an environment I’m more comfortable in (agile) which makes me a happy tester :)


Passing on the lessons learned

One of the things about facing the imminent end of your current employment (contract’s not being renewed as the role I’m in is effectively redundant) is that you know you’re not going to be there to help out the people you’ve been helping out for the last year. In light of that harsh truth, I started putting together a list of useful testing resources, things to read, even things that aren’t specifically related to testing but have proved useful to me at various points of my career.

The list won’t be useful to everyone and I don’t recommend going and buying every book on it. This is just what I’ve collected over time. If you have any to add please feel free to say so in the comments section.

Agile and Scrum stuff you might find useful, even though the journey there will be difficult

  • Scrum Shortcuts without Cutting Corners – Ilan Goldstein’s take on how to survive an agile implementation and what works in the real world (or at least what worked for him)
  • Agile Estimating and Planning – Mike Cohn’s excellent book on how to do it right. There’s a reason why the praise section reads like a who’s who of the agile world

Technical stuff that I enjoyed reading that wasn’t specifically related to testing

  • Introduction to the Command Line
  • The Phoenix Project (fiction but a great read)
  • Practical Lock Picking
  • BackTrack 5 Wireless Penetration Testing Beginner’s Guide
  • Metasploit: The Penetration Tester’s Guide
  • Nmap Cookbook

There’s a bunch of stuff that I’m yet to get around to reading from a few authors who interest me. These are mainly about techniques used to eliminate waste at a number of levels, from software development to pointless management bullshit. The following authors may be of interest to you too:

If you end up leading or managing people then please, I beg of you, ensure you know the essentials of the above. It might help make the next generation of leaders and management types a little less dense.

My next trick is to post up a bunch of useful links, though the blogroll is a good place to start. This will include blogs, Twitter accounts, etc. that will give you insight into the craft. Some you may know, others may be unfamiliar.

Good luck.


Do we need testing?

Bob Marshall (@flowchainsensei) has an interesting post up on his site about No Testing. I assume this follows on the heels of the #NoEstimates movement.

Interesting read. I think I can see where he’s going, somewhat like the “everybody tests” idea espoused by Bolton, Bach, etc. (and me for that matter). I like the idea, even though my profession is directly affected by the approach.

To be honest, I don’t think we need a specific role to be doing testing. But in order to do this the whole organisation producing software needs to be involved in the “testing” effort. Developers need to get better at building their own checks. The business needs to be able to clearly articulate their vision. Testing is as much an arse-covering exercise in a lot of places as it is a check to ensure we developed the right thing the right way for the right target market.

The discussion needs better definition. I’d also like to get at the root of why Bob thinks testing needs to go away.

I like the concepts of necessary and unnecessary wastes by the way, particularly as a way of characterising what testing is done. Something I’ll have to look at in more depth.


Transitioning to agile by using agile techniques

It’s a bit old but it’s still a good article by Esther Derby (@estherderby) on using an agile approach to become agile. Much better to set iterative goals than having mandated ones produced by senior management. That alternative is best viewed as (via @cowboytesting) “I know we’re agile because the Director has told us we are agile and a steering committee came up with our agile process to follow” which is a shit way of doing things.

My bullshit detector starts to go off as soon as people start talking about how “we’re Agile now” because “we have a wall with cards on it and we do standups”.

No. Just. No. Walls and standups and retrospectives and cards and sprints and all the other “ceremonies” (urgh, hate that term) of Scrum do not make you agile. What I see most often is organisations claiming agility because they have processes that turn sprints into mini-waterfalls. You know the kind: developers don’t start coding until they’ve been handed a “user story” (really just a mini requirement) from a BA. Testers who don’t start testing until code is fully completed. Automation of current sprint tests being done the sprint after the code is “done”.

Stop it.

Evaluate what’s going to deliver the highest value to the owners (in this case, the team, or management, or “someone who matters” in James Bach parlance) first. Slice the highest value bits up into manageable pieces and deliver them. Continually reflect and improve, asking not just “Can we do this better?” (efficiency) but “Should we really be doing this at all?” (effectiveness).


New toys!

I’ve been playing with some new toys lately. The first is completely unrelated to testing. I picked up a 24-70mm 2.8 L II lens a couple of weeks ago. It’s been amazing – incredibly sharp lens, beautiful images. For full goodness I’ll need a full frame camera (currently running on my awesome 7D) but that can wait.

On the software side, I’ve come across Asana after reading about it on Rands in Repose. It looks like something I could use. I’m after an information-radiator type tool for the various teams of testers where I’m working and this, coupled with Sprintboards (with a side helping of Instagantt for scheduling and tracking), looks like just the thing.

Oh yeah, the new gig. It’s working out better than I expected. Very much enjoying myself, even in the face of some “interesting” environmental issues. More on that later.


5 Things to Remember When Writing Test Cases

When writing test cases for your project, it’s often easy to forget what is important. This article is aimed at reminding experienced testers, and informing new ones, of the key items to keep in mind so that we’re efficient in writing our tests.

The End User is Key

Who will be executing the test scripts? Will the scripts form part of a Regression Pack? Will the scripts be used for anything else (e.g. training manuals)?

These questions are meant to kick-start your thought process. Essentially, the person who writes the test is not always the person who executes it. As such, test cases need to be written so that anyone of any level can read, understand and execute them. The aim is that no matter the level of the tester, the outcome will always be the same. If you approach writing your test with ‘baby-steps’ in mind and make no assumptions that the person executing it has any prior knowledge of the project, you shouldn’t go far wrong …
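
To make that concrete, here’s a hypothetical example of the baby-steps style – the system, account and values are all invented:

  Test case: Successful login with a valid account
  Precondition: A user account test.user01 exists with password Passw0rd!

  1. Open a web browser and go to https://example.com/login
  2. In the Username field, type test.user01
  3. In the Password field, type Passw0rd!
  4. Click the Log in button
  Expected result: The account dashboard is displayed and the header shows “Welcome, test.user01”

Every step is explicit, so a brand-new starter should get the same outcome as a ten-year veteran.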