What have I been up to?

Little bit of this, little of that. Little of the other.

Well, quite a lot of this and that actually. The bullet points are probably easier:

  • Testing REST APIs written in Java using Spring Boot (with Apache Camel and some other stuff)
  • Using the RestAssured library
  • With a builder pattern that utilises POJOs (roughly sketched after this list)
  • To allow requests and responses to be built quickly
  • And run first in a Docker environment
  • To prove that the deployed WebSphere Liberty artifacts work
  • Before being pulled from GitHub by Jenkins
  • Where the code then enters the CI pipeline
  • If everything is successful on that Git branch, the code is merged into the develop branch
  • Which is then merged to release every week or so
  • Both dev and release branches are pushed to Artifactory
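
To make that middle chunk a bit more concrete, here’s a rough sketch of the kind of builder-plus-REST-Assured test I mean. Everything in it is invented for illustration (the CustomerRequest POJO, the port, the endpoint); the real framework has more moving parts, but the shape is the same.

```java
import static io.restassured.RestAssured.given;

import io.restassured.http.ContentType;
import org.junit.jupiter.api.Test;

// Illustrative request POJO with a fluent builder. Sensible defaults mean most
// tests only override the fields they actually care about.
class CustomerRequest {
    private String firstName = "Jane";
    private String lastName = "Citizen";

    static CustomerRequest aCustomer() {
        return new CustomerRequest();
    }

    CustomerRequest withLastName(String lastName) {
        this.lastName = lastName;
        return this;
    }

    public String getFirstName() { return firstName; }
    public String getLastName() { return lastName; }
}

class CustomerApiTest {

    @Test
    void createsACustomer() {
        // REST Assured serialises the POJO to JSON (Jackson on the classpath),
        // fires the request at the service running in the local Docker
        // environment, and asserts on the response in one readable chain.
        given()
            .baseUri("http://localhost:9080")   // made-up port for the Liberty container
            .contentType(ContentType.JSON)
            .body(CustomerRequest.aCustomer().withLastName("Doe"))
        .when()
            .post("/customers")
        .then()
            .statusCode(201);
    }
}
```

The win is that a test for a new scenario usually only needs to override the one or two fields it actually cares about; everything else comes from the defaults.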

That’s kinda it, though it doesn’t really show the complexity of the development approach. We have a shitload of APIs in production, more in development, and they integrate with one another to greater or lesser extents.

The vast majority of our testing is automated, both by developers and testers. Developers write three layers of tests:

  • Unit – self-explanatory, tests a unit of code
  • Component – tests that exercise different components of the code in conjunction with one another
  • Black box – tests that prove contracts between different functional layers, e.g. between the service layer and a database (there’s a sketch of this just below)
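
As a stripped-down illustration of that black-box layer, here’s roughly what “proving a contract between layers” can look like. It assumes JUnit 5 and an in-memory H2 database; the DAO, table and column names are invented for the example.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Optional;
import org.junit.jupiter.api.Test;

// Illustrative persistence class: stands in for whatever sits between the
// service layer and the database in the real code base.
class CustomerDao {
    private final Connection db;

    CustomerDao(Connection db) { this.db = db; }

    void save(String lastName) throws SQLException {
        try (PreparedStatement insert =
                 db.prepareStatement("INSERT INTO customer (last_name) VALUES (?)")) {
            insert.setString(1, lastName);
            insert.executeUpdate();
        }
    }

    Optional<String> findFirstLastName() throws SQLException {
        try (Statement query = db.createStatement();
             ResultSet row = query.executeQuery("SELECT last_name FROM customer")) {
            return row.next() ? Optional.of(row.getString("last_name")) : Optional.empty();
        }
    }
}

class CustomerDaoContractTest {

    @Test
    void writesAndReadsBackACustomer() throws Exception {
        // Run against a real (in-memory) database rather than a mock, so the
        // test proves the SQL and schema assumptions, not just the Java code.
        try (Connection db = DriverManager.getConnection("jdbc:h2:mem:contract")) {
            try (Statement ddl = db.createStatement()) {
                ddl.execute("CREATE TABLE customer (last_name VARCHAR(100))");
            }

            CustomerDao dao = new CustomerDao(db);
            dao.save("Doe");

            assertEquals(Optional.of("Doe"), dao.findFirstLastName());
        }
    }
}
```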

If the API integrates with other APIs or with backend legacy systems (and there are a lot of those) then a tester or developer will also write an integration test that exercises the contract between the two (or more) systems. These can range from pretty simple to quite complex depending on the functionality we want to ensure. Remember that these are primarily change indicators and checks. The aim isn’t to exercise every feature and hit every permutation – this should have already been done at a lower level. The aim is to ensure that all of our assumptions about the interfacing system were correct.
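
To show how thin these checks can be, here’s a hedged sketch of that kind of contract-style integration test, again with REST Assured. The environment URL, endpoint and field names are all made up; the point is that the test asserts only the handful of things our API actually relies on.

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.notNullValue;

import org.junit.jupiter.api.Test;

class AccountLookupIntegrationTest {

    @Test
    void accountLookupStillReturnsTheFieldsWeDependOn() {
        // Deliberately thin: this isn't trying to cover every feature of the
        // downstream system, just to flag early when an assumption we rely on
        // changes. Endpoint and field names are invented for illustration.
        given()
            .baseUri("https://integration-env.example.com")
            .queryParam("accountNumber", "12345678")
        .when()
            .get("/accounts/lookup")
        .then()
            .statusCode(200)
            .body("accountName", notNullValue())
            .body("balance", notNullValue());
    }
}
```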

The final element of testing is exploratory. Testers are encouraged to play around with the feature in an attempt to find unintended consequences. This step is a lot of fun, and doing it in an environment where you’re encouraged to find automated ways of improving your testing only adds to that. Utilising an existing automated framework to further explore a product also helps speed up the process.

On top of this are performance testing, security testing, TVT and BVT – all carried out by specialist team members. We carry out some basic SIT testing after the WAR is deployed to ensure everything plays nice, but the aim is to test as much as we can at lower levels where it’s far less expensive (time, code, execution, money, whatever).

The approach we’re using focuses heavily on agility. We have a few things that we do for every epic:

  • Product inception – what we’re building, the value of it and how we’re going to build it
  • Two week sprints
  • Sprint planning sessions before each sprint
  • A kickoff for each card and a subsequent walk through at the completion of development
  • Fortnightly showcases – “We built this cool stuff” sessions for anyone interested

Does that sound like a lot of meetings? It’s really not. The team I work in is made up of three developers, a BA and a delivery lead.

  • Product inceptions are usually the longest, but they combine architectural discussion, acceptance criteria determination, complexity evaluation (from which we derive points) and feature ordering, where we decide which features need to be built first
  • Two weeks appears to be optimum for the moment, though we could probably move to a week if there was a need to. But there isn’t
  • Sprint planning essentially boils down to “Who wants to work on what first?”
  • Kickoffs are quick, if only because they usually involve a discussion on what’s going to be built and what kind of testing – and at which layer – we’ll need to prove that the code meets the acceptance criteria of the card. If there is too little agreement on what’s going to be built, or the acceptance criteria are not clear, we can bounce the card into the backlog or have a more robust chat with our BA
  • Showcases are pretty good fun – we’re finding new ways to make them more entertaining. Plus, they involve beer and snacks so, winning

There’re more than 70 people working on the service engine platform (as NAB likes to call it) and that doesn’t include the teams that make up development for our biggest consumer, the mobile development team. Most of the delivery teams are of a similar size and ratio and there are eight (nine?) teams carrying out work on the service engines. We’re growing to the point where we’re starting to adopt a nomenclature and structure similar to the one Spotify use.

In short, I’m having a shit LOT of fun. I get to work with great people every day, helping to solve complex problems in a supportive, highly technical and above all interesting environment.

James
