Names? Who needs names?
Three days here so far and I’m having a blast. Not honeymooning (too cynical now) but there’s a lot to like:
- Practical use of git
- New tools like Gradle (whatever happened to Maven?)
- New concepts e.g. contracts testing
- More technical testing than I’ve ever been able to do: automation, BDD, security, APIs
- Many things being implemented and trialed
- Angular and Protractor
- Little “a” agility
- Walls with cards, stories being progressed, get-togethers around whiteboards to clarify things, developers, testers, designers and BAs excited about what they’re doing and enthusiastic about overcoming challenges
In short, much more to learn in an environment I’m more comfortable in (agile) which makes me a happy tester :)
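The contract testing mentioned above can be sketched minimally without any framework: the consumer writes down the response shape it depends on, and a check verifies a provider response against it. The field names and types below are hypothetical, and a real setup would use something like Pact; this is just the core idea.

```python
# Minimal consumer-driven contract check. The provider and its fields
# are made up for illustration.

# The consumer states the shape it relies on: field -> expected type.
USER_CONTRACT = {"id": int, "name": str}

def satisfies(contract, response):
    """True if the response has every field the consumer relies on,
    with the expected type. Extra fields are fine."""
    return all(
        field in response and isinstance(response[field], expected)
        for field, expected in contract.items()
    )

# Stubbed provider responses, standing in for real API calls.
good = {"id": 42, "name": "Ada", "email": "ada@example.com"}
bad = {"id": "42"}  # wrong type, missing field

print(satisfies(USER_CONTRACT, good))  # True
print(satisfies(USER_CONTRACT, bad))   # False
```

The point is that the consumer, not the provider, owns the contract; the provider runs these checks in its own build so it finds out immediately when it breaks a consumer.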
Any PM – or anyone else for that matter – telling you just how awesome waterfall is as a development methodology clearly isn’t paying attention. Experience of the last 50 or 60 years in software development aside, one only needs to look at the paper that supposedly proposed waterfall as a working model to know that the process was a pile of suck from the beginning.
See, a guy by the name of Winston Royce wrote a paper titled “Managing the Development of Large Software Systems”. Go read it. It’s worth it. In the paper he describes exactly why this process is complete balls. Even though he never actually names it as waterfall, it can be clearly seen as such from the diagram.
I find it funny that there has been industry wide adoption of a process that was demonstrably inadequate to the task of building software.
Apparently DSDM Atern is an Agile implementation.
I really have no idea how this could ever be considered aligning with the agile values and supporting principles. It seems more like waterfall with smaller iterations than agile. Which is fine; just don’t say it’s agile. When one of your guiding principles is “Demonstrate control”, complete with the gem “use an appropriate level of formality for tracking and reporting”, I’m pretty sure you’re #notAgile.
Apparently it’s all the rage among big corporates, with 1% of all those surveyed saying . . . Wait, what? One percent? One. Percent. So it’s right up there with the Agile Unified Process and agile modeling. And slightly lower than the category of “Other”.
But it’s nice when they make you laugh on the way out.
A former manager of mine sent this on their last day.
One of the things about facing the imminent end of your current employment (contract’s not being renewed as the role I’m in is effectively redundant) is that you know you’re not going to be there to help out the people you’ve been helping out for the last year. In light of that harsh truth, I started putting together a list of useful testing resources, things to read, even things that aren’t specifically related to testing but have proved useful to me at various points of my career.
The list won’t be useful to everyone and I don’t recommend going and buying every book on it. This is just what I’ve collected over time. If you have any to add please feel free to say so in the comments section.
- The Little Black Book On Test Design – If you’re testing something new, read this if you get stuck for ideas. Even if you’re not, read it anyway
- Lessons Learned in Software Testing – Can’t recommend this highly enough. Do not consume all at once though. Bach, Pettichord, Kaner. Suggest you get to know those names
- Tacit and Explicit Knowledge – Not specifically about testing, but very useful all the same
- An Introduction to General Systems Thinking – Gerry Weinberg. If you don’t know who he is, time to learn. Learn to look at (not just software) components as part of a larger whole. Business processes, human interactions, value drivers are all part of a larger system
- The Black Swan: The Impact of the Highly Improbable – This and the follow-up Antifragile: Things That Gain from Disorder, both by Nassim Taleb, aren’t specifically related to testing but are extremely valuable. The prose can be a bit heavy at times but both are excellent reads
- The Secrets of Consulting – Gerry Weinberg again. Can’t overstate how useful this is; everyone should have a consultative mindset, even if you’re not consulting
- Agile Testing: A Practical Guide for Testers and Agile Teams – Written by Lisa Crispin and Janet Gregory who really know their shit when it comes to the craft of testing and how it works with agile. There’s a follow up work that I haven’t read yet
- Perfect Software and Other Illusions About Testing – Weinberg again. Seeing a pattern?
- The Mythical Man Month – Should be required reading for anyone involved with the building of software, particularly PMs
- Experiences of Software Test Automation – Some pretty interesting case studies that might serve as a warning to others
- The Leprechauns of Software Engineering – Debunks a lot of the myths in our industry, like the cost of defects relative to when they’re found (hint: the ‘fact’ is total bullshit)
- Lean Software Development – An Agile Toolkit – First published in 2003, still relevant. Easy to read and full of practical tips
Stuff that’s related to Agile and Scrum that you might find useful, even though the journey there will be difficult
Technical stuff that I enjoyed reading that wasn’t specifically related to testing
- Introduction to the Command Line
- The Phoenix Project (fiction but a great read)
- Practical Lock Picking
- BackTrack 5 Wireless Penetration Testing Beginner’s Guide
- Metasploit: The Penetration Tester’s Guide
- Nmap Cookbook
There’s a bunch of stuff that I’m yet to get around to reading from a few authors who interest me. These are mainly about techniques used to eliminate waste at a number of levels, from software development to pointless management bullshit. Those authors may be of interest to you too.
If you end up leading or managing people, please, I beg of you, ensure you know the essentials of the above. It might help make the next generation of ~~leaders~~ management types a little less dense.
My next trick is to post up a bunch of useful links, though the blogroll is a good place to start. This will include blogs, Twitter accounts, etc. that will give you insight into the craft. Some you may know, others may be unfamiliar.
Sometimes, when reading scientific literature of the psych/social variety, there’s a tendency to think “No shit, really?” in response to the findings. However, having a published paper confirm what anecdotal evidence has long suggested lends weight to those original, un-researched conclusions.
Such was the case when I came across this particular paper: Boss Competence and Worker Well-being. Or, as the original /. lede had it: Your incompetent boss is making you unhappy.
This really helps to explain the enjoyable jobs I’ve had in the past, as well as the not-so-enjoyable ones. All the roles I’ve enjoyed were in places where I worked for, or reported to, someone I consider competent at the job they’re doing. And all the absolutely horrible ones – one last year springs to mind – involved working for, or reporting to, incompetent cretins with all the capability of an extremely stupid toaster. Either that or they’re lying scumbags.
Hmmm. That’s probably more about them being a shit person I suppose. Po-tay-to, po-tah-to.
What truly baffles me, though, is why, when we know all of this, when we understand why bonuses don’t work for knowledge work, that KPIs are seldom relevant to the work being carried out, that Theory X managers are bad for business, that happier employees contribute more to the value of the company than unhappy ones, that collaboration and trust are more effective than command and control . . . when we know all of this, why do we keep trying the same bullshit anti-patterns that continue to fail? Are companies addicted to failure? Or does someone high up in the hierarchy think that if we just keep trying the same old thing then sooner or later we’ll get it right?
Anyway, I found the paper quite interesting. Given that my current contract is coming to an end, I’m hoping that wherever I land next will give me the opportunity to work for someone who knows their shit.
Bob Marshall (@flowchainsensei) has an interesting post up on his site about No Testing. I assume it follows on the heels of the #NoEstimates movement.
Interesting read. I think I can see where he’s going, somewhat like the “everybody tests” idea espoused by Bolton, Bach, etc. (and me for that matter). I like the idea, even though my profession is directly affected by the approach.
To be honest, I don’t think we need a specific role to be doing testing. But in order to do this the whole organisation producing software needs to be involved in the “testing” effort. Developers need to get better at building their own checks. The business needs to be able to clearly articulate their vision. Testing is as much an arse-covering exercise in a lot of places as it is a check to ensure we developed the right thing the right way for the right target market.
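As a sketch of what “developers building their own checks” can mean at the smallest scale: a business rule with its checks sitting right beside it, runnable on every commit rather than thrown over the wall to a test team. The function and the rule here are hypothetical, purely for illustration.

```python
# A trivial, made-up discount rule with developer-owned checks.

def discounted_price(price, percent):
    """Apply a percentage discount, clamping percent to the 0-100 range."""
    percent = max(0, min(100, percent))
    return round(price * (100 - percent) / 100, 2)

# The checks encode the business rule directly and fail the build if
# the rule is broken.
assert discounted_price(100.0, 10) == 90.0
assert discounted_price(100.0, 150) == 0.0   # clamped to 100%
assert discounted_price(100.0, -5) == 100.0  # clamped to 0%
print("all checks passed")
```

Checks like these don’t replace the questioning, exploratory side of testing, but they do move the “did we build it right?” verification to the people writing the code.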
The discussion needs better definition. I’d also like to get at the root of why Bob thinks testing needs to go away.
I like the concepts of necessary and unnecessary wastes by the way, particularly as a way of characterising what testing is done. Something I’ll have to look at in more depth.
It’s a bit old, but this is still a good article by Esther Derby (@estherderby) on using an agile approach to become agile. Much better to set iterative goals than to have mandated ones produced by senior management. The alternative is best summed up as (via @cowboytesting) “I know we’re agile because the Director has told us we are agile and a steering committee came up with our agile process to follow”, which is a shit way of doing things.
My bullshit detector starts to go off as soon as people start talking about how “we’re Agile now” because “we have a wall with cards on it and we do standups”.
No. Just. No. Walls and standups and retrospectives and cards and sprints and all the other “ceremonies” (urgh, hate that term) of Scrum do not make you agile. What I see most often is organisations claiming agility because they have processes that turn sprints into mini-waterfalls. You know the kind: developers don’t start coding until they’ve been handed a “user story” (really just a mini requirement) from a BA. Testers don’t start testing until the code is fully completed. Automation of the current sprint’s tests gets done the sprint after the code is “done”.
Evaluate what’s going to deliver the highest value to the owners (in this case, the team, or management, or “someone who matters” in James Bach parlance) first. Slice the highest value bits up into manageable pieces and deliver them. Continually reflect and improve, asking not just “Can we do this better?” (efficiency) but “Should we really be doing this at all?” (effectiveness).
I’ve been playing with some new toys lately. The first is completely unrelated to testing. I picked up a 24-70mm 2.8 L II lens a couple of weeks ago. It’s been amazing – incredibly sharp lens, beautiful images. For full goodness I’ll need a full frame camera (currently running on my awesome 7D) but that can wait.
On the software side, I’ve come across Asana after reading about it on Rands in Repose. It looks like something I could use. I’m after an information-radiator type tool for the various teams of testers where I’m working, and this, coupled with Sprintboards (with a side helping of Instagantt for scheduling and tracking), looks like just the thing.
Oh yeah, the new gig. It’s working out better than I expected. Very much enjoying myself, even in the face of some “interesting” environmental issues. More on that later.
I like to use mind maps for, well, lots of different things. They’re handy for building a picture of a new testing project, developing test cases, fleshing out test ideas, overhauling an approach, whatever. Start with a central idea and grow outwards from there.
Mindmup is a free online tool for doing just that: quickly, easily, and without having to install anything. Get on it.