Other articles about my Apple career:
- How I ended up at Apple (1999)
- My first job at Apple: AppleCare (1999-2000)
- Interviewing for the Mail team (2000)
- Mail team: Technical support (2000-2004)
- >>>>> Mail team: QA (2000-2004)
- Aperture: Senior QA (2004-2005)
- Eye Candy QA (2005-2011)
My position on the Mail team was really two jobs in one:
- Be an on-call support person for all of software engineering as well as important people in other organizations. I was the public face of Mail and also monitored the “help” mailing list
- Serve as the lone quality assurance engineer fully dedicated to Mail. This included managing incoming bug reports: analyzing, root-causing, and prioritizing them (this article)
I’d never done software testing before, so being the lone dedicated tester on one of the signature apps for Mac OS X was very intimidating! Now, there were others who did testing across the system, and there was the dogfood testing concept, but ultimately I was the one responsible when Mail was broken.
Even though I lacked experience, there was also an element of looking at things with fresh eyes. Also, there really wasn’t much uniformity and infrastructure in place, so I felt like it was a time to figure out how software should be tested. I felt an incredible amount of freedom to try things out, keep things that worked, and discard things that didn’t.
Bertrand Serlet, our fearless leader, believed that everyone was an engineer and that Quality Assurance (QA) engineering was engineering all the same; in our HR system, we were classified as software engineers. A psychological benefit of that was that I always felt like a true peer of everyone in engineering, and I was generally treated that way. That kind of close relationship with QA engineers is vital to making a great product.
The first time anyone running the overall project asked me for anything was when they asked for my test plans. I didn’t really know what they were expecting, so I looked at what others had done. This is when I first realized that test plans are a list of things that seemed worthwhile to test at the time someone demanded you have a test plan. Then you let them get horribly out of date and never use them again.
I learned that no one ever really looked at what you wrote in your test plan. I would miss major things and no one ever noticed. I would throw in random lines like “make sure this works on other planets” and no one noticed. So it was basically a checkbox item. Red tape. It didn’t sit well with me.
Eventually, I got to the point where I always wrote test plans, but I wrote them for me. And I used them. To the outside world, they may have looked the same, but there were key differences:
- They were very high level, without much detail
- The goal was to list all the areas I wanted to be sure to check at least once before the product shipped. As products became more complex, it became easier to forget to check something.
- Nothing was described specifically, which allowed randomness to be inserted at the time something was actually tested. Each time I checked an area, I could check it differently to get better coverage.
After a few years, demands from the higher-ups started to increase, and the biggest was to introduce automated testing into our workflows. I very quickly developed a dislike for it and also saw how other teams fared. The Finder team dedicated one of its two QA engineers entirely to automated testing, and I was shocked at how ineffective it was.
I became more of a student of QA at this point, learning what techniques existed and which were considered effective or ineffective. But even that didn’t always jibe with my day-to-day experience. If I looked at things from an efficiency standpoint, for every hour I had to test software, which techniques were the most effective?
I evangelized a lot of what I learned about testing, but never made much headway with management. Many years later, my experiences in these early days of experimenting with testing methods led to a number of documents I wrote about the testing process.
The seeds planted in 2000-2004 germinated over the next decade and more as I continued to test software in an increasingly large and rigid company. These manifestos have proven useful to those who have stumbled upon them, so I’ve made them available on this website and listed them below:
- My hierarchy of testing
- Ideas for better quality software
- How to use dogfood testing
- Common software usability issues
Ok, these are all very long! But I’d been writing them in my head for 10+ years, so when I finally got around to it, each of them took just a matter of hours to write.
I could split them up, but, like software, all of this is deeply interconnected. You can’t improve your testing procedures for a feature without thinking about the usability mistakes you made the last time you did a feature like this. It’s one big interconnected mess.
Kind of like software. 😉