During an internship my boss gave me a valuable piece of feedback: "Bryan, you pick things up like a sponge, but you need to take more time testing your work." I brushed his observation off at the time, but it stuck with me. Fast forward to an entry-level QA position, and I found his words constantly ringing in my head.
Now, as a developer, I find myself resisting the urge to cut corners during testing. The temptation to call something "done" when it hasn't been fully tested, or has only been tested in optimistic scenarios, can be great at times, particularly when timelines and budgets come into play. However, every corner cut in the short term inevitably has to be repaid with interest later. Consider the following scenario:
As a developer, I am budgeted 10 hours to implement feature X. To properly test feature X, I will need 2.5 hours that have not been budgeted as part of the implementation. I will also need 1 hour to review the business requirements thoroughly, to make sure I have a strong understanding of what I am building and of all the edge cases that need to be handled.
In this contrived scenario the difference is a mere 3.5 hours. The "cost" of doing things correctly will either be passed along and accounted for in testing and defect resolution, or absorbed by the timeline and implementation budget. In percentage terms, though, that is a 35% delta between budgeted and actual effort. If this is a systemic inaccuracy in budgeting, 35% is the difference between a year-long project shipping in January versus March or April; it can be the difference between generating a healthy profit and barely making your margin (assuming everything else goes smoothly).
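The arithmetic behind that delta can be sketched in a few lines. All of the figures below are the illustrative numbers from the scenario above, not real project data:

```python
# Budget-vs-actual arithmetic for the contrived feature X scenario.
budgeted_hours = 10.0            # implementation budget for feature X
testing_hours = 2.5              # unbudgeted testing effort
requirements_review_hours = 1.0  # unbudgeted requirements review

actual_hours = budgeted_hours + testing_hours + requirements_review_hours
delta_hours = actual_hours - budgeted_hours
delta_pct = delta_hours / budgeted_hours * 100

print(f"actual: {actual_hours}h, delta: {delta_hours}h ({delta_pct:.0f}%)")
```

The same ratio is what makes a systemic estimation miss so painful: a 35% overrun on every task compounds across an entire project schedule, not just one feature.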
Let's take a look at what happens when development doesn't take a hard stand on doing things right and instead kicks the proverbial can down the road.
In this scenario we assume that the level of effort to identify, triage, resolve, and regression test a deferred defect is 3x the original testing effort, based on an initial allocation of 30% testing effort on top of implementation. In addition to these costs, there is project management overhead that is not accounted for. The point of these scenarios is not to be a hard and fast guide for budgeting, but to illustrate the added cost of not having strong development testing standards.
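The deferred-cost assumption above can be made concrete with the same style of back-of-the-envelope math. The 30% allocation and 3x multiplier are the article's illustrative assumptions, not measured data:

```python
# Cost of skipping upfront testing, under the stated assumptions:
# testing deferred now resurfaces later as defect work at ~3x effort.
implementation_hours = 10.0
testing_ratio = 0.30        # upfront testing as a share of implementation
deferral_multiplier = 3.0   # identify + triage + resolve + regression test

upfront_testing = implementation_hours * testing_ratio
deferred_cost = upfront_testing * deferral_multiplier
extra_hours = deferred_cost - upfront_testing

print(f"pay now: {upfront_testing}h, pay later: {deferred_cost}h "
      f"(+{extra_hours}h, before project management overhead)")
```

Under these assumptions, 3 hours of testing skipped today becomes roughly 9 hours of defect work later, and that is before the unaccounted project management overhead is added on top.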
Developer testing should be the first and strongest line of QA. The person who wrote the code should also be the person best qualified to break it. I am not advocating the dissolution of QA or diminishing the value of a strong tester; however, I do believe that the traditional role of QA as the apparatus that finds bugs is not the most efficient use of a QA resource. The development process should produce quality software so that QA can focus on verifying that business requirements have been satisfied, creating structured test plans, and conducting ad-hoc unscripted testing.