The other day I was in a conversation with a developer who was complaining about a feature. He claimed it was too complex and that it had led to tons of bugs. In the middle of the conversation, he mentioned that the feature had been so buggy that he ended up writing a lot of unit tests for it. To be honest, I don’t think there have been many bugs since those tests were written, and that made me think:

Maybe the testers in his team are doing too good of a job?

Because, you know, if testers are finding enough of “those bugs” (the ones that should be caught and controlled by unit tests, not discovered by testers weeks after the code was originally written), maybe some developers never feel the pressure and never really internalize that they should be writing tests for their own code. If the testers are very good, things just work out fine in the end… sort of. And of course, the problem here is that trailing “sort of”.
I know I’m biased, but in my view there are tons of bugs that should never be seen by anyone other than the developer who wrote the code. Testers should deal with more complex, interesting, user-centred bugs. Non-trivial cases. Suboptimal UIs. Implementation disagreements between developers and stakeholders. That kind of thing. It’s simply a waste of time and resources for testers to have to deal with silly, easy-to-avoid bugs on a daily basis. Not to mention that teams shouldn’t have to wait days or weeks for basic bugs to surface via exploratory testing. Or that many of those bugs are much, much quicker to catch with a unit test than by setting up the whole fixture/environment needed to find them through exploratory testing.
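To make that cost difference concrete, here is a minimal sketch of what I mean, in Python with pytest. The `parse_quantity` function and its edge cases are hypothetical, invented for illustration, not taken from any real codebase:

```python
# test_parse_quantity.py -- hypothetical example; run with `pytest`.
import pytest


def parse_quantity(raw: str) -> int:
    """Parse a quantity field from a form (hypothetical app code)."""
    value = int(raw.strip())  # raises ValueError on non-numeric input
    if value < 0:
        raise ValueError("quantity cannot be negative")
    return value


def test_accepts_surrounding_whitespace():
    assert parse_quantity(" 3 ") == 3


def test_rejects_negative_quantity():
    with pytest.raises(ValueError):
        parse_quantity("-1")


def test_rejects_non_numeric_input():
    # The kind of "silly" bug a tester would otherwise find days later
    # by typing "abc" into the form in a fully deployed environment.
    with pytest.raises(ValueError):
        parse_quantity("abc")
```

Each of those tests takes seconds to write and milliseconds to run; finding the same three bugs through the UI means deploying the app, navigating to the form, and trying each input by hand.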
My current conclusion is that pushing on the UI/usability side is not only good for users, but is also likely to produce better code: since that code will be, on average, more complex, it will have to be kept under tighter control by QA practices (code review, less ad-hoc design, …) and automated tests. Maybe developers will start hating me for that, but hopefully users will thank me.