Posts Tagged “testability”
Jan 5, 2011
It’s been more than two months since GTAC and I still haven’t written anything about it in this blog, so I thought I’d jot down a few words and cross it off my to-do list.
As you can imagine, the conference was great. It was my first big conference and my first time outside Europe, so it was doubly exciting for me. And even though there were many interesting talks, meeting that whole crowd of testing nerds was even better. It shows that Google worked hard to get people to socialise.
But let’s start from the beginning. About a year ago I had written a talk about testability and submitted it to EuroSTAR 2010, but it was rejected. That was my third rejection, I believe, so I was starting to lose hope of ever speaking at an international conference. However, relatively soon after that rejection Google announced this year’s event, and the theme was “Test to Testability”, so I said “what the hell!”.
They said from the start that it would be invitation-only, meaning you had to apply even just to attend. That was actually pretty cool, because the idea was that Google would choose the attendees, and once selected and notified, those attendees would vote for each other’s talks to decide what the programme would be. It also meant that attendees would be chosen because they had something interesting to add to the conference, not simply the money to pay a registration fee.
And one day, right before leaving for a short vacation, I received the news that I had been chosen to attend. At that point I had no idea whether I would actually speak, but just attending was awesome, and I was really happy and a bit surprised (I was going to a conference! In India!). A couple of days later I received a long list of proposed talks to rate. Seeing so many interesting topics was exciting, because it was so promising, but also a bit discouraging, because I figured my own chances of being chosen were pretty low. Still, I didn’t lose all hope, and when the deadline came I was notified that my talk had been selected. I was pretty surprised at that point, but when I kept reading and saw that only 8 talks had been selected (plus 3 keynotes), I was downright shocked.
You don’t have to imagine the rest of the story, because in typical Google fashion all the conference material is available on the website (both videos and slides). As I imagine the conference page link will break sooner or later, I’ll just give you the official GTAC 2010 YouTube playlist. My favourite talks were (in order of appearance):
Twist, A Next Generation Functional Testing Tool - a really nice tool and a very good demo, although the facts that it isn’t open source and is tied to Eclipse were a bit of a let-down
The Future of Front-End Testing - more or less everything a professional QA engineer should know about front-end testing (but often doesn’t); I thought it was kind of basic, but it was a useful reminder, and listening to Simon Stewart is just fun
Flexible Design? Testable Design? You Don’t Have To Choose! - a great talk with unit-testing tips and patterns; one of the nice things is that those patterns aren’t only for statically-typed languages
Crowd Source Testing, Mozilla Community Style - a very nice talk about getting the community to help you test complex products, with many examples and details
I guess I should also mention “Measuring and Monitoring Experience in Interactive Streaming Applications” and “Turning Quality on its Head”. The first because I thought it was a cool story about how hard it is to find bugs that matter to users but are vague and hard to reproduce. The second mostly because of the tool that James shows off; you can see screenshots and an explanation of it from minute 52.
About my own talk, “Lessons Learned from Testability Failures”, I was really worried that I was going to freak out and freeze on stage. After all, I was used to talking in front of 5, 10, 20 or maybe 30 people. Speaking in front of around 100, knowing I was being recorded for YouTube (and that a lot of people interested in the subject would watch those videos), was quite scary in itself. And then there was the other factor: I usually speak to people who (theoretically) know less than I do about the subject at hand, and that wasn’t the case at all here. However, people there were so cool and friendly that I felt less nervous than I usually do. Watching the video, I do look nervous for the first few minutes, but after the introduction it felt really good. Kudos to the organisers and the attendees for being so open, cool and friendly. Meeting that crowd was clearly the best part of going to the conference.
All in all, it was a great experience and I made a lot of contacts and friends, and I’m looking forward to attending another similar conference (maybe next year’s GTAC?). We’ll see.
Sep 5, 2010
Summary: there’s a simple tool that will tell you which Facebook sharing options are “too open” in your account. I’d like you to help me by trying it out and telling me what you think (if you had problems using it, if you would like extra/other information to be shown, if you found any bugs, etc.). Skip to “how to use it” below if you’re not interested in the details for developers. Thanks!
Weeks passed and the tool didn’t get any updates, so I decided to step in and try to help the original programmer adapt the tool so it worked again. The ReclaimPrivacy code is on GitHub, so it was pretty easy to make my own fork and start hacking away. It didn’t take me long to adapt the first things to the new privacy settings layout, and after some more time I was much more comfortable with the code, had made more things work, added tests and even added new features. Now that it’s getting close to something we could release as the new official ReclaimPrivacy version, I’d like your feedback.
The getInformationDropdownSettings method, renamed to getSettingInformation, is now shorter, more readable, more testable and has more features. The changes are: (1) making it receive an object with the relevant part of the DOM, instead of a window object; (2) supporting, in principle, any kind of setting, not only dropdowns; (3) allowing each setting to have its own idea of what “too open” means (see the settings array); (4) allowing the caller of the method to specify its own list of recognised settings and acceptable privacy levels; (5) passing the number of open and total sections to the handler, instead of just a boolean stating whether or not there’s any “too open” setting.
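To make those five changes concrete, here is a minimal sketch of what the refactored method could look like. The name getSettingInformation and the handler receiving open/total counts come from the description above; the shape of the settings objects (selector plus isTooOpen) is my own assumption, not the actual ReclaimPrivacy code.

```javascript
// Hypothetical sketch of the refactored getSettingInformation.
// It receives only the relevant DOM fragment (not a window object),
// each setting decides for itself what "too open" means, the caller
// supplies its own list of recognised settings, and the handler gets
// counts instead of a single boolean.
function getSettingInformation(dom, settings, handler) {
  var openCount = 0;
  var totalCount = 0;
  settings.forEach(function (setting) {
    var element = dom.querySelector(setting.selector);
    if (!element) { return; }  // this setting isn't on the page
    totalCount += 1;
    // Each setting carries its own notion of "too open".
    if (setting.isTooOpen(element)) {
      openCount += 1;
    }
  });
  // Number of open sections and total sections, not just a boolean.
  handler(openCount, totalCount);
}
```

Passing a plain object with a querySelector method (instead of a full window) is also what makes the method easy to unit-test with fake DOM fragments.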
I made the old getUrlForV2Section more testable by extracting the most interesting (read: likely to break or need maintenance) code to its own method, _extractUrlsFromPrivacySettingsPage, and making the new getUrlForV2Section work with both real URLs (checking Facebook with an Ajax call) and fake HTML dumps representing what those URLs would return.
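The shape of that extraction might look something like the sketch below. Only the two method names come from the post; the regex and the idea of injecting the page fetcher as a parameter are illustrative assumptions.

```javascript
// Hypothetical sketch: the fragile scraping logic lives in one small,
// easily unit-tested function that works on a plain HTML string.
function _extractUrlsFromPrivacySettingsPage(html) {
  var urls = [];
  var pattern = /href="([^"]*privacy[^"]*)"/g;  // made-up matching rule
  var match;
  while ((match = pattern.exec(html)) !== null) {
    urls.push(match[1]);
  }
  return urls;
}

// The public method accepts either a real URL (fetched via Ajax in the
// browser; here the fetcher is injected so the sketch stays testable)
// or a fake HTML dump representing what that URL would return.
function getUrlForV2Section(source, handler, fetchPage) {
  if (/^https?:/.test(source)) {
    fetchPage(source, function (html) {
      handler(_extractUrlsFromPrivacySettingsPage(html));
    });
  } else {
    handler(_extractUrlsFromPrivacySettingsPage(source));
  }
}
```

The payoff is that tests can feed in saved HTML dumps of Facebook’s pages and exercise the scraping logic without any network access.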
I made the old withFramedPageOnFacebook, a very important method used in several places, more flexible by accepting not just URLs, but also functions or data structures (new withFramedPageOnFacebook).
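A minimal sketch of that flexibility, under the same caveat: only the method name is from the post, and the dispatch logic (and the injected frame loader) is an assumption for illustration.

```javascript
// Hypothetical sketch of the more flexible withFramedPageOnFacebook:
// it now accepts a function, a URL string, or a plain data structure.
function withFramedPageOnFacebook(source, handler, loadUrlInFrame) {
  if (typeof source === 'function') {
    // Functions produce the page content on demand (handy for tests).
    handler(source());
  } else if (typeof source === 'string') {
    // URLs go through the hidden-iframe loader, injected here so the
    // sketch doesn't depend on a browser environment.
    loadUrlInFrame(source, handler);
  } else {
    // Plain data structures are handed to the handler as-is.
    handler(source);
  }
}
```

Accepting functions and data structures alongside URLs means callers (and tests) can bypass the iframe machinery entirely when they already have the content.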
Aug 10, 2010
This post is probably not about what you’re thinking. It’s actually about automated testing.
Different stuff I’ve been reading or otherwise been exposed to over the last few weeks has led me to a sort of funny comparison: code is (or can be) like science. You come up with a “theory” (your code) that explains something (solves a problem)… and you make sure it can be measured and tested, so that people can believe your theory and build on top of it.
I mean, something claiming to be science that can’t be easily measured, compared or peer-reviewed would be ridiculous. Scientists wouldn’t believe it, and they certainly wouldn’t build anything on top of it, because the foundation isn’t reliable.
I claim that software should be the same way, and thus it’s ridiculous to trust software that doesn’t have a good test suite, or even worse, that may not even be particularly testable. Trusting software without a test suite is not that different from taking the developer’s word that it “works on my machine”. Scientists would call untested science pseudo-science, so I’m tempted to call code without tests pseudo-code.
Don’t get me wrong: of course you can test by hand, and hand-made tests are useful and necessary, but they only prove that the exact code you tested, without any changes, works as expected. And you know what? Software changes all the time, so that’s not much help. If you don’t have a way to quickly and reliably measure how your code behaves, every change you make is a leap of faith. And the more leaps of faith you take, the less credible your code is.
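To make the “measurement” idea concrete, here’s a toy example (the function and its behaviour are made up purely for illustration). A hand test proves one run of one version; an automated assertion re-proves the same behaviour after every future change, for free.

```javascript
// A made-up function whose behaviour we want to keep stable over time.
function slugify(title) {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')  // collapse non-alphanumeric runs
    .replace(/^-|-$/g, '');       // trim leading/trailing dashes
}

// This assertion is the "measurement": run it after any change to
// slugify and you instantly know whether the behaviour still holds.
console.assert(slugify('Hello, World!') === 'hello-world');
```

The assertion costs almost nothing to run, which is exactly what makes it a reliable measurement rather than a leap of faith.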