Posts Tagged “software”
Feb 15, 2012
This is the second part of my unit testing advice. See the first part on this blog.
If you need any introduction you should really read the first part. I’ll just present the other three ideas I wanted to cover.
Focusing on common cases
This mistake consists of testing only (or mostly) common cases. Such tests rarely fail, which gives a false sense of security. Tests are better when they also include less common cases, as those are much more likely to break inadvertently. Common cases not only break far less often, but will probably be caught reasonably fast once someone tries to use the buggy code, so testing them has comparatively less value than testing the less common cases.
The best example I found was in the wrap_string tests. The relevant example was adding the string “A test of string wrapping…”, which wraps not to two lines, but three (the wrapping is done only on spaces, so “wrapping…” is taken as a single unit; in this sense, my test case could have been clearer and used a very long word instead of a word followed by an ellipsis). Most of the cases we’ll deal with will simply wrap a given string into two lines, but wrapping into three must work too, and it’s much more likely to break if we decide to refactor or rewrite the code in that function while intending to keep the functionality intact.
See other examples of this in aa20bce (no tests with more than one consecutive newline, no tests with lines of only non-printable characters), b248b3f (no tests with just dots, no valid cases with more than one consecutive slash, no invalid cases with content other than slashes), 5e771ab (no directories or hidden files), f8ecac5 (invalid hex characters don’t fail, but produce strange behaviour instead; this test actually discovered a bug), 7856643 (broken escaped content) and 87e9f89 (trailing garbage).
Not trying to make the tests fail
This is related to the previous one, but the emphasis is on trying to choose tests that we think will fail (either now or in the future). My impression is that people often fail to do this because they are trying to prove that the code works, which misses the point of testing. The point is trying to prove the code doesn’t work. And hope that you fail at it, if you will.
The only example I could find was in the strcasecmpend tests. Note how there’s a test that checks that the last three characters of the string “abcDEf” (ie. “DEf”) are less than “deg” when compared case-insensitively. That’s almost pointless, because if we made that same comparison case-sensitively (in other words, if the “case” part of the function broke), the test would still pass! Thus it’s much better to compare the strings “abcdef” and “Deg”.
Addendum: trying to cover all cases in the tests
There’s another problem I wanted to mention, which I have seen several times before, although not in the Tor tests: making complicated tests that try to cover many/all cases. This seems to stem from the idea that having more test cases is good in itself, when actually more tests are only useful when they increase the chances of catching bugs. For example, if you write tests for a “sum” function and you’re already testing [5, 6, 3, 7], it’s probably pointless to add a test for [1, 4, 6, 5]. A test that would increase the chances of catching bugs would probably look more like [-4, 0, 4, 5.6].
So what’s wrong with having more tests than necessary? They make the test suite slower, harder to understand at a glance and harder to review. If they don’t contribute anything to the chance of catching bugs, why pay that price? But the biggest problem is when we try to cover so many test cases that the code produces the test data. In those cases we have all the above problems, plus the test suite becomes almost as complex as production code: much easier to introduce bugs in, much harder to follow the flow of, etc. The tests are our safety net, so we should be fairly sure that they work as expected.
And that’s the end of the tips. I hope they were useful :-)
Feb 14, 2012
When reviewing tests written by other people I see patterns in the improvements I would make. As I realise that these “mistakes” are also made by experienced hackers, I thought it would be useful to write about them. The extra push to write about this now was having concrete examples from my recent involvement in Tor, that will hopefully illustrate these ideas.
These ideas are presented in no particular order. Each of them has a brief explanation, a concrete example from the Tor tests, and, if applicable, pointers to other commits that illustrate the same idea. Before you read on, let me explicitly acknowledge that (1) I know that many people know these principles, but writing about them is a nice reminder; and (2) I’m fully aware that sometimes I need that reminder, too.
Edit: see the second part of this blog.
Tests as spec
Tests are more useful if they can show how the code is supposed to behave, including safeguarding against future misunderstandings. Thus, it doesn’t matter if you know the current implementation will pass those tests or that those test cases won’t add more or different “edge” cases. If those test cases show better how the code behaves (and/or could catch errors if you rewrite the code from scratch with a different design), they’re good to have around.
I think the clearest example was the tests for the eat_whitespace* functions. Two of those functions end in _no_nl, and they only eat initial whitespace (except newlines). The other two functions eat initial whitespace, including newlines… but also eat comments. The tests from line 2280 on are clearly targeted at the second group, as they don’t really represent an interesting use case for the first. However, without those tests, a future maintainer could have thought that the _no_nl functions were supposed to eat comments too, and break the code. That produces confusing errors and bugs, which in turn make people fear touching the code.
See other examples in commits b7b3b99 (escaped ‘%’, negative numbers, %i format string), 618836b (should an empty string be found at the beginning, or not found at all? does “\n” count as beginning of a line? can “\n” be found by itself? what about a string that expands more than one line? what about a line including the “\n”, with and without the haystack having the “\n” at the end?), 63b018ee (how are errors handled? what happens when a %s gets part of a number?), 2210f18 (is a newline only \r\n or \n, or any combination or \r and \n?) and 46bbf6c (check that all non-printable characters are escaped in octal, even if they were originally in hex; check that characters in octal/hex, when they’re printable, appear directly and not in octal).
Testing the boundaries
Boundaries of different kinds are a typical source of bugs, and thus are among the best testing points we have. It’s also good to test both sides of each boundary, both as an example and because bugs can appear on either side (and not necessarily on both at once!).
The best example is the tor_strtok_r_impl tests (a function that is supposed to be compatible with strtok_r; that is, it chops a given string into “tokens” separated by one of the given separator characters). In fact, these extra tests discovered an actual bug in the implementation (ie. an incompatibility with strtok_r). Those extra tests asked a couple of interesting questions, including “when a string ends in the token separator, is there an empty token at the end?” in the “howdy!” example. This test is also valuable as a “test as spec”, if you consider that the answer to the above question is not obvious and both answers could be considered correct.
See other examples in commits 5740e0f (checking that tor_snprintf correctly counts the number of bytes, as opposed to characters, when calculating whether something can fit in a string; also note my embarrassing mistake of testing snprintf, and not tor_snprintf, later in the same commit), 46bbf6c (check that character 21 doesn’t make a difference, but 20 does) and 725d6ef (testing 129 is very good, but even better with 128—or, in this case, 7 and 8).
Testing implementation details
Testing implementation details tends to be a bad idea. You can usually tell you’re testing implementation details if you’re not getting the test information from the APIs provided by whatever you’re testing. For example, if you test some API that inserts data in a database by checking the database directly, or if you test that the result of a method call was correct by checking the object’s internals or calling protected/private methods. There are two reasons why this is a bad idea: first, the more implementation details your tests depend on, the fewer implementation details you can change without breaking your tests; second, your tests are typically less readable because they’re cluttered with details instead of meaningful code.
The only example I encountered of this in Tor was the compression tests. In this case it wasn’t a big deal, really, but I have seen this before in much worse situations, and I feel this illustrates the point well enough. The problem with that deleted line is that it’s not clear what its purpose is (it needs a comment), plus it uses a magic number, meaning that if someone ever changes that number by mistake, it’s not obvious whether the problem is in the code or in the test. Besides, we are already checking that the magic number is correct, by calling detect_compression_method. Thus, the deleted memcmp doesn’t add any value, and makes our tests harder to read. Verdict: delete!
I hope you liked the examples so far. My next post will contain the second half of the tips.
Oct 18, 2011
After not having blogged about anything but book summaries lately, I thought it was about time to write something else :-) EDIT: Added the last point, the most important one!
I had been thinking of trying out GNOME 3 since it was released. For a number of reasons, I only managed to give it a try a couple of days ago. I normally use KDE 4, but wanted to see how GNOME was doing these days, and wanted to see if it was something I could maybe switch to. I have to say I quite liked some of the stuff I saw, but I don’t think I can switch. My reasons:
Language switcher keyboard combination: I just couldn’t find any combination I could use. Everything conflicted with some other combination I use (esp. in Emacs). Having to change the keymap by clicking on the top bar didn’t sound sane to me.
Order of the OK/Cancel buttons: even if I switched, I would probably use a combination of systems. Having to train my brain to look for the buttons in a different position seemed like too much.
Rhythmbox seemed plain, clunky and hard to use. It seemed hard for me to do what I wanted, plus it crashed consistently while trying to listen to some podcast.
I kind of like the idea of how workspaces work (even though I have to radically change the way I use them to adapt to them), but for me it’s too much that both (a) closing the last window makes the workspace disappear and (b) you can’t create workspaces “above”. That is a deal-breaker for me.
Can’t create workspaces on the right or left? I could get used to that probably, but it added up to my frustrations with GNOME 3 workspaces.
Constant repainting issues.
Can’t make sense of the window traversing. Let’s say I have two virtual desktops, one with a browser and another one with two terminals. The focus is on one of the terminals, and I want to go to the other terminal (with the keyboard, of course). If I just press Ctrl-Tab, GNOME takes me to the web browser in the other desktop! If I want to go to the other terminal, I have to press Ctrl-Tab, then Shift-Ctrl-Tab to go back to the terminal, arrow down to see all the terminal windows, and arrow right to go to the next terminal. It’s even worse when I have Opera in one virtual desktop (maximised) with the error console in the same desktop. As Opera is maximised, I can’t even click with the mouse, so the only way to switch to the error console I can think of is doing the dance described above. Am I missing something, is this for real? EDIT: I was told about Ctrl- (changes between windows of the same application). Cute attempt, but I don’t think I can get used to thinking about whether I have to use Ctrl-Tab or Ctrl-. So that remains “impossible to use” for me.
I wonder if some GNOME user can shed light on some of those issues, although it doesn’t seem like I can find a solution for all my frustrations :-)
Oct 13, 2011
This is the third and last part of my summary of Pragmatic Thinking & Learning, by Andy Hunt. See parts one and two on this same blog. This part will cover chapters “Gain experience”, “Manage Focus” and “Beyond Expertise”.
Your brain is made to learn by exploring and building mental models on your own, not by receiving information passively (see the amazing footage of a tennis teacher in Alan Kay’s Doing with Images Makes Symbols, starting at 55:30). In real life there’s no curriculum, you’ll make mistakes and it will get messy: exactly the kind of feedback you need. We learn by “playing”, but that doesn’t mean easy or non-business-like. Papert’s students called it “fun” because it was hard, not in spite of it.
Leveraging existing knowledge helps when learning new skills: we can often find similarities, literal or metaphorical, with skills we already have. However, be careful not to cling to the similarity: fully embrace the new skill’s unique characteristics.
Failures are valuable because they can lead to study and understand what went wrong and how to fix it. They are critical to success, but only when well-managed. A good environment for good failures needs freedom to experiment (few problems have a single solution; prototype more than one solution to a problem), ability to backtrack to a stable state, reproduce any work product as of any time (source control, and the ability to run those versions of the program), ability to demonstrate progress (get comparable feedback between versions). Version control, unit testing and automation give this environment. A supporting environment can make or break learning for anyone.
Tip: to raise awareness when some code fails without apparent reason, try to imagine what the code should look like, then compare it to the real thing.
Time pressure actively shuts things down. Your vision narrows, and R-mode doesn’t get the chance to work at all.
When not doing anything, the L-mode will produce incessant mental chatter, which interferes with R-mode processing. Meditation helps controlling the L-mode “monkey voice”.
When meditating you don’t want trance, falling asleep, or “contemplating the big mystery”, but to sink into a relaxed awareness of yourself and your environment without judgement or making responses. Meditation exercise suggestion: find a quiet spot; sit in a comfortable, alert position with a straight back (become aware of any tension and fix it); close your eyes and focus your awareness on your breath; be aware of the rhythm (don’t try to change it, just be aware); keep your mind focused on your breath (do not use words or start a conversation with yourself); when you start thinking about some topic or having a conversation with yourself, let the thoughts go and bring the focus back to the breath. Even if your mind wanders often, the exercise of bringing yourself back is useful.
You have to let your thoughts “marinate” to get the best results. Different people have different methods to marinate (sitting around doing nothing, humming, eating a crunchy snack, making paper dolls…). Thinking about at least three solutions to a problem gives you the confidence that you have thought “enough” about it. Then, you can let those ideas marinate and come up with the best solution.
Develop your exocortex (mental memory or processing outside your physical brain: book collection, notes, favourite IDE, etc).
Deliberate switching to e-mail/IM. Close what you’re working on, take a deep breath, then switch.
Have a way to note things down quickly when they come up, without losing the flow of what you were doing (a wiki/scratchpad inside your IDE or whatever).
When stuck or bored, doodle on a piece of paper, or go for a walk (without talking to anyone). That helps against checking the internet or e-mail.
Don’t answer IM right away, put up a sign (“don’t disturb”) during a debugging session or similar, or close the door if you have one. When you do get interrupted, try to save your mental state before you lose it.
Get two monitors (same size and brand) to keep the whole context at sight. Organise virtual desktops by task (communications/distractions, writing, coding/checking documentation, surfing), not by kind of application (browsers, editors, terminals).
In summary: learn to quiet your chattering L-mode, deliberately work with and add to thoughts in progress, and be aware of how expensive context switching can be.
Change is harder than it looks, old habits don’t go away easily. Don’t be hard on yourself, just correct it and go back to the right path. Suggestions to make effective change:
Start with a plan. Keep track of what you have accomplished, it’s probably more than you think.
Inaction is the enemy, not error. The danger is not doing things wrong, is not doing anything.
New habits take time. Expect at least three weeks, maybe more.
Belief is real. If you think you’ll fail, you will.
Take small, next steps. Keep your big goal in mind, but don’t try to spell out all steps you need to get there.
Possible first steps (complete list on p. 247-248, this is just two highlights): (1) pick two things that’ll help you maintain context and avoid interruption, do them right away; (2) open your mind to aesthetics and additional sensory information (your cubicle, desktop, code: how “pleasing” is it?).
You don’t want to become a “niche expert”: approach learning without preconceived notions, prior judgement or a fixed viewpoint; be aware of your own reaction to new technology and ideas; be aware of yourself and the context. The biggest reason any of us fail is the autopilot.
And this is the end. I hope you liked it.
Oct 12, 2011
This is the second part of my summary of Andy Hunt’s “Pragmatic Thinking & Learning”. See the first part on this blog. This part will cover the chapters “Get in Your Right Mind”, “Debug Your Mind” and “Learn Deliberately”.
Get in Your Right Mind
A good way to involve your brain more is to use more senses than usual. For tactile feedback you can use building blocks like Lego, CRC cards, etc. Example of “role-playing” a software design on p. 77. The advantage of using the R-mode is not that it’s a panacea; it’s simply to use the other half of your brain too (reference to pair programming). Story of the climbing teacher on p. 81: the importance of feeling something first (R-mode) before learning its theory (L-mode), because it gives you the context to understand the theory and explanations better. Learning can be impeded by trying to memorise facts when you don’t grasp the whole yet. When creating, be comfortable with the absurd and impractical; when learning, get “used to it” before studying and memorising. Using metaphors can open up creativity because they connect the R-mode and the L-mode (WordNet can help when creating metaphors).
Tip: the “morning pages” technique (p. 98). Write them first thing in the morning (before coffee, shower or anything else); write at least three pages by hand, without computer; do not censor what you write; do not skip a day. Blogging is also a good exercise (what you think about a topic, what you can defend publicly).
Tip: learn martial arts or yoga to improve concentration (p. 103). Tip: break small, daily routines (turn off the autopilot).
Forcing the brain to reconcile unlike patterns broadens the scope of material under consideration (see Zen koans and Greek oracles on p. 107). Reference to oblique strategies (has electronic versions, including an Android version!).
Debug Your Mind
We make decisions and solve problems based on faulty memory and our emotional state at the time, ignoring crucial facts, etc. Some cognitive biases: anchoring (ref: experiment with numbers and prices in Predictably Irrational), fundamental attribution error (other people behave based on their personality, while we have excuses for our own behaviour; in reality, behaviour is often caused by the context), self-serving bias (“it’s my success”, but “it’s not my failure”; you’re always part of the system), need for closure (we’re naturally uncomfortable with uncertainty; we have to learn to live with it), confirmation bias, exposure effect (we prefer familiar things), Hawthorne effect (people change when they’re being watched, but after a while they go back to how they were behaving), false memory (it’s easy to confuse imagined events with real memories; every memory read is a write in light of the current context), symbolic reduction fallacy (the L-mode is anxious to “symbol-away” complexity), nominal fallacy (thinking that labelling a thing means you understand it).
How to fight biases: understand that “rarely” doesn’t mean “never”, defer closure (you know the most about a project at the end of it, so don’t take decisions too early, be comfortable with uncertainty), remember that you don’t remember well. People are mainly a product of their environment and of the times. Explanation of different American generations on p. 125-131. In summary, generational archetypes are prophet (vision, values), nomad (liberty, survival, honor), hero (community, affluence) and artist (pluralism, expertise, due process). Realise where your thinking is coming from, what are your influences, and what kind of arguments you make. Try to have a diverse team so biases can catch/cancel each other. Myers Briggs Type Indicator discussion on p. 133-135. Trust intuition, but verify. If you think you have defined something, try to define the opposite.
A single intense, out-of-context classroom event can only get you started in the right direction. You need continued goals, feedback to understand your progress and approach it far more deliberately than a once-a-year course.
For any goal (desired state, usually short-term) you have in mind you need a plan, a series of objectives (steps towards that goal). Objectives should be Specific (“learn Erlang” vs. “be able to write a web server in Erlang that dynamically generates content”), Measurable (how do you know when you’re done? related to “specific”. You don’t have to see where you’re going, just a couple of meters ahead of you), Achievable (from the current state!), Relevant (does it matter to you? is it under your control?) and Time-boxed (perhaps the most important: the deadline).
Create Pragmatic Investment Plans (PIP) to learn whatever you want to learn. Major points involved in managing the plan:
Have a concrete plan: devise different levels of goals (now, next year, next five years).
Diversify: make an effort to choose different methodologies, languages, industries, and non-technical stuff.
Active investment: you need to be able to evaluate your plan and realistically judge how it’s going. Adapt/change the plan based on that.
Invest regularly: you need to make a commitment to invest a minimum of time on a regular basis. Create a ritual, if needed.
Other techniques, like mind maps, talk to the duck and learning by teaching are mentioned in this chapter, but I’m skipping them in the summary.
And this is the end of the second part of my summary. The next one will cover the rest of the book, namely chapters “Gain experience”, “Manage Focus” and “Beyond Expertise”.
EDIT: read the third part of this summary.
Oct 11, 2011
This is the first part of my summary of “Pragmatic Thinking & Learning”, by Andy Hunt. It’s a book about how the brain works and how to take more advantage of it. It explores many interesting topics, like learning, focusing, the brain’s modes of operation, etc. Do note that I’ll skip many techniques/lessons in the summary, sometimes because they were less interesting for me (as in, I didn’t think they would work for me), sometimes because I already practise them. This first part will cover the introduction (in PDF), and the chapters “Journey from Novice to Expert” (in PDF) and “This is Your Brain”.
Despite advances in programming languages, techniques, methodologies, …, the defect density has remained fairly constant. Maybe we’re focusing on the wrong things: software is created in your head.
You don’t get taught, you have to learn. Everything is connected, there’s nothing in isolation, so sometimes small things can have unexpectedly larger effects.
Journey from Novice to Expert
The novice needs clear, context-free rules to operate, but an expert is ineffective when constrained by those same rules. Experts don’t just “know more” than novices, they experience fundamental differences in how they perceive the world, approach problem solving, etc. The stages of learning in the Dreyfus model are:
Novices. Have little or no previous experience in this skill area (“experience” results in a change of thinking; doing the same for years doesn’t count!). They don’t particularly want to learn, want to accomplish an immediate goal, don’t know how to respond to mistakes and are fairly vulnerable to confusion when things go awry (example of doing taxes for years on p. 20).
Advanced Beginners. Can start to break from the fixed rule set a little bit. Can try tasks on their own but have trouble troubleshooting. They want information fast (eg. API reference) and can start using advice in the correct context, but don’t have a holistic understanding and really don’t want it yet.
Competent. Can develop mental models and work with them effectively, troubleshoot problems on their own, and begin to figure out how to solve novel problems, as well as seek and take advice from experts. They’re typically described as “having initiative” or “resourceful” and tend to be in a leadership role in the team (formal or not). They’re great to have on your team because they can mentor the novices while not annoying the experts.
Proficient. Need the big picture, and thus will seek out and try to understand the larger conceptual framework around the skill. They’re frustrated by simplified information. This is the first level that can correct previous poor performance and revise their approach. They can also learn from the experience of others, which comes with the ability to understand and apply maxims.
Experts. Primary sources of knowledge and information in any field, continually looking for better methods and better ways of doing things. Statistically, there aren’t many: probably around 1-5%. Experts work on intuition, not reason. They may be completely inarticulate as to how they arrived at a conclusion. They aren’t perfect though, and have the same cognitive biases as everyone else. They’re also likely to disagree with one another.
Most people, for most skills, never get past the second stage. Plus, practitioners at a lower skill level have a marked tendency to overestimate their own abilities. Note that you want a mix of skills on a team. See “10 years to expertise” on p. 32. The dangers of overreliance on formal models (I’m skipping some in this list!):
Confusing the model with reality. It’s easy to confuse the two, but they aren’t the same.
Devaluing traits that cannot be formalised. Good problem-solving skills are critical to our jobs, but problem solving is a very hard thing to formalise.
Legislating behaviour that contradicts individual autonomy. You want thinking, responsible developers. Don’t reward herd behaviour.
Alienating experienced practitioners in favour of novices. By targeting your methodology at novices, you create a poor working environment for the more experienced.
Oversimplification of complex situations. Every project/situation is more complex than that.
Demand for excessive conformity. What worked great in your last project might be a disaster in the next one.
Insensitivity to contextual nuances. Formal methods are geared to the typical, not the particular. But when does the “typical” ever happen?
Mystification. Speech becomes so sloganised that it becomes trivial and loses meaning (eg. “we’re a customer-focused organisation”).
This is Your Brain
You have two “CPUs”: the linear, logical thought and language processing CPU (“L-mode”, for “linear”; the “left part of the brain”), and the searching and pattern matching CPU (“R-mode”, for “rich”; the “right part of the brain”). They share the same bus, so they can’t function at the same time.
R-mode can search “asynchronously” and come up with the response (possibly days) later. It doesn’t do any verbal processing, so the results are not verbal either (eg. trying to describe dreams). Also, it’s not under our direct conscious control: as it can give answers anytime, we have to be ready to write down anything that comes up (related: everyone has good ideas, but far fewer track them; of those, even fewer bother to act on those ideas, and even fewer of those have the resources to make a good idea a success). It is very concrete, relating things as they are; it makes analogies and doesn’t require reason or known facts to process input. It’s holistic and wants to see the whole thing at once, perceiving overall patterns and structures. It’s intuitive, making leaps of insight, based on incomplete patterns, hunches, feelings or visual images. It’s very useful for software design.
Synthesis can be good for learning, see “Don’t Dissect the Frog, Build It” on p. 62. Aesthetics also make a difference, see p. 66-67. The brain is wonderfully plastic. There’s no limit to the number of skills you can learn, as long as you believe it (ie. what you think about your brain capabilities physically affects the “wiring” of the brain itself).
And that’s all for now. The next part will cover the chapters “Get in Your Right Mind” and “Debug Your Mind”.
EDIT: read the second part of this summary.
Jun 28, 2009
I have been using Perl for many years, but I had never uploaded anything to CPAN. That’s unfortunate, because I’ve probably written several programs or modules that could have been useful for other people. The point is, now I have. Not only that, but it was code I wrote at work, so if I’m not mistaken these are my first contributions to free software from Opera. Yay me!
The two modules I’ve released so far are:
Parse::Debian::PackageDesc, a module for parsing .changes files from Debian. This is actually a support module for something bigger that I hope I’ll release soon-ish.
Migraine, a simple database migration tool for keeping your database schemas up to date (more on it below).
As I feel that Migraine could be useful to a lot of people, but it’s easy to misunderstand what it really does (unless you already know Rails migrations, of course), I’ll elaborate a bit. Imagine that you are developing some application that uses a database. You design the schema, write some SQL file with it, and everybody creates their own databases from that file. Now, as your application evolves, your schema will evolve too. What do you do to update all the databases (every developer installation, testing installations, and don’t forget the production database)? One painful way to do it could be documenting which SQL statements you have to execute in order to have the latest version of the schema, and expecting people to apply them by copying and pasting from the documentation. However, that’s messy and confusing, and it needs someone to know both which databases to update and when.
Migraine offers a simpler, more reliable way to keep all your databases up to date. Basically, you write all your changes (“migrations”) in some files in a directory, following a simple version number naming convention (e.g. 002-change_passwd_field_type.sql), and migraine will allow you to keep your databases up to date. In the simplest, most common case, you call migraine with a configuration file specifying which database to upgrade, and it will figure out which migrations are pending to apply, if any, and apply them. The system currently only supports raw SQL, but it should be easy to extend with other types.
In principle, you shouldn’t need to write any Perl code to use migraine (it has a Perl module that you can use to integrate it with your Perl programs if you like, but also a command-line tool), so you can use it even in non-Perl projects. Of course, some modern ORMs have their own database migration system, but very often you have to maintain legacy code that doesn’t use any fancy ORM, or you don’t like the migration system provided by the ORM, or you prefer keeping a single system for schema and data migrations… I think in those cases Migraine can help a lot in reducing chaos and keeping things under control. Try it out and tell me what you think!
In a couple of days I’ll blog again about other contributions to free software I’ve made lately, but this time in the form of Opera widgets…
May 10, 2009
I’ve been working on something lately that I hope I will publish sometime next month: it’s a set of tools to manage an APT package repository. The idea is that, given an upload queue (you can set it up as an anonymous FTP, or some directory accessible via SSH/SCP, or whatever floats your boat in your setup and team), you’ll have a web interface to approve those packages, a set of integrated autobuilders building the approved packages in whatever combination of architectures and distributions you want, and all that integrated with reprepro to keep your repository updated. I’ll write more about it when I have released something.
The point now is that, while working on it, I needed some module to parse command-line options and “subcommands” (like svn update, etc.). As it’s written in Perl, I had a look at CPAN to see if I could find anything. The most promising module was App::Rad, but it lacked a couple of things that were very important for me: my idea was “declaring” all the possible commands and options and having the module do all the work for me (generating the help pages, the default --help implementation, the program help subcommand and so on). App::Rad didn’t have that, and it didn’t seem to me like that was the direction they wanted to take the module. But I figured I’d drop the author an e-mail anyway and see if he liked the idea, so I could start adding support for all that…
And boy was that a good idea. He replied a couple of days later, and said that they had liked the idea so much that they had implemented it already (that’s why he took a couple of days to reply), and he sent me an example of the new syntax they had introduced and asked if that was what I was thinking. And not only that, but they added me to the list of contributors just for giving the idea! That completely made my day, free software rocks!
May 20, 2008
One week ago (but I just noticed), FFII published this press release about McCreevy trying to legalise Software Patents. I haven’t had time to read the whole thing, but this is just amazing. I mean, doesn’t Mr. McCreevy get fucking bored, if nothing else?
We don’t want your filthy software patents. We have said so many many times. Now go and [censored] yourself, find something useful to do for Europe.
May 13, 2008
> I cant tell you how much I appreciate the work you all have done. Its a work of art. If I could thank each and every one of you I would.
>
> You have given her the world to learn and explore.
>
> So if you get frustrated or tired in your work for Open Source/Free Software, just remember that somewhere in Missouri there is a 14 year-old girl named Hope, an A-student who runs on the track team, who is now your biggest fan and one of the newest users of Linux/Ubuntu.
Although I haven’t really participated in KDE or Ubuntu (not directly anyway), I too feel proud of what we, as a community, have created. Also, like that person, I feel very thankful for everything I have learned and got from the free software community.
Cheers guys, you all rock!