HCoder.org
Posts in Category “Computers”
GIMP Scripting
Dec 3, 2020
The last month or so I have been working on a project that I will talk about shortly. For that I had to do some simple image edits often, and after the third time or so I figured it was better to script them. I had done a little GIMP scripting before so I figured it wouldn’t be hard to write a couple of scripts to do what I wanted (which was very simple to start with). However, every time I have to do something with GIMP scripting I forget the details and need to check them again, so I figured it would be useful to leave some notes for myself and for others who might be interested.
Scripting languages
There are two main scripting engines you can use in GIMP: Script-Fu (which uses Scheme, a Lisp dialect), and Python-Fu (which uses Python). I have used both at some point but this tutorial will use Script-Fu. Unfortunately, an introduction to Scheme as a language would be too long for this blog post, but you can check the Script-Fu tutorial in the official documentation.
Note that in the example I’m using some functions defined in GIMP 2.10 and later, so the example code will not work as-is with older versions of the software (in particular, you will have to delete the calls to the functions gimp-context-set-stroke-method and gimp-context-set-line-width, and change gimp-drawable-edit-stroke-selection to gimp-edit-stroke… but if you do that, the script will lose some functionality).
Types of scripts/uses
There are two typical uses for these scripts: adding extra menu entries so we can call our script from within GIMP itself, and adding functions that we can call from the command-line. The former typically receive images or layers, and the latter typically receive filenames. In both cases we define functions that do what we want (from images/layers or from filenames) and then we either register them in the menu, or leave them as-is so we can call them from the command-line.
Defining a simple function
Create a new file add-border-to-image.scm inside your GIMP scripts/ directory. If you don’t know where that is, go to Edit ➝ Preferences ➝ Folders ➝ Scripts. You can use any of the folders in that list, or even add a new one.
In that file, enter the following text:
(define (add-border-to-image image)
  (let* ((drawable (car (gimp-image-get-active-layer image))))
    ;; The context push allows us to change settings (like the
    ;; foreground colour) and go back to the previous settings
    ;; when we pop it. The gimp-image-undo-group-* makes sure
    ;; that all the operations are considered only one with
    ;; regards to undo.
    (gimp-context-push)
    (gimp-image-undo-group-start image)
    (gimp-selection-all image)
    (gimp-selection-shrink image 2)
    (gimp-context-set-stroke-method STROKE-LINE)
    (gimp-context-set-line-width 4)
    (gimp-context-set-foreground "#657487")
    (gimp-drawable-edit-stroke-selection drawable)
    (gimp-image-undo-group-end image)
    (gimp-context-pop)))
This code defines a function called add-border-to-image, which receives an image and paints a 4-pixel border on it. It also makes sure that all the operations are considered only one for undo purposes.
Once we have that function, we can add a menu entry for it, or we can prepare it so it’s easy to call from the command-line.
Functions already defined in GIMP
When you write these scripts you will need to check which functions are already defined in GIMP (like gimp-selection-all or gimp-drawable-edit-stroke-selection here). You do that by going to Filters ➝ Script-Fu ➝ Console and then clicking on the “Browse…” button. You will see a list of functions and a search box you can use to search for them. Note that by default the search only looks for functions with those names, so you might want to change that to “by description”.
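The same console is also handy for trying the new function interactively. A minimal sketch (the file path here is just an example, not something from the original setup):

(let* ((image (car (gimp-file-load RUN-NONINTERACTIVE
                                   "/tmp/test.png" "test.png"))))
  (add-border-to-image image)  ; paint the border on the loaded image
  (gimp-display-new image)     ; open a display so we can see the result
  (gimp-displays-flush))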
Adding a menu entry for the function
If we want to be able to paint borders on an image we have open in GIMP, we can add the following code at the end of the file, then either restart GIMP or go to Filters ➝ Script-Fu ➝ Refresh Scripts:
(script-fu-register "add-border-to-image"
                    "<Image>/Edit/Add border"
                    "Paints a 4-pixel border"
                    "Esteban Manchado Velázquez"
                    "Esteban Manchado Velázquez"
                    "2020"
                    "RGB*"
                    SF-IMAGE "Image" 0)
This will add a new menu entry under Edit called “Add border”. The entry will be grayed out if you don’t have any image open. If you do, you will be able to click on the new option and see the newly added border.
Note: for some reason that I haven’t been able to find out, you will need to click anywhere on the image for the border to appear. This is only a problem when using it from the menu.
Preparing a function for the command-line
To call the function from the command-line we will have to create a new function that receives a file path pattern (could be a file path or something like images*.png), opens the file(s), calls the function we defined, and then saves the file(s) somewhere. Add this at the top of add-border-to-image.scm:

(define (add-border-to-file file-pattern)
  (let* ((filelist (cadr (file-glob file-pattern 1))))
    (while (not (null? filelist))
      (let* ((filename (car filelist))
             (image (car (gimp-file-load RUN-NONINTERACTIVE filename filename)))
             (drawable (car (gimp-image-get-active-layer image)))
             (output-filename (string-append (car (strbreakup filename "."))
                                             "-focused.png")))
        (add-border-to-image image)
        (gimp-file-save RUN-NONINTERACTIVE image drawable output-filename output-filename)
        (gimp-image-delete image))
      (set! filelist (cdr filelist)))))
This will receive one parameter, namely file-pattern, interpret it as a file pattern with possible wildcards (with the file-glob function) and go through every file that matches that pattern. Then, for every file, it will open it, calculate the output filename for that file, call add-border-to-image with the open image, and then save the result in the calculated output file and close the image.
Calling the function from the command-line
The command-line incantation to call this function is not trivial and I always forget it. We need to call the gimp program with the -i option (so that the user interface doesn’t load) and the -b option (for batch), and pass it some Scheme to call the function we want (in this case, add-border-to-file). We also need to call the special function gimp-quit so that GIMP quits after the function has finished. We do so by calling it like this:

gimp -i -b '(add-border-to-file "img*.png")' -b '(gimp-quit 0)'
Conclusions
Automating repetitive tasks with Script-Fu can be extremely useful and save us a bunch of time. We can expose the functionality we create in two ways: via GIMP’s own menus, and via the command-line’s “batch mode”. If one is used to Lisp dialects, using Script-Fu with Scheme is not too difficult; otherwise, Python-Fu can be a better alternative. Both languages have an interactive console inside of GIMP to try things out, plus a documentation browser to see and search for available Script-Fu functions.
If one wants to learn more details, the official documentation has a Script-Fu tutorial.
FireHOL and Wireguard
Apr 8, 2020
The last blog post was a quick introduction to FireHOL, a tool for building firewalls. In this blog post we will see how to configure FireHOL so that Wireguard can work, if you want to install Wireguard on the same server. In this configuration, Wireguard will be used as a simple VPN server (think OpenVPN): accepting connections from a client (typically a laptop or a mobile phone) and routing that traffic to the internet.
EDIT: Corrected/simplified a couple of things, based on feedback.
Assumptions
For this blog post, I will assume that you already have Wireguard working, and you have FireHOL installed and configured (except that Wireguard now doesn’t work, and you have to fix FireHOL’s configuration to make it work again).
I will assume that your Wireguard interface is wg0, you are using the (standard) Wireguard port 51820, and your main network interface is eth0.
Configuring Wireguard
There are three things we must do in order to make Wireguard work:
- Accept the initial connection to the Wireguard server port
- Accept traffic from the Wireguard network interface
- Route the traffic from the Wireguard interface to the internet (the main network interface)
Accepting Wireguard connections
The first thing one has to do is to open the Wireguard port. Because Wireguard’s port is not defined in FireHOL, we need to specify the port like this:
interface eth0
    # ...
    server custom wireguard udp/51820 default accept
If you put those two lines at the end of your interface eth0 definition you should be good. Note that, if you would prefer that line to look like the other service definitions, you can tell FireHOL what the Wireguard port is and define that line like server wireguard accept.
Accepting traffic from the Wireguard interface
For that we need to declare the Wireguard interface and accept everything from/to it:
interface wg0 vpn
    policy accept
Put those lines before or after your other interface definitions.
Routing
Last but not least, we need to allow the traffic from wg0 to be routed to and from the main network interface. To do that, put these lines at the end of your configuration file:

router vpn2internet inface wg0 outface eth0
    masquerade
    route all accept
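For orientation, the Wireguard-related parts of the whole file would end up looking roughly like this (a sketch only; the eth0 interface label and the omitted rules are assumptions, not something from this post):

interface eth0 internet
    # ... your existing client/server rules ...
    server custom wireguard udp/51820 default accept

interface wg0 vpn
    policy accept

router vpn2internet inface wg0 outface eth0
    masquerade
    route all accept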
Conclusions
One could do more sophisticated configurations, but that’s a basic one that should work well. As always, activate the new configuration with firehol try, so that if you break anything you will not lose access to the server. I hope this post was useful!
Firewalls with FireHOL
Apr 7, 2020
If you have a computer connected to the internet, eg. a server/VPS in some hosting company, you are receiving lots of attacks from randos on the internet. It’s a very good idea to have a firewall because chances are that at some point someone will reach your server with an exploit that you haven’t had time to patch yet. Like, it’s a matter of when and not if. But, what if you aren’t really the sysadmin type and you have no idea about iptables or any of those incantations needed to protect yourself? Don’t fret, because FireHOL has you covered.
Installation
On Debian (and probably Ubuntu?), you can install it by typing:
sudo apt install firehol && \
    sudo systemctl stop firehol && \
    sudo systemctl disable firehol
The systemctl calls are VERY IMPORTANT because of a current bug in the package, which will leave your server inaccessible! After it’s installed (and disabled), add server all accept to the default config file /etc/firehol/firehol.conf so that it ends up like this (skipping the initial comment block):

version 6

# Accept all client traffic on any interface
interface any world
    client all accept
    server all accept
At this point you can set START_FIREHOL=YES in /etc/default/firehol, and then run:

sudo systemctl enable firehol && sudo systemctl start firehol
That will give you a running FireHOL that won’t filter anything. So, same as you had before you even installed FireHOL. But at least now you can start…
Defining rules
There are two kinds of things you will want to block with FireHOL: ports/services, and IPs. The first is very easy. The second is not too hard but you need to learn a thing or two to make it sustainable (ie. use lists maintained by others).
Blocking ports
More than blocking ports, you specify which ports/services you want open, and everything else is closed by default. Instead of saying server all accept, you put lines like this in its place:

server http accept
server https accept
server ssh accept
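Combined with the default config from above, a minimal sketch of /etc/firehol/firehol.conf for, say, a web server could look like this (the exact list of services is just an example):

version 6

# Allow all outgoing (client) traffic, but only accept
# web and SSH connections from the outside
interface any world
    client all accept
    server http accept
    server https accept
    server ssh accept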
Maintaining IP lists
The easiest way to filter bad IPs (malware, spammers, etc.) is to download IP lists and blacklist them from the FireHOL configuration. There’s a tool called update-ipsets (available in the firehol-tools package in Debian) that you can use to download them. You can run update-ipsets to see the available lists (and update them, if enough time has passed) and update-ipsets enable <listname> to enable them. For example, you can run this command to enable the spamhaus_drop and spamhaus_edrop IP lists:

sudo update-ipsets enable spamhaus_drop spamhaus_edrop && \
    sudo update-ipsets
This will download the lists under /etc/firehol/ipsets. Once they are there, you can add these lines to your configuration file (before the interface definitions) to block incoming connections from any of the IPs and networks mentioned by the lists above:

ipv4 ipset create badnets hash:net
for list in spamhaus_drop spamhaus_edrop; do
    ipv4 ipset addfile badnets ipsets/$list.netset
done
ipv4 blacklist ipset:badnets
Tips
Trying your changes
You can use the firehol try command to try changes: it will automatically revert in 30 seconds unless you type commit in a terminal.
Keeping your logs clean
By default, FireHOL will send log data (including every single dropped connection!) to syslog. If you want to keep your syslog clean and send FireHOL logs to a different file, you can do the following:
- Install the firehol-doc package with sudo apt install firehol-doc.
- Add FIREHOL_LOG_PREFIX=FireHOL: at the top of /etc/firehol/firehol.conf.
- Use the provided example files (see below).
To use the example rsyslog configuration and the example logrotate configuration, run the following commands (the latter is so that the FireHOL log files don’t grow forever):
sudo cp /usr/share/doc/firehol/examples/rsyslog/rsyslog_d-firehol.conf \
    /etc/rsyslog.d/firehol.conf
sudo cp /usr/share/doc/firehol/examples/rsyslog/logrotate_d-firehol \
    /etc/logrotate.d/firehol
Once you follow these steps you will have the FireHOL logs under /var/log/firehol.
Conclusions
FireHOL is a great tool to make firewalls easily without having to learn arcane syntax or command-line options. Even if you don’t have advanced sysadmin knowledge, it’s easy to get started and secure your servers. I hope this little guide was useful!
Making lyric videos
Jan 15, 2019
For some time now I had wanted to find a way to create lyric videos without much effort. I really didn’t want a custom video per song, with hand-made everything. Just an easy way to call some scripts and do some tweaks by hand, and get a lyric video.
My first attempts with ImageMagick and animated GIFs didn’t quite work out because the animated GIFs, at least when converted to a video (to add the audio track), didn’t really keep the timing they were supposed to. So I was about to give up, but Luka gave me a very good idea: to make subtitles out of the lyrics, and then use ffmpeg to burn the subtitles into the video itself. So I got to work.
Note: This whole process is absolutely for nerds, as it involves the command-line, Linux, and some hand-made scripts and tweaks in Aegisub, the subtitling program I used. If you’re looking for an easy way to do it with a graphical or web program, you’re out of luck (there might be ways, but this is certainly not it).
Preparing the subtitles
The first step is to create the subtitles for the lyrics. Astonishingly, I start with a simplified text file (in a made-up format), then convert it with a hand-made script into an .srt file… then I convert it to the final .ass format in Aegisub. So, yes, the lyrics go through three different formats in total.
The first, made-up lyric format looks like this:
00:52,500 Another gal asked him to please everyone
00:55,000 what an impossible burden to bear
00:59,000
01:09,000 Bunch of Satan suckers
01:11,500 Selling cola products
01:14,500 Are you Christians? Please forgive me
01:17,500 If you didn't like what I said
As you can probably see, it’s a very compact format that can be written easily in a text editor. The idea is that each line stays until the next one, hence I add an empty line with a timestamp (the third line) to make the previous lyric show for 4 seconds, instead of 14. With a hand-made script I convert this format to an .srt file, by calling it like so:

./lyrics-to-srt.sh melvin.txt >melvin.srt
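The conversion script itself isn’t shown here, but to give an idea, the generated .srt for the snippet above would look roughly like this (a sketch, assuming each lyric simply lasts until the next timestamp and the empty entry produces no subtitle):

1
00:00:52,500 --> 00:00:55,000
Another gal asked him to please everyone

2
00:00:55,000 --> 00:00:59,000
what an impossible burden to bear

3
00:01:09,000 --> 00:01:11,500
Bunch of Satan suckers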
Now you might be wondering: why don’t you convert them directly into the final format? The reason is that the .ass format is a bit more involved, and it contains formatting information, too, like the font used, font size, position of the subtitle lines, etc. It was easier for me to convert to .srt first, do the visual stuff in Aegisub, and save the file as .ass in Aegisub.
So how does that work, exactly? Aegisub has a “Styles Manager” (available from the menu “Subtitle”) in which you can define a subtitle style. In it you define the font and other things. You define that once, and that style will be available for you in any file you open with Aegisub. So once I have the subtitles in .srt (a format that Aegisub can open), I open the file there, select all subtitle lines with Ctrl-A, then choose the style I want in the UI:
After clicking on the style, all lines get marked as having that style, as you can see here (notice the change in the “Style” column, to the left of the lyric text itself):
Now I can choose “Save as…” and save the file as eg. melvin.ass.
Preparing the background video
Once I get the subtitles in the appropriate format, I can make the base video (without lyrics) in Kdenlive. The process is as follows:
- I load the song with “Add Clip” on the left hand pane.
- I click “Add Title Clip” on the left hand pane, and I put whatever static text or images I want for the background, including the name of the song.
- I add both to “Audio 1” and “Video 1” respectively, and make sure that the title clip is as long as the audio track.
- I render it to an MP4 video file.
This will give me a video file that has the title of the song and whatever background I want, but no lyrics or subtitles.
Putting it all together
At this point I have two things: the final subtitles in .ass format, and the basic video without the lyrics. With this, I can use ffmpeg to produce a second video with the subtitles already on it, in whatever font, size, and position I chose in Aegisub. The magic command is this:

ffmpeg -i background-melvin.mp4 -vf ass=melvin.ass melvin.avi
The result will be a video with the subtitles rendered on it, like the Melvin lyric video you can see on YouTube.
Conclusion
Although it does involve several steps and it’s pretty dirty, this method to create the videos is good enough for me. I wasn’t going to create that many (only four in this batch) and I’m comfortable using these tools.
It should be possible to simplify the workflow by typing the subtitles directly in Aegisub, instead of in a text file, then convert them. I might do exactly that the next time I have to make another batch of these, but this time I already had the lyrics in the first format (due to my previous attempts with ImageMagick) so I figured I’d convert them to be able to open them in Aegisub instead of typing them there again. I hope this post was useful!
Controlling ligatures in LibreOffice.org
Jul 31, 2018
For a tiny project of mine (that I’ll publish once it’s ready) I needed to write a short document, and I used LibreOffice.org, as always. I wanted a fancy, old fashioned font for the document, so I headed for Font Squirrel and found a font I liked, Elsie. When I had written several paragraphs I realised that there was a ligature (“fi”) that didn’t display correctly. I really liked the font and I didn’t want to change it, but I couldn’t really use it as-is. So I started looking for ways to disable certain ligatures, or at least ligatures in general.
Disabling ligatures in LibreOffice.org
Looking around on the internet (mostly Stack Overflow) it seemed that at least modern versions of LibreOffice.org, with at least certain types of fonts, could disable at least certain ligatures. Or ligatures in general. Or something.
It wasn’t very clear to me at first, but after digging a bit I saw that fonts define certain “flags” that you can turn on and off. And how do you do so? Through a very ugly hack: you can ask LibreOffice.org to use a font like “Elsie:-liga”, and that’s interpreted as using the Elsie font but disabling the liga flag. Unfortunately, in this case there’s no granularity in the ligatures in this font, so I couldn’t disable just the “fi” ligature. In this case it wasn’t a big deal because the other ligatures were a bit over the top for the body text anyway. As I didn’t have any “fi” in the titles, I’ve left the full font plus all ligatures for the titles.
Finding out the tags for a given font
Now, how do you know which flags are available in a given (OpenType) font? Under Linux you have a collection of utilities called lcdf-typetools, which includes a utility called otfinfo. You can read more in How the OpenType font system works, the article where I found this information.
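If you don’t have those utilities yet, on Debian and Ubuntu they are usually packaged under the same name (an assumption about your distribution, not a detail from the original post):

sudo apt install lcdf-typetools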
In this case, the output of the tool was:

$ otfinfo -f elsie/Elsie-Regular.otf
liga Standard Ligatures
salt Stylistic Alternates
In this case one can guess that there’s no way to disable just the “fi” ligature, and I just had to use the Elsie:-liga trick to get rid of all of the ligatures. I could have marked the parts with “fi” and removed ligatures only there, but I thought it wasn’t worth it.
Installing a newer LibreOffice.org
Also, all this only works under LibreOffice.org >= 5.3. Unfortunately, my version was older so the trick didn’t work. However, I have the fantastic Flatpak installed for these cases, so it’s easy to install random versions of random programs without messing with the base packages of the operating system or adding new eg. APT sources. So I went to Flathub and found a recent enough version of LibreOffice.org.
Conclusion
It’s possible to tweak certain characteristics of an OpenType font under LibreOffice.org >= 5.3 through a really ugly hack with the font names. These names can of course be used in styles, so one can define the “Body text” style to use eg. Elsie:-liga instead of simply Elsie to remove ligatures from the body text. For more information about OpenType fonts under Linux, read the article How the OpenType font system works.
Installing F-Droid on Nokia 6
Jul 1, 2018
I had to buy a new phone recently for reasons. I decided to stay on Android partly because of the apps I’m already using and depending on. Many of them I install from F-Droid, and AFAIK most are only available there. As F-Droid is not available on the Google Play Store, an initial “bootstrapping” is needed. Typically, you download the .apk file from the F-Droid site and install it from the file manager.
However, there was apparently no way to install the .apk file from the file manager that came with the phone, so I didn’t know what to do. I looked for the relevant options trying to find the right switch to make it possible, but I didn’t find anything. I tried to look for solutions on the internet but again nothing. After several attempts I figured there must be some application on the Google Play Store that would allow you to install .apk files, and I was right. The problem was, all the applications I could see were full of ads and I’m guessing they would leak a lot of information about the phone and its user.
After a while, though, I found one that seemed to do what I wanted without displaying ads and (hopefully) not spying on you. It’s called App Installer and it’s made by a certain Eugen C. It was simple to use (it just finds all the .apks you have downloaded and presents them so you can choose which one to install) and it worked like a charm. After that I could finally start installing the applications I wanted from the F-Droid store.
A year of Elm
Jun 7, 2017
It hasn’t quite been one year since I wrote the first post on Elm, but it’s not that far off and the title was clearer that way. In this time I have written three programs in Elm, each one more complex than the last:
- Client for the catalogs generated by cataloger. My RPG resources catalog is an example of the result.
- Prexxer, a “Dress-up” application to combine sprites and create characters. You can see the code on GitHub.
- NARROWS, a storytelling system halfway between online role-playing games and improvised Choose Your Own Adventure books.
The first one was useful to get to know (and love) Elm, and of course to get my RPG resources catalog online. With the second I learned about Elm ports (the interface to interact with JavaScript) and got a bit more comfortable with Elm. With the last one, the only one that has a back-end (written in ES6, though; Elm is only for front-end), I learned how to structure big applications in Elm, how to use the excellent ProseMirror, and how to use much more complex Elm ports.
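For reference, Elm ports look roughly like this (a minimal sketch with made-up names, not actual code from any of these projects):

port module Sprites exposing (..)

-- Values sent through this port go out to JavaScript...
port renderSprite : String -> Cmd msg

-- ...and values coming back from JavaScript arrive through this one.
port spriteClicked : (String -> msg) -> Sub msg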
What I like about Elm
- It’s a simple language with few concepts, easy to learn and understand. After my initial (small) struggles, I have mostly loved it since.
- The Elm Architecture is really nice and simple, and it’s nice that it’s integrated into the environment. It’s like having a framework for free with the language.
- Everything immutable: the easy way is the right way.
- Nice, clear, useful compiler error messages.
- Generally nice toolchain.
- Static types help you a lot, mostly without being annoying.
- Newer major versions of the language simplify things further and make things clearer and less error-prone (I’ve been through two major updates).
What I don’t like about Elm
- The compiler seems slow and often seems to compile older versions of the code. This might be my own fault, as I have a custom Brunch configuration with an Elm plugin and I don’t even understand Brunch all that well. In comparison, the TypeScript compiler was amazing.
- Some complex types and interfaces, notably the modules for HTTP requests and JSON parsing. The former improved a lot in the new version, Elm 0.18.
- Newer major versions of the language break compatibility and you have to rewrite parts of your application, which is really annoying… but also worth it.
- I’m not that fond of currying. Sometimes I feel like it makes some compilation error messages harder to understand.
Conclusion
I’m really happy I gave Elm a try almost a year ago. Although I’m looking forward to going back to ClojureScript for some project, I really, really enjoy Elm, and it has almost become my de facto front-end language/framework.
First impressions of Elm
Jul 18, 2016
For my next pet project I decided to learn Elm. Elm is a functional language similar to Haskell that compiles to JavaScript. These are my first impressions, so take them with a grain of salt.
Syntax
Elm’s syntax is similar to Haskell’s. I had tried to learn Haskell a long time ago but failed miserably because I couldn’t understand the types. I did not find Elm’s syntax to be a problem, and it was nice and simple, especially compared to Elixir, the last language I had learned. However, I think it was the first time in my life that I wished I had had a small syntax description or guide before diving too deep into the language. For me there were two confusing things:
- Sometimes I saw a bunch of words that were separated by spaces or by commas, and it took me a bit of time to realise that function arguments are separated by spaces, but elements in a list or similar are separated by commas.
- Compound types that have more than one word, like List Int. That one is easy to figure out of course, but when I saw Html Msg I had no idea what it meant. I’m still not completely sure, in fact.
The first point in particular has an implication: if you use the result of a function call as an argument for a second function, you must enclose the first function call in parentheses. This all seems super obvious in retrospect, but when staring at code that uses a DSL-like library to generate HTML in a language you’re starting to learn… well, it would have helped to have the first point spelled out. Example:
ul [] (List.map (tagView sectionPage) (ModelUtils.getSectionTags section))
Here we have a call to the function ul that has two arguments: an empty list and another list, namely the result of the call to List.map. Note how the whole List.map call must be enclosed in parentheses: otherwise, it would be interpreted as a call to ul with four arguments, not two. In the same way, both arguments to List.map must be enclosed in parentheses, too.
Elm Architecture
Although strictly speaking you don’t have to use it, the Elm Architecture is a fundamental part of Elm. It’s the high-level pattern for application architecture when writing Single-Page Applications in Elm, the main use case for the language. The pattern is very similar to redux and friends in React, but it’s nicer and more natural in a functional, statically-typed language like Elm.
In short, it means separating your application into sub-applications that have a model, a React-style view produced from the model, a set of messages the view can send, and an update function that receives messages and changes the model. Besides, Elm supports commands and subscriptions, which gives a nice, clean interface for WebSockets and other things.
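As an illustration only (this is essentially the counter example from the official guide, in Elm 0.18 syntax, not code from my project), a complete Elm Architecture program can be as small as this:

import Html exposing (Html, button, div, text)
import Html.Events exposing (onClick)

main =
  Html.beginnerProgram { model = 0, view = view, update = update }

type alias Model = Int

type Msg = Increment | Decrement

-- The update function receives messages and returns the new model
update : Msg -> Model -> Model
update msg model =
  case msg of
    Increment -> model + 1
    Decrement -> model - 1

-- The view produces Html that can send Msg values, which is also
-- what a compound type like "Html Msg" means
view : Model -> Html Msg
view model =
  div []
    [ button [ onClick Decrement ] [ text "-" ]
    , text (toString model)
    , button [ onClick Increment ] [ text "+" ]
    ]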
Conclusion
Although I’ve been looking forward to going back to ClojureScript, and in particular learning and playing with Om Next, Elm is certainly a worthy contender and totally worth checking out, especially if you’re using React and you want to go one step further.
I admit I did get frustrated now and then with the static types, and at first a bit with the syntax (see the two points above) and the indentation. However, all in all I enjoyed learning and using Elm a lot, and it feels very clean, productive, and just nice to program in.
The application I wrote was very small, though, and I didn’t quite get to explore patterns in how to split an application in several sub-applications. I did read a bit about it but didn’t get to use anything fancy.
And if you want to see what I built, head over to http://rpg.hcoder.org/ for the site, or https://github.com/emanchado/rpg-catalog for the code.
First impressions of Elixir + Phoenix
Jul 1, 2016
I had been curious about Elixir for some time. After all, the promise of having the best of Erlang with a more palatable syntax was very attractive indeed.
A couple of days ago I finally finished a small project in Elixir using the Phoenix web framework, which is a sort of “Elixir on Rails”. These are my first impressions of both Elixir as a language and Phoenix as a framework. Take all this with a grain of salt: most of it is pretty subjective, and it’s from the perspective of a total Elixir/Erlang noob.
Elixir
I used Introducing Elixir for learning, which turned out to be a bad choice because it can feel like an intro to functional programming using Elixir, not so much an in-depth book about Elixir for someone who knows functional programming. In fact, the book preface says:
If you’re already familiar with functional languages, you may find the pacing of this gentle introduction hopelessly slow. Definitely feel welcome to jump to another book or online documentation that moves faster if you get bored.
Elixir is a bit of a mindfuck for me in that it looks like Ruby, but it’s not object-oriented at all. The language also seems to value convenience a bit too much for my taste (sacrificing simplicity or consistency). In particular, I find the extra convenience syntax for symbols in maps extremely annoying:
%{"blah" => 1}   # string as a map key
%{blah: 1}       # symbol as a map key (instead of %{:blah => 1})
Another case of different syntax options is the if statement. These two are equivalent:

if x > 10 do
  :large
else
  :small
end

if x > 10, do: :large, else: :small
I seem to recall this has something to do with macros, but all that syntax sugar feels weird. And all those colons, sometimes before, sometimes after a word, look ugly and confusing to me.
I have other, smaller peeves, but they’re subjective or unimportant. However, they strengthened the impression that I didn’t like the syntax.
In conclusion, the syntax reminded me of ES2015: syntax and exceptions for convenience, which makes it feel inconsistent, complex, and hard to remember. It constantly reminded me of the fat arrow function in ES2015.
Phoenix
Phoenix came with its own, related mindfuck: it looks like Rails and often feels like it, but there aren’t classes. That confused me a couple of times, but I guess it’s just a matter of getting used to it.
I think I liked it generally, and it felt productive, but I also felt that there was too much magic and generated code. Not as bad as with how I remember Rails (from many years ago), but enough to make me feel uncomfortable. Also, take into account that my project was a perfect fit for the framework: a small, mostly CRUD application.
I did get to try both tasks and channels, which were really cool, for example with the automatic reconnect in channels (they are implemented using WebSockets) without having to write any special code.
Conclusions
It was interesting to learn Elixir, but I’m curious about Erlang now. As in, I like the concepts behind Elixir (which are mostly Erlang concepts) and I’m not in love with Elixir’s syntax, so if I had to build a system that needed that kind of scalability and reliability I would consider Erlang.
TypeScript and new pet project
Jun 20, 2016
Around two months ago I started a new pet project. As always, I built it partly to solve a problem, and partly to learn some new language or technology. The problem I wanted to solve was showing images and maps to players when playing table-top role-playing games (and, while at it, manage my music from the same place). The language and technology were TypeScript and to a lesser extent ES2015. As always, I learned some things I didn’t quite expect or plan, like HTML drag-and-drop, Riot, a bit more Flexbox, and some more canvas image processing. The result is the first public version of Lyre, my program to help storytellers integrate music and images into their stories (especially useful for semi-improvised or interactive stories).
But the reason for this post is to talk a little bit about the technology. I admit that I haven’t really studied TypeScript thoroughly (I mostly learned bits and pieces while programming), but I think I like it to the point that it might become my front-end language of choice when I cannot use ClojureScript or similar.
So, what’s so great about it? Essentially, it’s ES2015 with optional types. The bad thing is that it needs special incantations to be able to use regular JavaScript modules, and in most cases you don’t have types defined for non-TS modules so you end up with type any for everything. The good thing is that it’s extremely familiar for people who know ES2015, because it’s almost the same, and that of course you can specify types wherever you want to. I am not the biggest fan of static types, but after trying it I think I really like the idea of optional types. Moreover, types in TypeScript are relatively open and feel like they fit very well in the JavaScript ecosystem. Two examples:
- Apart from enums, you can create union types that eg. are as simple as a choice between two strings, like type Mode = "off" | "on".
- Interfaces can be used to specify the properties, not just methods, that should be available in an object. Among other things, it’s possible to specify that an object should have certain specified properties plus any number of extra properties as long as their values are of a given type (see the sketch below).
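A rough sketch of both features (the names are made up for illustration; this is not code from Lyre):

type Mode = "off" | "on";   // union of two string literals

interface Settings {
    mode: Mode;
    volume: number;
    // any number of extra properties is fine, as long as their
    // values are strings or numbers
    [extra: string]: string | number;
}

const settings: Settings = { mode: "on", volume: 7, theme: "dark" };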
They have other interesting features, like union and intersection types, type guards for functions, decorators, generics and other things. I haven’t really used most of those, but I realised that I liked TypeScript when I was writing some JavaScript and found myself missing the optional types.
For actual editing I’m of course using Emacs, in this case with TIDE. Although the refactoring capabilities are very limited, the rest worked quite well and I’m very happy with it.
The other big thing I learned was Riot. Which, sadly, I didn’t like as much: I found it very confusing at times (what does “this” point to in the templates? it seems to depend on the nesting level of the ifs or loops), ended up with lots of rebinding of methods so they could be safely passed around in templates, and generally felt that I spent too much time fighting with it rather than writing my application. Now, some of these problems might have been caused by Riot-TS and not Riot itself, but still the experience wasn’t that great and I don’t expect to use it again in the future. Again, bear in mind that I mostly tried to learn on the go, so maybe I was doing everything wrong :-)
In conclusion, I love these projects because I end up with something useful and because I always learn a bunch of things. In this case in particular, I even learned a language that positively surprised me, even if I’m not a big fan of static typing. Again, if you want to have a look at the result, you can learn about Lyre and download the code from GitHub.
Functional Programming Is Not Weird
Feb 29, 2016
Recently I read a blog post titled “Functional Programming Is Not Popular Because It Is Weird”. I strongly disagree with the assessment and Twitter was too short for an answer, hence this blog post :-)
The blog post says that “Writing functional code is often backwards and can feel more like solving puzzles than like explaining a process to the computer”. I don’t agree at all, and I think this is simply a result of seeing functional programming in imperative programming terms. The first thing that comes to mind is that programming should not feel like “explaining a process” to the computer. Why should it? Imperative programming is centred around the process, the how. Functional programming is more focused on the what (which indeed is backwards compared to imperative, but I don’t think it’s backwards for humans).
Second, I think the recipe example is terrible and doesn’t show anything about the two styles of programming. The point of reading a program is not to be able to follow it step by step and produce the correct result (the computer does that), it’s about understanding the intent. Food recipes are about mixing things and obtaining the result. Programming is not about mixing things. In fact, the easier to separate and recombine they are, the better. This is very unlike a recipe.
Third, there is the “Imperative languages have this huge benefit of having implicit state. Both humans and machines are really good at implicit state attached to time”. Hahaha, what? State is the source of many, many problems in programming. Avoiding state, or at least separating the state-juggling bits, generally makes for better programs. Humans are terrible at keeping state in their minds, or understanding how a certain state was reached, or realising what produced changes in the state. To give a more concrete example of this: the post claims that “you know that after finishing the first instruction the oven is preheated, the pans are greased and we have mixed a batter”. No, you don’t. In a real program, you would be reading the source code for the second point, and it wouldn’t be obvious what the current state is at that moment. You would have to read, understand and play in your head the whole first point (and remember that, in real programs, there would be many ways to reach point 2, yikes!). And when anyone changed the first point, suddenly your understanding of the second point would be wrong, and you wouldn’t even know because the second point’s source code hasn’t changed. That is one of the reasons why state is generally best avoided.
The C++ example using templates as a functional language I’ll just skip as I don’t think is relevant at all for real functional programming. But right after, the author claims “I spend too much time trying to figure out how to say things as opposed to figuring out what to say”. I can relate to this while I was learning functional programming, but that’s entirely normal: you are, in a way, re-learning how to program. Many of your instincts and mental tools aren’t valid any more, and in some sense you have to start over. It’s frustrating, but the fact that it’s hard means that you’re being forced to think in a different, and in my view better, way.
Finally, at the end there’s “if functional programming languages want to become popular, they have to be less about puzzle solving”. If it wasn’t hard for imperative programmers to learn it, it wouldn’t be a different style, it would simply be a different syntax. No point in learning it! The reason why it feels like “puzzle solving” is because it’s different, not because it’s hard.
Now, I’m not saying functional programming is the best tool for every single problem, but most gripes imperative programmers have about functional programming are about “it’s different”, whether they realise it or not. Don’t be afraid to learn functional programming when it seems hard: that’s the whole point, you are learning a different way to solve problems!
Music Explorer
Nov 30, 2015
Lately I’ve worked on several small projects, mostly to learn new technologies. The newest one is music-related: a piano that shows scales and chords “in context”, to learn and explore music theory. The idea came about because my first instrument was the guitar, and music theory is pretty hard to make sense of when you’re playing the instrument. It’s just too hard to remember all the notes you’re playing, let alone realise when two chords in the same song are repeating notes because those notes might be played in different positions (eg. one chord might use E on the open sixth string, and another might use E on the second fret of the fourth string).
I remembered that when I started playing around with a piano, and I could figure out how to play a couple of chords, it was painfully obvious that they were repeating notes because they are in the same positions. In the same way, it felt much more natural and easier to figure out on a piano which chords fitted a scale, so I decided to write Music Explorer, and ended up even buying music-explorer.org to host it. I don’t have a particularly grand plan for it, but I’ll probably add at least some small improvements here and there.
If you are interested in the technical side of it, Music Explorer is written in JavaScript (EcmaScript 6, to be more exact). I learned to use React with JavaScript, Browserify, Sass/SCSS, Ramda, AVA, the fantastic JavaScript music library teoria and a bit more CSS while writing this so it was definitely a useful learning experience for me. The full source code lives in GitHub and it’s licensed under the MIT license so you can go have a look!
Pet projects
Oct 11, 2015
I’ve been writing several pet projects in the last months. I wrote them mostly to learn new languages, techniques or libraries, and I’m unsure as to how much I’ll use them at all. Not that it matters. All three are related to role-playing games. In case you’re interested:
- Character suite: a program to help create characters for RPGs. It’s designed to make it easy to add new rule systems for new games. It’s written in Clojure and ClojureScript, and with it I learned devcards, macros, Clojure’s core.async, figwheel and PDF generation with PDFBox. The code is messy in parts, but I learned a lot and I have a usable result, which is the important thing.
- Nim-DR: a tiny command-line program to roll dice, including adding aliases for rolls (eg. alias “pc” for “1d100”). It doesn’t even support mixing kinds of dice, or adding numbers to the result. I mostly wrote it to get a taste of Nim. While I’m not a big fan of statically typed languages, the type inference really helped, and I liked what I saw. I may try more experiments in Nim in the future.
- Map discoverer: a program to uncover a map bit by bit. You can use it online. I wrote it to learn ES6, the new JavaScript standard, and a bit about the HTML5 canvas element. I used Babel to translate ES6 to regular JavaScript (I could have used Node 4, but Babel supports much more ES6) and es6-features.org as a reference to learn the new bits. I really liked ES6: it still feels like JavaScript, but it improves things here and there. I just wished let didn’t allow re-assignment of variables.
While writing the last one I even learned about the pointer-events CSS property, which allows you to mark an element as not receiving events like mouseclick, mousemove, etc. Very useful!
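For instance, something along these lines (the class name is made up) lets mouse events go straight through an overlay to whatever is underneath:

.map-overlay {
    pointer-events: none;  /* this element no longer receives mouse events */
}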
Bye Flickr, hi trovebox!
Sep 23, 2014
For some time now I’ve become more and more interested in running my own services and using less and less external services. The latest has been Flickr, which I had been a Pro member of for over 8 years now (yikes!). I used it less and less, and was more wary of uploading pictures to it, so I thought it made sense to just stop paying for it. However, there was one thing I was still using Flickr for, a program I wrote called clj-flickr-memories (now clj-photo-memories): every week, that program searches for pictures taken this-week-several-years-ago and sends you an e-mail with the results, if any.
The first possible alternative I found was MediaGoblin, but the API is too basic and it doesn’t even save the picture details (such as the date the photo was taken) in the database so I couldn’t even improve the API to support that. I was close to giving up when I found Trovebox: it’s a hosted service you can pay for, like Flickr, but it’s also open source so you can host it yourself if you want. And although its API documentation leaves a bit to be desired, it could do what I wanted, so I got cracking and modified my clj-flickr-memories to support both Flickr and Trovebox.
If you want to switch from Flickr to a self-hosted Trovebox and want to import your photos, there are two things you should know:
- If you self-host and use HTTPS (and you should!) you need to include the “https://” both in the “host” key in the command-line tool configuration file and in the Android app.
- You can easily import all your Flickr photos in two steps: first you use export-flickr to fetch information about your Flickr photos (only works with Flickr Pro accounts, though!) and second you use import to import those photos into Trovebox. Note that the first step will leave a directory “fetched/” with one file per photo, so you can choose which photos to import to Trovebox, eg. based on the date.
Trovebox also has mobile apps, but at least the Android one requires internet access, so it’s not great for me (you can’t browse photos offline).
Book review: Clojure Cookbook
May 7, 2014
I just finished “Clojure Cookbook”, a book full of recipes for all kinds of things you can do with the Clojure programming language. I got the book through O’Reilly’s Reader Review Program. Obviously the book is not for you if you don’t know any Clojure, but I think it’s great if you know some Clojure but still wonder how to do relatively basic tasks, or want to see how common libraries are used, or want to see idiomatic Clojure solutions to everyday problems.
The book is divided into different sections ranging from basic data structures to network I/O, performance and production tips or testing. Each section has a number of recipes in the form of small problems together with solutions and a detailed explanation of the solution, as well as caveats or things you should take into account when solving that kind of problem. In a book like this it is inevitable that you’re not going to care much about certain sections (in my case, I didn’t care much for distributed computation and I don’t find database access that exciting), but in general the book is very clear, the explanations concise, the code to the point and the recipes useful and varied.
In a nutshell, very good if you know some Clojure but want to learn more about how to solve everyday programming problems elegantly with Clojure.
Digital Rights Management on the web
Apr 8, 2014
I strongly dislike Digital Rights Management (DRM), and often refuse to buy certain things simply on the grounds of them having DRM. So, what do I have against DRM? For the sake of clarity, I’ll assume we’re talking about videos/films (but all these arguments apply to any kind of content):
- Any platform (= “device” or “operating system”) not supported by the producer will not have a legal way to watch the video: as DRM needs special video players (that enforce certain conditions, like not allowing you to share the film with other people), you can only watch the film if there’s a player for your platform.
- Because of the point above, free software is left out almost by definition (eg. under Linux, DVDs have their protection cracked in order to watch them because there’s no legal player; this is not only cumbersome, but illegal in some countries even if you have bought the DVD; Netflix is not officially available on Linux, although there are workarounds).
- It gives unnecessary control to the producers of the content (eg. Amazon can delete books you have bought from your Kindle; you may not be able to lend the film/book to a friend; many DVDs don’t allow you to watch the film directly without watching the trailers first).
- It’s often inconvenient for the paying customer, plus it doesn’t even work to fight piracy (music examples / film examples).
So, can’t you just not buy any DRM-“protected” films/books if you don’t like DRM? That’s more or less what I do, but I’m worried now that the W3C is planning on introducing DRM as part of the web, which will encourage more companies to use DRM. And I said “encourage”, not “allow”, because it’s perfectly possible to do so now (eg. Netflix does it). As I’m opposed to DRM, I think it’s ok that it’s painful or awkward to implement: there’s no need to make something bad easier and more common.
Some pro-DRM people seem to have this misconception that people who oppose DRM want the content to be free of charge. That’s ridiculous. I want to pay for the content, I just want to watch/read that content in whatever way is comfortable for me, and in whatever devices/operating systems I happen to use. In fact, I do spend a lot of money on music (but only when it’s DRM-free).
The only pro-DRM arguments I could take seriously were brought by the BBC (hat-tip to Bruce Lawson for sending me the link). They have very good points in the sense that in many cases it’s not even their choice to offer unlimited access. The thing is, again, that:
- If we agree that DRM (eg. limiting the content to certain kinds of devices) is bad, why make it easier? It’s already possible, let’s not make that the default if we think it’s bad. Keeping things that are bad-but-necessary-right-now hard, but possible, sounds like a good strategy to me…
- Non-supported platforms would be excluded, so why make it easier to have content on the web that discriminates against certain people (eg. people who have non-mainstream devices, free software)?
Finally, the EFF has a page about owning vs. renting that talks about other reasons why I don’t like DRM.
Learning Clojure
Nov 19, 2013
I have always had a thing for functional programming. Not so much for functional languages maybe, but definitely for functional programming principles. I had learned a tiny bit of Lisp at the uni and have fond memories of reading “On Lisp” by Paul Graham, but I never really tried to learn any Lisp for real. Or, I tried to learn Common Lisp again by reading “Land of Lisp” but gave up very quickly. I have tried to learn other functional languages with varying degrees of success (ranging from “read a tutorial or two and got the general idea” to “solved a couple of exercises and/or write trivially small programs”), but for some reason none of them really stuck.
One of those times that I decided I would try to learn a new language, I tried Clojure. I had read some hype about it but remembered Common Lisp as annoying so I was sceptical. Although this is probably extremely unfair, and I don’t really have any experience with any Lisp that is not Common Lisp (and then again that was only a bit of uni stuff), I got this impression that Clojure had all the good bits of Lisp while avoiding a lot of stuff that really bothered me.
So, in case you have avoided Clojure because you (think you) hate Lisp, you should know that:
- Clojure makes the syntax a bit easier to scan (because it uses other characters, like [] and {}, for some data structures) while keeping all the advantages of “code = data”.
- Another way in which Clojure feels more modern is function names: let*, setf, car, cdr, and others I hated are not there. To be fair this is both subjective and might be exclusive to Common Lisp.
- As it runs on the JVM, there are many, many things you can take for granted, like available libraries, and a well-known, documented and relatively sane way to install new libraries and figure out what is available.
- Leiningen is a really nice tool for “project management” (run the project, install dependencies, run tests, etc.), so don’t be afraid if you hate Maven/Ant, the authors of Leiningen did, too :-)
- Clojure really insists on using immutable data structures, and has some very, very cool implementations for all the basic data structures that allow immutability while having excellent performance (in a nutshell, different copies share all the common items; see the short example after this list). Of course you do have mutable versions of those data structures for special cases, but they’re rarely needed.
- There is a thing called ClojureScript, which is a language very, very similar to Clojure (exactly the same for most stuff) that compiles to Javascript, so you can use Clojure for both the front- and back-end of web applications. This is actually one of the reasons that convinced me to try Clojure, although I haven’t really used ClojureScript yet.
- My impression is that it has more IDEs to choose from than the average Lisp: apart from VIM and Emacs, you have Eclipse, Lighttable, and many others.
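To illustrate the syntax and immutability points with a tiny, made-up example (not from any of the projects below):

;; Literal vectors and maps use [] and {}
(def character {:name "Frodo" :hp 10})
(def party [character {:name "Sam" :hp 12}])

;; assoc returns a *new* map; the original value never changes
(assoc character :hp 9)   ;; => {:name "Frodo", :hp 9}
character                 ;; => {:name "Frodo", :hp 10}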
If you’re interested in learning Clojure, the book I used was Clojure Programming, which I found really nice and informative, and I totally recommend. Although I haven’t completely grokked all the concepts yet because I haven’t had the chance to use them in real settings, I have a basic understanding of most Clojure stuff and I know I can go back to the book and re-read parts I need.
And while I haven’t really written anything big in Clojure yet, I have had some fun learning it making two (very) small projects:
- cludoku, a simple sudoku solver made to learn more than anything.
- clj-flickr-memories, a small utility that fetches pictures from Flickr and sends them via e-mail. The idea is that it picks photos that were taken several years ago, “on a day like this”.
I’m looking forward to using and learning more Clojure, and hopefully using it at work someday…
Personal groupware: SOGo
Sep 14, 2013
Oh, wow. It has been a long while since I wrote anything on this blog. Hopefully I’ll get back in shape soon. This time I wanted to write about groupware for personal use. As you may know, I had already written a personal wiki, and several weeks ago I started thinking that it would be cool to have my own place to keep my calendar and my contacts, and use exactly the same list in any e-mail/calendaring program I use, regardless of the device.
After looking around a bit, I chose SOGo. Firstly, because I managed to get it to work (I had tried and failed with Kolab first); secondly, because it seemed simple/small enough to be appropriate for personal use. In my case, I’m using the version that comes with Debian Wheezy (1.3), but I don’t think it will be very different to install in other environments.
Update: Added URL for calendars, small formatting fixes.
Installing SOGo
The installation itself is kind of long, and although it’s documented, the installation and configuration guide doesn’t give a straightforward list of steps to install. Instead, you have to read it and understand how the whole system is put together. This post is a reminder for myself, as well as documentation for others that might want to install SOGo in their own servers.
The first step is to install the Debian packages “sogo”, “postgresql” and “apache2”. Then, copy /usr/share/doc/sogo/apache.conf into /etc/apache2/sites-available/, and tweak x-webobjects-server-{port,name,url}. Then, enable the Apache modules “proxy”, “proxy_http”, “headers” and “rewrite” and enable the new site with the following commands:

# a2enmod proxy proxy_http headers rewrite
# a2ensite sogo
# /etc/init.d/apache2 restart
The next step is to configure PostgreSQL. First, add this line at the end of /etc/postgresql/9.1/main/pg_hba.conf (or the equivalent for your PostgreSQL version):

host sogo sogo 127.0.0.1/32 md5
Then create a PostgreSQL user “sogo” and a database “sogo” with the following commands (remember the password you set for the “sogo” user, you’ll need it later):

# createuser --encrypted --pwprompt sogo --no-superuser --no-createdb --no-createrole
# createdb -O sogo sogo
Then connect to the database with psql -U sogo -h 127.0.0.1 sogo and create a table “sogo_custom_auth” with this SQL:

CREATE TABLE sogo_custom_auth (
  c_uid varchar(40) CONSTRAINT firstkey PRIMARY KEY,
  c_name varchar(40) NOT NULL,
  c_password varchar(128) NOT NULL,
  c_cn varchar(128),
  mail varchar(80)
);
Then calculate the MD5 for whatever password you want for your user (eg. with echo -n 'MY PASSWORD' | md5sum -) and connect again to the database with psql, this time inserting the user in the database:
insert into sogo_custom_auth values ('myuser', 'myuser', '<PASSWORDMD5SUM>', 'User R. Name', 'myuser@mydomain.org');
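If you happen to prefer Node to the shell for this step, the same MD5 hex digest can be computed with a few lines of Javascript. This is only meant as an equivalent of the md5sum command above (the password string is obviously a placeholder), not something SOGo itself requires:
// Equivalent of: echo -n 'MY PASSWORD' | md5sum -
var crypto = require('crypto');

var passwordMd5 = crypto.createHash('md5')
                        .update('MY PASSWORD')
                        .digest('hex');
console.log(passwordMd5);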
Now you have to configure SOGo so that (1) it can connect to the database you just created, and (2) it looks for users in that database. You do (1) by editing /etc/sogo/sogo.conf to set the correct username and password for the PostgreSQL database; you do (2) by adding the following lines to your /etc/sogo/sogo.conf:
SOGoUserSources = (
    {
        type = sql;
        id = directory;
        viewURL = "postgresql://sogo:@127.0.0.1:5432/sogo/sogo_custom_auth";
        canAuthenticate = YES;
        isAddressBook = YES;
        userPasswordAlgorithm = md5;
    }
);
Finally you’ll have to restart the “sogo” service with /etc/init.d/sogo restart so it uses the updated configuration.
Importing contacts
It’s easy to import contacts if you have them in vCard format. Just log in to SOGo (should be https://<YOURSERVER>/SOGo/), go to Address Book, right click on Personal Address Book and select "Import cards". If you want to import the contacts in your Google account, go to GMail, click on the “GMail” menu at the top left (just below the Google logo), and select “Contacts”. From there, you have a menu “More” with an “Export…” option. Make sure you select vCard format.
Clients
Of course, the whole point of setting all this up is making your e-mail/calendaring applications use this as a server. I think there are several formats/protocols SOGo can use, but WebDAV/CardDAV works out of the box without any special tweaking or plugins so I went for that.
I have only tried contacts, mind you, but I imagine that calendar information should work, too. I haven’t tried having my e-mail in SOGo because I don’t care :-) I have briefly tried with two different clients: Evolution running on Ubuntu Raring, and Android. Both seem to be able to get data from SOGo, but here are some things to note:
-
The WebDAV/CardDAV URL for the contacts should be something like: https://<YOURSERVER>/SOGo/dav/<YOURUSERNAME>/Contacts/personal/. The Android application seemed to manage with just https://<YOURSERVER>/SOGo/ or https://<YOURSERVER>/SOGo/dav/ (can’t remember which one), though, so maybe it’s Evolution that can’t autodiscover the URL and needs the whole thing spelled out.
-
The CalDAV URL for the calendars should be something like: https://<YOURSERVER>/SOGo/dav/<YOURUSERNAME>/Calendar/personal/. It’s likely that whatever application you’re using will manage with just https://<YOURSERVER>/SOGo/ or https://<YOURSERVER>/SOGo/dav/, though.
-
Evolution seems to have a problem with HTTPS CardDAV servers that don’t use port 443. If yours runs on a different port, make sure the WebDAV URLs are also reachable through port 443 (with a proxy or similar).
-
Certain contact editing operations seem to crash Ubuntu Raring’s version of Evolution. A newer version seemed to work fine on Fedora 15’s live CD and an older version seemed to work on some Debian I had around.
-
Android doesn’t seem to support CardDAV natively, but there’s a set of applications to add support for it. I have tried “CardDAV-Sync free beta” for contact synchronisation and at least it seems to be able to read the contacts and get updates. I have only tried the “read-only” mode of operation, so I don’t know if the other one works.
In conclusion, this post should contain enough information to get you started installing SOGo on your own server and having some e-mail clients use it as a source for contacts. Feel free to report success/failure with other clients and platforms. Good luck!
-
-
Javascript for n00bs
Mar 19, 2013
Recently a friend asked me for an easy way to learn JavaScript. I can’t remember how I learned myself, but I do remember some things that were surprising coming from other languages, or that I had guessed wrong, or that took me a while to understand for whatever reason, despite not really being complicated at all.
As I wrote the list for her anyway, I figured it could be useful for other people, too, so here it goes, somewhat edited:
-
It’s object-oriented, but it doesn’t have classes (instead, it’s prototype-based). The Java-like syntax often makes you think it’s a normal object-oriented language, but it’s not.
-
What are called “objects” in Javascript are hashes, really. It just happens that the values for some of the keys are functions. Related: object.prop and object['prop'] are equivalent.
-
The for (x in obj) { ... } statement is strictly an object (not an array) thing. If you need to traverse an array with a “for” loop, you have to use the C-style form: for (var i = 0, len = obj.length; i < len; i++) { ... }.
-
There are no methods in the sense of other programming languages. When you do object.blah(...) in JS, you’re simply calling the blah function with the “context” (ie. the value of the reserved word “this”) set. For example, you could do var f = obj.blah; f(...). That would (a) be perfectly legal, and (b) set “this” inside that function call to the global object (or undefined in strict mode), not obj. The “call” function allows you to set that binding explicitly, like so: f.call(obj, ...).
-
This implies something that I find ugly: if you have two levels of functions, you need to save the value of the this of the outer function in some variable if you want to use it in the inner function. See example below:
-
JavaScript has functional language features: functions are first class citizens, and higher-order functions are possible and even common.
-
You should use the operators === and !== instead of == and != (the latter are too liberal with type casting). There’s an illustration in the snippet after the example below.
-
You should get used to always putting semicolons at the end of statements. Although the interpreter adds them in most cases, in some other cases the result might be a big surprise.
-
Also, JavaScript is full of small design problems and quirks. Learn them, learn how to avoid them, and use tools like “jshint” or “jslint”. That way it’s much easier to enjoy the language :-)
-
Maybe I’m weird, but I have always found programming “inside” the browser to be awkward and clunky. To fool around with the language, you might want to use Node instead.
Example of two levels of functions:
// Imagine this function is a method
function foo(widgetList) {
    // Save the value of "this" for later
    var that = this;

    widgetList.forEach(function() {
        // Here, "this" points to widgetList. If we need
        // the object the "foo" method was called on, we
        // need to refer to "that", saved above
    }, widgetList);
}
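To make a few of the points above concrete, here is a tiny, self-contained snippet you can paste into a Node prompt. The object and values are made up for illustration only:
var obj = {
    greeting: 'hello',
    shout: function() { return this.greeting + '!'; }
};

// Property access: dot and bracket notation are equivalent.
console.log(obj.greeting === obj['greeting']);   // true

// "for (x in ...)" iterates over keys, so for arrays use a C-style loop.
var numbers = [10, 20, 30];
for (var i = 0, len = numbers.length; i < len; i++) {
    console.log(numbers[i]);
}

// Loose equality does type coercion, strict equality doesn't.
console.log(0 == '');    // true (surprise!)
console.log(0 === '');   // false

// "Methods" are just functions; "this" depends on how you call them.
console.log(obj.shout());        // "hello!"
var f = obj.shout;
console.log(f.call(obj));        // "hello!" ("this" set explicitly)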
Resources (that I haven’t read myself, but seem useful):
-
JavaScript Guide in the Mozilla Developer Network.
-
Javascript: the Good Parts (or anything by Doug Crockford I guess).
-
-
Writing music, printing .gpx files
Oct 13, 2012
UPDATE 2012-10-27: I have updated the .gpx viewer to recognise silences! Download the updated version.
Note: if you’re only interested in printing .gpx files, download guitarpro-printer.zip and follow the instructions in the README.txt file.
I have been playing with my last band for over half a year now. From the beginning the idea was to write and play our own material, but actually we had been mostly doing song covers. After some time practising and after having our first gig, we decided to start focusing more on writing our own songs. That’s something I had never done, so it sounded intriguing, challenging and a bit scary (as I essentially don’t know any music theory and don’t even own a guitar or keyboard, it seemed extra difficult for me to come up with ideas for new songs). So I decided to try it out, and that meant looking for a way to try out ideas and record them.
I tried many different programs both for Linux and for Android (I even tried a couple of Windows programs under emulation, but they seemed terribly complex), but nothing was really satisfactory. After searching a lot, I found Guitar Pro for Android (a tablature editor). It wasn’t really what I was thinking about at first, but I found that it was actually the best for my needs: thinking in terms of tabs is easier for me, as I don’t really know music but I have played a bit of guitar. Guitar Pro for Android is supposed to be mostly a tab viewer, but it does have a small feature for “notes”. The idea is that you’re on the bus or whatever, and come up with some musical idea: in that case, Guitar Pro allows you to quickly write it down. As you can listen to what you’re writing, I don’t need an actual guitar/bass to check whether what I’m writing really sounds the way I think it does.
Guitar Pro for Android works fairly well for my needs, but something really bugged me: you can only export the music you have written to the .gpx format, which didn’t seem supported by any open source program I knew of. That really pissed me off because it looked like I would be forced to buy Guitar Pro for Linux in order to print the music I had written (I wanted to do so in order to distribute it to my bandmates). After searching the net for a while I found the excellent alphaTab library, but it seemed to not recognise many of the parts I had written, which was a disappointment. See below for the nerdy details, but long story short I slightly improved alphaTab to support Guitar Pro mobile’s .gpx files, so now I can print all the music I write, w00t! You can download a little Guitar Pro file viewer/printer I wrote using alphaTab. It’s basically a webpage that can show Guitar Pro files; see the README.txt for details.
Now on to the technical details. You can skip the rest of the blog post if you’re not interested ;-) alphaTab is an open source library that can read several Guitar Pro formats. It can play them, render them in the browser, and do other things. It’s written in a language called Haxe, which compiles to, among others, Javascript. If you download the alphaTab distribution you’ll get a Javascript version of the software that you can use directly in your browser, which is really cool and already does a bunch of stuff, but there were two changes I wanted to make: fix the bug that made it not understand most of the files Guitar Pro mobile produced, and add an option to upload files from the browser (out of the box, the example programs read tabs directly from a path relative to the programs).
For the first, I debugged a bit and quickly realised that the problem was that alphaTab was being too strict: Guitar Pro mobile was producing some “empty” notes and beats, and that made alphaTab consider the file corrupt and not show it at all. Adding some code to ignore those empty notes seemed enough to make alphaTab render the file. I filed bug #31 on GitHub and added the ugly patch I made :-)
For the second, as the alphaTab API needed a URL to load the tablature file, I had to learn a bit more about the Javascript File API and be a bit creative, replacing the file loader in alphaTab with something that would load the local file using the FileReader object, as you can see here:
function handleFileSelect(evt) {
    // Fake the getLoader function to make it support data
    // URIs: apparently jQuery can't make an Ajax request to
    // a data URI so this was the best I could think of.
    alphatab.platform.PlatformFactory.getLoader = function() {
        return {
            loadBinary: function(method, file, success, error) {
                var reader = new FileReader();
                reader.onload = function(evt) {
                    var r = new alphatab.platform.BinaryReader();
                    r.initialize(evt.target.result);
                    success(r);
                };
                reader.readAsBinaryString(file);
            }
        };
    };

    document.getElementById('files').style.display = 'none';
    $('div.alphaTab').alphaTab({file: evt.target.files[0]});
}
With these two changes, alphaTab finally does what I need, so I don’t need to buy Guitar Pro for Linux just to print tabs. I might buy it anyway for other reasons, but it’s nice to not be forced to do so ;-)
I hope this code and small program is useful to someone. If not, at least I have solved a pretty annoying problem for myself.
-
Exceptions in Node
Oct 12, 2012
Whoa, boy. It seems I haven’t written for a good while now. Let’s fix that. One of the things I had in my list of possible posts was my experiments (and frustrations) with Javascript exception classes under Node, so here we go:
I needed to have several exception classes in Javascript (concretely, for RoboHydra, which works under Node). My first attempt looked something like this:
function NaiveException(name, message) {
    Error.call(this, message);
    this.name = name;
    this.message = message;
}
NaiveException.prototype = new Error();
That seemed to work well, except that the stacktrace generated by such a class doesn’t contain the correct name or the message (notice how I even try to set the message after inheriting from Error, to no avail). My second-ish attempt was to try and cheat in the constructor, and not inherit but return the Error object instead:
function ReturnErrorException(name, message) {
    var e = Error.call(this, message);
    e.name = name;
    return e;
}
ReturnErrorException.prototype = new Error();
That did fix the stacktrace problem, but broke instanceof, as the object will be of class Error, not ReturnErrorException. That was kind of a big deal for me, so I kept trying different things until I arrived at this monster:
function WeirdButWorksException(name, message) {
    var e = new Error(message);
    e.name = name;
    this.stack = e.stack;
    this.name = name;
    this.message = message;
}
WeirdButWorksException.prototype = new Error();
This is the only code that seems to do what I want (except that the stack trace is slightly wrong, as it contains an extra line that shouldn’t be there). I tried in both Node 0.6 and Node 0.8 and the behaviour seems to be the same in both. In case you’re interested, here’s my testing code showing the behaviour of the different approaches:
// NO WORKY (breaks stacktrace)
function NaiveException(name, message) {
    Error.call(this, message);
    this.name = name;
    this.message = message;
}
NaiveException.prototype = new Error();

// NO WORKY (breaks instanceof; also stacktrace w/ 2 extra lines)
function ReturnErrorException(name, message) {
    var e = Error.call(this, message);
    e.name = name;
    return e;
}
ReturnErrorException.prototype = new Error();

// WORKS (but has extra stacktrace line)
function WeirdButWorksException(name, message) {
    var e = new Error(message);
    e.name = name;
    this.stack = e.stack;
    this.name = name;
    this.message = message;
}
WeirdButWorksException.prototype = new Error();

[NaiveException,
 ReturnErrorException,
 WeirdButWorksException].forEach(function(eClass) {
    var e = new eClass('foo', 'bar');
    console.log(e.stack);
    console.log(e instanceof eClass);
    console.log(e instanceof Error);
});
It feels really strange to have to do this to get more or less proper exceptions under Node, so I wonder if I’m doing anything wrong. Any tips, pointers or ideas welcome!
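For what it’s worth, V8 (the engine Node runs on) also exposes Error.captureStackTrace, which lets a constructor fill in its own stack property while skipping the constructor’s frame. This is just an untested sketch of that idea, not something from the experiments above:
function CapturedException(name, message) {
    this.name = name;
    this.message = message;
    // V8-specific: fills in this.stack, omitting the frames above
    // (and including) CapturedException, so the trace should start
    // where the exception was created.
    Error.captureStackTrace(this, CapturedException);
}
CapturedException.prototype = new Error();

var e = new CapturedException('foo', 'bar');
console.log(e.stack);
console.log(e instanceof CapturedException); // true
console.log(e instanceof Error);             // true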
-
Small experiments with Cherokee
Mar 10, 2012
A couple of weeks ago I decided to move my wiki (see Wiki-Toki on GitHub) and my package repository (see Arepa on CPAN) over to a new machine. The idea was to move it to some infrastructure I “controlled” myself and was paying for (mainly inspired by the blog post “A Set of Tenets I Have Set Myself”). As I was curious about Cherokee and this was an excellent opportunity to learn it, I decided to use it as the web server.
I have to say I was pretty impressed by how easy it was to set it up. Although I did have several small problems, most of them were less related to Cherokee itself, and more related to me not being very familiar with Node application configuration outside of Joyent’s environment, or FastCGI configuration. In particular, the web-based configuration is brilliant: you don’t have to open or know the format of any configuration files, but instead configure everything from a pretty powerful UI (which in the end writes a text configuration file of course, so you can always automate or generate the configuration if you need to). I already knew this, but seeing it in action was still pretty impressive. To avoid security problems with people accessing that configuration interface, there’s this little tool called cherokee-admin that starts another web server with the configuration interface (tip: pass the -b option without parameters if you want to connect to it from a different machine, which is the case unless you’re installing Cherokee on your own PC). On startup it generates a random admin password, which you use to log in.
Static content serving, CGI, FastCGI, specifying certain error codes for certain paths, and reverse proxying was all very easy to set up. There was only a small problem I bumped into: tweaking URLs in reverse-proxied responses. In my situation, I was doing reverse proxying from port 443 to port 3000. As the final application didn’t know about the proxy, it generated URL redirections to “http://…:3000/” instead of “https://…/”, so part of the process of proxying was fixing those URLs. Cherokee, of course, supports this out of the box, in a section called “URL Rewriting”. Each entry in that section takes a regular expression and a substitution string. My first attempt (“http://example.com:3000/” -> “https://example.com/”) didn’t work: all URL redirections were changed to “https://example.com/”, disregarding the rest of the URL. After some time trying different things, I decided to try with “http://example.com:3000/(.*)” and “https://example.com/$1”. As it turns out, that worked like a charm! The documentation does mention that it uses Perl-compatible regular expressions, but I thought the HTTP reverse proxy documentation could have been more explicit in this regard.
But apart from that detail, everything was very smooth and I’m very, very happy with it :-)
-
Unit testing advice for seasoned hackers (2/2)
Feb 15, 2012
This is the second part of my unit testing advice. See the first part on this blog.
If you need any introduction you should really read the first part. I’ll just present the other three ideas I wanted to cover.
Focusing on common cases
This consists of testing only/mostly common cases. These tests rarely fail and give a false sense of security. Thus, tests are better when they also include less common cases, as they’re much more likely to break inadvertently. Common cases not only break far less often, but will probably be caught reasonably fast once someone tries to use the buggy code, so testing them has comparatively less value than testing less common cases.
The best example I found was in the wrap_string tests. The relevant example was adding the string “A test of string wrapping…”, which wraps not to two lines, but three (the wrapping is done only on spaces, so “wrapping…” is taken as a single unit; in this sense, my test case could have been clearer and use a very long word, instead of a word followed by ellipsis). Most of the cases we’ll deal with will simply wrap a given word in two lines, but wrapping in three must work, too, and it’s much more likely to break if we decide to refactor or rewrite the code in that function, with the intention to keep the functionality intact.
See other examples of this in aa20bce (no tests with more than one consecutive newline, no tests with lines of only non-printable characters), b248b3f (no tests with just dots, no valid cases with more than one consecutive slash, no invalid cases with content other than slashes), 5e771ab (no directories or hidden files), f8ecac5 (invalid hex characters don’t fail, but produce strange behaviour instead; this test actually discovered a bug), 7856643 (broken escaped content) and 87e9f89 (trailing garbage).
Not trying to make the tests fail
This is related to the previous one, but the emphasis is on trying to choose tests that we think will fail (either now or in the future). My impression is that people often fail to do this because they are trying to prove that the code works, which misses the point of testing. The point is trying to prove the code doesn’t work. And hope that you fail at it, if you will.
The only example I could find was in the strcasecmpend tests. Note how there’s a test that checks that the last three characters of the string “abcDEf” (ie. “DEf”) compare as less than “deg” when compared case-insensitively. That’s almost pointless, because if we made that same comparison case-sensitively (in other words, if the “case” part of the function breaks) the test still passes! Thus it’s much better to compare the strings “abcdef” and “Deg”.
Addendum: trying to cover all cases in the tests
There’s another problem I wanted to mention. I have seen it several times before, although not in the Tor tests. The problem is making complicated tests that try to cover many/all cases. This seems to stem from the idea that having more test cases is good by itself, when actually more tests are only useful when they increase the chances to catch bugs. For example, if you write tests for a “sum” function and you’re already testing [5, 6, 3, 7], it’s probably pointless to add a test for [1, 4, 6, 5]. A test that would increase the chances of catching bugs would probably look more like [-4, 0, 4, 5.6] or [].
So what’s wrong with having more tests than necessary? The problem is they make the test suite slower, harder to understand at a glance and harder to review. If they don’t contribute anything to the chance of catching bugs anyway, why pay that price? But the biggest problem is when we try to cover so many test cases that the code produces the test data. In these cases, we have all the above problems, plus the test suite becomes almost as complex as production code. Such tests become much easier to introduce bugs in, harder to follow the flow of, etc. The tests are our safety net, so we should be fairly sure that they work as expected.
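As a toy illustration of that “sum” example, in Javascript with Node’s assert module (the function and the expected values here are made up, not taken from any real test suite):
var assert = require('assert');

function sum(numbers) {
    return numbers.reduce(function(a, b) { return a + b; }, 0);
}

// Adds almost nothing: same "shape" as [5, 6, 3, 7].
assert.equal(sum([1, 4, 6, 5]), 16);

// More likely to catch bugs: empty array, negatives, zero, floats.
assert.equal(sum([]), 0);
assert.equal(sum([-4, 0, 4, 5.6]), 5.6);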
And that’s the end of the tips. I hope they were useful :-)
-
Unit testing advice for seasoned hackers (1/2)
Feb 14, 2012
When reviewing tests written by other people I see patterns in the improvements I would make. As I realise that these “mistakes” are also made by experienced hackers, I thought it would be useful to write about them. The extra push to write about this now was having concrete examples from my recent involvement in Tor, which will hopefully illustrate these ideas.
These ideas are presented in no particular order. Each of them has a brief explanation, a concrete example from the Tor tests, and, if applicable, pointers to other commits that illustrate the same idea. Before you read on, let me explicitly acknowledge that (1) I know that many people know these principles, but writing about them is a nice reminder; and (2) I’m fully aware that sometimes I need that reminder, too.
Edit: see the second part of this blog.
Tests as spec
Tests are more useful if they can show how the code is supposed to behave, including safeguarding against future misunderstandings. Thus, it doesn’t matter if you know the current implementation will pass those tests or that those test cases won’t add more or different “edge” cases. If those test cases show better how the code behaves (and/or could catch errors if you rewrite the code from scratch with a different design), they’re good to have around.
I think the clearest example were the tests for the
eat_whitespace*
functions. Two of those functions end in_no_nl
, and they only eat initial whitespace (except newlines). The other two functions eat initial whitespace, including newlines… but also eat comments. The tests from line 2280 on are clearly targeted at the second group, as they don’t really represent an interesting use case for the first. However, without those tests, a future maintainer could have thought that the_no_nl
functions were supposed to eat whitespace too, and break the code. That produces confusing errors and bugs, which in turn make people fear touching the code.See other examples in commits b7b3b99 (escaped ‘%’, negative numbers, %i format string), 618836b (should an empty string be found at the beginning, or not found at all? does “\n” count as beginning of a line? can “\n” be found by itself? what about a string that expands more than one line? what about a line including the “\n”, with and without the haystack having the “\n” at the end?), 63b018ee (how are errors handled? what happens when a %s gets part of a number?), 2210f18 (is a newline only \r\n or \n, or any combination or \r and \n?) and 46bbf6c (check that all non-printable characters are escaped in octal, even if they were originally in hex; check that characters in octal/hex, when they’re printable, appear directly and not in octal).
Testing boundaries
Boundaries of different kinds are a typical source of bugs, and thus are among the best points of testing we have. It’s also good to test both sides of the boundaries, both as an example and because bugs can appear on both sides (and not necessarily at once!).
The best example are the tor_strtok_r_impl tests (a function that is supposed to be compatible with
strtok_r
, that is, it chops a given string into “tokens”, separated by one of the given separator characters). In fact, these extra tests discovered an actual bug in the implementation (ie. an incompatibility withstrtok_r
). Those extra tests asked a couple of interesting questions, including “when a string ends in the token separator, is there an empty token in the end?” in the “howdy!” example. This test can also be considered valuable as in “tests as spec”, if you consider that the answer to be above question is not obvious and both answers could be considered correct.See other examples in commits 5740e0f (checking if
tor_snprintf
correctly counts the number of bytes, as opposed the characters, when calculating if something can fit in a string; also note my embarrassing mistake of testingsnprintf
, and nottor_snprintf
, later in the same commit), 46bbf6c (check that character 21 doesn’t make a difference, but 20 does) and 725d6ef (testing 129 is very good, but even better with 128—or, in this case, 7 and 8).Testing implementation details
Testing implementation details tends to be a bad idea. You can usually argue you’re testing implementation details if you’re not getting the test information from the APIs provided by whatever you’re testing. For example, if you test some API that inserts data in a database by checking the database directly, or if you check that the result of a method call was correct by inspecting the object’s internals or calling protected/private methods. There are two reasons why this is a bad idea: first, the more implementation details your tests depend on, the less implementation details you can change without breaking your tests; second, your tests are typically less readable because they’re cluttered with details, instead of meaningful code.
The only example I encountered of this in Tor was the compression tests. In this case it wasn’t a big deal, really, but I have seen this before in much worse situations and I feel this illustrates the point well enough. The problem with that deleted line is that it’s not clear what its purpose is (it needs a comment), plus it uses a magic number, meaning that if someone ever changes that number by mistake, it’s not obvious if the problem is the code or the test. Besides, we are already checking that the magic number is correct, by calling the detect_compression_method function. Thus, the deleted memcmp doesn’t add any value, and makes our tests harder to read. Verdict: delete!
I hope you liked the examples so far. My next post will contain the second half of the tips.
-
Book summary: Designing with the Mind in Mind
Jan 11, 2012
This is my summary of “Designing with the Mind in Mind” by Jeff Johnson, a book about user interface design that explores the reasons why those principles work.
Introduction
How valuable UI design guidelines are depends on who applies them. They often describe goals, not actions, so they can’t be followed like a recipe. They also often conflict with each other. This book tries to give rationale to make them easier to apply.
Visual content
-
Structure: Gestalt principles (proximity, similarity, continuity, closure, symmetry, figure/ground, common fate), see on p. 11-22. They act together, and sometimes they produce effects we don’t want: review designs with each of these principles in mind to see if they suggest a relationship that shouldn’t be there.
-
Text: we’re wired for language, but not for reading. Typical text-related mistakes: geek-speak, tiny fonts (if the text is hard to read, it probably won’t be read, so you might as well remove it), burying important information in repetition, and using just too much text (use just enough to help most users get to their intended goals).
-
Colour: our vision is optimised to detect contrasts (edges), and our ability to distinguish colours depends on how they are presented (colours are harder to distinguish if they’re pale, in small/thin patches or separated from each other). Guidelines for colour: (1) distinguish by saturation/brightness, not only hue, (2) use distinctive colours (red, green, yellow, blue, black and white), (3) avoid colour pairs that colour-blinds can’t distinguish (tips on p.59-60, including http://www.vischeck.com/), (4) don’t rely on colour alone, and (5) don’t place strong opponent colours next to each other (see p. 62).
-
Peripheral vision: we only have good resolution in the centre of wherever we’re looking. Peripheral vision (everything beyond 1-2 centimetres from the point we’re looking at) mostly only provides cues for our eye movement (good example of bad error message, based on this, on p. 71). To make an error message visible: (1) put it where users are looking, (2) mark it (often it’s enough to place it next to what it refers to), (3) use an error symbol/icon, and (4) reserve red for errors. Heavy artillery (use with care!): (1) pop-ups, (2) sound (makes users scan the screen for errors; make sure they can hear it!), (3) motion or blinking (but not for more than a quarter- or half-second).
-
Visual hints: use pictures where possible to convey function (quicker to recognise; memorable icons hint at their meaning, are distinguishable from others and consistently mean the same even across applications); thumbnails depict full-sized images effectively if they keep features; use visual cues to let users recognise where they are (distinctive visual style, colours, etc).
Expectations
Our perception is biased by our experience, current context and goals. Implications: (1) test your design to make sure different users interpret it in the same way; (2) be consistent with position, colours and font for elements that serve the same function; and (3) understand your users’ goals (they can be different, too!) and make sure you support them by making relevant information clearly visible at every stage.
Memory
We’re much better at recognising than at remembering (“see and choose” is easier than “recall and type”). Tips: (1) avoid modes, or if you can’t, show the current mode clearly; (2) show search terms when showing search results (they’re easy to forget if we get interrupted for whatever reason); (3) when showing instructions for a multi-step process, make sure the instructions are there while following the steps.
Attention and goals
When people focus their attention on their tools, it is pulled away from the details of the task. Software applications should not call attention to themselves. Short-term memory and attention are very limited, so don’t rely on them. Instead indicate what users have done versus what they have not yet done, and/or allow users to mark or move objects to indicate which ones they have worked on versus the rest.
Support people’s goal-execute-evaluate cycle: provide clear paths, including initial steps, for the goal; software concepts should be focused on the task rather than implementation, for the execution; provide feedback and status information to show users their progress towards their goal, and allow users to back out of tasks that didn’t take them toward their goal, for evaluation.
While pursuing a goal, people only notice things that seem related to it, and take familiar paths whenever possible rather than exploring new ones, esp. when working under deadlines. After they’re done, they often forget cleanup steps: design software so that users don’t need to remember, or at least to remind them.
Misc. tips:
-
Don’t expect users to deduce information, tell them explicitly what they need to know.
-
Don’t make users diagnose system problems. They don’t have the skills.
-
Minimise the number and complexity of settings. People are really bad at optimising combinations of settings.
-
Let people use perception rather than calculation, when possible (some problems, when presented graphically, allow people to achieve their goals with quick perceptual estimates instead of calculations).
-
Make the system familiar (concepts, terminology, graphics), by following industry standards, by making the software work like an older application the user knows, or by basing the design on metaphors.
-
Let the computer do the math, don’t make people calculate things the computer can calculate for them.
Learning
We learn faster when operation and vocabulary are task-focused, and when risk is low:
-
Task-focused operation: you have to understand the users’ goals in order to reduce the gap between what the user wants and the operations supported by the tool (this gap is called “the gulf of execution”). To understand goals: perform a task analysis (checklist on p. 135) and design a task-focused conceptual model (the list of objects/actions the user should know about). The second should be as simple as possible (ie. fewer concepts), avoid overlapping concepts, and not have things “just in case”. Only then sketch and design a user interface, based strictly on the task analysis.
-
Task-focused vocabulary: it should be familiar and consistent (“same name, same thing; different name, different thing”), mapped 1:1 to the concepts.
-
Low risk: often people are afraid of being “burned” and don’t learn or explore. People make mistakes, so systems should try to prevent errors where possible, make errors easy to detect by showing users what they have done, and allow users to undo/reverse easily.
Responsive systems
Responsiveness is not the same as performance, it can’t be fixed by having faster machines/software. Principles:
-
Let you know immediately your input was received
-
Provide some indication of how long operations will take
-
Free you to do other things while waiting
-
Manage queued events intelligently
-
Perform housekeeping and low-priority tasks in the background
-
Anticipate your most common requests
When showing progress indicators (whenever an operation takes longer than a few seconds): show work remaining; total progress, not progress on the current step; start percentages at 1%, not 0% (and display 100% only briefly at the end); show smooth, linear progress, not erratic bursts; use human-scale precision, not computer precision (“about 4 minutes” better than “240 seconds”).
Display important information first. Don’t wait to have all the information to show everything at once.
-
-
Emacs adventures
Nov 28, 2011
I have been using Emacs for over a year now. I actually didn’t learn a lot when I started using it (just the basics to get going and then some relatively common keyboard shortcuts), but lately I have been reading and learning much more about it. I’m so grateful for everything I’ve learned from different people on the net that I wanted to share a couple of things I’ve learned, and a simple major mode for editing AsciiDoc documents.
As a long-time VIM user, I feel it’s my duty to make a micro-introduction to Emacs to VIM users (skip this whole paragraph if you’re not one of them). Emacs is so different than VIM that the comparison doesn’t even make complete sense, but Emacs does have many sweet, sweet features that might tempt VIM users. And let me make this clear: Emacs, in its default configuration, is rubbish. If you don’t like the idea of customising your editor, learning about it, discovering new tricks, “plugins” and shortcuts and maybe even writing your own extensions… use a different editor (eg. I find VIM way better than Emacs on the default configuration). Likewise, if you don’t learn VIM properly and don’t learn the gazillion shortcuts to fly around your code while you stay on the home row… use a different editor. With that out of the way…
A lot of what I’ve learned lately I’ve learned from a handful of websites and Twitter accounts, listed here:
-
EmacsWiki: you probably already know this one, but it’s pretty useful for a variety of reasons.
-
EmacsRocks / @Emacsrocks: it’s a series of screencasts showing off cool, advanced Emacs features. Each screencast is very short and focused on one thing. Instant awesome.
-
EmacsRookie / @EmacsRookie: a blog with articles about different Emacs tips & tricks and features. More geared towards beginners (but my impression is that many people stay Emacs “beginners” for quite a long time).
-
Steve Yegge described “10 Specific Ways to Improve Your Productivity With Emacs”. In particular, I’d recommend making Caps-Lock behave as an extra Control key (I didn’t swap, I just have one more Control key), invoke M-x without the Meta key (both C-x C-m and C-c C-m) and being comfortable with the buffer commands. For navigation, apart from incremental search, you can also use ace-jump.
-
Christian Johansen has an interesting intro article to Emacs Lisp.
-
Other Twitter accounts worth following are @emacs_knight, @dotemax and @learnemacs.
In particular, things I have learned that I thought would be cool to share:
-
The zenburn colour theme. I didn’t really like any of the colour themes that come with Emacs. And although I did see solarized, I thought it looked like crap on Emacs (very different from the screenshots, maybe it only works properly on Emacs 24?).
-
hippie-expand is a pretty cool completion system, familiarise yourself with it.
-
If you code Perl, you should be using cperl-mode, not the default Perl mode.
-
If you code Javascript, you should be using js2-mode, not the default Javascript mode. Also have a look at my js2-mode configuration, you might want to do some of the same tweaks.
-
If you have to do anything with XML, make sure you use nxml-mode, including its awesome feature to validate an XML document against a schema.
-
yasnippet. Very cool snippet system. Just have a look at the EmacsRocks screencast on yasnippet.
-
key-chord. Allows you to assign shortcuts to “key chords” (two keys pressed at the same time). Very comfy esp. if you miss some VIM shortcuts and want certain operations more VIM-like. Also seen on EmacsRocks, and again just check out the EmacsRocks screencast on key-chord.
-
And speaking of VIM, have a look at iy-go-to-char, again seen in EmacsRocks (on episode 4). Hint: this works best when combined with key chords.
And last but not least, I have been writing an Emacs major mode for editing AsciiDoc documents (it currently only implements syntax highlighting, which I think is the most important part of a format like AsciiDoc). For that I basically followed the major mode tutorial and then tweaked the multi-line region matching code (for code blocks and such) by setting the font-lock-extend-region-functions variable to a function that appropriately extends the region to be highlighted. If you’re interested, just have a look at my function asciidoc-font-lock-extend-region, essentially copied from some other mode. As a last tip for writing syntax highlighting for major modes, don’t miss re-builder; it’s pure gold for testing your regular expressions!
-
-
Pragmatic Thinking & Learning, Wikis and Javascript
Oct 24, 2011
After so much “slacking” (just posting book summaries) I’m trying to go back to regular blogging. Remember my summary of Pragmatic Thinking & Learning? There are many exercises and pieces of advice in that book that I have been trying to practice. One of the things I decided to go for was having a personal wiki. One of the reasons being, in all honesty, that I had always wanted to have one. Another reason being that my pet TODO application, Bubug, had finally died after some Debian update (some retarded Ruby module broke compatibility with the version I was using, or something; couldn’t care to investigate). And yet another reason, well, to have a new small pet project and follow my obsession with learning Javascript, and especially Node. And that I wanted to give Joyent’s free Node service a try!
But enough with the reasons. It’s starting to look like it was a pretty useful mini-project. Not just because I learned a bit more Javascript, the excellent express web development framework and other things, but also because the result itself, even though it didn’t take long to develop (and it was pretty fun, even!), feels useful. It feels like a nice place to put all my notes, TODOs, random ideas for projects, etc. A similar feeling of freedom as when I started using my first Moleskine. Not that I would ditch paper for computer-anything, but it’s useful and freeing in its own way, for specific purposes.
About the technology, I used the Markdown format for the pages thanks to the markdown-js library (it’s really nice that the module has an intermediate tree format that you can parse to add your own stuff before converting to HTML, like e.g. wikipage links!), express for the whole application structure and js-test-driver + jsautotest + a bit of syntax sugar from Sinon.js for the tests (but looking forward to trying out Buster.js when it’s released!). The deployment to Joyent’s Node.js SmartMachine was reasonably easy. Actually, it was pretty easy once I figured the following:
-
You must not forget to listen on the correct port, with server.listen(process.env.PORT || 8001)
-
There are a couple of pretty useful Node.js-related command-line utilities to check logs, restart applications and so on
-
The configuration of the application can be done via npm config; see npm integration on Joyent’s Wiki
If you’re curious to see the code, play with it or use it yourself, take a peek at the Wiki-Toki repository on GitHub. Happy hacking!
-
-
My short experience with GNOME 3
Oct 18, 2011
After not having blogged about anything but book summaries lately, I thought it was about time to write something else :-) EDIT: Added the last point, the most important one!
I had been thinking of trying out GNOME 3 since it was released. For a number of reasons, I only managed to give it a try a couple of days ago. I normally use KDE 4, but wanted to see how GNOME was doing these days, and wanted to see if it was something I could maybe switch to. I have to say I quite liked some of the stuff I saw, but I don’t think I can switch. My reasons:
-
Language switcher keyboard combination: I just couldn’t find any combination I could use. Everything conflicted with some other combination I use (esp. in Emacs). Having to change the keymap by clicking on the top bar didn’t sound sane to me.
-
Order of the OK/Cancel buttons: even if I switched, I would probably use a combination of systems. Having to train my brain to look for the buttons in a different position seemed like too much.
-
Rhythmbox seemed plain, clunky and hard to use. It seemed hard for me to do what I wanted, plus it crashed consistently while trying to listen to some podcast.
-
I kind of like the idea of how workspaces work (even though I have to radically change the way I use them to adapt to them), but for me it’s too much that both (a) closing the last window makes the workspace disappear and (b) you can’t create workspaces “above”. That is a deal-breaker for me.
-
Can’t create workspaces on the right or left? I could get used to that probably, but it added up to my frustrations with GNOME 3 workspaces.
-
Constant repainting issues.
-
Can’t make sense of the window traversing. Let’s say I have two virtual desktops, one with a browser and another one with two terminals. The focus is on one of the terminals, and I want to go to the other terminal (with the keyboard, of course). If I just press Ctrl-Tab GNOME takes me to the web browser in the other desktop! If I want to go to the other terminal, I have to press Ctrl-Tab, Shift-Ctrl-Tab to go back to the terminal, arrow down to see all the terminal windows, arrow right to go to the next terminal. It’s even worse when I have Opera in one virtual desktop (maximised) with the error console in the same desktop. As Opera is maximised, I can’t even click with the mouse, so the only way to switch to the error console I can think of is doing the dance described above. Am I missing something, is this for real? EDIT: I was told about Ctrl-
(changes between windows of the same application). Cute attempt, but I don't think I can get used to thinking if I have to use Ctrl-TAB or Ctrl-
. So that remains “impossible to use” for me.
I wonder if some GNOME user can shed light on some of those issues, although it doesn’t seem like I can find a solution for all my frustrations :-)
-
-
Book review: Javascript Web Applications
Sep 20, 2011
This is my review of “Javascript Web Applications” by Alex MacCaw, part of the O’Reilly Blogger Review Program (in a nutshell: you can choose an ebook from a given selection, and you get it for free if you make a review and post it in any consumer site). It’s a book about using Javascript to write (mostly) client-side web applications. The book cover says “jQuery Developers’ Guide to Moving State to the Client”, which is somewhat misleading: although most examples that could be written with jQuery are written with jQuery, it’s a book that anyone interested in Javascript can use, enjoy and learn from, regardless of their library of choice. It doesn’t even assume you know jQuery, and there’s a whole appendix dedicated to introducing the reader to the library, should she need it.
Structure
The book teaches how to write web applications using Javascript, always following the MVC pattern. It’s divided in four parts:
-
The first two chapters serve as an introduction to both the MVC pattern and the Javascript language. Although this book is not aimed at total Javascript newbies, you don’t have to know that much to follow the book. For example, it explains prototypes and constructor functions.
-
Chapters 3 to 5 cover the implementation details of MVC in Javascript (one chapter for the Model, another for the Controller and the last one about the View).
-
Chapters 6 to 10 cover many practicalities of client-side web development, like dependency management, unit testing, debugging, interesting browser APIs and deployment tips.
-
The last three chapters cover Javascript libraries: Spine, Backbone and JavascriptMVC.
Additionally, there are three appendices covering jQuery, Less and CSS3.
Highlights and references
-
Chapter 10 (“Deploying”) is full of very good tips and information.
-
Both the Backbone and the JavascriptMVC chapters were brilliant, looking forward to use any of them soon.
-
All the example code is on GitHub.
-
Page 24: “The secret to making large Javascript applications is not make large Javascript applications”.
-
HJS plugin for jQuery for a nice syntax to create classes.
-
ES5-shim for browsers that don’t support Ecmascript 5 yet.
-
Chapter 2 was a very good introduction about events. removeEventListener (p. 41), stopPropagation/preventDefault (p. 43), list of properties (p. 44), load vs. DOMContentLoaded (p. 45), delegating events (p. 46) and custom events (p. 47-49), among others.
-
Reference to blog post about namespacing.
-
Object.create discussed on page 55.
-
Using URL hash for URLs on pages 82, 83.
-
Didn’t really understand the explanation for the HTML5 history API on p. 85. Alternatively, see the HTML5 history API on Dev Opera.
-
Very interesting file API on p. 103 and p. 111. Forget the drag-n-drop (reason) and the copy/paste.
-
Tips about when to load your Javascript on p. 156.
-
The JavascriptMVC chapter was brilliant; see p. 208-213 for the class syntax (nicer and more compact, supports this._super()), p. 210 for introspection and namespaces, p. 211, 212 for model attributes and observables, and p. 213 for setters. Very cool server encapsulation on p. 215. Type conversion and CRUD events on p. 218. JMVC views on p. 219. Templated actions and final example on p. 226-228.
Note that all page references are pages in the PDF file, not pages in the book!
Wrapping up
This book is packed with very practical information and a lot of code that will teach you how to write applications in Javascript. It builds up from relatively simple code to more advanced stuff, including tips, use of libraries, etc. It’s one of those books that makes you want to play with all the stuff you’re learning, and try it all in your next project.
However, sometimes the amount of code makes the book hard to read. Some parts (eg. beginning of the chapter about controllers) are a bit tiring as you have to read and understand so much code, esp. if you’re not that used to reading more-or-less advanced Javascript. It also lacks information about some important tools like Dragonfly (it almost feels like there’s nothing for developing with Opera) or js-test-driver.
In summary, this is the perfect book if you know a bit of Javascript and want to learn modern techniques and libraries that will get you started in serious client-side programming. Especially if you are one of those server-side programmers that don’t like Javascript but has to use it anyway (because despite all its warts, it’s a really nice language!). If you’re a Javascript wizard and you have been developing client-side code for years, this book may not be for you.
-
-
LeakFeed and
Sep 15, 2011
A couple of weeks ago I discovered LeakFeed, an API to fetch cables from Wikileaks. I immediately thought it would be cool to play a bit with it and create some kind of application. After a couple of failed ideas that didn’t really take off, I decided to exploit my current enthusiasm for Javascript and build something without a server. Other advantages were that I knew Angular, an HTML “power up” written in Javascript (what else?), which I knew would ease the whole process a lot, and I even got the chance to learn how to use Twitter’s excellent Bootstrap HTML and CSS toolkit.
What I decided to build is a very simple interface to search for leaked cables. I called it LeakyLeaks (see the code on GitHub). Unfortunately the LeakFeed API is quite limited, so I had to limit my original idea. However, I think the result is kind of neat, especially considering the little effort. To build it, I started writing support classes and functions using Test-Driven Development with Jasmine. Once I had that basic functionality up and running I started building the interface with Bootstrap and, at that point, integrating the data from LeakFeed with Angular was so easy it’s almost ridiculous. And as LeakFeed can return the data using JSONP, I didn’t even need a server: all my application is simply a static HTML file with some Javascript code.
All this get-data-from-somewhere-and-display-it is astonishingly simple in Angular. There’s this functionality (“resources”) to declare sources of external data: you define the URLs and methods to get the data from those resources, and then simply call some methods to fetch the data. E.g.: you can get the list of all tags in LeakFeed from http://api.leakfeed.com/v1/cables/tags.json (adding a GET parameter callback with a function name if you want JSONP). Similarly, you can get the list of all offices in LeakFeed from http://api.leakfeed.com/v1/cables/offices.json. In Angular, you can declare a resource to get all this information like this:
this.LeakFeedResourceProxy = $resource(
    'http://api.leakfeed.com/v1/cables/:id.json',
    { callback: 'JSON_CALLBACK' },
    { getTags:    {method: 'JSON', isArray: true, params: {id: 'tags'}},
      getOffices: {method: 'JSON', isArray: true, params: {id: 'offices'}}}
);
Once you have declared it, using it is as simple as calling the appropriate method on the object. That is, you can get the tags by calling this.LeakFeedResourceProxy.getTags(), and the offices by calling this.LeakFeedResourceProxy.getOffices(). And when I say “get the tags”, I mean get a list of Javascript objects: no JSON, text or processing involved. If you assign the result of those functions to any property (say, this.availableOffices), you’ll be able to show that information like so (the |officename is a custom filter to show the office names with a special format):
<select id="office" name="office">
  <option value=""> ()</option>
</select>
The cool thing is, thanks to Angular’s data binding, anytime the value of that variable changes, the representation in the HTML changes, too. That is, if you assign another value to this.availableOffices, the select box will be automatically updated to have the new set of options! But the data binding is two-way, so any changes you make in the UI also get reflected back in the Javascript variables. This further simplifies many tasks and makes programming with Angular a breeze. There are other nice things about Angular (and many nice things about Bootstrap and Jasmine of course!), but I think that’s enough for today :-)
Book summary: Prototyping (II)
Sep 4, 2011
This is the second half of my summary for the book “Prototyping” by Todd Zaki Warfel. See the first part on this blog. It will cover chapters 4-12, which talk about the guiding principles for prototyping, prototyping tools and how to test your prototype.
Principles
Most prototyping mistakes come from either (1) building too much or too little, (2) prototyping the wrong thing or (3) not setting expectations about what the prototype will be. Principles:
-
Understand your audience and intent. This is the most important principle. Once you understand them, you’ll be much better equipped to determine what you need to prototype, set appropriate expectations, determine the right level of fidelity and pick the right tool.
-
Plan a little, prototype the rest. Software systems change constantly and quickly. Plan a little and prototype the rest, so you can cope with the changing environment by working incrementally and iteratively.
-
Set expectations. This lets you avoid rabbit-hole discussions on things that aren’t important or haven’t been prototyped yet.
-
You can sketch. Anyone can draw well enough for the purposes of a prototype.
-
It’s a prototype, not the Mona Lisa. Don’t lose too much time on making it pretty. Not only is it not necessary, but it has some advantages, like making it clear that it’s not a finished product, which makes people more likely to give feedback. You need the least amount of effort to communicate your design idea, nothing more.
-
If you can’t make it, fake it. You can fake many things you can’t make with JPG files, clickable HTML files, PDFs or PowerPoint presentations.
-
Prototype only what you need. Often prototypes only cover part of a system. Even if your ultimate goal is usability testing, chances are you’ll only test 5 or 6 scenarios, so you only need to build that.
-
Reduce risk—prototype early and often. Prototyping is about making small investments with a significant return. The return can be positive, in which case you can just go ahead, or negative, in which case your risk is substantially reduced because you identified the problem soon enough. The earlier you catch mistakes, the easier and cheaper it is to fix them.
Tools
When choosing the prototyping tool, consider audience, intent, familiarity/learnability, cost, need for collaboration, distribution and throwaway vs. reusable. Notes for specific tools follow:
Paper
It’s the most versatile method. It’s also fast, cheap, easy, you can manipulate it on the fly (and even the participants can help), collaborative, not limited by prebuilt widgets or technology, and can be done anywhere and anytime, even without computers. The bad sides are that it’s hard for geographically distributed teams to use it, requires imagination and lacks visual aesthetics. Tips:
-
Include transparencies (useful for simulating roll-overs and such), post-it notes (for displaying changing states on the page, highlighting elements or dialog windows), coloured pens/markers (sketching in black/blue, errors in red, success messages in green) and scotch tape or glue stick in your kit.
-
Use pre-drawn/printed widgets. The book resources include a (kind of limited) sample Illustrator file with printable widgets for that purpose.
-
You can use transparencies for context, pop-up help (even using a marker to highlight fields).
-
To accomplish a show/hide effect, you can fold/unfold part of the paper.
-
You can simulate slide effects by having two different pieces of paper, cut one of them so the other fits (leaving a sort of “window” so you can see the other through), and moving the second one back and forth.
Presentation software
It has a low learning curve, it’s available on most computers, you can use master slides to ensure consistency, you can copy-paste and rearrange elements with drag-and-drop, and you can export to HTML or PDF if necessary. However, it has limited drawing tools (so it’s often not good for hi-fi prototypes), the interactivity is limited and the prototype has no reusable source code whatsoever. The book resources have a sample prototyping kit for Powerpoint and Keynote, and Manuela Hutter (Oslo UX book club organiser) wrote another prototyping kit for OpenOffice.org (and see the whole blog post about prototyping with OO.o). Tip: you can simulate fade effects in presentation software by having two slides (one with the element highlighted, the other without) and setting a “dissolve” transition effect between them.
HTML
There are several ways to approach making prototypes with HTML. You can simply slap up a few images and use image maps to link them to each other, you can use HTML exported from some other tool, or you can write “production-level” HTML for a prototype that will contain potentially reusable code. The strengths of the last approach are that it’s platform-independent, free, portable and “real” (in case we’re prototyping a web app), it helps gauge feasibility, it’s modular (which helps productivity) and collaborative (if we split the work into different files), and it produces reusable code with unlimited potential. The downsides are that it might take more time and effort to make a prototype this way, and that it’s not easy to make annotations on it.
Testing your prototype
Common mistakes
-
Usability testing is a process, not an event. There’s also planning, analysis and reporting, not just “sitting in front of a person with a computer”.
-
Poor planning. The first question to ask is “why am I doing usability testing?”. Determine who you want to test, who is going to use the product or service, what their behaviours are, and why they would use the product in the first place.
-
Not recruiting the right participants. The whole point of the testing is seeing how the design works in the eyes of the people who will use it. If you recruit the wrong people, you will get the wrong data.
-
Poorly-formed research questions. This is one of the biggest challenges. You have to get your answers without asking explicitly. Instead of telling participants to plan a dinner + movie, you can ask them to look for something to do with friends. The point is that we shouldn’t make them use the application in the way we want, but let them use it in whatever way they normally would.
-
Poor test moderation. A good moderator balances being a silent fly on the wall, watching, with asking enough questions to keep the test going; knows how to extract just the right level of detail; knows when to let the participant explore, and when to pull them back; and knows how to get the answer to the question they want without asking it directly.
-
Picking the wrong method to communicate findings and recommendations. Nobody is going to read a 10-20 page report. Short presentations with a summary of the findings typically work well. Including video clips showing the highlights of the test is useful.
Steps to conduct a usability test
-
Preparation. Decide, with your team, what are the key characteristics and behaviour you’re looking for, and also the ones you don’t want. If you’re going to record audio or video, have a waiver ready for the participants. Knowing the intent of the test will inform the appropriate scenarios, research questions and prototype. Limit the test to 45-60 minutes: enough time to test 5 or 6 scenarios while not exceeding the attention span of the participants.
-
Design test scenarios. They are either specific, to determine if a user can access a concrete feature of the site, or exploratory, to gain insight into a participant’s overall approach to reaching her goal. Focus on the goal, allowing different activities and processes to reach it.
-
Test the prototype. Getting feedback from participants is easier if they feel comfortable. Once they’re comfortable, ask them about their experiences related to whatever you’re going to test that day. You can use that information to provide context.
-
Record observations and feedback. Have one person moderating, and another person taking notes remotely. It’s better to over-record than to under-record. Use a rating scale of, say, 1-5 for each scenario. Both moderator and participant fill it in. The former should be based on measurable elements like time and effort. The latter is more subjective, focused on the satisfaction with completing the task. Try to filter out any variable not related to the system you’re testing. For example, use the operating system the participant is most used to.
-
Analyse and determine next steps. When you finish, you typically have a list of bigger issues in your head. This list is a starting point. Analyse all your data points, and find themes. Look for frequency, severity and learnability. It’s better to use a method that combines significance to the customer, value to the business and technical feasibility of fixing the issue.
And that’s the end of my summary.
-
-
Book summary: Prototyping (I)
Sep 3, 2011 onThis is the first half of my summary for the book “Prototyping” by Todd Zaki Warfel. See my review of the book in Goodreads (one-sentence summary of the review: “a tad disappointing”). It will cover the first three chapters, “The Value of Prototyping”, “The Prototyping Process” and “Five Types of Prototypes”. The second half will cover the guiding principles for prototyping, tools and how to test your prototype.
EDIT: see also the second part.
Value of prototyping
A prototype is a representative model or simulation of the final system, which goes beyond show & tell and lets you experience the design: if a picture is worth 1,000 words, a prototype is worth 10,000. As the complexity of a system increases, the cost-to-benefit ratio of prototyping increases dramatically. Some technical requirements can’t be captured in a prototype, but a short and simple supplemental document can cover them. Also, it often takes less effort to produce a prototype than to create a detailed specification document + annotated wireframes. Disadvantages of specification documents:
-
Nobody wants to read a 60-200 page document.
-
If you can’t get them to read it, you won’t get them to fully understand it.
-
It hides the big picture.
-
Words leave too much room for interpretation.
-
They don’t encourage play (which prototypes do), which makes people understand the system better.
[There are some notes about experiences with prototypes, which I didn’t find that convincing. In summary, prototypes are much better and cost less effort to make than (lengthy) documents, at least for the initial understanding of the system before it’s built. —Esteban]
Process
It’s commonplace in architecture and industrial design, so why not in software engineering? The process in a nutshell is (1) Sketching, (2) Presentation and Critique, (3) Modelling/prototyping and (4) Testing. Sketching is present through the whole process.
Sketching
The goal is to generate many different concepts and put them into some tangible format. The point is not to flesh the ideas out fully; we’ll refine them later. It’s a good idea to limit the sketching time to, say, 10-30 minutes. [Unfortunately, there are many things that aren’t clear to me after reading this part: who is part of the sketching process? Is it a meeting? Is anyone watching while someone else sketches? If so, who? How many people sketch, and how many sketches do we produce concurrently? —Esteban]
Presentation and critique
The goal is to find the best ideas. This step is focused on quality, and it’s arguably the most important step in the prototyping process. You present the strengths of your sketch, and your peers highlight the parts that need more work or clarification. Guidelines:
-
Keep it short: around three minutes for presentation and two for critique.
-
Focus presentations on the strongest parts: If you need more than three minutes to present your sketch, there’s probably something wrong with it.
-
Critiques mention both good and bad sides: mention two or three things that are good, and one or two to improve.
-
Take notes: it’s best to take the notes on the sketch itself.
Prototype
After the last step, you’re left with the strongest concepts. These are the ones you’re going to prototype. Always consider the following: (1) use a tool/medium you’re comfortable with; (2) make sure you have the ability to communicate effectively with the audience or consumer; (3) consider how much time you have; (4) consider the level of fidelity you need. Once you have a prototype, run the presentation and critique again, but with longer times. Tip: if you project your prototype on a whiteboard, you can take notes on that whiteboard easily.
Test
Testing can be done in two different ways: with clients and with end-customers. When testing with clients, run a presentation and critique for 1.5-3 hours, but instead of making a list of revisions, simply sketch the changes. This makes everyone walk away from the meeting on the same page. At the end of the session, the client gets a copy of the prototype to play with. After two or three days they typically come back with some more feedback.
Testing with end-customers is a standard usability test, with 8-12 participants, 5-6 scenarios, audio-video capture, and analysis and reporting of the results afterwards.
Types of prototypes
[I don’t even know why this chapter is called “types of”. They feel more like “uses of” to me —Esteban]
-
Shared communication. Get a designer and a developer to sketch ideas together. Benefits: it opens the line of communication between developers and designers, it teaches them how to communicate with one another, and it builds relationships. It’s a great team-building exercise. Tip: record a video of yourself using the prototype.
-
Working through a design. If you are going to make a big redesign, you need to test it first, explore the different possibilities, work through them, test them and refine them.
-
Selling your idea internally. Prototypes work as a tool to sell the feasibility and value of your idea.
-
Usability testing. A prototype allows you to run usability tests and make data-driven decisions.
-
Gauging technical feasibility and value. Simulations are good for getting both management and engineering to buy into a concept and proving whether it can be built.
And this is the end of the first half of my summary. I’ll post the second half soon.
-
-
Humble Indie Bundle #3
Aug 1, 2011 onI had seen the Humble Indie Bundle before, but it wasn’t until HIB #3 that I decided to actually buy it. I learned about it through the Electronic Frontier Foundation (who else!), and some of the games looked neat. Of course, all of them have a native version for Linux and there is no DRM whatsoever. The other two reasons why I bought it: (1) you set the price, and (2) you donate part of the price to the EFF and/or Child’s Play (you decide the exact split).
One of them, unfortunately, doesn’t actually work on my machine (my video card, or my driver, is too crappy), and one of them I haven’t even tried because I’m not that interested. However, “And Yet It Moves” and “Crayon Physics Deluxe” are fantastic!
Apart from saying that you should buy them, I wanted to write this blog post to give a solution to a problem I had running Crayon Physics Deluxe. When I tried to run it for the first time, I got this error message:
Cannot mix incompatible Qt library (version 0x40703) with this library (version 0x1040702) Aborted
There seems to be a conflict between the Qt version I have installed and the copy of the Qt library that comes with the game itself. The solution for me was simply to rename the game’s lib32 directory to something else.
Back to playing… :-P
-
My experience writing Opera extensions
Jul 9, 2011 onApart from a couple of widgets, I have written two Opera extensions. The first was Show Filtered Content, a proof of concept of how to use the Opera Link API, and OAuth in general, from Javascript. Now, due to a couple of coincidences (isn’t life all about that?), I decided to write Meme Smileys: a very silly extension to turn text smilies into small pictures taken from popular memes like the rage guy and such. It’s my own version of a Chrome extension I had seen. Partly I wrote it because I wanted to have it, partly for the lulz, partly to learn a bit more Javascript, and partly to use Jasmine more. The surprising thing was that writing such a trivial, small extension did teach me a couple of things:
-
The NodeIterator/NodeFilter Javascript API. I ended up not using it for the extension, but it’s good to know.
-
The DOM event “DOMNodeInserted”, very useful when you want to do some work based on new elements “appearing” on the page (which is more and more common).
-
Javascript regular expression lookahead/lookbehind. The latter, which I needed, is not supported by Javascript, so I had to use a trick that mimics lookbehind to get what I wanted (there is a sketch of the general idea right after this list).
-
It gave me a bit more experience in Test-Driven Development. Which reminds me: if you are interested in Javascript and TDD and you happen to understand some Norwegian, have a look at the excellent zombietdd screencast series!
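Since the items about DOMNodeInserted and the lookbehind workaround are the most concrete of the bunch, here is a minimal, purely illustrative sketch of how they could fit together. This is not the actual Meme Smileys code: the smiley handled, the replacement text and the overall structure are made up.

// Hypothetical sketch: replace ":)" in newly inserted text nodes, but only
// when it is not preceded by a word character. JavaScript has no lookbehind
// here, so the regular expression captures the preceding character (or the
// start of the string) and puts it back in the replacement.
function replaceSmileys(text) {
  return text.replace(/(^|[^\w]):\)/g, function (match, prefix) {
    return prefix + "[smiley]"; // the real extension swapped in a small image
  });
}

// DOMNodeInserted fires for every node added to the document (it is a
// deprecated mutation event nowadays, but it was the usual approach then).
document.addEventListener("DOMNodeInserted", function (event) {
  var node = event.target;
  if (node.nodeType === Node.TEXT_NODE) {
    node.nodeValue = replaceSmileys(node.nodeValue);
  }
}, false);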
As always, the code is on GitHub, so you can read it, fork it, make your own extension based on it or whatever you want.
EDIT: I forgot to mention that the meme smiley extension got quite popular because it was featured in the Choose Opera blog (twice!) and then “recommended” in the Opera extensions homepage :-D
-
-
First steps with Scala
Mar 28, 2011 onSo I had promised to write a bit more about my initial experience with Scala. Here it is.
In my previous post I had explained why Scala in the first place, and I had mentioned that almost all of my knowledge comes from having read a bit of “Programming Scala”. Some of my highlights:
-
It has a REPL for experiments.
-
Immutable variables (see “Variable Declarations”).
-
Type inference (see “Inferring Type Information”).
-
The Option type (see “Option, Some, and None: Avoiding nulls”).
-
The handy import syntax (see “Importing Types and Their Members”).
-
The flexible syntax that allows for DSLs (see “Methods Without Parentheses and Dots” and “Domain-Specific Languages”).
-
The flexible for loop (see “Scala for Comprehensions”).
-
Pattern matching (see “Pattern Matching”, and make sure you don’t miss the examples with types, sequences, tuples with guards or case classes).
-
Traits (see chapter 4, “Traits”).
-
The nice XML library that comes with Scala (see chapter 10, “Herding XML in Scala”).
And note that I haven’t read that much of the book. In particular, I expect to like a lot of things about the functional aspects of Scala (described in chapter 8, “Functional Programming in Scala”).
That’s about learning the basics of the language. When I tried to make a small application to access the Flickr API, things worked really smoothly, which is always encouraging: using scalaj-http for the HTTP requests is a breeze, and parsing the resulting XML to get the interesting pieces was also pretty straightforward. Another nice surprise was the ScalaTest library.
Only when I started writing a bit more “real-world” code did I have to use some Java libraries. And I have to say, that was by far the worst part of programming in Scala. I only had to make some very simple date calculations, but that turned out to take more or less as long as the rest of the code I had written, or at least to cause much more frustration. To write that small piece of code, I had to learn some API that didn’t make any sense to me; it took me a while to find the right, non-deprecated way of doing things; and all along I felt that the designers of that API were more focused on how proud they were of their “correct”, decoupled design than on making it simple and practical for the actual programmers using that library. My impression of the JavaMail library wasn’t actually much better, but at least the first thing I copied and pasted worked well enough.
And before I finish, I wanted to mention a couple of things about tools. Although I’m currently using Emacs, I did give Eclipse and NetBeans a try. Those tools are probably not for me, so take my experience with a pinch of salt, but I found them really confusing, or they didn’t work at all for some reason. However, Scala mode and ensime for Emacs worked well for my (for now) limited needs, and frankly, I’d rather stay with Emacs than have to edit code in Eclipse or some other editor.
I still have several things pending, like finishing the Scala book, trying out sbt, experimenting more with ensime and the Scala mode, and writing a library to access the Opera Link API. But it looks like it’s going to be a lot of fun :-)
-
-
The quest to learn a new programming language
Mar 20, 2011 onSome time ago I realised I wasn’t all that excited about any programming language. All the languages I knew were, for some reason or another, annoying and I didn’t really feel like having any pet projects. That, combined with the idea that learning new stuff is good, pushed me to try and learn some new programming language.
Choosing one was actually kind of a problem: it had to be “mainstream” enough that it wouldn’t just be a theoretical exercise (I wanted to use it to write actual code, not just to learn it) and yet different enough to what I was used to. It also had to be “stable” enough that I didn’t have to install it from sources or follow the development of the compiler/interpreter. That didn’t really leave me a lot of options, I thought. The only ones I could think of, really, were Haskell, Go, Lisp and Scala.
Haskell I had tried to learn, and I more or less failed. I did learn the basics and I tried to start writing a Sudoku solver with it, but I got demoralised quite quickly. I felt it was a bit too exotic for actual work, and it was a bit of a downer that it took me so long to write some simple unit tests for some basic code I started to write (I couldn’t get my head around the type system and I was fighting with something really silly for many hours). Go, well, I didn’t even start learning because the Go installation instructions totally freaked me out. Not that I didn’t understand them, but the idea of fetching the sources of the compiler just to learn a programming language turned me off. And don’t get me started with the name of the compiler (dependent on the architecture, no less!), the separate linking step or the 90s-style “.out” file name for the binaries. So that left me with Lisp and Scala.
Lisp, I did know a bit. I had learned some basic Lisp at the university, and liked it quite a bit back then. I had also read part of the excellent “On Lisp” and I thought it was really cool. I still had my doubts I could use it for actual work, but I was willing to give it a try. So I borrowed the (also excellent) book “Land of Lisp” from a friend and started reading, trying to learn a bit. The process wasn’t too bad, but I had the increasing feeling that in the end it would be too exotic for me, and I found the syntax too annoying. I was learning some practical Lisp, but it was taking really long to learn interesting things. And when those interesting things came, I felt they were too obscure for me, and I needed a lot of thinking to even understand the examples. But I decided to give it a try anyway, and I went ahead and tried to write some simple code to use some web service (the final goal was to write some example code for the Link API). In this case, the deal breaker was that the OAuth library I found depended on an obscene number of Lisp packages, many of which were recommended to be downloaded directly from GitHub (srsly? fuck this shit).
That left me with Scala. I had mixed feelings about it. At a first glance it looked interesting, but it was compiled, related to Java and more or less centred on concurrency. I tried to learn Scala by reading “Programming Scala”, which turned out to be more fun and productive than I had anticipated. I’m considering buying the “dead tree” version, but I have so many books to read that I don’t know when I’ll do that. So, what did I like about Scala so much? It made me feel like when I learned Ruby: it had fairly powerful features (pattern matching, traits, type inference, others I forget about) but with a readable, easy to understand syntax. It’s like the Robin Hood of programming languages, stealing features only available in impossible-to-understand languages, and bringing them to the masses. It also felt good liking a statically typed language, I didn’t think that was possible for me anymore :-)
But enough for now. Some other day I’ll write some more details about Scala and about my pet project FlickrMemories.
-
GTAC (two months late)
Jan 5, 2011 onIt’s been more than two months since the GTAC and I never wrote anything about it in this blog, so I thought I’d write some words so I could cross it off my to-do list.
As you can imagine, the conference was great. It was my first big conference and my first time outside of Europe, so it was doubly exciting for me. And even though there were many interesting talks, meeting all that bunch of testing nerds was much better. It shows that Google really worked hard to make people socialise.
But let’s start from the beginning. Probably around a year ago now I had written a talk about testability that I had submitted to EuroSTAR 2010, but it had been rejected. That had been my third rejection I believe, so I started losing hope that I’d ever speak at an international conference. However, relatively shortly after being rejected Google announced this year’s event, and the theme was “Test to testability”, so I said “what the hell!”.
They said from the start that it would be invitation-only, meaning you had to apply even for simply attending. That was actually pretty cool, because the idea was that Google would choose the attendees, and once selected and notified, those attendees would vote for each other’s talks to decide what the program would be. It would also mean that attendees would be chosen because they had something interesting to add to the conference, not simply money to pay the registration fee.
And one day, right before leaving for a short vacation, I received the news that I had been chosen to attend. At that point, of course, I had no idea if I would actually talk, but just attending was awesome and I was really happy and a bit surprised (I was going to a conference! in India!). A couple of days later I received a lot of proposed talks to rate. That was pretty exciting, and seeing a lot of very interesting topics was kind of cool, because it was so promising, but also a bit discouraging, because I thought the chances of getting chosen were pretty low. Still I didn’t lose all hope, and when the deadline came, I was notified that I had been chosen to talk. At that point I was pretty surprised, but when I kept reading and saw that there were only 8 talks selected (+ 3 keynotes), then I was pretty shocked.
The rest of the story you don’t have to imagine, because in the typical Google fashion, all the conference material is available on the website (both videos and slides). As I imagine that the conference page link will break sooner or later, I’ll just give you the official GTAC 2010 YouTube playlist. My favourite talks were (in order of appearance):
-
Twist, A Next Generation Functional Testing Tool - really nice tool and very good demo, although not being open source and being for Eclipse was kind of a let down
-
The Future of Front-End Testing - kind of everything a professional QA Engineer should know about front-end testing, but it’s not always the case; I thought it was kind of basic, but it was a useful reminder and listening to Simon Stewart is just fun
-
Flexible Design? Testable Design? You Don’t Have To Choose! - great talk with unit testing tips/patterns; one of the nice things is that those patterns are not only for statically-typed languages
-
Crowd Source Testing, Mozilla Community Style - very nice talk about making the community help you testing complex products, with many examples and details
I guess I should also mention “Measuring and Monitoring Experience in Interactive Streaming Applications” and “Turning Quality on its Head”. The first, because I thought it was a cool story about how hard it is to find bugs that are important for users, but are vague and hard to reproduce. The second, mostly because of the tool that James shows off. You can see screenshots and an explanation of it from minute 52.
About my own talk, “Lessons Learned from Testability Failures”, I was really worried that I was going to freak out and block on stage. After all, I was used to talking in front of 5, 10, 20 or maybe 30 people. Speaking in front of around 100, knowing that I was being recorded for YouTube (and that a lot of people interested in the subject would watch those videos), was quite scary in itself. And then there was the other factor: I usually speak to people who (theoretically) know less than me about that concrete subject, but it wasn’t like that at all in this case. However, people there were so cool and friendly that I felt less nervous than I usually feel. Watching the video, I do look nervous the first few minutes, but after the introduction and such it felt really good. Kudos to the organisation and the attendees for being so open, cool and friendly. Meeting all that crowd was clearly the best part of going to the conference.
All in all, it was a great experience and I made a lot of contacts and friends, and I’m looking forward to attending another similar conference (maybe next year’s GTAC?). We’ll see.
-
-
A month of Colemak
Oct 17, 2010 onI have been using Colemak for one month, so I thought it was a good moment to wrap up the experience so far and make some sort of summary. Things I have discovered:
-
Keyboard stickers are useful: I thought that I’d just learn the keymap and never look at the keyboard, so I wouldn’t need them. I was only partially right: not having stickers, sometimes I would look at the keyboard, and my brain would start doing calculations of equivalences like “J” (printed on the key, that is, in QWERTY) is “N” (in Colemak) and so on, particularly for keyboard shortcuts. Having Colemak key stickers saved me some brain damage and also helps me get the “shape” of the keyboard into my brain.
-
I’m typing more slowly and with more mistakes than I expected after one month. I can think of a bunch of possible explanations, see below.
-
I’m a decent typist now in Colemak I think, probably faster than average/slow typists. I just used to be quite fast in QWERTY.
-
Learning the keymap itself is one thing, but to type fast your brain always has to be a couple of keys ahead of your fingers, so you need to learn letter clusters or even whole words. That’s what I had done in QWERTY really, and it’s most of what I’m currently missing in Colemak to be a fast typist: “spelling out” words is a very slow way of typing.
-
You have to relearn a bunch of keyboard shortcuts, and some of them aren’t Colemak-friendly (say, hjkl for moving in vim and less). It’s kind of a bummer, but nothing too serious.
-
When I type on a QWERTY keyboard it definitely feels much more “jumpy” and English words feel harder to write. I’m still using QWERTY for Spanish and Norwegian so I wouldn’t know about them (I used to have three keymaps: US QWERTY, Spanish QWERTY and Norwegian QWERTY; now I have simply changed the US QWERTY to standard Colemak).
-
I think Colemak is brilliantly designed, because it combines Dvorak’s good key distribution (for English), particularly the powerful home row, with QWERTY-like, shortcut-friendly features. When I compare to other alternative layouts, I realise that Colemak is not only designed for speed, but it’s very practical for the real (read: QWERTY) world.
-
If you are going to use Caps Lock as Ctrl in Colemak on Linux, and you have problems making the key do just Ctrl, instead of both Ctrl and Backspace, you might want to open /usr/share/X11/xkb/symbols/us and comment out the relevant “key” definition. That was the only way I could make it work.
So why do I think this hasn’t gone as quickly as I expected? My list of lame excuses is (lamest at the end):
-
With Colemak I learned “real” touch typing, something I had never learned and that wasn’t completely natural to me initially.
-
Part of that touch typing change was that I used to place my right hand incorrectly (with my index over QWERTY’s “h”, not QWERTY’s “j”). The reason was that I used vim, so I placed my right hand in position to cover hjkl, to move around easily.
-
I thought relearning vim would be a pain in the ass, so I decided to go ahead and learn Emacs. It might sound crazy, but I thought that keeping my muscle memory intact for vim/QWERTY, and developing my muscle memory for Emacs/Colemak, would be easier than keeping both vim/QWERTY and vim/Colemak in my head.
-
I haven’t been a “good student” and I haven’t really “practised properly” (i.e. with KTouch) that much: I practised properly the first few days, then I just started using the keyboard as my main keyboard. That might have been an error.
-
Although that’s a quite recent change, I have started using Caps Lock as an extra Ctrl key (common for Emacs users to reduce hand movement). That involved more thinking and adapting.
-
I haven’t used Colemak exclusively really, because e.g. my phone has a QWERTY keyboard… until a couple of days ago, when I found a Colemak (and Dvorak) keymap for AnySoftKey. I have some issues with it so I might drop it from the phone, though.
-
I’m too old for this :-(
All in all, I’m quite happy that I switched to Colemak: it really feels easier on my fingers and now that I have “seen the light” it feels stupid to go back to bad old QWERTY (for English at least). I think I’ll retain most of my QWERTY skills anyway, as both keymaps don’t seem hard to keep in your head/fingers, so in the worst case I can just go back to QWERTY without much trouble. And I even learned Emacs in my journey!
-
-
Book Summary: Storytelling for UX (3/3)
Oct 12, 2010 onAnd this is the last part of the summary of “Storytelling for UX” (first part, second part). In this last part I’ll cover the tips to create stories. At the end I’ll do a mini-review of the book and add some extra comments.
How to create a story
Stories have four elements: audience, ingredients, structure and medium.
Audience
There are two important relationships in stories: story-audience and you-audience. About the first, you want to include details that fill the gap, and also stories are a good way to make the audience see a different perspective by feeling it. Finally, endings are important. They should be memorable and settled (“take them home”).
Ingredients
See checklist on p. 209.
-
Perspective. There isn’t a neutral POV in stories. Types of perspectives are realist (3rd person, “absent” author), confessional (focused on the author’s experience) and impressionist (mixes descriptions of events with a strong structure). The last intends to spark ideas/actions; while such stories can have an ending, they might end with an implicit question. An easy way to add perspective is letting the main character do the talking.
-
Characters. One of the reasons why UX stories are useful is because they add specificity and texture to the usually one-dimensional view of users. Also useful to highlight needs outside the mainstream. Tips to build characters: (1) choose (only) details that add meaning; (2) show, don’t tell (show in action instead of describing traits); (3) set up “hooks” that you can use later in the story; (4) leave room for imagination.
-
Context. Five types: physical (time, date, location, location scale), emotional (how characters feel), sensory (5 senses), historical (“when phones had dials”), memory (storyteller’s memory, flashbacks).
-
Imagery. Things that make us picture the story (example in p. 205). Don’t use too much!
-
Language. Tips: (a) speak in the language of the characters, (b) make the story active, (c) focus on telling the story, not describing, (d) don’t judge characters, context or events.
Structure/plot
Structure is the framework/skeleton of the story. Plot is the arrangement of the events. Strong structures help the audience, the author and the story (p. 215). See types of stories on p. 216. “Checklist” for good structure and plot on p. 235.
Medium
Four big media: oral (mind the gap to written, p. 243), written (make the point explicit, keep it short, make use of cultural cues as in p. 253), visual (comics and storyboards work, see p. 258-260), multimedia/video.
See tips on how to integrate stories in reports on p. 265 and p. 266. See strong sides of different media on p. 272.
Mini-review and conclusions
I quite liked the book, although I admit that the last part (the one summarised in this post) was a bit disappointing. I guess it’s hard to give tips about something as complex as creating a story, in a book. The book has a very clear structure and it’s easy to follow and read, which helps in figuring out what to read, what to skim and what to leave for later.
Another thing that really struck me while reading the book (the second book I read following the tips from “How to Read a Book”) is how little I used to understand of the books I read. I now go through the book three times: one to get an idea of the structure and the most interesting parts, one to read the content, and one to review and make a summary. So even while I was reading it for the last time, I made sense of things that I hadn’t realised while reading the book (and that was after knowing the structure, knowing what to expect from each chapter, and having made some preliminary notes!). Not only that, but I also feel that I’m much more critical with what I read and I compare it much more with what I think myself.
If you aren’t doing it already, I strongly recommend that you give those tips a try…
-
-
Book Summary: Storytelling for UX (2/3)
Oct 11, 2010 onThis is the second (and longest) part of my summary of “Storytelling for UX” (see the first part). It will cover how to fit stories and storytelling into the UX design process.
There shouldn’t be “a storyteller” in the team, as many as possible should be familiar with the technique. Prototypes based on stories allow exploration of new ideas, esp. if they’re big changes.
There are several parts of the UX process where stories are useful:
-
Collecting input from users. You’re already hearing those stories. Do it consciously.
-
Exploring user research and other data. Summary of hard data.
-
Experimenting with design ideas. See stories that help launch a design discussion (type) and the role “spark new ideas”.
-
Testing designs. They can evaluate if you have stayed true to original needs and if they will work with real users.
Collecting input
Being in the user work environment helps noticing things people don’t mention. When you just arrive, everything is unfamiliar. Take notes then. If you can’t talk to your users, you can get some limited info from: search/server logs, customer service records, people who do training and sales demos, market research and satisfaction surveys.
Getting people in groups can help make people talk (build on each other). Also asking people to recall specific events is really useful.
Tip: be open to tangents, but don’t waste too much time in them if you don’t see value. Also, a user avoiding talking about what you want is information, too.
Tip: Use a structure for the interview (first closed questions, then open), see p. 82. Try to have the interview in the context the product will be used.
Selecting stories
Characteristics of good stories:
-
Heard from more than one source
-
With action detail
-
Make user data easy to understand
-
Illustrate an aspect the UX team is interested in
-
Surprise or contradict common beliefs
They should help explain something about UX beyond data, bring data to life. They should also connect with other stories and resonate, leading to action.
Experimenting with design ideas
Three possible uses of stories: brainstorming, concept and specification. When no user research is available, you can brainstorm to create user stories. See a good technique/game for it on page 111.
When you do have user research, you can develop those stories. For that, some rules: (1) defer judgement, (2) encourage wild ideas and (3) build on the ideas of others. See adaptation of the game for this case, p. 118. Concept stories should include: (a) focus on activity set in a specific context, (b) description of motivations that trigger action, (c) describe the characters well enough to set them in context.
Specification stories are useful to summarise results. They are included in specs. They keep the real-world context available for reference.
Testing/evaluating designs
Three uses of stories: create scenarios/tasks for usability testing, serve as guide for expert reviews, and quality testing.
If in usability testing you ask the user first what her interests are, you can turn that story into a usability test task.
Stories, and personas built from them, are very useful to set a context for expert reviews. Give each expert a persona and make them try to complete a task from that persona’s POV.
[I didn’t really get the “quality testing” part, whatever that means, so I don’t have notes about it]
Sharing stories
When communicating with people, stories get the audience’s attention, set context and inspire action.
Listening exercises make you understand your audience, and make them understand how diverse/similar they are.
There are three typical types of audiences:
-
Strategic leaders: generate and maintain a common vision (p. 143). Things that work for them: identify point of pain and offer a solution, identify gap in market and show how to fill, show new approach by reconfiguring common/existing components, and identify UX trends and show impact on business.
-
Managers: have a mission and have to make decisions (p. 146). Don’t have time to spare, prefer short meetings to brainstorming sessions. If you bring bad news, show why it’s important to care. Don’t go into much detail.
-
Technical experts: implement a vision (p. 149). Can be difficult to reach them with stories, esp. if not grounded in details. Tips: (a) use representative characters and situations and be ready to back up with hard data, (b) make the action of the story specific and tangible, (c) keep the story on track, (d) use technical terminology accurately.
And that was the end of this part of the summary. In the next and last post I’ll cover the tips about creating stories and will write some sort of mini-review and conclusions.
-
-
Book Summary: Storytelling for UX (1/3)
Oct 10, 2010 onThis book is the first one chosen for Oslo’s UX book club. It was a quite interesting book about using stories and storytelling techniques in different steps of the User Experience design process. The following is the first part of my (long) summary of the book. The summary is mostly intended to remind me of things I read, but probably/hopefully it will be interesting and useful to others. As the book is more or less divided in four parts (introduction, listening, how to fit stories in the process and how to create a story), I’ll cover the introduction and the notes on listening in this post, and will leave the other two parts to other posts. Edit: see parts two and three.
Introduction (chapters 1-2)
Stories help keeping people at the center (p. 2). There are different types of stories (p. 5):
-
Those that describe context/situation: describe the world today. Not only sequence of events, but also reasons and motivations.
-
Those that illustrate problems: show a problem that a new product or design change can fix. They should describe it in a way that opens the door for brainstorming.
-
Those that help launch a design discussion: starting point for a brainstorming session. Enough detail to make sense but leave room for the imagination.
-
Those that explore a design concept: explain/explore idea or concept and its implications for the experience. Helps shape the design by showing it in action.
-
Those that prescribe the result of a new design: describe the world as it will be in more detail. Similar to the 1st, but describe a user experience that doesn’t exist yet.
Interesting quote in page 10, with the message “until you hear a story, you can’t understand the experience”.
Stories are interactive and change with the audience (p. 14). They not only describe actions, but also add context and the why (motivation). There is a fine line regarding how much detail to include about motivation, because of shared cultural understanding and other things (p. 17, 19).
Stories have different roles:
-
Explain: give context and sensory experience, not just events. This is different from use-cases.
-
Engage the imagination: surpass linear logic and evoke new ideas.
-
Spark new ideas: as we fill in the gaps, we can hint details but let people come up with their own ideas.
-
Create a shared understanding.
-
Persuade.
In any case, stories are not “made up”: they’re based on data.
Listening (chapter 3)
Really listening to users (e.g. in interviews and such) gives you access to a lot of info you can’t get anywhere else. Open questions are very important for this. Giving time to answer sometimes gives people time for second thoughts (not just what they think you want to hear), which has more value than the first reply. Also, pay attention to the context, people forget to mention “obvious” (for them) everyday facts.
Practising active listening is very important, see the following links:
And that’s it for the first part. Stay tuned for the rest of the summary.
-
-
Trying out Colemak (keyboard layout)
Sep 22, 2010 onSo on Sep 14th someone shared a link to this amazing game, Biolab. The game is not that amazing by itself, but it’s pretty impressive that it’s built with web standards (HTML5 and Javascript). No Flash or any other plugins required. And if you’re impressed by that, the level editor showcased in the making-of video will totally blow your mind.
But I digress. After playing the game a bit, I had a look at the author’s blog and found an entry about an alternative keyboard layout, Colemak. Like the blog author, I hadn’t tried Dvorak because it looks so hard to learn and I wasn’t sure how much it was going to change my typing experience. I mean, look at it, it feels totally “upside down” coming from QWERTY/QWERTZ (where else can you come from?).
I would normally totally ignore that post, but some claims made me very curious:
-
Easy to learn (relatively close to QWERTY)
-
Tries to be “compatible” with QWERTY (common letters used in shortcuts remain in the same positions, notably Ctrl-Z, Ctrl-X, Ctrl-C, Ctrl-V, Ctrl-Q and Ctrl-W)
-
Designed to make you stay in the “home row”: less movement would theoretically mean more potential speed and of course less chances of carpal tunnel
Although I’ve only been using it for a week and I still suck at it, my impressions so far are:
-
It’s impressive how relatively easily I got used to it (I’m still terribly slow and make many mistakes, but I can type without looking at the keyboard)
-
There are many English words that you can write exclusively or almost exclusively within the home row: actually all typing exercises are typing real words, never gibberish
-
It really feels like I move my hands much less when typing
-
It seems possible to keep both QWERTY and Colemak in your head (important because there will always be cases where you need to use QWERTY)
The only downside so far is that using vim felt just too hard (for starters, I couldn’t use hjkl for moving anymore, and I didn’t feel like relearning the editor really), so I decided to try and learn Emacs. That is, I’ll try to keep vim as my QWERTY editor and Emacs as my Colemak editor. Let’s see how that will work out.
-
-
Facebook privacy scanner (ReclaimPrivacy)
Sep 5, 2010 onSummary: there’s a simple tool that will tell you which Facebook sharing options are “too open” in your account. I’d like you to help me by trying it out and telling me what you think (if you had problems using it, if you would like extra/other information to be shown, if you found any bugs, etc.). Skip to “how to use it” below if you’re not interested in the details for developers. Thanks!
Some time ago I discovered a neat Javascript tool called ReclaimPrivacy. It was a very simple program that scanned your Facebook privacy settings and told you if you had “too open” settings so you could review and fix them. I really liked the tool and thought it was a great idea, but after Facebook changed the layout of the privacy settings, the tool stopped working.
Weeks passed and the tool didn’t get any update, so I decided to step in and try to help the original programmer adapt the tool so it worked again. The ReclaimPrivacy code is in GitHub so it was pretty easy to make my own fork and start hacking away. It didn’t take me long to adapt the first things to the new privacy settings layout, and after some more time I was much more comfortable with the code, had made more things work, added tests and even added new features. Now that it’s starting to get close to something we could release as the new official ReclaimPrivacy version, I’d like your feedback.
How to use it: add a new bookmark for this link. You usually just have to drag and drop it to your browser toolbar, or alternatively add a new bookmark (typically you can do that by pressing Ctrl-D) and make sure the address is the above link. Go to the ReclaimPrivacy help page if you have trouble (but use my link, not the one provided there!). Once you have the bookmark, go to Facebook and click on the bookmark. It will show you some information about your Facebook privacy settings on top of the page. Just leave a comment here or drop me an e-mail with your opinion, thanks! You can skip the rest of the post if you are not interested in Javascript programming and/or software automated testing ;-)
During my hacking I made a lot of different changes: I split the source file into several different files, I made the code (more) testable, I added tests, and I added more features. I’m really into testing and testability, so one of the first things I did with the code was trying to decouple it from the network calls so I could write tests for it. As you may know, I think that code that doesn’t have tests is very hard to work with, and I even consider it’s not “true code”. Now, I’m no Javascript expert, so some of my techniques might not be very… idiomatic. That said, some of the code change highlights you may be interested in:
-
The getInformationDropdownSettings method, renamed to getSettingInformation, is now shorter, more readable, more testable and has more features. The changes are: (1) making it receive an object with the relevant part of the DOM, instead of a window object; (2) supporting, in principle, any kind of setting, not only dropdowns; (3) allowing each setting to have its own idea of what “too open” means (see the settings array); (4) allowing the caller of the method to specify its own list of recognised settings and acceptable privacy levels; (5) passing the number of open and total sections to the handler, instead of just a boolean stating whether or not there’s any “too open” setting. (There is an illustrative sketch of this idea right after this list.)
-
I made the old getUrlForV2Section more testable by extracting the most interesting (read: likely to break or need maintenance) code to its own method, _extractUrlsFromPrivacySettingsPage, and making the new getUrlForV2Section work with both real URLs (checking Facebook with an Ajax call) and fake HTML dumps representing what those URLs would return.
-
I made the old withFramedPageOnFacebook, a very important method used in several places, more flexible by accepting not just URLs, but also functions or data structures (new withFramedPageOnFacebook).
-
Now we have some basic tests (with fixtures even), without which doing some of these changes would have been such a pain that I wouldn’t have bothered making them in the first place.
http://github.com/emanchado/reclaimprivacy/blob/master/javascripts/utils.js#LID42
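As a purely illustrative sketch of the settings-array idea mentioned above (this is not the actual ReclaimPrivacy code: the setting name, the selector and the exact signatures are made up), the shape of it could be something like this:

// Hypothetical sketch: each entry in the settings array knows how to read its
// value from the relevant part of the DOM and decide whether that value
// counts as "too open".
var defaultSettings = [
  {
    name: "example-photo-albums", // made-up setting name
    getValue: function (dom) {
      var select = dom.querySelector("select[name='example-photo-albums']");
      return select ? select.value : null;
    },
    isTooOpen: function (value) {
      return value === "everyone";
    }
  }
  // ...more settings here
];

// Callers may pass their own list of recognised settings; the handler receives
// the number of "too open" sections and the total number of sections checked.
function getSettingInformation(dom, handler, recognisedSettings) {
  var settings = recognisedSettings || defaultSettings;
  var openCount = 0;
  settings.forEach(function (setting) {
    if (setting.isTooOpen(setting.getValue(dom))) {
      openCount++;
    }
  });
  handler(openCount, settings.length);
}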
-
-
Flattr: microdonations rock!
Aug 31, 2010 onEver since I discovered Flattr I was really excited about it. Back then it was a closed beta, only-by-invitation service, and I couldn’t get hold of an invitation before they opened it to everyone.
Of course I signed up, tried it out and looked for content to “flattr” right away. I think the idea is great, and I can’t really complain about the implementation either. The service feels really easy to use and understand, and there are many extensions and plugins to integrate with different tools, including the Wordpress plugin I’m using.
How does Flattr work then? Basically, you pay a fixed amount of money per month (you choose how much, of course!), and you click the “Flattr” buttons on the content you find interesting on the internet. At the end of the month, the money you paid is divided by the number of buttons you clicked, and each of those “slices” will be given to the corresponding author. You can watch the video below for a better explanation:
The only downside is that the money Flattr gets for every transaction is a bit high (10%), but I really like the idea and the service and I feel it’s something I have to support. Because, as the Question Copyright folks say, “I am the content industry”.
-
Review: iRiver Spinn
Aug 16, 2010 onLast time I travelled by train I was dumb enough to leave my portable music player behind. I tried calling the Lost + Found office, where much more expensive and fancy items had been collected, to no avail. So the search for a new player began.
My basic requirement was that it must play Ogg Vorbis, and ideally also Flac, apart from MP3. I didn’t want to spend a lot of time comparing and looking for the ideal player, and of course I didn’t want it to be very expensive, but I didn’t really have more requirements than the music formats.
Long story short, I ended up buying an iRiver Spinn. One of the reasons was that it was relatively cheap, and another reason was that I thought it was nice supporting iRiver, to my knowledge the first company that really cared about Ogg Vorbis as a format for portable music players, many years ago.
I have to say I’m disappointed with it. While I don’t have such strong feelings about music players and it’s not like I’m not going to use it, there are a number of things that were (negatively) surprising or didn’t meet my expectations:
-
It has trouble with UTF-8. I’m sorry, last time I checked it was 2010. iRiver guys, W T F ?
-
I have had a bunch of problems (many of them unresolved) with the song order. I have set the track order in the tags, but it still gets the order wrong. Typically it orders alphabetically by title in the tag (not filename). How the fuck is that useful?
-
It has a touchscreen, which most of the time you don’t really have to use… but it’s not exactly responsive, so when you have to use it it’s very annoying.
-
When I turned it on for the first time, it seemingly had a lot of songs already loaded. When you tried to play any of them, it would do nothing so it was quite confusing. Navigating through the menus (inside a menu called “SET” -> “Advanced”) I found a “Rebuild library” option. Using that option “refreshes” the idea the player has of what your library looks like, and of course it turned out empty (ie. no sample tracks included by default). Also, every time you copy or delete new music you are apparently expected to use that option again. While I don’t mind having to do it as I don’t copy/remove music very often, it’s just dumb and bad design.
-
The USB cable you need to charge/load songs is not a standard mini-USB. From what I can see it’s a proprietary USB cable, which is an amazingly moronic design decision to make. If I lose that cable I’ll probably not buy another one from them, I’ll just buy a new player and never again buy from iRiver.
In summary, I’m not going to throw it away or anything, but I’m disappointed with the iRiver Spinn. When I stop using it, I’m going to consider stopping using dedicated players altogether and just using my phone for listening to music. It might turn out to be cheaper, more practical and less disappointing.
-
-
From pseudo-code to code
Aug 10, 2010 onThis post is probably not about what you’re thinking. It’s actually about automated testing.
Different stuff I’ve been reading or otherwise been exposed to in the last weeks has made me reach a sort of funny comparison: code is (or can be) like science. You come up with some “theory” (your code) that explains something (solves a problem)… and you make sure you can measure it and test it for people to believe your theory and build on top of it.
I mean, something claiming to be science that can’t be easily measured, compared or peer-reviewed would be ridiculous. Scientists wouldn’t believe in it and would certainly not build anything on top of it because the foundation is not reliable.
I claim that software should be the same way, and thus it’s ridiculous to trust software that doesn’t have a good test suite, or even worse, that may not even be particularly testable. Trusting software without a test suite is not that different from taking the word of the developer that it “works on my machine”. Scientists would call untested science pseudo-science, so I am tempted to call code without tests pseudo-code.
Don’t get me wrong: sure you can test by hand, and hand-made tests are useful and necessary, but that only proves that the exact code you tested, without any changes, works as expected. But you know what? Software changes all the time, so that’s not a great help. If you don’t have a way to quickly and reliably measure how your code behaves, every time you make a change you are taking a leap of faith. And the more leaps of faith you take, the less credible your code is.
-
Faster than the fastest
Jul 5, 2010 onThese are interesting times in the browser world: not only are there more browsers than ever, but now even Internet Explorer is starting to become competitive again, so in a year or two it might not even be safe to assume that every other browser is better. Go figure.
So anyway, recently Opera released 10.60, which is awesome news because finally Linux has a modern stable release, because of the amount of new eye candy in the UI, the new supported web standards (like Geolocation or WebM video, yay!) and… because of the amazing speed (“much faster than a potato”).
On Saturday, DailyTech published an article comparing the speed of several browsers, Opera 10.60 included. Obviously the conclusion was that Opera is the fastest (I wouldn’t link to that article from this post if it wasn’t the case, would I? :-P), and shortly after reading that, I came across this hilarious video that sort of follows up on that:
I mean, the video even mentions Opera Link, I have to like it :-P (although yeah, the claim is not correct, Chrome does have something similar). My favourite quotes are:
-
“You promised innovation, but look at Opera!”
-
“Maybe Opera is hiring”
And the second reminded me that yes, we are hiring!
-
-
Book review: "97 Things Every Project Manager Should Know"
Jun 27, 2010 onIn the last batch of books I ordered from The Book Depository I had “97 Things Every Project Manager Should Know”. It was a thin book and one of the first to arrive, so I figured it was a good one to start. The book is a collection of 2-page articles about project management. It has 198 pages, but I just read until around page 70, then “speed-read” the rest because I was so disappointed that I just wanted to get it over with. This has been the most disappointing book I’ve read in many years, and I rarely stop reading books even if I don’t like them that much (especially if they are as short as this one).
But I hate not trying to be constructive, and just saying that it was disappointing for me won’t tell you much about the possibility of it being disappointing for you, so here we go:
-
The choice of articles seemed “random”: clearly some of the authors had very good things to share, but many others didn’t sound that experienced or having so much interesting to say. I could imagine myself writing some of those articles.
-
Many articles read like they want to give “general” advice, but extrapolating from circumstances that I may never have (like making a rule out of a “this happened to me once” kind of experience).
-
I didn’t find it “inspiring” at all, if I wasn’t a project manager already I would not want to become one. The idea of working as a project manager felt dry, boring, and too focused on processes.
-
Many articles feel written for someone that doesn’t have any project management experience whatsoever. That’s cool, but it’s useless for me and should have been clearer in the book I think.
-
Many other articles seem written for project managers from other industries (or even simply “managers”) who are going to start managing a software project. That is not only plain useless to me, it also bores me to death. Seriously, WTF is with the definitions of super basic concepts? If you don’t know what an “iteration” or a “hack” is and won’t look it up yourself out of curiosity, you shouldn’t be allowed to manage a software project. Period.
-
Many articles felt too “corporate” to me: there was too much jargon and too many references to job titles, methodologies and contractors, instead of really essential stuff based on experience.
-
Reading some of the more or less interesting stuff, I couldn’t help thinking that those things would be obvious for someone who has been working as a software developer for years and wants to become a project manager because she finds it interesting.
-
Other articles were interesting, but lacked depth to make them really useful.
Don’t get me wrong, there are useful articles, but the book as a whole doesn’t feel that useful. Certainly not worth the time reading the whole thing.
And finally, something that kept popping up in my head, even if the comparison is unfair (it’s a different kind of book), is that this book is in many respects the opposite of the things I loved about Making Things Happen (an excellent book that you should read if you have any interest in project management). Oh well.
-
-
Video editing woes
Jun 19, 2010 onGoogle Test Automation Conference. In India. Sounds great, doesn’t it? That’s what I thought too, so I applied. For that, though, I had to shoot not only one, but two videos: one explaining the full-length talk I wanted to present, and a video of a lightning talk. As both of them were related to talks, I figured they’d be much better off having the slides on the video when they were referenced. That way the videos would be easier to follow and wouldn’t be just a boring static shot.
But that meant I had to edit video. Which I had never done before. And I figured it wouldn’t be trivial if I only wanted to use Free Software tools under Linux. I was partly wrong, because after looking around a bit I found OpenShot, which I found pleasant enough to use (at least for my very basic, very limited needs). However, the final footage I used made OpenShot export corrupted videos. I know it was something specific to that source video (a MOV format, H.264 codec, EPICly HD resolution (1920x1080) video) because I had tried to do exactly the same things with earlier, lower-resolution, MPEG-format takes, and it had worked like a charm.
In any case, I was sort of fucked because I couldn’t get the final edited video out, so I had to resize it and change the format somehow. I won’t list here everything I tried (that includes trying to download and use several programs on Windows, as well as using mencoder on Linux), but after a very long and frustrating process, only ffmpeg did the trick for me. My first attempt with ffmpeg did export the video, but with awful quality. After looking around a bit, I found what worked for me:
ffmpeg -i original.mov -s hd720 -b 3200k resized.mpeg
The trick to get a decent result was forcing the bitrate (“-b” option), which will hopefully help someone in need. Meanwhile, I’m going to stop typing so I can go back to crossing my fingers to get picked for GTAC ;-)
-
My first smartphone
May 23, 2010 onI’m not really a “fancy phone” guy. Actually, some years ago I used to hate mobile phones. Luckily, things have changed, and to make a long story short, I bought a (second hand) HTC Hero after thinking of buying an Android phone for months.
My first impression is fairly good: even though it’s the first decent Android phone and quite old now, I find it very nice to use and quite customisable (which is great, considering all the applications and widgets available for the platform). And even when using an old version of Android (1.5) I don’t find it slow. At least not enough to be irritating.
However, there are several annoyances and things I found out that I figured I’d share:
-
It doesn’t automatically import SMS from the SIM card, let alone use the SIM card as the SMS storage. I find that really silly, but to be honest it doesn’t bother me that much. You can of course import your backed-up SMS using some utilities (I haven’t bothered).
-
It took me a good deal of effort to import my contacts from the old phone. I tried some app called vCardIO, which sounded awesome but didn’t work for me. The final solution was a utility called “Import Contacts” that doesn’t seem to be in the Android Market (?). I had exported my contacts using gammu/wammu but, just in case, I had removed the X-GAMMU-* lines from the export. I don’t know if that had anything to do with it.
-
I found the default mail application to be kind of sucky, so I looked around and found K-9 Mail. I’m quite happy with it.
-
The default browser is some sort of bad joke, but luckily there’s Opera Mini. Opera Mini 5 totally rocks, especially with Opera Link.
-
-
Arepa - Apt REPository Assistant
Mar 22, 2010 onFor some time now I had been frustrated by the tools to manage APT repositories. The only ones I knew of either covered too little (only adding/removing packages from a repository and such, like reprepro) or were way too complex (like the official tools used by Debian itself). Maybe/probably I’m a moron and I just didn’t know of some tool that would solve all my problems, but now it’s kind of late ;-) And before you say it, no, Launchpad is not what I was looking for as far as I understand it.
So I started to work on my own suite of tools for it, and recently I decided to release what I’ve done so far. It’s by no means complete, but it’s very useful for me and I thought it would be useful for others. And, with a bit of luck, someone will help me improve it.
So what is it? Arepa (it stands for “Apt REPository Assistant”, but obviously I called it like that after the yummy Venezuelan sandwiches) is a suite of tools that allow you to manage an APT repository. It contains two command-line tools and a web interface, and its main features are:
-
Manages the whole process after a package arrives to the upload queue: from approving it to re-building from source to signing the final repository.
-
It allows you to “approve” source packages uploaded to some “incoming” directory, via a web interface.
-
It only accepts source packages, and those are re-compiled automatically in the configured autobuilders. It can even “cross-compile” for other distributions (treated like binNMUs).
-
Far from reinventing (many) wheels, it integrates tools like reprepro, GPG, Rsync, debootstrap and sbuild so you don’t have to learn all about them.
The approval via some web interface was actually sort of the driving force for the project. One of my pet peeves was that there wasn’t an easy way to have an upload queue and easily approve/reject packages with the tools I knew. From what I had seen, the tools were either for “single person” repositories (no approval needed because the package author is the owner of the repository) or full-blown distribution-size tools like dak and such. My use-case, however, is the following:
-
You have an installation of Arepa for an entire organisation (say, a whole company or a big department).
-
People inside that organisation upload packages to the upload queue (possibly using dput, as in the sketch after this list; the point is, they end up in some directory on the machine hosting Arepa).
-
Someone (or a small group of people) is the “master” of the repository, and they’ll have access to the web interface. From time to time they check the web UI, and they’ll approve (or not) the incoming source packages.
-
If they’re approved, the source will be added to the repository and it’ll be scheduled for compilation in the appropriate combination(s) of architectures and distributions.
-
A cronjob compiles pending packages every hour; when the compilation is successful, they’re added to the repository.
-
At this point, the repository hosted by the Arepa installation has the new packages, but you probably want to serve the repository from a different machine. If that’s the case, Arepa can sync the repository to your production machine with a simple command (“arepa sync”).
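For illustration, this is roughly what the uploader side could look like with dput; the host name, incoming directory and package name below are made up, so point them at wherever your Arepa upload queue actually lives:

[arepa]
# Hypothetical host and path: adjust to your own upload queue
fqdn = repo.example.com
method = scp
incoming = /srv/arepa/upload-queue
allow_unsigned_uploads = 0

With a stanza like that in ~/.dput.cf, developers would upload with something like dput arepa mypackage_1.0-1_source.changes, and the package would then show up in the web interface waiting for approval.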
I imagine that a lot of people have the same need, so I uploaded all the code to CPAN (you can see it with the rest of the contributions by Opera Software). Sadly there’s a silly bug in the released code (I wanted to release ASAP to be able to focus on other things, and I ended up rushing the release), but it has both a workaround and a patch. So, please give it a try if you’re interested and tell me if you would like to contribute. I haven’t released the code on GitHub or similar yet, but I’ll probably do so if there’s interest.
-
-
Goodbye Typo, Hello WordPress!
Jan 24, 2010 onAs I had mentioned several times, I had been frustrated with Typo. Several bugs or misfeatures really annoyed me, upgrades had frustrated me, and sometimes more or less visible things felt broken in new releases. And while the upgrade problems were mostly because of the need to upgrade Ruby gems, that was apparently unavoidable with Typo, so sticking with Typo meant having to deal with Rubygems, which as you may know I hate.
So, after the last upgrade and the frustrations that came with it, I decided to ask around for good blogging software. The main contenders I had in mind were Wordpress and Movable Type. Most of the people who replied talked wonders about Wordpress, but I decided to try both. Wordpress’ installation was ridiculously easy (I’m talking about installing my own copy, not opening a blog in wordpress.com obviously) and I had a working blog pretty quickly. Also, at least the first impression of the UI is that it’s very slick and easy to use. It shows maturity. Movable Type was easy enough to install, although I did have some problems (mostly due to my own stupidity, but still). The first impression was that Movable Type was much “heavier” and maybe a bit too much for a single, personal blog. So I decided to go for Wordpress, which was the one that I had been recommended by most people anyway.
So, the first thing I had to do was export the content from Typo’s HCoder so I could import it into Wordpress. I quickly found some script for Typo that would export in Wordpress’ format, for easy import. It worked very well, although I did have a problem with the tags: they were treated as normal categories, so I ended up with many categories and no tags (and a huge, horrible, impossible-to-navigate sidebar with dozens of categories). I started to look around, and I couldn’t find a spec for the WXR format. Maybe I was naive thinking that there would be one, but hey. In any case, eventually I figured out that I had to change the:
<category>rants</category>
to
<category domain="tag">rants</category>
for the tags. The categories had to stay as they were, but luckily for me, all uppercase names were categories, and all lowercase names were tags, so I could do the trick with vim with:
:%s/<category>\([a-z]\)/<category domain="tag">\1/
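If you prefer doing it outside vim, the same substitution can be done with sed (assuming the export lives in a file called export.xml, which is a made-up name):

sed -i 's/<category>\([a-z]\)/<category domain="tag">\1/' export.xml

Like the vim command, it only touches categories whose names start with a lowercase letter, which in my case were exactly the tags.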
After that, I could import back all the content, but then I had the next hurdle: the style of the blog. I didn’t mind if the design wasn’t exactly the same, but I was used to the old one and didn’t want to change it too much, so I used the excellent Opera Dragonfly to inspect the styles of the old blog, and I slowly copied the most interesting values (colours and font sizes mostly) to the equivalent CSS classes in the Wordpress theme. I’m happy with the result, so I think I’ll leave it as it is for now.
Last, but not least, I wanted to try to keep the old URLs working. I did two things for this:
-
I added some URL rewrites to keep Typo’s feed URLs working (something along the lines of the sketch after this list). However, the Atom ones also redirect to the RSS ones; I wonder if that’ll be a problem.
-
I changed the default permalink settings in Wordpress so they matched what I had in Typo. Hopefully almost all blog posts will actually keep the URL and the migration to Wordpress won’t be very traumatic. You tell me if I’ve broken anything ;-)
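For what it’s worth, the rewrites were of this general shape (the old paths here are only an example of what a Typo feed URL can look like, and the target is WordPress’ default /feed/; check your own installation before copying anything):

# Hypothetical old Typo feed paths redirected to the WordPress feed
RewriteEngine On
RewriteRule ^xml/rss20/feed\.xml$ /feed/ [R=301,L]
RewriteRule ^xml/atom10/feed\.xml$ /feed/ [R=301,L]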
One thing that I don’t like about Wordpress’ blog editor is that apparently it doesn’t allow you to write in some Wiki-like syntax, like Markdown or Textile. I know Movable Type does have it, but several other things made me stick with Wordpress and I’m happy overall. At least for now ;-)
-
-
Feeling the pressure produces better code?
Dec 6, 2009 onThe other day I was in a conversation with some developer that was complaining about some feature. He claimed that it was too complex and that it had led to tons of bugs. In the middle of the conversation, the developer said that the feature had been so buggy that he ended up writing a lot of unit tests for it. To be honest I don’t think there were a lot of bugs after those tests were written, so that made me think:
Maybe the testers in his team are doing too good of a job?
Because, you know, if testers are finding enough of “those bugs” (the ones that should be caught and controlled by unit tests and not by testers weeks after the code was originally written), maybe some developers are just not “feeling the pressure” and can’t really get that they should be writing tests for their code. If testers are very good, things just work out fine in the end… sort of. And of course, the problem here is the trailing “sort of”.
I know I’m biased, but in my view there is a ton of bugs that should never be seen by anyone other than the developers themselves. Testers should deal with more complex, interesting, user-centred bugs. Non-trivial cases. Suboptimal UIs. Implementation disagreements between developers and stakeholders. That kind of thing. It’s simply a waste of time and resources for testers to have to deal with silly, easy-to-avoid bugs on a daily basis. Not to mention that teams shouldn’t have to wait for days or weeks until they find basic bugs via exploratory testing. Or that a lot of those are much, much quicker to check with unit tests than by creating the whole fixture/environment needed to find them with exploratory testing.
My current conclusion is that pushing on the UI/usability side is not only good for the user, but it’s likely to produce better code as it will be, on average, more complex and will have to be better controlled by QA (code review, less ad-hoc design, …) and automated tests. Maybe developers will start hating me for that, but hopefully users will thank me.
-
Review: Dingoo (A320)
Nov 12, 2009 onWhen I mentioned that I wanted an “open” portable gaming console that played PSP games, Enrique mentioned the Dingoo. Not that it actually plays PSP games, but it’s indeed an “open” console, cheap and with a number of “extras”. So I wondered if playing PSP games was so important for me. Not that it wouldn’t be awesome playing God of War, Katamari Damacy, Patapon, LocoRoco or Echochrome on the train/plane, but the main point was having games, music and the possibility of watching films on a portable device. After a couple of weeks pondering, I decided “screw Sony” and ordered the Dingoo.
So, what does the Dingoo have to offer? Well, it’s a nice and small portable gaming console that, apart from games, plays music, video and radio, and has a simple picture viewer and a basic plain text reader (with features like bookmarking). On the gaming side, it has its own game format (it comes loaded with around 30 games) and emulators for quite a bunch of different machines, so you can play games from the NES, Super NES, Neo Geo, Mega Drive, Game Boy Advance, and the arcade machines CPS1 and CPS2. I don’t have words to say how awesome that is. The Dingoo has 4 GiB of internal memory and supports one external miniSD card, so you have more than enough space for a lot of games, some music and even a couple of films.
In general, I have to say that both the emulation and the video playback work very well. A handful of games can’t be played (they crash or behave funny) and others can be played but are too slow/annoying to play (e.g. Super Mario World for the Super NES), but in general there aren’t any problems. I have a couple of minor complaints though:
-
I find some of the button conventions confusing (e.g. for menu navigation). It doesn’t help that different consoles have different conventions on which buttons to use for which actions.
-
The Mega Drive emulator doesn’t seem to support the .bin format, which is slightly annoying.
-
There are a lot of video formats supported (the console comes with several sample videos), but the first video I tried to copy and watch wasn’t recognised :-( I hope that won’t happen often.
All in all, I think it’s a great console and it’s quite cheap, so I’m very happy I bought it. If you’re curious about how it looks and works, have a look at this video review:
-
-
Slides for several talks now published
Sep 20, 2009 onI had said that I was going to publish the slides for a couple of talks I had given over the last couple of months, and I just got around to actually doing it, so here they are:
-
Software automated testing 123, an entry-level talk about software automated testing. Why you should be doing it (if you’re not already), some advice for test writing, some basic concepts and some basic examples (in Perl, but I trust it shouldn’t be too hard to follow even if you don’t know the language).
-
Taming the Snake: Python unit tests, another entry-level talk, but this time about Python unit testing specifically. How to write xUnit-style tests with unittest, some advice and conventions, and some notes on how to use the excellent nosetests tool.
-
Introduction to Debian packaging, divided in four sessions: Introduction, Packaging a simple app, Backporting software and Packaging tools.
Just a quick note about them: the slides shouldn’t be too hard to understand without me talking, but of course you’ll lose some stuff that is not written down, some twists, clarifications of what I mean exactly by different things and whatnot. In particular, the “They. don’t. make. sense. Don’t. write. them” stuff refers to tests that don’t have a reliable/controlled environment to run in. I feel really strongly about those, so I wanted to dedicate a few more seconds to smashing the idea that they’re ok, hence the extra slides :-)
Enjoy them, and please send me any comments you have about them!
-
-
Proprietary vs open: a new hope
Sep 13, 2009 onThere is something that has been bothering me for quite a long time now: while I realise that Sony is often evil and proprietary (I mean, come on, memory stick? the horrible, horrible PS2 memory “cards”? the draconian sharing terms for the online PS3 network?), there is something that attracts me to their products. I own a PlayStation 2, a Sony Ericsson phone, and I may even buy a PlayStation Portable.
The PlayStation 2, well, it had all the games I wanted to play when I bought it (the Prince of Persia series, Pro Evolution Soccer, Ico, Shadow of the Colossus and God of War), it was cheaper than the alternatives and it had a ridiculous amount of second-hand games and websites with reviews. The phone I bought mostly because of a recommendation, but I actually find other brands stupid and confusing, so I really like it and my next phone might be another Sony Ericsson. The PlayStation Portable… well, the DS is awesome in many ways, I realise that, but again the games I want to play (God of War, Patapon, LocoRoco, Echochrome, Katamari Damacy) are all for the PSP.
There’s something similar with Apple. Some of their products (most? all?) look awesome, and I’m sure they’re awesome in many ways… but I can’t stand that they’re so closed in their own world. So far I haven’t bought anything from them, and when I read certain news I’m really happy that I haven’t. That’s more or less why it bothers me that the iPhone is so successful and that no vendor seemed to be able to do anything about it… until I saw the HTC Hero.
I haven’t used it myself (only played with it a bit and seen some videos), but it looks seriously good. According to these videos it’s still a bit rough around the edges compared to the iPhone, but it has more functionality, the same kind of app store (only open), and it’s based on Android. I’m not sure what else I could ask for. Ah, yes: a couple more models so they can polish it further :-) So I wonder if the Android platform, through the HTC Hero and similar phones, will allow me and others to escape Sony (and Apple) in the phone arena.
Now, I only have to find some “open” portable console that will allow me to play PSP games ;-P
-
BCM4312 on Linux: easier than expected
Sep 10, 2009 onJust a quick post to say that I was being stupid and it took me a couple of days of fighting, lockups and reading to realise that the driver for the wireless card in my new laptop is actually already packaged and it works like a charm.
The long(er) story:
-
I bought a laptop with that card, and I wanted to make it work.
-
Apparently the open source driver (b43) doesn’t recognise my card, although it seems it should?
-
I tried to download the proprietary driver provided by the vendor (wl), but it didn’t compile at first. After applying some patch for kernel 2.6.29 (I’m using kernel 2.6.30) it did compile, but it didn’t quite work. Meaning, it locked up my machine seconds after loading.
-
After a couple of days of wondering and trying to make it work… I realised the driver is already compiled in Debian (in particular, broadcom-sta-modules-2.6.30-1-686). Just installing and loading it worked like a charm. Oh well.
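In case it saves someone else the same couple of days, “installing and loading it” boils down to something like this (the package name obviously depends on your kernel version, so don’t copy it blindly):

# as root; pick the broadcom-sta-modules package matching your kernel
apt-get install broadcom-sta-modules-2.6.30-1-686
modprobe wl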
-
-
The search for a Linux music score editing program
Jul 29, 2009 onSome time ago I had promised a friend I’d bring some drum exercises the next time I went back home. Of course, I forgot, so I taught him the exercises I could remember and promised I’d write down some of those exercises and send them to him by e-mail. Thus, I had to find some Linux program to do it. I knew there were a couple of alternatives, and I even knew some names, but I had never used them to typeset music, so I had to give them a try. Sadly, the search was more painful than I had expected.
Disclaimer: I have no idea about music programs and my needs were a bit "special" (typesetting drums, not "normal notes"), so take my comments with a grain of salt.
The programs I tried were Rosegarden, NtEd, NoteEdit, Lilypond, Canorus and MuseScore. I’m sure there are more, but those were the ones I had the patience to try, and they were conveniently packaged for Debian. My final pick was MuseScore, but as I said YMMV.
Rosegarden is, according to its website, a “well-rounded audio and MIDI sequencer, score editor, and general-purpose music composition and editing environment”. It seems like quite a complex beast, and probably capable of a lot of things (most of them I’m not really interested in of course). I think it took me a while to figure out how to fire up the music score editing interface, and once I did, I couldn’t see anything that helped typesetting drums. While I could have typeset the notes in their correct places, I didn’t find a way to change the “head” of the notes (like for the hi-hats and stuff, see the Lilypond documentation for drums). Thus, discarded.
NtEd didn’t seem bad (although I find the UI really ugly), but keyboard input seemed a bit painful and I couldn’t figure out how to add “lyrics” (for the comments on using the left or right hand). Also, it doesn’t have any notion of drums, so I would have had to change the note head types all the time (using the mouse, which involves extra pain).
NoteEdit also looked nice, but again I couldn’t figure out the lyrics, keyboard usage seemed suboptimal and it had no special shortcuts or options for drums.
I also considered writing Lilypond by hand. What I wanted seemed simple enough to be writable by hand, but adding lyrics and possibly a second voice didn’t seem so fun anymore (it looks more like programming than typesetting).
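To give an idea of what I mean by “programming”, a hand-written drum pattern looks roughly like this (a minimal sketch; the version header is a guess, and check the Lilypond drum documentation for the exact note names and for how to attach lyrics, which is where it gets hairy):

\version "2.12.0"
\new DrumStaff \drummode {
  % bd = bass drum, sn = snare, hh = hi-hat
  bd4 hh sn hh
  bd4 hh sn hh
}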
Canorus looks quite simple, but it seems to lack, again, good keyboard input and any kind of drum typesetting support (I can’t see how to change the note heads, again). Also I can’t seem to find any option for adding lyrics.
Finally, MuseScore seemed to match my needs. Although it did take a bit of time to figure out certain things, and I don’t completely get the keyboard usage, it seems easy enough for what I wanted, the lyrics input is very clear and easy, and the output is quite good (although probably most, if not all, of these programs use Lilypond as a backend, so I guess all of them look good). The downside is that the Debian package in the current Debian Sid is quite unstable, so I have to save very often :-/
-
Quality Assurance as a copilot
Jul 19, 2009 onThose who know me professionally know that I care a lot about software quality assurance. I think it’s a mostly misunderstood field, and generally “the world” would be better off with more QA (and/or better QA). Of course, I’m always looking for more arguments to support my view :-D and the last one I found came from a very interesting blog post, Plane Crashes, Software Failures, and other Human Errors. This post explains how mistakes are made in the aviation and healthcare industries, and claims something that sounds shocking but actually makes quite a bit of sense: “errors occur most often when a senior, experienced person is performing”. The reason why it doesn’t happen as often when the less experienced person is performing (again, according to the blog post): “because it means the second pilot isn’t going to be afraid to speak up”.
That got me thinking. No matter how expert one person is, he can’t take all the right decisions without help and feedback: a second opinion is always useful and can save the team from embarrassing (or, in some cases, fatal) consequences. A second opinion can give perspective or aspects not thought of by the first person.
If you apply this to software development, I can’t help thinking that one of the roles of QA fulfils this need: being experts in the field who provide second opinions and critiques on anything the team decides or produces. And they shouldn’t feel afraid to speak up because… well, it’s their job after all. And while yes, fellow developers could serve as a “second opinion” too, having a more or less formal position for a “Quality Assurance Engineer” is helpful for a variety of reasons. First, as I said, the chances of being afraid to speak up are much lower, because it’s their job. Second, not producing the result themselves gives them a perspective that people fighting with everyday details could have, but usually don’t; at least not as much. And last, but probably most important, it’s their job, so they can focus on it and they don’t stop doing it because “they have deliveries soon” or because “they don’t have time”.
Finally, there is another blog post, linked from the above, that also supports my vision of QA: Toyota “Stop the Line” mentality. But this one is about processes and taking a step back when something is wrong, trying to find the root cause instead of an immediate solution. Enjoy the blog posts :-)
-
More work on widgets
Jul 1, 2009 onAs I had mentioned, I had been working on Opera widgets. Some time ago I had seen a great Javascript plotting library for jQuery called flot, and I really wanted to try it out in some “real world” project. As I was working on the World Loanmeter widget, which incidentally uses jQuery too, it was very easy to figure out some way to use flot for something useful: I decided to add some simple graphs to the widget.
The initial idea of the loanmeter widget was to show where in the world Kiva was offering loans. However, as I used the widget myself, I realised that the location in the world was less important for me, and I was more interested in knowing what the person was going to use the money for. So, I added some options to filter by “sector” and I figured that having some graphs comparing how much money was requested and already funded, for each sector, would be a very quick and visual way to get the information I wanted. I started playing with flot, and I have to say that except for a couple of relatively minor problems, it was quite easy to use. I don’t have screenshots showing the graphs, but feel free to try the widget itself and have a look (hint: you have two buttons at the bottom right corner to switch between “map view” and “graph view”).
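To give an idea of what using flot looks like, the graphs boil down to a call of this shape (a minimal sketch with made-up element IDs and data; the real widget builds the series from the Kiva API results, and tweaks the bar options so the two series don’t overlap):

// Hypothetical data: money requested vs. funded per sector
var requested = { label: "Requested", data: [[0, 1200], [1, 800], [2, 450]], bars: { show: true } };
var funded    = { label: "Funded",    data: [[0, 900],  [1, 650], [2, 450]], bars: { show: true } };
$.plot($("#graph-placeholder"), [requested, funded]);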
The other widget I have been working on is a monitor widget for projects in CruiseControl.rb (a really simple and neat continuous integration server we use at Opera). More than one year ago, my colleague Nico had written a very quick & dirty widget for monitoring the result of the test runs of the My Opera functional testsuite. There were a couple of things I wanted to change, and I also wanted to monitor other projects, so I figured that I’d rewrite the widget to have a more maintainable codebase and then make it generic, so you could configure which CC.rb installation and which project to monitor. I’m moderately happy with the result of the refactoring, and happy enough with the final result. I know it has several issues, and I expect that once anyone outside our team starts using it, there will be things to improve and fix :-) If you use CruiseControl.rb, give it a try!
-
The ultimate TODO app redux
Jun 29, 2009 onUPDATE: Bubug hasn’t been maintained for a long time and is now deprecated, sorry. The closest equivalent I have to a TODO application is Wiki-toki, my personal Wiki program.
When writing yesterday about the Perl modules, I realised that I hadn’t written anything about the TODO application since “The ultimate TODO app”. Well, a lot has happened to it actually. I’m glad to announce that:
-
It does have a (lame) name now: Bubug (supposedly stands for “Barely Unconventional Bug Untracking Gizmo”. Whatever).
-
It has improved a lot here and there, and it now has authentication and multi-user support, not to mention a lot of UI bling bling and goodies.
-
The development has moved to BitBucket, an excellent free service built by Jesper Noehr, formerly of Opera, where you can follow it more easily, comment on it, check the Wiki, fork it, or whatever you want. You even have a screenshot there ;-)
As you can guess from the last point, for this project I’ve been using Mercurial instead of Git. I certainly don’t have sophisticated needs, so YMMV (heavily), but I find Git more pleasant to use. Which is kind of surprising, because I always thought that Git’s UI was a pain in the ass. Oh, well. That doesn’t mean that Mercurial is hard to use, though. I think it’s more that I’m used to Git now, and there are a couple of things that I find more convenient: the coloured diff (possible in Hg, but you have to install some extension for it, and just thinking about installing some Python extension that is not even packaged for Debian makes me want to switch to Git) and the staging area are the most important ones I can think of.
So, if you thought I had abandoned the TODO application thing, you were wrong ;-) If you’re interested, have a look at the Bubug BitBucket project page, download it, play with it, and tell me what you think.
-
-
My first contributions to CPAN
Jun 28, 2009 onI have been using Perl for many years, but I had never uploaded anything to CPAN. That’s unfortunate, because I’ve probably written several programs or modules that could have been useful for other people. The point is, now I have. Not only that, but it was code I wrote at work, so if I’m not mistaken these are my first contributions to free software from Opera. Yay me!
The two modules I’ve released so far are:
-
Parse::Debian::PackageDesc, a module for parsing both .dsc and .changes files from Debian. This is actually a support module for something bigger that I hope I’ll release soon-ish.
-
Migraine, a still somewhat primitive database change manager inspired by the Ruby on Rails migration system.
As I feel that Migraine could be useful to a lot of people, but it’s easy to misunderstand what it really does (unless you already know Rails migrations, of course), I’ll elaborate a bit. Imagine that you are developing some application that uses a database. You design the schema, write some SQL file with it, and everybody creates their own databases from that file. Now, as your application evolves, your schema will evolve too. What do you do to update all the databases (every developer installation, testing installations, and don’t forget the production database)? One painful way to do it could be documenting which SQL statements you have to execute in order to have the latest version of the schema, and expecting people to apply them by copying and pasting from the documentation. However, that’s messy, confusing, and it needs someone to know both which databases to update and when.
Migraine offers a simpler, more reliable way to keep all your databases up to date. Basically, you write all your changes (“migrations”) in some files in a directory, following a simple version number naming convention (e.g. 001-add_users_table.sql, 002-change_passwd_field_type.sql), and migraine will allow you to keep your databases up to date. In the simplest, most common case, you call migraine with a configuration file specifying which database to upgrade, and it will figure out which migrations are pending, if any, and apply them. The system currently only supports raw SQL, but it should be easy to extend with other types.
In principle, you shouldn’t need to write any Perl code to use migraine (it has a Perl module that you can use to integrate with your Perl programs if you like, but also a command-line tool), so you can use it even in non-Perl projects. Of course, some modern ORMs have their own database migration system, but very often you have to maintain legacy code that doesn’t use any fancy ORM, or you don’t like the migration system provided by the ORM, or you prefer keeping a single system for schema and data migrations… I think in those cases Migraine can help a lot in reducing chaos and keeping things under control. Try it out and tell me what you think
:-)
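To make the naming convention concrete, a migration file is just plain SQL; something like this (a made-up example, only the file name convention comes from Migraine itself):

-- 001-add_users_table.sql
CREATE TABLE users (
    id INTEGER PRIMARY KEY,
    login VARCHAR(64) NOT NULL,
    passwd VARCHAR(64) NOT NULL
);

Migraine keeps track of which of these have already been applied to each database, so running it again only executes the pending ones.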
In a couple of days I’ll blog again about other contributions to free software I’ve made lately, but this time in the form of Opera widgets…
-
-
Opera Unite: another milestone
Jun 21, 2009 onSo we finally managed to get some public release of Opera Unite out of the door. That was a really good thing, first because it’s a very cool idea and we had to let others play with it and make it evolve, and second because it was painful keeping a secret for so long ;-)
In case you have been hiding under a rock these days and you don’t know what I’m talking about: Opera Unite is the latest crazy idea from Opera Software. Basically, embedding a web server inside the web browser, so that people can be more than spectators on the web, share their data without having to upload to third party services, and generally change the way they interact with the web. The cool thing is that the system is not limited to sharing files or whatever, you can actually program your own Opera Unite services to do pretty much anything you want (check the documentation in Dev Opera if you’re a developer). However, do note that this is just a Labs release, that is, just a preview of relatively immature software. For a more complete introduction, go to the Labs blog post introducing Opera Unite.
Opera created quite a bit of hype before the release, which seemed to work pretty well. In particular, the teasing in the HTML comments of the preview URL (http://opera.com/freedom), which were being updated every day by adding some more words, was brilliant. There has been a lot of press coverage of Opera Unite this week, and even some Russian fella created the website unitehowto.com the same day Unite was released, and he wrote (collected?) a bunch of information, articles and tutorials about Opera Unite.
I have to say that, although there have been some quite challenging times while developing Opera Unite (it’s a quite ambitious project that involves several departments with massively different backgrounds and values, what did you expect?), I’m quite happy with the result and I think we have done, in general, a good job. As I said, this is just a very rough version, and there’s a lot of work left to do, but I’m sure it will improve a lot before we release the final version. That said, I’m sure that the most exciting things about Unite will, without doubt, start happening once people start writing interesting services, changing the way we see Opera Unite and the way we see the web. I’m so eager to see what people are going to build with this…
-
Predictably irrational
Jun 1, 2009 onI haven’t written in some time, I know. I haven’t done much worth blogging about. Just a new release of the Kiva World Loanmeter widget, and also a couple of things at work that I’ll be releasing soon (including a small tool for managing database changes and some Perl module to parse Debian .changes files).
However, recently I watched a really funny and interesting talk at TED, Are we in control of our own decisions?, by Dan Ariely. In the talk he mentions his book, Predictably Irrational, which funnily enough a friend had already mentioned to me.
Well, I just finished the book and I have to say it was very interesting and eye-opening. It’s interesting how it shows our minds are biased for certain kinds of decisions or behaviour, even though they are often not the best for us. Some of the experiments are truly brilliant and they show totally unexpected (at least before starting reading the book ;-P) outcomes. One of the experiments that got me thinking was this:
> Research on stereotypes shows […] that stereotyped people themselves react differently when they are aware of the label that they are forced to wear […] One stereotype of Asian-Americans, for instance, is that they are especially gifted in mathematics and science. A common stereotype of females is that they are weak in mathematics […] In a remarkable experiment, […] asked Asian-American women to take an objective math exam. But first they divided the women into two groups. The women in one group were asked questions related to their gender […] The women in the second group were asked questions related to their race […] The performance of the two groups differed in a way that matched the stereotypes of both women and Asian-Americans. Those who had been reminded that they were women performed worse than those who had been reminded that they were Asian-American.
I can’t stop thinking about the implications this has to working conditions and productivity in different countries, and also to project management.
-
Free software rocks!
May 10, 2009 onI’ve been working on something lately that I hope I will publish sometime next month: it’s a set of tools to manage an APT package repository. The idea is that, given an upload queue (you can set it up as an anonymous FTP, or some directory accessible via SSH/SCP, or whatever floats your boat in your setup and team), you’ll have a web interface to approve those packages, a set of integrated autobuilders building the approved packages in whatever combination of architectures and distributions you want, and all that integrated with reprepro to keep your repository updated. I’ll write more about it when I have released something.
The point now is that, while working on it, I needed some module to parse command-line options and “subcommands” (like git commit, svn update, etc.). As it’s written in Perl, I had a look at CPAN to see if I could find anything. The most promising module was App::Rad, but it lacked a couple of things that were very important for me: my idea was “declaring” all the possible commands and options and having the module do all the work for me (generating the help pages, the default --help implementation, the program help subcommand, and so on). App::Rad didn’t have that, and it didn’t seem to me like that was the direction they wanted to go with the module. But I figured I’d drop the author an e-mail anyway and see if he liked the idea, so I could start adding support for all that…
And boy was that a good idea. He replied a couple of days later, and said that they had liked the idea so much that they had implemented it already (that’s why he took a couple of days to reply), and he sent me an example of the new syntax they had introduced and asked if that was what I was thinking. And not only that, but they added me to the list of contributors just for giving the idea! That completely made my day, free software rocks!
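For context, the appeal of App::Rad is how little ceremony a command-line app with subcommands needs; the basic style below is taken from its documented synopsis (the newer declarative syntax they added may well look different):

use strict;
use warnings;
use App::Rad;

App::Rad->run();

# each sub becomes a subcommand, e.g. "myapp.pl hello"
sub hello {
    my $c = shift;
    return 'Hello, world!';
}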
-
Happy Birthday Opera!
Apr 28, 2009 onToday Opera turns 15! To celebrate it, there is a lot of stuff going on, like a list of Opera’s most popular innovations.
I have to note here that I’m very proud of being part of Opera Link, which is consistently mentioned in the list of innovations.
Happy Birthday Opera!
-
Recent pet projects + Git + Github
Apr 7, 2009 onI had mentioned that I was learning Javascript to write a Kiva Opera widget. Some time ago I released the first version of my World Loanmeter widget, and I have uploaded two more since. Not much has happened between the first and the third release from the user POV, but a couple of things were interesting when developing it:
-
I learned QUnit, which I used to write some really useful unit tests (there is a small sketch of what they look like after this list). It’s quite nice to be able to write Javascript unit tests easily.
-
I did some heavy refactoring (see above) which made me learn some more Javascript and made the code much more flexible, so now the widget is not limited to a single page of Kiva API results: it fetches as many pages as needed to get whatever number of loans the user wants. Not to mention that the data source need not be a URL.
-
Now the widget actually has some configuration. Namely, the number of loans to show in the map. It also stores it persistently using the preference store, which is quite nice.
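As for the QUnit tests mentioned above, they are little more than this (a sketch with a made-up function under test; the real tests exercise the widget’s data-handling code):

// parseLoans is a hypothetical function; replace with whatever you are testing
test("parseLoans handles an empty result set", function() {
    var loans = parseLoans({ loans: [] });
    ok(loans.length === 0, "no loans returned for an empty API response");
});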
As I said, I used Git for it. I don’t “hate” it anymore, but I still find some things annoying, like the horrible, confusing names some options have (I’m thinking about “git checkout <file>” to revert local changes, or “git diff --cached” to see the contents of the index/staging area; seriously guys, W-T-F?). I used to be skeptical about “git add” for changes and then “git commit”, but I actually find it quite nice: it’s easier to plan a commit that way, and if you don’t want to plan it, you can always just “git commit <file>” directly. Also, “git add -p” is really nice to commit just parts of a file (at last, someone copies some of the good stuff Darcs had had for ages!). Apart from Git itself, it’s cool that there is GitHub, so it’s easy to share your repositories without having to rsync to some web server or similar… not to mention that your project is much more visible that way.
But the World Loanmeter wasn’t the only pet project I was working on these past weeks: I also wrote a simple sudoku solver, demisus, in Ruby. The reason? Writing a prototype of a sudoku solver in a language I’m fluent with, to play with the design and get something interesting and easy to maintain… and then rewrite it in Haskell. I have been trying to learn some functional language for some years now, but I never find a “project” that is interesting enough to write some “real world” program in the language, and I end up not learning anything. After starting to read Real World Haskell, I really felt like trying to learn the language once and for all, and I figured that a sudoku solver was easy enough to write, something I know enough about, and something math-y enough to be reasonably easy to implement in Haskell.
So, if you’re interested in any of them, you can have a look in Github and even contribute ;-)
-
-
Kiva API, Javascript, Git and my first widget, oh my!
Feb 26, 2009 onAbout two weeks ago I wrote about Kiva, a cool website that allows people to make microloans. Almost one month ago they had started a developer site, including an easy to use API to access the data (loans, borrowers, lenders, etc).
I couldn’t resist the temptation to have a look at the documentation and start thinking about some application to use it. Soon after I started reading I came up with the idea of writing an Opera widget. There were a couple of reasons for that:
-
I had never written a widget, so it sounded like a good excuse to learn how to write them.
-
Widgets use Javascript, and that felt like a natural fit (as the API returns JSON).
-
I didn’t really know that much Javascript (just enough to write a couple of event handlers), and that seemed a good opportunity to start learning the language “properly”.
-
A widget on http://widgets.opera.com had a better chance of actually being used than a random pet project of mine lying in some obscure repository of some obscure version control system (well, actually I ended up using Git for it, so it’s not that obscure in some sense; but you get the point).
So I started by learning a bit of Javascript. After asking around, the best thing I found to learn quickly was a very good series of videos by Douglas Crockford hosted in Yahoo! Video.
Then, I had a look at the excellent articles in Dev Opera about creating widgets and started creating one. As I had the idea of creating something that would show loans around the world, I started looking for HTML and Javascript for building maps, and found a very good article in A List Apart about accessible maps. The sad part is, once I understood how everything worked I destroyed the whole accessibility of the solution, but it was for a widget anyway (you have excellent support for CSS and Javascript in Opera, no need to have a fallback to show textual data in a widget) and my code ended up much much simpler and easier to maintain.
Finally, for Git, I had a look at the screencasts hosted on GitCasts. I already knew some basic Git things, but I think I started to feel more comfortable with it after watching a couple of those videos. Still, there are too many references to the obscure objects, SHA names and whatever, but they are clear enough to find your way around.
In short, I have to say that creating the widget was easy enough, and it was lots of fun to write it. I had some frustrations debugging it, but things worked fairly well in general. When I finished it, I uploaded to widgets.opera.com and after a couple of days it was already approved and public for everyone.
So, if you want to give it a shot, just go to my widgets page and download the World Loanmeter! Enjoy it! :-D
-
-
Another Typo upgrade
Feb 23, 2009 onI just upgraded the blog to Typo 5.2. I had a couple of issues, but things worked reasonably ok. Just in case this helps anyone, these are the issues I ran into:
-
I had to install a ridiculous amount of dependencies, sometimes to go from version 1.1.1 to 1.1.3 of some module. I really wonder if Typo 5.2 really needs those versions.
-
When trying to upgrade, it seemed to hang on this line: “Backing up to /var/www/virtual/hcoder.org/db/backup/backup-20090223-1843.yml”. It turns out it didn’t really hang, it just takes a good while (and yeah, the file stays at 0 bytes for a long time too).
-
I had some permission issues that I had to fix (when upgrading, it tried to modify/copy some files, and it couldn’t)
-
When applying the migrations, it died with a really strange error message. It turns out, my version of Rubygems was too old ????
While writing this first post, I see some improvements in the admin interface, although I still can’t see any kind of “preview”. I hope it helps me with my struggle with spam, at least :-/
-
-
The ultimate TODO app
Feb 9, 2009 onUPDATE: Bubug hasn’t been maintained for a long time and is now deprecated, sorry. The closest equivalent I have to a TODO application is Wiki-toki, my personal Wiki program.
I have been quite frustrated by TODO applications for some months now. They’re usually either too simple, or almost too complex and without features that I think are really valuable. In particular, there are two things that I don’t remember having seen in any TODO application:
-
Possibility to “postpone” a task, so it doesn’t appear in the main view for a defined time.
-
Possibility to associate a task to a “person to nag”.
When you have a lot of small tasks to do, and they are not the kind of things you put in a BTS (say, stuff that you have to do that is not really connected to some project’s code) I think these two features are really useful, and I was surprised that no applications I saw seemed to have those. I mean, don’t people have the same problems as me?
That’s why, as I had mentioned, I started writing my own TODO application: I’d have what I wanted, and I’d learn a thing or two about Merb, DataMapper and jQuery. The application has several design limitations that I used to simplify things, like not having any notion of users (single user app without authentication) or supporting only a “title” for the tasks, without any longer description. It isn’t something I really plan to publish for other people to use (I mean, the code is in my Darcs repo, I’m just not going to make a project page for it or anything like that), so I don’t really care how much it fits other people’s needs :-)
As it is a pet project and I didn’t really mind how long it would take to finish it, I started by making some mockup of the application in HTML (+ a bit of Javascript with jQuery), and once I was happy, I started with the actual design and code. I think some parts of the code are nice, and it has some Ajax sweetness, but I admit I haven’t used it yet for myself: only as a kind of underpowered BTS for the application. Maybe I’ll upload some screenshot some day. In the meantime, feel free to download and try it out ;-)
-
-
The Myths of Innovation
Dec 17, 2008 onThat’s the title of a really good book by Scott Berkun, the fella who was project manager for Internet Explorer when it could still be called a browser ;-) The Myths of Innovation is very easy to read, funny and has some food for thought. It dissects a bunch of myths about innovation and innovators, points out typical difficulties and dangers that innovators face, and analyses why these myths are common, why people like them, and why they are so much handier to refer to than the actual history and reality of innovation, which is of course much more complex.
One chapter that made me think a lot was chapter 7: “Your boss knows more about innovation than you”. It explores the relation between (traditional) management and innovation, and claims that managers can work against innovation if they just try to increase efficiency and keep things under control. In that sense, quality assurance engineers can be like those project managers, so I wondered a lot about my role and my duties with regards to innovation. On the one hand, you do have to control things that are being done and be conservative to a certain extent. On the other hand, innovation is such an important part of an IT company (particularly if it’s Internet-related) that you really don’t want to risk blocking or stifling it.
Fortunately, it also explains how to keep the workplace open to innovation, including things like having toys and “funny” things at the office. It turns out that they’re not there to spoil the employees, but to provide an environment where people feel free to “think different” and are not afraid of new ideas or to say what they think.
All in all, I think it’s a great book. Recommended!
-
Opera Widgets redux
Dec 14, 2008 onI never liked Opera Widgets too much. I tried them a couple of years ago, but I never saw the point. I even tried the games, but they performed so ridiculously poorly that I just gave up. What did I need them for?
Around one year ago, however, I found the first useful widget, a kind of simple “monitor” for the Continuous Integration server run for some project. It was really simple and actually useful (basically, a big window that is either green or red). Shortly after, someone pointed me at a “random lolcat” widget (best widget ever, I say; unfortunately it’s not public), so I started to wonder if I was wrong and widgets were maybe useful after all.
Since then, I have found another widget that I find very handy, the Twitter widget, and I even realised that the performance problems were something of the past, so I could consider trying a couple of games. And, alas, it turns out that there are at least two games worth trying: Bubbles and my favourite, Ninja Ropes Extreme.
So give them a try, you might be surprised :-)
-
Playing around with jQuery
Dec 14, 2008 onA couple of weeks ago I started a new pet project. Namely, making the ultimate todo list application. The idea was to:
-
Make a TODO application that I actually like (I’ll post about it some other day).
-
Learn Merb and DataMapper… and jQuery.
The experience has been roughly half frustrating, half rewarding. It’s fun learning new things, but the documentation for both Merb and DataMapper sucks big time, so sometimes I spend much more time than I would like figuring out how to make things work. Don’t get me wrong, the reference documentation looks very complete… but there’s no single source of consistent documentation to learn how things are done. And that’s painful. Moreover, apparently the API has changed several times (at least before 1.0.0, but it hasn’t been that long since), so a lot of recipes or solutions you find on the internet are simply not valid anymore, which just adds to the confusion and frustration.
Anyway. The client-side piece of the puzzle, jQuery, has proven to be a very handy, clean, easy-to-use-even-for-non-Javascript-wizards, natural way of writing Javascript. I admit I’m a kind of Javascript-phobe, as I don’t really know more than the basics and have never had the need or inclination to actually learn the language (yeah, my bad, but whatever). And yet, I really like jQuery, so there has to be something there. My favourite feature is the selectors: they’re a very clean way to access elements in a page, and add event handlers or otherwise manipulate them. Also, the jQuery Ajax features feel really natural and comfortable to use.
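To show what I mean by selectors and Ajax feeling natural, this is the general flavour (the selectors, URL and response fields here are made up, not the ones from my application):

// Select elements and manipulate them
$("#tasks li.done").addClass("faded");

// Attach an event handler and do an Ajax call
$("#new-task-form").submit(function(event) {
    event.preventDefault();
    $.post("/tasks", $(this).serialize(), function(data) {
        $("#tasks").append("<li>" + data.title + "</li>");
    }, "json");
});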
The only problem I have had was with the autocomplete jQuery UI feature. I read the documentation, downloaded the appropriate bits from the jQuery website and included everything in my application, but it just wouldn’t work. After a lot of trial and error (and more frustration), I could finally make it work… using the Javascript files from the demo page (js, css) instead of the latest version from the jQuery download page. I think the problem is that the API has changed (the documentation apparently describes the older version they use in the demo page), but I couldn’t figure it out by reading the source code, so I just used the older, known-to-work version.
-
-
Documentation and wikis
Nov 28, 2008 onAfter a couple of (unrelated) recent events, I remembered that some/most people use some desktop “word processor” for writing and maintaining documentation. After years of working with Wikis for virtually all documentation, I have to say that I don’t understand why people still use those dinosaurs. Using a word processor for documentation feels so nineties.
When working in technical teams, I think the advantages of the Wikis are amazing:
-
You know you’re always reading/modifying the latest version. Uploading to a central server or a shared folder is theoretically possible (and I’m sure some people do it), but I don’t think it works as well.
-
You can link all content to any other content (and if you keep all your documentation in the same Wiki, you can link to other project documentation or general company/team guidelines or conventions, for example).
-
You can keep bits of documentation that don’t fit in a standalone “document”, like collections of small tips, lists of things to take into account when you do this or that, checklists, configuration/code snippets and examples, journals, etc. And of course link all that to any other part of the documentation, as stated above.
-
You think “globally”, in terms of the content, not in terms of “documents” that are (usually artificially) independent from each other. Also, it’s mentally cheaper to browse through wiki pages than it is to browse word processor documents, so the documentation is more visible and more used.
-
You focus on content, not on formatting or the way things are presented. It’s also easier to keep the same consistent look and feel for all your documentation, if you wanted to change it.
-
As you don’t have “documents”, just “documentation”, people feel free to edit and update it whenever it’s necessary, instead of feeling the need to ask the “author” of each document.
-
You don’t need any special program that might not be available on all platforms, or that might not interpret the document in exactly the same way everywhere. It’s also easier to access it from other computers.
-
Documents don’t get lost or become obsolete because of the format.
-
You usually get revision control for free (revision control that makes it trivial to see the whole change history of the documentation, review exactly which changes someone made at a given moment, etc.). And if you’re using a Wiki that doesn’t support version control, you should use a different Wiki
;-)
Of course, I’m not saying Wikis are the perfect solution, let alone independently of the team, company, project and context you’re using them in, but I think that in general they are quite superior as a technical documentation repository for a software development team.
-
-
Why I hate Rubygems
Nov 23, 2008 onI have always thought that systems should be something integrated. Each “system” has its own conventions, cultural values, etc. and I think you have to respect that. I believe in the Debian way (adapting programs to an integrated system, not just creating a large collection of packages that are identical to the upstream versions), I like to adapt my style of programming to the language (indentation conventions, identifiers, tools for building and testing, etc.), I prefer cross-platform applications that look and feel like each platform they run on, etc.
In the same way, I feel that the mere idea of having a programming-language-dependent packaging system is a broken one. I know it has advantages, and I know that, being specific to the language, some things work better or are more flexible, but I just don’t believe in the idea. Why should I use a different packaging system for certain things just because they’re written in Ruby? Why do I, as a user of those programs/modules, even have to know that there’s some Ruby-specific packaging system, that it doesn’t integrate at all with my system’s packaging system, and that mixing both leads to a mess?
Not only that, but Rubygems in particular is quite hostile to repackaging into a platform-specific packaging system. A lot of people only provide gems for their software, which are harder to work with than “normal” tarballs. They also use their own conventions for directories, which break the FHS (for example) and basically only make sense in the context of Rubygems. In that sense, CPAN is much better (although I think using it for application deployment is a very bad idea, but that’s a different matter), because at least it installs everything in sane directories, it doesn’t change Perl in any way, and it’s not a special format, just a repository of easy-to-install, easy-to-work-with, easy-to-hack, easy-to-repackage “distributions”.
Why, oh, why?
-
Opera Mini 4.2 (beta)
Nov 16, 2008 onI have to say I’m impressed with Opera Mini. It’s a very good product that is not only innovative, but also damn hard to pull off: getting anything working decently on a plethora of ill-designed, ill-implemented, crashing-and-burning-at-any-error, incompatible phones is no small feat. But somehow these guys bring the Internet to everyone that has a mobile phone that supports Java (a pretty low requirement these days)… and that lives in a country where mobile phone operators don’t charge your ass for connecting to the Internet, of course (and then again, Opera Mini heavily compresses the pages, so you only download a fraction of the original).
And the experience, taking into account the limited interface, is pretty good. And they add features and improvements in every release (namely, they brought back “skins”, added notes to the list of supported Link data types, and probably other things I haven’t noticed). What else can I say?
The other day I wanted to go and buy some board game. I had gone to BoardGameGeek (awesome website BTW) and had made a list of the games that looked interesting. So I go to the shop, and of course most of them weren’t there… but there was some other game that looked interesting but I hadn’t seen before: Primordial Soup. Having Opera Mini in my phone, I could very easily check the rating and some basic information for that game, which helped me decide if I should buy it. Not only that, but thanks to Link I had the list of games I wanted to buy in my bookmarks (I had added them from my Desktop computer), so I could even compare the ratings for that game and the ones I wanted to buy to start with. How awesome is that?
Go Opera Mini team!
-
Photo management applications
Nov 2, 2008 onIt’s been a couple of years now since I became a digiKam user. I have been mostly happy with it (actually I don’t even use many of its features, as my needs are not particularly advanced), but from time to time the Flickr export would fail for no reason. Some time ago I needed to upload a lot of pictures and it started failing again, so I looked for some alternatives.
Apart from other apps I knew already and didn’t particularly like, I found dfo (Desktop Flickr Organizer), a GNOME application. It was nice, and it was easy enough to upload pictures to Flickr with it, but it felt weird. What I would like to have is some application to manage my gallery, with an option to upload certain pictures to Flickr. This application, however, is more like a local Flickr mirror with synchronisation options. I don’t want all my pictures in Flickr, even marked as private. I just don’t care, and I don’t want to wait for all the synchronisation between the app and Flickr. Moreover, I feel kind of tied to Flickr using it, and I’d rather work in a more “agnostic” environment. So it was cool for uploading the pictures I had to upload, but I wasn’t really going to keep using it.
At the same time, a friend suggested using Picasa to upload some pictures, so I gave it a try. I had tried it briefly in the past, and I remember that some things were nice, but for some reason it was never my gallery manager of choice. Trying it again, and even using the synchronisation options for the Picasa web albums, somehow I got the same feeling: it’s nice, but there’s something undefined that makes me not use it. I have to admit that the interface is really fancy and easy to use, and it works decently well, but I don’t completely like the way the synchronisation works, not to mention that I don’t want to be stuck with only Picasa web albums. Also, I’m not happy with it being proprietary, not available in the Debian repositories, and having that special, anti-integrated interface. Some things work much better than in digiKam (I’m thinking especially of the fullscreen/slideshow mode, which sucks pretty badly in digiKam), but I still prefer digiKam overall.
As I wasn’t too happy with the alternatives, I decided to have a look at the problem with digiKam. It turns out that digiKam just uses the so-called Kipi-plugins for picture exporting and other things, and that there was a new version of them that fixed a couple of issues… one of them being a problem with Flickr upload. The package is not available in Debian unstable because we’re currently in freeze (unfortunately, that means that Lenny will ship without a functional Flickr-uploading Kipi plugin). However, I saw that the new package had actually been uploaded to experimental, so I decided to give it a try. Not only does it work like a charm, but the new version 1.6 reworks the Flickr export plugin completely, and now it’s much nicer. So I’m happy now, back to digiKam with a working Flickr export o/

To install it yourself, make sure that you have this line in your /etc/apt/sources.list:

deb http://ftp.de.debian.org/debian/ experimental main non-free contrib
Then, update your available package list and install kipi-plugins from experimental, like this:

sudo aptitude update && sudo aptitude -t experimental install kipi-plugins
That should do it.
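As a side note (this is just a sketch, not something the package maintainers recommend), if you would rather not remember the -t experimental bit for future upgrades, an APT pin in /etc/apt/preferences along these lines should make APT track just that package from experimental while leaving everything else on unstable:

Package: kipi-plugins
Pin: release a=experimental
Pin-Priority: 501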
-
Hugin and small, silly mencoder tip
Oct 28, 2008 onFrom time to time I like making panorama pictures. When I started several years ago, Autostitch was really popular, but it didn’t have a Linux version, which sucked. Actually, it still doesn’t. However, it worked under wine, so I just used it via emulation. It was very simple and worked ok.
Sometimes I’d look for alternatives under Linux (if possible, free) and I had seen a tool called Hugin. It looked complicated (at least compared to Autostitch’s select-pictures-hit-ok-there-you-go), and for some reason I never really used it. It probably wasn’t packaged for Debian or something like that.
A couple of days ago, though, I came back from a trip where I had taken a couple of panoramas, and Autostitch behaved quite suboptimally: it didn’t recognise one of my panoramas, and completely destroyed the perspective in some others. So I decided to give Hugin another go. And boy am I happy with it. It’s very easy to install in Debian, and although I had some problem with the path to enblend (apparently I had to specify the absolute path to it in the preferences), everything worked fine. Selecting the points to join the pictures is not that hard, and it actually has one advantage over Autostitch: if it doesn’t recognise your panoramas automatically, you can give Hugin “hints” about which points are the same in the different pictures, so it will still work. Another advantage is that it has several ways of joining the pictures, which solved my second problem, the perspective destruction :-)

Apart from the panorama pictures, I also had some videos… and one of them was recorded as “portrait” instead of “landscape”. So I needed a way to rotate the video. Fortunately, that was easy enough with mencoder (using the command line, though):

mencoder -vop rotate=2 MVI_2352.AVI -ovc lavc -oac copy -o MVI_2352.avi
I found the tip in some thread in the Ubuntu forums, and had to look up the values for “rotate” in mencoder’s manpage:
0  Rotate by 90 degrees clockwise and flip (default).
1  Rotate by 90 degrees clockwise.
2  Rotate by 90 degrees counterclockwise.
3  Rotate by 90 degrees counterclockwise and flip.
-
GPG confusion
Sep 22, 2008 onToday I was playing with GnuPG, trying to add a couple of public keys to an “external” keyring (some random file, not my own keyring). Why? you ask. Well, I was preparing some Debian package containing GPG keys for APT repository signing (like
debian-archive-keyring
and such). The point is, I was really confused for quite a while because, after reading the gpg manpage, I was trying things like:

gpg --no-default-keyring --keyring keys.gpg --import … # Wrong!
But that wouldn’t add anything to keys.gpg, which I swear I had in the current directory. After a lot of wondering, I realised that gpg interprets paths for keyrings as relative to… ~/.gnupg, not the current directory. I guess it’s for security reasons, but I find it really confusing.

The lesson learned: always use --keyring ./keys.gpg or, better, never use keys.gpg as the filename for external keyrings, but something more explicit and “non-standard” like my-archive-keyring.gpg or whatever.
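In other words (the key file names here are made up, just to show the difference):

# This silently uses ~/.gnupg/keys.gpg, NOT ./keys.gpg:
gpg --no-default-keyring --keyring keys.gpg --import some-key.asc

# An explicit path is taken as-is:
gpg --no-default-keyring --keyring ./my-archive-keyring.gpg --import some-key.asc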
Spam adventures
Sep 22, 2008 onToday I have had a gigantic e-mail spam attack. And by “gigantic” I mean something like one message every couple of seconds. It seems to have stopped for now, though (maybe until tomorrow, sigh). However, there is a small tip that I used in the meantime, and as I have found it helps me filter spam, I thought I’d share it with you. It’s very simple: ordering by subject instead of by date. Of course, you have to filter your view to only unread messages, but it works surprisingly well.
This is very easy to do in mutt, my mail reader of choice (for personal e-mail; I have found that, at least for work e-mail, Opera’s M2 works quite well too). You just have to limit to unread messages (pressing lowercase “L” and then using “~N” as the filter), and then sort by subject (:set sort=subject). I have even created two “macros” in mutt to switch back and forth between “spam filtering mode” and “normal mode”:

macro index Cs ":set sort=subject<return>l~N<return>"
macro index Cq ":set sort=threads<return>lall<return>"
Let’s hope it doesn’t begin again tomorrow
:-S
-
Sucky Typo update
Aug 19, 2008 onThe other day I was talking about upgrading Typo. The update itself went well, true, and the site was up and running without too much downtime, but then I started using it again… and I have noticed two things so far (both about writing posts) that I really dislike:
First, the good old editor is not there anymore: the Typo editor used to be really good, because on the left-hand side you had a very reliable and easy-to-use textarea with Wiki syntax (you could choose which exact syntax you wanted), and on the right-hand side you had a “live preview” of your post, automatically updated with Ajax, that showed you what the post was going to look like. Well, that’s gone. Now there are two options: some retarded WYSIWYG box, which I tried to use and failed, and a good old textarea… without the damn live preview. That sucks big time, because there is no other preview (that I have seen: please enlighten me if there is indeed one), so I just blindly write things in Wiki format and hope that it’s going to look OK when I press “Publish”.
Second, I was playing with the Wiki format for the articles, and I changed it to “Markdown” (I always mix up “Textile” and “Markdown”, and never remember which is which; the one I prefer is Textile). After I hit “Save”, not only was the next article parsed as Markdown by default, but so was every single existing blog post. It’s like you select the parser the system is going to use to interpret your whole blog. How retarded is that? Once you have written posts, it doesn’t make sense to change their syntax (unless you do it manually by editing the post itself). Clearly the format is a property of each blog post, not of the whole blog installation.
Not everything is bad though: it seems that now you finally have a “Draft” concept, so I can start writing a blog post and just save as a draft, instead of unticking the “Online” property and saving as a normal post. Also, the drafts are saved automatically, so I don’t have to remember to hit “Save” from time to time just in case the browser crashes or I hit something stupid and erase the contents of the post. Yay for that.
-
YAPC::Europe 2008
Aug 15, 2008 onIt’s funny. One month ago, I had never been to Copenhagen. I had two weeks of vacation, so I spent a couple of days there and got to know the city. A couple of weeks later, I’m back in Copenhagen for YAPC::Europe 2008.
In short, the talks were good. Not fantastic on average, but good. In particular, Damian Conway’s keynote on Thursday morning was really funny, and gave plenty of food for thought. It was about contexts and the Contextual::Return module (BTW, does anyone know which system he uses for the slides?). Wednesday’s keynote by Larry Wall was about Perl 6, and went a bit too much into the details. It had some interesting ideas about programming language extensibility, but it was a bit much for a Wednesday morning (without much sleep). Prophet (“a grounded, semirelational, peer to peer replicated, disconnected, versioned, property database with self-healing conflict resolution”) looks really cool; I’ll see if I can have a look soon. There were also some ideas about QA and automated testing to think about, explore, and share with other people.
Many of the lightning talks were very, very funny. One of the funniest was the talk about implementing lolcode in Perl 6. Really impressive when you think about it, and really funny too. Others, like the Trailer Theory talk from Adam Kennedy, were great as well.
All in all, we had a really good time, we learned some things, we have some things written down to investigate later, and we met some new people. Yay for the YAPC::Europe!
-
Typo upgrade
Aug 7, 2008 onHey there!
I have just upgraded Typo. It was slightly traumatic, because at first the blog broke horribly and I couldn’t see anything other than 500 errors. To be fair, the change was quite big, because it also included an upgrade to Rails 2 (I was using some older Typo that used Rails 1.2.x), so all things considered everything worked better than expected.
I could log in as admin, and change preferences and whatnot, and the only thing that was broken was the public view of the blog. I had a look at the logs, and it complained about not being able to find some template for the sidebars. I was very confused, and didn’t know where to start looking for this. So, obviously, I asked “Señor Google”. He didn’t tell me that much, but someone left me the following hint: commenting out the call to the helper render_sidebars (in the active theme code) solved the problem… at the price of not having sidebars, of course.

So I decided to connect a Ruby/Rails console to the production database, and have a look at the Sidebar model. The summary of what I did is this:
Sidebar.find(:all, :order => 'active_position ASC').map {|s| s.active_position}
=> [0, 0, 1, 1, 2, 2, 3, 4]

Sidebar.find(:all, :order => 'active_position ASC').map {|s| s.type}
=> [nil, "CategorySidebar", nil, "ArchivesSidebar", nil, "TagSidebar", "StaticSidebar", "XmlSidebar"]

Sidebar.find(:all, :order => 'active_position ASC').find_all {|s| s.type.nil?}.size
=> 3

Sidebar.find(:all, :order => 'active_position ASC').find_all {|s| s.type.nil?}.each {|s| s.destroy}
=> [#<Sidebar id: 1, active_position: 0, config: {"empty"=>false, "count"=>true}, staged_position: nil, type: nil>,
    #<Sidebar id: 2, active_position: 1, config: {"title"=>"Links", "body"=>"…"}, staged_position: nil, type: nil>,
    #<Sidebar id: 3, active_position: 2, config: {"format"=>"rss20", "trackbacks"=>true, "comments"=>true, "articles"=>true}, staged_position: nil, type: nil>]

Sidebar.find(:all, :order => 'active_position ASC').find_all {|s| s.type.nil?}.size
=> 0
So, the problem is that there were some (severely broken) leftovers of the upgrade. I just removed them, and everything started working again. Phew!
-
Useful spam and Knol
Jul 31, 2008 onOver the last few days I have noticed that most of the spam I receive has some made-up news item as the subject. I imagine it is to make people click on the messages.
The point is that one of those messages was titled “Knol, the Wikipedia killer”, or something along those lines. I didn’t click on the message, and actually I just thought that “Knol” was a made up word… but then I thought “hm, maybe this exists after all”, so I went to the Wikipedia page… and there you go, I just found out about Knol and learned something about Music in Capoeira because of some spam message.
Informative spam. Go figure. Or maybe it means I should read more tech news?
-
Linux video editing and YouTube annotations
Jul 23, 2008 onIn my recent trip to Copenhagen, I recorded a small video of the subway (it’s really cool, because it’s completely automatic, it doesn’t have drivers or anything). I wanted to edit the video to remove people that were reflected on the window, so I wondered if I could do that on Linux. I imagined it wouldn’t be trivial, but it was more frustrating than I thought. Maybe I’m too old for this.
The first thing I tried was looking in APT’s cache for “video editing”. The most promising result was kino. I had tried that a couple of times some time ago, and I never got it to work, but I figured I would try again. Unfortunately, same result: I just can’t figure out how to import my videos. Maybe I’m just hitting the wrong button or whatever, but it’s really frustrating.
The second thing was having a look on the internet. I found the (dead and being rewritten?) Cinelerra, as always, and I didn’t feel like installing the old one from source only to waste my time and not get it to work, so I just ignored it. Maybe they had it in debian-multimedia and it wouldn’t have been a tough install after all. Anyway.
Next thing, I found some program called openmovieeditor. This one apparently worked, but I couldn’t figure out how to crop the image (or almost any other thing for that matter).
Next, some neat program written in Python, called pitivi. When I tried to run it though, it just said
Error: Icon 'misc' not present in theme
on the console and died. I later figured out that I had to install gnome-icon-theme
for it to work (yeah, Debian maintainer’s fault). It’s funny, because the webpage says that it has some “advanced view” that you can access via the “View” menu… but I couldn’t find it. My menu only had one entry: “Fullscreen”. Great.

Oh, wait, there’s gimp-gap. I could just import my animation into Gimp, crop the frames, and convert them back to video. Easier said than done. I needed some programs that I didn’t have, and I wasn’t sure whether they were easy/quick/clean to install (sure, I could have exported to a GIF animation and probably converted that to video, I just didn’t want to lose so much color quality in the GIF step). Forget it for now. At least I had the images, so if I could just turn them into a movie…

So, I started wondering if, given that I had decided to just crop, and especially now that I had a lot of images that were the frames, maybe I could just use some command-line tool or something. I found this tiny little program, images2mpg. Long story short, after installing some dependencies from source (they gave compilation errors, but luckily I could compile only the binaries I really needed), that program was completely retarded and didn’t even do what I wanted (it wanted at least one second between images, but I didn’t want a slideshow, just a normal movie made from the frames). It looks so simple and yet it’s so buggy. Gah.

So I started wondering if I could just crop with mplayer… Hmmm… after a couple of problems (like documented switches that were not there, and other crap), I ended up with this command line:
mencoder -vf crop=320:200:0:40 MVI_2160.AVI -ovc lavc -nosound -o metro-crop.avi
That was reasonably quick and easy but it was so frustrating after all that lost time.
In any case, I ended up with the video I wanted, so I went to YouTube to upload it. When uploading, I realised that there was some option I had never seen: annotations.
YouTube annotations are really cool. They are like the notes on Flickr, but on a video
:-D
Actually I kind of wanted to make a note like that on this video, to show the automatic doors on the Metro station, so I was really happy to see that I could actually do it. And the interface is really easy to use and very clear. I really like it! You can see the result here:

EDIT: WTF? The annotations don’t appear on the embedded videos? You’ll have to go to the video page to see them, then…
-
Work-related news
Jul 22, 2008 onSome time ago, Opera announced the Opera Web Standards Curriculum project. It’s a very interesting collection of articles that can be used as a “curriculum” to learn about web development. It gets extra geek points for using a Creative Commons license for the articles themselves. Even the W3C mentioned it
:-)
I just found some time to have a look at it, that’s why I’m posting now:-)
The other news is that the Opera QA blog is finally online, and has its first non-hello-world article (written by yours truly), “Continuous Integration: Team Testing”. I’m very excited about this, because it’s the first time I’ll participate directly in a company blog, and because the IT world needs more (and better) QA, so hopefully we’ll be able to spread the word and make the world a better place
:-D
-
Frustrated by Python module management
Jun 24, 2008 onI don’t really do any Python development myself, but at work I do support some automated testing infrastructure (in this particular case, I’m talking about a CruiseControl.rb installation), and some of the projects that use that infrastructure use Python. The setup is such that the tests are actually executed on the CC.rb server, so I have to have Python installed there, along with some basic dependencies needed to run the tests.
A couple of times something strange happened: suddenly, those tests would start failing for no apparent reason, and looking at the logs, it looked like some dependencies were not installed (error messages such as
ImportError: No module named sqlalchemy
). Of course that didn’t make any sense, because SQLAlchemy is needed for the tests and they were working like a charm for weeks. I was totally and completely confused by the error message, and I tried to install SQLAlchemy again. That solved the problem, luckily, so I decided to forget about it because it wasn’t my thing anyway.But the problems appeared again. And again. And I got another error message that was really confusing, because it looked like Python was using some old version of some module (a version that wasn’t there anymore, because the code had been updated from SVN). So I just got tired of not knowing what was going on, and decided to investigate enough to find out the root of the problem. And I found something surprising.
What I found is that the famous
python setup.py develop
(that everyone told me to use) actually adds the “current directory” to the list of paths where Python searches for modules, so you can develop your module and use it from anywhere. I had heard some comment on that, but I didn’t quite get what it meant, and I don’t think the person that said it realised either.The fun thing with
setup.py develop
is that when you have several branches of the same project in the same machine, and you use that to make the modules available… well, I guess that knowing which versions of which modules Python will use becomes an interesting question to say the least. I’m not saying that the way it works is necessarily wrong, but I do think it is dangerous, and people shouldn’t think of it as the “normal” way of developing modules in Python. It should be used with care.After having realised that and thought about it a bit, I still don’t understand why those modules simply “dissappeared”, but it seems that there was some corruption of
/usr/lib/python2.5/site-packages/easy_install.pth
or similar (that file seems to be modified when you install packages witheasy_install
, and it had references to the directories I ransetup.py develop
from, so that’s my main suspect for now). At least I know now that I could backup a workingeasy_install.pth
file, and restore when we have problems again, but I’m far from happy with that “solution”;-)
Also, I’m wondering what the hell should I do in the future to prevent more problems, because using
setup.py develop
sounds like a terrible idea to me. I tried to setPYTHONPATH
instead, but apparently I failed. Any suggestions?EDIT: I’m finally using
PYTHONPATH
. I have no idea what I tried last time, but using it was easy, quick and clean. I still have no idea why the hell Python sometimes forgets where some modules are, though. -
CHDK - Canon Hacker's Development Kit
Jun 15, 2008 onSome days ago, Arve posted a very interesting link in Twitter: Turn Your Point-and-Shoot into a Super-Camera. It was about something called CHDK (Canon Hacker’s Development Kit), which is a non-official firmware enhancement for many Canon cameras.
It sounds pretty scary, but actually it’s really safe and easy to use: you just copy some files into your memory card, and ask the camera to upgrade the firmware via some menu option. The awesome part is that it only “upgrades” a copy in memory, so if you simply turn off the camera, the next time everything is back to normal. Of course there are options to load it on startup if you’re happy with it.
The goodies: saving in RAW format, some new menu options, more information on the OSD, a configurable OSD, BASIC scripting, and even games (Sokoban and Reversi). One of the features that caught my attention in the article was a special motion-detection mode, which apparently works well for taking pictures of lightning strikes. And it’s actually a user-written script, how awesome is that?
I haven’t played that much with it yet, but I have tried it and it works as advertised (YMMV). I can’t wait to use it more, and maybe even try writing some silly BASIC program.
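Just to give a flavour of what those scripts look like, something along these lines should take a few shots in a row (I haven’t actually run this one, so treat it as a sketch):

rem CHDK uBASIC sketch: take three pictures, two seconds apart
@title Three shots
for i=1 to 3
  shoot
  sleep 2000
next i
end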
Thanks a lot Arve!
;-)
-
Retarded keyboard
Jun 11, 2008 onSo, today I was working normally, and suddenly I mispress something… and I can’t switch to other desktops anymore.
First thing I think: maybe some KDE global shortcut manager or whatever went nuts, and redefined my “Switch to Desktop” keys. So I go and check the preferences, and I find that everything is alright.
So I try to redefine the shortcuts again, and I notice that, according to KDE, F1 produces XF86Launch0, and the rest of my F-keys just don’t do anything. I panic, think for a moment about changing the shortcuts to Ctrl-1, Ctrl-2, etc., discard the idea because sooner or later I’ll need the F-keys anyway… and decide to reboot. But still I can’t use my F-keys.

Totally desperate, I ask on IRC and someone says “F Lock”. And I go “WTF is that?” but look at my keyboard, and see some key that is indeed labelled “F Lock”. I press it and everything goes back to normal.

Then, the person goes on to explain that Microsoft makes very retarded keyboards (in particular, I was using a Microsoft Natural Keyboard) that “feature” a key called “F Lock”, which redefines the F-keys (F1, F2, …) to be some “useful” idiotic retarded shortcuts for Office applications or who knows what. I was also told that apparently some of those keyboards boot in “retarded mode” by default (mine seemed to somehow remember the setting across my last reboot, because it has never done that).

I just had to blog about this. Amazing.
-
Microsoft Office formats
May 22, 2008 onI read that Office 2007 won’t support ISO’s OOXML. There are two things I find funny in there:
-
After pushing for making OOXML a “standard format” (as in ISO), Microsoft is not implementing the standard spec after all (and won’t until some future version).
-
Microsoft is going to support ODF (competing, open format).
Of course, they want people to use a non-standard OOXML (the one currently in Office 2007 apparently), so they aren’t really in a hurry to support ISO’s OOXML, and their ODF support will probably not be perfect, so they’re just doing the usual stuff, trying to get people to use some format that they are in a better position to support. Oh well.
-
-
I don't "git" it
May 13, 2008 onI admit I don’t get it. Tons of people are using Git these days, and most of them seem incredibly happy with it. I don’t really have any relevant experience with it (I’ve only used it for a couple of days), but I didn’t like it that much. It feels weird, clunky and complicated (especially the interface, which is horrid, but then I’m used to Darcs, so I’m biased/spoilt there).
Yeah, yeah. Everyone says that Git’s power lies in the concepts it’s built on, that they’re different from other VCSs, and that you have to learn all that to really “get” Git. But at the same time they admit the documentation sucks and doesn’t really help you understand it. So, to be enlightened, you have to play a lot with it. I just don’t feel like it. I’m afraid that all that power… well, I just won’t give a shit about it, to put it bluntly. Having a quick look around the net, the arguments supporting Git sound either really obscure or not that life-saving to me.
And yes, I realise that sounds like the Blub Paradox in Beating the Averages, but I just can’t see how a revision control system can be so wonderful and make such a difference for small and medium projects. I have no doubt Git makes a difference every single day for the Linux kernel, but when most (non free software) projects work “not that bad” even with a centralised VCS like Subversion, is there really any important feature that Git can add vs. any other distributed system (I’m thinking mostly of Mercurial here)? Isn’t the interface going to have a much bigger impact on everyday work (and everyone seems to agree that Git’s still sucks)?
Personally, I’m looking forward to a certain talk about Git, to see if it will make me see the light ;-)
-
Free Software rocks
May 13, 2008 onI just read in Aaron Seigo’s blog a very nice message from a user that proves that free software is making a difference in many areas, even in some that we don’t usually think about. Some quote:
> I cant tell you how much I appreciate the work you all have done. Its a work of art. If I could thank each and every one of you I would.
>
> You have given her the world to learn and explore.
>
> So if you get frustrated or tired in your work for Open Source/Free Software, just remember that somewhere in Missouri there is a 14 year-old girl named Hope, an A-student who runs on the track team, who is now your biggest fan and one of the newest users of Linux/Ubuntu.
Although I haven’t really participated in KDE or Ubuntu (not directly anyway), I too feel proud of what we, as a community, have created. Also, like that person, I feel very thankful for everything I have learned and got from the free software community.
Cheers guys, you all rock!
-
I'm blogging from my brand new OLPC!!!111oneeleven
May 7, 2008 onI just got my OLPC. It’s sweet (and green!). I’ll take pictures and post them later…
w00t!
EDIT: And now I have Opera!
-
Adventures in the Internet
Apr 3, 2008 onIt’s kind of funny. I created a twitter account many months ago. I never really used it, because I guess I didn’t see the point or something. During all that time, several people started “following” me (in twitter jargon), even though I had no content at all, nor plans to add any.
Just today and yesterday, three people added me, so I got kind of curious, and decided to login and have a look. I made a comment just today, about me finding it funny that so many people started “following” me, and someone replied. So I started “following” other people, and reading, and I have made a couple of more comments since. I’m not really sure I’m going to use it everyday, but now I have installed a really handy Opera widget for twitter, so this might be “the start of a beautiful friendship”.
And it’s not just twitter: I also started using eBay (and, to a certain extent, PayPal) this week. Why? Because I have been trying to find one of the greatest PlayStation 2 games ever made, Ico. It’s quite hard to find in a shop nowadays, even second hand, because it’s an old game that wasn’t very successful when it was released. Now it’s a kind of cult game that you’re better off finding on eBay or similar, hence my sudden interest in using eBay:
Note that most of that is actual gameplay footage, not pre-recorded video. It looks like a film because it doesn’t have a HUD.
I have to say that the eBay experience was satisfactory: it was really easy to find what I wanted, it was easy to bid (special mention to the automatic bidding system, which I didn’t know about, and which renders the old bid monkeys kind of obsolete), and I won the item, yay! For the maximum amount I was willing to pay, but still. I did have a couple of really weird problems with PayPal when paying for it, but it finally worked.
Another thing that just happened to me today is that I realised (stupid me) that Skandiabanken works like a charm in Opera. It was my fault for being so nazi with the cookies.
Finally, although not a website, I’m really amazed by the new Opera Mini 4.1 beta. These guys have managed to make a really awesome browser that works in any crappy mobile phone (and that means working around stupid limitations and bugs of tons of different models). Kudos to them!
-
CruiseControl.rb
Feb 25, 2008 onAs part of my QA work on several projects, months ago I was looking for a continuous integration server. I looked at several, but most of them seemed really scary judging from the documentation. I finally went for CruiseControl.rb, and I have been really happy with it all this time. It’s a really nice, very simple continuous integration server written in Rails. I had it up and running before I even understood how to install the others I looked at.
Even though it is a really cool piece of software, I was missing better test result reporting. It was actually there, but only for Rails projects, and unfortunately we don’t have any Rails (or Ruby, for that matter) projects at work. So, I had a look at the sources to see if I could hook my own reporting in there, and the code turned out to be impressively easy to understand (especially taking into account that it’s a rather non-standard Rails application, as it has builders running as daemons, it doesn’t really use a database, etc.).
The result is a patch for CC.rb, already submitted to their BTS, that adds plugin-based result reporting which can be extended to understand any kind of testsuite. It’s basically a parser that collects all the test passes and test failures from the testsuite output log.
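To give an idea of the shape of such a parser (this is not the actual code from the patch, and the log format is invented; it’s just an illustration of the approach):

# Collect passing and failing test names from a testsuite output log.
def parse_test_results(log)
  passed = log.scan(/^PASS: (.+)$/).flatten
  failed = log.scan(/^FAIL: (.+)$/).flatten
  {:passed => passed, :failed => failed}
end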
Also, the other day I had another need, which was even easier to address because it could be implemented as a simple CC.rb notification plugin. It depends on the above patch, and it collects all the failures in the current build, searches the history of the project, finds out who made the commits that produced the regressions, and bugs all those people by e-mail, pointing out which failures were supposedly caused by them, and in which build they started failing (so it’s easier to locate the offending code).
It’s not perfect, and it cannot be, but it’s a nice addition to continuous integration. This notification plugin is not public yet, but it might be in the future (especially if they accept my patch as part of upstream), so stay tuned if you’re interested.
-
Mobile phones
Jan 23, 2008 onI have always hated mobile phones. I always had problems with them (coverage, battery), I always found them ridiculously counterintuitive, expensive, impractical…
But then I moved to Norway (from Spain), and, partly because I wanted to be able to use Opera Mini, I decided to buy a new phone. I didn’t buy anything fancy at all, especially by Norwegian standards (a Sony Ericsson K310i), but I must admit I’m simply impressed by the phone. I know it’s old now and probably half of the phones from the last five years have been good in those regards, but I find it really intuitive to use, very well thought out, with lots of tiny details that make it easier to understand and use, and frankly, for my modest needs, it’s just great. Sure, the camera is very crappy, almost useless, but I never trusted a camera phone anyway.
Also, living in Norway, any phone services I could want to use (normal calls/messages, international calls, Internet access) feel affordable, almost cheap, and now I can just check Mick Jagger’s age if I’m arguing about it with somebody in the middle of the street
;-)
So, after buying the phone, I wanted to make backup copies of the contacts and messages, and I also wanted to be able to copy pictures and videos, and (why not?) games, ringtones and other stuff. I tried fiddling a bit with IrDA and Linux, but I didn’t get it to work and I got frustrated, so I decided to just go and buy an (insanely expensive) USB cable. The good news was that the phone had a mass-storage mode that is compatible with pretty much any operating system. The bad news is that that mode doesn’t let you access the contacts or messages, just ringtones, pictures, movies, themes and similar.
I was quite desperate, especially after having bought the cable (I did find some really great games on the net, though, so I used the cable for something), so I decided to download the official Sony Ericsson PC Suite and try it on some Windows machine (a real hassle, because I don’t have that at home). And, oh the horror, that wasn’t a solution either, because I couldn’t just make a backup of the contacts: I had to “synchronise” with Outlook. And that wouldn’t work for me, that’s for sure.
So I didn’t know what to do. I tried other programs under Linux, but nothing really let me back up my contacts… until I found gammu and especially the oh-wonderful wammu GUI. I just had to specify the USB device in some wizard (in my case, /dev/ttyACM0) and everything just worked like I wanted it to. They even have a Gammu-supported phone database, with a Sony Ericsson K310i entry.

I’m so happy now: everything works like a charm with wammu, I can back up my contacts, messages, and even the calendar, todo list and list of calls if I wanted to. I can also access the ringtones, themes, pictures and videos, so I have everything I need now, under Linux, without problems. Yay!
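For reference, the configuration that the wizard generates boils down to a couple of lines in ~/.gammurc, roughly like this (the connection type is my guess here; it depends on the phone and cable):

[gammu]
port = /dev/ttyACM0
connection = at115200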
Haberdasher demo available
Oct 29, 2007 onAt last! I have prepared an online demo of Haberdasher. Now you can play around with it, as an anonymous user or logging in with demo/demo123.
It has a limit of 50K for patches, so don’t be surprised if you upload a bigger one and it gets truncated
:-)
Have fun, and tell me what you think!
EDIT: The link to the demo was wrong, gah! Thanks to Joaquín for noticing.