HCoder.org
Posts in Category “Debian”
-
FireHOL and Wireguard
Apr 8, 2020
The last blog post was a quick introduction to FireHOL, a tool for building firewalls. In this blog post we will see how to configure FireHOL so that Wireguard can work on the same server. In this setup, Wireguard will be used as a simple VPN server (think OpenVPN): it accepts connections from a client (typically a laptop or a mobile phone) and routes that traffic to the internet.
EDIT: Corrected/simplified a couple of things, based on feedback.
Assumptions
For this blog post, I will assume that you already have Wireguard working, and you have FireHOL installed and configured (except that Wireguard now doesn’t work, and you have to fix FireHOL’s configuration to make it work again).
I will assume that your Wireguard interface is wg0, you are using the (standard) Wireguard port 51820, and your main network interface is eth0.
Configuring Wireguard
There are three things we must do in order to make Wireguard work:
- Accept the initial connection to the Wireguard server port
- Accept traffic from the Wireguard network interface
- Route the traffic from the Wireguard interface to the internet (the main network interface)
Accepting Wireguard connections
The first thing one has to do is to open the Wireguard port. Because Wireguard’s port is not defined in FireHOL, we need to specify the port like this:
    interface eth0
        # ...
        server custom wireguard udp/51820 default accept
If you put those two lines at the end of your interface eth0 definition, you should be good. Note that, if you would prefer that line to look like the other service definitions, you can tell FireHOL what the Wireguard port is and then write the line as server wireguard accept.
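For completeness, here is a minimal sketch of that variant. It assumes your FireHOL version supports defining simple services through the server_*_ports/client_*_ports variables; the lines would go in /etc/firehol/firehol.conf:

    # Define a "wireguard" service so it can be used like a built-in one
    # (assumption: your FireHOL supports variable-based service definitions)
    server_wireguard_ports="udp/51820"
    client_wireguard_ports="default"

    interface eth0
        # ...
        server wireguard accept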
Accepting traffic from the Wireguard interface
For that we need to declare the Wireguard interface and accept everything from/to it:
    interface wg0 vpn
        policy accept
Put those lines before or after your other interface definitions.
Routing
Last but not least, we need to allow the traffic from wg0 to be routed to and from the main network interface. To do that, put these lines at the end of your configuration file:

    router vpn2internet inface wg0 outface eth0
        masquerade
        route all accept
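Putting the three pieces together, the Wireguard-related parts of the configuration file would look something like this (a sketch: the interface name "internet" and the ssh line are placeholders for whatever your existing eth0 definition contains):

    version 6

    interface eth0 internet
        # ... your existing services ...
        server ssh accept
        # Accept Wireguard connections on udp/51820
        server custom wireguard udp/51820 default accept

    # Accept everything to/from the Wireguard interface
    interface wg0 vpn
        policy accept

    # Route the VPN traffic to the internet, masquerading it
    router vpn2internet inface wg0 outface eth0
        masquerade
        route all accept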
Conclusions
One could do more sophisticated configurations, but this is a basic one that should work well. As always, activate the new configuration with firehol try, so that if you break anything you will not lose access to the server. I hope this post was useful!
-
Firewalls with FireHOL
Apr 7, 2020
If you have a computer connected to the internet, e.g. a server/VPS in some hosting company, you are receiving lots of attacks from randos on the internet. It's a very good idea to have a firewall, because chances are that at some point someone will reach your server with an exploit that you haven't had time to patch yet. It's a matter of when, not if. But what if you aren't really the sysadmin type and you have no idea about iptables or any of those incantations needed to protect yourself? Don't fret, because FireHOL has you covered.
Installation
On Debian (and probably Ubuntu?), you can install it by typing:
    sudo apt install firehol && \
        sudo systemctl stop firehol && \
        sudo systemctl disable firehol
The systemctl calls are VERY IMPORTANT because of a current bug in the package, which would otherwise leave your server inaccessible! After it's installed (and disabled), add server all accept to the default config file /etc/firehol/firehol.conf so that it ends up like this (skipping the initial comment block):

    version 6

    # Accept all client traffic on any interface
    interface any world
        client all accept
        server all accept
At this point you can set START_FIREHOL=YES in /etc/default/firehol, and then run:

    sudo systemctl enable firehol && sudo systemctl start firehol
That will give you a running FireHOL that won’t filter anything. So, same as you had before you even installed FireHOL. But at least now you can start…
Defining rules
There are two kinds of things you will want to block with FireHOL: ports/services, and IPs. The first is very easy. The second is not too hard, but you need to learn a thing or two to make it sustainable (i.e. use lists maintained by others).
Blocking ports
More than blocking ports, you specify which ports/services you want open, and everything else is closed by default. Instead of saying server all accept, you put lines like this in its place:

    server http accept
    server https accept
    server ssh accept
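In context, the default block from the installation step would then end up looking like this:

    version 6

    # Accept all client traffic on any interface, but only serve
    # HTTP, HTTPS and SSH
    interface any world
        client all accept
        server http accept
        server https accept
        server ssh accept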
Maintaining IP lists
The easiest way to filter bad IPs (malware, spammers, etc.) is to download IP lists and blacklist them from the FireHOL configuration. There's a tool called update-ipsets (available in the firehol-tools package in Debian) that you can use to download them. You can run update-ipsets to see the available lists (and update them, if enough time has passed) and update-ipsets enable <listname> to enable them. For example, you can run this command to enable the spamhaus_drop and spamhaus_edrop IP lists:

    sudo update-ipsets enable spamhaus_drop spamhaus_edrop && \
        sudo update-ipsets
This will download the lists under /etc/firehol/ipsets. Once they are there, you can add these lines to your configuration file (before the interface definitions) to block incoming connections from any of the IPs and networks mentioned by the lists above:

    ipv4 ipset create badnets hash:net

    for list in spamhaus_drop spamhaus_edrop; do
        ipv4 ipset addfile badnets ipsets/$list.netset
    done

    ipv4 blacklist ipset:badnets
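The lists are only useful if they are refreshed periodically. If your installation doesn't already take care of that, a cron entry along these lines should do it (a sketch; the path and schedule are assumptions, so check where update-ipsets lives on your system):

    # /etc/cron.d/update-ipsets (hypothetical file)
    # Refresh all enabled IP lists twice a day
    0 */12 * * * root /usr/sbin/update-ipsets >/dev/null 2>&1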
Tips
Trying your changes
You can use the firehol try command to try changes: it will automatically revert in 30 seconds unless you type commit in a terminal.
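A typical session looks something like this (the prompts below are paraphrased, not literal output):

    sudo firehol try
    # FireHOL activates the new rules and waits for confirmation:
    # type "commit" (and Enter) in this terminal to keep them, or let
    # the ~30-second timeout restore the previous configuration.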
Keeping your logs clean
By default, FireHOL will send log data (including every single dropped connection!) to syslog. If you want to keep your syslog clean and send FireHOL logs to a different file, you can do the following:
- Install the firehol-doc package with sudo apt install firehol-doc.
- Add FIREHOL_LOG_PREFIX=FireHOL: at the top of /etc/firehol/firehol.conf.
- Use the provided example files (see below).
To use the example rsyslog configuration and the example logrotate configuration, run the following commands (the latter is so that the FireHOL log files don’t grow forever):
    sudo cp /usr/share/doc/firehol/examples/rsyslog/rsyslog_d-firehol.conf \
        /etc/rsyslog.d/firehol.conf
    sudo cp /usr/share/doc/firehol/examples/rsyslog/logrotate_d-firehol \
        /etc/logrotate.d/firehol
Once you follow these steps you will have the FireHOL logs under /var/log/firehol.
Conclusions
FireHOL is a great tool for building firewalls easily, without having to learn arcane syntax or command-line options. Even if you don't have advanced sysadmin knowledge, it's easy to get started and secure your servers. I hope this little guide was useful!
-
Book summary: Coding Freedom
Jan 24, 2014
These are my notes for “Coding Freedom”, a sociology/anthropology book that analyses the free software community. You can download it for free from its website, or buy a paper version. These notes cover only the history of free software, which I found very interesting even though I basically knew it already.
1970-1984: The commodification of software
During the 1960s and part of the 1970s, most hardware was sold bundled with software, and there was no software patent or copyright protection. Programmers in university labs routinely read and modified the source code of software produced by others.
In 1976, just as companies began to assert copyrights over software, Gates wrote a letter to a group of hobbyists chastising them for, as he saw it, stealing his software: they had freely distributed copies of Gates’ BASIC interpreter at one of their meetings.
In the late 1970s and early 1980s the US software industries dominated internationally. Amid fears of losing ground to foreigners, US legislators launched an aggressive campaign to develop and fund the high-tech and knowledge economic sector and encountered little friction when accepting software patents in 1980.
1984-1991: Hacking and its discontents
In the late 1970s and the 1980s, corporations started to deny university-based hackers access to the source code of their corporate software, even if the hackers only intended to use it for personal or noncommercial purposes. This infuriated Richard Stallman, who became a “revenge programmer” (the whole fascinating story is on p. 68) and ultimately founded the Free Software Foundation in 1985 and then wrote the first draft of the General Public License in 1989. In 1984 he actually said: “I am the last survivor of a dead culture. And I don’t really belong in the world anymore. And in some ways I feel like I ought to be dead”.
1991-1998: Silent Revolutions
Trade groups intensified their efforts to change intellectual property law largely through international treaties. They worked with law enforcement to strike against “pirates”, pursued civil court remedies against copyright infringers, launched moral education campaigns about the evils of piracy, and pushed aggressively for the inclusion of intellectual property provisions in the multilateral trade treaties of the 1990s. For example, through TRIPS, patents had to ultimately be open to all technological fields.
In the meantime, Linux would gain momentum in companies: managers would say they were not using Linux, but techies would say “yes… but don’t tell my boss”.
1998-2004: Triumph of open source and ominous DMCA
The term “open source” (less philosophical and more technical) was created and won out, and the DMCA was passed, which criminalised all attempts to circumvent access control measures (i.e. DRM), practically giving copyright owners technological control over digitised copyright material.
Misc final notes
For most developers, acceptance of free software rarely led to political opposition to producers of proprietary software, but it made them develop a critical eye toward practices such as the abuse of intellectual property law and the tendency to hide problems from customers: “Free software encourages active participation. Corporate software encourages consumption”.
One of the most profound political effects of free software has been to weaken the hegemonic status of intellectual property law; copyright and patents now have company.
And that’s it. I hope you enjoy it. Go download the book if it sounds interesting or you want to learn more about hacker culture and free software.
-
Personal groupware: SOGo
Sep 14, 2013
Oh, wow. It has been a long while since I wrote anything on this blog. Hopefully I'll get back in shape soon. This time I wanted to write about groupware for personal use. As you may know, I had already written a personal wiki, and several weeks ago I started thinking that it would be cool to have my own place to keep my calendar and my contacts, and use exactly the same list in any e-mail/calendaring program I use, regardless of the device.
After looking around a bit, I chose SOGo. Firstly, because I managed to get it to work (I had tried and failed with Kolab first); secondly, because it seemed simple/small enough to be appropriate for personal use. In my case, I’m using the version that comes with Debian Wheezy (1.3), but I don’t think it will be very different to install in other environments.
Update: Added URL for calendars, small formatting fixes.
Installing SOGo
The installation itself is kind of long and, although it's documented, the installation and configuration guide doesn't give a straightforward list of installation steps. Instead, you have to read it and understand how the whole system is put together. This post is a reminder for myself, as well as documentation for others who might want to install SOGo on their own servers.
The first step is to install the Debian packages “sogo”, “postgresql” and “apache2”. Then, copy /usr/share/doc/sogo/apache.conf into /etc/apache2/sites-available/, and tweak x-webobjects-server-{port,name,url}. Then, enable the Apache modules “proxy”, “proxy_http”, “headers” and “rewrite” and enable the new site with the following commands:

    # a2enmod proxy proxy_http headers rewrite
    # a2ensite sogo
    # /etc/init.d/apache2 restart
The next step is to configure PostgreSQL. First, add this line at the end of /etc/postgresql/9.1/main/pg_hba.conf (or the equivalent for your PostgreSQL version):

    host sogo sogo 127.0.0.1/32 md5
Then create a PostgreSQL user “sogo” and a database “sogo” with the following commands (remember the password you set for the “sogo” user, you’ll need it later):
    # createuser --encrypted --pwprompt sogo --no-superuser --no-createdb --no-createrole
    # createdb -O sogo sogo
Then connect to the database with psql -U sogo -h 127.0.0.1 sogo and create a table “sogo_custom_auth” with this SQL:

    CREATE TABLE sogo_custom_auth (
        c_uid varchar(40) CONSTRAINT firstkey PRIMARY KEY,
        c_name varchar(40) NOT NULL,
        c_password varchar(128) NOT NULL,
        c_cn varchar(128),
        mail varchar(80)
    );
Then calculate the MD5 for whatever password you want for your user (e.g. with echo -n 'MY PASSWORD' | md5sum -) and connect again to the database with psql, this time inserting the user in the database:

    insert into sogo_custom_auth values ('myuser', 'myuser', '<PASSWORDMD5SUM>', 'User R. Name', 'myuser@mydomain.org');
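If you prefer, both steps can be done in one go with something like this (a sketch, using the same example values as above):

    # Hash the password and insert the user in a single step
    PASS_MD5=$(echo -n 'MY PASSWORD' | md5sum | cut -d' ' -f1)
    psql -U sogo -h 127.0.0.1 sogo -c "insert into sogo_custom_auth values \
        ('myuser', 'myuser', '$PASS_MD5', 'User R. Name', 'myuser@mydomain.org');"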
Now you have to configure SOGo so that (1) it can connect to the database you just created, and (2) it looks for users in that database. You do (1) by editing /etc/sogo/sogo.conf to set the correct username and password for the PostgreSQL database; you do (2) by adding the following lines to your /etc/sogo/sogo.conf:

    SOGoUserSources = (
        {
            type = sql;
            id = directory;
            viewURL = "postgresql://sogo:@127.0.0.1:5432/sogo/sogo_custom_auth";
            canAuthenticate = YES;
            isAddressBook = YES;
            userPasswordAlgorithm = md5;
        }
    );
Finally you'll have to restart the “sogo” service with /etc/init.d/sogo restart so it uses the updated configuration.
Importing contacts
It's easy to import contacts if you have them in vCard format. Just log in to SOGo (should be https://<YOURSERVER>/SOGo/), go to Address Book, right click on Personal Address Book and select “Import cards”. If you want to import the contacts in your Google account, go to GMail, click on the “GMail” menu at the top left (just below the Google logo), and select “Contacts”. From there, you have a menu “More” with an “Export…” option. Make sure you select vCard format.
Clients
Of course, the whole point of setting all this up is making your e-mail/calendaring applications use this as a server. I think there are several formats/protocols SOGo can use, but WebDAV/CardDAV works out of the box without any special tweaking or plugins so I went for that.
I have only tried contacts, mind you, but I imagine that calendar information should work, too. I haven't tried having my e-mail in SOGo because I don't care :-)
I have briefly tried with two different clients: Evolution running on Ubuntu Raring, and Android. Both seem to be able to get data from SOGo, but here are some things to note:
- The WebDAV/CardDAV URL for the contacts should be something like https://<YOURSERVER>/SOGo/dav/<YOURUSERNAME>/Contacts/personal/. The Android application seemed to have enough with https://<YOURSERVER>/SOGo/ or https://<YOURSERVER>/SOGo/dav/ (can't remember which one), though, so maybe it's Evolution that can't autodiscover the URL and needs the whole thing spelled out.
- The CalDAV URL for the calendars should be something like https://<YOURSERVER>/SOGo/dav/<YOURUSERNAME>/Calendar/personal/. It's likely that whatever application you're using will have enough with https://<YOURSERVER>/SOGo/ or https://<YOURSERVER>/SOGo/dav/, though.
- Evolution seems to have a problem with HTTPS CardDAV servers that don't use port 443. If yours runs on a different port, make sure you make the WebDAV URLs available through port 443 (with a proxy or similar).
- Certain contact editing operations seem to crash Ubuntu Raring's version of Evolution. A newer version seemed to work fine on Fedora 15's live CD, and an older version seemed to work on some Debian I had around.
- Android doesn't seem to support CardDAV natively, but there's a set of applications to add support for it. I have tried “CardDAV-Sync free beta” for contact synchronisation, and at least it seems to be able to read the contacts and get updates. I have only tried the “read-only” mode of operation; I don't know if the other one works.
In conclusion, this post should contain enough information to get you started installing SOGo on your own server and having some e-mail clients use it as a source for contacts. Feel free to report success/failure with other clients and platforms. Good luck!
-
-
Arepa - Apt REPository Assistant
Mar 22, 2010
For some time now I had been frustrated by the tools to manage APT repositories. The only ones I knew of either covered too little (only adding/removing packages from a repository and such, like reprepro) or were way too complex (like the official tools used by Debian itself). Maybe/probably I'm a moron and I just didn't know of some tool that would solve all my problems, but now it's kind of late ;-) And before you say it, no, Launchpad is not what I was looking for, as far as I understand it.
So I started to work on my own suite of tools for it, and recently I decided to release what I've done so far. It's by no means complete, but it's very useful for me and I thought it would be useful for others. And, with a bit of luck, someone will help me improve it.
So what is it? Arepa (it stands for “Apt REPository Assistant”, but obviously I named it after the yummy Venezuelan sandwiches) is a suite of tools that allows you to manage an APT repository. It contains two command-line tools and a web interface, and its main features are:
- Manages the whole process after a package arrives in the upload queue: from approving it to re-building it from source to signing the final repository.
- It allows you to “approve” source packages uploaded to some “incoming” directory, via a web interface.
- It only accepts source packages, and those are re-compiled automatically in the configured autobuilders. It can even “cross-compile” for other distributions (treated like binNMUs).
- Far from reinventing (many) wheels, it integrates tools like reprepro, GPG, Rsync, debootstrap and sbuild so you don't have to learn all about them.
The approval via some web interface was actually sort of the driving force for the project. One of my pet peeves was that there wasn’t an easy way to have an upload queue and easily approve/reject packages with the tools I knew. From what I had seen, the tools were either for “single person” repositories (no approval needed because the package author is the owner of the repository) or full-blown distribution-size tools like dak and such. My use-case, however, is the following:
- You have an installation of Arepa for an entire organisation (say, a whole company or a big department).
- People inside that organisation upload packages to the upload queue (possibly using dput; the point is, they end up in some directory on the machine hosting Arepa; see the sketch after this list).
- Someone (or a small group of people) acts as the “masters” of the repository, and they'll have access to the web interface. From time to time they check the web UI, and they'll approve (or not) the incoming source packages.
- If they're approved, the source will be added to the repository and it'll be scheduled for compilation in the appropriate combination(s) of architectures and distributions.
- A cronjob compiles pending packages every hour; when the compilation is successful, they're added to the repository.
- At this point, the repository hosted by the Arepa installation has the new packages, but you probably want to serve the repository from a different machine. If that's the case, Arepa can sync the repository to your production machine with a simple command (“arepa sync”).
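As an illustration of the upload step, a developer's dput configuration could look something like this (a hypothetical ~/.dput.cf; the host and queue directory are whatever your Arepa machine uses):

    # ~/.dput.cf (hypothetical host and paths)
    [arepa]
    fqdn = repo.example.com
    method = scp
    incoming = /var/arepa/upload-queue

After that, uploading would just be dput arepa mypackage_1.0-1_source.changes.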
I imagine that a lot of people have the same need, so I uploaded all the code to CPAN (you can see it with the rest of the contributions by Opera Software). Sadly there's a silly bug in the released code (I wanted to release ASAP to be able to focus on other things, and I ended up rushing the release), but it has both a workaround and a patch. So, please give it a try if you're interested and tell me if you would like to contribute. I haven't released the code on GitHub or similar yet, but I'll probably do so if there's interest.
-
-
Slides for several talks now published
Sep 20, 2009
I had said that I was going to publish the slides for a couple of talks I had given over the last couple of months, and I just got around to actually doing it, so here they are:
- Software automated testing 123, an entry-level talk about software automated testing. Why you should be doing it (if you're not already), some advice for test writing, some basic concepts and some basic examples (in Perl, but I trust it shouldn't be too hard to follow even if you don't know the language).
- Taming the Snake: Python unit tests, another entry-level talk, but this time about Python unit testing specifically. How to write xUnit-style tests with unittest, some advice and conventions, and some notes on how to use the excellent nosetests tool.
- Introduction to Debian packaging, divided into four sessions: Introduction, Packaging a simple app, Backporting software and Packaging tools.
Just a quick note about them: the slides shouldn't be too hard to understand without me talking, but of course you'll lose some stuff that is not written down: some twists, clarifications of what I mean exactly by different things, and whatnot. In particular, the “They. don't. make. sense. Don't. write. them” stuff refers to tests that don't have a reliable/controlled environment to run in. I feel really strongly about them, so I wanted to dedicate a few more seconds to smashing the idea that they're OK, hence the extra slides :-)
Enjoy them, and please send me any comments you have about them!
-
-
Linköping trip
Sep 13, 2009
I spent the whole last week (or this week; after all it's Sunday… and Sunday is obviously the last day of the week, not the first, right?) in Linköping, Sweden. The idea was to repeat a Debian course I had given here in Oslo, to give two more talks about automated testing since I was there anyway, and to attend two more talks. It was lots of fun, partly thanks to my “host” (thanks Gerald!), and surprisingly I found a bunch of things that seemed plain weird to me… or at least quite different from Oslo.
The talks themselves went pretty well, I think, although I'd have preferred more people attending. I guess it was normal that there were fewer people than I'm used to, since the Linköping office is much smaller. But anyway. The Debian course went quite well and some people got started packaging stuff almost right away. The other talks were an introduction to automated testing (advocacy and arguments for it, advice, basic examples and a small rant about a different kind of QA), which went OK, and an entry-level talk about unit testing in Python (thanks Ask and Batiste for the information and reviewing the slides!), which went very well. I'll try to get the slides for all the talks available somewhere.
About the city itself, it’s a charming little part of Sweden where:
- Restaurants have insanely different prices depending on whether it's lunch or dinner. Typical prices for lunch are 80 SEK (around 8 EUR), and typical prices for dinner are around 250 SEK for just the main course!
- Restaurants usually serve some Swedish dish for lunch… and I mean every restaurant, including all the Greek, Vietnamese, etc. ones. Considering “real” Swedish restaurants are very expensive, you usually go to those foreign-cuisine ones when you actually want to eat Swedish food.
- Restaurants typically have some salad (that you have to take yourself) while you wait for the food… and some coffee, tea and cookies (that, obviously, you have to take yourself) for the end.
- Related to this, restaurants are usually very self-service. I thought service in Norway sucked, but boy was I wrong: at least there is some service. Also, there were typically long but pretty-fast-moving queues, and there was this one place where you didn't even get the food brought to the table after ordering at the bar; instead, you were given some gadget with a wireless receiver, and when your food was ready it'd beep so you knew you had to go to some special place and fetch your food. Is it really cheaper maintaining some gadgets than hiring a waiter? I guess so.
- The restrictions on the amount of alcohol that can be bought outside the special government booze stores are even stricter than in Norway. You can only buy booze with up to 3.5% alcohol outside “Systembolaget”. Now that is sad. And I was complaining about Norway's 5%.
- Partly because of that (I assume/hope), the Swedish “cider” you get in Sweden is even sweeter and worse than the Swedish cider you get in Norway.
- We went to this nice student pub… which was literally for students. They actually checked your student ID, but each student could bring one non-student along. Once you were “identified” as a non-student-coming-with-a-student, you'd get a stamp on your hand so you wouldn't have to bring the student along when you ordered again. Also, the place was so very slow it was almost funny. One of the good sides was that they had what I thought was the only decent Swedish cider… but after checking just now, it seems it's actually American. Bummer. And the name of it was funny too: “Hardcore Cider”.
- Right before leaving the office on Friday there was a small gathering in the canteen (the “Friday Beer”), where they had a Dreamcast with one of the most awesome games I've seen in a long while: The Typing of the Dead, a version of The House of the Dead 2 in which you kill the zombies by typing words that appear on the screen, instead of aiming and shooting with a gun.
-
-
BCM4312 on Linux: easier than expected
Sep 10, 2009
Just a quick post to say that I was being stupid: it took me a couple of days of fighting, lockups and reading to realise that the driver for the wireless card in my new laptop is actually already packaged, and it works like a charm.
The long(er) story:
- I bought a laptop with that card, and I wanted to make it work.
- Apparently the open source driver (b43) doesn't recognise my card, although it seems it should?
- I tried to download the proprietary driver provided by the vendor (wl), but it didn't compile at first. After applying some patch for the kernel 2.6.29 (I'm using kernel 2.6.30) it did compile, but it didn't quite work. Meaning, it locked up my machine seconds after loading.
- After a couple of days of wondering and trying to make it work… I realised the driver is already compiled in Debian (in particular, broadcom-sta-modules-2.6.30-1-686). Just installing and loading it worked like a charm. Oh well.
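For the record, the fix boils down to something like this (the package name is kernel-specific, so adjust it to the kernel you are running):

    # Install the packaged proprietary driver and load it
    sudo aptitude install broadcom-sta-modules-2.6.30-1-686
    sudo modprobe wl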
-
-
My first contributions to CPAN
Jun 28, 2009
I have been using Perl for many years, but I had never uploaded anything to CPAN. That's unfortunate, because I've probably written several programs or modules that could have been useful for other people. The point is, now I have. Not only that, but it was code I wrote at work, so if I'm not mistaken these are my first contributions to free software from Opera. Yay me!
The two modules I’ve released so far are:
- Parse::Debian::PackageDesc, a module for parsing both .dsc and .changes files from Debian. This is actually a support module for something bigger that I hope I'll release soon-ish.
- Migraine, a still somewhat primitive database change manager inspired by the Ruby on Rails migration system.
As I feel that Migraine could be useful to a lot of people, but it's easy to misunderstand what it really does (unless you already know Rails migrations, of course), I'll elaborate a bit. Imagine that you are developing some application that uses a database. You design the schema, write some SQL file with it, and everybody creates their own databases from that file. Now, as your application evolves, your schema will evolve too. What do you do now to update all databases (every developer installation, testing installations, and don't forget the production database)? One painful way to do it could be documenting which SQL statements you have to execute in order to have the latest version of the schema, and expecting people to apply them by copying and pasting from the documentation. However, that's messy, confusing, and it needs someone to know both which databases to update and when.
Migraine offers a simpler, more reliable way to keep all your databases up to date. Basically, you write all your changes (“migrations”) in some files in a directory, following a simple version-number naming convention (e.g. 001-add_users_table.sql, 002-change_passwd_field_type.sql), and migraine will allow you to keep your databases up to date. In the simplest, most common case, you call migraine with a configuration file specifying which database to upgrade, and it will figure out which migrations are pending to apply, if any, and apply them. The system currently only supports raw SQL, but it should be easy to extend with other types.
In principle, you shouldn't need to write any Perl code to use migraine (it has a Perl module that you can use to integrate it with your Perl programs if you like, but also a command-line tool), so you can use it even in non-Perl projects. Of course, some modern ORMs have their own database migration system, but very often you have to maintain legacy code that doesn't use any fancy ORM, or you don't like the migration system provided by the ORM, or you prefer keeping a single system for schema and data migrations… I think in those cases Migraine can help a lot in reducing chaos and keeping things under control. Try it out and tell me what you think
:-)
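To make the convention concrete, the first migration file could look like this (a hypothetical example, with a made-up table):

    # migrations/001-add_users_table.sql (hypothetical first migration)
    CREATE TABLE users (
        id INTEGER PRIMARY KEY,
        login VARCHAR(40) NOT NULL
    );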
In a couple of days I’ll blog again about other contributions to free software I’ve made lately, but this time in the form of Opera widgets…
-
-
Free software rocks!
May 10, 2009
I've been working on something lately that I hope I will publish sometime next month: it's a set of tools to manage an APT package repository. The idea is that, given an upload queue (you can set it up as an anonymous FTP, or some directory accessible via SSH/SCP, or whatever floats your boat in your setup and team), you'll have a web interface to approve those packages, a set of integrated autobuilders building the approved packages in whatever combination of architectures and distributions you want, and all that integrated with reprepro to keep your repository updated. I'll write more about it when I have released something.
The point now is that, while working on it, I needed some module to parse command-line options and “subcommands” (like git commit, svn update, etc.). As it's written in Perl, I had a look at CPAN to see if I could find anything. The most promising module was App::Rad, but it lacked a couple of things that were very important for me: my idea was “declaring” all the possible commands and options and having the module do all the work for me (generating the help pages and the default --help implementation, generating the program help subcommand, and so on). App::Rad didn't have that, and it didn't seem to me like that was the direction they wanted to take the module in. But I figured I'd drop the author an e-mail anyway and see if he liked the idea, so I could start adding support for all that…
And boy was that a good idea. He replied a couple of days later, and said that they had liked the idea so much that they had implemented it already (that's why he took a couple of days to reply), and he sent me an example of the new syntax they had introduced and asked if that was what I was thinking. And not only that, but they added me to the list of contributors just for giving the idea! That completely made my day. Free software rocks!
-
Photo management applications
Nov 2, 2008
It's been a couple of years now since I became a digiKam user. I have been mostly happy with it (actually I don't even use a lot of its features, as my needs are not particularly advanced), but from time to time the Flickr upload would fail for no reason. Some time ago I needed to upload a lot of pictures and it started failing again, so I looked for some alternatives.
Apart from other apps I knew already and didn't particularly like, I found dfo (Desktop Flickr Organizer), a GNOME application. It was nice, and it was easy enough to upload pictures to Flickr with it, but it felt weird. What I would like to have is some application to manage my gallery, with some option to upload certain pictures to Flickr. However, this application is more like a local Flickr mirror with synchronisation options. I don't want all my pictures in Flickr, even marked as private. I just don't care, and I don't want to wait for all the synchronisation between the app and Flickr. Moreover, I feel kind of tied to Flickr using that, and I'd rather work in a more “agnostic” environment. So it was cool using it to upload the pictures I had to upload, but I wasn't really going to keep using it.
At the same time, one friend suggested using Picasa to upload some pictures, so I gave it a try. I had tried it briefly in the past, and I remember that some things were nice, but for some reason it was never my gallery manager of choice. So, trying it again, and even using the synchronisation options for the Picasa web albums, somehow I got the same feeling again: it’s nice, but there’s something undefined that makes me not use it. I have to admit that the interface is really fancy and easy to use, and it works decently well, but I don’t completely like the way the synchronisation works, not to mention that I don’t want to be stuck with only Picasa web albums. Also, I’m not happy with it being proprietary, not available in the Debian repositories, and with that special, anti-integrated interface. Some things work much better than in digiKam (I’m especially thinking fullscreen/slideshow, which sucks pretty badly in it), but I still prefer digiKam overall.
As I wasn't too happy with the alternatives, I decided to have a look at the problem with digiKam. It turns out that digiKam just uses the so-called Kipi plugins for picture exporting and other things, and that there was a new version of them that fixed a couple of issues… one of them being a problem with Flickr upload. The package is not available in Debian unstable because we're currently in freeze (unfortunately, that means that Lenny will ship without a functional Flickr-uploading Kipi plugin). However, I saw that the new package was actually uploaded to experimental, so I decided to give it a try. Not only does it work like a charm, but the new version 1.6 reworks the Flickr export plugin completely, and now it's much nicer. So I'm happy now, back to digiKam with a working Flickr export o/. To install it yourself, make sure that you have this line in your /etc/apt/sources.list:

    deb http://ftp.de.debian.org/debian/ experimental main non-free contrib

Then, update your available package list and install kipi-plugins from experimental, like this:

    sudo aptitude update && sudo aptitude -t experimental install kipi-plugins
That should do it.
-
The shoemaker's son always goes barefoot
Oct 21, 2008
I admit it. I'm a terrible developer. I write code, and sometimes I even write tests.
But. I. don’t. test. my. programs.
By hand, that is. And sometimes (usually) the coverage is not enough, and I end up making embarrassing mistakes. It usually happens outside of work, although at work I also have my share. The last one was with the Debian package dhelp where, trying to fix an issue before Lenny is released, I ended up making it even worse. The story goes like this:
There was some problem with the indexing of documents on installation/upgrade (namely, it would take ages for most people upgrading to Lenny, and they would think the upgrade process had hung). So, I go and change the indexing code so it ignores documents on installation/upgrade. Also, as suggested by someone, I created some small example utility to reindex documentation for certain packages. I test installation, upgrades, upgrade of the dhelp package itself, the utility, searching for keywords before and after all that… and everything worked.
Only that I made a typo. A typo that would make all indexing be ignored (except for the example utility, because it was a bit lower level). And I didn't realise, because it “only” broke some cronjob, a completely different part of the package. And it happens that the cronjob reindexed everything weekly, to make sure that you had reasonably up-to-date search indices. And it also happens that, given that the documentation reindexing was being ignored on package installation/upgrade, the weekly total reindex process was the only thing that could provide the user with indexed documentation. But I screwed up. Oh well.
Someone filed a bug yesterday, and I fixed it more or less right away. But this time I spent a couple of hours thinking of test paths and ways to make it fail, and actually doing all that testing. Thanks to that, I found some potential bug in the example utility, which I fixed just in case. So hopefully everything is fine now, if I can convince the Release Masters to allow the new, less broken update to dhelp to be accepted for Lenny.
I think I need personal QA. Anyone up to the task?
-
GPG confusion
Sep 22, 2008
Today I was playing with GnuPG, trying to add a couple of public keys to an “external” keyring (some random file, not my own keyring). Why, you ask? Well, I was preparing some Debian package containing GPG keys for APT repository signing (like debian-archive-keyring and such).
The point is, I was really confused for quite a bit because, after reading the gpg manpage, I was trying things like:

    gpg --no-default-keyring --keyring keys.gpg --import ...   # Wrong!
But that wouldn't add anything to keys.gpg, which I swear I had in the current directory. After a lot of wondering, I realised that gpg interprets paths for keyrings as relative to… ~/.gnupg, not the current directory. I guess it's because of security reasons, but I find it really confusing.
The lesson learned: always use
--keyring ./keys.gpg
or, better, never use keys.gpg as the filename for external keyrings, but something more explicit and “non-standard” like my-archive-keyring.gpg
or whatever. -
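In other words: a keyring path that contains a slash is taken as-is, so something like this behaves as you would expect (key.asc being some hypothetical key file to import):

    # The "./" makes gpg treat the keyring path as relative to the
    # current directory instead of ~/.gnupg
    gpg --no-default-keyring --keyring ./my-archive-keyring.gpg --import key.asc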
Linux video editing and YouTube annotations
Jul 23, 2008
In my recent trip to Copenhagen, I recorded a small video of the subway (it's really cool, because it's completely automatic: it doesn't have drivers or anything). I wanted to edit the video to remove people that were reflected in the window, so I wondered if I could do that on Linux. I imagined it wouldn't be trivial, but it was more frustrating than I thought. Maybe I'm too old for this.
The first thing I tried was looking in APT's cache for “video editing”. The most promising result was kino. I had tried that a couple of times some time ago, and I never got it to work, but I figured I would try again. Unfortunately, same result: I just can't figure out how to import my videos. Maybe I'm just hitting the wrong button or whatever, but it's really frustrating.
The second thing was having a look on the internet. I found the (dead and being rewritten?) Cinelerra, as always, and I didn't feel like installing the old version from source only to lose my time and not get it to work, so I just ignored it. Maybe they had it in debian-multimedia and it wouldn't have been a tough install after all. Anyway.
Next thing, I found some program called openmovieeditor. This one apparently worked, but I couldn’t figure out how to crop the image (or almost any other thing for that matter).
Next, some neat program written in Python, called pitivi. When I tried to run it, though, it just said Error: Icon 'misc' not present in theme on the console and died. I later figured out that I had to install gnome-icon-theme for it to work (yeah, Debian maintainer's fault). It's funny, because on the webpage it says that it has some “advanced view” that you can access via the “View” menu… but I couldn't find it. My menu only had one entry: “Fullscreen”. Great.
Oh, wait, there's a
gimp-gap
. I could just import my animation in Gimp, crop the frames, and convert again to video. Easier said than done. I needed some programs that I didn't have, and I wasn't sure if they were so easy/quick/clean to install (sure, I could have exported to a GIF animation and probably converted to video, I just didn't want to lose so much color quality in the GIF step). Forget it for now. At least I had the images, so if I could just turn them into a movie…
So, I started wondering if, given that I had decided to just crop, and especially now that I had a lot of images that were the frames, maybe I could just use some command-line tool or something. So I found this tiny little program,
images2mpg
. Long story short, after installing some dependencies from source (which gave compilation errors, but luckily I could compile only the binaries I really needed), that program was completely useless and didn't even do what I wanted (it wanted at least one second between images, but I didn't want a slideshow, just a normal movie made from the frames). It looks so simple, and it's so buggy. Gah.
So I started wondering if I could just crop with mplayer… Hmmm… after a couple of problems (like documented switches that were not there and other crap), I ended up with this command line:
    mencoder -vf crop=320:200:0:40 MVI_2160.AVI -ovc lavc -nosound -o metro-crop.avi
That was reasonably quick and easy but it was so frustrating after all that lost time.
In any case, I ended up with the video I wanted, so I went to YouTube to upload it. When uploading, I realised that there was some option I had never seen: annotations.
YouTube annotations are really cool. They are like the notes on Flickr, but on a video :-D Actually I kind of wanted to make a note like that on this video, to show the automatic doors on the Metro station, so I was really happy to see that I could actually do it. And the interface is really easy to use and very clear. I really like it! You can see the result here.
EDIT: WTF? The annotations don't appear on the embedded videos? You'll have to go to the video page to see them, then…
-
Free Software rocks
May 13, 2008
I just read on Aaron Seigo's blog a very nice message from a user that proves that free software is making a difference in many areas, even in some that we don't usually think about. Some quotes:
> I cant tell you how much I appreciate the work you all have done. Its a work of art. If I could thank each and every one of you I would.
>
> You have given her the world to learn and explore.
>
> So if you get frustrated or tired in your work for Open Source/Free Software, just remember that somewhere in Missouri there is a 14 year-old girl named Hope, an A-student who runs on the track team, who is now your biggest fan and one of the newest users of Linux/Ubuntu.
Although I haven't really participated in KDE or Ubuntu (not directly, anyway), I too feel proud of what we, as a community, have created. Also, like that person, I feel very thankful for everything I have learned and received from the free software community.
Cheers guys, you all rock!
-
dhelp goes international
Feb 21, 2008
Some good news on the dhelp front: after talking to some people and exchanging a couple of messages on debian-i18n, dhelp now (hopefully) has full support for UTF-8, and two more translations, the first two apart from the Spanish one: Russian and German. It's really cool seeing some program you have written producing output in Cyrillic ;-)
I haven't uploaded yet, because I found two new strings that weren't in dhelp.pot, but I'll upload soon, when I receive the updates for the translations. The UTF-8 update is related to some improvements in doc-base, so things are looking good on the documentation tools side of Debian, yay! :-)
-
One Year!
Jan 21, 2008
Today I have been working in Oslo for one year! Yay! So far the experience has been quite good, so I'm staying here for some more time still.
I've also been slowly becoming kind of active again in Debian (especially helping with dhelp), although I admit I'm not very active in any other software project (Haberdasher feels kind of abandoned, because I don't have any urge for new features). Hopefully that will change…
-
Big changes in dhelp
Nov 15, 2007
As I said earlier, now the fun stuff begins :-) I have been working with dhelp these days, and there are a couple of things I have changed already:
- I have dropped support for the dhelp-specific .dhelp files. Now I just use the doc-base information directly (until now, doc-base had to convert its own format to dhelp, which was a bad thing for several reasons, one of them losing important information in the process).
- I have changed the indexing code so it now indexes the actual documentation content, instead of the documentation directory generated by dhelp.
- I have rewritten most of the HTML used in the searches and in the documentation directory so it's nicer and easier to modify (e.g. no more <font> or similar obsolete tags).
While working on the indexing changes, I have been playing with swish++, an indexing engine. It seems really useful, although some options are not that obvious, and I haven't been able to use extract++ to extract the text according to the file format (e.g. skipping HTML tags in HTML). I'll keep trying…
Hopefully, the package will be ready for release in a week or so…
-
-
Fun with dhelp
Nov 9, 2007
I finally uploaded dhelp to unstable, and everything went almost surprisingly well. The only bugs reported so far are #448211 and #447789. The first one was a silly mistake of mine, in some translation files that aren't even being used now (that will change in the near future). The second was a bug exposed by dhelp, but actually in another package (libcommandline-ruby, which is funnily enough also maintained by me, and it's already patched and pending upload).
So, now that the package is uploaded and we're using a sane implementation for
dhelp_parse
, I can start doing fun stuff. Right now I'm mostly fixing more bugs, but I'm also implementing new features and talking to the doc-base
maintainer, to improve the integration between both. -
Dhelp's new release
Oct 22, 2007
Dhelp's new release is coming along nicely. In the last days I have fixed a couple of bugs in dhelp_parse's rewrite, and I think it's now ready for upload. The new package closes 28 bugs, which is more than half the currently open bugs for the package.
I have warned the current maintainer and the debian-doc mailing list, so I hope to upload the new version in a couple of days…
Dhelp strikes back
Oct 19, 2007
In the last days I have gone back to working on dhelp, a Debian package for documentation indexing and search. Months ago I had started rewriting dhelp_parse, the only program in the suite written in C, in Ruby.
The rewrite was almost done, but the program wasn't tested much (some modules had unit tests, but the program itself didn't), so I found a couple of big bugs easily :-D Now it looks better, so hopefully I'll be done soon, and I'll upload the new package to Debian so people can start testing it.