Sunday, 30 November 2008

Using Konqueror with Gmail's full-featured version

When I'm on Windows XP, I have my standard Gmail address open in Firefox (plus 3 other Google Apps Gmail accounts), and a developer Gmail address (which is subscribed to many mailing lists).

Now that I'm on Kubuntu (at home), I needed an extra browser to do the same, and it's obviously Konqueror... but the sad part is that all you get with Gmail is the poor plain-HTML interface.

So I switched the user agent to Firefox, but got a blank page.

I tried with Safari (as it uses WebKit, which is derived from Konqueror's KHTML rendering engine) and it works... except that I need to use a wheel (middle) click to make a regular click. Everything else works fine (except you don't get rich text editing, but maybe you don't have it on Safari either...)

Tuesday, 14 October 2008

Switching from CVS to the Bazaar Version Control System

(note: this post will be improved soon)

I'm moving my server to a new machine, as the existing one has capacitor leak issues on the motherboard...
I'm switching from SME Server to Ubuntu Server.
And as I was almost done migrating my CVS repository to the new server, I asked myself: what about SVN? And Bazaar, which I'd read about on the MySQL developer site? And Git, the VCS used for the Linux kernel, made by Linus Torvalds himself (Git and the Kernel)?

Here is a site that helped me get an idea of the features of each VCS:

I made a try with Git, but its Eclipse plugin is poor and its development seems to have stalled, so I chose to try Bazaar.

And here are the links that answered my main concerns:

I also mainly chose Bazaar because I already know CVS quite well and have made some tries with SVN, but didn't know Bazaar at all (and I'm curious).

I'd really like to thank the people on the Bazaar mailing list for their help.

I started by installing the Eclipse plugin, to see if an SSH connection was possible.

Eclipse plugin install

I installed it by following the install instructions, which say:
  1. Install bazaar where you're running eclipse
  2. Install bazaar xml output plugin
  3. Install the Eclipse plugin

I downloaded the latest available version here (Windows version for now):

I checked the MD5 sum with my PortableApps WinMD5Sum,

ran the installer, and chose the install path.

Next I downloaded the bzr-xmloutput Windows plugin installer, checked its MD5 sum, ran the installer, and used the default directory.

Then I installed the Eclipse plugin: Help->Software Update, switch to the "Available Software" tab, click the "Add Site" button, and paste the plugin's update site URL.

Click OK, expand the newly added site, tick "Eclipse plugin for Bazaar", and click the Install button in the upper right corner.

Once it was installed, I set up the server side on my new Ubuntu server for the CVS-to-Bazaar conversion.

CVS to Bazaar conversion

In order to have a fairly recent release of Bazaar on Ubuntu, you have to modify the APT sources list, or you'll only get version 1.3.0 of Bazaar, whereas version 1.8.0 is about to be released.

sudo vi /etc/apt/sources.list

Add the following two lines to the file :
deb hardy main
deb-src hardy main

And then run :
sudo apt-get update
sudo apt-get install subversion bzr bzrtools
mkdir -p ~/.bazaar/plugins
cd ~/.bazaar/plugins
bzr branch lp:bzr-fastimport fastimport #(1)
bzr checkout lp:bzr-cvsps-import cvsps_import #(2)

(1) is for conversion with cvs2svn
(2) is for conversion with cvsps-import

(Subversion is needed to fetch one of the conversion tools)

Next, you need to tell Bazaar who you are:

bzr whoami 'Thomas <>'

Now, here is the conversion part.

As I'm soooooo lucky, I had a filename encoding issue: a few filenames happened to be encoded in a non-UTF-8/non-ASCII encoding, and this was a really big problem for Bazaar's conversion tools.
So I tried almost every conversion solution ;)

Unfortunately, the conversion instructions are a bit laconic. Thanks to the Bazaar mailing list and its large and active community, I was able to try each solution and finally got something working despite my encoding issue (it's hard being French, I tell you ;)
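If you want to check up front whether your repository has this problem, a short scan for non-UTF-8 filenames can save you a failed conversion. This is a sketch I would use today, not part of any of the tools above:

```python
import os

def non_utf8_names(repo_root):
    """Return paths under repo_root whose names are not valid UTF-8."""
    bad = []
    # Walk the tree with byte paths so undecodable names survive intact.
    for dirpath, dirnames, filenames in os.walk(os.fsencode(repo_root)):
        for name in dirnames + filenames:
            try:
                name.decode('utf-8')
            except UnicodeDecodeError:
                bad.append(os.path.join(dirpath, name))
    return bad

if __name__ == '__main__':
    for path in non_utf8_names('.'):
        print(path.decode('utf-8', 'replace'))
```

Run it against a copy of your CVS repository; anything it prints is a file the converters may choke on.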

CVS2SVN conversion tool

This tool seems to be the most accurate according to what I've read on the mailing list (although it's the one that crashed on my filename encoding issue).
There's no mistake: it's "CVS2SVN", not "CVS2BZR" or anything similar.

This tool was initially developed for CVS-to-SVN conversion, but it evolved to output a file format understandable by Git (yes, Git...), and the Bazaar conversion tool understands it as well ;)

Get CVS2SVN tool

The Ubuntu Hardy Heron APT package is also old, so you should get the tool directly from its repository:

mkdir ~/temp
cd ~/temp
svn co --username=guest cvs2svn-trunk

Simply press Enter when asked for a password.

Now get your CVS repository backup file (a tar jcf of the repository directory; if you can't get one, try another solution).

mkdir -p temp/cvsrepo
cd temp/cvsrepo
scp root@ .
tar jxf 20081004_ibay_cvs.tar.bz2
cd ..

Now your CVS repository is at ~/temp/cvsrepo, and one of its subdirectories must be CVSROOT.
The conversion tool is at ~/temp/cvs2svn.
The current working directory is ~/temp.

Now we need to get some sample configuration files.
I won't give away my own configuration files because the tool maintainer says on his site that this configuration file format changes often.

cp cvs2svn/cvs2svn-trunk/cvs2svn-example.options .
cp cvs2svn/cvs2svn-trunk/test-data/main-cvsrepos/cvs2svn-git.options .

In the cvs2svn-git.options file, there's the path to the CVS repository to change:
look around line 122, where you should see: r'test-data/main-cvsrepos',

and change it to the relative path of your CVS repo.
As my working directory will be ~/temp and the CVS repo is in ~/temp/cvsrepo, I just put 'cvsrepo':

# Now add a project to be converted to git:
# The path to the part of the CVS repository (*not* a CVS working
# copy) that should be converted. This may be a subdirectory
# (i.e., a module) within a larger CVS repository.

# See cvs2svn-example.options for more documentation.

Next I had to edit cvs2svn-example.options,
because my commit comments were in Latin-1 encoding, not UTF-8; so, to get rid of the conversion warnings:

# How to convert author names, log messages, and filenames to unicode.
# The first argument to CVSTextDecoder is a list of encoders that are
# tried in order in 'strict' mode until one of them succeeds. If none
# of those succeeds, then fallback_encoder is used in lossy 'replace'
# mode (if it is specified). Setting a fallback encoder ensures that
# the encoder always succeeds, but it can cause information loss.
ctx.cvs_author_decoder = CVSTextDecoder(['latin1'])
ctx.cvs_log_decoder = CVSTextDecoder(['latin1'])
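What such a decoder cascade does can be sketched like this (an illustration of the "strict list, then lossy fallback" behaviour described in the comment above, not the cvs2svn code itself):

```python
def decode_cascade(raw, encodings, fallback_encoding=None):
    """Try each encoding strictly, in order; then fall back lossily."""
    for enc in encodings:
        try:
            return raw.decode(enc)
        except UnicodeDecodeError:
            pass
    if fallback_encoding is not None:
        # 'replace' mode never fails, but it can lose information.
        return raw.decode(fallback_encoding, 'replace')
    raise ValueError('none of the encodings matched')

# A Latin-1 commit message that is not valid UTF-8:
print(decode_cascade(b'probl\xe8me r\xe9solu', ['utf-8', 'latin-1']))
# prints: problème résolu
```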

Then run the tool :

./cvs2svn/cvs2svn-trunk/cvs2svn --options cvs2svn-git.options

If all goes well, you should see a ~/temp/cvs2svn-tmp directory with 2 files in it :

thomas@home:~/temp/cvs2svn-tmp$ ll
total 238720
drwxr-xr-x 2 thomas thomas 4096 2008-10-07 17:48 .
drwxr-xr-x 11 thomas thomas 4096 2008-10-13 13:40 ..
-rw-r--r-- 1 thomas thomas 241749759 2008-10-07 17:47 git-blob.dat
-rw-r--r-- 1 thomas thomas 2444025 2008-10-07 17:48 git-dump.dat

That's what the Git and Bazaar conversion tools understand.

Here is how you convert to Bazaar :
mkdir ~/temp/bzr
cd ~/temp/bzr
bzr init-repo .
cat ../cvs2svn-tmp/git-blob.dat ../cvs2svn-tmp/git-dump.dat | bzr fast-import -

After this command has executed, your directory should be filled with your files, with the revision history stored in the .bzr directory.

This was the clue for the instructions found on the fast-import page:

bzr init-repo .
front-end | bzr fast-import -

Too easy ;)

cvsps-import

Install some needed tools (you still need the bzr cvsps-import plugin (2)):

sudo apt-get install cvs cvsps rcs

(Note: you can do without rcs by using --use-cvs, but it's slower)

Your cvs repo is here : ~/temp/cvsrepo

mkdir ~/temp/bazaar
cd ~/temp
bzr cvsps-import cvsrepo . bazaar

It should convert your entire repository (~/temp/cvsrepo) into the ~/temp/bazaar directory.
If you want to convert only one module, use the module name instead of '.' in the command.


The last tool I tried to convert my CVS repo to Bazaar is Tailor.
It's the tool that succeeded in converting my repository despite the filename encoding issue.
Instead of crashing on the problem files (only 2 as far as I know), Tailor traps the exception and continues the conversion.
The two files I lost were doc files with nothing really important.
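The trap-and-continue behaviour that saved my conversion boils down to this pattern (an illustrative sketch, not Tailor's actual code):

```python
def convert_all(names, convert):
    """Convert every entry; collect failures instead of aborting the run."""
    converted, skipped = [], []
    for name in names:
        try:
            converted.append(convert(name))
        except UnicodeDecodeError:
            # A badly encoded filename: report it and keep going.
            skipped.append(name)
    return converted, skipped
```

Losing the two unconvertible files is the price; the rest of the history survives.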

Thanks to Colin D Bennett (who apparently works on GRUB), who gave me his configuration files for Tailor, I tried Tailor and succeeded. (I was about to give up my project's history and import it as a new project.)

Tailor connects to your CVS repository and does not need direct access to the repository files.

On my Linux box I use CVS through SSH; I'm not using a pserver.

Here is the content of one of my tailor configuration files :

thomas@home:~/temp$ cat crf-irp.tailor
[crf-irp]
patch-name-format = None
source = cvs:crf-irp
target = bzr:crf-irp

[cvs:crf-irp]
repository =
module = crf-irp
encoding = iso-8859-1

[bzr:crf-irp]


The first section name, [crf-irp], is a name you pick.
Next, the source and target sections are named after the source & target values.
In the source section you put your CVS configuration;
the target section doesn't need anything more (bzr: seems to be enough).

I advise you to use public-key SSH authentication, as Tailor will otherwise ask you for your password several times.

Repeat for each module of your repository.

Publication of my (newly converted to Bazaar) project to Launchpad

Before Bazaar, I had set up on my own server a CVS server plus JIRA/Confluence/FishEye for project management.
As I've made no release yet (it should happen soon), JIRA/Confluence wasn't really used.

Now that I've migrated to Bazaar, I could set up a Bazaar server. But I lost a huge amount of time on the Bazaar conversion (because of this ***king filename encoding issue), and I found it would be more efficient and more reliable to use the Launchpad platform.

So I created my project on Launchpad, updated my profile with my public key, and loaded my private key into Pageant,
then ran the following commands from Windows (my laptop unfortunately runs Vista):

bzr push bzr+ssh://

bzr push bzr+ssh://

bzr push bzr+ssh://

It took some time, as my repo was about 300 MB.

Next Blog post : How to import a project with the eclipse plugin.

Monday, 13 October 2008

Convert your CVS repository to GIT

While I was on sick leave these last two weeks, I looked for a way to get away from the venerable CVS and its painful merge capabilities (also, I was curious to try something else).

Here is how I converted my CVS repository to git :

Install GIT :
sudo apt-get install git-core git-cvs

Install cvs for the conversion :

sudo apt-get install cvs cvsps

Get the CVS repository backup (a tar jcf of the CVS root dir):
mkdir ~/temp/cvs2git
cd ~/temp/cvs2git
scp root@ .
tar jxf 20081006_ibay_cvs.tar.bz2

Launch the conversion :

export CVSROOT=/home/thomas/temp/cvs2git/cvs/files

git cvsimport -C /home/thomas/temp/cvs2gitOutput/crf-irp           crf-irp
git cvsimport -C /home/thomas/temp/cvs2gitOutput/crf-irp-model     crf-irp-model
git cvsimport -C /home/thomas/temp/cvs2gitOutput/crf-irp-monitor   crf-irp-monitor
git cvsimport -C /home/thomas/temp/cvs2gitOutput/crf-irp-portail   crf-irp-portail
git cvsimport -C /home/thomas/temp/cvs2gitOutput/crf-irp-utilities crf-irp-utilities

It worked perfectly.

There's an Eclipse plug-in for Git; the install site is

But it's not working well enough to be used, and its development seems to have stalled.

That's why I tried Bazaar... but that's for another post.

Sunday, 12 October 2008

mount a NTFS partition on hardware RAID0 controller under linux

My main computer has a major hardware issue (probably the motherboard, maybe the CPU).
I can't reinstall Windows; it crashes in the middle of the installation.

To complicate things, I had installed Windows on a (fake) hardware RAID0. It's fake because the RAID part is handled by the Windows driver. That's why you can still see the hard drives under Linux as /dev/sda, /dev/sdb, etc., even after having set up your RAID0/1/5 array in the BIOS.

As I'm not completely dumb, I had stored no critical data on the RAID array...
But still, I wanted to verify that fact (that I'm not dumb ;) and retrieve some non-critical files, such as game saves (I don't play that much lately... so sad, but that may change once Diablo III or StarCraft II is released).

To get my data back, I used a Linux distribution and its DMRAID capabilities.

I downloaded and burned the Gentoo Live CD.
BTW, it's a really great distribution; I loved installing it from scratch and compiling everything from source.

I booted onto it.
At the GRUB prompt I typed the following to get DMRAID support:

boot: gentoo dodmraid

Once the Live CD started up (when it managed to, given my hardware issues) with a nice 1920x1200 interface, I opened a shell and did the following:

sudo -s                                   #(0)
alias ll="ls -la"                         #this one should be always set by default !
mkdir /a                                  #(1)
ll /dev/mapper                            #(2)
total 0
drwxr-xr-x  2 root root    180 Oct 13 00:18 .
drwxr-xr-x 17 root root   4320 Oct 13 00:18 ..
lrwxrwxrwx  1 root root     16 Oct 13 00:18 control -> ../device-mapper
brw-rw----  1 root disk 253, 2 Oct 13 00:18 nvidia_dacaiefa
brw-rw----  1 root disk 253, 5 Oct 13 00:18 nvidia_dacaiefa1
brw-rw----  1 root disk 253, 1 Oct 13 00:18 pdc_bjahfddb
brw-rw----  1 root disk 253, 4 Oct 13 00:18 pdc_bjahfddb1
brw-rw----  1 root disk 253, 0 Oct 13 00:18 pdc_caafigga
brw-rw----  1 root disk 253, 3 Oct 13 00:18 pdc_caafigga1

mount /dev/mapper/nvidia_dacaiefa1 /a     #(3)
cd /a

  • First: get root access.
  • Next: create a simple directory at the root (remember it's a Live CD, so the modification is in memory). I needed the shortest path possible so as not to lose time typing long paths, as my system was really unstable.
  • Next: find the name of the device representing the fake RAID. For me it was nvidia_*: nvidia_dacaiefa is the (virtual RAID0) hard drive, and nvidia_dacaiefa1 is its first and only NTFS partition.
  • Next: mount the NTFS partition into the filesystem.
Finally, I used commands such as:
tar jcf /tmp/BunchOfFiles.tar.bz2 path1 path2
scp /tmp/BunchOfFiles.tar.bz2

to get my data out of this sick computer!

Thursday, 9 October 2008

What I always install/change on my Ubuntu servers

I'm installing several Ubuntu servers for various purposes (home server, secondary DNS server, new primary DNS server, web servers, development servers).

For each one, I write the setup documentation (stored in a Google Apps Premium account, which is by the way a must-have for all the collaborative stuff).

Here is what I do on all servers, no matter their final use:

useful software

sudo -s # switch to the root account, as at setup time I do a lot of root stuff
apt-get update # update the apt package list
apt-get upgrade # upgrade all packages that are installed by default
apt-get install vim-full; # a lot of dependencies come along, but vim is much better after (needs some config modification)
apt-get install sysstat; # to monitor hard drive activity
apt-get install whois; # contains the mkpasswd command
apt-get install slocate; # slocate, to find files
apt-get install nmap; # nmap, a port scanner which helps to check that your firewall is properly set up
apt-get install debian-helper-scripts; # installs the 'service' command (e.g. sudo service mysqld restart instead of /etc/init.d/mysqld restart; I'm used to the service command)
apt-get install ntp ntp-doc; # to keep the clock up to date (all computers tend to lose time because of interrupts)
apt-get install lynx # a text browser that sometimes helps to get things done
apt-get install unzip # unzip
apt-get install screen # detach a shell, log out, log back in, and retrieve your work
apt-get install ndisc6 tcptraceroute # network diagnostic tools (like tcptraceroute, tcptraceroute6)

Colored prompt and aliases

vi .bashrc and uncomment:

if [ -f ~/.bash_aliases ]; then
    . ~/.bash_aliases
fi

vi .bash_aliases

and put :

alias ps="ps eaxjf" # ps as a tree, all system processes

# enable color support of ls and also add handy aliases
if [ "$TERM" != "dumb" ] && [ -x /usr/bin/dircolors ]; then
    eval "`dircolors -b`"
    alias ls='ls --color=auto'
    alias dir='ls --color=auto --format=vertical'
    alias vdir='ls --color=auto --format=long'
fi

alias grep='grep --color=auto'
alias fgrep='fgrep --color=auto'
alias egrep='egrep --color=auto'

alias ll='ls -la'
alias cd..="cd .." # a typo I make often
alias uncd='cd "$OLDPWD"' # go to the previous directory (which can be different from cd .. ;)

Change the default home directory permission :

sudo vi /etc/adduser.conf

change DIR_MODE to :


Change permission on the root directory

cd /
sudo chmod 700 root

Disabling CTRL+ALT+DEL

sudo vi /etc/event.d/control-alt-delete

comment out the line:
#exec /sbin/shutdown -r now "Control-Alt-Delete pressed"

Configuration of NTP

Choose the closest NTP server to your machine if you know one, or your country's NTP server from the pool:

sudo vi /etc/ntp.conf

# You do need to talk to an NTP server or two (or three).
server # the triple-play modem from my ISP (Free) is also an NTP server
server # default server
server # my country's NTP server from the pool

Firewall configuration

IPv6 activation

sudo vi /etc/default/ufw

change the IPV6 line to:

IPV6=yes

Home server (ssh/http/samba)

sudo ufw logging off
sudo ufw allow ssh
sudo ufw allow http
sudo ufw allow 137/udp
sudo ufw allow 138/udp
sudo ufw allow 139/tcp
sudo ufw allow 445/tcp
sudo ufw disable;sudo ufw enable;

Production name server (DNS/HTTP/SSH for some hosts)

sudo ufw logging on

sudo ufw allow proto tcp from XX.XX.XX.XX to any port 22
sudo ufw allow proto tcp from XX.XX.XX.XX to any port 22
sudo ufw deny ssh # deny SSH for all host except the two above

sudo ufw allow domain # DNS port 53
sudo ufw allow http # http Port
sudo ufw disable;sudo ufw enable;

To allow an additional host to connect with ssh :

sudo ufw delete deny ssh                             #delete the rule : deny SSH for all
sudo ufw allow proto tcp from X.X.X.X to any port 22 #Add the IP for the additional host
sudo ufw deny ssh #deny ssh for all others (ie deny for all except the 2 existing hosts + the new one)
sudo ufw disable;sudo ufw enable; #reload the firewall

Firewall state example :

thomas@ns1:~$ sudo ufw status
Firewall loaded

To Action From
-- ------ ----
53:tcp ALLOW Anywhere
53:udp ALLOW Anywhere
80:tcp ALLOW Anywhere
22:tcp ALLOW X.X.X.X
22:tcp ALLOW Y.Y.Y.Y
22:tcp DENY Anywhere
22:udp DENY Anywhere
53:tcp ALLOW Anywhere (v6)
53:udp ALLOW Anywhere (v6)
80:tcp ALLOW Anywhere (v6)
22:tcp DENY Anywhere (v6)
22:udp DENY Anywhere (v6)

VIM configuration

(I find the mouse activation too disturbing... and anyway, the mouse slows you down more than being drunk does ;)

sudo vim /etc/vim/vimrc

" All system-wide defaults are set in $VIMRUNTIME/debian.vim (usually just
" /usr/share/vim/vimcurrent/debian.vim) and sourced by the call to :runtime
" you can find below. If you wish to change any of those settings, you should
" do it in this file (/etc/vim/vimrc), since debian.vim will be overwritten
" everytime an upgrade of the vim packages is performed. It is recommended to
" make changes after sourcing debian.vim since it alters the value of the
" 'compatible' option.

" This line should not be removed as it ensures that various options are
" properly set to work with the Vim-related packages available in Debian.
runtime! debian.vim

" Uncomment the next line to make Vim more Vi-compatible
" NOTE: debian.vim sets 'nocompatible'. Setting 'compatible' changes numerous
" options, so any other options should be set AFTER setting 'compatible'.
"set compatible

" Vim5 and later versions support syntax highlighting. Uncommenting the next
" line enables syntax highlighting by default.
syntax on

" If using a dark background within the editing area and syntax highlighting
" turn on this option as well
set background=dark

" Uncomment the following to have Vim jump to the last position when
" reopening a file
if has("autocmd")
  au BufReadPost * if line("'\"") > 0 && line("'\"") <= line("$")
    \| exe "normal g'\"" | endif
endif

" Uncomment the following to have Vim load indentation rules according to the
" detected filetype. Per default Debian Vim only load filetype specific
" plugins.
if has("autocmd")
  filetype indent on
endif

" The following are commented out as they cause vim to behave a lot
" differently from regular Vi. They are highly recommended though.
set showcmd        " Show (partial) command in status line.
set showmatch      " Show matching brackets.
set ignorecase     " Do case insensitive matching
set smartcase      " Do smart case matching
set incsearch      " Incremental search
set autowrite      " Automatically save before commands like :next and :make
set hidden         " Hide buffers when they are abandoned
"set mouse=a       " Enable mouse usage (all modes) in terminals

" Source a global configuration file if available
" XXX Deprecated, please move your changes here in /etc/vim/vimrc
if filereadable("/etc/vim/vimrc.local")
  source /etc/vim/vimrc.local
endif

Log Rotate config

Change to 52 weeks of log retention, as it's the new law in France,
and compress the rotated logs.

sudo vim /etc/logrotate.conf

# see "man logrotate" for details
# rotate log files weekly
weekly

# keep 52 weeks worth of backlogs
rotate 52

# create new (empty) log files after rotating old ones
create

# uncomment this if you want your log files compressed
compress

# packages drop log rotation information into this directory
include /etc/logrotate.d

# no packages own wtmp, or btmp -- we'll rotate them here
/var/log/wtmp {
    create 0664 root utmp
    rotate 1
}

/var/log/btmp {
    create 0664 root utmp
    rotate 1
}

# system-specific logs may be configured here

Wednesday, 8 October 2008

Setting up UPS link with an ubuntu server

I've bought a new server (built from parts) to replace the old one, which had capacitor leak issues and died †

And with it I bought a new UPS (Uninterruptible Power Supply) to replace the old one, which no longer works either...

I googled around to set this UPS up with my Linux server (Ubuntu Server), and here is the result of my search and work:

When the UPS detects a power outage, it waits some time
(configured depending on the capacity of the UPS and the
power consumption of the server), then sends a mail
with the UPS state just before the shutdown,
and then shuts the server down.

If the power comes back during that wait,
the shutdown is cancelled.

The UPS state is useful, as it shows you the health of your UPS when it goes on battery.

  1. If the charge is really low when the server shuts down, you might want to reduce the time it waits on battery before shutting the server down, or replace the battery (or the UPS).

  2. If the charge is really close to 100%, you might want to increase the time on battery.

The scripts that send mail are located in my home directory, /home/thomas.
My UPS is an MGE Ellipse 750VA connected through USB
(a serial cable is also available).

NUT install & configure

NUT is the piece of software that communicates with the UPS and shuts your server down properly.
I've read that MGE is involved in NUT development, which is worth mentioning, as it's not so often that a company does that.

NUT's documentation is for nuts ;) (easy joke). Well, it's not that easy to get into.
Instead, I read the following French tutorial written by Olivier Van Hoof, and this one also helped (still in French):

In all the configuration file/script excerpts, I use colours to show the links between configuration files. So if you want to change one value to match your config, change all the strings with the same colour.

Let's install NUT

a piece of cake :
sudo apt-get install nut

Edit configuration files

Note: you can download a copy of my configuration files here:

Configuration files are in /etc/nut.
No configuration files are created by default. If you want examples, you can find some in /usr/share/doc/nut/examples/

First, we need to tell the system to start NUT's daemons when the system starts:

sudo vi /etc/default/nut

change the first two 'no' to 'yes' to do so:

# start upsd
START_UPSD="yes"

# start upsmon
START_UPSMON="yes"

Now lets declare our UPS :

sudo vi /etc/nut/ups.conf
[MGE-Ellipse750]
driver = usbhid-ups
port = auto
desc = "MGE UPS Systems UPS"

Within the brackets you set your UPS name (no spaces allowed).
The most important thing is to find the driver that handles your UPS.
You can find it here:
(beware, this list is for the latest stable version of NUT, which might not be the version installed on your server)

For a USB connection, the port is 'auto'.

If you use a serial cable instead, you should do this :
chmod 0600 /dev/ttyS0
chown nut:nut /dev/ttyS0

and put /dev/ttyS0 instead of auto.

sudo vi /etc/nut/upsd.conf
ACL all
ACL localhost

ACCEPT localhost

This file defines who is able to connect; in my case, only localhost.
I guess this file matters when you have several computers connected to one big UPS.

sudo vi /etc/nut/upsd.users
[thomas]
password = test
allowfrom = localhost
upsmon master

This file defines a user 'thomas' with the password 'test', allowed to connect from localhost and to control the UPS (upsmon master).

sudo vi /etc/nut/upsmon.conf
# define the ups to monitor and the permissions
MONITOR MGE-Ellipse750@localhost 1 thomas test master
# define the shutdown command
SHUTDOWNCMD "/sbin/shutdown -h now"

# launch the upssched program to execute scripts with a delay
NOTIFYCMD /sbin/upssched

# run NOTIFYCMD on these two events
NOTIFYFLAG ONBATT SYSLOG+WALL+EXEC
NOTIFYFLAG ONLINE SYSLOG+WALL+EXEC

This file basically tells NUT what to do on a power outage.
The first instruction defines which UPS is monitored, on which machine, the user/password used to connect, and the type: this is the master server.
The whole line is built from the previous configuration files we created, so if you change them, change this line too.

SHUTDOWNCMD is what will be executed to shut the server down.

NOTIFYCMD is called when the power status changes (OB: on battery, OL: on line).
Here we call /sbin/upssched, which lets us say: OK, if the power is not back within X seconds, do something.
The events that trigger NOTIFYCMD are ONBATT and ONLINE.

sudo vi /etc/nut/upssched.conf
# the script to be executed
CMDSCRIPT /home/thomas/scripts/alertAndShutdown.php

# mandatory fields that must be set before AT commands
PIPEFN /var/run/nut/upssched.pipe
LOCKFN /var/run/nut/upssched.lock

# the timer: 30 sec after the ONBATT (ups on battery) event
AT ONBATT * START-TIMER onbatt 30

# cancel the countdown if power is back
AT ONLINE * CANCEL-TIMER onbatt

In this file we define the script called when the timer expires (we'll look at this script later), plus two technical resources (a pipe and a lock file).
The two last lines define NUT's behaviour for these events, which gives, in plain English:

On an ONBATT event, start a timer called 'onbatt' for 30 seconds; when those 30 seconds have elapsed, CMDSCRIPT is called, unless an ONLINE event is received first.
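The timer behaviour described above can be sketched like this (a Python illustration of the semantics, not NUT code):

```python
import threading

class OutageTimer:
    """Arm a countdown on ONBATT; cancel it if ONLINE arrives in time."""
    def __init__(self, delay_s, cmdscript):
        self.delay_s = delay_s
        self.cmdscript = cmdscript   # stands in for CMDSCRIPT
        self._timer = None

    def onbatt(self):
        # Power lost: start the countdown.
        self._timer = threading.Timer(self.delay_s, self.cmdscript)
        self._timer.start()

    def online(self):
        # Power back: cancel the countdown if it hasn't fired yet.
        if self._timer is not None:
            self._timer.cancel()
            self._timer = None
```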

Now that we've defined our configuration files, we need to secure them :
sudo chown root:nut /etc/nut/*
sudo chmod 640 /etc/nut/*

At first I created a dedicated directory in /var/run named upssched, but on server restart the directory was deleted... so instead I used the existing /var/run/nut directory.
If the directory is not there, you get this error:

Oct  8 19:29:27 home upssched[5324]: Failed to connect to parent and failed to create parent: No such file or directory

Restart the NUT daemon

I recommend opening two consoles, the first one running:

sudo tail -f /var/log/daemon.log

which will display everything appended to the daemon.log file.
It's useful for understanding what's going wrong.

Then restart :

sudo service nut stop; sudo service nut start

which, for me, gave:

the stop :
Oct  8 21:13:48 home upsd[5171]: Signal 15: exiting
Oct 8 21:13:48 home upsmon[5174]: Signal 15: exiting
Oct 8 21:13:48 home upsmon[5173]: upsmon parent: read
Oct 8 21:13:48 home usbhid-ups[5169]: Signal 15: exiting

the start :
Oct  8 21:14:26 home usbhid-ups[5498]: Startup successful
Oct 8 21:14:26 home upsd[5499]: listening on port 3493
Oct 8 21:14:26 home upsd[5499]: Connected to UPS [MGE-Ellipse750]: usbhid-ups-MGE-Ellipse750
Oct 8 21:14:26 home upsd[5500]: Startup successful
Oct 8 21:14:26 home upsmon[5502]: Startup successful
Oct 8 21:14:26 home upsd[5500]: Connection from
Oct 8 21:14:26 home upsd[5500]: Client thomas@ logged into UPS [MGE-Ellipse750]

As I was doing many other things while setting up my UPS, the following may not be exact:
I had to stop and start several times to see things change as I tried (but, as I said, maybe I simply forgot to save the configuration files (many times)).

Test the connection with the UPS

Now that NUT has started successfully, we can check that the UPS is there:

thomas@home:/etc/nut$ upsc MGE-Ellipse750@localhost
battery.charge: 100
battery.charge.low: 30
battery.runtime: 2818
battery.type: PbAc usbhid-ups
driver.parameter.pollfreq: 30
driver.parameter.pollinterval: 2
driver.parameter.port: auto
driver.version: 2.2.1- MGE HID 1.01
driver.version.internal: 0.32
input.transfer.high: 264
input.transfer.low: 184
outlet.0.desc: Main Outlet 1
outlet.0.switchable: no
outlet.1.desc: PowerShare Outlet 1 2
outlet.1.status: on
outlet.1.switchable: no
outlet.2.desc: PowerShare Outlet 2
output.frequency.nominal: 50
output.voltage: 230.0
output.voltage.nominal: 230
ups.beeper.status: enabled
ups.delay.shutdown: -1
ups.delay.start: -10
ups.load: 0
ups.model: Ellipse 750
ups.power.nominal: 750
ups.productid: ffff
ups.serial: BDCJ2303A
ups.status: OL CHRG
ups.vendorid: 0463

If you want to test the SHUTDOWNCMD instruction, here is how to trigger the event that runs the shutdown command
(beware, this will shut down your server if SHUTDOWNCMD is properly set):
/sbin/upsmon -c fsd

This should stop the computer.

Now, about the /home/thomas/scripts/alertAndShutdown.php script.

You can find the set of scripts used here:

Download the file and have a look inside to check there's nothing bad in there (you should always check ;), then:

tar jxf UPS-scripts.tar.bz2

This will create a scripts dir in your home directory.

First: I use PHP as my Linux scripting engine because it's simple yet powerful, I know the language very well, and it's already installed for other needs on my Linux box.
To get it:
sudo apt-get install php5 php5-cli

Before we proceed, we need to add a group.
As I run several scripts as several users (root/thomas/nut, etc.), and some of them contain private data such as passwords (Linux accounts/MySQL, etc.), we need to arrange things so that all the users that run these scripts can read the files, and only them.

To do so, we'll create a group, add all those users to it, and change the file permissions so that the files belong to this group.
The scripts would be better placed in /usr/local or similar, but I personally prefer having them in my home directory on my small personal server (well, maybe I'll move them to /usr/local and create a link; it would be far cleaner ;)

sudo groupadd scriptExecutor
chmod 770 /home/thomas
sudo chown thomas:scriptExecutor /home/thomas
sudo chown -R thomas:scriptExecutor /home/thomas/scripts
sudo chmod 640 /home/thomas/scripts/config/config.php
sudo chmod 770 /home/thomas/scripts/alertAndShutdown.php
sudo chmod 640 /home/thomas/scripts/mail.php
sudo chmod 640 /home/thomas/scripts/lib/*

#thomas is a sudoer, so you need to include the admin group, otherwise you won't be able to sudo again.
sudo usermod -G scriptExecutor,admin thomas
sudo usermod -G scriptExecutor root
sudo usermod -G scriptExecutor nut

Note: if something is wrong in the script permissions (from / down to the script, check permissions and ownership on each directory), you'll get this kind of warning:
Oct  8 01:12:48 home upssched[5355]: exec_cmd(/home/thomas/scripts/alertAndShutdown.php onbatt) returned 126

I use PHPMailer to send mail:

my mail function is based on the Gmail example within the PHPMailer archive.

What's worth noticing in these scripts:

  1. The leading #!/usr/bin/php in alertAndShutdown.php, along with chmod 770, lets you execute the script directly instead of running php alertAndShutdown.php.
  2. The core of the script: build an HTML message with the output of the upsc command.
  3. Write to /var/log/ups.log the fact that a shutdown has been processed by NUT (you can find this information in /var/log/daemon.log, but it contains other logs too...).
  4. And the last command, which is absolutely required, otherwise your server won't stop: "/sbin/upsmon -c fsd", which tells NUT to run the shutdown command.

Test your configuration carefully

To test the script, you have no choice but to unplug your UPS from the wall outlet (not between the server and the UPS, as I did the first time; lack of sleep is terrible).
While you do so, keep a console running "tail -f /var/log/daemon.log".

Make the following test :

  1. Unplug the UPS, wait until the logs show that NUT has noticed it, then plug the UPS back in and check that the timer is cancelled; your server shouldn't stop.

  2. Unplug the UPS, wait more than 30 seconds (the time we set in upssched.conf), and check that you received a mail and that your computer shut down.

  3. Check that your UPS is fully charged (battery.charge). Comment out the last line of alertAndShutdown.php, unplug the UPS, and measure how long your server takes to drain the UPS. Then take 80% of that time and set it, in seconds, in upssched.conf.

  4. Uncomment the last line of alertAndShutdown.php and re-run test 3.

For me it gave :

First test :

Broadcast Message from
(somewhere) at 22:57 ...

UPS MGE-Ellipse750@localhost on battery

Oct 8 22:57:37 home upsmon[6221]: UPS MGE-Ellipse750@localhost on battery
Oct 8 22:57:37 home upssched[6302]: Timer daemon started
Oct 8 22:57:38 home upssched[6302]: New timer: onbatt (30 seconds)

Broadcast Message from
(somewhere) at 22:57 ...

UPS MGE-Ellipse750@localhost on line power

Oct 8 22:57:57 home upsmon[6221]: UPS MGE-Ellipse750@localhost on line power
Oct 8 22:57:57 home upssched[6302]: Cancelling timer: onbatt

Second test :

Broadcast Message from
(somewhere) at 23:01 ...

UPS MGE-Ellipse750@localhost on battery

Oct 8 23:01:57 home upsmon[6221]: UPS MGE-Ellipse750@localhost on battery
Oct 8 23:01:57 home upssched[6315]: Timer daemon started
Oct 8 23:01:58 home upssched[6315]: New timer: onbatt (30 seconds)
Oct 8 23:02:28 home upssched[6315]: Event: onbatt
Oct 8 23:02:29 home upsd[6218]: Connection from
Oct 8 23:02:29 home upsd[6218]: Client on logged out

Broadcast Message from
(somewhere) at 23:02 ...

Executing automatic power-fail shutdown

Broadcast Message from
(somewhere) at 23:02 ...

Auto logout and shutdown proceeding

Oct 8 23:02:31 home upsmon[6221]: Signal 10: User requested FSD
Oct 8 23:02:31 home upsd[6218]: Client thomas@ set FSD on UPS [MGE-Ellipse750]
Oct 8 23:02:31 home upsmon[6221]: Executing automatic power-fail shutdown
Oct 8 23:02:31 home upsmon[6221]: Auto logout and shutdown proceeding

Broadcast message from
(unknown) at 23:02 ...

The system is going down for halt NOW!
Oct 8 23:02:36 home init: tty4 main process (4928) killed by TERM signal
Oct 8 23:02:36 home init: tty5 main process (4929) killed by TERM signal
Oct 8 23:02:36 home init: tty2 main process (4931) killed by TERM signal
Oct 8 23:02:36 home init: tty3 main process (4934) killed by TERM signal
Oct 8 23:02:36 home init: tty6 main process (4935) killed by TERM signal
Oct 8 23:02:36 home init: tty1 main process (5240) killed by TERM signal

and the mail :

Third test :

My UPS kept my server running for 14 minutes, starting from a 92% charge, before reaching the 30% low limit (at which NUT executes the SHUTDOWNCMD anyway).
(So about 15 minutes on a full charge.)
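Turning that measurement into the upssched.conf timer value (80% of the runtime, expressed in seconds) is simple arithmetic:

```shell
# 14 minutes of measured runtime (test 3); keep an 80% safety margin,
# expressed in seconds for the upssched.conf timer.
runtime_minutes=14
timer_seconds=$(( runtime_minutes * 60 * 80 / 100 ))
echo "$timer_seconds"    # prints 672
```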

My server parts were chosen for low power consumption, so the result is not that great.
Details about the server parts can be found here if you're interested:

I'll do some tweaking, like changing the processor speed (if possible), stopping the hard drive when unused, and mounting /var/log on a USB key (so that the hard drives can spin down).

Final thought

  1. Re-check your UPS from time to time, say every 6 months. UPS batteries can age badly.

  2. Of course, all the information above is to be used at your own risk and can certainly be improved in many ways. If you do so, share ;)

Tuesday, 22 July 2008

How to remove a windows update

Here is how to proceed to uninstall an update (here, KB936357):

  1. Insert your Windows XP CD (any version) in your CD-ROM drive, reboot, and use your boot menu or BIOS to choose the CD-ROM drive as the first boot device

  2. Boot on the CD (hit enter to boot on the CD)

  3. If you're using RAID 0 or RAID 1 for your C drive

    1. Hit F6 to specify an additional driver when the dark blue line at the bottom tells you to
    2. Wait while Windows loads all its stuff
    3. Insert your floppy disk with the RAID driver on it
    4. Hit S
    5. Choose every line marked "Required" (by repeating the S step as many times as needed)
    6. Then hit Enter to continue the normal boot

  4. Hit R to repair an existing Windows installation
  5. Choose the Windows home directory (usually c:\windows, but it can differ for enterprise installations or nerds' PCs ;)
  6. Enter the Windows Administrator password
  7. You get a command line; now type the following commands:

cd $NtUninstallKB936357$\spuninst
batch spuninst.txt

For me it printed the following (approximate, translated from French):

File not found
1 file copied
1 file copied
1 file copied

And it gives the prompt back.

I rebooted by typing 'exit', and the update was gone.

Sunday, 6 July 2008

Developing with Google Maps everywhere with the same license key

When you want to use Google Maps on a website, you need to sign up and get a key for the Internet domain name that will use Google Maps.

When you're developing your Google Maps feature, it's not on the production site with the final URL. (At least... I hope it's not ;)
And you may code Google Maps features for several sites, and you often develop all these sites on your computer with a local Apache/PHP or Tomcat server.

So in your HTML (JSP, PHP, ASP, whatever) page you'll include the following code to get access to the Google Maps API:

<!-- google map -->
<script type="text/javascript" src=""> </script>

A simple trick is to request a key for a fake domain here:

Let's say, for "http://toms-projects"

Get the key, put it in your HTML code, and then edit your hosts file
(/etc/hosts on Linux, c:\windows\system32\drivers\etc\hosts on Windows)

and edit the localhost line to: localhost toms-projects

As all systems, by default, consult the hosts file before making a DNS query,

toms-projects will resolve to on your machine, and thus point to your local website.

And Google sees the API key used with the toms-projects domain.

And everything works.

Even better,

you may register the key for the final production domain directly,

and edit your /etc/hosts in the same way, adding the real domain name to the localhost line.

It will work as well, but beware: as long as your /etc/hosts file is configured that way, that domain will be your computer!

Spring Transaction support with JDBCTemplate and Java 1.5 Annotation

Here is a quick tutorial to enable transactions in a Spring project using JdbcTemplate and Java 1.5 annotations.

I'm using Spring 2.5.2 and Java 1.6.
(It should work with Spring 1.2 and Java 1.5 too, according to this blog post, which also gives some useful information for people new to transactions with Spring.)

For example, as simply as this:

@Transactional(propagation=Propagation.REQUIRED, rollbackFor=Exception.class)
public Intervention createEmptyIntervention(int idRegulation) throws Exception {
    // ...JDBC code

Let's begin at the beginning, with applicationContext.xml:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns=""

    <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="location">
            <value> <!-- path to the .properties file defining the ${jdbc.*} values --> </value>

    <import resource="spring/cache.xml" />
    <import resource="spring/dao.xml" />
    <import resource="spring/scheduler.xml" />
    <import resource="spring/dwr.xml" />
    <import resource="spring/services.xml" />
    <import resource="spring/servlet.xml" />

    <bean id="SecurityService" class="" autowire="constructor"/>


Here, among other things, I include the dao.xml and services.xml bean definition files.
dao.xml defines the JDBC-related beans, and services.xml some beans that use them.

Also, I use XML namespaces and XSD instead of a DTD, because it will be needed later on.

Basically, the <beans ...> declaration above replaces the old DTD-based header:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE beans PUBLIC "-//SPRING//DTD BEAN 2.0//EN" "">

Here is dao.xml:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns=""

    <bean id="crfIrpDataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
        <property name="driverClassName"><value>${jdbc.driver}</value></property>
        <property name="url"><value>${jdbc.url}</value></property>
        <property name="username"><value>${jdbc.username}</value></property>
        <property name="password"><value>${jdbc.password}</value></property>

    <bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
        <property name="dataSource" ref="crfIrpDataSource"/>

    <bean id="txManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
        <property name="dataSource" ref="crfIrpDataSource"/>

    <tx:annotation-driven transaction-manager="txManager"/>



Besides the usual dataSource and JdbcTemplate, I add a transaction manager and the tx:annotation-driven element.
The latter does the job of examining your beans for annotations.
The transaction manager is there to actually manage the transactions where tx:annotation-driven says so.
Notice that one namespace and two XSD locations have been added for the tx namespace prefix.

To be complete, here is services.xml:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns=""

    <bean id="interventionService" class="" autowire="constructor" />



And finally, the code of the service:


import java.sql.Types;
import java.util.Date;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

import fr.croixrouge.irp.model.monitor.Intervention;

public class InterventionServiceImpl extends JDBCHelper implements InterventionService
    private JdbcTemplate jdbcTemplate = null;
    private static Log logger = LogFactory.getLog(InterventionServiceImpl.class);

    public InterventionServiceImpl(JdbcTemplate jdbcTemplate)
        this.jdbcTemplate = jdbcTemplate;

    /*...other methods skipped...*/

    private final static String queryForCreateEmptyIntervention =
        "INSERT INTO `intervention`\n" +
        " (`id_dispositif`, `id_origine`, `id_motif`, `id_regulation`, `DH_saisie`, `num_inter`)\n" +
        " VALUES ( 0, 0, 0, ?, ?, 0)\n";

    @Transactional(propagation=Propagation.REQUIRED, rollbackFor=Exception.class)
    public Intervention createEmptyIntervention(int idRegulation) throws Exception
        Intervention intervention = new Intervention();

        intervention.setDhReception(new Date());

        Object[] os    = new Object[]{ intervention.getIdRegulation(), intervention.getDhReception() };
        int[]    types = new int[]{ Types.INTEGER, Types.TIMESTAMP };

        jdbcTemplate.update(queryForCreateEmptyIntervention, os, types);


        logger.debug("Intervention inserted with id=" + intervention.getIdIntervention());

        return intervention;

    /*...other methods skipped...*/

    private int getLastInsertedId()
        return this.getLastInsertedId(jdbcTemplate, "intervention");

    /* inherited from JDBCHelper */
    protected int getLastInsertedId(JdbcTemplate jdbcTemplate, String tableName)
        return jdbcTemplate.queryForInt("SELECT last_insert_id() from `" + tableName + "` LIMIT 1", null, null);

I've changed nothing but adding the Java 1.5 annotation, which tells the transaction manager that a transaction is mandatory here (REQUIRED) and that it should roll back on any exception derived from java.lang.Exception.

Here, I set up the transaction so that the last_insert_id query can work. Without a transaction, Spring can use a different connection for the insert query and the last_insert_id query, and in that case the latter would return 0 instead of the last inserted id.

To be sure that your method is executed within a transaction, put Spring in debug mode, and you should see something like this:

2008-07-06 19:54:18,828 DEBUG [http-8080-Processor25] ( - Exec: MonitorInputIntervention.createEmptyIntervention() Object created,  not stored. id=0
2008-07-06 19:54:18,843 DEBUG [http-8080-Processor25] ( - Using transaction object [org.springframework.jdbc.datasource.DataSourceTransactionManager$DataSourceTransactionObject@139d115]
2008-07-06 19:54:18,843 DEBUG [http-8080-Processor25] ( - Creating new transaction with name []: PROPAGATION_REQUIRED,ISOLATION_DEFAULT,-java.lang.Exception
2008-07-06 19:54:18,843 DEBUG [http-8080-Processor25] ( - Acquired Connection [jdbc:mysql://localhost/crfirp?autoReconnect=true, UserName=root@localhost, MySQL-AB JDBC Driver] for JDBC transaction
2008-07-06 19:54:18,843 DEBUG [http-8080-Processor25] ( - Switching JDBC Connection [jdbc:mysql://localhost/crfirp?autoReconnect=true, UserName=root@localhost, MySQL-AB JDBC Driver] to manual commit
2008-07-06 19:54:18,843 DEBUG [http-8080-Processor25] ( - Bound value [org.springframework.jdbc.datasource.ConnectionHolder@1bef5e8] for key [org.apache.commons.dbcp.BasicDataSource@14189d0] to thread [http-8080-Processor25]
2008-07-06 19:54:18,843 DEBUG [http-8080-Processor25] ( - Initializing transaction synchronization
2008-07-06 19:54:18,843 DEBUG [http-8080-Processor25] ( - Executing prepared SQL update
2008-07-06 19:54:18,843 DEBUG [http-8080-Processor25] ( - Executing prepared SQL statement [INSERT INTO `intervention`
(`id_dispositif`, `id_origine`, `id_motif`, `id_regulation`, `DH_saisie`, `num_inter`)
( 0, 0, 0, ?, ?, 0)
2008-07-06 19:54:18,843 DEBUG [http-8080-Processor25] ( - Retrieved value [org.springframework.jdbc.datasource.ConnectionHolder@1bef5e8] for key [org.apache.commons.dbcp.BasicDataSource@14189d0] bound to thread [http-8080-Processor25]
2008-07-06 19:54:18,843 DEBUG [http-8080-Processor25] ( - Retrieved value [org.springframework.jdbc.datasource.ConnectionHolder@1bef5e8] for key [org.apache.commons.dbcp.BasicDataSource@14189d0] bound to thread [http-8080-Processor25]
2008-07-06 19:54:18,843 DEBUG [http-8080-Processor25] ( - Setting SQL statement parameter value: column index 1, parameter value [2], value class [java.lang.Integer], SQL type 4
2008-07-06 19:54:18,843 DEBUG [http-8080-Processor25] ( - Setting SQL statement parameter value: column index 2, parameter value [Sun Jul 06 19:54:18 CEST 2008], value class [java.util.Date], SQL type 93
2008-07-06 19:54:18,859 DEBUG [http-8080-Processor25] ( - SQL update affected 1 rows
2008-07-06 19:54:18,859 DEBUG [http-8080-Processor25] ( - Retrieved value [org.springframework.jdbc.datasource.ConnectionHolder@1bef5e8] for key [org.apache.commons.dbcp.BasicDataSource@14189d0] bound to thread [http-8080-Processor25]
2008-07-06 19:54:18,859 DEBUG [http-8080-Processor25] ( - Executing prepared SQL query
2008-07-06 19:54:18,859 DEBUG [http-8080-Processor25] ( - Executing prepared SQL statement [SELECT last_insert_id() from `intervention` LIMIT 1]
2008-07-06 19:54:18,859 DEBUG [http-8080-Processor25] ( - Retrieved value [org.springframework.jdbc.datasource.ConnectionHolder@1bef5e8] for key [org.apache.commons.dbcp.BasicDataSource@14189d0] bound to thread [http-8080-Processor25]
2008-07-06 19:54:18,859 DEBUG [http-8080-Processor25] ( - Retrieved value [org.springframework.jdbc.datasource.ConnectionHolder@1bef5e8] for key [org.apache.commons.dbcp.BasicDataSource@14189d0] bound to thread [http-8080-Processor25]
2008-07-06 19:54:18,859 DEBUG [http-8080-Processor25] ( - Retrieved value [org.springframework.jdbc.datasource.ConnectionHolder@1bef5e8] for key [org.apache.commons.dbcp.BasicDataSource@14189d0] bound to thread [http-8080-Processor25]
2008-07-06 19:54:18,859 DEBUG [http-8080-Processor25] ( - Intervention inserted with id=12
2008-07-06 19:54:18,859 DEBUG [http-8080-Processor25] ( - Triggering beforeCommit synchronization
2008-07-06 19:54:18,859 DEBUG [http-8080-Processor25] ( - Triggering beforeCompletion synchronization
2008-07-06 19:54:18,859 DEBUG [http-8080-Processor25] ( - Initiating transaction commit
2008-07-06 19:54:18,859 DEBUG [http-8080-Processor25] ( - Committing JDBC transaction on Connection [jdbc:mysql://localhost/crfirp?autoReconnect=true, UserName=root@localhost, MySQL-AB JDBC Driver]
2008-07-06 19:54:18,859 DEBUG [http-8080-Processor25] ( - Triggering afterCommit synchronization
2008-07-06 19:54:18,859 DEBUG [http-8080-Processor25] ( - Triggering afterCompletion synchronization
2008-07-06 19:54:18,859 DEBUG [http-8080-Processor25] ( - Clearing transaction synchronization
2008-07-06 19:54:18,859 DEBUG [http-8080-Processor25] ( - Removed value [org.springframework.jdbc.datasource.ConnectionHolder@1bef5e8] for key [org.apache.commons.dbcp.BasicDataSource@14189d0] from thread [http-8080-Processor25]
2008-07-06 19:54:18,875 DEBUG [http-8080-Processor25] ( - Releasing JDBC Connection [jdbc:mysql://localhost/crfirp?autoReconnect=true, UserName=root@localhost, MySQL-AB JDBC Driver] after transaction
2008-07-06 19:54:18,875 DEBUG [http-8080-Processor25] ( - Returning JDBC Connection to DataSource

Sunday, 22 June 2008

MySQL last_insert_id and Spring Framework JdbcTemplate

I'm using Spring Framework JdbcTemplate on a project with MySQL server as database.

And I've noticed that the query "SELECT last_insert_id()" returns 0 most of the time.

Using this syntax improves things a bit:

SELECT last_insert_id() from `last_inserted_table` LIMIT 1

The reason is that for each JDBC access, Spring takes a connection from its connection pool.

So it may use one connection from its pool for the insert, and another connection for the last_insert_id query.

The problem is that last_insert_id is tied to the connection that actually performed the insert.

The way to force Spring's JdbcTemplate to use the same connection across a portion of code is to start a transaction before the insert and commit it after the last_insert_id query.

See this post for a simple way to manage transactions.

You might say "I always use transactions"... well, in my case, transactions were not needed until this last_insert_id issue appeared.

Tuesday, 1 April 2008

Open an SSH connection through a proxy with NTLM authentication

Here is how to run an SSH connection through a proxy that uses the NTLM authentication protocol (cryptographic authentication) instead of plain login/password authentication.

First, follow the steps described here:

Next we have to deal with NTLM authentication.

Putty does not know how to talk to a proxy that uses NTLM authentication, so we need a program that will handle the authentication and encapsulate the network packets.

To do that we'll use cntlm, which is a port of a Unix program built with Cygwin (no need to install Cygwin, though...).

Once you unzip the files, edit the configuration file cntlm.ini.

You have to change 4 values:

login: your Windows login name
domain: your domain name

You can get this information by hitting CTRL+ALT+DEL; a window is displayed, and in the first fieldset you can see something like "Thomas Manson is logged on as DOMAIN\LOGIN".

proxy URL and port: see the previous post for how to get this information.
cntlm port: the local port on which cntlm will listen (local meaning: on your computer). The port should be above 1024 (ports below are reserved for specific uses) and unused (run netstat -a > c:\netstat.log and check that the port you chose is not in the file). Take for example 5865.

Example :

# Cntlm Authentication Proxy Configuration
# NOTE: all values are parsed literally, do NOT escape spaces,
# do not quote. Use 0600 perms if you use plaintext password.

Username __LOGIN__
Domain __DOMAIN__
#Password password # Use hashes instead (-H)
#Workstation netbios_hostname # Should be auto-guessed

# Your proxy's address and port (the 4th value to change)
Proxy __PROXY__:__PORT__

# This is the port number where Cntlm will listen
Listen 5865

# If you wish to use the SOCKS5 proxy feature as well, uncomment
# the following option, SOCKS5. It can be used several times
# to have SOCKS5 on more than one port or on different network
# interfaces (specify explicit source address for that).
# WARNING: The service accepts all requests, unless you use
# SOCKS5User and make authentication mandatory. SOCKS5User
# can be used repeatedly for a whole bunch of individual accounts.
#SOCKS5Proxy 8010
#SOCKS5User dave:password

# Use -M first to detect the best NTLM settings for your proxy.
# Default is to use the only secure hash, NTLMv2, but it is not
# as available as the older stuff.
# This example is the most universal setup known to man, but it
# uses the weakest hash ever. I won't have it's usage on my
# conscience. :) Really, try -M first.
Auth LM
#Flags 0x06820000

# Enable to allow access from other computers
#Gateway yes

# Useful in Gateway mode to allow/restrict certain IPs
#Deny 0/0

# GFI WebMonitor-handling plugin parameters, disabled by default
#ISAScannerSize 1024
#ISAScannerAgent Wget/
#ISAScannerAgent APT-HTTP/
#ISAScannerAgent Yum/

# Headers which should be replaced if present in the request
#Header User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows 98)
Header User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)

# Tunnels mapping local port to a machine behind the proxy

Next, we need to configure putty to use cntlm.

In Connection -> Proxy settings:
in the proxy hostname field, type "localhost"; in the port field, type the port number you chose for cntlm (e.g. 5865).

In the username field: your Windows login.
In the password field: your Windows password.

Save these settings in a session (so you don't have to type them each time).

Run cntlm, open your connection with putty, and you should get the login prompt of your Linux box.

Note that each time your Windows password changes, you have to change it in putty too.

Saturday, 1 March 2008

How I setup GoogleApps Premium on an existing domain

Mail administration is boring and time-consuming... Email keeps gaining importance in everyday exchanges, and the loss of a single mail can have huge consequences: What! You didn't buy the bread! I sent you a mail so you wouldn't forget... that's it... I want a divorce! :op

Nowadays, email takes a huge amount of space that you need to back up... So we decided to migrate our client to Google Apps: 7 servers to handle your mail, 25 GB of space, and a simple, very powerful web interface, with the ability to still use IMAP.

You'll find here how I set up Google Apps Premium for a domain.

You should plan this operation at the beginning of a weekend to minimize the impact of any potential issue.

First I go here:

I click on "Inscription" for the Premier Edition.

Then I give the domain name :

Then I fill in the form.

As it's for a company, I enter the company's VAT number to avoid paying VAT:

Next I pay, set up the first account, and arrive on the interface:

You'll notice the warning message telling you to confirm that you really are the owner of the domain.

Click on the blue link.

I chose to use the CNAME DNS modification.

To do that, I edit the domain's zone file on my DNS server and add the following line:

googleffffffff96aa79ed  IN  CNAME

And I update the zone's serial number:

$TTL 86400
@ IN SOA (
        2008030101 ; serial number
        7200       ; refresh
        900        ; retry
        3600000    ; expire
        86400 )    ; minimum

Once done, I reload the dns server:

service named reload

And I click on verify. It then says it can take up to 48h: the time needed for the DNS update to propagate.

Then we need to set up the MX records of the domain so that mail is routed to Google's mail servers instead of our previous mail server.

Click on the help link:

Next, click on the "modify your MX records" link.

Here you're being told to replace the existing MX entries with the ones provided by Google:

So re-edit your zone file,

and replace the existing MX entries with the following ones:

; entree MX :

Don't forget the trailing dots on both domains (your domain and Google's).

Re-edit your zone's serial number:

$TTL 86400
@ IN SOA (
        2008030102 ; serial number
        7200       ; refresh
        900        ; retry
        3600000    ; expire
        86400 )    ; minimum

reload your name server

service named reload

Then you can check whether your modification was successful:

type in your domain and you should see Google's mail servers:

Now it's done: add your other users with the web interface, and you should receive the mail for this domain in the Google interface.

You can migrate the mail that exists on your old mail server to Gmail with this interface (it behaves as a client and fetches all your mail to Google's servers):

I've tested it... it's working right now, even though the domain verification is not yet complete.

I finally used the HTML file verification too, to be quicker.

In about one hour of work, I migrated my mail to Google's servers... great!

Saturday, 16 February 2008

Add image in the blogger link list

I wanted to add links on my blog to the blogs I read regularly.

OK, I found the Link List module, added it, and added a few links I like...

But, some of the blogs are in French other in English...

So I obviously tried to insert some <img> or even <span> tags... nothing worked...

So I used the hard way:

I modified my template to add Prototype:

<script src='' type='text/javascript'> </script>
<script src='' type='text/javascript'> </script>
<script src='' type='text/javascript'> </script>

I used the text inside the <a> as a Prototype Template.

For example :

#{fr} Rank My Day | Référencement

And I wrote this code to change #{fr} into a nice flag (with many thanks to the author of all these great icons):

function linkListCustomize() {
  $$('#LinkList1 a').each(function(item) {
    item.innerHTML = new Template(item.innerHTML).evaluate({
      en: '<img class="clickable" src="" alt="English written website"/>',
      fr: '<img class="clickable" src="" alt="French written website"/>'

function init() {

/* taken from */
function callOnLoad(init) {
  if (window.addEventListener)
    window.addEventListener("load", init, false);
  else if (window.attachEvent)
    window.attachEvent("onload", init);
    window.onload = init;

You can have a look: I've added some customization of the <pre> HTML tag.
I'm working on syntax highlighting, but I'm having some trouble with cross-domain Ajax...

Friday, 25 January 2008

Full-screen editor for Blogger thanks to Greasemonkey

The Blogger editor for writing a new post on your blog is damn small... and on my 24" screen, it's even sadder... all this space wasted...

I've known Greasemonkey for a while, and I searched for scripts that would increase the editor size (if I hadn't found one, I would have written it!)

And I found one that works perfectly well here:

You don't know Greasemonkey yet... ?

It's a must-have !

In short, it's a Firefox plugin that lets you inject custom JavaScript into any web page to add functionality.

more info here:

Get firefox :
Get Greasemonkey plugin :
Get the Blogger Script :
(click on "install this script" on the upper right corner)
Get more scripts here :

Other useful Firefox plugins: Adblock Plus (prevents ads from being displayed), Firebug (a developer plugin), YSlow (a plugin for Firebug (which is a Firefox plugin)).

Wednesday, 23 January 2008

Open a SSH connection through an http proxy and dig tunnels :)

I work for an IT services & software engineering company, and I often work for a while (3 months to a year) at a client company's office...

And often, I'm behind a ****ing proxy that filters HTTP requests, provides antivirus analysis, etc. Useful for the company's security and efficiency, but really boring for me.

Those proxies allow http (tcp/80) and https (tcp/443) connections and nothing else...

So no tcp/22 for SSH, and I really miss it for many reasons.

So here is what to do to get an SSH connection to a Linux box.

But beware: doing this will almost certainly break the security policy you signed when you entered the company. It lets you bypass the content filtering and security, and you could be fired for it...

This blog post shows you how to bypass an HTTP proxy with basic authentication.

The next post shows you a way to bypass a proxy with the NTLM authentication mechanism.

Server configuration

You're a geek like me, so you have a Linux box somewhere, running 24 hours a day, with an SSH server on it.

The SSH server usually listens on the tcp/22 port, but this port is filtered by the proxy. The proxy allows tcp/80 & tcp/443.

As tcp/80 is probably used by your web server, we'll focus on tcp/443, the https port, which you probably don't need. But you can do the same with the tcp/80 port.

What we need is that your linux box accepts a ssh connection on the tcp/443 port.

We can either change the SSH server settings to listen on port 443, or redirect connections established on port 443 to port 22.

I prefer the second option: it leaves the SSH server configuration untouched and lets your Linux box still be reached through SSH on its standard port.

You can redirect your port 443 to port 22 using iptables or other tools.
You may even have a graphical interface that lets you do that easily.

With SME Server, I have such a tool (a web interface reachable only from my local network).

You can find information on how to do that here :

If this can help, here are the lines of my iptables-save output that involve port 443:

-A PortForwarding_22383 -p tcp -m tcp --dport 443 -j DNAT --to-destination
-A InboundTCP_22383 -d -p tcp -m tcp --dport 443 -j ACCEPT

where the address (removed here) is my external public IP address.

Once this is done, you can test it by trying to open an SSH connection on port 443, or by using telnet (telnet yourBoxIp 443); you should see your SSH server's version banner: SSH-2.0-OpenSSH_3.9p1.

Configure your ssh client

Putty is a marvelous SSH client for Windows; I couldn't live without it.

You can get it here :

All the settings described below can be changed on the fly while the SSH connection is open (except the proxy settings... of course) by right-clicking the window title bar -> Change Settings.

Basic settings

In the hostname field, enter the IP or domain name of your Linux box.
In the port field, enter 443 (or 80 if you use port 80).

In Saved Sessions, enter a string that reminds you of your truly loved Linux box ;o)

Putty's number of lines of scrollback

This is not really a mandatory step, but it's really useful.

Replace the default value of 200 with 20000000 (it should be enough). Adding more zeros can make putty behave strangely... I've tried ;o)

Putty's encoding

To get a proper display of characters in putty, you need to set putty's encoding according to your Linux box configuration (mostly UTF-8 or your country's specific charset).

You can find these setting on your linux box in /etc/sysconfig/i18n

cat /etc/sysconfig/i18n

Here it's fr_FR, which is the ISO-8859-1 charset.
You can find which charset matches your country code in /usr/share/i18n/locales/<your country code>:

cat /usr/share/i18n/locales/fr_FR
comment_char %
escape_char /
% French Language Locale for France
% Source: RAP
% Address: Sankt Jo//rgens Alle 8
% DK-1615 Ko//benhavn V, Danmark
% Contact: Keld Simonsen
% Email:
% Tel: +45 - 31226543
% Fax: +45 - 33256543
% Language: fr
% Territory: FR
% Revision: 4.3
% Date: 1996-10-15
% Users: general
% Charset: ISO-8859-1
% Distribution and use is free, also
% for commercial purposes.

Keep alive settings

The HTTP proxy automatically closes an idle connection. To avoid that, putty has a keep-alive parameter, which simulates activity and thus prevents the proxy from closing the connection.

Set it to 4 seconds, which is a good value.

Proxy settings

Here is the tricky part...

You need to get the proxy url and port, login and password.

In most cases, your company uses Microsoft Windows and Internet Explorer.

You can get the proxy settings from internet explorer, in :
Tools->Internet Options->Connections->LAN settings

If the checkbox in the proxy server fieldset is ticked, just use the information in this fieldset (have a look at Advanced too), but in most cases it will use a script.
Copy and paste the proxy script URL into the internet explorer address bar, save the content to a file and read it...

You need to determine what is the proxy url you use.
The proxy scripts usually use your network address and subnet to determine which proxy server you should use, with functions such as dnsDomainIs(host, "") or isInNet(host, "", "")

     if ( dnsDomainIs (host, "") ||
          dnsDomainIs (host, "") ||
          dnsDomainIs (host, "") ||
          dnsDomainIs (host, "") ||
          isInNet (host, "", "") ||
          isInNet (host, "", ""))

     if (isInNet (myIpAddress (), "", ""))
         return "PROXY ;" +
                "PROXY srv-proxy-01.site2.dom:8090;" +
                "PROXY srv-proxy-02.site2.dom:8090";

In this case, the proxy url would be srv-proxy-01.site2.dom or srv-proxy-02.site2.dom and the port would be 8090.

In a command console (Windows key+R, type cmd), you can get your IP address with the following command:
ipconfig /all

This will help you determine which proxy settings the script computes for your address.

Or, more simply, you can run, still in a console:

netstat -a

and look for something like a proxy...

The proxy url probably contains the word proxy, so:
netstat -a | find "proxy"

Once you have it (or you can try each url & port in the script),
paste it into the proxy hostname and port fields.

Username is usually your NT Domain\windows account username.
You can get it by hitting CTRL+ALT+DEL; it will be displayed in the window that appears (press Escape to return to where you were).

SSH compression

Enabling ssh compression will make the connection smoother...

SSH tunnels

SSH tunnels are one of the wonderful features of the SSH protocol.

In the source port field, type 22; in the destination field, enter yourLinuxBoxIp:22.

This creates a tunnel that carries tcp traffic from port 22 of the local machine to port 22/tcp of your linux box, all through the ssh connection (established on port 443, forwarded to port 22).

Like this, you can browse files with a secure ftp program like winscp or filezilla, or with any text editor that supports SSH, like ultraedit.

You just need to point these programs to localhost:22,
not to yourLinuxBoxIp:443, because that link won't be kept alive by default by these programs.

Another useful tunnel is L3390->machine:3389

where machine is a windows computer with remote desktop enabled (Windows Key+Pause, remote connection tab, remote desktop fieldset) on your private network, reachable from your linux box.

With this, you can run the Microsoft Terminal Service Client: Windows key + R, type mstsc, enter localhost:3390, and you'll be able to use your windows computer, which is also up 24 hours a day ;o), and browse the web as if you were at home (a bit slowly though).
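As a side note, for readers with an OpenSSH command-line client instead of putty, the same session can be sketched as an ssh_config fragment. Here mybox, yourBoxIp, machine and proxyhost are placeholder names, and crossing the http proxy additionally requires a proxy-capable helper on the ProxyCommand line:

```
# ~/.ssh/config sketch mirroring the putty settings above (placeholder names)
Host mybox
    HostName yourBoxIp
    # sshd also listens on 443, as set up above
    Port 443
    # same as putty's ssh compression setting
    Compression yes
    # same as putty's 4-second keep alive
    ServerAliveInterval 4
    # the remote desktop tunnel described above
    LocalForward 3390 machine:3389
    # to cross the http proxy, add a helper here, e.g. with OpenBSD netcat:
    # ProxyCommand nc -X connect -x proxyhost:8090 %h %p
```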

You can see why I wrote this post :

Save settings

Save all these settings.

If it doesn't work, try other proxy settings from the proxy configuration script.
If it still doesn't work, it may be because your company uses the NTLM authentication protocol. This will be described in a future article.

late update :