

Very Rough Guide

When I go to the library I regularly check out what computer books are sitting on the shelf. Some books seem to be there every time I go in, such as iTunes 6 and iPod for Windows and Macintosh. I usually end up grabbing one or two books on something I am at least vaguely interested in, flicking through them over the month, and forgetting most of it a week later.

On my latest visit I picked up The Rough Guide to Blogging, by Jonathan Yang. I wasn't expecting to learn a lot from it, but I hoped that there might be a few little gems, or at least one thing that I didn't already know.

In general the book is OK. If you are new to blogging there are quite a few things that you can learn from reading a book like this. It seems to be pitched at people who use computers to get a job done, not geeks - that's cool. You shouldn't have to be a geek to read a book on a topic such as blogging.

Unfortunately the book's target audience probably isn't as well versed in convention and netiquette as a geek would be. As a geek reading the book I found myself thinking "hmm" on a few occasions. Then I found a wtf?! show-stopper on page 79. Here is the quote under the heading "Loading images from other websites" (my emphasis):

You can use an image from elsewhere on the Web without copying it to your server. Simply find the address of the individual image (not the page it's displayed on) and use the IMG tag in the usual way.

Before posting an image on your blog, however, it's best to ask for permission from the copyright holder. In reality, nothing is likely to happen to you for using an image without permission - especially in the case of celebrity photos and other commonly circulated stock photos - but at the very least it's polite to ask before using, say, a drawing from an artist's website.

Hotlinking is widely considered both a copyright violation and bandwidth theft. Most webmasters don't approve of others using their content and bandwidth without permission. Not so long ago, US Senator and potential presidential candidate John McCain found out what happens when you hotlink. There are numerous other examples of disgruntled copyright holders and webmasters taking action against hotlinkers.
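For anyone wondering how webmasters fight back, a common approach on Apache is to refuse image requests whose Referer header comes from another site. The sketch below is illustrative only - the domain is a placeholder and it assumes mod_rewrite is enabled:

```apache
# .htaccess sketch: reject hotlinked images (example.com is a placeholder)
RewriteEngine On
# Let through requests with no Referer (direct visits; some proxies strip it)
RewriteCond %{HTTP_REFERER} !^$
# Let through requests referred from our own site
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?example\.com/ [NC]
# Anything else asking for an image gets a 403 Forbidden
RewriteRule \.(gif|jpe?g|png)$ - [F,NC]
```

Some sites serve a "hotlinking is theft" image instead of a 403, which is how John McCain got caught out.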

Given the size of the copyright notice in the footer of his site, Jonathan seems to take his own copyright pretty seriously; it's a pity that his respect doesn't seem to extend to others' works.

Update: I emailed Jonathan a link to this post and he has replied.

Thanks for reading and reviewing the book. The section you referenced about "hotlinking." Definitely not good blogger etiquette. I should probably post something about the importance of not only asking permission but also hosting your own images. I hope I meant "use the images, but host them yourself" but clearly the text doesn't reflect that.

Further Update: Jonathan has posted a clarification post on his blog (since moved to a different url).

bye bye PHP 4

I have several servers running PHP 5 already, but as my laptop is my primary phpGroupWare development and test environment, it was running PHP 4.

I knew this day would come, I just didn't think it would be so soon. PHP 4 has been dropped from Ubuntu. Ubuntu has never shipped PHP 4 in main, but until Feisty it had been available in universe. This is no more.

The advantage of using PHP 5 on Ubuntu is that it is in main, so it has full security support.

I started with PHP 3 and was pretty enthusiastic about making the jump to PHP 4, but I have held back on PHP 5 due to the problems with running phpGroupWare (and other scripts) under it. It looks like I no longer have any excuses.

Now that all the major distros ship PHP 5 and PHP 6 is around the corner, it is time to bury PHP 4. The world didn't end when register_globals was turned off by default. Switching to PHP 5 won't kill us either, but holding back may.

phpGroupWare release?

I have been thinking about how to deal with releases of phpGroupWare. For me it is a technical, procedural and political question.

Over the last few months I have been playing with Drupal a fair bit. I love Drupal. It is simple to install, skin and hack. The community is great. The website is massive and has almost anything you want to know about Drupal. They dogfood their own stuff. I have quite a few clients using Drupal for their sites, and they love it. There are many cool things on a technical level within Drupal too - but that would take this post off on a long tangent.

Drupal was allocated 20 Summer of Code places by Google. phpGroupWare received 1 spot, indirectly. To me this is a sign of the popularity of the project and the level of activity within the community.

I hear you thinking: but hang on, Drupal is a CMS, phpGW is a groupware suite - compare apples with apples. Well, I am not trying to compare apples with apples. I am looking for good ideas about how to build a quality release.

I think Drupal have it sussed. They have the core, which is released when it is ready. They also have a stack of modules which are released when the developers feel they are ready. This provides a lot more flexibility for all developers. Developers can prepare versions of their modules for multiple versions of Drupal, and release them when they think they are ready, rather than waiting for the next mega tarball to be prepared.

I think that the Drupal release model may work well for phpGroupWare. We could prepare the core (probably API, admin, addressbook, calendar, email, filemanager, notes, preferences, setup, todo - the PIM apps - and sync when it is ready) and release that as phpGroupWare 0.9.18. Then module developers would be free to package their modules and release them when they were ready. Modules which were tested and stable at the time of an official release would be listed in the release announcements. It would mean that getting our apps site working would be important, as that would be the entry point for a new ecosystem. It would also mean that if someone is working on new features for an app, they could release often, while the core would remain more static and stable, in order to encourage more app development.

I don't think that we can have a release timetable for the core unless we have significantly more (paid?) resources available.

I think that covers the technical and procedural issues, the political hopefully won't be too painful either.

The decision on what is core and what is not will need to be made early. The criteria for assessment should be made publicly available. Developers should be free to ask that their app be considered for core. Being a core app is not vital; apps will still be promoted even if they are not core.

We could look at 4 levels of apps. Core, as discussed above. Supported: apps which meet the core app standards, but are not considered core for whatever reason. Unsupported: apps which work but do not meet the project standards for some reason (coding standards, bypassing the API, lack of docs, requiring patches, etc). Dead: apps which are no longer maintained and which other developers feel should no longer be maintained. The status of each app would be publicly listed.

Such a model as proposed above would allow us to release a 0.9.18 core with some additional supported apps (such as ged, property, messenger, tts) a lot sooner than trying to get all the 0.9.16 tarball apps ready for release.

The only technical issue to resolve in this plan would be version control, as Savannah's CVS isn't really designed for this model of development. I have been discussing switching to SVN with the Savannah hackers, and they are supportive of the idea.

I do have comments open on my blog, so feel free to leave a comment, otherwise discuss it on the dev list. I will post a summary of the comments there if I feel it is warranted.

The Summer of Code Roller Coaster

I am awaiting final confirmation from Google and the GNU Project, but I am 99% sure now that phpGroupWare will be getting a Summer of Code (SoC) slot, and I will be mentoring a student to implement sync.

What a process it has been, and I am yet to start mentoring. I thought others might be interested in the ups and downs involved in getting phpGroupWare a SoC place.

At the start of March, Google opened applications for mentoring organisations. I made an application on behalf of phpGroupWare. All up, 141 organisations were accepted, ranging from small obscure projects through to some of the stars of the FOSS community, with a mix of grass roots and commercial projects. Unfortunately phpGroupWare ended up 142nd or lower in the rankings.

I am sure Google was inundated with applications, but it would have been nice to at least receive a "thanks but no thanks" email from them. I found out by checking the listings when they were announced. It kind of reminded me of first year uni, when they would post provisional marks for the semester a couple of weeks before mailing out academic transcripts. I thought our Summer of Code was over before it had even started.

Then I noticed that the GNU Project had been accepted as a mentoring organisation. As phpGroupWare is a GNU Package, it was eligible for SoC slots under the GNU banner. The GNU Project received 65 valid applications, with a further 12 deemed invalid. I thought phpGW had a pretty good chance of getting one of the slots.

phpGroupWare received 2 applications for worthwhile improvements: sync and redoing the installer/setup code. It was really hard choosing between the two. The rationale went something like this. Setup is the first thing a new phpgw admin will see; our current setup app has had some eye candy added but it is far from polished and needs some serious attention, yet it is not something that someone is likely to invest money in, even though it is important. Sync, on the other hand, allows us to support mobile devices and desktop apps (including, dare I say it, MS Outlook) and to tick another business functionality box, yet our previous attempts at sync have failed, usually for technical reasons. In the end a tough choice had to be made (sorry again jarg), and we went with sync, as it was going to bring the greatest benefit to our current and potential users.

The GNU SoC admins asked all the mentors to rate the applications. Our top choice (sync) was rated up.

While I was away for Easter (with limited dialup-speed internet) it was announced that the GNU Project had been allocated 8 spots. That seemed like a reasonable number of slots, given that some GNU packages, such as GNOME, had been given many more in their own right. The problem was that the 8 places allocated to the GNU Project had to be shared between 11 packages. This meant that some people were going to miss out. I still thought that there was a good chance of phpGW getting a slot.

On Monday when I got home (with a good connection) the final 8 was proposed. To my disappointment phpGroupWare wasn't on the list; we had been ranked 10th. This ride was still far from over. A later message suggested that there had been a rethink: 2 packages had been dropped and the last 2 slots were to be shared between phpGroupWare and 2 other projects. Back to a two-thirds chance. I fired off an email explaining why I thought we should get one of the remaining slots.

The next morning I woke to bad news: we had missed out again. Another application had been found to be the diamond amongst the coal and elevated to 8th spot. phpGW was now sitting in 10th spot and out of contention, or so I thought at the time.

By this stage I was proofreading a draft blog post on the whole SoC process. I am now glad that I didn't publish it - not that I was likely to, it was more a venting exercise. I did email one of the admins privately asking for more information on why we had been rejected.

On Wednesday morning I was checking my mail and found a new GNU SoC final list (rev 3, I counted). One student had been allocated to 2 projects and had decided to work on the other project. This freed up 1 slot, which meant phpGroupWare moved to 9th - still 1 short. The news got better: Google had allocated the GNU Project an additional place. We made it, finally!

I am sure Google didn't intend it to be such a tough process for mentors, but when there are so many worthwhile FOSS projects, so many enthusiastic and competent students and a limited budget, it is difficult for Google to give everyone a go.

It has been a long and stressful process to get phpGroupWare a SoC slot this year. I hope that the stressful part of the process is now over and that Johan Gunnarsson will turn out a functional SyncML interface for phpGroupWare.

If it works out well this year, it might be worth all the effort to ride the roller coaster again in 2008, as long as Google is willing to put up the cash.

Update: I have checked and phpGroupWare is in Google's accepted list.

Linux is better for the environment

TechWorld has a story about how the UK government is recommending the adoption of Linux and FOSS as it is better for the environment. The story quotes a Californian Department of Commerce report.

The recommendation of using Linux and FOSS in government is hardly surprising, many governments around the world have been reaching the same conclusions. The interesting part is the environmental angle.

I have always liked the lower resource requirements of Linux based solutions. For a couple of years my primary development machine was a Frankenstein repurposed AMD K6-233 (running at 125MHz) with varying amounts of RAM. Various parts had been changed in it as they wore out - physically or practically. The machine then became a firewall for many years, until late last year. Now it serves as my network and server monitoring machine, which also acts as an SMS gateway.

The machine has served me well. The CPU is almost 10 years old. I don't have access to reliable stats, but I suspect that it draws less than 50W. My current Centrino laptop draws up to 90W.

Another example is the old Apple PowerBook Lombard my partner uses for surfing the net, checking her mail and typing the occasional document. She seems happy enough with xubuntu, which I installed on it when I got the machine cheap from a friend.

When the motherboard or CPU fails in the old K6 or the PowerBook dies, they will be added to the box of dead parts in my office. Then one day, I will take all the dead bits to the computer recyclers.

How is all this relevant to the UK report and linked article? It shows how long a machine can continue to run Linux in a useful way.

As the article points out, most people will just junk their old PCs, not recycle them. I have sourced parts for machines I still use from garage sales or the nature strip (Julie doesn't let me out when hard rubbish collection is on). PCs contain heavy metals such as lead and cadmium which can leach into soil and even underground waterways when disposed of in landfill.

When looking at a new PC, organisations should not only look at the cost of procuring the new machine. The repurposing or recycling of the old machine should be considered, and the ongoing running costs of the machine need to be factored in. For example, a Pentium D 840 draws twice the power of a newer Core 2 Duo E4300, which is rated at only 65W. These days a Core 2 Duo can be less than 50AUD more expensive than a Pentium D. Not only does the newer machine draw less power, it probably has a longer upgrade lifespan.
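To put some rough numbers on the running-cost argument, here is a back-of-the-envelope sketch for a machine left on 24/7. The 130W figure is an assumption (simply doubling the E4300's 65W rating, as described above), not a measured value:

```shell
# Rough annual energy comparison for a machine running 24 hours a day.
# 65W is the E4300 rating mentioned above; 130W assumes the Pentium D
# draws twice that. Figures are illustrative, not measurements.
HOURS_PER_YEAR=8760
OLD_WATTS=130   # assumed Pentium D 840 draw
NEW_WATTS=65    # Core 2 Duo E4300 rating

OLD_KWH=$(( OLD_WATTS * HOURS_PER_YEAR / 1000 ))   # ~1138 kWh/year
NEW_KWH=$(( NEW_WATTS * HOURS_PER_YEAR / 1000 ))   # ~569 kWh/year
echo "Old machine: ${OLD_KWH} kWh/year"
echo "New machine: ${NEW_KWH} kWh/year"
echo "Saving:      $(( OLD_KWH - NEW_KWH )) kWh/year"
```

Multiply the saving by your local electricity tariff and the 50AUD price difference pays for itself well within the first year.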

When considering Linux and FOSS there are many reasons why it can work out cheaper. Lower software costs, lower hardware costs and lower power consumption. So by choosing FOSS you can look after the environment while looking after your bottom line.

New Mac PC ads

Novell has released a couple of new ads for Linux which are a spoof of Apple's Mac/PC ads. They are quite well done. Ogg and MPEG versions are also available. Note: this is not an endorsement of Novell's products or its deal with Microsoft, just their sense of humour.

Sun SunFire T2000 rev2 and Ubuntu Dapper 6.06

A couple of months ago I received a shiny new Sun SunFire T2000. It is a monster: 1 CPU with 8 cores, each capable of running 4 threads (that is, 32 concurrent threads), 8G of RAM and 2 x 73.4G Seagate SAS HDDs. The 2U case hides the power away inside. Once powered up it sounds like a jet engine, but that is OK - it is designed for the data center, not a HTPC.

I obtained the box under the Sun Try and Buy program for testing Ubuntu 6.06 LTS (aka Dapper Drake) and some PHP based web apps. I also wanted to play with Solaris and some other OSes on the box, and I was interested in Solaris Brands. I wanted to take Jonathan Schwartz up on his offer of running Ubuntu on the box and getting to keep it. As I consider myself a Linux system admin of medium-level competence, I thought it should be easy enough. How wrong I was.

The first couple of times I tried to install Dapper on the server I used a CD. I tried both the 6.06 LTS and 6.06.1 LTS update CDs and neither worked. It turns out there was a bug in the iso9660 support which shipped on these CD images. At the time of writing no new official CD images have been released with the problem fixed, although the nightly build CDs have the fix included.

After some research I discovered that netbooting was the preferred way to install Ubuntu on these boxes. Again it seemed relatively straightforward: set up rarpd and tftpd, grab the image and away we go. Unfortunately this wasn't the case. After running ethereal (now known as wireshark) on the Debian server, I discovered that the T2000 expects to pull the boot image via tftp using the broadcast address. I later found out that both tftpd and tftp-hpa, which ship with Edubuntu 6.06 LTS and Debian 3.1 (aka Sarge), don't like requests being made this way. I tracked down the author of tftp-hpa, H Peter Anvin, and discussed the behaviour I was experiencing. He pointed me to a newer release of tftp-hpa which contains a fix for the problem. Peter considers the way the T2000s (and other Sun servers) handle tftp boot to be a bug in Sun's firmware and was rather unhappy about Sun's tftp client implementation. Peter stated "I still think Sun needs to be kicked in the ding-ding for not doing DHCP (or at least BOOTP, it's only a 20-year-old standard) and valid TFTP" [IRC discussion on #syslinux on OFTC, 21-Oct-2006 14:17 AEST].
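For reference, the Debian-side pieces of a rarpd/tftp netboot setup like this are fairly small. The sketch below uses placeholder values throughout - the MAC address, hostname, IP and paths are all illustrative, not taken from my setup:

```
# /etc/ethers - rarpd maps the client's MAC address to a hostname
00:14:4f:aa:bb:cc   t2000

# /etc/hosts - that hostname must resolve to the IP the T2000 should use
192.168.0.50        t2000

# tftpd then serves the boot image from its root directory. SPARC
# firmware requests a file named after the client's IP address in
# uppercase hex (192.168.0.50 -> C0A80032), so link the image accordingly:
#   ln -s boot.img /srv/tftp/C0A80032
```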

After removing the stock Debian tftp-hpa deb from my Sarge box, I downloaded tftp-hpa 0.43, compiled it and installed it using checkinstall. This was a painless process.

I thought I could see the light at the end of the tunnel. I had RARP and tftp working, the server was getting an IP address, requesting and receiving the dapper boot image. I later realised that the light was actually an oncoming freight train and the T2000, the duck and myself were all heading for a train wreck.

I tried following the official Ubuntu on SPARC instructions and found that they were rather light on detail. The documentation seemed to be written for users with no Linux experience but some SPARC experience. I am well aware that this is community generated documentation, so I should be grateful someone has put something together. I plan to help improve the page a little when I have more time. I have already added a note about the rev2 and Dapper kernel issues.

I did manage to get the installer running pretty easily. It is really no different to a normal KVM based install on i386/amd64 servers, except there are no virtual consoles. This might sound like a minor thing, but in practice it can lead to a lot of frustration. Over the years I have installed various versions of Linux on various machines. From time to time the installer decides it is taking its bat and ball and going home; with virtual consoles this isn't a problem - [ctrl]-[alt]-[f2] (or whatever) and you get the install log or a shell so you can start poking around to see what is (or isn't) going on. On the T2000 you have to watch the HDD lights or try ESP to see if it is still alive. On my first few (3+) attempts I assumed that the installer had crashed while formatting the partitions. I assumed this as the screen would just be blue and the drive lights suggested very little disk activity. I now know that this assumption was wrong.

Over a month or so I tried several times to get the install done. I asked my good friend Google for help on getting it all working, but I was not getting very far. By now Sun was starting to ask for their baby back.

One night I decided I was going to install Dapper on the server at any cost. I was prepared. I had a stack of tabs open in Firefox with the relevant documentation up. I updated to the latest firmware (again). I had connected to the server. I had cold beer in the fridge. I got very comfortable in the chair. On the first attempt I tried to partition the disk the way I wanted it; this seemed to fail after creating the /boot partition. I looked into the partitioning more and discovered that the partition table spills into the first 512KB of the drive, so you need to keep the first 512KB (1MB recommended) of the drive unused. On the next attempt I tried partitioning the drive the way I wanted it with 1MB (8.2MB was actually used) free at the start of the disk. I crossed my fingers and went to watch some TV. 20 minutes later I came back to find the lovely blue screen back and no real signs of life. This time I decided to try with 1MB (8.2MB) free at the start of the disk and let Ubuntu decide how to deal with the rest of the drive. swap and /boot both seemed to be OK with being formatted with the default ext3 filesystem. Then, as usual, the screen went blue and everything seemed to have stopped. I took a few deep breaths and started abusing the box and Sun. I did some more poking around and couldn't find any more information.
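For anyone attempting the same thing, the layout that eventually got past the partitioner looked roughly like this. Treat it as a sketch, not a tested recipe - the sizes are the ones reported above:

```
/dev/sda (73.4G SAS)
  ~8.2MB   unallocated  - leaves room for the partition table at the
                          start of the disk (512KB minimum, 1MB recommended)
  /boot    ext3
  swap
  /        ext3         - remainder of the disk
```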

It was getting late, but I decided no piece of scrap metal was going to beat me. This time I grabbed a new install image, just in case that was the problem. Again I started the install process. Again I let Ubuntu decide how to handle things after the first 8.2MB. I did a few other things while the installer was running, flicking back every minute or two to see what was going on. This time was the same as the previous attempts - it looked to me like it had failed. I tossed up between having a beer and going to bed, or watching paint dry for the rest of the night; as I didn't have any paint, the beer and bed won. I was too annoyed with the T2000 to shut it down that evening.

The next morning I awoke to an Ubuntu installer still running - very slowly, but still running. It wanted me to tell it which driver to use for Xorg. I didn't care, as the box had no video card in it, so I went with fbdev. The installer continued to run, albeit slower than I remember RH6 installing on my 486 many years ago. I let it go. It asked a couple more questions about the X config along the way, which I just left at the default values. I noticed that the two drive lights were always on, except that the drive where I was installing Dapper would flicker off for a split second every 3 to 5 seconds. I had read some stuff about slow I/O on these boxes, and assumed that maybe it was meant to be like this. I patiently waited, and waited, and waited. Finally, after 24 hours of waiting, I had managed to install Ubuntu 6.06.1 LTS on my SunFire T2000. I danced, I was happy - really happy. Then I thought to myself: they can't really expect people to wait this long for an install to work.

As I am a sucker for punishment, I grabbed a new image, checked my notes and started installing Dapper onto the other drive too. This time I kept checking on the installer. It took about 5 hours and 30 minutes to format a 70G ext3 partition. All up it took around 24 hours to install on the second drive.

I finally decided that it wasn't me; it seemed like something hardware related. I logged a ticket with Sun, then decided to start digging for answers. Eventually I discovered that the T2000 rev2 uses a different SAS drive controller which isn't supported by the Ubuntu 6.06 LTS kernel. Fixes are available in newer kernels, but the Ubuntu server team have indicated that they will not be backporting the fixes to 6.06 LTS and that users should upgrade to 6.10 if they wish to run Ubuntu on a SunFire T2000.

To check this was correct I tried installing 6.10 on the server. I was shocked - it flew. Less than 15 seconds to ext3 format 70G, and the whole install was done in less than 2 hours.

After all this, where does it leave people? As I see it you have 3 options if you want to run Ubuntu Linux on a SunFire T2000 rev2 box. The first is to install 6.06 LTS and have it run slow, but this is a huge waste of money - you would be better off buying a cheap second hand PII from somewhere - so this option isn't very practical. Option 2 is to run the latest and greatest version on it, 6.10 (aka Edgy); there are some major downsides with this option, most notably the lack of certification, and support being available for only 18 months instead of 5 years. The third option is to wait a while, and while you wait, encourage Ubuntu, Sun and Canonical (the company that provides commercial backing for Ubuntu) to work together to resolve the issue. As it stands at the moment all 3 players have made a big deal about Ubuntu on Niagara, and so all 3 stand to face a customer backlash. Bad PR isn't good for anyone.

Update: [13-Apr-2007 23:00] I have just got off the phone to Barton George, Group Manager, GNU/Linux Strategy and Product Management at Sun. The phone call follows on from an email exchange that started earlier this week. It seems that Sun and Canonical both want the problem fixed, they just have to work out how best to do it. So they are meeting today (US time) to try and come up with a plan to resolve the issues.

Although not mentioned on their website, as a work around Sun recommends using Ubuntu 6.10 (aka Edgy Eft).

I am awaiting a response from Sun about my request to be able to retest and submit an entry in the CoolThreads Performance Contest.

I will post any more info as I get it.

Disclosure: Sun is sending me a t-shirt, no strings attached.

Finally a website and a blog

Like the builder with the worst-looking house in the street, I was the guy who did web based work with no website. Now that has all changed - I finally have a website and a blog.

The website is a showcase of the free/open source products and services offered by Dave Hall Consulting.

The blog will be a place for people to see what I am working on and playing with, a place to discuss current issues, and a space for me to brain dump.

I hope you enjoy the site.