Wiki-isation of a static web site

From a presentation at HE Academy Technical Away Day, Newcastle, 7 February 2007

Most of our site content is in databases or in dedicated applications such as a blog or wiki. However, we still have a lot of content in static XHTML pages. We occasionally need to make corrections to this archive material, but I can’t justify the effort of slurping it all into a CMS, with attendant information architecture/URL design issues. This page describes a small project to allow authorised people to edit that content in a wiki-like way.

Goals

  • Make changes immediately from the browser
  • Restrict editing to the page content only, not to other parts of the page such as the breadcrumb trail, Server Side Includes, etc.
  • Word-processor-style editing, but with option to edit source directly
  • Ability to paste from Word
  • Minimal effect on source formatting
  • Live spell-check
  • Multiple users with logging of their activity
  • Deployable on multiple sites
  • Secure
  • No money: must use only free software
  • Relatively small amount of new code: this is primarily supposed to be a time-saving exercise

Technologies Used

  • Bookmarklet in personal toolbar folder of browser
  • Server-side redirect (Apache in our case)
  • Password-protected directory (how to do this)
  • Perl CGI script (but could be done with any kind of script or active page)
    • Get the Perl code here: EditPl
    • Comments in the code show the bookmarklet code and redirect command
  • The LibXML library. This parses documents and allows you to query them with XPath: a language for finding information in an XML document.
    • There are LibXML and/or XPath libraries freely available for Perl, PHP and several other languages.
    • One XPath expression can extract a given element or block from an XML document, and can be as specific as you like, e.g. the second bullet point of an unnumbered list in a DIV whose class is "sidebar". In the case of my main site, page content is always in a DIV whose id is "content", hence the XPath expression I use is '//div[@id="content"]'.
    • If you know how CSS selectors work, conversion to XPath is straightforward: see How to map CSS selectors to XPath queries
  • FCKEditor: a free word-processor-like HTML editor written in JavaScript (alternatives include KUPU or MCE)
  • Firefox: version 2 includes as-you-type spell-checking for form input
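
To make the redirect step concrete, here is a sketch of the kind of Apache rule that could pass an edit request to a CGI script. The paths here are hypothetical, not the ones documented in EditPl; the actual bookmarklet and redirect commands are shown in the comments of that code.

```apache
# Hypothetical example: the bookmarklet rewrites the current URL to an
# /edit/ URL; this rule hands the original page path to the CGI script
# as a query parameter.
RewriteEngine On
RewriteRule ^edit/(.*)$ /cgi-bin/edit.pl?page=$1 [PT,QSA]
```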
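
The core extract-and-replace step can be illustrated as follows. This is a Python sketch, not the Perl from EditPl, and the page markup is made up for the example; it uses the standard library's ElementTree, which supports only a subset of XPath, whereas LibXML accepts the full expression '//div[@id="content"]' used on my site.

```python
import xml.etree.ElementTree as ET

# A hypothetical page: only the "content" div should be editable.
page = """<html><body>
<div id="breadcrumb">Home &gt; About</div>
<div id="content"><p>Old text</p></div>
</body></html>"""

root = ET.fromstring(page)

# One XPath expression selects exactly the editable region.
content = root.find(".//div[@id='content']")

# Swap out the editable region's children for the edited fragment,
# leaving the rest of the page (breadcrumb, etc.) untouched.
for child in list(content):
    content.remove(child)
content.append(ET.fromstring("<p>New text</p>"))

print(ET.tostring(root, encoding="unicode"))
```

The same pattern works in any language with an XML parser and XPath support, which is why the choice of Perl here is incidental.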

Outcomes and Lessons

  • It works, my team use it, and it lets us make minor changes in pretty much the fastest way humanly possible. Non-techie colleagues like the Word-like interface.
  • Originally I used the XPath library rather than LibXML. The XPath library has a very strict parser, so making sure the whole site was pedantically valid (not just W3C-validator valid) created a bit of work, though not a huge amount. LibXML's parser degrades more gracefully, which is nice to have.

Further Wiki-isation

This project was about turning a collection of static web pages into something like a wiki. For me, a wiki has three key features:

  1. Quick editing
  2. Quick creation of new pages
  3. Rollback (tracking changes and recreating a page as it was on a certain date)

So far I have achieved 1) with a couple of days of work. 2) could be achieved with a similar effort, but would have to be highly constrained: I don’t want colleagues creating pages willy-nilly and messing up the site’s information architecture.

3) is a desirable and possible goal. The system already makes copies of all edits. I plan to adapt code from the open source wiki OddMuse, which in turn uses the standard free tool diff, to allow users to compare how a document looked on two different dates.
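
OddMuse shells out to the standard diff tool for this; as a rough illustration of the same idea, here is a sketch using Python's standard-library difflib to compare two saved snapshots (the snapshots and dates are invented for the example):

```python
import difflib

# Two hypothetical saved versions of a page's content block.
old = ["<p>Our office is in Room 12.</p>\n"]
new = ["<p>Our office is in Room 14.</p>\n"]

# Produce a unified diff labelled with the dates of the two snapshots.
diff = "".join(difflib.unified_diff(old, new,
                                    fromfile="2007-02-01",
                                    tofile="2007-02-07"))
print(diff)
```

Since the system already saves a copy on every edit, showing "how did this page change between these two dates" reduces to picking the right two files and diffing them.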
