Elements or Lower

Tue, 18 Mar 2008

www.woking.gov.uk

For over a year now, we’ve been planning and working on a new version of www.woking.gov.uk, and it’s now finally live. For me personally, the project has absolutely dominated the past six months, and together with Buy Our Honeymoon, represents some of the best work I’ve ever done.

Before and after

The old Woking site had been evolving gradually since before my involvement began over ten years ago. Sure, we’d redeveloped the CMS a couple of times in that process, but each successive version of the site used content largely copied verbatim from the previous version. The structure had become labyrinthine, and the design (last updated in 2000) had become known internally as the “Rover dashboard”.

The Council’s Web Strategy Group saw the opportunity to completely refresh the site from scratch, with a brand new design, a completely reworked navigation structure, and a refresh of various aspects of the CMS. I’m terribly grateful that Article Seven was commissioned without hesitation to deliver both the new design and the technical implementation of the new site.

Accessibility was a key priority in the new site, and to that end we asked the Shaw Trust to help us work through the process. Just under a year ago, they carried out a full audit of the old site, highlighting any areas of concern. Once the new site templates were ready, the Shaw Trust assessed them, and just before the final site went live, the entire site was audited again and any final recommendations implemented. This whole process was incredibly valuable, particularly since it’s not a mere walk-through of the WCAG checkpoints. The Shaw Trust extensively test the site using people with a wide range of real disabilities, and any issues they highlight generally stem from real access problems experienced by testers using a variety of assistive technologies.

We also included an option to switch accesskeys on (your preference saved in a cookie), and have a pair of zoom stylesheets for low-vision users — which also happen to be great for mobile access.
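
The mechanics behind both preferences are straightforward. As a minimal sketch (the cookie name, element id and one-year lifetime here are my own assumptions, not necessarily what the live site does), the choice is written to a cookie and honoured on each page load by enabling or disabling an alternate stylesheet:

    // Sketch only: persist a preference in a cookie and honour it on page load.
    function setPreference(name, value) {
        var expires = new Date();
        expires.setFullYear(expires.getFullYear() + 1); // keep it for a year
        document.cookie = name + '=' + encodeURIComponent(value) +
            '; expires=' + expires.toUTCString() + '; path=/';
    }

    function getPreference(name) {
        var match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
        return match ? decodeURIComponent(match[1]) : null;
    }

    // The zoom stylesheet is an extra <link> element; switching it on or off
    // is just a matter of toggling its disabled flag.
    function applyZoom(enabled) {
        var link = document.getElementById('zoom-stylesheet'); // hypothetical id
        if (link) { link.disabled = !enabled; }
    }

    window.onload = function () {
        applyZoom(getPreference('zoom') === 'on');
    };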

We wanted to include Google Maps in a number of places on the site, and so I developed a system to try to mitigate the accessibility issues of this. Maps embedded on the site have their zoom and pan controls moved to a row of keyboard-accessible buttons below the map itself (although the normal click-and-drag mechanism continues to work too, of course), and a link to Google’s HTML version of the map is displayed instead in the event that Javascript isn’t enabled.
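
For illustration, the keyboard-accessible controls amount to little more than ordinary buttons wired to the map’s pan and zoom methods. The sketch below uses the present-day Google Maps JavaScript API rather than the version the site was built against, and the labels and pan distances are invented:

    // Add a row of keyboard-operable pan/zoom buttons below an existing map.
    // Assumes "map" is a google.maps.Map instance.
    function addAccessibleMapControls(map, container) {
        var controls = [
            { label: 'Pan left',  action: function () { map.panBy(-100, 0); } },
            { label: 'Pan right', action: function () { map.panBy(100, 0); } },
            { label: 'Pan up',    action: function () { map.panBy(0, -100); } },
            { label: 'Pan down',  action: function () { map.panBy(0, 100); } },
            { label: 'Zoom in',   action: function () { map.setZoom(map.getZoom() + 1); } },
            { label: 'Zoom out',  action: function () { map.setZoom(map.getZoom() - 1); } }
        ];
        controls.forEach(function (control) {
            var button = document.createElement('button');
            button.type = 'button';
            button.appendChild(document.createTextNode(control.label));
            button.onclick = control.action;
            container.appendChild(button);
        });
    }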

As a result of all this, the site has been awarded Accessible Plus accreditation from the Shaw Trust, one of only seven organisations in the UK to achieve such a high standard. All of us who’ve worked on the site (including a substantial team of web publishers) are immensely proud of this — but none more so than me.

Thu, 07 Sep 2006

Monks Chartered Surveyors

Since then, the site has been overhauled a few times. Firstly, the “showcase” was reimplemented using a tied DBM file. Then, I added a crude administration layer so that I no longer had to update the site myself. Finally, when the company changed hands and became Monks Estate Agents, the design was refreshed and the HTML rewritten.

Now, the company’s rebranded and growing, and this morning we made a completely rebuilt monks.co.uk live. Unsurprisingly, the site represents the fourth publicly-available installation of my CMS: partly so that content could be easily developed and edited in-house, and partly so that I had a convenient framework already in place for the site redevelopment.

We’ve been quite careful to include a handful of features I’d have liked in the sites the missus and I used to help us buy our home here in Greenwich a few years ago.

As always, there are various changes I’d already like to make. Principally, the map on the search page is way too big and only really serves to get in the way of the form. But it’s very early days, and we’ll continue to develop the site once it’s had a chance to settle in. Overall, I’m quite pleased with how this one turned out.

Thu, 13 Jul 2006

Odi et Amo

I recently wrote that my new, long-term mission would be to get the administration area of my CMS working with both the WCAG and ATAG.

Truth be told, I haven’t yet made any progress with this, other than to give some of the issues rather more thought. In doing so, however, I’m already being pulled in two different directions.

The Shell’s visual editor depends entirely on JavaScript; without it, publishers are left editing the underlying HTML directly. Fundamentally, then, a publisher can edit content if they have JavaScript disabled: they merely need to know HTML first. And that, of course, isn’t really good enough.

Until recently, I’ve been very much of the school of thought that the only accessibility issue with JavaScript is whether or not the functionality you’re providing works on at least some level without JavaScript. The trouble is, that’s not true. As evidenced by some of the recent work on AJAX and accessibility, assistive technologies such as the JAWS screen reader operate on top of Internet Explorer — so, if IE has JavaScript enabled, so does the screen reader.

This, in turn, means the visual editor needs to be operable in a screen reader context. We can’t assume that if you’re using a screen reader, you’ll have JavaScript turned off. It means that popup windows, such as those the visual editor’s tools create for adding links and inserting images into content, have an impact of their own. It means that the visual editor’s tools need to be device-independent. Those functions I’ve tucked away in a right-click menu become a problem.

On the other hand, I’m falling in love with script.aculo.us. Some well-placed visual effects here and there, and a little bit of AJAX magic in the right places, can work wonders on overall usability.

For example, the Greenwich Community Network’s contact form has some questions that ask you to expand on your answer to a prior question. Hiding the questions you’re not expected to answer using JavaScript improves the form no end.
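
A minimal sketch of the idea (the field and element names are invented, not those on the GCN form): the follow-up question is in the HTML for everyone, and the script simply hides it until the answer that makes it relevant is chosen, so with JavaScript off nothing is lost:

    // Hide a follow-up question until the answer that makes it relevant is chosen.
    function setupConditionalQuestion(radioName, triggerValue, followUpId) {
        var followUp = document.getElementById(followUpId);
        var radios = document.getElementsByName(radioName);
        function update() {
            var show = false;
            for (var i = 0; i < radios.length; i++) {
                if (radios[i].checked && radios[i].value === triggerValue) { show = true; }
            }
            followUp.style.display = show ? '' : 'none';
        }
        for (var i = 0; i < radios.length; i++) { radios[i].onclick = update; }
        update(); // hide the follow-up unless the trigger answer is already selected
    }

    // e.g. setupConditionalQuestion('contacted-before', 'yes', 'contacted-details');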

Equally, The Woking Forum has a bunch of ongoing discussions with many, many responses. On page load, messages older than three weeks aren’t included, but a little AJAX incorporates them into the page if you want them.
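
Something along these lines, say (the URL scheme and element id are made up for the example): fetch the older messages as a ready-made HTML fragment and drop them into the page on request:

    // Fetch messages older than three weeks and add them to the discussion.
    function loadOlderMessages(discussionId) {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/forum/discussion/' + discussionId + '/older', true);
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
                document.getElementById('older-messages').innerHTML = xhr.responseText;
            }
        };
        xhr.send(null);
    }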

In the context of the Administration Shell, there are a number of places where these kinds of technique would improve usability in general. For example, the Shell includes an expandable sitemap to help the publisher select the page they want to edit. Although it already takes some steps to split the DOM manipulation required to open and close levels of the hierarchy into manageable chunks, everything would be so much snappier if levels of the sitemap were only loaded when expanded, using AJAX.
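
As a hedged sketch of what that might look like (the URL and markup conventions are my own invention), a branch of the sitemap would only fetch its children the first time it’s expanded, keeping them in the DOM for any subsequent opens:

    // Load a level of the sitemap only when it's first expanded.
    function expandSitemapLevel(pageId, listItem) {
        var existing = listItem.getElementsByTagName('ul')[0];
        if (existing) {
            existing.style.display = ''; // already fetched: just reveal it
            return;
        }
        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/admin/sitemap/children/' + pageId, true);
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
                var sublist = document.createElement('ul');
                sublist.innerHTML = xhr.responseText; // a fragment of <li> elements
                listItem.appendChild(sublist);
            }
        };
        xhr.send(null);
    }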

The Shell also has a number of places where the publisher is asked to sort a list of items as they choose. For example, a navigation list for a level in the site hierarchy isn’t generally sorted alphabetically, or by date, but according to an arbitrary sort order of the publisher’s choosing. Coding a usable interface for this kind of thing isn’t easy.

Except, of course, it is: script.aculo.us offers drag-and-drop sortable lists more or less out of the box. The current Administration Shell doesn’t use a drag-and-drop list, though. Instead, it uses a series of buttons marked “here” for the publisher to indicate where they want to position an item within a list. But this is terribly clumsy: anything other than the shortest list takes up an acre of screen space, and it’s impossible to alter the position of more than one item at a time.

The main problem with script.aculo.us’s drag-and-drop sortable lists, however, is that in being drag-and-drop, they’re as far from being device-independent as you can get. It’s not possible to use the keyboard to sort one of these lists. I had hoped to improve accessibility within the Shell, not make it worse.
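
For reference, this is roughly all it takes, which is exactly why it’s so tempting (the element id and URL are illustrative; the persistence call is the standard Prototype/script.aculo.us pattern rather than anything from the Shell itself):

    // script.aculo.us makes a list drag-sortable in a couple of lines --
    // but the result can only be operated with a mouse.
    Sortable.create('navigation-list', {
        onUpdate: function (list) {
            // Persist the new order whenever it changes.
            new Ajax.Request('/admin/reorder', {
                method: 'post',
                parameters: Sortable.serialize(list)
            });
        }
    });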

So, on the one hand, I want to be able to claim WCAG compliance for the Administration Shell. I want to be able to lift the browser-sniffing on the Shell for Window on Woking. I want to be able to apply the same rigour to the back-end of sites that I’ve started to apply to the front-end.

But, equally, I want to make the Shell as easy an environment to use as possible. I want to be able to take advantage of the good things to come out of “Web 2.0” (but not the name).

I’m hardly alone in trying to get to grips with this kind of problem, and the community will find answers in time, just as it always has. For me, I suspect the answer may lie in having a set of preferences for the Shell, where publishers can make the choice as to whether they’re comfortable with drag-and-drop or not, and whether they want to edit content using the visual editor, or using Markdown. As long as I don’t find myself tempted to stick a bunch of JavaScript into the preferences screen, that is.

Thu, 18 May 2006

SiteMorse Testing

Right now, there’s a fascinating discussion in progress over at Accessify Forum on the merits and problems with the SiteMorse automated site testing tool, specifically insofar as it claims to test pages against the WCAG.

Regular readers here will know that SiteMorse publish a monthly league table of both central and local government sites, and that despite these having no formal government backing, it’s hard to ignore them altogether. The league table divides its results into a number of columns, including “performance”, “metadata”, “code quality”, “accessibility” and “function”. We’d been consistently scoring well on most of these, but very poorly on “function”, and we had no real idea what precisely this meant, nor did we get very far in finding out by trying to contact SiteMorse directly.

So, we decided to purchase a bunch of credits for the SiteMorse tool, in order to fully divine what they perceived was wrong with the site, and to be able to make better sense out of the monthly league table. In the light of the current discussion, I thought it might be useful to summarise what we found.

Part of what we found is that the SiteMorse tool simply balances the number of errors found against the number of pages tested. If your site uses correct syntax, but does nothing, has no content, and links nowhere, your SiteMorse score will be fantastic. A great SiteMorse score isn’t really an endorsement of what you’re doing, but merely an indication that you managed to avoid a specific set of potential errors.

This has a great impact on the interpretation of the SiteMorse accessibility score. A SiteMorse accessibility score of 10/10 does not equal a fully-accessible page, but merely indicates that there were no failures discovered for any of the 16 different accessibility checks it actually carries out. Of course, it’s very hard to keep this in mind, even though the page for accessibility in a given test lists a further 26 points with “manual check” written against them. What’s more, there seems to be a very strong disconnect between the disclaimers in the SiteMorse test results themselves, and the tone of the company’s marketing.

Despite all this, we found the accessibility tests genuinely useful in identifying a handful of real issues. We had a few pages, adapted from old CGI scripts, that contained deprecated HTML. We had a few pages with out-of-sequence headings. We had a couple of pages whose content didn’t validate properly. And we had a truckload of links which used the same link text, but with different link URLs.

In addressing these issues, it was really important to be able to interpret the results in a sensible and educated way. For example, just as one could avoid broken link errors by having no links, one could avoid heading-sequence errors by having no headings. I don’t think it’s entirely fair to consider this an argument against the tests themselves, however. It’s more of a caution not to prioritise a “good” result over a good site, league tables be damned.

Mon, 27 Mar 2006

hCard

In a fit of forward-thinking, I’ve now set all organisation and councillor homepages on Window on Woking to use the hCard microformat.
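
In practice, that just means adding the agreed hCard class names to markup that was largely there already, something along these lines (the details are invented for the example):

    <!-- An hCard: standard microformat class names on otherwise ordinary markup -->
    <div class="vcard">
        <h1 class="fn org">Example Community Group</h1>
        <div class="adr">
            <span class="street-address">1 High Street</span>,
            <span class="locality">Woking</span>,
            <span class="postal-code">GU21 6XX</span>
        </div>
        <div>Tel: <span class="tel">01483 000000</span></div>
        <a class="url" href="http://www.example.org/">www.example.org</a>
    </div>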

We’re planning some improvements to the Diary of Events, and I’ll implement hCalendar at the same time as implementing those changes.

Fri, 17 Mar 2006

The Accessible Shell

As Patrick Griffiths notes, in the same way that the WCAG describes accessibility for web content, the lesser-known ATAG describes accessibility for applications and services that generate web content — including, of course, the web-based “Administration Shell” employed by my CMS.

Naturally, in an ideal world, the Shell would conform to all Priority One and Two (plus selected Priority Three) guidelines from both the WCAG and the ATAG. I’m sceptical of the extent to which I’ll be able to manage this — for one thing, so much of the authoring environment depends on JavaScript that checkpoint 6.3 of the WCAG is pretty much doomed to be broken. Nonetheless, it certainly defines a long-term goal, and a serious challenge.

The existing Shell is very far from achieving conformance with anything. I’ve yet to apply even the standards-based treatment the front-end has received to the back-end, and right now the Shell doesn’t behave properly in anything other than Internet Explorer 6 — which, to make matters worse, is only available on Windows.

To address this problem, my own top priority here is to get the Shell working fully in Firefox 1.5+. It very nearly manages this even now, but there are still issues that need to be addressed.

Then comes a thorough revision of the HTML, CSS and scripting used in the Shell, so that everything is as clean and valid as possible. And then, finally, comes the larger task of working through the WCAG and ATAG. This is my new mission. It will take a while.

Wish me luck.

Wed, 18 Jan 2006

Window on Woking

Towards the end of 2004, Woking Borough Council won a Round One e-Innovations bid, securing funding from the ODPM to develop three initiatives in Trusted Partner Access to Information.

Part of the resulting project involved the creation of a new community site for Woking, whereby local voluntary and community organisations would be able to develop their own pages within an overall framework, for free.

This relates heavily to the ODPM’s Priority Outcomes for local government, specifically outcome G2:

Empowering and supporting local organisations, community groups and clubs to create and maintain their own information online, including the promotion of job vacancies and events

The key innovation in Woking’s community site would be that each organisation would be able to restrict some of their pages on the site to members of their own organisation, or to members of other organisations which they could specify, creating a network of trusted partners between organisations on the site.

Article Seven — which is to say, me — was charged with the development of the infrastructure for this, which would be based largely on the CMS put together for the main woking.gov.uk site. The key deadline for this was 1 April 2005, by which time we had to be able to demonstrate that the basis of the site was functional and sound, even if the site was far from being a complete, marketable commodity.

That meant that the first three months of last year required me to focus practically 100% on this one project. Oddly, I found that experience rather liberating: all my other clients knew that development work for them would have to wait until April, and I determinedly didn’t accept any additional work for that period. For three months, I lived and breathed the Community Site.

I want to write in detail about the development process later, but for now it’s enough to say that:

  1. We got the basic infrastructure in place on schedule

  2. We spent the rest of the year getting the site fit for launch

We’re now, finally, ready to launch Window on Woking.

The Window on Woking homepage

The Council began approaching local organisations over the summer, taking many through the sign-on process and running training sessions in the CMS. At the time of writing, we have 234 organisations signed up and developing pages on the site — including each of the local councillors, for whom we’ve developed a basic blogging system.

Our launch involves promoting the site to the public, with enough content now in place to justify a visit, and many of the initial problems resolved. That’s not to say, however, that work on the site is over — far from it: there are still some things that don’t quite work as well as we’d like them to, and plans for future development on most aspects of the site.

It’s been a pleasure working on this over the past year, and I’m really happy to be continuing to work on it (albeit with less exclusivity) for the foreseeable future. The project also gave me my first taste of collaborative development, with much of the graphic design and CSS work on the site expertly handled by Adam Pink at Sardine Media. Not only that, but the shaping and specification for the site was worked out over Basecamp between me and Sean Rendall, the manager for this strand of the e-Innovations project at Woking Borough Council, with whom I’ve always had an enjoyable working relationship. Sean has a keen eye for the way things ought to be done, even if they’re technically harder to achieve. If the site turns out to be a success, it’s likely to be because Sean’s stopped me being lazy.

Mon, 05 Dec 2005

The PageCache

I’ve previously noted that the CMS I’ve put together uses a fried rather than baked model for its presentation layer. For a few months now, this has only partly been true.

The presentation layer now implements a cache for the final HTML of pages, and serves from that when there’s a copy of the requested page there. The advantage of this is an acceleration in the delivery of cached pages, and a reduction in the amount of redundant work the CMS has to do, especially for popular pages.

The cache is a regular database table, containing a resource ID, the “framework” (viewing context), and the actual generated HTML. When a request is made for a page, once the CMS has analysed the URL to establish the resource ID and framework in question, it checks to see if there’s matching content in the cache. If there is, it serves it; if not, it proceeds to generate the content as normal.

This differs from a normal baked CMS only in that the pre-generated content is effectively served from a database rather than as static files on the server, and in that the CMS continues to fry up content if there isn’t anything already baked. By doing this, and by binding the cache to the CMS at a fairly fundamental level, we can achieve quite a lot of flexibility.

For a typical page, if there isn’t any pre-generated content, the CMS will generate the page as normal, and then try to store the final HTML in the database for the next request. The CMS won’t actually permit the storage of pages, however, in the following circumstances:

  1. The page is a PDF file.

  2. The page is being viewed in the test environment (the CMS can have separate test and live content for any page).

  3. The request contained a query string or POST content.

  4. The content is marked as do-not-cache.

All that happens here is that the generated content is never stored, and so the CMS will be forced to re-generate it for each identical request. The initial check for pre-generated content, therefore, trusts the database completely. If there’s pre-generated content available, it’s always served. This keeps the processing overhead to serving that content to an absolute minimum.
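
The overall flow is easier to see in code than in prose. The sketch below is only an illustration (the real CMS is Perl running under mod_perl, and every name here is invented), but the shape is the same: trust the cache on the way in, and be picky about what gets stored on the way out:

    // Illustrative sketch of the PageCache flow; all names are invented.
    // A trivial in-memory stand-in for the cache table, keyed on resource + framework.
    var cache = {
        data: {},
        fetch: function (key) { return this.data[key.resourceId + '/' + key.framework]; },
        store: function (key, html) { this.data[key.resourceId + '/' + key.framework] = html; }
    };

    // Stand-in for the normal "fried" page-generation process.
    function generatePage(request) { return '<html>...</html>'; }

    function servePage(request) {
        var key = { resourceId: request.resourceId, framework: request.framework };

        // Trust the cache completely: if pre-generated HTML exists, serve it.
        var cached = cache.fetch(key);
        if (cached) { return cached; }

        // Otherwise fry the page up as normal.
        var html = generatePage(request);

        // Only store the result when it's safe to do so.
        var cacheable =
            !request.isPdf &&              // 1. not a PDF
            !request.isTestEnvironment &&  // 2. not the test version of the page
            !request.queryString &&        // 3. no query string...
            !request.postData &&           //    ...or POST content
            !request.markedDoNotCache;     // 4. not flagged do-not-cache

        if (cacheable) { cache.store(key, html); }
        return html;
    }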

The cache, however, is deliberately very fragile. The CMS wipes the entire cache overnight, to avoid any pages becoming stale, and the administration layer wipes selected portions of the cache when amendments are made. Generally, changing the textual content of a page means that the CMS need only wipe the cache for that page alone, whereas changing the title or metadata of a page prompts wiping the cache for the page, its descendants and its siblings in the site hierarchy. Moving a page within the site prompts the CMS to wipe the entire cache, and so on. The CMS tries to be as cautious as it can be here — it’s better to wipe too much of the cache than not enough.
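
In outline, the administration layer just maps the kind of change to a scope of cache-wiping, always erring towards wiping too much. Again, this is a hedged sketch with invented names rather than the CMS’s actual code:

    // Map the kind of edit to how much of the cache gets wiped.
    function wipeCacheFor(cache, change) {
        switch (change.type) {
            case 'content-edited':
                cache.wipePage(change.pageId);        // just that page
                break;
            case 'title-or-metadata-edited':
                cache.wipePage(change.pageId);        // the page itself...
                cache.wipeDescendants(change.pageId); // ...its descendants...
                cache.wipeSiblings(change.pageId);    // ...and its siblings
                break;
            case 'page-moved':
            default:
                cache.wipeAll();                      // anything structural: start afresh
        }
    }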

Even then, given the range of sources from which the CMS can acquire content, the cache can still occasionally contain data that’s not perfectly fresh, and so there are options within the “Administration Shell” to wipe the entire cache, or to wipe it only for a specific resource.

Because the preparation of a finished page of HTML can involve a lot of work for the CMS, including a number of XML transformations, introducing the cache has helped response times — particularly for popular pages — by a surprising amount. It’s far, far easier for the CMS to shunt data straight from the database than to go through the normal page generation process, and because the system runs under mod_perl, the CMS code is itself compiled into the Apache process, and the database connection is also cached and reused.

The CMS request log keeps a high-resolution timer of the processing time taken from parsing the incoming HTTP headers to initiating the log record. Taking the homepage as an example, a sample request without the PageCache took 0.906 seconds to complete. With the output of that request cached, the second request took only 0.007 seconds. Most pages don’t take quite that long to generate the content (typically around 0.3-0.6 seconds) — but even here, the difference between half a second and less than a tenth of a second is palpable.

Mon, 28 Nov 2005

Woking gets a makeover

Way back in May 2004, I set myself the mission of reworking the design of the Woking Borough Council web site in CSS.

I managed to let it take me until September this year to actually get this done, and even now there are parts of the site where old, tables-based markup, riddled with presentational attributes and spacer GIFs, is nested inside the new CSS-driven templates. This is largely a product of the different ways the CMS can acquire content — parts of the site are derived from legacy CGI scripts massaged into the newer CMS processes. But at least the templates are done.

Of course, the Council themselves weren’t motivated to change the site templates for the same reasons I was. I felt it was high time I made the transition to rigorously separate presentation from semantics, and to put into practice the CSS techniques in which I’d only recently begun to dabble. The new implementation would have some key, measurable benefits for the site, however, and these prompted their enthusiasm for the project.

Notwithstanding the many and varied concerns over their accessibility testing, the monthly SiteMorse league tables furnished us with a number of key target areas for improving the performance of the site as a whole. We’d done well with metadata, improved our error count, and remained focussed on accessibility — but our download speeds were consistently poor.

As part of my proposal document for the project, I ran a small experiment here. The breadcrumb trail at the top of each page had been implemented as a table, but my thoughts on the CSS version led me to believe it should be an ordered list. Taking the breadcrumb trail from one example page of the site and re-coding it in this way resulted in the following delightful realisation:
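
The original before-and-after samples aren’t reproduced here, but the contrast was essentially the following (illustrative markup only; the byte counts quoted below refer to the real pages, not to this sketch):

    <!-- Before: a presentational table, trimmed for brevity -->
    <table cellpadding="0" cellspacing="0" border="0">
        <tr>
            <td class="crumb"><a href="/">Home</a></td>
            <td class="sep"><img src="arrow.gif" alt="" width="8" height="8"></td>
            <td class="crumb"><a href="/council/">The Council</a></td>
            <td class="sep"><img src="arrow.gif" alt="" width="8" height="8"></td>
            <td class="crumb">Committees</td>
        </tr>
    </table>

    <!-- After: an ordered list, with the presentation moved to the stylesheet -->
    <ol id="breadcrumb">
        <li><a href="/">Home</a></li>
        <li><a href="/council/">The Council</a></li>
        <li>Committees</li>
    </ol>

    <style>
        #breadcrumb { margin: 0; padding: 0; list-style: none; }
        #breadcrumb li { display: inline; }
        #breadcrumb li + li:before { content: "» "; } /* separator without an image */
    </style>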

The two different approaches result in an identical visual appearance. The first approach, however, uses 846 bytes of code. The second — taking the HTML and CSS together — uses only 575 bytes. This saving is compounded by the fact that the stylesheet would only need to be loaded once, and would then be applied to any page, whereas the HTML approach is included in full for every single page on the site. The HTML alone in the second approach is only 279 bytes.

In itself, this is a small saving — but the templates contained many, many similar examples. Moreover, each page on the site contained dozens of small graphics to drive the presentation. Each primary navigation button contained the text of the button as part of the graphic, and had a rollover state — for our (at the time) 10 different navigation buttons, we therefore had to load 20 different graphics onto the page. Plus background images (specified as attributes to individual table cells, of course), spacer GIFs, drop shadows, rounded corners, and (on the homepage) a further 10 buttons and text-in-graphic headers to different blocks on the page.

We’d recently introduced a series of links on each page to run the page through Google Translate — which worked well enough for the actual text of the page, but of course didn’t touch all the text rendered as graphics, leading to a page mostly in one language, but with navigation in another.

The CSS project therefore had three key goals, other than quelling my shame:

  1. Help our download times by trimming the code itself as much as possible.
  2. Further help this by reducing the size and number of graphics used as part of the templates.
  3. Remove instances of text rendered as graphics, whilst staying pretty.

Additionally, we hoped to be able to achieve two more:

  1. Improve our accessibility through semantic code.
  2. Improve our error quotient through simplifying the HTML.

Finally, it was important that the finished version resembled the existing design as closely as possible. It was clear that the two versions wouldn’t match exactly — if nothing else, rendering text exclusively as text would guarantee that — but this wasn’t so much a redesign as a re-implementation of the existing design.

The work was completed in two main stages. The first was to take an existing page from the site and recode it; the second was to then implement this new design in the CMS’s XSLT templates.

In recoding the HTML (essentially a case of throwing away all the presentational cruft and getting down and dirty with the semantics), I had a notion that we would be able to make life easier for JAWS users (et al) by placing the primary site navigation towards the end of the HTML source rather than at the beginning, and then using a touch of creative CSS to keep them in the left-hand “ribbon” column. The principal blocks we ended up with for most pages, then, are as follows:

  1. The borough logo
  2. The breadcrumb trail
  3. The page header
  4. The “This Section” navigation
  5. The main page content
  6. The search box, primary site navigation and translation tools
  7. The page footer
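
The CSS needed to make that source order work visually isn’t especially exotic. As a rough, hedged sketch (the ids, widths and positioning here are invented; the real templates may well have done it differently), the main content simply leaves room on the left, and the late-in-source navigation block is positioned back into that column:

    <style>
        /* Content comes first in the source; the navigation block comes later,
           but is pulled back into the left-hand "ribbon" column visually. */
        #content { margin-left: 12em; }
        #ribbon  { position: absolute; top: 10em; left: 0; width: 11em; }
    </style>

    <div id="content">Main page content comes first in the source...</div>
    <div id="ribbon">Search box, primary site navigation and translation tools...</div>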

The next stage was to create a range of background graphics in Fireworks, with the intention of making liberal use of the CSS Sprites technique to minimise the number of graphics which needed to be downloaded, without sacrificing our multicoloured rollover effects, or the range of button imagery on our homepage.
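
With a sprite, every button background and its rollover state lives in a single image, and each link simply shows a different slice of it, roughly like this (the file name, dimensions and class names are invented):

    <style>
        /* One image holds every navigation background and its rollover state;
           each link shows a different 150x30 slice of it behind real text. */
        #nav a {
            display: block;
            width: 150px; height: 30px;
            background: url(nav-sprite.gif) no-repeat 0 0;
        }
        #nav a:hover, #nav a:focus { background-position: -150px 0; }
        #nav a.services       { background-position: 0 -30px; }
        #nav a.services:hover { background-position: -150px -30px; }
    </style>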

I had been worrying about the rounded corners I’d implemented on many of the panels we had in the previous design — I didn’t want to lose them altogether, but equally, I didn’t really want to have to include separate corner graphics and the extra DIVs necessary to embed them. About this time, I learned of Alessandro Fulciniti’s Nifty Corners — some presentational Javascript to round off the corners of specified blocks without needing images. Like sIFR, Nifty Corners is resolutely unobtrusive — if you don’t have Javascript enabled, your corners merely remain sharp. No big deal. Lovely.

In order to get sign-off on the changes, I then (after spending a short time on other projects) incorporated the new design into a fresh set of XSLT templates using a system built into the CMS whereby the presentation layer can switch to a different context based on a small change to the URL of any given page. This enabled us to compare the old and new implementations of the design for any page on the site very easily, and we used this to obtain some measure of the success of the project:

[We ran a] speed check on two pages with their closest counterparts on the real site. The existing homepage was estimated at a 54 second download on a 56K modem. The new implementation weighed in at 19 seconds. The comparison of the inner page is less dramatic, with the old design at 16 seconds and the new at 10 seconds. The old inner page needed a total of 71 individual HTTP requests (mostly images), whilst the new needs 27.

Once the implementation was signed-off, all I had to do was make the new XSLT templates the default ones, and the work was largely complete.

Because the new implementation of the design addressed everything that our old “Easy Access” version was intended to do, we also switched off that feature from the site. The new site has a print stylesheet that removes much of the navigation from a printed page, and the new implementation is frankly more accessible than the Easy Access version ever actually managed to be. With a few recent tweaks, many pages on the site (including the homepage) now seem to be WAI-AAA accessible — although of course it’s really not possible to be sure of that without further real-world, non-automated testing.

While all this was being worked on, the format of the SiteMorse benchmarks changed somewhat. The monthly league tables no longer specify a score for HTML errors (although the new implementation validates consistently), and the automated WAI-A and AA scores are now given as a percentage of pages assessed that contained at least one accessibility error. For October, we had 13.6% of pages with at least one WAI-A error, and 38.4% of pages with at least one WAI-AA error. Unfortunately, of course, you have to buy the full report to find out just what your errors are perceived to be. We might just go ahead and do that, and I’ll be sure to follow up here with any conclusions that may be drawn from this.

On the positive side, whilst SiteMorse consider our actual download times to still be unacceptable on a 56K connection (though they’re now a “Pass” for ADSL), our server has furnished us with the fastest actual response time of all 463 assessed local government sites for the past two months running. This has much less to do with the CSS makeover than the beefy hardware and the CMS’s own caching strategy (more on which another time), but I’m awfully pleased about it anyway.

Fri, 17 Dec 2004

Before I forget again, I’d like to boast that Woking have leapt from position 128 to 51 in the SiteMorse rankings for December. The monthly “league table” of Local Government sites currently ranks 460 authorities against each other on a range of criteria.

Although we’ve just moved over to a faster server and a better metadata system, I doubt that either of those have made the difference this month. The new server is responsible for the average page generation time going from 0.9 seconds to around 0.6 seconds. A third off is a bargain by anyone’s standards, but we’re still failing the download speed tests for both modem and ADSL connections. When I’m finally able to do it, the long-awaited CSS makeover of the site should cure that.

No, I strongly suspect that what made the difference was a clampdown on broken links. There’s not a vast amount that a (fried not baked) CMS can do about broken external links, so Woking have invested in a link-checking service to help us keep track of those.

It’s in the realm of internal links that the CMS can help. When creating a page, publishers can link to other resources using a dialog that encourages them to enter the link destination one way for an external link, another for links within the site, plus another for email addresses and — in a novel twist — a fourth for links to pages that haven’t actually been created yet. By recording the ID of internal pages that are being linked to rather than their published URL, the CMS can make sure that links from one resource to another are kept up-to-date even if the resource being linked to is moved from one part of the site to another. It also allows the system to construct the link according to the browsing circumstances, so if you’re in one version of the site, you stay there rather than being shunted to another version.

Given that, there remain two circumstances in which an internal link could become broken:

  1. The link is to a resource which is subsequently deleted from the site altogether. Even if the CMS emails notification to all publishers who have resources linking to the newly-deleted resource, there’s still inevitably a period of time when the page has a link leading nowhere.

  2. A resource in test links to another resource in test. The first resource is then made live before the second one. A page in test — that hasn’t been published to the live site at all yet — is invisible to the live site, and will consequently 404.

We came up with a simple technique to help deal with this: when rendering a page, the CMS checks whether the target of each internal link is actually available in the current version of the site, and if it isn’t, the link is suppressed and the link text is output on its own.

Consequently, from the perspective of the casual visitor to the site, there are no broken internal links any more — although there might be phrases that seem out-of-context without a link adorning them. If publishers are careful not to over-use click here, even that shouldn’t be too much of an issue.
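
As a final illustration (JavaScript standing in for what the CMS actually does in Perl and XSLT, with invented function names), the rendering decision boils down to this:

    // Internal links are stored by resource ID, so the URL is built at render
    // time for the current viewing context -- and the anchor is dropped
    // altogether if the target isn't available.
    function renderInternalLink(site, linkedResourceId, linkText, context) {
        var target = site.lookup(linkedResourceId);   // undefined if deleted

        var available = target && (context !== 'live' || target.isLive);
        if (!available) {
            return escapeHtml(linkText);              // no anchor: just the text
        }
        return '<a href="' + site.urlFor(target, context) + '">' +
               escapeHtml(linkText) + '</a>';
    }

    function escapeHtml(text) {
        return text.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
    }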