Journal tags: microformats

Going offline with microformats

For the offline page on my website, I’ve been using a mixture of the Cache API and the localStorage API. My service worker script uses the Cache API to store copies of pages for offline retrieval. But I used the localStorage API to store metadata about the page—title, description, and so on. Then, my offline page would rifle through the pages stored in a cache, and retrieve the corresponding metadata from localStorage.
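
For context, a fetch handler along these lines is enough to put copies of HTML pages into a cache called “pages”. This is a simplified sketch rather than my actual service worker script:

// Simplified sketch: cache copies of HTML pages as they're fetched.
addEventListener('fetch', event => {
  const request = event.request;
  const accept = request.headers.get('Accept') || '';
  if (request.method !== 'GET' || !accept.includes('text/html')) {
    return;
  }
  event.respondWith(
    fetch(request)
      .then(response => {
        // Put a copy of the page into the "pages" cache for offline retrieval.
        const copy = response.clone();
        caches.open('pages').then(cache => cache.put(request, copy));
        return response;
      })
      .catch(() => caches.match(request)) // fall back to the cache when offline
  );
});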

It all worked fine, but as soon as I read Remy’s post about the forehead-slappingly brilliant technique he’s using, I knew I’d be switching my code over. Instead of using localStorage—or any other browser API—to store and retrieve metadata, he uses the pages themselves! Using the Cache API, you can examine the contents of the pages you’ve stored, and get at whatever information you need:

I realised I didn’t need to store anything. HTML is the API.

Refactoring the code for my offline page felt good for a couple of reasons. First of all, I was able to remove a dependency—localStorage—and simplify the JavaScript. That always feels good. But the other reason for the warm fuzzies is that I was able to use data instead of metadata.

Many years ago, Cory Doctorow wrote a piece called Metacrap. In it, he enumerates the many issues with metadata—data about data. The source of many problems is when the metadata is stored separately from the data it describes. The data may get updated, without a corresponding update happening to the metadata. Metadata tends to rot because it’s invisible—out of sight and out of mind.

In fact, that’s always been at the heart of one of the core principles behind microformats. Instead of duplicating information—once as data and again as metadata—repurpose the visible data; mark it up so its meta-information is directly attached to the information itself.

So if you have a person’s contact details on a web page, rather than repeating that information somewhere else—in the head of the document, say—you could instead attach some kind of marker to indicate which bits of the visible information are contact details. In the case of microformats, that’s done with class attributes. You can mark up a page that already has your contact information with classes from the h-card microformat.

Here on my website, I’ve marked up my blog posts, articles, and links using the h-entry microformat. These classes explicitly mark up the content to say “this is the title”, “this is the content”, and so on. This makes it easier for other people to repurpose my content. If, for example, I reply to a post on someone else’s website, and ping them with a webmention, they can retrieve my post and know which bit is the title, which bit is the content, and so on.

When I read Remy’s post about using the Cache API to retrieve information directly from cached pages, I knew I wouldn’t have to do much work. Because all of my posts are already marked up with h-entry classes, I could use those hooks to create a nice offline page.

The markup for my offline page looks like this:

<h1>Offline</h1>
<p>Sorry. It looks like the network connection isn’t working right now.</p>
<div id="history">
</div>

I’ll populate that “history” div with information from a cache called “pages” that I’ve created using the Cache API in my service worker.

I’m going to use async/await to do this because there are lots of steps that rely on the completion of the step before. “Open this cache, then get the keys of that cache, then loop through the pages, then…” All of those thens would lead to some serious indentation without async/await.
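
To illustrate the difference, here’s roughly what the start of the promise-chain version would look like (just a sketch to show the nesting, not code I’m actually using):

// Sketch: the same steps with .then() callbacks start nesting quickly.
caches.open('pages').then(cache => {
  cache.keys().then(keys => {
    keys.forEach(request => {
      cache.match(request).then(response => {
        response.text().then(html => {
          // ...and so on, deeper and deeper.
        });
      });
    });
  });
});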

I’m giving this async function a name, listPages, just like Remy is doing. (Anonymous async functions are allowed, but a name makes the code easier to talk about.) I’m making the listPages function execute immediately:

(async function listPages() {
...
})();

Now for the code to go inside that immediately-invoked function.

I create an array called browsingHistory that I’ll populate with the data I’ll use for that “history” div.

const browsingHistory = [];

I’m going to be parsing web pages later on, so I’m going to need a DOM parser. I give it the imaginative name of …parser.

const parser = new DOMParser();

Time to open up my “pages” cache. This is the first await statement. When the cache is opened, this promise will resolve and I’ll have access to this cache using the variable …cache (again with the imaginative naming).

const cache = await caches.open('pages');

Now I get the keys of the cache—that’s a list of all the page requests in there. This is the second await. Once the keys have been retrieved, I’ll have a variable that’s got a list of all those pages. You’ll never guess what I’m calling the variable that stores the keys of the cache. That’s right …keys!

const keys = await cache.keys();

Time to get looping. I’m getting each request in the list of keys using a for/of loop:

for (const request of keys) {
...
}

Inside the loop, I pull the page out of the cache using the match() method of the Cache API. I’ll store what I get back in a variable called response. As with everything involving the Cache API, this is asynchronous so I need to use the await keyword here.

const response = await cache.match(request);

I’m not interested in the headers of the response. I’m specifically looking for the HTML itself. I can get at that using the text() method. Again, it’s asynchronous and I want this promise to resolve before doing anything else, so I use the await keyword. When the promise resolves, I’ll have a variable called html that contains the body of the response.

const html = await response.text();

Now I can use that DOM parser I created earlier. I’ve got a string of text in the html variable. I can generate a Document Object Model from that string using the parseFromString() method. This isn’t asynchronous so there’s no need for the await keyword.

const dom = parser.parseFromString(html, 'text/html');

Now I’ve got a DOM, which I have creatively stored in a variable called …dom.

I can poke at it using DOM methods like querySelector. I can test to see if this particular page has an h-entry on it by looking for an element with a class attribute containing the value “h-entry”:

if (dom.querySelector('.h-entry h1.p-name')) {
...
}

In this particular case, I’m also checking to see if the h1 element of the page is the title of the h-entry. That’s so that index pages (like my home page) won’t get past this if statement.

Inside the if statement, I’m going to store the data I retrieve from the DOM. I’ll save the data into an object called …data!

const data = new Object;

Well, the first piece of data isn’t actually in the markup: it’s the URL of the page. I can get that from the request variable in my for loop.

data.url = request.url;

I’m going to store the timestamp for this h-entry. I can get that from the datetime attribute of the time element marked up with a class of dt-published.

data.timestamp = new Date(dom.querySelector('.h-entry .dt-published').getAttribute('datetime'));

While I’m at it, I’m going to grab the human-readable date from the innerText property of that same time.dt-published element.

data.published = dom.querySelector('.h-entry .dt-published').innerText;

The title of the h-entry is in the innerText of the element with a class of p-name.

data.title = dom.querySelector('.h-entry .p-name').innerText;

At this point, I am actually going to use some metacrap instead of the visible h-entry content. I don’t output a description of the post anywhere in the body of the page, but I do put it in the head in a meta element. I’ll grab that now.

data.description = dom.querySelector('meta[name="description"]').getAttribute('content');

Alright. I’ve got a URL, a timestamp, a publication date, a title, and a description, all retrieved from the HTML. I’ll stick all of that data into my browsingHistory array.

browsingHistory.push(data);

My if statement and my for/of loop are finished at this point. Here’s how the whole loop looks:

for (const request of keys) {
  const response = await cache.match(request);
  const html = await response.text();
  const dom = parser.parseFromString(html, 'text/html');
  if (dom.querySelector('.h-entry h1.p-name')) {
    const data = new Object;
    data.url = request.url;
    data.timestamp = new Date(dom.querySelector('.h-entry .dt-published').getAttribute('datetime'));
    data.published = dom.querySelector('.h-entry .dt-published').innerText;
    data.title = dom.querySelector('.h-entry .p-name').innerText;
    data.description = dom.querySelector('meta[name="description"]').getAttribute('content');
    browsingHistory.push(data);
  }
}

That’s the data collection part of the code. Now I’m going to take all that yummy information and output it onto the page.

First of all, I want to make sure that the browsingHistory array isn’t empty. There’s no point going any further if it is.

if (browsingHistory.length) {
...
}

Within this if statement, I can do what I want with the data I’ve put into the browsingHistory array.

I’m going to arrange the data by date published. I’m not sure if this is the right thing to do. Maybe it makes more sense to show the pages in the order in which you last visited them. I may end up removing this at some point, but for now, here’s how I sort the browsingHistory array according to the timestamp property of each item within it:

browsingHistory.sort( (a,b) => {
  return b.timestamp - a.timestamp;
});

Now I’m going to concatenate some strings. This is the string of HTML text that will eventually be put into the “history” div. I’m storing the markup in a string called …markup (my imagination knows no bounds).

let markup = '<p>But you still have something to read:</p>';

I’m going to add a chunk of markup for each item of data.

browsingHistory.forEach( data => {
  markup += `
<h2><a href="${ data.url }">${ data.title }</a></h2>
<p>${ data.description }</p>
<p class="meta">${ data.published }</p>
`;
});

With my markup assembled, I can now insert it into the “history” part of my offline page. I’m using the handy insertAdjacentHTML() method to do this.

document.getElementById('history').insertAdjacentHTML('beforeend', markup);

Here’s what my finished JavaScript looks like:

<script>
(async function listPages() {
  const browsingHistory = [];
  const parser = new DOMParser();
  const cache = await caches.open('pages');
  const keys = await cache.keys();
  for (const request of keys) {
    const response = await cache.match(request);
    const html = await response.text();
    const dom = parser.parseFromString(html, 'text/html');
    if (dom.querySelector('.h-entry h1.p-name')) {
      const data = new Object;
      data.url = request.url;
      data.timestamp = new Date(dom.querySelector('.h-entry .dt-published').getAttribute('datetime'));
      data.published = dom.querySelector('.h-entry .dt-published').innerText;
      data.title = dom.querySelector('.h-entry .p-name').innerText;
      data.description = dom.querySelector('meta[name="description"]').getAttribute('content');
      browsingHistory.push(data);
    }
  }
  if (browsingHistory.length) {
    browsingHistory.sort( (a,b) => {
      return b.timestamp - a.timestamp;
    });
    let markup = '<p>But you still have something to read:</p>';
    browsingHistory.forEach( data => {
      markup += `
<h2><a href="${ data.url }">${ data.title }</a></h2>
<p>${ data.description }</p>
<p class="meta">${ data.published }</p>
`;
    });
    document.getElementById('history').insertAdjacentHTML('beforeend', markup);
  }
})();
</script>

I’m pretty happy with that. It’s not too long but it’s still quite readable (I hope). It shows that the Cache API and the h-entry microformat are a match made in heaven.

If you’ve got an offline strategy for your website, and you’re using h-entry to mark up your content, feel free to use that code.

If you don’t have an offline strategy for your website, there’s a book for that.

Webmentions at Indie Web Camp Berlin

I was in Berlin for most of last week, and every day was packed with activity.

By the time I got back to Brighton, my brain was full …just in time for FF Conf.

All of the events were very different, but equally enjoyable. It was also quite nice to just attend events without speaking at them.

Indie Web Camp Berlin was terrific. There was an excellent turnout, and once again, I found that the format was just right: a day of discussions (BarCamp style) followed by a day of doing (coding, designing, hacking). I got very inspired on the first day, so I was raring to go on the second.

What I like to do on the second day is try to complete two tasks; one that’s fairly straightforward, and one that’s a bit tougher. That way, when it comes time to demo at the end of the day, even if I haven’t managed to complete the tougher one, I’ll still be able to demo the simpler one.

In this case, the tougher one was also tricky to demo. It involved a lot of invisible behind-the-scenes plumbing. I was tweaking my webmention endpoint (stop sniggering—tweaking your endpoint is no laughing matter).

Up until now, I could handle straightforward webmentions, and I could handle updates (if I receive more than one webmention from the same link, I check it each time). But I needed to also handle deletions.

The spec is quite clear on this. A 404 isn’t enough to trigger a deletion—that might be a temporary state. But a status of 410 Gone indicates that a resource was once here but has since been deliberately removed. In that situation, any stored webmentions for that link should also be removed.
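
Here’s a rough sketch of that logic (an illustration in JavaScript, not my actual endpoint code; the deleteStoredWebmention helper is a made-up placeholder):

// Sketch: re-check the source of a stored webmention and act on the response status.
async function recheckWebmention(source, target) {
  const response = await fetch(source);
  // A 404 might only be temporary, so it isn't enough to trigger a deletion.
  // A 410 Gone means the source was deliberately removed.
  if (response.status === 410) {
    await deleteStoredWebmention(source, target); // made-up placeholder
  }
}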

Anyway, I think I got it working, but it’s tricky to test and even trickier to demo. “Not to worry”, I thought, “I’ve always got my simpler task.”

For that, I chose to add a little map to my homepage showing the last location I published something from. I’ve been geotagging all my content for years (journal entries, notes, links, articles), but not really doing anything with that data. This is a first step to doing something interesting with many years of location data.

I’ve got it working now, but the demo gods really weren’t with me at Indie Web Camp. Both of my demos failed. The webmention demo failed quite embarrassingly.

As well as handling deletions, I also wanted to handle updates where a URL that once linked to a post of mine no longer does. Just to be clear, the URL still exists—it’s not 404 or 410—but it has been updated to remove the original link back to one of my posts. I know this sounds like another very theoretical situation, but I’ve actually got an example of it on my very first webmention test post from five years ago. Believe it or not, there’s an escort agency in Nottingham that’s using webmention as a vector for spam. They post something that does link to my test post, send a webmention, and then remove the link to my test post. I almost admire their dedication.

Still, I wanted to foil this particular situation so I thought I had updated my code to handle it. Alas, when it came time to demo this, I was using someone else’s computer, and in my attempt to right-click and copy the URL of the spam link …I accidentally triggered it. In front of a room full of people. It was mildly NSFW, but more worryingly, a potential Code Of Conduct violation. I’m very sorry about that.

Apart from the humiliating demo, I thoroughly enjoyed Indie Web Camp, and I’m going to keep adjusting my webmention endpoint. There was a terrific discussion around the ethical implications of storing webmentions, led by Sebastian, based on his epic post from earlier this year.

We established early in the discussion that we weren’t going to try to solve legal questions—like GDPR “compliance”, which varies depending on which lawyer you talk to—but rather try to figure out what the right thing to do is.

Earlier that day, during the introductions, I quite happily showed webmentions in action on my site. I pointed out that my last blog post had received a response from another site, and because that response was marked up as an h-entry, I displayed it in full on my site. I thought this was all hunky-dory, but now this discussion around privacy made me question some inferences I was making:

  1. By receiving a webmention in the first place, I was inferring a willingness for the link to be made public. That’s not necessarily true, as someone pointed out: a CMS could be automatically sending webmentions, which the author might be unaware of.
  2. If the linking post is marked up in h-entry, I was inferring a willingness for the content to be republished. Again, not necessarily true.

That second inference of mine—that publishing in a particular format somehow grants permissions—actually has an interesting precedent: Google AMP. Simply by including the Google AMP script on a web page, you are implicitly giving Google permission to store a complete copy of that page and serve it from their servers instead of sending people to your site. No terms and conditions. No checkbox ticked. No “I agree” button pressed.

Just sayin’.

Anyway, when it comes to my own processing of webmentions, I’m going to take some of the suggestions from the discussion on board. There are certain signals I could be looking for in the linking post:

  • Does it include a link to a licence?
  • Is there a restrictive robots.txt file?
  • Are there meta declarations that say noindex?

Each one of these could help to infer whether or not I should be publishing a webmention. I quickly realised that what we’re talking about here is an algorithm.
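
To make that concrete, here’s a sketch of what checking some of those signals might look like. It’s an illustration only; which signals to use, and how to weigh them, is exactly the part I haven’t decided yet:

// Illustrative sketch: look for signals in a linking page's parsed DOM.
// (The robots.txt check would need a separate request to the linking site.)
function gatherSignals(dom) {
  return {
    // Does the page include a link to a licence?
    licence: Boolean(dom.querySelector('a[rel~="license"], link[rel~="license"]')),
    // Are there meta declarations that say noindex?
    noindex: Boolean(dom.querySelector('meta[name="robots"][content*="noindex"]'))
  };
}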

Despite its current usage to mean “magic”, an algorithm is a recipe. It’s a series of steps that contribute to a decision point. The problem is that, in the case of silos like Facebook or Instagram, the algorithms are secret (which probably contributes to their aura of magical thinking). If I’m going to write an algorithm that handles other people’s information, I don’t want to make that mistake. Whatever steps I end up codifying in my webmention endpoint, I’ll be sure to document them publicly.

rel="source"

Aral and his trusty sidekick Victor have taken up residency for a while at the Clearleft office in Middle Street while they work on their very exciting project. It’s nice having them around.

I got chatting to Aral about a markup pattern that’s become fairly prevalent since the rise of Github: linking to the source code for a website or project. You know, like when you see “fork me on Github” links.

We were talking about how it would be nice to have some machine-readable way of explicitly marking up that kind of link, whether it’s in the head of the document, or visible in the body. Sounds like a job for the rel attribute, I thought.

The rel attribute describes the relationship of the current document to the linked document. You can use it on the link element (in the head of your document) and the a element (in the body). The example that everyone is familiar with is rel=”stylesheet” when linking off to a CSS file—the linked document has the relationship of being a stylesheet for the current document.

The rel attribute could theoretically take a space-separated list of any values, just like the class attribute. In practice, there’s much more value in having everyone agree on which rel values should be used.

There used to be a page on the WHATWG site for listing rel values, but it tended to stagnate. So now the official registry for rel values is on the microformats wiki. That’s where you can see which values are recommended for use today and you can also brainstorm new ideas.

The benefit of having one centralised registry for this is that you can see if someone else has had the same idea as you. Then you can come to an agreement on which value to use, so that everyone’s using the same vocabulary instead of just making stuff up.

It doesn’t look like there’s an existing value for the use case of linking to a document’s (or a project’s) source code so I’ve proposed rel=”source”.

Now I should document some use cases of people linking their site to its source code. It might be that wikis qualify as another use case: every “edit” button points to the source of the document in wiki markup.

If you have any thoughts on this pattern, or examples to add, please feel free to add them.

Parsing webmentions

Thanks to everyone who helped me test webmentions that I hacked together at Indie Web Camp last weekend.

Let me explain what web mentions are all about…

Basically, it’s an equivalent to pingback. Let’s say I write something here on adactio.com. Suppose that prompts you to write something in response on your own site. A web mention is a way for you to let me know that your response exists.

If you look in the head of any of my journal posts, you’ll see this link element:

<link rel="webmention" href="http://adactio.com/webmention.php" />

That’s my web mention endpoint: http://adactio.com/webmention.php …it’s kind of like a webhook: a URL that’s intended to be hit by machines rather than people. So when you publish your response to my post, you ping that URL with a POST request that sends two parameters:

  1. target: the URL of my post and
  2. source: the URL of your response.

Ideally your own CMS or blogging system would take care of doing the pinging, but until that’s more widely implemented, I’m providing a form at the end of each of my posts.

Either way, once you ping my web mention endpoint—discoverable through that link rel="webmention"—with those two parameters, I just need to confirm that your post does indeed contain a link to my post—by making a cURL request and parsing your source—and then I return a server response of 202 (Accepted).

Here’s the code for a minimum viable web mention in PHP.
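
If PHP isn’t your thing, the same flow translates readily enough. Here’s a rough JavaScript sketch (assuming Node 18 or later with Express; an illustration of the steps rather than my actual endpoint):

// Sketch of a minimum viable webmention receiver: take source and target,
// confirm that source links to target, and respond with 202 (Accepted).
const express = require('express');
const app = express();
app.use(express.urlencoded({ extended: false }));

app.post('/webmention', async (req, res) => {
  const { source, target } = req.body;
  if (!source || !target) {
    return res.sendStatus(400);
  }
  const response = await fetch(source);
  const html = await response.text();
  if (!html.includes(target)) {
    return res.sendStatus(400);
  }
  // Store the source somewhere for later parsing, then acknowledge it.
  res.sendStatus(202);
});

app.listen(8080);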

That’s as far as I got at Indie Web Camp but it was enough for me to start collecting responses to posts.

Webmentions as links

The next step is to do something with the responses. After all, I’ve already got the source of each response from those cURL requests.

Barnaby has written a nice straightforward microformats parser in PHP. I’m using that to check the cURLed source for any responses that have been marked up using h-entry. That’s one of the microformats 2 vocabularies—a much simpler way of writing structured content with microformats.

Aaron, Amber, and Barnaby all sent responses that were marked up with h-entry so now their responses appear in full.

Webmentions as comments

So there you have it. Comments are now open on every journal post on adactio.com …the only catch is that you have to write the comment on your own site. And if you want the content of your post to appear here (instead of just a link) then update your blog post template to include a handful of h-entry classes.

Feel free to use this post as a test. Mark up your blog with h-entry, write a post that links to this URL, and enter the URL of your post in the form below.

Figuring out

A recent simplequiz over on HTML5 Doctor threw up some interesting semantic issues. Although the figure element wasn’t the main focus of the article, a lot of the comments were concerned with how it could and should be used.

This is a subject that has been covered before on HTML5 Doctor. It turns out that we may have been too restrictive in our use of the element, mistaking some descriptive text in the spec for proscriptive instruction:

The element can thus be used to annotate illustrations, diagrams, photos, code listings, etc, that are referred to from the main content of the document, but that could, without affecting the flow of the document, be moved away from that primary content, e.g. to the side of the page, to dedicated pages, or to an appendix.

Steve and Bruce have been campaigning on the HTML mailing list to get the wording updated and clarified.

Meanwhile, in an unrelated semantic question, there was another HTML5 Doctor article a while back about quoting and citing with blockquote and its ilk.

The two questions come together beautifully in a blog post on the newly-redesigned A List Apart documenting this pattern for associating quotations and authorship:

<figure>
 <blockquote>It is the unofficial force—the Baker Street irregulars.</blockquote>
 <figcaption>Sherlock Holmes, <cite>Sign of Four</cite></figcaption>
</figure>

Although, unsurprisingly, I still take issue with the decision in HTML5 not to allow the cite element to apply to people. As I’ve said before we don’t have to accept this restriction:

Join me in a campaign of civil disobedience against the unnecessarily restrictive, backwards-incompatible change to the cite element.

In which case, we get this nice little pattern combining figure, blockquote, cite, and the hCard microformat, like this:

<figure>
 <blockquote>It is the unofficial force—the Baker Street irregulars.</blockquote>
 <figcaption class="vcard"><cite class="fn">Sherlock Holmes</cite>, <cite>Sign of Four</cite></figcaption>
</figure>

Or like this:

<figure>
 <blockquote>Join me in a campaign of civil disobedience against the unnecessarily restrictive, backwards-incompatible change to the cite element.</blockquote>
 <figcaption class="vcard"><cite class="fn">Jeremy Keith</cite>, <a href="http://24ways.org/2009/incite-a-riot/"><cite>Incite A Riot</cite></a></figcaption>
</figure>

Making Workshops for the Web

The latest Clearleft offering is Workshops for the Web. It made sense to move our workshop offerings out of the Clearleft site—where they were kind of distracting from the main message of the company—and give them their own home, just like our other events, dConstruct and UX London.

As well as the range of workshops that can be booked privately at any time, there’s a schedule of upcoming public workshops for 2010:

  1. CSS3 Wizardry on January 29th,
  2. Copywriting for the Web on March 5th,
  3. HTML5 for Web Designers on April 23rd,
  4. UX Fundamentals on June 11th and
  5. Usability Testing on July 16th.

The next workshop, CSS3 Wizardry with Rich and Nat, promises to be packed full of cutting-edge front-end techniques. Book a place if you want to have CSS3 kung-fu injected into your brainstem.

Visual Design

I’m pretty pleased with how the site turned out. When I began designing it initially, I thought I would give it a sort of Russian constructivist feeling: the title Workshops for the Web made me think of an international workers movement. I started researching political propaganda posters, beginning with the book Revolutionary Tides.

Revolutionary Tides

There’s also some fantastic propaganda material in The National Archives (and I just love the modern twist of World War Three propaganda posters). I found a treasure trove of images of American working life in the Flickr Commons collection from The Library of Congress. I started gathering these sources together and distilling some of the common components such as bold colours and diagonal lines.

Workers of the web: unite!

This was when Jon was working as an intern at Clearleft. I enlisted his help in brainstorming some ideas and he came up with some great stuff—like using Soviet space-race imagery—and we played around with proof-of-concept ideas for creating diagonal backgrounds using CSS3 transforms.

But it never really came together for me. Much as I loved the Russian constructivist propaganda angle, I ditched it and started from scratch.

IA

I scribbled down a page description diagram describing what the site needed to communicate in order of importance:

  1. The name of the site.
  2. A positioning statement.
  3. The next workshop.
  4. Other upcoming workshops.
  5. A list of all workshops available.
  6. A way of getting in touch.

The hierarchy for an individual workshop page looked pretty similar:

  1. The title of the workshop.
  2. The date of the workshop.
  3. The location of the workshop.
  4. The price of the workshop.
  5. Details of the workshop.

It was clear that the page needed to quickly answer some basic questions: what? where? how much?

I started marking up the answers to those questions from top to bottom. That’s when it started to come together. Working with markup and CSS in the browser felt more productive than any of the sketching I had done in Photoshop. I started really sweating the typography …to the extent that I decided that even the logotype should be created with “live” text rather than an image.

Build

From the start, I knew that I wanted the site to be a self-describing example of the technologies taught in the workshops. The site is built in HTML5, making good use of the new structural elements and the powerful outline algorithm. Marking up an events site with the hCalendar microformat was a no-brainer. There are hCards a-plenty too.

CSS3 nth-child selectors came in very handy and media queries are, quite simply, the bee’s knees when it comes to building a flexible site: just a few declarations allowed me to make sure the liquid layout could be optimised for different ranges of viewport size.

Workshops for the Web homepage

Given the audience of the site, I could be fairly certain that Internet Explorer 6 wouldn’t be much of a hindrance. As it turns out, everything looks more or less okay even in that crappy browser. It looks different, of course, but then do websites need to look exactly the same in every browser?

Right before launch, Paul took a shot at tweaking the visual design, adding a bit more contrast and separation on the homepage with some horizontal banding. That’s a visual element that I had been subconsciously avoiding, probably because it’s already used on some of our other sites, but once it was added, it helped to emphasise the next upcoming workshop—the main purpose of the homepage.

Just because the site is live now doesn’t mean that I’ll stop working on it. I’d like to keep tweaking and evolving it. Maybe I’ll finally figure out a way of incorporating some elements of those great propaganda posters.

Propaganda

Microformation

It’s been a busy week for microformats.

Google announced that it was following in the footsteps of Yahoo in indexing microformats and RDFa to display in search results. For now, it’s a subset of microformats—hCard and hReview—on a subset of websites, including the newly microformatted Yelp. The list of approved sites will increase over time so if you’re already publishing structured contact and review information, let Google know about it.

But the other new announcement is equally important. After a lot of hard work, the value class pattern is ready for use.

The what now? I hear you ask. Well, if you’ve been feeling hampered by the combination of the abbr and datetime design patterns, the value class pattern offers a few alternatives.

To my mind, that’s one of the greatest strengths of the value class pattern: it doesn’t offer one alternative, it allows authors to choose how they want to mark up their content. I think that’s one of the reasons why datetime values have proven to be such a sticking point up till now. Concerns about semantics and accessibility really come down to the fact that, as an author, you had very little choice in how you could mark up a datetime value.

You could either present the datetime between the opening and closing tags of whatever element you were using:

<span class="dtstart">
 2009-06-05T20:00:00
</span>

…or you could put the value in the title attribute of the abbr element:

<abbr class="dtstart" title="2009-06-05T20:00:00">
 Friday, June 5th at 8pm
</abbr>

Those were your only options.

But now, with the value class pattern, all of the following are possible:

<span class="dtstart">
 <abbr class="value" title="2009-06-05">
  Friday, June 5th
 </abbr>
 at
 <abbr class="value" title="20:00">
  8pm
 </abbr>
</span>

<span class="dtstart">
 <span class="value-title" title="2009-06-05T20:00:00">
  Friday, June 5th at 8pm
 </span>
</span>

<span class="dtstart">
 <span class="value-title" title="2009-06-05">
  Friday, June 5th
 </span>
 at
 <span class="value-title" title="20:00">
  8pm
 </span>
</span>

<span class="dtstart">
 <span class="value-title" title="2009-06-05T20:00:00"> </span>
 Friday, June 5th at 8pm
</span>

Personally, I’ll probably use the first example. I like the idea of splitting up the date and time portions of a datetime value. I think there’s a big difference between putting a date string into the title attribute of an abbr element and putting a datetime string in there. In the past, when I argued that having an ISO date value in an abbreviation was semantic, accessible and internationalised, Mike Davies rightly accused me of using a strawman—the issue wasn’t about dates, it was about datetimes. That’s why I created the datetime design pattern page on the microformats wiki: to disambiguate it as a subset of the larger abbr design pattern.

Now, others might think that even using dates in combination with the abbr design pattern is semantically dodgy. That’s fine. They now have some other options they can use, thanks to the value-title subset of the value class pattern. Me? I don’t see myself using that. I’m especially not keen on the option to use an empty element. But I’m perfectly happy for other authors to go ahead and use that option. When it comes to writing, there are often no right or wrong answers, just personal preferences. That’s true whether it’s English, HTML, or any other language. As long as you use correct syntax and grammar, the details are up to you. You can choose semicolons or em-dashes when you’re writing English. You can choose abbr or value-title when you’re writing microformats.

The wiki page for the value class pattern doesn’t just list the options available to authors. It also explains them. That’s just as important. Head over there and read the document. I think you’ll agree that it’s an excellent example of clear, methodical writing.

The microformats wiki needs more pages like that. One of the biggest challenges facing microformats isn’t any particular technical problem; it’s trying to explain to willing HTML authors how to get up and running with microformats. Given Google’s recent announcement, there’ll probably be even more eager authors showing up, looking to sprinkle some extra semantics into their markup. We’ll be hanging out in the IRC channel, ready to answer any questions people might have, but I wish the wiki were laid out in a more self-explanatory way.

In the face of that challenge, the page for the value class pattern leads by example. Ben and Tantek have done a fantastic job. And it wouldn’t have been possible without the help and support of Bruce, James and Derek, those magnanimous giants of the accessibility community who offered help, support and data.

Small world, loosely joined

I’m in Seattle. Dopplr tells me that Bobbie is showing up in Seattle on the last day of my visit. I send Bobbie a direct message on Twitter. He tells me the name of the hotel he’ll be staying at.

I use Google Maps to find the exact address. All addresses on Google Maps are marked up with hCard. I press a bookmarklet in my bookmarks bar to download the converted vcard into my address book. Thanks to syncing, my updated address book is soon in the cloud. My phone gets the updated information within moments.

I go to the address. I meet Bobbie. We have coffee. We have a chat.

The World Wide Web is a beautiful piece of social software.

Machine-tagging Huffduffer some more

After I wrote about the hoops I had to jump through to get Amazon’s API to output JSON (via XSLT), Tom detailed a way of avoiding JSON by using XML-RPC. That’s very kind of him but the truth is that:

  1. I like dealing with JSON and
  2. the XSL transformation is done by Amazon, not me; that wouldn’t be the case if I used XML-RPC.

Anyway, having successfully created a Huffduffer-Amazon bridge using machine tags, I thought I’d do a little more hacking. Instead of restricting the mashup love to Amazon, I figured that Last.fm would be the perfect place to pull in information for anything tagged with the music namespace.

Last.fm has quite a full-featured API and yes, it can output JSON. To start with, I’m using the artist.getInfo method for anything tagged with music:artist=..., music:singer=... or music:band=.... Here are some examples:

I’m pulling a summary of the artist’s bio, a list of similar artists and a picture of the artist in question. For maximum effect, view in Safari, the browser with the finest implementation of .
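
The request itself is simple enough. Here’s a rough sketch of fetching that artist information as JSON (the API key is a placeholder, and this is an illustration rather than the actual Huffduffer code):

// Sketch: fetch artist info from Last.fm as JSON for a music:artist machine tag value.
async function getArtistInfo(artist) {
  const url = 'https://ws.audioscrobbler.com/2.0/'
    + '?method=artist.getinfo'
    + '&artist=' + encodeURIComponent(artist)
    + '&api_key=YOUR_API_KEY' // placeholder
    + '&format=json';
  const response = await fetch(url);
  const data = await response.json();
  // The artist object includes the bio summary, similar artists, and images.
  return data.artist;
}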

Nice as Last.fm’s API is, it’s not without its quirks. Like most APIs, the methods are divided into those that require authentication (anything of a sensitive nature) and those that don’t (publicly available information). The method user.getInfo requires authentication. Yet, every piece of information returned by that method is available on the public profile.

So when I wanted to find a Last.fm user’s profile picture—having figured out through the Social Graph API when someone on Huffduffer has a Last.fm account—it made far more sense for me to parse the microformatted public URL than to use the API method.

Just over two years ago, Drew delivered a superb presentation called Can Your Website Be Your API? In some situations, the answer is definitely “Yes.”

Update: It all ties together, as Julian explains on Twitter:

@adactio ha, I went to Drew’s presentation you mentioned on your blog; it made me add microformats to Last.fm in the first place :D

Hacking Huffduffer with Last.fm

The hack day took place in London yesterday. Much nerdy fun was had by all and some very cool hacks were produced.

Nigel made a neat USB-powered Arduino-driven ambient signifier à la Availabot that lights up when one of your friends is listening to music. Matt made Songcolours which takes your recently listened-to music, passes the songs through LyricWiki, extracts words that are colours, passes them through the Google chart API and generates a sentence of cut up lyrics (Hannah’s was the best: love drunk home fuck good night). The winning hack, Staff Wars, is a Last.fm-powered quiz that allows people to battle for control of the office stereo—something that could prove very useful at Clearleft.

I knew I’d never be able to compete with the l33t hax0rs in attendance, so I cobbled together a very quick little hack to enhance Huffduffer. I hacked it together fairly quickly which gave me some time to hang out with Hannah in the tragically hip environs of Shoreditch. My hack has one interesting distinguishing feature: it doesn’t make use of the Last.fm API. Instead, it uses two simpler technologies: microformats and RSS.

  1. Microformats. User profiles on Last.fm are marked up with hCard. If a URL is provided, the user profile also makes use of the most powerful value of XFN: rel="me". If that URL also links back to the Last.fm profile with rel="me"—even if in a roundabout way—that reciprocal link will be picked up by Google’s Social Graph API. I’m already making use of that API on Huffduffer to display links to other profiles under the heading Elsewhere. So if someone provides a URL when they sign up to Huffduffer and they’re linking to their social network profiles, I can find out if they use Last.fm and what their username is. The URL structure of user profiles is consistent: http://www.last.fm/user/USERNAME.
  2. RSS. Last.fm provides users with a list of recommended free MP3s. This list is also provided as RSS. More specifically, the RSS feed is a podcast. After all, a podcast is nothing more than an RSS feed that uses enclosures. The URL structure of these podcasts is consistent: http://ws.audioscrobbler.com/2.0/user/USERNAME/podcast.rss.

So if, thanks to the magic of XFN, I can figure out someone’s Last.fm username, it’s a simple matter to pull in their recommended music podcast. I’m pulling in the latest three recommended MP3s and displaying them on Huffduffer user profiles under the heading Last.fm recommends. You can see it in action on my Huffduffer profile or the profiles of any other good social citizens like Richard, Tom or Brian.
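
Because both URL structures are predictable, the glue code is mostly string handling plus a feed fetch. Here’s a rough sketch (an illustration only; a real implementation would parse the RSS feed properly and handle failures):

// Sketch: from a Last.fm profile URL, build the podcast feed URL and pull out
// the first few enclosure URLs (the recommended MP3s).
async function lastfmRecommendations(profileUrl, limit = 3) {
  const username = profileUrl.replace(/\/+$/, '').split('/').pop();
  const feedUrl = 'http://ws.audioscrobbler.com/2.0/user/'
    + encodeURIComponent(username) + '/podcast.rss';
  const response = await fetch(feedUrl);
  const rss = await response.text();
  // Crude enclosure extraction for illustration; a proper RSS parser would be better.
  const enclosures = [...rss.matchAll(/<enclosure[^>]*url="([^"]+)"/g)];
  return enclosures.slice(0, limit).map(match => match[1]);
}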

This isn’t the first little Huffduffer hack I’ve built on top of the Social Graph API. If a Huffduffer user has a Flickr account, their Flickr profile picture is displayed on their Huffduffer profile. When I get some time, I need to expand this little hack to also check for Twitter profiles and grab the profile picture from there as a fallback.

None of these little enhancements are essential features but I like the idea of rewarding people on Huffduffer for their activity on other sites. Ideally I’d like to have Huffduffer’s recommendation engine being partially driven by relationships on third-party sites. So your user profile might suggest something like, You should listen to this because so-and-so huffduffed it; you know one another on Twitter, Flickr, Last.fm…

hCard Wizard

The microformats meetup in San Francisco after An Event Apart had quite a turnout. The gathering was spoiled only by Jenn getting her purse stolen. Two evenings earlier, Noel had been robbed at gunpoint. San Francisco wasn’t exactly showing its best side.

Still, the microformats meetup was a pleasant get-together. Matthew Levine pulled out his laptop and gave me a demo of the Lazy Web in action…

On the first day of An Event Apart, I twittered a reminder that my liveblogging posts were filled with hCards. Christian asked how I added the hCards and I replied that, while I just add them by hand, some kind of “wizard” for adding simple hCards to any textarea would be very welcome.

Less than 48 hours later, Matthew had whipped up exactly what I asked for. It’s a bookmarklet. Drag it to your bookmarks bar and click on it whenever you want to add a simple hCard. It uses JavaScript to create a faux window with a form where you are prompted to enter given name and family name. You can also add a middle name and a URL.

This is just a small subset of all the properties available in hCard so it isn’t suitable for detailed hCards. If you’re creating the markup for a contact page, for example, you’d be better off with the hCard-o-matic. But this little bookmarklet easily hits 80% of the use cases for adding hCards within body text (like in a blog post, for example).

This is a first release and there will inevitably be improvements. The ability to add XFN values would be a real boon. Still… that’s really impressive work for something that was knocked together so quickly.

If you want to use the bookmarklet (regardless of what blogging engine or CMS you use), drag this to your bookmarks bar:

hCard Wizard

Open Tech 2008

Open Tech was fun. It was like a more structured version of BarCamp: the schedule was planned in advance and there was a nominal entrance fee of £5 but apart from that, it was pretty much OpenCamp. Most of the talks were twenty minutes long, grouped into hour-long thematically linked trilogies.

Things kicked off with a three way attack by Kim Plowright, Simon Wardley and Matt Webb. I particularly enjoyed Matt’s stroll down the memory lane of the birth of cybernetics. Alas, the fact that I stayed to enjoy this history lesson meant that I missed David Hayes’s introduction to Edenbee. But I did stick around for the next set of environment-related talks including a demo of the Wattson from DIY Kyoto and the always-excellent Gavin Starks of AMEE fame.

After a pub lunch spent being entertained by Ewan Spence’s thoroughly researched plan for a muppet remake of Star Wars, I made it back in time for a well-connected burst of talks from Simon, Gavin and Paul. Simon pimped OpenID. Gavin delivered a healthy dose of perspective from the h’internet. Paul ranted about the technologies depicted in his wonderful illustration entitled The Web is Agreement.

I made sure to catch the state of the nation address from Open Street Map. It was, as expected, inspiring. It’s quite amazing how far the project has come since the last Open Tech in 2005. Hearing about the wealth of data available gave me the kick up the arse to update the dConstruct location page to use Open Street Map tiles. It turned out to be a simple process involving the addition of just a few more lines of JavaScript.

My talk at Open Tech was a reprise of my XTech presentation, Creating Portable Social Networks With Microformats although the title on the schedule was Publishing With Microformats. I figured that the Open Tech audience would be fairly advanced so I decided against my original plan of doing an introductory level talk. The social network portability angle also tied in with quite a few other talks on the day.

I shared my slot with Jeni Tennison who gave a hands-on look at RDFa at the London Gazette. The two talks complemented each other well… just like microformats and RDFa. As Jeni said, microformats are great for doing the easy stuff—the low-hanging fruit—and deliberately avoid more complex data structures: they hit 80% of the use cases with 20% of the effort. RDFa, on the other hand, can handle greater complexity but with a higher learning curve. RDFa covers the other 20% of use cases but with 80% effort. Jeni’s case study was the perfect example. Whereas I had been showing the simple patterns of user profiles and relationships on social networks (easily encoded with hCard and XFN), she was dealing with a very specific data set that required its own ontology.

I was chatting with Dan at the start of Open Tech about this relationship. We’re both pretty fed up with the technologies being set up as somehow being rivals. Personally, I’m very happy that RDFa covers the kind of data structures that microformats doesn’t touch. When someone comes to the microformats community with an idea for a complex data format, it’s handy to have another technology to point them to. If you’re dealing with simple, common structures that have aggregate benefit like contact details, events and reviews, microformats are the perfect fit. But if you’re dealing with more complex structures—and I’m thinking here about museum collections, libraries and laboratories—chances are that some flavour of RDF is going to be more suitable.

Jeni and I briefly discussed whether we should set up our talks as a kind of mock battle. But that kind of rivalry, even when it’s done in a jokey fashion, is unnecessary and frankly, more than a little bit dispiriting. It’s more constructive to talk about real-world use cases. On that basis, I think our Open Tech presentations hit the right note.

Open Tech schedule

I’ll be heading up to the University of London tomorrow for Open Tech 2008. The last Open Tech was in 2005 which was, by all accounts, a legendary affair—it led directly to the creation of the ORG.

I’ll be speaking about microformats, probably reworking some of the things I was talking about at XTech. It looks like there’ll be quite a lot of discussion around social networks, portability and privacy so I’m going to concentrate on XFN and hCard. Speaking of which, be sure to read Ben’s excellent article on Digital Web and then check out David’s superb implementation of the Social Graph API: what a productive pair of flatmates!

I put together an hCalendar schedule for Open Tech so if you’re going along, you might want to subscribe. I recommend subscribing over downloading as the schedule is likely to change. I’ll do my best to update the hCalendar document accordingly. Depending on the WiFi situation and how knackered I am after the early start from Brighton, I may try to do some liveblogging.

XTech 2008

I enjoyed being back in Ireland. Jessica and I arrived into Dublin last Saturday but went straight from the airport to the train station so that we could spend the weekend in seeing family and friends. Said town was somewhat overwhelmed by the arrival of .

We were back in Dublin in plenty of time for the start of this year’s XTech conference. A good time was had by the übergeeks gathered in the salubrious surroundings of a newly-opened hotel in the heart of Ireland’s capital. This was my third XTech and it had much the same feel as the previous two I’ve attended: very techy but nice and cosy. In some ways it resembles a BarCamp (but with a heftier price tag). The talks are held in fairly intimate rooms that lend themselves well to participation and discussion.

I didn’t try to attend every talk — an impossible task anyway given the triple-track nature of the schedule — but I did my damndest to liveblog the talks I did attend:

  1. Opening Keynote by David Recordon.
  2. Using socially-authored content to provide new routes through existing content archives by Rob Lee.
  3. Browsers on the Move: The Year in Review, the Year Ahead by Michael Smith.
  4. Building the Real-time Web by Matt Biddulph, Seth Fitzsimmons, Rabble and Ralph Meijer.
  5. AMEE — The World’s Energy Meter by Gavin Starks.
  6. Ni Hao, Monde: Connecting Communities Across Cultural and Linguistic Boundaries by Simon Batistoni.
  7. Data Portability For Whom? by Gavin Bell.
  8. Why You Should Have a Web Site by Steven Pemberton.
  9. Orangutans, Oxen and Ogham Stones by Sean McGrath.

There were a number of emergent themes around social networks and portability. There was plenty of SemWeb stuff which finally seems to be moving from the theoretical to the practical. And the importance of XMPP, first impressed upon me at the Social Graph Foo Camp, was once again made clear.

Amongst all these high-level technical talks, I gave a presentation that was ludicrously simple and simplistic: Creating Portable Social Networks with Microformats. To be honest, I could have delivered the talk in 60 seconds: Add rel="me" to these links, add rel="contact" to those links, and that’s it. If you’re interested, you can download a PDF of the presentation including notes.

I made an attempt to record my talk using Audio Hijack. It seems to have worked okay so I’ll set about getting that audio file transcribed. The audio includes an unusual gap at around the four minute mark, just as I was hitting my stride. This was the point when Aral came into the room and very gravely told me that he needed me to come out into the corridor for an important message. I feared the worst. I was almost relieved when I was confronted by a group of geeks who proceeded to break into song. You can guess what the song was.

Ian caught the whole thing on video. Why does this keep happening to me?

Sand E. Eggo

I’m in San Diego for Jared’s Web App Summit. It’s my first time here and I find myself quite won over by the city’s charm. It’s a shiny sparkly kind of place.

The conference kicked off with a day of workshops. I should have tried to gatecrash Luke or Indy’s sessions but with the weather being so nice, I bunked off with Derek, Keith and Cindy to venture across the water from Coronado to explore the city. With no plan in mind, we found our path took us to the USS Midway, now a floating museum. We spent the rest of the afternoon geeking out over planes and naval equipment.

I got my talk about Ajax design challenges out of the way yesterday. It seemed to go pretty well. It might have been a little bit too techy for some of the audience here but I’ve received some very nice comments from a lot of people. As usual, the presentation is licensed under a Creative Commons attribution license. Feel free to download the slides but the usual caveat applies: the slides don’t make all that much sense in isolation.

With that out of the way, I was able to relax and enjoy the rest of the day. The highlight for me was listening to Bill Scott talk about interaction anti-patterns. I found myself nodding vigorously in agreement with his research and recommendations. But I must join in the clamour of voices calling for Bill to put this stuff online somewhere. I would love to have a URL I could point to next time I’m arguing against adding borked behaviour to a web app.

The conference continues today. Jason Fried kicked off the day’s talks and Keith and Derek will be in the spotlight later on (it’s always convenient when Derek is on the same bill as me because I can fob off all the Ajax accessibility issues on him).

Before making the long journey back to the UK I’ve got a social event I’m looking forward to attending. There’s a microformats dinner tonight; Tantek is in town too for a CSS Working Group meetup. Come along to Gateway to India at 9520 Black Mountain Road if you’re in San Diego. We can combine a vegetarian Indian buffet with semantic geekery.

Viva

My trip to MIX08 was also my first visit to Las Vegas. I’m sure I’m not the first person to make this observation but may I just say: what an odd place!

I experienced first-hand what Dan was talking about in his presentation Learning Interaction Design from Las Vegas. In getting from A to B, for any value of A and any value of B, all routes lead through the casino floor; the smoky, smoky casino floor. If it wasn’t for the fact that I had to hunt down an Apple Store to try to deal with my broken Macbook—more on that later—I wouldn’t have stepped outside the hotel/conference venue for the duration of my stay. Also, from the perspective of only seeing The Strip, visiting Las Vegas was like Children of Men or a bizarro version of Logan’s Run.

But enough on the locale, what about the event? Well, it was certainly quite different to South by Southwest. Southby is full of geeks, MIX was full of nerds. Now I understand the difference.

I was there to hear about Internet Explorer 8. Sure enough, right after some introductory remarks from Ray Ozzie, the keynote presentation included a slot for Dean Hachamovitch to showcase new features and announce the first beta release. I then had to endure three hours of Silverlight demos but I was fortunate enough to be sitting next to PPK so I spent most of the time leaning over his laptop while he put the beta through its paces.

After the keynote, Chris Wilson gave a talk wherein he ran through all the new features. It goes without saying that the most important “feature” is that the version targeting default behaviour is now fixed: IE8 will behave as IE8 by default. I am, of course, ecstatic about this and I conveyed my happiness to Chris and anyone else who would listen.

IE8 is aiming for full CSS2.1 support. Don’t expect any CSS3 treats: Chris said that the philosophy behind choosing which standards to support was to go for the standards that are finished. That makes a lot of sense. But then this attitude is somewhat contradicted by the inclusion of some HTML5 features. Not that I’m complaining: URL hash updates (for bookmarking) and offline storage are very welcome additions for anyone doing any Ajax work.

Overall IE8 is still going to be a laggard compared to Firefox, Safari and Opera when it comes to standards but I’m very encouraged by the attitude that the team are taking. Web standards are the star by which they will steer their course. That’s good for everyone. And please remember, the version available now is very much a beta release so don’t get too discouraged by any initial breakage.

I’m less happy about the closed nature of the development process at Microsoft. Despite Molly’s superheroic efforts in encouraging more transparency, there were a number of announcements that I wish hadn’t been surprises. Anne Van Kesteren outlines some issues, most of them related to Microsoft’s continued insistence on ignoring existing work in favour of reinventing the wheel. The new XDomainRequest Object is the most egregious example of ignoring existing community efforts. Anne also has some issues with IE’s implementation of ARIA but for me personally, that’s outweighed by the sheer joy of seeing ARIA supported at all: a very, very welcome development that creates a solid baseline of support (you can start taking bets now on how long it will take to make it into a nightly build of WebKit, the last bulwark).

The new WebSlices technology is based heavily on hAtom. Fair play to Microsoft: not once do they refer to their “hSlice” set of class names as a microformat. It’s clear that they’ve been paying close attention to the microformats community, right down to the licensing: I never thought I’d hear a Microsoft keynote in which technology was released under a Creative Commons Public Domain license. Seeing as they are well aware of microformats, I asked Chris why they didn’t include native support for hCard and hCalendar. This would be a chance for Internet Explorer to actually leapfrog Firefox. Instead of copying (see the Firebug clone they’ve built for debugging), here was an opportunity to take advantage of the fact that Mozilla have dropped the ball: they promised native support for microformats in Firefox 3 but they are now reneging on that promise. Chris’s response was that the user experience would be too inconsistent. Using the tried and tested “my mom” test, Chris explained that his mom would wonder why only some events and contact details were exportable but not others. But surely that also applies to WebSlices? The number of WebSlices on the Web right now is close to zero. Microsoft are hoping to increase that number by building in a WebSlice parser into their browser; if they had taken the same attitude with hCard and hCalendar, they themselves could have helped break the chicken’n’egg cycle by encouraging more microformat deployment through native browser support.

Overall though, I’m very happy with the direction that Internet Explorer is taking even if, like John, I have some implementation quibbles.

Having experienced a big Microsoft event first-hand, I still don’t know whether to be optimistic or pessimistic about the company. I get the impression that there are really two Microsofts. There’s Ray Ozzie’s Microsoft. He’s a geek. He gets developers. He understands technology and users. Then there’s Steve Ballmer’s Microsoft. He’s an old-school businessman in the mold of Scrooge McDuck. If Ray Ozzie is calling the shots, then there is reason to be hopeful for the future. If the buck stops with Steve Ballmer however, Microsoft is f**ked.

Semantopoly

A pleasant Saturday afternoon of tea and burlesque was followed by a pleasant Saturday evening of cocktails, conversation and Guitar Hero at Andy’s flatwarming party. The party went on rather late which meant that I didn’t get a very early start on Sunday. I did, however, manage to convince Ben, Patrick and Frances to stay down in Brighton instead of running off early for the last train back to London so now I’ve got some new rel="friend met colleague" values added to my bedroll.

I eventually roused myself enough to get up to London for the last few hours of Semantic Camp. There was a session on pimping your FOAF of which I understood nary a word, Glenn gave a great talk about parsing microformats and Premasagar demonstrated and discussed compound microformats.

But the highlight of the event was the latest creation of Jon Linklater-Johnson: Semantopoly. Imagine a game of Monopoly where all the pieces are Happy Webbies, all the properties are websites or technologies and the currency is friends rather than money. Twitterable remarks were flowing a-plenty:

Andy Clarke uses Cindy Li’s CSS and loses 450 friends. He has to take Safari AND Facebook offline to do it. Facebook is offline oh noes!

Ah, what larks! Nicely done, Jon. And nicely done, Tom for organising a most enjoyable Semantic Camp… even if I did miss most of it. I blame Andy’s l33t Margarita skillz.

SG Foo Camp schedule

Thanks to my life-saving inflatable mattress, I managed to get a decent night’s sleep. A full day of sessions is about to kick off so I’m going to fortify myself with plenty of coffee.

But markup comes before coffee. I’ve copied down the schedule (as it currently stands) from the whiteboard and turned it into a nice portable hCalendar:

http://icanhaz.com/sgfoo

If you’re here, you might want to subscribe to the schedule and stick it on your phone (or any other device with a calendar).

Cascading calendars

I had fun styling the hCalendar markup for the Berlin Web 2.0 Expo schedule. I based the CSS on what I had written for the table of errata for DOM Scripting (which uses del and ins to indicate changes—a nice little dab of extra semantics). For the conference schedule, I added a little flourish for standards-compliant browsers by highlighting headers and cells on the same row with these declarations:

table.vcalendar tbody td:hover,
table.vcalendar tbody td:focus {
 background-color: #ddd;
}
table.vcalendar tbody tr:hover th,
table.vcalendar tbody tr:focus th {
 background-color: #678;
}

Gareth asked if he could use the same CSS for some hCalendar schedules he marked up.

I’m more than happy to share my style sheets. In fact, Brian suggested the CSS could be released under a Creative Commons licence. So that’s what I’ve done.

This CSS file is licensed under a Creative Commons attribution licence. If you use Dmitry Baranovskiy’s conference schedule creator to generate an hCalendar table, this CSS should style it nicely.

Of course, you might decide to write your own CSS. In that case, please consider also sharing it under a similar licence. It would be nice to gather together a whole range of possible style sheets for hCalendar schedules and list them on the microformats wiki.

Berlin, day 3

For the second morning in a row, I rose at an ungodly hour to make my way to the Web 2.0 Expo and clamber on stage. There wasn’t a huge crowd of people in the room but I was glad that anyone had made the effort to come along so early.

I greeted the attendees, “Guten Morgen, meine Dame und Herren.” That’s not a typo; I know that the plural is “Damen” but this isn’t a very diverse conference.

I proceeded to blather on about microformats and nanotechnology. People seemed to like it. Afterwards Matt told me that the whole buckyball building block analogy I was using reminded him of phenotropics, a subject he’s spoken on before. I need to investigate further… if nothing else so that I can remedy the fact that the concept currently has no page on Wikipedia.

After my talk, I hung around just long enough to catch some of Steve Coast’s talk on OpenStreetMap and Mark’s talk on typography, both of which were excellent. I gave the keynotes a wide berth. Instead I hung out in the splendid food hall of the KaDeWe with Jessica and Natalie.

The evening was spent excercising my l33t dinner-organising skillz when, for the second night in a row, I was able to seat a gathering of geeks in the two digit figures. Berlin is a very accommodating city.