Journal tags: scripting


event.target.closest

Eric mentioned the JavaScript closest method. I use it all the time.

When I wrote the book DOM Scripting back in 2005, I’d estimate that 90% of the JavaScript I was writing boiled down to:

  1. Find these particular elements in the DOM and
  2. When the user clicks on one of them, do something.

It wasn’t just me either. I reckon that was 90% of most JavaScript on the web: progressive disclosure widgets, accordions, carousels, and so on.

That’s one of the reasons why jQuery became so popular. That first step (“find these particular elements in the DOM”) used to be a finicky affair involving getElementsByTagName, getElementById, and other long-winded DOM methods. jQuery came along and allowed us to use CSS selectors.

These days, we don’t need jQuery for that because we’ve got querySelector and querySelectorAll (and we can thank jQuery for their existence).

Let’s say you want to add some behaviour to every button element with a class of special. Or maybe you use a data- attribute instead of the class attribute; the same principle applies. You want something special to happen when the user clicks on one of those buttons.

  1. Use querySelectorAll('button.special') to get a list of all the right elements,
  2. Loop through the list, and
  3. Attach addEventListener('click') to each element.
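
In code, those three steps look something like this:

document.querySelectorAll('button.special').forEach( function (button) {
  button.addEventListener('click', function () {
    // do something
  }, false);
});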

That’s fine for a while. But if you’ve got a lot of special buttons, you’ve now got a lot of event listeners. You might be asking the browser to do a lot of work.

There’s another complication. The code you’ve written runs once, when the page loads. Suppose the contents of the page have changed in the meantime. Maybe elements are swapped in and out using Ajax. If a new special button shows up on the page, it won’t have an event handler attached to it.

You can switch things around. Instead of adding lots of event handlers to lots of elements, you can add one event handler to the root element. Then figure out whether the element that just got clicked is special or not.

That’s where closest comes in. It makes this kind of event handling pretty straightforward.

To start with, attach the event listener to the document:

document.addEventListener('click', doSomethingSpecial, false);

That function doSomethingSpecial will be executed whenever the user clicks on anything. Meanwhile, if the contents of the document are updated via Ajax, no problem!

Use the closest method in combination with the target property of the event to figure out whether that click was something you’re interested in:

function doSomethingSpecial(event) {
  if (event.target.closest('button.special')) {
    // do something
  }
}

There you go. Like querySelectorAll, the closest method takes a CSS selector—thanks again, jQuery!

Oh, and if you want to reduce the nesting inside that function, you can reverse the logic and return early like this:

function doSomethingSpecial(event) {
  if (!event.target.closest('button.special')) return;
  // do something
}

There’s a similar method to closest called matches. But that will only work if the user clicks directly on the element you’re interested in. If the click lands on an element nested inside it, matches won’t match, but closest will.
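
For example, suppose the special button contains a span (hypothetical markup):

<button class="special"><span class="icon"></span>Do something</button>

If the user clicks on the span, then event.target.matches('button.special') returns false, but event.target.closest('button.special') still returns the button.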

Like Eric said:

Very nice.

Submitting a form with datalist

I’m a big fan of HTML5’s datalist element and its elegant design. It’s a way to progressively enhance any input element into a combobox.

You use the list attribute on the input element to point to the ID of the associated datalist element.

<label for="homeworld">Your home planet</label>
<input type="text" name="homeworld" id="homeworld" list="planets">
<datalist id="planets">
 <option value="Mercury">
 <option value="Venus">
 <option value="Earth">
 <option value="Mars">
 <option value="Jupiter">
 <option value="Saturn">
 <option value="Uranus">
 <option value="Neptune">
</datalist>

It even works on input type="color", which is pretty cool!

The most common use case is as an autocomplete widget. That’s how I’m using it over on The Session, where the datalist is updated via Ajax every time the input is updated.

But let’s stick with a simple example, like the list of planets above. Suppose the user types “jup” …the datalist will show “Jupiter” as an option. The user can click on that option to automatically complete their input.

It would be handy if you could automatically submit the form when the user chooses a datalist option like this.

Well, tough luck.

The datalist element emits no events. There’s no way of telling if it has been clicked. This is something I’ve been trying to find a workaround for.

I got my hopes up when I read Amber’s excellent article about document.activeElement. But no, the focus stays on the input when the user clicks on an option in a datalist.

So if I can’t detect whether a datalist has been used, the best I can do is try to infer it. I know it’s not exactly the same thing, and it won’t be as reliable as true detection, but here’s my logic:

  • Keep track of the character count in the input element.
  • Every time the input is updated in any way, check the current character count against the last character count.
  • If the difference is greater than one, something interesting happened! Maybe the user pasted a value in …or maybe they used the datalist.
  • Loop through each of the options in the datalist.
  • If there’s an exact match with the current value of the input element, chances are the user chose that option from the datalist.
  • So submit the form!

Here’s how that translates into DOM scripting code:

document.querySelectorAll('input[list]').forEach( function (formfield) {
  var datalist = document.getElementById(formfield.getAttribute('list'));
  // keep track of how many characters were in the field last time
  var lastlength = formfield.value.length;
  var checkInputValue = function (inputValue) {
    // more than one new character at once? the user pasted something
    // …or chose an option from the datalist
    if (inputValue.length - lastlength > 1) {
      datalist.querySelectorAll('option').forEach( function (item) {
        if (item.value === inputValue) {
          // exact match with an option: assume the datalist was used
          formfield.form.submit();
        }
      });
    }
    lastlength = inputValue.length;
  };
  formfield.addEventListener('input', function () {
    checkInputValue(this.value);
  }, false);
});

I’ve made a gist with some added feature detection and mustard-cutting at the start. You should be able to drop it into just about any page that’s using datalist. It works even if the options in the datalist are dynamically updated, like the example on The Session.

It’s not foolproof. The inference relies on the difference between what was previously typed and what’s autocompleted being more than one character. So in the planets example, if someone has typed “Jupite” and then they choose “Jupiter” from the datalist, the form won’t automatically submit.

But still, I reckon it covers most common use cases. And like the datalist element itself, you can consider this functionality a progressive enhancement.

Custom properties

I made the website for the Clearleft podcast last week. The design is mostly lifted straight from the rest of the Clearleft website. The main difference is the masthead. If the browser window is wide enough, there’s a background image on the right hand side.

I mostly added that because I felt like the design was a bit imbalanced without something there. On the home page, it’s a picture of me. Kind of cheesy. But the image can be swapped out. On other pages, there are different photos. All it takes is a different class name on that masthead.

I thought about having the image be completely random (and I still might end up doing this). I’d need to use a bit of JavaScript to choose a class name at random from a list of possible values. Something like this:

var names = ['jeremy','katie','rich','helen','trys','chris'];
var name = names[Math.floor(Math.random() * names.length)];
document.querySelector('.masthead').classList.add(name);

(You could paste that into the dev tools console to see it in action on the podcast site.)

Then I read something completely unrelated. Cassie wrote a fantastic article on her site called Making lil’ me - part 1. In it, she describes how she made the mouse-triggered animation of her avatar in the footer of her home page.

It’s such a well-written technical article. She explains the logic of what she’s doing, and translates that logic into code. Then, after walking you through the native code, she shows how you could use the Greensock library to achieve the same effect. That’s the way to do it! Instead of saying, “Here’s a library that will save you time—don’t worry about how it works!”, she’s saying “Here’s how it works without a library; here’s how it works with a library; now you can make an informed choice about what to use.” It’s a very empowering approach.

Anyway, in the article, Cassie demonstrates how you can use custom properties as a bridge between JavaScript and CSS. JavaScript reads the mouse position and updates some custom properties accordingly. Those same custom properties are used in CSS for positioning. Voila! Now you’ve got the position of an element responding to mouse movements.
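
Here’s a minimal sketch of that kind of bridge (the property names are mine, not Cassie’s). The JavaScript writes the custom properties; a CSS rule like .avatar { transform: translate(var(--mouse-x, 0px), var(--mouse-y, 0px)); } reads them:

document.addEventListener('mousemove', function (event) {
  // write the mouse position into custom properties on the root element
  document.documentElement.style.setProperty('--mouse-x', event.clientX + 'px');
  document.documentElement.style.setProperty('--mouse-y', event.clientY + 'px');
}, false);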

That’s what made me think of the code snippet I wrote above to update a class name from JavaScript. I automatically thought of updating a class name because, frankly, that’s how I’ve always done it. I’d say about 90% of the DOM scripting I’ve ever done involves toggling the presence of class values: accordions, fly-out menus, tool-tips, and other progressive disclosure patterns.

That’s fine. But really, I should try to avoid touching the DOM at all. It can have performance implications, possibly triggering unnecessary repaints and reflows.

Now with custom properties, there’s a direct line of communication between JavaScript and CSS. No need to use the HTML as a courier.

This made me realise that I need to be aware of automatically reaching for a solution just because that’s the way I’ve done something in the past. I should step back and think about the more efficient solutions that are possible now.

It also made me realise that “CSS variables” is a very limiting way of thinking about custom properties. The fact that they can be updated in real time—in CSS or JavaScript—makes them much more powerful than, say, Sass variables (which are more like constants).

But I too have been guilty of underselling them. I almost always refer to them as “CSS custom properties” …but a lot of their potential comes from the fact that they’re not confined to CSS. From now on, I’m going to try calling them custom properties, without any qualification.

A tiny lesson in query selection

We have a saying at Clearleft:

Everything is a tiny lesson.

I bet you learn something new every day, even if it’s something small. These small tips and techniques can easily get lost. They seem almost not worth sharing. But it’s the small stuff that takes the least effort to share, and often provides the most reward for someone else out there. Take, for example, this great tip for getting assets out of Sketch that Cassie shared with me.

Cassie was working on a piece of JavaScript yesterday when we spotted a tiny lesson that tripped up both of us. The script was a fairly straightforward piece of DOM scripting. As a general rule, we do a sort of feature detection near the start of the script. Let’s say you’re using querySelector to get a reference to an element in the DOM:

var someElement = document.querySelector('.someClass');

Before going any further, check to make sure that the reference isn’t falsey (in other words, make sure that DOM node actually exists):

if (!someElement) return;

That will exit the script if there’s no element with a class of someClass on the page.

The situation that tripped us up was like this:

var myLinks = document.querySelectorAll('a.someClass');

if (!myLinks) return;

That should exit the script if there are no A elements with a class of someClass, right?

As it turns out, querySelectorAll is subtly different to querySelector. If the selector you pass to querySelector doesn’t match any element, it returns null. But querySelectorAll always returns a list (well, technically it’s a NodeList rather than an array, but it behaves much the same here). So if the selector you pass to querySelectorAll doesn’t match anything, it still returns a list, but the list is empty. That means instead of just testing for its existence, you need to test that it’s not empty by checking its length property:

if (!myLinks.length) return;

That’s a tiny lesson.

Teaching in Porto, day three

Day two ended with a bit of a cliffhanger as I had the students mark up a document, but not yet style it. In the morning of day three, the styling began.

Rather than just treat “styling” as one big monolithic task, I broke it down into typography, colour, negative space, and so on. We time-boxed each one of those parts of the visual design. So everyone got, say, fifteen minutes to write styles relating to font families and sizes, then another fifteen minutes to write styles for colours and background colours. Bit by bit, the styles were layered on.

When it came to layout, we closed the laptops and returned to paper. Everyone did a quick round of 6-up sketching so that there was plenty of fast iteration on layout ideas. That was followed by some critique and dot-voting of the sketches.

Rather than diving into the CSS for layout—which can get quite complex—I instead walked through the approach for layout; namely putting all your layout styles inside media queries. To explain media queries, I first explained media types and then introduced the query part.

I felt pretty confident that I could skip over the nitty-gritty of media queries and cross-device layout because the next masterclass that will be taught at the New Digital School will be a week of responsive design, taught by Vitaly. I just gave them a taster—Vitaly can dive deeper.

By lunch time, I felt that we had covered CSS pretty well. After lunch it was time for the really challenging part: JavaScript.

The reason why I think JavaScript is challenging is that it’s inherently more complex than HTML or CSS. Those are declarative languages with fairly basic concepts at heart (elements, attributes, selectors, etc.), whereas an imperative language like JavaScript means entering the territory of logic, loops, variables, arrays, objects, and so on. I really didn’t want to get stuck in the weeds with that stuff.

I focused on the combination of JavaScript and the Document Object Model as a way of manipulating the HTML and CSS that’s already inside a browser. A lot of that boils down to this pattern:

When (some event happens), then (take this action).

We brainstormed some examples of this, e.g. “When the user submits a form, then show a modal dialogue with an acknowledgement.” I then encouraged them to write a script …but I don’t mean a script in the JavaScript sense; I mean a script in the screenwriting or theatre sense. Line by line, write out each step that you want to accomplish. Once you’ve done that, translate each line of your English (or Portuguese) script into JavaScript.
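
For example, that form-submission script might translate into something like this (a sketch, using an alert as a stand-in for the modal dialogue):

// When the user submits the form…
document.querySelector('form').addEventListener('submit', function (event) {
  event.preventDefault();
  // …then show an acknowledgement.
  window.alert('Thanks! We got your message.');
}, false);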

I did a quick demo as a proof of concept (which, much to my surprise, actually worked first time), but I was at pains to point out that they didn’t need to remember the syntax or vocabulary of the script; it was much more important to have a clear understanding of the thinking behind it.

With the remaining time left in the day, we ran through the many browser APIs available to JavaScript, from the relatively simple—like querySelector and Ajax—right up to the latest device APIs. I think I got the message across that, using JavaScript, there’s practically no limit to what you can do on the web these days …but the trick is to use that power responsibly.

At this point, we’ve had three days and we’ve covered three layers of web technologies: HTML, CSS, and JavaScript. Tomorrow we’ll take a step back from the nitty-gritty of the code. It’s going to be all about how to plan and think about building for the web before a single line of code gets written.

Making progress

When I was talking about Async, Ajax, and animation, I mentioned the little trick I’ve used of generating a progress element to indicate to the user that an Ajax request is underway.

I sometimes use the same technique even if Ajax isn’t involved. When a form is being submitted, I find it’s often good to provide explicit, immediate feedback that the submission is underway. Sure, the browser will do its own thing but a browser doesn’t differentiate between showing that a regular link has been clicked, and showing that all those important details you just entered into a form are on their way.

Here’s the JavaScript I use. It’s fairly simplistic, and I’m limiting it to POST requests only. At the moment that a form begins to submit, a progress element is inserted at the end of the form …which is usually right by the submit button that the user will have just pressed.

While I’m at it, I also set a variable to indicate that a POST submission is underway. So even if the user clicks on that submit button multiple times, only one request is sent.

You’ll notice that I’m attaching an event to each form element, rather than using event delegation to listen for a click event on the parent document and then figuring out whether that click event was triggered by a submit button. Usually I’m a big fan of event delegation but in this case, it’s important that the event I’m listening to is the submit event. A form won’t fire that event unless the data is truly winging its way to the server. That means you can do all the client-side validation you want—making good use of the required attribute where appropriate—safe in the knowledge that the progress element won’t be generated until the form has passed its validation checks.
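
In outline, it looks something like this (a sketch; the selector and wording are illustrative):

document.querySelectorAll('form[method="post"]').forEach( function (form) {
  var submitting = false;
  form.addEventListener('submit', function (event) {
    if (submitting) {
      // a submission is already underway: swallow the extra click
      event.preventDefault();
      return;
    }
    submitting = true;
    // this only runs once the form has passed its validation checks
    var progress = document.createElement('progress');
    progress.textContent = 'Sending…';
    form.appendChild(progress);
  }, false);
});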

If you like this particular pattern, feel free to use the code. Better yet, improve upon it.

Async, Ajax, and animation

I hadn’t been to one of Brighton’s Async JavaScript meetups for quite a while, but I made it along last week. Now that it’s taking place at 68 Middle Street, it’s a lot easier to get to …seeing as the Clearleft office is right upstairs.

James Da Costa gave a terrific presentation on something called Pjax. In related news, it turns out that the way I’ve been doing Ajax all along is apparently called Pjax.

Back when I wrote Bulletproof Ajax, I talked about using Hijax. The basic idea is:

  1. First, build an old-fashioned website that uses hyperlinks and forms to pass information to the server. The server returns whole new pages with each request.
  2. Now, use JavaScript to intercept those links and form submissions and pass the information via XMLHttpRequest instead. You can then select which parts of the page need to be updated instead of updating the whole page.

So basically your JavaScript is acting like a dumb waiter shuttling requests for page fragments back and forth between the browser and the server. But all the clever stuff is happening on the server, not the browser. To the end user, there’s no difference between that and a site that’s putting all the complexity in the browser.

In fact, the only time you’d really notice a difference is when something goes wrong: in the Hijax model, everything just falls back to full-page requests but keeps on working. That’s the big difference between this approach and the current vogue for “single page apps” that do everything in the browser—when something goes wrong there, the user gets bupkis.

Pjax introduces an extra piece of the puzzle—which didn’t exist when I wrote Bulletproof Ajax—and that’s pushState, part of HTML5’s History API, to keep the browser’s URL updated. Hence, pushState + Ajax = Pjax.
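
In code, the bare bones of that pattern look something like this (a sketch: the data-pjax attribute and the #content container are placeholders, and a real implementation would also handle the popstate event):

document.addEventListener('click', function (event) {
  var link = event.target.closest('a[data-pjax]');
  if (!link) return;
  event.preventDefault();
  var request = new XMLHttpRequest();
  request.open('GET', link.href);
  request.onload = function () {
    // swap in just the fragment returned by the server
    document.querySelector('#content').innerHTML = request.responseText;
    // keep the browser’s URL (and history) in sync
    history.pushState(null, '', link.href);
  };
  request.send();
}, false);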

As you can imagine, I was nodding in vigorous agreement with everything James was demoing. It was refreshing to find that not everyone is going down the Ember/Angular route of relying entirely on JavaScript for core functionality. I was beginning to think that nobody cared about progressive enhancement any more, or that maybe I was missing something fundamental, but it turns out I’m not crazy after all: James’s demo showed how to write front-end code responsibly.

What was fascinating though, was hearing why people were choosing to develop using Pjax. It isn’t necessarily that they care about progressive enhancement, robustness, and universal access. Rather, it’s often driven by the desire to stay within the server-side development environment that they’re comfortable with. See, for example, DHH’s explanation of why 37 Signals is using this approach:

So you get all the advantages of speed and snappiness without the degraded development experience of doing everything on the client.

It sounds like they’re doing the right thing for the wrong reasons (a wrong reason being “JavaScript is icky!”).

A lot of James’s talk was focused on the user experience of the interfaces built with Hijax/Pjax/whatever. He had some terrific examples of how animation can make an enormous difference. That inspired me to do a little bit of tweaking to the Ajaxified/Hijaxified/Pjaxified portions of The Session.

Whenever you use Hijax to intercept a link, it’s now up to you to provide some sort of immediate feedback to the user that something is happening—normally the browser would take care of this (remember Netscape’s spinning lighthouse?)—but when you hijack that click, you’re basically saying “I’ll take care of this.” So you could, for example, display a spinning icon.

One little trick I’ve used is to insert an empty progress element.

Normally the progress element takes max and value attributes to show how far along something has progressed:

<progress max="100" value="75">75%</progress>


But if you leave those out, then it’s an indeterminate progress bar:

<progress>loading...</progress>


The rendering of the progress bar will vary from browser to browser, and that’s just fine. Older browsers that don’t understand the progress element will display whatever’s between the opening and closing tags.

Voila! You’ve got a nice lightweight animation to show that an Ajax request is underway.

The ghost of browsers past

Even before a line of code was written for the line-mode browser simulator when we gathered together at CERN, there was a gleeful period of digital spelunking.

[Photos: Brian goes browsing; demonstration data sources]

We poked at the markup of the first ever website

  • What’s that NEXTID element? Turns out it’s something specific to the NeXT operating system.
  • Why does the first iteration of HTML already contain H1 through to H6? It’s because they were lifted wholesale from a flavour of SGML—Standard Generalized Markup Language—that was already in use at CERN.

Oh, and Brian asked Robert Cailliau why they went with the term World Wide Web. “Well,” he said, “we had to call it something. And we thought we could always change it later.”

Then there was the story of the line-mode browser. It was created by Nicola Pellow, who was a student at CERN in 1990. She later worked on the Mac browser but her involvement with kickstarting the world wide web ended around 1993. She never showed up to any of the reunions.

We poked around in the (surprisingly short) source code of the line-mode browser. We found the lines that described how elements should be styled—the term “style sheet” appeared in a comment!

[Photos: proto-stylesheet; parsing the parser]

If you’ve fired up the line-mode browser simulator and run some websites through it, you’ll probably see occasions where a whole bunch of JavaScript—nestled between script tags in the head of the document—gets rendered to the screen.

[Screenshot: the Clearleft site rendered in the line-mode browser simulator]

We could’ve hidden that JavaScript, but we made a deliberate decision to display it. That’s what the line-mode browser would have done. The script element didn’t exist back then. Heck, JavaScript didn’t exist back then. So browsers would have handled the unknown element in the standard HTML way: ignore the opening and closing tags and just render what’s in-between them. That’s still the error-handling model for unrecognised elements in HTML.

This is why we used to write our JavaScript like this:

<script language="JavaScript" type="text/javascript">
<!--
(JavaScript goes here)
//-->
</script>

The HTML comments stopped the JavaScript from being rendered to the screen in older browsers (like the line-mode browser). Using the opening HTML comment <!-- is functionally equivalent to // single-line comments in JavaScript …although you still need to prefix the closing --> comment with a //.

I remember doing this when I first started making websites in the 90s. You can see it if you view source on the first version of this website.

Later on, we all switched to XHTML so we updated the syntax to make it valid XML.

<script type="text/javascript">
//<![CDATA[
(JavaScript goes here)
//]]>
</script>

The <![CDATA[ part stops an XML parser from trying to parse the JavaScript. But HTML parsers would choke on that because it starts with an angle bracket. Hence the JavaScript-style // comment.

Anyway, we don’t bother with HTML or XHTML comments at the start of our script blocks anymore. And that’s why the line-mode browser simulator renders the JavaScript to the screen.

Note that the JavaScript isn’t executed. That’s thanks to a clever little hack by Remy: the line-mode browser simulator changes the type attribute of every script element to text/plain, effectively defusing them. Smart!

Canvas sparklines

I like sparklines a lot. Tufte describes a sparkline as:

…a small intense, simple, word-sized graphic with typographic resolution.

Four years ago, I added sparklines to Huffduffer using Google’s chart API. That API comes in two flavours: a JavaScript API for client-side creation of graphs, and image charts for server-side rendering of charts as PNGs.

The image API is really useful: there’s no reliance on JavaScript, it works in every browser capable of displaying images, and it’s really flexible and customisable. Therefore it is, of course, being deprecated.

The death warrant for Google image charts sets the execution date for 2015. Time to start looking for an alternative.

I couldn’t find a direct equivalent to the functionality that Google provides i.e. generating the images dynamically on the server. There are, however, plenty of client-side alternatives, many of them using canvas.

Most of the implementations I found were a little heavy-handed for my taste: they either required jQuery or Processing or both. I just wanted a quick little script for generating sparklines from a dataset of numbers. So I wrote my own.

I’ve put my code up on GitHub as Canvas Sparkline.

Here’s the JavaScript. You create a canvas element with the dimensions you want for the sparkline, then pass the ID of that element (along with your dataset) into the sparkline function:

sparkline ('canvasID', [12, 18, 13, 12, 11, 15, 17, 20, 15, 12, 8, 7, 9, 11], true);

(that final Boolean value at the end just indicates whether you want a red dot at the end of the sparkline).
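
The canvas element itself is just ordinary markup with whatever dimensions suit you, e.g.:

<canvas id="canvasID" width="200" height="30"></canvas>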

The script takes care of normalising the values, so it doesn’t matter how many numbers are in the dataset or whether the range of the numbers is in the tens, hundreds, thousands, or hundreds of thousands.

There’s plenty of room for improvement:

  • The colour of the sparkline is hardcoded (50% transparent black) but it could be passed in as a value.
  • All the values should probably be passed in as an array of options rather than individual parameters.

Feel free to fork, adapt, and improve.

The sparklines are working quite nicely, but I can’t help but feel that this isn’t the right tool for the job. Ideally, I’d like to keep using a server-side solution like Google’s image charts. But if I am going to use a client-side solution, I’m not sure that canvas is the right element. This should really be SVG: canvas is great for dynamic images and animations that need to update quite quickly, but sparklines are generally pretty static. If anyone fancies making a lightweight SVG solution for sparklines, that would be lovely.

In the meantime, you can see Canvas Sparkline in action on the member profiles at The Session, like here, here, here, or here.

Update: Ask and thou shalt receive. Check out this fantastic lightweight SVG solution from Stuart—bloody brilliant!

Months and years

While I was in San Francisco for the last Event Apart of the year in December, Luke pulled me aside while he was preparing for his A Day Apart workshop on mobile web design. As befits the man who literally wrote the book on web forms and also wrote the book on mobile-first design, Luke was planning to spend plenty of time covering input on mobile devices and he wanted my opinion on one of the patterns he was going to mention.

Let’s say you’ve got your typical checkout form asking for credit card details. The user is going to need to specify the expiry date of their credit card, something that historically would have been done with select elements, like so:
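
<!-- something along these lines (options abbreviated) -->
<select name="expiry-month">
 <option value="01">01</option>
 <option value="02">02</option>
 <!-- …and so on up to 12 -->
</select>
<select name="expiry-year">
 <option value="2012">2012</option>
 <option value="2013">2013</option>
 <!-- …and so on -->
</select>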

With the introduction of the new input types in HTML5, we can now just use input type="month".

That’s particularly nice on mobile devices that support input type="month", like Mobile Safari since iOS 5.


But the behaviour on non-supporting browsers would be to display just like input type="text" …not ideal for inputting a date.

So the pattern that Luke proposed was to start with the traditional double drop-downs for month and year, and then use feature detection to replace them with input type="month" in supporting browsers.

That was an angle I hadn’t thought of. Usually when I’m implementing new HTML5 input types (like input type="number") I put the new type in the markup and then try to come up with JavaScript polyfills for older non-supporting browsers. In this case though, the old-fashioned way is what goes in the markup and JavaScript does the enhancing for newer browsers.
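
The feature detection for that can be as simple as this (a sketch; the select-swapping logic will depend on your markup):

var monthInput = document.createElement('input');
monthInput.setAttribute('type', 'month');
if (monthInput.type === 'month') {
  // the browser supports input type="month"
  // …so replace the two selects with a single month input here
}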

The only downside is that some of the desktop browsers that do support input type="month" do so in a way that—from a UI standpoint—seems like a step backwards from simply having two selects: Safari displays it with a spinner control like input type="number", while Opera shows an entire calendar (days’n’all).

Anyway, I threw a quick hack together as a proof of concept so you can see it in action. I’m sure you can improve upon it. Feel free to do so.

Orientation and scale

Paul Irish, Divya Manian and Shi Chuan launched Mobile Boilerplate recently—a mobile companion site to HTML5 Boilerplate.

There’s some good stuff in there but I was a little surprised to see that the meta viewport element included values for minimum-scale=1.0, maximum-scale=1.0, user-scalable=no:
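
<meta name="viewport" content="width=device-width, minimum-scale=1.0, maximum-scale=1.0, user-scalable=no">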

Setting user-scalable=no is pretty much the same as setting minimum-scale=1.0, maximum-scale=1.0. In any case, I’m not keen on it. Like Roger, I don’t think we should take away the user’s right to pinch and zoom to make content larger. That’s why my usual viewport declaration is:
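
<meta name="viewport" content="width=device-width, initial-scale=1">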

Yes, I know that most native apps don’t allow you to zoom but I see no reason to replicate that failing on the web.

But there’s a problem. Allowing users to scale content for comfort would be fine if it weren’t for a bug in Mobile Safari:

When the meta viewport tag is set to content="width=device-width,initial-scale=1", or any value that allows user-scaling, changing the device to landscape orientation causes the page to scale larger than 1.0. As a result, a portion of the page is cropped off the right, and the user must double-tap (sometimes more than once) to get the page to zoom properly into view.

This is really annoying so Shi Chuan set about fixing the problem.

His initial solution was to keep minimum-scale=1.0, maximum-scale=1.0 in the meta viewport element but then to change it using JavaScript once the user initiates a gesture (the gesturestart event is triggered as soon as two fingers are on the screen). At that point, the content attribute of the meta viewport element gets updated to read minimum-scale=0.25, maximum-scale=1.6, the default values:

var metas = document.getElementsByTagName('meta');
var i;
if (navigator.userAgent.match(/iPhone/i)) {
  document.addEventListener("gesturestart", gestureStart, false);
  function gestureStart() {
    for (i=0; i<metas.length; i++) {
      if (metas[i].name == "viewport") {
        metas[i].content = "width=device-width, minimum-scale=0.25, maximum-scale=1.6";
      }
    }
  }
}

That works nicely but I wasn’t too keen on the dependency between the markup and the script. If, for whatever reason, the script doesn’t get executed, users are stuck with an unzoomable page.

I suggested that the script should also set the initial value to minimum-scale=1.0, maximum-scale=1.0:

var metas = document.getElementsByTagName('meta');
var i;
if (navigator.userAgent.match(/iPhone/i)) {
  for (i=0; i<metas.length; i++) {
    if (metas[i].name == "viewport") {
      metas[i].content = "width=device-width, minimum-scale=1.0, maximum-scale=1.0";
    }
  }
  document.addEventListener("gesturestart", gestureStart, false);
}
function gestureStart() {
  for (i=0; i<metas.length; i++) {
    if (metas[i].name == "viewport") {
      metas[i].content = "width=device-width, minimum-scale=0.25, maximum-scale=1.6";
    }
  }
}

Now the markup still contains the robust accessible default:
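
<meta name="viewport" content="width=device-width, initial-scale=1">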

…while the script takes care of initially setting the scale values and also updating them when a gesture is detected. Here’s what’s happening:

  1. By default, the page is scaleable because the initial meta viewport declaration doesn’t set a minimum-scale or maximum-scale.
  2. Once the script loads, the page is no longer scalable because both minimum-scale and maximum-scale have been set to 1.0. If the device is switched from portrait to landscape, the resizing bug won’t be triggered because scaling is disabled.
  3. When the gesturestart event is detected—indicating that the user might be trying to scale the page—the minimum-scale and maximum-scale values are updated to allow scaling. At this point, if the device is switched from portrait to landscape, the resizing bug will occur because the page is now scaleable.

Jason Weaver points out that you should probably detect for iPad too. That’s a pretty straightforward update:

if (navigator.userAgent.match(/iPhone/i) || navigator.userAgent.match(/iPad/i))

Mathias Bynens updated the code to use querySelectorAll which is supported in Mobile Safari. Here’s the code I’m currently using:

if (navigator.userAgent.match(/iPhone/i) || navigator.userAgent.match(/iPad/i)) {
  var viewportmeta = document.querySelector('meta[name="viewport"]');
  if (viewportmeta) {
    viewportmeta.content = 'width=device-width, minimum-scale=1.0, maximum-scale=1.0';
    document.body.addEventListener('gesturestart', function() {
      viewportmeta.content = 'width=device-width, minimum-scale=0.25, maximum-scale=1.6';
    }, false);
  }
}

You can try it out on Huffduffer, Salter Cane, Principia Gastronomica and right here on Adactio.

Right now there’s still a little sluggishness between the initial pinch-zoom gesture and the scaling; the scale values (0.25 - 1.6) don’t seem to take effect immediately. A second pinch-zoom gesture is often required. If you have any ideas for improving the event capturing and propagation, dive in there.

Update: the bug has been fixed in iOS 6.

DOM Scripting, second edition

You may have noticed that there’s a second edition of DOM Scripting out. I can’t take any credit for it; I had very little to do with it. But I’m happy to report that the additions meet with my approval.

I’ve written about it on the DOM Scripting blog if you want all the details on what’s new. In short, all the updates are good ones for a book that’s now five years old.

If you’ve already got the first edition, you probably don’t need this update, but if you’re looking for a good introduction to JavaScript with a good smattering of Ajax and HTML5, and an additional dollop of jQuery, the second edition of DOM Scripting will serve you well.

The cover of the second edition, alas, is considerably shittier than the first edition. So don’t judge the second edition of a book by its cover.

JavaScript jamboree

It’s been a fun post-dConstruct week. Tantek has been staying in Brighton being, as always, the perfect guest. On his last night in the country, we went along to Async, the local JavaScript twice-monthly meet-up, host to a show’n’tell this time ‘round.

Tantek demoed his Cassis project. It’s nuts. He’s creating polyglot scripts that are simultaneously JavaScript and PHP, as well as having the ability to report which context they are running in. I feel partly responsible for this madness: he got the idea the last time he was in Brighton after reading Bulletproof Ajax and deciding that he didn’t want to write the same logic twice in two different programming languages. The really crazy thing is that he’s got it working.

Prem, who organised the event, showed his Sandie code: a security mechanism that allows external scripts to be loaded and arbitrary JavaScript to be executed without affecting the global scope. It’s smart stuff that could be incredibly useful for his sqwidget work.

Mark demoed one of the coolest bookmarklets I’ve seen in a while: Snoopy:

It’s intended for use on mobile browsers where you can’t view-source to poke around under the hood of sites to see how they’re built.

If the lack of “view source” on the iPad and iPhone has been bothering you, Snoopy is your friend.

Alas, we had to leave the Async awesomeness early to rendez-vous with the Mozilla HTML5 meet-up in The Eagle so I didn’t even get to see Jim demo the disco snake that he made at Music Hack Day last weekend.

Ajax workshop in NYC

On July 6th I’ll be presenting an all day Ajax and DOM Scripting workshop in New York with Carson Workshops.

A few days later, on the 10th and 11th of July, An Event Apart NYC comes to town. Why not make a week of it? If you’re coming along to AEA, you might want to arrive a few days early for the workshop.

The Ajax workshop costs $495 and will be held at the Digital Sandbox. Registration for An Event Apart costs $1095. It will be held at Scandinavia House.

My previous workshops in London and Manchester were a lot of fun and garnered plenty of praise so I’m really excited about taking the show to New York. If you live in or near New York city, come along for a day of Ajaxy goodness and come away with a Neo-like “I know Kung-Fu” awareness of DOM Scripting.

Oh, and if you sign up now, you’ll also get a copy of my book.

Transcribing podcasts

As noted over on the DOM Scripting site, the audio from the presentation I gave with Aaron at South by Southwest is now available for download.

It turned out quite well. The audio quality is good and neither Aaron nor I do too much uhm-ing and ah-ing.

But there is an inherent problem with publishing audio files on the Web. That problem is succinctly summarised in this comment accompanying an entry for an audio file over at Vitamin:

Is there anyway to get a transcription of this? I am deaf so an audio mp3 is not going to help me a bit.

There is another problem, which is that right now audio files can’t be indexed and searched. That problem is secondary to the accessibility issue but, as with so many accessibility solutions, a fix will benefit everybody.

I started looking into podcast transcription services. The most intriguing one I found was a site called Casting Words. It uses Amazon’s Mechanical Turk API to farm out the task of transcribing the audio content.

This is a textbook illustration of the kind of problem Mechanical Turk sets out to solve, namely the kind of problem that requires human beings rather than computers. Speech recognition, like language translation, is a service that is not going to be replaced by machines any time soon.

For the developers at Casting Words, the Mechanical Turk API works like any other Web Service. They send a request with the parameters for the task they want solved. Later, Mechanical Turk sends back a response. What sets it apart from other Web Services is the fact that the response is sent via wetware, rather than hardware. The response isn’t retrieved from a database or algorithm; it’s retrieved from a human brain.

Casting Words put together a simple front-end for all of this. Jeff Barr has written up the process of submitting a podcast for transcription. You choose which audio files you want to have transcribed, you supply any other useful information that might be relevant to the transcriber, and you’re sent off to PayPal.

They charge 42 cents per minute of audio. For the SXSW presentation, which is an hour long, that works out at just over $25. That seems like a reasonable price to me.

I submitted the presentation audio, sat back and waited. A few days later I got an email with the finished transcription. It came in three different formats: Rich Text, HTML, and plain text. You can view the results for yourself.

Overall, it’s very impressive. There were a couple of glitches, but let’s face it, the subject matter was particularly technical. Elvis Costello once said that talking about music was like dancing about architecture. Listening to a presentation about code — and attempting to transcribe it — is an equally quixotic endeavour.

Still, when you combine either the audio or the transcription with the presentation slides, you can follow along pretty well.

I spent about an hour going through the transcription and tweaking the occasional misheard phrase. I’ve posted the final transcript in the articles section.

For a one-off recording like this, getting a transcription was an easy, inexpensive option. If you gave a presentation at SXSW, I encourage you to do the same: if you were on a panel, you could even split the cost four or five ways.

But could this scale to cover regularly scheduled podcast episodes? I think so. It does cost money, but then so does bandwidth. Bandwidth is often covered by sponsorship or a PayPal tip jar, so why not transcriptions?

You could argue that, if anyone wants a transcription, they could commission one themselves. But then the time and effort is repeated. Whereas, if you provide a transcription, there’s just a one-off payment.

By providing a transcription, you’ll also be providing a spiderable resource than can be easily scanned, quoted, cut and pasted. And you’ll get lots of .