A Web for the Next Century


Chapter 1. The Web Platform

1: In the beginning, Tim created the Web.
2: And the platform was without form, and void; and confusion was upon the face of the Internet. And the mind of Tim moved upon the face of the problem.
3: And Tim said, Let there be a new protocol: and there was HTTP.
4: And Tim saw the protocol, that it was good: and he divided the network by domains and subdomains.
5: And he called the network the World Wide Web.
6: And Tim said, Let there be a browser for this Web, that the pages delivered by it might be viewed.
7: And it was so.
8: And Tim separated the structure of the content from its style.
9: And the structured content he called HTML and the means of styling he called CSS. And he saw that it was good.
10: And Tim said, Let us describe this structured content in the form of a tree and make it scriptable, and it was so.
11: And from the dust of the Interwebs were created developers, to whom he gave dominion over the platform.

If you’ve read any of the numerous articles about The Extensible Web, heard about it in conference presentations, or seen The Extensible Web Manifesto, you’ve likely seen (or heard) three phrases repeated: “Explain the magic,” “fundamental primitives” and “evolution of the platform”. I thought it might be worth (another) piece explaining why I think these are at the heart of it all…

For thousands of years the commonly accepted answer to the question “where did dolphins come from?” (or sharks or giraffes or people) was essentially that they were specially created in their current form by a deity, as part of a complex and perfect plan. Almost all cultures had some kind of creation myth to explain the complex, high-level things they couldn’t understand.

It turns out that this very simplified view was wrong (as is much of the cute creation myth I’ve created for the Web Platform above), and I’d like to use this metaphor a bit to explain…

Creation and Evolution: Concrete and Abstract

It’s certainly clear that Sir Tim’s particular mix of ideas became the dominant paradigm:  We don’t spend a lot of time talking about SGML or Gopher.

It seems straightforward enough to think of the mix of ideas that made up the original Web as evolutionary raw materials, and to think of users as providing some kind of fitness function under which it became the dominant species/paradigm, but that is a pretty abstract framing and it misses a subtle, but I think important, distinction.

The Web Platform and Web browsers are not just an idea, they are now a concrete thing.  The initial creation of the Web was an act of special creation – engineering that introduced not just new ideas, but new software and infrastructure.  The Web is probably the grandest engineering effort in the history of mankind – browsers as a technology outstrip any operating system or virtual machine in terms of ubiquity, and they are increasingly capable systems.  There are many new systems with concrete ideas to supplant the Web browser and replace it with something new, and people are asking themselves: is it even possible for the Web to hang on?  Replacing it is no easy task, technically or socially – that is a huge advantage for the Web.  So how do we make it thrive?  Not just today, but years from now?

Some more history…

In Tim’s original creation, HTTP supported only GET; in HTML there were no forms, no images, no separate idea of style.  There was no DOM and no async requests – indeed, there was no script.  Style was a pretty loosely defined thing – there wasn’t much of it – and CSS wasn’t a thing.  There was just: GET me that very simple HTML document (markup which was mediocre at displaying text) when I give you a URL, display it, and make sure there is this special concept of a “link”.

This is at the heart of what we have today, but it is not nearly all of it:  What we have today has become an advanced Platform – so how did we get here?  Interestingly, there are two roads we’ve followed at different times – and it is worth contrasting them.

In some cases, we’ve gone off and created entirely new high-level ideas like CSS or AppCache which were, well, magic.  That is, they did very, very complex things and provided a high-level, declarative API designed to solve very specific use cases.  At other times (as with the DOM, XMLHttpRequest and the CSSOM) we have explained some of the underlying magic by taking those high-level features and providing imperative APIs beneath them.
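To make the contrast concrete, here is a minimal sketch (in plain JavaScript) of what “explained magic” looks like in practice: a declarative <form> hides an entire request/response cycle behind markup, while XMLHttpRequest exposes roughly the same underlying capability to script so that developers can recombine it.  The endpoint and payload below are made-up placeholders.

  // A declarative <form method="POST" action="/subscribe"> hides a whole
  // request/response cycle behind markup. XMLHttpRequest exposes the same
  // underlying capability imperatively. (URL and payload are illustrative.)
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/subscribe');
  xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
  xhr.onload = function () {
    if (xhr.status >= 200 && xhr.status < 300) {
      console.log('Submitted without a page reload:', xhr.responseText);
    } else {
      console.error('Request failed with status', xhr.status);
    }
  };
  xhr.send('email=' + encodeURIComponent('someone@example.com'));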

Looking at those lists, it seems to me that were it not for those small efforts to explain some of the magic, the Web would already be lost by now.

Creating a Platform for the Next 100 Years

The real strength of life itself derives from the fact that it is not specifically designed to perfectly fill one very particular niche.  Instead, complex pressures at a high level judge relatively minor variance at a low level, and this simple process inevitably yields the spread of things that are highly adaptive and able to survive changes in those complex pressures.

Sir Tim Berners-Lee couldn’t have foreseen iPhones and Retina displays, and had he been able to account for them in his original designs, the environment itself (that is, users who choose to use or author for the Web) would likely have rejected it.  Such are the complex pressures changing our system, and we could learn something from nature and from the history of technology here: perfectly designed things are often not the same as really widely used things, and either can be really inflexible to change.

Explaining the magic means digging away at the capabilities that underlie this amazing system and describing their relationships to one another in order to add adaptability (extensibility).  At the bottom are a number of necessary and fundamental primitives that only the platform (the browser, generally) can provide.  When we think about adding something new, let’s try to explain it “all the way down” until we reach a fundamental primitive, and then work up.
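As one hypothetical sketch of working up from primitives, assuming the Custom Elements API (customElements.define) is available: a small piece of declarative-feeling behavior – a trivial stand-in for something like <details> – explained entirely in terms of lower-level DOM and event primitives the platform already exposes.  The tag name and behavior are invented for illustration.

  // A hypothetical "explain the magic" exercise: a collapsible element
  // built only from primitives the platform already provides (DOM nodes,
  // attributes, events). Tag name and behavior are illustrative.
  class SimpleToggle extends HTMLElement {
    connectedCallback() {
      var button = document.createElement('button');
      button.textContent = this.getAttribute('label') || 'Toggle';
      var content = document.createElement('div');
      content.hidden = true;
      // Move the element's original children into the collapsible region.
      while (this.firstChild) {
        content.appendChild(this.firstChild);
      }
      button.addEventListener('click', function () {
        content.hidden = !content.hidden;
      });
      this.appendChild(button);
      this.appendChild(content);
    }
  }
  customElements.define('simple-toggle', SimpleToggle);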

All of this allows for small mutations – new things which can compete for a niche in the very real world – and, unlike academic and closed committee processes, it can help create new, high-level abstractions based on real, verified, shared need, acceptance and understanding.  In other words, we will have a Platform which, like life itself, is highly adaptive, able to survive complex changes in pressures and likely to last beyond any of our lifetimes.

Regressive Disenfranchisement: Enhance, Fallback or Something Else

My previous post, “This is Hurting Us All: It’s time to stop…”, seems to have caused some debate because in it I mentioned delivering users of old/unsupported browsers a 403 page.  This is unfortunate, as the 403 suggestion was not the thrust of the article but a minor comment at the end.  This post takes a look at the history of how we’ve tried dealing with this problem, successes and failures alike, and offers some ideas on how an evergreen future might impact the problem space and solutions going forward.

A History of Evolving Ideas

Religious debates are almost always wrong: almost no approach is entirely without merit, and the more ideas we mix and the more things change, the more we make things progressively better.  Let’s take a look back at the history of the problem.

In the Beginning…

In the early(ish) days of the Web there was some chaos: vendors were adding features quickly, often before they were even proposed as a standard. The things you could do with a Web page in any given browser varied wildly. Computers were also more expensive and bandwidth considerably lower, so it wasn’t uncommon to have a significant number of users without those capabilities, even if they had the right “brand”.

As a Web developer (or a company hiring one), you had essentially two choices:

  • Create a website that worked everywhere, but was dull and non-compelling, and used techniques and approaches which the community had already agreed were outdated and problematic – essentially hurting its marketability and creating tech debt.
  • Choose to develop better code with more features and whiz-bang – write for the future now and wait for the Internet to catch up, maybe even help encourage it, and not worry about all of the complexity and hassle.

“THIS SITE BEST VIEWED WITH NETSCAPE NAVIGATOR 4.7 at 800×600” 

Many people opted for the latter choice and, while we balk at it, it wasn’t exactly a stupid business decision.  Getting a website wasn’t a cheap proposition, and it was a wholly new business expense; lots of businesses didn’t even have internal networks or significant business software.  How could they justify paying people good money for code that was intended to be replaced as soon as possible?

Very quickly, however, people realized that even though they put up a notice with a “Get a better browser” kind of link, that link was delivered along with a really awful page which made the company look bad.

Browser Detection

To deal with this problem, sites started detecting your browser via the user-agent string and giving you some simpler version of the “Your browser sucks” page which at least didn’t make them look unprofessional: a broken page is the worst thing your company can put in front of users… Some visitors might even associate their need for a “modern browser” with being “ahead of the curve”.

LIAR!:  Vendors game the system

Netscape (at this point) was the de facto standard of the Web and Microsoft was trying desperately to break into the market – but lots of sites were just telling IE users “no”.  The solution was simple: lie.  And so it was that Microsoft got a fake ID and walked right past the bouncer, by publicly answering the question “Who’s asking?” with “Netscape!”.

Instead of really fixing that system, we simply decided that it was too easy to game and moved on to other ideas, like checking for Microsoft-specific APIs such as document.all to differentiate on the client.
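A rough sketch of the kind of client-side branching that era produced – inferring which browser you were in from a proprietary object rather than detecting the feature you actually needed (the exact checks sites used varied; these are illustrative):

  // Object inference, roughly late-1990s style: guess the browser from a
  // vendor-specific object and branch on that guess. (Illustrative only.)
  function getElementCrossBrowser(id) {
    if (document.all) {
      return document.all[id];        // presumed to be Internet Explorer
    } else if (document.layers) {
      return document.layers[id];     // presumed to be Netscape 4
    } else {
      return document.getElementById(id);
    }
  }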

Falling Back

As HTML began to grow and pages became increasingly interactive, we introduced the idea of fallback: if a user agent didn’t support script, or object/embed, or something else, give its users some alternative content.  In user interface and SEO terms, that is a pretty smart business decision.

One problem: very often, fallback content wasn’t used.  When it was, the fallback usually said, essentially, “Your browser sucks, so you don’t get to see this; you should upgrade”.

The Cross-Browser Era and the Great Stagnation

OK, so now we had to deal with more than one browser, and at some point they both had competing ideas which weren’t standard but were far too useful to ignore.  We created a whole host of solutions:

We came up with safe subsets of supported CSS, learned all of the quirks of the browsers and doctypes, and developed libraries that created new APIs which could switch code paths and do the right thing with script APIs.

As you would expect, we learned things along the way that seem obvious in retrospect: Certain kinds of assumptions are just wrong.  For example:

  • Unexpected vendor actions that might increase the number of sites a user can view with a given browser aren’t unique to Microsoft.  Lots of solutions that switched code paths based on document.all started breaking as Opera copied it, but not all of Microsoft’s APIs.  Feature detection (see the sketch after this list) is better than basing logic on assumptions about the current state of vendor APIs.
  • All “support” is not the same – feature detection alone can be wrong.  Sometimes a standard API or feature is there, but it is so woefully incomplete or wrong that you really shouldn’t use it.
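A hedged sketch of what those lessons pointed toward – detect the capability you actually need, and, where experience showed that a feature could be present but broken, probe its behavior rather than just its existence (the specific checks here are illustrative, not a recommendation):

  // Feature detection: branch on the capability itself rather than on
  // which vendor we think we are talking to. (Checks are illustrative.)
  function supportsQuerySelector() {
    return typeof document.querySelector === 'function';
  }

  // "All support is not the same": existence alone can mislead, so some
  // libraries also probed behavior before trusting a feature.
  function supportsWorkingJSONParse() {
    if (typeof JSON === 'undefined' || typeof JSON.parse !== 'function') {
      return false;
    }
    try {
      return JSON.parse('{"a": 1}').a === 1;
    } catch (e) {
      return false;
    }
  }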

And all of them still involved some sense of developing for a big market share rather than “everyone”.  You were almost always developing for the latest browser or two for the same reasons listed above – only the justification was even greater as there were more APIs and more browser versions.  The target market share was increasing, but not aimed at everyone – that would be too expensive.

Progressive Enhancement

Then, in 2003 a presentation at SXSW entitled “Inclusive Web Design For the Future” introduced the idea of “progressive enhancement” and the world changed, right?

We’re all familiar with the examples: a list of links that uses some unobtrusive JavaScript to add a more pleasant experience for people with JavaScript-enabled browsers, and examples that go a step further and do some feature testing to make the experience a little better still if your browser has additional capabilities, while still delivering the crux of the content.  It gets better progressively, along with capabilities.
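A minimal sketch of that classic pattern, assuming a hypothetical list of image links marked with class="gallery": the markup works on its own as plain links, and script – if it runs at all – only layers a nicer experience on top.

  // Unobtrusive enhancement: the page is a working list of links with a
  // (hypothetical) class="gallery"; if script runs, clicks are upgraded
  // to an in-page preview instead of a full navigation.
  document.addEventListener('DOMContentLoaded', function () {
    var gallery = document.querySelector('ul.gallery');
    if (!gallery) {
      return; // Nothing to enhance; the links keep working as links.
    }
    gallery.addEventListener('click', function (event) {
      var link = event.target.closest('a');
      if (!link) {
        return;
      }
      event.preventDefault();
      var preview = document.createElement('img');
      preview.src = link.href;
      preview.alt = link.textContent;
      gallery.parentNode.appendChild(preview);
    });
  });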

Hold that Thought…

Let’s skip ahead a few years and think about what happened:  Use of libraries like jQuery exploded and so did interactivity on the Web, new browsers became more mainstream and we started getting some forward progress and competition again.

In 2009, Remy Sharp introduced the idea of polyfills – code that fills the cracks and provides slightly older browsers with the same standard capabilities as the newer ones.  I’d like to cite his Google Plus post on the history:

I knew what I was after wasn’t progressive enhancement because the baseline that I was working to required JavaScript and the latest technology. So that existing term didn’t work for me.

I also knew that it wasn’t graceful degradation, because without the native functionality and without JavaScript (assuming your polyfill uses JavaScript), it wouldn’t work at all.
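A small sketch of the pattern Remy was naming: check for the native feature and, only if it is missing, supply a script implementation of the same standard behavior.  (Array.prototype.forEach is just a convenient example here; real polyfills of it handle more edge cases.)

  // A polyfill in miniature: if the browser already has the standard
  // Array.prototype.forEach, do nothing; otherwise fill the crack with
  // a script implementation of (roughly) the same documented behavior.
  if (!Array.prototype.forEach) {
    Array.prototype.forEach = function (callback, thisArg) {
      for (var i = 0; i < this.length; i++) {
        if (i in this) {
          callback.call(thisArg, this[i], i, this);
        }
      }
    };
  }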

In the past few years, all of these factors have increased, not decreased.  We have more browsers, more common devices with variant needs, more OS variance, and an explosion of new features and UX expectations.

Let’s get to the point already…

Progressive Enhancement: I do not think it means what you think it means.  The presentation at SXSW aimed to “leave no one behind” by starting from literally text only and progressively enhancing from there.  It was in direct opposition to the previous mentality of “graceful degradation” – fall back to a known quantity if the minimum requirements are not met.

What we’re definitely not generally doing, however, is actually living up to the full principles laid out in that presentation for anything more than the most trivial kinds of websites.

Literally every site I have ever known has “established a baseline” of which browsers they will “support” based on market share.  Once a browser drops below some arbitrary percentage, they stop testing for or considering it to some extent.  Here’s the thing: this is not what that original presentation was about.  You can pick and choose your metrics, but the net result is that people will hit your site or app with browsers you no longer support – and what will they get?

IE<7 is “dead”.  Quite a large number of sites/apps/libraries have announced that they no longer support IE7, and many are beginning to drop support for IE8.  When we add in all of the users that we are no longer testing for, it becomes a significant number of people… So what happens to those users?

In an ideal, progressively enhanced world they would get some meaningful content, progressively graded according to their browser’s abilities, right?

But in Reality…

What does the online world of today look like to someone, for example, still using IE5?

Here’s Twitter:

Twitter is entirely unusable…

And Reddit:

Reddit is unusable… 

Facebook is all over the map.  Most of the public pages that I could get to (couldn’t login) had too much DOM/required too much scroll to get a good screenshot of – but it was also unusable.

Amazon was at least partially navigable, but I think that is partially luck because a whole lot of it was just an incoherent jumble:

Oh the irony.

I’m not cherry-picking, either – most sites (even ones you’d think would work because they aren’t very feature-rich or ‘single page app’-like) just don’t work at all.  Ironically, even some that are about design and progressive enhancement just cause that browser to crash.

FAIL?

Unless your answer to the question “which browsers can I use on your site and still have a meaningful experience?” is “all of them”, you have failed in the original goals of progressive enhancement.

Here’s something interesting to note: a lot of people mention that Yahoo was quick to pick up on the better ideas about progressive enhancement and introduced “graded browser support” in YUI.  In it, it states:

Tim Berners-Lee, inventor of the World Wide Web and director of the W3C, has said it best:

“Anyone who slaps a ‘this page is best viewed with Browser X’ label on a Web page appears to be yearning for the bad old days, before the Web, when you had very little chance of reading a document written on another computer, another word processor, or another network.”

However, if you read it you will note that it identifies:

C-grade browsers should be identified on a blacklist.

and if you visit Yahoo.com today with Internet Explorer 5.2 on the Mac, here is what you will see:

Your browser sucks.

Likewise, here’s what happens on Google Plus:

You must be at least this tall to ride this ride…

In Summary…

So what am I saying exactly?  A few things:

  • We do have to recognize that there are business realities and cost to supporting browsers to any degree.  Real “progressive enhancement” could be extremely costly in cases with very rich UI, and sometimes it might not make economic sense.  In some cases, the experience is the product.  To be honest, I’ve never really seen it done completely myself, but that’s not to say it doesn’t exist.
  • We are right on the cusp of an evergreen world, which is a game changer.  In an evergreen world, we can use ideas like polyfills, prollyfills and “high-end progressive enhancement” very efficiently, as there are no more far-behind laggards entering the system.
  • There are still laggards in the system, and there likely will be for some time to come – we should do what we can to get as many as possible of those who can update to do so, and decrease the scope of this problem.
  • We are still faced with choices that are unpleasant from a business perspective for how to deal with those laggards in terms of new code we write.  There is no magic “right” answer.
  • It’s not entirely wrong to prevent your users from seeing totally broken stuff that you’d prefer they not experience and associate with you.  If you are going to write them off anyway (as the examples above do), it is considerably friendlier to tell them so, and there is at least a chance that you can get them to upgrade.
  • In most cases, however, the Web is about access to content – so writing anyone off might not be the best approach.  Instead, it might be worth investigating a new approach; here’s one suggestion that might work even for complex sites: design a single, universal fallback version of the content (hopefully one which still unobtrusively notifies users why they are getting it and prompts them to go evergreen) which should work on even very old browsers to deliver meaningful, but probably comparatively non-compelling, content and interactions – and deliver that to non-evergreen browsers and search engines.  Draw the line at evergreen and enhance/fill from there, as sketched below.
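One hypothetical way to draw that line on the client is a small capability test: the page is served as the universal fallback, and a tiny script loads the enhanced application only for browsers that clear a modern baseline.  The specific checks and the script URL below are illustrative assumptions, not a recommendation.

  // Hypothetical "draw the line at evergreen" check. Browsers passing the
  // baseline get the enhanced application layered on top; everything else
  // keeps the universal fallback content. (Checks and URL are made up.)
  if ('querySelector' in document &&
      'addEventListener' in window &&
      'localStorage' in window) {
    var script = document.createElement('script');
    script.src = '/assets/enhanced-app.js'; // hypothetical bundle
    script.async = true;
    document.head.appendChild(script);
  } else {
    // Old browser: leave the fallback content alone, optionally flag it
    // with a class so CSS can surface the "please go evergreen" notice.
    document.documentElement.className += ' legacy-fallback';
  }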

Dear W3C TAG

Last week, following tradition, the TAG had an unofficial teleconference introducing old and new members and discussing the future and focus of the TAG itself.  During this discussion (and prior to the election) the idea of bringing the TAG closer to, and making it more known in and relevant to, the development community was raised (and seemingly supported by the co-chair).  Given this, I have offered the input of one developer (me), and I expand on some of those points and their relevance here for anyone who cares to understand why I think they are important.

Read “build consensus” in the charter as “in the larger community”

Convince us.  Get us interested, excited and involved.  Given its very small size and makeup, the TAG provides (or can provide) a unique bully pulpit that has been under-utilized.  One of the few things explicitly within the TAG’s charter is to build consensus, and I think that has to be read (though historically it seems not to have been) as building consensus within the larger community.

Foster a More Direct Relationship With The Community

In order to use the bully pulpit, first people have to care.  Most developers didn’t even know that the TAG existed until very recently.  The connections/credentials that the newly elected members have in the developer community are just one reason I actively campaigned on their behalf.  TAG members should hear, if not experience first hand, the pain and feedback that comes from the wider development community.  We don’t ask that they solve each issue, merely that we feel that they understand them and aren’t off in an ivory tower somewhere saying “Let them eat cake” while we all starve.

A more public presence of TAG / members

One of the most interesting and effective things to happen in the Semantic Web, in my opinion, was when TimBL did a couple of TED Talks.  We need representatives that lots of people are willing to listen to, and I think we’ve just elected some good ones.  I, for one, would love to see blogs and talks by members (taking their TAG hats off if necessary) just to let people know that they are out there, involved and thinking deeply about significant issues, whether those issues get picked up and addressed formally by the TAG or not.

Back in the late 1990’s and early 2000’s TimBL wrote a series of thoughts on design issues (http://www.w3.org/DesignIssues) including one that I love called “The Evolution of a Specification” (http://www.w3.org/DesignIssues/Evolution.html) which contains a lot of great insights and thoughts.  With a decade and a half in retrospect, I’d love to hear updated thoughts from him and others.

Consider how W3C process/policy affects long-term health

Noah (the TAG co-chair) said,  “The TAG’s charter is to focus on the long-term architectural viability of the Web, but we need to do a much better job of having impact year-by-year. My perception is that these goals are sometimes in tension. How do we help a community that’s implementing “early and often” to also build an architecture that’s clean and will scale for decades?”

Historically, within the W3C at large it does seem that those two things are often (maybe always) in tension, but it doesn’t seem that they inherently need to be, and I think that is something I would love to see the TAG discuss.  A healthy architecture requires a system and processes that promote health, and I feel that there is currently a lot that doesn’t.

An inordinate amount of fuss, time and energy is spent on what really amounts to bike-shedding and formalities observed because of some other obscure formality – which, at the end of the day, seems to many of us at best silly and at worst confusing.  I’d like to see the TAG comment on whether fixing some of this is actually part of creating a stable and healthy architecture – I think it is.

I think you can sum up a lot of my thoughts on this as: paving cowpaths implies that cowpaths can be created in the first place, and really, they often can’t.  I think that is unhealthy and leads to an inordinate number of missteps.

Layers, layers, layers

If the TAG deals with helping to create layers and fundamental principles that allow, maybe even encourage, evolution, then its work (and the W3C’s) will be relevant for decades to come, and we will be able to collect data to show it.  Even if certain layers are improved or replaced, we will no more mourn them than we mourn the loss of manual typesetting when we read a book.

What can we learn from phone numbers?

The co-chair also cited the telephone numbering system, created 90+ years ago, as an example of something that has stood the test of time and a kind of model for one of the TAG’s larger goals: “… part of the TAGs role is to help the community build a Web that will be viable in growing in 50 years, and maybe 100.”

Here are what lessons I think are worth drawing from that:

  • Like much of the early Web’s success, its success appears to derive largely from the fact that it was a small, targeted thing that wasn’t necessarily technically the best – it was the best that they could sell.
  • The inventors never imagined, or tried to imagine, the myriad things that telephone numbers would be used for 50 and 100 years out – and if they had, you can bet they would have missed the mark substantially.
  • While it was created in 1923, it wasn’t really the worldwide standard until it beat out all of the competition much later.  It is entirely plausible that in the intervening years something better could have come along and displaced it.
  • It was decoupled from the larger system enough, by design, that phones, transmission lines, and wholly new ideas using this infrastructure could all evolve more or less independently.

I am earnestly looking forward to hearing/reading more from this group and hope that other voices in the community (like you?) will contribute to the conversations and help make our collective future a better place.

The New Gang Of Four

For programmers, when we hear “Gang of Four” we pretty much immediately conjure up an image of the blue and white book that most of us have sitting on a shelf somewhere, in which four very smart people sat down and documented design patterns which would shape the way people think about writing software for many years to come.  However, the concept of a “Gang of N” is used frequently in politics the world round to denote a group of at least somewhat like-minded people who form a voting bloc big enough to matter, and who thus wield a greater degree of power than any one of them could individually… power which is often used to bring about significant changes rapidly.

The Powers that Be

Most developers have probably never heard of the W3C Technical Architecture Group, so if you haven’t, here it is in a nutshell: this is a very small group of people at the W3C – a mere 9 people to be exact, chaired by Tim Berners-Lee himself – responsible for, sort of, envisioning the architecture of the Web itself and championing it.  I’m not sure exactly how much formal power they have, but at the very least the group wields the power of the bully pulpit at the W3C, and therefore, to some significant extent, influence over the actual focus that a lot of big groups drive toward.

The Need for Change

I will let you read more in the links below, but to sum up: most of the stuff that this group has historically concerned itself with has little to do with the reality of some of the most important pieces of the Web platform that most of you reading this live and breathe every day.  If you want to think deep thoughts about how the Web is documents full of data interlinked through URIs – this is traditionally the group for you.  Don’t get me wrong – I’m not trying to trivialize it, but the Web is about so, so much more than this group has actively discussed, and in my opinion this is creating a sad new reality in which the once proud W3C seems less important and less influential to the everyday developer.  If you now reference the WHATWG HTML Living Standard over the current W3C version, prefer JSON despite the W3C’s enormous push of the “technically superior” XML family of languages and tools, and really want more focus on things that you actually deal with programmatically – then you see the problem.

One GIANT Chance to Make a Big Change

Only 5 of the 9 positions in that group are elected (Sir Tim is the chair and 3 positions are appointed – hmmm).  However, it just so happens that this time around 4 of those 5 spots are up for grabs… and somehow miraculously we managed to get 4 people officially nominated who could really shake things up!

  • Anne Van Kesteren
  • Alex Russell
  • Yehuda Katz
  • Marcos Caceres

If you don’t know who these guys are – I think you just aren’t paying a whole lot of attention…   They are smart, super active (dozens of influential working groups and important open source projects) and not the sort of guys who sit quietly and go along if they disagree.  A few of them have written some good articles about what they think is wrong with TAG now and what they plan to do if elected [1][2][3][4][5].

Basically – Make it concentrate more on the stuff you really care about:

  • Layered Architecture – increasing levels of abstraction
  • Close coordination with ECMA
  • Extensibility – Web Components / Shadow DOM – new APIs including composition and templates
  • Bring a level of “from the trenches” representation.
  • In short:  Advocate and use those powers for what people like us really care most about.

Let’s Get These Guys Elected!

Here’s what you can’t do: Cast an actual vote.  Only the 383 member organizations of W3C actually get to vote – but they get to vote once for every open slot.

Here is what you can do: Lobby… Use the power of social media to promote this article saying HEY MEMBER ORGS — THESE FOUR ARE WHO WE WANT YOU TO CAST YOUR VOTE FOR… Tweet… +1… Like… Blog… Make Memes…

C’mon interwebs – do what you do so well…