Logical Pseudo Selectors: A Proposal

When you get right down to it, CSS rules select matching sets of elements.  Sets and logic gates are two of the most fundamental and powerful concepts in computer science – at some level, just about everything else builds upon them.  Despite this, until pretty recently CSS never had features that hinted at the concept of a set, and that seems a shame, because integrating the language of logic and sets could be a powerful combination.

What’s changing?

After many years of “the great stagnation”, HTML and CSS are moving along quickly again.  HTML has increased our ability to express semantics, and CSS is adding new selector powers.  Selectors Level 3 is a done deal, work is well underway on Selectors Level 4, and we’ve already got a wiki of features for Selectors Level 5.  Likewise, we are adding long-needed features like Regions, scoped stylesheets, Shadow DOM and Generated Content.  All of these things combine to create a really positive future in which CSS can begin to live up to its potential, and the visuals pertaining to good structures can be managed via CSS without requiring changes to markup.

Two pseudo-classes in particular – in a group called Logical Combinators – currently :matches (which some implementations call :any) and :not – begin to bring with them some interesting possibilities.  Currently they are very limited and can only accept simple (or compound – depending on the level) selectors, but eventually they could accept complex selectors.  When that day comes, we could begin talking about things in terms of sets.
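
To make that limitation concrete, here is a minimal sketch (the class names are hypothetical): today’s :not() accepts only a single simple selector, so it can test the element itself but not a path through its ancestors.

/* Allowed today: :not() with a single simple selector */
li:not(.active) {
  color: gray;
}

/* Not yet allowed – a complex (descendant) selector inside :not():
   li:not(.archived li) { color: gray; }
*/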

A Proposal

4 Logical Combinators which can take complex selectors:

  • :anyof (:matches?)
    Filters elements based on whether they match ANY of the provided selector(s) (that is, the selectors are OR’ed together), which may express different paths in the same tree.
  • :allof
    Filters elements based on whether they match ALL of the provided selector(s) (that is, the selectors are AND’ed together), which may express different paths in the same tree.
  • :noneof (:not?)
    Filters elements based on whether they match NONE of the provided selector(s) (that is, the selectors are NOR’ed together), which may express different paths in the same tree.
  • :oneof
    Filters elements based on whether they match EXACTLY ONE of the provided selector(s) (that is, the selectors are XOR’ed together), which may express different paths in the same tree.
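
As a hedged sketch of the intended set logic (the class names here are hypothetical, and I am assuming each argument is read the way :matches() reads a selector – the element being filtered is the subject of the rightmost compound), the two-argument forms line up with the familiar gates:

/* OR  – a .promo with a .sidebar ancestor, a .footer ancestor, or both */
.promo:anyof(.sidebar .promo, .footer .promo) { outline: 1px solid; }

/* AND – a .promo with BOTH a .sidebar ancestor AND a .collapsed ancestor;
   logically the same element set as chaining two single-argument :anyof()s */
.promo:allof(.sidebar .promo, .collapsed .promo) { display: none; }

/* NOR – a .promo with neither a .sidebar nor a .footer ancestor */
.promo:noneof(.sidebar .promo, .footer .promo) { display: block; }

/* XOR – a .promo with exactly one of the two ancestors, not both */
.promo:oneof(.sidebar .promo, .footer .promo) { font-style: italic; }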

An Example:
Given some rich markup that describes a structure in some local semantic terms (semantics important to my domain)…


<div class="cars">
<div class="domestic">
<div class="new" id="a">
<div class="cheap small efficient"><p class="car">2012 Ford Fiesta</p></div>
<div class="quality efficient"><p class="car">2012 Chrysler 300</p></div>
<div class="quality fast performance"><p class="car">2012 Dodge Charger</p></div>
</div>
<div class="used" id="b">
<div class="cheap small efficient"><p class="car">2009 Ford Fiesta</p></div>
<div class="cheap"><p class="car">2004 Chevy Malibu</p></div>
<div class="quality fast"><p class="car">2010 Dodge Charger</p></div>
</div>
</div>
<div class="foreign">
<div class="new" id="c">
<div class="cheap"><p class="car">2012 Kia Forte</p></div>
<div class="quality"><p class="car">2012 BMW 525i</p></div>
</div>
<div class="used" id="d">
<div class="cheap"><p class="car">2009 Kia Forte</p></div>
<div class="cheap efficient"><p class="car">2005 Toyota Camry</p></div>
<div class="quality"><p class="car">2009 Audi R5</p></div>
</div>
</div>
</div>


I can use logical sets to add styles…


/* Style the cars that are new and have quality as well as domestic and performance. */
.cars div:allof(.new .quality, .domestic .performance) p {
  color: red;
}
/* Style the cars that are foreign and used, or domestic, new and efficient. */
.cars div:anyof(.foreign .used, .domestic .new .efficient) p {
  color: blue;
}
/* Style the efficient cars that are neither domestic and used nor foreign and new. */
.efficient:noneof(.domestic .used, .foreign .new) p {
  color: green;
}
/* Style the cars that are only one of quality or fast (but not both). */
.cars div:oneof(.quality, .fast) p {
  font-weight: bold;
}


And they would style up as…
[Screenshot: logical combinators in action]

Prollyfilling…

All of the above are prollyfilled currently and come “out of the box” with hitchjs (works in IE9 and all evergreen browsers) – they are all prefixed with -hitch-*.  If you’d like to play around with it, simply add the following script include:

<script type="text/javascript" src="http://www.hitchjs.com/dist/hitch-0.6.1.min.js"></script>

and add a data attribute to any <style> or <link> tag which contains hitch prollyfilled rules, for example:


<style type="text/css" data-hitch-interpret>
/* Style the cars that are foreign and used, or domestic, new and efficient. */
.cars div:-hitch-anyof(.foreign .used, .domestic .new .efficient) p {
  color: blue;
}
</style>

Read the original docs we wrote for this.

Regressive Disenfranchisement: Enhance, Fallback or Something Else

My previous post, “This is Hurting Us All: It’s time to stop…”, seems to have caused some debate because in it I mentioned delivering users of old/unsupported browsers a 403 page.  This is unfortunate, as the 403 suggestion was not the thrust of the article but a minor comment at the end.  This post takes a look at the history of how we’ve tried dealing with this problem, successes and failures alike, and offers some ideas on how an evergreen future might impact the problem space and solutions going forward.

A History of Evolving Ideas

Religious debates are almost always wrong: almost no approach is entirely meritless, and the more ideas we mix and the more things change, the more we make things progressively better.  Let’s take a look back at the history of the problem.

In THE Beginning….

In the early(ish) days of the Web there was some chaos: vendors were adding features quickly, often before they were even proposed as a standard. The things you could do with a Web page in any given browser varied wildly. Computers were also more expensive and bandwidth considerably lower, so it wasn’t uncommon to have a significant number of users without those capabilities, even if they had the right “brand”.

As a Web developer (or a company hiring one), you had essentially two choices:

  • Create a website that worked everywhere, but was dull and non-compelling, and used techniques and approaches which the community had already agreed were outdated and problematic – essentially hurting marketability and creating tech debt.
  • Choose to develop better code with more features and whiz-bang – write for the future now and wait for the internet to catch up, maybe even help encourage it, and not worry about all of the complexity and hassle.

“THIS SITE BEST VIEWED WITH NETSCAPE NAVIGATOR 4.7 at 800×600” 

Many people opted for the latter choice and, while we balk at it, it wasn’t exactly a stupid business decision.  Getting a website wasn’t a cheap proposition, and it was a wholly new business expense; lots of businesses didn’t even have internal networks or significant business software.  How could they justify paying people good money for code that was intended to be replaced as soon as possible?

Very quickly, however, people realized that even though they put up a notice with a “Get a better browser” kind of link, that link was delivered along with a really awful page which made the company look bad.

Browser Detection

To deal with this problem, sites started detecting your browser via its user-agent string and giving you a simpler version of the “Your browser sucks” page which at least didn’t make them look unprofessional: a broken page is the worst thing your company can put in front of users… Some people might even associate the site’s need for a “modern browser” with being “ahead of the curve”.

LIAR!:  Vendors game the system

Netscape (at this point) was the de-facto standard of the Web and Microsoft was trying desperately to break into the market – but lots of sites were just telling IE users “no”.  The solution was simple:  Lie.  And so it was that Microsoft got a fake ID and walked right past the bouncer, by publicly answering the question “Who’s asking?” with “Netscape!”.

Instead of really fixing that system, we simply decided that it was too easy to game and moved on to other ideas, like checking for Microsoft-specific APIs such as document.all to differentiate on the client.

Falling Back

As HTML began to grow and pages became increasingly interactive, we introduced the idea of fallback. If a user agent didn’t support script, or object/embed or something, give them some content. In user interface and SEO terms, that is a pretty smart business decision.

One problem: very often, fallback content wasn’t used.  When it was, the fallback usually said essentially “Your browser sucks, so you don’t get to see this, you should upgrade”.

the CROSS browser era and the great stagnation

Ok, so now we had to deal with more than one browser, and at some point they both had competing ideas which weren’t standard but were far too useful to ignore.  We created a whole host of solutions:

We came up with safe subsets of supported CSS and learned all of the quirks of the browsers and doctypes, and we developed libraries that created new APIs which could switch code paths and do the right thing with script APIs.

As you would expect, we learned things along the way that seem obvious in retrospect: Certain kinds of assumptions are just wrong.  For example:

  • Unexpected vendor actions that might increase the number of sites a user can view with a given browser aren’t unique to Microsoft.  Lots of solutions that switched code paths based on document.all started breaking as Opera copied it, but not all of Microsoft’s APIs.  Feature detection is better than basing logic on assumptions about the current state of vendor APIs.
  • All “support” is not the same – feature detection alone can be wrong.  Sometimes a standard API or feature is there, but it is so woefully incomplete or wrong that you really shouldn’t use it.

And all of them still involved some sense of developing for a big market share rather than “everyone”.  You were almost always developing for the latest browser or two for the same reasons listed above – only the justification was even greater as there were more APIs and more browser versions.  The target market share was increasing, but not aimed at everyone – that would be too expensive.

Progressive Enhancement

Then, in 2003 a presentation at SXSW entitled “Inclusive Web Design For the Future” introduced the idea of “progressive enhancement” and the world changed, right?

We’re all familiar with examples of a list of links that use some unobtrusive JavaScript to add a more pleasant experience for people with JavaScript-enabled browsers.  We’re all familiar with examples that take that a step further and do some feature testing to make the experience still a little better if your browser has additional features, while still delivering the crux of the content.  It gets better progressively along with capabilities.

Hold that Thought…

Let’s skip ahead a few years and think about what happened:  Use of libraries like jQuery exploded and so did interactivity on the Web, new browsers became more mainstream and we started getting some forward progress and competition again.

In 2009, Remy Sharp introduced the idea of polyfills – code that fills the cracks and provides slightly older browsers with the same standard capabilities as the newer ones.  I’d like to cite his Google Plus post on the history:

I knew what I was after wasn’t progressive enhancement because the baseline that I was working to required JavaScript and the latest technology. So that existing term didn’t work for me.

I also knew that it wasn’t graceful degradation, because without the native functionality and without JavaScript (assuming your polyfill uses JavaScript), it wouldn’t work at all.

In the past few years, all of these factors have increased, not decreased.  We have more browsers, more common devices with variant needs, more OS variance, and an explosion of new features and UX expectations.

Let’s get to the point already…

Progressive Enhancement: I do not think it means what you think it means.  The presentation at SXSW aimed to “leave no one behind” by starting from literally text only and progressively enhancing from there.  It was in direct opposition to the previous mentality of “graceful degradation” – fall back to a known quantity if the minimum requirements are not met.

What we’re definitely not generally doing, however, is actually living up to the full principles laid out in that presentation for anything more than the most trivial kinds of websites.

Literally every site I have ever known has “established a baseline” of what browsers they will “support” based on market-share.  Once a browser drops below some arbitrary percentage, they stop testing/considering those browsers to some extent.  Here’s the thing:  This is not what that original presentation was about.  You can pick and choose your metrics, but the net result is that people will hit your site or app with browsers you no longer support and what will they get?

IE<7 is “dead”.  Quite a large number of sites/apps/libraries have announced that they no longer support IE7, and many are beginning to drop support for IE8.  When we add in all of the users that we are no longer testing for, it becomes a significant number of people… So what happens to those users?

In an ideal, progressively enhanced world they would get some meaningful content, progressively graded according to their browser’s abilities, right?

But in Reality…

What does the online world of today look like to someone, for example, still using IE5?

Here’s Twitter:

Twitter is entirely unusable…

And Reddit:

Reddit is unusable… 

Facebook is all over the map.  Most of the public pages that I could get to (I couldn’t log in) had too much DOM and required too much scrolling to get a good screenshot – but it was also unusable.

Amazon was at least partially navigable, but I think that is partially luck because a whole lot of it was just an incoherent jumble:

Oh the irony.

I’m not cherry-picking either – most sites (even ones you’d think would work because they aren’t very feature-rich or ‘single page app’-like) just don’t work at all.  Ironically, even some that are about design and progressive enhancement just cause that browser to crash.

FAIL?

Unless your answer to the question “which browsers can I use on your site and still have a meaningful experience?” is “all of them”, you have failed in the original goals of progressive enhancement.

Here’s something interesting to note:  A lot of people mention that Yahoo was quick to pick up on the better ideas about progressive enhancement and introduced “graded browser support” in YUI.   In it, it states

Tim Berners-Lee, inventor of the World Wide Web and director of the W3C, has said it best:

“Anyone who slaps a ‘this page is best viewed with Browser X’ label on a Web page appears to be yearning for the bad old days, before the Web, when you had very little chance of reading a document written on another computer, another word processor, or another network.”

However, if you read it you will note that it identifies:

C-grade browsers should be identified on a blacklist.

and if you visit Yahoo.com today with Internet Explorer 5.2 on the Mac here is what you will see:

Your browser sucks.

Likewise, here’s what happens on Google Plus:

You must be at least this tall to ride this ride…

In Summary…

So what am I saying exactly?  A few things:

  • We do have to recognize that there are business realities and costs to supporting browsers to any degree.  Real “progressive enhancement” could be extremely costly in cases with very rich UI, and sometimes it might not make economic sense.  In some cases, the experience is the product.  To be honest, I’ve never really seen it done completely myself, but that’s not to say it doesn’t exist.
  • We are right on the cusp of an evergreen world, which is a game changer.  In an evergreen world, we can use ideas like polyfills, prollyfills and “high end progressive enhancement” very efficiently, as there are no more “far behind laggards” entering the system.
  • There are still laggards in the system and there likely will be for some time to come – we should do what we can to get as many of them who can update to do so and decrease the scope of this problem.
  • We are still faced with choices that are unpleasant from a business perspective for how to deal with those laggards in terms of new code we write.  There is no magic “right” answer.
  • It’s not entirely wrong to prevent yourself from showing your users totally broken stuff that you’d prefer they not experience and associate with you.  If you are going to write them off anyway (as the examples above do), it is considerably friendlier to tell them so, and there is at least a chance that you can get them to upgrade.
  • In most cases, however, the Web is about access to content – so writing anyone off might not be the best approach.  Instead, it might be worth investigating a new approach.  Here’s one suggestion that might work for even complex sites: design a single, universal fallback experience (hopefully one which still unobtrusively notifies users why they are getting it and prompts them to go evergreen) which should work on even very old browsers and deliver meaningful, but probably comparatively non-compelling, content and interactions; serve that to non-evergreen browsers and search engines.  Draw the line at evergreen and enhance/fill from there – a rough sketch of one way to draw that line follows below.
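
As a purely illustrative sketch (not from the original post, and only one of several possible implementations – the class names are hypothetical), a CSS-only way to “draw the line” is to make the universal fallback the default and switch on the enhanced presentation only for browsers that pass a feature test via @supports; browsers that don’t understand @supports skip the block and keep the fallback:

/* Hypothetical class names: a minimal, CSS-only sketch of drawing the line
   at evergreen.  The fallback is the default; the enhanced UI is gated
   behind a feature test that older browsers simply ignore. */
.experience-enhanced { display: none; }
.experience-fallback { display: block; }

@supports (display: flex) {
  /* Any reasonably modern feature works as the gate. */
  .experience-enhanced { display: block; }
  .experience-fallback { display: none; }
}

In practice you would probably pair this with an equivalent check on the server or in script, so that non-evergreen browsers and search engines are not sent the enhanced markup at all – which is closer to what the suggestion above actually describes.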