Friendly Fire: The Fog of DOM

Any military is composed of individuals, grouped into successively larger units.  While all of these units strive for a single overarching goal, they don’t share consciousness and so, regardless of how well trained they are or how much intelligence they have, there is an inevitable amount of uncertainty that is impossible to remove.  This concept is sometimes termed “The Fog of War”, and one very unfortunate result is that damage is sometimes inflicted on one’s own side by an ally.  This is often referred to as “Friendly Fire” or “Blue on Blue”.  In this post, I’ll explain how these same concepts apply in the DOM and talk about how we can avoid the danger.

Any large company tends to organize things a bit like a military – individuals in units of units cooperating toward a common goal. If they’re using something like a CMS, one team might define some general layouts for reuse, another team creates content, and still other teams create reusable/shared ‘widgets’, all of which are assembled together (often dynamically) to create a whole page.

One instance of this might look like the figure below:

A common layout containing a company-wide standard header, main content with social sharing buttons, an aside/secondary content with shared ‘widgets’ and footer.

What’s interesting about this is that, at an observable level, only the third-party social sharing buttons are not created and maintained by the company itself, and only they need to be treated as a potentially hostile actor.  All of the other pieces are allies in the task of presenting and managing the DOM, and any distinction in how they are created, organized and maintained is, at some level, artificial and purely organizational.  Anyone who has worked in this sort of model will have experienced “The Fog of DOM”.  That is, preventing accidental violence against your allies is really hard – it’s simply too easy to accidentally select and operate on elements that aren’t “yours”.  Specifically, it’s not a security concern, it’s a cooperation/coordination concern: the fact that they all operate on the same DOM and make use of selectors means that unless there is perfect coordination, even careful use of a selector in a page can easily harm components and vice versa: friendly fire.
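To make the danger concrete, here is a minimal sketch (the class name and the widget are hypothetical, purely for illustration) of how perfectly innocent, page-level code can reach into a component it doesn’t own:

```js
// Hypothetical host-page code. Suppose the page's own markup uses
// <button class="action"> and a shared widget dropped into the page does too.
//
// The layout team might ship a document-wide rule like:
//
//   button.action { background: blue; }   /* silently restyles the widget too */
//
// ...and the page's JavaScript might do:
document.querySelectorAll('button.action').forEach((button) => {
  // Intended only for the page's own buttons, but this matches every
  // .action button in the document, including the ones inside the
  // shared widget: friendly fire.
  button.disabled = true;
});
```

Nothing in that sketch is malicious, and nothing is even unusual – which is exactly the problem.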

What’s perhaps most surprising is that you don’t actually need a large organization to experience this problem yourself, you just have to want to reuse code that you aren’t developing all at once.

Why is it so damned hard?

Primarily, I think the answer has a lot to do with how the Web has evolved.  First, there was just markup, so this wouldn’t have been much of a problem even if these approaches and large teams had existed way back then.  Then came JavaScript, which had its own default-global issues, but at least you could scope your code. Then came the real DOM and we could start doing some interesting things.  CSS’s aim was to let you write rules that make broad, sweeping statements like “All paragraphs should have a 1em margin” and then override them with more specific statements like “All paragraphs inside a section with the class `suggestions` should have a 2em left margin.”  Within the context of what we were doing, this was clearly useful.  Over time we’ve used selectors to identify elements in JavaScript as well, and our projects have continued to get bigger, more complex and more ambitious.

Now, while there are some actions I can take to prevent my component from affecting the page that hosts it, it’s incredibly hard to prevent the outer page from accidentally selecting and impacting elements inside components.  One mistake on either end could spell problems, and since the pieces are maintained separately, by different teams, keeping them constantly in check involves an incredible degree of coordination.  In the end, any shortcomings in that coordination usually mean investigation, hackery and guesswork to resolve.

Note again:  This isn’t about security.  I’m not concerned that my co-workers could steal my data – it’s their data too.  My users are their users, my company is their company.  We are allies.  It just needs to be plausible for us to easily collaborate and work together toward a common goal without a high chance of hurting our collaborators.  It still needs to be possible for the hosting page to say “Yes, I mean all buttons should be blue”, it’s just that this shouldn’t be the only (nor necessarily the default) position, because that makes it impossibly easy to harm your allies.

In short:  What the current design means for us, in many cases, is that it’s incredibly hard to “say what you mean” in a way that is likely to avoid the most basic kinds of friendly fire. We have gobs and gobs of selectors stated as document-wide rules, which become implausibly hard to reason about.  This makes them hard to write and debug.  Most sites today have hundreds of selectors, many have thousands – any of which could, theoretically, apply in many, many permutations.  So even if you manage to “get it right” for the time being, when something suddenly causes unexpected pain (and it will), it’s even more difficult to find out what’s causing it and why, because it requires untangling all of those conflicts in your head without creating new ones.

Is it “fixable”?

So, is it plausible to imagine a better future?  Is it possible to “fix” this problem, or at least make it markedly better, without completely re-inventing things?  There have been numerous proposals to do so over the years, but all of them failed to gain enough traction to cross the finish line.  There were efforts in XBL in the early 2000s, proposals to allow externally linked CSS to be mounted/scoped to an element in 2005, and what would become known as “scoped CSS” was discussed and in drafts in 2006.  That proposal was eventually implemented in Chrome behind a flag and then removed – removed, in part, because it doesn’t answer all of the questions that another proposal, Shadow DOM, does.

We seem to shift back and forth between answers that are too ambitious and answers that are too minimal, and once again, there’s a struggle for agreement.  And so, there’s an effort to step back and see if there is something less ambitious that we actually can agree on. Currently there’s a debate going on, pulling in two possible directions:  On one side is a group which posits that this is something that should be handled by CSS; on the other is a group which argues that it shouldn’t.  I am in the latter camp, and I’d like to explain why.

Key to both proposals is “isolation” – some kind of boundary at which authors can safely talk and reason about things ‘within a part of a tree’ without implying things outside that part unless they explicitly intend to do so.

Option A: “Handle it with CSS”

This approach suggests that something, perhaps a new @rule, could be used to identify ‘isolation points’ via selectors, and that subsequent rules could identify an isolation point for which they were relevant.  In fact, however, I believe this creates more questions and problems than it answers and actually increases authors’ cognitive load.  It raises some interesting follow-up questions for me:

  • Since selectors operate off the whole document, what if I specify that both .foo and .bar should be isolated (and my rules are in that order), and a .bar occurs inside a .foo? Is it isolated as part of the .bar isolation, or only the .foo isolation? (See the sketch after this list.)
  • Imagining I successfully identified an element as being isolated – would rules loaded or specified inside that element (by <link> or <style> or script) be scoped to that element automatically?
  • Combine the above two questions and it seems that any answers inevitably create a radically new concept in CSS that affects how selector matching is done. While @rules are indeed ‘different’ by nature, at least historically they don’t seem to raise these kinds of issues.  If the new @rule ignores all boundaries to create a new isolation, it does so without any kind of combinator explaining that, and it means that the default thing is still sort of an act of violence – you just have to manage avoiding harm through coordination.  If it doesn’t, then within your ‘component’ (whether it is a true Web Component or not) you have something conceptually very new, and it’s possible to ‘lose’ isolation through something as simple as a class change, which seems to largely describe the situation we’re in today.
  • What about the non-CSS end of things?  JavaScript uses selectors too, and has the same kinds of friendly fire issues.  With CSS, at least I have dev tools to analyze what rules are in play. When I use a selector via querySelectorAll or something similar, will it be context-dependent?  If it uses the same selector matching as CSS, this is a new kind of problem: it’s never been possible for CSS to affect which selectors match in the DOM, and since the boundary is not reflected in the DOM tree, understanding why something matches or doesn’t requires explanation.  If matching works differently in JavaScript, well, that is kind of confusing and leaves an important part of the problem unchanged.
  • Would this be readable, writable or deletable through JavaScript?  In other words, could JavaScript create an isolation boundary – are we explaining and exposing magic, or creating new magic? If boundaries aren’t creatable through JavaScript, it feels like we are going in the wrong direction; and even then, they’re based on selectors.  In other words, there is a backdoor into them because selectors are very easily mutable.  This is either a new vector for friendly fire or just a continuation of the one we have, depending on how you look at it – either way it seems bad.
  • It’s abstract rules.  As I said, at a certain point, it becomes incredibly hard to reason about rules and their permutations – especially when they are super mutable as anything defined by selectors would inherently be.  The bigger the variance and the more you attempt to break up the work, the harder it is to manage them, and when a problem does happen – how will you debug and understand it?
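To make the first question in that list concrete, here is a small sketch. The `@isolate` rule in the comment is not real or proposed syntax – it is purely an invented stand-in for “identify isolation points via selectors”:

```js
// Invented syntax, for illustration only – suppose some new at-rule declared
// isolation points by selector:
//
//   @isolate .foo;
//   @isolate .bar;
//   .bar p { margin-left: 2em; }   /* which isolation does this rule live in? */
//
// The markup the question is about – a .bar nested inside a .foo:
document.body.insertAdjacentHTML('beforeend', `
  <div class="foo">
    <div class="bar">
      <p>Matched as part of the .bar isolation, the .foo isolation, or both?
         And does a script removing class="bar" later change the answer?</p>
    </div>
  </div>
`);
```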

Option B: A New DOM Concept

The DOM is built of nodes with parent and child pointers.  Pretty much everything about selector matching is already built on this concept. If you can imagine a new kind of Node which simply connected a subtree with links that aren’t parent/child, that would be a game-changer. For purposes here, let’s just call it a “MountNode” to avoid conflation with all the bits of Shadow DOM, which is essentially this plus more.  Imagine that you’d connect this new MountNode to a hostElement and it would contain a mounted subtree.  Instead of the MountNode being just another child of the hostElement, it is linked through the hostElement’s .mountPoint property instead.  And instead of the MountNode having a .parentElement, it has a link back through its .hostElement property.  Just something simple like this gets you some interesting effects and offers advantages that none of the other historic approaches (except Shadow DOM) provided (there’s a rough sketch of the idea after the list below):

  • Everything that authors understand about trees more or less remains intact, with the slight caveat that there is a new kind of Node (not Element).
  • DOM inspectors can show this (see Chrome’s representation of “Shadow Root”) and you can easily explain what’s going on.  All of the selector APIs (JavaScript and CSS) work the same way.
  • A mount isn’t linked with parent/child nodes so all existing matching schemes work pretty much “as is” without change because they use parent/child traversal.
  • It’s intuitive, then, to create a combinator in CSS which allows you to select the mount explicitly, because that’s just what combinators do today: tell you what to consider during the tree-walking.
  • It’s minimal “new magic”.  We don’t create new things where CSS affects DOM, we don’t raise questions about matching or mutability of boundaries – it’s just a new, foundational piece of DOM that in all other respects works just the way it always has and allows us to experiment with and build higher-level answers.
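Here is the rough sketch promised above. Every name in it (`createMountNode`, `.mountPoint`, `.hostElement`, the `#widget-host` element) is hypothetical, invented purely to illustrate the shape of the idea; the closest thing that actually shipped is Shadow DOM’s `Element.attachShadow()`:

```js
// Hypothetical API – none of this exists; it only illustrates the idea above.
const host = document.querySelector('#widget-host');   // assumed to exist

// Imagine a new kind of Node (not Element) that holds a mounted subtree.
const mount = document.createMountNode();               // hypothetical
const widgetButton = document.createElement('button');
widgetButton.className = 'action';
mount.appendChild(widgetButton);

// The mount is NOT a child of the host – it hangs off a separate link,
// and links back through one of its own, so there is no parent/child path:
host.mountPoint = mount;                                // hypothetical property
console.assert(mount.hostElement === host);             // hypothetical property
console.assert(mount.parentElement === null);

// Because selector matching walks parent/child links, a document-wide query
// from the host page simply never descends into the mounted subtree:
document.querySelectorAll('button.action');             // doesn't match widgetButton
```

Shadow DOM’s `host.attachShadow({ mode: 'open' })` is essentially this plus more, which is why it’s the one historical approach that already offers these properties.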

That last point – minimal new magic – is especially important to me.  Is it possible to involve CSS and answer all of those questions in a sane way?  To be entirely honest, I don’t know.  What I do know, however, is that today this would involve a lot of speculation, decisions and bikeshedding.  I think this is better vetted in the community – and any standards org is just as likely (maybe more) to get it wrong as to get it right.  Small improvements can have outsized impacts and provide ground upon which we can continue to “build” an explainable and layered platform, so I lean toward the option that starts low and imperative, with the DOM, and provides a tool that can be used to solve both the stylistic and the imperative ends of the problem.

What do you think?  Share your thoughts at www-style@w3.org.

Many thanks to all my friends who proofread this at some phase of evolution over the past several weeks and helped me (hopefully) give it fair and thoughtful treatment: Chris Wilson, Sylvain Galineau, Brendan Eich, François Remy, Léonie Watson and Coralie Mercier.  

What Would Bruce Lee Do? The Tao of the Extensible Web.

Recently I participated in a panel on The Standards Process and the Extensible Web Manifesto at Edge Conf.  While ours was the very last session of the day (you can watch the video here), nearly every topic that day called out some mention of the Extensible Web Manifesto.  There were still plenty of people I met between sessions who privately said “So, what exactly is this Manifesto thing I keep hearing about, tell me more…”, and my day was filled with discussions at all levels about how exactly we should apply its words in practice, what exactly we meant by x, or specifically what we want to change about standards.

Given this, I thought it worth a post to share my own personal thoughts, and say some of the things I didn’t have an opportunity to say on the panel.   I’ll begin in the obvious place:  Bruce Lee.


What Would Bruce Lee Do?

What Would Bruce Lee Do? T-shirt from partyonshirts.com who totally agrees with me (I think, they didn’t actually say it, but I suspect it).

Let’s talk about Bruce Lee.  Chances are pretty good that you’ve heard of him from his film work and pop-culture status, but it’s (slightly at least) less commonly known that he was more than just an exceptional martial artist:  He was a trained philosopher and deep thinker – and he did something really new:  He questioned the status quo of traditional martial arts.  He published a book entitled “The Tao of Jeet Kune Do” and laid out an entirely new approach to fighting.  He taught, trained and fought in a way unlike anyone before him. 

He kicked ass in precisely the same way that standards don’t…  It started by being both intelligent and gutsy enough to make some observations…

There is value in principles, but nothing is too holy to be questioned…

You know, all I keep hearing is “the fight took too long,” “too much tradition, too much classical mess, too many fixed positions and Wing Chun”. You know everything that’s wrong, so fix it. – Linda Lee’s character to Bruce in “Dragon: The Bruce Lee Story” (a fictional, but inspiring scene)

Ok, so we all agree that standards are painful to develop and have sometimes gone very wrong.  From the outside they look, at times, almost comically inefficient, illogical or even self-defeating.  Many comments in our panel were about this in some form:  Too much classical mess – too much Wing Chun.

Bruce Lee saw through the strict rules of forms to what they were trying to accomplish, and he said “yeah, but that could be better- what about this instead”… and opponents fell.

Likewise, standards bodies and processes are not a religion.  Sometimes process is just process – it can be in the way, unproductive – even harmful if it isn’t serving its true purpose.  There are no holy texts (including the Manifesto itself) and there should be no blind faith in them:  We are thinking, rational people.  The Manifesto lays out guiding principles, with what we hope is a well-reasoned rationale, not absolutes.  Ideological absolutes suck, if you ask me, and Bruce’s approach makes me think he’d agree.

The Manifesto is pretty short, though; it begs follow-on. One recurrent theme was that many people at the conference seemed interested in discussing what they perceived to be a contradiction: that we have not redirected absolutely everything toward the lowest primitives possible (and, for others, a fear that we would).  I’d like to explain my own perspective:

It is often more important to ask the question than it is to answer it perfectly.

Again, the Manifesto lays out principles to strive for, not laws: Avoid large, deep sinks of high-level decisions which involve many new abstractions and eliminate the possibility of competition and innovation – we have finite resources and this is not a good use of them. Favor lower-level APIs that allow these things to occur instead – they will – and try to help explain existing features.  Where to draw the line between pragmatism and perfection is debatable, and that’s ok – we don’t have to have all of the perfect answers right now, and that’s good, because we don’t.

Paralysis is bad.  Perfect, as they say, can be the enemy of good:  Progress matters and ‘perfect’ is actually a non-option, because it’s an optimization problem that involves many things, including an ever-changing environment.  Realize that there are many kinds of advantage that have historically trumped theoretical purity every time – the technically best answer has never, to the best of my knowledge, ‘won’ – and shipping something is definitely way up there in terms of things you need to succeed. More importantly, we should do what we can to avoid bad. At some very basic level, all the Manifesto is really saying is:  “Hey standards bodies — let’s find ways to manage risk and complexity, to work efficiently and to use data for the really big choices”.

In my mind, the W3C TAG has done a great job of attempting to redirect spec authors, without paralyzing progress, toward asking the right questions: “Is this at an appropriately low level?”, “If it isn’t quite at the bottom, does it look like we can get there from here – or is what’s here too high-level and overly complicated with magic to be explained?”, “Can we explain existing magic with this?” and “Is it consistent in the platform so that authors can re-apply common lessons and reasoning?”

I’m sure there are some who would disagree, but in the end, I think that the Web Audio API is a nice example of this kind of pragmatism at play.  There are things that we know already:  We’ve had an <audio> tag, for example, for a while. We have collected cases which should be plausible, but currently aren’t.  The Web Audio API tries to address this at a considerably lower level, but definitely not the lowest one possible.  However… it has been purposely planned and reviewed to make sure that it gives good answers to the above questions.  While it’s possible that this could lead to something unexpected beneath, we’ve done a lot to mitigate that risk and admitted that we don’t yet have all of the information we need to make good choices there.  It was built with the idea that it will have further low-level fleshing out, and we think we know enough about it to say that we can explain the mechanics of the <audio> tag with it.  We got the initial spec and now, nearly immediately, work has begun on the next steps.  This has the advantages of steady progress and of drawing a boundary around the problem, and it gives a lot of developers new tools with which they’ll help ask the right questions and, through use, imagine new use cases which feed into a better process.  It gives us tools that we need so that efforts like HTML as Custom Elements can begin to contribute in helping to explain the higher-level feature.  The real danger is only in stopping the progressive work.
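As a small, concrete illustration of that layering (these are real APIs; the `<audio>` element is assumed to already be on the page), the lower-level graph can wrap and process the output of the higher-level tag:

```js
// Route an existing <audio> element through the Web Audio graph – one small
// way the lower-level API helps "explain" what the high-level tag does.
const audioEl = document.querySelector('audio');       // assumed to exist
const ctx = new AudioContext();

const source = ctx.createMediaElementSource(audioEl);  // wrap the element's output
const gain = ctx.createGain();                          // a simple processing node
gain.gain.value = 0.5;                                  // e.g. halve the volume

source.connect(gain);                                   // element -> gain
gain.connect(ctx.destination);                          // gain -> speakers
audioEl.play();                                         // playback now flows through the graph
```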

Similarly, someone asked whether Beacon “flies in the face of the Extensible Web Manifesto” because it could be described by still lower-level primitives (it might be possible to implement with Service Workers, for example).  Again, I’d like to argue that in fact, it doesn’t.  It’s always been plausible, but prohibitively cumbersome, to achieve roughly the same effect – this is why HTML added a declarative ping attribute to links:  Simple use cases should be simple and declarative – the problem isn’t that high-level, declarative things exist, we want those – the Manifesto is explicit about this – the problem is in how we go about identifying them.  Because this is a really common case that’s been around for a long time, we already have a lot of good data on it.  As it happens, ping wasn’t implemented by some browsers, but they were interested in sendBeacon, which – hooray – can actually be used to describe what ping does, and maybe polyfill it too!  It’s simple, constrained, concise – and it’s getting implemented.  It’s enough for new experimentation, and it’s also different enough from what you typically do with other network-level APIs that maybe it’s fine for it to have no further explanation.  If you read my posts, you know that I like to make the analogy of evolution, and so I’ll point to something similar in the natural world:  This may simply be a case of convergent evolution – differently adapted DNA that has a similar-looking effect but shares none of the primitives you might think, and that too can actually be ok.
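To sketch that claim (real APIs, but heavily simplified – the actual `ping` feature has details around headers, timing and URL resolution that are glossed over here), a rough, polyfill-ish approximation of `<a ping="…">` on top of `navigator.sendBeacon()` might look like this:

```js
// A simplified, illustrative approximation of <a ping="..."> using sendBeacon.
document.addEventListener('click', (event) => {
  if (!(event.target instanceof Element)) return;
  const link = event.target.closest('a[ping]');
  if (!link || typeof navigator.sendBeacon !== 'function') return;

  const urls = (link.getAttribute('ping') || '').split(/\s+/).filter(Boolean);
  for (const url of urls) {
    navigator.sendBeacon(url, 'PING');   // fire-and-forget; survives navigation
  }
});
```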

The important part isn’t that we achieve perfection as much as that we ask the questions, avoid bad mistakes and make good progress that we can iterate on: Everyone actually shipping something that serves real needs and isn’t actively bad is a killer feature.

The Non-Option

Bruce had a lot of excellent quotes about the pitfalls of aiming too low or defeating yourself.

It’s easy to look at the very hard challenges ahead of us – distilling the cumbersome sky-castles we’ve built into sensible layers and fundamentals – and just say “too hard” or “we can’t, because explaining it locks out our ability to improve performance”.

Simultaneously, one of the most urgent needs is that we fill the gaps – if the Web lacks features that exist in native platforms, we’re losing. Developers don’t want to make that choice, but we force them to.
We’re at our best when we are balancing both ends of this to move ever forward.

Don’t give up.  It’s going to be hard but we can easily defeat ourselves before we ever get started if we’re not careful.  We can’t accept that – we need to convince ourselves that it’s a non-option, because it is.  If we don’t do the hard work, we won’t adapt – and eventually this inability will be the downfall of the Web as a competitive platform.

Jump in the Fire

Good things happen when we ship early and ship often while focusing this way.  As we provide lower-level features, we’ll see co-evolution with what’s happening in the developer community, and this data can help us make intelligent choices – lots of hard choices have already been made, we can see what’s accepted, and we can see new use cases.

Some have made the case – to me at least – that we shouldn’t try applying any of this to standards until we have everything laid out with very precise rules about what to apply when and where.  I disagree.  Sometimes, the only way to figure some of it out is to jump in, have the discussions and do some work.

Standards processes and even bodies themselves have evolved over time, and we should continue to adapt where we see things that aren’t working.  It won’t be perfect – reality never is – it’s messy, and that’s ok.

Very special thanks to Brendan Eich and Bruce Lawson for kindly reviewing and providing useful thoughts, commentary and corrections ❤.

Chapters Web Standards Pilot Launched

About a week ago, I blogged on Medium about our need to involve the community in new ways, proposing “Chapters” – local meetups with the following aims:

  1. Bring people together in local forums to discuss developing standards 
  2. Create a ‘safer’ and ‘easier’ place for lesser skilled developers to learn with mentoring 
  3. Get help reviewing both standards and human documentation around them
  4. Enlist help in looking at/reviewing and developing prollyfills and tests with an eye toward the standard
  5. Introduce people to new technology and get them to try to use it
  6. Most importantly perhaps, help figure out a way to coalesce this feedback and information and feed it back into standards bodies to manage noise and channels productively.

In that time, we put together a pilot on short notice in Burlington, Vermont (US), and last night we had our first meeting at the University of Vermont (UVM) – 20 people turned out, including a W3C Team member, Darrel Miller (who drove all the way from Montreal), 5 UVM students and numerous professionals with varying interests and skills.  UVM’s CS Department graciously sponsored food as well.  I just wanted to take a few minutes and record my own thoughts about what happened; as this is a pilot we hope to emulate in other parts of the world, it’s worth capturing in order to make those better.

First, I’m fairly introverted and not especially social in real life – I hadn’t expected to be ‘speaking’ as much as just facilitating getting people organized into some groups and helping discussions get rolling.  In retrospect, it’s fairly obvious that I should have expected to do a lot more organization on the first night.  Some people really didn’t know a lot about standards, the interests and skill levels were very diverse (that’s a great thing), it wasn’t clear to some what the goals were (not such a great thing), etc.  Such a diverse audience turned out to be a little rougher than I imagined – I think if I were going to do it again, I’d plan to have a half hour to 45 minutes just to introduce things and then hope that’s all you need, but be prepared for another 30–45 minutes’ worth of questions.  In our case, despite a few attempts, I didn’t leave the front of the room for the entire 2.5 hours, which seemed to me like it must have been painful for the audience.  In other words, expect the first night to be less about concrete discussions and more about organization, agreement on goals and process, and trading information.  Also, tell people to bring laptops and make sure there is wifi in the room that they can access… whoops.

Despite the hiccups, I’m hopeful that people weren’t too bored and will return, maybe even bring a friend.  The good news: even afterward, it seemed that there was genuine interest… Pleasantly, I got a few communications, and a handful of people stuck around afterward to tell me that they are excited to be involved – so the group will go on.  It remains to be seen how the next few meetings will go, but I expect good things once we start digging into it.

If you were in attendance last night and have any thoughts you’d like to share for posterity or have thoughts/advice to others setting up a similar group, feel free to share in the comments.

W3C Special Election

The W3C Technical Architecture Group (TAG) is in the midst of a special election. If you’re reading this then you probably know that this is something of a pet topic for me, and that I think that reforming the TAG and AB is critical to real change in web standards. Generally, I explain who I think the candidates we need are and ask developers to help make some noise, reach out to anyone they know and help get them elected. If you’re only interested in who I think is the best choice, I suppose you can cut to the chase, but I have some more to say about this election too, so as long as you’re here…

Why an election

First, why are we having an election at all? What makes it “special”? A special election is held when a seat opens up for some unexpected reason, off the normal cycle. In this case, it’s made necessary by a change of employment by one of the group’s elected members. According to the rules, no member organization may have more than one representative in each of the elected bodies (TAG and AB). I’m pointing this out because on several occasions throughout the history of these groups, this same thing has happened: elected folk change employers. Why? It’s simple really… Because the biggest employers with a focus on standards belong to W3C, and because we tend to elect smart folks who are well respected in the community, it’s plain to see why this occasionally happens. This time, at least, it’s led to some good discussion about whether the rule is actually what’s best for the Web, or whether we should make changes. I think we should. At the very least, a simple ‘confirmation’ vote should precede a special election: the rules are there for the people, and when they have implications that make virtually nobody happy, that’s a bad rule.

But that’s where we are: An election to fill a single seat. We have 4 nominees to fill the seat of Alex Russell, who, ironically, isn’t the one who switched employers but rather the one who had the shortest duration of term remaining with that member org (Google).

The Election


The nature of TAG is, at least theoretically, such that all nominees are technically qualified, and that the task is rather to choose who you think is the best choice for TAG at this time. There is a reason I’m bringing this up as well: How votes are cast and counted has been a topic of discussion for some time, and this election provides a perfect illustration of why it matters…

Travis Leithead

Mark Nottingham

I believe that two candidates are much better choices at this juncture than the other two: both Travis Leithead and Mark Nottingham seem like excellent choices to me. They are well known to developers, have good cross-interests with important groups that we need in discussions, and seem to bring the best set of necessary assets as we work to flesh out an extensible platform with a long-term vision of standards development that helps standards compete and evolve with changing pressures.

But wait, there’s more!

The benefits that any nominee brings to the table can be subtle, but very real. Each AC gets only one vote, and that doesn’t have a lot of expressive power. I think that Travis Leithead has a slight edge in terms of what he brings, but I think that Mark Nottingham stands a slightly better chance of election by merit of his past service and visible social presence. So if I encourage ACs to vote for Travis – because, for example, bringing an inspired Microsoft player to the table is a great thing at this juncture and has virtuous ripple effects – each AC who does is potentially a vote Mark doesn’t get which he otherwise would. The number of ACs who cast votes in the first place is incredibly small, so losing just a couple can easily mean that neither of my preferred candidates gets elected, a result that makes me notably less happy than if my “just barely second choice” candidate won. This is the case for preferential voting: it allows me to say “I prefer this guy most” and “then this guy above others” and so on. The net result is that, generally, more people are more happy with the results and spend less time agonizing over the choice.

In a funny way, while the system itself doesn’t work this way, it’s possible to achieve a similar result by way of a compact amongst ACs. If 5 or 6 like-minded ACs discussed who to vote for and voted together, you could avoid most of the incidental effects where neither our collective first, nor second (nor maybe even third) choice is elected.

So, please, encourage any AC you might know to vote responsibly and not only support the candidates I mentioned, but the reforms discussed above as well.

The Extensible Web Summit II: The Web Needs You

On September 11, 2014, Berlin will host the second ever Extensible Web Summit. You can get a (free) ticket from Lanyard, and it’s scheduled to coincide with JS Conf EU 2014, so if you are going to be in the area I’d highly encourage you to attend if possible – it’s something very different and important.

A little over a year ago, we sat down and authored The Extensible Web Manifesto, which observes failings in how we have historically created Web standards and proposes an alternative vision to correct these and make/keep the Web a compelling, competitive and adaptable platform for decades to come. Key among these ideas is the need to involve developers and harness the power of the community itself. Nine months later we held the very first Extensible Web Summit in San Francisco, California.  Unlike many conferences, the summit sets out with a different goal: to bring together developers, library authors, browser implementers and standards writers to discuss and solve problems, hear perspectives, help set priorities, etc. While there were a few quick lightning talks at the opening, the agenda, groups, etc. were entirely decided by participants and run barcamp-style.

If you’re curious about how the first one went, here are a few links to posts which themselves have some additional links you can follow.

If you write or use libraries, if you have ever had thoughts, ideas or complaints about Web APIs, if you are interested in standards, if you’d like to meet the folks who write the specs or implement them in the browsers, or if you develop for the Web in any significant way – you should come. Not only that, but I’d encourage you to really participate. At the first summit, several people seemed a little intimidated at first because of the presence of “giants” like Sir Tim Berners-Lee. Don’t be.  These people are all pleasantly human, just like you.  Several good things came out of the first one, and with some practice and increased attendance and participation, we can gain even more. So, go if you can – and tell someone else about it either way.

Desperately Seeking Jimi: The Web Standards Mountain

Over the past year or so, a number of very smart people have asked me privately or made comments about how I expend my energies with regard to standards: I’ve dedicated a lot of time toward elections to two advisory groups within the W3C (currently AB). Most of the comments can be distilled down to “why bother?” While there are varying arguments, the most prevalent can be summed up as “The groups in question have no real power…” Others argue that W3C is increasingly irrelevant, and that such efforts are a lost cause. More generally, that my time and talents would be better spent in other ways that didn’t deal with seemingly mundane details or bureaucracy. I thought it worth a short post to explain why I do what I do, and why I want you to care too…

Creating standards historically has a way of making one feel a little like Sisyphus in Greek mythology – forever compelled to push an immense boulder up a mountain, only to watch it roll back down again and repeat.  Time and again, we’ve seen struggles: small groups of very bright people with nothing but the best intent work hard – of that there can be no doubt. They discuss and debate a LOT – progress and consensus take a long time to arrive, implementations don’t always arrive at all – and frequently what we get isn’t really usable as we’d like.  A change of venue alone doesn’t solve the problem either: the WHATWG has struggles and conflicts of its own – and has created failures as well. Maybe it’s a smaller mountain, I’ll give you that – but it’s still a big task.  And with the smaller mountain comes a lack of the Titan-sized support of W3C’s membership.  While I think the WHATWG makes a good argument about the importance of this, it takes a planet to really make a good standard, and having the support of W3C’s membership is still a good thing.  It’s not the fault of the people (ok, sometimes it is), it’s mainly the fault of the process we’ve created, based on an outlook of where standards come from. The W3C organization and its processes are geared toward serving this model, and that outlook is in need of revision.

A few years ago, several of us converged upon similar ideas which, almost exactly 1 year ago, were written down and agreed to in The Extensible Web Manifesto. It presents an alternative vision –  one that recognizes the importance of developer feedback, contribution and ultimate adoption as well as the evolutionary nature of successful and valuable standards that can both compete in an ever changing market and stand the test of time. Getting a bunch of smart people to sign a document about core principles is an excellent start, but, effectively convincing the larger standards world that it needs to change and embrace important elements of this vision is, to stick with my earlier Greek mythology theme, a Herculean task.

It takes movement on many, many fronts – but interestingly, they all involve people. All of these member orgs, all of the reps, all of the developers out there – it’s just people… People like me and you. The first step is getting all of those members to pay attention, and then getting everyone to agree that there is a problem that they actually want to solve. It involves realizing that sometimes change requires some brave souls to stand up and make a statement. Sometimes just the act of hearing someone else you respect say it can lead to others standing up too, or at least to opening a dialog.

There are lots of ways to accomplish this, I suppose, but it seems efficient if there were some ways to capitalize on the fact that we’re all really connected and, ultimately, want the same basic goals. If only there were a way to create a network effect – a call for change and a perception of value that picks up steam rather than fizzling out after hitting the brick wall. One way to accomplish this might be to give the right people some microphones and stick them in front of the right folks. The W3C has two major “positions” (Director and CEO) and two advisory bodies (TAG and AB) that have their ear and, in a more direct way, the role of communicating with technical folks on matters related to the long-term architecture (the TAG) and with AC members on direction and process (the AB) – those seem like excellent intersections. Getting people elected to these positions involves us reaching out to get enough ACs to vote for them in the first place; electing many gives them perceived volume. It makes people sit up and take notice. It stimulates the debate. It means we have to work together to find agreement about which pieces we are willing to pursue together. It means making sure someone is looking out for developers and finding people who are willing to put in incredible efforts to help make that change happen. – And these are all Really Good Things™.

We’re technically minded people.  We don’t generally like this sort of thing.

So yes, it’s true that I might see more immediately tangible results on a particular API or set of APIs if I very actively focused efforts in that direction, but it doesn’t deal with the bigger problem – the mountain.  What I’d really like to do is help to change the whole world’s mind; it seems like a bigger win for us all in the long run. What I really want to do is convince us all to channel Jimi and say…

Well, I stand up next to a mountain And I chop it down with the edge of my hand…

If any of this makes any sense to you – or if you’re just willing to humor me…

If you are an AC in the W3C – lend your voice and vote for a reform candidate (below).  If you’re not an AC, but know one – reach out.  If you don’t know one – tweet or share your support – we’re all connected, chances are pretty good that someone in your social network does.  While I believe that all of the nominees below are excellent, I think there is something critical about sticking one or more developers directly into one of these slots for reasons explained above. If I’m wrong and these groups don’t matter, you’ve lost nothing.


Lea is an actual developer and active in the community — you can find her at talks and conferences all the time. It should be obvious why I think some real practitioner voice is important, but she also is an invited expert in the CSS WG and worked for the W3C for a while in developer relations — so she has a very unique perspective, having seen it from all sides, and brings a direct connection to represent developers in the dialog.

Boaz is also a developer and the flâneur at http://bocoup.com/ — a passionate advocate of the open web who has done quite a bit of outreach, hosting (recorded/live) TC-39 and TAG events and whose employees are especially active in the community. He brings a drive and understanding of what it’s like for developers and non-megalithic companies to be involved and has a serious interest and skills with process. (Update: Boaz has since posted about why he is running).

Art is from Nokia — he has been working with standards for a long time, he’s been effective and involved and is outspoken and thoughtful on issues of process, involvement, licensing, etc. and has led efforts to streamline the ability to keep track of what is going on or how to get involved and open things up to developers.

Virginie is from Gemalto and active in a number of groups (she chairs the security group) and I think she can sum up why she is running and why you should vote for her much better than I can. Suffice it to say for purposes here: She sees the problems discussed here (as well as others), brings a unique perspective and has been increasingly involved in efforts to figure out how to help give developers a voice.

David is from Apple and he’s also been working on standards for a long time. He knows the process, the challenges and the history. I’ve not always agreed with him, but he has expressed a number of good things in relation to things mentioned above in his candidate statement which make me especially hopeful that he would be an excellent choice and a good voice to round things out.

While I left him out of my original post, it was an oversight – Soohong Daniel Park from Samsung also wrote an excellent AB statement which is full of great things and I believe would be a great choice as well.  I’ll leave you to read it.

The Extensible Web Summit: An Unexpected Journey

Still from Peter Jackson’s “The Hobbit: An Unexpected Journey” credit MGM/New Line Cinema/Wingnut thehobbitblog.com

This past Friday, two remarkable things happened. First, “we” held the first ever Extensible Web Summit in San Francisco, California (I’ll get to the “we” bit). Second, I attended it. While the former is amazing, the latter is actually kind of incredible on several levels too – not because it is such an amazing thing to have me (believe me, my self opinion is not that high). Let me explain…

First, on a personal level, you should know that I am actually a hobbit. You see, I don’t do this sort of thing – at all.

“It’s a dangerous business, Frodo, going out your door. You step onto the road, and if you don’t keep your feet, there’s no knowing where you might be swept off to.”

I haven’t attended a conference since the late 1990s, despite requests, demands, and even, in one remarkable case, an event where my own software was presented. Hell, I don’t even leave my house to go to work – my employer allows me to telecommute from my hobbit hole in the shire (rural Vermont – thank you for that, Apollo Group). Except online, or in places where I am employed or very comfortable, I’m about a step away from Gollum sometimes. It’s not healthy, I know, and recently I’ve been working on it – but for the past decade or so it’s been the case. So, my attendance is a small personal accomplishment for me. It was such an incredible pleasure to not just meet, but be warmly welcomed by, so many of the great friends that I’ve made online that I felt compelled to share this very personal information. You all rock more than words can describe.

More importantly though, because I’m just a developer – and I went. Until very recently I’ve not been an official member of anything in Web standards – and this meeting included my input. Not just mine, but that of others like me. No one treated us as second class, though not many of us spoke up as much as I’d hoped we would have. It gave us an opportunity to see what’s going on, connect with other developers, standards writers, chairs and – very importantly – implementers to discuss things that are important to me and more fully understand the larger scope of what’s going on. In other words: to play a role.

“That, my dear Frodo, is where I come in. For quite by chance, and the will of a Wizard, fate decided I would become part of this tale. It began, well, it began as you might expect. In a hole in the ground, there lived a Hobbit. Not a nasty, dirty, wet hole, full of worms and oozy smells; this was a Hobbit-hole, and that means good food, a warm hearth, and all the comforts of home.”

Developer involvement and engagement is tremendously important to the vision we set out in the Extensible Web Manifesto. It’s critically important that we begin to work together to break down, tie together, prioritize and create an environment where we establish a virtuous cycle. To change the way we develop the Platform itself to be adaptable and evolutionary is an enormous effort – it’s not the natural state of existing standards bodies in general.

Your typical Web Standards Discussion…
Bilbo: Good morning.
Gandalf: What do you mean? Do you mean to wish me a good morning, or do you mean that it is a good morning whether I want it or not? Or, perhaps you mean to say that you feel good on this particular morning. Or are you simply stating that this is a morning to be good on?

But now, things are changing – and they are changing for the better. Obviously, in general, the manifesto is helping to change things, I believe we are actually approaching an architecture that lends itself to adaptation and evolution… But there is another thing worth noticing here too: Until a year or so ago, the vast majority of developers had never heard of the W3C Technical Architecture Group (TAG) and the general consensus was that they just weren’t that important. But then, developers got involved and helped elect a good group of folks across two elections and the TAG is turning out to be super useful in two ways: First, at helping to look across the working groups and break down the architecture of the platform, see connections and facilitate communication and realignment in working groups where it is necessary. Second, to help carry the voice of “We the developers” to places it was difficult for our voices to previously reach. This summit was actually an example of both. And it’s all out in public now – github, twitter, etc. If you write to TAG, they listen. I call that an incredible win.

You might recall that in the first sentence I said “we” held the first Extensible Web Summit. While the W3C was prominent, and it took place coinciding with a TAG face-to-face and other W3C meetings, it was not officially a W3C event. I am immensely grateful to Anne van Kesteren (whose TAG term has ended, but who continues to be involved directly in the trenches/outreach) for championing this through the TAG, and to the whole TAG for setting it up and attending despite the fact that it was unofficial – that’s a really nice signal to us developers. And a huge thanks to Adobe for agreeing to provide a substantial location, and finally to the developers and implementers for laying down cash and budget to allow us all to show up.

It wasn’t perfect, don’t misunderstand. But no first attempt is. It was too short. The topics competed in not-great ways, such that we couldn’t get the necessary people into all of them. There wasn’t enough setup/groundwork for more significant engagement, so some sessions kind of jumped off the rails before they could really get started. Personally, I made as much progress through side conversations in hallways and before/after as I did in the sessions themselves – but these are all things we can learn from and improve. It was good, but we can do better and I’m confident we will. If you were less than thrilled, provide some feedback and give it another chance. I’ll try to write about that in another post perhaps, or more likely to the TAG mailing list – but there were many, many positive aspects and I consider it a substantial step.

Galadriel: Why the Halfling?
Gandalf: I do not know. Saruman believes it is only great power that can hold evil in check, but that is not what I have found. I found it is the small everyday deeds of ordinary folk that keep the darkness at bay…


All of this really stresses the importance of more direct developer involvement and finding ways to open the doors to the W3C/standards without creating total chaos. If this sounds good to you too, keep watching: my next post will be about an incredible opportunity arising there – which, as with many of my posts, involves an election and requires your help. Stay tuned.

W3C TAG FAQ

If you follow me (or many of my friends) on twitter or on G+ (or even on Facebook which I don’t really use) then you’ve probably seen lots of posts about W3C elections.  The funny thing about elections, unlike most of my other projects, is that they have a hard deadline and (I think) a potentially much bigger impact than anything else I’d be working on, so I find myself especially motivated to do what I think it is I need to do. As a result, they tend to dominate my streams for a while.  Over the last few elections, we’ve been gaining steam and more and more people are asking questions about it. That’s great, but since it’s hard to respond individually (or in 140 characters), I thought it would be worth putting together a FAQ which I (or others if they are so inclined) can point people to to explain the madness.

Who the freak is TAG?

TAG is the W3C Technical Architecture Group.  It is one of only two groups in the W3C which are composed by election rather than by appointment from member organizations who (generally) pay to belong to W3C (the other is the Advisory Board, or AB).  These two groups are also unique in two ways: first, their charters cut across the spectrum of the W3C; second, they are small – only 9 members each, and they are mostly elected.

Mostly Elected?

Tim Berners-Lee (the creator of the Web) is the chair of the TAG and he gets to appoint 3 others.  You don’t even have to be a member of the W3C to serve; you just need a member to nominate you.

Is this a new thing?

Actually, no.  It’s been around for a long time (since 2001) – you’ve just likely never heard of it.

Why haven’t I heard of them before?

A few reasons: First (most importantly, I think) is the fact that they’ve never done anything visible that you’d care about.  It’s mostly been a somewhat academic exercise at very high levels that produces documents and notes you will probably never read.  It’s helped a bit to steer things, though, which I address more in another section below. Second, you (probably) don’t get a vote (yet) – so traditionally, no one has bothered to tell you.  Seems pointless, right? Before you throw your hands up and close this window, keep reading – there is a rationale here.

Wait… If they don’t do anything I care about… Why should I care?

Simply put, because they could. In developer terms, it’s a bit like “discovering Ajax” – there’s a great tool with a lot of potential just lying there dormant, waiting for someone to tap it and use it to do great things.

An honest history of why standards are in the shape they are in would be a lengthy post of its own (there are already several good ones), but these things we know: We don’t want to go down roads of the past and we do want to reform things that aren’t working well. We want to help blaze a bright new future. The candidates I support have a vision and common outlook (see The Extensible Web Manifesto for the thrust), and TAG is one of the tools that we can use to help realize it.

Wait… We need less appeal to authority, not more… isn’t TAG problematic in this regard?

I don’t think so for a few reasons: As I said, historically TAG has been kind of a non-issue. What’s more, the candidates I support are made up of people with a common vision that swings the pendulum in a new direction in which there is, by design, less appeal to authority.

Things are what they are and we don’t get to pick where we start – recognizing current realities and operating within current structures to help drive things in a better way and (hopefully) bring some reform along the way just seems more logical than pretending reality doesn’t exist or needlessly tilting at windmills: W3C has a lot of things going for it. Additionally, it comes down to what powers TAG actually has…

Yeah… What powers does TAG have?

Power is a funny word – all organizations (including governments) have only the powers constituencies give them. The TAG doesn’t have a lot of official formal powers, but that doesn’t actually reflect the reality of things. By way of a somewhat US-centric analogy (the same holds in many countries, this just happens to be mine and serves as a decent example): The US president has comparatively weak formal powers – our congress maintains the power of the purse, the power to declare war and the ability to make and change laws. Our judiciary has the power to enforce and interpret laws. But if you think it doesn’t make much difference in practice who the president is, you really don’t pay attention. The president has a big influence on just about all these areas, in practice. But, and this is key: their ability to do so is somewhat limited by their ability to mobilize popular support. Support provides political capital. While limited in official powers, in many ways these powers are enough to let them help set the agenda. The bully pulpit lets them carry the voices of many in a powerful way. This is what I see as TAG’s strength, and why I think the next item is key.

I am a developer, what can I do?

Most people reading this are likely average developers, and they don’t get to cast a vote (yet). However, if this is you – you can still raise your voice. Speak up and let the people who can vote know where we stand. If electors see a surge of support for a positive movement, we are likely to swing some votes our way. This matters more than you could imagine, as there is actually low turnout – a (comparatively) few really involved companies with heavy interest tend to always show up and walk away with it. There is a great untapped wealth of organizations who are entitled to a vote but don’t exercise their right to do so. If you know someone from a member organization, talk to them directly. If not, just show your support – as a community we are remarkably interconnected via social media. Just because you aren’t a member doesn’t mean that your message won’t reach them. What’s more, your voice helps empower our candidates, as described in the political analogy above.

Why would an AC listen to me? They represent their organization…

For two important reasons: First, they are supposed to base their vote on what’s good for the Web at large and not just their company/organization. I believe that most ACs will act in good faith if presented with a compelling case, but maybe I am naive in that. The second reason is more practical: Developers are the ones that make their organizations possible. Developers like you are the engine that drives their success. Their incentive to make you happy by representing you (assuming you are asking for something reasonable) is probably even stronger than that of a representative politician – business competition is strong. Even if they don’t vote your way, no organization wants to be out campaigning hard against developers. It seems to me, organizations will want to be on the right side of history, not the ones waving their hand and saying “let them eat cake”.

The vision of the candidates I support isn’t just “reasonable” – it’s really good.

I think that the combination of these factors is healthy – there are good incentives for ACs to have a look at their platform and vision. Once they do, I honestly think most will become supporters themselves. A good movement is contagious.

I am an AC Representative – what can I do?

If you are a W3C AC Representative, you can cast your vote! More still, tell another AC about why you did and encourage them to as well.

You keep saying I don’t get a vote yet – what’s that all about?

There have been informal proposals about changing the rules in a way that would let developers have some kind of more direct voice in the election process. That’s where the other group, the AB, comes in. That’s their job: rules and process. It’s another post entirely – but if you’d like to see those sorts of reform as well, we’ll need to work together as a community.

Web Wars: Episode IV – A New Hope

Don’t let this happen to our Web…

Prologue
Long, long ago, in a galaxy far, far away…

It is a period of technical upheaval.  The plans for galactic XML domination have been thwarted and a breakaway group has taken over much of the HTML base.  New technologies threaten the future stability of the Web.  Developers have begun challenging the status quo and established institutions.

The universe of Web standards stands poised on the brink of collapse and civil war.  A group of rebel developers with a plan are building an alliance in an attempt to restore order to the force, establish a bright future and #extendthewebforward…

And they need your help…


Help me … You’re my only hope.

Let me cut to the chase: The Web needs your help. Here’s the TL;DR: The W3C has a group of 9 members called the Technical Architecture Group – I’ll explain what it is and why you should care later in the post, but the takeaway is that most of the members of this very small group are elected and there is an election going on right now. I think the outcome really matters, that your involvement is key, and that we need to help get Dave Herman and Domenic Denicola elected (more about them below the fold). Let me explain…

We can make a difference

Never underestimate the power of the force.

Historically, the TAG hasn’t mattered much. Its charter gives it a unique role, but chances are you have probably never heard of it. Almost exactly one year ago we ran a slate of “different” candidates for the TAG – well-known, active, open-source developers who have been working tirelessly with standards bodies to solve real problems that matter. The aim was to give this group of representatives popular support and use the TAG’s charter to help drive things in a better direction, and to carry our collective voices across the spectrum of standards bodies and working groups that we are currently not a part of – to reform Web standards.

So, for the first time in W3C history we ran a popular campaign via social media asking developers to voice their support – to lobby publicly and privately.  While you don’t get a vote (yet), the W3C members who do heard your voice and filled all of the open slots with our reformist candidates.  The chair seems to have also heard your voice as he appointed two additional candidates to co-chair.  Later, one of our candidates changed employers and according to the rules was ineligible to continue to serve (no organization can have more than one TAG member – good rule), so we ran another – and he won too.  We ran 4 candidates for the Advisory Board and 3 of the 4 won there too.

What’s it gotten us?

Here is a sampling of the ways our TAG members have helped us so far, in less than a year:

  • Re-focused the TAG on things we care about.  Chances are that you’ve never read or heard of a single thing that’s come out of TAG since its inception – but you’ve probably heard of lots of stuff they are talking about now.
  • They helped envision, co-author and bring together a wide and diverse group of signatories for The Extensible Web Manifesto – an architectural plan that layers the platform, moves the exclusive power to innovate out of browsers alone, and hopes to change the way we develop and think about standards themselves.
  • They have championed these ideas in the TAG, which has now essentially adopted them as guiding principles.
  • TAG members are helping to coordinate the first Extensible Web Summit.
  • Opened up collaboration between groups and helped coordinate common visions and APIs – helping either directly or indirectly on just about every interesting new thing:  Service Workers, Promises, Streams, Archives as first class citizens, Web Components, ES6 Modules, Web Audio, etc. etc. etc.
  • They have helped make the TAG a known entity by talking about it and involving developers in the discussions – the TAG has had two public forums, has a Twitter account and posts things to GitHub.

The Rebel Alliance

I’m excited that we have gotten two excellent candidates nominated and here’s why I will be doing everything I can to support these Jedi warriors in The Cause.

Dave Herman

Dave Herman: Nominated by Mozilla.

Dave Herman is from Mozilla Research – he is an original co-signer of extensiblewebmanifesto.org and has been with us all along. Everything his group does is conceptually extensible web. He is on TC39, the committee in charge of JavaScript, and he is the co-creator of asm.js and an advocate/developer of many polyfills and prollyfills. His group is also working on Servo, a parallel rendering engine, so he has a lot of feedback and thoughts on efficient APIs that can make the Web more competitive too.

Domenic Denicola

Domenic Denicola: Nominated by jQuery.

Domenic Denicola is also an original co-signer of extensiblewebmanifesto.org and an active open source developer who has been increasingly involved – as a developer – with all of the major standards bodies. He has been an influential force in helping drive work on ideas like Streams and Promises, has authored/advocated a number of polyfills and prollyfills, and this year has given several great conference talks evangelizing the extensible web and explaining how to be involved with standards as a developer. He was nominated by jQuery.

This is your Web.

As the creator of the Web (and director of W3C) famously said, “This is for everyone”.

Aren’t you everyone? You are.

But historically speaking, you’ve had comparatively no real voice in the important decisions – so here is your chance:  Act.  Help us reform the W3C – let’s get these guys elected and start building a Web that is built to adapt, last and evolve.

Making your voice heard matters – and it’s so easy to click “Retweet” or “Like” or “+1” or 100 other things that help grow the movement. Write a post, make a video, send an email to a member you know.

Let’s make our collective voices heard and let the W3C know that we want representation that will help us #extendthewebforward.


The DOM and Managing Complexity in API Evolution

Here’s a confession: I have a love/hate relationship with DOM related APIs.

You see, I love having a standard surface and that we have helped improve it in a lot of ways – there are some really great improvements.  But I still hate the reality of working with it sometimes.  Originally I thought that it was merely because we know a lot of common usage patterns and, while they are entirely accomplishable with existing APIs, they currently tend to require a lot of boilerplate.  As I use them I often feel a dissonance between how simple the thing I am trying to describe is conceptually versus how much it really takes to accomplish – and how much that boilerplate can differ depending on which APIs I am dealing with.  Some of it is just sugar, sure, but recently I am thinking there might be more to it…

It’s frustrating when I see developers I know try to stick to the native APIs but eventually go and grab jQuery just for its DOM goodness – but I kinda get it, and it makes me think that maybe we’re missing something kind of fundamental.  Maybe not, but I’d like to throw it out there and see.  I’m happy to be wrong.

First, let me describe the more generalized problem as I see it, using the DOM APIs as an example:

  • Even as fundamentally new capabilities arrive, we are constantly attempting to simultaneously align the new trajectory of the bright and glorious future with the very real requirement of “Don’t break the Web”.
  • The combination of these competing factors means that legacy inevitably co-exists with things that have a different (hopefully better) new view on the world, but, at least for the near future (pronounced “years”), we cannot completely eliminate the legacy.  In the case of DOM APIs, Arrays of Elements will live alongside and cooperate with live NodeLists and Elements and jQuery array-likes for years to come, and we’re still tweaking… (see the sketch just after this list).
  • The crux then is that the true state of affairs is that developers work with APIs that are in a state of perpetual transition between the mediocre past and the increasingly better future.
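
To make that coexistence concrete, here’s a small illustration of my own (not something from any proposal) of three “collections of elements” you can legitimately be handed today, and how differently they behave:

// Three things that routinely coexist in one codebase.
var live   = document.getElementsByTagName('p');   // live HTMLCollection – updates as the DOM changes
var frozen = document.querySelectorAll('p');       // static NodeList – a snapshot taken at call time
var array  = Array.prototype.slice.call(frozen);   // a real Array – has map/filter/forEach everywhere

document.body.appendChild(document.createElement('p'));

console.log(live.length);   // grew by one – the live collection noticed the new <p>
console.log(frozen.length); // unchanged – the snapshot did not
console.log(array.length);  // unchanged – and only this one is actually an Array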

 Just add this, over there…

A developer knows that conceptually we want to “just add this thing over here” in the DOM tree.  Seems simple enough.  But, given the state of things: is the thing we want to add an Element?  An Array of Elements?  A NodeList?  Or, god forbid, a fragment of HTML serialization?  Worse still is if that serialization is something that requires context (a table row, for example).  Or – is it some new thing?  Think about it – how you approach the problem is affected by which of these the “thing” is.

Yikes.

Yes, it’s all accomplishable.  Yes, it’s way better than where we were 5 years ago.  But it leaves developers wanting and leaves a lot of room for error.  I have to use Array.prototype to turn something that is *almost* an array into an array; I have to worry about whether I am talking about a single element, an array of elements, or a NodeList.  It all seems unnecessarily complex and error prone at some level.
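
To illustrate, here’s a rough sketch of my own (the names container, someElement, elementArray and nodeList are just placeholders) of how the same conceptual operation – “append this inside container” – plays out for each kind of “thing” you might be handed:

var container = document.querySelector('.suggestions'); // assumes such an element exists

// 1. A single Element – easy enough
container.appendChild(someElement);

// 2. An Array of Elements – loop, or batch them via a fragment
var frag = document.createDocumentFragment();
elementArray.forEach(function (el) { frag.appendChild(el); });
container.appendChild(frag);

// 3. A (possibly live) NodeList – convert to an Array first, since appending
// can mutate a live list out from under you mid-iteration
Array.prototype.slice.call(nodeList).forEach(function (el) {
  container.appendChild(el);
});

// 4. A string of HTML serialization – a different API entirely...
container.insertAdjacentHTML('beforeend', '<p>hello</p>');
// ...and context-sensitive markup (a <tr>, say) only parses correctly when
// the container itself is table-related.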

It seems that sometimes, as in this case, perhaps we could recognize that this will be a common evolutionary step and stabilize continuity by way of a unifying API based on the common concepts?

And if you think about it, it’s sort of the one core thing that jQuery does that still isn’t currently a proposed standard.

A proposal to get things started…

Let’s imagine that we create a new unification API based on the common concepts:  Select one or more points in the DOM to modify and then perform modification on them.  Let’s call them DOMModificationTargets in keeping with the generally unfriendly tradition – aliases and bike-shedding a better name are easy.

DOMModificationTargets is a constructible abstraction with an underlying array that allows you to perform common patterns of DOM operations on All of the Things:

// A dead-simple unified API for conceptually similar 
// past, present and future APIs... It always freezes and
// exposes an underlying array
class DOMModificationTargets {

  // Create targets object via Any of the Things...
  DOMModificationTargets constructor(string markup);
  DOMModificationTargets constructor(Node node);
  DOMModificationTargets constructor(array elements);
  DOMModificationTargets constructor(NodeList list);

  // Perform conceptually common modifications on them...
  DOMModificationTargets append(string markup);
  DOMModificationTargets append(Node node);
  DOMModificationTargets append(array elements);
  DOMModificationTargets append(NodeList list);

  DOMModificationTargets prepend(string markup);
  DOMModificationTargets prepend(Node node);
  DOMModificationTargets prepend(array elements);
  DOMModificationTargets prepend(NodeList list);

  DOMModificationTargets replace(string markup);
  DOMModificationTargets replace(Node node);
  DOMModificationTargets replace(array elements);
  DOMModificationTargets replace(NodeList list);
  
  DOMModificationTargets replaceWith(string markup);
  DOMModificationTargets replaceWith(Node node);
  DOMModificationTargets replaceWith(array elements);
  DOMModificationTargets replaceWith(NodeList list);

  DOMModificationTargets addClass(string className);
  DOMModificationTargets removeClass(string className);
  DOMModificationTargets toggleClass(string className);
  DOMModificationTargets setAttribute(string name, string value);
  DOMModificationTargets removeAttribute(string name);

  array collection;

}

I’ve created a quick prollyfill for this on GitHub with a demo – the source is a mere 200 lines of JavaScript.
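
To give a flavor of the idea, here’s a minimal sketch of how the core of such a thing might look – this is my own illustration, not the actual prollyfill source: normalize “Any of the Things” into a plain array of nodes once, up front, and then write every modification method against that one shape.

// Illustrative only – normalize any supported input into a plain array of nodes
function toNodeArray(thing) {
  if (typeof thing === 'string') {
    // Parse an HTML serialization into nodes via a throwaway <template>
    var tpl = document.createElement('template');
    tpl.innerHTML = thing;
    return Array.prototype.slice.call(tpl.content.childNodes);
  }
  if (thing instanceof Node) {
    return [thing];
  }
  // Arrays, NodeLists and other array-likes all flatten the same way
  return Array.prototype.slice.call(thing);
}

function DOMModificationTargets(thing) {
  // Freeze and expose the underlying array, per the proposal above
  this.collection = Object.freeze(toNodeArray(thing));
}

DOMModificationTargets.prototype.append = function (thing) {
  this.collection.forEach(function (target) {
    // Clone so appending the same content to many targets works
    toNodeArray(thing).forEach(function (node) {
      target.appendChild(node.cloneNode(true));
    });
  });
  return this; // chainable, like the rest of the proposed methods
};

// e.g. new DOMModificationTargets(document.querySelectorAll('aside')).append('<p>hi</p>');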

Given that the underlying operations are the same, it would even be easy to add an extension pattern to this for prollyfilling future API types.
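
For instance, the extension hook might look something like the following – again, just a sketch with hypothetical names, where the core normalizer (toNodeArray in the sketch above) would consult the registered types before falling back to its defaults:

// Hypothetical extension hook: register a recognizer + converter pair so that
// future (or third-party) collection types can be normalized without touching the core.
DOMModificationTargets.registerType = function (test, toArray) {
  DOMModificationTargets._types = DOMModificationTargets._types || [];
  DOMModificationTargets._types.push({ test: test, toArray: toArray });
};

// Example: teach it about jQuery objects (assuming jQuery is on the page)
DOMModificationTargets.registerType(
  function (thing) { return typeof jQuery !== 'undefined' && thing instanceof jQuery; },
  function (thing) { return thing.toArray(); }
);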

Is it worth something to think that occasionally we should look at conceptually unifying/adapter APIs?  What do you think?