EDIT: The post to which I'm replying seems to have changed a bit since I originally posted this; it originally said that I had HTML5 backward.
Yeah, sorry, I was attempting to keep the rant as short as possible.
I increasingly find sites that simply don't work in one of the browsers. This is not some trivial eye-candy effect; it's so bad that I need to fire up another browser and paste in the URI to get anything usable.
I test in all those browsers and regularly build Chromium from git. Some examples? Do you have any idea what, specifically, about these websites tends to be to blame? Like I said, as of late all browsers use literally the same HTML parsing algorithm, and mostly even share the same implementation, so on the markup side of things there's no chance that interpreting even old, broken code results in a different DOM across current browsers. If the markup were to blame, it should always be consistently broken in the same way.
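To make that concrete, here's a quick sketch of my own (paste it into any modern browser's console): the same intentionally broken markup goes through the standard DOMParser API, and since the HTML5 parsing algorithm specifies error recovery exactly, every current engine has to build the same DOM from it.

```
// Intentionally broken markup: unclosed <p> and <div> elements.
const broken = '<p>first<p>second<div>unclosed block';
const doc = new DOMParser().parseFromString(broken, 'text/html');
console.log(doc.body.innerHTML);
// Any spec-conforming browser should print exactly:
// <p>first</p><p>second</p><div>unclosed block</div>
```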
Of course, JavaScript is another story entirely and still varies wildly, especially when you get into the bleeding edge of advanced things, but I presume you're mostly talking about legacy sites or uninformed developers who don't maintain their sites properly. I can't think of a reason things should be getting worse on that front. If the code is that bad, it's a wonder it "works" in any browser. It can only really be nasty JavaScript hacks or obscure proprietary markup, both of which should be decreasing in abundance.
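Purely for illustration, this is the kind of legacy hack I mean (a made-up example, not taken from any particular site): branch on the user-agent string and any browser the author never anticipated ends up on the wrong code path.

```
// Fragile: decisions keyed off the user-agent string.
if (navigator.userAgent.indexOf('MSIE') !== -1) {
  // old IE-only workarounds, dead weight today
} else if (navigator.userAgent.indexOf('Firefox') !== -1) {
  // Gecko-specific tweaks that other engines never get
}

// Robust: detect the capability itself instead of guessing the browser.
if ('addEventListener' in window) {
  window.addEventListener('load', function () { /* ... */ });
}
```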
It was interesting that most of them were just a badge engineering exercise.
Not surprising. It's quite the trendy marketing gimmick indeed. But as I said, how do you determine what's "real" HTML5? There is no automated method. About all you can do is look for the presence of HTML5-specific features, which are hopefully not buried in obfuscated JavaScript only to become apparent after building a DOM. And even if there aren't any, that doesn't mean it isn't HTML5. An "HTML5 doctype" is by no means a dead giveaway either, nor is the presence of an older XHTML doctype.
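If you tried to automate the guess anyway, about the best you could manage is a weak heuristic like this sketch (the function name and element list are my own, not any established tool's):

```
// Returns true if the document *looks* like it uses HTML5 -- a weak signal
// either way, since these elements can be injected from script after the DOM
// is built, and a page using none of them can still legitimately be HTML5.
function looksLikeHtml5(doc) {
  const html5OnlyElements = 'section, article, nav, canvas, video, audio';
  const hasNewElements = doc.querySelector(html5OnlyElements) !== null;
  const hasHtml5Doctype =
    doc.doctype !== null &&
    doc.doctype.name === 'html' &&
    doc.doctype.publicId === '' &&
    doc.doctype.systemId === '';
  return hasNewElements || hasHtml5Doctype;
}
```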
Has anybody here thought of running CSE over a body of data like that? If you've done it, I'd appreciate some feedback. If you're thinking of doing it (probably using batch reports), give me a shout if you'd like somebody to talk it over with.
I don't think any automated tool could give meaningful numbers as to the overall "correctness" or "brokenness" of a website beyond a very simple validity check. Even that isn't going to work if the page relies on a lot of inline JavaScript.
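For what I mean by a "very simple validity check", here's a hedged sketch against the W3C Nu HTML Checker's public service (the doc= and out=json parameters belong to that service; the scaffolding around them is mine). Note what it can never see: anything the page only builds later from script.

```
// Ask the Nu checker to validate a page by URL and count the hard errors.
async function countMarkupErrors(pageUrl) {
  const endpoint =
    'https://validator.w3.org/nu/?out=json&doc=' + encodeURIComponent(pageUrl);
  const response = await fetch(endpoint);
  const report = await response.json();
  // The checker returns a flat "messages" array; keep only type === 'error'.
  return report.messages.filter((m) => m.type === 'error').length;
}

// countMarkupErrors('https://example.com/').then(console.log);
```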
CSE probably wouldn't be the right tool for the job IMO. Hell, CSE gives me more errors the more "correct" I'm being, because there's no way it can understand why things are done a certain way; its advice is overgeneralized and mostly isn't applicable. That isn't necessarily a bad thing. I can see the value in helping guide beginners, even if it gives an overly simplified view of the nuances of various behaviors. Hopefully that's not misinterpreted; I just don't think its output would say much of anything about code quality in a large-scale analysis.
I would imagine CSE throws a hissy fit when it sees code like this:
http://www.google.com/404
Google's 404 page is the way it is for a reason, and it's not incorrect (though whether or not it's good practice may be a bit controversial).
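For anyone who hasn't looked at it: that style of page leans on HTML5's explicit permission to omit optional tags and attribute quotes. Here's a minimal sketch in that spirit (my own markup, not Google's actual page); it parses into a complete, predictable DOM even though a strict lint-style checker will grumble about nearly every line.

```
const minimal = [
  '<!doctype html>',
  '<meta charset=utf-8>',               // unquoted attribute value: allowed
  '<title>Not Found</title>',
  '<p>The requested URL was not found.'  // <html>, <head>, <body>, </p> all omitted
].join('\n');
const page = new DOMParser().parseFromString(minimal, 'text/html');
console.log(page.head.children.length, page.body.children.length); // 2 1
```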