Is Google Search outdated for the modern, semantic, and social web?
Deconstructing and rebuilding the Google Search interface for a lesson, I found that it lacks many of the web's newer features.
OK, I know what you’re thinking: Google *is* the web. Its source code, though, says something a bit different. I’ve followed the Big G’s guidelines from the beginning, learning new best practices along the way, and I still think they’re a great reference for a developer, a sort of “Holy Bible”. The Google Search website has simply become so complex that it literally can’t follow them. Is it time for a major upgrade? I don’t know; I’m only wondering what that would look like.
Let me say something about Google itself. Maintaining its products is hard, and I don’t think I’d be happy to join the company: I much prefer working on small, manageable projects. Of course, being hired by Alphabet is a developer’s dream, but working on those services could be more than challenging; I can’t imagine the number of issues to take care of. So I’m not surprised to see that Google Search still relies on outdated technologies for its front-end.
The main asset shows a ‘shy’ approach to the company’s biggest web innovations. As I’ll discuss later, Schema.org gets only minimal support: we’re still far from the semantic web to date. This is perhaps the worst shortcoming I see, because the other, smaller issues could have a reason. Think about the strange font size behavior: rendered text comes in a range of sizes (15px among them) on the homepage, where each role should have a single, consistent dimension.
Have you ever noticed? Me neither, before analyzing the Google Search main page with Chrome DevTools. Colors are ‘funny’ too: the same component uses two different shades of grey, and static text and links each have their own, unique declarations, yet users see no difference between them. I don’t think it’s worth it; it’s impossible to tell at first sight what is actually clickable and what isn’t. You have to move the mouse cursor over them to find out.
That said, scripts are all deferred… and placed just before the </body> tag. That doesn’t make much sense, since the whole point of defer is to get the same behavior while keeping them in the <head>. I bet it’s something related to older browser support, but other elements show a different approach: is it too hard to align them? I mean, we have the technology! In such a huge company, some assets probably have a strong legacy to deal with, and Google Search clearly does.
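To make the point concrete, here’s a minimal sketch of the two placements (app.js is a placeholder name); with defer they end up behaving the same way:

```html
<!-- Option 1: deferred script in the head. It downloads in parallel
     with parsing and executes only after the document is parsed. -->
<head>
  <script src="app.js" defer></script>
</head>

<!-- Option 2: plain script just before </body>. Execution happens
     late simply because the parser reaches it last. With defer,
     option 1 already gives you this without burying the tag here. -->
<body>
  <p>Page content…</p>
  <script src="app.js"></script>
</body>
```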
Newer HTML tags are missing too: the search input is still of type text instead of search, and I’m talking about what you get on the latest Chrome version, which supports them. I understand that Google can’t ship bleeding-edge technologies in production, but HTML5 is now a seven-year-old markup language: an Android phone doesn’t last that long. Inline styles and scripts don’t make things any better, and I could go on for hours; this will change, sooner or later.
This is crucial. I haven’t mentioned yet that, once you log in, the Google experience changes: a placeholder suddenly appears in the search input, while the only Schema.org reference disappears. I can’t understand why that happens; neither accessibility nor semantics should depend on the user login process. Notice that there’s also a big difference between the home and the results pages: there, Schema.org support depends on the keywords, and it hardly looks any deeper.
Structured data has grown over the last few years. I enjoy how rich snippets are built and shown, so that’s not my point here: I just wonder why Google Search itself doesn’t implement them the way it requires publishers to. OK, you may reply that a search engine should only render results and it doesn’t really matter whether it follows its own guidelines, and I partially agree with you. But I think it would be great to do so, considering the next steps I’ll talk about.
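As a hedged example, this is roughly the structured data Google itself asks publishers to add for a sitelinks search box (the example.com URLs are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebSite",
  "url": "https://www.example.com/",
  "potentialAction": {
    "@type": "SearchAction",
    "target": "https://www.example.com/search?q={search_term_string}",
    "query-input": "required name=search_term_string"
  }
}
</script>
```

It’s a little ironic that the search engine requesting this markup barely carries any of its own.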
I was excited when Schema.org was first released. Now, excluding Google itself, I don’t see much third-party support. I can still remember an AMA with Bill Gates on Reddit in 2013, when he said that a semantic filesystem for Windows was one of his biggest regrets, and I think he was right. He was talking about WinFS and Vista… I hope Schema.org doesn’t follow the same path, since in all these years we’ve seen no progress from Microsoft in that direction.
Yet another thing. I know Open Graph is a Facebook product, but Search supports it. Unfortunately, although Twitter Cards (if they’re still a thing) can fall back to its specification, Google forces you to forget the DRY principle: if you want a description to appear in the results, you must set both the description meta tag and og:description. This doesn’t have much to do with the source code, by the way; I really can’t understand why.
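The duplication looks like this in practice (the content strings are placeholders): two nearly identical tags carrying the same text.

```html
<!-- The classic meta tag, read by search engines: -->
<meta name="description" content="A short summary of the page.">

<!-- The Open Graph equivalent, needed for rich previews;
     Twitter Cards can fall back to this one: -->
<meta property="og:description" content="A short summary of the page.">
```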
Standards serve a purpose, and doubling a declaration to support more than one of them is ridiculous. I’d love it if Facebook ditched Open Graph and implemented Schema.org, along with Twitter, but I don’t think that’s realistic. Why not just let developers choose between them? The schemas are semantically similar, and it would be better to select the most relevant one for each project. The same goes for icons and the like rendered on Android and iOS: content replication should be avoided.
This is a sort of side effect, but dealing with different standards ‘makes Jack a dull boy’. Technically speaking, I think Schema.org is better than Open Graph because you can go beyond the <head> to declare your items: it lets me model an HTML structure before thinking about its semantics, and it’s also easier to retrofit into a product owned by non-IT people. On the other hand, it’s harder for developers to learn: the Facebook solution is simpler.
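To show what I mean by going beyond the <head>: with microdata, the semantics live on the visible content itself. A minimal, hypothetical comparison (the headline and summary are placeholders):

```html
<!-- Open Graph is confined to the document head: -->
<meta property="og:title" content="My headline">

<!-- Schema.org microdata annotates the body markup directly: -->
<article itemscope itemtype="https://schema.org/Article">
  <h1 itemprop="headline">My headline</h1>
  <p itemprop="description">A short summary of the article.</p>
</article>
```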
In the end, it took me half a day to replicate the Google Search interface without its logic; coded from scratch from a design, it should have taken an hour or two. I used HTML5 tags, explicit WAI-ARIA roles, and a bit of ES6. I plan to go further as soon as possible, because learning how the original front-end works is a great way to understand how a search engine could be designed. Introducing new best practices helps me optimize the actual experience.
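The core of my rebuild looked roughly like this (a sketch of my own markup, not Google’s actual code; the button names are illustrative):

```html
<form role="search" action="/search" method="get">
  <label for="q" class="visually-hidden">Search query</label>
  <input type="search" id="q" name="q" autofocus>
  <button type="submit">Google Search</button>
</form>
```

The explicit role="search" landmark and the associated label are the kind of small WAI-ARIA touches I mean: cheap to add, and they make the page navigable for assistive technologies.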