I thought I could have this blog as a real single-page application with no server code. That sort of worked. If only it hadn't been for the previews.
Published: Thu, September 7, 2017, 16:50
My first post on this blog was Building almost a real SPA blog. While the title is mostly still true, I ended up with a bit more .htaccess configuration and a small Python script to generate previews of posts for social media and to add support for a web feed.
I have to admit that I don't care too much about other search engines like Bing and Yahoo; Google is in a class of its own when it comes to search. However, thanks to the aforementioned Python script I now have support for them as well. More about that in a second.
I sometimes post my articles to Twitter, Facebook and LinkedIn. When creating a preview of a link, those sites certainly don't try to parse the content the way a modern browser does. They just look for Twitter Cards or Open Graph Protocol (OGP) meta tags. If they don't find anything, they typically fall back to the <title/> tag.
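To make it concrete, these are the kinds of tags the crawlers look for in the document head. The values here are only illustrative, not taken from this blog's actual markup:

```html
<!-- Open Graph Protocol tags, read by Facebook and LinkedIn -->
<meta property="og:title" content="Building almost a real SPA blog"/>
<meta property="og:description" content="A short summary of the post."/>
<meta property="og:image" content="https://example.com/preview.png"/>
<meta property="og:url" content="https://example.com/post"/>
<!-- Twitter Cards tag; Twitter also falls back to OGP tags -->
<meta name="twitter:card" content="summary"/>
```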
My index.html only has a standard <title/> tag, which makes no sense in the context of my articles. That made the previews pretty bad, so at first I mostly skipped posting links. However, I think a link is so much more inviting with a good preview of the content. I'm sure it also makes more people click.
So far I've added support for Atom, which you can access at https://blog.roysolberg.com/atom.
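For reference, an Atom feed is just an XML document following RFC 4287. A minimal skeleton of the kind of output such a feed endpoint produces could look like this (titles, URLs and dates here are placeholders, not real entries from the feed):

```xml
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example blog</title>
  <link href="https://blog.roysolberg.com/atom" rel="self"/>
  <id>https://blog.roysolberg.com/</id>
  <updated>2017-09-07T16:50:00Z</updated>
  <entry>
    <title>An example post</title>
    <link href="https://blog.roysolberg.com/example"/>
    <id>https://blog.roysolberg.com/example</id>
    <updated>2017-09-07T16:50:00Z</updated>
    <summary>A short summary of the post.</summary>
  </entry>
</feed>
```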
Now that I had the script for generating the previews, I needed a way to route the bots to it. For that I used the already existing .htaccess configuration file, looking for the User-Agent headers belonging to the different sites' bots. You can see the source code for .htaccess on GitHub.
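The idea can be sketched with mod_rewrite rules like the following. This is not the actual .htaccess from the repository, and the /preview/ path is a made-up example; it just shows the general pattern of matching crawler User-Agents and rewriting to the pre-rendered pages:

```apacheconf
RewriteEngine On
# Match known crawler User-Agents (case-insensitively)
RewriteCond %{HTTP_USER_AGENT} (facebookexternalhit|Twitterbot|LinkedInBot|bingbot|Slurp) [NC]
# Serve the pre-rendered version instead of the SPA's index.html
RewriteRule ^(.*)$ /preview/$1 [L]
```

Everyone else, including regular browsers and Googlebot, falls through to the normal single-page application.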
Because of the preview setup I got support for other, non-Google search engines for free. They are served the same markup generated by the Python script. Google is still served the same site as you are.
Once I had the preview script, it didn't take long to add support for the Atom feed. You can see the source code of the script on GitHub.
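The core of such a preview generator is small. Here is a minimal sketch of the idea in Python; the function name and structure are my own for illustration and differ from the actual script on GitHub:

```python
import html


def render_preview(title, description, url, image):
    """Build a minimal static HTML page carrying OGP/Twitter Card meta
    tags, so crawlers that don't execute JavaScript still get a rich
    preview of the post."""
    e = html.escape  # escape user-provided values for safe HTML output
    return (
        "<!DOCTYPE html>\n"
        "<html><head>\n"
        f"<title>{e(title)}</title>\n"
        f'<meta property="og:title" content="{e(title)}"/>\n'
        f'<meta property="og:description" content="{e(description)}"/>\n'
        f'<meta property="og:url" content="{e(url)}"/>\n'
        f'<meta property="og:image" content="{e(image)}"/>\n'
        '<meta name="twitter:card" content="summary"/>\n'
        "</head><body></body></html>"
    )
```

Feeding each post's title, summary, canonical URL and image through something like this yields one crawler-friendly page per post.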
I don't think the Open Graph Protocol is very flexible or ready for single-page applications and thicker clients.
The first thing that struck me when creating the script was that I couldn't just send the metadata as HTTP headers; I had to create HTML markup for it. I can't understand why one shouldn't be able to choose between headers and tags.
Secondly, the different social media sites need to start doing what Google has been doing since 2014: rendering pages with some kind of headless browser so they can understand the content and pick up dynamically injected OGP meta tags. It's not that much magic, nor that resource demanding, in 2017.
If you're interested, you can have a look at the source code for this blog at https://github.com/roys/js-web-blog. The project itself is licensed under the MIT License, but for the contents (posts and images) I reserve all rights.