With Tag Management, the Best of Times and the Worst of Times

Recently I was thinking that tag management is the best thing that ever happened to us Web Analysts. Tag management has made us free; no longer are we entangled in the yoke of release cycles. No longer are we victims of the capriciousness of website developers who would insert our tracking tags too soon, or too late, or on the wrong page, or with the wrong parameters, and sometimes not at all. Now we can be, dare I say the word, “agile” and quickly change our tags and tracking codes and conversion pixels. We, and our retargeting vendors and display publishers and SEA managers, rejoice, because we can start and stop and modify marketing campaigns in an instant.

But then I was thinking that tag management is the worst thing that ever happened to us Web Analysts. Now that we have spurned them, website developers have become sulky, and even more obstinate, and where they were once simply slow they now refuse us their services altogether. They will not set a single JavaScript variable for us, let alone something as complex as a well-planned data layer. “Scrape it from the DOM, traitor!” they will say, or “what’s wrong with parsing a URL, you bloody philanderer?”, and as for data layers, they will only ever implement them as a sort of punishment, hiding our much-needed data in some deeply nested structure at an undisclosed index. And along with our relationship, the quality of the data has deteriorated.

In fact it seems to me that tag management systems, despite their best intentions, have caused a lot of damage. It used to be that we (as in “we implementation gals and guys”) could just say “this will not work until it is fixed in your platform”. We cannot do this anymore – we are forced to find workarounds: searching query strings and page titles, traversing the DOM, looping through data structures, and that’s before the really expensive operations start (how about finding a product image, extracting the product name from the image source and making an Ajax request to retrieve product information?).
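To make the kind of workaround I mean concrete, here is a minimal sketch – the URL pattern and naming scheme are entirely made up for illustration – of deriving a product name from an image source because no data layer exists:

```javascript
// Hypothetical fragile workaround: derive a product name from an image
// URL. It breaks the moment the media path or naming convention changes.
function productNameFromImageSrc(src) {
  // e.g. "/media/products/blue-suede-shoes_800x600.jpg"
  var file = src.split('/').pop();   // "blue-suede-shoes_800x600.jpg"
  var slug = file.split('_')[0];     // "blue-suede-shoes"
  return slug.replace(/-/g, ' ');    // "blue suede shoes"
}

console.log(productNameFromImageSrc('/media/products/blue-suede-shoes_800x600.jpg'));
```

Four lines of string surgery, and every one of them is an undocumented dependency on someone else’s markup.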

There are a lot of disadvantages to this approach, the most trivial of which is that these workarounds do not always work. The more you rely on DOM extraction, the more tightly your tracking is tied to the page structure, so it will fail whenever the page’s code is changed – and it will be. HTML code may change. URLs may change. Page titles may change.

Custom JavaScript in the tag manager is dangerous; it might cause errors, or it may cause other scripts to throw errors. Sometimes marketers, full of joy over their sudden prowess with JavaScript “programming”, forget that variables set in the tag manager live in the global namespace and will overwrite other variables of the same name.
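A minimal sketch of that collision, with `globalThis` standing in for the browser’s `window` object (the variable names are invented):

```javascript
// The page's own code sets a global it relies on.
globalThis.category = 'shoes';

// A custom tag later assigns to "category" without var/let/const, so the
// assignment resolves to the existing global and silently clobbers it.
function customTag() {
  category = 'checkout-funnel';
}
customTag();

console.log(globalThis.category); // 'checkout-funnel' – the page's value is gone
```

Nothing errors, nothing warns; the site’s own scripts just quietly start reading the wrong value.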

Even if you avoid errors and naming collisions, DOM extraction and custom scripts will cost you money. The more we need to slice and dice and traverse and compute and calculate, the slower the website will become. Amazon apparently did a study showing that 100ms of additional page load time decreased revenue by 1%. Now, as much as you’d love to be, you are probably not Amazon, so the exact figure will not hold for you. But even so, additional load time will cost you money in lost revenue – money that you could just as well have spent to avoid or alleviate the problem.

Because this blog post is, of course, not a declaration of defeatism; if anything it aims to be a call to action. At the end of the day, DOM extraction and custom scripts – and for the most part even query string parsing – stink. The only solution that deserves the name is a properly constructed data layer.

By “properly constructed” I mean a data layer that is complementary to the tracking tags you actually plan to use. You want to do Enhanced Ecommerce tracking in Google Analytics? Google has a specification for that, which makes the process fast and reliable – not to mention that in GTM it can now be enabled with two simple settings. Ask your vendors; they might have specifications for their tags, too.
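As a sketch of what such a spec-driven layer looks like, here is a purchase push following Google’s Enhanced Ecommerce data layer format for GTM – the IDs and values are placeholders, and `var dataLayer` stands in for `window.dataLayer` on a real page:

```javascript
// Server-rendered purchase data, structured per Google's Enhanced
// Ecommerce spec so the GTM tags can consume it without any scraping.
var dataLayer = dataLayer || []; // window.dataLayer in the browser
dataLayer.push({
  ecommerce: {
    purchase: {
      actionField: {
        id: 'T12345',          // transaction ID – placeholder value
        revenue: '35.43',
        tax: '4.90',
        shipping: '5.99'
      },
      products: [{
        id: 'SKU-123',         // placeholder product
        name: 'Blue Suede Shoes',
        price: '29.44',
        quantity: 1
      }]
    }
  }
});
```

Because the structure matches the spec, the tag configuration reduces to flipping the Enhanced Ecommerce switch instead of writing extraction code.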

And don’t be afraid of redundancies. Your publishers require six different date formats in their various tracking tags? The best solution is not to create one timestamp variable and compute six dates on the client side; the best solution is to have your server compute all six, because if you do the client-side computation a few times over you will create an unmaintainable mess of home-computed variables. (Okay, in reality nobody would be quite that redundant in the data layer, but that doesn’t mean redundancy is a bad idea.)
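To illustrate, a server-rendered layer might simply ship every format ready-made – the keys and formats here are invented for the example:

```javascript
// The server emits each date format the tags need, instead of the
// client deriving every format from one timestamp at runtime.
var dataLayer = dataLayer || []; // window.dataLayer in the browser
dataLayer.push({
  orderDateIso:  '2016-03-01',   // one vendor wants ISO 8601
  orderDateUs:   '03/01/2016',   // another wants US style
  orderDateUnix: 1456790400      // a third wants a Unix timestamp
});
```

It looks redundant, but each value is computed once, in one place, by code your backend team actually maintains.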

The point is that you should not use tag management to circumvent conflicts with your developers. Being able to implement tags without the help of developers, or to quickly start and stop campaigns, is not worth it if the data you get out of it is corrupted and invalid. It might be a bit of a disappointment – after all, you were promised that you’d no longer need a developer to successfully track campaigns – but having the data set by the server is about the only way to get reliable tracking. If you plan your data layer carefully you will still get a lot of benefit from your tag management system; but if you don’t, tag management might hurt you more than it helps.
