gTLD vs subdomains?

Matt Cutts, Google’s king of web spam prevention (and marathon running), recently responded to some gTLD assertions from Adrian Kinderis of ARI Registry Services, arguing that ownership or use of a gTLD would not bestow any special advantage in SEO:

Google will attempt to rank new TLDs appropriately, but I don’t expect a new TLD to get any kind of initial preference over .com, and I wouldn’t bet on that happening in the long-term either. If you want to register an entirely new TLD for other reasons, that’s your choice, but you shouldn’t register a TLD in the mistaken belief that you’ll get some sort of boost in search engine rankings.

This attracted my attention, since Google is one of the biggest participants in gTLD registration, with over 100 new domains applied for. Why would Google invest so heavily in gTLDs if they have no intention of providing any ranking benefit in their own search engine?

The plot thickened with another more recent announcement from Matt Cutts, this time a YouTube video explaining a policy change for Google ranking of results from the same domain. The upshot is that Google will penalise ‘domain clustering’ – you will in future see no more than about 4 results from a single domain. Justin Briggs has posted a nice summary here.

Although it seems that the main intent of this change is to limit the appearance of multiple results from large domains, there is some relevance to the gTLD initiative.

There are many sellers of third level domains, supporting businesses and use cases that might otherwise be served by secondary domains under a new gTLD. For example, CentralNIC has an active program to sell country branded 3rd level domains, intended “to provide an alternative to the existing Top Level Domains (TLDs) and Country Code Top Level Domains (ccTLDs), allowing the creation of a simultaneously local and global Internet Identity.” There are many local, geographic and business entities also promoting 3rd level domains.

All these 3rd level domain businesses will compete with new gTLD registry operators, and Google has just severely constrained their potential SEO performance, thus providing a de-facto boost to the gTLD camp.

Maybe Adrian Kinderis was right all along.


Posted in Uncategorized

Do Not Track is way off track

Since about May 2012, anyone who has accessed a website owned or hosted in an EU country will very likely have seen some kind of message describing a new cookie policy and requesting user opt-in to provide permission for publisher use of cookies, similar to this example from The Economist:


For most users, this will represent a very minor irritation, or might trigger a momentary question about why a website would bother to ask for permission about cookies. Most of us would click OK without thinking, and get on with our web browsing.

The existence of this kind of permission request is the tip of a large and complicated iceberg that has profound regulatory and commercial implications for the business of online publishing and advertising. This post will try to describe that iceberg and unpack some of those implications.

This kind of notification and permission seeking has been the typical publisher response to the EU Directive 2009/136/EC, or Electronic Privacy Directive. Under Article 5(3) of this directive, publishers can store information in a visitor’s browser only if the user is provided with “clear and comprehensive” information about the purposes for the storage and access of that information, and only if the user has given their consent for the publisher to do so. After initially consenting to such a cookie policy, the user’s consent can be carried over into subsequent requests to that same website.
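The consent-carryover pattern described above is simple in practice: record the opt-in once in a first-party cookie, then skip the banner on later visits. Here is a minimal sketch of that pattern – the function and cookie names are hypothetical, illustrative only, and not any particular publisher’s implementation:

```python
# Hypothetical sketch of the consent-carryover pattern implied by the
# Directive: record the opt-in once, then skip the banner thereafter.

CONSENT_COOKIE = "cookie_consent"  # illustrative cookie name

def needs_consent_banner(request_cookies: dict) -> bool:
    """Show the banner only if the visitor has not yet opted in."""
    return request_cookies.get(CONSENT_COOKIE) != "accepted"

def record_consent(response_cookies: dict) -> None:
    """Persist the opt-in so subsequent requests carry it over."""
    response_cookies[CONSENT_COOKIE] = "accepted"

cookies: dict = {}
print(needs_consent_banner(cookies))  # True: first visit, banner shown
record_consent(cookies)
print(needs_consent_banner(cookies))  # False: consent carried over
```

In a real deployment the cookie would of course be set via an HTTP `Set-Cookie` response header rather than a plain dictionary, but the logic is no more complicated than this.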

In the UK interpretation of the Directive, there have been some interesting developments around what is actually required for a user to be deemed to have given their consent. The website of the Information Commissioner’s Office, the very body charged with enforcing regulation such as the EU Directive in the UK, in January this year changed its own mechanism for gaining user consent to an opt-out approach:


and in doing so, has triggered some hilarious, quintessentially British criticism from industry observers.

The ICO notes, in its guidance on recognition of implied consent, the possibility that browser settings may in future provide robust mechanisms by which consumer consent can be conveyed to publisher sites and networks.

Although not specifically referenced in the Directive or in the UK Regulations, it appears clear that a key mechanism intended to deliver this kind of browser-configurable consent indicator is the Do Not Track (DNT) initiative: a proposal first put forward in 2007 whereby users could configure their web browsers to send a custom HTTP header indicating to a publisher site that the user does not wish his or her online activity to be monitored beyond the minimum required for the site to satisfy the request for web content.

The core idea in Do Not Track is to provide a reliable browser-based mechanism that would allow a user to indicate to publisher sites whether they prefer anonymity, which requires no tracking at all, or customisation of content which must be supported by some degree of tracking.
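The signalling side of DNT is deliberately trivial: the browser sends a `DNT: 1` header, and a cooperating site checks for it before tracking. A minimal sketch of the server-side check (the function name here is hypothetical, not from any standard API):

```python
# Minimal sketch of a site honouring the DNT request header.
# The header carries "1" (do not track) or "0" (tracking permitted);
# absence of the header means the user has expressed no preference.

def should_track(request_headers: dict) -> bool:
    """Return False when the visitor has sent 'DNT: 1'."""
    return request_headers.get("DNT") != "1"

print(should_track({"DNT": "1"}))  # False: user has opted out
print(should_track({"DNT": "0"}))  # True: user permits tracking
print(should_track({}))            # True: no preference expressed
```

The simplicity is the point – the hard part, as the rest of this post describes, is not reading the header but getting anyone to agree on what honouring it should mean.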

Actual observance and execution of the Do Not Track request relies entirely on the publisher site, and is a kind of honour system, similar to the robots exclusion standard, where site owners can optionally include a ‘robots.txt’ file in their site hierarchy with directives governing which parts of the website compliant web crawlers and other robots may index. There is no guarantee or mechanism to ensure that any web crawler will actually comply with the robots.txt directives, although the major search engines do comply.
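For comparison, a robots.txt file is nothing more than a short plain-text set of directives. A typical (illustrative) example:

```text
# Ask all crawlers to stay out of one path
User-agent: *
Disallow: /private/

# Ask one specific crawler to stay out entirely
User-agent: BadBot
Disallow: /
```

Like DNT, nothing in the protocol itself forces a crawler to obey; compliance is purely voluntary.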

Unfortunately for users, there is little incentive for a publisher to honour Do Not Track requests. Publishers are facing growing commercial pressures to wring revenue out of their web publishing operations, and user tracking and behavioural targeting of advertising represent revenue opportunity. And there is as yet very little consistency around how a user can request not to be tracked, very little consensus around what it actually means not to be tracked, and practically no regulatory enforcement of such requests.

A fascinating aside is the story of the Internet Explorer 10 default DNT setting controversy. In June 2012, Microsoft announced that IE 10 would by default be set to include a Do Not Track header in every HTTP request. The US Digital Advertising Alliance coalition raised a complaint that by turning the Do Not Track setting on by default, IE was eliminating consumer choice, and that the advertising industry therefore had no obligation to honour the DNT request. Shortly after this announcement, one of the authors of the DNT standard submitted a software patch to the open source Apache web server – the most popular web server platform in the world – with the effect of ignoring all DNT requests from IE 10 browsers. The logic was that by removing user choice from the setting, Microsoft was subverting the intent of DNT: it could appear to support user privacy while giving publishers and advertisers grounds to wilfully ignore DNT requests. This Apache change has since been reversed.

Since 2007 when Do Not Track was first proposed, the routine application of usage tracking techniques across the internet has exploded.

Many online users are aware to some degree that websites are able to deliver editorial content and advertising that is influenced by their previous online behaviour such as web searching and site browsing.

However, most users remain unaware of the extent to which their every click on every website is captured and used to build user profiles. Those profiles then fuel complex online advertising processes in which automated systems bid for the right to deliver specific advertising messages into individual slots on specific pages of online sites, so that they will be seen by specific users who have displayed characteristics the advertisers deem desirable.

Try installing the Collusion browser plugin for Firefox or Chrome, or at least take a minute to view the online demo. Or check out the TrackerMap from Evidon. You will be amazed, and quite probably horrified, at how many additional sites capture and store information about your browsing as you go about your normal activities online.

A key feature of current tracking technology is that not only do publisher sites use their own cookies to track your visits to a particular site, but a fast-growing ecosystem of 3rd party sites also uses cookies to track your activity across multiple sites. These 3rd party sites are generally unknown to the user.

To privacy advocates, this is a dystopian scenario; to many in the online advertising industry, it is The Future. To web users and publishers, it is a battleground, fraught with conflicting interests and agendas. In the absence of clear guidelines or regulation, tracking technology continues to develop, and ordinary users remain badly under informed about the nature and extent of the tracking they are subject to.

In 2011 the W3C launched the Tracking Protection Working Group, with a charter to “improve user privacy and user control by defining mechanisms for expressing user preferences around Web tracking and for blocking or allowing Web tracking elements.” and to “standardize the technology and meaning of Do Not Track, and of Tracking Selection Lists”.

After two solid years of work in this forum, not only is there very little consensus on how a privacy option should be enabled in web browsers and what kinds of usage data should remain collectable when privacy signalling is activated, but a fundamental schism now appears to be emerging between the privacy and behavioural advertising camps.

In a recent interview with AdExchanger, Jonathan Mayer, one of the W3C Tracking Protection Working Group members, described the standoff as one where advertising industry intransigence is preventing consensus on a common-sense understanding of the definition of Do Not Track:

One thing advocates long stood by is if a user says “Do not track me,” that should mean you’ll get rid of the unique ID identifier cookie if you’re a third party in the business of advertising or collecting user data. The advertising industry has said “No, we need to keep these cookies for certain uses like market research, product improvement, etc”. It’s hard to come up with something that doesn’t count as market research or product improvement.

In an attempt to short circuit what appears to be an interminable debate on intent and delivery of DNT, Mozilla now appears to have adopted a strategy to pre-emptively deploy technology solutions that will further the privacy agenda regardless of industry recognition or honouring of DNT user intent.

Although cookies are not the only technique for tracking user behaviour, they are by far the most popular, and so-called 3rd party cookies in particular are the most contentious.

Third party cookies are set by domains other than the one the user is actually visiting – the kinds of domains the user typically only gets to know about through tools like Collusion and TrackerMap.
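The first-party/third-party distinction can be pictured loosely as a domain comparison. The sketch below is a simplification – real browsers compare registrable domains, not raw hostname suffixes – and the function and domain names are purely illustrative:

```python
from urllib.parse import urlparse

def is_third_party(page_url: str, cookie_domain: str) -> bool:
    """Rough check: a cookie is 'third party' when its domain does not
    match the site the user is actually visiting. (Simplified: real
    browsers compare registrable domains, not hostname suffixes.)"""
    page_host = urlparse(page_url).hostname
    return not page_host.endswith(cookie_domain.lstrip("."))

# A cookie from the site you are on is first party...
print(is_third_party("https://news.example.com/story", ".example.com"))  # False
# ...while a cookie from an embedded ad network is third party.
print(is_third_party("https://news.example.com/story", ".tracker.net"))  # True
```

The third-party cookie arrives because the page embeds resources – ad slots, tracking pixels, social widgets – served from those other domains, each of which can set and read its own cookies as the request is made.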

In February 2013 Mozilla announced its intent to block the setting of 3rd party cookies in its Firefox web browser. And in a rare case of direct technical follow-through from policy makers, Jonathan Mayer himself wrote the code that will execute on this intent, now included in an alpha release of Firefox 22. With Firefox accounting for around 23% of online browsing, this change will have a significant impact on behaviourally targeted advertising.

Mozilla’s new stance aligns with Apple’s existing stance for its Safari browser, which also has DNT and 3rd party cookie prevention enabled by default. Internet Explorer has varying permissions based on a separate W3C standard called P3P, which has long been criticised as overly complicated and ultimately ineffective. Google’s Chrome web browser allows all cookies.

Cynics might argue that Mozilla and Apple have little at stake in this argument, since Mozilla does not directly benefit from online advertising, and Apple has its own separate ad targeting mechanisms that do not depend on cookies. Microsoft is backing away from the adserving game with the completion of the sale of its Atlas ad server platform to Facebook, and so is no longer directly exposed to online ad industry revenues.

Google is now the only major browser developer that also has significant skin in the online advertising market, so it is perhaps unsurprising that Chrome is late to the DNT and 3rd party cookie party.

Remember also that Google is the main financier for Mozilla, providing the vast bulk of Mozilla revenues in return for Google Search being configured as the default search for the Firefox browser. Industry observers will watch with great interest what happens in 2015 when the current deal expires.

What happens from here will be fascinating. Will we see behavioural targeting revenue growth take a serious hit with 3rd party cookies becoming far less reliable as tracking tools? Will we see non-cookie based unique IDs and tracking methods, which in general are perceived as being intrusive and undesirable, becoming more prevalent? Will we see publishers respond with increasing use of paywalls in attempts to monetise their content in other ways?

Watch this space for more as this drama unfolds.

Posted in Uncategorized


This week ICANN finally and formally made the long anticipated decision to adopt the gTLD proposal that has been percolating through their processes for around 6 years now.

It feels to me that we’re on the cusp of some significant changes to how people will navigate online, and with that will come potential for some unprecedented new commercial opportunities.

I expect that the initial implementation phase will create a short term increase in reliance on search as the primary mechanism for finding relevant content. This puts Google in a position of even greater power, and opens up some interesting questions, such as what happens if Google can’t or won’t index content in a particular gTLD?

You’ve got to imagine that until the new gTLDs gain mindshare among users, any such unindexed gTLD will receive very little traffic. This may be enough to kill some or many of the proposed gTLDs very early.

But there is another plausible scenario that could play out very differently.

It’s not hard to imagine that users will quickly get comfortable with the new gTLD concept through some big players adopting them early.

Say Facebook gets on board and gives every one of its users a unique domain like yourname.facebook. And a bunch of global brands adopt their own gTLDs to help create global content and information architectures, and to help eliminate confusion from spam domains. Canon, Deloitte and others have already publicly declared their intent to do exactly that.

So all of a sudden you have the vast bulk of online users exposed to new gTLDs, and they begin to realise that those .com and .com.countrycode domains look a bit old-fashioned.

And a bunch of entrepreneurs see the potential for using a new gTLD as a branding strategy and usage driver for a global consolidation of some of the highly monetised online classifieds vertical categories – like automotive, or employment, or travel.

Whoever owns .auto, .jobs, or .holiday has an excellent weapon to deliver a global campaign to wrestle that money from the various domestic incumbents all over the world.

This becomes possible only under a cutting edge proposal for online domain naming and navigation, but it harks back to an ancient tradition, where providers of the same services in old cities would cluster in the same neighbourhood or in the same streets.

Savile Row in London is full of tailors, Kappabashi in Tokyo is full of kitchenware stores. Most older cities will have a ‘Baker Street’ and a ‘Butcher Street’.

This did not happen by accident. It happened because clustering of services makes them easier to find. It simplifies navigation. It removes the need for a map.

That simplification will help condition users to look to those specialised domains when they need a car, or when they are looking for employment.

And those specialised search subdomains will be powered by specialist search providers, selected by the owners of those gTLDs.

All the domestic incumbent operators of those verticals will need to establish priority placement and traffic deals with the specialist search operators for those global domains.

And suddenly Google’s search supremacy, the main map to the web, starts to look a bit shaky.

Posted in Uncategorized

PGP for Gmail

I’ve recently found the FireGPG plugin for Firefox. Among its various nice features is the ability to integrate GPG/PGP into Gmail.

What can I say, it works really well.

Now there is even less excuse not to use secure email.

Here’s my public key

Posted in Uncategorized

Gmail for Domains and other web 2.0 tools

Now that Tunnel Visionary is a global virtual company, we’ve adopted some new communications infrastructure to help keep the flexibility of use and accessibility up, and keep the cost of managing our systems down. In fact we’ve managed to reduce the cost to free! – well ok, free except for the effort involved in setting things up.

The first and most important change has been that we’ve started using Gmail for domains. If you haven’t heard of it, that is Gmail’s new capability to host email for domains other than gmail.com. If you’re interested you can register to get your own email domain hosted at Gmail.

This has given us the great Gmail functionality that many webmail users will already be familiar with, and also allows us to keep our own domain addresses! This is cool. One of the best outcomes of this for me has been that Gmail’s spam filtering works better than any other I’ve previously used, and of course I can now easily access my mail from any connected device anywhere in the world, and from my various computers and operating systems without needing to do anything at all to keep my inbox and contacts lists unified. Hurray!

My only gripe with Gmail is that I’m yet to find a good PGP encryption solution. I’m hopeful that Freenigma might prove to be the missing piece here.

As well as the Gmail functionality, we also get the Google Calendar which is fully iCal compliant and pretty much delivers all the functionality I ever really used in Outlook or Evolution.

The iCal-ability of Google Calendar has already proven to be a great enabler for us because we can now link in calendar feeds such as public holidays, and also project-based feeds of tasks and milestones generated from our BaseCamp project management system – which is another of the best web 2.0 services I’ve used.

And lastly, on one project we’ve started using a couple of new Google collaboration tools – the web word processor Writely and Google spreadsheets. These are underkill compared to the functionality of Microsoft Word or Excel or OpenOffice, but if you’re a 90%er like me (i.e. one of the 90% of users who actually use only 10% of the features) you’ll probably find these tools will do the job, and you can always export to a more fully featured tool for a final prettying-up of your document.

These show great promise as a way to eliminate much of the confusion and tedium of managing document version control among a collaborative work group. At time of writing these are accessible by invitation only, but I can invite you if you email a request to me. We’re still testing things out with these tools, but if they really work as well as they seem to, there’s every chance we’ll use them across other projects as well.

Posted in collaboration, Tunnel Visionary, web 2.0


Welcome to the Tunnel Visionary blog.

This blog site will contain occasional musings, thoughts and ideas from the heads of the good people of the Tunnel Visionary Group.

Tunnel Visionary is a global virtual consultancy, specialising in strategy, technology and content services for convergent and interactive media. Our expertise spans web, mobile, broadband, iTV and print.

In Australia we’ve produced online, mobile and iTV services for some of the biggest blue chip brands in the country.

In the UK we were instrumental in getting BSkyB‘s interactive TV platforms technically operable and commercially viable, and helped develop a new cross-platform content standard for the DVB.

How can we help you? Contact us for a free consultation.

Posted in iTV, Tunnel Visionary
