
Over the last few years, as IAB Europe’s Transparency and Consent Framework (TCF) has been in and out of the news, a few of my friends across the industry have nudged me to put the story behind the TCF down on paper.
After eight years, some of the details are a bit hazy to me, but it’s hard for me to forget the sequence of events that led to the invention of possibly one of the most consequential and divisive privacy protocols in all of AdTech.
Warning: this is quite a long read. If you’re interested in how messy the intersection of commerce, technology, and policy can be, it should be worth it.
Background
Late one Friday afternoon in 2017, Alice Lincoln, then the privacy lead at MediaMath, sent me an innocuous-looking Slack message along the lines of, “Do you mind taking a look at this and letting me know what you think?” After a long week of buy-side diligence, reading a relatively dense-looking white paper on privacy was significantly less attractive than logging off for the weekend, so I told her I’d have a look over the weekend.
That Sunday night, I found myself reading something that was a bit less “bedtime story” and a bit more “bad dream.” What Alice had handed me was a fairly stern interpretation of implementation guidance for an impending European privacy regulation, the General Data Protection Regulation (GDPR), that, at first glance, represented a significant undermining of the Real-Time Bidding (RTB) ecosystem. I immediately shot off a note back to her: “We need to chat!”
A few weeks and a few conference calls later, I found myself on a plane from Chicago to MediaMath’s recently opened offices in London Blackfriars to join a meeting with a subset of IAB Europe’s GDPR Implementation Working Group (GIG, because why not make it a backronym… turns out policy wonks do have a sense of humor!).
As an aside, this would be the first of many short, last-minute trips to Europe. If you’re a US-based AVGeek trying to make Global Services/Concierge Key/Delta 360/similar, I recommend attaching yourself to a highly urgent yet slow-moving international policy topic. My spend on United skyrocketed thanks to GDPR, as did my weight, my lack of sleep, and the level of well-deserved spousal irritability!
The IAB Europe GIG’s purpose, as I understood it, was not only to interpret the text of the GDPR (adopted April 2016) and parse the DPA guidance that was trickling in to help IAB members prepare for implementation, slated for May 2018, but also to achieve greater alignment between industry players. The open web and third-party AdTech would need to agree on legal interpretations to even be able to work together on technical solutions. This was complicated by two very real challenges at the time: 1) the vertically integrated players in the market had a natural advantage, and therefore a disincentive to proactively engage in any policy and standards process, and 2) this was a time when most in the industry still did not have a real understanding of the scope and impact of the GDPR.
This is an already lengthy post, so I will oversimplify this section for background: the GDPR’s provisions can be distilled down to a few basic tenets. Individuals have the right to know what data is collected about them, to access and correct that data, to have it deleted under certain conditions, and to expect that their data is handled transparently, securely, on a lawful basis, and with consent.
GIG was tasked with determining a path to compliance. The prominent voices in the room that day were Matthias Matthiesen (IAB Europe), Alice Lincoln (MediaMath), Shane Wiley (Yahoo), Chris Hartsuiker (IAB Europe), and Ionel Naftanaila (IAB Europe and EDAA). I was the only engineering leader in the room. If you were in that room and I left you out, please accept my deepest apologies! Please send me a note, and I’ll edit this post for accuracy.
By this point, most of the mechanics of data subject access requests and deletion requests, which allow individuals transparency into the data collected about them and to have their data erased within a reasonable timeframe, were well understood. The significant challenge yet to be solved was that of securing and transmitting consent.
Consent in a complex network
At face value, the expectation that an individual should be informed about and give consent to have their data processed seems reasonable. In practice, the internet economy is highly complex. Major brands that consumers have relationships with benefit from name recognition; the many technology providers those brands work with to deliver their products and services do not. The sheer number of brands and technology providers a consumer interacts with makes it a real challenge to achieve informed consent for every provider involved in delivering a product or service without introducing consent fatigue.
In AdTech, the problem is one of sequential data transmission that occurs in the exact opposite direction of the transmission of economic value.
Let’s use a simplified example to illustrate. A typical 2015-ish advertiser’s supply chain would often include:
- The publisher whose content is being monetized,
- The publisher’s ad server, which helps publishers prioritize and count revenue from downstream demand sources,
- The publisher’s SSP, which enables them to connect programmatically to many demand sources and ultimately maximize ad fill rates and, therefore, revenue,
- The advertiser’s DSP, which enables the advertiser to manage, fulfill, measure, and optimize its advertising campaigns,
- The advertiser’s audience platform, which enables the advertiser to market to the right audiences selectively,
- The underlying common identity provider that the SSP, DSP, and/or DMP might need to leverage to enable those audiences to scale,
- The advertiser’s creative ad server or DCO vendor, which enables the advertiser to serve the right creative while tracking creative engagement,
- Any fraud detection (is this a real person or a bot? Is the ad malware or a real creative?), viewability (is this ad in view of the user, or is it covered up or on a part of the screen that can’t be seen?), brand suitability (am I buying a site or piece of content that clashes with my brand message or values?), and independent measurement providers engaged along the way,
- And finally, the advertiser’s publisher analytics platform, which enables them to understand how their products, services, and storefronts are used.
Each of these bullets represents one or multiple solution spaces that each have dozens of vendors within them.
Critically, given that many of the ads displayed on a publisher’s site are the result of a real-time auction, it’s impossible to know which ad will display ahead of time. Mathematically speaking, the sheer number of combinations of potential technology providers present in any one given ad transaction is so large that, for practical purposes, it is unbounded.
Prima facie, the choices discussed were simple: 1) A publisher could define a set of preferred vendors it limits itself to working with, restricting bids from advertisers who work with vendors outside that list (with the tradeoff of drastically reduced revenue), 2) it could pick the top ~10 in each category to simplify the user’s choice (with the tradeoff of reducing competition and drastically accelerating market consolidation), or 3) it could attempt to achieve consent for as much of the market as possible (with the tradeoff of a more challenging user experience).
During a time when publishers were increasingly struggling for revenue, choices 1 and 2 were luxuries that only the most prominent publishers could afford. They would still require some form of standardized consent signaling.
NB: As has been discussed endlessly over the last decade, the tradeoffs in this problem space pit privacy against competition. Privacy advocates at the time viewed consolidation as a Very Good Thing™ for consumers, but broadly speaking, the IAB, IAB Europe, Tech Lab, and other industry bodies have a mandate to represent their entire member base and take a neutral approach, avoiding solutions that advantage any one party to the transaction or constrain the size of the market.
Deconstructing the problem led us to several principal entities: the consumer, the technical providers, the purposes, and the consent grant itself. Technical providers (referred to in the specification as “vendors”) in the ecosystem required a method to receive, store, transmit, and, in the event of an audit, prove consent for their declared processing purposes. The ecosystem would need to align on a method for receiving consent from the consumer, storing it in a standardized and accessible format, and transmitting it across vendors.
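To make those entities concrete, here is a minimal sketch of how they might be modeled. The names and fields below are mine, chosen for illustration; they are not the specification’s schema:

```python
from dataclasses import dataclass

# Illustrative sketch only -- these names and fields are mine, not the spec's.

@dataclass
class Vendor:
    id: int                       # stable ID in a shared, versioned registry
    name: str
    declared_purposes: set[int]   # processing purposes the vendor declares

@dataclass
class ConsentGrant:
    subject_key: str                    # however the consumer is identified
    vendor_consents: dict[int, bool]    # vendor ID -> consented?
    purpose_consents: dict[int, bool]   # purpose ID -> consented?
    captured_at: float                  # timestamp, for audit/proof purposes
```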
Technology
Several options were discussed: extending the “Do Not Track” standard, an ads.txt variant for vendor declaration, blockchain, and the sequential transmission of consent, akin to a daisy chain. Each option had its pros and cons.
Do Not Track
Do Not Track (DNT) was a standard implemented by the majority of web browsers by mid-2012, enabling users to signal an opt-out of tracking and data sharing by websites. The header was proposed in 2009, implemented broadly by 2012, and abandoned over the following decade because websites and vendors never honored it. DNT was quickly shelved as an option due to 1) a lack of vendor granularity in the signal—DNT was an “all or nothing” solution, 2) the lack of a clear path to modify it, and 3) the lack of interest among AdTech vendors in ceding control of such an essential signal to browsers and the W3C (the browser standards body), whose interests were decidedly not aligned with those of the IAB.
consent.txt
By this point in the AdTech timeline, the idea of per-publisher vendor declaration had become widely accepted, with the adoption of ads.txt (IAB Tech Lab’s newly launched standard for declaring which partners are authorized to sell a publisher’s ad inventory) gaining significant momentum.
A variant of this, “consent.txt”, was discussed as a solution for publishers to declare which vendors they obtain consent for. While cheap and straightforward to implement, it offered no way to signal users’ choices, either individually or across vendors.
Centralized User Registry
The next option discussed was a centralized user registry: a single, pan-European database that would be regularly written to and/or queried by all AdTech players as the system of record for user consent data. This registry could centralize the consumer touchpoint (as a single sign-on solution would), thereby simplifying the governance model for all parties and standardizing user experience expectations. Parallels to Evidon were drawn as evidence to support this model.
Unfortunately, it wasn’t obvious who would build, own, or run such a database, which would need to technically scale beyond the European population and support the QPS (queries per second) volumes of an industry built on a real-time transactional model. What would the commercial model be? How might a non-profit organization sustain it, and how might the team behind it be incentivized to keep it running in perpetuity? How would the industry ensure that the user experience implemented by this entity remained neutral, and would publishers even be open to externalizing one of the very first touchpoints they have with the consumer to an entity that didn’t exist to serve their needs directly?
Decentralized User Registry
A decentralized variant of the above was proposed. This was purported to be simpler from a commercial and cost alignment perspective: decentralizing the user registry allowed multiple entities to store consent data in their environments, which the industry could then connect to. Yet, this created a fan-out problem: without some form of coordination or service discovery mechanism, AdTech players seeking to consume the consent data would need to query multiple user registries to understand the current state of a user’s consent. What if the systems disagreed? This was a non-trivial technical problem at the scale of billions of user devices (phones, computers, tablets, smart TVs, etc.) and hundreds of network participants.
Blockchain
Blockchain was a natural extension of the decentralized user registry. By this point in 2017, interest in blockchain was rapidly accelerating, and concepts like self-sovereign identity began to enter the discussion as viable alternatives to the largest identity providers, Google and Facebook. Blockchain, at least conceptually, offered a method of decentralizing digital identity away from the leading tech monopolies and put control back into users’ hands.

While interesting to discuss, this solution seemed to create more questions than answers.
- Who would be allowed to write to this blockchain? Is the user acting directly, or is the publisher acting on the user’s behalf? If the user, how would they authenticate themselves? If the publisher, how would they authenticate writes on behalf of the user?
- Would it be a public blockchain or a private blockchain? If public, could the consent state of a user be considered personal information (more on this later!), and therefore be restricted from being shared across node operators? If private, wouldn’t that make achieving transparency more difficult?
- Who would run the nodes themselves? Given the sheer volume of data, publishers likely wouldn’t want to incur this cost, nor would there be much utility in their retaining a copy of the whole thing. Who would be able to provide the nodes as a service, given the technical expertise and costs associated? Surely not the trade groups or standards bodies.
- Blockchain throughput, from the perspective of both transaction verification and propagation, simply wasn’t fast enough to support the scale of the AdTech ecosystem.
Admittedly, none of us in the room were blockchain experts, but there were simply too many open questions with this model. Given that, as of the time of this article’s writing, there is still no widely adopted blockchain-based decentralized identity solution, perhaps we made the right decision in disqualifying this option.
Daisy Chain
Daisy Chain was the last of the options proposed. The concept was simple: piggyback on the existing RTB pipes to send a payload containing the user’s consent state from the publisher, at the initiation of the ad request, to each downstream participant, allowing each network participant to decide, in real time, whether or not it could process the request.
As proposed, this option was the most viable in that the consent signal (as long as it was injected according to the expectations laid out in the specification) could be trusted to be up to date, with a clear scope (the consent relates to this request and this request only), and could theoretically be granular enough to support the ability to demonstrate the consented purposes for each vendor.
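For the curious, this is roughly the shape the signal ultimately took in OpenRTB’s GDPR advisory: a regulatory flag on the request’s regs object and the consent payload on the user object. The fragment below is a simplified illustration, not a complete bid request:

```python
# Simplified OpenRTB bid request fragment (illustrative, not complete).
bid_request = {
    "id": "abc123",
    "regs": {"ext": {"gdpr": 1}},                # request is in GDPR scope
    "user": {"ext": {"consent": "BOEFEAy..."}},  # encoded consent payload
}
```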
However, there were a few problems with what was proposed:
- The protocol required compliant participation in order to be effective. Given that the advertising industry has been marred by a history of bad actors, there was nothing technical in the protocol preventing bad actors from participating. This led to a philosophical debate: should the IAB EU be expected to police its membership, or did it suffice to leave that up to regional DPAs or other legal enforcement agencies?
- We hadn’t yet figured out a way to make the payload tamper-proof from the source. This seemed like a good feature to implement as a fast follow, using public key cryptography or another commonly accepted form of signing.
- The payload was just too big. At this point in 2017, we estimated that roughly 20 million QPS were transacted across the globe, meaning that, given conservative estimates of what such a payload might weigh when structured as JSON in an OpenRTB extension, this piece of data could represent multiple incremental terabits per second globally (see the back-of-envelope math below). At some point, the scale of such a payload becomes a consideration, particularly in terms of the tax it places on cellular or broadband plan limits, especially in developing nations where quotas might be quite constrained compared to what we’re used to in Europe or the US.
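The back-of-envelope math looked something like this; the per-request payload weight is a hypothetical figure for illustration:

```python
# Rough 2017-era estimate; the payload size is a hypothetical for illustration.
qps = 20_000_000                     # global RTB requests per second
payload_bytes = 10_000               # ~10 KB JSON consent object per request
tbps = qps * payload_bytes * 8 / 1e12
print(f"~{tbps:.1f} Tbit/s of incremental traffic")  # ~1.6 Tbit/s
```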
At the end of a long workshop day, we didn’t have a viable solution.
The DaisyBit
One of the many small things that made me smile about my time at MediaMath was the silly little vernacular that developed as part of the culture: phrases like “Win-Wins Win,” “Thing One, Thing Two,” or “Bouillabaisse.” One such phrase was “The Shower Cycle,” the idea that some of our best thinking appears to us in the shower, like a form of subconscious cogitation released only by the meditative experience of warm water rushing over your neck—and sometimes you just need a few days (showers) to surface that thinking.
A few shower cycles later, a simple solution came to mind: decoupling the identification of the specific technical provider from that provider’s consent state would make it possible to limit the RTB payload to only the data scoped to that request, and would open up the aperture for compression methods that might slim down the payload.
One of the simplest and most common patterns in high-performance / big data systems is data locality: avoiding unnecessary transmission of data that changes infrequently, so that the costs (CPU time, network, failure surface, etc.) associated with a unit of data processing are limited to, at most, the data that changes from request to request.
From the perspective of any recipient of an RTB request, the list of technical providers is the same for everyone consuming it. It doesn’t change often (perhaps a few times per month), whereas the contents of the consent state might differ wildly depending on one’s prior exposure to that user and which publishers/tech providers that user has engaged with.
That’s when the idea of using a bitmap index came to mind: a technique from the 1980s database world that seemed like a perfect fit for both the consent use case and the need to minimize the weight added to the RTB payload. A bitmap index is exactly what it sounds like: a way to represent the presence or absence of an attribute (in our case, vendor-specific consent) using bits. For example, if you have a list of vendors and a user has granted consent to vendors 1, 4, and 7, the bitmap would look something like: 1001001. That string of bits is fast to create, computationally trivial to parse, and compresses extremely well.
This bifurcated approach allowed us to decouple the consent state from the vendor metadata entirely. We could now distribute what would later be named the Global Vendor List (GVL), a canonical, slowly changing registry of vendor IDs, separately from the consent payload itself. That meant the RTB payload could include a compact string of bits, representing consent status for a fixed, agreed-upon index of vendors. It worked. It was simple, scalable, and almost too obvious: apply a classic data structure to modern digital privacy enforcement. And yet, that little trick became one of the most technically consequential decisions behind the TCF spec.
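Here is a minimal sketch of the idea in code. The real TC string also carries a version, timestamps, the CMP’s ID, purpose consents, and range-based compression, so treat this as the concept rather than the spec:

```python
import base64

def encode_consent(consented_vendor_ids: set[int], max_vendor_id: int) -> str:
    """Build a vendor-consent bitmap and pack it as URL-safe base64."""
    # Vendor IDs are 1-indexed against the shared vendor list (the GVL).
    bits = "".join(
        "1" if vid in consented_vendor_ids else "0"
        for vid in range(1, max_vendor_id + 1)
    )
    padded = bits + "0" * (-len(bits) % 8)  # pad out to a whole byte
    raw = int(padded, 2).to_bytes(len(padded) // 8, "big")
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

# Vendors 1, 4, and 7 out of 7 -> bitmap "1001001" -> two base64 characters.
print(encode_consent({1, 4, 7}, 7))  # "kg"
```

Run against the example in the text: consent for vendors 1, 4, and 7 out of a list of 7 produces the bitmap 1001001, which packs down to just two base64 characters.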
Matthias cleverly came up with the “DaisyBit” name, a nod to both the daisy chain model of the RTB bidstream and the use of bit-level encoding. Both cute and correct.
A brief conversation regarding the governance of the vendor list ensued, which was promptly deferred to the IAB EU as part of their standard operating model.
Having solved the problem of signaling, several other key contributors came in over the next few months. Julia Shullman (AppNexus); Ghita Harris-Newton, Somer Simpson, and Heinz Baumann (Quantcast); Chris Paterson and David Dabbs (Conversant); Charlie Simon (MediaMath); and various other members of the GIG became heavily involved in strategizing the rollout, galvanizing industry support, and, most importantly, turning that proposal into an initial library and reference implementation. Quantcast in particular was critical at this stage, as they (Crispin Flowerday in particular) wrote the initial specs for the Consent API, GVL, and DaisyBit, and over time crystallized a critical decision point: whether there should be one single pan-European consent namespace (à la digitru.st, one of the prevailing neutral unified ID providers at the time) or a decentralized approach. Along the way, a new class of vendor to manage the consent state was coined: the Consent Management Provider (shortly thereafter colloquially referred to as the Consent Management Platform).
“Oops, did we accidentally invent another LUMAscape box?”
Finally, in mid-October, Somer, Heinz, and Ghita’s draft proposal went to a vote, and GIG members overwhelmingly supported it. TCF v1.0 would not exist if it weren’t for their contributions.
With that vote, we had a solution.
It’s absolutely wild to think that not even five months after that vote, IAB Europe, a non-profit association with a total budget smaller than that of the smallest sub-teams of any of its member constituents, managed to stand up the GVL and related scaffolding and tackle cost-optimization challenges… having never run any technology before!
Of course, the first version of anything will never be perfect. The group understood some outstanding issues would need to be solved in subsequent versions: mutability and cryptographic signing of the consent string were of significant concern, the number of purposes wasn’t necessarily considered final, and so on. Critically, there was one piece of ambiguity in the specification that became a real nuisance later:
At the time of initial writing, the TCF policies did not clearly specify that for Purpose 1 (“Store and/or access information on a device”), the only allowed lawful basis was consent. In fact, the original ideas behind TCF didn’t even intend to cover legitimate interest. However, many in the industry were still clinging to legitimate interest as the basis for personalized advertising, and so, as the initial reference implementations for the GVL started to get built, this ambiguity was codified into the technical standard at the 11th hour, through the option for vendors to declare either Legitimate Interest or Consent as their basis for Purpose 1.
Purpose 1 existed exclusively to fulfill the Article 5(3) obligations of the ePrivacy Directive, while the rest of the purposes existed to meet the GDPR’s data processing obligations. Simply put, Purpose 1 wasn’t quite a purpose in itself; it was an activity undertaken in pursuit of other purposes. Yet the concept of a “special feature” did not exist in v1.x of the TCF, and allowing vendors to register legitimate interest for Purpose 1 created several distinct problems, which all undermined v1.x:
- It suggested that legitimate interest was a legitimate option for Purpose 1, which was never the case.
- It rendered user choice meaningless for those declaring legitimate interest.
- Some publishers and vendors assumed that obtaining consent for Purpose 1 was sufficient to justify all downstream processing purposes. Not so.
- Many CMP implementations didn’t make it clear that one could simultaneously provide consent for Purpose 1 and decline consent for the rest, leading to disagreements about whether the user had actually agreed to profiling and personalized advertising, and thus whether the chain of consent was valid.
- The National DPAs had yet to provide clear guidance, and the lack of consistent implementation by the market led to inconsistent regulatory interpretation, further confusing the industry.
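To make the Purpose 1 ambiguity concrete, here is the kind of registration-time check that v1.x never enforced. The field names are illustrative, not the actual GVL schema:

```python
# Hypothetical GVL-style vendor declaration; field names are illustrative.
vendor = {
    "id": 42,
    "purposes": {1: "legitimate_interest", 3: "consent"},  # purpose -> basis
}

def purpose1_basis_is_valid(vendor: dict) -> bool:
    # Purpose 1 (storing/accessing information on a device) falls under
    # ePrivacy Art. 5(3), which permits only consent -- so a legitimate
    # interest declaration here should have been rejected at registration.
    return vendor["purposes"].get(1, "consent") == "consent"

print(purpose1_basis_is_valid(vendor))  # False: the v1.x loophole in action
```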
In the background of all of this, discussions were happening about who should maintain such a standard. IAB Europe was not well-equipped to own such a complex technical standard. Yet at the time, the IAB Tech Lab was perceived by many in the European community as US-centric, and conversely, European privacy concerns were not yet on the Tech Lab’s roadmap or radar as of the end of 2017. Thanks in part to the advocacy of several companies that were both involved in the GIG and had members on IAB Tech Lab’s Board, this quickly changed. Dennis Buchheim and Jennifer Derke got directly involved early in 2018, and v1.1 was a joint announcement between IAB Europe and IAB Tech Lab, with Tech Lab taking on the role of technical steward while IAB Europe continued to lead policy development, regulatory engagement, and overall framework governance and strategy. This approach to co-creation proved critical during the run-up to v2.0, which I’ll explain in greater detail below.
The Announcement
On November 27, 2017, I hopped on the 4:55 pm ORD-LHR redeye to announce what from that day on would enter the industry vernacular (for better or for worse) as the “Transparency and Consent Framework” at the European Interactive Digital Advertising Alliance (EDAA)’s 2017 Summit held at the Renaissance Hotel London/St Pancras. The theme for this summit, which brought together 200 participants across advertisers, agencies, AdTech, and media, was to advance the discussion around “…key policy and business challenges for the online advertising industry in promoting transparency and control.”
This was the first of what would become many presentations held across Europe over the next 12 months, and the start of a newfound ability to get straight off a redeye flight and onto a stage to present. This particular presentation was memorable not for its content, but for the politics I was entirely unaware of until just a few moments before we were slated to talk. In the background, a group of publishers had made a last-minute push to prevent the launch from moving forward altogether. I later learned from those closer to the policy side of things that the 11th-hour attempt to scuttle the TCF’s announcement was likely motivated by a very problematic point of view: a solution to the consent problem would be seen as undermining both the prevailing industry policy objective, that personalized advertising should be possible without consent under the legitimate interest legal basis, and Germany’s unique legal situation!
At the time, Germany, unlike the rest of the EU, had not implemented the ePrivacy Directive’s consent requirement for cookies. As a result, German publishers were still able to avoid intrusive consent requests thanks to a greater scope for relying on the legitimate interest legal basis for personalized advertising. In contrast, consent was required virtually everywhere else in Europe. Many outside of Germany were holding out hope that the ePrivacy Directive’s consent requirement would yet be consigned to the dustbin of history. This manifested in a quite interesting and heated exchange between senior members of EDAA and IAB Europe immediately before Townsend Feehan introduced me and Matthias to the stage.
The presentation itself was relatively anticlimactic. Matthias explained the legal and policy background, and I followed him with a technically oriented explanation of the consent flow. The response? Blank stares. Silence. Polite applause.
I recall being quickly ushered in front of a video camera to provide a brief video explainer of what we had just presented, and oddly, that was it!
I later found out that behind the scenes, the drama about whether we should have launched continued in earnest. IAB Europe faced significant criticism from influential industry stakeholders for allegedly undermining the short-term interests and policy aspirations of an industry that did not yet fully understand the implications of the GDPR or the new political and regulatory landscape in which it had to operate. Political and regulatory sentiment had shifted significantly against online advertising, but not everyone seemed to understand the extent of the change.
What followed over the coming weeks and months was a series of familiar excuses from many parties across the industry: “We already have consent banners” / “This is not our problem” / “Publishers just need to get consent for us; we’re fine” / “We have sequential liability in our contracts” / “We’re covered by indemnification.”
None of this changed until right before the implementation date, and the last-minute panic and eventual mass acceptance exposed that the industry, as a whole, hadn’t thought about the GDPR with much depth at all. Thanks to the work of the GIG, the problem was solved before most of the industry even realized it existed.
2018 Rollout
Immediately after the initial launch on April 25, 2018, exactly one month before the GDPR’s application date, a few of us set out to spread the news about the TCF across a series of events throughout Europe: AdWeek London 2018, several IAB Europe webinars, and various company sponsored townhalls. One event that stands out to me was IAB Europe’s Interact Summit in May of 2018.
By this point, David Wainberg, Colin O’Malley, Ghita, Ari Levenfeld, and Matthias had drafted the TCF policies entirely from scratch. With the support of the technical proposals, our presentation was far denser than November’s, complete with example code and documentation, as well as a short proposed roadmap. The purpose of this presentation was slightly different, given the audience: our goal was not to reiterate the announcement, but instead to spur engagement and industry adoption, as compliance with the GDPR became mandatory immediately following the summit.
The reason this particular event stands out to me more than the rest is the cognitive dissonance I experienced that day. While Matthias and I were walking from one stage to another, we were confronted by a talking head from the European Commission on screen, speaking proudly about the virtues of the GDPR. Specifically, he was extolling the anticipated impact it would have on enabling European startups to more effectively compete against the leading social networks and walled gardens in the global economy, thanks to the GDPR’s requirement for data portability.
Matthias and I looked at each other and shook our heads: by forcing onerous policy requirements onto European startups, limiting the ease of cross-company collaboration, and exposing users to even more consent fatigue, the exact opposite effect would occur: market leaders who already benefitted from having the largest audiences would tighten up further, making it even harder for new market entrants to succeed. Ben Thompson put it best:
GDPR will be a pain for Google and Facebook, but it will be lethal for many of their competitors, which means digital ad revenue post-GDPR…will go to Facebook and Google. That, of course, is already happening, which is why Google and Facebook haven’t embraced GDPR; it’s not like they need the help in building a moat, but they will get it none the less. Meanwhile, the inexorable shift of users to digital services and away from traditional advertising venues — which will result in the shift in advertising dollars to digital — is not going to slow down.
[…]
This forbids what I proposed: the easy re-creation of one’s social graph on other networks. Moreover, it’s an understandable regulation: my friend on Facebook didn’t give permission for their information to be given to Snapchat, for example. It does, though, make it that much more difficult to bootstrap a Facebook competitor: the most valuable data (from a business perspective, anyways) is the social graph, not the updates and pictures that must now be portable, which means that again, thanks to (reasonable!) regulation, Facebook’s position is that much more secure.
For those interested in this class of topic, I recommend reading the Draghi report on EU competitiveness.
The Path to 2.0
As adoption of TCF v1.1 grew, so did engagement from industry. In particular, a number of large European publishers got involved, with something other than user privacy in mind: the existence of a standard like the TCF gave them an opportunity to claw back control over how their data was being used by downstream vendors.
With sufficient granularity, TCF could serve as a mechanism for publishers to restrict vendors from using user or publisher data in ways that didn’t fit that publisher’s strategic objectives. A number of notable changes were made: the consent string was expanded to signal purpose by vendor, publishers gained the ability to signal that they had not disclosed a specific purpose or requested consent on behalf of vendors, and concepts like “special features” and “special purposes” were introduced into the framework.
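Conceptually, publisher restrictions reduce to logic like the sketch below. The actual v2.0 encoding uses compact per-purpose restriction types and vendor ranges, so this is the idea, not the wire format:

```python
# Conceptual sketch of publisher restrictions -- not the TCF v2 wire format.
NOT_ALLOWED, REQUIRE_CONSENT = 0, 1

publisher_restrictions = {
    (3, 42): REQUIRE_CONSENT,  # (purpose, vendor): consent only, LI disallowed
    (7, 13): NOT_ALLOWED,      # vendor 13 may not process purpose 7 at all
}

def may_process(purpose: int, vendor: int, declared_basis: str) -> bool:
    rule = publisher_restrictions.get((purpose, vendor))
    if rule == NOT_ALLOWED:
        return False
    if rule == REQUIRE_CONSENT:
        return declared_basis == "consent"
    return True  # no restriction: the vendor's declared basis stands

print(may_process(3, 42, "legitimate_interest"))  # False
```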
One significant v1.1 holdout remained: Google. Their position in the market was unique: they were (and still are) the dominant publisher, SSP, DSP, and ad server. With DFP having a ~90% market share, DoubleClick Bid Manager having ~40% of the market, and AdX having somewhere between 40-60% of the market (given its relationship with DFP), this meant that for most RTB ecosystem participants, there would be no consent signal flowing for a meaningful chunk of transactions (and, therefore, revenue).
Everyone found themselves asking the same questions: How does one enforce a consent decision in an environment where the availability of a consent signal is inconsistent? Put yourself in the shoes of any board member or executive of a smaller player in the market: how would you decide between a stricter interpretation of the GDPR than Google’s (with significant revenue downside) vs. taking a wait-and-see approach? In an environment where these are the choices, guidance from the DPAs is rare, and enforcement had yet to provide clarity, nobody wanted to be first.
Google had a few blockers to TCF v1.x, some procedural, some policy, and some technical.
Procedurally, while they were members of IAB Europe, they did not actively engage with the TCF standards group until later in the process, after an internal approach had already taken root within Google.
Policy-wise, they initially believed that their market position allowed them to implement a much simpler, vertically integrated consent model. With control over most of the ad stack, why rely on an external framework when they could just capture user consent directly through their own publisher integrations? In this model, they could present users with a simpler UX (with a POV that a consumer should only have to provide consent for up to ~10 vendors) that enforced their position: that personalized ad targeting must rely on freely given, specific, informed consent, not the blanket legitimate interest claim that many in the market were still clinging to.
Technically, the design of TCF v1.x presented gaps that were merely operational burdens for most players but blockers for Google, and that nearly everyone agreed needed to be resolved after the 11th-hour inclusion of legitimate interest in the initial spec:
- The original DaisyBit format did not have any form of tamper-proofing built into it. There was no cryptographic signature, no reliable way to verify the chain of trust, and no standard for audit trail logging that could prove to a regulator that user consent was gathered, stored, and transmitted correctly and by which named CMP (see the sketch after this list for the flavor of signing discussed).
- More importantly, the 1.x format did not meaningfully constrain how vendors applied the legal bases available under the GDPR—specifically, the split between consent and legitimate interest. Many vendors, especially those lower in the supply chain, defaulted to claiming legitimate interest for profiling and ad personalization, even where regulatory guidance was still unsettled on whether that was acceptable. Google took a stricter approach: targeting users based on behavioral data requires unambiguous, granular, revocable consent, not just a generic legitimate interest claim buried in a privacy policy.
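Purely to illustrate the tamper-proofing gap: the kind of signing discussed might have looked like the sketch below, with each CMP publishing a verification key. To my knowledge the shipped spec never adopted payload signatures; this uses the third-party cryptography package:

```python
# Illustrative only: signing a consent payload so any downstream recipient
# can verify it originated, unmodified, from a known CMP.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

cmp_key = Ed25519PrivateKey.generate()     # the CMP's private signing key
consent_payload = b"BOEFEAy..."            # the encoded consent string

signature = cmp_key.sign(consent_payload)  # transmitted alongside the payload

# Recipients verify with the CMP's published public key:
try:
    cmp_key.public_key().verify(signature, consent_payload)
    print("payload intact")
except InvalidSignature:
    print("payload was tampered with")
```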
Given the lack of consistent guidance or a history of enforcement, consent misrepresentation was considered by many to be a significant risk, and the target on Google’s back was substantially greater than that of the rest of us.
With Google’s collaboration, the working committee resolved these gaps. In 2020, Google not only joined as a vendor but began mandating the rollout of TCF v2.0 to its publisher community, cementing the TCF as the industry standard for the transparent transmission of consent across the AdTech ecosystem.
By the end of 2019, the feeling of being alone in caring about this problem had been replaced by a sense of “interest saturation”: nearly every major player in the RTB ecosystem was engaged in the standards-setting process. For my part, I had developed a strong case of burnout on the problem. I had pivoted to tackling the next issue: Google’s intention to follow other browser vendors in sunsetting the third-party cookie and replace it with a set of then yet-to-be-proven privacy-focused AdTech primitives, now known as the Privacy Sandbox. (Google has famously (or infamously) since retreated on its plans to deprecate the third-party cookie.)
In Retrospect
A little over eight years later, it’s hard not to look back at the TCF’s launch with a bittersweet mix of pride and deep cynicism. As the ICO calls for feedback on its approach to regulating online advertising, it seems appropriate to share some of my takeaways from the process of building and launching such a complex standard:
- GDPR is a textbook example of the Cobra Effect, where a rule creates incentives that worsen the very problem it was meant to solve. Globally, lawmakers appear to consistently underestimate how rules create incentives. Without the TCF, the walled gardens would have dominated the market even further, the exact opposite of the policy’s intent.
- Neutral standards are hard when significant revenue is at stake. Any standard that might meaningfully shift revenue between market participants is likely to be complex to develop and implement. Messy technology follows messy policy.
- As always, incentives rule everything around us. In the grand scheme of things, nearly every industry participant behaved rationally throughout the process. In many ways, this outcome was predictable.
- GDPR provided an opportunity for publishers to exert control on their vendors. Before GDPR, publishers weren’t necessarily aware of the data-related activities of their vendors that were enabled by the DOM and browser privacy models. GDPR gave publishers the opportunity to enforce vendor-level purpose restrictions via TCF. This is an orthogonal concern to user privacy, and was the source of a significant amount of complexity within the standard.
- Publisher control didn’t turn out to be all that. Publishers thought they were going to disallow vendors from using data from their sites to build profiles, and they now had a way to signal that. What they then learned was that if they didn’t grant those permissions, those vendors would not monetize their inventory with personalized ads. As such, these granular controls essentially became a “Turn Money Off” feature that ended up bringing much-needed commercial transparency to the market for demand and supply alike.
- In the data economy, privacy and competition are in direct tension. Stricter privacy rules can entrench incumbents if not carefully defined and implemented. Thankfully, the European Data Protection Board explicitly acknowledges this!
- Cookie walls still aren’t it. It feels bad knowing that TCF contributed to a worsening user experience, even if it was the least-bad regulatory-required option at the time. I’d love to see lawmakers clearly communicate their ideal spot on the “seamless user experience vs. full transparency and consent” continuum so that industry can co-create a better alternative to today’s consent pop-up, which satisfies no one.
- In hindsight, the browser might be the obvious place for consent signals, but browsers haven’t earned the right to it. A neutral, user-centric store of permissions, managed by the browser, might eliminate the need for so much duplicated friction. I believe that any major browser that embraces publishers’ monetization needs while enabling AdTech to seamlessly adopt the signals required to operate in a compliant manner will be quite competitive on the user experience and publisher preference front.
- The split between ePrivacy and GDPR made everything harder. The ePrivacy Directive is over 20 years old. It was written for another era. The need to comply with both the GDPR and ePrivacy, combined with the ePrivacy Directive’s fragmented national implementations and inconsistent enforcement, created unnecessary confusion. Merging GDPR and ePrivacy into one modernized regulation could have helped significantly.
- Proactive engagement by the DPAs and the European Commission could have helped. Their unwillingness to engage industry advocates in the policy-writing and to give practical guidance during the standards-setting process slowed the entire ecosystem down. Constructive support and clarity would have accelerated adoption instead of dragging it out.
- Big systemic shifts take time. It took more than four years for the TCF to reach broad adoption! Turning off non-TCF inventory on May 25, 2018, would have wiped out huge parts of the industry overnight. Complex systems don’t respond well to shock changes; incremental, adaptable approaches are more realistic and sustainable.
- AI is about to blow it all up. The rise of AI that “gets to know you” (agents, personal copilots, and context-aware assistants, etc.) will fundamentally change the privacy debate, again. Consumers are moving from static consent choices on a website to dynamic, continuous profiling that learns and adapts over time. The frameworks built for cookies and device IDs were already fragile; layering AI-driven personalization on top of them will stress them to a breaking point. We’ll be right back in the same tug-of-war: trying to balance meaningful user control, competitive dynamics, and legal risk in a landscape whose rate of change is orders of magnitude faster and less predictable.
Regardless of what you think of the GDPR or the TCF, consumers deserve better, and the only way this happens is through increased collaboration across industry and policymakers.
Acknowledgements
The intersection of commerce, technology, and privacy will always be a complex, messy, and uncomfortable space. I have a lot of respect for the many professionals who call this space their home and can gracefully balance a high level of technical expertise with legal tradecraft, commercial aptitude, and tasteful collaboration with their peers who are often also competitors.
It takes a village to turn such a complex undertaking into the industry standard. None of this would have been possible without those early GIG participants: Alice Lincoln, Julia Shullman, Chris Hartsuiker, Ghita Harris-Newton, Somer Simpson, Heinz Baumann, Chris Paterson, and, most importantly, Townsend Feehan, whose steadfast commitment and resolve through significant industry criticism made the TCF possible.
I’d be remiss not to thank the team at MediaMath for being laser-focused on doing what’s right for the industry and for the consumer. Charlie Simon was instrumental in MediaMath’s role as standards advocate and in mustering the internal resources required to follow through on implementation. Even more importantly, his prominent role in the framework signals working group provided valuable feedback that resulted in many improvements to the framework. Alice Lincoln’s lead role and steady hand with the GIG paved the way for industry to pile on during the months leading up to GDPR.
A special thanks to Matthias for helping to review this post and for being a phenomenal design partner-turned-friend, especially during those stressful days in 2017. I sincerely hope he writes a companion post to this, as his involvement, from the policy perspective, is much more instructive than my retrospective missive. Thanks also to James Rosewell, who, after a shorter retelling of the above over a Kölsch and Currywurst at the IAB Tech Lab dinner at DMEXCO 2024, encouraged me to write this all down and persisted with the politest of reminders over the next nine months.
Believe it or not, AI did not write this post. Perhaps if it had, it’d be shorter. Sorry about that. Thanks to Lauren Fritsky for helping me come off like a half-competent writer.