
Optable Blog

Learn about the modern advertising landscape and how Optable's solutions can help your business.


To work the way they should, data clean rooms need to bring a fluid, real-time, embeddable infrastructure to data collaboration. And at the heart of such an offering, there needs to be an API that allows any client to deploy the data clean room approach across any inventory, any type of audience data and any third-party cloud provider.


In this way, any third-party application or platform should be able to benefit from a data clean room by embedding its API for secure, privacy-preserving data collaboration. 


This in turn enables a complete digital media workflow via API. Taking Optable’s service as an example, it looks like this: 


  1. Collecting data at the edge. The API is wrapped in our SDKs for iOS, Android and web to enable this, but it can be done for virtually any other platform.
  2. Creating audiences from data onboarded in the platform, whether it’s from traits or available identifiers.
  3. Enriching a device graph by feeding identifier associations and user attributes.
  4. Creating a data clean room and inviting a partner to match with an audience.
  5. Executing a match with the partner by using our open-source matching library and command-line utility that implements various PSI protocols. 
  6. Ultimately, enabling analytics and straight-line activation, both of which are also available via API. 

One of the best applications of a data clean room API is in combination with a customer data platform (CDP). An API can be used to properly leverage audience data housed in a CDP, making this data actionable for activation and measurement with third parties. 


Another good example involves walled garden data and inventory. Whether it’s for CTV, audio or traditional web formats, an API can be used to effectively drive advertiser performance anchored in real customer data. 

Ultimately, the API is here to make it easy to leverage the data clean room approach in any third-party platform or application. 

During lockdown, with Covid raging outside, those with the opportunity to do so turned to their gardens, treating them as sanctuaries, lavishing them with care and attention and cultivating what they could. 

And at about the same time, the ongoing eradication of public identifiers was inspiring a comparable new strategy for publishers. Edged out of the third-party-data-driven world they knew - but which had never really played to their strengths - they busied themselves creating their own walled gardens, their own content fortresses.

What have they grown? More personal data, more insights and a much deeper connection to their audiences - a connection anchored in consent. Publishers’ first-party data is private, relevant, hugely detailed and engaging, and so, like anything built with care and attention, these sanctuaries have a very real value to those they invite in.

Your data meets mine in a data clean room

First-party publisher data is manna for brands, and especially those who have been carefully tending their own data gardens. Google has found that brands using their own first-party data for key marketing functions achieved up to 2.9X revenue uplift and 1.5X increase in cost savings.

When brands work with publishers to mix their data and build relevant segments and publisher cohorts, the effect is equally compelling: The Guardian last year reported a 65% higher than average brand lift for brands using its first-party data. Wherever you look, the effect of first-party publisher data is emphatic.

However, at every step, old habits need to be questioned. For publishers, the best way to amplify the value of that data has always been to connect it to brands, but for all the obvious reasons, that can’t happen over public programmatic pipes anymore. 

Instead, the most efficient, effective, privacy-safe way for publishers to make their private data available for analytics and activation is through a new, proper, data clean room-enabled infrastructure. 

The proportion of publisher inventory that transits through clean rooms - what we call clean room media - is growing, as brands and publishers realise in unison that their old channels are drying up and new ones are needed.

We’ve been here before - only different

In fact, the shift is uncannily reminiscent of the old programmatic revolution - the very architecture the new privacy-conscious world is now working to replace. Just like clean room media, programmatic started small and ended up huge, as the scale of the opportunity - and the opportunity cost of ignoring it - became apparent. 

But clean room media is many leaps ahead of the old programmatic free-for-all, in that it allows publishers to easily monetize their newly available audience data in a safe, privacy-preserving way. And it gives brands bespoke data - better than anything they might have found in the old marketplace. 

So brands get what they need: more precision and performance through exclusively available audience data, while leveraging the data they’ve been carefully collecting and enriching in their own CDPs.

Publishers, meanwhile, get the reward for the deep, private, inimitable relationships they have developed with their users.

And, crucially, in this new ecosystem, consumers get more control and more privacy protection than ever before.

Exponential growth of clean room media

One publisher that uses Optable has seen its share of clean room media increase six-fold over the past few months, and it’s expected to continue growing exponentially.

So, just because programmatic is yesterday’s technology does not mean that the technology of tomorrow shouldn’t follow its trajectory.

Before outstaying their welcome, third-party cookies gave us the very worthwhile expectation of openness, interoperability and ease of use - all attributes of clean room media.

In the same way, tomorrow’s data solutions need to echo the revolutionary, problem-solving qualities that made programmatic the success it was - only with the addition of privacy, exclusivity, a better deal for brands and publishers and a renegotiated consumer contract.

As clean room media continues to grow as a category, it’s exciting to see more and more publishers and brands adopt this new way of transacting.

In today's data-driven world, concerns about privacy and data security have never been more critical. k-Anonymity is a privacy concept and technique that plays a pivotal role in safeguarding sensitive data. Let’s explore what k-anonymity is and how it‘s used to protect personal information.

What is k-Anonymity?

k-Anonymity is a privacy model designed to protect the identities of individuals when their data is being shared, published, or analyzed. It ensures that data cannot be linked to a specific person by making it indistinguishable from the data of at least 'k-1' other individuals. In simpler terms, k-anonymity hides personal information within a crowd, making it difficult to single out a particular individual. 

The 'k' in k-anonymity represents the minimum number of similar individuals (or the “anonymity set”) within the dataset that an individual's data must blend with to guarantee their privacy. For example, if k is set to 5, the data must be indistinguishable from at least four other people's data.

How Does k-Anonymity Work?

To implement k-anonymity, data must be generalized to make it less identifiable, while ensuring that each record’s identifying attributes are identical to those of at least ‘k-1’ other records. This is commonly done through two methods:

  1. Generalization: Data attributes are generalized to broader, less specific categories. For example, an individual's age may be generalized from their precise age to an age range, like 25-34.
  2. Suppression: Certain attributes may be entirely removed or suppressed if they are considered too revealing. For instance, exact dates of birth or home addresses may be suppressed to protect individual identities.
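As a sketch of how these two methods combine, the Python snippet below generalizes ages into decade bands, suppresses the tail of each ZIP code, and then checks whether every resulting record blends with at least k-1 others. The bucketing rules and field names are illustrative assumptions, not part of any standard:

```python
from collections import Counter

def generalize(record: dict) -> tuple:
    """Generalize quasi-identifiers: bucket the age into a decade band
    (generalization) and mask the last two ZIP digits (suppression)."""
    decade = record["age"] // 10 * 10
    return (f"{decade}-{decade + 9}", record["zip"][:3] + "**")

def is_k_anonymous(records: list, k: int) -> bool:
    """True if every generalized quasi-identifier tuple appears at least k times."""
    counts = Counter(generalize(r) for r in records)
    return all(c >= k for c in counts.values())

people = [
    {"age": 27, "zip": "10001"},
    {"age": 29, "zip": "10012"},
    {"age": 31, "zip": "10003"},
    {"age": 34, "zip": "10055"},
]

print(is_k_anonymous(people, k=2))  # True: each generalized tuple covers 2 people
print(is_k_anonymous(people, k=3))  # False: no bucket reaches 3 people
```

This is exactly the kind of check a data owner might run before releasing a dataset: if the test fails, attributes must be generalized further or rows suppressed until every bucket reaches size k.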

How are Marketers Using k-anonymity?

Online retailers use k-anonymity to protect customer data while analyzing purchase histories and preferences to enhance their services and recommendations. 

For example, individual users can be associated with data cohorts based on their interests on their mobile device. An advertiser can then target individuals in specific cohorts. This way, the advertiser does not learn any personally identifiable information (PII) and only learns that a specific individual belongs to certain cohorts. And as long as the cohorts are k-anonymous, they protect users from re-identification, especially for large values of k.

A drawback to using k-anonymity is that sometimes revealing just the cohort a user belongs to can leak sensitive information about a user. This is true, especially when the cohorts are based on sensitive topics such as race, religion, sexual orientation, etc. A simple solution to this problem is to use predefined and publicly visible cohort categories, such as in Google Topics.

In any case, cohorts can still be combined or correlated and used to re-identify users across multiple sites. That said, k-anonymity is often combined with other privacy protections to further reduce the probability of re-identification.


Securing Ad Tech: The Role of Secure Computation in Data Privacy

In an era where data is the new gold, ensuring its privacy and security has never been more critical. Secure computation is a powerful branch of cryptography that allows companies to perform computations on sensitive data without revealing the actual information being processed. In this blog, we’ll explore what secure computation is and how it’s used to protect consumer data.

What is Secure Computation?

Secure computation is a cryptographic technique that enables multiple parties to jointly compute a function over their individual inputs while keeping those inputs private. This is known as "encryption in use" because the underlying data remains encrypted while it is being processed on remote servers or in the cloud.

The primary goal of secure computation is to ensure the confidentiality, integrity, and privacy of data throughout the computation process. It accomplishes this without relying on a trusted third party, making it particularly valuable in scenarios where data sharing and privacy are paramount. This means that two or more parties can collaborate on data analysis or computations without exposing their sensitive data to one another.

How are Media Companies and Brands Using Secure Computation to Collaborate?

Secure computation is applied in a range of scenarios where privacy and data security are paramount. Naturally, secure computation is a great fit for data sharing and collaboration among publishers and advertisers.

Both publishers and advertisers can benefit from a type of secure computation called Private Set Intersection (PSI) protocol. It allows two or more parties to compute the intersection of their private datasets without revealing any information about the records not in the intersection. Optable, for instance, provides an open-source matching utility that allows partners of Optable customers to securely match their first-party data sets with them using a PSI protocol.
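To make the idea concrete, here is a toy private set intersection based on commutative encryption (Diffie-Hellman-style modular exponentiation). This is a minimal sketch of the general PSI idea only; it is not Optable’s open-source matching utility, and real protocols use hardened constructions such as elliptic-curve groups and salted hashing:

```python
import hashlib
import secrets

P = 2**127 - 1  # a Mersenne prime; the multiplicative group mod P is our demo group

def to_group(identifier: str) -> int:
    """Hash an identifier (e.g. a normalized email address) into the group."""
    digest = hashlib.sha256(identifier.lower().encode()).digest()
    return int.from_bytes(digest, "big") % P or 1

def encrypt(element: int, key: int) -> int:
    """Commutative 'encryption': modular exponentiation with a secret key."""
    return pow(element, key, P)

# Each party picks a private key and never reveals it.
key_a = secrets.randbelow(P - 2) + 2
key_b = secrets.randbelow(P - 2) + 2

advertiser = {"alice@example.com", "bob@example.com"}
publisher = {"bob@example.com", "carol@example.com"}

# Round 1: each party encrypts its own hashed set once...
adv_once = {encrypt(to_group(e), key_a) for e in advertiser}
pub_once = {encrypt(to_group(e), key_b) for e in publisher}

# Round 2: ...and the other party encrypts the exchanged sets a second time.
adv_twice = {encrypt(x, key_b) for x in adv_once}
pub_twice = {encrypt(x, key_a) for x in pub_once}

# Because exponentiation commutes, equal identifiers collide after both rounds,
# so either side can compute the overlap without ever seeing raw identifiers.
overlap = adv_twice & pub_twice
print(len(overlap))  # 1: only bob@example.com appears in both sets
```

Neither party ever sees the other’s plaintext identifiers, only doubly-encrypted values, which is the core property PSI protocols provide.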

How does secure computation work?

Secure computation can be implemented in two main ways: 1) via pure cryptography, using Fully Homomorphic Encryption (FHE) or Secure Multi-Party Computation (MPC); or 2) through secure hardware, using Trusted Execution Environments (TEEs).

Fully Homomorphic Encryption

FHE is an incredibly powerful tool for protecting data privacy in the digital age. It enables analytics to be performed on encrypted data without ever having to decrypt it. The ad tech industry can certainly benefit from full-scale analytics without the risk of exposing personally identifiable information (PII).

While FHE has the potential to revolutionize the advertising ecosystem, it is unfortunately quite computationally intensive and limited in its current capabilities. Therefore it is not yet ready for widespread adoption. There is ongoing research to make FHE more efficient and functional in the future.

Secure Multi-Party Computation

MPC is a form of secure computation that uses a cryptographic protocol to enable two or more businesses with private data to perform a joint computation while keeping their individual inputs private. Each entity only learns what can be inferred from the computation result.

Often, the secure computation part is outsourced to two helper servers. Before data leaves a user's device, it is encrypted to both helper servers, which decrypt it partially and perform computation on the partially encrypted data. Neither server is ever able to see the original user data.
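A common way to instantiate this two-helper-server pattern is additive secret sharing, sketched below. This is an illustrative simplification: real deployments layer transport encryption, authentication and often differential privacy on top.

```python
import secrets

MOD = 2**64  # all arithmetic happens in a fixed ring

def share(value: int) -> tuple:
    """Split a value into two additive shares; each share alone is uniform noise."""
    r = secrets.randbelow(MOD)
    return r, (value - r) % MOD

# Each user's device splits its input before anything leaves the device.
user_values = [3, 5, 10]
shares = [share(v) for v in user_values]

# Helper server 1 only ever sees the first shares; server 2 only the second.
server1_sum = sum(s[0] for s in shares) % MOD
server2_sum = sum(s[1] for s in shares) % MOD

# Combining the two partial sums reveals only the aggregate, never the inputs.
total = (server1_sum + server2_sum) % MOD
print(total)  # 18
```

Each server’s view is a uniformly random share, so neither can recover any individual’s value; only the combined partial sums reveal the aggregate result.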

MPC protocols provide a high level of security but come with a tradeoff: they require sophisticated cryptographic operations, which incur higher computation and communication costs. As a result, the technology tends to be tailored to specific tasks and can become very expensive at scale.

How Does Optable Use MPC?

In the past year, Optable has been a leading contributor to the IAB Tech Lab’s Open Private Join and Activation (OPJA) standard, which enables interoperable, privacy-safe ad activation based on PII data. At the heart of OPJA is a secure match using a PSI protocol that allows advertisers and publishers to match their PII data. One of the ways to perform this match is using MPC — the respective clean room vendors act as the MPC helper servers, which jointly compute the overlap without ever learning the identifiers not in the overlap.

In an age where data privacy is a growing concern, secure computation emerges as a vital technology that plays an important role in helping companies comply with data protection regulations while still fostering innovation and cooperation among business partners.

When we launched the company earlier this year, in the middle of a global pandemic, our thesis was fairly simple:

  • As privacy becomes a feature across the board, both tech platform changes and increased regulation will make a huge impact on ad tech
  • This will create a movement where publishers and advertisers (as well as a slew of intermediaries) will have to actively ask users to share some data about themselves and get permission to use it
  • The infrastructure that enabled data-driven advertising to function was built on cookies and IFAs, and it’s about to disappear
  • As a result, this data-sharing infrastructure is moving underground where collaboration will continue, but data will be shared using secure, privacy-preserving protocols

All of this will result in a gradual onset of confusion and chaos, but ultimately the ecosystem will be better off. The mess created by the programmatic revolution will be replaced by less wasteful, more ethical, more secure ways of dealing with ads.

It starts with three core functions that have to be satisfied by a new generation of customer data management technologies:

First, we need to deal with the identity crisis, with third-party cookies and IFAs slowly crumbling. We need a way to collect data that is respectful of the user and backed by consent, yet still uses identity data at the core. Without third-party cookies and IFAs, this will lead to a translation layer: from personal profiles stored by the publisher on the user to an addressable cohort across various touch-points (open web as much as mobile, CTV and audio). In addition to using local storage for data collection, there is also an opportunity to make use of first-party cookies, just like it was in the good ol’ days.

Second, while we do ingest data from CDPs and DMPs, assembling audiences and preparing them for anonymized activation using existing ad tech infrastructure is part of the new way of working with audience data. This activation can happen through ad servers, ad exchanges or other content personalization technologies.

And third, we need better ways to transact based on audience data. When it comes to advertising, the value of data is amplified when it transits between partners. Cookie- and IFA-based transaction models created a lot of trust issues, which prevented broader use of this data. The new generation of data management technologies will be decentralized: partners will run their individual instances of the platform and use secure multiparty computation protocols to collaborate. This is a bit complicated at first, but ultimately this layer will enable the fundamental fabric of how ads are targeted and measured.

That, in essence, is what we do.

A mere 6 months after launching the company, we are starting to roll out our product to customers. It’s quite difficult to describe what the product IS, but we feel that calling it a Data Connectivity Platform is the best way to describe the core value that we’re bringing.

Having pre-seeded the company ourselves, we are also starting a fundraising process for our seed round. Our team counts 9 people now, and we are excited to keep growing it.

The crisp February air of Toronto welcomed a select group of media & advertising thought leaders to Optable's exclusive summit. The agenda promised deep dives into data strategy, privacy's impact, and navigating the ever-evolving media landscape. And it definitely delivered.

Data Collaboration Takes Center Stage

The opening panel, "How Publishers & Advertisers Are Using Data to Build Better Ad Campaigns in the Age of Privacy," kicked things off with a bang. Data collaboration emerged as the undeniable hero, bridging the gap in a fragmented ecosystem. Panelists from La Presse, The Globe & Mail, and Advance powered by Loblaw discussed their shared journey: adapting data strategies and wielding identity solutions, all while dancing around the ever-changing privacy regulations. The panel was moderated by Optable's own Ioana Tirtirau, Head of Customer Success, who helped the crowd glean actionable insights that could be implemented within their own businesses.

One key takeaway? It's not just about the tech. "The future of advertising lies in finding the sweet spot where data insights combine to create a better experience for the audience and ultimately create business growth. Data is the interface with which we're able to create better advertising partnerships," said one publisher exec. The audience couldn't have agreed more, recognizing the need for meaningful campaigns that respect customer privacy and provide real insights into customers’ wants and needs.

Privacy: The Driving Force (and Opportunity)

Deloitte's fireside chat shifted gears, focusing on the elephant in the room – privacy. Experts dissected the seismic shifts caused by regulations and platform moves, highlighting not just the challenges but also the opportunities. "CCPA, GDPR, Law 25, cookie deprecation – it's all about building trust," emphasized a Deloitte speaker. "And trust generates loyalty & engagement, which is the real gold in this game."

Beyond Trends: The Human-Centric Shift

The summit wasn't just about buzzwords and tech. It was about understanding that data and privacy are inherently human-centric. At its core, advertising is about connecting with people, and in the privacy age, that means that collaboration is key.

The cocktail hour wasn't just a networking opportunity; it was a testament to the energy and ideas bubbling up from the room. From Optable's own data experts to seasoned ad veterans, everyone recognized that the future isn't pre-programmed – it's in the hands of innovative minds who can harness data, respect privacy, and ultimately, rethink and rearchitect the media & advertising ecosystem to be more impactful for audiences and more sustainable for businesses.

Key Takeaways:

  • Data collaboration is growing rapidly, with the major cloud ecosystems acting as stewards.
  • Privacy regulations create challenges, but also unexpected opportunities.
  • Third-party cookies are officially on their way out, creating a forcing function to rethink our ecosystem.

Optable's 'State of Data Collaboration' in Toronto wasn't just a glimpse into the future; it was a blueprint for navigating it. Armed with actionable insights and a renewed focus on the human element, data & advertising professionals left the venue empowered to redefine success in the privacy-first era.

The need to safeguard sensitive data and ensure the confidentiality of transactions has never been more critical. The Trusted Execution Environment (TEE) emerges as a pivotal technology in the demand for increased data privacy. In this blog, we will delve into the world of TEE, understand what it is, and explore its applications as a privacy-enhancing technology.

What is a Trusted Execution Environment?

A TEE is a secure and isolated area within a computer or mobile device’s central processing unit (CPU). It’s designed to execute code and processes in a highly protected environment, ensuring that sensitive data remains secure and isolated from all other software in the system. It achieves this level of security via special hardware that keeps data encrypted while in use in main memory, so that any software or user, even one with full privileges, sees only encrypted data at any point in time.

How Does TEE Work?

Using special hardware, TEEs encrypt all data leaving the CPU for main memory and decrypt any data returning from it before processing, allowing code and analytics to operate on plaintext data inside the protected boundary. This means that TEEs can scale very well compared to pure cryptographic secure computation approaches.

TEEs also offer a useful feature called remote attestation, which lets remote clients establish trust in a TEE by verifying the integrity of the code and data loaded into it, and then establish a secure connection with it.

How Can Media Companies Benefit From TEEs?

TEEs are an attractive option for media companies who want to safely scale their data operations in a secure environment. TEEs offer the following benefits:

  • Tamper-Resistance: The hardware-based security of TEE provides tamper-resistant execution of code.
  • Secure Communication: Remote attestation provides a way to establish trust between TEEs and remote entities, enabling secure communication.
  • User Trust: TEE builds trust among users, assuring them that their data and transactions are protected.

Now, let’s look at a real-world example of data collaboration using a TEE. In our last blog post, we saw that one way to perform the secure matching in the IAB’s Open Private Join & Activation proposal is to use an MPC protocol. Another way is to use a TEE. With a TEE, only one helper server is involved. First, the advertiser and the publisher establish trust in the TEE via remote attestation. Then, they each forward their encrypted PII data to the TEE server, which decrypts it and performs the match on plaintext data.

TEEs come with their own privacy risks. They are vulnerable to side-channel attacks, such as memory access pattern attacks, which can be exploited to reveal information about the underlying data. Adding side-channel protections can help counter these attacks, but doing so significantly increases the computational overhead. Even so, TEEs still scale well compared to purely cryptographic approaches.

In an industry facing ongoing scrutiny over data privacy concerns, TEEs are becoming a standard. This PET technology will continue to evolve and we expect to see it playing an increasingly vital role in data collaboration. 

Optable Core Values

We value diversity and inclusion and believe that the sum of different cultures, opinions and beliefs creates a stronger team that will deliver great results. A group of people with the desire to succeed. All pulling together in the same direction. Knowing that every single person has your back. With respect, trust, and the knowledge that any single one of our teammates is capable of taking the lead at the right time. With this attitude we all win. And when we don’t, we try again.  Because we learn quickly and don’t give up. 

Empathy

Showing empathy towards each other is probably the best way to get the most out of any given team. Every day brings new challenges but also new opportunities to reconsider how we see and value our colleagues. Empathy also helps us focus on listening. It forces us to reflect on our actions and words and it brings us closer together.

Trust

Building trust in our relationships is our promise. We are all about transparency in communication and actions. We are honest, we own our role, decisions, actions, and their consequences. We strive for an environment where we can rely on each other.  Trust is earned. And we never, ever make fake promises.

Innovation

Challenging one’s own thinking and having the mindset to strive for continuous improvement is what innovation means to us. We encourage curiosity, challenge assumptions, take calculated risks,  and anticipate changes. Failure is welcomed. It’s what allows us to learn and generate new ideas while enabling us to embrace changes and drive faster towards success.

Enthusiasm

Promoting excellence in the workplace is what enthusiastic employees do. It’s infectious, and an example for those around them to follow. It’s the core understanding that energy comes from energy so we recognize and reward those brave enough to smile in the face of challenge. We play to win as a team and lift everyone’s spirits to bring joy, satisfaction, and results.

Bias for Action

Taking initiative and embracing change help create a successful business. We don’t spend too much time overthinking decisions. We prefer acting on possible solutions instead of waiting for the perfect one. If it needs to get done, we identify solutions and start building. We are not perfectionists, but we work relentlessly to improve.


Open Private Join and Activation (OPJA)

Today the IAB Tech Lab is publishing version 1.0 of the Open Private Join and Activation (OPJA) clean room interoperability standard. Throughout the past year, together with a growing number of industry collaborators and members of the Tech Lab’s Privacy Enhancing Technologies (PETs) and Rearc Addressability working groups, our team played a leading role in developing OPJA with the goal of enabling interoperable, privacy-safe ad activation based on PII data.

Beyond our work on the initial proposal, we have several broader goals with OPJA:

  1. We aim to define an open and standard set of requirements for a type of clean room operation that enables an advertiser and a publisher to match sensitive datasets containing user PII, such as email addresses or phone numbers, while limiting information exchange between parties as much as possible.

  2. We want to develop and promote the adoption of standard mechanisms in OpenRTB that enable ad targeting of OPJA-matched user ad impressions, using any compatible SSP or DSP.

  3. We want to provide open reference implementations that enable OPJA while adhering to the stated requirements.

  4. We want both to support OPJA’s encrypted labels as a way of securely activating matched audiences from Optable, and to interoperate with other vendors based on OPJA’s secure matching mechanisms.
While we think that there is room for clean room vendors and collaboration platforms to offer their own proprietary spin on the activation use case (many already do), we’re hoping that they will make an effort to evaluate and align their implementations to better adhere to OPJA, and we intend to make it easy for them to do so.

To achieve our goals, it was imperative to agree on an independently trustable manner in which user data can be matched and activated in the multi-vendor clean room setting.

Doing this work in the open is essential, as it ensures that it is widely accessible and that any vendor can contribute ideas and review the proposed protocols and technologies. Open-source promotes transparency, collaboration, and inclusiveness in the development process. We believe that providing a common foundation that anyone can access, modify, and contribute to is essential to achieving interoperability between all vendors, instead of a select few.

Why Activation?

We decided to focus our initial interoperability standards efforts on the activation use case not only because it is a frequently encountered use case in industry, but also because we have noticed confusion regarding the extent to which user information is exchanged between parties that enable the use case in proprietary ways today.

On the surface, activation of overlapping audiences matched using a clean room is straightforward. Consider the case of an advertiser with a list of customers that wants to display ads to those customers when they are interacting with a publisher’s websites or applications. If users have provided personally identifying information, such as their email address, to both the publisher and advertiser directly, then the advertiser and publisher can compare datasets in a clean room in order to construct an audience of overlapping users. Here’s a Venn diagram illustrating the operation:

While seemingly simple on the surface, when it comes to the sharing of information associated with individual users, there are several subtle but material differences that may arise when such an operation is performed in practice. Notably, what new user information could the advertiser and publisher parties learn as a result of performing the match and targeting operation? Will the advertiser be able to track which of its individual customers are also browsing the publisher’s websites? And will the publisher learn which of its registered users are also the advertiser’s customers?

To answer such questions, a standard set of security and privacy design goals, input and output requirements, and clear documentation regarding the extent to which private user information is exchanged between parties when enabling the ad activation use case were all elaborated and made part of the OPJA specification. Ultimately, our goal with OPJA is to enable ad targeting on overlapping users without the parties leaking user information to each other. This is not only good for end user privacy, but it also prevents data sharing that could be exploited by competitors.

Raising the Privacy Bar

A defining characteristic of clean rooms is their potential to limit the scope of the processing of user data controlled by multiple parties. A simple example of this in practice is the construction of an aggregate report describing the intersection of two audiences originating from separate parties. In such a report, the joining, grouping, aggregation, and statistical noise injection can all be performed in a data clean room, thus preventing either party from learning anything about the other party’s data, other than what is included in the prescribed report.

This limiting capability of data clean rooms is inherent in the activation matching operation prescribed by the OPJA specification. In OPJA, a secure match is performed in order to determine which individual users are in the intersection of audiences originating from an advertiser and a publisher. Rather than the list of matched users being shared with either party, the presence or absence of each user in the intersection is encoded in the form of a label and is then encrypted. These encrypted user labels are shared with the publisher, who cannot decrypt them but who is able to insert them into ad requests. Ad requests are processed by ad tech (SSPs and DSPs), and only the advertiser’s designated DSP can decrypt corresponding match labels, enabling the DSP to make decisions on whether and how much to bid for the opportunity to show an ad. Critically, PII such as email addresses or phone numbers are never shared or transferred in ad requests, or outside of the match operation.
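The encrypted-label idea can be sketched with a toy construction. The snippet below is not the OPJA wire format; it only illustrates the property that the publisher can carry an opaque token in ad requests while only the holder of the DSP key can read the underlying match label. The HMAC-derived pad here is a stand-in for a real authenticated encryption scheme:

```python
import hashlib
import hmac
import secrets

DSP_KEY = secrets.token_bytes(32)  # known only to the advertiser's DSP

def encrypt_label(label: bytes, key: bytes) -> bytes:
    """Toy label encryption: XOR with an HMAC-derived pad, nonce prepended."""
    nonce = secrets.token_bytes(16)
    pad = hmac.new(key, nonce, hashlib.sha256).digest()[:len(label)]
    return nonce + bytes(a ^ b for a, b in zip(label, pad))

def decrypt_label(token: bytes, key: bytes) -> bytes:
    """Recompute the pad from the nonce and undo the XOR."""
    nonce, body = token[:16], token[16:]
    pad = hmac.new(key, nonce, hashlib.sha256).digest()[:len(body)]
    return bytes(a ^ b for a, b in zip(body, pad))

# The match operation emits one encrypted label per user: b"1" = in the overlap.
token = encrypt_label(b"1", DSP_KEY)

# The publisher forwards the token in ad requests but cannot read it;
# the DSP decrypts it and decides whether to bid.
print(decrypt_label(token, DSP_KEY))  # b'1'
```

Because the token is indistinguishable from random bytes without the key, intermediaries handling the ad request learn nothing about whether the user matched.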

Equally important is that thanks to label encryption, OPJA allows the hiding of information about which individual users are in the audience intersection from both the advertiser and the publisher. This reduces data leakage between advertisers and publishers, and enables remarketing without requiring user tracking. Fundamentally, it’s an approach that adheres to the data minimization and purpose limitation principles of privacy by design.

Privacy Enhancing Technologies

OPJA outlines two approaches enabling the matching of user PII data in the multi-vendor setting, and both are based on Privacy Enhancing Technologies (PETs). The first is a purely software-based, delegated private set intersection. This method enables the comparison of encrypted datasets using commutative encryption, without decrypting the data. The delegated helper server cannot decrypt the match data and is used merely to execute the data comparison and generate encrypted data for activation. Additional trust in the helper server could be provided through hardware-provided remote attestation.

The second approach is based on hardware-provided Trusted Execution Environments (TEEs). This method ensures that match data is encrypted exclusively for the secure processing hardware provided by a helper server.

The use of PETs offers a robust foundation from which trust between vendors regarding how user data is matched can be achieved. OPJA matching requires that the data remains protected with encryption during processing, through a combination of cryptography software and TEE hardware. This greatly reduces the number of things that vendors and service providers need to trust each other with.

OPJA’s matching approaches are also not theoretically limited to a single cloud or infrastructure environment. These characteristics make PET-based approaches great candidates for matching interoperability in the multi-vendor setting.

Learn More

You can read the OPJA specification as well as the IAB Tech Lab Data Clean Room Guidelines here. Additionally, here's the Tech Lab's latest announcement on the 1.0 spec release.

For a fun introduction to OPJA, check out Digiday’s excellent WTF is IAB Tech Lab’s Open Private Join and Activation?

For a simple walkthrough on how commutative encryption can be used to enable double blind matching (not specific to OPJA), have a look at the little explainer here.

Integrate

If you’re a data or ad tech vendor (SSP, DSP, ad server) interested in interoperating with the Optable data collaboration platform using OPJA, we’d love to hear from you. Drop us an email.

Finally, it’s our hope that OPJA is a catalyst for future open proposals associated with measurement, audience modelling, and other use cases that involve the sharing of sensitive user data between advertisers and publishers.
