
Episode 35: The Architecture of Privacy-First Marketing Data – Warehousing & Normalization

Hosted by Aaron Burnett with Special Guest Michelle Jennette

In this episode of The Digital Clinic, we explore the architecture behind HIPAA-compliant data warehouses with Michelle Jennette, BI Specialist at Wheelhouse DMG, as she walks us through the normalization processes and pipeline structures that enable privacy-first marketing analytics. 

Michelle shares the insights we’ve gained building data infrastructure for healthcare and other regulated industries: what works, what doesn’t, and why managing platform APIs in-house creates more problems than it solves.

From extraction tools and transformation layers to UUID implementation and quality assurance protocols, this episode provides a clear roadmap for unifying multi-platform marketing data, moving beyond manual reporting, and creating a reliable foundation that protects patient privacy while powering smarter optimization. Whether you’re navigating attribution discrepancies, granularity mismatches, or custom conversion tracking, Michelle’s guidance helps turn data complexity into clarity and performance. 

As a bonus, check out our Field Guide to Data Normalization for First-Party Data Warehouses.


Why Data Warehouses Enable Complete Customer Journey Visibility 

Aaron Burnett: Wheelhouse provides performance marketing in privacy-first industries, where data restrictions exist that don’t apply in industries less affected by privacy regulations. Data normalization supports a data warehouse that we maintain to serve our clients. Why is it important to have a data warehouse if you’re working in privacy-first industries? What do we enable for our clients and for ourselves?

Michelle Jennette: Data warehouses allow us to keep all of our customer data secured in one place with strong security measures. We keep every client we have separated. It allows us to keep historical data all in one place, and it allows us to unify our data rather than leaving it wherever else it might live, such as in the platforms themselves. This kind of locks it up behind a gate, into a warehouse that we govern with different security measures for each client.

Aaron Burnett: And I’ve heard you say that having a warehouse enables us to bring all of this platform data together and see the entire user journey rather than just seeing through the keyhole with each of the platforms. So, can you describe our data warehouse, our data pipeline, and the reporting and insights that they enable? 

Michelle Jennette: What we do is we have all of our digital advertising platforms, right? We have Google, we have Facebook, we have Reddit—whatever digital advertising platforms you may have. We extract that data using Adverity. It’s the extraction tool we’ve chosen. We had used Supermetrics in the past, but because of a couple of nuances with Supermetrics, we decided to make the switch to Adverity, which we’ve been really happy with over the years. So, we extract all of that data by building data streams.

Each of these data streams is going to look a little bit different on each of the platforms. So, in Google, we have to pull two data streams. We have a campaign-level data stream for Performance Max campaigns, but all other ad types are going to be pulled at the ad level. So, from there, we have our data streams, and the data extracts into Adverity. We put it into AWS—the warehouse that we use, Redshift, which we call Compass here at Wheelhouse—and the raw data is then loaded into Compass.

From Compass, we then pull it into a program called DBT, which is a transformation tool that essentially lives right inside our warehouse. We can do transformations right there, and that’s where we do a lot of our transformations, or normalization, of data. This can be a lengthy process when you have more complex clients and you’re trying to unionize all of the data across all sorts of different platforms, especially when you’re looking at an ads platform like Reddit versus LinkedIn versus their Salesforce data or their CRM data.

We do the normalization in DBT, and then we reload final reporting view tables back into Redshift Compass, where we are able to then pick up finalized tables and put them into whatever visualization tool that we are interested in using. So, a lot of times we use Power BI, Looker, Tableau, or QuickSight. That is where we can then build visualizations to deliver insights for our clients and allow our digital advertising team or SEO teams to make optimizations to better serve the client. 
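[Editor’s note: as a rough sketch of the kind of unioned reporting model Michelle describes, here is what a simple DBT mart that unions normalized per-platform staging models might look like. The model and column names (stg_google_ads, and so on) are hypothetical, not Wheelhouse’s actual code.]

```sql
-- Hypothetical mart: union normalized per-platform staging models into one reporting table
with unioned as (

    select 'google' as source, report_date, campaign_id, campaign_name, spend, impressions, clicks, conversions
    from {{ ref('stg_google_ads') }}

    union all

    select 'meta' as source, report_date, campaign_id, campaign_name, spend, impressions, clicks, conversions
    from {{ ref('stg_meta_ads') }}

    union all

    select 'reddit' as source, report_date, campaign_id, campaign_name, spend, impressions, clicks, conversions
    from {{ ref('stg_reddit_ads') }}

)

select * from unioned
```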

Aaron Burnett: That’s great. So, you described that in a way that sounds very straightforward and simple. I know, because I’ve been along for the ride, that it hasn’t been straightforward and simple. So, we bring in data from the various advertising platforms, and you listed several of them, from analytics, sometimes from Search Console, sometimes from enterprise search data platforms, sometimes via API connections to CRM data. And sometimes we’re also uploading first-party data that we get from our clients or audience data from HIPAA-compliant data brokers. 

That’s a lot of complexity because each of those sources, as you alluded to, has different data structures. They have different naming conventions. A conversion in Meta is not the same as a conversion in anything that Google does. So, what challenges arise from that? And this sort of gets us to the notion, the concept of data normalization. Why is data normalization so critical, and how can this go wrong? 

Navigating the Complexity of Data Normalization 

Michelle Jennette: There’s a lot that can go wrong with data normalization. The first one that I probably preach the most—that I’m sure all of my teammates are sick of hearing me say—is naming conventions. Naming conventions play a super important role across all of these different platforms, specifically digital advertising platforms, because there are so many different dimensions that can be derived from campaigns within ad platforms. 

We have tried and have been successful in doing this with some of our clients: implementing standardized naming conventions all the way from the campaign names down to the ad, and sometimes even the creative-level naming conventions. This allows us to build dimensions directly from the campaigns, ad groups, ads, or creative names. That is obviously a challenge because there’s a lot of human error that goes into those and a lot of changes that happen within your ads. 
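[Editor’s note: as a hedged illustration of building dimensions from a naming convention, the sketch below assumes campaign names follow a hypothetical underscore-delimited pattern such as oncology_prospecting_awareness_video_us; the segment meanings are invented for the example.]

```sql
-- Hypothetical example: parsing dimensions out of a delimited campaign naming convention
-- e.g. campaign_name = 'oncology_prospecting_awareness_video_us'
select
    campaign_id,
    campaign_name,
    split_part(campaign_name, '_', 1) as product_line,
    split_part(campaign_name, '_', 2) as audience_type,
    split_part(campaign_name, '_', 3) as funnel_stage,
    split_part(campaign_name, '_', 4) as creative_format,
    split_part(campaign_name, '_', 5) as market
from {{ ref('stg_google_ads') }}
```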

So, for example, an ad is built and the digital advertising team says, “Oh, you know what? I need to change this ad in some way. I need to update the name of it.” The reason we don’t pull ad names, ad group names, or campaign names directly is that those human errors and changes happen. So, we pull at the ID level and then use a CTE within our normalization that pulls the latest ad name, ad group name, or campaign name based on ad IDs. That’s one way we normalize and avoid the need for future band-aids, for lack of a better word. 
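[Editor’s note: the CTE pattern Michelle describes, resolving the most recent name for each ad ID, might look roughly like the sketch below. Table and column names are assumptions for illustration.]

```sql
-- Hypothetical sketch: join on IDs, then resolve the latest human-entered names per ad_id
with latest_names as (
    select
        ad_id,
        ad_name,
        ad_group_name,
        campaign_name,
        row_number() over (
            partition by ad_id
            order by report_date desc
        ) as name_recency
    from {{ ref('stg_google_ads') }}
)

select ad_id, ad_name, ad_group_name, campaign_name
from latest_names
where name_recency = 1
```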

Attribution discrepancies, for example, between Google and Meta: you can have different attribution windows, so we need to normalize that. Are we looking at click-through conversions or view-through conversions? We have to level-set on that. 
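[Editor’s note: a minimal sketch of what level-setting conversion definitions could look like, assuming hypothetical column names for click-through and view-through conversions; real connector schemas differ by platform.]

```sql
-- Hypothetical sketch: standardize which conversion columns count toward a shared definition
select
    report_date,
    campaign_id,
    conversions_click_7d                       as click_through_conversions,
    conversions_view_1d                        as view_through_conversions,
    conversions_click_7d + conversions_view_1d as total_conversions
from {{ ref('stg_meta_ads') }}
```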

Granularity mismatches—I did mention this with Google Ads: you have Performance Max, which can only be pulled at the campaign level, whereas all other ad types can be pulled at the ad level. That was tricky for us to catch when Performance Max first came out. We were wondering, “Where’s our Performance Max data and why is it not pulling through?” before we went back and realized it doesn’t get down to the ad level of granularity. So that’s why you have to pull at two different granularity levels for just one source. 

Sometimes it’s simpler things, like time zones and currencies. We do a lot of work with a company that has a lot of entities overseas, so we need to make sure we’re aligning currencies. Usually we convert everything back to USD using an automated transformation that pulls the latest financial exchange rates. 
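[Editor’s note: a small sketch of the currency alignment step, assuming a hypothetical daily exchange-rate table where usd_rate is the USD value of one unit of the local currency.]

```sql
-- Hypothetical sketch: convert local-currency spend to USD with a daily exchange-rate table
select
    s.report_date,
    s.campaign_id,
    s.currency_code,
    s.spend               as spend_local,
    s.spend * fx.usd_rate as spend_usd
from {{ ref('stg_platform_spend') }} as s
left join {{ ref('exchange_rates_daily') }} as fx
    on  s.currency_code = fx.currency_code
    and s.report_date   = fx.rate_date
```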

Custom metrics and dimensions—that’s a big one also. The dimensions I already touched on; we build those mostly based on naming conventions, and that has been a proven success for us. But custom metrics are really interesting. A lot of times you have custom conversions. Google and Facebook—or Meta, rather—are two really good examples of where custom metrics come into play. You need to understand which custom metric in Google aligns with which custom metric in Meta and get those aligned and normalized. 
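[Editor’s note: one common way to align custom conversions is a mapping layer like the hypothetical sketch below; the conversion names and the intermediate model are invented for illustration.]

```sql
-- Hypothetical sketch: map platform-specific custom conversions onto one shared definition
select
    report_date,
    campaign_id,
    case
        when source = 'google' and conversion_name = 'request_appointment'   then 'appointment_request'
        when source = 'meta'   and conversion_name = 'schedule_custom_event' then 'appointment_request'
        else 'other'
    end as normalized_conversion_type,
    conversions
from {{ ref('int_all_platform_conversions') }}
```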

And then we do some work with late, reinstated data also. Whenever we do data pulls, we do a backfill of seven days every day with our data fetches. That makes sure the data stays current. For example, cost, impressions, clicks—those aren’t really going to change. Those are pretty static numbers. However, conversions and any sort of custom conversion are likely to change based on your attribution window. So, by re-pulling the last seven days every single day, we reinstate that data and get the freshest numbers on a daily basis. 
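[Editor’s note: Michelle describes the seven-day backfill happening at the extraction layer in Adverity. Purely as an illustration of the same restatement pattern, here is how a rolling window could be expressed in a DBT incremental model; the names and the delete+insert strategy are assumptions, not Wheelhouse’s actual configuration.]

```sql
-- Hypothetical sketch: restate the trailing seven days so late-arriving conversions are captured
{{ config(
    materialized='incremental',
    incremental_strategy='delete+insert',
    unique_key='report_date'
) }}

select
    report_date,
    source,
    campaign_id,
    spend,
    impressions,
    clicks,
    conversions
from {{ ref('int_all_platform_performance') }}

{% if is_incremental() %}
-- rows for the last seven days are deleted and rewritten on each run; older history is untouched
where report_date >= dateadd(day, -7, current_date)
{% endif %}
```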

Aaron Burnett: Right. It’s very smart. And because we’re in a highly regulated environment where privacy rules and privacy laws are very stringent, we also need to do interesting and clever things to both protect end-user identity and preserve our ability to resolve identity internally. It’s important to note that the data warehouse that we operate is HIPAA-compliant and that we’re under BAA with our clients. And so, we have the same access to data and also the same obligations to maintain privacy and protect that data that they do. So, tell me about our use of private client ID or UUID, the complexity that creates, but also the value it creates. 

Protecting Privacy with UUIDs and HIPAA Compliance 

Michelle Jennette: UUIDs—universally unique identifiers—are what we use with a lot of our clients’ CRM data. So, for example, the client gives you access to Salesforce. There’s a lot of private data in there: names, emails, whatever else. There may be private information that cannot be shared. So, what we do is assign every user a unique identification number, the UUID, which anonymizes users so that we can track them all the way through the funnel and understand their full-funnel journey, completely anonymously. 
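[Editor’s note: the sketch below shows one simple way an anonymized identifier could be assigned so that PII never flows downstream. A deterministic hash of the CRM record ID is just one option, and the model and column names are hypothetical.]

```sql
-- Hypothetical sketch: assign a stable anonymous identifier and drop direct identifiers downstream
with crm_contacts as (
    select contact_id, email, first_name, last_name, lead_created_at, lead_stage
    from {{ ref('stg_salesforce_contacts') }}
)

select
    md5(cast(contact_id as varchar)) as uuid,  -- internal-only surrogate key; PII stays behind this model
    lead_created_at,
    lead_stage
from crm_contacts
```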

Aaron Burnett: That’s an internal identifier that does not live in any other system and isn’t shared with any other system, which is how we continue to preserve user privacy, patient privacy, that sort of thing. But it’s very powerful for us. Tell me how the process of data normalization supports effective data visualizations and reporting.

From Visualization to Creative Performance Tracking 

Michelle Jennette: As we mentioned, we’ve built a taxonomy of dimensions using naming conventions, and we use those naming conventions to create these different dimensions throughout all of our different platforms. Then those are pushed all the way downstream and aligned across each platform so that when all of the data is unionized in that last pretty box, the visualization, you can slice and dice data however you please—not just by, “All right, so here’s your Google data. Here’s your TikTok data. Here’s your Reddit data.” 

No, you can look at it at, “I want to look at my data broken down by objective across all of my platforms. I want to look at my data by awareness level across all of my platforms and see which awareness level is really driving the most conversions, which is driving the most upper-funnel kind of metrics that we want to look at—view-throughs, view contents, things of that nature.” It allows you just to look at your data in a unionized way, but in a more granular way than just at the source level. 

Aaron Burnett: As someone who is not as deeply involved in digital advertising, all of that is super compelling. It also has been compelling to be able to look at creative and to understand performance for creative across platforms as well. And I find the way that we present that really useful in that we show the creative, so you can look at the creative concept and see how it performs across platforms. 

Michelle Jennette: Yes. So, the creative reporting that we have is something we’re really proud of. There are hiccups along the way. TikTok is a newer platform, right? Their API doesn’t let you pull a creative static image the way Meta’s does; it lets you pull it, but it only stays fresh for one hour. 

Aaron Burnett: That’s very TikTok. 

Michelle Jennette: It’s so TikTok. So, what we do is when we run into hiccups like this, we work with Adverity, our extraction tool, and their service reps are really great to work with. When you bring them an issue in terms of, “Hey, this dimension’s not available, this metric’s not available, this attribute—like the creative image—is not available or not working properly,” they’ll get their engineering teams to work on a backend solution for that. 

Now, why that’s so important for us is that as an agency managing 10-plus clients, we don’t have the ability to manage APIs in-house. They change quickly. There are hundreds of them. So, using a tool like Adverity, which does have a specific team per API, allows them to be dynamic and make these adjustments and updates for us. 

Aaron Burnett: Absolutely. All right, so you mentioned a pain point, a thing that we learned along the way. When we first built Compass and we first began to ingest data from advertising platforms, our initial approach was to integrate at a per-platform API level and just pull data directly in without any intermediary. You alluded to this a little bit, but can you describe why that didn’t work and why no one should try to do that? 

The API Management Challenge: Lessons from Direct Integration  

Michelle Jennette: You know what? I almost forgot that we did that years ago because I think I’ve just mentally blocked it out. Yeah, it was pretty bad. So, we had a brilliant engineer on our team who had experience with managing APIs. But when you have APIs—Google is a great example of one—they change their API on a regular basis. And it’s not just small tweaks and updates where, “Okay, we adjust this, turn this knob, do that.” They’re big adjustments where you need to recreate the entire API tap. 

So, in order to do that, you need a specific team per API, and at some point it just becomes uneconomical. Bringing in a third party like Adverity, Supermetrics, Fivetran, whatever tool you may want to use, is going to be more economical and less of a headache for you downstream, and it’s going to give you quicker results too. 

Aaron Burnett: Absolutely. Yeah, that was a painful episode in our experience. You’re absolutely right. The APIs change frequently. What we also found, depending upon the platform, and Google was most infamous for this, is that they change the API first, tell you second, and update their documentation third. And so, it was not good. Nobody should do that. 

Michelle Jennette: Yeah. We found much success with, like I said, Adverity. Supermetrics was great. We did have a couple of nuances that led us to change, and we’ve been really happy with Adverity thus far. 

Aaron Burnett: Adverity is good for many of the platforms. We still do have to have API-level connections with some platforms. For example, a CRM integration is an API-level integration. You’re not going to get that through a proxy, certainly not in a privacy-first sort of a context. Let’s talk about other gotchas, things that people should be careful of as they start to think about or do this work themselves. What mistakes have we made, and what can we share so that others don’t make the same ones? 

Best Practices: Documentation, Structure, and Reusability 

Michelle Jennette: My coworkers will laugh at me because I am the documentation queen, but documentation is so key, absolutely key, because practices change all the time. And so, keeping up-to-date documentation on all of the standards that you use is going to be key. 

Another best practice we’ve found is our pipeline structure. So, within DBT—like I said, that’s our transformation tool, right?—we have created a standardized pipeline structure that wasn’t in place when we first began this. It was almost a little bit of the Wild West. Now that things have become more standardized, pipelines are cleaner. Anybody can really go in there and know what they’re looking at and what they’re doing. 

We break it up into four different folders. So, each client is already broken out to keep everything security-tight, but within each client folder we have a source folder—that’s where raw data’s being pulled in. A staging folder—that’s where you’re doing the majority of your transformations per platform. That’s where a lot of the meat is. Intermediate tables—that’s where a lot of the unionizing of data comes together. And then our marts, which is going to be your pretty packaged-up reporting view with a nice bow on top that’s ready for use, that can be pushed then to our visualization tools, to clients’ warehouses. We can push them into our clients’ warehouses like BigQuery if that’s of use to them. 

Reusability—something I have mentioned multiple times. Things like standardized naming conventions or standardized attribution windows. That’s not always possible across clients, but within a client you can do that a lot. And then templates for common platforms: Google Performance Max, TikTok, Facebook, Reddit—all paid social can use a lot of the same templates. 

And then of course, QA-ing and monitoring. That’s a big part of the job. Making sure that you have data parity between your platforms and what we have in the warehouse, not only in the raw state but in the final reporting view state, and then constant monitoring at all levels. We have alerts on Adverity, so if data pulls do not happen at that very first point, an alert goes off. That can be an alert sent via email, or it alerts us in Slack so the greater team knows. DBT will alert us if a job run fails. Power BI will send an email notification if a dashboard refresh fails. So, we have different alerts set at all points of the pipeline, which allows us to keep a pretty good eye on it in terms of monitoring. 
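[Editor’s note: one lightweight way to automate the parity checks Michelle mentions is a DBT singular test, a SQL file that fails when it returns rows. The sketch below compares spend totals between a hypothetical staging model and the final reporting view; the one-percent tolerance and the model names are assumptions.]

```sql
-- Hypothetical singular test: fail if raw and reporting-layer spend diverge by more than 1%
with raw_spend as (
    select sum(spend) as total_spend
    from {{ ref('stg_google_ads') }}
),

reporting_spend as (
    select sum(spend) as total_spend
    from {{ ref('fct_paid_media') }}
    where source = 'google'
)

select
    r.total_spend as raw_total,
    f.total_spend as reporting_total
from raw_spend as r
cross join reporting_spend as f
where abs(r.total_spend - f.total_spend) > r.total_spend * 0.01
```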

Aaron Burnett: So, I know from working with you for years that you are humble and understated, so I won’t ask you to brag about some of the work that you do, but I’ll do just a little bit. It’ll only be painful for a minute or two. Some of the things that your work has enabled and the team’s work has enabled, for example, our tremendous efficiency with regard to onboarding clients. I know that even if we went back to a year ago, we took six weeks or so to onboard a client, and I know today it’s a week. If we were adding channels a year ago, it would take days and days, sometimes weeks. I know that we just added channels and it took us a day per channel to add them, and not just to add them once, but to add them along with all of the associated structures and processes and routines. 

We also, through our warehouse and through the BI practice that you’ve built on top of it, have automated a good deal of our reporting. So, for example, our digital advertising reporting automatically updates dashboards within the context of the presentations that are shared by our analysts. So, the operational value and the operational efficiency are super impressive. It’s a very mature data warehouse and process, and you’ve done a great job. 

Michelle Jennette: Thank you. We’ve come a long way. Our BI team is fantastic. It’s a small group of us, and we’ve come a long way. 

Aaron Burnett: Yeah. Yeah. I remember when we started this, our aspiration was to be able to see things that other people couldn’t see and know things that other people didn’t know. And it took us a while, but we’re there now, and it’s very exciting to see. 

Michelle Jennette: It is really exciting to see. So, I just added Reddit this morning on one of our clients’ data pipelines. And to your point, what would’ve taken us maybe a week a year or two ago, I got it implemented in an hour. So, things like that, when you just get these processes and standardizations in place, that’s what really makes the wheel turn. 

Aaron Burnett: All right, so let’s talk about the debate between ETL and ELT. I know that you have a strong perspective on this, one way that you think is better than the other. First of all, maybe define the acronyms for those who aren’t familiar, and then tell me which way works best for us and why. 

ETL vs. ELT: Choosing Privacy-First Transformation 

Michelle Jennette: Yeah, they’re quite similar. ETL: extract, transform, load. ELT: extract, load, transform. We use ELT. But what ETL does is transform the data before it’s ever loaded into the warehouse. A lot of times this makes sense if you’re using extraction or transformation tools that are external to your warehouse. 

What we use, ELT, is extract, load, transform. You extract the data from your source and then load it. We load the raw data into the data warehouse and then transform it. So DBT lays on top of our warehouse, which allows us to do the transformations post-load. So really we kind of load twice if you want to look at it that way. We almost do ELTL, if you will. So we extract within Adverity, load the raw data into the data warehouse. We then do the transformations in DBT on top of the warehouse, and then reload back in the final reporting view into Redshift. 

Aaron Burnett: What are the advantages of that approach over ETL? 

Michelle Jennette: ETL is more something of the past, whereas ELT is the way it’s done now. The reason is that with ETL you did your transformations outside of your warehouse, whereas with ELT everything is inside the warehouse, so you don’t have another third party that you’re bringing in and sharing your data with. It’s all done on top of the warehouse. 

Aaron Burnett: So, it supports our privacy-first focus. It ensures that we maintain HIPAA compliance for the warehouse. 

Michelle Jennette: Absolutely. 

Aaron Burnett: Let’s do a kind of a quick run-through: best practices and standards, the things that people should ensure they do if or as they are considering creating their own data pipeline, normalization processes, and data warehouse. 

Quality Assurance and Monitoring at Every Pipeline Stage 

Michelle Jennette: So, in terms of best practices and standards, we’ve learned a lot along the way. Documentation, as I mentioned, is going to be key. Everything is going to rely on that documentation because standards and practices do change quite often, so make sure the documentation stays current and the rest of your team is following it. 

Let’s start with ensuring that you have a good extraction tool you can trust—that your raw data is coming in accurately. Do a QA at every point you’re touching data: check your extractions right off the bat, check them again once your raw tables load into the warehouse, and make sure you have parity between what’s in your warehouse and what’s in the platforms. 

Have a good pipeline structure, so all of your tables are organized in a way that somebody can just go into your pipeline and know what they’re looking at. It’s not this jumbled mess of, “Oh gosh, here’s a band-aid on this, here’s a band-aid on that.” No, you want to eliminate band-aids as much as possible, which is something that we’ve learned over the years. We used to bandage a lot of stuff up, and to this day, I’ll get questions from a digital advertising analyst saying, “Oh, you know what? I messed this up. Is it okay if we bandage it up somehow?” And you’ve got to learn to put your foot down and just say, “No, we have standardizations and you’re going to have to go back and fix that so that down the line it doesn’t mess up anything.” 

So, you have four different areas: your sourcing tables, your staging tables, your intermediate tables, and then your marts. A pipeline structure like that has proven to give us a lot of success. 

And then reusability. As you mentioned, Aaron, a year ago some of this stuff would’ve taken us anywhere from a week to six weeks to implement. By implementing reusable factors such as naming conventions or templates in DBT, we have been able to speed up our process quite significantly, even with some of our most complex clients. Today I implemented Reddit Ads on arguably one of our most complex clients, and I was able to do it in an hour thanks to the reusable templates we have created over the years. Reddit is a paid social platform, and paid social generally reads the same across paid social platforms. There are going to be different intricacies that you will have to call out, but on the whole, a template built for paid social is going to work across paid social. 
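[Editor’s note: the reusable paid social template Michelle mentions could be expressed as a DBT macro along these lines; the macro name, columns, and source relations are hypothetical.]

```sql
-- Hypothetical macro: one reusable staging template applied to each paid social source
{% macro stage_paid_social(source_relation, platform_name) %}

    select
        '{{ platform_name }}' as source,
        report_date,
        campaign_id,
        campaign_name,
        spend,
        impressions,
        clicks,
        conversions
    from {{ source_relation }}

{% endmacro %}
```

A staging model for a newly added platform then reduces to a single call to this macro pointed at that platform’s raw source table, which is consistent with the one-hour Reddit onboarding Michelle describes.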

And then of course, the big one: making sure that you’re QA-ing and monitoring the whole thing. So, I mentioned that you QA at every single process, or every single stage, rather—at extraction, at load, at transformation. And then for us at reload, making sure that you have data parity between all of those different stages and what’s in your platforms. 

And then monitoring, that’s a big one. A lot of the job actually relies on a good monitoring system. We have alerts set at essentially every single aspect of the ELT process. At any point, we are alerted via Slack or email that something didn’t fetch right or didn’t translate right. We jump on those before the client can ever catch it, being the first in line to make sure the data is accurate, reliable, trustworthy, and ultimately our one source of truth across our teams and our clients. 

Aaron Burnett: Accuracy and fidelity are absolutely critical. We make decisions with tens of millions of dollars based on the information and insights that you and your team deliver. 

Michelle Jennette: Yep. Yeah. We cannot let our clients down at that price range. 

Aaron Burnett: Exactly. At any price range. Yeah. Yeah, exactly. All right, so if anyone listening here doesn’t have a data warehouse, is still pulling platform data, trying to make sense of this, trying to reconcile it maybe manually, maybe they’re still at a spreadsheet stage, maybe they’re just grabbing platform data and loading it into BigQuery—where should they start if they want to pursue the path that we’ve been on now since I guess 2021? 

Building In-House vs. Partnering: What You Need to Know 

Michelle Jennette: Gosh. Yeah, it makes my heart flutter even thinking about pulling platform data manually, right? The good old-fashioned way, because there’s just so much room for human error. As much as we don’t want to admit it, it’s so easy for that to happen. 

So, in terms of where to begin, you probably want to look for an agency that can be a one-stop shop for everything, because if you do this in-house, it can become quite pricey quickly. We’re fortunate to be an agency that is using an extraction tool like Adverity; it’s not a cheap tool, but we’re able to spread that cost across all of our clients, so it becomes affordable. But when you’re using it in-house and paying the full load yourself, that is quite a pricey endeavor. And that’s just your extractions. Why that’s important is because, again, manual pulls are just not reliable. A seven-day backfill every day is going to be nearly impossible to do manually, whereas an extraction tool overwrites and reinstates all of your data automatically. 

So, I’m going down a rabbit hole now of all the reasons you shouldn’t do it by yourself. I’m stressing myself out just thinking about starting over, I know. But in terms of where you should start, gosh, I think you should just hire us. 

Aaron Burnett: All right. That’s actually good, right? Your advice, having done this, is don’t start yourself—find a partner who has done this. 

Michelle Jennette: Yeah. Don’t go into this alone. 

Aaron Burnett: That’s fair. So, we’re not a huge agency, but we’re quite sophisticated. So, we have our own engineering folks, we have very sophisticated and advanced analytics folks. We have our BI practice that we’ve built up over the years. And we have the benefit of all of the things that we got wrong at first and have learned from as well. So aside from the expense, just the expense of paying for the tools, there is the expense of all the things that you’re going to get wrong as you embark on this path. 

Michelle Jennette: Oh, completely. And it’s an ever-evolving learning process. New platforms come out all the time. Where was TikTok a few years ago? And now we need to learn all about TikTok and all the intricacies of its APIs. But we have a dedicated team here at Wheelhouse that allows us to dive in, research, and focus our efforts on that. It’s an ever-learning experience to manage all of these pipelines and data sources, plus all of this stuff with AI that’s coming out: how do we manage those changes and what are they going to do to our different pipelines and transformations in the future? So yeah, it’s a lot to keep up with, but with a dedicated team, we’re blessed to be able to do such things. 

Aaron Burnett: Maybe the most helpful perspective and advice we could give folks who might be considering doing this themselves is it’s not a part-time job for any one person, any handful of people. It’s a full-time job across a number of disciplines for many different people if you’re going to do this in a way that truly delivers value, and in particular if you’re going to be able to deliver value in privacy-first industries. 

Michelle Jennette: Absolutely. Absolutely. And that just made me think of, if you are going to take this on and do this in-house by yourself, I think the most important part is going to be getting a reliable extraction tool, because managing multiple APIs without a team is just not something you’re going to be able to do. So, making sure that you have a reliable extraction tool is going to be step one ultimately if you’re going to be doing this in-house. 

Aaron Burnett: Michelle, this has been great. Thank you very much for spending this time, and thank you for everything that you do for us. The work you do is amazing. 

Michelle Jennette: Oh, thank you so much, Aaron. It’s been great. 

