‘Fog of War’ Intensified by AI and Social Media, Tech Policy Expert Says

Samantha Aschieris

As the war between Israel and Hamas rages on, Jake Denton, research associate for The Heritage Foundation’s Tech Policy Center, breaks down the role of artificial intelligence in the conflict.

“I think the one that everyone jumps to is the AI-generated content, deepfakes, things of that nature,” Denton says on this episode of “The Daily Signal Podcast.” (The Daily Signal is Heritage’s news outlet.)

“There’s a few stories of synthetic audio recordings of a general saying that an attack’s coming or those types of things that we’ve even seen in the Russia-Ukraine conflict,” Denton says. “They go around on Telegram or WhatsApp.”

“They’re taken as fact because they sound really convincing,” the Heritage researcher says. “You add some background noise, and suddenly it’s like a whole production in an audio file. And typically, what you’re picking up on in a video is like the lips don’t sync, and so you know the audio is fake. When it’s an audio file, how do you even tell?”

Denton also highlights social media platforms such as the Chinese-owned app TikTok.

“What you’re seeing right now, especially on platforms like TikTok, is they’re just promoting things that are either fake or actually real synthetic media, like a true deepfake built from the ground up, altered video, altered audio. All these things are getting promoted,” Denton says, adding:

And kids, through no fault of their own, are consuming this stuff, and they’re taking it as fact. It’s what you do.

You see a person with a million followers that has 12 million likes and you’re like, ‘This is a real video.’ You don’t really expect these kids to fact-check.

On today’s episode of “The Daily Signal Podcast,” Denton also discusses President Joe Biden’s executive order on artificial intelligence, what he views as social media companies’ role in monitoring AI and combating fake images and videos, and how Americans can equip themselves to identify fake images and videos online.

Listen to the podcast below or read the lightly edited transcript:

Samantha Aschieris: Joining today’s episode of “The Daily Signal Podcast” is Jake Denton, research associate in The Heritage Foundation’s Tech Policy Center. Jake, welcome back to the show.

Jake Denton: Thanks for having me back.

Aschieris: Of course. Thanks for joining us. On Monday, President Joe Biden issued an executive order on artificial intelligence. According to a White House fact sheet, “the executive order establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.”

Jake, before we get too far into today’s conversation talking about this executive order, first and foremost, what is artificial intelligence?

Denton: Well, it’s a topic of debate that we’re still having here in Washington when it comes to formulating a legislative approach, a regulatory approach. But just generally speaking, artificial intelligence is machine intelligence that simulates human intelligence. That’s the simplest, dumbest version of the definition you could possibly have, but people are taking it in very different directions.

And so, every piece of legislation we’re seeing has a different definition. I don’t think there’s a unified view of what it should be here in Washington, and that’s a point of contention. All the way down to the most basic aspect of this whole policy debate, we still have to define the tech in a meaningful way.

Aschieris: Thanks for that explanation. I wanted to now talk about this executive order. Could you just break it down for us a little bit more? I know it was rather long.

Denton: Yeah, the document’s huge. Depending on your font size, it can be upward of 110 pages, and there’s a lot in there. As you mentioned at the beginning here, everything from national security to AI labeling, so that gets into the synthetic media, deepfakes, having a watermark potentially.

And then, also as you mentioned, this diversity, equity, inclusion agenda. I think across the board, the framework that’s outlined in the fact sheet is strong. It’s something that most of us wouldn’t really object to. But when you read the actual order, it’s not really what’s presented in the fact sheet, which is typical. I mean, they butter it up, they make it sound nicer.

And really, what you find when you start to get into the details is that it’s really pushing that diversity, equity, inclusion agenda throughout all those various pillars and putting a little bit less priority on the actual intent of each pillar.

So, the national security pillar, for instance, includes red teaming, which is intended to find vulnerabilities and flaws within an AI model. But when you read the definition of red teaming that is laid out in the executive order, it includes looking for discriminatory or harmful outputs of an AI model. And so, if your red team, which is supposed to basically find vulnerabilities that could implicate national security, is instead focused on finding outputs that might hurt someone’s feelings or that are discriminatory, is our national security really safer? What are we doing?

And I think it all boils back down to something that we encounter all the time: They just use these blanket terms that have really been left undefined, like diversity, equity, inclusion, harm. And it just gives them broad authority to label anything they want with those terms and take action.

And so, we still see this in the social media realm. Now, we’re doing it with AI. We still don’t know what any of these words mean to them. They might not really even know what these words mean to them, and who’s to say how it ends? But they’re going to use this as a wedge into these policy debates.

Aschieris: And just speaking of using it, how are you expecting the administration to actually enforce this executive order?

Denton: Well, it’s really the elephant in the room. You read through it, and when you consider the disclosures they’re asking for, there’s still a lot of autonomy on the company side, which isn’t necessarily a bad thing. They shouldn’t be forced to give away everything. But it’s going to be really hard for the government side to say whether it got what it should have in a given disclosure.

And then, furthermore, there is just a competency shortage here in Washington. There isn’t an AI workforce occupying the halls of Congress or even these regulatory agencies. And what this executive order does is break off a particular jurisdiction for each agency or regulatory body, so that consumer issues are under a consumer regulatory agency, like the Commerce Department, and maybe nuclear-related things are under the Department of Energy, which is probably a good thing, since it separates expertise.

But those agencies don’t have a robust AI team that can understand what’s really going on, and so there’s going to be a huge hiring push. The Biden White House rolled out ai.gov, which is like a jobs portal. But think about it: As an AI developer, you’re in demand. If you currently work at a Silicon Valley company making seven figures, are you really going to just throw that away to move to crime-ridden Washington, D.C., and take a government job? Probably not.

So, who’s filling these roles to interpret the disclosures? Enforceability just starts to crumble when you consider that you don’t have the talent there and you’re using overly ambiguous words, which means even if you do enforce, it’s going to be very selective. Across the board, it’s really hard to see what this looks like in practice.

And I think that’s really the big struggle right now for all these people: Everyone’s asking, “What does it look like in 10 years because of this executive order?” I don’t think anyone knows, because there isn’t a really clear path presented.

And they’re calling this the foundation of further AI policy. Well, the foundation is seemingly nonexistent. We just threw stuff out there. So, hopefully, we actually get a real foundation later on, but it’s really tough to say what AI policy is going to look like in 10 years.

Aschieris: And just speaking of policy, obviously, we saw the president and the White House take this step on Monday. What would you like to see from Congress in terms of legislation, any steps regarding AI?

Denton: I think the core focus right now is all on national security, and rightfully so. These systems are going to pretty quickly integrate into very sensitive areas, critical systems that we can’t afford to leave vulnerable to adversaries or bad actors here in the States.

And so, something like explainability legislation, which I believe we’ve talked about before on the show, is critical. And yet, it’s really nonexistent in any of these proposed bills. No one’s really talking about it on the Hill. We go over there, and people still aren’t getting it. And it’s pretty simple. All it is, is that we want to be able to audit the decision-making process of a model.

Everyone would think that if you ask it a question, you’d be able to figure out why it drew that conclusion, but not even the engineers at most of these companies can tell you why the model came up with that answer. And so, we want to essentially open up that black box—that’s what that phenomenon is called—so we can go through and figure out what data, scraped from across the internet, contributed to that answer.

Maybe it was a fake news story, maybe it was a statistically flawed and disproven academic study. You think about all the different things on the internet that have been disproven, and there’s no way to tell whether something like that is the basis, the foundation, for a given decision.

So, for the critical systems targeted in this executive order, the ones we’re really worried about causing real-world harm, you would think that there would be a way to audit them. There is nothing in the order for it.

And so, it’s almost like we’re just skipping steps here, trying to check every box to please the public, but we’re missing the unsexy Computer Science 101 type stuff that’s going to make this either work or fail. And so, we need to almost just go back, when it comes to Congress, and start from the ground up, which is explainability.
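To make the kind of audit Denton describes a little more concrete, here is a minimal sketch of one common explainability technique, permutation importance, which estimates how much each input feature drives a model’s predictions. The dataset and model below are stand-ins of our choosing, not anything prescribed in the interview or in the executive order, and techniques like this only approximate a model’s reasoning; decision-level audits of large generative models remain an open research problem.

```python
# Minimal sketch: permutation importance as one way to audit which inputs
# drive a model's predictions. Illustrative only; the dataset and model
# are stand-ins, not anything named in the interview or the order.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Any tabular classification task works the same way.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much test accuracy drops;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, drop in ranked[:5]:
    print(f"{name}: mean accuracy drop {drop:.3f}")
```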

Aschieris: Now, of course, we are having this conversation against the backdrop of the ongoing war between Israel and Hamas. What has been the role of AI in this war that you’ve seen?

Denton: I think the one that everyone jumps to is the AI-generated content, deepfakes, things of that nature. There’s a few stories of synthetic audio recordings of a general saying that an attack is coming, or those types of things that we’ve even seen in the Russia-Ukraine conflict. They go around on Telegram or WhatsApp. They’re taken as fact because they sound really convincing. You add some background noise, and suddenly it’s like a whole production in an audio file.

And typically, what you’re picking up on in a video is, like, the lips don’t sync, and so you know the audio is fake. When it’s an audio file, how do you even tell? And so, you’ve seen a little bit of that synthetic audio. It’s been pretty well documented.

And then, on social media, you’re starting to see synthetic images, or even clips from the video game Arma 3 being taken as real-world military footage. And it’s shifting the news cycle and where people are paying attention.

A lot of that has to do with the algorithmic recommendations, which we’ve had for a while, but their very foundation is still AI recommending you this content. There’s an element of AI in what it’s recommending to you, what it’s prioritizing, what news stories it’s deciding you’re going to see on your feed.

And so, what you’re seeing right now, especially on platforms like TikTok, is they’re just promoting things that are either fake or actually real synthetic media, like a true deepfake built from the ground up, altered video, altered audio. All these things are getting promoted.

And kids, through no fault of their own, are consuming this stuff, and they’re taking it as fact. It’s what you do. You see a person with a million followers that has 12 million likes and you’re like, “This is a real video.” You don’t really expect these kids to fact-check. And even then, how are you going to fact-check a video of a conflict in the desert? Who do you know that’s going to tell you if that building is real?

So everyone’s running around with all sorts of ideas in their heads; that scene might never have happened, that building might not have existed, and it’s all from the recommended content on feeds. So, the fog of war is being, essentially, intensified through social media and these AI systems.

Aschieris: It’s scary.

Denton: It certainly is. Yeah.

Aschieris: It’s really scary. And just speaking of social media companies, what do you view as their role in monitoring artificial intelligence and combating these fake images, these fake videos?

Denton: It’s tough because the generation side is rapidly outpacing the detection side. And so, you look at a platform like Twitter and you expect them to detect everything, but it’s simply not possible. The tech isn’t there yet.

I think we all saw this with the Ben Shapiro image of the dead baby. And then this crazy debate ensued of, like, “Was it real? Was it fake?” And still, there’s, honestly, not a great answer on whether it was real or fake. Each side backtracked a little bit.

An image-checker said that it was fake, and then it said it was real. There’s a huge margin of error. So, it just presents this point: Even if we require Twitter or Facebook or whoever to verify the authenticity of an image, they’re not going to be able to do it 100% of the time.

And so, I think mechanisms like Community Notes on X are probably the way forward. They’re like a Band-Aid until we can figure out how to have a reliable detection system, but Community Notes are flawed. I mean, anyone can make a Community Notes account and then just troll people. And you saw it with that Ben Shapiro case.

I was on Twitter myself, or X rather, I guess. The first post I see of the photo says, “This has been verified to be true for these three reasons.” And then, literally, the post right below it was the same image: “This has been verified to be false for these three reasons.” You’re sitting there like, “OK, what is real? This is all just a smoke-and-mirrors game.”

And so, I think it’s going to get worse, unfortunately. I think there’s going to have to be a straw that breaks the camel’s back for a real overhaul of how media is handled on these platforms. And we’re inching closer with stuff like TikTok and the way that they’re promoting content, but we’re a long way from the clean information environment we may have seen before the generative AI boom, or even pre-TikTok. It’s probably further away, or maybe even more out of reach, than people realize.

Aschieris: And where do you even draw the line, from your perspective, between censoring actual news that might be harder to verify and just letting fake news out on the internet?

Denton: Well, it’s a real struggle because, particularly through the generative AI lens, deepfake media is getting a lot harder to detect. At some point, there basically has to be a human who verifies the automated system’s flag, and then it’s that person’s choice.

So, there is a world in which we give increased authority to these platforms to flag AI-generated media. And it results in real stories being censored on the seemingly innocent basis that they were possibly, like, AI-generated, but we’re just trusting a person again—this fact-checker who maybe worked the 2020 election and was flagging stuff for a whole other reason.

So really, you can’t untangle the political will of the Silicon Valley worker. They’re obviously going to exert their authority. And so, you want to figure out a way forward here where independent journalists are still allowed and platformed.

And just because you don’t have a mainstream backer verifying that a story is true doesn’t mean the story’s AI-generated, right? But there’s a very real world in which every independent journalist just gets deboosted or shadow-banned because they don’t have this verification enterprise behind them. And legacy media outlets are getting sucked up into this stuff just as much as anyone else, retweeting or posting fake media, but they just get to skate free, so it’s really tough.

I don’t really see a clear path forward. I think we’re just going to have to play around with the correct mix of automated detection and human review. There’s got to be a way speedier appeals process. If your post was wrongfully flagged at a critical time, you need to be able to get it back up as quickly as possible, and maybe the platform’s not incentivized to hear that appeal out.

I think about critical election moments, like the Iowa caucuses, right? Let’s say a candidate had a stroke, and someone takes a photo and posts it, and it’s flagged as fake. That story doesn’t get out. Thirty minutes before voting starts, it’s confirmed as true, but every video has been flagged as fake. Maybe you have a strike on your account now. Who’s putting the image up, right?

And then someone screenshots the initial takedown. It’s like, “This was verified as false.” And suddenly, that voter at the ballot box is like, “Is the candidate I’m voting for even alive?” You don’t have any idea.

And we’re just months away from those scenarios. Not a single barrier has been erected to prevent that from happening. And if anything, just fuel has been added to the fire. People are getting reps in with the Ukraine stuff, with the Israel stuff, and then just recreationally playing around.

So, we’re refining our ability on an individual level to cause chaos, and the platforms and these sites are doing nothing to prevent it. So, I think, really, there’s going to be a big boom, something crazy’s going to happen, and it’ll require us to think things through a little bit deeper.

Aschieris: Well, before we go, I wanted to ask you how people can equip themselves to identify fake images and videos online.

Denton: It’s tough. I would say the best thing, and it sounds crazy, tinfoil hat-esque, is to just assume that everything is fake. It’s the best way to be safe.

I mean, the reality is, people you trust are going to retweet and amplify things that are fake. They might very well know it’s fake and view it as satire, but you’re just not expecting it, so you think it’s real. And you’re just forced to act as this, like, independent fact-checker.

Well, we’re going to pretty quickly hit a point where fake media outweighs real media, where it is more prominent. And so, it’s better to just change your mentality now and be skeptical of everything. Even the highest-production video coming from a legacy news outlet, just question it. Think back to the best deepfake you’ve ever seen, compare it, and be, like, “Actually, that could be a deepfake,” and just be skeptical.

I would say the best way, if you are really trying to figure it out, is to look at things like the eyes, the fingers, the mouth. These are areas where the details don’t really come through very well in AI-generated images, but skilled people can fix that. And so, you just hit this point where it’s better to keep a high guard and set a way higher bar for trusting an image.

We grew up just thinking everything we saw was real, that the camera was an accurate representation of reality. Today, that’s not really true. And so, it’s going to be really hard to deprogram that and build up this new defense system, but start now because it’s going to get a lot worse.
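For readers who want something more systematic than eyeballing eyes, fingers, and mouths, here is a minimal sketch of one classic image-forensics heuristic, error level analysis, written with the Pillow library. This is our illustrative addition, not a method Denton names: it can surface crude splices and edits in JPEGs, but it is not reliable against modern generative fakes, and the file name below is a placeholder.

```python
# Minimal sketch of error level analysis (ELA), a classic image-forensics
# heuristic. Illustrative only; unreliable against modern generative fakes.
import io

from PIL import Image, ImageChops  # pip install Pillow


def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Resave the image as JPEG and return the per-pixel difference.

    Regions edited after the photo's original save tend to recompress
    differently, so they often stand out brighter in the difference image.
    """
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)
    return ImageChops.difference(original, resaved)


# "suspect.jpg" is a placeholder path for whatever image you're checking.
# error_level_analysis("suspect.jpg").show()
```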

Aschieris: Well, Jake, thanks so much for joining us today. Any final thoughts before we go?

Denton: Just keep an eye on this stuff. I think the conflict takes your attention away from Silicon Valley, but Silicon Valley keeps chugging along, and so every day there’s a new leap in the AI world. We’re on that part of the curve where we’re making giant leaps every day, and not every single one gets attention, but they’re just about as important. So try to keep up with it, because you’ll get taken advantage of if you lose track of what’s going on.

Aschieris: Well, Jake Denton, thanks so much for joining us.

Denton: For sure. Thanks for having me.
