Deepfakes popped onto our screens (quite literally) around 2019. We know them mostly from funny videos, such as Bill Hader channeling Tom Cruise, or from the Kate Middleton Mother’s Day pictures. But outside of our own entertainment, how are deepfakes being used, and why should we care?
Since widespread access to and adoption of OpenAI’s ChatGPT in 2023 and newly emerging generative technologies, the ability to fast-track development of hyper-realistic deepfakes has given threat actors a new tool for potential widespread fraud, manipulation and disinformation – and global organizations are taking notice. ~ Teneo by Courtney Adante
As we would expect, the initial and immediate targets were celebrities, politicians and high-profile executives. But the targeting is moving very quickly to everyday businesses and individuals, using disinformation and fiction for political and criminal purposes. It sounds like yet another level of social evil, but let’s make it a bit more real. With today’s high housing prices, if you had the ability to drive down prices in a particular neighborhood by spreading misinformation, would you? What if you were looking at acquiring another organization? Would a sprinkle of disinformation about their political views and activities make their price, or the competitive bidding, a bit easier for you to swallow? Most of us would say, of course not! But there are plenty of people who would jump on the opportunity. And the numbers show us that they do.
Some deepfakes are easy to spot; you only need to spend a few minutes on Instagram to find some. Others, however, are nearly impossible to identify with certainty.
Over half of internet traffic in 2023 was driven by artificial manipulators and content. ~ Source: Arkose Labs
So now that we have shared yet “another” thing to worry about, how can we find out whether the information we are accessing is real or was created with malicious intent? Organizations are popping up to fill the need to “see clearly” in this very confusing environment. Giants such as Microsoft are introducing Content Integrity tools, which are being used this summer in the European elections. “These tools, already available to U.S. political campaigns, give organizations control over their own content, and combat the risks of AI-generated content and deepfakes. By attaching secure “Content Credentials” to their original media, the organizations can increase transparency as to who created or published an image, where and when content was created, whether it was generated by AI, and whether the image has been edited or tampered with since it was created.” ~ Microsoft Blogs, April 2024.
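Conceptually, Content Credentials work like a tamper-evident seal: the publisher signs a small manifest of claims (who made the image, whether AI was involved) that is cryptographically bound to the media itself, so any later edit breaks the verification. The real standard behind this (C2PA) uses certificates and public-key signatures; the toy Python sketch below uses a simple HMAC, and the key, function names and manifest fields are all illustrative, not part of any real tool.

```python
# Toy illustration of content-credential-style provenance.
# NOT the actual C2PA implementation: real Content Credentials use X.509
# certificates and public-key signatures, not a shared HMAC key.
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-secret-key"  # hypothetical key, stands in for a certificate

def attach_credentials(media_bytes: bytes, creator: str, ai_generated: bool) -> dict:
    """Build a manifest that binds provenance claims to the media's hash."""
    manifest = {
        "creator": creator,
        "ai_generated": ai_generated,
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_credentials(media_bytes: bytes, manifest: dict) -> bool:
    """True only if the claims are authentic AND the media is unmodified."""
    claims = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # the manifest's claims were altered
    return claims["media_sha256"] == hashlib.sha256(media_bytes).hexdigest()

image = b"...original image bytes..."
creds = attach_credentials(image, creator="Example Newsroom", ai_generated=False)
print(verify_credentials(image, creds))                # True: image untouched
print(verify_credentials(image + b"edited", creds))    # False: image was tampered with
```

The key point is that the signature covers the image hash, so a deepfaked or retouched copy can no longer pass as the original, which is exactly the transparency the Microsoft quote above describes.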
There are also other organizations we have come across, such as Koat.ai, designed specifically to identify and track global data, free of manipulation, so you can arrive at decisions in real time, confidently. The company tracks which data is bot- or AI-generated, and even where it is generated from. If you are an organization that wants to know what is being said about you or your industry, this is a priceless tool that cuts through all the fakeness.
Here are some additional fun facts from Koat.ai:
70% of all online conversations are from fake accounts and bots
67% of hostile artificial actors are focused on Financial Services
38% is the year-over-year increase in artificial, hostile manipulators
Bottom line: question everything you read online. Quite literally, the odds are about 50/50 that it is real rather than bot-generated (tell your kids, tell your parents, tell your neighbors). And if your organization is focused on risk management or image management, learning what is being said authentically versus by bad actors is a strong addition to your strategic approach.
As always, do contact us for more information. We love talking about this stuff!