AI and Behavioral Marketing: Gold for Business, Risk for People



Do you want behavioral AI to push your emotional buttons through everything you see and hear online? Are your decisions your own, or are they shaped by people you cannot see? Learn about the forces that most people are completely unaware of. This article explains how you can make decisions safely in a digital world.

Dmytro Shestakov, October 2024. The article was originally published on London Daily News.

The Magic of Marketing, Psychology, and AI

Undoubtedly, the convergence of AI technologies with behavioral insights has boosted the development of highly personalised, engaging, and intuitive apps. Think about how your smartphone knows what you want to type next: this happens through the predictive power of machine learning models, the contextual understanding of large language models, and the humanised responses of generative AI.

However, this technological breakthrough is not without risks. In an ideal world, behavioral psychology and marketing give a business the opportunity to use its knowledge of the customer to boost profits. But what if they become weapons of manipulation?

Human attention is the main target of AI marketing, and the new tools are unmatched at holding consumer focus. This power can be used to improve online experiences, but it can equally be misused. As AI learns more about our behavior, it gets harder to tell legitimate marketing apart from adversarial use of behavioral psychology, and this raises reasonable concerns about our freedom to make our own decisions.

Choice Architecture as a Behavioral Weapon

The root cause here is choice architecture, a concept from behavioral economics: the careful, psychologically informed design of how options are presented in order to influence decisions. In the digital sphere, it is how websites and apps guide decisions with the user none the wiser. Let’s say you are trying out a language-learning app. When the trial period is over, you open it and see a message: “Don’t lose your 30-day learning path! If you don’t subscribe now, your work will be lost.” A large “Keep My Progress—Subscribe” button is displayed below; a smaller, less obvious button says “End Trial.” Here, the app uses your fear of loss to make you feel you need to subscribe immediately.
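To make the mechanics concrete, here is a minimal sketch of the trial-expiry dialog described above. It is purely illustrative (no real app works exactly this way, and all names are invented): the manipulative version differs from a neutral one only in loss-framed copy and asymmetric button prominence.

```python
from dataclasses import dataclass

@dataclass
class Button:
    label: str
    prominence: str  # "primary" (large, colourful) or "muted" (small, grey)

@dataclass
class Dialog:
    message: str
    buttons: list

def neutral_dialog() -> Dialog:
    # Symmetric presentation: both options equally visible, no loss framing.
    return Dialog(
        message="Your trial has ended. Would you like to subscribe?",
        buttons=[Button("Subscribe", "primary"),
                 Button("End Trial", "primary")],
    )

def loss_framed_dialog() -> Dialog:
    # Dark-pattern version: loss framing plus visual asymmetry
    # steer the user toward subscribing.
    return Dialog(
        message="Don't lose your 30-day learning path! "
                "If you don't subscribe now, your work will be lost.",
        buttons=[Button("Keep My Progress—Subscribe", "primary"),
                 Button("End Trial", "muted")],
    )
```

Note that the underlying choice set is identical in both dialogs; only the framing and the visual weight of the options change, which is exactly what makes choice architecture so hard to spot.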

Choice architects use our psychological tendencies, heuristics, and biases to ‘guide’ or nudge behavior online. A nudge here and there is a normal part of a choice process: it predictably influences your behavior without taking any options away. A fitness app sends you a push notification right after work, suggesting a quick workout that fits your usual free time. You are not required to exercise, but the nudge makes it easier to act because it arrives when you have time. In this way, choice architects can build digital environments that combine knowledge of how people act with the predictive power of machine learning models, the contextual understanding of large language models, and the natural-sounding responses of generative AI.

Panic Buttons and Phantom Timers

Within this frame of reference, it is also important to look at human tendencies and how they fit into the puzzle. You pick up your phone and open your go-to social media app. A pop-up suggests a new app that might be better, but you ignore it; sticking with what you know feels easier. This is a tendency in action: choosing the familiar over the new, even if the new might be better. We all do this in small ways every day. Tendencies are the mental shortcuts we often take without realising it. These common thought patterns can be helpful, but they can also lead us astray from rational choices.

For example, what if your go-to social media app uses these tendencies to its advantage: “85% of your friends have switched to the new app. If you don’t upgrade now, your data and connections could be lost.” A countdown timer appears and starts ticking. The “Stay Here” button is small and grey, while the “Switch Now” button pulses red. Staying seems risky. Your finger hovers over “Switch Now” even though you know nothing about the new app. This shows how AI marketing pros can exploit the status quo bias and use social proof, fear of loss, fake scarcity, and visual manipulation to push users into deciding quickly.

When we are under stress, we fall back on mental shortcuts. You might think, “If 85% of my friends switched, it must be okay” as you look at that pulsing “Switch Now” button. This instant judgement, based on what other people do rather than on how good the app actually is, is a heuristic at work. If you click “Switch Now,” you might never look at the new app’s payment terms.

Now, imagine you are browsing a popular e-commerce app on your smartphone. You’ve been eyeing a particular product for a while but haven’t made the purchase yet. Suddenly, a notification appears: “Only 2 left in stock. This is being looked at right now by 11 other buyers.” In the corner, a small timer starts counting down from 5 minutes. “Save for Later” is hard to see, but “Buy Now” is instantly visible. As you imagine the item slipping away, your heart beats faster. You quickly tap “Buy Now” without taking the time to look at other options or compare prices.

In this situation, you have just experienced scarcity bias in action. The app exploited your fear of missing out by making you feel you had to act immediately in a competition that never existed. More likely than not, the stock count, the number of watchers, and the timer were all fabricated. By bypassing your normal decision-making process, this mental shortcut made you buy without thinking. Later, you may realise that you overpaid or bought something you didn’t need.
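A hypothetical sketch shows how trivially such scarcity cues can be generated server-side, completely decoupled from the real inventory. The function name and numbers are invented for illustration; the point is that none of the displayed signals need to reflect reality.

```python
import random

def scarcity_banner(real_stock, seed=None):
    """Illustrative only: produce scarcity cues that are independent
    of the actual inventory level, as in the pattern described above."""
    rng = random.Random(seed)
    return {
        # Claimed stock is capped at a tiny number regardless of reality.
        "stock_shown": min(real_stock, rng.randint(2, 3)),
        # "Watchers" are invented, not measured.
        "watchers": rng.randint(8, 14),
        # The countdown restarts for every visitor (5 minutes).
        "timer_seconds": 300,
    }
```

Even with 500 units in the warehouse, every visitor sees “only 2 or 3 left,” a fresh crowd of phantom watchers, and a timer that resets on each page load.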

Brain Hackers: Simple Questions Empty Your Wallet

Another tactic is the use of quizzes and questions to lure you into decisions you otherwise might not have made. Imagine you’re using a popular online bookstore app. You’ve just finished a quick three-question quiz about reading preferences. The app cheerfully announces, “Great job! We’ve found the perfect book for you based on your tastes.” It shows you a book cover with a “95% Match” badge. The “Add to Cart” button is large and inviting, while “See Other Recommendations” is smaller.

You feel a sense of accomplishment from completing the quiz. The perfect match makes you curious. You tap to read more and notice the app has added a virtual bookmark with your name on it to the book’s image. You tap “Add to Cart,” feeling like you have discovered a book that is uniquely meant for you.

These are what you would call “effort justification” and “emotional attachment” nudges in play. The app preyed upon your sense of involvement by making you think your answers mattered, even though the quiz was short. The virtual bookmark then created a personal, emotional bond. Because the app cleverly made you feel the book was made specifically for you, you quickly made up your mind instead of taking time to think.

Next, you decide to shop for headphones. You find a pair you like for $50. Just before you add them to your cart, a message pops up: “Upgrade to wireless for only $15 more!” The app shows a picture of tangled wires next to a person enjoying wireless headphones. Again, you see visual prompts to upgrade, drawing your mind to the freedom of no wires for only a small price increase. This is an example of narrow framing, which forces you to concentrate on just one element of your choice: the app used a comparison nudge (wired versus wireless) and a framing nudge (showing the price as a small add-on rather than a new total). By narrowing your focus, it led you to make a quick choice.
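The framing nudge above can be reduced to two ways of printing the same arithmetic. This is a toy sketch with invented function names; the incremental frame hides the total the buyer will actually pay, while the neutral frame states it.

```python
def incremental_frame(base_price, upgrade_cost):
    # Framing nudge: present only the delta, hiding the new total.
    return f"Upgrade to wireless for only ${upgrade_cost:.0f} more!"

def total_frame(base_price, upgrade_cost):
    # Neutral framing: state the full price the buyer will actually pay.
    return f"Wireless version: ${base_price + upgrade_cost:.0f} total."
```

“Only $15 more” and “$65 total” describe the identical transaction, yet the first anchors your attention on a small number and the second on the real cost of the decision.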

How AI Can Turn Your Data into Manipulation

Lastly, let’s look at how the data collected about you can be used to influence your future decisions. Suppose a standard generative AI-as-a-service (AIaaS) tool, easy to find on the market, were used in a political campaign. The job of this AI system is to craft millions of highly specific messages, each carefully tailored to appeal to a different voter. The campaign feeds the AI psychological profiles of voters built from their online activities, social media posts, and digital footprints. This information lets the AI write messages that speak directly to each voter’s worries, hopes, and fears.

However, these autonomous agents can go far beyond simple personalisation. They employ sophisticated framing and nudging techniques, exploiting your drive for consistency between your beliefs and actions, even if it means deviating from rational thought or ethical norms. Such software can manipulate the textual, visual, and audio content you consume, tailoring each piece to exploit your psychological vulnerabilities, all based on real-time analysis of your behavior and likely psychological state.
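To show the shape of such a pipeline without touching any real AIaaS API, here is a deliberately simplified sketch in which static templates stand in for a generative model. Everything here (the field names, the threshold, the templates) is hypothetical; a real system would generate free-form text, but the profile-keyed framing step is the same.

```python
# Hypothetical templates standing in for a generative model's output.
# A fear-driven profile gets loss framing; an optimistic one gets gain framing.
TEMPLATES = {
    "fear": "Candidate X is the only one who can keep {issue} from getting worse.",
    "hope": "Candidate X has a plan to make {issue} better for families like yours.",
}

def craft_message(profile):
    # Pick the frame matching the voter's inferred emotional driver,
    # then fill in the issue their online activity suggests they care about.
    frame = "fear" if profile.get("anxiety_score", 0) > 0.5 else "hope"
    return TEMPLATES[frame].format(issue=profile["top_issue"])
```

Two voters who care about the same issue receive opposite emotional framings of the same candidate, each calibrated to the profile inferred from their data, which is exactly why this scales into manipulation rather than mere personalisation.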

Hack-Proof Your Brain: 7 Steps to Avoiding AI Mind-Control

Think you are immune to online manipulation? The internet is full of clever ways to change what you do and how you think. No need to worry, though: the following tips will help you navigate the digital world like a pro:

1. Trust your gut. Always ask questions. Did that funny cat video suddenly turn political? Are you feeling incredibly angry after reading the news? Your brain is telling you that something is not right. Pay attention to how you feel.

2. Add some variety to your feed. Follow people who think differently than you do; it’s good for your brain. If you only see posts you agree with, you are living in a bubble. Try following news sources from different political parties. You might be surprised at what you learn, and it will sharpen your thinking.

3. Fact-check before you share. Don’t spread lies by accident. That shocking statistic might be fake. Take a minute to research before hitting ‘share’. It’ll feel great knowing you’re not part of the problem.

4. Learn how AI and behavioral marketing work. Fear and social proof are used to manipulate you into spending money. Once you know the tactics, you’ll see them everywhere, and they’ll be less effective.

5. Guard your information. The less you share, the safer you are. Think twice before taking that fun personality quiz or sharing your birth date online. This information can be used to send you material designed to trick you. Regularly check your social media privacy settings. Being vague online is wise.

6. Take breaks from your phone; your brain needs rest. Try putting away your devices an hour before bed, or have phone-free Saturdays. You will be surprised at how refreshed you feel, and you might even sleep better. Plus, it’s harder to manipulate you when you’re not always online.

7. Talk to people. You stay grounded when you talk about problems with real people. Talk about that emotional subject with your friends over coffee instead of writing about it on Facebook. Real-life conversations help you understand different viewpoints and remind you that the world isn’t as polarised as the internet makes it seem.

Remember, your mind is powerful, and you can use these tips to keep it that way. The more you practice, the better you’ll get at spotting AI manipulation, which is a superpower in the digital age. Ready to outsmart the internet? Start with one tip today and see how it changes your digital experience. You’ve got this!