00:002025 wasn't just another tech year. It's the year AI got personal.
00:07It spoke to us and shaped our emotions. AI brought us amazing new tools and great risks along with
00:14them. Here are three ways AI has shaped our lives and how to stay safe in 2026.
00:19AI chatbots with real-time voices now sound strikingly human.
00:23Many of us use them for work, but others turn to them for companionship and personal advice.
00:31And that's becoming a problem. Heavy daily chatbot use correlates with higher loneliness,
00:37stronger dependence and lower socialization, according to a 2025 MIT study. And that's not all.
00:44This year, psychiatrists discussed the emergence of a new psychosis induced by AI. People have
00:50spiraled out of control after bots validated their darkest thoughts. It's not a formal diagnosis yet,
00:56but the risks are being seriously discussed. In 2025, several US families have sued Character AI,
01:02an AI chatbot service, alleging that its roleplay bots harm teens and encourage suicide. The cases
01:10are moving forward. Why does this happen? Chatbots mirror our emotions. They don't apply reality checks.
01:17In this way, they can reinforce people's isolation and delusions. So here are three things not to do.
01:23Don't treat chatbots as therapists. They aren't designed for that. Avoid spending too much time
01:28with them. It can distort your thinking. And in general, be careful about what private data you're
01:34sharing with chatbots. Bottom line, use them for fun or productivity, not as your only lifeline.
01:41Next up, our faces. Tools like Sora 2 and Nano Banana now let you upload your photo or video
01:47and star in hyper-realistic images and videos. Sounds cool for influencers. And social media
01:53companies like Meta are indeed targeting this demographic with additional AI tools. They want
01:58them to produce more content to keep us longer on their platforms. Originally, using people's faces was
02:05restricted on mass-market AI generators. But the dark side is that now your face can be cloned in minutes.
02:12And while there are safeguards in place on Sora 2 or Google's VO3, the technical progress in the field
02:18has made identity theft and revenge porn explode. Three tips to protect yourself. Always think before
02:24uploading images or videos with your face. Once it's out there, it's exploitable. It might sound dramatic,
02:30but it's wise to question every image you see on social media. And verify before
02:38sharing video proof. Deepfakes can mimic anyone, even their voices. The same tech that powers creative
02:44videos is fueling scams. In 2025, scammers leveled up. Imagine getting a call from your child in distress,
02:52crying and begging for money. You send cash, only to learn it was an AI deepfake. This isn't hypothetical.
02:59A US mom got a call from what sounded like her daughter. She only realized it was an AI scam after
03:05the FBI got involved. Crimes like these have risen rapidly in 2025. According to research by
03:11cybersecurity company Surfshark, financial losses from AI fraud reached more than 400 million US
03:17dollars in the first half of 2025. That's more than in all of 2024. Here are some strategies to stay safe.
03:24Always verify emergency calls through a second channel. Call back on a known number. Set safe words
03:30with friends and family. It's a quick way to confirm someone's identity. Be skeptical of urgency and
03:37emotion. Those are classic scam triggers. AI in 2025 gave us incredible tools, but also new risks. Stay
03:44curious. Stay cautious. And remember, tech is powerful, but so is your own judgment.