Technology

Your Chats With AI Might Be Risky. Here’s What You Need To Know

Security experts warn that AI can be put to malicious use that becomes harmful when it is not handled carefully. This is especially true for people who use AI chatbots such as ChatGPT.

According to the US Times Post, Yisroel Mirsky, head of the Offensive AI Research Lab at Israel’s Ben-Gurion University, warned in an email: “Currently, anyone can read private chats sent by ChatGPT and other services.”

What are these systems capable of that should worry us?

Before diving deeper into the issue, here is a reminder of what AI can do when it is misused:

  1. Spying on users’ conversations
  2. Collecting personal information
  3. Linking personal information to user accounts
  4. Storing sensitive data
  5. Requiring personal information to use the platform
  6. Preventing the use of disposable or masked information

Recent Case: ChatGPT Outage

OpenAI’s ChatGPT recently experienced a global outage that affected users worldwide. If you were affected, you may have been unable to access your ChatGPT account, seen an empty chat history, or found that chat screens would not load correctly. Many users were left with a blank screen showing only the prompt “How can I help you today?” and their chat history missing.

In a recent blog post, OpenAI explained the issue: the service went down because of a bug in an open-source library it uses. The bug allowed some users to see parts of other users’ conversations and, in some cases, parts of their payment information. OpenAI estimates that about 1.2% of ChatGPT Plus subscribers who were active during the window in question may have had details exposed to another user, including names, email addresses, and partial credit card numbers.

Although that figure may sound alarming, OpenAI believes the actual impact was far smaller: “We believe the number of users whose data was actually revealed to someone else is extremely low.”

OpenAI fixed the problem quickly by taking ChatGPT offline, but it could not restore all of the affected chat history. The company says it takes users’ privacy seriously, has apologized for the mistake, and has promised to work to make things right again.

“In the hours before we took ChatGPT offline on Monday, it was possible for some users to see another active user’s first and last name, email address, payment address, credit card type, the last four digits (only) of a credit card number, and credit card expiration date. Full credit card numbers were not exposed at any time,” they assured.
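To see how a software bug can hand one user’s data to another, consider the kind of mix-up that can occur when many users share a single backend connection or cache. The Python sketch below is entirely hypothetical and is not OpenAI’s code; it only illustrates the general failure mode, where a response that is never read by its intended recipient ends up being delivered to the next caller.

    # Hypothetical sketch of a shared-connection mix-up. NOT OpenAI's code;
    # it only illustrates how one user's data can be served to another.

    from collections import deque

    class SharedConnection:
        """A single backend connection reused by many users (simplified)."""
        def __init__(self):
            self._pending = deque()  # responses waiting to be read, in order

        def send(self, user, request):
            # The backend answers every request, even if the caller gives up.
            self._pending.append(f"billing details for {user} ({request})")

        def read(self):
            # Returns the OLDEST unread response -- whoever calls next gets it.
            return self._pending.popleft()

    conn = SharedConnection()

    # Alice sends a request, but her client times out and never reads the reply.
    conn.send("alice@example.com", "GET /billing")

    # Bob sends his own request and reads what he assumes is his reply.
    conn.send("bob@example.com", "GET /billing")
    print(conn.read())  # -> "billing details for alice@example.com ..."  (data leak)

The point of the sketch is that the leak comes from bookkeeping around requests and responses, not from anyone “hacking in,” which is why such bugs can go unnoticed until they surface in production.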

In its FAQ, OpenAI warns users not to share confidential or sensitive information with the chatbot.

What kind of information should you not share?

When using AI, treat the following kinds of information as private and keep them out of your chats:

  1. Personal details, such as your name and address, to protect your identity
  2. Company information, especially sensitive internal data
  3. Financial information, such as credit card numbers, to prevent fraud
  4. Personal secrets and confidential work-related details
  5. Intellectual property, including ideas and creative work
  6. Passwords, which should never be shared with AI systems or anyone else, to keep your online accounts secure
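If you routinely paste text into a chatbot, one low-tech precaution is to scrub obvious identifiers out of a prompt before it leaves your machine. Below is a minimal, hypothetical Python sketch of that idea; the patterns and placeholder names are illustrative only, and regular expressions alone are not a complete redaction solution.

    # A minimal sketch (not an official tool) for scrubbing obvious personal
    # details from a prompt before sending it to a chatbot. Real redaction
    # needs far more than regular expressions; this only shows the idea.

    import re

    PATTERNS = {
        "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "[CARD]":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "[PHONE]": re.compile(r"\b\+?\d{1,3}[ -]?\(?\d{2,3}\)?[ -]?\d{3}[ -]?\d{4}\b"),
    }

    def redact(prompt: str) -> str:
        """Replace anything that looks like an email, card, or phone number."""
        for placeholder, pattern in PATTERNS.items():
            prompt = pattern.sub(placeholder, prompt)
        return prompt

    print(redact("Email me at jane.doe@example.com, card 4111 1111 1111 1111."))
    # -> "Email me at [EMAIL], card [CARD]."

Even a rough filter like this reduces what ends up in a provider’s logs and chat history, which matters if that history is ever exposed by a bug like the one described above.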

Hackers can steal your information with a side-channel attack

A side-channel attack occurs when hackers steal information by observing the communication between you and an AI system like ChatGPT, rather than breaking into the system itself. Even though the service encrypts your chats, an attacker who can watch your traffic, for example from the same Wi-Fi network or from somewhere along the path on the internet, can infer private details from patterns such as the size and timing of the packets. Because the AI system is never attacked directly, you may not notice anything at all, which is why it is important to be careful about what you share with a chatbot in the first place.
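Researchers, including Mirsky’s lab, have shown that when a chatbot streams its encrypted reply token by token, an eavesdropper can read the sizes of the packets and use them to infer information about the response. The toy Python simulation below is purely illustrative (the token list and per-packet overhead are made-up numbers) and only demonstrates why packet sizes alone can act as a fingerprint of an “encrypted” reply.

    # Simplified simulation of a traffic-analysis side channel. An observer on
    # the network cannot read encrypted chat content, but when a reply is
    # streamed one token per packet, the packet SIZES reveal each token's length.
    # (Hypothetical illustration; real attacks and defenses are more involved.)

    SECRET_REPLY = ["Your", " appointment", " is", " on", " Tuesday", "."]

    def stream_encrypted(tokens):
        """Pretend to encrypt and send each token; only sizes are observable."""
        overhead = 21  # assumed fixed per-packet overhead (made up)
        return [len(t.encode()) + overhead for t in tokens]

    observed_packet_sizes = stream_encrypted(SECRET_REPLY)
    leaked_token_lengths = [size - 21 for size in observed_packet_sizes]

    print(observed_packet_sizes)  # what a Wi-Fi eavesdropper sees
    print(leaked_token_lengths)   # -> [4, 12, 3, 3, 8, 1]: a fingerprint of the reply

Providers can blunt this kind of leak by padding or batching streamed responses, but as a user the safest assumption is that anything highly sensitive simply should not be typed into a chatbot.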

Conclusions

While AI chatbots like ChatGPT can be helpful in many ways, it is crucial to be cautious about the information we share with them. Recent cases have shown that there can be vulnerabilities in the system that may compromise our privacy and security. It is essential to refrain from sharing personal details such as names, addresses, financial information, and confidential work-related information.

Additionally, we should be aware of side-channel attacks, where hackers can intercept our communication with AI systems and steal our data. By maintaining vigilance and being mindful of what we share, we can mitigate the risks associated with AI and protect our privacy effectively.
