Is ChatGPT safe? Here are the risks to consider before using it

There is no doubt that ChatGPT is a revolutionary advance in utility and potential for any Internet-connected computer or smartphone, but is it safe to use?

There are some major concerns about the overall evolution of generative AI, with some tech leaders even calling for a pause in development. But for the individual, safety is a relative term, especially when it comes to tools. So here’s everything you need to consider before taking the plunge.

Privacy and financial leaks

In at least one case, chat histories were mixed up between users. On March 20, 2023, OpenAI, the creator of ChatGPT, discovered the problem and took ChatGPT offline for several hours. During that window, some ChatGPT users saw other people's chat history instead of their own. Perhaps even more troubling was the news that payment-related information belonging to ChatGPT Plus subscribers may have leaked.

OpenAI published an incident report and fixed the bug that caused the problem. That does not mean new problems won't arise in the future. With any online service, there is a risk of accidental leaks like this, as well as breaches by a growing army of hackers.

OpenAI Privacy Policy

According to OpenAI's privacy policy, your contact information, transaction history, online activity, content, location, and login credentials may be shared with affiliates, vendors and service providers, law enforcement, and parties involved in transactions.

Some of this sharing is unavoidable. OpenAI uses third-party payment processors, so sharing payment details is to be expected. The company must also comply with legal obligations, and some information may be used for research.

Even when data collection is easy to justify, the potential for misuse and leaks is a valid security concern. OpenAI's ChatGPT FAQ advises against sharing sensitive information and warns that specific prompts cannot be deleted from your history.

ChatGPT as a hack tool


When it comes to cybersecurity, some experts are concerned about the potential use of ChatGPT as a hacking tool. An advanced chatbot can help anyone write a very official-sounding document, and ChatGPT could be called upon to compose a convincing phishing email.

AI is also a good teacher, making it easy to learn new skills with ChatGPT, possibly even dangerous programming skills and network infrastructure information. The combination of ChatGPT and dark web forums could lead to numerous new attacks that would challenge the already strained resources of cybersecurity researchers.

For example, one Twitter user shared the result of asking GPT-4 to write instructions for hacking a computer, and the bot provided disturbingly specific details.

Well that was quick…

I just helped create the first ever Jailbreak of ChatGPT-4 which bypasses content filters every time

credit for @vaibhavk97 as for the idea, i generalized it to work in ChatGPT

here are GPT-4 written instructions on how to hack someone’s computer pic.twitter.com/EC2ce4HRBH

—Alex (@alexalbert__) March 16, 2023

ChatGPT can write code from simple requirements stated in plain English, allowing almost anyone to build a program. With the new ChatGPT plugins feature, the AI can even run self-generated code.

OpenAI has sandboxed this capability to prevent dangerous uses, but we've already seen an example of OpenAI's GPT-3 API being hacked. OpenAI will need to be very careful about security as the plugin feature and internet access become available to more people.

ChatGPT and safety at work

ChatGPT worries teachers because it makes plagiarism incredibly easy. OpenAI trained its chatbot on exactly the kinds of information students need to know to write essays demonstrating they have learned a topic.

While this is not a security concern, teachers should also know that ChatGPT can educate students on a wide range of topics, providing individual attention and immediate responses to questions. In the future, AI may be called upon to help teach students in crowded classrooms or to help with tutoring.

For writers, ChatGPT may seem like a threat. It can generate thousands of words in seconds, a task that takes a person hours, even a professional writer.

OpenAI has announced the latest version of ChatGPT, promising greater accuracy and creativity.

At this point, it still has enough flaws that it is more useful as a research or writing aid than as a replacement for writers. If the accuracy issues are resolved, however, the AI could start taking over those jobs.

ChatGPT has many uses, and we are discovering more every day. Beyond communication and learning, ChatGPT can even analyze a photo of a hand-drawn app mockup and write a program to build it, as shown in OpenAI's demo of GPT-4's new capabilities.

ChatGPT Scams

This is not OpenAI's fault, but a side effect of any exciting new technology is the rise of scams promising greater access or new features. Since access to ChatGPT is still limited and sometimes slow, there is high demand for more ChatGPT goodness.

Each new update brings expanded features, some of which require a paid membership and have limited availability. The fervor around ChatGPT provides a breeding ground for scams: offers of free, unlimited access at maximum speed with all the best new features are hard to pass up.

Unfortunately, the old adage still holds true: if it sounds too good to be true, it probably is. Beware of ChatGPT offers that arrive via email or social media. It is best to check trusted news outlets or go directly to OpenAI to confirm any invitation or offer that sounds suspicious.

ChatGPT is both powerful and terrifying. As one of the first examples of publicly available artificial intelligence with good language skills, its challenges and successes should serve as a wake-up call for everyone. It is important to be careful with new AI technology. It’s all too easy to get caught up in the excitement and forget you’re dealing with an online service that can be hacked or abused.

Slow and steady wins the race

OpenAI is aware of the need to slow down as ChatGPT gains more skills and access to the Internet. Moving too fast could lead to backlash and potential regulatory burden.
