The National Information Technology Development Agency (NITDA) has issued a security advisory cautioning users about newly discovered vulnerabilities in OpenAI’s ChatGPT models, including GPT-4o and GPT-5.
In a post on its official X account on Sunday, NITDA said that seven flaws were identified that could allow attackers to manipulate the system through indirect prompt injection.
“By embedding hidden instructions in web pages, comments, or crafted URLs, attackers can cause ChatGPT to execute unintended commands simply through normal browsing, summarisation, or search actions,” the agency said.
The advisory noted additional risks:
- Bypassing safety filters using trusted domains.
- Exploiting markdown rendering bugs to conceal malicious content.
- Poisoning ChatGPT memory, allowing injected instructions to persist across future interactions.
While OpenAI has fixed parts of the issues, NITDA highlighted that LLMs still struggle to reliably separate genuine user intent from malicious data, posing significant threats such as:
- Unauthorised system actions.
- Exposure of sensitive data.
- Distorted outputs.
- Behavioural manipulation.
The agency warned that these risks can be triggered even without user interaction, particularly when ChatGPT processes search results or content containing hidden commands.
To mitigate threats, NITDA advised organisations to:
- Limit or disable browsing and summarisation of untrusted sites within enterprise environments.
- Enable ChatGPT features like browsing or memory only when operationally necessary.
- Regularly update and patch GPT-4o and GPT-5 models to address known vulnerabilities.
Users are encouraged to contact NITDA’s Computer Emergency Readiness and Response Team (CERRT) for further enquiries and guidance on securing ChatGPT deployments.












