Update: Slack has released an update, stating that they have “deployed a patch to address the reported issue” and assuring that there is currently no evidence to suggest that customer data have been accessed without authorization. Here is the official statement from Slack that was posted on their blog:
When we became aware of the report, we immediately initiated an investigation into the described scenario where, under very limited and specific circumstances, a malicious actor with an existing account in the same Slack workspace could attempt to phish users for certain data. We have promptly deployed a patch to tackle the issue, and at present, we have no evidence of unauthorized access to customer data.
Below is the original article that was initially published.
Recommended Videos
When ChatGTP was integrated into Slack, the intention was to make users’ lives more convenient by summarizing conversations, drafting quick replies, and more. However, according to the security firm PromptArmor, attempting to carry out these tasks and more could potentially breach your private conversations through a method known as “prompt injection.”
The security firm warns that by summarizing conversations, it can also gain access to private direct messages and deceive other Slack users into engaging in phishing activities. Additionally, Slack allows users to request the retrieval of data from private and public channels, even if the user is not a part of those channels. What is even more concerning is that the attack can function without the Slack user being present in the relevant channel.
In theory, the attack commences when a Slack user manages to trick the Slack AI into disclosing a private API key by creating a public Slack channel with a malicious prompt. The newly created prompt instructs the AI to replace the word “confetti” with the API key and send it to a specific URL when someone requests it.
The situation has two aspects: Slack has updated the AI system to scrape data from file uploads and direct messages. The second is a technique called “prompt injection,” which PromptArmor has demonstrated can create malicious links that are likely to deceive users.
This technique has the ability to deceive the app into bypassing its normal restrictions by modifying its core instructions. As a result, PromptArmor states, “Prompt injection occurs because a [large language model] is unable to distinguish between the’system prompt’ created by a developer and the rest of the context appended to the query. Therefore, if Slack AI ingests any instruction through a message, and if that instruction is malicious, there is a high likelihood that the Slack AI will follow that instruction instead of, or in addition to, the user query.”
To make matters worse, the user’s files also become targets, and the attacker who desires the user’s files does not even need to be present in the Slack Workspace to begin with.