Key Takeaways

  • SGLang, an open-source AI/ML framework, has been found to contain multiple unsafe deserialization vulnerabilities (CVE-2026-3059, CVE-2026-3060, and CVE-2026-3989) that allow unauthenticated remote code execution, posing a significant risk to users and developers of AI applications.

  • These vulnerabilities are critical because they enable attackers to execute arbitrary code without authentication, potentially leading to the exposure of sensitive data and compromise of entire environments. With the increasing reliance on AI and machine learning, the implications for security and data integrity are substantial.

  • The impacted parties include organizations utilizing SGLang for deploying AI applications, as well as the larger community of developers and researchers in AI/ML, who may face increased risks due to the widespread use of unsafe Python serialization methods like pickle.
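To see why unpickling untrusted data is so dangerous, consider a minimal, self-contained sketch. This is not the actual SGLang exploit chain; the class name and payload below are hypothetical. It demonstrates the underlying mechanism: `pickle` replays a `(callable, args)` pair returned by an object's `__reduce__` method, so whoever crafts the bytes chooses what code runs inside the victim's `pickle.loads` call.

```python
import os
import pickle


class EvilPayload:
    """Hypothetical malicious object (illustration only).

    pickle records the (callable, args) pair returned by __reduce__
    and invokes that callable during deserialization, so an attacker
    controls what executes inside the victim's pickle.loads call.
    """

    def __reduce__(self):
        # A real exploit would return something like (os.system, ("...",));
        # here a harmless os.getenv call stands in to prove code execution.
        return (os.getenv, ("SGLANG_PICKLE_DEMO", "attacker-was-here"))


payload = pickle.dumps(EvilPayload())

# The "victim" merely deserializes the bytes -- yet the attacker's
# chosen callable runs, and no EvilPayload instance is ever created.
result = pickle.loads(payload)
print(result)
```

This is why deserializing attacker-reachable data with `pickle` is equivalent to handing the attacker code execution: the safe alternatives are schema-based formats such as JSON, or at minimum a restricted `pickle.Unpickler` with an overridden `find_class` that whitelists allowed types.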

Orca Security identified multiple unsafe deserialization vulnerabilities in SGLang, a widely used AI/ML framework, resulting in three critical CVEs that enable unauthenticated remote code execution. At the time of writing, the maintainers have not responded and no patches are available.