Google Cloud Platform Security Advisory
TorchServe is used to host PyTorch machine learning models for online prediction. Vertex AI provides prebuilt PyTorch model serving containers which depend on TorchServe. Vulnerabilities were recently discovered in TorchServe which would allow an attacker to take control of a TorchServe deployment if its model management API is exposed. Customers with PyTorch models deployed to Vertex AI online prediction are not affected by these vulnerabilities, since Vertex AI does not expose TorchServe’s model management API. Customers using TorchServe outside of Vertex AI should take precautions to ensure their deployments are set up securely.
What should I do?
Vertex AI customers with deployed models using Vertex AI’s prebuilt PyTorch serving containers do not need to take any action to address the vulnerabilities, since Vertex AI’s deployments do not expose TorchServe’s management server to the internet.
Customers who are using the prebuilt PyTorch containers in other contexts, or who are using a custom-built or third-party distribution of TorchServe, should do the following:
- Upgrade to TorchServe 0.8.2 or later.
- Ensure that TorchServe's management API is bound to localhost and is not reachable from untrusted networks.
- Configure allowed_urls so that models can only be loaded from trusted sources.
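As one illustration, a hardened config.properties for a self-managed TorchServe deployment might look like the sketch below. The addresses and the allowed_urls pattern are placeholder examples, not shipped defaults; adapt them to your environment.

```properties
# Expose inference only if it must be reachable externally;
# keep the management API on loopback so it cannot be reached remotely.
inference_address=http://0.0.0.0:8080
management_address=http://127.0.0.1:8081
# Restrict model sources to trusted locations (example pattern).
allowed_urls=https://models.example.com/.*
```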
What vulnerabilities are being addressed?
TorchServe’s management API is bound to 0.0.0.0 by default rather than localhost, which makes it accessible to remote requests unless the deployment restricts access in some other way. Check your TorchServe deployments’ configuration to ensure that the management API is only reachable from trusted networks.
CVE-2023-43654 and CVE-2022-1471 allow a user with access to the management API to load models from arbitrary sources and remotely execute code. Mitigations for both of these issues are included in TorchServe 0.8.2: the remote code execution path is removed, and a warning is emitted if the default value for allowed_urls is in use.
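The two risky settings described above can be audited mechanically. The following is a minimal sketch, not an official TorchServe tool: it parses a config.properties-style text and flags a management API bound to 0.0.0.0 or an unset allowed_urls. The key names are real TorchServe configuration properties; the warning messages are illustrative.

```python
def audit_config(text: str) -> list[str]:
    """Return warnings for risky TorchServe settings in a
    config.properties-style string (key=value lines)."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip()

    warnings = []
    mgmt = settings.get("management_address", "")
    # Binding the management API to 0.0.0.0 exposes it to remote hosts.
    if "0.0.0.0" in mgmt or not mgmt:
        warnings.append("management_address is not restricted to localhost")
    # An unset allowed_urls falls back to the permissive default.
    if "allowed_urls" not in settings:
        warnings.append("allowed_urls is unset; models may load from arbitrary URLs")
    return warnings


example = """\
inference_address=http://0.0.0.0:8080
management_address=http://0.0.0.0:8081
"""
print(audit_config(example))
```

Running the sketch against the example configuration reports both issues; a configuration with management_address on 127.0.0.1 and an explicit allowed_urls passes cleanly.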