
A critical security flaw has been revealed in LangChain Core. It can also be exploited by an attacker to steal sensitive secrets and influence large-scale language model (LLM) responses through prompt injection.
LangChain Core (i.e. langchain-core) is a core Python package that is part of the LangChain ecosystem and provides core interfaces and model-agnostic abstractions for building LLM-powered applications.
This vulnerability is tracked as CVE-2025-68664 and has a CVSS score of 9.3 out of 10.0. Security researcher Yarden Porat reportedly reported the vulnerability on December 4, 2025. The code name is LangGrinch.
“A serialization injection vulnerability exists in LangChain’s dumps() and dumpd() functions,” project administrators said in an advisory. “The function does not escape the dictionary using the ‘lc’ key when serializing a free-form dictionary.”

“The ‘lc’ key is used internally by LangChain to mark serialized objects. If user-controlled data contains this key structure, it will be treated as a regular LangChain object during deserialization rather than plain user data.”
According to Cyata researcher Porat, the crux of the issue involves two functions that fail to escape user-controlled dictionaries containing the “lc” key. The “lc” marker represents a LangChain object in the framework’s internal serialization format.
“So if an attacker were able to serialize and then deserialize content containing the ‘lc’ key in the LangChain orchestration loop, an arbitrary insecure object could be instantiated, triggering many paths in the attacker’s favor,” Porat said.
This can have a variety of consequences, including extracting secrets from environment variables when deserialization is performed with “secrets_from_env=True” (previously set by default), instantiating classes within pre-approved trusted namespaces such as langchain_core, langchain, langchain_community, and even potentially leading to arbitrary code execution via Jinja2 templates.
Additionally, the escape bug allows injection of LangChain object structures via user-controlled fields such as metadata via prompt injection, additional _kwargs, or response metadata.
A patch released by LangChain introduces new restrictive defaults for load() and loads() with an allowlist parameter “allowed_objects” that allows users to specify which classes can be serialized/deserialized. Additionally, Jinja2 templates are now blocked by default and the “secrets_from_env” option is set to “False” to disable automatic secret loading from the environment.
The following versions of langchain-core are affected by CVE-2025-68664.
>= 1.0.0, < 1.2.5 (fixed in 1.2.5) < 0.3.81 (fixed in 0.3.81)
It is worth noting that a similar serialization injection flaw exists in LangChain.js. This is also due to not properly escaping the object with the “lc” key, allowing secret extraction and prompt injection. This vulnerability has been assigned CVE identifier CVE-2025-68665 (CVSS score: 8.6).

Affects the following npm packages:
@langchain/core >= 1.0.0, < 1.1.8 (fixed in 1.1.8) @langchain/core < 0.3.80 (0.3.80で修正) langchain >= 1.0.0, < 1.2.3 (fixed in 1.2.3) langchain < 0.3.37 (fixed in 0.3.37)
Given the importance of this vulnerability, we recommend that users update to the patched version as soon as possible for optimal protection.
“The most common attack vector is via LLM response fields such as addition_kwargs and response_metadata, which can be controlled by prompt injection and serialized/deserialized in streaming operations,” Porat said. “This is exactly the intersection of ‘AI meets traditional security’ where organizations are caught off guard. The LLM output is an untrusted input.”
Source link
