Use-after-free / Memory safety
Description
The commit fixes a use-after-free in the HTTP parser when freeParser() is called during llhttp_execute() while processing pipelined HTTP requests. If a synchronous 'close' event handler frees the parser mid-execution, the parser state is nulled while llhttp_execute() is still on the stack, which could crash on the next callback. The fix introduces an is_being_freed_ flag, sets it via parser->markFreed() in freeParser(), and makes Proxy::Raw abort callbacks if the flag is set, ensuring safe unwinding and preventing memory-safety crashes. A regression test was added to cover the scenario of two pipelined requests in one write.
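The C++ details aside, the effect of the guard can be sketched with a minimal JavaScript model (the `ModelParser` class here is hypothetical, not Node's internal parser; `markFreed` mirrors the name used in the commit):

```javascript
// Minimal model of the fix: callbacks dispatched during execute() are
// skipped once the parser has been marked as freed mid-execution.
class ModelParser {
  constructor(onRequest) {
    this.isBeingFreed = false; // mirrors is_being_freed_ in node_http_parser.cc
    this.onRequest = onRequest;
  }
  markFreed() { // mirrors parser.markFreed() called from freeParser()
    this.isBeingFreed = true;
  }
  execute(requests) {
    const handled = [];
    for (const req of requests) {
      // Mirrors the Proxy::Raw check: bail out before every callback if freed.
      if (this.isBeingFreed) break;
      this.onRequest(req);
      handled.push(req);
    }
    return handled;
  }
}

// Two pipelined requests; the handler for the first frees the parser,
// so the callback for the second must not run.
const parser = new ModelParser(() => parser.markFreed());
console.log(parser.execute(['GET /a', 'GET /b']).length); // prints 1
```

Without the `isBeingFreed` check, the loop would keep invoking callbacks against state that `freeParser()` has already torn down, which is the crash the real fix prevents.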
Proof of Concept
The following is reproducible on pre-fix Node.js builds.
Prerequisites:
- A Node.js build prior to this fix (pre-25.9.0) with the vulnerable HTTP parser.
- A client capable of sending two HTTP requests in a single TCP write (pipelining).
Server (runs on an arbitrary port):
const http = require('http');
const net = require('net');

const server = http.createServer((req, res) => {
  // Trigger a synchronous 'close' during parsing to simulate freeParser()
  // being called mid llhttp_execute()
  req.socket.emit('close');
  res.end();
});

server.listen(0, () => {
  const port = server.address().port;
  // Client sends two HTTP requests back-to-back in one TCP write (pipelined)
  const client = net.connect(port, '127.0.0.1', () => {
    client.write(
      'GET / HTTP/1.1\r\nHost: localhost\r\nConnection: keep-alive\r\n\r\n' +
      'GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n'
    );
  });
  client.on('data', () => { /* ignore response bodies */ });
  client.on('end', () => {});
  client.on('error', (e) => console.error('client error', e));
  client.on('close', () => {
    server.close();
  });
});
Expected behavior on pre-fix builds:
- The server may crash with a use-after-free: freeParser() is invoked while llhttp_execute() is still on the stack for the first request, and without the flag guard the parser's callbacks continue running against freed state.
Expected behavior on fixed builds:
- The callbacks are aborted safely when is_being_freed_ is true, preventing the crash and allowing the server to terminate gracefully.
Commit Details
Author: Gerhard Stöbich
Date: 2026-03-06 06:01 UTC
Message:
http: fix use-after-free when freeParser is called during llhttp_execute
When pipelined requests arrive in one TCP segment, llhttp_execute()
parses them all in a single call. If a synchronous 'close' event
handler invokes freeParser() mid-execution, cleanParser() nulls out
parser state while llhttp_execute() is still on the stack, crashing on
the next callback.
Add an is_being_freed_ flag that freeParser() sets via
parser.markFreed() before cleaning state. Proxy::Raw checks the flag
before every callback and returns HPE_USER to abort execution early if
set.
PR-URL: https://github.com/nodejs/node/pull/62095
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>
Reviewed-By: Colin Ihrig <cjihrig@gmail.com>
Reviewed-By: Anna Henningsen <anna@addaleax.net>
Reviewed-By: Yagiz Nizipli <yagiz@nizipli.com>
Triage Assessment
Vulnerability Type: Use-after-free / Memory safety
Confidence: HIGH
Reasoning:
The commit addresses a use-after-free in the HTTP parser during llhttp_execute() by introducing a guard flag (is_being_freed_) and aborting callbacks once freeing has begun. This prevents crashes, and potential exploitation, stemming from parser state being nulled while llhttp_execute() is still on the stack, a condition reachable when pipelined requests arrive in a single TCP segment. The added regression test confirms the fix targets this security-sensitive crash scenario.
Verification Assessment
Vulnerability Type: Use-after-free / Memory safety
Confidence: HIGH
Affected Versions: < 25.9.0
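Whether a running process falls in the affected range can be checked from `process.version`. A minimal sketch, assuming semver-style `vMAJOR.MINOR.PATCH` tags and only the < 25.9.0 cutoff stated above (pre-release suffixes are not handled):

```javascript
// First release containing the fix, per the assessment above.
const FIXED = [25, 9, 0];

function isAffected(version) {
  // 'v25.8.0' -> [25, 8, 0]; compare component-wise against FIXED.
  const parts = version.replace(/^v/, '').split('.').map(Number);
  for (let i = 0; i < 3; i++) {
    if (parts[i] < FIXED[i]) return true;
    if (parts[i] > FIXED[i]) return false;
  }
  return false; // exactly 25.9.0 is fixed
}

console.log(`${process.version} affected: ${isAffected(process.version)}`);
```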
Code Diff
diff --git a/lib/_http_common.js b/lib/_http_common.js
index 019b73a1ff225e..3c389ba054decc 100644
--- a/lib/_http_common.js
+++ b/lib/_http_common.js
@@ -189,8 +189,8 @@ function freeParser(parser, req, socket) {
   if (parser) {
     if (parser._consumed)
       parser.unconsume();
-    cleanParser(parser);
     parser.remove();
+    cleanParser(parser);
     if (parsers.free(parser) === false) {
       // Make sure the parser's stack has unwound before deleting the
       // corresponding C++ object through .close().
diff --git a/src/node_http_parser.cc b/src/node_http_parser.cc
index 8edd9f9523fd6f..50d7f9e6916096 100644
--- a/src/node_http_parser.cc
+++ b/src/node_http_parser.cc
@@ -628,6 +628,8 @@ class Parser : public AsyncWrap, public StreamListener {
     Parser* parser;
     ASSIGN_OR_RETURN_UNWRAP(&parser, args.This());
+    parser->is_being_freed_ = true;
+
     if (parser->connectionsList_ != nullptr) {
       parser->connectionsList_->Pop(parser);
       parser->connectionsList_->PopActive(parser);
@@ -1012,6 +1014,7 @@ class Parser : public AsyncWrap, public StreamListener {
     num_values_ = 0;
     have_flushed_ = false;
     got_exception_ = false;
+    is_being_freed_ = false;
     headers_completed_ = false;
     max_http_header_size_ = max_http_header_size;
   }
@@ -1056,6 +1059,7 @@ class Parser : public AsyncWrap, public StreamListener {
   size_t num_values_;
   bool have_flushed_;
   bool got_exception_;
+  bool is_being_freed_ = false;
   size_t current_buffer_len_;
   const char* current_buffer_data_;
   bool headers_completed_ = false;
@@ -1075,6 +1079,9 @@ class Parser : public AsyncWrap, public StreamListener {
   struct Proxy<int (Parser::*)(Args...), Member> {
     static int Raw(llhttp_t* p, Args ... args) {
       Parser* parser = ContainerOf(&Parser::parser_, p);
+      if (parser->is_being_freed_) {
+        return 0;
+      }
       int rv = (parser->*Member)(std::forward<Args>(args)...);
       if (rv == 0) {
         rv = parser->MaybePause();
diff --git a/test/parallel/test-http-parser-freed-during-execute.js b/test/parallel/test-http-parser-freed-during-execute.js
new file mode 100644
index 00000000000000..c7605b75d95aa5
--- /dev/null
+++ b/test/parallel/test-http-parser-freed-during-execute.js
@@ -0,0 +1,23 @@
+'use strict';
+
+const common = require('../common');
+const { createServer } = require('http');
+const { connect } = require('net');
+
+// Regression test: ensure llhttp_execute() is aborted when freeParser() is
+// called synchronously during parsing of pipelined requests.
+const server = createServer(common.mustCall((req, res) => {
+  req.socket.emit('close');
+  res.end();
+}, 1));
+
+server.unref();
+
+server.listen(0, common.mustCall(() => {
+  // Two pipelined requests in one write, processed by a single llhttp_execute().
+  const client = connect(server.address().port);
+  client.end(
+    'GET / HTTP/1.1\r\nHost: localhost\r\nConnection: keep-alive\r\n\r\n' +
+    'GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n',
+  );
+}));