Memory safety / Uninitialized read in HFS+ catalog record parsing
Description
The commit contains a genuine security vulnerability fix in the HFS+ handling code of the Linux kernel. It addresses a memory-safety issue where syzbot observed an uninitialized value being read from on-disk HFS+ catalog records. The root cause is that the code path reading catalog records did not validate that the on-disk record size (the length field for a catalog entry) matches the expected size for the record type. The patch introduces hfsplus_brec_read_cat(), which validates the record size against the type and returns -EIO on mismatch, preventing potential reads of uninitialized kernel memory. Additionally, the commit includes related hardening such as detecting corrupted allocator state during hfs_btree_open() (mount read-only on corruption) and fixing error-path lock handling to avoid deadlocks. These changes reduce exposure to memory-safety issues and panics when processing corrupted HFS+ images.
In short, this is a targeted memory-safety vulnerability fix in HFS+ catalog record parsing, not merely a dependency bump or a cosmetic code cleanup.
Proof of Concept
PoC Overview:
- This vulnerability exists in Linux kernels that include the vulnerable HFS+ code but predate the introduction of hfsplus_brec_read_cat(). An attacker who can get a corrupted or crafted HFS+ image mounted (e.g., via removable media or a user-assisted mount) can trigger a kernel read of uninitialized memory when the on-disk catalog record size does not match the type-derived expected size.
Attack surface:
- A mounted HFS+ filesystem image containing intentionally corrupted catalog records (type + length) that do not conform to the HFS+ catalog record encoding.
Prerequisites:
- A kernel build that includes the vulnerable HFS+ code (e.g., v7.0-rc6) but not the patch in this commit.
- An HFS+ filesystem image crafted with a catalog entry whose on-disk record length (entrylength) does not equal the size expected for the catalog record type (folder, file, folder-thread, or file-thread).
- Access to a test machine or VM where you can build and boot a kernel and mount an HFS+ image.
Proof-of-Concept (high level, actionable steps):
1) Prepare a vulnerable kernel (pre-patch) and a test HFS+ image. Create the image and an HFS+ volume using standard tools (e.g., hfsprogs) in a test environment.
2) Locate a catalog record in the HFS+ Catalog File that describes a file or folder. This typically involves inspecting the Catalog File and its B-tree header to find a catalog record with a known type (e.g., kHFSPlusFileRecord).
3) Corrupt the on-disk record length for that catalog entry. Note that in this implementation entrylength is not a standalone field within the record; it is derived from the record-offset table at the end of the B-tree node (minus the key length), so in practice you adjust those offsets or the key length until the computed length no longer matches the fixed-size payload for the selected type (e.g., a value far larger or smaller than sizeof(struct hfsplus_cat_file), or than the calculated size for thread records). For example:
- type = HFSPLUS_FILE (0x0002)
- set entrylength to an incorrect value (e.g., 0xFFFF)
4) Mount the HFS+ image with the vulnerable kernel and perform a benign operation that traverses the catalog (e.g., access or stat the file you corrupted).
5) Observe the outcome. Before the fix, the kernel may read beyond the intended record bounds, leading to an uninitialized-value access reported by KMSAN or a kernel crash. After applying the patch, hfsplus_brec_read_cat() detects the mismatch and returns -EIO, preventing the unsafe read.
Concrete example (conceptual, replace offsets with real ones for your image):
- Open the HFS+ image and locate a catalog record for a file.
- Corrupt the bytes that determine the record's computed length (in this implementation, the node's record-offset table entries and the key length) so that entrylength evaluates to a wrong value such as 0xFFFF, while the type field remains 0x0002 (HFSPLUS_FILE).
- Mount and trigger a read of that catalog entry via normal filesystem operations.
- Expected result with the vulnerable kernel: an uninitialized-value read (e.g., a KMSAN report) or a crash; with the patched kernel: -EIO is returned and the read is safely aborted.
Note: The exact offsets and field layouts vary by HFS+ catalog record type and by the version of the HFS+ implementation. The key point is to create a mismatch between the on-disk record length and the calculated size for the catalog entry type so that the unvalidated read path is exercised. The patch adds an explicit check and rejects such mismatches, preventing the exploit.
Commit Details
Author: Linus Torvalds
Date: 2026-04-13 23:50 UTC
Message:
Merge tag 'hfs-v7.1-tag1' of git://git.kernel.org/pub/scm/linux/kernel/git/vdubeyko/hfs
Pull hfsplus updates from Viacheslav Dubeyko:
"This contains several fixes of syzbot reported issues and HFS+ fixes
of xfstests failures.
- Fix a syzbot reported issue of a KMSAN uninit-value in
hfsplus_strcasecmp().
The root cause was that hfs_brec_read() doesn't validate that the
on-disk record size matches the expected size for the record type
being read. The fix introduced hfsplus_brec_read_cat() wrapper that
validates the record size based on the type field and returns -EIO
if size doesn't match (Deepanshu Kartikey)
- Fix a syzbot reported issue of processing corrupted HFS+ images
where the b-tree allocation bitmap indicates that the header node
(Node 0) is free. Node 0 must always be allocated. Violating this
invariant leads to allocator corruption, which cascades into kernel
panics or undefined behavior.
Prevent trusting a corrupted allocator state by adding a validation
check during hfs_btree_open(). If corruption is detected, print a
warning identifying the specific corrupted tree and force the
filesystem to mount read-only (SB_RDONLY).
This prevents kernel panics from corrupted images while enabling
data recovery (Shardul Bankar)
- Fix a potential deadlock in hfsplus_fill_super().
hfsplus_fill_super() calls hfs_find_init() to initialize a search
structure, which acquires tree->tree_lock. If the subsequent call
to hfsplus_cat_build_key() fails, the function jumps to the
out_put_root error label without releasing the lock.
Fix this by adding the missing hfs_find_exit(&fd) call before
jumping to the out_put_root error label. This ensures that
tree->tree_lock is properly released on the error path (Zilin Guan)
- Update a files ctime after rename in hfsplus_rename() (Yangtao Li)
The rest of the patches introduce the HFS+ fixes for the case of
generic/348, generic/728, generic/533, generic/523, and generic/642
test-cases of xfstests suite"
* tag 'hfs-v7.1-tag1' of git://git.kernel.org/pub/scm/linux/kernel/git/vdubeyko/hfs:
hfsplus: fix generic/642 failure
hfsplus: rework logic of map nodes creation in xattr b-tree
hfsplus: fix logic of alloc/free b-tree node
hfsplus: fix error processing issue in hfs_bmap_free()
hfsplus: fix potential race conditions in b-tree functionality
hfsplus: extract hidden directory search into a helper function
hfsplus: fix held lock freed on hfsplus_fill_super()
hfsplus: fix generic/523 test-case failure
hfsplus: validate b-tree node 0 bitmap at mount time
hfsplus: refactor b-tree map page access and add node-type validation
hfsplus: fix to update ctime after rename
hfsplus: fix generic/533 test-case failure
hfsplus: set ctime after setxattr and removexattr
hfsplus: fix uninit-value by validating catalog record size
hfsplus: fix potential Allocation File corruption after fsync
Triage Assessment
Vulnerability Type: Memory safety / uninitialized read
Confidence: HIGH
Reasoning:
The patch includes a dedicated wrapper hfsplus_brec_read_cat to validate catalog record sizes and prevent uninitialized value reads (syzbot report). It also adds validation to detect corrupted allocator state and mount read-only to avoid panics, reducing risk from corrupted images. These changes address memory-safety issues and potential exploitation vectors in HFS+ handling.
Verification Assessment
Vulnerability Type: Memory safety / Uninitialized read in HFS+ catalog record parsing
Confidence: HIGH
Affected Versions: v7.0-rc6
Code Diff
diff --git a/fs/hfsplus/attributes.c b/fs/hfsplus/attributes.c
index 174cd13106ad66..7c2e589d455380 100644
--- a/fs/hfsplus/attributes.c
+++ b/fs/hfsplus/attributes.c
@@ -57,7 +57,8 @@ int hfsplus_attr_build_key(struct super_block *sb, hfsplus_btree_key *key,
if (name) {
int res = hfsplus_asc2uni(sb,
(struct hfsplus_unistr *)&key->attr.key_name,
- HFSPLUS_ATTR_MAX_STRLEN, name, strlen(name));
+ HFSPLUS_ATTR_MAX_STRLEN, name, strlen(name),
+ HFS_XATTR_NAME);
if (res)
return res;
len = be16_to_cpu(key->attr.key_name.length);
@@ -153,14 +154,22 @@ int hfsplus_find_attr(struct super_block *sb, u32 cnid,
if (err)
goto failed_find_attr;
err = hfs_brec_find(fd, hfs_find_rec_by_key);
- if (err)
+ if (err == -ENOENT) {
+ /* file exists but xattr is absent */
+ err = -ENODATA;
+ goto failed_find_attr;
+ } else if (err)
goto failed_find_attr;
} else {
err = hfsplus_attr_build_key(sb, fd->search_key, cnid, NULL);
if (err)
goto failed_find_attr;
err = hfs_brec_find(fd, hfs_find_1st_rec_by_cnid);
- if (err)
+ if (err == -ENOENT) {
+ /* file exists but xattr is absent */
+ err = -ENODATA;
+ goto failed_find_attr;
+ } else if (err)
goto failed_find_attr;
}
@@ -174,6 +183,9 @@ int hfsplus_attr_exists(struct inode *inode, const char *name)
struct super_block *sb = inode->i_sb;
struct hfs_find_data fd;
+ hfs_dbg("name %s, ino %llu\n",
+ name ? name : NULL, inode->i_ino);
+
if (!HFSPLUS_SB(sb)->attr_tree)
return 0;
@@ -241,6 +253,7 @@ int hfsplus_create_attr_nolock(struct inode *inode, const char *name,
return err;
}
+ hfsplus_mark_inode_dirty(HFSPLUS_ATTR_TREE_I(sb), HFSPLUS_I_ATTR_DIRTY);
hfsplus_mark_inode_dirty(inode, HFSPLUS_I_ATTR_DIRTY);
return 0;
@@ -292,15 +305,16 @@ int hfsplus_create_attr(struct inode *inode,
static int __hfsplus_delete_attr(struct inode *inode, u32 cnid,
struct hfs_find_data *fd)
{
- int err = 0;
+ int err;
__be32 found_cnid, record_type;
+ found_cnid = U32_MAX;
hfs_bnode_read(fd->bnode, &found_cnid,
fd->keyoffset +
offsetof(struct hfsplus_attr_key, cnid),
sizeof(__be32));
if (cnid != be32_to_cpu(found_cnid))
- return -ENOENT;
+ return -ENODATA;
hfs_bnode_read(fd->bnode, &record_type,
fd->entryoffset, sizeof(record_type));
@@ -326,8 +340,10 @@ static int __hfsplus_delete_attr(struct inode *inode, u32 cnid,
if (err)
return err;
+ hfsplus_mark_inode_dirty(HFSPLUS_ATTR_TREE_I(inode->i_sb),
+ HFSPLUS_I_ATTR_DIRTY);
hfsplus_mark_inode_dirty(inode, HFSPLUS_I_ATTR_DIRTY);
- return err;
+ return 0;
}
static
@@ -351,7 +367,10 @@ int hfsplus_delete_attr_nolock(struct inode *inode, const char *name,
}
err = hfs_brec_find(fd, hfs_find_rec_by_key);
- if (err)
+ if (err == -ENOENT) {
+ /* file exists but xattr is absent */
+ return -ENODATA;
+ } else if (err)
return err;
err = __hfsplus_delete_attr(inode, inode->i_ino, fd);
@@ -411,9 +430,14 @@ int hfsplus_delete_all_attrs(struct inode *dir, u32 cnid)
for (;;) {
err = hfsplus_find_attr(dir->i_sb, cnid, NULL, &fd);
- if (err) {
- if (err != -ENOENT)
- pr_err("xattr search failed\n");
+ if (err == -ENOENT || err == -ENODATA) {
+ /*
+ * xattr has not been found
+ */
+ err = -ENODATA;
+ goto end_delete_all;
+ } else if (err) {
+ pr_err("xattr search failed\n");
goto end_delete_all;
}
diff --git a/fs/hfsplus/bfind.c b/fs/hfsplus/bfind.c
index 336d654861c597..9a55fa6d529429 100644
--- a/fs/hfsplus/bfind.c
+++ b/fs/hfsplus/bfind.c
@@ -287,3 +287,54 @@ int hfs_brec_goto(struct hfs_find_data *fd, int cnt)
fd->bnode = bnode;
return res;
}
+
+/**
+ * hfsplus_brec_read_cat - read and validate a catalog record
+ * @fd: find data structure
+ * @entry: pointer to catalog entry to read into
+ *
+ * Reads a catalog record and validates its size matches the expected
+ * size based on the record type.
+ *
+ * Returns 0 on success, or negative error code on failure.
+ */
+int hfsplus_brec_read_cat(struct hfs_find_data *fd, hfsplus_cat_entry *entry)
+{
+ int res;
+ u32 expected_size;
+
+ res = hfs_brec_read(fd, entry, sizeof(hfsplus_cat_entry));
+ if (res)
+ return res;
+
+ /* Validate catalog record size based on type */
+ switch (be16_to_cpu(entry->type)) {
+ case HFSPLUS_FOLDER:
+ expected_size = sizeof(struct hfsplus_cat_folder);
+ break;
+ case HFSPLUS_FILE:
+ expected_size = sizeof(struct hfsplus_cat_file);
+ break;
+ case HFSPLUS_FOLDER_THREAD:
+ case HFSPLUS_FILE_THREAD:
+ /* Ensure we have at least the fixed fields before reading nodeName.length */
+ if (fd->entrylength < HFSPLUS_MIN_THREAD_SZ) {
+ pr_err("thread record too short (got %u)\n", fd->entrylength);
+ return -EIO;
+ }
+ expected_size = hfsplus_cat_thread_size(&entry->thread);
+ break;
+ default:
+ pr_err("unknown catalog record type %d\n",
+ be16_to_cpu(entry->type));
+ return -EIO;
+ }
+
+ if (fd->entrylength != expected_size) {
+ pr_err("catalog record size mismatch (type %d, got %u, expected %u)\n",
+ be16_to_cpu(entry->type), fd->entrylength, expected_size);
+ return -EIO;
+ }
+
+ return 0;
+}
diff --git a/fs/hfsplus/bnode.c b/fs/hfsplus/bnode.c
index 250a226336ea7a..f8b5a8ae58ff58 100644
--- a/fs/hfsplus/bnode.c
+++ b/fs/hfsplus/bnode.c
@@ -420,7 +420,10 @@ void hfs_bnode_unlink(struct hfs_bnode *node)
tree->root = 0;
tree->depth = 0;
}
+
+ spin_lock(&tree->hash_lock);
set_bit(HFS_BNODE_DELETED, &node->flags);
+ spin_unlock(&tree->hash_lock);
}
static inline int hfs_bnode_hash(u32 num)
diff --git a/fs/hfsplus/brec.c b/fs/hfsplus/brec.c
index 6796c1a80e9970..e3df89284079db 100644
--- a/fs/hfsplus/brec.c
+++ b/fs/hfsplus/brec.c
@@ -239,6 +239,9 @@ static struct hfs_bnode *hfs_bnode_split(struct hfs_find_data *fd)
struct hfs_bnode_desc node_desc;
int num_recs, new_rec_off, new_off, old_rec_off;
int data_start, data_end, size;
+ size_t rec_off_tbl_size;
+ size_t node_desc_size = sizeof(struct hfs_bnode_desc);
+ size_t rec_size = sizeof(__be16);
tree = fd->tree;
node = fd->bnode;
@@ -265,18 +268,22 @@ static struct hfs_bnode *hfs_bnode_split(struct hfs_find_data *fd)
return next_node;
}
- size = tree->node_size / 2 - node->num_recs * 2 - 14;
- old_rec_off = tree->node_size - 4;
+ rec_off_tbl_size = node->num_recs * rec_size;
+ size = tree->node_size / 2;
+ size -= node_desc_size;
+ size -= rec_off_tbl_size;
+ old_rec_off = tree->node_size - (2 * rec_size);
+
num_recs = 1;
for (;;) {
data_start = hfs_bnode_read_u16(node, old_rec_off);
if (data_start > size)
break;
- old_rec_off -= 2;
+ old_rec_off -= rec_size;
if (++num_recs < node->num_recs)
continue;
- /* panic? */
hfs_bnode_put(node);
+ hfs_bnode_unlink(new_node);
hfs_bnode_put(new_node);
if (next_node)
hfs_bnode_put(next_node);
@@ -287,7 +294,7 @@ static struct hfs_bnode *hfs_bnode_split(struct hfs_find_data *fd)
/* new record is in the lower half,
* so leave some more space there
*/
- old_rec_off += 2;
+ old_rec_off += rec_size;
num_recs--;
data_start = hfs_bnode_read_u16(node, old_rec_off);
} else {
@@ -295,27 +302,28 @@ static struct hfs_bnode *hfs_bnode_split(struct hfs_find_data *fd)
hfs_bnode_get(new_node);
fd->bnode = new_node;
fd->record -= num_recs;
- fd->keyoffset -= data_start - 14;
- fd->entryoffset -= data_start - 14;
+ fd->keyoffset -= data_start - node_desc_size;
+ fd->entryoffset -= data_start - node_desc_size;
}
new_node->num_recs = node->num_recs - num_recs;
node->num_recs = num_recs;
- new_rec_off = tree->node_size - 2;
- new_off = 14;
+ new_rec_off = tree->node_size - rec_size;
+ new_off = node_desc_size;
size = data_start - new_off;
num_recs = new_node->num_recs;
data_end = data_start;
while (num_recs) {
hfs_bnode_write_u16(new_node, new_rec_off, new_off);
- old_rec_off -= 2;
- new_rec_off -= 2;
+ old_rec_off -= rec_size;
+ new_rec_off -= rec_size;
data_end = hfs_bnode_read_u16(node, old_rec_off);
new_off = data_end - size;
num_recs--;
}
hfs_bnode_write_u16(new_node, new_rec_off, new_off);
- hfs_bnode_copy(new_node, 14, node, data_start, data_end - data_start);
+ hfs_bnode_copy(new_node, node_desc_size,
+ node, data_start, data_end - data_start);
/* update new bnode header */
node_desc.next = cpu_to_be32(new_node->next);
diff --git a/fs/hfsplus/btree.c b/fs/hfsplus/btree.c
index 1220a2f2273718..761c74ccd6531e 100644
--- a/fs/hfsplus/btree.c
+++ b/fs/hfsplus/btree.c
@@ -129,12 +129,148 @@ u32 hfsplus_calc_btree_clump_size(u32 block_size, u32 node_size,
return clump_size;
}
+/* Context for iterating b-tree map pages
+ * @page_idx: The index of the page within the b-node's page array
+ * @off: The byte offset within the mapped page
+ * @len: The remaining length of the map record
+ */
+struct hfs_bmap_ctx {
+ unsigned int page_idx;
+ unsigned int off;
+ u16 len;
+};
+
+/*
+ * Finds the specific page containing the requested byte offset within the map
+ * record. Automatically handles the difference between header and map nodes.
+ * Returns the struct page pointer, or an ERR_PTR on failure.
+ * Note: The caller is responsible for mapping/unmapping the returned page.
+ */
+static struct page *hfs_bmap_get_map_page(struct hfs_bnode *node,
+ struct hfs_bmap_ctx *ctx,
+ u32 byte_offset)
+{
+ u16 rec_idx, off16;
+ unsigned int page_off;
+
+ if (node->this == HFSPLUS_TREE_HEAD) {
+ if (node->type != HFS_NODE_HEADER) {
+ pr_err("hfsplus: invalid btree header node\n");
+ return ERR_PTR(-EIO);
+ }
+ rec_idx = HFSPLUS_BTREE_HDR_MAP_REC_INDEX;
+ } else {
+ if (node->type != HFS_NODE_MAP) {
+ pr_err("hfsplus: invalid btree map node\n");
+ return ERR_PTR(-EIO);
+ }
+ rec_idx = HFSPLUS_BTREE_MAP_NODE_REC_INDEX;
+ }
+
+ ctx->len = hfs_brec_lenoff(node, rec_idx, &off16);
+ if (!ctx->len)
+ return ERR_PTR(-ENOENT);
+
+ if (!is_bnode_offset_valid(node, off16))
+ return ERR_PTR(-EIO);
+
+ ctx->len = check_and_correct_requested_length(node, off16, ctx->len);
+
+ if (byte_offset >= ctx->len)
+ return ERR_PTR(-EINVAL);
+
+ page_off = (u32)off16 + node->page_offset + byte_offset;
+ ctx->page_idx = page_off >> PAGE_SHIFT;
+ ctx->off = page_off & ~PAGE_MASK;
+
+ return node->page[ctx->page_idx];
+}
+
+/**
+ * hfs_bmap_test_bit - test a bit in the b-tree map
+ * @node: the b-tree node containing the map record
+ * @node_bit_idx: the relative bit index within the node's map record
+ *
+ * Returns true if set, false if clear or on failure.
+ */
+static bool hfs_bmap_test_bit(struct hfs_bnode *node, u32 node_bit_idx)
+{
+ struct hfs_bmap_ctx ctx;
+ struct page *page;
+ u8 *bmap, byte, mask;
+
+ page = hfs_bmap_get_map_page(node, &ctx, node_bit_idx / BITS_PER_BYTE);
+ if (IS_ERR(page))
+ return false;
+
+ bmap = kmap_local_page(page);
+ byte = bmap[ctx.off];
+ kunmap_local(bmap);
+
+ mask = 1 << (7 - (node_bit_idx % BITS_PER_BYTE));
+ return (byte & mask) != 0;
+}
+
+
+/**
+ * hfs_bmap_clear_bit - clear a bit in the b-tree map
+ * @node: the b-tree node containing the map record
+ * @node_bit_idx: the relative bit index within the node's map record
+ *
+ * Returns 0 on success, -EINVAL if already clear, or negative error code.
+ */
+static int hfs_bmap_clear_bit(struct hfs_bnode *node, u32 node_bit_idx)
+{
+ struct hfs_bmap_ctx ctx;
+ struct page *page;
+ u8 *bmap, mask;
+
+ page = hfs_bmap_get_map_page(node, &ctx, node_bit_idx / BITS_PER_BYTE);
+ if (IS_ERR(page))
+ return PTR_ERR(page);
+
+ bmap = kmap_local_page(page);
+
+ mask = 1 << (7 - (node_bit_idx % BITS_PER_BYTE));
+
+ if (!(bmap[ctx.off] & mask)) {
+ kunmap_local(bmap);
+ return -EINVAL;
+ }
+
+ bmap[ctx.off] &= ~mask;
+ set_page_dirty(page);
+ kunmap_local(bmap);
+
+ return 0;
+}
+
+#define HFS_EXTENT_TREE_NAME "Extents Overflow File"
+#define HFS_CATALOG_TREE_NAME "Catalog File"
+#define HFS_ATTR_TREE_NAME "Attributes File"
+#define HFS_UNKNOWN_TREE_NAME "Unknown B-tree"
+
+static const char *hfs_btree_name(u32 cnid)
+{
+ switch (cnid) {
+ case HFSPLUS_EXT_CNID:
+ return HFS_EXTENT_TREE_NAME;
+ case HFSPLUS_CAT_CNID:
+ return HFS_CATALOG_TREE_NAME;
+ case HFSPLUS_ATTR_CNID:
+ return HFS_ATTR_TREE_NAME;
+ default:
+ return HFS_UNKNOWN_TREE_NAME;
+ }
+}
+
/* Get a reference to a B*Tree and do some initial checks */
struct hfs_btree *hfs_btree_open(struct super_block *sb, u32 id)
{
struct hfs_btree *tree;
struct hfs_btree_header_rec *head;
struct address_space *mapping;
+ struct hfs_bnode *node;
struct inode *inode;
struct page *page;
unsigned int size;
@@ -242,6 +378,20 @@ struct hfs_btree *hfs_btree_open(struct super_block *sb, u32 id)
kunmap_local(head);
put_page(page);
+
+ node = hfs_bnode_find(tree, HFSPLUS_TREE_HEAD);
+ if (IS_ERR(node))
+ goto free_inode;
+
+ if (!hfs_bmap_test_bit(node, 0)) {
+ pr_warn("(%s): %s (cnid 0x%x) map record invalid or bitmap corruption detected, forcing read-only.\n",
+ sb->s_id, hfs_btree_name(id), id);
+ pr_warn("Run fsck.hfsplus to repair.\n");
+ sb->s_flags |= SB_RDONLY;
+ }
+
+ hfs_bnode_put(node);
+
return tree;
fail_page:
@@ -351,6 +501,8 @@ int hfs_bmap_reserve(struct hfs_btree *tree, u32 rsvd_nodes)
u32 count;
int res;
+ lockdep_assert_held(&tree->tree_lock);
+
if (rsvd_nodes <= 0)
return 0;
@@ -374,14 +526,14 @@ int hfs_bmap_reserve(struct hfs_btree *tree, u32 rsvd_nodes)
struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree)
{
struct hfs_bnode *node, *next_node;
- struct page **pagep;
+ struct hfs_bmap_ctx ctx;
+ struct page *page;
u32 nidx, idx;
- unsigned off;
- u16 off16;
- u16 len;
u8 *data, byte, m;
int i, res;
+ lockdep_assert_held(&tree->tree_lock);
+
res = hfs_bmap_reserve(tree, 1);
if (res)
return ERR_PTR(res);
@@ -390,32 +542,29 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree)
node = hfs_bnode_find(tree, nidx);
if (IS_ERR(node))
return node;
- len = hfs_brec_lenoff(node, 2, &off16);
- off = off16;
- if (!is_bnode_offset_valid(node, off)) {
+ page = hfs_bmap_get_map_page(node, &ctx, 0);
+ if (IS_ERR(page)) {
+ res = PTR_ERR(page);
hfs_bnode_put(node);
- return ERR_PTR(-EIO);
+ return ERR_PTR(res);
}
- len = check_and_correct_requested_length(node, off, len);
- off += node->page_offset;
- pagep = node->page + (off >> PAGE_SHIFT);
- data = kmap_local_page(*pagep);
- off &= ~PAGE_MASK;
+ data = kmap_local_page(page);
idx = 0;
for (;;) {
- while (len) {
- byte = data[off];
+ while (ctx.len) {
+ byte = data[ctx.off];
if (byte != 0xff) {
for (m = 0x80, i = 0; i < 8; m >>= 1, i++) {
if (!(byte & m)) {
idx += i;
- data[off] |= m;
- set_page_dirty(*pagep);
+ data[ctx.off] |= m;
+ set_page_dirty(page);
kunmap_local(data);
tree->free_nodes--;
+ hfs_btree_write(tree);
mark_inode_dirty(tree->inode);
hfs_bnode_put(node);
return hfs_bnode_create(tree,
@@ -423,19 +572,21 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree)
}
... [truncated]