
Commit b85f369

hygonitehcaster
authored and committed
mm/slab: use unsigned long for orig_size to ensure proper metadata align
When both KASAN and SLAB_STORE_USER are enabled, accesses to struct kasan_alloc_meta fields can be misaligned on 64-bit architectures. This occurs because orig_size is currently defined as unsigned int, which only guarantees 4-byte alignment. When struct kasan_alloc_meta is placed after orig_size, it may end up at a 4-byte boundary rather than the required 8-byte boundary on 64-bit systems. Note that 64-bit architectures without HAVE_EFFICIENT_UNALIGNED_ACCESS are assumed to require 64-bit accesses to be 64-bit aligned. See HAVE_64BIT_ALIGNED_ACCESS and commit adab66b ("Revert: "ring-buffer: Remove HAVE_64BIT_ALIGNED_ACCESS"") for more details. Change orig_size from unsigned int to unsigned long to ensure proper alignment for any subsequent metadata. This should not waste additional memory because kmalloc objects are already aligned to at least ARCH_KMALLOC_MINALIGN. Closes: https://lore.kernel.org/all/aPrLF0OUK651M4dk@hyeyoo Suggested-by: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: stable@vger.kernel.org Fixes: 6edf257 ("mm/slub: enable debugging memory wasting of kmalloc") Signed-off-by: Harry Yoo <harry.yoo@oracle.com> Closes: https://lore.kernel.org/all/aPrLF0OUK651M4dk@hyeyoo/ Link: https://patch.msgid.link/20260113061845.159790-2-harry.yoo@oracle.com Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
1 parent 9346ee2 commit b85f369

1 file changed: mm/slub.c (7 additions, 7 deletions)
@@ -854,7 +854,7 @@ static inline bool slab_update_freelist(struct kmem_cache *s, struct slab *slab,
  * request size in the meta data area, for better debug and sanity check.
  */
 static inline void set_orig_size(struct kmem_cache *s,
-				void *object, unsigned int orig_size)
+				void *object, unsigned long orig_size)
 {
 	void *p = kasan_reset_tag(object);

@@ -864,10 +864,10 @@ static inline void set_orig_size(struct kmem_cache *s,
 	p += get_info_end(s);
 	p += sizeof(struct track) * 2;

-	*(unsigned int *)p = orig_size;
+	*(unsigned long *)p = orig_size;
 }

-static inline unsigned int get_orig_size(struct kmem_cache *s, void *object)
+static inline unsigned long get_orig_size(struct kmem_cache *s, void *object)
 {
 	void *p = kasan_reset_tag(object);

@@ -880,7 +880,7 @@ static inline unsigned int get_orig_size(struct kmem_cache *s, void *object)
 	p += get_info_end(s);
 	p += sizeof(struct track) * 2;

-	return *(unsigned int *)p;
+	return *(unsigned long *)p;
 }

 #ifdef CONFIG_SLUB_DEBUG
@@ -1195,7 +1195,7 @@ static void print_trailer(struct kmem_cache *s, struct slab *slab, u8 *p)
 	off += 2 * sizeof(struct track);

 	if (slub_debug_orig_size(s))
-		off += sizeof(unsigned int);
+		off += sizeof(unsigned long);

 	off += kasan_metadata_size(s, false);

@@ -1407,7 +1407,7 @@ static int check_pad_bytes(struct kmem_cache *s, struct slab *slab, u8 *p)
 		off += 2 * sizeof(struct track);

 		if (s->flags & SLAB_KMALLOC)
-			off += sizeof(unsigned int);
+			off += sizeof(unsigned long);
 	}

 	off += kasan_metadata_size(s, false);
@@ -8040,7 +8040,7 @@ static int calculate_sizes(struct kmem_cache_args *args, struct kmem_cache *s)

 		/* Save the original kmalloc request size */
 		if (flags & SLAB_KMALLOC)
-			size += sizeof(unsigned int);
+			size += sizeof(unsigned long);
 	}
 #endif
