mm/slab: use kmalloc_node() for off slab freelist_idx_t array allocation
After commit d6a71648db ("mm/slab: kmalloc: pass requests larger than order-1 page to page allocator"), SLAB passes large (> PAGE_SIZE * 2) requests to the buddy allocator, like SLUB does.

SLAB has been using kmalloc caches to allocate the freelist_idx_t array for off-slab caches, but after that commit, freelist_size can be bigger than KMALLOC_MAX_CACHE_SIZE.

Instead of keeping a pointer to a kmalloc cache, use kmalloc_node() and only check whether the kmalloc cache is off-slab during calculate_slab_order(). If freelist_size > KMALLOC_MAX_CACHE_SIZE, no looping condition can happen, as the freelist_idx_t array is allocated directly from the buddy allocator.

Link: https://lore.kernel.org/all/20221014205818.GA1428667@roeck-us.net/
Reported-and-tested-by: Guenter Roeck <linux@roeck-us.net>
Fixes: d6a71648db ("mm/slab: kmalloc: pass requests larger than order-1 page to page allocator")
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
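As a hedged illustration (not the literal patch), the off-slab freelist allocation before and after this change could look roughly like the sketch below. The wrapper names alloc_off_slab_freelist_old/new and free_off_slab_freelist are hypothetical; kmem_cache_alloc_node(), kmalloc_node() and kfree() are real slab APIs, and freelist_cache / freelist_size refer to the struct kmem_cache fields shown in the diff further down.

#include <linux/slab.h>

/*
 * Before: the freelist_idx_t array came from a kmalloc cache that was
 * looked up at cache-creation time and stashed in ->freelist_cache
 * (a field removed by this commit).
 */
static void *alloc_off_slab_freelist_old(struct kmem_cache *cachep,
					 gfp_t flags, int nodeid)
{
	return kmem_cache_alloc_node(cachep->freelist_cache, flags, nodeid);
}

/*
 * After: allocate by size. kmalloc_node() uses a kmalloc cache when
 * freelist_size <= KMALLOC_MAX_CACHE_SIZE and goes straight to the buddy
 * allocator for anything larger, so no cache pointer needs to be kept.
 */
static void *alloc_off_slab_freelist_new(struct kmem_cache *cachep,
					 gfp_t flags, int nodeid)
{
	return kmalloc_node(cachep->freelist_size, flags, nodeid);
}

/* The matching free side: kfree() handles slab- and buddy-backed memory. */
static void free_off_slab_freelist(void *freelist)
{
	kfree(freelist);
}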
committed by Vlastimil Babka
parent d5eff73690
commit e36ce448a0
@@ -33,7 +33,6 @@ struct kmem_cache {
 	size_t colour;			/* cache colouring range */
 	unsigned int colour_off;	/* colour offset */
-	struct kmem_cache *freelist_cache;
 	unsigned int freelist_size;
 
 	/* constructor func */
 	void (*ctor)(void *obj);
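The changelog's note about calculate_slab_order() can be sketched in the same hedged way. The helper below is illustrative and not taken from the patch; it assumes the mm/slab internals kmalloc_slab() and OFF_SLAB()/CFLGS_OFF_SLAB, and only captures the idea that the "freelist cache is itself off-slab" looping condition can only arise when the array is small enough to be backed by a kmalloc cache at all.

/*
 * Illustrative sketch only: is an off-slab freelist of this size safe to
 * use? Assumes kmalloc_slab() and OFF_SLAB() from mm/slab internals.
 */
static bool off_slab_freelist_usable(size_t freelist_size)
{
	struct kmem_cache *freelist_cache;

	/*
	 * Larger arrays are served directly by the buddy allocator via
	 * kmalloc_node(), so there is no kmalloc cache to inspect and no
	 * way to recurse into another off-slab freelist.
	 */
	if (freelist_size > KMALLOC_MAX_CACHE_SIZE)
		return true;

	freelist_cache = kmalloc_slab(freelist_size, 0u);
	if (!freelist_cache)
		return false;

	/*
	 * Avoid the looping condition: the kmalloc cache backing the
	 * freelist array must keep its own freelist on-slab.
	 */
	return !OFF_SLAB(freelist_cache);
}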