
Commit c1f1fda

axiqia authored and rafaeljw committed
ACPI: APEI: handle synchronous exceptions in task work
A memory uncorrected error can be signaled by an asynchronous interrupt (specifically, an SPI on the arm64 platform), e.g. when the error is detected by a background scrubber, or by a synchronous exception (specifically, a data abort exception on arm64), e.g. when a CPU tries to access a poisoned cache line. Currently, both synchronous and asynchronous errors use memory_failure_queue() to schedule memory_failure() to execute in a kworker context. As a result, when a user-space process accesses poisoned data, a data abort is taken and memory_failure() is executed in the kworker context, which:

- sends the wrong si_code with the SIGBUS signal in early_kill mode, and
- cannot kill the user-space process in some cases, resulting in a synchronous error infinite loop

Issue 1: wrong si_code sent in early_kill mode

Since commit a70297d ("ACPI: APEI: set memory failure flags as MF_ACTION_REQUIRED on synchronous events"), the MF_ACTION_REQUIRED flag can be used to determine whether a synchronous exception occurred on the arm64 platform. When a synchronous exception is detected, the kernel is expected to terminate the current process, which has accessed the poisoned page. This is done by sending a SIGBUS signal with error code BUS_MCEERR_AR, indicating an action-required machine check error on read. However, when kill_proc() is called to terminate the processes that have the poisoned page mapped, it sends the incorrect SIGBUS error code BUS_MCEERR_AO, because the context in which it operates is not the one in which the error was triggered.

To reproduce this problem:

  # STEP1: enable early kill mode
  #sysctl -w vm.memory_failure_early_kill=1
  vm.memory_failure_early_kill = 1
  # STEP2: inject an UCE error and consume it to trigger a synchronous error
  #einj_mem_uc single
  0: single  vaddr = 0xffffb0d75400 paddr = 4092d55b400
  injecting ...
  triggering ...
  signal 7 code 5 addr 0xffffb0d75000
  page not present
  Test passed

The si_code (code 5) reported by einj_mem_uc indicates a BUS_MCEERR_AO error, which is not factually correct.
After this change:

  # STEP1: enable early kill mode
  #sysctl -w vm.memory_failure_early_kill=1
  vm.memory_failure_early_kill = 1
  # STEP2: inject an UCE error and consume it to trigger a synchronous error
  #einj_mem_uc single
  0: single  vaddr = 0xffffb0d75400 paddr = 4092d55b400
  injecting ...
  triggering ...
  signal 7 code 4 addr 0xffffb0d75000
  page not present
  Test passed

The si_code (code 4) reported by einj_mem_uc indicates a BUS_MCEERR_AR error, as expected.

Issue 2: a synchronous error infinite loop

If a user-space process, e.g. devmem, accesses a poisoned page for which the HWPoison flag is set, kill_accessing_process() is called to send SIGBUS to the current process with error info. Since memory_failure() is executed in the kworker context, it does nothing but return EFAULT, so devmem accesses the poisoned page again and triggers the exception again, resulting in a synchronous error infinite loop. Such an exception loop may cause platform firmware to exceed some threshold and reboot, when Linux could have recovered from this error.

To reproduce this problem:

  # STEP 1: inject an UCE error; the kernel will set the HWPoison flag for the related page
  #einj_mem_uc single
  0: single  vaddr = 0xffffb0d75400 paddr = 4092d55b400
  injecting ...
  triggering ...
  signal 7 code 4 addr 0xffffb0d75000
  page not present
  Test passed
  # STEP 2: access the same page, which triggers a synchronous error infinite loop
  devmem 0x4092d55b400

To fix the above two issues, queue memory_failure() as task work so that it runs in the context of the process that is actually consuming the poisoned data.
Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
Tested-by: Ma Wupeng <mawupeng1@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Xiaofei Tan <tanxiaofei@huawei.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Jane Chu <jane.chu@oracle.com>
Reviewed-by: Yazen Ghannam <yazen.ghannam@amd.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Link: https://patch.msgid.link/20250714114212.31660-3-xueshuai@linux.alibaba.com
[ rjw: Changelog edits ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
1 parent 79a5ae3 commit c1f1fda

4 files changed

Lines changed: 45 additions & 51 deletions


drivers/acpi/apei/ghes.c

Lines changed: 45 additions & 34 deletions
@@ -464,28 +464,41 @@ static void ghes_clear_estatus(struct ghes *ghes,
 	ghes_ack_error(ghes->generic_v2);
 }
 
-/*
- * Called as task_work before returning to user-space.
- * Ensure any queued work has been done before we return to the context that
- * triggered the notification.
+/**
+ * struct ghes_task_work - for synchronous RAS event
+ *
+ * @twork: callback_head for task work
+ * @pfn: page frame number of corrupted page
+ * @flags: work control flags
+ *
+ * Structure to pass task work to be handled before
+ * returning to user-space via task_work_add().
  */
-static void ghes_kick_task_work(struct callback_head *head)
+struct ghes_task_work {
+	struct callback_head twork;
+	u64 pfn;
+	int flags;
+};
+
+static void memory_failure_cb(struct callback_head *twork)
 {
-	struct acpi_hest_generic_status *estatus;
-	struct ghes_estatus_node *estatus_node;
-	u32 node_len;
+	struct ghes_task_work *twcb = container_of(twork, struct ghes_task_work, twork);
+	int ret;
 
-	estatus_node = container_of(head, struct ghes_estatus_node, task_work);
-	if (IS_ENABLED(CONFIG_ACPI_APEI_MEMORY_FAILURE))
-		memory_failure_queue_kick(estatus_node->task_work_cpu);
+	ret = memory_failure(twcb->pfn, twcb->flags);
+	gen_pool_free(ghes_estatus_pool, (unsigned long)twcb, sizeof(*twcb));
 
-	estatus = GHES_ESTATUS_FROM_NODE(estatus_node);
-	node_len = GHES_ESTATUS_NODE_LEN(cper_estatus_len(estatus));
-	gen_pool_free(ghes_estatus_pool, (unsigned long)estatus_node, node_len);
+	if (!ret || ret == -EHWPOISON || ret == -EOPNOTSUPP)
+		return;
+
+	pr_err("%#llx: Sending SIGBUS to %s:%d due to hardware memory corruption\n",
+	       twcb->pfn, current->comm, task_pid_nr(current));
+	force_sig(SIGBUS);
 }
 
 static bool ghes_do_memory_failure(u64 physical_addr, int flags)
 {
+	struct ghes_task_work *twcb;
 	unsigned long pfn;
 
 	if (!IS_ENABLED(CONFIG_ACPI_APEI_MEMORY_FAILURE))
@@ -499,6 +512,18 @@ static bool ghes_do_memory_failure(u64 physical_addr, int flags)
 		return false;
 	}
 
+	if (flags == MF_ACTION_REQUIRED && current->mm) {
+		twcb = (void *)gen_pool_alloc(ghes_estatus_pool, sizeof(*twcb));
+		if (!twcb)
+			return false;
+
+		twcb->pfn = pfn;
+		twcb->flags = flags;
+		init_task_work(&twcb->twork, memory_failure_cb);
+		task_work_add(current, &twcb->twork, TWA_RESUME);
+		return true;
+	}
+
 	memory_failure_queue(pfn, flags);
 	return true;
 }
@@ -842,7 +867,7 @@ int cxl_cper_kfifo_get(struct cxl_cper_work_data *wd)
 }
 EXPORT_SYMBOL_NS_GPL(cxl_cper_kfifo_get, "CXL");
 
-static bool ghes_do_proc(struct ghes *ghes,
+static void ghes_do_proc(struct ghes *ghes,
 			 const struct acpi_hest_generic_status *estatus)
 {
 	int sev, sec_sev;
@@ -912,8 +937,6 @@ static bool ghes_do_proc(struct ghes *ghes,
 		       current->comm, task_pid_nr(current));
 		force_sig(SIGBUS);
 	}
-
-	return queued;
 }
 
 static void __ghes_print_estatus(const char *pfx,
@@ -1219,9 +1242,7 @@ static void ghes_proc_in_irq(struct irq_work *irq_work)
 	struct ghes_estatus_node *estatus_node;
 	struct acpi_hest_generic *generic;
 	struct acpi_hest_generic_status *estatus;
-	bool task_work_pending;
 	u32 len, node_len;
-	int ret;
 
 	llnode = llist_del_all(&ghes_estatus_llist);
 	/*
@@ -1236,25 +1257,16 @@ static void ghes_proc_in_irq(struct irq_work *irq_work)
 		estatus = GHES_ESTATUS_FROM_NODE(estatus_node);
 		len = cper_estatus_len(estatus);
 		node_len = GHES_ESTATUS_NODE_LEN(len);
-		task_work_pending = ghes_do_proc(estatus_node->ghes, estatus);
+
+		ghes_do_proc(estatus_node->ghes, estatus);
+
 		if (!ghes_estatus_cached(estatus)) {
 			generic = estatus_node->generic;
 			if (ghes_print_estatus(NULL, generic, estatus))
 				ghes_estatus_cache_add(generic, estatus);
 		}
-
-		if (task_work_pending && current->mm) {
-			estatus_node->task_work.func = ghes_kick_task_work;
-			estatus_node->task_work_cpu = smp_processor_id();
-			ret = task_work_add(current, &estatus_node->task_work,
-					    TWA_RESUME);
-			if (ret)
-				estatus_node->task_work.func = NULL;
-		}
-
-		if (!estatus_node->task_work.func)
-			gen_pool_free(ghes_estatus_pool,
-				      (unsigned long)estatus_node, node_len);
+		gen_pool_free(ghes_estatus_pool, (unsigned long)estatus_node,
+			      node_len);
 
 		llnode = next;
 	}
@@ -1315,7 +1327,6 @@ static int ghes_in_nmi_queue_one_entry(struct ghes *ghes,
 
 	estatus_node->ghes = ghes;
 	estatus_node->generic = ghes->generic;
-	estatus_node->task_work.func = NULL;
 	estatus = GHES_ESTATUS_FROM_NODE(estatus_node);
 
 	if (__ghes_read_estatus(estatus, buf_paddr, fixmap_idx, len)) {

include/acpi/ghes.h

Lines changed: 0 additions & 3 deletions
@@ -35,9 +35,6 @@ struct ghes_estatus_node {
 	struct llist_node llnode;
 	struct acpi_hest_generic *generic;
 	struct ghes *ghes;
-
-	int task_work_cpu;
-	struct callback_head task_work;
 };
 
 struct ghes_estatus_cache {

include/linux/mm.h

Lines changed: 0 additions & 1 deletion
@@ -3896,7 +3896,6 @@ enum mf_flags {
 int mf_dax_kill_procs(struct address_space *mapping, pgoff_t index,
 		      unsigned long count, int mf_flags);
 extern int memory_failure(unsigned long pfn, int flags);
-extern void memory_failure_queue_kick(int cpu);
 extern int unpoison_memory(unsigned long pfn);
 extern atomic_long_t num_poisoned_pages __read_mostly;
 extern int soft_offline_page(unsigned long pfn, int flags);

mm/memory-failure.c

Lines changed: 0 additions & 13 deletions
@@ -2503,19 +2503,6 @@ static void memory_failure_work_func(struct work_struct *work)
 	}
 }
 
-/*
- * Process memory_failure work queued on the specified CPU.
- * Used to avoid return-to-userspace racing with the memory_failure workqueue.
- */
-void memory_failure_queue_kick(int cpu)
-{
-	struct memory_failure_cpu *mf_cpu;
-
-	mf_cpu = &per_cpu(memory_failure_cpu, cpu);
-	cancel_work_sync(&mf_cpu->work);
-	memory_failure_work_func(&mf_cpu->work);
-}
-
 static int __init memory_failure_init(void)
 {
 	struct memory_failure_cpu *mf_cpu;
