System lockup caused by misuse of spin_lock_irq / spin_unlock_irq
This is a bizarre soft-lockup hang I ran into while debugging the kernel block layer (CentOS 8.3, kernel 4.18.0-240). Here is the crash scene:
- [  760.247152] watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [kworker/0:1:2635]
- ……
- [  760.247184] CPU: 0 PID: 2635 Comm: kworker/0:1 Kdump: loaded Tainted: G            E    ---------r-  - 4.18.0 #4
- [  760.247184] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 02/27/2020
- [  760.247190] Workqueue: events netstamp_clear
- [  760.247193] RIP: 0010:smp_call_function_many+0x1ec/0x250
- [  760.247194] Code: c7 e8 08 fa 74 00 3b 05 96 ee 2b 01 0f 83 99 fe ff ff 48 63 d0 48 8b 0b 48 03 0c d5 60 d8 15 bd 8b 51 18 83 e2 01 74 0a f3 90 <8b> 51 18 83 e2 01 75 f6 eb c7 48 c7 c2 00 4c 41 bd 4c 89 f6 89 df
- [  760.247195] RSP: 0018:ffffa3b342ac3dd0 EFLAGS: 00000202 ORIG_RAX: ffffffffffffff13
- [  760.247196] RAX: 0000000000000001 RBX: ffff92adf5c2ae80 RCX: ffff92adf5c704e0
- [  760.247196] RDX: 0000000000000001 RSI: 0000000000000000 RDI: ffff92acc7d536c0
- [  760.247196] RBP: ffffffffbc027610 R08: 000000000002f060 R09: ffffffffbc04fc8a
- [  760.247197] R10: ffffeec908b18a00 R11: 0000000000000000 R12: 0000000000000000
- [  760.247197] R13: 0000000000000001 R14: 0000000000000080 R15: 0000000000000001
- [  760.247198] FS:  0000000000000000(0000) GS:ffff92adf5c00000(0000) knlGS:0000000000000000
- [  760.247198] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
- [  760.247199] CR2: 00007fa9e40033c8 CR3: 000000001a00a003 CR4: 00000000003606f0
- [  760.247216] Call Trace:
- [  760.247221]  ? poke_int3_handler+0xe0/0xe0
- [  760.247222]  on_each_cpu+0x28/0x60
- [  760.247223]  text_poke_bp_batch+0x8b/0x160
- [  760.247225]  arch_jump_label_transform_apply+0x2e/0x50
- [  760.247226]  static_key_enable_cpuslocked+0x52/0x80
- [  760.247228]  static_key_enable+0x16/0x20
- [  760.247229]  process_one_work+0x1a7/0x360
- [  760.247230]  worker_thread+0x30/0x390
- [  760.247232]  ? create_worker+0x1a0/0x1a0
- [  760.247232]  kthread+0x112/0x130
- [  760.247233]  ? kthread_flush_work_fn+0x10/0x10
- [  760.247235]  ret_from_fork+0x35/0x40
- [  760.247237] Kernel panic - not syncing: softlockup: hung tasks
- [  760.247239] CPU: 0 PID: 2635 Comm: kworker/0:1 Kdump: loaded Tainted: G            EL   ---------r-  - 4.18.0 #4
- [  760.247239] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 02/27/2020
- [  760.247241] Workqueue: events netstamp_clear
- [  760.247241] Call Trace:
- [  760.247243]  <IRQ>
- [  760.247245]  dump_stack+0x5c/0x80
- [  760.247247]  panic+0xe7/0x2a9
- [  760.247249]  ? __switch_to_asm+0x51/0x70
- [  760.247250]  watchdog_timer_fn.cold.8+0x85/0x9e
- [  760.247251]  ? watchdog+0x30/0x30
- [  760.247253]  __hrtimer_run_queues+0x100/0x280
- [  760.247255]  hrtimer_interrupt+0x100/0x220
- [  760.247256]  ? ktime_get+0x36/0xa0
- [  760.247257]  smp_apic_timer_interrupt+0x6a/0x130
- [  760.247259]  apic_timer_interrupt+0xf/0x20
- [  760.247260]  </IRQ>
- [  760.247262] RIP: 0010:smp_call_function_many+0x1ec/0x250
- [  760.247263] Code: c7 e8 08 fa 74 00 3b 05 96 ee 2b 01 0f 83 99 fe ff ff 48 63 d0 48 8b 0b 48 03 0c d5 60 d8 15 bd 8b 51 18 83 e2 01 74 0a f3 90 <8b> 51 18 83 e2 01 75 f6 eb c7 48 c7 c2 00 4c 41 bd 4c 89 f6 89 df
- [  760.247263] RSP: 0018:ffffa3b342ac3dd0 EFLAGS: 00000202 ORIG_RAX: ffffffffffffff13
- [  760.247264] RAX: 0000000000000001 RBX: ffff92adf5c2ae80 RCX: ffff92adf5c704e0
- [  760.247265] RDX: 0000000000000001 RSI: 0000000000000000 RDI: ffff92acc7d536c0
- [  760.247266] RBP: ffffffffbc027610 R08: 000000000002f060 R09: ffffffffbc04fc8a
- [  760.247266] R10: ffffeec908b18a00 R11: 0000000000000000 R12: 0000000000000000
- [  760.247267] R13: 0000000000000001 R14: 0000000000000080 R15: 0000000000000001
- [  760.247268]  ? poke_int3_handler+0xe0/0xe0
- [  760.247269]  ? native_send_call_func_ipi+0xda/0x120
- [  760.247271]  ? poke_int3_handler+0xe0/0xe0
- [  760.247272]  on_each_cpu+0x28/0x60
- [  760.247274]  text_poke_bp_batch+0x8b/0x160
- [  760.247275]  arch_jump_label_transform_apply+0x2e/0x50
- [  760.247276]  static_key_enable_cpuslocked+0x52/0x80
- [  760.247277]  static_key_enable+0x16/0x20
- [  760.247278]  process_one_work+0x1a7/0x360
- [  760.247279]  worker_thread+0x30/0x390
First, fire up crash against the dump: `crash /usr/lib/debug/lib/modules/4.18.0-240.el8.x86_64/vmlinux /var/crash/127.0.0.1-2023-02-19-02\:05\:59/vmcore`, then use bt to look at the stack backtraces of the processes at the moment of the hang:
-     DUMPFILE: /var/crash/127.0.0.1-2023-02-19-02:05:59/vmcore  [PARTIAL DUMP]
-         CPUS: 4
-         DATE: Sun Feb 19 02:04:55 2023
-       UPTIME: 00:11:46
- LOAD AVERAGE: 21.78, 17.84, 9.85
-        TASKS: 415
-     NODENAME: localhost.localdomain
-      RELEASE: 4.18.0
-      VERSION: #4 SMP Sun Feb 19 00:38:10 PST 2023
-      MACHINE: x86_64  (3407 Mhz)
-       MEMORY: 8 GB
-        PANIC: "Kernel panic - not syncing: softlockup: hung tasks"
-          PID: 2635
-      COMMAND: "kworker/0:1"
-         TASK: ffff92acd6cf17c0  [THREAD_INFO: ffff92acd6cf17c0]
-          CPU: 0
-        STATE: TASK_RUNNING (PANIC)
- crash> bt
- PID: 2635   TASK: ffff92acd6cf17c0  CPU: 0   COMMAND: "kworker/0:1"
-  #0 [ffff92adf5c03d48] machine_kexec at ffffffffbc05bf3e
-  #1 [ffff92adf5c03da0] __crash_kexec at ffffffffbc16072d
-  #2 [ffff92adf5c03e68] panic at ffffffffbc0b5dc7
-  #3 [ffff92adf5c03ee8] watchdog_timer_fn.cold.8 at ffffffffbc196dc7
-  #4 [ffff92adf5c03f18] __hrtimer_run_queues at ffffffffbc1408a0
-  #5 [ffff92adf5c03f78] hrtimer_interrupt at ffffffffbc141070
-  #6 [ffff92adf5c03fd8] smp_apic_timer_interrupt at ffffffffbca027da
-  #7 [ffff92adf5c03ff0] apic_timer_interrupt at ffffffffbca01d6f
- --- <IRQ stack> ---
-  #8 [ffffa3b342ac3d28] apic_timer_interrupt at ffffffffbca01d6f
-     [exception RIP: smp_call_function_many+492]
-     RIP: ffffffffbc15677c  RSP: ffffa3b342ac3dd0  RFLAGS: 00000202
-     RAX: 0000000000000001  RBX: ffff92adf5c2ae80  RCX: ffff92adf5c704e0
-     RDX: 0000000000000001  RSI: 0000000000000000  RDI: ffff92acc7d536c0
-     RBP: ffffffffbc027610   R8: 000000000002f060   R9: ffffffffbc04fc8a
-     R10: ffffeec908b18a00  R11: 0000000000000000  R12: 0000000000000000
-     R13: 0000000000000001  R14: 0000000000000080  R15: 0000000000000001
-     ORIG_RAX: ffffffffffffff13  CS: 0010  SS: 0018
-  #9 [ffffa3b342ac3e10] on_each_cpu at ffffffffbc156838
- #10 [ffffa3b342ac3e30] text_poke_bp_batch at ffffffffbc027fab
- #11 [ffffa3b342ac3e70] arch_jump_label_transform_apply at ffffffffbc0251be
- #12 [ffffa3b342ac3e78] static_key_enable_cpuslocked at ffffffffbc226422
- #13 [ffffa3b342ac3e88] static_key_enable at ffffffffbc226466
- #14 [ffffa3b342ac3e98] process_one_work at ffffffffbc0d3477
- #15 [ffffa3b342ac3ed8] worker_thread at ffffffffbc0d3b40
- #16 [ffffa3b342ac3f10] kthread at ffffffffbc0d9502
- #17 [ffffa3b342ac3f50] ret_from_fork at ffffffffbca00255
- crash> bt -a
- PID: 2635   TASK: ffff92acd6cf17c0  CPU: 0   COMMAND: "kworker/0:1"
-  #0 [ffff92adf5c03d48] machine_kexec at ffffffffbc05bf3e
-  #1 [ffff92adf5c03da0] __crash_kexec at ffffffffbc16072d
-  #2 [ffff92adf5c03e68] panic at ffffffffbc0b5dc7
-  #3 [ffff92adf5c03ee8] watchdog_timer_fn.cold.8 at ffffffffbc196dc7
-  #4 [ffff92adf5c03f18] __hrtimer_run_queues at ffffffffbc1408a0
-  #5 [ffff92adf5c03f78] hrtimer_interrupt at ffffffffbc141070
-  #6 [ffff92adf5c03fd8] smp_apic_timer_interrupt at ffffffffbca027da
-  #7 [ffff92adf5c03ff0] apic_timer_interrupt at ffffffffbca01d6f
- --- <IRQ stack> ---
-  #8 [ffffa3b342ac3d28] apic_timer_interrupt at ffffffffbca01d6f
-     [exception RIP: smp_call_function_many+492]
-     RIP: ffffffffbc15677c  RSP: ffffa3b342ac3dd0  RFLAGS: 00000202
-     RAX: 0000000000000001  RBX: ffff92adf5c2ae80  RCX: ffff92adf5c704e0
-     RDX: 0000000000000001  RSI: 0000000000000000  RDI: ffff92acc7d536c0
-     RBP: ffffffffbc027610   R8: 000000000002f060   R9: ffffffffbc04fc8a
-     R10: ffffeec908b18a00  R11: 0000000000000000  R12: 0000000000000000
-     R13: 0000000000000001  R14: 0000000000000080  R15: 0000000000000001
-     ORIG_RAX: ffffffffffffff13  CS: 0010  SS: 0018
-  #9 [ffffa3b342ac3e10] on_each_cpu at ffffffffbc156838
- #10 [ffffa3b342ac3e30] text_poke_bp_batch at ffffffffbc027fab
- #11 [ffffa3b342ac3e70] arch_jump_label_transform_apply at ffffffffbc0251be
- #12 [ffffa3b342ac3e78] static_key_enable_cpuslocked at ffffffffbc226422
- #13 [ffffa3b342ac3e88] static_key_enable at ffffffffbc226466
- #14 [ffffa3b342ac3e98] process_one_work at ffffffffbc0d3477
- #15 [ffffa3b342ac3ed8] worker_thread at ffffffffbc0d3b40
- #16 [ffffa3b342ac3f10] kthread at ffffffffbc0d9502
- #17 [ffffa3b342ac3f50] ret_from_fork at ffffffffbca00255
- PID: 2857   TASK: ffff92ad555497c0  CPU: 1   COMMAND: "fio"
-  #0 [fffffe0000032e50] crash_nmi_callback at ffffffffbc04eee3
-  #1 [fffffe0000032e58] nmi_handle at ffffffffbc023703
-  #2 [fffffe0000032eb0] default_do_nmi at ffffffffbc023ade
-  #3 [fffffe0000032ed0] do_nmi at ffffffffbc023cb8
-  #4 [fffffe0000032ef0] end_repeat_nmi at ffffffffbca016d4
-     [exception RIP: native_queued_spin_lock_slowpath+32]
-     RIP: ffffffffbc111190  RSP: ffff92adf5c43e38  RFLAGS: 00000002
-     RAX: 0000000000000001  RBX: 0000000000000246  RCX: 0000000000008801
-     RDX: 0000000000000001  RSI: 0000000000000001  RDI: ffff92adc0fc6400
-     RBP: ffff92acd0a39800   R8: 0000000000000000   R9: 0000000000000000
-     R10: 0000000000000068  R11: 0000000000000000  R12: ffff92ad55426910
-     R13: ffff92adc0fc6400  R14: 000000a494918997  R15: 0000000000008801
-     ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
- --- <NMI exception stack> ---
-  #5 [ffff92adf5c43e38] native_queued_spin_lock_slowpath at ffffffffbc111190
-  #6 [ffff92adf5c43e38] _raw_spin_lock_irqsave at ffffffffbc8cdc12
-  #7 [ffff92adf5c43e48] bfq_finish_requeue_request at ffffffffc0664555 [bfq]
-  #8 [ffff92adf5c43ea0] blk_mq_free_request at ffffffffbc4070ca
-  #9 [ffff92adf5c43ec8] scsi_end_request at ffffffffbc5d0e7a
- #10 [ffff92adf5c43f00] scsi_io_completion at ffffffffbc5d0fc8
- #11 [ffff92adf5c43f48] blk_done_softirq at ffffffffbc405ee1
- #12 [ffff92adf5c43f80] __softirqentry_text_start at ffffffffbcc000e4
- #13 [ffff92adf5c43fe0] irq_exit at ffffffffbc0bc1d7
- #14 [ffff92adf5c43ff0] call_function_single_interrupt at ffffffffbca01e0f
- --- <IRQ stack> ---
- #15 [ffffa3b34223f728] call_function_single_interrupt at ffffffffbca01e0f
-     [exception RIP: bfq_pos_tree_add_move+109]
-     RIP: ffffffffc0667f0d  RSP: ffffa3b34223f7d0  RFLAGS: 00000246
-     RAX: ffff92adc0fc6200  RBX: ffff92acd0b89000  RCX: ffff92acd6900c38
-     RDX: 0000000000000000  RSI: ffff92acd967ba58  RDI: ffff92acd0b89038
-     RBP: ffff92adc0fc6000   R8: 0000000000000000   R9: 0000000000000000
-     R10: 0000000000000000  R11: 0000000000000000  R12: 0000000000000001
-     R13: ffff92adc0fc6000  R14: ffff92acd20593f0  R15: ffff92acd0b89058
-     ORIG_RAX: ffffffffffffff04  CS: 0010  SS: 0018
- #16 [ffffa3b34223f7f8] bfq_remove_request.cold.58 at ffffffffc06680cc [bfq]
- #17 [ffffa3b34223f878] bfq_finish_requeue_request at ffffffffc0664948 [bfq]
- #18 [ffffa3b34223f8d0] blk_mq_free_request at ffffffffbc4070ca
- #19 [ffffa3b34223f8f8] bfq_bio_merge at ffffffffc065da34 [bfq]
- #20 [ffffa3b34223f948] blk_mq_make_request at ffffffffbc40abb0
- #21 [ffffa3b34223f9d8] generic_make_request at ffffffffbc3fe85f
- #22 [ffffa3b34223fa38] submit_bio at ffffffffbc3feadc
- #23 [ffffa3b34223fa78] do_blockdev_direct_IO at ffffffffbc31ee96
- #24 [ffffa3b34223fc78] ext4_direct_IO at ffffffffc0734ea6 [ext4]
- #25 [ffffa3b34223fcf0] generic_file_read_iter at ffffffffbc22da7f
- #26 [ffffa3b34223fd38] aio_read at ffffffffbc3313a5
- #27 [ffffa3b34223fe40] io_submit_one at ffffffffbc33165b
- #28 [ffffa3b34223feb8] __x64_sys_io_submit at ffffffffbc331b82
- #29 [ffffa3b34223ff38] do_syscall_64 at ffffffffbc00419b
- #30 [ffffa3b34223ff50] entry_SYSCALL_64_after_hwframe at ffffffffbca000ad
- PID: 491    TASK: ffff92acd7bf97c0  CPU: 2   COMMAND: "kworker/2:1H"
-  #0 [fffffe000005de50] crash_nmi_callback at ffffffffbc04eee3
-  #1 [fffffe000005de58] nmi_handle at ffffffffbc023703
-  #2 [fffffe000005deb0] default_do_nmi at ffffffffbc023ade
-  #3 [fffffe000005ded0] do_nmi at ffffffffbc023cb8
-  #4 [fffffe000005def0] end_repeat_nmi at ffffffffbca016d4
-     [exception RIP: native_queued_spin_lock_slowpath+32]
-     RIP: ffffffffbc111190  RSP: ffffa3b3414c3d50  RFLAGS: 00000002
-     RAX: 0000000000000001  RBX: ffff92acd7f2bc00  RCX: ffff92adc0fc6428
-     RDX: 0000000000000001  RSI: 0000000000000001  RDI: ffff92adc0fc6400
-     RBP: ffff92adc0fc6000   R8: 0000000000000000   R9: 0000000000000000
-     R10: 0000000000000000  R11: 0000000000000000  R12: 0000000000000000
-     R13: ffff92acd20593f0  R14: ffff92adc0fc6400  R15: ffff92ad55422220
-     ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
- --- <NMI exception stack> ---
-  #5 [ffffa3b3414c3d50] native_queued_spin_lock_slowpath at ffffffffbc111190
-  #6 [ffffa3b3414c3d50] _raw_spin_lock_irq at ffffffffbc8cde43
-  #7 [ffffa3b3414c3d58] bfq_dispatch_request at ffffffffc0661187 [bfq]
-  #8 [ffffa3b3414c3db8] blk_mq_do_dispatch_sched at ffffffffbc40f455
-  #9 [ffffa3b3414c3e10] __blk_mq_sched_dispatch_requests at ffffffffbc40ff89
- #10 [ffffa3b3414c3e70] blk_mq_sched_dispatch_requests at ffffffffbc410010
- #11 [ffffa3b3414c3e80] __blk_mq_run_hw_queue at ffffffffbc407691
- #12 [ffffa3b3414c3e98] process_one_work at ffffffffbc0d3477
- #13 [ffffa3b3414c3ed8] worker_thread at ffffffffbc0d3b40
- #14 [ffffa3b3414c3f10] kthread at ffffffffbc0d9502
- #15 [ffffa3b3414c3f50] ret_from_fork at ffffffffbca00255
- PID: 2861   TASK: ffff92ad55522f80  CPU: 3   COMMAND: "fio"
-  #0 [fffffe0000088e50] crash_nmi_callback at ffffffffbc04eee3
-  #1 [fffffe0000088e58] nmi_handle at ffffffffbc023703
-  #2 [fffffe0000088eb0] default_do_nmi at ffffffffbc023ade
-  #3 [fffffe0000088ed0] do_nmi at ffffffffbc023cb8
-  #4 [fffffe0000088ef0] end_repeat_nmi at ffffffffbca016d4
-     [exception RIP: native_queued_spin_lock_slowpath+32]
-     RIP: ffffffffbc111190  RSP: ffff92adf5cc3e38  RFLAGS: 00000002
-     RAX: 0000000000000001  RBX: 0000000000000246  RCX: 0000000000008801
-     RDX: 0000000000000001  RSI: 0000000000000001  RDI: ffff92adc0fc6400
-     RBP: ffff92acd6900c00   R8: 0000000000000000   R9: 0000000000000000
-     R10: 0000000000000068  R11: 0000000000000000  R12: ffff92acd0b02d10
-     R13: ffff92adc0fc6400  R14: 000000a49490f1bc  R15: 0000000000008801
-     ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
- --- <NMI exception stack> ---
-  #5 [ffff92adf5cc3e38] native_queued_spin_lock_slowpath at ffffffffbc111190
-  #6 [ffff92adf5cc3e38] _raw_spin_lock_irqsave at ffffffffbc8cdc12
-  #7 [ffff92adf5cc3e48] bfq_finish_requeue_request at ffffffffc0664555 [bfq]
-  #8 [ffff92adf5cc3ea0] blk_mq_free_request at ffffffffbc4070ca
-  #9 [ffff92adf5cc3ec8] scsi_end_request at ffffffffbc5d0e7a
- #10 [ffff92adf5cc3f00] scsi_io_completion at ffffffffbc5d0fc8
- #11 [ffff92adf5cc3f48] blk_done_softirq at ffffffffbc405ee1
- #12 [ffff92adf5cc3f80] __softirqentry_text_start at ffffffffbcc000e4
- #13 [ffff92adf5cc3fe0] irq_exit at ffffffffbc0bc1d7
- #14 [ffff92adf5cc3ff0] call_function_single_interrupt at ffffffffbca01e0f
- --- <IRQ stack> ---
- #15 [ffffa3b34225f898] call_function_single_interrupt at ffffffffbca01e0f
-     [exception RIP: bfq_bio_merge+5]
-     RIP: ffffffffc065d955  RSP: ffffa3b34225f948  RFLAGS: 00000286
-     RAX: ffffffffc065d950  RBX: ffff92acd20593f0  RCX: 0000000000000000
-     RDX: ffff92acd967e400  RSI: ffff92ad6e641e00  RDI: ffff92acd7f2bc00
-     RBP: ffffa3b34225fa30   R8: 0000000000000001   R9: 0000000000000001
-     R10: 0000000000008000  R11: 0000000000000040  R12: 0000000000000000
-     R13: ffff92ad6e641e00  R14: 0000000000000001  R15: ffff92ad6e592800
-     ORIG_RAX: ffffffffffffff04  CS: 0010  SS: 0018
- #16 [ffffa3b34225f948] blk_mq_make_request at ffffffffbc40abb0
- #17 [ffffa3b34225f9d8] generic_make_request at ffffffffbc3fe85f
- #18 [ffffa3b34225fa38] submit_bio at ffffffffbc3feadc
- #19 [ffffa3b34225fa78] do_blockdev_direct_IO at ffffffffbc31ee96
- #20 [ffffa3b34225fc78] ext4_direct_IO at ffffffffc0734ea6 [ext4]
- #21 [ffffa3b34225fcf0] generic_file_read_iter at ffffffffbc22da7f
- #22 [ffffa3b34225fd38] aio_read at ffffffffbc3313a5
- #23 [ffffa3b34225fe40] io_submit_one at ffffffffbc33165b
- #24 [ffffa3b34225feb8] __x64_sys_io_submit at ffffffffbc331b82
- #25 [ffffa3b34225ff38] do_syscall_64 at ffffffffbc00419b
- #26 [ffffa3b34225ff50] entry_SYSCALL_64_after_hwframe at ffffffffbca000ad
From this output, PIDs 491 and 2857 look highly suspicious. First, the backtrace of PID 491:
crash> bt 491
PID: 491    TASK: ffff92acd7bf97c0  CPU: 2   COMMAND: "kworker/2:1H"
 #0 [fffffe000005de50] crash_nmi_callback at ffffffffbc04eee3
 #1 [fffffe000005de58] nmi_handle at ffffffffbc023703
 #2 [fffffe000005deb0] default_do_nmi at ffffffffbc023ade
 #3 [fffffe000005ded0] do_nmi at ffffffffbc023cb8
 #4 [fffffe000005def0] end_repeat_nmi at ffffffffbca016d4
    [exception RIP: native_queued_spin_lock_slowpath+32]
    RIP: ffffffffbc111190  RSP: ffffa3b3414c3d50  RFLAGS: 00000002
    RAX: 0000000000000001  RBX: ffff92acd7f2bc00  RCX: ffff92adc0fc6428
    RDX: 0000000000000001  RSI: 0000000000000001  RDI: ffff92adc0fc6400
    RBP: ffff92adc0fc6000   R8: 0000000000000000   R9: 0000000000000000
    R10: 0000000000000000  R11: 0000000000000000  R12: 0000000000000000
    R13: ffff92acd20593f0  R14: ffff92adc0fc6400  R15: ffff92ad55422220
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
--- <NMI exception stack> ---
 #5 [ffffa3b3414c3d50] native_queued_spin_lock_slowpath at ffffffffbc111190
 #6 [ffffa3b3414c3d50] _raw_spin_lock_irq at ffffffffbc8cde43
 #7 [ffffa3b3414c3d58] bfq_dispatch_request at ffffffffc0661187 [bfq]
 #8 [ffffa3b3414c3db8] blk_mq_do_dispatch_sched at ffffffffbc40f455
 #9 [ffffa3b3414c3e10] __blk_mq_sched_dispatch_requests at ffffffffbc40ff89
#10 [ffffa3b3414c3e70] blk_mq_sched_dispatch_requests at ffffffffbc410010
#11 [ffffa3b3414c3e80] __blk_mq_run_hw_queue at ffffffffbc407691
#12 [ffffa3b3414c3e98] process_one_work at ffffffffbc0d3477
#13 [ffffa3b3414c3ed8] worker_thread at ffffffffbc0d3b40
#14 [ffffa3b3414c3f10] kthread at ffffffffbc0d9502
#15 [ffffa3b3414c3f50] ret_from_fork at ffffffffbca00255
PID 2857 deserves the closest analysis; it is the more suspicious of the two:
- PID: 2857   TASK: ffff92ad555497c0  CPU: 1   COMMAND: "fio"
-  #0 [fffffe0000032e50] crash_nmi_callback at ffffffffbc04eee3
-  #1 [fffffe0000032e58] nmi_handle at ffffffffbc023703
-  #2 [fffffe0000032eb0] default_do_nmi at ffffffffbc023ade
-  #3 [fffffe0000032ed0] do_nmi at ffffffffbc023cb8
-  #4 [fffffe0000032ef0] end_repeat_nmi at ffffffffbca016d4
-     [exception RIP: native_queued_spin_lock_slowpath+32]
-     RIP: ffffffffbc111190  RSP: ffff92adf5c43e38  RFLAGS: 00000002
-     RAX: 0000000000000001  RBX: 0000000000000246  RCX: 0000000000008801
-     RDX: 0000000000000001  RSI: 0000000000000001  RDI: ffff92adc0fc6400
-     RBP: ffff92acd0a39800   R8: 0000000000000000   R9: 0000000000000000
-     R10: 0000000000000068  R11: 0000000000000000  R12: ffff92ad55426910
-     R13: ffff92adc0fc6400  R14: 000000a494918997  R15: 0000000000008801
-     ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
- --- <NMI exception stack> ---
-  #5 [ffff92adf5c43e38] native_queued_spin_lock_slowpath at ffffffffbc111190
-  #6 [ffff92adf5c43e38] _raw_spin_lock_irqsave at ffffffffbc8cdc12
-     // spin_lock_irqsave(&bfqd->lock, flags) here fails to take bfqd->lock; from this point on, acquisitions of bfqd->lock fail en masse and the system hangs
-  #7 [ffff92adf5c43e48] bfq_finish_requeue_request at ffffffffc0664555 [bfq]
-  #8 [ffff92adf5c43ea0] blk_mq_free_request at ffffffffbc4070ca
-  #9 [ffff92adf5c43ec8] scsi_end_request at ffffffffbc5d0e7a
- #10 [ffff92adf5c43f00] scsi_io_completion at ffffffffbc5d0fc8
- #11 [ffff92adf5c43f48] blk_done_softirq at ffffffffbc405ee1
- #12 [ffff92adf5c43f80] __softirqentry_text_start at ffffffffbcc000e4
- #13 [ffff92adf5c43fe0] irq_exit at ffffffffbc0bc1d7
- #14 [ffff92adf5c43ff0] call_function_single_interrupt at ffffffffbca01e0f
- --- <IRQ stack> ---
- #15 [ffffa3b34223f728] call_function_single_interrupt at ffffffffbca01e0f
-     // an interrupt fired here, even though interrupts should have been disabled
-     [exception RIP: bfq_pos_tree_add_move+109]
-     RIP: ffffffffc0667f0d  RSP: ffffa3b34223f7d0  RFLAGS: 00000246
-     RAX: ffff92adc0fc6200  RBX: ffff92acd0b89000  RCX: ffff92acd6900c38
-     RDX: 0000000000000000  RSI: ffff92acd967ba58  RDI: ffff92acd0b89038
-     RBP: ffff92adc0fc6000   R8: 0000000000000000   R9: 0000000000000000
-     R10: 0000000000000000  R11: 0000000000000000  R12: 0000000000000001
-     R13: ffff92adc0fc6000  R14: ffff92acd20593f0  R15: ffff92acd0b89058
-     ORIG_RAX: ffffffffffffff04  CS: 0010  SS: 0018
- #16 [ffffa3b34223f7f8] bfq_remove_request.cold.58 at ffffffffc06680cc [bfq]
- #17 [ffffa3b34223f878] bfq_finish_requeue_request at ffffffffc0664948 [bfq]
- #18 [ffffa3b34223f8d0] blk_mq_free_request at ffffffffbc4070ca
-     // frame #19 below: bfq_bio_merge() took bfqd->lock via spin_lock_irq(&bfqd->lock), disabling interrupts
- #19 [ffffa3b34223f8f8] bfq_bio_merge at ffffffffc065da34 [bfq]
- #20 [ffffa3b34223f948] blk_mq_make_request at ffffffffbc40abb0
- #21 [ffffa3b34223f9d8] generic_make_request at ffffffffbc3fe85f
- #22 [ffffa3b34223fa38] submit_bio at ffffffffbc3feadc
- #23 [ffffa3b34223fa78] do_blockdev_direct_IO at ffffffffbc31ee96
- #24 [ffffa3b34223fc78] ext4_direct_IO at ffffffffc0734ea6 [ext4]
- #25 [ffffa3b34223fcf0] generic_file_read_iter at ffffffffbc22da7f
- #26 [ffffa3b34223fd38] aio_read at ffffffffbc3313a5
- #27 [ffffa3b34223fe40] io_submit_one at ffffffffbc33165b
- #28 [ffffa3b34223feb8] __x64_sys_io_submit at ffffffffbc331b82
PID 2857 hung while executing bfq_bio_merge(). Here is the function (abridged to the locking-relevant lines):
```c
static bool bfq_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio)
{
    struct request_queue *q = hctx->queue;
    struct bfq_data *bfqd = q->elevator->elevator_data;
    struct request *free = NULL;
    bool ret;

    /* take bfqd->lock AND disable local interrupts */
    spin_lock_irq(&bfqd->lock);
    ret = blk_mq_sched_try_merge(q, bio, &free);
    if (free)
        blk_mq_free_request(free);
    spin_unlock_irq(&bfqd->lock);

    return ret;
}
```
bfq_bio_merge() first takes bfqd->lock with spin_lock_irq(&bfqd->lock), which also disables local interrupts. Execution then continues down bfq_bio_merge -> blk_mq_free_request -> bfq_finish_requeue_request -> bfq_remove_request -> bfq_pos_tree_add_move, and there an interrupt fires anyway (note RFLAGS: 00000246 at the interrupted frame: the IF bit is set, so interrupts really were enabled). In the interrupt path, bfq_finish_requeue_request -> spin_lock_irqsave(&bfqd->lock, flags) tries to take bfqd->lock a second time and spins forever. All of this happens on CPU 1: the task first acquires bfqd->lock with spin_lock_irq(), then the interrupt handler on the same CPU tries to acquire the very same lock. That can only fail, because it is the same lock being taken twice on one CPU!
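To make the failure mode concrete, here is a minimal sketch of the pattern with hypothetical names (lock_a stands in for bfqd->lock, lock_b for the lock my instrumentation takes); it is illustrative only, not the actual bfq code:

```c
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(lock_a); /* stands in for bfqd->lock */
static DEFINE_SPINLOCK(lock_b); /* stands in for my process_io lock */

/* A callee buried deep in the call chain, like blk_account_io_merge(). */
static void buggy_callee(void)
{
    spin_lock_irq(&lock_b);
    /* ... */
    spin_unlock_irq(&lock_b); /* BUG: unconditionally re-enables IRQs, even
                               * though the caller still holds lock_a and
                               * relies on IRQs staying off */
}

/* Process context, like bfq_bio_merge(). */
static void process_path(void)
{
    spin_lock_irq(&lock_a); /* take lock_a, disable local IRQs */
    buggy_callee();         /* IRQs are back on when this returns */
    /* an IRQ can now preempt us right here, on this CPU ... */
    spin_unlock_irq(&lock_a);
}

/* Interrupt context, like the blk_done_softirq completion path. */
static void irq_path(void)
{
    unsigned long flags;

    /* ... and spin forever: lock_a is already held by the task this
     * handler interrupted on the same CPU => self-deadlock */
    spin_lock_irqsave(&lock_a, flags);
    /* ... */
    spin_unlock_irqrestore(&lock_a, flags);
}
```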
Which raises the big question: spin_lock_irq(&bfqd->lock) in bfq_bio_merge() clearly disables interrupts, so who turned them back on? Something must have re-enabled them for an interrupt to fire during bfq_bio_merge -> blk_mq_free_request -> bfq_finish_requeue_request -> bfq_remove_request -> bfq_pos_tree_add_move, letting the handler run bfq_finish_requeue_request -> spin_lock_irqsave(&bfqd->lock, flags), fail to take bfqd->lock a second time, and hang.
Digging in, it turned out that PID 2857 had first gone through bfq_bio_merge -> blk_mq_sched_try_merge -> attempt_back_merge -> attempt_merge -> blk_account_io_merge, and it was blk_account_io_merge() that unexpectedly re-enabled interrupts, at the line flagged in the comment below:
```c
static void blk_account_io_merge(struct request *req)
{
    if (blk_do_io_stat(req)) {
        struct hd_struct *part;

        part_stat_lock();
        part = req->part;
        part_dec_in_flight(req->q, part, rq_data_dir(req));
        hd_struct_put(part);
        part_stat_unlock();

        if (req->rq_disk && req->rq_disk->process_io.enable && req->p_process_rq_stat) {
            spin_lock_irq(&(req->rq_disk->process_io.process_io_insert_lock));
            list_del(&req->p_process_rq_stat->process_io_insert);
            /* this spin_unlock_irq() re-enables interrupts!!! */
            spin_unlock_irq(&(req->rq_disk->process_io.process_io_insert_lock));
            kmem_cache_free(req->rq_disk->process_io.process_rq_stat_cachep,
                            req->p_process_rq_stat);
            atomic_dec(&(req->rq_disk->process_io.rq_in_queue));
            req->p_process_rq_stat = NULL;
        }
    }
}
```
So it was my own spin_unlock_irq() in blk_account_io_merge() that re-enabled interrupts. Unbelievable: I dug the pit and then fell into it myself. Another wake-up call about knowing the calling context! The caller bfq_bio_merge() runs with spin_lock_irq(&bfqd->lock) held, and deep below it the spin_unlock_irq() in my blk_account_io_merge() hunk silently re-enabled local interrupts, triggering the whole cascade of seemingly impossible bfqd->lock acquisition failures.
The big lesson: spin_lock_irq/spin_unlock_irq are risky to use. The safest choice is spin_lock_irqsave/spin_unlock_irqrestore, because spin_unlock_irqrestore restores the CPU interrupt state that spin_lock_irqsave saved, so this problem cannot occur. spin_unlock_irq, by contrast, unconditionally enables interrupts, regardless of whether they were enabled or disabled before the lock was taken. That corrupts the interrupt state and leads to unpredictable failures: here, interrupts were supposed to stay off, but spin_unlock_irq turned them on, and the machine locked up.
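Applied to my instrumentation hunk above, a minimal sketch of the fix (the process_io fields are from my out-of-tree code shown earlier, not upstream):

```c
if (req->rq_disk && req->rq_disk->process_io.enable && req->p_process_rq_stat) {
    unsigned long flags;

    /* save the caller's IRQ state instead of assuming it */
    spin_lock_irqsave(&(req->rq_disk->process_io.process_io_insert_lock), flags);
    list_del(&req->p_process_rq_stat->process_io_insert);
    /* restore the pre-lock IRQ state: if the caller (e.g. bfq_bio_merge()
     * holding bfqd->lock via spin_lock_irq) had IRQs disabled, they stay
     * disabled here instead of being forced back on */
    spin_unlock_irqrestore(&(req->rq_disk->process_io.process_io_insert_lock), flags);
    kmem_cache_free(req->rq_disk->process_io.process_rq_stat_cachep,
                    req->p_process_rq_stat);
    atomic_dec(&(req->rq_disk->process_io.rq_in_queue));
    req->p_process_rq_stat = NULL;
}
```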