From 2fc3151ccd19200db51a7424c2009fdee862c891 Mon Sep 17 00:00:00 2001
From: Tiejun Chen
Date: Thu, 30 May 2019 17:53:46 -0700
Subject: [PATCH] update -rt to 4.19.37-rt20

Signed-off-by: Tiejun Chen
---
 examples/rt-for-vmware.yml | 2 +-
 kernel/Makefile | 4 +-
 ...-at91-add-TCB-registers-definitions.patch} | 11 +-
 ...rs-Add-a-new-driver-for-the-Atmel-A.patch} | 23 +-
 ...rs-timer-atmel-tcb-add-clockevent-d.patch} | 22 +-
 ...rivers-atmel-pit-make-option-silent.patch} | 12 +-
 ...t91-Implement-clocksource-selection.patch} | 10 +-
 ...nfigs-at91-use-new-TCB-timer-driver.patch} | 14 +-
 ... 0007-ARM-configs-at91-unselect-PIT.patch} | 14 +-
 ...ts-Move-pending-table-allocation-to-.patch | 30 ++-
 ...-convert-worker-lock-to-raw-spinlock.patch | 38 ++--
 ...m-qi-simplify-CGR-allocation-freeing.patch | 32 +--
 ...obustify-CFS-bandwidth-timer-locking.patch | 29 ++-
 ...12-arm-Convert-arm-boot_lock-to-raw.patch} | 98 +++++----
 ...-let-setaffinity-unmask-threaded-EOI.patch | 18 +-
 ...probe-replace-patch_lock-to-raw-lock.patch | 69 ------
 ...rqsave-in-cgroup_rstat_flush_locked.patch} | 14 +-
 .../0015-arm-unwind-use_raw_lock.patch | 83 --------
 ...ize-cookie-hash-table-raw-spinlocks.patch} | 22 +-
 ...bus-include-header-for-get_irq_regs.patch} | 10 +-
 ...e-irqflags.h-for-raw_local_irq_save.patch} | 10 +-
 ...patch => 0018-efi-Allow-efi-runtime.patch} | 12 +-
 ...i-drop-task_lock-from-efi_switch_mm.patch} | 10 +-
 ..._layout-before-altenates-are-applie.patch} | 27 ++-
 ...phandle-cache-outside-of-the-devtre.patch} | 11 +-
 ...ke-quarantine_lock-a-raw_spinlock_t.patch} | 16 +-
 ...pedited-GP-parallelization-cleverne.patch} | 17 +-
 ...kmemleak_lock-to-raw-spinlock-on-RT.patch} | 25 ++-
 ...replace-seqcount_t-with-a-seqlock_t.patch} | 42 ++--
 ...ide-a-pointer-to-the-valid-CPU-mask.patch} | 197 +++++++++++-------
 ...rnel-sched-core-add-migrate_disable.patch} | 36 +++-
 ...able-Add-export_symbol_gpl-for-__mi.patch} | 12 +-
 ...-not-disable-enable-clocks-in-a-row.patch} | 22 +-
 ...-Allow-higher-clock-rates-for-clock.patch} | 32 ++-
 ...1-timekeeping-Split-jiffies-seqlock.patch} | 40 ++--
 ...-signal-Revert-ptrace-preempt-magic.patch} | 12 +-
 ...t-sched-Use-msleep-instead-of-yield.patch} | 12 +-
 ...q-remove-BUG_ON-irqs_disabled-check.patch} | 12 +-
 ...o-no-disable-interrupts-in-giveback.patch} | 14 +-
 ...ovide-PREEMPT_RT_BASE-config-switch.patch} | 10 +-
 ...able-CONFIG_CPUMASK_OFFSTACK-for-RT.patch} | 14 +-
 ...bel-disable-if-stop_machine-is-used.patch} | 12 +-
 ...onfig-options-which-are-not-RT-comp.patch} | 15 +-
 ...h => 0040-lockdep-disable-self-test.patch} | 10 +-
 ...ch => 0041-mm-Allow-only-slub-on-RT.patch} | 11 +-
 ...ocking-Disable-spin-on-owner-for-RT.patch} | 13 +-
 ...43-rcu-Disable-RCU_FAST_NO_HZ-on-RT.patch} | 11 +-
 ...44-rcu-make-RCU_BOOST-default-on-RT.patch} | 10 +-
 ...Disable-CONFIG_RT_GROUP_SCHED-on-RT.patch} | 10 +-
 ...6-net-core-disable-NET_RX_BUSY_POLL.patch} | 13 +-
 ...047-arm-disable-NEON-in-kernel-mode.patch} | 22 +-
 ...048-powerpc-Use-generic-rwsem-on-RT.patch} | 10 +-
 ...le-in-kernel-MPIC-emulation-for-PRE.patch} | 11 +-
 ... 0050-powerpc-Disable-highmem-on-RT.patch} | 10 +-
 ... => 0051-mips-Disable-highmem-on-RT.patch} | 10 +-
 ...6-Use-generic-rwsem_spinlocks-on-rt.patch} | 11 +-
 ...s-trigger-disable-CPU-trigger-on-RT.patch} | 10 +-
 ...op-K8-s-driver-from-beeing-selected.patch} | 10 +-
 ...che.patch => 0055-md-disable-bcache.patch} | 13 +-
 ...-efi-Disable-runtime-services-on-RT.patch} | 10 +-
 ...057-printk-Add-a-printk-kill-switch.patch} | 28 ++-
 ...early_printk-boot-param-to-help-wit.patch} | 15 +-
 ...t-Provide-preempt_-_-no-rt-variants.patch} | 11 +-
 ...migrate_disable-enable-in-different.patch} | 17 +-
 ...atch => 0061-rt-Add-local-irq-locks.patch} | 16 +-
 ...rovide-get-put-_locked_ptr-variants.patch} | 14 +-
 ...atterlist-Do-not-disable-irqs-on-RT.patch} | 12 +-
 ...x86-Delay-calling-signals-in-atomic.patch} | 29 ++-
 ...gnal-delay-calling-signals-on-32bit.patch} | 10 +-
 ...ead-Replace-bh_uptodate_lock-for-rt.patch} | 44 ++--
 ...state-lock-and-journal-head-lock-rt.patch} | 19 +-
 ...t_bl-Make-list-head-locking-RT-safe.patch} | 12 +-
 ...list_bl-fixup-bogus-lockdep-warning.patch} | 10 +-
 ...> 0070-genirq-Disable-irqpoll-on-rt.patch} | 13 +-
 ...genirq-Force-interrupt-thread-on-RT.patch} | 20 +-
 ...-zone-lock-while-freeing-pages-from.patch} | 26 ++-
 ...-zone-lock-while-freeing-pages-from.patch} | 30 +--
 ...-change-list_lock-to-raw_spinlock_t.patch} | 132 ++++++------
 ...ing-back-empty-slubs-to-IRQ-enabled.patch} | 42 ++--
 ...age_alloc-rt-friendly-per-cpu-pages.patch} | 34 +--
 ...77-mm-swap-Convert-to-percpu-locked.patch} | 47 +++--
 ...-perform-lru_add_drain_all-remotely.patch} | 30 +--
 ...-per-cpu-variables-with-preempt-dis.patch} | 44 ++--
 ...lit-page-table-locks-for-vector-pag.patch} | 13 +-
 ...patch => 0081-mm-Enable-SLUB-for-RT.patch} | 14 +-
 ...082-slub-Enable-irqs-for-__GFP_WAIT.patch} | 16 +-
 ... 0083-slub-Disable-SLUB_CPU_PARTIAL.patch} | 10 +-
 ...-t-call-schedule_work_on-in-preempt.patch} | 16 +-
 ...lace-local_irq_disable-with-local-l.patch} | 25 ++-
 ...c-copy-with-get_cpu_var-and-locking.patch} | 24 ++-
 ...e-preemption-__split_large_page-aft.patch} | 23 +-
 ... => 0088-radix-tree-use-local-locks.patch} | 36 ++--
 ...-timers-Prepare-for-full-preemption.patch} | 37 ++--
 ...90-x86-kvm-Require-const-tsc-for-RT.patch} | 14 +-
 ...c-Don-t-use-completion-s-wait-queue.patch} | 24 ++-
 ...tch => 0092-wait.h-include-atomic.h.patch} | 13 +-
 ...ple-Simple-work-queue-implemenation.patch} | 22 +-
 ...a-shit-statement-in-SWORK_EVENT_PEN.patch} | 11 +-
 ...5-completion-Use-simple-wait-queues.patch} | 86 +++++---
 ...h => 0096-fs-aio-simple-simple-work.patch} | 14 +-
 ...oke-the-affinity-callback-via-a-wor.patch} | 23 +-
 ...d-schedule_work-with-interrupts-dis.patch} | 17 +-
 ...te-hrtimer_init-hrtimer_init_sleepe.patch} | 67 +++---
 ...00-hrtimers-Prepare-full-preemption.patch} | 65 +++---
 ...-by-default-into-the-softirq-contex.patch} | 75 ++++---
 ...ir-Make-the-hrtimers-non-hard-again.patch} | 12 +-
 ...schedule_work-call-to-helper-thread.patch} | 10 +-
 ...e-change-before-hrtimer_cancel-in-d.patch} | 12 +-
 ...imers-Thread-posix-cpu-timers-on-rt.patch} | 31 ++-
 ...hed-Move-task_struct-cleanup-to-RCU.patch} | 24 ++-
 ...number-of-task-migrations-per-batch.patch} | 14 +-
 ...0108-sched-Move-mmdrop-to-RCU-on-RT.patch} | 40 ++--
 ...-stack-kprobe-clean-up-to-__put_tas.patch} | 18 +-
 ...tate-for-tasks-blocked-on-sleeping-.patch} | 27 ++-
 ...unt-rcu_preempt_depth-on-RT-in-migh.patch} | 21 +-
 ...proper-LOCK_OFFSET-for-cond_resched.patch} | 10 +-
 ...0113-sched-Disable-TTWU_QUEUE-on-RT.patch} | 10 +-
 ...nly-wake-up-idle-workers-if-not-blo.patch} | 13 +-
 ...ase-the-nr-of-migratory-tasks-when-.patch} | 17 +-
 ...hotplug-Lightweight-get-online-cpus.patch} | 24 ++-
 ...-disabled-counter-to-tracing-output.patch} | 30 ++-
 ...ch => 0118-lockdep-Make-it-RT-aware.patch} | 26 ++-
 ...asklets-from-going-into-infinite-sp.patch} | 40 ++--
 ...emption-after-reenabling-interrupts.patch} | 51 +++--
 ...ftirq-Disable-softirq-stacks-for-RT.patch} | 48 +++--
 ...=> 0122-softirq-Split-softirq-locks.patch} | 64 ++++--
 ...use-local_bh_disable-in-netif_rx_ni.patch} | 12 +-
 ...bling-of-softirq-processing-in-irq-.patch} | 34 ++-
 ...lit-timer-softirqs-out-of-ksoftirqd.patch} | 25 ++-
 ...al_softirq_pending-messages-if-ksof.patch} | 22 +-
 ...al_softirq_pending-messages-if-task.patch} | 14 +-
 ... 0128-rtmutex-trylock-is-okay-on-RT.patch} | 14 +-
 ...nfs-turn-rmdir_sem-into-a-semaphore.patch} | 34 ++-
 ...e-various-new-futex-race-conditions.patch} | 44 ++--
 ...n-when-a-requeued-RT-task-times-out.patch} | 20 +-
 ...-unlock-symetry-versus-pi_lock-and-.patch} | 13 +-
 ...atch => 0133-pid.h-include-atomic.h.patch} | 10 +-
 ...rm-include-definition-for-cpumask_t.patch} | 12 +-
 ...re-Do-NOT-include-rwlock.h-directly.patch} | 11 +-
 ...6-rtmutex-Add-rtmutex_lock_killable.patch} | 26 ++-
 ...137-rtmutex-Make-lock_killable-work.patch} | 12 +-
 ...pinlock-Split-the-lock-types-header.patch} | 26 ++-
 ... => 0139-rtmutex-Avoid-include-hell.patch} | 10 +-
 ...rbtree-don-t-include-the-rcu-header.patch} | 32 ++-
 ...ex-Provide-rt_mutex_slowlock_locked.patch} | 22 +-
 ...ckdep-less-version-of-rt_mutex-s-lo.patch} | 28 ++-
 ...ex-add-sleeping-lock-implementation.patch} | 116 +++++++----
 ...tex-implementation-based-on-rtmutex.patch} | 16 +-
 ...sem-implementation-based-on-rtmutex.patch} | 16 +-
 ...ock-implementation-based-on-rtmutex.patch} | 21 +-
 ...preserve-state-like-a-sleeping-lock.patch} | 12 +-
 ...> 0148-rtmutex-wire-up-RT-s-locking.patch} | 64 ++++--
 ...tex-add-ww_mutex-addon-for-mutex-rt.patch} | 46 ++--
 ...=> 0150-kconfig-Add-PREEMPT_RT_FULL.patch} | 20 +-
 ...fix-deadlock-in-device-mapper-block.patch} | 15 +-
 ...tex-Flush-block-plug-on-__down_read.patch} | 12 +-
 ...e-init-the-wait_lock-in-rt_mutex_in.patch} | 12 +-
 ...ce-fix-ptrace-vs-tasklist_lock-race.patch} | 28 ++-
 ...utex-annotate-sleeping-lock-context.patch} | 56 +++--
 ...able-fallback-to-preempt_disable-in.patch} | 38 ++--
 ...ck-for-__LINUX_SPINLOCK_TYPES_H-on-.patch} | 48 +++--
 ...patch => 0158-rcu-Frob-softirq-test.patch} | 38 ++--
 ...9-rcu-Merge-RCU-bh-into-RCU-preempt.patch} | 79 ++++---
 ...e-ksoftirqd-do-RCU-quiescent-states.patch} | 23 +-
 ...ate-softirq-processing-from-rcutree.patch} | 73 ++++---
 ...use-cpu_online-instead-custom-check.patch} | 28 ++-
 ...lace-local_irqsave-with-a-locallock.patch} | 18 +-
 ...normal_after_boot-by-default-for-RT.patch} | 12 +-
 ...rial-omap-Make-the-locking-RT-aware.patch} | 14 +-
 ...l-pl011-Make-the-locking-work-on-RT.patch} | 16 +-
 ...explicitly-initialize-the-flags-var.patch} | 13 +-
 ...prove-the-serial-console-PASS_LIMIT.patch} | 17 +-
 ...-don-t-take-the-trylock-during-oops.patch} | 12 +-
 ...sem-Remove-preempt_disable-variants.patch} | 49 +++--
 ...te_mm-by-preempt_-disable-enable-_r.patch} | 17 +-
 ...ack-explicit-INIT_HLIST_BL_HEAD-ini.patch} | 15 +-
 ...-preemption-on-i_dir_seq-s-write-si.patch} | 37 ++--
 ...-of-local-lock-in-multi_cpu-decompr.patch} | 12 +-
 ...mal-Defer-thermal-wakups-to-threads.patch} | 20 +-
 ...-preemption-around-local_bh_disable.patch} | 12 +-
 ...oll-Do-not-disable-preemption-on-RT.patch} | 14 +-
 ...r-preempt-disable-region-which-suck.patch} | 21 +-
 ...atch => 0179-block-mq-use-cpu_light.patch} | 12 +-
 ...ck-mq-do-not-invoke-preempt_disable.patch} | 16 +-
 ...-mq-don-t-complete-requests-via-IPI.patch} | 32 ++-
 ...Make-raid5_percpu-handling-RT-aware.patch} | 23 +-
 ...atch => 0183-rt-Introduce-cpu_chill.patch} | 20 +-
 ...timer-Don-t-lose-state-in-cpu_chill.patch} | 10 +-
 ...hill-save-task-state-in-saved_state.patch} | 13 +-
 ...-blk_queue_usage_counter_release-in.patch} | 18 +-
 ...block-Use-cpu_chill-for-retry-loops.patch} | 15 +-
 ...ache-Use-cpu_chill-in-trylock-loops.patch} | 19 +-
 ...-Use-cpu_chill-instead-of-cpu_relax.patch} | 23 +-
 ...se-swait_queue-instead-of-waitqueue.patch} | 74 ++++---
 ...ch => 0191-workqueue-Use-normal-rcu.patch} | 54 ++---
 ...al-irq-lock-instead-of-irq-disable-.patch} | 33 +--
 ...-workqueue-versus-ata-piix-livelock.patch} | 14 +-
 ...angle-worker-accounting-from-rqlock.patch} | 52 +++--
 ... => 0195-debugobjects-Make-RT-aware.patch} | 12 +-
 ... 0196-seqlock-Prevent-rt-starvation.patch} | 28 ++-
 ...c_xprt_do_enqueue-use-get_cpu_light.patch} | 15 +-
 ...198-net-Use-skbufhead-with-raw-lock.patch} | 38 ++--
 ...ecursion-to-per-task-variable-on-RT.patch} | 48 +++--
 ...-to-delegate-processing-a-softirq-t.patch} | 30 ++-
 ...ke-qdisc-s-busylock-in-__dev_xmit_s.patch} | 13 +-
 ...disc-use-a-seqlock-instead-seqcount.patch} | 65 ++++--
 ...missing-serialization-in-ip_send_un.patch} | 20 +-
 ... 0204-net-add-a-lock-around-icmp_sk.patch} | 16 +-
 ...chedule_irqoff-disable-interrupts-o.patch} | 22 +-
 ...push-most-work-into-softirq-context.patch} | 50 +++--
 ....patch => 0207-printk-Make-rt-aware.patch} | 20 +-
 ...-t-try-to-print-from-IRQ-NMI-region.patch} | 12 +-
 ...ntk-Drop-the-logbuf_lock-more-often.patch} | 18 +-
 ...-translation-section-permission-fau.patch} | 18 +-
 ...irq_set_irqchip_state-documentation.patch} | 12 +-
 ...ngrade-preempt_disable-d-region-to-.patch} | 17 +-
 ...preemp_disable-in-addition-to-local.patch} | 26 ++-
 ...4-kgdb-serial-Short-term-workaround.patch} | 25 ++-
 ...sysfs-Add-sys-kernel-realtime-entry.patch} | 20 +-
 ...> 0216-mm-rt-kmap_atomic-scheduling.patch} | 54 +++--
 ...ighmem-Add-a-already-used-pte-check.patch} | 12 +-
 ...0218-arm-highmem-Flush-tlb-on-unmap.patch} | 10 +-
 ...h => 0219-arm-Enable-highmem-for-rt.patch} | 22 +-
 ...tch => 0220-scsi-fcoe-Make-RT-aware.patch} | 32 ++-
 ...pto-Reduce-preempt-disabled-regions.patch} | 20 +-
 ...preempt-disabled-regions-more-algos.patch} | 50 +++--
 ...pto-limit-more-FPU-enabled-sections.patch} | 20 +-
 ...serialize-RT-percpu-scratch-buffer-.patch} | 18 +-
 ...-a-lock-instead-preempt_disable-loc.patch} | 20 +-
 ...ndom_bytes-for-RT_FULL-in-init_oops.patch} | 11 +-
 ...ckprotector-Avoid-random-pool-on-rt.patch} | 13 +-
 ...h => 0228-random-Make-it-work-on-rt.patch} | 47 +++--
 ...om-avoid-preempt_disable-ed-section.patch} | 12 +-
 ...0-cpu-hotplug-Implement-CPU-pinning.patch} | 22 +-
 ...d-user-tasks-to-be-awakened-to-the-.patch} | 14 +-
 ...uct-tape-RT-rwlock-usage-for-non-RT.patch} | 18 +-
 ...ve-preemption-disabling-in-netif_rx.patch} | 16 +-
 ...-local_irq_disable-kmalloc-headache.patch} | 14 +-
 ...users-of-napi_alloc_cache-against-r.patch} | 18 +-
 ...ialize-xt_write_recseq-sections-on-.patch} | 22 +-
 ...dd-a-mutex-around-devnet_rename_seq.patch} | 24 ++-
 ...Only-do-hardirq-context-test-for-ra.patch} | 15 +-
 ...fix-warnings-due-to-missing-PREEMPT.patch} | 31 +--
 ...hed-Add-support-for-lazy-preemption.patch} | 114 ++++++----
 ...1-ftrace-Fix-trace-header-alignment.patch} | 12 +-
 ...242-x86-Support-for-lazy-preemption.patch} | 44 ++--
 ...properly-check-against-preempt-mask.patch} | 13 +-
 ...use-proper-return-label-on-32bit-x8.patch} | 11 +-
 ...arm-Add-support-for-lazy-preemption.patch} | 46 ++--
 ...rpc-Add-support-for-lazy-preemption.patch} | 50 +++--
 ...arch-arm64-Add-lazy-preempt-support.patch} | 38 ++--
 ...-Protect-send_msg-with-a-local-lock.patch} | 16 +-
 ...m-Replace-bit-spinlocks-with-rtmute.patch} | 20 +-
 ...t-disable-preemption-in-zcomp_strea.patch} | 28 ++-
 ...zcomp_stream_get-smp_processor_id-u.patch} | 16 +-
 ...2-tpm_tis-fix-stall-after-iowrite-s.patch} | 16 +-
 ...-deferral-of-watchdogd-wakeup-on-RT.patch} | 18 +-
 ...se-preempt_disable-enable_rt-where-.patch} | 23 +-
 ...l_lock-unlock_irq-in-intel_pipe_upd.patch} | 22 +-
 ...0256-drm-i915-disable-tracing-on-RT.patch} | 10 +-
 ..._I915_LOW_LEVEL_TRACEPOINTS-with-NO.patch} | 13 +-
 ...oups-use-simple-wait-in-css_release.patch} | 20 +-
 ...ert-callback_lock-to-raw_spinlock_t.patch} | 42 ++--
 ...a-locallock-instead-preempt_disable.patch} | 18 +-
 ...kqueue-Prevent-deadlock-stall-on-RT.patch} | 35 ++--
 ...-tasks-to-cache-one-sigqueue-struct.patch} | 53 +++--
 ...0263-Add-localversion-for-RT-release.patch | 21 ++
 ...iommu-Use-a-locallock-instead-local_.patch | 96 +++++++++
 .../patches-4.19.x-rt/0265-localversion.patch | 13 --
 .../0265-powerpc-reshuffle-TIF-bits.patch | 151 ++++++++++++++
 ...-Convert-show_lock-to-raw_spinlock_t.patch | 62 ++++++
 ...isable-interrupts-independently-of-t.patch | 50 +++++
 ...-Fix-a-lockup-in-wait_for_completion.patch | 68 ++++++
 .../0269-Linux-4.19.37-rt20-REBASE.patch | 19 ++
 274 files changed, 4968 insertions(+), 2361 deletions(-)
 rename kernel/patches-4.19.x-rt/{0001-0001-ARM-at91-add-TCB-registers-definitions.patch => 0001-ARM-at91-add-TCB-registers-definitions.patch} (95%)
 rename kernel/patches-4.19.x-rt/{0002-0002-clocksource-drivers-Add-a-new-driver-for-the-Atmel-A.patch => 0002-clocksource-drivers-Add-a-new-driver-for-the-Atmel-A.patch} (94%)
 rename kernel/patches-4.19.x-rt/{0003-0003-clocksource-drivers-timer-atmel-tcb-add-clockevent-d.patch => 0003-clocksource-drivers-timer-atmel-tcb-add-clockevent-d.patch} (91%)
 rename kernel/patches-4.19.x-rt/{0004-0004-clocksource-drivers-atmel-pit-make-option-silent.patch => 0004-clocksource-drivers-atmel-pit-make-option-silent.patch} (71%)
 rename kernel/patches-4.19.x-rt/{0005-0005-ARM-at91-Implement-clocksource-selection.patch => 0005-ARM-at91-Implement-clocksource-selection.patch} (82%)
 rename kernel/patches-4.19.x-rt/{0006-0006-ARM-configs-at91-use-new-TCB-timer-driver.patch => 0006-ARM-configs-at91-use-new-TCB-timer-driver.patch} (65%)
 rename kernel/patches-4.19.x-rt/{0007-0007-ARM-configs-at91-unselect-PIT.patch => 0007-ARM-configs-at91-unselect-PIT.patch} (69%)
 rename kernel/patches-4.19.x-rt/{0012-arm-convert-boot-lock-to-raw.patch => 0012-arm-Convert-arm-boot_lock-to-raw.patch} (71%)
 delete mode 100644 kernel/patches-4.19.x-rt/0014-arm-kprobe-replace-patch_lock-to-raw-lock.patch
 rename kernel/patches-4.19.x-rt/{0016-cgroup-use-irqsave-in-cgroup_rstat_flush_locked.patch => 0014-cgroup-use-irqsave-in-cgroup_rstat_flush_locked.patch} (75%)
 delete mode 100644 kernel/patches-4.19.x-rt/0015-arm-unwind-use_raw_lock.patch
 rename kernel/patches-4.19.x-rt/{0017-fscache-initialize-cookie-hash-table-raw-spinlocks.patch => 0015-fscache-initialize-cookie-hash-table-raw-spinlocks.patch} (67%)
 rename kernel/patches-4.19.x-rt/{0018-Drivers-hv-vmbus-include-header-for-get_irq_regs.patch => 0016-Drivers-hv-vmbus-include-header-for-get_irq_regs.patch} (78%)
 rename kernel/patches-4.19.x-rt/{0019-percpu-include-irqflags.h-for-raw_local_irq_save.patch => 0017-percpu-include-irqflags.h-for-raw_local_irq_save.patch} (69%)
 rename kernel/patches-4.19.x-rt/{0020-efi-Allow-efi-runtime.patch => 0018-efi-Allow-efi-runtime.patch} (65%)
 rename kernel/patches-4.19.x-rt/{0021-x86-efi-drop-task_lock-from-efi_switch_mm.patch => 0019-x86-efi-drop-task_lock-from-efi_switch_mm.patch} (85%)
 rename kernel/patches-4.19.x-rt/{0022-arm64-KVM-compute_layout-before-altenates-are-applie.patch => 0020-arm64-KVM-compute_layout-before-altenates-are-applie.patch} (64%)
 rename kernel/patches-4.19.x-rt/{0023-of-allocate-free-phandle-cache-outside-of-the-devtre.patch => 0021-of-allocate-free-phandle-cache-outside-of-the-devtre.patch} (88%)
 rename kernel/patches-4.19.x-rt/{0024-mm-kasan-make-quarantine_lock-a-raw_spinlock_t.patch => 0022-mm-kasan-make-quarantine_lock-a-raw_spinlock_t.patch} (84%)
 rename kernel/patches-4.19.x-rt/{0025-EXP-rcu-Revert-expedited-GP-parallelization-cleverne.patch => 0023-EXP-rcu-Revert-expedited-GP-parallelization-cleverne.patch} (72%)
 rename kernel/patches-4.19.x-rt/{0026-kmemleak-Turn-kmemleak_lock-to-raw-spinlock-on-RT.patch => 0024-kmemleak-Turn-kmemleak_lock-to-raw-spinlock-on-RT.patch} (84%)
 rename kernel/patches-4.19.x-rt/{0027-NFSv4-replace-seqcount_t-with-a-seqlock_t.patch => 0025-NFSv4-replace-seqcount_t-with-a-seqlock_t.patch} (75%)
 rename kernel/patches-4.19.x-rt/{0028-kernel-sched-Provide-a-pointer-to-the-valid-CPU-mask.patch => 0026-kernel-sched-Provide-a-pointer-to-the-valid-CPU-mask.patch} (74%)
 rename kernel/patches-4.19.x-rt/{0029-add_migrate_disable.patch => 0027-kernel-sched-core-add-migrate_disable.patch} (81%)
 rename kernel/patches-4.19.x-rt/{0030-sched-migrate_disable-Add-export_symbol_gpl-for-__mi.patch => 0028-sched-migrate_disable-Add-export_symbol_gpl-for-__mi.patch} (74%)
 rename kernel/patches-4.19.x-rt/{0031-at91_dont_enable_disable_clock.patch => 0029-arm-at91-do-not-disable-enable-clocks-in-a-row.patch} (74%)
 rename kernel/patches-4.19.x-rt/{0032-clocksource-tclib-allow-higher-clockrates.patch => 0030-clocksource-TCLIB-Allow-higher-clock-rates-for-clock.patch} (79%)
 rename kernel/patches-4.19.x-rt/{0033-timekeeping-split-jiffies-lock.patch => 0031-timekeeping-Split-jiffies-seqlock.patch} (73%)
 rename kernel/patches-4.19.x-rt/{0034-signal-revert-ptrace-preempt-magic.patch => 0032-signal-Revert-ptrace-preempt-magic.patch} (68%)
 rename kernel/patches-4.19.x-rt/{0035-net-sched-dev_deactivate_many-use-msleep-1-instead-o.patch => 0033-net-sched-Use-msleep-instead-of-yield.patch} (89%)
 rename kernel/patches-4.19.x-rt/{0036-dm-rq-remove-BUG_ON-irqs_disabled-check.patch => 0034-dm-rq-remove-BUG_ON-irqs_disabled-check.patch} (72%)
 rename kernel/patches-4.19.x-rt/{0037-usb-do-not-disable-interrupts-in-giveback.patch => 0035-usb-do-no-disable-interrupts-in-giveback.patch} (74%)
 rename kernel/patches-4.19.x-rt/{0038-rt-preempt-base-config.patch => 0036-rt-Provide-PREEMPT_RT_BASE-config-switch.patch} (82%)
 rename kernel/patches-4.19.x-rt/{0039-cpumask-disable-offstack-on-rt.patch => 0037-cpumask-Disable-CONFIG_CPUMASK_OFFSTACK-for-RT.patch} (87%)
 rename kernel/patches-4.19.x-rt/{0040-jump-label-rt.patch => 0038-jump-label-disable-if-stop_machine-is-used.patch} (77%)
 rename kernel/patches-4.19.x-rt/{0041-kconfig-disable-a-few-options-rt.patch => 0039-kconfig-Disable-config-options-which-are-not-RT-comp.patch} (68%)
 rename kernel/patches-4.19.x-rt/{0042-lockdep-disable-self-test.patch => 0040-lockdep-disable-self-test.patch} (79%)
 rename kernel/patches-4.19.x-rt/{0043-mm-disable-sloub-rt.patch => 0041-mm-Allow-only-slub-on-RT.patch} (76%)
 rename kernel/patches-4.19.x-rt/{0044-mutex-no-spin-on-rt.patch => 0042-locking-Disable-spin-on-owner-for-RT.patch} (68%)
 rename kernel/patches-4.19.x-rt/{0045-rcu-disable-rcu-fast-no-hz-on-rt.patch => 0043-rcu-Disable-RCU_FAST_NO_HZ-on-RT.patch} (72%)
 rename kernel/patches-4.19.x-rt/{0046-rcu-make-RCU_BOOST-default-on-RT.patch => 0044-rcu-make-RCU_BOOST-default-on-RT.patch} (77%)
 rename kernel/patches-4.19.x-rt/{0047-sched-disable-rt-group-sched-on-rt.patch => 0045-sched-Disable-CONFIG_RT_GROUP_SCHED-on-RT.patch} (75%)
 rename kernel/patches-4.19.x-rt/{0048-net_disable_NET_RX_BUSY_POLL.patch => 0046-net-core-disable-NET_RX_BUSY_POLL.patch} (71%)
 rename kernel/patches-4.19.x-rt/{0049-arm-disable-NEON-in-kernel-mode.patch => 0047-arm-disable-NEON-in-kernel-mode.patch} (87%)
 rename kernel/patches-4.19.x-rt/{0050-power-use-generic-rwsem-on-rt.patch => 0048-powerpc-Use-generic-rwsem-on-RT.patch} (65%)
 rename kernel/patches-4.19.x-rt/{0051-powerpc-kvm-Disable-in-kernel-MPIC-emulation-for-PRE.patch => 0049-powerpc-kvm-Disable-in-kernel-MPIC-emulation-for-PRE.patch} (83%)
 rename kernel/patches-4.19.x-rt/{0052-power-disable-highmem-on-rt.patch => 0050-powerpc-Disable-highmem-on-RT.patch} (64%)
 rename kernel/patches-4.19.x-rt/{0053-mips-disable-highmem-on-rt.patch => 0051-mips-Disable-highmem-on-RT.patch} (71%)
 rename kernel/patches-4.19.x-rt/{0054-x86-use-gen-rwsem-spinlocks-rt.patch => 0052-x86-Use-generic-rwsem_spinlocks-on-rt.patch} (69%)
 rename kernel/patches-4.19.x-rt/{0055-leds-trigger-disable-CPU-trigger-on-RT.patch => 0053-leds-trigger-disable-CPU-trigger-on-RT.patch} (83%)
 rename kernel/patches-4.19.x-rt/{0056-cpufreq-drop-K8-s-driver-from-beeing-selected.patch => 0054-cpufreq-drop-K8-s-driver-from-beeing-selected.patch} (79%)
 rename kernel/patches-4.19.x-rt/{0057-md-disable-bcache.patch => 0055-md-disable-bcache.patch} (74%)
 rename kernel/patches-4.19.x-rt/{0058-efi-Disable-runtime-services-on-RT.patch => 0056-efi-Disable-runtime-services-on-RT.patch} (81%)
 rename kernel/patches-4.19.x-rt/{0059-printk-kill.patch => 0057-printk-Add-a-printk-kill-switch.patch} (78%)
 rename kernel/patches-4.19.x-rt/{0060-printk-27force_early_printk-27-boot-param-to-help-with-debugging.patch => 0058-printk-Add-force_early_printk-boot-param-to-help-wit.patch} (63%)
 rename kernel/patches-4.19.x-rt/{0061-preempt-nort-rt-variants.patch => 0059-preempt-Provide-preempt_-_-no-rt-variants.patch} (81%)
 rename kernel/patches-4.19.x-rt/{0062-futex-workaround-migrate_disable-enable-in-different.patch => 0060-futex-workaround-migrate_disable-enable-in-different.patch} (77%)
 rename kernel/patches-4.19.x-rt/{0063-rt-local-irq-lock.patch => 0061-rt-Add-local-irq-locks.patch} (94%)
 rename kernel/patches-4.19.x-rt/{0064-locallock-provide-get-put-_locked_ptr-variants.patch => 0062-locallock-provide-get-put-_locked_ptr-variants.patch} (70%)
 rename kernel/patches-4.19.x-rt/{0065-mm-scatterlist-dont-disable-irqs-on-RT.patch => 0063-mm-scatterlist-Do-not-disable-irqs-on-RT.patch} (62%)
 rename kernel/patches-4.19.x-rt/{0066-oleg-signal-rt-fix.patch => 0064-signal-x86-Delay-calling-signals-in-atomic.patch} (80%)
 rename kernel/patches-4.19.x-rt/{0067-x86-signal-delay-calling-signals-on-32bit.patch => 0065-x86-signal-delay-calling-signals-on-32bit.patch} (82%)
 rename kernel/patches-4.19.x-rt/{0068-fs-replace-bh_uptodate_lock-for-rt.patch => 0066-buffer_head-Replace-bh_uptodate_lock-for-rt.patch} (73%)
 rename kernel/patches-4.19.x-rt/{0069-fs-jbd-replace-bh_state-lock.patch => 0067-fs-jbd-jbd2-Make-state-lock-and-journal-head-lock-rt.patch} (76%)
 rename kernel/patches-4.19.x-rt/{0070-list_bl.h-make-list-head-locking-RT-safe.patch => 0068-list_bl-Make-list-head-locking-RT-safe.patch} (90%)
 rename kernel/patches-4.19.x-rt/{0071-list_bl-fixup-bogus-lockdep-warning.patch => 0069-list_bl-fixup-bogus-lockdep-warning.patch} (90%)
 rename kernel/patches-4.19.x-rt/{0072-genirq-disable-irqpoll-on-rt.patch => 0070-genirq-Disable-irqpoll-on-rt.patch} (72%)
 rename kernel/patches-4.19.x-rt/{0073-genirq-force-threading.patch => 0071-genirq-Force-interrupt-thread-on-RT.patch} (58%)
 rename kernel/patches-4.19.x-rt/{0074-0001-Split-IRQ-off-and-zone-lock-while-freeing-pages-from.patch => 0072-Split-IRQ-off-and-zone-lock-while-freeing-pages-from.patch} (85%)
 rename kernel/patches-4.19.x-rt/{0075-0002-Split-IRQ-off-and-zone-lock-while-freeing-pages-from.patch => 0073-Split-IRQ-off-and-zone-lock-while-freeing-pages-from.patch} (81%)
 rename kernel/patches-4.19.x-rt/{0076-0003-mm-SLxB-change-list_lock-to-raw_spinlock_t.patch => 0074-mm-SLxB-change-list_lock-to-raw_spinlock_t.patch} (73%)
 rename kernel/patches-4.19.x-rt/{0077-0004-mm-SLUB-delay-giving-back-empty-slubs-to-IRQ-enabled.patch => 0075-mm-SLUB-delay-giving-back-empty-slubs-to-IRQ-enabled.patch} (74%)
 rename kernel/patches-4.19.x-rt/{0078-mm-page_alloc-rt-friendly-per-cpu-pages.patch => 0076-mm-page_alloc-rt-friendly-per-cpu-pages.patch} (84%)
 rename kernel/patches-4.19.x-rt/{0079-mm-convert-swap-to-percpu-locked.patch => 0077-mm-swap-Convert-to-percpu-locked.patch} (78%)
 rename kernel/patches-4.19.x-rt/{0080-mm-perform-lru_add_drain_all-remotely.patch => 0078-mm-perform-lru_add_drain_all-remotely.patch} (84%)
 rename kernel/patches-4.19.x-rt/{0081-mm-make-vmstat-rt-aware.patch => 0079-mm-vmstat-Protect-per-cpu-variables-with-preempt-dis.patch} (63%)
 rename kernel/patches-4.19.x-rt/{0082-re-preempt_rt_full-arm-coredump-fails-for-cpu-3e-3d-4.patch => 0080-ARM-Initialize-split-page-table-locks-for-vector-pag.patch} (82%)
 rename kernel/patches-4.19.x-rt/{0083-mm-enable-slub.patch => 0081-mm-Enable-SLUB-for-RT.patch} (63%)
 rename kernel/patches-4.19.x-rt/{0084-slub-enable-irqs-for-no-wait.patch => 0082-slub-Enable-irqs-for-__GFP_WAIT.patch} (67%)
 rename kernel/patches-4.19.x-rt/{0085-slub-disable-SLUB_CPU_PARTIAL.patch => 0083-slub-Disable-SLUB_CPU_PARTIAL.patch} (88%)
 rename kernel/patches-4.19.x-rt/{0086-mm-memcontrol-Don-t-call-schedule_work_on-in-preempt.patch => 0084-mm-memcontrol-Don-t-call-schedule_work_on-in-preempt.patch} (84%)
 rename kernel/patches-4.19.x-rt/{0087-mm-memcontrol-do_not_disable_irq.patch => 0085-mm-memcontrol-Replace-local_irq_disable-with-local-l.patch} (77%)
 rename kernel/patches-4.19.x-rt/{0088-mm_zsmalloc_copy_with_get_cpu_var_and_locking.patch => 0086-mm-zsmalloc-copy-with-get_cpu_var-and-locking.patch} (84%)
 rename kernel/patches-4.19.x-rt/{0089-x86-mm-pat-disable-preemption-__split_large_page-aft.patch => 0087-x86-mm-pat-disable-preemption-__split_large_page-aft.patch} (66%)
 rename kernel/patches-4.19.x-rt/{0090-radix-tree-use-local-locks.patch => 0088-radix-tree-use-local-locks.patch} (77%)
 rename kernel/patches-4.19.x-rt/{0091-timers-prepare-for-full-preemption.patch => 0089-timers-Prepare-for-full-preemption.patch} (76%)
 rename kernel/patches-4.19.x-rt/{0092-x86-kvm-require-const-tsc-for-rt.patch => 0090-x86-kvm-Require-const-tsc-for-RT.patch} (65%)
 rename kernel/patches-4.19.x-rt/{0093-pci-switchtec-Don-t-use-completion-s-wait-queue.patch => 0091-pci-switchtec-Don-t-use-completion-s-wait-queue.patch} (76%)
 rename kernel/patches-4.19.x-rt/{0094-wait.h-include-atomic.h.patch => 0092-wait.h-include-atomic.h.patch} (73%)
 rename kernel/patches-4.19.x-rt/{0095-work-simple-Simple-work-queue-implemenation.patch => 0093-work-simple-Simple-work-queue-implemenation.patch} (87%)
 rename kernel/patches-4.19.x-rt/{0096-work-simple-drop-a-shit-statement-in-SWORK_EVENT_PEN.patch => 0094-work-simple-drop-a-shit-statement-in-SWORK_EVENT_PEN.patch} (73%)
 rename kernel/patches-4.19.x-rt/{0097-completion-use-simple-wait-queues.patch => 0095-completion-Use-simple-wait-queues.patch} (74%)
 rename kernel/patches-4.19.x-rt/{0098-fs-aio-simple-simple-work.patch => 0096-fs-aio-simple-simple-work.patch} (87%)
 rename kernel/patches-4.19.x-rt/{0099-genirq-do-not-invoke-the-affinity-callback-via-a-wor.patch => 0097-genirq-Do-not-invoke-the-affinity-callback-via-a-wor.patch} (78%)
 rename kernel/patches-4.19.x-rt/{0100-time-hrtimer-avoid-schedule_work-with-interrupts-dis.patch => 0098-time-hrtimer-avoid-schedule_work-with-interrupts-dis.patch} (72%)
 rename kernel/patches-4.19.x-rt/{0101-hrtimer-consolidate-hrtimer_init-hrtimer_init_sleepe.patch => 0099-hrtimer-consolidate-hrtimer_init-hrtimer_init_sleepe.patch} (76%)
 rename kernel/patches-4.19.x-rt/{0102-hrtimers-prepare-full-preemption.patch => 0100-hrtimers-Prepare-full-preemption.patch} (72%)
 rename kernel/patches-4.19.x-rt/{0103-hrtimer-by-timers-by-default-into-the-softirq-context.patch => 0101-hrtimer-by-timers-by-default-into-the-softirq-contex.patch} (70%)
 rename kernel/patches-4.19.x-rt/{0104-sched-fair-Make-the-hrtimers-non-hard-again.patch => 0102-sched-fair-Make-the-hrtimers-non-hard-again.patch} (73%)
 rename kernel/patches-4.19.x-rt/{0105-hrtimer-Move-schedule_work-call-to-helper-thread.patch => 0103-hrtimer-Move-schedule_work-call-to-helper-thread.patch} (90%)
 rename kernel/patches-4.19.x-rt/{0106-hrtimer-move-state-change-before-hrtimer_cancel-in-d.patch => 0104-hrtimer-move-state-change-before-hrtimer_cancel-in-d.patch} (82%)
 rename kernel/patches-4.19.x-rt/{0107-posix-timers-thread-posix-cpu-timers-on-rt.patch => 0105-posix-timers-Thread-posix-cpu-timers-on-rt.patch} (87%)
 rename kernel/patches-4.19.x-rt/{0108-sched-delay-put-task.patch => 0106-sched-Move-task_struct-cleanup-to-RCU.patch} (71%)
 rename kernel/patches-4.19.x-rt/{0109-sched-limit-nr-migrate.patch => 0107-sched-Limit-the-number-of-task-migrations-per-batch.patch} (61%)
 rename kernel/patches-4.19.x-rt/{0110-sched-mmdrop-delayed.patch => 0108-sched-Move-mmdrop-to-RCU-on-RT.patch} (69%)
 rename kernel/patches-4.19.x-rt/{0111-kernel-sched-move-stack-kprobe-clean-up-to-__put_tas.patch => 0109-kernel-sched-move-stack-kprobe-clean-up-to-__put_tas.patch} (73%)
 rename kernel/patches-4.19.x-rt/{0112-sched-rt-mutex-wakeup.patch => 0110-sched-Add-saved_state-for-tasks-blocked-on-sleeping-.patch} (74%)
 rename kernel/patches-4.19.x-rt/{0113-sched-might-sleep-do-not-account-rcu-depth.patch => 0111-sched-Do-not-account-rcu_preempt_depth-on-RT-in-migh.patch} (66%)
 rename kernel/patches-4.19.x-rt/{0114-cond-resched-lock-rt-tweak.patch => 0112-sched-Use-the-proper-LOCK_OFFSET-for-cond_resched.patch} (67%)
 rename kernel/patches-4.19.x-rt/{0115-sched-disable-ttwu-queue.patch => 0113-sched-Disable-TTWU_QUEUE-on-RT.patch} (73%)
 rename kernel/patches-4.19.x-rt/{0116-sched-workqueue-Only-wake-up-idle-workers-if-not-blo.patch => 0114-sched-workqueue-Only-wake-up-idle-workers-if-not-blo.patch} (77%)
 rename kernel/patches-4.19.x-rt/{0117-rt-Increase-decrease-the-nr-of-migratory-tasks-when-.patch => 0115-rt-Increase-decrease-the-nr-of-migratory-tasks-when-.patch} (90%)
 rename kernel/patches-4.19.x-rt/{0118-hotplug-light-get-online-cpus.patch => 0116-hotplug-Lightweight-get-online-cpus.patch} (75%)
 rename kernel/patches-4.19.x-rt/{0119-ftrace-migrate-disable-tracing.patch => 0117-trace-Add-migrate-disabled-counter-to-tracing-output.patch} (65%)
 rename kernel/patches-4.19.x-rt/{0120-lockdep-no-softirq-accounting-on-rt.patch => 0118-lockdep-Make-it-RT-aware.patch} (70%)
 rename kernel/patches-4.19.x-rt/{0121-tasklet-rt-prevent-tasklets-from-going-into-infinite-spin-in-rt.patch => 0119-tasklet-Prevent-tasklets-from-going-into-infinite-sp.patch} (88%)
 rename kernel/patches-4.19.x-rt/{0122-softirq-preempt-fix-3-re.patch => 0120-softirq-Check-preemption-after-reenabling-interrupts.patch} (69%)
 rename kernel/patches-4.19.x-rt/{0123-softirq-disable-softirq-stacks-for-rt.patch => 0121-softirq-Disable-softirq-stacks-for-RT.patch} (68%)
 rename kernel/patches-4.19.x-rt/{0124-softirq-split-locks.patch => 0122-softirq-Split-softirq-locks.patch} (91%)
 rename kernel/patches-4.19.x-rt/{0125-net-core-use-local_bh_disable-in-netif_rx_ni.patch => 0123-net-core-use-local_bh_disable-in-netif_rx_ni.patch} (73%)
 rename kernel/patches-4.19.x-rt/{0126-irq-allow-disabling-of-softirq-processing-in-irq-thread-context.patch => 0124-genirq-Allow-disabling-of-softirq-processing-in-irq-.patch} (78%)
 rename kernel/patches-4.19.x-rt/{0127-softirq-split-timer-softirqs-out-of-ksoftirqd.patch => 0125-softirq-split-timer-softirqs-out-of-ksoftirqd.patch} (90%)
 rename kernel/patches-4.19.x-rt/{0128-softirq-Avoid-local_softirq_pending-messages-if-ksof.patch => 0126-softirq-Avoid-local_softirq_pending-messages-if-ksof.patch} (85%)
 rename kernel/patches-4.19.x-rt/{0129-softirq-Avoid-local_softirq_pending-messages-if-task.patch => 0127-softirq-Avoid-local_softirq_pending-messages-if-task.patch} (75%)
 rename kernel/patches-4.19.x-rt/{0130-rtmutex-trylock-is-okay-on-RT.patch => 0128-rtmutex-trylock-is-okay-on-RT.patch} (61%)
 rename kernel/patches-4.19.x-rt/{0131-fs-nfs-turn-rmdir_sem-into-a-semaphore.patch => 0129-fs-nfs-turn-rmdir_sem-into-a-semaphore.patch} (75%)
 rename kernel/patches-4.19.x-rt/{0132-rtmutex-futex-prepare-rt.patch => 0130-rtmutex-Handle-the-various-new-futex-race-conditions.patch} (80%)
 rename kernel/patches-4.19.x-rt/{0133-futex-requeue-pi-fix.patch => 0131-futex-Fix-bug-on-when-a-requeued-RT-task-times-out.patch} (83%)
 rename kernel/patches-4.19.x-rt/{0134-futex-Ensure-lock-unlock-symetry-versus-pi_lock-and-.patch => 0132-futex-Ensure-lock-unlock-symetry-versus-pi_lock-and-.patch} (78%)
 rename kernel/patches-4.19.x-rt/{0135-pid.h-include-atomic.h.patch => 0133-pid.h-include-atomic.h.patch} (78%)
 rename kernel/patches-4.19.x-rt/{0136-arm-include-definition-for-cpumask_t.patch => 0134-arm-include-definition-for-cpumask_t.patch} (65%)
 rename kernel/patches-4.19.x-rt/{0137-locking-locktorture-Do-NOT-include-rwlock.h-directly.patch => 0135-locking-locktorture-Do-NOT-include-rwlock.h-directly.patch} (71%)
 rename kernel/patches-4.19.x-rt/{0138-rtmutex-lock-killable.patch => 0136-rtmutex-Add-rtmutex_lock_killable.patch} (65%)
 rename kernel/patches-4.19.x-rt/{0139-rtmutex-Make-lock_killable-work.patch => 0137-rtmutex-Make-lock_killable-work.patch} (73%)
 rename kernel/patches-4.19.x-rt/{0140-spinlock-types-separate-raw.patch => 0138-spinlock-Split-the-lock-types-header.patch} (83%)
 rename kernel/patches-4.19.x-rt/{0141-rtmutex-avoid-include-hell.patch => 0139-rtmutex-Avoid-include-hell.patch} (68%)
 rename kernel/patches-4.19.x-rt/{0142-rtmutex_dont_include_rcu.patch => 0140-rbtree-don-t-include-the-rcu-header.patch} (85%)
 rename kernel/patches-4.19.x-rt/{0143-rtmutex-Provide-rt_mutex_slowlock_locked.patch => 0141-rtmutex-Provide-rt_mutex_slowlock_locked.patch} (85%)
 rename kernel/patches-4.19.x-rt/{0144-rtmutex-export-lockdep-less-version-of-rt_mutex-s-lo.patch => 0142-rtmutex-export-lockdep-less-version-of-rt_mutex-s-lo.patch} (81%)
 rename kernel/patches-4.19.x-rt/{0145-rtmutex-add-sleeping-lock-implementation.patch => 0143-rtmutex-add-sleeping-lock-implementation.patch} (88%)
 rename kernel/patches-4.19.x-rt/{0146-rtmutex-add-mutex-implementation-based-on-rtmutex.patch => 0144-rtmutex-add-mutex-implementation-based-on-rtmutex.patch} (95%)
 rename kernel/patches-4.19.x-rt/{0147-rtmutex-add-rwsem-implementation-based-on-rtmutex.patch => 0145-rtmutex-add-rwsem-implementation-based-on-rtmutex.patch} (95%)
 rename kernel/patches-4.19.x-rt/{0148-rtmutex-add-rwlock-implementation-based-on-rtmutex.patch => 0146-rtmutex-add-rwlock-implementation-based-on-rtmutex.patch} (95%)
 rename kernel/patches-4.19.x-rt/{0149-rtmutex-rwlock-preserve-state-like-a-sleeping-lock.patch => 0147-rtmutex-rwlock-preserve-state-like-a-sleeping-lock.patch} (69%)
 rename kernel/patches-4.19.x-rt/{0150-rtmutex-wire-up-RT-s-locking.patch => 0148-rtmutex-wire-up-RT-s-locking.patch} (69%)
 rename kernel/patches-4.19.x-rt/{0151-rtmutex-add-ww_mutex-addon-for-mutex-rt.patch => 0149-rtmutex-add-ww_mutex-addon-for-mutex-rt.patch} (87%)
 rename kernel/patches-4.19.x-rt/{0152-kconfig-preempt-rt-full.patch => 0150-kconfig-Add-PREEMPT_RT_FULL.patch} (72%)
 rename kernel/patches-4.19.x-rt/{0153-locking-rt-mutex-fix-deadlock-in-device-mapper-block.patch => 0151-locking-rt-mutex-fix-deadlock-in-device-mapper-block.patch} (83%)
 rename kernel/patches-4.19.x-rt/{0154-locking-rt-mutex-Flush-block-plug-on-__down_read.patch => 0152-locking-rt-mutex-Flush-block-plug-on-__down_read.patch} (72%)
 rename kernel/patches-4.19.x-rt/{0155-locking-rtmutex-re-init-the-wait_lock-in-rt_mutex_in.patch =>
0153-locking-rtmutex-re-init-the-wait_lock-in-rt_mutex_in.patch} (71%) rename kernel/patches-4.19.x-rt/{0156-ptrace-fix-ptrace-vs-tasklist_lock-race.patch => 0154-ptrace-fix-ptrace-vs-tasklist_lock-race.patch} (81%) rename kernel/patches-4.19.x-rt/{0157-rtmutex-annotate-sleeping-lock-context.patch => 0155-rtmutex-annotate-sleeping-lock-context.patch} (77%) rename kernel/patches-4.19.x-rt/{0158-sched-migrate_disable-fallback-to-preempt_disable-in.patch => 0156-sched-migrate_disable-fallback-to-preempt_disable-in.patch} (80%) rename kernel/patches-4.19.x-rt/{0159-locking-don-t-check-for-__LINUX_SPINLOCK_TYPES_H-on-.patch => 0157-locking-don-t-check-for-__LINUX_SPINLOCK_TYPES_H-on-.patch} (66%) rename kernel/patches-4.19.x-rt/{0160-peter_zijlstra-frob-rcu.patch => 0158-rcu-Frob-softirq-test.patch} (93%) rename kernel/patches-4.19.x-rt/{0161-rcu-merge-rcu-bh-into-rcu-preempt-for-rt.patch => 0159-rcu-Merge-RCU-bh-into-RCU-preempt.patch} (75%) rename kernel/patches-4.19.x-rt/{0162-patch-to-introduce-rcu-bh-qs-where-safe-from-softirq.patch => 0160-rcu-Make-ksoftirqd-do-RCU-quiescent-states.patch} (81%) rename kernel/patches-4.19.x-rt/{0163-rcu-Eliminate-softirq-processing-from-rcutree.patch => 0161-rcu-Eliminate-softirq-processing-from-rcutree.patch} (88%) rename kernel/patches-4.19.x-rt/{0164-srcu-use-cpu_online-instead-custom-check.patch => 0162-srcu-use-cpu_online-instead-custom-check.patch} (74%) rename kernel/patches-4.19.x-rt/{0165-srcu-replace-local_irqsave-with-a-locallock.patch => 0163-srcu-replace-local_irqsave-with-a-locallock.patch} (76%) rename kernel/patches-4.19.x-rt/{0166-rcu-enable-rcu_normal_after_boot-by-default-for-RT.patch => 0164-rcu-enable-rcu_normal_after_boot-by-default-for-RT.patch} (75%) rename kernel/patches-4.19.x-rt/{0167-drivers-tty-fix-omap-lock-crap.patch => 0165-tty-serial-omap-Make-the-locking-RT-aware.patch} (72%) rename kernel/patches-4.19.x-rt/{0168-drivers-tty-pl011-irq-disable-madness.patch => 
0166-tty-serial-pl011-Make-the-locking-work-on-RT.patch} (67%) rename kernel/patches-4.19.x-rt/{0169-tty-serial-pl011-warning-about-uninitialized.patch => 0167-tty-serial-pl011-explicitly-initialize-the-flags-var.patch} (72%) rename kernel/patches-4.19.x-rt/{0170-rt-serial-warn-fix.patch => 0168-rt-Improve-the-serial-console-PASS_LIMIT.patch} (63%) rename kernel/patches-4.19.x-rt/{0171-tty-serial-8250-don-t-take-the-trylock-during-oops.patch => 0169-tty-serial-8250-don-t-take-the-trylock-during-oops.patch} (63%) rename kernel/patches-4.19.x-rt/{0172-peterz-percpu-rwsem-rt.patch => 0170-locking-percpu-rwsem-Remove-preempt_disable-variants.patch} (74%) rename kernel/patches-4.19.x-rt/{0173-mm-protect-activate-switch-mm.patch => 0171-mm-Protect-activate_mm-by-preempt_-disable-enable-_r.patch} (87%) rename kernel/patches-4.19.x-rt/{0174-fs-dcache-bring-back-explicit-INIT_HLIST_BL_HEAD-in.patch => 0172-fs-dcache-bring-back-explicit-INIT_HLIST_BL_HEAD-ini.patch} (79%) rename kernel/patches-4.19.x-rt/{0175-fs-dcache-disable-preemption-on-i_dir_seq-s-write-si.patch => 0173-fs-dcache-disable-preemption-on-i_dir_seq-s-write-si.patch} (72%) rename kernel/patches-4.19.x-rt/{0176-squashfs-make-use-of-local-lock-in-multi_cpu-decompr.patch => 0174-squashfs-make-use-of-local-lock-in-multi_cpu-decompr.patch} (81%) rename kernel/patches-4.19.x-rt/{0177-thermal-Defer-thermal-wakups-to-threads.patch => 0175-thermal-Defer-thermal-wakups-to-threads.patch} (81%) rename kernel/patches-4.19.x-rt/{0178-x86-fpu-Disable-preemption-around-local_bh_disable.patch => 0176-x86-fpu-Disable-preemption-around-local_bh_disable.patch} (65%) rename kernel/patches-4.19.x-rt/{0179-epoll-use-get-cpu-light.patch => 0177-fs-epoll-Do-not-disable-preemption-on-RT.patch} (62%) rename kernel/patches-4.19.x-rt/{0180-mm-vmalloc-use-get-cpu-light.patch => 0178-mm-vmalloc-Another-preempt-disable-region-which-suck.patch} (66%) rename kernel/patches-4.19.x-rt/{0181-block-mq-use-cpu_light.patch => 
0179-block-mq-use-cpu_light.patch} (67%) rename kernel/patches-4.19.x-rt/{0182-block-mq-drop-preempt-disable.patch => 0180-block-mq-do-not-invoke-preempt_disable.patch} (68%) rename kernel/patches-4.19.x-rt/{0183-block-mq-don-t-complete-requests-via-IPI.patch => 0181-block-mq-don-t-complete-requests-via-IPI.patch} (69%) rename kernel/patches-4.19.x-rt/{0184-md-raid5-percpu-handling-rt-aware.patch => 0182-md-raid5-Make-raid5_percpu-handling-RT-aware.patch} (70%) rename kernel/patches-4.19.x-rt/{0185-rt-introduce-cpu-chill.patch => 0183-rt-Introduce-cpu_chill.patch} (85%) rename kernel/patches-4.19.x-rt/{0186-hrtimer-Don-t-lose-state-in-cpu_chill.patch => 0184-hrtimer-Don-t-lose-state-in-cpu_chill.patch} (82%) rename kernel/patches-4.19.x-rt/{0187-hrtimer-cpu_chill-save-task-state-in-saved_state.patch => 0185-hrtimer-cpu_chill-save-task-state-in-saved_state.patch} (83%) rename kernel/patches-4.19.x-rt/{0188-block-blk-mq-move-blk_queue_usage_counter_release-in.patch => 0186-block-blk-mq-move-blk_queue_usage_counter_release-in.patch} (87%) rename kernel/patches-4.19.x-rt/{0189-block-use-cpu-chill.patch => 0187-block-Use-cpu_chill-for-retry-loops.patch} (70%) rename kernel/patches-4.19.x-rt/{0190-fs-dcache-use-cpu-chill-in-trylock-loops.patch => 0188-fs-dcache-Use-cpu_chill-in-trylock-loops.patch} (73%) rename kernel/patches-4.19.x-rt/{0191-net-use-cpu-chill.patch => 0189-net-Use-cpu_chill-instead-of-cpu_relax.patch} (66%) rename kernel/patches-4.19.x-rt/{0192-fs-dcache-use-swait_queue-instead-of-waitqueue.patch => 0190-fs-dcache-use-swait_queue-instead-of-waitqueue.patch} (70%) rename kernel/patches-4.19.x-rt/{0193-workqueue-use-rcu.patch => 0191-workqueue-Use-normal-rcu.patch} (84%) rename kernel/patches-4.19.x-rt/{0194-workqueue-use-locallock.patch => 0192-workqueue-Use-local-irq-lock-instead-of-irq-disable-.patch} (80%) rename kernel/patches-4.19.x-rt/{0195-work-queue-work-around-irqsafe-timer-optimization.patch => 
0193-workqueue-Prevent-workqueue-versus-ata-piix-livelock.patch} (93%) rename kernel/patches-4.19.x-rt/{0196-workqueue-distangle-from-rq-lock.patch => 0194-sched-Distangle-worker-accounting-from-rqlock.patch} (83%) rename kernel/patches-4.19.x-rt/{0197-debugobjects-rt.patch => 0195-debugobjects-Make-RT-aware.patch} (58%) rename kernel/patches-4.19.x-rt/{0198-seqlock-prevent-rt-starvation.patch => 0196-seqlock-Prevent-rt-starvation.patch} (81%) rename kernel/patches-4.19.x-rt/{0199-sunrpc-make-svc_xprt_do_enqueue-use-get_cpu_light.patch => 0197-sunrpc-Make-svc_xprt_do_enqueue-use-get_cpu_light.patch} (81%) rename kernel/patches-4.19.x-rt/{0200-skbufhead-raw-lock.patch => 0198-net-Use-skbufhead-with-raw-lock.patch} (73%) rename kernel/patches-4.19.x-rt/{0201-net-move-xmit_recursion-to-per-task-variable-on-RT.patch => 0199-net-move-xmit_recursion-to-per-task-variable-on-RT.patch} (79%) rename kernel/patches-4.19.x-rt/{0202-net-provide-a-way-to-delegate-processing-a-softirq-t.patch => 0200-net-provide-a-way-to-delegate-processing-a-softirq-t.patch} (73%) rename kernel/patches-4.19.x-rt/{0203-net-dev-always-take-qdisc-s-busylock-in-__dev_xmit_s.patch => 0201-net-dev-always-take-qdisc-s-busylock-in-__dev_xmit_s.patch} (74%) rename kernel/patches-4.19.x-rt/{0204-net-Qdisc-use-a-seqlock-instead-seqcount.patch => 0202-net-Qdisc-use-a-seqlock-instead-seqcount.patch} (77%) rename kernel/patches-4.19.x-rt/{0205-net-add-back-the-missing-serialization-in-ip_send_un.patch => 0203-net-add-back-the-missing-serialization-in-ip_send_un.patch} (82%) rename kernel/patches-4.19.x-rt/{0206-net-add-a-lock-around-icmp_sk.patch => 0204-net-add-a-lock-around-icmp_sk.patch} (73%) rename kernel/patches-4.19.x-rt/{0207-net-Have-__napi_schedule_irqoff-disable-interrupts-o.patch => 0205-net-Have-__napi_schedule_irqoff-disable-interrupts-o.patch} (76%) rename kernel/patches-4.19.x-rt/{0208-irqwork-push_most_work_into_softirq_context.patch => 0206-irqwork-push-most-work-into-softirq-context.patch} 
(77%) rename kernel/patches-4.19.x-rt/{0209-printk-rt-aware.patch => 0207-printk-Make-rt-aware.patch} (79%) rename kernel/patches-4.19.x-rt/{0210-kernel-printk-Don-t-try-to-print-from-IRQ-NMI-region.patch => 0208-kernel-printk-Don-t-try-to-print-from-IRQ-NMI-region.patch} (72%) rename kernel/patches-4.19.x-rt/{0211-HACK-printk-drop-the-logbuf_lock-more-often.patch => 0209-printk-Drop-the-logbuf_lock-more-often.patch} (69%) rename kernel/patches-4.19.x-rt/{0212-ARM-enable-irq-in-translation-section-permission-fau.patch => 0210-ARM-enable-irq-in-translation-section-permission-fau.patch} (84%) rename kernel/patches-4.19.x-rt/{0213-genirq-update-irq_set_irqchip_state-documentation.patch => 0211-genirq-update-irq_set_irqchip_state-documentation.patch} (71%) rename kernel/patches-4.19.x-rt/{0214-KVM-arm-arm64-downgrade-preempt_disable-d-region-to-.patch => 0212-KVM-arm-arm64-downgrade-preempt_disable-d-region-to-.patch} (70%) rename kernel/patches-4.19.x-rt/{0215-arm64-fpsimd-use-preemp_disable-in-addition-to-local.patch => 0213-arm64-fpsimd-use-preemp_disable-in-addition-to-local.patch} (77%) rename kernel/patches-4.19.x-rt/{0216-kgb-serial-hackaround.patch => 0214-kgdb-serial-Short-term-workaround.patch} (70%) rename kernel/patches-4.19.x-rt/{0217-sysfs-realtime-entry.patch => 0215-sysfs-Add-sys-kernel-realtime-entry.patch} (73%) rename kernel/patches-4.19.x-rt/{0218-mm-rt-kmap-atomic-scheduling.patch => 0216-mm-rt-kmap_atomic-scheduling.patch} (78%) rename kernel/patches-4.19.x-rt/{0219-x86-highmem-add-a-already-used-pte-check.patch => 0217-x86-highmem-Add-a-already-used-pte-check.patch} (59%) rename kernel/patches-4.19.x-rt/{0220-arm-highmem-flush-tlb-on-unmap.patch => 0218-arm-highmem-Flush-tlb-on-unmap.patch} (76%) rename kernel/patches-4.19.x-rt/{0221-arm-enable-highmem-for-rt.patch => 0219-arm-Enable-highmem-for-rt.patch} (84%) rename kernel/patches-4.19.x-rt/{0222-scsi-fcoe-rt-aware.patch => 0220-scsi-fcoe-Make-RT-aware.patch} (71%) rename 
kernel/patches-4.19.x-rt/{0223-x86-crypto-reduce-preempt-disabled-regions.patch => 0221-x86-crypto-Reduce-preempt-disabled-regions.patch} (78%) rename kernel/patches-4.19.x-rt/{0224-crypto-Reduce-preempt-disabled-regions-more-algos.patch => 0222-crypto-Reduce-preempt-disabled-regions-more-algos.patch} (76%) rename kernel/patches-4.19.x-rt/{0225-crypto-limit-more-FPU-enabled-sections.patch => 0223-crypto-limit-more-FPU-enabled-sections.patch} (78%) rename kernel/patches-4.19.x-rt/{0226-crypto-scompress-serialize-RT-percpu-scratch-buffer-.patch => 0224-crypto-scompress-serialize-RT-percpu-scratch-buffer-.patch} (81%) rename kernel/patches-4.19.x-rt/{0227-crypto-cryptd-add-a-lock-instead-preempt_disable-loc.patch => 0225-crypto-cryptd-add-a-lock-instead-preempt_disable-loc.patch} (76%) rename kernel/patches-4.19.x-rt/{0228-panic-disable-random-on-rt.patch => 0226-panic-skip-get_random_bytes-for-RT_FULL-in-init_oops.patch} (66%) rename kernel/patches-4.19.x-rt/{0229-x86-stackprot-no-random-on-rt.patch => 0227-x86-stackprotector-Avoid-random-pool-on-rt.patch} (78%) rename kernel/patches-4.19.x-rt/{0230-random-make-it-work-on-rt.patch => 0228-random-Make-it-work-on-rt.patch} (74%) rename kernel/patches-4.19.x-rt/{0231-random-avoid-preempt_disable-ed-section.patch => 0229-random-avoid-preempt_disable-ed-section.patch} (86%) rename kernel/patches-4.19.x-rt/{0232-cpu-hotplug--Implement-CPU-pinning.patch => 0230-cpu-hotplug-Implement-CPU-pinning.patch} (78%) rename kernel/patches-4.19.x-rt/{0233-sched-Allow-pinned-user-tasks-to-be-awakened-to-the-.patch => 0231-sched-Allow-pinned-user-tasks-to-be-awakened-to-the-.patch} (71%) rename kernel/patches-4.19.x-rt/{0234-hotplug-duct-tape-RT-rwlock-usage-for-non-RT.patch => 0232-hotplug-duct-tape-RT-rwlock-usage-for-non-RT.patch} (80%) rename kernel/patches-4.19.x-rt/{0235-upstream-net-rt-remove-preemption-disabling-in-netif_rx.patch => 0233-net-Remove-preemption-disabling-in-netif_rx.patch} (81%) rename 
kernel/patches-4.19.x-rt/{0236-net-another-local-irq-disable-alloc-atomic-headache.patch => 0234-net-Another-local_irq_disable-kmalloc-headache.patch} (75%) rename kernel/patches-4.19.x-rt/{0237-net-core-protect-users-of-napi_alloc_cache-against-r.patch => 0235-net-core-protect-users-of-napi_alloc_cache-against-r.patch} (83%) rename kernel/patches-4.19.x-rt/{0238-net-fix-iptable-xt-write-recseq-begin-rt-fallout.patch => 0236-net-netfilter-Serialize-xt_write_recseq-sections-on-.patch} (72%) rename kernel/patches-4.19.x-rt/{0239-net-make-devnet_rename_seq-a-mutex.patch => 0237-net-Add-a-mutex-around-devnet_rename_seq.patch} (76%) rename kernel/patches-4.19.x-rt/{0240-lockdep-selftest-only-do-hardirq-context-test-for-raw-spinlock.patch => 0238-lockdep-selftest-Only-do-hardirq-context-test-for-ra.patch} (86%) rename kernel/patches-4.19.x-rt/{0241-lockdep-selftest-fix-warnings-due-to-missing-PREEMPT.patch => 0239-lockdep-selftest-fix-warnings-due-to-missing-PREEMPT.patch} (75%) rename kernel/patches-4.19.x-rt/{0242-preempt-lazy-support.patch => 0240-sched-Add-support-for-lazy-preemption.patch} (79%) rename kernel/patches-4.19.x-rt/{0243-ftrace-Fix-trace-header-alignment.patch => 0241-ftrace-Fix-trace-header-alignment.patch} (87%) rename kernel/patches-4.19.x-rt/{0244-x86-preempt-lazy.patch => 0242-x86-Support-for-lazy-preemption.patch} (79%) rename kernel/patches-4.19.x-rt/{0245-x86-lazy-preempt-properly-check-against-preempt-mask.patch => 0243-x86-lazy-preempt-properly-check-against-preempt-mask.patch} (62%) rename kernel/patches-4.19.x-rt/{0246-x86-lazy-preempt-use-proper-return-label-on-32bit-x8.patch => 0244-x86-lazy-preempt-use-proper-return-label-on-32bit-x8.patch} (77%) rename kernel/patches-4.19.x-rt/{0247-arm-preempt-lazy-support.patch => 0245-arm-Add-support-for-lazy-preemption.patch} (74%) rename kernel/patches-4.19.x-rt/{0248-powerpc-preempt-lazy-support.patch => 0246-powerpc-Add-support-for-lazy-preemption.patch} (75%) rename 
kernel/patches-4.19.x-rt/{0249-arch-arm64-Add-lazy-preempt-support.patch => 0247-arch-arm64-Add-lazy-preempt-support.patch} (75%) rename kernel/patches-4.19.x-rt/{0250-connector-cn_proc-Protect-send_msg-with-a-local-lock.patch => 0248-connector-cn_proc-Protect-send_msg-with-a-local-lock.patch} (82%) rename kernel/patches-4.19.x-rt/{0251-drivers-block-zram-Replace-bit-spinlocks-with-rtmute.patch => 0249-drivers-block-zram-Replace-bit-spinlocks-with-rtmute.patch} (77%) rename kernel/patches-4.19.x-rt/{0252-drivers-zram-Don-t-disable-preemption-in-zcomp_strea.patch => 0250-drivers-zram-Don-t-disable-preemption-in-zcomp_strea.patch} (69%) rename kernel/patches-4.19.x-rt/{0253-drivers-zram-fix-zcomp_stream_get-smp_processor_id-u.patch => 0251-drivers-zram-fix-zcomp_stream_get-smp_processor_id-u.patch} (65%) rename kernel/patches-4.19.x-rt/{0254-tpm_tis-fix-stall-after-iowrite-s.patch => 0252-tpm_tis-fix-stall-after-iowrite-s.patch} (78%) rename kernel/patches-4.19.x-rt/{0255-watchdog-prevent-deferral-of-watchdogd-wakeup-on-RT.patch => 0253-watchdog-prevent-deferral-of-watchdogd-wakeup-on-RT.patch} (78%) rename kernel/patches-4.19.x-rt/{0256-drmradeoni915_Use_preempt_disableenable_rt()_where_recommended.patch => 0254-drm-radeon-i915-Use-preempt_disable-enable_rt-where-.patch} (60%) rename kernel/patches-4.19.x-rt/{0257-drmi915_Use_local_lockunlock_irq()_in_intel_pipe_update_startend().patch => 0255-drm-i915-Use-local_lock-unlock_irq-in-intel_pipe_upd.patch} (85%) rename kernel/patches-4.19.x-rt/{0258-drm-i915-disable-tracing-on-RT.patch => 0256-drm-i915-disable-tracing-on-RT.patch} (80%) rename kernel/patches-4.19.x-rt/{0259-drm-i915-skip-DRM_I915_LOW_LEVEL_TRACEPOINTS-with-NO.patch => 0257-drm-i915-skip-DRM_I915_LOW_LEVEL_TRACEPOINTS-with-NO.patch} (66%) rename kernel/patches-4.19.x-rt/{0260-cgroups-use-simple-wait-in-css_release.patch => 0258-cgroups-use-simple-wait-in-css_release.patch} (81%) rename 
kernel/patches-4.19.x-rt/{0261-cpuset-Convert-callback_lock-to-raw_spinlock_t.patch => 0259-cpuset-Convert-callback_lock-to-raw_spinlock_t.patch} (87%) rename kernel/patches-4.19.x-rt/{0262-apparmor-use-a-locallock-instead-preempt_disable.patch => 0260-apparmor-use-a-locallock-instead-preempt_disable.patch} (76%) rename kernel/patches-4.19.x-rt/{0263-workqueue-prevent-deadlock-stall.patch => 0261-workqueue-Prevent-deadlock-stall-on-RT.patch} (83%) rename kernel/patches-4.19.x-rt/{0264-signals-allow-rt-tasks-to-cache-one-sigqueue-struct.patch => 0262-signals-Allow-rt-tasks-to-cache-one-sigqueue-struct.patch} (73%) create mode 100644 kernel/patches-4.19.x-rt/0263-Add-localversion-for-RT-release.patch create mode 100644 kernel/patches-4.19.x-rt/0264-powerpc-pseries-iommu-Use-a-locallock-instead-local_.patch delete mode 100644 kernel/patches-4.19.x-rt/0265-localversion.patch create mode 100644 kernel/patches-4.19.x-rt/0265-powerpc-reshuffle-TIF-bits.patch create mode 100644 kernel/patches-4.19.x-rt/0266-tty-sysrq-Convert-show_lock-to-raw_spinlock_t.patch create mode 100644 kernel/patches-4.19.x-rt/0267-drm-i915-Don-t-disable-interrupts-independently-of-t.patch create mode 100644 kernel/patches-4.19.x-rt/0268-sched-completion-Fix-a-lockup-in-wait_for_completion.patch create mode 100644 kernel/patches-4.19.x-rt/0269-Linux-4.19.37-rt20-REBASE.patch diff --git a/examples/rt-for-vmware.yml b/examples/rt-for-vmware.yml index 369982616..c5381433b 100644 --- a/examples/rt-for-vmware.yml +++ b/examples/rt-for-vmware.yml @@ -1,5 +1,5 @@ kernel: - image: linuxkit/kernel:4.19.25-rt + image: linuxkit/kernel:4.19.37-rt cmdline: "console=tty0" init: - linuxkit/init:v0.7 diff --git a/kernel/Makefile b/kernel/Makefile index 1a5ad0b13..2a9d49585 100644 --- a/kernel/Makefile +++ b/kernel/Makefile @@ -255,14 +255,14 @@ $(eval $(call kernel,5.1.5,5.1.x,$(EXTRA),$(DEBUG))) $(eval $(call kernel,5.0.19,5.0.x,$(EXTRA),$(DEBUG))) $(eval $(call kernel,4.19.46,4.19.x,$(EXTRA),$(DEBUG))) $(eval 
$(call kernel,4.19.46,4.19.x,,-dbg)) -$(eval $(call kernel,4.19.25,4.19.x,-rt,)) +$(eval $(call kernel,4.19.37,4.19.x,-rt,)) $(eval $(call kernel,4.14.122,4.14.x,$(EXTRA),$(DEBUG))) $(eval $(call kernel,4.9.179,4.9.x,$(EXTRA),$(DEBUG))) else ifeq ($(ARCH),aarch64) $(eval $(call kernel,5.1.5,5.1.x,$(EXTRA),$(DEBUG))) $(eval $(call kernel,4.19.46,4.19.x,$(EXTRA),$(DEBUG))) -$(eval $(call kernel,4.19.25,4.19.x,-rt,)) +$(eval $(call kernel,4.19.37,4.19.x,-rt,)) else ifeq ($(ARCH),s390x) $(eval $(call kernel,5.1.5,5.1.x,$(EXTRA),$(DEBUG))) diff --git a/kernel/patches-4.19.x-rt/0001-0001-ARM-at91-add-TCB-registers-definitions.patch b/kernel/patches-4.19.x-rt/0001-ARM-at91-add-TCB-registers-definitions.patch similarity index 95% rename from kernel/patches-4.19.x-rt/0001-0001-ARM-at91-add-TCB-registers-definitions.patch rename to kernel/patches-4.19.x-rt/0001-ARM-at91-add-TCB-registers-definitions.patch index 547035dac..1a3f50e2c 100644 --- a/kernel/patches-4.19.x-rt/0001-0001-ARM-at91-add-TCB-registers-definitions.patch +++ b/kernel/patches-4.19.x-rt/0001-ARM-at91-add-TCB-registers-definitions.patch @@ -1,6 +1,7 @@ +From bc4d8f04b5bd123853531af90f1ec548d8ab61e4 Mon Sep 17 00:00:00 2001 From: Alexandre Belloni Date: Thu, 13 Sep 2018 13:30:18 +0200 -Subject: [PATCH 1/7] ARM: at91: add TCB registers definitions +Subject: [PATCH 001/269] ARM: at91: add TCB registers definitions Add registers and bits definitions for the timer counter blocks found on Atmel ARM SoCs. 
@@ -10,10 +11,13 @@ Tested-by: Andras Szemzo Signed-off-by: Alexandre Belloni Signed-off-by: Sebastian Andrzej Siewior --- - include/soc/at91/atmel_tcb.h | 183 +++++++++++++++++++++++++++++++++++++++++++ + include/soc/at91/atmel_tcb.h | 183 +++++++++++++++++++++++++++++++++++ 1 file changed, 183 insertions(+) create mode 100644 include/soc/at91/atmel_tcb.h +diff --git a/include/soc/at91/atmel_tcb.h b/include/soc/at91/atmel_tcb.h +new file mode 100644 +index 000000000000..657e234b1483 --- /dev/null +++ b/include/soc/at91/atmel_tcb.h @@ -0,0 +1,183 @@ @@ -200,3 +204,6 @@ Signed-off-by: Sebastian Andrzej Siewior +}; + +#endif /* __SOC_ATMEL_TCB_H */ +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0002-0002-clocksource-drivers-Add-a-new-driver-for-the-Atmel-A.patch b/kernel/patches-4.19.x-rt/0002-clocksource-drivers-Add-a-new-driver-for-the-Atmel-A.patch similarity index 94% rename from kernel/patches-4.19.x-rt/0002-0002-clocksource-drivers-Add-a-new-driver-for-the-Atmel-A.patch rename to kernel/patches-4.19.x-rt/0002-clocksource-drivers-Add-a-new-driver-for-the-Atmel-A.patch index 58026eee9..980ed8571 100644 --- a/kernel/patches-4.19.x-rt/0002-0002-clocksource-drivers-Add-a-new-driver-for-the-Atmel-A.patch +++ b/kernel/patches-4.19.x-rt/0002-clocksource-drivers-Add-a-new-driver-for-the-Atmel-A.patch @@ -1,7 +1,8 @@ +From 1eef86c9b8aa09d8e57f4ee5684c7bfd28f6900f Mon Sep 17 00:00:00 2001 From: Alexandre Belloni Date: Thu, 13 Sep 2018 13:30:19 +0200 -Subject: [PATCH 2/7] clocksource/drivers: Add a new driver for the Atmel ARM - TC blocks +Subject: [PATCH 002/269] clocksource/drivers: Add a new driver for the Atmel + ARM TC blocks Add a driver for the Atmel Timer Counter Blocks. This driver provides a clocksource and two clockevent devices. 
@@ -23,15 +24,17 @@ Tested-by: Andras Szemzo Signed-off-by: Alexandre Belloni Signed-off-by: Sebastian Andrzej Siewior --- - drivers/clocksource/Kconfig | 8 - drivers/clocksource/Makefile | 3 - drivers/clocksource/timer-atmel-tcb.c | 410 ++++++++++++++++++++++++++++++++++ + drivers/clocksource/Kconfig | 8 + + drivers/clocksource/Makefile | 3 +- + drivers/clocksource/timer-atmel-tcb.c | 410 ++++++++++++++++++++++++++ 3 files changed, 420 insertions(+), 1 deletion(-) create mode 100644 drivers/clocksource/timer-atmel-tcb.c +diff --git a/drivers/clocksource/Kconfig b/drivers/clocksource/Kconfig +index c1ddafa4c299..c5a5ad4e22e7 100644 --- a/drivers/clocksource/Kconfig +++ b/drivers/clocksource/Kconfig -@@ -404,6 +404,14 @@ config ATMEL_ST +@@ -414,6 +414,14 @@ config ATMEL_ST help Support for the Atmel ST timer. @@ -46,6 +49,8 @@ Signed-off-by: Sebastian Andrzej Siewior config CLKSRC_EXYNOS_MCT bool "Exynos multi core timer driver" if COMPILE_TEST depends on ARM || ARM64 +diff --git a/drivers/clocksource/Makefile b/drivers/clocksource/Makefile +index db51b2427e8a..0df9384a1230 100644 --- a/drivers/clocksource/Makefile +++ b/drivers/clocksource/Makefile @@ -3,7 +3,8 @@ obj-$(CONFIG_TIMER_OF) += timer-of.o @@ -58,6 +63,9 @@ Signed-off-by: Sebastian Andrzej Siewior obj-$(CONFIG_X86_PM_TIMER) += acpi_pm.o obj-$(CONFIG_SCx200HR_TIMER) += scx200_hrt.o obj-$(CONFIG_CS5535_CLOCK_EVENT_SRC) += cs5535-clockevt.o +diff --git a/drivers/clocksource/timer-atmel-tcb.c b/drivers/clocksource/timer-atmel-tcb.c +new file mode 100644 +index 000000000000..21fbe430f91b --- /dev/null +++ b/drivers/clocksource/timer-atmel-tcb.c @@ -0,0 +1,410 @@ @@ -471,3 +479,6 @@ Signed-off-by: Sebastian Andrzej Siewior + bits); +} +TIMER_OF_DECLARE(atmel_tcb_clksrc, "atmel,tcb-timer", tcb_clksrc_init); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0003-0003-clocksource-drivers-timer-atmel-tcb-add-clockevent-d.patch 
b/kernel/patches-4.19.x-rt/0003-clocksource-drivers-timer-atmel-tcb-add-clockevent-d.patch similarity index 91% rename from kernel/patches-4.19.x-rt/0003-0003-clocksource-drivers-timer-atmel-tcb-add-clockevent-d.patch rename to kernel/patches-4.19.x-rt/0003-clocksource-drivers-timer-atmel-tcb-add-clockevent-d.patch index 698988cbd..963935f05 100644 --- a/kernel/patches-4.19.x-rt/0003-0003-clocksource-drivers-timer-atmel-tcb-add-clockevent-d.patch +++ b/kernel/patches-4.19.x-rt/0003-clocksource-drivers-timer-atmel-tcb-add-clockevent-d.patch @@ -1,6 +1,7 @@ +From f6803050ab0965a1255a3b407ca429a04c5cb230 Mon Sep 17 00:00:00 2001 From: Alexandre Belloni Date: Thu, 13 Sep 2018 13:30:20 +0200 -Subject: [PATCH 3/7] clocksource/drivers: timer-atmel-tcb: add clockevent +Subject: [PATCH 003/269] clocksource/drivers: timer-atmel-tcb: add clockevent device on separate channel Add an other clockevent device that uses a separate TCB channel when @@ -9,9 +10,11 @@ available. Signed-off-by: Alexandre Belloni Signed-off-by: Sebastian Andrzej Siewior --- - drivers/clocksource/timer-atmel-tcb.c | 217 +++++++++++++++++++++++++++++++++- + drivers/clocksource/timer-atmel-tcb.c | 217 +++++++++++++++++++++++++- 1 file changed, 212 insertions(+), 5 deletions(-) +diff --git a/drivers/clocksource/timer-atmel-tcb.c b/drivers/clocksource/timer-atmel-tcb.c +index 21fbe430f91b..63ce3b69338a 100644 --- a/drivers/clocksource/timer-atmel-tcb.c +++ b/drivers/clocksource/timer-atmel-tcb.c @@ -32,7 +32,7 @@ struct atmel_tcb_clksrc { @@ -23,10 +26,11 @@ Signed-off-by: Sebastian Andrzej Siewior static struct clk *tcb_clk_get(struct device_node *node, int channel) { -@@ -48,6 +48,203 @@ static struct clk *tcb_clk_get(struct de +@@ -47,6 +47,203 @@ static struct clk *tcb_clk_get(struct device_node *node, int channel) + return of_clk_get_by_name(node->parent, "t0_clk"); } - /* ++/* + * Clockevent device using its own channel + */ + @@ -223,11 +227,10 @@ Signed-off-by: Sebastian Andrzej Siewior + return 
ret; +} + -+/* + /* * Clocksource and clockevent using the same channel(s) */ - static u64 tc_get_cycles(struct clocksource *cs) -@@ -363,7 +560,7 @@ static int __init tcb_clksrc_init(struct +@@ -363,7 +560,7 @@ static int __init tcb_clksrc_init(struct device_node *node) int irq, err, chan1 = -1; unsigned bits; @@ -236,7 +239,7 @@ Signed-off-by: Sebastian Andrzej Siewior return -ENODEV; /* -@@ -395,12 +592,22 @@ static int __init tcb_clksrc_init(struct +@@ -395,12 +592,22 @@ static int __init tcb_clksrc_init(struct device_node *node) return irq; } @@ -262,3 +265,6 @@ Signed-off-by: Sebastian Andrzej Siewior } } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0004-0004-clocksource-drivers-atmel-pit-make-option-silent.patch b/kernel/patches-4.19.x-rt/0004-clocksource-drivers-atmel-pit-make-option-silent.patch similarity index 71% rename from kernel/patches-4.19.x-rt/0004-0004-clocksource-drivers-atmel-pit-make-option-silent.patch rename to kernel/patches-4.19.x-rt/0004-clocksource-drivers-atmel-pit-make-option-silent.patch index a8ada4bc5..676593c6a 100644 --- a/kernel/patches-4.19.x-rt/0004-0004-clocksource-drivers-atmel-pit-make-option-silent.patch +++ b/kernel/patches-4.19.x-rt/0004-clocksource-drivers-atmel-pit-make-option-silent.patch @@ -1,6 +1,7 @@ +From 873075a203c574d322429e4a8cd0686541293903 Mon Sep 17 00:00:00 2001 From: Alexandre Belloni Date: Thu, 13 Sep 2018 13:30:21 +0200 -Subject: [PATCH 4/7] clocksource/drivers: atmel-pit: make option silent +Subject: [PATCH 004/269] clocksource/drivers: atmel-pit: make option silent To conform with the other option, make the ATMEL_PIT option silent so it can be selected from the platform @@ -9,12 +10,14 @@ Tested-by: Alexander Dahl Signed-off-by: Alexandre Belloni Signed-off-by: Sebastian Andrzej Siewior --- - drivers/clocksource/Kconfig | 5 ++++- + drivers/clocksource/Kconfig | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) +diff --git a/drivers/clocksource/Kconfig b/drivers/clocksource/Kconfig +index 
c5a5ad4e22e7..076aa8184961 100644 --- a/drivers/clocksource/Kconfig +++ b/drivers/clocksource/Kconfig -@@ -393,8 +393,11 @@ config ARMV7M_SYSTICK +@@ -403,8 +403,11 @@ config ARMV7M_SYSTICK This options enables support for the ARMv7M system timer unit config ATMEL_PIT @@ -27,3 +30,6 @@ Signed-off-by: Sebastian Andrzej Siewior config ATMEL_ST bool "Atmel ST timer support" if COMPILE_TEST +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0005-0005-ARM-at91-Implement-clocksource-selection.patch b/kernel/patches-4.19.x-rt/0005-ARM-at91-Implement-clocksource-selection.patch similarity index 82% rename from kernel/patches-4.19.x-rt/0005-0005-ARM-at91-Implement-clocksource-selection.patch rename to kernel/patches-4.19.x-rt/0005-ARM-at91-Implement-clocksource-selection.patch index b044504c9..b9a8c0ba7 100644 --- a/kernel/patches-4.19.x-rt/0005-0005-ARM-at91-Implement-clocksource-selection.patch +++ b/kernel/patches-4.19.x-rt/0005-ARM-at91-Implement-clocksource-selection.patch @@ -1,6 +1,7 @@ +From e0dc436f11c998b38ee3dc4cd269d5075ea12b7e Mon Sep 17 00:00:00 2001 From: Alexandre Belloni Date: Thu, 13 Sep 2018 13:30:22 +0200 -Subject: [PATCH 5/7] ARM: at91: Implement clocksource selection +Subject: [PATCH 005/269] ARM: at91: Implement clocksource selection Allow selecting and unselecting the PIT clocksource driver so it doesn't have to be compile when unused. 
@@ -9,9 +10,11 @@ Tested-by: Alexander Dahl Signed-off-by: Alexandre Belloni Signed-off-by: Sebastian Andrzej Siewior --- - arch/arm/mach-at91/Kconfig | 25 +++++++++++++++++++++++++ + arch/arm/mach-at91/Kconfig | 25 +++++++++++++++++++++++++ 1 file changed, 25 insertions(+) +diff --git a/arch/arm/mach-at91/Kconfig b/arch/arm/mach-at91/Kconfig +index 903f23c309df..fa493a86e2bb 100644 --- a/arch/arm/mach-at91/Kconfig +++ b/arch/arm/mach-at91/Kconfig @@ -107,6 +107,31 @@ config SOC_AT91SAM9 @@ -46,3 +49,6 @@ Signed-off-by: Sebastian Andrzej Siewior config HAVE_AT91_UTMI bool +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0006-0006-ARM-configs-at91-use-new-TCB-timer-driver.patch b/kernel/patches-4.19.x-rt/0006-ARM-configs-at91-use-new-TCB-timer-driver.patch similarity index 65% rename from kernel/patches-4.19.x-rt/0006-0006-ARM-configs-at91-use-new-TCB-timer-driver.patch rename to kernel/patches-4.19.x-rt/0006-ARM-configs-at91-use-new-TCB-timer-driver.patch index aaeac8138..3aca5ad36 100644 --- a/kernel/patches-4.19.x-rt/0006-0006-ARM-configs-at91-use-new-TCB-timer-driver.patch +++ b/kernel/patches-4.19.x-rt/0006-ARM-configs-at91-use-new-TCB-timer-driver.patch @@ -1,6 +1,7 @@ +From ca4a1c8ce5f7224d99ef6c2a6754468cb72ea4c3 Mon Sep 17 00:00:00 2001 From: Alexandre Belloni Date: Thu, 13 Sep 2018 13:30:23 +0200 -Subject: [PATCH 6/7] ARM: configs: at91: use new TCB timer driver +Subject: [PATCH 006/269] ARM: configs: at91: use new TCB timer driver Unselecting ATMEL_TCLIB switches the TCB timer driver from tcb_clksrc to timer-atmel-tcb. @@ -8,10 +9,12 @@ timer-atmel-tcb. 
Signed-off-by: Alexandre Belloni Signed-off-by: Sebastian Andrzej Siewior --- - arch/arm/configs/at91_dt_defconfig | 1 - - arch/arm/configs/sama5_defconfig | 1 - + arch/arm/configs/at91_dt_defconfig | 1 - + arch/arm/configs/sama5_defconfig | 1 - 2 files changed, 2 deletions(-) +diff --git a/arch/arm/configs/at91_dt_defconfig b/arch/arm/configs/at91_dt_defconfig +index e4b1be66b3f5..09f262e59fef 100644 --- a/arch/arm/configs/at91_dt_defconfig +++ b/arch/arm/configs/at91_dt_defconfig @@ -64,7 +64,6 @@ CONFIG_BLK_DEV_LOOP=y @@ -22,6 +25,8 @@ Signed-off-by: Sebastian Andrzej Siewior CONFIG_ATMEL_SSC=y CONFIG_SCSI=y CONFIG_BLK_DEV_SD=y +diff --git a/arch/arm/configs/sama5_defconfig b/arch/arm/configs/sama5_defconfig +index 2080025556b5..f2bbc6339ca6 100644 --- a/arch/arm/configs/sama5_defconfig +++ b/arch/arm/configs/sama5_defconfig @@ -75,7 +75,6 @@ CONFIG_BLK_DEV_LOOP=y @@ -32,3 +37,6 @@ Signed-off-by: Sebastian Andrzej Siewior CONFIG_ATMEL_SSC=y CONFIG_EEPROM_AT24=y CONFIG_SCSI=y +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0007-0007-ARM-configs-at91-unselect-PIT.patch b/kernel/patches-4.19.x-rt/0007-ARM-configs-at91-unselect-PIT.patch similarity index 69% rename from kernel/patches-4.19.x-rt/0007-0007-ARM-configs-at91-unselect-PIT.patch rename to kernel/patches-4.19.x-rt/0007-ARM-configs-at91-unselect-PIT.patch index f5694ce09..cfbd75bba 100644 --- a/kernel/patches-4.19.x-rt/0007-0007-ARM-configs-at91-unselect-PIT.patch +++ b/kernel/patches-4.19.x-rt/0007-ARM-configs-at91-unselect-PIT.patch @@ -1,6 +1,7 @@ +From 2c83222f4057f755febccd002f3720bbf73a6473 Mon Sep 17 00:00:00 2001 From: Alexandre Belloni Date: Thu, 13 Sep 2018 13:30:24 +0200 -Subject: [PATCH 7/7] ARM: configs: at91: unselect PIT +Subject: [PATCH 007/269] ARM: configs: at91: unselect PIT The PIT is not required anymore to successfully boot and may actually harm in case preempt-rt is used because the PIT interrupt is shared. @@ -9,10 +10,12 @@ Disable it so the TCB clocksource is used. 
Signed-off-by: Alexandre Belloni Signed-off-by: Sebastian Andrzej Siewior --- - arch/arm/configs/at91_dt_defconfig | 1 + - arch/arm/configs/sama5_defconfig | 1 + + arch/arm/configs/at91_dt_defconfig | 1 + + arch/arm/configs/sama5_defconfig | 1 + 2 files changed, 2 insertions(+) +diff --git a/arch/arm/configs/at91_dt_defconfig b/arch/arm/configs/at91_dt_defconfig +index 09f262e59fef..f4b253bd05ed 100644 --- a/arch/arm/configs/at91_dt_defconfig +++ b/arch/arm/configs/at91_dt_defconfig @@ -19,6 +19,7 @@ CONFIG_ARCH_MULTI_V5=y @@ -23,6 +26,8 @@ Signed-off-by: Sebastian Andrzej Siewior CONFIG_AEABI=y CONFIG_UACCESS_WITH_MEMCPY=y CONFIG_ZBOOT_ROM_TEXT=0x0 +diff --git a/arch/arm/configs/sama5_defconfig b/arch/arm/configs/sama5_defconfig +index f2bbc6339ca6..be92871ab155 100644 --- a/arch/arm/configs/sama5_defconfig +++ b/arch/arm/configs/sama5_defconfig @@ -20,6 +20,7 @@ CONFIG_ARCH_AT91=y @@ -33,3 +38,6 @@ Signed-off-by: Sebastian Andrzej Siewior CONFIG_AEABI=y CONFIG_UACCESS_WITH_MEMCPY=y CONFIG_ZBOOT_ROM_TEXT=0x0 +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0008-irqchip-gic-v3-its-Move-pending-table-allocation-to-.patch b/kernel/patches-4.19.x-rt/0008-irqchip-gic-v3-its-Move-pending-table-allocation-to-.patch index 3041a5b8c..75038df7f 100644 --- a/kernel/patches-4.19.x-rt/0008-irqchip-gic-v3-its-Move-pending-table-allocation-to-.patch +++ b/kernel/patches-4.19.x-rt/0008-irqchip-gic-v3-its-Move-pending-table-allocation-to-.patch @@ -1,15 +1,18 @@ +From bb357496d72d05e2841899655c8e709d7c369ab0 Mon Sep 17 00:00:00 2001 From: Marc Zyngier Date: Fri, 27 Jul 2018 13:38:54 +0100 -Subject: [PATCH] irqchip/gic-v3-its: Move pending table allocation to init - time +Subject: [PATCH 008/269] irqchip/gic-v3-its: Move pending table allocation to + init time Signed-off-by: Marc Zyngier Signed-off-by: Sebastian Andrzej Siewior --- - drivers/irqchip/irq-gic-v3-its.c | 80 ++++++++++++++++++++++++------------- - include/linux/irqchip/arm-gic-v3.h | 1 + 
drivers/irqchip/irq-gic-v3-its.c | 80 +++++++++++++++++++----------- + include/linux/irqchip/arm-gic-v3.h | 1 + 2 files changed, 53 insertions(+), 28 deletions(-) +diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c +index 65ab2c80529c..21681f0f85f4 100644 --- a/drivers/irqchip/irq-gic-v3-its.c +++ b/drivers/irqchip/irq-gic-v3-its.c @@ -179,6 +179,7 @@ static DEFINE_RAW_SPINLOCK(vmovp_lock); @@ -20,7 +23,7 @@ Signed-off-by: Sebastian Andrzej Siewior #define gic_data_rdist_rd_base() (gic_data_rdist()->rd_base) #define gic_data_rdist_vlpi_base() (gic_data_rdist_rd_base() + SZ_128K) -@@ -1628,7 +1629,7 @@ static void its_free_prop_table(struct p +@@ -1631,7 +1632,7 @@ static void its_free_prop_table(struct page *prop_page) get_order(LPI_PROPBASE_SZ)); } @@ -29,8 +32,8 @@ Signed-off-by: Sebastian Andrzej Siewior { phys_addr_t paddr; -@@ -1951,30 +1952,47 @@ static void its_free_pending_table(struc - get_order(max_t(u32, LPI_PENDBASE_SZ, SZ_64K))); +@@ -1979,30 +1980,47 @@ static u64 its_clear_vpend_valid(void __iomem *vlpi_base) + return val; } -static void its_cpu_init_lpis(void) @@ -92,7 +95,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* set PROPBASE */ val = (page_to_phys(gic_rdists->prop_page) | GICR_PROPBASER_InnerShareable | -@@ -2026,6 +2044,10 @@ static void its_cpu_init_lpis(void) +@@ -2078,6 +2096,10 @@ static void its_cpu_init_lpis(void) /* Make sure the GIC has seen the above */ dsb(sy); @@ -103,7 +106,7 @@ Signed-off-by: Sebastian Andrzej Siewior } static void its_cpu_init_collection(struct its_node *its) -@@ -3521,16 +3543,6 @@ static int redist_disable_lpis(void) +@@ -3558,16 +3580,6 @@ static int redist_disable_lpis(void) u64 timeout = USEC_PER_SEC; u64 val; @@ -120,7 +123,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (!gic_rdists_supports_plpis()) { pr_info("CPU%d: LPIs not supported\n", smp_processor_id()); return -ENXIO; -@@ -3540,7 +3552,18 @@ static int redist_disable_lpis(void) +@@ -3577,7 +3589,18 @@ static int 
redist_disable_lpis(void) if (!(val & GICR_CTLR_ENABLE_LPIS)) return 0; @@ -140,7 +143,7 @@ Signed-off-by: Sebastian Andrzej Siewior smp_processor_id()); add_taint(TAINT_CRAP, LOCKDEP_STILL_OK); -@@ -3796,7 +3819,8 @@ int __init its_init(struct fwnode_handle +@@ -3833,7 +3856,8 @@ int __init its_init(struct fwnode_handle *handle, struct rdists *rdists, } gic_rdists = rdists; @@ -150,6 +153,8 @@ Signed-off-by: Sebastian Andrzej Siewior if (err) return err; +diff --git a/include/linux/irqchip/arm-gic-v3.h b/include/linux/irqchip/arm-gic-v3.h +index 3188c0bef3e7..5b57501fd2e7 100644 --- a/include/linux/irqchip/arm-gic-v3.h +++ b/include/linux/irqchip/arm-gic-v3.h @@ -585,6 +585,7 @@ struct rdists { @@ -160,3 +165,6 @@ Signed-off-by: Sebastian Andrzej Siewior } __percpu *rdist; struct page *prop_page; u64 flags; +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0009-kthread-convert-worker-lock-to-raw-spinlock.patch b/kernel/patches-4.19.x-rt/0009-kthread-convert-worker-lock-to-raw-spinlock.patch index 6404b1cce..f06be8897 100644 --- a/kernel/patches-4.19.x-rt/0009-kthread-convert-worker-lock-to-raw-spinlock.patch +++ b/kernel/patches-4.19.x-rt/0009-kthread-convert-worker-lock-to-raw-spinlock.patch @@ -1,6 +1,7 @@ +From 9d8b1db47a7e355eb0c34a8af57f3613db6cb18c Mon Sep 17 00:00:00 2001 From: Julia Cartwright Date: Fri, 28 Sep 2018 21:03:51 +0000 -Subject: [PATCH] kthread: convert worker lock to raw spinlock +Subject: [PATCH 009/269] kthread: convert worker lock to raw spinlock In order to enable the queuing of kthread work items from hardirq context even when PREEMPT_RT_FULL is enabled, convert the worker @@ -16,10 +17,12 @@ Reported-by: Tim Sander Signed-off-by: Julia Cartwright Signed-off-by: Sebastian Andrzej Siewior --- - include/linux/kthread.h | 2 +- - kernel/kthread.c | 42 +++++++++++++++++++++--------------------- + include/linux/kthread.h | 2 +- + kernel/kthread.c | 42 ++++++++++++++++++++--------------------- 2 files changed, 22 insertions(+), 22 deletions(-) 
+diff --git a/include/linux/kthread.h b/include/linux/kthread.h +index c1961761311d..ad292898f7f2 100644 --- a/include/linux/kthread.h +++ b/include/linux/kthread.h @@ -85,7 +85,7 @@ enum { @@ -31,9 +34,11 @@ Signed-off-by: Sebastian Andrzej Siewior struct list_head work_list; struct list_head delayed_work_list; struct task_struct *task; +diff --git a/kernel/kthread.c b/kernel/kthread.c +index 087d18d771b5..5641b55783a6 100644 --- a/kernel/kthread.c +++ b/kernel/kthread.c -@@ -599,7 +599,7 @@ void __kthread_init_worker(struct kthrea +@@ -599,7 +599,7 @@ void __kthread_init_worker(struct kthread_worker *worker, struct lock_class_key *key) { memset(worker, 0, sizeof(struct kthread_worker)); @@ -68,7 +73,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (work) { __set_current_state(TASK_RUNNING); -@@ -812,12 +812,12 @@ bool kthread_queue_work(struct kthread_w +@@ -812,12 +812,12 @@ bool kthread_queue_work(struct kthread_worker *worker, bool ret = false; unsigned long flags; @@ -83,7 +88,7 @@ Signed-off-by: Sebastian Andrzej Siewior return ret; } EXPORT_SYMBOL_GPL(kthread_queue_work); -@@ -843,7 +843,7 @@ void kthread_delayed_work_timer_fn(struc +@@ -843,7 +843,7 @@ void kthread_delayed_work_timer_fn(struct timer_list *t) if (WARN_ON_ONCE(!worker)) return; @@ -92,7 +97,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* Work must not be used with >1 worker, see kthread_queue_work(). 
*/ WARN_ON_ONCE(work->worker != worker); -@@ -852,7 +852,7 @@ void kthread_delayed_work_timer_fn(struc +@@ -852,7 +852,7 @@ void kthread_delayed_work_timer_fn(struct timer_list *t) list_del_init(&work->node); kthread_insert_work(worker, work, &worker->work_list); @@ -101,7 +106,7 @@ Signed-off-by: Sebastian Andrzej Siewior } EXPORT_SYMBOL(kthread_delayed_work_timer_fn); -@@ -908,14 +908,14 @@ bool kthread_queue_delayed_work(struct k +@@ -908,14 +908,14 @@ bool kthread_queue_delayed_work(struct kthread_worker *worker, unsigned long flags; bool ret = false; @@ -118,7 +123,7 @@ Signed-off-by: Sebastian Andrzej Siewior return ret; } EXPORT_SYMBOL_GPL(kthread_queue_delayed_work); -@@ -951,7 +951,7 @@ void kthread_flush_work(struct kthread_w +@@ -951,7 +951,7 @@ void kthread_flush_work(struct kthread_work *work) if (!worker) return; @@ -127,7 +132,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* Work must not be used with >1 worker, see kthread_queue_work(). */ WARN_ON_ONCE(work->worker != worker); -@@ -963,7 +963,7 @@ void kthread_flush_work(struct kthread_w +@@ -963,7 +963,7 @@ void kthread_flush_work(struct kthread_work *work) else noop = true; @@ -136,7 +141,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (!noop) wait_for_completion(&fwork.done); -@@ -996,9 +996,9 @@ static bool __kthread_cancel_work(struct +@@ -996,9 +996,9 @@ static bool __kthread_cancel_work(struct kthread_work *work, bool is_dwork, * any queuing is blocked by setting the canceling counter. */ work->canceling++; @@ -148,7 +153,7 @@ Signed-off-by: Sebastian Andrzej Siewior work->canceling--; } -@@ -1045,7 +1045,7 @@ bool kthread_mod_delayed_work(struct kth +@@ -1045,7 +1045,7 @@ bool kthread_mod_delayed_work(struct kthread_worker *worker, unsigned long flags; int ret = false; @@ -157,7 +162,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* Do not bother with canceling when never queued. 
*/ if (!work->worker) -@@ -1062,7 +1062,7 @@ bool kthread_mod_delayed_work(struct kth +@@ -1062,7 +1062,7 @@ bool kthread_mod_delayed_work(struct kthread_worker *worker, fast_queue: __kthread_queue_delayed_work(worker, dwork, delay); out: @@ -166,7 +171,7 @@ Signed-off-by: Sebastian Andrzej Siewior return ret; } EXPORT_SYMBOL_GPL(kthread_mod_delayed_work); -@@ -1076,7 +1076,7 @@ static bool __kthread_cancel_work_sync(s +@@ -1076,7 +1076,7 @@ static bool __kthread_cancel_work_sync(struct kthread_work *work, bool is_dwork) if (!worker) goto out; @@ -175,7 +180,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* Work must not be used with >1 worker, see kthread_queue_work(). */ WARN_ON_ONCE(work->worker != worker); -@@ -1090,13 +1090,13 @@ static bool __kthread_cancel_work_sync(s +@@ -1090,13 +1090,13 @@ static bool __kthread_cancel_work_sync(struct kthread_work *work, bool is_dwork) * In the meantime, block any queuing by setting the canceling counter. */ work->canceling++; @@ -192,3 +197,6 @@ Signed-off-by: Sebastian Andrzej Siewior out: return ret; } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0010-crypto-caam-qi-simplify-CGR-allocation-freeing.patch b/kernel/patches-4.19.x-rt/0010-crypto-caam-qi-simplify-CGR-allocation-freeing.patch index bcf3bf8d7..1e31a80bc 100644 --- a/kernel/patches-4.19.x-rt/0010-crypto-caam-qi-simplify-CGR-allocation-freeing.patch +++ b/kernel/patches-4.19.x-rt/0010-crypto-caam-qi-simplify-CGR-allocation-freeing.patch @@ -1,6 +1,7 @@ +From b37ee7bd4ac42c97c3fce905634cf808345a25ac Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Horia=20Geant=C4=83?= Date: Mon, 8 Oct 2018 14:09:37 +0300 -Subject: [PATCH] crypto: caam/qi - simplify CGR allocation, freeing +Subject: [PATCH 010/269] crypto: caam/qi - simplify CGR allocation, freeing MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @@ -22,27 +23,29 @@ Reported-by: Sebastian Andrzej Siewior Signed-off-by: Horia Geantă Signed-off-by: Herbert Xu --- - 
drivers/crypto/caam/qi.c | 43 ++++--------------------------------------- - drivers/crypto/caam/qi.h | 2 +- + drivers/crypto/caam/qi.c | 43 ++++------------------------------------ + drivers/crypto/caam/qi.h | 2 +- 2 files changed, 5 insertions(+), 40 deletions(-) +diff --git a/drivers/crypto/caam/qi.c b/drivers/crypto/caam/qi.c +index 67f7f8c42c93..b84e6c8b1e13 100644 --- a/drivers/crypto/caam/qi.c +++ b/drivers/crypto/caam/qi.c -@@ -84,13 +84,6 @@ static u64 times_congested; +@@ -83,13 +83,6 @@ EXPORT_SYMBOL(caam_congested); + static u64 times_congested; #endif - /* +-/* - * CPU from where the module initialised. This is required because QMan driver - * requires CGRs to be removed from same CPU from where they were originally - * allocated. - */ -static int mod_init_cpu; - --/* + /* * This is a a cache of buffers, from which the users of CAAM QI driver * can allocate short (CAAM_QI_MEMCACHE_SIZE) buffers. It's faster than - * doing malloc on the hotpath. -@@ -492,12 +485,11 @@ void caam_drv_ctx_rel(struct caam_drv_ct +@@ -492,12 +485,11 @@ void caam_drv_ctx_rel(struct caam_drv_ctx *drv_ctx) } EXPORT_SYMBOL(caam_drv_ctx_rel); @@ -57,7 +60,7 @@ Signed-off-by: Herbert Xu for_each_cpu(i, cpus) { struct napi_struct *irqtask; -@@ -510,26 +502,12 @@ int caam_qi_shutdown(struct device *qide +@@ -510,26 +502,12 @@ int caam_qi_shutdown(struct device *qidev) dev_err(qidev, "Rsp FQ kill failed, cpu: %d\n", i); } @@ -86,7 +89,7 @@ Signed-off-by: Herbert Xu } static void cgr_cb(struct qman_portal *qm, struct qman_cgr *cgr, int congested) -@@ -718,22 +696,11 @@ int caam_qi_init(struct platform_device +@@ -718,22 +696,11 @@ int caam_qi_init(struct platform_device *caam_pdev) struct device *ctrldev = &caam_pdev->dev, *qidev; struct caam_drv_private *ctrlpriv; const cpumask_t *cpus = qman_affine_cpus(); @@ -109,7 +112,7 @@ Signed-off-by: Herbert Xu qi_pdev_info.parent = ctrldev; qi_pdev_info.dma_mask = dma_get_mask(ctrldev); qi_pdev = platform_device_register_full(&qi_pdev_info); 
-@@ -795,8 +762,6 @@ int caam_qi_init(struct platform_device +@@ -795,8 +762,6 @@ int caam_qi_init(struct platform_device *caam_pdev) return -ENOMEM; } @@ -118,9 +121,11 @@ Signed-off-by: Herbert Xu #ifdef CONFIG_DEBUG_FS debugfs_create_file("qi_congested", 0444, ctrlpriv->ctl, ×_congested, &caam_fops_u64_ro); +diff --git a/drivers/crypto/caam/qi.h b/drivers/crypto/caam/qi.h +index 357b69f57072..b6c8acc30853 100644 --- a/drivers/crypto/caam/qi.h +++ b/drivers/crypto/caam/qi.h -@@ -174,7 +174,7 @@ int caam_drv_ctx_update(struct caam_drv_ +@@ -174,7 +174,7 @@ int caam_drv_ctx_update(struct caam_drv_ctx *drv_ctx, u32 *sh_desc); void caam_drv_ctx_rel(struct caam_drv_ctx *drv_ctx); int caam_qi_init(struct platform_device *pdev); @@ -129,3 +134,6 @@ Signed-off-by: Herbert Xu /** * qi_cache_alloc - Allocate buffers from CAAM-QI cache +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0011-sched-fair-Robustify-CFS-bandwidth-timer-locking.patch b/kernel/patches-4.19.x-rt/0011-sched-fair-Robustify-CFS-bandwidth-timer-locking.patch index b1af1e937..9574293e2 100644 --- a/kernel/patches-4.19.x-rt/0011-sched-fair-Robustify-CFS-bandwidth-timer-locking.patch +++ b/kernel/patches-4.19.x-rt/0011-sched-fair-Robustify-CFS-bandwidth-timer-locking.patch @@ -1,6 +1,7 @@ +From 78f68e44994c830d70aa92bb86a47b204ff605c6 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Mon, 7 Jan 2019 13:52:31 +0100 -Subject: [PATCH] sched/fair: Robustify CFS-bandwidth timer locking +Subject: [PATCH 011/269] sched/fair: Robustify CFS-bandwidth timer locking Traditionally hrtimer callbacks were run with IRQs disabled, but with the introduction of HRTIMER_MODE_SOFT it is possible they run from @@ -24,12 +25,14 @@ Signed-off-by: Peter Zijlstra (Intel) Link: https://lkml.kernel.org/r/20190107125231.GE14122@hirez.programming.kicks-ass.net Signed-off-by: Sebastian Andrzej Siewior --- - kernel/sched/fair.c | 30 ++++++++++++++++-------------- + kernel/sched/fair.c | 30 ++++++++++++++++-------------- 1 file 
changed, 16 insertions(+), 14 deletions(-) +diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c +index 4aa8e7d90c25..53acadf72cd9 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c -@@ -4553,7 +4553,7 @@ static u64 distribute_cfs_runtime(struct +@@ -4553,7 +4553,7 @@ static u64 distribute_cfs_runtime(struct cfs_bandwidth *cfs_b, struct rq *rq = rq_of(cfs_rq); struct rq_flags rf; @@ -38,7 +41,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (!cfs_rq_throttled(cfs_rq)) goto next; -@@ -4570,7 +4570,7 @@ static u64 distribute_cfs_runtime(struct +@@ -4570,7 +4570,7 @@ static u64 distribute_cfs_runtime(struct cfs_bandwidth *cfs_b, unthrottle_cfs_rq(cfs_rq); next: @@ -47,7 +50,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (!remaining) break; -@@ -4586,7 +4586,7 @@ static u64 distribute_cfs_runtime(struct +@@ -4586,7 +4586,7 @@ static u64 distribute_cfs_runtime(struct cfs_bandwidth *cfs_b, * period the timer is deactivated until scheduling resumes; cfs_b->idle is * used to track this state. 
*/ @@ -56,7 +59,7 @@ Signed-off-by: Sebastian Andrzej Siewior { u64 runtime, runtime_expires; int throttled; -@@ -4628,11 +4628,11 @@ static int do_sched_cfs_period_timer(str +@@ -4628,11 +4628,11 @@ static int do_sched_cfs_period_timer(struct cfs_bandwidth *cfs_b, int overrun) while (throttled && cfs_b->runtime > 0 && !cfs_b->distribute_running) { runtime = cfs_b->runtime; cfs_b->distribute_running = 1; @@ -70,7 +73,7 @@ Signed-off-by: Sebastian Andrzej Siewior cfs_b->distribute_running = 0; throttled = !list_empty(&cfs_b->throttled_cfs_rq); -@@ -4741,17 +4741,18 @@ static __always_inline void return_cfs_r +@@ -4741,17 +4741,18 @@ static __always_inline void return_cfs_rq_runtime(struct cfs_rq *cfs_rq) static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b) { u64 runtime = 0, slice = sched_cfs_bandwidth_slice(); @@ -92,7 +95,7 @@ Signed-off-by: Sebastian Andrzej Siewior return; } -@@ -4762,18 +4763,18 @@ static void do_sched_cfs_slack_timer(str +@@ -4762,18 +4763,18 @@ static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b) if (runtime) cfs_b->distribute_running = 1; @@ -114,20 +117,23 @@ Signed-off-by: Sebastian Andrzej Siewior } /* -@@ -4851,20 +4852,21 @@ static enum hrtimer_restart sched_cfs_pe +@@ -4853,11 +4854,12 @@ static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer) { struct cfs_bandwidth *cfs_b = container_of(timer, struct cfs_bandwidth, period_timer); + unsigned long flags; int overrun; int idle = 0; + int count = 0; - raw_spin_lock(&cfs_b->lock); + raw_spin_lock_irqsave(&cfs_b->lock, flags); for (;;) { overrun = hrtimer_forward_now(timer, cfs_b->period); if (!overrun) - break; +@@ -4885,11 +4887,11 @@ static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer) + count = 0; + } - idle = do_sched_cfs_period_timer(cfs_b, overrun); + idle = do_sched_cfs_period_timer(cfs_b, overrun, flags); @@ -139,3 +145,6 @@ Signed-off-by: Sebastian Andrzej Siewior return idle ? 
HRTIMER_NORESTART : HRTIMER_RESTART; } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0012-arm-convert-boot-lock-to-raw.patch b/kernel/patches-4.19.x-rt/0012-arm-Convert-arm-boot_lock-to-raw.patch similarity index 71% rename from kernel/patches-4.19.x-rt/0012-arm-convert-boot-lock-to-raw.patch rename to kernel/patches-4.19.x-rt/0012-arm-Convert-arm-boot_lock-to-raw.patch index bd61c05b9..23d2519f0 100644 --- a/kernel/patches-4.19.x-rt/0012-arm-convert-boot-lock-to-raw.patch +++ b/kernel/patches-4.19.x-rt/0012-arm-Convert-arm-boot_lock-to-raw.patch @@ -1,6 +1,7 @@ +From fa6e4c3d085352808073b23fdff79729db01930a Mon Sep 17 00:00:00 2001 From: Frank Rowand Date: Mon, 19 Sep 2011 14:51:14 -0700 -Subject: arm: Convert arm boot_lock to raw +Subject: [PATCH 012/269] arm: Convert arm boot_lock to raw The arm boot_lock is used by the secondary processor startup code. The locking task is the idle thread, which has idle->sched_class == &idle_sched_class. @@ -23,16 +24,18 @@ Acked-by: Krzysztof Kozlowski Tested-by: Krzysztof Kozlowski [Exynos5422 Linaro PM-QA] Signed-off-by: Sebastian Andrzej Siewior --- - arch/arm/mach-exynos/platsmp.c | 12 ++++++------ - arch/arm/mach-hisi/platmcpm.c | 22 +++++++++++----------- - arch/arm/mach-omap2/omap-smp.c | 10 +++++----- - arch/arm/mach-prima2/platsmp.c | 10 +++++----- - arch/arm/mach-qcom/platsmp.c | 10 +++++----- - arch/arm/mach-spear/platsmp.c | 10 +++++----- - arch/arm/mach-sti/platsmp.c | 10 +++++----- - arch/arm/plat-versatile/platsmp.c | 10 +++++----- + arch/arm/mach-exynos/platsmp.c | 12 ++++++------ + arch/arm/mach-hisi/platmcpm.c | 22 +++++++++++----------- + arch/arm/mach-omap2/omap-smp.c | 10 +++++----- + arch/arm/mach-prima2/platsmp.c | 10 +++++----- + arch/arm/mach-qcom/platsmp.c | 10 +++++----- + arch/arm/mach-spear/platsmp.c | 10 +++++----- + arch/arm/mach-sti/platsmp.c | 10 +++++----- + arch/arm/plat-versatile/platsmp.c | 10 +++++----- 8 files changed, 47 insertions(+), 47 deletions(-) +diff --git 
a/arch/arm/mach-exynos/platsmp.c b/arch/arm/mach-exynos/platsmp.c +index 6a1e682371b3..17dca0ff336e 100644 --- a/arch/arm/mach-exynos/platsmp.c +++ b/arch/arm/mach-exynos/platsmp.c @@ -239,7 +239,7 @@ static void write_pen_release(int val) @@ -44,7 +47,7 @@ Signed-off-by: Sebastian Andrzej Siewior static void exynos_secondary_init(unsigned int cpu) { -@@ -252,8 +252,8 @@ static void exynos_secondary_init(unsign +@@ -252,8 +252,8 @@ static void exynos_secondary_init(unsigned int cpu) /* * Synchronise with the boot thread. */ @@ -55,7 +58,7 @@ Signed-off-by: Sebastian Andrzej Siewior } int exynos_set_boot_addr(u32 core_id, unsigned long boot_addr) -@@ -317,7 +317,7 @@ static int exynos_boot_secondary(unsigne +@@ -317,7 +317,7 @@ static int exynos_boot_secondary(unsigned int cpu, struct task_struct *idle) * Set synchronisation state between this boot processor * and the secondary one */ @@ -64,7 +67,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * The secondary processor is waiting to be released from -@@ -344,7 +344,7 @@ static int exynos_boot_secondary(unsigne +@@ -344,7 +344,7 @@ static int exynos_boot_secondary(unsigned int cpu, struct task_struct *idle) if (timeout == 0) { printk(KERN_ERR "cpu1 power enable failed"); @@ -73,7 +76,7 @@ Signed-off-by: Sebastian Andrzej Siewior return -ETIMEDOUT; } } -@@ -390,7 +390,7 @@ static int exynos_boot_secondary(unsigne +@@ -390,7 +390,7 @@ static int exynos_boot_secondary(unsigned int cpu, struct task_struct *idle) * calibrations, then wait for it to finish */ fail: @@ -82,6 +85,8 @@ Signed-off-by: Sebastian Andrzej Siewior return pen_release != -1 ? 
ret : 0; } +diff --git a/arch/arm/mach-hisi/platmcpm.c b/arch/arm/mach-hisi/platmcpm.c +index f66815c3dd07..00524abd963f 100644 --- a/arch/arm/mach-hisi/platmcpm.c +++ b/arch/arm/mach-hisi/platmcpm.c @@ -61,7 +61,7 @@ @@ -93,7 +98,7 @@ Signed-off-by: Sebastian Andrzej Siewior static u32 fabric_phys_addr; /* * [0]: bootwrapper physical address -@@ -113,7 +113,7 @@ static int hip04_boot_secondary(unsigned +@@ -113,7 +113,7 @@ static int hip04_boot_secondary(unsigned int l_cpu, struct task_struct *idle) if (cluster >= HIP04_MAX_CLUSTERS || cpu >= HIP04_MAX_CPUS_PER_CLUSTER) return -EINVAL; @@ -102,7 +107,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (hip04_cpu_table[cluster][cpu]) goto out; -@@ -147,7 +147,7 @@ static int hip04_boot_secondary(unsigned +@@ -147,7 +147,7 @@ static int hip04_boot_secondary(unsigned int l_cpu, struct task_struct *idle) out: hip04_cpu_table[cluster][cpu]++; @@ -111,7 +116,7 @@ Signed-off-by: Sebastian Andrzej Siewior return 0; } -@@ -162,11 +162,11 @@ static void hip04_cpu_die(unsigned int l +@@ -162,11 +162,11 @@ static void hip04_cpu_die(unsigned int l_cpu) cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0); cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1); @@ -125,7 +130,7 @@ Signed-off-by: Sebastian Andrzej Siewior return; } else if (hip04_cpu_table[cluster][cpu] > 1) { pr_err("Cluster %d CPU%d boots multiple times\n", cluster, cpu); -@@ -174,7 +174,7 @@ static void hip04_cpu_die(unsigned int l +@@ -174,7 +174,7 @@ static void hip04_cpu_die(unsigned int l_cpu) } last_man = hip04_cluster_is_down(cluster); @@ -134,7 +139,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (last_man) { /* Since it's Cortex A15, disable L2 prefetching. 
*/ asm volatile( -@@ -203,7 +203,7 @@ static int hip04_cpu_kill(unsigned int l +@@ -203,7 +203,7 @@ static int hip04_cpu_kill(unsigned int l_cpu) cpu >= HIP04_MAX_CPUS_PER_CLUSTER); count = TIMEOUT_MSEC / POLL_MSEC; @@ -143,7 +148,7 @@ Signed-off-by: Sebastian Andrzej Siewior for (tries = 0; tries < count; tries++) { if (hip04_cpu_table[cluster][cpu]) goto err; -@@ -211,10 +211,10 @@ static int hip04_cpu_kill(unsigned int l +@@ -211,10 +211,10 @@ static int hip04_cpu_kill(unsigned int l_cpu) data = readl_relaxed(sysctrl + SC_CPU_RESET_STATUS(cluster)); if (data & CORE_WFI_STATUS(cpu)) break; @@ -156,7 +161,7 @@ Signed-off-by: Sebastian Andrzej Siewior } if (tries >= count) goto err; -@@ -231,10 +231,10 @@ static int hip04_cpu_kill(unsigned int l +@@ -231,10 +231,10 @@ static int hip04_cpu_kill(unsigned int l_cpu) goto err; if (hip04_cluster_is_down(cluster)) hip04_set_snoop_filter(cluster, 0); @@ -169,9 +174,11 @@ Signed-off-by: Sebastian Andrzej Siewior return 0; } #endif +diff --git a/arch/arm/mach-omap2/omap-smp.c b/arch/arm/mach-omap2/omap-smp.c +index 1c73694c871a..ac4d2f030b87 100644 --- a/arch/arm/mach-omap2/omap-smp.c +++ b/arch/arm/mach-omap2/omap-smp.c -@@ -69,7 +69,7 @@ static const struct omap_smp_config omap +@@ -69,7 +69,7 @@ static const struct omap_smp_config omap5_cfg __initconst = { .startup_addr = omap5_secondary_startup, }; @@ -180,7 +187,7 @@ Signed-off-by: Sebastian Andrzej Siewior void __iomem *omap4_get_scu_base(void) { -@@ -177,8 +177,8 @@ static void omap4_secondary_init(unsigne +@@ -177,8 +177,8 @@ static void omap4_secondary_init(unsigned int cpu) /* * Synchronise with the boot thread. 
*/ @@ -191,7 +198,7 @@ Signed-off-by: Sebastian Andrzej Siewior } static int omap4_boot_secondary(unsigned int cpu, struct task_struct *idle) -@@ -191,7 +191,7 @@ static int omap4_boot_secondary(unsigned +@@ -191,7 +191,7 @@ static int omap4_boot_secondary(unsigned int cpu, struct task_struct *idle) * Set synchronisation state between this boot processor * and the secondary one */ @@ -200,7 +207,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * Update the AuxCoreBoot0 with boot state for secondary core. -@@ -270,7 +270,7 @@ static int omap4_boot_secondary(unsigned +@@ -270,7 +270,7 @@ static int omap4_boot_secondary(unsigned int cpu, struct task_struct *idle) * Now the secondary core is starting up let it run its * calibrations, then wait for it to finish */ @@ -209,6 +216,8 @@ Signed-off-by: Sebastian Andrzej Siewior return 0; } +diff --git a/arch/arm/mach-prima2/platsmp.c b/arch/arm/mach-prima2/platsmp.c +index 75ef5d4be554..c17c86e5d860 100644 --- a/arch/arm/mach-prima2/platsmp.c +++ b/arch/arm/mach-prima2/platsmp.c @@ -22,7 +22,7 @@ @@ -220,7 +229,7 @@ Signed-off-by: Sebastian Andrzej Siewior static void sirfsoc_secondary_init(unsigned int cpu) { -@@ -36,8 +36,8 @@ static void sirfsoc_secondary_init(unsig +@@ -36,8 +36,8 @@ static void sirfsoc_secondary_init(unsigned int cpu) /* * Synchronise with the boot thread. 
*/ @@ -231,7 +240,7 @@ Signed-off-by: Sebastian Andrzej Siewior } static const struct of_device_id clk_ids[] = { -@@ -75,7 +75,7 @@ static int sirfsoc_boot_secondary(unsign +@@ -75,7 +75,7 @@ static int sirfsoc_boot_secondary(unsigned int cpu, struct task_struct *idle) /* make sure write buffer is drained */ mb(); @@ -240,7 +249,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * The secondary processor is waiting to be released from -@@ -107,7 +107,7 @@ static int sirfsoc_boot_secondary(unsign +@@ -107,7 +107,7 @@ static int sirfsoc_boot_secondary(unsigned int cpu, struct task_struct *idle) * now the secondary core is starting up let it run its * calibrations, then wait for it to finish */ @@ -249,6 +258,8 @@ Signed-off-by: Sebastian Andrzej Siewior return pen_release != -1 ? -ENOSYS : 0; } +diff --git a/arch/arm/mach-qcom/platsmp.c b/arch/arm/mach-qcom/platsmp.c +index 5494c9e0c909..e8ce157d3548 100644 --- a/arch/arm/mach-qcom/platsmp.c +++ b/arch/arm/mach-qcom/platsmp.c @@ -46,7 +46,7 @@ @@ -260,7 +271,7 @@ Signed-off-by: Sebastian Andrzej Siewior #ifdef CONFIG_HOTPLUG_CPU static void qcom_cpu_die(unsigned int cpu) -@@ -60,8 +60,8 @@ static void qcom_secondary_init(unsigned +@@ -60,8 +60,8 @@ static void qcom_secondary_init(unsigned int cpu) /* * Synchronise with the boot thread. 
*/ @@ -271,7 +282,7 @@ Signed-off-by: Sebastian Andrzej Siewior } static int scss_release_secondary(unsigned int cpu) -@@ -284,7 +284,7 @@ static int qcom_boot_secondary(unsigned +@@ -284,7 +284,7 @@ static int qcom_boot_secondary(unsigned int cpu, int (*func)(unsigned int)) * set synchronisation state between this boot processor * and the secondary one */ @@ -280,7 +291,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * Send the secondary CPU a soft interrupt, thereby causing -@@ -297,7 +297,7 @@ static int qcom_boot_secondary(unsigned +@@ -297,7 +297,7 @@ static int qcom_boot_secondary(unsigned int cpu, int (*func)(unsigned int)) * now the secondary core is starting up let it run its * calibrations, then wait for it to finish */ @@ -289,6 +300,8 @@ Signed-off-by: Sebastian Andrzej Siewior return ret; } +diff --git a/arch/arm/mach-spear/platsmp.c b/arch/arm/mach-spear/platsmp.c +index 39038a03836a..6da5c93872bf 100644 --- a/arch/arm/mach-spear/platsmp.c +++ b/arch/arm/mach-spear/platsmp.c @@ -32,7 +32,7 @@ static void write_pen_release(int val) @@ -300,7 +313,7 @@ Signed-off-by: Sebastian Andrzej Siewior static void __iomem *scu_base = IOMEM(VA_SCU_BASE); -@@ -47,8 +47,8 @@ static void spear13xx_secondary_init(uns +@@ -47,8 +47,8 @@ static void spear13xx_secondary_init(unsigned int cpu) /* * Synchronise with the boot thread. 
*/ @@ -311,7 +324,7 @@ Signed-off-by: Sebastian Andrzej Siewior } static int spear13xx_boot_secondary(unsigned int cpu, struct task_struct *idle) -@@ -59,7 +59,7 @@ static int spear13xx_boot_secondary(unsi +@@ -59,7 +59,7 @@ static int spear13xx_boot_secondary(unsigned int cpu, struct task_struct *idle) * set synchronisation state between this boot processor * and the secondary one */ @@ -320,7 +333,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * The secondary processor is waiting to be released from -@@ -84,7 +84,7 @@ static int spear13xx_boot_secondary(unsi +@@ -84,7 +84,7 @@ static int spear13xx_boot_secondary(unsigned int cpu, struct task_struct *idle) * now the secondary core is starting up let it run its * calibrations, then wait for it to finish */ @@ -329,6 +342,8 @@ Signed-off-by: Sebastian Andrzej Siewior return pen_release != -1 ? -ENOSYS : 0; } +diff --git a/arch/arm/mach-sti/platsmp.c b/arch/arm/mach-sti/platsmp.c +index 231f19e17436..a3419b7003e6 100644 --- a/arch/arm/mach-sti/platsmp.c +++ b/arch/arm/mach-sti/platsmp.c @@ -35,7 +35,7 @@ static void write_pen_release(int val) @@ -340,7 +355,7 @@ Signed-off-by: Sebastian Andrzej Siewior static void sti_secondary_init(unsigned int cpu) { -@@ -48,8 +48,8 @@ static void sti_secondary_init(unsigned +@@ -48,8 +48,8 @@ static void sti_secondary_init(unsigned int cpu) /* * Synchronise with the boot thread. 
*/ @@ -351,7 +366,7 @@ Signed-off-by: Sebastian Andrzej Siewior } static int sti_boot_secondary(unsigned int cpu, struct task_struct *idle) -@@ -60,7 +60,7 @@ static int sti_boot_secondary(unsigned i +@@ -60,7 +60,7 @@ static int sti_boot_secondary(unsigned int cpu, struct task_struct *idle) * set synchronisation state between this boot processor * and the secondary one */ @@ -360,7 +375,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * The secondary processor is waiting to be released from -@@ -91,7 +91,7 @@ static int sti_boot_secondary(unsigned i +@@ -91,7 +91,7 @@ static int sti_boot_secondary(unsigned int cpu, struct task_struct *idle) * now the secondary core is starting up let it run its * calibrations, then wait for it to finish */ @@ -369,6 +384,8 @@ Signed-off-by: Sebastian Andrzej Siewior return pen_release != -1 ? -ENOSYS : 0; } +diff --git a/arch/arm/plat-versatile/platsmp.c b/arch/arm/plat-versatile/platsmp.c +index c2366510187a..6b60f582b738 100644 --- a/arch/arm/plat-versatile/platsmp.c +++ b/arch/arm/plat-versatile/platsmp.c @@ -32,7 +32,7 @@ static void write_pen_release(int val) @@ -380,7 +397,7 @@ Signed-off-by: Sebastian Andrzej Siewior void versatile_secondary_init(unsigned int cpu) { -@@ -45,8 +45,8 @@ void versatile_secondary_init(unsigned i +@@ -45,8 +45,8 @@ void versatile_secondary_init(unsigned int cpu) /* * Synchronise with the boot thread. 
*/ @@ -391,7 +408,7 @@ Signed-off-by: Sebastian Andrzej Siewior } int versatile_boot_secondary(unsigned int cpu, struct task_struct *idle) -@@ -57,7 +57,7 @@ int versatile_boot_secondary(unsigned in +@@ -57,7 +57,7 @@ int versatile_boot_secondary(unsigned int cpu, struct task_struct *idle) * Set synchronisation state between this boot processor * and the secondary one */ @@ -400,7 +417,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * This is really belt and braces; we hold unintended secondary -@@ -87,7 +87,7 @@ int versatile_boot_secondary(unsigned in +@@ -87,7 +87,7 @@ int versatile_boot_secondary(unsigned int cpu, struct task_struct *idle) * now the secondary core is starting up let it run its * calibrations, then wait for it to finish */ @@ -409,3 +426,6 @@ Signed-off-by: Sebastian Andrzej Siewior return pen_release != -1 ? -ENOSYS : 0; } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0013-x86-ioapic-Don-t-let-setaffinity-unmask-threaded-EOI.patch b/kernel/patches-4.19.x-rt/0013-x86-ioapic-Don-t-let-setaffinity-unmask-threaded-EOI.patch index 9206860a3..8c5e4d265 100644 --- a/kernel/patches-4.19.x-rt/0013-x86-ioapic-Don-t-let-setaffinity-unmask-threaded-EOI.patch +++ b/kernel/patches-4.19.x-rt/0013-x86-ioapic-Don-t-let-setaffinity-unmask-threaded-EOI.patch @@ -1,6 +1,7 @@ +From 4debab2aa3d29fcdb5b9cd132416094c54e9361b Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Tue, 17 Jul 2018 18:25:31 +0200 -Subject: [PATCH] x86/ioapic: Don't let setaffinity unmask threaded EOI +Subject: [PATCH 013/269] x86/ioapic: Don't let setaffinity unmask threaded EOI interrupt too early There is an issue with threaded interrupts which are marked ONESHOT @@ -29,12 +30,14 @@ Signed-off-by: Thomas Gleixner ifdef paths (spotted by Andy Shevchenko)] Signed-off-by: Sebastian Andrzej Siewior --- - arch/x86/kernel/apic/io_apic.c | 23 +++++++++++++---------- + arch/x86/kernel/apic/io_apic.c | 23 +++++++++++++---------- 1 file changed, 13 insertions(+), 10 deletions(-) +diff --git 
a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c +index ff0d14cd9e82..c2bd6e0433f8 100644 --- a/arch/x86/kernel/apic/io_apic.c +++ b/arch/x86/kernel/apic/io_apic.c -@@ -1722,19 +1722,20 @@ static bool io_apic_level_ack_pending(st +@@ -1722,19 +1722,20 @@ static bool io_apic_level_ack_pending(struct mp_chip_data *data) return false; } @@ -59,7 +62,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* Only migrate the irq if the ack has been received. * * On rare occasions the broadcast level triggered ack gets -@@ -1763,15 +1764,17 @@ static inline void ioapic_irqd_unmask(st +@@ -1763,15 +1764,17 @@ static inline void ioapic_irqd_unmask(struct irq_data *data, bool masked) */ if (!io_apic_level_ack_pending(data->chip_data)) irq_move_masked_irq(data); @@ -80,7 +83,7 @@ Signed-off-by: Sebastian Andrzej Siewior { } #endif -@@ -1780,11 +1783,11 @@ static void ioapic_ack_level(struct irq_ +@@ -1780,11 +1783,11 @@ static void ioapic_ack_level(struct irq_data *irq_data) { struct irq_cfg *cfg = irqd_cfg(irq_data); unsigned long v; @@ -94,7 +97,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * It appears there is an erratum which affects at least version 0x11 -@@ -1839,7 +1842,7 @@ static void ioapic_ack_level(struct irq_ +@@ -1839,7 +1842,7 @@ static void ioapic_ack_level(struct irq_data *irq_data) eoi_ioapic_pin(cfg->vector, irq_data->chip_data); } @@ -103,3 +106,6 @@ Signed-off-by: Sebastian Andrzej Siewior } static void ioapic_ir_ack_level(struct irq_data *irq_data) +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0014-arm-kprobe-replace-patch_lock-to-raw-lock.patch b/kernel/patches-4.19.x-rt/0014-arm-kprobe-replace-patch_lock-to-raw-lock.patch deleted file mode 100644 index 9a0fa6413..000000000 --- a/kernel/patches-4.19.x-rt/0014-arm-kprobe-replace-patch_lock-to-raw-lock.patch +++ /dev/null @@ -1,69 +0,0 @@ -From: Yang Shi -Date: Thu, 10 Nov 2016 16:17:55 -0800 -Subject: [PATCH] arm: kprobe: replace patch_lock to raw lock - -When running kprobe on -rt 
kernel, the below bug is caught: - -BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:931 -in_atomic(): 1, irqs_disabled(): 128, pid: 14, name: migration/0 -INFO: lockdep is turned off. -irq event stamp: 238 -hardirqs last enabled at (237): [<80b5aecc>] _raw_spin_unlock_irqrestore+0x88/0x90 -hardirqs last disabled at (238): [<80b56d88>] __schedule+0xec/0x94c -softirqs last enabled at (0): [<80225584>] copy_process.part.5+0x30c/0x1994 -softirqs last disabled at (0): [< (null)>] (null) -Preemption disabled at:[<802f2b98>] cpu_stopper_thread+0xc0/0x140 - -CPU: 0 PID: 14 Comm: migration/0 Tainted: G O 4.8.3-rt2 #1 -Hardware name: Freescale LS1021A -[<80212e7c>] (unwind_backtrace) from [<8020cd2c>] (show_stack+0x20/0x24) -[<8020cd2c>] (show_stack) from [<80689e14>] (dump_stack+0xa0/0xcc) -[<80689e14>] (dump_stack) from [<8025a43c>] (___might_sleep+0x1b8/0x2a4) -[<8025a43c>] (___might_sleep) from [<80b5b324>] (rt_spin_lock+0x34/0x74) -[<80b5b324>] (rt_spin_lock) from [<80b5c31c>] (__patch_text_real+0x70/0xe8) -[<80b5c31c>] (__patch_text_real) from [<80b5c3ac>] (patch_text_stop_machine+0x18/0x20) -[<80b5c3ac>] (patch_text_stop_machine) from [<802f2920>] (multi_cpu_stop+0xfc/0x134) -[<802f2920>] (multi_cpu_stop) from [<802f2ba0>] (cpu_stopper_thread+0xc8/0x140) -[<802f2ba0>] (cpu_stopper_thread) from [<802563a4>] (smpboot_thread_fn+0x1a4/0x354) -[<802563a4>] (smpboot_thread_fn) from [<80251d38>] (kthread+0x104/0x11c) -[<80251d38>] (kthread) from [<80207f70>] (ret_from_fork+0x14/0x24) - -Since patch_text_stop_machine() is called in stop_machine() which disables IRQ, -sleepable lock should be not used in this atomic context, so replace patch_lock -to raw lock. 
- -Signed-off-by: Yang Shi -Signed-off-by: Sebastian Andrzej Siewior ---- - arch/arm/kernel/patch.c | 6 +++--- - 1 file changed, 3 insertions(+), 3 deletions(-) - ---- a/arch/arm/kernel/patch.c -+++ b/arch/arm/kernel/patch.c -@@ -16,7 +16,7 @@ struct patch { - unsigned int insn; - }; - --static DEFINE_SPINLOCK(patch_lock); -+static DEFINE_RAW_SPINLOCK(patch_lock); - - static void __kprobes *patch_map(void *addr, int fixmap, unsigned long *flags) - __acquires(&patch_lock) -@@ -33,7 +33,7 @@ static void __kprobes *patch_map(void *a - return addr; - - if (flags) -- spin_lock_irqsave(&patch_lock, *flags); -+ raw_spin_lock_irqsave(&patch_lock, *flags); - else - __acquire(&patch_lock); - -@@ -48,7 +48,7 @@ static void __kprobes patch_unmap(int fi - clear_fixmap(fixmap); - - if (flags) -- spin_unlock_irqrestore(&patch_lock, *flags); -+ raw_spin_unlock_irqrestore(&patch_lock, *flags); - else - __release(&patch_lock); - } diff --git a/kernel/patches-4.19.x-rt/0016-cgroup-use-irqsave-in-cgroup_rstat_flush_locked.patch b/kernel/patches-4.19.x-rt/0014-cgroup-use-irqsave-in-cgroup_rstat_flush_locked.patch similarity index 75% rename from kernel/patches-4.19.x-rt/0016-cgroup-use-irqsave-in-cgroup_rstat_flush_locked.patch rename to kernel/patches-4.19.x-rt/0014-cgroup-use-irqsave-in-cgroup_rstat_flush_locked.patch index 92feca234..999b5618f 100644 --- a/kernel/patches-4.19.x-rt/0016-cgroup-use-irqsave-in-cgroup_rstat_flush_locked.patch +++ b/kernel/patches-4.19.x-rt/0014-cgroup-use-irqsave-in-cgroup_rstat_flush_locked.patch @@ -1,6 +1,7 @@ +From 1117688ac7606703683b1ac8cacdbf02d47b4adb Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Tue, 3 Jul 2018 18:19:48 +0200 -Subject: [PATCH] cgroup: use irqsave in cgroup_rstat_flush_locked() +Subject: [PATCH 014/269] cgroup: use irqsave in cgroup_rstat_flush_locked() All callers of cgroup_rstat_flush_locked() acquire cgroup_rstat_lock either with spin_lock_irq() or spin_lock_irqsave(). 
@@ -16,12 +17,14 @@ Acquire the raw_spin_lock_t with disabled interrupts. Signed-off-by: Sebastian Andrzej Siewior --- - kernel/cgroup/rstat.c | 5 +++-- + kernel/cgroup/rstat.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) +diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c +index bb95a35e8c2d..3266a9781b4e 100644 --- a/kernel/cgroup/rstat.c +++ b/kernel/cgroup/rstat.c -@@ -157,8 +157,9 @@ static void cgroup_rstat_flush_locked(st +@@ -159,8 +159,9 @@ static void cgroup_rstat_flush_locked(struct cgroup *cgrp, bool may_sleep) raw_spinlock_t *cpu_lock = per_cpu_ptr(&cgroup_rstat_cpu_lock, cpu); struct cgroup *pos = NULL; @@ -32,7 +35,7 @@ Signed-off-by: Sebastian Andrzej Siewior while ((pos = cgroup_rstat_cpu_pop_updated(pos, cgrp, cpu))) { struct cgroup_subsys_state *css; -@@ -170,7 +171,7 @@ static void cgroup_rstat_flush_locked(st +@@ -172,7 +173,7 @@ static void cgroup_rstat_flush_locked(struct cgroup *cgrp, bool may_sleep) css->ss->css_rstat_flush(css, cpu); rcu_read_unlock(); } @@ -41,3 +44,6 @@ Signed-off-by: Sebastian Andrzej Siewior /* if @may_sleep, play nice and yield if necessary */ if (may_sleep && (need_resched() || +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0015-arm-unwind-use_raw_lock.patch b/kernel/patches-4.19.x-rt/0015-arm-unwind-use_raw_lock.patch deleted file mode 100644 index 9c10dd91c..000000000 --- a/kernel/patches-4.19.x-rt/0015-arm-unwind-use_raw_lock.patch +++ /dev/null @@ -1,83 +0,0 @@ -From: Sebastian Andrzej Siewior -Date: Fri, 20 Sep 2013 14:31:54 +0200 -Subject: arm/unwind: use a raw_spin_lock - -Mostly unwind is done with irqs enabled however SLUB may call it with -irqs disabled while creating a new SLUB cache. - -I had system freeze while loading a module which called -kmem_cache_create() on init. 
That means SLUB's __slab_alloc() disabled -interrupts and then - -->new_slab_objects() - ->new_slab() - ->setup_object() - ->setup_object_debug() - ->init_tracking() - ->set_track() - ->save_stack_trace() - ->save_stack_trace_tsk() - ->walk_stackframe() - ->unwind_frame() - ->unwind_find_idx() - =>spin_lock_irqsave(&unwind_lock); - - -Signed-off-by: Sebastian Andrzej Siewior ---- - arch/arm/kernel/unwind.c | 14 +++++++------- - 1 file changed, 7 insertions(+), 7 deletions(-) - ---- a/arch/arm/kernel/unwind.c -+++ b/arch/arm/kernel/unwind.c -@@ -93,7 +93,7 @@ extern const struct unwind_idx __start_u - static const struct unwind_idx *__origin_unwind_idx; - extern const struct unwind_idx __stop_unwind_idx[]; - --static DEFINE_SPINLOCK(unwind_lock); -+static DEFINE_RAW_SPINLOCK(unwind_lock); - static LIST_HEAD(unwind_tables); - - /* Convert a prel31 symbol to an absolute address */ -@@ -201,7 +201,7 @@ static const struct unwind_idx *unwind_f - /* module unwind tables */ - struct unwind_table *table; - -- spin_lock_irqsave(&unwind_lock, flags); -+ raw_spin_lock_irqsave(&unwind_lock, flags); - list_for_each_entry(table, &unwind_tables, list) { - if (addr >= table->begin_addr && - addr < table->end_addr) { -@@ -213,7 +213,7 @@ static const struct unwind_idx *unwind_f - break; - } - } -- spin_unlock_irqrestore(&unwind_lock, flags); -+ raw_spin_unlock_irqrestore(&unwind_lock, flags); - } - - pr_debug("%s: idx = %p\n", __func__, idx); -@@ -529,9 +529,9 @@ struct unwind_table *unwind_table_add(un - tab->begin_addr = text_addr; - tab->end_addr = text_addr + text_size; - -- spin_lock_irqsave(&unwind_lock, flags); -+ raw_spin_lock_irqsave(&unwind_lock, flags); - list_add_tail(&tab->list, &unwind_tables); -- spin_unlock_irqrestore(&unwind_lock, flags); -+ raw_spin_unlock_irqrestore(&unwind_lock, flags); - - return tab; - } -@@ -543,9 +543,9 @@ void unwind_table_del(struct unwind_tabl - if (!tab) - return; - -- spin_lock_irqsave(&unwind_lock, flags); -+ 
raw_spin_lock_irqsave(&unwind_lock, flags); - list_del(&tab->list); -- spin_unlock_irqrestore(&unwind_lock, flags); -+ raw_spin_unlock_irqrestore(&unwind_lock, flags); - - kfree(tab); - } diff --git a/kernel/patches-4.19.x-rt/0017-fscache-initialize-cookie-hash-table-raw-spinlocks.patch b/kernel/patches-4.19.x-rt/0015-fscache-initialize-cookie-hash-table-raw-spinlocks.patch similarity index 67% rename from kernel/patches-4.19.x-rt/0017-fscache-initialize-cookie-hash-table-raw-spinlocks.patch rename to kernel/patches-4.19.x-rt/0015-fscache-initialize-cookie-hash-table-raw-spinlocks.patch index 8dd59acf1..20f180196 100644 --- a/kernel/patches-4.19.x-rt/0017-fscache-initialize-cookie-hash-table-raw-spinlocks.patch +++ b/kernel/patches-4.19.x-rt/0015-fscache-initialize-cookie-hash-table-raw-spinlocks.patch @@ -1,6 +1,7 @@ +From 8cf7a5b4f03a2829c823971a12c1a206bcba069d Mon Sep 17 00:00:00 2001 From: Clark Williams Date: Tue, 3 Jul 2018 13:34:30 -0500 -Subject: [PATCH] fscache: initialize cookie hash table raw spinlocks +Subject: [PATCH 015/269] fscache: initialize cookie hash table raw spinlocks The fscache cookie mechanism uses a hash table of hlist_bl_head structures. The PREEMPT_RT patcheset adds a raw spinlock to this structure and so on PREEMPT_RT @@ -12,14 +13,16 @@ Use the init function for fscache cookies. 
Signed-off-by: Clark Williams Signed-off-by: Sebastian Andrzej Siewior --- - fs/fscache/cookie.c | 8 ++++++++ - fs/fscache/main.c | 1 + - include/linux/fscache.h | 1 + + fs/fscache/cookie.c | 8 ++++++++ + fs/fscache/main.c | 1 + + include/linux/fscache.h | 1 + 3 files changed, 10 insertions(+) +diff --git a/fs/fscache/cookie.c b/fs/fscache/cookie.c +index c550512ce335..d5d57da32ffa 100644 --- a/fs/fscache/cookie.c +++ b/fs/fscache/cookie.c -@@ -962,3 +962,11 @@ int __fscache_check_consistency(struct f +@@ -962,3 +962,11 @@ int __fscache_check_consistency(struct fscache_cookie *cookie, return -ESTALE; } EXPORT_SYMBOL(__fscache_check_consistency); @@ -31,6 +34,8 @@ Signed-off-by: Sebastian Andrzej Siewior + for (i = 0; i < (1 << fscache_cookie_hash_shift) - 1; i++) + INIT_HLIST_BL_HEAD(&fscache_cookie_hash[i]); +} +diff --git a/fs/fscache/main.c b/fs/fscache/main.c +index 30ad89db1efc..1d5f1d679ffa 100644 --- a/fs/fscache/main.c +++ b/fs/fscache/main.c @@ -149,6 +149,7 @@ static int __init fscache_init(void) @@ -41,9 +46,11 @@ Signed-off-by: Sebastian Andrzej Siewior fscache_root = kobject_create_and_add("fscache", kernel_kobj); if (!fscache_root) +diff --git a/include/linux/fscache.h b/include/linux/fscache.h +index 84b90a79d75a..87a9330eafa2 100644 --- a/include/linux/fscache.h +++ b/include/linux/fscache.h -@@ -230,6 +230,7 @@ extern void __fscache_readpages_cancel(s +@@ -230,6 +230,7 @@ extern void __fscache_readpages_cancel(struct fscache_cookie *cookie, extern void __fscache_disable_cookie(struct fscache_cookie *, const void *, bool); extern void __fscache_enable_cookie(struct fscache_cookie *, const void *, loff_t, bool (*)(void *), void *); @@ -51,3 +58,6 @@ Signed-off-by: Sebastian Andrzej Siewior /** * fscache_register_netfs - Register a filesystem as desiring caching services +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0018-Drivers-hv-vmbus-include-header-for-get_irq_regs.patch 
b/kernel/patches-4.19.x-rt/0016-Drivers-hv-vmbus-include-header-for-get_irq_regs.patch similarity index 78% rename from kernel/patches-4.19.x-rt/0018-Drivers-hv-vmbus-include-header-for-get_irq_regs.patch rename to kernel/patches-4.19.x-rt/0016-Drivers-hv-vmbus-include-header-for-get_irq_regs.patch index 9d5c1f0db..f48fb4e6f 100644 --- a/kernel/patches-4.19.x-rt/0018-Drivers-hv-vmbus-include-header-for-get_irq_regs.patch +++ b/kernel/patches-4.19.x-rt/0016-Drivers-hv-vmbus-include-header-for-get_irq_regs.patch @@ -1,6 +1,7 @@ +From 841d8b9e20d17d7907421dc223346198287e81a1 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Wed, 29 Aug 2018 21:59:04 +0200 -Subject: [PATCH] Drivers: hv: vmbus: include header for get_irq_regs() +Subject: [PATCH 016/269] Drivers: hv: vmbus: include header for get_irq_regs() MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @@ -18,9 +19,11 @@ Reported-by: Bernhard Landauer Reported-by: Ralf Ramsauer Signed-off-by: Sebastian Andrzej Siewior --- - drivers/hv/hyperv_vmbus.h | 1 + + drivers/hv/hyperv_vmbus.h | 1 + 1 file changed, 1 insertion(+) +diff --git a/drivers/hv/hyperv_vmbus.h b/drivers/hv/hyperv_vmbus.h +index 87d3d7da78f8..1d2d8a4b837d 100644 --- a/drivers/hv/hyperv_vmbus.h +++ b/drivers/hv/hyperv_vmbus.h @@ -31,6 +31,7 @@ @@ -31,3 +34,6 @@ Signed-off-by: Sebastian Andrzej Siewior #include "hv_trace.h" +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0019-percpu-include-irqflags.h-for-raw_local_irq_save.patch b/kernel/patches-4.19.x-rt/0017-percpu-include-irqflags.h-for-raw_local_irq_save.patch similarity index 69% rename from kernel/patches-4.19.x-rt/0019-percpu-include-irqflags.h-for-raw_local_irq_save.patch rename to kernel/patches-4.19.x-rt/0017-percpu-include-irqflags.h-for-raw_local_irq_save.patch index 86a018707..319a92f3a 100644 --- a/kernel/patches-4.19.x-rt/0019-percpu-include-irqflags.h-for-raw_local_irq_save.patch +++ 
b/kernel/patches-4.19.x-rt/0017-percpu-include-irqflags.h-for-raw_local_irq_save.patch @@ -1,6 +1,7 @@ +From d77a9b0754acbc89c7884b3505afdbb49677b36a Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Thu, 11 Oct 2018 16:39:59 +0200 -Subject: [PATCH] percpu: include irqflags.h for raw_local_irq_save() +Subject: [PATCH 017/269] percpu: include irqflags.h for raw_local_irq_save() The header percpu.h header file is using raw_local_irq_save() but does not include irqflags.h for its definition. It compiles because the @@ -11,9 +12,11 @@ Include irqflags.h in percpu.h. Signed-off-by: Sebastian Andrzej Siewior --- - include/asm-generic/percpu.h | 1 + + include/asm-generic/percpu.h | 1 + 1 file changed, 1 insertion(+) +diff --git a/include/asm-generic/percpu.h b/include/asm-generic/percpu.h +index 1817a8415a5e..942d64c0476e 100644 --- a/include/asm-generic/percpu.h +++ b/include/asm-generic/percpu.h @@ -5,6 +5,7 @@ @@ -24,3 +27,6 @@ Signed-off-by: Sebastian Andrzej Siewior #ifdef CONFIG_SMP +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0020-efi-Allow-efi-runtime.patch b/kernel/patches-4.19.x-rt/0018-efi-Allow-efi-runtime.patch similarity index 65% rename from kernel/patches-4.19.x-rt/0020-efi-Allow-efi-runtime.patch rename to kernel/patches-4.19.x-rt/0018-efi-Allow-efi-runtime.patch index 5a3dfd3b4..c43135163 100644 --- a/kernel/patches-4.19.x-rt/0020-efi-Allow-efi-runtime.patch +++ b/kernel/patches-4.19.x-rt/0018-efi-Allow-efi-runtime.patch @@ -1,6 +1,7 @@ +From 10c47a6dadf91edee1d414002f91cc73bbe59c90 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Thu, 26 Jul 2018 15:06:10 +0200 -Subject: [PATCH] efi: Allow efi=runtime +Subject: [PATCH 018/269] efi: Allow efi=runtime In case the option "efi=noruntime" is default at built-time, the user could overwrite its sate by `efi=runtime' and allow it again. @@ -8,12 +9,14 @@ could overwrite its sate by `efi=runtime' and allow it again. 
Acked-by: Ard Biesheuvel Signed-off-by: Sebastian Andrzej Siewior --- - drivers/firmware/efi/efi.c | 3 +++ + drivers/firmware/efi/efi.c | 3 +++ 1 file changed, 3 insertions(+) +diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c +index 2a29dd9c986d..ab668e17fd05 100644 --- a/drivers/firmware/efi/efi.c +++ b/drivers/firmware/efi/efi.c -@@ -113,6 +113,9 @@ static int __init parse_efi_cmdline(char +@@ -113,6 +113,9 @@ static int __init parse_efi_cmdline(char *str) if (parse_option_str(str, "noruntime")) disable_runtime = true; @@ -23,3 +26,6 @@ Signed-off-by: Sebastian Andrzej Siewior return 0; } early_param("efi", parse_efi_cmdline); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0021-x86-efi-drop-task_lock-from-efi_switch_mm.patch b/kernel/patches-4.19.x-rt/0019-x86-efi-drop-task_lock-from-efi_switch_mm.patch similarity index 85% rename from kernel/patches-4.19.x-rt/0021-x86-efi-drop-task_lock-from-efi_switch_mm.patch rename to kernel/patches-4.19.x-rt/0019-x86-efi-drop-task_lock-from-efi_switch_mm.patch index eec4cea4c..b91f1fa0b 100644 --- a/kernel/patches-4.19.x-rt/0021-x86-efi-drop-task_lock-from-efi_switch_mm.patch +++ b/kernel/patches-4.19.x-rt/0019-x86-efi-drop-task_lock-from-efi_switch_mm.patch @@ -1,6 +1,7 @@ +From d1af306cedb5a02314565763b49992b10ce5d802 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Tue, 24 Jul 2018 14:48:55 +0200 -Subject: [PATCH] x86/efi: drop task_lock() from efi_switch_mm() +Subject: [PATCH 019/269] x86/efi: drop task_lock() from efi_switch_mm() efi_switch_mm() is a wrapper around switch_mm() which saves current's ->active_mm, sets the requests mm as ->active_mm and invokes @@ -18,9 +19,11 @@ Remove task_lock() and also update the comment to reflect it. 
Signed-off-by: Sebastian Andrzej Siewior --- - arch/x86/platform/efi/efi_64.c | 10 ++++------ + arch/x86/platform/efi/efi_64.c | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-) +diff --git a/arch/x86/platform/efi/efi_64.c b/arch/x86/platform/efi/efi_64.c +index ee5d08f25ce4..e8da7f492970 100644 --- a/arch/x86/platform/efi/efi_64.c +++ b/arch/x86/platform/efi/efi_64.c @@ -619,18 +619,16 @@ void __init efi_dump_pagetable(void) @@ -46,3 +49,6 @@ Signed-off-by: Sebastian Andrzej Siewior } #ifdef CONFIG_EFI_MIXED +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0022-arm64-KVM-compute_layout-before-altenates-are-applie.patch b/kernel/patches-4.19.x-rt/0020-arm64-KVM-compute_layout-before-altenates-are-applie.patch similarity index 64% rename from kernel/patches-4.19.x-rt/0022-arm64-KVM-compute_layout-before-altenates-are-applie.patch rename to kernel/patches-4.19.x-rt/0020-arm64-KVM-compute_layout-before-altenates-are-applie.patch index d3748a712..b28093325 100644 --- a/kernel/patches-4.19.x-rt/0022-arm64-KVM-compute_layout-before-altenates-are-applie.patch +++ b/kernel/patches-4.19.x-rt/0020-arm64-KVM-compute_layout-before-altenates-are-applie.patch @@ -1,6 +1,8 @@ +From 6d4ae829b2e8c46b1d730790bf2644e5a053cf14 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Thu, 26 Jul 2018 09:13:42 +0200 -Subject: [PATCH] arm64: KVM: compute_layout before altenates are applied +Subject: [PATCH 020/269] arm64: KVM: compute_layout before altenates are + applied compute_layout() is invoked as part of an alternative fixup under stop_machine() and needs a sleeping lock as part of get_random_long(). @@ -9,14 +11,16 @@ Invoke compute_layout() before the alternatives are applied. 
Signed-off-by: Sebastian Andrzej Siewior --- - arch/arm64/include/asm/alternative.h | 6 ++++++ - arch/arm64/kernel/alternative.c | 1 + - arch/arm64/kvm/va_layout.c | 7 +------ + arch/arm64/include/asm/alternative.h | 6 ++++++ + arch/arm64/kernel/alternative.c | 1 + + arch/arm64/kvm/va_layout.c | 7 +------ 3 files changed, 8 insertions(+), 6 deletions(-) +diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h +index 4b650ec1d7dd..f561ea0ac645 100644 --- a/arch/arm64/include/asm/alternative.h +++ b/arch/arm64/include/asm/alternative.h -@@ -35,6 +35,12 @@ void apply_alternatives_module(void *sta +@@ -35,6 +35,12 @@ void apply_alternatives_module(void *start, size_t length); static inline void apply_alternatives_module(void *start, size_t length) { } #endif @@ -29,9 +33,11 @@ Signed-off-by: Sebastian Andrzej Siewior #define ALTINSTR_ENTRY(feature,cb) \ " .word 661b - .\n" /* label */ \ " .if " __stringify(cb) " == 0\n" \ +diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c +index b5d603992d40..f92815d56d17 100644 --- a/arch/arm64/kernel/alternative.c +++ b/arch/arm64/kernel/alternative.c -@@ -224,6 +224,7 @@ static int __apply_alternatives_multi_st +@@ -224,6 +224,7 @@ static int __apply_alternatives_multi_stop(void *unused) void __init apply_alternatives_all(void) { /* better not try code patching on a live SMP system */ @@ -39,6 +45,8 @@ Signed-off-by: Sebastian Andrzej Siewior stop_machine(__apply_alternatives_multi_stop, NULL, cpu_online_mask); } +diff --git a/arch/arm64/kvm/va_layout.c b/arch/arm64/kvm/va_layout.c +index c712a7376bc1..792da0e125de 100644 --- a/arch/arm64/kvm/va_layout.c +++ b/arch/arm64/kvm/va_layout.c @@ -33,7 +33,7 @@ static u8 tag_lsb; @@ -50,7 +58,7 @@ Signed-off-by: Sebastian Andrzej Siewior { phys_addr_t idmap_addr = __pa_symbol(__hyp_idmap_text_start); u64 hyp_va_msb; -@@ -121,8 +121,6 @@ void __init kvm_update_va_mask(struct al +@@ -121,8 +121,6 @@ void __init 
kvm_update_va_mask(struct alt_instr *alt, BUG_ON(nr_inst != 5); @@ -59,7 +67,7 @@ Signed-off-by: Sebastian Andrzej Siewior for (i = 0; i < nr_inst; i++) { u32 rd, rn, insn, oinsn; -@@ -167,9 +165,6 @@ void kvm_patch_vector_branch(struct alt_ +@@ -167,9 +165,6 @@ void kvm_patch_vector_branch(struct alt_instr *alt, return; } @@ -69,3 +77,6 @@ Signed-off-by: Sebastian Andrzej Siewior /* * Compute HYP VA by using the same computation as kern_hyp_va() */ +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0023-of-allocate-free-phandle-cache-outside-of-the-devtre.patch b/kernel/patches-4.19.x-rt/0021-of-allocate-free-phandle-cache-outside-of-the-devtre.patch similarity index 88% rename from kernel/patches-4.19.x-rt/0023-of-allocate-free-phandle-cache-outside-of-the-devtre.patch rename to kernel/patches-4.19.x-rt/0021-of-allocate-free-phandle-cache-outside-of-the-devtre.patch index 8d14e1f77..25c69bdc1 100644 --- a/kernel/patches-4.19.x-rt/0023-of-allocate-free-phandle-cache-outside-of-the-devtre.patch +++ b/kernel/patches-4.19.x-rt/0021-of-allocate-free-phandle-cache-outside-of-the-devtre.patch @@ -1,6 +1,8 @@ +From 1ab1616de2aaaa7392ebb706a457af2fdcd2b82a Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Fri, 31 Aug 2018 14:16:30 +0200 -Subject: [PATCH] of: allocate / free phandle cache outside of the devtree_lock +Subject: [PATCH 021/269] of: allocate / free phandle cache outside of the + devtree_lock The phandle cache code allocates memory while holding devtree_lock which is a raw_spinlock_t. 
Memory allocation (and free()) is not possible on @@ -12,9 +14,11 @@ Cc: Frank Rowand Cc: devicetree@vger.kernel.org Signed-off-by: Sebastian Andrzej Siewior --- - drivers/of/base.c | 19 +++++++++++++------ + drivers/of/base.c | 19 +++++++++++++------ 1 file changed, 13 insertions(+), 6 deletions(-) +diff --git a/drivers/of/base.c b/drivers/of/base.c +index 3f21ea6a90dc..2c7cf83b200c 100644 --- a/drivers/of/base.c +++ b/drivers/of/base.c @@ -130,31 +130,34 @@ static u32 phandle_cache_mask; @@ -93,3 +97,6 @@ Signed-off-by: Sebastian Andrzej Siewior } void __init of_core_init(void) +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0024-mm-kasan-make-quarantine_lock-a-raw_spinlock_t.patch b/kernel/patches-4.19.x-rt/0022-mm-kasan-make-quarantine_lock-a-raw_spinlock_t.patch similarity index 84% rename from kernel/patches-4.19.x-rt/0024-mm-kasan-make-quarantine_lock-a-raw_spinlock_t.patch rename to kernel/patches-4.19.x-rt/0022-mm-kasan-make-quarantine_lock-a-raw_spinlock_t.patch index 6172cac6f..aa4be48c2 100644 --- a/kernel/patches-4.19.x-rt/0024-mm-kasan-make-quarantine_lock-a-raw_spinlock_t.patch +++ b/kernel/patches-4.19.x-rt/0022-mm-kasan-make-quarantine_lock-a-raw_spinlock_t.patch @@ -1,6 +1,7 @@ +From a61c877f81f1f0b850090df19e08d51cf9465955 Mon Sep 17 00:00:00 2001 From: Clark Williams Date: Tue, 18 Sep 2018 10:29:31 -0500 -Subject: [PATCH] mm/kasan: make quarantine_lock a raw_spinlock_t +Subject: [PATCH 022/269] mm/kasan: make quarantine_lock a raw_spinlock_t The static lock quarantine_lock is used in quarantine.c to protect the quarantine queue datastructures. It is taken inside quarantine queue @@ -17,9 +18,11 @@ the lock is held is limited. 
Signed-off-by: Clark Williams Signed-off-by: Sebastian Andrzej Siewior --- - mm/kasan/quarantine.c | 18 +++++++++--------- + mm/kasan/quarantine.c | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) +diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c +index 3a8ddf8baf7d..b209dbaefde8 100644 --- a/mm/kasan/quarantine.c +++ b/mm/kasan/quarantine.c @@ -103,7 +103,7 @@ static int quarantine_head; @@ -31,7 +34,7 @@ Signed-off-by: Sebastian Andrzej Siewior DEFINE_STATIC_SRCU(remove_cache_srcu); /* Maximum size of the global queue. */ -@@ -190,7 +190,7 @@ void quarantine_put(struct kasan_free_me +@@ -190,7 +190,7 @@ void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache) if (unlikely(q->bytes > QUARANTINE_PERCPU_SIZE)) { qlist_move_all(q, &temp); @@ -40,7 +43,7 @@ Signed-off-by: Sebastian Andrzej Siewior WRITE_ONCE(quarantine_size, quarantine_size + temp.bytes); qlist_move_all(&temp, &global_quarantine[quarantine_tail]); if (global_quarantine[quarantine_tail].bytes >= -@@ -203,7 +203,7 @@ void quarantine_put(struct kasan_free_me +@@ -203,7 +203,7 @@ void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache) if (new_tail != quarantine_head) quarantine_tail = new_tail; } @@ -67,7 +70,7 @@ Signed-off-by: Sebastian Andrzej Siewior qlist_free_all(&to_free, NULL); srcu_read_unlock(&remove_cache_srcu, srcu_idx); -@@ -310,17 +310,17 @@ void quarantine_remove_cache(struct kmem +@@ -310,17 +310,17 @@ void quarantine_remove_cache(struct kmem_cache *cache) */ on_each_cpu(per_cpu_remove_cache, cache, 1); @@ -89,3 +92,6 @@ Signed-off-by: Sebastian Andrzej Siewior qlist_free_all(&to_free, cache); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0025-EXP-rcu-Revert-expedited-GP-parallelization-cleverne.patch b/kernel/patches-4.19.x-rt/0023-EXP-rcu-Revert-expedited-GP-parallelization-cleverne.patch similarity index 72% rename from kernel/patches-4.19.x-rt/0025-EXP-rcu-Revert-expedited-GP-parallelization-cleverne.patch 
rename to kernel/patches-4.19.x-rt/0023-EXP-rcu-Revert-expedited-GP-parallelization-cleverne.patch index 579a4c586..95bd9530f 100644 --- a/kernel/patches-4.19.x-rt/0025-EXP-rcu-Revert-expedited-GP-parallelization-cleverne.patch +++ b/kernel/patches-4.19.x-rt/0023-EXP-rcu-Revert-expedited-GP-parallelization-cleverne.patch @@ -1,6 +1,8 @@ -From: Paul E. McKenney +From b710c9561c0a7ddf1c7fef8d3bd3bc6d9e140a4e Mon Sep 17 00:00:00 2001 +From: "Paul E. McKenney" Date: Mon, 29 Oct 2018 11:53:01 +0100 -Subject: [PATCH] EXP rcu: Revert expedited GP parallelization cleverness +Subject: [PATCH 023/269] EXP rcu: Revert expedited GP parallelization + cleverness (Commit 258ba8e089db23f760139266c232f01bad73f85c from linux-rcu) @@ -13,12 +15,14 @@ Suggested-by: Sebastian Andrzej Siewior Signed-off-by: Paul E. McKenney Signed-off-by: Sebastian Andrzej Siewior --- - kernel/rcu/tree_exp.h | 9 +-------- + kernel/rcu/tree_exp.h | 9 +-------- 1 file changed, 1 insertion(+), 8 deletions(-) +diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h +index 0b2c2ad69629..a0486414edb4 100644 --- a/kernel/rcu/tree_exp.h +++ b/kernel/rcu/tree_exp.h -@@ -472,7 +472,6 @@ static void sync_rcu_exp_select_node_cpu +@@ -472,7 +472,6 @@ static void sync_rcu_exp_select_node_cpus(struct work_struct *wp) static void sync_rcu_exp_select_cpus(struct rcu_state *rsp, smp_call_func_t func) { @@ -26,7 +30,7 @@ Signed-off-by: Sebastian Andrzej Siewior struct rcu_node *rnp; trace_rcu_exp_grace_period(rsp->name, rcu_exp_gp_seq_endval(rsp), TPS("reset")); -@@ -494,13 +493,7 @@ static void sync_rcu_exp_select_cpus(str +@@ -494,13 +493,7 @@ static void sync_rcu_exp_select_cpus(struct rcu_state *rsp, continue; } INIT_WORK(&rnp->rew.rew_work, sync_rcu_exp_select_node_cpus); @@ -41,3 +45,6 @@ Signed-off-by: Sebastian Andrzej Siewior rnp->exp_need_flush = true; } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0026-kmemleak-Turn-kmemleak_lock-to-raw-spinlock-on-RT.patch 
b/kernel/patches-4.19.x-rt/0024-kmemleak-Turn-kmemleak_lock-to-raw-spinlock-on-RT.patch similarity index 84% rename from kernel/patches-4.19.x-rt/0026-kmemleak-Turn-kmemleak_lock-to-raw-spinlock-on-RT.patch rename to kernel/patches-4.19.x-rt/0024-kmemleak-Turn-kmemleak_lock-to-raw-spinlock-on-RT.patch index 5f508352d..e6f6587ad 100644 --- a/kernel/patches-4.19.x-rt/0026-kmemleak-Turn-kmemleak_lock-to-raw-spinlock-on-RT.patch +++ b/kernel/patches-4.19.x-rt/0024-kmemleak-Turn-kmemleak_lock-to-raw-spinlock-on-RT.patch @@ -1,6 +1,10 @@ +From b32df881582f39cab5e57b894f554f8573170cf7 Mon Sep 17 00:00:00 2001 From: He Zhe Date: Wed, 19 Dec 2018 16:30:57 +0100 -Subject: [PATCH] kmemleak: Turn kmemleak_lock to raw spinlock on RT +Subject: [PATCH 024/269] kmemleak: Turn kmemleak_lock to raw spinlock on RT +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit kmemleak_lock, as a rwlock on RT, can possibly be held in atomic context and causes the follow BUG. @@ -70,9 +74,11 @@ Link: https://lkml.kernel.org/r/1542877459-144382-1-git-send-email-zhe.he@windri Link: https://lkml.kernel.org/r/20181218150744.GB20197@arrakis.emea.arm.com Signed-off-by: Sebastian Andrzej Siewior --- - mm/kmemleak.c | 20 ++++++++++---------- + mm/kmemleak.c | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) +diff --git a/mm/kmemleak.c b/mm/kmemleak.c +index 17dd883198ae..b68a3d0d075f 100644 --- a/mm/kmemleak.c +++ b/mm/kmemleak.c @@ -26,7 +26,7 @@ @@ -93,7 +99,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* allocation caches for kmemleak internal data */ static struct kmem_cache *object_cache; -@@ -491,9 +491,9 @@ static struct kmemleak_object *find_and_ +@@ -491,9 +491,9 @@ static struct kmemleak_object *find_and_get_object(unsigned long ptr, int alias) struct kmemleak_object *object; rcu_read_lock(); @@ -105,7 +111,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* check whether the object is still available */ if (object && 
!get_object(object)) -@@ -513,13 +513,13 @@ static struct kmemleak_object *find_and_ +@@ -513,13 +513,13 @@ static struct kmemleak_object *find_and_remove_object(unsigned long ptr, int ali unsigned long flags; struct kmemleak_object *object; @@ -121,7 +127,7 @@ Signed-off-by: Sebastian Andrzej Siewior return object; } -@@ -593,7 +593,7 @@ static struct kmemleak_object *create_ob +@@ -593,7 +593,7 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size, /* kernel backtrace */ object->trace_len = __save_stack_trace(object->trace); @@ -130,7 +136,7 @@ Signed-off-by: Sebastian Andrzej Siewior min_addr = min(min_addr, ptr); max_addr = max(max_addr, ptr + size); -@@ -624,7 +624,7 @@ static struct kmemleak_object *create_ob +@@ -624,7 +624,7 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size, list_add_tail_rcu(&object->object_list, &object_list); out: @@ -139,7 +145,7 @@ Signed-off-by: Sebastian Andrzej Siewior return object; } -@@ -1310,7 +1310,7 @@ static void scan_block(void *_start, voi +@@ -1310,7 +1310,7 @@ static void scan_block(void *_start, void *_end, unsigned long *end = _end - (BYTES_PER_POINTER - 1); unsigned long flags; @@ -148,7 +154,7 @@ Signed-off-by: Sebastian Andrzej Siewior for (ptr = start; ptr < end; ptr++) { struct kmemleak_object *object; unsigned long pointer; -@@ -1367,7 +1367,7 @@ static void scan_block(void *_start, voi +@@ -1367,7 +1367,7 @@ static void scan_block(void *_start, void *_end, spin_unlock(&object->lock); } } @@ -157,3 +163,6 @@ Signed-off-by: Sebastian Andrzej Siewior } /* +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0027-NFSv4-replace-seqcount_t-with-a-seqlock_t.patch b/kernel/patches-4.19.x-rt/0025-NFSv4-replace-seqcount_t-with-a-seqlock_t.patch similarity index 75% rename from kernel/patches-4.19.x-rt/0027-NFSv4-replace-seqcount_t-with-a-seqlock_t.patch rename to kernel/patches-4.19.x-rt/0025-NFSv4-replace-seqcount_t-with-a-seqlock_t.patch index 002906141..e47fb7ae2 
100644 --- a/kernel/patches-4.19.x-rt/0027-NFSv4-replace-seqcount_t-with-a-seqlock_t.patch +++ b/kernel/patches-4.19.x-rt/0025-NFSv4-replace-seqcount_t-with-a-seqlock_t.patch @@ -1,10 +1,7 @@ -Date: Fri, 28 Oct 2016 23:05:11 +0200 -From: Sebastian Andrzej Siewior -To: Trond Myklebust -Cc: Anna Schumaker , - linux-nfs@vger.kernel.org, linux-kernel@vger.kernel.org, - tglx@linutronix.de -Subject: NFSv4: replace seqcount_t with a seqlock_t +From 82889085f9639d9aed51313cf8fd8e8ca32b8e8b Mon Sep 17 00:00:00 2001 +From: Sebastian Andrzej Siewior +Date: Fri, 28 Oct 2016 23:05:11 +0200 +Subject: [PATCH 025/269] NFSv4: replace seqcount_t with a seqlock_t The raw_write_seqcount_begin() in nfs4_reclaim_open_state() bugs me because it maps to preempt_disable() in -RT which I can't have at this @@ -22,15 +19,17 @@ block readers). Reported-by: kernel test robot Signed-off-by: Sebastian Andrzej Siewior --- - fs/nfs/delegation.c | 4 ++-- - fs/nfs/nfs4_fs.h | 2 +- - fs/nfs/nfs4proc.c | 4 ++-- - fs/nfs/nfs4state.c | 22 ++++++++++++++++------ + fs/nfs/delegation.c | 4 ++-- + fs/nfs/nfs4_fs.h | 2 +- + fs/nfs/nfs4proc.c | 4 ++-- + fs/nfs/nfs4state.c | 22 ++++++++++++++++------ 4 files changed, 21 insertions(+), 11 deletions(-) +diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c +index 75fe92eaa681..e8d05393443f 100644 --- a/fs/nfs/delegation.c +++ b/fs/nfs/delegation.c -@@ -152,11 +152,11 @@ static int nfs_delegation_claim_opens(st +@@ -152,11 +152,11 @@ static int nfs_delegation_claim_opens(struct inode *inode, sp = state->owner; /* Block nfs4_proc_unlck */ mutex_lock(&sp->so_delegreturn_mutex); @@ -44,6 +43,8 @@ Signed-off-by: Sebastian Andrzej Siewior err = -EAGAIN; mutex_unlock(&sp->so_delegreturn_mutex); put_nfs_open_context(ctx); +diff --git a/fs/nfs/nfs4_fs.h b/fs/nfs/nfs4_fs.h +index 63287d911c08..2ae55eaa4a1e 100644 --- a/fs/nfs/nfs4_fs.h +++ b/fs/nfs/nfs4_fs.h @@ -114,7 +114,7 @@ struct nfs4_state_owner { @@ -55,9 +56,11 @@ Signed-off-by: Sebastian Andrzej Siewior struct 
mutex so_delegreturn_mutex; }; +diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c +index 580e37bc3fe2..9d010731f901 100644 --- a/fs/nfs/nfs4proc.c +++ b/fs/nfs/nfs4proc.c -@@ -2859,7 +2859,7 @@ static int _nfs4_open_and_get_state(stru +@@ -2863,7 +2863,7 @@ static int _nfs4_open_and_get_state(struct nfs4_opendata *opendata, unsigned int seq; int ret; @@ -66,7 +69,7 @@ Signed-off-by: Sebastian Andrzej Siewior ret = _nfs4_proc_open(opendata, ctx); if (ret != 0) -@@ -2900,7 +2900,7 @@ static int _nfs4_open_and_get_state(stru +@@ -2904,7 +2904,7 @@ static int _nfs4_open_and_get_state(struct nfs4_opendata *opendata, if (d_inode(dentry) == state->inode) { nfs_inode_attach_open_context(ctx); @@ -75,9 +78,11 @@ Signed-off-by: Sebastian Andrzej Siewior nfs4_schedule_stateid_recovery(server, state); } +diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c +index d2f645d34eb1..1698dd2ca20b 100644 --- a/fs/nfs/nfs4state.c +++ b/fs/nfs/nfs4state.c -@@ -511,7 +511,7 @@ nfs4_alloc_state_owner(struct nfs_server +@@ -511,7 +511,7 @@ nfs4_alloc_state_owner(struct nfs_server *server, nfs4_init_seqid_counter(&sp->so_seqid); atomic_set(&sp->so_count, 1); INIT_LIST_HEAD(&sp->so_lru); @@ -86,7 +91,7 @@ Signed-off-by: Sebastian Andrzej Siewior mutex_init(&sp->so_delegreturn_mutex); return sp; } -@@ -1564,8 +1564,12 @@ static int nfs4_reclaim_open_state(struc +@@ -1564,8 +1564,12 @@ static int nfs4_reclaim_open_state(struct nfs4_state_owner *sp, const struct nfs * recovering after a network partition or a reboot from a * server that doesn't support a grace period. 
*/ @@ -100,7 +105,7 @@ Signed-off-by: Sebastian Andrzej Siewior restart: list_for_each_entry(state, &sp->so_states, open_states) { if (!test_and_clear_bit(ops->state_flag_bit, &state->flags)) -@@ -1652,14 +1656,20 @@ static int nfs4_reclaim_open_state(struc +@@ -1652,14 +1656,20 @@ static int nfs4_reclaim_open_state(struct nfs4_state_owner *sp, const struct nfs spin_lock(&sp->so_lock); goto restart; } @@ -125,3 +130,6 @@ Signed-off-by: Sebastian Andrzej Siewior return status; } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0028-kernel-sched-Provide-a-pointer-to-the-valid-CPU-mask.patch b/kernel/patches-4.19.x-rt/0026-kernel-sched-Provide-a-pointer-to-the-valid-CPU-mask.patch similarity index 74% rename from kernel/patches-4.19.x-rt/0028-kernel-sched-Provide-a-pointer-to-the-valid-CPU-mask.patch rename to kernel/patches-4.19.x-rt/0026-kernel-sched-Provide-a-pointer-to-the-valid-CPU-mask.patch index 18460630d..6ccf0b411 100644 --- a/kernel/patches-4.19.x-rt/0028-kernel-sched-Provide-a-pointer-to-the-valid-CPU-mask.patch +++ b/kernel/patches-4.19.x-rt/0026-kernel-sched-Provide-a-pointer-to-the-valid-CPU-mask.patch @@ -1,6 +1,8 @@ +From 3ace22e122817ae9b6da2d0c49209a834f96375c Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Tue, 4 Apr 2017 12:50:16 +0200 -Subject: [PATCH] kernel: sched: Provide a pointer to the valid CPU mask +Subject: [PATCH 026/269] kernel: sched: Provide a pointer to the valid CPU + mask MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @@ -56,34 +58,36 @@ Cc: Ingo Molnar Cc: Rafael J. 
Wysocki Signed-off-by: Sebastian Andrzej Siewior --- - arch/ia64/kernel/mca.c | 2 - - arch/mips/include/asm/switch_to.h | 4 +- - arch/mips/kernel/mips-mt-fpaff.c | 2 - - arch/mips/kernel/traps.c | 6 ++-- - arch/powerpc/platforms/cell/spufs/sched.c | 2 - - arch/x86/kernel/cpu/intel_rdt_pseudo_lock.c | 2 - - drivers/infiniband/hw/hfi1/affinity.c | 6 ++-- - drivers/infiniband/hw/hfi1/sdma.c | 3 -- - drivers/infiniband/hw/qib/qib_file_ops.c | 7 ++-- - fs/proc/array.c | 4 +- - include/linux/sched.h | 5 ++- - init/init_task.c | 3 +- - kernel/cgroup/cpuset.c | 2 - - kernel/fork.c | 2 + - kernel/sched/core.c | 40 ++++++++++++++-------------- - kernel/sched/cpudeadline.c | 4 +- - kernel/sched/cpupri.c | 4 +- - kernel/sched/deadline.c | 6 ++-- - kernel/sched/fair.c | 32 +++++++++++----------- - kernel/sched/rt.c | 4 +- - kernel/trace/trace_hwlat.c | 2 - - lib/smp_processor_id.c | 2 - - samples/trace_events/trace-events-sample.c | 2 - + arch/ia64/kernel/mca.c | 2 +- + arch/mips/include/asm/switch_to.h | 4 +-- + arch/mips/kernel/mips-mt-fpaff.c | 2 +- + arch/mips/kernel/traps.c | 6 ++-- + arch/powerpc/platforms/cell/spufs/sched.c | 2 +- + arch/x86/kernel/cpu/intel_rdt_pseudo_lock.c | 2 +- + drivers/infiniband/hw/hfi1/affinity.c | 6 ++-- + drivers/infiniband/hw/hfi1/sdma.c | 3 +- + drivers/infiniband/hw/qib/qib_file_ops.c | 7 ++-- + fs/proc/array.c | 4 +-- + include/linux/sched.h | 5 +-- + init/init_task.c | 3 +- + kernel/cgroup/cpuset.c | 2 +- + kernel/fork.c | 2 ++ + kernel/sched/core.c | 40 ++++++++++----------- + kernel/sched/cpudeadline.c | 4 +-- + kernel/sched/cpupri.c | 4 +-- + kernel/sched/deadline.c | 6 ++-- + kernel/sched/fair.c | 32 ++++++++--------- + kernel/sched/rt.c | 4 +-- + kernel/trace/trace_hwlat.c | 2 +- + lib/smp_processor_id.c | 2 +- + samples/trace_events/trace-events-sample.c | 2 +- 23 files changed, 74 insertions(+), 72 deletions(-) +diff --git a/arch/ia64/kernel/mca.c b/arch/ia64/kernel/mca.c +index 6115464d5f03..f09e34c8409c 100644 --- 
a/arch/ia64/kernel/mca.c +++ b/arch/ia64/kernel/mca.c -@@ -1824,7 +1824,7 @@ format_mca_init_stack(void *mca_data, un +@@ -1824,7 +1824,7 @@ format_mca_init_stack(void *mca_data, unsigned long offset, ti->cpu = cpu; p->stack = ti; p->state = TASK_UNINTERRUPTIBLE; @@ -92,6 +96,8 @@ Signed-off-by: Sebastian Andrzej Siewior INIT_LIST_HEAD(&p->tasks); p->parent = p->real_parent = p->group_leader = p; INIT_LIST_HEAD(&p->children); +diff --git a/arch/mips/include/asm/switch_to.h b/arch/mips/include/asm/switch_to.h +index e610473d61b8..1428b4febbc9 100644 --- a/arch/mips/include/asm/switch_to.h +++ b/arch/mips/include/asm/switch_to.h @@ -42,7 +42,7 @@ extern struct task_struct *ll_task; @@ -112,9 +118,11 @@ Signed-off-by: Sebastian Andrzej Siewior } \ next->thread.emulated_fp = 0; \ } while(0) +diff --git a/arch/mips/kernel/mips-mt-fpaff.c b/arch/mips/kernel/mips-mt-fpaff.c +index a7c0f97e4b0d..1a08428eedcf 100644 --- a/arch/mips/kernel/mips-mt-fpaff.c +++ b/arch/mips/kernel/mips-mt-fpaff.c -@@ -177,7 +177,7 @@ asmlinkage long mipsmt_sys_sched_getaffi +@@ -177,7 +177,7 @@ asmlinkage long mipsmt_sys_sched_getaffinity(pid_t pid, unsigned int len, if (retval) goto out_unlock; @@ -123,6 +131,8 @@ Signed-off-by: Sebastian Andrzej Siewior cpumask_and(&mask, &allowed, cpu_active_mask); out_unlock: +diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c +index 9dab0ed1b227..3623cf32f5f4 100644 --- a/arch/mips/kernel/traps.c +++ b/arch/mips/kernel/traps.c @@ -1174,12 +1174,12 @@ static void mt_ase_fp_affinity(void) @@ -141,9 +151,11 @@ Signed-off-by: Sebastian Andrzej Siewior &mt_fpu_cpumask); set_cpus_allowed_ptr(current, &tmask); set_thread_flag(TIF_FPUBOUND); +diff --git a/arch/powerpc/platforms/cell/spufs/sched.c b/arch/powerpc/platforms/cell/spufs/sched.c +index c9ef3c532169..cb10249b1125 100644 --- a/arch/powerpc/platforms/cell/spufs/sched.c +++ b/arch/powerpc/platforms/cell/spufs/sched.c -@@ -141,7 +141,7 @@ void __spu_update_sched_info(struct spu_ +@@ -141,7 
+141,7 @@ void __spu_update_sched_info(struct spu_context *ctx) * runqueue. The context will be rescheduled on the proper node * if it is timesliced or preempted. */ @@ -152,9 +164,11 @@ Signed-off-by: Sebastian Andrzej Siewior /* Save the current cpu id for spu interrupt routing. */ ctx->last_ran = raw_smp_processor_id(); +diff --git a/arch/x86/kernel/cpu/intel_rdt_pseudo_lock.c b/arch/x86/kernel/cpu/intel_rdt_pseudo_lock.c +index f8c260d522ca..befeec6414b0 100644 --- a/arch/x86/kernel/cpu/intel_rdt_pseudo_lock.c +++ b/arch/x86/kernel/cpu/intel_rdt_pseudo_lock.c -@@ -1435,7 +1435,7 @@ static int pseudo_lock_dev_mmap(struct f +@@ -1435,7 +1435,7 @@ static int pseudo_lock_dev_mmap(struct file *filp, struct vm_area_struct *vma) * may be scheduled elsewhere and invalidate entries in the * pseudo-locked region. */ @@ -163,6 +177,8 @@ Signed-off-by: Sebastian Andrzej Siewior mutex_unlock(&rdtgroup_mutex); return -EINVAL; } +diff --git a/drivers/infiniband/hw/hfi1/affinity.c b/drivers/infiniband/hw/hfi1/affinity.c +index bedd5fba33b0..3f4259f11a35 100644 --- a/drivers/infiniband/hw/hfi1/affinity.c +++ b/drivers/infiniband/hw/hfi1/affinity.c @@ -1037,7 +1037,7 @@ int hfi1_get_proc_affinity(int node) @@ -192,9 +208,11 @@ Signed-off-by: Sebastian Andrzej Siewior hfi1_cdbg(PROC, "PID %u %s affinity set to CPU set(s) %*pbl", current->pid, current->comm, cpumask_pr_args(proc_mask)); +diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c +index 88e326d6cc49..b0d01ace6611 100644 --- a/drivers/infiniband/hw/hfi1/sdma.c +++ b/drivers/infiniband/hw/hfi1/sdma.c -@@ -855,14 +855,13 @@ struct sdma_engine *sdma_select_user_eng +@@ -855,14 +855,13 @@ struct sdma_engine *sdma_select_user_engine(struct hfi1_devdata *dd, { struct sdma_rht_node *rht_node; struct sdma_engine *sde = NULL; @@ -210,9 +228,11 @@ Signed-off-by: Sebastian Andrzej Siewior goto out; cpu_id = smp_processor_id(); +diff --git a/drivers/infiniband/hw/qib/qib_file_ops.c 
b/drivers/infiniband/hw/qib/qib_file_ops.c +index 98e1ce14fa2a..5d3828625017 100644 --- a/drivers/infiniband/hw/qib/qib_file_ops.c +++ b/drivers/infiniband/hw/qib/qib_file_ops.c -@@ -1142,7 +1142,7 @@ static __poll_t qib_poll(struct file *fp +@@ -1142,7 +1142,7 @@ static __poll_t qib_poll(struct file *fp, struct poll_table_struct *pt) static void assign_ctxt_affinity(struct file *fp, struct qib_devdata *dd) { struct qib_filedata *fd = fp->private_data; @@ -221,7 +241,7 @@ Signed-off-by: Sebastian Andrzej Siewior const struct cpumask *local_mask = cpumask_of_pcibus(dd->pcidev->bus); int local_cpu; -@@ -1623,9 +1623,8 @@ static int qib_assign_ctxt(struct file * +@@ -1623,9 +1623,8 @@ static int qib_assign_ctxt(struct file *fp, const struct qib_user_info *uinfo) ret = find_free_ctxt(i_minor - 1, fp, uinfo); else { int unit; @@ -233,9 +253,11 @@ Signed-off-by: Sebastian Andrzej Siewior if (weight == 1 && !test_bit(cpu, qib_cpulist)) if (!find_hca(cpu, &unit) && unit >= 0) +diff --git a/fs/proc/array.c b/fs/proc/array.c +index 0ceb3b6b37e7..ccfef702c771 100644 --- a/fs/proc/array.c +++ b/fs/proc/array.c -@@ -381,9 +381,9 @@ static inline void task_context_switch_c +@@ -381,9 +381,9 @@ static inline void task_context_switch_counts(struct seq_file *m, static void task_cpus_allowed(struct seq_file *m, struct task_struct *task) { seq_printf(m, "Cpus_allowed:\t%*pb\n", @@ -247,6 +269,8 @@ Signed-off-by: Sebastian Andrzej Siewior } static inline void task_core_dumping(struct seq_file *m, struct mm_struct *mm) +diff --git a/include/linux/sched.h b/include/linux/sched.h +index 5dc024e28397..fdb8ba398ea8 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -660,7 +660,8 @@ struct task_struct { @@ -268,6 +292,8 @@ Signed-off-by: Sebastian Andrzej Siewior #define PF_MCE_EARLY 0x08000000 /* Early kill for mce process policy */ #define PF_MUTEX_TESTER 0x20000000 /* Thread belongs to the rt mutex tester */ #define PF_FREEZER_SKIP 0x40000000 /* Freezer should not count it 
as freezable */ +diff --git a/init/init_task.c b/init/init_task.c +index 5aebe3be4d7c..0b49b9cf5571 100644 --- a/init/init_task.c +++ b/init/init_task.c @@ -71,7 +71,8 @@ struct task_struct init_task @@ -280,9 +306,11 @@ Signed-off-by: Sebastian Andrzej Siewior .nr_cpus_allowed= NR_CPUS, .mm = NULL, .active_mm = &init_mm, +diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c +index 266f10cb7222..ef085d84a940 100644 --- a/kernel/cgroup/cpuset.c +++ b/kernel/cgroup/cpuset.c -@@ -2090,7 +2090,7 @@ static void cpuset_fork(struct task_stru +@@ -2090,7 +2090,7 @@ static void cpuset_fork(struct task_struct *task) if (task_css_is_root(task, cpuset_cgrp_id)) return; @@ -291,9 +319,11 @@ Signed-off-by: Sebastian Andrzej Siewior task->mems_allowed = current->mems_allowed; } +diff --git a/kernel/fork.c b/kernel/fork.c +index 64ef113e387e..bfe9c5c3eb88 100644 --- a/kernel/fork.c +++ b/kernel/fork.c -@@ -845,6 +845,8 @@ static struct task_struct *dup_task_stru +@@ -845,6 +845,8 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node) #ifdef CONFIG_STACKPROTECTOR tsk->stack_canary = get_random_canary(); #endif @@ -302,9 +332,11 @@ Signed-off-by: Sebastian Andrzej Siewior /* * One for us, one for whoever does the "release_task()" (usually +diff --git a/kernel/sched/core.c b/kernel/sched/core.c +index d7f409866cdf..80badc70c258 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c -@@ -876,7 +876,7 @@ static inline bool is_per_cpu_kthread(st +@@ -878,7 +878,7 @@ static inline bool is_per_cpu_kthread(struct task_struct *p) */ static inline bool is_cpu_allowed(struct task_struct *p, int cpu) { @@ -313,7 +345,7 @@ Signed-off-by: Sebastian Andrzej Siewior return false; if (is_per_cpu_kthread(p)) -@@ -971,7 +971,7 @@ static int migration_cpu_stop(void *data +@@ -973,7 +973,7 @@ static int migration_cpu_stop(void *data) local_irq_disable(); /* * We need to explicitly wake pending tasks before running @@ -322,7 +354,7 @@ Signed-off-by: Sebastian 
Andrzej Siewior * during wakeups, see set_cpus_allowed_ptr()'s TASK_WAKING test. */ sched_ttwu_pending(); -@@ -1002,7 +1002,7 @@ static int migration_cpu_stop(void *data +@@ -1004,7 +1004,7 @@ static int migration_cpu_stop(void *data) */ void set_cpus_allowed_common(struct task_struct *p, const struct cpumask *new_mask) { @@ -331,7 +363,7 @@ Signed-off-by: Sebastian Andrzej Siewior p->nr_cpus_allowed = cpumask_weight(new_mask); } -@@ -1072,7 +1072,7 @@ static int __set_cpus_allowed_ptr(struct +@@ -1074,7 +1074,7 @@ static int __set_cpus_allowed_ptr(struct task_struct *p, goto out; } @@ -340,7 +372,7 @@ Signed-off-by: Sebastian Andrzej Siewior goto out; if (!cpumask_intersects(new_mask, cpu_valid_mask)) { -@@ -1235,10 +1235,10 @@ static int migrate_swap_stop(void *data) +@@ -1237,10 +1237,10 @@ static int migrate_swap_stop(void *data) if (task_cpu(arg->src_task) != arg->src_cpu) goto unlock; @@ -353,7 +385,7 @@ Signed-off-by: Sebastian Andrzej Siewior goto unlock; __migrate_swap_task(arg->src_task, arg->dst_cpu); -@@ -1280,10 +1280,10 @@ int migrate_swap(struct task_struct *cur +@@ -1282,10 +1282,10 @@ int migrate_swap(struct task_struct *cur, struct task_struct *p, if (!cpu_active(arg.src_cpu) || !cpu_active(arg.dst_cpu)) goto out; @@ -366,7 +398,7 @@ Signed-off-by: Sebastian Andrzej Siewior goto out; trace_sched_swap_numa(cur, arg.src_cpu, p, arg.dst_cpu); -@@ -1428,7 +1428,7 @@ void kick_process(struct task_struct *p) +@@ -1430,7 +1430,7 @@ void kick_process(struct task_struct *p) EXPORT_SYMBOL_GPL(kick_process); /* @@ -375,7 +407,7 @@ Signed-off-by: Sebastian Andrzej Siewior * * A few notes on cpu_active vs cpu_online: * -@@ -1468,14 +1468,14 @@ static int select_fallback_rq(int cpu, s +@@ -1470,14 +1470,14 @@ static int select_fallback_rq(int cpu, struct task_struct *p) for_each_cpu(dest_cpu, nodemask) { if (!cpu_active(dest_cpu)) continue; @@ -392,7 +424,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (!is_cpu_allowed(p, dest_cpu)) continue; -@@ -1519,7 
+1519,7 @@ static int select_fallback_rq(int cpu, s +@@ -1521,7 +1521,7 @@ static int select_fallback_rq(int cpu, struct task_struct *p) } /* @@ -401,7 +433,7 @@ Signed-off-by: Sebastian Andrzej Siewior */ static inline int select_task_rq(struct task_struct *p, int cpu, int sd_flags, int wake_flags) -@@ -1529,11 +1529,11 @@ int select_task_rq(struct task_struct *p +@@ -1531,11 +1531,11 @@ int select_task_rq(struct task_struct *p, int cpu, int sd_flags, int wake_flags) if (p->nr_cpus_allowed > 1) cpu = p->sched_class->select_task_rq(p, cpu, sd_flags, wake_flags); else @@ -415,7 +447,7 @@ Signed-off-by: Sebastian Andrzej Siewior * CPU. * * Since this is common to all placement strategies, this lives here. -@@ -2400,7 +2400,7 @@ void wake_up_new_task(struct task_struct +@@ -2402,7 +2402,7 @@ void wake_up_new_task(struct task_struct *p) #ifdef CONFIG_SMP /* * Fork balancing, do it here and not earlier because: @@ -424,7 +456,7 @@ Signed-off-by: Sebastian Andrzej Siewior * - any previously selected CPU might disappear through hotplug * * Use __set_task_cpu() to avoid calling sched_class::migrate_task_rq, -@@ -4273,7 +4273,7 @@ static int __sched_setscheduler(struct t +@@ -4275,7 +4275,7 @@ static int __sched_setscheduler(struct task_struct *p, * the entire root_domain to become SCHED_DEADLINE. We * will also fail if there's no bandwidth available. */ @@ -433,7 +465,7 @@ Signed-off-by: Sebastian Andrzej Siewior rq->rd->dl_bw.bw == 0) { task_rq_unlock(rq, p, &rf); return -EPERM; -@@ -4872,7 +4872,7 @@ long sched_getaffinity(pid_t pid, struct +@@ -4874,7 +4874,7 @@ long sched_getaffinity(pid_t pid, struct cpumask *mask) goto out_unlock; raw_spin_lock_irqsave(&p->pi_lock, flags); @@ -442,7 +474,7 @@ Signed-off-by: Sebastian Andrzej Siewior raw_spin_unlock_irqrestore(&p->pi_lock, flags); out_unlock: -@@ -5452,7 +5452,7 @@ int task_can_attach(struct task_struct * +@@ -5454,7 +5454,7 @@ int task_can_attach(struct task_struct *p, * allowed nodes is unnecessary. 
Thus, cpusets are not * applicable for such threads. This prevents checking for * success of set_cpus_allowed_ptr() on all attached tasks @@ -451,7 +483,7 @@ Signed-off-by: Sebastian Andrzej Siewior */ if (p->flags & PF_NO_SETAFFINITY) { ret = -EINVAL; -@@ -5479,7 +5479,7 @@ int migrate_task_to(struct task_struct * +@@ -5481,7 +5481,7 @@ int migrate_task_to(struct task_struct *p, int target_cpu) if (curr_cpu == target_cpu) return 0; @@ -460,7 +492,7 @@ Signed-off-by: Sebastian Andrzej Siewior return -EINVAL; /* TODO: This is not properly updating schedstats */ -@@ -5617,7 +5617,7 @@ static void migrate_tasks(struct rq *dea +@@ -5619,7 +5619,7 @@ static void migrate_tasks(struct rq *dead_rq, struct rq_flags *rf) put_prev_task(rq, next); /* @@ -469,9 +501,11 @@ Signed-off-by: Sebastian Andrzej Siewior * both pi_lock and rq->lock, such that holding either * stabilizes the mask. * +diff --git a/kernel/sched/cpudeadline.c b/kernel/sched/cpudeadline.c +index 50316455ea66..d57fb2f8ae67 100644 --- a/kernel/sched/cpudeadline.c +++ b/kernel/sched/cpudeadline.c -@@ -124,14 +124,14 @@ int cpudl_find(struct cpudl *cp, struct +@@ -124,14 +124,14 @@ int cpudl_find(struct cpudl *cp, struct task_struct *p, const struct sched_dl_entity *dl_se = &p->dl; if (later_mask && @@ -488,9 +522,11 @@ Signed-off-by: Sebastian Andrzej Siewior dl_time_before(dl_se->deadline, cp->elements[0].dl)) { if (later_mask) cpumask_set_cpu(best_cpu, later_mask); +diff --git a/kernel/sched/cpupri.c b/kernel/sched/cpupri.c +index daaadf939ccb..f7d2c10b4c92 100644 --- a/kernel/sched/cpupri.c +++ b/kernel/sched/cpupri.c -@@ -98,11 +98,11 @@ int cpupri_find(struct cpupri *cp, struc +@@ -98,11 +98,11 @@ int cpupri_find(struct cpupri *cp, struct task_struct *p, if (skip) continue; @@ -504,9 +540,11 @@ Signed-off-by: Sebastian Andrzej Siewior /* * We have to ensure that we have at least one bit +diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c +index 91e4202b0634..f927b1f45474 100644 --- 
a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c -@@ -539,7 +539,7 @@ static struct rq *dl_task_offline_migrat +@@ -539,7 +539,7 @@ static struct rq *dl_task_offline_migration(struct rq *rq, struct task_struct *p * If we cannot preempt any rq, fall back to pick any * online CPU: */ @@ -515,7 +553,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (cpu >= nr_cpu_ids) { /* * Failed to find any suitable CPU. -@@ -1824,7 +1824,7 @@ static void set_curr_task_dl(struct rq * +@@ -1824,7 +1824,7 @@ static void set_curr_task_dl(struct rq *rq) static int pick_dl_task(struct rq *rq, struct task_struct *p, int cpu) { if (!task_running(rq, p) && @@ -524,7 +562,7 @@ Signed-off-by: Sebastian Andrzej Siewior return 1; return 0; } -@@ -1974,7 +1974,7 @@ static struct rq *find_lock_later_rq(str +@@ -1974,7 +1974,7 @@ static struct rq *find_lock_later_rq(struct task_struct *task, struct rq *rq) /* Retry if something changed. */ if (double_lock_balance(rq, later_rq)) { if (unlikely(task_rq(task) != rq || @@ -533,9 +571,11 @@ Signed-off-by: Sebastian Andrzej Siewior task_running(rq, task) || !dl_task(task) || !task_on_rq_queued(task))) { +diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c +index 53acadf72cd9..c17d63b06026 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c -@@ -1630,7 +1630,7 @@ static void task_numa_compare(struct tas +@@ -1630,7 +1630,7 @@ static void task_numa_compare(struct task_numa_env *env, * be incurred if the tasks were swapped. 
*/ /* Skip this swap candidate if cannot move to the source cpu */ @@ -544,7 +584,7 @@ Signed-off-by: Sebastian Andrzej Siewior goto unlock; /* -@@ -1727,7 +1727,7 @@ static void task_numa_find_cpu(struct ta +@@ -1727,7 +1727,7 @@ static void task_numa_find_cpu(struct task_numa_env *env, for_each_cpu(cpu, cpumask_of_node(env->dst_nid)) { /* Skip this CPU if the source task cannot migrate */ @@ -553,7 +593,7 @@ Signed-off-by: Sebastian Andrzej Siewior continue; env->dst_cpu = cpu; -@@ -5712,7 +5712,7 @@ find_idlest_group(struct sched_domain *s +@@ -5737,7 +5737,7 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, /* Skip over this group if it has no CPUs allowed */ if (!cpumask_intersects(sched_group_span(group), @@ -562,7 +602,7 @@ Signed-off-by: Sebastian Andrzej Siewior continue; local_group = cpumask_test_cpu(this_cpu, -@@ -5844,7 +5844,7 @@ find_idlest_group_cpu(struct sched_group +@@ -5869,7 +5869,7 @@ find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this return cpumask_first(sched_group_span(group)); /* Traverse only the allowed CPUs */ @@ -571,7 +611,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (available_idle_cpu(i)) { struct rq *rq = cpu_rq(i); struct cpuidle_state *idle = idle_get_state(rq); -@@ -5884,7 +5884,7 @@ static inline int find_idlest_cpu(struct +@@ -5909,7 +5909,7 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p { int new_cpu = cpu; @@ -580,7 +620,7 @@ Signed-off-by: Sebastian Andrzej Siewior return prev_cpu; /* -@@ -6001,7 +6001,7 @@ static int select_idle_core(struct task_ +@@ -6026,7 +6026,7 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int if (!test_idle_cores(target, false)) return -1; @@ -589,7 +629,7 @@ Signed-off-by: Sebastian Andrzej Siewior for_each_cpu_wrap(core, cpus, target) { bool idle = true; -@@ -6035,7 +6035,7 @@ static int select_idle_smt(struct task_s +@@ -6060,7 +6060,7 @@ static int select_idle_smt(struct 
task_struct *p, struct sched_domain *sd, int t return -1; for_each_cpu(cpu, cpu_smt_mask(target)) { @@ -598,7 +638,7 @@ Signed-off-by: Sebastian Andrzej Siewior continue; if (available_idle_cpu(cpu)) return cpu; -@@ -6098,7 +6098,7 @@ static int select_idle_cpu(struct task_s +@@ -6123,7 +6123,7 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t for_each_cpu_wrap(cpu, sched_domain_span(sd), target) { if (!--nr) return -1; @@ -607,7 +647,7 @@ Signed-off-by: Sebastian Andrzej Siewior continue; if (available_idle_cpu(cpu)) break; -@@ -6135,7 +6135,7 @@ static int select_idle_sibling(struct ta +@@ -6160,7 +6160,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target) recent_used_cpu != target && cpus_share_cache(recent_used_cpu, target) && available_idle_cpu(recent_used_cpu) && @@ -616,7 +656,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * Replace recent_used_cpu with prev as it is a potential * candidate for the next wake: -@@ -6353,7 +6353,7 @@ select_task_rq_fair(struct task_struct * +@@ -6378,7 +6378,7 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f if (sd_flag & SD_BALANCE_WAKE) { record_wakee(p); want_affine = !wake_wide(p) && !wake_cap(p, cpu, prev_cpu) @@ -625,7 +665,7 @@ Signed-off-by: Sebastian Andrzej Siewior } rcu_read_lock(); -@@ -7092,14 +7092,14 @@ int can_migrate_task(struct task_struct +@@ -7117,14 +7117,14 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env) /* * We do not migrate tasks that are: * 1) throttled_lb_pair, or @@ -642,7 +682,7 @@ Signed-off-by: Sebastian Andrzej Siewior int cpu; schedstat_inc(p->se.statistics.nr_failed_migrations_affine); -@@ -7119,7 +7119,7 @@ int can_migrate_task(struct task_struct +@@ -7144,7 +7144,7 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env) /* Prevent to re-select dst_cpu via env's CPUs: */ for_each_cpu_and(cpu, env->dst_grpmask, env->cpus) { @@ -651,7 +691,7 @@ Signed-off-by: Sebastian 
Andrzej Siewior env->flags |= LBF_DST_PINNED; env->new_dst_cpu = cpu; break; -@@ -7716,7 +7716,7 @@ check_cpu_capacity(struct rq *rq, struct +@@ -7741,7 +7741,7 @@ check_cpu_capacity(struct rq *rq, struct sched_domain *sd) /* * Group imbalance indicates (and tries to solve) the problem where balancing @@ -660,7 +700,7 @@ Signed-off-by: Sebastian Andrzej Siewior * * Imagine a situation of two groups of 4 CPUs each and 4 tasks each with a * cpumask covering 1 CPU of the first group and 3 CPUs of the second group. -@@ -8331,7 +8331,7 @@ static struct sched_group *find_busiest_ +@@ -8356,7 +8356,7 @@ static struct sched_group *find_busiest_group(struct lb_env *env) /* * If the busiest group is imbalanced the below checks don't * work because they assume all things are equal, which typically @@ -669,7 +709,7 @@ Signed-off-by: Sebastian Andrzej Siewior */ if (busiest->group_type == group_imbalanced) goto force_balance; -@@ -8727,7 +8727,7 @@ static int load_balance(int this_cpu, st +@@ -8752,7 +8752,7 @@ static int load_balance(int this_cpu, struct rq *this_rq, * if the curr task on busiest CPU can't be * moved to this_cpu: */ @@ -678,9 +718,11 @@ Signed-off-by: Sebastian Andrzej Siewior raw_spin_unlock_irqrestore(&busiest->lock, flags); env.flags |= LBF_ALL_PINNED; +diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c +index 2e2955a8cf8f..4857ca145119 100644 --- a/kernel/sched/rt.c +++ b/kernel/sched/rt.c -@@ -1611,7 +1611,7 @@ static void put_prev_task_rt(struct rq * +@@ -1611,7 +1611,7 @@ static void put_prev_task_rt(struct rq *rq, struct task_struct *p) static int pick_rt_task(struct rq *rq, struct task_struct *p, int cpu) { if (!task_running(rq, p) && @@ -689,7 +731,7 @@ Signed-off-by: Sebastian Andrzej Siewior return 1; return 0; -@@ -1748,7 +1748,7 @@ static struct rq *find_lock_lowest_rq(st +@@ -1748,7 +1748,7 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq) * Also make sure that it wasn't scheduled on its rq. 
*/ if (unlikely(task_rq(task) != rq || @@ -698,6 +740,8 @@ Signed-off-by: Sebastian Andrzej Siewior task_running(rq, task) || !rt_task(task) || !task_on_rq_queued(task))) { +diff --git a/kernel/trace/trace_hwlat.c b/kernel/trace/trace_hwlat.c +index 1e6db9cbe4dc..fa95139445b2 100644 --- a/kernel/trace/trace_hwlat.c +++ b/kernel/trace/trace_hwlat.c @@ -277,7 +277,7 @@ static void move_to_next_cpu(void) @@ -709,9 +753,11 @@ Signed-off-by: Sebastian Andrzej Siewior goto disable; get_online_cpus(); +diff --git a/lib/smp_processor_id.c b/lib/smp_processor_id.c +index 85925aaa4fff..fb35c45b9421 100644 --- a/lib/smp_processor_id.c +++ b/lib/smp_processor_id.c -@@ -22,7 +22,7 @@ notrace static unsigned int check_preemp +@@ -22,7 +22,7 @@ notrace static unsigned int check_preemption_disabled(const char *what1, * Kernel threads bound to a single CPU can safely use * smp_processor_id(): */ @@ -720,6 +766,8 @@ Signed-off-by: Sebastian Andrzej Siewior goto out; /* +diff --git a/samples/trace_events/trace-events-sample.c b/samples/trace_events/trace-events-sample.c +index 5522692100ba..8b4be8e1802a 100644 --- a/samples/trace_events/trace-events-sample.c +++ b/samples/trace_events/trace-events-sample.c @@ -33,7 +33,7 @@ static void simple_thread_func(int cnt) @@ -731,3 +779,6 @@ Signed-off-by: Sebastian Andrzej Siewior trace_foo_with_template_simple("HELLO", cnt); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0029-add_migrate_disable.patch b/kernel/patches-4.19.x-rt/0027-kernel-sched-core-add-migrate_disable.patch similarity index 81% rename from kernel/patches-4.19.x-rt/0029-add_migrate_disable.patch rename to kernel/patches-4.19.x-rt/0027-kernel-sched-core-add-migrate_disable.patch index 144e7c0a8..229f43d13 100644 --- a/kernel/patches-4.19.x-rt/0029-add_migrate_disable.patch +++ b/kernel/patches-4.19.x-rt/0027-kernel-sched-core-add-migrate_disable.patch @@ -1,15 +1,18 @@ +From 2fc8b5c9ca4ff2df7913d6e6d75a98bdece9b264 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej 
Siewior Date: Sat, 27 May 2017 19:02:06 +0200 -Subject: kernel/sched/core: add migrate_disable() +Subject: [PATCH 027/269] kernel/sched/core: add migrate_disable() --- - include/linux/preempt.h | 23 ++++++++ - include/linux/sched.h | 7 ++ - include/linux/smp.h | 3 + - kernel/sched/core.c | 130 +++++++++++++++++++++++++++++++++++++++++++++++- - kernel/sched/debug.c | 4 + + include/linux/preempt.h | 23 +++++++ + include/linux/sched.h | 7 +++ + include/linux/smp.h | 3 + + kernel/sched/core.c | 130 +++++++++++++++++++++++++++++++++++++++- + kernel/sched/debug.c | 4 ++ 5 files changed, 165 insertions(+), 2 deletions(-) +diff --git a/include/linux/preempt.h b/include/linux/preempt.h +index c01813c3fbe9..3196d0e76719 100644 --- a/include/linux/preempt.h +++ b/include/linux/preempt.h @@ -185,6 +185,22 @@ do { \ @@ -49,6 +52,8 @@ Subject: kernel/sched/core: add migrate_disable() #endif /* CONFIG_PREEMPT_COUNT */ #ifdef MODULE +diff --git a/include/linux/sched.h b/include/linux/sched.h +index fdb8ba398ea8..df39ad5916e7 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -662,6 +662,13 @@ struct task_struct { @@ -65,6 +70,8 @@ Subject: kernel/sched/core: add migrate_disable() #ifdef CONFIG_PREEMPT_RCU int rcu_read_lock_nesting; +diff --git a/include/linux/smp.h b/include/linux/smp.h +index 9fb239e12b82..5801e516ba63 100644 --- a/include/linux/smp.h +++ b/include/linux/smp.h @@ -202,6 +202,9 @@ static inline int get_boot_cpu_id(void) @@ -77,9 +84,11 @@ Subject: kernel/sched/core: add migrate_disable() /* * Callback to arch code if there's nosmp or maxcpus=0 on the * boot command line: +diff --git a/kernel/sched/core.c b/kernel/sched/core.c +index 80badc70c258..3df110e8c6f9 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c -@@ -1006,7 +1006,15 @@ void set_cpus_allowed_common(struct task +@@ -1008,7 +1008,15 @@ void set_cpus_allowed_common(struct task_struct *p, const struct cpumask *new_ma p->nr_cpus_allowed = cpumask_weight(new_mask); } @@ -96,7 +105,7 
@@ Subject: kernel/sched/core: add migrate_disable() { struct rq *rq = task_rq(p); bool queued, running; -@@ -1035,6 +1043,20 @@ void do_set_cpus_allowed(struct task_str +@@ -1037,6 +1045,20 @@ void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask) set_curr_task(rq, p); } @@ -117,7 +126,7 @@ Subject: kernel/sched/core: add migrate_disable() /* * Change a given task's CPU affinity. Migrate the thread to a * proper CPU and schedule it away if the CPU it's executing on -@@ -1093,9 +1115,16 @@ static int __set_cpus_allowed_ptr(struct +@@ -1095,9 +1117,16 @@ static int __set_cpus_allowed_ptr(struct task_struct *p, } /* Can the task run on the task's current CPU? If so, we're done */ @@ -135,7 +144,7 @@ Subject: kernel/sched/core: add migrate_disable() dest_cpu = cpumask_any_and(cpu_valid_mask, new_mask); if (task_running(rq, p) || p->state == TASK_WAKING) { struct migration_arg arg = { p, dest_cpu }; -@@ -7058,3 +7087,100 @@ const u32 sched_prio_to_wmult[40] = { +@@ -7060,3 +7089,100 @@ const u32 sched_prio_to_wmult[40] = { }; #undef CREATE_TRACE_POINTS @@ -236,9 +245,11 @@ Subject: kernel/sched/core: add migrate_disable() +} +EXPORT_SYMBOL(migrate_enable); +#endif +diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c +index 141ea9ff210e..34c27afae009 100644 --- a/kernel/sched/debug.c +++ b/kernel/sched/debug.c -@@ -978,6 +978,10 @@ void proc_sched_show_task(struct task_st +@@ -982,6 +982,10 @@ void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns, P(dl.runtime); P(dl.deadline); } @@ -249,3 +260,6 @@ Subject: kernel/sched/core: add migrate_disable() #undef PN_SCHEDSTAT #undef PN #undef __PN +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0030-sched-migrate_disable-Add-export_symbol_gpl-for-__mi.patch b/kernel/patches-4.19.x-rt/0028-sched-migrate_disable-Add-export_symbol_gpl-for-__mi.patch similarity index 74% rename from kernel/patches-4.19.x-rt/0030-sched-migrate_disable-Add-export_symbol_gpl-for-__mi.patch rename 
to kernel/patches-4.19.x-rt/0028-sched-migrate_disable-Add-export_symbol_gpl-for-__mi.patch index 274c8a0af..87f95476d 100644 --- a/kernel/patches-4.19.x-rt/0030-sched-migrate_disable-Add-export_symbol_gpl-for-__mi.patch +++ b/kernel/patches-4.19.x-rt/0028-sched-migrate_disable-Add-export_symbol_gpl-for-__mi.patch @@ -1,6 +1,7 @@ +From 0af010b771c642c17c33fbc991e183c04427af59 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Tue, 9 Oct 2018 17:34:50 +0200 -Subject: [PATCH] sched/migrate_disable: Add export_symbol_gpl for +Subject: [PATCH 028/269] sched/migrate_disable: Add export_symbol_gpl for __migrate_disabled Jonathan reported that lttng/modules can't use __migrate_disabled(). @@ -16,12 +17,14 @@ EXPORT_SYMBOL_GPL to allow the module/LTTNG usage. Reported-by: Jonathan Rajott Signed-off-by: Sebastian Andrzej Siewior --- - kernel/sched/core.c | 1 + + kernel/sched/core.c | 1 + 1 file changed, 1 insertion(+) +diff --git a/kernel/sched/core.c b/kernel/sched/core.c +index 3df110e8c6f9..9c4a9f0a627b 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c -@@ -1011,6 +1011,7 @@ int __migrate_disabled(struct task_struc +@@ -1013,6 +1013,7 @@ int __migrate_disabled(struct task_struct *p) { return p->migrate_disable; } @@ -29,3 +32,6 @@ Signed-off-by: Sebastian Andrzej Siewior #endif static void __do_set_cpus_allowed_tail(struct task_struct *p, +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0031-at91_dont_enable_disable_clock.patch b/kernel/patches-4.19.x-rt/0029-arm-at91-do-not-disable-enable-clocks-in-a-row.patch similarity index 74% rename from kernel/patches-4.19.x-rt/0031-at91_dont_enable_disable_clock.patch rename to kernel/patches-4.19.x-rt/0029-arm-at91-do-not-disable-enable-clocks-in-a-row.patch index 417e2e792..ddbd99720 100644 --- a/kernel/patches-4.19.x-rt/0031-at91_dont_enable_disable_clock.patch +++ b/kernel/patches-4.19.x-rt/0029-arm-at91-do-not-disable-enable-clocks-in-a-row.patch @@ -1,6 +1,7 @@ +From 
245bd7bd92ce193e01ef35fbdaae505d5eefd28b Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior -Date: Wed, 09 Mar 2016 10:51:06 +0100 -Subject: arm: at91: do not disable/enable clocks in a row +Date: Wed, 9 Mar 2016 10:51:06 +0100 +Subject: [PATCH 029/269] arm: at91: do not disable/enable clocks in a row Currently the driver will disable the clock and enable it one line later if it is switching from periodic mode into one shot. @@ -8,9 +9,11 @@ This can be avoided and causes a needless warning on -RT. Signed-off-by: Sebastian Andrzej Siewior --- - drivers/clocksource/tcb_clksrc.c | 33 +++++++++++++++++++++++++++++---- + drivers/clocksource/tcb_clksrc.c | 33 ++++++++++++++++++++++++++++---- 1 file changed, 29 insertions(+), 4 deletions(-) +diff --git a/drivers/clocksource/tcb_clksrc.c b/drivers/clocksource/tcb_clksrc.c +index 43f4d5c4d6fa..de6baf564dfe 100644 --- a/drivers/clocksource/tcb_clksrc.c +++ b/drivers/clocksource/tcb_clksrc.c @@ -126,6 +126,7 @@ static struct clocksource clksrc = { @@ -21,7 +24,7 @@ Signed-off-by: Sebastian Andrzej Siewior void __iomem *regs; }; -@@ -143,6 +144,24 @@ static struct tc_clkevt_device *to_tc_cl +@@ -143,6 +144,24 @@ static struct tc_clkevt_device *to_tc_clkevt(struct clock_event_device *clkevt) */ static u32 timer_clock; @@ -46,7 +49,7 @@ Signed-off-by: Sebastian Andrzej Siewior static int tc_shutdown(struct clock_event_device *d) { struct tc_clkevt_device *tcd = to_tc_clkevt(d); -@@ -150,8 +169,14 @@ static int tc_shutdown(struct clock_even +@@ -150,8 +169,14 @@ static int tc_shutdown(struct clock_event_device *d) writel(0xff, regs + ATMEL_TC_REG(2, IDR)); writel(ATMEL_TC_CLKDIS, regs + ATMEL_TC_REG(2, CCR)); @@ -62,7 +65,7 @@ Signed-off-by: Sebastian Andrzej Siewior return 0; } -@@ -164,7 +189,7 @@ static int tc_set_oneshot(struct clock_e +@@ -164,7 +189,7 @@ static int tc_set_oneshot(struct clock_event_device *d) if (clockevent_state_oneshot(d) || clockevent_state_periodic(d)) tc_shutdown(d); @@ -71,7 +74,7 @@ 
Signed-off-by: Sebastian Andrzej Siewior /* slow clock, count up to RC, then irq and stop */ writel(timer_clock | ATMEL_TC_CPCSTOP | ATMEL_TC_WAVE | -@@ -186,7 +211,7 @@ static int tc_set_periodic(struct clock_ +@@ -186,7 +211,7 @@ static int tc_set_periodic(struct clock_event_device *d) /* By not making the gentime core emulate periodic mode on top * of oneshot, we get lower overhead and improved accuracy. */ @@ -80,7 +83,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* slow clock, count up to RC, then irq and restart */ writel(timer_clock | ATMEL_TC_WAVE | ATMEL_TC_WAVESEL_UP_AUTO, -@@ -220,7 +245,7 @@ static struct tc_clkevt_device clkevt = +@@ -220,7 +245,7 @@ static struct tc_clkevt_device clkevt = { /* Should be lower than at91rm9200's system timer */ .rating = 125, .set_next_event = tc_next_event, @@ -89,3 +92,6 @@ Signed-off-by: Sebastian Andrzej Siewior .set_state_periodic = tc_set_periodic, .set_state_oneshot = tc_set_oneshot, }, +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0032-clocksource-tclib-allow-higher-clockrates.patch b/kernel/patches-4.19.x-rt/0030-clocksource-TCLIB-Allow-higher-clock-rates-for-clock.patch similarity index 79% rename from kernel/patches-4.19.x-rt/0032-clocksource-tclib-allow-higher-clockrates.patch rename to kernel/patches-4.19.x-rt/0030-clocksource-TCLIB-Allow-higher-clock-rates-for-clock.patch index 8fba8081a..94b5d25d1 100644 --- a/kernel/patches-4.19.x-rt/0032-clocksource-tclib-allow-higher-clockrates.patch +++ b/kernel/patches-4.19.x-rt/0030-clocksource-TCLIB-Allow-higher-clock-rates-for-clock.patch @@ -1,6 +1,11 @@ +From 7b123775c97399cd5ca5394392bf72c5d73f2808 Mon Sep 17 00:00:00 2001 From: Benedikt Spranger Date: Mon, 8 Mar 2010 18:57:04 +0100 -Subject: clocksource: TCLIB: Allow higher clock rates for clock events +Subject: [PATCH 030/269] clocksource: TCLIB: Allow higher clock rates for + clock events +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit As default the TCLIB 
uses the 32KiHz base clock rate for clock events. Add a compile time selection to allow higher clock resulution. @@ -10,10 +15,12 @@ Add a compile time selection to allow higher clock resulution. Signed-off-by: Benedikt Spranger Signed-off-by: Thomas Gleixner --- - drivers/clocksource/tcb_clksrc.c | 36 +++++++++++++++++++++--------------- - drivers/misc/Kconfig | 12 ++++++++++-- + drivers/clocksource/tcb_clksrc.c | 36 +++++++++++++++++++------------- + drivers/misc/Kconfig | 12 +++++++++-- 2 files changed, 31 insertions(+), 17 deletions(-) +diff --git a/drivers/clocksource/tcb_clksrc.c b/drivers/clocksource/tcb_clksrc.c +index de6baf564dfe..ba15242a6066 100644 --- a/drivers/clocksource/tcb_clksrc.c +++ b/drivers/clocksource/tcb_clksrc.c @@ -25,8 +25,7 @@ @@ -34,7 +41,7 @@ Signed-off-by: Thomas Gleixner void __iomem *regs; }; -@@ -135,13 +135,6 @@ static struct tc_clkevt_device *to_tc_cl +@@ -135,13 +135,6 @@ static struct tc_clkevt_device *to_tc_clkevt(struct clock_event_device *clkevt) return container_of(clkevt, struct tc_clkevt_device, clkevt); } @@ -48,7 +55,7 @@ Signed-off-by: Thomas Gleixner static u32 timer_clock; static void tc_clk_disable(struct clock_event_device *d) -@@ -191,7 +184,7 @@ static int tc_set_oneshot(struct clock_e +@@ -191,7 +184,7 @@ static int tc_set_oneshot(struct clock_event_device *d) tc_clk_enable(d); @@ -57,7 +64,7 @@ Signed-off-by: Thomas Gleixner writel(timer_clock | ATMEL_TC_CPCSTOP | ATMEL_TC_WAVE | ATMEL_TC_WAVESEL_UP_AUTO, regs + ATMEL_TC_REG(2, CMR)); writel(ATMEL_TC_CPCS, regs + ATMEL_TC_REG(2, IER)); -@@ -213,10 +206,10 @@ static int tc_set_periodic(struct clock_ +@@ -213,10 +206,10 @@ static int tc_set_periodic(struct clock_event_device *d) */ tc_clk_enable(d); @@ -70,7 +77,7 @@ Signed-off-by: Thomas Gleixner /* Enable clock and interrupts on RC compare */ writel(ATMEL_TC_CPCS, regs + ATMEL_TC_REG(2, IER)); -@@ -243,7 +236,11 @@ static struct tc_clkevt_device clkevt = +@@ -243,7 +236,11 @@ static struct tc_clkevt_device 
clkevt = { .features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT, /* Should be lower than at91rm9200's system timer */ @@ -82,7 +89,7 @@ Signed-off-by: Thomas Gleixner .set_next_event = tc_next_event, .set_state_shutdown = tc_shutdown_clk_off, .set_state_periodic = tc_set_periodic, -@@ -265,8 +262,9 @@ static irqreturn_t ch2_irq(int irq, void +@@ -265,8 +262,9 @@ static irqreturn_t ch2_irq(int irq, void *handle) return IRQ_NONE; } @@ -93,7 +100,7 @@ Signed-off-by: Thomas Gleixner int ret; struct clk *t2_clk = tc->clk[2]; int irq = tc->irq[2]; -@@ -287,7 +285,11 @@ static int __init setup_clkevents(struct +@@ -287,7 +285,11 @@ static int __init setup_clkevents(struct atmel_tc *tc, int clk32k_divisor_idx) clkevt.regs = tc->regs; clkevt.clk = t2_clk; @@ -106,7 +113,7 @@ Signed-off-by: Thomas Gleixner clkevt.clkevt.cpumask = cpumask_of(0); -@@ -298,7 +300,7 @@ static int __init setup_clkevents(struct +@@ -298,7 +300,7 @@ static int __init setup_clkevents(struct atmel_tc *tc, int clk32k_divisor_idx) return ret; } @@ -127,6 +134,8 @@ Signed-off-by: Thomas Gleixner if (ret) goto err_unregister_clksrc; +diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig +index 3726eacdf65d..0900dec7ec04 100644 --- a/drivers/misc/Kconfig +++ b/drivers/misc/Kconfig @@ -69,8 +69,7 @@ config ATMEL_TCB_CLKSRC @@ -155,3 +164,6 @@ Signed-off-by: Thomas Gleixner config DUMMY_IRQ tristate "Dummy IRQ handler" default n +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0033-timekeeping-split-jiffies-lock.patch b/kernel/patches-4.19.x-rt/0031-timekeeping-Split-jiffies-seqlock.patch similarity index 73% rename from kernel/patches-4.19.x-rt/0033-timekeeping-split-jiffies-lock.patch rename to kernel/patches-4.19.x-rt/0031-timekeeping-Split-jiffies-seqlock.patch index f03f9d045..13e1912aa 100644 --- a/kernel/patches-4.19.x-rt/0033-timekeeping-split-jiffies-lock.patch +++ b/kernel/patches-4.19.x-rt/0031-timekeeping-Split-jiffies-seqlock.patch @@ -1,22 +1,25 @@ -Subject: timekeeping: Split 
jiffies seqlock +From 5a0bfb35b3b826135a39a8e8744e9926b5be7607 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Thu, 14 Feb 2013 22:36:59 +0100 +Subject: [PATCH 031/269] timekeeping: Split jiffies seqlock Replace jiffies_lock seqlock with a simple seqcounter and a rawlock so it can be taken in atomic context on RT. Signed-off-by: Thomas Gleixner --- - kernel/time/jiffies.c | 7 ++++--- - kernel/time/tick-common.c | 10 ++++++---- - kernel/time/tick-sched.c | 19 ++++++++++++------- - kernel/time/timekeeping.c | 6 ++++-- - kernel/time/timekeeping.h | 3 ++- + kernel/time/jiffies.c | 7 ++++--- + kernel/time/tick-common.c | 10 ++++++---- + kernel/time/tick-sched.c | 19 ++++++++++++------- + kernel/time/timekeeping.c | 6 ++++-- + kernel/time/timekeeping.h | 3 ++- 5 files changed, 28 insertions(+), 17 deletions(-) +diff --git a/kernel/time/jiffies.c b/kernel/time/jiffies.c +index 497719127bf9..62acb8914c9e 100644 --- a/kernel/time/jiffies.c +++ b/kernel/time/jiffies.c -@@ -74,7 +74,8 @@ static struct clocksource clocksource_ji +@@ -74,7 +74,8 @@ static struct clocksource clocksource_jiffies = { .max_cycles = 10, }; @@ -38,6 +41,8 @@ Signed-off-by: Thomas Gleixner return ret; } EXPORT_SYMBOL(get_jiffies_64); +diff --git a/kernel/time/tick-common.c b/kernel/time/tick-common.c +index a02e0f6b287c..32f5101f07ce 100644 --- a/kernel/time/tick-common.c +++ b/kernel/time/tick-common.c @@ -79,13 +79,15 @@ int tick_is_oneshot_available(void) @@ -58,7 +63,7 @@ Signed-off-by: Thomas Gleixner update_wall_time(); } -@@ -157,9 +159,9 @@ void tick_setup_periodic(struct clock_ev +@@ -157,9 +159,9 @@ void tick_setup_periodic(struct clock_event_device *dev, int broadcast) ktime_t next; do { @@ -70,9 +75,11 @@ Signed-off-by: Thomas Gleixner clockevents_switch_state(dev, CLOCK_EVT_STATE_ONESHOT); +diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c +index 5b33e2f5c0ed..54fd344ef973 100644 --- a/kernel/time/tick-sched.c +++ b/kernel/time/tick-sched.c -@@ -67,7 +67,8 @@ static 
void tick_do_update_jiffies64(kti +@@ -67,7 +67,8 @@ static void tick_do_update_jiffies64(ktime_t now) return; /* Reevaluate with jiffies_lock held */ @@ -82,7 +89,7 @@ Signed-off-by: Thomas Gleixner delta = ktime_sub(now, last_jiffies_update); if (delta >= tick_period) { -@@ -90,10 +91,12 @@ static void tick_do_update_jiffies64(kti +@@ -90,10 +91,12 @@ static void tick_do_update_jiffies64(ktime_t now) /* Keep the tick_next_period variable up to date */ tick_next_period = ktime_add(last_jiffies_update, tick_period); } else { @@ -97,7 +104,7 @@ Signed-off-by: Thomas Gleixner update_wall_time(); } -@@ -104,12 +107,14 @@ static ktime_t tick_init_jiffy_update(vo +@@ -104,12 +107,14 @@ static ktime_t tick_init_jiffy_update(void) { ktime_t period; @@ -114,7 +121,7 @@ Signed-off-by: Thomas Gleixner return period; } -@@ -652,10 +657,10 @@ static ktime_t tick_nohz_next_event(stru +@@ -652,10 +657,10 @@ static ktime_t tick_nohz_next_event(struct tick_sched *ts, int cpu) /* Read jiffies and the time when jiffies were updated last */ do { @@ -127,6 +134,8 @@ Signed-off-by: Thomas Gleixner ts->last_jiffies = basejiff; ts->timer_expires_base = basemono; +diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c +index 7846ce24ecc0..68cf97548cba 100644 --- a/kernel/time/timekeeping.c +++ b/kernel/time/timekeeping.c @@ -2417,8 +2417,10 @@ EXPORT_SYMBOL(hardpps); @@ -142,9 +151,11 @@ Signed-off-by: Thomas Gleixner + raw_spin_unlock(&jiffies_lock); update_wall_time(); } +diff --git a/kernel/time/timekeeping.h b/kernel/time/timekeeping.h +index 141ab3ab0354..099737f6f10c 100644 --- a/kernel/time/timekeeping.h +++ b/kernel/time/timekeeping.h -@@ -18,7 +18,8 @@ extern void timekeeping_resume(void); +@@ -25,7 +25,8 @@ static inline void sched_clock_resume(void) { } extern void do_timer(unsigned long ticks); extern void update_wall_time(void); @@ -154,3 +165,6 @@ Signed-off-by: Thomas Gleixner #define CS_NAME_LEN 32 +-- +2.20.1 + diff --git 
a/kernel/patches-4.19.x-rt/0034-signal-revert-ptrace-preempt-magic.patch b/kernel/patches-4.19.x-rt/0032-signal-Revert-ptrace-preempt-magic.patch similarity index 68% rename from kernel/patches-4.19.x-rt/0034-signal-revert-ptrace-preempt-magic.patch rename to kernel/patches-4.19.x-rt/0032-signal-Revert-ptrace-preempt-magic.patch index ea0adee2f..d8aee4bd7 100644 --- a/kernel/patches-4.19.x-rt/0034-signal-revert-ptrace-preempt-magic.patch +++ b/kernel/patches-4.19.x-rt/0032-signal-Revert-ptrace-preempt-magic.patch @@ -1,6 +1,7 @@ -Subject: signal: Revert ptrace preempt magic +From a9a18a8c88bd90bdac5f33690be17244dc22bd22 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Wed, 21 Sep 2011 19:57:12 +0200 +Subject: [PATCH 032/269] signal: Revert ptrace preempt magic Upstream commit '53da1d9456fe7f8 fix ptrace slowness' is nothing more than a bandaid around the ptrace design trainwreck. It's not a @@ -8,12 +9,14 @@ correctness issue, it's merily a cosmetic bandaid. Signed-off-by: Thomas Gleixner --- - kernel/signal.c | 8 -------- + kernel/signal.c | 8 -------- 1 file changed, 8 deletions(-) +diff --git a/kernel/signal.c b/kernel/signal.c +index 9102d60fc5c6..f29def2be652 100644 --- a/kernel/signal.c +++ b/kernel/signal.c -@@ -2094,15 +2094,7 @@ static void ptrace_stop(int exit_code, i +@@ -2094,15 +2094,7 @@ static void ptrace_stop(int exit_code, int why, int clear_code, siginfo_t *info) if (gstop_done && ptrace_reparented(current)) do_notify_parent_cldstop(current, false, why); @@ -29,3 +32,6 @@ Signed-off-by: Thomas Gleixner freezable_schedule(); } else { /* +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0035-net-sched-dev_deactivate_many-use-msleep-1-instead-o.patch b/kernel/patches-4.19.x-rt/0033-net-sched-Use-msleep-instead-of-yield.patch similarity index 89% rename from kernel/patches-4.19.x-rt/0035-net-sched-dev_deactivate_many-use-msleep-1-instead-o.patch rename to kernel/patches-4.19.x-rt/0033-net-sched-Use-msleep-instead-of-yield.patch index 
aa00107b5..f13739e2d 100644 --- a/kernel/patches-4.19.x-rt/0035-net-sched-dev_deactivate_many-use-msleep-1-instead-o.patch +++ b/kernel/patches-4.19.x-rt/0033-net-sched-Use-msleep-instead-of-yield.patch @@ -1,6 +1,7 @@ +From b1e277ed2b65bf647c2a6dc2d103ffe5aa2e4fa7 Mon Sep 17 00:00:00 2001 From: Marc Kleine-Budde Date: Wed, 5 Mar 2014 00:49:47 +0100 -Subject: net: sched: Use msleep() instead of yield() +Subject: [PATCH 033/269] net: sched: Use msleep() instead of yield() On PREEMPT_RT enabled systems the interrupt handler run as threads at prio 50 (by default). If a high priority userspace process tries to shut down a busy @@ -41,12 +42,14 @@ solution. Signed-off-by: Marc Kleine-Budde Signed-off-by: Sebastian Andrzej Siewior --- - net/sched/sch_generic.c | 2 +- + net/sched/sch_generic.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) +diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c +index 77b289da7763..31b9c2b415b4 100644 --- a/net/sched/sch_generic.c +++ b/net/sched/sch_generic.c -@@ -1184,7 +1184,7 @@ void dev_deactivate_many(struct list_hea +@@ -1183,7 +1183,7 @@ void dev_deactivate_many(struct list_head *head) /* Wait for outstanding qdisc_run calls. 
*/ list_for_each_entry(dev, head, close_list) { while (some_qdisc_is_busy(dev)) @@ -55,3 +58,6 @@ Signed-off-by: Sebastian Andrzej Siewior /* The new qdisc is assigned at this point so we can safely * unwind stale skb lists and qdisc statistics */ +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0036-dm-rq-remove-BUG_ON-irqs_disabled-check.patch b/kernel/patches-4.19.x-rt/0034-dm-rq-remove-BUG_ON-irqs_disabled-check.patch similarity index 72% rename from kernel/patches-4.19.x-rt/0036-dm-rq-remove-BUG_ON-irqs_disabled-check.patch rename to kernel/patches-4.19.x-rt/0034-dm-rq-remove-BUG_ON-irqs_disabled-check.patch index c962ab317..b1e260240 100644 --- a/kernel/patches-4.19.x-rt/0036-dm-rq-remove-BUG_ON-irqs_disabled-check.patch +++ b/kernel/patches-4.19.x-rt/0034-dm-rq-remove-BUG_ON-irqs_disabled-check.patch @@ -1,6 +1,7 @@ +From 812137beb49a5dea2e269ea9739d0ed291e27375 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Tue, 27 Mar 2018 16:24:15 +0200 -Subject: [PATCH] dm rq: remove BUG_ON(!irqs_disabled) check +Subject: [PATCH 034/269] dm rq: remove BUG_ON(!irqs_disabled) check In commit 052189a2ec95 ("dm: remove superfluous irq disablement in dm_request_fn") the spin_lock_irq() was replaced with spin_lock() + a @@ -15,12 +16,14 @@ Cc: Keith Busch Cc: Mike Snitzer Signed-off-by: Sebastian Andrzej Siewior --- - drivers/md/dm-rq.c | 1 - + drivers/md/dm-rq.c | 1 - 1 file changed, 1 deletion(-) +diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c +index 6e547b8dd298..29736c7e5f1f 100644 --- a/drivers/md/dm-rq.c +++ b/drivers/md/dm-rq.c -@@ -688,7 +688,6 @@ static void dm_old_request_fn(struct req +@@ -688,7 +688,6 @@ static void dm_old_request_fn(struct request_queue *q) /* Establish tio->ti before queuing work (map_tio_request) */ tio->ti = ti; kthread_queue_work(&md->kworker, &tio->work); @@ -28,3 +31,6 @@ Signed-off-by: Sebastian Andrzej Siewior } } +-- +2.20.1 + diff --git 
a/kernel/patches-4.19.x-rt/0037-usb-do-not-disable-interrupts-in-giveback.patch b/kernel/patches-4.19.x-rt/0035-usb-do-no-disable-interrupts-in-giveback.patch similarity index 74% rename from kernel/patches-4.19.x-rt/0037-usb-do-not-disable-interrupts-in-giveback.patch rename to kernel/patches-4.19.x-rt/0035-usb-do-no-disable-interrupts-in-giveback.patch index bf2be250a..471514e68 100644 --- a/kernel/patches-4.19.x-rt/0037-usb-do-not-disable-interrupts-in-giveback.patch +++ b/kernel/patches-4.19.x-rt/0035-usb-do-no-disable-interrupts-in-giveback.patch @@ -1,6 +1,7 @@ +From e958966734633c26363abc8920eca9c38e5cd7ce Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Fri, 8 Nov 2013 17:34:54 +0100 -Subject: usb: do no disable interrupts in giveback +Subject: [PATCH 035/269] usb: do no disable interrupts in giveback Since commit 94dfd7ed ("USB: HCD: support giveback of URB in tasklet context") the USB code disables interrupts before invoking the complete @@ -14,12 +15,14 @@ Longeterm we should force all HCDs to complete in the same context. Signed-off-by: Sebastian Andrzej Siewior --- - drivers/usb/core/hcd.c | 3 --- + drivers/usb/core/hcd.c | 3 --- 1 file changed, 3 deletions(-) +diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c +index 1c21955fe7c0..7863dec34f0b 100644 --- a/drivers/usb/core/hcd.c +++ b/drivers/usb/core/hcd.c -@@ -1738,7 +1738,6 @@ static void __usb_hcd_giveback_urb(struc +@@ -1738,7 +1738,6 @@ static void __usb_hcd_giveback_urb(struct urb *urb) struct usb_hcd *hcd = bus_to_hcd(urb->dev->bus); struct usb_anchor *anchor = urb->anchor; int status = urb->unlinked; @@ -27,7 +30,7 @@ Signed-off-by: Sebastian Andrzej Siewior urb->hcpriv = NULL; if (unlikely((urb->transfer_flags & URB_SHORT_NOT_OK) && -@@ -1766,9 +1765,7 @@ static void __usb_hcd_giveback_urb(struc +@@ -1766,9 +1765,7 @@ static void __usb_hcd_giveback_urb(struct urb *urb) * and no one may trigger the above deadlock situation when * running complete() in tasklet. 
*/ @@ -37,3 +40,6 @@ Signed-off-by: Sebastian Andrzej Siewior usb_anchor_resume_wakeups(anchor); atomic_dec(&urb->use_count); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0038-rt-preempt-base-config.patch b/kernel/patches-4.19.x-rt/0036-rt-Provide-PREEMPT_RT_BASE-config-switch.patch similarity index 82% rename from kernel/patches-4.19.x-rt/0038-rt-preempt-base-config.patch rename to kernel/patches-4.19.x-rt/0036-rt-Provide-PREEMPT_RT_BASE-config-switch.patch index dd7d86d8b..0ce597586 100644 --- a/kernel/patches-4.19.x-rt/0038-rt-preempt-base-config.patch +++ b/kernel/patches-4.19.x-rt/0036-rt-Provide-PREEMPT_RT_BASE-config-switch.patch @@ -1,6 +1,7 @@ -Subject: rt: Provide PREEMPT_RT_BASE config switch +From 588e8fb01ec7915ef280606b80bd605f49c56915 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Fri, 17 Jun 2011 12:39:57 +0200 +Subject: [PATCH 036/269] rt: Provide PREEMPT_RT_BASE config switch Introduce PREEMPT_RT_BASE which enables parts of PREEMPT_RT_FULL. Forces interrupt threading and enables some of the RT @@ -8,9 +9,11 @@ substitutions for testing. 
Signed-off-by: Thomas Gleixner --- - kernel/Kconfig.preempt | 21 ++++++++++++++++++--- + kernel/Kconfig.preempt | 21 ++++++++++++++++++--- 1 file changed, 18 insertions(+), 3 deletions(-) +diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt +index cd1655122ec0..027db5976c2f 100644 --- a/kernel/Kconfig.preempt +++ b/kernel/Kconfig.preempt @@ -1,3 +1,10 @@ @@ -55,3 +58,6 @@ Signed-off-by: Thomas Gleixner - bool \ No newline at end of file + bool +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0039-cpumask-disable-offstack-on-rt.patch b/kernel/patches-4.19.x-rt/0037-cpumask-Disable-CONFIG_CPUMASK_OFFSTACK-for-RT.patch similarity index 87% rename from kernel/patches-4.19.x-rt/0039-cpumask-disable-offstack-on-rt.patch rename to kernel/patches-4.19.x-rt/0037-cpumask-Disable-CONFIG_CPUMASK_OFFSTACK-for-RT.patch index 87887d374..b2f4a29f9 100644 --- a/kernel/patches-4.19.x-rt/0039-cpumask-disable-offstack-on-rt.patch +++ b/kernel/patches-4.19.x-rt/0037-cpumask-Disable-CONFIG_CPUMASK_OFFSTACK-for-RT.patch @@ -1,6 +1,7 @@ -Subject: cpumask: Disable CONFIG_CPUMASK_OFFSTACK for RT +From 9480b8b41cb649337466e43807eff3816a9530bc Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Wed, 14 Dec 2011 01:03:49 +0100 +Subject: [PATCH 037/269] cpumask: Disable CONFIG_CPUMASK_OFFSTACK for RT There are "valid" GFP_ATOMIC allocations such as @@ -40,10 +41,12 @@ which forbid allocations at run-time. Signed-off-by: Thomas Gleixner --- - arch/x86/Kconfig | 2 +- - lib/Kconfig | 1 + + arch/x86/Kconfig | 2 +- + lib/Kconfig | 1 + 2 files changed, 2 insertions(+), 1 deletion(-) +diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig +index e76d16ac2776..04a45d6d0167 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -934,7 +934,7 @@ config CALGARY_IOMMU_ENABLED_BY_DEFAULT @@ -55,6 +58,8 @@ Signed-off-by: Thomas Gleixner ---help--- Enable maximum number of CPUS and NUMA Nodes for this architecture. If unsure, say N. 
+diff --git a/lib/Kconfig b/lib/Kconfig +index a3928d4438b5..a50b2158f7cd 100644 --- a/lib/Kconfig +++ b/lib/Kconfig @@ -441,6 +441,7 @@ config CHECK_SIGNATURE @@ -65,3 +70,6 @@ Signed-off-by: Thomas Gleixner help Use dynamic allocation for cpumask_var_t, instead of putting them on the stack. This is a bit more expensive, but avoids +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0040-jump-label-rt.patch b/kernel/patches-4.19.x-rt/0038-jump-label-disable-if-stop_machine-is-used.patch similarity index 77% rename from kernel/patches-4.19.x-rt/0040-jump-label-rt.patch rename to kernel/patches-4.19.x-rt/0038-jump-label-disable-if-stop_machine-is-used.patch index 02184b19a..c1d5b0e61 100644 --- a/kernel/patches-4.19.x-rt/0040-jump-label-rt.patch +++ b/kernel/patches-4.19.x-rt/0038-jump-label-disable-if-stop_machine-is-used.patch @@ -1,6 +1,7 @@ -Subject: jump-label: disable if stop_machine() is used +From d23a435dc809c84e3185683681ef735f2097fe57 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner -Date: Wed, 08 Jul 2015 17:14:48 +0200 +Date: Wed, 8 Jul 2015 17:14:48 +0200 +Subject: [PATCH 038/269] jump-label: disable if stop_machine() is used Some architectures are using stop_machine() while switching the opcode which leads to latency spikes. 
@@ -19,9 +20,11 @@ Signed-off-by: Thomas Gleixner [bigeasy: only ARM for now] Signed-off-by: Sebastian Andrzej Siewior --- - arch/arm/Kconfig | 2 +- + arch/arm/Kconfig | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) +diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig +index cd4c74daf71e..27a5f0b9ddc7 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -51,7 +51,7 @@ config ARM @@ -33,3 +36,6 @@ Signed-off-by: Sebastian Andrzej Siewior select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU select HAVE_ARCH_MMAP_RND_BITS if MMU select HAVE_ARCH_SECCOMP_FILTER if (AEABI && !OABI_COMPAT) +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0041-kconfig-disable-a-few-options-rt.patch b/kernel/patches-4.19.x-rt/0039-kconfig-Disable-config-options-which-are-not-RT-comp.patch similarity index 68% rename from kernel/patches-4.19.x-rt/0041-kconfig-disable-a-few-options-rt.patch rename to kernel/patches-4.19.x-rt/0039-kconfig-Disable-config-options-which-are-not-RT-comp.patch index 75e518436..1b4dda183 100644 --- a/kernel/patches-4.19.x-rt/0041-kconfig-disable-a-few-options-rt.patch +++ b/kernel/patches-4.19.x-rt/0039-kconfig-Disable-config-options-which-are-not-RT-comp.patch @@ -1,15 +1,19 @@ -Subject: kconfig: Disable config options which are not RT compatible +From 6c83d4802fcd91010b16a5a69456c7370cd10f9f Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Sun, 24 Jul 2011 12:11:43 +0200 +Subject: [PATCH 039/269] kconfig: Disable config options which are not RT + compatible Disable stuff which is known to have issues on RT Signed-off-by: Thomas Gleixner --- - arch/Kconfig | 1 + - mm/Kconfig | 2 +- + arch/Kconfig | 1 + + mm/Kconfig | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) +diff --git a/arch/Kconfig b/arch/Kconfig +index 6801123932a5..42b9062b9dbf 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -28,6 +28,7 @@ config OPROFILE @@ -20,6 +24,8 @@ Signed-off-by: Thomas Gleixner select RING_BUFFER select RING_BUFFER_ALLOW_SWAP help +diff --git a/mm/Kconfig 
b/mm/Kconfig +index de64ea658716..438460486a5b 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -377,7 +377,7 @@ config NOMMU_INITIAL_TRIM_EXCESS @@ -31,3 +37,6 @@ Signed-off-by: Thomas Gleixner select COMPACTION select RADIX_TREE_MULTIORDER help +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0042-lockdep-disable-self-test.patch b/kernel/patches-4.19.x-rt/0040-lockdep-disable-self-test.patch similarity index 79% rename from kernel/patches-4.19.x-rt/0042-lockdep-disable-self-test.patch rename to kernel/patches-4.19.x-rt/0040-lockdep-disable-self-test.patch index a0485fa98..8866d416c 100644 --- a/kernel/patches-4.19.x-rt/0042-lockdep-disable-self-test.patch +++ b/kernel/patches-4.19.x-rt/0040-lockdep-disable-self-test.patch @@ -1,6 +1,7 @@ +From 968d103b4727308889b77f3fa556e149bba6d56c Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Tue, 17 Oct 2017 16:36:18 +0200 -Subject: [PATCH] lockdep: disable self-test +Subject: [PATCH 040/269] lockdep: disable self-test MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @@ -12,9 +13,11 @@ during boot and it needs to be investigated… Signed-off-by: Sebastian Andrzej Siewior --- - lib/Kconfig.debug | 2 +- + lib/Kconfig.debug | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) +diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug +index 4966c4fbe7f7..92e7d88946f7 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -1207,7 +1207,7 @@ config DEBUG_ATOMIC_SLEEP @@ -26,3 +29,6 @@ Signed-off-by: Sebastian Andrzej Siewior help Say Y here if you want the kernel to run a short self-test during bootup. 
The self-test checks whether common types of locking bugs +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0043-mm-disable-sloub-rt.patch b/kernel/patches-4.19.x-rt/0041-mm-Allow-only-slub-on-RT.patch similarity index 76% rename from kernel/patches-4.19.x-rt/0043-mm-disable-sloub-rt.patch rename to kernel/patches-4.19.x-rt/0041-mm-Allow-only-slub-on-RT.patch index b15a7471a..fede8384d 100644 --- a/kernel/patches-4.19.x-rt/0043-mm-disable-sloub-rt.patch +++ b/kernel/patches-4.19.x-rt/0041-mm-Allow-only-slub-on-RT.patch @@ -1,16 +1,18 @@ +From 16680836f36c75ccaff96ab3155869144b0dd028 Mon Sep 17 00:00:00 2001 From: Ingo Molnar Date: Fri, 3 Jul 2009 08:44:03 -0500 -Subject: mm: Allow only slub on RT +Subject: [PATCH 041/269] mm: Allow only slub on RT Disable SLAB and SLOB on -RT. Only SLUB is adopted to -RT needs. Signed-off-by: Ingo Molnar Signed-off-by: Thomas Gleixner - --- - init/Kconfig | 2 ++ + init/Kconfig | 2 ++ 1 file changed, 2 insertions(+) +diff --git a/init/Kconfig b/init/Kconfig +index 864af10bb1b9..f3f073942c30 100644 --- a/init/Kconfig +++ b/init/Kconfig @@ -1634,6 +1634,7 @@ choice @@ -29,3 +31,6 @@ Signed-off-by: Thomas Gleixner help SLOB replaces the stock allocator with a drastically simpler allocator. 
SLOB is generally more space efficient but +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0044-mutex-no-spin-on-rt.patch b/kernel/patches-4.19.x-rt/0042-locking-Disable-spin-on-owner-for-RT.patch similarity index 68% rename from kernel/patches-4.19.x-rt/0044-mutex-no-spin-on-rt.patch rename to kernel/patches-4.19.x-rt/0042-locking-Disable-spin-on-owner-for-RT.patch index 6f7ca0e2e..9e1993b9f 100644 --- a/kernel/patches-4.19.x-rt/0044-mutex-no-spin-on-rt.patch +++ b/kernel/patches-4.19.x-rt/0042-locking-Disable-spin-on-owner-for-RT.patch @@ -1,15 +1,21 @@ +From a506cf490ae3e346c6082877f109fcf34568f22d Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Sun, 17 Jul 2011 21:51:45 +0200 -Subject: locking: Disable spin on owner for RT +Subject: [PATCH 042/269] locking: Disable spin on owner for RT +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit Drop spin on owner for mutex / rwsem. We are most likely not using it but… Signed-off-by: Thomas Gleixner --- - kernel/Kconfig.locks | 4 ++-- + kernel/Kconfig.locks | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) +diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks +index 84d882f3e299..af27c4000812 100644 --- a/kernel/Kconfig.locks +++ b/kernel/Kconfig.locks @@ -225,11 +225,11 @@ config ARCH_SUPPORTS_ATOMIC_RMW @@ -26,3 +32,6 @@ Signed-off-by: Thomas Gleixner config LOCK_SPIN_ON_OWNER def_bool y +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0045-rcu-disable-rcu-fast-no-hz-on-rt.patch b/kernel/patches-4.19.x-rt/0043-rcu-Disable-RCU_FAST_NO_HZ-on-RT.patch similarity index 72% rename from kernel/patches-4.19.x-rt/0045-rcu-disable-rcu-fast-no-hz-on-rt.patch rename to kernel/patches-4.19.x-rt/0043-rcu-Disable-RCU_FAST_NO_HZ-on-RT.patch index 13b4e6beb..471ae277d 100644 --- a/kernel/patches-4.19.x-rt/0045-rcu-disable-rcu-fast-no-hz-on-rt.patch +++ b/kernel/patches-4.19.x-rt/0043-rcu-Disable-RCU_FAST_NO_HZ-on-RT.patch @@ -1,16 +1,18 @@ -Subject: rcu: Disable 
RCU_FAST_NO_HZ on RT +From 30987f403875e211eee90cac11127e04b1a27c73 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Sun, 28 Oct 2012 13:26:09 +0000 +Subject: [PATCH 043/269] rcu: Disable RCU_FAST_NO_HZ on RT This uses a timer_list timer from the irq disabled guts of the idle code. Disable it for now to prevent wreckage. Signed-off-by: Thomas Gleixner - --- - kernel/rcu/Kconfig | 2 +- + kernel/rcu/Kconfig | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) +diff --git a/kernel/rcu/Kconfig b/kernel/rcu/Kconfig +index 9210379c0353..644264be90f0 100644 --- a/kernel/rcu/Kconfig +++ b/kernel/rcu/Kconfig @@ -172,7 +172,7 @@ config RCU_FANOUT_LEAF @@ -22,3 +24,6 @@ Signed-off-by: Thomas Gleixner default n help This option permits CPUs to enter dynticks-idle state even if +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0046-rcu-make-RCU_BOOST-default-on-RT.patch b/kernel/patches-4.19.x-rt/0044-rcu-make-RCU_BOOST-default-on-RT.patch similarity index 77% rename from kernel/patches-4.19.x-rt/0046-rcu-make-RCU_BOOST-default-on-RT.patch rename to kernel/patches-4.19.x-rt/0044-rcu-make-RCU_BOOST-default-on-RT.patch index fa1fb18b8..a8d71c6b2 100644 --- a/kernel/patches-4.19.x-rt/0046-rcu-make-RCU_BOOST-default-on-RT.patch +++ b/kernel/patches-4.19.x-rt/0044-rcu-make-RCU_BOOST-default-on-RT.patch @@ -1,6 +1,7 @@ +From 709173f4678f7f2f0b834e508d8044821d1c2354 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Fri, 21 Mar 2014 20:19:05 +0100 -Subject: rcu: make RCU_BOOST default on RT +Subject: [PATCH 044/269] rcu: make RCU_BOOST default on RT Since it is no longer invoked from the softirq people run into OOM more often if the priority of the RCU thread is too low. Making boosting @@ -9,9 +10,11 @@ someone knows better. 
Signed-off-by: Sebastian Andrzej Siewior --- - kernel/rcu/Kconfig | 4 ++-- + kernel/rcu/Kconfig | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) +diff --git a/kernel/rcu/Kconfig b/kernel/rcu/Kconfig +index 644264be90f0..a243a78ff38c 100644 --- a/kernel/rcu/Kconfig +++ b/kernel/rcu/Kconfig @@ -190,8 +190,8 @@ config RCU_FAST_NO_HZ @@ -25,3 +28,6 @@ Signed-off-by: Sebastian Andrzej Siewior help This option boosts the priority of preempted RCU readers that block the current preemptible RCU grace period for too long. +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0047-sched-disable-rt-group-sched-on-rt.patch b/kernel/patches-4.19.x-rt/0045-sched-Disable-CONFIG_RT_GROUP_SCHED-on-RT.patch similarity index 75% rename from kernel/patches-4.19.x-rt/0047-sched-disable-rt-group-sched-on-rt.patch rename to kernel/patches-4.19.x-rt/0045-sched-Disable-CONFIG_RT_GROUP_SCHED-on-RT.patch index b752b8171..756af2efc 100644 --- a/kernel/patches-4.19.x-rt/0047-sched-disable-rt-group-sched-on-rt.patch +++ b/kernel/patches-4.19.x-rt/0045-sched-Disable-CONFIG_RT_GROUP_SCHED-on-RT.patch @@ -1,6 +1,7 @@ -Subject: sched: Disable CONFIG_RT_GROUP_SCHED on RT +From 56d2f884391ba7e98721f6639f87698e46429c7f Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Mon, 18 Jul 2011 17:03:52 +0200 +Subject: [PATCH 045/269] sched: Disable CONFIG_RT_GROUP_SCHED on RT Carsten reported problems when running: @@ -13,9 +14,11 @@ shell. Disabling CONFIG_RT_GROUP_SCHED solves that as well. 
Signed-off-by: Thomas Gleixner --- - init/Kconfig | 1 + + init/Kconfig | 1 + 1 file changed, 1 insertion(+) +diff --git a/init/Kconfig b/init/Kconfig +index f3f073942c30..707ca4d49944 100644 --- a/init/Kconfig +++ b/init/Kconfig @@ -781,6 +781,7 @@ config CFS_BANDWIDTH @@ -26,3 +29,6 @@ Signed-off-by: Thomas Gleixner default n help This feature lets you explicitly allocate real CPU bandwidth +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0048-net_disable_NET_RX_BUSY_POLL.patch b/kernel/patches-4.19.x-rt/0046-net-core-disable-NET_RX_BUSY_POLL.patch similarity index 71% rename from kernel/patches-4.19.x-rt/0048-net_disable_NET_RX_BUSY_POLL.patch rename to kernel/patches-4.19.x-rt/0046-net-core-disable-NET_RX_BUSY_POLL.patch index 910258078..26f9d52fd 100644 --- a/kernel/patches-4.19.x-rt/0048-net_disable_NET_RX_BUSY_POLL.patch +++ b/kernel/patches-4.19.x-rt/0046-net-core-disable-NET_RX_BUSY_POLL.patch @@ -1,6 +1,10 @@ +From a5a9737c0c6edf17eecb16a923a936432f11019e Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Sat, 27 May 2017 19:02:06 +0200 -Subject: net/core: disable NET_RX_BUSY_POLL +Subject: [PATCH 046/269] net/core: disable NET_RX_BUSY_POLL +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit sk_busy_loop() does preempt_disable() followed by a few operations which can take sleeping locks and may get long. @@ -12,9 +16,11 @@ could be invoked again. 
Signed-off-by: Sebastian Andrzej Siewior --- - net/Kconfig | 2 +- + net/Kconfig | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) +diff --git a/net/Kconfig b/net/Kconfig +index 228dfa382eec..bc8d01996f22 100644 --- a/net/Kconfig +++ b/net/Kconfig @@ -275,7 +275,7 @@ config CGROUP_NET_CLASSID @@ -26,3 +32,6 @@ Signed-off-by: Sebastian Andrzej Siewior config BQL bool +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0049-arm-disable-NEON-in-kernel-mode.patch b/kernel/patches-4.19.x-rt/0047-arm-disable-NEON-in-kernel-mode.patch similarity index 87% rename from kernel/patches-4.19.x-rt/0049-arm-disable-NEON-in-kernel-mode.patch rename to kernel/patches-4.19.x-rt/0047-arm-disable-NEON-in-kernel-mode.patch index a24aedbe4..2f78bbd73 100644 --- a/kernel/patches-4.19.x-rt/0049-arm-disable-NEON-in-kernel-mode.patch +++ b/kernel/patches-4.19.x-rt/0047-arm-disable-NEON-in-kernel-mode.patch @@ -1,6 +1,7 @@ +From 0db6c523b2591dbf527c759ef1b3718f96bc3c29 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Fri, 1 Dec 2017 10:42:03 +0100 -Subject: [PATCH] arm*: disable NEON in kernel mode +Subject: [PATCH 047/269] arm*: disable NEON in kernel mode NEON in kernel mode is used by the crypto algorithms and raid6 code. While the raid6 code looks okay, the crypto algorithms do not: NEON @@ -13,14 +14,16 @@ stay on due to possible EFI callbacks so here I disable each algorithm. 
Cc: stable-rt@vger.kernel.org Signed-off-by: Sebastian Andrzej Siewior --- - arch/arm/Kconfig | 2 +- - arch/arm64/crypto/Kconfig | 28 ++++++++++++++-------------- - arch/arm64/crypto/crc32-ce-glue.c | 3 ++- + arch/arm/Kconfig | 2 +- + arch/arm64/crypto/Kconfig | 28 ++++++++++++++-------------- + arch/arm64/crypto/crc32-ce-glue.c | 3 ++- 3 files changed, 17 insertions(+), 16 deletions(-) +diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig +index 27a5f0b9ddc7..91f4f80a6f24 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig -@@ -2160,7 +2160,7 @@ config NEON +@@ -2161,7 +2161,7 @@ config NEON config KERNEL_MODE_NEON bool "Support for NEON in kernel mode" @@ -29,6 +32,8 @@ Signed-off-by: Sebastian Andrzej Siewior help Say Y to include support for NEON in kernel mode. +diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig +index d51944ff9f91..0d4b3f0cfba6 100644 --- a/arch/arm64/crypto/Kconfig +++ b/arch/arm64/crypto/Kconfig @@ -19,43 +19,43 @@ config CRYPTO_SHA512_ARM64 @@ -141,9 +146,11 @@ Signed-off-by: Sebastian Andrzej Siewior select CRYPTO_BLKCIPHER select CRYPTO_AES_ARM64_NEON_BLK select CRYPTO_AES_ARM64 +diff --git a/arch/arm64/crypto/crc32-ce-glue.c b/arch/arm64/crypto/crc32-ce-glue.c +index 34b4e3d46aab..ae055cdad8cf 100644 --- a/arch/arm64/crypto/crc32-ce-glue.c +++ b/arch/arm64/crypto/crc32-ce-glue.c -@@ -208,7 +208,8 @@ static struct shash_alg crc32_pmull_algs +@@ -208,7 +208,8 @@ static struct shash_alg crc32_pmull_algs[] = { { static int __init crc32_pmull_mod_init(void) { @@ -153,3 +160,6 @@ Signed-off-by: Sebastian Andrzej Siewior crc32_pmull_algs[0].update = crc32_pmull_update; crc32_pmull_algs[1].update = crc32c_pmull_update; +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0050-power-use-generic-rwsem-on-rt.patch b/kernel/patches-4.19.x-rt/0048-powerpc-Use-generic-rwsem-on-RT.patch similarity index 65% rename from kernel/patches-4.19.x-rt/0050-power-use-generic-rwsem-on-rt.patch rename to 
kernel/patches-4.19.x-rt/0048-powerpc-Use-generic-rwsem-on-RT.patch index 46e0a5edf..478b571b5 100644 --- a/kernel/patches-4.19.x-rt/0050-power-use-generic-rwsem-on-rt.patch +++ b/kernel/patches-4.19.x-rt/0048-powerpc-Use-generic-rwsem-on-RT.patch @@ -1,14 +1,17 @@ +From 24bc2177006a16588c79a438ba84122ec215135a Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Tue, 14 Jul 2015 14:26:34 +0200 -Subject: powerpc: Use generic rwsem on RT +Subject: [PATCH 048/269] powerpc: Use generic rwsem on RT Use generic code which uses rtmutex Signed-off-by: Thomas Gleixner --- - arch/powerpc/Kconfig | 3 ++- + arch/powerpc/Kconfig | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) +diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig +index a80669209155..9952764db9c5 100644 --- a/arch/powerpc/Kconfig +++ b/arch/powerpc/Kconfig @@ -105,10 +105,11 @@ config LOCKDEP_SUPPORT @@ -24,3 +27,6 @@ Signed-off-by: Thomas Gleixner config GENERIC_LOCKBREAK bool +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0051-powerpc-kvm-Disable-in-kernel-MPIC-emulation-for-PRE.patch b/kernel/patches-4.19.x-rt/0049-powerpc-kvm-Disable-in-kernel-MPIC-emulation-for-PRE.patch similarity index 83% rename from kernel/patches-4.19.x-rt/0051-powerpc-kvm-Disable-in-kernel-MPIC-emulation-for-PRE.patch rename to kernel/patches-4.19.x-rt/0049-powerpc-kvm-Disable-in-kernel-MPIC-emulation-for-PRE.patch index ddf890e5c..8b81330dd 100644 --- a/kernel/patches-4.19.x-rt/0051-powerpc-kvm-Disable-in-kernel-MPIC-emulation-for-PRE.patch +++ b/kernel/patches-4.19.x-rt/0049-powerpc-kvm-Disable-in-kernel-MPIC-emulation-for-PRE.patch @@ -1,6 +1,8 @@ +From 86dd7e931e1f812e0fc9b44545ed1f9ffc80dcae Mon Sep 17 00:00:00 2001 From: Bogdan Purcareata Date: Fri, 24 Apr 2015 15:53:13 +0000 -Subject: powerpc/kvm: Disable in-kernel MPIC emulation for PREEMPT_RT_FULL +Subject: [PATCH 049/269] powerpc/kvm: Disable in-kernel MPIC emulation for + PREEMPT_RT_FULL While converting the openpic emulation code to use a raw_spinlock_t 
enables guests to run on RT, there's still a performance issue. For interrupts sent in @@ -22,9 +24,11 @@ Acked-by: Scott Wood Signed-off-by: Bogdan Purcareata Signed-off-by: Sebastian Andrzej Siewior --- - arch/powerpc/kvm/Kconfig | 1 + + arch/powerpc/kvm/Kconfig | 1 + 1 file changed, 1 insertion(+) +diff --git a/arch/powerpc/kvm/Kconfig b/arch/powerpc/kvm/Kconfig +index 68a0e9d5b440..6f4d5d7615af 100644 --- a/arch/powerpc/kvm/Kconfig +++ b/arch/powerpc/kvm/Kconfig @@ -178,6 +178,7 @@ config KVM_E500MC @@ -35,3 +39,6 @@ Signed-off-by: Sebastian Andrzej Siewior select HAVE_KVM_IRQCHIP select HAVE_KVM_IRQFD select HAVE_KVM_IRQ_ROUTING +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0052-power-disable-highmem-on-rt.patch b/kernel/patches-4.19.x-rt/0050-powerpc-Disable-highmem-on-RT.patch similarity index 64% rename from kernel/patches-4.19.x-rt/0052-power-disable-highmem-on-rt.patch rename to kernel/patches-4.19.x-rt/0050-powerpc-Disable-highmem-on-RT.patch index 19eeb6ac5..3947fee11 100644 --- a/kernel/patches-4.19.x-rt/0052-power-disable-highmem-on-rt.patch +++ b/kernel/patches-4.19.x-rt/0050-powerpc-Disable-highmem-on-RT.patch @@ -1,14 +1,17 @@ -Subject: powerpc: Disable highmem on RT +From f5b4401c967f9ead16662b347d2082f8f2743205 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Mon, 18 Jul 2011 17:08:34 +0200 +Subject: [PATCH 050/269] powerpc: Disable highmem on RT The current highmem handling on -RT is not compatible and needs fixups. 
Signed-off-by: Thomas Gleixner --- - arch/powerpc/Kconfig | 2 +- + arch/powerpc/Kconfig | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) +diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig +index 9952764db9c5..1563820a37e8 100644 --- a/arch/powerpc/Kconfig +++ b/arch/powerpc/Kconfig @@ -398,7 +398,7 @@ menu "Kernel options" @@ -20,3 +23,6 @@ Signed-off-by: Thomas Gleixner source kernel/Kconfig.hz +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0053-mips-disable-highmem-on-rt.patch b/kernel/patches-4.19.x-rt/0051-mips-Disable-highmem-on-RT.patch similarity index 71% rename from kernel/patches-4.19.x-rt/0053-mips-disable-highmem-on-rt.patch rename to kernel/patches-4.19.x-rt/0051-mips-Disable-highmem-on-RT.patch index e9c0bedfa..44948abc2 100644 --- a/kernel/patches-4.19.x-rt/0053-mips-disable-highmem-on-rt.patch +++ b/kernel/patches-4.19.x-rt/0051-mips-Disable-highmem-on-RT.patch @@ -1,14 +1,17 @@ -Subject: mips: Disable highmem on RT +From 29b46bfd781d871ae857c940e6ef76454bf356c2 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Mon, 18 Jul 2011 17:10:12 +0200 +Subject: [PATCH 051/269] mips: Disable highmem on RT The current highmem handling on -RT is not compatible and needs fixups. 
Signed-off-by: Thomas Gleixner --- - arch/mips/Kconfig | 2 +- + arch/mips/Kconfig | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) +diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig +index 201caf226b47..bd268302efa4 100644 --- a/arch/mips/Kconfig +++ b/arch/mips/Kconfig @@ -2517,7 +2517,7 @@ config MIPS_CRC_SUPPORT @@ -20,3 +23,6 @@ Signed-off-by: Thomas Gleixner config CPU_SUPPORTS_HIGHMEM bool +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0054-x86-use-gen-rwsem-spinlocks-rt.patch b/kernel/patches-4.19.x-rt/0052-x86-Use-generic-rwsem_spinlocks-on-rt.patch similarity index 69% rename from kernel/patches-4.19.x-rt/0054-x86-use-gen-rwsem-spinlocks-rt.patch rename to kernel/patches-4.19.x-rt/0052-x86-Use-generic-rwsem_spinlocks-on-rt.patch index 0fe3da4e8..38802d8e2 100644 --- a/kernel/patches-4.19.x-rt/0054-x86-use-gen-rwsem-spinlocks-rt.patch +++ b/kernel/patches-4.19.x-rt/0052-x86-Use-generic-rwsem_spinlocks-on-rt.patch @@ -1,16 +1,18 @@ +From 789344b11534d2799fbc807496846f21869124b5 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Sun, 26 Jul 2009 02:21:32 +0200 -Subject: x86: Use generic rwsem_spinlocks on -rt +Subject: [PATCH 052/269] x86: Use generic rwsem_spinlocks on -rt Simplifies the separation of anon_rw_semaphores and rw_semaphores for -rt. 
Signed-off-by: Thomas Gleixner - --- - arch/x86/Kconfig | 5 ++++- + arch/x86/Kconfig | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) +diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig +index 04a45d6d0167..1b05ae86bdde 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -264,8 +264,11 @@ config ARCH_MAY_HAVE_PC_FDC @@ -26,3 +28,6 @@ Signed-off-by: Thomas Gleixner config GENERIC_CALIBRATE_DELAY def_bool y +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0055-leds-trigger-disable-CPU-trigger-on-RT.patch b/kernel/patches-4.19.x-rt/0053-leds-trigger-disable-CPU-trigger-on-RT.patch similarity index 83% rename from kernel/patches-4.19.x-rt/0055-leds-trigger-disable-CPU-trigger-on-RT.patch rename to kernel/patches-4.19.x-rt/0053-leds-trigger-disable-CPU-trigger-on-RT.patch index 68f8c113b..070bb7bbc 100644 --- a/kernel/patches-4.19.x-rt/0055-leds-trigger-disable-CPU-trigger-on-RT.patch +++ b/kernel/patches-4.19.x-rt/0053-leds-trigger-disable-CPU-trigger-on-RT.patch @@ -1,6 +1,7 @@ +From 7554227ac04319dadc334245535dd1d21d258de0 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Thu, 23 Jan 2014 14:45:59 +0100 -Subject: leds: trigger: disable CPU trigger on -RT +Subject: [PATCH 053/269] leds: trigger: disable CPU trigger on -RT as it triggers: |CPU: 0 PID: 0 Comm: swapper Not tainted 3.12.8-rt10 #141 @@ -19,9 +20,11 @@ as it triggers: Signed-off-by: Sebastian Andrzej Siewior --- - drivers/leds/trigger/Kconfig | 1 + + drivers/leds/trigger/Kconfig | 1 + 1 file changed, 1 insertion(+) +diff --git a/drivers/leds/trigger/Kconfig b/drivers/leds/trigger/Kconfig +index 4018af769969..b4ce8c115949 100644 --- a/drivers/leds/trigger/Kconfig +++ b/drivers/leds/trigger/Kconfig @@ -63,6 +63,7 @@ config LEDS_TRIGGER_BACKLIGHT @@ -32,3 +35,6 @@ Signed-off-by: Sebastian Andrzej Siewior help This allows LEDs to be controlled by active CPUs. 
This shows the active CPUs across an array of LEDs so you can see which +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0056-cpufreq-drop-K8-s-driver-from-beeing-selected.patch b/kernel/patches-4.19.x-rt/0054-cpufreq-drop-K8-s-driver-from-beeing-selected.patch similarity index 79% rename from kernel/patches-4.19.x-rt/0056-cpufreq-drop-K8-s-driver-from-beeing-selected.patch rename to kernel/patches-4.19.x-rt/0054-cpufreq-drop-K8-s-driver-from-beeing-selected.patch index 6b60722a5..2208897b4 100644 --- a/kernel/patches-4.19.x-rt/0056-cpufreq-drop-K8-s-driver-from-beeing-selected.patch +++ b/kernel/patches-4.19.x-rt/0054-cpufreq-drop-K8-s-driver-from-beeing-selected.patch @@ -1,6 +1,7 @@ +From 57c3607ed990ada1d1636542d00bd3ed95e243da Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Thu, 9 Apr 2015 15:23:01 +0200 -Subject: cpufreq: drop K8's driver from beeing selected +Subject: [PATCH 054/269] cpufreq: drop K8's driver from beeing selected Ralf posted a picture of a backtrace from @@ -16,9 +17,11 @@ I have no machine with this, I simply switch it off. Reported-by: Ralf Mardorf Signed-off-by: Sebastian Andrzej Siewior --- - drivers/cpufreq/Kconfig.x86 | 2 +- + drivers/cpufreq/Kconfig.x86 | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) +diff --git a/drivers/cpufreq/Kconfig.x86 b/drivers/cpufreq/Kconfig.x86 +index 35f71825b7f3..bb4a6160d0f7 100644 --- a/drivers/cpufreq/Kconfig.x86 +++ b/drivers/cpufreq/Kconfig.x86 @@ -125,7 +125,7 @@ config X86_POWERNOW_K7_ACPI @@ -30,3 +33,6 @@ Signed-off-by: Sebastian Andrzej Siewior help This adds the CPUFreq driver for K8/early Opteron/Athlon64 processors. Support for K10 and newer processors is now in acpi-cpufreq. 
+-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0057-md-disable-bcache.patch b/kernel/patches-4.19.x-rt/0055-md-disable-bcache.patch similarity index 74% rename from kernel/patches-4.19.x-rt/0057-md-disable-bcache.patch rename to kernel/patches-4.19.x-rt/0055-md-disable-bcache.patch index 6c510466c..2c4bc18ad 100644 --- a/kernel/patches-4.19.x-rt/0057-md-disable-bcache.patch +++ b/kernel/patches-4.19.x-rt/0055-md-disable-bcache.patch @@ -1,6 +1,10 @@ +From 53eb768ccfb675d61d67bd236402aa90434a6923 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Thu, 29 Aug 2013 11:48:57 +0200 -Subject: md: disable bcache +Subject: [PATCH 055/269] md: disable bcache +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit It uses anon semaphores |drivers/md/bcache/request.c: In function ‘cached_dev_write_complete’: @@ -16,9 +20,11 @@ either we get rid of those or we have to introduce them… Signed-off-by: Sebastian Andrzej Siewior --- - drivers/md/bcache/Kconfig | 1 + + drivers/md/bcache/Kconfig | 1 + 1 file changed, 1 insertion(+) +diff --git a/drivers/md/bcache/Kconfig b/drivers/md/bcache/Kconfig +index f6e0a8b3a61e..18c03d79a442 100644 --- a/drivers/md/bcache/Kconfig +++ b/drivers/md/bcache/Kconfig @@ -1,6 +1,7 @@ @@ -29,3 +35,6 @@ Signed-off-by: Sebastian Andrzej Siewior select CRC64 help Allows a block device to be used as cache for other devices; uses +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0058-efi-Disable-runtime-services-on-RT.patch b/kernel/patches-4.19.x-rt/0056-efi-Disable-runtime-services-on-RT.patch similarity index 81% rename from kernel/patches-4.19.x-rt/0058-efi-Disable-runtime-services-on-RT.patch rename to kernel/patches-4.19.x-rt/0056-efi-Disable-runtime-services-on-RT.patch index 30094a012..412995274 100644 --- a/kernel/patches-4.19.x-rt/0058-efi-Disable-runtime-services-on-RT.patch +++ b/kernel/patches-4.19.x-rt/0056-efi-Disable-runtime-services-on-RT.patch @@ -1,6 +1,7 @@ +From 
62309a1da779bde384a7645a7d3e2713520a76da Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Thu, 26 Jul 2018 15:03:16 +0200 -Subject: [PATCH] efi: Disable runtime services on RT +Subject: [PATCH 056/269] efi: Disable runtime services on RT Based on meassurements the EFI functions get_variable / get_next_variable take up to 2us which looks okay. @@ -23,9 +24,11 @@ This was observed on "EFI v2.60 by SoftIron Overdrive 1000". Acked-by: Ard Biesheuvel Signed-off-by: Sebastian Andrzej Siewior --- - drivers/firmware/efi/efi.c | 2 +- + drivers/firmware/efi/efi.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) +diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c +index ab668e17fd05..f58ab9ed4ade 100644 --- a/drivers/firmware/efi/efi.c +++ b/drivers/firmware/efi/efi.c @@ -87,7 +87,7 @@ struct mm_struct efi_mm = { @@ -37,3 +40,6 @@ Signed-off-by: Sebastian Andrzej Siewior static int __init setup_noefi(char *arg) { disable_runtime = true; +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0059-printk-kill.patch b/kernel/patches-4.19.x-rt/0057-printk-Add-a-printk-kill-switch.patch similarity index 78% rename from kernel/patches-4.19.x-rt/0059-printk-kill.patch rename to kernel/patches-4.19.x-rt/0057-printk-Add-a-printk-kill-switch.patch index a9d397ddf..bcfcfc57a 100644 --- a/kernel/patches-4.19.x-rt/0059-printk-kill.patch +++ b/kernel/patches-4.19.x-rt/0057-printk-Add-a-printk-kill-switch.patch @@ -1,17 +1,20 @@ -Subject: printk: Add a printk kill switch +From 09acfc4d67168f054485eb40955069fa2390a5ec Mon Sep 17 00:00:00 2001 From: Ingo Molnar Date: Fri, 22 Jul 2011 17:58:40 +0200 +Subject: [PATCH 057/269] printk: Add a printk kill switch Add a prinkt-kill-switch. This is used from (NMI) watchdog to ensure that it does not dead-lock with the early printk code. 
Signed-off-by: Thomas Gleixner --- - include/linux/printk.h | 2 + - kernel/printk/printk.c | 79 ++++++++++++++++++++++++++++++++++++------------- - kernel/watchdog_hld.c | 10 ++++++ + include/linux/printk.h | 2 ++ + kernel/printk/printk.c | 79 +++++++++++++++++++++++++++++++----------- + kernel/watchdog_hld.c | 10 ++++++ 3 files changed, 71 insertions(+), 20 deletions(-) +diff --git a/include/linux/printk.h b/include/linux/printk.h +index cf3eccfe1543..30ebf5f82a7c 100644 --- a/include/linux/printk.h +++ b/include/linux/printk.h @@ -140,9 +140,11 @@ struct va_format { @@ -26,6 +29,8 @@ Signed-off-by: Thomas Gleixner #endif #ifdef CONFIG_PRINTK_NMI +diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c +index 06045abd1887..413160a93814 100644 --- a/kernel/printk/printk.c +++ b/kernel/printk/printk.c @@ -405,6 +405,58 @@ DEFINE_RAW_SPINLOCK(logbuf_lock); @@ -87,7 +92,7 @@ Signed-off-by: Thomas Gleixner #ifdef CONFIG_PRINTK DECLARE_WAIT_QUEUE_HEAD(log_wait); /* the next printk record to read by syslog(READ) or /proc/kmsg */ -@@ -1897,6 +1949,13 @@ asmlinkage int vprintk_emit(int facility +@@ -1897,6 +1949,13 @@ asmlinkage int vprintk_emit(int facility, int level, bool in_sched = false; unsigned long flags; @@ -101,7 +106,7 @@ Signed-off-by: Thomas Gleixner if (level == LOGLEVEL_SCHED) { level = LOGLEVEL_DEFAULT; in_sched = true; -@@ -2037,26 +2096,6 @@ static bool suppress_message_printing(in +@@ -2037,26 +2096,6 @@ static bool suppress_message_printing(int level) { return false; } #endif /* CONFIG_PRINTK */ @@ -128,9 +133,11 @@ Signed-off-by: Thomas Gleixner static int __add_preferred_console(char *name, int idx, char *options, char *brl_options) { +diff --git a/kernel/watchdog_hld.c b/kernel/watchdog_hld.c +index 71381168dede..685443375dc0 100644 --- a/kernel/watchdog_hld.c +++ b/kernel/watchdog_hld.c -@@ -24,6 +24,8 @@ static DEFINE_PER_CPU(bool, hard_watchdo +@@ -24,6 +24,8 @@ static DEFINE_PER_CPU(bool, hard_watchdog_warn); static DEFINE_PER_CPU(bool, 
watchdog_nmi_touch); static DEFINE_PER_CPU(struct perf_event *, watchdog_ev); static DEFINE_PER_CPU(struct perf_event *, dead_event); @@ -139,7 +146,7 @@ Signed-off-by: Thomas Gleixner static struct cpumask dead_events_mask; static unsigned long hardlockup_allcpu_dumped; -@@ -134,6 +136,13 @@ static void watchdog_overflow_callback(s +@@ -134,6 +136,13 @@ static void watchdog_overflow_callback(struct perf_event *event, /* only print hardlockups once */ if (__this_cpu_read(hard_watchdog_warn) == true) return; @@ -153,7 +160,7 @@ Signed-off-by: Thomas Gleixner pr_emerg("Watchdog detected hard LOCKUP on cpu %d", this_cpu); print_modules(); -@@ -151,6 +160,7 @@ static void watchdog_overflow_callback(s +@@ -151,6 +160,7 @@ static void watchdog_overflow_callback(struct perf_event *event, !test_and_set_bit(0, &hardlockup_allcpu_dumped)) trigger_allbutself_cpu_backtrace(); @@ -161,3 +168,6 @@ Signed-off-by: Thomas Gleixner if (hardlockup_panic) nmi_panic(regs, "Hard LOCKUP"); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0060-printk-27force_early_printk-27-boot-param-to-help-with-debugging.patch b/kernel/patches-4.19.x-rt/0058-printk-Add-force_early_printk-boot-param-to-help-wit.patch similarity index 63% rename from kernel/patches-4.19.x-rt/0060-printk-27force_early_printk-27-boot-param-to-help-with-debugging.patch rename to kernel/patches-4.19.x-rt/0058-printk-Add-force_early_printk-boot-param-to-help-wit.patch index 95c2233c1..46d56fa3c 100644 --- a/kernel/patches-4.19.x-rt/0060-printk-27force_early_printk-27-boot-param-to-help-with-debugging.patch +++ b/kernel/patches-4.19.x-rt/0058-printk-Add-force_early_printk-boot-param-to-help-wit.patch @@ -1,6 +1,8 @@ -Subject: printk: Add "force_early_printk" boot param to help with debugging +From 3dd75cbf0c1ddd8dc0a7c0492e86f7293a730145 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra -Date: Fri, 02 Sep 2011 14:41:29 +0200 +Date: Fri, 2 Sep 2011 14:41:29 +0200 +Subject: [PATCH 058/269] printk: Add "force_early_printk" boot 
param to help + with debugging Gives me an option to screw printk and actually see what the machine says. @@ -10,12 +12,14 @@ Link: http://lkml.kernel.org/r/1314967289.1301.11.camel@twins Signed-off-by: Thomas Gleixner Link: http://lkml.kernel.org/n/tip-ykb97nsfmobq44xketrxs977@git.kernel.org --- - kernel/printk/printk.c | 7 +++++++ + kernel/printk/printk.c | 7 +++++++ 1 file changed, 7 insertions(+) +diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c +index 413160a93814..6553508ff388 100644 --- a/kernel/printk/printk.c +++ b/kernel/printk/printk.c -@@ -435,6 +435,13 @@ asmlinkage void early_printk(const char +@@ -435,6 +435,13 @@ asmlinkage void early_printk(const char *fmt, ...) */ static bool __read_mostly printk_killswitch; @@ -29,3 +33,6 @@ Link: http://lkml.kernel.org/n/tip-ykb97nsfmobq44xketrxs977@git.kernel.org void printk_kill(void) { printk_killswitch = true; +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0061-preempt-nort-rt-variants.patch b/kernel/patches-4.19.x-rt/0059-preempt-Provide-preempt_-_-no-rt-variants.patch similarity index 81% rename from kernel/patches-4.19.x-rt/0061-preempt-nort-rt-variants.patch rename to kernel/patches-4.19.x-rt/0059-preempt-Provide-preempt_-_-no-rt-variants.patch index c9a87c546..d9eb70eac 100644 --- a/kernel/patches-4.19.x-rt/0061-preempt-nort-rt-variants.patch +++ b/kernel/patches-4.19.x-rt/0059-preempt-Provide-preempt_-_-no-rt-variants.patch @@ -1,16 +1,18 @@ +From 31772df387205be3a95e3d0bc21b7b81a244f6df Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Fri, 24 Jul 2009 12:38:56 +0200 -Subject: preempt: Provide preempt_*_(no)rt variants +Subject: [PATCH 059/269] preempt: Provide preempt_*_(no)rt variants RT needs a few preempt_disable/enable points which are not necessary otherwise. Implement variants to avoid #ifdeffery. 
Signed-off-by: Thomas Gleixner - --- - include/linux/preempt.h | 18 +++++++++++++++++- + include/linux/preempt.h | 18 +++++++++++++++++- 1 file changed, 17 insertions(+), 1 deletion(-) +diff --git a/include/linux/preempt.h b/include/linux/preempt.h +index 3196d0e76719..f7a17fcc3fec 100644 --- a/include/linux/preempt.h +++ b/include/linux/preempt.h @@ -181,7 +181,11 @@ do { \ @@ -45,3 +47,6 @@ Signed-off-by: Thomas Gleixner #ifdef CONFIG_PREEMPT_NOTIFIERS struct preempt_notifier; +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0062-futex-workaround-migrate_disable-enable-in-different.patch b/kernel/patches-4.19.x-rt/0060-futex-workaround-migrate_disable-enable-in-different.patch similarity index 77% rename from kernel/patches-4.19.x-rt/0062-futex-workaround-migrate_disable-enable-in-different.patch rename to kernel/patches-4.19.x-rt/0060-futex-workaround-migrate_disable-enable-in-different.patch index 307166696..b489a8cb7 100644 --- a/kernel/patches-4.19.x-rt/0062-futex-workaround-migrate_disable-enable-in-different.patch +++ b/kernel/patches-4.19.x-rt/0060-futex-workaround-migrate_disable-enable-in-different.patch @@ -1,6 +1,8 @@ +From c78bd62f56b86aa7717ac7a79e288fa8b3978573 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Wed, 8 Mar 2017 14:23:35 +0100 -Subject: [PATCH] futex: workaround migrate_disable/enable in different context +Subject: [PATCH 060/269] futex: workaround migrate_disable/enable in different + context migrate_disable()/migrate_enable() takes a different path in atomic() vs !atomic() context. These little hacks ensure that we don't underflow / overflow @@ -10,12 +12,14 @@ enabled and unlock it with interrupts disabled. 
Signed-off-by: Thomas Gleixner Signed-off-by: Sebastian Andrzej Siewior --- - kernel/futex.c | 19 +++++++++++++++++++ + kernel/futex.c | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+) +diff --git a/kernel/futex.c b/kernel/futex.c +index 5a26d843a015..1bd0950bea4e 100644 --- a/kernel/futex.c +++ b/kernel/futex.c -@@ -2856,6 +2856,14 @@ static int futex_lock_pi(u32 __user *uad +@@ -2859,6 +2859,14 @@ static int futex_lock_pi(u32 __user *uaddr, unsigned int flags, * before __rt_mutex_start_proxy_lock() is done. */ raw_spin_lock_irq(&q.pi_state->pi_mutex.wait_lock); @@ -30,7 +34,7 @@ Signed-off-by: Sebastian Andrzej Siewior spin_unlock(q.lock_ptr); /* * __rt_mutex_start_proxy_lock() unconditionally enqueues the @rt_waiter -@@ -2864,6 +2872,7 @@ static int futex_lock_pi(u32 __user *uad +@@ -2867,6 +2875,7 @@ static int futex_lock_pi(u32 __user *uaddr, unsigned int flags, */ ret = __rt_mutex_start_proxy_lock(&q.pi_state->pi_mutex, &rt_waiter, current); raw_spin_unlock_irq(&q.pi_state->pi_mutex.wait_lock); @@ -38,7 +42,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (ret) { if (ret == 1) -@@ -3012,11 +3021,21 @@ static int futex_unlock_pi(u32 __user *u +@@ -3015,11 +3024,21 @@ static int futex_unlock_pi(u32 __user *uaddr, unsigned int flags) * rt_waiter. Also see the WARN in wake_futex_pi(). 
*/ raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock); @@ -60,3 +64,6 @@ Signed-off-by: Sebastian Andrzej Siewior put_pi_state(pi_state); /* +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0063-rt-local-irq-lock.patch b/kernel/patches-4.19.x-rt/0061-rt-Add-local-irq-locks.patch similarity index 94% rename from kernel/patches-4.19.x-rt/0063-rt-local-irq-lock.patch rename to kernel/patches-4.19.x-rt/0061-rt-Add-local-irq-locks.patch index c27d96204..2d8d9f01a 100644 --- a/kernel/patches-4.19.x-rt/0063-rt-local-irq-lock.patch +++ b/kernel/patches-4.19.x-rt/0061-rt-Add-local-irq-locks.patch @@ -1,6 +1,7 @@ -Subject: rt: Add local irq locks +From 5b811e266fa9c293395c73c7a21e7e5c5a51deb1 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Mon, 20 Jun 2011 09:03:47 +0200 +Subject: [PATCH 061/269] rt: Add local irq locks Introduce locallock. For !RT this maps to preempt_disable()/ local_irq_disable() so there is not much that changes. For RT this will @@ -12,10 +13,14 @@ is held and the owner is preempted. 
Signed-off-by: Thomas Gleixner --- - include/linux/locallock.h | 271 ++++++++++++++++++++++++++++++++++++++++++++++ - include/linux/percpu.h | 29 ++++ + include/linux/locallock.h | 271 ++++++++++++++++++++++++++++++++++++++ + include/linux/percpu.h | 29 ++++ 2 files changed, 300 insertions(+) + create mode 100644 include/linux/locallock.h +diff --git a/include/linux/locallock.h b/include/linux/locallock.h +new file mode 100644 +index 000000000000..d658c2552601 --- /dev/null +++ b/include/linux/locallock.h @@ -0,0 +1,271 @@ @@ -290,6 +295,8 @@ Signed-off-by: Thomas Gleixner +#endif + +#endif +diff --git a/include/linux/percpu.h b/include/linux/percpu.h +index 70b7123f38c7..24421bf8c4b3 100644 --- a/include/linux/percpu.h +++ b/include/linux/percpu.h @@ -19,6 +19,35 @@ @@ -328,3 +335,6 @@ Signed-off-by: Thomas Gleixner /* minimum unit size, also is the maximum supported allocation size */ #define PCPU_MIN_UNIT_SIZE PFN_ALIGN(32 << 10) +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0064-locallock-provide-get-put-_locked_ptr-variants.patch b/kernel/patches-4.19.x-rt/0062-locallock-provide-get-put-_locked_ptr-variants.patch similarity index 70% rename from kernel/patches-4.19.x-rt/0064-locallock-provide-get-put-_locked_ptr-variants.patch rename to kernel/patches-4.19.x-rt/0062-locallock-provide-get-put-_locked_ptr-variants.patch index 14b1bf7ab..2baa52693 100644 --- a/kernel/patches-4.19.x-rt/0064-locallock-provide-get-put-_locked_ptr-variants.patch +++ b/kernel/patches-4.19.x-rt/0062-locallock-provide-get-put-_locked_ptr-variants.patch @@ -1,6 +1,7 @@ +From 251ca7087d744d8b174f8488d2f7ea42cedaccf3 Mon Sep 17 00:00:00 2001 From: Julia Cartwright Date: Mon, 7 May 2018 08:58:56 -0500 -Subject: [PATCH] locallock: provide {get,put}_locked_ptr() variants +Subject: [PATCH 062/269] locallock: provide {get,put}_locked_ptr() variants Provide a set of locallocked accessors for pointers to per-CPU data; this is useful for dynamically-allocated per-CPU regions, for example. 
@@ -11,12 +12,14 @@ variants. Signed-off-by: Julia Cartwright Signed-off-by: Sebastian Andrzej Siewior --- - include/linux/locallock.h | 10 ++++++++++ + include/linux/locallock.h | 10 ++++++++++ 1 file changed, 10 insertions(+) +diff --git a/include/linux/locallock.h b/include/linux/locallock.h +index d658c2552601..921eab83cd34 100644 --- a/include/linux/locallock.h +++ b/include/linux/locallock.h -@@ -222,6 +222,14 @@ static inline int __local_unlock_irqrest +@@ -222,6 +222,14 @@ static inline int __local_unlock_irqrestore(struct local_irq_lock *lv, #define put_locked_var(lvar, var) local_unlock(lvar); @@ -31,7 +34,7 @@ Signed-off-by: Sebastian Andrzej Siewior #define local_lock_cpu(lvar) \ ({ \ local_lock(lvar); \ -@@ -262,6 +270,8 @@ static inline void local_irq_lock_init(i +@@ -262,6 +270,8 @@ static inline void local_irq_lock_init(int lvar) { } #define get_locked_var(lvar, var) get_cpu_var(var) #define put_locked_var(lvar, var) put_cpu_var(var) @@ -40,3 +43,6 @@ Signed-off-by: Sebastian Andrzej Siewior #define local_lock_cpu(lvar) get_cpu() #define local_unlock_cpu(lvar) put_cpu() +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0065-mm-scatterlist-dont-disable-irqs-on-RT.patch b/kernel/patches-4.19.x-rt/0063-mm-scatterlist-Do-not-disable-irqs-on-RT.patch similarity index 62% rename from kernel/patches-4.19.x-rt/0065-mm-scatterlist-dont-disable-irqs-on-RT.patch rename to kernel/patches-4.19.x-rt/0063-mm-scatterlist-Do-not-disable-irqs-on-RT.patch index 792b70d25..7de15391b 100644 --- a/kernel/patches-4.19.x-rt/0065-mm-scatterlist-dont-disable-irqs-on-RT.patch +++ b/kernel/patches-4.19.x-rt/0063-mm-scatterlist-Do-not-disable-irqs-on-RT.patch @@ -1,18 +1,21 @@ +From bdf1c5db6f1c5d8fe706592f9373849948d65813 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Fri, 3 Jul 2009 08:44:34 -0500 -Subject: mm/scatterlist: Do not disable irqs on RT +Subject: [PATCH 063/269] mm/scatterlist: Do not disable irqs on RT For -RT it is enough to keep pagefault disabled 
(which is currently handled by kmap_atomic()). Signed-off-by: Thomas Gleixner --- - lib/scatterlist.c | 2 +- + lib/scatterlist.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) +diff --git a/lib/scatterlist.c b/lib/scatterlist.c +index 7c6096a71704..5c2c68962709 100644 --- a/lib/scatterlist.c +++ b/lib/scatterlist.c -@@ -776,7 +776,7 @@ void sg_miter_stop(struct sg_mapping_ite +@@ -776,7 +776,7 @@ void sg_miter_stop(struct sg_mapping_iter *miter) flush_kernel_dcache_page(miter->page); if (miter->__flags & SG_MITER_ATOMIC) { @@ -21,3 +24,6 @@ Signed-off-by: Thomas Gleixner kunmap_atomic(miter->addr); } else kunmap(miter->page); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0066-oleg-signal-rt-fix.patch b/kernel/patches-4.19.x-rt/0064-signal-x86-Delay-calling-signals-in-atomic.patch similarity index 80% rename from kernel/patches-4.19.x-rt/0066-oleg-signal-rt-fix.patch rename to kernel/patches-4.19.x-rt/0064-signal-x86-Delay-calling-signals-in-atomic.patch index 60ee65ebd..586b93743 100644 --- a/kernel/patches-4.19.x-rt/0066-oleg-signal-rt-fix.patch +++ b/kernel/patches-4.19.x-rt/0064-signal-x86-Delay-calling-signals-in-atomic.patch @@ -1,6 +1,7 @@ +From d892f2116baf1643d4d3c792231c687fa49b71ce Mon Sep 17 00:00:00 2001 From: Oleg Nesterov Date: Tue, 14 Jul 2015 14:26:34 +0200 -Subject: signal/x86: Delay calling signals in atomic +Subject: [PATCH 064/269] signal/x86: Delay calling signals in atomic On x86_64 we must disable preemption before we enable interrupts for stack faults, int3 and debugging, because the current task is using @@ -29,16 +30,17 @@ Signed-off-by: Oleg Nesterov Signed-off-by: Steven Rostedt Signed-off-by: Thomas Gleixner --- - - arch/x86/entry/common.c | 7 +++++++ - arch/x86/include/asm/signal.h | 13 +++++++++++++ - include/linux/sched.h | 4 ++++ - kernel/signal.c | 37 +++++++++++++++++++++++++++++++++++-- + arch/x86/entry/common.c | 7 +++++++ + arch/x86/include/asm/signal.h | 13 ++++++++++++ + include/linux/sched.h | 4 ++++ + 
kernel/signal.c | 37 +++++++++++++++++++++++++++++++++-- 4 files changed, 59 insertions(+), 2 deletions(-) +diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c +index 3b2490b81918..ec46ee700791 100644 --- a/arch/x86/entry/common.c +++ b/arch/x86/entry/common.c -@@ -151,6 +151,13 @@ static void exit_to_usermode_loop(struct +@@ -151,6 +151,13 @@ static void exit_to_usermode_loop(struct pt_regs *regs, u32 cached_flags) if (cached_flags & _TIF_NEED_RESCHED) schedule(); @@ -52,6 +54,8 @@ Signed-off-by: Thomas Gleixner if (cached_flags & _TIF_UPROBE) uprobe_notify_resume(regs); +diff --git a/arch/x86/include/asm/signal.h b/arch/x86/include/asm/signal.h +index 33d3c88a7225..fb0438d06ca7 100644 --- a/arch/x86/include/asm/signal.h +++ b/arch/x86/include/asm/signal.h @@ -28,6 +28,19 @@ typedef struct { @@ -74,6 +78,8 @@ Signed-off-by: Thomas Gleixner #ifndef CONFIG_COMPAT typedef sigset_t compat_sigset_t; #endif +diff --git a/include/linux/sched.h b/include/linux/sched.h +index df39ad5916e7..535e57775208 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -881,6 +881,10 @@ struct task_struct { @@ -87,9 +93,11 @@ Signed-off-by: Thomas Gleixner unsigned long sas_ss_sp; size_t sas_ss_size; unsigned int sas_ss_flags; +diff --git a/kernel/signal.c b/kernel/signal.c +index f29def2be652..57c48b3d1491 100644 --- a/kernel/signal.c +++ b/kernel/signal.c -@@ -1268,8 +1268,8 @@ int do_send_sig_info(int sig, struct sig +@@ -1268,8 +1268,8 @@ int do_send_sig_info(int sig, struct siginfo *info, struct task_struct *p, * We don't want to have recursive SIGSEGV's etc, for example, * that is why we also clear SIGNAL_UNKILLABLE. 
*/ @@ -100,7 +108,7 @@ Signed-off-by: Thomas Gleixner { unsigned long int flags; int ret, blocked, ignored; -@@ -1298,6 +1298,39 @@ force_sig_info(int sig, struct siginfo * +@@ -1298,6 +1298,39 @@ force_sig_info(int sig, struct siginfo *info, struct task_struct *t) return ret; } @@ -140,3 +148,6 @@ Signed-off-by: Thomas Gleixner /* * Nuke all other threads in the group. */ +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0067-x86-signal-delay-calling-signals-on-32bit.patch b/kernel/patches-4.19.x-rt/0065-x86-signal-delay-calling-signals-on-32bit.patch similarity index 82% rename from kernel/patches-4.19.x-rt/0067-x86-signal-delay-calling-signals-on-32bit.patch rename to kernel/patches-4.19.x-rt/0065-x86-signal-delay-calling-signals-on-32bit.patch index e41ae9db7..a6a4c561a 100644 --- a/kernel/patches-4.19.x-rt/0067-x86-signal-delay-calling-signals-on-32bit.patch +++ b/kernel/patches-4.19.x-rt/0065-x86-signal-delay-calling-signals-on-32bit.patch @@ -1,6 +1,7 @@ +From 6828880f532efdf1ded1248f5e0ea555e9520eda Mon Sep 17 00:00:00 2001 From: Yang Shi Date: Thu, 10 Dec 2015 10:58:51 -0800 -Subject: x86/signal: delay calling signals on 32bit +Subject: [PATCH 065/269] x86/signal: delay calling signals on 32bit When running some ptrace single step tests on x86-32 machine, the below problem is triggered: @@ -26,9 +27,11 @@ from IST context") which was merged in v4.1-rc1. 
Signed-off-by: Yang Shi Signed-off-by: Sebastian Andrzej Siewior --- - arch/x86/include/asm/signal.h | 2 +- + arch/x86/include/asm/signal.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) +diff --git a/arch/x86/include/asm/signal.h b/arch/x86/include/asm/signal.h +index fb0438d06ca7..c00e27af2205 100644 --- a/arch/x86/include/asm/signal.h +++ b/arch/x86/include/asm/signal.h @@ -37,7 +37,7 @@ typedef struct { @@ -40,3 +43,6 @@ Signed-off-by: Sebastian Andrzej Siewior #define ARCH_RT_DELAYS_SIGNAL_SEND #endif +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0068-fs-replace-bh_uptodate_lock-for-rt.patch b/kernel/patches-4.19.x-rt/0066-buffer_head-Replace-bh_uptodate_lock-for-rt.patch similarity index 73% rename from kernel/patches-4.19.x-rt/0068-fs-replace-bh_uptodate_lock-for-rt.patch rename to kernel/patches-4.19.x-rt/0066-buffer_head-Replace-bh_uptodate_lock-for-rt.patch index b5e2a42e8..e299bd0df 100644 --- a/kernel/patches-4.19.x-rt/0068-fs-replace-bh_uptodate_lock-for-rt.patch +++ b/kernel/patches-4.19.x-rt/0066-buffer_head-Replace-bh_uptodate_lock-for-rt.patch @@ -1,21 +1,24 @@ +From 651a49976e8e481190cc465a5590940a6f6bbcf9 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Fri, 18 Mar 2011 09:18:52 +0100 -Subject: buffer_head: Replace bh_uptodate_lock for -rt +Subject: [PATCH 066/269] buffer_head: Replace bh_uptodate_lock for -rt Wrap the bit_spin_lock calls into a separate inline and add the RT replacements with a real spinlock. 
Signed-off-by: Thomas Gleixner --- - fs/buffer.c | 21 +++++++-------------- - fs/ext4/page-io.c | 6 ++---- - fs/ntfs/aops.c | 10 +++------- - include/linux/buffer_head.h | 34 ++++++++++++++++++++++++++++++++++ + fs/buffer.c | 21 +++++++-------------- + fs/ext4/page-io.c | 6 ++---- + fs/ntfs/aops.c | 10 +++------- + include/linux/buffer_head.h | 34 ++++++++++++++++++++++++++++++++++ 4 files changed, 46 insertions(+), 25 deletions(-) +diff --git a/fs/buffer.c b/fs/buffer.c +index a550e0d8e965..a5b3a456dbff 100644 --- a/fs/buffer.c +++ b/fs/buffer.c -@@ -273,8 +273,7 @@ static void end_buffer_async_read(struct +@@ -274,8 +274,7 @@ static void end_buffer_async_read(struct buffer_head *bh, int uptodate) * decide that the page is now completely done. */ first = page_buffers(page); @@ -25,7 +28,7 @@ Signed-off-by: Thomas Gleixner clear_buffer_async_read(bh); unlock_buffer(bh); tmp = bh; -@@ -287,8 +286,7 @@ static void end_buffer_async_read(struct +@@ -288,8 +287,7 @@ static void end_buffer_async_read(struct buffer_head *bh, int uptodate) } tmp = tmp->b_this_page; } while (tmp != bh); @@ -35,7 +38,7 @@ Signed-off-by: Thomas Gleixner /* * If none of the buffers had errors and they are all -@@ -300,9 +298,7 @@ static void end_buffer_async_read(struct +@@ -301,9 +299,7 @@ static void end_buffer_async_read(struct buffer_head *bh, int uptodate) return; still_busy: @@ -46,7 +49,7 @@ Signed-off-by: Thomas Gleixner } /* -@@ -329,8 +325,7 @@ void end_buffer_async_write(struct buffe +@@ -330,8 +326,7 @@ void end_buffer_async_write(struct buffer_head *bh, int uptodate) } first = page_buffers(page); @@ -56,7 +59,7 @@ Signed-off-by: Thomas Gleixner clear_buffer_async_write(bh); unlock_buffer(bh); -@@ -342,15 +337,12 @@ void end_buffer_async_write(struct buffe +@@ -343,15 +338,12 @@ void end_buffer_async_write(struct buffer_head *bh, int uptodate) } tmp = tmp->b_this_page; } @@ -74,7 +77,7 @@ Signed-off-by: Thomas Gleixner } EXPORT_SYMBOL(end_buffer_async_write); -@@ -3360,6 +3352,7 @@ 
struct buffer_head *alloc_buffer_head(gf +@@ -3368,6 +3360,7 @@ struct buffer_head *alloc_buffer_head(gfp_t gfp_flags) struct buffer_head *ret = kmem_cache_zalloc(bh_cachep, gfp_flags); if (ret) { INIT_LIST_HEAD(&ret->b_assoc_buffers); @@ -82,9 +85,11 @@ Signed-off-by: Thomas Gleixner preempt_disable(); __this_cpu_inc(bh_accounting.nr); recalc_bh_state(); +diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c +index db7590178dfc..d76364124443 100644 --- a/fs/ext4/page-io.c +++ b/fs/ext4/page-io.c -@@ -95,8 +95,7 @@ static void ext4_finish_bio(struct bio * +@@ -95,8 +95,7 @@ static void ext4_finish_bio(struct bio *bio) * We check all buffers in the page under BH_Uptodate_Lock * to avoid races with other end io clearing async_write flags */ @@ -94,7 +99,7 @@ Signed-off-by: Thomas Gleixner do { if (bh_offset(bh) < bio_start || bh_offset(bh) + bh->b_size > bio_end) { -@@ -108,8 +107,7 @@ static void ext4_finish_bio(struct bio * +@@ -108,8 +107,7 @@ static void ext4_finish_bio(struct bio *bio) if (bio->bi_status) buffer_io_error(bh); } while ((bh = bh->b_this_page) != head); @@ -104,9 +109,11 @@ Signed-off-by: Thomas Gleixner if (!under_io) { #ifdef CONFIG_EXT4_FS_ENCRYPTION if (data_page) +diff --git a/fs/ntfs/aops.c b/fs/ntfs/aops.c +index 8946130c87ad..71d0b3ba70f8 100644 --- a/fs/ntfs/aops.c +++ b/fs/ntfs/aops.c -@@ -106,8 +106,7 @@ static void ntfs_end_buffer_async_read(s +@@ -106,8 +106,7 @@ static void ntfs_end_buffer_async_read(struct buffer_head *bh, int uptodate) "0x%llx.", (unsigned long long)bh->b_blocknr); } first = page_buffers(page); @@ -116,7 +123,7 @@ Signed-off-by: Thomas Gleixner clear_buffer_async_read(bh); unlock_buffer(bh); tmp = bh; -@@ -122,8 +121,7 @@ static void ntfs_end_buffer_async_read(s +@@ -122,8 +121,7 @@ static void ntfs_end_buffer_async_read(struct buffer_head *bh, int uptodate) } tmp = tmp->b_this_page; } while (tmp != bh); @@ -126,7 +133,7 @@ Signed-off-by: Thomas Gleixner /* * If none of the buffers had errors then we can set the page 
uptodate, * but we first have to perform the post read mst fixups, if the -@@ -156,9 +154,7 @@ static void ntfs_end_buffer_async_read(s +@@ -156,9 +154,7 @@ static void ntfs_end_buffer_async_read(struct buffer_head *bh, int uptodate) unlock_page(page); return; still_busy: @@ -137,6 +144,8 @@ Signed-off-by: Thomas Gleixner } /** +diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h +index 96225a77c112..8a1bcfb145d7 100644 --- a/include/linux/buffer_head.h +++ b/include/linux/buffer_head.h @@ -76,8 +76,42 @@ struct buffer_head { @@ -182,3 +191,6 @@ Signed-off-by: Thomas Gleixner /* * macro tricks to expand the set_buffer_foo(), clear_buffer_foo() * and buffer_foo() functions. +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0069-fs-jbd-replace-bh_state-lock.patch b/kernel/patches-4.19.x-rt/0067-fs-jbd-jbd2-Make-state-lock-and-journal-head-lock-rt.patch similarity index 76% rename from kernel/patches-4.19.x-rt/0069-fs-jbd-replace-bh_state-lock.patch rename to kernel/patches-4.19.x-rt/0067-fs-jbd-jbd2-Make-state-lock-and-journal-head-lock-rt.patch index a56bc6909..dea54a626 100644 --- a/kernel/patches-4.19.x-rt/0069-fs-jbd-replace-bh_state-lock.patch +++ b/kernel/patches-4.19.x-rt/0067-fs-jbd-jbd2-Make-state-lock-and-journal-head-lock-rt.patch @@ -1,6 +1,8 @@ +From 6107effb93a85ff7db4857dca4a0acc2ec4a7d5c Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Fri, 18 Mar 2011 10:11:25 +0100 -Subject: fs: jbd/jbd2: Make state lock and journal head lock rt safe +Subject: [PATCH 067/269] fs: jbd/jbd2: Make state lock and journal head lock + rt safe bit_spin_locks break under RT. 
@@ -10,7 +12,13 @@ Signed-off-by: Thomas Gleixner include/linux/buffer_head.h | 8 ++++++++ include/linux/jbd2.h | 24 ++++++++++++++++++++++++ 2 files changed, 32 insertions(+) +--- + include/linux/buffer_head.h | 8 ++++++++ + include/linux/jbd2.h | 24 ++++++++++++++++++++++++ + 2 files changed, 32 insertions(+) +diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h +index 8a1bcfb145d7..5869330d1f38 100644 --- a/include/linux/buffer_head.h +++ b/include/linux/buffer_head.h @@ -78,6 +78,10 @@ struct buffer_head { @@ -24,7 +32,7 @@ Signed-off-by: Thomas Gleixner #endif }; -@@ -109,6 +113,10 @@ static inline void buffer_head_init_lock +@@ -109,6 +113,10 @@ static inline void buffer_head_init_locks(struct buffer_head *bh) { #ifdef CONFIG_PREEMPT_RT_BASE spin_lock_init(&bh->b_uptodate_lock); @@ -35,9 +43,11 @@ Signed-off-by: Thomas Gleixner #endif } +diff --git a/include/linux/jbd2.h b/include/linux/jbd2.h +index b708e5169d1d..018665350951 100644 --- a/include/linux/jbd2.h +++ b/include/linux/jbd2.h -@@ -347,32 +347,56 @@ static inline struct journal_head *bh2jh +@@ -347,32 +347,56 @@ static inline struct journal_head *bh2jh(struct buffer_head *bh) static inline void jbd_lock_bh_state(struct buffer_head *bh) { @@ -94,3 +104,6 @@ Signed-off-by: Thomas Gleixner } #define J_ASSERT(assert) BUG_ON(!(assert)) +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0070-list_bl.h-make-list-head-locking-RT-safe.patch b/kernel/patches-4.19.x-rt/0068-list_bl-Make-list-head-locking-RT-safe.patch similarity index 90% rename from kernel/patches-4.19.x-rt/0070-list_bl.h-make-list-head-locking-RT-safe.patch rename to kernel/patches-4.19.x-rt/0068-list_bl-Make-list-head-locking-RT-safe.patch index 9ea1a600d..fe64a30c3 100644 --- a/kernel/patches-4.19.x-rt/0070-list_bl.h-make-list-head-locking-RT-safe.patch +++ b/kernel/patches-4.19.x-rt/0068-list_bl-Make-list-head-locking-RT-safe.patch @@ -1,6 +1,7 @@ +From 44a67462ebab9e354cfa669144248912fa92ca24 Mon Sep 17 00:00:00 2001 
From: Paul Gortmaker Date: Fri, 21 Jun 2013 15:07:25 -0400 -Subject: list_bl: Make list head locking RT safe +Subject: [PATCH 068/269] list_bl: Make list head locking RT safe As per changes in include/linux/jbd_common.h for avoiding the bit_spin_locks on RT ("fs: jbd/jbd2: Make state lock and journal @@ -47,9 +48,11 @@ concern. Signed-off-by: Paul Gortmaker Signed-off-by: Sebastian Andrzej Siewior --- - include/linux/list_bl.h | 28 ++++++++++++++++++++++++++-- + include/linux/list_bl.h | 28 ++++++++++++++++++++++++++-- 1 file changed, 26 insertions(+), 2 deletions(-) +diff --git a/include/linux/list_bl.h b/include/linux/list_bl.h +index 3fc2cc57ba1b..69b659259bac 100644 --- a/include/linux/list_bl.h +++ b/include/linux/list_bl.h @@ -3,6 +3,7 @@ @@ -85,7 +88,7 @@ Signed-off-by: Sebastian Andrzej Siewior static inline void INIT_HLIST_BL_NODE(struct hlist_bl_node *h) { -@@ -119,12 +129,26 @@ static inline void hlist_bl_del_init(str +@@ -119,12 +129,26 @@ static inline void hlist_bl_del_init(struct hlist_bl_node *n) static inline void hlist_bl_lock(struct hlist_bl_head *b) { @@ -112,3 +115,6 @@ Signed-off-by: Sebastian Andrzej Siewior } static inline bool hlist_bl_is_locked(struct hlist_bl_head *b) +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0071-list_bl-fixup-bogus-lockdep-warning.patch b/kernel/patches-4.19.x-rt/0069-list_bl-fixup-bogus-lockdep-warning.patch similarity index 90% rename from kernel/patches-4.19.x-rt/0071-list_bl-fixup-bogus-lockdep-warning.patch rename to kernel/patches-4.19.x-rt/0069-list_bl-fixup-bogus-lockdep-warning.patch index 43ebad4b9..dd13f6d1c 100644 --- a/kernel/patches-4.19.x-rt/0071-list_bl-fixup-bogus-lockdep-warning.patch +++ b/kernel/patches-4.19.x-rt/0069-list_bl-fixup-bogus-lockdep-warning.patch @@ -1,6 +1,7 @@ +From 20f64514264a9d0ea1533f4743f542a1fb056a16 Mon Sep 17 00:00:00 2001 From: Josh Cartwright Date: Thu, 31 Mar 2016 00:04:25 -0500 -Subject: [PATCH] list_bl: fixup bogus lockdep warning +Subject: [PATCH 069/269] 
list_bl: fixup bogus lockdep warning At first glance, the use of 'static inline' seems appropriate for INIT_HLIST_BL_HEAD(). @@ -69,9 +70,11 @@ Tested-by: Luis Claudio R. Goncalves Signed-off-by: Josh Cartwright Signed-off-by: Sebastian Andrzej Siewior --- - include/linux/list_bl.h | 12 +++++++----- + include/linux/list_bl.h | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) +diff --git a/include/linux/list_bl.h b/include/linux/list_bl.h +index 69b659259bac..0b5de7d9ffcf 100644 --- a/include/linux/list_bl.h +++ b/include/linux/list_bl.h @@ -43,13 +43,15 @@ struct hlist_bl_node { @@ -95,3 +98,6 @@ Signed-off-by: Sebastian Andrzej Siewior static inline void INIT_HLIST_BL_NODE(struct hlist_bl_node *h) { +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0072-genirq-disable-irqpoll-on-rt.patch b/kernel/patches-4.19.x-rt/0070-genirq-Disable-irqpoll-on-rt.patch similarity index 72% rename from kernel/patches-4.19.x-rt/0072-genirq-disable-irqpoll-on-rt.patch rename to kernel/patches-4.19.x-rt/0070-genirq-Disable-irqpoll-on-rt.patch index 4b0751e6c..5e2c63346 100644 --- a/kernel/patches-4.19.x-rt/0072-genirq-disable-irqpoll-on-rt.patch +++ b/kernel/patches-4.19.x-rt/0070-genirq-Disable-irqpoll-on-rt.patch @@ -1,19 +1,21 @@ +From 7520cd851f5733f5e69fe73008893f4be48506f9 Mon Sep 17 00:00:00 2001 From: Ingo Molnar Date: Fri, 3 Jul 2009 08:29:57 -0500 -Subject: genirq: Disable irqpoll on -rt +Subject: [PATCH 070/269] genirq: Disable irqpoll on -rt Creates long latencies for no value Signed-off-by: Ingo Molnar Signed-off-by: Thomas Gleixner - --- - kernel/irq/spurious.c | 8 ++++++++ + kernel/irq/spurious.c | 8 ++++++++ 1 file changed, 8 insertions(+) +diff --git a/kernel/irq/spurious.c b/kernel/irq/spurious.c +index d867d6ddafdd..cd12ee86c01e 100644 --- a/kernel/irq/spurious.c +++ b/kernel/irq/spurious.c -@@ -442,6 +442,10 @@ MODULE_PARM_DESC(noirqdebug, "Disable ir +@@ -442,6 +442,10 @@ MODULE_PARM_DESC(noirqdebug, "Disable irq lockup detection when true"); 
static int __init irqfixup_setup(char *str) { @@ -35,3 +37,6 @@ Signed-off-by: Thomas Gleixner irqfixup = 2; printk(KERN_WARNING "Misrouted IRQ fixup and polling support " "enabled\n"); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0073-genirq-force-threading.patch b/kernel/patches-4.19.x-rt/0071-genirq-Force-interrupt-thread-on-RT.patch similarity index 58% rename from kernel/patches-4.19.x-rt/0073-genirq-force-threading.patch rename to kernel/patches-4.19.x-rt/0071-genirq-Force-interrupt-thread-on-RT.patch index d89fe34ce..cddab1f9e 100644 --- a/kernel/patches-4.19.x-rt/0073-genirq-force-threading.patch +++ b/kernel/patches-4.19.x-rt/0071-genirq-Force-interrupt-thread-on-RT.patch @@ -1,19 +1,22 @@ -Subject: genirq: Force interrupt thread on RT +From 22860bd2c33dc3abc1b0aa695f8f455595762a93 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner -Date: Sun, 03 Apr 2011 11:57:29 +0200 +Date: Sun, 3 Apr 2011 11:57:29 +0200 +Subject: [PATCH 071/269] genirq: Force interrupt thread on RT Force threaded_irqs and optimize the code (force_irqthreads) in regard to this. 
Signed-off-by: Thomas Gleixner --- - include/linux/interrupt.h | 4 ++++ - kernel/irq/manage.c | 2 ++ + include/linux/interrupt.h | 4 ++++ + kernel/irq/manage.c | 2 ++ 2 files changed, 6 insertions(+) +diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h +index eeceac3376fc..315f852b4981 100644 --- a/include/linux/interrupt.h +++ b/include/linux/interrupt.h -@@ -427,7 +427,11 @@ extern int irq_set_irqchip_state(unsigne +@@ -427,7 +427,11 @@ extern int irq_set_irqchip_state(unsigned int irq, enum irqchip_irq_state which, bool state); #ifdef CONFIG_IRQ_FORCED_THREADING @@ -25,6 +28,8 @@ Signed-off-by: Thomas Gleixner #else #define force_irqthreads (0) #endif +diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c +index 5c0ba5ca5930..94a18cf54293 100644 --- a/kernel/irq/manage.c +++ b/kernel/irq/manage.c @@ -23,6 +23,7 @@ @@ -35,7 +40,7 @@ Signed-off-by: Thomas Gleixner __read_mostly bool force_irqthreads; EXPORT_SYMBOL_GPL(force_irqthreads); -@@ -32,6 +33,7 @@ static int __init setup_forced_irqthread +@@ -32,6 +33,7 @@ static int __init setup_forced_irqthreads(char *arg) return 0; } early_param("threadirqs", setup_forced_irqthreads); @@ -43,3 +48,6 @@ Signed-off-by: Thomas Gleixner #endif static void __synchronize_hardirq(struct irq_desc *desc) +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0074-0001-Split-IRQ-off-and-zone-lock-while-freeing-pages-from.patch b/kernel/patches-4.19.x-rt/0072-Split-IRQ-off-and-zone-lock-while-freeing-pages-from.patch similarity index 85% rename from kernel/patches-4.19.x-rt/0074-0001-Split-IRQ-off-and-zone-lock-while-freeing-pages-from.patch rename to kernel/patches-4.19.x-rt/0072-Split-IRQ-off-and-zone-lock-while-freeing-pages-from.patch index f0996b587..614b743f0 100644 --- a/kernel/patches-4.19.x-rt/0074-0001-Split-IRQ-off-and-zone-lock-while-freeing-pages-from.patch +++ b/kernel/patches-4.19.x-rt/0072-Split-IRQ-off-and-zone-lock-while-freeing-pages-from.patch @@ -1,7 +1,8 @@ +From 
3c22477fe8ef4919a3fb0314834751ad2e2134d8 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Mon, 28 May 2018 15:24:20 +0200 -Subject: [PATCH 1/4] Split IRQ-off and zone->lock while freeing pages from PCP - list #1 +Subject: [PATCH 072/269] Split IRQ-off and zone->lock while freeing pages from + PCP list #1 Split the IRQ-off section while accessing the PCP list from zone->lock while freeing pages. @@ -12,12 +13,14 @@ free_pcppages_bulk(). Signed-off-by: Peter Zijlstra Signed-off-by: Sebastian Andrzej Siewior --- - mm/page_alloc.c | 82 +++++++++++++++++++++++++++++++++++--------------------- + mm/page_alloc.c | 82 +++++++++++++++++++++++++++++++------------------ 1 file changed, 52 insertions(+), 30 deletions(-) +diff --git a/mm/page_alloc.c b/mm/page_alloc.c +index 8e6932a140b8..8c10f34364c0 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c -@@ -1095,7 +1095,7 @@ static inline void prefetch_buddy(struct +@@ -1095,7 +1095,7 @@ static inline void prefetch_buddy(struct page *page) } /* @@ -26,7 +29,7 @@ Signed-off-by: Sebastian Andrzej Siewior * Assumes all pages on list are in same zone, and of same order. * count is the number of pages to free. * -@@ -1106,14 +1106,41 @@ static inline void prefetch_buddy(struct +@@ -1106,14 +1106,41 @@ static inline void prefetch_buddy(struct page *page) * pinned" detection logic. 
*/ static void free_pcppages_bulk(struct zone *zone, int count, @@ -72,7 +75,7 @@ Signed-off-by: Sebastian Andrzej Siewior while (count) { struct list_head *list; -@@ -1145,7 +1172,7 @@ static void free_pcppages_bulk(struct zo +@@ -1145,7 +1172,7 @@ static void free_pcppages_bulk(struct zone *zone, int count, if (bulkfree_pcp_prepare(page)) continue; @@ -81,7 +84,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * We are going to put the page back to the global -@@ -1160,26 +1187,6 @@ static void free_pcppages_bulk(struct zo +@@ -1160,26 +1187,6 @@ static void free_pcppages_bulk(struct zone *zone, int count, prefetch_buddy(page); } while (--count && --batch_free && !list_empty(list)); } @@ -108,7 +111,7 @@ Signed-off-by: Sebastian Andrzej Siewior } static void free_one_page(struct zone *zone, -@@ -2536,13 +2543,18 @@ void drain_zone_pages(struct zone *zone, +@@ -2536,13 +2543,18 @@ void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp) { unsigned long flags; int to_drain, batch; @@ -128,7 +131,7 @@ Signed-off-by: Sebastian Andrzej Siewior } #endif -@@ -2558,14 +2570,21 @@ static void drain_pages_zone(unsigned in +@@ -2558,14 +2570,21 @@ static void drain_pages_zone(unsigned int cpu, struct zone *zone) unsigned long flags; struct per_cpu_pageset *pset; struct per_cpu_pages *pcp; @@ -152,7 +155,7 @@ Signed-off-by: Sebastian Andrzej Siewior } /* -@@ -2787,7 +2806,10 @@ static void free_unref_page_commit(struc +@@ -2787,7 +2806,10 @@ static void free_unref_page_commit(struct page *page, unsigned long pfn) pcp->count++; if (pcp->count >= pcp->high) { unsigned long batch = READ_ONCE(pcp->batch); @@ -164,3 +167,6 @@ Signed-off-by: Sebastian Andrzej Siewior } } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0075-0002-Split-IRQ-off-and-zone-lock-while-freeing-pages-from.patch b/kernel/patches-4.19.x-rt/0073-Split-IRQ-off-and-zone-lock-while-freeing-pages-from.patch similarity index 81% rename from 
kernel/patches-4.19.x-rt/0075-0002-Split-IRQ-off-and-zone-lock-while-freeing-pages-from.patch rename to kernel/patches-4.19.x-rt/0073-Split-IRQ-off-and-zone-lock-while-freeing-pages-from.patch index e9b94d119..21ec6592e 100644 --- a/kernel/patches-4.19.x-rt/0075-0002-Split-IRQ-off-and-zone-lock-while-freeing-pages-from.patch +++ b/kernel/patches-4.19.x-rt/0073-Split-IRQ-off-and-zone-lock-while-freeing-pages-from.patch @@ -1,7 +1,8 @@ +From e4639c8f6abcfb4b8b26aa296089349739103578 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Mon, 28 May 2018 15:24:21 +0200 -Subject: [PATCH 2/4] Split IRQ-off and zone->lock while freeing pages from PCP - list #2 +Subject: [PATCH 073/269] Split IRQ-off and zone->lock while freeing pages from + PCP list #2 Split the IRQ-off section while accessing the PCP list from zone->lock while freeing pages. @@ -12,12 +13,14 @@ free_pcppages_bulk(). Signed-off-by: Peter Zijlstra Signed-off-by: Sebastian Andrzej Siewior --- - mm/page_alloc.c | 60 ++++++++++++++++++++++++++++++++++++++++++++++---------- + mm/page_alloc.c | 60 ++++++++++++++++++++++++++++++++++++++++--------- 1 file changed, 50 insertions(+), 10 deletions(-) +diff --git a/mm/page_alloc.c b/mm/page_alloc.c +index 8c10f34364c0..4d630aebd84f 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c -@@ -1105,8 +1105,8 @@ static inline void prefetch_buddy(struct +@@ -1105,8 +1105,8 @@ static inline void prefetch_buddy(struct page *page) * And clear the zone's pages_scanned counter, to hold off the "all pages are * pinned" detection logic. 
*/ @@ -28,7 +31,7 @@ Signed-off-by: Sebastian Andrzej Siewior { bool isolated_pageblocks; struct page *page, *tmp; -@@ -1121,12 +1121,27 @@ static void free_pcppages_bulk(struct zo +@@ -1121,12 +1121,27 @@ static void free_pcppages_bulk(struct zone *zone, int count, */ list_for_each_entry_safe(page, tmp, head, lru) { int mt = get_pcppage_migratetype(page); @@ -56,7 +59,7 @@ Signed-off-by: Sebastian Andrzej Siewior __free_one_page(page, page_to_pfn(page), zone, 0, mt); trace_mm_page_pcpu_drain(page, 0, mt); } -@@ -2554,7 +2569,7 @@ void drain_zone_pages(struct zone *zone, +@@ -2554,7 +2569,7 @@ void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp) local_irq_restore(flags); if (to_drain > 0) @@ -65,7 +68,7 @@ Signed-off-by: Sebastian Andrzej Siewior } #endif -@@ -2584,7 +2599,7 @@ static void drain_pages_zone(unsigned in +@@ -2584,7 +2599,7 @@ static void drain_pages_zone(unsigned int cpu, struct zone *zone) local_irq_restore(flags); if (count) @@ -74,7 +77,7 @@ Signed-off-by: Sebastian Andrzej Siewior } /* -@@ -2777,7 +2792,8 @@ static bool free_unref_page_prepare(stru +@@ -2777,7 +2792,8 @@ static bool free_unref_page_prepare(struct page *page, unsigned long pfn) return true; } @@ -84,7 +87,7 @@ Signed-off-by: Sebastian Andrzej Siewior { struct zone *zone = page_zone(page); struct per_cpu_pages *pcp; -@@ -2806,10 +2822,8 @@ static void free_unref_page_commit(struc +@@ -2806,10 +2822,8 @@ static void free_unref_page_commit(struct page *page, unsigned long pfn) pcp->count++; if (pcp->count >= pcp->high) { unsigned long batch = READ_ONCE(pcp->batch); @@ -115,7 +118,7 @@ Signed-off-by: Sebastian Andrzej Siewior } /* -@@ -2837,6 +2855,11 @@ void free_unref_page_list(struct list_he +@@ -2837,6 +2855,11 @@ void free_unref_page_list(struct list_head *list) struct page *page, *next; unsigned long flags, pfn; int batch_count = 0; @@ -127,7 +130,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* Prepare pages for freeing */ list_for_each_entry_safe(page, next, 
list, lru) { -@@ -2849,10 +2872,12 @@ void free_unref_page_list(struct list_he +@@ -2849,10 +2872,12 @@ void free_unref_page_list(struct list_head *list) local_irq_save(flags); list_for_each_entry_safe(page, next, list, lru) { unsigned long pfn = page_private(page); @@ -141,7 +144,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * Guard against excessive IRQ disabled times when we get -@@ -2865,6 +2890,21 @@ void free_unref_page_list(struct list_he +@@ -2865,6 +2890,21 @@ void free_unref_page_list(struct list_head *list) } } local_irq_restore(flags); @@ -163,3 +166,6 @@ Signed-off-by: Sebastian Andrzej Siewior } /* +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0076-0003-mm-SLxB-change-list_lock-to-raw_spinlock_t.patch b/kernel/patches-4.19.x-rt/0074-mm-SLxB-change-list_lock-to-raw_spinlock_t.patch similarity index 73% rename from kernel/patches-4.19.x-rt/0076-0003-mm-SLxB-change-list_lock-to-raw_spinlock_t.patch rename to kernel/patches-4.19.x-rt/0074-mm-SLxB-change-list_lock-to-raw_spinlock_t.patch index f0bc5d128..ff15da48a 100644 --- a/kernel/patches-4.19.x-rt/0076-0003-mm-SLxB-change-list_lock-to-raw_spinlock_t.patch +++ b/kernel/patches-4.19.x-rt/0074-mm-SLxB-change-list_lock-to-raw_spinlock_t.patch @@ -1,6 +1,7 @@ +From 21da9341b8a6c5d9308bf0c2fa3fe4647749f125 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Mon, 28 May 2018 15:24:22 +0200 -Subject: [PATCH 3/4] mm/SLxB: change list_lock to raw_spinlock_t +Subject: [PATCH 074/269] mm/SLxB: change list_lock to raw_spinlock_t The list_lock is used with IRQs off on RT. Make it a raw_spinlock_t otherwise the interrupts won't be disabled on -RT. The locking rules remain @@ -11,14 +12,16 @@ file for struct kmem_cache_node definition.
Signed-off-by: Thomas Gleixner Signed-off-by: Sebastian Andrzej Siewior --- - mm/slab.c | 94 +++++++++++++++++++++++++++++++------------------------------- - mm/slab.h | 2 - - mm/slub.c | 50 ++++++++++++++++---------------- + mm/slab.c | 94 +++++++++++++++++++++++++++---------------------------- + mm/slab.h | 2 +- + mm/slub.c | 50 ++++++++++++++--------------- 3 files changed, 73 insertions(+), 73 deletions(-) +diff --git a/mm/slab.c b/mm/slab.c +index b8e0ec74330f..21fe15fb9624 100644 --- a/mm/slab.c +++ b/mm/slab.c -@@ -233,7 +233,7 @@ static void kmem_cache_node_init(struct +@@ -233,7 +233,7 @@ static void kmem_cache_node_init(struct kmem_cache_node *parent) parent->shared = NULL; parent->alien = NULL; parent->colour_next = 0; @@ -27,7 +30,7 @@ Signed-off-by: Sebastian Andrzej Siewior parent->free_objects = 0; parent->free_touched = 0; } -@@ -600,9 +600,9 @@ static noinline void cache_free_pfmemall +@@ -600,9 +600,9 @@ static noinline void cache_free_pfmemalloc(struct kmem_cache *cachep, page_node = page_to_nid(page); n = get_node(cachep, page_node); @@ -39,7 +42,7 @@ Signed-off-by: Sebastian Andrzej Siewior slabs_destroy(cachep, &list); } -@@ -730,7 +730,7 @@ static void __drain_alien_cache(struct k +@@ -731,7 +731,7 @@ static void __drain_alien_cache(struct kmem_cache *cachep, struct kmem_cache_node *n = get_node(cachep, node); if (ac->avail) { @@ -48,7 +51,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * Stuff objects into the remote nodes shared array first. 
* That way we could avoid the overhead of putting the objects -@@ -741,7 +741,7 @@ static void __drain_alien_cache(struct k +@@ -742,7 +742,7 @@ static void __drain_alien_cache(struct kmem_cache *cachep, free_block(cachep, ac->entry, ac->avail, node, list); ac->avail = 0; @@ -57,7 +60,7 @@ Signed-off-by: Sebastian Andrzej Siewior } } -@@ -814,9 +814,9 @@ static int __cache_free_alien(struct kme +@@ -815,9 +815,9 @@ static int __cache_free_alien(struct kmem_cache *cachep, void *objp, slabs_destroy(cachep, &list); } else { n = get_node(cachep, page_node); @@ -69,7 +72,7 @@ Signed-off-by: Sebastian Andrzej Siewior slabs_destroy(cachep, &list); } return 1; -@@ -857,10 +857,10 @@ static int init_cache_node(struct kmem_c +@@ -858,10 +858,10 @@ static int init_cache_node(struct kmem_cache *cachep, int node, gfp_t gfp) */ n = get_node(cachep, node); if (n) { @@ -82,7 +85,7 @@ Signed-off-by: Sebastian Andrzej Siewior return 0; } -@@ -939,7 +939,7 @@ static int setup_kmem_cache_node(struct +@@ -940,7 +940,7 @@ static int setup_kmem_cache_node(struct kmem_cache *cachep, goto fail; n = get_node(cachep, node); @@ -91,7 +94,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (n->shared && force_change) { free_block(cachep, n->shared->entry, n->shared->avail, node, &list); -@@ -957,7 +957,7 @@ static int setup_kmem_cache_node(struct +@@ -958,7 +958,7 @@ static int setup_kmem_cache_node(struct kmem_cache *cachep, new_alien = NULL; } @@ -100,7 +103,7 @@ Signed-off-by: Sebastian Andrzej Siewior slabs_destroy(cachep, &list); /* -@@ -996,7 +996,7 @@ static void cpuup_canceled(long cpu) +@@ -997,7 +997,7 @@ static void cpuup_canceled(long cpu) if (!n) continue; @@ -109,7 +112,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* Free limit for this kmem_cache_node */ n->free_limit -= cachep->batchcount; -@@ -1009,7 +1009,7 @@ static void cpuup_canceled(long cpu) +@@ -1010,7 +1010,7 @@ static void cpuup_canceled(long cpu) } if (!cpumask_empty(mask)) { @@ -118,7 +121,7 @@ Signed-off-by: 
Sebastian Andrzej Siewior goto free_slab; } -@@ -1023,7 +1023,7 @@ static void cpuup_canceled(long cpu) +@@ -1024,7 +1024,7 @@ static void cpuup_canceled(long cpu) alien = n->alien; n->alien = NULL; @@ -127,7 +130,7 @@ Signed-off-by: Sebastian Andrzej Siewior kfree(shared); if (alien) { -@@ -1207,7 +1207,7 @@ static void __init init_list(struct kmem +@@ -1208,7 +1208,7 @@ static void __init init_list(struct kmem_cache *cachep, struct kmem_cache_node * /* * Do not assume that spinlocks can be initialized via memcpy: */ @@ -136,7 +139,7 @@ Signed-off-by: Sebastian Andrzej Siewior MAKE_ALL_LISTS(cachep, ptr, nodeid); cachep->node[nodeid] = ptr; -@@ -1378,11 +1378,11 @@ slab_out_of_memory(struct kmem_cache *ca +@@ -1379,11 +1379,11 @@ slab_out_of_memory(struct kmem_cache *cachep, gfp_t gfpflags, int nodeid) for_each_kmem_cache_node(cachep, node, n) { unsigned long total_slabs, free_slabs, free_objs; @@ -150,7 +153,7 @@ Signed-off-by: Sebastian Andrzej Siewior pr_warn(" node %d: slabs: %ld/%ld, objs: %ld/%ld\n", node, total_slabs - free_slabs, total_slabs, -@@ -2175,7 +2175,7 @@ static void check_spinlock_acquired(stru +@@ -2178,7 +2178,7 @@ static void check_spinlock_acquired(struct kmem_cache *cachep) { #ifdef CONFIG_SMP check_irq_off(); @@ -159,7 +162,7 @@ Signed-off-by: Sebastian Andrzej Siewior #endif } -@@ -2183,7 +2183,7 @@ static void check_spinlock_acquired_node +@@ -2186,7 +2186,7 @@ static void check_spinlock_acquired_node(struct kmem_cache *cachep, int node) { #ifdef CONFIG_SMP check_irq_off(); @@ -168,7 +171,7 @@ Signed-off-by: Sebastian Andrzej Siewior #endif } -@@ -2223,9 +2223,9 @@ static void do_drain(void *arg) +@@ -2226,9 +2226,9 @@ static void do_drain(void *arg) check_irq_off(); ac = cpu_cache_get(cachep); n = get_node(cachep, node); @@ -180,7 +183,7 @@ Signed-off-by: Sebastian Andrzej Siewior slabs_destroy(cachep, &list); ac->avail = 0; } -@@ -2243,9 +2243,9 @@ static void drain_cpu_caches(struct kmem +@@ -2246,9 +2246,9 @@ static void 
drain_cpu_caches(struct kmem_cache *cachep) drain_alien_cache(cachep, n->alien); for_each_kmem_cache_node(cachep, node, n) { @@ -192,7 +195,7 @@ Signed-off-by: Sebastian Andrzej Siewior slabs_destroy(cachep, &list); } -@@ -2267,10 +2267,10 @@ static int drain_freelist(struct kmem_ca +@@ -2270,10 +2270,10 @@ static int drain_freelist(struct kmem_cache *cache, nr_freed = 0; while (nr_freed < tofree && !list_empty(&n->slabs_free)) { @@ -205,7 +208,7 @@ Signed-off-by: Sebastian Andrzej Siewior goto out; } -@@ -2283,7 +2283,7 @@ static int drain_freelist(struct kmem_ca +@@ -2286,7 +2286,7 @@ static int drain_freelist(struct kmem_cache *cache, * to the cache. */ n->free_objects -= cache->num; @@ -214,7 +217,7 @@ Signed-off-by: Sebastian Andrzej Siewior slab_destroy(cache, page); nr_freed++; } -@@ -2731,7 +2731,7 @@ static void cache_grow_end(struct kmem_c +@@ -2734,7 +2734,7 @@ static void cache_grow_end(struct kmem_cache *cachep, struct page *page) INIT_LIST_HEAD(&page->lru); n = get_node(cachep, page_to_nid(page)); @@ -223,7 +226,7 @@ Signed-off-by: Sebastian Andrzej Siewior n->total_slabs++; if (!page->active) { list_add_tail(&page->lru, &(n->slabs_free)); -@@ -2741,7 +2741,7 @@ static void cache_grow_end(struct kmem_c +@@ -2744,7 +2744,7 @@ static void cache_grow_end(struct kmem_cache *cachep, struct page *page) STATS_INC_GROWN(cachep); n->free_objects += cachep->num - page->active; @@ -232,7 +235,7 @@ Signed-off-by: Sebastian Andrzej Siewior fixup_objfreelist_debug(cachep, &list); } -@@ -2909,7 +2909,7 @@ static struct page *get_first_slab(struc +@@ -2912,7 +2912,7 @@ static struct page *get_first_slab(struct kmem_cache_node *n, bool pfmemalloc) { struct page *page; @@ -241,7 +244,7 @@ Signed-off-by: Sebastian Andrzej Siewior page = list_first_entry_or_null(&n->slabs_partial, struct page, lru); if (!page) { n->free_touched = 1; -@@ -2935,10 +2935,10 @@ static noinline void *cache_alloc_pfmema +@@ -2938,10 +2938,10 @@ static noinline void 
*cache_alloc_pfmemalloc(struct kmem_cache *cachep, if (!gfp_pfmemalloc_allowed(flags)) return NULL; @@ -254,7 +257,7 @@ Signed-off-by: Sebastian Andrzej Siewior return NULL; } -@@ -2947,7 +2947,7 @@ static noinline void *cache_alloc_pfmema +@@ -2950,7 +2950,7 @@ static noinline void *cache_alloc_pfmemalloc(struct kmem_cache *cachep, fixup_slab_list(cachep, n, page, &list); @@ -263,7 +266,7 @@ Signed-off-by: Sebastian Andrzej Siewior fixup_objfreelist_debug(cachep, &list); return obj; -@@ -3006,7 +3006,7 @@ static void *cache_alloc_refill(struct k +@@ -3009,7 +3009,7 @@ static void *cache_alloc_refill(struct kmem_cache *cachep, gfp_t flags) if (!n->free_objects && (!shared || !shared->avail)) goto direct_grow; @@ -272,7 +275,7 @@ Signed-off-by: Sebastian Andrzej Siewior shared = READ_ONCE(n->shared); /* See if we can refill from the shared array */ -@@ -3030,7 +3030,7 @@ static void *cache_alloc_refill(struct k +@@ -3033,7 +3033,7 @@ static void *cache_alloc_refill(struct kmem_cache *cachep, gfp_t flags) must_grow: n->free_objects -= ac->avail; alloc_done: @@ -281,7 +284,7 @@ Signed-off-by: Sebastian Andrzej Siewior fixup_objfreelist_debug(cachep, &list); direct_grow: -@@ -3255,7 +3255,7 @@ static void *____cache_alloc_node(struct +@@ -3258,7 +3258,7 @@ static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, BUG_ON(!n); check_irq_off(); @@ -290,7 +293,7 @@ Signed-off-by: Sebastian Andrzej Siewior page = get_first_slab(n, false); if (!page) goto must_grow; -@@ -3273,12 +3273,12 @@ static void *____cache_alloc_node(struct +@@ -3276,12 +3276,12 @@ static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, fixup_slab_list(cachep, n, page, &list); @@ -305,7 +308,7 @@ Signed-off-by: Sebastian Andrzej Siewior page = cache_grow_begin(cachep, gfp_exact_node(flags), nodeid); if (page) { /* This slab isn't counted yet so don't update free_objects */ -@@ -3454,7 +3454,7 @@ static void cache_flusharray(struct kmem +@@ -3457,7 +3457,7 @@ static 
void cache_flusharray(struct kmem_cache *cachep, struct array_cache *ac) check_irq_off(); n = get_node(cachep, node); @@ -314,7 +317,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (n->shared) { struct array_cache *shared_array = n->shared; int max = shared_array->limit - shared_array->avail; -@@ -3483,7 +3483,7 @@ static void cache_flusharray(struct kmem +@@ -3486,7 +3486,7 @@ static void cache_flusharray(struct kmem_cache *cachep, struct array_cache *ac) STATS_SET_FREEABLE(cachep, i); } #endif @@ -323,7 +326,7 @@ Signed-off-by: Sebastian Andrzej Siewior slabs_destroy(cachep, &list); ac->avail -= batchcount; memmove(ac->entry, &(ac->entry[batchcount]), sizeof(void *)*ac->avail); -@@ -3893,9 +3893,9 @@ static int __do_tune_cpucache(struct kme +@@ -3896,9 +3896,9 @@ static int __do_tune_cpucache(struct kmem_cache *cachep, int limit, node = cpu_to_mem(cpu); n = get_node(cachep, node); @@ -335,7 +338,7 @@ Signed-off-by: Sebastian Andrzej Siewior slabs_destroy(cachep, &list); } free_percpu(prev); -@@ -4020,9 +4020,9 @@ static void drain_array(struct kmem_cach +@@ -4023,9 +4023,9 @@ static void drain_array(struct kmem_cache *cachep, struct kmem_cache_node *n, return; } @@ -347,7 +350,7 @@ Signed-off-by: Sebastian Andrzej Siewior slabs_destroy(cachep, &list); } -@@ -4106,7 +4106,7 @@ void get_slabinfo(struct kmem_cache *cac +@@ -4109,7 +4109,7 @@ void get_slabinfo(struct kmem_cache *cachep, struct slabinfo *sinfo) for_each_kmem_cache_node(cachep, node, n) { check_irq_on(); @@ -356,7 +359,7 @@ Signed-off-by: Sebastian Andrzej Siewior total_slabs += n->total_slabs; free_slabs += n->free_slabs; -@@ -4115,7 +4115,7 @@ void get_slabinfo(struct kmem_cache *cac +@@ -4118,7 +4118,7 @@ void get_slabinfo(struct kmem_cache *cachep, struct slabinfo *sinfo) if (n->shared) shared_avail += n->shared->avail; @@ -365,7 +368,7 @@ Signed-off-by: Sebastian Andrzej Siewior } num_objs = total_slabs * cachep->num; active_slabs = total_slabs - free_slabs; -@@ -4330,13 +4330,13 @@ static int 
leaks_show(struct seq_file *m +@@ -4333,13 +4333,13 @@ static int leaks_show(struct seq_file *m, void *p) for_each_kmem_cache_node(cachep, node, n) { check_irq_on(); @@ -381,9 +384,11 @@ Signed-off-by: Sebastian Andrzej Siewior } } while (!is_store_user_clean(cachep)); +diff --git a/mm/slab.h b/mm/slab.h +index 9632772e14be..d6b01d61f768 100644 --- a/mm/slab.h +++ b/mm/slab.h -@@ -453,7 +453,7 @@ static inline void slab_post_alloc_hook( +@@ -454,7 +454,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags, * The slab lists for all objects. */ struct kmem_cache_node { @@ -392,9 +397,11 @@ Signed-off-by: Sebastian Andrzej Siewior #ifdef CONFIG_SLAB struct list_head slabs_partial; /* partial list first, better asm code */ +diff --git a/mm/slub.c b/mm/slub.c +index 09c0e24a06d8..9450fb6da89f 100644 --- a/mm/slub.c +++ b/mm/slub.c -@@ -1167,7 +1167,7 @@ static noinline int free_debug_processin +@@ -1167,7 +1167,7 @@ static noinline int free_debug_processing( unsigned long uninitialized_var(flags); int ret = 0; @@ -403,7 +410,7 @@ Signed-off-by: Sebastian Andrzej Siewior slab_lock(page); if (s->flags & SLAB_CONSISTENCY_CHECKS) { -@@ -1202,7 +1202,7 @@ static noinline int free_debug_processin +@@ -1202,7 +1202,7 @@ static noinline int free_debug_processing( bulk_cnt, cnt); slab_unlock(page); @@ -412,7 +419,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (!ret) slab_fix(s, "Object at 0x%p not freed", object); return ret; -@@ -1802,7 +1802,7 @@ static void *get_partial_node(struct kme +@@ -1802,7 +1802,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n, if (!n || !n->nr_partial) return NULL; @@ -421,7 +428,7 @@ Signed-off-by: Sebastian Andrzej Siewior list_for_each_entry_safe(page, page2, &n->partial, lru) { void *t; -@@ -1827,7 +1827,7 @@ static void *get_partial_node(struct kme +@@ -1827,7 +1827,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n, break; } @@ -430,7 +437,7 @@ 
Signed-off-by: Sebastian Andrzej Siewior return object; } -@@ -2073,7 +2073,7 @@ static void deactivate_slab(struct kmem_ +@@ -2073,7 +2073,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page, * that acquire_slab() will see a slab page that * is frozen */ @@ -439,7 +446,7 @@ Signed-off-by: Sebastian Andrzej Siewior } } else { m = M_FULL; -@@ -2084,7 +2084,7 @@ static void deactivate_slab(struct kmem_ +@@ -2084,7 +2084,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page, * slabs from diagnostic functions will not see * any frozen slabs. */ @@ -448,7 +455,7 @@ Signed-off-by: Sebastian Andrzej Siewior } } -@@ -2119,7 +2119,7 @@ static void deactivate_slab(struct kmem_ +@@ -2119,7 +2119,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page, goto redo; if (lock) @@ -457,7 +464,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (m == M_FREE) { stat(s, DEACTIVATE_EMPTY); -@@ -2154,10 +2154,10 @@ static void unfreeze_partials(struct kme +@@ -2154,10 +2154,10 @@ static void unfreeze_partials(struct kmem_cache *s, n2 = get_node(s, page_to_nid(page)); if (n != n2) { if (n) @@ -470,7 +477,7 @@ Signed-off-by: Sebastian Andrzej Siewior } do { -@@ -2186,7 +2186,7 @@ static void unfreeze_partials(struct kme +@@ -2186,7 +2186,7 @@ static void unfreeze_partials(struct kmem_cache *s, } if (n) @@ -479,7 +486,7 @@ Signed-off-by: Sebastian Andrzej Siewior while (discard_page) { page = discard_page; -@@ -2355,10 +2355,10 @@ static unsigned long count_partial(struc +@@ -2355,10 +2355,10 @@ static unsigned long count_partial(struct kmem_cache_node *n, unsigned long x = 0; struct page *page; @@ -492,7 +499,7 @@ Signed-off-by: Sebastian Andrzej Siewior return x; } #endif /* CONFIG_SLUB_DEBUG || CONFIG_SYSFS */ -@@ -2793,7 +2793,7 @@ static void __slab_free(struct kmem_cach +@@ -2793,7 +2793,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page, do { if (unlikely(n)) { @@ -501,7 +508,7 @@ Signed-off-by: Sebastian 
Andrzej Siewior n = NULL; } prior = page->freelist; -@@ -2825,7 +2825,7 @@ static void __slab_free(struct kmem_cach +@@ -2825,7 +2825,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page, * Otherwise the list_lock will synchronize with * other processors updating the list of slabs. */ @@ -510,7 +517,7 @@ Signed-off-by: Sebastian Andrzej Siewior } } -@@ -2867,7 +2867,7 @@ static void __slab_free(struct kmem_cach +@@ -2867,7 +2867,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page, add_partial(n, page, DEACTIVATE_TO_TAIL); stat(s, FREE_ADD_PARTIAL); } @@ -519,7 +526,7 @@ Signed-off-by: Sebastian Andrzej Siewior return; slab_empty: -@@ -2882,7 +2882,7 @@ static void __slab_free(struct kmem_cach +@@ -2882,7 +2882,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page, remove_full(s, n, page); } @@ -537,7 +544,7 @@ Signed-off-by: Sebastian Andrzej Siewior INIT_LIST_HEAD(&n->partial); #ifdef CONFIG_SLUB_DEBUG atomic_long_set(&n->nr_slabs, 0); -@@ -3653,7 +3653,7 @@ static void free_partial(struct kmem_cac +@@ -3656,7 +3656,7 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n) struct page *page, *h; BUG_ON(irqs_disabled()); @@ -546,7 +553,7 @@ Signed-off-by: Sebastian Andrzej Siewior list_for_each_entry_safe(page, h, &n->partial, lru) { if (!page->inuse) { remove_partial(n, page); -@@ -3663,7 +3663,7 @@ static void free_partial(struct kmem_cac +@@ -3666,7 +3666,7 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n) "Objects remaining in %s on __kmem_cache_shutdown()"); } } @@ -555,7 +562,7 @@ Signed-off-by: Sebastian Andrzej Siewior list_for_each_entry_safe(page, h, &discard, lru) discard_slab(s, page); -@@ -3936,7 +3936,7 @@ int __kmem_cache_shrink(struct kmem_cach +@@ -3939,7 +3939,7 @@ int __kmem_cache_shrink(struct kmem_cache *s) for (i = 0; i < SHRINK_PROMOTE_MAX; i++) INIT_LIST_HEAD(promote + i); @@ -564,7 +571,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * Build 
lists of slabs to discard or promote. -@@ -3967,7 +3967,7 @@ int __kmem_cache_shrink(struct kmem_cach +@@ -3970,7 +3970,7 @@ int __kmem_cache_shrink(struct kmem_cache *s) for (i = SHRINK_PROMOTE_MAX - 1; i >= 0; i--) list_splice(promote + i, &n->partial); @@ -573,7 +580,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* Release empty slabs */ list_for_each_entry_safe(page, t, &discard, lru) -@@ -4381,7 +4381,7 @@ static int validate_slab_node(struct kme +@@ -4384,7 +4384,7 @@ static int validate_slab_node(struct kmem_cache *s, struct page *page; unsigned long flags; @@ -582,7 +589,7 @@ Signed-off-by: Sebastian Andrzej Siewior list_for_each_entry(page, &n->partial, lru) { validate_slab_slab(s, page, map); -@@ -4403,7 +4403,7 @@ static int validate_slab_node(struct kme +@@ -4406,7 +4406,7 @@ static int validate_slab_node(struct kmem_cache *s, s->name, count, atomic_long_read(&n->nr_slabs)); out: @@ -591,7 +598,7 @@ Signed-off-by: Sebastian Andrzej Siewior return count; } -@@ -4593,12 +4593,12 @@ static int list_locations(struct kmem_ca +@@ -4596,12 +4596,12 @@ static int list_locations(struct kmem_cache *s, char *buf, if (!atomic_long_read(&n->nr_slabs)) continue; @@ -606,3 +613,6 @@ Signed-off-by: Sebastian Andrzej Siewior } for (i = 0; i < t.count; i++) { +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0077-0004-mm-SLUB-delay-giving-back-empty-slubs-to-IRQ-enabled.patch b/kernel/patches-4.19.x-rt/0075-mm-SLUB-delay-giving-back-empty-slubs-to-IRQ-enabled.patch similarity index 74% rename from kernel/patches-4.19.x-rt/0077-0004-mm-SLUB-delay-giving-back-empty-slubs-to-IRQ-enabled.patch rename to kernel/patches-4.19.x-rt/0075-mm-SLUB-delay-giving-back-empty-slubs-to-IRQ-enabled.patch index ee9464d80..c2a243b5e 100644 --- a/kernel/patches-4.19.x-rt/0077-0004-mm-SLUB-delay-giving-back-empty-slubs-to-IRQ-enabled.patch +++ b/kernel/patches-4.19.x-rt/0075-mm-SLUB-delay-giving-back-empty-slubs-to-IRQ-enabled.patch @@ -1,6 +1,7 @@ +From 
7950585d96adfc3a0b99a639041dbaed50e2a496 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Thu, 21 Jun 2018 17:29:19 +0200 -Subject: [PATCH 4/4] mm/SLUB: delay giving back empty slubs to IRQ enabled +Subject: [PATCH 075/269] mm/SLUB: delay giving back empty slubs to IRQ enabled regions __free_slab() is invoked with disabled interrupts which increases the @@ -12,12 +13,14 @@ so it can be processed later. Signed-off-by: Thomas Gleixner Signed-off-by: Sebastian Andrzej Siewior --- - mm/slub.c | 74 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++----- + mm/slub.c | 74 +++++++++++++++++++++++++++++++++++++++++++++++++++---- 1 file changed, 69 insertions(+), 5 deletions(-) +diff --git a/mm/slub.c b/mm/slub.c +index 9450fb6da89f..7fd47a914f61 100644 --- a/mm/slub.c +++ b/mm/slub.c -@@ -1330,6 +1330,12 @@ static inline void dec_slabs_node(struct +@@ -1330,6 +1330,12 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node, #endif /* CONFIG_SLUB_DEBUG */ @@ -30,7 +33,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * Hooks for other subsystems that check memory allocations. In a typical * production configuration these hooks all should produce no code at all. 
-@@ -1684,6 +1690,16 @@ static void __free_slab(struct kmem_cach +@@ -1684,6 +1690,16 @@ static void __free_slab(struct kmem_cache *s, struct page *page) __free_pages(page, order); } @@ -47,7 +50,7 @@ Signed-off-by: Sebastian Andrzej Siewior static void rcu_free_slab(struct rcu_head *h) { struct page *page = container_of(h, struct page, rcu_head); -@@ -1695,6 +1711,12 @@ static void free_slab(struct kmem_cache +@@ -1695,6 +1711,12 @@ static void free_slab(struct kmem_cache *s, struct page *page) { if (unlikely(s->flags & SLAB_TYPESAFE_BY_RCU)) { call_rcu(&page->rcu_head, rcu_free_slab); @@ -60,7 +63,7 @@ Signed-off-by: Sebastian Andrzej Siewior } else __free_slab(s, page); } -@@ -2223,14 +2245,21 @@ static void put_cpu_partial(struct kmem_ +@@ -2223,14 +2245,21 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain) pobjects = oldpage->pobjects; pages = oldpage->pages; if (drain && pobjects > s->cpu_partial) { @@ -82,7 +85,7 @@ Signed-off-by: Sebastian Andrzej Siewior oldpage = NULL; pobjects = 0; pages = 0; -@@ -2300,7 +2329,22 @@ static bool has_cpu_slab(int cpu, void * +@@ -2300,7 +2329,22 @@ static bool has_cpu_slab(int cpu, void *info) static void flush_all(struct kmem_cache *s) { @@ -105,7 +108,7 @@ Signed-off-by: Sebastian Andrzej Siewior } /* -@@ -2498,8 +2542,10 @@ static inline void *get_freelist(struct +@@ -2498,8 +2542,10 @@ static inline void *get_freelist(struct kmem_cache *s, struct page *page) * already disabled (which is the case for bulk allocation). 
*/ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, @@ -117,7 +120,7 @@ Signed-off-by: Sebastian Andrzej Siewior void *freelist; struct page *page; -@@ -2555,6 +2601,13 @@ static void *___slab_alloc(struct kmem_c +@@ -2555,6 +2601,13 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, VM_BUG_ON(!c->page->frozen); c->freelist = get_freepointer(s, freelist); c->tid = next_tid(c->tid); @@ -131,7 +134,7 @@ Signed-off-by: Sebastian Andrzej Siewior return freelist; new_slab: -@@ -2570,7 +2623,7 @@ static void *___slab_alloc(struct kmem_c +@@ -2570,7 +2623,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, if (unlikely(!freelist)) { slab_out_of_memory(s, gfpflags, node); @@ -140,7 +143,7 @@ Signed-off-by: Sebastian Andrzej Siewior } page = c->page; -@@ -2583,7 +2636,7 @@ static void *___slab_alloc(struct kmem_c +@@ -2583,7 +2636,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, goto new_slab; /* Slab failed checks. 
Next slab needed */ deactivate_slab(s, page, get_freepointer(s, freelist), c); @@ -149,7 +152,7 @@ Signed-off-by: Sebastian Andrzej Siewior } /* -@@ -2595,6 +2648,7 @@ static void *__slab_alloc(struct kmem_ca +@@ -2595,6 +2648,7 @@ static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, { void *p; unsigned long flags; @@ -157,7 +160,7 @@ Signed-off-by: Sebastian Andrzej Siewior local_irq_save(flags); #ifdef CONFIG_PREEMPT -@@ -2606,8 +2660,9 @@ static void *__slab_alloc(struct kmem_ca +@@ -2606,8 +2660,9 @@ static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, c = this_cpu_ptr(s->cpu_slab); #endif @@ -168,7 +171,7 @@ Signed-off-by: Sebastian Andrzej Siewior return p; } -@@ -3085,6 +3140,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca +@@ -3085,6 +3140,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, void **p) { struct kmem_cache_cpu *c; @@ -176,7 +179,7 @@ Signed-off-by: Sebastian Andrzej Siewior int i; /* memcg and kmem_cache debug support */ -@@ -3108,7 +3164,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca +@@ -3108,7 +3164,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, * of re-populating per CPU c->freelist */ p[i] = ___slab_alloc(s, flags, NUMA_NO_NODE, @@ -185,7 +188,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (unlikely(!p[i])) goto error; -@@ -3120,6 +3176,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca +@@ -3120,6 +3176,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, } c->tid = next_tid(c->tid); local_irq_enable(); @@ -193,7 +196,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* Clear memory outside IRQ disabled fastpath loop */ if (unlikely(flags & __GFP_ZERO)) { -@@ -3134,6 +3191,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca +@@ -3134,6 +3191,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, return i; error: local_irq_enable(); @@ -201,7 +204,7 @@ Signed-off-by: Sebastian Andrzej Siewior 
slab_post_alloc_hook(s, flags, i, p); __kmem_cache_free_bulk(s, i, p); return 0; -@@ -4180,6 +4238,12 @@ void __init kmem_cache_init(void) +@@ -4183,6 +4241,12 @@ void __init kmem_cache_init(void) { static __initdata struct kmem_cache boot_kmem_cache, boot_kmem_cache_node; @@ -214,3 +217,6 @@ Signed-off-by: Sebastian Andrzej Siewior if (debug_guardpage_minorder()) slub_max_order = 0; +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0078-mm-page_alloc-rt-friendly-per-cpu-pages.patch b/kernel/patches-4.19.x-rt/0076-mm-page_alloc-rt-friendly-per-cpu-pages.patch similarity index 84% rename from kernel/patches-4.19.x-rt/0078-mm-page_alloc-rt-friendly-per-cpu-pages.patch rename to kernel/patches-4.19.x-rt/0076-mm-page_alloc-rt-friendly-per-cpu-pages.patch index 612dafd9e..ff8294218 100644 --- a/kernel/patches-4.19.x-rt/0078-mm-page_alloc-rt-friendly-per-cpu-pages.patch +++ b/kernel/patches-4.19.x-rt/0076-mm-page_alloc-rt-friendly-per-cpu-pages.patch @@ -1,6 +1,7 @@ +From 31695882006c45fad86890ceff90dd7d65ea5dd3 Mon Sep 17 00:00:00 2001 From: Ingo Molnar Date: Fri, 3 Jul 2009 08:29:37 -0500 -Subject: mm: page_alloc: rt-friendly per-cpu pages +Subject: [PATCH 076/269] mm: page_alloc: rt-friendly per-cpu pages rt-friendly per-cpu pages: convert the irqs-off per-cpu locking method into a preemptible, explicit-per-cpu-locks method. 
@@ -12,9 +13,11 @@ Contains fixes from: Signed-off-by: Ingo Molnar Signed-off-by: Thomas Gleixner --- - mm/page_alloc.c | 63 ++++++++++++++++++++++++++++++++++++++------------------ + mm/page_alloc.c | 63 +++++++++++++++++++++++++++++++++---------------- 1 file changed, 43 insertions(+), 20 deletions(-) +diff --git a/mm/page_alloc.c b/mm/page_alloc.c +index 4d630aebd84f..4d11ec179aa7 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -60,6 +60,7 @@ @@ -44,7 +47,7 @@ Signed-off-by: Thomas Gleixner int page_group_by_mobility_disabled __read_mostly; #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT -@@ -1296,10 +1309,10 @@ static void __free_pages_ok(struct page +@@ -1296,10 +1309,10 @@ static void __free_pages_ok(struct page *page, unsigned int order) return; migratetype = get_pfnblock_migratetype(page, pfn); @@ -57,7 +60,7 @@ Signed-off-by: Thomas Gleixner } static void __init __free_pages_boot_core(struct page *page, unsigned int order) -@@ -2560,13 +2573,13 @@ void drain_zone_pages(struct zone *zone, +@@ -2560,13 +2573,13 @@ void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp) int to_drain, batch; LIST_HEAD(dst); @@ -73,7 +76,7 @@ Signed-off-by: Thomas Gleixner if (to_drain > 0) free_pcppages_bulk(zone, &dst, false); -@@ -2588,7 +2601,7 @@ static void drain_pages_zone(unsigned in +@@ -2588,7 +2601,7 @@ static void drain_pages_zone(unsigned int cpu, struct zone *zone) LIST_HEAD(dst); int count; @@ -82,7 +85,7 @@ Signed-off-by: Thomas Gleixner pset = per_cpu_ptr(zone->pageset, cpu); pcp = &pset->pcp; -@@ -2596,7 +2609,7 @@ static void drain_pages_zone(unsigned in +@@ -2596,7 +2609,7 @@ static void drain_pages_zone(unsigned int cpu, struct zone *zone) if (count) isolate_pcp_pages(count, pcp, &dst); @@ -91,7 +94,7 @@ Signed-off-by: Thomas Gleixner if (count) free_pcppages_bulk(zone, &dst, false); -@@ -2634,6 +2647,7 @@ void drain_local_pages(struct zone *zone +@@ -2634,6 +2647,7 @@ void drain_local_pages(struct zone *zone) drain_pages(cpu); } @@ -99,7 +102,7 
@@ Signed-off-by: Thomas Gleixner static void drain_local_pages_wq(struct work_struct *work) { /* -@@ -2647,6 +2661,7 @@ static void drain_local_pages_wq(struct +@@ -2647,6 +2661,7 @@ static void drain_local_pages_wq(struct work_struct *work) drain_local_pages(NULL); preempt_enable(); } @@ -143,7 +146,7 @@ Signed-off-by: Thomas Gleixner if (!list_empty(&dst)) free_pcppages_bulk(zone, &dst, false); } -@@ -2869,7 +2892,7 @@ void free_unref_page_list(struct list_he +@@ -2869,7 +2892,7 @@ void free_unref_page_list(struct list_head *list) set_page_private(page, pfn); } @@ -152,7 +155,7 @@ Signed-off-by: Thomas Gleixner list_for_each_entry_safe(page, next, list, lru) { unsigned long pfn = page_private(page); enum zone_type type; -@@ -2884,12 +2907,12 @@ void free_unref_page_list(struct list_he +@@ -2884,12 +2907,12 @@ void free_unref_page_list(struct list_head *list) * a large list of pages to free. */ if (++batch_count == SWAP_CLUSTER_MAX) { @@ -168,7 +171,7 @@ Signed-off-by: Thomas Gleixner for (i = 0; i < __MAX_NR_ZONES; ) { struct page *page; -@@ -3038,7 +3061,7 @@ static struct page *rmqueue_pcplist(stru +@@ -3038,7 +3061,7 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone, struct page *page; unsigned long flags; @@ -177,7 +180,7 @@ Signed-off-by: Thomas Gleixner pcp = &this_cpu_ptr(zone->pageset)->pcp; list = &pcp->lists[migratetype]; page = __rmqueue_pcplist(zone, migratetype, pcp, list); -@@ -3046,7 +3069,7 @@ static struct page *rmqueue_pcplist(stru +@@ -3046,7 +3069,7 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone, __count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order); zone_statistics(preferred_zone, zone); } @@ -186,7 +189,7 @@ Signed-off-by: Thomas Gleixner return page; } -@@ -3073,7 +3096,7 @@ struct page *rmqueue(struct zone *prefer +@@ -3073,7 +3096,7 @@ struct page *rmqueue(struct zone *preferred_zone, * allocate greater than order-1 page units with __GFP_NOFAIL. 
*/ WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1)); @@ -195,7 +198,7 @@ Signed-off-by: Thomas Gleixner do { page = NULL; -@@ -3093,14 +3116,14 @@ struct page *rmqueue(struct zone *prefer +@@ -3093,14 +3116,14 @@ struct page *rmqueue(struct zone *preferred_zone, __count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order); zone_statistics(preferred_zone, zone); @@ -230,3 +233,6 @@ Signed-off-by: Thomas Gleixner } #ifdef CONFIG_MEMORY_HOTREMOVE +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0079-mm-convert-swap-to-percpu-locked.patch b/kernel/patches-4.19.x-rt/0077-mm-swap-Convert-to-percpu-locked.patch similarity index 78% rename from kernel/patches-4.19.x-rt/0079-mm-convert-swap-to-percpu-locked.patch rename to kernel/patches-4.19.x-rt/0077-mm-swap-Convert-to-percpu-locked.patch index 5e8468f94..654a21a01 100644 --- a/kernel/patches-4.19.x-rt/0079-mm-convert-swap-to-percpu-locked.patch +++ b/kernel/patches-4.19.x-rt/0077-mm-swap-Convert-to-percpu-locked.patch @@ -1,20 +1,22 @@ +From 25ce0ae0ad1ef1ed724757c0137241db28a8208d Mon Sep 17 00:00:00 2001 From: Ingo Molnar Date: Fri, 3 Jul 2009 08:29:51 -0500 -Subject: mm/swap: Convert to percpu locked +Subject: [PATCH 077/269] mm/swap: Convert to percpu locked Replace global locks (get_cpu + local_irq_save) with "local_locks()". Currently there is one of for "rotate" and one for "swap". 
Signed-off-by: Ingo Molnar Signed-off-by: Thomas Gleixner - --- - include/linux/swap.h | 2 ++ - mm/compaction.c | 6 ++++-- - mm/page_alloc.c | 3 ++- - mm/swap.c | 38 ++++++++++++++++++++++---------------- + include/linux/swap.h | 2 ++ + mm/compaction.c | 6 ++++-- + mm/page_alloc.c | 3 ++- + mm/swap.c | 38 ++++++++++++++++++++++---------------- 4 files changed, 30 insertions(+), 19 deletions(-) +diff --git a/include/linux/swap.h b/include/linux/swap.h +index 7bd0a6f2ac2b..e643672fa802 100644 --- a/include/linux/swap.h +++ b/include/linux/swap.h @@ -12,6 +12,7 @@ @@ -25,7 +27,7 @@ Signed-off-by: Thomas Gleixner #include struct notifier_block; -@@ -331,6 +332,7 @@ extern unsigned long nr_free_pagecache_p +@@ -331,6 +332,7 @@ extern unsigned long nr_free_pagecache_pages(void); /* linux/mm/swap.c */ @@ -33,9 +35,11 @@ Signed-off-by: Thomas Gleixner extern void lru_cache_add(struct page *); extern void lru_cache_add_anon(struct page *page); extern void lru_cache_add_file(struct page *page); +diff --git a/mm/compaction.c b/mm/compaction.c +index faca45ebe62d..f8ccb9d9daa3 100644 --- a/mm/compaction.c +++ b/mm/compaction.c -@@ -1657,10 +1657,12 @@ static enum compact_result compact_zone( +@@ -1657,10 +1657,12 @@ static enum compact_result compact_zone(struct zone *zone, struct compact_contro block_start_pfn(cc->migrate_pfn, cc->order); if (cc->last_migrated_pfn < current_block_start) { @@ -50,9 +54,11 @@ Signed-off-by: Thomas Gleixner /* No more flushing until we migrate again */ cc->last_migrated_pfn = 0; } +diff --git a/mm/page_alloc.c b/mm/page_alloc.c +index 4d11ec179aa7..a01c15fdb723 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c -@@ -7205,8 +7205,9 @@ void __init free_area_init(unsigned long +@@ -7205,8 +7205,9 @@ void __init free_area_init(unsigned long *zones_size) static int page_alloc_cpu_dead(unsigned int cpu) { @@ -63,6 +69,8 @@ Signed-off-by: Thomas Gleixner drain_pages(cpu); /* +diff --git a/mm/swap.c b/mm/swap.c +index a3fc028e338e..4bac22ec1328 100644 
--- a/mm/swap.c +++ b/mm/swap.c @@ -33,6 +33,7 @@ @@ -73,7 +81,7 @@ Signed-off-by: Thomas Gleixner #include #include -@@ -51,6 +52,8 @@ static DEFINE_PER_CPU(struct pagevec, lr +@@ -51,6 +52,8 @@ static DEFINE_PER_CPU(struct pagevec, lru_lazyfree_pvecs); #ifdef CONFIG_SMP static DEFINE_PER_CPU(struct pagevec, activate_page_pvecs); #endif @@ -82,7 +90,7 @@ Signed-off-by: Thomas Gleixner /* * This path almost never happens for VM activity - pages are normally -@@ -253,11 +256,11 @@ void rotate_reclaimable_page(struct page +@@ -253,11 +256,11 @@ void rotate_reclaimable_page(struct page *page) unsigned long flags; get_page(page); @@ -112,7 +120,7 @@ Signed-off-by: Thomas Gleixner } } -@@ -339,7 +343,7 @@ void activate_page(struct page *page) +@@ -334,7 +338,7 @@ void activate_page(struct page *page) static void __lru_cache_activate_page(struct page *page) { @@ -121,7 +129,7 @@ Signed-off-by: Thomas Gleixner int i; /* -@@ -361,7 +365,7 @@ static void __lru_cache_activate_page(st +@@ -356,7 +360,7 @@ static void __lru_cache_activate_page(struct page *page) } } @@ -130,7 +138,7 @@ Signed-off-by: Thomas Gleixner } /* -@@ -403,12 +407,12 @@ EXPORT_SYMBOL(mark_page_accessed); +@@ -398,12 +402,12 @@ EXPORT_SYMBOL(mark_page_accessed); static void __lru_cache_add(struct page *page) { @@ -145,7 +153,7 @@ Signed-off-by: Thomas Gleixner } /** -@@ -586,9 +590,9 @@ void lru_add_drain_cpu(int cpu) +@@ -581,9 +585,9 @@ void lru_add_drain_cpu(int cpu) unsigned long flags; /* No harm done if a racing interrupt already did this */ @@ -157,7 +165,7 @@ Signed-off-by: Thomas Gleixner } pvec = &per_cpu(lru_deactivate_file_pvecs, cpu); -@@ -620,11 +624,12 @@ void deactivate_file_page(struct page *p +@@ -615,11 +619,12 @@ void deactivate_file_page(struct page *page) return; if (likely(get_page_unless_zero(page))) { @@ -172,7 +180,7 @@ Signed-off-by: Thomas Gleixner } } -@@ -639,19 +644,20 @@ void mark_page_lazyfree(struct page *pag +@@ -634,19 +639,20 @@ void mark_page_lazyfree(struct page 
*page) { if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) && !PageSwapCache(page) && !PageUnevictable(page)) { @@ -196,4 +204,7 @@ Signed-off-by: Thomas Gleixner + local_unlock_cpu(swapvec_lock); } - static void lru_add_drain_per_cpu(struct work_struct *dummy) + #ifdef CONFIG_SMP +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0080-mm-perform-lru_add_drain_all-remotely.patch b/kernel/patches-4.19.x-rt/0078-mm-perform-lru_add_drain_all-remotely.patch similarity index 84% rename from kernel/patches-4.19.x-rt/0080-mm-perform-lru_add_drain_all-remotely.patch rename to kernel/patches-4.19.x-rt/0078-mm-perform-lru_add_drain_all-remotely.patch index 28d23636d..2f230172c 100644 --- a/kernel/patches-4.19.x-rt/0080-mm-perform-lru_add_drain_all-remotely.patch +++ b/kernel/patches-4.19.x-rt/0078-mm-perform-lru_add_drain_all-remotely.patch @@ -1,6 +1,7 @@ +From c6e0c51ac7fe1d0892449e41e6792babe4d7c3fa Mon Sep 17 00:00:00 2001 From: Luiz Capitulino Date: Fri, 27 May 2016 15:03:28 +0200 -Subject: [PATCH] mm: perform lru_add_drain_all() remotely +Subject: [PATCH 078/269] mm: perform lru_add_drain_all() remotely lru_add_drain_all() works by scheduling lru_add_drain_cpu() to run on all CPUs that have non-empty LRU pagevecs and then waiting for @@ -19,12 +20,14 @@ Signed-off-by: Rik van Riel Signed-off-by: Luiz Capitulino Signed-off-by: Sebastian Andrzej Siewior --- - mm/swap.c | 36 ++++++++++++++++++++++++++++++------ + mm/swap.c | 36 ++++++++++++++++++++++++++++++------ 1 file changed, 30 insertions(+), 6 deletions(-) +diff --git a/mm/swap.c b/mm/swap.c +index 4bac22ec1328..0457927d3f0c 100644 --- a/mm/swap.c +++ b/mm/swap.c -@@ -590,9 +590,15 @@ void lru_add_drain_cpu(int cpu) +@@ -585,9 +585,15 @@ void lru_add_drain_cpu(int cpu) unsigned long flags; /* No harm done if a racing interrupt already did this */ @@ -40,9 +43,9 @@ Signed-off-by: Sebastian Andrzej Siewior } pvec = &per_cpu(lru_deactivate_file_pvecs, cpu); -@@ -660,6 +666,16 @@ void lru_add_drain(void) - 
local_unlock_cpu(swapvec_lock); - } +@@ -657,6 +663,16 @@ void lru_add_drain(void) + + #ifdef CONFIG_SMP +#ifdef CONFIG_PREEMPT_RT_BASE +static inline void remote_lru_add_drain(int cpu, struct cpumask *has_work) @@ -54,13 +57,13 @@ Signed-off-by: Sebastian Andrzej Siewior + +#else + - static void lru_add_drain_per_cpu(struct work_struct *dummy) - { - lru_add_drain(); -@@ -667,6 +683,16 @@ static void lru_add_drain_per_cpu(struct - static DEFINE_PER_CPU(struct work_struct, lru_add_drain_work); + static void lru_add_drain_per_cpu(struct work_struct *dummy) +@@ -664,6 +680,16 @@ static void lru_add_drain_per_cpu(struct work_struct *dummy) + lru_add_drain(); + } + +static inline void remote_lru_add_drain(int cpu, struct cpumask *has_work) +{ + struct work_struct *work = &per_cpu(lru_add_drain_work, cpu); @@ -74,7 +77,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * Doesn't need any cpu hotplug locking because we do rely on per-cpu * kworkers being shut down before our page_alloc_cpu_dead callback is -@@ -691,21 +717,19 @@ void lru_add_drain_all(void) +@@ -688,21 +714,19 @@ void lru_add_drain_all(void) cpumask_clear(&has_work); for_each_online_cpu(cpu) { @@ -100,3 +103,6 @@ Signed-off-by: Sebastian Andrzej Siewior mutex_unlock(&lock); } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0081-mm-make-vmstat-rt-aware.patch b/kernel/patches-4.19.x-rt/0079-mm-vmstat-Protect-per-cpu-variables-with-preempt-dis.patch similarity index 63% rename from kernel/patches-4.19.x-rt/0081-mm-make-vmstat-rt-aware.patch rename to kernel/patches-4.19.x-rt/0079-mm-vmstat-Protect-per-cpu-variables-with-preempt-dis.patch index ae673e318..7c908c68e 100644 --- a/kernel/patches-4.19.x-rt/0081-mm-make-vmstat-rt-aware.patch +++ b/kernel/patches-4.19.x-rt/0079-mm-vmstat-Protect-per-cpu-variables-with-preempt-dis.patch @@ -1,6 +1,8 @@ +From b0971a2847fd9cd9f59eb19e6761f6800a33150d Mon Sep 17 00:00:00 2001 From: Ingo Molnar Date: Fri, 3 Jul 2009 08:30:13 -0500 -Subject: mm/vmstat: Protect per cpu 
variables with preempt disable on RT +Subject: [PATCH 079/269] mm/vmstat: Protect per cpu variables with preempt + disable on RT Disable preemption on -RT for the vmstat code. On vanila the code runs in IRQ-off regions while on -RT it is not. "preempt_disable" ensures that the @@ -8,15 +10,16 @@ same ressources is not updated in parallel due to preemption. Signed-off-by: Ingo Molnar Signed-off-by: Thomas Gleixner - --- - include/linux/vmstat.h | 4 ++++ - mm/vmstat.c | 12 ++++++++++++ + include/linux/vmstat.h | 4 ++++ + mm/vmstat.c | 12 ++++++++++++ 2 files changed, 16 insertions(+) +diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h +index f25cef84b41d..febee8649220 100644 --- a/include/linux/vmstat.h +++ b/include/linux/vmstat.h -@@ -54,7 +54,9 @@ DECLARE_PER_CPU(struct vm_event_state, v +@@ -54,7 +54,9 @@ DECLARE_PER_CPU(struct vm_event_state, vm_event_states); */ static inline void __count_vm_event(enum vm_event_item item) { @@ -26,7 +29,7 @@ Signed-off-by: Thomas Gleixner } static inline void count_vm_event(enum vm_event_item item) -@@ -64,7 +66,9 @@ static inline void count_vm_event(enum v +@@ -64,7 +66,9 @@ static inline void count_vm_event(enum vm_event_item item) static inline void __count_vm_events(enum vm_event_item item, long delta) { @@ -36,9 +39,11 @@ Signed-off-by: Thomas Gleixner } static inline void count_vm_events(enum vm_event_item item, long delta) +diff --git a/mm/vmstat.c b/mm/vmstat.c +index 4a387937f9f5..0cd11c5e3999 100644 --- a/mm/vmstat.c +++ b/mm/vmstat.c -@@ -320,6 +320,7 @@ void __mod_zone_page_state(struct zone * +@@ -320,6 +320,7 @@ void __mod_zone_page_state(struct zone *zone, enum zone_stat_item item, long x; long t; @@ -46,7 +51,7 @@ Signed-off-by: Thomas Gleixner x = delta + __this_cpu_read(*p); t = __this_cpu_read(pcp->stat_threshold); -@@ -329,6 +330,7 @@ void __mod_zone_page_state(struct zone * +@@ -329,6 +330,7 @@ void __mod_zone_page_state(struct zone *zone, enum zone_stat_item item, x = 0; } __this_cpu_write(*p, 
x); @@ -54,7 +59,7 @@ Signed-off-by: Thomas Gleixner } EXPORT_SYMBOL(__mod_zone_page_state); -@@ -340,6 +342,7 @@ void __mod_node_page_state(struct pglist +@@ -340,6 +342,7 @@ void __mod_node_page_state(struct pglist_data *pgdat, enum node_stat_item item, long x; long t; @@ -62,7 +67,7 @@ Signed-off-by: Thomas Gleixner x = delta + __this_cpu_read(*p); t = __this_cpu_read(pcp->stat_threshold); -@@ -349,6 +352,7 @@ void __mod_node_page_state(struct pglist +@@ -349,6 +352,7 @@ void __mod_node_page_state(struct pglist_data *pgdat, enum node_stat_item item, x = 0; } __this_cpu_write(*p, x); @@ -70,7 +75,7 @@ Signed-off-by: Thomas Gleixner } EXPORT_SYMBOL(__mod_node_page_state); -@@ -381,6 +385,7 @@ void __inc_zone_state(struct zone *zone, +@@ -381,6 +385,7 @@ void __inc_zone_state(struct zone *zone, enum zone_stat_item item) s8 __percpu *p = pcp->vm_stat_diff + item; s8 v, t; @@ -78,7 +83,7 @@ Signed-off-by: Thomas Gleixner v = __this_cpu_inc_return(*p); t = __this_cpu_read(pcp->stat_threshold); if (unlikely(v > t)) { -@@ -389,6 +394,7 @@ void __inc_zone_state(struct zone *zone, +@@ -389,6 +394,7 @@ void __inc_zone_state(struct zone *zone, enum zone_stat_item item) zone_page_state_add(v + overstep, zone, item); __this_cpu_write(*p, -overstep); } @@ -86,7 +91,7 @@ Signed-off-by: Thomas Gleixner } void __inc_node_state(struct pglist_data *pgdat, enum node_stat_item item) -@@ -397,6 +403,7 @@ void __inc_node_state(struct pglist_data +@@ -397,6 +403,7 @@ void __inc_node_state(struct pglist_data *pgdat, enum node_stat_item item) s8 __percpu *p = pcp->vm_node_stat_diff + item; s8 v, t; @@ -94,7 +99,7 @@ Signed-off-by: Thomas Gleixner v = __this_cpu_inc_return(*p); t = __this_cpu_read(pcp->stat_threshold); if (unlikely(v > t)) { -@@ -405,6 +412,7 @@ void __inc_node_state(struct pglist_data +@@ -405,6 +412,7 @@ void __inc_node_state(struct pglist_data *pgdat, enum node_stat_item item) node_page_state_add(v + overstep, pgdat, item); __this_cpu_write(*p, -overstep); } @@ -102,7 
+107,7 @@ Signed-off-by: Thomas Gleixner } void __inc_zone_page_state(struct page *page, enum zone_stat_item item) -@@ -425,6 +433,7 @@ void __dec_zone_state(struct zone *zone, +@@ -425,6 +433,7 @@ void __dec_zone_state(struct zone *zone, enum zone_stat_item item) s8 __percpu *p = pcp->vm_stat_diff + item; s8 v, t; @@ -110,7 +115,7 @@ Signed-off-by: Thomas Gleixner v = __this_cpu_dec_return(*p); t = __this_cpu_read(pcp->stat_threshold); if (unlikely(v < - t)) { -@@ -433,6 +442,7 @@ void __dec_zone_state(struct zone *zone, +@@ -433,6 +442,7 @@ void __dec_zone_state(struct zone *zone, enum zone_stat_item item) zone_page_state_add(v - overstep, zone, item); __this_cpu_write(*p, overstep); } @@ -118,7 +123,7 @@ Signed-off-by: Thomas Gleixner } void __dec_node_state(struct pglist_data *pgdat, enum node_stat_item item) -@@ -441,6 +451,7 @@ void __dec_node_state(struct pglist_data +@@ -441,6 +451,7 @@ void __dec_node_state(struct pglist_data *pgdat, enum node_stat_item item) s8 __percpu *p = pcp->vm_node_stat_diff + item; s8 v, t; @@ -126,7 +131,7 @@ Signed-off-by: Thomas Gleixner v = __this_cpu_dec_return(*p); t = __this_cpu_read(pcp->stat_threshold); if (unlikely(v < - t)) { -@@ -449,6 +460,7 @@ void __dec_node_state(struct pglist_data +@@ -449,6 +460,7 @@ void __dec_node_state(struct pglist_data *pgdat, enum node_stat_item item) node_page_state_add(v - overstep, pgdat, item); __this_cpu_write(*p, overstep); } @@ -134,3 +139,6 @@ Signed-off-by: Thomas Gleixner } void __dec_zone_page_state(struct page *page, enum zone_stat_item item) +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0082-re-preempt_rt_full-arm-coredump-fails-for-cpu-3e-3d-4.patch b/kernel/patches-4.19.x-rt/0080-ARM-Initialize-split-page-table-locks-for-vector-pag.patch similarity index 82% rename from kernel/patches-4.19.x-rt/0082-re-preempt_rt_full-arm-coredump-fails-for-cpu-3e-3d-4.patch rename to kernel/patches-4.19.x-rt/0080-ARM-Initialize-split-page-table-locks-for-vector-pag.patch index 
293ef2d04..84ef3ad84 100644 --- a/kernel/patches-4.19.x-rt/0082-re-preempt_rt_full-arm-coredump-fails-for-cpu-3e-3d-4.patch +++ b/kernel/patches-4.19.x-rt/0080-ARM-Initialize-split-page-table-locks-for-vector-pag.patch @@ -1,6 +1,8 @@ -Subject: ARM: Initialize split page table locks for vector page +From 1062ea19aa6e1c3dacb44d07747c89b4f66dadc2 Mon Sep 17 00:00:00 2001 From: Frank Rowand Date: Sat, 1 Oct 2011 18:58:13 -0700 +Subject: [PATCH 080/269] ARM: Initialize split page table locks for vector + page Without this patch, ARM can not use SPLIT_PTLOCK_CPUS if PREEMPT_RT_FULL=y because vectors_user_mapping() creates a @@ -30,12 +32,14 @@ Cc: Peter Zijlstra Link: http://lkml.kernel.org/r/4E87C535.2030907@am.sony.com Signed-off-by: Thomas Gleixner --- - arch/arm/kernel/process.c | 24 ++++++++++++++++++++++++ + arch/arm/kernel/process.c | 24 ++++++++++++++++++++++++ 1 file changed, 24 insertions(+) +diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c +index 82ab015bf42b..8d3c7ce34c24 100644 --- a/arch/arm/kernel/process.c +++ b/arch/arm/kernel/process.c -@@ -324,6 +324,30 @@ unsigned long arch_randomize_brk(struct +@@ -324,6 +324,30 @@ unsigned long arch_randomize_brk(struct mm_struct *mm) } #ifdef CONFIG_MMU @@ -66,3 +70,6 @@ Signed-off-by: Thomas Gleixner #ifdef CONFIG_KUSER_HELPERS /* * The vectors page is always readable from user space for the +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0083-mm-enable-slub.patch b/kernel/patches-4.19.x-rt/0081-mm-Enable-SLUB-for-RT.patch similarity index 63% rename from kernel/patches-4.19.x-rt/0083-mm-enable-slub.patch rename to kernel/patches-4.19.x-rt/0081-mm-Enable-SLUB-for-RT.patch index 78657d755..f5dcce567 100644 --- a/kernel/patches-4.19.x-rt/0083-mm-enable-slub.patch +++ b/kernel/patches-4.19.x-rt/0081-mm-Enable-SLUB-for-RT.patch @@ -1,6 +1,7 @@ -Subject: mm: Enable SLUB for RT +From 7bd789a93c5b97d553b15fd8e446228d23456aff Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Thu, 25 Oct 2012 
10:32:35 +0100 +Subject: [PATCH 081/269] mm: Enable SLUB for RT Avoid the memory allocation in IRQ section @@ -8,12 +9,14 @@ Signed-off-by: Thomas Gleixner [bigeasy: factor out everything except the kcalloc() workaorund ] Signed-off-by: Sebastian Andrzej Siewior --- - mm/slub.c | 6 ++++++ + mm/slub.c | 6 ++++++ 1 file changed, 6 insertions(+) +diff --git a/mm/slub.c b/mm/slub.c +index 7fd47a914f61..efd441e79e6f 100644 --- a/mm/slub.c +++ b/mm/slub.c -@@ -3677,6 +3677,11 @@ static void list_slab_objects(struct kme +@@ -3680,6 +3680,11 @@ static void list_slab_objects(struct kmem_cache *s, struct page *page, const char *text) { #ifdef CONFIG_SLUB_DEBUG @@ -25,7 +28,7 @@ Signed-off-by: Sebastian Andrzej Siewior void *addr = page_address(page); void *p; unsigned long *map = kcalloc(BITS_TO_LONGS(page->objects), -@@ -3698,6 +3703,7 @@ static void list_slab_objects(struct kme +@@ -3701,6 +3706,7 @@ static void list_slab_objects(struct kmem_cache *s, struct page *page, slab_unlock(page); kfree(map); #endif @@ -33,3 +36,6 @@ Signed-off-by: Sebastian Andrzej Siewior } /* +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0084-slub-enable-irqs-for-no-wait.patch b/kernel/patches-4.19.x-rt/0082-slub-Enable-irqs-for-__GFP_WAIT.patch similarity index 67% rename from kernel/patches-4.19.x-rt/0084-slub-enable-irqs-for-no-wait.patch rename to kernel/patches-4.19.x-rt/0082-slub-Enable-irqs-for-__GFP_WAIT.patch index 7ff4785d7..26bbab3cb 100644 --- a/kernel/patches-4.19.x-rt/0084-slub-enable-irqs-for-no-wait.patch +++ b/kernel/patches-4.19.x-rt/0082-slub-Enable-irqs-for-__GFP_WAIT.patch @@ -1,18 +1,21 @@ -Subject: slub: Enable irqs for __GFP_WAIT +From 11224977de88f7f3ddc92b29390c44fdf9a85820 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner -Date: Wed, 09 Jan 2013 12:08:15 +0100 +Date: Wed, 9 Jan 2013 12:08:15 +0100 +Subject: [PATCH 082/269] slub: Enable irqs for __GFP_WAIT SYSTEM_RUNNING might be too late for enabling interrupts. Allocations with GFP_WAIT can happen before that. 
So use this as an indicator. Signed-off-by: Thomas Gleixner --- - mm/slub.c | 9 ++++++++- + mm/slub.c | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) +diff --git a/mm/slub.c b/mm/slub.c +index efd441e79e6f..2240b51a0549 100644 --- a/mm/slub.c +++ b/mm/slub.c -@@ -1570,10 +1570,17 @@ static struct page *allocate_slab(struct +@@ -1570,10 +1570,17 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node) void *start, *p; int idx, order; bool shuffle; @@ -30,7 +33,7 @@ Signed-off-by: Thomas Gleixner local_irq_enable(); flags |= s->allocflags; -@@ -1632,7 +1639,7 @@ static struct page *allocate_slab(struct +@@ -1632,7 +1639,7 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node) page->frozen = 1; out: @@ -39,3 +42,6 @@ Signed-off-by: Thomas Gleixner local_irq_disable(); if (!page) return NULL; +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0085-slub-disable-SLUB_CPU_PARTIAL.patch b/kernel/patches-4.19.x-rt/0083-slub-Disable-SLUB_CPU_PARTIAL.patch similarity index 88% rename from kernel/patches-4.19.x-rt/0085-slub-disable-SLUB_CPU_PARTIAL.patch rename to kernel/patches-4.19.x-rt/0083-slub-Disable-SLUB_CPU_PARTIAL.patch index 998b4533a..6fb7a2170 100644 --- a/kernel/patches-4.19.x-rt/0085-slub-disable-SLUB_CPU_PARTIAL.patch +++ b/kernel/patches-4.19.x-rt/0083-slub-Disable-SLUB_CPU_PARTIAL.patch @@ -1,6 +1,7 @@ +From b8b912f1bb257eb44228b3bdb7652c4d6dcda56b Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Wed, 15 Apr 2015 19:00:47 +0200 -Subject: slub: Disable SLUB_CPU_PARTIAL +Subject: [PATCH 083/269] slub: Disable SLUB_CPU_PARTIAL |BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:915 |in_atomic(): 1, irqs_disabled(): 0, pid: 87, name: rcuop/7 @@ -31,9 +32,11 @@ Subject: slub: Disable SLUB_CPU_PARTIAL Signed-off-by: Sebastian Andrzej Siewior --- - init/Kconfig | 2 +- + init/Kconfig | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) +diff --git a/init/Kconfig 
b/init/Kconfig +index 707ca4d49944..68b4e39e421b 100644 --- a/init/Kconfig +++ b/init/Kconfig @@ -1698,7 +1698,7 @@ config SLAB_FREELIST_HARDENED @@ -45,3 +48,6 @@ Signed-off-by: Sebastian Andrzej Siewior bool "SLUB per cpu partial cache" help Per cpu partial caches accellerate objects allocation and freeing +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0086-mm-memcontrol-Don-t-call-schedule_work_on-in-preempt.patch b/kernel/patches-4.19.x-rt/0084-mm-memcontrol-Don-t-call-schedule_work_on-in-preempt.patch similarity index 84% rename from kernel/patches-4.19.x-rt/0086-mm-memcontrol-Don-t-call-schedule_work_on-in-preempt.patch rename to kernel/patches-4.19.x-rt/0084-mm-memcontrol-Don-t-call-schedule_work_on-in-preempt.patch index 1d870ba2a..104645c66 100644 --- a/kernel/patches-4.19.x-rt/0086-mm-memcontrol-Don-t-call-schedule_work_on-in-preempt.patch +++ b/kernel/patches-4.19.x-rt/0084-mm-memcontrol-Don-t-call-schedule_work_on-in-preempt.patch @@ -1,6 +1,8 @@ +From 107eee1a14857d0aecad3c1f56f8b4cabbadcf89 Mon Sep 17 00:00:00 2001 From: Yang Shi -Subject: mm/memcontrol: Don't call schedule_work_on in preemption disabled context Date: Wed, 30 Oct 2013 11:48:33 -0700 +Subject: [PATCH 084/269] mm/memcontrol: Don't call schedule_work_on in + preemption disabled context The following trace is triggered when running ltp oom test cases: @@ -42,13 +44,14 @@ replace the pair of get/put_cpu() to get/put_cpu_light(). Signed-off-by: Yang Shi Signed-off-by: Sebastian Andrzej Siewior --- - - mm/memcontrol.c | 4 ++-- + mm/memcontrol.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) +diff --git a/mm/memcontrol.c b/mm/memcontrol.c +index 7e7cc0cd89fe..174329de4779 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c -@@ -2052,7 +2052,7 @@ static void drain_all_stock(struct mem_c +@@ -2063,7 +2063,7 @@ static void drain_all_stock(struct mem_cgroup *root_memcg) * as well as workers from this path always operate on the local * per-cpu data. 
CPU up doesn't touch memcg_stock at all. */ @@ -57,7 +60,7 @@ Signed-off-by: Sebastian Andrzej Siewior for_each_online_cpu(cpu) { struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu); struct mem_cgroup *memcg; -@@ -2072,7 +2072,7 @@ static void drain_all_stock(struct mem_c +@@ -2083,7 +2083,7 @@ static void drain_all_stock(struct mem_cgroup *root_memcg) } css_put(&memcg->css); } @@ -66,3 +69,6 @@ Signed-off-by: Sebastian Andrzej Siewior mutex_unlock(&percpu_charge_mutex); } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0087-mm-memcontrol-do_not_disable_irq.patch b/kernel/patches-4.19.x-rt/0085-mm-memcontrol-Replace-local_irq_disable-with-local-l.patch similarity index 77% rename from kernel/patches-4.19.x-rt/0087-mm-memcontrol-do_not_disable_irq.patch rename to kernel/patches-4.19.x-rt/0085-mm-memcontrol-Replace-local_irq_disable-with-local-l.patch index b0c12fd04..86a59777b 100644 --- a/kernel/patches-4.19.x-rt/0087-mm-memcontrol-do_not_disable_irq.patch +++ b/kernel/patches-4.19.x-rt/0085-mm-memcontrol-Replace-local_irq_disable-with-local-l.patch @@ -1,15 +1,19 @@ +From b1fa5897c72583b68655f7eeca2e598dbfa8a0b5 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior -Subject: mm/memcontrol: Replace local_irq_disable with local locks Date: Wed, 28 Jan 2015 17:14:16 +0100 +Subject: [PATCH 085/269] mm/memcontrol: Replace local_irq_disable with local + locks There are a few local_irq_disable() which then take sleeping locks. This patch converts them local locks. 
Signed-off-by: Sebastian Andrzej Siewior --- - mm/memcontrol.c | 24 ++++++++++++++++-------- + mm/memcontrol.c | 24 ++++++++++++++++-------- 1 file changed, 16 insertions(+), 8 deletions(-) +diff --git a/mm/memcontrol.c b/mm/memcontrol.c +index 174329de4779..d0f245d80f93 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -69,6 +69,7 @@ @@ -29,7 +33,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* Whether legacy memory+swap accounting is active */ static bool do_memsw_account(void) { -@@ -4859,12 +4862,12 @@ static int mem_cgroup_move_account(struc +@@ -4884,12 +4887,12 @@ static int mem_cgroup_move_account(struct page *page, ret = 0; @@ -44,7 +48,7 @@ Signed-off-by: Sebastian Andrzej Siewior out_unlock: unlock_page(page); out: -@@ -5983,10 +5986,10 @@ void mem_cgroup_commit_charge(struct pag +@@ -6008,10 +6011,10 @@ void mem_cgroup_commit_charge(struct page *page, struct mem_cgroup *memcg, commit_charge(page, memcg, lrucare); @@ -57,7 +61,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (do_memsw_account() && PageSwapCache(page)) { swp_entry_t entry = { .val = page_private(page) }; -@@ -6055,7 +6058,7 @@ static void uncharge_batch(const struct +@@ -6080,7 +6083,7 @@ static void uncharge_batch(const struct uncharge_gather *ug) memcg_oom_recover(ug->memcg); } @@ -66,7 +70,7 @@ Signed-off-by: Sebastian Andrzej Siewior __mod_memcg_state(ug->memcg, MEMCG_RSS, -ug->nr_anon); __mod_memcg_state(ug->memcg, MEMCG_CACHE, -ug->nr_file); __mod_memcg_state(ug->memcg, MEMCG_RSS_HUGE, -ug->nr_huge); -@@ -6063,7 +6066,7 @@ static void uncharge_batch(const struct +@@ -6088,7 +6091,7 @@ static void uncharge_batch(const struct uncharge_gather *ug) __count_memcg_events(ug->memcg, PGPGOUT, ug->pgpgout); __this_cpu_add(ug->memcg->stat_cpu->nr_page_events, nr_pages); memcg_check_events(ug->memcg, ug->dummy_page); @@ -75,7 +79,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (!mem_cgroup_is_root(ug->memcg)) css_put_many(&ug->memcg->css, nr_pages); -@@ -6226,10 +6229,10 @@ void 
mem_cgroup_migrate(struct page *old +@@ -6251,10 +6254,10 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage) commit_charge(newpage, memcg, false); @@ -88,7 +92,7 @@ Signed-off-by: Sebastian Andrzej Siewior } DEFINE_STATIC_KEY_FALSE(memcg_sockets_enabled_key); -@@ -6421,6 +6424,7 @@ void mem_cgroup_swapout(struct page *pag +@@ -6446,6 +6449,7 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry) struct mem_cgroup *memcg, *swap_memcg; unsigned int nr_entries; unsigned short oldid; @@ -96,7 +100,7 @@ Signed-off-by: Sebastian Andrzej Siewior VM_BUG_ON_PAGE(PageLRU(page), page); VM_BUG_ON_PAGE(page_count(page), page); -@@ -6466,13 +6470,17 @@ void mem_cgroup_swapout(struct page *pag +@@ -6491,13 +6495,17 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry) * important here to have the interrupts disabled because it is the * only synchronisation we have for updating the per-CPU variables. */ @@ -114,3 +118,6 @@ Signed-off-by: Sebastian Andrzej Siewior } /** +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0088-mm_zsmalloc_copy_with_get_cpu_var_and_locking.patch b/kernel/patches-4.19.x-rt/0086-mm-zsmalloc-copy-with-get_cpu_var-and-locking.patch similarity index 84% rename from kernel/patches-4.19.x-rt/0088-mm_zsmalloc_copy_with_get_cpu_var_and_locking.patch rename to kernel/patches-4.19.x-rt/0086-mm-zsmalloc-copy-with-get_cpu_var-and-locking.patch index b3b14d392..67aa36302 100644 --- a/kernel/patches-4.19.x-rt/0088-mm_zsmalloc_copy_with_get_cpu_var_and_locking.patch +++ b/kernel/patches-4.19.x-rt/0086-mm-zsmalloc-copy-with-get_cpu_var-and-locking.patch @@ -1,6 +1,7 @@ +From 83e42c20f52f70e65d03b214fd9c8579b0128f47 Mon Sep 17 00:00:00 2001 From: Mike Galbraith Date: Tue, 22 Mar 2016 11:16:09 +0100 -Subject: [PATCH] mm/zsmalloc: copy with get_cpu_var() and locking +Subject: [PATCH 086/269] mm/zsmalloc: copy with get_cpu_var() and locking get_cpu_var() disables preemption and triggers a might_sleep() splat later. 
This is replaced with get_locked_var(). @@ -12,9 +13,11 @@ Signed-off-by: Mike Galbraith fixed the size magic] Signed-off-by: Sebastian Andrzej Siewior --- - mm/zsmalloc.c | 80 +++++++++++++++++++++++++++++++++++++++++++++++++++++----- + mm/zsmalloc.c | 80 +++++++++++++++++++++++++++++++++++++++++++++++---- 1 file changed, 74 insertions(+), 6 deletions(-) +diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c +index 9da65552e7ca..63c193c1ff96 100644 --- a/mm/zsmalloc.c +++ b/mm/zsmalloc.c @@ -55,6 +55,7 @@ @@ -49,7 +52,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * Object location (, ) is encoded as * as single (unsigned long) handle value. -@@ -320,7 +334,7 @@ static void SetZsPageMovable(struct zs_p +@@ -320,7 +334,7 @@ static void SetZsPageMovable(struct zs_pool *pool, struct zspage *zspage) {} static int create_cache(struct zs_pool *pool) { @@ -58,7 +61,7 @@ Signed-off-by: Sebastian Andrzej Siewior 0, 0, NULL); if (!pool->handle_cachep) return 1; -@@ -344,10 +358,27 @@ static void destroy_cache(struct zs_pool +@@ -344,10 +358,27 @@ static void destroy_cache(struct zs_pool *pool) static unsigned long cache_alloc_handle(struct zs_pool *pool, gfp_t gfp) { @@ -88,7 +91,7 @@ Signed-off-by: Sebastian Andrzej Siewior static void cache_free_handle(struct zs_pool *pool, unsigned long handle) { kmem_cache_free(pool->handle_cachep, (void *)handle); -@@ -366,12 +397,18 @@ static void cache_free_zspage(struct zs_ +@@ -366,12 +397,18 @@ static void cache_free_zspage(struct zs_pool *pool, struct zspage *zspage) static void record_obj(unsigned long handle, unsigned long obj) { @@ -115,7 +118,7 @@ Signed-off-by: Sebastian Andrzej Siewior static bool is_zspage_isolated(struct zspage *zspage) { -@@ -882,7 +920,13 @@ static unsigned long location_to_obj(str +@@ -882,7 +920,13 @@ static unsigned long location_to_obj(struct page *page, unsigned int obj_idx) static unsigned long handle_to_obj(unsigned long handle) { @@ -129,7 +132,7 @@ Signed-off-by: Sebastian Andrzej Siewior } static 
unsigned long obj_to_head(struct page *page, void *obj) -@@ -896,22 +940,46 @@ static unsigned long obj_to_head(struct +@@ -896,22 +940,46 @@ static unsigned long obj_to_head(struct page *page, void *obj) static inline int testpin_tag(unsigned long handle) { @@ -176,7 +179,7 @@ Signed-off-by: Sebastian Andrzej Siewior } static void reset_page(struct page *page) -@@ -1337,7 +1405,7 @@ void *zs_map_object(struct zs_pool *pool +@@ -1337,7 +1405,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle, class = pool->size_class[class_idx]; off = (class->size * obj_idx) & ~PAGE_MASK; @@ -185,7 +188,7 @@ Signed-off-by: Sebastian Andrzej Siewior area->vm_mm = mm; if (off + class->size <= PAGE_SIZE) { /* this object is contained entirely within a page */ -@@ -1391,7 +1459,7 @@ void zs_unmap_object(struct zs_pool *poo +@@ -1391,7 +1459,7 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle) __zs_unmap_object(area, pages, off, class->size); } @@ -194,3 +197,6 @@ Signed-off-by: Sebastian Andrzej Siewior migrate_read_unlock(zspage); unpin_tag(handle); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0089-x86-mm-pat-disable-preemption-__split_large_page-aft.patch b/kernel/patches-4.19.x-rt/0087-x86-mm-pat-disable-preemption-__split_large_page-aft.patch similarity index 66% rename from kernel/patches-4.19.x-rt/0089-x86-mm-pat-disable-preemption-__split_large_page-aft.patch rename to kernel/patches-4.19.x-rt/0087-x86-mm-pat-disable-preemption-__split_large_page-aft.patch index ec5c42533..d7fefb813 100644 --- a/kernel/patches-4.19.x-rt/0089-x86-mm-pat-disable-preemption-__split_large_page-aft.patch +++ b/kernel/patches-4.19.x-rt/0087-x86-mm-pat-disable-preemption-__split_large_page-aft.patch @@ -1,7 +1,8 @@ +From 2543c80b6aadc59c70c6b6e912ed1e6a9965b3c0 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Tue, 11 Dec 2018 21:53:43 +0100 -Subject: [PATCH] x86/mm/pat: disable preemption __split_large_page() after - spin_lock() +Subject: [PATCH 
087/269] x86/mm/pat: disable preemption __split_large_page() + after spin_lock() Commit "x86/mm/pat: Disable preemption around __flush_tlb_all()" added a warning if __flush_tlb_all() is invoked in preemptible context. On !RT @@ -13,20 +14,23 @@ Disable preemption to avoid the warning __flush_tlb_all(). Signed-off-by: Sebastian Andrzej Siewior --- - arch/x86/mm/pageattr.c | 8 ++++++++ + arch/x86/mm/pageattr.c | 8 ++++++++ 1 file changed, 8 insertions(+) +diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c +index e2d4b25c7aa4..9626ebb9e3c8 100644 --- a/arch/x86/mm/pageattr.c +++ b/arch/x86/mm/pageattr.c -@@ -688,11 +688,17 @@ static int +@@ -687,12 +687,18 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address, + pgprot_t ref_prot; spin_lock(&pgd_lock); - /* ++ /* + * Keep preemption disabled after __flush_tlb_all() which expects not be + * preempted during the flush of the local TLB. + */ + preempt_disable(); -+ /* + /* * Check for races, another CPU might have split this page * up for us already: */ @@ -36,7 +40,7 @@ Signed-off-by: Sebastian Andrzej Siewior spin_unlock(&pgd_lock); return 1; } -@@ -726,6 +732,7 @@ static int +@@ -726,6 +732,7 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address, break; default: @@ -44,7 +48,7 @@ Signed-off-by: Sebastian Andrzej Siewior spin_unlock(&pgd_lock); return 1; } -@@ -764,6 +771,7 @@ static int +@@ -764,6 +771,7 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address, * going on. 
*/ __flush_tlb_all(); @@ -52,3 +56,6 @@ Signed-off-by: Sebastian Andrzej Siewior spin_unlock(&pgd_lock); return 0; +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0090-radix-tree-use-local-locks.patch b/kernel/patches-4.19.x-rt/0088-radix-tree-use-local-locks.patch similarity index 77% rename from kernel/patches-4.19.x-rt/0090-radix-tree-use-local-locks.patch rename to kernel/patches-4.19.x-rt/0088-radix-tree-use-local-locks.patch index 8cde3957b..3754f603b 100644 --- a/kernel/patches-4.19.x-rt/0090-radix-tree-use-local-locks.patch +++ b/kernel/patches-4.19.x-rt/0088-radix-tree-use-local-locks.patch @@ -1,6 +1,7 @@ +From 11c1fef6d646f26007271dd7486fe14176d6e6f6 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Wed, 25 Jan 2017 16:34:27 +0100 -Subject: [PATCH] radix-tree: use local locks +Subject: [PATCH 088/269] radix-tree: use local locks The preload functionality uses per-CPU variables and preempt-disable to ensure that it does not switch CPUs during its usage. This patch adds @@ -11,14 +12,16 @@ Cc: stable-rt@vger.kernel.org Reported-and-debugged-by: Mike Galbraith Signed-off-by: Sebastian Andrzej Siewior --- - include/linux/idr.h | 5 +---- - include/linux/radix-tree.h | 7 ++----- - lib/radix-tree.c | 32 +++++++++++++++++++++++--------- + include/linux/idr.h | 5 +---- + include/linux/radix-tree.h | 7 ++----- + lib/radix-tree.c | 32 +++++++++++++++++++++++--------- 3 files changed, 26 insertions(+), 18 deletions(-) +diff --git a/include/linux/idr.h b/include/linux/idr.h +index 3ec8628ce17f..54af68158f7d 100644 --- a/include/linux/idr.h +++ b/include/linux/idr.h -@@ -169,10 +169,7 @@ static inline bool idr_is_empty(const st +@@ -169,10 +169,7 @@ static inline bool idr_is_empty(const struct idr *idr) * Each idr_preload() should be matched with an invocation of this * function. See idr_preload() for details. */ @@ -30,9 +33,11 @@ Signed-off-by: Sebastian Andrzej Siewior /** * idr_for_each_entry() - Iterate over an IDR's elements of a given type. 
+diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h +index 34149e8b5f73..affb0fc4c5b6 100644 --- a/include/linux/radix-tree.h +++ b/include/linux/radix-tree.h -@@ -330,6 +330,8 @@ unsigned int radix_tree_gang_lookup_slot +@@ -330,6 +330,8 @@ unsigned int radix_tree_gang_lookup_slot(const struct radix_tree_root *, int radix_tree_preload(gfp_t gfp_mask); int radix_tree_maybe_preload(gfp_t gfp_mask); int radix_tree_maybe_preload_order(gfp_t gfp_mask, int order); @@ -41,7 +46,7 @@ Signed-off-by: Sebastian Andrzej Siewior void radix_tree_init(void); void *radix_tree_tag_set(struct radix_tree_root *, unsigned long index, unsigned int tag); -@@ -349,11 +351,6 @@ unsigned int radix_tree_gang_lookup_tag_ +@@ -349,11 +351,6 @@ unsigned int radix_tree_gang_lookup_tag_slot(const struct radix_tree_root *, unsigned int max_items, unsigned int tag); int radix_tree_tagged(const struct radix_tree_root *, unsigned int tag); @@ -53,6 +58,8 @@ Signed-off-by: Sebastian Andrzej Siewior int radix_tree_split_preload(unsigned old_order, unsigned new_order, gfp_t); int radix_tree_split(struct radix_tree_root *, unsigned long index, unsigned new_order); +diff --git a/lib/radix-tree.c b/lib/radix-tree.c +index bc03ecc4dfd2..44257463f683 100644 --- a/lib/radix-tree.c +++ b/lib/radix-tree.c @@ -38,7 +38,7 @@ @@ -72,7 +79,7 @@ Signed-off-by: Sebastian Andrzej Siewior static inline struct radix_tree_node *entry_to_node(void *ptr) { -@@ -405,12 +406,13 @@ radix_tree_node_alloc(gfp_t gfp_mask, st +@@ -405,12 +406,13 @@ radix_tree_node_alloc(gfp_t gfp_mask, struct radix_tree_node *parent, * succeed in getting a node here (and never reach * kmem_cache_alloc) */ @@ -87,7 +94,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * Update the allocation stack trace as this is more useful * for debugging. 
-@@ -476,14 +478,14 @@ static __must_check int __radix_tree_pre +@@ -476,14 +478,14 @@ static __must_check int __radix_tree_preload(gfp_t gfp_mask, unsigned nr) */ gfp_mask &= ~__GFP_ACCOUNT; @@ -105,7 +112,7 @@ Signed-off-by: Sebastian Andrzej Siewior rtp = this_cpu_ptr(&radix_tree_preloads); if (rtp->nr < nr) { node->parent = rtp->nodes; -@@ -525,7 +527,7 @@ int radix_tree_maybe_preload(gfp_t gfp_m +@@ -525,7 +527,7 @@ int radix_tree_maybe_preload(gfp_t gfp_mask) if (gfpflags_allow_blocking(gfp_mask)) return __radix_tree_preload(gfp_mask, RADIX_TREE_PRELOAD_SIZE); /* Preloading doesn't help anything with this gfp mask, skip it */ @@ -114,7 +121,7 @@ Signed-off-by: Sebastian Andrzej Siewior return 0; } EXPORT_SYMBOL(radix_tree_maybe_preload); -@@ -563,7 +565,7 @@ int radix_tree_maybe_preload_order(gfp_t +@@ -563,7 +565,7 @@ int radix_tree_maybe_preload_order(gfp_t gfp_mask, int order) /* Preloading doesn't help anything with this gfp mask, skip it */ if (!gfpflags_allow_blocking(gfp_mask)) { @@ -123,7 +130,7 @@ Signed-off-by: Sebastian Andrzej Siewior return 0; } -@@ -597,6 +599,12 @@ int radix_tree_maybe_preload_order(gfp_t +@@ -597,6 +599,12 @@ int radix_tree_maybe_preload_order(gfp_t gfp_mask, int order) return __radix_tree_preload(gfp_mask, nr_nodes); } @@ -154,7 +161,7 @@ Signed-off-by: Sebastian Andrzej Siewior int ida_pre_get(struct ida *ida, gfp_t gfp) { /* -@@ -2114,7 +2128,7 @@ int ida_pre_get(struct ida *ida, gfp_t g +@@ -2114,7 +2128,7 @@ int ida_pre_get(struct ida *ida, gfp_t gfp) * to return to the ida_pre_get() step. 
*/ if (!__radix_tree_preload(gfp, IDA_PRELOAD_SIZE)) @@ -163,3 +170,6 @@ Signed-off-by: Sebastian Andrzej Siewior if (!this_cpu_read(ida_bitmap)) { struct ida_bitmap *bitmap = kzalloc(sizeof(*bitmap), gfp); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0091-timers-prepare-for-full-preemption.patch b/kernel/patches-4.19.x-rt/0089-timers-Prepare-for-full-preemption.patch similarity index 76% rename from kernel/patches-4.19.x-rt/0091-timers-prepare-for-full-preemption.patch rename to kernel/patches-4.19.x-rt/0089-timers-Prepare-for-full-preemption.patch index f8a40a25b..008963c58 100644 --- a/kernel/patches-4.19.x-rt/0091-timers-prepare-for-full-preemption.patch +++ b/kernel/patches-4.19.x-rt/0089-timers-Prepare-for-full-preemption.patch @@ -1,6 +1,7 @@ +From 558451a44923dab908e500200b3f6f02fd6e4fae Mon Sep 17 00:00:00 2001 From: Ingo Molnar Date: Fri, 3 Jul 2009 08:29:34 -0500 -Subject: timers: Prepare for full preemption +Subject: [PATCH 089/269] timers: Prepare for full preemption When softirqs can be preempted we need to make sure that cancelling the timer from the active thread can not deadlock vs. a running timer @@ -8,16 +9,17 @@ callback. Add a waitqueue to resolve that. 
Signed-off-by: Ingo Molnar Signed-off-by: Thomas Gleixner - --- - include/linux/timer.h | 2 +- - kernel/sched/core.c | 9 +++++++-- - kernel/time/timer.c | 45 +++++++++++++++++++++++++++++++++++++++++---- + include/linux/timer.h | 2 +- + kernel/sched/core.c | 9 +++++++-- + kernel/time/timer.c | 45 +++++++++++++++++++++++++++++++++++++++---- 3 files changed, 49 insertions(+), 7 deletions(-) +diff --git a/include/linux/timer.h b/include/linux/timer.h +index 7b066fd38248..54627d046b3a 100644 --- a/include/linux/timer.h +++ b/include/linux/timer.h -@@ -172,7 +172,7 @@ extern void add_timer(struct timer_list +@@ -172,7 +172,7 @@ extern void add_timer(struct timer_list *timer); extern int try_to_del_timer_sync(struct timer_list *timer); @@ -26,9 +28,11 @@ Signed-off-by: Thomas Gleixner extern int del_timer_sync(struct timer_list *timer); #else # define del_timer_sync(t) del_timer(t) +diff --git a/kernel/sched/core.c b/kernel/sched/core.c +index 9c4a9f0a627b..ddf6282d9780 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c -@@ -496,11 +496,14 @@ void resched_cpu(int cpu) +@@ -498,11 +498,14 @@ void resched_cpu(int cpu) */ int get_nohz_timer_target(void) { @@ -45,7 +49,7 @@ Signed-off-by: Thomas Gleixner rcu_read_lock(); for_each_domain(cpu, sd) { -@@ -519,6 +522,8 @@ int get_nohz_timer_target(void) +@@ -521,6 +524,8 @@ int get_nohz_timer_target(void) cpu = housekeeping_any_cpu(HK_FLAG_TIMER); unlock: rcu_read_unlock(); @@ -54,6 +58,8 @@ Signed-off-by: Thomas Gleixner return cpu; } +diff --git a/kernel/time/timer.c b/kernel/time/timer.c +index fa49cd753dea..bbe24e241643 100644 --- a/kernel/time/timer.c +++ b/kernel/time/timer.c @@ -44,6 +44,7 @@ @@ -74,7 +80,7 @@ Signed-off-by: Thomas Gleixner unsigned long clk; unsigned long next_expiry; unsigned int cpu; -@@ -1178,6 +1182,33 @@ void add_timer_on(struct timer_list *tim +@@ -1178,6 +1182,33 @@ void add_timer_on(struct timer_list *timer, int cpu) } EXPORT_SYMBOL_GPL(add_timer_on); @@ -108,7 +114,7 @@ Signed-off-by: 
Thomas Gleixner /** * del_timer - deactivate a timer. * @timer: the timer to be deactivated -@@ -1233,7 +1264,7 @@ int try_to_del_timer_sync(struct timer_l +@@ -1233,7 +1264,7 @@ int try_to_del_timer_sync(struct timer_list *timer) } EXPORT_SYMBOL(try_to_del_timer_sync); @@ -117,7 +123,7 @@ Signed-off-by: Thomas Gleixner /** * del_timer_sync - deactivate a timer and wait for the handler to finish. * @timer: the timer to be deactivated -@@ -1293,7 +1324,7 @@ int del_timer_sync(struct timer_list *ti +@@ -1293,7 +1324,7 @@ int del_timer_sync(struct timer_list *timer) int ret = try_to_del_timer_sync(timer); if (ret >= 0) return ret; @@ -126,7 +132,7 @@ Signed-off-by: Thomas Gleixner } } EXPORT_SYMBOL(del_timer_sync); -@@ -1354,13 +1385,16 @@ static void expire_timers(struct timer_b +@@ -1354,13 +1385,16 @@ static void expire_timers(struct timer_base *base, struct hlist_head *head) fn = timer->function; @@ -144,7 +150,7 @@ Signed-off-by: Thomas Gleixner raw_spin_lock_irq(&base->lock); } } -@@ -1681,8 +1715,8 @@ static inline void __run_timers(struct t +@@ -1681,8 +1715,8 @@ static inline void __run_timers(struct timer_base *base) while (levels--) expire_timers(base, heads + levels); } @@ -154,7 +160,7 @@ Signed-off-by: Thomas Gleixner } /* -@@ -1927,6 +1961,9 @@ static void __init init_timer_cpu(int cp +@@ -1927,6 +1961,9 @@ static void __init init_timer_cpu(int cpu) base->cpu = cpu; raw_spin_lock_init(&base->lock); base->clk = jiffies; @@ -164,3 +170,6 @@ Signed-off-by: Thomas Gleixner } } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0092-x86-kvm-require-const-tsc-for-rt.patch b/kernel/patches-4.19.x-rt/0090-x86-kvm-Require-const-tsc-for-RT.patch similarity index 65% rename from kernel/patches-4.19.x-rt/0092-x86-kvm-require-const-tsc-for-rt.patch rename to kernel/patches-4.19.x-rt/0090-x86-kvm-Require-const-tsc-for-RT.patch index b04b7841c..249874077 100644 --- a/kernel/patches-4.19.x-rt/0092-x86-kvm-require-const-tsc-for-rt.patch +++ 
b/kernel/patches-4.19.x-rt/0090-x86-kvm-Require-const-tsc-for-RT.patch @@ -1,6 +1,7 @@ -Subject: x86: kvm Require const tsc for RT +From ea0ad5586875098798cbf5d53bb21f2a5b82e537 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner -Date: Sun, 06 Nov 2011 12:26:18 +0100 +Date: Sun, 6 Nov 2011 12:26:18 +0100 +Subject: [PATCH 090/269] x86: kvm Require const tsc for RT Non constant TSC is a nightmare on bare metal already, but with virtualization it becomes a complete disaster because the workarounds @@ -9,12 +10,14 @@ a guest on top of a RT host. Signed-off-by: Thomas Gleixner --- - arch/x86/kvm/x86.c | 7 +++++++ + arch/x86/kvm/x86.c | 7 +++++++ 1 file changed, 7 insertions(+) +diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c +index 4a61e1609c97..0b4fd313b626 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c -@@ -6698,6 +6698,13 @@ int kvm_arch_init(void *opaque) +@@ -6725,6 +6725,13 @@ int kvm_arch_init(void *opaque) goto out; } @@ -28,3 +31,6 @@ Signed-off-by: Thomas Gleixner r = kvm_mmu_module_init(); if (r) goto out_free_percpu; +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0093-pci-switchtec-Don-t-use-completion-s-wait-queue.patch b/kernel/patches-4.19.x-rt/0091-pci-switchtec-Don-t-use-completion-s-wait-queue.patch similarity index 76% rename from kernel/patches-4.19.x-rt/0093-pci-switchtec-Don-t-use-completion-s-wait-queue.patch rename to kernel/patches-4.19.x-rt/0091-pci-switchtec-Don-t-use-completion-s-wait-queue.patch index 006169578..a1e0f68d6 100644 --- a/kernel/patches-4.19.x-rt/0093-pci-switchtec-Don-t-use-completion-s-wait-queue.patch +++ b/kernel/patches-4.19.x-rt/0091-pci-switchtec-Don-t-use-completion-s-wait-queue.patch @@ -1,6 +1,7 @@ +From 8d76a7f3ba4284defc688a9131aa96e66eb1310a Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Wed, 4 Oct 2017 10:24:23 +0200 -Subject: [PATCH] pci/switchtec: Don't use completion's wait queue +Subject: [PATCH 091/269] pci/switchtec: Don't use completion's wait queue The poll callback is using 
completion's wait_queue_head_t member and puts it in poll_wait() so the poll() caller gets a wakeup after command @@ -18,9 +19,11 @@ Cc: Kurt Schwemmer Cc: Logan Gunthorpe Signed-off-by: Sebastian Andrzej Siewior --- - drivers/pci/switch/switchtec.c | 22 +++++++++++++--------- + drivers/pci/switch/switchtec.c | 22 +++++++++++++--------- 1 file changed, 13 insertions(+), 9 deletions(-) +diff --git a/drivers/pci/switch/switchtec.c b/drivers/pci/switch/switchtec.c +index 37d0c15c9eeb..c396f3ef1852 100644 --- a/drivers/pci/switch/switchtec.c +++ b/drivers/pci/switch/switchtec.c @@ -43,10 +43,11 @@ struct switchtec_user { @@ -36,7 +39,7 @@ Signed-off-by: Sebastian Andrzej Siewior u32 cmd; u32 status; u32 return_code; -@@ -68,7 +69,7 @@ static struct switchtec_user *stuser_cre +@@ -68,7 +69,7 @@ static struct switchtec_user *stuser_create(struct switchtec_dev *stdev) stuser->stdev = stdev; kref_init(&stuser->kref); INIT_LIST_HEAD(&stuser->list); @@ -45,7 +48,7 @@ Signed-off-by: Sebastian Andrzej Siewior stuser->event_cnt = atomic_read(&stdev->event_cnt); dev_dbg(&stdev->dev, "%s: %p\n", __func__, stuser); -@@ -151,7 +152,7 @@ static int mrpc_queue_cmd(struct switcht +@@ -151,7 +152,7 @@ static int mrpc_queue_cmd(struct switchtec_user *stuser) kref_get(&stuser->kref); stuser->read_len = sizeof(stuser->data); stuser_set_state(stuser, MRPC_QUEUED); @@ -54,7 +57,7 @@ Signed-off-by: Sebastian Andrzej Siewior list_add_tail(&stuser->list, &stdev->mrpc_queue); mrpc_cmd_submit(stdev); -@@ -188,7 +189,8 @@ static void mrpc_complete_cmd(struct swi +@@ -188,7 +189,8 @@ static void mrpc_complete_cmd(struct switchtec_dev *stdev) stuser->read_len); out: @@ -64,7 +67,7 @@ Signed-off-by: Sebastian Andrzej Siewior list_del_init(&stuser->list); stuser_put(stuser); stdev->mrpc_busy = 0; -@@ -458,10 +460,11 @@ static ssize_t switchtec_dev_read(struct +@@ -458,10 +460,11 @@ static ssize_t switchtec_dev_read(struct file *filp, char __user *data, mutex_unlock(&stdev->mrpc_mutex); if 
(filp->f_flags & O_NONBLOCK) { @@ -78,7 +81,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (rc < 0) return rc; } -@@ -509,7 +512,7 @@ static __poll_t switchtec_dev_poll(struc +@@ -509,7 +512,7 @@ static __poll_t switchtec_dev_poll(struct file *filp, poll_table *wait) struct switchtec_dev *stdev = stuser->stdev; __poll_t ret = 0; @@ -87,7 +90,7 @@ Signed-off-by: Sebastian Andrzej Siewior poll_wait(filp, &stdev->event_wq, wait); if (lock_mutex_and_test_alive(stdev)) -@@ -517,7 +520,7 @@ static __poll_t switchtec_dev_poll(struc +@@ -517,7 +520,7 @@ static __poll_t switchtec_dev_poll(struct file *filp, poll_table *wait) mutex_unlock(&stdev->mrpc_mutex); @@ -96,7 +99,7 @@ Signed-off-by: Sebastian Andrzej Siewior ret |= EPOLLIN | EPOLLRDNORM; if (stuser->event_cnt != atomic_read(&stdev->event_cnt)) -@@ -1041,7 +1044,8 @@ static void stdev_kill(struct switchtec_ +@@ -1041,7 +1044,8 @@ static void stdev_kill(struct switchtec_dev *stdev) /* Wake up and kill any users waiting on an MRPC request */ list_for_each_entry_safe(stuser, tmpuser, &stdev->mrpc_queue, list) { @@ -106,3 +109,6 @@ Signed-off-by: Sebastian Andrzej Siewior list_del_init(&stuser->list); stuser_put(stuser); } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0094-wait.h-include-atomic.h.patch b/kernel/patches-4.19.x-rt/0092-wait.h-include-atomic.h.patch similarity index 73% rename from kernel/patches-4.19.x-rt/0094-wait.h-include-atomic.h.patch rename to kernel/patches-4.19.x-rt/0092-wait.h-include-atomic.h.patch index 0a04f7859..b12bd9efa 100644 --- a/kernel/patches-4.19.x-rt/0094-wait.h-include-atomic.h.patch +++ b/kernel/patches-4.19.x-rt/0092-wait.h-include-atomic.h.patch @@ -1,6 +1,10 @@ +From f8a4f74be5bbce9f9664ebf005bb35f26875858f Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Mon, 28 Oct 2013 12:19:57 +0100 -Subject: wait.h: include atomic.h +Subject: [PATCH 092/269] wait.h: include atomic.h +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 
8bit | CC init/main.o |In file included from include/linux/mmzone.h:9:0, @@ -17,9 +21,11 @@ This pops up on ARM. Non-RT gets its atomic.h include from spinlock.h Signed-off-by: Sebastian Andrzej Siewior --- - include/linux/wait.h | 1 + + include/linux/wait.h | 1 + 1 file changed, 1 insertion(+) +diff --git a/include/linux/wait.h b/include/linux/wait.h +index ed7c122cb31f..2b5ef8e94d19 100644 --- a/include/linux/wait.h +++ b/include/linux/wait.h @@ -10,6 +10,7 @@ @@ -30,3 +36,6 @@ Signed-off-by: Sebastian Andrzej Siewior typedef struct wait_queue_entry wait_queue_entry_t; +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0095-work-simple-Simple-work-queue-implemenation.patch b/kernel/patches-4.19.x-rt/0093-work-simple-Simple-work-queue-implemenation.patch similarity index 87% rename from kernel/patches-4.19.x-rt/0095-work-simple-Simple-work-queue-implemenation.patch rename to kernel/patches-4.19.x-rt/0093-work-simple-Simple-work-queue-implemenation.patch index 3aa8f7b5a..f62083cba 100644 --- a/kernel/patches-4.19.x-rt/0095-work-simple-Simple-work-queue-implemenation.patch +++ b/kernel/patches-4.19.x-rt/0093-work-simple-Simple-work-queue-implemenation.patch @@ -1,6 +1,7 @@ +From 7cf55f71248f4f3c603383a84c73c5e44bfb9229 Mon Sep 17 00:00:00 2001 From: Daniel Wagner Date: Fri, 11 Jul 2014 15:26:11 +0200 -Subject: work-simple: Simple work queue implemenation +Subject: [PATCH 093/269] work-simple: Simple work queue implemenation Provides a framework for enqueuing callbacks from irq context PREEMPT_RT_FULL safe. The callbacks are executed in kthread context. @@ -10,11 +11,16 @@ Bases on wait-simple. 
Cc: Sebastian Andrzej Siewior Signed-off-by: Daniel Wagner --- - include/linux/swork.h | 24 ++++++ - kernel/sched/Makefile | 2 - kernel/sched/swork.c | 173 ++++++++++++++++++++++++++++++++++++++++++++++++++ + include/linux/swork.h | 24 ++++++ + kernel/sched/Makefile | 2 +- + kernel/sched/swork.c | 173 ++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 198 insertions(+), 1 deletion(-) + create mode 100644 include/linux/swork.h + create mode 100644 kernel/sched/swork.c +diff --git a/include/linux/swork.h b/include/linux/swork.h +new file mode 100644 +index 000000000000..f175fa9a6016 --- /dev/null +++ b/include/linux/swork.h @@ -0,0 +1,24 @@ @@ -42,6 +48,8 @@ Signed-off-by: Daniel Wagner +void swork_put(void); + +#endif /* _LINUX_SWORK_H */ +diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile +index 7fe183404c38..2b765aa4e2c4 100644 --- a/kernel/sched/Makefile +++ b/kernel/sched/Makefile @@ -18,7 +18,7 @@ endif @@ -53,6 +61,9 @@ Signed-off-by: Daniel Wagner obj-$(CONFIG_SMP) += cpupri.o cpudeadline.o topology.o stop_task.o pelt.o obj-$(CONFIG_SCHED_AUTOGROUP) += autogroup.o +diff --git a/kernel/sched/swork.c b/kernel/sched/swork.c +new file mode 100644 +index 000000000000..a5b89fdacf19 --- /dev/null +++ b/kernel/sched/swork.c @@ -0,0 +1,173 @@ @@ -229,3 +240,6 @@ Signed-off-by: Daniel Wagner + mutex_unlock(&worker_mutex); +} +EXPORT_SYMBOL_GPL(swork_put); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0096-work-simple-drop-a-shit-statement-in-SWORK_EVENT_PEN.patch b/kernel/patches-4.19.x-rt/0094-work-simple-drop-a-shit-statement-in-SWORK_EVENT_PEN.patch similarity index 73% rename from kernel/patches-4.19.x-rt/0096-work-simple-drop-a-shit-statement-in-SWORK_EVENT_PEN.patch rename to kernel/patches-4.19.x-rt/0094-work-simple-drop-a-shit-statement-in-SWORK_EVENT_PEN.patch index 28dcceea5..1370fc7bd 100644 --- a/kernel/patches-4.19.x-rt/0096-work-simple-drop-a-shit-statement-in-SWORK_EVENT_PEN.patch +++ 
b/kernel/patches-4.19.x-rt/0094-work-simple-drop-a-shit-statement-in-SWORK_EVENT_PEN.patch @@ -1,6 +1,8 @@ +From ba25a567c5891e2b1acd586212b0fd92ce755e71 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Mon, 10 Sep 2018 18:00:31 +0200 -Subject: [PATCH] work-simple: drop a shit statement in SWORK_EVENT_PENDING +Subject: [PATCH 094/269] work-simple: drop a shit statement in + SWORK_EVENT_PENDING Dan Carpenter reported | smatch warnings: @@ -13,9 +15,11 @@ Nevertheless I'm dropping that shift by zero to keep smatch quiet. Cc: Daniel Wagner Signed-off-by: Sebastian Andrzej Siewior --- - kernel/sched/swork.c | 2 +- + kernel/sched/swork.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) +diff --git a/kernel/sched/swork.c b/kernel/sched/swork.c +index a5b89fdacf19..c90d14b9b126 100644 --- a/kernel/sched/swork.c +++ b/kernel/sched/swork.c @@ -12,7 +12,7 @@ @@ -27,3 +31,6 @@ Signed-off-by: Sebastian Andrzej Siewior static DEFINE_MUTEX(worker_mutex); static struct sworker *glob_worker; +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0097-completion-use-simple-wait-queues.patch b/kernel/patches-4.19.x-rt/0095-completion-Use-simple-wait-queues.patch similarity index 74% rename from kernel/patches-4.19.x-rt/0097-completion-use-simple-wait-queues.patch rename to kernel/patches-4.19.x-rt/0095-completion-Use-simple-wait-queues.patch index 73bf7cf14..bfd797138 100644 --- a/kernel/patches-4.19.x-rt/0097-completion-use-simple-wait-queues.patch +++ b/kernel/patches-4.19.x-rt/0095-completion-Use-simple-wait-queues.patch @@ -1,6 +1,7 @@ -Subject: completion: Use simple wait queues +From d24dfe04ec75d5329d870c0d20f56f2cba4563ec Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Fri, 11 Jan 2013 11:23:51 +0100 +Subject: [PATCH 095/269] completion: Use simple wait queues Completions have no long lasting callbacks and therefor do not need the complex waitqueue variant. Use simple waitqueues which reduces the @@ -8,23 +9,25 @@ contention on the waitqueue lock. 
Signed-off-by: Thomas Gleixner --- - arch/powerpc/platforms/ps3/device-init.c | 4 +- - drivers/net/wireless/intersil/orinoco/orinoco_usb.c | 4 +- - drivers/usb/gadget/function/f_fs.c | 2 - - drivers/usb/gadget/legacy/inode.c | 4 +- - include/linux/completion.h | 8 ++-- - include/linux/suspend.h | 6 +++ - include/linux/swait.h | 2 + - kernel/power/hibernate.c | 7 ++++ - kernel/power/suspend.c | 4 ++ - kernel/sched/completion.c | 34 ++++++++++---------- - kernel/sched/core.c | 10 ++++- - kernel/sched/swait.c | 21 +++++++++++- + arch/powerpc/platforms/ps3/device-init.c | 4 +-- + .../wireless/intersil/orinoco/orinoco_usb.c | 4 +-- + drivers/usb/gadget/function/f_fs.c | 2 +- + drivers/usb/gadget/legacy/inode.c | 4 +-- + include/linux/completion.h | 8 ++--- + include/linux/suspend.h | 6 ++++ + include/linux/swait.h | 2 ++ + kernel/power/hibernate.c | 7 ++++ + kernel/power/suspend.c | 4 +++ + kernel/sched/completion.c | 34 +++++++++---------- + kernel/sched/core.c | 10 ++++-- + kernel/sched/swait.c | 21 +++++++++++- 12 files changed, 75 insertions(+), 31 deletions(-) +diff --git a/arch/powerpc/platforms/ps3/device-init.c b/arch/powerpc/platforms/ps3/device-init.c +index e7075aaff1bb..1580464a9d5b 100644 --- a/arch/powerpc/platforms/ps3/device-init.c +++ b/arch/powerpc/platforms/ps3/device-init.c -@@ -752,8 +752,8 @@ static int ps3_notification_read_write(s +@@ -752,8 +752,8 @@ static int ps3_notification_read_write(struct ps3_notification_device *dev, } pr_debug("%s:%u: notification %s issued\n", __func__, __LINE__, op); @@ -35,9 +38,11 @@ Signed-off-by: Thomas Gleixner if (kthread_should_stop()) res = -EINTR; if (res) { +diff --git a/drivers/net/wireless/intersil/orinoco/orinoco_usb.c b/drivers/net/wireless/intersil/orinoco/orinoco_usb.c +index 94ad6fe29e69..52a49f0bbc19 100644 --- a/drivers/net/wireless/intersil/orinoco/orinoco_usb.c +++ b/drivers/net/wireless/intersil/orinoco/orinoco_usb.c -@@ -697,8 +697,8 @@ static void ezusb_req_ctx_wait(struct ez +@@ -697,8 +697,8 
@@ static void ezusb_req_ctx_wait(struct ezusb_priv *upriv, while (!ctx->done.done && msecs--) udelay(1000); } else { @@ -48,9 +53,11 @@ Signed-off-by: Thomas Gleixner } break; default: +diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c +index aa15593a3ac4..5e9269cd14fa 100644 --- a/drivers/usb/gadget/function/f_fs.c +++ b/drivers/usb/gadget/function/f_fs.c -@@ -1623,7 +1623,7 @@ static void ffs_data_put(struct ffs_data +@@ -1624,7 +1624,7 @@ static void ffs_data_put(struct ffs_data *ffs) pr_info("%s(): freeing\n", __func__); ffs_data_clear(ffs); BUG_ON(waitqueue_active(&ffs->ev.waitq) || @@ -59,9 +66,11 @@ Signed-off-by: Thomas Gleixner waitqueue_active(&ffs->wait)); destroy_workqueue(ffs->io_completion_wq); kfree(ffs->dev_name); +diff --git a/drivers/usb/gadget/legacy/inode.c b/drivers/usb/gadget/legacy/inode.c +index 37ca0e669bd8..56a16587b221 100644 --- a/drivers/usb/gadget/legacy/inode.c +++ b/drivers/usb/gadget/legacy/inode.c -@@ -343,7 +343,7 @@ ep_io (struct ep_data *epdata, void *buf +@@ -343,7 +343,7 @@ ep_io (struct ep_data *epdata, void *buf, unsigned len) spin_unlock_irq (&epdata->dev->lock); if (likely (value == 0)) { @@ -70,7 +79,7 @@ Signed-off-by: Thomas Gleixner if (value != 0) { spin_lock_irq (&epdata->dev->lock); if (likely (epdata->ep != NULL)) { -@@ -352,7 +352,7 @@ ep_io (struct ep_data *epdata, void *buf +@@ -352,7 +352,7 @@ ep_io (struct ep_data *epdata, void *buf, unsigned len) usb_ep_dequeue (epdata->ep, epdata->req); spin_unlock_irq (&epdata->dev->lock); @@ -79,6 +88,8 @@ Signed-off-by: Thomas Gleixner if (epdata->status == -ECONNRESET) epdata->status = -EINTR; } else { +diff --git a/include/linux/completion.h b/include/linux/completion.h +index 519e94915d18..bf8e77001f18 100644 --- a/include/linux/completion.h +++ b/include/linux/completion.h @@ -9,7 +9,7 @@ @@ -99,7 +110,7 @@ Signed-off-by: Thomas Gleixner }; #define init_completion_map(x, m) __init_completion(x) -@@ -34,7 +34,7 @@ static inline void 
complete_acquire(stru +@@ -34,7 +34,7 @@ static inline void complete_acquire(struct completion *x) {} static inline void complete_release(struct completion *x) {} #define COMPLETION_INITIALIZER(work) \ @@ -108,7 +119,7 @@ Signed-off-by: Thomas Gleixner #define COMPLETION_INITIALIZER_ONSTACK_MAP(work, map) \ (*({ init_completion_map(&(work), &(map)); &(work); })) -@@ -85,7 +85,7 @@ static inline void complete_release(stru +@@ -85,7 +85,7 @@ static inline void complete_release(struct completion *x) {} static inline void __init_completion(struct completion *x) { x->done = 0; @@ -117,6 +128,8 @@ Signed-off-by: Thomas Gleixner } /** +diff --git a/include/linux/suspend.h b/include/linux/suspend.h +index 3f529ad9a9d2..328439ce71f5 100644 --- a/include/linux/suspend.h +++ b/include/linux/suspend.h @@ -196,6 +196,12 @@ struct platform_s2idle_ops { @@ -132,9 +145,11 @@ Signed-off-by: Thomas Gleixner #ifdef CONFIG_SUSPEND extern suspend_state_t mem_sleep_current; extern suspend_state_t mem_sleep_default; +diff --git a/include/linux/swait.h b/include/linux/swait.h +index 73e06e9986d4..f426a0661aa0 100644 --- a/include/linux/swait.h +++ b/include/linux/swait.h -@@ -160,7 +160,9 @@ static inline bool swq_has_sleeper(struc +@@ -160,7 +160,9 @@ static inline bool swq_has_sleeper(struct swait_queue_head *wq) extern void swake_up_one(struct swait_queue_head *q); extern void swake_up_all(struct swait_queue_head *q); extern void swake_up_locked(struct swait_queue_head *q); @@ -144,6 +159,8 @@ Signed-off-by: Thomas Gleixner extern void prepare_to_swait_exclusive(struct swait_queue_head *q, struct swait_queue *wait, int state); extern long prepare_to_swait_event(struct swait_queue_head *q, struct swait_queue *wait, int state); +diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c +index abef759de7c8..69e418787f21 100644 --- a/kernel/power/hibernate.c +++ b/kernel/power/hibernate.c @@ -681,6 +681,10 @@ static int load_image_and_restore(void) @@ -174,9 +191,11 @@ 
Signed-off-by: Thomas Gleixner pr_info("hibernation exit\n"); return error; +diff --git a/kernel/power/suspend.c b/kernel/power/suspend.c +index 0bd595a0b610..a4456772d98e 100644 --- a/kernel/power/suspend.c +++ b/kernel/power/suspend.c -@@ -600,6 +600,8 @@ static int enter_state(suspend_state_t s +@@ -600,6 +600,8 @@ static int enter_state(suspend_state_t state) return error; } @@ -201,6 +220,8 @@ Signed-off-by: Thomas Gleixner return error; } EXPORT_SYMBOL(pm_suspend); +diff --git a/kernel/sched/completion.c b/kernel/sched/completion.c +index a1ad5b7d5521..755a58084978 100644 --- a/kernel/sched/completion.c +++ b/kernel/sched/completion.c @@ -29,12 +29,12 @@ void complete(struct completion *x) @@ -259,7 +280,7 @@ Signed-off-by: Thomas Gleixner if (!x->done) return timeout; } -@@ -100,9 +100,9 @@ static inline long __sched +@@ -100,9 +100,9 @@ __wait_for_common(struct completion *x, complete_acquire(x); @@ -271,7 +292,7 @@ Signed-off-by: Thomas Gleixner complete_release(x); -@@ -291,12 +291,12 @@ bool try_wait_for_completion(struct comp +@@ -291,12 +291,12 @@ bool try_wait_for_completion(struct completion *x) if (!READ_ONCE(x->done)) return false; @@ -286,7 +307,7 @@ Signed-off-by: Thomas Gleixner return ret; } EXPORT_SYMBOL(try_wait_for_completion); -@@ -322,8 +322,8 @@ bool completion_done(struct completion * +@@ -322,8 +322,8 @@ bool completion_done(struct completion *x) * otherwise we can end up freeing the completion before complete() * is done referencing it. 
*/ @@ -297,9 +318,11 @@ Signed-off-by: Thomas Gleixner return true; } EXPORT_SYMBOL(completion_done); +diff --git a/kernel/sched/core.c b/kernel/sched/core.c +index ddf6282d9780..8272d920b749 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c -@@ -7107,7 +7107,10 @@ void migrate_disable(void) +@@ -7109,7 +7109,10 @@ void migrate_disable(void) return; } #ifdef CONFIG_SCHED_DEBUG @@ -311,7 +334,7 @@ Signed-off-by: Thomas Gleixner #endif if (p->migrate_disable) { -@@ -7137,7 +7140,10 @@ void migrate_enable(void) +@@ -7139,7 +7142,10 @@ void migrate_enable(void) } #ifdef CONFIG_SCHED_DEBUG @@ -323,9 +346,11 @@ Signed-off-by: Thomas Gleixner #endif WARN_ON_ONCE(p->migrate_disable <= 0); +diff --git a/kernel/sched/swait.c b/kernel/sched/swait.c +index 66b59ac77c22..c7cb30cdd1b7 100644 --- a/kernel/sched/swait.c +++ b/kernel/sched/swait.c -@@ -32,6 +32,25 @@ void swake_up_locked(struct swait_queue_ +@@ -32,6 +32,25 @@ void swake_up_locked(struct swait_queue_head *q) } EXPORT_SYMBOL(swake_up_locked); @@ -351,7 +376,7 @@ Signed-off-by: Thomas Gleixner void swake_up_one(struct swait_queue_head *q) { unsigned long flags; -@@ -69,7 +88,7 @@ void swake_up_all(struct swait_queue_hea +@@ -69,7 +88,7 @@ void swake_up_all(struct swait_queue_head *q) } EXPORT_SYMBOL(swake_up_all); @@ -360,3 +385,6 @@ Signed-off-by: Thomas Gleixner { wait->task = current; if (list_empty(&wait->task_list)) +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0098-fs-aio-simple-simple-work.patch b/kernel/patches-4.19.x-rt/0096-fs-aio-simple-simple-work.patch similarity index 87% rename from kernel/patches-4.19.x-rt/0098-fs-aio-simple-simple-work.patch rename to kernel/patches-4.19.x-rt/0096-fs-aio-simple-simple-work.patch index 948376a9e..672de2fce 100644 --- a/kernel/patches-4.19.x-rt/0098-fs-aio-simple-simple-work.patch +++ b/kernel/patches-4.19.x-rt/0096-fs-aio-simple-simple-work.patch @@ -1,6 +1,7 @@ +From 39010d30f3244de6b51646a0325b6292d8c84282 Mon Sep 17 00:00:00 2001 From: Sebastian 
Andrzej Siewior Date: Mon, 16 Feb 2015 18:49:10 +0100 -Subject: fs/aio: simple simple work +Subject: [PATCH 096/269] fs/aio: simple simple work |BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:768 |in_atomic(): 1, irqs_disabled(): 0, pid: 26, name: rcuos/2 @@ -24,9 +25,11 @@ Reported-By: Mike Galbraith Suggested-by: Benjamin LaHaise Signed-off-by: Sebastian Andrzej Siewior --- - fs/aio.c | 15 +++++++++++++-- + fs/aio.c | 15 +++++++++++++-- 1 file changed, 13 insertions(+), 2 deletions(-) +diff --git a/fs/aio.c b/fs/aio.c +index 45d5ef8dd0a8..7db10b87c9bc 100644 --- a/fs/aio.c +++ b/fs/aio.c @@ -42,6 +42,7 @@ @@ -53,7 +56,7 @@ Signed-off-by: Sebastian Andrzej Siewior aio_mnt = kern_mount(&aio_fs); if (IS_ERR(aio_mnt)) panic("Failed to create aio fs mount."); -@@ -596,9 +599,9 @@ static void free_ioctx_reqs(struct percp +@@ -596,9 +599,9 @@ static void free_ioctx_reqs(struct percpu_ref *ref) * and ctx->users has dropped to 0, so we know no more kiocbs can be submitted - * now it's safe to cancel any that need to be. 
*/ @@ -65,7 +68,7 @@ Signed-off-by: Sebastian Andrzej Siewior struct aio_kiocb *req; spin_lock_irq(&ctx->ctx_lock); -@@ -616,6 +619,14 @@ static void free_ioctx_users(struct perc +@@ -616,6 +619,14 @@ static void free_ioctx_users(struct percpu_ref *ref) percpu_ref_put(&ctx->reqs); } @@ -80,3 +83,6 @@ Signed-off-by: Sebastian Andrzej Siewior static int ioctx_add_table(struct kioctx *ctx, struct mm_struct *mm) { unsigned i, new_nr; +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0099-genirq-do-not-invoke-the-affinity-callback-via-a-wor.patch b/kernel/patches-4.19.x-rt/0097-genirq-Do-not-invoke-the-affinity-callback-via-a-wor.patch similarity index 78% rename from kernel/patches-4.19.x-rt/0099-genirq-do-not-invoke-the-affinity-callback-via-a-wor.patch rename to kernel/patches-4.19.x-rt/0097-genirq-Do-not-invoke-the-affinity-callback-via-a-wor.patch index 0756a713a..c2904d389 100644 --- a/kernel/patches-4.19.x-rt/0099-genirq-do-not-invoke-the-affinity-callback-via-a-wor.patch +++ b/kernel/patches-4.19.x-rt/0097-genirq-Do-not-invoke-the-affinity-callback-via-a-wor.patch @@ -1,6 +1,8 @@ +From 2010005b28eea662f9390937d92563ea1c466e24 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Wed, 21 Aug 2013 17:48:46 +0200 -Subject: genirq: Do not invoke the affinity callback via a workqueue on RT +Subject: [PATCH 097/269] genirq: Do not invoke the affinity callback via a + workqueue on RT Joe Korty reported, that __irq_set_affinity_locked() schedules a workqueue while holding a rawlock which results in a might_sleep() @@ -9,10 +11,12 @@ This patch uses swork_queue() instead. 
Signed-off-by: Sebastian Andrzej Siewior --- - include/linux/interrupt.h | 6 ++++++ - kernel/irq/manage.c | 43 ++++++++++++++++++++++++++++++++++++++++--- + include/linux/interrupt.h | 6 ++++++ + kernel/irq/manage.c | 43 ++++++++++++++++++++++++++++++++++++--- 2 files changed, 46 insertions(+), 3 deletions(-) +diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h +index 315f852b4981..a943c07b54ba 100644 --- a/include/linux/interrupt.h +++ b/include/linux/interrupt.h @@ -13,6 +13,7 @@ @@ -43,9 +47,11 @@ Signed-off-by: Sebastian Andrzej Siewior void (*notify)(struct irq_affinity_notify *, const cpumask_t *mask); void (*release)(struct kref *ref); }; +diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c +index 94a18cf54293..d2270f61d335 100644 --- a/kernel/irq/manage.c +++ b/kernel/irq/manage.c -@@ -259,7 +259,12 @@ int irq_set_affinity_locked(struct irq_d +@@ -259,7 +259,12 @@ int irq_set_affinity_locked(struct irq_data *data, const struct cpumask *mask, if (desc->affinity_notify) { kref_get(&desc->affinity_notify->kref); @@ -58,7 +64,7 @@ Signed-off-by: Sebastian Andrzej Siewior } irqd_set(data, IRQD_AFFINITY_SET); -@@ -297,10 +302,8 @@ int irq_set_affinity_hint(unsigned int i +@@ -297,10 +302,8 @@ int irq_set_affinity_hint(unsigned int irq, const struct cpumask *m) } EXPORT_SYMBOL_GPL(irq_set_affinity_hint); @@ -70,7 +76,7 @@ Signed-off-by: Sebastian Andrzej Siewior struct irq_desc *desc = irq_to_desc(notify->irq); cpumask_var_t cpumask; unsigned long flags; -@@ -322,6 +325,35 @@ static void irq_affinity_notify(struct w +@@ -322,6 +325,35 @@ static void irq_affinity_notify(struct work_struct *work) kref_put(¬ify->kref, notify->release); } @@ -106,7 +112,7 @@ Signed-off-by: Sebastian Andrzej Siewior /** * irq_set_affinity_notifier - control notification of IRQ affinity changes * @irq: Interrupt for which to enable/disable notification -@@ -350,7 +382,12 @@ irq_set_affinity_notifier(unsigned int i +@@ -350,7 +382,12 @@ 
irq_set_affinity_notifier(unsigned int irq, struct irq_affinity_notify *notify) if (notify) { notify->irq = irq; kref_init(¬ify->kref); @@ -119,3 +125,6 @@ Signed-off-by: Sebastian Andrzej Siewior } raw_spin_lock_irqsave(&desc->lock, flags); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0100-time-hrtimer-avoid-schedule_work-with-interrupts-dis.patch b/kernel/patches-4.19.x-rt/0098-time-hrtimer-avoid-schedule_work-with-interrupts-dis.patch similarity index 72% rename from kernel/patches-4.19.x-rt/0100-time-hrtimer-avoid-schedule_work-with-interrupts-dis.patch rename to kernel/patches-4.19.x-rt/0098-time-hrtimer-avoid-schedule_work-with-interrupts-dis.patch index e108d70f7..44692eab6 100644 --- a/kernel/patches-4.19.x-rt/0100-time-hrtimer-avoid-schedule_work-with-interrupts-dis.patch +++ b/kernel/patches-4.19.x-rt/0098-time-hrtimer-avoid-schedule_work-with-interrupts-dis.patch @@ -1,18 +1,22 @@ +From 49622b7282a6c10c5a70f3987df4ccfe3a32c92b Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Wed, 15 Nov 2017 17:29:51 +0100 -Subject: [PATCH] time/hrtimer: avoid schedule_work() with interrupts disabled +Subject: [PATCH 098/269] time/hrtimer: avoid schedule_work() with interrupts + disabled The NOHZ code tries to schedule a workqueue with interrupts disabled. Since this does not work -RT I am switching it to swork instead. 
Signed-off-by: Sebastian Andrzej Siewior --- - kernel/time/timer.c | 15 +++++++++++---- + kernel/time/timer.c | 15 +++++++++++---- 1 file changed, 11 insertions(+), 4 deletions(-) +diff --git a/kernel/time/timer.c b/kernel/time/timer.c +index bbe24e241643..696e7583137c 100644 --- a/kernel/time/timer.c +++ b/kernel/time/timer.c -@@ -217,8 +217,7 @@ static DEFINE_PER_CPU(struct timer_base, +@@ -217,8 +217,7 @@ static DEFINE_PER_CPU(struct timer_base, timer_bases[NR_BASES]); static DEFINE_STATIC_KEY_FALSE(timers_nohz_active); static DEFINE_MUTEX(timer_keys_mutex); @@ -22,7 +26,7 @@ Signed-off-by: Sebastian Andrzej Siewior #ifdef CONFIG_SMP unsigned int sysctl_timer_migration = 1; -@@ -236,7 +235,7 @@ static void timers_update_migration(void +@@ -236,7 +235,7 @@ static void timers_update_migration(void) static inline void timers_update_migration(void) { } #endif /* !CONFIG_SMP */ @@ -31,7 +35,7 @@ Signed-off-by: Sebastian Andrzej Siewior { mutex_lock(&timer_keys_mutex); timers_update_migration(); -@@ -246,9 +245,17 @@ static void timer_update_keys(struct wor +@@ -246,9 +245,17 @@ static void timer_update_keys(struct work_struct *work) void timers_update_nohz(void) { @@ -50,3 +54,6 @@ Signed-off-by: Sebastian Andrzej Siewior int timer_migration_handler(struct ctl_table *table, int write, void __user *buffer, size_t *lenp, loff_t *ppos) +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0101-hrtimer-consolidate-hrtimer_init-hrtimer_init_sleepe.patch b/kernel/patches-4.19.x-rt/0099-hrtimer-consolidate-hrtimer_init-hrtimer_init_sleepe.patch similarity index 76% rename from kernel/patches-4.19.x-rt/0101-hrtimer-consolidate-hrtimer_init-hrtimer_init_sleepe.patch rename to kernel/patches-4.19.x-rt/0099-hrtimer-consolidate-hrtimer_init-hrtimer_init_sleepe.patch index 54a6a1002..4eedbb27e 100644 --- a/kernel/patches-4.19.x-rt/0101-hrtimer-consolidate-hrtimer_init-hrtimer_init_sleepe.patch +++ 
b/kernel/patches-4.19.x-rt/0099-hrtimer-consolidate-hrtimer_init-hrtimer_init_sleepe.patch @@ -1,6 +1,8 @@ +From 7223736bbeccbb731d509b603b15adcbf36bdade Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior -Date: Tue, 3 Jul 2018 11:25:41 +0200 -Subject: [PATCH v2] hrtimer: consolidate hrtimer_init() + hrtimer_init_sleeper() calls +Date: Tue, 3 Jul 2018 11:25:41 +0200 +Subject: [PATCH 099/269] hrtimer: consolidate hrtimer_init() + + hrtimer_init_sleeper() calls hrtimer_init_sleeper() calls require a prior initialisation of the hrtimer object with hrtimer_init(). Lets make the initialisation of @@ -15,18 +17,20 @@ Signed-off-by: Sebastian Andrzej Siewior [anna-maria: Updating the commit message, add staging/android/vsoc.c] Signed-off-by: Anna-Maria Gleixner --- - block/blk-mq.c | 3 -- - drivers/staging/android/vsoc.c | 6 +---- - include/linux/hrtimer.h | 19 ++++++++++++++-- - include/linux/wait.h | 4 +-- - kernel/futex.c | 19 +++++++--------- - kernel/time/hrtimer.c | 46 ++++++++++++++++++++++++++++++++--------- - net/core/pktgen.c | 4 +-- + block/blk-mq.c | 3 +-- + drivers/staging/android/vsoc.c | 6 ++--- + include/linux/hrtimer.h | 19 +++++++++++--- + include/linux/wait.h | 4 +-- + kernel/futex.c | 19 ++++++-------- + kernel/time/hrtimer.c | 46 ++++++++++++++++++++++++++-------- + net/core/pktgen.c | 4 +-- 7 files changed, 67 insertions(+), 34 deletions(-) +diff --git a/block/blk-mq.c b/block/blk-mq.c +index 7d53f2314d7c..b0d0b74cf5a6 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c -@@ -3116,10 +3116,9 @@ static bool blk_mq_poll_hybrid_sleep(str +@@ -3124,10 +3124,9 @@ static bool blk_mq_poll_hybrid_sleep(struct request_queue *q, kt = nsecs; mode = HRTIMER_MODE_REL; @@ -38,9 +42,11 @@ Signed-off-by: Anna-Maria Gleixner do { if (blk_mq_rq_state(rq) == MQ_RQ_COMPLETE) break; +diff --git a/drivers/staging/android/vsoc.c b/drivers/staging/android/vsoc.c +index 22571abcaa4e..78a529d363f3 100644 --- a/drivers/staging/android/vsoc.c +++ 
b/drivers/staging/android/vsoc.c -@@ -437,12 +437,10 @@ static int handle_vsoc_cond_wait(struct +@@ -437,12 +437,10 @@ static int handle_vsoc_cond_wait(struct file *filp, struct vsoc_cond_wait *arg) return -EINVAL; wake_time = ktime_set(arg->wake_time_sec, arg->wake_time_nsec); @@ -55,9 +61,11 @@ Signed-off-by: Anna-Maria Gleixner } while (1) { +diff --git a/include/linux/hrtimer.h b/include/linux/hrtimer.h +index 3892e9c8b2de..b8bbaabd5aff 100644 --- a/include/linux/hrtimer.h +++ b/include/linux/hrtimer.h -@@ -364,10 +364,17 @@ DECLARE_PER_CPU(struct tick_device, tick +@@ -364,10 +364,17 @@ DECLARE_PER_CPU(struct tick_device, tick_cpu_device); /* Initialize timers: */ extern void hrtimer_init(struct hrtimer *timer, clockid_t which_clock, enum hrtimer_mode mode); @@ -75,7 +83,7 @@ Signed-off-by: Anna-Maria Gleixner extern void destroy_hrtimer_on_stack(struct hrtimer *timer); #else -@@ -377,6 +384,15 @@ static inline void hrtimer_init_on_stack +@@ -377,6 +384,15 @@ static inline void hrtimer_init_on_stack(struct hrtimer *timer, { hrtimer_init(timer, which_clock, mode); } @@ -91,7 +99,7 @@ Signed-off-by: Anna-Maria Gleixner static inline void destroy_hrtimer_on_stack(struct hrtimer *timer) { } #endif -@@ -480,9 +496,6 @@ extern long hrtimer_nanosleep(const stru +@@ -480,9 +496,6 @@ extern long hrtimer_nanosleep(const struct timespec64 *rqtp, const enum hrtimer_mode mode, const clockid_t clockid); @@ -101,6 +109,8 @@ Signed-off-by: Anna-Maria Gleixner extern int schedule_hrtimeout_range(ktime_t *expires, u64 delta, const enum hrtimer_mode mode); extern int schedule_hrtimeout_range_clock(ktime_t *expires, +diff --git a/include/linux/wait.h b/include/linux/wait.h +index 2b5ef8e94d19..94bd2e841de6 100644 --- a/include/linux/wait.h +++ b/include/linux/wait.h @@ -489,8 +489,8 @@ do { \ @@ -114,9 +124,11 @@ Signed-off-by: Anna-Maria Gleixner if ((timeout) != KTIME_MAX) \ hrtimer_start_range_ns(&__t.timer, timeout, \ current->timer_slack_ns, \ +diff --git a/kernel/futex.c 
b/kernel/futex.c +index 1bd0950bea4e..fadd9bff6e3c 100644 --- a/kernel/futex.c +++ b/kernel/futex.c -@@ -2681,10 +2681,9 @@ static int futex_wait(u32 __user *uaddr, +@@ -2684,10 +2684,9 @@ static int futex_wait(u32 __user *uaddr, unsigned int flags, u32 val, if (abs_time) { to = &timeout; @@ -130,7 +142,7 @@ Signed-off-by: Anna-Maria Gleixner hrtimer_set_expires_range_ns(&to->timer, *abs_time, current->timer_slack_ns); } -@@ -2783,9 +2782,8 @@ static int futex_lock_pi(u32 __user *uad +@@ -2786,9 +2785,8 @@ static int futex_lock_pi(u32 __user *uaddr, unsigned int flags, if (time) { to = &timeout; @@ -142,7 +154,7 @@ Signed-off-by: Anna-Maria Gleixner hrtimer_set_expires(&to->timer, *time); } -@@ -3209,10 +3207,9 @@ static int futex_wait_requeue_pi(u32 __u +@@ -3212,10 +3210,9 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags, if (abs_time) { to = &timeout; @@ -156,9 +168,11 @@ Signed-off-by: Anna-Maria Gleixner hrtimer_set_expires_range_ns(&to->timer, *abs_time, current->timer_slack_ns); } +diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c +index e1a549c9e399..4f43ece42f3b 100644 --- a/kernel/time/hrtimer.c +++ b/kernel/time/hrtimer.c -@@ -1648,13 +1648,44 @@ static enum hrtimer_restart hrtimer_wake +@@ -1648,13 +1648,44 @@ static enum hrtimer_restart hrtimer_wakeup(struct hrtimer *timer) return HRTIMER_NORESTART; } @@ -204,7 +218,7 @@ Signed-off-by: Anna-Maria Gleixner int nanosleep_copyout(struct restart_block *restart, struct timespec64 *ts) { switch(restart->nanosleep.type) { -@@ -1678,8 +1709,6 @@ static int __sched do_nanosleep(struct h +@@ -1678,8 +1709,6 @@ static int __sched do_nanosleep(struct hrtimer_sleeper *t, enum hrtimer_mode mod { struct restart_block *restart; @@ -213,7 +227,7 @@ Signed-off-by: Anna-Maria Gleixner do { set_current_state(TASK_INTERRUPTIBLE); hrtimer_start_expires(&t->timer, mode); -@@ -1716,10 +1745,9 @@ static long __sched hrtimer_nanosleep_re +@@ -1716,10 +1745,9 @@ static long __sched 
hrtimer_nanosleep_restart(struct restart_block *restart) struct hrtimer_sleeper t; int ret; @@ -226,7 +240,7 @@ Signed-off-by: Anna-Maria Gleixner ret = do_nanosleep(&t, HRTIMER_MODE_ABS); destroy_hrtimer_on_stack(&t.timer); return ret; -@@ -1737,7 +1765,7 @@ long hrtimer_nanosleep(const struct time +@@ -1737,7 +1765,7 @@ long hrtimer_nanosleep(const struct timespec64 *rqtp, if (dl_task(current) || rt_task(current)) slack = 0; @@ -235,7 +249,7 @@ Signed-off-by: Anna-Maria Gleixner hrtimer_set_expires_range_ns(&t.timer, timespec64_to_ktime(*rqtp), slack); ret = do_nanosleep(&t, mode); if (ret != -ERESTART_RESTARTBLOCK) -@@ -1936,11 +1964,9 @@ schedule_hrtimeout_range_clock(ktime_t * +@@ -1936,11 +1964,9 @@ schedule_hrtimeout_range_clock(ktime_t *expires, u64 delta, return -EINTR; } @@ -248,9 +262,11 @@ Signed-off-by: Anna-Maria Gleixner hrtimer_start_expires(&t.timer, mode); if (likely(t.task)) +diff --git a/net/core/pktgen.c b/net/core/pktgen.c +index 7f6938405fa1..b71d9eef334e 100644 --- a/net/core/pktgen.c +++ b/net/core/pktgen.c -@@ -2160,7 +2160,8 @@ static void spin(struct pktgen_dev *pkt_ +@@ -2160,7 +2160,8 @@ static void spin(struct pktgen_dev *pkt_dev, ktime_t spin_until) s64 remaining; struct hrtimer_sleeper t; @@ -260,7 +276,7 @@ Signed-off-by: Anna-Maria Gleixner hrtimer_set_expires(&t.timer, spin_until); remaining = ktime_to_ns(hrtimer_expires_remaining(&t.timer)); -@@ -2175,7 +2176,6 @@ static void spin(struct pktgen_dev *pkt_ +@@ -2175,7 +2176,6 @@ static void spin(struct pktgen_dev *pkt_dev, ktime_t spin_until) } while (ktime_compare(end_time, spin_until) < 0); } else { /* see do_nanosleep */ @@ -268,3 +284,6 @@ Signed-off-by: Anna-Maria Gleixner do { set_current_state(TASK_INTERRUPTIBLE); hrtimer_start_expires(&t.timer, HRTIMER_MODE_ABS); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0102-hrtimers-prepare-full-preemption.patch b/kernel/patches-4.19.x-rt/0100-hrtimers-Prepare-full-preemption.patch similarity index 72% rename from 
kernel/patches-4.19.x-rt/0102-hrtimers-prepare-full-preemption.patch rename to kernel/patches-4.19.x-rt/0100-hrtimers-Prepare-full-preemption.patch index 7bf4bd295..23bc3dd41 100644 --- a/kernel/patches-4.19.x-rt/0102-hrtimers-prepare-full-preemption.patch +++ b/kernel/patches-4.19.x-rt/0100-hrtimers-Prepare-full-preemption.patch @@ -1,26 +1,28 @@ +From 87f5cf4447982ad964655f0831ea4deff2c59819 Mon Sep 17 00:00:00 2001 From: Ingo Molnar Date: Fri, 3 Jul 2009 08:29:34 -0500 -Subject: hrtimers: Prepare full preemption +Subject: [PATCH 100/269] hrtimers: Prepare full preemption Make cancellation of a running callback in softirq context safe against preemption. Signed-off-by: Ingo Molnar Signed-off-by: Thomas Gleixner - --- - fs/timerfd.c | 5 ++++- - include/linux/hrtimer.h | 13 ++++++++++++- - include/linux/posix-timers.h | 2 +- - kernel/time/alarmtimer.c | 2 +- - kernel/time/hrtimer.c | 33 ++++++++++++++++++++++++++++++++- - kernel/time/itimer.c | 1 + - kernel/time/posix-timers.c | 39 +++++++++++++++++++++++++++++++++++++-- + fs/timerfd.c | 5 ++++- + include/linux/hrtimer.h | 13 +++++++++++- + include/linux/posix-timers.h | 2 +- + kernel/time/alarmtimer.c | 2 +- + kernel/time/hrtimer.c | 33 +++++++++++++++++++++++++++++- + kernel/time/itimer.c | 1 + + kernel/time/posix-timers.c | 39 ++++++++++++++++++++++++++++++++++-- 7 files changed, 88 insertions(+), 7 deletions(-) +diff --git a/fs/timerfd.c b/fs/timerfd.c +index d69ad801eb80..82d0f52414a6 100644 --- a/fs/timerfd.c +++ b/fs/timerfd.c -@@ -471,7 +471,10 @@ static int do_timerfd_settime(int ufd, i +@@ -471,7 +471,10 @@ static int do_timerfd_settime(int ufd, int flags, break; } spin_unlock_irq(&ctx->wqh.lock); @@ -32,6 +34,8 @@ Signed-off-by: Thomas Gleixner } /* +diff --git a/include/linux/hrtimer.h b/include/linux/hrtimer.h +index b8bbaabd5aff..73ad7309436a 100644 --- a/include/linux/hrtimer.h +++ b/include/linux/hrtimer.h @@ -22,6 +22,7 @@ @@ -52,7 +56,7 @@ Signed-off-by: Thomas Gleixner struct hrtimer 
*softirq_next_timer; struct hrtimer_clock_base clock_base[HRTIMER_MAX_CLOCK_BASES]; } ____cacheline_aligned; -@@ -433,6 +437,13 @@ static inline void hrtimer_restart(struc +@@ -433,6 +437,13 @@ static inline void hrtimer_restart(struct hrtimer *timer) hrtimer_start_expires(timer, HRTIMER_MODE_ABS); } @@ -66,7 +70,7 @@ Signed-off-by: Thomas Gleixner /* Query timers: */ extern ktime_t __hrtimer_get_remaining(const struct hrtimer *timer, bool adjust); -@@ -458,7 +469,7 @@ static inline int hrtimer_is_queued(stru +@@ -458,7 +469,7 @@ static inline int hrtimer_is_queued(struct hrtimer *timer) * Helper function to check, whether the timer is running the callback * function */ @@ -75,6 +79,8 @@ Signed-off-by: Thomas Gleixner { return timer->base->running == timer; } +diff --git a/include/linux/posix-timers.h b/include/linux/posix-timers.h +index ee7e987ea1b4..0571b498db73 100644 --- a/include/linux/posix-timers.h +++ b/include/linux/posix-timers.h @@ -114,8 +114,8 @@ struct k_itimer { @@ -87,6 +93,8 @@ Signed-off-by: Thomas Gleixner }; void run_posix_cpu_timers(struct task_struct *task); +diff --git a/kernel/time/alarmtimer.c b/kernel/time/alarmtimer.c +index fdeb9bc6affb..966708e8ce14 100644 --- a/kernel/time/alarmtimer.c +++ b/kernel/time/alarmtimer.c @@ -436,7 +436,7 @@ int alarm_cancel(struct alarm *alarm) @@ -98,9 +106,11 @@ Signed-off-by: Thomas Gleixner } } EXPORT_SYMBOL_GPL(alarm_cancel); +diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c +index 4f43ece42f3b..923a650e5c35 100644 --- a/kernel/time/hrtimer.c +++ b/kernel/time/hrtimer.c -@@ -939,6 +939,33 @@ u64 hrtimer_forward(struct hrtimer *time +@@ -939,6 +939,33 @@ u64 hrtimer_forward(struct hrtimer *timer, ktime_t now, ktime_t interval) } EXPORT_SYMBOL_GPL(hrtimer_forward); @@ -134,7 +144,7 @@ Signed-off-by: Thomas Gleixner /* * enqueue_hrtimer - internal function to (re)start a timer * -@@ -1171,7 +1198,7 @@ int hrtimer_cancel(struct hrtimer *timer +@@ -1171,7 +1198,7 @@ int hrtimer_cancel(struct 
hrtimer *timer) if (ret >= 0) return ret; @@ -143,7 +153,7 @@ Signed-off-by: Thomas Gleixner } } EXPORT_SYMBOL_GPL(hrtimer_cancel); -@@ -1477,6 +1504,7 @@ static __latent_entropy void hrtimer_run +@@ -1477,6 +1504,7 @@ static __latent_entropy void hrtimer_run_softirq(struct softirq_action *h) hrtimer_update_softirq_timer(cpu_base, true); raw_spin_unlock_irqrestore(&cpu_base->lock, flags); @@ -151,7 +161,7 @@ Signed-off-by: Thomas Gleixner } #ifdef CONFIG_HIGH_RES_TIMERS -@@ -1846,6 +1874,9 @@ int hrtimers_prepare_cpu(unsigned int cp +@@ -1846,6 +1874,9 @@ int hrtimers_prepare_cpu(unsigned int cpu) cpu_base->softirq_next_timer = NULL; cpu_base->expires_next = KTIME_MAX; cpu_base->softirq_expires_next = KTIME_MAX; @@ -161,9 +171,11 @@ Signed-off-by: Thomas Gleixner return 0; } +diff --git a/kernel/time/itimer.c b/kernel/time/itimer.c +index 9a65713c8309..55b0e58368bf 100644 --- a/kernel/time/itimer.c +++ b/kernel/time/itimer.c -@@ -215,6 +215,7 @@ int do_setitimer(int which, struct itime +@@ -215,6 +215,7 @@ int do_setitimer(int which, struct itimerval *value, struct itimerval *ovalue) /* We are sharing ->siglock with it_real_fn() */ if (hrtimer_try_to_cancel(timer) < 0) { spin_unlock_irq(&tsk->sighand->siglock); @@ -171,9 +183,11 @@ Signed-off-by: Thomas Gleixner goto again; } expires = timeval_to_ktime(value->it_value); +diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c +index 5a01c4fdbfef..a5ec421e3437 100644 --- a/kernel/time/posix-timers.c +++ b/kernel/time/posix-timers.c -@@ -463,7 +463,7 @@ static struct k_itimer * alloc_posix_tim +@@ -463,7 +463,7 @@ static struct k_itimer * alloc_posix_timer(void) static void k_itimer_rcu_free(struct rcu_head *head) { @@ -182,7 +196,7 @@ Signed-off-by: Thomas Gleixner kmem_cache_free(posix_timers_cache, tmr); } -@@ -480,7 +480,7 @@ static void release_posix_timer(struct k +@@ -480,7 +480,7 @@ static void release_posix_timer(struct k_itimer *tmr, int it_id_set) } put_pid(tmr->it_pid); 
sigqueue_free(tmr->sigq); @@ -191,7 +205,7 @@ Signed-off-by: Thomas Gleixner } static int common_timer_create(struct k_itimer *new_timer) -@@ -821,6 +821,22 @@ static void common_hrtimer_arm(struct k_ +@@ -821,6 +821,22 @@ static void common_hrtimer_arm(struct k_itimer *timr, ktime_t expires, hrtimer_start_expires(timer, HRTIMER_MODE_ABS); } @@ -214,7 +228,7 @@ Signed-off-by: Thomas Gleixner static int common_hrtimer_try_to_cancel(struct k_itimer *timr) { return hrtimer_try_to_cancel(&timr->it.real.timer); -@@ -885,6 +901,7 @@ static int do_timer_settime(timer_t time +@@ -885,6 +901,7 @@ static int do_timer_settime(timer_t timer_id, int flags, if (!timr) return -EINVAL; @@ -222,7 +236,7 @@ Signed-off-by: Thomas Gleixner kc = timr->kclock; if (WARN_ON_ONCE(!kc || !kc->timer_set)) error = -EINVAL; -@@ -893,9 +910,12 @@ static int do_timer_settime(timer_t time +@@ -893,9 +910,12 @@ static int do_timer_settime(timer_t timer_id, int flags, unlock_timer(timr, flag); if (error == TIMER_RETRY) { @@ -235,7 +249,7 @@ Signed-off-by: Thomas Gleixner return error; } -@@ -977,10 +997,15 @@ SYSCALL_DEFINE1(timer_delete, timer_t, t +@@ -977,10 +997,15 @@ SYSCALL_DEFINE1(timer_delete, timer_t, timer_id) if (!timer) return -EINVAL; @@ -251,7 +265,7 @@ Signed-off-by: Thomas Gleixner spin_lock(¤t->sighand->siglock); list_del(&timer->list); -@@ -1006,8 +1031,18 @@ static void itimer_delete(struct k_itime +@@ -1006,8 +1031,18 @@ static void itimer_delete(struct k_itimer *timer) retry_delete: spin_lock_irqsave(&timer->it_lock, flags); @@ -270,3 +284,6 @@ Signed-off-by: Thomas Gleixner goto retry_delete; } list_del(&timer->list); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0103-hrtimer-by-timers-by-default-into-the-softirq-context.patch b/kernel/patches-4.19.x-rt/0101-hrtimer-by-timers-by-default-into-the-softirq-contex.patch similarity index 70% rename from kernel/patches-4.19.x-rt/0103-hrtimer-by-timers-by-default-into-the-softirq-context.patch rename to 
kernel/patches-4.19.x-rt/0101-hrtimer-by-timers-by-default-into-the-softirq-contex.patch index 93167770f..156a3d6f8 100644 --- a/kernel/patches-4.19.x-rt/0103-hrtimer-by-timers-by-default-into-the-softirq-context.patch +++ b/kernel/patches-4.19.x-rt/0101-hrtimer-by-timers-by-default-into-the-softirq-contex.patch @@ -1,6 +1,8 @@ +From 7bbc9e32ebfc904f317e3e3808164cdcba6f7f6d Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Fri, 3 Jul 2009 08:44:31 -0500 -Subject: hrtimer: by timers by default into the softirq context +Subject: [PATCH 101/269] hrtimer: by timers by default into the softirq + context We can't have hrtimers callbacks running in hardirq context on RT. Therefore the timers are deferred to the softirq context by default. @@ -12,22 +14,24 @@ Those are: Signed-off-by: Sebastian Andrzej Siewior --- - arch/x86/kvm/lapic.c | 2 +- - include/linux/hrtimer.h | 6 ++++++ - kernel/events/core.c | 4 ++-- - kernel/sched/core.c | 2 +- - kernel/sched/deadline.c | 2 +- - kernel/sched/fair.c | 4 ++-- - kernel/sched/rt.c | 4 ++-- - kernel/time/hrtimer.c | 21 +++++++++++++++++++-- - kernel/time/tick-broadcast-hrtimer.c | 2 +- - kernel/time/tick-sched.c | 2 +- - kernel/watchdog.c | 2 +- + arch/x86/kvm/lapic.c | 2 +- + include/linux/hrtimer.h | 6 ++++++ + kernel/events/core.c | 4 ++-- + kernel/sched/core.c | 2 +- + kernel/sched/deadline.c | 2 +- + kernel/sched/fair.c | 4 ++-- + kernel/sched/rt.c | 4 ++-- + kernel/time/hrtimer.c | 21 +++++++++++++++++++-- + kernel/time/tick-broadcast-hrtimer.c | 2 +- + kernel/time/tick-sched.c | 2 +- + kernel/watchdog.c | 2 +- 11 files changed, 37 insertions(+), 14 deletions(-) +diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c +index 3692de84c420..e3c95654b0d1 100644 --- a/arch/x86/kvm/lapic.c +++ b/arch/x86/kvm/lapic.c -@@ -2250,7 +2250,7 @@ int kvm_create_lapic(struct kvm_vcpu *vc +@@ -2250,7 +2250,7 @@ int kvm_create_lapic(struct kvm_vcpu *vcpu) apic->vcpu = vcpu; hrtimer_init(&apic->lapic_timer.timer, CLOCK_MONOTONIC, 
@@ -36,6 +40,8 @@ Signed-off-by: Sebastian Andrzej Siewior apic->lapic_timer.timer.function = apic_timer_fn; /* +diff --git a/include/linux/hrtimer.h b/include/linux/hrtimer.h +index 73ad7309436a..2bdb047c7656 100644 --- a/include/linux/hrtimer.h +++ b/include/linux/hrtimer.h @@ -42,6 +42,7 @@ enum hrtimer_mode { @@ -58,9 +64,11 @@ Signed-off-by: Sebastian Andrzej Siewior }; /* +diff --git a/kernel/events/core.c b/kernel/events/core.c +index 87bd96399d1c..36661d7a8581 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c -@@ -1102,7 +1102,7 @@ static void __perf_mux_hrtimer_init(stru +@@ -1102,7 +1102,7 @@ static void __perf_mux_hrtimer_init(struct perf_cpu_context *cpuctx, int cpu) cpuctx->hrtimer_interval = ns_to_ktime(NSEC_PER_MSEC * interval); raw_spin_lock_init(&cpuctx->hrtimer_lock); @@ -69,7 +77,7 @@ Signed-off-by: Sebastian Andrzej Siewior timer->function = perf_mux_hrtimer_handler; } -@@ -9181,7 +9181,7 @@ static void perf_swevent_init_hrtimer(st +@@ -9183,7 +9183,7 @@ static void perf_swevent_init_hrtimer(struct perf_event *event) if (!is_sampling_event(event)) return; @@ -78,9 +86,11 @@ Signed-off-by: Sebastian Andrzej Siewior hwc->hrtimer.function = perf_swevent_hrtimer; /* +diff --git a/kernel/sched/core.c b/kernel/sched/core.c +index 8272d920b749..4ed3b29cb0c8 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c -@@ -314,7 +314,7 @@ static void hrtick_rq_init(struct rq *rq +@@ -315,7 +315,7 @@ static void hrtick_rq_init(struct rq *rq) rq->hrtick_csd.info = rq; #endif @@ -89,9 +99,11 @@ Signed-off-by: Sebastian Andrzej Siewior rq->hrtick_timer.function = hrtick; } #else /* CONFIG_SCHED_HRTICK */ +diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c +index f927b1f45474..ad2a793a912b 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c -@@ -1054,7 +1054,7 @@ void init_dl_task_timer(struct sched_dl_ +@@ -1054,7 +1054,7 @@ void init_dl_task_timer(struct sched_dl_entity *dl_se) { struct hrtimer *timer = &dl_se->dl_timer; 
@@ -100,9 +112,11 @@ Signed-off-by: Sebastian Andrzej Siewior timer->function = dl_task_timer; } +diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c +index c17d63b06026..4193041b3cab 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c -@@ -4879,9 +4879,9 @@ void init_cfs_bandwidth(struct cfs_bandw +@@ -4904,9 +4904,9 @@ void init_cfs_bandwidth(struct cfs_bandwidth *cfs_b) cfs_b->period = ns_to_ktime(default_cfs_period()); INIT_LIST_HEAD(&cfs_b->throttled_cfs_rq); @@ -114,9 +128,11 @@ Signed-off-by: Sebastian Andrzej Siewior cfs_b->slack_timer.function = sched_cfs_slack_timer; cfs_b->distribute_running = 0; } +diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c +index 4857ca145119..32c9a9f54495 100644 --- a/kernel/sched/rt.c +++ b/kernel/sched/rt.c -@@ -45,8 +45,8 @@ void init_rt_bandwidth(struct rt_bandwid +@@ -45,8 +45,8 @@ void init_rt_bandwidth(struct rt_bandwidth *rt_b, u64 period, u64 runtime) raw_spin_lock_init(&rt_b->rt_runtime_lock); @@ -127,9 +143,11 @@ Signed-off-by: Sebastian Andrzej Siewior rt_b->rt_period_timer.function = sched_rt_period_timer; } +diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c +index 923a650e5c35..abf24e60b6e8 100644 --- a/kernel/time/hrtimer.c +++ b/kernel/time/hrtimer.c -@@ -1135,7 +1135,9 @@ void hrtimer_start_range_ns(struct hrtim +@@ -1135,7 +1135,9 @@ void hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim, * Check whether the HRTIMER_MODE_SOFT bit and hrtimer.is_soft * match. 
*/ @@ -139,7 +157,7 @@ Signed-off-by: Sebastian Andrzej Siewior base = lock_hrtimer_base(timer, &flags); -@@ -1295,10 +1297,17 @@ static inline int hrtimer_clockid_to_bas +@@ -1295,10 +1297,17 @@ static inline int hrtimer_clockid_to_base(clockid_t clock_id) static void __hrtimer_init(struct hrtimer *timer, clockid_t clock_id, enum hrtimer_mode mode) { @@ -159,7 +177,7 @@ Signed-off-by: Sebastian Andrzej Siewior memset(timer, 0, sizeof(struct hrtimer)); cpu_base = raw_cpu_ptr(&hrtimer_bases); -@@ -1681,6 +1690,14 @@ static void __hrtimer_init_sleeper(struc +@@ -1681,6 +1690,14 @@ static void __hrtimer_init_sleeper(struct hrtimer_sleeper *sl, enum hrtimer_mode mode, struct task_struct *task) { @@ -174,9 +192,11 @@ Signed-off-by: Sebastian Andrzej Siewior __hrtimer_init(&sl->timer, clock_id, mode); sl->timer.function = hrtimer_wakeup; sl->task = task; +diff --git a/kernel/time/tick-broadcast-hrtimer.c b/kernel/time/tick-broadcast-hrtimer.c +index a59641fb88b6..52649fdea3b5 100644 --- a/kernel/time/tick-broadcast-hrtimer.c +++ b/kernel/time/tick-broadcast-hrtimer.c -@@ -106,7 +106,7 @@ static enum hrtimer_restart bc_handler(s +@@ -106,7 +106,7 @@ static enum hrtimer_restart bc_handler(struct hrtimer *t) void tick_setup_hrtimer_broadcast(void) { @@ -185,6 +205,8 @@ Signed-off-by: Sebastian Andrzej Siewior bctimer.function = bc_handler; clockevents_register_device(&ce_broadcast_hrtimer); } +diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c +index 54fd344ef973..c217af74dddf 100644 --- a/kernel/time/tick-sched.c +++ b/kernel/time/tick-sched.c @@ -1310,7 +1310,7 @@ void tick_setup_sched_timer(void) @@ -196,9 +218,11 @@ Signed-off-by: Sebastian Andrzej Siewior ts->sched_timer.function = tick_sched_timer; /* Get the next period (per-CPU) */ +diff --git a/kernel/watchdog.c b/kernel/watchdog.c +index bbc4940f21af..defd493ba967 100644 --- a/kernel/watchdog.c +++ b/kernel/watchdog.c -@@ -483,7 +483,7 @@ static void watchdog_enable(unsigned int +@@ -483,7 +483,7 @@ 
static void watchdog_enable(unsigned int cpu) * Start the timer first to prevent the NMI watchdog triggering * before the timer has a chance to fire. */ @@ -207,3 +231,6 @@ Signed-off-by: Sebastian Andrzej Siewior hrtimer->function = watchdog_timer_fn; hrtimer_start(hrtimer, ns_to_ktime(sample_period), HRTIMER_MODE_REL_PINNED); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0104-sched-fair-Make-the-hrtimers-non-hard-again.patch b/kernel/patches-4.19.x-rt/0102-sched-fair-Make-the-hrtimers-non-hard-again.patch similarity index 73% rename from kernel/patches-4.19.x-rt/0104-sched-fair-Make-the-hrtimers-non-hard-again.patch rename to kernel/patches-4.19.x-rt/0102-sched-fair-Make-the-hrtimers-non-hard-again.patch index c63feecab..cef095d8a 100644 --- a/kernel/patches-4.19.x-rt/0104-sched-fair-Make-the-hrtimers-non-hard-again.patch +++ b/kernel/patches-4.19.x-rt/0102-sched-fair-Make-the-hrtimers-non-hard-again.patch @@ -1,6 +1,7 @@ +From f498fc065cd56d96f2583801142a348eb801e631 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Tue, 8 Jan 2019 12:31:06 +0100 -Subject: [PATCH] sched/fair: Make the hrtimers non-hard again +Subject: [PATCH 102/269] sched/fair: Make the hrtimers non-hard again Since commit "sched/fair: Robustify CFS-bandwidth timer locking" both hrtimer can run in softirq context because now interrupts are disabled @@ -8,12 +9,14 @@ as part of the locking procedure. 
Signed-off-by: Sebastian Andrzej Siewior --- - kernel/sched/fair.c | 4 ++-- + kernel/sched/fair.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) +diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c +index 4193041b3cab..c17d63b06026 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c -@@ -4879,9 +4879,9 @@ void init_cfs_bandwidth(struct cfs_bandw +@@ -4904,9 +4904,9 @@ void init_cfs_bandwidth(struct cfs_bandwidth *cfs_b) cfs_b->period = ns_to_ktime(default_cfs_period()); INIT_LIST_HEAD(&cfs_b->throttled_cfs_rq); @@ -25,3 +28,6 @@ Signed-off-by: Sebastian Andrzej Siewior cfs_b->slack_timer.function = sched_cfs_slack_timer; cfs_b->distribute_running = 0; } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0105-hrtimer-Move-schedule_work-call-to-helper-thread.patch b/kernel/patches-4.19.x-rt/0103-hrtimer-Move-schedule_work-call-to-helper-thread.patch similarity index 90% rename from kernel/patches-4.19.x-rt/0105-hrtimer-Move-schedule_work-call-to-helper-thread.patch rename to kernel/patches-4.19.x-rt/0103-hrtimer-Move-schedule_work-call-to-helper-thread.patch index 029d45a5c..a53418657 100644 --- a/kernel/patches-4.19.x-rt/0105-hrtimer-Move-schedule_work-call-to-helper-thread.patch +++ b/kernel/patches-4.19.x-rt/0103-hrtimer-Move-schedule_work-call-to-helper-thread.patch @@ -1,6 +1,7 @@ +From ca493505f2f12750ca207582fc7b6ca69cbf504e Mon Sep 17 00:00:00 2001 From: Yang Shi Date: Mon, 16 Sep 2013 14:09:19 -0700 -Subject: hrtimer: Move schedule_work call to helper thread +Subject: [PATCH 103/269] hrtimer: Move schedule_work call to helper thread When run ltp leapsec_timer test, the following call trace is caught: @@ -46,9 +47,11 @@ Signed-off-by: Yang Shi [bigeasy: use swork_queue() instead a helper thread] Signed-off-by: Sebastian Andrzej Siewior --- - kernel/time/hrtimer.c | 24 ++++++++++++++++++++++++ + kernel/time/hrtimer.c | 24 ++++++++++++++++++++++++ 1 file changed, 24 insertions(+) +diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c 
+index abf24e60b6e8..c72eb8bfc471 100644 --- a/kernel/time/hrtimer.c +++ b/kernel/time/hrtimer.c @@ -730,6 +730,29 @@ static void hrtimer_switch_to_hres(void) @@ -89,3 +92,6 @@ Signed-off-by: Sebastian Andrzej Siewior #else +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0106-hrtimer-move-state-change-before-hrtimer_cancel-in-d.patch b/kernel/patches-4.19.x-rt/0104-hrtimer-move-state-change-before-hrtimer_cancel-in-d.patch similarity index 82% rename from kernel/patches-4.19.x-rt/0106-hrtimer-move-state-change-before-hrtimer_cancel-in-d.patch rename to kernel/patches-4.19.x-rt/0104-hrtimer-move-state-change-before-hrtimer_cancel-in-d.patch index dbfcc1623..ecba5a6a2 100644 --- a/kernel/patches-4.19.x-rt/0106-hrtimer-move-state-change-before-hrtimer_cancel-in-d.patch +++ b/kernel/patches-4.19.x-rt/0104-hrtimer-move-state-change-before-hrtimer_cancel-in-d.patch @@ -1,6 +1,7 @@ +From 78fffa8243d75e61f9508289b2f68d2f66cf34f6 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Thu, 6 Dec 2018 10:15:13 +0100 -Subject: [PATCH] hrtimer: move state change before hrtimer_cancel in +Subject: [PATCH 104/269] hrtimer: move state change before hrtimer_cancel in do_nanosleep() There is a small window between setting t->task to NULL and waking the @@ -23,12 +24,14 @@ Cc: stable-rt@vger.kernel.org Reviewed-by: Daniel Bristot de Oliveira Signed-off-by: Sebastian Andrzej Siewior --- - kernel/time/hrtimer.c | 2 +- + kernel/time/hrtimer.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) +diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c +index c72eb8bfc471..cfa3599fa789 100644 --- a/kernel/time/hrtimer.c +++ b/kernel/time/hrtimer.c -@@ -1785,12 +1785,12 @@ static int __sched do_nanosleep(struct h +@@ -1785,12 +1785,12 @@ static int __sched do_nanosleep(struct hrtimer_sleeper *t, enum hrtimer_mode mod if (likely(t->task)) freezable_schedule(); @@ -42,3 +45,6 @@ Signed-off-by: Sebastian Andrzej Siewior if (!t->task) return 0; +-- +2.20.1 + diff --git 
a/kernel/patches-4.19.x-rt/0107-posix-timers-thread-posix-cpu-timers-on-rt.patch b/kernel/patches-4.19.x-rt/0105-posix-timers-Thread-posix-cpu-timers-on-rt.patch similarity index 87% rename from kernel/patches-4.19.x-rt/0107-posix-timers-thread-posix-cpu-timers-on-rt.patch rename to kernel/patches-4.19.x-rt/0105-posix-timers-Thread-posix-cpu-timers-on-rt.patch index a516023d7..dd3fb426a 100644 --- a/kernel/patches-4.19.x-rt/0107-posix-timers-thread-posix-cpu-timers-on-rt.patch +++ b/kernel/patches-4.19.x-rt/0105-posix-timers-Thread-posix-cpu-timers-on-rt.patch @@ -1,6 +1,7 @@ +From 34b024b3a992c144a3df653c0ad623a8a69dc735 Mon Sep 17 00:00:00 2001 From: John Stultz Date: Fri, 3 Jul 2009 08:29:58 -0500 -Subject: posix-timers: Thread posix-cpu-timers on -rt +Subject: [PATCH 105/269] posix-timers: Thread posix-cpu-timers on -rt posix-cpu-timer code takes non -rt safe locks in hard irq context. Move it to a thread. @@ -9,14 +10,15 @@ context. Move it to a thread. Signed-off-by: John Stultz Signed-off-by: Thomas Gleixner - --- - include/linux/sched.h | 3 - init/init_task.c | 7 + - kernel/fork.c | 3 - kernel/time/posix-cpu-timers.c | 154 ++++++++++++++++++++++++++++++++++++++++- + include/linux/sched.h | 3 + + init/init_task.c | 7 ++ + kernel/fork.c | 3 + + kernel/time/posix-cpu-timers.c | 154 ++++++++++++++++++++++++++++++++- 4 files changed, 164 insertions(+), 3 deletions(-) +diff --git a/include/linux/sched.h b/include/linux/sched.h +index 535e57775208..c2dfe6939773 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -832,6 +832,9 @@ struct task_struct { @@ -29,9 +31,11 @@ Signed-off-by: Thomas Gleixner #endif /* Process credentials: */ +diff --git a/init/init_task.c b/init/init_task.c +index 0b49b9cf5571..9e3362748214 100644 --- a/init/init_task.c +++ b/init/init_task.c -@@ -50,6 +50,12 @@ static struct sighand_struct init_sighan +@@ -50,6 +50,12 @@ static struct sighand_struct init_sighand = { .signalfd_wqh = 
__WAIT_QUEUE_HEAD_INITIALIZER(init_sighand.signalfd_wqh), }; @@ -52,9 +56,11 @@ Signed-off-by: Thomas Gleixner .thread_pid = &init_struct_pid, .thread_group = LIST_HEAD_INIT(init_task.thread_group), .thread_node = LIST_HEAD_INIT(init_signals.thread_head), +diff --git a/kernel/fork.c b/kernel/fork.c +index bfe9c5c3eb88..1b8ac523aa99 100644 --- a/kernel/fork.c +++ b/kernel/fork.c -@@ -1575,6 +1575,9 @@ static void rt_mutex_init_task(struct ta +@@ -1575,6 +1575,9 @@ static void rt_mutex_init_task(struct task_struct *p) */ static void posix_cpu_timers_init(struct task_struct *tsk) { @@ -64,6 +70,8 @@ Signed-off-by: Thomas Gleixner tsk->cputime_expires.prof_exp = 0; tsk->cputime_expires.virt_exp = 0; tsk->cputime_expires.sched_exp = 0; +diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c +index 76801b9b481e..baeeaef3b721 100644 --- a/kernel/time/posix-cpu-timers.c +++ b/kernel/time/posix-cpu-timers.c @@ -3,8 +3,10 @@ @@ -85,7 +93,7 @@ Signed-off-by: Thomas Gleixner #include "posix-timers.h" -@@ -1136,14 +1139,12 @@ static inline int fastpath_timer_check(s +@@ -1136,14 +1139,12 @@ static inline int fastpath_timer_check(struct task_struct *tsk) * already updated our counts. We need to check if any timers fire now. * Interrupts are disabled. */ @@ -101,7 +109,7 @@ Signed-off-by: Thomas Gleixner /* * The fast path checks that there are no expired thread or thread * group timers. If that's so, just return. -@@ -1196,6 +1197,153 @@ void run_posix_cpu_timers(struct task_st +@@ -1196,6 +1197,153 @@ void run_posix_cpu_timers(struct task_struct *tsk) } } @@ -255,3 +263,6 @@ Signed-off-by: Thomas Gleixner /* * Set one of the process-wide special case CPU timers or RLIMIT_CPU. * The tsk->sighand->siglock must be held by the caller. 
+-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0108-sched-delay-put-task.patch b/kernel/patches-4.19.x-rt/0106-sched-Move-task_struct-cleanup-to-RCU.patch similarity index 71% rename from kernel/patches-4.19.x-rt/0108-sched-delay-put-task.patch rename to kernel/patches-4.19.x-rt/0106-sched-Move-task_struct-cleanup-to-RCU.patch index 9dd233ca3..6962054f4 100644 --- a/kernel/patches-4.19.x-rt/0108-sched-delay-put-task.patch +++ b/kernel/patches-4.19.x-rt/0106-sched-Move-task_struct-cleanup-to-RCU.patch @@ -1,17 +1,20 @@ -Subject: sched: Move task_struct cleanup to RCU +From 3c13de2cc91a9379fe1de22e474cad11805812f9 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Tue, 31 May 2011 16:59:16 +0200 +Subject: [PATCH 106/269] sched: Move task_struct cleanup to RCU __put_task_struct() does quite some expensive work. We don't want to burden random tasks with that. Signed-off-by: Thomas Gleixner --- - include/linux/sched.h | 3 +++ - include/linux/sched/task.h | 11 ++++++++++- - kernel/fork.c | 15 ++++++++++++++- + include/linux/sched.h | 3 +++ + include/linux/sched/task.h | 11 ++++++++++- + kernel/fork.c | 15 ++++++++++++++- 3 files changed, 27 insertions(+), 2 deletions(-) +diff --git a/include/linux/sched.h b/include/linux/sched.h +index c2dfe6939773..a6f2f76b1162 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1186,6 +1186,9 @@ struct task_struct { @@ -24,6 +27,8 @@ Signed-off-by: Thomas Gleixner #ifdef CONFIG_DEBUG_ATOMIC_SLEEP unsigned long task_state_change; #endif +diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h +index 108ede99e533..bb98c5b43f81 100644 --- a/include/linux/sched/task.h +++ b/include/linux/sched/task.h @@ -88,6 +88,15 @@ extern void sched_exec(void); @@ -42,7 +47,7 @@ Signed-off-by: Thomas Gleixner extern void __put_task_struct(struct task_struct *t); static inline void put_task_struct(struct task_struct *t) -@@ -95,7 +104,7 @@ static inline void put_task_struct(struc +@@ -95,7 +104,7 @@ static inline void 
put_task_struct(struct task_struct *t) if (atomic_dec_and_test(&t->usage)) __put_task_struct(t); } @@ -51,9 +56,11 @@ Signed-off-by: Thomas Gleixner struct task_struct *task_rcu_dereference(struct task_struct **ptask); #ifdef CONFIG_ARCH_WANTS_DYNAMIC_TASK_STRUCT +diff --git a/kernel/fork.c b/kernel/fork.c +index 1b8ac523aa99..b7e0aac93ee5 100644 --- a/kernel/fork.c +++ b/kernel/fork.c -@@ -671,7 +671,9 @@ static inline void put_signal_struct(str +@@ -671,7 +671,9 @@ static inline void put_signal_struct(struct signal_struct *sig) if (atomic_dec_and_test(&sig->sigcnt)) free_signal_struct(sig); } @@ -64,7 +71,7 @@ Signed-off-by: Thomas Gleixner void __put_task_struct(struct task_struct *tsk) { WARN_ON(!tsk->exit_state); -@@ -688,7 +690,18 @@ void __put_task_struct(struct task_struc +@@ -688,7 +690,18 @@ void __put_task_struct(struct task_struct *tsk) if (!profile_handoff_task(tsk)) free_task(tsk); } @@ -83,3 +90,6 @@ Signed-off-by: Thomas Gleixner void __init __weak arch_task_cache_init(void) { } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0109-sched-limit-nr-migrate.patch b/kernel/patches-4.19.x-rt/0107-sched-Limit-the-number-of-task-migrations-per-batch.patch similarity index 61% rename from kernel/patches-4.19.x-rt/0109-sched-limit-nr-migrate.patch rename to kernel/patches-4.19.x-rt/0107-sched-Limit-the-number-of-task-migrations-per-batch.patch index ba1947d2a..e176d5c61 100644 --- a/kernel/patches-4.19.x-rt/0109-sched-limit-nr-migrate.patch +++ b/kernel/patches-4.19.x-rt/0107-sched-Limit-the-number-of-task-migrations-per-batch.patch @@ -1,18 +1,21 @@ -Subject: sched: Limit the number of task migrations per batch +From 043af6e53425a94e13a6648ac0206a006f2d7792 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner -Date: Mon, 06 Jun 2011 12:12:51 +0200 +Date: Mon, 6 Jun 2011 12:12:51 +0200 +Subject: [PATCH 107/269] sched: Limit the number of task migrations per batch Put an upper limit on the number of tasks which are migrated per batch to avoid large latencies. 
Signed-off-by: Thomas Gleixner --- - kernel/sched/core.c | 4 ++++ + kernel/sched/core.c | 4 ++++ 1 file changed, 4 insertions(+) +diff --git a/kernel/sched/core.c b/kernel/sched/core.c +index 4ed3b29cb0c8..f6504beff565 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c -@@ -44,7 +44,11 @@ const_debug unsigned int sysctl_sched_fe +@@ -44,7 +44,11 @@ const_debug unsigned int sysctl_sched_features = * Number of tasks to iterate in a single balance run. * Limited because this is done with IRQs disabled. */ @@ -24,3 +27,6 @@ Signed-off-by: Thomas Gleixner /* * period over which we measure -rt task CPU usage in us. +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0110-sched-mmdrop-delayed.patch b/kernel/patches-4.19.x-rt/0108-sched-Move-mmdrop-to-RCU-on-RT.patch similarity index 69% rename from kernel/patches-4.19.x-rt/0110-sched-mmdrop-delayed.patch rename to kernel/patches-4.19.x-rt/0108-sched-Move-mmdrop-to-RCU-on-RT.patch index ce90eb91a..8be5b64ab 100644 --- a/kernel/patches-4.19.x-rt/0110-sched-mmdrop-delayed.patch +++ b/kernel/patches-4.19.x-rt/0108-sched-Move-mmdrop-to-RCU-on-RT.patch @@ -1,18 +1,21 @@ -Subject: sched: Move mmdrop to RCU on RT +From 2870b4f8c6cadeb84fb963b2d58ffc546a4c3371 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner -Date: Mon, 06 Jun 2011 12:20:33 +0200 +Date: Mon, 6 Jun 2011 12:20:33 +0200 +Subject: [PATCH 108/269] sched: Move mmdrop to RCU on RT Takes sleeping locks and calls into the memory allocator, so nothing we want to do in task switch and oder atomic contexts. 
Signed-off-by: Thomas Gleixner --- - include/linux/mm_types.h | 4 ++++ - include/linux/sched/mm.h | 11 +++++++++++ - kernel/fork.c | 13 +++++++++++++ - kernel/sched/core.c | 18 ++++++++++++++++-- + include/linux/mm_types.h | 4 ++++ + include/linux/sched/mm.h | 11 +++++++++++ + kernel/fork.c | 13 +++++++++++++ + kernel/sched/core.c | 18 ++++++++++++++++-- 4 files changed, 44 insertions(+), 2 deletions(-) +diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h +index 5ed8f6292a53..f430cf0a377e 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -12,6 +12,7 @@ @@ -33,9 +36,11 @@ Signed-off-by: Thomas Gleixner #ifdef CONFIG_HUGETLB_PAGE atomic_long_t hugetlb_usage; #endif +diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h +index cebb79fe2c72..6e578905e4ec 100644 --- a/include/linux/sched/mm.h +++ b/include/linux/sched/mm.h -@@ -49,6 +49,17 @@ static inline void mmdrop(struct mm_stru +@@ -49,6 +49,17 @@ static inline void mmdrop(struct mm_struct *mm) __mmdrop(mm); } @@ -50,9 +55,11 @@ Signed-off-by: Thomas Gleixner +# define mmdrop_delayed(mm) mmdrop(mm) +#endif + - /** - * mmget() - Pin the address space associated with a &struct mm_struct. - * @mm: The address space to pin. 
+ /* + * This has to be called after a get_task_mm()/mmget_not_zero() + * followed by taking the mmap_sem for writing before modifying the +diff --git a/kernel/fork.c b/kernel/fork.c +index b7e0aac93ee5..857ce1a7269f 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -637,6 +637,19 @@ void __mmdrop(struct mm_struct *mm) @@ -75,9 +82,11 @@ Signed-off-by: Thomas Gleixner static void mmdrop_async_fn(struct work_struct *work) { struct mm_struct *mm; +diff --git a/kernel/sched/core.c b/kernel/sched/core.c +index f6504beff565..551ce1adea4a 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c -@@ -2727,9 +2727,13 @@ static struct rq *finish_task_switch(str +@@ -2729,9 +2729,13 @@ static struct rq *finish_task_switch(struct task_struct *prev) * provided by mmdrop(), * - a sync_core for SYNC_CORE. */ @@ -92,7 +101,7 @@ Signed-off-by: Thomas Gleixner } if (unlikely(prev_state == TASK_DEAD)) { if (prev->sched_class->task_dead) -@@ -5557,6 +5561,8 @@ void sched_setnuma(struct task_struct *p +@@ -5559,6 +5563,8 @@ void sched_setnuma(struct task_struct *p, int nid) #endif /* CONFIG_NUMA_BALANCING */ #ifdef CONFIG_HOTPLUG_CPU @@ -101,7 +110,7 @@ Signed-off-by: Thomas Gleixner /* * Ensure that the idle task is using init_mm right before its CPU goes * offline. 
-@@ -5572,7 +5578,11 @@ void idle_task_exit(void) +@@ -5574,7 +5580,11 @@ void idle_task_exit(void) current->active_mm = &init_mm; finish_arch_post_lock_switch(); } @@ -114,7 +123,7 @@ Signed-off-by: Thomas Gleixner } /* -@@ -5884,6 +5894,10 @@ int sched_cpu_dying(unsigned int cpu) +@@ -5886,6 +5896,10 @@ int sched_cpu_dying(unsigned int cpu) update_max_interval(); nohz_balance_exit_idle(rq); hrtick_clear(rq); @@ -125,3 +134,6 @@ Signed-off-by: Thomas Gleixner return 0; } #endif +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0111-kernel-sched-move-stack-kprobe-clean-up-to-__put_tas.patch b/kernel/patches-4.19.x-rt/0109-kernel-sched-move-stack-kprobe-clean-up-to-__put_tas.patch similarity index 73% rename from kernel/patches-4.19.x-rt/0111-kernel-sched-move-stack-kprobe-clean-up-to-__put_tas.patch rename to kernel/patches-4.19.x-rt/0109-kernel-sched-move-stack-kprobe-clean-up-to-__put_tas.patch index 65db86c3b..27505f0eb 100644 --- a/kernel/patches-4.19.x-rt/0111-kernel-sched-move-stack-kprobe-clean-up-to-__put_tas.patch +++ b/kernel/patches-4.19.x-rt/0109-kernel-sched-move-stack-kprobe-clean-up-to-__put_tas.patch @@ -1,6 +1,7 @@ +From 5237487b97c59d69fbd880f60b8cc9ca5414a52a Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Mon, 21 Nov 2016 19:31:08 +0100 -Subject: [PATCH] kernel/sched: move stack + kprobe clean up to +Subject: [PATCH 109/269] kernel/sched: move stack + kprobe clean up to __put_task_struct() There is no need to free the stack before the task struct (except for reasons @@ -11,10 +12,12 @@ free memory in preempt disabled region. 
Cc: stable-rt@vger.kernel.org #for kprobe_flush_task() Signed-off-by: Sebastian Andrzej Siewior --- - kernel/fork.c | 10 ++++++++++ - kernel/sched/core.c | 9 --------- + kernel/fork.c | 10 ++++++++++ + kernel/sched/core.c | 9 --------- 2 files changed, 10 insertions(+), 9 deletions(-) +diff --git a/kernel/fork.c b/kernel/fork.c +index 857ce1a7269f..8a9241afefb0 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -40,6 +40,7 @@ @@ -25,7 +28,7 @@ Signed-off-by: Sebastian Andrzej Siewior #include #include #include -@@ -693,6 +694,15 @@ void __put_task_struct(struct task_struc +@@ -693,6 +694,15 @@ void __put_task_struct(struct task_struct *tsk) WARN_ON(atomic_read(&tsk->usage)); WARN_ON(tsk == current); @@ -41,9 +44,11 @@ Signed-off-by: Sebastian Andrzej Siewior cgroup_free(tsk); task_numa_free(tsk); security_task_free(tsk); +diff --git a/kernel/sched/core.c b/kernel/sched/core.c +index 551ce1adea4a..788947117ed2 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c -@@ -2739,15 +2739,6 @@ static struct rq *finish_task_switch(str +@@ -2741,15 +2741,6 @@ static struct rq *finish_task_switch(struct task_struct *prev) if (prev->sched_class->task_dead) prev->sched_class->task_dead(prev); @@ -59,3 +64,6 @@ Signed-off-by: Sebastian Andrzej Siewior put_task_struct(prev); } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0112-sched-rt-mutex-wakeup.patch b/kernel/patches-4.19.x-rt/0110-sched-Add-saved_state-for-tasks-blocked-on-sleeping-.patch similarity index 74% rename from kernel/patches-4.19.x-rt/0112-sched-rt-mutex-wakeup.patch rename to kernel/patches-4.19.x-rt/0110-sched-Add-saved_state-for-tasks-blocked-on-sleeping-.patch index 0127fdc75..6eb751e26 100644 --- a/kernel/patches-4.19.x-rt/0112-sched-rt-mutex-wakeup.patch +++ b/kernel/patches-4.19.x-rt/0110-sched-Add-saved_state-for-tasks-blocked-on-sleeping-.patch @@ -1,6 +1,8 @@ -Subject: sched: Add saved_state for tasks blocked on sleeping locks +From 63a798ec299b7daacf684067fbe7917856193133 Mon Sep 17 00:00:00 
2001 From: Thomas Gleixner Date: Sat, 25 Jun 2011 09:21:04 +0200 +Subject: [PATCH 110/269] sched: Add saved_state for tasks blocked on sleeping + locks Spinlocks are state preserving in !RT. RT changes the state when a task gets blocked on a lock. So we need to remember the state before @@ -10,11 +12,13 @@ sleep is done, the saved state is restored. Signed-off-by: Thomas Gleixner --- - include/linux/sched.h | 3 +++ - kernel/sched/core.c | 33 ++++++++++++++++++++++++++++++++- - kernel/sched/sched.h | 1 + + include/linux/sched.h | 3 +++ + kernel/sched/core.c | 33 ++++++++++++++++++++++++++++++++- + kernel/sched/sched.h | 1 + 3 files changed, 36 insertions(+), 1 deletion(-) +diff --git a/include/linux/sched.h b/include/linux/sched.h +index a6f2f76b1162..ad44849fba2e 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -600,6 +600,8 @@ struct task_struct { @@ -26,7 +30,7 @@ Signed-off-by: Thomas Gleixner /* * This begins the randomizable portion of task_struct. Only -@@ -1613,6 +1615,7 @@ extern struct task_struct *find_get_task +@@ -1613,6 +1615,7 @@ extern struct task_struct *find_get_task_by_vpid(pid_t nr); extern int wake_up_state(struct task_struct *tsk, unsigned int state); extern int wake_up_process(struct task_struct *tsk); @@ -34,9 +38,11 @@ Signed-off-by: Thomas Gleixner extern void wake_up_new_task(struct task_struct *tsk); #ifdef CONFIG_SMP +diff --git a/kernel/sched/core.c b/kernel/sched/core.c +index 788947117ed2..e7dccbb9973a 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c -@@ -1997,8 +1997,27 @@ try_to_wake_up(struct task_struct *p, un +@@ -1999,8 +1999,27 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags) */ raw_spin_lock_irqsave(&p->pi_lock, flags); smp_mb__after_spinlock(); @@ -65,7 +71,7 @@ Signed-off-by: Thomas Gleixner trace_sched_waking(p); -@@ -2162,6 +2181,18 @@ int wake_up_process(struct task_struct * +@@ -2164,6 +2183,18 @@ int wake_up_process(struct task_struct *p) } 
EXPORT_SYMBOL(wake_up_process); @@ -84,9 +90,11 @@ Signed-off-by: Thomas Gleixner int wake_up_state(struct task_struct *p, unsigned int state) { return try_to_wake_up(p, state, 0); +diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h +index 4c7a837d7c14..dd6ae39957ce 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h -@@ -1443,6 +1443,7 @@ static inline int task_on_rq_migrating(s +@@ -1443,6 +1443,7 @@ static inline int task_on_rq_migrating(struct task_struct *p) #define WF_SYNC 0x01 /* Waker goes to sleep after wakeup */ #define WF_FORK 0x02 /* Child wakeup after fork */ #define WF_MIGRATED 0x4 /* Internal use, task got migrated */ @@ -94,3 +102,6 @@ Signed-off-by: Thomas Gleixner /* * To aid in avoiding the subversion of "niceness" due to uneven distribution +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0113-sched-might-sleep-do-not-account-rcu-depth.patch b/kernel/patches-4.19.x-rt/0111-sched-Do-not-account-rcu_preempt_depth-on-RT-in-migh.patch similarity index 66% rename from kernel/patches-4.19.x-rt/0113-sched-might-sleep-do-not-account-rcu-depth.patch rename to kernel/patches-4.19.x-rt/0111-sched-Do-not-account-rcu_preempt_depth-on-RT-in-migh.patch index 48a9957fe..50bc8e846 100644 --- a/kernel/patches-4.19.x-rt/0113-sched-might-sleep-do-not-account-rcu-depth.patch +++ b/kernel/patches-4.19.x-rt/0111-sched-Do-not-account-rcu_preempt_depth-on-RT-in-migh.patch @@ -1,16 +1,20 @@ -Subject: sched: Do not account rcu_preempt_depth on RT in might_sleep() +From 01cbb896854fa0cccd07b728402d50b349946011 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner -Date: Tue, 07 Jun 2011 09:19:06 +0200 +Date: Tue, 7 Jun 2011 09:19:06 +0200 +Subject: [PATCH 111/269] sched: Do not account rcu_preempt_depth on RT in + might_sleep() RT changes the rcu_preempt_depth semantics, so we cannot check for it in might_sleep(). 
Signed-off-by: Thomas Gleixner --- - include/linux/rcupdate.h | 7 +++++++ - kernel/sched/core.c | 2 +- + include/linux/rcupdate.h | 7 +++++++ + kernel/sched/core.c | 2 +- 2 files changed, 8 insertions(+), 1 deletion(-) +diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h +index 75e5b393cf44..0539f55bf7b3 100644 --- a/include/linux/rcupdate.h +++ b/include/linux/rcupdate.h @@ -73,6 +73,11 @@ void synchronize_rcu(void); @@ -25,7 +29,7 @@ Signed-off-by: Thomas Gleixner #else /* #ifdef CONFIG_PREEMPT_RCU */ -@@ -98,6 +103,8 @@ static inline int rcu_preempt_depth(void +@@ -98,6 +103,8 @@ static inline int rcu_preempt_depth(void) return 0; } @@ -34,9 +38,11 @@ Signed-off-by: Thomas Gleixner #endif /* #else #ifdef CONFIG_PREEMPT_RCU */ /* Internal to kernel */ +diff --git a/kernel/sched/core.c b/kernel/sched/core.c +index e7dccbb9973a..8033a8f4efdd 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c -@@ -6154,7 +6154,7 @@ void __init sched_init(void) +@@ -6156,7 +6156,7 @@ void __init sched_init(void) #ifdef CONFIG_DEBUG_ATOMIC_SLEEP static inline int preempt_count_equals(int preempt_offset) { @@ -45,3 +51,6 @@ Signed-off-by: Thomas Gleixner return (nested == preempt_offset); } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0114-cond-resched-lock-rt-tweak.patch b/kernel/patches-4.19.x-rt/0112-sched-Use-the-proper-LOCK_OFFSET-for-cond_resched.patch similarity index 67% rename from kernel/patches-4.19.x-rt/0114-cond-resched-lock-rt-tweak.patch rename to kernel/patches-4.19.x-rt/0112-sched-Use-the-proper-LOCK_OFFSET-for-cond_resched.patch index c3caef32f..016894f35 100644 --- a/kernel/patches-4.19.x-rt/0114-cond-resched-lock-rt-tweak.patch +++ b/kernel/patches-4.19.x-rt/0112-sched-Use-the-proper-LOCK_OFFSET-for-cond_resched.patch @@ -1,15 +1,18 @@ -Subject: sched: Use the proper LOCK_OFFSET for cond_resched() +From 575557e0c67be96034f9528399a7b7361dae5dd2 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Sun, 17 Jul 2011 22:51:33 +0200 +Subject: 
[PATCH 112/269] sched: Use the proper LOCK_OFFSET for cond_resched() RT does not increment preempt count when a 'sleeping' spinlock is locked. Update PREEMPT_LOCK_OFFSET for that case. Signed-off-by: Thomas Gleixner --- - include/linux/preempt.h | 4 ++++ + include/linux/preempt.h | 4 ++++ 1 file changed, 4 insertions(+) +diff --git a/include/linux/preempt.h b/include/linux/preempt.h +index f7a17fcc3fec..b7fe717eb1f4 100644 --- a/include/linux/preempt.h +++ b/include/linux/preempt.h @@ -118,7 +118,11 @@ @@ -24,3 +27,6 @@ Signed-off-by: Thomas Gleixner /* * The preempt_count offset needed for things like: +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0115-sched-disable-ttwu-queue.patch b/kernel/patches-4.19.x-rt/0113-sched-Disable-TTWU_QUEUE-on-RT.patch similarity index 73% rename from kernel/patches-4.19.x-rt/0115-sched-disable-ttwu-queue.patch rename to kernel/patches-4.19.x-rt/0113-sched-Disable-TTWU_QUEUE-on-RT.patch index 95221e680..d7699738a 100644 --- a/kernel/patches-4.19.x-rt/0115-sched-disable-ttwu-queue.patch +++ b/kernel/patches-4.19.x-rt/0113-sched-Disable-TTWU_QUEUE-on-RT.patch @@ -1,15 +1,18 @@ -Subject: sched: Disable TTWU_QUEUE on RT +From 5e05ad5c470039b646a457459138f582bc139f3f Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Tue, 13 Sep 2011 16:42:35 +0200 +Subject: [PATCH 113/269] sched: Disable TTWU_QUEUE on RT The queued remote wakeup mechanism can introduce rather large latencies if the number of migrated tasks is high. Disable it for RT. Signed-off-by: Thomas Gleixner --- - kernel/sched/features.h | 5 +++++ + kernel/sched/features.h | 5 +++++ 1 file changed, 5 insertions(+) +diff --git a/kernel/sched/features.h b/kernel/sched/features.h +index 85ae8488039c..68de18405857 100644 --- a/kernel/sched/features.h +++ b/kernel/sched/features.h @@ -46,11 +46,16 @@ SCHED_FEAT(LB_BIAS, true) @@ -29,3 +32,6 @@ Signed-off-by: Thomas Gleixner /* * When doing wakeups, attempt to limit superfluous scans of the LLC domain. 
+-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0116-sched-workqueue-Only-wake-up-idle-workers-if-not-blo.patch b/kernel/patches-4.19.x-rt/0114-sched-workqueue-Only-wake-up-idle-workers-if-not-blo.patch similarity index 77% rename from kernel/patches-4.19.x-rt/0116-sched-workqueue-Only-wake-up-idle-workers-if-not-blo.patch rename to kernel/patches-4.19.x-rt/0114-sched-workqueue-Only-wake-up-idle-workers-if-not-blo.patch index 7c3a3860d..496b5b345 100644 --- a/kernel/patches-4.19.x-rt/0116-sched-workqueue-Only-wake-up-idle-workers-if-not-blo.patch +++ b/kernel/patches-4.19.x-rt/0114-sched-workqueue-Only-wake-up-idle-workers-if-not-blo.patch @@ -1,6 +1,8 @@ +From 1241476225268360ae571ec5de750f504cac3604 Mon Sep 17 00:00:00 2001 From: Steven Rostedt Date: Mon, 18 Mar 2013 15:12:49 -0400 -Subject: sched/workqueue: Only wake up idle workers if not blocked on sleeping spin lock +Subject: [PATCH 114/269] sched/workqueue: Only wake up idle workers if not + blocked on sleeping spin lock In -rt, most spin_locks() turn into mutexes. One of these spin_lock conversions is performed on the workqueue gcwq->lock. When the idle @@ -18,12 +20,14 @@ Check the saved_state too before waking up new workers. Signed-off-by: Steven Rostedt Signed-off-by: Sebastian Andrzej Siewior --- - kernel/sched/core.c | 4 +++- + kernel/sched/core.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) +diff --git a/kernel/sched/core.c b/kernel/sched/core.c +index 8033a8f4efdd..acca3e94ee27 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c -@@ -3496,8 +3496,10 @@ static void __sched notrace __schedule(b +@@ -3498,8 +3498,10 @@ static void __sched notrace __schedule(bool preempt) * If a worker went to sleep, notify and ask workqueue * whether it wants to wake up a task to maintain * concurrency. 
@@ -35,3 +39,6 @@ Signed-off-by: Sebastian Andrzej Siewior struct task_struct *to_wakeup; to_wakeup = wq_worker_sleeping(prev); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0117-rt-Increase-decrease-the-nr-of-migratory-tasks-when-.patch b/kernel/patches-4.19.x-rt/0115-rt-Increase-decrease-the-nr-of-migratory-tasks-when-.patch similarity index 90% rename from kernel/patches-4.19.x-rt/0117-rt-Increase-decrease-the-nr-of-migratory-tasks-when-.patch rename to kernel/patches-4.19.x-rt/0115-rt-Increase-decrease-the-nr-of-migratory-tasks-when-.patch index 9bae25ddb..d36c4347c 100644 --- a/kernel/patches-4.19.x-rt/0117-rt-Increase-decrease-the-nr-of-migratory-tasks-when-.patch +++ b/kernel/patches-4.19.x-rt/0115-rt-Increase-decrease-the-nr-of-migratory-tasks-when-.patch @@ -1,6 +1,8 @@ +From 5fe7427b8a7b38b8b395ce68c2c6cb06b2f95a58 Mon Sep 17 00:00:00 2001 From: Daniel Bristot de Oliveira Date: Mon, 26 Jun 2017 17:07:15 +0200 -Subject: rt: Increase/decrease the nr of migratory tasks when enabling/disabling migration +Subject: [PATCH 115/269] rt: Increase/decrease the nr of migratory tasks when + enabling/disabling migration There is a problem in the migrate_disable()/enable() implementation regarding the number of migratory tasks in the rt/dl RQs. 
The problem @@ -75,12 +77,14 @@ Cc: LKML Cc: linux-rt-users Signed-off-by: Sebastian Andrzej Siewior --- - kernel/sched/core.c | 49 ++++++++++++++++++++++++++++++++++++++++++++----- + kernel/sched/core.c | 49 ++++++++++++++++++++++++++++++++++++++++----- 1 file changed, 44 insertions(+), 5 deletions(-) +diff --git a/kernel/sched/core.c b/kernel/sched/core.c +index acca3e94ee27..eb752804e8cf 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c -@@ -7138,6 +7138,47 @@ const u32 sched_prio_to_wmult[40] = { +@@ -7140,6 +7140,47 @@ const u32 sched_prio_to_wmult[40] = { #if defined(CONFIG_PREEMPT_COUNT) && defined(CONFIG_SMP) @@ -128,7 +132,7 @@ Signed-off-by: Sebastian Andrzej Siewior void migrate_disable(void) { struct task_struct *p = current; -@@ -7161,10 +7202,9 @@ void migrate_disable(void) +@@ -7163,10 +7204,9 @@ void migrate_disable(void) } preempt_disable(); @@ -141,7 +145,7 @@ Signed-off-by: Sebastian Andrzej Siewior preempt_enable(); } -@@ -7196,9 +7236,8 @@ void migrate_enable(void) +@@ -7198,9 +7238,8 @@ void migrate_enable(void) preempt_disable(); @@ -152,3 +156,6 @@ Signed-off-by: Sebastian Andrzej Siewior if (p->migrate_disable_update) { struct rq *rq; +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0118-hotplug-light-get-online-cpus.patch b/kernel/patches-4.19.x-rt/0116-hotplug-Lightweight-get-online-cpus.patch similarity index 75% rename from kernel/patches-4.19.x-rt/0118-hotplug-light-get-online-cpus.patch rename to kernel/patches-4.19.x-rt/0116-hotplug-Lightweight-get-online-cpus.patch index bf5a3a6b7..bc416cb21 100644 --- a/kernel/patches-4.19.x-rt/0118-hotplug-light-get-online-cpus.patch +++ b/kernel/patches-4.19.x-rt/0116-hotplug-Lightweight-get-online-cpus.patch @@ -1,6 +1,7 @@ -Subject: hotplug: Lightweight get online cpus +From 1e1a0808ffc8df10c6bc1e46f40a4948395f72a6 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Wed, 15 Jun 2011 12:36:06 +0200 +Subject: [PATCH 116/269] hotplug: Lightweight get online cpus get_online_cpus() is a 
heavy weight function which involves a global mutex. migrate_disable() wants a simpler construct which prevents only @@ -12,11 +13,13 @@ tasks on the cpu which should be brought down. Signed-off-by: Thomas Gleixner --- - include/linux/cpu.h | 5 +++++ - kernel/cpu.c | 15 +++++++++++++++ - kernel/sched/core.c | 4 ++++ + include/linux/cpu.h | 5 +++++ + kernel/cpu.c | 15 +++++++++++++++ + kernel/sched/core.c | 4 ++++ 3 files changed, 24 insertions(+) +diff --git a/include/linux/cpu.h b/include/linux/cpu.h +index 5041357d0297..3403eab853b7 100644 --- a/include/linux/cpu.h +++ b/include/linux/cpu.h @@ -111,6 +111,8 @@ extern void cpu_hotplug_disable(void); @@ -28,7 +31,7 @@ Signed-off-by: Thomas Gleixner #else /* CONFIG_HOTPLUG_CPU */ -@@ -122,6 +124,9 @@ static inline int cpus_read_trylock(voi +@@ -122,6 +124,9 @@ static inline int cpus_read_trylock(void) { return true; } static inline void lockdep_assert_cpus_held(void) { } static inline void cpu_hotplug_disable(void) { } static inline void cpu_hotplug_enable(void) { } @@ -38,6 +41,8 @@ Signed-off-by: Thomas Gleixner #endif /* !CONFIG_HOTPLUG_CPU */ /* Wrappers which go away once all code is converted */ +diff --git a/kernel/cpu.c b/kernel/cpu.c +index dc250ec2c096..f684f41492d3 100644 --- a/kernel/cpu.c +++ b/kernel/cpu.c @@ -281,6 +281,21 @@ static int cpu_hotplug_disabled; @@ -62,9 +67,11 @@ Signed-off-by: Thomas Gleixner DEFINE_STATIC_PERCPU_RWSEM(cpu_hotplug_lock); void cpus_read_lock(void) +diff --git a/kernel/sched/core.c b/kernel/sched/core.c +index eb752804e8cf..516f05702550 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c -@@ -7202,6 +7202,7 @@ void migrate_disable(void) +@@ -7204,6 +7204,7 @@ void migrate_disable(void) } preempt_disable(); @@ -72,7 +79,7 @@ Signed-off-by: Thomas Gleixner migrate_disable_update_cpus_allowed(p); p->migrate_disable = 1; -@@ -7267,12 +7268,15 @@ void migrate_enable(void) +@@ -7269,12 +7270,15 @@ void migrate_enable(void) arg.task = p; arg.dest_cpu = dest_cpu; @@ -88,3 
+95,6 @@ Signed-off-by: Thomas Gleixner preempt_enable(); } EXPORT_SYMBOL(migrate_enable); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0119-ftrace-migrate-disable-tracing.patch b/kernel/patches-4.19.x-rt/0117-trace-Add-migrate-disabled-counter-to-tracing-output.patch similarity index 65% rename from kernel/patches-4.19.x-rt/0119-ftrace-migrate-disable-tracing.patch rename to kernel/patches-4.19.x-rt/0117-trace-Add-migrate-disabled-counter-to-tracing-output.patch index 15e60523d..1ffddc0d0 100644 --- a/kernel/patches-4.19.x-rt/0119-ftrace-migrate-disable-tracing.patch +++ b/kernel/patches-4.19.x-rt/0117-trace-Add-migrate-disabled-counter-to-tracing-output.patch @@ -1,15 +1,18 @@ +From e93174d8da86d81922b37dd559f026f1eb4cafb8 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Sun, 17 Jul 2011 21:56:42 +0200 -Subject: trace: Add migrate-disabled counter to tracing output +Subject: [PATCH 117/269] trace: Add migrate-disabled counter to tracing output Signed-off-by: Thomas Gleixner --- - include/linux/trace_events.h | 2 ++ - kernel/trace/trace.c | 9 ++++++--- - kernel/trace/trace_events.c | 2 ++ - kernel/trace/trace_output.c | 5 +++++ + include/linux/trace_events.h | 2 ++ + kernel/trace/trace.c | 9 ++++++--- + kernel/trace/trace_events.c | 2 ++ + kernel/trace/trace_output.c | 5 +++++ 4 files changed, 15 insertions(+), 3 deletions(-) +diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h +index 78a010e19ed4..0403d9696944 100644 --- a/include/linux/trace_events.h +++ b/include/linux/trace_events.h @@ -62,6 +62,8 @@ struct trace_entry { @@ -21,9 +24,11 @@ Signed-off-by: Thomas Gleixner }; #define TRACE_EVENT_TYPE_MAX \ +diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c +index c65cea71d1ee..0af14953d52d 100644 --- a/kernel/trace/trace.c +++ b/kernel/trace/trace.c -@@ -2146,6 +2146,8 @@ tracing_generic_entry_update(struct trac +@@ -2146,6 +2146,8 @@ tracing_generic_entry_update(struct trace_entry *entry, unsigned long flags, ((pc & 
SOFTIRQ_OFFSET) ? TRACE_FLAG_SOFTIRQ : 0) | (tif_need_resched() ? TRACE_FLAG_NEED_RESCHED : 0) | (test_preempt_need_resched() ? TRACE_FLAG_PREEMPT_RESCHED : 0); @@ -32,7 +37,7 @@ Signed-off-by: Thomas Gleixner } EXPORT_SYMBOL_GPL(tracing_generic_entry_update); -@@ -3349,9 +3351,10 @@ static void print_lat_help_header(struct +@@ -3349,9 +3351,10 @@ static void print_lat_help_header(struct seq_file *m) "# | / _----=> need-resched \n" "# || / _---=> hardirq/softirq \n" "# ||| / _--=> preempt-depth \n" @@ -46,9 +51,11 @@ Signed-off-by: Thomas Gleixner } static void print_event_info(struct trace_buffer *buf, struct seq_file *m) +diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c +index f94be0c2827b..acdb2c2067c6 100644 --- a/kernel/trace/trace_events.c +++ b/kernel/trace/trace_events.c -@@ -188,6 +188,8 @@ static int trace_define_common_fields(vo +@@ -188,6 +188,8 @@ static int trace_define_common_fields(void) __common_field(unsigned char, flags); __common_field(unsigned char, preempt_count); __common_field(int, pid); @@ -57,9 +64,11 @@ Signed-off-by: Thomas Gleixner return ret; } +diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c +index 6e6cc64faa38..46c96744f09d 100644 --- a/kernel/trace/trace_output.c +++ b/kernel/trace/trace_output.c -@@ -494,6 +494,11 @@ int trace_print_lat_fmt(struct trace_seq +@@ -494,6 +494,11 @@ int trace_print_lat_fmt(struct trace_seq *s, struct trace_entry *entry) else trace_seq_putc(s, '.'); @@ -71,3 +80,6 @@ Signed-off-by: Thomas Gleixner return !trace_seq_has_overflowed(s); } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0120-lockdep-no-softirq-accounting-on-rt.patch b/kernel/patches-4.19.x-rt/0118-lockdep-Make-it-RT-aware.patch similarity index 70% rename from kernel/patches-4.19.x-rt/0120-lockdep-no-softirq-accounting-on-rt.patch rename to kernel/patches-4.19.x-rt/0118-lockdep-Make-it-RT-aware.patch index 027a811cc..2678553ac 100644 --- 
a/kernel/patches-4.19.x-rt/0120-lockdep-no-softirq-accounting-on-rt.patch +++ b/kernel/patches-4.19.x-rt/0118-lockdep-Make-it-RT-aware.patch @@ -1,15 +1,18 @@ -Subject: lockdep: Make it RT aware +From 1a31bace22b513efaa0864bd1d32d7d4c698a618 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Sun, 17 Jul 2011 18:51:23 +0200 +Subject: [PATCH 118/269] lockdep: Make it RT aware teach lockdep that we don't really do softirqs on -RT. Signed-off-by: Thomas Gleixner --- - include/linux/irqflags.h | 23 +++++++++++++++-------- - kernel/locking/lockdep.c | 2 ++ + include/linux/irqflags.h | 23 +++++++++++++++-------- + kernel/locking/lockdep.c | 2 ++ 2 files changed, 17 insertions(+), 8 deletions(-) +diff --git a/include/linux/irqflags.h b/include/linux/irqflags.h +index 21619c92c377..b20eeb25e9fa 100644 --- a/include/linux/irqflags.h +++ b/include/linux/irqflags.h @@ -43,14 +43,6 @@ do { \ @@ -27,11 +30,10 @@ Signed-off-by: Thomas Gleixner #else # define trace_hardirqs_on() do { } while (0) # define trace_hardirqs_off() do { } while (0) -@@ -63,6 +55,21 @@ do { \ - # define lockdep_softirq_enter() do { } while (0) +@@ -64,6 +56,21 @@ do { \ # define lockdep_softirq_exit() do { } while (0) #endif -+ + +#if defined(CONFIG_TRACE_IRQFLAGS) && !defined(CONFIG_PREEMPT_RT_FULL) +# define lockdep_softirq_enter() \ +do { \ @@ -46,12 +48,15 @@ Signed-off-by: Thomas Gleixner +# define lockdep_softirq_enter() do { } while (0) +# define lockdep_softirq_exit() do { } while (0) +#endif - ++ #if defined(CONFIG_IRQSOFF_TRACER) || \ defined(CONFIG_PREEMPT_TRACER) + extern void stop_critical_timings(void); +diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c +index 26b57e24476f..6daeb369f691 100644 --- a/kernel/locking/lockdep.c +++ b/kernel/locking/lockdep.c -@@ -3823,6 +3823,7 @@ static void check_flags(unsigned long fl +@@ -3823,6 +3823,7 @@ static void check_flags(unsigned long flags) } } @@ -59,7 +64,7 @@ Signed-off-by: Thomas Gleixner /* * We dont accurately track softirq 
state in e.g. * hardirq contexts (such as on 4KSTACKS), so only -@@ -3837,6 +3838,7 @@ static void check_flags(unsigned long fl +@@ -3837,6 +3838,7 @@ static void check_flags(unsigned long flags) DEBUG_LOCKS_WARN_ON(!current->softirqs_enabled); } } @@ -67,3 +72,6 @@ Signed-off-by: Thomas Gleixner if (!debug_locks) print_irqtrace_events(current); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0121-tasklet-rt-prevent-tasklets-from-going-into-infinite-spin-in-rt.patch b/kernel/patches-4.19.x-rt/0119-tasklet-Prevent-tasklets-from-going-into-infinite-sp.patch similarity index 88% rename from kernel/patches-4.19.x-rt/0121-tasklet-rt-prevent-tasklets-from-going-into-infinite-spin-in-rt.patch rename to kernel/patches-4.19.x-rt/0119-tasklet-Prevent-tasklets-from-going-into-infinite-sp.patch index 451c7923a..299c47093 100644 --- a/kernel/patches-4.19.x-rt/0121-tasklet-rt-prevent-tasklets-from-going-into-infinite-spin-in-rt.patch +++ b/kernel/patches-4.19.x-rt/0119-tasklet-Prevent-tasklets-from-going-into-infinite-sp.patch @@ -1,6 +1,8 @@ -Subject: tasklet: Prevent tasklets from going into infinite spin in RT +From f0dbaae62eb8d03e46818d0babb5889b3a5ce6eb Mon Sep 17 00:00:00 2001 From: Ingo Molnar -Date: Tue Nov 29 20:18:22 2011 -0500 +Date: Tue, 29 Nov 2011 20:18:22 -0500 +Subject: [PATCH 119/269] tasklet: Prevent tasklets from going into infinite + spin in RT When CONFIG_PREEMPT_RT_FULL is enabled, tasklets run as threads, and spinlocks turn are mutexes. But this can cause issues with @@ -9,21 +11,21 @@ if a tasklets are disabled with tasklet_disable(), the tasklet count is increased. When a tasklet runs, it checks this counter and if it is set, it adds itself back on the softirq queue and returns. - + The problem arises in RT because ksoftirq will see that a softirq is ready to run (the tasklet softirq just re-armed itself), and will not sleep, but instead run the softirqs again. 
The tasklet softirq will still see that the count is non-zero and will not execute the tasklet and requeue itself on the softirq again, which will cause ksoftirqd to run it again and again and again. - + It gets worse because ksoftirqd runs as a real-time thread. If it preempted the task that disabled tasklets, and that task has migration disabled, or can't run for other reasons, the tasklet softirq will never run because the count will never be zero, and ksoftirqd will go into an infinite loop. As an RT task, it this becomes a big problem. - + This is a hack solution to have tasklet_disable stop tasklets, and when a tasklet runs, instead of requeueing the tasklet softirqd it delays it. When tasklet_enable() is called, and tasklets are @@ -31,19 +33,20 @@ waiting, then the tasklet_enable() will kick the tasklets to continue. This prevents the lock up from ksoftirq going into an infinite loop. [ rostedt@goodmis.org: ported to 3.0-rt ] - + Signed-off-by: Ingo Molnar Signed-off-by: Steven Rostedt Signed-off-by: Thomas Gleixner - --- - include/linux/interrupt.h | 33 ++++++------ - kernel/softirq.c | 126 ++++++++++++++++++++++++++++++++++++++-------- + include/linux/interrupt.h | 33 +++++----- + kernel/softirq.c | 126 ++++++++++++++++++++++++++++++++------ 2 files changed, 125 insertions(+), 34 deletions(-) +diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h +index a943c07b54ba..e74936c7be48 100644 --- a/include/linux/interrupt.h +++ b/include/linux/interrupt.h -@@ -542,8 +542,9 @@ static inline struct task_struct *this_c +@@ -542,8 +542,9 @@ static inline struct task_struct *this_cpu_ksoftirqd(void) to be executed on some cpu at least once after this. * If the tasklet is already scheduled, but its execution is still not started, it will be executed only once. @@ -55,7 +58,7 @@ Signed-off-by: Thomas Gleixner * Tasklet is strictly serialized wrt itself, but not wrt another tasklets. 
If client needs some intertask synchronization, he makes it with spinlocks. -@@ -568,27 +569,36 @@ struct tasklet_struct name = { NULL, 0, +@@ -568,27 +569,36 @@ struct tasklet_struct name = { NULL, 0, ATOMIC_INIT(1), func, data } enum { TASKLET_STATE_SCHED, /* Tasklet is scheduled for execution */ @@ -98,7 +101,7 @@ Signed-off-by: Thomas Gleixner #define tasklet_unlock_wait(t) do { } while (0) #define tasklet_unlock(t) do { } while (0) #endif -@@ -622,12 +632,7 @@ static inline void tasklet_disable(struc +@@ -622,12 +632,7 @@ static inline void tasklet_disable(struct tasklet_struct *t) smp_mb(); } @@ -112,6 +115,8 @@ Signed-off-by: Thomas Gleixner extern void tasklet_kill(struct tasklet_struct *t); extern void tasklet_kill_immediate(struct tasklet_struct *t, unsigned int cpu); extern void tasklet_init(struct tasklet_struct *t, +diff --git a/kernel/softirq.c b/kernel/softirq.c +index 6f584861d329..1d3a482246cc 100644 --- a/kernel/softirq.c +++ b/kernel/softirq.c @@ -21,6 +21,7 @@ @@ -122,7 +127,7 @@ Signed-off-by: Thomas Gleixner #include #include #include -@@ -475,11 +476,38 @@ static void __tasklet_schedule_common(st +@@ -475,11 +476,38 @@ static void __tasklet_schedule_common(struct tasklet_struct *t, unsigned long flags; local_irq_save(flags); @@ -165,7 +170,7 @@ Signed-off-by: Thomas Gleixner local_irq_restore(flags); } -@@ -497,11 +525,21 @@ void __tasklet_hi_schedule(struct taskle +@@ -497,11 +525,21 @@ void __tasklet_hi_schedule(struct tasklet_struct *t) } EXPORT_SYMBOL(__tasklet_hi_schedule); @@ -187,7 +192,7 @@ Signed-off-by: Thomas Gleixner local_irq_disable(); list = tl_head->head; -@@ -513,25 +551,56 @@ static void tasklet_action_common(struct +@@ -513,25 +551,56 @@ static void tasklet_action_common(struct softirq_action *a, struct tasklet_struct *t = list; list = list->next; @@ -259,7 +264,7 @@ Signed-off-by: Thomas Gleixner } } -@@ -563,7 +632,7 @@ void tasklet_kill(struct tasklet_struct +@@ -563,7 +632,7 @@ void tasklet_kill(struct tasklet_struct 
*t) while (test_and_set_bit(TASKLET_STATE_SCHED, &t->state)) { do { @@ -292,3 +297,6 @@ Signed-off-by: Thomas Gleixner static int ksoftirqd_should_run(unsigned int cpu) { return local_softirq_pending(); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0122-softirq-preempt-fix-3-re.patch b/kernel/patches-4.19.x-rt/0120-softirq-Check-preemption-after-reenabling-interrupts.patch similarity index 69% rename from kernel/patches-4.19.x-rt/0122-softirq-preempt-fix-3-re.patch rename to kernel/patches-4.19.x-rt/0120-softirq-Check-preemption-after-reenabling-interrupts.patch index c860215d5..ee4970952 100644 --- a/kernel/patches-4.19.x-rt/0122-softirq-preempt-fix-3-re.patch +++ b/kernel/patches-4.19.x-rt/0120-softirq-Check-preemption-after-reenabling-interrupts.patch @@ -1,6 +1,7 @@ -Subject: softirq: Check preemption after reenabling interrupts +From dcfab76d9eab264a1e79cc42713a004d2ef7658b Mon Sep 17 00:00:00 2001 From: Thomas Gleixner -Date: Sun, 13 Nov 2011 17:17:09 +0100 (CET) +Date: Sun, 13 Nov 2011 17:17:09 +0100 +Subject: [PATCH 120/269] softirq: Check preemption after reenabling interrupts raise_softirq_irqoff() disables interrupts and wakes the softirq daemon, but after reenabling interrupts there is no preemption check, @@ -12,14 +13,15 @@ ones which show this behaviour. 
Reported-by: Carsten Emde Signed-off-by: Thomas Gleixner - --- - block/blk-softirq.c | 3 +++ - include/linux/preempt.h | 3 +++ - lib/irq_poll.c | 5 +++++ - net/core/dev.c | 7 +++++++ + block/blk-softirq.c | 3 +++ + include/linux/preempt.h | 3 +++ + lib/irq_poll.c | 5 +++++ + net/core/dev.c | 7 +++++++ 4 files changed, 18 insertions(+) +diff --git a/block/blk-softirq.c b/block/blk-softirq.c +index 15c1f5e12eb8..1628277885a1 100644 --- a/block/blk-softirq.c +++ b/block/blk-softirq.c @@ -53,6 +53,7 @@ static void trigger_softirq(void *data) @@ -30,7 +32,7 @@ Signed-off-by: Thomas Gleixner } /* -@@ -91,6 +92,7 @@ static int blk_softirq_cpu_dead(unsigned +@@ -91,6 +92,7 @@ static int blk_softirq_cpu_dead(unsigned int cpu) this_cpu_ptr(&blk_cpu_done)); raise_softirq_irqoff(BLOCK_SOFTIRQ); local_irq_enable(); @@ -38,7 +40,7 @@ Signed-off-by: Thomas Gleixner return 0; } -@@ -143,6 +145,7 @@ void __blk_complete_request(struct reque +@@ -143,6 +145,7 @@ void __blk_complete_request(struct request *req) goto do_local; local_irq_restore(flags); @@ -46,6 +48,8 @@ Signed-off-by: Thomas Gleixner } EXPORT_SYMBOL(__blk_complete_request); +diff --git a/include/linux/preempt.h b/include/linux/preempt.h +index b7fe717eb1f4..9984f2b75b73 100644 --- a/include/linux/preempt.h +++ b/include/linux/preempt.h @@ -187,8 +187,10 @@ do { \ @@ -67,9 +71,11 @@ Signed-off-by: Thomas Gleixner #define preemptible() 0 #define migrate_disable() barrier() +diff --git a/lib/irq_poll.c b/lib/irq_poll.c +index 86a709954f5a..9c069ef83d6d 100644 --- a/lib/irq_poll.c +++ b/lib/irq_poll.c -@@ -37,6 +37,7 @@ void irq_poll_sched(struct irq_poll *iop +@@ -37,6 +37,7 @@ void irq_poll_sched(struct irq_poll *iop) list_add_tail(&iop->list, this_cpu_ptr(&blk_cpu_iopoll)); __raise_softirq_irqoff(IRQ_POLL_SOFTIRQ); local_irq_restore(flags); @@ -77,7 +83,7 @@ Signed-off-by: Thomas Gleixner } EXPORT_SYMBOL(irq_poll_sched); -@@ -72,6 +73,7 @@ void irq_poll_complete(struct irq_poll * +@@ -72,6 +73,7 @@ void 
irq_poll_complete(struct irq_poll *iop) local_irq_save(flags); __irq_poll_complete(iop); local_irq_restore(flags); @@ -85,7 +91,7 @@ Signed-off-by: Thomas Gleixner } EXPORT_SYMBOL(irq_poll_complete); -@@ -96,6 +98,7 @@ static void __latent_entropy irq_poll_so +@@ -96,6 +98,7 @@ static void __latent_entropy irq_poll_softirq(struct softirq_action *h) } local_irq_enable(); @@ -93,7 +99,7 @@ Signed-off-by: Thomas Gleixner /* Even though interrupts have been re-enabled, this * access is safe because interrupts can only add new -@@ -133,6 +136,7 @@ static void __latent_entropy irq_poll_so +@@ -133,6 +136,7 @@ static void __latent_entropy irq_poll_softirq(struct softirq_action *h) __raise_softirq_irqoff(IRQ_POLL_SOFTIRQ); local_irq_enable(); @@ -101,7 +107,7 @@ Signed-off-by: Thomas Gleixner } /** -@@ -196,6 +200,7 @@ static int irq_poll_cpu_dead(unsigned in +@@ -196,6 +200,7 @@ static int irq_poll_cpu_dead(unsigned int cpu) this_cpu_ptr(&blk_cpu_iopoll)); __raise_softirq_irqoff(IRQ_POLL_SOFTIRQ); local_irq_enable(); @@ -109,9 +115,11 @@ Signed-off-by: Thomas Gleixner return 0; } +diff --git a/net/core/dev.c b/net/core/dev.c +index 3bcec116a5f2..3362d8897058 100644 --- a/net/core/dev.c +++ b/net/core/dev.c -@@ -2712,6 +2712,7 @@ static void __netif_reschedule(struct Qd +@@ -2726,6 +2726,7 @@ static void __netif_reschedule(struct Qdisc *q) sd->output_queue_tailp = &q->next_sched; raise_softirq_irqoff(NET_TX_SOFTIRQ); local_irq_restore(flags); @@ -119,7 +127,7 @@ Signed-off-by: Thomas Gleixner } void __netif_schedule(struct Qdisc *q) -@@ -2774,6 +2775,7 @@ void __dev_kfree_skb_irq(struct sk_buff +@@ -2788,6 +2789,7 @@ void __dev_kfree_skb_irq(struct sk_buff *skb, enum skb_free_reason reason) __this_cpu_write(softnet_data.completion_queue, skb); raise_softirq_irqoff(NET_TX_SOFTIRQ); local_irq_restore(flags); @@ -127,7 +135,7 @@ Signed-off-by: Thomas Gleixner } EXPORT_SYMBOL(__dev_kfree_skb_irq); -@@ -4246,6 +4248,7 @@ static int enqueue_to_backlog(struct sk_ +@@ -4260,6 
+4262,7 @@ static int enqueue_to_backlog(struct sk_buff *skb, int cpu, rps_unlock(sd); local_irq_restore(flags); @@ -135,7 +143,7 @@ Signed-off-by: Thomas Gleixner atomic_long_inc(&skb->dev->rx_dropped); kfree_skb(skb); -@@ -5799,12 +5802,14 @@ static void net_rps_action_and_irq_enabl +@@ -5815,12 +5818,14 @@ static void net_rps_action_and_irq_enable(struct softnet_data *sd) sd->rps_ipi_list = NULL; local_irq_enable(); @@ -150,7 +158,7 @@ Signed-off-by: Thomas Gleixner } static bool sd_has_rps_ipi_waiting(struct softnet_data *sd) -@@ -5882,6 +5887,7 @@ void __napi_schedule(struct napi_struct +@@ -5898,6 +5903,7 @@ void __napi_schedule(struct napi_struct *n) local_irq_save(flags); ____napi_schedule(this_cpu_ptr(&softnet_data), n); local_irq_restore(flags); @@ -158,7 +166,7 @@ Signed-off-by: Thomas Gleixner } EXPORT_SYMBOL(__napi_schedule); -@@ -9289,6 +9295,7 @@ static int dev_cpu_dead(unsigned int old +@@ -9305,6 +9311,7 @@ static int dev_cpu_dead(unsigned int oldcpu) raise_softirq_irqoff(NET_TX_SOFTIRQ); local_irq_enable(); @@ -166,3 +174,6 @@ Signed-off-by: Thomas Gleixner #ifdef CONFIG_RPS remsd = oldsd->rps_ipi_list; +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0123-softirq-disable-softirq-stacks-for-rt.patch b/kernel/patches-4.19.x-rt/0121-softirq-Disable-softirq-stacks-for-RT.patch similarity index 68% rename from kernel/patches-4.19.x-rt/0123-softirq-disable-softirq-stacks-for-rt.patch rename to kernel/patches-4.19.x-rt/0121-softirq-Disable-softirq-stacks-for-RT.patch index 0160ba746..f15595b88 100644 --- a/kernel/patches-4.19.x-rt/0123-softirq-disable-softirq-stacks-for-rt.patch +++ b/kernel/patches-4.19.x-rt/0121-softirq-Disable-softirq-stacks-for-RT.patch @@ -1,22 +1,25 @@ -Subject: softirq: Disable softirq stacks for RT +From 7a6ae7f96331bdaeeac96006086d01805ca48612 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Mon, 18 Jul 2011 13:59:17 +0200 +Subject: [PATCH 121/269] softirq: Disable softirq stacks for RT Disable extra stacks for softirqs. 
We want to preempt softirqs and having them on special IRQ-stack does not make this easier. Signed-off-by: Thomas Gleixner --- - arch/powerpc/kernel/irq.c | 2 ++ - arch/powerpc/kernel/misc_32.S | 2 ++ - arch/powerpc/kernel/misc_64.S | 2 ++ - arch/sh/kernel/irq.c | 2 ++ - arch/sparc/kernel/irq_64.c | 2 ++ - arch/x86/entry/entry_64.S | 2 ++ - arch/x86/kernel/irq_32.c | 2 ++ - include/linux/interrupt.h | 2 +- + arch/powerpc/kernel/irq.c | 2 ++ + arch/powerpc/kernel/misc_32.S | 2 ++ + arch/powerpc/kernel/misc_64.S | 2 ++ + arch/sh/kernel/irq.c | 2 ++ + arch/sparc/kernel/irq_64.c | 2 ++ + arch/x86/entry/entry_64.S | 2 ++ + arch/x86/kernel/irq_32.c | 2 ++ + include/linux/interrupt.h | 2 +- 8 files changed, 15 insertions(+), 1 deletion(-) +diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c +index 916ddc4aac44..833d27f85aea 100644 --- a/arch/powerpc/kernel/irq.c +++ b/arch/powerpc/kernel/irq.c @@ -766,6 +766,7 @@ void irq_ctx_init(void) @@ -35,6 +38,8 @@ Signed-off-by: Thomas Gleixner irq_hw_number_t virq_to_hw(unsigned int virq) { +diff --git a/arch/powerpc/kernel/misc_32.S b/arch/powerpc/kernel/misc_32.S +index 695b24a2d954..032ada21b7bd 100644 --- a/arch/powerpc/kernel/misc_32.S +++ b/arch/powerpc/kernel/misc_32.S @@ -42,6 +42,7 @@ @@ -45,7 +50,7 @@ Signed-off-by: Thomas Gleixner _GLOBAL(call_do_softirq) mflr r0 stw r0,4(r1) -@@ -58,6 +59,7 @@ +@@ -58,6 +59,7 @@ _GLOBAL(call_do_softirq) stw r10,THREAD+KSP_LIMIT(r2) mtlr r0 blr @@ -53,6 +58,8 @@ Signed-off-by: Thomas Gleixner /* * void call_do_irq(struct pt_regs *regs, struct thread_info *irqtp); +diff --git a/arch/powerpc/kernel/misc_64.S b/arch/powerpc/kernel/misc_64.S +index 262ba9481781..4935ef9a142e 100644 --- a/arch/powerpc/kernel/misc_64.S +++ b/arch/powerpc/kernel/misc_64.S @@ -32,6 +32,7 @@ @@ -63,7 +70,7 @@ Signed-off-by: Thomas Gleixner _GLOBAL(call_do_softirq) mflr r0 std r0,16(r1) -@@ -42,6 +43,7 @@ +@@ -42,6 +43,7 @@ _GLOBAL(call_do_softirq) ld r0,16(r1) mtlr r0 blr @@ -71,6 +78,8 @@ 
Signed-off-by: Thomas Gleixner _GLOBAL(call_do_irq) mflr r0 +diff --git a/arch/sh/kernel/irq.c b/arch/sh/kernel/irq.c +index 5717c7cbdd97..66dd399b2007 100644 --- a/arch/sh/kernel/irq.c +++ b/arch/sh/kernel/irq.c @@ -148,6 +148,7 @@ void irq_ctx_exit(int cpu) @@ -89,9 +98,11 @@ Signed-off-by: Thomas Gleixner #else static inline void handle_one_irq(unsigned int irq) { +diff --git a/arch/sparc/kernel/irq_64.c b/arch/sparc/kernel/irq_64.c +index 713670e6d13d..5dfc715343f9 100644 --- a/arch/sparc/kernel/irq_64.c +++ b/arch/sparc/kernel/irq_64.c -@@ -854,6 +854,7 @@ void __irq_entry handler_irq(int pil, st +@@ -854,6 +854,7 @@ void __irq_entry handler_irq(int pil, struct pt_regs *regs) set_irq_regs(old_regs); } @@ -107,9 +118,11 @@ Signed-off-by: Thomas Gleixner #ifdef CONFIG_HOTPLUG_CPU void fixup_irqs(void) +diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S +index 617df50a11d9..ce2a6587ed11 100644 --- a/arch/x86/entry/entry_64.S +++ b/arch/x86/entry/entry_64.S -@@ -1039,6 +1039,7 @@ EXPORT_SYMBOL(native_load_gs_index) +@@ -1043,6 +1043,7 @@ bad_gs: jmp 2b .previous @@ -117,7 +130,7 @@ Signed-off-by: Thomas Gleixner /* Call softirq on interrupt stack. Interrupts are off. 
*/ ENTRY(do_softirq_own_stack) pushq %rbp -@@ -1049,6 +1050,7 @@ ENTRY(do_softirq_own_stack) +@@ -1053,6 +1054,7 @@ ENTRY(do_softirq_own_stack) leaveq ret ENDPROC(do_softirq_own_stack) @@ -125,6 +138,8 @@ Signed-off-by: Thomas Gleixner #ifdef CONFIG_XEN idtentry hypervisor_callback xen_do_hypervisor_callback has_error_code=0 +diff --git a/arch/x86/kernel/irq_32.c b/arch/x86/kernel/irq_32.c +index 95600a99ae93..9192d76085ba 100644 --- a/arch/x86/kernel/irq_32.c +++ b/arch/x86/kernel/irq_32.c @@ -130,6 +130,7 @@ void irq_ctx_init(int cpu) @@ -143,6 +158,8 @@ Signed-off-by: Thomas Gleixner bool handle_irq(struct irq_desc *desc, struct pt_regs *regs) { +diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h +index e74936c7be48..cb2d1384cb0d 100644 --- a/include/linux/interrupt.h +++ b/include/linux/interrupt.h @@ -506,7 +506,7 @@ struct softirq_action @@ -154,3 +171,6 @@ Signed-off-by: Thomas Gleixner void do_softirq_own_stack(void); #else static inline void do_softirq_own_stack(void) +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0124-softirq-split-locks.patch b/kernel/patches-4.19.x-rt/0122-softirq-Split-softirq-locks.patch similarity index 91% rename from kernel/patches-4.19.x-rt/0124-softirq-split-locks.patch rename to kernel/patches-4.19.x-rt/0122-softirq-Split-softirq-locks.patch index fe46eb113..729db5e9f 100644 --- a/kernel/patches-4.19.x-rt/0124-softirq-split-locks.patch +++ b/kernel/patches-4.19.x-rt/0122-softirq-Split-softirq-locks.patch @@ -1,6 +1,7 @@ +From 35e1d70c2ede4d34ff411570acf377f7ffe77e70 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner -Date: Thu, 04 Oct 2012 14:20:47 +0100 -Subject: softirq: Split softirq locks +Date: Thu, 4 Oct 2012 14:20:47 +0100 +Subject: [PATCH 122/269] softirq: Split softirq locks The 3.x RT series removed the split softirq implementation in favour of pushing softirq processing into the context of the thread which @@ -24,15 +25,17 @@ threads. 
Signed-off-by: Thomas Gleixner --- - include/linux/bottom_half.h | 34 +++ - include/linux/interrupt.h | 15 + - include/linux/preempt.h | 15 + - include/linux/sched.h | 3 - init/main.c | 1 - kernel/softirq.c | 491 +++++++++++++++++++++++++++++++++++++------- - kernel/time/tick-sched.c | 9 + include/linux/bottom_half.h | 34 +++ + include/linux/interrupt.h | 15 +- + include/linux/preempt.h | 15 +- + include/linux/sched.h | 3 + + init/main.c | 1 + + kernel/softirq.c | 491 ++++++++++++++++++++++++++++++------ + kernel/time/tick-sched.c | 9 +- 7 files changed, 478 insertions(+), 90 deletions(-) +diff --git a/include/linux/bottom_half.h b/include/linux/bottom_half.h +index a19519f4241d..40dd5ef9c154 100644 --- a/include/linux/bottom_half.h +++ b/include/linux/bottom_half.h @@ -4,6 +4,39 @@ @@ -82,6 +85,8 @@ Signed-off-by: Thomas Gleixner +#endif #endif /* _LINUX_BH_H */ +diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h +index cb2d1384cb0d..6c25b962ba89 100644 --- a/include/linux/interrupt.h +++ b/include/linux/interrupt.h @@ -503,10 +503,11 @@ struct softirq_action @@ -98,7 +103,7 @@ Signed-off-by: Thomas Gleixner void do_softirq_own_stack(void); #else static inline void do_softirq_own_stack(void) -@@ -514,6 +515,9 @@ static inline void do_softirq_own_stack( +@@ -514,6 +515,9 @@ static inline void do_softirq_own_stack(void) __do_softirq(); } #endif @@ -108,7 +113,7 @@ Signed-off-by: Thomas Gleixner extern void open_softirq(int nr, void (*action)(struct softirq_action *)); extern void softirq_init(void); -@@ -521,6 +525,7 @@ extern void __raise_softirq_irqoff(unsig +@@ -521,6 +525,7 @@ extern void __raise_softirq_irqoff(unsigned int nr); extern void raise_softirq_irqoff(unsigned int nr); extern void raise_softirq(unsigned int nr); @@ -116,7 +121,7 @@ Signed-off-by: Thomas Gleixner DECLARE_PER_CPU(struct task_struct *, ksoftirqd); -@@ -638,6 +643,12 @@ extern void tasklet_kill_immediate(struc +@@ -638,6 +643,12 @@ extern void tasklet_kill_immediate(struct 
tasklet_struct *t, unsigned int cpu); extern void tasklet_init(struct tasklet_struct *t, void (*func)(unsigned long), unsigned long data); @@ -129,6 +134,8 @@ Signed-off-by: Thomas Gleixner struct tasklet_hrtimer { struct hrtimer timer; struct tasklet_struct tasklet; +diff --git a/include/linux/preempt.h b/include/linux/preempt.h +index 9984f2b75b73..27c3176d88d2 100644 --- a/include/linux/preempt.h +++ b/include/linux/preempt.h @@ -51,7 +51,11 @@ @@ -169,6 +176,8 @@ Signed-off-by: Thomas Gleixner #define in_nmi() (preempt_count() & NMI_MASK) #define in_task() (!(preempt_count() & \ (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_OFFSET))) +diff --git a/include/linux/sched.h b/include/linux/sched.h +index ad44849fba2e..7ecccccbd358 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1190,6 +1190,8 @@ struct task_struct { @@ -188,9 +197,11 @@ Signed-off-by: Thomas Gleixner #define PF_IDLE 0x00000002 /* I am an IDLE thread */ #define PF_EXITING 0x00000004 /* Getting shut down */ #define PF_EXITPIDONE 0x00000008 /* PI exit done on shut down */ +diff --git a/init/main.c b/init/main.c +index e083fac08aed..1647cb052be5 100644 --- a/init/main.c +++ b/init/main.c -@@ -561,6 +561,7 @@ asmlinkage __visible void __init start_k +@@ -561,6 +561,7 @@ asmlinkage __visible void __init start_kernel(void) setup_command_line(command_line); setup_nr_cpu_ids(); setup_per_cpu_areas(); @@ -198,6 +209,8 @@ Signed-off-by: Thomas Gleixner smp_prepare_boot_cpu(); /* arch-specific boot-cpu hooks */ boot_cpu_hotplug_init(); +diff --git a/kernel/softirq.c b/kernel/softirq.c +index 1d3a482246cc..fd89f8ab85ac 100644 --- a/kernel/softirq.c +++ b/kernel/softirq.c @@ -26,7 +26,9 @@ @@ -210,7 +223,7 @@ Signed-off-by: Thomas Gleixner #define CREATE_TRACE_POINTS #include -@@ -63,6 +65,98 @@ const char * const softirq_to_name[NR_SO +@@ -63,6 +65,98 @@ const char * const softirq_to_name[NR_SOFTIRQS] = { "TASKLET", "SCHED", "HRTIMER", "RCU" }; @@ -337,7 +350,7 @@ Signed-off-by: Thomas Gleixner /* * If 
ksoftirqd is scheduled, we do not want to process pending softirqs * right now. Let ksoftirqd handle this at its own rate, to get fairness, -@@ -93,6 +208,47 @@ static bool ksoftirqd_running(unsigned l +@@ -93,6 +208,47 @@ static bool ksoftirqd_running(unsigned long pending) return tsk && (tsk->state == TASK_RUNNING); } @@ -385,7 +398,7 @@ Signed-off-by: Thomas Gleixner /* * preempt_count and SOFTIRQ_OFFSET usage: * - preempt_count is changed by SOFTIRQ_OFFSET on entering or leaving -@@ -252,10 +408,8 @@ asmlinkage __visible void __softirq_entr +@@ -252,10 +408,8 @@ asmlinkage __visible void __softirq_entry __do_softirq(void) unsigned long end = jiffies + MAX_SOFTIRQ_TIME; unsigned long old_flags = current->flags; int max_restart = MAX_SOFTIRQ_RESTART; @@ -396,7 +409,7 @@ Signed-off-by: Thomas Gleixner /* * Mask out PF_MEMALLOC s current task context is borrowed for the -@@ -274,36 +428,7 @@ asmlinkage __visible void __softirq_entr +@@ -274,36 +428,7 @@ asmlinkage __visible void __softirq_entry __do_softirq(void) /* Reset the pending bitmask before enabling irqs */ set_softirq_pending(0); @@ -434,10 +447,11 @@ Signed-off-by: Thomas Gleixner pending = local_softirq_pending(); if (pending) { -@@ -340,6 +465,248 @@ asmlinkage __visible void do_softirq(voi +@@ -339,6 +464,248 @@ asmlinkage __visible void do_softirq(void) + local_irq_restore(flags); } - /* ++/* + * This function must run with irqs disabled! + */ +void raise_softirq_irqoff(unsigned int nr) @@ -679,10 +693,9 @@ Signed-off-by: Thomas Gleixner +} + +#endif /* PREEMPT_RT_FULL */ -+/* + /* * Enter an interrupt context. */ - void irq_enter(void) @@ -350,9 +717,9 @@ void irq_enter(void) * Prevent raise_softirq from needlessly waking up ksoftirqd * here, as softirq will be serviced on return from interrupt. 
@@ -784,7 +797,7 @@ Signed-off-by: Thomas Gleixner } #ifdef CONFIG_HOTPLUG_CPU -@@ -808,6 +1143,8 @@ static int takeover_tasklets(unsigned in +@@ -808,6 +1143,8 @@ static int takeover_tasklets(unsigned int cpu) static struct smp_hotplug_thread softirq_threads = { .store = &ksoftirqd, @@ -793,9 +806,11 @@ Signed-off-by: Thomas Gleixner .thread_should_run = ksoftirqd_should_run, .thread_fn = run_ksoftirqd, .thread_comm = "ksoftirqd/%u", +diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c +index c217af74dddf..6482945f8ae8 100644 --- a/kernel/time/tick-sched.c +++ b/kernel/time/tick-sched.c -@@ -891,14 +891,7 @@ static bool can_stop_idle_tick(int cpu, +@@ -891,14 +891,7 @@ static bool can_stop_idle_tick(int cpu, struct tick_sched *ts) return false; if (unlikely(local_softirq_pending() && cpu_online(cpu))) { @@ -811,3 +826,6 @@ Signed-off-by: Thomas Gleixner return false; } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0125-net-core-use-local_bh_disable-in-netif_rx_ni.patch b/kernel/patches-4.19.x-rt/0123-net-core-use-local_bh_disable-in-netif_rx_ni.patch similarity index 73% rename from kernel/patches-4.19.x-rt/0125-net-core-use-local_bh_disable-in-netif_rx_ni.patch rename to kernel/patches-4.19.x-rt/0123-net-core-use-local_bh_disable-in-netif_rx_ni.patch index 422a6251d..42f334e5a 100644 --- a/kernel/patches-4.19.x-rt/0125-net-core-use-local_bh_disable-in-netif_rx_ni.patch +++ b/kernel/patches-4.19.x-rt/0123-net-core-use-local_bh_disable-in-netif_rx_ni.patch @@ -1,6 +1,7 @@ +From e4b4f2fba2b81120beca06cd1c49f37ceb8bd9c2 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Fri, 16 Jun 2017 19:03:16 +0200 -Subject: [PATCH] net/core: use local_bh_disable() in netif_rx_ni() +Subject: [PATCH 123/269] net/core: use local_bh_disable() in netif_rx_ni() In 2004 netif_rx_ni() gained a preempt_disable() section around netif_rx() and its do_softirq() + testing for it. The do_softirq() part @@ -13,12 +14,14 @@ required. 
Signed-off-by: Sebastian Andrzej Siewior --- - net/core/dev.c | 6 ++---- + net/core/dev.c | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) +diff --git a/net/core/dev.c b/net/core/dev.c +index 3362d8897058..b8208b940b5d 100644 --- a/net/core/dev.c +++ b/net/core/dev.c -@@ -4512,11 +4512,9 @@ int netif_rx_ni(struct sk_buff *skb) +@@ -4526,11 +4526,9 @@ int netif_rx_ni(struct sk_buff *skb) trace_netif_rx_ni_entry(skb); @@ -32,3 +35,6 @@ Signed-off-by: Sebastian Andrzej Siewior return err; } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0126-irq-allow-disabling-of-softirq-processing-in-irq-thread-context.patch b/kernel/patches-4.19.x-rt/0124-genirq-Allow-disabling-of-softirq-processing-in-irq-.patch similarity index 78% rename from kernel/patches-4.19.x-rt/0126-irq-allow-disabling-of-softirq-processing-in-irq-thread-context.patch rename to kernel/patches-4.19.x-rt/0124-genirq-Allow-disabling-of-softirq-processing-in-irq-.patch index 9d490d7cb..8e2f91255 100644 --- a/kernel/patches-4.19.x-rt/0126-irq-allow-disabling-of-softirq-processing-in-irq-thread-context.patch +++ b/kernel/patches-4.19.x-rt/0124-genirq-Allow-disabling-of-softirq-processing-in-irq-.patch @@ -1,6 +1,8 @@ -Subject: genirq: Allow disabling of softirq processing in irq thread context +From 68c9fb7ded900fff5f4e0a41978b36eb36292c66 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Tue, 31 Jan 2012 13:01:27 +0100 +Subject: [PATCH 124/269] genirq: Allow disabling of softirq processing in irq + thread context The processing of softirqs in irq thread context is a performance gain for the non-rt workloads of a system, but it's counterproductive for @@ -9,15 +11,16 @@ workload. Allow such interrupts to prevent softirq processing in their thread context. 
Signed-off-by: Thomas Gleixner - --- - include/linux/interrupt.h | 2 ++ - include/linux/irq.h | 4 +++- - kernel/irq/manage.c | 13 ++++++++++++- - kernel/irq/settings.h | 12 ++++++++++++ - kernel/softirq.c | 9 +++++++++ + include/linux/interrupt.h | 2 ++ + include/linux/irq.h | 4 +++- + kernel/irq/manage.c | 13 ++++++++++++- + kernel/irq/settings.h | 12 ++++++++++++ + kernel/softirq.c | 9 +++++++++ 5 files changed, 38 insertions(+), 2 deletions(-) +diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h +index 6c25b962ba89..99f8b7ace7c9 100644 --- a/include/linux/interrupt.h +++ b/include/linux/interrupt.h @@ -62,6 +62,7 @@ @@ -36,6 +39,8 @@ Signed-off-by: Thomas Gleixner #define IRQF_TIMER (__IRQF_TIMER | IRQF_NO_SUSPEND | IRQF_NO_THREAD) +diff --git a/include/linux/irq.h b/include/linux/irq.h +index c9bffda04a45..73d3146db74d 100644 --- a/include/linux/irq.h +++ b/include/linux/irq.h @@ -69,6 +69,7 @@ enum irqchip_irq_state; @@ -62,9 +67,11 @@ Signed-off-by: Thomas Gleixner #define IRQ_NO_BALANCING_MASK (IRQ_PER_CPU | IRQ_NO_BALANCING) +diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c +index d2270f61d335..ba5bba5f1ffd 100644 --- a/kernel/irq/manage.c +++ b/kernel/irq/manage.c -@@ -970,7 +970,15 @@ irq_forced_thread_fn(struct irq_desc *de +@@ -973,7 +973,15 @@ irq_forced_thread_fn(struct irq_desc *desc, struct irqaction *action) atomic_inc(&desc->threads_handled); irq_finalize_oneshot(desc, action); @@ -81,7 +88,7 @@ Signed-off-by: Thomas Gleixner return ret; } -@@ -1480,6 +1488,9 @@ static int +@@ -1483,6 +1491,9 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new) irqd_set(&desc->irq_data, IRQD_NO_BALANCING); } @@ -91,6 +98,8 @@ Signed-off-by: Thomas Gleixner if (irq_settings_can_autoenable(desc)) { irq_startup(desc, IRQ_RESEND, IRQ_START_COND); } else { +diff --git a/kernel/irq/settings.h b/kernel/irq/settings.h +index e43795cd2ccf..47e2f9e23586 100644 --- a/kernel/irq/settings.h +++ b/kernel/irq/settings.h @@ -17,6 
+17,7 @@ enum { @@ -109,7 +118,7 @@ Signed-off-by: Thomas Gleixner #undef IRQF_MODIFY_MASK #define IRQF_MODIFY_MASK GOT_YOU_MORON -@@ -41,6 +43,16 @@ irq_settings_clr_and_set(struct irq_desc +@@ -41,6 +43,16 @@ irq_settings_clr_and_set(struct irq_desc *desc, u32 clr, u32 set) desc->status_use_accessors |= (set & _IRQF_MODIFY_MASK); } @@ -126,6 +135,8 @@ Signed-off-by: Thomas Gleixner static inline bool irq_settings_is_per_cpu(struct irq_desc *desc) { return desc->status_use_accessors & _IRQ_PER_CPU; +diff --git a/kernel/softirq.c b/kernel/softirq.c +index fd89f8ab85ac..3e9333d148ad 100644 --- a/kernel/softirq.c +++ b/kernel/softirq.c @@ -598,6 +598,15 @@ void __local_bh_enable(void) @@ -144,3 +155,6 @@ Signed-off-by: Thomas Gleixner int in_serving_softirq(void) { return current->flags & PF_IN_SOFTIRQ; +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0127-softirq-split-timer-softirqs-out-of-ksoftirqd.patch b/kernel/patches-4.19.x-rt/0125-softirq-split-timer-softirqs-out-of-ksoftirqd.patch similarity index 90% rename from kernel/patches-4.19.x-rt/0127-softirq-split-timer-softirqs-out-of-ksoftirqd.patch rename to kernel/patches-4.19.x-rt/0125-softirq-split-timer-softirqs-out-of-ksoftirqd.patch index 9ffce3fc6..4e6061e88 100644 --- a/kernel/patches-4.19.x-rt/0127-softirq-split-timer-softirqs-out-of-ksoftirqd.patch +++ b/kernel/patches-4.19.x-rt/0125-softirq-split-timer-softirqs-out-of-ksoftirqd.patch @@ -1,6 +1,7 @@ +From 5b5c9a38190fcf09aad69449f6552598a2502bf8 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Wed, 20 Jan 2016 16:34:17 +0100 -Subject: softirq: split timer softirqs out of ksoftirqd +Subject: [PATCH 125/269] softirq: split timer softirqs out of ksoftirqd The softirqd runs in -RT with SCHED_FIFO (prio 1) and deals mostly with timer wakeup which can not happen in hardirq context. The prio has been @@ -22,9 +23,11 @@ SCHED_OTHER priority and it won't defer RCU anymore. 
Cc: stable-rt@vger.kernel.org Signed-off-by: Sebastian Andrzej Siewior --- - kernel/softirq.c | 85 +++++++++++++++++++++++++++++++++++++++++++++++-------- + kernel/softirq.c | 85 +++++++++++++++++++++++++++++++++++++++++------- 1 file changed, 73 insertions(+), 12 deletions(-) +diff --git a/kernel/softirq.c b/kernel/softirq.c +index 3e9333d148ad..fe4e59c80a08 100644 --- a/kernel/softirq.c +++ b/kernel/softirq.c @@ -59,6 +59,10 @@ EXPORT_PER_CPU_SYMBOL(irq_stat); @@ -56,7 +59,7 @@ Signed-off-by: Sebastian Andrzej Siewior static void handle_softirq(unsigned int vec_nr) { struct softirq_action *h = softirq_vec + vec_nr; -@@ -493,7 +508,6 @@ void __raise_softirq_irqoff(unsigned int +@@ -493,7 +508,6 @@ void __raise_softirq_irqoff(unsigned int nr) static inline void local_bh_disable_nort(void) { local_bh_disable(); } static inline void _local_bh_enable_nort(void) { _local_bh_enable(); } static void ksoftirqd_set_sched_params(unsigned int cpu) { } @@ -78,7 +81,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * If we are not in a hard interrupt and inside a bh disabled -@@ -650,16 +668,29 @@ static void do_raise_softirq_irqoff(unsi +@@ -650,16 +668,29 @@ static void do_raise_softirq_irqoff(unsigned int nr) * delegate it to ksoftirqd. */ if (!in_irq() && current->softirq_nestcnt) @@ -112,7 +115,7 @@ Signed-off-by: Sebastian Andrzej Siewior } /* -@@ -685,7 +716,7 @@ void raise_softirq_irqoff(unsigned int n +@@ -685,7 +716,7 @@ void raise_softirq_irqoff(unsigned int nr) * raise a WARN() if the condition is met. 
*/ if (!current->softirq_nestcnt) @@ -121,10 +124,11 @@ Signed-off-by: Sebastian Andrzej Siewior } static inline int ksoftirqd_softirq_pending(void) -@@ -698,22 +729,37 @@ static inline void _local_bh_enable_nort +@@ -697,23 +728,38 @@ static inline void local_bh_disable_nort(void) { } + static inline void _local_bh_enable_nort(void) { } static inline void ksoftirqd_set_sched_params(unsigned int cpu) - { ++{ + /* Take over all but timer pending softirqs when starting */ + local_irq_disable(); + current->softirqs_raised = local_softirq_pending() & ~TIMER_SOFTIRQS; @@ -132,7 +136,7 @@ Signed-off-by: Sebastian Andrzej Siewior +} + +static inline void ktimer_softirqd_set_sched_params(unsigned int cpu) -+{ + { struct sched_param param = { .sched_priority = 1 }; sched_setscheduler(current, SCHED_FIFO, ¶m); @@ -172,7 +176,7 @@ Signed-off-by: Sebastian Andrzej Siewior local_irq_restore(flags); #endif } -@@ -1153,18 +1202,30 @@ static int takeover_tasklets(unsigned in +@@ -1153,18 +1202,30 @@ static int takeover_tasklets(unsigned int cpu) static struct smp_hotplug_thread softirq_threads = { .store = &ksoftirqd, .setup = ksoftirqd_set_sched_params, @@ -205,3 +209,6 @@ Signed-off-by: Sebastian Andrzej Siewior return 0; } early_initcall(spawn_ksoftirqd); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0128-softirq-Avoid-local_softirq_pending-messages-if-ksof.patch b/kernel/patches-4.19.x-rt/0126-softirq-Avoid-local_softirq_pending-messages-if-ksof.patch similarity index 85% rename from kernel/patches-4.19.x-rt/0128-softirq-Avoid-local_softirq_pending-messages-if-ksof.patch rename to kernel/patches-4.19.x-rt/0126-softirq-Avoid-local_softirq_pending-messages-if-ksof.patch index 81bf75165..afb33b2d9 100644 --- a/kernel/patches-4.19.x-rt/0128-softirq-Avoid-local_softirq_pending-messages-if-ksof.patch +++ b/kernel/patches-4.19.x-rt/0126-softirq-Avoid-local_softirq_pending-messages-if-ksof.patch @@ -1,6 +1,7 @@ +From f76ac7c02f06f8b40b041c7b9ff9bc13c55bb353 Mon Sep 17 00:00:00 
2001 From: Sebastian Andrzej Siewior Date: Mon, 18 Feb 2019 13:19:59 +0100 -Subject: [PATCH] softirq: Avoid "local_softirq_pending" messages if +Subject: [PATCH 126/269] softirq: Avoid "local_softirq_pending" messages if ksoftirqd is blocked If the ksoftirqd thread has a softirq pending and is blocked on the @@ -18,12 +19,14 @@ Cc: stable-rt@vger.kernel.org Tested-by: Juri Lelli Signed-off-by: Sebastian Andrzej Siewior --- - kernel/softirq.c | 57 +++++++++++++++++++++++++++++++++++++++---------------- + kernel/softirq.c | 57 ++++++++++++++++++++++++++++++++++-------------- 1 file changed, 41 insertions(+), 16 deletions(-) +diff --git a/kernel/softirq.c b/kernel/softirq.c +index fe4e59c80a08..1920985eeb09 100644 --- a/kernel/softirq.c +++ b/kernel/softirq.c -@@ -92,6 +92,31 @@ static inline void softirq_clr_runner(un +@@ -92,6 +92,31 @@ static inline void softirq_clr_runner(unsigned int sirq) sr->runner[sirq] = NULL; } @@ -55,7 +58,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * On preempt-rt a softirq running context might be blocked on a * lock. 
There might be no other runnable task on this CPU because the -@@ -104,6 +129,7 @@ static inline void softirq_clr_runner(un +@@ -104,6 +129,7 @@ static inline void softirq_clr_runner(unsigned int sirq) */ void softirq_check_pending_idle(void) { @@ -72,10 +75,6 @@ Signed-off-by: Sebastian Andrzej Siewior for (i = 0; i < NR_SOFTIRQS; i++) { - struct task_struct *tsk = sr->runner[i]; + tsk = sr->runner[i]; -+ -+ if (softirq_check_runner_tsk(tsk, &warnpending)) -+ warnpending &= ~(1 << i); -+ } - /* - * The wakeup code in rtmutex.c wakes up the task @@ -92,6 +91,10 @@ Signed-off-by: Sebastian Andrzej Siewior - } - raw_spin_unlock(&tsk->pi_lock); - } ++ if (softirq_check_runner_tsk(tsk, &warnpending)) ++ warnpending &= ~(1 << i); ++ } ++ + if (warnpending) { + tsk = __this_cpu_read(ksoftirqd); + softirq_check_runner_tsk(tsk, &warnpending); @@ -103,3 +106,6 @@ Signed-off-by: Sebastian Andrzej Siewior } if (warnpending) { +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0129-softirq-Avoid-local_softirq_pending-messages-if-task.patch b/kernel/patches-4.19.x-rt/0127-softirq-Avoid-local_softirq_pending-messages-if-task.patch similarity index 75% rename from kernel/patches-4.19.x-rt/0129-softirq-Avoid-local_softirq_pending-messages-if-task.patch rename to kernel/patches-4.19.x-rt/0127-softirq-Avoid-local_softirq_pending-messages-if-task.patch index 2dc189782..310285f8c 100644 --- a/kernel/patches-4.19.x-rt/0129-softirq-Avoid-local_softirq_pending-messages-if-task.patch +++ b/kernel/patches-4.19.x-rt/0127-softirq-Avoid-local_softirq_pending-messages-if-task.patch @@ -1,7 +1,8 @@ +From 35b95587b8a912221d7eb0bdbb7aefb126c7db5d Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Tue, 19 Feb 2019 16:49:29 +0100 -Subject: [PATCH] softirq: Avoid "local_softirq_pending" messages if task - is in cpu_chill() +Subject: [PATCH 127/269] softirq: Avoid "local_softirq_pending" messages if + task is in cpu_chill() If the softirq thread enters cpu_chill() then ->state is 
UNINTERRUPTIBLE and has no ->pi_blocked_on set and so its mask is not taken into account. @@ -13,12 +14,14 @@ is held. Use the same mechanism for the softirq-pending check. Cc: stable-rt@vger.kernel.org Signed-off-by: Sebastian Andrzej Siewior --- - kernel/softirq.c | 5 ++++- + kernel/softirq.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) +diff --git a/kernel/softirq.c b/kernel/softirq.c +index 1920985eeb09..27a4bb2303d0 100644 --- a/kernel/softirq.c +++ b/kernel/softirq.c -@@ -105,9 +105,12 @@ static bool softirq_check_runner_tsk(str +@@ -105,9 +105,12 @@ static bool softirq_check_runner_tsk(struct task_struct *tsk, * _before_ it sets pi_blocked_on to NULL under * tsk->pi_lock. So we need to check for both: state * and pi_blocked_on. @@ -32,3 +35,6 @@ Signed-off-by: Sebastian Andrzej Siewior /* Clear all bits pending in that task */ *pending &= ~(tsk->softirqs_raised); ret = true; +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0130-rtmutex-trylock-is-okay-on-RT.patch b/kernel/patches-4.19.x-rt/0128-rtmutex-trylock-is-okay-on-RT.patch similarity index 61% rename from kernel/patches-4.19.x-rt/0130-rtmutex-trylock-is-okay-on-RT.patch rename to kernel/patches-4.19.x-rt/0128-rtmutex-trylock-is-okay-on-RT.patch index 91c8a4633..6128ba19f 100644 --- a/kernel/patches-4.19.x-rt/0130-rtmutex-trylock-is-okay-on-RT.patch +++ b/kernel/patches-4.19.x-rt/0128-rtmutex-trylock-is-okay-on-RT.patch @@ -1,6 +1,7 @@ +From 86d0b19c922c5c25ec598f869e859be148c058e2 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior -Date: Wed 02 Dec 2015 11:34:07 +0100 -Subject: rtmutex: trylock is okay on -RT +Date: Wed, 2 Dec 2015 11:34:07 +0100 +Subject: [PATCH 128/269] rtmutex: trylock is okay on -RT non-RT kernel could deadlock on rt_mutex_trylock() in softirq context. On -RT we don't run softirqs in IRQ context but in thread context so it is @@ -8,12 +9,14 @@ not a issue here. 
Signed-off-by: Sebastian Andrzej Siewior --- - kernel/locking/rtmutex.c | 4 ++++ + kernel/locking/rtmutex.c | 4 ++++ 1 file changed, 4 insertions(+) +diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c +index 9562aaa2afdc..72abe7c121fa 100644 --- a/kernel/locking/rtmutex.c +++ b/kernel/locking/rtmutex.c -@@ -1583,7 +1583,11 @@ int __sched rt_mutex_trylock(struct rt_m +@@ -1583,7 +1583,11 @@ int __sched rt_mutex_trylock(struct rt_mutex *lock) { int ret; @@ -25,3 +28,6 @@ Signed-off-by: Sebastian Andrzej Siewior return 0; ret = rt_mutex_fasttrylock(lock, rt_mutex_slowtrylock); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0131-fs-nfs-turn-rmdir_sem-into-a-semaphore.patch b/kernel/patches-4.19.x-rt/0129-fs-nfs-turn-rmdir_sem-into-a-semaphore.patch similarity index 75% rename from kernel/patches-4.19.x-rt/0131-fs-nfs-turn-rmdir_sem-into-a-semaphore.patch rename to kernel/patches-4.19.x-rt/0129-fs-nfs-turn-rmdir_sem-into-a-semaphore.patch index 4ac463250..41a023ec1 100644 --- a/kernel/patches-4.19.x-rt/0131-fs-nfs-turn-rmdir_sem-into-a-semaphore.patch +++ b/kernel/patches-4.19.x-rt/0129-fs-nfs-turn-rmdir_sem-into-a-semaphore.patch @@ -1,6 +1,7 @@ +From 8e56a215d6f5df86b3cfcf2386facd511db3d0ed Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Thu, 15 Sep 2016 10:51:27 +0200 -Subject: [PATCH] fs/nfs: turn rmdir_sem into a semaphore +Subject: [PATCH 129/269] fs/nfs: turn rmdir_sem into a semaphore The RW semaphore had a reader side which used the _non_owner version because it most likely took the reader lock in one thread and released it @@ -13,15 +14,17 @@ multiple readers anyway so that is not a loss. 
Signed-off-by: Sebastian Andrzej Siewior --- - fs/nfs/dir.c | 8 ++++++++ - fs/nfs/inode.c | 4 ++++ - fs/nfs/unlink.c | 31 +++++++++++++++++++++++++++---- - include/linux/nfs_fs.h | 4 ++++ + fs/nfs/dir.c | 8 ++++++++ + fs/nfs/inode.c | 4 ++++ + fs/nfs/unlink.c | 31 +++++++++++++++++++++++++++---- + include/linux/nfs_fs.h | 4 ++++ 4 files changed, 43 insertions(+), 4 deletions(-) +diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c +index 8bfaa658b2c1..62afe8ca1e36 100644 --- a/fs/nfs/dir.c +++ b/fs/nfs/dir.c -@@ -1786,7 +1786,11 @@ int nfs_rmdir(struct inode *dir, struct +@@ -1786,7 +1786,11 @@ int nfs_rmdir(struct inode *dir, struct dentry *dentry) trace_nfs_rmdir_enter(dir, dentry); if (d_really_is_positive(dentry)) { @@ -33,7 +36,7 @@ Signed-off-by: Sebastian Andrzej Siewior error = NFS_PROTO(dir)->rmdir(dir, &dentry->d_name); /* Ensure the VFS deletes this inode */ switch (error) { -@@ -1796,7 +1800,11 @@ int nfs_rmdir(struct inode *dir, struct +@@ -1796,7 +1800,11 @@ int nfs_rmdir(struct inode *dir, struct dentry *dentry) case -ENOENT: nfs_dentry_handle_enoent(dentry); } @@ -45,6 +48,8 @@ Signed-off-by: Sebastian Andrzej Siewior } else error = NFS_PROTO(dir)->rmdir(dir, &dentry->d_name); trace_nfs_rmdir_exit(dir, dentry, error); +diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c +index b65aee481d13..110ee6f78c31 100644 --- a/fs/nfs/inode.c +++ b/fs/nfs/inode.c @@ -2103,7 +2103,11 @@ static void init_once(void *foo) @@ -59,9 +64,11 @@ Signed-off-by: Sebastian Andrzej Siewior mutex_init(&nfsi->commit_mutex); nfs4_init_once(nfsi); } +diff --git a/fs/nfs/unlink.c b/fs/nfs/unlink.c +index fd61bf0fce63..ce9100b5604d 100644 --- a/fs/nfs/unlink.c +++ b/fs/nfs/unlink.c -@@ -52,6 +52,29 @@ static void nfs_async_unlink_done(struct +@@ -52,6 +52,29 @@ static void nfs_async_unlink_done(struct rpc_task *task, void *calldata) rpc_restart_call_prepare(task); } @@ -91,7 +98,7 @@ Signed-off-by: Sebastian Andrzej Siewior /** * nfs_async_unlink_release - Release the sillydelete data. 
* @task: rpc_task of the sillydelete -@@ -65,7 +88,7 @@ static void nfs_async_unlink_release(voi +@@ -65,7 +88,7 @@ static void nfs_async_unlink_release(void *calldata) struct dentry *dentry = data->dentry; struct super_block *sb = dentry->d_sb; @@ -100,7 +107,7 @@ Signed-off-by: Sebastian Andrzej Siewior d_lookup_done(dentry); nfs_free_unlinkdata(data); dput(dentry); -@@ -118,10 +141,10 @@ static int nfs_call_unlink(struct dentry +@@ -118,10 +141,10 @@ static int nfs_call_unlink(struct dentry *dentry, struct inode *inode, struct nf struct inode *dir = d_inode(dentry->d_parent); struct dentry *alias; @@ -113,7 +120,7 @@ Signed-off-by: Sebastian Andrzej Siewior return 0; } if (!d_in_lookup(alias)) { -@@ -143,7 +166,7 @@ static int nfs_call_unlink(struct dentry +@@ -143,7 +166,7 @@ static int nfs_call_unlink(struct dentry *dentry, struct inode *inode, struct nf ret = 0; spin_unlock(&alias->d_lock); dput(alias); @@ -122,6 +129,8 @@ Signed-off-by: Sebastian Andrzej Siewior /* * If we'd displaced old cached devname, free it. 
At that * point dentry is definitely not a root, so we won't need +diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h +index a0831e9d19c9..94b6fefd90b0 100644 --- a/include/linux/nfs_fs.h +++ b/include/linux/nfs_fs.h @@ -163,7 +163,11 @@ struct nfs_inode { @@ -136,3 +145,6 @@ Signed-off-by: Sebastian Andrzej Siewior struct mutex commit_mutex; #if IS_ENABLED(CONFIG_NFS_V4) +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0132-rtmutex-futex-prepare-rt.patch b/kernel/patches-4.19.x-rt/0130-rtmutex-Handle-the-various-new-futex-race-conditions.patch similarity index 80% rename from kernel/patches-4.19.x-rt/0132-rtmutex-futex-prepare-rt.patch rename to kernel/patches-4.19.x-rt/0130-rtmutex-Handle-the-various-new-futex-race-conditions.patch index 7a68754de..7138c83b7 100644 --- a/kernel/patches-4.19.x-rt/0132-rtmutex-futex-prepare-rt.patch +++ b/kernel/patches-4.19.x-rt/0130-rtmutex-Handle-the-various-new-futex-race-conditions.patch @@ -1,6 +1,7 @@ -Subject: rtmutex: Handle the various new futex race conditions +From 915b60215e529acc7c55ded1a85af2ad92a5c9c3 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Fri, 10 Jun 2011 11:04:15 +0200 +Subject: [PATCH 130/269] rtmutex: Handle the various new futex race conditions RT opens a few new interesting race conditions in the rtmutex/futex combo due to futex hash bucket lock being a 'sleeping' spinlock and @@ -8,14 +9,16 @@ therefor not disabling preemption. 
Signed-off-by: Thomas Gleixner --- - kernel/futex.c | 77 ++++++++++++++++++++++++++++++++-------- - kernel/locking/rtmutex.c | 36 +++++++++++++++--- - kernel/locking/rtmutex_common.h | 2 + + kernel/futex.c | 77 ++++++++++++++++++++++++++------- + kernel/locking/rtmutex.c | 36 ++++++++++++--- + kernel/locking/rtmutex_common.h | 2 + 3 files changed, 94 insertions(+), 21 deletions(-) +diff --git a/kernel/futex.c b/kernel/futex.c +index fadd9bff6e3c..be06626b29d2 100644 --- a/kernel/futex.c +++ b/kernel/futex.c -@@ -2143,6 +2143,16 @@ static int futex_requeue(u32 __user *uad +@@ -2146,6 +2146,16 @@ static int futex_requeue(u32 __user *uaddr1, unsigned int flags, requeue_pi_wake_futex(this, &key2, hb2); drop_count++; continue; @@ -32,7 +35,7 @@ Signed-off-by: Thomas Gleixner } else if (ret) { /* * rt_mutex_start_proxy_lock() detected a -@@ -3191,7 +3201,7 @@ static int futex_wait_requeue_pi(u32 __u +@@ -3194,7 +3204,7 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags, struct hrtimer_sleeper timeout, *to = NULL; struct futex_pi_state *pi_state = NULL; struct rt_mutex_waiter rt_waiter; @@ -41,7 +44,7 @@ Signed-off-by: Thomas Gleixner union futex_key key2 = FUTEX_KEY_INIT; struct futex_q q = futex_q_init; int res, ret; -@@ -3249,20 +3259,55 @@ static int futex_wait_requeue_pi(u32 __u +@@ -3252,20 +3262,55 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags, /* Queue the futex_q, drop the hb lock, wait for wakeup. */ futex_wait_queue_me(hb, &q, to); @@ -108,7 +111,7 @@ Signed-off-by: Thomas Gleixner /* Check if the requeue code acquired the second futex for us. */ if (!q.rt_waiter) { -@@ -3271,7 +3316,8 @@ static int futex_wait_requeue_pi(u32 __u +@@ -3274,7 +3319,8 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags, * did a lock-steal - fix up the PI-state in that case. 
*/ if (q.pi_state && (q.pi_state->owner != current)) { @@ -118,7 +121,7 @@ Signed-off-by: Thomas Gleixner ret = fixup_pi_state_owner(uaddr2, &q, current); if (ret && rt_mutex_owner(&q.pi_state->pi_mutex) == current) { pi_state = q.pi_state; -@@ -3282,7 +3328,7 @@ static int futex_wait_requeue_pi(u32 __u +@@ -3285,7 +3331,7 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags, * the requeue_pi() code acquired for us. */ put_pi_state(q.pi_state); @@ -127,7 +130,7 @@ Signed-off-by: Thomas Gleixner } } else { struct rt_mutex *pi_mutex; -@@ -3296,7 +3342,8 @@ static int futex_wait_requeue_pi(u32 __u +@@ -3299,7 +3345,8 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags, pi_mutex = &q.pi_state->pi_mutex; ret = rt_mutex_wait_proxy_lock(pi_mutex, to, &rt_waiter); @@ -137,9 +140,11 @@ Signed-off-by: Thomas Gleixner if (ret && !rt_mutex_cleanup_proxy_lock(pi_mutex, &rt_waiter)) ret = 0; +diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c +index 72abe7c121fa..71d161c93b98 100644 --- a/kernel/locking/rtmutex.c +++ b/kernel/locking/rtmutex.c -@@ -135,6 +135,11 @@ static void fixup_rt_mutex_waiters(struc +@@ -135,6 +135,11 @@ static void fixup_rt_mutex_waiters(struct rt_mutex *lock) WRITE_ONCE(*p, owner & ~RT_MUTEX_HAS_WAITERS); } @@ -161,7 +166,7 @@ Signed-off-by: Thomas Gleixner } /* -@@ -515,7 +521,7 @@ static int rt_mutex_adjust_prio_chain(st +@@ -515,7 +521,7 @@ static int rt_mutex_adjust_prio_chain(struct task_struct *task, * reached or the state of the chain has changed while we * dropped the locks. 
*/ @@ -170,7 +175,7 @@ Signed-off-by: Thomas Gleixner goto out_unlock_pi; /* -@@ -951,6 +957,22 @@ static int task_blocks_on_rt_mutex(struc +@@ -951,6 +957,22 @@ static int task_blocks_on_rt_mutex(struct rt_mutex *lock, return -EDEADLK; raw_spin_lock(&task->pi_lock); @@ -193,7 +198,7 @@ Signed-off-by: Thomas Gleixner waiter->task = task; waiter->lock = lock; waiter->prio = task->prio; -@@ -974,7 +996,7 @@ static int task_blocks_on_rt_mutex(struc +@@ -974,7 +996,7 @@ static int task_blocks_on_rt_mutex(struct rt_mutex *lock, rt_mutex_enqueue_pi(owner, waiter); rt_mutex_adjust_prio(owner); @@ -202,7 +207,7 @@ Signed-off-by: Thomas Gleixner chain_walk = 1; } else if (rt_mutex_cond_detect_deadlock(waiter, chwalk)) { chain_walk = 1; -@@ -1070,7 +1092,7 @@ static void remove_waiter(struct rt_mute +@@ -1070,7 +1092,7 @@ static void remove_waiter(struct rt_mutex *lock, { bool is_top_waiter = (waiter == rt_mutex_top_waiter(lock)); struct task_struct *owner = rt_mutex_owner(lock); @@ -211,7 +216,7 @@ Signed-off-by: Thomas Gleixner lockdep_assert_held(&lock->wait_lock); -@@ -1096,7 +1118,8 @@ static void remove_waiter(struct rt_mute +@@ -1096,7 +1118,8 @@ static void remove_waiter(struct rt_mutex *lock, rt_mutex_adjust_prio(owner); /* Store the lock on which owner is blocked or NULL */ @@ -221,7 +226,7 @@ Signed-off-by: Thomas Gleixner raw_spin_unlock(&owner->pi_lock); -@@ -1132,7 +1155,8 @@ void rt_mutex_adjust_pi(struct task_stru +@@ -1132,7 +1155,8 @@ void rt_mutex_adjust_pi(struct task_struct *task) raw_spin_lock_irqsave(&task->pi_lock, flags); waiter = task->pi_blocked_on; @@ -231,6 +236,8 @@ Signed-off-by: Thomas Gleixner raw_spin_unlock_irqrestore(&task->pi_lock, flags); return; } +diff --git a/kernel/locking/rtmutex_common.h b/kernel/locking/rtmutex_common.h +index d1d62f942be2..f4b6596d224a 100644 --- a/kernel/locking/rtmutex_common.h +++ b/kernel/locking/rtmutex_common.h @@ -130,6 +130,8 @@ enum rtmutex_chainwalk { @@ -242,3 +249,6 @@ Signed-off-by: Thomas Gleixner 
extern struct task_struct *rt_mutex_next_owner(struct rt_mutex *lock); extern void rt_mutex_init_proxy_locked(struct rt_mutex *lock, struct task_struct *proxy_owner); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0133-futex-requeue-pi-fix.patch b/kernel/patches-4.19.x-rt/0131-futex-Fix-bug-on-when-a-requeued-RT-task-times-out.patch similarity index 83% rename from kernel/patches-4.19.x-rt/0133-futex-requeue-pi-fix.patch rename to kernel/patches-4.19.x-rt/0131-futex-Fix-bug-on-when-a-requeued-RT-task-times-out.patch index 7522ec9ce..1ce398ac7 100644 --- a/kernel/patches-4.19.x-rt/0133-futex-requeue-pi-fix.patch +++ b/kernel/patches-4.19.x-rt/0131-futex-Fix-bug-on-when-a-requeued-RT-task-times-out.patch @@ -1,6 +1,7 @@ +From c1664acee8627620a0406cc55b13d81c710f2bac Mon Sep 17 00:00:00 2001 From: Steven Rostedt Date: Tue, 14 Jul 2015 14:26:34 +0200 -Subject: futex: Fix bug on when a requeued RT task times out +Subject: [PATCH 131/269] futex: Fix bug on when a requeued RT task times out Requeue with timeout causes a bug with PREEMPT_RT_FULL. @@ -16,7 +17,7 @@ The bug comes from a timed out condition. double_lock_hb(); raw_spin_lock(pi_lock); - if (current->pi_blocked_on) { + if (current->pi_blocked_on) { } else { current->pi_blocked_on = PI_WAKE_INPROGRESS; run_spin_unlock(pi_lock); @@ -49,13 +50,15 @@ appropriately. 
Signed-off-by: Steven Rostedt Signed-off-by: Thomas Gleixner --- - kernel/locking/rtmutex.c | 31 ++++++++++++++++++++++++++++++- - kernel/locking/rtmutex_common.h | 1 + + kernel/locking/rtmutex.c | 31 ++++++++++++++++++++++++++++++- + kernel/locking/rtmutex_common.h | 1 + 2 files changed, 31 insertions(+), 1 deletion(-) +diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c +index 71d161c93b98..1c3f56d3d9b6 100644 --- a/kernel/locking/rtmutex.c +++ b/kernel/locking/rtmutex.c -@@ -137,7 +137,8 @@ static void fixup_rt_mutex_waiters(struc +@@ -137,7 +137,8 @@ static void fixup_rt_mutex_waiters(struct rt_mutex *lock) static int rt_mutex_real_waiter(struct rt_mutex_waiter *waiter) { @@ -65,7 +68,7 @@ Signed-off-by: Thomas Gleixner } /* -@@ -1784,6 +1785,34 @@ int __rt_mutex_start_proxy_lock(struct r +@@ -1784,6 +1785,34 @@ int __rt_mutex_start_proxy_lock(struct rt_mutex *lock, if (try_to_take_rt_mutex(lock, task, NULL)) return 1; @@ -100,6 +103,8 @@ Signed-off-by: Thomas Gleixner /* We enforce deadlock detection for futexes */ ret = task_blocks_on_rt_mutex(lock, waiter, task, RT_MUTEX_FULL_CHAINWALK); +diff --git a/kernel/locking/rtmutex_common.h b/kernel/locking/rtmutex_common.h +index f4b6596d224a..461527f3f7af 100644 --- a/kernel/locking/rtmutex_common.h +++ b/kernel/locking/rtmutex_common.h @@ -131,6 +131,7 @@ enum rtmutex_chainwalk { @@ -110,3 +115,6 @@ Signed-off-by: Thomas Gleixner extern struct task_struct *rt_mutex_next_owner(struct rt_mutex *lock); extern void rt_mutex_init_proxy_locked(struct rt_mutex *lock, +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0134-futex-Ensure-lock-unlock-symetry-versus-pi_lock-and-.patch b/kernel/patches-4.19.x-rt/0132-futex-Ensure-lock-unlock-symetry-versus-pi_lock-and-.patch similarity index 78% rename from kernel/patches-4.19.x-rt/0134-futex-Ensure-lock-unlock-symetry-versus-pi_lock-and-.patch rename to kernel/patches-4.19.x-rt/0132-futex-Ensure-lock-unlock-symetry-versus-pi_lock-and-.patch index 
af6fd47d7..9c9520667 100644 --- a/kernel/patches-4.19.x-rt/0134-futex-Ensure-lock-unlock-symetry-versus-pi_lock-and-.patch +++ b/kernel/patches-4.19.x-rt/0132-futex-Ensure-lock-unlock-symetry-versus-pi_lock-and-.patch @@ -1,6 +1,8 @@ +From 03de38c7dbb4653aa5f13353b834b6be244a727d Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Fri, 1 Mar 2013 11:17:42 +0100 -Subject: futex: Ensure lock/unlock symetry versus pi_lock and hash bucket lock +Subject: [PATCH 132/269] futex: Ensure lock/unlock symetry versus pi_lock and + hash bucket lock In exit_pi_state_list() we have the following locking construct: @@ -25,12 +27,14 @@ Reported-by: Yong Zhang Signed-off-by: Thomas Gleixner Signed-off-by: Sebastian Andrzej Siewior --- - kernel/futex.c | 2 ++ + kernel/futex.c | 2 ++ 1 file changed, 2 insertions(+) +diff --git a/kernel/futex.c b/kernel/futex.c +index be06626b29d2..eeb3e16fb9ec 100644 --- a/kernel/futex.c +++ b/kernel/futex.c -@@ -918,7 +918,9 @@ void exit_pi_state_list(struct task_stru +@@ -918,7 +918,9 @@ void exit_pi_state_list(struct task_struct *curr) if (head->next != next) { /* retain curr->pi_lock for the loop invariant */ raw_spin_unlock(&pi_state->pi_mutex.wait_lock); @@ -40,3 +44,6 @@ Signed-off-by: Sebastian Andrzej Siewior put_pi_state(pi_state); continue; } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0135-pid.h-include-atomic.h.patch b/kernel/patches-4.19.x-rt/0133-pid.h-include-atomic.h.patch similarity index 78% rename from kernel/patches-4.19.x-rt/0135-pid.h-include-atomic.h.patch rename to kernel/patches-4.19.x-rt/0133-pid.h-include-atomic.h.patch index 9510164b0..69f4a69ab 100644 --- a/kernel/patches-4.19.x-rt/0135-pid.h-include-atomic.h.patch +++ b/kernel/patches-4.19.x-rt/0133-pid.h-include-atomic.h.patch @@ -1,6 +1,7 @@ +From fab65ac89d2148c60793f1043b3391b8431674d1 Mon Sep 17 00:00:00 2001 From: Grygorii Strashko Date: Tue, 21 Jul 2015 19:43:56 +0300 -Subject: pid.h: include atomic.h +Subject: [PATCH 133/269] pid.h: include atomic.h This 
patch fixes build error: CC kernel/pid_namespace.o @@ -21,9 +22,11 @@ Vanilla gets this via spinlock.h. Signed-off-by: Grygorii Strashko Signed-off-by: Sebastian Andrzej Siewior --- - include/linux/pid.h | 1 + + include/linux/pid.h | 1 + 1 file changed, 1 insertion(+) +diff --git a/include/linux/pid.h b/include/linux/pid.h +index 14a9a39da9c7..a9026a5da196 100644 --- a/include/linux/pid.h +++ b/include/linux/pid.h @@ -3,6 +3,7 @@ @@ -34,3 +37,6 @@ Signed-off-by: Sebastian Andrzej Siewior enum pid_type { +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0136-arm-include-definition-for-cpumask_t.patch b/kernel/patches-4.19.x-rt/0134-arm-include-definition-for-cpumask_t.patch similarity index 65% rename from kernel/patches-4.19.x-rt/0136-arm-include-definition-for-cpumask_t.patch rename to kernel/patches-4.19.x-rt/0134-arm-include-definition-for-cpumask_t.patch index 4bb2672be..c2458b92f 100644 --- a/kernel/patches-4.19.x-rt/0136-arm-include-definition-for-cpumask_t.patch +++ b/kernel/patches-4.19.x-rt/0134-arm-include-definition-for-cpumask_t.patch @@ -1,6 +1,7 @@ +From 3286a3abb2234e5ecf7605154781fbd762b3d726 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Thu, 22 Dec 2016 17:28:33 +0100 -Subject: [PATCH] arm: include definition for cpumask_t +Subject: [PATCH 134/269] arm: include definition for cpumask_t This definition gets pulled in by other files. With the (later) split of RCU and spinlock.h it won't compile anymore. @@ -8,9 +9,11 @@ The split is done in ("rbtree: don't include the rcu header"). 
Signed-off-by: Sebastian Andrzej Siewior --- - arch/arm/include/asm/irq.h | 2 ++ + arch/arm/include/asm/irq.h | 2 ++ 1 file changed, 2 insertions(+) +diff --git a/arch/arm/include/asm/irq.h b/arch/arm/include/asm/irq.h +index 46d41140df27..c421b5b81946 100644 --- a/arch/arm/include/asm/irq.h +++ b/arch/arm/include/asm/irq.h @@ -23,6 +23,8 @@ @@ -21,4 +24,7 @@ Signed-off-by: Sebastian Andrzej Siewior + struct irqaction; struct pt_regs; - extern void migrate_irqs(void); + +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0137-locking-locktorture-Do-NOT-include-rwlock.h-directly.patch b/kernel/patches-4.19.x-rt/0135-locking-locktorture-Do-NOT-include-rwlock.h-directly.patch similarity index 71% rename from kernel/patches-4.19.x-rt/0137-locking-locktorture-Do-NOT-include-rwlock.h-directly.patch rename to kernel/patches-4.19.x-rt/0135-locking-locktorture-Do-NOT-include-rwlock.h-directly.patch index 20855d74f..45bf57a30 100644 --- a/kernel/patches-4.19.x-rt/0137-locking-locktorture-Do-NOT-include-rwlock.h-directly.patch +++ b/kernel/patches-4.19.x-rt/0135-locking-locktorture-Do-NOT-include-rwlock.h-directly.patch @@ -1,6 +1,8 @@ +From 55274d88157f847bb93b54d4b3c0d569995b8443 Mon Sep 17 00:00:00 2001 From: "Wolfgang M. Reimer" Date: Tue, 21 Jul 2015 16:20:07 +0200 -Subject: locking: locktorture: Do NOT include rwlock.h directly +Subject: [PATCH 135/269] locking: locktorture: Do NOT include rwlock.h + directly Including rwlock.h directly will cause kernel builds to fail if CONFIG_PREEMPT_RT_FULL is defined. The correct header file @@ -11,9 +13,11 @@ Cc: stable-rt@vger.kernel.org Signed-off-by: Wolfgang M. 
Reimer Signed-off-by: Sebastian Andrzej Siewior --- - kernel/locking/locktorture.c | 1 - + kernel/locking/locktorture.c | 1 - 1 file changed, 1 deletion(-) +diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c +index 7d0b0ed74404..a81e6ef33a04 100644 --- a/kernel/locking/locktorture.c +++ b/kernel/locking/locktorture.c @@ -29,7 +29,6 @@ @@ -24,3 +28,6 @@ Signed-off-by: Sebastian Andrzej Siewior #include #include #include +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0138-rtmutex-lock-killable.patch b/kernel/patches-4.19.x-rt/0136-rtmutex-Add-rtmutex_lock_killable.patch similarity index 65% rename from kernel/patches-4.19.x-rt/0138-rtmutex-lock-killable.patch rename to kernel/patches-4.19.x-rt/0136-rtmutex-Add-rtmutex_lock_killable.patch index bad59d2de..0bbdc1cd3 100644 --- a/kernel/patches-4.19.x-rt/0138-rtmutex-lock-killable.patch +++ b/kernel/patches-4.19.x-rt/0136-rtmutex-Add-rtmutex_lock_killable.patch @@ -1,19 +1,22 @@ -Subject: rtmutex: Add rtmutex_lock_killable() +From 84d0c68fcaa44acc03d15941d982f4a0157903d0 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner -Date: Thu, 09 Jun 2011 11:43:52 +0200 +Date: Thu, 9 Jun 2011 11:43:52 +0200 +Subject: [PATCH 136/269] rtmutex: Add rtmutex_lock_killable() Add "killable" type to rtmutex. We need this since rtmutex are used as "normal" mutexes which do use this type. 
Signed-off-by: Thomas Gleixner --- - include/linux/rtmutex.h | 1 + - kernel/locking/rtmutex.c | 19 +++++++++++++++++++ + include/linux/rtmutex.h | 1 + + kernel/locking/rtmutex.c | 19 +++++++++++++++++++ 2 files changed, 20 insertions(+) +diff --git a/include/linux/rtmutex.h b/include/linux/rtmutex.h +index 6fd615a0eea9..81ece6a8291a 100644 --- a/include/linux/rtmutex.h +++ b/include/linux/rtmutex.h -@@ -115,6 +115,7 @@ extern void rt_mutex_lock(struct rt_mute +@@ -115,6 +115,7 @@ extern void rt_mutex_lock(struct rt_mutex *lock); #endif extern int rt_mutex_lock_interruptible(struct rt_mutex *lock); @@ -21,12 +24,15 @@ Signed-off-by: Thomas Gleixner extern int rt_mutex_timed_lock(struct rt_mutex *lock, struct hrtimer_sleeper *timeout); +diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c +index 1c3f56d3d9b6..a4b2af7718f8 100644 --- a/kernel/locking/rtmutex.c +++ b/kernel/locking/rtmutex.c -@@ -1563,6 +1563,25 @@ int __sched __rt_mutex_futex_trylock(str +@@ -1562,6 +1562,25 @@ int __sched __rt_mutex_futex_trylock(struct rt_mutex *lock) + return __rt_mutex_slowtrylock(lock); } - /** ++/** + * rt_mutex_lock_killable - lock a rt_mutex killable + * + * @lock: the rt_mutex to be locked @@ -45,7 +51,9 @@ Signed-off-by: Thomas Gleixner +} +EXPORT_SYMBOL_GPL(rt_mutex_lock_killable); + -+/** + /** * rt_mutex_timed_lock - lock a rt_mutex interruptible * the timeout structure is provided - * by the caller +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0139-rtmutex-Make-lock_killable-work.patch b/kernel/patches-4.19.x-rt/0137-rtmutex-Make-lock_killable-work.patch similarity index 73% rename from kernel/patches-4.19.x-rt/0139-rtmutex-Make-lock_killable-work.patch rename to kernel/patches-4.19.x-rt/0137-rtmutex-Make-lock_killable-work.patch index cc50d7ea5..59d804455 100644 --- a/kernel/patches-4.19.x-rt/0139-rtmutex-Make-lock_killable-work.patch +++ b/kernel/patches-4.19.x-rt/0137-rtmutex-Make-lock_killable-work.patch @@ -1,6 +1,7 @@ +From 
05fb36753dd6a8fb6b5af57e77d7f195083d3348 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Sat, 1 Apr 2017 12:50:59 +0200 -Subject: [PATCH] rtmutex: Make lock_killable work +Subject: [PATCH 137/269] rtmutex: Make lock_killable work Locking an rt mutex killable does not work because signal handling is restricted to TASK_INTERRUPTIBLE. @@ -11,12 +12,14 @@ Cc: stable-rt@vger.kernel.org Signed-off-by: Thomas Gleixner Signed-off-by: Sebastian Andrzej Siewior --- - kernel/locking/rtmutex.c | 19 +++++++------------ + kernel/locking/rtmutex.c | 19 +++++++------------ 1 file changed, 7 insertions(+), 12 deletions(-) +diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c +index a4b2af7718f8..f058bb976212 100644 --- a/kernel/locking/rtmutex.c +++ b/kernel/locking/rtmutex.c -@@ -1201,18 +1201,13 @@ static int __sched +@@ -1201,18 +1201,13 @@ __rt_mutex_slowlock(struct rt_mutex *lock, int state, if (try_to_take_rt_mutex(lock, current, waiter)) break; @@ -42,3 +45,6 @@ Signed-off-by: Sebastian Andrzej Siewior } raw_spin_unlock_irq(&lock->wait_lock); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0140-spinlock-types-separate-raw.patch b/kernel/patches-4.19.x-rt/0138-spinlock-Split-the-lock-types-header.patch similarity index 83% rename from kernel/patches-4.19.x-rt/0140-spinlock-types-separate-raw.patch rename to kernel/patches-4.19.x-rt/0138-spinlock-Split-the-lock-types-header.patch index e2291eac0..3aa67f2d2 100644 --- a/kernel/patches-4.19.x-rt/0140-spinlock-types-separate-raw.patch +++ b/kernel/patches-4.19.x-rt/0138-spinlock-Split-the-lock-types-header.patch @@ -1,6 +1,7 @@ -Subject: spinlock: Split the lock types header +From 8eee663cf2becdb10a170336c2cf3fba5fe3be80 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Wed, 29 Jun 2011 19:34:01 +0200 +Subject: [PATCH 138/269] spinlock: Split the lock types header Split raw_spinlock into its own file and the remaining spinlock_t into its own non-RT header. 
The non-RT header will be replaced later by sleeping @@ -8,12 +9,16 @@ spinlocks. Signed-off-by: Thomas Gleixner --- - include/linux/rwlock_types.h | 4 ++ - include/linux/spinlock_types.h | 71 +----------------------------------- - include/linux/spinlock_types_nort.h | 33 ++++++++++++++++ - include/linux/spinlock_types_raw.h | 55 +++++++++++++++++++++++++++ + include/linux/rwlock_types.h | 4 ++ + include/linux/spinlock_types.h | 71 +---------------------------- + include/linux/spinlock_types_nort.h | 33 ++++++++++++++ + include/linux/spinlock_types_raw.h | 55 ++++++++++++++++++++++ 4 files changed, 94 insertions(+), 69 deletions(-) + create mode 100644 include/linux/spinlock_types_nort.h + create mode 100644 include/linux/spinlock_types_raw.h +diff --git a/include/linux/rwlock_types.h b/include/linux/rwlock_types.h +index 857a72ceb794..c21683f3e14a 100644 --- a/include/linux/rwlock_types.h +++ b/include/linux/rwlock_types.h @@ -1,6 +1,10 @@ @@ -27,6 +32,8 @@ Signed-off-by: Thomas Gleixner /* * include/linux/rwlock_types.h - generic rwlock type definitions * and initializers +diff --git a/include/linux/spinlock_types.h b/include/linux/spinlock_types.h +index 24b4e6f2c1a2..5c8664d57fb8 100644 --- a/include/linux/spinlock_types.h +++ b/include/linux/spinlock_types.h @@ -9,76 +9,9 @@ @@ -108,6 +115,9 @@ Signed-off-by: Thomas Gleixner #include +diff --git a/include/linux/spinlock_types_nort.h b/include/linux/spinlock_types_nort.h +new file mode 100644 +index 000000000000..f1dac1fb1d6a --- /dev/null +++ b/include/linux/spinlock_types_nort.h @@ -0,0 +1,33 @@ @@ -144,6 +154,9 @@ Signed-off-by: Thomas Gleixner +#define DEFINE_SPINLOCK(x) spinlock_t x = __SPIN_LOCK_UNLOCKED(x) + +#endif +diff --git a/include/linux/spinlock_types_raw.h b/include/linux/spinlock_types_raw.h +new file mode 100644 +index 000000000000..822bf64a61d3 --- /dev/null +++ b/include/linux/spinlock_types_raw.h @@ -0,0 +1,55 @@ @@ -202,3 +215,6 @@ Signed-off-by: Thomas Gleixner +#define 
DEFINE_RAW_SPINLOCK(x) raw_spinlock_t x = __RAW_SPIN_LOCK_UNLOCKED(x) + +#endif +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0141-rtmutex-avoid-include-hell.patch b/kernel/patches-4.19.x-rt/0139-rtmutex-Avoid-include-hell.patch similarity index 68% rename from kernel/patches-4.19.x-rt/0141-rtmutex-avoid-include-hell.patch rename to kernel/patches-4.19.x-rt/0139-rtmutex-Avoid-include-hell.patch index a3b55f5b7..5a0d3eee1 100644 --- a/kernel/patches-4.19.x-rt/0141-rtmutex-avoid-include-hell.patch +++ b/kernel/patches-4.19.x-rt/0139-rtmutex-Avoid-include-hell.patch @@ -1,15 +1,18 @@ -Subject: rtmutex: Avoid include hell +From a006197b4fa5fcec0fd8bee40072cf420689c354 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Wed, 29 Jun 2011 20:06:39 +0200 +Subject: [PATCH 139/269] rtmutex: Avoid include hell Include only the required raw types. This avoids pulling in the complete spinlock header which in turn requires rtmutex.h at some point. Signed-off-by: Thomas Gleixner --- - include/linux/rtmutex.h | 2 +- + include/linux/rtmutex.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) +diff --git a/include/linux/rtmutex.h b/include/linux/rtmutex.h +index 81ece6a8291a..a355289b1fa1 100644 --- a/include/linux/rtmutex.h +++ b/include/linux/rtmutex.h @@ -15,7 +15,7 @@ @@ -21,3 +24,6 @@ Signed-off-by: Thomas Gleixner extern int max_lock_depth; /* for sysctl */ +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0142-rtmutex_dont_include_rcu.patch b/kernel/patches-4.19.x-rt/0140-rbtree-don-t-include-the-rcu-header.patch similarity index 85% rename from kernel/patches-4.19.x-rt/0142-rtmutex_dont_include_rcu.patch rename to kernel/patches-4.19.x-rt/0140-rbtree-don-t-include-the-rcu-header.patch index d63e678a0..17adb1488 100644 --- a/kernel/patches-4.19.x-rt/0142-rtmutex_dont_include_rcu.patch +++ b/kernel/patches-4.19.x-rt/0140-rbtree-don-t-include-the-rcu-header.patch @@ -1,5 +1,10 @@ +From 3465bbb3bbbf562cd3d67f1c2f387eaa48a1af70 Mon Sep 17 00:00:00 2001 From: Sebastian 
Andrzej Siewior -Subject: rbtree: don't include the rcu header +Date: Tue, 26 Feb 2019 16:56:02 +0100 +Subject: [PATCH 140/269] rbtree: don't include the rcu header +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit The RCU header pulls in spinlock.h and fails due not yet defined types: @@ -18,11 +23,14 @@ a new header file which can be included by both users. Signed-off-by: Sebastian Andrzej Siewior --- - include/linux/rbtree.h | 2 - - include/linux/rcu_assign_pointer.h | 54 +++++++++++++++++++++++++++++++++++++ - include/linux/rcupdate.h | 49 --------------------------------- + include/linux/rbtree.h | 2 +- + include/linux/rcu_assign_pointer.h | 54 ++++++++++++++++++++++++++++++ + include/linux/rcupdate.h | 49 +-------------------------- 3 files changed, 56 insertions(+), 49 deletions(-) + create mode 100644 include/linux/rcu_assign_pointer.h +diff --git a/include/linux/rbtree.h b/include/linux/rbtree.h +index fcbeed4053ef..2aa2aec354c2 100644 --- a/include/linux/rbtree.h +++ b/include/linux/rbtree.h @@ -31,7 +31,7 @@ @@ -34,6 +42,9 @@ Signed-off-by: Sebastian Andrzej Siewior struct rb_node { unsigned long __rb_parent_color; +diff --git a/include/linux/rcu_assign_pointer.h b/include/linux/rcu_assign_pointer.h +new file mode 100644 +index 000000000000..7066962a4379 --- /dev/null +++ b/include/linux/rcu_assign_pointer.h @@ -0,0 +1,54 @@ @@ -91,6 +102,8 @@ Signed-off-by: Sebastian Andrzej Siewior +}) + +#endif +diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h +index 0539f55bf7b3..63cd0a1a99a0 100644 --- a/include/linux/rcupdate.h +++ b/include/linux/rcupdate.h @@ -42,6 +42,7 @@ @@ -101,10 +114,11 @@ Signed-off-by: Sebastian Andrzej Siewior #define ULONG_CMP_GE(a, b) (ULONG_MAX / 2 >= (a) - (b)) #define ULONG_CMP_LT(a, b) (ULONG_MAX / 2 < (a) - (b)) -@@ -372,54 +373,6 @@ static inline void rcu_preempt_sleep_che +@@ -371,54 +372,6 @@ static inline void rcu_preempt_sleep_check(void) { } + ((typeof(*p) __force 
__kernel *)(________p1)); \ }) - /** +-/** - * RCU_INITIALIZER() - statically initialize an RCU-protected global variable - * @v: The value to statically initialize with. - */ @@ -152,7 +166,9 @@ Signed-off-by: Sebastian Andrzej Siewior - _r_a_p__v; \ -}) - --/** + /** * rcu_swap_protected() - swap an RCU and a regular pointer * @rcu_ptr: RCU pointer - * @ptr: regular pointer +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0143-rtmutex-Provide-rt_mutex_slowlock_locked.patch b/kernel/patches-4.19.x-rt/0141-rtmutex-Provide-rt_mutex_slowlock_locked.patch similarity index 85% rename from kernel/patches-4.19.x-rt/0143-rtmutex-Provide-rt_mutex_slowlock_locked.patch rename to kernel/patches-4.19.x-rt/0141-rtmutex-Provide-rt_mutex_slowlock_locked.patch index 7bef86044..1251e1280 100644 --- a/kernel/patches-4.19.x-rt/0143-rtmutex-Provide-rt_mutex_slowlock_locked.patch +++ b/kernel/patches-4.19.x-rt/0141-rtmutex-Provide-rt_mutex_slowlock_locked.patch @@ -1,19 +1,22 @@ +From 28e2025df13c6a1c66fae452e91d26f8d2755460 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Thu, 12 Oct 2017 16:14:22 +0200 -Subject: rtmutex: Provide rt_mutex_slowlock_locked() +Subject: [PATCH 141/269] rtmutex: Provide rt_mutex_slowlock_locked() This is the inner-part of rt_mutex_slowlock(), required for rwsem-rt. 
Signed-off-by: Thomas Gleixner Signed-off-by: Sebastian Andrzej Siewior --- - kernel/locking/rtmutex.c | 67 ++++++++++++++++++++++------------------ - kernel/locking/rtmutex_common.h | 7 ++++ + kernel/locking/rtmutex.c | 67 +++++++++++++++++++-------------- + kernel/locking/rtmutex_common.h | 7 ++++ 2 files changed, 45 insertions(+), 29 deletions(-) +diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c +index f058bb976212..921345c31161 100644 --- a/kernel/locking/rtmutex.c +++ b/kernel/locking/rtmutex.c -@@ -1244,35 +1244,16 @@ static void rt_mutex_handle_deadlock(int +@@ -1244,35 +1244,16 @@ static void rt_mutex_handle_deadlock(int res, int detect_deadlock, } } @@ -55,7 +58,7 @@ Signed-off-by: Sebastian Andrzej Siewior set_current_state(state); -@@ -1280,16 +1261,16 @@ rt_mutex_slowlock(struct rt_mutex *lock, +@@ -1280,16 +1261,16 @@ rt_mutex_slowlock(struct rt_mutex *lock, int state, if (unlikely(timeout)) hrtimer_start_expires(&timeout->timer, HRTIMER_MODE_ABS); @@ -76,7 +79,7 @@ Signed-off-by: Sebastian Andrzej Siewior } /* -@@ -1297,6 +1278,34 @@ rt_mutex_slowlock(struct rt_mutex *lock, +@@ -1297,6 +1278,34 @@ rt_mutex_slowlock(struct rt_mutex *lock, int state, * unconditionally. We might have to fix that up. 
*/ fixup_rt_mutex_waiters(lock); @@ -111,6 +114,8 @@ Signed-off-by: Sebastian Andrzej Siewior raw_spin_unlock_irqrestore(&lock->wait_lock, flags); +diff --git a/kernel/locking/rtmutex_common.h b/kernel/locking/rtmutex_common.h +index 461527f3f7af..cb9815f0c766 100644 --- a/kernel/locking/rtmutex_common.h +++ b/kernel/locking/rtmutex_common.h @@ -15,6 +15,7 @@ @@ -121,7 +126,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * This is the control structure for tasks blocked on a rt_mutex, -@@ -159,6 +160,12 @@ extern bool __rt_mutex_futex_unlock(stru +@@ -159,6 +160,12 @@ extern bool __rt_mutex_futex_unlock(struct rt_mutex *lock, struct wake_q_head *wqh); extern void rt_mutex_postunlock(struct wake_q_head *wake_q); @@ -134,3 +139,6 @@ Signed-off-by: Sebastian Andrzej Siewior #ifdef CONFIG_DEBUG_RT_MUTEXES # include "rtmutex-debug.h" +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0144-rtmutex-export-lockdep-less-version-of-rt_mutex-s-lo.patch b/kernel/patches-4.19.x-rt/0142-rtmutex-export-lockdep-less-version-of-rt_mutex-s-lo.patch similarity index 81% rename from kernel/patches-4.19.x-rt/0144-rtmutex-export-lockdep-less-version-of-rt_mutex-s-lo.patch rename to kernel/patches-4.19.x-rt/0142-rtmutex-export-lockdep-less-version-of-rt_mutex-s-lo.patch index aa364eeb2..965c9be31 100644 --- a/kernel/patches-4.19.x-rt/0144-rtmutex-export-lockdep-less-version-of-rt_mutex-s-lo.patch +++ b/kernel/patches-4.19.x-rt/0142-rtmutex-export-lockdep-less-version-of-rt_mutex-s-lo.patch @@ -1,20 +1,23 @@ +From cc9444912602fb283e5e75dc9ca36ee98cf8d0e9 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Thu, 12 Oct 2017 16:36:39 +0200 -Subject: rtmutex: export lockdep-less version of rt_mutex's lock, - trylock and unlock +Subject: [PATCH 142/269] rtmutex: export lockdep-less version of rt_mutex's + lock, trylock and unlock Required for lock implementation ontop of rtmutex. 
Signed-off-by: Thomas Gleixner Signed-off-by: Sebastian Andrzej Siewior --- - kernel/locking/rtmutex.c | 67 +++++++++++++++++++++++++--------------- - kernel/locking/rtmutex_common.h | 3 + + kernel/locking/rtmutex.c | 67 +++++++++++++++++++++------------ + kernel/locking/rtmutex_common.h | 3 ++ 2 files changed, 46 insertions(+), 24 deletions(-) +diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c +index 921345c31161..d732976d0f05 100644 --- a/kernel/locking/rtmutex.c +++ b/kernel/locking/rtmutex.c -@@ -1494,12 +1494,33 @@ rt_mutex_fastunlock(struct rt_mutex *loc +@@ -1494,12 +1494,33 @@ rt_mutex_fastunlock(struct rt_mutex *lock, rt_mutex_postunlock(&wake_q); } @@ -68,7 +71,7 @@ Signed-off-by: Sebastian Andrzej Siewior } EXPORT_SYMBOL_GPL(rt_mutex_lock_interruptible); -@@ -1575,13 +1587,10 @@ int __sched __rt_mutex_futex_trylock(str +@@ -1575,13 +1587,10 @@ int __sched __rt_mutex_futex_trylock(struct rt_mutex *lock) * Returns: * 0 on success * -EINTR when interrupted by a signal @@ -83,7 +86,7 @@ Signed-off-by: Sebastian Andrzej Siewior } EXPORT_SYMBOL_GPL(rt_mutex_lock_killable); -@@ -1616,6 +1625,18 @@ rt_mutex_timed_lock(struct rt_mutex *loc +@@ -1616,6 +1625,18 @@ rt_mutex_timed_lock(struct rt_mutex *lock, struct hrtimer_sleeper *timeout) } EXPORT_SYMBOL_GPL(rt_mutex_timed_lock); @@ -102,7 +105,7 @@ Signed-off-by: Sebastian Andrzej Siewior /** * rt_mutex_trylock - try to lock a rt_mutex * -@@ -1631,14 +1652,7 @@ int __sched rt_mutex_trylock(struct rt_m +@@ -1631,14 +1652,7 @@ int __sched rt_mutex_trylock(struct rt_mutex *lock) { int ret; @@ -118,7 +121,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (ret) mutex_acquire(&lock->dep_map, 0, 1, _RET_IP_); -@@ -1646,6 +1660,11 @@ int __sched rt_mutex_trylock(struct rt_m +@@ -1646,6 +1660,11 @@ int __sched rt_mutex_trylock(struct rt_mutex *lock) } EXPORT_SYMBOL_GPL(rt_mutex_trylock); @@ -130,9 +133,11 @@ Signed-off-by: Sebastian Andrzej Siewior /** * rt_mutex_unlock - unlock a rt_mutex * +diff --git 
a/kernel/locking/rtmutex_common.h b/kernel/locking/rtmutex_common.h +index cb9815f0c766..5955ad2aa2a8 100644 --- a/kernel/locking/rtmutex_common.h +++ b/kernel/locking/rtmutex_common.h -@@ -162,6 +162,9 @@ extern bool __rt_mutex_futex_unlock(stru +@@ -162,6 +162,9 @@ extern bool __rt_mutex_futex_unlock(struct rt_mutex *lock, extern void rt_mutex_postunlock(struct wake_q_head *wake_q); /* RW semaphore special interface */ @@ -142,3 +147,6 @@ Signed-off-by: Sebastian Andrzej Siewior int __sched rt_mutex_slowlock_locked(struct rt_mutex *lock, int state, struct hrtimer_sleeper *timeout, enum rtmutex_chainwalk chwalk, +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0145-rtmutex-add-sleeping-lock-implementation.patch b/kernel/patches-4.19.x-rt/0143-rtmutex-add-sleeping-lock-implementation.patch similarity index 88% rename from kernel/patches-4.19.x-rt/0145-rtmutex-add-sleeping-lock-implementation.patch rename to kernel/patches-4.19.x-rt/0143-rtmutex-add-sleeping-lock-implementation.patch index 915dc6e99..603ed5791 100644 --- a/kernel/patches-4.19.x-rt/0145-rtmutex-add-sleeping-lock-implementation.patch +++ b/kernel/patches-4.19.x-rt/0143-rtmutex-add-sleeping-lock-implementation.patch @@ -1,25 +1,28 @@ +From 162034b085d74f4c4131bf4dc0c229a4c971cfae Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Thu, 12 Oct 2017 17:11:19 +0200 -Subject: rtmutex: add sleeping lock implementation +Subject: [PATCH 143/269] rtmutex: add sleeping lock implementation Signed-off-by: Thomas Gleixner Signed-off-by: Sebastian Andrzej Siewior --- - include/linux/kernel.h | 4 - include/linux/rtmutex.h | 21 + - include/linux/sched.h | 8 - include/linux/sched/wake_q.h | 27 ++ - include/linux/spinlock_rt.h | 156 +++++++++++++ - include/linux/spinlock_types_rt.h | 48 ++++ - kernel/fork.c | 1 - kernel/futex.c | 11 - kernel/locking/rtmutex.c | 436 ++++++++++++++++++++++++++++++++++---- - kernel/locking/rtmutex_common.h | 14 - - kernel/sched/core.c | 28 +- + include/linux/kernel.h | 4 + + 
include/linux/rtmutex.h | 21 +- + include/linux/sched.h | 8 + + include/linux/sched/wake_q.h | 27 +- + include/linux/spinlock_rt.h | 156 +++++++++++ + include/linux/spinlock_types_rt.h | 48 ++++ + kernel/fork.c | 1 + + kernel/futex.c | 11 +- + kernel/locking/rtmutex.c | 436 +++++++++++++++++++++++++++--- + kernel/locking/rtmutex_common.h | 14 +- + kernel/sched/core.c | 28 +- 11 files changed, 695 insertions(+), 59 deletions(-) create mode 100644 include/linux/spinlock_rt.h create mode 100644 include/linux/spinlock_types_rt.h +diff --git a/include/linux/kernel.h b/include/linux/kernel.h +index d6aac75b51ba..e3f1a7c3b953 100644 --- a/include/linux/kernel.h +++ b/include/linux/kernel.h @@ -259,6 +259,9 @@ extern int _cond_resched(void); @@ -40,6 +43,8 @@ Signed-off-by: Sebastian Andrzej Siewior # define sched_annotate_sleep() do { } while (0) #endif +diff --git a/include/linux/rtmutex.h b/include/linux/rtmutex.h +index a355289b1fa1..138bd1e183e0 100644 --- a/include/linux/rtmutex.h +++ b/include/linux/rtmutex.h @@ -14,11 +14,15 @@ @@ -96,6 +101,8 @@ Signed-off-by: Sebastian Andrzej Siewior /** * rt_mutex_is_locked - is the mutex locked * @lock: the mutex to be queried +diff --git a/include/linux/sched.h b/include/linux/sched.h +index 7ecccccbd358..1797fd3c8cbb 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -134,6 +134,9 @@ struct task_group; @@ -134,9 +141,11 @@ Signed-off-by: Sebastian Andrzej Siewior #ifdef CONFIG_RT_MUTEXES /* PI waiters blocked on a rt_mutex held by this task: */ +diff --git a/include/linux/sched/wake_q.h b/include/linux/sched/wake_q.h +index 10b19a192b2d..ce3ccff3d9d8 100644 --- a/include/linux/sched/wake_q.h +++ b/include/linux/sched/wake_q.h -@@ -47,8 +47,29 @@ static inline void wake_q_init(struct wa +@@ -47,8 +47,29 @@ static inline void wake_q_init(struct wake_q_head *head) head->lastp = &head->first; } @@ -169,6 +178,9 @@ Signed-off-by: Sebastian Andrzej Siewior +} #endif /* _LINUX_SCHED_WAKE_Q_H */ +diff --git 
a/include/linux/spinlock_rt.h b/include/linux/spinlock_rt.h +new file mode 100644 +index 000000000000..3696a77fa77d --- /dev/null +++ b/include/linux/spinlock_rt.h @@ -0,0 +1,156 @@ @@ -328,6 +340,9 @@ Signed-off-by: Sebastian Andrzej Siewior +} + +#endif +diff --git a/include/linux/spinlock_types_rt.h b/include/linux/spinlock_types_rt.h +new file mode 100644 +index 000000000000..3e3d8c5f7a9a --- /dev/null +++ b/include/linux/spinlock_types_rt.h @@ -0,0 +1,48 @@ @@ -379,9 +394,11 @@ Signed-off-by: Sebastian Andrzej Siewior + spinlock_t name = __SPIN_LOCK_UNLOCKED(name) + +#endif +diff --git a/kernel/fork.c b/kernel/fork.c +index 8a9241afefb0..f62ae61064c7 100644 --- a/kernel/fork.c +++ b/kernel/fork.c -@@ -895,6 +895,7 @@ static struct task_struct *dup_task_stru +@@ -895,6 +895,7 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node) tsk->splice_pipe = NULL; tsk->task_frag.page = NULL; tsk->wake_q.next = NULL; @@ -389,9 +406,11 @@ Signed-off-by: Sebastian Andrzej Siewior account_kernel_stack(tsk, 1); +diff --git a/kernel/futex.c b/kernel/futex.c +index eeb3e16fb9ec..2c5a5e180223 100644 --- a/kernel/futex.c +++ b/kernel/futex.c -@@ -1471,6 +1471,7 @@ static int wake_futex_pi(u32 __user *uad +@@ -1474,6 +1474,7 @@ static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_pi_state *pi_ struct task_struct *new_owner; bool postunlock = false; DEFINE_WAKE_Q(wake_q); @@ -399,7 +418,7 @@ Signed-off-by: Sebastian Andrzej Siewior int ret = 0; new_owner = rt_mutex_next_owner(&pi_state->pi_mutex); -@@ -1532,13 +1533,13 @@ static int wake_futex_pi(u32 __user *uad +@@ -1535,13 +1536,13 @@ static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_pi_state *pi_ pi_state->owner = new_owner; raw_spin_unlock(&new_owner->pi_lock); @@ -416,7 +435,7 @@ Signed-off-by: Sebastian Andrzej Siewior return ret; } -@@ -2850,7 +2851,7 @@ static int futex_lock_pi(u32 __user *uad +@@ -2853,7 +2854,7 @@ static int futex_lock_pi(u32 __user *uaddr, 
unsigned int flags, goto no_block; } @@ -425,7 +444,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * On PREEMPT_RT_FULL, when hb->lock becomes an rt_mutex, we must not -@@ -3230,7 +3231,7 @@ static int futex_wait_requeue_pi(u32 __u +@@ -3233,7 +3234,7 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags, * The waiter is allocated on our stack, manipulated by the requeue * code while we sleep on uaddr. */ @@ -434,6 +453,8 @@ Signed-off-by: Sebastian Andrzej Siewior ret = get_futex_key(uaddr2, flags & FLAGS_SHARED, &key2, VERIFY_WRITE); if (unlikely(ret != 0)) +diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c +index d732976d0f05..88df1ff7ca2d 100644 --- a/kernel/locking/rtmutex.c +++ b/kernel/locking/rtmutex.c @@ -7,6 +7,11 @@ @@ -448,7 +469,7 @@ Signed-off-by: Sebastian Andrzej Siewior * * See Documentation/locking/rt-mutex-design.txt for details. */ -@@ -234,7 +239,7 @@ static inline bool unlock_rt_mutex_safe( +@@ -234,7 +239,7 @@ static inline bool unlock_rt_mutex_safe(struct rt_mutex *lock, * Only use with rt_mutex_waiter_{less,equal}() */ #define task_to_waiter(p) \ @@ -457,7 +478,7 @@ Signed-off-by: Sebastian Andrzej Siewior static inline int rt_mutex_waiter_less(struct rt_mutex_waiter *left, -@@ -274,6 +279,27 @@ rt_mutex_waiter_equal(struct rt_mutex_wa +@@ -274,6 +279,27 @@ rt_mutex_waiter_equal(struct rt_mutex_waiter *left, return 1; } @@ -485,7 +506,7 @@ Signed-off-by: Sebastian Andrzej Siewior static void rt_mutex_enqueue(struct rt_mutex *lock, struct rt_mutex_waiter *waiter) { -@@ -378,6 +404,14 @@ static bool rt_mutex_cond_detect_deadloc +@@ -378,6 +404,14 @@ static bool rt_mutex_cond_detect_deadlock(struct rt_mutex_waiter *waiter, return debug_rt_mutex_detect_deadlock(waiter, chwalk); } @@ -500,7 +521,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * Max number of times we'll walk the boosting chain: */ -@@ -703,13 +737,16 @@ static int rt_mutex_adjust_prio_chain(st +@@ -703,13 +737,16 @@ static int 
rt_mutex_adjust_prio_chain(struct task_struct *task, * follow here. This is the end of the chain we are walking. */ if (!rt_mutex_owner(lock)) { @@ -519,7 +540,7 @@ Signed-off-by: Sebastian Andrzej Siewior raw_spin_unlock_irq(&lock->wait_lock); return 0; } -@@ -811,9 +848,11 @@ static int rt_mutex_adjust_prio_chain(st +@@ -811,9 +848,11 @@ static int rt_mutex_adjust_prio_chain(struct task_struct *task, * @task: The task which wants to acquire the lock * @waiter: The waiter that is queued to the lock's wait tree if the * callsite called task_blocked_on_lock(), otherwise NULL @@ -533,7 +554,7 @@ Signed-off-by: Sebastian Andrzej Siewior { lockdep_assert_held(&lock->wait_lock); -@@ -849,12 +888,11 @@ static int try_to_take_rt_mutex(struct r +@@ -849,12 +888,11 @@ static int try_to_take_rt_mutex(struct rt_mutex *lock, struct task_struct *task, */ if (waiter) { /* @@ -549,7 +570,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * We can acquire the lock. Remove the waiter from the * lock waiters tree. -@@ -872,14 +910,12 @@ static int try_to_take_rt_mutex(struct r +@@ -872,14 +910,12 @@ static int try_to_take_rt_mutex(struct rt_mutex *lock, struct task_struct *task, */ if (rt_mutex_has_waiters(lock)) { /* @@ -568,7 +589,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * The current top waiter stays enqueued. We * don't have to change anything in the lock -@@ -926,6 +962,296 @@ static int try_to_take_rt_mutex(struct r +@@ -926,6 +962,296 @@ static int try_to_take_rt_mutex(struct rt_mutex *lock, struct task_struct *task, return 1; } @@ -865,7 +886,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * Task blocks on lock. * -@@ -1039,6 +1365,7 @@ static int task_blocks_on_rt_mutex(struc +@@ -1039,6 +1365,7 @@ static int task_blocks_on_rt_mutex(struct rt_mutex *lock, * Called with lock->wait_lock held and interrupts disabled. 
*/ static void mark_wakeup_next_waiter(struct wake_q_head *wake_q, @@ -873,7 +894,7 @@ Signed-off-by: Sebastian Andrzej Siewior struct rt_mutex *lock) { struct rt_mutex_waiter *waiter; -@@ -1078,7 +1405,10 @@ static void mark_wakeup_next_waiter(stru +@@ -1078,7 +1405,10 @@ static void mark_wakeup_next_waiter(struct wake_q_head *wake_q, * Pairs with preempt_enable() in rt_mutex_postunlock(); */ preempt_disable(); @@ -885,7 +906,7 @@ Signed-off-by: Sebastian Andrzej Siewior raw_spin_unlock(¤t->pi_lock); } -@@ -1162,21 +1492,22 @@ void rt_mutex_adjust_pi(struct task_stru +@@ -1162,21 +1492,22 @@ void rt_mutex_adjust_pi(struct task_struct *task) return; } next_lock = waiter->lock; @@ -910,7 +931,7 @@ Signed-off-by: Sebastian Andrzej Siewior } /** -@@ -1293,7 +1624,7 @@ rt_mutex_slowlock(struct rt_mutex *lock, +@@ -1293,7 +1624,7 @@ rt_mutex_slowlock(struct rt_mutex *lock, int state, unsigned long flags; int ret = 0; @@ -919,7 +940,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * Technically we could use raw_spin_[un]lock_irq() here, but this can -@@ -1366,7 +1697,8 @@ static inline int rt_mutex_slowtrylock(s +@@ -1366,7 +1697,8 @@ static inline int rt_mutex_slowtrylock(struct rt_mutex *lock) * Return whether the current task needs to call rt_mutex_postunlock(). */ static bool __sched rt_mutex_slowunlock(struct rt_mutex *lock, @@ -929,7 +950,7 @@ Signed-off-by: Sebastian Andrzej Siewior { unsigned long flags; -@@ -1420,7 +1752,7 @@ static bool __sched rt_mutex_slowunlock( +@@ -1420,7 +1752,7 @@ static bool __sched rt_mutex_slowunlock(struct rt_mutex *lock, * * Queue the next waiter for wakeup once we release the wait_lock. 
*/ @@ -938,7 +959,7 @@ Signed-off-by: Sebastian Andrzej Siewior raw_spin_unlock_irqrestore(&lock->wait_lock, flags); return true; /* call rt_mutex_postunlock() */ -@@ -1472,9 +1804,11 @@ rt_mutex_fasttrylock(struct rt_mutex *lo +@@ -1472,9 +1804,11 @@ rt_mutex_fasttrylock(struct rt_mutex *lock, /* * Performs the wakeup of the the top-waiter and re-enables preemption. */ @@ -951,7 +972,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* Pairs with preempt_disable() in rt_mutex_slowunlock() */ preempt_enable(); -@@ -1483,15 +1817,17 @@ void rt_mutex_postunlock(struct wake_q_h +@@ -1483,15 +1817,17 @@ void rt_mutex_postunlock(struct wake_q_head *wake_q) static inline void rt_mutex_fastunlock(struct rt_mutex *lock, bool (*slowfn)(struct rt_mutex *lock, @@ -972,7 +993,7 @@ Signed-off-by: Sebastian Andrzej Siewior } int __sched __rt_mutex_lock_state(struct rt_mutex *lock, int state) -@@ -1673,16 +2009,13 @@ void __sched __rt_mutex_unlock(struct rt +@@ -1673,16 +2009,13 @@ void __sched __rt_mutex_unlock(struct rt_mutex *lock) void __sched rt_mutex_unlock(struct rt_mutex *lock) { mutex_release(&lock->dep_map, 1, _RET_IP_); @@ -993,7 +1014,7 @@ Signed-off-by: Sebastian Andrzej Siewior { lockdep_assert_held(&lock->wait_lock); -@@ -1699,23 +2032,35 @@ bool __sched __rt_mutex_futex_unlock(str +@@ -1699,23 +2032,35 @@ bool __sched __rt_mutex_futex_unlock(struct rt_mutex *lock, * avoid inversion prior to the wakeup. preempt_disable() * therein pairs with rt_mutex_postunlock(). 
*/ @@ -1032,7 +1053,7 @@ Signed-off-by: Sebastian Andrzej Siewior } /** -@@ -1754,7 +2099,7 @@ void __rt_mutex_init(struct rt_mutex *lo +@@ -1754,7 +2099,7 @@ void __rt_mutex_init(struct rt_mutex *lock, const char *name, if (name && key) debug_rt_mutex_init(lock, name, key); } @@ -1041,7 +1062,7 @@ Signed-off-by: Sebastian Andrzej Siewior /** * rt_mutex_init_proxy_locked - initialize and lock a rt_mutex on behalf of a -@@ -1949,6 +2294,7 @@ int rt_mutex_wait_proxy_lock(struct rt_m +@@ -1949,6 +2294,7 @@ int rt_mutex_wait_proxy_lock(struct rt_mutex *lock, struct hrtimer_sleeper *to, struct rt_mutex_waiter *waiter) { @@ -1049,7 +1070,7 @@ Signed-off-by: Sebastian Andrzej Siewior int ret; raw_spin_lock_irq(&lock->wait_lock); -@@ -1960,6 +2306,24 @@ int rt_mutex_wait_proxy_lock(struct rt_m +@@ -1960,6 +2306,24 @@ int rt_mutex_wait_proxy_lock(struct rt_mutex *lock, * have to fix that up. */ fixup_rt_mutex_waiters(lock); @@ -1074,6 +1095,8 @@ Signed-off-by: Sebastian Andrzej Siewior raw_spin_unlock_irq(&lock->wait_lock); return ret; +diff --git a/kernel/locking/rtmutex_common.h b/kernel/locking/rtmutex_common.h +index 5955ad2aa2a8..6fcf0a3e180d 100644 --- a/kernel/locking/rtmutex_common.h +++ b/kernel/locking/rtmutex_common.h @@ -30,6 +30,7 @@ struct rt_mutex_waiter { @@ -1084,7 +1107,7 @@ Signed-off-by: Sebastian Andrzej Siewior #ifdef CONFIG_DEBUG_RT_MUTEXES unsigned long ip; struct pid *deadlock_task_pid; -@@ -139,7 +140,7 @@ extern void rt_mutex_init_proxy_locked(s +@@ -139,7 +140,7 @@ extern void rt_mutex_init_proxy_locked(struct rt_mutex *lock, struct task_struct *proxy_owner); extern void rt_mutex_proxy_unlock(struct rt_mutex *lock, struct task_struct *proxy_owner); @@ -1093,7 +1116,7 @@ Signed-off-by: Sebastian Andrzej Siewior extern int __rt_mutex_start_proxy_lock(struct rt_mutex *lock, struct rt_mutex_waiter *waiter, struct task_struct *task); -@@ -157,9 +158,12 @@ extern int __rt_mutex_futex_trylock(stru +@@ -157,9 +158,12 @@ extern int 
__rt_mutex_futex_trylock(struct rt_mutex *l); extern void rt_mutex_futex_unlock(struct rt_mutex *lock); extern bool __rt_mutex_futex_unlock(struct rt_mutex *lock, @@ -1108,7 +1131,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* RW semaphore special interface */ extern int __rt_mutex_lock_state(struct rt_mutex *lock, int state); -@@ -169,6 +173,10 @@ int __sched rt_mutex_slowlock_locked(str +@@ -169,6 +173,10 @@ int __sched rt_mutex_slowlock_locked(struct rt_mutex *lock, int state, struct hrtimer_sleeper *timeout, enum rtmutex_chainwalk chwalk, struct rt_mutex_waiter *waiter); @@ -1119,9 +1142,11 @@ Signed-off-by: Sebastian Andrzej Siewior #ifdef CONFIG_DEBUG_RT_MUTEXES # include "rtmutex-debug.h" +diff --git a/kernel/sched/core.c b/kernel/sched/core.c +index 516f05702550..e699500aea26 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c -@@ -400,9 +400,15 @@ static bool set_nr_if_polling(struct tas +@@ -401,9 +401,15 @@ static bool set_nr_if_polling(struct task_struct *p) #endif #endif @@ -1139,7 +1164,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * Atomically grab the task, if ->wake_q is !nil already it means -@@ -424,24 +430,32 @@ void wake_q_add(struct wake_q_head *head +@@ -426,24 +432,32 @@ void wake_q_add(struct wake_q_head *head, struct task_struct *task) head->lastp = &node->next; } @@ -1177,3 +1202,6 @@ Signed-off-by: Sebastian Andrzej Siewior put_task_struct(task); } } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0146-rtmutex-add-mutex-implementation-based-on-rtmutex.patch b/kernel/patches-4.19.x-rt/0144-rtmutex-add-mutex-implementation-based-on-rtmutex.patch similarity index 95% rename from kernel/patches-4.19.x-rt/0146-rtmutex-add-mutex-implementation-based-on-rtmutex.patch rename to kernel/patches-4.19.x-rt/0144-rtmutex-add-mutex-implementation-based-on-rtmutex.patch index 783831268..36b22d9f6 100644 --- a/kernel/patches-4.19.x-rt/0146-rtmutex-add-mutex-implementation-based-on-rtmutex.patch +++ 
b/kernel/patches-4.19.x-rt/0144-rtmutex-add-mutex-implementation-based-on-rtmutex.patch @@ -1,16 +1,20 @@ +From b2eccb42878894e44f005029aa9b2fc9962d9093 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Thu, 12 Oct 2017 17:17:03 +0200 -Subject: rtmutex: add mutex implementation based on rtmutex +Subject: [PATCH 144/269] rtmutex: add mutex implementation based on rtmutex Signed-off-by: Thomas Gleixner Signed-off-by: Sebastian Andrzej Siewior --- - include/linux/mutex_rt.h | 130 ++++++++++++++++++++++++++ - kernel/locking/mutex-rt.c | 223 ++++++++++++++++++++++++++++++++++++++++++++++ + include/linux/mutex_rt.h | 130 ++++++++++++++++++++++ + kernel/locking/mutex-rt.c | 223 ++++++++++++++++++++++++++++++++++++++ 2 files changed, 353 insertions(+) create mode 100644 include/linux/mutex_rt.h create mode 100644 kernel/locking/mutex-rt.c +diff --git a/include/linux/mutex_rt.h b/include/linux/mutex_rt.h +new file mode 100644 +index 000000000000..3fcb5edb1d2b --- /dev/null +++ b/include/linux/mutex_rt.h @@ -0,0 +1,130 @@ @@ -144,6 +148,9 @@ Signed-off-by: Sebastian Andrzej Siewior +extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock); + +#endif +diff --git a/kernel/locking/mutex-rt.c b/kernel/locking/mutex-rt.c +new file mode 100644 +index 000000000000..4f81595c0f52 --- /dev/null +++ b/kernel/locking/mutex-rt.c @@ -0,0 +1,223 @@ @@ -370,3 +377,6 @@ Signed-off-by: Sebastian Andrzej Siewior + return 1; +} +EXPORT_SYMBOL(atomic_dec_and_mutex_lock); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0147-rtmutex-add-rwsem-implementation-based-on-rtmutex.patch b/kernel/patches-4.19.x-rt/0145-rtmutex-add-rwsem-implementation-based-on-rtmutex.patch similarity index 95% rename from kernel/patches-4.19.x-rt/0147-rtmutex-add-rwsem-implementation-based-on-rtmutex.patch rename to kernel/patches-4.19.x-rt/0145-rtmutex-add-rwsem-implementation-based-on-rtmutex.patch index 21d1c44da..fcb7b11a6 100644 --- 
a/kernel/patches-4.19.x-rt/0147-rtmutex-add-rwsem-implementation-based-on-rtmutex.patch +++ b/kernel/patches-4.19.x-rt/0145-rtmutex-add-rwsem-implementation-based-on-rtmutex.patch @@ -1,6 +1,7 @@ +From d8e44c235bb3238fc1848c72b906014f1d9a5fb1 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Thu, 12 Oct 2017 17:28:34 +0200 -Subject: rtmutex: add rwsem implementation based on rtmutex +Subject: [PATCH 145/269] rtmutex: add rwsem implementation based on rtmutex The RT specific R/W semaphore implementation restricts the number of readers to one because a writer cannot block on multiple readers and inherit its @@ -41,12 +42,15 @@ the approach. Signed-off-by: Thomas Gleixner Signed-off-by: Sebastian Andrzej Siewior --- - include/linux/rwsem_rt.h | 68 ++++++++++ - kernel/locking/rwsem-rt.c | 293 ++++++++++++++++++++++++++++++++++++++++++++++ + include/linux/rwsem_rt.h | 68 +++++++++ + kernel/locking/rwsem-rt.c | 293 ++++++++++++++++++++++++++++++++++++++ 2 files changed, 361 insertions(+) create mode 100644 include/linux/rwsem_rt.h create mode 100644 kernel/locking/rwsem-rt.c +diff --git a/include/linux/rwsem_rt.h b/include/linux/rwsem_rt.h +new file mode 100644 +index 000000000000..2018ff77904a --- /dev/null +++ b/include/linux/rwsem_rt.h @@ -0,0 +1,68 @@ @@ -118,6 +122,9 @@ Signed-off-by: Sebastian Andrzej Siewior +extern void __downgrade_write(struct rw_semaphore *sem); + +#endif +diff --git a/kernel/locking/rwsem-rt.c b/kernel/locking/rwsem-rt.c +new file mode 100644 +index 000000000000..7d3c5cf3d23d --- /dev/null +++ b/kernel/locking/rwsem-rt.c @@ -0,0 +1,293 @@ @@ -414,3 +421,6 @@ Signed-off-by: Sebastian Andrzej Siewior + /* Release it and account current as reader */ + __up_write_unlock(sem, WRITER_BIAS - 1, flags); +} +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0148-rtmutex-add-rwlock-implementation-based-on-rtmutex.patch b/kernel/patches-4.19.x-rt/0146-rtmutex-add-rwlock-implementation-based-on-rtmutex.patch similarity index 95% rename from 
kernel/patches-4.19.x-rt/0148-rtmutex-add-rwlock-implementation-based-on-rtmutex.patch rename to kernel/patches-4.19.x-rt/0146-rtmutex-add-rwlock-implementation-based-on-rtmutex.patch index 35ac0af12..02a983455 100644 --- a/kernel/patches-4.19.x-rt/0148-rtmutex-add-rwlock-implementation-based-on-rtmutex.patch +++ b/kernel/patches-4.19.x-rt/0146-rtmutex-add-rwlock-implementation-based-on-rtmutex.patch @@ -1,20 +1,24 @@ +From d49ee1d88e89db7c1a404171e67553b7695c349c Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Thu, 12 Oct 2017 17:18:06 +0200 -Subject: rtmutex: add rwlock implementation based on rtmutex +Subject: [PATCH 146/269] rtmutex: add rwlock implementation based on rtmutex The implementation is bias-based, similar to the rwsem implementation. Signed-off-by: Thomas Gleixner Signed-off-by: Sebastian Andrzej Siewior --- - include/linux/rwlock_rt.h | 119 ++++++++++++ - include/linux/rwlock_types_rt.h | 55 +++++ - kernel/locking/rwlock-rt.c | 368 ++++++++++++++++++++++++++++++++++++++++ + include/linux/rwlock_rt.h | 119 +++++++++++ + include/linux/rwlock_types_rt.h | 55 +++++ + kernel/locking/rwlock-rt.c | 368 ++++++++++++++++++++++++++++++++ 3 files changed, 542 insertions(+) create mode 100644 include/linux/rwlock_rt.h create mode 100644 include/linux/rwlock_types_rt.h create mode 100644 kernel/locking/rwlock-rt.c +diff --git a/include/linux/rwlock_rt.h b/include/linux/rwlock_rt.h +new file mode 100644 +index 000000000000..a9c4c2ac4d1f --- /dev/null +++ b/include/linux/rwlock_rt.h @@ -0,0 +1,119 @@ @@ -137,6 +141,9 @@ Signed-off-by: Sebastian Andrzej Siewior +void __write_rt_unlock(struct rt_rw_lock *lock); + +#endif +diff --git a/include/linux/rwlock_types_rt.h b/include/linux/rwlock_types_rt.h +new file mode 100644 +index 000000000000..546a1f8f1274 --- /dev/null +++ b/include/linux/rwlock_types_rt.h @@ -0,0 +1,55 @@ @@ -195,6 +202,9 @@ Signed-off-by: Sebastian Andrzej Siewior + } while (0) + +#endif +diff --git a/kernel/locking/rwlock-rt.c 
b/kernel/locking/rwlock-rt.c +new file mode 100644 +index 000000000000..aebb7ce25bc6 --- /dev/null +++ b/kernel/locking/rwlock-rt.c @@ -0,0 +1,368 @@ @@ -566,3 +576,6 @@ Signed-off-by: Sebastian Andrzej Siewior + do_rwlock_rt_init(rwlock, name, key); +} +EXPORT_SYMBOL(__rt_rwlock_init); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0149-rtmutex-rwlock-preserve-state-like-a-sleeping-lock.patch b/kernel/patches-4.19.x-rt/0147-rtmutex-rwlock-preserve-state-like-a-sleeping-lock.patch similarity index 69% rename from kernel/patches-4.19.x-rt/0149-rtmutex-rwlock-preserve-state-like-a-sleeping-lock.patch rename to kernel/patches-4.19.x-rt/0147-rtmutex-rwlock-preserve-state-like-a-sleeping-lock.patch index 894bfd9c1..adbbbab00 100644 --- a/kernel/patches-4.19.x-rt/0149-rtmutex-rwlock-preserve-state-like-a-sleeping-lock.patch +++ b/kernel/patches-4.19.x-rt/0147-rtmutex-rwlock-preserve-state-like-a-sleeping-lock.patch @@ -1,6 +1,7 @@ +From f4e21a9f84eb9919949bfe5763eb96637b90bb1e Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Fri, 11 Jan 2019 21:16:31 +0100 -Subject: [PATCH] rtmutex/rwlock: preserve state like a sleeping lock +Subject: [PATCH 147/269] rtmutex/rwlock: preserve state like a sleeping lock The rwlock is spinning while acquiring a lock. 
Therefore it must become a sleeping lock on RT and preserve its task state while sleeping and @@ -10,12 +11,14 @@ Reported-by: Joe Korty Cc: stable-rt@vger.kernel.org Signed-off-by: Sebastian Andrzej Siewior --- - kernel/locking/rwlock-rt.c | 2 +- + kernel/locking/rwlock-rt.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) +diff --git a/kernel/locking/rwlock-rt.c b/kernel/locking/rwlock-rt.c +index aebb7ce25bc6..8f90afe111ce 100644 --- a/kernel/locking/rwlock-rt.c +++ b/kernel/locking/rwlock-rt.c -@@ -128,7 +128,7 @@ void __sched __read_rt_lock(struct rt_rw +@@ -128,7 +128,7 @@ void __sched __read_rt_lock(struct rt_rw_lock *lock) * That would put Reader1 behind the writer waiting on * Reader2 to call read_unlock() which might be unbound. */ @@ -24,3 +27,6 @@ Signed-off-by: Sebastian Andrzej Siewior rt_spin_lock_slowlock_locked(m, &waiter, flags); /* * The slowlock() above is guaranteed to return with the rtmutex is +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0150-rtmutex-wire-up-RT-s-locking.patch b/kernel/patches-4.19.x-rt/0148-rtmutex-wire-up-RT-s-locking.patch similarity index 69% rename from kernel/patches-4.19.x-rt/0150-rtmutex-wire-up-RT-s-locking.patch rename to kernel/patches-4.19.x-rt/0148-rtmutex-wire-up-RT-s-locking.patch index 0040b7ecf..151cf7766 100644 --- a/kernel/patches-4.19.x-rt/0150-rtmutex-wire-up-RT-s-locking.patch +++ b/kernel/patches-4.19.x-rt/0148-rtmutex-wire-up-RT-s-locking.patch @@ -1,20 +1,23 @@ +From 145de90802b872003bf17064f49d5b1ea94f1a5f Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Thu, 12 Oct 2017 17:31:14 +0200 -Subject: rtmutex: wire up RT's locking +Subject: [PATCH 148/269] rtmutex: wire up RT's locking Signed-off-by: Thomas Gleixner Signed-off-by: Sebastian Andrzej Siewior --- - include/linux/mutex.h | 20 +++++++++++++------- - include/linux/rwsem.h | 11 +++++++++++ - include/linux/spinlock.h | 12 +++++++++++- - include/linux/spinlock_api_smp.h | 4 +++- - include/linux/spinlock_types.h | 11 ++++++++--- - 
kernel/locking/Makefile | 9 ++++++++- - kernel/locking/spinlock.c | 7 +++++++ - kernel/locking/spinlock_debug.c | 5 +++++ + include/linux/mutex.h | 20 +++++++++++++------- + include/linux/rwsem.h | 11 +++++++++++ + include/linux/spinlock.h | 12 +++++++++++- + include/linux/spinlock_api_smp.h | 4 +++- + include/linux/spinlock_types.h | 11 ++++++++--- + kernel/locking/Makefile | 9 ++++++++- + kernel/locking/spinlock.c | 7 +++++++ + kernel/locking/spinlock_debug.c | 5 +++++ 8 files changed, 66 insertions(+), 13 deletions(-) +diff --git a/include/linux/mutex.h b/include/linux/mutex.h +index 3093dd162424..cad906f54d0a 100644 --- a/include/linux/mutex.h +++ b/include/linux/mutex.h @@ -22,6 +22,17 @@ @@ -49,13 +52,15 @@ Signed-off-by: Sebastian Andrzej Siewior #define __MUTEX_INITIALIZER(lockname) \ { .owner = ATOMIC_LONG_INIT(0) \ , .wait_lock = __SPIN_LOCK_UNLOCKED(lockname.wait_lock) \ -@@ -229,4 +233,6 @@ mutex_trylock_recursive(struct mutex *lo +@@ -229,4 +233,6 @@ mutex_trylock_recursive(struct mutex *lock) return mutex_trylock(lock); } +#endif /* !PREEMPT_RT_FULL */ + #endif /* __LINUX_MUTEX_H */ +diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h +index ab93b6eae696..b1e32373f44f 100644 --- a/include/linux/rwsem.h +++ b/include/linux/rwsem.h @@ -20,6 +20,10 @@ @@ -69,7 +74,7 @@ Signed-off-by: Sebastian Andrzej Siewior struct rw_semaphore; #ifdef CONFIG_RWSEM_GENERIC_SPINLOCK -@@ -114,6 +118,13 @@ static inline int rwsem_is_contended(str +@@ -114,6 +118,13 @@ static inline int rwsem_is_contended(struct rw_semaphore *sem) return !list_empty(&sem->wait_list); } @@ -83,9 +88,11 @@ Signed-off-by: Sebastian Andrzej Siewior /* * lock for reading */ +diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h +index e089157dcf97..5f5ad0630a26 100644 --- a/include/linux/spinlock.h +++ b/include/linux/spinlock.h -@@ -298,7 +298,11 @@ static inline void do_raw_spin_unlock(ra +@@ -298,7 +298,11 @@ static inline void do_raw_spin_unlock(raw_spinlock_t *lock) 
__releases(lock) }) /* Include rwlock functions */ @@ -98,7 +105,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * Pull the _spin_*()/_read_*()/_write_*() functions/declarations: -@@ -309,6 +313,10 @@ static inline void do_raw_spin_unlock(ra +@@ -309,6 +313,10 @@ static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock) # include #endif @@ -109,7 +116,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * Map the spin_lock functions to the raw variants for PREEMPT_RT=n */ -@@ -429,6 +437,8 @@ static __always_inline int spin_is_conte +@@ -429,6 +437,8 @@ static __always_inline int spin_is_contended(spinlock_t *lock) #define assert_spin_locked(lock) assert_raw_spin_locked(&(lock)->rlock) @@ -118,9 +125,11 @@ Signed-off-by: Sebastian Andrzej Siewior /* * Pull the atomic_t declaration: * (asm-mips/atomic.h needs above definitions) +diff --git a/include/linux/spinlock_api_smp.h b/include/linux/spinlock_api_smp.h +index 42dfab89e740..29d99ae5a8ab 100644 --- a/include/linux/spinlock_api_smp.h +++ b/include/linux/spinlock_api_smp.h -@@ -187,6 +187,8 @@ static inline int __raw_spin_trylock_bh( +@@ -187,6 +187,8 @@ static inline int __raw_spin_trylock_bh(raw_spinlock_t *lock) return 0; } @@ -130,6 +139,8 @@ Signed-off-by: Sebastian Andrzej Siewior +#endif #endif /* __LINUX_SPINLOCK_API_SMP_H */ +diff --git a/include/linux/spinlock_types.h b/include/linux/spinlock_types.h +index 5c8664d57fb8..10bac715ea96 100644 --- a/include/linux/spinlock_types.h +++ b/include/linux/spinlock_types.h @@ -11,8 +11,13 @@ @@ -149,6 +160,8 @@ Signed-off-by: Sebastian Andrzej Siewior +#endif #endif /* __LINUX_SPINLOCK_TYPES_H */ +diff --git a/kernel/locking/Makefile b/kernel/locking/Makefile +index 392c7f23af76..c0bf04b6b965 100644 --- a/kernel/locking/Makefile +++ b/kernel/locking/Makefile @@ -3,7 +3,7 @@ @@ -160,7 +173,7 @@ Signed-off-by: Sebastian Andrzej Siewior ifdef CONFIG_FUNCTION_TRACER CFLAGS_REMOVE_lockdep.o = $(CC_FLAGS_FTRACE) -@@ -12,7 +12,11 @@ 
CFLAGS_REMOVE_mutex-debug.o = $(CC_FLAGS +@@ -12,7 +12,11 @@ CFLAGS_REMOVE_mutex-debug.o = $(CC_FLAGS_FTRACE) CFLAGS_REMOVE_rtmutex-debug.o = $(CC_FLAGS_FTRACE) endif @@ -184,9 +197,11 @@ Signed-off-by: Sebastian Andrzej Siewior obj-$(CONFIG_QUEUED_RWLOCKS) += qrwlock.o obj-$(CONFIG_LOCK_TORTURE_TEST) += locktorture.o obj-$(CONFIG_WW_MUTEX_SELFTEST) += test-ww_mutex.o +diff --git a/kernel/locking/spinlock.c b/kernel/locking/spinlock.c +index 936f3d14dd6b..e89b70f474af 100644 --- a/kernel/locking/spinlock.c +++ b/kernel/locking/spinlock.c -@@ -117,8 +117,11 @@ void __lockfunc __raw_##op##_lock_bh(loc +@@ -117,8 +117,11 @@ void __lockfunc __raw_##op##_lock_bh(locktype##_t *lock) \ * __[spin|read|write]_lock_bh() */ BUILD_LOCK_OPS(spin, raw_spinlock); @@ -198,7 +213,7 @@ Signed-off-by: Sebastian Andrzej Siewior #endif -@@ -202,6 +205,8 @@ void __lockfunc _raw_spin_unlock_bh(raw_ +@@ -202,6 +205,8 @@ void __lockfunc _raw_spin_unlock_bh(raw_spinlock_t *lock) EXPORT_SYMBOL(_raw_spin_unlock_bh); #endif @@ -207,7 +222,7 @@ Signed-off-by: Sebastian Andrzej Siewior #ifndef CONFIG_INLINE_READ_TRYLOCK int __lockfunc _raw_read_trylock(rwlock_t *lock) { -@@ -346,6 +351,8 @@ void __lockfunc _raw_write_unlock_bh(rwl +@@ -346,6 +351,8 @@ void __lockfunc _raw_write_unlock_bh(rwlock_t *lock) EXPORT_SYMBOL(_raw_write_unlock_bh); #endif @@ -216,9 +231,11 @@ Signed-off-by: Sebastian Andrzej Siewior #ifdef CONFIG_DEBUG_LOCK_ALLOC void __lockfunc _raw_spin_lock_nested(raw_spinlock_t *lock, int subclass) +diff --git a/kernel/locking/spinlock_debug.c b/kernel/locking/spinlock_debug.c +index 9aa0fccd5d43..76d0b40d9193 100644 --- a/kernel/locking/spinlock_debug.c +++ b/kernel/locking/spinlock_debug.c -@@ -31,6 +31,7 @@ void __raw_spin_lock_init(raw_spinlock_t +@@ -31,6 +31,7 @@ void __raw_spin_lock_init(raw_spinlock_t *lock, const char *name, EXPORT_SYMBOL(__raw_spin_lock_init); @@ -226,7 +243,7 @@ Signed-off-by: Sebastian Andrzej Siewior void __rwlock_init(rwlock_t *lock, const char *name, 
struct lock_class_key *key) { -@@ -48,6 +49,7 @@ void __rwlock_init(rwlock_t *lock, const +@@ -48,6 +49,7 @@ void __rwlock_init(rwlock_t *lock, const char *name, } EXPORT_SYMBOL(__rwlock_init); @@ -234,7 +251,7 @@ Signed-off-by: Sebastian Andrzej Siewior static void spin_dump(raw_spinlock_t *lock, const char *msg) { -@@ -135,6 +137,7 @@ void do_raw_spin_unlock(raw_spinlock_t * +@@ -135,6 +137,7 @@ void do_raw_spin_unlock(raw_spinlock_t *lock) arch_spin_unlock(&lock->raw_lock); } @@ -248,3 +265,6 @@ Signed-off-by: Sebastian Andrzej Siewior } + +#endif +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0151-rtmutex-add-ww_mutex-addon-for-mutex-rt.patch b/kernel/patches-4.19.x-rt/0149-rtmutex-add-ww_mutex-addon-for-mutex-rt.patch similarity index 87% rename from kernel/patches-4.19.x-rt/0151-rtmutex-add-ww_mutex-addon-for-mutex-rt.patch rename to kernel/patches-4.19.x-rt/0149-rtmutex-add-ww_mutex-addon-for-mutex-rt.patch index f06eb9031..c7cb5e1c6 100644 --- a/kernel/patches-4.19.x-rt/0151-rtmutex-add-ww_mutex-addon-for-mutex-rt.patch +++ b/kernel/patches-4.19.x-rt/0149-rtmutex-add-ww_mutex-addon-for-mutex-rt.patch @@ -1,14 +1,17 @@ +From cb5d05fc6f3f2a23c0dc2d3cdf925e62d8e9e13f Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Thu, 12 Oct 2017 17:34:38 +0200 -Subject: rtmutex: add ww_mutex addon for mutex-rt +Subject: [PATCH 149/269] rtmutex: add ww_mutex addon for mutex-rt Signed-off-by: Sebastian Andrzej Siewior --- - kernel/locking/rtmutex.c | 271 ++++++++++++++++++++++++++++++++++++++-- - kernel/locking/rtmutex_common.h | 2 - kernel/locking/rwsem-rt.c | 2 + kernel/locking/rtmutex.c | 271 ++++++++++++++++++++++++++++++-- + kernel/locking/rtmutex_common.h | 2 + + kernel/locking/rwsem-rt.c | 2 +- 3 files changed, 261 insertions(+), 14 deletions(-) +diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c +index 88df1ff7ca2d..1f2dc2dfe2e7 100644 --- a/kernel/locking/rtmutex.c +++ b/kernel/locking/rtmutex.c @@ -23,6 +23,7 @@ @@ -60,7 +63,7 @@ 
Signed-off-by: Sebastian Andrzej Siewior static inline int try_to_take_rt_mutex(struct rt_mutex *lock, struct task_struct *task, struct rt_mutex_waiter *waiter) -@@ -1523,7 +1558,8 @@ void rt_mutex_init_waiter(struct rt_mute +@@ -1523,7 +1558,8 @@ void rt_mutex_init_waiter(struct rt_mutex_waiter *waiter, bool savestate) static int __sched __rt_mutex_slowlock(struct rt_mutex *lock, int state, struct hrtimer_sleeper *timeout, @@ -70,7 +73,7 @@ Signed-off-by: Sebastian Andrzej Siewior { int ret = 0; -@@ -1541,6 +1577,12 @@ static int __sched +@@ -1541,6 +1577,12 @@ __rt_mutex_slowlock(struct rt_mutex *lock, int state, break; } @@ -83,7 +86,7 @@ Signed-off-by: Sebastian Andrzej Siewior raw_spin_unlock_irq(&lock->wait_lock); debug_rt_mutex_print_deadlock(waiter); -@@ -1575,16 +1617,106 @@ static void rt_mutex_handle_deadlock(int +@@ -1575,16 +1617,106 @@ static void rt_mutex_handle_deadlock(int res, int detect_deadlock, } } @@ -191,7 +194,7 @@ Signed-off-by: Sebastian Andrzej Siewior set_current_state(state); -@@ -1594,14 +1726,24 @@ int __sched rt_mutex_slowlock_locked(str +@@ -1594,14 +1726,24 @@ int __sched rt_mutex_slowlock_locked(struct rt_mutex *lock, int state, ret = task_blocks_on_rt_mutex(lock, waiter, current, chwalk); @@ -219,7 +222,7 @@ Signed-off-by: Sebastian Andrzej Siewior } /* -@@ -1618,7 +1760,8 @@ int __sched rt_mutex_slowlock_locked(str +@@ -1618,7 +1760,8 @@ int __sched rt_mutex_slowlock_locked(struct rt_mutex *lock, int state, static int __sched rt_mutex_slowlock(struct rt_mutex *lock, int state, struct hrtimer_sleeper *timeout, @@ -229,7 +232,7 @@ Signed-off-by: Sebastian Andrzej Siewior { struct rt_mutex_waiter waiter; unsigned long flags; -@@ -1636,7 +1779,8 @@ rt_mutex_slowlock(struct rt_mutex *lock, +@@ -1636,7 +1779,8 @@ rt_mutex_slowlock(struct rt_mutex *lock, int state, */ raw_spin_lock_irqsave(&lock->wait_lock, flags); @@ -239,7 +242,7 @@ Signed-off-by: Sebastian Andrzej Siewior raw_spin_unlock_irqrestore(&lock->wait_lock, flags); -@@ 
-1766,29 +1910,33 @@ static bool __sched rt_mutex_slowunlock( +@@ -1766,29 +1910,33 @@ static bool __sched rt_mutex_slowunlock(struct rt_mutex *lock, */ static inline int rt_mutex_fastlock(struct rt_mutex *lock, int state, @@ -277,7 +280,7 @@ Signed-off-by: Sebastian Andrzej Siewior } static inline int -@@ -1833,7 +1981,7 @@ rt_mutex_fastunlock(struct rt_mutex *loc +@@ -1833,7 +1981,7 @@ rt_mutex_fastunlock(struct rt_mutex *lock, int __sched __rt_mutex_lock_state(struct rt_mutex *lock, int state) { might_sleep(); @@ -286,7 +289,7 @@ Signed-off-by: Sebastian Andrzej Siewior } /** -@@ -1953,6 +2101,7 @@ rt_mutex_timed_lock(struct rt_mutex *loc +@@ -1953,6 +2101,7 @@ rt_mutex_timed_lock(struct rt_mutex *lock, struct hrtimer_sleeper *timeout) mutex_acquire(&lock->dep_map, 0, 0, _RET_IP_); ret = rt_mutex_timed_fastlock(lock, TASK_INTERRUPTIBLE, timeout, RT_MUTEX_MIN_CHAINWALK, @@ -294,7 +297,7 @@ Signed-off-by: Sebastian Andrzej Siewior rt_mutex_slowlock); if (ret) mutex_release(&lock->dep_map, 1, _RET_IP_); -@@ -2300,7 +2449,7 @@ int rt_mutex_wait_proxy_lock(struct rt_m +@@ -2300,7 +2449,7 @@ int rt_mutex_wait_proxy_lock(struct rt_mutex *lock, raw_spin_lock_irq(&lock->wait_lock); /* sleep on the mutex */ set_current_state(TASK_INTERRUPTIBLE); @@ -303,7 +306,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * try_to_take_rt_mutex() sets the waiter bit unconditionally. We might * have to fix that up. 
-@@ -2385,3 +2534,99 @@ bool rt_mutex_cleanup_proxy_lock(struct +@@ -2385,3 +2534,99 @@ bool rt_mutex_cleanup_proxy_lock(struct rt_mutex *lock, return cleanup; } @@ -403,9 +406,11 @@ Signed-off-by: Sebastian Andrzej Siewior +} +EXPORT_SYMBOL(__rt_mutex_owner_current); +#endif +diff --git a/kernel/locking/rtmutex_common.h b/kernel/locking/rtmutex_common.h +index 6fcf0a3e180d..546aaf058b9e 100644 --- a/kernel/locking/rtmutex_common.h +++ b/kernel/locking/rtmutex_common.h -@@ -165,6 +165,7 @@ extern void rt_mutex_postunlock(struct w +@@ -165,6 +165,7 @@ extern void rt_mutex_postunlock(struct wake_q_head *wake_q, struct wake_q_head *wake_sleeper_q); /* RW semaphore special interface */ @@ -413,7 +418,7 @@ Signed-off-by: Sebastian Andrzej Siewior extern int __rt_mutex_lock_state(struct rt_mutex *lock, int state); extern int __rt_mutex_trylock(struct rt_mutex *lock); -@@ -172,6 +173,7 @@ extern void __rt_mutex_unlock(struct rt_ +@@ -172,6 +173,7 @@ extern void __rt_mutex_unlock(struct rt_mutex *lock); int __sched rt_mutex_slowlock_locked(struct rt_mutex *lock, int state, struct hrtimer_sleeper *timeout, enum rtmutex_chainwalk chwalk, @@ -421,9 +426,11 @@ Signed-off-by: Sebastian Andrzej Siewior struct rt_mutex_waiter *waiter); void __sched rt_spin_lock_slowlock_locked(struct rt_mutex *lock, struct rt_mutex_waiter *waiter, +diff --git a/kernel/locking/rwsem-rt.c b/kernel/locking/rwsem-rt.c +index 7d3c5cf3d23d..660e22caf709 100644 --- a/kernel/locking/rwsem-rt.c +++ b/kernel/locking/rwsem-rt.c -@@ -131,7 +131,7 @@ static int __sched __down_read_common(st +@@ -131,7 +131,7 @@ static int __sched __down_read_common(struct rw_semaphore *sem, int state) */ rt_mutex_init_waiter(&waiter, false); ret = rt_mutex_slowlock_locked(m, state, NULL, RT_MUTEX_MIN_CHAINWALK, @@ -432,3 +439,6 @@ Signed-off-by: Sebastian Andrzej Siewior /* * The slowlock() above is guaranteed to return with the rtmutex (for * ret = 0) is now held, so there can't be a writer active. 
Increment +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0152-kconfig-preempt-rt-full.patch b/kernel/patches-4.19.x-rt/0150-kconfig-Add-PREEMPT_RT_FULL.patch similarity index 72% rename from kernel/patches-4.19.x-rt/0152-kconfig-preempt-rt-full.patch rename to kernel/patches-4.19.x-rt/0150-kconfig-Add-PREEMPT_RT_FULL.patch index d1d7a5865..3430fe1c3 100644 --- a/kernel/patches-4.19.x-rt/0152-kconfig-preempt-rt-full.patch +++ b/kernel/patches-4.19.x-rt/0150-kconfig-Add-PREEMPT_RT_FULL.patch @@ -1,24 +1,29 @@ -Subject: kconfig: Add PREEMPT_RT_FULL +From 77032b07bcce84656ba960fea1a786fda5dcd81a Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Wed, 29 Jun 2011 14:58:57 +0200 +Subject: [PATCH 150/269] kconfig: Add PREEMPT_RT_FULL Introduce the final symbol for PREEMPT_RT_FULL. Signed-off-by: Thomas Gleixner --- - init/Makefile | 2 +- - kernel/Kconfig.preempt | 8 ++++++++ - scripts/mkcompile_h | 4 +++- + init/Makefile | 2 +- + kernel/Kconfig.preempt | 8 ++++++++ + scripts/mkcompile_h | 4 +++- 3 files changed, 12 insertions(+), 2 deletions(-) +diff --git a/init/Makefile b/init/Makefile +index a3e5ce2bcf08..7779232563ae 100644 --- a/init/Makefile +++ b/init/Makefile -@@ -34,4 +34,4 @@ mounts-$(CONFIG_BLK_DEV_MD) += do_mounts +@@ -34,4 +34,4 @@ silent_chk_compile.h = : include/generated/compile.h: FORCE @$($(quiet)chk_compile.h) $(Q)$(CONFIG_SHELL) $(srctree)/scripts/mkcompile_h $@ \ - "$(UTS_MACHINE)" "$(CONFIG_SMP)" "$(CONFIG_PREEMPT)" "$(CC) $(KBUILD_CFLAGS)" + "$(UTS_MACHINE)" "$(CONFIG_SMP)" "$(CONFIG_PREEMPT)" "$(CONFIG_PREEMPT_RT_FULL)" "$(CC) $(KBUILD_CFLAGS)" +diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt +index 027db5976c2f..907d72b3ba95 100644 --- a/kernel/Kconfig.preempt +++ b/kernel/Kconfig.preempt @@ -69,6 +69,14 @@ config PREEMPT_RTB @@ -36,6 +41,8 @@ Signed-off-by: Thomas Gleixner endchoice config PREEMPT_COUNT +diff --git a/scripts/mkcompile_h b/scripts/mkcompile_h +index 87f1fc9801d7..f67b15236936 100755 --- a/scripts/mkcompile_h 
+++ b/scripts/mkcompile_h @@ -5,7 +5,8 @@ TARGET=$1 @@ -56,3 +63,6 @@ Signed-off-by: Thomas Gleixner UTS_VERSION="$UTS_VERSION $CONFIG_FLAGS $TIMESTAMP" # Truncate to maximum length +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0153-locking-rt-mutex-fix-deadlock-in-device-mapper-block.patch b/kernel/patches-4.19.x-rt/0151-locking-rt-mutex-fix-deadlock-in-device-mapper-block.patch similarity index 83% rename from kernel/patches-4.19.x-rt/0153-locking-rt-mutex-fix-deadlock-in-device-mapper-block.patch rename to kernel/patches-4.19.x-rt/0151-locking-rt-mutex-fix-deadlock-in-device-mapper-block.patch index 69e54df77..f13969df9 100644 --- a/kernel/patches-4.19.x-rt/0153-locking-rt-mutex-fix-deadlock-in-device-mapper-block.patch +++ b/kernel/patches-4.19.x-rt/0151-locking-rt-mutex-fix-deadlock-in-device-mapper-block.patch @@ -1,6 +1,8 @@ +From 810f1d5d210b1101d5b93300358d6362861ea392 Mon Sep 17 00:00:00 2001 From: Mikulas Patocka Date: Mon, 13 Nov 2017 12:56:53 -0500 -Subject: [PATCH] locking/rt-mutex: fix deadlock in device mapper / block-IO +Subject: [PATCH 151/269] locking/rt-mutex: fix deadlock in device mapper / + block-IO When some block device driver creates a bio and submits it to another block device driver, the bio is added to current->bio_list (in order to @@ -32,9 +34,11 @@ CC: stable-rt@vger.kernel.org Signed-off-by: Mikulas Patocka Signed-off-by: Sebastian Andrzej Siewior --- - kernel/locking/rtmutex.c | 13 +++++++++++++ + kernel/locking/rtmutex.c | 13 +++++++++++++ 1 file changed, 13 insertions(+) +diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c +index 1f2dc2dfe2e7..b38c3a92dce8 100644 --- a/kernel/locking/rtmutex.c +++ b/kernel/locking/rtmutex.c @@ -24,6 +24,7 @@ @@ -45,7 +49,7 @@ Signed-off-by: Sebastian Andrzej Siewior #include "rtmutex_common.h" -@@ -1919,6 +1920,15 @@ rt_mutex_fastlock(struct rt_mutex *lock, +@@ -1919,6 +1920,15 @@ rt_mutex_fastlock(struct rt_mutex *lock, int state, if (likely(rt_mutex_cmpxchg_acquire(lock, 
NULL, current))) return 0; @@ -61,7 +65,7 @@ Signed-off-by: Sebastian Andrzej Siewior return slowfn(lock, state, NULL, RT_MUTEX_MIN_CHAINWALK, ww_ctx); } -@@ -1936,6 +1946,9 @@ rt_mutex_timed_fastlock(struct rt_mutex +@@ -1936,6 +1946,9 @@ rt_mutex_timed_fastlock(struct rt_mutex *lock, int state, likely(rt_mutex_cmpxchg_acquire(lock, NULL, current))) return 0; @@ -71,3 +75,6 @@ Signed-off-by: Sebastian Andrzej Siewior return slowfn(lock, state, timeout, chwalk, ww_ctx); } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0154-locking-rt-mutex-Flush-block-plug-on-__down_read.patch b/kernel/patches-4.19.x-rt/0152-locking-rt-mutex-Flush-block-plug-on-__down_read.patch similarity index 72% rename from kernel/patches-4.19.x-rt/0154-locking-rt-mutex-Flush-block-plug-on-__down_read.patch rename to kernel/patches-4.19.x-rt/0152-locking-rt-mutex-Flush-block-plug-on-__down_read.patch index 6fe10914c..f28a3c08c 100644 --- a/kernel/patches-4.19.x-rt/0154-locking-rt-mutex-Flush-block-plug-on-__down_read.patch +++ b/kernel/patches-4.19.x-rt/0152-locking-rt-mutex-Flush-block-plug-on-__down_read.patch @@ -1,6 +1,7 @@ +From 9c3afee65f743bf1492e76f16139111e10d8f205 Mon Sep 17 00:00:00 2001 From: Scott Wood Date: Fri, 4 Jan 2019 15:33:21 -0500 -Subject: [PATCH] locking/rt-mutex: Flush block plug on __down_read() +Subject: [PATCH 152/269] locking/rt-mutex: Flush block plug on __down_read() __down_read() bypasses the rtmutex frontend to call rt_mutex_slowlock_locked() directly, and thus it needs to call @@ -10,9 +11,11 @@ Cc: stable-rt@vger.kernel.org Signed-off-by: Scott Wood Signed-off-by: Sebastian Andrzej Siewior --- - kernel/locking/rwsem-rt.c | 9 +++++++++ + kernel/locking/rwsem-rt.c | 9 +++++++++ 1 file changed, 9 insertions(+) +diff --git a/kernel/locking/rwsem-rt.c b/kernel/locking/rwsem-rt.c +index 660e22caf709..f518495bd6cc 100644 --- a/kernel/locking/rwsem-rt.c +++ b/kernel/locking/rwsem-rt.c @@ -1,5 +1,6 @@ @@ -22,7 +25,7 @@ Signed-off-by: Sebastian Andrzej Siewior 
#include #include #include -@@ -87,6 +88,14 @@ static int __sched __down_read_common(st +@@ -87,6 +88,14 @@ static int __sched __down_read_common(struct rw_semaphore *sem, int state) if (__down_read_trylock(sem)) return 0; @@ -37,3 +40,6 @@ Signed-off-by: Sebastian Andrzej Siewior might_sleep(); raw_spin_lock_irq(&m->wait_lock); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0155-locking-rtmutex-re-init-the-wait_lock-in-rt_mutex_in.patch b/kernel/patches-4.19.x-rt/0153-locking-rtmutex-re-init-the-wait_lock-in-rt_mutex_in.patch similarity index 71% rename from kernel/patches-4.19.x-rt/0155-locking-rtmutex-re-init-the-wait_lock-in-rt_mutex_in.patch rename to kernel/patches-4.19.x-rt/0153-locking-rtmutex-re-init-the-wait_lock-in-rt_mutex_in.patch index 6212718cf..ac45429ab 100644 --- a/kernel/patches-4.19.x-rt/0155-locking-rtmutex-re-init-the-wait_lock-in-rt_mutex_in.patch +++ b/kernel/patches-4.19.x-rt/0153-locking-rtmutex-re-init-the-wait_lock-in-rt_mutex_in.patch @@ -1,6 +1,7 @@ +From 4a9a885ab4f7e220568aa7c19704f1f6b020f545 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Thu, 16 Nov 2017 16:48:48 +0100 -Subject: [PATCH] locking/rtmutex: re-init the wait_lock in +Subject: [PATCH 153/269] locking/rtmutex: re-init the wait_lock in rt_mutex_init_proxy_locked() We could provide a key-class for the lockdep (and fixup all callers) or @@ -10,12 +11,14 @@ seeing a double-lock of the wait_lock. 
Reported-by: Fernando Lopez-Lezcano Signed-off-by: Sebastian Andrzej Siewior --- - kernel/locking/rtmutex.c | 8 ++++++++ + kernel/locking/rtmutex.c | 8 ++++++++ 1 file changed, 8 insertions(+) +diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c +index b38c3a92dce8..94788662b2f2 100644 --- a/kernel/locking/rtmutex.c +++ b/kernel/locking/rtmutex.c -@@ -2281,6 +2281,14 @@ void rt_mutex_init_proxy_locked(struct r +@@ -2281,6 +2281,14 @@ void rt_mutex_init_proxy_locked(struct rt_mutex *lock, struct task_struct *proxy_owner) { __rt_mutex_init(lock, NULL, NULL); @@ -30,3 +33,6 @@ Signed-off-by: Sebastian Andrzej Siewior debug_rt_mutex_proxy_lock(lock, proxy_owner); rt_mutex_set_owner(lock, proxy_owner); } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0156-ptrace-fix-ptrace-vs-tasklist_lock-race.patch b/kernel/patches-4.19.x-rt/0154-ptrace-fix-ptrace-vs-tasklist_lock-race.patch similarity index 81% rename from kernel/patches-4.19.x-rt/0156-ptrace-fix-ptrace-vs-tasklist_lock-race.patch rename to kernel/patches-4.19.x-rt/0154-ptrace-fix-ptrace-vs-tasklist_lock-race.patch index 6a81b2669..86fbca9f9 100644 --- a/kernel/patches-4.19.x-rt/0156-ptrace-fix-ptrace-vs-tasklist_lock-race.patch +++ b/kernel/patches-4.19.x-rt/0154-ptrace-fix-ptrace-vs-tasklist_lock-race.patch @@ -1,6 +1,7 @@ +From de7eff6fda53e683a83289d9c0c0a2d774fbfe92 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Thu, 29 Aug 2013 18:21:04 +0200 -Subject: ptrace: fix ptrace vs tasklist_lock race +Subject: [PATCH 154/269] ptrace: fix ptrace vs tasklist_lock race As explained by Alexander Fyodorov : @@ -23,11 +24,13 @@ taken in case the caller is interrupted between looking into ->state and Signed-off-by: Sebastian Andrzej Siewior --- - include/linux/sched.h | 49 +++++++++++++++++++++++++++++++++++++++++++++---- - kernel/ptrace.c | 9 ++++++++- - kernel/sched/core.c | 17 +++++++++++++++-- + include/linux/sched.h | 49 +++++++++++++++++++++++++++++++++++++++---- + kernel/ptrace.c | 9 
+++++++- + kernel/sched/core.c | 17 +++++++++++++-- 3 files changed, 68 insertions(+), 7 deletions(-) +diff --git a/include/linux/sched.h b/include/linux/sched.h +index 1797fd3c8cbb..25e9a40f9576 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -101,12 +101,8 @@ struct task_group; @@ -43,7 +46,7 @@ Signed-off-by: Sebastian Andrzej Siewior #define task_contributes_to_load(task) ((task->state & TASK_UNINTERRUPTIBLE) != 0 && \ (task->flags & PF_FROZEN) == 0 && \ (task->state & TASK_NOLOAD) == 0) -@@ -1709,6 +1705,51 @@ static inline int test_tsk_need_resched( +@@ -1709,6 +1705,51 @@ static inline int test_tsk_need_resched(struct task_struct *tsk) return unlikely(test_tsk_thread_flag(tsk,TIF_NEED_RESCHED)); } @@ -95,9 +98,11 @@ Signed-off-by: Sebastian Andrzej Siewior /* * cond_resched() and cond_resched_lock(): latency reduction via * explicit rescheduling in places that are safe. The return +diff --git a/kernel/ptrace.c b/kernel/ptrace.c +index 21fec73d45d4..9c8d6f9f3a3a 100644 --- a/kernel/ptrace.c +++ b/kernel/ptrace.c -@@ -175,7 +175,14 @@ static bool ptrace_freeze_traced(struct +@@ -175,7 +175,14 @@ static bool ptrace_freeze_traced(struct task_struct *task) spin_lock_irq(&task->sighand->siglock); if (task_is_traced(task) && !__fatal_signal_pending(task)) { @@ -113,9 +118,11 @@ Signed-off-by: Sebastian Andrzej Siewior ret = true; } spin_unlock_irq(&task->sighand->siglock); +diff --git a/kernel/sched/core.c b/kernel/sched/core.c +index e699500aea26..14eb51dae23d 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c -@@ -1347,6 +1347,18 @@ int migrate_swap(struct task_struct *cur +@@ -1349,6 +1349,18 @@ int migrate_swap(struct task_struct *cur, struct task_struct *p, } #endif /* CONFIG_NUMA_BALANCING */ @@ -134,7 +141,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * wait_task_inactive - wait for a thread to unschedule. 
* -@@ -1391,7 +1403,7 @@ unsigned long wait_task_inactive(struct +@@ -1393,7 +1405,7 @@ unsigned long wait_task_inactive(struct task_struct *p, long match_state) * is actually now running somewhere else! */ while (task_running(rq, p)) { @@ -143,7 +150,7 @@ Signed-off-by: Sebastian Andrzej Siewior return 0; cpu_relax(); } -@@ -1406,7 +1418,8 @@ unsigned long wait_task_inactive(struct +@@ -1408,7 +1420,8 @@ unsigned long wait_task_inactive(struct task_struct *p, long match_state) running = task_running(rq, p); queued = task_on_rq_queued(p); ncsw = 0; @@ -153,3 +160,6 @@ Signed-off-by: Sebastian Andrzej Siewior ncsw = p->nvcsw | LONG_MIN; /* sets MSB */ task_rq_unlock(rq, p, &rf); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0157-rtmutex-annotate-sleeping-lock-context.patch b/kernel/patches-4.19.x-rt/0155-rtmutex-annotate-sleeping-lock-context.patch similarity index 77% rename from kernel/patches-4.19.x-rt/0157-rtmutex-annotate-sleeping-lock-context.patch rename to kernel/patches-4.19.x-rt/0155-rtmutex-annotate-sleeping-lock-context.patch index fb25f58a7..97418efde 100644 --- a/kernel/patches-4.19.x-rt/0157-rtmutex-annotate-sleeping-lock-context.patch +++ b/kernel/patches-4.19.x-rt/0155-rtmutex-annotate-sleeping-lock-context.patch @@ -1,6 +1,7 @@ +From 2a9b009589ed8b11c6c94e2af70c3d6fc4c957b8 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Thu, 21 Sep 2017 14:25:13 +0200 -Subject: [PATCH] rtmutex: annotate sleeping lock context +Subject: [PATCH 155/269] rtmutex: annotate sleeping lock context The RCU code complains on schedule() within a rcu_readlock() section. The valid scenario on -RT is if a sleeping is held. In order to suppress @@ -23,14 +24,16 @@ cpu_chill() to avoid the RCU warning from there. 
Reported-by: Grygorii Strashko Signed-off-by: Sebastian Andrzej Siewior --- - include/linux/preempt.h | 9 +++++++++ - include/linux/sched.h | 26 ++++++++++++++++++++++++++ - kernel/locking/rtmutex.c | 12 ++++++++++-- - kernel/locking/rwlock-rt.c | 18 ++++++++++++++---- - kernel/rcu/tree_plugin.h | 6 +++++- - kernel/sched/core.c | 45 +++++++++++++++++++++++++++++++++++++++++++++ + include/linux/preempt.h | 9 ++++++++ + include/linux/sched.h | 26 ++++++++++++++++++++++ + kernel/locking/rtmutex.c | 12 ++++++++-- + kernel/locking/rwlock-rt.c | 18 +++++++++++---- + kernel/rcu/tree_plugin.h | 6 ++++- + kernel/sched/core.c | 45 ++++++++++++++++++++++++++++++++++++++ 6 files changed, 109 insertions(+), 7 deletions(-) +diff --git a/include/linux/preempt.h b/include/linux/preempt.h +index 27c3176d88d2..9eafc34898b4 100644 --- a/include/linux/preempt.h +++ b/include/linux/preempt.h @@ -211,6 +211,15 @@ extern void migrate_enable(void); @@ -49,6 +52,8 @@ Signed-off-by: Sebastian Andrzej Siewior #else #define migrate_disable() barrier() #define migrate_enable() barrier() +diff --git a/include/linux/sched.h b/include/linux/sched.h +index 25e9a40f9576..8f0bb5f6d39e 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -673,6 +673,15 @@ struct task_struct { @@ -67,7 +72,7 @@ Signed-off-by: Sebastian Andrzej Siewior #endif #ifdef CONFIG_PREEMPT_RCU -@@ -1802,6 +1811,23 @@ static __always_inline bool need_resched +@@ -1802,6 +1811,23 @@ static __always_inline bool need_resched(void) return unlikely(tif_need_resched()); } @@ -91,9 +96,11 @@ Signed-off-by: Sebastian Andrzej Siewior /* * Wrappers for p->thread_info->cpu access. No-op on UP. 
*/ +diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c +index 94788662b2f2..2a9bf2443acc 100644 --- a/kernel/locking/rtmutex.c +++ b/kernel/locking/rtmutex.c -@@ -1141,6 +1141,7 @@ void __sched rt_spin_lock_slowunlock(str +@@ -1141,6 +1141,7 @@ void __sched rt_spin_lock_slowunlock(struct rt_mutex *lock) void __lockfunc rt_spin_lock(spinlock_t *lock) { @@ -101,7 +108,7 @@ Signed-off-by: Sebastian Andrzej Siewior migrate_disable(); spin_acquire(&lock->dep_map, 0, 0, _RET_IP_); rt_spin_lock_fastlock(&lock->lock, rt_spin_lock_slowlock); -@@ -1155,6 +1156,7 @@ void __lockfunc __rt_spin_lock(struct rt +@@ -1155,6 +1156,7 @@ void __lockfunc __rt_spin_lock(struct rt_mutex *lock) #ifdef CONFIG_DEBUG_LOCK_ALLOC void __lockfunc rt_spin_lock_nested(spinlock_t *lock, int subclass) { @@ -109,7 +116,7 @@ Signed-off-by: Sebastian Andrzej Siewior migrate_disable(); spin_acquire(&lock->dep_map, subclass, 0, _RET_IP_); rt_spin_lock_fastlock(&lock->lock, rt_spin_lock_slowlock); -@@ -1168,6 +1170,7 @@ void __lockfunc rt_spin_unlock(spinlock_ +@@ -1168,6 +1170,7 @@ void __lockfunc rt_spin_unlock(spinlock_t *lock) spin_release(&lock->dep_map, 1, _RET_IP_); rt_spin_lock_fastunlock(&lock->lock, rt_spin_lock_slowunlock); migrate_enable(); @@ -117,7 +124,7 @@ Signed-off-by: Sebastian Andrzej Siewior } EXPORT_SYMBOL(rt_spin_unlock); -@@ -1193,12 +1196,15 @@ int __lockfunc rt_spin_trylock(spinlock_ +@@ -1193,12 +1196,15 @@ int __lockfunc rt_spin_trylock(spinlock_t *lock) { int ret; @@ -135,7 +142,7 @@ Signed-off-by: Sebastian Andrzej Siewior return ret; } EXPORT_SYMBOL(rt_spin_trylock); -@@ -1210,6 +1216,7 @@ int __lockfunc rt_spin_trylock_bh(spinlo +@@ -1210,6 +1216,7 @@ int __lockfunc rt_spin_trylock_bh(spinlock_t *lock) local_bh_disable(); ret = __rt_mutex_trylock(&lock->lock); if (ret) { @@ -143,7 +150,7 @@ Signed-off-by: Sebastian Andrzej Siewior migrate_disable(); spin_acquire(&lock->dep_map, 0, 1, _RET_IP_); } else -@@ -1225,6 +1232,7 @@ int __lockfunc 
rt_spin_trylock_irqsave(s +@@ -1225,6 +1232,7 @@ int __lockfunc rt_spin_trylock_irqsave(spinlock_t *lock, unsigned long *flags) *flags = 0; ret = __rt_mutex_trylock(&lock->lock); if (ret) { @@ -151,9 +158,11 @@ Signed-off-by: Sebastian Andrzej Siewior migrate_disable(); spin_acquire(&lock->dep_map, 0, 1, _RET_IP_); } +diff --git a/kernel/locking/rwlock-rt.c b/kernel/locking/rwlock-rt.c +index 8f90afe111ce..c3b91205161c 100644 --- a/kernel/locking/rwlock-rt.c +++ b/kernel/locking/rwlock-rt.c -@@ -305,12 +305,15 @@ int __lockfunc rt_read_trylock(rwlock_t +@@ -305,12 +305,15 @@ int __lockfunc rt_read_trylock(rwlock_t *rwlock) { int ret; @@ -171,7 +180,7 @@ Signed-off-by: Sebastian Andrzej Siewior return ret; } EXPORT_SYMBOL(rt_read_trylock); -@@ -319,18 +322,22 @@ int __lockfunc rt_write_trylock(rwlock_t +@@ -319,18 +322,22 @@ int __lockfunc rt_write_trylock(rwlock_t *rwlock) { int ret; @@ -204,7 +213,7 @@ Signed-off-by: Sebastian Andrzej Siewior migrate_disable(); rwlock_acquire(&rwlock->dep_map, 0, 0, _RET_IP_); do_write_rt_lock(rwlock); -@@ -350,6 +358,7 @@ void __lockfunc rt_read_unlock(rwlock_t +@@ -350,6 +358,7 @@ void __lockfunc rt_read_unlock(rwlock_t *rwlock) rwlock_release(&rwlock->dep_map, 1, _RET_IP_); do_read_rt_unlock(rwlock); migrate_enable(); @@ -212,7 +221,7 @@ Signed-off-by: Sebastian Andrzej Siewior } EXPORT_SYMBOL(rt_read_unlock); -@@ -358,6 +367,7 @@ void __lockfunc rt_write_unlock(rwlock_t +@@ -358,6 +367,7 @@ void __lockfunc rt_write_unlock(rwlock_t *rwlock) rwlock_release(&rwlock->dep_map, 1, _RET_IP_); do_write_rt_unlock(rwlock); migrate_enable(); @@ -220,9 +229,11 @@ Signed-off-by: Sebastian Andrzej Siewior } EXPORT_SYMBOL(rt_write_unlock); +diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h +index a97c20ea9bce..564e3927e7b0 100644 --- a/kernel/rcu/tree_plugin.h +++ b/kernel/rcu/tree_plugin.h -@@ -337,9 +337,13 @@ static void rcu_preempt_note_context_swi +@@ -337,9 +337,13 @@ static void rcu_preempt_note_context_switch(bool 
preempt) struct task_struct *t = current; struct rcu_data *rdp; struct rcu_node *rnp; @@ -237,9 +248,11 @@ Signed-off-by: Sebastian Andrzej Siewior if (t->rcu_read_lock_nesting > 0 && !t->rcu_read_unlock_special.b.blocked) { +diff --git a/kernel/sched/core.c b/kernel/sched/core.c +index 14eb51dae23d..a5226728e407 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c -@@ -7307,4 +7307,49 @@ void migrate_enable(void) +@@ -7309,4 +7309,49 @@ void migrate_enable(void) preempt_enable(); } EXPORT_SYMBOL(migrate_enable); @@ -289,3 +302,6 @@ Signed-off-by: Sebastian Andrzej Siewior +} +EXPORT_SYMBOL(migrate_enable); #endif +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0158-sched-migrate_disable-fallback-to-preempt_disable-in.patch b/kernel/patches-4.19.x-rt/0156-sched-migrate_disable-fallback-to-preempt_disable-in.patch similarity index 80% rename from kernel/patches-4.19.x-rt/0158-sched-migrate_disable-fallback-to-preempt_disable-in.patch rename to kernel/patches-4.19.x-rt/0156-sched-migrate_disable-fallback-to-preempt_disable-in.patch index 3b9f2326e..417953fee 100644 --- a/kernel/patches-4.19.x-rt/0158-sched-migrate_disable-fallback-to-preempt_disable-in.patch +++ b/kernel/patches-4.19.x-rt/0156-sched-migrate_disable-fallback-to-preempt_disable-in.patch @@ -1,7 +1,8 @@ +From 09cc5496ae17c924c25e80d5a300901957c44b54 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Thu, 5 Jul 2018 14:44:51 +0200 -Subject: [PATCH] sched/migrate_disable: fallback to preempt_disable() instead - barrier() +Subject: [PATCH 156/269] sched/migrate_disable: fallback to preempt_disable() + instead barrier() On SMP + !RT migrate_disable() is still around. It is not part of spin_lock() anymore so it has almost no users. 
However the futex code has a workaround for @@ -38,12 +39,14 @@ Cc: stable-rt@vger.kernel.org Reported-by: joe.korty@concurrent-rt.com Signed-off-by: Sebastian Andrzej Siewior --- - include/linux/preempt.h | 6 +++--- - include/linux/sched.h | 4 ++-- - kernel/sched/core.c | 23 +++++++++++------------ - kernel/sched/debug.c | 2 +- + include/linux/preempt.h | 6 +++--- + include/linux/sched.h | 4 ++-- + kernel/sched/core.c | 23 +++++++++++------------ + kernel/sched/debug.c | 2 +- 4 files changed, 17 insertions(+), 18 deletions(-) +diff --git a/include/linux/preempt.h b/include/linux/preempt.h +index 9eafc34898b4..ed8413e7140f 100644 --- a/include/linux/preempt.h +++ b/include/linux/preempt.h @@ -204,7 +204,7 @@ do { \ @@ -55,7 +58,7 @@ Signed-off-by: Sebastian Andrzej Siewior extern void migrate_disable(void); extern void migrate_enable(void); -@@ -221,8 +221,8 @@ static inline int __migrate_disabled(str +@@ -221,8 +221,8 @@ static inline int __migrate_disabled(struct task_struct *p) } #else @@ -66,6 +69,8 @@ Signed-off-by: Sebastian Andrzej Siewior static inline int __migrate_disabled(struct task_struct *p) { return 0; +diff --git a/include/linux/sched.h b/include/linux/sched.h +index 8f0bb5f6d39e..a023e1ba5d8f 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -667,7 +667,7 @@ struct task_struct { @@ -87,9 +92,11 @@ Signed-off-by: Sebastian Andrzej Siewior int migrate_disable_atomic; # endif #endif +diff --git a/kernel/sched/core.c b/kernel/sched/core.c +index a5226728e407..fb205b1ec799 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c -@@ -1029,7 +1029,7 @@ void set_cpus_allowed_common(struct task +@@ -1031,7 +1031,7 @@ void set_cpus_allowed_common(struct task_struct *p, const struct cpumask *new_ma p->nr_cpus_allowed = cpumask_weight(new_mask); } @@ -98,7 +105,7 @@ Signed-off-by: Sebastian Andrzej Siewior int __migrate_disabled(struct task_struct *p) { return p->migrate_disable; -@@ -1069,7 +1069,7 @@ static void 
__do_set_cpus_allowed_tail(s +@@ -1071,7 +1071,7 @@ static void __do_set_cpus_allowed_tail(struct task_struct *p, void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask) { @@ -107,7 +114,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (__migrate_disabled(p)) { lockdep_assert_held(&p->pi_lock); -@@ -1142,7 +1142,7 @@ static int __set_cpus_allowed_ptr(struct +@@ -1144,7 +1144,7 @@ static int __set_cpus_allowed_ptr(struct task_struct *p, if (cpumask_test_cpu(task_cpu(p), new_mask) || __migrate_disabled(p)) goto out; @@ -116,7 +123,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (__migrate_disabled(p)) { p->migrate_disable_update = 1; goto out; -@@ -7163,7 +7163,7 @@ const u32 sched_prio_to_wmult[40] = { +@@ -7165,7 +7165,7 @@ const u32 sched_prio_to_wmult[40] = { #undef CREATE_TRACE_POINTS @@ -125,7 +132,7 @@ Signed-off-by: Sebastian Andrzej Siewior static inline void update_nr_migratory(struct task_struct *p, long delta) -@@ -7311,45 +7311,44 @@ EXPORT_SYMBOL(migrate_enable); +@@ -7313,45 +7313,44 @@ EXPORT_SYMBOL(migrate_enable); #elif !defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE) void migrate_disable(void) { @@ -178,9 +185,11 @@ Signed-off-by: Sebastian Andrzej Siewior } EXPORT_SYMBOL(migrate_enable); #endif +diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c +index 34c27afae009..cb6ad6fd2320 100644 --- a/kernel/sched/debug.c +++ b/kernel/sched/debug.c -@@ -978,7 +978,7 @@ void proc_sched_show_task(struct task_st +@@ -982,7 +982,7 @@ void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns, P(dl.runtime); P(dl.deadline); } @@ -189,3 +198,6 @@ Signed-off-by: Sebastian Andrzej Siewior P(migrate_disable); #endif P(nr_cpus_allowed); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0159-locking-don-t-check-for-__LINUX_SPINLOCK_TYPES_H-on-.patch b/kernel/patches-4.19.x-rt/0157-locking-don-t-check-for-__LINUX_SPINLOCK_TYPES_H-on-.patch similarity index 66% rename from 
kernel/patches-4.19.x-rt/0159-locking-don-t-check-for-__LINUX_SPINLOCK_TYPES_H-on-.patch rename to kernel/patches-4.19.x-rt/0157-locking-don-t-check-for-__LINUX_SPINLOCK_TYPES_H-on-.patch index 6cd8d3f31..a5c135502 100644 --- a/kernel/patches-4.19.x-rt/0159-locking-don-t-check-for-__LINUX_SPINLOCK_TYPES_H-on-.patch +++ b/kernel/patches-4.19.x-rt/0157-locking-don-t-check-for-__LINUX_SPINLOCK_TYPES_H-on-.patch @@ -1,7 +1,8 @@ +From e283cad9ed8ce6e508399dc21fde2645ff2a9259 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Fri, 4 Aug 2017 17:40:42 +0200 -Subject: [PATCH 1/2] locking: don't check for __LINUX_SPINLOCK_TYPES_H on -RT - archs +Subject: [PATCH 157/269] locking: don't check for __LINUX_SPINLOCK_TYPES_H on + -RT archs Upstream uses arch_spinlock_t within spinlock_t and requests that spinlock_types.h header file is included first. @@ -13,18 +14,20 @@ Therefore I am dropping that check. Signed-off-by: Sebastian Andrzej Siewior --- - arch/alpha/include/asm/spinlock_types.h | 4 ---- - arch/arm/include/asm/spinlock_types.h | 4 ---- - arch/arm64/include/asm/spinlock_types.h | 4 ---- - arch/hexagon/include/asm/spinlock_types.h | 4 ---- - arch/ia64/include/asm/spinlock_types.h | 4 ---- - arch/powerpc/include/asm/spinlock_types.h | 4 ---- - arch/s390/include/asm/spinlock_types.h | 4 ---- - arch/sh/include/asm/spinlock_types.h | 4 ---- - arch/xtensa/include/asm/spinlock_types.h | 4 ---- - include/linux/spinlock_types_up.h | 4 ---- + arch/alpha/include/asm/spinlock_types.h | 4 ---- + arch/arm/include/asm/spinlock_types.h | 4 ---- + arch/arm64/include/asm/spinlock_types.h | 4 ---- + arch/hexagon/include/asm/spinlock_types.h | 4 ---- + arch/ia64/include/asm/spinlock_types.h | 4 ---- + arch/powerpc/include/asm/spinlock_types.h | 4 ---- + arch/s390/include/asm/spinlock_types.h | 4 ---- + arch/sh/include/asm/spinlock_types.h | 4 ---- + arch/xtensa/include/asm/spinlock_types.h | 4 ---- + include/linux/spinlock_types_up.h | 4 ---- 10 files changed, 40 deletions(-) 
+diff --git a/arch/alpha/include/asm/spinlock_types.h b/arch/alpha/include/asm/spinlock_types.h +index 1d5716bc060b..6883bc952d22 100644 --- a/arch/alpha/include/asm/spinlock_types.h +++ b/arch/alpha/include/asm/spinlock_types.h @@ -2,10 +2,6 @@ @@ -38,6 +41,8 @@ Signed-off-by: Sebastian Andrzej Siewior typedef struct { volatile unsigned int lock; } arch_spinlock_t; +diff --git a/arch/arm/include/asm/spinlock_types.h b/arch/arm/include/asm/spinlock_types.h +index 5976958647fe..a37c0803954b 100644 --- a/arch/arm/include/asm/spinlock_types.h +++ b/arch/arm/include/asm/spinlock_types.h @@ -2,10 +2,6 @@ @@ -51,6 +56,8 @@ Signed-off-by: Sebastian Andrzej Siewior #define TICKET_SHIFT 16 typedef struct { +diff --git a/arch/arm64/include/asm/spinlock_types.h b/arch/arm64/include/asm/spinlock_types.h +index a157ff465e27..f952fdda8346 100644 --- a/arch/arm64/include/asm/spinlock_types.h +++ b/arch/arm64/include/asm/spinlock_types.h @@ -16,10 +16,6 @@ @@ -64,6 +71,8 @@ Signed-off-by: Sebastian Andrzej Siewior #include #include +diff --git a/arch/hexagon/include/asm/spinlock_types.h b/arch/hexagon/include/asm/spinlock_types.h +index 7a906b5214a4..d8f596fec022 100644 --- a/arch/hexagon/include/asm/spinlock_types.h +++ b/arch/hexagon/include/asm/spinlock_types.h @@ -21,10 +21,6 @@ @@ -77,6 +86,8 @@ Signed-off-by: Sebastian Andrzej Siewior typedef struct { volatile unsigned int lock; } arch_spinlock_t; +diff --git a/arch/ia64/include/asm/spinlock_types.h b/arch/ia64/include/asm/spinlock_types.h +index 6e345fefcdca..681408d6816f 100644 --- a/arch/ia64/include/asm/spinlock_types.h +++ b/arch/ia64/include/asm/spinlock_types.h @@ -2,10 +2,6 @@ @@ -90,6 +101,8 @@ Signed-off-by: Sebastian Andrzej Siewior typedef struct { volatile unsigned int lock; } arch_spinlock_t; +diff --git a/arch/powerpc/include/asm/spinlock_types.h b/arch/powerpc/include/asm/spinlock_types.h +index 87adaf13b7e8..7305cb6a53e4 100644 --- a/arch/powerpc/include/asm/spinlock_types.h +++ 
b/arch/powerpc/include/asm/spinlock_types.h @@ -2,10 +2,6 @@ @@ -103,6 +116,8 @@ Signed-off-by: Sebastian Andrzej Siewior typedef struct { volatile unsigned int slock; } arch_spinlock_t; +diff --git a/arch/s390/include/asm/spinlock_types.h b/arch/s390/include/asm/spinlock_types.h +index cfed272e4fd5..8e28e8176ec8 100644 --- a/arch/s390/include/asm/spinlock_types.h +++ b/arch/s390/include/asm/spinlock_types.h @@ -2,10 +2,6 @@ @@ -116,6 +131,8 @@ Signed-off-by: Sebastian Andrzej Siewior typedef struct { int lock; } __attribute__ ((aligned (4))) arch_spinlock_t; +diff --git a/arch/sh/include/asm/spinlock_types.h b/arch/sh/include/asm/spinlock_types.h +index e82369f286a2..22ca9a98bbb8 100644 --- a/arch/sh/include/asm/spinlock_types.h +++ b/arch/sh/include/asm/spinlock_types.h @@ -2,10 +2,6 @@ @@ -129,6 +146,8 @@ Signed-off-by: Sebastian Andrzej Siewior typedef struct { volatile unsigned int lock; } arch_spinlock_t; +diff --git a/arch/xtensa/include/asm/spinlock_types.h b/arch/xtensa/include/asm/spinlock_types.h +index bb1fe6c1816e..8a22f1e7b6c9 100644 --- a/arch/xtensa/include/asm/spinlock_types.h +++ b/arch/xtensa/include/asm/spinlock_types.h @@ -2,10 +2,6 @@ @@ -142,6 +161,8 @@ Signed-off-by: Sebastian Andrzej Siewior typedef struct { volatile unsigned int slock; } arch_spinlock_t; +diff --git a/include/linux/spinlock_types_up.h b/include/linux/spinlock_types_up.h +index c09b6407ae1b..b0243ba07fb7 100644 --- a/include/linux/spinlock_types_up.h +++ b/include/linux/spinlock_types_up.h @@ -1,10 +1,6 @@ @@ -155,3 +176,6 @@ Signed-off-by: Sebastian Andrzej Siewior /* * include/linux/spinlock_types_up.h - spinlock type definitions for UP * +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0160-peter_zijlstra-frob-rcu.patch b/kernel/patches-4.19.x-rt/0158-rcu-Frob-softirq-test.patch similarity index 93% rename from kernel/patches-4.19.x-rt/0160-peter_zijlstra-frob-rcu.patch rename to kernel/patches-4.19.x-rt/0158-rcu-Frob-softirq-test.patch index 13f262ca1..e9c931860 
100644 --- a/kernel/patches-4.19.x-rt/0160-peter_zijlstra-frob-rcu.patch +++ b/kernel/patches-4.19.x-rt/0158-rcu-Frob-softirq-test.patch @@ -1,6 +1,7 @@ -Subject: rcu: Frob softirq test +From 0a4604cc3cc194643ed11ab6909612b9bed4b4ad Mon Sep 17 00:00:00 2001 From: Peter Zijlstra -Date: Sat Aug 13 00:23:17 CEST 2011 +Date: Sat, 13 Aug 2011 00:23:17 +0200 +Subject: [PATCH 158/269] rcu: Frob softirq test With RT_FULL we get the below wreckage: @@ -10,15 +11,15 @@ With RT_FULL we get the below wreckage: [ 126.060490] ------------------------------------------------------- [ 126.060492] irq/24-eth0/1235 is trying to acquire lock: [ 126.060495] (&(lock)->wait_lock#2){+.+...}, at: [] rt_mutex_slowunlock+0x16/0x55 -[ 126.060503] +[ 126.060503] [ 126.060504] but task is already holding lock: [ 126.060506] (&p->pi_lock){-...-.}, at: [] try_to_wake_up+0x35/0x429 -[ 126.060511] +[ 126.060511] [ 126.060511] which lock already depends on the new lock. -[ 126.060513] -[ 126.060514] +[ 126.060513] +[ 126.060514] [ 126.060514] the existing dependency chain (in reverse order) is: -[ 126.060516] +[ 126.060516] [ 126.060516] -> #1 (&p->pi_lock){-...-.}: [ 126.060519] [] lock_acquire+0x145/0x18a [ 126.060524] [] _raw_spin_lock_irqsave+0x4b/0x85 @@ -29,7 +30,7 @@ With RT_FULL we get the below wreckage: [ 126.060541] [] rcu_boost_kthread+0x7d/0x9b [ 126.060544] [] kthread+0x99/0xa1 [ 126.060547] [] kernel_thread_helper+0x4/0x10 -[ 126.060551] +[ 126.060551] [ 126.060552] -> #0 (&(lock)->wait_lock#2){+.+...}: [ 126.060555] [] __lock_acquire+0x1157/0x1816 [ 126.060558] [] lock_acquire+0x145/0x18a @@ -49,23 +50,23 @@ With RT_FULL we get the below wreckage: [ 126.060603] [] irq_thread+0xde/0x1af [ 126.060606] [] kthread+0x99/0xa1 [ 126.060608] [] kernel_thread_helper+0x4/0x10 -[ 126.060611] +[ 126.060611] [ 126.060612] other info that might help us debug this: -[ 126.060614] +[ 126.060614] [ 126.060615] Possible unsafe locking scenario: -[ 126.060616] +[ 126.060616] [ 126.060617] CPU0 CPU1 [ 
126.060619] ---- ---- [ 126.060620] lock(&p->pi_lock); [ 126.060623] lock(&(lock)->wait_lock); [ 126.060625] lock(&p->pi_lock); [ 126.060627] lock(&(lock)->wait_lock); -[ 126.060629] +[ 126.060629] [ 126.060629] *** DEADLOCK *** -[ 126.060630] +[ 126.060630] [ 126.060632] 1 lock held by irq/24-eth0/1235: [ 126.060633] #0: (&p->pi_lock){-...-.}, at: [] try_to_wake_up+0x35/0x429 -[ 126.060638] +[ 126.060638] [ 126.060638] stack backtrace: [ 126.060641] Pid: 1235, comm: irq/24-eth0 Not tainted 3.0.1-rt10+ #30 [ 126.060643] Call Trace: @@ -150,12 +151,14 @@ here... so this is very likely a bandaid and more thought is required. Cc: Paul E. McKenney Signed-off-by: Peter Zijlstra --- - kernel/rcu/tree_plugin.h | 2 +- + kernel/rcu/tree_plugin.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) +diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h +index 564e3927e7b0..429a2f144e19 100644 --- a/kernel/rcu/tree_plugin.h +++ b/kernel/rcu/tree_plugin.h -@@ -524,7 +524,7 @@ static void rcu_read_unlock_special(stru +@@ -524,7 +524,7 @@ static void rcu_read_unlock_special(struct task_struct *t) } /* Hardware IRQ handlers cannot block, complain if they get here. 
*/ @@ -164,3 +167,6 @@ Signed-off-by: Peter Zijlstra lockdep_rcu_suspicious(__FILE__, __LINE__, "rcu_read_unlock() from irq or softirq with blocking in critical section!!!\n"); pr_alert("->rcu_read_unlock_special: %#x (b: %d, enq: %d nq: %d)\n", +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0161-rcu-merge-rcu-bh-into-rcu-preempt-for-rt.patch b/kernel/patches-4.19.x-rt/0159-rcu-Merge-RCU-bh-into-RCU-preempt.patch similarity index 75% rename from kernel/patches-4.19.x-rt/0161-rcu-merge-rcu-bh-into-rcu-preempt-for-rt.patch rename to kernel/patches-4.19.x-rt/0159-rcu-Merge-RCU-bh-into-RCU-preempt.patch index c2f60ca7d..41805695c 100644 --- a/kernel/patches-4.19.x-rt/0161-rcu-merge-rcu-bh-into-rcu-preempt-for-rt.patch +++ b/kernel/patches-4.19.x-rt/0159-rcu-Merge-RCU-bh-into-RCU-preempt.patch @@ -1,6 +1,7 @@ -Subject: rcu: Merge RCU-bh into RCU-preempt -Date: Wed, 5 Oct 2011 11:59:38 -0700 +From dd8eae9da2e22bd7b41cea43792b107b3deb3fd7 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner +Date: Wed, 5 Oct 2011 11:59:38 -0700 +Subject: [PATCH 159/269] rcu: Merge RCU-bh into RCU-preempt The Linux kernel has long RCU-bh read-side critical sections that intolerably increase scheduling latency under mainline's RCU-bh rules, @@ -22,20 +23,21 @@ Signed-off-by: Thomas Gleixner Signed-off-by: Paul E. 
McKenney Link: http://lkml.kernel.org/r/20111005185938.GA20403@linux.vnet.ibm.com Signed-off-by: Thomas Gleixner - --- - include/linux/rcupdate.h | 19 +++++++++++++++++++ - include/linux/rcutree.h | 8 ++++++++ - kernel/rcu/rcu.h | 11 +++++++++-- - kernel/rcu/rcutorture.c | 7 +++++++ - kernel/rcu/tree.c | 26 ++++++++++++++++++++++++++ - kernel/rcu/tree.h | 2 ++ - kernel/rcu/update.c | 2 ++ + include/linux/rcupdate.h | 19 +++++++++++++++++++ + include/linux/rcutree.h | 8 ++++++++ + kernel/rcu/rcu.h | 11 +++++++++-- + kernel/rcu/rcutorture.c | 7 +++++++ + kernel/rcu/tree.c | 26 ++++++++++++++++++++++++++ + kernel/rcu/tree.h | 2 ++ + kernel/rcu/update.c | 2 ++ 7 files changed, 73 insertions(+), 2 deletions(-) +diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h +index 63cd0a1a99a0..60a9b5feefe2 100644 --- a/include/linux/rcupdate.h +++ b/include/linux/rcupdate.h -@@ -56,7 +56,11 @@ void call_rcu(struct rcu_head *head, rcu +@@ -56,7 +56,11 @@ void call_rcu(struct rcu_head *head, rcu_callback_t func); #define call_rcu call_rcu_sched #endif /* #else #ifdef CONFIG_PREEMPT_RCU */ @@ -47,7 +49,7 @@ Signed-off-by: Thomas Gleixner void call_rcu_sched(struct rcu_head *head, rcu_callback_t func); void synchronize_sched(void); void rcu_barrier_tasks(void); -@@ -263,7 +267,14 @@ extern struct lockdep_map rcu_sched_lock +@@ -263,7 +267,14 @@ extern struct lockdep_map rcu_sched_lock_map; extern struct lockdep_map rcu_callback_map; int debug_lockdep_rcu_enabled(void); int rcu_read_lock_held(void); @@ -77,7 +79,7 @@ Signed-off-by: Thomas Gleixner } /* -@@ -676,10 +691,14 @@ static inline void rcu_read_lock_bh(void +@@ -676,10 +691,14 @@ static inline void rcu_read_lock_bh(void) */ static inline void rcu_read_unlock_bh(void) { @@ -92,9 +94,11 @@ Signed-off-by: Thomas Gleixner local_bh_enable(); } +diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h +index 914655848ef6..462ce061bac7 100644 --- a/include/linux/rcutree.h +++ b/include/linux/rcutree.h -@@ -44,7 
+44,11 @@ static inline void rcu_virt_note_context +@@ -44,7 +44,11 @@ static inline void rcu_virt_note_context_switch(int cpu) rcu_note_context_switch(false); } @@ -106,7 +110,7 @@ Signed-off-by: Thomas Gleixner void synchronize_sched_expedited(void); void synchronize_rcu_expedited(void); -@@ -72,7 +76,11 @@ static inline void synchronize_rcu_bh_ex +@@ -72,7 +76,11 @@ static inline void synchronize_rcu_bh_expedited(void) } void rcu_barrier(void); @@ -118,9 +122,11 @@ Signed-off-by: Thomas Gleixner void rcu_barrier_sched(void); bool rcu_eqs_special_set(int cpu); unsigned long get_state_synchronize_rcu(void); +diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h +index 4d04683c31b2..808cce9a5d43 100644 --- a/kernel/rcu/rcu.h +++ b/kernel/rcu/rcu.h -@@ -528,7 +528,6 @@ static inline void show_rcu_gp_kthreads( +@@ -528,7 +528,6 @@ static inline void show_rcu_gp_kthreads(void) { } static inline int rcu_get_gp_kthreads_prio(void) { return 0; } #else /* #ifdef CONFIG_TINY_RCU */ unsigned long rcu_get_gp_seq(void); @@ -128,7 +134,7 @@ Signed-off-by: Thomas Gleixner unsigned long rcu_sched_get_gp_seq(void); unsigned long rcu_exp_batches_completed(void); unsigned long rcu_exp_batches_completed_sched(void); -@@ -536,10 +535,18 @@ unsigned long srcu_batches_completed(str +@@ -536,10 +535,18 @@ unsigned long srcu_batches_completed(struct srcu_struct *sp); void show_rcu_gp_kthreads(void); int rcu_get_gp_kthreads_prio(void); void rcu_force_quiescent_state(void); @@ -148,9 +154,11 @@ Signed-off-by: Thomas Gleixner #endif /* #else #ifdef CONFIG_TINY_RCU */ #ifdef CONFIG_RCU_NOCB_CPU +diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c +index c596c6f1e457..7d2a615601e7 100644 --- a/kernel/rcu/rcutorture.c +++ b/kernel/rcu/rcutorture.c -@@ -434,6 +434,7 @@ static struct rcu_torture_ops rcu_ops = +@@ -434,6 +434,7 @@ static struct rcu_torture_ops rcu_ops = { .name = "rcu" }; @@ -158,7 +166,7 @@ Signed-off-by: Thomas Gleixner /* * Definitions for rcu_bh torture testing. 
*/ -@@ -475,6 +476,12 @@ static struct rcu_torture_ops rcu_bh_ops +@@ -475,6 +476,12 @@ static struct rcu_torture_ops rcu_bh_ops = { .name = "rcu_bh" }; @@ -171,6 +179,8 @@ Signed-off-by: Thomas Gleixner /* * Don't even think about trying any of these in real life!!! * The names includes "busted", and they really means it! +diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c +index f7e89c989df7..1456a3d97971 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -244,6 +244,7 @@ void rcu_sched_qs(void) @@ -209,7 +219,7 @@ Signed-off-by: Thomas Gleixner /* * Return the number of RCU expedited batches completed thus far for -@@ -599,6 +607,7 @@ unsigned long rcu_exp_batches_completed_ +@@ -599,6 +607,7 @@ unsigned long rcu_exp_batches_completed_sched(void) } EXPORT_SYMBOL_GPL(rcu_exp_batches_completed_sched); @@ -231,7 +241,7 @@ Signed-off-by: Thomas Gleixner /* * Force a quiescent state for RCU-sched. */ -@@ -674,9 +690,11 @@ void rcutorture_get_gp_data(enum rcutort +@@ -674,9 +690,11 @@ void rcutorture_get_gp_data(enum rcutorture_type test_type, int *flags, case RCU_FLAVOR: rsp = rcu_state_p; break; @@ -243,7 +253,7 @@ Signed-off-by: Thomas Gleixner case RCU_SCHED_FLAVOR: rsp = &rcu_sched_state; break; -@@ -3049,6 +3067,7 @@ void call_rcu_sched(struct rcu_head *hea +@@ -3057,6 +3075,7 @@ void call_rcu_sched(struct rcu_head *head, rcu_callback_t func) } EXPORT_SYMBOL_GPL(call_rcu_sched); @@ -251,7 +261,7 @@ Signed-off-by: Thomas Gleixner /** * call_rcu_bh() - Queue an RCU for invocation after a quicker grace period. * @head: structure to be used for queueing the RCU updates. -@@ -3076,6 +3095,7 @@ void call_rcu_bh(struct rcu_head *head, +@@ -3084,6 +3103,7 @@ void call_rcu_bh(struct rcu_head *head, rcu_callback_t func) __call_rcu(head, func, &rcu_bh_state, -1, 0); } EXPORT_SYMBOL_GPL(call_rcu_bh); @@ -259,7 +269,7 @@ Signed-off-by: Thomas Gleixner /* * Queue an RCU callback for lazy invocation after a grace period. 
-@@ -3161,6 +3181,7 @@ void synchronize_sched(void) +@@ -3169,6 +3189,7 @@ void synchronize_sched(void) } EXPORT_SYMBOL_GPL(synchronize_sched); @@ -267,7 +277,7 @@ Signed-off-by: Thomas Gleixner /** * synchronize_rcu_bh - wait until an rcu_bh grace period has elapsed. * -@@ -3187,6 +3208,7 @@ void synchronize_rcu_bh(void) +@@ -3195,6 +3216,7 @@ void synchronize_rcu_bh(void) wait_rcu_gp(call_rcu_bh); } EXPORT_SYMBOL_GPL(synchronize_rcu_bh); @@ -275,7 +285,7 @@ Signed-off-by: Thomas Gleixner /** * get_state_synchronize_rcu - Snapshot current RCU state -@@ -3494,6 +3516,7 @@ static void _rcu_barrier(struct rcu_stat +@@ -3502,6 +3524,7 @@ static void _rcu_barrier(struct rcu_state *rsp) mutex_unlock(&rsp->barrier_mutex); } @@ -283,7 +293,7 @@ Signed-off-by: Thomas Gleixner /** * rcu_barrier_bh - Wait until all in-flight call_rcu_bh() callbacks complete. */ -@@ -3502,6 +3525,7 @@ void rcu_barrier_bh(void) +@@ -3510,6 +3533,7 @@ void rcu_barrier_bh(void) _rcu_barrier(&rcu_bh_state); } EXPORT_SYMBOL_GPL(rcu_barrier_bh); @@ -291,7 +301,7 @@ Signed-off-by: Thomas Gleixner /** * rcu_barrier_sched - Wait for in-flight call_rcu_sched() callbacks. 
-@@ -4149,7 +4173,9 @@ void __init rcu_init(void) +@@ -4157,7 +4181,9 @@ void __init rcu_init(void) rcu_bootup_announce(); rcu_init_geometry(); @@ -301,9 +311,11 @@ Signed-off-by: Thomas Gleixner rcu_init_one(&rcu_sched_state); if (dump_tree) rcu_dump_rcu_node_tree(&rcu_sched_state); +diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h +index 4e74df768c57..fbbff7c21148 100644 --- a/kernel/rcu/tree.h +++ b/kernel/rcu/tree.h -@@ -413,7 +413,9 @@ extern struct list_head rcu_struct_flavo +@@ -413,7 +413,9 @@ extern struct list_head rcu_struct_flavors; */ extern struct rcu_state rcu_sched_state; @@ -313,9 +325,11 @@ Signed-off-by: Thomas Gleixner #ifdef CONFIG_PREEMPT_RCU extern struct rcu_state rcu_preempt_state; +diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c +index 81688a133552..6ffafb1b1584 100644 --- a/kernel/rcu/update.c +++ b/kernel/rcu/update.c -@@ -286,6 +286,7 @@ int rcu_read_lock_held(void) +@@ -288,6 +288,7 @@ int rcu_read_lock_held(void) } EXPORT_SYMBOL_GPL(rcu_read_lock_held); @@ -323,7 +337,7 @@ Signed-off-by: Thomas Gleixner /** * rcu_read_lock_bh_held() - might we be in RCU-bh read-side critical section? 
* -@@ -312,6 +313,7 @@ int rcu_read_lock_bh_held(void) +@@ -314,6 +315,7 @@ int rcu_read_lock_bh_held(void) return in_softirq() || irqs_disabled(); } EXPORT_SYMBOL_GPL(rcu_read_lock_bh_held); @@ -331,3 +345,6 @@ Signed-off-by: Thomas Gleixner #endif /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */ +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0162-patch-to-introduce-rcu-bh-qs-where-safe-from-softirq.patch b/kernel/patches-4.19.x-rt/0160-rcu-Make-ksoftirqd-do-RCU-quiescent-states.patch similarity index 81% rename from kernel/patches-4.19.x-rt/0162-patch-to-introduce-rcu-bh-qs-where-safe-from-softirq.patch rename to kernel/patches-4.19.x-rt/0160-rcu-Make-ksoftirqd-do-RCU-quiescent-states.patch index 29d27889e..4abe02f10 100644 --- a/kernel/patches-4.19.x-rt/0162-patch-to-introduce-rcu-bh-qs-where-safe-from-softirq.patch +++ b/kernel/patches-4.19.x-rt/0160-rcu-Make-ksoftirqd-do-RCU-quiescent-states.patch @@ -1,6 +1,7 @@ -Subject: rcu: Make ksoftirqd do RCU quiescent states +From 435eba4b4298b15db7304d4b60e313d95f9b004f Mon Sep 17 00:00:00 2001 From: "Paul E. McKenney" Date: Wed, 5 Oct 2011 11:45:18 -0700 +Subject: [PATCH 160/269] rcu: Make ksoftirqd do RCU quiescent states Implementing RCU-bh in terms of RCU-preempt makes the system vulnerable to network-based denial-of-service attacks. This patch therefore @@ -21,12 +22,13 @@ in cases where __do_softirq() is invoked directly from ksoftirqd. Signed-off-by: Paul E. 
McKenney Link: http://lkml.kernel.org/r/20111005184518.GA21601@linux.vnet.ibm.com Signed-off-by: Thomas Gleixner - --- - kernel/rcu/tree.c | 18 +++++++++++++----- - kernel/rcu/tree_plugin.h | 8 +++++++- + kernel/rcu/tree.c | 18 +++++++++++++----- + kernel/rcu/tree_plugin.h | 8 +++++++- 2 files changed, 20 insertions(+), 6 deletions(-) +diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c +index 1456a3d97971..1a40e3d44cb8 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -244,7 +244,19 @@ void rcu_sched_qs(void) @@ -61,6 +63,8 @@ Signed-off-by: Thomas Gleixner #endif /* +diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h +index 429a2f144e19..bee9bffeb0ce 100644 --- a/kernel/rcu/tree_plugin.h +++ b/kernel/rcu/tree_plugin.h @@ -29,6 +29,7 @@ @@ -71,7 +75,7 @@ Signed-off-by: Thomas Gleixner #include #include #include "../time/tick-internal.h" -@@ -1407,7 +1408,7 @@ static void rcu_prepare_kthreads(int cpu +@@ -1407,7 +1408,7 @@ static void rcu_prepare_kthreads(int cpu) #endif /* #else #ifdef CONFIG_RCU_BOOST */ @@ -80,7 +84,7 @@ Signed-off-by: Thomas Gleixner /* * Check to see if any future RCU-related work will need to be done -@@ -1423,7 +1424,9 @@ int rcu_needs_cpu(u64 basemono, u64 *nex +@@ -1423,7 +1424,9 @@ int rcu_needs_cpu(u64 basemono, u64 *nextevt) *nextevt = KTIME_MAX; return rcu_cpu_has_callbacks(NULL); } @@ -90,7 +94,7 @@ Signed-off-by: Thomas Gleixner /* * Because we do not have RCU_FAST_NO_HZ, don't bother cleaning up * after it. -@@ -1520,6 +1523,8 @@ static bool __maybe_unused rcu_try_advan +@@ -1520,6 +1523,8 @@ static bool __maybe_unused rcu_try_advance_all_cbs(void) return cbs_ready; } @@ -99,7 +103,7 @@ Signed-off-by: Thomas Gleixner /* * Allow the CPU to enter dyntick-idle mode unless it has callbacks ready * to invoke. If the CPU has callbacks, try to advance them. 
Tell the -@@ -1562,6 +1567,7 @@ int rcu_needs_cpu(u64 basemono, u64 *nex +@@ -1562,6 +1567,7 @@ int rcu_needs_cpu(u64 basemono, u64 *nextevt) *nextevt = basemono + dj * TICK_NSEC; return 0; } @@ -107,3 +111,6 @@ Signed-off-by: Thomas Gleixner /* * Prepare a CPU for idle from an RCU perspective. The first major task +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0163-rcu-Eliminate-softirq-processing-from-rcutree.patch b/kernel/patches-4.19.x-rt/0161-rcu-Eliminate-softirq-processing-from-rcutree.patch similarity index 88% rename from kernel/patches-4.19.x-rt/0163-rcu-Eliminate-softirq-processing-from-rcutree.patch rename to kernel/patches-4.19.x-rt/0161-rcu-Eliminate-softirq-processing-from-rcutree.patch index 68d378818..c3f02ce94 100644 --- a/kernel/patches-4.19.x-rt/0163-rcu-Eliminate-softirq-processing-from-rcutree.patch +++ b/kernel/patches-4.19.x-rt/0161-rcu-Eliminate-softirq-processing-from-rcutree.patch @@ -1,6 +1,7 @@ +From ca691ed27290645375a66795b1d87fb910501211 Mon Sep 17 00:00:00 2001 From: "Paul E. McKenney" Date: Mon, 4 Nov 2013 13:21:10 -0800 -Subject: rcu: Eliminate softirq processing from rcutree +Subject: [PATCH 161/269] rcu: Eliminate softirq processing from rcutree Running RCU out of softirq is a problem for some workloads that would like to manage RCU core processing independently of other softirq work, @@ -16,11 +17,13 @@ Tested-by: Mike Galbraith Signed-off-by: Paul E. 
McKenney Signed-off-by: Sebastian Andrzej Siewior --- - kernel/rcu/tree.c | 112 +++++++++++++++++++++++++++++++++---- - kernel/rcu/tree.h | 4 - - kernel/rcu/tree_plugin.h | 142 +++-------------------------------------------- - 3 files changed, 114 insertions(+), 144 deletions(-) + kernel/rcu/tree.c | 114 ++++++++++++++++++++++++++++--- + kernel/rcu/tree.h | 4 +- + kernel/rcu/tree_plugin.h | 142 +++------------------------------------ + 3 files changed, 115 insertions(+), 145 deletions(-) +diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c +index 1a40e3d44cb8..ae716ca783bc 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -61,6 +61,13 @@ @@ -37,7 +40,7 @@ Signed-off-by: Sebastian Andrzej Siewior #include "tree.h" #include "rcu.h" -@@ -2888,18 +2895,17 @@ static void +@@ -2896,18 +2903,17 @@ __rcu_process_callbacks(struct rcu_state *rsp) /* * Do RCU core processing for the current CPU. */ @@ -58,12 +61,15 @@ Signed-off-by: Sebastian Andrzej Siewior /* * Schedule RCU callback invocation. If the specified type of RCU * does not support RCU priority boosting, just do a direct call, -@@ -2911,18 +2917,105 @@ static void invoke_rcu_callbacks(struct +@@ -2919,18 +2925,105 @@ static void invoke_rcu_callbacks(struct rcu_state *rsp, struct rcu_data *rdp) { if (unlikely(!READ_ONCE(rcu_scheduler_fully_active))) return; - if (likely(!rsp->boost)) { - rcu_do_batch(rsp, rdp); +- return; +- } +- invoke_rcu_callbacks_kthread(); + rcu_do_batch(rsp, rdp); +} + @@ -75,18 +81,20 @@ Signed-off-by: Sebastian Andrzej Siewior + */ + if (t && (status != RCU_KTHREAD_YIELDING || is_idle_task(current))) + wake_up_process(t); -+} -+ + } + +/* + * Wake up this CPU's rcuc kthread to do RCU core processing. 
+ */ -+static void invoke_rcu_core(void) -+{ + static void invoke_rcu_core(void) + { +- if (cpu_online(smp_processor_id())) +- raise_softirq(RCU_SOFTIRQ); + unsigned long flags; + struct task_struct *t; + + if (!cpu_online(smp_processor_id())) - return; ++ return; + local_irq_save(flags); + __this_cpu_write(rcu_cpu_has_work, 1); + t = __this_cpu_read(rcu_cpu_kthread_task); @@ -133,16 +141,14 @@ Signed-off-by: Sebastian Andrzej Siewior + *statusp = RCU_KTHREAD_WAITING; + return; + } - } -- invoke_rcu_callbacks_kthread(); ++ } + *statusp = RCU_KTHREAD_YIELDING; + trace_rcu_utilization(TPS("Start CPU kthread@rcu_yield")); + schedule_timeout_interruptible(2); + trace_rcu_utilization(TPS("End CPU kthread@rcu_yield")); + *statusp = RCU_KTHREAD_WAITING; - } - --static void invoke_rcu_core(void) ++} ++ +static struct smp_hotplug_thread rcu_cpu_thread_spec = { + .store = &rcu_cpu_kthread_task, + .thread_should_run = rcu_cpu_kthread_should_run, @@ -156,9 +162,7 @@ Signed-off-by: Sebastian Andrzej Siewior + * Spawn per-CPU RCU core processing kthreads. + */ +static int __init rcu_spawn_core_kthreads(void) - { -- if (cpu_online(smp_processor_id())) -- raise_softirq(RCU_SOFTIRQ); ++{ + int cpu; + + for_each_possible_cpu(cpu) @@ -170,7 +174,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * Handle any core-RCU processing required by a call_rcu() invocation. 
-@@ -4188,7 +4281,6 @@ void __init rcu_init(void) +@@ -4196,7 +4289,6 @@ void __init rcu_init(void) if (dump_tree) rcu_dump_rcu_node_tree(&rcu_sched_state); __rcu_init_preempt(); @@ -178,9 +182,11 @@ Signed-off-by: Sebastian Andrzej Siewior /* * We don't need protection against CPU-hotplug here because +diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h +index fbbff7c21148..98257d20feb2 100644 --- a/kernel/rcu/tree.h +++ b/kernel/rcu/tree.h -@@ -423,12 +423,10 @@ extern struct rcu_state rcu_preempt_stat +@@ -423,12 +423,10 @@ extern struct rcu_state rcu_preempt_state; int rcu_dynticks_snap(struct rcu_dynticks *rdtp); @@ -193,7 +199,7 @@ Signed-off-by: Sebastian Andrzej Siewior #ifndef RCU_TREE_NONCORE -@@ -451,8 +449,8 @@ static void dump_blkd_tasks(struct rcu_s +@@ -451,8 +449,8 @@ static void dump_blkd_tasks(struct rcu_state *rsp, struct rcu_node *rnp, int ncheck); static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags); static void rcu_preempt_boost_start_gp(struct rcu_node *rnp); @@ -203,6 +209,8 @@ Signed-off-by: Sebastian Andrzej Siewior #ifdef CONFIG_RCU_BOOST static int rcu_spawn_one_boost_kthread(struct rcu_state *rsp, struct rcu_node *rnp); +diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h +index bee9bffeb0ce..2e8737f1010f 100644 --- a/kernel/rcu/tree_plugin.h +++ b/kernel/rcu/tree_plugin.h @@ -24,42 +24,16 @@ @@ -248,7 +256,7 @@ Signed-off-by: Sebastian Andrzej Siewior #ifdef CONFIG_RCU_NOCB_CPU static cpumask_var_t rcu_nocb_mask; /* CPUs to have callbacks offloaded. */ static bool __read_mostly rcu_nocb_poll; /* Offload kthread are to poll. 
*/ -@@ -1027,18 +1001,21 @@ dump_blkd_tasks(struct rcu_state *rsp, s +@@ -1027,18 +1001,21 @@ dump_blkd_tasks(struct rcu_state *rsp, struct rcu_node *rnp, int ncheck) #endif /* #else #ifdef CONFIG_PREEMPT_RCU */ @@ -278,10 +286,11 @@ Signed-off-by: Sebastian Andrzej Siewior /* * Carry out RCU priority boosting on the task indicated by ->exp_tasks * or ->boost_tasks, advancing the pointer to the next task in the -@@ -1177,23 +1154,6 @@ static void rcu_initiate_boost(struct rc +@@ -1176,23 +1153,6 @@ static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags) + } } - /* +-/* - * Wake up the per-CPU kthread to invoke RCU callbacks. - */ -static void invoke_rcu_callbacks_kthread(void) @@ -298,11 +307,10 @@ Signed-off-by: Sebastian Andrzej Siewior - local_irq_restore(flags); -} - --/* + /* * Is the current CPU running the RCU-callbacks kthread? * Caller must have preemption disabled. - */ -@@ -1247,67 +1207,6 @@ static int rcu_spawn_one_boost_kthread(s +@@ -1247,67 +1207,6 @@ static int rcu_spawn_one_boost_kthread(struct rcu_state *rsp, return 0; } @@ -370,7 +378,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * Set the per-rcu_node kthread's affinity to cover all CPUs that are * served by the rcu_node in question. 
The CPU hotplug lock is still -@@ -1338,26 +1237,12 @@ static void rcu_boost_kthread_setaffinit +@@ -1338,26 +1237,12 @@ static void rcu_boost_kthread_setaffinity(struct rcu_node *rnp, int outgoingcpu) free_cpumask_var(cm); } @@ -397,7 +405,7 @@ Signed-off-by: Sebastian Andrzej Siewior rcu_for_each_leaf_node(rcu_state_p, rnp) (void)rcu_spawn_one_boost_kthread(rcu_state_p, rnp); } -@@ -1380,11 +1265,6 @@ static void rcu_initiate_boost(struct rc +@@ -1380,11 +1265,6 @@ static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags) raw_spin_unlock_irqrestore_rcu_node(rnp, flags); } @@ -409,3 +417,6 @@ Signed-off-by: Sebastian Andrzej Siewior static bool rcu_is_callbacks_kthread(void) { return false; +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0164-srcu-use-cpu_online-instead-custom-check.patch b/kernel/patches-4.19.x-rt/0162-srcu-use-cpu_online-instead-custom-check.patch similarity index 74% rename from kernel/patches-4.19.x-rt/0164-srcu-use-cpu_online-instead-custom-check.patch rename to kernel/patches-4.19.x-rt/0162-srcu-use-cpu_online-instead-custom-check.patch index 7528bdafa..eafc6f037 100644 --- a/kernel/patches-4.19.x-rt/0164-srcu-use-cpu_online-instead-custom-check.patch +++ b/kernel/patches-4.19.x-rt/0162-srcu-use-cpu_online-instead-custom-check.patch @@ -1,6 +1,7 @@ +From cf507028c7a29d61fc47c6209aeca2d9d7cd0876 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Wed, 13 Sep 2017 14:43:41 +0200 -Subject: [PATCH] srcu: use cpu_online() instead custom check +Subject: [PATCH 162/269] srcu: use cpu_online() instead custom check The current check via srcu_online is slightly racy because after looking at srcu_online there could be an interrupt that interrupted us long @@ -13,10 +14,12 @@ CPU won't fire until the CPU is back online. 
Signed-off-by: Sebastian Andrzej Siewior --- - kernel/rcu/srcutree.c | 22 ++++------------------ - kernel/rcu/tree.c | 4 ---- + kernel/rcu/srcutree.c | 22 ++++------------------ + kernel/rcu/tree.c | 4 ---- 2 files changed, 4 insertions(+), 22 deletions(-) +diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c +index 1ff17e297f0c..df0375453ba1 100644 --- a/kernel/rcu/srcutree.c +++ b/kernel/rcu/srcutree.c @@ -38,6 +38,7 @@ @@ -27,10 +30,11 @@ Signed-off-by: Sebastian Andrzej Siewior #include "rcu.h" #include "rcu_segcblist.h" -@@ -461,21 +462,6 @@ static void srcu_gp_start(struct srcu_st +@@ -460,21 +461,6 @@ static void srcu_gp_start(struct srcu_struct *sp) + WARN_ON_ONCE(state != SRCU_STATE_SCAN1); } - /* +-/* - * Track online CPUs to guide callback workqueue placement. - */ -DEFINE_PER_CPU(bool, srcu_online); @@ -45,11 +49,10 @@ Signed-off-by: Sebastian Andrzej Siewior - WRITE_ONCE(per_cpu(srcu_online, cpu), false); -} - --/* + /* * Place the workqueue handler on the specified CPU if online, otherwise * just run it whereever. This is useful for placing workqueue handlers - * that are to invoke the specified CPU's callbacks. -@@ -486,12 +472,12 @@ static bool srcu_queue_delayed_work_on(i +@@ -486,12 +472,12 @@ static bool srcu_queue_delayed_work_on(int cpu, struct workqueue_struct *wq, { bool ret; @@ -65,9 +68,11 @@ Signed-off-by: Sebastian Andrzej Siewior return ret; } +diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c +index ae716ca783bc..f162a4f54b05 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c -@@ -3776,8 +3776,6 @@ int rcutree_online_cpu(unsigned int cpu) +@@ -3784,8 +3784,6 @@ int rcutree_online_cpu(unsigned int cpu) rnp->ffmask |= rdp->grpmask; raw_spin_unlock_irqrestore_rcu_node(rnp, flags); } @@ -76,7 +81,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (rcu_scheduler_active == RCU_SCHEDULER_INACTIVE) return 0; /* Too early in boot for scheduler work. 
*/ sync_sched_exp_online_cleanup(cpu); -@@ -3805,8 +3803,6 @@ int rcutree_offline_cpu(unsigned int cpu +@@ -3813,8 +3811,6 @@ int rcutree_offline_cpu(unsigned int cpu) } rcutree_affinity_setting(cpu, cpu); @@ -85,3 +90,6 @@ Signed-off-by: Sebastian Andrzej Siewior return 0; } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0165-srcu-replace-local_irqsave-with-a-locallock.patch b/kernel/patches-4.19.x-rt/0163-srcu-replace-local_irqsave-with-a-locallock.patch similarity index 76% rename from kernel/patches-4.19.x-rt/0165-srcu-replace-local_irqsave-with-a-locallock.patch rename to kernel/patches-4.19.x-rt/0163-srcu-replace-local_irqsave-with-a-locallock.patch index 2e65e8a57..e3658cc81 100644 --- a/kernel/patches-4.19.x-rt/0165-srcu-replace-local_irqsave-with-a-locallock.patch +++ b/kernel/patches-4.19.x-rt/0163-srcu-replace-local_irqsave-with-a-locallock.patch @@ -1,6 +1,7 @@ +From 162767bbf4dfe16744f93ead7a5c938defc00489 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Thu, 12 Oct 2017 18:37:12 +0200 -Subject: [PATCH] srcu: replace local_irqsave() with a locallock +Subject: [PATCH 163/269] srcu: replace local_irqsave() with a locallock There are two instances which disable interrupts in order to become a stable this_cpu_ptr() pointer. The restore part is coupled with @@ -10,9 +11,11 @@ version of it. 
Signed-off-by: Sebastian Andrzej Siewior --- - kernel/rcu/srcutree.c | 14 +++++++++----- + kernel/rcu/srcutree.c | 14 +++++++++----- 1 file changed, 9 insertions(+), 5 deletions(-) +diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c +index df0375453ba1..0f09a1a9e17c 100644 --- a/kernel/rcu/srcutree.c +++ b/kernel/rcu/srcutree.c @@ -39,6 +39,7 @@ @@ -23,7 +26,7 @@ Signed-off-by: Sebastian Andrzej Siewior #include "rcu.h" #include "rcu_segcblist.h" -@@ -760,6 +761,8 @@ static void srcu_flip(struct srcu_struct +@@ -760,6 +761,8 @@ static void srcu_flip(struct srcu_struct *sp) * negligible when amoritized over that time period, and the extra latency * of a needlessly non-expedited grace period is similarly negligible. */ @@ -32,7 +35,7 @@ Signed-off-by: Sebastian Andrzej Siewior static bool srcu_might_be_idle(struct srcu_struct *sp) { unsigned long curseq; -@@ -768,13 +771,13 @@ static bool srcu_might_be_idle(struct sr +@@ -768,13 +771,13 @@ static bool srcu_might_be_idle(struct srcu_struct *sp) unsigned long t; /* If the local srcu_data structure has callbacks, not idle. */ @@ -49,7 +52,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * No local callbacks, so probabalistically probe global state. 
-@@ -852,7 +855,7 @@ void __call_srcu(struct srcu_struct *sp, +@@ -852,7 +855,7 @@ void __call_srcu(struct srcu_struct *sp, struct rcu_head *rhp, return; } rhp->func = func; @@ -58,7 +61,7 @@ Signed-off-by: Sebastian Andrzej Siewior sdp = this_cpu_ptr(sp->sda); spin_lock_rcu_node(sdp); rcu_segcblist_enqueue(&sdp->srcu_cblist, rhp, false); -@@ -868,7 +871,8 @@ void __call_srcu(struct srcu_struct *sp, +@@ -868,7 +871,8 @@ void __call_srcu(struct srcu_struct *sp, struct rcu_head *rhp, sdp->srcu_gp_seq_needed_exp = s; needexp = true; } @@ -68,3 +71,6 @@ Signed-off-by: Sebastian Andrzej Siewior if (needgp) srcu_funnel_gp_start(sp, sdp, s, do_norm); else if (needexp) +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0166-rcu-enable-rcu_normal_after_boot-by-default-for-RT.patch b/kernel/patches-4.19.x-rt/0164-rcu-enable-rcu_normal_after_boot-by-default-for-RT.patch similarity index 75% rename from kernel/patches-4.19.x-rt/0166-rcu-enable-rcu_normal_after_boot-by-default-for-RT.patch rename to kernel/patches-4.19.x-rt/0164-rcu-enable-rcu_normal_after_boot-by-default-for-RT.patch index 962fe570d..bbe5a9de0 100644 --- a/kernel/patches-4.19.x-rt/0166-rcu-enable-rcu_normal_after_boot-by-default-for-RT.patch +++ b/kernel/patches-4.19.x-rt/0164-rcu-enable-rcu_normal_after_boot-by-default-for-RT.patch @@ -1,6 +1,7 @@ +From f723e17e9826ed2e03a4b4c40c575ea2e2bf2c56 Mon Sep 17 00:00:00 2001 From: Julia Cartwright Date: Wed, 12 Oct 2016 11:21:14 -0500 -Subject: [PATCH] rcu: enable rcu_normal_after_boot by default for RT +Subject: [PATCH 164/269] rcu: enable rcu_normal_after_boot by default for RT The forcing of an expedited grace period is an expensive and very RT-application unfriendly operation, as it forcibly preempts all running @@ -14,12 +15,14 @@ Acked-by: Paul E. 
McKenney Signed-off-by: Julia Cartwright Signed-off-by: Sebastian Andrzej Siewior --- - kernel/rcu/update.c | 2 +- + kernel/rcu/update.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) +diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c +index 6ffafb1b1584..16d8dba23329 100644 --- a/kernel/rcu/update.c +++ b/kernel/rcu/update.c -@@ -67,7 +67,7 @@ extern int rcu_expedited; /* from sysctl +@@ -68,7 +68,7 @@ extern int rcu_expedited; /* from sysctl */ module_param(rcu_expedited, int, 0); extern int rcu_normal; /* from sysctl */ module_param(rcu_normal, int, 0); @@ -28,3 +31,6 @@ Signed-off-by: Sebastian Andrzej Siewior module_param(rcu_normal_after_boot, int, 0); #endif /* #ifndef CONFIG_TINY_RCU */ +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0167-drivers-tty-fix-omap-lock-crap.patch b/kernel/patches-4.19.x-rt/0165-tty-serial-omap-Make-the-locking-RT-aware.patch similarity index 72% rename from kernel/patches-4.19.x-rt/0167-drivers-tty-fix-omap-lock-crap.patch rename to kernel/patches-4.19.x-rt/0165-tty-serial-omap-Make-the-locking-RT-aware.patch index f70e9198d..7acb9a394 100644 --- a/kernel/patches-4.19.x-rt/0167-drivers-tty-fix-omap-lock-crap.patch +++ b/kernel/patches-4.19.x-rt/0165-tty-serial-omap-Make-the-locking-RT-aware.patch @@ -1,6 +1,7 @@ -Subject: tty/serial/omap: Make the locking RT aware +From ccd76e8feed9271e97bc207e13fce803567e1017 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Thu, 28 Jul 2011 13:32:57 +0200 +Subject: [PATCH 165/269] tty/serial/omap: Make the locking RT aware The lock is a sleeping lock and local_irq_save() is not the optimsation we are looking for. Redo it to make it work on -RT and @@ -8,12 +9,14 @@ non-RT. 
Signed-off-by: Thomas Gleixner --- - drivers/tty/serial/omap-serial.c | 12 ++++-------- + drivers/tty/serial/omap-serial.c | 12 ++++-------- 1 file changed, 4 insertions(+), 8 deletions(-) +diff --git a/drivers/tty/serial/omap-serial.c b/drivers/tty/serial/omap-serial.c +index 6420ae581a80..0f4f41ed9ffa 100644 --- a/drivers/tty/serial/omap-serial.c +++ b/drivers/tty/serial/omap-serial.c -@@ -1307,13 +1307,10 @@ serial_omap_console_write(struct console +@@ -1307,13 +1307,10 @@ serial_omap_console_write(struct console *co, const char *s, pm_runtime_get_sync(up->dev); @@ -30,7 +33,7 @@ Signed-off-by: Thomas Gleixner /* * First save the IER then disable the interrupts -@@ -1342,8 +1339,7 @@ serial_omap_console_write(struct console +@@ -1342,8 +1339,7 @@ serial_omap_console_write(struct console *co, const char *s, pm_runtime_mark_last_busy(up->dev); pm_runtime_put_autosuspend(up->dev); if (locked) @@ -40,3 +43,6 @@ Signed-off-by: Thomas Gleixner } static int __init +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0168-drivers-tty-pl011-irq-disable-madness.patch b/kernel/patches-4.19.x-rt/0166-tty-serial-pl011-Make-the-locking-work-on-RT.patch similarity index 67% rename from kernel/patches-4.19.x-rt/0168-drivers-tty-pl011-irq-disable-madness.patch rename to kernel/patches-4.19.x-rt/0166-tty-serial-pl011-Make-the-locking-work-on-RT.patch index 1558a8766..db6500f64 100644 --- a/kernel/patches-4.19.x-rt/0168-drivers-tty-pl011-irq-disable-madness.patch +++ b/kernel/patches-4.19.x-rt/0166-tty-serial-pl011-Make-the-locking-work-on-RT.patch @@ -1,18 +1,21 @@ -Subject: tty/serial/pl011: Make the locking work on RT +From 9ad06fff0efb4629430d5ced37c81e4f3ef040bf Mon Sep 17 00:00:00 2001 From: Thomas Gleixner -Date: Tue, 08 Jan 2013 21:36:51 +0100 +Date: Tue, 8 Jan 2013 21:36:51 +0100 +Subject: [PATCH 166/269] tty/serial/pl011: Make the locking work on RT The lock is a sleeping lock and local_irq_save() is not the optimsation we are looking for. 
Redo it to make it work on -RT and non-RT. Signed-off-by: Thomas Gleixner --- - drivers/tty/serial/amba-pl011.c | 15 ++++++++++----- + drivers/tty/serial/amba-pl011.c | 15 ++++++++++----- 1 file changed, 10 insertions(+), 5 deletions(-) +diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c +index 89ade213a1a9..6be86f8c7e6a 100644 --- a/drivers/tty/serial/amba-pl011.c +++ b/drivers/tty/serial/amba-pl011.c -@@ -2216,13 +2216,19 @@ pl011_console_write(struct console *co, +@@ -2216,13 +2216,19 @@ pl011_console_write(struct console *co, const char *s, unsigned int count) clk_enable(uap->clk); @@ -35,7 +38,7 @@ Signed-off-by: Thomas Gleixner /* * First save the CR then disable the interrupts -@@ -2248,8 +2254,7 @@ pl011_console_write(struct console *co, +@@ -2248,8 +2254,7 @@ pl011_console_write(struct console *co, const char *s, unsigned int count) pl011_write(old_cr, uap, REG_CR); if (locked) @@ -45,3 +48,6 @@ Signed-off-by: Thomas Gleixner clk_disable(uap->clk); } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0169-tty-serial-pl011-warning-about-uninitialized.patch b/kernel/patches-4.19.x-rt/0167-tty-serial-pl011-explicitly-initialize-the-flags-var.patch similarity index 72% rename from kernel/patches-4.19.x-rt/0169-tty-serial-pl011-warning-about-uninitialized.patch rename to kernel/patches-4.19.x-rt/0167-tty-serial-pl011-explicitly-initialize-the-flags-var.patch index 76f39fd86..c445ed404 100644 --- a/kernel/patches-4.19.x-rt/0169-tty-serial-pl011-warning-about-uninitialized.patch +++ b/kernel/patches-4.19.x-rt/0167-tty-serial-pl011-explicitly-initialize-the-flags-var.patch @@ -1,6 +1,8 @@ +From e30b0dc820111e11ecc71383d20682d2eee77061 Mon Sep 17 00:00:00 2001 From: Kurt Kanzenbach Date: Mon, 24 Sep 2018 10:29:01 +0200 -Subject: [PATCH] tty: serial: pl011: explicitly initialize the flags variable +Subject: [PATCH 167/269] tty: serial: pl011: explicitly initialize the flags + variable MIME-Version: 1.0 Content-Type: text/plain; 
charset=UTF-8 Content-Transfer-Encoding: 8bit @@ -21,12 +23,14 @@ behavior and resolves the warning. Signed-off-by: Kurt Kanzenbach Signed-off-by: Sebastian Andrzej Siewior --- - drivers/tty/serial/amba-pl011.c | 2 +- + drivers/tty/serial/amba-pl011.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) +diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c +index 6be86f8c7e6a..59b4ab7b50bf 100644 --- a/drivers/tty/serial/amba-pl011.c +++ b/drivers/tty/serial/amba-pl011.c -@@ -2211,7 +2211,7 @@ pl011_console_write(struct console *co, +@@ -2211,7 +2211,7 @@ pl011_console_write(struct console *co, const char *s, unsigned int count) { struct uart_amba_port *uap = amba_ports[co->index]; unsigned int old_cr = 0, new_cr; @@ -35,3 +39,6 @@ Signed-off-by: Sebastian Andrzej Siewior int locked = 1; clk_enable(uap->clk); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0170-rt-serial-warn-fix.patch b/kernel/patches-4.19.x-rt/0168-rt-Improve-the-serial-console-PASS_LIMIT.patch similarity index 63% rename from kernel/patches-4.19.x-rt/0170-rt-serial-warn-fix.patch rename to kernel/patches-4.19.x-rt/0168-rt-Improve-the-serial-console-PASS_LIMIT.patch index 9119b686b..6ffd03ec2 100644 --- a/kernel/patches-4.19.x-rt/0170-rt-serial-warn-fix.patch +++ b/kernel/patches-4.19.x-rt/0168-rt-Improve-the-serial-console-PASS_LIMIT.patch @@ -1,6 +1,10 @@ -Subject: rt: Improve the serial console PASS_LIMIT +From 0a6ea176915e05db911401e89a925ee948f4434f Mon Sep 17 00:00:00 2001 From: Ingo Molnar -Date: Wed Dec 14 13:05:54 CET 2011 +Date: Wed, 14 Dec 2011 13:05:54 +0100 +Subject: [PATCH 168/269] rt: Improve the serial console PASS_LIMIT +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit Beyond the warning: @@ -12,12 +16,14 @@ give it a chance to continue in some really ugly situation. 
Signed-off-by: Ingo Molnar Signed-off-by: Thomas Gleixner --- - drivers/tty/serial/8250/8250_core.c | 11 ++++++++++- + drivers/tty/serial/8250/8250_core.c | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-) +diff --git a/drivers/tty/serial/8250/8250_core.c b/drivers/tty/serial/8250/8250_core.c +index 8fe3d0ed229e..a2baac4c8b63 100644 --- a/drivers/tty/serial/8250/8250_core.c +++ b/drivers/tty/serial/8250/8250_core.c -@@ -54,7 +54,16 @@ static struct uart_driver serial8250_reg +@@ -54,7 +54,16 @@ static struct uart_driver serial8250_reg; static unsigned int skip_txen_test; /* force skip of txen test at init time */ @@ -35,3 +41,6 @@ Signed-off-by: Thomas Gleixner #include /* +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0171-tty-serial-8250-don-t-take-the-trylock-during-oops.patch b/kernel/patches-4.19.x-rt/0169-tty-serial-8250-don-t-take-the-trylock-during-oops.patch similarity index 63% rename from kernel/patches-4.19.x-rt/0171-tty-serial-8250-don-t-take-the-trylock-during-oops.patch rename to kernel/patches-4.19.x-rt/0169-tty-serial-8250-don-t-take-the-trylock-during-oops.patch index f883ac294..9218696f3 100644 --- a/kernel/patches-4.19.x-rt/0171-tty-serial-8250-don-t-take-the-trylock-during-oops.patch +++ b/kernel/patches-4.19.x-rt/0169-tty-serial-8250-don-t-take-the-trylock-during-oops.patch @@ -1,6 +1,7 @@ +From 511eaf0e0ecbd9898b7f680f08ab0636062f3c7e Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Mon, 11 Apr 2016 16:55:02 +0200 -Subject: [PATCH] tty: serial: 8250: don't take the trylock during oops +Subject: [PATCH 169/269] tty: serial: 8250: don't take the trylock during oops An oops with irqs off (panic() from irqsafe hrtimer like the watchdog timer) will lead to a lockdep warning on each invocation and as such @@ -9,12 +10,14 @@ Therefore we skip the trylock in the oops case. 
Signed-off-by: Sebastian Andrzej Siewior --- - drivers/tty/serial/8250/8250_port.c | 4 +--- + drivers/tty/serial/8250/8250_port.c | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) +diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c +index 3f779d25ec0c..851d7f6046a4 100644 --- a/drivers/tty/serial/8250/8250_port.c +++ b/drivers/tty/serial/8250/8250_port.c -@@ -3239,10 +3239,8 @@ void serial8250_console_write(struct uar +@@ -3239,10 +3239,8 @@ void serial8250_console_write(struct uart_8250_port *up, const char *s, serial8250_rpm_get(up); @@ -26,3 +29,6 @@ Signed-off-by: Sebastian Andrzej Siewior else spin_lock_irqsave(&port->lock, flags); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0172-peterz-percpu-rwsem-rt.patch b/kernel/patches-4.19.x-rt/0170-locking-percpu-rwsem-Remove-preempt_disable-variants.patch similarity index 74% rename from kernel/patches-4.19.x-rt/0172-peterz-percpu-rwsem-rt.patch rename to kernel/patches-4.19.x-rt/0170-locking-percpu-rwsem-Remove-preempt_disable-variants.patch index 4b832db88..ba7ee281a 100644 --- a/kernel/patches-4.19.x-rt/0172-peterz-percpu-rwsem-rt.patch +++ b/kernel/patches-4.19.x-rt/0170-locking-percpu-rwsem-Remove-preempt_disable-variants.patch @@ -1,6 +1,7 @@ -Subject: locking/percpu-rwsem: Remove preempt_disable variants +From 7b2e3123b8a2c8f1df0aa040b4c58d2f443fa8a5 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra -Date: Wed Nov 23 16:29:32 CET 2016 +Date: Wed, 23 Nov 2016 16:29:32 +0100 +Subject: [PATCH 170/269] locking/percpu-rwsem: Remove preempt_disable variants Effective revert commit: @@ -11,14 +12,15 @@ performance issue for PREEMPT=y. 
Signed-off-by: Peter Zijlstra (Intel) --- ---- - fs/locks.c | 32 ++++++++++++++++---------------- - include/linux/percpu-rwsem.h | 24 ++++-------------------- + fs/locks.c | 32 ++++++++++++++++---------------- + include/linux/percpu-rwsem.h | 24 ++++-------------------- 2 files changed, 20 insertions(+), 36 deletions(-) +diff --git a/fs/locks.c b/fs/locks.c +index 2ecb4db8c840..8259b7c7b5d2 100644 --- a/fs/locks.c +++ b/fs/locks.c -@@ -936,7 +936,7 @@ static int flock_lock_inode(struct inode +@@ -936,7 +936,7 @@ static int flock_lock_inode(struct inode *inode, struct file_lock *request) return -ENOMEM; } @@ -27,7 +29,7 @@ Signed-off-by: Peter Zijlstra (Intel) spin_lock(&ctx->flc_lock); if (request->fl_flags & FL_ACCESS) goto find_conflict; -@@ -977,7 +977,7 @@ static int flock_lock_inode(struct inode +@@ -977,7 +977,7 @@ static int flock_lock_inode(struct inode *inode, struct file_lock *request) out: spin_unlock(&ctx->flc_lock); @@ -36,7 +38,7 @@ Signed-off-by: Peter Zijlstra (Intel) if (new_fl) locks_free_lock(new_fl); locks_dispose_list(&dispose); -@@ -1015,7 +1015,7 @@ static int posix_lock_inode(struct inode +@@ -1015,7 +1015,7 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request, new_fl2 = locks_alloc_lock(); } @@ -45,7 +47,7 @@ Signed-off-by: Peter Zijlstra (Intel) spin_lock(&ctx->flc_lock); /* * New lock request. Walk all POSIX locks and look for conflicts. If -@@ -1187,7 +1187,7 @@ static int posix_lock_inode(struct inode +@@ -1187,7 +1187,7 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request, } out: spin_unlock(&ctx->flc_lock); @@ -54,7 +56,7 @@ Signed-off-by: Peter Zijlstra (Intel) /* * Free any unused locks. 
*/ -@@ -1462,7 +1462,7 @@ int __break_lease(struct inode *inode, u +@@ -1462,7 +1462,7 @@ int __break_lease(struct inode *inode, unsigned int mode, unsigned int type) return error; } @@ -63,7 +65,7 @@ Signed-off-by: Peter Zijlstra (Intel) spin_lock(&ctx->flc_lock); time_out_leases(inode, &dispose); -@@ -1514,13 +1514,13 @@ int __break_lease(struct inode *inode, u +@@ -1514,13 +1514,13 @@ int __break_lease(struct inode *inode, unsigned int mode, unsigned int type) locks_insert_block(fl, new_fl); trace_break_lease_block(inode, new_fl); spin_unlock(&ctx->flc_lock); @@ -79,7 +81,7 @@ Signed-off-by: Peter Zijlstra (Intel) spin_lock(&ctx->flc_lock); trace_break_lease_unblock(inode, new_fl); locks_delete_block(new_fl); -@@ -1537,7 +1537,7 @@ int __break_lease(struct inode *inode, u +@@ -1537,7 +1537,7 @@ int __break_lease(struct inode *inode, unsigned int mode, unsigned int type) } out: spin_unlock(&ctx->flc_lock); @@ -106,7 +108,7 @@ Signed-off-by: Peter Zijlstra (Intel) locks_dispose_list(&dispose); } -@@ -1693,7 +1693,7 @@ generic_add_lease(struct file *filp, lon +@@ -1693,7 +1693,7 @@ generic_add_lease(struct file *filp, long arg, struct file_lock **flp, void **pr return -EINVAL; } @@ -115,7 +117,7 @@ Signed-off-by: Peter Zijlstra (Intel) spin_lock(&ctx->flc_lock); time_out_leases(inode, &dispose); error = check_conflicting_open(dentry, arg, lease->fl_flags); -@@ -1764,7 +1764,7 @@ generic_add_lease(struct file *filp, lon +@@ -1764,7 +1764,7 @@ generic_add_lease(struct file *filp, long arg, struct file_lock **flp, void **pr lease->fl_lmops->lm_setup(lease, priv); out: spin_unlock(&ctx->flc_lock); @@ -124,7 +126,7 @@ Signed-off-by: Peter Zijlstra (Intel) locks_dispose_list(&dispose); if (is_deleg) inode_unlock(inode); -@@ -1787,7 +1787,7 @@ static int generic_delete_lease(struct f +@@ -1787,7 +1787,7 @@ static int generic_delete_lease(struct file *filp, void *owner) return error; } @@ -133,7 +135,7 @@ Signed-off-by: Peter Zijlstra (Intel) spin_lock(&ctx->flc_lock); 
list_for_each_entry(fl, &ctx->flc_lease, fl_list) { if (fl->fl_file == filp && -@@ -1800,7 +1800,7 @@ static int generic_delete_lease(struct f +@@ -1800,7 +1800,7 @@ static int generic_delete_lease(struct file *filp, void *owner) if (victim) error = fl->fl_lmops->lm_change(victim, F_UNLCK, &dispose); spin_unlock(&ctx->flc_lock); @@ -142,7 +144,7 @@ Signed-off-by: Peter Zijlstra (Intel) locks_dispose_list(&dispose); return error; } -@@ -2531,13 +2531,13 @@ locks_remove_lease(struct file *filp, st +@@ -2531,13 +2531,13 @@ locks_remove_lease(struct file *filp, struct file_lock_context *ctx) if (list_empty(&ctx->flc_lease)) return; @@ -158,9 +160,11 @@ Signed-off-by: Peter Zijlstra (Intel) locks_dispose_list(&dispose); } +diff --git a/include/linux/percpu-rwsem.h b/include/linux/percpu-rwsem.h +index 79b99d653e03..fb44e237316d 100644 --- a/include/linux/percpu-rwsem.h +++ b/include/linux/percpu-rwsem.h -@@ -29,7 +29,7 @@ static struct percpu_rw_semaphore name = +@@ -29,7 +29,7 @@ static struct percpu_rw_semaphore name = { \ extern int __percpu_down_read(struct percpu_rw_semaphore *, int); extern void __percpu_up_read(struct percpu_rw_semaphore *); @@ -169,7 +173,7 @@ Signed-off-by: Peter Zijlstra (Intel) { might_sleep(); -@@ -47,16 +47,10 @@ static inline void percpu_down_read_pree +@@ -47,16 +47,10 @@ static inline void percpu_down_read_preempt_disable(struct percpu_rw_semaphore * __this_cpu_inc(*sem->read_count); if (unlikely(!rcu_sync_is_idle(&sem->rss))) __percpu_down_read(sem, false); /* Unconditional memory barrier */ @@ -187,7 +191,7 @@ Signed-off-by: Peter Zijlstra (Intel) preempt_enable(); } -@@ -83,13 +77,9 @@ static inline int percpu_down_read_trylo +@@ -83,13 +77,9 @@ static inline int percpu_down_read_trylock(struct percpu_rw_semaphore *sem) return ret; } @@ -203,7 +207,7 @@ Signed-off-by: Peter Zijlstra (Intel) /* * Same as in percpu_down_read(). 
*/ -@@ -102,12 +92,6 @@ static inline void percpu_up_read_preemp +@@ -102,12 +92,6 @@ static inline void percpu_up_read_preempt_enable(struct percpu_rw_semaphore *sem rwsem_release(&sem->rw_sem.dep_map, 1, _RET_IP_); } @@ -216,3 +220,6 @@ Signed-off-by: Peter Zijlstra (Intel) extern void percpu_down_write(struct percpu_rw_semaphore *); extern void percpu_up_write(struct percpu_rw_semaphore *); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0173-mm-protect-activate-switch-mm.patch b/kernel/patches-4.19.x-rt/0171-mm-Protect-activate_mm-by-preempt_-disable-enable-_r.patch similarity index 87% rename from kernel/patches-4.19.x-rt/0173-mm-protect-activate-switch-mm.patch rename to kernel/patches-4.19.x-rt/0171-mm-Protect-activate_mm-by-preempt_-disable-enable-_r.patch index e22f28f17..db596df00 100644 --- a/kernel/patches-4.19.x-rt/0173-mm-protect-activate-switch-mm.patch +++ b/kernel/patches-4.19.x-rt/0171-mm-Protect-activate_mm-by-preempt_-disable-enable-_r.patch @@ -1,6 +1,8 @@ +From 28f91f849d8485292f7b25ce6a2ceae9fe18fb4d Mon Sep 17 00:00:00 2001 From: Yong Zhang Date: Tue, 15 May 2012 13:53:56 +0800 -Subject: mm: Protect activate_mm() by preempt_[disable&enable]_rt() +Subject: [PATCH 171/269] mm: Protect activate_mm() by + preempt_[disable&enable]_rt() User preempt_*_rt instead of local_irq_*_rt or otherwise there will be warning on ARM like below: @@ -30,13 +32,15 @@ Cc: Steven Rostedt Link: http://lkml.kernel.org/r/1337061236-1766-1-git-send-email-yong.zhang0@gmail.com Signed-off-by: Thomas Gleixner --- - fs/exec.c | 2 ++ - mm/mmu_context.c | 2 ++ + fs/exec.c | 2 ++ + mm/mmu_context.c | 2 ++ 2 files changed, 4 insertions(+) +diff --git a/fs/exec.c b/fs/exec.c +index 433b1257694a..352c1a6fa6a9 100644 --- a/fs/exec.c +++ b/fs/exec.c -@@ -1028,12 +1028,14 @@ static int exec_mmap(struct mm_struct *m +@@ -1028,12 +1028,14 @@ static int exec_mmap(struct mm_struct *mm) } } task_lock(tsk); @@ -51,6 +55,8 @@ Signed-off-by: Thomas Gleixner task_unlock(tsk); if 
(old_mm) { up_read(&old_mm->mmap_sem); +diff --git a/mm/mmu_context.c b/mm/mmu_context.c +index 3e612ae748e9..d0ccc070979f 100644 --- a/mm/mmu_context.c +++ b/mm/mmu_context.c @@ -25,6 +25,7 @@ void use_mm(struct mm_struct *mm) @@ -69,3 +75,6 @@ Signed-off-by: Thomas Gleixner task_unlock(tsk); #ifdef finish_arch_post_lock_switch finish_arch_post_lock_switch(); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0174-fs-dcache-bring-back-explicit-INIT_HLIST_BL_HEAD-in.patch b/kernel/patches-4.19.x-rt/0172-fs-dcache-bring-back-explicit-INIT_HLIST_BL_HEAD-ini.patch similarity index 79% rename from kernel/patches-4.19.x-rt/0174-fs-dcache-bring-back-explicit-INIT_HLIST_BL_HEAD-in.patch rename to kernel/patches-4.19.x-rt/0172-fs-dcache-bring-back-explicit-INIT_HLIST_BL_HEAD-ini.patch index 6c2589977..0d9b01181 100644 --- a/kernel/patches-4.19.x-rt/0174-fs-dcache-bring-back-explicit-INIT_HLIST_BL_HEAD-in.patch +++ b/kernel/patches-4.19.x-rt/0172-fs-dcache-bring-back-explicit-INIT_HLIST_BL_HEAD-ini.patch @@ -1,6 +1,8 @@ +From bbbfae78f8bad17199822dcfb994d1c927de5c32 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Wed, 13 Sep 2017 12:32:34 +0200 -Subject: [PATCH] fs/dcache: bring back explicit INIT_HLIST_BL_HEAD init +Subject: [PATCH 172/269] fs/dcache: bring back explicit INIT_HLIST_BL_HEAD + init Commit 3d375d78593c ("mm: update callers to use HASH_ZERO flag") removed INIT_HLIST_BL_HEAD and uses the ZERO flag instead for the init. However @@ -9,12 +11,14 @@ that. 
Signed-off-by: Sebastian Andrzej Siewior --- - fs/dcache.c | 11 +++++++++++ + fs/dcache.c | 11 +++++++++++ 1 file changed, 11 insertions(+) +diff --git a/fs/dcache.c b/fs/dcache.c +index cb515f183482..7e15f1bff5ea 100644 --- a/fs/dcache.c +++ b/fs/dcache.c -@@ -3058,6 +3058,8 @@ static int __init set_dhash_entries(char +@@ -3058,6 +3058,8 @@ __setup("dhash_entries=", set_dhash_entries); static void __init dcache_init_early(void) { @@ -23,7 +27,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* If hashes are distributed across NUMA nodes, defer * hash allocation until vmalloc space is available. */ -@@ -3074,11 +3076,16 @@ static void __init dcache_init_early(voi +@@ -3074,11 +3076,16 @@ static void __init dcache_init_early(void) NULL, 0, 0); @@ -51,3 +55,6 @@ Signed-off-by: Sebastian Andrzej Siewior d_hash_shift = 32 - d_hash_shift; } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0175-fs-dcache-disable-preemption-on-i_dir_seq-s-write-si.patch b/kernel/patches-4.19.x-rt/0173-fs-dcache-disable-preemption-on-i_dir_seq-s-write-si.patch similarity index 72% rename from kernel/patches-4.19.x-rt/0175-fs-dcache-disable-preemption-on-i_dir_seq-s-write-si.patch rename to kernel/patches-4.19.x-rt/0173-fs-dcache-disable-preemption-on-i_dir_seq-s-write-si.patch index 063fe84a6..6e927e56d 100644 --- a/kernel/patches-4.19.x-rt/0175-fs-dcache-disable-preemption-on-i_dir_seq-s-write-si.patch +++ b/kernel/patches-4.19.x-rt/0173-fs-dcache-disable-preemption-on-i_dir_seq-s-write-si.patch @@ -1,6 +1,8 @@ +From 2f25e633c3f100305735735e8f7728a335395f94 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Fri, 20 Oct 2017 11:29:53 +0200 -Subject: [PATCH] fs/dcache: disable preemption on i_dir_seq's write side +Subject: [PATCH 173/269] fs/dcache: disable preemption on i_dir_seq's write + side i_dir_seq is an opencoded seqcounter. 
Based on the code it looks like we could have two writers in parallel despite the fact that the d_lock is @@ -15,12 +17,14 @@ Cc: stable-rt@vger.kernel.org Reported-by: Oleg.Karfich@wago.com Signed-off-by: Sebastian Andrzej Siewior --- - fs/dcache.c | 12 +++++++----- - fs/inode.c | 2 +- - fs/libfs.c | 6 ++++-- - include/linux/fs.h | 2 +- + fs/dcache.c | 12 +++++++----- + fs/inode.c | 2 +- + fs/libfs.c | 6 ++++-- + include/linux/fs.h | 2 +- 4 files changed, 13 insertions(+), 9 deletions(-) +diff --git a/fs/dcache.c b/fs/dcache.c +index 7e15f1bff5ea..173b53b536f0 100644 --- a/fs/dcache.c +++ b/fs/dcache.c @@ -2400,9 +2400,10 @@ EXPORT_SYMBOL(d_rehash); @@ -36,7 +40,7 @@ Signed-off-by: Sebastian Andrzej Siewior return n; cpu_relax(); } -@@ -2410,7 +2411,8 @@ static inline unsigned start_dir_add(str +@@ -2410,7 +2411,8 @@ static inline unsigned start_dir_add(struct inode *dir) static inline void end_dir_add(struct inode *dir, unsigned n) { @@ -46,7 +50,7 @@ Signed-off-by: Sebastian Andrzej Siewior } static void d_wait_lookup(struct dentry *dentry) -@@ -2443,7 +2445,7 @@ struct dentry *d_alloc_parallel(struct d +@@ -2443,7 +2445,7 @@ struct dentry *d_alloc_parallel(struct dentry *parent, retry: rcu_read_lock(); @@ -55,7 +59,7 @@ Signed-off-by: Sebastian Andrzej Siewior r_seq = read_seqbegin(&rename_lock); dentry = __d_lookup_rcu(parent, name, &d_seq); if (unlikely(dentry)) { -@@ -2471,7 +2473,7 @@ struct dentry *d_alloc_parallel(struct d +@@ -2471,7 +2473,7 @@ struct dentry *d_alloc_parallel(struct dentry *parent, } hlist_bl_lock(b); @@ -64,9 +68,11 @@ Signed-off-by: Sebastian Andrzej Siewior hlist_bl_unlock(b); rcu_read_unlock(); goto retry; +diff --git a/fs/inode.c b/fs/inode.c +index 42f6d25f32a5..97f11df6ca6a 100644 --- a/fs/inode.c +++ b/fs/inode.c -@@ -155,7 +155,7 @@ int inode_init_always(struct super_block +@@ -155,7 +155,7 @@ int inode_init_always(struct super_block *sb, struct inode *inode) inode->i_bdev = NULL; inode->i_cdev = NULL; inode->i_link = NULL; @@ 
-75,9 +81,11 @@ Signed-off-by: Sebastian Andrzej Siewior inode->i_rdev = 0; inode->dirtied_when = 0; +diff --git a/fs/libfs.c b/fs/libfs.c +index 0fb590d79f30..cd95874a1952 100644 --- a/fs/libfs.c +++ b/fs/libfs.c -@@ -90,7 +90,7 @@ static struct dentry *next_positive(stru +@@ -90,7 +90,7 @@ static struct dentry *next_positive(struct dentry *parent, struct list_head *from, int count) { @@ -86,7 +94,7 @@ Signed-off-by: Sebastian Andrzej Siewior struct dentry *res; struct list_head *p; bool skipped; -@@ -123,8 +123,9 @@ static struct dentry *next_positive(stru +@@ -123,8 +123,9 @@ static struct dentry *next_positive(struct dentry *parent, static void move_cursor(struct dentry *cursor, struct list_head *after) { struct dentry *parent = cursor->d_parent; @@ -97,7 +105,7 @@ Signed-off-by: Sebastian Andrzej Siewior for (;;) { n = *seq; if (!(n & 1) && cmpxchg(seq, n, n + 1) == n) -@@ -137,6 +138,7 @@ static void move_cursor(struct dentry *c +@@ -137,6 +138,7 @@ static void move_cursor(struct dentry *cursor, struct list_head *after) else list_add_tail(&cursor->d_child, &parent->d_subdirs); smp_store_release(seq, n + 2); @@ -105,6 +113,8 @@ Signed-off-by: Sebastian Andrzej Siewior spin_unlock(&parent->d_lock); } +diff --git a/include/linux/fs.h b/include/linux/fs.h +index 7b6084854bfe..6782a83a8d4f 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -669,7 +669,7 @@ struct inode { @@ -116,3 +126,6 @@ Signed-off-by: Sebastian Andrzej Siewior }; __u32 i_generation; +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0176-squashfs-make-use-of-local-lock-in-multi_cpu-decompr.patch b/kernel/patches-4.19.x-rt/0174-squashfs-make-use-of-local-lock-in-multi_cpu-decompr.patch similarity index 81% rename from kernel/patches-4.19.x-rt/0176-squashfs-make-use-of-local-lock-in-multi_cpu-decompr.patch rename to kernel/patches-4.19.x-rt/0174-squashfs-make-use-of-local-lock-in-multi_cpu-decompr.patch index 9265989bb..d9e669185 100644 --- 
a/kernel/patches-4.19.x-rt/0176-squashfs-make-use-of-local-lock-in-multi_cpu-decompr.patch +++ b/kernel/patches-4.19.x-rt/0174-squashfs-make-use-of-local-lock-in-multi_cpu-decompr.patch @@ -1,6 +1,7 @@ +From cef566ebb92c429f8d12735d50bf7d6772daa4dc Mon Sep 17 00:00:00 2001 From: Julia Cartwright Date: Mon, 7 May 2018 08:58:57 -0500 -Subject: [PATCH] squashfs: make use of local lock in multi_cpu +Subject: [PATCH 174/269] squashfs: make use of local lock in multi_cpu decompressor Currently, the squashfs multi_cpu decompressor makes use of @@ -21,9 +22,11 @@ Tested-by: Alexander Stein Signed-off-by: Julia Cartwright Signed-off-by: Sebastian Andrzej Siewior --- - fs/squashfs/decompressor_multi_percpu.c | 16 ++++++++++++---- + fs/squashfs/decompressor_multi_percpu.c | 16 ++++++++++++---- 1 file changed, 12 insertions(+), 4 deletions(-) +diff --git a/fs/squashfs/decompressor_multi_percpu.c b/fs/squashfs/decompressor_multi_percpu.c +index 23a9c28ad8ea..6a73c4fa88e7 100644 --- a/fs/squashfs/decompressor_multi_percpu.c +++ b/fs/squashfs/decompressor_multi_percpu.c @@ -10,6 +10,7 @@ @@ -43,7 +46,7 @@ Signed-off-by: Sebastian Andrzej Siewior void *squashfs_decompressor_create(struct squashfs_sb_info *msblk, void *comp_opts) { -@@ -79,10 +82,15 @@ int squashfs_decompress(struct squashfs_ +@@ -79,10 +82,15 @@ int squashfs_decompress(struct squashfs_sb_info *msblk, struct buffer_head **bh, { struct squashfs_stream __percpu *percpu = (struct squashfs_stream __percpu *) msblk->stream; @@ -63,3 +66,6 @@ Signed-off-by: Sebastian Andrzej Siewior if (res < 0) ERROR("%s decompression failed, data probably corrupt\n", +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0177-thermal-Defer-thermal-wakups-to-threads.patch b/kernel/patches-4.19.x-rt/0175-thermal-Defer-thermal-wakups-to-threads.patch similarity index 81% rename from kernel/patches-4.19.x-rt/0177-thermal-Defer-thermal-wakups-to-threads.patch rename to kernel/patches-4.19.x-rt/0175-thermal-Defer-thermal-wakups-to-threads.patch 
index f7067c4a0..2774e13e3 100644 --- a/kernel/patches-4.19.x-rt/0177-thermal-Defer-thermal-wakups-to-threads.patch +++ b/kernel/patches-4.19.x-rt/0175-thermal-Defer-thermal-wakups-to-threads.patch @@ -1,6 +1,7 @@ +From 63284d578bc862d28f5f85f74fdc9fdadc90bea3 Mon Sep 17 00:00:00 2001 From: Daniel Wagner Date: Tue, 17 Feb 2015 09:37:44 +0100 -Subject: thermal: Defer thermal wakups to threads +Subject: [PATCH 175/269] thermal: Defer thermal wakups to threads On RT the spin lock in pkg_temp_thermal_platfrom_thermal_notify will call schedule while we run in irq context. @@ -23,9 +24,11 @@ Signed-off-by: Daniel Wagner [bigeasy: reoder init/denit position. TODO: flush swork on exit] Signed-off-by: Sebastian Andrzej Siewior --- - drivers/thermal/x86_pkg_temp_thermal.c | 52 +++++++++++++++++++++++++++++++-- + drivers/thermal/x86_pkg_temp_thermal.c | 52 ++++++++++++++++++++++++-- 1 file changed, 49 insertions(+), 3 deletions(-) +diff --git a/drivers/thermal/x86_pkg_temp_thermal.c b/drivers/thermal/x86_pkg_temp_thermal.c +index 1ef937d799e4..a5991cbb408f 100644 --- a/drivers/thermal/x86_pkg_temp_thermal.c +++ b/drivers/thermal/x86_pkg_temp_thermal.c @@ -29,6 +29,7 @@ @@ -36,7 +39,7 @@ Signed-off-by: Sebastian Andrzej Siewior #include #include -@@ -329,7 +330,7 @@ static void pkg_thermal_schedule_work(in +@@ -329,7 +330,7 @@ static void pkg_thermal_schedule_work(int cpu, struct delayed_work *work) schedule_delayed_work_on(cpu, work, ms); } @@ -45,7 +48,7 @@ Signed-off-by: Sebastian Andrzej Siewior { int cpu = smp_processor_id(); struct pkg_device *pkgdev; -@@ -348,9 +349,47 @@ static int pkg_thermal_notify(u64 msr_va +@@ -348,9 +349,47 @@ static int pkg_thermal_notify(u64 msr_val) } spin_unlock_irqrestore(&pkg_temp_lock, flags); @@ -93,7 +96,7 @@ Signed-off-by: Sebastian Andrzej Siewior static int pkg_temp_thermal_device_add(unsigned int cpu) { int pkgid = topology_logical_package_id(cpu); -@@ -515,11 +554,16 @@ static int __init pkg_temp_thermal_init( +@@ -515,11 +554,16 @@ 
static int __init pkg_temp_thermal_init(void) if (!x86_match_cpu(pkg_temp_thermal_ids)) return -ENODEV; @@ -112,7 +115,7 @@ Signed-off-by: Sebastian Andrzej Siewior ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "thermal/x86_pkg:online", pkg_thermal_cpu_online, pkg_thermal_cpu_offline); -@@ -537,6 +581,7 @@ static int __init pkg_temp_thermal_init( +@@ -537,6 +581,7 @@ static int __init pkg_temp_thermal_init(void) return 0; err: @@ -120,7 +123,7 @@ Signed-off-by: Sebastian Andrzej Siewior kfree(packages); return ret; } -@@ -550,6 +595,7 @@ static void __exit pkg_temp_thermal_exit +@@ -550,6 +595,7 @@ static void __exit pkg_temp_thermal_exit(void) cpuhp_remove_state(pkg_thermal_hp_state); debugfs_remove_recursive(debugfs); kfree(packages); @@ -128,3 +131,6 @@ Signed-off-by: Sebastian Andrzej Siewior } module_exit(pkg_temp_thermal_exit) +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0178-x86-fpu-Disable-preemption-around-local_bh_disable.patch b/kernel/patches-4.19.x-rt/0176-x86-fpu-Disable-preemption-around-local_bh_disable.patch similarity index 65% rename from kernel/patches-4.19.x-rt/0178-x86-fpu-Disable-preemption-around-local_bh_disable.patch rename to kernel/patches-4.19.x-rt/0176-x86-fpu-Disable-preemption-around-local_bh_disable.patch index 39ea6a637..6fbed2ef9 100644 --- a/kernel/patches-4.19.x-rt/0178-x86-fpu-Disable-preemption-around-local_bh_disable.patch +++ b/kernel/patches-4.19.x-rt/0176-x86-fpu-Disable-preemption-around-local_bh_disable.patch @@ -1,6 +1,7 @@ +From ac8e13bf3ba7c4ef2587d4b8932ca56d30ca4841 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Tue, 11 Dec 2018 15:10:33 +0100 -Subject: [PATCH] x86/fpu: Disable preemption around local_bh_disable() +Subject: [PATCH 176/269] x86/fpu: Disable preemption around local_bh_disable() __fpu__restore_sig() restores the content of the FPU state in the CPUs and in order to avoid concurency it disbles BH. 
On !RT it also disables @@ -11,12 +12,14 @@ Add preempt_disable() while the FPU state is restored. Cc: stable-rt@vger.kernel.org Signed-off-by: Sebastian Andrzej Siewior --- - arch/x86/kernel/fpu/signal.c | 2 ++ + arch/x86/kernel/fpu/signal.c | 2 ++ 1 file changed, 2 insertions(+) +diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c +index d99a8ee9e185..5e0274a94133 100644 --- a/arch/x86/kernel/fpu/signal.c +++ b/arch/x86/kernel/fpu/signal.c -@@ -344,10 +344,12 @@ static int __fpu__restore_sig(void __use +@@ -344,10 +344,12 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size) sanitize_restored_xstate(tsk, &env, xfeatures, fx_only); } @@ -29,3 +32,6 @@ Signed-off-by: Sebastian Andrzej Siewior return err; } else { +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0179-epoll-use-get-cpu-light.patch b/kernel/patches-4.19.x-rt/0177-fs-epoll-Do-not-disable-preemption-on-RT.patch similarity index 62% rename from kernel/patches-4.19.x-rt/0179-epoll-use-get-cpu-light.patch rename to kernel/patches-4.19.x-rt/0177-fs-epoll-Do-not-disable-preemption-on-RT.patch index 79e571153..f893c3102 100644 --- a/kernel/patches-4.19.x-rt/0179-epoll-use-get-cpu-light.patch +++ b/kernel/patches-4.19.x-rt/0177-fs-epoll-Do-not-disable-preemption-on-RT.patch @@ -1,6 +1,7 @@ -Subject: fs/epoll: Do not disable preemption on RT +From 364aac82cf51da276aaf325fbcc1d837b41ebd6d Mon Sep 17 00:00:00 2001 From: Thomas Gleixner -Date: Fri, 08 Jul 2011 16:35:35 +0200 +Date: Fri, 8 Jul 2011 16:35:35 +0200 +Subject: [PATCH 177/269] fs/epoll: Do not disable preemption on RT ep_call_nested() takes a sleeping lock so we can't disable preemption. The light version is enough since ep_call_nested() doesn't mind beeing @@ -8,12 +9,14 @@ invoked twice on the same CPU. 
Signed-off-by: Thomas Gleixner --- - fs/eventpoll.c | 4 ++-- + fs/eventpoll.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) +diff --git a/fs/eventpoll.c b/fs/eventpoll.c +index 58f48ea0db23..a41120a34e6d 100644 --- a/fs/eventpoll.c +++ b/fs/eventpoll.c -@@ -571,12 +571,12 @@ static int ep_poll_wakeup_proc(void *pri +@@ -571,12 +571,12 @@ static int ep_poll_wakeup_proc(void *priv, void *cookie, int call_nests) static void ep_poll_safewake(wait_queue_head_t *wq) { @@ -28,3 +31,6 @@ Signed-off-by: Thomas Gleixner } #else +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0180-mm-vmalloc-use-get-cpu-light.patch b/kernel/patches-4.19.x-rt/0178-mm-vmalloc-Another-preempt-disable-region-which-suck.patch similarity index 66% rename from kernel/patches-4.19.x-rt/0180-mm-vmalloc-use-get-cpu-light.patch rename to kernel/patches-4.19.x-rt/0178-mm-vmalloc-Another-preempt-disable-region-which-suck.patch index d874456ca..c5e8b74c7 100644 --- a/kernel/patches-4.19.x-rt/0180-mm-vmalloc-use-get-cpu-light.patch +++ b/kernel/patches-4.19.x-rt/0178-mm-vmalloc-Another-preempt-disable-region-which-suck.patch @@ -1,18 +1,22 @@ -Subject: mm/vmalloc: Another preempt disable region which sucks +From 27414c4ed0a59bb7044e708938c07d3141da2f38 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Tue, 12 Jul 2011 11:39:36 +0200 +Subject: [PATCH 178/269] mm/vmalloc: Another preempt disable region which + sucks Avoid the preempt disable version of get_cpu_var(). The inner-lock should provide enough serialisation. 
Signed-off-by: Thomas Gleixner --- - mm/vmalloc.c | 13 ++++++++----- + mm/vmalloc.c | 13 ++++++++----- 1 file changed, 8 insertions(+), 5 deletions(-) +diff --git a/mm/vmalloc.c b/mm/vmalloc.c +index a46ec261a44e..5c6939cc28b7 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c -@@ -848,7 +848,7 @@ static void *new_vmap_block(unsigned int +@@ -852,7 +852,7 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask) struct vmap_block *vb; struct vmap_area *va; unsigned long vb_idx; @@ -21,7 +25,7 @@ Signed-off-by: Thomas Gleixner void *vaddr; node = numa_node_id(); -@@ -891,11 +891,12 @@ static void *new_vmap_block(unsigned int +@@ -895,11 +895,12 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask) BUG_ON(err); radix_tree_preload_end(); @@ -36,7 +40,7 @@ Signed-off-by: Thomas Gleixner return vaddr; } -@@ -964,6 +965,7 @@ static void *vb_alloc(unsigned long size +@@ -968,6 +969,7 @@ static void *vb_alloc(unsigned long size, gfp_t gfp_mask) struct vmap_block *vb; void *vaddr = NULL; unsigned int order; @@ -44,7 +48,7 @@ Signed-off-by: Thomas Gleixner BUG_ON(offset_in_page(size)); BUG_ON(size > PAGE_SIZE*VMAP_MAX_ALLOC); -@@ -978,7 +980,8 @@ static void *vb_alloc(unsigned long size +@@ -982,7 +984,8 @@ static void *vb_alloc(unsigned long size, gfp_t gfp_mask) order = get_order(size); rcu_read_lock(); @@ -54,7 +58,7 @@ Signed-off-by: Thomas Gleixner list_for_each_entry_rcu(vb, &vbq->free, free_list) { unsigned long pages_off; -@@ -1001,7 +1004,7 @@ static void *vb_alloc(unsigned long size +@@ -1005,7 +1008,7 @@ static void *vb_alloc(unsigned long size, gfp_t gfp_mask) break; } @@ -63,3 +67,6 @@ Signed-off-by: Thomas Gleixner rcu_read_unlock(); /* Allocate new block if nothing was found */ +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0181-block-mq-use-cpu_light.patch b/kernel/patches-4.19.x-rt/0179-block-mq-use-cpu_light.patch similarity index 67% rename from kernel/patches-4.19.x-rt/0181-block-mq-use-cpu_light.patch rename to 
kernel/patches-4.19.x-rt/0179-block-mq-use-cpu_light.patch index ddcd4e826..83d65236d 100644 --- a/kernel/patches-4.19.x-rt/0181-block-mq-use-cpu_light.patch +++ b/kernel/patches-4.19.x-rt/0179-block-mq-use-cpu_light.patch @@ -1,18 +1,21 @@ +From 42ff48e7b8242871b11a0c7c5e8753c702c8aee5 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Wed, 9 Apr 2014 10:37:23 +0200 -Subject: block: mq: use cpu_light() +Subject: [PATCH 179/269] block: mq: use cpu_light() there is a might sleep splat because get_cpu() disables preemption and later we grab a lock. As a workaround for this we use get_cpu_light(). Signed-off-by: Sebastian Andrzej Siewior --- - block/blk-mq.h | 4 ++-- + block/blk-mq.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) +diff --git a/block/blk-mq.h b/block/blk-mq.h +index 9497b47e2526..e55c8599b90b 100644 --- a/block/blk-mq.h +++ b/block/blk-mq.h -@@ -113,12 +113,12 @@ static inline struct blk_mq_ctx *__blk_m +@@ -113,12 +113,12 @@ static inline struct blk_mq_ctx *__blk_mq_get_ctx(struct request_queue *q, */ static inline struct blk_mq_ctx *blk_mq_get_ctx(struct request_queue *q) { @@ -27,3 +30,6 @@ Signed-off-by: Sebastian Andrzej Siewior } struct blk_mq_alloc_data { +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0182-block-mq-drop-preempt-disable.patch b/kernel/patches-4.19.x-rt/0180-block-mq-do-not-invoke-preempt_disable.patch similarity index 68% rename from kernel/patches-4.19.x-rt/0182-block-mq-drop-preempt-disable.patch rename to kernel/patches-4.19.x-rt/0180-block-mq-do-not-invoke-preempt_disable.patch index 8f28f7118..82ea47664 100644 --- a/kernel/patches-4.19.x-rt/0182-block-mq-drop-preempt-disable.patch +++ b/kernel/patches-4.19.x-rt/0180-block-mq-do-not-invoke-preempt_disable.patch @@ -1,6 +1,7 @@ +From 1574b433606302c16705ba46441b23c6f286e3a0 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Tue, 14 Jul 2015 14:26:34 +0200 -Subject: block/mq: do not invoke preempt_disable() +Subject: [PATCH 180/269] 
block/mq: do not invoke preempt_disable() preempt_disable() and get_cpu() don't play well together with the sleeping locks it tries to allocate later. @@ -8,12 +9,14 @@ It seems to be enough to replace it with get_cpu_light() and migrate_disable(). Signed-off-by: Sebastian Andrzej Siewior --- - block/blk-mq.c | 10 +++++----- + block/blk-mq.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) +diff --git a/block/blk-mq.c b/block/blk-mq.c +index b0d0b74cf5a6..430037cda971 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c -@@ -570,7 +570,7 @@ static void __blk_mq_complete_request(st +@@ -570,7 +570,7 @@ static void __blk_mq_complete_request(struct request *rq) return; } @@ -22,7 +25,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (!test_bit(QUEUE_FLAG_SAME_FORCE, &rq->q->queue_flags)) shared = cpus_share_cache(cpu, ctx->cpu); -@@ -582,7 +582,7 @@ static void __blk_mq_complete_request(st +@@ -582,7 +582,7 @@ static void __blk_mq_complete_request(struct request *rq) } else { rq->q->softirq_done_fn(rq); } @@ -31,7 +34,7 @@ Signed-off-by: Sebastian Andrzej Siewior } static void hctx_unlock(struct blk_mq_hw_ctx *hctx, int srcu_idx) -@@ -1360,14 +1360,14 @@ static void __blk_mq_delay_run_hw_queue( +@@ -1368,14 +1368,14 @@ static void __blk_mq_delay_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async, return; if (!async && !(hctx->flags & BLK_MQ_F_BLOCKING)) { @@ -49,3 +52,6 @@ Signed-off-by: Sebastian Andrzej Siewior } kblockd_mod_delayed_work_on(blk_mq_hctx_next_cpu(hctx), &hctx->run_work, +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0183-block-mq-don-t-complete-requests-via-IPI.patch b/kernel/patches-4.19.x-rt/0181-block-mq-don-t-complete-requests-via-IPI.patch similarity index 69% rename from kernel/patches-4.19.x-rt/0183-block-mq-don-t-complete-requests-via-IPI.patch rename to kernel/patches-4.19.x-rt/0181-block-mq-don-t-complete-requests-via-IPI.patch index f0edb21b8..3564ecb61 100644 --- 
a/kernel/patches-4.19.x-rt/0183-block-mq-don-t-complete-requests-via-IPI.patch +++ b/kernel/patches-4.19.x-rt/0181-block-mq-don-t-complete-requests-via-IPI.patch @@ -1,21 +1,24 @@ +From 9ec5d3b932b407e0b6780392ddb1f7f2fe1251e4 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Thu, 29 Jan 2015 15:10:08 +0100 -Subject: block/mq: don't complete requests via IPI +Subject: [PATCH 181/269] block/mq: don't complete requests via IPI The IPI runs in hardirq context and there are sleeping locks. This patch moves the completion into a workqueue. Signed-off-by: Sebastian Andrzej Siewior --- - block/blk-core.c | 3 +++ - block/blk-mq.c | 23 +++++++++++++++++++++++ - include/linux/blk-mq.h | 2 +- - include/linux/blkdev.h | 3 +++ + block/blk-core.c | 3 +++ + block/blk-mq.c | 23 +++++++++++++++++++++++ + include/linux/blk-mq.h | 2 +- + include/linux/blkdev.h | 3 +++ 4 files changed, 30 insertions(+), 1 deletion(-) +diff --git a/block/blk-core.c b/block/blk-core.c +index eb8b52241453..581bf704154a 100644 --- a/block/blk-core.c +++ b/block/blk-core.c -@@ -189,6 +189,9 @@ void blk_rq_init(struct request_queue *q +@@ -189,6 +189,9 @@ void blk_rq_init(struct request_queue *q, struct request *rq) INIT_LIST_HEAD(&rq->queuelist); INIT_LIST_HEAD(&rq->timeout_list); @@ -25,9 +28,11 @@ Signed-off-by: Sebastian Andrzej Siewior rq->cpu = -1; rq->q = q; rq->__sector = (sector_t) -1; +diff --git a/block/blk-mq.c b/block/blk-mq.c +index 430037cda971..9560ebae322d 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c -@@ -320,6 +320,9 @@ static struct request *blk_mq_rq_ctx_ini +@@ -320,6 +320,9 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data, rq->extra_len = 0; rq->__deadline = 0; @@ -37,7 +42,7 @@ Signed-off-by: Sebastian Andrzej Siewior INIT_LIST_HEAD(&rq->timeout_list); rq->timeout = 0; -@@ -547,12 +550,24 @@ void blk_mq_end_request(struct request * +@@ -547,12 +550,24 @@ void blk_mq_end_request(struct request *rq, blk_status_t error) } 
EXPORT_SYMBOL(blk_mq_end_request); @@ -62,7 +67,7 @@ Signed-off-by: Sebastian Andrzej Siewior static void __blk_mq_complete_request(struct request *rq) { -@@ -575,10 +590,18 @@ static void __blk_mq_complete_request(st +@@ -575,10 +590,18 @@ static void __blk_mq_complete_request(struct request *rq) shared = cpus_share_cache(cpu, ctx->cpu); if (cpu != ctx->cpu && !shared && cpu_online(ctx->cpu)) { @@ -81,9 +86,11 @@ Signed-off-by: Sebastian Andrzej Siewior } else { rq->q->softirq_done_fn(rq); } +diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h +index 1da59c16f637..04c15b5ca76c 100644 --- a/include/linux/blk-mq.h +++ b/include/linux/blk-mq.h -@@ -249,7 +249,7 @@ static inline u16 blk_mq_unique_tag_to_t +@@ -249,7 +249,7 @@ static inline u16 blk_mq_unique_tag_to_tag(u32 unique_tag) return unique_tag & BLK_MQ_UNIQUE_TAG_MASK; } @@ -92,6 +99,8 @@ Signed-off-by: Sebastian Andrzej Siewior int blk_mq_request_started(struct request *rq); void blk_mq_start_request(struct request *rq); void blk_mq_end_request(struct request *rq, blk_status_t error); +diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h +index 6980014357d4..f93ae914abda 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -149,6 +149,9 @@ enum mq_rq_state { @@ -104,3 +113,6 @@ Signed-off-by: Sebastian Andrzej Siewior struct blk_mq_ctx *mq_ctx; int cpu; +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0184-md-raid5-percpu-handling-rt-aware.patch b/kernel/patches-4.19.x-rt/0182-md-raid5-Make-raid5_percpu-handling-RT-aware.patch similarity index 70% rename from kernel/patches-4.19.x-rt/0184-md-raid5-percpu-handling-rt-aware.patch rename to kernel/patches-4.19.x-rt/0182-md-raid5-Make-raid5_percpu-handling-RT-aware.patch index 81991cdfa..bd6ec9e39 100644 --- a/kernel/patches-4.19.x-rt/0184-md-raid5-percpu-handling-rt-aware.patch +++ b/kernel/patches-4.19.x-rt/0182-md-raid5-Make-raid5_percpu-handling-RT-aware.patch @@ -1,6 +1,7 @@ +From 6c971609e903127436e633a14252b0f3cf42c919 
Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Tue, 6 Apr 2010 16:51:31 +0200 -Subject: md: raid5: Make raid5_percpu handling RT aware +Subject: [PATCH 182/269] md: raid5: Make raid5_percpu handling RT aware __raid_run_ops() disables preemption with get_cpu() around the access to the raid5_percpu variables. That causes scheduling while atomic @@ -12,15 +13,16 @@ preemptible. Reported-by: Udo van den Heuvel Signed-off-by: Thomas Gleixner Tested-by: Udo van den Heuvel - --- - drivers/md/raid5.c | 8 +++++--- - drivers/md/raid5.h | 1 + + drivers/md/raid5.c | 8 +++++--- + drivers/md/raid5.h | 1 + 2 files changed, 6 insertions(+), 3 deletions(-) +diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c +index ae38895c44b2..abc559dc516f 100644 --- a/drivers/md/raid5.c +++ b/drivers/md/raid5.c -@@ -2069,8 +2069,9 @@ static void raid_run_ops(struct stripe_h +@@ -2069,8 +2069,9 @@ static void raid_run_ops(struct stripe_head *sh, unsigned long ops_request) struct raid5_percpu *percpu; unsigned long cpu; @@ -31,7 +33,7 @@ Tested-by: Udo van den Heuvel if (test_bit(STRIPE_OP_BIOFILL, &ops_request)) { ops_run_biofill(sh); overlap_clear++; -@@ -2129,7 +2130,8 @@ static void raid_run_ops(struct stripe_h +@@ -2129,7 +2130,8 @@ static void raid_run_ops(struct stripe_head *sh, unsigned long ops_request) if (test_and_clear_bit(R5_Overlap, &dev->flags)) wake_up(&sh->raid_conf->wait_for_overlap); } @@ -41,7 +43,7 @@ Tested-by: Udo van den Heuvel } static void free_stripe(struct kmem_cache *sc, struct stripe_head *sh) -@@ -6803,6 +6805,7 @@ static int raid456_cpu_up_prepare(unsign +@@ -6803,6 +6805,7 @@ static int raid456_cpu_up_prepare(unsigned int cpu, struct hlist_node *node) __func__, cpu); return -ENOMEM; } @@ -49,7 +51,7 @@ Tested-by: Udo van den Heuvel return 0; } -@@ -6813,7 +6816,6 @@ static int raid5_alloc_percpu(struct r5c +@@ -6813,7 +6816,6 @@ static int raid5_alloc_percpu(struct r5conf *conf) conf->percpu = alloc_percpu(struct raid5_percpu); if (!conf->percpu) return 
-ENOMEM; @@ -57,6 +59,8 @@ Tested-by: Udo van den Heuvel err = cpuhp_state_add_instance(CPUHP_MD_RAID5_PREPARE, &conf->node); if (!err) { conf->scribble_disks = max(conf->raid_disks, +diff --git a/drivers/md/raid5.h b/drivers/md/raid5.h +index 8474c224127b..a3bf907ab2af 100644 --- a/drivers/md/raid5.h +++ b/drivers/md/raid5.h @@ -637,6 +637,7 @@ struct r5conf { @@ -67,3 +71,6 @@ Tested-by: Udo van den Heuvel struct page *spare_page; /* Used when checking P/Q in raid6 */ struct flex_array *scribble; /* space for constructing buffer * lists and performing address +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0185-rt-introduce-cpu-chill.patch b/kernel/patches-4.19.x-rt/0183-rt-Introduce-cpu_chill.patch similarity index 85% rename from kernel/patches-4.19.x-rt/0185-rt-introduce-cpu-chill.patch rename to kernel/patches-4.19.x-rt/0183-rt-Introduce-cpu_chill.patch index f96cff930..902762262 100644 --- a/kernel/patches-4.19.x-rt/0185-rt-introduce-cpu-chill.patch +++ b/kernel/patches-4.19.x-rt/0183-rt-Introduce-cpu_chill.patch @@ -1,6 +1,7 @@ -Subject: rt: Introduce cpu_chill() +From 70f8f6e166aff0215e6e440d9365f8ce0ade2336 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner -Date: Wed, 07 Mar 2012 20:51:03 +0100 +Date: Wed, 7 Mar 2012 20:51:03 +0100 +Subject: [PATCH 183/269] rt: Introduce cpu_chill() Retry loops on RT might loop forever when the modifying side was preempted. Add cpu_chill() to replace cpu_relax(). 
cpu_chill() @@ -55,13 +56,15 @@ Signed-off-by: Thomas Gleixner Signed-off-by: Steven Rostedt Signed-off-by: Sebastian Andrzej Siewior --- - include/linux/delay.h | 6 ++++++ - kernel/time/hrtimer.c | 21 +++++++++++++++++++++ + include/linux/delay.h | 6 ++++++ + kernel/time/hrtimer.c | 21 +++++++++++++++++++++ 2 files changed, 27 insertions(+) +diff --git a/include/linux/delay.h b/include/linux/delay.h +index b78bab4395d8..7c4bc414a504 100644 --- a/include/linux/delay.h +++ b/include/linux/delay.h -@@ -64,4 +64,10 @@ static inline void ssleep(unsigned int s +@@ -64,4 +64,10 @@ static inline void ssleep(unsigned int seconds) msleep(seconds * 1000); } @@ -72,9 +75,11 @@ Signed-off-by: Sebastian Andrzej Siewior +#endif + #endif /* defined(_LINUX_DELAY_H) */ +diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c +index cfa3599fa789..851b2134e77f 100644 --- a/kernel/time/hrtimer.c +++ b/kernel/time/hrtimer.c -@@ -1894,6 +1894,27 @@ COMPAT_SYSCALL_DEFINE2(nanosleep, struct +@@ -1894,6 +1894,27 @@ COMPAT_SYSCALL_DEFINE2(nanosleep, struct compat_timespec __user *, rqtp, } #endif @@ -102,3 +107,6 @@ Signed-off-by: Sebastian Andrzej Siewior /* * Functions related to boot-time initialization: */ +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0186-hrtimer-Don-t-lose-state-in-cpu_chill.patch b/kernel/patches-4.19.x-rt/0184-hrtimer-Don-t-lose-state-in-cpu_chill.patch similarity index 82% rename from kernel/patches-4.19.x-rt/0186-hrtimer-Don-t-lose-state-in-cpu_chill.patch rename to kernel/patches-4.19.x-rt/0184-hrtimer-Don-t-lose-state-in-cpu_chill.patch index 482fe7452..93d0dc1d8 100644 --- a/kernel/patches-4.19.x-rt/0186-hrtimer-Don-t-lose-state-in-cpu_chill.patch +++ b/kernel/patches-4.19.x-rt/0184-hrtimer-Don-t-lose-state-in-cpu_chill.patch @@ -1,6 +1,7 @@ +From 420f45d08b300f698438e0a208f03e0f89aa8009 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Tue, 19 Feb 2019 16:59:15 +0100 -Subject: [PATCH] hrtimer: Don't lose state in cpu_chill() +Subject: 
[PATCH 184/269] hrtimer: Don't lose state in cpu_chill() In cpu_chill() the state is set to TASK_UNINTERRUPTIBLE and a timer is programmed. On return the state is always TASK_RUNNING which means we @@ -14,9 +15,11 @@ state in order to avoid updating ->task_state_change. Cc: stable-rt@vger.kernel.org Signed-off-by: Sebastian Andrzej Siewior --- - kernel/time/hrtimer.c | 5 ++++- + kernel/time/hrtimer.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) +diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c +index 851b2134e77f..6f2736ec4b8e 100644 --- a/kernel/time/hrtimer.c +++ b/kernel/time/hrtimer.c @@ -1902,15 +1902,18 @@ void cpu_chill(void) @@ -39,3 +42,6 @@ Signed-off-by: Sebastian Andrzej Siewior } EXPORT_SYMBOL(cpu_chill); #endif +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0187-hrtimer-cpu_chill-save-task-state-in-saved_state.patch b/kernel/patches-4.19.x-rt/0185-hrtimer-cpu_chill-save-task-state-in-saved_state.patch similarity index 83% rename from kernel/patches-4.19.x-rt/0187-hrtimer-cpu_chill-save-task-state-in-saved_state.patch rename to kernel/patches-4.19.x-rt/0185-hrtimer-cpu_chill-save-task-state-in-saved_state.patch index 350e7762a..e0a10d096 100644 --- a/kernel/patches-4.19.x-rt/0187-hrtimer-cpu_chill-save-task-state-in-saved_state.patch +++ b/kernel/patches-4.19.x-rt/0185-hrtimer-cpu_chill-save-task-state-in-saved_state.patch @@ -1,6 +1,8 @@ +From 39c4c7819a0377ee59a1197664454bc54012907b Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Tue, 26 Feb 2019 12:31:10 +0100 -Subject: [PATCH] hrtimer: cpu_chill(): save task state in ->saved_state() +Subject: [PATCH 185/269] hrtimer: cpu_chill(): save task state in + ->saved_state() In the previous change I saved the current task state on stack. This was bad because while the task is scheduled-out it might receive a wake-up. 
@@ -14,12 +16,14 @@ Reported-by: Mike Galbraith Tested-by: Mike Galbraith Signed-off-by: Sebastian Andrzej Siewior --- - kernel/time/hrtimer.c | 18 +++++++++++++----- + kernel/time/hrtimer.c | 18 +++++++++++++----- 1 file changed, 13 insertions(+), 5 deletions(-) +diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c +index 6f2736ec4b8e..e1040b80362c 100644 --- a/kernel/time/hrtimer.c +++ b/kernel/time/hrtimer.c -@@ -1900,20 +1900,28 @@ COMPAT_SYSCALL_DEFINE2(nanosleep, struct +@@ -1900,20 +1900,28 @@ COMPAT_SYSCALL_DEFINE2(nanosleep, struct compat_timespec __user *, rqtp, */ void cpu_chill(void) { @@ -53,3 +57,6 @@ Signed-off-by: Sebastian Andrzej Siewior } EXPORT_SYMBOL(cpu_chill); #endif +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0188-block-blk-mq-move-blk_queue_usage_counter_release-in.patch b/kernel/patches-4.19.x-rt/0186-block-blk-mq-move-blk_queue_usage_counter_release-in.patch similarity index 87% rename from kernel/patches-4.19.x-rt/0188-block-blk-mq-move-blk_queue_usage_counter_release-in.patch rename to kernel/patches-4.19.x-rt/0186-block-blk-mq-move-blk_queue_usage_counter_release-in.patch index a23254b1a..d82e8df3b 100644 --- a/kernel/patches-4.19.x-rt/0188-block-blk-mq-move-blk_queue_usage_counter_release-in.patch +++ b/kernel/patches-4.19.x-rt/0186-block-blk-mq-move-blk_queue_usage_counter_release-in.patch @@ -1,6 +1,7 @@ +From 3933bc43d3be58eb86a118b1bd147cd4a2c9b33d Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Tue, 13 Mar 2018 13:49:16 +0100 -Subject: [PATCH] block: blk-mq: move blk_queue_usage_counter_release() +Subject: [PATCH 186/269] block: blk-mq: move blk_queue_usage_counter_release() into process context | BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:914 @@ -45,13 +46,15 @@ The wq_has_sleeper() check has been suggested by Peter Zijlstra. 
Signed-off-by: Sebastian Andrzej Siewior --- - block/blk-core.c | 14 +++++++++++++- - include/linux/blkdev.h | 2 ++ + block/blk-core.c | 14 +++++++++++++- + include/linux/blkdev.h | 2 ++ 2 files changed, 15 insertions(+), 1 deletion(-) +diff --git a/block/blk-core.c b/block/blk-core.c +index 581bf704154a..0a651b442cec 100644 --- a/block/blk-core.c +++ b/block/blk-core.c -@@ -968,12 +968,21 @@ void blk_queue_exit(struct request_queue +@@ -968,12 +968,21 @@ void blk_queue_exit(struct request_queue *q) percpu_ref_put(&q->q_usage_counter); } @@ -74,7 +77,7 @@ Signed-off-by: Sebastian Andrzej Siewior } static void blk_rq_timed_out_timer(struct timer_list *t) -@@ -1066,6 +1075,7 @@ struct request_queue *blk_alloc_queue_no +@@ -1066,6 +1075,7 @@ struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id, queue_flag_set_unlocked(QUEUE_FLAG_BYPASS, q); init_waitqueue_head(&q->mq_freeze_wq); @@ -91,6 +94,8 @@ Signed-off-by: Sebastian Andrzej Siewior request_cachep = kmem_cache_create("blkdev_requests", sizeof(struct request), 0, SLAB_PANIC, NULL); +diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h +index f93ae914abda..940c794042ae 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -27,6 +27,7 @@ @@ -109,3 +114,6 @@ Signed-off-by: Sebastian Andrzej Siewior struct percpu_ref q_usage_counter; struct list_head all_q_node; +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0189-block-use-cpu-chill.patch b/kernel/patches-4.19.x-rt/0187-block-Use-cpu_chill-for-retry-loops.patch similarity index 70% rename from kernel/patches-4.19.x-rt/0189-block-use-cpu-chill.patch rename to kernel/patches-4.19.x-rt/0187-block-Use-cpu_chill-for-retry-loops.patch index 83b2351bd..8b194178d 100644 --- a/kernel/patches-4.19.x-rt/0189-block-use-cpu-chill.patch +++ b/kernel/patches-4.19.x-rt/0187-block-Use-cpu_chill-for-retry-loops.patch @@ -1,6 +1,7 @@ -Subject: block: Use cpu_chill() for retry loops +From 608d51b75238d882851b21f980b37aa54d26620e Mon Sep 17 
00:00:00 2001 From: Thomas Gleixner Date: Thu, 20 Dec 2012 18:28:26 +0100 +Subject: [PATCH 187/269] block: Use cpu_chill() for retry loops Retry loops on RT might loop forever when the modifying side was preempted. Steven also observed a live lock when there was a @@ -10,11 +11,12 @@ Use cpu_chill() instead of cpu_relax() to let the system make progress. Signed-off-by: Thomas Gleixner - --- - block/blk-ioc.c | 5 +++-- + block/blk-ioc.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) +diff --git a/block/blk-ioc.c b/block/blk-ioc.c +index 01580f88fcb3..98d87e52ccdc 100644 --- a/block/blk-ioc.c +++ b/block/blk-ioc.c @@ -9,6 +9,7 @@ @@ -25,7 +27,7 @@ Signed-off-by: Thomas Gleixner #include "blk.h" -@@ -118,7 +119,7 @@ static void ioc_release_fn(struct work_s +@@ -118,7 +119,7 @@ static void ioc_release_fn(struct work_struct *work) spin_unlock(q->queue_lock); } else { spin_unlock_irqrestore(&ioc->lock, flags); @@ -34,7 +36,7 @@ Signed-off-by: Thomas Gleixner spin_lock_irqsave_nested(&ioc->lock, flags, 1); } } -@@ -202,7 +203,7 @@ void put_io_context_active(struct io_con +@@ -202,7 +203,7 @@ void put_io_context_active(struct io_context *ioc) spin_unlock(icq->q->queue_lock); } else { spin_unlock_irqrestore(&ioc->lock, flags); @@ -43,3 +45,6 @@ Signed-off-by: Thomas Gleixner goto retry; } } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0190-fs-dcache-use-cpu-chill-in-trylock-loops.patch b/kernel/patches-4.19.x-rt/0188-fs-dcache-Use-cpu_chill-in-trylock-loops.patch similarity index 73% rename from kernel/patches-4.19.x-rt/0190-fs-dcache-use-cpu-chill-in-trylock-loops.patch rename to kernel/patches-4.19.x-rt/0188-fs-dcache-Use-cpu_chill-in-trylock-loops.patch index a027688b9..1375f0ab1 100644 --- a/kernel/patches-4.19.x-rt/0190-fs-dcache-use-cpu-chill-in-trylock-loops.patch +++ b/kernel/patches-4.19.x-rt/0188-fs-dcache-Use-cpu_chill-in-trylock-loops.patch @@ -1,18 +1,20 @@ -Subject: fs: dcache: Use cpu_chill() in trylock loops +From 
4e8f4b38754fe437338d35cde5fafd8bfa53aaa3 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner -Date: Wed, 07 Mar 2012 21:00:34 +0100 +Date: Wed, 7 Mar 2012 21:00:34 +0100 +Subject: [PATCH 188/269] fs: dcache: Use cpu_chill() in trylock loops Retry loops on RT might loop forever when the modifying side was preempted. Use cpu_chill() instead of cpu_relax() to let the system make progress. Signed-off-by: Thomas Gleixner - --- - fs/autofs/expire.c | 3 ++- - fs/namespace.c | 8 ++++++-- + fs/autofs/expire.c | 3 ++- + fs/namespace.c | 8 ++++++-- 2 files changed, 8 insertions(+), 3 deletions(-) +diff --git a/fs/autofs/expire.c b/fs/autofs/expire.c +index 28d9c2b1b3bb..354b7147cead 100644 --- a/fs/autofs/expire.c +++ b/fs/autofs/expire.c @@ -8,6 +8,7 @@ @@ -23,7 +25,7 @@ Signed-off-by: Thomas Gleixner #include "autofs_i.h" /* Check if a dentry can be expired */ -@@ -153,7 +154,7 @@ static struct dentry *get_next_positive_ +@@ -153,7 +154,7 @@ static struct dentry *get_next_positive_dentry(struct dentry *prev, parent = p->d_parent; if (!spin_trylock(&parent->d_lock)) { spin_unlock(&p->d_lock); @@ -32,6 +34,8 @@ Signed-off-by: Thomas Gleixner goto relock; } spin_unlock(&p->d_lock); +diff --git a/fs/namespace.c b/fs/namespace.c +index 1fce41ba3535..5dc970027e30 100644 --- a/fs/namespace.c +++ b/fs/namespace.c @@ -14,6 +14,7 @@ @@ -56,3 +60,6 @@ Signed-off-by: Thomas Gleixner /* * After the slowpath clears MNT_WRITE_HOLD, mnt_is_readonly will * be set to match its requirements. 
So we must not load that until +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0191-net-use-cpu-chill.patch b/kernel/patches-4.19.x-rt/0189-net-Use-cpu_chill-instead-of-cpu_relax.patch similarity index 66% rename from kernel/patches-4.19.x-rt/0191-net-use-cpu-chill.patch rename to kernel/patches-4.19.x-rt/0189-net-Use-cpu_chill-instead-of-cpu_relax.patch index e82598515..f7a3cfe77 100644 --- a/kernel/patches-4.19.x-rt/0191-net-use-cpu-chill.patch +++ b/kernel/patches-4.19.x-rt/0189-net-Use-cpu_chill-instead-of-cpu_relax.patch @@ -1,18 +1,20 @@ -Subject: net: Use cpu_chill() instead of cpu_relax() +From 128245989afa7b20f2b7e7fc43727086cce5bf13 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner -Date: Wed, 07 Mar 2012 21:10:04 +0100 +Date: Wed, 7 Mar 2012 21:10:04 +0100 +Subject: [PATCH 189/269] net: Use cpu_chill() instead of cpu_relax() Retry loops on RT might loop forever when the modifying side was preempted. Use cpu_chill() instead of cpu_relax() to let the system make progress. Signed-off-by: Thomas Gleixner - --- - net/packet/af_packet.c | 5 +++-- - net/rds/ib_rdma.c | 3 ++- + net/packet/af_packet.c | 5 +++-- + net/rds/ib_rdma.c | 3 ++- 2 files changed, 5 insertions(+), 3 deletions(-) +diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c +index a0d295478e69..ce1bfcbbda45 100644 --- a/net/packet/af_packet.c +++ b/net/packet/af_packet.c @@ -63,6 +63,7 @@ @@ -23,7 +25,7 @@ Signed-off-by: Thomas Gleixner #include #include #include -@@ -667,7 +668,7 @@ static void prb_retire_rx_blk_timer_expi +@@ -667,7 +668,7 @@ static void prb_retire_rx_blk_timer_expired(struct timer_list *t) if (BLOCK_NUM_PKTS(pbd)) { while (atomic_read(&pkc->blk_fill_in_prog)) { /* Waiting for skb_copy_bits to finish... 
*/ @@ -32,7 +34,7 @@ Signed-off-by: Thomas Gleixner } } -@@ -929,7 +930,7 @@ static void prb_retire_current_block(str +@@ -929,7 +930,7 @@ static void prb_retire_current_block(struct tpacket_kbdq_core *pkc, if (!(status & TP_STATUS_BLK_TMO)) { while (atomic_read(&pkc->blk_fill_in_prog)) { /* Waiting for skb_copy_bits to finish... */ @@ -41,6 +43,8 @@ Signed-off-by: Thomas Gleixner } } prb_close_block(pkc, pbd, po, status); +diff --git a/net/rds/ib_rdma.c b/net/rds/ib_rdma.c +index 63c8d107adcf..671f8ad38864 100644 --- a/net/rds/ib_rdma.c +++ b/net/rds/ib_rdma.c @@ -34,6 +34,7 @@ @@ -51,7 +55,7 @@ Signed-off-by: Thomas Gleixner #include "rds_single_path.h" #include "ib_mr.h" -@@ -222,7 +223,7 @@ static inline void wait_clean_list_grace +@@ -222,7 +223,7 @@ static inline void wait_clean_list_grace(void) for_each_online_cpu(cpu) { flag = &per_cpu(clean_list_grace, cpu); while (test_bit(CLEAN_LIST_BUSY_BIT, flag)) @@ -60,3 +64,6 @@ Signed-off-by: Thomas Gleixner } } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0192-fs-dcache-use-swait_queue-instead-of-waitqueue.patch b/kernel/patches-4.19.x-rt/0190-fs-dcache-use-swait_queue-instead-of-waitqueue.patch similarity index 70% rename from kernel/patches-4.19.x-rt/0192-fs-dcache-use-swait_queue-instead-of-waitqueue.patch rename to kernel/patches-4.19.x-rt/0190-fs-dcache-use-swait_queue-instead-of-waitqueue.patch index c056af3ea..c8e75b182 100644 --- a/kernel/patches-4.19.x-rt/0192-fs-dcache-use-swait_queue-instead-of-waitqueue.patch +++ b/kernel/patches-4.19.x-rt/0190-fs-dcache-use-swait_queue-instead-of-waitqueue.patch @@ -1,28 +1,31 @@ +From 0e5745ddcc9a0454ba787dfcb0da5e9753b787dc Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Wed, 14 Sep 2016 14:35:49 +0200 -Subject: [PATCH] fs/dcache: use swait_queue instead of waitqueue +Subject: [PATCH 190/269] fs/dcache: use swait_queue instead of waitqueue __d_lookup_done() invokes wake_up_all() while holding a hlist_bl_lock() which disables preemption. 
As a workaround convert it to swait. Signed-off-by: Sebastian Andrzej Siewior --- - fs/cifs/readdir.c | 2 +- - fs/dcache.c | 27 +++++++++++++++------------ - fs/fuse/dir.c | 2 +- - fs/namei.c | 4 ++-- - fs/nfs/dir.c | 4 ++-- - fs/nfs/unlink.c | 4 ++-- - fs/proc/base.c | 2 +- - fs/proc/proc_sysctl.c | 2 +- - include/linux/dcache.h | 4 ++-- - include/linux/nfs_xdr.h | 2 +- - kernel/sched/swait.c | 1 + + fs/cifs/readdir.c | 2 +- + fs/dcache.c | 27 +++++++++++++++------------ + fs/fuse/dir.c | 2 +- + fs/namei.c | 4 ++-- + fs/nfs/dir.c | 4 ++-- + fs/nfs/unlink.c | 4 ++-- + fs/proc/base.c | 2 +- + fs/proc/proc_sysctl.c | 2 +- + include/linux/dcache.h | 4 ++-- + include/linux/nfs_xdr.h | 2 +- + kernel/sched/swait.c | 1 + 11 files changed, 29 insertions(+), 25 deletions(-) +diff --git a/fs/cifs/readdir.c b/fs/cifs/readdir.c +index 3925a7bfc74d..33f7723fb83e 100644 --- a/fs/cifs/readdir.c +++ b/fs/cifs/readdir.c -@@ -80,7 +80,7 @@ cifs_prime_dcache(struct dentry *parent, +@@ -80,7 +80,7 @@ cifs_prime_dcache(struct dentry *parent, struct qstr *name, struct inode *inode; struct super_block *sb = parent->d_sb; struct cifs_sb_info *cifs_sb = CIFS_SB(sb); @@ -31,9 +34,11 @@ Signed-off-by: Sebastian Andrzej Siewior cifs_dbg(FYI, "%s: for %s\n", __func__, name->name); +diff --git a/fs/dcache.c b/fs/dcache.c +index 173b53b536f0..7cb44c7218a4 100644 --- a/fs/dcache.c +++ b/fs/dcache.c -@@ -2417,21 +2417,24 @@ static inline void end_dir_add(struct in +@@ -2417,21 +2417,24 @@ static inline void end_dir_add(struct inode *dir, unsigned n) static void d_wait_lookup(struct dentry *dentry) { @@ -69,7 +74,7 @@ Signed-off-by: Sebastian Andrzej Siewior { unsigned int hash = name->hash; struct hlist_bl_head *b = in_lookup_hash(parent, hash); -@@ -2546,7 +2549,7 @@ void __d_lookup_done(struct dentry *dent +@@ -2546,7 +2549,7 @@ void __d_lookup_done(struct dentry *dentry) hlist_bl_lock(b); dentry->d_flags &= ~DCACHE_PAR_LOOKUP; __hlist_bl_del(&dentry->d_u.d_in_lookup_hash); @@ -78,9 +83,11 @@ 
Signed-off-by: Sebastian Andrzej Siewior dentry->d_wait = NULL; hlist_bl_unlock(b); INIT_HLIST_NODE(&dentry->d_u.d_alias); +diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c +index 82a13221775e..58324a93e3c0 100644 --- a/fs/fuse/dir.c +++ b/fs/fuse/dir.c -@@ -1203,7 +1203,7 @@ static int fuse_direntplus_link(struct f +@@ -1203,7 +1203,7 @@ static int fuse_direntplus_link(struct file *file, struct inode *dir = d_inode(parent); struct fuse_conn *fc; struct inode *inode; @@ -89,9 +96,11 @@ Signed-off-by: Sebastian Andrzej Siewior if (!o->nodeid) { /* +diff --git a/fs/namei.c b/fs/namei.c +index 914178cdbe94..2a8c41bc227f 100644 --- a/fs/namei.c +++ b/fs/namei.c -@@ -1645,7 +1645,7 @@ static struct dentry *__lookup_slow(cons +@@ -1645,7 +1645,7 @@ static struct dentry *__lookup_slow(const struct qstr *name, { struct dentry *dentry, *old; struct inode *inode = dir->d_inode; @@ -100,7 +109,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* Don't go there if it's already dead */ if (unlikely(IS_DEADDIR(inode))) -@@ -3135,7 +3135,7 @@ static int lookup_open(struct nameidata +@@ -3135,7 +3135,7 @@ static int lookup_open(struct nameidata *nd, struct path *path, struct dentry *dentry; int error, create_error = 0; umode_t mode = op->mode; @@ -109,6 +118,8 @@ Signed-off-by: Sebastian Andrzej Siewior if (unlikely(IS_DEADDIR(dir_inode))) return -ENOENT; +diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c +index 62afe8ca1e36..9818a5dfb472 100644 --- a/fs/nfs/dir.c +++ b/fs/nfs/dir.c @@ -445,7 +445,7 @@ static @@ -120,7 +131,7 @@ Signed-off-by: Sebastian Andrzej Siewior struct dentry *dentry; struct dentry *alias; struct inode *dir = d_inode(parent); -@@ -1459,7 +1459,7 @@ int nfs_atomic_open(struct inode *dir, s +@@ -1459,7 +1459,7 @@ int nfs_atomic_open(struct inode *dir, struct dentry *dentry, struct file *file, unsigned open_flags, umode_t mode) { @@ -129,6 +140,8 @@ Signed-off-by: Sebastian Andrzej Siewior struct nfs_open_context *ctx; struct dentry *res; struct iattr attr = { .ia_valid = 
ATTR_OPEN }; +diff --git a/fs/nfs/unlink.c b/fs/nfs/unlink.c +index ce9100b5604d..839bfa76f41e 100644 --- a/fs/nfs/unlink.c +++ b/fs/nfs/unlink.c @@ -13,7 +13,7 @@ @@ -140,7 +153,7 @@ Signed-off-by: Sebastian Andrzej Siewior #include #include -@@ -206,7 +206,7 @@ nfs_async_unlink(struct dentry *dentry, +@@ -206,7 +206,7 @@ nfs_async_unlink(struct dentry *dentry, const struct qstr *name) goto out_free_name; } data->res.dir_attr = &data->dir_attr; @@ -149,9 +162,11 @@ Signed-off-by: Sebastian Andrzej Siewior status = -EBUSY; spin_lock(&dentry->d_lock); +diff --git a/fs/proc/base.c b/fs/proc/base.c +index 81d77b15b347..2c0ac4338e17 100644 --- a/fs/proc/base.c +++ b/fs/proc/base.c -@@ -1876,7 +1876,7 @@ bool proc_fill_cache(struct file *file, +@@ -1872,7 +1872,7 @@ bool proc_fill_cache(struct file *file, struct dir_context *ctx, child = d_hash_and_lookup(dir, &qname); if (!child) { @@ -160,9 +175,11 @@ Signed-off-by: Sebastian Andrzej Siewior child = d_alloc_parallel(dir, &qname, &wq); if (IS_ERR(child)) goto end_instantiate; +diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c +index d65390727541..abd4d1632e7c 100644 --- a/fs/proc/proc_sysctl.c +++ b/fs/proc/proc_sysctl.c -@@ -677,7 +677,7 @@ static bool proc_sys_fill_cache(struct f +@@ -677,7 +677,7 @@ static bool proc_sys_fill_cache(struct file *file, child = d_lookup(dir, &qname); if (!child) { @@ -171,6 +188,8 @@ Signed-off-by: Sebastian Andrzej Siewior child = d_alloc_parallel(dir, &qname, &wq); if (IS_ERR(child)) return false; +diff --git a/include/linux/dcache.h b/include/linux/dcache.h +index ef4b70f64f33..be6ab83705aa 100644 --- a/include/linux/dcache.h +++ b/include/linux/dcache.h @@ -105,7 +105,7 @@ struct dentry { @@ -182,7 +201,7 @@ Signed-off-by: Sebastian Andrzej Siewior }; struct list_head d_child; /* child of parent list */ struct list_head d_subdirs; /* our children */ -@@ -236,7 +236,7 @@ extern struct dentry * d_alloc(struct de +@@ -236,7 +236,7 @@ extern struct dentry * d_alloc(struct 
dentry *, const struct qstr *); extern struct dentry * d_alloc_anon(struct super_block *); extern struct dentry * d_alloc_pseudo(struct super_block *, const struct qstr *); extern struct dentry * d_alloc_parallel(struct dentry *, const struct qstr *, @@ -191,6 +210,8 @@ Signed-off-by: Sebastian Andrzej Siewior extern struct dentry * d_splice_alias(struct inode *, struct dentry *); extern struct dentry * d_add_ci(struct dentry *, struct inode *, struct qstr *); extern struct dentry * d_exact_alias(struct dentry *, struct inode *); +diff --git a/include/linux/nfs_xdr.h b/include/linux/nfs_xdr.h +index bd1c889a9ed9..1fc27eb1f021 100644 --- a/include/linux/nfs_xdr.h +++ b/include/linux/nfs_xdr.h @@ -1549,7 +1549,7 @@ struct nfs_unlinkdata { @@ -202,9 +223,11 @@ Signed-off-by: Sebastian Andrzej Siewior struct rpc_cred *cred; struct nfs_fattr dir_attr; long timeout; +diff --git a/kernel/sched/swait.c b/kernel/sched/swait.c +index c7cb30cdd1b7..119a56d7f739 100644 --- a/kernel/sched/swait.c +++ b/kernel/sched/swait.c -@@ -70,6 +70,7 @@ void swake_up_all(struct swait_queue_hea +@@ -70,6 +70,7 @@ void swake_up_all(struct swait_queue_head *q) struct swait_queue *curr; LIST_HEAD(tmp); @@ -212,3 +235,6 @@ Signed-off-by: Sebastian Andrzej Siewior raw_spin_lock_irq(&q->lock); list_splice_init(&q->task_list, &tmp); while (!list_empty(&tmp)) { +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0193-workqueue-use-rcu.patch b/kernel/patches-4.19.x-rt/0191-workqueue-Use-normal-rcu.patch similarity index 84% rename from kernel/patches-4.19.x-rt/0193-workqueue-use-rcu.patch rename to kernel/patches-4.19.x-rt/0191-workqueue-Use-normal-rcu.patch index bfd7d95a8..4f38270ca 100644 --- a/kernel/patches-4.19.x-rt/0193-workqueue-use-rcu.patch +++ b/kernel/patches-4.19.x-rt/0191-workqueue-Use-normal-rcu.patch @@ -1,6 +1,7 @@ -Subject: workqueue: Use normal rcu +From e29f4dc4c3456a8de27d079dc97e6489b05b61b0 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Wed, 24 Jul 2013 15:26:54 +0200 
+Subject: [PATCH 191/269] workqueue: Use normal rcu There is no need for sched_rcu. The undocumented reason why sched_rcu is used is to avoid a few explicit rcu_read_lock()/unlock() pairs by @@ -9,9 +10,11 @@ protected by preempt or irq disabled regions. Signed-off-by: Thomas Gleixner --- - kernel/workqueue.c | 95 +++++++++++++++++++++++++++++------------------------ + kernel/workqueue.c | 95 +++++++++++++++++++++++++--------------------- 1 file changed, 52 insertions(+), 43 deletions(-) +diff --git a/kernel/workqueue.c b/kernel/workqueue.c +index 0280deac392e..ca8014edaa84 100644 --- a/kernel/workqueue.c +++ b/kernel/workqueue.c @@ -127,7 +127,7 @@ enum { @@ -50,7 +53,7 @@ Signed-off-by: Thomas Gleixner * determined without grabbing wq->mutex. */ struct work_struct unbound_release_work; -@@ -357,20 +357,20 @@ static void workqueue_sysfs_unregister(s +@@ -357,20 +357,20 @@ static void workqueue_sysfs_unregister(struct workqueue_struct *wq); #include #define assert_rcu_or_pool_mutex() \ @@ -77,7 +80,7 @@ Signed-off-by: Thomas Gleixner #define for_each_cpu_worker_pool(pool, cpu) \ for ((pool) = &per_cpu(cpu_worker_pools, cpu)[0]; \ -@@ -382,7 +382,7 @@ static void workqueue_sysfs_unregister(s +@@ -382,7 +382,7 @@ static void workqueue_sysfs_unregister(struct workqueue_struct *wq); * @pool: iteration cursor * @pi: integer used for iteration * @@ -86,7 +89,7 @@ Signed-off-by: Thomas Gleixner * locked. If the pool needs to be used beyond the locking in effect, the * caller is responsible for guaranteeing that the pool stays online. * -@@ -414,7 +414,7 @@ static void workqueue_sysfs_unregister(s +@@ -414,7 +414,7 @@ static void workqueue_sysfs_unregister(struct workqueue_struct *wq); * @pwq: iteration cursor * @wq: the target workqueue * @@ -95,7 +98,7 @@ Signed-off-by: Thomas Gleixner * If the pwq needs to be used beyond the locking in effect, the caller is * responsible for guaranteeing that the pwq stays online. 
* -@@ -550,7 +550,7 @@ static int worker_pool_assign_id(struct +@@ -550,7 +550,7 @@ static int worker_pool_assign_id(struct worker_pool *pool) * @wq: the target workqueue * @node: the node ID * @@ -104,7 +107,7 @@ Signed-off-by: Thomas Gleixner * read locked. * If the pwq needs to be used beyond the locking in effect, the caller is * responsible for guaranteeing that the pwq stays online. -@@ -694,8 +694,8 @@ static struct pool_workqueue *get_work_p +@@ -694,8 +694,8 @@ static struct pool_workqueue *get_work_pwq(struct work_struct *work) * @work: the work item of interest * * Pools are created and destroyed under wq_pool_mutex, and allows read @@ -115,7 +118,7 @@ Signed-off-by: Thomas Gleixner * * All fields of the returned pool are accessible as long as the above * mentioned locking is in effect. If the returned pool needs to be used -@@ -1100,7 +1100,7 @@ static void put_pwq_unlocked(struct pool +@@ -1100,7 +1100,7 @@ static void put_pwq_unlocked(struct pool_workqueue *pwq) { if (pwq) { /* @@ -124,7 +127,7 @@ Signed-off-by: Thomas Gleixner * following lock operations are safe. */ spin_lock_irq(&pwq->pool->lock); -@@ -1228,6 +1228,7 @@ static int try_to_grab_pending(struct wo +@@ -1228,6 +1228,7 @@ static int try_to_grab_pending(struct work_struct *work, bool is_dwork, if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work))) return 0; @@ -132,7 +135,7 @@ Signed-off-by: Thomas Gleixner /* * The queueing is in progress, or it is already queued. Try to * steal it from ->worklist without clearing WORK_STRUCT_PENDING. 
-@@ -1266,10 +1267,12 @@ static int try_to_grab_pending(struct wo +@@ -1266,10 +1267,12 @@ static int try_to_grab_pending(struct work_struct *work, bool is_dwork, set_work_pool_and_keep_pending(work, pool->id); spin_unlock(&pool->lock); @@ -145,7 +148,7 @@ Signed-off-by: Thomas Gleixner local_irq_restore(*flags); if (work_is_canceling(work)) return -ENOENT; -@@ -1383,6 +1386,7 @@ static void __queue_work(int cpu, struct +@@ -1383,6 +1386,7 @@ static void __queue_work(int cpu, struct workqueue_struct *wq, if (unlikely(wq->flags & __WQ_DRAINING) && WARN_ON_ONCE(!is_chained_work(wq))) return; @@ -153,7 +156,7 @@ Signed-off-by: Thomas Gleixner retry: if (req_cpu == WORK_CPU_UNBOUND) cpu = wq_select_unbound_cpu(raw_smp_processor_id()); -@@ -1439,10 +1443,8 @@ static void __queue_work(int cpu, struct +@@ -1439,10 +1443,8 @@ static void __queue_work(int cpu, struct workqueue_struct *wq, /* pwq determined, queue */ trace_workqueue_queue_work(req_cpu, pwq, work); @@ -166,7 +169,7 @@ Signed-off-by: Thomas Gleixner pwq->nr_in_flight[pwq->work_color]++; work_flags = work_color_to_flags(pwq->work_color); -@@ -1460,7 +1462,9 @@ static void __queue_work(int cpu, struct +@@ -1460,7 +1462,9 @@ static void __queue_work(int cpu, struct workqueue_struct *wq, insert_work(pwq, work, worklist, work_flags); @@ -176,7 +179,7 @@ Signed-off-by: Thomas Gleixner } /** -@@ -2855,14 +2859,14 @@ static bool start_flush_work(struct work +@@ -2855,14 +2859,14 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr, might_sleep(); @@ -194,7 +197,7 @@ Signed-off-by: Thomas Gleixner /* see the comment in try_to_grab_pending() with the same code */ pwq = get_work_pwq(work); if (pwq) { -@@ -2894,10 +2898,11 @@ static bool start_flush_work(struct work +@@ -2894,10 +2898,11 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr, lock_map_acquire(&pwq->wq->lockdep_map); lock_map_release(&pwq->wq->lockdep_map); } @@ -207,7 +210,7 @@ Signed-off-by: Thomas 
Gleixner return false; } -@@ -3341,7 +3346,7 @@ static void rcu_free_pool(struct rcu_hea +@@ -3341,7 +3346,7 @@ static void rcu_free_pool(struct rcu_head *rcu) * put_unbound_pool - put a worker_pool * @pool: worker_pool to put * @@ -216,7 +219,7 @@ Signed-off-by: Thomas Gleixner * safe manner. get_unbound_pool() calls this function on its failure path * and this function should be able to release pools which went through, * successfully or not, init_worker_pool(). -@@ -3395,8 +3400,8 @@ static void put_unbound_pool(struct work +@@ -3395,8 +3400,8 @@ static void put_unbound_pool(struct worker_pool *pool) del_timer_sync(&pool->idle_timer); del_timer_sync(&pool->mayday_timer); @@ -227,7 +230,7 @@ Signed-off-by: Thomas Gleixner } /** -@@ -3503,14 +3508,14 @@ static void pwq_unbound_release_workfn(s +@@ -3503,14 +3508,14 @@ static void pwq_unbound_release_workfn(struct work_struct *work) put_unbound_pool(pool); mutex_unlock(&wq_pool_mutex); @@ -244,7 +247,7 @@ Signed-off-by: Thomas Gleixner } /** -@@ -4195,7 +4200,7 @@ void destroy_workqueue(struct workqueue_ +@@ -4195,7 +4200,7 @@ void destroy_workqueue(struct workqueue_struct *wq) * The base ref is never dropped on per-cpu pwqs. Directly * schedule RCU free. */ @@ -253,7 +256,7 @@ Signed-off-by: Thomas Gleixner } else { /* * We're the sole accessor of @wq at this point. 
Directly -@@ -4305,7 +4310,8 @@ bool workqueue_congested(int cpu, struct +@@ -4305,7 +4310,8 @@ bool workqueue_congested(int cpu, struct workqueue_struct *wq) struct pool_workqueue *pwq; bool ret; @@ -263,7 +266,7 @@ Signed-off-by: Thomas Gleixner if (cpu == WORK_CPU_UNBOUND) cpu = smp_processor_id(); -@@ -4316,7 +4322,8 @@ bool workqueue_congested(int cpu, struct +@@ -4316,7 +4322,8 @@ bool workqueue_congested(int cpu, struct workqueue_struct *wq) pwq = unbound_pwq_by_node(wq, cpu_to_node(cpu)); ret = !list_empty(&pwq->delayed_works); @@ -273,7 +276,7 @@ Signed-off-by: Thomas Gleixner return ret; } -@@ -4342,15 +4349,15 @@ unsigned int work_busy(struct work_struc +@@ -4342,15 +4349,15 @@ unsigned int work_busy(struct work_struct *work) if (work_pending(work)) ret |= WORK_BUSY_PENDING; @@ -331,7 +334,7 @@ Signed-off-by: Thomas Gleixner } out_unlock: mutex_unlock(&wq_pool_mutex); -@@ -5190,7 +5197,8 @@ static ssize_t wq_pool_ids_show(struct d +@@ -5190,7 +5197,8 @@ static ssize_t wq_pool_ids_show(struct device *dev, const char *delim = ""; int node, written = 0; @@ -341,7 +344,7 @@ Signed-off-by: Thomas Gleixner for_each_node(node) { written += scnprintf(buf + written, PAGE_SIZE - written, "%s%d:%d", delim, node, -@@ -5198,7 +5206,8 @@ static ssize_t wq_pool_ids_show(struct d +@@ -5198,7 +5206,8 @@ static ssize_t wq_pool_ids_show(struct device *dev, delim = " "; } written += scnprintf(buf + written, PAGE_SIZE - written, "\n"); @@ -351,3 +354,6 @@ Signed-off-by: Thomas Gleixner return written; } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0194-workqueue-use-locallock.patch b/kernel/patches-4.19.x-rt/0192-workqueue-Use-local-irq-lock-instead-of-irq-disable-.patch similarity index 80% rename from kernel/patches-4.19.x-rt/0194-workqueue-use-locallock.patch rename to kernel/patches-4.19.x-rt/0192-workqueue-Use-local-irq-lock-instead-of-irq-disable-.patch index 9b21c6d4c..3b99a937e 100644 --- a/kernel/patches-4.19.x-rt/0194-workqueue-use-locallock.patch +++ 
b/kernel/patches-4.19.x-rt/0192-workqueue-Use-local-irq-lock-instead-of-irq-disable-.patch @@ -1,15 +1,19 @@ -Subject: workqueue: Use local irq lock instead of irq disable regions +From 693d52e4cc082c2aafb8154ee7581e38f4c584d3 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Sun, 17 Jul 2011 21:42:26 +0200 +Subject: [PATCH 192/269] workqueue: Use local irq lock instead of irq disable + regions Use a local_irq_lock as a replacement for irq off regions. We keep the semantic of irq-off in regard to the pool->lock and remain preemptible. Signed-off-by: Thomas Gleixner --- - kernel/workqueue.c | 45 ++++++++++++++++++++++++++++++--------------- + kernel/workqueue.c | 45 ++++++++++++++++++++++++++++++--------------- 1 file changed, 30 insertions(+), 15 deletions(-) +diff --git a/kernel/workqueue.c b/kernel/workqueue.c +index ca8014edaa84..1e8b2ff804e3 100644 --- a/kernel/workqueue.c +++ b/kernel/workqueue.c @@ -49,6 +49,7 @@ @@ -20,7 +24,7 @@ Signed-off-by: Thomas Gleixner #include "workqueue_internal.h" -@@ -350,6 +351,8 @@ EXPORT_SYMBOL_GPL(system_power_efficient +@@ -350,6 +351,8 @@ EXPORT_SYMBOL_GPL(system_power_efficient_wq); struct workqueue_struct *system_freezable_power_efficient_wq __read_mostly; EXPORT_SYMBOL_GPL(system_freezable_power_efficient_wq); @@ -29,7 +33,7 @@ Signed-off-by: Thomas Gleixner static int worker_thread(void *__worker); static void workqueue_sysfs_unregister(struct workqueue_struct *wq); -@@ -1103,9 +1106,11 @@ static void put_pwq_unlocked(struct pool +@@ -1103,9 +1106,11 @@ static void put_pwq_unlocked(struct pool_workqueue *pwq) * As both pwqs and pools are RCU protected, the * following lock operations are safe. 
*/ @@ -43,7 +47,7 @@ Signed-off-by: Thomas Gleixner } } -@@ -1209,7 +1214,7 @@ static int try_to_grab_pending(struct wo +@@ -1209,7 +1214,7 @@ static int try_to_grab_pending(struct work_struct *work, bool is_dwork, struct worker_pool *pool; struct pool_workqueue *pwq; @@ -52,7 +56,7 @@ Signed-off-by: Thomas Gleixner /* try to steal the timer if it exists */ if (is_dwork) { -@@ -1273,7 +1278,7 @@ static int try_to_grab_pending(struct wo +@@ -1273,7 +1278,7 @@ static int try_to_grab_pending(struct work_struct *work, bool is_dwork, spin_unlock(&pool->lock); fail: rcu_read_unlock(); @@ -61,7 +65,7 @@ Signed-off-by: Thomas Gleixner if (work_is_canceling(work)) return -ENOENT; cpu_relax(); -@@ -1378,7 +1383,13 @@ static void __queue_work(int cpu, struct +@@ -1378,7 +1383,13 @@ static void __queue_work(int cpu, struct workqueue_struct *wq, * queued or lose PENDING. Grabbing PENDING and queueing should * happen with IRQ disabled. */ @@ -75,7 +79,7 @@ Signed-off-by: Thomas Gleixner debug_work_activate(work); -@@ -1484,14 +1495,14 @@ bool queue_work_on(int cpu, struct workq +@@ -1484,14 +1495,14 @@ bool queue_work_on(int cpu, struct workqueue_struct *wq, bool ret = false; unsigned long flags; @@ -92,7 +96,7 @@ Signed-off-by: Thomas Gleixner return ret; } EXPORT_SYMBOL(queue_work_on); -@@ -1500,8 +1511,11 @@ void delayed_work_timer_fn(struct timer_ +@@ -1500,8 +1511,11 @@ void delayed_work_timer_fn(struct timer_list *t) { struct delayed_work *dwork = from_timer(dwork, t, timer); @@ -104,7 +108,7 @@ Signed-off-by: Thomas Gleixner } EXPORT_SYMBOL(delayed_work_timer_fn); -@@ -1556,14 +1570,14 @@ bool queue_delayed_work_on(int cpu, stru +@@ -1556,14 +1570,14 @@ bool queue_delayed_work_on(int cpu, struct workqueue_struct *wq, unsigned long flags; /* read the comment in __queue_work() */ @@ -121,7 +125,7 @@ Signed-off-by: Thomas Gleixner return ret; } EXPORT_SYMBOL(queue_delayed_work_on); -@@ -1598,7 +1612,7 @@ bool mod_delayed_work_on(int cpu, struct +@@ -1598,7 +1612,7 @@ bool 
mod_delayed_work_on(int cpu, struct workqueue_struct *wq, if (likely(ret >= 0)) { __queue_delayed_work(cpu, wq, dwork, delay); @@ -145,7 +149,7 @@ Signed-off-by: Thomas Gleixner } /** -@@ -2999,7 +3014,7 @@ static bool __cancel_work_timer(struct w +@@ -2999,7 +3014,7 @@ static bool __cancel_work_timer(struct work_struct *work, bool is_dwork) /* tell other tasks trying to grab @work to back off */ mark_work_canceling(work); @@ -167,7 +171,7 @@ Signed-off-by: Thomas Gleixner return flush_work(&dwork->work); } EXPORT_SYMBOL(flush_delayed_work); -@@ -3101,7 +3116,7 @@ static bool __cancel_work(struct work_st +@@ -3101,7 +3116,7 @@ static bool __cancel_work(struct work_struct *work, bool is_dwork) return false; set_work_pool_and_clear_pending(work, get_work_pool_id(work)); @@ -176,3 +180,6 @@ Signed-off-by: Thomas Gleixner return ret; } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0195-work-queue-work-around-irqsafe-timer-optimization.patch b/kernel/patches-4.19.x-rt/0193-workqueue-Prevent-workqueue-versus-ata-piix-livelock.patch similarity index 93% rename from kernel/patches-4.19.x-rt/0195-work-queue-work-around-irqsafe-timer-optimization.patch rename to kernel/patches-4.19.x-rt/0193-workqueue-Prevent-workqueue-versus-ata-piix-livelock.patch index 42a5e3745..6d700c3f1 100644 --- a/kernel/patches-4.19.x-rt/0195-work-queue-work-around-irqsafe-timer-optimization.patch +++ b/kernel/patches-4.19.x-rt/0193-workqueue-Prevent-workqueue-versus-ata-piix-livelock.patch @@ -1,6 +1,7 @@ +From d874f4bd157934c3b8f5f30c0291b9716f86e849 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner -Date: Mon, 01 Jul 2013 11:02:42 +0200 -Subject: workqueue: Prevent workqueue versus ata-piix livelock +Date: Mon, 1 Jul 2013 11:02:42 +0200 +Subject: [PATCH 193/269] workqueue: Prevent workqueue versus ata-piix livelock An Intel i7 system regularly detected rcu_preempt stalls after the kernel was upgraded from 3.6-rt to 3.8-rt. 
When the stall happened, disk I/O was no @@ -108,9 +109,11 @@ Signed-off-by: Carsten Emde Signed-off-by: Thomas Gleixner Signed-off-by: Sebastian Andrzej Siewior --- - kernel/workqueue.c | 3 ++- + kernel/workqueue.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) +diff --git a/kernel/workqueue.c b/kernel/workqueue.c +index 1e8b2ff804e3..f6551d189ca4 100644 --- a/kernel/workqueue.c +++ b/kernel/workqueue.c @@ -50,6 +50,7 @@ @@ -121,7 +124,7 @@ Signed-off-by: Sebastian Andrzej Siewior #include "workqueue_internal.h" -@@ -1281,7 +1282,7 @@ static int try_to_grab_pending(struct wo +@@ -1281,7 +1282,7 @@ static int try_to_grab_pending(struct work_struct *work, bool is_dwork, local_unlock_irqrestore(pendingb_lock, *flags); if (work_is_canceling(work)) return -ENOENT; @@ -130,3 +133,6 @@ Signed-off-by: Sebastian Andrzej Siewior return -EAGAIN; } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0196-workqueue-distangle-from-rq-lock.patch b/kernel/patches-4.19.x-rt/0194-sched-Distangle-worker-accounting-from-rqlock.patch similarity index 83% rename from kernel/patches-4.19.x-rt/0196-workqueue-distangle-from-rq-lock.patch rename to kernel/patches-4.19.x-rt/0194-sched-Distangle-worker-accounting-from-rqlock.patch index 1d916c0b9..1485b5ebb 100644 --- a/kernel/patches-4.19.x-rt/0196-workqueue-distangle-from-rq-lock.patch +++ b/kernel/patches-4.19.x-rt/0194-sched-Distangle-worker-accounting-from-rqlock.patch @@ -1,12 +1,13 @@ +From 4452796adea3514d123d9e41188dfcfc86adc6d0 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner -Date: Wed Jun 22 19:47:03 2011 +0200 -Subject: sched: Distangle worker accounting from rqlock - +Date: Wed, 22 Jun 2011 19:47:03 +0200 +Subject: [PATCH 194/269] sched: Distangle worker accounting from rqlock + The worker accounting for cpu bound workers is plugged into the core scheduler code and the wakeup code. This is not a hard requirement and can be avoided by keeping track of the state in the workqueue code itself. 
- + Keep track of the sleeping state in the worker itself and call the notifier before entering the core scheduler. There might be false positives when the task is woken between that call and actually @@ -14,7 +15,7 @@ scheduling, but that's not really different from scheduling and being woken immediately after switching away. There is also no harm from updating nr_running when the task returns from scheduling instead of accounting it in the wakeup code. - + Signed-off-by: Thomas Gleixner Cc: Peter Zijlstra Cc: Tejun Heo @@ -26,14 +27,16 @@ Signed-off-by: Thomas Gleixner Oliveira] Signed-off-by: Sebastian Andrzej Siewior --- - kernel/sched/core.c | 90 ++++++++++---------------------------------- - kernel/workqueue.c | 52 +++++++++++-------------- - kernel/workqueue_internal.h | 5 +- + kernel/sched/core.c | 90 +++++++++---------------------------- + kernel/workqueue.c | 52 ++++++++++----------- + kernel/workqueue_internal.h | 5 ++- 3 files changed, 47 insertions(+), 100 deletions(-) +diff --git a/kernel/sched/core.c b/kernel/sched/core.c +index fb205b1ec799..1cd1abc45097 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c -@@ -1702,10 +1702,6 @@ static inline void ttwu_activate(struct +@@ -1704,10 +1704,6 @@ static inline void ttwu_activate(struct rq *rq, struct task_struct *p, int en_fl { activate_task(rq, p, en_flags); p->on_rq = TASK_ON_RQ_QUEUED; @@ -44,10 +47,11 @@ Signed-off-by: Sebastian Andrzej Siewior } /* -@@ -2142,56 +2138,6 @@ try_to_wake_up(struct task_struct *p, un +@@ -2143,56 +2139,6 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags) + return success; } - /** +-/** - * try_to_wake_up_local - try to wake up a local task with rq lock held - * @p: the thread to be awakened - * @rf: request-queue flags for pinning @@ -97,11 +101,10 @@ Signed-off-by: Sebastian Andrzej Siewior - raw_spin_unlock(&p->pi_lock); -} - --/** + /** * wake_up_process - Wake up a specific process * @p: The process to be woken up. 
- * -@@ -3518,21 +3464,6 @@ static void __sched notrace __schedule(b +@@ -3520,21 +3466,6 @@ static void __sched notrace __schedule(bool preempt) atomic_inc(&rq->nr_iowait); delayacct_blkio_start(); } @@ -123,7 +126,7 @@ Signed-off-by: Sebastian Andrzej Siewior } switch_count = &prev->nvcsw; } -@@ -3592,6 +3523,20 @@ static inline void sched_submit_work(str +@@ -3594,6 +3525,20 @@ static inline void sched_submit_work(struct task_struct *tsk) { if (!tsk->state || tsk_is_pi_blocked(tsk)) return; @@ -144,7 +147,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * If we are going to sleep and we have plugged IO queued, * make sure to submit it to avoid deadlocks. -@@ -3600,6 +3545,12 @@ static inline void sched_submit_work(str +@@ -3602,6 +3547,12 @@ static inline void sched_submit_work(struct task_struct *tsk) blk_schedule_flush_plug(tsk); } @@ -157,7 +160,7 @@ Signed-off-by: Sebastian Andrzej Siewior asmlinkage __visible void __sched schedule(void) { struct task_struct *tsk = current; -@@ -3610,6 +3561,7 @@ asmlinkage __visible void __sched schedu +@@ -3612,6 +3563,7 @@ asmlinkage __visible void __sched schedule(void) __schedule(false); sched_preempt_enable_no_resched(); } while (need_resched()); @@ -165,9 +168,11 @@ Signed-off-by: Sebastian Andrzej Siewior } EXPORT_SYMBOL(schedule); +diff --git a/kernel/workqueue.c b/kernel/workqueue.c +index f6551d189ca4..bf7be926ce5f 100644 --- a/kernel/workqueue.c +++ b/kernel/workqueue.c -@@ -843,43 +843,32 @@ static void wake_up_worker(struct worker +@@ -843,43 +843,32 @@ static void wake_up_worker(struct worker_pool *pool) } /** @@ -222,7 +227,7 @@ Signed-off-by: Sebastian Andrzej Siewior struct worker_pool *pool; /* -@@ -888,13 +877,15 @@ struct task_struct *wq_worker_sleeping(s +@@ -888,13 +877,15 @@ struct task_struct *wq_worker_sleeping(struct task_struct *task) * checking NOT_RUNNING. 
*/ if (worker->flags & WORKER_NOT_RUNNING) @@ -242,7 +247,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * The counterpart of the following dec_and_test, implied mb, -@@ -908,9 +899,12 @@ struct task_struct *wq_worker_sleeping(s +@@ -908,9 +899,12 @@ struct task_struct *wq_worker_sleeping(struct task_struct *task) * lock is safe. */ if (atomic_dec_and_test(&pool->nr_running) && @@ -258,6 +263,8 @@ Signed-off-by: Sebastian Andrzej Siewior } /** +diff --git a/kernel/workqueue_internal.h b/kernel/workqueue_internal.h +index 66fbb5a9e633..30cfed226b39 100644 --- a/kernel/workqueue_internal.h +++ b/kernel/workqueue_internal.h @@ -44,6 +44,7 @@ struct worker { @@ -268,7 +275,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * Opaque string set with work_set_desc(). Printed out with task -@@ -69,7 +70,7 @@ static inline struct worker *current_wq_ +@@ -69,7 +70,7 @@ static inline struct worker *current_wq_worker(void) * Scheduler hooks for concurrency managed workqueue. Only to be used from * sched/core.c and workqueue.c. */ @@ -278,3 +285,6 @@ Signed-off-by: Sebastian Andrzej Siewior +void wq_worker_sleeping(struct task_struct *task); #endif /* _KERNEL_WORKQUEUE_INTERNAL_H */ +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0197-debugobjects-rt.patch b/kernel/patches-4.19.x-rt/0195-debugobjects-Make-RT-aware.patch similarity index 58% rename from kernel/patches-4.19.x-rt/0197-debugobjects-rt.patch rename to kernel/patches-4.19.x-rt/0195-debugobjects-Make-RT-aware.patch index 767759b82..067e08778 100644 --- a/kernel/patches-4.19.x-rt/0197-debugobjects-rt.patch +++ b/kernel/patches-4.19.x-rt/0195-debugobjects-Make-RT-aware.patch @@ -1,17 +1,20 @@ -Subject: debugobjects: Make RT aware +From bfbfd69e3adaeffcc546f391f1f039dd715b2d57 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Sun, 17 Jul 2011 21:41:35 +0200 +Subject: [PATCH 195/269] debugobjects: Make RT aware Avoid filling the pool / allocating memory with irqs off(). 
Signed-off-by: Thomas Gleixner --- - lib/debugobjects.c | 5 ++++- + lib/debugobjects.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) +diff --git a/lib/debugobjects.c b/lib/debugobjects.c +index 14afeeb7d6ef..e28481c402ae 100644 --- a/lib/debugobjects.c +++ b/lib/debugobjects.c -@@ -376,7 +376,10 @@ static void +@@ -376,7 +376,10 @@ __debug_object_init(void *addr, struct debug_obj_descr *descr, int onstack) struct debug_obj *obj; unsigned long flags; @@ -23,3 +26,6 @@ Signed-off-by: Thomas Gleixner db = get_bucket((unsigned long) addr); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0198-seqlock-prevent-rt-starvation.patch b/kernel/patches-4.19.x-rt/0196-seqlock-Prevent-rt-starvation.patch similarity index 81% rename from kernel/patches-4.19.x-rt/0198-seqlock-prevent-rt-starvation.patch rename to kernel/patches-4.19.x-rt/0196-seqlock-Prevent-rt-starvation.patch index 31779c079..ce7014c25 100644 --- a/kernel/patches-4.19.x-rt/0198-seqlock-prevent-rt-starvation.patch +++ b/kernel/patches-4.19.x-rt/0196-seqlock-Prevent-rt-starvation.patch @@ -1,6 +1,7 @@ -Subject: seqlock: Prevent rt starvation +From 62e2b0613933b1d4557d86f4557375a9ee647fa7 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Wed, 22 Feb 2012 12:03:30 +0100 +Subject: [PATCH 196/269] seqlock: Prevent rt starvation If a low prio writer gets preempted while holding the seqlock write locked, a high prio reader spins forever on RT. 
@@ -18,16 +19,16 @@ Nicholas Mc Guire: - __write_seqcount_begin => __raw_write_seqcount_begin Signed-off-by: Thomas Gleixner - - --- - include/linux/seqlock.h | 57 +++++++++++++++++++++++++++++++++++++----------- - include/net/neighbour.h | 6 ++--- + include/linux/seqlock.h | 57 ++++++++++++++++++++++++++++++++--------- + include/net/neighbour.h | 6 ++--- 2 files changed, 48 insertions(+), 15 deletions(-) +diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h +index bcf4cf26b8c8..689ed53016c7 100644 --- a/include/linux/seqlock.h +++ b/include/linux/seqlock.h -@@ -221,20 +221,30 @@ static inline int read_seqcount_retry(co +@@ -221,20 +221,30 @@ static inline int read_seqcount_retry(const seqcount_t *s, unsigned start) return __read_seqcount_retry(s, start); } @@ -96,7 +97,7 @@ Signed-off-by: Thomas Gleixner static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start) { -@@ -446,36 +479,36 @@ static inline unsigned read_seqretry(con +@@ -446,36 +479,36 @@ static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start) static inline void write_seqlock(seqlock_t *sl) { spin_lock(&sl->lock); @@ -139,7 +140,7 @@ Signed-off-by: Thomas Gleixner spin_unlock_irq(&sl->lock); } -@@ -484,7 +517,7 @@ static inline unsigned long __write_seql +@@ -484,7 +517,7 @@ static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl) unsigned long flags; spin_lock_irqsave(&sl->lock, flags); @@ -148,7 +149,7 @@ Signed-off-by: Thomas Gleixner return flags; } -@@ -494,7 +527,7 @@ static inline unsigned long __write_seql +@@ -494,7 +527,7 @@ static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl) static inline void write_sequnlock_irqrestore(seqlock_t *sl, unsigned long flags) { @@ -157,9 +158,11 @@ Signed-off-by: Thomas Gleixner spin_unlock_irqrestore(&sl->lock, flags); } +diff --git a/include/net/neighbour.h b/include/net/neighbour.h +index beeeed126872..6dd1765e22ec 100644 --- a/include/net/neighbour.h +++ b/include/net/neighbour.h -@@ -451,7 
+451,7 @@ static inline int neigh_hh_bridge(struct +@@ -451,7 +451,7 @@ static inline int neigh_hh_bridge(struct hh_cache *hh, struct sk_buff *skb) } #endif @@ -168,7 +171,7 @@ Signed-off-by: Thomas Gleixner { unsigned int hh_alen = 0; unsigned int seq; -@@ -493,7 +493,7 @@ static inline int neigh_hh_output(const +@@ -493,7 +493,7 @@ static inline int neigh_hh_output(const struct hh_cache *hh, struct sk_buff *skb static inline int neigh_output(struct neighbour *n, struct sk_buff *skb) { @@ -186,3 +189,6 @@ Signed-off-by: Thomas Gleixner const struct net_device *dev) { unsigned int seq; +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0199-sunrpc-make-svc_xprt_do_enqueue-use-get_cpu_light.patch b/kernel/patches-4.19.x-rt/0197-sunrpc-Make-svc_xprt_do_enqueue-use-get_cpu_light.patch similarity index 81% rename from kernel/patches-4.19.x-rt/0199-sunrpc-make-svc_xprt_do_enqueue-use-get_cpu_light.patch rename to kernel/patches-4.19.x-rt/0197-sunrpc-Make-svc_xprt_do_enqueue-use-get_cpu_light.patch index d6980dc43..3e9ff2643 100644 --- a/kernel/patches-4.19.x-rt/0199-sunrpc-make-svc_xprt_do_enqueue-use-get_cpu_light.patch +++ b/kernel/patches-4.19.x-rt/0197-sunrpc-Make-svc_xprt_do_enqueue-use-get_cpu_light.patch @@ -1,6 +1,8 @@ +From b1572dc20a39a216ac1fbb36998f32af0f79b9ae Mon Sep 17 00:00:00 2001 From: Mike Galbraith Date: Wed, 18 Feb 2015 16:05:28 +0100 -Subject: sunrpc: Make svc_xprt_do_enqueue() use get_cpu_light() +Subject: [PATCH 197/269] sunrpc: Make svc_xprt_do_enqueue() use + get_cpu_light() |BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:915 |in_atomic(): 1, irqs_disabled(): 0, pid: 3194, name: rpc.nfsd @@ -28,12 +30,14 @@ Subject: sunrpc: Make svc_xprt_do_enqueue() use get_cpu_light() Signed-off-by: Mike Galbraith Signed-off-by: Sebastian Andrzej Siewior --- - net/sunrpc/svc_xprt.c | 4 ++-- + net/sunrpc/svc_xprt.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) +diff --git a/net/sunrpc/svc_xprt.c 
b/net/sunrpc/svc_xprt.c +index 6cf0fd37cbf0..48c0a0b90946 100644 --- a/net/sunrpc/svc_xprt.c +++ b/net/sunrpc/svc_xprt.c -@@ -393,7 +393,7 @@ void svc_xprt_do_enqueue(struct svc_xprt +@@ -393,7 +393,7 @@ void svc_xprt_do_enqueue(struct svc_xprt *xprt) if (test_and_set_bit(XPT_BUSY, &xprt->xpt_flags)) return; @@ -42,7 +46,7 @@ Signed-off-by: Sebastian Andrzej Siewior pool = svc_pool_for_cpu(xprt->xpt_server, cpu); atomic_long_inc(&pool->sp_stats.packets); -@@ -417,7 +417,7 @@ void svc_xprt_do_enqueue(struct svc_xprt +@@ -417,7 +417,7 @@ void svc_xprt_do_enqueue(struct svc_xprt *xprt) rqstp = NULL; out_unlock: rcu_read_unlock(); @@ -51,3 +55,6 @@ Signed-off-by: Sebastian Andrzej Siewior trace_svc_xprt_do_enqueue(xprt, rqstp); } EXPORT_SYMBOL_GPL(svc_xprt_do_enqueue); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0200-skbufhead-raw-lock.patch b/kernel/patches-4.19.x-rt/0198-net-Use-skbufhead-with-raw-lock.patch similarity index 73% rename from kernel/patches-4.19.x-rt/0200-skbufhead-raw-lock.patch rename to kernel/patches-4.19.x-rt/0198-net-Use-skbufhead-with-raw-lock.patch index 6b4332cc2..d119b921c 100644 --- a/kernel/patches-4.19.x-rt/0200-skbufhead-raw-lock.patch +++ b/kernel/patches-4.19.x-rt/0198-net-Use-skbufhead-with-raw-lock.patch @@ -1,6 +1,7 @@ +From 4893c0317fda3cc20eac3b4bbfcdd808ef3db828 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Tue, 12 Jul 2011 15:38:34 +0200 -Subject: net: Use skbufhead with raw lock +Subject: [PATCH 198/269] net: Use skbufhead with raw lock Use the rps lock as rawlock so we can keep irq-off regions. It looks low latency. However we can't kfree() from this context therefore we defer this @@ -8,14 +9,16 @@ to the softirq and use the tofree_queue list for it (similar to process_queue). 
Signed-off-by: Thomas Gleixner --- - include/linux/netdevice.h | 1 + - include/linux/skbuff.h | 7 +++++++ - net/core/dev.c | 33 +++++++++++++++++++++++++-------- + include/linux/netdevice.h | 1 + + include/linux/skbuff.h | 7 +++++++ + net/core/dev.c | 33 +++++++++++++++++++++++++-------- 3 files changed, 33 insertions(+), 8 deletions(-) +diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h +index 8c2fec0bcb26..384c63ecb9ae 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h -@@ -2970,6 +2970,7 @@ struct softnet_data { +@@ -2973,6 +2973,7 @@ struct softnet_data { unsigned int dropped; struct sk_buff_head input_pkt_queue; struct napi_struct backlog; @@ -23,6 +26,8 @@ Signed-off-by: Thomas Gleixner }; +diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h +index 820903ceac4f..f7f3abb41acb 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -287,6 +287,7 @@ struct sk_buff_head { @@ -33,7 +38,7 @@ Signed-off-by: Thomas Gleixner }; struct sk_buff; -@@ -1702,6 +1703,12 @@ static inline void skb_queue_head_init(s +@@ -1702,6 +1703,12 @@ static inline void skb_queue_head_init(struct sk_buff_head *list) __skb_queue_head_init(list); } @@ -46,9 +51,11 @@ Signed-off-by: Thomas Gleixner static inline void skb_queue_head_init_class(struct sk_buff_head *list, struct lock_class_key *class) { +diff --git a/net/core/dev.c b/net/core/dev.c +index b8208b940b5d..327a985bf0c7 100644 --- a/net/core/dev.c +++ b/net/core/dev.c -@@ -217,14 +217,14 @@ static inline struct hlist_head *dev_ind +@@ -217,14 +217,14 @@ static inline struct hlist_head *dev_index_hash(struct net *net, int ifindex) static inline void rps_lock(struct softnet_data *sd) { #ifdef CONFIG_RPS @@ -65,7 +72,7 @@ Signed-off-by: Thomas Gleixner #endif } -@@ -5244,7 +5244,7 @@ static void flush_backlog(struct work_st +@@ -5260,7 +5260,7 @@ static void flush_backlog(struct work_struct *work) skb_queue_walk_safe(&sd->input_pkt_queue, skb, tmp) { if (skb->dev->reg_state == 
NETREG_UNREGISTERING) { __skb_unlink(skb, &sd->input_pkt_queue); @@ -74,7 +81,7 @@ Signed-off-by: Thomas Gleixner input_queue_head_incr(sd); } } -@@ -5254,11 +5254,14 @@ static void flush_backlog(struct work_st +@@ -5270,11 +5270,14 @@ static void flush_backlog(struct work_struct *work) skb_queue_walk_safe(&sd->process_queue, skb, tmp) { if (skb->dev->reg_state == NETREG_UNREGISTERING) { __skb_unlink(skb, &sd->process_queue); @@ -90,7 +97,7 @@ Signed-off-by: Thomas Gleixner } static void flush_all_backlogs(void) -@@ -5837,7 +5840,9 @@ static int process_backlog(struct napi_s +@@ -5853,7 +5856,9 @@ static int process_backlog(struct napi_struct *napi, int quota) while (again) { struct sk_buff *skb; @@ -100,7 +107,7 @@ Signed-off-by: Thomas Gleixner rcu_read_lock(); __netif_receive_skb(skb); rcu_read_unlock(); -@@ -5845,9 +5850,9 @@ static int process_backlog(struct napi_s +@@ -5861,9 +5866,9 @@ static int process_backlog(struct napi_struct *napi, int quota) if (++work >= quota) return work; @@ -111,7 +118,7 @@ Signed-off-by: Thomas Gleixner rps_lock(sd); if (skb_queue_empty(&sd->input_pkt_queue)) { /* -@@ -6312,13 +6317,21 @@ static __latent_entropy void net_rx_acti +@@ -6328,13 +6333,21 @@ static __latent_entropy void net_rx_action(struct softirq_action *h) unsigned long time_limit = jiffies + usecs_to_jiffies(netdev_budget_usecs); int budget = netdev_budget; @@ -133,7 +140,7 @@ Signed-off-by: Thomas Gleixner for (;;) { struct napi_struct *n; -@@ -9307,10 +9320,13 @@ static int dev_cpu_dead(unsigned int old +@@ -9323,10 +9336,13 @@ static int dev_cpu_dead(unsigned int oldcpu) netif_rx_ni(skb); input_queue_head_incr(oldsd); } @@ -148,7 +155,7 @@ Signed-off-by: Thomas Gleixner return 0; } -@@ -9619,8 +9635,9 @@ static int __init net_dev_init(void) +@@ -9635,8 +9651,9 @@ static int __init net_dev_init(void) INIT_WORK(flush, flush_backlog); @@ -160,3 +167,6 @@ Signed-off-by: Thomas Gleixner #ifdef CONFIG_XFRM_OFFLOAD skb_queue_head_init(&sd->xfrm_backlog); #endif +-- 
+2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0201-net-move-xmit_recursion-to-per-task-variable-on-RT.patch b/kernel/patches-4.19.x-rt/0199-net-move-xmit_recursion-to-per-task-variable-on-RT.patch similarity index 79% rename from kernel/patches-4.19.x-rt/0201-net-move-xmit_recursion-to-per-task-variable-on-RT.patch rename to kernel/patches-4.19.x-rt/0199-net-move-xmit_recursion-to-per-task-variable-on-RT.patch index 4198f8399..53def4411 100644 --- a/kernel/patches-4.19.x-rt/0201-net-move-xmit_recursion-to-per-task-variable-on-RT.patch +++ b/kernel/patches-4.19.x-rt/0199-net-move-xmit_recursion-to-per-task-variable-on-RT.patch @@ -1,6 +1,7 @@ +From e6cdcf7dbf2aa921c55ed19673c775491efc2a75 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Wed, 13 Jan 2016 15:55:02 +0100 -Subject: net: move xmit_recursion to per-task variable on -RT +Subject: [PATCH 199/269] net: move xmit_recursion to per-task variable on -RT A softirq on -RT can be preempted. That means one task is in __dev_queue_xmit(), gets preempted and another task may enter @@ -16,12 +17,14 @@ CPU number. 
Cc: stable-rt@vger.kernel.org Signed-off-by: Sebastian Andrzej Siewior --- - include/linux/netdevice.h | 95 ++++++++++++++++++++++++++++++++++++++++++---- - include/linux/sched.h | 3 + - net/core/dev.c | 15 ++++--- - net/core/filter.c | 6 +- + include/linux/netdevice.h | 95 ++++++++++++++++++++++++++++++++++++--- + include/linux/sched.h | 3 ++ + net/core/dev.c | 15 ++++--- + net/core/filter.c | 6 +-- 4 files changed, 104 insertions(+), 15 deletions(-) +diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h +index 384c63ecb9ae..b6a75296eb46 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -587,7 +587,11 @@ struct netdev_queue { @@ -36,7 +39,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * Time (in jiffies) of last Tx */ -@@ -2608,14 +2612,53 @@ void netdev_freemem(struct net_device *d +@@ -2611,14 +2615,53 @@ void netdev_freemem(struct net_device *dev); void synchronize_net(void); int init_dummy_netdev(struct net_device *dev); @@ -91,7 +94,7 @@ Signed-off-by: Sebastian Andrzej Siewior struct net_device *dev_get_by_index(struct net *net, int ifindex); struct net_device *__dev_get_by_index(struct net *net, int ifindex); struct net_device *dev_get_by_index_rcu(struct net *net, int ifindex); -@@ -3791,10 +3834,48 @@ static inline u32 netif_msg_init(int deb +@@ -3794,10 +3837,48 @@ static inline u32 netif_msg_init(int debug_value, int default_msg_enable_bits) return (1 << debug_value) - 1; } @@ -141,7 +144,7 @@ Signed-off-by: Sebastian Andrzej Siewior } static inline bool __netif_tx_acquire(struct netdev_queue *txq) -@@ -3811,32 +3892,32 @@ static inline void __netif_tx_release(st +@@ -3814,32 +3895,32 @@ static inline void __netif_tx_release(struct netdev_queue *txq) static inline void __netif_tx_lock_bh(struct netdev_queue *txq) { spin_lock_bh(&txq->_xmit_lock); @@ -179,21 +182,25 @@ Signed-off-by: Sebastian Andrzej Siewior txq->trans_start = jiffies; } +diff --git a/include/linux/sched.h b/include/linux/sched.h +index 
a023e1ba5d8f..a9a5edfa9689 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h -@@ -1209,6 +1209,9 @@ struct task_struct { +@@ -1208,6 +1208,9 @@ struct task_struct { + #endif #ifdef CONFIG_DEBUG_ATOMIC_SLEEP unsigned long task_state_change; - #endif ++#endif +#ifdef CONFIG_PREEMPT_RT_FULL + int xmit_recursion; -+#endif + #endif int pagefault_disabled; #ifdef CONFIG_MMU - struct task_struct *oom_reaper_list; +diff --git a/net/core/dev.c b/net/core/dev.c +index 327a985bf0c7..ee90223959fc 100644 --- a/net/core/dev.c +++ b/net/core/dev.c -@@ -3523,8 +3523,10 @@ static void skb_update_prio(struct sk_bu +@@ -3537,8 +3537,10 @@ static void skb_update_prio(struct sk_buff *skb) #define skb_update_prio(skb) #endif @@ -204,7 +211,7 @@ Signed-off-by: Sebastian Andrzej Siewior /** * dev_loopback_xmit - loop back @skb -@@ -3815,9 +3817,12 @@ static int __dev_queue_xmit(struct sk_bu +@@ -3829,9 +3831,12 @@ static int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev) if (dev->flags & IFF_UP) { int cpu = smp_processor_id(); /* ok because BHs are off */ @@ -219,7 +226,7 @@ Signed-off-by: Sebastian Andrzej Siewior goto recursion_alert; skb = validate_xmit_skb(skb, dev, &again); -@@ -3827,9 +3832,9 @@ static int __dev_queue_xmit(struct sk_bu +@@ -3841,9 +3846,9 @@ static int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev) HARD_TX_LOCK(dev, txq, cpu); if (!netif_xmit_stopped(txq)) { @@ -231,7 +238,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (dev_xmit_complete(rc)) { HARD_TX_UNLOCK(dev, txq); goto out; -@@ -8372,7 +8377,7 @@ static void netdev_init_one_queue(struct +@@ -8388,7 +8393,7 @@ static void netdev_init_one_queue(struct net_device *dev, /* Initialize queue lock */ spin_lock_init(&queue->_xmit_lock); netdev_set_xmit_lockdep_class(&queue->_xmit_lock, dev->type); @@ -240,9 +247,11 @@ Signed-off-by: Sebastian Andrzej Siewior netdev_queue_numa_node_write(queue, NUMA_NO_NODE); queue->dev = dev; #ifdef CONFIG_BQL +diff --git 
a/net/core/filter.c b/net/core/filter.c +index eb81e9db4093..2dd1f2eef4fa 100644 --- a/net/core/filter.c +++ b/net/core/filter.c -@@ -2000,7 +2000,7 @@ static inline int __bpf_tx_skb(struct ne +@@ -2000,7 +2000,7 @@ static inline int __bpf_tx_skb(struct net_device *dev, struct sk_buff *skb) { int ret; @@ -251,7 +260,7 @@ Signed-off-by: Sebastian Andrzej Siewior net_crit_ratelimited("bpf: recursion limit reached on datapath, buggy bpf program?\n"); kfree_skb(skb); return -ENETDOWN; -@@ -2008,9 +2008,9 @@ static inline int __bpf_tx_skb(struct ne +@@ -2008,9 +2008,9 @@ static inline int __bpf_tx_skb(struct net_device *dev, struct sk_buff *skb) skb->dev = dev; @@ -263,3 +272,6 @@ Signed-off-by: Sebastian Andrzej Siewior return ret; } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0202-net-provide-a-way-to-delegate-processing-a-softirq-t.patch b/kernel/patches-4.19.x-rt/0200-net-provide-a-way-to-delegate-processing-a-softirq-t.patch similarity index 73% rename from kernel/patches-4.19.x-rt/0202-net-provide-a-way-to-delegate-processing-a-softirq-t.patch rename to kernel/patches-4.19.x-rt/0200-net-provide-a-way-to-delegate-processing-a-softirq-t.patch index 2a3057c42..c42b0e78b 100644 --- a/kernel/patches-4.19.x-rt/0202-net-provide-a-way-to-delegate-processing-a-softirq-t.patch +++ b/kernel/patches-4.19.x-rt/0200-net-provide-a-way-to-delegate-processing-a-softirq-t.patch @@ -1,7 +1,8 @@ +From 0ba4f1b56a7639a293956b84416566f0211c8c77 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Wed, 20 Jan 2016 15:39:05 +0100 -Subject: net: provide a way to delegate processing a softirq to - ksoftirqd +Subject: [PATCH 200/269] net: provide a way to delegate processing a softirq + to ksoftirqd If the NET_RX uses up all of his budget it moves the following NAPI invocations into the `ksoftirqd`. On -RT it does not do so. Instead it @@ -13,11 +14,13 @@ __raise_softirq_irqoff_ksoft() which raises the softirq in the ksoftird. 
Cc: stable-rt@vger.kernel.org Signed-off-by: Sebastian Andrzej Siewior --- - include/linux/interrupt.h | 8 ++++++++ - kernel/softirq.c | 21 +++++++++++++++++++++ - net/core/dev.c | 2 +- + include/linux/interrupt.h | 8 ++++++++ + kernel/softirq.c | 21 +++++++++++++++++++++ + net/core/dev.c | 2 +- 3 files changed, 30 insertions(+), 1 deletion(-) +diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h +index 99f8b7ace7c9..72333899f043 100644 --- a/include/linux/interrupt.h +++ b/include/linux/interrupt.h @@ -524,6 +524,14 @@ extern void thread_do_softirq(void); @@ -35,12 +38,15 @@ Signed-off-by: Sebastian Andrzej Siewior extern void raise_softirq_irqoff(unsigned int nr); extern void raise_softirq(unsigned int nr); +diff --git a/kernel/softirq.c b/kernel/softirq.c +index 27a4bb2303d0..25bcf2f2714b 100644 --- a/kernel/softirq.c +++ b/kernel/softirq.c -@@ -722,6 +722,27 @@ void __raise_softirq_irqoff(unsigned int +@@ -721,6 +721,27 @@ void __raise_softirq_irqoff(unsigned int nr) + wakeup_proper_softirq(nr); } - /* ++/* + * Same as __raise_softirq_irqoff() but will process them in ksoftirqd + */ +void __raise_softirq_irqoff_ksoft(unsigned int nr) @@ -61,13 +67,14 @@ Signed-off-by: Sebastian Andrzej Siewior + wakeup_proper_softirq(nr); +} + -+/* + /* * This function must run with irqs disabled! 
*/ - void raise_softirq_irqoff(unsigned int nr) +diff --git a/net/core/dev.c b/net/core/dev.c +index ee90223959fc..da95705ccb67 100644 --- a/net/core/dev.c +++ b/net/core/dev.c -@@ -6366,7 +6366,7 @@ static __latent_entropy void net_rx_acti +@@ -6382,7 +6382,7 @@ static __latent_entropy void net_rx_action(struct softirq_action *h) list_splice_tail(&repoll, &list); list_splice(&list, &sd->poll_list); if (!list_empty(&sd->poll_list)) @@ -76,3 +83,6 @@ Signed-off-by: Sebastian Andrzej Siewior net_rps_action_and_irq_enable(sd); out: +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0203-net-dev-always-take-qdisc-s-busylock-in-__dev_xmit_s.patch b/kernel/patches-4.19.x-rt/0201-net-dev-always-take-qdisc-s-busylock-in-__dev_xmit_s.patch similarity index 74% rename from kernel/patches-4.19.x-rt/0203-net-dev-always-take-qdisc-s-busylock-in-__dev_xmit_s.patch rename to kernel/patches-4.19.x-rt/0201-net-dev-always-take-qdisc-s-busylock-in-__dev_xmit_s.patch index 954ae4287..8d4e95701 100644 --- a/kernel/patches-4.19.x-rt/0203-net-dev-always-take-qdisc-s-busylock-in-__dev_xmit_s.patch +++ b/kernel/patches-4.19.x-rt/0201-net-dev-always-take-qdisc-s-busylock-in-__dev_xmit_s.patch @@ -1,6 +1,8 @@ +From 9e7513a103f18db66ffaf2bcfd13c834cba602d7 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Wed, 30 Mar 2016 13:36:29 +0200 -Subject: [PATCH] net: dev: always take qdisc's busylock in __dev_xmit_skb() +Subject: [PATCH 201/269] net: dev: always take qdisc's busylock in + __dev_xmit_skb() The root-lock is dropped before dev_hard_start_xmit() is invoked and after setting the __QDISC___STATE_RUNNING bit. If this task is now pushed away @@ -15,12 +17,14 @@ low-prio task and submit the packet. 
Signed-off-by: Sebastian Andrzej Siewior --- - net/core/dev.c | 4 ++++ + net/core/dev.c | 4 ++++ 1 file changed, 4 insertions(+) +diff --git a/net/core/dev.c b/net/core/dev.c +index da95705ccb67..351e81f8a72d 100644 --- a/net/core/dev.c +++ b/net/core/dev.c -@@ -3451,7 +3451,11 @@ static inline int __dev_xmit_skb(struct +@@ -3465,7 +3465,11 @@ static inline int __dev_xmit_skb(struct sk_buff *skb, struct Qdisc *q, * This permits qdisc->running owner to get the lock more * often and dequeue packets faster. */ @@ -32,3 +36,6 @@ Signed-off-by: Sebastian Andrzej Siewior if (unlikely(contended)) spin_lock(&q->busylock); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0204-net-Qdisc-use-a-seqlock-instead-seqcount.patch b/kernel/patches-4.19.x-rt/0202-net-Qdisc-use-a-seqlock-instead-seqcount.patch similarity index 77% rename from kernel/patches-4.19.x-rt/0204-net-Qdisc-use-a-seqlock-instead-seqcount.patch rename to kernel/patches-4.19.x-rt/0202-net-Qdisc-use-a-seqlock-instead-seqcount.patch index 4922bfa18..62e3e6179 100644 --- a/kernel/patches-4.19.x-rt/0204-net-Qdisc-use-a-seqlock-instead-seqcount.patch +++ b/kernel/patches-4.19.x-rt/0202-net-Qdisc-use-a-seqlock-instead-seqcount.patch @@ -1,6 +1,7 @@ +From 8f5f7360b52bbe5081ba3204a2004f6fdeb75114 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Wed, 14 Sep 2016 17:36:35 +0200 -Subject: [PATCH] net/Qdisc: use a seqlock instead seqcount +Subject: [PATCH 202/269] net/Qdisc: use a seqlock instead seqcount The seqcount disables preemption on -RT while it is held which can't remove. Also we don't want the reader to spin for ages if the writer is @@ -9,20 +10,22 @@ the lock while writer is active. 
Signed-off-by: Sebastian Andrzej Siewior --- - include/linux/seqlock.h | 9 +++++++++ - include/net/gen_stats.h | 9 +++++---- - include/net/net_seq_lock.h | 15 +++++++++++++++ - include/net/sch_generic.h | 19 +++++++++++++++++-- - net/core/gen_estimator.c | 6 +++--- - net/core/gen_stats.c | 8 ++++---- - net/sched/sch_api.c | 2 +- - net/sched/sch_generic.c | 12 ++++++++++++ + include/linux/seqlock.h | 9 +++++++++ + include/net/gen_stats.h | 9 +++++---- + include/net/net_seq_lock.h | 15 +++++++++++++++ + include/net/sch_generic.h | 19 +++++++++++++++++-- + net/core/gen_estimator.c | 6 +++--- + net/core/gen_stats.c | 8 ++++---- + net/sched/sch_api.c | 2 +- + net/sched/sch_generic.c | 12 ++++++++++++ 8 files changed, 66 insertions(+), 14 deletions(-) create mode 100644 include/net/net_seq_lock.h +diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h +index 689ed53016c7..58f9909d6659 100644 --- a/include/linux/seqlock.h +++ b/include/linux/seqlock.h -@@ -482,6 +482,15 @@ static inline void write_seqlock(seqlock +@@ -482,6 +482,15 @@ static inline void write_seqlock(seqlock_t *sl) __raw_write_seqcount_begin(&sl->seqcount); } @@ -38,6 +41,8 @@ Signed-off-by: Sebastian Andrzej Siewior static inline void write_sequnlock(seqlock_t *sl) { __raw_write_seqcount_end(&sl->seqcount); +diff --git a/include/net/gen_stats.h b/include/net/gen_stats.h +index 883bb9085f15..3b593cdeb9af 100644 --- a/include/net/gen_stats.h +++ b/include/net/gen_stats.h @@ -6,6 +6,7 @@ @@ -48,7 +53,7 @@ Signed-off-by: Sebastian Andrzej Siewior struct gnet_stats_basic_cpu { struct gnet_stats_basic_packed bstats; -@@ -36,11 +37,11 @@ int gnet_stats_start_copy_compat(struct +@@ -36,11 +37,11 @@ int gnet_stats_start_copy_compat(struct sk_buff *skb, int type, spinlock_t *lock, struct gnet_dump *d, int padattr); @@ -62,7 +67,7 @@ Signed-off-by: Sebastian Andrzej Siewior struct gnet_stats_basic_packed *bstats, struct gnet_stats_basic_cpu __percpu *cpu, struct gnet_stats_basic_packed *b); -@@ -60,13 
+61,13 @@ int gen_new_estimator(struct gnet_stats_ +@@ -60,13 +61,13 @@ int gen_new_estimator(struct gnet_stats_basic_packed *bstats, struct gnet_stats_basic_cpu __percpu *cpu_bstats, struct net_rate_estimator __rcu **rate_est, spinlock_t *lock, @@ -78,6 +83,9 @@ Signed-off-by: Sebastian Andrzej Siewior bool gen_estimator_active(struct net_rate_estimator __rcu **ptr); bool gen_estimator_read(struct net_rate_estimator __rcu **ptr, struct gnet_stats_rate_est64 *sample); +diff --git a/include/net/net_seq_lock.h b/include/net/net_seq_lock.h +new file mode 100644 +index 000000000000..a7034298a82a --- /dev/null +++ b/include/net/net_seq_lock.h @@ -0,0 +1,15 @@ @@ -96,6 +104,8 @@ Signed-off-by: Sebastian Andrzej Siewior +#endif + +#endif +diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h +index c44da48de7df..c85ac38f7fa9 100644 --- a/include/net/sch_generic.h +++ b/include/net/sch_generic.h @@ -10,6 +10,7 @@ @@ -106,7 +116,7 @@ Signed-off-by: Sebastian Andrzej Siewior #include #include #include -@@ -97,7 +98,7 @@ struct Qdisc { +@@ -100,7 +101,7 @@ struct Qdisc { struct sk_buff_head gso_skb ____cacheline_aligned_in_smp; struct qdisc_skb_head q; struct gnet_stats_basic_packed bstats; @@ -115,7 +125,7 @@ Signed-off-by: Sebastian Andrzej Siewior struct gnet_stats_queue qstats; unsigned long state; struct Qdisc *next_sched; -@@ -118,7 +119,11 @@ static inline bool qdisc_is_running(stru +@@ -121,7 +122,11 @@ static inline bool qdisc_is_running(struct Qdisc *qdisc) { if (qdisc->flags & TCQ_F_NOLOCK) return spin_is_locked(&qdisc->seqlock); @@ -127,7 +137,7 @@ Signed-off-by: Sebastian Andrzej Siewior } static inline bool qdisc_run_begin(struct Qdisc *qdisc) -@@ -129,17 +134,27 @@ static inline bool qdisc_run_begin(struc +@@ -132,17 +137,27 @@ static inline bool qdisc_run_begin(struct Qdisc *qdisc) } else if (qdisc_is_running(qdisc)) { return false; } @@ -155,7 +165,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (qdisc->flags & TCQ_F_NOLOCK) 
spin_unlock(&qdisc->seqlock); } -@@ -458,7 +473,7 @@ static inline spinlock_t *qdisc_root_sle +@@ -453,7 +468,7 @@ static inline spinlock_t *qdisc_root_sleeping_lock(const struct Qdisc *qdisc) return qdisc_lock(root); } @@ -164,6 +174,8 @@ Signed-off-by: Sebastian Andrzej Siewior { struct Qdisc *root = qdisc_root_sleeping(qdisc); +diff --git a/net/core/gen_estimator.c b/net/core/gen_estimator.c +index e4e442d70c2d..c8fa906733fb 100644 --- a/net/core/gen_estimator.c +++ b/net/core/gen_estimator.c @@ -46,7 +46,7 @@ @@ -175,7 +187,7 @@ Signed-off-by: Sebastian Andrzej Siewior struct gnet_stats_basic_cpu __percpu *cpu_bstats; u8 ewma_log; u8 intvl_log; /* period : (250ms << intvl_log) */ -@@ -129,7 +129,7 @@ int gen_new_estimator(struct gnet_stats_ +@@ -129,7 +129,7 @@ int gen_new_estimator(struct gnet_stats_basic_packed *bstats, struct gnet_stats_basic_cpu __percpu *cpu_bstats, struct net_rate_estimator __rcu **rate_est, spinlock_t *lock, @@ -184,7 +196,7 @@ Signed-off-by: Sebastian Andrzej Siewior struct nlattr *opt) { struct gnet_estimator *parm = nla_data(opt); -@@ -227,7 +227,7 @@ int gen_replace_estimator(struct gnet_st +@@ -227,7 +227,7 @@ int gen_replace_estimator(struct gnet_stats_basic_packed *bstats, struct gnet_stats_basic_cpu __percpu *cpu_bstats, struct net_rate_estimator __rcu **rate_est, spinlock_t *lock, @@ -193,9 +205,11 @@ Signed-off-by: Sebastian Andrzej Siewior { return gen_new_estimator(bstats, cpu_bstats, rate_est, lock, running, opt); +diff --git a/net/core/gen_stats.c b/net/core/gen_stats.c +index e2fd8baec65f..8bab88738691 100644 --- a/net/core/gen_stats.c +++ b/net/core/gen_stats.c -@@ -142,7 +142,7 @@ static void +@@ -142,7 +142,7 @@ __gnet_stats_copy_basic_cpu(struct gnet_stats_basic_packed *bstats, } void @@ -204,7 +218,7 @@ Signed-off-by: Sebastian Andrzej Siewior struct gnet_stats_basic_packed *bstats, struct gnet_stats_basic_cpu __percpu *cpu, struct gnet_stats_basic_packed *b) -@@ -155,10 +155,10 @@ void +@@ -155,10 +155,10 @@ 
__gnet_stats_copy_basic(const seqcount_t *running, } do { if (running) @@ -226,9 +240,11 @@ Signed-off-by: Sebastian Andrzej Siewior struct gnet_dump *d, struct gnet_stats_basic_cpu __percpu *cpu, struct gnet_stats_basic_packed *b) +diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c +index be7cd140b2a3..5b8f90de0615 100644 --- a/net/sched/sch_api.c +++ b/net/sched/sch_api.c -@@ -1166,7 +1166,7 @@ static struct Qdisc *qdisc_create(struct +@@ -1166,7 +1166,7 @@ static struct Qdisc *qdisc_create(struct net_device *dev, rcu_assign_pointer(sch->stab, stab); } if (tca[TCA_RATE]) { @@ -237,6 +253,8 @@ Signed-off-by: Sebastian Andrzej Siewior err = -EOPNOTSUPP; if (sch->flags & TCQ_F_MQROOT) { +diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c +index 31b9c2b415b4..b0cc57ff96e3 100644 --- a/net/sched/sch_generic.c +++ b/net/sched/sch_generic.c @@ -570,7 +570,11 @@ struct Qdisc noop_qdisc = { @@ -251,7 +269,7 @@ Signed-off-by: Sebastian Andrzej Siewior .busylock = __SPIN_LOCK_UNLOCKED(noop_qdisc.busylock), }; EXPORT_SYMBOL(noop_qdisc); -@@ -860,9 +864,17 @@ struct Qdisc *qdisc_alloc(struct netdev_ +@@ -859,9 +863,17 @@ struct Qdisc *qdisc_alloc(struct netdev_queue *dev_queue, lockdep_set_class(&sch->busylock, dev->qdisc_tx_busylock ?: &qdisc_tx_busylock); @@ -269,3 +287,6 @@ Signed-off-by: Sebastian Andrzej Siewior sch->ops = ops; sch->flags = ops->static_flags; +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0205-net-add-back-the-missing-serialization-in-ip_send_un.patch b/kernel/patches-4.19.x-rt/0203-net-add-back-the-missing-serialization-in-ip_send_un.patch similarity index 82% rename from kernel/patches-4.19.x-rt/0205-net-add-back-the-missing-serialization-in-ip_send_un.patch rename to kernel/patches-4.19.x-rt/0203-net-add-back-the-missing-serialization-in-ip_send_un.patch index 1946d6dd8..cf4e2292b 100644 --- a/kernel/patches-4.19.x-rt/0205-net-add-back-the-missing-serialization-in-ip_send_un.patch +++ 
b/kernel/patches-4.19.x-rt/0203-net-add-back-the-missing-serialization-in-ip_send_un.patch @@ -1,6 +1,7 @@ +From de40c876cec758a0735fda3a4dffd05924f12a4b Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Wed, 31 Aug 2016 17:21:56 +0200 -Subject: [PATCH] net: add back the missing serialization in +Subject: [PATCH 203/269] net: add back the missing serialization in ip_send_unicast_reply() MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 @@ -37,9 +38,11 @@ This is brings back the old locks. Cc: stable-rt@vger.kernel.org Signed-off-by: Sebastian Andrzej Siewior --- - net/ipv4/tcp_ipv4.c | 6 ++++++ + net/ipv4/tcp_ipv4.c | 6 ++++++ 1 file changed, 6 insertions(+) +diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c +index 11101cf8693b..2b7205ad261a 100644 --- a/net/ipv4/tcp_ipv4.c +++ b/net/ipv4/tcp_ipv4.c @@ -62,6 +62,7 @@ @@ -50,7 +53,7 @@ Signed-off-by: Sebastian Andrzej Siewior #include #include -@@ -634,6 +635,7 @@ void tcp_v4_send_check(struct sock *sk, +@@ -634,6 +635,7 @@ void tcp_v4_send_check(struct sock *sk, struct sk_buff *skb) } EXPORT_SYMBOL(tcp_v4_send_check); @@ -58,7 +61,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * This routine will send an RST to the other tcp. * -@@ -768,6 +770,7 @@ static void tcp_v4_send_reset(const stru +@@ -768,6 +770,7 @@ static void tcp_v4_send_reset(const struct sock *sk, struct sk_buff *skb) arg.tos = ip_hdr(skb)->tos; arg.uid = sock_net_uid(net, sk && sk_fullsock(sk) ? sk : NULL); local_bh_disable(); @@ -66,7 +69,7 @@ Signed-off-by: Sebastian Andrzej Siewior ctl_sk = *this_cpu_ptr(net->ipv4.tcp_sk); if (sk) ctl_sk->sk_mark = (sk->sk_state == TCP_TIME_WAIT) ? 
-@@ -780,6 +783,7 @@ static void tcp_v4_send_reset(const stru +@@ -780,6 +783,7 @@ static void tcp_v4_send_reset(const struct sock *sk, struct sk_buff *skb) ctl_sk->sk_mark = 0; __TCP_INC_STATS(net, TCP_MIB_OUTSEGS); __TCP_INC_STATS(net, TCP_MIB_OUTRSTS); @@ -74,7 +77,7 @@ Signed-off-by: Sebastian Andrzej Siewior local_bh_enable(); #ifdef CONFIG_TCP_MD5SIG -@@ -860,6 +864,7 @@ static void tcp_v4_send_ack(const struct +@@ -860,6 +864,7 @@ static void tcp_v4_send_ack(const struct sock *sk, arg.tos = tos; arg.uid = sock_net_uid(net, sk_fullsock(sk) ? sk : NULL); local_bh_disable(); @@ -82,7 +85,7 @@ Signed-off-by: Sebastian Andrzej Siewior ctl_sk = *this_cpu_ptr(net->ipv4.tcp_sk); if (sk) ctl_sk->sk_mark = (sk->sk_state == TCP_TIME_WAIT) ? -@@ -871,6 +876,7 @@ static void tcp_v4_send_ack(const struct +@@ -871,6 +876,7 @@ static void tcp_v4_send_ack(const struct sock *sk, ctl_sk->sk_mark = 0; __TCP_INC_STATS(net, TCP_MIB_OUTSEGS); @@ -90,3 +93,6 @@ Signed-off-by: Sebastian Andrzej Siewior local_bh_enable(); } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0206-net-add-a-lock-around-icmp_sk.patch b/kernel/patches-4.19.x-rt/0204-net-add-a-lock-around-icmp_sk.patch similarity index 73% rename from kernel/patches-4.19.x-rt/0206-net-add-a-lock-around-icmp_sk.patch rename to kernel/patches-4.19.x-rt/0204-net-add-a-lock-around-icmp_sk.patch index a0774be74..f4d24b875 100644 --- a/kernel/patches-4.19.x-rt/0206-net-add-a-lock-around-icmp_sk.patch +++ b/kernel/patches-4.19.x-rt/0204-net-add-a-lock-around-icmp_sk.patch @@ -1,6 +1,7 @@ +From c35d9dd75bf9f6d2e39202e23d04a8850172240f Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Wed, 31 Aug 2016 17:54:09 +0200 -Subject: [PATCH] net: add a lock around icmp_sk() +Subject: [PATCH 204/269] net: add a lock around icmp_sk() It looks like the this_cpu_ptr() access in icmp_sk() is protected with local_bh_disable(). To avoid missing serialization in -RT I am adding @@ -9,9 +10,11 @@ here a local lock. 
No crash has been observed, this is just precaution. Cc: stable-rt@vger.kernel.org Signed-off-by: Sebastian Andrzej Siewior --- - net/ipv4/icmp.c | 8 ++++++++ + net/ipv4/icmp.c | 8 ++++++++ 1 file changed, 8 insertions(+) +diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c +index ad75c468ecfb..1770ff1638bc 100644 --- a/net/ipv4/icmp.c +++ b/net/ipv4/icmp.c @@ -77,6 +77,7 @@ @@ -22,7 +25,7 @@ Signed-off-by: Sebastian Andrzej Siewior #include #include #include -@@ -204,6 +205,8 @@ static const struct icmp_control icmp_po +@@ -204,6 +205,8 @@ static const struct icmp_control icmp_pointers[NR_ICMP_TYPES+1]; * * On SMP we have one ICMP socket per-cpu. */ @@ -31,7 +34,7 @@ Signed-off-by: Sebastian Andrzej Siewior static struct sock *icmp_sk(struct net *net) { return *this_cpu_ptr(net->ipv4.icmp_sk); -@@ -214,12 +217,16 @@ static inline struct sock *icmp_xmit_loc +@@ -214,12 +217,16 @@ static inline struct sock *icmp_xmit_lock(struct net *net) { struct sock *sk; @@ -48,7 +51,7 @@ Signed-off-by: Sebastian Andrzej Siewior return NULL; } return sk; -@@ -228,6 +235,7 @@ static inline struct sock *icmp_xmit_loc +@@ -228,6 +235,7 @@ static inline struct sock *icmp_xmit_lock(struct net *net) static inline void icmp_xmit_unlock(struct sock *sk) { spin_unlock(&sk->sk_lock.slock); @@ -56,3 +59,6 @@ Signed-off-by: Sebastian Andrzej Siewior } int sysctl_icmp_msgs_per_sec __read_mostly = 1000; +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0207-net-Have-__napi_schedule_irqoff-disable-interrupts-o.patch b/kernel/patches-4.19.x-rt/0205-net-Have-__napi_schedule_irqoff-disable-interrupts-o.patch similarity index 76% rename from kernel/patches-4.19.x-rt/0207-net-Have-__napi_schedule_irqoff-disable-interrupts-o.patch rename to kernel/patches-4.19.x-rt/0205-net-Have-__napi_schedule_irqoff-disable-interrupts-o.patch index 0888d7d97..7fc11ad5d 100644 --- a/kernel/patches-4.19.x-rt/0207-net-Have-__napi_schedule_irqoff-disable-interrupts-o.patch +++ 
b/kernel/patches-4.19.x-rt/0205-net-Have-__napi_schedule_irqoff-disable-interrupts-o.patch @@ -1,7 +1,8 @@ +From bdd2169d3d5cc93fcaca144c2166ac375331e25d Mon Sep 17 00:00:00 2001 From: Steven Rostedt Date: Tue, 6 Dec 2016 17:50:30 -0500 -Subject: [PATCH] net: Have __napi_schedule_irqoff() disable interrupts on - RT +Subject: [PATCH 205/269] net: Have __napi_schedule_irqoff() disable interrupts + on RT A customer hit a crash where the napi sd->poll_list became corrupted. The customer had the bnx2x driver, which does a @@ -22,13 +23,15 @@ Cc: stable-rt@vger.kernel.org Signed-off-by: Steven Rostedt (Red Hat) Signed-off-by: Sebastian Andrzej Siewior --- - include/linux/netdevice.h | 12 ++++++++++++ - net/core/dev.c | 2 ++ + include/linux/netdevice.h | 12 ++++++++++++ + net/core/dev.c | 2 ++ 2 files changed, 14 insertions(+) +diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h +index b6a75296eb46..946875cae933 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h -@@ -422,7 +422,19 @@ typedef enum rx_handler_result rx_handle +@@ -422,7 +422,19 @@ typedef enum rx_handler_result rx_handler_result_t; typedef rx_handler_result_t rx_handler_func_t(struct sk_buff **pskb); void __napi_schedule(struct napi_struct *n); @@ -48,9 +51,11 @@ Signed-off-by: Sebastian Andrzej Siewior static inline bool napi_disable_pending(struct napi_struct *n) { +diff --git a/net/core/dev.c b/net/core/dev.c +index 351e81f8a72d..50fe1e3ee26d 100644 --- a/net/core/dev.c +++ b/net/core/dev.c -@@ -5936,6 +5936,7 @@ bool napi_schedule_prep(struct napi_stru +@@ -5952,6 +5952,7 @@ bool napi_schedule_prep(struct napi_struct *n) } EXPORT_SYMBOL(napi_schedule_prep); @@ -58,7 +63,7 @@ Signed-off-by: Sebastian Andrzej Siewior /** * __napi_schedule_irqoff - schedule for receive * @n: entry to schedule -@@ -5947,6 +5948,7 @@ void __napi_schedule_irqoff(struct napi_ +@@ -5963,6 +5964,7 @@ void __napi_schedule_irqoff(struct napi_struct *n) 
____napi_schedule(this_cpu_ptr(&softnet_data), n); } EXPORT_SYMBOL(__napi_schedule_irqoff); @@ -66,3 +71,6 @@ Signed-off-by: Sebastian Andrzej Siewior bool napi_complete_done(struct napi_struct *n, int work_done) { +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0208-irqwork-push_most_work_into_softirq_context.patch b/kernel/patches-4.19.x-rt/0206-irqwork-push-most-work-into-softirq-context.patch similarity index 77% rename from kernel/patches-4.19.x-rt/0208-irqwork-push_most_work_into_softirq_context.patch rename to kernel/patches-4.19.x-rt/0206-irqwork-push-most-work-into-softirq-context.patch index 52c36c6d1..eb9be8bcc 100644 --- a/kernel/patches-4.19.x-rt/0208-irqwork-push_most_work_into_softirq_context.patch +++ b/kernel/patches-4.19.x-rt/0206-irqwork-push-most-work-into-softirq-context.patch @@ -1,6 +1,7 @@ -Subject: irqwork: push most work into softirq context +From 01a7f110c5d6b059012d7f6cf4c1b3af79253a7c Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Tue, 23 Jun 2015 15:32:51 +0200 +Subject: [PATCH 206/269] irqwork: push most work into softirq context Initially we defered all irqwork into softirq because we didn't want the latency spikes if perf or another user was busy and delayed the RT task. 
@@ -21,14 +22,16 @@ Mike Galbraith, hard and soft variant] Signed-off-by: Sebastian Andrzej Siewior --- - include/linux/irq_work.h | 8 ++++++ - kernel/irq_work.c | 60 ++++++++++++++++++++++++++++++++++++----------- - kernel/rcu/tree.c | 1 - kernel/sched/topology.c | 1 - kernel/time/tick-sched.c | 1 - kernel/time/timer.c | 2 + + include/linux/irq_work.h | 8 ++++++ + kernel/irq_work.c | 60 +++++++++++++++++++++++++++++++--------- + kernel/rcu/tree.c | 1 + + kernel/sched/topology.c | 1 + + kernel/time/tick-sched.c | 1 + + kernel/time/timer.c | 2 ++ 6 files changed, 60 insertions(+), 13 deletions(-) +diff --git a/include/linux/irq_work.h b/include/linux/irq_work.h +index b11fcdfd0770..0c50559987c5 100644 --- a/include/linux/irq_work.h +++ b/include/linux/irq_work.h @@ -18,6 +18,8 @@ @@ -40,7 +43,7 @@ Signed-off-by: Sebastian Andrzej Siewior #define IRQ_WORK_CLAIMED (IRQ_WORK_PENDING | IRQ_WORK_BUSY) -@@ -52,4 +54,10 @@ static inline bool irq_work_needs_cpu(vo +@@ -52,4 +54,10 @@ static inline bool irq_work_needs_cpu(void) { return false; } static inline void irq_work_run(void) { } #endif @@ -51,6 +54,8 @@ Signed-off-by: Sebastian Andrzej Siewior +#endif + #endif /* _LINUX_IRQ_WORK_H */ +diff --git a/kernel/irq_work.c b/kernel/irq_work.c +index 6b7cdf17ccf8..7b41d9aa3e9b 100644 --- a/kernel/irq_work.c +++ b/kernel/irq_work.c @@ -17,6 +17,7 @@ @@ -70,7 +75,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* All work should have been flushed before going offline */ WARN_ON_ONCE(cpu_is_offline(cpu)); -@@ -76,7 +79,12 @@ bool irq_work_queue_on(struct irq_work * +@@ -76,7 +79,12 @@ bool irq_work_queue_on(struct irq_work *work, int cpu) if (!irq_work_claim(work)) return false; @@ -84,7 +89,7 @@ Signed-off-by: Sebastian Andrzej Siewior arch_send_call_function_single_ipi(cpu); #else /* #ifdef CONFIG_SMP */ -@@ -89,6 +97,9 @@ bool irq_work_queue_on(struct irq_work * +@@ -89,6 +97,9 @@ bool irq_work_queue_on(struct irq_work *work, int cpu) /* Enqueue the irq work @work on the 
current CPU */ bool irq_work_queue(struct irq_work *work) { @@ -94,7 +99,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* Only queue if not already pending */ if (!irq_work_claim(work)) return false; -@@ -96,13 +107,15 @@ bool irq_work_queue(struct irq_work *wor +@@ -96,13 +107,15 @@ bool irq_work_queue(struct irq_work *work) /* Queue the entry and raise the IPI if needed. */ preempt_disable(); @@ -129,7 +134,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* All work should have been flushed before going offline */ WARN_ON_ONCE(cpu_is_offline(smp_processor_id())); -@@ -135,8 +147,12 @@ static void irq_work_run_list(struct lli +@@ -135,8 +147,12 @@ static void irq_work_run_list(struct llist_head *list) struct llist_node *llnode; unsigned long flags; @@ -143,7 +148,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (llist_empty(list)) return; -@@ -168,7 +184,16 @@ static void irq_work_run_list(struct lli +@@ -168,7 +184,16 @@ static void irq_work_run_list(struct llist_head *list) void irq_work_run(void) { irq_work_run_list(this_cpu_ptr(&raised_list)); @@ -179,9 +184,11 @@ Signed-off-by: Sebastian Andrzej Siewior /* * Synchronize against the irq_work @entry, ensures the entry is not +diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c +index f162a4f54b05..278fe66bfb70 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c -@@ -1296,6 +1296,7 @@ static int rcu_implicit_dynticks_qs(stru +@@ -1296,6 +1296,7 @@ static int rcu_implicit_dynticks_qs(struct rcu_data *rdp) !rdp->rcu_iw_pending && rdp->rcu_iw_gp_seq != rnp->gp_seq && (rnp->ffmask & rdp->grpmask)) { init_irq_work(&rdp->rcu_iw, rcu_iw_handler); @@ -189,9 +196,11 @@ Signed-off-by: Sebastian Andrzej Siewior rdp->rcu_iw_pending = true; rdp->rcu_iw_gp_seq = rnp->gp_seq; irq_work_queue_on(&rdp->rcu_iw, rdp->cpu); +diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c +index c0a751464971..6e95f1ca3e22 100644 --- a/kernel/sched/topology.c +++ b/kernel/sched/topology.c -@@ -279,6 +279,7 @@ static int 
init_rootdomain(struct root_d +@@ -279,6 +279,7 @@ static int init_rootdomain(struct root_domain *rd) rd->rto_cpu = -1; raw_spin_lock_init(&rd->rto_lock); init_irq_work(&rd->rto_push_work, rto_push_irq_work_func); @@ -199,9 +208,11 @@ Signed-off-by: Sebastian Andrzej Siewior #endif init_dl_bw(&rd->dl_bw); +diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c +index 6482945f8ae8..da4a3f8feb56 100644 --- a/kernel/time/tick-sched.c +++ b/kernel/time/tick-sched.c -@@ -232,6 +232,7 @@ static void nohz_full_kick_func(struct i +@@ -232,6 +232,7 @@ static void nohz_full_kick_func(struct irq_work *work) static DEFINE_PER_CPU(struct irq_work, nohz_full_kick_work) = { .func = nohz_full_kick_func, @@ -209,9 +220,11 @@ Signed-off-by: Sebastian Andrzej Siewior }; /* +diff --git a/kernel/time/timer.c b/kernel/time/timer.c +index 696e7583137c..781483c76b17 100644 --- a/kernel/time/timer.c +++ b/kernel/time/timer.c -@@ -1733,6 +1733,8 @@ static __latent_entropy void run_timer_s +@@ -1733,6 +1733,8 @@ static __latent_entropy void run_timer_softirq(struct softirq_action *h) { struct timer_base *base = this_cpu_ptr(&timer_bases[BASE_STD]); @@ -220,3 +233,6 @@ Signed-off-by: Sebastian Andrzej Siewior __run_timers(base); if (IS_ENABLED(CONFIG_NO_HZ_COMMON)) __run_timers(this_cpu_ptr(&timer_bases[BASE_DEF])); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0209-printk-rt-aware.patch b/kernel/patches-4.19.x-rt/0207-printk-Make-rt-aware.patch similarity index 79% rename from kernel/patches-4.19.x-rt/0209-printk-rt-aware.patch rename to kernel/patches-4.19.x-rt/0207-printk-Make-rt-aware.patch index 7293dcf1a..4d4b87b22 100644 --- a/kernel/patches-4.19.x-rt/0209-printk-rt-aware.patch +++ b/kernel/patches-4.19.x-rt/0207-printk-Make-rt-aware.patch @@ -1,18 +1,21 @@ -Subject: printk: Make rt aware +From 4d49bcfa2103be6571f2f53e06e8fa71d49feb9b Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Wed, 19 Sep 2012 14:50:37 +0200 +Subject: [PATCH 207/269] printk: Make rt aware Drop 
the lock before calling the console driver and do not disable interrupts while printing to a serial console. Signed-off-by: Thomas Gleixner --- - kernel/printk/printk.c | 33 ++++++++++++++++++++++++++++++--- + kernel/printk/printk.c | 33 ++++++++++++++++++++++++++++++--- 1 file changed, 30 insertions(+), 3 deletions(-) +diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c +index 6553508ff388..d983c509f74a 100644 --- a/kernel/printk/printk.c +++ b/kernel/printk/printk.c -@@ -1617,6 +1617,7 @@ SYSCALL_DEFINE3(syslog, int, type, char +@@ -1617,6 +1617,7 @@ SYSCALL_DEFINE3(syslog, int, type, char __user *, buf, int, len) return do_syslog(type, buf, len, SYSLOG_FROM_READER); } @@ -20,7 +23,7 @@ Signed-off-by: Thomas Gleixner /* * Special console_lock variants that help to reduce the risk of soft-lockups. * They allow to pass console_lock to another printk() call using a busy wait. -@@ -1757,6 +1758,15 @@ static int console_trylock_spinning(void +@@ -1757,6 +1758,15 @@ static int console_trylock_spinning(void) return 1; } @@ -36,7 +39,7 @@ Signed-off-by: Thomas Gleixner /* * Call the console drivers, asking them to write out * log_buf[start] to log_buf[end - 1]. -@@ -1772,6 +1782,7 @@ static void call_console_drivers(const c +@@ -1772,6 +1782,7 @@ static void call_console_drivers(const char *ext_text, size_t ext_len, if (!console_drivers) return; @@ -44,7 +47,7 @@ Signed-off-by: Thomas Gleixner for_each_console(con) { if (exclusive_console && con != exclusive_console) continue; -@@ -1787,6 +1798,7 @@ static void call_console_drivers(const c +@@ -1787,6 +1798,7 @@ static void call_console_drivers(const char *ext_text, size_t ext_len, else con->write(con, text, len); } @@ -52,7 +55,7 @@ Signed-off-by: Thomas Gleixner } int printk_delay_msec __read_mostly; -@@ -1978,20 +1990,30 @@ asmlinkage int vprintk_emit(int facility +@@ -1978,20 +1990,30 @@ asmlinkage int vprintk_emit(int facility, int level, /* If called from the scheduler, we can not call up(). 
*/ if (!in_sched) { @@ -105,3 +108,6 @@ Signed-off-by: Thomas Gleixner if (do_cond_resched) cond_resched(); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0210-kernel-printk-Don-t-try-to-print-from-IRQ-NMI-region.patch b/kernel/patches-4.19.x-rt/0208-kernel-printk-Don-t-try-to-print-from-IRQ-NMI-region.patch similarity index 72% rename from kernel/patches-4.19.x-rt/0210-kernel-printk-Don-t-try-to-print-from-IRQ-NMI-region.patch rename to kernel/patches-4.19.x-rt/0208-kernel-printk-Don-t-try-to-print-from-IRQ-NMI-region.patch index 8b917b609..fd480ad8a 100644 --- a/kernel/patches-4.19.x-rt/0210-kernel-printk-Don-t-try-to-print-from-IRQ-NMI-region.patch +++ b/kernel/patches-4.19.x-rt/0208-kernel-printk-Don-t-try-to-print-from-IRQ-NMI-region.patch @@ -1,6 +1,7 @@ +From 160a19dcfe1a664e430a678562901a32630f7ee2 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Thu, 19 May 2016 17:45:27 +0200 -Subject: [PATCH] kernel/printk: Don't try to print from IRQ/NMI region +Subject: [PATCH 208/269] kernel/printk: Don't try to print from IRQ/NMI region On -RT we try to acquire sleeping locks which might lead to warnings from lockdep or a warn_on() from spin_try_lock() (which is a rtmutex on @@ -10,12 +11,14 @@ this via console_unblank() / bust_spinlocks() as well. Signed-off-by: Sebastian Andrzej Siewior --- - kernel/printk/printk.c | 10 ++++++++++ + kernel/printk/printk.c | 10 ++++++++++ 1 file changed, 10 insertions(+) +diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c +index d983c509f74a..f15988a33860 100644 --- a/kernel/printk/printk.c +++ b/kernel/printk/printk.c -@@ -1782,6 +1782,11 @@ static void call_console_drivers(const c +@@ -1782,6 +1782,11 @@ static void call_console_drivers(const char *ext_text, size_t ext_len, if (!console_drivers) return; @@ -39,3 +42,6 @@ Signed-off-by: Sebastian Andrzej Siewior /* * console_unblank can no longer be called in interrupt context unless * oops_in_progress is set to 1.. 
+-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0211-HACK-printk-drop-the-logbuf_lock-more-often.patch b/kernel/patches-4.19.x-rt/0209-printk-Drop-the-logbuf_lock-more-often.patch similarity index 69% rename from kernel/patches-4.19.x-rt/0211-HACK-printk-drop-the-logbuf_lock-more-often.patch rename to kernel/patches-4.19.x-rt/0209-printk-Drop-the-logbuf_lock-more-often.patch index 0e0148abb..da49f51d3 100644 --- a/kernel/patches-4.19.x-rt/0211-HACK-printk-drop-the-logbuf_lock-more-often.patch +++ b/kernel/patches-4.19.x-rt/0209-printk-Drop-the-logbuf_lock-more-often.patch @@ -1,18 +1,21 @@ +From bf31931f09583088100f40d4c4b255571cc72578 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Thu, 21 Mar 2013 19:01:05 +0100 -Subject: printk: Drop the logbuf_lock more often +Subject: [PATCH 209/269] printk: Drop the logbuf_lock more often The lock is hold with irgs off. The latency drops 500us+ on my arm bugs with a "full" buffer after executing "dmesg" on the shell. Signed-off-by: Sebastian Andrzej Siewior --- - kernel/printk/printk.c | 28 ++++++++++++++++++++++++++++ + kernel/printk/printk.c | 28 ++++++++++++++++++++++++++++ 1 file changed, 28 insertions(+) +diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c +index f15988a33860..a43d07d4e043 100644 --- a/kernel/printk/printk.c +++ b/kernel/printk/printk.c -@@ -1420,12 +1420,23 @@ static int syslog_print_all(char __user +@@ -1420,12 +1420,23 @@ static int syslog_print_all(char __user *buf, int size, bool clear) u64 next_seq; u64 seq; u32 idx; @@ -36,7 +39,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * Find first record that fits, including all following records, * into the user-provided buffer for this dump. 
-@@ -1438,6 +1449,14 @@ static int syslog_print_all(char __user +@@ -1438,6 +1449,14 @@ static int syslog_print_all(char __user *buf, int size, bool clear) len += msg_print_text(msg, true, NULL, 0); idx = log_next(idx); seq++; @@ -51,7 +54,7 @@ Signed-off-by: Sebastian Andrzej Siewior } /* move first record forward until length fits into the buffer */ -@@ -1449,6 +1468,14 @@ static int syslog_print_all(char __user +@@ -1449,6 +1468,14 @@ static int syslog_print_all(char __user *buf, int size, bool clear) len -= msg_print_text(msg, true, NULL, 0); idx = log_next(idx); seq++; @@ -66,7 +69,7 @@ Signed-off-by: Sebastian Andrzej Siewior } /* last message fitting into this dump */ -@@ -1486,6 +1513,7 @@ static int syslog_print_all(char __user +@@ -1486,6 +1513,7 @@ static int syslog_print_all(char __user *buf, int size, bool clear) clear_seq = log_next_seq; clear_idx = log_next_idx; } @@ -74,3 +77,6 @@ Signed-off-by: Sebastian Andrzej Siewior logbuf_unlock_irq(); kfree(text); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0212-ARM-enable-irq-in-translation-section-permission-fau.patch b/kernel/patches-4.19.x-rt/0210-ARM-enable-irq-in-translation-section-permission-fau.patch similarity index 84% rename from kernel/patches-4.19.x-rt/0212-ARM-enable-irq-in-translation-section-permission-fau.patch rename to kernel/patches-4.19.x-rt/0210-ARM-enable-irq-in-translation-section-permission-fau.patch index e41d4889a..e379a2146 100644 --- a/kernel/patches-4.19.x-rt/0212-ARM-enable-irq-in-translation-section-permission-fau.patch +++ b/kernel/patches-4.19.x-rt/0210-ARM-enable-irq-in-translation-section-permission-fau.patch @@ -1,6 +1,11 @@ +From 740bf3655673f2b77230957eb21238798aa0b203 Mon Sep 17 00:00:00 2001 From: "Yadi.hu" Date: Wed, 10 Dec 2014 10:32:09 +0800 -Subject: ARM: enable irq in translation/section permission fault handlers +Subject: [PATCH 210/269] ARM: enable irq in translation/section permission + fault handlers +MIME-Version: 1.0 +Content-Type: text/plain; 
charset=UTF-8 +Content-Transfer-Encoding: 8bit Probably happens on all ARM, with CONFIG_PREEMPT_RT_FULL @@ -58,12 +63,14 @@ permission exception. Signed-off-by: Yadi.hu Signed-off-by: Sebastian Andrzej Siewior --- - arch/arm/mm/fault.c | 6 ++++++ + arch/arm/mm/fault.c | 6 ++++++ 1 file changed, 6 insertions(+) +diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c +index 3232afb6fdc0..3bec1f73a9aa 100644 --- a/arch/arm/mm/fault.c +++ b/arch/arm/mm/fault.c -@@ -439,6 +439,9 @@ do_translation_fault(unsigned long addr, +@@ -439,6 +439,9 @@ do_translation_fault(unsigned long addr, unsigned int fsr, if (addr < TASK_SIZE) return do_page_fault(addr, fsr, regs); @@ -73,7 +80,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (user_mode(regs)) goto bad_area; -@@ -506,6 +509,9 @@ do_translation_fault(unsigned long addr, +@@ -506,6 +509,9 @@ do_translation_fault(unsigned long addr, unsigned int fsr, static int do_sect_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs) { @@ -83,3 +90,6 @@ Signed-off-by: Sebastian Andrzej Siewior do_bad_area(addr, fsr, regs); return 0; } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0213-genirq-update-irq_set_irqchip_state-documentation.patch b/kernel/patches-4.19.x-rt/0211-genirq-update-irq_set_irqchip_state-documentation.patch similarity index 71% rename from kernel/patches-4.19.x-rt/0213-genirq-update-irq_set_irqchip_state-documentation.patch rename to kernel/patches-4.19.x-rt/0211-genirq-update-irq_set_irqchip_state-documentation.patch index a2f12d933..45139f702 100644 --- a/kernel/patches-4.19.x-rt/0213-genirq-update-irq_set_irqchip_state-documentation.patch +++ b/kernel/patches-4.19.x-rt/0211-genirq-update-irq_set_irqchip_state-documentation.patch @@ -1,6 +1,7 @@ +From 9179df818d04fdf3d3cc195a5d19fac4b4c904f1 Mon Sep 17 00:00:00 2001 From: Josh Cartwright Date: Thu, 11 Feb 2016 11:54:00 -0600 -Subject: genirq: update irq_set_irqchip_state documentation +Subject: [PATCH 211/269] genirq: update irq_set_irqchip_state 
documentation On -rt kernels, the use of migrate_disable()/migrate_enable() is sufficient to guarantee a task isn't moved to another CPU. Update the @@ -9,12 +10,14 @@ irq_set_irqchip_state() documentation to reflect this. Signed-off-by: Josh Cartwright Signed-off-by: Sebastian Andrzej Siewior --- - kernel/irq/manage.c | 2 +- + kernel/irq/manage.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) +diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c +index ba5bba5f1ffd..48c2690070f3 100644 --- a/kernel/irq/manage.c +++ b/kernel/irq/manage.c -@@ -2274,7 +2274,7 @@ EXPORT_SYMBOL_GPL(irq_get_irqchip_state) +@@ -2277,7 +2277,7 @@ EXPORT_SYMBOL_GPL(irq_get_irqchip_state); * This call sets the internal irqchip state of an interrupt, * depending on the value of @which. * @@ -23,3 +26,6 @@ Signed-off-by: Sebastian Andrzej Siewior * interrupt controller has per-cpu registers. */ int irq_set_irqchip_state(unsigned int irq, enum irqchip_irq_state which, +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0214-KVM-arm-arm64-downgrade-preempt_disable-d-region-to-.patch b/kernel/patches-4.19.x-rt/0212-KVM-arm-arm64-downgrade-preempt_disable-d-region-to-.patch similarity index 70% rename from kernel/patches-4.19.x-rt/0214-KVM-arm-arm64-downgrade-preempt_disable-d-region-to-.patch rename to kernel/patches-4.19.x-rt/0212-KVM-arm-arm64-downgrade-preempt_disable-d-region-to-.patch index 5a7896a1f..b183d9b0b 100644 --- a/kernel/patches-4.19.x-rt/0214-KVM-arm-arm64-downgrade-preempt_disable-d-region-to-.patch +++ b/kernel/patches-4.19.x-rt/0212-KVM-arm-arm64-downgrade-preempt_disable-d-region-to-.patch @@ -1,6 +1,8 @@ +From 7635f97cb803db25caa49d5fd48ecb46672272d9 Mon Sep 17 00:00:00 2001 From: Josh Cartwright Date: Thu, 11 Feb 2016 11:54:01 -0600 -Subject: KVM: arm/arm64: downgrade preempt_disable()d region to migrate_disable() +Subject: [PATCH 212/269] KVM: arm/arm64: downgrade preempt_disable()d region + to migrate_disable() kvm_arch_vcpu_ioctl_run() disables the use of 
preemption when updating the vgic and timer states to prevent the calling task from migrating to @@ -17,12 +19,14 @@ Reported-by: Manish Jaggi Signed-off-by: Josh Cartwright Signed-off-by: Sebastian Andrzej Siewior --- - virt/kvm/arm/arm.c | 6 +++--- + virt/kvm/arm/arm.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) +diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c +index 1415e36fed3d..8d8caad49eb6 100644 --- a/virt/kvm/arm/arm.c +++ b/virt/kvm/arm/arm.c -@@ -699,7 +699,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_v +@@ -709,7 +709,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run) * involves poking the GIC, which must be done in a * non-preemptible context. */ @@ -31,7 +35,7 @@ Signed-off-by: Sebastian Andrzej Siewior kvm_pmu_flush_hwstate(vcpu); -@@ -748,7 +748,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_v +@@ -758,7 +758,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run) kvm_timer_sync_hwstate(vcpu); kvm_vgic_sync_hwstate(vcpu); local_irq_enable(); @@ -40,7 +44,7 @@ Signed-off-by: Sebastian Andrzej Siewior continue; } -@@ -826,7 +826,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_v +@@ -836,7 +836,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run) /* Exit types that need handling before we can be preempted */ handle_exit_early(vcpu, run, ret); @@ -49,3 +53,6 @@ Signed-off-by: Sebastian Andrzej Siewior ret = handle_exit(vcpu, run, ret); } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0215-arm64-fpsimd-use-preemp_disable-in-addition-to-local.patch b/kernel/patches-4.19.x-rt/0213-arm64-fpsimd-use-preemp_disable-in-addition-to-local.patch similarity index 77% rename from kernel/patches-4.19.x-rt/0215-arm64-fpsimd-use-preemp_disable-in-addition-to-local.patch rename to kernel/patches-4.19.x-rt/0213-arm64-fpsimd-use-preemp_disable-in-addition-to-local.patch index 9a6063b8e..10bb1711e 100644 --- 
a/kernel/patches-4.19.x-rt/0215-arm64-fpsimd-use-preemp_disable-in-addition-to-local.patch +++ b/kernel/patches-4.19.x-rt/0213-arm64-fpsimd-use-preemp_disable-in-addition-to-local.patch @@ -1,6 +1,7 @@ +From 25f8f6ec0e7c56b6029b247d513eec0ba512da9b Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Wed, 25 Jul 2018 14:02:38 +0200 -Subject: [PATCH] arm64: fpsimd: use preemp_disable in addition to +Subject: [PATCH 213/269] arm64: fpsimd: use preemp_disable in addition to local_bh_disable() In v4.16-RT I noticed a number of warnings from task_fpsimd_load(). The @@ -13,12 +14,14 @@ Add preempt_disable()/enable() to enfore the required semantic on -RT. Signed-off-by: Sebastian Andrzej Siewior --- - arch/arm64/kernel/fpsimd.c | 31 +++++++++++++++++++++++++++++-- + arch/arm64/kernel/fpsimd.c | 31 +++++++++++++++++++++++++++++-- 1 file changed, 29 insertions(+), 2 deletions(-) +diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c +index 58c53bc96928..71252cd8b594 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c -@@ -159,6 +159,16 @@ static void sve_free(struct task_struct +@@ -159,6 +159,16 @@ static void sve_free(struct task_struct *task) __sve_free(task); } @@ -35,7 +38,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * TIF_SVE controls whether a task can use SVE without trapping while * in userspace, and also the way a task's FPSIMD/SVE state is stored -@@ -547,6 +557,7 @@ int sve_set_vector_length(struct task_st +@@ -547,6 +557,7 @@ int sve_set_vector_length(struct task_struct *task, * non-SVE thread. 
*/ if (task == current) { @@ -43,7 +46,7 @@ Signed-off-by: Sebastian Andrzej Siewior local_bh_disable(); fpsimd_save(); -@@ -557,8 +568,10 @@ int sve_set_vector_length(struct task_st +@@ -557,8 +568,10 @@ int sve_set_vector_length(struct task_struct *task, if (test_and_clear_tsk_thread_flag(task, TIF_SVE)) sve_to_fpsimd(task); @@ -55,7 +58,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * Force reallocation of task SVE state to the correct size -@@ -813,6 +826,7 @@ asmlinkage void do_sve_acc(unsigned int +@@ -813,6 +826,7 @@ asmlinkage void do_sve_acc(unsigned int esr, struct pt_regs *regs) sve_alloc(current); @@ -63,7 +66,7 @@ Signed-off-by: Sebastian Andrzej Siewior local_bh_disable(); fpsimd_save(); -@@ -826,6 +840,7 @@ asmlinkage void do_sve_acc(unsigned int +@@ -826,6 +840,7 @@ asmlinkage void do_sve_acc(unsigned int esr, struct pt_regs *regs) WARN_ON(1); /* SVE access shouldn't have trapped */ local_bh_enable(); @@ -71,7 +74,7 @@ Signed-off-by: Sebastian Andrzej Siewior } /* -@@ -892,10 +907,12 @@ void fpsimd_thread_switch(struct task_st +@@ -892,10 +907,12 @@ void fpsimd_thread_switch(struct task_struct *next) void fpsimd_flush_thread(void) { int vl, supported_vl; @@ -130,7 +133,7 @@ Signed-off-by: Sebastian Andrzej Siewior } /* -@@ -1031,6 +1054,7 @@ void fpsimd_update_current_state(struct +@@ -1031,6 +1054,7 @@ void fpsimd_update_current_state(struct user_fpsimd_state const *state) if (!system_supports_fpsimd()) return; @@ -138,7 +141,7 @@ Signed-off-by: Sebastian Andrzej Siewior local_bh_disable(); current->thread.uw.fpsimd_state = *state; -@@ -1043,6 +1067,7 @@ void fpsimd_update_current_state(struct +@@ -1043,6 +1067,7 @@ void fpsimd_update_current_state(struct user_fpsimd_state const *state) clear_thread_flag(TIF_FOREIGN_FPSTATE); local_bh_enable(); @@ -162,3 +165,6 @@ Signed-off-by: Sebastian Andrzej Siewior } EXPORT_SYMBOL(kernel_neon_begin); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0216-kgb-serial-hackaround.patch 
b/kernel/patches-4.19.x-rt/0214-kgdb-serial-Short-term-workaround.patch similarity index 70% rename from kernel/patches-4.19.x-rt/0216-kgb-serial-hackaround.patch rename to kernel/patches-4.19.x-rt/0214-kgdb-serial-Short-term-workaround.patch index 6794740ea..8f65bd685 100644 --- a/kernel/patches-4.19.x-rt/0216-kgb-serial-hackaround.patch +++ b/kernel/patches-4.19.x-rt/0214-kgdb-serial-Short-term-workaround.patch @@ -1,6 +1,7 @@ +From b9a4d200f0fc873f1ad960b730b283ea779c74a4 Mon Sep 17 00:00:00 2001 From: Jason Wessel Date: Thu, 28 Jul 2011 12:42:23 -0500 -Subject: kgdb/serial: Short term workaround +Subject: [PATCH 214/269] kgdb/serial: Short term workaround On 07/27/2011 04:37 PM, Thomas Gleixner wrote: > - KGDB (not yet disabled) is reportedly unusable on -rt right now due @@ -16,13 +17,14 @@ change separation between the console and the HW to have a polled mode Thanks, Jason. - --- - drivers/tty/serial/8250/8250_port.c | 3 +++ - include/linux/kdb.h | 2 ++ - kernel/debug/kdb/kdb_io.c | 2 ++ + drivers/tty/serial/8250/8250_port.c | 3 +++ + include/linux/kdb.h | 2 ++ + kernel/debug/kdb/kdb_io.c | 2 ++ 3 files changed, 7 insertions(+) +diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c +index 851d7f6046a4..a2705b401efc 100644 --- a/drivers/tty/serial/8250/8250_port.c +++ b/drivers/tty/serial/8250/8250_port.c @@ -31,6 +31,7 @@ @@ -33,7 +35,7 @@ Jason. #include #include #include -@@ -3241,6 +3242,8 @@ void serial8250_console_write(struct uar +@@ -3241,6 +3242,8 @@ void serial8250_console_write(struct uart_8250_port *up, const char *s, if (port->sysrq || oops_in_progress) locked = 0; @@ -42,9 +44,11 @@ Jason. 
else spin_lock_irqsave(&port->lock, flags); +diff --git a/include/linux/kdb.h b/include/linux/kdb.h +index 68bd88223417..e033b25b0b72 100644 --- a/include/linux/kdb.h +++ b/include/linux/kdb.h -@@ -167,6 +167,7 @@ extern __printf(2, 0) int vkdb_printf(en +@@ -167,6 +167,7 @@ extern __printf(2, 0) int vkdb_printf(enum kdb_msgsrc src, const char *fmt, extern __printf(1, 2) int kdb_printf(const char *, ...); typedef __printf(1, 2) int (*kdb_printf_t)(const char *, ...); @@ -52,7 +56,7 @@ Jason. extern void kdb_init(int level); /* Access to kdb specific polling devices */ -@@ -201,6 +202,7 @@ extern int kdb_register_flags(char *, kd +@@ -201,6 +202,7 @@ extern int kdb_register_flags(char *, kdb_func_t, char *, char *, extern int kdb_unregister(char *); #else /* ! CONFIG_KGDB_KDB */ static inline __printf(1, 2) int kdb_printf(const char *fmt, ...) { return 0; } @@ -60,6 +64,8 @@ Jason. static inline void kdb_init(int level) {} static inline int kdb_register(char *cmd, kdb_func_t func, char *usage, char *help, short minlen) { return 0; } +diff --git a/kernel/debug/kdb/kdb_io.c b/kernel/debug/kdb/kdb_io.c +index 6a4b41484afe..197cb422f6e1 100644 --- a/kernel/debug/kdb/kdb_io.c +++ b/kernel/debug/kdb/kdb_io.c @@ -857,9 +857,11 @@ int kdb_printf(const char *fmt, ...) @@ -74,3 +80,6 @@ Jason. 
return r; } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0217-sysfs-realtime-entry.patch b/kernel/patches-4.19.x-rt/0215-sysfs-Add-sys-kernel-realtime-entry.patch similarity index 73% rename from kernel/patches-4.19.x-rt/0217-sysfs-realtime-entry.patch rename to kernel/patches-4.19.x-rt/0215-sysfs-Add-sys-kernel-realtime-entry.patch index 85c90149d..4e3a81ff6 100644 --- a/kernel/patches-4.19.x-rt/0217-sysfs-realtime-entry.patch +++ b/kernel/patches-4.19.x-rt/0215-sysfs-Add-sys-kernel-realtime-entry.patch @@ -1,6 +1,7 @@ -Subject: sysfs: Add /sys/kernel/realtime entry +From 65880324093a78662b662259e6d79ad55ac8a4bf Mon Sep 17 00:00:00 2001 From: Clark Williams -Date: Sat Jul 30 21:55:53 2011 -0500 +Date: Sat, 30 Jul 2011 21:55:53 -0500 +Subject: [PATCH 215/269] sysfs: Add /sys/kernel/realtime entry Add a /sys/kernel entry to indicate that the kernel is a realtime kernel. @@ -14,9 +15,11 @@ Are there better solutions? Should it exist and return 0 on !-rt? Signed-off-by: Clark Williams Signed-off-by: Peter Zijlstra --- - kernel/ksysfs.c | 12 ++++++++++++ + kernel/ksysfs.c | 12 ++++++++++++ 1 file changed, 12 insertions(+) +diff --git a/kernel/ksysfs.c b/kernel/ksysfs.c +index 46ba853656f6..9a23632b6294 100644 --- a/kernel/ksysfs.c +++ b/kernel/ksysfs.c @@ -140,6 +140,15 @@ KERNEL_ATTR_RO(vmcoreinfo); @@ -35,13 +38,16 @@ Signed-off-by: Peter Zijlstra /* whether file capabilities are enabled */ static ssize_t fscaps_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf) -@@ -231,6 +240,9 @@ static struct attribute * kernel_attrs[] +@@ -230,6 +239,9 @@ static struct attribute * kernel_attrs[] = { + #ifndef CONFIG_TINY_RCU &rcu_expedited_attr.attr, &rcu_normal_attr.attr, - #endif ++#endif +#ifdef CONFIG_PREEMPT_RT_FULL + &realtime_attr.attr, -+#endif + #endif NULL }; - +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0218-mm-rt-kmap-atomic-scheduling.patch b/kernel/patches-4.19.x-rt/0216-mm-rt-kmap_atomic-scheduling.patch similarity index 78% rename 
from kernel/patches-4.19.x-rt/0218-mm-rt-kmap-atomic-scheduling.patch rename to kernel/patches-4.19.x-rt/0216-mm-rt-kmap_atomic-scheduling.patch index 0e5d34b6d..465c28745 100644 --- a/kernel/patches-4.19.x-rt/0218-mm-rt-kmap-atomic-scheduling.patch +++ b/kernel/patches-4.19.x-rt/0216-mm-rt-kmap_atomic-scheduling.patch @@ -1,6 +1,7 @@ -Subject: mm, rt: kmap_atomic scheduling +From e8dfb76eeb36e00d6827406f9b0d110eee60a084 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Thu, 28 Jul 2011 10:43:51 +0200 +Subject: [PATCH 216/269] mm, rt: kmap_atomic scheduling In fact, with migrate_disable() existing one could play games with kmap_atomic. You could save/restore the kmap_atomic slots on context @@ -19,15 +20,17 @@ Link: http://lkml.kernel.org/r/1311842631.5890.208.camel@twins and the pte content right away in the task struct. Shortens the context switch code. ] --- - arch/x86/kernel/process_32.c | 32 ++++++++++++++++++++++++++++++++ - arch/x86/mm/highmem_32.c | 13 ++++++++++--- - arch/x86/mm/iomap_32.c | 9 ++++++++- - include/linux/highmem.h | 31 +++++++++++++++++++++++++------ - include/linux/sched.h | 7 +++++++ - include/linux/uaccess.h | 2 ++ - mm/highmem.c | 6 ++++-- + arch/x86/kernel/process_32.c | 32 ++++++++++++++++++++++++++++++++ + arch/x86/mm/highmem_32.c | 13 ++++++++++--- + arch/x86/mm/iomap_32.c | 9 ++++++++- + include/linux/highmem.h | 31 +++++++++++++++++++++++++------ + include/linux/sched.h | 7 +++++++ + include/linux/uaccess.h | 2 ++ + mm/highmem.c | 6 ++++-- 7 files changed, 88 insertions(+), 12 deletions(-) +diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c +index d3e593eb189f..84afe55625f8 100644 --- a/arch/x86/kernel/process_32.c +++ b/arch/x86/kernel/process_32.c @@ -38,6 +38,7 @@ @@ -38,7 +41,7 @@ Link: http://lkml.kernel.org/r/1311842631.5890.208.camel@twins #include #include -@@ -198,6 +199,35 @@ start_thread(struct pt_regs *regs, unsig +@@ -198,6 +199,35 @@ start_thread(struct pt_regs *regs, unsigned long new_ip, 
unsigned long new_sp) } EXPORT_SYMBOL_GPL(start_thread); @@ -74,7 +77,7 @@ Link: http://lkml.kernel.org/r/1311842631.5890.208.camel@twins /* * switch_to(x,y) should switch tasks from x to y. -@@ -267,6 +297,8 @@ EXPORT_SYMBOL_GPL(start_thread); +@@ -267,6 +297,8 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p) switch_to_extra(prev_p, next_p); @@ -83,6 +86,8 @@ Link: http://lkml.kernel.org/r/1311842631.5890.208.camel@twins /* * Leave lazy mode, flushing any hypercalls made here. * This must be done before restoring TLS segments so +diff --git a/arch/x86/mm/highmem_32.c b/arch/x86/mm/highmem_32.c +index 6d18b70ed5a9..f752724c22e8 100644 --- a/arch/x86/mm/highmem_32.c +++ b/arch/x86/mm/highmem_32.c @@ -32,10 +32,11 @@ EXPORT_SYMBOL(kunmap); @@ -98,7 +103,7 @@ Link: http://lkml.kernel.org/r/1311842631.5890.208.camel@twins pagefault_disable(); if (!PageHighMem(page)) -@@ -45,7 +46,10 @@ void *kmap_atomic_prot(struct page *page +@@ -45,7 +46,10 @@ void *kmap_atomic_prot(struct page *page, pgprot_t prot) idx = type + KM_TYPE_NR*smp_processor_id(); vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx); BUG_ON(!pte_none(*(kmap_pte-idx))); @@ -129,6 +134,8 @@ Link: http://lkml.kernel.org/r/1311842631.5890.208.camel@twins } EXPORT_SYMBOL(__kunmap_atomic); +diff --git a/arch/x86/mm/iomap_32.c b/arch/x86/mm/iomap_32.c +index b3294d36769d..d5a48210d0f6 100644 --- a/arch/x86/mm/iomap_32.c +++ b/arch/x86/mm/iomap_32.c @@ -59,6 +59,7 @@ EXPORT_SYMBOL_GPL(iomap_free); @@ -139,7 +146,7 @@ Link: http://lkml.kernel.org/r/1311842631.5890.208.camel@twins unsigned long vaddr; int idx, type; -@@ -68,7 +69,10 @@ void *kmap_atomic_prot_pfn(unsigned long +@@ -68,7 +69,10 @@ void *kmap_atomic_prot_pfn(unsigned long pfn, pgprot_t prot) type = kmap_atomic_idx_push(); idx = type + KM_TYPE_NR * smp_processor_id(); vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx); @@ -161,9 +168,11 @@ Link: http://lkml.kernel.org/r/1311842631.5890.208.camel@twins kpte_clear_flush(kmap_pte-idx, vaddr); 
kmap_atomic_idx_pop(); } +diff --git a/include/linux/highmem.h b/include/linux/highmem.h +index 0690679832d4..1ac89e4718bf 100644 --- a/include/linux/highmem.h +++ b/include/linux/highmem.h -@@ -66,7 +66,7 @@ static inline void kunmap(struct page *p +@@ -66,7 +66,7 @@ static inline void kunmap(struct page *page) static inline void *kmap_atomic(struct page *page) { @@ -172,7 +181,7 @@ Link: http://lkml.kernel.org/r/1311842631.5890.208.camel@twins pagefault_disable(); return page_address(page); } -@@ -75,7 +75,7 @@ static inline void *kmap_atomic(struct p +@@ -75,7 +75,7 @@ static inline void *kmap_atomic(struct page *page) static inline void __kunmap_atomic(void *addr) { pagefault_enable(); @@ -181,7 +190,7 @@ Link: http://lkml.kernel.org/r/1311842631.5890.208.camel@twins } #define kmap_atomic_pfn(pfn) kmap_atomic(pfn_to_page(pfn)) -@@ -87,32 +87,51 @@ static inline void __kunmap_atomic(void +@@ -87,32 +87,51 @@ static inline void __kunmap_atomic(void *addr) #if defined(CONFIG_HIGHMEM) || defined(CONFIG_X86_32) @@ -237,6 +246,8 @@ Link: http://lkml.kernel.org/r/1311842631.5890.208.camel@twins #endif } +diff --git a/include/linux/sched.h b/include/linux/sched.h +index a9a5edfa9689..76e6cdafb992 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -28,6 +28,7 @@ @@ -260,9 +271,11 @@ Link: http://lkml.kernel.org/r/1311842631.5890.208.camel@twins #ifdef CONFIG_DEBUG_ATOMIC_SLEEP unsigned long task_state_change; #endif +diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h +index efe79c1cdd47..128a8489047d 100644 --- a/include/linux/uaccess.h +++ b/include/linux/uaccess.h -@@ -185,6 +185,7 @@ static __always_inline void pagefault_di +@@ -185,6 +185,7 @@ static __always_inline void pagefault_disabled_dec(void) */ static inline void pagefault_disable(void) { @@ -270,7 +283,7 @@ Link: http://lkml.kernel.org/r/1311842631.5890.208.camel@twins pagefault_disabled_inc(); /* * make sure to have issued the store before a pagefault -@@ -201,6 +202,7 @@ static 
inline void pagefault_enable(void +@@ -201,6 +202,7 @@ static inline void pagefault_enable(void) */ barrier(); pagefault_disabled_dec(); @@ -278,6 +291,8 @@ Link: http://lkml.kernel.org/r/1311842631.5890.208.camel@twins } /* +diff --git a/mm/highmem.c b/mm/highmem.c +index 59db3223a5d6..22aa3ddbd87b 100644 --- a/mm/highmem.c +++ b/mm/highmem.c @@ -30,10 +30,11 @@ @@ -293,7 +308,7 @@ Link: http://lkml.kernel.org/r/1311842631.5890.208.camel@twins /* * Virtual_count is not a pure "count". -@@ -108,8 +109,9 @@ static inline wait_queue_head_t *get_pkm +@@ -108,8 +109,9 @@ static inline wait_queue_head_t *get_pkmap_wait_queue_head(unsigned int color) unsigned long totalhigh_pages __read_mostly; EXPORT_SYMBOL(totalhigh_pages); @@ -304,3 +319,6 @@ Link: http://lkml.kernel.org/r/1311842631.5890.208.camel@twins unsigned int nr_free_highpages (void) { +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0219-x86-highmem-add-a-already-used-pte-check.patch b/kernel/patches-4.19.x-rt/0217-x86-highmem-Add-a-already-used-pte-check.patch similarity index 59% rename from kernel/patches-4.19.x-rt/0219-x86-highmem-add-a-already-used-pte-check.patch rename to kernel/patches-4.19.x-rt/0217-x86-highmem-Add-a-already-used-pte-check.patch index 47860fc3a..e7224911e 100644 --- a/kernel/patches-4.19.x-rt/0219-x86-highmem-add-a-already-used-pte-check.patch +++ b/kernel/patches-4.19.x-rt/0217-x86-highmem-Add-a-already-used-pte-check.patch @@ -1,17 +1,20 @@ +From c22bb5db4da4e6b17aa8a6387ffcd503dea51ec5 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Mon, 11 Mar 2013 17:09:55 +0100 -Subject: x86/highmem: Add a "already used pte" check +Subject: [PATCH 217/269] x86/highmem: Add a "already used pte" check This is a copy from kmap_atomic_prot(). 
Signed-off-by: Sebastian Andrzej Siewior --- - arch/x86/mm/iomap_32.c | 2 ++ + arch/x86/mm/iomap_32.c | 2 ++ 1 file changed, 2 insertions(+) +diff --git a/arch/x86/mm/iomap_32.c b/arch/x86/mm/iomap_32.c +index d5a48210d0f6..c0ec8d430c02 100644 --- a/arch/x86/mm/iomap_32.c +++ b/arch/x86/mm/iomap_32.c -@@ -69,6 +69,8 @@ void *kmap_atomic_prot_pfn(unsigned long +@@ -69,6 +69,8 @@ void *kmap_atomic_prot_pfn(unsigned long pfn, pgprot_t prot) type = kmap_atomic_idx_push(); idx = type + KM_TYPE_NR * smp_processor_id(); vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx); @@ -20,3 +23,6 @@ Signed-off-by: Sebastian Andrzej Siewior #ifdef CONFIG_PREEMPT_RT_FULL current->kmap_pte[type] = pte; #endif +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0220-arm-highmem-flush-tlb-on-unmap.patch b/kernel/patches-4.19.x-rt/0218-arm-highmem-Flush-tlb-on-unmap.patch similarity index 76% rename from kernel/patches-4.19.x-rt/0220-arm-highmem-flush-tlb-on-unmap.patch rename to kernel/patches-4.19.x-rt/0218-arm-highmem-Flush-tlb-on-unmap.patch index 08e17cab7..eae79a306 100644 --- a/kernel/patches-4.19.x-rt/0220-arm-highmem-flush-tlb-on-unmap.patch +++ b/kernel/patches-4.19.x-rt/0218-arm-highmem-Flush-tlb-on-unmap.patch @@ -1,6 +1,7 @@ +From fba4ff7b8883d22067b9453a1d158c520f067b70 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Mon, 11 Mar 2013 21:37:27 +0100 -Subject: arm/highmem: Flush tlb on unmap +Subject: [PATCH 218/269] arm/highmem: Flush tlb on unmap The tlb should be flushed on unmap and thus make the mapping entry invalid. This is only done in the non-debug case which does not look @@ -8,9 +9,11 @@ right. 
Signed-off-by: Sebastian Andrzej Siewior --- - arch/arm/mm/highmem.c | 2 +- + arch/arm/mm/highmem.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) +diff --git a/arch/arm/mm/highmem.c b/arch/arm/mm/highmem.c +index d02f8187b1cc..eb4b225d28c9 100644 --- a/arch/arm/mm/highmem.c +++ b/arch/arm/mm/highmem.c @@ -112,10 +112,10 @@ void __kunmap_atomic(void *kvaddr) @@ -25,3 +28,6 @@ Signed-off-by: Sebastian Andrzej Siewior kmap_atomic_idx_pop(); } else if (vaddr >= PKMAP_ADDR(0) && vaddr < PKMAP_ADDR(LAST_PKMAP)) { /* this address was obtained through kmap_high_get() */ +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0221-arm-enable-highmem-for-rt.patch b/kernel/patches-4.19.x-rt/0219-arm-Enable-highmem-for-rt.patch similarity index 84% rename from kernel/patches-4.19.x-rt/0221-arm-enable-highmem-for-rt.patch rename to kernel/patches-4.19.x-rt/0219-arm-Enable-highmem-for-rt.patch index 3891177cb..621a0ec00 100644 --- a/kernel/patches-4.19.x-rt/0221-arm-enable-highmem-for-rt.patch +++ b/kernel/patches-4.19.x-rt/0219-arm-Enable-highmem-for-rt.patch @@ -1,16 +1,19 @@ -Subject: arm: Enable highmem for rt +From 1a0e06d9a75c6d9d6ec21e345030430e78e81a84 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Wed, 13 Feb 2013 11:03:11 +0100 +Subject: [PATCH 219/269] arm: Enable highmem for rt fixup highmem for ARM. 
Signed-off-by: Thomas Gleixner --- - arch/arm/include/asm/switch_to.h | 8 +++++ - arch/arm/mm/highmem.c | 56 +++++++++++++++++++++++++++++++++------ - include/linux/highmem.h | 1 + arch/arm/include/asm/switch_to.h | 8 +++++ + arch/arm/mm/highmem.c | 56 +++++++++++++++++++++++++++----- + include/linux/highmem.h | 1 + 3 files changed, 57 insertions(+), 8 deletions(-) +diff --git a/arch/arm/include/asm/switch_to.h b/arch/arm/include/asm/switch_to.h +index d3e937dcee4d..6ab96a2ce1f8 100644 --- a/arch/arm/include/asm/switch_to.h +++ b/arch/arm/include/asm/switch_to.h @@ -4,6 +4,13 @@ @@ -27,7 +30,7 @@ Signed-off-by: Thomas Gleixner /* * For v7 SMP cores running a preemptible kernel we may be pre-empted * during a TLB maintenance operation, so execute an inner-shareable dsb -@@ -26,6 +33,7 @@ extern struct task_struct *__switch_to(s +@@ -26,6 +33,7 @@ extern struct task_struct *__switch_to(struct task_struct *, struct thread_info #define switch_to(prev,next,last) \ do { \ __complete_pending_tlbi(); \ @@ -35,9 +38,11 @@ Signed-off-by: Thomas Gleixner last = __switch_to(prev,task_thread_info(prev), task_thread_info(next)); \ } while (0) +diff --git a/arch/arm/mm/highmem.c b/arch/arm/mm/highmem.c +index eb4b225d28c9..542692dbd40a 100644 --- a/arch/arm/mm/highmem.c +++ b/arch/arm/mm/highmem.c -@@ -34,6 +34,11 @@ static inline pte_t get_fixmap_pte(unsig +@@ -34,6 +34,11 @@ static inline pte_t get_fixmap_pte(unsigned long vaddr) return *ptep; } @@ -161,6 +166,8 @@ Signed-off-by: Thomas Gleixner + } +} +#endif +diff --git a/include/linux/highmem.h b/include/linux/highmem.h +index 1ac89e4718bf..eaa2ef9bc10e 100644 --- a/include/linux/highmem.h +++ b/include/linux/highmem.h @@ -8,6 +8,7 @@ @@ -171,3 +178,6 @@ Signed-off-by: Thomas Gleixner #include +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0222-scsi-fcoe-rt-aware.patch b/kernel/patches-4.19.x-rt/0220-scsi-fcoe-Make-RT-aware.patch similarity index 71% rename from kernel/patches-4.19.x-rt/0222-scsi-fcoe-rt-aware.patch 
rename to kernel/patches-4.19.x-rt/0220-scsi-fcoe-Make-RT-aware.patch index 6b3ca2c3b..b59acb33c 100644 --- a/kernel/patches-4.19.x-rt/0222-scsi-fcoe-rt-aware.patch +++ b/kernel/patches-4.19.x-rt/0220-scsi-fcoe-Make-RT-aware.patch @@ -1,20 +1,23 @@ -Subject: scsi/fcoe: Make RT aware. +From f4644bebeab291324244e2cb3d957c692cec7168 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Sat, 12 Nov 2011 14:00:48 +0100 +Subject: [PATCH 220/269] scsi/fcoe: Make RT aware. Do not disable preemption while taking sleeping locks. All user look safe for migrate_diable() only. Signed-off-by: Thomas Gleixner --- - drivers/scsi/fcoe/fcoe.c | 16 ++++++++-------- - drivers/scsi/fcoe/fcoe_ctlr.c | 4 ++-- - drivers/scsi/libfc/fc_exch.c | 4 ++-- + drivers/scsi/fcoe/fcoe.c | 16 ++++++++-------- + drivers/scsi/fcoe/fcoe_ctlr.c | 4 ++-- + drivers/scsi/libfc/fc_exch.c | 4 ++-- 3 files changed, 12 insertions(+), 12 deletions(-) +diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c +index 6768b2e8148a..c20f51af6bdf 100644 --- a/drivers/scsi/fcoe/fcoe.c +++ b/drivers/scsi/fcoe/fcoe.c -@@ -1459,11 +1459,11 @@ static int fcoe_rcv(struct sk_buff *skb, +@@ -1459,11 +1459,11 @@ static int fcoe_rcv(struct sk_buff *skb, struct net_device *netdev, static int fcoe_alloc_paged_crc_eof(struct sk_buff *skb, int tlen) { struct fcoe_percpu_s *fps; @@ -29,7 +32,7 @@ Signed-off-by: Thomas Gleixner return rc; } -@@ -1650,11 +1650,11 @@ static inline int fcoe_filter_frames(str +@@ -1650,11 +1650,11 @@ static inline int fcoe_filter_frames(struct fc_lport *lport, return 0; } @@ -43,7 +46,7 @@ Signed-off-by: Thomas Gleixner return -EINVAL; } -@@ -1697,7 +1697,7 @@ static void fcoe_recv_frame(struct sk_bu +@@ -1697,7 +1697,7 @@ static void fcoe_recv_frame(struct sk_buff *skb) */ hp = (struct fcoe_hdr *) skb_network_header(skb); @@ -52,7 +55,7 @@ Signed-off-by: Thomas Gleixner if (unlikely(FC_FCOE_DECAPS_VER(hp) != FC_FCOE_VER)) { if (stats->ErrorFrames < 5) printk(KERN_WARNING "fcoe: FCoE version " -@@ 
-1729,13 +1729,13 @@ static void fcoe_recv_frame(struct sk_bu +@@ -1729,13 +1729,13 @@ static void fcoe_recv_frame(struct sk_buff *skb) goto drop; if (!fcoe_filter_frames(lport, fp)) { @@ -68,9 +71,11 @@ Signed-off-by: Thomas Gleixner kfree_skb(skb); } +diff --git a/drivers/scsi/fcoe/fcoe_ctlr.c b/drivers/scsi/fcoe/fcoe_ctlr.c +index 7dc4ffa24430..4946df66a5ab 100644 --- a/drivers/scsi/fcoe/fcoe_ctlr.c +++ b/drivers/scsi/fcoe/fcoe_ctlr.c -@@ -835,7 +835,7 @@ static unsigned long fcoe_ctlr_age_fcfs( +@@ -838,7 +838,7 @@ static unsigned long fcoe_ctlr_age_fcfs(struct fcoe_ctlr *fip) INIT_LIST_HEAD(&del_list); @@ -79,7 +84,7 @@ Signed-off-by: Thomas Gleixner list_for_each_entry_safe(fcf, next, &fip->fcfs, list) { deadline = fcf->time + fcf->fka_period + fcf->fka_period / 2; -@@ -871,7 +871,7 @@ static unsigned long fcoe_ctlr_age_fcfs( +@@ -874,7 +874,7 @@ static unsigned long fcoe_ctlr_age_fcfs(struct fcoe_ctlr *fip) sel_time = fcf->time; } } @@ -88,9 +93,11 @@ Signed-off-by: Thomas Gleixner list_for_each_entry_safe(fcf, next, &del_list, list) { /* Removes fcf from current list */ +diff --git a/drivers/scsi/libfc/fc_exch.c b/drivers/scsi/libfc/fc_exch.c +index 42bcf7f3a0f9..2ce045d6860c 100644 --- a/drivers/scsi/libfc/fc_exch.c +++ b/drivers/scsi/libfc/fc_exch.c -@@ -833,10 +833,10 @@ static struct fc_exch *fc_exch_em_alloc( +@@ -833,10 +833,10 @@ static struct fc_exch *fc_exch_em_alloc(struct fc_lport *lport, } memset(ep, 0, sizeof(*ep)); @@ -103,3 +110,6 @@ Signed-off-by: Thomas Gleixner /* peek cache of free slot */ if (pool->left != FC_XID_UNKNOWN) { +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0223-x86-crypto-reduce-preempt-disabled-regions.patch b/kernel/patches-4.19.x-rt/0221-x86-crypto-Reduce-preempt-disabled-regions.patch similarity index 78% rename from kernel/patches-4.19.x-rt/0223-x86-crypto-reduce-preempt-disabled-regions.patch rename to kernel/patches-4.19.x-rt/0221-x86-crypto-Reduce-preempt-disabled-regions.patch index 5b2f529ee..c74f8ad4a 100644 
--- a/kernel/patches-4.19.x-rt/0223-x86-crypto-reduce-preempt-disabled-regions.patch +++ b/kernel/patches-4.19.x-rt/0221-x86-crypto-Reduce-preempt-disabled-regions.patch @@ -1,6 +1,7 @@ -Subject: x86: crypto: Reduce preempt disabled regions +From 3f5be0658bbd8160961eec6f903d89aad36f03f1 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Mon, 14 Nov 2011 18:19:27 +0100 +Subject: [PATCH 221/269] x86: crypto: Reduce preempt disabled regions Restrict the preempt disabled regions to the actual floating point operations and enable preemption for the administrative actions. @@ -13,12 +14,14 @@ Signed-off-by: Peter Zijlstra Signed-off-by: Thomas Gleixner --- - arch/x86/crypto/aesni-intel_glue.c | 22 ++++++++++++---------- + arch/x86/crypto/aesni-intel_glue.c | 22 ++++++++++++---------- 1 file changed, 12 insertions(+), 10 deletions(-) +diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c +index 917f25e4d0a8..58d8c03fc32d 100644 --- a/arch/x86/crypto/aesni-intel_glue.c +++ b/arch/x86/crypto/aesni-intel_glue.c -@@ -434,14 +434,14 @@ static int ecb_encrypt(struct skcipher_r +@@ -434,14 +434,14 @@ static int ecb_encrypt(struct skcipher_request *req) err = skcipher_walk_virt(&walk, req, true); @@ -35,7 +38,7 @@ Signed-off-by: Thomas Gleixner return err; } -@@ -456,14 +456,14 @@ static int ecb_decrypt(struct skcipher_r +@@ -456,14 +456,14 @@ static int ecb_decrypt(struct skcipher_request *req) err = skcipher_walk_virt(&walk, req, true); @@ -52,7 +55,7 @@ Signed-off-by: Thomas Gleixner return err; } -@@ -478,14 +478,14 @@ static int cbc_encrypt(struct skcipher_r +@@ -478,14 +478,14 @@ static int cbc_encrypt(struct skcipher_request *req) err = skcipher_walk_virt(&walk, req, true); @@ -69,7 +72,7 @@ Signed-off-by: Thomas Gleixner return err; } -@@ -500,14 +500,14 @@ static int cbc_decrypt(struct skcipher_r +@@ -500,14 +500,14 @@ static int cbc_decrypt(struct skcipher_request *req) err = skcipher_walk_virt(&walk, req, true); @@ -86,7 +89,7 @@ 
Signed-off-by: Thomas Gleixner return err; } -@@ -557,18 +557,20 @@ static int ctr_crypt(struct skcipher_req +@@ -557,18 +557,20 @@ static int ctr_crypt(struct skcipher_request *req) err = skcipher_walk_virt(&walk, req, true); @@ -109,3 +112,6 @@ Signed-off-by: Thomas Gleixner return err; } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0224-crypto-Reduce-preempt-disabled-regions-more-algos.patch b/kernel/patches-4.19.x-rt/0222-crypto-Reduce-preempt-disabled-regions-more-algos.patch similarity index 76% rename from kernel/patches-4.19.x-rt/0224-crypto-Reduce-preempt-disabled-regions-more-algos.patch rename to kernel/patches-4.19.x-rt/0222-crypto-Reduce-preempt-disabled-regions-more-algos.patch index c9f0040d9..ef6d5f4bf 100644 --- a/kernel/patches-4.19.x-rt/0224-crypto-Reduce-preempt-disabled-regions-more-algos.patch +++ b/kernel/patches-4.19.x-rt/0222-crypto-Reduce-preempt-disabled-regions-more-algos.patch @@ -1,6 +1,7 @@ +From e17c7ea4fb043fe1d4e89e4a42ff80b20d157f12 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Fri, 21 Feb 2014 17:24:04 +0100 -Subject: crypto: Reduce preempt disabled regions, more algos +Subject: [PATCH 222/269] crypto: Reduce preempt disabled regions, more algos Don Estabrook reported | kernel: WARNING: CPU: 2 PID: 858 at kernel/sched/core.c:2428 migrate_disable+0xed/0x100() @@ -37,13 +38,15 @@ the bug is gone. 
Reported-by: Don Estabrook Signed-off-by: Sebastian Andrzej Siewior --- - arch/x86/crypto/cast5_avx_glue.c | 21 +++++++++------------ - arch/x86/crypto/glue_helper.c | 31 ++++++++++++++++--------------- + arch/x86/crypto/cast5_avx_glue.c | 21 +++++++++------------ + arch/x86/crypto/glue_helper.c | 31 ++++++++++++++++--------------- 2 files changed, 25 insertions(+), 27 deletions(-) +diff --git a/arch/x86/crypto/cast5_avx_glue.c b/arch/x86/crypto/cast5_avx_glue.c +index 41034745d6a2..d4bf7fc02ee7 100644 --- a/arch/x86/crypto/cast5_avx_glue.c +++ b/arch/x86/crypto/cast5_avx_glue.c -@@ -61,7 +61,7 @@ static inline void cast5_fpu_end(bool fp +@@ -61,7 +61,7 @@ static inline void cast5_fpu_end(bool fpu_enabled) static int ecb_crypt(struct skcipher_request *req, bool enc) { @@ -52,7 +55,7 @@ Signed-off-by: Sebastian Andrzej Siewior struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); struct cast5_ctx *ctx = crypto_skcipher_ctx(tfm); struct skcipher_walk walk; -@@ -76,7 +76,7 @@ static int ecb_crypt(struct skcipher_req +@@ -76,7 +76,7 @@ static int ecb_crypt(struct skcipher_request *req, bool enc) u8 *wsrc = walk.src.virt.addr; u8 *wdst = walk.dst.virt.addr; @@ -61,7 +64,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* Process multi-block batch */ if (nbytes >= bsize * CAST5_PARALLEL_BLOCKS) { -@@ -105,10 +105,9 @@ static int ecb_crypt(struct skcipher_req +@@ -105,10 +105,9 @@ static int ecb_crypt(struct skcipher_request *req, bool enc) } while (nbytes >= bsize); done: @@ -73,7 +76,7 @@ Signed-off-by: Sebastian Andrzej Siewior return err; } -@@ -212,7 +211,7 @@ static int cbc_decrypt(struct skcipher_r +@@ -212,7 +211,7 @@ static int cbc_decrypt(struct skcipher_request *req) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); struct cast5_ctx *ctx = crypto_skcipher_ctx(tfm); @@ -82,7 +85,7 @@ Signed-off-by: Sebastian Andrzej Siewior struct skcipher_walk walk; unsigned int nbytes; int err; -@@ -220,12 +219,11 @@ static int cbc_decrypt(struct skcipher_r +@@ 
-220,12 +219,11 @@ static int cbc_decrypt(struct skcipher_request *req) err = skcipher_walk_virt(&walk, req, false); while ((nbytes = walk.nbytes)) { @@ -97,7 +100,7 @@ Signed-off-by: Sebastian Andrzej Siewior return err; } -@@ -292,7 +290,7 @@ static int ctr_crypt(struct skcipher_req +@@ -292,7 +290,7 @@ static int ctr_crypt(struct skcipher_request *req) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); struct cast5_ctx *ctx = crypto_skcipher_ctx(tfm); @@ -106,7 +109,7 @@ Signed-off-by: Sebastian Andrzej Siewior struct skcipher_walk walk; unsigned int nbytes; int err; -@@ -300,13 +298,12 @@ static int ctr_crypt(struct skcipher_req +@@ -300,13 +298,12 @@ static int ctr_crypt(struct skcipher_request *req) err = skcipher_walk_virt(&walk, req, false); while ((nbytes = walk.nbytes) >= CAST5_BLOCK_SIZE) { @@ -122,9 +125,11 @@ Signed-off-by: Sebastian Andrzej Siewior if (walk.nbytes) { ctr_crypt_final(&walk, ctx); err = skcipher_walk_done(&walk, 0); +diff --git a/arch/x86/crypto/glue_helper.c b/arch/x86/crypto/glue_helper.c +index a78ef99a9981..dac489a1c4da 100644 --- a/arch/x86/crypto/glue_helper.c +++ b/arch/x86/crypto/glue_helper.c -@@ -38,7 +38,7 @@ int glue_ecb_req_128bit(const struct com +@@ -38,7 +38,7 @@ int glue_ecb_req_128bit(const struct common_glue_ctx *gctx, void *ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req)); const unsigned int bsize = 128 / 8; struct skcipher_walk walk; @@ -133,7 +138,7 @@ Signed-off-by: Sebastian Andrzej Siewior unsigned int nbytes; int err; -@@ -51,7 +51,7 @@ int glue_ecb_req_128bit(const struct com +@@ -51,7 +51,7 @@ int glue_ecb_req_128bit(const struct common_glue_ctx *gctx, unsigned int i; fpu_enabled = glue_fpu_begin(bsize, gctx->fpu_blocks_limit, @@ -142,7 +147,7 @@ Signed-off-by: Sebastian Andrzej Siewior for (i = 0; i < gctx->num_funcs; i++) { func_bytes = bsize * gctx->funcs[i].num_blocks; -@@ -69,10 +69,9 @@ int glue_ecb_req_128bit(const struct com +@@ -69,10 +69,9 @@ int glue_ecb_req_128bit(const struct 
common_glue_ctx *gctx, if (nbytes < bsize) break; } @@ -154,7 +159,7 @@ Signed-off-by: Sebastian Andrzej Siewior return err; } EXPORT_SYMBOL_GPL(glue_ecb_req_128bit); -@@ -115,7 +114,7 @@ int glue_cbc_decrypt_req_128bit(const st +@@ -115,7 +114,7 @@ int glue_cbc_decrypt_req_128bit(const struct common_glue_ctx *gctx, void *ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req)); const unsigned int bsize = 128 / 8; struct skcipher_walk walk; @@ -163,7 +168,7 @@ Signed-off-by: Sebastian Andrzej Siewior unsigned int nbytes; int err; -@@ -129,7 +128,7 @@ int glue_cbc_decrypt_req_128bit(const st +@@ -129,7 +128,7 @@ int glue_cbc_decrypt_req_128bit(const struct common_glue_ctx *gctx, u128 last_iv; fpu_enabled = glue_fpu_begin(bsize, gctx->fpu_blocks_limit, @@ -172,7 +177,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* Start of the last block. */ src += nbytes / bsize - 1; dst += nbytes / bsize - 1; -@@ -161,10 +160,10 @@ int glue_cbc_decrypt_req_128bit(const st +@@ -161,10 +160,10 @@ int glue_cbc_decrypt_req_128bit(const struct common_glue_ctx *gctx, done: u128_xor(dst, dst, (u128 *)walk.iv); *(u128 *)walk.iv = last_iv; @@ -184,7 +189,7 @@ Signed-off-by: Sebastian Andrzej Siewior return err; } EXPORT_SYMBOL_GPL(glue_cbc_decrypt_req_128bit); -@@ -175,7 +174,7 @@ int glue_ctr_req_128bit(const struct com +@@ -175,7 +174,7 @@ int glue_ctr_req_128bit(const struct common_glue_ctx *gctx, void *ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req)); const unsigned int bsize = 128 / 8; struct skcipher_walk walk; @@ -193,7 +198,7 @@ Signed-off-by: Sebastian Andrzej Siewior unsigned int nbytes; int err; -@@ -189,7 +188,7 @@ int glue_ctr_req_128bit(const struct com +@@ -189,7 +188,7 @@ int glue_ctr_req_128bit(const struct common_glue_ctx *gctx, le128 ctrblk; fpu_enabled = glue_fpu_begin(bsize, gctx->fpu_blocks_limit, @@ -202,7 +207,7 @@ Signed-off-by: Sebastian Andrzej Siewior be128_to_le128(&ctrblk, (be128 *)walk.iv); -@@ -213,11 +212,10 @@ int glue_ctr_req_128bit(const struct com 
+@@ -213,11 +212,10 @@ int glue_ctr_req_128bit(const struct common_glue_ctx *gctx, } le128_to_be128((be128 *)walk.iv, &ctrblk); @@ -215,7 +220,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (nbytes) { le128 ctrblk; u128 tmp; -@@ -278,7 +276,7 @@ int glue_xts_req_128bit(const struct com +@@ -278,7 +276,7 @@ int glue_xts_req_128bit(const struct common_glue_ctx *gctx, { const unsigned int bsize = 128 / 8; struct skcipher_walk walk; @@ -224,7 +229,7 @@ Signed-off-by: Sebastian Andrzej Siewior unsigned int nbytes; int err; -@@ -289,21 +287,24 @@ int glue_xts_req_128bit(const struct com +@@ -289,21 +287,24 @@ int glue_xts_req_128bit(const struct common_glue_ctx *gctx, /* set minimum length to bsize, for tweak_fn */ fpu_enabled = glue_fpu_begin(bsize, gctx->fpu_blocks_limit, @@ -252,3 +257,6 @@ Signed-off-by: Sebastian Andrzej Siewior return err; } EXPORT_SYMBOL_GPL(glue_xts_req_128bit); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0225-crypto-limit-more-FPU-enabled-sections.patch b/kernel/patches-4.19.x-rt/0223-crypto-limit-more-FPU-enabled-sections.patch similarity index 78% rename from kernel/patches-4.19.x-rt/0225-crypto-limit-more-FPU-enabled-sections.patch rename to kernel/patches-4.19.x-rt/0223-crypto-limit-more-FPU-enabled-sections.patch index 927731e85..65e5c34c2 100644 --- a/kernel/patches-4.19.x-rt/0225-crypto-limit-more-FPU-enabled-sections.patch +++ b/kernel/patches-4.19.x-rt/0223-crypto-limit-more-FPU-enabled-sections.patch @@ -1,6 +1,7 @@ +From da94fdf57dbc4e55dd359d103c8f61cc2811f47c Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Thu, 30 Nov 2017 13:40:10 +0100 -Subject: [PATCH] crypto: limit more FPU-enabled sections +Subject: [PATCH 223/269] crypto: limit more FPU-enabled sections MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @@ -28,14 +29,16 @@ performance. 
Cc: stable-rt@vger.kernel.org Signed-off-by: Sebastian Andrzej Siewior --- - arch/x86/crypto/chacha20_glue.c | 9 +++++---- - arch/x86/include/asm/fpu/api.h | 1 + - arch/x86/kernel/fpu/core.c | 12 ++++++++++++ + arch/x86/crypto/chacha20_glue.c | 9 +++++---- + arch/x86/include/asm/fpu/api.h | 1 + + arch/x86/kernel/fpu/core.c | 12 ++++++++++++ 3 files changed, 18 insertions(+), 4 deletions(-) +diff --git a/arch/x86/crypto/chacha20_glue.c b/arch/x86/crypto/chacha20_glue.c +index dce7c5d39c2f..6194160b7fbc 100644 --- a/arch/x86/crypto/chacha20_glue.c +++ b/arch/x86/crypto/chacha20_glue.c -@@ -81,23 +81,24 @@ static int chacha20_simd(struct skcipher +@@ -81,23 +81,24 @@ static int chacha20_simd(struct skcipher_request *req) crypto_chacha20_init(state, ctx, walk.iv); @@ -64,6 +67,8 @@ Signed-off-by: Sebastian Andrzej Siewior return err; } +diff --git a/arch/x86/include/asm/fpu/api.h b/arch/x86/include/asm/fpu/api.h +index a9caac9d4a72..18b31f22ca5d 100644 --- a/arch/x86/include/asm/fpu/api.h +++ b/arch/x86/include/asm/fpu/api.h @@ -25,6 +25,7 @@ extern void __kernel_fpu_begin(void); @@ -74,6 +79,8 @@ Signed-off-by: Sebastian Andrzej Siewior extern bool irq_fpu_usable(void); /* +diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c +index 2ea85b32421a..6914dc569d1e 100644 --- a/arch/x86/kernel/fpu/core.c +++ b/arch/x86/kernel/fpu/core.c @@ -138,6 +138,18 @@ void kernel_fpu_end(void) @@ -95,3 +102,6 @@ Signed-off-by: Sebastian Andrzej Siewior /* * Save the FPU state (mark it for reload if necessary): * +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0226-crypto-scompress-serialize-RT-percpu-scratch-buffer-.patch b/kernel/patches-4.19.x-rt/0224-crypto-scompress-serialize-RT-percpu-scratch-buffer-.patch similarity index 81% rename from kernel/patches-4.19.x-rt/0226-crypto-scompress-serialize-RT-percpu-scratch-buffer-.patch rename to kernel/patches-4.19.x-rt/0224-crypto-scompress-serialize-RT-percpu-scratch-buffer-.patch index 29fc6e938..7264edc0c 100644 --- 
a/kernel/patches-4.19.x-rt/0226-crypto-scompress-serialize-RT-percpu-scratch-buffer-.patch +++ b/kernel/patches-4.19.x-rt/0224-crypto-scompress-serialize-RT-percpu-scratch-buffer-.patch @@ -1,7 +1,8 @@ +From d46edae98108392143e56a64ada43af295b537a9 Mon Sep 17 00:00:00 2001 From: Mike Galbraith Date: Wed, 11 Jul 2018 17:14:47 +0200 -Subject: [PATCH] crypto: scompress - serialize RT percpu scratch buffer - access with a local lock +Subject: [PATCH 224/269] crypto: scompress - serialize RT percpu scratch + buffer access with a local lock | BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:974 | in_atomic(): 1, irqs_disabled(): 0, pid: 1401, name: cryptomgr_test @@ -35,9 +36,11 @@ causing the splat above. Serialize with a local lock for RT instead. Signed-off-by: Mike Galbraith Signed-off-by: Sebastian Andrzej Siewior --- - crypto/scompress.c | 6 ++++-- + crypto/scompress.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) +diff --git a/crypto/scompress.c b/crypto/scompress.c +index 968bbcf65c94..c2f0077e0801 100644 --- a/crypto/scompress.c +++ b/crypto/scompress.c @@ -24,6 +24,7 @@ @@ -48,7 +51,7 @@ Signed-off-by: Sebastian Andrzej Siewior #include #include #include -@@ -34,6 +35,7 @@ static void * __percpu *scomp_src_scratc +@@ -34,6 +35,7 @@ static void * __percpu *scomp_src_scratches; static void * __percpu *scomp_dst_scratches; static int scomp_scratch_users; static DEFINE_MUTEX(scomp_lock); @@ -56,7 +59,7 @@ Signed-off-by: Sebastian Andrzej Siewior #ifdef CONFIG_NET static int crypto_scomp_report(struct sk_buff *skb, struct crypto_alg *alg) -@@ -146,7 +148,7 @@ static int scomp_acomp_comp_decomp(struc +@@ -146,7 +148,7 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir) void **tfm_ctx = acomp_tfm_ctx(tfm); struct crypto_scomp *scomp = *tfm_ctx; void **ctx = acomp_request_ctx(req); @@ -65,7 +68,7 @@ Signed-off-by: Sebastian Andrzej Siewior u8 *scratch_src = *per_cpu_ptr(scomp_src_scratches, cpu); u8 
*scratch_dst = *per_cpu_ptr(scomp_dst_scratches, cpu); int ret; -@@ -181,7 +183,7 @@ static int scomp_acomp_comp_decomp(struc +@@ -181,7 +183,7 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir) 1); } out: @@ -74,3 +77,6 @@ Signed-off-by: Sebastian Andrzej Siewior return ret; } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0227-crypto-cryptd-add-a-lock-instead-preempt_disable-loc.patch b/kernel/patches-4.19.x-rt/0225-crypto-cryptd-add-a-lock-instead-preempt_disable-loc.patch similarity index 76% rename from kernel/patches-4.19.x-rt/0227-crypto-cryptd-add-a-lock-instead-preempt_disable-loc.patch rename to kernel/patches-4.19.x-rt/0225-crypto-cryptd-add-a-lock-instead-preempt_disable-loc.patch index eee9ac835..669fbef4b 100644 --- a/kernel/patches-4.19.x-rt/0227-crypto-cryptd-add-a-lock-instead-preempt_disable-loc.patch +++ b/kernel/patches-4.19.x-rt/0225-crypto-cryptd-add-a-lock-instead-preempt_disable-loc.patch @@ -1,6 +1,7 @@ +From b1616c1d9f52000a3614707e3c3ffe2b63c5fde9 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Thu, 26 Jul 2018 18:52:00 +0200 -Subject: [PATCH] crypto: cryptd - add a lock instead +Subject: [PATCH 225/269] crypto: cryptd - add a lock instead preempt_disable/local_bh_disable cryptd has a per-CPU lock which protected with local_bh_disable() and @@ -14,12 +15,14 @@ actual ressource is protected by the spinlock. 
Signed-off-by: Sebastian Andrzej Siewior --- - crypto/cryptd.c | 19 +++++++++---------- + crypto/cryptd.c | 19 +++++++++---------- 1 file changed, 9 insertions(+), 10 deletions(-) +diff --git a/crypto/cryptd.c b/crypto/cryptd.c +index addca7bae33f..8ad657cddc0a 100644 --- a/crypto/cryptd.c +++ b/crypto/cryptd.c -@@ -39,6 +39,7 @@ MODULE_PARM_DESC(cryptd_max_cpu_qlen, "S +@@ -39,6 +39,7 @@ MODULE_PARM_DESC(cryptd_max_cpu_qlen, "Set cryptd Max queue depth"); struct cryptd_cpu_queue { struct crypto_queue queue; struct work_struct work; @@ -27,7 +30,7 @@ Signed-off-by: Sebastian Andrzej Siewior }; struct cryptd_queue { -@@ -117,6 +118,7 @@ static int cryptd_init_queue(struct cryp +@@ -117,6 +118,7 @@ static int cryptd_init_queue(struct cryptd_queue *queue, cpu_queue = per_cpu_ptr(queue->cpu_queue, cpu); crypto_init_queue(&cpu_queue->queue, max_cpu_qlen); INIT_WORK(&cpu_queue->work, cryptd_queue_worker); @@ -35,7 +38,7 @@ Signed-off-by: Sebastian Andrzej Siewior } pr_info("cryptd: max_cpu_qlen set to %d\n", max_cpu_qlen); return 0; -@@ -141,8 +143,10 @@ static int cryptd_enqueue_request(struct +@@ -141,8 +143,10 @@ static int cryptd_enqueue_request(struct cryptd_queue *queue, struct cryptd_cpu_queue *cpu_queue; atomic_t *refcnt; @@ -48,7 +51,7 @@ Signed-off-by: Sebastian Andrzej Siewior err = crypto_enqueue_request(&cpu_queue->queue, request); refcnt = crypto_tfm_ctx(request->tfm); -@@ -158,7 +162,7 @@ static int cryptd_enqueue_request(struct +@@ -158,7 +162,7 @@ static int cryptd_enqueue_request(struct cryptd_queue *queue, atomic_inc(refcnt); out_put_cpu: @@ -57,7 +60,7 @@ Signed-off-by: Sebastian Andrzej Siewior return err; } -@@ -174,16 +178,11 @@ static void cryptd_queue_worker(struct w +@@ -174,16 +178,11 @@ static void cryptd_queue_worker(struct work_struct *work) cpu_queue = container_of(work, struct cryptd_cpu_queue, work); /* * Only handle one request at a time to avoid hogging crypto workqueue. 
@@ -76,3 +79,6 @@ Signed-off-by: Sebastian Andrzej Siewior if (!req) return; +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0228-panic-disable-random-on-rt.patch b/kernel/patches-4.19.x-rt/0226-panic-skip-get_random_bytes-for-RT_FULL-in-init_oops.patch similarity index 66% rename from kernel/patches-4.19.x-rt/0228-panic-disable-random-on-rt.patch rename to kernel/patches-4.19.x-rt/0226-panic-skip-get_random_bytes-for-RT_FULL-in-init_oops.patch index 51aa8c6c3..2263b74f8 100644 --- a/kernel/patches-4.19.x-rt/0228-panic-disable-random-on-rt.patch +++ b/kernel/patches-4.19.x-rt/0226-panic-skip-get_random_bytes-for-RT_FULL-in-init_oops.patch @@ -1,15 +1,19 @@ +From c3ce683225b678190d7c42bd8bc695ad74595ac8 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Tue, 14 Jul 2015 14:26:34 +0200 -Subject: panic: skip get_random_bytes for RT_FULL in init_oops_id +Subject: [PATCH 226/269] panic: skip get_random_bytes for RT_FULL in + init_oops_id Disable on -RT. If this is invoked from irq-context we will have problems to acquire the sleeping lock. 
Signed-off-by: Thomas Gleixner --- - kernel/panic.c | 2 ++ + kernel/panic.c | 2 ++ 1 file changed, 2 insertions(+) +diff --git a/kernel/panic.c b/kernel/panic.c +index 6a6df23acd1a..8f0a896e8428 100644 --- a/kernel/panic.c +++ b/kernel/panic.c @@ -479,9 +479,11 @@ static u64 oops_id; @@ -24,3 +28,6 @@ Signed-off-by: Thomas Gleixner oops_id++; return 0; +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0229-x86-stackprot-no-random-on-rt.patch b/kernel/patches-4.19.x-rt/0227-x86-stackprotector-Avoid-random-pool-on-rt.patch similarity index 78% rename from kernel/patches-4.19.x-rt/0229-x86-stackprot-no-random-on-rt.patch rename to kernel/patches-4.19.x-rt/0227-x86-stackprotector-Avoid-random-pool-on-rt.patch index 66c2ecf59..23fee6867 100644 --- a/kernel/patches-4.19.x-rt/0229-x86-stackprot-no-random-on-rt.patch +++ b/kernel/patches-4.19.x-rt/0227-x86-stackprotector-Avoid-random-pool-on-rt.patch @@ -1,6 +1,7 @@ +From 3daaf6574c9be1128d8384deff5de6c53bc2712f Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Thu, 16 Dec 2010 14:25:18 +0100 -Subject: x86: stackprotector: Avoid random pool on rt +Subject: [PATCH 227/269] x86: stackprotector: Avoid random pool on rt CPU bringup calls into the random pool to initialize the stack canary. During boot that works nicely even on RT as the might sleep @@ -12,11 +13,12 @@ entropy and we rely on the TSC randomnness. 
Reported-by: Carsten Emde Signed-off-by: Thomas Gleixner - --- - arch/x86/include/asm/stackprotector.h | 8 +++++++- + arch/x86/include/asm/stackprotector.h | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) +diff --git a/arch/x86/include/asm/stackprotector.h b/arch/x86/include/asm/stackprotector.h +index 8ec97a62c245..7bc85841fc56 100644 --- a/arch/x86/include/asm/stackprotector.h +++ b/arch/x86/include/asm/stackprotector.h @@ -60,7 +60,7 @@ @@ -28,7 +30,7 @@ Signed-off-by: Thomas Gleixner u64 tsc; #ifdef CONFIG_X86_64 -@@ -71,8 +71,14 @@ static __always_inline void boot_init_st +@@ -71,8 +71,14 @@ static __always_inline void boot_init_stack_canary(void) * of randomness. The TSC only matters for very early init, * there it already has some randomness on most systems. Later * on during the bootup the random pool has true entropy too. @@ -43,3 +45,6 @@ Signed-off-by: Thomas Gleixner tsc = rdtsc(); canary += tsc + (tsc << 32UL); canary &= CANARY_MASK; +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0230-random-make-it-work-on-rt.patch b/kernel/patches-4.19.x-rt/0228-random-Make-it-work-on-rt.patch similarity index 74% rename from kernel/patches-4.19.x-rt/0230-random-make-it-work-on-rt.patch rename to kernel/patches-4.19.x-rt/0228-random-Make-it-work-on-rt.patch index 5ccf43fdd..09eb38f87 100644 --- a/kernel/patches-4.19.x-rt/0230-random-make-it-work-on-rt.patch +++ b/kernel/patches-4.19.x-rt/0228-random-Make-it-work-on-rt.patch @@ -1,6 +1,7 @@ -Subject: random: Make it work on rt +From 5310182891f60d9a88c1abbc7512eca69f680a99 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Tue, 21 Aug 2012 20:38:50 +0200 +Subject: [PATCH 228/269] random: Make it work on rt Delegate the random insertion to the forced threaded interrupt handler. Store the return IP of the hard interrupt handler in the irq @@ -8,20 +9,21 @@ descriptor and feed it into the random generator as a source of entropy. 
Signed-off-by: Thomas Gleixner - --- - drivers/char/random.c | 11 +++++------ - drivers/hv/hv.c | 4 +++- - drivers/hv/vmbus_drv.c | 4 +++- - include/linux/irqdesc.h | 1 + - include/linux/random.h | 2 +- - kernel/irq/handle.c | 8 +++++++- - kernel/irq/manage.c | 6 ++++++ + drivers/char/random.c | 11 +++++------ + drivers/hv/hv.c | 4 +++- + drivers/hv/vmbus_drv.c | 4 +++- + include/linux/irqdesc.h | 1 + + include/linux/random.h | 2 +- + kernel/irq/handle.c | 8 +++++++- + kernel/irq/manage.c | 6 ++++++ 7 files changed, 26 insertions(+), 10 deletions(-) +diff --git a/drivers/char/random.c b/drivers/char/random.c +index c75b6cdf0053..4c20da67edd5 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c -@@ -1229,28 +1229,27 @@ static __u32 get_reg(struct fast_pool *f +@@ -1229,28 +1229,27 @@ static __u32 get_reg(struct fast_pool *f, struct pt_regs *regs) return *ptr; } @@ -55,9 +57,11 @@ Signed-off-by: Thomas Gleixner fast_mix(fast_pool); add_interrupt_bench(cycles); +diff --git a/drivers/hv/hv.c b/drivers/hv/hv.c +index 748a1c4172a6..4258244fa314 100644 --- a/drivers/hv/hv.c +++ b/drivers/hv/hv.c -@@ -112,10 +112,12 @@ int hv_post_message(union hv_connection_ +@@ -112,10 +112,12 @@ int hv_post_message(union hv_connection_id connection_id, static void hv_stimer0_isr(void) { struct hv_per_cpu_context *hv_cpu; @@ -71,6 +75,8 @@ Signed-off-by: Thomas Gleixner } static int hv_ce_set_next_event(unsigned long delta, +diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c +index 9aa18f387a34..39aaa14993cc 100644 --- a/drivers/hv/vmbus_drv.c +++ b/drivers/hv/vmbus_drv.c @@ -1042,6 +1042,8 @@ static void vmbus_isr(void) @@ -91,9 +97,11 @@ Signed-off-by: Thomas Gleixner } /* +diff --git a/include/linux/irqdesc.h b/include/linux/irqdesc.h +index 875c41b23f20..ff5eb8d1ede4 100644 --- a/include/linux/irqdesc.h +++ b/include/linux/irqdesc.h -@@ -70,6 +70,7 @@ struct irq_desc { +@@ -71,6 +71,7 @@ struct irq_desc { unsigned int irqs_unhandled; atomic_t threads_handled; int 
threads_handled_last; @@ -101,9 +109,11 @@ Signed-off-by: Thomas Gleixner raw_spinlock_t lock; struct cpumask *percpu_enabled; const struct cpumask *percpu_affinity; +diff --git a/include/linux/random.h b/include/linux/random.h +index 445a0ea4ff49..a7b7d9f97580 100644 --- a/include/linux/random.h +++ b/include/linux/random.h -@@ -32,7 +32,7 @@ static inline void add_latent_entropy(vo +@@ -32,7 +32,7 @@ static inline void add_latent_entropy(void) {} extern void add_input_randomness(unsigned int type, unsigned int code, unsigned int value) __latent_entropy; @@ -112,9 +122,11 @@ Signed-off-by: Thomas Gleixner extern void get_random_bytes(void *buf, int nbytes); extern int wait_for_random_bytes(void); +diff --git a/kernel/irq/handle.c b/kernel/irq/handle.c +index 38554bc35375..06a80bbf78af 100644 --- a/kernel/irq/handle.c +++ b/kernel/irq/handle.c -@@ -185,10 +185,16 @@ irqreturn_t handle_irq_event_percpu(stru +@@ -185,10 +185,16 @@ irqreturn_t handle_irq_event_percpu(struct irq_desc *desc) { irqreturn_t retval; unsigned int flags = 0; @@ -132,9 +144,11 @@ Signed-off-by: Thomas Gleixner if (!noirqdebug) note_interrupt(desc, retval); +diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c +index 48c2690070f3..9d7be2c33d19 100644 --- a/kernel/irq/manage.c +++ b/kernel/irq/manage.c -@@ -1076,6 +1076,12 @@ static int irq_thread(void *data) +@@ -1079,6 +1079,12 @@ static int irq_thread(void *data) if (action_ret == IRQ_WAKE_THREAD) irq_wake_secondary(desc, action); @@ -147,3 +161,6 @@ Signed-off-by: Thomas Gleixner wake_threads_waitq(desc); } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0231-random-avoid-preempt_disable-ed-section.patch b/kernel/patches-4.19.x-rt/0229-random-avoid-preempt_disable-ed-section.patch similarity index 86% rename from kernel/patches-4.19.x-rt/0231-random-avoid-preempt_disable-ed-section.patch rename to kernel/patches-4.19.x-rt/0229-random-avoid-preempt_disable-ed-section.patch index 846d4616f..cfaf35113 100644 --- 
a/kernel/patches-4.19.x-rt/0231-random-avoid-preempt_disable-ed-section.patch +++ b/kernel/patches-4.19.x-rt/0229-random-avoid-preempt_disable-ed-section.patch @@ -1,6 +1,7 @@ +From 58450ccb54ddabe50f8c0990f4ea69f7cdaabdac Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Fri, 12 May 2017 15:46:17 +0200 -Subject: [PATCH] random: avoid preempt_disable()ed section +Subject: [PATCH 229/269] random: avoid preempt_disable()ed section extract_crng() will use sleeping locks while in a preempt_disable() section due to get_cpu_var(). @@ -9,9 +10,11 @@ Work around it with local_locks. Cc: stable-rt@vger.kernel.org # where it applies to Signed-off-by: Sebastian Andrzej Siewior --- - drivers/char/random.c | 11 +++++++---- + drivers/char/random.c | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-) +diff --git a/drivers/char/random.c b/drivers/char/random.c +index 4c20da67edd5..91c1972b6a17 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -265,6 +265,7 @@ @@ -22,7 +25,7 @@ Signed-off-by: Sebastian Andrzej Siewior #include #include -@@ -2223,6 +2224,7 @@ static rwlock_t batched_entropy_reset_lo +@@ -2223,6 +2224,7 @@ static rwlock_t batched_entropy_reset_lock = __RW_LOCK_UNLOCKED(batched_entropy_ * at any point prior. 
*/ static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64); @@ -72,3 +75,6 @@ Signed-off-by: Sebastian Andrzej Siewior return ret; } EXPORT_SYMBOL(get_random_u32); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0232-cpu-hotplug--Implement-CPU-pinning.patch b/kernel/patches-4.19.x-rt/0230-cpu-hotplug-Implement-CPU-pinning.patch similarity index 78% rename from kernel/patches-4.19.x-rt/0232-cpu-hotplug--Implement-CPU-pinning.patch rename to kernel/patches-4.19.x-rt/0230-cpu-hotplug-Implement-CPU-pinning.patch index 2841608b7..702b05839 100644 --- a/kernel/patches-4.19.x-rt/0232-cpu-hotplug--Implement-CPU-pinning.patch +++ b/kernel/patches-4.19.x-rt/0230-cpu-hotplug-Implement-CPU-pinning.patch @@ -1,13 +1,16 @@ -Subject: cpu/hotplug: Implement CPU pinning +From d4c787bcf728f34398550a7ad54acb389cd41654 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Wed, 19 Jul 2017 17:31:20 +0200 +Subject: [PATCH 230/269] cpu/hotplug: Implement CPU pinning Signed-off-by: Thomas Gleixner --- - include/linux/sched.h | 1 + - kernel/cpu.c | 38 ++++++++++++++++++++++++++++++++++++++ + include/linux/sched.h | 1 + + kernel/cpu.c | 38 ++++++++++++++++++++++++++++++++++++++ 2 files changed, 39 insertions(+) +diff --git a/include/linux/sched.h b/include/linux/sched.h +index 76e6cdafb992..0445d5c7ced0 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -671,6 +671,7 @@ struct task_struct { @@ -18,9 +21,11 @@ Signed-off-by: Thomas Gleixner # ifdef CONFIG_SCHED_DEBUG int migrate_disable_atomic; # endif +diff --git a/kernel/cpu.c b/kernel/cpu.c +index f684f41492d3..3340c4f873ad 100644 --- a/kernel/cpu.c +++ b/kernel/cpu.c -@@ -75,6 +75,11 @@ static DEFINE_PER_CPU(struct cpuhp_cpu_s +@@ -75,6 +75,11 @@ static DEFINE_PER_CPU(struct cpuhp_cpu_state, cpuhp_state) = { .fail = CPUHP_INVALID, }; @@ -75,7 +80,7 @@ Signed-off-by: Thomas Gleixner } DEFINE_STATIC_PERCPU_RWSEM(cpu_hotplug_lock); -@@ -828,6 +861,7 @@ static int take_cpu_down(void *_param) +@@ -853,6 +886,7 @@ 
static int take_cpu_down(void *_param) static int takedown_cpu(unsigned int cpu) { @@ -83,7 +88,7 @@ Signed-off-by: Thomas Gleixner struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu); int err; -@@ -840,11 +874,14 @@ static int takedown_cpu(unsigned int cpu +@@ -865,11 +899,14 @@ static int takedown_cpu(unsigned int cpu) */ irq_lock_sparse(); @@ -98,7 +103,7 @@ Signed-off-by: Thomas Gleixner /* CPU refused to die */ irq_unlock_sparse(); /* Unpark the hotplug thread so we can rollback there */ -@@ -863,6 +900,7 @@ static int takedown_cpu(unsigned int cpu +@@ -888,6 +925,7 @@ static int takedown_cpu(unsigned int cpu) wait_for_ap_thread(st, false); BUG_ON(st->state != CPUHP_AP_IDLE_DEAD); @@ -106,3 +111,6 @@ Signed-off-by: Thomas Gleixner /* Interrupts are moved away from the dying cpu, reenable alloc/free */ irq_unlock_sparse(); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0233-sched-Allow-pinned-user-tasks-to-be-awakened-to-the-.patch b/kernel/patches-4.19.x-rt/0231-sched-Allow-pinned-user-tasks-to-be-awakened-to-the-.patch similarity index 71% rename from kernel/patches-4.19.x-rt/0233-sched-Allow-pinned-user-tasks-to-be-awakened-to-the-.patch rename to kernel/patches-4.19.x-rt/0231-sched-Allow-pinned-user-tasks-to-be-awakened-to-the-.patch index 4d22d7372..939c76a05 100644 --- a/kernel/patches-4.19.x-rt/0233-sched-Allow-pinned-user-tasks-to-be-awakened-to-the-.patch +++ b/kernel/patches-4.19.x-rt/0231-sched-Allow-pinned-user-tasks-to-be-awakened-to-the-.patch @@ -1,7 +1,8 @@ +From 579810b4daa730ec872b6c1e8940d5ab6625bb44 Mon Sep 17 00:00:00 2001 From: Mike Galbraith Date: Sun, 19 Aug 2018 08:28:35 +0200 -Subject: [PATCH] sched: Allow pinned user tasks to be awakened to the CPU they - pinned +Subject: [PATCH 231/269] sched: Allow pinned user tasks to be awakened to the + CPU they pinned Since commit 7af443ee16976 ("sched/core: Require cpu_active() in select_task_rq(), for user tasks") select_fallback_rq() will BUG() if @@ -16,12 +17,14 @@ Cc: 
stable-rt@vger.kernel.org Signed-off-by: Mike Galbraith Signed-off-by: Sebastian Andrzej Siewior --- - kernel/sched/core.c | 2 +- + kernel/sched/core.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) +diff --git a/kernel/sched/core.c b/kernel/sched/core.c +index 1cd1abc45097..960271e088ab 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c -@@ -902,7 +902,7 @@ static inline bool is_cpu_allowed(struct +@@ -904,7 +904,7 @@ static inline bool is_cpu_allowed(struct task_struct *p, int cpu) if (!cpumask_test_cpu(cpu, p->cpus_ptr)) return false; @@ -30,3 +33,6 @@ Signed-off-by: Sebastian Andrzej Siewior return cpu_online(cpu); return cpu_active(cpu); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0234-hotplug-duct-tape-RT-rwlock-usage-for-non-RT.patch b/kernel/patches-4.19.x-rt/0232-hotplug-duct-tape-RT-rwlock-usage-for-non-RT.patch similarity index 80% rename from kernel/patches-4.19.x-rt/0234-hotplug-duct-tape-RT-rwlock-usage-for-non-RT.patch rename to kernel/patches-4.19.x-rt/0232-hotplug-duct-tape-RT-rwlock-usage-for-non-RT.patch index fa9c3485b..cb313022a 100644 --- a/kernel/patches-4.19.x-rt/0234-hotplug-duct-tape-RT-rwlock-usage-for-non-RT.patch +++ b/kernel/patches-4.19.x-rt/0232-hotplug-duct-tape-RT-rwlock-usage-for-non-RT.patch @@ -1,6 +1,7 @@ +From e8484e1a8250b915f8da072e0693769465f9e956 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Fri, 4 Aug 2017 18:31:00 +0200 -Subject: [PATCH] hotplug: duct-tape RT-rwlock usage for non-RT +Subject: [PATCH 232/269] hotplug: duct-tape RT-rwlock usage for non-RT This type is only available on -RT. We need to craft something for non-RT. Since the only migrate_disable() user is -RT only, there is no @@ -8,12 +9,14 @@ damage. 
Signed-off-by: Sebastian Andrzej Siewior --- - kernel/cpu.c | 14 +++++++++++++- + kernel/cpu.c | 14 +++++++++++++- 1 file changed, 13 insertions(+), 1 deletion(-) +diff --git a/kernel/cpu.c b/kernel/cpu.c +index 3340c4f873ad..ad2d23d9fee2 100644 --- a/kernel/cpu.c +++ b/kernel/cpu.c -@@ -75,7 +75,7 @@ static DEFINE_PER_CPU(struct cpuhp_cpu_s +@@ -75,7 +75,7 @@ static DEFINE_PER_CPU(struct cpuhp_cpu_state, cpuhp_state) = { .fail = CPUHP_INVALID, }; @@ -54,7 +57,7 @@ Signed-off-by: Sebastian Andrzej Siewior } DEFINE_STATIC_PERCPU_RWSEM(cpu_hotplug_lock); -@@ -861,7 +865,9 @@ static int take_cpu_down(void *_param) +@@ -886,7 +890,9 @@ static int take_cpu_down(void *_param) static int takedown_cpu(unsigned int cpu) { @@ -64,7 +67,7 @@ Signed-off-by: Sebastian Andrzej Siewior struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu); int err; -@@ -874,14 +880,18 @@ static int takedown_cpu(unsigned int cpu +@@ -899,14 +905,18 @@ static int takedown_cpu(unsigned int cpu) */ irq_lock_sparse(); @@ -83,7 +86,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* CPU refused to die */ irq_unlock_sparse(); /* Unpark the hotplug thread so we can rollback there */ -@@ -900,7 +910,9 @@ static int takedown_cpu(unsigned int cpu +@@ -925,7 +935,9 @@ static int takedown_cpu(unsigned int cpu) wait_for_ap_thread(st, false); BUG_ON(st->state != CPUHP_AP_IDLE_DEAD); @@ -93,3 +96,6 @@ Signed-off-by: Sebastian Andrzej Siewior /* Interrupts are moved away from the dying cpu, reenable alloc/free */ irq_unlock_sparse(); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0235-upstream-net-rt-remove-preemption-disabling-in-netif_rx.patch b/kernel/patches-4.19.x-rt/0233-net-Remove-preemption-disabling-in-netif_rx.patch similarity index 81% rename from kernel/patches-4.19.x-rt/0235-upstream-net-rt-remove-preemption-disabling-in-netif_rx.patch rename to kernel/patches-4.19.x-rt/0233-net-Remove-preemption-disabling-in-netif_rx.patch index 8a74934c0..692cf668a 100644 --- 
a/kernel/patches-4.19.x-rt/0235-upstream-net-rt-remove-preemption-disabling-in-netif_rx.patch +++ b/kernel/patches-4.19.x-rt/0233-net-Remove-preemption-disabling-in-netif_rx.patch @@ -1,6 +1,7 @@ -Subject: net: Remove preemption disabling in netif_rx() +From d11da9d22d701a9a3e48a6ce8b2e94bfb3c922c2 Mon Sep 17 00:00:00 2001 From: Priyanka Jain Date: Thu, 17 May 2012 09:35:11 +0530 +Subject: [PATCH 233/269] net: Remove preemption disabling in netif_rx() 1)enqueue_to_backlog() (called from netif_rx) should be bind to a particluar CPU. This can be achieved by @@ -30,14 +31,14 @@ Link: http://lkml.kernel.org/r/1337227511-2271-1-git-send-email-Priyanka.Jain@fr Signed-off-by: Thomas Gleixner --- - Testing: Tested successfully on p4080ds(8-core SMP system) - - net/core/dev.c | 8 ++++---- + net/core/dev.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) +diff --git a/net/core/dev.c b/net/core/dev.c +index 50fe1e3ee26d..0c7238cc6ae2 100644 --- a/net/core/dev.c +++ b/net/core/dev.c -@@ -4470,7 +4470,7 @@ static int netif_rx_internal(struct sk_b +@@ -4484,7 +4484,7 @@ static int netif_rx_internal(struct sk_buff *skb) struct rps_dev_flow voidflow, *rflow = &voidflow; int cpu; @@ -46,7 +47,7 @@ Signed-off-by: Thomas Gleixner rcu_read_lock(); cpu = get_rps_cpu(skb->dev, skb, &rflow); -@@ -4480,14 +4480,14 @@ static int netif_rx_internal(struct sk_b +@@ -4494,14 +4494,14 @@ static int netif_rx_internal(struct sk_buff *skb) ret = enqueue_to_backlog(skb, cpu, &rflow->last_qtail); rcu_read_unlock(); @@ -64,3 +65,6 @@ Signed-off-by: Thomas Gleixner } return ret; } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0236-net-another-local-irq-disable-alloc-atomic-headache.patch b/kernel/patches-4.19.x-rt/0234-net-Another-local_irq_disable-kmalloc-headache.patch similarity index 75% rename from kernel/patches-4.19.x-rt/0236-net-another-local-irq-disable-alloc-atomic-headache.patch rename to kernel/patches-4.19.x-rt/0234-net-Another-local_irq_disable-kmalloc-headache.patch 
index 6bb46fc3f..ae355fe1e 100644 --- a/kernel/patches-4.19.x-rt/0236-net-another-local-irq-disable-alloc-atomic-headache.patch +++ b/kernel/patches-4.19.x-rt/0234-net-Another-local_irq_disable-kmalloc-headache.patch @@ -1,14 +1,17 @@ +From c82cf443e33d996e2ec0d6ea914dbb03c9540f12 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Wed, 26 Sep 2012 16:21:08 +0200 -Subject: net: Another local_irq_disable/kmalloc headache +Subject: [PATCH 234/269] net: Another local_irq_disable/kmalloc headache Replace it by a local lock. Though that's pretty inefficient :( Signed-off-by: Thomas Gleixner --- - net/core/skbuff.c | 10 ++++++---- + net/core/skbuff.c | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) +diff --git a/net/core/skbuff.c b/net/core/skbuff.c +index 8b5768113acd..f89d5388ea07 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -63,6 +63,7 @@ @@ -27,7 +30,7 @@ Signed-off-by: Thomas Gleixner static void *__netdev_alloc_frag(unsigned int fragsz, gfp_t gfp_mask) { -@@ -337,10 +339,10 @@ static void *__netdev_alloc_frag(unsigne +@@ -337,10 +339,10 @@ static void *__netdev_alloc_frag(unsigned int fragsz, gfp_t gfp_mask) unsigned long flags; void *data; @@ -40,7 +43,7 @@ Signed-off-by: Thomas Gleixner return data; } -@@ -412,13 +414,13 @@ struct sk_buff *__netdev_alloc_skb(struc +@@ -412,13 +414,13 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int len, if (sk_memalloc_socks()) gfp_mask |= __GFP_MEMALLOC; @@ -56,3 +59,6 @@ Signed-off-by: Thomas Gleixner if (unlikely(!data)) return NULL; +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0237-net-core-protect-users-of-napi_alloc_cache-against-r.patch b/kernel/patches-4.19.x-rt/0235-net-core-protect-users-of-napi_alloc_cache-against-r.patch similarity index 83% rename from kernel/patches-4.19.x-rt/0237-net-core-protect-users-of-napi_alloc_cache-against-r.patch rename to kernel/patches-4.19.x-rt/0235-net-core-protect-users-of-napi_alloc_cache-against-r.patch index 
64cbb8557..eb1381cd3 100644 --- a/kernel/patches-4.19.x-rt/0237-net-core-protect-users-of-napi_alloc_cache-against-r.patch +++ b/kernel/patches-4.19.x-rt/0235-net-core-protect-users-of-napi_alloc_cache-against-r.patch @@ -1,6 +1,7 @@ +From aee85b9563699974c6712aa097ca316a0ad1949b Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Fri, 15 Jan 2016 16:33:34 +0100 -Subject: net/core: protect users of napi_alloc_cache against +Subject: [PATCH 235/269] net/core: protect users of napi_alloc_cache against reentrance On -RT the code running in BH can not be moved to another CPU so CPU @@ -12,9 +13,11 @@ This patch ensures that each user of napi_alloc_cache uses a local lock. Cc: stable-rt@vger.kernel.org Signed-off-by: Sebastian Andrzej Siewior --- - net/core/skbuff.c | 25 +++++++++++++++++++------ + net/core/skbuff.c | 25 +++++++++++++++++++------ 1 file changed, 19 insertions(+), 6 deletions(-) +diff --git a/net/core/skbuff.c b/net/core/skbuff.c +index f89d5388ea07..e20b1f25a273 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -332,6 +332,7 @@ struct napi_alloc_cache { @@ -53,7 +56,7 @@ Signed-off-by: Sebastian Andrzej Siewior len += NET_SKB_PAD + NET_IP_ALIGN; -@@ -481,7 +487,10 @@ struct sk_buff *__napi_alloc_skb(struct +@@ -481,7 +487,10 @@ struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len, if (sk_memalloc_socks()) gfp_mask |= __GFP_MEMALLOC; @@ -64,7 +67,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (unlikely(!data)) return NULL; -@@ -492,7 +501,7 @@ struct sk_buff *__napi_alloc_skb(struct +@@ -492,7 +501,7 @@ struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len, } /* use OR instead of assignment to avoid clearing of bits in mask */ @@ -73,7 +76,7 @@ Signed-off-by: Sebastian Andrzej Siewior skb->pfmemalloc = 1; skb->head_frag = 1; -@@ -724,23 +733,26 @@ void __consume_stateless_skb(struct sk_b +@@ -724,23 +733,26 @@ void __consume_stateless_skb(struct sk_buff *skb) void __kfree_skb_flush(void) { 
@@ -102,7 +105,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* record skb to CPU local list */ nc->skb_cache[nc->skb_count++] = skb; -@@ -755,6 +767,7 @@ static inline void _kfree_skb_defer(stru +@@ -755,6 +767,7 @@ static inline void _kfree_skb_defer(struct sk_buff *skb) nc->skb_cache); nc->skb_count = 0; } @@ -110,3 +113,6 @@ Signed-off-by: Sebastian Andrzej Siewior } void __kfree_skb_defer(struct sk_buff *skb) { +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0238-net-fix-iptable-xt-write-recseq-begin-rt-fallout.patch b/kernel/patches-4.19.x-rt/0236-net-netfilter-Serialize-xt_write_recseq-sections-on-.patch similarity index 72% rename from kernel/patches-4.19.x-rt/0238-net-fix-iptable-xt-write-recseq-begin-rt-fallout.patch rename to kernel/patches-4.19.x-rt/0236-net-netfilter-Serialize-xt_write_recseq-sections-on-.patch index c8ac66972..9aac89aa8 100644 --- a/kernel/patches-4.19.x-rt/0238-net-fix-iptable-xt-write-recseq-begin-rt-fallout.patch +++ b/kernel/patches-4.19.x-rt/0236-net-netfilter-Serialize-xt_write_recseq-sections-on-.patch @@ -1,6 +1,8 @@ -Subject: net: netfilter: Serialize xt_write_recseq sections on RT +From 0cabd4b2f5b341ccb079e8a59ec58999bd69ed9b Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Sun, 28 Oct 2012 11:18:08 +0100 +Subject: [PATCH 236/269] net: netfilter: Serialize xt_write_recseq sections on + RT The netfilter code relies only on the implicit semantics of local_bh_disable() for serializing wt_write_recseq sections. RT breaks @@ -8,12 +10,13 @@ that and needs explicit serialization here. 
Reported-by: Peter LaDow Signed-off-by: Thomas Gleixner - --- - include/linux/netfilter/x_tables.h | 7 +++++++ - net/netfilter/core.c | 6 ++++++ + include/linux/netfilter/x_tables.h | 7 +++++++ + net/netfilter/core.c | 6 ++++++ 2 files changed, 13 insertions(+) +diff --git a/include/linux/netfilter/x_tables.h b/include/linux/netfilter/x_tables.h +index 9077b3ebea08..1710f2aff350 100644 --- a/include/linux/netfilter/x_tables.h +++ b/include/linux/netfilter/x_tables.h @@ -6,6 +6,7 @@ @@ -24,7 +27,7 @@ Signed-off-by: Thomas Gleixner #include /* Test a struct->invflags and a boolean for inequality */ -@@ -345,6 +346,8 @@ void xt_free_table_info(struct xt_table_ +@@ -345,6 +346,8 @@ void xt_free_table_info(struct xt_table_info *info); */ DECLARE_PER_CPU(seqcount_t, xt_recseq); @@ -33,7 +36,7 @@ Signed-off-by: Thomas Gleixner /* xt_tee_enabled - true if x_tables needs to handle reentrancy * * Enabled if current ip(6)tables ruleset has at least one -j TEE rule. -@@ -365,6 +368,9 @@ static inline unsigned int xt_write_recs +@@ -365,6 +368,9 @@ static inline unsigned int xt_write_recseq_begin(void) { unsigned int addend; @@ -43,7 +46,7 @@ Signed-off-by: Thomas Gleixner /* * Low order bit of sequence is set if we already * called xt_write_recseq_begin(). 
-@@ -395,6 +401,7 @@ static inline void xt_write_recseq_end(u +@@ -395,6 +401,7 @@ static inline void xt_write_recseq_end(unsigned int addend) /* this is kind of a write_seqcount_end(), but addend is 0 or 1 */ smp_wmb(); __this_cpu_add(xt_recseq.sequence, addend); @@ -51,6 +54,8 @@ Signed-off-by: Thomas Gleixner } /* +diff --git a/net/netfilter/core.c b/net/netfilter/core.c +index dc240cb47ddf..9bd8f062ebc1 100644 --- a/net/netfilter/core.c +++ b/net/netfilter/core.c @@ -20,6 +20,7 @@ @@ -73,3 +78,6 @@ Signed-off-by: Thomas Gleixner const struct nf_ipv6_ops __rcu *nf_ipv6_ops __read_mostly; EXPORT_SYMBOL_GPL(nf_ipv6_ops); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0239-net-make-devnet_rename_seq-a-mutex.patch b/kernel/patches-4.19.x-rt/0237-net-Add-a-mutex-around-devnet_rename_seq.patch similarity index 76% rename from kernel/patches-4.19.x-rt/0239-net-make-devnet_rename_seq-a-mutex.patch rename to kernel/patches-4.19.x-rt/0237-net-Add-a-mutex-around-devnet_rename_seq.patch index d237db4d0..520b21947 100644 --- a/kernel/patches-4.19.x-rt/0239-net-make-devnet_rename_seq-a-mutex.patch +++ b/kernel/patches-4.19.x-rt/0237-net-Add-a-mutex-around-devnet_rename_seq.patch @@ -1,6 +1,7 @@ +From 7beec1c3857d0010fff01b209cbb4fa4c6674c1b Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Wed, 20 Mar 2013 18:06:20 +0100 -Subject: net: Add a mutex around devnet_rename_seq +Subject: [PATCH 237/269] net: Add a mutex around devnet_rename_seq On RT write_seqcount_begin() disables preemption and device_rename() allocates memory with GFP_KERNEL and grabs later the sysfs_mutex @@ -16,12 +17,14 @@ it when it detects a writer in progress. 
This keeps the normal case Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Thomas Gleixner --- - net/core/dev.c | 34 ++++++++++++++++++++-------------- + net/core/dev.c | 34 ++++++++++++++++++++-------------- 1 file changed, 20 insertions(+), 14 deletions(-) +diff --git a/net/core/dev.c b/net/core/dev.c +index 0c7238cc6ae2..848937d85a41 100644 --- a/net/core/dev.c +++ b/net/core/dev.c -@@ -195,6 +195,7 @@ static unsigned int napi_gen_id = NR_CPU +@@ -195,6 +195,7 @@ static unsigned int napi_gen_id = NR_CPUS; static DEFINE_READ_MOSTLY_HASHTABLE(napi_hash, 8); static seqcount_t devnet_rename_seq; @@ -29,7 +32,7 @@ Signed-off-by: Thomas Gleixner static inline void dev_base_seq_inc(struct net *net) { -@@ -920,7 +921,8 @@ int netdev_get_name(struct net *net, cha +@@ -920,7 +921,8 @@ int netdev_get_name(struct net *net, char *name, int ifindex) strcpy(name, dev->name); rcu_read_unlock(); if (read_seqcount_retry(&devnet_rename_seq, seq)) { @@ -39,8 +42,8 @@ Signed-off-by: Thomas Gleixner goto retry; } -@@ -1183,20 +1185,17 @@ int dev_change_name(struct net_device *d - if (dev->flags & IFF_UP) +@@ -1197,20 +1199,17 @@ int dev_change_name(struct net_device *dev, const char *newname) + likely(!(dev->priv_flags & IFF_LIVE_RENAME_OK))) return -EBUSY; - write_seqcount_begin(&devnet_rename_seq); @@ -66,7 +69,7 @@ Signed-off-by: Thomas Gleixner if (oldname[0] && !strchr(oldname, '%')) netdev_info(dev, "renamed from %s\n", oldname); -@@ -1209,11 +1208,12 @@ int dev_change_name(struct net_device *d +@@ -1223,11 +1222,12 @@ int dev_change_name(struct net_device *dev, const char *newname) if (ret) { memcpy(dev->name, oldname, IFNAMSIZ); dev->name_assign_type = old_assign_type; @@ -82,7 +85,7 @@ Signed-off-by: Thomas Gleixner netdev_adjacent_rename_links(dev, oldname); -@@ -1234,7 +1234,8 @@ int dev_change_name(struct net_device *d +@@ -1248,7 +1248,8 @@ int dev_change_name(struct net_device *dev, const char *newname) /* err >= 0 after dev_alloc_name() or stores the first 
errno */ if (err >= 0) { err = ret; @@ -92,7 +95,7 @@ Signed-off-by: Thomas Gleixner memcpy(dev->name, oldname, IFNAMSIZ); memcpy(oldname, newname, IFNAMSIZ); dev->name_assign_type = old_assign_type; -@@ -1247,6 +1248,11 @@ int dev_change_name(struct net_device *d +@@ -1261,6 +1262,11 @@ int dev_change_name(struct net_device *dev, const char *newname) } return err; @@ -104,3 +107,6 @@ Signed-off-by: Thomas Gleixner } /** +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0240-lockdep-selftest-only-do-hardirq-context-test-for-raw-spinlock.patch b/kernel/patches-4.19.x-rt/0238-lockdep-selftest-Only-do-hardirq-context-test-for-ra.patch similarity index 86% rename from kernel/patches-4.19.x-rt/0240-lockdep-selftest-only-do-hardirq-context-test-for-raw-spinlock.patch rename to kernel/patches-4.19.x-rt/0238-lockdep-selftest-Only-do-hardirq-context-test-for-ra.patch index 6162cd2da..1b9d60d8e 100644 --- a/kernel/patches-4.19.x-rt/0240-lockdep-selftest-only-do-hardirq-context-test-for-raw-spinlock.patch +++ b/kernel/patches-4.19.x-rt/0238-lockdep-selftest-Only-do-hardirq-context-test-for-ra.patch @@ -1,8 +1,8 @@ -Subject: lockdep: selftest: Only do hardirq context test for raw spinlock -From: Yong Zhang -Date: Mon, 16 Apr 2012 15:01:56 +0800 - +From fdee0604e425474b4b3ba2935764f5b995764ba4 Mon Sep 17 00:00:00 2001 From: Yong Zhang +Date: Mon, 16 Apr 2012 15:01:56 +0800 +Subject: [PATCH 238/269] lockdep: selftest: Only do hardirq context test for + raw spinlock On -rt there is no softirq context any more and rwlock is sleepable, disable softirq context test and rwlock+irq test. 
@@ -12,9 +12,11 @@ Cc: Yong Zhang Link: http://lkml.kernel.org/r/1334559716-18447-3-git-send-email-yong.zhang0@gmail.com Signed-off-by: Thomas Gleixner --- - lib/locking-selftest.c | 23 +++++++++++++++++++++++ + lib/locking-selftest.c | 23 +++++++++++++++++++++++ 1 file changed, 23 insertions(+) +diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c +index 1e1bbf171eca..5cdf3809905e 100644 --- a/lib/locking-selftest.c +++ b/lib/locking-selftest.c @@ -2057,6 +2057,7 @@ void locking_selftest(void) @@ -54,3 +56,6 @@ Signed-off-by: Thomas Gleixner ww_tests(); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0241-lockdep-selftest-fix-warnings-due-to-missing-PREEMPT.patch b/kernel/patches-4.19.x-rt/0239-lockdep-selftest-fix-warnings-due-to-missing-PREEMPT.patch similarity index 75% rename from kernel/patches-4.19.x-rt/0241-lockdep-selftest-fix-warnings-due-to-missing-PREEMPT.patch rename to kernel/patches-4.19.x-rt/0239-lockdep-selftest-fix-warnings-due-to-missing-PREEMPT.patch index 01cc757dd..9839b5a94 100644 --- a/kernel/patches-4.19.x-rt/0241-lockdep-selftest-fix-warnings-due-to-missing-PREEMPT.patch +++ b/kernel/patches-4.19.x-rt/0239-lockdep-selftest-fix-warnings-due-to-missing-PREEMPT.patch @@ -1,6 +1,8 @@ +From 726d6192b03ebe0886b0592a3cb6e071b84f9580 Mon Sep 17 00:00:00 2001 From: Josh Cartwright Date: Wed, 28 Jan 2015 13:08:45 -0600 -Subject: lockdep: selftest: fix warnings due to missing PREEMPT_RT conditionals +Subject: [PATCH 239/269] lockdep: selftest: fix warnings due to missing + PREEMPT_RT conditionals "lockdep: Selftest: Only do hardirq context test for raw spinlock" disabled the execution of certain tests with PREEMPT_RT_FULL, but did @@ -23,9 +25,11 @@ Signed-off-by: Xander Huff Acked-by: Gratian Crisan Signed-off-by: Sebastian Andrzej Siewior --- - lib/locking-selftest.c | 27 +++++++++++++++++++++++++++ + lib/locking-selftest.c | 27 +++++++++++++++++++++++++++ 1 file changed, 27 insertions(+) +diff --git a/lib/locking-selftest.c 
b/lib/locking-selftest.c +index 5cdf3809905e..32db9532ddd4 100644 --- a/lib/locking-selftest.c +++ b/lib/locking-selftest.c @@ -742,6 +742,8 @@ GENERATE_TESTCASE(init_held_rtmutex); @@ -37,7 +41,7 @@ Signed-off-by: Sebastian Andrzej Siewior #include "locking-selftest-rlock-hardirq.h" GENERATE_PERMUTATIONS_2_EVENTS(irqsafe1_hard_rlock) -@@ -757,9 +759,12 @@ GENERATE_PERMUTATIONS_2_EVENTS(irqsafe1_ +@@ -757,9 +759,12 @@ GENERATE_PERMUTATIONS_2_EVENTS(irqsafe1_soft_rlock) #include "locking-selftest-wlock-softirq.h" GENERATE_PERMUTATIONS_2_EVENTS(irqsafe1_soft_wlock) @@ -50,7 +54,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * Enabling hardirqs with a softirq-safe lock held: */ -@@ -792,6 +797,8 @@ GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2A +@@ -792,6 +797,8 @@ GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2A_rlock) #undef E1 #undef E2 @@ -59,7 +63,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * Enabling irqs with an irq-safe lock held: */ -@@ -815,6 +822,8 @@ GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2A +@@ -815,6 +822,8 @@ GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2A_rlock) #include "locking-selftest-spin-hardirq.h" GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2B_hard_spin) @@ -68,7 +72,7 @@ Signed-off-by: Sebastian Andrzej Siewior #include "locking-selftest-rlock-hardirq.h" GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2B_hard_rlock) -@@ -830,6 +839,8 @@ GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2B +@@ -830,6 +839,8 @@ GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2B_soft_rlock) #include "locking-selftest-wlock-softirq.h" GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2B_soft_wlock) @@ -77,7 +81,7 @@ Signed-off-by: Sebastian Andrzej Siewior #undef E1 #undef E2 -@@ -861,6 +872,8 @@ GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2B +@@ -861,6 +872,8 @@ GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2B_soft_wlock) #include "locking-selftest-spin-hardirq.h" GENERATE_PERMUTATIONS_3_EVENTS(irqsafe3_hard_spin) @@ -86,7 +90,7 @@ Signed-off-by: Sebastian Andrzej Siewior #include "locking-selftest-rlock-hardirq.h" 
GENERATE_PERMUTATIONS_3_EVENTS(irqsafe3_hard_rlock) -@@ -876,6 +889,8 @@ GENERATE_PERMUTATIONS_3_EVENTS(irqsafe3_ +@@ -876,6 +889,8 @@ GENERATE_PERMUTATIONS_3_EVENTS(irqsafe3_soft_rlock) #include "locking-selftest-wlock-softirq.h" GENERATE_PERMUTATIONS_3_EVENTS(irqsafe3_soft_wlock) @@ -95,7 +99,7 @@ Signed-off-by: Sebastian Andrzej Siewior #undef E1 #undef E2 #undef E3 -@@ -909,6 +924,8 @@ GENERATE_PERMUTATIONS_3_EVENTS(irqsafe3_ +@@ -909,6 +924,8 @@ GENERATE_PERMUTATIONS_3_EVENTS(irqsafe3_soft_wlock) #include "locking-selftest-spin-hardirq.h" GENERATE_PERMUTATIONS_3_EVENTS(irqsafe4_hard_spin) @@ -104,7 +108,7 @@ Signed-off-by: Sebastian Andrzej Siewior #include "locking-selftest-rlock-hardirq.h" GENERATE_PERMUTATIONS_3_EVENTS(irqsafe4_hard_rlock) -@@ -924,10 +941,14 @@ GENERATE_PERMUTATIONS_3_EVENTS(irqsafe4_ +@@ -924,10 +941,14 @@ GENERATE_PERMUTATIONS_3_EVENTS(irqsafe4_soft_rlock) #include "locking-selftest-wlock-softirq.h" GENERATE_PERMUTATIONS_3_EVENTS(irqsafe4_soft_wlock) @@ -119,7 +123,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * read-lock / write-lock irq inversion. * -@@ -990,6 +1011,10 @@ GENERATE_PERMUTATIONS_3_EVENTS(irq_inver +@@ -990,6 +1011,10 @@ GENERATE_PERMUTATIONS_3_EVENTS(irq_inversion_soft_wlock) #undef E2 #undef E3 @@ -130,7 +134,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * read-lock / write-lock recursion that is actually safe. */ -@@ -1028,6 +1053,8 @@ GENERATE_PERMUTATIONS_3_EVENTS(irq_read_ +@@ -1028,6 +1053,8 @@ GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion_soft) #undef E2 #undef E3 @@ -139,3 +143,6 @@ Signed-off-by: Sebastian Andrzej Siewior /* * read-lock / write-lock recursion that is unsafe. 
*/ +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0242-preempt-lazy-support.patch b/kernel/patches-4.19.x-rt/0240-sched-Add-support-for-lazy-preemption.patch similarity index 79% rename from kernel/patches-4.19.x-rt/0242-preempt-lazy-support.patch rename to kernel/patches-4.19.x-rt/0240-sched-Add-support-for-lazy-preemption.patch index a8353ec2f..311164b31 100644 --- a/kernel/patches-4.19.x-rt/0242-preempt-lazy-support.patch +++ b/kernel/patches-4.19.x-rt/0240-sched-Add-support-for-lazy-preemption.patch @@ -1,6 +1,7 @@ -Subject: sched: Add support for lazy preemption +From ccc79764a3c2281d5d0f7e15ba4628bceabd7a37 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Fri, 26 Oct 2012 18:50:54 +0100 +Subject: [PATCH 240/269] sched: Add support for lazy preemption It has become an obsession to mitigate the determinism vs. throughput loss of RT. Looking at the mainline semantics of preemption points @@ -52,21 +53,23 @@ performance. Signed-off-by: Thomas Gleixner --- - include/linux/preempt.h | 35 +++++++++++++++++- - include/linux/sched.h | 38 +++++++++++++++++++ - include/linux/thread_info.h | 12 +++++- - include/linux/trace_events.h | 1 - kernel/Kconfig.preempt | 6 +++ - kernel/cpu.c | 2 + - kernel/sched/core.c | 83 +++++++++++++++++++++++++++++++++++++++++-- - kernel/sched/fair.c | 16 ++++---- - kernel/sched/features.h | 3 + - kernel/sched/sched.h | 9 ++++ - kernel/trace/trace.c | 36 ++++++++++-------- - kernel/trace/trace.h | 2 + - kernel/trace/trace_output.c | 14 ++++++- + include/linux/preempt.h | 35 ++++++++++++++- + include/linux/sched.h | 38 +++++++++++++++++ + include/linux/thread_info.h | 12 +++++- + include/linux/trace_events.h | 1 + + kernel/Kconfig.preempt | 6 +++ + kernel/cpu.c | 2 + + kernel/sched/core.c | 83 +++++++++++++++++++++++++++++++++++- + kernel/sched/fair.c | 16 +++---- + kernel/sched/features.h | 3 ++ + kernel/sched/sched.h | 9 ++++ + kernel/trace/trace.c | 36 +++++++++------- + kernel/trace/trace.h | 2 + + kernel/trace/trace_output.c | 
14 +++++- 13 files changed, 228 insertions(+), 29 deletions(-) +diff --git a/include/linux/preempt.h b/include/linux/preempt.h +index ed8413e7140f..9c74a019bf57 100644 --- a/include/linux/preempt.h +++ b/include/linux/preempt.h @@ -180,6 +180,20 @@ extern void preempt_count_sub(int val); @@ -139,9 +142,11 @@ Signed-off-by: Thomas Gleixner set_preempt_need_resched(); \ } while (0) +diff --git a/include/linux/sched.h b/include/linux/sched.h +index 0445d5c7ced0..dd95bd64504e 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h -@@ -1725,6 +1725,44 @@ static inline int test_tsk_need_resched( +@@ -1725,6 +1725,44 @@ static inline int test_tsk_need_resched(struct task_struct *tsk) return unlikely(test_tsk_thread_flag(tsk,TIF_NEED_RESCHED)); } @@ -186,9 +191,11 @@ Signed-off-by: Thomas Gleixner static inline bool __task_is_stopped_or_traced(struct task_struct *task) { if (task->state & (__TASK_STOPPED | __TASK_TRACED)) +diff --git a/include/linux/thread_info.h b/include/linux/thread_info.h +index 8d8821b3689a..d3fcab20d2a3 100644 --- a/include/linux/thread_info.h +++ b/include/linux/thread_info.h -@@ -97,7 +97,17 @@ static inline int test_ti_thread_flag(st +@@ -97,7 +97,17 @@ static inline int test_ti_thread_flag(struct thread_info *ti, int flag) #define test_thread_flag(flag) \ test_ti_thread_flag(current_thread_info(), flag) @@ -207,6 +214,8 @@ Signed-off-by: Thomas Gleixner #ifndef CONFIG_HAVE_ARCH_WITHIN_STACK_FRAMES static inline int arch_within_stack_frames(const void * const stack, +diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h +index 0403d9696944..c20237c5ab66 100644 --- a/include/linux/trace_events.h +++ b/include/linux/trace_events.h @@ -64,6 +64,7 @@ struct trace_entry { @@ -217,6 +226,8 @@ Signed-off-by: Thomas Gleixner }; #define TRACE_EVENT_TYPE_MAX \ +diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt +index 907d72b3ba95..306567f72a3e 100644 --- a/kernel/Kconfig.preempt +++ b/kernel/Kconfig.preempt @@ -6,6 
+6,12 @@ config PREEMPT_RT_BASE @@ -232,6 +243,8 @@ Signed-off-by: Thomas Gleixner choice prompt "Preemption Model" default PREEMPT_NONE +diff --git a/kernel/cpu.c b/kernel/cpu.c +index ad2d23d9fee2..46118ba36e3e 100644 --- a/kernel/cpu.c +++ b/kernel/cpu.c @@ -304,11 +304,13 @@ void pin_current_cpu(void) @@ -248,9 +261,11 @@ Signed-off-by: Thomas Gleixner if (cpu != smp_processor_id()) { __read_rt_unlock(cpuhp_pin); goto again; +diff --git a/kernel/sched/core.c b/kernel/sched/core.c +index 960271e088ab..6d06dd682cd5 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c -@@ -491,6 +491,48 @@ void resched_curr(struct rq *rq) +@@ -493,6 +493,48 @@ void resched_curr(struct rq *rq) trace_sched_wake_idle_without_ipi(cpu); } @@ -299,7 +314,7 @@ Signed-off-by: Thomas Gleixner void resched_cpu(int cpu) { struct rq *rq = cpu_rq(cpu); -@@ -2403,6 +2445,9 @@ int sched_fork(unsigned long clone_flags +@@ -2405,6 +2447,9 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p) p->on_cpu = 0; #endif init_task_preempt_count(p); @@ -309,7 +324,7 @@ Signed-off-by: Thomas Gleixner #ifdef CONFIG_SMP plist_node_init(&p->pushable_tasks, MAX_PRIO); RB_CLEAR_NODE(&p->pushable_dl_tasks); -@@ -3470,6 +3515,7 @@ static void __sched notrace __schedule(b +@@ -3472,6 +3517,7 @@ static void __sched notrace __schedule(bool preempt) next = pick_next_task(rq, prev, &rf); clear_tsk_need_resched(prev); @@ -317,7 +332,7 @@ Signed-off-by: Thomas Gleixner clear_preempt_need_resched(); if (likely(prev != next)) { -@@ -3650,6 +3696,30 @@ static void __sched notrace preempt_sche +@@ -3652,6 +3698,30 @@ static void __sched notrace preempt_schedule_common(void) } while (need_resched()); } @@ -348,7 +363,7 @@ Signed-off-by: Thomas Gleixner #ifdef CONFIG_PREEMPT /* * this is the entry point to schedule() from in-kernel preemption -@@ -3664,7 +3734,8 @@ asmlinkage __visible void __sched notrac +@@ -3666,7 +3736,8 @@ asmlinkage __visible void __sched notrace preempt_schedule(void) */ if 
(likely(!preemptible())) return; @@ -358,7 +373,7 @@ Signed-off-by: Thomas Gleixner preempt_schedule_common(); } NOKPROBE_SYMBOL(preempt_schedule); -@@ -3691,6 +3762,9 @@ asmlinkage __visible void __sched notrac +@@ -3693,6 +3764,9 @@ asmlinkage __visible void __sched notrace preempt_schedule_notrace(void) if (likely(!preemptible())) return; @@ -368,7 +383,7 @@ Signed-off-by: Thomas Gleixner do { /* * Because the function tracer can trace preempt_count_sub() -@@ -5459,7 +5533,9 @@ void init_idle(struct task_struct *idle, +@@ -5461,7 +5535,9 @@ void init_idle(struct task_struct *idle, int cpu) /* Set the preempt count _outside_ the spinlocks! */ init_idle_preempt_count(idle, cpu); @@ -379,7 +394,7 @@ Signed-off-by: Thomas Gleixner /* * The idle tasks have their own, simple scheduling class: */ -@@ -7181,6 +7257,7 @@ void migrate_disable(void) +@@ -7183,6 +7259,7 @@ void migrate_disable(void) } preempt_disable(); @@ -387,7 +402,7 @@ Signed-off-by: Thomas Gleixner pin_current_cpu(); migrate_disable_update_cpus_allowed(p); -@@ -7248,6 +7325,7 @@ void migrate_enable(void) +@@ -7250,6 +7327,7 @@ void migrate_enable(void) arg.dest_cpu = dest_cpu; unpin_current_cpu(); @@ -395,7 +410,7 @@ Signed-off-by: Thomas Gleixner preempt_enable(); stop_one_cpu(task_cpu(p), migration_cpu_stop, &arg); tlb_migrate_finish(p->mm); -@@ -7256,6 +7334,7 @@ void migrate_enable(void) +@@ -7258,6 +7336,7 @@ void migrate_enable(void) } } unpin_current_cpu(); @@ -403,9 +418,11 @@ Signed-off-by: Thomas Gleixner preempt_enable(); } EXPORT_SYMBOL(migrate_enable); +diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c +index c17d63b06026..3b29a0b6748a 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c -@@ -4017,7 +4017,7 @@ check_preempt_tick(struct cfs_rq *cfs_rq +@@ -4017,7 +4017,7 @@ check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr) ideal_runtime = sched_slice(cfs_rq, curr); delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime; if (delta_exec > 
ideal_runtime) { @@ -414,7 +431,7 @@ Signed-off-by: Thomas Gleixner /* * The current task ran long enough, ensure it doesn't get * re-elected due to buddy favours. -@@ -4041,7 +4041,7 @@ check_preempt_tick(struct cfs_rq *cfs_rq +@@ -4041,7 +4041,7 @@ check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr) return; if (delta > ideal_runtime) @@ -423,7 +440,7 @@ Signed-off-by: Thomas Gleixner } static void -@@ -4183,7 +4183,7 @@ entity_tick(struct cfs_rq *cfs_rq, struc +@@ -4183,7 +4183,7 @@ entity_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr, int queued) * validating it and just reschedule. */ if (queued) { @@ -432,7 +449,7 @@ Signed-off-by: Thomas Gleixner return; } /* -@@ -4367,7 +4367,7 @@ static void __account_cfs_rq_runtime(str +@@ -4367,7 +4367,7 @@ static void __account_cfs_rq_runtime(struct cfs_rq *cfs_rq, u64 delta_exec) * hierarchy can be throttled */ if (!assign_cfs_rq_runtime(cfs_rq) && likely(cfs_rq->curr)) @@ -441,7 +458,7 @@ Signed-off-by: Thomas Gleixner } static __always_inline -@@ -5038,7 +5038,7 @@ static void hrtick_start_fair(struct rq +@@ -5063,7 +5063,7 @@ static void hrtick_start_fair(struct rq *rq, struct task_struct *p) if (delta < 0) { if (rq->curr == p) @@ -450,7 +467,7 @@ Signed-off-by: Thomas Gleixner return; } hrtick_start(rq, delta); -@@ -6614,7 +6614,7 @@ static void check_preempt_wakeup(struct +@@ -6639,7 +6639,7 @@ static void check_preempt_wakeup(struct rq *rq, struct task_struct *p, int wake_ return; preempt: @@ -459,7 +476,7 @@ Signed-off-by: Thomas Gleixner /* * Only set the backward buddy when the current task is still * on the rq. This can happen when a wakeup gets interleaved -@@ -9701,7 +9701,7 @@ static void task_fork_fair(struct task_s +@@ -9726,7 +9726,7 @@ static void task_fork_fair(struct task_struct *p) * 'current' within the tree based on its new key value. 
*/ swap(curr->vruntime, se->vruntime); @@ -468,7 +485,7 @@ Signed-off-by: Thomas Gleixner } se->vruntime -= cfs_rq->min_vruntime; -@@ -9725,7 +9725,7 @@ prio_changed_fair(struct rq *rq, struct +@@ -9750,7 +9750,7 @@ prio_changed_fair(struct rq *rq, struct task_struct *p, int oldprio) */ if (rq->curr == p) { if (p->prio > oldprio) @@ -477,6 +494,8 @@ Signed-off-by: Thomas Gleixner } else check_preempt_curr(rq, p, 0); } +diff --git a/kernel/sched/features.h b/kernel/sched/features.h +index 68de18405857..12a12be6770b 100644 --- a/kernel/sched/features.h +++ b/kernel/sched/features.h @@ -48,6 +48,9 @@ SCHED_FEAT(NONTASK_CAPACITY, true) @@ -489,9 +508,11 @@ Signed-off-by: Thomas Gleixner #else /* +diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h +index dd6ae39957ce..58d3972ae0d4 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h -@@ -1638,6 +1638,15 @@ extern void reweight_task(struct task_st +@@ -1638,6 +1638,15 @@ extern void reweight_task(struct task_struct *p, int prio); extern void resched_curr(struct rq *rq); extern void resched_cpu(int cpu); @@ -507,9 +528,11 @@ Signed-off-by: Thomas Gleixner extern struct rt_bandwidth def_rt_bandwidth; extern void init_rt_bandwidth(struct rt_bandwidth *rt_b, u64 period, u64 runtime); +diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c +index 0af14953d52d..02a29282b828 100644 --- a/kernel/trace/trace.c +++ b/kernel/trace/trace.c -@@ -2134,6 +2134,7 @@ tracing_generic_entry_update(struct trac +@@ -2134,6 +2134,7 @@ tracing_generic_entry_update(struct trace_entry *entry, unsigned long flags, struct task_struct *tsk = current; entry->preempt_count = pc & 0xff; @@ -517,7 +540,7 @@ Signed-off-by: Thomas Gleixner entry->pid = (tsk) ? tsk->pid : 0; entry->flags = #ifdef CONFIG_TRACE_IRQFLAGS_SUPPORT -@@ -2144,7 +2145,8 @@ tracing_generic_entry_update(struct trac +@@ -2144,7 +2145,8 @@ tracing_generic_entry_update(struct trace_entry *entry, unsigned long flags, ((pc & NMI_MASK ) ? 
TRACE_FLAG_NMI : 0) | ((pc & HARDIRQ_MASK) ? TRACE_FLAG_HARDIRQ : 0) | ((pc & SOFTIRQ_OFFSET) ? TRACE_FLAG_SOFTIRQ : 0) | @@ -527,7 +550,7 @@ Signed-off-by: Thomas Gleixner (test_preempt_need_resched() ? TRACE_FLAG_PREEMPT_RESCHED : 0); entry->migrate_disable = (tsk) ? __migrate_disabled(tsk) & 0xFF : 0; -@@ -3346,15 +3348,17 @@ get_total_entries(struct trace_buffer *b +@@ -3346,15 +3348,17 @@ get_total_entries(struct trace_buffer *buf, static void print_lat_help_header(struct seq_file *m) { @@ -554,7 +577,7 @@ Signed-off-by: Thomas Gleixner } static void print_event_info(struct trace_buffer *buf, struct seq_file *m) -@@ -3390,15 +3394,17 @@ static void print_func_help_header_irq(s +@@ -3392,15 +3396,17 @@ static void print_func_help_header_irq(struct trace_buffer *buf, struct seq_file tgid ? tgid_space : space); seq_printf(m, "# %s / _----=> need-resched\n", tgid ? tgid_space : space); @@ -577,6 +600,8 @@ Signed-off-by: Thomas Gleixner tgid ? " | " : space); } +diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h +index 447bd96ee658..65afd0c04622 100644 --- a/kernel/trace/trace.h +++ b/kernel/trace/trace.h @@ -127,6 +127,7 @@ struct kretprobe_trace_entry_head { @@ -595,9 +620,11 @@ Signed-off-by: Thomas Gleixner }; #define TRACE_BUF_SIZE 1024 +diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c +index 46c96744f09d..3f78b0afb729 100644 --- a/kernel/trace/trace_output.c +++ b/kernel/trace/trace_output.c -@@ -448,6 +448,7 @@ int trace_print_lat_fmt(struct trace_seq +@@ -448,6 +448,7 @@ int trace_print_lat_fmt(struct trace_seq *s, struct trace_entry *entry) { char hardsoft_irq; char need_resched; @@ -605,7 +632,7 @@ Signed-off-by: Thomas Gleixner char irqs_off; int hardirq; int softirq; -@@ -478,6 +479,9 @@ int trace_print_lat_fmt(struct trace_seq +@@ -478,6 +479,9 @@ int trace_print_lat_fmt(struct trace_seq *s, struct trace_entry *entry) break; } @@ -615,7 +642,7 @@ Signed-off-by: Thomas Gleixner hardsoft_irq = (nmi && hardirq) ? 'Z' : nmi ? 
'z' : -@@ -486,14 +490,20 @@ int trace_print_lat_fmt(struct trace_seq +@@ -486,14 +490,20 @@ int trace_print_lat_fmt(struct trace_seq *s, struct trace_entry *entry) softirq ? 's' : '.' ; @@ -638,3 +665,6 @@ Signed-off-by: Thomas Gleixner if (entry->migrate_disable) trace_seq_printf(s, "%x", entry->migrate_disable); else +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0243-ftrace-Fix-trace-header-alignment.patch b/kernel/patches-4.19.x-rt/0241-ftrace-Fix-trace-header-alignment.patch similarity index 87% rename from kernel/patches-4.19.x-rt/0243-ftrace-Fix-trace-header-alignment.patch rename to kernel/patches-4.19.x-rt/0241-ftrace-Fix-trace-header-alignment.patch index 2baa6c6ad..92d7dce6a 100644 --- a/kernel/patches-4.19.x-rt/0243-ftrace-Fix-trace-header-alignment.patch +++ b/kernel/patches-4.19.x-rt/0241-ftrace-Fix-trace-header-alignment.patch @@ -1,6 +1,7 @@ +From 9dc4f4dc93a57dce9f30fb429753a23f0e339749 Mon Sep 17 00:00:00 2001 From: Mike Galbraith Date: Sun, 16 Oct 2016 05:08:30 +0200 -Subject: [PATCH] ftrace: Fix trace header alignment +Subject: [PATCH 241/269] ftrace: Fix trace header alignment Line up helper arrows to the right column. 
@@ -9,12 +10,14 @@ Signed-off-by: Mike Galbraith [bigeasy: fixup function tracer header] Signed-off-by: Sebastian Andrzej Siewior --- - kernel/trace/trace.c | 22 +++++++++++----------- + kernel/trace/trace.c | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) +diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c +index 02a29282b828..fb2ff2dfd134 100644 --- a/kernel/trace/trace.c +++ b/kernel/trace/trace.c -@@ -3348,17 +3348,17 @@ get_total_entries(struct trace_buffer *b +@@ -3348,17 +3348,17 @@ get_total_entries(struct trace_buffer *buf, static void print_lat_help_header(struct seq_file *m) { @@ -43,3 +46,6 @@ Signed-off-by: Sebastian Andrzej Siewior } static void print_event_info(struct trace_buffer *buf, struct seq_file *m) +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0244-x86-preempt-lazy.patch b/kernel/patches-4.19.x-rt/0242-x86-Support-for-lazy-preemption.patch similarity index 79% rename from kernel/patches-4.19.x-rt/0244-x86-preempt-lazy.patch rename to kernel/patches-4.19.x-rt/0242-x86-Support-for-lazy-preemption.patch index ffb6e9af5..181aa082b 100644 --- a/kernel/patches-4.19.x-rt/0244-x86-preempt-lazy.patch +++ b/kernel/patches-4.19.x-rt/0242-x86-Support-for-lazy-preemption.patch @@ -1,20 +1,23 @@ -Subject: x86: Support for lazy preemption +From 85dc65ec7e8efc7a7842a1c52b964fe3a5f3214e Mon Sep 17 00:00:00 2001 From: Thomas Gleixner -Date: Thu, 01 Nov 2012 11:03:47 +0100 +Date: Thu, 1 Nov 2012 11:03:47 +0100 +Subject: [PATCH 242/269] x86: Support for lazy preemption Implement the x86 pieces for lazy preempt. 
Signed-off-by: Thomas Gleixner --- - arch/x86/Kconfig | 1 + - arch/x86/entry/common.c | 4 ++-- - arch/x86/entry/entry_32.S | 17 +++++++++++++++++ - arch/x86/entry/entry_64.S | 16 ++++++++++++++++ - arch/x86/include/asm/preempt.h | 31 ++++++++++++++++++++++++++++++- - arch/x86/include/asm/thread_info.h | 11 +++++++++++ - arch/x86/kernel/asm-offsets.c | 2 ++ + arch/x86/Kconfig | 1 + + arch/x86/entry/common.c | 4 ++-- + arch/x86/entry/entry_32.S | 17 ++++++++++++++++ + arch/x86/entry/entry_64.S | 16 +++++++++++++++ + arch/x86/include/asm/preempt.h | 31 +++++++++++++++++++++++++++++- + arch/x86/include/asm/thread_info.h | 11 +++++++++++ + arch/x86/kernel/asm-offsets.c | 2 ++ 7 files changed, 79 insertions(+), 3 deletions(-) +diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig +index 1b05ae86bdde..736e369e141b 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -180,6 +180,7 @@ config X86 @@ -25,9 +28,11 @@ Signed-off-by: Thomas Gleixner select HAVE_RCU_TABLE_FREE if PARAVIRT select HAVE_RCU_TABLE_INVALIDATE if HAVE_RCU_TABLE_FREE select HAVE_REGS_AND_STACK_ACCESS_API +diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c +index ec46ee700791..fbb14008bd43 100644 --- a/arch/x86/entry/common.c +++ b/arch/x86/entry/common.c -@@ -133,7 +133,7 @@ static long syscall_trace_enter(struct p +@@ -133,7 +133,7 @@ static long syscall_trace_enter(struct pt_regs *regs) #define EXIT_TO_USERMODE_LOOP_FLAGS \ (_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | _TIF_UPROBE | \ @@ -36,7 +41,7 @@ Signed-off-by: Thomas Gleixner static void exit_to_usermode_loop(struct pt_regs *regs, u32 cached_flags) { -@@ -148,7 +148,7 @@ static void exit_to_usermode_loop(struct +@@ -148,7 +148,7 @@ static void exit_to_usermode_loop(struct pt_regs *regs, u32 cached_flags) /* We have work to do. 
*/ local_irq_enable(); @@ -45,6 +50,8 @@ Signed-off-by: Thomas Gleixner schedule(); #ifdef ARCH_RT_DELAYS_SIGNAL_SEND +diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S +index fbbf1ba57ec6..0169c257cfff 100644 --- a/arch/x86/entry/entry_32.S +++ b/arch/x86/entry/entry_32.S @@ -764,8 +764,25 @@ END(ret_from_exception) @@ -73,9 +80,11 @@ Signed-off-by: Thomas Gleixner testl $X86_EFLAGS_IF, PT_EFLAGS(%esp) # interrupts off (exception path) ? jz restore_all_kernel call preempt_schedule_irq +diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S +index ce2a6587ed11..d01d68de64ae 100644 --- a/arch/x86/entry/entry_64.S +++ b/arch/x86/entry/entry_64.S -@@ -705,7 +705,23 @@ GLOBAL(swapgs_restore_regs_and_return_to +@@ -706,7 +706,23 @@ retint_kernel: btl $9, EFLAGS(%rsp) /* were interrupts off? */ jnc 1f 0: cmpl $0, PER_CPU_VAR(__preempt_count) @@ -99,9 +108,11 @@ Signed-off-by: Thomas Gleixner call preempt_schedule_irq jmp 0b 1: +diff --git a/arch/x86/include/asm/preempt.h b/arch/x86/include/asm/preempt.h +index 7f2dbd91fc74..22992c837795 100644 --- a/arch/x86/include/asm/preempt.h +++ b/arch/x86/include/asm/preempt.h -@@ -86,17 +86,46 @@ static __always_inline void __preempt_co +@@ -86,17 +86,46 @@ static __always_inline void __preempt_count_sub(int val) * a decrement which hits zero means we have no preempt_count and should * reschedule. 
*/ @@ -149,6 +160,8 @@ Signed-off-by: Thomas Gleixner } #ifdef CONFIG_PREEMPT +diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h +index 82b73b75d67c..dc267291f131 100644 --- a/arch/x86/include/asm/thread_info.h +++ b/arch/x86/include/asm/thread_info.h @@ -56,17 +56,24 @@ struct task_struct; @@ -201,6 +214,8 @@ Signed-off-by: Thomas Gleixner #define STACK_WARN (THREAD_SIZE/8) /* +diff --git a/arch/x86/kernel/asm-offsets.c b/arch/x86/kernel/asm-offsets.c +index 01de31db300d..ce1c5b9fbd8c 100644 --- a/arch/x86/kernel/asm-offsets.c +++ b/arch/x86/kernel/asm-offsets.c @@ -38,6 +38,7 @@ void common(void) { @@ -219,3 +234,6 @@ Signed-off-by: Thomas Gleixner /* TLB state for the entry code */ OFFSET(TLB_STATE_user_pcid_flush_mask, tlb_state, user_pcid_flush_mask); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0245-x86-lazy-preempt-properly-check-against-preempt-mask.patch b/kernel/patches-4.19.x-rt/0243-x86-lazy-preempt-properly-check-against-preempt-mask.patch similarity index 62% rename from kernel/patches-4.19.x-rt/0245-x86-lazy-preempt-properly-check-against-preempt-mask.patch rename to kernel/patches-4.19.x-rt/0243-x86-lazy-preempt-properly-check-against-preempt-mask.patch index 0151e3943..5ec769fa0 100644 --- a/kernel/patches-4.19.x-rt/0245-x86-lazy-preempt-properly-check-against-preempt-mask.patch +++ b/kernel/patches-4.19.x-rt/0243-x86-lazy-preempt-properly-check-against-preempt-mask.patch @@ -1,6 +1,8 @@ +From d35f3a1ee1cf19c8b8aefe555a8af80a5f5b8fe1 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Mon, 18 Feb 2019 16:57:09 +0100 -Subject: [PATCH] x86: lazy-preempt: properly check against preempt-mask +Subject: [PATCH 243/269] x86: lazy-preempt: properly check against + preempt-mask should_resched() should check against preempt_offset after unmasking the need-resched-bit. Otherwise should_resched() won't work for @@ -9,12 +11,14 @@ preempt_offset != 0 and lazy-preempt set. 
Cc: stable-rt@vger.kernel.org Signed-off-by: Sebastian Andrzej Siewior --- - arch/x86/include/asm/preempt.h | 2 +- + arch/x86/include/asm/preempt.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) +diff --git a/arch/x86/include/asm/preempt.h b/arch/x86/include/asm/preempt.h +index 22992c837795..f66708779274 100644 --- a/arch/x86/include/asm/preempt.h +++ b/arch/x86/include/asm/preempt.h -@@ -118,7 +118,7 @@ static __always_inline bool should_resch +@@ -118,7 +118,7 @@ static __always_inline bool should_resched(int preempt_offset) /* preempt count == 0 ? */ tmp &= ~PREEMPT_NEED_RESCHED; @@ -23,3 +27,6 @@ Signed-off-by: Sebastian Andrzej Siewior return false; if (current_thread_info()->preempt_lazy_count) return false; +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0246-x86-lazy-preempt-use-proper-return-label-on-32bit-x8.patch b/kernel/patches-4.19.x-rt/0244-x86-lazy-preempt-use-proper-return-label-on-32bit-x8.patch similarity index 77% rename from kernel/patches-4.19.x-rt/0246-x86-lazy-preempt-use-proper-return-label-on-32bit-x8.patch rename to kernel/patches-4.19.x-rt/0244-x86-lazy-preempt-use-proper-return-label-on-32bit-x8.patch index 98b1ce39b..563e670d6 100644 --- a/kernel/patches-4.19.x-rt/0246-x86-lazy-preempt-use-proper-return-label-on-32bit-x8.patch +++ b/kernel/patches-4.19.x-rt/0244-x86-lazy-preempt-use-proper-return-label-on-32bit-x8.patch @@ -1,6 +1,8 @@ +From 48a22e409d7de1904f5577d83d0c8f9cb69ce766 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Tue, 26 Feb 2019 14:53:49 +0100 -Subject: [PATCH] x86: lazy-preempt: use proper return label on 32bit-x86 +Subject: [PATCH 244/269] x86: lazy-preempt: use proper return label on + 32bit-x86 The lazy-preempt uses the wrong return label in case preemption isn't possible. This results crash while returning to the kernel. @@ -10,9 +12,11 @@ Use the correct return label if preemption isn' possible. 
Reported-by: Andri Yngvason Signed-off-by: Sebastian Andrzej Siewior --- - arch/x86/entry/entry_32.S | 8 ++++---- + arch/x86/entry/entry_32.S | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) +diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S +index 0169c257cfff..e6f61c813baf 100644 --- a/arch/x86/entry/entry_32.S +++ b/arch/x86/entry/entry_32.S @@ -773,15 +773,15 @@ ENTRY(resume_kernel) @@ -35,3 +39,6 @@ Signed-off-by: Sebastian Andrzej Siewior #endif testl $X86_EFLAGS_IF, PT_EFLAGS(%esp) # interrupts off (exception path) ? jz restore_all_kernel +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0247-arm-preempt-lazy-support.patch b/kernel/patches-4.19.x-rt/0245-arm-Add-support-for-lazy-preemption.patch similarity index 74% rename from kernel/patches-4.19.x-rt/0247-arm-preempt-lazy-support.patch rename to kernel/patches-4.19.x-rt/0245-arm-Add-support-for-lazy-preemption.patch index 99a7df7d9..dc625b966 100644 --- a/kernel/patches-4.19.x-rt/0247-arm-preempt-lazy-support.patch +++ b/kernel/patches-4.19.x-rt/0245-arm-Add-support-for-lazy-preemption.patch @@ -1,19 +1,22 @@ -Subject: arm: Add support for lazy preemption +From 4b5643e59aaece3f42def2a9ea0fe2dd07cab601 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Wed, 31 Oct 2012 12:04:11 +0100 +Subject: [PATCH 245/269] arm: Add support for lazy preemption Implement the arm pieces for lazy preempt. 
Signed-off-by: Thomas Gleixner --- - arch/arm/Kconfig | 1 + - arch/arm/include/asm/thread_info.h | 8 ++++++-- - arch/arm/kernel/asm-offsets.c | 1 + - arch/arm/kernel/entry-armv.S | 19 ++++++++++++++++--- - arch/arm/kernel/entry-common.S | 9 +++++++-- - arch/arm/kernel/signal.c | 3 ++- + arch/arm/Kconfig | 1 + + arch/arm/include/asm/thread_info.h | 8 ++++++-- + arch/arm/kernel/asm-offsets.c | 1 + + arch/arm/kernel/entry-armv.S | 19 ++++++++++++++++--- + arch/arm/kernel/entry-common.S | 9 +++++++-- + arch/arm/kernel/signal.c | 3 ++- 6 files changed, 33 insertions(+), 8 deletions(-) +diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig +index 91f4f80a6f24..cba596677f6e 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -90,6 +90,7 @@ config ARM @@ -24,6 +27,8 @@ Signed-off-by: Thomas Gleixner select HAVE_RCU_TABLE_FREE if (SMP && ARM_LPAE) select HAVE_REGS_AND_STACK_ACCESS_API select HAVE_RSEQ +diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h +index 8f55dc520a3e..4f834bfca470 100644 --- a/arch/arm/include/asm/thread_info.h +++ b/arch/arm/include/asm/thread_info.h @@ -49,6 +49,7 @@ struct cpu_context_save { @@ -34,7 +39,7 @@ Signed-off-by: Thomas Gleixner mm_segment_t addr_limit; /* address limit */ struct task_struct *task; /* main task structure */ __u32 cpu; /* cpu */ -@@ -139,7 +140,8 @@ extern int vfp_restore_user_hwstate(stru +@@ -139,7 +140,8 @@ extern int vfp_restore_user_hwstate(struct user_vfp *, #define TIF_SYSCALL_TRACE 4 /* syscall trace active */ #define TIF_SYSCALL_AUDIT 5 /* syscall auditing active */ #define TIF_SYSCALL_TRACEPOINT 6 /* syscall tracepoint instrumentation */ @@ -44,7 +49,7 @@ Signed-off-by: Thomas Gleixner #define TIF_NOHZ 12 /* in adaptive nohz mode */ #define TIF_USING_IWMMXT 17 -@@ -149,6 +151,7 @@ extern int vfp_restore_user_hwstate(stru +@@ -149,6 +151,7 @@ extern int vfp_restore_user_hwstate(struct user_vfp *, #define _TIF_SIGPENDING (1 << TIF_SIGPENDING) #define _TIF_NEED_RESCHED (1 << 
TIF_NEED_RESCHED) #define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME) @@ -52,7 +57,7 @@ Signed-off-by: Thomas Gleixner #define _TIF_UPROBE (1 << TIF_UPROBE) #define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE) #define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT) -@@ -164,7 +167,8 @@ extern int vfp_restore_user_hwstate(stru +@@ -164,7 +167,8 @@ extern int vfp_restore_user_hwstate(struct user_vfp *, * Change these and you break ASM code in entry-common.S */ #define _TIF_WORK_MASK (_TIF_NEED_RESCHED | _TIF_SIGPENDING | \ @@ -62,6 +67,8 @@ Signed-off-by: Thomas Gleixner #endif /* __KERNEL__ */ #endif /* __ASM_ARM_THREAD_INFO_H */ +diff --git a/arch/arm/kernel/asm-offsets.c b/arch/arm/kernel/asm-offsets.c +index 3968d6c22455..b35d373fc982 100644 --- a/arch/arm/kernel/asm-offsets.c +++ b/arch/arm/kernel/asm-offsets.c @@ -56,6 +56,7 @@ int main(void) @@ -72,9 +79,11 @@ Signed-off-by: Thomas Gleixner DEFINE(TI_ADDR_LIMIT, offsetof(struct thread_info, addr_limit)); DEFINE(TI_TASK, offsetof(struct thread_info, task)); DEFINE(TI_CPU, offsetof(struct thread_info, cpu)); +diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S +index e85a3af9ddeb..cc67c0a3ae7b 100644 --- a/arch/arm/kernel/entry-armv.S +++ b/arch/arm/kernel/entry-armv.S -@@ -216,11 +216,18 @@ ENDPROC(__dabt_svc) +@@ -216,11 +216,18 @@ __irq_svc: #ifdef CONFIG_PREEMPT ldr r8, [tsk, #TI_PREEMPT] @ get preempt count @@ -95,7 +104,7 @@ Signed-off-by: Thomas Gleixner #endif svc_exit r5, irq = 1 @ return from exception -@@ -235,8 +242,14 @@ ENDPROC(__irq_svc) +@@ -235,8 +242,14 @@ svc_preempt: 1: bl preempt_schedule_irq @ irq en/disable is done inside ldr r0, [tsk, #TI_FLAGS] @ get new tasks TI_FLAGS tst r0, #_TIF_NEED_RESCHED @@ -111,9 +120,11 @@ Signed-off-by: Thomas Gleixner #endif __und_fault: +diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S +index 746565a876dc..156e3ba4b319 100644 --- a/arch/arm/kernel/entry-common.S +++ b/arch/arm/kernel/entry-common.S -@@ -56,7 +56,9 
@@ saved_pc .req lr +@@ -56,7 +56,9 @@ __ret_fast_syscall: cmp r2, #TASK_SIZE blne addr_limit_check_failed ldr r1, [tsk, #TI_FLAGS] @ re-check for syscall tracing @@ -124,7 +135,7 @@ Signed-off-by: Thomas Gleixner bne fast_work_pending -@@ -93,8 +95,11 @@ ENDPROC(ret_fast_syscall) +@@ -93,8 +95,11 @@ __ret_fast_syscall: cmp r2, #TASK_SIZE blne addr_limit_check_failed ldr r1, [tsk, #TI_FLAGS] @ re-check for syscall tracing @@ -137,9 +148,11 @@ Signed-off-by: Thomas Gleixner UNWIND(.fnend ) ENDPROC(ret_fast_syscall) +diff --git a/arch/arm/kernel/signal.c b/arch/arm/kernel/signal.c +index b908382b69ff..339fbc281cf1 100644 --- a/arch/arm/kernel/signal.c +++ b/arch/arm/kernel/signal.c -@@ -652,7 +652,8 @@ do_work_pending(struct pt_regs *regs, un +@@ -652,7 +652,8 @@ do_work_pending(struct pt_regs *regs, unsigned int thread_flags, int syscall) */ trace_hardirqs_off(); do { @@ -149,3 +162,6 @@ Signed-off-by: Thomas Gleixner schedule(); } else { if (unlikely(!user_mode(regs))) +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0248-powerpc-preempt-lazy-support.patch b/kernel/patches-4.19.x-rt/0246-powerpc-Add-support-for-lazy-preemption.patch similarity index 75% rename from kernel/patches-4.19.x-rt/0248-powerpc-preempt-lazy-support.patch rename to kernel/patches-4.19.x-rt/0246-powerpc-Add-support-for-lazy-preemption.patch index 2e9979c76..807e40149 100644 --- a/kernel/patches-4.19.x-rt/0248-powerpc-preempt-lazy-support.patch +++ b/kernel/patches-4.19.x-rt/0246-powerpc-Add-support-for-lazy-preemption.patch @@ -1,18 +1,21 @@ +From 0f9163aaaab913d5d2fe2dc92e8c82e588eef09b Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Thu, 1 Nov 2012 10:14:11 +0100 -Subject: powerpc: Add support for lazy preemption +Subject: [PATCH 246/269] powerpc: Add support for lazy preemption Implement the powerpc pieces for lazy preempt. 
Signed-off-by: Thomas Gleixner --- - arch/powerpc/Kconfig | 1 + - arch/powerpc/include/asm/thread_info.h | 9 +++++++-- - arch/powerpc/kernel/asm-offsets.c | 1 + - arch/powerpc/kernel/entry_32.S | 17 ++++++++++++----- - arch/powerpc/kernel/entry_64.S | 16 ++++++++++++---- + arch/powerpc/Kconfig | 1 + + arch/powerpc/include/asm/thread_info.h | 9 +++++++-- + arch/powerpc/kernel/asm-offsets.c | 1 + + arch/powerpc/kernel/entry_32.S | 17 ++++++++++++----- + arch/powerpc/kernel/entry_64.S | 16 ++++++++++++---- 5 files changed, 33 insertions(+), 11 deletions(-) +diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig +index 1563820a37e8..d4835f8cfcf2 100644 --- a/arch/powerpc/Kconfig +++ b/arch/powerpc/Kconfig @@ -216,6 +216,7 @@ config PPC @@ -23,6 +26,8 @@ Signed-off-by: Thomas Gleixner select HAVE_RCU_TABLE_FREE if SMP select HAVE_REGS_AND_STACK_ACCESS_API select HAVE_RELIABLE_STACKTRACE if PPC64 && CPU_LITTLE_ENDIAN +diff --git a/arch/powerpc/include/asm/thread_info.h b/arch/powerpc/include/asm/thread_info.h +index 3c0002044bc9..ce316076bc52 100644 --- a/arch/powerpc/include/asm/thread_info.h +++ b/arch/powerpc/include/asm/thread_info.h @@ -37,6 +37,8 @@ struct thread_info { @@ -34,7 +39,7 @@ Signed-off-by: Thomas Gleixner unsigned long local_flags; /* private flags for thread */ #ifdef CONFIG_LIVEPATCH unsigned long *livepatch_sp; -@@ -81,7 +83,7 @@ extern int arch_dup_task_struct(struct t +@@ -81,7 +83,7 @@ extern int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src #define TIF_SIGPENDING 1 /* signal pending */ #define TIF_NEED_RESCHED 2 /* rescheduling necessary */ #define TIF_FSCHECK 3 /* Check FS is USER_DS on return */ @@ -43,7 +48,7 @@ Signed-off-by: Thomas Gleixner #define TIF_RESTORE_TM 5 /* need to restore TM FP/VEC/VSX */ #define TIF_PATCH_PENDING 6 /* pending live patching update */ #define TIF_SYSCALL_AUDIT 7 /* syscall auditing active */ -@@ -100,6 +102,7 @@ extern int arch_dup_task_struct(struct t +@@ -100,6 +102,7 @@ extern int 
arch_dup_task_struct(struct task_struct *dst, struct task_struct *src #define TIF_ELF2ABI 18 /* function descriptors must die! */ #endif #define TIF_POLLING_NRFLAG 19 /* true if poll_idle() is polling TIF_NEED_RESCHED */ @@ -51,7 +56,7 @@ Signed-off-by: Thomas Gleixner /* as above, but as bit values */ #define _TIF_SYSCALL_TRACE (1< #define _TIF_FSCHECK (1< /* Bits in local_flags */ /* Don't move TLF_NAPPING without adjusting the code in entry_32.S */ +diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c +index 89cf15566c4e..1870c87fb22a 100644 --- a/arch/powerpc/kernel/asm-offsets.c +++ b/arch/powerpc/kernel/asm-offsets.c @@ -156,6 +156,7 @@ int main(void) @@ -80,9 +87,11 @@ Signed-off-by: Thomas Gleixner OFFSET(TI_TASK, thread_info, task); OFFSET(TI_CPU, thread_info, cpu); +diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S +index 26b3f853cbf6..3783f3ef17a4 100644 --- a/arch/powerpc/kernel/entry_32.S +++ b/arch/powerpc/kernel/entry_32.S -@@ -885,7 +885,14 @@ user_exc_return: /* r10 contains MSR_KE +@@ -888,7 +888,14 @@ resume_kernel: cmpwi 0,r0,0 /* if non-zero, just restore regs and return */ bne restore andi. r8,r8,_TIF_NEED_RESCHED @@ -97,7 +106,7 @@ Signed-off-by: Thomas Gleixner lwz r3,_MSR(r1) andi. r0,r3,MSR_EE /* interrupts off? 
*/ beq restore /* don't schedule if so */ -@@ -896,11 +903,11 @@ user_exc_return: /* r10 contains MSR_KE +@@ -899,11 +906,11 @@ resume_kernel: */ bl trace_hardirqs_off #endif @@ -112,7 +121,7 @@ Signed-off-by: Thomas Gleixner #ifdef CONFIG_TRACE_IRQFLAGS /* And now, to properly rebalance the above, we tell lockdep they * are being turned back on, which will happen when we return -@@ -1223,7 +1230,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_NEED_PAIRE +@@ -1232,7 +1239,7 @@ global_dbcr0: #endif /* !(CONFIG_4xx || CONFIG_BOOKE) */ do_work: /* r10 contains MSR_KERNEL here */ @@ -121,7 +130,7 @@ Signed-off-by: Thomas Gleixner beq do_user_signal do_resched: /* r10 contains MSR_KERNEL here */ -@@ -1244,7 +1251,7 @@ do_resched: /* r10 contains MSR_KERNEL +@@ -1253,7 +1260,7 @@ recheck: MTMSRD(r10) /* disable interrupts */ CURRENT_THREAD_INFO(r9, r1) lwz r9,TI_FLAGS(r9) @@ -130,9 +139,11 @@ Signed-off-by: Thomas Gleixner bne- do_resched andi. r0,r9,_TIF_USER_WORK_MASK beq restore_user +diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S +index 7a46e0e57a36..7671fa5da9fa 100644 --- a/arch/powerpc/kernel/entry_64.S +++ b/arch/powerpc/kernel/entry_64.S -@@ -171,7 +171,7 @@ system_call: /* label this so stack tr +@@ -176,7 +176,7 @@ system_call: /* label this so stack traces look sane */ * based on caller's run-mode / personality. 
*/ ld r11,SYS_CALL_TABLE@toc(2) @@ -141,7 +152,7 @@ Signed-off-by: Thomas Gleixner beq 15f addi r11,r11,8 /* use 32-bit syscall entries */ clrldi r3,r3,32 -@@ -763,7 +763,7 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_1T_SEG +@@ -768,7 +768,7 @@ _GLOBAL(ret_from_except_lite) bl restore_math b restore #endif @@ -150,7 +161,7 @@ Signed-off-by: Thomas Gleixner beq 2f bl restore_interrupts SCHEDULE_USER -@@ -825,10 +825,18 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_1T_SEG +@@ -830,10 +830,18 @@ resume_kernel: #ifdef CONFIG_PREEMPT /* Check if we need to preempt */ @@ -170,7 +181,7 @@ Signed-off-by: Thomas Gleixner cmpwi cr0,r8,0 bne restore ld r0,SOFTE(r1) -@@ -845,7 +853,7 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_1T_SEG +@@ -850,7 +858,7 @@ resume_kernel: /* Re-test flags and eventually loop */ CURRENT_THREAD_INFO(r9, r1) ld r4,TI_FLAGS(r9) @@ -179,3 +190,6 @@ Signed-off-by: Thomas Gleixner bne 1b /* +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0249-arch-arm64-Add-lazy-preempt-support.patch b/kernel/patches-4.19.x-rt/0247-arch-arm64-Add-lazy-preempt-support.patch similarity index 75% rename from kernel/patches-4.19.x-rt/0249-arch-arm64-Add-lazy-preempt-support.patch rename to kernel/patches-4.19.x-rt/0247-arch-arm64-Add-lazy-preempt-support.patch index 7b2c832d5..ae97a17d4 100644 --- a/kernel/patches-4.19.x-rt/0249-arch-arm64-Add-lazy-preempt-support.patch +++ b/kernel/patches-4.19.x-rt/0247-arch-arm64-Add-lazy-preempt-support.patch @@ -1,6 +1,7 @@ +From 896a4ed8a9134811455719d2bc0ba8e5248c5a0f Mon Sep 17 00:00:00 2001 From: Anders Roxell Date: Thu, 14 May 2015 17:52:17 +0200 -Subject: arch/arm64: Add lazy preempt support +Subject: [PATCH 247/269] arch/arm64: Add lazy preempt support arm64 is missing support for PREEMPT_RT. The main feature which is lacking is support for lazy preemption. The arch-specific entry code, @@ -11,13 +12,15 @@ indicate that support for full RT preemption is now available. 
Signed-off-by: Anders Roxell --- - arch/arm64/Kconfig | 1 + - arch/arm64/include/asm/thread_info.h | 6 +++++- - arch/arm64/kernel/asm-offsets.c | 1 + - arch/arm64/kernel/entry.S | 12 +++++++++--- - arch/arm64/kernel/signal.c | 2 +- + arch/arm64/Kconfig | 1 + + arch/arm64/include/asm/thread_info.h | 6 +++++- + arch/arm64/kernel/asm-offsets.c | 1 + + arch/arm64/kernel/entry.S | 12 +++++++++--- + arch/arm64/kernel/signal.c | 2 +- 5 files changed, 17 insertions(+), 5 deletions(-) +diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig +index 1b1a0e95c751..418a75d30f5c 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -140,6 +140,7 @@ config ARM64 @@ -28,6 +31,8 @@ Signed-off-by: Anders Roxell select HAVE_REGS_AND_STACK_ACCESS_API select HAVE_RCU_TABLE_FREE select HAVE_RSEQ +diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h +index cb2c10a8f0a8..f1820f7318b6 100644 --- a/arch/arm64/include/asm/thread_info.h +++ b/arch/arm64/include/asm/thread_info.h @@ -43,6 +43,7 @@ struct thread_info { @@ -38,7 +43,7 @@ Signed-off-by: Anders Roxell }; #define thread_saved_pc(tsk) \ -@@ -76,6 +77,7 @@ void arch_release_task_struct(struct tas +@@ -76,6 +77,7 @@ void arch_release_task_struct(struct task_struct *tsk); #define TIF_FOREIGN_FPSTATE 3 /* CPU's FP state is not current's */ #define TIF_UPROBE 4 /* uprobe breakpoint or singlestep */ #define TIF_FSCHECK 5 /* Check FS is USER_DS on return */ @@ -46,7 +51,7 @@ Signed-off-by: Anders Roxell #define TIF_NOHZ 7 #define TIF_SYSCALL_TRACE 8 #define TIF_SYSCALL_AUDIT 9 -@@ -94,6 +96,7 @@ void arch_release_task_struct(struct tas +@@ -94,6 +96,7 @@ void arch_release_task_struct(struct task_struct *tsk); #define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED) #define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME) #define _TIF_FOREIGN_FPSTATE (1 << TIF_FOREIGN_FPSTATE) @@ -54,7 +59,7 @@ Signed-off-by: Anders Roxell #define _TIF_NOHZ (1 << TIF_NOHZ) #define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE) 
#define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT) -@@ -106,8 +109,9 @@ void arch_release_task_struct(struct tas +@@ -106,8 +109,9 @@ void arch_release_task_struct(struct task_struct *tsk); #define _TIF_WORK_MASK (_TIF_NEED_RESCHED | _TIF_SIGPENDING | \ _TIF_NOTIFY_RESUME | _TIF_FOREIGN_FPSTATE | \ @@ -65,6 +70,8 @@ Signed-off-by: Anders Roxell #define _TIF_SYSCALL_WORK (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \ _TIF_SYSCALL_TRACEPOINT | _TIF_SECCOMP | \ _TIF_NOHZ) +diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c +index 323aeb5f2fe6..7edd5a2668ea 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -41,6 +41,7 @@ int main(void) @@ -75,9 +82,11 @@ Signed-off-by: Anders Roxell DEFINE(TSK_TI_ADDR_LIMIT, offsetof(struct task_struct, thread_info.addr_limit)); #ifdef CONFIG_ARM64_SW_TTBR0_PAN DEFINE(TSK_TI_TTBR0, offsetof(struct task_struct, thread_info.ttbr0)); +diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S +index 8556876c9109..d30ca1b304cd 100644 --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S -@@ -623,11 +623,16 @@ ENDPROC(el1_sync) +@@ -623,11 +623,16 @@ el1_irq: #ifdef CONFIG_PREEMPT ldr w24, [tsk, #TSK_TI_PREEMPT] // get preempt count @@ -97,7 +106,7 @@ Signed-off-by: Anders Roxell #endif #ifdef CONFIG_TRACE_IRQFLAGS bl trace_hardirqs_on -@@ -641,6 +646,7 @@ ENDPROC(el1_irq) +@@ -641,6 +646,7 @@ el1_preempt: 1: bl preempt_schedule_irq // irq en/disable is done inside ldr x0, [tsk, #TSK_TI_FLAGS] // get new tasks TI_FLAGS tbnz x0, #TIF_NEED_RESCHED, 1b // needs rescheduling? 
@@ -105,9 +114,11 @@ Signed-off-by: Anders Roxell ret x24 #endif +diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c +index 5dcc942906db..4fec251fe147 100644 --- a/arch/arm64/kernel/signal.c +++ b/arch/arm64/kernel/signal.c -@@ -926,7 +926,7 @@ asmlinkage void do_notify_resume(struct +@@ -926,7 +926,7 @@ asmlinkage void do_notify_resume(struct pt_regs *regs, /* Check valid user FS if needed */ addr_limit_user_check(); @@ -116,3 +127,6 @@ Signed-off-by: Anders Roxell /* Unmask Debug and SError for the next task */ local_daif_restore(DAIF_PROCCTX_NOIRQ); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0250-connector-cn_proc-Protect-send_msg-with-a-local-lock.patch b/kernel/patches-4.19.x-rt/0248-connector-cn_proc-Protect-send_msg-with-a-local-lock.patch similarity index 82% rename from kernel/patches-4.19.x-rt/0250-connector-cn_proc-Protect-send_msg-with-a-local-lock.patch rename to kernel/patches-4.19.x-rt/0248-connector-cn_proc-Protect-send_msg-with-a-local-lock.patch index f91af26e9..1d33f820d 100644 --- a/kernel/patches-4.19.x-rt/0250-connector-cn_proc-Protect-send_msg-with-a-local-lock.patch +++ b/kernel/patches-4.19.x-rt/0248-connector-cn_proc-Protect-send_msg-with-a-local-lock.patch @@ -1,7 +1,8 @@ +From 221c555911b760b4e7b8712860fe2368dd85d4e2 Mon Sep 17 00:00:00 2001 From: Mike Galbraith Date: Sun, 16 Oct 2016 05:11:54 +0200 -Subject: [PATCH] connector/cn_proc: Protect send_msg() with a local lock - on RT +Subject: [PATCH 248/269] connector/cn_proc: Protect send_msg() with a local + lock on RT |BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:931 |in_atomic(): 1, irqs_disabled(): 0, pid: 31807, name: sleep @@ -30,9 +31,11 @@ delivery") which is v4.7-rc6. 
Signed-off-by: Mike Galbraith Signed-off-by: Sebastian Andrzej Siewior --- - drivers/connector/cn_proc.c | 6 ++++-- + drivers/connector/cn_proc.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) +diff --git a/drivers/connector/cn_proc.c b/drivers/connector/cn_proc.c +index ad48fd52cb53..c5264b3ee0b0 100644 --- a/drivers/connector/cn_proc.c +++ b/drivers/connector/cn_proc.c @@ -32,6 +32,7 @@ @@ -43,7 +46,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* * Size of a cn_msg followed by a proc_event structure. Since the -@@ -54,10 +55,11 @@ static struct cb_id cn_proc_event_id = { +@@ -54,10 +55,11 @@ static struct cb_id cn_proc_event_id = { CN_IDX_PROC, CN_VAL_PROC }; /* proc_event_counts is used as the sequence number of the netlink message */ static DEFINE_PER_CPU(__u32, proc_event_counts) = { 0 }; @@ -56,7 +59,7 @@ Signed-off-by: Sebastian Andrzej Siewior msg->seq = __this_cpu_inc_return(proc_event_counts) - 1; ((struct proc_event *)msg->data)->cpu = smp_processor_id(); -@@ -70,7 +72,7 @@ static inline void send_msg(struct cn_ms +@@ -70,7 +72,7 @@ static inline void send_msg(struct cn_msg *msg) */ cn_netlink_send(msg, 0, CN_IDX_PROC, GFP_NOWAIT); @@ -65,3 +68,6 @@ Signed-off-by: Sebastian Andrzej Siewior } void proc_fork_connector(struct task_struct *task) +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0251-drivers-block-zram-Replace-bit-spinlocks-with-rtmute.patch b/kernel/patches-4.19.x-rt/0249-drivers-block-zram-Replace-bit-spinlocks-with-rtmute.patch similarity index 77% rename from kernel/patches-4.19.x-rt/0251-drivers-block-zram-Replace-bit-spinlocks-with-rtmute.patch rename to kernel/patches-4.19.x-rt/0249-drivers-block-zram-Replace-bit-spinlocks-with-rtmute.patch index 11ae91954..4330ccf73 100644 --- a/kernel/patches-4.19.x-rt/0251-drivers-block-zram-Replace-bit-spinlocks-with-rtmute.patch +++ b/kernel/patches-4.19.x-rt/0249-drivers-block-zram-Replace-bit-spinlocks-with-rtmute.patch @@ -1,7 +1,8 @@ +From 
666113236b467b8463b3a9f1976d21bd61e8f88e Mon Sep 17 00:00:00 2001 From: Mike Galbraith Date: Thu, 31 Mar 2016 04:08:28 +0200 -Subject: [PATCH] drivers/block/zram: Replace bit spinlocks with rtmutex - for -rt +Subject: [PATCH 249/269] drivers/block/zram: Replace bit spinlocks with + rtmutex for -rt They're nondeterministic, and lead to ___might_sleep() splats in -rt. OTOH, they're a lot less wasteful than an rtmutex per page. @@ -9,10 +10,12 @@ OTOH, they're a lot less wasteful than an rtmutex per page. Signed-off-by: Mike Galbraith Signed-off-by: Sebastian Andrzej Siewior --- - drivers/block/zram/zram_drv.c | 38 ++++++++++++++++++++++++++++++++++++++ - drivers/block/zram/zram_drv.h | 3 +++ + drivers/block/zram/zram_drv.c | 38 +++++++++++++++++++++++++++++++++++ + drivers/block/zram/zram_drv.h | 3 +++ 2 files changed, 41 insertions(+) +diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c +index a65505db09e5..f35eccc43558 100644 --- a/drivers/block/zram/zram_drv.c +++ b/drivers/block/zram/zram_drv.c @@ -53,6 +53,40 @@ static size_t huge_class_size; @@ -56,7 +59,7 @@ Signed-off-by: Sebastian Andrzej Siewior static int zram_slot_trylock(struct zram *zram, u32 index) { return bit_spin_trylock(ZRAM_LOCK, &zram->table[index].value); -@@ -67,6 +101,7 @@ static void zram_slot_unlock(struct zram +@@ -67,6 +101,7 @@ static void zram_slot_unlock(struct zram *zram, u32 index) { bit_spin_unlock(ZRAM_LOCK, &zram->table[index].value); } @@ -73,7 +76,7 @@ Signed-off-by: Sebastian Andrzej Siewior static void zram_meta_free(struct zram *zram, u64 disksize) { size_t num_pages = disksize >> PAGE_SHIFT; -@@ -930,6 +967,7 @@ static bool zram_meta_alloc(struct zram +@@ -930,6 +967,7 @@ static bool zram_meta_alloc(struct zram *zram, u64 disksize) if (!huge_class_size) huge_class_size = zs_huge_class_size(zram->mem_pool); @@ -81,6 +84,8 @@ Signed-off-by: Sebastian Andrzej Siewior return true; } +diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h 
+index d1095dfdffa8..144e91061df8 100644 --- a/drivers/block/zram/zram_drv.h +++ b/drivers/block/zram/zram_drv.h @@ -61,6 +61,9 @@ struct zram_table_entry { @@ -93,3 +98,6 @@ Signed-off-by: Sebastian Andrzej Siewior #ifdef CONFIG_ZRAM_MEMORY_TRACKING ktime_t ac_time; #endif +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0252-drivers-zram-Don-t-disable-preemption-in-zcomp_strea.patch b/kernel/patches-4.19.x-rt/0250-drivers-zram-Don-t-disable-preemption-in-zcomp_strea.patch similarity index 69% rename from kernel/patches-4.19.x-rt/0252-drivers-zram-Don-t-disable-preemption-in-zcomp_strea.patch rename to kernel/patches-4.19.x-rt/0250-drivers-zram-Don-t-disable-preemption-in-zcomp_strea.patch index e6ea828d3..d7f82ee52 100644 --- a/kernel/patches-4.19.x-rt/0252-drivers-zram-Don-t-disable-preemption-in-zcomp_strea.patch +++ b/kernel/patches-4.19.x-rt/0250-drivers-zram-Don-t-disable-preemption-in-zcomp_strea.patch @@ -1,6 +1,7 @@ +From 84d4ca0b3c56c0dbc248508726c5f69cbf14d0cc Mon Sep 17 00:00:00 2001 From: Mike Galbraith Date: Thu, 20 Oct 2016 11:15:22 +0200 -Subject: [PATCH] drivers/zram: Don't disable preemption in +Subject: [PATCH 250/269] drivers/zram: Don't disable preemption in zcomp_stream_get/put() In v4.7, the driver switched to percpu compression streams, disabling @@ -13,14 +14,16 @@ Signed-off-by: Mike Galbraith [bigeasy: get_locked_var() -> per zcomp_strm lock] Signed-off-by: Sebastian Andrzej Siewior --- - drivers/block/zram/zcomp.c | 12 ++++++++++-- - drivers/block/zram/zcomp.h | 1 + - drivers/block/zram/zram_drv.c | 5 +++-- + drivers/block/zram/zcomp.c | 12 ++++++++++-- + drivers/block/zram/zcomp.h | 1 + + drivers/block/zram/zram_drv.c | 5 +++-- 3 files changed, 14 insertions(+), 4 deletions(-) +diff --git a/drivers/block/zram/zcomp.c b/drivers/block/zram/zcomp.c +index 4ed0a78fdc09..dd65a27ae2cc 100644 --- a/drivers/block/zram/zcomp.c +++ b/drivers/block/zram/zcomp.c -@@ -116,12 +116,19 @@ ssize_t zcomp_available_show(const char +@@ -116,12 +116,19 
@@ ssize_t zcomp_available_show(const char *comp, char *buf) struct zcomp_strm *zcomp_stream_get(struct zcomp *comp) { @@ -42,7 +45,7 @@ Signed-off-by: Sebastian Andrzej Siewior } int zcomp_compress(struct zcomp_strm *zstrm, -@@ -171,6 +178,7 @@ int zcomp_cpu_up_prepare(unsigned int cp +@@ -171,6 +178,7 @@ int zcomp_cpu_up_prepare(unsigned int cpu, struct hlist_node *node) pr_err("Can't allocate a compression stream\n"); return -ENOMEM; } @@ -50,6 +53,8 @@ Signed-off-by: Sebastian Andrzej Siewior *per_cpu_ptr(comp->stream, cpu) = zstrm; return 0; } +diff --git a/drivers/block/zram/zcomp.h b/drivers/block/zram/zcomp.h +index 41c1002a7d7d..d424eafcbf8e 100644 --- a/drivers/block/zram/zcomp.h +++ b/drivers/block/zram/zcomp.h @@ -14,6 +14,7 @@ struct zcomp_strm { @@ -60,9 +65,11 @@ Signed-off-by: Sebastian Andrzej Siewior }; /* dynamic per-device compression frontend */ +diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c +index f35eccc43558..b2a347b8b517 100644 --- a/drivers/block/zram/zram_drv.c +++ b/drivers/block/zram/zram_drv.c -@@ -1026,6 +1026,7 @@ static int __zram_bvec_read(struct zram +@@ -1026,6 +1026,7 @@ static int __zram_bvec_read(struct zram *zram, struct page *page, u32 index, unsigned long handle; unsigned int size; void *src, *dst; @@ -70,7 +77,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (zram_wb_enabled(zram)) { zram_slot_lock(zram, index); -@@ -1060,6 +1061,7 @@ static int __zram_bvec_read(struct zram +@@ -1060,6 +1061,7 @@ static int __zram_bvec_read(struct zram *zram, struct page *page, u32 index, size = zram_get_obj_size(zram, index); @@ -78,7 +85,7 @@ Signed-off-by: Sebastian Andrzej Siewior src = zs_map_object(zram->mem_pool, handle, ZS_MM_RO); if (size == PAGE_SIZE) { dst = kmap_atomic(page); -@@ -1067,14 +1069,13 @@ static int __zram_bvec_read(struct zram +@@ -1067,14 +1069,13 @@ static int __zram_bvec_read(struct zram *zram, struct page *page, u32 index, kunmap_atomic(dst); ret = 0; } else { @@ -94,3 +101,6 @@ 
Signed-off-by: Sebastian Andrzej Siewior zram_slot_unlock(zram, index); /* Should NEVER happen. Return bio error if it does. */ +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0253-drivers-zram-fix-zcomp_stream_get-smp_processor_id-u.patch b/kernel/patches-4.19.x-rt/0251-drivers-zram-fix-zcomp_stream_get-smp_processor_id-u.patch similarity index 65% rename from kernel/patches-4.19.x-rt/0253-drivers-zram-fix-zcomp_stream_get-smp_processor_id-u.patch rename to kernel/patches-4.19.x-rt/0251-drivers-zram-fix-zcomp_stream_get-smp_processor_id-u.patch index fde000b05..51bd20e91 100644 --- a/kernel/patches-4.19.x-rt/0253-drivers-zram-fix-zcomp_stream_get-smp_processor_id-u.patch +++ b/kernel/patches-4.19.x-rt/0251-drivers-zram-fix-zcomp_stream_get-smp_processor_id-u.patch @@ -1,7 +1,8 @@ +From 24fddbe29940c9217a8e2f5e9443ca29f941281a Mon Sep 17 00:00:00 2001 From: Mike Galbraith Date: Wed, 23 Aug 2017 11:57:29 +0200 -Subject: [PATCH] drivers/zram: fix zcomp_stream_get() smp_processor_id() use - in preemptible code +Subject: [PATCH 251/269] drivers/zram: fix zcomp_stream_get() + smp_processor_id() use in preemptible code Use get_local_ptr() instead this_cpu_ptr() to avoid a warning regarding smp_processor_id() in preemptible code. 
@@ -13,12 +14,14 @@ Cc: stable-rt@vger.kernel.org Signed-off-by: Mike Galbraith Signed-off-by: Sebastian Andrzej Siewior --- - drivers/block/zram/zcomp.c | 3 ++- + drivers/block/zram/zcomp.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) +diff --git a/drivers/block/zram/zcomp.c b/drivers/block/zram/zcomp.c +index dd65a27ae2cc..eece02262000 100644 --- a/drivers/block/zram/zcomp.c +++ b/drivers/block/zram/zcomp.c -@@ -118,7 +118,7 @@ struct zcomp_strm *zcomp_stream_get(stru +@@ -118,7 +118,7 @@ struct zcomp_strm *zcomp_stream_get(struct zcomp *comp) { struct zcomp_strm *zstrm; @@ -27,7 +30,7 @@ Signed-off-by: Sebastian Andrzej Siewior spin_lock(&zstrm->zcomp_lock); return zstrm; } -@@ -129,6 +129,7 @@ void zcomp_stream_put(struct zcomp *comp +@@ -129,6 +129,7 @@ void zcomp_stream_put(struct zcomp *comp) zstrm = *this_cpu_ptr(comp->stream); spin_unlock(&zstrm->zcomp_lock); @@ -35,3 +38,6 @@ Signed-off-by: Sebastian Andrzej Siewior } int zcomp_compress(struct zcomp_strm *zstrm, +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0254-tpm_tis-fix-stall-after-iowrite-s.patch b/kernel/patches-4.19.x-rt/0252-tpm_tis-fix-stall-after-iowrite-s.patch similarity index 78% rename from kernel/patches-4.19.x-rt/0254-tpm_tis-fix-stall-after-iowrite-s.patch rename to kernel/patches-4.19.x-rt/0252-tpm_tis-fix-stall-after-iowrite-s.patch index 012162724..6338f0652 100644 --- a/kernel/patches-4.19.x-rt/0254-tpm_tis-fix-stall-after-iowrite-s.patch +++ b/kernel/patches-4.19.x-rt/0252-tpm_tis-fix-stall-after-iowrite-s.patch @@ -1,6 +1,7 @@ +From e17cfb4da190f56567819460296b640854ef8af0 Mon Sep 17 00:00:00 2001 From: Haris Okanovic Date: Tue, 15 Aug 2017 15:13:08 -0500 -Subject: [PATCH] tpm_tis: fix stall after iowrite*()s +Subject: [PATCH 252/269] tpm_tis: fix stall after iowrite*()s ioread8() operations to TPM MMIO addresses can stall the cpu when immediately following a sequence of iowrite*()'s to the same region. 
@@ -20,12 +21,14 @@ amortize the cost of flushing data to chip across multiple instructions. Signed-off-by: Haris Okanovic Signed-off-by: Sebastian Andrzej Siewior --- - drivers/char/tpm/tpm_tis.c | 29 +++++++++++++++++++++++++++-- + drivers/char/tpm/tpm_tis.c | 29 +++++++++++++++++++++++++++-- 1 file changed, 27 insertions(+), 2 deletions(-) +diff --git a/drivers/char/tpm/tpm_tis.c b/drivers/char/tpm/tpm_tis.c +index f08949a5f678..9fefcfcae593 100644 --- a/drivers/char/tpm/tpm_tis.c +++ b/drivers/char/tpm/tpm_tis.c -@@ -53,6 +53,31 @@ static inline struct tpm_tis_tcg_phy *to +@@ -53,6 +53,31 @@ static inline struct tpm_tis_tcg_phy *to_tpm_tis_tcg_phy(struct tpm_tis_data *da return container_of(data, struct tpm_tis_tcg_phy, priv); } @@ -57,7 +60,7 @@ Signed-off-by: Sebastian Andrzej Siewior static bool interrupts = true; module_param(interrupts, bool, 0444); MODULE_PARM_DESC(interrupts, "Enable interrupts"); -@@ -150,7 +175,7 @@ static int tpm_tcg_write_bytes(struct tp +@@ -150,7 +175,7 @@ static int tpm_tcg_write_bytes(struct tpm_tis_data *data, u32 addr, u16 len, struct tpm_tis_tcg_phy *phy = to_tpm_tis_tcg_phy(data); while (len--) @@ -66,7 +69,7 @@ Signed-off-by: Sebastian Andrzej Siewior return 0; } -@@ -177,7 +202,7 @@ static int tpm_tcg_write32(struct tpm_ti +@@ -177,7 +202,7 @@ static int tpm_tcg_write32(struct tpm_tis_data *data, u32 addr, u32 value) { struct tpm_tis_tcg_phy *phy = to_tpm_tis_tcg_phy(data); @@ -75,3 +78,6 @@ Signed-off-by: Sebastian Andrzej Siewior return 0; } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0255-watchdog-prevent-deferral-of-watchdogd-wakeup-on-RT.patch b/kernel/patches-4.19.x-rt/0253-watchdog-prevent-deferral-of-watchdogd-wakeup-on-RT.patch similarity index 78% rename from kernel/patches-4.19.x-rt/0255-watchdog-prevent-deferral-of-watchdogd-wakeup-on-RT.patch rename to kernel/patches-4.19.x-rt/0253-watchdog-prevent-deferral-of-watchdogd-wakeup-on-RT.patch index 2e954cf7d..027c4bc96 100644 --- 
a/kernel/patches-4.19.x-rt/0255-watchdog-prevent-deferral-of-watchdogd-wakeup-on-RT.patch +++ b/kernel/patches-4.19.x-rt/0253-watchdog-prevent-deferral-of-watchdogd-wakeup-on-RT.patch @@ -1,6 +1,7 @@ +From 2e143bef6376db39d9e876eae3e3f1f718ff0b23 Mon Sep 17 00:00:00 2001 From: Julia Cartwright Date: Fri, 28 Sep 2018 21:03:51 +0000 -Subject: [PATCH] watchdog: prevent deferral of watchdogd wakeup on RT +Subject: [PATCH 253/269] watchdog: prevent deferral of watchdogd wakeup on RT When PREEMPT_RT_FULL is enabled, all hrtimer expiry functions are deferred for execution into the context of ktimersoftd unless otherwise @@ -31,12 +32,14 @@ Acked-by: Guenter Roeck [bigeasy: use only HRTIMER_MODE_REL_HARD] Signed-off-by: Sebastian Andrzej Siewior --- - drivers/watchdog/watchdog_dev.c | 8 ++++---- + drivers/watchdog/watchdog_dev.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) +diff --git a/drivers/watchdog/watchdog_dev.c b/drivers/watchdog/watchdog_dev.c +index ffbdc4642ea5..84f75b5045f6 100644 --- a/drivers/watchdog/watchdog_dev.c +++ b/drivers/watchdog/watchdog_dev.c -@@ -147,7 +147,7 @@ static inline void watchdog_update_worke +@@ -147,7 +147,7 @@ static inline void watchdog_update_worker(struct watchdog_device *wdd) ktime_t t = watchdog_next_keepalive(wdd); if (t > 0) @@ -45,7 +48,7 @@ Signed-off-by: Sebastian Andrzej Siewior } else { hrtimer_cancel(&wd_data->timer); } -@@ -166,7 +166,7 @@ static int __watchdog_ping(struct watchd +@@ -166,7 +166,7 @@ static int __watchdog_ping(struct watchdog_device *wdd) if (ktime_after(earliest_keepalive, now)) { hrtimer_start(&wd_data->timer, ktime_sub(earliest_keepalive, now), @@ -54,7 +57,7 @@ Signed-off-by: Sebastian Andrzej Siewior return 0; } -@@ -945,7 +945,7 @@ static int watchdog_cdev_register(struct +@@ -945,7 +945,7 @@ static int watchdog_cdev_register(struct watchdog_device *wdd, dev_t devno) return -ENODEV; kthread_init_work(&wd_data->work, watchdog_ping_work); @@ -63,7 +66,7 @@ Signed-off-by: Sebastian 
Andrzej Siewior wd_data->timer.function = watchdog_timer_expired; if (wdd->id == 0) { -@@ -992,7 +992,7 @@ static int watchdog_cdev_register(struct +@@ -992,7 +992,7 @@ static int watchdog_cdev_register(struct watchdog_device *wdd, dev_t devno) __module_get(wdd->ops->owner); kref_get(&wd_data->kref); if (handle_boot_enabled) @@ -72,3 +75,6 @@ Signed-off-by: Sebastian Andrzej Siewior else pr_info("watchdog%d running and kernel based pre-userspace handler disabled\n", wdd->id); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0256-drmradeoni915_Use_preempt_disableenable_rt()_where_recommended.patch b/kernel/patches-4.19.x-rt/0254-drm-radeon-i915-Use-preempt_disable-enable_rt-where-.patch similarity index 60% rename from kernel/patches-4.19.x-rt/0256-drmradeoni915_Use_preempt_disableenable_rt()_where_recommended.patch rename to kernel/patches-4.19.x-rt/0254-drm-radeon-i915-Use-preempt_disable-enable_rt-where-.patch index 208b03f55..9ee35e474 100644 --- a/kernel/patches-4.19.x-rt/0256-drmradeoni915_Use_preempt_disableenable_rt()_where_recommended.patch +++ b/kernel/patches-4.19.x-rt/0254-drm-radeon-i915-Use-preempt_disable-enable_rt-where-.patch @@ -1,6 +1,8 @@ -Subject: drm,radeon,i915: Use preempt_disable/enable_rt() where recommended +From 14ab946c30ebc65a97dd2a3a68f5f1bb0bfb8c7a Mon Sep 17 00:00:00 2001 From: Mike Galbraith Date: Sat, 27 Feb 2016 08:09:11 +0100 +Subject: [PATCH 254/269] drm,radeon,i915: Use preempt_disable/enable_rt() + where recommended DRM folks identified the spots, so use them. 
@@ -9,13 +11,15 @@ Cc: Sebastian Andrzej Siewior Cc: linux-rt-users Signed-off-by: Thomas Gleixner --- - drivers/gpu/drm/i915/i915_irq.c | 2 ++ - drivers/gpu/drm/radeon/radeon_display.c | 2 ++ + drivers/gpu/drm/i915/i915_irq.c | 2 ++ + drivers/gpu/drm/radeon/radeon_display.c | 2 ++ 2 files changed, 4 insertions(+) +diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c +index 29877969310d..f65817c51c2a 100644 --- a/drivers/gpu/drm/i915/i915_irq.c +++ b/drivers/gpu/drm/i915/i915_irq.c -@@ -1025,6 +1025,7 @@ static bool i915_get_crtc_scanoutpos(str +@@ -1025,6 +1025,7 @@ static bool i915_get_crtc_scanoutpos(struct drm_device *dev, unsigned int pipe, spin_lock_irqsave(&dev_priv->uncore.lock, irqflags); /* preempt_disable_rt() should go right here in PREEMPT_RT patchset. */ @@ -23,7 +27,7 @@ Signed-off-by: Thomas Gleixner /* Get optional system timestamp before query. */ if (stime) -@@ -1076,6 +1077,7 @@ static bool i915_get_crtc_scanoutpos(str +@@ -1076,6 +1077,7 @@ static bool i915_get_crtc_scanoutpos(struct drm_device *dev, unsigned int pipe, *etime = ktime_get(); /* preempt_enable_rt() should go right here in PREEMPT_RT patchset. */ @@ -31,9 +35,11 @@ Signed-off-by: Thomas Gleixner spin_unlock_irqrestore(&dev_priv->uncore.lock, irqflags); +diff --git a/drivers/gpu/drm/radeon/radeon_display.c b/drivers/gpu/drm/radeon/radeon_display.c +index 9d3ac8b981da..bde228c7739a 100644 --- a/drivers/gpu/drm/radeon/radeon_display.c +++ b/drivers/gpu/drm/radeon/radeon_display.c -@@ -1813,6 +1813,7 @@ int radeon_get_crtc_scanoutpos(struct dr +@@ -1813,6 +1813,7 @@ int radeon_get_crtc_scanoutpos(struct drm_device *dev, unsigned int pipe, struct radeon_device *rdev = dev->dev_private; /* preempt_disable_rt() should go right here in PREEMPT_RT patchset. */ @@ -41,7 +47,7 @@ Signed-off-by: Thomas Gleixner /* Get optional system timestamp before query. 
*/ if (stime) -@@ -1905,6 +1906,7 @@ int radeon_get_crtc_scanoutpos(struct dr +@@ -1905,6 +1906,7 @@ int radeon_get_crtc_scanoutpos(struct drm_device *dev, unsigned int pipe, *etime = ktime_get(); /* preempt_enable_rt() should go right here in PREEMPT_RT patchset. */ @@ -49,3 +55,6 @@ Signed-off-by: Thomas Gleixner /* Decode into vertical and horizontal scanout position. */ *vpos = position & 0x1fff; +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0257-drmi915_Use_local_lockunlock_irq()_in_intel_pipe_update_startend().patch b/kernel/patches-4.19.x-rt/0255-drm-i915-Use-local_lock-unlock_irq-in-intel_pipe_upd.patch similarity index 85% rename from kernel/patches-4.19.x-rt/0257-drmi915_Use_local_lockunlock_irq()_in_intel_pipe_update_startend().patch rename to kernel/patches-4.19.x-rt/0255-drm-i915-Use-local_lock-unlock_irq-in-intel_pipe_upd.patch index 21ac78071..c22488c0b 100644 --- a/kernel/patches-4.19.x-rt/0257-drmi915_Use_local_lockunlock_irq()_in_intel_pipe_update_startend().patch +++ b/kernel/patches-4.19.x-rt/0255-drm-i915-Use-local_lock-unlock_irq-in-intel_pipe_upd.patch @@ -1,7 +1,8 @@ -Subject: drm,i915: Use local_lock/unlock_irq() in intel_pipe_update_start/end() +From fa836f911e7122a32cf1d934a9736497b5dee45d Mon Sep 17 00:00:00 2001 From: Mike Galbraith Date: Sat, 27 Feb 2016 09:01:42 +0100 - +Subject: [PATCH 255/269] drm,i915: Use local_lock/unlock_irq() in + intel_pipe_update_start/end() [ 8.014039] BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:918 [ 8.014041] in_atomic(): 0, irqs_disabled(): 1, pid: 78, name: kworker/u4:4 @@ -56,9 +57,11 @@ Cc: Sebastian Andrzej Siewior Cc: linux-rt-users Signed-off-by: Thomas Gleixner --- - drivers/gpu/drm/i915/intel_sprite.c | 13 ++++++++----- + drivers/gpu/drm/i915/intel_sprite.c | 13 ++++++++----- 1 file changed, 8 insertions(+), 5 deletions(-) +diff --git a/drivers/gpu/drm/i915/intel_sprite.c b/drivers/gpu/drm/i915/intel_sprite.c +index f7026e887fa9..07e4ddebdd80 100644 --- 
a/drivers/gpu/drm/i915/intel_sprite.c +++ b/drivers/gpu/drm/i915/intel_sprite.c @@ -36,6 +36,7 @@ @@ -69,7 +72,7 @@ Signed-off-by: Thomas Gleixner #include "intel_drv.h" #include "intel_frontbuffer.h" #include -@@ -60,6 +61,8 @@ int intel_usecs_to_scanlines(const struc +@@ -60,6 +61,8 @@ int intel_usecs_to_scanlines(const struct drm_display_mode *adjusted_mode, #define VBLANK_EVASION_TIME_US 100 #endif @@ -78,7 +81,7 @@ Signed-off-by: Thomas Gleixner /** * intel_pipe_update_start() - start update of a set of display registers * @new_crtc_state: the new crtc state -@@ -107,7 +110,7 @@ void intel_pipe_update_start(const struc +@@ -107,7 +110,7 @@ void intel_pipe_update_start(const struct intel_crtc_state *new_crtc_state) if (intel_psr_wait_for_idle(new_crtc_state)) DRM_ERROR("PSR idle timed out, atomic update may fail\n"); @@ -87,7 +90,7 @@ Signed-off-by: Thomas Gleixner crtc->debug.min_vbl = min; crtc->debug.max_vbl = max; -@@ -131,11 +134,11 @@ void intel_pipe_update_start(const struc +@@ -131,11 +134,11 @@ void intel_pipe_update_start(const struct intel_crtc_state *new_crtc_state) break; } @@ -101,7 +104,7 @@ Signed-off-by: Thomas Gleixner } finish_wait(wq, &wait); -@@ -168,7 +171,7 @@ void intel_pipe_update_start(const struc +@@ -168,7 +171,7 @@ void intel_pipe_update_start(const struct intel_crtc_state *new_crtc_state) return; irq_disable: @@ -110,7 +113,7 @@ Signed-off-by: Thomas Gleixner } /** -@@ -204,7 +207,7 @@ void intel_pipe_update_end(struct intel_ +@@ -204,7 +207,7 @@ void intel_pipe_update_end(struct intel_crtc_state *new_crtc_state) new_crtc_state->base.event = NULL; } @@ -119,3 +122,6 @@ Signed-off-by: Thomas Gleixner if (intel_vgpu_active(dev_priv)) return; +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0258-drm-i915-disable-tracing-on-RT.patch b/kernel/patches-4.19.x-rt/0256-drm-i915-disable-tracing-on-RT.patch similarity index 80% rename from kernel/patches-4.19.x-rt/0258-drm-i915-disable-tracing-on-RT.patch rename to 
kernel/patches-4.19.x-rt/0256-drm-i915-disable-tracing-on-RT.patch index 634ce8c06..729dceb0a 100644 --- a/kernel/patches-4.19.x-rt/0258-drm-i915-disable-tracing-on-RT.patch +++ b/kernel/patches-4.19.x-rt/0256-drm-i915-disable-tracing-on-RT.patch @@ -1,6 +1,7 @@ +From 0d087f448e0154cd673da85a57d305bb17f43f48 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Thu, 6 Dec 2018 09:52:20 +0100 -Subject: [PATCH] drm/i915: disable tracing on -RT +Subject: [PATCH 256/269] drm/i915: disable tracing on -RT Luca Abeni reported this: | BUG: scheduling while atomic: kworker/u8:2/15203/0x00000003 @@ -22,9 +23,11 @@ Cc: stable-rt@vger.kernel.org Reported-by: Luca Abeni Signed-off-by: Sebastian Andrzej Siewior --- - drivers/gpu/drm/i915/i915_trace.h | 4 ++++ + drivers/gpu/drm/i915/i915_trace.h | 4 ++++ 1 file changed, 4 insertions(+) +diff --git a/drivers/gpu/drm/i915/i915_trace.h b/drivers/gpu/drm/i915/i915_trace.h +index b50c6b829715..cc54ec0ef75c 100644 --- a/drivers/gpu/drm/i915/i915_trace.h +++ b/drivers/gpu/drm/i915/i915_trace.h @@ -2,6 +2,10 @@ @@ -38,3 +41,6 @@ Signed-off-by: Sebastian Andrzej Siewior #include #include #include +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0259-drm-i915-skip-DRM_I915_LOW_LEVEL_TRACEPOINTS-with-NO.patch b/kernel/patches-4.19.x-rt/0257-drm-i915-skip-DRM_I915_LOW_LEVEL_TRACEPOINTS-with-NO.patch similarity index 66% rename from kernel/patches-4.19.x-rt/0259-drm-i915-skip-DRM_I915_LOW_LEVEL_TRACEPOINTS-with-NO.patch rename to kernel/patches-4.19.x-rt/0257-drm-i915-skip-DRM_I915_LOW_LEVEL_TRACEPOINTS-with-NO.patch index c660aeb73..8640dd0ae 100644 --- a/kernel/patches-4.19.x-rt/0259-drm-i915-skip-DRM_I915_LOW_LEVEL_TRACEPOINTS-with-NO.patch +++ b/kernel/patches-4.19.x-rt/0257-drm-i915-skip-DRM_I915_LOW_LEVEL_TRACEPOINTS-with-NO.patch @@ -1,6 +1,8 @@ +From 1e0d82558c60f1e889452550fe5766802e54c9bc Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Wed, 19 Dec 2018 10:47:02 +0100 -Subject: [PATCH] drm/i915: skip 
DRM_I915_LOW_LEVEL_TRACEPOINTS with NOTRACE +Subject: [PATCH 257/269] drm/i915: skip DRM_I915_LOW_LEVEL_TRACEPOINTS with + NOTRACE The order of the header files is important. If this header file is included after tracepoint.h was included then the NOTRACE here becomes a @@ -9,12 +11,14 @@ behind DRM_I915_LOW_LEVEL_TRACEPOINTS. Signed-off-by: Sebastian Andrzej Siewior --- - drivers/gpu/drm/i915/i915_trace.h | 2 +- + drivers/gpu/drm/i915/i915_trace.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) +diff --git a/drivers/gpu/drm/i915/i915_trace.h b/drivers/gpu/drm/i915/i915_trace.h +index cc54ec0ef75c..33028d8f470e 100644 --- a/drivers/gpu/drm/i915/i915_trace.h +++ b/drivers/gpu/drm/i915/i915_trace.h -@@ -683,7 +683,7 @@ DEFINE_EVENT(i915_request, i915_request_ +@@ -683,7 +683,7 @@ DEFINE_EVENT(i915_request, i915_request_add, TP_ARGS(rq) ); @@ -23,3 +27,6 @@ Signed-off-by: Sebastian Andrzej Siewior DEFINE_EVENT(i915_request, i915_request_submit, TP_PROTO(struct i915_request *rq), TP_ARGS(rq) +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0260-cgroups-use-simple-wait-in-css_release.patch b/kernel/patches-4.19.x-rt/0258-cgroups-use-simple-wait-in-css_release.patch similarity index 81% rename from kernel/patches-4.19.x-rt/0260-cgroups-use-simple-wait-in-css_release.patch rename to kernel/patches-4.19.x-rt/0258-cgroups-use-simple-wait-in-css_release.patch index 8963a3e86..bc0b65a05 100644 --- a/kernel/patches-4.19.x-rt/0260-cgroups-use-simple-wait-in-css_release.patch +++ b/kernel/patches-4.19.x-rt/0258-cgroups-use-simple-wait-in-css_release.patch @@ -1,6 +1,7 @@ +From 12874386b3141dd4afa5b6e4aee17e99f529f37e Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Fri, 13 Feb 2015 15:52:24 +0100 -Subject: cgroups: use simple wait in css_release() +Subject: [PATCH 258/269] cgroups: use simple wait in css_release() To avoid: |BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:914 @@ -28,10 +29,12 @@ To avoid: Signed-off-by: 
Sebastian Andrzej Siewior --- - include/linux/cgroup-defs.h | 2 ++ - kernel/cgroup/cgroup.c | 9 +++++---- + include/linux/cgroup-defs.h | 2 ++ + kernel/cgroup/cgroup.c | 9 +++++---- 2 files changed, 7 insertions(+), 4 deletions(-) +diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h +index 6002275937f5..ba64953d53d9 100644 --- a/include/linux/cgroup-defs.h +++ b/include/linux/cgroup-defs.h @@ -20,6 +20,7 @@ @@ -50,9 +53,11 @@ Signed-off-by: Sebastian Andrzej Siewior struct rcu_work destroy_rwork; /* +diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c +index 63dae7e0ccae..4377e0fd8827 100644 --- a/kernel/cgroup/cgroup.c +++ b/kernel/cgroup/cgroup.c -@@ -4625,10 +4625,10 @@ static void css_free_rwork_fn(struct wor +@@ -4628,10 +4628,10 @@ static void css_free_rwork_fn(struct work_struct *work) } } @@ -65,7 +70,7 @@ Signed-off-by: Sebastian Andrzej Siewior struct cgroup_subsys *ss = css->ss; struct cgroup *cgrp = css->cgroup; -@@ -4688,8 +4688,8 @@ static void css_release(struct percpu_re +@@ -4691,8 +4691,8 @@ static void css_release(struct percpu_ref *ref) struct cgroup_subsys_state *css = container_of(ref, struct cgroup_subsys_state, refcnt); @@ -76,7 +81,7 @@ Signed-off-by: Sebastian Andrzej Siewior } static void init_and_link_css(struct cgroup_subsys_state *css, -@@ -5411,6 +5411,7 @@ static int __init cgroup_wq_init(void) +@@ -5414,6 +5414,7 @@ static int __init cgroup_wq_init(void) */ cgroup_destroy_wq = alloc_workqueue("cgroup_destroy", 0, 1); BUG_ON(!cgroup_destroy_wq); @@ -84,3 +89,6 @@ Signed-off-by: Sebastian Andrzej Siewior return 0; } core_initcall(cgroup_wq_init); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0261-cpuset-Convert-callback_lock-to-raw_spinlock_t.patch b/kernel/patches-4.19.x-rt/0259-cpuset-Convert-callback_lock-to-raw_spinlock_t.patch similarity index 87% rename from kernel/patches-4.19.x-rt/0261-cpuset-Convert-callback_lock-to-raw_spinlock_t.patch rename to 
kernel/patches-4.19.x-rt/0259-cpuset-Convert-callback_lock-to-raw_spinlock_t.patch index db1c0e36e..82c41f985 100644 --- a/kernel/patches-4.19.x-rt/0261-cpuset-Convert-callback_lock-to-raw_spinlock_t.patch +++ b/kernel/patches-4.19.x-rt/0259-cpuset-Convert-callback_lock-to-raw_spinlock_t.patch @@ -1,6 +1,7 @@ +From 3bf07cd523e1ceabae1252c9c286b5fa88608994 Mon Sep 17 00:00:00 2001 From: Mike Galbraith Date: Sun, 8 Jan 2017 09:32:25 +0100 -Subject: [PATCH] cpuset: Convert callback_lock to raw_spinlock_t +Subject: [PATCH 259/269] cpuset: Convert callback_lock to raw_spinlock_t The two commits below add up to a cpuset might_sleep() splat for RT: @@ -45,9 +46,11 @@ Cc: stable-rt@vger.kernel.org Signed-off-by: Mike Galbraith Signed-off-by: Sebastian Andrzej Siewior --- - kernel/cgroup/cpuset.c | 66 ++++++++++++++++++++++++------------------------- + kernel/cgroup/cpuset.c | 66 +++++++++++++++++++++--------------------- 1 file changed, 33 insertions(+), 33 deletions(-) +diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c +index ef085d84a940..3e5d90076368 100644 --- a/kernel/cgroup/cpuset.c +++ b/kernel/cgroup/cpuset.c @@ -288,7 +288,7 @@ static struct cpuset top_cpuset = { @@ -59,7 +62,7 @@ Signed-off-by: Sebastian Andrzej Siewior static struct workqueue_struct *cpuset_migrate_mm_wq; -@@ -922,9 +922,9 @@ static void update_cpumasks_hier(struct +@@ -922,9 +922,9 @@ static void update_cpumasks_hier(struct cpuset *cs, struct cpumask *new_cpus) continue; rcu_read_unlock(); @@ -71,7 +74,7 @@ Signed-off-by: Sebastian Andrzej Siewior WARN_ON(!is_in_v2_mode() && !cpumask_equal(cp->cpus_allowed, cp->effective_cpus)); -@@ -989,9 +989,9 @@ static int update_cpumask(struct cpuset +@@ -989,9 +989,9 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs, if (retval < 0) return retval; @@ -83,7 +86,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* use trialcs->cpus_allowed as a temp variable */ update_cpumasks_hier(cs, trialcs->cpus_allowed); -@@ -1175,9 +1175,9 
@@ static void update_nodemasks_hier(struct +@@ -1175,9 +1175,9 @@ static void update_nodemasks_hier(struct cpuset *cs, nodemask_t *new_mems) continue; rcu_read_unlock(); @@ -95,7 +98,7 @@ Signed-off-by: Sebastian Andrzej Siewior WARN_ON(!is_in_v2_mode() && !nodes_equal(cp->mems_allowed, cp->effective_mems)); -@@ -1245,9 +1245,9 @@ static int update_nodemask(struct cpuset +@@ -1245,9 +1245,9 @@ static int update_nodemask(struct cpuset *cs, struct cpuset *trialcs, if (retval < 0) goto done; @@ -107,7 +110,7 @@ Signed-off-by: Sebastian Andrzej Siewior /* use trialcs->mems_allowed as a temp variable */ update_nodemasks_hier(cs, &trialcs->mems_allowed); -@@ -1338,9 +1338,9 @@ static int update_flag(cpuset_flagbits_t +@@ -1338,9 +1338,9 @@ static int update_flag(cpuset_flagbits_t bit, struct cpuset *cs, spread_flag_changed = ((is_spread_slab(cs) != is_spread_slab(trialcs)) || (is_spread_page(cs) != is_spread_page(trialcs))); @@ -119,7 +122,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (!cpumask_empty(trialcs->cpus_allowed) && balance_flag_changed) rebuild_sched_domains_locked(); -@@ -1755,7 +1755,7 @@ static int cpuset_common_seq_show(struct +@@ -1755,7 +1755,7 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v) cpuset_filetype_t type = seq_cft(sf)->private; int ret = 0; @@ -128,7 +131,7 @@ Signed-off-by: Sebastian Andrzej Siewior switch (type) { case FILE_CPULIST: -@@ -1774,7 +1774,7 @@ static int cpuset_common_seq_show(struct +@@ -1774,7 +1774,7 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v) ret = -EINVAL; } @@ -137,7 +140,7 @@ Signed-off-by: Sebastian Andrzej Siewior return ret; } -@@ -1989,12 +1989,12 @@ static int cpuset_css_online(struct cgro +@@ -1989,12 +1989,12 @@ static int cpuset_css_online(struct cgroup_subsys_state *css) cpuset_inc(); @@ -152,7 +155,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (!test_bit(CGRP_CPUSET_CLONE_CHILDREN, &css->cgroup->flags)) goto out_unlock; -@@ -2021,12 +2021,12 @@ static int 
cpuset_css_online(struct cgro +@@ -2021,12 +2021,12 @@ static int cpuset_css_online(struct cgroup_subsys_state *css) } rcu_read_unlock(); @@ -167,7 +170,7 @@ Signed-off-by: Sebastian Andrzej Siewior out_unlock: mutex_unlock(&cpuset_mutex); return 0; -@@ -2065,7 +2065,7 @@ static void cpuset_css_free(struct cgrou +@@ -2065,7 +2065,7 @@ static void cpuset_css_free(struct cgroup_subsys_state *css) static void cpuset_bind(struct cgroup_subsys_state *root_css) { mutex_lock(&cpuset_mutex); @@ -176,7 +179,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (is_in_v2_mode()) { cpumask_copy(top_cpuset.cpus_allowed, cpu_possible_mask); -@@ -2076,7 +2076,7 @@ static void cpuset_bind(struct cgroup_su +@@ -2076,7 +2076,7 @@ static void cpuset_bind(struct cgroup_subsys_state *root_css) top_cpuset.mems_allowed = top_cpuset.effective_mems; } @@ -185,7 +188,7 @@ Signed-off-by: Sebastian Andrzej Siewior mutex_unlock(&cpuset_mutex); } -@@ -2174,12 +2174,12 @@ hotplug_update_tasks_legacy(struct cpuse +@@ -2174,12 +2174,12 @@ hotplug_update_tasks_legacy(struct cpuset *cs, { bool is_empty; @@ -213,7 +216,7 @@ Signed-off-by: Sebastian Andrzej Siewior if (cpus_updated) update_tasks_cpumask(cs); -@@ -2312,21 +2312,21 @@ static void cpuset_hotplug_workfn(struct +@@ -2312,21 +2312,21 @@ static void cpuset_hotplug_workfn(struct work_struct *work) /* synchronize cpus_allowed to cpu_active_mask */ if (cpus_updated) { @@ -239,7 +242,7 @@ Signed-off-by: Sebastian Andrzej Siewior update_tasks_nodemask(&top_cpuset); } -@@ -2425,11 +2425,11 @@ void cpuset_cpus_allowed(struct task_str +@@ -2425,11 +2425,11 @@ void cpuset_cpus_allowed(struct task_struct *tsk, struct cpumask *pmask) { unsigned long flags; @@ -253,7 +256,7 @@ Signed-off-by: Sebastian Andrzej Siewior } void cpuset_cpus_allowed_fallback(struct task_struct *tsk) -@@ -2477,11 +2477,11 @@ nodemask_t cpuset_mems_allowed(struct ta +@@ -2477,11 +2477,11 @@ nodemask_t cpuset_mems_allowed(struct task_struct *tsk) nodemask_t mask; unsigned long 
flags; @@ -267,7 +270,7 @@ Signed-off-by: Sebastian Andrzej Siewior return mask; } -@@ -2573,14 +2573,14 @@ bool __cpuset_node_allowed(int node, gfp +@@ -2573,14 +2573,14 @@ bool __cpuset_node_allowed(int node, gfp_t gfp_mask) return true; /* Not hardwall and node outside mems_allowed: scan up cpusets */ @@ -284,3 +287,6 @@ Signed-off-by: Sebastian Andrzej Siewior return allowed; } +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0262-apparmor-use-a-locallock-instead-preempt_disable.patch b/kernel/patches-4.19.x-rt/0260-apparmor-use-a-locallock-instead-preempt_disable.patch similarity index 76% rename from kernel/patches-4.19.x-rt/0262-apparmor-use-a-locallock-instead-preempt_disable.patch rename to kernel/patches-4.19.x-rt/0260-apparmor-use-a-locallock-instead-preempt_disable.patch index 5a6742fd8..0c5246307 100644 --- a/kernel/patches-4.19.x-rt/0262-apparmor-use-a-locallock-instead-preempt_disable.patch +++ b/kernel/patches-4.19.x-rt/0260-apparmor-use-a-locallock-instead-preempt_disable.patch @@ -1,6 +1,7 @@ +From f03e611745700fad514b850296eab0b098b3c12d Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Wed, 11 Oct 2017 17:43:49 +0200 -Subject: apparmor: use a locallock instead preempt_disable() +Subject: [PATCH 260/269] apparmor: use a locallock instead preempt_disable() get_buffers() disables preemption which acts as a lock for the per-CPU variable. Since we can't disable preemption here on RT, a local_lock is @@ -9,10 +10,12 @@ than one user within the critical section. 
Signed-off-by: Sebastian Andrzej Siewior --- - security/apparmor/include/path.h | 19 ++++++++++++++++--- - security/apparmor/lsm.c | 2 +- + security/apparmor/include/path.h | 19 ++++++++++++++++--- + security/apparmor/lsm.c | 2 +- 2 files changed, 17 insertions(+), 4 deletions(-) +diff --git a/security/apparmor/include/path.h b/security/apparmor/include/path.h +index b6380c5f0097..12abfddb19c9 100644 --- a/security/apparmor/include/path.h +++ b/security/apparmor/include/path.h @@ -40,8 +40,10 @@ struct aa_buffers { @@ -26,7 +29,7 @@ Signed-off-by: Sebastian Andrzej Siewior #define ASSIGN(FN, A, X, N) ((X) = FN(A, N)) #define EVAL1(FN, A, X) ASSIGN(FN, A, X, 0) /*X = FN(0)*/ -@@ -51,7 +53,17 @@ DECLARE_PER_CPU(struct aa_buffers, aa_bu +@@ -51,7 +53,17 @@ DECLARE_PER_CPU(struct aa_buffers, aa_buffers); #define for_each_cpu_buffer(I) for ((I) = 0; (I) < MAX_PATH_BUFFERS; (I)++) @@ -45,7 +48,7 @@ Signed-off-by: Sebastian Andrzej Siewior #define AA_BUG_PREEMPT_ENABLED(X) AA_BUG(preempt_count() <= 0, X) #else #define AA_BUG_PREEMPT_ENABLED(X) /* nop */ -@@ -67,14 +79,15 @@ DECLARE_PER_CPU(struct aa_buffers, aa_bu +@@ -67,14 +79,15 @@ DECLARE_PER_CPU(struct aa_buffers, aa_buffers); #define get_buffers(X...) 
\ do { \ @@ -63,6 +66,8 @@ Signed-off-by: Sebastian Andrzej Siewior } while (0) #endif /* __AA_PATH_H */ +diff --git a/security/apparmor/lsm.c b/security/apparmor/lsm.c +index 8b8b70620bbe..8330ef57a784 100644 --- a/security/apparmor/lsm.c +++ b/security/apparmor/lsm.c @@ -45,7 +45,7 @@ @@ -74,3 +79,6 @@ Signed-off-by: Sebastian Andrzej Siewior /* * LSM hook functions +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0263-workqueue-prevent-deadlock-stall.patch b/kernel/patches-4.19.x-rt/0261-workqueue-Prevent-deadlock-stall-on-RT.patch similarity index 83% rename from kernel/patches-4.19.x-rt/0263-workqueue-prevent-deadlock-stall.patch rename to kernel/patches-4.19.x-rt/0261-workqueue-Prevent-deadlock-stall-on-RT.patch index 478256a09..a37a49945 100644 --- a/kernel/patches-4.19.x-rt/0263-workqueue-prevent-deadlock-stall.patch +++ b/kernel/patches-4.19.x-rt/0261-workqueue-Prevent-deadlock-stall-on-RT.patch @@ -1,6 +1,7 @@ -Subject: workqueue: Prevent deadlock/stall on RT +From 936c037e636229e54d45ea6887e110d47d891059 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner -Date: Fri, 27 Jun 2014 16:24:52 +0200 (CEST) +Date: Fri, 27 Jun 2014 16:24:52 +0200 +Subject: [PATCH 261/269] workqueue: Prevent deadlock/stall on RT Austin reported a XFS deadlock/stall on RT where scheduled work gets never exececuted and tasks are waiting for each other for ever. 
@@ -35,15 +36,16 @@ Signed-off-by: Thomas Gleixner Link: http://vger.kernel.org/r/alpine.DEB.2.10.1406271249510.5170@nanos Cc: Richard Weinberger Cc: Steven Rostedt - --- - kernel/sched/core.c | 6 +++-- - kernel/workqueue.c | 60 ++++++++++++++++++++++++++++++++++++++++------------ + kernel/sched/core.c | 6 +++-- + kernel/workqueue.c | 60 +++++++++++++++++++++++++++++++++++---------- 2 files changed, 51 insertions(+), 15 deletions(-) +diff --git a/kernel/sched/core.c b/kernel/sched/core.c +index 6d06dd682cd5..d2a475e00af8 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c -@@ -3567,9 +3567,8 @@ void __noreturn do_task_dead(void) +@@ -3569,9 +3569,8 @@ void __noreturn do_task_dead(void) static inline void sched_submit_work(struct task_struct *tsk) { @@ -54,7 +56,7 @@ Cc: Steven Rostedt /* * If a worker went to sleep, notify and ask workqueue whether * it wants to wake up a task to maintain concurrency. -@@ -3583,6 +3582,9 @@ static inline void sched_submit_work(str +@@ -3585,6 +3584,9 @@ static inline void sched_submit_work(struct task_struct *tsk) preempt_enable_no_resched(); } @@ -64,6 +66,8 @@ Cc: Steven Rostedt /* * If we are going to sleep and we have plugged IO queued, * make sure to submit it to avoid deadlocks. +diff --git a/kernel/workqueue.c b/kernel/workqueue.c +index bf7be926ce5f..84397c2a4465 100644 --- a/kernel/workqueue.c +++ b/kernel/workqueue.c @@ -125,6 +125,11 @@ enum { @@ -78,7 +82,7 @@ Cc: Steven Rostedt * A: wq_pool_attach_mutex protected. * * PL: wq_pool_mutex protected. 
-@@ -430,6 +435,31 @@ static void workqueue_sysfs_unregister(s +@@ -430,6 +435,31 @@ static void workqueue_sysfs_unregister(struct workqueue_struct *wq); if (({ assert_rcu_or_wq_mutex(wq); false; })) { } \ else @@ -110,7 +114,7 @@ Cc: Steven Rostedt #ifdef CONFIG_DEBUG_OBJECTS_WORK static struct debug_obj_descr work_debug_descr; -@@ -836,10 +866,16 @@ static struct worker *first_idle_worker( +@@ -836,10 +866,16 @@ static struct worker *first_idle_worker(struct worker_pool *pool) */ static void wake_up_worker(struct worker_pool *pool) { @@ -128,7 +132,7 @@ Cc: Steven Rostedt } /** -@@ -868,7 +904,7 @@ void wq_worker_running(struct task_struc +@@ -868,7 +904,7 @@ void wq_worker_running(struct task_struct *task) */ void wq_worker_sleeping(struct task_struct *task) { @@ -137,7 +141,7 @@ Cc: Steven Rostedt struct worker_pool *pool; /* -@@ -885,26 +921,18 @@ void wq_worker_sleeping(struct task_stru +@@ -885,26 +921,18 @@ void wq_worker_sleeping(struct task_struct *task) return; worker->sleeping = 1; @@ -167,7 +171,7 @@ Cc: Steven Rostedt } /** -@@ -1675,7 +1703,9 @@ static void worker_enter_idle(struct wor +@@ -1675,7 +1703,9 @@ static void worker_enter_idle(struct worker *worker) worker->last_active = jiffies; /* idle_list is LIFO */ @@ -177,7 +181,7 @@ Cc: Steven Rostedt if (too_many_workers(pool) && !timer_pending(&pool->idle_timer)) mod_timer(&pool->idle_timer, jiffies + IDLE_WORKER_TIMEOUT); -@@ -1708,7 +1738,9 @@ static void worker_leave_idle(struct wor +@@ -1708,7 +1738,9 @@ static void worker_leave_idle(struct worker *worker) return; worker_clr_flags(worker, WORKER_IDLE); pool->nr_idle--; @@ -187,7 +191,7 @@ Cc: Steven Rostedt } static struct worker *alloc_worker(int node) -@@ -1876,7 +1908,9 @@ static void destroy_worker(struct worker +@@ -1876,7 +1908,9 @@ static void destroy_worker(struct worker *worker) pool->nr_workers--; pool->nr_idle--; @@ -197,3 +201,6 @@ Cc: Steven Rostedt worker->flags |= WORKER_DIE; wake_up_process(worker->task); } +-- +2.20.1 + diff 
--git a/kernel/patches-4.19.x-rt/0264-signals-allow-rt-tasks-to-cache-one-sigqueue-struct.patch b/kernel/patches-4.19.x-rt/0262-signals-Allow-rt-tasks-to-cache-one-sigqueue-struct.patch similarity index 73% rename from kernel/patches-4.19.x-rt/0264-signals-allow-rt-tasks-to-cache-one-sigqueue-struct.patch rename to kernel/patches-4.19.x-rt/0262-signals-Allow-rt-tasks-to-cache-one-sigqueue-struct.patch index dd828e3c6..bf453a098 100644 --- a/kernel/patches-4.19.x-rt/0264-signals-allow-rt-tasks-to-cache-one-sigqueue-struct.patch +++ b/kernel/patches-4.19.x-rt/0262-signals-Allow-rt-tasks-to-cache-one-sigqueue-struct.patch @@ -1,20 +1,22 @@ +From d05a6a9bf872f14f98543e61c6ef160307078b7c Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Fri, 3 Jul 2009 08:44:56 -0500 -Subject: signals: Allow rt tasks to cache one sigqueue struct +Subject: [PATCH 262/269] signals: Allow rt tasks to cache one sigqueue struct To avoid allocation allow rt tasks to cache one sigqueue struct in task struct. Signed-off-by: Thomas Gleixner - --- - include/linux/sched.h | 2 + - include/linux/signal.h | 1 - kernel/exit.c | 2 - - kernel/fork.c | 1 - kernel/signal.c | 69 ++++++++++++++++++++++++++++++++++++++++++++++--- + include/linux/sched.h | 2 ++ + include/linux/signal.h | 1 + + kernel/exit.c | 2 +- + kernel/fork.c | 1 + + kernel/signal.c | 69 +++++++++++++++++++++++++++++++++++++++--- 5 files changed, 70 insertions(+), 5 deletions(-) +diff --git a/include/linux/sched.h b/include/linux/sched.h +index dd95bd64504e..c342fa06ab99 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -895,6 +895,8 @@ struct task_struct { @@ -26,9 +28,11 @@ Signed-off-by: Thomas Gleixner sigset_t blocked; sigset_t real_blocked; /* Restored if set_restore_sigmask() was used: */ +diff --git a/include/linux/signal.h b/include/linux/signal.h +index e4d01469ed60..746dd5d28c54 100644 --- a/include/linux/signal.h +++ b/include/linux/signal.h -@@ -245,6 +245,7 @@ static inline void init_sigpending(struc +@@ 
-245,6 +245,7 @@ static inline void init_sigpending(struct sigpending *sig) } extern void flush_sigqueue(struct sigpending *queue); @@ -36,9 +40,11 @@ Signed-off-by: Thomas Gleixner /* Test if 'sig' is valid signal. Use this instead of testing _NSIG directly */ static inline int valid_signal(unsigned long sig) +diff --git a/kernel/exit.c b/kernel/exit.c +index 5c0964dc805a..47d4161d1104 100644 --- a/kernel/exit.c +++ b/kernel/exit.c -@@ -160,7 +160,7 @@ static void __exit_signal(struct task_st +@@ -160,7 +160,7 @@ static void __exit_signal(struct task_struct *tsk) * Do this under ->siglock, we can race with another thread * doing sigqueue_free() if we have SIGQUEUE_PREALLOC signals. */ @@ -47,9 +53,11 @@ Signed-off-by: Thomas Gleixner tsk->sighand = NULL; spin_unlock(&sighand->siglock); +diff --git a/kernel/fork.c b/kernel/fork.c +index f62ae61064c7..1cd87e9c9f17 100644 --- a/kernel/fork.c +++ b/kernel/fork.c -@@ -1802,6 +1802,7 @@ static __latent_entropy struct task_stru +@@ -1802,6 +1802,7 @@ static __latent_entropy struct task_struct *copy_process( spin_lock_init(&p->alloc_lock); init_sigpending(&p->pending); @@ -57,6 +65,8 @@ Signed-off-by: Thomas Gleixner p->utime = p->stime = p->gtime = 0; #ifdef CONFIG_ARCH_HAS_SCALED_CPUTIME +diff --git a/kernel/signal.c b/kernel/signal.c +index 57c48b3d1491..367e10c919d1 100644 --- a/kernel/signal.c +++ b/kernel/signal.c @@ -19,6 +19,7 @@ @@ -67,7 +77,7 @@ Signed-off-by: Thomas Gleixner #include #include #include -@@ -388,13 +389,30 @@ void task_join_group_stop(struct task_st +@@ -388,13 +389,30 @@ void task_join_group_stop(struct task_struct *task) } } @@ -99,7 +109,7 @@ Signed-off-by: Thomas Gleixner { struct sigqueue *q = NULL; struct user_struct *user; -@@ -411,7 +429,10 @@ static struct sigqueue * +@@ -411,7 +429,10 @@ __sigqueue_alloc(int sig, struct task_struct *t, gfp_t flags, int override_rlimi if (override_rlimit || atomic_read(&user->sigpending) <= task_rlimit(t, RLIMIT_SIGPENDING)) { @@ -111,7 +121,7 @@ 
Signed-off-by: Thomas Gleixner } else { print_dropped_signal(sig); } -@@ -428,6 +449,13 @@ static struct sigqueue * +@@ -428,6 +449,13 @@ __sigqueue_alloc(int sig, struct task_struct *t, gfp_t flags, int override_rlimi return q; } @@ -125,7 +135,7 @@ Signed-off-by: Thomas Gleixner static void __sigqueue_free(struct sigqueue *q) { if (q->flags & SIGQUEUE_PREALLOC) -@@ -437,6 +465,21 @@ static void __sigqueue_free(struct sigqu +@@ -437,6 +465,21 @@ static void __sigqueue_free(struct sigqueue *q) kmem_cache_free(sigqueue_cachep, q); } @@ -147,10 +157,11 @@ Signed-off-by: Thomas Gleixner void flush_sigqueue(struct sigpending *queue) { struct sigqueue *q; -@@ -450,6 +493,21 @@ void flush_sigqueue(struct sigpending *q +@@ -449,6 +492,21 @@ void flush_sigqueue(struct sigpending *queue) + } } - /* ++/* + * Called from __exit_signal. Flush tsk->pending and + * tsk->sigqueue_cache + */ @@ -165,11 +176,10 @@ Signed-off-by: Thomas Gleixner + kmem_cache_free(sigqueue_cachep, q); +} + -+/* + /* * Flush all pending signals for this kthread. */ - void flush_signals(struct task_struct *t) -@@ -572,7 +630,7 @@ static void collect_signal(int sig, stru +@@ -572,7 +630,7 @@ static void collect_signal(int sig, struct sigpending *list, siginfo_t *info, (info->si_code == SI_TIMER) && (info->si_sys_private); @@ -178,7 +188,7 @@ Signed-off-by: Thomas Gleixner } else { /* * Ok, it wasn't in the queue. 
This must be -@@ -609,6 +667,8 @@ int dequeue_signal(struct task_struct *t +@@ -609,6 +667,8 @@ int dequeue_signal(struct task_struct *tsk, sigset_t *mask, siginfo_t *info) bool resched_timer = false; int signr; @@ -197,3 +207,6 @@ Signed-off-by: Thomas Gleixner if (q) q->flags |= SIGQUEUE_PREALLOC; +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0263-Add-localversion-for-RT-release.patch b/kernel/patches-4.19.x-rt/0263-Add-localversion-for-RT-release.patch new file mode 100644 index 000000000..6d7a761dd --- /dev/null +++ b/kernel/patches-4.19.x-rt/0263-Add-localversion-for-RT-release.patch @@ -0,0 +1,21 @@ +From 7b48c4366f0f483bb81cc05f7f427176bff52bf8 Mon Sep 17 00:00:00 2001 +From: Thomas Gleixner +Date: Fri, 8 Jul 2011 20:25:16 +0200 +Subject: [PATCH 263/269] Add localversion for -RT release + +Signed-off-by: Thomas Gleixner +--- + localversion-rt | 1 + + 1 file changed, 1 insertion(+) + create mode 100644 localversion-rt + +diff --git a/localversion-rt b/localversion-rt +new file mode 100644 +index 000000000000..1199ebade17b +--- /dev/null ++++ b/localversion-rt +@@ -0,0 +1 @@ ++-rt16 +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0264-powerpc-pseries-iommu-Use-a-locallock-instead-local_.patch b/kernel/patches-4.19.x-rt/0264-powerpc-pseries-iommu-Use-a-locallock-instead-local_.patch new file mode 100644 index 000000000..4de5728b2 --- /dev/null +++ b/kernel/patches-4.19.x-rt/0264-powerpc-pseries-iommu-Use-a-locallock-instead-local_.patch @@ -0,0 +1,96 @@ +From 8c6c7ae29703351a50e4ab8c71d130f8c7c06c91 Mon Sep 17 00:00:00 2001 +From: Sebastian Andrzej Siewior +Date: Tue, 26 Mar 2019 18:31:54 +0100 +Subject: [PATCH 264/269] powerpc/pseries/iommu: Use a locallock instead + local_irq_save() + +The locallock protects the per-CPU variable tce_page. The function +attempts to allocate memory while tce_page is protected (by disabling +interrupts). + +Use local_irq_save() instead of local_irq_disable(). 
+ +Cc: stable-rt@vger.kernel.org +Signed-off-by: Sebastian Andrzej Siewior +Signed-off-by: Steven Rostedt (VMware) +--- + arch/powerpc/platforms/pseries/iommu.c | 16 ++++++++++------ + 1 file changed, 10 insertions(+), 6 deletions(-) + +diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c +index 06f02960b439..d80d919c78d3 100644 +--- a/arch/powerpc/platforms/pseries/iommu.c ++++ b/arch/powerpc/platforms/pseries/iommu.c +@@ -38,6 +38,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -212,6 +213,7 @@ static int tce_build_pSeriesLP(struct iommu_table *tbl, long tcenum, + } + + static DEFINE_PER_CPU(__be64 *, tce_page); ++static DEFINE_LOCAL_IRQ_LOCK(tcp_page_lock); + + static int tce_buildmulti_pSeriesLP(struct iommu_table *tbl, long tcenum, + long npages, unsigned long uaddr, +@@ -232,7 +234,8 @@ static int tce_buildmulti_pSeriesLP(struct iommu_table *tbl, long tcenum, + direction, attrs); + } + +- local_irq_save(flags); /* to protect tcep and the page behind it */ ++ /* to protect tcep and the page behind it */ ++ local_lock_irqsave(tcp_page_lock, flags); + + tcep = __this_cpu_read(tce_page); + +@@ -243,7 +246,7 @@ static int tce_buildmulti_pSeriesLP(struct iommu_table *tbl, long tcenum, + tcep = (__be64 *)__get_free_page(GFP_ATOMIC); + /* If allocation fails, fall back to the loop implementation */ + if (!tcep) { +- local_irq_restore(flags); ++ local_unlock_irqrestore(tcp_page_lock, flags); + return tce_build_pSeriesLP(tbl, tcenum, npages, uaddr, + direction, attrs); + } +@@ -277,7 +280,7 @@ static int tce_buildmulti_pSeriesLP(struct iommu_table *tbl, long tcenum, + tcenum += limit; + } while (npages > 0 && !rc); + +- local_irq_restore(flags); ++ local_unlock_irqrestore(tcp_page_lock, flags); + + if (unlikely(rc == H_NOT_ENOUGH_RESOURCES)) { + ret = (int)rc; +@@ -435,13 +438,14 @@ static int tce_setrange_multi_pSeriesLP(unsigned long start_pfn, + u64 rc = 0; + long l, limit; + +- 
local_irq_disable(); /* to protect tcep and the page behind it */ ++ /* to protect tcep and the page behind it */ ++ local_lock_irq(tcp_page_lock); + tcep = __this_cpu_read(tce_page); + + if (!tcep) { + tcep = (__be64 *)__get_free_page(GFP_ATOMIC); + if (!tcep) { +- local_irq_enable(); ++ local_unlock_irq(tcp_page_lock); + return -ENOMEM; + } + __this_cpu_write(tce_page, tcep); +@@ -487,7 +491,7 @@ static int tce_setrange_multi_pSeriesLP(unsigned long start_pfn, + + /* error cleanup: caller will clear whole range */ + +- local_irq_enable(); ++ local_unlock_irq(tcp_page_lock); + return rc; + } + +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0265-localversion.patch b/kernel/patches-4.19.x-rt/0265-localversion.patch deleted file mode 100644 index 0cccc7790..000000000 --- a/kernel/patches-4.19.x-rt/0265-localversion.patch +++ /dev/null @@ -1,13 +0,0 @@ -Subject: Add localversion for -RT release -From: Thomas Gleixner -Date: Fri, 08 Jul 2011 20:25:16 +0200 - -Signed-off-by: Thomas Gleixner ---- - localversion-rt | 1 + - 1 file changed, 1 insertion(+) - ---- /dev/null -+++ b/localversion-rt -@@ -0,0 +1 @@ -+-rt16 diff --git a/kernel/patches-4.19.x-rt/0265-powerpc-reshuffle-TIF-bits.patch b/kernel/patches-4.19.x-rt/0265-powerpc-reshuffle-TIF-bits.patch new file mode 100644 index 000000000..d9d26ea34 --- /dev/null +++ b/kernel/patches-4.19.x-rt/0265-powerpc-reshuffle-TIF-bits.patch @@ -0,0 +1,151 @@ +From 9b0199e0f5b4e5782a4588e31d4db3e75aa3bbff Mon Sep 17 00:00:00 2001 +From: Sebastian Andrzej Siewior +Date: Fri, 22 Mar 2019 17:15:58 +0100 +Subject: [PATCH 265/269] powerpc: reshuffle TIF bits + +Powerpc32/64 does not compile because TIF_SYSCALL_TRACE's bit is higher +than 15 and the assembly instructions don't expect that. + +Move TIF_RESTOREALL, TIF_NOERROR to the higher bits and keep +TIF_NEED_RESCHED_LAZY in the lower range. As a result one split load is +needed and otherwise we can use immediates. 
+ +Signed-off-by: Sebastian Andrzej Siewior +Signed-off-by: Steven Rostedt (VMware) +--- + arch/powerpc/include/asm/thread_info.h | 11 +++++++---- + arch/powerpc/kernel/entry_32.S | 12 +++++++----- + arch/powerpc/kernel/entry_64.S | 12 +++++++----- + 3 files changed, 21 insertions(+), 14 deletions(-) + +diff --git a/arch/powerpc/include/asm/thread_info.h b/arch/powerpc/include/asm/thread_info.h +index ce316076bc52..64c3d1a720e2 100644 +--- a/arch/powerpc/include/asm/thread_info.h ++++ b/arch/powerpc/include/asm/thread_info.h +@@ -83,18 +83,18 @@ extern int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src + #define TIF_SIGPENDING 1 /* signal pending */ + #define TIF_NEED_RESCHED 2 /* rescheduling necessary */ + #define TIF_FSCHECK 3 /* Check FS is USER_DS on return */ +-#define TIF_NEED_RESCHED_LAZY 4 /* lazy rescheduling necessary */ + #define TIF_RESTORE_TM 5 /* need to restore TM FP/VEC/VSX */ + #define TIF_PATCH_PENDING 6 /* pending live patching update */ + #define TIF_SYSCALL_AUDIT 7 /* syscall auditing active */ + #define TIF_SINGLESTEP 8 /* singlestepping active */ + #define TIF_NOHZ 9 /* in adaptive nohz mode */ + #define TIF_SECCOMP 10 /* secure computing */ +-#define TIF_RESTOREALL 11 /* Restore all regs (implies NOERROR) */ +-#define TIF_NOERROR 12 /* Force successful syscall return */ ++ ++#define TIF_NEED_RESCHED_LAZY 11 /* lazy rescheduling necessary */ ++#define TIF_SYSCALL_TRACEPOINT 12 /* syscall tracepoint instrumentation */ ++ + #define TIF_NOTIFY_RESUME 13 /* callback before returning to user */ + #define TIF_UPROBE 14 /* breakpointed or single-stepping */ +-#define TIF_SYSCALL_TRACEPOINT 15 /* syscall tracepoint instrumentation */ + #define TIF_EMULATE_STACK_STORE 16 /* Is an instruction emulation + for stack store? 
*/ + #define TIF_MEMDIE 17 /* is terminating due to OOM killer */ +@@ -103,6 +103,9 @@ extern int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src + #endif + #define TIF_POLLING_NRFLAG 19 /* true if poll_idle() is polling TIF_NEED_RESCHED */ + #define TIF_32BIT 20 /* 32 bit binary */ ++#define TIF_RESTOREALL 21 /* Restore all regs (implies NOERROR) */ ++#define TIF_NOERROR 22 /* Force successful syscall return */ ++ + + /* as above, but as bit values */ + #define _TIF_SYSCALL_TRACE (1< +Date: Wed, 13 Mar 2019 11:40:34 +0000 +Subject: [PATCH 266/269] tty/sysrq: Convert show_lock to raw_spinlock_t + +Systems which don't provide arch_trigger_cpumask_backtrace() will +invoke showacpu() from a smp_call_function() function which is invoked +with disabled interrupts even on -RT systems. + +The function acquires the show_lock lock which only purpose is to +ensure that the CPUs don't print simultaneously. Otherwise the +output would clash and it would be hard to tell the output from CPUx +apart from CPUy. + +On -RT the spin_lock() can not be acquired from this context. A +raw_spin_lock() is required. It will introduce the system's latency +by performing the sysrq request and other CPUs will block on the lock +until the request is done. This is okay because the user asked for a +backtrace of all active CPUs and under "normal circumstances in +production" this path should not be triggered. 
+ +Signed-off-by: Julien Grall +Signed-off-by: Steven Rostedt (VMware) +[bigeasy@linutronix.de: commit description] +Signed-off-by: Sebastian Andrzej Siewior +Acked-by: Sebastian Andrzej Siewior +Signed-off-by: Greg Kroah-Hartman +Cc: stable-rt@vger.kernel.org +Signed-off-by: Sebastian Andrzej Siewior +--- + drivers/tty/sysrq.c | 6 +++--- + 1 file changed, 3 insertions(+), 3 deletions(-) + +diff --git a/drivers/tty/sysrq.c b/drivers/tty/sysrq.c +index 06ed20dd01ba..627517ad55bf 100644 +--- a/drivers/tty/sysrq.c ++++ b/drivers/tty/sysrq.c +@@ -215,7 +215,7 @@ static struct sysrq_key_op sysrq_showlocks_op = { + #endif + + #ifdef CONFIG_SMP +-static DEFINE_SPINLOCK(show_lock); ++static DEFINE_RAW_SPINLOCK(show_lock); + + static void showacpu(void *dummy) + { +@@ -225,10 +225,10 @@ static void showacpu(void *dummy) + if (idle_cpu(smp_processor_id())) + return; + +- spin_lock_irqsave(&show_lock, flags); ++ raw_spin_lock_irqsave(&show_lock, flags); + pr_info("CPU%d:\n", smp_processor_id()); + show_stack(NULL, NULL); +- spin_unlock_irqrestore(&show_lock, flags); ++ raw_spin_unlock_irqrestore(&show_lock, flags); + } + + static void sysrq_showregs_othercpus(struct work_struct *dummy) +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0267-drm-i915-Don-t-disable-interrupts-independently-of-t.patch b/kernel/patches-4.19.x-rt/0267-drm-i915-Don-t-disable-interrupts-independently-of-t.patch new file mode 100644 index 000000000..1e7f184ae --- /dev/null +++ b/kernel/patches-4.19.x-rt/0267-drm-i915-Don-t-disable-interrupts-independently-of-t.patch @@ -0,0 +1,50 @@ +From 4f5c0777eb039305fafbfdf628f44cd4192d7dd8 Mon Sep 17 00:00:00 2001 +From: Sebastian Andrzej Siewior +Date: Wed, 10 Apr 2019 11:01:37 +0200 +Subject: [PATCH 267/269] drm/i915: Don't disable interrupts independently of + the lock + +The locks (timeline->lock and rq->lock) need to be taken with disabled +interrupts.
This is done in __retire_engine_request() by disabling the +interrupts independently of the locks themselves. +While local_irq_disable()+spin_lock() is equivalent to spin_lock_irq() on vanilla +kernels, it is not on RT. Also, it is not obvious whether there is a special reason +why the interrupts are disabled independently of the lock. + +Enable/disable interrupts as part of the locking instruction. + +Signed-off-by: Sebastian Andrzej Siewior +Signed-off-by: Steven Rostedt (VMware) +--- + drivers/gpu/drm/i915/i915_request.c | 8 ++------ + 1 file changed, 2 insertions(+), 6 deletions(-) + +diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c +index 5c2c93cbab12..7124510b9131 100644 +--- a/drivers/gpu/drm/i915/i915_request.c ++++ b/drivers/gpu/drm/i915/i915_request.c +@@ -356,9 +356,7 @@ static void __retire_engine_request(struct intel_engine_cs *engine, + + GEM_BUG_ON(!i915_request_completed(rq)); + +- local_irq_disable(); +- +- spin_lock(&engine->timeline.lock); ++ spin_lock_irq(&engine->timeline.lock); + GEM_BUG_ON(!list_is_first(&rq->link, &engine->timeline.requests)); + list_del_init(&rq->link); + spin_unlock(&engine->timeline.lock); +@@ -372,9 +370,7 @@ static void __retire_engine_request(struct intel_engine_cs *engine, + GEM_BUG_ON(!atomic_read(&rq->i915->gt_pm.rps.num_waiters)); + atomic_dec(&rq->i915->gt_pm.rps.num_waiters); + } +- spin_unlock(&rq->lock); +- +- local_irq_enable(); ++ spin_unlock_irq(&rq->lock); + + /* + * The backing object for the context is done after switching to the +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0268-sched-completion-Fix-a-lockup-in-wait_for_completion.patch b/kernel/patches-4.19.x-rt/0268-sched-completion-Fix-a-lockup-in-wait_for_completion.patch new file mode 100644 index 000000000..33d012c5e --- /dev/null +++ b/kernel/patches-4.19.x-rt/0268-sched-completion-Fix-a-lockup-in-wait_for_completion.patch @@ -0,0 +1,68 @@ +From 3fedc60594022bd98689b88034899528d221db8d Mon Sep 17 00:00:00 2001 +From: Corey
Minyard +Date: Thu, 9 May 2019 14:33:20 -0500 +Subject: [PATCH 268/269] sched/completion: Fix a lockup in + wait_for_completion() + +Consider the following race: + + T0 T1 T2 + wait_for_completion() + do_wait_for_common() + __prepare_to_swait() + schedule() + complete() + x->done++ (0 -> 1) + raw_spin_lock_irqsave() + swake_up_locked() wait_for_completion() + wake_up_process(T0) + list_del_init() + raw_spin_unlock_irqrestore() + raw_spin_lock_irq(&x->wait.lock) + raw_spin_lock_irq(&x->wait.lock) x->done != UINT_MAX, 1 -> 0 + raw_spin_unlock_irq(&x->wait.lock) + return 1 + while (!x->done && timeout), + continue loop, not enqueued + on &x->wait + +Basically, the problem is that the original wait queues used in +completions did not remove the item from the queue in the wakeup +function, but swake_up_locked() does. + +Fix it by adding the thread to the wait queue inside the do loop. +The design of swait detects whether it is already in the list and doesn't +do the list add again. + +Cc: stable-rt@vger.kernel.org +Fixes: a04ff6b4ec4ee7e ("completion: Use simple wait queues") +Signed-off-by: Corey Minyard +Acked-by: Steven Rostedt (VMware) +Signed-off-by: Steven Rostedt (VMware) +[bigeasy: shorten commit message] +Signed-off-by: Sebastian Andrzej Siewior +--- + kernel/sched/completion.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/kernel/sched/completion.c b/kernel/sched/completion.c +index 755a58084978..49c14137988e 100644 +--- a/kernel/sched/completion.c ++++ b/kernel/sched/completion.c +@@ -72,12 +72,12 @@ do_wait_for_common(struct completion *x, + if (!x->done) { + DECLARE_SWAITQUEUE(wait); + +- __prepare_to_swait(&x->wait, &wait); + do { + if (signal_pending_state(state, current)) { + timeout = -ERESTARTSYS; + break; + } ++ __prepare_to_swait(&x->wait, &wait); + __set_current_state(state); + raw_spin_unlock_irq(&x->wait.lock); + timeout = action(timeout); +-- +2.20.1 + diff --git a/kernel/patches-4.19.x-rt/0269-Linux-4.19.37-rt20-REBASE.patch
b/kernel/patches-4.19.x-rt/0269-Linux-4.19.37-rt20-REBASE.patch new file mode 100644 index 000000000..c1f04bf74 --- /dev/null +++ b/kernel/patches-4.19.x-rt/0269-Linux-4.19.37-rt20-REBASE.patch @@ -0,0 +1,19 @@ +From febb7083d474aead8166900edeb557681119dcc4 Mon Sep 17 00:00:00 2001 +From: "Steven Rostedt (VMware)" +Date: Fri, 24 May 2019 14:22:06 -0400 +Subject: [PATCH 269/269] Linux 4.19.37-rt20 REBASE + +--- + localversion-rt | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/localversion-rt b/localversion-rt +index 1199ebade17b..e095ab819714 100644 +--- a/localversion-rt ++++ b/localversion-rt +@@ -1 +1 @@ +--rt16 ++-rt20 +-- +2.20.1 +