OpenVZ-legacy

linux-2.6.32-openvz

Public repository. Commit listing (author, commit subject, commit message):
Pavel Emelyanov
763921f076c  OpenVZ kernel 2.6.32-dyomin released
Named after Lev Stepanovich Dyomin - a soviet cosmonaut. Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Pavel Emelyanov
47ac42d7e37  ubc: Remove cpuset_(un)lock calls
These were removed with .22 stable update. Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Pavel Emelyanov
a2c26f94372  (merge) Merged linux-2.6.32.22
Conflicts: Makefile, drivers/net/tun.c, kernel/cpu.c, kernel/sched.c
Greg Kroah-Hartman
eaa1ace05aa  Linux 2.6.32.22
Chris Wilson (committed by Greg Kroah-Hartman)
6185dfa6d0a  drm: Only decouple the old_fb from the crtc is we call mode_set*
commit 356ad3cd616185631235ffb48b3efbf39f9923b3 upstream. Otherwise when disabling the output we switch to the new fb (which is likely NULL) and skip the call to mode_set -- leaking driver private state on the old_fb. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=29857 Reported-by: Sitsofe Wheeler <sitsofe@yahoo.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Dave Air...
Chris Wilson (committed by Greg Kroah-Hartman)
adae8733702  drm/i915: Prevent double dpms on
commit 032d2a0d068b0368296a56469761394ef03207c3 upstream. Arguably this is a bug in drm-core in that we should not be called twice in succession with DPMS_ON, however this is still occuring and we see FDI link training failures on the second call leading to the occassional blank display. For the time being ignore the repeated call. Original patch by Dave Airlie <airlied@redhat.com> Signed-off...
Dan Carpenter (committed by Greg Kroah-Hartman)
079908f18d7  i915_gem: return -EFAULT if copy_to_user fails
commit c877cdce93a44eea96f6cf7fc04be7d0372db2be upstream. copy_to_user() returns the number of bytes remaining to be copied and I'm pretty sure we want to return a negative error code here. Signed-off-by: Dan Carpenter <error27@gmail.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Dan Carpenter (committed by Greg Kroah-Hartman)
9cd74cb5174  i915: return -EFAULT if copy_to_user fails
commit 9927a403ca8c97798129953fa9cbb5dc259c7cb9 upstream. copy_to_user returns the number of bytes remaining to be copied, but we want to return a negative error code here. These are returned to userspace. Signed-off-by: Dan Carpenter <error27@gmail.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
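Both -EFAULT fixes above apply the same small idiom. As a minimal illustrative sketch (the helper and structure names below are hypothetical, not the actual i915 code): copy_to_user() reports how many bytes it could not copy, so a non-zero return has to be turned into -EFAULT rather than handed back to userspace as a positive count.

#include <linux/types.h>
#include <linux/uaccess.h>
#include <linux/errno.h>

struct demo_reply {
	u32 value;
};

/* Hypothetical helper showing the pattern used by the two fixes above. */
static int demo_copy_reply_to_user(void __user *dst, const struct demo_reply *src)
{
	/* copy_to_user() returns the number of bytes NOT copied; returning
	 * that count would leak a positive value, so map it to -EFAULT. */
	if (copy_to_user(dst, src, sizeof(*src)))
		return -EFAULT;
	return 0;
}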
Trond Myklebust (committed by Greg Kroah-Hartman)
f7f040a6b08  SUNRPC: Fix race corrupting rpc upcall
commit 5a67657a2e90c9e4a48518f95d4ba7777aa20fbb upstream. If rpc_queue_upcall() adds a new upcall to the rpci->pipe list just after rpc_pipe_release calls rpc_purge_list(), but before it calls gss_pipe_release (as rpci->ops->release_pipe(inode)), then the latter will free a message without deleting it from the rpci->pipe list. We will be left with a freed object on the rpc->pipe list. Most f...
Trond Myklebust (committed by Greg Kroah-Hartman)
60804a1f482  NFS: Fix a typo in nfs_sockaddr_match_ipaddr6
commit b20d37ca9561711c6a3c4b859c2855f49565e061 upstream. Reported-by: Ben Greear <greearb@candelatech.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Anton Vorontsov (committed by Greg Kroah-Hartman)
198aa4b040a  apm_power: Add missing break statement
commit 1d220334d6a8a711149234dc5f98d34ae02226b8 upstream. The missing break statement causes wrong capacity calculation for batteries that report energy. Reported-by: d binderman <dcb314@hotmail.com> Signed-off-by: Anton Vorontsov <cbouatmailru@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
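The apm_power fix above is a one-line change; the sketch below only illustrates the class of bug, with a hypothetical helper rather than the actual drivers/power/apm_power.c code: without the break, the energy case falls through into the charge case and the capacity ends up being computed from the wrong properties.

#include <linux/power_supply.h>

enum demo_reporting { DEMO_REPORTS_CHARGE, DEMO_REPORTS_ENERGY };

/* Hypothetical illustration: pick which properties feed the capacity math. */
static void demo_pick_capacity_props(enum demo_reporting r,
				     enum power_supply_property *full,
				     enum power_supply_property *now)
{
	switch (r) {
	case DEMO_REPORTS_ENERGY:
		*full = POWER_SUPPLY_PROP_ENERGY_FULL;
		*now = POWER_SUPPLY_PROP_ENERGY_NOW;
		break;	/* the kind of break the fix adds; falling through would
			 * silently overwrite these with the charge properties */
	case DEMO_REPORTS_CHARGE:
		*full = POWER_SUPPLY_PROP_CHARGE_FULL;
		*now = POWER_SUPPLY_PROP_CHARGE_NOW;
		break;
	}
}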
Guillem Jover (committed by Greg Kroah-Hartman)
ebd5849e44b  hwmon: (f75375s) Do not overwrite values read from registers
commit c3b327d60bbba3f5ff8fd87d1efc0e95eb6c121b upstream. All bits in the values read from registers to be used for the next write were getting overwritten, avoid doing so to not mess with the current configuration. Signed-off-by: Guillem Jover <guillem@hadrons.org> Cc: Riku Voipio <riku.voipio@iki.fi> Signed-off-by: Jean Delvare <khali@linux-fr.org> Signed-off-by: Greg Kroah-Hartman <gregkh@...
Guillem Jover (committed by Greg Kroah-Hartman)
2e999ac7a15  hwmon: (f75375s) Shift control mode to the correct bit position
commit 96f3640894012be7dd15a384566bfdc18297bc6c upstream. The spec notes that fan0 and fan1 control mode bits are located in bits 7-6 and 5-4 respectively, but the FAN_CTRL_MODE macro was making the bits shift by 5 instead of by 4. Signed-off-by: Guillem Jover <guillem@hadrons.org> Cc: Riku Voipio <riku.voipio@iki.fi> Signed-off-by: Jean Delvare <khali@linux-fr.org> Signed-off-by: Greg Kroah-...
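A rough sketch of what the bit-position fix means, using made-up macro names rather than the driver's real FAN_CTRL_MODE definition: with fan0's mode in bits 7:6 and fan1's in bits 5:4, the per-fan shift must land on 6 and 4, and a shift derived from the wrong step (5 rather than 4) puts the mode bits in the wrong place in the register.

/* Hypothetical macros; see drivers/hwmon/f75375s.c for the real ones. */
#define DEMO_FAN_MODE_SHIFT(nr)	(6 - ((nr) * 2))	/* fan0 -> 6, fan1 -> 4 */
#define DEMO_FAN_MODE_MASK	0x3

static unsigned int demo_fan_mode(unsigned int reg, int nr)
{
	return (reg >> DEMO_FAN_MODE_SHIFT(nr)) & DEMO_FAN_MODE_MASK;
}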
Al Viro (committed by Greg Kroah-Hartman)
ef8deb2ab7f  arm: fix really nasty sigreturn bug
commit 653d48b22166db2d8b1515ebe6f9f0f7c95dfc86 upstream. If a signal hits us outside of a syscall and another gets delivered when we are in sigreturn (e.g. because it had been in sa_mask for the first one and got sent to us while we'd been in the first handler), we have a chance of returning from the second handler to location one insn prior to where we ought to return. If r0 happens to cont...
Takashi Iwai (committed by Greg Kroah-Hartman)
86ea76f3417  ALSA: hda - Handle pin NID 0x1a on ALC259/269
commit b08b1637ce1c0196970348bcabf40f04b6b3d58e upstream. The pin NID 0x1a should be handled as well as NID 0x1b. Also added comments. Signed-off-by: Takashi Iwai <tiwai@suse.de> Cc: David Henningsson <david.henningsson@canonical.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Takashi Iwai (committed by Greg Kroah-Hartman)
98984831cfa  ALSA: hda - Handle missing NID 0x1b on ALC259 codec
commit 5d4abf93ea3192cc666430225a29a4978c97c57d upstream. Since ALC259/269 use the same parser of ALC268, the pin 0x1b was ignored as an invalid widget. Just add this NID to handle properly. This will add the missing mixer controls for some devices. Signed-off-by: Takashi Iwai <tiwai@suse.de> Cc: David Henningsson <david.henningsson@canonical.com> Signed-off-by: Greg Kroah-Hartman <gregkh@su...
Anton Blanchard (committed by Greg Kroah-Hartman)
277899ef0a3  sched: cpuacct: Use bigger percpu counter batch values for stats counters
commit fa535a77bd3fa32b9215ba375d6a202fe73e1dd6 upstream. When CONFIG_VIRT_CPU_ACCOUNTING and CONFIG_CGROUP_CPUACCT are enabled we can call cpuacct_update_stats with values much larger than percpu_counter_batch. This means the call to percpu_counter_add will always add to the global count which is protected by a spinlock and we end up with a global spinlock in the scheduler. Based on an idea ...
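The mechanism behind this one is the batch parameter of the percpu counter API. A hedged sketch of the idea (the helper name and the batch multiplier are made up, and this is not the actual kernel/sched.c change): when the amounts being added routinely exceed percpu_counter_batch, passing an explicitly larger batch keeps most updates in the per-cpu counters instead of folding them into the spinlock-protected global count on every call.

#include <linux/percpu_counter.h>

/* Hypothetical accounting helper illustrating the larger-batch idea. */
static void demo_cpuacct_charge(struct percpu_counter *counter, s64 cputime)
{
	/* made-up scaling; the point is that the batch should be comparable
	 * to the typical cputime delta being added */
	s32 batch = percpu_counter_batch * 32;

	__percpu_counter_add(counter, cputime, batch);
}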
Suresh Siddha (committed by Greg Kroah-Hartman)
337a0179991  sched: Fix select_idle_sibling() logic in select_task_rq_fair()
commit 99bd5e2f245d8cd17d040c82d40becdb3efd9b69 upstream. Issues in the current select_idle_sibling() logic in select_task_rq_fair() in the context of a task wake-up: a) Once we select the idle sibling, we use that domain (spanning the cpu that the task is currently woken-up and the idle sibling that we found) in our wake_affine() decisions. This domain is completely different from the ...
Peter Zijlstra (committed by Greg Kroah-Hartman)
6efd9bbce0d  sched: Pre-compute cpumask_weight(sched_domain_span(sd))
commit 669c55e9f99b90e46eaa0f98a67ec53d46dc969a upstream. Dave reported that his large SPARC machines spend lots of time in hweight64(), try and optimize some of those needless cpumask_weight() invocations (esp. with the large offstack cpumasks these are very expensive indeed). Reported-by: David Miller <davem@davemloft.net> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference...
Mike Galbraith (committed by Greg Kroah-Hartman)
ac8f51da798  sched: Fix select_idle_sibling()
commit 8b911acdf08477c059d1c36c21113ab1696c612b upstream. Don't bother with selection when the current cpu is idle. Recent load balancing changes also make it no longer necessary to check wake_affine() success before returning the selected sibling, so we now always use it. Signed-off-by: Mike Galbraith <efault@gmx.de> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <126...
Mike Galbraith (committed by Greg Kroah-Hartman)
b971284b4af  sched: Fix vmark regression on big machines
commit 50b926e439620c469565e8be0f28be78f5fca1ce upstream. SD_PREFER_SIBLING is set at the CPU domain level if power saving isn't enabled, leading to many cache misses on large machines as we traverse looking for an idle shared cache to wake to. Change the enabler of select_idle_sibling() to SD_SHARE_PKG_RESOURCES, and enable same at the sibling domain level. Reported-by: Lin Ming <ming.m.lin@...
Peter Zijlstra (committed by Greg Kroah-Hartman)
81c8021a882  sched: More generic WAKE_AFFINE vs select_idle_sibling()
commit fe3bcfe1f6c1fc4ea7706ac2d05e579fd9092682 upstream. Instead of only considering SD_WAKE_AFFINE | SD_PREFER_SIBLING domains also allow all SD_PREFER_SIBLING domains below a SD_WAKE_AFFINE domain to change the affinity target. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Mike Galbraith <efault@gmx.de> LKML-Reference: <20091112145610.909723612@chello.nl> Signed-off-by: Ingo Mo...
Peter Zijlstra (committed by Greg Kroah-Hartman)
505afcbd47a  sched: Cleanup select_task_rq_fair()
commit a50bde5130f65733142b32975616427d0ea50856 upstream. Clean up the new affine to idle sibling bits while trying to grok them. Should not have any function differences. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Mike Galbraith <efault@gmx.de> LKML-Reference: <20091112145610.832503781@chello.nl> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Mike Galbraith <efault@...
Daniel J Blueman (committed by Greg Kroah-Hartman)
0b88f2ba7ca  sched: apply RCU protection to wake_affine()
commit f3b577dec1f2ce32d2db6d2ca6badff7002512af upstream. The task_group() function returns a pointer that must be protected by either RCU, the ->alloc_lock, or the cgroup lock (see the rcu_dereference_check() in task_subsys_state(), which is invoked by task_group()). The wake_affine() function currently does none of these, which means that a concurrent update would be within its rights to fre...
Peter Zijlstra (committed by Greg Kroah-Hartman)
58e9934fe30  sched: Remove unnecessary RCU exclusion
commit fb58bac5c75bfff8bbf7d02071a10a62f32fe28b upstream. As Nick pointed out, and realized by myself when doing: sched: Fix balance vs hotplug race the patch: sched: for_each_domain() vs RCU is wrong, sched_domains are freed after synchronize_sched(), which means disabling preemption is enough. Reported-by: Nick Piggin <npiggin@suse.de> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chell...
Peter Zijlstra (committed by Greg Kroah-Hartman)
6c55d19c1d2  sched: Fix rq->clock synchronization when migrating tasks
commit 861d034ee814917a83bd5de4b26e3b8336ddeeb8 upstream. sched_fork() -- we do task placement in ->task_fork_fair() ensure we update_rq_clock() so we work with current time. We leave the vruntime in relative state, so the time delay until wake_up_new_task() doesn't matter. wake_up_new_task() -- Since task_fork_fair() left p->vruntime in relative state we can safely migrate, the activa...
Peter Zijlstra (committed by Greg Kroah-Hartman)
bea226b8e64  sched: Fix nr_uninterruptible count
commit cc87f76a601d2d256118f7bab15e35254356ae21 upstream. The cpuload calculation in calc_load_account_active() assumes rq->nr_uninterruptible will not change on an offline cpu after migrate_nr_uninterruptible(). However the recent migrate on wakeup changes broke that and would result in decrementing the offline cpu's rq->nr_uninterruptible. Fix this by accounting the nr_uninterruptible on the...
Peter Zijlstra (committed by Greg Kroah-Hartman)
6975fc131cb  sched: Optimize task_rq_lock()
commit 65cc8e4859ff29a9ddc989c88557d6059834c2a2 upstream. Now that we hold the rq->lock over set_task_cpu() again, we can do away with most of the TASK_WAKING checks and reduce them again to set_cpus_allowed_ptr(). Removes some conditionals from scheduling hot-paths. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Oleg Nesterov <oleg@redhat.com> LKML-Reference: <new-submission> Sig...
Peter Zijlstra (committed by Greg Kroah-Hartman)
6d94134f5f3  sched: Fix TASK_WAKING vs fork deadlock
commit 0017d735092844118bef006696a750a0e4ef6ebd upstream. Oleg noticed a few races with the TASK_WAKING usage on fork. - since TASK_WAKING is basically a spinlock, it should be IRQ safe - since we set TASK_WAKING (*) without holding rq->lock it could be there still is a rq->lock holder, thereby not actually providing full serialization. (*) in fact we clear PF_STARTING, which in effec...
Oleg Nesterov (committed by Greg Kroah-Hartman)
81695bf0ee1  sched: Make select_fallback_rq() cpuset friendly
commit 9084bb8246ea935b98320554229e2f371f7f52fa upstream. Introduce cpuset_cpus_allowed_fallback() helper to fix the cpuset problems with select_fallback_rq(). It can be called from any context and can't use any cpuset locks including task_lock(). It is called when the task doesn't have online cpus in ->cpus_allowed but ttwu/etc must be able to find a suitable cpu. I am not proud of this patch...
Oleg Nesterov (committed by Greg Kroah-Hartman)
1ccc5a299bd  sched: _cpu_down(): Don't play with current->cpus_allowed
commit 6a1bdc1b577ebcb65f6603c57f8347309bc4ab13 upstream. _cpu_down() changes the current task's affinity and then recovers it at the end. The problems are well known: we can't restore old_allowed if it was bound to the now-dead-cpu, and we can race with the userspace which can change cpu-affinity during unplug. _cpu_down() should not play with current->cpus_allowed at all. Instead, take_cpu_d...
Oleg Nesterov (committed by Greg Kroah-Hartman)
cc28db46425  sched: sched_exec(): Remove the select_fallback_rq() logic
commit 30da688ef6b76e01969b00608202fff1eed2accc upstream. sched_exec()->select_task_rq() reads/updates ->cpus_allowed lockless. This can race with other CPUs updating our ->cpus_allowed, and this looks meaningless to me. The task is current and running, it must have online cpus in ->cpus_allowed, the fallback mode is bogus. And, if ->sched_class returns the "wrong" cpu, this likely means we r...
Oleg Nesterov (committed by Greg Kroah-Hartman)
79d79c4fed3  sched: move_task_off_dead_cpu(): Remove retry logic
commit c1804d547dc098363443667609c272d1e4d15ee8 upstream. The previous patch preserved the retry logic, but it looks unneeded. __migrate_task() can only fail if we raced with migration after we dropped the lock, but in this case the caller of set_cpus_allowed/etc must initiate migration itself if ->on_rq == T. We already fixed p->cpus_allowed, the changes in active/online masks must be visibl...
Oleg Nesterov (committed by Greg Kroah-Hartman)
296a3f11b1f  sched: move_task_off_dead_cpu(): Take rq->lock around select_fallback_rq()
commit 1445c08d06c5594895b4fae952ef8a457e89c390 upstream. move_task_off_dead_cpu()->select_fallback_rq() reads/updates ->cpus_allowed lockless. We can race with set_cpus_allowed() running in parallel. Change it to take rq->lock around select_fallback_rq(). Note that it is not trivial to move this spin_lock() into select_fallback_rq(), we must recheck the task was not migrated after we take the...
Oleg Nesterov (committed by Greg Kroah-Hartman)
e624a13a60b  sched: Kill the broken and deadlockable cpuset_lock/cpuset_cpus_allowed_locked code
commit 897f0b3c3ff40b443c84e271bef19bd6ae885195 upstream. This patch just states the fact the cpusets/cpuhotplug interaction is broken and removes the deadlockable code which only pretends to work. - cpuset_lock() doesn't really work. It is needed for cpuset_cpus_allowed_locked() but we can't take this lock in try_to_wake_up()->select_fallback_rq() path. - cpuset_lock() is deadlockable. S...
Oleg Nesterov (committed by Greg Kroah-Hartman)
20b622cd1f1  sched: set_cpus_allowed_ptr(): Don't use rq->migration_thread after unlock
commit 47a70985e5c093ae03d8ccf633c70a93761d86f2 upstream. Trivial typo fix. rq->migration_thread can be NULL after task_rq_unlock(), this is why we have "mt" which should be used instead. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <20100330165829.GA18284@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off...
Thomas Gleixner (committed by Greg Kroah-Hartman)
4b3d9e7ce5a  sched: Queue a deboosted task to the head of the RT prio queue
commit 60db48cacb9b253d5607a5ff206112a59cd09e34 upstream. rtmutex_set_prio() is used to implement priority inheritance for futexes. When a task is deboosted it gets enqueued at the tail of its RT priority list. This is violating the POSIX scheduling semantics: rt priority list X contains two runnable tasks A and B task A runs with priority X and holds mutex M task C preempts A and is blocke...
Thomas Gleixner (committed by Greg Kroah-Hartman)
afc0109e65e  sched: Implement head queueing for sched_rt
commit 37dad3fce97f01e5149d69de0833d8452c0e862e upstream. The ability of enqueueing a task to the head of a SCHED_FIFO priority list is required to fix some violations of POSIX scheduling policy. Implement the functionality in sched_rt. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra <peterz@infradead.org> Tested-by: Carsten Emde <cbe@osadl.org> Tested-by: Mathias...
Thomas Gleixner (committed by Greg Kroah-Hartman)
e788d930654  sched: Extend enqueue_task to allow head queueing
commit ea87bb7853168434f4a82426dd1ea8421f9e604d upstream. The ability of enqueueing a task to the head of a SCHED_FIFO priority list is required to fix some violations of POSIX scheduling policy. Extend the related functions with a "head" argument. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra <peterz@infradead.org> Tested-by: Carsten Emde <cbe@osadl.org> Tested...
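The three commits above add head queueing to the RT runqueues. As an over-simplified sketch of what a "head" flag means for a priority list (plain list_head code, not the actual sched_rt.c enqueue path): queueing at the head makes a deboosted task the next one picked at its priority, which is what the POSIX semantics described above require.

#include <linux/list.h>
#include <linux/types.h>

/* Hypothetical demo: enqueue an entry at the head or tail of one priority list. */
static void demo_prio_enqueue(struct list_head *prio_queue, struct list_head *entry, bool head)
{
	if (head)
		list_add(entry, prio_queue);		/* front: runs next at this priority */
	else
		list_add_tail(entry, prio_queue);	/* default: queued behind its peers */
}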
Peter Zijlstra (committed by Greg Kroah-Hartman)
7607da8d463  sched: Fix race between ttwu() and task_rq_lock()
commit 0970d2992dfd7d5ec2c787417cf464f01eeaf42a upstream. Thomas found that due to ttwu() changing a task's cpu without holding the rq->lock, task_rq_lock() might end up locking the wrong rq. Avoid this by serializing against TASK_WAKING. Reported-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <1266241712.15770.420.camel@laptop>...
Peter Zijlstra (committed by Greg Kroah-Hartman)
c8035d9f08c  sched: Fix incorrect sanity check
commit 11854247e2c851e7ff9ce138e501c6cffc5a4217 upstream. We moved to migrate on wakeup, which means that sleeping tasks could still be present on offline cpus. Amend the check to only test running tasks. Reported-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Mike Galbraith <efault...
Peter Zijlstra (committed by Greg Kroah-Hartman)
20f95b412b7  sched: Fix fork vs hotplug vs cpuset namespaces
commit fabf318e5e4bda0aca2b0d617b191884fda62703 upstream. There are a number of issues: 1) TASK_WAKING vs cgroup_clone (cpusets) copy_process(): sched_fork() child->state = TASK_WAKING; /* waiting for wake_up_new_task() */ if (current->nsproxy != p->nsproxy) ns_cgroup_clone() cgroup_clone() mutex_lock(inode->i_mutex) mutex_lock(cgroup_mutex) cgr...
Peter Zijlstra (committed by Greg Kroah-Hartman)
05e067a973d  sched: Fix hotplug hang
commit 70f1120527797adb31c68bdc6f1b45e182c342c7 upstream. The hot-unplug kstopmachine usage does a wakeup after deactivating the cpu, hence we cannot use cpu_active() here but must rely on the good olde online. Reported-by: Sachin Sant <sachinp@in.ibm.com> Reported-by: Jens Axboe <jens.axboe@oracle.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Tested-by: Jens Axboe <jens.axboe@or...
Peter Zijlstra (committed by Greg Kroah-Hartman)
54a2695e9fd  sched: Remove the cfs_rq dependency from set_task_cpu()
commit 88ec22d3edb72b261f8628226cd543589a6d5e1b upstream. In order to remove the cfs_rq dependency from set_task_cpu() we need to ensure the task is cfs_rq invariant for all callsites. The simple approach is to substract cfs_rq->min_vruntime from se->vruntime on dequeue, and add cfs_rq->min_vruntime on enqueue. However, this has the downside of breaking FAIR_SLEEPERS since we loose the old vr...
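The approach described in that message can be shown in a couple of lines (hypothetical structures below, not the real sched_fair.c entities): on dequeue the entity's vruntime is made relative by subtracting the old runqueue's min_vruntime, and on enqueue it is made absolute again against the new runqueue, so the value stays meaningful across a cross-cpu migration.

#include <linux/types.h>

/* Hypothetical, simplified sched-entity / cfs_rq pair for illustration only. */
struct demo_cfs_rq {
	u64 min_vruntime;
};

struct demo_sched_entity {
	u64 vruntime;
};

static void demo_dequeue(struct demo_sched_entity *se, struct demo_cfs_rq *cfs_rq)
{
	se->vruntime -= cfs_rq->min_vruntime;	/* leave vruntime in relative form */
}

static void demo_enqueue(struct demo_sched_entity *se, struct demo_cfs_rq *cfs_rq)
{
	se->vruntime += cfs_rq->min_vruntime;	/* rebase onto the destination runqueue */
}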
Peter Zijlstra (committed by Greg Kroah-Hartman)
668530636f4  sched: Add pre and post wakeup hooks
commit efbbd05a595343a413964ad85a2ad359b7b7efbd upstream. As will be apparent in the next patch, we need a pre wakeup hook for sched_fair task migration, hence rename the post wakeup hook and one pre wakeup. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Mike Galbraith <efault@gmx.de> LKML-Reference: <20091216170518.114746117@chello.nl> Signed-off-by: Ingo Molnar <mingo@elte.hu> Si...
Peter Zijlstra (committed by Greg Kroah-Hartman)
ca6d844e585  sched: Fix select_task_rq() vs hotplug issues
commit 5da9a0fb673a0ea0a093862f95f6b89b3390c31e upstream. Since select_task_rq() is now responsible for guaranteeing ->cpus_allowed and cpu_active_mask, we need to verify this. select_task_rq_rt() can blindly return smp_processor_id()/task_cpu() without checking the valid masks, select_task_rq_fair() can do the same in the rare case that all SD_flags are disabled. Signed-off-by: Peter Zijlstr...
Peter Zijlstra (committed by Greg Kroah-Hartman)
31a9936c6fc  sched: Fix sched_exec() balancing
commit 3802290628348674985d14914f9bfee7b9084548 upstream. sched: Fix sched_exec() balancing Since we access ->cpus_allowed without holding rq->lock we need a retry loop to validate the result, this comes for near free when we merge sched_migrate_task() into sched_exec() since that already does the needed check. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Mike Galbraith <efault@...
Peter Zijlstra (committed by Greg Kroah-Hartman)
9f2243e5817  sched: Fix broken assertion
commit 077614ee1e93245a3b9a4e1213659405dbeb0ba6 upstream. There's a preemption race in the set_task_cpu() debug check in that when we get preempted after setting task->state we'd still be on the rq proper, but fail the test. Check for preempted tasks, since those are always on the RQ. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <20091217121830.137155561@chello.nl> S...
Ingo Molnar (committed by Greg Kroah-Hartman)
07ad0106405  sched: Make warning less noisy
commit 416eb39556a03d1c7e52b0791e9052ccd71db241 upstream. Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Mike Galbraith <efault@gmx.de> LKML-Reference: <20091216170517.807938893@chello.nl> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Mike Galbraith <efault@gmx.de> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Peter Zijlstra (committed by Greg Kroah-Hartman)
854c9840128  sched: Ensure set_task_cpu() is never called on blocked tasks
commit e2912009fb7b715728311b0d8fe327a1432b3f79 upstream. In order to clean up the set_task_cpu() rq dependencies we need to ensure it is never called on blocked tasks because such usage does not pair with consistent rq->lock usage. This puts the migration burden on ttwu(). Furthermore we need to close a race against changing ->cpus_allowed, since select_task_rq() runs with only preemption di...