path: root/arch
Age  Commit message  Author
2009-12-08  Enable ACPI PDC handshake for VIA/Centaur CPUs  (Harald Welte)

commit d77b81974521c82fa6fda38dfff1b491dcc62a32 upstream.

In commit 0de51088e6a82bc8413d3ca9e28bbca2788b5b53, we introduced the use of acpi-cpufreq on VIA/Centaur CPUs by removing a vendor check for VENDOR_INTEL. However, as it turns out, at least the Nano CPUs also need the PDC (processor driver capabilities) handshake in order to activate the methods required for acpi-cpufreq.

Since arch_acpi_processor_init_pdc() contains another vendor check for Intel, the PDC is not initialized on VIA CPUs. The resulting behavior of a current mainline kernel on such systems is: acpi-cpufreq loads and it indicates CPU frequency changes. However, the CPU stays at a single frequency.

This trivial patch ensures that init_intel_pdc() is called on Intel and VIA/Centaur CPUs alike.

Signed-off-by: Harald Welte <HaraldWelte@viatech.com>
Signed-off-by: Dave Jones <davej@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
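Not the actual kernel change, just a userspace model of the vendor check the patch describes; the enum values, the struct and init_intel_pdc() below are simplified stand-ins:

/* Userspace model of the vendor check described above -- not the real
 * kernel function.  X86_VENDOR_* values and init_intel_pdc() are stand-ins. */
#include <stdio.h>

enum { X86_VENDOR_INTEL, X86_VENDOR_AMD, X86_VENDOR_CENTAUR };

struct cpuinfo { int x86_vendor; };

static void init_intel_pdc(struct cpuinfo *c)
{
	printf("PDC initialized for vendor %d\n", c->x86_vendor);
}

/* Before the fix only X86_VENDOR_INTEL took this path; the patch adds
 * X86_VENDOR_CENTAUR so VIA Nano CPUs get the same PDC handshake. */
static void arch_acpi_processor_init_pdc(struct cpuinfo *c)
{
	if (c->x86_vendor == X86_VENDOR_INTEL ||
	    c->x86_vendor == X86_VENDOR_CENTAUR)
		init_intel_pdc(c);
}

int main(void)
{
	struct cpuinfo via = { .x86_vendor = X86_VENDOR_CENTAUR };

	arch_acpi_processor_init_pdc(&via);
	return 0;
}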
2009-11-09  x86/amd-iommu: Workaround for erratum 63  (Joerg Roedel)

commit c5cca146aa03e1f60fb179df65f0dbaf17bc64ed upstream.

There is an erratum for IOMMU hardware which documents undefined behavior when forwarding SMI requests from peripherals and the DTE of that peripheral has a sysmgt value of 01b. This problem caused weird IO_PAGE_FAULTS in my case.

This patch implements the suggested workaround for that erratum into the AMD IOMMU driver. The erratum is documented with number 63.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-11-09  x86/amd-iommu: Un__init function required on shutdown  (Joerg Roedel)

commit ca0207114f1708b563f510b7781a360ec5b98359 upstream.

The function iommu_feature_disable is required on system shutdown to disable the IOMMU but it is marked as __init. This may result in a panic if the memory is reused. This patch fixes this bug.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-11-09  KVM: Prevent overflow in KVM_GET_SUPPORTED_CPUID (CVE-2009-3638)  (Avi Kivity)

commit 6a54435560efdab1a08f429a954df4d6c740bddf upstream.

The number of entries is multiplied by the entry size, which can overflow on 32-bit hosts. Bound the entry count instead.

Reported-by: David Wagner <daw@cs.berkeley.edu>
Signed-off-by: Avi Kivity <avi@redhat.com>
Cc: Chuck Ebbert <cebbert@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
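A small standalone C demonstration of why the count has to be bounded rather than the byte size computed; the entry size and the limit are illustrative, not KVM's real values:

/* On a 32-bit host, nent * sizeof(entry) wraps around, making a huge
 * request look small.  Bounding nent itself avoids the arithmetic. */
#include <stdint.h>
#include <stdio.h>

struct cpuid_entry { uint32_t regs[10]; };  /* 40 bytes, for illustration */
#define MAX_CPUID_ENTRIES 80                /* stand-in bound */

static int check_request(uint32_t nent)
{
	/* Unsafe pattern: the 32-bit multiply wraps for nent = 0x20000000. */
	uint32_t bytes = nent * (uint32_t)sizeof(struct cpuid_entry);

	/* Safe pattern: bound the count before doing any size arithmetic. */
	if (nent > MAX_CPUID_ENTRIES)
		return -1;

	printf("nent=%u -> %u bytes (no overflow possible after the bound)\n",
	       (unsigned)nent, (unsigned)bytes);
	return 0;
}

int main(void)
{
	check_request(4);                /* accepted */
	if (check_request(0x20000000u))  /* rejected instead of wrapping to 0 */
		puts("oversized request rejected");
	return 0;
}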
2009-11-09  x86-64: Fix register leak in 32-bit syscall auditing  (Jan Beulich)

commit 81766741fe1eee3884219e8daaf03f466f2ed52f upstream.

Restoring %ebp after the call to audit_syscall_exit() is not only unnecessary (because the register didn't get clobbered), but in the sysenter case wasn't even doing the right thing: it loaded %ebp from a location below the top of stack (RBP < ARGOFFSET), i.e. arbitrary kernel data got passed back to user mode in the register.

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Acked-by: Roland McGrath <roland@redhat.com>
LKML-Reference: <4AE5CC4D020000780001BD13@vpn.id2.novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-11-09  tty: Mark generic_serial users as BROKEN  (Alan Cox)

commit 412145947adfca60a4b5b4893fbae82dffa25edd upstream.

There isn't much else I can do with these. I can find no hardware for any of them and no users. The code is broken.

Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-10-12  KVM: x86: Disallow hypercalls for guest callers in rings > 0 [CVE-2009-3290]  (Jan Kiszka)

[ backport to 2.6.27 by Chuck Ebbert <cebbert@redhat.com> ]

commit 07708c4af1346ab1521b26a202f438366b7bcffd upstream.

So far unprivileged guest callers running in ring 3 can issue, e.g., MMU hypercalls. Normally, such callers cannot provide any hand-crafted MMU command structure as it has to be passed by its physical address, but they can still crash the guest kernel by passing random addresses. To close the hole, this patch considers hypercalls valid only if issued from guest ring 0. This may still be relaxed on a per-hypercall basis in the future once required.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Cc: Chuck Ebbert <cebbert@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
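A minimal sketch of the guard described above, with stand-in helpers in place of the real vcpu accessors and error code:

#include <stdio.h>

#define KVM_EPERM 1  /* illustrative error code */

static int get_guest_cpl(void) { return 3; }  /* pretend the caller is in ring 3 */

static int emulate_hypercall(void)
{
	/* Reject hypercalls unless the guest is running in ring 0. */
	if (get_guest_cpl() != 0)
		return -KVM_EPERM;
	/* ... dispatch the hypercall number as before ... */
	return 0;
}

int main(void)
{
	printf("hypercall from ring 3 -> %d\n", emulate_hypercall());
	return 0;
}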
2009-10-12  x86: Increase MIN_GAP to include randomized stack  (Michal Hocko)

[ trivial backport to 2.6.27: Chuck Ebbert <cebbert@redhat.com> ]

commit 80938332d8cf652f6b16e0788cf0ca136befe0b5 upstream.

Currently we are not including the randomized stack size when calculating the mmap_base address in arch_pick_mmap_layout for the topdown case. This might cause mmap_base to start inside the stack's reserved area, because the stack is randomized by 1GB for 64-bit (8MB for 32-bit) while the minimum gap is only 128MB. If the stack really grows down to mmap_base, we can get a silent overwrite of the mmap region by stack values.

Let's include the maximum stack randomization size in MIN_GAP, which is used as the low bound for the gap in mmap.

Signed-off-by: Michal Hocko <mhocko@suse.cz>
LKML-Reference: <1252400515-6866-1-git-send-email-mhocko@suse.cz>
Acked-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: Chuck Ebbert <cebbert@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
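A worked example of the gap arithmetic, assuming an illustrative 64-bit TASK_SIZE and an 8 MB RLIMIT_STACK; this is a model of the calculation, not the kernel's arch_pick_mmap_layout():

#include <stdio.h>

#define MB (1024ULL * 1024ULL)
#define GB (1024ULL * MB)

int main(void)
{
	unsigned long long task_size   = 0x7ffffffff000ULL; /* illustrative TASK_SIZE */
	unsigned long long stack_rlim  = 8 * MB;            /* RLIMIT_STACK for the demo */
	unsigned long long old_min_gap = 128 * MB;
	unsigned long long new_min_gap = 128 * MB + 1 * GB;  /* + max stack randomization (64-bit) */

	unsigned long long gap      = stack_rlim;
	unsigned long long old_base = task_size - (gap < old_min_gap ? old_min_gap : gap);
	unsigned long long new_base = task_size - (gap < new_min_gap ? new_min_gap : gap);

	printf("old mmap_base: %#llx\n", old_base); /* may land inside the randomized stack */
	printf("new mmap_base: %#llx\n", new_base); /* leaves room for 1 GB of stack jitter */
	return 0;
}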
2009-10-12  x86: Don't leak 64-bit kernel register values to 32-bit processes  (Jan Beulich)

commit 24e35800cdc4350fc34e2bed37b608a9e13ab3b6 upstream

x86: Don't leak 64-bit kernel register values to 32-bit processes

While 32-bit processes can't directly access R8...R15, they can gain access to these registers by temporarily switching themselves into 64-bit mode. Therefore, registers not preserved anyway by called C functions (i.e. R8...R11) must be cleared prior to returning to user mode.

Signed-off-by: Jan Beulich <jbeulich@novell.com>
LKML-Reference: <4AC34D73020000780001744A@vpn.id2.novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cc: Chuck Ebbert <cebbert@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-10-12  x86-64: slightly stream-line 32-bit syscall entry code  (Jan Beulich)

commit 295286a89107c353b9677bc604361c537fd6a1c0 upstream

x86-64: slightly stream-line 32-bit syscall entry code

[ required for following patch to apply properly ]

Avoid updating registers or memory twice as well as needlessly loading or copying registers.

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cc: Chuck Ebbert <cebbert@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-24  powerpc/pseries: Fix to handle slb resize across migration  (Brian King)

commit 46db2f86a3b2a94e0b33e0b4548fb7b7b6bdff66 upstream.

The SLB can change sizes across a live migration, which was not being handled, resulting in possible machine crashes during migration if migrating to a machine which has a smaller max SLB size than the source machine. Fix this by first reducing the SLB size to the minimum possible value, which is 32, prior to migration. Then during the device tree update which occurs after migration, we make the call to ensure the SLB gets updated. Also add the slb_size to the lparcfg output so that the migration tools can check to make sure the kernel has this capability before allowing migration in scenarios where the SLB size will change.

BenH: Fixed #include <asm/mmu-hash64.h> -> <asm/mmu.h> to avoid breaking ppc32 build

Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-08  KVM: MMU: protect kvm_mmu_change_mmu_pages with mmu_lock  (Marcelo Tosatti)

(cherry picked from commit 7c8a83b75a38a807d37f5a4398eca2a42c8cf513)

kvm_handle_hva, called by MMU notifiers, manipulates mmu data only with the protection of mmu_lock. Update kvm_mmu_change_mmu_pages callers to take mmu_lock, thus protecting against kvm_handle_hva.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-08  KVM: x86: check for cr3 validity in mmu_alloc_roots  (Marcelo Tosatti)

(cherry picked from commit 8986ecc0ef58c96eec48d8502c048f3ab67fd8e2)

Verify that the cr3 address stored in vcpu->arch.cr3 points to an existent memslot. If not, inject a triple fault.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-08  KVM: MMU: do not free active mmu pages in free_mmu_pages()  (Gleb Natapov)

(cherry picked from commit f00be0cae4e6ad0a8c7be381c6d9be3586800b3e)

free_mmu_pages() should only undo what alloc_mmu_pages() does. Free mmu pages from the generic VM destruction function, kvm_destroy_vm().

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-08  KVM: Fix PDPTR reloading on CR4 writes  (Avi Kivity)

(cherry picked from commit a2edf57f510cce6a389cc14e58c6ad0a4296d6f9)

The processor is documented to reload the PDPTRs while in PAE mode if any of the CR4 bits PSE, PGE, or PAE change. Linux relies on this behaviour when zapping the low mappings of PAE kernels during boot.

The code already handled changes to CR4.PAE; augment it to also notice changes to PSE and PGE.

This triggered while booting an F11 PAE kernel; the futex initialization code runs before any CR3 reloads and writes to a NULL pointer; the futex subsystem ended up uninitialized, killing PI futexes and pulseaudio which uses them.

Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
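A sketch of the widened condition; the CR4 bit positions follow the architectural layout, and load_pdptrs() is a stand-in:

#include <stdio.h>

#define X86_CR4_PSE (1UL << 4)
#define X86_CR4_PAE (1UL << 5)
#define X86_CR4_PGE (1UL << 7)

static void load_pdptrs(void) { puts("PDPTRs reloaded"); }

static void set_cr4(unsigned long old_cr4, unsigned long new_cr4)
{
	unsigned long pdptr_bits = X86_CR4_PAE | X86_CR4_PGE | X86_CR4_PSE;

	/* Previously only a PAE toggle triggered the reload; now any of the
	 * three bits changing does. */
	if ((old_cr4 ^ new_cr4) & pdptr_bits)
		load_pdptrs();
}

int main(void)
{
	set_cr4(X86_CR4_PAE, X86_CR4_PAE | X86_CR4_PGE); /* PGE toggled -> reload */
	return 0;
}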
2009-09-08  KVM: Make paravirt tlb flush also reload the PAE PDPTRs  (Avi Kivity)

(cherry picked from commit a8cd0244e9cebcf9b358d24c7e7410062f3665cb)

The paravirt tlb flush may be used not only to flush TLBs, but also to reload the four page-directory-pointer-table entries, as it is used as a replacement for reloading CR3. Change the code to do the entire CR3 reloading dance instead of simply flushing the TLB.

Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-08  KVM: VMX: Handle vmx instruction vmexits  (Avi Kivity)

(cherry picked from commit e3c7cb6ad7191e92ba89d00a7ae5f5dd1ca0c214)

If a guest tries to use vmx instructions, inject a #UD to let it know the instruction is not implemented, rather than crashing. This prevents guest userspace from crashing the guest kernel.

Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-08  KVM: Make EFER reads safe when EFER does not exist  (Avi Kivity)

(cherry picked from commit e286e86e6d2042d67d09244aa0e05ffef75c9d54)

Some processors don't have EFER; don't oops if userspace wants us to read EFER when we check NX.

Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-08  KVM: SVM: Remove port 80 passthrough  (Avi Kivity)

(cherry picked from commit 99f85a28a78e96d28907fe036e1671a218fee597)

KVM optimizes guest port 80 accesses by passing them through to the host. Some AMD machines die on port 80 writes, allowing the guest to hard-lock the host. Remove the port passthrough to avoid the problem.

Reported-by: Piotr Jaroszyński <p.jaroszynski@gmail.com>
Tested-by: Piotr Jaroszyński <p.jaroszynski@gmail.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-08  KVM: VMX: Don't allow uninhibited access to EFER on i386  (Avi Kivity)

(cherry picked from commit 16175a796d061833aacfbd9672235f2d2725df65)

vmx_set_msr() does not allow i386 guests to touch EFER, but they can still do so through the default: label in the switch. If they set EFER_LME, they can oops the host.

Fix by having EFER access go through the normal channel (which will check for EFER_LME) even on i386.

Reported-and-tested-by: Benjamin Gilbert <bgilbert@cs.cmu.edu>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-08  KVM: VMX: Set IGMT bit in EPT entry  (Sheng Yang)

(cherry picked from commit 928d4bf747e9c290b690ff515d8f81e8ee226d97)

There is a potential issue that, when the guest uses its page tables without vmexits (EPT enabled), the guest would use the PAT/PCD/PWT bits to index the PAT MSR for its memory, which would be inconsistent with the host side and could cause a host MCE due to inconsistent cache attributes.

The patch sets the IGMT bit in the EPT entry to ignore the guest PAT and uses WB as the default memory type to protect the host (note that all memory mapped by KVM should be WB).

Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
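A sketch of how such a leaf entry could be composed, using the architectural EPT bit layout (memory type in bits 5:3, ignore-PAT in bit 6); the helper and the permission value are illustrative, not KVM's actual spte builder:

#include <stdint.h>
#include <stdio.h>

#define EPT_MT_SHIFT   3
#define EPT_MT_WB      6ULL                 /* write-back memory type */
#define EPT_IGNORE_PAT (1ULL << 6)          /* the "IGMT" bit in the commit */

static uint64_t make_ept_leaf(uint64_t host_pfn, uint64_t perm)
{
	/* Host page frame, RWX permission bits, WB type, and ignore guest PAT. */
	return (host_pfn << 12) | perm |
	       (EPT_MT_WB << EPT_MT_SHIFT) | EPT_IGNORE_PAT;
}

int main(void)
{
	printf("%#llx\n", (unsigned long long)make_ept_leaf(0x1234, 0x7));
	return 0;
}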
2009-09-08  KVM: MMU: increase per-vcpu rmap cache alloc size  (Marcelo Tosatti)

(cherry picked from commit c41ef344de212bd918f7765af21b5008628c03e0)

The page fault path can use two rmap_desc structures, if:

- walk_addr's dirty pte update allocates one rmap_desc.
- mmu_lock is dropped, sptes are zapped resulting in rmap_desc being freed.
- fetch->mmu_set_spte allocates another rmap_desc.

Increase to 4 for safety.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-08  KVM: set debug registers after "schedulable" section  (Marcelo Tosatti)

(cherry picked from commit 29415c37f043d1d54dcf356601d738ff6633b72b)

The vcpu thread can be preempted after the guest_debug_pre() callback, resulting in invalid debug registers on the new vcpu. Move it inside the non-preemptable section.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-08  KVM: add MC5_MISC msr read support  (Joerg Roedel)

(cherry picked from commit a89c1ad270ca7ad0eec2667bc754362ce7b142be)

Currently KVM implements MC0-MC4_MISC read support. When booting Linux this results in KVM warnings in the kernel log when the guest tries to read MC5_MISC. Fix these warnings with this patch.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-08  KVM: Reduce stack usage in kvm_pv_mmu_op()  (Dave Hansen)

(cherry picked from commit 6ad18fba05228fb1d47cdbc0339fe8b3fca1ca26)

We're in a hot path. We can't use kmalloc() because it might impact performance. So, we just stick the buffer that we need into the kvm_vcpu_arch structure. This is used very often, so it is not really a waste.

We also have to move the buffer structure's definition to the arch-specific x86 kvm header.

Signed-off-by: Dave Hansen <dave@linux.vnet.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-08  KVM: Reduce stack usage in kvm_arch_vcpu_ioctl()  (Dave Hansen)

(cherry picked from commit b772ff362ec6b821c8a5227a3355e263f917bfad)

[sheng: fix KVM_GET_LAPIC using wrong size]

Signed-off-by: Dave Hansen <dave@linux.vnet.ibm.com>
Signed-off-by: Sheng Yang <sheng.yang@intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-08  KVM: Reduce kvm stack usage in kvm_arch_vm_ioctl()  (Dave Hansen)

(cherry picked from commit f0d662759a2465babdba1160749c446648c9d159)

On my machine with gcc 3.4, kvm uses ~2k of stack in a few select functions. This is mostly because gcc fails to notice that the different case: statements could have their stack usage combined. It overflows very nicely if interrupts happen during one of these large uses.

This patch uses two methods for reducing stack usage:
1. Dynamically allocate large objects instead of putting them on the stack.
2. Use a union{} member for all of the case variables. This tricks gcc into combining them all into a single stack allocation. (There's also a comment on this; a sketch follows this entry.)

Signed-off-by: Dave Hansen <dave@linux.vnet.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
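A compilable illustration of the union trick from point 2; the struct names and sizes are made up:

#include <stdio.h>

struct irqchip_state { char data[512]; };
struct pit_state     { char data[288]; };
struct msr_list      { char data[640]; };

/* One union shared by every case: the compiler reserves max(), not sum(). */
union ioctl_args {
	struct irqchip_state chip;
	struct pit_state     pit;
	struct msr_list      msrs;
};

static long vm_ioctl(unsigned int cmd)
{
	union ioctl_args u;   /* a single stack slot covers all cases */

	switch (cmd) {
	case 1: u.chip.data[0] = 1; break;
	case 2: u.pit.data[0]  = 2; break;
	case 3: u.msrs.data[0] = 3; break;
	default: return -1;
	}
	return u.chip.data[0]; /* keep the write from being optimized away */
}

int main(void)
{
	printf("separate locals: %zu bytes, union: %zu bytes\n",
	       sizeof(struct irqchip_state) + sizeof(struct pit_state) +
	       sizeof(struct msr_list), sizeof(union ioctl_args));
	(void)vm_ioctl(1);
	return 0;
}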
2009-09-08  KVM: MMU: Fix setting the accessed bit on non-speculative sptes  (Avi Kivity)

(cherry picked from commit 3201b5d9f0f7ef392886cd76dcd2c69186d9d5cd)

The accessed bit was accidentally turned on in a random flag word, rather than the spte itself, which was lucky, since it used the non-EPT-compatible PT_ACCESSED_MASK.

Fix by turning the bit on in the spte and changing it to use the portable accessed mask.

Signed-off-by: Avi Kivity <avi@qumranet.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-08  KVM: MMU: Flush tlbs after clearing write permission when accessing dirty log  (Avi Kivity)

(cherry picked from commit 171d595d3b3254b9a952af8d1f6965d2e85dcbaa)

Otherwise, the cpu may allow writes to the tracked pages, and we lose some display bits or fail to migrate correctly.

Signed-off-by: Avi Kivity <avi@qumranet.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-08  KVM: MMU: Add locking around kvm_mmu_slot_remove_write_access()  (Avi Kivity)

(cherry picked from commit 2245a28fe2e6fdb1bdabc4dcde1ea3a5c37e2a9e)

It was generally safe due to slots_lock being held for write, but it wasn't very nice.

Signed-off-by: Avi Kivity <avi@qumranet.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-08  KVM: Allocate guest memory as MAP_PRIVATE, not MAP_SHARED  (Avi Kivity)

(cherry picked from commit acee3c04e8208c17aad1baff99baa68d71640a19)

There is no reason to share internal memory slots with fork()ed instances.

Signed-off-by: Avi Kivity <avi@qumranet.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-08  KVM: Load real mode segments correctly  (Avi Kivity)

(cherry picked from commit f4bbd9aaaae23007e4d79536d35a30cbbb11d407)

Real mode segments do not reference the GDT or LDT; they simply compute base = selector * 16.

Signed-off-by: Avi Kivity <avi@qumranet.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
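The whole rule fits in one line of C; the selector value below is just an example:

#include <stdint.h>
#include <stdio.h>

static uint32_t real_mode_segment_base(uint16_t selector)
{
	return (uint32_t)selector << 4;   /* base = selector * 16 */
}

int main(void)
{
	/* e.g. selector 0xb800 -> base 0xb8000, the VGA text buffer */
	printf("base = %#x\n", real_mode_segment_base(0xb800));
	return 0;
}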
2009-09-08  KVM: VMX: Change segment dpl at reset to 3  (Avi Kivity)

(cherry picked from commit a16b20da879430fdf245ed45461ed40ffef8db3c)

This is more emulation friendly, if not 100% correct.

Signed-off-by: Avi Kivity <avi@qumranet.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-08  KVM: VMX: Change cs reset state to be a data segment  (Avi Kivity)

(cherry picked from commit 5706be0dafd6f42852f85fbae292301dcad4ccec)

Real mode cs is a data segment, not a code segment.

Signed-off-by: Avi Kivity <avi@qumranet.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-08-16  x86: enable GART-IOMMU only after setting up protection methods  (Mark Langsdorf)

commit fe2245c905631a3a353504fc04388ce3dfaf9d9e upstream.

The current code to set up the GART as an IOMMU enables GART translations before it removes the aperture from the kernel memory map, sets the GART PTEs to UC, sets up the guard and scratch pages, or does a wbinvd(). This leaves the possibility of cache aliasing open and can cause system crashes.

Re-order the code so as to enable the GART translations only after all safeguards are in place and the tlb has been flushed.

AMD has tested this patch on both Istanbul systems and 1st generation Opteron systems with AGP enabled and seen no adverse effects. Istanbul systems with HT Assist enabled sometimes see MCE errors due to cache artifacts with the unmodified code.

Signed-off-by: Mark Langsdorf <mark.langsdorf@amd.com>
Cc: Joerg Roedel <joerg.roedel@amd.com>
Cc: akpm@linux-foundation.org
Cc: jbarnes@virtuousgeek.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-08-16  parisc: ensure broadcast tlb purge runs single threaded  (Helge Deller)

commit e82a3b75127188f20c7780bec580e148beb29da7 upstream

parisc: ensure broadcast tlb purge runs single threaded

The TLB flushing functions on hppa, which cause PxTLB broadcasts on the system bus, need to be protected by irq-safe spinlocks to prevent irq handlers from deadlocking the kernel. The deadlocks only happened during I/O intensive loads and triggered pretty seldom, which is why this bug went unnoticed for so long.

Signed-off-by: Helge Deller <deller@gmx.de>
[edited to use spin_lock_irqsave on UP as well since we'd been locking there all this time anyway, --kyle]
Signed-off-by: Kyle McMartin <kyle@mcmartin.ca>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-07-30  x86: don't use 'access_ok()' as a range check in get_user_pages_fast()  (Linus Torvalds)

[ Upstream commit 7f8189068726492950bf1a2dcfd9b51314560abf - modified for stable to not use the sloppy __VIRTUAL_MASK_SHIFT ]

It's really not right to use 'access_ok()', since that is meant for the normal "get_user()" and "copy_from/to_user()" accesses, which are done through the TLB, rather than through the page tables.

Why? access_ok() does both too few, and too many checks. Too many, because it is meant for regular kernel accesses that will not honor the 'user' bit in the page tables, and because it honors the USER_DS vs KERNEL_DS distinction that we shouldn't care about in GUP. And too few, because it doesn't do the 'canonical' check on the address on x86-64, since the TLB will do that for us.

So instead of using a function that isn't meant for this, and does something else and much more complicated, just do the real rules: we don't want the range to overflow, and on x86-64, we want it to be a canonical low address (on 32-bit, all addresses are canonical).

Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
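A userspace model of those two rules, with an illustrative user-address limit standing in for the kernel's constant:

#include <stdint.h>
#include <stdio.h>

#define USER_ADDR_LIMIT 0x00007ffffffff000ULL  /* illustrative x86-64 user limit */

static int range_ok(uint64_t start, uint64_t len)
{
	uint64_t end = start + len;

	if (end < start)               /* range wraps around: reject */
		return 0;
	if (end > USER_ADDR_LIMIT)     /* must stay a canonical low (user) address */
		return 0;
	return 1;
}

int main(void)
{
	printf("%d\n", range_ok(0x400000, 4096));            /* 1: fine */
	printf("%d\n", range_ok(UINT64_MAX - 100, 4096));    /* 0: overflows */
	printf("%d\n", range_ok(0xffff800000000000ULL, 64)); /* 0: kernel half */
	return 0;
}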
2009-07-30  x86-64: Fix bad_srat() to clear all state  (Andi Kleen)

commit 429b2b319af3987e808c18f6b81313104caf782c upstream.

Need to clear both nodes and nodes_add state for start/end.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
LKML-Reference: <20090718065657.GA2898@basil.fritz.box>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-07-02  x86: handle initrd that extends into unusable memory  (Yinghai Lu)

commit 8c5dd8f43367f4f266dd616f11658005bc2d20ef upstream.

On a system where system memory (according to e820) is not covered by mtrr, mtrr_trim_memory converts a portion of memory to reserved, but the bootloader has already put the initrd in that range. Thus, we need 64-bit to use relocate_initrd too.

[ Impact: fix using initrd when mtrr_trim_memory happens ]

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-07-02  x86: quirk for reboot stalls on a Dell Optiplex 330  (Steve Conklin)

commit 093bac154c142fa1fb31a3ac69ae1bc08930231b upstream.

The Dell Optiplex 330 appears to hang on reboot. This is resolved by adding a quirk to set bios reboot.

Signed-off-by: Leann Ogasawara <leann.ogasawara@canonical.com>
Signed-off-by: Steve Conklin <steve.conklin@canonical.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cc: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-07-02  x86: Add quirk for reboot stalls on a Dell Optiplex 360  (Jean Delvare)

commit 4a4aca641bc4598e77b866804f47c651ec4a764d upstream.

The Dell Optiplex 360 hangs on reboot, just like the Optiplex 330, so the same quirk is needed.

Signed-off-by: Jean Delvare <jdelvare@suse.de>
Cc: Steve Conklin <steve.conklin@canonical.com>
Cc: Leann Ogasawara <leann.ogasawara@canonical.com>
LKML-Reference: <200906051202.38311.jdelvare@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-06-11  x86: fix DMI on EFI  (Brian Maly)

commit ff0c0874905fb312ca1491bbdac2653b0b48c20b upstream.

Impact: reactivate DMI quirks on EFI hardware

DMI tables are loaded by EFI, so the dmi calls must happen after efi_init() and not before. Currently Apple hardware uses DMI to determine the framebuffer mappings for efifb. Without DMI working you also have no video on MacBook Pro.

This patch resolves the DMI issue for EFI hardware (DMI is now properly detected at boot), and additionally efifb now loads on Apple hardware (i.e. video works).

Signed-off-by: Brian Maly <bmaly@redhat>
Acked-by: Yinghai Lu <yinghai@kernel.org>
Cc: ying.huang@intel.com
LKML-Reference: <49ADEDA3.1030406@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-06-11  x86/pci: fix mmconfig detection with 32bit near 4g  (Yinghai Lu)

commit 75e613cdc7bb2ba3795b1bc3ddf19476c767ba68 upstream.

Pascal reported and bisected a commit:
| x86/PCI: don't call e820_all_mapped with -1 in the mmconfig case
which broke one system.

ACPI: Using IOAPIC for interrupt routing
PCI: MCFG configuration 0: base f0000000 segment 0 buses 0 - 255
PCI: MCFG area at f0000000 reserved in ACPI motherboard resources
PCI: Using MMCONFIG for extended config space

It didn't have
PCI: updated MCFG configuration 0: base f0000000 segment 0 buses 0 - 63
anymore, and tried to use 0xf0000000 - 0xffffffff for mmconfig.

For 32-bit, mcfg_res->end could be 32-bit only (if 64-bit resources aren't used), so use end - 1 to pass the value in mcfg->end to avoid overflow. We don't need to worry about the e820 path; those are always 64-bit.

Reported-by: Pascal Terjan <pterjan@mandriva.com>
Bisected-by: Pascal Terjan <pterjan@mandriva.com>
Tested-by: Pascal Terjan <pterjan@mandriva.com>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
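A small demonstration of the 32-bit truncation the commit is working around; the values mirror the log above, and the variable names are illustrative:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t base = 0xf0000000ULL;
	uint64_t size = 0x10000000ULL;                   /* 256 MB MMCONFIG window */
	uint32_t end_excl = (uint32_t)(base + size);     /* 0x100000000 wraps to 0 */
	uint32_t end_incl = (uint32_t)(base + size - 1); /* 0xffffffff still fits */

	printf("exclusive end truncated: %#x\n", end_excl);
	printf("inclusive end (end - 1): %#x\n", end_incl);
	return 0;
}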
2009-06-11  x86: ignore VM_LOCKED when determining if hugetlb-backed page tables can be shared or not  (Mel Gorman)

commit 32b154c0b0bae2879bf4e549d861caf1759a3546 upstream.

Addresses http://bugzilla.kernel.org/show_bug.cgi?id=13302

On x86 and x86-64, it is possible that page tables are shared between shared mappings backed by hugetlbfs. As part of this, page_table_shareable() checks a pair of vma->vm_flags and they must match if they are to be shared. All VMA flags are taken into account, including VM_LOCKED.

The problem is that VM_LOCKED is cleared on fork(). When a process with a shared memory segment forks() to exec() a helper, there will be shared VMAs with different flags. The impact is that the shared segment is sometimes considered shareable and other times not, depending on what process is checking.

What happens is that the segment page tables are being shared but the count is inaccurate depending on the ordering of events. As the page tables are freed with put_page(), bad pmd's are found when some of the children exit. The hugepage counters also get corrupted and the Total and Free count will no longer match even when all the hugepage-backed regions are freed. This requires a reboot of the machine to "fix".

This patch addresses the problem by comparing all flags except VM_LOCKED when deciding if pagetables should be shared or not for hugetlbfs-backed mapping.

Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Acked-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: <starlight@binnacle.cx>
Cc: Eric B Munson <ebmunson@us.ibm.com>
Cc: Adam Litke <agl@us.ibm.com>
Cc: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
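A sketch of the fixed comparison; the vm_flags bit values match the classic kernel definitions, but the struct and helper are cut-down stand-ins:

#include <stdio.h>

#define VM_READ   0x0001UL
#define VM_WRITE  0x0002UL
#define VM_SHARED 0x0008UL
#define VM_LOCKED 0x2000UL

struct vma { unsigned long vm_flags; };

static int page_table_shareable(const struct vma *a, const struct vma *b)
{
	/* Mask out VM_LOCKED before comparing, so a forked child that lost
	 * the mlock bit still shares page tables with its parent. */
	return (a->vm_flags & ~VM_LOCKED) == (b->vm_flags & ~VM_LOCKED);
}

int main(void)
{
	struct vma parent = { VM_READ | VM_WRITE | VM_SHARED | VM_LOCKED };
	struct vma child  = { VM_READ | VM_WRITE | VM_SHARED };

	printf("shareable: %d\n", page_table_shareable(&parent, &child));
	return 0;
}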
2009-06-11  x86: work around Fedora-11 x86-32 kernel failures on Intel Atom CPUs  (Ingo Molnar)

commit 211b3d03c7400f48a781977a50104c9d12f4e229 upstream

[Trivial backport to 2.6.27 by cebbert@redhat.com]

x86: work around Fedora-11 x86-32 kernel failures on Intel Atom CPUs

Impact: work around boot crash

Work around Intel Atom erratum AAH41 (probabilistically) - it's triggering in the field.

Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Tested-by: Kyle McMartin <kyle@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cc: <cebbert@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-06-11  sparc64: Reschedule KGDB capture to a software interrupt.  (David S. Miller)

[ Upstream commit 42cc77c861e8e850e86252bb5b1e12e006261973 ]

Otherwise it might interrupt switch_to() midstream and use half-cooked register window state.

Reported-by: Chris Torek <chris.torek@windriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-06-11  sparc64: Fix lost interrupts on sun4u.  (David S. Miller)

[ Upstream commit d0cac39e4ec8097e4c7099d291b1fdcc0fe56b58 ]

Based upon a report by Meelis Roos.

Sparc64 SBUS and PCI controllers use a combination of IMAP and ICLR registers to manage device interrupts. The IMAP register contains the "valid" enable bit as well as CPU targeting information, whereas the ICLR register is written with zero at the end of handling an interrupt to reset the state machine for that interrupt to IDLE so it can be sent again.

For PCI slot and SBUS slot devices we can have multiple interrupts sharing the same IMAP register. There are individual ICLR registers but only one IMAP register for managing those. We represent each shared case with individual virtual IRQs, so the generic IRQ layer thinks there is only one user of the IRQ instance.

In such shared IMAP cases this is wrong: if there are multiple active users, a free_irq() call will prematurely turn off the interrupt by clearing the Valid bit in the IMAP register even though there are other active users.

Fix this by simply doing nothing in sun4u_disable_irq() and checking IRQF_DISABLED during IRQ dispatch. This situation doesn't exist in the hypervisor sun4v cases, so I left those alone.

Tested-by: Meelis Roos <mroos@linux.ee>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-06-11  sparc64: Fix crash with /proc/iomem  (Mikulas Patocka)

[ Upstream commit 192d7a4667c6d11d1a174ec4cad9a3c5d5f9043c ]

When you compile the kernel on Sparc64 with heap memory checking and type "cat /proc/iomem", you get a crash, because pointers in struct resource are uninitialized.

Most code fills struct resource with zeros, so I assume that it is the responsibility of the caller of request_resource to initialize it, not the responsibility of the request_resource function.

After 2.6.29 is out, a check for uninitialized fields could be added to request_resource to avoid crashes like this.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
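The caller-side pattern the commit implies, shown with a cut-down stand-in for struct resource:

#include <stdio.h>
#include <string.h>

struct resource {
	unsigned long start, end;
	const char *name;
	struct resource *parent, *sibling, *child; /* must not be stack garbage */
};

int main(void)
{
	struct resource res;

	memset(&res, 0, sizeof(res));   /* zero everything before filling fields */
	res.name  = "video ram";
	res.start = 0x000a0000;
	res.end   = 0x000bffff;

	printf("%s: %#lx-%#lx (parent=%p)\n", res.name, res.start, res.end,
	       (void *)res.parent);
	return 0;
}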
2009-06-11  sparc64: Flush TLB before releasing pages.  (David S. Miller)

[ Upstream commit 86ee79c3dbd48d7430fd81edc1da3516c9f6dabc ]

tlb_flush_mmu() needs to flush pending TLB entries before processing the mmu_gather ->pages list.

Noticed by Benjamin Herrenschmidt.

Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-06-11  sparc64: Fix MM refcount check in smp_flush_tlb_pending().  (David S. Miller)

[ Upstream commit f9384d41c02408dd404aa64d66d0ef38adcf6479 ]

As explained by Benjamin Herrenschmidt:

> CPU 0 is running the context, task->mm == task->active_mm == your
> context. The CPU is in userspace happily churning things.
>
> CPU 1 used to run it, not anymore, it's now running fancyfsd which
> is a kernel thread, but current->active_mm still points to that
> same context.
>
> Because there's only one "real" user, mm_users is 1 (but mm_count is
> elevated, it's just that the presence on CPU 1 as active_mm has no
> effect on mm_count().
>
> At this point, fancyfsd decides to invalidate a mapping currently mapped
> by that context, for example because a networked file has changed
> remotely or something like that, using unmap_mapping_ranges().
>
> So CPU 1 goes into the zapping code, which eventually ends up calling
> flush_tlb_pending(). Your test will succeed, as current->active_mm is
> indeed the target mm for the flush, and mm_users is indeed 1. So you
> will -not- send an IPI to the other CPU, and CPU 0 will continue happily
> accessing the pages that should have been unmapped.

To fix this problem, check ->mm instead of ->active_mm, and this means:

> So if you test current->mm, you effectively account for mm_users == 1,
> so the only way the mm can be active on another processor is as a lazy
> mm for a kernel thread. So your test should work properly as long
> as you don't have a HW that will do speculative TLB reloads into the
> TLB on that other CPU (and even if you do, you flush-on-switch-in should
> get rid of any crap here).

And therefore we should be OK.

Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
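A userspace model of the corrected test; the types and field names are stand-ins:

#include <stdio.h>

struct mm_struct { int mm_users; };
struct task { struct mm_struct *mm, *active_mm; };

/* Only skip the cross-CPU flush when the current task really owns the mm
 * (current->mm), not when it merely borrows it as a lazy active_mm, which
 * is what kernel threads like fancyfsd do. */
static int can_flush_locally_only(const struct task *current_task,
				  struct mm_struct *mm)
{
	return mm->mm_users == 1 && current_task->mm == mm;
}

int main(void)
{
	struct mm_struct ctx = { .mm_users = 1 };
	struct task fancyfsd = { .mm = NULL, .active_mm = &ctx }; /* kernel thread */

	/* Prints 0: we must still IPI the CPU actually running this context. */
	printf("%d\n", can_flush_locally_only(&fancyfsd, &ctx));
	return 0;
}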