author	Kees Cook <keescook@chromium.org>	2013-07-16 11:34:41 -0700
committer	H. Peter Anvin <hpa@linux.intel.com>	2013-07-16 15:14:48 -0700
commit	4df05f361937ee86e5a8c9ead8aeb6a19ea9b7d7 (patch)
tree	5ecfbd89245a33b1902b210cf92619587e12209c /arch/x86/kernel/head_64.S
parent	5ff560fd48d5b3d82fa0c3aff625c9da1a301911 (diff)
x86: Make sure IDT is page aligned
Since the IDT is referenced from a fixmap, make sure it is page aligned. Merge with the 32-bit one, since it was already aligned to deal with the F00F bug. Since the bss is cleared before IDT setup, it can live there. This also moves the other *_idt_table variables into common locations.

This avoids the risk of the IDT ever being moved in the bss and having the mapping be offset, resulting in calling incorrect handlers. In the current upstream kernel this is not a manifested bug, but heavily patched kernels (such as those using the PaX patch series) did encounter it.

The tables other than idt_table technically do not need to be page aligned, at least not at the current time, but using a common declaration avoids mistakes. On 64 bits the table is exactly one page long anyway.

Signed-off-by: Kees Cook <keescook@chromium.org>
Link: http://lkml.kernel.org/r/20130716183441.GA14232@www.outflux.net
Reported-by: PaX Team <pageexec@gmail.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
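As context for where these tables end up: the diffstat on this page covers only head_64.S, so the exact C file is an assumption rather than part of the diff shown here. A minimal sketch of page-aligned, bss-resident declarations consistent with the message above:

	/*
	 * Sketch only: the surrounding file is an assumption; the point is
	 * the storage class.  __page_aligned_bss places each table in
	 * page-aligned .bss, which is cleared before the IDT is set up, so
	 * no initializer is needed.  On x86-64 a gate_desc is 16 bytes and
	 * NR_VECTORS is 256, so idt_table is exactly one page (4096 bytes),
	 * matching the ".skip IDT_ENTRIES * 16" removed below.
	 */
	#include <linux/linkage.h>      /* __page_aligned_bss */
	#include <asm/desc.h>           /* gate_desc */
	#include <asm/irq_vectors.h>    /* NR_VECTORS */

	gate_desc idt_table[NR_VECTORS] __page_aligned_bss;
	gate_desc debug_idt_table[NR_VECTORS] __page_aligned_bss;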
Diffstat (limited to 'arch/x86/kernel/head_64.S')
-rw-r--r--	arch/x86/kernel/head_64.S	15
1 file changed, 0 insertions, 15 deletions
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 5e4d8a8a5c40..e1aabdb314c8 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -512,21 +512,6 @@ ENTRY(phys_base)
#include "../../x86/xen/xen-head.S"
- .section .bss, "aw", @nobits
- .align L1_CACHE_BYTES
-ENTRY(idt_table)
- .skip IDT_ENTRIES * 16
-
- .align L1_CACHE_BYTES
-ENTRY(debug_idt_table)
- .skip IDT_ENTRIES * 16
-
-#ifdef CONFIG_TRACING
- .align L1_CACHE_BYTES
-ENTRY(trace_idt_table)
- .skip IDT_ENTRIES * 16
-#endif
-
__PAGE_ALIGNED_BSS
NEXT_PAGE(empty_zero_page)
.skip PAGE_SIZE
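The failure mode described in the commit message (the IDT moving within the bss while its fixmap mapping stays page-granular) comes from the way the IDT of this era is exposed read-only through a fixmap slot. A rough sketch of that reference, assuming the FIX_RO_IDT mapping performed during trap_init() and using a hypothetical helper name:

	/*
	 * Sketch (hypothetical helper name) of the fixmap reference that
	 * makes page alignment matter: the mapping is established a page at
	 * a time, so if idt_table started at a nonzero offset inside its
	 * page, the fixmap virtual address loaded into IDTR would not point
	 * at the descriptors and the CPU would dispatch through bogus gates.
	 */
	#include <asm/fixmap.h>         /* FIX_RO_IDT, fix_to_virt(), __set_fixmap() */
	#include <asm/desc.h>           /* idt_descr, gate_desc */
	#include <asm/pgtable_types.h>  /* PAGE_KERNEL_RO */

	static void map_idt_read_only(void)
	{
		/* Map the page holding idt_table read-only at the fixmap slot. */
		__set_fixmap(FIX_RO_IDT, __pa_symbol(idt_table), PAGE_KERNEL_RO);

		/* Point the IDT descriptor at the fixmap alias of that page. */
		idt_descr.address = fix_to_virt(FIX_RO_IDT);
	}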