author	Alexei Starovoitov <ast@fb.com>	2018-01-30 03:37:40 +0100
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2018-02-03 17:04:24 +0100
commit	361fb0481247bea4da3eb122e685c8b72ef7c8a9 (patch)
tree	93b9ebd0270569e1e27db65cb8b16b6240b4b601 /kernel
parent	5a802e670c46ee2027ae43ec03501ecdb85d080a (diff)
bpf: fix bpf_tail_call() x64 JIT
[ upstream commit 90caccdd8cc0215705f18b92771b449b01e2474a ]

- bpf prog_array, just like all other types of bpf array, accepts a 32-bit
  index. Clarify that in the comment.
- fix the x64 JIT of bpf_tail_call, which was incorrectly loading 8 instead
  of 4 bytes
- tighten the corresponding check in the interpreter to stay consistent

The JIT bug can be triggered after the introduction of the BPF_F_NUMA_NODE
flag in commit 96eabe7a40aa in 4.14. Before that, map_flags would stay zero,
and though the JIT code is wrong, it would check bounds correctly. Hence the
two Fixes tags. All other JITs don't have this problem.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Fixes: 96eabe7a40aa ("bpf: Allow selecting numa node during map creation")
Fixes: b52f00e6a715 ("x86: bpf_jit: implement bpf_tail_call() helper")
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
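To make the interpreter change below concrete, here is a minimal standalone
userspace sketch (not part of the patch; the register value 0x100000003 and
max_entries of 16 are made-up examples) showing how a 64-bit bounds check
diverges from the 32-bit prog_array ABI when a program leaves stray bits in
the upper half of R3:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Hypothetical values: a program left bits set in the upper half of
	 * R3; per the prog_array ABI only the low 32 bits (here 3) count.
	 */
	uint64_t r3 = 0x100000003ULL;
	uint32_t max_entries = 16;

	uint64_t idx64 = r3;            /* old interpreter: compares all 64 bits */
	uint32_t idx32 = (uint32_t)r3;  /* after the fix: compares low 32 bits  */

	printf("u64 check: %s\n", idx64 >= max_entries ? "out of range" : "in range");
	printf("u32 check: %s\n", idx32 >= max_entries ? "out of range" : "in range");
	return 0;
}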
Diffstat (limited to 'kernel')
-rw-r--r--	kernel/bpf/core.c	| 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 3fd76cf0c21e..e54ea31c5103 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -517,7 +517,7 @@ select_insn:
 	struct bpf_map *map = (struct bpf_map *) (unsigned long) BPF_R2;
 	struct bpf_array *array = container_of(map, struct bpf_array, map);
 	struct bpf_prog *prog;
-	u64 index = BPF_R3;
+	u32 index = BPF_R3;
 
 	if (unlikely(index >= array->map.max_entries))
 		goto out;
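For reference, a caller-side sketch (a hypothetical BPF C program written with
current libbpf conventions, which postdate this backport; the map name
jmp_table, its size, and the slot index are made up) showing that the value
arriving in BPF_R3 is a 32-bit prog_array index supplied via bpf_tail_call():

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
	__uint(max_entries, 4);           /* arbitrary example size */
	__uint(key_size, sizeof(__u32));  /* prog_array keys are 32-bit */
	__uint(value_size, sizeof(__u32));
} jmp_table SEC(".maps");

SEC("socket")
int dispatch(struct __sk_buff *skb)
{
	__u32 idx = 2;  /* hypothetical slot; only 32 bits reach the bounds check */

	bpf_tail_call(skb, &jmp_table, idx);

	/* Reached only if the tail call fails (empty slot or index out of range). */
	return 0;
}

char LICENSE[] SEC("license") = "GPL";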