author     Alexei Starovoitov <ast@fb.com>                     2018-01-29 02:48:55 +0100
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>     2018-01-31 12:55:56 +0100
commit     5226bb3b95515d7f6f2e1c11ac78b612e0056342 (patch)
tree       2aaf4203705feb57341839ce55b6c1d268607e37 /kernel
parent     c964ad34f6d9c5c907e5cc2f1a855d044346aa8a (diff)
bpf: fix bpf_tail_call() x64 JIT
[ upstream commit 90caccdd8cc0215705f18b92771b449b01e2474a ]

- bpf prog_array just like all other types of bpf array accepts 32-bit index.
  Clarify that in the comment.
- fix x64 JIT of bpf_tail_call which was incorrectly loading 8 instead of 4 bytes
- tighten corresponding check in the interpreter to stay consistent

The JIT bug can be triggered after introduction of BPF_F_NUMA_NODE flag
in commit 96eabe7a40aa in 4.14. Before that the map_flags would stay zero and
though JIT code is wrong it will check bounds correctly.
Hence two fixes tags.
All other JITs don't have this problem.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Fixes: 96eabe7a40aa ("bpf: Allow selecting numa node during map creation")
Fixes: b52f00e6a715 ("x86: bpf_jit: implement bpf_tail_call() helper")
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
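To make the failure mode concrete, here is a minimal userspace sketch. It is not taken from the kernel: struct fake_map and its fields are hypothetical stand-ins that only mimic the layout in which max_entries (u32) is immediately followed by map_flags (u32), and a little-endian host (as on x86-64) is assumed. It shows how an 8-byte load at the max_entries offset starts accepting an out-of-bounds index as soon as map_flags becomes non-zero, and how a 4-byte load compared against a u32 index restores the check.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical stand-in for the relevant part of struct bpf_map:
     * max_entries (u32) immediately followed by map_flags (u32).
     */
    struct fake_map {
    	uint32_t max_entries;
    	uint32_t map_flags;
    };

    int main(void)
    {
    	struct fake_map m = { .max_entries = 4, .map_flags = 0 };
    	uint64_t r3 = 5;	/* tail-call index in BPF_R3; out of bounds here */
    	uint64_t wide;

    	/* Pre-fix JIT behaviour: an 8-byte load at the max_entries offset
    	 * pulls map_flags into the upper 32 bits of the bound.
    	 */
    	memcpy(&wide, &m.max_entries, sizeof(wide));
    	printf("flags=0,  8-byte bound: %s\n",
    	       r3 >= wide ? "rejected (ok)" : "ACCEPTED (out of bounds!)");

    	/* Once a flag such as BPF_F_NUMA_NODE makes map_flags non-zero,
    	 * the loaded bound becomes huge and index 5 slips through.
    	 */
    	m.map_flags = 0x4;
    	memcpy(&wide, &m.max_entries, sizeof(wide));
    	printf("flags!=0, 8-byte bound: %s\n",
    	       r3 >= wide ? "rejected (ok)" : "ACCEPTED (out of bounds!)");

    	/* Fixed behaviour, matching the interpreter hunk below: a 4-byte
    	 * load of max_entries compared against the low 32 bits of R3.
    	 */
    	uint32_t index = (uint32_t)r3;
    	printf("4-byte bound, u32 index: %s\n",
    	       index >= m.max_entries ? "rejected (ok)" : "accepted");

    	return 0;
    }

Under these assumptions, the u32 truncation in the interpreter hunk below is the same semantics the corrected x64 JIT enforces: only the low 32 bits of BPF_R3 are treated as the prog_array index.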
Diffstat (limited to 'kernel')
-rw-r--r--   kernel/bpf/core.c   2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index aa6d98154106..ab9576b3bde5 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -715,7 +715,7 @@ select_insn:
 	struct bpf_map *map = (struct bpf_map *) (unsigned long) BPF_R2;
 	struct bpf_array *array = container_of(map, struct bpf_array, map);
 	struct bpf_prog *prog;
-	u64 index = BPF_R3;
+	u32 index = BPF_R3;
 
 	if (unlikely(index >= array->map.max_entries))
 		goto out;