Far easier to suppress kernel/driver logging of kernel addresses and deny access to /dev/kmem et al.
Leaving eBPF access open has demonstrably made way for fileless persistent malware to linger unwanted.
A real cybersecurity specialist would only allow eBPF access on the host OS if no network access can be made to the host OS (and it's OK for guest VMs to have eBPF).
An über cybersecurity goon, however, would compile the eBPF JIT out of the Linux kernel (or use a BSD variant instead).
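Short of recompiling, mainline kernels already expose knobs that get most of the way there. A sketch of the relevant sysctls (names as in mainline Linux; defaults and availability vary by distro and kernel version):

```ini
# e.g. a drop-in sysctl file (path illustrative)
# 1 = disable bpf() for unprivileged users, one-way until reboot
kernel.unprivileged_bpf_disabled = 1
# Disable the eBPF JIT, falling back to the in-kernel interpreter
# (has no effect if the kernel was built with CONFIG_BPF_JIT_ALWAYS_ON)
net.core.bpf_jit_enable = 0
# If the JIT must stay on, blind constants to hinder JIT spraying
net.core.bpf_jit_harden = 2
```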
nibbleshifter 592 days ago [-]
Hmmm, there are interesting possibilities here for building a kind of application IDS.
Execute and monitor a program/app while running its full test suite, to generate a model of all the stuff that program normally does.
Then monitor it in prod and if it starts behaving weirdly, kill it (and investigate).
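The learn-then-enforce loop above can be sketched at the level of syscall sets. This abstracts the tracer away (strace, eBPF, or auditd would feed the real data, and real models like pH use short syscall sequences rather than bare sets), so the traces below are hard-coded stand-ins:

```python
def learn_baseline(training_runs):
    """Union of every syscall observed while the test suite runs."""
    baseline = set()
    for trace in training_runs:
        baseline.update(trace)
    return baseline

def check(trace, baseline):
    """Return syscalls seen in prod that training never produced."""
    return set(trace) - baseline

# Learning phase: traces gathered while exercising the full test suite
training = [
    ["openat", "read", "write", "close"],
    ["openat", "mmap", "read", "close"],
]
baseline = learn_baseline(training)

# Production: a run that suddenly execs something
prod_trace = ["openat", "read", "execve", "write"]
novel = check(prod_trace, baseline)
if novel:
    print(f"anomalous syscalls: {sorted(novel)}")  # candidate for kill + investigate
```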
I wonder how well the models will hold up against attacks that merely exercise normal application functions in unusual ways?
Also worth looking into are seccomp profiles. That's a bit different, but useful for containers, and for securing your own code where the attack surface might be massive or you may be running untrusted code. Think trying to secure things like online language "playgrounds" against server-side exploitation.
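For the container case, a seccomp profile is just a kernel-enforced allowlist. A minimal Docker-style profile might look like this (illustrative only, nowhere near complete enough for a real workload):

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "syscalls": [
    {
      "names": ["read", "write", "openat", "close", "mmap", "brk",
                "exit", "exit_group", "rt_sigreturn"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

Loaded with `docker run --security-opt seccomp=profile.json ...`; anything outside the allowlist fails with an errno rather than killing the process (use SCMP_ACT_KILL for that).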
jeffbee 591 days ago [-]
It might be worth it in certain cases of extreme security requirements, but the implications of what you suggest are severe. For example, you've ruled out the convenience of many operator actions. Instead of being able to change your resolver configs, first you'd have to change the resolver configs in the training environment, deploy a model that permits the old and new behavior into prod, then finally deploy your new configs. The same would be true for other things like timezone database updates. Any kind of external stimulus that changes your application's syscall pattern would require such forethought, and it could be a DoS vector.
Also, I think people underestimate the runtime cost of Linux syscall tracing. It's pretty high.
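A rough way to see that cost for yourself: time a syscall-heavy loop bare vs. under strace. ptrace-based tracing is the worst case; in-kernel tracers like eBPF are far cheaper, but not free:

```python
import shutil
import subprocess
import sys
import time

# A tight loop of cheap syscalls, run in a child interpreter
SNIPPET = "import os\nfor _ in range(20000): os.getpid()"

def timed(cmd):
    """Wall-clock time to run a command to completion."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.perf_counter() - start

plain = timed([sys.executable, "-c", SNIPPET])
print(f"untraced: {plain:.3f}s")

if shutil.which("strace"):  # strace may not be installed
    traced = timed(["strace", "-f", "-o", "/dev/null",
                    sys.executable, "-c", SNIPPET])
    print(f"straced:  {traced:.3f}s ({traced / plain:.1f}x slower)")
```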
tomrod 591 days ago [-]
When people describe process monitoring, this is what I imagine. [0]
https://debian-handbook.info/browse/stable/sect.apparmor.htm...
0: https://people.scs.carleton.ca/~mvvelzen/pH/pH.html