feat(ksymbols): reimplement ksymbols #4464
base: main
Conversation
Force-pushed from 165e3d5 to 8eabeb9
Nice work overall, though I do have some comments.
Force-pushed from a37c7cd to f3e5990
Force-pushed from e5c5324 to eaa12ab
I added an additional memory optimization - kernel symbols now only store the lower 48 bits of the address, with the assumption that all addresses begin with …
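The comment above is truncated, but the packing idea itself is simple. Below is a minimal sketch of it, assuming (my assumption, not confirmed by the thread) that the fixed prefix is the all-ones upper 16 bits of canonical x86-64 kernel addresses:

```go
package main

import "fmt"

// Assumed common prefix: on x86-64, kernel-space addresses are canonical
// "high" addresses whose upper 16 bits are all ones. This prefix is an
// assumption for illustration; the original comment is truncated here.
const addrPrefix uint64 = 0xffff << 48

// pack keeps only the lower 48 bits, e.g. for storage in a 6-byte field.
func pack(addr uint64) uint64 { return addr &^ addrPrefix }

// unpack restores the full address by reapplying the assumed prefix.
func unpack(low48 uint64) uint64 { return low48 | addrPrefix }

func main() {
	addr := uint64(0xffffffff81000000) // a typical kernel text address
	packed := pack(addr)
	fmt.Printf("%#x -> %#x -> %#x\n", addr, packed, unpack(packed))
}
```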
I have one critical request to make: please avoid the mixed mutex transactions in the kernel symbols table. From experience this tends to cause transaction mixes where, for example, one write operation happens in the middle of a mixed operation. Imagine two methods m1 and m2, where m1 is W and m2 is RW. The following could occur (sketched in code below):
- m2 - acquires R
- m1 - waits for W
- m2 - releases R
- m1 - acquires W
- m1 - releases W
- m2 - proceeds to W, with its R assumptions changed due to m1.

Please pick for each method whether it is R or W and make the lock last for the whole operation. You could even opt for a regular mutex instead of an RWMutex; I don't think we have very frequent reads or writes to this struct anyway.
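A minimal, self-contained sketch of that interleaving (illustrative names, not Tracee code): the gap between releasing the read lock and acquiring the write lock is exactly where another writer can invalidate what m2 just read.

```go
package main

import (
	"fmt"
	"sync"
)

type table struct {
	mu   sync.RWMutex
	data map[string]int
}

// m1 is a pure writer (W).
func (t *table) m1(key string) {
	t.mu.Lock()
	t.data[key] = 42
	t.mu.Unlock()
}

// m2 is a mixed read-then-write operation (RW) with a lock "upgrade" gap.
func (t *table) m2(key string) {
	t.mu.RLock()
	_, found := t.data[key] // read under the R lock
	t.mu.RUnlock()
	// m1 may run right here, making `found` stale.
	t.mu.Lock()
	if !found {
		t.data[key] = 1 // may redo or clobber m1's work
	}
	t.mu.Unlock()
}

func main() {
	t := &table{data: map[string]int{}}
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); t.m2("k") }()
	go func() { defer wg.Done(); t.m1("k") }()
	wg.Wait()
	fmt.Println(t.data["k"]) // prints 1 or 42 depending on the interleaving
}
```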
Symbol lookups could be very frequent in the future (stack trace processing). Using a write lock for the entire duration of … In both functions, the write operations only add new data; they don't change or remove existing data. In the case of … I could solve the issue with …
I could also change the API of the kernel symbol table so that reading … It also solves a race condition where a lookup happens between …
Do you mean with the current implementation? If so then yes, I only noticed it now.
Which solution do you prefer? IMO the second one (RO symbol table, create a new one instead of updating) is favorable. Additionally, I could change …
That is just equivalent to giving it a separate mutex, and I don't see why we would want to limit the owner number. It might be slightly more readable. BTW, you may want to use the …
I mean that if you make it RO and require full regeneration when a new symbol appears, I think you may easily encounter multiple regenerations per stack trace extraction.
Which is why the RO symbol table seems like a bad choice for your future needs, unless I am missing something in what you've suggested. Therefore I suggest you use a set, an LRU, and a mutex on the owners, such that the overall concurrency mutex is limited to its relevant data.
There is no regeneration on a failed request, only manually (used in …).
First pass so far. I've only reviewed kernelSymbolInternal, with some suggestions that could make the code easier to read and change - I'll get to the rest later.
I did one more pass.
Besides the thoughts I put down, I would also recommend bringing in a test file for concurrency, like this one: https://github.com/aquasecurity/tracee/blob/main/pkg/capabilities/capabilities_test.go
I haven't reviewed KernelSymbolTable yet. It will wait for the next pass. 👍🏼
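For reference, a hedged sketch of the kind of concurrency test being suggested, meant to be run with `go test -race` (the table type here is a stand-in, not the PR's actual API):

```go
package ksymbols

import (
	"sync"
	"testing"
)

// testTable is a minimal stand-in for the symbol table under test.
type testTable struct {
	mu sync.RWMutex
	m  map[string]uint64
}

func (t *testTable) lookup(name string) (uint64, bool) {
	t.mu.RLock()
	defer t.mu.RUnlock()
	addr, ok := t.m[name]
	return addr, ok
}

func (t *testTable) add(name string, addr uint64) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.m[name] = addr
}

// TestConcurrentAccess hammers the table with parallel reads and writes;
// `go test -race` will flag any unsynchronized access.
func TestConcurrentAccess(t *testing.T) {
	tbl := &testTable{m: map[string]uint64{}}
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(2)
		go func() {
			defer wg.Done()
			tbl.add("do_sys_open", 0xffffffff81000000)
		}()
		go func() {
			defer wg.Done()
			_, _ = tbl.lookup("do_sys_open")
		}()
	}
	wg.Wait()
}
```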
// Sort the symbols slice by address in descending order
Perhaps sorting it in ascending order would bring a small performance gain and extra API features (such as a found boolean, or the ordered index where a value should be inserted when not found) by using slices.BinarySearch() or slices.BinarySearchFunc(). WDYT?
I'm not sure I understand what the benefit of sorting in ascending order would be. I already perform binary searches (using sort.Search).
slices.BinarySearchFunc() returns the index and a found boolean, so you'd be able to shortcut the later checking in LookupByAddressExact(), for instance.
I'm not sure how this function behaves when there are multiple symbols with the same requested address; I need to find all of them. sort.Search guarantees to find the first match, and then I iterate forward to find the rest. I could also iterate backwards, but I don't think that would end up simpler than the current method.
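To make the trade-off concrete, here is a minimal sketch (illustrative types, not the PR's actual code) of the pattern described: sort.Search over a descending slice locates the first symbol whose address is <= the target, and a forward scan then collects every symbol at exactly that address.

```go
package main

import (
	"fmt"
	"sort"
)

type symbol struct {
	Name    string
	Address uint64
}

// lookupExact assumes syms is sorted by address in descending order.
func lookupExact(syms []symbol, address uint64) []symbol {
	// First index whose address is <= the target (the predicate is
	// monotonic over a descending slice, as sort.Search requires).
	idx := sort.Search(len(syms), func(i int) bool {
		return syms[i].Address <= address
	})
	var found []symbol
	// Collect all consecutive symbols sharing exactly this address;
	// a non-exact match makes the loop body never execute.
	for ; idx < len(syms) && syms[idx].Address == address; idx++ {
		found = append(found, syms[idx])
	}
	return found
}

func main() {
	syms := []symbol{
		{"c", 0x3000}, {"b1", 0x2000}, {"b2", 0x2000}, {"a", 0x1000},
	}
	fmt.Println(lookupExact(syms, 0x2000)) // both symbols at 0x2000
}
```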
st.mu.RLock()
// We call RUnlock manually and not using defer because we may need to upgrade to a write lock later

// Lookup the name in the name to symbol mapping
if symbols, found := st.symbolsByName[name]; found {
	st.mu.RUnlock()
	return symbols, nil
}

// Lazy name lookup is disabled, the lookup failed
if !st.lazyNameLookup {
	st.mu.RUnlock()
	return nil, ErrSymbolNotFound
}

// Lazy name lookup is enabled, perform a linear search to find the requested name
symbols := []*T{}
for _, symbol := range st.sortedSymbols {
	if (*symbol).Name() == name {
		symbols = append(symbols, symbol)
	}
}
WDYT about moving this reading logic into a private method, so that all read mutex control is managed by it, i.e. symbols := st.lookupByName()?
Then we would write lock only if the symbols' length is > 0.
If you agree, add comments about the locking in the inner call to warn about unexpected deadlocks if it's used out of context.
This has the opposite effect of what @NDStrahilevitz commented regarding race conditions with mixed mutex transactions (the assumption from the read lock is outdated by the time we acquire the write lock), because it places the read unlock and the write lock further apart.
As I commented above, the worst-case scenario for such a race condition is duplicate work with no negative effect on the data (the same name is looked up linearly more than once). WDYT? Is such a race condition even worth considering?
Consider that you're only checking for the existence of symbols with the given name. I believe the worst scenario is holding the RLock and thereby blocking the write that would update symbolsByName with the finding, so there would be a mismatch while the writer is locked out, waiting for the read to be released. What are the odds?
If that is likely to happen very often, I would use a write lock for the entire scope.
I'm not sure I understood what you mean. The odds of 2 lookups for the same name at very similar times resulting in a duplicate linear search are very low (there aren't many name lookups). IMO it's not worth accounting for this.
Anyway, regarding your proposal, we would need to return from the internal st.lookupByName an indication of whether st.symbolsByName needs to be updated; I'm not sure there is much point in separating the two.
// Not found or not exact match
if idx == len(st.sortedSymbols) || (*st.sortedSymbols[idx]).Address() != address {
	return nil, ErrSymbolNotFound
}
As commented above, I suppose that by using BinarySearch this check might not be necessary.
As I said above, I do use binary search. A failed search will return idx == len(st.sortedSymbols). The search condition is that the address of the current symbol is smaller than or equal to the address being looked up, so we have to verify that it's actually equal.
Final pass, I believe. If anything comes up, just ping me.
return &KernelSymbol{
	Name:    symbol.Name(),
	Address: symbol.Address(),
	Owner:   kst.idxToSymbolOwner[symbol.owner()],
If this is the only access, how about moving it above, reducing the RLock time even more? I mean:
kst.ownersMutex.RLock()
owner := symbol.owner()
kst.ownersMutex.RUnlock()
func (k *KernelSymbolTable) GetSymbolByAddr(addr uint64) ([]KernelSymbol, error) {
	k.updateLock.Lock()
	defer k.updateLock.Unlock()

func (kst *KernelSymbolTable) symbolFromInternal(symbol *kernelSymbolInternal) *KernelSymbol {
Well, this is called from loops, so it will lock/unlock many times when the iteration count is greater than one. Is that behaviour expected? If it isn't, consider moving the locking calls up a level.
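A quick illustration of the suggestion (illustrative types, not the PR's code): locking once per batch instead of once per element.

```go
package main

import (
	"fmt"
	"sync"
)

// registry is a stand-in for the struct guarded by the mutex.
type registry struct {
	mu    sync.RWMutex
	names []string
}

// perElement locks on every iteration - the behavior questioned above.
func (r *registry) perElement(idxs []int) []string {
	out := make([]string, 0, len(idxs))
	for _, i := range idxs {
		r.mu.RLock()
		out = append(out, r.names[i])
		r.mu.RUnlock()
	}
	return out
}

// batched acquires the lock once around the whole loop instead.
func (r *registry) batched(idxs []int) []string {
	r.mu.RLock()
	defer r.mu.RUnlock()
	out := make([]string, 0, len(idxs))
	for _, i := range idxs {
		out = append(out, r.names[i])
	}
	return out
}

func main() {
	r := &registry{names: []string{"system", "my_module"}}
	fmt.Println(r.perElement([]int{0, 1}), r.batched([]int{1}))
}
```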
Force-pushed from a346008 to 31b26bc
@geyslan some of your remaining comments are regarding lock behavior in … WDYT? @NDStrahilevitz @yanivagman it would be great if you could weigh in as well.
@oshaked1 I worry that it would expose an easy attack against tracee: simply load and unload a module many times. Of course this is already suspicious behavior, and not the only "smoke screen" tactic possible, but adding another one isn't great. That said, considering it is not the only such tactic, maybe it shouldn't be a blocker for the option. A way to circumvent this altogether would be to cache the delta per module load: if we know for sure that the module in a load loop is the same one, there's no need to calculate the differences each time, although this might be too memory heavy. Overall, no strong opinion on going that way; going RO has many obvious benefits, but the exploit surface slightly worries me.
This attack is also a problem with the current implementation, because we clear the underlying symbol table and add the symbols from …
True, you're right. So, on my end, I see no issue with moving to an RO implementation.
Me neither. 👍🏼
Force-pushed from ce508b3 to 5008a8b
@NDStrahilevitz @geyslan I changed …
Force-pushed from cd5b576 to 47d48e1
pkg/ebpf/tracee.go
@@ -123,6 +122,9 @@ type Tracee struct {
	policyManager *policy.Manager
	// The dependencies of events used by Tracee
	eventsDependencies *dependencies.Manager
	// A reference to an environment.KernelSymbolTable that might change at runtime.
	// This should only be accessed using t.getKernelSymbols() and t.setKernelSymbols()
	kernelSymbols unsafe.Pointer
Why use unsafe.Pointer and not *environment.KernelSymbolTable?
It's just to ensure that it's only accessed using the safe getter and setter.
pkg/ebpf/tracee.go
func (t *Tracee) getKernelSymbols() *environment.KernelSymbolTable {
	return (*environment.KernelSymbolTable)(atomic.LoadPointer(&t.kernelSymbols))
}

func (t *Tracee) setKernelSymbols(kernelSymbols *environment.KernelSymbolTable) {
	atomic.StorePointer(&t.kernelSymbols, unsafe.Pointer(kernelSymbols))
}
I don't understand why we need those... What does the atomic protect from?
@yanivagman The atomic operations protect from simultaneous reads of t.kernelSymbols and an update that replaces it. While there is probably no actual risk, Go defines such simultaneous access as a data race, so it's just to be safe.
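As a side note, a sketch of the same getter/setter pattern using the generic atomic.Pointer added in Go 1.19, which keeps loads and stores atomic without exposing unsafe.Pointer (KernelSymbolTable here is a stand-in for environment.KernelSymbolTable):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// KernelSymbolTable stands in for environment.KernelSymbolTable.
type KernelSymbolTable struct{ generation int }

type Tracee struct {
	// atomic.Pointer enforces atomic access by construction, so no
	// unsafe.Pointer or manual getter/setter discipline is needed.
	kernelSymbols atomic.Pointer[KernelSymbolTable]
}

func (t *Tracee) getKernelSymbols() *KernelSymbolTable {
	return t.kernelSymbols.Load()
}

func (t *Tracee) setKernelSymbols(kst *KernelSymbolTable) {
	t.kernelSymbols.Store(kst)
}

func main() {
	t := &Tracee{}
	t.setKernelSymbols(&KernelSymbolTable{generation: 1})
	fmt.Println(t.getKernelSymbols().generation) // 1
}
```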
Shouldn't we place this file under environment? I don't think this logic will have any use outside the ksymbols context, isn't that so?
I plan to use it in the future for ELF symbols.
This implementation stores all symbols, or if a `requiredDataSymbolsOnly` flag is used when creating the symbol table, only non-data symbols are saved (and required data symbols must be registered before updating). This new implementation uses a generic symbol table implementation that is responsible for managing symbol lookups, and can be used by future code for managing executable file symbols.
After running the init function of a kernel module, the kernel frees the memory that was allocated for it but doesn't remove its symbol from kallsyms. This results in a scenario where a subsequently loaded module can be allocated to the same area as the freed init function of the previous module. This could result in 2 symbols at the same address, one being the freed init function and another from the newly loaded module. This caused nondeterminism in which symbol was used by the hooked_syscall event, which only used the first symbol that was found, resulting in random test failures. This commit changes the hooked_syscall event to emit one event for each found symbol.
LGTM after all the changes, nice work.
The thread stack area was extracted by finding the VMA containing the SP of the new thread, but because the SP might be just past the end of its allocated VMA (top of the stack), sometimes the correct VMA was not found.
1. Explain what the PR does
The previous ksymbols implementation used a lazy lookup method, where only symbols marked as required ahead of time were stored. Trying to look up a symbol that was not stored resulted in /proc/kallsyms being read and parsed in its entirety. While most symbols being looked up were registered as required ahead of time, some weren't (in particular, symbols needed for kprobe attachment), which incurred significant overhead when tracee is being initialized.
This new implementation stores all symbols, or, if a requiredDataSymbolsOnly flag is used when creating the symbol table (used by default), only non-data symbols are stored (and required data symbols must be registered before updating). Some additional memory usage optimizations are included, for example encoding symbol owners as an index into a list of owner names (see the sketch below), and lazy symbol name lookups where the map of symbol name to symbol is populated only for symbols that have been looked up at least once. From measurements I performed, the extra memory consumption is around 21MB (from ~159MB to ~180MB when running tracee with no arguments on my machine).
Under the hood, this ksymbols implementation uses a generic symbol table implementation that can be used by future code for managing executable file symbols.
A significant advantage gained by storing all non-data symbols is the ability to lookup a function symbol that contains a given code address, a feature that I plan to use in the future.
This PR closes #4463 and renders #4325 irrelevant (because /proc/kallsyms reads no longer happen "spontaneously").