
Replace func trie with hashmap #179

Open
wants to merge 1 commit into master
Conversation

ChAoSUnItY (Collaborator)

Previously, the trie implementation was inconsistent, mainly because it used indices to point from the referenced func_t entries into FUNCS. It also lacked dynamic allocation, which could cause segmentation faults and added technical debt when debugging either FUNCS or FUNCS_TRIE. This PR resolves the issue by introducing a dynamically allocated hashmap.

The current implementation uses the FNV-1a hashing algorithm (the 32-bit variant, to be precise). Due to the lack of an unsigned integer implementation, the hash result ranges from 0 to 2,147,483,647.
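A minimal sketch of that scheme (the helper name is hypothetical, not shecc's actual code): the arithmetic is done in unsigned here so overflow is well-defined in host C, and the sign bit is then masked off to mimic shecc's signed-only result range of 0 to 2,147,483,647.

```c
#include <assert.h>

/* Hypothetical sketch of the FNV-1a 32-bit hash described above. */
static int fnv1a_hash(const char *key)
{
    unsigned int hash = 0x811c9dc5u;   /* FNV offset basis */
    for (; *key; key++) {
        hash ^= (unsigned char) *key;
        hash *= 0x01000193u;           /* FNV prime */
    }
    return (int) (hash & 0x7fffffffu); /* keep the result non-negative */
}
```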

Note that the current implementation may suffer from slow lookups as the number of functions keeps increasing, since the hashmap does not rehash based on a load factor (ideally 0.75, but shecc currently does not support floating-point numbers).
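For what it's worth, a 0.75 load-factor check does not actually require floating point; a hypothetical rehash trigger could be written with integer arithmetic only:

```c
#include <assert.h>

/* Hypothetical predicate: grow the table once size/capacity >= 0.75.
 * size / capacity >= 3/4  <=>  size * 4 >= capacity * 3, so the check
 * needs only integer arithmetic, which shecc already supports. */
static int should_rehash(int size, int capacity)
{
    return size * 4 >= capacity * 3;
}
```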

This also paves the way for refactoring more structures in shecc with the hashmap implementation later.

Benchmark for ./tests/hello.c compilation

Before

Command being timed: "./out/shecc tests/hello.c"
        User time (seconds): 0.00
        System time (seconds): 0.02
        Percent of CPU this job got: 76%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.03
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 52112
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 0
        Minor (reclaiming a frame) page faults: 12220
        Voluntary context switches: 0
        Involuntary context switches: 0
        Swaps: 0
        File system inputs: 0
        File system outputs: 32
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0

After

Command being timed: "./out/shecc tests/hello.c"
        User time (seconds): 0.00
        System time (seconds): 0.02
        Percent of CPU this job got: 71%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.03
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 49916
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 0
        Minor (reclaiming a frame) page faults: 12224
        Voluntary context switches: 1
        Involuntary context switches: 0
        Swaps: 0
        File system inputs: 8
        File system outputs: 32
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0

jserv (Collaborator) commented Jan 19, 2025

@visitorckw, can you comment on this?

visitorckw (Contributor)

Looks good as is.

However, as mentioned, a large number of functions may cause excessive collisions and degrade performance, while for smaller function counts the default 512 buckets might be overkill. A radix tree with dynamic memory allocation could therefore still be worth exploring in the future.

ChAoSUnItY (Collaborator, Author)

I'm concerned that dynamic memory allocation is not reliable at this point and is potentially flawed. I attempted to implement a rehashing algorithm before, but the stage 2 compilation failed, while the GCC and stage 1 builds were fine.

@ChAoSUnItY force-pushed the refactor/hashmap branch 4 times, most recently from 1bacbd7 to cb82f7a on January 19, 2025 10:27

for (; *key; key++) {
hash ^= *key;
hash *= 0x01000193;
Contributor

The multiplication here may cause overflow, leading to undefined behavior. Signed integer overflow is undefined, while unsigned integer overflow is not. Since shecc currently lacks support for unsigned integers, we might consider adding it to address this issue.
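To illustrate the distinction, one step of the loop above could be sketched so that the multiply happens in unsigned arithmetic (host C, not shecc code; the helper name is hypothetical): unsigned overflow is reduction modulo 2^32 by definition, so it is never undefined behavior, while converting the result back to a signed type is merely implementation-defined and wraps on common two's-complement targets.

```c
#include <stdint.h>

/* One FNV-1a step with a well-defined multiply (hypothetical helper). */
static int32_t fnv1a_step(int32_t hash, char c)
{
    uint32_t h = (uint32_t) hash;
    h ^= (unsigned char) c;
    h *= 0x01000193u;   /* FNV prime; wraps mod 2^32, defined behavior */
    return (int32_t) h; /* implementation-defined conversion, not UB */
}
```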

ChAoSUnItY (Collaborator, Author) commented Jan 19, 2025

I think we could simply add an unsigned type at this point to minimize the effort of introducing a new type. This way, unsigned can reuse the current signed arithmetic (since both are based on 2's complement): in ARMv7 and 32-bit RISC-V assembly, signed overflow wraps in a well-defined way, and the only difference is the interpretation of the most significant bit.

Edit: I just realized we would still need to handle comparison differently. However, I'd prefer to defer the unsigned integer feature, since we have an ongoing project that requires fully settling shecc's specification, which does not include unsigned types at the moment, and I think this addition would compromise the simplicity of the project. @jserv, should we postpone this hashmap implementation?

Collaborator

should we postpone this hashmap implementation?

You can simply convert this pull request to draft state.

ChAoSUnItY (Collaborator, Author) commented Jan 23, 2025

I think this pull request should be landed as soon as possible. I'm currently working on a type_t refactor, and I encountered a weird free(): invalid pointer issue when adding functions in globals.c. I assume the reason is that the function trie cannot hold more than a certain number of functions, and the length of function names probably also contributes. These two factors and the flaw are already described here:

shecc/src/globals.c

Lines 103 to 109 in 09bb918

if (!trie->next[fc]) {
/* FIXME: The func_tries_idx variable may exceed the maximum number,
* which can lead to a segmentation fault. This issue is affected by
* the number of functions and the length of their names. The proper
* way to handle this is to dynamically allocate a new element.
*/
trie->next[fc] = func_tries_idx++;

But after cherry-picking this branch without any changes to the function structures, the issue immediately went away.

One possible solution is to add the -fwrapv compilation flag to instruct GCC to wrap signed integer overflow according to the 2's complement representation, which guarantees defined behavior at least when compiling with GCC. Meanwhile, inside shecc this is fine at the moment, since both 32-bit ARM and 32-bit RISC-V assembly also wrap overflowed values according to the 2's complement representation.
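As a sketch of that workaround (the variable name and build setup here are assumptions, not shecc's actual Makefile):

```make
# Hypothetical Makefile fragment: -fwrapv makes GCC treat signed
# integer overflow as wrapping in two's complement, so the FNV-1a
# multiply has defined behavior when bootstrapping with GCC.
CFLAGS += -fwrapv
```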

vacantron (Collaborator) commented Jan 24, 2025

Could enlarging the limit in refs.h temporarily fix this problem? I remember the trie count almost reached the limit in the last change.

This workaround could be removed after applying this patch.

ChAoSUnItY (Collaborator, Author)

Changing the macro MAX_FUNC_TRIES in defs.h to 3000 does the trick.

@ChAoSUnItY ChAoSUnItY marked this pull request as draft January 19, 2025 19:55
Previously, the trie implementation was inconsistent, mainly because
it used indices to point from the referenced func_t entries into
FUNCS. Additionally, a trie's advantage is prefix lookup, which shecc
never uses. Furthermore, a trie node takes 512 bytes, while in this
implementation a hashmap bucket node takes 24 + W bytes (where W is
the key length including the NUL character), which significantly
reduces memory usage.

This also allows for future refactoring of additional structures using
a hashmap implementation.

Note that the FNV-1a hashing function currently uses a signed integer
to hash keys, which would lead to undefined behavior. Instead of
adding unsigned integers to resolve this, we add the "-fwrapv"
compiler flag to instruct GCC to wrap the overflowed result according
to the 2's complement representation. Meanwhile, in shecc, wraparound
according to the 2's complement representation is always guaranteed.
@ChAoSUnItY ChAoSUnItY marked this pull request as ready for review January 24, 2025 06:22
@jserv jserv requested a review from visitorckw January 24, 2025 06:42