Hello! Now that my project is getting bigger, I'm finding that some files cross a threshold where compilation changes from a couple of seconds to minutes, even with quite low --max-allocs-per-node settings. I have no idea why it happens. Is there any way to profile the compilation process to at least pinpoint where it's spending its time? If it's just a trade-off in how the compilation/optimization works, I have no problem splitting functions or making changes to the code to make it easier for the compiler, but I'd like to know what is tripping it up.
Since SDCC effectively compiles on a per-function basis, keeping functions short is usually a good strategy. I am not aware of any integrated profiling capabilities beyond --cyclomatic.
There is [bugs:#3884].
In my experience, global common subexpression elimination (GCSE) can get very slow when functions become big (big after macro expansion and inlining). And unlike register allocation and lospre, where we have --max-allocs-per-node for the compilation time / code quality trade-off, or generalized constant propagation, where there is a similar, but not user-configurable bound, GCSE will just keep going until it is done.
This is becoming an actual bottleneck in my project. Having to wait minutes after a change in one file is killing productivity, and tracking down which functions might be responsible is difficult and time-consuming.
In cseAllBlocks, a loop calls cseBBlock for every block. Do you think these calls could be made from different threads at the same time? (I'm willing to try, at least as a local patch to speed things up.)
Related
Bugs: #3884
Is there a description of the numbers reported by --cyclomatic? I got these numbers, for example: