My application needs to run many separate contexts in the same single-threaded process. Each context runs a function in a continuation object based on boost::context (still on the vault, but a pre-approved library), so each context can yield, but they all run within one single-threaded process. Each context should run essentially independently of the others, and, more importantly, a compilation error in one context should not affect the execution of the others.
Each of these contexts will dynamically invoke code that spans multiple translation units (TUs). Some translation units may be shared across many of these contexts, but a compilation error in a new or modified translation unit should not affect other contexts.
For example, TU A might be shared by two contexts, X and Y. To complete the picture, say that X also runs code from other translation units, e.g. B and D, while Y also uses C. At some point, X decides to modify A, so it creates a new TU A.1, a copy of A, and applies the modification there, so that the change does not affect context Y. I hope this example makes the requirement clear.
My initial impulse was to associate one llvm::Module with each context, but since LLVM leaves undefined what happens to a module in an intermediate state of compilation, I decided instead to use one llvm::Module per translation unit (see this question for the reason), plus the copy-on-write policy explained above for when a context modifies a translation unit locally, in order to keep a modification from affecting other contexts.
The main, two-fold question I have is:
How do I link the different modules of a context together so I can invoke them as a unified library? I'm using the C++ API. I'm particularly wary of this nasty, old bug affecting this functionality. Would this bug still affect me if I transferred ownership of all the modules to the JIT?
What are the required steps once a modification of a translation unit forces an update of one of the modules? Do I need to drop/delete the old module object and create a new one? Is there a recycling policy that I haven't read about?
A secondary question I have about this is:
- How many ExecutionEngine instances do I need? One for the whole application? One per context? One per module?
I hope the scope of the question is not too overwhelming.
I think you need a conceptual framework to "hang" your ideas on. Thinking of the various executing bits as commands (perhaps even implemented using the command pattern) will give you a more obvious set of interaction points. That said, you will need a context for each discrete execution you wish to return to. More than two will require that you create appropriate bookkeeping; I believe boost handles two essentially for free.
Communication between executing bits is similarly up to you. Creating a state (memento) that is shared across execution contexts is one solution that comes to mind. You may also already have suitable state built into your runtime, in which case no extra layer is required. As you pointed out, globals are not your friend in these interactions.
Versioning and name resolution are also an issue. Keeping the executing bits separate goes a long way toward solving this problem. Once you resolve the coordination issue, it is more a matter of tracking which bits you have already created. This also means there is no need for recycling: just create a new bit each time, and there is no reload. You will also have to manage the end of life of these bits once they have finished executing.
I am proposing one ExecutionEngine per executing bit. Not doing this means a great deal more work attempting to "protect" working code from the effects of code that is wrong. I believe it is possible to do this with a single engine, but it would be significantly riskier.