By: Jukka Larja (roskakori2006.delete@this.gmail.com), March 22, 2021 7:11 am
Room: Moderated Discussions
Moritz (better.delete@this.not.tell) on March 21, 2021 6:00 am wrote:
> Jukka Larja (roskakori2006.delete@this.gmail.com) on March 21, 2021 12:26 am wrote:
>
> > We have a system in place to write:
> > for (...) { doSomethingInBackgroundtask(...); } waitforBackgroundTasks();
>
> The for-loop is an example of code that you write without wanting it.
> You do not want a counter, you do not want to issue operations on it, you do not want to jump,
> you do not want to compare, you likely do not care if it gets done in random order, descending
> order or all at once. All you wanted to say is: "Do this parameterized job N times."
That for loop has precisely one extra line (waitforBackgroundTasks();), because one can't just presume that something running independently in parallel with the main code sequence will otherwise be done when needed. I don't know how it could be written in a much simpler way.
> > A coder suggested we should have this. It was easy enough to add for what we had before, so I added it.
> > So yeah, I think the problem is it's not actually a very common pattern. It tends to require lot of
> > work to make sure things really can run in parallel.
>
> "People are neither used to nor trained to work that way." Is not an argument against new technology/ways.
> If that were the case we would not use forks and would not have this discussion.
The problem is not that it's too hard to call a function with slightly different syntax, or to call waitForBackgroundTasks() once all the work has been submitted. The problem is that except for trivial cases, loop iterations are hardly ever completely independent. They are often trivial to run in parallel if just some tiny part (like writing a result) is mutexed, but that becomes a huge problem if the amount of work per iteration is small.
> > However, spawning tasks has overhead, which I don't see going away
>
> Then do not spawn an actual thread but only let the compiler add/sprinkle the instructions into
> the normal flow of the main task at times it assumes there is a stall or free capacity.
Consider the usual flow of code: 1) do something, 2) use the results of the work done in 1), 3) use the results of the work done in 1) and 2), and so on.
At which point does the "sprinkling of instructions into the normal flow" happen? Unless the compiler is magical, the simple-to-write pattern is to annotate some work as parallel, wait for it to complete, and continue to the next part.
> > I can't imagine an architecture that could run the normal code of "single-threaded
> > spaghetti sprinkled with tiny sections of potentially parallel computing" any better than current ones
> > do
>
> Then something about the code has to change to allow the CPU to fetch more executable/ready instructions.
> Maybe some very convenient concept like the stack must go.
Well yes, sure. If things are totally changed, then things can be totally different. I'm just a simple programmer. I can't imagine a language resting on totally different concepts (not that stacklessness alone is really such a thing).
-JLarja