By: Foo_ (foo.delete@this.nomail.com), October 29, 2006 3:51 am
Room: Moderated Discussions
Tzvetan Mikov (tzvetanmi@yahoo.com) on 10/28/06 wrote:
---------------------------
>Actually it enforces a much stricter memory
>model: all writes will always be seen by all threads in the order they occurred.
>Additionally it can guarantee atomicity for certain operations like increments, etc.
Hmm, perhaps it does indeed. I don't know how it's implemented, probably a mutex or something.
>So, the memory model of the classic Python is just an accident of the implementation,
Indeed.
>I wonder what percentage of the existing Python code
>would break because of this if executed under Jython or IronPython.
Probably a very small part. In the Twisted framework, there is a place where a list can be accessed by several threads: the list that holds callbacks registered from other threads via the callFromThread() method.
This list isn't protected; the code assumes the "append()" method is atomic, which it is not under Jython.
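Under CPython the GIL happens to make list.append() atomic, but portable code would need an explicit lock. A minimal sketch of what such a callback list might look like (the class and method names here are mine, not Twisted's actual API):

```python
import threading

class CallQueue:
    """Holds callbacks registered from other threads.

    CPython's GIL makes list.append() effectively atomic, but that is
    an accident of the implementation; Jython and IronPython give no
    such guarantee, so the list is guarded with an explicit lock.
    """

    def __init__(self):
        self._lock = threading.Lock()
        self._callbacks = []

    def call_from_thread(self, func):
        # Safe on any Python implementation, not just CPython.
        with self._lock:
            self._callbacks.append(func)

    def drain(self):
        # Atomically take all pending callbacks, then run them
        # outside the lock.
        with self._lock:
            pending, self._callbacks = self._callbacks, []
        for func in pending:
            func()
```

The point is not that the lock is expensive (it usually isn't, compared to interpreter overhead), but that code relying on CPython's accidental atomicity breaks silently when moved to another implementation.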
However, most applications are coded against CPython (the official interpreter) rather than Jython or IronPython. The latter two are only used in very specific setups IMO.
They often lag feature-wise behind CPython (they probably don't have decorators, for example). I'm not sure they support the package distribution infrastructure that comes with CPython (distutils, setuptools, the cheeseshop). And they probably can't import third-party packages that include native code.
>Makes sense. With a global lock a multithreaded application will always be slower
>than a single threaded one, no matter how many CPUs you have.
Not necessarily, because as I've mentioned, extensions written in C can explicitly release the GIL. If you have a native extension that does some heavy calculation (or waits for IO), it's a good idea to release the GIL so that another Python thread can run in parallel.
But in the case where most of the CPU time is spent executing Python bytecode, yes, multithreading is not very useful.
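To illustrate the I/O case: C-level operations that release the GIL do overlap across threads. Here time.sleep() (whose C implementation releases the GIL while waiting) stands in for blocking I/O or a native computation; four 0.2 s waits complete in roughly 0.2 s of wall time, not 0.8 s:

```python
import threading
import time

def blocking_work():
    # time.sleep() is implemented in C and releases the GIL while it
    # waits, standing in here for blocking I/O or heavy native code.
    time.sleep(0.2)

start = time.time()
threads = [threading.Thread(target=blocking_work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start

# The four waits overlap because each sleep releases the GIL, so the
# total wall time is close to one wait, not the sum of all four.
print(f"elapsed: {elapsed:.2f}s")
```

Replace the sleep with a pure-Python loop and the threads serialize on the GIL, which is exactly the CPU-bound case described above.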