Quote:
Originally Posted by Inverness
I don't think you'll find a more efficient implementation than Python's. And Python uses a trailing L to differentiate between the long object (arbitrary), and the int object (signed 32-bit).
Though in Python 3.0 the long object is being removed and the int object will behave as a long did. I'm sure this would not be done unless their implementation was efficient enough to warrant it. The source code is available for you to view on your own 
It's not really a matter of "more efficient" when there is an upper bound on what any implementation of arbitrary-precision arithmetic can achieve.
For two n-digit numbers:
Addition and subtraction take O(n).
Multiplication and division take O(n**1.465) with Toom-Cook or O(n * log n * log log n) with Schönhage–Strassen; implementations pick between them based on the size of the operands.
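To make the divide-and-conquer idea behind those multiplication bounds concrete, here is a toy sketch of Karatsuba's algorithm, the simplest member of that family (O(n**1.585), sitting between schoolbook O(n**2) and Toom-Cook). This is an illustration only, not what any production library uses verbatim; the function name and digit-based splitting are my own choices.

```python
def karatsuba(x, y):
    """Multiply two non-negative integers with Karatsuba's trick:
    three half-size multiplications instead of four, which is where
    the O(n**1.585) bound comes from."""
    if x < 10 or y < 10:
        return x * y
    # Split each number around the middle digit position.
    m = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10 ** m)
    high_y, low_y = divmod(y, 10 ** m)
    z0 = karatsuba(low_x, low_y)
    z2 = karatsuba(high_x, high_y)
    # (high_x + low_x)(high_y + low_y) - z2 - z0 recovers the cross terms
    # with a single extra multiplication.
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z2 - z0
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0
```

The recursion replaces four half-size products with three, and that saved multiplication, applied recursively, is the entire source of the speedup.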
These are the accepted time complexities for arbitrary-precision operations, and they make certain algorithms horribly inefficient compared to doing the operations directly on the processor.
And in response to your "efficient enough to warrant it" comment: there is a reason performance-critical algorithms are written in ASM and C++ rather than Python. When that kind of fine-grained control is needed, you wouldn't use a high-level language like Python.
However, something as simple as an unsigned bit-wise shift should be possible in GraalScript, which is why I'm asking, Stefan.
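I don't know GraalScript's shift semantics, but for comparison, in Python (where ints are arbitrary-precision and have no fixed sign bit to drag in) an unsigned 32-bit right shift can be emulated by masking first. The function name and 32-bit width here are my own assumptions for illustration:

```python
def ushr32(value, shift):
    """Emulate a 32-bit unsigned right shift (like JavaScript's >>>).
    Masking to 32 bits first reinterprets a negative number as its
    unsigned bit pattern, so the shift pulls in zeros, not sign bits."""
    return (value & 0xFFFFFFFF) >> (shift & 31)

# -1 has the 32-bit pattern 0xFFFFFFFF, so shifting it right by 1
# unsigned yields 0x7FFFFFFF rather than staying -1.
```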