How to use Numba in Python queries
This guide will show you how to use Numba in your Python queries in Deephaven.
Numba is an open-source just-in-time (JIT) compiler for Python that uses LLVM to translate portions of Python code into optimized machine code. Using Numba can make your queries faster and more responsive.
What is just-in-time (JIT) compilation?
JIT compiles code into optimized machine code at runtime. When using Numba, you can specify which functions and blocks of code you want to be compiled at runtime. Compiling code blocks into optimized machine code adds some overhead, but subsequent function calls can be much faster. Thus, JIT can be powerful when used on functions that are complex, large, or will be used many times.
caution
JIT does not guarantee faster code. Sometimes, code cannot be optimized well by Numba and may actually run slower.
Usage
Numba decorators specify which blocks of code to JIT compile. Common Numba decorators include @jit and @vectorize. See http://numba.pydata.org/ for more details.
In the following example, @jit and @vectorize are used to JIT compile functionOne and functionTwo.
import math
from numba import jit, vectorize, double, int64

@jit
def functionOne(a, b):
    return math.sqrt(a**2 + b**2)

@vectorize([double(double, int64)])
def functionTwo(decimal_value, integer_value):
    return decimal_value * integer_value
Lazy compilation
If the Numba decorator is used without a function signature, Numba will infer the argument types at call time and generate optimized code based upon the inferred types. Numba will also compile separate specializations for different input types. The compilation is deferred to the first function call. This is called lazy compilation.
The example below uses lazy compilation on the function func. When the code is first run, the function is compiled to optimized machine code, which adds overhead and results in a longer execution time. On the second function call, Python uses the already-compiled machine code, which results in very fast execution.
import numpy as np
from numba import jit
import time

# Lazy optimized addition function
@jit
def func(x, y):
    return x + y

x = 0.01 * np.arange(100000)
y = 0.02 * np.arange(100000)

# This function call is slow. It has to be compiled into optimized machine code.
start = time.time()
z1 = func(x, y)
end = time.time()
print("First function call took " + str(end - start) + " seconds.")

# This function call is fast. It uses the previously JIT-compiled machine code.
start = time.time()
z2 = func(x, y)
end = time.time()
print("Second function call took " + str(end - start) + " seconds.")
Eager compilation
If the Numba decorator is used with a function signature, Numba will compile the function when the function is defined. This is called eager compilation.
The example below uses eager compilation on the function func by specifying one or more function signatures. These function signatures denote the allowed input and output data types of the function. With eager compilation, the code is compiled into optimized machine code when the function is defined. By the time the code runs for the first time, it has already been compiled, so the first call is just as fast as the second. If the compiled function is used with a data type not specified in the function signature, an error occurs.
from numba import vectorize, int64, double
import numpy as np
import time

# Eager compilation for the int64(int64, int64) and double(double, double) function signatures happens here
@vectorize([int64(int64, int64), double(double, double)])
def func(x, y):
    z = x + y
    return z

x1 = 0.01 * np.arange(10000)
y1 = 0.02 * np.arange(10000)
x2 = 1 * np.arange(10000)
y2 = 2 * np.arange(10000)

# Run it the first time on doubles
start = time.time()
z1 = func(x1, y1)
end = time.time()
print("Add vectors of doubles (first run): " + str(end - start) + " seconds.")

# Run it a second time on doubles
start = time.time()
z2 = func(x1, y1)
end = time.time()
print("Add vectors of doubles (second run): " + str(end - start) + " seconds.")

# Run it the first time on integers
start = time.time()
z3 = func(x2, y2)
end = time.time()
print("Add vectors of integers (first run): " + str(end - start) + " seconds.")

# Run it a second time on integers
start = time.time()
z4 = func(x2, y2)
end = time.time()
print("Add vectors of integers (second run): " + str(end - start) + " seconds.")
caution
Eager compilation creates functions that only support specific function signatures. If these functions are applied to arguments of mismatched types, an error will occur.
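As a quick sketch of that failure mode (the add_ints function below is illustrative, not taken from the examples above), an eagerly compiled ufunc that only has an int64 loop raises an error when handed floating-point input; NumPy typically reports this as a TypeError because no compiled loop matches the argument types.

import numpy as np
from numba import vectorize, int64

# Only an int64(int64, int64) loop is compiled
@vectorize([int64(int64, int64)])
def add_ints(x, y):
    return x + y

ints = np.arange(5)
floats = 0.5 * np.arange(5)

print(add_ints(ints, ints))  # works: matches the int64 signature

try:
    add_ints(floats, floats)  # fails: no loop accepts float64 input
except TypeError as err:
    print("Mismatched types raise an error:", err)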
@jit vs @vectorize
We've seen the @jit and @vectorize decorators used to optimize various functions, but so far without an explanation of how they differ. So, what is different about each decorator?
- @jit is a general-purpose decorator that can optimize a wide variety of functions.
- @vectorize is meant to allow functions that operate on arrays to be written as if they operate on scalars.
This can be demonstrated with a simple example:
from numba import jit, vectorize, int64
import numpy as np

x = np.arange(5)
y = x + 5

# The signature below describes scalar int64 arguments
@jit([int64(int64, int64)])
def jitAddArrays(x, y):
    return x + y

# @vectorize builds a ufunc, so the scalar signature is applied element-wise
@vectorize([int64(int64, int64)])
def vectorizeAddArrays(x, y):
    return x + y

# Works: the ufunc broadcasts over the arrays
z1 = vectorizeAddArrays(x, y)
print(z1)

# Fails: the jitted function only accepts scalar int64 arguments
z2 = jitAddArrays(x, y)
print(z2)
Functions created using @vectorize can be applied to arrays. However, attempting to do the same with the @jit function above, which was compiled for a scalar signature, results in an error.
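One way around this (a sketch under the assumption that you still want to pass whole arrays; the function name below is illustrative) is to omit the explicit scalar signature and let @jit infer the argument types lazily. Numba then compiles an array specialization, and the same call succeeds.

import numpy as np
from numba import jit

x = np.arange(5)
y = x + 5

# With no signature, Numba infers array argument types at the first call,
# so the jitted function can operate on whole NumPy arrays.
@jit
def jitAddArraysInferred(x, y):
    return x + y

print(jitAddArraysInferred(x, y))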
Examples
NumPy matrix
The following example uses @jit to calculate the sum of all elements in a 250x250 matrix, a total of 62,500 additions.
Here, the nopython=True option is used. This option produces faster, natively compiled code that does not need the Python interpreter to execute. Without this flag, Numba will fall back to the slower object mode in some circumstances.
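To make that trade-off concrete, here is a small, hedged sketch (the Point class and get_x function are illustrative) of what happens when nopython compilation cannot be performed: a function that touches an arbitrary Python object cannot be typed, so calling it with nopython=True raises a compilation error (typically a Numba TypingError) instead of silently falling back to object mode.

from numba import jit

class Point:
    def __init__(self, x):
        self.x = x

# Plain Python objects cannot be typed in nopython mode
@jit(nopython=True)
def get_x(p):
    return p.x

try:
    get_x(Point(1.0))
except Exception as err:  # typically numba.core.errors.TypingError
    print("nopython compilation failed:", type(err).__name__)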
This example looks at three cases:
- A regular function without JIT
- A JIT function that needs compilation
- A JIT function that is already compiled
The first time the JIT-enabled function is run on a matrix of integers, it's almost ten times slower than the standard function. However, after compilation, the JIT-enabled function is almost two hundred times faster!
from numba import jit
import numpy as np
import time

x = np.arange(62500).reshape(250, 250)
y = 0.01 * x

def calc(a):
    matrix_sum = 0.0
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            matrix_sum += a[i, j]
    return matrix_sum

@jit(nopython=True)
def jitCalc(a):
    matrix_sum = 0.0
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            matrix_sum += a[i, j]
    return matrix_sum

# Time without JIT
start = time.time()
calc(x)
end = time.time()
print("Execution time (without JIT) = %s" % (end - start))

# Time with compilation and JIT
start = time.time()
jitCalc(x)
end = time.time()
print("Execution time (JIT + compilation) = %s" % (end - start))

# Time with JIT (already compiled)
start = time.time()
jitCalc(x)
end = time.time()
print("Execution time (JIT) = %s" % (end - start))
Using Deephaven tables
To show how the performance of @jit and @vectorize differs when applied to Deephaven tables, we create identical functions that use each decorator, then measure how long it takes to create new columns in a 625,000-row table with each one.
from deephaven import empty_table
from numba import jit, vectorize, double, int64
import time

def addColumns(A, B):
    return A + B

@jit([int64(int64, int64)])
def jitAddColumns(A, B):
    return A + B

@vectorize([int64(int64, int64)])
def vectorizeAddColumns(A, B):
    return A + B

def cubicFunc(C):
    return 0.0025 * C**3 - 1.75 * C**2 - C + 10

@jit([double(int64)])
def jitCubicFunc(C):
    return 0.0025 * C**3 - 1.75 * C**2 - C + 10

@vectorize([double(int64)])
def vectorizeCubicFunc(C):
    return 0.0025 * C**3 - 1.75 * C**2 - C + 10

t = empty_table(625000).update(formulas=["A = ii", "B = ii"])

# Time column addition without Numba
start = time.time()
t2 = t.update(formulas=["C = addColumns(A, B)"])
end = time.time()
print("column addition - Execution time (without Numba) = %s" % (end - start))

# Time column addition with jit
start = time.time()
t3 = t.update(formulas=["C = jitAddColumns(A, B)"])
end = time.time()
print("column addition - Execution time (jit) = %s" % (end - start))

# Time column addition with vectorize
start = time.time()
t4 = t.update(formulas=["C = vectorizeAddColumns(A, B)"])
end = time.time()
print("column addition - Execution time (vectorize) = %s" % (end - start))

# Time the cubic polynomial without Numba
start = time.time()
t5 = t2.update(formulas=["D = cubicFunc(C)"])
end = time.time()
print("cubic polynomial - Execution time (without Numba) = %s" % (end - start))

# Time the cubic polynomial with jit
start = time.time()
t6 = t2.update(formulas=["D = jitCubicFunc(C)"])
end = time.time()
print("cubic polynomial - Execution time (jit) = %s" % (end - start))

# Time the cubic polynomial with vectorize
start = time.time()
t7 = t2.update(formulas=["D = vectorizeCubicFunc(C)"])
end = time.time()
print("cubic polynomial - Execution time (vectorize) = %s" % (end - start))
The use of @jit with functions operating on Deephaven tables results in only a very small performance increase over the standard counterparts. This increase is small enough that the additional overhead of compiling the function into optimized machine code is not worth it.
The use of @vectorize with functions operating on Deephaven tables results in a large performance increase over the standard counterparts. This increase is large enough to warrant the additional overhead of compiling the function into optimized machine code.