Python Language - List comprehensions - Avoid repetitive and expensive operations using a conditional clause


Example

Consider the following list comprehension:

>>> import time
>>> def f(x):
...     time.sleep(.1)       # Simulate an expensive function
...     return x**2

>>> [f(x) for x in range(1000) if f(x) > 10]
[16, 25, 36, ...]

Here f(x) is evaluated up to twice per iteration: once to check the if condition, and a second time to produce the output value for every x that passes the filter. If f(x) is a particularly expensive operation, this can have significant performance implications. Worse, if calling f() has side effects, it can produce surprising results.
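
A quick way to see the double evaluation is to count the calls. The counter function g below is illustrative (not part of the original example) and uses a cheap stand-in for f:

>>> def g(x):
...     g.calls += 1         # count how many times g is invoked
...     return x**2
...
>>> g.calls = 0
>>> [g(x) for x in range(10) if g(x) > 10]
[16, 25, 36, 49, 64, 81]
>>> g.calls                  # 10 condition checks + 6 passing values
16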

Instead, you should evaluate the expensive operation only once for each value of x by generating an intermediate iterable (generator expression) as follows:

>>> [v for v in (f(x) for x in range(1000)) if v > 10]
[16, 25, 36, ...]

Or, using the equivalent built-in map():

>>> [v for v in map(f, range(1000)) if v > 10]
[16, 25, 36, ...]

Another option, which may yield more readable code, is to put the partial result (v in the previous example) inside a single-element iterable (such as a list or a tuple) and then iterate over it. Since v is the only element in the iterable, we end up with a reference to the output of our slow function, computed exactly once:

>>> [v for x in range(1000) for v in [f(x)] if v > 10]
[16, 25, 36, ...]

In practice, however, the logic of the code can be more complicated, and it is important to keep it readable. In general, a separate generator function is recommended over a complex one-liner:

>>> def is_prime(n):
...     # simple trial-division primality test (this helper was assumed, not defined, in the original)
...     return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))
...
>>> def process_prime_numbers(iterable):
...     for x in iterable:
...         if is_prime(x):
...             yield f(x)
...
>>> [x for x in process_prime_numbers(range(1000)) if x > 10]
[25, 49, 121, 169, ...]

Another way to avoid computing f(x) multiple times is to apply the @functools.lru_cache() decorator (Python 3.2+) to f(x). Since the output of f for the input x has then been computed once and cached, the second invocation in the original list comprehension is as fast as a dictionary lookup. This memoization approach is comparable in efficiency to using a generator expression.
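
For example, f can be cached like this (a sketch; maxsize=None gives an unbounded cache):

>>> import functools
>>> import time
>>> @functools.lru_cache(maxsize=None)
... def f(x):
...     time.sleep(.1)       # simulate an expensive function
...     return x**2
...
>>> [f(x) for x in range(1000) if f(x) > 10]   # each f(x) now computed only once
[16, 25, 36, ...]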


Say you have to flatten a nested list:

l = [[1, 2, 3], [4, 5, 6], [7], [8, 9]]

Some possible methods are:

from functools import reduce   # reduce is a built-in in Python 2
reduce(lambda x, y: x + y, l)

sum(l, [])

import itertools
list(itertools.chain(*l))

However, the list comprehension (like itertools.chain) runs in linear time, while the +-based shortcuts are quadratic:

[item for sublist in l for item in sublist]

The shortcuts based on + (including the implied use in sum) are, of necessity, O(L**2) when there are L sublists: as the intermediate result list keeps getting longer, at each step a new intermediate result list object gets allocated, and all the items in the previous intermediate result must be copied over (as well as a few new ones added at the end). For simplicity, and without actual loss of generality, say you have L sublists of I items each: the first I items are copied back and forth L-1 times, the second I items L-2 times, and so on; the total number of copies is I times the sum of x for x from 1 to L-1, i.e., I * L * (L-1) / 2. With L = 4 sublists of I = 3 items each, for instance, that is 3 * (1 + 2 + 3) = 18 copies.

The list comprehension just generates one list, once, and copies each item over (from its original place of residence to the result list) also exactly once.
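
A rough timing sketch (using the standard timeit module; exact numbers vary by machine) makes the gap visible as the number of sublists grows:

import timeit

l = [[0] * 10 for _ in range(1000)]   # 1,000 sublists of 10 items each

# quadratic: each + allocates a new list and re-copies the growing intermediate result
print(timeit.timeit(lambda: sum(l, []), number=100))

# linear: every item is copied into the result exactly once
print(timeit.timeit(lambda: [item for sublist in l for item in sublist], number=100))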


