[closed] Execution time changes 10-fold between versions of find_element from units 3.26-3.27

It seems to me that the different versions of find_element(p, t) from units 3.26 and 3.27 can have very different execution times when working with big lists: one can take as much as 10 times longer than the other. I'm not sure why this is so; perhaps someone can enlighten me.

For example, if we make a really big list

big_list = []
size = 10**7
for i in range(size):
    big_list.append(i)    # big_list holds the integers 0 .. size - 1

and search for its last element

find_element(big_list, size - 1)

The function defined as

def find_element(p,t):
    i = 0
    while i < len(p):     # len(p) is re-evaluated on every iteration
        if p[i] == t:
            return i
        i = i + 1
    return -1

takes (on my machine) about 7.6 seconds to run, while the function

def find_element(p,t):
    if t in p:               # first scan of the list
        return p.index(t)    # second scan, up to t
    else:
        return -1

takes only 0.77 seconds to run. By the way, I measure these times with

from datetime import datetime
start_time = datetime.now()
find_element(big_list,size)
end_time = datetime.now()
elapsed_time = end_time - start_time
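
Maybe the standard timeit module would be a more reliable way to measure? A minimal sketch, assuming big_list, size and find_element as defined above:

import timeit

# time a single call; timeit picks the most precise clock available
elapsed = timeit.timeit(lambda: find_element(big_list, size - 1), number=1)
print(elapsed)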

I've never programmed in Python, but I know a little Common Lisp (and people like to use A LOT of lists there). Perhaps the problem is that in the first case we traverse the list too many times? Does the call p[i] make Python traverse the list up to the i-th element each time? (Lisp would do that.) In the second example we traverse the list (I suspect) only twice: once with the t in p statement and once with the p.index(t) statement.

Is there a clever way to traverse the list only once? I'd like to know.

... or perhaps the whole problem is that I am timing the execution incorrectly.

asked 06 Mar '12, 20:03

Andres Garcia Saravia Ortiz M.

closed 13 Mar '12, 17:23

You're right about traversing the list twice. In the first example, each iteration has to increment i, test whether it is less than len(p) (which means calculating len(p) each time), and then look up the number at position i. I believe Python lists were taken from LISP, so each cell contains an element plus a pointer to the next element, which lets them be traversed easily.

(13 Mar '12, 18:01)

Peter Collingridge

@Peter yes, that's what I thought. However, I recently found this text by Peter Norvig, and it says Lisp lists are different from Python lists: "Lists are not Conses. Python lists are actually like adjustable arrays in Lisp or Vectors in Java. That means that list access is O(1)[...]". If this were Lisp you would indeed traverse the list twice, but it seems Python handles lists in a different way.

(13 Mar '12, 18:16)

Andres Garcia Saravia Ortiz M.

Hmmm, interesting. I had always thought they were the same.

(13 Mar '12, 18:20)

Peter Collingridge

The question has been closed for the following reason "The question is answered, right answer was accepted" by Andres Garcia Saravia Ortiz M. 13 Mar '12, 17:23


5 Answers:

It looks interesting. In the first function, you are going through the elements one by one, searching for the right value, which is O(n). In the second one, you are using the in statement and then the index method on the list, both of which are O(n). Thus the second function should be (very roughly, of course) about twice as slow as the first one, yet this is not what you have timed.

Note: A list in Python is implemented internally as a contiguous array of elements, which gives the complexities mentioned above. A resource for this information can be found here.

My best guess to explain your numbers would be that both functions have the same complexity (the second one merely has a constant about twice as large), but the first function is implemented in Python itself (it is interpreted), while the second one calls Python's built-in functions, which are implemented in C (?) and thus much faster. Another, even wilder guess would be that you are doing the same operation (walking through the list up to the same element) twice in very close succession and some optimization does its thing.

However, that does not look like a good explanation to me, because your times are far too different to base the explanation on this alone. Yet it is the only one I can think of. Could you run it with large lists of different sizes? Anyone have any ideas?
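
A sketch of such an experiment (the function names are mine, standing in for the two versions from the question):

import time

def find_element_loop(p, t):
    i = 0
    while i < len(p):
        if p[i] == t:
            return i
        i = i + 1
    return -1

def find_element_index(p, t):
    if t in p:
        return p.index(t)
    else:
        return -1

for exponent in range(4, 8):             # list sizes 10**4 .. 10**7
    big_list = list(range(10 ** exponent))
    target = 10 ** exponent - 1           # last element: the worst case
    for f in (find_element_loop, find_element_index):
        start = time.time()
        f(big_list, target)
        print(f.__name__, 10 ** exponent, time.time() - start)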


answered 06 Mar '12, 20:19

Mareq

edited 06 Mar '12, 20:37

Your answer misses the mark, except for the part about interpretation vs. built-in functions. Andres asked why one implementation ran slower than the other. He did not ask about the growth orders. Yes, they are both O(n), but so what? The speeds differ by a factor of 10 because of coding choices that Andres made, and because of the interpreter implementation that underlies those choices. That is the level at which an answer is needed.

(08 Mar '12, 02:55)

Kenneth I. Laws

Interesting comment, but since lists are implemented as arrays in Python, maybe the built-in functions could use some search method other than linear search.

(13 Mar '12, 18:08)

Lucas de Oliveira Teixeira

I am also a Python novice, but I believe your own answer is on the right track. Your first code example does a lot of arithmetic with the variable i: incrementing it, testing it, and computing an offset address into the p list (which probably involves a multiplication). The value of len(p) is also retrieved or computed on every iteration, and that could be a slow operation (depending on the Python implementation -- some might actually traverse the list looking for a terminal sentinel). You can probably speed up your code by storing len(p) in a variable and testing against that variable. You could also try adding a sentinel value at the end of the list and watching for it instead of checking the length. Or, with suitable modifications, start at the end of the list and decrement i, since testing for i == 0 or i < 0 will be faster than any other test.
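
Rough sketches of the first two ideas (untested, so treat them as guesses):

def find_element_cached_len(p, t):
    n = len(p)            # hoist len(p) out of the loop
    i = 0
    while i < n:
        if p[i] == t:
            return i
        i = i + 1
    return -1

def find_element_sentinel(p, t):
    p.append(t)           # sentinel: the loop is now guaranteed to stop
    i = 0
    while p[i] != t:
        i = i + 1
    p.pop()               # restore the list before returning
    return i if i < len(p) else -1    # hitting the sentinel means "not found"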

In contrast, your second code example is probably compiled to use register additions for address calculations. List traversal code is no doubt highly optimized. I suppose a really clever interpreter might even save the position at which the target is found, just in case the following instruction requires p.index(t). (Optimizing compilers examine the code as a whole to determine whether such an action would be useful.)

I can't answer the second part of your question, about whether there is an efficient single-traversal technique; it would have to be built into the interpreter. It seems like something that might well be defined in Python, or provided in a library - one possibility is sketched below.
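
The sketch: list.index raises ValueError when the element is absent, so catching that error finds the element in a single scan, without a separate membership test.

def find_element_single_pass(p, t):
    try:
        return p.index(t)     # one scan, stops at the first match
    except ValueError:
        return -1             # raised when t is not in the list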


answered 06 Mar '12, 20:36

Kenneth I. Laws

Getting the length of a Python list is an O(1) operation (see the link in my post). Thus we are still talking about two functions whose complexities differ only by constant factors. Therefore, one being 10 times faster than the other on such a big data set looks to me like we are missing something...

(06 Mar '12, 20:41)

Mareq

Or did I just nail it? Meaning: a ten-fold difference in running times does not matter much, especially for such a big data set (i.e., such a large N)?

(06 Mar '12, 21:01)

Mareq

Your comment is puzzling. A ten-fold difference in running time may not matter for very small problems, but sometimes matters very much for large ones. In any case, the issue in a CS101 class is to understand why some algorithms run faster than others. Such investigations help us develop good coding styles.

(08 Mar '12, 03:01)

Kenneth I. Laws

They both seem to grow linearly with size.

     Size Iterative time   index() time
 10000000 0:00:01.682416 0:00:00.273911
100000000 0:00:16.824791 0:00:02.726292

I haven't looked into how this is all implemented, but I would hope that calling index() results in compiled C library code being executed, whereas iterating in Python results in interpreted code being executed. So the 6X difference could just be compiled versus interpreted code?


answered 06 Mar '12, 21:14

Keith Evans

True, they both seem linear with size. I made a couple more examples with lists of sizes 10**4, 10**5, 10**6 and 10**7, and both examples scale linearly:

size     iterative-time  index() time
10**4          0.004723      0.000718
10**5          0.047821      0.008289
10**6          0.489827      0.080579
10**7          4.780098      0.800948

It seems to me that @Mareq is right and both functions have the same complexity O(N), just with different constant factors.

Oh, I also made a local variable to hold len(p) (as suggested by @kilaws) and that also helped noticeably.

(06 Mar '12, 21:38)

Andres Garcia Saravia Ortiz M.

Note that this is about 2X as fast - I didn't make the len(p) change.

def find_element1a(p,t):
    i = 0
    for e in p:          # iterate over the elements directly; no p[i] lookup
        if e == t:
            return i
        i = i + 1
    return -1

I would say that 10X matters more for larger datasets because it's more noticeable.

Regarding the len(p) change - yes, it's always a good idea to move computations and function calls out of a loop when possible.
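
For completeness, the built-in enumerate keeps the counter for you; a sketch (I haven't timed this one):

def find_element1b(p, t):
    # enumerate yields (index, element) pairs, counting in C
    for i, e in enumerate(p):
        if e == t:
            return i
    return -1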

(06 Mar '12, 22:03)

Keith Evans

Using 'in' is probably much more efficient than your iterative procedure. Your version must go through every element, testing its value; if the target is not present, it must go through the entire list before returning. With that in mind, the longer the list you pass in, the longer it can potentially take. As for why it takes over 7 seconds to return, I don't see why it should take that long unless you are passing in a very large list. Or, if you are using the online interpreter, it may simply be an issue with latency.


answered 06 Mar '12, 20:15

pseudonoob

I used an interpreter on my netbook (not the online one) for these examples, and yes, I passed a very large list with 10**7 elements. I'm not sure any real-world application would use such a list, but I am curious anyway.

I'm not sure I understand your answer. I would (naively) think that in the first case we 'walk' through the list just once, because we evaluate p[i] once for each i until we find the answer. In the second case we 'walk' through the list twice: the first time with the "t in p" statement and the second with "p.index(t)". Yet that last one seems faster.

(06 Mar '12, 20:28)

Andres Garcia Saravia Ortiz M.

The integer addition in the while loop is not a machine-code-level add instruction: the i in the while loop is an integer object. So I think there is a lot of overhead in looking up the "+" operator on the Python integer object each time around the loop.
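
You can see this with the standard dis module, which disassembles a function's bytecode (the exact opcode names vary between Python versions):

import dis

def find_element(p, t):
    i = 0
    while i < len(p):
        if p[i] == t:
            return i
        i = i + 1
    return -1

# each "i = i + 1" goes through a generic add opcode (e.g. BINARY_ADD),
# which is dispatched on the integer objects at run time
dis.dis(find_element)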


answered 06 Mar '12, 21:23

Sung Tae Kim


