# [closed] Execution time changes 10-fold between versions of find_element from units 3.26-3.27

It seems to me that the different versions of `find_element(p, t)` from units 3.26 and 3.27 can have very different execution times when working with big lists: one can take as much as 10 times longer than the other. I'm not sure why this is so; perhaps someone can enlighten me. For example, if we make a really big list

```python
big_list = []
size = 10**7
for i in range(size):
    big_list.append(i)
```

and call for the last element of it

```python
find_element(big_list, size)
```

then the version defined with an explicit loop,

```python
def find_element(p, t):
    i = 0
    while i < len(p):
        if p[i] == t:
            return i
        i = i + 1
    return -1
```

takes roughly 10 times longer than the version built on `in` and `index()`:

```python
def find_element(p, t):
    if t in p:
        return p.index(t)
    return -1
```
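For readers who want to reproduce the measurement, here is a self-contained timing sketch. The function names `find_element_loop` and `find_element_index` are mine, used only to keep both unit versions in one script, and the list size is reduced so the script finishes quickly:

```python
import timeit

def find_element_loop(p, t):
    # Unit 3.26 style: walk the list with an explicit index.
    i = 0
    while i < len(p):
        if p[i] == t:
            return i
        i = i + 1
    return -1

def find_element_index(p, t):
    # Unit 3.27 style: membership test, then index().
    if t in p:
        return p.index(t)
    return -1

size = 10**5  # smaller than the post's 10**7 so the script finishes quickly
big_list = list(range(size))

# As in the post, the target `size` is not in the list (it holds 0..size-1),
# so both versions hit their worst case and scan the whole list.
for fn in (find_element_loop, find_element_index):
    print(fn.__name__, timeit.timeit(lambda: fn(big_list, size), number=10))
```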

### The question has been closed for the following reason "The question is answered, right answer was accepted" by Andres Garcia Saravia Ortiz M. 13 Mar '12, 17:23

1 It looks interesting. In the first function you are going through the elements one by one, searching for the right value, which is O(n). In the second one you are using the `in` statement and then the `index()` method on the list, both of which are O(n). Thus the second function should be (very roughly, of course) about twice as slow as the first one, yet this is not what you have timed.

Note: a list in Python is implemented internally as a contiguous array of elements, which gives the complexities mentioned above. A resource for this information can be found here.

My best guess to explain your numbers would be that both functions have the same complexity (the second one merely has a constant about twice as large), but the first function is implemented in Python itself (interpreted), while the second one calls Python's built-in functions, which are implemented in C and are therefore much faster. Another, even wilder guess would be that you are doing the same operation (walking through the list up to the same element) twice in very close succession, and some optimization kicks in. However, that does not look like a good explanation to me, because your times are far too different to be explained by this alone; yet it is the only other one I can think of. Would you run it with large lists of different sizes? Anyone any ideas?

answered 06 Mar '12, 20:19 Marek Bálint 4.7k●16●42●85

Your answer misses the mark, except for the part about interpretation vs. built-in functions. Andres asked why one implementation ran slower than the other; he did not ask about the growth orders. Yes, they are both O(n), but so what? The speeds differ by a factor of 10 because of coding choices that Andres made, and because of the interpreter implementation that underlies those choices. That is the level at which an answer is needed. (08 Mar '12, 02:55) Kenneth I. L...
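The "scans the list twice" point from this answer can be verified directly. The following sketch is mine, not from the thread: it wraps list elements in a small class that counts equality comparisons, showing that in the worst case the `in` + `index()` version really does perform twice as many comparisons as the explicit loop:

```python
class Probe:
    """Wraps a value and counts equality comparisons involving Probes."""
    comparisons = 0

    def __init__(self, value):
        self.value = value

    def __eq__(self, other):
        Probe.comparisons += 1
        return self.value == getattr(other, "value", other)

def find_element_loop(p, t):
    i = 0
    while i < len(p):
        if p[i] == t:
            return i
        i = i + 1
    return -1

def find_element_index(p, t):
    if t in p:
        return p.index(t)
    return -1

n = 100
p = [Probe(i) for i in range(n)]
target = Probe(n - 1)  # equal to the last element, but a distinct object

Probe.comparisons = 0
find_element_loop(p, target)
loop_count = Probe.comparisons   # one comparison per element: n

Probe.comparisons = 0
find_element_index(p, target)
index_count = Probe.comparisons  # `in` scans once, index() scans again: 2n

print(loop_count, index_count)
```

So the constant factor from double traversal is real, but as the answer notes, it predicts a 2x slowdown for the second version, not the 10x speedup actually measured; the interpreted-vs-C difference dominates.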
Interesting comment, but since lists are implemented with arrays in Python, maybe the built-in functions could use some search method other than linear search. (13 Mar '12, 18:08) Lucas de Oli...
2 They both seem to grow linearly with size.

| Size | Iterative time | index() time |
| --- | --- | --- |
| 10000000 | 0:00:01.682416 | 0:00:00.273911 |
| 100000000 | 0:00:16.824791 | 0:00:02.726292 |

I haven't looked into how this is all implemented, but I would hope that calling index() results in compiled C code in a library being executed, whereas iterating in Python results in Python being interpreted. So the 6X difference could just be compiled versus interpreted code?

answered 06 Mar '12, 21:14 Keith Evans 2.0k●10●21●42

1 True, they both seem linear with size. I made a couple more examples with lists of sizes 10**4, 10**5, 10**6 and 10**7, and both examples scale linearly:

| Size | Iterative time | index() time |
| --- | --- | --- |
| 10**4 | 0.004723 | 0.000718 |
| 10**5 | 0.047821 | 0.008289 |
| 10**6 | 0.489827 | 0.080579 |
| 10**7 | 4.780098 | 0.800948 |

It seems to me that @Mareq is right and both functions have the same complexity O(N), just with different scale factors. Oh, I also made a local variable to hold len(p) (as suggested by @kilaws) and that also helped noticeably. (06 Mar '12, 21:38) Andres Garci...

1 Note that this is about 2X as fast, and I didn't make the len(p) change:

```python
def find_element1a(p, t):
    i = 0
    for e in p:
        if e == t:
            return i
        i = i + 1
    return -1
```

I would say that the 10X matters more for larger datasets because it's more noticeable. Regarding the len(p) change: yes, it's always a good idea to move computations and function calls out of a loop when possible. (06 Mar '12, 22:03) Keith Evans
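The len(p) tweak suggested by @kilaws is easy to demonstrate. This sketch (function names are mine) compares re-evaluating `len(p)` in the loop condition against hoisting it into a local variable once:

```python
import timeit

def find_element_while(p, t):
    # len(p) is re-evaluated on every iteration of the loop condition.
    i = 0
    while i < len(p):
        if p[i] == t:
            return i
        i = i + 1
    return -1

def find_element_hoisted(p, t):
    # Hoisting len(p) into a local removes one name lookup and one
    # function call per iteration.
    i = 0
    n = len(p)
    while i < n:
        if p[i] == t:
            return i
        i = i + 1
    return -1

big_list = list(range(10**5))
for fn in (find_element_while, find_element_hoisted):
    print(fn.__name__, timeit.timeit(lambda: fn(big_list, 10**5 - 1), number=5))
```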
0 Using 'in' is probably much more efficient than your iterative procedure. Yours must go through every element, testing its value; if the target is not present, it must traverse the entire list before returning. With that in mind, the longer the list you pass in, the longer it can potentially take. As for why it is taking over 7 seconds to return, I don't see why it should take that long unless you are passing in a very large list. Or, if you are using the online interpreter, it may simply be an issue with latency.

answered 06 Mar '12, 20:15 pseudonoob 200●3●12

I used an interpreter on my netbook (not the online one) for these examples, and yes, I passed a very large list with 10**7 elements. I'm not sure any real-world application would use such a list, but I am curious anyway. I'm not sure I understand your answer: I would (naively) think that in the first case we 'walk' through the list just once, because we evaluate p[i] once for each i until we find the answer. In the second case we 'walk' through the list twice, the first time with the "t in p" statement and the second with "p.index(t)". Yet this last one seems faster. (06 Mar '12, 20:28) Andres Garci...
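The worst-case behavior described here (a missing target forces a full scan) can be timed directly; the comparison below is my sketch, not from the thread:

```python
import timeit

def find_element(p, t):
    i = 0
    while i < len(p):
        if p[i] == t:
            return i
        i = i + 1
    return -1

big_list = list(range(10**5))

# An early hit returns after a handful of iterations; a missing target
# forces a scan of the entire list before returning -1.
early = timeit.timeit(lambda: find_element(big_list, 10), number=20)
absent = timeit.timeit(lambda: find_element(big_list, -1), number=20)
print(early, absent)
```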
0 The integer addition in the while loop is not a machine-code-level "add" operation. The "i" in the while loop is an integer object, so I think there is a lot of overhead in looking up the "+" operator on Python's integer object. answered 06 Mar '12, 21:23 Sung Tae Kim 297●4●8
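This interpreter-overhead point can be made concrete with the standard-library `dis` module (my sketch): even the single statement `i = i + 1` expands into several bytecode instructions, and the addition itself dispatches through the int object's operator machinery rather than a single machine add.

```python
import dis

def step(i):
    i = i + 1
    return i

# Disassembly shows the LOAD / binary-add / STORE instruction sequence
# the interpreter executes for `i = i + 1`.
dis.dis(step)
```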
