It seems to me that the different versions of find_element(p,t) from units 3.26 and 3.27 can have very different execution times when working with big lists; one can take as much as 10 times longer than the other. I'm not sure why this is so; perhaps someone can enlighten me.
For example, suppose we make a really big list and then search for its last element.
The function defined with an explicit index loop takes (on my machine) about 7.6 seconds to run, while the function using t in p and p.index(t) takes only 0.77 seconds. By the way, I measure these times with Python's time module.
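The actual code snippets did not survive in this post; below is a plausible reconstruction of the two versions and the timing harness, based on the description in the question (the function names, the list size, and the exact timing code are my assumptions):

```python
import time

# Version in the style of unit 3.26: explicit index loop.
def find_element_loop(p, t):
    i = 0
    while i < len(p):
        if p[i] == t:
            return i
        i = i + 1
    return -1

# Version in the style of unit 3.27: built-in membership test plus index().
def find_element_builtin(p, t):
    if t in p:
        return p.index(t)
    return -1

p = list(range(1000000))  # a really big list (size is an assumption)
t = p[-1]                 # search for the last element: the worst case

start = time.time()
find_element_loop(p, t)
print('loop version:    ', time.time() - start, 'seconds')

start = time.time()
find_element_builtin(p, t)
print('built-in version:', time.time() - start, 'seconds')
```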
I've never programmed in Python before, but I know a little Common Lisp (and people use lists A LOT there). Perhaps the problem is that in the first case we traverse the list too many times? Does the call p[i] make Python traverse the list up to the i-th element each time? (Lisp would do that.) In the second example we traverse the list (I suspect) only twice: once with the t in p statement and once with the p.index(t) statement.
Is there a clever way to traverse the list only once? I'd like to know.
... or perhaps the whole problem is that I am measuring the execution times wrong.
The question has been closed for the following reason "The question is answered, right answer was accepted" by Andres Garcia Saravia Ortiz M. 13 Mar '12, 17:23
It looks interesting. In the first function, you are going through the elements one by one, searching for the right value, which is O(n). In the second one, you are using the built-in t in p and p.index(t), each of which is also O(n).
Note: a list in Python is implemented internally as a contiguous array of elements, which gives the above-mentioned complexities. A resource for this information can be found here.
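Because a Python list is a contiguous array, p[i] is a constant-time lookup, not a traversal as it would be for a Lisp linked list. A quick way to convince yourself is to time indexing near the front and near the back of the same large list; both should take roughly the same time (a minimal sketch, with arbitrary sizes and repetition counts):

```python
import timeit

p = list(range(1000000))

# Time an index near the front and one near the back of the same list.
front = timeit.timeit(lambda: p[10], number=100000)
back = timeit.timeit(lambda: p[999990], number=100000)

# If indexing traversed the list, 'back' would be vastly slower.
print('front:', front)
print('back: ', back)
```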
My best guess at explaining your numbers would be that both functions have the same complexity (the second one just has a constant factor roughly twice as big). But the first function is implemented in Python itself (i.e. interpreted), while the second one calls Python's built-ins, which are implemented in C (?) and thus much faster. Another, even wilder guess would be that you are doing the same operation (walking through the list up to the same element) twice in very close succession and some optimization is doing its thing.
However, it does not look like a good explanation to me, because your times are way too different to base the explanation on this alone. Yet it is the only one I can think of. Would you run it with large lists of different sizes? Anyone have any ideas?
I am also a Python novice, but I believe your own answer is on the right track. Your first code example does a lot of arithmetic with the variable i: incrementing it, testing it, and computing an offset address into the p list (which probably involves a multiplication). The value of len(p) is also being retrieved or computed on every iteration, and that could be a slow operation (depending on the Python implementation -- some might actually traverse the list looking for a terminal sentinel). You can probably speed up your code by storing len(p) in a variable and testing against that variable. You could also try adding a sentinel value at the end of the list and watching for it instead of checking the length. Or, with suitable modifications, start at the end of the list and decrement i, since testing for i == 0 or i < 0 is faster than any other test.
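The len() hoist suggested above can be sketched as follows (in CPython len(p) is already O(1), since the length is stored with the list, but hoisting it still avoids a repeated function call on every iteration; the function name is my own):

```python
def find_element_hoisted(p, t):
    # Look up len(p) once instead of calling it on every iteration.
    n = len(p)
    i = 0
    while i < n:
        if p[i] == t:
            return i
        i = i + 1
    return -1
```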
In contrast, your second code example is probably compiled to use register additions for address calculations. List traversal code is no doubt highly optimized. I suppose a really clever interpreter might even save the position at which the target is found, just in case the following instruction requires p.index(t). (Optimizing compilers examine the code as a whole to determine whether such an action would be useful.)
I can't answer the second part of your question, about whether there is an efficient single-traversal technique. It would have to be built into the interpreter. Seems like something that might well be defined in Python, or provided in a library.
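For what it's worth, a single-traversal version does exist at the Python level: p.index(t) itself walks the list only once, and catching the ValueError it raises on a miss replaces the separate t in p check (a minimal sketch, keeping the -1-on-failure convention from the question):

```python
def find_element_once(p, t):
    # index() traverses the list once; if t is absent it raises
    # ValueError, which we translate into the -1 convention.
    try:
        return p.index(t)
    except ValueError:
        return -1
```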
Kenneth I. L...
They both seem to grow linearly with size.
I haven't looked into how this is all implemented, but I would hope that calling index() results in compiled C library code being executed, whereas iterating in Python results in Python code being interpreted. So the 6X difference could just be compiled versus interpreted code?
Using 'in' is probably much more efficient than your iterative procedure. For yours, it must go through every element, testing its value. If it is not present, it must go through the entire list before returning. With that in mind, the longer the list you pass in, the longer it can potentially take. As for why it is taking over 7 seconds to return, I don't see why it should be taking that long unless you are passing in a very large list. Or, if you are using the online interpreter, it may simply be an issue with latency.
The integer addition in the while loop is not a machine-code-level "add" operation.
Sung Tae Kim