# unit5-5 Path Smoothing - making it faster

Let us consider different approaches to the smoothing technique.

## First method

This is the one presented in unit 5-5. We implement gradient descent as an iterative method. It resembles the successive over-relaxation (SOR) method, where `weight_smooth` and `weight_data` act as weights and relaxation factors at the same time:

    y[i] += weight_data * (x[i] - y[i])
    y[i] += weight_smooth * (y[i+1] + y[i-1] - 2*y[i])
    y[i] += 0.5 * weight_smooth * (2*y[i-1] - y[i-2] - y[i])
    y[i] += 0.5 * weight_smooth * (2*y[i+1] - y[i+2] - y[i])

This approach has several disadvantages:

1. It converges too slowly. For certain values of `weight_data` and `weight_smooth` with a given ratio `weight_data/weight_smooth` it converges faster, but for most values it converges very slowly.
2. It needs two parameters, while the result depends only on their ratio.
3. It is inaccurate. Each iteration is split into several steps, and the result depends on the order of those steps.

## Second method

Let us find the local equilibrium point for `y[i]` given the equations above. `y[i]` is in equilibrium if it does not change between iterations, which means the sum of the right-hand sides of those equations must be zero. This gives a formula directly for `y[i]`:

    y[i] = (2*weight_data*x[i] + 4*weight_smooth*(y[i+1] + y[i-1]) - weight_smooth*(y[i+2] + y[i-2])) / (6*weight_smooth + 2*weight_data)

This equation depends only on the ratio `weight_ratio = weight_data/weight_smooth`, so it can be rewritten as:

    y[i] = (2*weight_ratio*x[i] + 4*(y[i+1] + y[i-1]) - (y[i+2] + y[i-2])) / (6 + 2*weight_ratio)

It converges faster than the previous one: it finds the local equilibrium point at once, while the first method needs several iterations for the local equilibrium to be established.

## Third method

The previous method can be modified into a successive over-relaxation method. It needs one additional parameter, the relaxation factor `alpha`.
After all this, we can write:

    y[i] = (1 - alpha)*y[i] + alpha*(2*weight_ratio*x[i] + 4*(y[i+1] + y[i-1]) - (y[i+2] + y[i-2])) / (6 + 2*weight_ratio)

This is a true successive over-relaxation (SOR) method, and it converges really fast. The first method only looked like SOR; it cannot provide good performance.

## Performance

Let us count the iterations needed for the data given in homework 5-3.

First method (weight_data = 0):

- weight_smooth = 0.1 (as in homework) -> iterations = 1036
- weight_smooth = 0.21 (optimal) -> iterations = 672

Second method:

- weight_ratio = 0 -> iterations = 166

Third method:

- weight_ratio = 0, alpha = 1.56 (optimal) -> iterations = 50

Thus SOR converges about ten times faster than the first method.

asked 22 Mar '12, 12:36 by Sergei Koryakin (edited by Anne Paulson)

Comments:

- Gundega (22 Mar '12, 12:42): Fixed numbering in tag and title. It's unit 5, not 4 :-) Nice information, thank you.
- Matt Bradley (22 Mar '12, 12:49): Do all three methods converge to the same path?
- Sergei Koryakin (22 Mar '12, 13:45): All three of them converge to the same path, with one condition: weight_smooth in the first method must be small (less than 0.1). Otherwise it converges to a different path depending on the order of the steps. That's why I said it is not accurate. I need more research on that; maybe later I'll post the results.
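For reference, the first and third update schemes above can be sketched as runnable Python on a 1-D toy path. The toy path, the convergence tolerance, and the parameter defaults here are illustrative assumptions, not the homework 5-3 data, so the iteration counts will differ from the table above:

```python
def smooth_gradient(x, weight_data=0.0, weight_smooth=0.1, tol=1e-6):
    """First method: gradient descent split into several per-point steps."""
    y = list(x)
    n = len(y)
    iterations = 0
    change = tol
    while change >= tol:
        change = 0.0
        for i in range(2, n - 2):  # keep two points fixed at each end
            old = y[i]
            y[i] += weight_data * (x[i] - y[i])
            y[i] += weight_smooth * (y[i+1] + y[i-1] - 2*y[i])
            y[i] += 0.5 * weight_smooth * (2*y[i-1] - y[i-2] - y[i])
            y[i] += 0.5 * weight_smooth * (2*y[i+1] - y[i+2] - y[i])
            change += abs(y[i] - old)
        iterations += 1
    return y, iterations

def smooth_sor(x, weight_ratio=0.0, alpha=1.0, tol=1e-6):
    """Third method: relax each point toward its local equilibrium.
    alpha = 1 reduces to the second method; alpha > 1 over-relaxes."""
    y = list(x)
    n = len(y)
    iterations = 0
    change = tol
    while change >= tol:
        change = 0.0
        for i in range(2, n - 2):
            old = y[i]
            equilibrium = (2*weight_ratio*x[i] + 4*(y[i+1] + y[i-1])
                           - (y[i+2] + y[i-2])) / (6 + 2*weight_ratio)
            y[i] = (1 - alpha) * y[i] + alpha * equilibrium
            change += abs(y[i] - old)
        iterations += 1
    return y, iterations

if __name__ == "__main__":
    path = [0.0, 1.0, 4.0, 2.0, 6.0, 3.0, 7.0, 8.0, 9.0]
    y_gd, it_gd = smooth_gradient(path)
    y_sor, it_sor = smooth_sor(path)
    print("gradient sweeps:", it_gd, "SOR sweeps:", it_sor)
```

Note that even with `alpha = 1` (plain Gauss-Seidel toward the local equilibrium, i.e. the second method) the sweep count drops noticeably, because the first method effectively under-relaxes each point.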