Let us consider different approaches to the smoothing technique.
The first approach was presented in unit 5-5. We can implement gradient descent as an iterative method. It looks like the successive over-relaxation (SOR) method, where weight_smooth and weight_data act as weights and relaxation factors at the same time.
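A minimal sketch of this iterative smoother, assuming the usual unit 5-5 update terms; the function name, default weights, and tolerance are illustrative, not the course's exact code. Note that each point's update is split into two separate steps, which matters for disadvantage 3 below:

    def smooth_gradient(path, weight_data=0.1, weight_smooth=0.1, tolerance=1e-6):
        y = [list(p) for p in path]  # start from a copy of the noisy path
        change = tolerance
        iterations = 0
        while change >= tolerance:
            change = 0.0
            for i in range(1, len(path) - 1):  # endpoints stay fixed
                for j in range(len(path[0])):
                    old = y[i][j]
                    # step 1: pull toward the original data point
                    y[i][j] += weight_data * (path[i][j] - y[i][j])
                    # step 2: pull toward the neighbors (smoothing term)
                    y[i][j] += weight_smooth * (y[i - 1][j] + y[i + 1][j] - 2.0 * y[i][j])
                    change += abs(old - y[i][j])
            iterations += 1
        return y, iterations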
This approach has several disadvantages:
1) It converges too slowly. For certain values of weight_data and weight_smooth at a given ratio weight_data/weight_smooth it converges reasonably fast, but for most values it converges very slowly.
2) It needs two parameters, while the result depends only on their ratio.
3) It is inaccurate. Each iteration is split into several steps, and the result depends on the order of those steps.
Let us find the local equilibrium point for y[i] using the update equations above. y[i] is in equilibrium if it does not change between iterations, which means the sum of the right-hand sides of those equations must be zero. This gives a formula directly for y[i]:
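Assuming the standard unit 5-5 update terms, setting weight_data * (path[i] - y[i]) + weight_smooth * (y[i-1] + y[i+1] - 2*y[i]) to zero and solving for y[i] gives:

    y[i] = (weight_data * path[i] + weight_smooth * (y[i-1] + y[i+1])) / (weight_data + 2 * weight_smooth)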
This equation depends only on the ratio weight_ratio = weight_data/weight_smooth, so one can rewrite it like this:
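Dividing numerator and denominator by weight_smooth:

    y[i] = (weight_ratio * path[i] + y[i-1] + y[i+1]) / (weight_ratio + 2)

Sweeping this update over the path gives the second method, a Gauss-Seidel-style iteration; as above, the scaffolding here is an illustrative sketch:

    def smooth_equilibrium(path, weight_ratio=1.0, tolerance=1e-6):
        y = [list(p) for p in path]
        change = tolerance
        iterations = 0
        while change >= tolerance:
            change = 0.0
            for i in range(1, len(path) - 1):
                for j in range(len(path[0])):
                    old = y[i][j]
                    # jump straight to the local equilibrium point
                    y[i][j] = ((weight_ratio * path[i][j] + y[i - 1][j] + y[i + 1][j])
                               / (weight_ratio + 2.0))
                    change += abs(old - y[i][j])
            iterations += 1
        return y, iterations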
It converges faster than the previous one: it finds the local equilibrium point at once, while the first method needs several iterations for the local equilibrium to be established.
The previous method can be modified into a successive over-relaxation method. It needs an additional parameter alpha as the relaxation factor:
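A sketch of that modification, under the same illustrative assumptions; alpha must lie in (0, 2) for convergence, and alpha = 1.8 is just a plausible default, not a tuned value. With alpha = 1 this reduces to the previous method:

    def smooth_sor(path, weight_ratio=1.0, alpha=1.8, tolerance=1e-6):
        y = [list(p) for p in path]
        change = tolerance
        iterations = 0
        while change >= tolerance:
            change = 0.0
            for i in range(1, len(path) - 1):
                for j in range(len(path[0])):
                    old = y[i][j]
                    equilibrium = ((weight_ratio * path[i][j] + y[i - 1][j] + y[i + 1][j])
                                   / (weight_ratio + 2.0))
                    # over-relax: step past the local equilibrium by factor alpha
                    y[i][j] = old + alpha * (equilibrium - old)
                    change += abs(old - y[i][j])
            iterations += 1
        return y, iterations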
This is a true successive over-relaxation (SOR) method, and it converges really fast. The first method only looked like SOR; it cannot deliver comparable performance.
Let us count the iterations needed for the data given in homework 5-3.
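A small harness to reproduce such a comparison with the sketches above; the zig-zag test path below is a stand-in, since the homework 5-3 data is not reproduced here, so the exact counts will differ:

    if __name__ == "__main__":
        path = [[0, 0], [1, 0], [2, 0], [3, 0], [4, 0],
                [5, 0], [6, 0], [6, 1], [6, 2], [6, 3]]
        _, n_grad = smooth_gradient(path, weight_data=0.1, weight_smooth=0.1)
        _, n_direct = smooth_equilibrium(path, weight_ratio=1.0)
        _, n_sor = smooth_sor(path, weight_ratio=1.0, alpha=1.8)
        print("gradient descent:  ", n_grad, "iterations")
        print("direct equilibrium:", n_direct, "iterations")
        print("SOR (alpha = 1.8): ", n_sor, "iterations")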
Thus SOR converges about ten times faster than the first method.