Questions Tagged With cs373http://forums.udacity.com/tags/cs373/?type=rssquestions tagged <span class="tag">cs373</span>enWed, 23 Apr 2014 18:52:02 -0400- In this lesson I missedhttp://forums.udacity.com/questions/100171917/in-this-lesson-i-missed<p>I have obtained the solutions shown in the videos:</p>
<p><strong>Final parameters: [2.9229268964347743, 10.326767087320677, 0.4932708323372665] <br>
3.61146228557e-17</strong></p>
<p>In this case the message is displayed: </p>
<p><strong>Your error is too large! Try to get it below 10^-10.</strong></p>
<p>But 3.61146228557e-17 is less than 10^-10. What is going on?</p>
<p>With <br>
<strong>dparams[2]=0.0<br>
dparams[1]=0.0</strong><br>
I got<br>
<strong>Final parameters: [0.28689543321944344, 0.0, 0.0] <br>
0.552909163589</strong></p>
<p>And with<br>
<strong>#myrobot.set_steering_drift(10.0 / 180.0 * pi) # 10 degree steering error</strong><br>
I get<br>
<strong>Final parameters: [0.001667718169966636, 0.0, 0.0] <br>
0.103838564491</strong></p>
<p>What I could not get is: </p>
<p><strong>[.188, 3.147, 0] --> 5.70004e-11</strong></p>
<p><strong>What solution should I arrive at?<br>
What should I do?</strong></p>
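The exercise above is the "twiddle" (coordinate ascent) parameter search from Unit 5. For reference, a minimal sketch of that loop, assuming a hypothetical `run(params)` callback that returns the averaged cross-track error for a parameter vector (one possible explanation for the mismatch above is that the grader recomputes the error with its own `run`, not the one printed locally):

```python
def twiddle(run, tol=1e-10):
    # Coordinate-descent search over the three PID gains.
    # `run` is a hypothetical callback returning the error to minimize.
    p = [0.0, 0.0, 0.0]
    dp = [1.0, 1.0, 1.0]
    best_err = run(p)
    while sum(dp) > tol:
        for i in range(len(p)):
            p[i] += dp[i]
            err = run(p)
            if err < best_err:          # improvement: keep, probe further
                best_err = err
                dp[i] *= 1.1
            else:
                p[i] -= 2.0 * dp[i]     # try the opposite direction
                err = run(p)
                if err < best_err:
                    best_err = err
                    dp[i] *= 1.1
                else:
                    p[i] += dp[i]       # revert, shrink the step
                    dp[i] *= 0.9
    return p, best_err
```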
<p>Thank you very much for your attention</p>Marco Antonio Alonso PĂ©rezWed, 23 Apr 2014 18:52:02 -0400http://forums.udacity.com/questions/100171917/in-this-lesson-i-missedunit5-16cs373m-48702479
- 3D Planning with Astarhttp://forums.udacity.com/questions/100171687/3d-planning-with-astar<p>Hi,</p>
<p>Well, I could not quite see how A-star is used for a 3D state space (X, Y, Orientation). You show some simulation results in which A-star is used for the 3D case. We see the 3D case in dynamic programming, and Dr. Thrun also wrote "3D" next to dynamic programming (but did not write it for A-star). I also have some ideas about how it could be done with A-star, but is there a source about this on UDACITY?</p>
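For what it's worth, the only structural change A-star needs for the 3D case is that a state becomes (x, y, orientation) and the visited/closed set becomes 3D as well. A sketch of the successor expansion, using tables in the style of the course exercises (these `forward`/`action` values are reconstructed for illustration):

```python
# 4 orientations (up, left, down, right) and 3 steering actions
# (turn right, go straight, turn left), in the course's table style
forward = [[-1, 0], [0, -1], [1, 0], [0, 1]]
action = [-1, 0, 1]

def successors(state):
    # Expand a 3D state (x, y, orientation) into its neighbor states.
    x, y, o = state
    result = []
    for a in action:
        o2 = (o + a) % len(forward)                # heading after steering
        x2, y2 = x + forward[o2][0], y + forward[o2][1]
        result.append((x2, y2, o2))
    return result
```

The rest of A-star (priority queue on f = g + h, a Manhattan-distance heuristic) is unchanged; only the cost/visited bookkeeping is indexed by orientation too.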
<p>Best!</p>Volkan-5Wed, 23 Apr 2014 11:28:51 -0400http://forums.udacity.com/questions/100171687/3d-planning-with-astarunit4-20m-48738129cs373
- Normal Kalman Filter doesn't work!?http://forums.udacity.com/questions/100171529/normal-kalman-filter-doesnt-work<p>The standard Kalman Filter can't localize the robot in 5000 steps for all three cases.</p>
<p>Maybe the extended one (EKF) will work, since the motion model of the robot is non-linear. If anyone has already given it a try, please share your experience! </p>
<p>Thanks, sincerely.</p>Tung NguyenWed, 23 Apr 2014 02:32:39 -0400http://forums.udacity.com/questions/100171529/normal-kalman-filter-doesnt-workunit1-5m-670128579cs373
- Want to Learn Einstein's Relativity? Check World Science U from Renowned Physicist Brian Greenehttp://forums.udacity.com/questions/100171121/want-to-learn-einsteins-relativity-check-world-science-u-from-renown-physicist-brian-greene<p>Not sure if I can call this a MOOC, it doesn't look like any MOOC I have ever seen, but surely looks interesting: <a rel="nofollow" href="http://www.worldscienceu.com"><strong>World Science U</strong></a>.</p>
<p>There are quizzes, a final exam (no deadlines though) and you get a free certificate signed by the instructor if you pass the course.</p>
<p>They have a course on Special Relativity (with the math!) taught by Dr. Brian Greene. To say that Brian Greene is engaging is probably an understatement, but I can't express it in better words...</p>
<p><img src="http://i59.tinypic.com/345o1gl.jpg" alt=""></p>
<p>Joe Hanson (from the educational YouTube channel <a rel="nofollow" href="http://www.youtube.com/user/itsokaytobesmart">"It's Ok to be Smart"</a>) <a rel="nofollow" href="https://www.youtube.com/watch?v=_Og6t38GXUQ">interviewed Dr. Greene and asked about WSU</a>.</p>Marcio GualtieriMon, 21 Apr 2014 14:08:17 -0400http://forums.udacity.com/questions/100171121/want-to-learn-einsteins-relativity-check-world-science-u-from-renown-physicist-brian-greenest101ph100cs222ma006ma004cs373st095cs271cs101ma008
- Prof Evans takes the lead, offering Instructor-Signed Certificates!http://forums.udacity.com/questions/100170849/prof-evans-takes-the-lead-offering-instructor-signed-certificates<p>I would like to applaud Professor Evans (who TBH always struck me as a really nice guy :) for taking the initiative of obtaining Udacity's permission to issue instructor-signed certificates once availability of 'official' ones ends. (See links below.)</p>
<p>I'm not sure how well this news has been disseminated, so I'd like to encourage cross-posting to the other course fora, and for people to encourage other instructors to arrange similar informal certification for their courses.</p>
<p>Stay Udacious folks! ;)</p>
<p>See Prof Evans' recent comment here:<br>
<a rel="nofollow" href="http://blog.udacity.com/2014/04/phasing-out-certificates-of-free16.html">http://blog.udacity.com/2014/04/phasing-out-certificates-of-free16.html</a><br>
And home page:<br>
<a rel="nofollow" href="http://www.cs.virginia.edu/evans/cs101/">http://www.cs.virginia.edu/evans/cs101/</a> </p>BironeSun, 20 Apr 2014 09:02:24 -0400http://forums.udacity.com/questions/100170849/prof-evans-takes-the-lead-offering-instructor-signed-certificatesst101ph100cs222cs101cs253cs373cs212cs215cs291cs387staff
- Landmark Segmentation in Graph SLAMhttp://forums.udacity.com/questions/100170118/landmark-segmentation-in-graph-slam<p>In graph SLAM, how are the number of landmarks determined? Is this apriori knowledge or are they dynamically added as sensor data is gathered? If they are dynamically gathered, how is one landmark distinguished from another?</p>Paul-972Wed, 16 Apr 2014 21:42:21 -0400http://forums.udacity.com/questions/100170118/landmark-segmentation-in-graph-slamgraphslamunit6-11m-48717420landmarkscs373
- Integral Term in PID Controller when time step is not equal to 1http://forums.udacity.com/questions/100170107/integral-term-in-pid-controller-when-time-step-is-not-equal-to-1<p>If I were to implement a PID controller with a timestep dt=0.2, obviously the derivative term would be divided by 0.2. Wouldn't the integral term be equal to the sum of cross-track errors times 0.2?</p>Paul-972Wed, 16 Apr 2014 20:48:13 -0400http://forums.udacity.com/questions/100170107/integral-term-in-pid-controller-when-time-step-is-not-equal-to-1unit5-14m-48728346cs373
- help me pleezhttp://forums.udacity.com/questions/100170034/help-me-pleez<p>I need help, please. I have been searching around about the robot; I need an algorithm to move the robot, for example, from A to B.</p>fatmiWed, 16 Apr 2014 15:55:16 -0400http://forums.udacity.com/questions/100170034/help-me-pleezcs373
- Final Project - Question 4 - Difference in Target 3?http://forums.udacity.com/questions/100169611/final-project-question-4-difference-in-target-3<p>For part 4 of the final project, I have a solution to capture target bots 1 and 2 pretty consistently, but catching bot 3 always fails, even after tweaking the parameters of my algorithm. I'm wondering if a hint could be provided as to the differences in behavior of each of the three bots. I'm finding it a little difficult to debug given I cannot see what bot 3 is doing.</p>Peter-84Tue, 15 Apr 2014 00:28:28 -0400http://forums.udacity.com/questions/100169611/final-project-question-4-difference-in-target-3unit1-9cs373m-670128583
- Why not just kill particles with small weight?http://forums.udacity.com/questions/100169460/why-not-just-to-kill-particles-with-small-weight<p>Why does killing particles need to be a random process? Why can't we just set a survival threshold based on weight?</p>Taras-SvitozarMon, 14 Apr 2014 10:25:42 -0400http://forums.udacity.com/questions/100169460/why-not-just-to-kill-particles-with-small-weightcs373
- Particle Filter for Velocity estimation of another robothttp://forums.udacity.com/questions/100168883/particle-filter-for-velocity-estimation-of-another-robot<p>Hi Everyone, </p>
<p>I am trying to use a particle filter for <strong>estimating the velocity of a robot</strong> from another robot (assumed stationary). The motion of the target robot is assumed to be at constant velocity in a 2-D space (x and y). The acceleration is treated as noise. The model is given below.</p>
<pre><code>X(k+1) = A * X(k) + M * V(k)
where, X(k) = [ x vel_x y vel_y ] - 4 D vector of position (x,y) and respective velocities.
A is a 4 x 4 matrix, V (k) is a noise vector, [ acc_x acc_y] in which we model acceleration as a random gaussian noise. M is a 4x2 matrix.
</code></pre>
<p>The measurements are from a laser scanner which provides [x y] with some random gaussian noise. </p>
<pre><code>Z(k) = H * X(k) + W(k)
where, Z(k) is a 2x1 measurement vector [x y]. W (k) is a measurement noise vector [noise_x noise_y] which is gaussian.
</code></pre>
<p>As you can see, if we use a particle filter for this problem, we compute the posterior pdf based on just two elements of the state vector (x and y). Basically, <strong>we assign weights to 4-dimensional particles based on just 2-dimensional measurements</strong>. So, when I run the filter with 10000 particles (quite a lot), for just 1 iteration, the particles are <strong>converging to a single wrong particle</strong>. (This is called ensemble collapse, I guess.)</p>
<p>However, in the example provided in this course, we estimate 3-D state (x,y,theta) using 2-D measurements (x,y) and it seems to work well. <br>
Questions:<br>
1. <strong>Do I really need measurements for (vel_x, vel_y) for this to work?</strong> What if I don't have any sensor which gives me these and what if I mathematically compute them using delta_x/delta_t and use those as additional measurements?<br>
2. Even then, I read that particle filters are not good for high dimensional state spaces and the required number of particles increases exponentially with number of dimensions. So, <strong>do I need more than 10000 particles for it to work effectively?</strong> If yes, then I don't think it will be good for real-time situation.</p>
<p>Any suggestions on how to solve this problem using particle filter?</p>
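One common mitigation for this kind of ensemble collapse (not from the course; a standard trick sometimes called "roughening") is to jitter every particle after resampling, including the velocity components the laser never measures directly, so duplicated particles spread out again:

```python
import random

def roughen(particles, sigmas):
    # Add small Gaussian jitter to every state dimension after resampling.
    # particles: list of [x, vel_x, y, vel_y] states (a sketch, not the
    # course code); sigmas: per-dimension jitter standard deviations.
    return [[s + random.gauss(0.0, sig) for s, sig in zip(p, sigmas)]
            for p in particles]
```

Choosing the velocity sigmas on the order of the acceleration noise per step keeps the cloud consistent with the motion model instead of collapsing onto a single (possibly wrong) velocity.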
<p>Thanks,<br>
Raj</p>Keerthi-4Fri, 11 Apr 2014 14:02:52 -0400http://forums.udacity.com/questions/100168883/particle-filter-for-velocity-estimation-of-another-robotdimensionalunit3-27highparticle-filtersm-48665972cs373
- Extended Kalman Filter for a 2D motionhttp://forums.udacity.com/questions/100168880/extended-kalman-filter-for-a-2d-motion<p>Hi,</p>
<p>I am trying to implement an Extended Kalman Filter for a 2D car motion model to carry out velocity estimation. It is similar to the last programming exercise in Problem Set 2 (CS373), but in this case the state vector is <strong>[x x_dot y y_dot omega]</strong>, where <strong>omega is the rotational velocity</strong> of the mobile robot. Does anyone think this approach would be feasible for estimating "omega" even though there are no measurements that give us either the angle or the rotational velocity?<br>
Basically, my code gives correct answers for the linear velocities and positions but not for the rotational velocity. Am I going completely off-track by trying to estimate "omega" from practically nothing? [My motion model takes the omega into consideration, it is a non linear model hence the usage of EKF instead of basic Kalman]</p>AsheshFri, 11 Apr 2014 13:51:08 -0400http://forums.udacity.com/questions/100168880/extended-kalman-filter-for-a-2d-motionekfproblemset2cs373
- certificateshttp://forums.udacity.com/questions/100168755/certificates<p>Where can I download free Udacity certificates?</p>Priyanka-1Fri, 11 Apr 2014 01:38:00 -0400http://forums.udacity.com/questions/100168755/certificatescs373
- Spoiler: 4-19 A-Star (rather than dynamic programming) solutionhttp://forums.udacity.com/questions/100168731/spoiler-4-19-a-star-rather-than-dynamic-programming-solution<p>Since we're asked to find a single best path for the output, rather than a full policy, I decided to try my hand at implementing this using A-Star. Below is my code for anyone who might be interested.</p>
<pre><code>def optimum_policy2D():
    # Initialize an empty return grid and a 3D best-value matrix
    policy2D = [[' ' for row in range(len(grid[0]))] for col in range(len(grid))]
    bestvalue = [[[999 for o in range(len(forward))] for y in range(len(grid[0]))] for x in range(len(grid))]
    goalx = goal[0]
    goaly = goal[1]
    # Initialize the frontier with the starting state
    # We'll keep a running path for how we got to each state in the frontier, so
    # when we reach the goal we can just play back the actions to make the grid.
    # Our heuristic is "manhattan distance" - non-diagonal distance if no obstacles.
    x = init[0]
    y = init[1]
    o = init[2]
    path = []
    g = 0
    h = abs(x - goalx) + abs(y - goaly)
    f = g + h
    frontier = []
    frontier.append([f, g, x, y, o, path])
    bestvalue[x][y][o] = g
    # Run the A-Star algorithm
    while len(frontier) > 0:
        frontier.sort()
        # print "frontier:", frontier
        current = frontier.pop(0)
        g = current[1]
        x = current[2]
        y = current[3]
        o = current[4]
        path = current[5]
        if x == goalx and y == goaly:
            # print "found", current
            frontier = []
        else:
            # print "take:", current
            for a in range(len(action)):
                o2 = (o + action[a]) % len(forward)
                x2 = x + forward[o2][0]
                y2 = y + forward[o2][1]
                g2 = g + cost[a]
                if x2 >= 0 and x2 < len(grid) and y2 >= 0 and y2 < len(grid[0]) and grid[x2][y2] == 0:
                    # If this is the cheapest way we've found to get to the next state, add it to the frontier
                    if g2 < bestvalue[x2][y2][o2]:
                        h2 = abs(x2 - goalx) + abs(y2 - goaly)
                        f2 = g2 + h2
                        path2 = list(path)
                        path2.append(a)
                        nxt = [f2, g2, x2, y2, o2, path2]
                        # print "add:", nxt
                        frontier.append(nxt)
                        bestvalue[x2][y2][o2] = g2
    # Now chart the path in the policy grid so we can return it
    x = init[0]
    y = init[1]
    o = init[2]
    for i in range(len(path)):
        a = path[i]
        policy2D[x][y] = action_name[a]
        o = (o + action[a]) % len(forward)
        x += forward[o][0]
        y += forward[o][1]
    policy2D[x][y] = '*'
    return policy2D # Make sure your function returns the expected grid.
</code></pre>AndrewCrThu, 10 Apr 2014 23:43:22 -0400http://forums.udacity.com/questions/100168731/spoiler-4-19-a-star-rather-than-dynamic-programming-solutionunit4-19m-48646840cs373
- the last parthttp://forums.udacity.com/questions/100168246/the-last-part<p>Are you talking about the final localization, or where it was initially?</p>ajay singhWed, 09 Apr 2014 10:24:06 -0400http://forums.udacity.com/questions/100168246/the-last-partunit1-3m-48673629cs373
- Particle Filter on Matlab / simulink.http://forums.udacity.com/questions/100167783/particle-filter-on-matlab-simulink<p>I want to implement the same particle filter on Matlab. Is it possible ?</p>Junaid-20Mon, 07 Apr 2014 01:56:07 -0400http://forums.udacity.com/questions/100167783/particle-filter-on-matlab-simulinkcs373
- Objective C particle filter implementationhttp://forums.udacity.com/questions/100167708/objective-c-particle-filter-implementation<p>I've built an open-source particle filter library in Objective C, based on what I've learned in this class. The code can be found here: <a rel="nofollow" href="https://bitbucket.org/codeswell/cdsparticlefilter">https://bitbucket.org/codeswell/cdsparticlefilter</a> and class documentation here: <a rel="nofollow" href="http://cocoadocs.org/docsets/CDSParticleFilter/">http://cocoadocs.org/docsets/CDSParticleFilter/</a> And an example application is here: <a rel="nofollow" href="https://bitbucket.org/codeswell/ibeacon-localization">https://bitbucket.org/codeswell/ibeacon-localization</a> (The example uses the particle filter to localize an iPhone using a set of iBeacons as landmarks.)</p>
<p>It can be used as a CocoaPod, or just by grabbing the source files and including them in your own project(s). I welcome any feedback, ideas, bugs, etc.</p>
<p>Thanks,<br>
Andy</p>AndrewCrSun, 06 Apr 2014 17:21:22 -0400http://forums.udacity.com/questions/100167708/objective-c-particle-filter-implementationunit3-1-qam-48271828objective_ccs373ios
- Torrent Video Downloadshttp://forums.udacity.com/questions/100167615/torrent-video-downloads<p>It would be very helpful if the videos are available via torrent downloads. Would that be an issue?</p>
<p>If not, are there any good folks out there to make it happen?</p>
<p>Thanks.</p>Shafiq RehmanSun, 06 Apr 2014 04:58:22 -0400http://forums.udacity.com/questions/100167615/torrent-video-downloadsdownloadcs373
- I AM GETTING AN ERRORhttp://forums.udacity.com/questions/100167433/i-a-geting-an-error<p>Traceback (most recent call last):<br>
File "vm_main.py", line 33, in <module><br>
import main<br>
File "/tmp/vmuser_esqzpghvzt/main.py", line 190<br>
P = # initial uncertainty<br>
^<br>
SyntaxError: invalid syntax</p>
<p>This is my whole code:</p>
<pre><code># Fill in the matrices P, F, H, R and I at the bottom
#
# This question requires NO CODING, just fill in the
# matrices where indicated. Please do not delete or modify
# any provided code OR comments. Good luck!
</code></pre>
<pre><code>from math import *

class matrix:
    # implements basic operations of a matrix class
    def __init__(self, value):
        self.value = value
        self.dimx = len(value)
        self.dimy = len(value[0])
        if value == [[]]:
            self.dimx = 0

    def zero(self, dimx, dimy):
        # check if valid dimensions
        if dimx < 1 or dimy < 1:
            raise ValueError, "Invalid size of matrix"
        else:
            self.dimx = dimx
            self.dimy = dimy
            self.value = [[0 for row in range(dimy)] for col in range(dimx)]

    def identity(self, dim):
        # check if valid dimension
        if dim < 1:
            raise ValueError, "Invalid size of matrix"
        else:
            self.dimx = dim
            self.dimy = dim
            self.value = [[0 for row in range(dim)] for col in range(dim)]
            for i in range(dim):
                self.value[i][i] = 1

    def show(self):
        for i in range(self.dimx):
            print self.value[i]
        print ' '

    def __add__(self, other):
        # check if correct dimensions
        if self.dimx != other.dimx or self.dimy != other.dimy:
            raise ValueError, "Matrices must be of equal dimensions to add"
        else:
            # add if correct dimensions
            res = matrix([[]])
            res.zero(self.dimx, self.dimy)
            for i in range(self.dimx):
                for j in range(self.dimy):
                    res.value[i][j] = self.value[i][j] + other.value[i][j]
            return res

    def __sub__(self, other):
        # check if correct dimensions
        if self.dimx != other.dimx or self.dimy != other.dimy:
            raise ValueError, "Matrices must be of equal dimensions to subtract"
        else:
            # subtract if correct dimensions
            res = matrix([[]])
            res.zero(self.dimx, self.dimy)
            for i in range(self.dimx):
                for j in range(self.dimy):
                    res.value[i][j] = self.value[i][j] - other.value[i][j]
            return res

    def __mul__(self, other):
        # check if correct dimensions
        if self.dimy != other.dimx:
            raise ValueError, "Matrices must be m*n and n*p to multiply"
        else:
            # multiply if correct dimensions
            res = matrix([[]])
            res.zero(self.dimx, other.dimy)
            for i in range(self.dimx):
                for j in range(other.dimy):
                    for k in range(self.dimy):
                        res.value[i][j] += self.value[i][k] * other.value[k][j]
            return res

    def transpose(self):
        # compute transpose
        res = matrix([[]])
        res.zero(self.dimy, self.dimx)
        for i in range(self.dimx):
            for j in range(self.dimy):
                res.value[j][i] = self.value[i][j]
        return res

    # Thanks to Ernesto P. Adorio for use of Cholesky and CholeskyInverse functions
    def Cholesky(self, ztol=1.0e-5):
        # Computes the upper triangular Cholesky factorization of
        # a positive definite matrix.
        res = matrix([[]])
        res.zero(self.dimx, self.dimx)
        for i in range(self.dimx):
            S = sum([(res.value[k][i])**2 for k in range(i)])
            d = self.value[i][i] - S
            if abs(d) < ztol:
                res.value[i][i] = 0.0
            else:
                if d < 0.0:
                    raise ValueError, "Matrix not positive-definite"
                res.value[i][i] = sqrt(d)
            for j in range(i+1, self.dimx):
                S = sum([res.value[k][i] * res.value[k][j] for k in range(self.dimx)])
                if abs(S) < ztol:
                    S = 0.0
                res.value[i][j] = (self.value[i][j] - S)/res.value[i][i]
        return res

    def CholeskyInverse(self):
        # Computes inverse of matrix given its Cholesky upper Triangular
        # decomposition of matrix.
        res = matrix([[]])
        res.zero(self.dimx, self.dimx)
        # Backward step for inverse.
        for j in reversed(range(self.dimx)):
            tjj = self.value[j][j]
            S = sum([self.value[j][k]*res.value[j][k] for k in range(j+1, self.dimx)])
            res.value[j][j] = 1.0/tjj**2 - S/tjj
            for i in reversed(range(j)):
                res.value[j][i] = res.value[i][j] = -sum([self.value[i][k]*res.value[k][j] for k in range(i+1, self.dimx)])/self.value[i][i]
        return res

    def inverse(self):
        aux = self.Cholesky()
        res = aux.CholeskyInverse()
        return res

    def __repr__(self):
        return repr(self.value)
</code></pre>
<pre><code>def filter(x, P):
    for n in range(len(measurements)):
        # prediction
        x = (F * x) + u
        P = F * P * F.transpose()

        # measurement update
        Z = matrix([measurements[n]])
        y = Z.transpose() - (H * x)
        S = H * P * H.transpose() + R
        K = P * H.transpose() * S.inverse()
        x = x + (K * y)
        P = (I - (K * H)) * P

    print 'x= '
    x.show()
    print 'P= '
    P.show()
</code></pre>
<p>print "### 4-dimensional example ###"</p>
<p>measurements = [[5., 10.], [6., 8.], [7., 6.], [8., 4.], [9., 2.], [10., 0.]]<br>
initial_xy = [4., 12.]</p>
<h1>measurements = [[1., 4.], [6., 0.], [11., -4.], [16., -8.]]</h1>
<h1>initial_xy = [-4., 8.]</h1>
<h1>measurements = [[1., 17.], [1., 15.], [1., 13.], [1., 11.]]</h1>
<h1>initial_xy = [1., 19.]</h1>
<p>dt = 0.1</p>
<p>x = matrix([[initial_xy[0]], [initial_xy[1]], [0.], [0.]]) # initial state (location and velocity)<br>
u = matrix([[0.], [0.], [0.], [0.]]) # external motion<br>
P = matrix([[0., 0., 0., 0.], [0., 0., 0., 0.], [0., 0., 1000., 0.], [0., 0., 0., 1000.]])# initial uncertainty<br>
F = matrix([[1., 0., dt, 0.], [0., 1., 0., dt], [0., 0., 1., 0.], [0., 0., 0., 1.]])# next state function<br>
H = matrix([[1., 0., 0., 0.], [0., 1., 0., 0.]])# measurement function<br>
R = matrix([[.1, 0.], [0., .1]])# measurement uncertainty<br>
I = matrix([[1., 0., 0., 0.,], [0., 1., 0., 0.], [0., 0., 1., 0.], [0., 0., 0., 1.]]) # identity matrix</p>
<pre><code>#### DO NOT MODIFY ANYTHING ABOVE HERE ####
#### fill this in, remember to use the matrix() function!:

P = # initial uncertainty
F = # next state function
H = # measurement function
R = # measurement uncertainty
I = # identity matrix

###### DO NOT MODIFY ANYTHING HERE ######

filter(x, P)
</code></pre>vishal-33Sat, 05 Apr 2014 05:16:22 -0400http://forums.udacity.com/questions/100167433/i-a-geting-an-errorm-48743144cs373ps2-6
- How to deal with overflow of integral term?http://forums.udacity.com/questions/100166963/how-to-deal-with-overflow-of-integral-term<p>If I implement a PID controller and run a control loop continuously, sooner or later my integral term (my sum of all errors) is going to overflow the variable in which it's held.</p>
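A standard remedy (not covered in the lesson; the limit below is a hypothetical tuning value) is "anti-windup": clamp the accumulated integral to a fixed range so it can neither overflow nor dominate the control output. In Python a float sum won't wrap around like a fixed-width integer (it drifts toward inf instead), but the clamp addresses both concerns:

```python
def update_integral(int_cte, cte, dt=1.0, limit=100.0):
    # Accumulate cross-track error over time, then clamp the running sum
    # into [-limit, limit] (anti-windup); `limit` is an assumed tuning value.
    int_cte += cte * dt
    return max(-limit, min(limit, int_cte))
```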
<p>How do I deal with that?</p>Matthew-850Wed, 02 Apr 2014 20:07:01 -0400http://forums.udacity.com/questions/100166963/how-to-deal-with-overflow-of-integral-termunit5-14m-48728346cs373
- spoiler - code works for examples, but does not gradehttp://forums.udacity.com/questions/100166952/spoiler-code-works-for-examples-but-does-not-grade<p>The code below works on various examples I tried, but when I try to submit it I get an error which says there was an error executing the code and asks me to check it in Test Run (which I already did). Does anyone have an idea why?</p>
<pre><code>def search():
    o = {0: [init]}
    ticked = []
    index = 0
    while True:
        if index + 1 not in o:
            o[index + 1] = []
        for l in o[index]:
            for d in range(4):
                n = [min(goal[0], max(0, l[0] + delta[d][0])),
                     min(goal[1], max(0, l[1] + delta[d][1]))]
                if n == goal:
                    return [index + 1, n[0], n[1]]
                if n in ticked:
                    continue
                if grid[n[0]][n[1]] == 1:
                    continue
                ticked.append(n)
                o[index + 1].append(n)
        index += 1
    return 'fail'
</code></pre>Michael NirschlWed, 02 Apr 2014 18:40:33 -0400http://forums.udacity.com/questions/100166952/spoiler-code-works-for-examples-but-does-not-gradem-48726347unit4-8cs373
- Simpler solution for racetrack problemhttp://forums.udacity.com/questions/100166922/simpler-solution-for-racetrack-problem<p>In the solution professor Thrun proposes a 4-branch if condition. I came up with a solution using a 3-branch if. I am wondering whether it is correct or not (by the way, this solution passes the submission).<br>
My solution differs from the "official" one in calculating the CTE on the straight lines (i.e. when x is in the range [radius:3*radius]).<br>
I simply checked whether the car is inside or outside the circuit with the following code:<br>
carRadius = abs(self.y-radius)<br>
cte = carRadius - radius<br>
If the car is inside the track I return a positive CTE error; otherwise I return a negative CTE.<br>
Any comment?<br>
Thanks<br>
Maurizio</p>Maurizio-1Wed, 02 Apr 2014 16:19:01 -0400http://forums.udacity.com/questions/100166922/simpler-solution-for-racetrack-problemps5-4m-48701530cs373
- [Spoiler] Exact or Approximate?http://forums.udacity.com/questions/100166791/spoiler-exact-or-approximate<p><em>Nota Bene: this is not me asking a question but instead sharing a thought.</em></p>
<p>What is the meaning of "exact"? I had my idea of what that may mean, and I got it wrong... But let me explain as I believe my point of view may be interesting (although wrong given the definition of "exact" as meant by the instructor).</p>
<p>In a discrete space, I estimated that the position wasn't "approximated" but instead "exact" and the uncertainty was about the knowledge (belief), not the position itself. Indeed, in the measurement of the position, if I tell you that you are either in square A or in square B, you know "exactly" where A and B are.</p>
<p>However, in a continuous space, if I tell you that you are at coordinates (1.0, 2.5) or (sqrt(2), 1.0) ... it will be very difficult to "exactly" mark that position. You may however draw a point encompassing that position, but then we are discretizing again... (I am not even mentioning that the robot is not a point)</p>
<p>Moreover edge cases in discrete can be dealt with just by reducing the resolution... This doesn't help much regarding positioning, but it helps keeping the position "exact" ;-) </p>
<p>Anyways... That's just the embryo of an idea and my initial assumption over the meaning of "exact".</p>
<p>As you may have guessed my first (and wrong) answer was: </p>
<ul>
<li>Histogram => Discrete => Exact</li>
<li>Kalman => Continuous => Approximate</li>
</ul>
<p>It is always interesting to experience how we or other people may assume different meanings for words <br>
in the absence of a definition.</p>
<p>Great course, <br>
Cheers,<br>
Pierre</p>Pierre-76Wed, 02 Apr 2014 04:13:45 -0400http://forums.udacity.com/questions/100166791/spoiler-exact-or-approximateunit3-5m-48759001cs373
- Programming assignment 1http://forums.udacity.com/questions/100166777/programming-assignment-1<pre><code>colors = [['red', 'green', 'green', 'red' , 'red'],
['red', 'red', 'green', 'red', 'red'],
['red', 'red', 'green', 'green', 'red'],
['red', 'red', 'red', 'red', 'red']]
measurements = ['green', 'green', 'green' ,'green', 'green']
motions = [[0,0],[0,1],[1,0],[1,0],[0,1]]
sensor_right = 0.7
p_move = 0.8
def show(p):
for i in range(len(p)):
print p[i]
p = []
x = len(colors)
y = len(colors[0])
n = x*y
# initiliazation of the probability array
for i in range(x):
p.append([])
for j in range(y):
p[i].append(1/float(n))
def sense(p,Z):
q = []
for i in range(x):
q.append([])
for j in range(y):
hit = ( Z == colors[i][j] )
q[i].append( p[i][j]*(hit*sensor_right + (1-hit)*(1-sensor_right) ) )
s = 0
for i in range(x):
for j in range(y):
s = s + q[i][j]
for i in range(x):
for j in range(y):
q[i][j] = q[i][j]/s
return q
def move(p,U):
q = []
for i in range(x):
q.append([])
for j in range(y):
q[i].append( p[(i-U[0])%x][(j-U[1])%y]*p_move + p[i][j]*(1-p_move) )
return q
#Your probability array must be printed
#with the following code.
for i in range(len(measurements)):
p = move(p,motions[i])
p = sense(p,measurements[i])
show(p)
</code></pre>
<p>When I run my code I get the right answer, but when I click on submit it shows 'try again'. What is the problem?</p>K AbhishekWed, 02 Apr 2014 02:25:10 -0400http://forums.udacity.com/questions/100166777/programming-assignment-1m-48736211ps1-4cs373
- Particle filter, virtual particles' motions have noise?http://forums.udacity.com/questions/100166702/particle-filter-virtual-particles-motions-have-noise<p>Please tell me if my understanding is wrong.</p>
<p>So far, my understanding is that the particles are virtual and are used to localize the real robot when its initial position is unknown (but its motions are known). So it seems strange that the virtual particles' motions have noise; aren't they just imaginary references? Why do they need to have noise?</p>
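For what it's worth, a sketch of why the noise matters (function and parameter names here are hypothetical): resampling only duplicates existing states, so without per-particle motion noise the cloud can never explore poses between or around the survivors, and it eventually degenerates to a few identical copies:

```python
import random

def move_particle(x, y, dx, dy, motion_noise=0.05):
    # Apply the commanded motion plus Gaussian noise; the noise models the
    # real robot's imperfect actuation, so duplicated particles diverge
    # again and the cloud keeps covering all plausible poses.
    return (x + dx + random.gauss(0.0, motion_noise),
            y + dy + random.gauss(0.0, motion_noise))
```

With few measurements or a short run the filter can still pass without noise, but over longer runs the noiseless version loses diversity.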
<p>I deleted the noise part in the function move() for HW 3-6, and the code still runs OK, though.</p>
<p>Please someone help me to clarify the working principle of the particle filter, thanks!</p>Chen-17Tue, 01 Apr 2014 18:32:42 -0400http://forums.udacity.com/questions/100166702/particle-filter-virtual-particles-motions-have-noisecs373ps3-6noiseparticlehw3-6
- Question about check_output(final_robot, estimated_position) in HW3.6http://forums.udacity.com/questions/100166672/question-about-check_outputfinal_robot-estimated_position-in-hw36<p>When checking the error of orientation:</p>
<pre><code>error_orientation = abs(final_robot.orientation - estimated_position[2])
error_orientation = (error_orientation + pi) % (2.0 * pi) - pi
correct = error_x < tolerance_xy and error_y < tolerance_xy \
and error_orientation < tolerance_orientation
</code></pre>
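As a quick check of the premise: the second line wraps any non-negative difference into [-pi, pi), so it can indeed come out negative, in which case the `<` comparison passes trivially:

```python
from math import pi

def normalize_angle_error(err):
    # Wrap an angle difference into [-pi, pi), exactly as the
    # second line of the grader snippet does.
    return (err + pi) % (2.0 * pi) - pi

small = normalize_angle_error(0.1)               # stays 0.1
wrapped = normalize_angle_error(2.0 * pi - 0.1)  # wraps to -0.1
```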
<p>Is there a need to add an "abs" to make the error_orientation positive after the normalization in the second line?</p>Chen-17Tue, 01 Apr 2014 17:07:38 -0400http://forums.udacity.com/questions/100166672/question-about-check_outputfinal_robot-estimated_position-in-hw36ps3-6cs373homework3-6
- Normalization of the sensor being wronghttp://forums.udacity.com/questions/100166649/normalization-of-the-sensor-being-wrong<p>In the code we use sensor_right and (1.0 - sensor_right). Is the reason that we don't have to normalize the probability for hit or miss, because the probability of being red is 0.5 and the probability of being green is 0.5?</p>
<p>Thus<br>
P(red) = P(green) = 0.5<br>
P(sensor_right) = 0.7<br>
P(sensor_wrong) = 0.3</p>
<p>P_(red | detectedRed) = 0.5 * 0.7<br>
P_(green | detectedRed) = 0.5 * 0.3<br>
P (red | detectedRed) = 0.5 * 0.7 * 2 = 0.7<br>
P (green | detectedRed) = 0.5 * 0.3 * 2 = 0.3</p>
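The arithmetic above can be checked directly: with the uniform 0.5/0.5 color prior, the normalizer is 0.5 * 0.7 + 0.5 * 0.3 = 0.5, so dividing by it is the same as multiplying by 2. A small sketch (the function name is hypothetical):

```python
def posterior_red(p_red, sensor_right):
    # Bayes update after the sensor reports "red", in a two-color world.
    unnorm_red = p_red * sensor_right
    unnorm_green = (1.0 - p_red) * (1.0 - sensor_right)
    return unnorm_red / (unnorm_red + unnorm_green)
```

With p_red = 0.001 (the fire example's prior) the same formula gives a much smaller posterior, which is why those numbers look different.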
<p>The example with the neighbor and the fire gives different numbers, but I think that was because the probability of fire was 0.001 and not 0.5</p>
<p>Is this correct?</p>HeadbanginghippieTue, 01 Apr 2014 15:18:02 -0400http://forums.udacity.com/questions/100166649/normalization-of-the-sensor-being-wrongm-48736211ps1-4cs373
- Why is the world not an input for the function?http://forums.udacity.com/questions/100166166/why-is-the-world-not-an-input-for-the-function<p>I wondered whether there is a reason why <strong>p</strong> and <strong>Z</strong> are inputs for the sense function and <strong>world</strong> is a global variable?</p>HeadbanginghippieSun, 30 Mar 2014 04:17:18 -0400http://forums.udacity.com/questions/100166166/why-is-the-world-not-an-input-for-the-functionm-48692805unit1-12cs373
- real world implementationhttp://forums.udacity.com/questions/100166095/real-world-implementation<p>I want to see how the localization algorithm, and all the other algorithms taught in this course, work in practice. Does anyone know of a website or software where we can apply such AI algorithms and see how they really work in the real world?</p>tushantSat, 29 Mar 2014 20:35:15 -0400http://forums.udacity.com/questions/100166095/real-world-implementationcs373
- How to get the result of the homework in Localization Program.http://forums.udacity.com/questions/100165937/how-to-get-the-result-of-the-homework-in-localization-program<p>I am Chinese, and I can't quite understand what the instruction means. The code below is the problem. Does it mean I should compute the probability of every element? And after that, should I put my result in p = []?</p>
<blockquote>
<p>colors = [['red', 'green', 'green', 'red' , 'red'],<br>
['red', 'red', 'green', 'red', 'red'],<br>
['red', 'red', 'green', 'green', 'red'],<br>
['red', 'red', 'red', 'red', 'red']]</p>
</blockquote>
<p>measurements = ['green', 'green', 'green' ,'green', 'green']</p>
<p>motions = [[0,0],[0,1],[1,0],[1,0],[0,1]]</p>
<p>sensor_right = 0.7</p>
<p>p_move = 0.8</p>
<p>def show(p):<br>
for i in range(len(p)):<br>
print p[i]</p>
<h1>DO NOT USE IMPORT</h1>
<h1>ENTER CODE BELOW HERE</h1>
<h1>ANY CODE ABOVE WILL CAUSE</h1>
<h1>HOMEWORK TO BE GRADED</h1>
<h1>INCORRECT</h1>
<p>p = []</p>
<h1>Your probability array must be printed</h1>
<h1>with the following code.</h1>
<h1>show(p)</h1>WatershadeSat, 29 Mar 2014 06:02:50 -0400http://forums.udacity.com/questions/100165937/how-to-get-the-result-of-the-homework-in-localization-programm-48736211ps1-4cs373