We are pleased to announce a contest for students in the pioneering CS101 class!
The goal of this contest is to build on the ideas in the CS101 class in a creative way. The contest is open to all students enrolled in Udacity’s CS101 Course. You can do anything you want, so long as it is legal and tasteful (please see the full contest rules for details).
To enter the contest, post an answer here whose first line is "Submission", followed by:
All the material you submit for the contest must be released under a Creative Commons CC BY-NC-SA license. Contest submissions must be posted by 23:59 UTC on Friday 20 April 2012 to be eligible for the contest.
Note added 9 April: Gabriel Weinberg, creator of DuckDuckGo, will be one of the judges for the contest!
Note added 11 May: The winners are now announced! Congratulations to everyone who participated in the contest.
Youdacity VideoSearch & Extended Media Linking
Project Description (v1.3)
PLEASE NOTE: If you try out the web-app, all SEARCH TERMS must be entered in LOWERCASE!!
... or even several media links at once
Create your own summary links, it's fun :D
I read on the forum that some students were short of ideas while others had trouble getting started from a technical point of view, e.g. not knowing how to set up and deploy a web application. The application I built consists of very few lines of documented Python code and can easily be deployed on Google App Engine free of charge.
I have left ample room for improvements / additions.
IF YOU ARE INTERESTED IN USING THIS APPLICATION AS A STARTING POINT FOR YOUR OWN PROJECT, PLEASE FEEL FREE TO DO SO. (PROVIDED THAT THIS IS ALLOWED)
There are still eight days left to place a submission, which is one day more than God had to create the universe. So give it a shot. Maybe try to add multi-term or compound-term search, scrolling subtitles, or additional page fragments that display search hits in forum content or lecture slides.
Good luck to everyone
So, instead of doing a couple of exam questions in my learning time tonight, I decided to make a contest entry.
Among my many hobbies is amateur radio. I have been licensed for more than 20 years but have drifted in and out of the pursuit quite a bit. Last fall I started drifting in again. Not having any equipment, and having moved into a new home since my last drift into it, one of the things I needed to procure and install was an antenna of some sort. Since my home is in a bit of a restricted community (any of you who are hams will understand what I'm talking about), I had to come up with a fairly unobtrusive antenna. And I didn't want to spend a whole lot of cash on it either. So after much research and deliberation I went with a telescoping fiberglass vertical (again, for those in the know, it's an S9v). Basically, this is a somewhat limber 31-foot tapered fiberglass tube with a wire in it. It works pretty well with my radio and tuner. I've only had a little time on it (class here is taking up all hobby time at the moment), but I have had good contacts to California and Argentina on 10 meters from my home here in Illinois.

Anyhow, this background finally brings me to the point. See, this antenna being not very expensive, fairly invisible behind my house and all... it doesn't handle higher winds all that well. They say it's good up to about 40 mph winds, but I don't want to test it that far; beyond that it'll break and I'll have repairs and parts to order. Since it's designed as a portable antenna, it's very easy to put up and take down, and that's what I do: put it up when I know the wind is going to be OK and take it down if I know the wind is going to be too much for it.

Initially I looked around the web for some service that would just send an automated email with a 12-hour wind forecast. I didn't find much that was suitable for my needs. The Weather Channel has email alerts, but only for current weather as far as I could find.
I did find a service that had a wind forecast for the next couple of days, but it was mingled in with a bunch of other stuff I didn't really need; it was an HTML email all clogged with ads, etc. I also knew about NOAA's NDFD service (National Digital Forecast Database), which supplies various forecast info in a nice XML format. I had looked at that a bit and made a half-hearted attempt at a Windows Phone 7 app to grab it, but that was quite over my head.
Until... this class, with its superb coverage of pulling web pages, parsing out text, and the various goings-on of Python. Thus, I submit -
The Wind Gust Forecast Emailer (WGFE)
When the contest was announced I had thought about this. But at the time I was in the thick of the Unit 6 HW which wasn't going all that well for me (turns out it went fine, got 100%). Today at work the thought came up again and certain things came together in my head. A bit of guessing, a lot of googling, much testing and about 3 hrs later I have my Windows 7 Task Scheduler running a python script that emails me nothing but the next 12 hrs maximum wind gust speed forecast in mph.
Hmm, how much to discuss here and how much to leave for the reader?
The NOAA NDFD has a facility whereby you can enter some parameters in a web form (lots of parameters), click submit, and receive an XML document giving what you requested. I ran that for my zip code and timeframe, requesting only the wind gust forecast for 12 hours. That provided me a basic URL on which I would only need to change the dates to get future days' forecasts. It's a fairly huge URL, and to me (being more of a SQL person) it basically seems to pass a bunch of parameters to the server, which pulls and formats the output for you based on those parameters.
I knew I had to import urllib; we've used that before with the crawling. I basically recycled get_page to pull back the XML document. One wrinkle was filling the appropriate dates into the URL. Googling led me to the datetime library, and a bit of work on that plus some string concatenation made it so I could dynamically insert a date into the URL that I pass to get_page.
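As a sketch of that date insertion: the query string below is a simplified, hypothetical stand-in for the real NDFD URL (which is far longer), but the idea of splicing datetime-formatted begin/end strings into it is the same.

```python
import datetime

def build_url(zip_code='62701'):
    # Only the begin/end date parameters change from day to day;
    # the rest of the (hypothetical) URL stays fixed.
    now = datetime.datetime.utcnow()
    begin = now.strftime('%Y-%m-%dT%H:%M:%S')
    end = (now + datetime.timedelta(hours=12)).strftime('%Y-%m-%dT%H:%M:%S')
    return ('https://graphical.weather.gov/xml/sample_products/'
            'browser_interface/ndfdXMLclient.php'
            '?zipCodeList=' + zip_code +
            '&product=time-series&begin=' + begin +
            '&end=' + end + '&wgust=wgust')
```

The resulting string can then be handed straight to get_page.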
Now to pull out the wind gust figures: for 12 hours there are five values, one every three hours (i.e., 12, 3, 6, 9, 12). I was able to commandeer get_all_links and get_next_target to extract the five values into a list. Then I had to write a new procedure to pick the maximum number out of that list. Rather than splitting and sorting the list, I thought it was easier to just set a variable to 0, then loop through the list, and whenever the next value is bigger than the variable's current value, reassign it and move on. Once the maximum is established, I quickly convert it from knots to mph, round it, and make it a string for later use.
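A rough sketch of that extraction and maximum-finding, assuming the gust values arrive in knots inside <value> tags (the procedure names here are made up, adapted from get_next_target):

```python
def get_next_speed(page, pos):
    # Like get_next_target, but pulls the next <value>...</value> element
    start = page.find('<value>', pos)
    if start == -1:
        return None, -1
    start_text = start + len('<value>')
    end = page.find('</value>', start_text)
    return float(page[start_text:end]), end

def get_speeds(page):
    # Collect every forecast value in the document into a list
    speeds, pos = [], 0
    while True:
        speed, pos = get_next_speed(page, pos)
        if speed is None:
            return speeds
        speeds.append(speed)

def find_max(speeds):
    # Loop-and-reassign, instead of sorting
    best = 0
    for s in speeds:
        if s > best:
            best = s
    return best

def knots_to_mph(knots):
    return int(round(knots * 1.15078))  # 1 knot is about 1.15078 mph
```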
What was left was the emailing part. More googling led me very easily to smtplib and a basic template that I modified to suit my situation.
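A minimal sketch along the lines of that smtplib template; the host and addresses below are placeholders, not my real ones:

```python
import smtplib

def build_message(sender, recipient, subject, body):
    # Plain RFC 2822-style message text, as in the smtplib docs example
    return ('From: %s\r\nTo: %s\r\nSubject: %s\r\n\r\n%s'
            % (sender, recipient, subject, body))

def send_forecast(max_gust_mph, date_string,
                  host='mail.example.com',        # your ISP's SMTP server
                  sender='me@example.com',
                  recipient='me@example.com'):
    body = ('Maximum wind gust forecast for %s: %s mph'
            % (date_string, max_gust_mph))
    msg = build_message(sender, recipient, 'Wind Gust Forecast', body)
    server = smtplib.SMTP(host)
    server.sendmail(sender, recipient, msg)
    server.quit()
```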
I don't really know what's required for the submission, but I'll give a basic walkthrough of the code for anyone who is interested -
First the libraries are imported - urllib, datetime, smtplib, and string (this one came with the example for smtplib so I kept it). I'm a noob, so they are on separate lines; I'm sure there is a way to put them all on one.
Next, the five procedures are entered, more on them later.
After that some variables are set -
Then the action, starting from the bottom -
get_speeds goes through the document using get_next_speed (which is really just get_next_target) and makes the list of forecasts.
That result goes to find_max, which returns the maximum value.
And finally that maximum value and the date are passed to send_mail where magic happens and my ISP is forced into SMTP'ing the info to me.
The automation is done in Windows Task Scheduler; I'm familiar with it from work, where I have scheduled various batch files in the past. Making it run Python was a bit of a struggle, but some searching and testing ended in a working system.
Python code is here. Hope I've submitted this correctly.
I'm sure that this is somewhat subpar compared to what other individuals and teams are going to submit and I'm sure it can be improved a whole lot which I'll probably work on from time to time going forward. It's been super fun developing this and actually seeing test emails in my box. Amazing really.
In closing I want to thank Dr. Evans, Dr. Thrun, Mr. Chapman, all of their support staff, and all of you Udacians for providing this immensely important opportunity and wonderful community to the world. It's a very interesting time we live in.
Participant making this submission - rrburton
I made a bit of an update. To better handle the date and time variables, I created a procedure set_vars that takes the current datetime and produces the appropriate start and end strings to insert into the URL, and I modified the URL formula to suit. I also modified the body text of the email (in send_mail) to read a bit better and use the more appropriate start string. I found that all of the imports could be placed on a single line, separated by commas. Lastly, I added a few comments, including the CC license at the top.
Frivolo.us (search engine)
It is a search engine that grants fundamental rights to algorithms, provides advanced search capabilities, and intelligently supports a person's supernatural predispositions.
Frivolous is a replacement for traditional search engines. Search engines today discriminate in favor of "better" algorithms and shun others with pejoratives like "exponential time", "impractical", and "vulnerable".
Search like you used to.
On the surface, frivolous looks hauntingly similar to other search engines. It features a list of links, the title of the page, snippets of text below the link, spell checking, and links are sorted based on the PageRank™ algorithm.
Note: Spell checking works by looking at the words in the index instead of a dictionary of words. So any misspellings in the index will make a wrong recommendation.
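A minimal sketch of index-based spelling suggestion (not Frivolous's actual code): the standard-library difflib can pick the closest indexed words, which also shows how a misspelling in the index would surface as a "suggestion".

```python
import difflib

def suggest(word, index):
    # Closest indexed words by similarity ratio; no dictionary involved,
    # so misspellings in the index propagate into the suggestions.
    return difflib.get_close_matches(word, index.keys(), n=3, cutoff=0.6)
```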
Relive the old days.
Ever wonder how cool it would be to search the web back in 1998, when search was in its infancy? Well, now you can. Using our platform for recruiting unused and abandoned algorithms, we have successfully implemented 1998 technology to work today. We introduce the altavista hashtag. Feel the thrill of exploiting vulnerable algorithms yesterday, today!
Note: This counts the number of times the words in the query appear in a webpage as described in the AltaVista lecture.
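A minimal sketch of that scoring (the function name here is made up):

```python
def altavista_score(content, query):
    # Number of times the query's words occur in the page text
    words = content.lower().split()
    return sum(words.count(term) for term in query.lower().split())
```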
Our search engine is clever enough to provide answers to general queries.
Note: I use DuckDuckGo's API to answer general queries.
Most search engines today can understand mathematics. To equalize the playing field, our research team has developed the nerd hashtag for all your symbolic manipulation needs.
Note: I used the eval function so that SymPy could deal with the equations in the query.
Search evolved. Literally.
Our search algorithm can transform a random text string: it evolves by randomly changing characters in the string and selecting the fittest of the children using the edit-distance algorithm. This is triggered by the weasel hashtag.
Note: I got this idea from Richard Dawkins' "Growing Up in the Universe" series.
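A rough sketch of the weasel idea, assuming substitution-only mutation and an elitist selection step (all names and parameters here are made up, not Frivolous's actual code):

```python
import random
import string

def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def evolve(target, children=50, rate=0.1, max_generations=10000, seed=0):
    # Start from a random string; each generation, keep the child
    # closest to the target (including the parent, so fitness never worsens).
    rng = random.Random(seed)
    alphabet = string.ascii_lowercase + ' '
    current = ''.join(rng.choice(alphabet) for _ in target)
    for _ in range(max_generations):
        if current == target:
            break
        mutants = [''.join(rng.choice(alphabet) if rng.random() < rate else c
                           for c in current)
                   for _ in range(children)] + [current]
        current = min(mutants, key=lambda m: edit_distance(m, target))
    return current
```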
Palindrome search. A new kind of search.
Have you ever wanted to search for the page with the most palindromes? Palindrome search is for all palindrome lovers. Heck, it even checks your query for any palindromes. Try that with your search engine.
TED Talks built-in. Amazing.
Are you bored? Do you want to search the latest and greatest ideas that could change our world for the better? Then TED is the place to go. By including TED in our search engine, you now have the chance to manipulate the results to your liking. You're welcome.
Note: The program visits http://feeds.feedburner.com/tedtalks_video and extracts all the links, descriptions, and titles using BeautifulSoup.
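The program itself uses BeautifulSoup; as a stdlib sketch of the same extraction, xml.etree can pull the fields out of the feed's RSS shape (the sample document below is made up for illustration):

```python
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <item>
      <title>A sample TED talk</title>
      <link>http://example.com/talk1</link>
      <description>What the talk is about.</description>
    </item>
  </channel>
</rss>"""

def parse_feed(xml_text):
    # Collect title, link, and description from every <item> element
    root = ET.fromstring(xml_text)
    return [{'title': item.findtext('title'),
             'link': item.findtext('link'),
             'description': item.findtext('description')}
            for item in root.iter('item')]
```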
But that is just the beginning. What if we want the altavista hashtag with the ted hashtag? Easy.
Bam! The TED hashtag is pretty special because it temporarily changes the index used by the other hashtags.
Instant crowdsourced links.
Wikipedia has a lot of external links which act as references for its articles. The thing is, they usually link to relevant sites, at least for general searches. Frivolous can now fetch these links to augment search results.
Note: Printing the results can take time because it still has to download the webpage and retrieve the title of the page. It uses Wikipedia's API.
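A sketch of pulling those links out of the API response; the sample below is a trimmed, made-up imitation of the JSON shape returned by a prop=extlinks query, not a real response:

```python
import json

# Roughly the shape returned by
# https://en.wikipedia.org/w/api.php?action=query&prop=extlinks&titles=...&format=json
SAMPLE = json.dumps({
    "query": {"pages": {"12345": {
        "title": "Example article",
        "extlinks": [{"*": "http://example.com/a"},
                     {"*": "http://example.com/b"}]}}}})

def external_links(api_json):
    # Flatten every page's external links into one list
    data = json.loads(api_json)
    links = []
    for page in data['query']['pages'].values():
        for link in page.get('extlinks', []):
            links.append(link['*'])
    return links
```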
We understand your supernatural predispositions.
Millions of people are born unlucky. They wake up in the morning, spill orange juice on their suit, trip on a curb, and they miss the bus to school. But once they try searching on search engines today, they are automatically given the best results. Why are search engines trying to change one's destiny? If you are destined to be unlucky then it is your right to stay unlucky! How dare they fiddle with your life. Starting today, we are proudly releasing the unlucky hashtag. You're lucky you're unlucky.
The unlucky hashtag, like the ted hashtag, is special, which means it can be combined with other hashtags. Here's a combination of the unlucky and altavista hashtags:
or unlucky, ted, and palindrome hashtags:
Make your search engine solve hard things.
Have you always wanted to impress your crossword puzzle buddies? Do you want to know what it feels like to be a crossword puzzle rock star? Well, you're in luck, because we have just implemented the crossword hashtag!
Note: Inspired by Wolfram|Alpha's crossword puzzle solver. Like the spelling suggestions, it depends heavily on the indexed words.
We partner with other search companies.
We're pleased to announce that as of April 15, 2012, we've formed our first search partnership. Search is a tough problem to solve and we think that the best way to tackle search giants is for search start-ups to collaborate. Now introducing the all-new and improved searchwithpeter.info hashtag which combines the website's best search results with ours. There has never been a better time to search the web.
Note: The program doesn't actually access searchwithpeter.info. It goes directly to udacity-forums.com.
The source code can be found at GitHub.
jtalon - talon.jag at gmail dot com
All images, text (including the spaces between them), and code are licensed under a Creative Commons CC BY-NC-SA license.
I’ve been teaching myself Python for a few months, but Udacity’s CS101 course was my first formal introduction to computer science concepts. Before this course, I was okay at hacking things together on my own, but Udacity has helped me clean up my code, think beyond Python to the fundamentals of computing, and understand how to break big problems into little parts. At the end of class, I was a little disappointed that we never implemented our web crawler. For my project, I wanted to use as much of our original search code as possible to build a web application that searches the Udacity site, forums, and course materials. The result is DaveDaveFind.
As Peter once pointed out, it’s important for any new search engine to have a good name. Unfortunately, “searchwithpeter.info” was already taken by a much more useful site, so I decided to call my site DaveDaveFind. DaveDaveFind searches the full-text of the Udacity website, CS101 forums, course documents, and lecture transcripts. It supports multi-word lookup, but sometimes works better for single word searches.
If DaveDaveFind finds your search query inside a video transcript, it will try to link inside the video to the moment the query occurs. If your search query is a common Python-related term, it will try to look up information in the Python documentation. Try searching for the name of a built-in function or standard library module, like
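The real lookup is in the repository; as a crude, hypothetical sketch of the built-in-function case (the helper name is made up, and it misses built-ins that live outside the functions page):

```python
import builtins

def python_doc_url(term):
    # Built-in functions are all anchored on one page of the Python docs;
    # exception types and other builtins would need different pages.
    if term in dir(builtins):
        return 'https://docs.python.org/3/library/functions.html#' + term
    return None
```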
The search box also accepts commands inspired by bang syntax. Try typing
I learned to use the Bottle web framework and Google App Engine in order to build this project. I also used the DuckDuckGo API, BeautifulSoup, and the Robot Exclusion Rules Parser. The web app uses styles from Twitter Bootstrap. I learned to use all these free tools by reading their documentation (plus a lot of trial and error). Update: It looks like CS253 is going to teach Google App Engine, if you're not already enrolled.
I also kept a blog with lots of notes on this project. I hope other Udacity students will use it as a resource, realize that the code we wrote in class isn't too far from a working application, and avoid a lot of the dumb mistakes I made.
All the code is available in this GitHub repository. My Udacity ID is email@example.com, and my forum ID is ecmendenhall.
What I did
This is what I did: ZhuFangZhi
A search engine for housing rental!
(Currently, it supports most cities in China.)
Copy any row of the following (or choose an arbitrary place in China) and paste it in the text box at ZhuFangZhi and press enter:
Framework of ZhuFangZhi
The crawler, called zfz-bot, crawls several big Chinese sites that provide housing-rental information and stores each URL and its corresponding details (rent, address, etc.) in a database.
When a user visits ZhuFangZhi's website and queries for a place, the nginx server passes the query to a Tornado server, which retrieves the relevant rental listings from the database and generates the results page.
What does ZhuFangZhi mean in Chinese?
It's a homophone of a Chinese phrase meaning "renting a house".
I grant a Creative Commons CC BY-NC-SA license to both this description and the source code.
My Udacity ID
UPDATES: I had some time, so I went on to make some improvements.
Project Name: HackingPot
Objective: I really enjoy doing some DIY/hacking projects. The thing is, I don't have much time to do them. But, once in a while, I have a weekend off and I really feel like building something, but then I face a problem: what should I build??? Normally I would look at my parts bin and waste hours searching the web for cool projects. With HackingPot, I can do this easily!
All I need to do is enter some 'ingredients' (or components/parts) and HackingPot searches selected DIY/tutorial websites (in this version only Make: Projects, and specifically electronics-related ones, which is why the code specifies starting points/targets), ranking each project with its own algorithm that determines how close my 'ingredient list' is to the project's list of needed materials. There are currently 317 projects listed.
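The actual ranking algorithm is in the repository; a minimal sketch of the "how close is my parts bin to the project's materials" idea (the names and scoring below are made up for illustration):

```python
def match_score(ingredients, materials):
    # Fraction of a project's needed materials that are in my parts bin
    have = set(i.lower() for i in ingredients)
    need = set(m.lower() for m in materials)
    if not need:
        return 0.0
    return len(have & need) / float(len(need))

def rank_projects(ingredients, projects):
    # projects maps a project name to its materials list; best match first
    return sorted(projects,
                  key=lambda name: match_score(ingredients, projects[name]),
                  reverse=True)
```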
Idea origin: I've been wanting to build something like this for over 6 months, but I have a business background and my computer skills were very limited until I did CS101 (they still are, but I have seen a great improvement!).
Is it legal?: This code uses as source Make: Projects, whose content is Licensed under Creative Commons BY-NC-SA.
Next steps: During CS253 I plan on turning this into a real web application, using a real database (and not pickle, like I did in this version). I have a list of possible improvements that I couldn't get to due to a lack of available time:
Code and Demo: My code is available at GitHub under Creative Commons BY-NC-SA License. There is also a DEMO. If you use it, let me know what you think! :] Any code improvements would be appreciated also.
Finishing thoughts: I would like to thank all of the Udacity Staff for giving the world the opportunity to learn such a great course for free, with such an extremely talented group of professionals. This is really game changing. Long live Udacity!
Submission by: Gian Carlo / @gcmartinelli / gcmartinelli AT gmail
Project Name: Adjective Crawler for Books
Repository Link: https://github.com/astenolit/adjective_crawler
I am a Spanish teacher of Spanish-French Language and Literature in
But after running it, I realised that the statistic output data was a
Why it can be useful.
How it works.
I chose the classic book "The Ingenious Gentleman Don Quixote of La Mancha".
I ran the code with different strings to search and also modified
adjective_crawler ('don_quixote_part_1.txt','Dulcinea del Toboso', 2, 4)
(notice that the code allows searching for a string composed of multiple words)
The code displays:
According to the above data, we can infer some interesting conclusions.
And we can get all this data even without reading the whole book!
Thank you Udacity for all the things I've learnt.
Enrique Contreras (astenolit)
From the questions that are trickling in, with all due respect, it seems some of us are not seeing the bigger picture. I think Udacity is not trying to find that one killer app, in the winner-takes-all mentality that has been deeply rooted in all our systems (even academia). The point is: look, we want to see if this course alone can PRODUCE students who can develop nice, usable products. As long as you create something, you are already a winner! It will make more sense if you apply mostly the things we've learnt in class rather than things from outside it (including the language used). IMHO, I think this contest is more about proving that the course was useful than proving that the winner-to-be is truly a guru!
The moral lesson here is, when you learn something, go out there and apply it in the real world! So take it easy friends :-)
When we were making the Urank procedure, the drawings by David were illustrative, but I wished I could manipulate and visualize pages and links in a more systematic way. Since we are dealing with graphs, I thought to myself, "this is a job for Graphviz". My submission is a simple set of procedures that write a file in the DOT language that can be read by Graphviz to produce some pretty images.
The code is hosted here.
The idea is REALLY simple: a graph is just a set of lines describing nodes and edges. Each node and edge can have some options that modify the way it is displayed. Here is an example:
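The example itself seems to have been lost in formatting; a minimal DOT snippet of the kind described (a digraph with two nodes and one edge) would be:

```dot
digraph G {
    node1 -> node2;
}
```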
This example makes a simple directed graph (digraph) with two nodes and one edge from node1 to node2.
All the procedures are straightforward. Most of them just concatenate strings so the graph_dot procedure can return a string with all the information needed to specify the graph in the DOT language. The one procedure of a different kind is lookup_graph which makes a lucky_search of a keyword and returns a graph in which the result from the lookup is the center and all ingoing and outgoing links are displayed.
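The real procedures are in the hosted code; a hedged sketch of the string-concatenation approach, using made-up names and the class's graph structure (a dictionary mapping each page to the pages it links to):

```python
def graph_dot(graph, ranks):
    # Concatenate DOT lines for every node and edge; the node width is a
    # made-up scaling of the page's rank, and URL makes the node clickable.
    lines = ['digraph web {']
    for page, links in graph.items():
        size = 0.5 + 5 * ranks.get(page, 0)
        lines.append('    "%s" [width=%.2f, URL="%s"];' % (page, size, page))
        for target in links:
            lines.append('    "%s" -> "%s";' % (page, target))
    lines.append('}')
    return '\n'.join(lines)
```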
There are two more procedures (write_dot_file and write_dot_lookup) that actually write the .dot file usable by Graphviz.
The "web" we used in the class is crawled by default and there are three variables defined with the results: index, ranks and graph. To produce a file named, for example, "web.dot" all you need to do is
Now you can use Graphviz (in a shell outside the Python interpreter) to make the actual image, e.g. dot -Tsvg web.dot -o web.svg
This will make a file web.svg
Notice how each node and edge is scaled according to their rank. If you open the actual svg image with your browser you will notice each node is clickable and directs you to the corresponding page.
There is also a way to make a lucky_search and produce a .dot file with ingoing and outgoing links from the result. For example
... and using Graphviz to make an svg image gives you (complete image here)
There are A LOT of things that can be improved, especially when crawling the "real world". I tried crawling the web with max_depth = 1 from http://www.xkcd.com and the images produced are terrible, partly because some pages have many links, which clutters the images. There are also some glitches in the dot file produced that I haven't tracked down. At the end of my README file you can look at some ideas I have that could improve this submission. However, I won't be able to work on them for at least the next 3-4 weeks (I'm getting married in less than 14 days!! so I won't have time to even look at a computer screen until I'm back home). I would like to improve this code in the hope that it can one day produce illustrative images for future Udacians ;)
My big thanks to David, Peter, and all of you! This has truly been an extraordinary experience!
- Andrés García Saravia Ortíz de Montellano
Before I start, I would just like to give a massive thank you to the whole Udacity team! I've taken both courses (CS101 and CS373) and I feel that I have learnt a lot! It has been a lot of fun, and I think you guys are on your way to changing education forever!
--- Video & source code ---
--- Background information ---
I'm a business administration student in Barcelona, Spain. I've been really passionate about technology since I was a little kid, and finally I get the opportunity to study computer science! I'm really excited!
What I wanted to do is combine both of my passions: business strategy and technology / computer science. I felt this was a great opportunity, so I've programmed a crawler that crawls collective-buying sites (more info in the video and at http://en.wikipedia.org/wiki/Group_buying).
--- Why is this interesting? ---
--- How to use ---
If you just want to try it out, it's really simple:
That's all there is to it! If you want to fiddle around with the code, you are most welcome ;)
--- Udacity ID ---