## Contest Announcement and Submissions

We are pleased to announce a contest for students in the pioneering CS101 class!

The goal of this contest is to build on the ideas in the CS101 class in a creative way. The contest is open to all students enrolled in Udacity’s CS101 Course. You can do anything you want, so long as it is legal and tasteful (please see the full contest rules for details).

To enter the contest, post an answer here whose first line is Submission, followed by:

1. Text or images describing what you did.
2. A link to all your code. (This could be a link to a pastebin file or a GitHub repository.)
3. A list of the Udacity or Forum IDs of all team members.

All the material you submit for the contest must be released under a Creative Commons CC BY-NC-SA license. Contest submissions must be posted by 23:59 UTC on Friday 20 April 2012 to be eligible for the contest.

At least one (and possibly more than one) winner will be selected. Although you may vote for submissions on this forum, the votes are not used to determine the winner. The winner will be selected by a panel of judges (including myself and Peter), based on the creativity, originality, relevance, and execution of the submissions. The winner(s) will be awarded an expenses-paid trip to Palo Alto. You may submit an entry as an individual, or work in a team. If a team wins, only one team member will be awarded the prize.

Note added 9 April: Gabriel Weinberg, creator of DuckDuckGo, will be one of the judges for the contest!

Note added 9 April: DuckDuckGo has released an open API, DuckDuckHack, for creating DuckDuckGo plugins. There is support for writing "Fathead" and "Longtail" plugins using Python. It is, of course, not necessary to use this for your contest submission, but I hope some teams will develop interesting ideas using these!

Note added 21 April: The submission deadline has now passed. If you missed the deadline, you can still share your work here, but only submissions that were submitted by the deadline (and are marked as "accepted") will be considered for the contest. There are over 150 submissions, and many are quite amazing! It is wonderful to see the creativity and ingenuity of our students, and will be very challenging to pick the winners.

Note added 11 May: The winners are now announced! Congratulations to everyone who participated in the contest.

This post is just an informal description of the contest. Participation in the Contest constitutes acceptance of the Official Contest Rules.

UdacityDave ♦♦

I'm very excited to see what our talented students create!

(30 Mar '12, 18:00)

For me it would be a CalTrain ticket if I win since I'm up in Berkeley LOL.

Can't wait for the details of the contest.

(30 Mar '12, 18:25)

@malckwan I think all the details are now there in the official contest rules.

(30 Mar '12, 19:08)

I'm going to crawl the Internet, print it out and publish a book of 500,000 volumes! Not sure if it's going to be finished by April 20, though... :(

(30 Mar '12, 19:09)

Cool. Will team entries be judged by the same standards as individual entries? (e.g. a two person team isn't expected to produce something twice as good as a solo entrant)

(30 Mar '12, 19:16)

According to the rules, there is a maximum of one submission per entrant (or per group). Does this mean I can join several groups or only one? And: can I submit something as an individual and still be part of a group as well? This is not very clear to me.

(30 Mar '12, 19:16)

graemeblake: We're happy to have team entries, and will judge all entries based on their own merits, regardless of whether they were done by teams or individuals.

(30 Mar '12, 19:27)

Tom: you can join more than one group and be part of multiple submissions.

(30 Mar '12, 19:34)

I live in Palo Alto, so I would win what? Lunch? :-)

(30 Mar '12, 19:53)

If you live in Palo Alto why are you taking this class ;)

(30 Mar '12, 21:24)

Thanks for the long awaited announcement!

(30 Mar '12, 21:49)

In the full contest rules ("How to Enter: a") it says to "Create a program that builds on the web search engine from CS101". Does this mean our project has to have some kind of search functionality or incorporate that code?

(30 Mar '12, 23:07)

1. Does the airfare include the return trip?
2. I think few people want to go to Palo Alto alone, so I suggest the prize also cover the fare for a friend of the winner.
(30 Mar '12, 23:14)

When you say "build on the ideas of," do you mean that the submissions must be related to the search engine code? Like an extension?

(30 Mar '12, 23:15)

Can we use the code of the search engine that we built during the course? The code is Udacity's idea and I was wondering if we could use it. Thanks!

(31 Mar '12, 04:48)

@JuandelaJohn: Yes, you can; all the code is Creative Commons.

(31 Mar '12, 05:35)

@jesyspa What about code shared in the forums? Would it also be Creative Commons? I am not sure if we have an agreement on that.

(31 Mar '12, 05:45)

@kilaws
Let's form a team, then; this way, if we win, I will get the trip.

(31 Mar '12, 16:11)

@UdacityDave: This page is becoming extremely long, so by the time everyone has posted their submission it will be impossible (at least for most of the contestants) to check out their competition :D.
Why not let everyone post their submission as a new question and tag it with a special tag, say "submission-final"?

(01 Apr '12, 08:50)

@hittaruki: I would encourage everyone to mark their submissions clearly by putting Submission in the top line, but we do want to keep all the submissions as answers in this forum.

(01 Apr '12, 11:20)


Should I post my GitHub repo before April 20th? I am done with my initial commit, but the thing holding me back is the fear of someone stealing my idea. It may sound childish, but that's how I feel. So what should I do?

(02 Apr '12, 12:47)

As long as you post a link to the repo on this post before the due date, your project will be considered.

(02 Apr '12, 12:52)

@kilaws: There is another Palo Alto, in Texas. You could go there in case you win the contest :-)

(03 Apr '12, 05:56)

If I win a round trip to Palo Alto, does it matter which leg of the trip I take first? And can my "home" be in, say, Stockholm? :-)

(04 Apr '12, 20:13)

@shipik, your clever solution reminds me of the Burma Shave offer: Free Free / A Trip / to Mars / for 900 / Empty Jars. When Arliss "Frenchie" French collected 900 jars, the company bought him off with a trip to Moers, Germany -- on the condition that he would wear a space suit for the trip, to which he agreed.

(05 Apr '12, 00:51)

Oh and are third-party APIs allowed?

(08 Apr '12, 10:07)

@UdacityDave Could you make my submission an "accepted answer" as the other submissions? Just to give me peace of mind.

(10 Apr '12, 12:10)

Submissions waiting for approval:
By @Laura : submission
By @MarkIrvine : submission
By @LeslieK : submission
By @jtalon : submission

Hoping that the page number won't change before you can see this comment.
Thanks to @Laura for noticing.

(12 Apr '12, 23:05)

@UdacityDave : Does my entry have to be restricted to some extension of the search engine? I have other creative ideas, can I submit those as legal entries into this competition?

(13 Apr '12, 04:30)

April 21st here so I think it's closed.

(21 Apr '12, 00:32)


Submission

Hello everyone!

I am a Portuguese guy who loves cooking. I love to cook different dishes to please my family and friends.
However, since I never took a cooking degree, I sometimes turn to recipe sites to gather ideas for meals. My favorite site is "http://gastronomias.com/". It has a lot of recipes, sorted by categories such as meat, fish, soups, sauces, etc. To check, for instance, the meat dishes, you can go to "http://www.gastronomias.com/receitas/carnes.htm" and click on the name of the recipe you want to cook.

The site structure is very handy if one already has an idea of what one wants to cook. However, for an inexperienced cook like me, it would certainly be more useful to know which dishes can be cooked from a given set of ingredients. Moreover, the dish names do not contain explicit information about the ingredients they use. If you have, for instance, a certain fish ("peixe") in your house, you may not know from the recipe names alone that you can make a dish called "Caldeirada à Algarvia". Besides that, a search by ingredients can be really useful when a person is diabetic or allergic to some food (such as chocolate)!

Since this course is about search engines, I decided to build myself a recipe search engine. My World Wide Web is the "gastronomias" website, and my web pages are the recipe descriptions. My recipe search engine, entitled "reciPY" (recipe + Python [.py]), allows searching recipes by ingredients and by dish names. Besides that, it lets me exclude the ingredients I don't want from my search results. If a person is allergic to chocolate, she simply needs to specify that in the search query!

reciPY specs:

• runs from the shell
• crawls all recipes
• crawls all ingredients from all recipes
• normalizes search queries for better accuracy (e.g. "Puré" --> "pure")
• prepositions inside queries are ignored (such as "em" (in), "com" (with)...)
• results are paginated, showing 20 results per page (yes, pagination in the shell :p)
• queries include ingredients included/excluded (*1)
• search results are sorted by relevance (*2)
• smart index storage (*3)
• recipes can be opened from the shell (*4)

(*1)
The queries are such as:

-- Query:

< Recipe Name / Ingredients > : Bacalhau com Natas, Cebola, Azeite

< Exclude > : Cenoura

Where:

* "Bacalhau com Natas" - cod with cream

* "Cebola" - onion

* "Azeite" - olive oil

* "Cenoura" - carrot

The user does not need to insert commas between the ingredient names. If there is more than one ingredient, the search engine will look up all the recipes that include each of them. For this query in particular, the results will include:

• all the recipes with cod.

• all the recipes with cream.

• all the recipes with onion.

• all the recipes with olive oil.

• EXCEPT the recipes that contain carrot.
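The include/exclude behaviour described above can be sketched in a few lines. This is a minimal illustration (not the actual reciPY code), assuming a hypothetical inverted index mapping each ingredient to the set of recipes that contain it:

```python
# Hypothetical inverted index: ingredient -> set of recipe names.
def search(index, include, exclude):
    results = set()
    for ingredient in include:
        # union: keep any recipe containing at least one requested ingredient
        results |= index.get(ingredient, set())
    for ingredient in exclude:
        # difference: drop every recipe containing an excluded ingredient
        results -= index.get(ingredient, set())
    return results

index = {
    "bacalhau": {"Bacalhau com Natas", "Pataniscas"},
    "cebola": {"Caldeirada", "Bacalhau com Natas"},
    "cenoura": {"Caldeirada"},
}
print(search(index, ["bacalhau", "cebola"], ["cenoura"]))
```

For the query above, "Caldeirada" is dropped because it contains the excluded carrot, even though it matches an included ingredient.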

(*2)
The recipe ranking in my search engine uses only the query information. This means that the first recipes shown are the ones containing all the ingredients from the query. For the example in (*1), the priority is:

1: recipes containing all four ingredients

2: recipes containing three ingredients

3: recipes containing two ingredients

4: recipes containing one ingredient

(1 has the highest priority)
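This ranking rule amounts to sorting candidates by how many query ingredients each one contains. A hedged sketch, again over a hypothetical ingredient-to-recipes index (not the submitted code):

```python
def rank_by_matches(index, query, candidates):
    def match_count(recipe):
        # how many of the query ingredients this recipe contains
        return sum(1 for ing in query if recipe in index.get(ing, set()))
    # recipes matching the most ingredients come first
    return sorted(candidates, key=match_count, reverse=True)

index = {"bacalhau": {"r1", "r2"}, "natas": {"r1"}, "cebola": {"r1", "r3"}}
print(rank_by_matches(index, ["bacalhau", "natas", "cebola"], ["r2", "r1", "r3"]))
```

Here "r1" matches all three ingredients, so it is listed first; Python's sort is stable, so ties keep their original order.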

(*3)
The "gastronomias" website contains lots of recipes, which makes the crawler open a huge number of links. My computer took about 20 minutes to gather all the indexing information. Therefore, so that you can use it right away, the index can be loaded locally from the second run onwards. This means that in the first run the crawler will get all the information and store it on your hard drive. In the second run, it will load the index from disk, taking just a few seconds. I have put my index in my Dropbox (it is also in the git repository). You can access it here: http://dl.dropbox.com/u/7923799/index.pkl. Place it in the same directory as reciPY and start cooking your meals :).
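This crawl-once-then-load pattern is straightforward with the standard pickle module. A minimal sketch (the function name `load_or_build_index` is hypothetical; the filename matches the `index.pkl` mentioned above):

```python
import os
import pickle

INDEX_FILE = "index.pkl"

def load_or_build_index(build_index):
    # Second run onwards: load the pickled index from disk in seconds.
    if os.path.exists(INDEX_FILE):
        with open(INDEX_FILE, "rb") as f:
            return pickle.load(f)
    # First run: crawl (slow), then cache the result for next time.
    index = build_index()
    with open(INDEX_FILE, "wb") as f:
        pickle.dump(index, f)
    return index
```

`build_index` stands in for the (expensive) crawling step; once `index.pkl` exists, it is never called again.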

(*4)
The search results are presented in the shell in this fashion:

-- Results:

(page 1 of 20)

[2] molho especial para bacalhau cozido

[3] pasteis de bacalhau a minha moda

[4] pasteis de bacalhau a portuguesa

[5] pataniscas de bacalhau

...

The user can simply specify the id (inside brackets) of the recipe she wants to check, and press enter.
The recipe will be opened in her default browser!
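As a portable alternative to a hard-coded shell command for opening URLs, the standard library's webbrowser module uses the platform's default browser. A small sketch (the `open_recipe` function and the results dict are hypothetical, not the submitted code):

```python
import webbrowser

def open_recipe(results, choice, opener=webbrowser.open):
    # results: hypothetical dict mapping the bracketed id to the recipe URL
    url = results[choice]
    opener(url)  # defaults to the platform's default browser
    return url
```

The `opener` parameter is injected only to make the function easy to test; calling `open_recipe({2: url}, 2)` opens the page directly.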

Well... That's it! If you want to check out the awesome user-friendly interface I made, please see this screenshot: http://dl.dropbox.com/u/7923799/reciPY.png.

The code is here: https://github.com/Amarals/reciPY
(If you want to run reciPY in an environment other than MacOS, please open reciPY.py and change the shell command used to open URLs via the variable "browser_command". Do not forget to place index.pkl in the same directory as reciPY.py!)

NOTE: I use an external function to convert diacritics. The credits are in the "external.py".

NOTE_v2: I forgot to mention that, in order to make the interface more user-friendly, I used some Unix commands, such as "clear". If you try to use reciPY on Windows, please change those commands :)

The entire code was made by me, Amarals (pedro.m.t.amaral@gmail.com).

Have yourself a good meal :)

Amarals

That's a great idea! Sadly, my Spanish isn't fluent enough to use your application, but the ability to search recipes by given ingredients is very useful. (I wonder why the cooking sites I know haven't already implemented some sort of search like this themselves.)

(12 Apr '12, 17:24)

Thank you! I decided to explore a Portuguese site, not only because it is my native language, but also because I would like to use it in the future. Nevertheless, if someone is interested in an application such as this, the only thing he needs to change is the way the web pages are processed. All the lookup machinery (pagination, search, etc.) can be used as it is implemented ;)

(13 Apr '12, 11:06)

Muito bom ("very good"), Amarals! :)
I just published my project a few hours ago and now I'm looking at other people's submissions. It turns out our ideas were very similar. You focused on food, I focused on DIY projects :) Check it out: http://gcmartinelli.webfactional.com
If you'd like to see my submission, it's somewhere on page 5 of this thread, I believe.

(17 Apr '12, 13:02)

Great idea!

PS: Being allergic to chocolate must be really disturbing.

(20 Apr '12, 18:04)

Indeed! But I know some people who, unfortunately, are... :(

Thank you!

(20 Apr '12, 18:05)

Submission

I've always been intrigued by the power of the internet, particularly when programs can talk with each other. I've seen many companies advertising APIs but I've never quite been able to figure out how they work, or how to utilize them. This class helped me understand data structures like lists and dictionaries and things like web requests just enough that I decided to try to build something using an API.

## News Story

Aside from the multimedia and real-time data, modern online newspapers look about the same as the paper ones did 100 years ago. There is way too much information out there for any one person to read, and personalization algorithms sometimes make it hard to be exposed to new and interesting pieces of news. I decided to experiment with a different way to find articles to read.

I created a program that automatically creates what I will call a "news story". It starts by requesting the most recent 10 articles published to the New York Times website. The program randomly selects one of those ten, and then selects one of the words in the title to search the New York Times archives (back to 1984 I believe) for another article (randomly selecting one of the first few articles returned for that keyword), and then selecting a word from this second article title, and so on. The result is ten usually unrelated headlines that sound like they go together because they use some of the same words. It is really interesting to read the headlines and make connections that otherwise wouldn't be there (usually these stories wouldn't end up in the same section of a newspaper). In reality, everything is connected. It is also fun that the "news story" is different every time because of the frequency of the updates from the New York Times and the random element in each article selection (which then affects the search keywords).
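The random-walk idea described above can be sketched without the real API keys. In this hedged illustration, `latest_headlines` and `search_archive` are stand-ins for the two NYT APIs (Times Newswire and Article Search); the actual program's structure may differ:

```python
import random

def build_news_story(latest_headlines, search_archive, length=10):
    # latest_headlines: callable returning recent headlines (stand-in for
    # the Times Newswire API); search_archive: callable mapping a keyword
    # to candidate headlines (stand-in for the Article Search API).
    story = [random.choice(latest_headlines())]
    while len(story) < length:
        # pick a word from the previous headline as the next search keyword
        keyword = random.choice(story[-1].split())
        candidates = search_archive(keyword)
        if not candidates:
            break
        # choose randomly among the first few results, as described above
        story.append(random.choice(candidates[:5]))
    return story
```

Because each step makes two random choices (the keyword and the result), the same starting headlines yield a different "news story" on every run.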

## The Code

The python code can be accessed here:

https://github.com/djneeley/news-story/blob/master/news-story.py

However, I've removed my private API key issued by the New York Times so that code won't run in your interpreter unless you go to developer.nytimes.com and request keys for the Times NewsWire API and the Article Search API. Knowing that would be a pain for you to do, I was able to read some tutorials on Twitter's Bootstrap and Google App Engine and apply what little I've learned so far about computer science to create a web app that uses the procedures I wrote to generate a "news story" complete with links to the accompanying articles. That can be found here:

http://news-story.appspot.com/

(The "news stories" are best read out loud.)

## Team Members

Just me on this one, though will look forward to collaborating with others in the future.

## Kudos

Thanks, Dave and Peter and all at Udacity! This course was great and I learned a lot. I've recommended it to at least 3 people who are planning to take it this next hexamester. I look forward to developing my developer skills through future courses. Cheers!

Dan

Very creative use of your computer knowledge. Great work. Can't wait to see what you come up with next.

(16 Apr '12, 12:34)

It sounds very interesting.

(16 Apr '12, 23:07)

@djneeley: This is a very interesting idea, demonstrating a simple yet effective heuristic for news selection. But as of now (April 19th), your demo site http://news-story.appspot.com/ seems to be broken. It is probably important to fix this before the deadline so that your submission can be easily reviewed.

(19 Apr '12, 17:00)

Thanks, @ogerard, for the heads up. I'm limited on the number of requests from the NY Times Search API per second (and the code requests at least 9 per story), so if multiple people are accessing the site at the same time it sometimes throws an exception. Google App engine has some limitations as well. I've usually been able to get the site to work by refreshing once or twice and waiting for it to load. Hopefully I can figure out a more robust way to make this site work online after the CS253 class. Best!

(19 Apr '12, 17:10)

@djneeley: Thanks for the information; I was now able to see it working. I find it nice. It would perhaps be better to keep the stories in descending chronological order, because one of the views I had was odd in this respect: maybe restrict selection of the (n+1)th story to the period before the nth story (and perhaps less than a year or a few months before, if possible). It would enhance the sense of connectedness between them.

Suggestion about the request limit: maybe you can make the first page of your website a static page with several links:
1. a static HTML demo snapshot produced once and for all with your system (so that it is viewed without crossing the request threshold of your API keys); you can even comment it graphically
2. a link to the current dynamic demo of your code
3. the reference and contact links you currently put on the page

(19 Apr '12, 17:29)

Good idea, @ogerard. Eventually I'll have the server make requests to the API on a regularly scheduled basis and populate a dictionary or other datastore with all the news stories; the website would then just show one of those pre-made ones (and let you click through lots of them). I think the website would respond a lot faster, and it would be a little more "polite" to the NY Times. I just need to find the time to learn how to do all that and make the updates. I also like your idea of finding ways to connect the stories by more than just one keyword, though in some ways I like how they can jump around in time.

(21 Apr '12, 09:46)

To be more effective, don't think about the winning bit; see how you can apply whatever we've learnt to SOLVE A REAL-WORLD PROBLEM. Let winning come second, so that you won't be surprised if you don't win :-) But at the end of the day, you'll have your usable product... I hope we'll bring the GitHub servers down by 20th April with all the traffic :-D All the best!

ProfNandaa

Submission

Uspicious is a search engine for Python code. In a large project using many files, you might have a procedure you wish to use but not remember the exact order of its arguments. Uspicious indexes all defined procedures in all .py files in the seed directory and the whole underlying tree. When you search for a procedure name, you get the names of all of the files that contain procedures with that name. Furthermore, for each one you also get all of the text from 'def' to ':', so that you can see the order of the parameters; think of this as akin to the context provided when you search on any big search engine.

Comments are also indexed. You can search for a string and find all of the files that contain that word in a comment plus the comments themselves containing that word, as the context. However, it's often the case that comments have uncertain punctuation or spacing so that you might not be able to find what you want. Therefore, Uspicious also provides the ability to search for fragments of keys, providing the same results as above for all keys which contain the fragment as a substring, case-insensitive. This ability to look for fragments is also available for procedures.
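The indexing and fragment-search ideas above can be sketched as follows. This is a hedged illustration, not the actual Uspicious code: the names `index_defs` and `search_fragment` are hypothetical, and the simple regex only handles single-line definitions without parentheses in default arguments:

```python
import os
import re

# capture everything from 'def' up to (but not including) the ':'
DEF_RE = re.compile(r"(def\s+(\w+)\s*\([^)]*\))\s*:")

def index_defs(seed_dir):
    # map procedure name -> list of (file path, signature) pairs
    index = {}
    for root, _dirs, files in os.walk(seed_dir):
        for name in files:
            if name.endswith(".py"):
                path = os.path.join(root, name)
                with open(path) as f:
                    for m in DEF_RE.finditer(f.read()):
                        sig, proc = m.group(1), m.group(2)
                        index.setdefault(proc, []).append((path, sig))
    return index

def search_fragment(index, fragment):
    # case-insensitive substring match over indexed procedure names
    frag = fragment.lower()
    return {k: v for k, v in index.items() if frag in k.lower()}
```

Searching for the fragment "proc" would return every indexed procedure whose name contains that substring, along with its file and full signature as context.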

There is also a recursively defined procedure for printing the returned results in a more readable way than as a Python list of strings or a dictionary.

The README file in the GitHub repository contains some results from running the code on the directory containing its own source code. This demonstrates how to use the features.

Participants: larry96

Larry Wilson

hey you stole my idea!

Well a part of the idea :P

(12 Apr '12, 15:43)

This is cool.

(17 Apr '12, 00:13)

Thanks, rush, I appreciate that.

(17 Apr '12, 01:03)

Great work! You may also want to read about docstrings, and for each procedure output the docstring if possible.

(18 Apr '12, 12:45)

Submission

Project Name: Monitor Web

The Idea:-

Did you ever come across a website you were really interested in and want to check it for updates? Or maybe you want to be notified when the documentation of your favorite library or code repository is updated? If the answer is yes, Monitor-Web is your one-stop solution. Monitor-Web tracks any changes in your favorite content and alerts you with a proper log of the differences. So never waste time surfing the web to check whether there are any updates: simply add the website you wish to monitor and relax. Whenever you need to check, run the program and it will automatically sync any changes and provide you with a diff-like output. It works best for static websites, mainly online HTML ebooks, online documentation, course lists, wikis, and the like.
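The "log of differences" idea maps naturally onto the standard difflib module. A minimal sketch of the comparison step (the function name `change_log` is hypothetical; the actual Monitor-Web repo may do this differently):

```python
import difflib

def change_log(stored_text, current_text):
    # Return a unified diff between the stored snapshot and the freshly
    # fetched page, or None when nothing has changed.
    if stored_text == current_text:
        return None
    diff = difflib.unified_diff(
        stored_text.splitlines(), current_text.splitlines(),
        fromfile="stored", tofile="current", lineterm="")
    return "\n".join(diff)
```

Fetching the page (e.g. with urllib) and persisting the previous snapshot are left out here; the sketch only shows how a diff-like report can be produced.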

Repo:-

https://github.com/lionaneesh/Monitor-Web

Conclusion:-

The project uses each and every skill/aspect I learnt in this 7-week class, and I have tried to apply what we were taught in a creative way. The project is dedicated to the Udacity team (Prof. Evans, Dr. Thrun, Mr. Chapman [thanks for answering my stupid questions], and others). Greetings to fellow Udacians.

Participants:-

Aneesh Dogra (lionaneesh-at-gmail-dot-com) @Aneesh Dogra

Aneesh Dogra

Nice job @Aneesh Dogra, well done indeed!

(06 Apr '12, 05:04)

@CodyHacker Thanks! Regarding the problem email you sent me: it's actually a careless mistake I made in a commit that removed some unused code. I'll revert that and push it ASAP. Thanks again! :)

(06 Apr '12, 06:10)

Submission

Name:
Froogle - a Frugal search engine

Info:
For my submission I have extended the functionality of the provided search engine in a number of areas.

The main point of my submission was to enhance the search results
and provide a simple, optional web interface.
The webserver was added last, after the search results had been improved as best I could.

Code:
The single Python file can be found at http://pastebin.com/cSRytrb8.
A screenshot of the web interface can be found at http://picpaste.com/python_search_engine_results-Rr6hIAxP.png.

Team member:
Udacity ID: Craig Huggins
Forum ID: cehbab
Email: craighuggins@hotmail.com

What has been done:

• the crawler has been limited, and its link traversal improved.

• the crawler uses the cached web content if available, else fetches it with urllib.

• the gathering of relevant words from crawled page content and query keywords has been refined,
and no longer uses string.split()

• the urls within crawled page content have been cleaned by mangling and verifying ./.. paths.

• the single keyword has been extended to multiple keywords as the search criteria;
the default of the 'criteria' variable is set to 'Add Recipe'

• the words within the crawled content and the query keywords are no longer treated as case-sensitive,
reducing the word set available but improving search accuracy.

• the crawler adds weight and proximity information for words [and ranks as per the homework];
the search results were improved by adding a proximity rating and a word-weighting factor.

• the proximity rating is based on finding the shortest path covering the most keywords found,
in relation to the number of keywords matched, the total keywords requested, and the page's word count.
The theory here is that the shorter the path matching the maximum keywords found,
the more closely the results relate to the keywords, by their smallest proximity to each other.

• the word-weighting factor gives more prevalent words such as ['at', 'as', 'the', ..]
a lower weighting, so that the more obscure a word, the higher it is factored
into the results.

• an inbuilt webserver, that performs only a small subset of a full webserver.
accept sockets, up to 'max_hits' limit.
verify GET/POST request
parse http headers [and ignore them]
extract url parameters from query
perform search via web parameters, criteria and search modes from checkboxes
build resulting html with search form and results if they exist and respond to client
loop...
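The word-weighting factor described above can be sketched as an inverse-frequency weight. This is a stand-alone hedged illustration (the `word_weights` function and the pages structure are hypothetical, not taken from the pastebin):

```python
def word_weights(pages):
    # pages: hypothetical dict mapping url -> list of words on that page.
    # Common words ('at', 'as', 'the', ...) occur often and receive a low
    # weight; obscure words receive a high weight.
    counts = {}
    total = 0
    for words in pages.values():
        for w in words:
            counts[w] = counts.get(w, 0) + 1
            total += 1
    # weight = total word count / occurrences of this word
    return {w: float(total) / c for w, c in counts.items()}

pages = {
    "u1": ["the", "recipe", "the", "engine"],
    "u2": ["the", "proximity"],
}
weights = word_weights(pages)
```

With this data, "the" occurs 3 times out of 6 words and gets weight 2.0, while the rarer words get weight 6.0, so they count for more in the result scoring.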

To initialise the webcrawler:

• you may set the 'seed_url' variable
at the top of python_search_engine.py
via setting the 'seed_url' variable to 'http://some.url'

• it defaults to 'http://udacity.com/cs101x/urank/index.html' and is precached
• you may set the 'max_urls' variable
at the top of python_search_engine.py
via setting the 'max_urls' variable to some number of urls we are allowed to crawl

• it defaults to 10
Note: set 'max_urls' to <= 0 to crawl until all urls are exhausted

My submission can be run in two modes:

• the first mode is standalone search configured
at the top of python_search_engine.py

• via setting the 'criteria' variable to 'some values'
Note: set 'criteria' to None to disable standalone mode
• the second mode is via an inbuilt webserver which can be configured
at the top of python_search_engine.py

• via setting the 'criteria' variable to None.
Note: 'criteria' defaults to 'Add Recipe'
• and setting the 'port' variable to within 1 <-> 65535 for listening port
ie; set 'port' to 8080 as it defaults to 0,
Note: set 'port' to <= 0 to disable webserver mode
• and setting the 'host' variable to some resolvable hostname or IP address;
it defaults to 'localhost'.
Note: a symbolic name meaning the local host
• and setting the 'max_hits' variable to > 0 for number of hits, else <= 0 is infinite
it defaults to 20

In the webserver mode you can use your browser for searching

Once the crawler has begun it traverses only up to the limit specified by 'max_urls',
starting from the url specified within the 'seed_url' variable.

• if it does not finish,
try a more limited 'max_urls' variable
or a different 'seed_url' variable

If running in standalone mode, the criteria will be applied against the crawled urls,
and results will be displayed in the python console.

The results will be listed in sections giving the full range of search results:

• each section will have the entries,
firstly the section title,
second the lucky_search result url,
ending with the ordered_search result urls.

• the sections are listed as follows:

• no sort
• by weight
• by proximity
• by weight & proximity
• by rank
• by rank & weight
• by rank & proximity
• by rank & weight & proximity

If running in webserver mode, the browser can be directed to the url given in the python console.
- ie direct your browser to http://localhost:8080

From here you can choose which search mode to use and modify your criteria many times,
up to the limit imposed by the 'max_hits' variable, without having to restart the
Python interpreter and losing the crawled cache of urls.

• if your search page is not responding and your 'max_hits' has been exceeded,
try a larger 'max_hits' variable
or set 'max_hits' to <= 0 to allow unlimited search requests.

Craig Huggins-1

Froogle is a Google trademark, although it is now known as "Google Product Search". You might want to change the name.

(10 Apr '12, 16:40)

The name Froogle was made up; I was not aware of it being a trademarked name. I do not claim any ownership of this submission, and the submission can be renamed to anything Udacity wishes, as the title is arbitrary to begin with. All rights are granted to Udacity under their Creative Commons license as required. This submission has not been made public outside of the pastebin URL.

If the name is in violation of the contest rules and would result in disqualification, I am wondering whether I am allowed to rename the project now that the submission deadline has passed, as doing so would involve altering the URL that the screenshot points to, since it includes the title Froogle.
The linked Python code URL, and the Python code itself, would also have to be altered, as the source contains the term Froogle in the part that generates the HTML results.

I'm specifically asking whether using the term Froogle has disqualified me, and whether I am able to alter this title in the screenshot and source code after the submission date has closed, or would this also invalidate my submission?

If possible, could someone from Udacity comment on this concern?

(26 Apr '12, 09:52)

I'm curious about whether we are expected to stick with concepts and syntax we have learned in CS101 for this project. From the forums, it is quite evident that there are several folks here who are already quite well versed in Python and some of its advanced syntax/functionality. Is there any sort of leveling of the playing field as far as judging the entries goes, or are folks who are better versed in programming and Python going to be at a big advantage?

Sudeep Mandal-1

Submissions can include Python features that go beyond what we have covered in cs101, but our primary judging criteria are creativity and relevance, so we hope there will be some submissions from students who entered the class with no background that will contend for the top prizes.

(31 Mar '12, 02:07)

Creativity and relevance are my domain, but I still can't code a single block without bugs. I find the contest is in bad taste and wonder if there is another agenda. I read a few articles online about Udacity. Don't get me wrong, I love Udacity and it's been a dream come true to be able to learn to code in Python. But I wonder about the contest, in any case.

Why would I want to go to Palo Alto other than the scenery? If I win, which is the goal of entering any contest, in my opinion, then will I just be a pony at some big event, to show off the merits of "Udacity", and not my talent? Why the need for a Creative Commons license on the code? What if by some miraculous twist of fate, I actually come up with something actually worthwhile, or worse, something "marketable"?

What developer in their right mind would settle for attribution and nothing else? I never understood that. I love open source because I don't have to pay for it. I'm not remixing it, though. I'm an artist, intellectual property is my game. There's no such thing as a free Picasso, though. It's how I make a living (not selling Picassos). Creative Commons is okay if I write a blog post about my cat Ti-Loup. Can someone explain how software developers can make a living if their code is released under a CC BY-NC-SA license?

Another question. If professional programmers who build search engines make fortunes, and assuredly there are some developers at The Stanford School of Engineering who I'm pretty sure don't release everything under a Creative Commons CC BY-NC-SA license.. why would I, a student, be asked to do so? @UdacityDave, if you had to choose between all possibilities, would you choose this license for your own creations?

Or is it just a question of it being a contest? Sorry if I'm backward and not seeing the bigger picture. Like I said, IP is what puts bread and butter on my table. I have a hard time seeing any other picture than the ones I make that hang on people's walls. By the very virtue of my being born in Canada, all my works are protected by copyright. I grew up in a family of painters; I'm the third generation. I started painting when I was 4. What is coding? Is it not a form of writing, can it not be a form of creative expression? Why would my "writing" be up for grabs?

You may think I'm talking out of my ass. I'm no amateur. I'm an amateur programmer, surely, but my ideas are golden. We learned to build a search engine. We were told about Google. It wasn't Google's search engine that was genius; it was their advertising platform that made them a multibillion dollar company. Their search engine, by all standards, is shoddy. It's not even based on the firmly established traditions of information retrieval in Knowledge Management, i.e. applications such as STRIX, for instance, by Dr. Tony Kent. I'm not saying Sergey and Larry weren't smart, they obviously were. But the search part is not genius, by my standards.

Their - Google's - recent acquisition history shows that they are struggling to innovate. They are buying companies, buying talent, desperate to find the golden ass of sustained growth. They are not at the forefront of Innovation with a big I, not anymore. They risk not being relevant in 20 years. I know, it sounds awfully ironic. In the last few years, they have made SERIOUS mistakes, what in contemporary parlance are called #epic #fails, over and over again, and their brand equity has suffered much. Believe me, they are a struggling company. Why else would they have a Google office in Montréal, Québec, Canada? Answer: Motorola, Android, Wireless Communications. They needed to get their "ears" a little closer to the Bell Labs, now part of Alcatel-Lucent, because wireless communications is the future. It's nice to have learned to build a search engine, but unfortunately search is dead. It died with Dr. Tony Kent. A true visionary, a true genius. Google is hoping Montréal can get them closer to the Innovation gold-mine, maybe take Android to the next level.

Anyway I have winning ideas and want to participate, but hate competing in contests, I find it demeaning. The game changes, though, if visiting Palo Alto means I can spend an hour with an advisor at The Stanford School of Engineering, where I'm thinking of studying in the near future. That changes everything. 15 minutes is all I need. I'm scared of airplanes, though, so I'd rather the prize be converted into an advance on my future tuition fees. :)

(01 Apr '12, 23:46)


@antiface you're not obligated to take part in the contest or participate in any events in the event that you do win. I understand where your concerns are coming from, but I think they're a bit misguided. At the end of the day, for us, the students, this is all about learning and showing what we can do with our newfound knowledge. Udacity has given us this amazing opportunity to take university-level CS courses for absolutely free, in the comfort of our own homes; they've even released all of their content under the same Creative Commons license. If they want to use me, their student, to show off or advertise to the world what sort of students their courses can produce, I'd feel more than honoured.

(02 Apr '12, 15:16)


@antiface: I like your paintings. You seem to be a very creative young man, and creative people can't help but be creative. ;-) What's been done once can be done twice. If taking part in this contest turns out to have been the wrong decision, you can do better the next time and then get as rich as Mark Zuckerberg. ;-)
That's the really great thing about being creative, I think.

"What developer in their right mind would settle for attribution and nothing else?" Good question, well, Tim Berners-Lee did it.

(06 Apr '12, 15:48)


"It's 99% PR, 1% creativity.. " Yeah, that's why I decided to get myself a job with lots of spare time and distribute my music under CC. I think that's the main difference between an artist and someone who's just creative. And although I still hope that my soul hides an artist, I'm not strong enough to release him in such an uncompromising way. ;-)

(06 Apr '12, 18:44)

Submission

===== PURPOSE =====

This simple program helps you learn English through the great game Draw Something: it finds all the valid words that can be formed as an anagram of the given characters, and for every possible solution it also prints a definition and a translation into the language you choose.

===== BACKGROUND =====

I love gaming. A month ago I started playing this awesome game called Draw Something, which is about drawing and guessing what another user has drawn for us (more information here: http://itunes.apple.com/us/app/draw-something-by-omgpop/id488627858?mt=8 — it's just 1 dollar in case you want to buy it :D). You guess using the given letters (at most 12), wisely selecting some of them (at most 8) to spell what the other player has drawn. My little sister, who watched me playing, started to cry because she wanted to play too, but she doesn't know English (Spanish is our native language here in Bolivia :P), so I decided to make a simple program so she could play Draw Something.

===== SOLUTION =====

FIRST PART: Solve the anagram

The game gives you 12 letters; to guess your friend's drawing you're also given the length of the answer, which is at most 8 letters. First I wanted to know how many words of the given length I could make. The count grows with the length of the word: for length 5 it is 12 * 11 * 10 * 9 * 8, since the first spot can use any of the 12 characters, the next spot any of the 11 remaining (one character has already been used in the previous spot), and so on. This gives the formula [number_words = factorial(12) / factorial(12 - desired_length)], which in the worst case (length 8) is 19,958,400 possible words, so generating all of them takes some time. We also have to check how many of these words are valid in English (strings like "aaaaaa" are not valid English words), so we need a Python dictionary with most English words to check whether a generated word is in it. Doing it this way took a looot of time (in the worst case about 20 minutes to solve the anagram :O).

I needed another approach to solve this problem, and after some googling I found a data structure called a Trie, which stores a dictionary and checks very quickly whether a word is in it. It also helped me avoid exploring useless branches of the Trie!!! If a partial word starts with "azq" (we know that no word starts like this), then instead of going further with this prefix we simply erase the last character and continue with the next available character :D

The first step is to create the dictionary. With some Google searches I found that Ubuntu keeps its own word list at /usr/share/dict/words, so all I did was copy this dictionary and keep only those words whose length is between 3 and 8 inclusive (with the help of a little script called dictionary_parser.py, which writes all the filtered words to a file called parsed_dictionary). From this dictionary we build the Trie in the main program, which is inside the file scrabble_find.py.
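The prefix-pruning idea described above can be sketched like this. This is a minimal illustration with my own function names and a toy three-word dictionary, not the actual code in scrabble_find.py:

```python
def build_trie(words):
    """Build a nested-dict trie; the key '$' marks the end of a valid word."""
    trie = {}
    for word in words:
        node = trie
        for ch in word:
            node = node.setdefault(ch, {})
        node['$'] = True
    return trie

def solve(letters, length, trie):
    """Return all dictionary words of exactly `length` letters that can be
    formed from the multiset `letters`, pruning any prefix that no
    dictionary word starts with."""
    results = set()

    def search(node, remaining, prefix):
        if len(prefix) == length:
            if '$' in node:
                results.add(prefix)
            return
        for i, ch in enumerate(remaining):
            if ch in node:  # prune: only descend if some word continues with ch
                search(node[ch], remaining[:i] + remaining[i + 1:], prefix + ch)

    search(trie, letters, '')
    return sorted(results)

# Tiny example, reusing the letters from the sample run below:
trie = build_trie(['airplane', 'plane', 'pane'])
print(solve('aplirane', 8, trie))  # ['airplane']
```

Instead of generating all ~20 million permutations and testing each one, the search abandons a branch the moment its prefix is not in the trie, which is what brings the running time down from minutes to well under a second on a real dictionary.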

SECOND PART: Analyzing the data

Solving the anagram gives a possible answer for the game, but by itself that doesn't help us learn English :[. So the next thing I did was browse the internet (using Python, of course) and look up a definition of each candidate answer, so that my sister could read the definition of the word (in English) in case she didn't know its meaning. I also printed the translation of the word into Spanish, so she could learn both the English definition and its translation :D. Later I extended the translation to most common languages.

The second step is to grab the user input: the "scrambled" characters, the wanted length, and the language of the translation. Then we find all the valid permutations of the letters, and for each one we query the page http://oxforddictionaries.com/definition/ to get the definition, and the page http://translate.reference.com/ to get the translation into the chosen language.
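Building the query URLs for those two lookups could look roughly like this. Note that the exact path and query-string format these sites expect is my assumption here, not taken from the submission's code:

```python
def definition_url(word):
    # Oxford Dictionaries serves definitions at a per-word path (per the
    # submission); the word is simply appended to the base URL.
    return 'http://oxforddictionaries.com/definition/' + word

def translation_url(word, language):
    # Hypothetical query-string format for translate.reference.com --
    # the real parameter names may differ.
    return 'http://translate.reference.com/translate?query={0}&to={1}'.format(
        word, language)

print(definition_url('airplane'))
# http://oxforddictionaries.com/definition/airplane
```

The program would then fetch each URL (e.g. with urllib) and extract the definition or translation from the returned HTML.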

===== HOW TO USE IT =====

Just run the script: scrabble_find.py and provide the required inputs, the outputs are:

valid_permutation_of_the_letters [english definitions] [translation in the provided language]


Ex:

Please enter the unordered characters: aplirane
Please enter the size of the new word to form: 8
The valid permutations are:  ['airplane']
airplane ['a powered flying vehicle with fixed wings and a weight greater than that of the air it displaces'] ['avi\xc3\xb3n, aeronave, aeroplano']


===== REPOSITORY =====

https://github.com/maurizzzio/Udacity/tree/master/cs101/scramble_solver

===== UDACITY ID =====

maurizzzio - mauro.41090@gmail.com

Mauricio Poppe
6371617


I like this submission!

Just a little footnote - when I thought about my project for the contest, I considered making requests to a dictionary platform as well - sadly, most of them restrict you to, say, 1000 requests per day..

(11 Apr '12, 02:42)

really liked this idea! great implementation of what we learnt!

(07 May '12, 21:10)

FVSearch

I started this course to learn how to build a search engine, so that's what I did. I made some heavy modifications to the crawler and incorporated multi-term search capabilities. I also built a text UI for the console to actually allow for searches. The full code, along with a pretty in-depth comment/markup can be found here.

1. Allowed for multiple search terms (exact match only)
2. I modified the 'URank' algorithm. It now returns more relevant results.
• A more thorough explanation is attached to the multi_ranks function.
3. I have made the crawler more polite by following the instructions in robots.txt
4. I incorporated both the max_pages and max_depth arguments in crawl_web.
• This allows more control over how much is crawled.
5. I incorporated my own split_string function instead of the built-in .split
6. I have set everything to lower-case to eliminate duplicate, unconnected index entries.
7. I have built a console UI for executing searches with a number of features.
• Search results limited to a certain number per 'page.' Default is 10. Changeable by user.
• Able to navigate forward and backward through result pages.
• Constraints to prevent going to pages out of bounds.
• Results page lists number of matches, number of pages, current page & current result range.
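The politeness check from item 3 can be done with the standard-library robot-file parser. This is my own illustration of the idea, not necessarily how FVSearch implements it:

```python
from urllib.robotparser import RobotFileParser

def make_checker(robots_txt, agent='FVSearch'):
    """Given the text of a site's robots.txt, return a function that
    says whether `agent` is allowed to fetch a given URL."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return lambda url: parser.can_fetch(agent, url)

# Example robots.txt that disallows /private/ for every crawler:
allowed = make_checker("User-agent: *\nDisallow: /private/\n")
print(allowed('http://example.com/index.html'))        # True
print(allowed('http://example.com/private/page.html')) # False
```

A polite crawler fetches each site's /robots.txt once, builds a checker like this, and skips any URL the checker rejects before requesting it.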

Edit: I used two freely available Python modules in completing this submission: Beautiful Soup and html5lib.
Both are released under the MIT license.

David Harris


Thank you for this submission. I really appreciate the well commented code. This will help me remember how it works when I come back to this class in six months...

(13 Apr '12, 02:25)

SUBMISSION

Project: Extended search engine, and a tutorial: searching the right way

Incentives to make this little application:

When I first read about this contest, I really felt obliged to participate: I learned a lot from this course, and I would love to show what I've learned to the Udacity staff and my fellow Udacians, so I started thinking and brainstorming. I thought about the biggest, most ridiculous applications that I could make, but most of them had nothing to do with what we learned. That's when I started thinking: why make something extra? Why not just extend it, and why not just make sure everyone can use a search engine?

That was the moment when the idea popped in my head: make a tutorial for using a search engine! Of course everyone can use a search engine. It's the same as playing table tennis: a lot of people have a table at home, and they can play table tennis. But can they?

I constantly read on forums that people can't find information about a certain subject. I then suggest they search for it on Google or any other search engine, and they claim they already did! Yet after one simple search query, I find everything they needed.

There are tons of features a search engine supports that nobody uses. Simple symbols that would make your search so much better and more efficient, but people simply don't know about them. With my program, I would like to change this.

Target group:

This would mainly be used by kids starting to use the internet and search engines, but even adults could profit from this application. It is clearly not designed for IT people, or kids/adults skilled in computer science.

My ultimate goal for this project:

See the idea (not my application, it's too small and too limited due to my skills and the given time frame) being used by the big search engines like Google and Yahoo, to make the internet and search world an easier, less frustrating place

I extended the search engine we built in the course to handle multi-word queries: it reads in a query, parses it into the right format, and gives back the correct URL(s).
If you made a working multiple_lookup function in the course, it supported only literal strings. This means that "Monty Python" had to appear exactly, in that order.

I extended that, so that any order of the words is also allowed.
Another extension is the exclude symbol "-".
This means that if your query contains, e.g., the substring -program, every URL that contains the word program is excluded, and thus not returned in the results. I also made the search case insensitive.
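A multi-word lookup with the "-" exclude symbol could be sketched like this against a CS101-style index (a dict mapping keyword to a list of URLs). The function name and details are my own guesses, not the submission's actual code:

```python
def multi_lookup(index, query):
    """Return URLs containing every plain term (in any order, case
    insensitive, assuming the index stores lower-case keywords) and
    none of the terms prefixed with '-'."""
    include, exclude = [], []
    for term in query.lower().split():
        if term.startswith('-'):
            exclude.append(term[1:])
        else:
            include.append(term)
    if not include:
        return []
    # URLs containing every included term: intersect the posting lists
    result = set(index.get(include[0], []))
    for term in include[1:]:
        result &= set(index.get(term, []))
    # Drop URLs containing any excluded term
    for term in exclude:
        result -= set(index.get(term, []))
    return sorted(result)

index = {'monty': ['a.com', 'b.com'],
         'python': ['a.com', 'c.com'],
         'program': ['a.com']}
print(multi_lookup(index, 'Monty Python'))           # ['a.com']
print(multi_lookup(index, 'Monty Python -program'))  # []
```

Intersecting the per-word URL lists is what makes word order irrelevant, and subtracting the excluded words' lists at the end implements the "-" symbol.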

About the use of an interface: I am fairly sure I could have made it fancier by using a graphical package, but the goal of this project is to show what Udacity taught you, and therefore I decided not to do that research.
I never wrote a single line of Python before this class, and I learned very much!

Everything I did in my project was learned through Udacity courses. The only thing I did look up (I kinda knew it from the courses) is how to import a file correctly, so the code would be a bit cleaner.

Some extra information can be found in the README, I suggest you go read it

Creator:

I was the only person who worked on this project. I figured I'd do it on my own, since it's more the idea than the application itself that I would like to submit

Udacity CS101 Profile link (ID 2728), E-MAIL: bcools91@gmail.com

My code can be found in my github repository

PS: A friend of mine was so kind to lend me some hosting space on his website:
check this out to see the "seed page" and its links used in the tutorial

Bart Cools-2
