## **Note**

### **We cannot reimburse you for any charges**

### **Terminating an AWS cluster**

When you are done running Pig scripts, make sure to **ALSO** terminate your cluster. This is a step that you need to do **in addition to** stopping Pig and Hadoop (if necessary).

1. Go to the [Management Console](https://console.aws.amazon.com/elasticmapreduce/home).
2. Select the cluster in the list.
3. Click the Terminate button (you may also need to turn off Termination protection).
4. Wait for a while (this may take several minutes) and recheck until the cluster state becomes TERMINATED.

**If you fail to terminate your cluster and only close the browser or log off AWS, your cluster will continue to run, and AWS will continue to charge your credit card: for hours, days, and weeks. Make sure you don't leave the console until you have confirmation that the cluster is terminated.**

The quiz should cost no more than 10-20 dollars if you only use medium AWS instances.

## **Problem 0: Set Up your Pig Cluster**

1. Follow [these instructions](https://github.com/uwescience/datasci_course_materials/blob/master/assignment4/awsinstructions.md) to set up the cluster. NOTE: It will take you a good **60 minutes** to go through all these instructions without even trying to run example.pig at the end. But they are worth it. You are learning how to use the Amazon cloud, which is by far the most popular cloud platform today. At the end, the instructions will refer to example.pig. This is the name of the sample program that we will run in the next step.
2. You will find example.pig in the course materials repo at:

   https://github.com/uwescience/datasci_course_materials/blob/master/assignment4/

   example.pig is a Pig Latin script that loads and parses the billion triple dataset that we will use in this assignment into triples: (subject, predicate, object). Then it groups the triples by their object attribute and sorts the groups in descending order based on the count of tuples in each group. (A sketch of this pipeline appears after this list.)
3. Follow awsinstructions.md: it provides more information on how to run the sample program called example.pig.
4. There is nothing to turn in for Problem 0.
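
To make the pipeline concrete, here is a minimal Pig Latin sketch of the pattern described in step 2. It is **not** the actual example.pig: the input path and the STRSPLIT-based parsing are illustrative assumptions, and the real script may parse the triples differently. The alias names objects and count_by_object mirror the ones referenced later in the assignment.

```pig
-- A sketch of the load / parse / group / count / sort pattern, NOT the real example.pig.
-- 'btc-sample.nt' is a hypothetical input path; use your actual data file.
raw = LOAD 'btc-sample.nt' USING TextLoader() AS (line:chararray);

-- Split each whitespace-separated line into (subject, predicate, object).
-- The real script may use its own parsing logic instead of STRSPLIT.
ntriples = FOREACH raw GENERATE FLATTEN(STRSPLIT(line, '\\s+', 3))
           AS (subject:chararray, predicate:chararray, object:chararray);

-- Group the triples by their object attribute and count each group.
objects = GROUP ntriples BY object;
count_by_object = FOREACH objects GENERATE group AS object, COUNT(ntriples) AS cnt;

-- Sort in descending order of the per-group count and write the result out.
result = ORDER count_by_object BY cnt DESC;
STORE result INTO 'output/example';
```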

## **Useful Links**

[Pig Latin reference](http://pig.apache.org/docs/r0.15.0/piglatin_ref2.html)

[Counting rows in an alias](http://stackoverflow.com/questions/9900761/pig-how-to-count-a-number-of-rows-in-alias)
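
The gist of that Stack Overflow answer, sketched in Pig Latin (the alias some_alias is a placeholder for whichever relation you want to count):

```pig
-- Put every row of the alias into a single group, then count the bag.
everything = GROUP some_alias ALL;
row_count  = FOREACH everything GENERATE COUNT(some_alias);
DUMP row_count;
```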

Modify example.pig to use the file uw-cse-344-oregon.aws.amazon.com/btc-2010-chu

- After the command objects = ...
- After the command count_by_object = ...

**Hint 1:** Use the Hadoop monitor to see the number of map and reduce tasks for your MapReduce jobs.

**Hint 2:** To see the schema for intermediate results, you can use Pig's interactive command line client grunt, which you can launch by running Pig without specifying an input script on the command line. When using grunt, a command that you may want to know about is [describe](http://pig.apache.org/docs/r0.7.0/piglatin_ref2.html#DESCRIBE). To see a list of other commands, type help.
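
For example, a short grunt session might look like this (the file and alias are hypothetical, and the printed schema is only indicative of describe's output format):

```pig
grunt> raw = LOAD 'btc-sample.nt' USING TextLoader() AS (line:chararray);
grunt> describe raw;
raw: {line: chararray}
grunt> help
```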