This lab will build on the techniques covered in the Spark tutorial to develop a simple word count application. The volume of unstructured text in existence is growing dramatically, and Spark is an excellent tool for analyzing this type of data. In this lab, we will write code that calculates the most common words in the Complete Works of William Shakespeare retrieved from Project Gutenberg. This could also be scaled to larger applications, such as finding the most common words in Wikipedia.
During this lab we will cover:
Part 1: Creating a base DataFrame and performing operations
Part 2: Counting with Spark SQL and DataFrames
Part 3: Finding unique words and a mean value
Part 4: Apply word count to a file
Note that for reference, you can look up the details of the relevant methods in Spark's Python API.
In this part of the lab, we will explore creating a base DataFrame with sqlContext.createDataFrame and using DataFrame operations to count words.
(1a) Create a DataFrame
We'll start by generating a base DataFrame by using a Python list of tuples and the sqlContext.createDataFrame method. Then we'll print out the type and schema of the DataFrame. The Python API has several examples for using the createDataFrame method.
wordsDF = sqlContext.createDataFrame([('cat',), ('elephant',), ('rat',), ('rat',), ('cat',)], ['word'])
wordsDF.show()
print type(wordsDF)
wordsDF.printSchema()
(1b) Using DataFrame functions to add an 's'
Let's create a new DataFrame from wordsDF by performing an operation that adds an 's' to each word. To do this, we'll call the select DataFrame function and pass in a column that has the recipe for adding an 's' to our existing column. To generate this Column object you should use the concat function found in the pyspark.sql.functions module. Note that concat takes in two or more string columns and returns a single string column. In order to pass in a constant or literal value like 's', you'll need to wrap that value with the lit column function.
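For instance, the following small sketch (using a throwaway demoDF DataFrame, not part of the exercise) shows how concat and lit combine an existing column with a literal string:

# Illustrative sketch only -- demoDF is a throwaway example, not the lab data
from pyspark.sql.functions import concat, lit

demoDF = sqlContext.createDataFrame([('dog',), ('bird',)], ['animal'])
# concat joins the 'animal' column with the literal column produced by lit('!')
demoDF.select(concat(demoDF.animal, lit('!')).alias('excited')).show()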
Replace <FILL IN> with your solution. After you have created pluralDF you can run the next cell which contains two tests. If your implementation is correct it will print 1 test passed for each test.
This is the general form that exercises will take. Exercises will include an explanation of what is expected, followed by code cells where one cell will have one or more <FILL IN> sections. The cell that needs to be modified will have # TODO: Replace <FILL IN> with appropriate code on its first line. Once the <FILL IN> sections are updated and the code is run, the test cell can then be run to verify the correctness of your solution. The last code cell before the next markdown section will contain the tests.
Note: Make sure that the resulting DataFrame has one column which is named 'word'.
# TODO: Replace <FILL IN> with appropriate code
from pyspark.sql.functions import lit, concat

pluralDF = wordsDF.select(concat(wordsDF.word, lit('s')).alias('word'))
pluralDF.show()
# Load in the testing code and check to see if your answer is correct
# If incorrect it will report back '1 test failed' for each failed test
# Make sure to rerun any cell you change before trying the test again
from databricks_test_helper import Test

# TEST Using DataFrame functions to add an 's' (1b)
Test.assertEquals(pluralDF.first()[0], 'cats', 'incorrect result: you need to add an s')
Test.assertEquals(pluralDF.columns, ['word'], "there should be one column named 'word'")
(1c) Length of each word
Now use the SQL length function to find the number of characters in each word. The length function is found in the pyspark.sql.functions module.
# TODO: Replace <FILL IN> with appropriate code
from pyspark.sql.functions import length

pluralLengthsDF = pluralDF.select(length(pluralDF.word))
pluralLengthsDF.show()
# TEST Length of each word (1c)
from collections import Iterable
asSelf = lambda v: map(lambda r: r[0] if isinstance(r, Iterable) and len(r) == 1 else r, v)

Test.assertEquals(asSelf(pluralLengthsDF.collect()), [4, 9, 4, 4, 4],
                  'incorrect values for pluralLengths')
(2a) Using groupBy and count
Now, let's count the number of times a particular word appears in the 'word' column. There are multiple ways to perform the counting, but some are much less efficient than others.
A naive approach would be to call collect on all of the elements and count them in the driver program. While this approach could work for small datasets, we want an approach that will work for any size dataset including terabyte- or petabyte-sized datasets. In addition, performing all of the work in the driver program is slower than performing it in parallel in the workers. For these reasons, we will use data parallel operations.
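For contrast, here is a minimal sketch of the naive, driver-side approach described above (illustration only; it collects everything to the driver and will not scale):

# Naive driver-side counting: collect all rows, then count them in plain Python.
# Fine for a toy DataFrame like wordsDF, but it will not scale to large datasets.
from collections import Counter

driverSideCounts = Counter(row.word for row in wordsDF.collect())
print driverSideCounts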
Using DataFrames, we can perform aggregations by grouping the data using the groupBy function on the DataFrame. Using groupBy returns a GroupedData object and we can use the functions available for GroupedData to aggregate the groups. For example, we can call avg or count on a GroupedData object to obtain the average of the values in the groups or the number of occurrences in the groups, respectively.
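As a quick illustration (using a small throwaway DataFrame rather than the lab data), here is how those GroupedData aggregations look:

# Illustrative sketch only: GroupedData supports aggregations such as count and avg
demoPairsDF = sqlContext.createDataFrame([('a', 1), ('a', 3), ('b', 2)], ['key', 'value'])
groupedDemo = demoPairsDF.groupBy('key')   # returns a GroupedData object
groupedDemo.count().show()                 # number of rows in each group
groupedDemo.avg('value').show()            # average of 'value' in each group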
To find the counts of words, group by the words and then use the count function to find the number of times that each word occurs.
# TODO: Replace <FILL IN> with appropriate code
wordCountsDF = (wordsDF
                .groupBy(wordsDF.word)
                .count())
wordCountsDF.show()
# TEST groupBy and count (2a)
Test.assertEquals(wordCountsDF.collect(), [('cat', 2), ('rat', 2), ('elephant', 1)],
                  'incorrect counts for wordCountsDF')
(3a) Unique words
Calculate the number of unique words in wordsDF. You can use other DataFrames that you have already created to make this easier.
from spark_notebook_helpers import printDataFrames

# This function returns all the DataFrames in the notebook and their corresponding column names.
printDataFrames(True)
# TODO: Replace <FILL IN> with appropriate code
uniqueWordsCount = wordCountsDF.select('word').count()
print uniqueWordsCount
# TEST Unique words (3a)
Test.assertEquals(uniqueWordsCount, 3, 'incorrect count of unique words')
(3b) Means of groups using DataFrames
Find the mean number of occurrences of words in wordCountsDF.
You should use the mean GroupedData method to accomplish this. Note that when you use groupBy you don't need to pass in any columns. A call without columns just prepares the DataFrame so that aggregation functions like mean can be applied.
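For example, here is a tiny sketch (on a throwaway DataFrame) showing that groupBy() with no columns treats the entire DataFrame as a single group:

# Illustrative sketch only: groupBy() with no columns yields one group for the whole DataFrame
numbersDF = sqlContext.createDataFrame([(1,), (2,), (3,)], ['n'])
numbersDF.groupBy().mean('n').show()   # a single row containing avg(n) = 2.0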
# TODO: Replace <FILL IN> with appropriate code
averageCount = (wordCountsDF
                .groupBy().mean('count')
                .collect()[0][0])
print averageCount
# TEST Means of groups using DataFrames (3b)
Test.assertEquals(round(averageCount, 2), 1.67, 'incorrect value of averageCount')
In this section we will finish developing our word count application. We'll have to build the wordCount function, deal with real world problems like capitalization and punctuation, load in our data source, and compute the word count on the new data.
(4a) wordCount function
First, define a function for word counting. You should reuse the techniques that have been covered in earlier parts of this lab. This function should take in a DataFrame that is a list of words like wordsDF and return a DataFrame that has all of the words and their associated counts.
# TODO: Replace <FILL IN> with appropriate code
def wordCount(wordListDF):
    """Creates a DataFrame with word counts.

    Args:
        wordListDF (DataFrame of str): A DataFrame consisting of one string column called 'word'.

    Returns:
        DataFrame of (str, int): A DataFrame containing 'word' and 'count' columns.
    """
    return wordListDF.groupBy('word').count()

wordCount(wordsDF).show()
# TEST wordCount function (4a)
Test.assertEquals(sorted(wordCount(wordsDF).collect()),
                  [('cat', 2), ('elephant', 1), ('rat', 2)],
                  'incorrect definition for wordCount function')
(4b) Capitalization and punctuation
Real world files are more complicated than the data we have been using in this lab. Some of the issues we have to address are:
- Words should be counted independent of their capitalization (e.g., Spark and spark should be counted as the same word).
- All punctuation should be removed.
- Any leading or trailing spaces on a line should be removed.
Define the function removePunctuation that converts all text to lower case, removes any punctuation, and removes leading and trailing spaces. Use the regexp_replace function found in pyspark.sql.functions to remove any text that is not a letter, number, or space. If you are unfamiliar with regular expressions, you may want to review this tutorial from Google. Also, this website is a great resource for debugging your regular expression. You should also use the trim and lower functions found in pyspark.sql.functions. Note that you shouldn't use any RDD operations or need to create custom user defined functions (udfs) to accomplish this task.
# TODO: Replace <FILL IN> with appropriate code
from pyspark.sql.functions import regexp_replace, trim, col, lower

def removePunctuation(column):
    """Removes punctuation, changes to lower case, and strips leading and trailing spaces.

    Note:
        Only spaces, letters, and numbers should be retained. Other characters should be
        eliminated (e.g. it's becomes its). Leading and trailing spaces should be removed
        after punctuation is removed.

    Args:
        column (Column): A Column containing a sentence.

    Returns:
        Column: A Column named 'sentence' with clean-up operations applied.
    """
    return trim(lower(regexp_replace(column, '[^A-Za-z0-9 ]', ''))).alias('sentence')

sentenceDF = sqlContext.createDataFrame([('Hi, you!',),
                                         (' No under_score!',),
                                         (' * Remove punctuation then spaces * ',)],
                                        ['sentence'])
sentenceDF.show(truncate=False)
(sentenceDF
 .select(removePunctuation(col('sentence')))
 .show(truncate=False))
# TEST Capitalization and punctuation (4b)
testPunctDF = sqlContext.createDataFrame([(" The Elephant's 4 cats. ",)])
Test.assertEquals(testPunctDF.select(removePunctuation(col('_1'))).first()[0],
                  'the elephants 4 cats',
                  'incorrect definition for removePunctuation function')
(4c) Load a text file
For the next part of this lab, we will use the Complete Works of William Shakespeare from Project Gutenberg. To convert a text file into a DataFrame, we use the sqlContext.read.text() method. We also apply the recently defined removePunctuation() function using a select() transformation to strip out the punctuation and change all text to lower case. Since the file is large we use show(15), so that we only print 15 lines.
fileName = "dbfs:/databricks-datasets/cs100/lab1/data-001/shakespeare.txt"

shakespeareDF = sqlContext.read.text(fileName).select(removePunctuation(col('value')))
shakespeareDF.show(15, truncate=False)
(4d) Words from lines
Before we can use the wordCount() function, we have to address two issues with the format of the DataFrame:
- The first issue is that we need to split each line by its spaces.
- The second issue is that we need to filter out empty lines or words.
Apply a transformation that will split each 'sentence' in the DataFrame by its spaces, and then transform from a DataFrame that contains lists of words into a DataFrame with each word in its own row. To accomplish these two tasks you can use the split and explode functions found in pyspark.sql.functions. Once you have a DataFrame with one word per row you can apply the DataFrame operation where to remove the rows that contain ''.
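To see what split and explode do, here is a tiny sketch on a throwaway one-line DataFrame (illustration only, not the exercise solution):

# Illustrative sketch only: split produces an array column, explode gives each element its own row
from pyspark.sql.functions import split, explode

demoLineDF = sqlContext.createDataFrame([('to be or not to be',)], ['sentence'])
demoLineDF.select(explode(split('sentence', ' ')).alias('word')).show()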
shakeWordsDF should be a DataFrame with one column named word.
# TODO: Replace <FILL IN> with appropriate code
from pyspark.sql.functions import split, explode, length

shakeWordsDF = (shakespeareDF
                .select(explode(split('sentence', ' ')).alias('word'))
                .where(length('word') > 0))
shakeWordsDF.show()
shakeWordsDFCount = shakeWordsDF.count()
print shakeWordsDFCount
# TEST Remove empty elements (4d)
Test.assertEquals(shakeWordsDF.count(), 882996, 'incorrect value for shakeWordCount')
Test.assertEquals(shakeWordsDF.columns, ['word'], "shakeWordsDF should only contain the Column 'word'")
(4e) Count the words
We now have a DataFrame that is only words. Next, let's apply the wordCount() function to produce a list of word counts. We can view the first 20 words by using the show() action; however, we'd like to see the words in descending order of count, so we'll need to apply the orderBy DataFrame method to first sort the DataFrame that is returned from wordCount().
You'll notice that many of the words are common English words. These are called stopwords. In a later lab, we will see how to eliminate them from the results.
# TODO: Replace <FILL IN> with appropriate code
from pyspark.sql.functions import desc

topWordsAndCountsDF = wordCount(shakeWordsDF).orderBy(desc('count'))
topWordsAndCountsDF.show()
# TEST Count the words (4e)
Test.assertEquals(topWordsAndCountsDF.take(15),
                  [(u'the', 27361), (u'and', 26028), (u'i', 20681), (u'to', 19150), (u'of', 17463),
                   (u'a', 14593), (u'you', 13615), (u'my', 12481), (u'in', 10956), (u'that', 10890),
                   (u'is', 9134), (u'not', 8497), (u'with', 7771), (u'me', 7769), (u'it', 7678)],
                  'incorrect value for top15WordsAndCountsDF')
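Although removing stopwords is left for a later lab, here is a minimal sketch (with a small, hand-picked stopword list chosen purely for illustration) of one way they could be filtered out with the DataFrame API:

# Illustrative sketch only: drop a few hand-picked stopwords from the ordered counts
from pyspark.sql.functions import col

fewStopwords = ['the', 'and', 'i', 'to', 'of', 'a']
topWordsAndCountsDF.where(~col('word').isin(fewStopwords)).show()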