diff --git a/building-features.jpg b/building-features.jpg
new file mode 100644
index 0000000..0dd24d2
Binary files /dev/null and b/building-features.jpg differ
diff --git a/challenge-1.ipynb b/challenge-1.ipynb
new file mode 100644
index 0000000..df9fbb5
--- /dev/null
+++ b/challenge-1.ipynb
@@ -0,0 +1,335 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Challenge 1: Prepare Textual Data for Analysis\n",
+ "\n",
+ "In this challenge, we will walk you through how to prepare raw text data for NLP analysis. Due to time limitations, we will cover **text cleaning, tokenization, stemming, lemmatization, and stop words removal** but skip POS tagging, named entity recognition, and chunking. The latter 3 steps are more advanced and not required for our next challenge on sentiment analysis. \n",
+ "\n",
+ "## Objectives\n",
+ "\n",
+ "* Learn how to prepare text data for NLP analysis in Python\n",
+ "* Write the functions you will use in Challenge 2 for cleaning, tokenizing, stemming, and lemmatizing data."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Text Cleaning\n",
+ "\n",
+ "Text cleaning is also called text cleansing. The goal is to clean up messy real-world textual data in order to improve the accuracy of the text analysis in later steps. For generic textual data sources, we usually need to fix the following problems:\n",
+ "\n",
+ "* Missing values\n",
+ "* Special characters\n",
+ "* Numbers\n",
+ "\n",
+ "For web data, we additionally need to fix:\n",
+ "\n",
+ "* HTML tags\n",
+ "* JavaScript\n",
+ "* CSS\n",
+ "* URLs\n",
+ "\n",
+ "Case by case, there may also be special problems to fix for certain types of data. For instance, for Twitter data we need to fix hashtags and Twitter handles, which consist of a *@* sign followed by a username.\n",
+ "\n",
+ "In addition, we also need to convert the text to lowercase so that when we analyze the words later, NLTK will not think *Ironhack* and *ironhack* mean different things.\n",
+ "\n",
+ "Note that the above are the general steps to clean up data for NLP analysis. In specific cases, not all of those steps apply. For example, if you are analyzing historical texts, you probably don't want to remove numbers, because numbers (such as years and dates) are important in history. Likewise, if you are doing something like network analysis on web data, you may want to retain hyperlinks so that you will be able to extract the outbound links in the next steps. Sometimes you may also need to do some cleaning first, then extract some features, then do more cleaning, then extract more features. You'll have to make these judgments case by case. \n",
+ "\n",
+ "In this challenge we are keeping things relatively simple, so **you only need to clean up special characters, numbers, and URLs**. Let's say you have the following messy string to clean up:\n",
+ "\n",
+ "```\n",
+ "@Ironhack's-#Q website 776-is http://ironhack.com [(2018)]\")\n",
+ "```\n",
+ "\n",
+ "You will write a function, which will be part of your NLP analysis pipeline in the next challenge, to clean up strings like the one above and output:\n",
+ "\n",
+ "```\n",
+ "ironhack s q website is\n",
+ "```\n",
+ "\n",
+ "**In the cell below, write a function called `clean_up`**. Test your function with the above string and make sure you receive the expected output.\n",
+ "\n",
+ "*Notes:*\n",
+ "\n",
+ "* Use regular expressions to identify URL patterns and remove URLs.\n",
+ "\n",
+ "* You don't want to replace special characters/numbers with an empty string, because that would join words that shouldn't be joined. For instance, if you replace the `'` in `you're`, you will get `youre`, which is undesirable. So instead, replace special characters and numbers with a whitespace.\n",
+ "\n",
+ "* The order in which you clean things matters. For example, if you remove special characters before URLs, it will be difficult to identify the URL patterns.\n",
+ "\n",
+ "* Don't worry about single letters and multiple whitespaces in your returned string. Those issues will be fixed in the next steps."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import re\n",
+ "\n",
+ "def clean_up(s):\n",
+ "    # Remove URLs first, then replace special characters and numbers with a\n",
+ "    # whitespace, and finally convert everything to lowercase\n",
+ "    s = re.sub(r'http\\S+', ' ', s)\n",
+ "    s = re.sub(r'[^a-zA-Z\\s]+', ' ', s)\n",
+ "    return s.lower()\n",
+ "\n",
+ "cleaned_text = clean_up(\"@Ironhack's-#Q website 776-is http://ironhack.com [(2018)]\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "' ironhack s q website  is    '"
+ ]
+ },
+ "execution_count": 2,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "cleaned_text"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Tokenization\n",
+ "\n",
+ "We have actually discussed the concept of tokenization in the Bag of Words lab before. In that lab, we did both tokenization and calculated the [matrix of document-term frequency](https://en.wikipedia.org/wiki/Document-term_matrix). In this lab, we only need tokenization.\n",
+ "\n",
+ "In the cell below, write a function called **`tokenize`** to convert a string to a list of words. We'll use the string we received in the previous step *`ironhack s q website is`* to test your function. Your function should return:\n",
+ "\n",
+ "```python\n",
+ "['ironhack', 's', 'q', 'website', 'is']\n",
+ "```\n",
+ "\n",
+ "*Hint: use the `word_tokenize` function in NLTK.*"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "[nltk_data] Downloading package punkt to /Users/pedro/nltk_data...\n",
+ "[nltk_data] Package punkt is already up-to-date!\n"
+ ]
+ },
+ {
+ "data": {
+ "text/plain": [
+ "['ironhack', 's', 'q', 'website', 'is']"
+ ]
+ },
+ "execution_count": 3,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "import nltk\n",
+ "from nltk.tokenize import word_tokenize\n",
+ "\n",
+ "nltk.download('punkt')\n",
+ "\n",
+ "def tokenize(s):\n",
+ "    tokens = word_tokenize(s)\n",
+ "    return tokens\n",
+ "\n",
+ "tokens = tokenize(\"ironhack s q website is\")\n",
+ "tokens"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Stemming and Lemmatization\n",
+ "\n",
+ "We will do stemming and lemmatization in the same step because otherwise we would have to loop through each token list twice. You have learned in the previous challenge that stemming and lemmatization are similar but have different purposes for text normalization:\n",
+ "\n",
+ "**Stemming reduces words to their root forms (stems) even if the stem itself is not a valid word**. For instance, *token*, *tokenize*, and *tokenization* will be reduced to the same stem - *token*. And *change*, *changed*, and *changing* will be reduced to *chang*.\n",
+ "\n",
+ "In NLTK, there are three stemming libraries: [*Porter*](https://www.nltk.org/_modules/nltk/stem/porter.html), [*Snowball*](https://www.nltk.org/_modules/nltk/stem/snowball.html), and [*Lancaster*](https://www.nltk.org/_modules/nltk/stem/lancaster.html). The difference among the three is the aggressiveness with which they perform stemming. Porter is the gentlest stemmer: it preserves the word's original form when in doubt. In contrast, Lancaster is the most aggressive one and sometimes produces wrong outputs. Snowball is in between. **In most cases you will use either Porter or Snowball**.\n",
+ "\n",
+ "**Lemmatization differs from stemming in that lemmatization cares about whether the reduced form belongs to the target language, and it often requires the context (i.e. the POS, or part of speech) in order to perform the correct transformation**. For example, the [*WordNet lemmatizer* in NLTK](https://www.nltk.org/_modules/nltk/stem/wordnet.html) yields different results with and without being told that *was* is a verb:\n",
+ "\n",
+ "```python\n",
+ ">>> from nltk.stem import WordNetLemmatizer\n",
+ ">>> lemmatizer = WordNetLemmatizer()\n",
+ ">>> lemmatizer.lemmatize('was')\n",
+ "'wa'\n",
+ ">>> lemmatizer.lemmatize('was', pos='v')\n",
+ "'be'\n",
+ "```\n",
+ "\n",
+ "In the cell below, import the necessary libraries and define a function called `stem_and_lemmatize` that performs both stemming and lemmatization on a list of words. Don't worry about the POS part of lemmatization for now."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "[nltk_data] Downloading package averaged_perceptron_tagger to\n",
+ "[nltk_data] /Users/pedro/nltk_data...\n",
+ "[nltk_data] Package averaged_perceptron_tagger is already up-to-\n",
+ "[nltk_data] date!\n",
+ "[nltk_data] Downloading package wordnet to /Users/pedro/nltk_data...\n",
+ "[nltk_data] Package wordnet is already up-to-date!\n",
+ "[nltk_data] Downloading package punkt to /Users/pedro/nltk_data...\n",
+ "[nltk_data] Package punkt is already up-to-date!\n",
+ "[nltk_data] Downloading package omw-1.4 to /Users/pedro/nltk_data...\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Stemmed words: ['ironhack', 's', 'q', 'websit', 'is']\n",
+ "Lemmatized words: ['ironhack', 's', 'q', 'website', 'be']\n"
+ ]
+ }
+ ],
+ "source": [
+ "import nltk\n",
+ "from nltk.stem import PorterStemmer, WordNetLemmatizer\n",
+ "from nltk.corpus import wordnet\n",
+ "\n",
+ "nltk.download('averaged_perceptron_tagger')\n",
+ "nltk.download('wordnet')\n",
+ "nltk.download('punkt')\n",
+ "nltk.download('omw-1.4')\n",
+ "\n",
+ "def stem_and_lemmatize(l):\n",
+ "    stemmer = PorterStemmer()\n",
+ "    lemmatizer = WordNetLemmatizer()\n",
+ "\n",
+ "    # Stem every token, and lemmatize every token using a POS hint\n",
+ "    stemmed_words = [stemmer.stem(word) for word in l]\n",
+ "    lemmatized_words = [lemmatizer.lemmatize(word, get_wordnet_pos(word)) for word in l]\n",
+ "\n",
+ "    return stemmed_words, lemmatized_words\n",
+ "\n",
+ "def get_wordnet_pos(word):\n",
+ "    # Map the NLTK POS tag of a word to the tag set used by the WordNet lemmatizer\n",
+ "    tag = nltk.pos_tag([word])[0][1][0].upper()\n",
+ "    tag_dict = {\n",
+ "        'J': wordnet.ADJ,\n",
+ "        'N': wordnet.NOUN,\n",
+ "        'V': wordnet.VERB,\n",
+ "        'R': wordnet.ADV\n",
+ "    }\n",
+ "    return tag_dict.get(tag, wordnet.NOUN)\n",
+ "\n",
+ "words = ['ironhack', 's', 'q', 'website', 'is']\n",
+ "stemmed_words, lemmatized_words = stem_and_lemmatize(words)\n",
+ "\n",
+ "print(\"Stemmed words:\", stemmed_words)\n",
+ "print(\"Lemmatized words:\", lemmatized_words)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Stop Words Removal\n",
+ "\n",
+ "Stop words are the most commonly used words in a language, and they contribute little to the main meaning of a text. Examples of English stop words are `i`, `me`, `is`, `and`, `the`, `but`, and `here`. We want to remove stop words from the analysis because otherwise they would make up an overwhelming portion of our tokenized word list, and the NLP algorithms would have trouble identifying the truly important words.\n",
+ "\n",
+ "NLTK has a `stopwords` package that allows us to import the most common stop words in over a dozen languages including English, Spanish, French, German, Dutch, Portuguese, Italian, etc. These are the bare minimum stop words (100-150 words in each language) that can get beginners started. Some other NLP packages such as [*stop-words*](https://pypi.org/project/stop-words/) and [*wordcloud*](https://amueller.github.io/word_cloud/generated/wordcloud.WordCloud.html) provide bigger lists of stop words.\n",
+ "\n",
+ "Now, in the cell below, create a function called `remove_stopwords` that loops through a list of words that have been stemmed and lemmatized, checks for stop words, and removes them. Return a new list where the stop words have been removed."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "[nltk_data] Downloading package stopwords to /Users/pedro/nltk_data...\n",
+ "[nltk_data] Package stopwords is already up-to-date!\n"
+ ]
+ },
+ {
+ "data": {
+ "text/plain": [
+ "['ironhack', 'q', 'website']"
+ ]
+ },
+ "execution_count": 7,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "import nltk\n",
+ "from nltk.corpus import stopwords\n",
+ "\n",
+ "nltk.download('stopwords')\n",
+ "\n",
+ "def remove_stopwords(l):\n",
+ "    # Keep only the words that are not in NLTK's English stop word list\n",
+ "    stop_words = set(stopwords.words('english'))\n",
+ "    filtered_words = [word for word in l if word.lower() not in stop_words]\n",
+ "    return filtered_words\n",
+ "\n",
+ "words = ['ironhack', 's', 'q', 'website', 'is']\n",
+ "filtered_words = remove_stopwords(words)\n",
+ "\n",
+ "filtered_words\n"
+ ]
+ },
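+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As an optional quick check, here is a minimal sketch (assuming the four functions above are defined exactly as written) of how they can be chained into a single preprocessing pipeline:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Optional sketch: chain the four functions defined above into one pipeline\n",
+ "def preprocess(text):\n",
+ "    tokens = tokenize(clean_up(text))\n",
+ "    stemmed, lemmatized = stem_and_lemmatize(tokens)\n",
+ "    # The stemmed tokens are used here; the lemmatized ones would work as well\n",
+ "    return remove_stopwords(stemmed)\n",
+ "\n",
+ "preprocess(\"@Ironhack's-#Q website 776-is http://ironhack.com [(2018)]\")"
+ ]
+ },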
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Summary\n",
+ "\n",
+ "In this challenge you have learned several text preparation techniques in more depth, including text cleaning, tokenization, stemming, lemmatization, and stop words removal. You have also written the functions you will be using in the next challenge to prepare texts for NLP analysis. Now we are ready to move on to the next challenge: Sentiment Analysis."
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.9"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/challenge-2.ipynb b/challenge-2.ipynb
new file mode 100644
index 0000000..79272bb
--- /dev/null
+++ b/challenge-2.ipynb
@@ -0,0 +1,327 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Challenge 2: Sentiment Analysis\n",
+ "\n",
+ "In this challenge we will learn about sentiment analysis and practice performing it on Twitter tweets. \n",
+ "\n",
+ "## Introduction\n",
+ "\n",
+ "Sentiment analysis aims to *systematically identify, extract, quantify, and study affective states and subjective information* in texts ([reference](https://en.wikipedia.org/wiki/Sentiment_analysis)). In simple words, it is about understanding whether the person who produced a piece of text was happy or unhappy. Why do we (or rather, companies) care about sentiment in texts? Because by understanding the sentiment in texts, we can tell whether our customers are happy or unhappy about our products and services. If they are unhappy, the next step is to figure out what has caused the unhappiness and make improvements.\n",
+ "\n",
+ "Basic sentiment analysis only captures the *positive* or *negative* (sometimes *neutral* too) polarity of the sentiment. More advanced sentiment analysis will also consider dimensions such as agreement, subjectivity, confidence, irony, and so on. In this challenge we will conduct basic positive vs. negative sentiment analysis based on real Twitter tweets.\n",
+ "\n",
+ "NLTK comes with a [sentiment analysis package](https://www.nltk.org/api/nltk.sentiment.html). This package is great for beginners because it requires only the textual data to make predictions. For example:\n",
+ "\n",
+ "```python\n",
+ ">>> from nltk.sentiment.vader import SentimentIntensityAnalyzer\n",
+ ">>> txt = \"Ironhack is a Global Tech School ranked num 2 worldwide. Our mission is to help people transform their careers and join a thriving community of tech professionals that love what they do.\"\n",
+ ">>> analyzer = SentimentIntensityAnalyzer()\n",
+ ">>> analyzer.polarity_scores(txt)\n",
+ "{'neg': 0.0, 'neu': 0.741, 'pos': 0.259, 'compound': 0.8442}\n",
+ "```\n",
+ "\n",
+ "In this challenge, however, you will not use NLTK's sentiment analysis package, because in your Machine Learning training over the past 2 weeks you have learned how to make more accurate predictions than that. The [tweets data](https://www.kaggle.com/kazanova/sentiment140) we will be using today are already labeled with positive/negative sentiment. You will be able to use the Naïve Bayes classifier you learned in the lesson to predict the sentiment of tweets based on those labels."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Conducting Sentiment Analysis\n",
+ "\n",
+ "### Loading and Exploring Data\n",
+ "\n",
+ "The dataset we'll be using today is located on Kaggle (https://www.kaggle.com/kazanova/sentiment140). Once you have downloaded and imported the dataset, you will need to define the column names: df.columns = ['target','id','date','flag','user','text']\n",
+ "\n",
+ "*Notes:* \n",
+ "\n",
+ "* The dataset is huge (1.6 million tweets). When you develop your data analysis code, you can sample a subset of the data (e.g. 20k records) so that you will save a lot of time when you test your code."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import pandas as pd\n",
+ "import re\n",
+ "\n",
+ "data = pd.read_csv(\"training.1600000.processed.noemoticon.csv\", encoding='latin1', header=None)\n",
+ "data.columns = ['target', 'id', 'date', 'flag', 'user', 'text']\n",
+ "\n",
+ "def clean_up(s):\n",
+ "    # Same function as in Challenge 1: remove URLs first, then replace special\n",
+ "    # characters and numbers with a whitespace, and convert to lowercase\n",
+ "    s = re.sub(r'http\\S+', ' ', s)\n",
+ "    s = re.sub(r'[^a-zA-Z\\s]+', ' ', s)\n",
+ "    return s.lower()\n",
+ "\n",
+ "# Quick first pass: clean and split the tweets; the full Challenge 1 pipeline\n",
+ "# is applied in the next section\n",
+ "data['text_processed'] = data['text'].apply(clean_up)\n",
+ "data['text_processed'] = data['text_processed'].apply(lambda x: x.split())\n",
+ "\n",
+ "data.head()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Prepare Textual Data for Sentiment Analysis\n",
+ "\n",
+ "Now, apply the functions you have written in Challenge 1 to your whole data set. These functions include:\n",
+ "\n",
+ "* `clean_up()`\n",
+ "\n",
+ "* `tokenize()`\n",
+ "\n",
+ "* `stem_and_lemmatize()`\n",
+ "\n",
+ "* `remove_stopwords()`\n",
+ "\n",
+ "Create a new column called `text_processed` in the dataframe to contain the processed data. At the end, your `text_processed` column should contain lists of word tokens that are cleaned up. Your data should look like below:\n",
+ "\n",
+ "![Processed Data](data-cleaning-results.png)"
+ ]
+ },
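+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "One possible way to do this is sketched below. It assumes `clean_up`, `tokenize`, `stem_and_lemmatize`, and `remove_stopwords` are available here exactly as defined in Challenge 1, and it overwrites the quick `text_processed` column created while loading the data:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Sketch: apply the Challenge 1 functions to every tweet. Assumes clean_up,\n",
+ "# tokenize, stem_and_lemmatize, and remove_stopwords are defined as in\n",
+ "# Challenge 1 (copy them into this notebook or import them from a module).\n",
+ "def preprocess(text):\n",
+ "    tokens = tokenize(clean_up(text))\n",
+ "    stemmed, lemmatized = stem_and_lemmatize(tokens)\n",
+ "    # The stemmed tokens are used here so that word variants collapse into one\n",
+ "    # feature; the lemmatized tokens would also work\n",
+ "    return remove_stopwords(stemmed)\n",
+ "\n",
+ "# While developing, you can work on a sample to keep the run time short, e.g.:\n",
+ "# data = data.sample(20000, random_state=1).reset_index(drop=True)\n",
+ "data['text_processed'] = data['text'].apply(preprocess)\n",
+ "data.head()"
+ ]
+ },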
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "[![Building Features](building-features.jpg)](https://www.youtube.com/watch?v=-vVskDsHcVc)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import numpy as np\n",
+ "import pandas as pd\n",
+ "from nltk import FreqDist\n",
+ "\n",
+ "words = [word for words_list in data['text_processed'] for word in words_list]\n",
+ "frequency = FreqDist(words)\n",
+ "top_words = frequency.most_common(5000)\n",
+ "\n",
+ "bag_of_words = [word for word, _ in top_words]\n",
+ "num_documents = len(data['text_processed'])\n",
+ "num_words = len(bag_of_words)\n",
+ "features = np.zeros((num_documents, num_words), dtype=bool)\n",
+ "\n",
+ "# Precompute a word -> column index mapping; a dict lookup is much faster than\n",
+ "# calling list.index() inside the loop\n",
+ "word_to_index = {word: i for i, word in enumerate(bag_of_words)}\n",
+ "\n",
+ "for i, words_list in enumerate(data['text_processed']):\n",
+ "    for word in words_list:\n",
+ "        j = word_to_index.get(word)\n",
+ "        if j is not None:\n",
+ "            features[i, j] = True\n",
+ "\n",
+ "# Note: this replaces `data` with the boolean feature matrix, so the original\n",
+ "# columns (including the `target` label) are no longer available afterwards;\n",
+ "# keep a copy first if you want to reuse them\n",
+ "feature_data = {word: features[:, i] for i, word in enumerate(bag_of_words)}\n",
+ "data = pd.DataFrame(feature_data)\n",
+ "\n",
+ "data"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import nltk\n",
+ "from nltk.sentiment import SentimentIntensityAnalyzer\n",
+ "nltk.download('vader_lexicon')\n",
+ "\n",
+ "sia = SentimentIntensityAnalyzer()\n",
+ "\n",
+ "def get_sentiment(text):\n",
+ "    scores = sia.polarity_scores(text)\n",
+ "    compound_score = scores['compound']\n",
+ "    return compound_score > 0\n",
+ "\n",
+ "# Rebuild each tweet from the words kept as features and label it as positive\n",
+ "# when VADER's compound score is positive. (Alternatively, the dataset's own\n",
+ "# `target` labels could be used as the output column if they were kept.)\n",
+ "data['is_positive'] = data.apply(lambda row: get_sentiment(' '.join(row.index[row])), axis=1)\n",
+ "data"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Building and Training Naive Bayes Model\n",
+ "\n",
+ "In this step you will split your feature set into a training set and a test set. Then you will create a Naive Bayes classifier instance using `nltk.NaiveBayesClassifier.train` ([example](https://www.nltk.org/book/ch06.html)) and train it with the training dataset.\n",
+ "\n",
+ "After training the model, call `classifier.show_most_informative_features()` to inspect the most important features. The output will look like:\n",
+ "\n",
+ "```\n",
+ "Most Informative Features\n",
+ "\t snow = True False : True = 34.3 : 1.0\n",
+ "\t easter = True False : True = 26.2 : 1.0\n",
+ "\t headach = True False : True = 20.9 : 1.0\n",
+ "\t argh = True False : True = 17.6 : 1.0\n",
+ "\tunfortun = True False : True = 16.9 : 1.0\n",
+ "\t jona = True True : False = 16.2 : 1.0\n",
+ "\t ach = True False : True = 14.9 : 1.0\n",
+ "\t sad = True False : True = 13.0 : 1.0\n",
+ "\t parent = True False : True = 12.9 : 1.0\n",
+ "\t spring = True False : True = 12.7 : 1.0\n",
+ "```\n",
+ "\n",
+ "The [following video](https://www.youtube.com/watch?v=rISOsUaTrO4) will help you complete this step. The source code in this video can be found [here](https://pythonprogramming.net/naive-bayes-classifier-nltk-tutorial/)."
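+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "For reference, a minimal sketch of building that list of `(feature_dict, label)` tuples directly is shown below (the helper name `find_features` is only illustrative, and the slice keeps the illustration small); the following cells take an alternative route through a boolean dataframe instead."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Minimal sketch of the NLTK-style feature set described above. Assumes\n",
+ "# `top_words` from the previous cell and that `data` still has its original\n",
+ "# `target` column (in Sentiment140, 0 = negative and 4 = positive).\n",
+ "word_features = [word for word, _ in top_words]\n",
+ "\n",
+ "def find_features(document_words):\n",
+ "    present = set(document_words)\n",
+ "    return {word: (word in present) for word in word_features}\n",
+ "\n",
+ "# Built on a small slice only, to keep this illustration light on memory\n",
+ "feature_set = [\n",
+ "    (find_features(doc), target == 4)\n",
+ "    for doc, target in zip(data['text_processed'][:1000], data['target'][:1000])\n",
+ "]\n",
+ "len(feature_set)"
+ ]
+ },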
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "[![Building and Training NB](nb-model-building.jpg)](https://www.youtube.com/watch?v=rISOsUaTrO4)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from sklearn.model_selection import train_test_split\n",
+ "from nltk.classify import NaiveBayesClassifier\n",
+ "\n",
+ "train_set, test_set = train_test_split(data, test_size=0.2, random_state=42)\n",
+ "\n",
+ "# NLTK expects a list of (feature_dict, label) tuples; drop the label column\n",
+ "# from the feature dict so the classifier cannot simply read the answer\n",
+ "train_data = [(row.drop('is_positive').to_dict(), row['is_positive']) for _, row in train_set.iterrows()]\n",
+ "test_data = [(row.drop('is_positive').to_dict(), row['is_positive']) for _, row in test_set.iterrows()]\n",
+ "\n",
+ "classifier = NaiveBayesClassifier.train(train_data)\n",
+ "classifier.show_most_informative_features()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Testing Naive Bayes Model\n",
+ "\n",
+ "Now we'll test our classifier with the test dataset. This is done by calling `nltk.classify.accuracy(classifier, test)`.\n",
+ "\n",
+ "As mentioned in one of the tutorial videos, a Naive Bayes model is considered OK if your accuracy score is over 0.6. If your accuracy score is over 0.7, you've done a great job!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from nltk.classify import accuracy\n",
+ "\n",
+ "accuracy_score = accuracy(classifier, test_data)\n",
+ "print(\"Accuracy:\", accuracy_score)"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.9"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/data-cleaning-results.png b/data-cleaning-results.png
new file mode 100644
index 0000000..59f91c3
Binary files /dev/null and b/data-cleaning-results.png differ
diff --git a/nb-model-building.jpg b/nb-model-building.jpg
new file mode 100644
index 0000000..f42bbe2
Binary files /dev/null and b/nb-model-building.jpg differ