Data Science for Linguists 2023

Course home for
     LING 1340/2340


Daily To-do Assignments

To-do #1

Due 1/11 (Wed), 9:45am

The goal of this To-do is to get you started with Git. To that end, complete my LSA 2019 tutorial Part 1 “Intro to Git”, linked under the “Git” section of the Learning Resources page. Detailed instructions:

SUBMISSION: On Canvas. Upload your screenshot files through the To-do1 submission link.

To-do #2

Due 1/13 (Fri), 9:45am

The Internet is full of published linguistic data sets. Let’s data-surf! Instructions:

  1. Go out and find two linguistic data sets you like. One should be a corpus; the other should be in some other format. They must be free and downloadable in full. Make sure they are linguistic data sets, meaning they were designed specifically for linguistic inquiry or for NLP engineering purposes.
  2. You might want to start with various bookmark sites listed in the following Learning Resources sections: Linguistic Data, Open Access, Data Publishing and Corpus Linguistics. But don’t be constrained by them.
  3. Download the data sets and poke around. Open up a file or two to take a peek. (No need to do this in Python: Save that for HW1.)
  4. In a text file named datasets_yourname.txt (note the .txt extension), make note of:
    • The name of the data resource
    • The author(s)
    • The URL of the download page
    • Its makeup: size, type of language, format, etc.
    • License: does it come with one, and if so, what kind?
    • Anything else noteworthy about the data. A sentence or two will do.
  5. If you are comfortable with markdown, make an .md file instead of a text file.

Git/GitHub submission instructions:

  1. If you haven’t already, fork Class-Exercise-Repo from our class GitHub org. Then, clone your fork onto your laptop. Details are on today’s slides.
  2. Inside, you will find the todo2/ directory.
  3. Copy or move your file into that directory. Make sure it’s named something like datasets_yourname.txt so it won’t conflict with another student’s.
  4. Do the usual local git routine, which ends with committing. Then push to your own GitHub fork.
  5. Confirm that your GitHub fork has your file.

If you already know about pull requests, go ahead and create one as the last step. We will go over the mechanics of pulling and merging on Friday.

SUBMISSION: That’s it! Your forked GitHub repository counts as your submission.

To-do #3

Learn about the numpy library: study the Python Data Science Handbook and/or the DataCamp tutorial. While doing so, create your own study notes as a Jupyter Notebook file entitled numpy_notes_yourname.ipynb. Include examples, explanations, etc. Replicating DataCamp’s examples is also something you could do. You are essentially creating your own reference material.
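
To give a sense of what a notes cell might contain, here is a minimal sketch (the array values are made up for illustration):

import numpy as np

# Vectorized arithmetic: operations apply element-wise, no explicit loop
arr = np.array([1, 2, 3, 4, 5])
print(arr * 10)       # [10 20 30 40 50]
print(arr.mean())     # 3.0

# Boolean masking: select elements that satisfy a condition
print(arr[arr > 2])   # [3 4 5]

# 2-D arrays: shape and slicing
mat = np.arange(12).reshape(3, 4)
print(mat.shape)      # (3, 4)
print(mat[:, 0])      # first column: [0 4 8]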

SUBMISSION: Your file should be in the todo3/ directory of the Class-Exercise-Repo. Make sure your fork is up-to-date. Push to your GitHub fork, and create a pull request for me.

To-do #4

Study the pandas library (through the Python Data Science Handbook and/or the DataCamp tutorials). pandas is a big topic with lots to learn: aim to cover about half of it. While doing so, try it out on TWO spreadsheet (.csv, .tsv, etc.) files:

  1. The first file should be your choice. You can get one from this CSV Files archive, or make up your own. Keep it super small and simple at 5-100 rows. This is supposed to be a toy dataset that helps you learn!
  2. The second one should be billboard_lyrics_1964-2015.csv by Kaylin Pavlik, from her project ‘50 Years of Pop Music’. (Note: you might need to specify ISO-8859 encoding when opening; see the sketch below.)

Don’t change the filenames of the downloaded CSV files or edit them in any way – important! Name your Jupyter Notebook file pandas_notes_yourname.ipynb.
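
If it helps to see the shape of things, a minimal loading sketch might look like this ('my_tiny_data.csv' is a stand-in name for whatever first file you picked, and the exact encoding label is an assumption that commonly works, spelled 'ISO-8859-1' or 'latin-1'):

import pandas as pd

# Your own toy CSV; 'my_tiny_data.csv' stands in for your first file
toy = pd.read_csv('my_tiny_data.csv')
print(toy.shape)
print(toy.head())

# The billboard file is not UTF-8, so pass the ISO-8859 encoding explicitly
bb = pd.read_csv('billboard_lyrics_1964-2015.csv', encoding='ISO-8859-1')
print(bb.head())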

SUBMISSION: Your files should be in the todo4/ directory of Class-Exercise-Repo. Commit and push all three files to your GitHub fork, and create a pull request for me.

To-do #5

This one is a continuation of To-do #4: work further on your pandas study notes. You may create a new Jupyter Notebook file, or you can expand the existing one. Also, try out a spreadsheet submitted by a classmate. You are welcome to view the classmate’s notebook to see what they did with it. (How to find out who submitted what? Through the Git/GitHub history, of course.) Give them a shout-out.

SUBMISSION: We’ll stick to the todo4/ directory in Class-Exercise-Repo. Push to your GitHub fork, and create a pull request for me.

To-do #6

Plotting time! matplotlib and seaborn are popular Python libraries for plotting and visualization. The goal of this To-do is to practice them using the “English” data:

Your Jupyter Notebook study notes should be named plot_notes_yourname.ipynb.
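
As a starting point, here is a minimal sketch; since it doesn’t know what the “English” data looks like, it uses a made-up toy DataFrame and column names, which you should swap out for the real ones:

import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# Toy stand-in data; replace with the actual "English" data and columns
df = pd.DataFrame({'length': [3, 5, 4, 7, 6, 2],
                   'freq': [50, 20, 35, 5, 10, 80]})

# matplotlib: a bare-bones scatter plot
plt.scatter(df['length'], df['freq'])
plt.xlabel('word length')
plt.ylabel('frequency')
plt.show()

# seaborn: the same relationship with a fitted regression line
sns.regplot(x='length', y='freq', data=df)
plt.show()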

SUBMISSION: Your files should be in the todo6/ directory of Class-Exercise-Repo. Commit and push to your GitHub fork, and create a pull request for me.

To-do #7

What have previous students of LING 1340/2340 accomplished? What do finished projects look like? Let’s have you explore their past projects. Details:

SUBMISSION: As usual, push to your fork and create a pull request. Make sure your team’s markdown file is in good shape!

To-do #8

Due earlier at 9am!!

Let’s dig into the issues of copyright and licensing in language data. We’ll then pool our questions together for Dr. Lauren Collister.

Review the topics of linguistic data, open access, and data publishing, focusing in particular on her 2022 article for the Open Handbook of Linguistic Data Management and the “Copyright and Intellectual Property Toolkit”. Then watch her guest presentation from a previous class; her slides can be found here.

Think of a question or two on the topic, and add them, along with your name, to this Word document posted on our MS Teams forum. Dr. Collister will join our class on Friday to answer them.

SUBMISSION: The shared MS Word document is your submission.

To-do #9

Let’s learn about web scraping. It is in fact a vast topic, one that requires learning about the very building blocks of websites (HTML, CSS, etc.). DataCamp has a whole course devoted to it (Web Scraping in Python), but for now let’s all just dip our toes in.

Work through the “Web Scraping with BeautifulSoup” tutorial posted in the Web and Social Media Mining section of our learning resources page, then try it out on a web page of your own choice! Name your Jupyter Notebook bs4_web_scraping_YOURNAME.ipynb and put it in the todo9 folder of our Class-Exercise-Repo.
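
The core recipe is small; here is a minimal sketch, with https://example.com as a placeholder URL for your chosen page:

import requests
from bs4 import BeautifulSoup

# Fetch the raw HTML; swap in the URL of the page you picked
url = 'https://example.com'
html = requests.get(url).text

# Parse it and pull out pieces by tag
soup = BeautifulSoup(html, 'html.parser')
print(soup.title.string)          # the page title
for link in soup.find_all('a'):   # every hyperlink on the page
    print(link.get('href'))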

SUBMISSION: As usual, push to your fork and create a pull request.

To-do #10

With AI and natural language technologies making big waves, computational semantics is enjoying renewed popularity. One well-known project is Abstract Meaning Representation (AMR), a formalism for the semantic representation of English sentences. The project home page is here: https://amr.isi.edu/index.html
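
For a first taste, here is the classic example from the AMR literature: the sentence “The boy wants to go” is represented as

(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-01
            :ARG0 b))

where want-01 and go-01 are PropBank-style framesets, the :ARGn edges are their roles, and the reused variable b marks the boy as both the wanter and the goer.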

It might look cryptic at first, but you will likely see similarities to PropBank, which we learned about in LING 1330. Your job: give yourself a crash course and learn as much as you can about AMR. Details:

To-do #11

Let’s try sentiment analysis on movie reviews. Follow this tutorial in your own Jupyter Notebook file. Feel free to explore and make changes as you see fit. If you haven’t already, review the Python Data Science Handbook chapters to give yourself a good grounding. If you want a serious start on machine learning, watch the DataCamp tutorials Supervised Learning with scikit-learn and NLP Fundamentals in Python.
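
The overall shape of such a classifier is compact. Here is a minimal sketch with a made-up toy dataset standing in for the tutorial’s movie reviews:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny stand-in data; the tutorial's movie reviews take this place
reviews = ['a wonderful heartfelt film', 'boring and predictable plot',
           'brilliant acting throughout', 'a dull waste of two hours']
labels = ['pos', 'neg', 'pos', 'neg']

# Turn raw text into word-count feature vectors
vec = CountVectorizer()
X = vec.fit_transform(reviews)

# Train a multinomial Naive Bayes model and classify an unseen review
clf = MultinomialNB()
clf.fit(X, labels)
print(clf.predict(vec.transform(['what a wonderful film'])))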

Students who took LING 1330: compare sklearn’s Naive Bayes with NLTK’s treatment, and include a blurb on your impressions and questions. (You don’t have to run NLTK’s code unless you want to!)

SUBMISSION: Your Jupyter Notebook file should be in the todo11 folder of Class-Exercise-Repo. As usual, push to your fork and create a pull request.

To-do #12

What has everyone been up to? Let’s take a look – it’s a “visit your classmates” day!

SUBMISSION: Since Class-Lounge is a fully collaborative repo, there is no formal submission process.

To-do #13

Let’s poke at big data. Well, big-ish – how about 7 million restaurant reviews? The Yelp Dataset Challenge has been going strong for 10+ years now: Yelp makes its huge review dataset available to academic groups that participate in a data mining competition. Challenge accepted! Before we begin:

Mode of operation

Step 1: Preparation, exploration

Let’s download this beast and poke around.

  1. Download the JSON portion of the data. (We don’t need the photos.)
  2. Move the downloaded archive file into your Documents/Data_Science directory. You might want to create a new folder there for the data files.
  3. From this point on, operate exclusively in command line.
  4. The file is in the .tar format. Look it up if you are not familiar with it. Untar it using tar -xvf; it will extract 5 json files along with a PDF document.
  5. Using various unix commands (ls -laFh, head, tail, wc -l, etc.), find out: how big are the json files? What do the contents look like? How many reviews are there?
  6. How many reviews use the word ‘horrible’? Find out through grep and wc -l. Take a look at the first few through head | less. Do they seem to have high or low stars?
  7. How many reviews use the word ‘scrumptious’? Do they seem to have high stars this time?

Step 2: A stab at processing

How much processing can our own puny personal computer handle? Let’s find out.

  1. First, take stock of your computer hardware: disk space, memory, processor, and how old it is.
  2. Create a Python script file: process_reviews.py. Content below. You can use nano, or your favorite editor (Atom, Notepad++) provided that you launch the application through the command line.
import pandas as pd
import sys
from collections import Counter

# The json file to process is supplied as a command-line argument
filename = sys.argv[1]

# lines=True: each line of the file is one JSON record
df = pd.read_json(filename, lines=True, encoding='utf-8')
print(df.head(5))

# Split every review text on whitespace and count the word tokens
wtoks = ' '.join(df['text']).split()
wfreq = Counter(wtoks)
print(wfreq.most_common(20))
  3. We are NOT going to run this on the whole review.json file! Start small by creating a tiny version consisting of the first 10 lines, named FOO.json, using head and > (e.g., head -n 10 review.json > FOO.json).
  4. Then, run process_reviews.py on FOO.json. Note that the json file should be supplied as a command-line argument to the Python script, so your command will look something like below.
    • python process_reviews.py FOO.json
  5. Confirm it ran successfully.
  6. Next, re-create FOO.json with an incrementally larger number of lines and re-run the Python script. The point is to find out how much data your system can reasonably handle. Could that be 1,000 lines? 100,000?
  7. While running this experiment, closely monitor the process on your machine: Windows users should use Task Manager, Mac users Activity Monitor.
  8. Finally, write up a short summary in this shared markdown file in Class-Lounge. A few sentences will do. How did your laptop handle this data set? What sorts of resources would it take to process it in its entirety, and with more computationally demanding operations? Any other observations?

SUBMISSION: Your entry on this shared MD file. Make sure to properly resolve conflicts (if any)!

To-do #14

Trying out CRC (Pitt’s Center for Research Computing), with bigger data + better code!

Warm-up

Take 1: Bigger data

Take 2: Better code

The To-do #13 script built one giant string and a full DataFrame in memory; the version below streams the file in chunks instead.

import pandas as pd
import sys
from collections import Counter

# The json file to process is supplied as a command-line argument
filename = sys.argv[1]

# chunksize=10000 makes read_json return an iterator of 10,000-line
# DataFrames rather than loading the entire file at once
df_chunks = pd.read_json(filename, chunksize=10000, lines=True, encoding='utf-8')

wfreq = Counter()

# Process one chunk at a time, folding its word counts into the total
for chunk in df_chunks:
    for text in chunk['text']:
        wfreq.update(text.split())

print(wfreq.most_common(20))
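
The design difference is the whole point: with chunksize specified, read_json returns an iterator instead of a single DataFrame, so each pass through the loop holds only 10,000 reviews in memory and only the Counter accumulates across the file. Memory use stays roughly flat however large the input grows.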

Take 3: EVEN BIGGER data and better code (optional, ONLY IF you’re curious!)

SUBMISSION: Your files on CRC are your submission. I have read access to them.

To-do #15

Visit your classmates, round 2.

SUBMISSION: Since Class-Lounge is a fully collaborative repo, there is no formal submission process.

To-do #16

Visit your classmates, round 3. You know what to do!

To-do #17

Visit your classmates, last round! You have 3 classmates you haven’t visited yet. You can visit 2, or if you are inclined, visit all 3.