LING 1340/2340: To-do Assignments
Due 1/13 (Th), 3:45pm
The Internet is full of published linguistic data sets. Let's data-surf! Instructions:
- Find a published linguistic data set out there. In a text file (with the .txt extension), make note of your findings.
- You may submit a markdown (.md) file instead of a text file.

SUBMISSION: On Canvas. Upload your text file through the To-do1 submission link.
Due 1/20 (Th), 3:45pm
Learn about the numpy library: study the Python Data Science Handbook and/or the DataCamp tutorial. While doing so, create your own study notes as a Jupyter Notebook file entitled numpy_notes_yourname.ipynb. Include examples, explanations, etc. Replicating DataCamp's examples is also something you could do. You are essentially creating your own reference material.
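For a flavor of what such notes might contain, here is a minimal sketch (the array values are made up, just for illustration):

import numpy as np

a = np.arange(6).reshape(2, 3)  # a 2x3 array: [[0, 1, 2], [3, 4, 5]]
print(a * 10)                   # vectorized arithmetic, no explicit loop
print(a.sum(axis=0))            # column sums: [3 5 7]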
SUBMISSION: Your file should be in the todo2/ directory of the Class-Exercise-Repo. Make sure it's configured for the "upstream" remote and your fork is up-to-date. Push to your GitHub fork, and create a pull request for me.
Due 1/25 (Tue)
Study the pandas library (through the Python Data Science Handbook and/or the DataCamp tutorials). pandas is a big topic with lots to learn: aim to cover about half of it. While doing so, try it out on TWO spreadsheet (.csv, .tsv, etc.) files, including:
- billboard_lyrics_1964-2015.csv by Kaylin Pavlik, from her project '50 Years of Pop Music'. (Note: you might need to specify ISO8859 encoding when opening.)

Important: don't change the filename of any downloaded CSV files or edit them in any way! Name your Jupyter Notebook file pandas_notes_yourname.ipynb.
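Loading the billboard file might look like the following minimal sketch (per the ISO8859 note above; 'ISO-8859-1' is one spelling pandas accepts):

import pandas as pd

# encoding= is the fix for the ISO8859 issue noted above
df = pd.read_csv('billboard_lyrics_1964-2015.csv', encoding='ISO-8859-1')
print(df.shape)   # (rows, columns)
print(df.head())  # first few rows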
SUBMISSION: Your files should be in the todo3/ directory of Class-Exercise-Repo. Commit and push all three files to your GitHub fork, and create a pull request for me.
Due 1/27 (Thu)
This one is a continuation of To-do #3: work further on your pandas study notes. You may create a new JNB file, or you can expand the existing one. Also: try out a spreadsheet submitted by a classmate. You are welcome to view the classmate's notebook to see what they did with it. (How to find out who submitted what? Git/GitHub history, of course.) Give them a shout-out.
SUBMISSION: We'll stick to the todo3/ directory in Class-Exercise-Repo. Push to your GitHub fork, and create a pull request for me.
Due 2/10 (Thu), earlier at 2pm!!
Let’s dig into the issues of copyright and license in language data. We’ll then pool our questions together for Dr. Lauren Collister.
Review the topics of linguistic data, open access, and data publishing, focusing in particular on her 2022 article for the Open Handbook of Linguistic Data Management and the “Copyright and Intellectual Property Toolkit”. Then watch her guest presentation from last year; her slides can be found here.
Think of a question or two on the topic, and add yours along with your name to this Word document posted on our MS Teams forum. Dr. Collister will join our class on Thursday to answer them.
SUBMISSION: The shared MS Word document is your submission.
Due 2/17 (Thu)
Let’s try Twitter mining! On a tiny scale that is. Step-by-step tutorials are posted in this Resources section, so pick one and follow along. Take a look at my in-class demo too: I used the older Twitter API protocol v1.1, but try and see if you can use the latest v2.
Before beginning, you will need to install the tweepy library. If you are using Anaconda python, you can do so via Anaconda Navigator's "Environments" tab. If you have python.org's python, you should use pip in the command line.
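Once installed, a minimal search sketch might look like the following (this assumes tweepy v4, whose tweepy.Client class wraps API v2; the bearer token is a placeholder and the query string is just an example):

import tweepy

# placeholder credential: use your own, and redact it before committing (see below)
client = tweepy.Client(bearer_token='XXXXXXXXXXXXXX')
response = client.search_recent_tweets('linguistics lang:en -is:retweet', max_results=10)
for tweet in response.data:
    print(tweet.text)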
Notes on using tweepy:
- Your personal API keys should never be published. Before committing your notebook to git, redact them in your JNB file by changing the string values to 'XXXXXXXXXXXXXX'.

SUBMISSION: We will use Class-Exercise-Repo, the todo6/ folder. Your Jupyter Notebook file should have your name in the file name. Push to your fork and create a pull request. Make sure you have redacted your personal API keys!
Due 2/22 (Tue)
Let’s try our hands on annotation! Head to this URL to access Na-Rae’s WebAnno annotation server. Log in with your user ID (same as your Pitt ID) and password (first 4 digits of your Peoplesoft number).
You will see two documents:
- Japanese.txt is for part-of-speech annotation.
- covid.txt is for named entity annotation.
Without learning all the details of the annotation guidelines, try your best; this is just to get our hands on the process. The point of this To-do is for us to aggregate everyone's annotation and see what the process is like from the annotation manager's point of view. You are also welcome to try out any annotation layer you want.
SUBMISSION: Your annotation itself is the submission!
Due 3/1 (Tue)
Let’s try sentiment analysis on movie reviews. Follow this tutorial in your own Jupyter Notebook file. Feel free to explore and make changes as you see fit. If you haven’t already, review the Python Data Science Handbook chapters to give yourself a good grounding. Also: watch DataCamp tutorials Supervised Learning with scikit-learn, and NLP Fundamentals in Python.
Students who took LING 1330 (=everyone): compare sklearn’s Naive Bayes with NLTK’s treatment and include a blurb on your impression. (You don’t have to run NLTK’s code, unless you want to!)
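In sklearn, a bare-bones Naive Bayes text classifier might look like this minimal sketch (the toy reviews and labels are made up for illustration; the tutorial's actual data and pipeline may differ):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# made-up toy data, just to show the shape of the workflow
reviews = ['a gripping, heartfelt film', 'tedious and overlong',
           'brilliant performances throughout', 'a dull, lifeless script']
labels = ['pos', 'neg', 'pos', 'neg']

model = make_pipeline(CountVectorizer(), MultinomialNB())  # bag-of-words + NB
model.fit(reviews, labels)
print(model.predict(['a brilliant script']))  # hopefully 'pos'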
SUBMISSION: Your Jupyter Notebook file should be in the todo8 folder of Class-Exercise-Repo. As usual, push to your fork and create a pull request.
Due 3/3 (Thu)
What have the previous students of LING 1340/2340 accomplished? What do finished projects look like? Let's have you explore their past projects.
SUBMISSION: As usual, push to your fork and create a pull request. Make sure your team’s markdown file is in good shape!
Due 3/17 (Thu)
What has everyone been up to? Let’s take a look – it’s a “visit your classmates” day!
The file you need to edit is in the Class-Lounge repo.
SUBMISSION: Since Class-Lounge is a fully collaborative repo, there is no formal submission process.
Due 3/22 (Tue)
Visit your classmates, round 2.
SUBMISSION: Since Class-Lounge is a fully collaborative repo, there is no formal submission process.
Due 3/29 (Tue)
Let's poke at big data. Well, big-ish: how about 8.6 million restaurant reviews? The Yelp DataSet Challenge has been going strong for 10+ years now, where Yelp makes their huge review dataset available for academic groups that participate in a data mining competition. Challenge accepted! Let's download this beast and poke around:
- Download the data into your Documents/Data_Science directory. You might want to create a new folder there for the data files.
- The download comes in .tar format. Look it up if you are not familiar. Untar it using tar -xvf. It will extract 5 json files along with a PDF document.
- Using command-line tools (ls -laFh, head, tail, wc -l, etc.), find out: how big are the json files? What do the contents look like? How many reviews are there?
- Pick a word with strong sentiment and find out how many reviews contain it, using grep and wc -l. Take a look at the first few through head | less. Do they seem to have high or low stars?

How much processing can our own puny personal computer handle? Let's find out.
- Create a Python script named process_reviews.py, with the content below. You can use nano, or you could use your favorite editor (atom, notepad++) provided that you launch the application through the command line.

import pandas as pd
import sys
from collections import Counter

filename = sys.argv[1]                  # json file supplied as command-line argument
df = pd.read_json(filename, lines=True, encoding='utf-8')
print(df.head(5))

wtoks = ' '.join(df['text']).split()    # whitespace-tokenize all review texts
wfreq = Counter(wtoks)
print(wfreq.most_common(20))            # 20 most frequent word tokens
- Don't run this script on the entire review.json file! Start small by creating a tiny version consisting of the first 10 lines, named FOO.json, using head and > (e.g., head -10 review.json > FOO.json).
- Run process_reviews.py on FOO.json. Note that the json file should be supplied as a command-line argument to the Python script, so your command will look something like below.

python process_reviews.py FOO.json

- Recreate FOO.json with an incrementally larger total # of lines and re-run the Python script. The point is to find out how much data your system can reasonably handle. Could that be 1,000 lines? 100,000?
- Report back in the shared markdown file in Class-Lounge. A few sentences will do. How was your laptop's handling of this data set? What sorts of resources would it take to successfully process it in its entirety and through more computationally demanding processes? Any other observations?

SUBMISSION: Your entry on this shared MD file. Make sure to properly resolve conflicts (if any)!
Due 3/31 (Thu)
Trying out CRC, with bigger data + better code!
- On CRC, you should have these files: review_4mil.json (newly created), process_reviews.py (same Python script), todo13.sh (new slurm script), and todo13.out (newly generated output file).
- Check how efficiently your job ran, using the seff job-id command.
- Create a new script named process_reviews_eff.py with the following. The code produces the same results, but is structured differently.

import pandas as pd
import sys
from collections import Counter

filename = sys.argv[1]
# read the json file in chunks of 10,000 lines instead of all at once
df_chunks = pd.read_json(filename, chunksize=10000, lines=True, encoding='utf-8')
wfreq = Counter()
for chunk in df_chunks:
    for text in chunk['text']:
        wfreq.update(text.split())      # tally word tokens chunk by chunk
print(wfreq.most_common(20))

- Modify todo13.sh to run this new script.
- Check the efficiency of your job again with the seff job-id command. Night and day! What about this new Python code led to this much improvement in efficiency? Give it some thought; we'll discuss in class.

SUBMISSION: Your files on CRC are your submission. I have read access to them.
Due 4/7 (Thu)
Another round of “visit your classmates”. You know what to do!
Due 4/14 (Thu)
4th and final round of “visit your classmates”, also the last To-do! Visit the two remaining classmates.