diff --git a/README.md b/README.md
index ddd12f01ed78b935f473cfe75ea1395bbe03dcc2..e5b54f7a6789a5d5176cdb0382fb6733c9a726ef 100644
--- a/README.md
+++ b/README.md
@@ -1,22 +1,18 @@
 # Machine Learning Hierarchical Story Generation
 
-### Model Reference
-
-Hierarchical Neural Story Generation [https://arxiv.org/abs/1805.04833](https://arxiv.org/abs/1805.04833)
+This repository contains the machine-learning code for the final project.
 
-## Structure
 
-The _old-repo-rnn_ directory includes the initial, unfinished version of the model - does not produce very good results, I'm only keeping it for reference, not using it in the current version.
+## Description and Instructions
 
-The _fairseq-hsg_ directory hosts all files corresponding to the hierarchical model, including the script to scrape Science fiction stories from a [blog](https://blog.reedsy.com/short-stories/science-fiction/), the collected stories with their writing prompts, a script that analyses and prepares the data for training, and scripts to execute jobs on lara, using slurm. The latter include a script that creates a virtual runtime environment, a script that binarises the dataset prior to training and a training script.
+The `.sh` files are scripts meant to be run on lara via slurm. They include a script that binarises the data before training, the script that trains the first model (`train-job-01.sh`), and the script that trains the fusion model (`train-job-02.sh`). The files in the `environment-setup` folder must be run first to instantiate a virtual runtime environment on slurm.
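For readers unfamiliar with slurm, scripts of this kind typically begin with `#SBATCH` resource directives followed by the actual commands. The sketch below is purely illustrative: the job name, resource values, and paths are assumptions, not the repository's actual settings.

```shell
#!/bin/bash
#SBATCH --job-name=hsg-train          # illustrative job name
#SBATCH --gres=gpu:1                  # GPU request (assumed)
#SBATCH --time=24:00:00               # wall-clock limit (assumed)
#SBATCH --output=train-%j.log         # slurm log file for this job

# Activate the virtual environment created by the environment-setup
# scripts, then launch training (both paths are hypothetical).
source venv/bin/activate
python train.py
```

A script like this would be submitted with `sbatch train-job-01.sh`.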
 
-## Instructions
+The two Python notebooks deal with dataset preparation. `stories-scraping.ipynb` scrapes prompts (inputs) and stories (outputs) from a [blog](https://blog.reedsy.com/short-stories/science-fiction/) that hosts weekly writing contests, and saves them into a temporary `raw_stories` directory. `stories-analysis.ipynb` performs statistical analysis on the scraped data and prepares it for training: it separates words from punctuation, shortens the stories to a desired length (1800 words), and splits the data into training, validation, and test sets. It also deletes the temporary directories.
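The preparation steps described above can be sketched roughly as follows. The function names and the 80/10/10 split ratio are illustrative assumptions; only the 1800-word limit comes from the notebook's description.

```python
# Illustrative sketch of the preparation steps, not the notebook's actual code.
import random
import re

MAX_WORDS = 1800  # target story length from the README

def tokenise(text):
    """Separate words and punctuation into individual tokens."""
    return re.findall(r"\w+|[^\w\s]", text)

def shorten(tokens, max_words=MAX_WORDS):
    """Truncate a tokenised story to at most max_words tokens."""
    return tokens[:max_words]

def split_dataset(pairs, train_frac=0.8, valid_frac=0.1, seed=0):
    """Shuffle (prompt, story) pairs and split into train/valid/test."""
    rng = random.Random(seed)
    pairs = list(pairs)
    rng.shuffle(pairs)
    n_train = int(len(pairs) * train_frac)
    n_valid = int(len(pairs) * valid_frac)
    return (pairs[:n_train],
            pairs[n_train:n_train + n_valid],
            pairs[n_train + n_valid:])
```

This mirrors the fairseq convention of one tokenised example per line in separate train/valid/test files.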
 
-TBA
 
-### Note
+### Model Reference
 
-If you're using a notebook, add an exclamation mark before running a python command. e.g.
+[Hierarchical Neural Story Generation Paper](https://arxiv.org/abs/1805.04833)
 
-```!python train.py```
+[fairseq library](https://github.com/pytorch/fairseq/blob/main/examples/stories/README.md)