Lab 7: Form Handling
This lab exercise carries coursework marks. The assessment takes the form of a Peer Assessment Workshop Activity. Please refer to these instructions and the submission guidelines given on the VLE.
The deadline for the submission phase of the activity is 23:55 on Wednesday 6th December (before Lab 8). The deadline for the assessment phase of the activity is 23:55 on Wednesday 13th December (before Lab 9).
Overview
Last week you began using the pymongo API to facilitate communication between your application's logic layer (the Python scripts) and its backend (the database).
You will also have started developing your application in a git repository of your own, and should be regularly committing back to GitLab. You may also have begun to deviate from the class example to some degree.
This week, you will start to emphasise the 'data-driven' aspect of your application by allowing a user to send data to the server via an HTML form. Your CGI scripts will process this data and update the database accordingly.
You should also start to notice various improvements in the design of the Catflucks app. For example, further steps have been taken to modularise the code with reusable functions.
Learning objectives
- Utilise an HTML form to trigger a POST request on the web server
- Utilise the cgi.FieldStorage class to retrieve form data sent to a cgi script
- Access fields in the form data and use them with MongoDB's insert() and/or update() collection methods to affect the data in the database
- Utilise bson.objectid.ObjectId to reference document objects in a Python script
Task 1: Output an HTML form
In the example Catflucks application, a form is used to record whether or not a user wishes to 'Fluck' or 'Skip' the cat image they are presented with.
The HTML form includes three input fields:
- A hidden input field containing the id of the image that was served
- Two inputs of type submit, each with a unique field name.
The action attribute of the HTML form element specifies where to make the request (i.e. in this case, the URL of the processing script), and the method attribute specifies what type of request will be made (i.e. GET or POST).
It looks like this:
<form method="POST" action="/cgi-bin/serve_cat.py">
<input type="hidden" value="{}" name="img_id">
<input type="submit" name="btn_skip" value="Skip">
<input type="submit" name="btn_fluck" value="Fluck">
</form>
The processing script will be able to check which of the two submit buttons was used to submit the form, and react accordingly.
The value of the hidden field will be set dynamically with each GET request, according to which random image was retrieved from the database.
For the full script, refer to the examples in lab-exercises (lab-6 and lab-7 folders).
- Output an HTML form from one of your scripts. Whether or not you are following the example, think carefully about what data will be needed by the form processing script (as a minimum). A rough sketch is given below.
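For illustration, here is a rough sketch of a script that serves the form with the image id filled in. The interpreter line, connection details, database name (catflucks) and collection name (cats) are assumptions made for this sketch; refer to the lab-6 and lab-7 examples for the real thing.

#!/usr/bin/env python3
import cgitb
cgitb.enable()
from pymongo import MongoClient

client = MongoClient()      # assumes mongod is running locally
db = client['catflucks']    # assumed database name

# However you chose your random image last week -- find_one() is just a stand-in
cat = db.cats.find_one()

FORM_TEMPLATE = """
<form method="POST" action="/cgi-bin/serve_cat.py">
<input type="hidden" value="{}" name="img_id">
<input type="submit" name="btn_skip" value="Skip">
<input type="submit" name="btn_fluck" value="Fluck">
</form>
"""

# CGI scripts must output the Content-Type header (and a blank line) before any HTML
print("Content-Type: text/html\n")
print(FORM_TEMPLATE.format(cat['_id']))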
Task 2: Retrieve form data with cgi.FieldStorage
The value you gave the action attribute of your form will dictate which server-side script receives and handles the POST request. This might be the same script that served the cat image in the first place, or it might be another one... it may depend on how closely you are following the example!
From your form processing script you will need to import the cgi module which provides the FieldStorage class:
import cgi
It is also a good idea to import and enable the cgitb module, which provides better error reporting on CGI scripts:
import cgitb
cgitb.enable()
You can then retrieve form data from a request body by instantiating a FieldStorage object:
form = cgi.FieldStorage()
This object provides form data in a dictionary-like structure, meaning you can index particular fields like this:
field_1 = form['field_1']
Note that indexing returns a field object rather than the raw string: the submitted value is available via its value attribute, or via the getvalue() method.
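For example, here is a short sketch of pulling out the fields sent by the Catflucks form shown in Task 1 (substitute your own field names if your form differs):

import cgi
import cgitb
cgitb.enable()

form = cgi.FieldStorage()

# getvalue() returns None (or a default you supply) if the field is missing
img_id = form.getvalue('img_id')

# A submit button only appears in the form data if it was the one clicked,
# so membership tests reveal which button submitted the form
pressed_fluck = 'btn_fluck' in form
pressed_skip = 'btn_skip' in form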
- Refer to the examples in the Lab 6 and Lab 7 folders, but you don't have to copy the examples exactly. Applying these techniques to another bit of app functionality would be a great test of your understanding!
- Note that, while the functionality of both flucks implementations is the same, the lab-7 version has increased the Separation of Concerns. You should think about how this has been achieved, and consider the advantages of designing the application in that way (a rough sketch of the idea is given below).
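As a rough, hypothetical illustration of the idea (the function names here are invented for this sketch, not taken from the lab-7 code), keeping database access and HTML generation in separate, reusable functions leaves the request-handling code short and easy to follow:

def get_random_cat(db):
    # Data access only: return one randomly chosen image document
    # ($sample needs MongoDB 3.2+; use whichever selection method you used in Lab 6)
    return db.cats.aggregate([{'$sample': {'size': 1}}]).next()

def render_cat_form(img_id):
    # Presentation only: return the HTML form for the given image id
    return """
<form method="POST" action="/cgi-bin/serve_cat.py">
<input type="hidden" value="{}" name="img_id">
<input type="submit" name="btn_skip" value="Skip">
<input type="submit" name="btn_fluck" value="Fluck">
</form>
""".format(img_id)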
Task 3: Update the database in response to form data
In this task, you need to modify the form handling script so that it performs an operation on one or more documents in the database in response to whatever data was submitted in the HTML form.
It might be a good idea to try things out from the mongo shell first, and then translate your operation(s) into pymongo, updating the query parameter with values retrieved from the form.
In the example app, an insert() query is executed on the flucks collection with each POST request. An if/elif statement checks whether the value of either the 'Skip' or 'Fluck' submit-type input is present in the form data. If it finds btn_fluck, it sets the is_flucked field to 1; else, if it finds btn_skip, it sets it to 0.
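Below is a hedged sketch of how that might translate into pymongo. The connection details, database name and document shape are assumptions made here; check the lab-7 code for the real version, and note how ObjectId turns the hidden field's string back into a reference to the image document's _id.

import cgi
import cgitb
cgitb.enable()
from pymongo import MongoClient
from bson.objectid import ObjectId

client = MongoClient()      # assumed connection and database name
db = client['catflucks']

form = cgi.FieldStorage()
img_id = form.getvalue('img_id')

# Mirror the if/elif described above
if 'btn_fluck' in form:
    is_flucked = 1
elif 'btn_skip' in form:
    is_flucked = 0
else:
    is_flucked = None       # neither button was present in the form data

if img_id and is_flucked is not None:
    # The hidden field arrives as a plain string, so convert it back to an ObjectId
    db.flucks.insert({'img_id': ObjectId(img_id), 'is_flucked': is_flucked})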
Task 4: Deploy and test
Although you can test parts of your application without actually running your server, when it comes to testing your application's handling of a POST request, you will need to run the server script and submit the form from a browser.
You can copy the simpleServer.py script you made in Lab 5 into your working directory:
cp lab-exercises/lab-5/simpleServer.py YOUR-WORKING-REPO/simpleServer.py
Run it:
./simpleServer.py
And access the app from a browser:
http://www.doc.gold.ac.uk/usr/<ID>/cgi-bin/<SCRIPT-NAME>.py
Task 5: Commit work back to remote
Don't forget to add, commit and push your work back to the remote origin on GitLab! (Refer to the Lab 1 resources if you have forgotten how to do this.)
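If you need a reminder, the basic sequence looks something like this (the commit message is only an example; replace master with your branch name if it differs):

git add .
git commit -m "Lab 7: handle form data and update the database"
git push origin master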
Extension: Do something different!
- Can you think of something else you could do to improve or adapt Catflucks?
This could be some new piece of functionality, or an improvement to the code.
Task 6: Record a screencast
For the Peer Assessment Task, you need to be able to evidence having accomplished all of the tasks that have been given to you over the past 3 weeks. You will be doing that in the form of a screencast.
In the screencast, you will show that you have done the following:
- You are able to connect to a Virtual Server over SSH (1 mark)
- You are able to run a web server to serve a directory on your file system (1 mark)
- You are able to access your web application in a browser (1 mark)
- You are able to make a GET request to your web server, and receive a response with status code 200 (1 mark)
- A user of your application is able to trigger a POST request to your web server (1 mark)
- As a result of the POST, data in your database has changed (1 mark)
- Your application is not identical to the example. You have changed or improved it in some way. (4 marks)
Review the submission example (including the video) to see exactly what is expected from your screencast.
Please note that you are NOT required to speak over your screencast unless you want to. Furthermore, your demonstration should not depend on the viewer having audio enabled on their machine.
You will use VLC to record your screencast as an mp4. VLC is available from any Lab PC.
What follows are some instructions for using VLC from a Lab PC.
- Launch VLC and select the menu "Media/Open Capture Device"
- A new window will open where you should change Capture mode to Desktop.
- Set the Desired frame rate to 15 fps. The higher the frame rate, the smoother your video will play, but it may make the video too large to upload. 15 fps should be fine here.
- Click the "Show more options" checkbox to reveal additional settings. Set Caching to 0 ms.
- Next, click the dropdown menu that says "Play" and change it to "Convert". You choose this option because you want to encode the live desktop into a saved file rather than view it live.
- Select a Destination File. This is the name of the video file you are creating. You should be making a video file in the mp4 format, so it would make sense to call it something like "lab-7-USERNAME.mp4". Although you can click the Video drop-down menu to select a video format, the default setting, H.264 + AAC (MP4), will be fine. Note that "AAC" refers to the audio format, but your screencast may not have any audio.
- Finally, click "Start". VLC will begin recording your desktop. When you want to finish recording, click the "Stop" button on the VLC interface (or the drop-down menu on the top-right of the screen).
Task 7: Submit your screencast on the VLE
Under week 8 you will find the Lab 7 Peer Assessment.
- Upload your mp4 file and enter any supporting text in the text area.
Task 8: Review your peers
You will be able to do this from Thursday 7th December.
Please refer carefully to the assessment instructions on the VLE when performing the peer assessments.
The assessment phase is worth 20% of the total marks available for this activity.
Please note that, if you deliberately upmark your friends, your own mark may suffer, because the marks you award will differ from those awarded by other reviewers.