Commands

This section provides more detailed documentation of the various MTurk management commands. All commands must be run from the server/ directory.

./manage.py mtconfigure

Takes the settings from each <app>/experiments.py file and stores them in the database (as mturk.models.Experiment instances). It also sets up any MTurk qualifications.
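Conceptually this is an update-or-create keyed on each experiment's slug. A minimal pure-Python sketch, with a plain dict standing in for the database and a hypothetical input shape (not the actual format of <app>/experiments.py):

```python
def configure(app_settings, db):
    """Store each app's experiment settings under its slug, mimicking
    an update-or-create on mturk.models.Experiment.

    app_settings: {app_name: {slug: settings_dict}} -- an illustrative
    shape, not the project's real schema.
    """
    for app, experiments in app_settings.items():
        for slug, settings in experiments.items():
            record = db.setdefault(slug, {})  # create if missing
            record.update(settings)           # overwrite changed values
            record["app"] = app               # remember the defining app
    return db
```

In this sketch, re-running configure with changed settings overwrites the stored values for the same slug, which matches the idea that mtconfigure can be re-run after editing experiments.py.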

./manage.py mtconsume

Dispatch all pending content to the marketplace and create new HITs.

With our MTurk platform, tasks are dispatched in a two-step process.

First, mturk.models.PendingContent objects are created for each object that might be labeled (e.g. a photo). PendingContent instances track the priority of each object, what has been scheduled, and how many tasks need to be put on MTurk.

Second, mtconsume searches for PendingContent instances that have not yet been scheduled on MTurk (or that need more responses) and then creates HITs (stored locally as mturk.models.MtHit instances).
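The two-step process above can be sketched with plain-Python stand-ins for the models; the field names (priority, num_outputs_needed, scheduled) are illustrative, not the project's actual schema:

```python
from dataclasses import dataclass

@dataclass
class PendingContent:
    obj_id: int              # the object to be labeled (e.g. a photo)
    priority: float          # higher priority is dispatched first
    num_outputs_needed: int  # responses still required
    scheduled: bool = False

@dataclass
class MtHit:
    obj_ids: list

def consume(pending, batch_size=3):
    """Step 2: turn unscheduled PendingContent into HITs,
    highest priority first, batch_size objects per HIT."""
    todo = sorted(
        (p for p in pending if not p.scheduled and p.num_outputs_needed > 0),
        key=lambda p: -p.priority,
    )
    hits = []
    for i in range(0, len(todo), batch_size):
        batch = todo[i:i + batch_size]
        hits.append(MtHit(obj_ids=[p.obj_id for p in batch]))
        for p in batch:
            p.scheduled = True
    return hits
```

Because scheduling state lives on PendingContent, running consume a second time creates no duplicate HITs for already-scheduled objects.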

./manage.py mtapprove_loop '<regex>'

Instantly approve all submissions the moment they arrive. In a loop, this script finds all experiments whose slug matches the given regex and approves all submitted assignments.

I suggest running this script whenever you have an experiment that has sentinel objects (secret items with known answers), since workers greatly appreciate speedy approval.
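Each iteration of the loop amounts to the following sketch; the data shapes are illustrative stand-ins, and the real command queries the database and calls the MTurk approval API:

```python
import re

def approve_pass(experiments, pattern):
    """One iteration of the approval loop: approve every submitted
    assignment in experiments whose slug matches the regex."""
    slug_re = re.compile(pattern)
    approved = []
    for exp in experiments:
        if not slug_re.match(exp["slug"]):
            continue
        for assignment in exp["assignments"]:
            if assignment["status"] == "Submitted":
                assignment["status"] = "Approved"
                approved.append(assignment["id"])
    return approved
```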

./manage.py mtexpire '<regex>'

Expire all experiments whose slug matches a regex. When a HIT is expired, any current workers may finish, but no new workers may start the HIT.

This script expires experiments in decreasing order of pay.
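The selection and ordering can be sketched as follows (the slug and reward fields are illustrative; the real command calls MTurk's expire operation for each HIT):

```python
import re

def expiration_order(experiments, pattern):
    """Select experiments whose slug matches the regex and order
    them for expiration, highest-paying first."""
    slug_re = re.compile(pattern)
    matching = [e for e in experiments if slug_re.match(e["slug"])]
    return sorted(matching, key=lambda e: -e["reward"])
```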

./manage.py mtsync

Synchronize our local information about HITs and assignments with the Amazon database.

This will mark the local copy of a HIT as disposed (hit_status='D') if it does not exist on Amazon’s servers, and it will disable any HIT on Amazon that is not found locally.
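The two-way reconciliation can be sketched as a pure function over HIT ids; field names are illustrative, and the actual disabling happens through the MTurk API:

```python
def sync(local_hits, remote_ids):
    """Reconcile local HIT records against the set of HIT ids that
    still exist on Amazon. Marks vanished HITs as disposed locally
    and returns the remote-only ids to disable on Amazon."""
    for hit_id, record in local_hits.items():
        if hit_id not in remote_ids:
            record["hit_status"] = "D"  # disposed: gone from Amazon
    # HITs that exist on Amazon but have no local record
    return sorted(remote_ids - set(local_hits))
```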

./manage.py mtcubam

Update all labels for the experiments that use CUBAM to aggregate binary answers. Since CUBAM is expensive and can take hours to run if you have millions of labels, it only runs on experiments that are marked as “dirty”.

To force a re-run, mark the corresponding mturk.models.Experiment instance as dirty by running the following in a Python shell (you can start one with ./manage.py shell_plus):

Experiment.objects.filter(slug='SLUG').update(cubam_dirty=True)

where SLUG is the human-readable ID for your project.

./manage.py mtbalance

Print the current account balance.