module: mturk.models

class mturk.models.Experiment(*args, **kwargs)

Bases: mturk.models.MtModelBase

High-level separation of HITs.

completed_id = None

identifier for determining which experiments have been completed. if two experiments share this field, then an item completed under one experiment also counts as completed under the other.

content_priority(obj)

Returns the priority to assign to object obj

cubam_dirty = None

if True, something was submitted since the last time CUBAM was run on this experiment.

examples
examples_group_attr = None

name of the attribute on each example where good and bad should be grouped together. example: if you have good and bad BRDFs for a shape, and the BRDF points to the shape with the name ‘shape’, then this field would be set to ‘shape’.

experiment_workers
external_task_url()
get_module()
get_next_by_added(*moreargs, **morekwargs)
get_next_by_updated(*moreargs, **morekwargs)
get_previous_by_added(*moreargs, **morekwargs)
get_previous_by_updated(*moreargs, **morekwargs)
has_tutorial = None

whether there is a dedicated tutorial for this task

hit_types
module = None

name of the module where functions like configure_experiments are held, usually called “<some_app>.experiments”

new_hit_settings
pending_contents
save(*args, **kwargs)
set_new_hit_settings(**kwargs)

Update the new_hit_settings member

slug = None

slug: url and filename-safe name of this experiment. the slug and variant together are unique. this is also the name used for templates.

template_dir = None

directory where the template is stored. the templates for each experiment are constructed as follows:

{template_dir}/{slug}.html              -- mturk task
{template_dir}/{slug}_inst_content.html -- instructions page (just the
                                           content)
{template_dir}/{slug}_inst.html         -- instructions (includes
                                           _inst_content.html)
{template_dir}/{slug}_tut.html          -- tutorial (if there is one)
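
The path scheme above can be sketched with a small helper (the function name is hypothetical; the actual model builds these paths internally):

```python
def experiment_template_paths(template_dir, slug, has_tutorial=False):
    # Derive the template paths documented above from template_dir and slug.
    paths = {
        "task": "%s/%s.html" % (template_dir, slug),
        "instructions_content": "%s/%s_inst_content.html" % (template_dir, slug),
        "instructions": "%s/%s_inst.html" % (template_dir, slug),
    }
    if has_tutorial:
        # Only experiments with has_tutorial have a tutorial template.
        paths["tutorial"] = "%s/%s_tut.html" % (template_dir, slug)
    return paths
```
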
template_name()
test_contents
test_contents_per_assignment = None

number of sentinel contents given to each user (not stored with the ExperimentSettings object since this is used dynamically as the assignments are created)

variant = None

variant: json-encoded data parameterizing the experiment (e.g. which environment map to use). the slug and variant together are unique. parameters like how many shapes to submit are part of the experiment settings.
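
For illustration (all values below are hypothetical), two experiments can share a slug while differing in variant, and the pair identifies the experiment:

```python
import json

# Hypothetical parameterizations; (slug, variant) is the unique key.
variant_a = json.dumps({"environment_map": "map_a"})
variant_b = json.dumps({"environment_map": "map_b"})
key_a = ("my_experiment", variant_a)
key_b = ("my_experiment", variant_b)
```
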

version = None

version number (1: unchanged from OpenSurfaces project; 2: updated in Intrinsic Images project)

class mturk.models.ExperimentExample(*args, **kwargs)

Bases: common.models.EmptyModelBase

An example shown with the experiment for illustration

content

Provides a generic relation to any object through content-type/object-id fields.

content_type
experiment
class mturk.models.ExperimentSettings(*args, **kwargs)

Bases: mturk.models.MtModelBase

Settings for creating new HITs (existing HITs do not use this data).

auto_add_hits = None

if true, automatically create HITs for this experiment when content is available and its outputs (tracked via out_content_attr) are not yet filled up

auto_approval_delay = None

time (seconds) until the task is automatically approved

content_filter = None

json-encoded dictionary of filters on the table corresponding to content_type

content_model()
content_type

type of instance sent to worker

contents_per_hit = None

number of content_type objects per hit

duration = None

time (seconds) the worker has to complete the task

experiments
feedback_bonus = None

if None, no feedback requested

frame_height = None

vertical size of the frame in pixels

get_next_by_added(*moreargs, **morekwargs)
get_next_by_updated(*moreargs, **morekwargs)
get_previous_by_added(*moreargs, **morekwargs)
get_previous_by_updated(*moreargs, **morekwargs)
hit_types
lifetime = None

time (seconds) that the task is on the market

max_active_hits = None

at most this number of hits will be live at one time

max_total_hits = None

at most this number of hits will exist in total

min_output_consensus = None

minimum number of matching results before the mode result is considered reliable

num_outputs_max = None

number of output instances expected

out_content_attr = None

attribute on the output type that points to the input content_type, e.g. ‘photo’

out_content_model()
out_content_type

type of content generated by this task

out_count_ratio = None

minimum number of outputs per input per HIT (usually 1, except for segmentation); constraint: must be >= 1

qualifications = None

json-encoded dictionary: amazon-enforced limits on worker history (e.g. assignment accept rate)

requirements = None

json-encoded dictionary: constraints on the minimum amount of work that must be done (e.g. min number of polygons or min vertices). this is not related to amazon qualifications.
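
For illustration, a requirements dictionary might be stored and decoded like this (the specific keys below are hypothetical examples, not a documented schema):

```python
import json

# Hypothetical requirement keys; the real schema is task-specific.
requirements = {"min_shapes_per_photo": 2, "min_vertices_per_shape": 3}
encoded = json.dumps(requirements)   # what would be stored in the text field
decoded = json.loads(encoded)        # decoded when validating a submission
```
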

reward = None

reward per HIT

title = None

metadata shown to workers when listing tasks on the marketplace

class mturk.models.ExperimentTestContent(*args, **kwargs)

Bases: common.models.EmptyModelBase

A sentinel object distributed to users where the answer is known

assignments
content

generic relation to the object being tested. the correct answer is assumed to be attached to this object.

content_type
experiment

experiment that this is associated with

priority = None

higher priority contents are shown first

responses
class mturk.models.ExperimentTestContentResponse(*args, **kwargs)

Bases: common.models.EmptyModelBase

A user’s response to an ExperimentTestContent

assignment

assignment where this was submitted

correct = None

did they give the correct answer?

experiment_worker

worker/experiment pair doing the test

response = None

user response

test_content

content being tested

class mturk.models.ExperimentWorker(*args, **kwargs)

Bases: common.models.EmptyModelBase

The stats for a worker and a given experiment

BLOCKED_METHODS = (('A', 'Admin'), ('T', 'Low test accuracy'))
auto_approve = None

If true, automatically approve submissions by this worker 5 minutes after they submit, with the message in auto_approve_message

auto_approve_message = None

Feedback to give when auto-approving. If blank, it will be “Thank you!”.

block(reason='', method='A', all_tasks=False, report_to_mturk=False, save=True)

Prevent a user from working on tasks in the future. Unless report_to_mturk is set, this block is local to your server and the worker’s account on MTurk is not flagged.

Parameters:
  • reason – A message to display to the user when they try and complete tasks in the future.
  • all_tasks – if True, block the worker from all experiments.
  • report_to_mturk – if True, block the worker from all experiments, and also block them from MTurk. This will flag the user’s account, so only use this for malicious users who are clearly abusing your task.
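
The escalation implied by these flags can be sketched as a pure function (a reconstruction from the parameter docs, not the actual implementation):

```python
def block_scope(all_tasks=False, report_to_mturk=False):
    # report_to_mturk implies blocking from all experiments and also
    # flagging the worker's account on MTurk itself.
    if report_to_mturk:
        return ("all_experiments", "mturk_account_flag")
    if all_tasks:
        return ("all_experiments",)
    return ("this_experiment",)
```
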
blocked = None

block user (only locally; not on mturk)

blocked_method = None

method for setting block

blocked_reason = None

reason for blocking – message to be displayed to the user

experiment

Experiment being done

get_blocked_method_display(*moreargs, **morekwargs)
num_test_correct = None

total number of correct sentinel answers

num_test_incorrect = None

total number of incorrect sentinel answers

set_auto_approve(message='', save=True)
test_accuracy_str()

Helper for templates

test_content_responses
tutorial_completed = None

If the experiment has a tutorial, this records whether the tutorial was completed.

worker

Worker performing the experiment

class mturk.models.MtAssignment(*args, **kwargs)

Bases: mturk.models.MtModelBase

An assignment is a worker assigned to a HIT. NOTE: Do not create this directly – instead call sync_status() on an MtHit object

ASSIGNMENT_STATUSES = (('S', 'Submitted'), ('A', 'Approved'), ('R', 'Rejected'))
accept_time = None

set by Amazon and updated using sync_status

approval_time = None

set by Amazon and updated using sync_status

approve(feedback=None, handle_bonus=True, save=True)

Send command to Amazon approving this assignment

auto_approval_time = None

set by Amazon and updated using sync_status

bonus = None

bonus for good job (sum of all bonuses given)

bonus_message = None

message(s) given to the user after different operations. if multiple messages are sent, they are separated by ‘\n’.

deadline = None

set by Amazon and updated using sync_status

experiment_worker()

Returns the ExperimentWorker associated with this assignment

get_next_by_added(*moreargs, **morekwargs)
get_next_by_updated(*moreargs, **morekwargs)
get_previous_by_added(*moreargs, **morekwargs)
get_previous_by_updated(*moreargs, **morekwargs)
get_status_display(*moreargs, **morekwargs)
grant_bonus(price, reason, save=True)
grant_feedback_bonus(save=True)

Give a bonus for submitting feedback

has_feedback = None

updated by our server

hit
id = None

use the Amazon-provided ID as our ID

manually_rejected = None

if true, then this HIT was manually rejected and should not be un-rejected

needs_feedback_bonus()

True if a bonus is deserved but has not yet been given

num_test_contents = None

number of test_contents. None: not yet prepared.

num_test_correct = None

number of sentinel correct answers

num_test_incorrect = None

number of sentinel incorrect answers

post_data = None

json-encoded request.POST dictionary from last submit.

post_meta = None

json-encoded request.META dictionary from last submit. see: https://docs.djangoproject.com/en/dev/ref/request-response/

reject(feedback=None, force=False, save=True)

Send command to Amazon rejecting this assignment

Parameters:
  • feedback – message shown to the user
  • force – if the user has always_approve=True, then this method will not actually reject the assignment unless force=True
  • save – whether to save the result in the database (should always be True unless you are already saving the model again shortly after).
rejection_time = None

set by Amazon and updated using sync_status

save(*args, **kwargs)
screen_height = None

user screen size

screen_width = None

user screen size

status_class_css()
status_str()
status_to_str = {'A': 'Approved', 'S': 'Submitted', 'R': 'Rejected'}
str_to_attr = {'ApprovalTime': 'approval_time', 'SubmitTime': 'submit_time', 'AutoApprovalTime': 'auto_approval_time', 'AcceptTime': 'accept_time', 'Deadline': 'deadline', 'RejectionTime': 'rejection_time'}
str_to_status = {'Approved': 'A', 'Submitted': 'S', 'Rejected': 'R'}
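
These lookup tables suggest how sync_status() copies Amazon fields onto the model. A minimal sketch, using hypothetical stand-ins for the model and the boto Assignment object:

```python
STR_TO_ATTR = {
    'SubmitTime': 'submit_time', 'AcceptTime': 'accept_time',
    'ApprovalTime': 'approval_time', 'AutoApprovalTime': 'auto_approval_time',
    'RejectionTime': 'rejection_time', 'Deadline': 'deadline',
}
STR_TO_STATUS = {'Submitted': 'S', 'Approved': 'A', 'Rejected': 'R'}

class FakeBotoAssignment(object):
    # Stand-in for boto.mturk.connection.Assignment; only some of the
    # possible Amazon attributes are present on any given instance.
    AssignmentStatus = 'Approved'
    SubmitTime = '2013-01-01T00:00:00Z'
    ApprovalTime = '2013-01-02T00:00:00Z'

class FakeModel(object):
    pass

def sync_fields(model, data):
    # Copy whichever Amazon attributes are present on `data` to the model.
    for amazon_name, attr in STR_TO_ATTR.items():
        if hasattr(data, amazon_name):
            setattr(model, attr, getattr(data, amazon_name))
    model.status = STR_TO_STATUS[data.AssignmentStatus]

m = FakeModel()
sync_fields(m, FakeBotoAssignment())
```
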
submission_complete = None

If True, then the async task (mturk.tasks.mturk_submit_task) has finished processing what the user submitted. Note that there is a period of time (sometimes up to an hour depending on the celery queue length) where the assignment is submitted (status == 'S') but the responses are not inserted into the database.
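
A guard like the following (the stub and helper names are hypothetical) captures the window described above:

```python
class AssignmentStub(object):
    # Minimal stand-in for MtAssignment with the two relevant fields.
    def __init__(self, status, submission_complete):
        self.status = status
        self.submission_complete = submission_complete

def in_processing_window(assignment):
    # True while Amazon has the submission (status 'S') but the async
    # task has not yet inserted the responses into the database.
    return assignment.status == 'S' and not assignment.submission_complete
```
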

submit_time = None

set by Amazon and updated using sync_status

submitted_contents
sync_status(data)

data: instance of boto.mturk.connection.Assignment

test_accuracy_str()

Helper for templates

test_content_responses
test_content_responses_prefetch()
test_contents

sentinel test contents

time_active_ms = None

estimate of the time spent doing the HIT, excluding time where the user is in another window

time_active_percent()
time_active_s()

Helper for templates

time_load_ms = None

estimate of how long the page took to load; note that this ignores server response time, so this will always be ~300ms smaller than reality.

time_load_s()

Helper for templates

time_ms = None

estimate of the time spent doing the HIT

time_s()

Helper for templates

user_agent = None

user-agent string from last submit.

user_agent_parsed()
wage = None

estimate of the wage from this HIT

worker
class mturk.models.MtHit(*args, **kwargs)

Bases: mturk.models.MtModelBase

MTurk HIT (Human Intelligence Task); corresponds to an MTurk HIT object

HIT_STATUSES = (('A', 'Assignable'), ('U', 'Unassignable'), ('R', 'Reviewable'), ('E', 'Reviewing'), ('D', 'Disposed'))
REVIEW_STATUSES = (('N', 'NotReviewed'), ('M', 'MarkedForReview'), ('A', 'ReviewedAppropriate'), ('I', 'ReviewedInappropriate'))
all_submitted_assignments = None

if True, all assignments have been submitted (useful for filtering)

any_submitted_assignments = None

if True, at least one assignment has been submitted (useful for filtering)

assignments
compatible_count = None

number of people who viewed this HIT and could have accepted it

contents
dispose(data=None)

Dispose this HIT – finalize all approve/reject decisions

expire(data=None)

Expire this HIT – no new workers can accept this HIT, but existing workers can finish

get_aws_hit(connection=None)
get_hit_status_display(*moreargs, **morekwargs)
get_next_by_added(*moreargs, **morekwargs)
get_next_by_updated(*moreargs, **morekwargs)
get_previous_by_added(*moreargs, **morekwargs)
get_previous_by_updated(*moreargs, **morekwargs)
get_review_status_display(*moreargs, **morekwargs)
hit_status_to_str = {'A': 'Assignable', 'E': 'Reviewing', 'R': 'Reviewable', 'U': 'Unassignable', 'D': 'Disposed'}
hit_type
id = None

use Amazon’s id

incompatible_count = None

number of people who viewed this HIT but could not accept (e.g. no WebGL)

num_contents = None

cache the number of contents

out_count_ratio = None

minimum number of objects we expect to generate per input content

pending_contents
review_status_to_str = {'A': 'ReviewedAppropriate', 'I': 'ReviewedInappropriate', 'M': 'MarkedForReview', 'N': 'NotReviewed'}
save(*args, **kwargs)
str_to_attr = {'MaxAssignments': 'max_assignments', 'NumberOfAssignmentsCompleted': 'num_assignments_completed', 'NumberOfAssignmentsPending': 'num_assignments_pending', 'expired': 'expired', 'LifetimeInSeconds': 'lifetime', 'NumberOfAssignmentsAvailable': 'num_assignments_available'}

dictionary converting MTurk attr names to model attr names

str_to_hit_status = {'Disposed': 'D', 'Assignable': 'A', 'Reviewable': 'R', 'Unassignable': 'U', 'Reviewing': 'E'}
str_to_review_status = {'MarkedForReview': 'M', 'ReviewedAppropriate': 'A', 'ReviewedInappropriate': 'I', 'NotReviewed': 'N'}
sync_status(hit=None, sync_assignments=True)

Set this instance status to match the Amazon status.

class mturk.models.MtHitContent(*args, **kwargs)

Bases: common.models.EmptyModelBase

An object attached to a HIT for labeling, e.g. a photo or shape

content

Provides a generic relation to any object through content-type/object-id fields.

content_type

generic relation to an object to be shown (e.g. Photo, Shape)

hit

the HIT that contains this object

class mturk.models.MtHitQualification(*args, **kwargs)

Bases: mturk.models.MtModelBase

Represents a qualification required to start a task

get_next_by_added(*moreargs, **morekwargs)
get_next_by_updated(*moreargs, **morekwargs)
get_previous_by_added(*moreargs, **morekwargs)
get_previous_by_updated(*moreargs, **morekwargs)
hit_type
name = None

either a predefined qualification name or the slug of an MtQualification

to_boto()
class mturk.models.MtHitRequirement(*args, **kwargs)

Bases: mturk.models.MtModelBase

Contains a requirement that needs to be met for a task to be considered complete. Examples: min shapes per photo, min vertices per shape, min total vertices.

get_next_by_added(*moreargs, **morekwargs)
get_next_by_updated(*moreargs, **morekwargs)
get_previous_by_added(*moreargs, **morekwargs)
get_previous_by_updated(*moreargs, **morekwargs)
hit_type
class mturk.models.MtHitType(*args, **kwargs)

Bases: mturk.models.MtModelBase

Contains the metadata for a HIT (corresponds to a MTurk HITType)

experiment

HIT metadata

experiment_settings

Other HIT settings

external_url = None

external question info

feedback_bonus = None

bonus for giving feedback

frame_height = None

external question info

get_external_question()
get_next_by_added(*moreargs, **morekwargs)
get_next_by_updated(*moreargs, **morekwargs)
get_previous_by_added(*moreargs, **morekwargs)
get_previous_by_updated(*moreargs, **morekwargs)
hits
id = None

Amazon MTurk fields (fields that are mirrored on the MT database)

qualifications
requirements
save(*args, **kwargs)
class mturk.models.MtModelBase(*args, **kwargs)

Bases: common.models.EmptyModelBase

class Meta
abstract = False
MtModelBase.get_next_by_added(*moreargs, **morekwargs)
MtModelBase.get_next_by_updated(*moreargs, **morekwargs)
MtModelBase.get_previous_by_added(*moreargs, **morekwargs)
MtModelBase.get_previous_by_updated(*moreargs, **morekwargs)
class mturk.models.MtQualification(*args, **kwargs)

Bases: mturk.models.MtModelBase

Custom qualification defined by us.

active = None

whether status is Active or Inactive

assignments
auto_granted = None

Specifies that requests for the Qualification type are granted immediately, without prompting the Worker with a Qualification test.

auto_granted_value = None

value to set when auto-granting

description = None

A long description for the Qualification type.

get_next_by_added(*moreargs, **morekwargs)
get_next_by_updated(*moreargs, **morekwargs)
get_previous_by_added(*moreargs, **morekwargs)
get_previous_by_updated(*moreargs, **morekwargs)
id = None

MTurk id

keywords = None

One or more words or phrases that describe the Qualification type, separated by commas. The keywords make the type easier to find using a search.

name = None

The name of the Qualification type. The type name is used to identify the type, and to find the type using a Qualification type search.

retry_delay = None

The amount of time, in seconds, Workers must wait after taking the Qualification test before they can take it again. Workers can take a Qualification test multiple times if they were not granted the Qualification from a previous attempt, or if the test offers a gradient score and they want a better score.

save(*args, **kwargs)
class mturk.models.MtQualificationAssignment(*args, **kwargs)

Bases: mturk.models.MtModelBase

MtQualificationAssignment(id, added, updated, qualification_id, worker_id, value, granted, num_correct, num_incorrect)

get_next_by_added(*moreargs, **morekwargs)
get_next_by_updated(*moreargs, **morekwargs)
get_previous_by_added(*moreargs, **morekwargs)
get_previous_by_updated(*moreargs, **morekwargs)
granted = None

if False, mturk does not know about this record

num_correct = None

if this was a test, their score

qualification
revoke(reason=None, save=True)
set_value(value=1, save=True)
value = None

integer value assigned to user

worker
class mturk.models.MtSubmittedContent(*args, **kwargs)

Bases: common.models.EmptyModelBase

Wrapper around an object submitted for a HIT assignment

assignment

the HIT Assignment that contains this object

content

Provides a generic relation to any object through content-type/object-id fields.

content_type

generic relation to the submitted object (e.g. SubmittedShape)

class mturk.models.PendingContent(*args, **kwargs)

Bases: common.models.EmptyModelBase

A generic wrapper that keeps track of how many outputs need to be generated for an object/experiment pair, and how many are scheduled for future generation. Right now, outputs are generated only by HITs.

content

Provides a generic relation to any object through content-type/object-id fields.

content_type

generic relation to the object (e.g. Photo, MaterialShape) being studied

experiment

experiment that will be run on this object

hits

HITs that are/were scheduled to generate more outputs (can be expired)

num_outputs_completed = None

number of outputs completed so far

num_outputs_max = None

maximum number of outputs we will need. set to 0 if this does not pass the filter.

num_outputs_scheduled = None

number of outputs that are scheduled to be completed. as HIT assignments are submitted, this number is updated.
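
The relationship between these counters and num_to_schedule() can be sketched as follows (a reconstruction from the field docs above, not the actual implementation):

```python
def num_to_schedule(num_outputs_max, num_outputs_completed, num_outputs_scheduled):
    # Outputs still needed, minus those already promised by scheduled
    # HITs; never negative. num_outputs_max == 0 means the content did
    # not pass the filter, so nothing is scheduled.
    return max(0, num_outputs_max - num_outputs_completed - num_outputs_scheduled)
```
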

num_to_schedule()
priority = None

contents are sorted by num_outputs_max, then priority

mturk.models.get_or_create_hit_type(**kwargs)

Returns a HIT type and also manages attaching the requirements list

mturk.models.get_or_create_hit_type_from_experiment(experiment)

Creates a MtHitType from an ExperimentSettings object

mturk.models.pending_content_hit_expired(sender, instance, **kwargs)

Updates pending content when a HIT is expired

mturk.models.pending_content_marked_invalid(sender, instance, **kwargs)