Bases: mturk.models.MtModelBase
High-level separation of HITs.
identifier for determining which experiments have been completed (if two experiments share this field, then an item completed under one experiment will count as completed under the other experiment)
Returns the priority to assign to object obj
if True, something was submitted since the last time CUBAM was run on this experiment.
name of the attribute on each example where good and bad should be grouped together. example: if you have good and bad BRDFs for a shape, and the BRDF points to the shape with the name ‘shape’, then this field would be set to ‘shape’.
whether there is a dedicated tutorial for this task
name of the module where functions like configure_experiments are held, usually called “<some_app>.experiments”
Update the new_hit_settings member
slug: url and filename-safe name of this experiment. the slug and variant together are unique. this is also the name used for templates.
directory where the template is stored. the templates for each experiment are constructed as follows:
{template_dir}/{slug}.html -- mturk task
{template_dir}/{slug}_inst_content.html -- instructions page (just the content)
{template_dir}/{slug}_inst.html -- instructions (includes _inst_content.html)
{template_dir}/{slug}_tut.html -- tutorial (if there is one)
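A minimal sketch of how these template paths could be assembled from template_dir and slug; the function name and the example arguments are hypothetical, not part of the models.

import os

def experiment_template_paths(template_dir, slug):
    # illustrative only: builds the four filenames listed above
    return {
        'task': os.path.join(template_dir, '%s.html' % slug),
        'instructions_content': os.path.join(template_dir, '%s_inst_content.html' % slug),
        'instructions': os.path.join(template_dir, '%s_inst.html' % slug),
        'tutorial': os.path.join(template_dir, '%s_tut.html' % slug),
    }

# e.g. experiment_template_paths('mturk/experiments', 'label_shapes')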
number of sentinel contents given to each user (not stored with the ExperimentSettings object since this is used dynamically as the assignments are created)
variant: json-encoded data parameterizing the experiment (e.g. which environment map to use). the slug and variant together are unique. parameters like how many shapes to submit are part of the experiment settings.
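For illustration, a hypothetical slug/variant pair; the actual keys inside variant depend entirely on the experiment being parameterized.

import json

slug = 'judge_brdf'                        # hypothetical experiment slug
variant = json.dumps({'envmap': 'ennis'})  # hypothetical parameterization
# slug and variant are unique together, so the same slug can be reused with a
# different variant to create a second, separately tracked experiment.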
Bases: common.models.EmptyModelBase
An example shown with the experiment for illustration
Provides a generic relation to any object through content-type/object-id fields.
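This is the standard Django content-types pattern; a minimal sketch follows (the model name is hypothetical, and the imports follow current Django, which may differ from the version this project uses).

from django.contrib.contenttypes.fields import GenericForeignKey
from django.contrib.contenttypes.models import ContentType
from django.db import models

class ExampleRelation(models.Model):
    # the two concrete columns backing the relation
    content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE)
    object_id = models.PositiveIntegerField()
    # generic relation resolving to any model instance (e.g. a Photo or Shape)
    content_object = GenericForeignKey('content_type', 'object_id')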
Bases: mturk.models.MtModelBase
Settings for creating new HITs (existing HITs do not use this data).
if true, automatically instantiate HITs for this experiment when content is available and instances of out_content_attr are not filled up
time (seconds) until the task is automatically approved
json-encoded dictionary of filters on the table corresponding to content_type
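A hypothetical example of this field; the keys are assumed to be interpreted as queryset filter arguments on the content_type model (the exact semantics live in the experiment module, and the variable name below is only illustrative).

import json

# hypothetical filter keys/values
contents_filter = json.dumps({'synthetic': False, 'scene_category__name': 'kitchen'})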
type of instance sent to worker
number of content_type objects per hit
time (seconds) the worker has to complete the task
if None, no feedback requested
vertical size of the frame in pixels
time (seconds) that the task is on the market
at most this number of hits will be live at one time
at most this number of hits will exist in total
minimum number of similar results before the mode result is considered reliable
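As a sketch of what this threshold means (the names here are hypothetical, not model methods): the mode of the collected responses is only trusted once enough workers agree.

from collections import Counter

def reliable_mode(responses, min_agree):
    # illustrative only: return the most common response if at least
    # min_agree workers gave it, otherwise None (not yet reliable)
    if not responses:
        return None
    value, count = Counter(responses).most_common(1)[0]
    return value if count >= min_agree else None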
number of output instances expected
attr on the output type that points to the input content_type e.g. ‘photo’
type of content generated by this task
minimum number of outputs per input per HIT (usually 1, except for segmentation). constraint: must be >= 1
json-encoded dictionary: amazon-enforced limits on worker history (e.g. assignment accept rate)
json-encoded dictionary: constraints on the minimum amount of work that must be done (e.g. min number of polygons or min vertices). this is not related to amazon qualifications.
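Hypothetical contents for these two json-encoded fields; the exact keys depend on how the experiment module and the Amazon-side requirements are configured.

import json

# amazon-enforced worker-history limits
qualifications = json.dumps({'PercentAssignmentsApprovedRequirement': 95})

# local minimum-work constraints (not amazon qualifications)
requirements = json.dumps({'min_shapes_per_photo': 6, 'min_vertices_per_shape': 4})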
reward per HIT
metadata shown to workers when listing tasks on the marketplace
Bases: common.models.EmptyModelBase
A sentinel object distributed to users where the answer is known
generic relation to the object being tested. the correct answer is assumed to be attached to this object.
experiment that this is associated with
higher priority contents are shown first
Bases: common.models.EmptyModelBase
A user’s response to an ExperimentTestContent
assignment where this was submitted
did they give the correct answer?
worker/experiment pair doing the test
user response
content being tested
Bases: common.models.EmptyModelBase
The stats for a worker and a given experiment
If true, automatically approve submissions by this worker 5 minutes after they submit, with the message in auto_approve_message
Feedback to give when auto-approving. If blank, it will be “Thank you!”.
Prevent a user from working on tasks in the future. Unless report_to_mturk is set, this is only local to your server and the worker’s account on mturk is not flagged.
Parameters:
block user (only locally; not on mturk)
method for setting block
reason for blocking – message to be displayed to the user
Experiment being done
total number of correct sentinel answers
total number of incorrect sentinel answers
Helper for templates
If the experiment has a tutorial, this records whether the tutorial was completed.
Worker performing the experiment
Bases: mturk.models.MtModelBase
An assignment is a worker assigned to a HIT. NOTE: Do not create this directly – instead call sync_status() on an MtHit object
set by Amazon and updated using sync_status
set by Amazon and updated using sync_status
Send command to Amazon approving this assignment
set by Amazon and updated using sync_status
bonus for good job (sum of all bonuses given)
message(s) given to the user after different operations. if multiple messages are sent, they are separated by ‘\n’.
set by Amazon and updated using sync_status
Returns the ExperimentWorker associated with this assignment
Give a bonus for submitting feedback
updated by our server
use the Amazon-provided ID as our ID
if true, then this HIT was manually rejected and should not be un-rejected
True if a bonus is deserved but none has been given
number of test_contents. None: not yet prepared.
number of sentinel correct answers
number of sentinel incorrect answers
json-encoded request.POST dictionary from last submit.
json-encoded request.META dictionary from last submit. see: https://docs.djangoproject.com/en/dev/ref/request-response/
Send command to Amazon approving this assignment
set by Amazon and updated using sync_status
user screen size
user screen size
If True, then the async task (mturk.tasks.mturk_submit_task) has finished processing what the user submitted. Note that there is a period of time (sometimes up to an hour depending on the celery queue length) where the assignment is submitted (status == 'S') but the responses are not inserted into the database.
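For example, assignments that Amazon reports as submitted but whose responses are still waiting on the async task could be found with a query like the sketch below (the model name MtAssignment and the boolean field name are assumptions).

# hypothetical field names; adjust to the actual model definitions
stalled = MtAssignment.objects.filter(status='S', submission_complete=False)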
set by Amazon and updated using sync_status
data: instance of boto.mturk.connection.Assignment
Helper for templates
sentinel test contents
estimate of the time spent doing the HIT, excluding time where the user is in another window
Helper for templates
estimate of how long the page took to load; note that this ignores server response time, so this will always be ~300ms smaller than reality.
Helper for templates
estimate of the time spent doing the HIT
Helper for templates
user-agent string from last submit.
estimate of the wage from this HIT
Bases: mturk.models.MtModelBase
MTurk HIT (Human Intelligence Task, corresponds to an MTurk HIT object)
if True, all assignments have been submitted (useful for filtering)
if True, at least one assignment has been submitted (useful for filtering)
number of people who viewed this HIT and could have accepted it
Dispose this HIT – finalize all approve/reject decisions
Expire this HIT – no new workers can accept this HIT, but existing workers can finish
use Amazon’s id
number of people who viewed this HIT but could not accept (e.g. no WebGL)
cache the number of contents
minimum number of objects we expect to generate per input content
dictionary converting MTurk attr names to model attr names
Set this instance status to match the Amazon status.
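A hedged usage sketch: periodically syncing live HITs pulls down the Amazon-side state and, per the note above, is also how assignment records get created/updated. The queryset filter shown here is an assumption, not a documented field.

# hypothetical filter; sync_status() is the documented entry point
for hit in MtHit.objects.filter(expired=False):
    hit.sync_status()  # updates status fields and creates/updates assignments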
Bases: common.models.EmptyModelBase
An object attached to a HIT for labeling, e.g. a photo or shape
Provides a generic relation to any object through content-type/object-id fields.
generic relation to an object to be shown (e.g. Photo, Shape)
the HIT that contains this object
Bases: mturk.models.MtModelBase
Represents a qualification required to start a task
either a predefined name or a slug for MtQualification
Bases: mturk.models.MtModelBase
Contains a requirement that needs to be met for a task to be considered complete. Example: min shapes per photo, min vertices per shape, min total vertices
Bases: mturk.models.MtModelBase
Contains the metadata for a HIT (corresponds to a MTurk HITType)
HIT metadata
Other HIT settings
external question info
bonus for giving feedback
external question info
Amazon MTurk fields (fields that are mirrored on the MT database)
Bases: common.models.EmptyModelBase
Bases: mturk.models.MtModelBase
Custom qualification defined by us.
whether status is Active or Inactive
Specifies that requests for the Qualification type are granted immediately, without prompting the Worker with a Qualification test.
value to set when auto-granting
A long description for the Qualification type.
MTurk id
One or more words or phrases that describe the Qualification type, separated by commas. The Keywords make the type easier to find using a search.
The name of the Qualification type. The type name is used to identify the type, and to find the type using a Qualification type search.
The amount of time, in seconds, Workers must wait after taking the Qualification test before they can take it again. Workers can take a Qualification test multiple times if they were not granted the Qualification from a previous attempt, or if the test offers a gradient score and they want a better score.
Bases: mturk.models.MtModelBase
MtQualificationAssignment(id, added, updated, qualification_id, worker_id, value, granted, num_correct, num_incorrect)
if False, mturk does not know about this record
if this was a test, their score
integer value assigned to user
Bases: common.models.EmptyModelBase
Wrapper around an object submitted for a HIT assignment
the HIT Assignment that contains this object
Provides a generic relation to any object through content-type/object-id fields.
generic relation to the submitted object (e.g. SubmittedShape)
Bases: common.models.EmptyModelBase
A generic wrapper that keeps track of how many outputs need to be generated for an object/experiment pair, and how many are scheduled for future generation. Right now, outputs are generated only by HITs.
Provides a generic relation to any object through content-type/object-id fields.
generic relation to the object (e.g. Photo, MaterialShape) being studied
experiment that will be run on this object
HITs that are/were scheduled to generate more outputs (can be expired)
number of outputs completed so far
maximum number of outputs we will need. set to 0 if this does not pass the filter.
number of outputs that are scheduled to be completed. as HIT assignments are submitted, this number is updated.
contents are sorted by num_outputs_max, then priority
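An illustrative helper (not part of the model) showing how these counters relate; only num_outputs_max is named in the docs above, so the other field names are assumptions.

def outputs_still_needed(pending_content):
    # how many more outputs must still be scheduled for this object/experiment
    return max(0, pending_content.num_outputs_max
                  - pending_content.num_outputs_completed
                  - pending_content.num_outputs_scheduled)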
Returns a HIT type and also manages attaching the requirements list
Creates a MtHitType from an ExperimentSettings object
Updates pending content when a HIT is expired