Parameters:
- slug – unique human-readable ID (must be a valid Python variable
name). The slug and variant together are unique.
- variant – optional string that may be used to include multiple
variations of the same experiment, where the same template and user
interface are used across all variants. The slug and variant
together are unique.
Example: you want to perform object labeling with different lists of
allowed object names (see shapes.experiments for this example).
- completed_id – optional string that may be used in place of slug
when determining whether an experiment has been completed. If two
experiments share this field, then an item completed under one
experiment will count as completed under the other experiment.
- template_dir – directory for templates, usually '<app>/experiments'.
The templates for each experiment are constructed as follows:
{template_dir}/{slug}.html -- mturk task
{template_dir}/{slug}_inst_content.html -- instructions page (just the
content)
{template_dir}/{slug}_inst.html -- instructions (includes
_inst_content.html)
{template_dir}/{slug}_tut.html -- tutorial (if there is one)
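The path convention above can be sketched as a small helper (hypothetical; the function name and dictionary keys are illustrative assumptions, not from the actual codebase):

```python
def experiment_templates(template_dir, slug):
    """Build the four template paths for an experiment slug.

    Sketch only: the helper name and dict keys are assumptions; the
    path layout follows the convention documented above.
    """
    return {
        'task': '%s/%s.html' % (template_dir, slug),
        'inst_content': '%s/%s_inst_content.html' % (template_dir, slug),
        'inst': '%s/%s_inst.html' % (template_dir, slug),
        'tut': '%s/%s_tut.html' % (template_dir, slug),
    }
```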
- module – module containing the experiments.py file, usually
'<app>.experiments'
- examples_group_attr – the attribute used to group examples together.
Example: if you have good and bad BRDFs for a shape, and the BRDF
points to the shape with the name 'shape', then this field would be
set to 'shape'.
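To illustrate how examples_group_attr might group examples, here is a sketch on plain dicts (the records and attribute values below are made up; the real code operates on model instances):

```python
from itertools import groupby

examples_group_attr = 'shape'  # the grouping attribute from the config
examples = [  # made-up example records
    {'shape': 'teapot', 'good': True},
    {'shape': 'teapot', 'good': False},
    {'shape': 'bunny', 'good': True},
]

# Sort first so groupby sees all records with the same key consecutively
keyfunc = lambda e: e[examples_group_attr]
grouped = {k: list(v)
           for k, v in groupby(sorted(examples, key=keyfunc), key=keyfunc)}
```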
- version – should be the value 2 (note that 1 is for the
original OpenSurfaces publication).
- reward – payment per HIT, as an instance of decimal.Decimal
- num_outputs_max – the number of output items that each input item
will produce. Usually this is 1. An example of another value: for
OpenSurfaces material segmentation, 1 photo will produce 6
segmentations.
- contents_per_hit – the number of contents to include in each HIT
- test_contents_per_assignment – if specified, the number of
secret test items to be added (on top of contents_per_hit)
to each HIT.
- has_tutorial – True if this experiment has a special
tutorial (see intrinsic/experiments.py for an example).
- content_type_model – the model class for input content (content that
is shown to the user)
- out_content_type_model – the model class for output (user responses)
- out_content_attr – on the output model class, the name of the
attribute that gives the input for that output. For example,
for a material segmentation, a Photo is the input and a
SubmittedShape is the output, and SubmittedShape.photo
gives the input photo.
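Resolving the input from an output object is then just an attribute lookup by name; here is a sketch with stand-in namedtuples (the real Photo and SubmittedShape are Django models):

```python
from collections import namedtuple

# Stand-ins for the real Django models (illustrative only)
Photo = namedtuple('Photo', ['id'])
SubmittedShape = namedtuple('SubmittedShape', ['photo'])

out_content_attr = 'photo'
photo = Photo(id=1)
shape = SubmittedShape(photo=photo)

# getattr with the configured attribute name recovers the input content
input_content = getattr(shape, out_content_attr)
```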
- content_filter – a dictionary of filters applied to the input
content to determine which items should be labeled.
Example for labeling BRDFs:
{
'invalid': False,
'pixel_area__gt': Shape.MIN_PIXEL_AREA,
'num_vertices__gte': 10,
'correct': True,
'substance__isnull': False,
'substance__fail': False,
'photo__whitebalanced': True,
'photo__scene_category_correct': True,
}
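The keys use Django field-lookup syntax, so the dictionary would presumably be expanded into a queryset filter, e.g. content_type_model.objects.filter(**content_filter). The semantics of a few common lookups can be sketched in plain Python (an illustration of the lookup syntax only, not the Django ORM itself):

```python
def matches(record, lookups):
    """Evaluate a few Django-style field lookups against a plain dict.

    Sketch of the semantics only; the real code would use the Django ORM.
    """
    for key, expected in lookups.items():
        parts = key.split('__')
        # Peel off a trailing operator if present; default is exact match
        op = parts.pop() if parts[-1] in ('gt', 'gte', 'isnull') else 'exact'
        value = record
        for part in parts:  # follow nested relations, e.g. photo__whitebalanced
            value = value[part]
        if op == 'exact' and value != expected:
            return False
        if op == 'gt' and not value > expected:
            return False
        if op == 'gte' and not value >= expected:
            return False
        if op == 'isnull' and (value is None) != expected:
            return False
    return True
```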
- title – string shown in the MTurk marketplace as the title of the
task.
- description – string shown in the MTurk marketplace describing the
task.
- keywords – comma-separated string listing the keywords, e.g.
'keyword1,keyword2,keyword3'.
- frame_height – height in pixels used to display the iframe for
workers. Most workers have 1024x768 or 800x600 screen resolutions, so I
recommend setting this to at most 668 pixels. Alternatively,
you could set it to a very large number to avoid an inner scroll bar.
- requirements – [deprecated feature] dictionary of requirements that
users must satisfy to submit a task, or {} if there are no
requirements. These requirements are passed as context variables.
This is an old feature and is implemented very inefficiently; there
are better ways of getting data into the experiment context,
such as external_task_extra_context().
- auto_add_hits – if True, automatically dispatch new HITs of this type.
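Putting the parameters together, a configuration might look like the following (entirely hypothetical values; the surrounding registration call is omitted because its exact API is not documented here):

```python
from decimal import Decimal

# Hypothetical experiment configuration (all values are illustrative)
experiment_params = {
    'slug': 'label_shapes',
    'variant': None,
    'template_dir': 'myapp/experiments',
    'module': 'myapp.experiments',
    'version': 2,
    'reward': Decimal('0.05'),  # payment per HIT, as decimal.Decimal
    'num_outputs_max': 1,
    'contents_per_hit': 10,
    'has_tutorial': True,
    'frame_height': 668,
    'requirements': {},
    'auto_add_hits': True,
}
```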