module: mturk.utils

mturk.utils.aws_str_to_datetime(s)

Parse Amazon date-time string
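
No input format is documented here; a minimal sketch of the expected behavior, assuming the UTC ‘Z’-suffixed ISO 8601 form that the MTurk API returns (e.g. '2014-01-15T12:30:00Z'):

    from datetime import datetime

    def aws_str_to_datetime(s):
        # Assumes the 'Z'-suffixed ISO 8601 format used by the MTurk API.
        return datetime.strptime(s, '%Y-%m-%dT%H:%M:%SZ')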

mturk.utils.configure_all_experiments(show_progress=False)

Configure all experiments by searching for modules of the form ‘<app>.experiments’ (where “app” is an installed app). The function configure_experiment() is then invoked for each such module found.
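
A rough sketch of this discovery loop (illustrative only; it assumes each ‘<app>.experiments’ module exposes a configure_experiments() function, as the configure_experiment() docstring below suggests):

    import importlib
    from django.conf import settings

    def configure_all_experiments(show_progress=False):
        for app in settings.INSTALLED_APPS:
            try:
                module = importlib.import_module(app + '.experiments')
            except ImportError:
                continue  # this app defines no experiments
            if show_progress:
                print('Configuring experiments in %s' % app)
            module.configure_experiments()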

mturk.utils.configure_experiment(slug, variant='', **kwargs)

Configures an experiment in the database (mturk.models.Experiment). To be called from each app’s ‘<app>.experiments’ module (see configure_all_experiments() above). An example call follows the parameter list.

Parameters:
  • slug – unique human-readable ID (must be a valid Python variable name). The slug and variant together must be unique.
  • variant – optional string used to run multiple variations of the same experiment, where the same template and user interface are shared across all variants. Example: you want to perform object labeling with different lists of allowed object names (see shapes.experiments for this example).
  • completed_id – optional string that may be used in place of slug when determining whether an experiment has been completed. If two experiments share this field, then an item completed under one experiment will count as completed under the other experiment.
  • template_dir

    directory for templates, usually '<app>/experiments'. The templates for each experiment are constructed as follows:

    {template_dir}/{slug}.html              -- mturk task
    {template_dir}/{slug}_inst_content.html -- instructions page (just the
                                               content)
    {template_dir}/{slug}_inst.html         -- instructions (includes
                                               _inst_content.html)
    {template_dir}/{slug}_tut.html          -- tutorial (if there is one)
    
  • module – module containing the experiments.py file, usually '<app>.experiments'
  • examples_group_attr – the attribute used to group examples together. Example: if you have good and bad BRDFs for a shape, and the BRDF points to the shape with the name ‘shape’, then this field would be set to ‘shape’.
  • version – should be the value 2 (note that 1 is for the original OpenSurfaces publication).
  • reward – payment per HIT, as an instance of decimal.Decimal
  • num_outputs_max – the number of output items that each input item will produce. Usually this is 1. An example of another value: for OpenSurfaces material segmentation, 1 photo will produce 6 segmentations.
  • contents_per_hit – the number of contents to include in each HIT
  • test_contents_per_assignment – if specified, the number of secret test items to be added (on top of contents_per_hit) to each HIT.
  • has_tutorial – True if this experiment has a special tutorial (see intrinsic/experiments.py for an example).
  • content_type_model – the model class for input content (content that is shown to the user)
  • out_content_type_model – the model class for output (user responses)
  • out_content_attr – on the output model class, the name of the attribute that gives the input for that output. For example, for a material segmentation, a Photo is the input and a SubmittedShape is the output, and SubmittedShape.photo gives the input photo.
  • content_filter

    a dictionary of filters to be applied to the input content to determine which items should be labeled.

    Example for labeling BRDFs:

    {
        'invalid': False,
        'pixel_area__gt': Shape.MIN_PIXEL_AREA,
        'num_vertices__gte': 10,
        'correct': True,
        'substance__isnull': False,
        'substance__fail': False,
        'photo__whitebalanced': True,
        'photo__scene_category_correct': True,
    }
    
  • title – string shown in the MTurk marketplace as the title of the task.
  • description – string shown in the MTurk marketplace describing the task.
  • keywords – comma-separated string listing the keywords, e.g. 'keyword1,keyword2,keyword3'.
  • frame_height – height in pixels used to display the iframe for workers. Most workers have 1024x768 or 800x600 screen resolutions, so I recommend setting this to at most 668 pixels. Alternatively, you could set it to a very large number and avoid an inner scroll bar.
  • requirements – [deprecated feature] dictionary of requirements that users must satisfy to submit a task, or {} if there are no requirements. These requirements are passed as context variables. This is an old feature and is implemented very inefficiently. There are better ways of getting data into the experiment context, such as external_task_extra_context().
  • auto_add_hits – if True, dispatch new HITs of this type.
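
As an illustration, a call wiring these parameters together might look like the following. The values are placeholders based on the material-segmentation example above, and the model import paths are guesses; this is not a verbatim snippet from the codebase:

    from decimal import Decimal

    from photos.models import Photo            # import paths are guesses
    from shapes.models import SubmittedShape

    configure_experiment(
        slug='segment_material',
        template_dir='shapes/experiments',
        module='shapes.experiments',
        version=2,
        reward=Decimal('0.05'),
        num_outputs_max=6,   # each photo yields up to 6 segmentations
        contents_per_hit=10,
        content_type_model=Photo,
        out_content_type_model=SubmittedShape,
        out_content_attr='photo',
        content_filter={
            'whitebalanced': True,
            'scene_category_correct': True,
        },
        title='Segment materials in photographs',
        description='Outline regions made of a single material.',
        keywords='image,segmentation,materials',
        frame_height=668,
        requirements={},
        auto_add_hits=True,
    )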
mturk.utils.extract_mturk_attr(result_set, attr)

Extracts an attribute from a boto ResultSet
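
boto’s ResultSet behaves like a list of response records; a plausible sketch of this helper (not the verbatim implementation):

    def extract_mturk_attr(result_set, attr):
        # Return the first occurrence of the requested attribute
        # among the records in a boto ResultSet.
        for record in result_set:
            if hasattr(record, attr):
                return getattr(record, attr)
        raise ValueError("attribute '%s' not in ResultSet" % attr)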

mturk.utils.fetch_content_tuples(content_tuples)

Fetch a list of generic items, given as a list of `[(content_type_id, object_id), ...]`
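
To keep the number of queries small, the items can be grouped by content type so that each model class needs only one query. An illustrative sketch (not the exact implementation):

    from collections import defaultdict
    from django.contrib.contenttypes.models import ContentType

    def fetch_content_tuples(content_tuples):
        by_ct = defaultdict(list)
        for ct_id, obj_id in content_tuples:
            by_ct[ct_id].append(obj_id)
        fetched = {}
        for ct_id, ids in by_ct.items():
            model = ContentType.objects.get_for_id(ct_id).model_class()
            for obj in model.objects.filter(pk__in=ids):
                fetched[(ct_id, obj.pk)] = obj
        # Preserve the order of the input list.
        return [fetched.get(t) for t in content_tuples]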

mturk.utils.fetch_hit_contents(hit)

Fetch the contents (the items shown to the user) efficiently, in a small number of queries

mturk.utils.get_content_model_prefetch(content_model, content_attr='content')

Returns the fields that should be prefetched, for a relation that starts with ‘<content_attr>__’. If the model has MTURK_PREFETCH, then that is used. Otherwise, some common attributes are tested (photo, shape) and used if those foreign keys exist.
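
For example, a content model can opt in explicitly by declaring MTURK_PREFETCH. The tuple-of-relation-paths format shown here is an assumption based on the description above:

    from django.db import models

    class SubmittedShape(models.Model):
        photo = models.ForeignKey('photos.Photo',
                                  on_delete=models.CASCADE)

        # Relations followed when this model is displayed in a HIT;
        # picked up by get_content_model_prefetch() if present.
        MTURK_PREFETCH = ('photo',)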

mturk.utils.get_model_prefetch(content_model)

Returns the fields that should be prefetched, for a generic relation

mturk.utils.get_mturk_balance()
mturk.utils.get_mturk_connection()
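
Both are undocumented. Presumably get_mturk_connection() returns a boto MTurkConnection built from credentials in Django settings, along these lines (the setting names and the sandbox switch are guesses):

    from boto.mturk.connection import MTurkConnection
    from django.conf import settings

    def get_mturk_connection():
        host = ('mechanicalturk.sandbox.amazonaws.com'
                if settings.MTURK_SANDBOX
                else 'mechanicalturk.amazonaws.com')
        return MTurkConnection(
            aws_access_key_id=settings.AWS_ACCESS_KEY_ID,
            aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY,
            host=host)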
mturk.utils.get_or_create_mturk_worker(mturk_worker_id)

Returns a UserProfile object for the associated mturk_worker_id

mturk.utils.get_or_create_mturk_worker_from_request(request)
mturk.utils.qualification_dict_to_boto(quals)
mturk.utils.qualification_to_boto(*args)

Convert a qualification to the format required by boto
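
With boto 2, a set of qualifications is built from requirement objects in boto.mturk.qualification. A sketch of what the dict conversion might produce (the dictionary key names are guesses):

    from boto.mturk.qualification import (
        Qualifications,
        PercentAssignmentsApprovedRequirement,
        NumberHitsApprovedRequirement,
    )

    def qualification_dict_to_boto(quals):
        boto_quals = Qualifications()
        if 'perc_approved' in quals:
            boto_quals.add(PercentAssignmentsApprovedRequirement(
                'GreaterThanOrEqualTo', quals['perc_approved']))
        if 'num_approved' in quals:
            boto_quals.add(NumberHitsApprovedRequirement(
                'GreaterThanOrEqualTo', quals['num_approved']))
        return boto_quals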