pytorch_tutorials package

Submodules

pytorch_tutorials.coco_eval module

pytorch_tutorials.coco_utils module

pytorch_tutorials.engine module

pytorch_tutorials.transforms module

class pytorch_tutorials.transforms.Compose(transforms)[source]

Bases: object
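Compose chains detection-style transforms, each of which takes and returns an (image, target) pair. A minimal pure-Python sketch of that calling convention (not the torchvision code; the toy transforms below are hypothetical stand-ins):

```python
class Compose:
    """Chain transforms that each take and return an (image, target) pair."""
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, image, target):
        for t in self.transforms:
            image, target = t(image, target)
        return image, target

# Hypothetical toy transforms operating on plain values, for illustration only.
double = lambda img, tgt: (img * 2, tgt)
tag = lambda img, tgt: (img, {**tgt, "tagged": True})

pipeline = Compose([double, tag])
img, tgt = pipeline(3, {"labels": [1]})
```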

class pytorch_tutorials.transforms.FixedSizeCrop(size, fill=0, padding_mode='constant')[source]

Bases: Module

forward(img, target=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the forward pass must be defined inside this function, call the Module instance itself rather than forward() directly: the instance call runs the registered hooks, while a direct forward() call silently skips them.

class pytorch_tutorials.transforms.PILToTensor(*args, **kwargs)[source]

Bases: Module

forward(image, target=None)[source]

Return type:

Tuple[Tensor, Optional[Dict[str, Tensor]]]

class pytorch_tutorials.transforms.RandomHorizontalFlip(p=0.5)[source]

Bases: torchvision.transforms.RandomHorizontalFlip

forward(image, target=None)[source]
Parameters:

image (PIL Image or Tensor) – Image to be flipped.

Returns:

Randomly flipped image.

Return type:

PIL Image or Tensor
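Unlike the plain torchvision flip, the detection variant must also mirror the bounding boxes in target. A sketch of the coordinate arithmetic, assuming boxes in [xmin, ymin, xmax, ymax] format (this is an illustration of the geometry, not torchvision's implementation):

```python
def hflip_boxes(boxes, image_width):
    """Mirror [xmin, ymin, xmax, ymax] boxes across the vertical axis."""
    flipped = []
    for xmin, ymin, xmax, ymax in boxes:
        # The old right edge becomes the new left edge, and vice versa.
        flipped.append([image_width - xmax, ymin, image_width - xmin, ymax])
    return flipped

boxes = hflip_boxes([[10, 20, 30, 40]], image_width=100)
```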

class pytorch_tutorials.transforms.RandomIoUCrop(min_scale=0.3, max_scale=1.0, min_aspect_ratio=0.5, max_aspect_ratio=2.0, sampler_options=None, trials=40)[source]

Bases: Module

forward(image, target=None)[source]

Return type:

Tuple[Tensor, Optional[Dict[str, Tensor]]]

class pytorch_tutorials.transforms.RandomPhotometricDistort(contrast=(0.5, 1.5), saturation=(0.5, 1.5), hue=(-0.05, 0.05), brightness=(0.875, 1.125), p=0.5)[source]

Bases: Module

forward(image, target=None)[source]

Return type:

Tuple[Tensor, Optional[Dict[str, Tensor]]]

class pytorch_tutorials.transforms.RandomShortestSize(min_size, max_size, interpolation=InterpolationMode.BILINEAR)[source]

Bases: Module

forward(image, target=None)[source]

Return type:

Tuple[Tensor, Optional[Dict[str, Tensor]]]

class pytorch_tutorials.transforms.RandomZoomOut(fill=None, side_range=(1.0, 4.0), p=0.5)[source]

Bases: Module

forward(image, target=None)[source]

Return type:

Tuple[Tensor, Optional[Dict[str, Tensor]]]

class pytorch_tutorials.transforms.ScaleJitter(target_size, scale_range=(0.1, 2.0), interpolation=InterpolationMode.BILINEAR, antialias=True)[source]

Bases: Module

Randomly resizes the image and its bounding boxes within the specified scale range. This class implements the Scale Jitter augmentation described in the paper "Simple Copy-Paste is a Strong Data Augmentation Method for Instance Segmentation".

Parameters:
  • target_size (tuple of ints) – The target size for the transform, provided in (height, width) format.

  • scale_range (tuple of floats) – Scaling factor interval, e.g. (a, b); the scale is sampled uniformly from the range a <= scale <= b.

  • interpolation (InterpolationMode) – Desired interpolation enum defined by torchvision.transforms.InterpolationMode. Default is InterpolationMode.BILINEAR.

forward(image, target=None)[source]

Return type:

Tuple[Tensor, Optional[Dict[str, Tensor]]]
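The resize factor combines a random draw from scale_range with the ratio needed to fit target_size. A hedged sketch of that arithmetic (the function name and the deterministic rng stand-in are illustrative, not part of the API):

```python
import random

def jittered_size(height, width, target_size, scale_range=(0.1, 2.0), rng=random):
    """Pick a random scale, then cap it so the result fits inside target_size."""
    r = rng.uniform(scale_range[0], scale_range[1])
    scale = r * min(target_size[0] / height, target_size[1] / width)
    return int(height * scale), int(width * scale)

class FixedRng:
    """Deterministic stand-in for the random draw, for illustration."""
    def uniform(self, a, b):
        return 1.0

new_h, new_w = jittered_size(100, 200, target_size=(400, 400), rng=FixedRng())
```

With the draw fixed at 1.0, the fit ratio min(400/100, 400/200) = 2.0 dominates, scaling a 100x200 image to 200x400.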

class pytorch_tutorials.transforms.SimpleCopyPaste(blending=True, resize_interpolation=InterpolationMode.BILINEAR)[source]

Bases: Module

forward(images, targets)[source]

Return type:

Tuple[List[Tensor], List[Dict[str, Tensor]]]

class pytorch_tutorials.transforms.ToDtype(dtype, scale=False)[source]

Bases: Module

forward(image, target=None)[source]

Return type:

Tuple[Tensor, Optional[Dict[str, Tensor]]]

pytorch_tutorials.utils module

class pytorch_tutorials.utils.MetricLogger(delimiter='\t')[source]

Bases: object

add_meter(name, meter)[source]
log_every(iterable, print_freq, header=None)[source]
synchronize_between_processes()[source]
update(**kwargs)[source]
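log_every wraps an iterable, yielding its items while printing a status line every print_freq steps. A minimal generator sketch of that pattern (the format string and the injectable log callable are assumptions for illustration, not the actual MetricLogger output):

```python
def log_every(iterable, print_freq, header="", log=print):
    """Yield items from iterable, emitting a status line every print_freq steps
    and at the final step."""
    total = len(iterable)
    for i, obj in enumerate(iterable):
        if i % print_freq == 0 or i == total - 1:
            log(f"{header} [{i}/{total}]")
        yield obj

lines = []
consumed = list(log_every(range(5), print_freq=2, header="Epoch 0", log=lines.append))
```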
class pytorch_tutorials.utils.SmoothedValue(window_size=20, fmt=None)[source]

Bases: object

Track a series of values and provide access to smoothed values over a window or the global series average.

property avg
property global_avg
property max
property median
synchronize_between_processes()[source]

Warning: does not synchronize the deque!

update(value, n=1)[source]
property value
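The avg/global_avg split is the core idea: a bounded deque gives the windowed average while running totals give the series-wide average. A simplified pure-Python sketch of that bookkeeping (omitting fmt and the distributed synchronization):

```python
from collections import deque

class SmoothedValue:
    """Track values; expose a window-smoothed average and a global average."""
    def __init__(self, window_size=20):
        self.window = deque(maxlen=window_size)  # oldest values fall out
        self.total = 0.0
        self.count = 0

    def update(self, value, n=1):
        self.window.append(value)
        self.total += value * n
        self.count += n

    @property
    def avg(self):
        """Average over the most recent window_size updates only."""
        return sum(self.window) / len(self.window)

    @property
    def global_avg(self):
        """Average over every value ever recorded."""
        return self.total / self.count

sv = SmoothedValue(window_size=2)
for v in (1.0, 2.0, 4.0):
    sv.update(v)
```

After three updates with window_size=2, avg reflects only the last two values while global_avg still covers all three.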
pytorch_tutorials.utils.all_gather(data)[source]

Run all_gather on arbitrary picklable data (not necessarily tensors).

Parameters:
  • data – any picklable object

Returns:

list of data gathered from each rank

Return type:

list[data]

pytorch_tutorials.utils.collate_fn(batch)[source]
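Detection samples are (image, target) pairs with variable-sized images, so they cannot be stacked into one tensor; the collate function only regroups them. A sketch consistent with the detection references (plain strings and dicts stand in for images and targets here):

```python
def collate_fn(batch):
    """Regroup a list of (image, target) samples into an (images, targets)
    pair of tuples, leaving variable-sized images unbatched."""
    return tuple(zip(*batch))

images, targets = collate_fn([("img0", {"id": 0}), ("img1", {"id": 1})])
```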
pytorch_tutorials.utils.get_rank()[source]
pytorch_tutorials.utils.get_world_size()[source]
pytorch_tutorials.utils.init_distributed_mode(args)[source]
pytorch_tutorials.utils.is_dist_avail_and_initialized()[source]
pytorch_tutorials.utils.is_main_process()[source]
pytorch_tutorials.utils.mkdir(path)[source]
pytorch_tutorials.utils.reduce_dict(input_dict, average=True)[source]
Reduce the values in the dictionary from all processes so that all processes have the averaged results.

Parameters:
  • input_dict (dict) – dictionary whose values will all be reduced

  • average (bool) – whether to average (True) or sum (False) the values

Returns a dict with the same fields as input_dict, after reduction.
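The all-reduce itself needs torch.distributed, but the semantics can be shown without it: each key is summed (or averaged) across the per-rank dictionaries. A single-process simulation of that behavior (the function name is illustrative, not the real API):

```python
def reduce_dict_sim(per_rank_dicts, average=True):
    """Single-process simulation of reduce_dict: element-wise sum or mean
    of each key across the per-rank loss dictionaries."""
    world_size = len(per_rank_dicts)
    out = {}
    for k in per_rank_dicts[0]:
        s = sum(d[k] for d in per_rank_dicts)
        out[k] = s / world_size if average else s
    return out

# Two "ranks" reporting the same loss key; averaging yields the mean.
reduced = reduce_dict_sim([{"loss": 2.0}, {"loss": 4.0}])
```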

pytorch_tutorials.utils.save_on_master(*args, **kwargs)[source]
pytorch_tutorials.utils.setup_for_distributed(is_master)[source]

Disables printing when the calling process is not the master process.